Over the past ten years, we have seen a substantial increase in the amount of data that can be obtained from biological experiments. Where once we interrogated small pieces of DNA, we now interrogate entire genomes. Not only is more data being produced, much of this data is also being made public. We see the availability of public data as a tremendous resource for learning and exploration. This summer, Geospiza will begin work with the Northwest Association for Biomedical Research and the Puget Sound Center for Learning and Technology on Bio-ITEST, a project funded by the National Science Foundation, to help high school teachers learn how to do a new kind of science and investigate these stores of public data.
Through Bio-ITEST, teachers and students will use the GeneSifter Analysis Edition to work with molecular data from the public databases. They will compare the expression of genes under different conditions and from different kinds of cells. Learning how sets of genes are controlled, what functions these genes serve within a cell, and which pathways they participate in will help students appreciate the richness of the molecular world and help prepare them for careers in this new age of biology.
Tuesday, March 31, 2009
Tuesday, March 17, 2009
Introducing BioHDF
Today we released news of our funding for the BioHDF project. This project, in collaboration with The HDF Group, is focused on developing scalable bioinformatics technologies to support the many current and emerging Next Generation Sequencing (NGS) applications such as Transcription Profiling, Digital Gene Expression, Small RNA Analysis, Copy Number Variation and Resequencing.
You might ask, isn’t Geospiza already doing all of those things listed above? The answer is yes, but there is more to NGS work than meets the eye.
When we work with NGS data today, we do so with software and systems that are inefficient in three important areas: Space, Time, and Bandwidth. Although people are managing to get by, as throughput continues to increase, space, time, and bandwidth issues are going to become more acute and will lead to disproportionately higher software development and data management costs. BioHDF will help solve these issues at the infrastructure level.
Space issues are related to storing the data. As we know, NGS systems create a lot of data. Less well understood is that when the data are processed, the alignment programs produce outputs equal to or larger than the original input files. Thus, one challenge groups face is that despite planning for a certain amount of storage, after running a few analyses they are surprised to find they have run out of space, sometimes in the middle of a day-long program run.
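The back-of-the-envelope arithmetic behind this surprise can be sketched in a few lines of Python. The 2x alignment-output multiplier and 25% safety margin below are illustrative assumptions for planning, not measured figures:

```python
# Rough storage planner: alignment output is often as large as, or larger
# than, the raw input, so budget well beyond the raw data per run.
# The multiplier and margin are illustrative assumptions.

def required_space_gb(raw_gb, alignment_multiplier=2.0, safety_margin=0.25):
    """Estimate disk needed for one run: raw data plus alignment
    output, padded by a safety margin for intermediate files."""
    total = raw_gb * (1 + alignment_multiplier)
    return total * (1 + safety_margin)

print(required_space_gb(100))  # 100 GB of reads -> 375.0 GB recommended
```

The point is less the exact numbers than the habit: plan for the outputs, not just the reads.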
Current practices for computing alignments also create time inefficiencies in dealing with NGS data. Today’s algorithms are improving, but they still largely require that data be held in computer memory (RAM) during processing. When problems get too large, algorithms fall back to disk in a random-access pattern. Also known as swapping, this process kills performance, and jobs must be terminated. In many cases, the problem is handled by breaking it into smaller units for computation, writing scripts to track the pieces and steps, and stitching the output files together at the end.
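That split-and-track workaround can be illustrated with a minimal sketch: divide a large FASTQ file into chunks that each fit in RAM so alignment jobs never swap. The chunk size and file handling here are hypothetical, not any particular pipeline's implementation:

```python
# Minimal sketch of the common workaround: split a large FASTQ file into
# fixed-size chunks so each alignment job fits in memory.

def split_fastq(path, reads_per_chunk=1_000_000):
    """Yield (chunk_index, list_of_records); a FASTQ record is 4 lines."""
    chunk, index = [], 0
    with open(path) as fh:
        record = []
        for line in fh:
            record.append(line)
            if len(record) == 4:          # one complete FASTQ record
                chunk.append("".join(record))
                record = []
                if len(chunk) == reads_per_chunk:
                    yield index, chunk
                    chunk, index = [], index + 1
    if chunk:                             # final partial chunk
        yield index, chunk
```

Each yielded chunk would be written out and aligned separately, with a script tracking which pieces have finished so the outputs can be merged at the end.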
Additionally, many programs require that data be in a certain format prior to processing. As files get ever larger, the time needed to reformat data adds significantly to the computational burden.
Bandwidth is a measure of the data transfer rate. Small files transfer quickly, but when they get larger, there is a noticeable lag between the start and finish. With NGS, the data transfer rate becomes a significant factor in systems planning. Some groups have gone to great lengths to improve their networks when setting up their labs for NGS. In our cloud computing services, we use specialized software to improve data transfer rates and ensure transfers are complete because tools like ftp are not robust enough to reliably handle NGS data volumes.
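The software we use in our cloud services is specialized, but the core safeguard any group can apply is simple: verify every transferred file against a checksum computed at the source, so partial or corrupted transfers are caught immediately. A minimal sketch:

```python
# Verify a transfer by comparing checksums of source and destination.
# Plain ftp gives no such guarantee for multi-gigabyte NGS files.
import hashlib

def md5sum(path, block_size=1 << 20):
    """Stream the file in 1 MB blocks so large runs never load into RAM."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for block in iter(lambda: fh.read(block_size), b""):
            digest.update(block)
    return digest.hexdigest()

def transfer_ok(source_path, dest_path):
    return md5sum(source_path) == md5sum(dest_path)
```

A truncated transfer changes the digest, so `transfer_ok` returns False and the file can be re-sent rather than silently analyzed incomplete.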
BioHDF will help us work within these space, time, and bandwidth constraints.
Meeting the above challenges requires well-performing software frameworks and underlying data management tools that store and organize data better than complex mixtures of flat files and relational databases. Geospiza and The HDF Group are collaborating to develop open-source, portable, scalable bioinformatics technologies based on HDF5 (Hierarchical Data Format). We call these extensible, domain-specific data technologies “BioHDF.” BioHDF will implement a data model that supports primary DNA sequence information (reads, quality values, metadata) and the results from sequence alignment and variation detection algorithms. BioHDF will extend HDF5 data structures and library routines with new features (indexes, additional compression, graph layouts) to support the high-performance data storage and computation requirements of Next Gen Sequencing.
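To make the idea concrete: HDF5 already lets you store reads, quality values, and alignment results as compressed datasets in one hierarchical file. The layout below is purely illustrative, written with the h5py library; it is not the BioHDF schema, which is still under development:

```python
# Illustrative only: a hierarchical, compressed layout for sequence data
# using h5py. Group and dataset names are assumptions, not BioHDF's schema.
import h5py
import numpy as np

with h5py.File("run001.h5", "w") as f:
    reads = f.create_group("reads")
    reads.create_dataset("sequences",
                         data=np.array([b"ACGT", b"TTGA"]),
                         compression="gzip")
    reads.create_dataset("qualities",
                         data=np.array([[30, 31, 32, 33], [20, 21, 22, 23]],
                                       dtype=np.uint8),
                         compression="gzip")
    aln = f.create_group("alignments")
    aln.create_dataset("positions",
                       data=np.array([1042, 88211], dtype=np.int64),
                       compression="gzip")
```

Everything for a run lives in a single portable file, datasets are compressed on disk, and slices can be read without loading the whole file, which is exactly the space and time behavior flat files lack.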
Wednesday, March 11, 2009
Grant Opportunities for Next Generation DNA Sequencing
Deadlines are quickly approaching for new NIH opportunities being offered through the American Recovery and Reinvestment Act (ARRA). If you are on NIH’s mailing lists, you’re getting a small flood of emails informing you of coming opportunities. If you are not, well, you should be. In the meantime, web links for the different announcements are posted below.
A number of the RFAs will be useful for obtaining equipment like Next Generation Sequencers. Remember, when preparing proposals, a sound informatics plan will make your application stand out. Contact us if you’d like more information.
The announcements below describe programs that are being administered by the National Center for Research Resources (NCRR).
NOT-RR-09-008 - Addition of Recovery Funds to the Shared Instrumentation Grant Program. For shared instruments in the range of $100,000 to $500,000
http://grants.nih.gov/grants/guide/notice-files/NOT-RR-09-008.html
APPLICATION SUBMISSION/ RECEIPT DATE: MARCH 23, 2009
PAR-09-118 - Recovery Act Limited Competition: High-End Instrumentation Grant Program (S10). For a single major item of equipment to be used for biomedical research that costs between $600,000 and $8,000,000
http://grants.nih.gov/grants/guide/pa-files/PAR-09-118.html
LETTERS OF INTENT RECEIPT DATE: APRIL 6, 2009
APPLICATION DUE DATE: MAY 6, 2009
In addition to the above notices, NCRR will have funds for remodeling and renovation work. See their site for more information.
The NCRR grants are just one of the many new opportunities available. NIH has also designated at least $200 million in FYs 2009 - 2010 for a new initiative called the NIH Challenge Grants in Health and Science Research, to fund 200 or more grants (contingent upon the submission of a sufficient number of scientifically meritorious applications). This new program will support research on Challenge Topics which address specific scientific and health research challenges in biomedical and behavioral research that will benefit from significant 2-year jumpstart funds. The due date is April 27, 2009.
Further information is available at: http://grants.nih.gov/recovery/
Sunday, March 8, 2009
Bloginar: Next Gen Laboratory Systems for Core Facilities
Geospiza kicked off February by attending the AGBT and ABRF conferences. As part of our participation at ABRF, our poster presented a scenario in which a core lab provides Next Generation Sequencing (NGS) transcriptome analysis services. The story shows how the capabilities of the GeneSifter Lab and Analysis Editions overcome the challenges of implementing NGS in a core lab environment.
Like the last post, which covered our AGBT poster, the following poster map will guide the discussion.
As this poster overlaps the previous one in terms of providing information about RNA assays and data analysis, our main points below will focus on how GeneSifter Lab Edition solves challenges related to the laboratory and business processes associated with setting up a new NGS lab or bringing NGS into an existing microarray or Sanger sequencing lab.
Section 1 contains the abstract, an introduction to the core laboratory, and background information on different kinds of transcription profiling experiments.
The general challenge for a core lab lies in the need to run a business that offers a wide variety of scientific services for which samples (physical materials) are converted to data and information that have biological meaning. Different services often require different lab processes to produce different kinds of data. To facilitate and direct lab work, each service requires specialized information and instructions for samples that will be processed. Before work is started, the lab must review the samples and verify that the information has been correctly delivered. Samples are then routed through different procedures to prepare them for data collection. In the last steps, data are collected, reviewed, and the results are delivered back to clients. At the end of the day (typically monthly), orders are reviewed and invoices are prepared either directly or by updating accounting systems.
In the case of NGS, we are learning that the entire data collection and delivery process gets more complicated. Compared to Sanger sequencing, genotyping, or other assays run in 96-well formats, sample preparation is more complex. NGS requires that DNA libraries be prepared, and different steps of the process need to be measured and tracked in detail. Also, complicated bioinformatics workflows are needed to understand the data in terms of both quality control and biological meaning. Moreover, NGS requires a substantial investment in information technology.
Section 2 walks through the ways in which GeneSifter Lab Edition helps to simplify the NGS laboratory operation.
Order Forms
In the first step, an order is placed. Screenshots show how GeneSifter can be configured for different services. Labs can define specialized form fields using a variety of user interface elements like check boxes, radio buttons, pull-down menus, and text entry fields. Fields can be required or optional, and special rules, such as ranges for values, can be applied to individual fields within specific forms. Orders can also be configured to take files as attachments to track data about samples, like gel images. To handle that special “for lab use only” information, fields in forms can be designated as laboratory use only. Such fields are hidden from the customer’s view and are filled in later by lab personnel as the orders are processed. The advantage of GeneSifter’s order system is that the pertinent information is captured electronically in the same system that will be used to track sample processing and organize data. Indecipherable paper forms are eliminated, along with the problem of finding information scattered across multiple computers.
Web forms do create a special kind of data entry challenge. Specifically, when there is a lot of information to enter for a lot of samples, filling in numerous form fields on a web page can be a serious pain. GeneSifter solves this problem in two ways:
First, all forms can have “Easy Fill” controls that provide column highlighting (for fast tab-and-type data entry), auto fill-downs, and auto fill-downs with number increments, so one can easily “copy” a common item into all cells of a column or increment an ending number across all values in a column. When these controls are combined with the “Range Selector,” a powerful web-based user interface makes it easy to enter large numbers of values quickly and flexibly.
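The fill-down-with-increment behavior is easy to picture with a small hypothetical helper (the function name and rules below are ours for illustration, not GeneSifter's actual code):

```python
# Hypothetical helper mirroring "auto fill down with number increments":
# copy a value into every row, bumping any trailing number as it goes.
import re

def fill_down(value, rows):
    match = re.match(r"^(.*?)(\d+)$", value)
    if not match:                       # no trailing number: plain copy-down
        return [value] * rows
    prefix, number = match.group(1), match.group(2)
    width = len(number)                 # preserve zero-padding, e.g. "001"
    return [f"{prefix}{int(number) + i:0{width}d}" for i in range(rows)]

print(fill_down("Sample_001", 3))  # ['Sample_001', 'Sample_002', 'Sample_003']
```

One keystroke instead of ninety-six is the whole point.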
Second, sometimes the data to be entered are already in an Excel spreadsheet. To solve this problem, each form contains a specialized Excel spreadsheet validator. The form can be downloaded as an Excel template, and the rules previously assigned to each field when the form was created are used to check the data when they are uploaded. This process spots problems with data items and reports them at upload time, when they are easy to fix, rather than later, when the information is harder to find. This feature eliminates endless cycles of contacting clients to get the correct information.
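The upload-time check works roughly like this sketch, where each field carries the rules defined when the form was built. The field names and rule set are illustrative assumptions:

```python
# Sketch of upload-time validation: each uploaded spreadsheet row is
# checked against the rules attached to the form's fields.

RULES = {  # illustrative form definition
    "sample_name": {"required": True},
    "concentration_ng_ul": {"required": True, "min": 10.0, "max": 500.0},
}

def validate_row(row_number, row):
    """Return a list of human-readable problems for one spreadsheet row."""
    errors = []
    for field, rule in RULES.items():
        value = row.get(field)
        if rule.get("required") and value in (None, ""):
            errors.append(f"row {row_number}: {field} is required")
            continue
        if "min" in rule and value not in (None, ""):
            if not rule["min"] <= float(value) <= rule["max"]:
                errors.append(f"row {row_number}: {field} out of range")
    return errors
```

Every problem is reported with its row number while the client still has the spreadsheet open, which is what breaks the email-back-and-forth cycle.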
Laboratory Processing
Once order data are entered, the next step is to process orders. The middle of section 2 describes this process using an RNA-Seq assay as an example. Like other NGS assays, the RNA-Seq protocol has many steps involving RNA purification, fragmentation, random primed conversion into cDNA, and DNA library preparation of the resulting cDNA for sequencing. During the process, the lab needs to collect data on RNA and DNA concentration as well as determine the integrity of the molecules throughout the process. If a lab runs different kinds of assays they will have to manage multiple procedures that may have different requirements for ordering of steps and laboratory data that need to be collected.
By now it is probably not a surprise to learn that GeneSifter Lab Edition has a way to meet this challenge too. To start, workflows (lab procedures) can be created for any kind of process with any number of steps. The lab defines the number of steps and their order and which steps are required (like the order forms). Having the ability to mix required and optional steps in a workflow gives a lab the ultimate flexibility to support those “we always do it this way, except the times we don’t” situations. For each step the lab can also define whether or not any additional data needs to be collected along the way. Numbers, text, and attachments are all supported so you can have your Nanodrop and Bioanalyzer too.
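A lab-defined workflow boils down to a simple data model: an ordered list of steps, each flagged required or optional, each optionally capturing extra data. The step names and structure below are hypothetical, for illustration only:

```python
# Hypothetical data model for a lab-defined workflow: ordered steps,
# required/optional flags, and the extra data each step can capture.

WORKFLOW = [  # an illustrative RNA-prep procedure
    {"name": "RNA purification", "required": True,  "capture": ["A260/A280"]},
    {"name": "Bioanalyzer QC",   "required": False, "capture": ["RIN"]},
    {"name": "Quantitation",     "required": True,  "capture": ["ng/ul"]},
]

def next_required_step(completed):
    """Return the first required step not yet completed, or None if done."""
    for step in WORKFLOW:
        if step["required"] and step["name"] not in completed:
            return step["name"]
    return None

print(next_required_step({"RNA purification"}))  # Quantitation
```

Because the optional Bioanalyzer step never blocks progress, the model naturally handles the "we always do it this way, except the times we don't" case.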
Next, an important feature of GeneSifter workflows is that a sample can move from one workflow to another. This modular approach means that separate workflows can be created for RNA preparation, cDNA conversion, and sequencing library preparation. If a lab has multiple NGS platforms, or a combination of NGS and microarrays, they might find that a common RNA preparation procedure is used, but the processes diverge when the RNA is converted into forms for collecting data. For example, aliquots of the same RNA preparation may be assayed and compared on multiple platforms. In this case a common RNA preparation protocol is followed, but sub-samples are taken through different procedures, like a microarray and NGS assay, and their relationship to the “parent” sample must be tracked. This kind of scenario is easy to set up and execute in GeneSifter Lab Edition.
Finally, one of GeneSifter’s greatest advantages is that a customized system with all of the forms, fields, Excel import features, and modular workflows can be built by lab operators without any programming. Achieving similar levels of customization with traditional LIMS products takes months or years, with initial and recurring costs of six or more figures.
Collecting Data
The last step of the process is collecting the data, reviewing it, and making sequences and results available to clients. Multiple screenshots illustrate how this works in GeneSifter Lab Edition. For each kind of data collection platform, a “run” object is created. The run holds the information about reactions (the samples ready to run) and where they will be placed in the container that will be loaded into the data collection instrument. In this context, “container” describes 96- or 384-well plates, glass slides with divided areas called lanes, regions, or chambers, and microarray chips. All of these formats are supported, and in some cases specialized files (sample sheets, plate records) are created and loaded into instrument collection software to inform the instrument about sample placement and run conditions for individual samples.
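Generating one of those specialized files from a run object is straightforward once the run knows each reaction's position. This sketch emits a generic CSV sample sheet; the column names and run structure are illustrative, not any specific instrument's format:

```python
# Sketch of emitting an instrument "sample sheet" from a run object,
# mapping each reaction to its position in the container.
import csv
import io

run = {  # illustrative run object
    "container": "flowcell_A",
    "reactions": [
        {"sample": "liver_rep1", "position": "lane1"},
        {"sample": "liver_rep2", "position": "lane2"},
    ],
}

def sample_sheet(run):
    """Render the run as CSV text ready to load into collection software."""
    out = io.StringIO()
    writer = csv.writer(out)
    writer.writerow(["container", "position", "sample"])
    for rxn in run["reactions"]:
        writer.writerow([run["container"], rxn["position"], rxn["sample"]])
    return out.getvalue()
```

Because the sheet is generated from the tracked run, sample placement in the instrument always matches what the database says, with no hand-typed transcription step.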
During the run, samples are converted to data. This process, different for each kind of data collection platform, produces variable numbers and kinds of files that are organized in completely different ways. Using tools that work with GeneSifter, raw data and tracking information are entered into the database to simplify access to the data later. The database also associates sample names and other information with data files, eliminating the need to rename files with complex tracking schemes. The last steps of the process involve reviewing quality information and deciding whether to release data to clients or repeat certain steps of the process. When data are released, each client receives an email directing them to their data.
The lab then updates the orders and, optionally, creates invoices for services. GeneSifter Lab Edition can be used to manage those business functions as well. We’ll cover GeneSifter’s pricing and invoicing tools another time; rest assured they are as complete as the other parts of the system.
NGS requires more than simple data delivery
Section 3 covers issues related to the computational infrastructure needed to support NGS and the data analysis aspects of the NGS workflow. In this scenario, our core lab also provides data analysis services to convert those multi-million-read files into something that can be used to study biology. Much of this was covered in the previous post, so it will not be repeated here.
I will summarize with the final point that Geospiza’s GeneSifter products cover all aspects of setting up a lab for NGS. From sample preparation, to collecting data, to storing and distributing results, to running complex bioinformatics workflows and presenting information in ways that yield scientifically meaningful results, a comprehensive solution is offered. GeneSifter products can be delivered as hosted solutions to lower costs. Our hosted, Software as a Service solutions allow groups to start inexpensively and manage costs as their needs scale. More importantly, unlike in-house IT systems, which require significant planning and implementation time to remodel (or build) server rooms and install computers, GeneSifter products get you started as soon as you sign up.
Like the last post, which covered our AGBT poster, the following poster map will guide the discussion.
As this poster overlaps the previous poster in terms providing information about RNA assays and analyzing the data, our main points below will focus on how GeneSifter Lab Edition solves challenges related to laboratory and business processes associated with setting up a new lab for NGS or bringing NGS into an existing microarray or Sanger sequencing lab.
Section 1 contains the abstract, an introduction to the core laboratory, and background information on different kinds of transcription profiling experiments.
The general challenge for a core lab lies in the need to run a business that offers a wide variety of scientific services for which samples (physical materials) are converted to data and information that have biological meaning. Different services often require different lab processes to produce different kinds of data. To facilitate and direct lab work, each service requires specialized information and instructions for samples that will be processed. Before work is started, the lab must review the samples and verify that the information has been correctly delivered. Samples are then routed through different procedures to prepare them for data collection. In the last steps, data are collected, reviewed, and the results are delivered back to clients. At the end of the day (typically monthly), orders are reviewed and invoices are prepared either directly or by updating accounting systems.
In the case of NGS, we are learning that the entire data collection and delivery process gets more complicated. When compared to Sanger sequencing, genotyping, or other assays that are run in 96-well formats, sample preparation is more complex. NGS requires that DNA libraries be prepared and different steps of the of process need to be measured and tracked in detail. Also, complicated bioinformatics workflows are needed to understand the data from both a quality control and biological meaning context. Moreover, NGS requires a substantial investment in information technology.
Section 2 walks through the ways in which GeneSifter Lab Edition helps to simplify the NGS laboratory operation.
Order Forms
In the first step, an order is placed. Screenshots show how GeneSifter can be configured for different services. Labs can define specialized form fields using a variety of user interface elements like check boxes, radio buttons, pull down menus, and text entry fields. Fields can be required or be optional and special rules such as ranges for values can be applied to individual fields within specific forms. Orders can also be configured to take files as attachments to track data, like gel images, about samples. To handle that special “for lab use only" information, fields in forms can be specified as laboratory use only. Such fields are hidden to the customers view and when the orders are processed they are filled later by lab personnel. The advantage of GeneSifter’s order system is that the pertinent information is captured electronically in the same system that will be used to track sample processing and organize data. Indecipherable paper forms are eliminated along with the problem of finding information scattered on multiple computers.
Web-forms do create a special kind of data entry challenge. Specifically, when there is a lot of information to enter for a lot samples, filling in numerous form fields on a web-page can be a serious pain. GeneSifter solves this problem in two ways:
First, all forms can have “Easy Fill” controls that provide column highlighting (for fast tab-and-type data entry), auto fill downs, and auto fill downs with number increments so one can easily “copy” common items into all cells of a column, or increment an ending number to all values in a column. When these controls are combined with the “Range Selector,” a power web-based user interface makes it easy to enter large numbers of values quickly in flexible ways.
Second, sometimes the data to be entered is already in an Excel spreadsheet. To solve this problem, each form contains a specialized Excel spreadsheet validator. The form can be downloaded as an Excel template and the rules, previously assigned to field when the form was created, are used to check data when they are uploaded. This process spots problems with data items and reports ten at upload time when they are easy to fix, rather than later when information is harder to find. This feature eliminates endless cycles of contacting clients to get the correct information.
Laboratory Processing
Once order data are entered, the next step is to process orders. The middle of section 2 describes this process using an RNA-Seq assay as an example. Like other NGS assays, the RNA-Seq protocol has many steps involving RNA purification, fragmentation, random primed conversion into cDNA, and DNA library preparation of the resulting cDNA for sequencing. During the process, the lab needs to collect data on RNA and DNA concentration as well as determine the integrity of the molecules throughout the process. If a lab runs different kinds of assays they will have to manage multiple procedures that may have different requirements for ordering of steps and laboratory data that need to be collected.
By now it is probably not a surprise to learn that GeneSifter Lab Edition has a way to meet this challenge too. To start, workflows (lab procedures) can be created for any kind of process with any number of steps. The lab defines the number of steps and their order and which steps are required (like the order forms). Having the ability to mix required and optional steps in a workflow gives a lab the ultimate flexibility to support those “we always do it this way, except the times we don’t” situations. For each step the lab can also define whether or not any additional data needs to be collected along the way. Numbers, text, and attachments are all supported so you can have your Nanodrop and Bioanalyzer too.
Next, an important feature of GeneSifter workflows is that a sample can move from one workflow to another. This modular approach means that separate workflows can be created for RNA preparation, cDNA conversion, and sequencing library preparation. If a lab has multiple NGS platforms, or a combination of NGS and microarrays, they might find that a common RNA preparation procedure is used, but the processes diverge when the RNA is converted into forms for collecting data. For example, aliquots of the same RNA preparation may be assayed and compared on multiple platforms. In this case a common RNA preparation protocol is followed, but sub-samples are taken through different procedures, like a microarray and NGS assay, and their relationship to the “parent” sample must be tracked. This kind of scenario is easy to set up and execute in GeneSifter Lab Edition.
Finally, one of GeneSifter’s greatest advantages is that a customized system with all of the forms, fields, Excel import features, and modular workflows can be added by lab operators without any programming. Achieving similar levels of customization with traditional LIMS products takes months and years with initial and reoccurring costs of six or more figures.
Collecting Data
The last step of the process is collecting the data, reviewing it, and making sequences and results available to clients. Multiple screenshots illustrate how this works in GeneSifter Lab Edition. For each kind of data collection platform, a “run” object is created. The run holds the information about reactions (the samples ready to run) and where they will be placed in the container that will be loaded into the data collection instrument. In this context, the container is used to describe 96 or 384-well plates, glass slides with divided areas called lanes, regions, chambers, or microarray chips. All of these formats are supported and in some cases specialized files (sample sheets, plate records) are created and loaded into instrument collection software to inform the instrument about sample placement and run conditions for individual samples.
During the run, samples are converted to data. This process, different for each kind of data collection platform, produce variable numbers and kinds of files that are organized in completely different ways. Using tools that work with GeneSifter, raw data and tracking information are entered into the database to simplify access to the data at a later time. The database also associates sample names and other information with data files, eliminating the need to rename files with complex tracking schemes. The last steps of the process involve reviewing quality information and deciding whether to release data to clients or repeat certain steps of the process. When data are released, each client receives an email directing them to their data.
The lab then updates the orders and optionally creates invoices for services; GeneSifter Lab Edition can be used to manage those business functions as well. We’ll cover GeneSifter’s pricing and invoicing tools another time; be assured they are as complete as the other parts of the system.
NGS requires more than simple data delivery
Section 3 covers issues related to the computational infrastructure needed to support NGS and the data analysis aspects of the NGS workflow. In this scenario, our core lab also provides data analysis services to convert those multi-million read files into something that can be used to study biology. Much of this was covered in the previous post, so it will not be repeated here.
To summarize, Geospiza’s GeneSifter products cover all aspects of setting up a lab for NGS. From sample preparation, to collecting data, to storing and distributing results, to running complex bioinformatics workflows and presenting information in ways that yield scientifically meaningful results, a comprehensive solution is offered. GeneSifter products can be delivered as hosted solutions to lower costs. Our hosted, Software as a Service, solutions allow groups to start inexpensively and manage costs as their needs scale. More importantly, unlike in-house IT systems, which require significant planning and implementation time to remodel (or build) server rooms and install computers, GeneSifter products get you started as soon as you decide to sign up.
Wednesday, March 4, 2009
Bloginar: The Next Generation Dilemma: Large Scale Data Analysis
Previous posts shared some of the things we learned at the AGBT and ABRF meetings in early February. Now it is time to share the work we presented, starting with the AGBT poster, “The Next Generation Dilemma: Large Scale Data Analysis.”
The goal of the poster was to provide a general introduction to the power of Next Generation Sequencing (NGS) and a framework for data analysis. Hence, the abstract described the general NGS data analysis process, its issues, and what we are doing for one kind of transcription profiling, RNA-Seq. Between then and now we learned a few things... and the project grew.
The map below guides my “bloginar” poster presentation. In keeping with the general theme of the abstract we focused on transcription analysis, but instead of focusing exclusively on RNA-Seq, the project expanded to compare three kinds of transcription profiling: RNA-Seq, Tag Profiling, and Small RNA Analysis. A link to the poster is provided at the end.
Section 1 provides a general introduction to NGS by discussing the ways NGS is being used to study different aspects of molecular biology. It also covers how the data are analyzed in three phases (primary, secondary, tertiary) to convert raw data into biologically meaningful information. The three-phase model has emerged as a common framework for describing the process of converting image data into primary sequence data (reads) and then turning the reads into information that can be used in comparative analyses. Secondary analysis is the phase where reads are aligned to reference sequences to get gene names, positions, and (or) frequency information that can be used to measure changes, like gene expression, between samples.
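As a toy illustration of the three-phase model (assuming primary analysis has already produced the reads), the sketch below performs a naive exact-match “alignment” and tallies hits per gene. All names and sequences are made up; real secondary analysis uses dedicated aligners and full reference genomes.

```python
# Toy model of secondary and tertiary analysis: place reads on a tiny
# reference, then tally per-gene hits for downstream comparison.
reference = {"geneA": "ACGTACGTACGT", "geneB": "TTTTGGGGCCCC"}

def align(read):
    """Secondary analysis: naive exact-match placement of one read."""
    for gene, seq in reference.items():
        pos = seq.find(read)
        if pos != -1:
            return gene, pos
    return None, None

def count_hits(reads):
    """Tally aligned reads per gene; these counts feed tertiary analysis."""
    counts = {}
    for read in reads:
        gene, _ = align(read)
        if gene is not None:
            counts[gene] = counts.get(gene, 0) + 1
    return counts

counts = count_hits(["ACGT", "TACG", "GGGG"])  # -> {"geneA": 2, "geneB": 1}
```

Tertiary analysis then compares such count tables across samples, which is where tools like GeneSifter Analysis Edition come in.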
The remaining sections of the poster use examples from transcription analysis to illustrate and address the multiple challenges (listed below) that must be overcome to efficiently use NGS.
- High end infrastructures are needed to manage and work with extremely large data sets
- Complex, multistep analysis procedures are required to produce meaningful information
- Multiple reference data are needed to annotate and verify data and sample quality
- Datasets must be visualized in multiple ways
- Numerous Internet resources must be used to fill in additional details
- Multiple datasets must be comparatively analyzed to gain knowledge
Sections 3, 4, and 5 outline three transcriptome scenarios (RNA-Seq, Tag Profiling, and Small RNA, respectively) using real data examples (references provided in the poster). Each scenario follows a common workflow involving the preparation of DNA libraries from RNA samples, followed by secondary analysis, followed by tertiary analysis of the data in GeneSifter Analysis Edition.
For RNA-Seq, two datasets corresponding to mouse embryonic stem (ES) and embryoid body (EB) cells were investigated. DNA libraries were produced from each cell line. Sequences were collected from each library and compared to the RefSeq (NCBI) database according to the pipeline shown. The screen captures (middle of the panel) show how the individual reads map to each transcript, along with the total numbers of hits summarized by chromosome. The process is repeated twice, once for each cell line, and the two sets of alignments are converted to Gene Lists for comparative analysis in GeneSifter Analysis Edition to observe differential expression (bottom of the panel).
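The comparative step can be sketched as follows, using toy counts (not the poster’s data) for two real mouse genes. GeneSifter’s actual normalization and statistics are more involved; this only shows the shape of the calculation.

```python
# Sketch of the tertiary step: turn per-gene hit counts from two
# libraries into library-size-normalized log2 expression ratios.
# Counts are invented for illustration.
import math

es_counts = {"Pou5f1": 900, "Hbb-b1": 10}   # ES library (toy numbers)
eb_counts = {"Pou5f1": 100, "Hbb-b1": 400}  # EB library (toy numbers)

def normalize(counts):
    """Scale counts to hits-per-million to correct for library size."""
    total = sum(counts.values())
    return {gene: c * 1e6 / total for gene, c in counts.items()}

es, eb = normalize(es_counts), normalize(eb_counts)
log2_ratios = {gene: math.log2(eb[gene] / es[gene]) for gene in es}
```

With these toy numbers, the stem-cell marker Pou5f1 gets a negative log2 ratio (down on differentiation) while the globin gene Hbb-b1 gets a positive one, which is the kind of contrast the bottom panel visualizes.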
The Tag Profiling panel examines data from a recently published experiment (a reference is provided in the poster) in which gene expression was studied in transgenic mice. I’ll leave out the details of the paper and only point out how this example shows the differences between Tag Profiling and RNA-Seq data. Because Tag Profiling collects data from specific 3’ sites in RNA, the aligned data (middle of the panel) show alignments as single “spikes” toward the 3’ end of transcripts. Occasionally multiple peaks are observed, which raises a question: are the additional peaks the result of isoforms (alternative polyA sites) or incomplete restriction enzyme digests, and how might this be sorted out? As with RNA-Seq, the bottom panel shows the comparative analysis of replicate samples from the wild type (WT) and transgenic (TG) mice.
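The multiple-peak observation can be illustrated with a small peak-finding sketch over per-base tag coverage. The numbers and threshold are invented, and this is not GeneSifter’s algorithm; it only shows how transcripts with more than one peak could be flagged for closer inspection.

```python
# Toy sketch of flagging transcripts with more than one 3' tag peak,
# the pattern that raises the isoform-vs-partial-digest question.
def peak_positions(coverage, threshold=5):
    """Return start positions of contiguous runs at or above threshold."""
    peaks, in_peak = [], False
    for pos, depth in enumerate(coverage):
        if depth >= threshold and not in_peak:
            peaks.append(pos)
            in_peak = True
        elif depth < threshold:
            in_peak = False
    return peaks

# Per-base tag coverage along one transcript (illustrative numbers).
coverage = [0, 0, 9, 12, 0, 0, 0, 0, 20, 25, 0]
peaks = peak_positions(coverage)  # two peaks found at positions 2 and 8
```

A transcript reporting two or more peaks would then be examined for annotated alternative polyA sites before blaming the library preparation.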
Data from a small RNA analysis experiment are analyzed in the third panel. Unlike RNA-Seq and Tag Profiling, this secondary analysis involves more comparisons of the reads to different sets of reference sequences. The purpose is to identify and filter out common artifacts observed in small RNA preparations. The pipeline we used, and the data produced, are shown in the middle of the panel. Histogram plots of read length distribution, determined from alignments to different reference sources, are created because an important feature of small RNAs is that they are small: distributions clustered around 22 nt indicate a good library. Finally, data are linked to additional reports and databases, like miRBase (Sanger Center), to explore results further. In the example shown, the first hit was to a small RNA that has been observed in opossums; now we have a human counterpart. In total, four samples were studied. As with RNA-Seq and Tag Profiling, we can observe the relative expression of each small RNA by analyzing the datasets together (hierarchical clustering, bottom).
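The read-length histogram idea can be sketched directly: tally read lengths and check where the distribution peaks. The reads below are invented; in a real library the histogram is computed over millions of reads after adapter trimming.

```python
# Sketch of the read-length histogram used to judge a small RNA
# library: a distribution clustered near 22 nt suggests mature miRNAs.
from collections import Counter

def length_distribution(reads):
    """Count how many reads fall at each length."""
    return Counter(len(r) for r in reads)

reads = [
    "ACGTACGTACGTACGTACGTAC",   # 22 nt
    "ACGTACGTACGTACGTACGTAC",   # 22 nt
    "ACGTACGTACGTACGTACGTACG",  # 23 nt
    "ACGT",                     # short artifact, e.g. adapter dimer
]
dist = length_distribution(reads)
modal_length = dist.most_common(1)[0][0]  # 22 for this toy library
```

A modal length far from ~22 nt, or a flat distribution, would suggest the preparation is dominated by degradation products or adapter artifacts rather than small RNAs.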
Section 6 presents some of the scale challenges that accompany NGS, and how we are addressing them with HDF5 technology. This will be a topic of many more posts in the future.
We close the poster by addressing the challenges listed above with the final points:
- High performance data management systems are being developed through the BioHDF project and GeneSifter system architectures.
- The examples show how each application and sequencing platform requires a different data analysis workflow (pipeline). GeneSifter provides a platform to develop and make bioinformatics pipelines and data readily available to communities of biologists.
- The transcriptome is complex; different libraries of sequence data can be used to filter known sequences (e.g. rRNA) and discover new elements (miRNAs) and isoforms of expressed genes.
- Within a dataset, read maps, tables, and histogram plots are needed to summarize and understand the kinds of sequences present and how they relate to an experiment.
- Links to Entrez Gene, the UCSC genome browser, and miRBase show how additional information can be integrated into the application framework and used.
- Next Gen transcriptomics assays are similar to microarray assays in many ways, hence software systems like Geospiza’s GeneSifter are useful for comparative analysis.
Sunday, March 1, 2009
Sneak Peak: Small RNA Analysis with Geospiza
Join us this Wednesday, March 4th at 10:00 A.M. PST (1:00 P.M. EST), for a webinar focusing on small RNA analysis. Eric Olson, our VP of Product Development and principal designer of Geospiza’s GeneSifter Analysis Edition, will present our latest insights on analyzing large Next Generation Sequencing datasets to study small RNA biology.
Follow the link to register for this interesting presentation.
Abstract
Next Generation Sequencing allows whole genome analysis of small RNAs at an unprecedented level. Current technologies allow for the generation of 200 million data points in a single instrument run. In addition to allowing for the complete characterization of all known small RNAs in a sample, these applications are also ideal for the identification of novel small RNAs. This presentation will provide an overview of micro RNA expression analysis from raw data to biological significance using examples from publicly available datasets and Geospiza’s GeneSifter software.