I'm calling variants from exome sequencing data and I need to evaluate the efficiency of the capture and the coverage along the target regions.
This sounds like a great use case for bedtools, your Swiss Army knife for genomic arithmetic and interval manipulation. I'm lucky enough to be able to walk down the hall and bug Aaron Quinlan, creator of bedtools, whenever I have a "how do I do X with bedtools?" question (which happens frequently).
As open-source bioinformatics software documentation goes, bedtools' documentation is top-notch. In addition, Aaron recently pointed out a work-in-progress bedtools cookbook that he's putting together, giving code examples for both typical and clever uses of bedtools.
Getting back to my exome data, one way to visualize this is to plot the cumulative distribution describing the fraction of targeted bases that were covered by >10 reads, >20 reads, >80 reads, etc. For example, covering 90% of the target region at 20X coverage may be one metric to assess your ability to reliably detect heterozygotes. Luckily for me, there's a bedtools protocol for that.
The basic idea is that for each sample, you use bedtools coverage to read in both a BAM file containing your read alignments and a BED file containing your target capture regions (for example, you can download NimbleGen's V3 exome capture regions here). The -hist option outputs a histogram of coverage for each feature in the BED file, as well as a summary histogram across all of the features in the BED file. Below I'm using GNU parallel to run this on all 6 of my samples, piping the output of bedtools to grep to keep only the summary lines starting with "all".
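The command looked something like the following (a minimal sketch: targets.bed stands in for your capture regions file, and the -abam/-b flags follow the bedtools 2.x convention of the time; the argument semantics changed in later releases):

```bash
# For every BAM in the directory, compute a coverage histogram over the
# capture targets and keep only the summary lines beginning with "all".
# Requires bedtools and GNU parallel; targets.bed = capture regions.
ls *.bam | parallel 'bedtools coverage -hist -abam {} -b targets.bed | grep ^all > {}.hist.all.txt'
```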
Now that I have text files with coverage histograms for all the regions in the capture target, I can plot the cumulative coverage distribution using R.
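Here's a minimal sketch of the kind of R code that does this (file names, colors, and axis limits are placeholders; in the -hist summary lines, column 2 is the depth and column 5 is the fraction of target bases at that depth):

```r
# One coverage histogram file per sample, produced by the bedtools step above
files <- list.files(pattern = "hist.all.txt$")
cols <- rainbow(length(files))
plot(NULL, xlim = c(0, 400), ylim = c(0, 1), xlab = "Depth",
     ylab = "Fraction of capture target bases covered at >= depth")
for (i in seq_along(files)) {
  cov <- read.table(files[i])
  # Fraction of target bases covered at >= each depth:
  # reverse cumulative sum of the per-depth fractions (column 5)
  cum <- rev(cumsum(rev(cov[, 5])))
  lines(cov[, 2], cum, col = cols[i], lwd = 2)
}
legend("topright", legend = files, col = cols, lwd = 2)
```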
You can see that sample #2 had some problems: relative to the rest of the samples, its curve takes a quick nose-dive on the left side of the plot. Running this sample through Picard showed that 86% of its reads were duplicates.
Thanks, Aaron, for the tips.
Bedtools protocols: https://github.com/arq5x/bedtools-protocols
Wednesday, March 12, 2014
Software Carpentry at UVA, Redux
Software Carpentry is an international collaboration backed by Mozilla and the Sloan Foundation, comprising a team of volunteers who teach computational competence and basic programming skills to scientists. In addition to a suite of online lessons, Software Carpentry also runs two-day on-site bootcamps to teach researchers skills such as using the Unix shell, programming in Python or R, using Git and GitHub for version control, managing data with SQL, and general programming best practices.
It was just over a year ago that I organized UVA's first bootcamp. Last year we reached our 50-person registration limit and had nearly 100 people on the wait list in less than two days. With support from the Center for Public Health Genomics, the Health Sciences Library, and the Library's Research Data Services, we were able to host another two-day bootcamp earlier this week (we maxed out our registration limit this year as well). A few months ago I started Software Carpentry's training program, which teaches scientists how to teach other scientists how to program. It was my pleasure to be an instructor at this year's bootcamp along with Erik Bray and Mike Hansen.
Erik kicked off day one with a short introduction to what Software Carpentry is all about and set the stage for the rest of the bootcamp -- as more fields of research become increasingly data rich, computational skills become ever more critical.
I started the morning's lessons on using the Unix shell to get more stuff done in less time. Although there were still a few setup hiccups, things went a lot smoother this year because we provided a virtual machine with all of the necessary tools pre-installed.
We spent the rest of the morning and early afternoon going over version control with Git and collaboration using GitHub. I started out with the very basics -- the hows and whys of using version control, staging, committing, branching, merging, and conflict resolution. After lunch Erik and I did a live demonstration of two different modes of collaboration using GitHub. In the first, I created a repo on GitHub and gave Erik full permission to push to it; we pushed and pulled to and from the same repo, and demonstrated what to do in case of a merge conflict. In the second demonstration we used the fork and pull model of collaboration: I created a new repo, Erik forked it, made some edits (using GitHub's web-based editor for simplicity), and submitted a pull request. After the demo, we had participants go through the same exercise -- creating their own repos with feedback about the course so far, and submitting pull requests to each other.
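For reference, the fork-and-pull workflow boils down to a handful of commands (the repo name and URL below are hypothetical; the fork and the pull request themselves happen in the GitHub web UI):

```bash
# 1. Fork the instructor's repo on GitHub (web UI), then clone your fork
git clone https://github.com/yourname/bootcamp-feedback.git
cd bootcamp-feedback
# 2. Edit, commit, and push your changes back to your fork
git add feedback.md
git commit -m "Add my feedback on the bootcamp"
git push origin master
# 3. Open a pull request from your fork to the original repo (web UI)
```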
With the remaining hours in the afternoon, Erik introduced Python using the IPython notebook. Since most people were using the virtual machine we provided (or had already installed Anaconda), we avoided most of the Python/IPython/numpy version and setup issues that might otherwise have plagued the entire bootcamp (most participants were using Windows laptops). By the end of the introductory Python session, participants were using Python and NumPy to simulate logistic population growth with intermittent catastrophic population crashes, and using matplotlib to visualize the results.
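To give a flavor of that exercise, here's a minimal sketch of a logistic growth simulation with random crashes (the parameter values are made up for illustration, not the ones used in the session):

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
r, K, steps = 0.3, 100.0, 200        # growth rate, carrying capacity, time steps
pop = np.empty(steps)
pop[0] = 10.0
for t in range(1, steps):
    # Discrete logistic growth toward the carrying capacity K
    pop[t] = pop[t - 1] + r * pop[t - 1] * (1 - pop[t - 1] / K)
    # Intermittent catastrophe: 5% chance of losing 90% of the population
    if rng.random() < 0.05:
        pop[t] *= 0.1

plt.plot(pop)
plt.xlabel("Time step")
plt.ylabel("Population size")
plt.show()
```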
Next, Mike introduced the pandas data analysis library for Python, also using an IPython notebook for teaching. In this session, participants used pandas to import and analyze a year's worth of weather data from Weather Underground. Participants imported a CSV file, cleaned up the data, parsed dates written as text to create Python datetime objects, used the apply function to perform bulk operations on the data, learned how to handle missing values, and synthesized many of the individual components taught in this and the previous session to partition out and perform summary operations on subsets of the data that matched particular criteria of interest (e.g., "how many days did it rain in November when the minimum temperature ranged from 20 to 32 degrees?").
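A condensed sketch of that kind of analysis (the file and column names here are hypothetical; real Weather Underground exports differ):

```python
import pandas as pd

df = pd.read_csv("weather.csv")
# Parse dates stored as text into datetime objects
df["date"] = pd.to_datetime(df["date"])
# Handle missing/odd values, e.g. trace precipitation recorded as "T"
df["precip"] = pd.to_numeric(df["precip"], errors="coerce").fillna(0.0)
# "How many days did it rain in November when the minimum temperature
# ranged from 20 to 32 degrees?"
rainy = df[(df["date"].dt.month == 11)
           & (df["min_temp"].between(20, 32))
           & (df["precip"] > 0)]
print(len(rainy))
```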
Erik wrapped up the bootcamp with a session on testing code. Erik introduced the concept of testing by demonstrating the behavior of a function without revealing the source code behind it. Participants were asked to figure out what the function did by writing various tests with different inputs. Finally, participants worked in pairs to implement the function so that all the previously written tests passed without raising assertion errors.
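The exercise looks something like this (the mystery function here is a hypothetical stand-in; the tests are written first, against observed black-box behavior, and the implementation comes last):

```python
# Tests written by probing the black-box function with different inputs
def test_overlap():
    assert overlap(0, 10, 5, 15) == 5    # partial overlap
    assert overlap(0, 10, 10, 20) == 0   # touching, not overlapping
    assert overlap(0, 10, 20, 30) == 0   # disjoint
    assert overlap(0, 10, 2, 4) == 2     # fully contained

# Implementation written afterwards so the tests pass
def overlap(a_start, a_end, b_start, b_end):
    """Length of overlap between intervals [a_start, a_end) and [b_start, b_end)."""
    return max(0, min(a_end, b_end) - max(a_start, b_start))

test_overlap()
```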
Overall, our second Software Carpentry bootcamp was a success. The fact that we maxed out registration and filled a wait list within hours two years in a row demonstrates the overwhelming need for this kind of curriculum for scientists. Science across nearly every discipline is becoming ever more quantitative; researchers are realizing that to be successful, not only do you need to be a good scientist, a great writer, an eloquent speaker, a skilled graphic designer, a clever marketer, an efficient project manager, etc., but you also need to know some programming and statistics. This week represented the largest Software Carpentry event ever, with simultaneous bootcamps at the University of Virginia, Purdue, New York University, UC Berkeley, and the University of Washington. I can only imagine this trend will continue for the foreseeable future.