## Thursday, February 26, 2015

### Using and Abusing Data Visualization: Anscombe's Quartet and Cheating Bonferroni

Anscombe’s quartet comprises four datasets that have nearly identical simple statistical properties, yet appear very different when graphed. Each dataset consists of eleven (x,y) points. They were constructed in 1973 by the statistician Francis Anscombe to demonstrate both the importance of graphing data before analyzing it and the effect of outliers on statistical properties.
Let’s load and view the data. There’s a built-in version of the data in R (the `anscombe` dataset), but I munged it into a tidy, long format and included it in Tmisc, an R package that I wrote primarily for myself.
``````# If you don't have Tmisc installed, first install devtools, then install
# Tmisc from GitHub:
# install.packages('devtools')
# devtools::install_github('stephenturner/Tmisc')
library(Tmisc)
data(quartet)
str(quartet)
``````
``````## 'data.frame':    44 obs. of  3 variables:
##  $ set: Factor w/ 4 levels "I","II","III",..: 1 1 1 1 1 1 1 1 1 1 ...
##  $ x  : int  10 8 13 9 11 14 6 4 12 7 ...
##  $ y  : num  8.04 6.95 7.58 8.81 8.33 ...
``````
| set | x  | y     |
|-----|----|-------|
| I   | 10 | 8.04  |
| I   | 8  | 6.95  |
| I   | 13 | 7.58  |
| II  | 10 | 9.14  |
| II  | 8  | 8.14  |
| II  | 13 | 8.74  |
| III | 10 | 7.46  |
| III | 8  | 6.77  |
| III | 13 | 12.74 |
| IV  | 8  | 6.58  |
| IV  | 8  | 5.76  |
| IV  | 8  | 7.71  |
Now, let’s compute the mean and standard deviation of both x and y, and the correlation coefficient between x and y for each dataset.
``````library(dplyr)
quartet %>%
  group_by(set) %>%
  summarize(mean(x), sd(x), mean(y), sd(y), cor(x, y))
``````
``````## Source: local data frame [4 x 6]
##
##   set mean(x) sd(x) mean(y) sd(y) cor(x, y)
## 1   I       9  3.32     7.5  2.03     0.816
## 2  II       9  3.32     7.5  2.03     0.816
## 3 III       9  3.32     7.5  2.03     0.816
## 4  IV       9  3.32     7.5  2.03     0.817
``````
Looks like each dataset has the same mean and standard deviation of x and y, and the same correlation coefficient between x and y.
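The fitted regression lines are nearly identical too: each set comes out to roughly y = 3 + 0.5x. Here’s a quick check, a minimal sketch of my own using base R rather than anything from the original post:
``````
# Fit y ~ x separately within each set and compare the coefficients;
# all four fits give an intercept near 3 and a slope near 0.5
sapply(levels(quartet$set), function(s) {
  coef(lm(y ~ x, data = subset(quartet, set == s)))
})
``````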
Now, let’s plot y versus x for each set with a linear regression trendline displayed on each plot:
``````library(ggplot2)
p = ggplot(quartet, aes(x, y)) + geom_point()
p = p + geom_smooth(method = lm, se = FALSE)
p = p + facet_wrap(~set)
p
``````

This classic example really illustrates the importance of looking at your data, not just the summary statistics and model parameters you compute from it.
With that said, you can’t use data visualization to “cheat” your way into statistical significance. I recently had a collaborator who wanted some help automating a data visualization task so that she could decide which correlations to test. This is a terrible idea, and it’s going to get you into serious type I error trouble. To see what I mean, consider an experiment where you have a single outcome and lots of potential predictors to test individually, for example, some outcome and a bunch of SNPs or gene expression measurements. You can’t just visually inspect all those relationships and then cherry-pick the ones you want to evaluate with a statistical hypothesis test, thinking that you’ve outsmarted your way around a painful multiple-testing correction.
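Before simulating anything, a quick back-of-envelope calculation (my addition, not from the original analysis) shows the scale of the problem: if you screen 64 independent predictors that are all truly null, the chance that at least one of them looks “significant” at p < 0.05 is about 96%, which is exactly the familywise error rate a Bonferroni correction is designed to control.
``````
# Chance of at least one false positive among m independent null tests
m = 64
alpha = 0.05
1 - (1 - alpha)^m      # ~0.96: you'll almost always "find" something
1 - (1 - alpha / m)^m  # ~0.05: testing at alpha/m (Bonferroni) restores control
``````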
Here’s a simple simulation showing why that doesn’t fly. In this example, I’m simulating 100 samples with a single outcome variable y and 64 different predictor variables x. I might be interested in which x variable is associated with my y (e.g., which of my many gene expression measurements is associated with measured liver toxicity). But in this case, both x and y are just random numbers. That is, I know for a fact the null hypothesis is true, because that’s what I’ve simulated. Now we can make a scatterplot of each predictor variable against the outcome and inspect each plot.
``````library(dplyr)
set.seed(42)
ndset = 64
n = 100
d = data_frame(
  set = factor(rep(1:ndset, each = n)),
  x = rnorm(n * ndset),
  y = rep(rnorm(n), ndset))
d
``````
``````## Source: local data frame [6,400 x 3]
##
##    set       x       y
## 1    1  1.3710  1.2546
## 2    1 -0.5647  0.0936
## 3    1  0.3631 -0.0678
## 4    1  0.6329  0.2846
## 5    1  0.4043  1.0350
## 6    1 -0.1061 -2.1364
## 7    1  1.5115 -1.5967
## 8    1 -0.0947  0.7663
## 9    1  2.0184  1.8043
## 10   1 -0.0627 -0.1122
## .. ...     ...     ...
``````
``````ggplot(d, aes(x, y)) + geom_point() + geom_smooth(method = lm) + facet_wrap(~set)
``````

Now, if I were to go through this data and compute the p-value for the linear regression of each x on y, I’d get a roughly uniform distribution of p-values, my type I error would be right where it should be, and my FDR- and Bonferroni-corrected p-values would almost all be 1. This is what we expect; remember, the null hypothesis is true.
``````library(dplyr)
results = d %>%
  group_by(set) %>%
  do(mod = lm(y ~ x, data = .)) %>%
  summarize(set = set, p = anova(mod)$"Pr(>F)"[1]) %>%
  mutate(bon = p.adjust(p, method = "bonferroni")) %>%
  mutate(fdr = p.adjust(p, method = "fdr"))
results
``````
``````## Source: local data frame [64 x 4]
##
##    set      p   bon   fdr
## 1    1 0.2738 1.000 0.749
## 2    2 0.2125 1.000 0.749
## 3    3 0.7650 1.000 0.900
## 4    4 0.2094 1.000 0.749
## 5    5 0.8073 1.000 0.900
## 6    6 0.0132 0.844 0.749
## 7    7 0.4277 1.000 0.820
## 8    8 0.7323 1.000 0.900
## 9    9 0.9323 1.000 0.932
## 10  10 0.1600 1.000 0.749
## .. ...    ...   ...   ...
``````
``````library(qqman)
qq(results$p)
``````
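Besides the QQ plot, a quick numeric sanity check (my addition) makes the same point: the raw p-values look uniform, the fraction below 0.05 is close to the nominal rate, and nothing survives multiple-testing correction.
``````
# Under the null, p-values should be roughly uniform on [0, 1]
hist(results$p, breaks = 20, main = "Null p-values", xlab = "p")
mean(results$p < 0.05)   # close to the nominal 0.05
sum(results$bon < 0.05)  # Bonferroni-corrected hits: expect none
sum(results$fdr < 0.05)  # FDR-adjusted hits: expect none
``````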

BUT, if I were to look at those plots above and cherry-pick which hypotheses to test based on how strong the correlation looks, my type I error would skyrocket. Looking at the plot above, it looks like the x variables 6, 28, 41, and 49 have a particularly strong correlation with my outcome, y. What happens if I run the statistical test on only those variables?
``````results %>% filter(set %in% c(6, 28, 41, 49))
``````
``````## Source: local data frame [4 x 4]
##
##   set      p   bon   fdr
## 1   6 0.0132 0.844 0.749
## 2  28 0.0338 1.000 0.749
## 3  41 0.0624 1.000 0.749
## 4  49 0.0898 1.000 0.749
``````
When I do that, my p-values for those four tests are all below 0.1, with two below 0.05 (and I'll say it again, the null hypothesis is true in this experiment, because I've simulated random data). In other words, my type I error is now completely out of control: half of my cherry-picked tests are "significant" at the p<0.05 level, when the false positive rate should be 5%. You'll notice that the Bonferroni and FDR-corrected p-values (correcting for all 64 tests) are still not significant.
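To see that this isn't a fluke of one simulated dataset, you can repeat the whole cherry-picking procedure many times. The sketch below is my addition, not part of the original post: it simulates many replicate experiments, each time formally testing only the predictor with the strongest-looking correlation, and tallies how often that single hand-picked test comes out below 0.05 even though every null hypothesis is true.
``````
# How often does the cherry-picked (strongest-correlation) predictor reach
# p < 0.05 when all nulls are true?
set.seed(2015)
nreps = 500
cherry = replicate(nreps, {
  y = rnorm(100)
  x = matrix(rnorm(100 * 64), ncol = 64)
  best = which.max(abs(cor(x, y)))               # pick the best-looking predictor
  summary(lm(y ~ x[, best]))$coefficients[2, 4]  # p-value for that single test
})
mean(cherry < 0.05)  # far above the nominal 0.05: badly inflated type I error
``````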

The moral of the story here is to always look at your data, but don't "cheat" by choosing which statistical tests to perform based solely on that visualization exercise.

## Wednesday, February 4, 2015

### Microbial Genomics: the State of the Art in 2015

Current Opinion in Microbiology recently published a special issue on genomics. In an excellent editorial overview, “Genomics: The era of genomically-enabled microbiology”, Neil Hall and Jay Hinton survey the state of the field in microbial genomics, summarize recent contributions, and give a great synopsis of each of the reviews in the issue. Their editorial goes into more depth, but here’s a quick rundown of the reviews in this special issue. There’s a lot of good stuff here!
Quantitative bacterial transcriptomics with RNA-seq (James Creecy and Tyrrell Conway) discusses RNA-seq in bacteria and how transcriptome analysis adds a wealth of annotation information to the genome.
One chromosome, one contig: complete microbial genomes from long-read sequencing and assembly (Sergey Koren and Adam Phillippy) describes newer long-read sequencing technologies and their characteristics, explains how these methods can easily and automatically finish microbial genomes for under $1,000, and discusses remaining challenges for microbial and metagenome assembly.
Using comparative genomics to drive new discoveries in microbiology (Daniel Haft) describes progress in using comparative genomics to make new discoveries, and takes the reader on a “bioinformatics journey”: a code-breaking exercise that starts with weak hypotheses and uses genomics to fill in the biological picture.
Taking the pseudo out of pseudogenes (Ian Goodhead and Alistair Darby) reviews how pseudogenes are surprisingly prevalent, and discusses how problems with genome annotation can be addressed by combining multiple “omics” data.
Ten years of pan-genome analyses (George Vernikos et al.) describes how pan-genome analyses provide a framework for predicting and modeling genomic diversity, where the “core genome” of many bacterial species makes up only a minority of genes.
Lateral gene transfers and the origins of the eukaryote proteome: a view from microbial parasites (Robert Hirt et al.) reviews the dynamic nature of lateral gene transfer, its role in microbial diversity, how it contributes to eukaryotic genomes, and how once again integrating different “omics” methodologies is needed to recognize the extent to which LGT affects eukaryotes.
The application of genomics to tracing bacterial pathogen transmission (Nicholas Croucher and Xavier Didelot) reviews how bacterial whole-genome sequencing gives you the ultimate resolution for investigating direct pathogen transmission, distinguishing transmission chains, and defining outbreaks. If you haven’t kept up with this quickly growing body of literature, this review is a great place to start catching up.
The impact of genomics on population genetics of parasitic diseases (Daniel Hupalo et al.) describes the influence of genomics on parasite population genetics and how burgeoning genomic data has enabled new types of investigations, and focuses on Plasmodium population genomics as a foundation for studies of neglected parasites.

## Tuesday, February 3, 2015

### R + ggplot2 Graph Catalog

Joanna Zhao’s and Jenny Bryan’s R graph catalog is meant to be a complement to the physical book, Creating More Effective Graphs, but it’s a really nice gallery in its own right. The catalog shows a series of different data visualizations, all made with R and ggplot2. Click on any of the plots and you get the R code necessary to generate the data and produce the plot.

You can use the panel on the left to filter by plot type, graphical elements, or the chapter of the book if you’re actually using it. All of the code and data behind the website are open source, in this GitHub repository. Here's an example for plotting population demographic data by county that uses faceting to create small multiples:
``````library(ggplot2)
library(reshape2)
library(grid)

this_base = "fig08-15_population-data-by-county"

my_data = data.frame(
  Race = c("White", "Latino", "Black", "Asian American", "All Others"),
  Bronx = c(194000, 645000, 415000, 38000, 40000),
  Kings = c(855000, 488000, 845000, 184000, 93000),
  New.York = c(703000, 418000, 233000, 143000, 39000),
  Queens = c(733000, 556000, 420000, 392000, 128000),
  Richmond = c(317000, 54000, 40000, 24000, 9000),
  Nassau = c(986000, 133000, 129000, 62000, 24000),
  Suffolk = c(1118000, 149000, 92000, 34000, 26000),
  Westchester = c(592000, 145000, 123000, 41000, 23000),
  Rockland = c(205000, 29000, 30000, 16000, 6000),
  Bergen = c(638000, 91000, 43000, 94000, 18000),
  Hudson = c(215000, 242000, 73000, 57000, 22000),
  Passiac = c(252000, 147000, 60000, 18000, 12000))

my_data_long = melt(my_data, id = "Race",
                    variable.name = "county", value.name = "population")

my_data_long$county = factor(
  my_data_long$county, c("New.York", "Queens", "Kings", "Bronx", "Nassau",
                         "Suffolk", "Hudson", "Bergen", "Westchester",
                         "Rockland", "Richmond", "Passiac"))

my_data_long$Race =
  factor(my_data_long$Race,
         rev(c("White", "Latino", "Black", "Asian American", "All Others")))

p = ggplot(my_data_long, aes(x = population / 1000, y = Race)) +
  geom_point() +
  facet_wrap(~ county, ncol = 3) +
  scale_x_continuous(breaks = seq(0, 1000, 200),
                     labels = c(0, "", 400, "", 800, "")) +
  labs(x = "Population (thousands)", y = NULL) +
  ggtitle("Fig 8.15 Population Data by County") +
  theme_bw() +
  theme(panel.grid.major.y = element_line(colour = "grey60"),
        panel.grid.major.x = element_blank(),
        panel.grid.minor = element_blank(),
        panel.margin = unit(0, "lines"),
        plot.title = element_text(size = rel(1.1), face = "bold", vjust = 2),
        strip.background = element_rect(fill = "grey80"),
        axis.ticks.y = element_blank())

p

ggsave(paste0(this_base, ".png"),
       p, width = 6, height = 8)
``````

Keep in mind that not all of these visualizations are recommended. You’ll find pie charts, ugly grouped bar charts, and other plots for which I can’t think of any sensible name. Just because you can use the `add_cat()` function from Hilary Parker’s cats package to fetch a random cat picture from the internet and add it to your ggplot2 plot as an `annotation_raster` layer doesn’t necessarily mean you should do such a thing for a publication-quality figure. But if you ever need to know how, this R graph catalog can help you out.
``````library(ggplot2)

## devtools::install_github("hilaryparker/cats")
library(cats)
## library(help = "cats")

p = ggplot(mpg, aes(cty, hwy)) + 