PLoS One. 2010; 5(11): e13584.
Published online Nov 3, 2010. doi: 10.1371/journal.pone.0013584
PMCID: PMC2972202

Comprehensive Approach to Analyzing Rare Genetic Variants

Alfred Lewin, Editor

Abstract

Recent findings suggest that rare variants play an important role in both monogenic and common diseases. Due to their rarity, however, it remains unclear how to appropriately analyze the association between such variants and disease. A common approach entails combining rare variants together based on a priori information and analyzing them as a single group. Here one must make some assumptions about what to aggregate. Instead, we propose two approaches to empirically determine the most efficient grouping of rare variants. The first considers multiple possible groupings using existing information. The second is an agnostic “step-up” approach that determines an optimal grouping of rare variants analytically and does not rely on prior information. To evaluate these approaches, we undertook a simulation study using sequence data from genes in the one-carbon folate metabolic pathway. Our results show that using prior information to group rare variants is advantageous only when information is quite accurate, but the step-up approach works well across a broad range of plausible scenarios. This agnostic approach allows one to efficiently analyze the association between rare variants and disease while avoiding assumptions required by other approaches for grouping such variants.

Introduction

There is increasing evidence supporting the role of rare variants in both monogenic and complex diseases [1]–[6]. In parallel with this, new sequencing technologies are providing an avenue for effective detection of rare variants in the human genome [7]. Such technologies are helping the 1000 Genomes Project catalogue less common variants (http://www.1000genomes.org). These advances in our ability to study rare variants should substantially improve our insight into the genetic basis of health and disease.

Evaluating the potential impact of rare variants on disease is complicated, however, by their uncommon nature. Several approaches have been proposed for the analysis of rare variants. At one extreme, one can collect such an enormous study sample that rare variants are detected often enough to allow for testing each variant individually; for example, Nejentsev et al. [8] discovered a rare variant with minor allele frequency (MAF) 0.46% in Type I Diabetes cases and 0.67% in controls, using 17,730 individuals. Evaluating each individual rare variant will generally not be effective for smaller sample sizes, or for variants with even lower MAFs than those of Nejentsev et al. [8], due to data sparsity. In particular, conventional analyses may produce extremely unstable estimates of rare variant effects on disease and be essentially uninformative.

An alternative is to combine rare variants together into groups in a reasonable manner so they can be efficiently analyzed. Note that when we use “efficient” in this manuscript, we will always be referring to statistical power; computational time will be referred to as runtime. One might simply tabulate in cases and controls the number of individuals that have any rare variants (e.g., within a given locus), and contrast these counts. Morgenthaler et al. [9] have termed this the Cohort Allelic Sums Test (CAST). This approach essentially assumes that the rare variants have similar effects on disease. In other words, CAST gives equal weights to all rare variants combined together. It also treats individuals who are heterozygous and homozygous in an identical manner, although there will be few of the latter when studying rare variants.
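As a concrete illustration, the CAST-style tabulation described above can be sketched as follows. The function name and data layout are hypothetical; the sketch only produces the carrier counts one would then contrast between cases and controls (e.g., with a chi-squared test):

```python
def cast_counts(genotypes, case_status, maf_threshold=0.05):
    """CAST-style sketch: count carriers of any rare variant in cases vs. controls.

    genotypes: list of per-individual lists of minor-allele counts (0/1/2).
    case_status: list of 0/1 disease indicators.
    Returns (carriers_in_cases, n_cases, carriers_in_controls, n_controls).
    """
    n = len(genotypes)
    n_variants = len(genotypes[0])
    # Estimated minor allele frequency per variant (two alleles per person).
    maf = [sum(g[j] for g in genotypes) / (2.0 * n) for j in range(n_variants)]
    rare = [j for j in range(n_variants) if maf[j] < maf_threshold]
    carriers_cases = carriers_controls = 0
    for g, y in zip(genotypes, case_status):
        carrier = any(g[j] > 0 for j in rare)  # any rare allele at the locus
        if carrier and y == 1:
            carriers_cases += 1
        elif carrier and y == 0:
            carriers_controls += 1
    n_cases = sum(case_status)
    return carriers_cases, n_cases, carriers_controls, n - n_cases
```

Note that, as in CAST, heterozygous and homozygous carriers are treated identically here.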

Another option is to somehow weight each rare variant and then combine them. The optimal approach will upweight the variants most likely to cause disease and downweight variants that have no effect on disease. The weights could be calculated in a number of different ways. Madsen and Browning [10] propose weighting each allele by the inverse of the estimated standard deviation of the total number of mutations in the controls. Rare variants can also be simultaneously analyzed with common variants in a multivariate test, as in the Combined Multivariate and Collapsing (CMC) method [11]. Here, a multivariate test is constructed using a term for collapsed rare variants plus terms for each of the common alleles. This allows for collapsing variants only when needed due to their rarity, and analyzing more common variants on an individual basis.

The decision to aggregate rare variants (with or without explicit weighting) requires a number of strong assumptions about the similarity of their effects on disease. This raises a critical unanswered question: how best to combine rare variants for analysis? For instance, one might choose a minor allele frequency threshold to define what is "rare," or choose a weighting scheme for the variants (even if constant weights). In addition, one might decide to only aggregate nonsynonymous variants in the coding regions [9], as these might be the most likely to cause disease [12]. Such a grouping could be further refined to only nonsynonymous variants that lead to putatively deleterious mutations that impair the function of the protein (e.g., using predictive algorithms such as SIFT [13], PMUT [14], or PolyPhen [15]). However, such algorithms vary in the information used and can give different results, which would lead to different groupings of rare variants. For example, in the data we used for our simulation study (discussed below), the agreement among SIFT, PMUT, and PolyPhen in predicting the impact of mutations was only partial. Clearly, it is very difficult to define a priori which rare variants should be aggregated into a single group for analysis.

Two methods have recently been proposed to collapse rare variants in a data-driven manner. Price et al. [16] extend CAST [9] and the weighted approach [10] by testing multiple allele frequency thresholds, rather than choosing one fixed threshold, and also extend the test to quantitative traits. However, they assume that all rare variants are deleterious; while this may be a reasonable assumption for many diseases [2], there is also the possibility that some rare variants are protective. Han and Pan [17] allow for both deleterious and protective variants by letting the data determine whether an allele should be treated as protective or harmful when collapsing, and also suggest including collapsed common variants in the test. We combine and further extend these approaches in a more flexible data-driven model to decide how best to group rare variants for association analysis.

Our approach considers multiple possible groupings, choosing the "best" set based on statistical criteria, and correcting by permutation. One can use prior information from several sources to define these groupings; e.g., different protein coding function algorithms. Alternatively, or in addition, one can use data-driven methods to define these groupings based only on statistical criteria; e.g., all possible allele frequencies, all possible subsets of rare variants, or a "step-up" approach we propose here. That is, we use the data to decide whether a variant should be treated as deleterious or protective, or whether the variant should even be in the model at all. We use a simulation study to evaluate these approaches. The simulations are based on data from deeply sequenced candidate genes in the one-carbon folate metabolic pathway [18].

Methods

General framework

Assume that we have undertaken a study of the relationship between J genetic variants and a phenotype Y among n individuals. Let X_ij be the additive coding for a marker (i.e., the number of minor alleles individual i has at variant j); other codings can be considered, but a dominant coding will be almost identical to an additive coding for a rare variant. Then a flexible disease model for the relationship can be given by

g(E[Y_i]) = α + Σ_{j=1}^{J} β_j X_ij,    (1)

where Y_i is an individual's phenotype (dichotomous or continuous) and g is a link function (e.g., the logit for logistic regression or the identity for linear regression). With rare variants, however, the data are too sparse to estimate each individual β_j. For example, suppose we try to fit a logistic regression to test for the genetic association of a rare variant with disease. Without an enormous sample size, the estimate of a single rare variant's effect on Y (i.e., β_j) may be extremely unstable and essentially uninformative.

An alternative is to somehow aggregate multiple rare variants, and leverage their combined strength to improve estimation. This can be formalized with a second-stage model for the parameters of interest, the vector of coefficients β = (β_1, …, β_J):

β = Z π + δ,    (2)

where π is a vector of combined genetic effects (e.g., a single collapsed effect, or two terms for a protective and a deleterious effect) that we want to evaluate; Z is a second-stage design matrix that incorporates information on factors about the genetic variants; and δ is a random effect. Equation 2 is essentially a prior model that distinguishes how one can "borrow information" across rare variants. Together, equations 1 and 2 define a hierarchical model that can be used to incorporate complex interrelationships among the variants and their putative effects on disease.

However, most of the existing rare variant approaches essentially model a single combined genetic effect π, aggregating all of the data features into a single weight w_j for each SNP, and assume δ = 0. We build on these approaches, and for focus and tractability do not explore a fully parametrized hierarchical model; further details on the potential value of this approach are given in the discussion. Combining Equations 1 and 2 then gives the model

g(E[Y_i]) = α + π Σ_{j=1}^{J} w_j X_ij.    (3)

That is, one is essentially modeling and estimating the effect of a weighted combination of variants, C_i = Σ_j w_j X_ij.

We will explore different ways to model the weights w_j in this paper, from data-driven methods to those based completely on prior information. Several approaches to modeling w_j have been proposed in the literature. The simplest is to set w_j = 1 for all variants and sum them together. This is similar to the CAST approach [9], which uses an indicator variable for the presence of any rare variant. Here we use a multiplicative model w_j = c_j s_j i_j, where c_j is a continuous weight (e.g., to incorporate allele frequencies), s_j ∈ {−1, +1} determines the direction of the variant effect (deleterious or protective), and i_j ∈ {0, 1} is an indicator variable determining whether the allele belongs in the model (i.e., variable selection). Note that in our description of these parameters below, we will be using the data to estimate them; we correct for this by permutation at the end of the procedure.

For the continuous weight c_j, one can incorporate allele frequency information (or simply set c_j = 1). For example, Madsen and Browning [10] consider all alleles to be deleterious, and for dichotomous traits set c_j to the inverse square root of the expected variance based on the allele frequency q_j estimated in the controls, c_j = 1/√(n q_j (1 − q_j)), with pseudocounts (i.e., adding 1 to the numerator and denominator when estimating q_j, to prevent any zero weights). Price et al. [16] extend this to continuous traits by estimating the allele frequency q_j from all samples.
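A minimal sketch of this weighting scheme for one variant, assuming the inverse-standard-deviation form with a pseudocounted control allele frequency (the function name and the exact pseudocount convention are illustrative assumptions, not the authors' code):

```python
import math

def madsen_browning_weight(control_genotypes_j, n_controls):
    """Madsen-Browning-style weight sketch for one variant (dichotomous trait).

    control_genotypes_j: minor-allele counts (0/1/2) at variant j in controls.
    Estimates the control allele frequency with a pseudocount so the weight
    is never infinite, then returns the inverse estimated standard deviation.
    """
    m = sum(control_genotypes_j)            # minor alleles observed in controls
    q = (m + 1) / (2.0 * n_controls + 2)    # pseudocounted frequency estimate
    return 1.0 / math.sqrt(n_controls * q * (1 - q))
```

By construction, rarer variants receive larger weights, so a single common variant cannot dominate the aggregated score.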

If we believe all variants have a deleterious effect, we can set s_j = 1 for all variants and ignore this parameter. Otherwise, we can let the data decide how to specify s_j. Han and Pan [17] addressed this by first fitting a marginal regression model for the association between the variant and disease, and then flipping the coding of the genotype when the estimated coefficient is negative and reaches a certain significance threshold. We use a slightly different method for rare variants. For dichotomous traits, if an allele is more prevalent in cases than controls, we set s_j = +1 to indicate it is likely deleterious, and if it is more prevalent in controls than in cases, we set s_j = −1 to indicate it is protective. For continuous traits we use the sign of the estimated covariance between the trait and marker; this is equivalent to the sign of the regression coefficient, just slightly faster to calculate.
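The sign rule just described might be sketched as follows (hypothetical function; assumes phenotypes are coded 0/1 for dichotomous traits):

```python
def variant_sign(genotypes_j, phenotypes, dichotomous=True):
    """Data-driven direction s_j in {-1, +1} for one variant (a sketch).

    Dichotomous traits: +1 (deleterious) if the allele is more prevalent in
    cases than controls, else -1 (protective). Continuous traits: the sign of
    the sample covariance between trait and minor-allele count.
    """
    n = len(phenotypes)
    if dichotomous:
        cases = [g for g, y in zip(genotypes_j, phenotypes) if y == 1]
        controls = [g for g, y in zip(genotypes_j, phenotypes) if y == 0]
        freq_cases = sum(cases) / (2.0 * len(cases)) if cases else 0.0
        freq_controls = sum(controls) / (2.0 * len(controls)) if controls else 0.0
        return 1 if freq_cases >= freq_controls else -1
    # Continuous trait: sign of covariance (equals sign of the regression slope).
    gbar = sum(genotypes_j) / n
    ybar = sum(phenotypes) / n
    cov = sum((g - gbar) * (y - ybar) for g, y in zip(genotypes_j, phenotypes))
    return 1 if cov >= 0 else -1
```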

Lastly, we have i_j, which determines whether a variant enters the model. One example is to set this by a hard minor allele frequency threshold (e.g., as in CAST [9]). However, we may also wish to try the approach at several allele frequency thresholds, or even all possible allele frequency thresholds [16]. In this case, we extend our notation so that we are considering a set M of models indexed by k, M = {m_k}. Testing all allele frequencies is then equivalent to running the test once for each threshold t in the set of unique allele frequencies observed in the data.
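Enumerating the indicator i_j over all observed allele frequency thresholds can be sketched as below (hypothetical function; because the threshold sets are nested, the scan adds only one model per unique frequency):

```python
def threshold_models(genotypes):
    """Enumerate the inclusion indicator i_j over all unique minor allele
    frequencies: one model per threshold t, including variant j iff MAF_j <= t.

    genotypes: list of per-individual lists of minor-allele counts (0/1/2).
    Returns {threshold: [i_1, ..., i_J]} with 0/1 inclusion indicators.
    """
    n = len(genotypes)
    n_variants = len(genotypes[0])
    maf = [sum(g[j] for g in genotypes) / (2.0 * n) for j in range(n_variants)]
    thresholds = sorted(set(maf))
    return {t: [1 if maf[j] <= t else 0 for j in range(n_variants)]
            for t in thresholds}
```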

Another example of how to choose i_j is as an indicator for variants in coding regions, since they may be more likely causal than those elsewhere [12]. We may wish to consider only those mutations that are nonsynonymous, and in particular those that are highly deleterious. Several algorithms exist for estimating the magnitude of the deleterious effect of mutations on protein function, but they do not always agree. Again, we might consider using several algorithms to define different groups to test. One may wish to use a consensus of all of these functional designations to group rare variants, or even use continuous information from the protein coding function algorithms. We can combine this with our ideas for testing multiple allele frequency thresholds.

There is one other model we will introduce for i_j, but it will be clearer after we describe the test statistic and its computational runtime. To speed up the approach one could use linear regression for all phenotypes, instead of logistic regression [16], [17]. We instead take the mean-centered score of the weighted combination from Equation 3, divided by the empirical variance:

T = U² / V,

where U = Σ_{i=1}^{n} (Y_i − Ȳ) C_i, C_i = Σ_j w_j X_ij, and V = Σ_{i=1}^{n} (Y_i − Ȳ)² (C_i − C̄)². Then T follows a chi-squared distribution with one degree of freedom. When we are considering a set of models M for i_j, the final test statistic of the procedure is T_max = max_k T_{m_k}. To compute the p-value of the test, we permute the phenotypes of the individuals and recompute T_max^(b) for each permutation b, following the entire procedure as before. The p-value for B permutations is then given by (1/B) Σ_{b=1}^{B} I(T_max^(b) ≥ T_max).
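A sketch of this score statistic and its permutation p-value, under the mean-centered form described above (function names are hypothetical; in the full procedure, any model selection over M would also be repeated within each permutation):

```python
import random

def score_statistic(weighted_burden, phenotypes):
    """T = U^2 / V for the weighted burden C_i = sum_j w_j X_ij
    (a sketch of the mean-centered score statistic described in the text)."""
    n = len(phenotypes)
    ybar = sum(phenotypes) / n
    cbar = sum(weighted_burden) / n
    u = sum((y - ybar) * c for y, c in zip(phenotypes, weighted_burden))
    v = sum((y - ybar) ** 2 * (c - cbar) ** 2
            for y, c in zip(phenotypes, weighted_burden))
    return u * u / v if v > 0 else 0.0

def permutation_p_value(weighted_burden, phenotypes, n_perm=500, seed=0):
    """Permute phenotypes and count how often the permuted statistic
    meets or exceeds the observed one."""
    rng = random.Random(seed)
    observed = score_statistic(weighted_burden, phenotypes)
    perm = list(phenotypes)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(perm)
        if score_statistic(weighted_burden, perm) >= observed:
            hits += 1
    return hits / n_perm
```

Because the same permuted phenotypes can be reused across all candidate models, the permutation correction properly accounts for having searched over multiple groupings.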

With the computational complexity of testing multiple weights in mind, we also consider a data-driven method for specifying i_j. The approach described above for testing all allele frequencies has runtime linear in the number of variants. In contrast, letting k index all possible subsets of variants has runtime exponential in the number of variants, and is too computationally intensive for all but the smallest genes. Instead, we propose a "step-up" approach whose runtime falls between these two methods. This is similar to stepwise regression, but instead of selecting additional independent predictors, the step-up approach chooses the best combination of rare variants into a single aggregated group. With this approach we first compute the univariate test statistic T_j for each variant j. We then determine the "best" (i.e., largest) of these statistics; denote this model m_1, with test statistic T_{m_1}. We then build on the model containing variant m_1 by computing the test statistic for each remaining marker j combined with the best marker m_1 from the first step. Denote the best added variant of this second step as m_2. If the resulting statistic does not exceed T_{m_1}, the algorithm terminates. Otherwise, the algorithm continues until no added variant increases the test statistic. Again the p-value is obtained by permutation, repeating the entire procedure for each phenotype permutation. This algorithm's runtime is at worst quadratic in the number of variants.
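The step-up search can be sketched as below, assuming equal weights within the aggregated group (hypothetical function names; a compact copy of the score statistic is included so the sketch is self-contained):

```python
def _score(burden, phenotypes):
    """Mean-centered score statistic T = U^2 / V (as in the text)."""
    n = len(phenotypes)
    ybar = sum(phenotypes) / n
    cbar = sum(burden) / n
    u = sum((y - ybar) * c for y, c in zip(phenotypes, burden))
    v = sum((y - ybar) ** 2 * (c - cbar) ** 2 for y, c in zip(phenotypes, burden))
    return u * u / v if v > 0 else 0.0

def step_up_grouping(genotypes, phenotypes):
    """Agnostic "step-up" sketch: greedily grow a single aggregated group,
    adding at each step the variant that most increases the score statistic,
    and stopping when no addition improves it.

    genotypes: list of per-individual lists of minor-allele counts.
    Returns (selected_variant_indices, best_statistic).
    """
    n_variants = len(genotypes[0])
    remaining = set(range(n_variants))
    selected = []
    burden = [0.0] * len(genotypes)
    best_t = 0.0
    while remaining:
        best_j, best_j_t = None, best_t
        for j in remaining:
            trial = [b + g[j] for b, g in zip(burden, genotypes)]
            t = _score(trial, phenotypes)
            if t > best_j_t:
                best_j, best_j_t = j, t
        if best_j is None:  # no variant improves the statistic: stop
            break
        selected.append(best_j)
        remaining.remove(best_j)
        burden = [b + g[best_j] for b, g in zip(burden, genotypes)]
        best_t = best_j_t
    return selected, best_t
```

In practice, the entire greedy search would be rerun inside each phenotype permutation so that the reported p-value accounts for the selection.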

We can further extend this to allow the set of all models considered to include any combination of the approaches above, restricted to being computationally feasible. That is, k could index across all of the steps in the step-up model based on SIFT functional markers, and all of the steps in the step-up model based on PMUT functional markers. This effectively uses the "best" of these two procedures. However, the more rare variant groupings and tests considered, the less efficient and more computationally intensive the approach will be compared to one that most accurately tests the true underlying model. When the disease model is not well understood, as is probably the case for many rare variants, it is advantageous to consider several different groupings and/or tests. In our simulations, we explore this trade-off between considering many possibilities and making strong assumptions.

Models for variant weights

In the previous section we described a general framework and strategies for constructing a model for the variant weights w_j and evaluating an aggregated genetic effect on disease π. Here we enumerate the models that we will compare in our subsequent simulations (distinct from the models we will use to generate our data). We first investigated the following models with s_j = 1 (i.e., all variants are deleterious) and c_j = 1 (i.e., they are equally deleterious):

  1. Rare MAF threshold: i_j = I(q_j below a fixed rarity threshold) × f_j, where the functional indicator f_j is defined by:
    1. SIFT: f_j = I(variant j is predicted intolerant by SIFT) (this is the true generating model, i.e., as if we knew the true underlying model);
    2. Nonsynonymous: f_j = I(variant j is nonsynonymous) (modeling all mutations that alter protein coding function).
    This is similar to CAST, but summing X_ij rather than using an indicator variable of any mutation.
  2. Stricter MAF threshold: Same as (1), but with a lower allele frequency cutoff.
  3. All MAF: i_j = I(q_j < t) × f_j, with t ranging over all observed allele frequencies (i.e., all allele frequency thresholds as described above), where f_j is defined by:
    1. Nonsynonymous: f_j = I(variant j is nonsynonymous);
    2. All protein coding: f_j defined separately by SIFT, PMUT, and PolyPhen (i.e., try several protein coding function algorithms, since we will see they often differ);
    3. Non-generating protein coding: f_j defined by PMUT and PolyPhen only (i.e., exclude the protein coding function grouping information actually used to generate the data, and see if the other grouping methods can still detect an association).
  4. Step: i_j based on the "step-up" approach described above.

In addition to these, we then fit the analogous models with c_j set to the inverse of the estimated variance of variant j, computed in controls for dichotomous traits and in all subjects for continuous traits. Next we refit both sets of models, with c_j = 1 and with the inverse-variance weights, choosing the "best". Finally, we tested these models with s_j determined by the data (i.e., signed, as described previously). Note that in these scenarios the weights presented here do not make as much sense for protective variants (especially weighting based on allele frequency in controls).

Simulation design

We investigated several different rare variant disease models. Dichotomous traits were simulated using the disease model given in Equation 1 under a logit link, and continuous traits with the identity link. We simulated a range of odds ratios for dichotomous traits and a range of mean differences (0.15 to 0.6, on a standard normal trait) for continuous traits; a wide range of values is used here because rare variants are expected to have moderate to high penetrances [19], [20]. We also undertook simulations with an odds ratio of 1 or a mean difference of 0 to make sure the tests maintain the proper type I error. For dichotomous traits, the intercept α was chosen to keep the population prevalence fixed at 0.01. Other values for the population prevalence were considered, but did not materially affect the results. For continuous traits, α is irrelevant.

The variant data were generated using the haplotype frequencies across genes from an existing sequence-level dataset. One thousand cases were drawn from the joint distribution of the phenotype and genotypes conditional on being a case, and 1000 controls conditional on being a control, or 2000 individuals with a quantitative trait. A vector of genetic variants X_i was drawn from the haplotype frequencies of 480 individuals from the California Newborn Screening Program in whom the coding regions of 16 genes in the folate metabolic pathway [18] were sequenced; more detail is given in the results section.

We ran 500 simulations per gene, and averaged the empirical power over all of the genes at a type I error rate of 0.05 (i.e., average power for gene-specific detection, not pathway-level detection); the power plots we present are these averages. We ran 500 permutations for each test (except CMC, for which an asymptotic test is available [11]). In practice one might wish to run a larger number of permutations for regions suggestive of association; 500 permutations were used here for simulation speed, as many tests were considered, and should be sufficiently accurate for the simulations. Unless otherwise stated, we used the SIFT algorithm to determine whether alleles were considered intolerant (including those with low confidence) and thus associated with disease, or tolerated and not associated with disease [13]. In each gene, we constructed and normalized our coefficients so that the maximum contribution of any allele was less than or equal to the odds ratio.
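The empirical power computation described here, i.e., the fraction of simulated datasets whose permutation p-value falls at or below the type I error rate, can be sketched with a hypothetical helper:

```python
def empirical_power(p_values, alpha=0.05):
    """Empirical power: fraction of simulated datasets whose p-value
    is at or below the nominal type I error rate alpha."""
    return sum(p <= alpha for p in p_values) / len(p_values)
```

Averaging this quantity across genes gives the gene-level power curves reported in the results.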

We ran several simulations for dichotomous traits with the following specifications of the variant effects β_i (Equation 1), where p_i denotes the minor allele frequency of variant i:

  1. Constant effect for all variants: Let ψ be the odds ratio, and t = 0.01 be the cutoff for whether an allele is rare and deleterious. Define β_i = log(ψ) if p_i < t and variant i is predicted intolerant, and β_i = 0 otherwise.
  2. Varying the causal frequency: Since we do not actually know the true causal allele frequency, we undertook several other simulations varying the "causal" rare allele frequency. That is, we allowed the cutoff t to follow a discrete uniform distribution over the allele frequencies in each gene that were less than 0.05, redrawing it for each simulation. We define β_i = log(ψ) if p_i ≤ t, and β_i = 0 otherwise.
  3. Continuous penetrance of disease: Here, let d_i be the continuous SIFT score [13] for variant i, which ranges from 0 to 1, with 0 predicted as most deleterious. We define β_i = (1 − d_i) log(ψ), so that variants with a higher predicted probability of deleteriousness under SIFT proportionately increase the odds of disease.
  4. Incorporating rare and common variants: We control how much more deleterious a rare variant is than a more common one with a parameter a, defining β_i to decrease with allele frequency at a rate governed by a. At the smallest value of a, rarer variants have a very strong effect and common variants almost none; for larger values of a, common variants have an increasing effect on disease. Note that here we use PMUT to increase the number of genes with deleterious common variants (four, rather than one with SIFT).
  5. Incorporating protective and deleterious alleles: We randomly partitioned each gene so that a specified proportion of the total allele frequency of rare functional variants was deleterious and the rest protective. We define β_i = s_i log(ψ), where the sign s_i was +1 for deleterious alleles and −1 for protective alleles. We first used a roughly even split, and then repeated this with a larger proportion of the total allele frequency deleterious.
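Using the notation above (β_i for the effect of variant i, ψ for the odds ratio, t for the rarity cutoff, d_i for the SIFT score), three of these effect-size specifications can be sketched as follows. The code is an illustrative reconstruction, not the authors' simulation script:

```python
import numpy as np

def beta_constant(maf, intolerant, psi=2.0, t=0.01):
    """Simulation 1: log-odds log(psi) for every rare, predicted-intolerant
    variant; zero otherwise (hard threshold t on minor allele frequency)."""
    return np.where((maf < t) & intolerant, np.log(psi), 0.0)

def beta_continuous(sift_score, psi=2.0):
    """Simulation 3: effect scales with predicted deleteriousness;
    SIFT scores near 0 are most deleterious, so weight by (1 - score)."""
    return (1.0 - sift_score) * np.log(psi)

def beta_signed(intolerant, deleterious, psi=2.0):
    """Simulation 5: +log(psi) for deleterious variants, -log(psi)
    for protective ones (sign s_i in {+1, -1})."""
    s = np.where(deleterious, 1.0, -1.0)
    return s * np.log(psi) * intolerant
```

Simulation 2 reuses `beta_constant` with the threshold `t` redrawn each replicate, and simulation 4 would replace the hard threshold with a decay in allele frequency governed by the parameter a.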

We also reran simulations 1 and 5 for continuous traits. Here we replaced the odds ratio ψ with the mean difference per additional dose of a variant allele, sampling the trait from a normal distribution.

Results

Dataset description

The deeply sequenced dataset on which our simulations were based was rich in rare variants: of 764 putative SNPs, 653 had allele frequencies below 5%, and 583 had an allele frequency below 1%. In the nonsynonymous regions of these genes we compared the SIFT [13], PMUT [14], and PolyPhen [15] methods for predicting whether variants were deleterious protein-coding mutations. Figure 1 shows the number of rare variants as characterized by these algorithms at varying allele frequencies. We found limited concordance among the methods (Table 1), similar to Chun et al. [21]. Nevertheless, the low concordance among these three algorithms is actually beneficial for our simulations, because it adds variability reflecting reality: when we use SIFT to generate the disease model, it is interesting to assess how well approaches based on the other algorithms perform. Data from 13 of the 16 genes were included in the analysis because each of those 13 had at least one intolerant nonsynonymous mutation as predicted by SIFT (full details of this and the other methods are in Table 1), whereas the remaining 3 had no predicted deleterious changes.
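Concordance between two algorithms' binary deleterious/tolerated calls, as summarized in Table 1, is conventionally quantified with Cohen's kappa, the chance-corrected agreement statistic. A minimal sketch, using hypothetical calls rather than the actual SIFT/PolyPhen data:

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters (e.g., SIFT vs. PolyPhen
    deleterious calls): observed agreement corrected for the agreement
    expected by chance given each rater's marginal call rate."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    po = np.mean(a == b)                      # observed agreement
    pe = (a.mean() * b.mean()                 # chance agreement on "deleterious"
          + (1 - a.mean()) * (1 - b.mean()))  # ...and on "tolerated"
    return (po - pe) / (1 - pe)
```

Kappa equals 1 for perfect agreement and 0 when agreement is no better than chance, which is why raw percent agreement alone can overstate concordance for imbalanced call rates.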

Figure 1
Deleteriousness of variants detected by sequencing one-carbon folate metabolic pathway candidate genes.
Table 1
Protein Function by Gene.

Simulation results

Each simulation enumerated above is presented in Figures 2 and 3. In these figures, the different scenarios are distinguished by three comma-separated labels along the x-axes. The first label indicates the weighting used in the test (i.e., the model for the weights): constant (C), weighted (W), or both constant and weighted (B). The second label concerns the sign parameter, and indicates whether the sign was fixed at a constant +1 (unsigned) or allowed to vary as described above (signed). The third label concerns the grouping of variants tested, indicating whether the test was restricted to a particular algorithm's deleterious calls (e.g., SIFT) or to all nonsynonymous changes (NS), and what range of alleles or groupings the test was applied to. The latter corresponds to: the exact generating alleles (Perf for "perfect", i.e., testing only the alleles contributing to disease), all allele frequencies (MAF), all functional groupings (F), all functional groupings except the one used to generate the data, a hard allele frequency threshold (e.g., "MAF < 0.01"), the CMC method with a hard threshold (run only for common variants, simulation 4), or the step-up algorithm described in the Methods section (step). Unless otherwise stated, the tests in the plots are ordered by overall power (averaged over the four odds ratios or mean differences).

Figure 2
Results from simulation study comparing power for rare variant analysis approaches.
Figure 3
Further results comparing power across rare variant approaches.

Figure 2A shows the results from simulation 1, with the fixed MAF threshold of 0.01. The weighted method generally performs better than constant weights (even when testing exactly the markers used to generate the data, Perf), and appreciably better than applying constant weights across all minor allele frequencies or using a fixed threshold. The step-up method also performs well in this setting. Lastly, signing the variants costs little power even though all SIFT variants are assumed deleterious. Figure 2B shows the results from simulation 2, the more realistic scenario with a different causal allele frequency generating each simulation. Here the step-up method performs best, aside from the unrealistic Perf test. Compared with simulation 1, we see a more dramatic power reduction for the unweighted (C) tests that allow for multiple MAFs. Figure 2C shows the results from simulation 3, with continuously generated deleteriousness of alleles. Surprisingly, the weighted method with a hard MAF threshold for aggregating variants has the most power here, although the step-up method (C or W) is nearly identical. As above, weighting by minor allele frequencies in controls (W) generally worked better than not weighting (C). A similar step-down approach was also tried in these tests, but it did not work well (results not shown).

We then examined the effect of common variation according to the PMUT algorithm in simulation 4 (4 genes had common variants, Figure 1) [14]. In Figure 2D we vary the parameter a in each scenario and fix the odds ratio at 2. Here the ordering of the tests is less informative than in the other plots; it is better to consider each approach's power separately at each value of a in Figure 2D. To emphasize this, Figure 2D is ordered by the power at a single reference value of a. For the two smallest values of a, the rare variant methods perform best. Step-up performs well, though one of the grouping approaches shows a small power loss here, unlike before. However, if common variants have any appreciable effect on disease (larger a), then the CMC approach works best, likely because it is more flexible and does not assume the more common variants share the same effect, at the expense of a few degrees of freedom. As expected, requiring a hard MAF cutoff performed poorly here (Figure 2D).

The top panels of Figure 3 show the effect of mixing protective and deleterious mutations (simulation 5): Figure 3A shows the more even split of deleterious vs. protective variants, while 3B shows the split weighted more heavily toward deleterious variants. It is not surprising that the methods which sign variants based on case-control differences generally performed best here, especially for the more even split. What is slightly surprising is that the unsigned step-up routine performs nearly as well as the signed step-up routine. Even the constant-threshold test performs well if it is signed. The unsigned methods look slightly better under the predominantly deleterious split than under the even split, although the signed methods are still preferred.

Our simulations for continuous traits gave generally similar results to those for dichotomous traits. Figure 3C shows results for simulation 1, with data generated from SIFT predictions and all variants with MAF below the fixed threshold causal. Results are similar to the dichotomous case, with the weighted and step-up approaches performing best, and allowing any MAF doing worse. Figure 3D presents results for simulation 5 (protective and deleterious variants). For continuous data, the signed tests show even more benefit than for dichotomous traits; in fact, assuming that all variants are deleterious works quite poorly, except for the step-up approach, which still did reasonably well.

Discussion

We have compared several different approaches to rare variant analysis that incorporate varying amounts of prior information in deciding how to aggregate such variants. When one does not know how rare variants affect disease, and is hesitant to make the strong assumptions required to collapse them together, the completely agnostic step-up approach presented here may be the most appropriate. It performed either the best, or close to the best (excluding the “perfect” but unrealistic tests) in the various situations considered.
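The step-up algorithm itself is specified in the Methods section; as a rough, hypothetical illustration of the general idea (greedily grow the grouping while an association statistic improves), consider the following sketch. The statistic and stopping rule here are simplifying assumptions, not the published algorithm:

```python
import numpy as np

def assoc_stat(G_subset, y):
    """Association statistic for a candidate grouping: squared
    case/control difference in mean aggregated allele count."""
    burden = G_subset.sum(axis=1)
    return (burden[y == 1].mean() - burden[y == 0].mean()) ** 2

def step_up(G, y):
    """Greedy step-up selection: starting from the single best variant,
    repeatedly add whichever variant most increases the statistic,
    stopping when no addition improves it."""
    remaining = set(range(G.shape[1]))
    chosen, best = [], -np.inf
    while remaining:
        stat, pick = max(
            (assoc_stat(G[:, chosen + [j]], y), j) for j in remaining
        )
        if stat <= best:
            break
        chosen.append(pick)
        remaining.discard(pick)
        best = stat
    return chosen, best
```

In practice the selected grouping's significance must be assessed by permutation, re-running the entire selection on each permuted dataset so the adaptive search does not inflate the type I error.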

When both protective and deleterious variants may be present, we found it useful to sign variants (although there was little difference between the step-up and signed step-up approaches). Signing greatly improved efficiency when both protective and deleterious variants were present, at the cost of some efficiency when only deleterious alleles were present. The allele frequency based weighting schemes we considered (models for the weights) generally did not work well when both protective and deleterious variants were present; however, these weights were designed for the situation where all alleles are deleterious, and they do improve efficiency there (with the exception of step-up, where they make little difference). Using a hard cutoff performed relatively poorly unless it accurately reflected the underlying disease model; otherwise, a slightly higher allele frequency threshold generally worked better. Under the slightly softer assumption of testing all MAF thresholds, incorporating functional information from protein-coding function algorithms generally improved the efficiency of the test while adding only a minor computational burden. Note, however, that we used the SIFT algorithm to generate the data in our simulations, so the comparison is biased toward that information; even so, the other protein-coding function algorithms (e.g., PMUT, PolyPhen) did well with all MAF thresholds when the generating information was not available. The more flexible step-up approach does not need to rely on having such information at all.

Our simulations focused on combining rare variants within particular genes. One can extend this approach to pathways, exomes, or entire genomes, although the latter may be computationally challenging. Some computational time may be saved by using an adaptive permutation that stops earlier for genes or regions that appear to have no impact. For exomes, one could also further collapse entire pathways instead of genes. A fast analysis of different pathways could be done by testing each gene individually, and combining the resulting p-values with the Fisher product test statistic [10], or applying another step-up approach to further combine the aggregated scores from each gene. Testing all MAF instead of the step-up approach is also an alternative if computational time is an issue [16].
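The Fisher product combination mentioned above is straightforward to sketch: for k independent gene-level p-values, −2 Σ ln p follows a chi-squared distribution with 2k degrees of freedom under the null, whose survival function has a closed form for even degrees of freedom. A minimal self-contained implementation:

```python
import math

def fisher_combine(pvalues):
    """Fisher's product method: under the null, -2 * sum(ln p) follows a
    chi-squared distribution with 2k degrees of freedom for k p-values."""
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    # Survival function of chi-squared with even df = 2k in closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= (x / 2) / i
        total += term
    return math.exp(-x / 2) * total
```

With a single p-value the method returns that p-value unchanged, a useful sanity check; note that gene-level p-values from overlapping or correlated regions would violate the independence assumption behind the chi-squared reference distribution.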

Many complex diseases are likely due to a combination of rare and common variants. One can jointly analyze rare and common variants as in the CMC approach [11], but the rare variants must have a large enough effect size to contribute much to the efficiency of the test. Note that we did not consider various groupings for the CMC test because multivariate logistic regression was prohibitively slow to run across the many permutation tests in our simulations; using linear regression instead may be an alternative. In practice, combining some of the rare variant aggregation methods with the CMC method may be most appropriate for many risk loci.

Another promising approach for rare variant analysis is hierarchical modeling [22][25]. We presented a general model in equations 1 and 2 that is essentially hierarchical, and even made some explicit prior assumptions about the variant effects distribution (e.g., a point mass with no variability). Further extending these models with other hyperparameters offers an opportunity to potentially improve upon existing rare variant techniques and is an important area of future research.

As with any genetic analysis, one may need to adjust for potential confounding (e.g., due to population stratification). Dichotomous covariates, or covariates with only a few levels, can easily be included in these rare variant approaches by stratifying on them. Otherwise, the residuals from a logistic or linear regression of the trait on the covariates of interest can be fit with the continuous version of the test. One could also simply use the model in Equation 1 adjusting for covariates; here one might always use linear regression, as it is faster. The score test from linear regression is nearly the same as the score test from logistic regression, with the modification that the information contribution of each subject is weighted by μ̂_i(1 − μ̂_i), where μ̂_i is the fitted probability that subject i is affected, rather than by an assumed constant residual variance as in ordinary linear regression.
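The weighting described above can be made concrete in a short sketch. The function below is an illustrative implementation of a one-degree-of-freedom score test for adding a covariate to a fitted logistic model, not code from the thgenetics package:

```python
import numpy as np

def score_test_logistic(x, y, mu_hat):
    """Score statistic for adding covariate x to a fitted logistic model.
    U = sum (y - mu_hat) * x; Var(U) weights each subject's information
    contribution by mu_hat * (1 - mu_hat) rather than using a constant
    residual variance as in ordinary linear regression."""
    u = np.sum((y - mu_hat) * x)
    w = mu_hat * (1 - mu_hat)
    xbar = np.sum(w * x) / np.sum(w)   # weighted centering of x
    var_u = np.sum(w * (x - xbar) ** 2)
    return u ** 2 / var_u              # compare to chi-squared, 1 df
```

Setting the weights w to a constant recovers the linear-regression score test, which is why the two tests nearly coincide when the fitted probabilities are far from 0 and 1.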

In summary, our simulations suggest that the step-up approach works quite well without requiring a priori information about how to aggregate rare variants for analysis. This agnostic approach was generally among the best under a broad range of scenarios, and should also perform well under disease models different from those considered here. Of course, when one knows the underlying disease model, aggregating rare variants to reflect that information will excel. In practice, however, combining rare variants may require strong and sometimes conflicting assumptions; softening such assumptions with a hierarchical model may prove valuable for rare variant analyses. Software for the approaches considered here is freely available in the R package "thgenetics" on CRAN (http://cran.r-project.org/).

Acknowledgments

Our thanks to Dr. Gary Shaw and the California Department of Public Health for use of the deeply sequenced genetic data.

Footnotes

Competing Interests: The authors have declared that no competing interests exist.

Funding: TJH was supported by National Institutes of Health (NIH) R25CA112355 training grant. NJM was supported by NIH grant R01GM072859 (NIGMS). JSW was supported by NIH grants R01CA88164 and U01CA127298. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Pritchard JK. Are rare variants responsible for susceptibility to complex diseases? Am J Hum Genet. 2001;69:124–137. [PMC free article] [PubMed]
2. Cohen JC, Kiss RS, Pertsemlidis A, Marcel YL, McPherson R, et al. Multiple rare alleles contribute to low plasma levels of HDL cholesterol. Science. 2004;305:869–872. [PubMed]
3. Azzopardi D, Dallosso AR, Eliason K, Hendrickson BC, Jones N, et al. Multiple rare nonsynonymous variants in the adenomatous polyposis coli gene predispose to colorectal adenomas. Cancer Res. 2008;68:358–363. [PubMed]
4. Bodmer W, Bonilla C. Common and rare variants in multifactorial susceptibility to common diseases. Nat Genet. 2008;40:695–701. [PMC free article] [PubMed]
5. Hershberger RE, Norton N, Morales A, Li D, Siegfried JD, et al. Coding sequence rare variants identified in MYBPC3, MYH6, TPM1, TNNC1 and TNNI3 from 312 patients with familial or idiopathic dilated cardiomyopathy. Circ Cardiovasc Genet. 2010. [PMC free article] [PubMed]
6. Dickson SP, Wang K, Krantz I, Hakonarson H, Goldstein DB. Rare variants create synthetic genome-wide associations. PLoS Biol. 2010;8:e1000294. [PMC free article] [PubMed]
7. Mardis ER. The impact of next-generation sequencing technology on genetics. Trends Genet. 2008;24:133–141. [PubMed]
8. Nejentsev S, Walker N, Riches D, Egholm M, Todd JA. Rare variants of IFIH1, a gene implicated in antiviral responses, protect against type 1 diabetes. Science. 2009;324:387–389. [PMC free article] [PubMed]
9. Morgenthaler S, Thilly WG. A strategy to discover genes that carry multi-allelic or mono-allelic risk for common diseases: a cohort allelic sums test (CAST). Mutat Res. 2007;615:28–56. [PubMed]
10. Madsen BE, Browning SR. A groupwise association test for rare mutations using a weighted sum statistic. PLoS Genet. 2009;5:e1000384. [PMC free article] [PubMed]
11. Li B, Leal SM. Methods for detecting associations with rare variants for common diseases: application to analysis of sequence data. Am J Hum Genet. 2008;83:311–321. [PMC free article] [PubMed]
12. Stenson PD, Mort M, Ball EV, Howells K, Phillips AD, et al. The human gene mutation database: 2008 update. Genome Med. 2009;1:13. [PMC free article] [PubMed]
13. Ng PC, Henikoff S. SIFT: predicting amino acid changes that affect protein function. Nucleic Acids Res. 2003;31:3812–3814. [PMC free article] [PubMed]
14. Ferrer-Costa C, Orozco M, de la Cruz X. Sequence-based prediction of pathological mutations. Proteins. 2004;57:811–819. [PubMed]
15. Ramensky V, Bork P, Sunyaev S. Human non-synonymous SNPs: server and survey. Nucleic Acids Res. 2002;30:3894–3900. [PMC free article] [PubMed]
16. Price AL, Kryukov GV, de Bakker PIW, Purcell SM, Staples J, et al. Pooled association tests for rare variants in exon-resequencing studies. Am J Hum Genet. 2010;86:832–838. [PMC free article] [PubMed]
17. Han F, Pan W. A data-adaptive sum test for disease association with multiple common or rare variants. Hum Hered. 2010;70:42–54. [PMC free article] [PubMed]
18. Reed MC, Nijhout HF, Neuhouser ML, Gregory JF, Shane B, et al. A mathematical model gives insights into nutritional and genetic aspects of folate-mediated one-carbon metabolism. J Nutr. 2006;136:2653–2661. [PubMed]
19. Smith DJ, Lusis AJ. The allelic structure of common disease. Hum Mol Genet. 2002;11:2455–2461. [PubMed]
20. Iyengar SK, Elston RC. The genetic basis of complex traits: rare variants or “common gene, common disease”? Methods Mol Biol. 2007;376:71–84. [PubMed]
21. Chun S, Fay JC. Identification of deleterious mutations within three human genomes. Genome Res. 2009;19:1553–1561. [PMC free article] [PubMed]
22. Thomas D, Siemiatycki J, Dewar R, Robins J, Goldberg M, et al. The problem of multiple inference in studies designed to generate hypotheses. American Journal of Epidemiology. 1985;122:1080–1095. [PubMed]
23. Lewinger JP, Conti DV, Baurley JW, Triche TJ, Thomas DC. Hierarchical bayes prioritization of marker associations from a genome-wide association scan for further investigation. Genet Epidemiol. 2007;31:871–882. [PubMed]
24. Witte JS. Genetic analysis with hierarchical models. Genet Epidemiol. 1997;14:1137–1142. [PubMed]
25. Capanu M, Presnell B. Misspecification tests for binomial and beta-binomial models. Statistics in Medicine. 2008;27:2536–2554. [PubMed]
