
Fixing the bridge between biologists and statisticians

Models are wrong... but some are useful (G. Box)!


Genotype experiments: fitting a stability variance model with R

Published on June 6, 2019 · 8 min read

Yield stability is a fundamental aspect in the selection of crop genotypes. The definition of stability is rather complex (see, for example, Annicchiarico, 2002); in simple terms, yield is stable when it does not change much from one environment to another. It is an important trait that helps farmers maintain a good income in most years.

Agronomists and plant breeders are continuously concerned with the assessment of genotype stability; this is accomplished by planning genotype experiments, where a number of genotypes are compared in randomised complete block designs, with three to five replicates. These experiments are repeated across several years and/or locations, in order to measure how the environment influences yield level and the ranking of genotypes.

...
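
A rough sketch of the kind of model such experiments call for (not the code from the post), on a simulated layout; the data frame `dataset` and its column names are made up for illustration:

library(lme4)

# Simulated layout: 5 genotypes, 4 environments, 3 blocks per environment
dataset <- expand.grid(Genotype = paste0("G", 1:5),
                       Block = paste0("B", 1:3),
                       Environment = paste0("E", 1:4))
set.seed(1234)
dataset$Yield <- 5 + rnorm(nrow(dataset))  # placeholder yields

# Baseline model: fixed genotype effects; random environment,
# block-within-environment and genotype-by-environment effects
mod <- lmer(Yield ~ Genotype + (1 | Environment) +
              (1 | Environment:Block) + (1 | Genotype:Environment),
            data = dataset)
summary(mod)

A stability variance model goes one step beyond this baseline, letting the genotype-by-environment variance differ from genotype to genotype, which requires heterogeneous variance structures (available, e.g., in nlme or asreml).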

How do we combine errors, in biology? The delta method

Published on May 25, 2019 · 7 min read

In a recent post I have shown that we can build linear combinations of model parameters (see here). For example, if we have two parameter estimates, say Q and W, with standard errors equal to $\sigma_Q$ and $\sigma_W$, respectively, we can build a linear combination as follows:

$$Z = AQ + BW + C$$

where A, B and C are three coefficients. The standard error for this combination can be obtained as:

...
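
The excerpt is truncated before the formula; for the record, the standard result for a linear combination is $\sigma_Z = \sqrt{A^2 \sigma^2_Q + B^2 \sigma^2_W + 2AB \, \sigma_{QW}}$, where $\sigma_{QW}$ is the covariance between the two estimates. A minimal sketch in R, with made-up numbers:

# Hypothetical estimates and their variance-covariance matrix
Q <- 10.2; W <- 5.8
vcovQW <- matrix(c(0.16, 0.05,
                   0.05, 0.09), nrow = 2)

# Coefficients of the linear combination Z = A*Q + B*W + C
A <- 2; B <- -1; C <- 3

Z <- A * Q + B * W + C
seZ <- sqrt(A^2 * vcovQW[1, 1] + B^2 * vcovQW[2, 2] +
              2 * A * B * vcovQW[1, 2])
c(Z = Z, SE = seZ)  # Z = 17.6, SE ~ 0.728

For nonlinear transformations, the same idea generalises to the delta method proper, e.g. via deltaMethod() in the car package.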

Dealing with correlation in designed field experiments: part II

Published on May 10, 2019 · 12 min read

With field experiments, studying the correlation between the observed traits may not be an easy task. Indeed, in these experiments, subjects are not independent, but grouped by treatment factors (e.g., genotypes or weed control methods) or by blocking factors (e.g., blocks, plots, main-plots). I dealt with this problem in a previous post, where I gave a solution based on traditional methods of data analysis.

In a recent paper, Piepho (2018) proposed a more advanced solution based on mixed models. He presented four example datasets and gave SAS code to analyse them, based on PROC MIXED. I was very interested in those examples, but I do not use SAS. Therefore, I tried to ‘transport’ the models into R, which turned out to be a difficult task. After struggling for a while with several mixed-model packages, I came to an acceptable solution, which I would like to share.

...

Dealing with correlation in designed field experiments: part I

Published on April 30, 2019 · 7 min read

Observations are grouped

When we have recorded two traits on different subjects, we may be interested in describing their joint variability by using Pearson's correlation coefficient. That's fine, although we have to respect some basic assumptions (e.g. linearity) that have been detailed elsewhere (see here). Problems may arise when we need to test the hypothesis that the correlation coefficient is equal to 0. In this case, we need to make sure that all pairs of observations are taken on independent subjects.

...
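
As a minimal illustration with made-up data (not from the post), cor.test() performs exactly this test, and its validity rests on the independence of the pairs:

set.seed(1234)
x <- rnorm(20)            # hypothetical trait 1
y <- 0.5 * x + rnorm(20)  # hypothetical trait 2, correlated with x

# t-based test of H0: rho = 0, assuming independent observations
cor.test(x, y)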

Some everyday data tasks: a few hints with R

Published on March 27, 2019 · 9 min read

We all work with data frames, and it is important to know how to reshape them to meet our needs. I think there are at least four routine tasks that we should be able to accomplish:

  1. subsetting
  2. sorting
  3. casting
  4. melting

Obviously, there is a wide array of possibilities; I’ll just mention a few, which I regularly use.

Subsetting the data

Subsetting means selecting the records (rows) or the variables (columns) that satisfy certain criteria. Let's take the 'students.csv' dataset, which is available in one of my repositories. It is a database of students' marks in a series of exams for different subjects.

...
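
As a minimal illustration of the four tasks with a made-up data frame (the post works on 'students.csv' instead, and reshape2 is just one possible toolkit):

# Hypothetical marks in long format
students <- data.frame(Id = rep(1:3, each = 2),
                       Subject = rep(c("Maths", "Chemistry"), 3),
                       Mark = c(28, 30, 25, 27, 30, 26))

# 1. Subsetting: rows meeting a criterion, selected columns
subset(students, Mark >= 27, select = c(Id, Mark))

# 2. Sorting: by decreasing mark
students[order(-students$Mark), ]

# 3. Casting: from long to wide format
library(reshape2)
wide <- dcast(students, Id ~ Subject, value.var = "Mark")

# 4. Melting: back from wide to long
melt(wide, id.vars = "Id", variable.name = "Subject",
     value.name = "Mark")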

Drowning in a glass of water: variance-covariance and correlation matrices

Published on February 19, 2019 · 3 min read

One of the easiest tasks in R is to get the correlations between each pair of variables in a dataset. As an example, let's take the first four columns of the 'mtcars' dataset, which is available within R. Getting the variances, covariances and correlations is straightforward.

data(mtcars)
matr <- mtcars[,1:4]

#Covariances
cov(matr)
##              mpg        cyl       disp        hp
## mpg    36.324103  -9.172379  -633.0972 -320.7321
## cyl    -9.172379   3.189516   199.6603  101.9315
## disp -633.097208 199.660282 15360.7998 6721.1587
## hp   -320.732056 101.931452  6721.1587 4700.8669
#Correlations
cor(matr)
##             mpg        cyl       disp         hp
## mpg   1.0000000 -0.8521620 -0.8475514 -0.7761684
## cyl  -0.8521620  1.0000000  0.9020329  0.8324475
## disp -0.8475514  0.9020329  1.0000000  0.7909486
## hp   -0.7761684  0.8324475  0.7909486  1.0000000

It’s really a piece of cake! Unfortunately, a few days ago I had a covariance matrix without the original dataset, and I wanted the corresponding correlation matrix. Although this is an easy task as well, at first I was stuck, because I could not find an immediate solution… So I started wondering how I could work it out.

...
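
For the record, here is a minimal sketch of two possible routes, reusing the matr object above (the post walks through the reasoning itself):

V <- cov(matr)

# By hand: r_ij = v_ij / sqrt(v_ii * v_jj)
D <- diag(1 / sqrt(diag(V)))
D %*% V %*% D  # same values as cor(matr), dimnames dropped

# Or with the base R one-liner
cov2cor(V)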

Going back to the basics: the correlation coefficient

Published on February 7, 2019 · 7 min read

A measure of joint variability

In statistics, dependence or association is any statistical relationship, whether causal or not, between two random variables or bivariate data. It is often measured by the Pearson correlation coefficient:

$$\rho_{X,Y} = \textrm{corr}(X,Y) = \frac{\textrm{cov}(X,Y)}{\sigma_X \sigma_Y} = \frac{\sum_{i=1}^{n} \left[ (X_i - \mu_X)(Y_i - \mu_Y) \right]}{n \, \sigma_X \, \sigma_Y}$$

Other measures of correlation can be thought of, such as Spearman's ρ rank correlation coefficient or Kendall's τ rank correlation coefficient.

...
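
As a minimal sketch with made-up data (not from the post), all three measures are available through the method argument of cor():

set.seed(1234)
x <- rnorm(30)
y <- x + rnorm(30)

cor(x, y, method = "pearson")   # Pearson's correlation (formula above)
cor(x, y, method = "spearman")  # Spearman's rank correlation
cor(x, y, method = "kendall")   # Kendall's tau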

My first experience with blogdown

Published on November 15, 2018 · 1 min read

This is my first day at work with blogdown. I must admit it is pretty overwhelming at the beginning …

I thought that it might be useful to write down a few notes, to summarise my progress during the learning process. I do not work with blogdown every day and I tend to forget things quite easily. Therefore, these notes may help me recap how far I have come. And they might also help other beginners speed up their initial steps with such a powerful blogging platform.

...

Sample variance and population variance: which of the two?

Published on November 9, 2018 · 7 min read

Teaching experimental methodology in agriculture-related master courses poses some peculiar problems. One of these is explaining the difference between the sample variance and the population variance. Students usually find it easy to grasp the idea that, as the mean is the ‘center’ of the dataset, it is relevant to measure the average distance to the mean for all individuals in the dataset. Of course, we need to take the sum of squared distances, otherwise negative and positive residuals cancel each other out.

...
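
A minimal sketch with made-up numbers (not from the post) shows the difference in R, where var() returns the sample variance:

x <- c(12, 15, 9, 14, 10)
n <- length(x)

var(x)                    # sample variance: sum of squares / (n - 1)
## [1] 6.5
sum((x - mean(x))^2) / n  # population variance: sum of squares / n
## [1] 5.2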

Is R dangerous? Side effects of free software for biologists

Published on June 8, 2014 · 3 min read

When I started my career in the biological field (that was already 25 years ago), only the luckiest of us had access to very advanced statistical software. Licenses were very expensive and it was not easy to convince the boss that they were really necessary: “Why do you need to spend so much money to perform an ANOVA?”. Indeed, simple one-way or two-way ANOVAs were quite easy to perform, and one of the people in my group had already built the appropriate routines for several designs, using the GW-BASIC language. But I wanted more!

...