MultipleComparisons.Rmd
For the analyses, the following functions of agricolae are used: LSD.test, HSD.test, duncan.test, scheffe.test, waller.test, SNK.test, REGW.test (Hsu, 1996; Steel et al., 1997) and durbin.test, kruskal, friedman, waerden.test and Median.test (Conover, 1999).
For every statistical analysis, the data should be organized in columns. For the demonstration, the agricolae database will be used.
The sweetpotato data correspond to a completely randomized field experiment with plots of 50 sweet potato plants, subjected to the virus effect and to a control without virus (see the reference manual of the package).
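The calls that produced the two values below are not included in this extract; a minimal sketch, assuming the sweetpotato data shipped with agricolae and that the values are the coefficient of variation of the model and the overall mean of yield:

```r
library(agricolae)
data(sweetpotato)
# one-way ANOVA: yield explained by the virus treatment
model <- aov(yield ~ virus, data = sweetpotato)
cv.model(model)          # coefficient of variation of the model (assumed source of the first value)
mean(sweetpotato$yield)  # overall mean of the response (assumed source of the second value)
```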
[1] 17.1666
[1] 27.625
Model parameters: Degrees of freedom and variance of the error:
df<-df.residual(model)
MSerror<-deviance(model)/df
It includes the multiple comparison through the method of the minimum significant difference (Least Significant Difference), (Steel et al., 1997).
# comparison <- LSD.test(yield,virus,df,MSerror)
LSD.test(model, "virus",console=TRUE)
Study: model ~ "virus"
LSD t Test for yield
Mean Square Error: 22.48917
virus, means and individual ( 95 %) CI
yield std r LCL UCL Min Max
cc 24.40000 3.609709 3 18.086268 30.71373 21.7 28.5
fc 12.86667 2.159475 3 6.552935 19.18040 10.6 14.9
ff 36.33333 7.333030 3 30.019601 42.64707 28.0 41.8
oo 36.90000 4.300000 3 30.586268 43.21373 32.1 40.4
Alpha: 0.05 ; DF Error: 8
Critical Value of t: 2.306004
least Significant Difference: 8.928965
Treatments with the same letter are not significantly different.
yield groups
oo 36.90000 a
ff 36.33333 a
cc 24.40000 b
fc 12.86667 c
In the function LSD.test, the multiple comparison was carried out. In order to obtain the probabilities of the comparisons, it should be indicated that groups are not required; thus:
# comparison <- LSD.test(yield, virus,df, MSerror, group=FALSE)
outLSD <-LSD.test(model, "virus", group=FALSE,console=TRUE)
Study: model ~ "virus"
LSD t Test for yield
Mean Square Error: 22.48917
virus, means and individual ( 95 %) CI
yield std r LCL UCL Min Max
cc 24.40000 3.609709 3 18.086268 30.71373 21.7 28.5
fc 12.86667 2.159475 3 6.552935 19.18040 10.6 14.9
ff 36.33333 7.333030 3 30.019601 42.64707 28.0 41.8
oo 36.90000 4.300000 3 30.586268 43.21373 32.1 40.4
Alpha: 0.05 ; DF Error: 8
Critical Value of t: 2.306004
Comparison between treatments means
difference pvalue signif. LCL UCL
cc - fc 11.5333333 0.0176 * 2.604368 20.462299
cc - ff -11.9333333 0.0151 * -20.862299 -3.004368
cc - oo -12.5000000 0.0121 * -21.428965 -3.571035
fc - ff -23.4666667 0.0003 *** -32.395632 -14.537701
fc - oo -24.0333333 0.0003 *** -32.962299 -15.104368
ff - oo -0.5666667 0.8873 -9.495632 8.362299
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
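The full structure of the returned object, with its statistics, parameters, means and comparison components, is listed below; presumably it comes from simply printing the stored object:

```r
# print the stored result to see all of its components
outLSD
```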
$statistics
MSerror Df Mean CV t.value LSD
22 8 28 17 2.3 8.9
$parameters
test p.ajusted name.t ntr alpha
Fisher-LSD none virus 4 0.05
$means
yield std r LCL UCL Min Max Q25 Q50 Q75
cc 24 3.6 3 18.1 31 22 28 22 23 26
fc 13 2.2 3 6.6 19 11 15 12 13 14
ff 36 7.3 3 30.0 43 28 42 34 39 40
oo 37 4.3 3 30.6 43 32 40 35 38 39
$comparison
difference pvalue signif. LCL UCL
cc - fc 11.53 0.0176 * 2.6 20.5
cc - ff -11.93 0.0151 * -20.9 -3.0
cc - oo -12.50 0.0121 * -21.4 -3.6
fc - ff -23.47 0.0003 *** -32.4 -14.5
fc - oo -24.03 0.0003 *** -33.0 -15.1
ff - oo -0.57 0.8873 -9.5 8.4
$groups
NULL
attr(,"class")
[1] "group"
With the function LSD.test we can adjust the probabilities found, for example with the Bonferroni or Holm corrections; for other options see Adjust P-values for Multiple Comparisons, function ‘p.adjust’ in stats (R Core Team, 2020).
LSD.test(model, "virus", group=FALSE, p.adj= "bon",console=TRUE)
Study: model ~ "virus"
LSD t Test for yield
P value adjustment method: bonferroni
Mean Square Error: 22
virus, means and individual ( 95 %) CI
yield std r LCL UCL Min Max
cc 24 3.6 3 18.1 31 22 28
fc 13 2.2 3 6.6 19 11 15
ff 36 7.3 3 30.0 43 28 42
oo 37 4.3 3 30.6 43 32 40
Alpha: 0.05 ; DF Error: 8
Critical Value of t: 3.5
Comparison between treatments means
difference pvalue signif. LCL UCL
cc - fc 11.53 0.1058 -1.9 25.00
cc - ff -11.93 0.0904 . -25.4 1.54
cc - oo -12.50 0.0725 . -26.0 0.97
fc - ff -23.47 0.0018 ** -36.9 -10.00
fc - oo -24.03 0.0015 ** -37.5 -10.56
ff - oo -0.57 1.0000 -14.0 12.90
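The grouping and the pairwise comparison that follow are shown without the calls that produced them. A sketch under the assumption that they come from further LSD.test calls with a different adjustment, for example the Holm correction (outHolm is a hypothetical name):

```r
# assumed calls; the "holm" adjustment is an assumption, not stated in the original
LSD.test(model, "virus", p.adj = "holm", console = TRUE)
outHolm <- LSD.test(model, "virus", group = FALSE, p.adj = "holm")
outHolm$comparison
```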
yield groups
oo 37 a
ff 36 a
cc 24 b
fc 13 c
difference pvalue signif.
cc - fc 11.53 0.0484 *
cc - ff -11.93 0.0484 *
cc - oo -12.50 0.0484 *
fc - ff -23.47 0.0015 **
fc - oo -24.03 0.0015 **
ff - oo -0.57 0.8873
Other comparison tests can be applied, such as Duncan, Student-Newman-Keuls, Tukey and Waller-Duncan. For Duncan, use the function duncan.test; for Student-Newman-Keuls, the function SNK.test; for Tukey, the function HSD.test; for Scheffe, the function scheffe.test; and for Waller-Duncan, the function waller.test. The arguments are the same. waller.test also requires the calculated F value for treatments from the ANOVA; if the model is used as a parameter, this is no longer necessary.
It corresponds to Duncan's test (Steel et al., 1997).
duncan.test(model, "virus",console=TRUE)
Study: model ~ "virus"
Duncan's new multiple range test
for yield
Mean Square Error: 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Alpha: 0.05 ; DF Error: 8
Critical Range
2 3 4
8.9 9.3 9.5
Means with the same letter are not significantly different.
yield groups
oo 37 a
ff 36 a
cc 24 b
fc 13 c
Student, Newman and Keuls helped to improve the Newman-Keuls test of 1939, which was known as the Keuls method (Steel et al., 1997).
# SNK.test(model, "virus", alpha=0.05,console=TRUE)
SNK.test(model, "virus", group=FALSE,console=TRUE)
Study: model ~ "virus"
Student Newman Keuls Test
for yield
Mean Square Error: 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Comparison between treatments means
difference pvalue signif. LCL UCL
cc - fc 11.53 0.0176 * 2.6 20.5
cc - ff -11.93 0.0151 * -20.9 -3.0
cc - oo -12.50 0.0291 * -23.6 -1.4
fc - ff -23.47 0.0008 *** -34.5 -12.4
fc - oo -24.03 0.0012 ** -36.4 -11.6
ff - oo -0.57 0.8873 -9.5 8.4
The REGW procedure applies multiple range tests to all pairwise comparisons in order to obtain confident inequalities (Hsu, 1996).
# REGW.test(model, "virus", alpha=0.05,console=TRUE)
REGW.test(model, "virus", group=FALSE,console=TRUE)
Study: model ~ "virus"
Ryan, Einot and Gabriel and Welsch multiple range test
for yield
Mean Square Error: 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Comparison between treatments means
difference pvalue signif. LCL UCL
cc - fc 11.53 0.0350 * 0.91 22.16
cc - ff -11.93 0.0360 * -23.00 -0.87
cc - oo -12.50 0.0482 * -24.90 -0.10
fc - ff -23.47 0.0006 *** -34.09 -12.84
fc - oo -24.03 0.0007 *** -35.10 -12.97
ff - oo -0.57 0.9873 -11.19 10.06
This studentized range test, created by Tukey in 1953, is known as Tukey's HSD (Honestly Significant Differences) (Steel et al., 1997).
outHSD<- HSD.test(model, "virus",console=TRUE)
Study: model ~ "virus"
HSD Test for yield
Mean Square Error: 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Alpha: 0.05 ; DF Error: 8
Critical Value of Studentized Range: 4.5
Minimun Significant Difference: 12
Treatments with the same letter are not significantly different.
yield groups
oo 37 a
ff 36 ab
cc 24 bc
fc 13 c
outHSD
$statistics
MSerror Df Mean CV MSD
22 8 28 17 12
$parameters
test name.t ntr StudentizedRange alpha
Tukey virus 4 4.5 0.05
$means
yield std r Min Max Q25 Q50 Q75
cc 24 3.6 3 22 28 22 23 26
fc 13 2.2 3 11 15 12 13 14
ff 36 7.3 3 28 42 34 39 40
oo 37 4.3 3 32 40 35 38 39
$comparison
NULL
$groups
yield groups
oo 37 a
ff 36 ab
cc 24 bc
fc 13 c
attr(,"class")
[1] "group"
For unbalanced data, include the argument unbalanced = TRUE in the function. This is valid for group = TRUE or group = FALSE.
A<-sweetpotato[-c(4,5,7),]
modelUnbalanced <- aov(yield ~ virus, data=A)
outUn <-HSD.test(modelUnbalanced, "virus",group=FALSE, unbalanced = TRUE)
print(outUn$comparison)
difference pvalue signif. LCL UCL
cc - fc 11.3 0.252 -8 30.6
cc - ff -9.2 0.386 -28 10.1
cc - oo -12.5 0.196 -32 6.8
fc - ff -20.5 0.040 * -40 -1.2
fc - oo -23.8 0.022 * -43 -4.5
ff - oo -3.3 0.917 -23 16.0
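The grouping below also appears without its generating call; a sketch, assuming a second call with group = TRUE (outUnG is a hypothetical name):

```r
# assumed call; groups are only produced when group = TRUE
outUnG <- HSD.test(modelUnbalanced, "virus", group = TRUE, unbalanced = TRUE)
print(outUnG$groups)
```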
yield groups
oo 37 a
ff 34 a
cc 24 ab
fc 13 b
If this argument is not included, the probabilities of significance will not be consistent with the choice of groups.
Illustrative example of this inconsistency:
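A sketch of calls that would illustrate this, simply omitting unbalanced = TRUE (the exact calls are not shown in the original):

```r
# assumed calls without the unbalanced argument
HSD.test(modelUnbalanced, "virus", group = FALSE)$comparison
HSD.test(modelUnbalanced, "virus", group = TRUE)$groups
```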
difference pvalue
cc - fc 11.3 0.317
cc - ff -9.2 0.297
cc - oo -12.5 0.096
fc - ff -20.5 0.071
fc - oo -23.8 0.033
ff - oo -3.3 0.885
yield groups
oo 37 a
ff 34 ab
cc 24 ab
fc 13 b
Duncan continued the multiple comparison procedures, introducing the criterion of minimizing both types of experimental error; for this, he used Bayes' theorem, obtaining a new test called Waller-Duncan (Waller and Duncan, 1969; Steel et al., 1997).
# variance analysis:
anova(model)
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
virus 3 1170 390 17.3 0.00073 ***
Residuals 8 180 22
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
with(sweetpotato,waller.test(yield,virus,df,MSerror,Fc= 17.345, group=FALSE,console=TRUE))
Study: yield ~ virus
Waller-Duncan K-ratio t Test for yield
This test minimizes the Bayes risk under additive loss and certain other assumptions
......
K ratio 100.0
Error Degrees of Freedom 8.0
Error Mean Square 22.5
F value 17.3
Critical Value of Waller 2.2
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Comparison between treatments means
Difference significant
cc - fc 11.53 TRUE
cc - ff -11.93 TRUE
cc - oo -12.50 TRUE
fc - ff -23.47 TRUE
fc - oo -24.03 TRUE
ff - oo -0.57 FALSE
In another case, invoking only the model object:
outWaller <- waller.test(model, "virus", group=FALSE,console=FALSE)
The resulting object outWaller contains information for other procedures.
names(outWaller)
[1] "statistics" "parameters" "means" "comparison" "groups"
print(outWaller$comparison)
Difference significant
cc - fc 11.53 TRUE
cc - ff -11.93 TRUE
cc - oo -12.50 TRUE
fc - ff -23.47 TRUE
fc - oo -24.03 TRUE
ff - oo -0.57 FALSE
This indicates that the virus effect “ff” is not significantly different from the control “oo”.
outWaller$statistics
Mean Df CV MSerror F.Value Waller CriticalDifference
28 8 17 22 17 2.2 8.7
This method, created by Scheffe in 1959, is very general for all the possible contrasts and their confidence intervals. The confidence intervals for the averages are very broad, resulting in a very conservative test for the comparison between treatment averages (Steel et al., 1997).
# analysis of variance:
scheffe.test(model,"virus", group=TRUE,console=TRUE, main="Yield of sweetpotato\nDealt with different virus")
Study: Yield of sweetpotato
Dealt with different virus
Scheffe Test for yield
Mean Square Error : 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Alpha: 0.05 ; DF Error: 8
Critical Value of F: 4.1
Minimum Significant Difference: 14
Means with the same letter are not significantly different.
yield groups
oo 37 a
ff 36 a
cc 24 ab
fc 13 b
The minimum significant value is very high. If you require the approximate probabilities of comparison, you can use the option group=FALSE.
outScheffe <- scheffe.test(model,"virus", group=FALSE, console=TRUE)
Study: model ~ "virus"
Scheffe Test for yield
Mean Square Error : 22
virus, means
yield std r Min Max
cc 24 3.6 3 22 28
fc 13 2.2 3 11 15
ff 36 7.3 3 28 42
oo 37 4.3 3 32 40
Alpha: 0.05 ; DF Error: 8
Critical Value of F: 4.1
Comparison between treatments means
Difference pvalue sig LCL UCL
cc - fc 11.53 0.0978 . -2 25.1
cc - ff -11.93 0.0855 . -25 1.6
cc - oo -12.50 0.0706 . -26 1.0
fc - ff -23.47 0.0023 ** -37 -9.9
fc - oo -24.03 0.0020 ** -38 -10.5
ff - oo -0.57 0.9991 -14 13.0
In a factorial experiment, the combined effects of the treatments are compared. Comparative tests such as LSD, HSD, Waller-Duncan, Duncan, Scheffe and SNK can be applied.
# modelABC <-aov (y ~ A * B * C, data)
# compare <-LSD.test (modelABC, c ("A", "B", "C"),console=TRUE)
The comparison is the combination of A:B:C.
Data from an RCBD design with a clone x nitrogen factorial. The response variable is yield.
yield <-scan (text = "6 7 9 13 16 20 8 8 9 7 8 8 12 17 18 10 9 12 9 9 9 14 18 21 11 12 11 8 10 10 15 16 22 9 9 9 " )
block <-gl (4, 9)
clone <-rep (gl (3, 3, labels = c ("c1", "c2", "c3")), 4)
nitrogen <-rep (gl (3, 1, labels = c ("n1", "n2", "n3")), 12)
A <-data.frame (block, clone, nitrogen, yield)
head (A)
block clone nitrogen yield
1 1 c1 n1 6
2 1 c1 n2 7
3 1 c1 n3 9
4 1 c2 n1 13
5 1 c2 n2 16
6 1 c2 n3 20
outAOV <-aov (yield ~ block + clone * nitrogen, data = A)
anova (outAOV)
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
block 3 21 6.9 5.82 0.00387 **
clone 2 498 248.9 209.57 6.4e-16 ***
nitrogen 2 54 27.0 22.76 2.9e-06 ***
clone:nitrogen 4 43 10.8 9.11 0.00013 ***
Residuals 24 29 1.2
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
outFactorial <-LSD.test (outAOV, c("clone", "nitrogen"), main = "Yield ~ block + nitrogen + clone + clone:nitrogen",console=TRUE)
Study: Yield ~ block + nitrogen + clone + clone:nitrogen
LSD t Test for yield
Mean Square Error: 1.2
clone:nitrogen, means and individual ( 95 %) CI
yield std r LCL UCL Min Max
c1:n1 7.5 1.29 4 6.4 8.6 6 9
c1:n2 8.5 1.29 4 7.4 9.6 7 10
c1:n3 9.0 0.82 4 7.9 10.1 8 10
c2:n1 13.5 1.29 4 12.4 14.6 12 15
c2:n2 16.8 0.96 4 15.6 17.9 16 18
c2:n3 20.2 1.71 4 19.1 21.4 18 22
c3:n1 9.5 1.29 4 8.4 10.6 8 11
c3:n2 9.5 1.73 4 8.4 10.6 8 12
c3:n3 10.2 1.50 4 9.1 11.4 9 12
Alpha: 0.05 ; DF Error: 24
Critical Value of t: 2.1
least Significant Difference: 1.6
Treatments with the same letter are not significantly different.
yield groups
c2:n3 20.2 a
c2:n2 16.8 b
c2:n1 13.5 c
c3:n3 10.2 d
c3:n1 9.5 de
c3:n2 9.5 de
c1:n3 9.0 def
c1:n2 8.5 ef
c1:n1 7.5 f
This analysis can come from balanced or partially balanced designs. The function BIB.test is for balanced designs, and PBIB.test for partially balanced designs. In the following example, the agricolae data will be used (Joshi, 1987).
# Example linear estimation and design of experiments. (Joshi)
# Institute of Social Sciences Agra, India
# 6 varieties of wheat in 10 blocks of 3 plots each.
block<-gl(10,3)
variety<-c(1,2,3,1,2,4,1,3,5,1,4,6,1,5,6,2,3,6,2,4,5,2,5,6,3,4,5,3,4,6)
Y<-c(69,54,50,77,65,38,72,45,54,63,60,39,70,65,54,65,68,67,57,60,62,59,65,63,75,62,61,59,55,56)
head(cbind(block,variety,Y))
block variety Y
[1,] 1 1 69
[2,] 1 2 54
[3,] 1 3 50
[4,] 2 1 77
[5,] 2 2 65
[6,] 2 4 38
BIB.test(block, variety, Y,console=TRUE)
ANALYSIS BIB: Y
Class level information
Block: 1 2 3 4 5 6 7 8 9 10
Trt : 1 2 3 4 5 6
Number of observations: 30
Analysis of Variance Table
Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
block.unadj 9 467 51.9 0.90 0.547
trt.adj 5 1156 231.3 4.02 0.016 *
Residuals 15 863 57.5
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coefficient of variation: 13 %
Y Means: 60
variety, statistics
Y mean.adj SE r std Min Max
1 70 75 3.7 5 5.1 63 77
2 60 59 3.7 5 4.9 54 65
3 59 59 3.7 5 12.4 45 75
4 55 55 3.7 5 9.8 38 62
5 61 60 3.7 5 4.5 54 65
6 56 54 3.7 5 10.8 39 67
LSD test
Std.diff : 5.4
Alpha : 0.05
LSD : 11
Parameters BIB
Lambda : 2
treatmeans : 6
Block size : 3
Blocks : 10
Replication: 5
Efficiency factor 0.8
<<< Book >>>
Comparison between treatments means
Difference pvalue sig.
1 - 2 16.42 0.0080 **
1 - 3 16.58 0.0074 **
1 - 4 20.17 0.0018 **
1 - 5 15.08 0.0132 *
1 - 6 20.75 0.0016 **
2 - 3 0.17 0.9756
2 - 4 3.75 0.4952
2 - 5 -1.33 0.8070
2 - 6 4.33 0.4318
3 - 4 3.58 0.5142
3 - 5 -1.50 0.7836
3 - 6 4.17 0.4492
4 - 5 -5.08 0.3582
4 - 6 0.58 0.9148
5 - 6 5.67 0.3074
Treatments with the same letter are not significantly different.
Y groups
1 75 a
5 60 b
2 59 b
3 59 b
4 55 b
6 54 b
The usage of the function is: BIB.test(block, trt, Y, test = c("lsd", "tukey", "duncan", "waller", "snk"), alpha = 0.05, group = TRUE). LSD, Tukey, Duncan, Waller-Duncan and SNK can be used. The probabilities of the comparison can also be obtained; it should only be indicated group=FALSE, thus:
out <-BIB.test(block, trt=variety, Y, test="tukey", group=FALSE, console=TRUE)
ANALYSIS BIB: Y
Class level information
Block: 1 2 3 4 5 6 7 8 9 10
Trt : 1 2 3 4 5 6
Number of observations: 30
Analysis of Variance Table
Response: Y
Df Sum Sq Mean Sq F value Pr(>F)
block.unadj 9 467 51.9 0.90 0.547
trt.adj 5 1156 231.3 4.02 0.016 *
Residuals 15 863 57.5
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
coefficient of variation: 13 %
Y Means: 60
variety, statistics
Y mean.adj SE r std Min Max
1 70 75 3.7 5 5.1 63 77
2 60 59 3.7 5 4.9 54 65
3 59 59 3.7 5 12.4 45 75
4 55 55 3.7 5 9.8 38 62
5 61 60 3.7 5 4.5 54 65
6 56 54 3.7 5 10.8 39 67
Tukey
Alpha : 0.05
Std.err : 3.8
HSD : 17
Parameters BIB
Lambda : 2
treatmeans : 6
Block size : 3
Blocks : 10
Replication: 5
Efficiency factor 0.8
<<< Book >>>
Comparison between treatments means
Difference pvalue sig.
1 - 2 16.42 0.070 .
1 - 3 16.58 0.067 .
1 - 4 20.17 0.019 *
1 - 5 15.08 0.110
1 - 6 20.75 0.015 *
2 - 3 0.17 1.000
2 - 4 3.75 0.979
2 - 5 -1.33 1.000
2 - 6 4.33 0.962
3 - 4 3.58 0.983
3 - 5 -1.50 1.000
3 - 6 4.17 0.967
4 - 5 -5.08 0.927
4 - 6 0.58 1.000
5 - 6 5.67 0.891
names(out)
[1] "parameters" "statistics" "comparison" "means" "groups"
rm(block,variety)
The result can be illustrated with the plotting functions: bar.group uses out$groups and bar.err uses out$means.
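A minimal sketch of such a plot, assuming (as the text suggests) that bar.err accepts the means component of the BIB.test result; the variation and ylim values are illustrative:

```r
# illustrative call: bar plot of the adjusted means with their standard errors
bar.err(out$means, variation = "SE", ylim = c(0, 90))
```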
The function PBIB.test (Joshi, 1987) can be used for the lattice and alpha designs.
Consider the following case: construct the alpha design with 30 treatments, 2 repetitions, and a block size equal to 3.
# alpha design
Genotype<-paste("geno",1:30,sep="")
r<-2
k<-3
plan<-design.alpha(Genotype,k,r,seed=5)
Alpha Design (0,1) - Serie I
Parameters Alpha Design
=======================
Treatmeans : 30
Block size : 3
Blocks : 10
Replication: 2
Efficiency factor
(E ) 0.62
<<< Book >>>
The generated plan is plan$book. Suppose that the corresponding observation to each experimental unit is:
yield <-c(5,2,7,6,4,9,7,6,7,9,6,2,1,1,3,2,4,6,7,9,8,7,6,4,3,2,2,1,1, 2,1,1,2,4,5,6,7,8,6,5,4,3,1,1,2,5,4,2,7,6,6,5,6,4,5,7,6,5,5,4)
The data table is constructed for the analysis. In theory, it is presumed that a design is applied and the experiment is carried out; subsequently, the study variables are observed from each experimental unit.
data<-data.frame(plan$book,yield)
# The analysis:
modelPBIB <- with(data,PBIB.test(block, Genotype, replication, yield, k=3, group=TRUE, console=TRUE))
ANALYSIS PBIB: yield
Class level information
block : 20
Genotype : 30
Number of observations: 60
Estimation Method: Residual (restricted) maximum likelihood
Parameter Estimates
Variance
block:replication 3.8e+00
replication 6.1e-09
Residual 1.7e+00
Fit Statistics
AIC 214
BIC 260
-2 Res Log Likelihood -74
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
Genotype 29 69.2 2.39 1.4 0.28
Residuals 11 18.7 1.70
Coefficient of variation: 29 %
yield Means: 4.5
Parameters PBIB
.
Genotype 30
block size 3
block/replication 10
replication 2
Efficiency factor 0.62
Comparison test lsd
Treatments with the same letter are not significantly different.
yield.adj groups
geno10 7.7 a
geno19 6.7 ab
geno1 6.7 ab
geno9 6.4 abc
geno18 6.1 abc
geno16 5.7 abcd
geno26 5.2 abcd
geno8 5.2 abcd
geno17 5.2 abcd
geno29 4.9 abcd
geno27 4.9 abcd
geno11 4.9 abcd
geno30 4.8 abcd
geno22 4.5 abcd
geno28 4.5 abcd
geno5 4.2 abcd
geno14 4.1 abcd
geno23 4.0 abcd
geno15 3.9 abcd
geno2 3.8 bcd
geno4 3.8 bcd
geno3 3.7 bcd
geno25 3.6 bcd
geno12 3.6 bcd
geno6 3.6 bcd
geno21 3.4 bcd
geno24 3.0 bcd
geno13 2.8 cd
geno20 2.7 cd
geno7 2.3 d
<<< to see the objects: means, comparison and groups. >>>
The adjusted averages can be extracted from modelPBIB.
head(modelPBIB$means)
The comparisons:
head(modelPBIB$comparison)
The data on the adjusted averages and their variation can be illustrated with the functions plot.group and bar.err, since the created object is very similar to the objects generated by the multiple comparisons.
Analysis of a balanced 3x3 lattice with 9 treatments and 4 repetitions.
Create the data in a text file, latice3x3.txt, and read it with R:
trt<-c(1,8,5,5,2,9,2,7,3,6,4,9,4,6,9,8,7,6,1,5,8,3,2,7,3,7,2,1,3,4,6,4,9,5,8,1)
yield<-c(48.76,10.83,12.54,11.07,22,47.43,27.67,30,13.78,37,42.37,39,14.46,30.69,42.01,22,42.8,28.28,50,24,24,15.42,30,23.8,19.68,31,23,41,12.9,49.95,25,45.57,30,20,18,43.81)
sqr<-rep(gl(4,3),3)
block<-rep(1:12,3)
modelLattice<-PBIB.test(block,trt,sqr,yield,k=3,console=TRUE, method="VC")
ANALYSIS PBIB: yield
Class level information
block : 12
trt : 9
Number of observations: 36
Estimation Method: Variances component model
Fit Statistics
AIC 265
BIC 298
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
sqr 3 133 44 0.69 0.57361
trt.unadj 8 3749 469 7.24 0.00042 ***
block/sqr 8 368 46 0.71 0.67917
Residual 16 1036 65
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Coefficient of variation: 28 %
yield Means: 29
Parameters PBIB
.
trt 9
block size 3
block/sqr 3
sqr 4
Efficiency factor 0.75
Comparison test lsd
Treatments with the same letter are not significantly different.
yield.adj groups
1 44 a
9 39 ab
4 39 ab
7 32 bc
6 31 bc
2 26 cd
8 18 d
5 17 d
3 15 d
<<< to see the objects: means, comparison and groups. >>>
The adjusted averages can be extracted from modelLattice.
print(modelLattice$means)
The comparisons:
head(modelLattice$comparison)
The function DAU.test can be used for the analysis of the augmented block design. The data should be organized in a table containing the blocks, treatments, and the response.
block<-c(rep("I",7),rep("II",6),rep("III",7))
trt<-c("A","B","C","D","g","k","l","A","B","C","D","e","i","A","B","C","D","f","h","j")
yield<-c(83,77,78,78,70,75,74,79,81,81,91,79,78,92,79,87,81,89,96,82)
head(data.frame(block, trt, yield))
block trt yield
1 I A 83
2 I B 77
3 I C 78
4 I D 78
5 I g 70
6 I k 75
The treatments in each block are:
by(trt,block,as.character)
block: I
[1] "A" "B" "C" "D" "g" "k" "l"
------------------------------------------------------------
block: II
[1] "A" "B" "C" "D" "e" "i"
------------------------------------------------------------
block: III
[1] "A" "B" "C" "D" "f" "h" "j"
With their respective responses:
by(yield,block,as.character)
block: I
[1] "83" "77" "78" "78" "70" "75" "74"
------------------------------------------------------------
block: II
[1] "79" "81" "81" "91" "79" "78"
------------------------------------------------------------
block: III
[1] "92" "79" "87" "81" "89" "96" "82"
Analysis:
modelDAU<- DAU.test(block,trt,yield,method="lsd",console=TRUE)
ANALYSIS DAU: yield
Class level information
Block: I II III
Trt : A B C D e f g h i j k l
Number of observations: 20
ANOVA, Treatment Adjusted
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
block.unadj 2 360 180.0
trt.adj 11 285 25.9 0.96 0.55
Control 3 53 17.6 0.65 0.61
Control + control.VS.aug. 8 232 29.0 1.08 0.48
Residuals 6 162 27.0
ANOVA, Block Adjusted
Analysis of Variance Table
Response: yield
Df Sum Sq Mean Sq F value Pr(>F)
trt.unadj 11 576 52.3
block.adj 2 70 34.8 1.29 0.34
Control 3 53 17.6 0.65 0.61
Augmented 7 506 72.3 2.68 0.13
Control vs augmented 1 17 16.9 0.63 0.46
Residuals 6 162 27.0
coefficient of variation: 6.4 %
yield Means: 82
Critical Differences (Between)
Std Error Diff.
Two Control Treatments 4.2
Two Augmented Treatments (Same Block) 7.3
Two Augmented Treatments(Different Blocks) 8.2
A Augmented Treatment and A Control Treatment 6.4
Treatments with the same letter are not significantly different.
yield groups
h 94 a
f 86 ab
A 85 ab
D 83 ab
C 82 ab
j 80 ab
B 79 ab
e 78 ab
k 78 ab
i 77 ab
l 77 ab
g 73 b
Comparison between treatments means
<<< to see the objects: comparison and means >>>
options(digits = 2)
modelDAU$means
yield std r Min Max Q25 Q50 Q75 mean.adj SE block
A 85 6.7 3 79 92 81 83 88 85 3.0
B 79 2.0 3 77 81 78 79 80 79 3.0
C 82 4.6 3 78 87 80 81 84 82 3.0
D 83 6.8 3 78 91 80 81 86 83 3.0
e 79 NA 1 79 79 79 79 79 78 5.2 II
f 89 NA 1 89 89 89 89 89 86 5.2 III
g 70 NA 1 70 70 70 70 70 73 5.2 I
h 96 NA 1 96 96 96 96 96 94 5.2 III
i 78 NA 1 78 78 78 78 78 77 5.2 II
j 82 NA 1 82 82 82 82 82 80 5.2 III
k 75 NA 1 75 75 75 75 75 78 5.2 I
l 74 NA 1 74 74 74 74 74 77 5.2 I
modelDAU<- DAU.test(block,trt,yield,method="lsd",group=FALSE,console=FALSE)
head(modelDAU$comparison,8)
Difference pvalue sig.
A - B 5.7 0.23
A - C 2.7 0.55
A - D 1.3 0.76
A - e 6.4 0.35
A - f -1.8 0.78
A - g 11.4 0.12
A - h -8.8 0.21
A - i 7.4 0.29
Conover, W. J. (1999). Practical Nonparametric Statistics.
Hsu, J. C. (1996). Multiple Comparisons: Theory and Methods. Chapman & Hall/CRC.
Joshi, D. D. (1987). Linear Estimation and Design of Experiments. Wiley Eastern Limited, New Delhi, India.
R Core Team (2020). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. Available at: https://www.R-project.org/.
Steel, R. G. D., Torrie, J. H., and Dickey, D. A. (1997). Principles and Procedures of Statistics: A Biometrical Approach. The McGraw-Hill Companies, Inc.
Waller, R. A., and Duncan, D. B. (1969). A Bayes Rule for the Symmetric Multiple Comparisons Problem. Journal of the American Statistical Association 64, 1484–1503.