Group Testing Complexity

Authors

Bilder, C. R., Nguyen, M., Yaseen, M., Tebbs, J. M., and McMahan, C. S.

Published

April 22, 2026

Publication details

The American Statistician, 00(0), 1–8, 2026

Links

pdf


Background: Laboratories use group testing (pooled testing) to test high volumes of clinical specimens for pathogens, reducing the number of tests needed. Algorithms are typically compared by their expected number of tests (efficiency), but this measure alone does not account for implementation complexity.

Methods: The authors propose a new measure called “complexity” – the expected number of stages for an individual to be declared positive or negative. They derive expressions for complexity for five group testing algorithms (Dorfman, hierarchical, array testing, array testing with a master pool, and Sterrett) and evaluate them across disease prevalences (p = 0.01–0.10) with sensitivity/specificity of 0.99.
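To make the two measures concrete, here is a minimal sketch of Dorfman testing (in Python rather than the authors' R functions, and assuming a perfect assay, i.e., sensitivity = specificity = 1, rather than the paper's 0.99):

```python
# Dorfman (two-stage) group testing: one pool of size n at prevalence p,
# with every member retested individually if the pool is positive.
# Perfect assay assumed (sensitivity = specificity = 1).

def dorfman_efficiency(p, n):
    """Expected number of tests per individual: one pool test shared by
    n people, plus an individual retest whenever the pool is positive."""
    return 1 / n + (1 - (1 - p) ** n)

def dorfman_complexity(p, n):
    """Expected number of stages to classify an individual: resolved at
    stage 1 when the pool is negative (probability (1 - p)^n), otherwise
    resolved at stage 2 by an individual retest."""
    q = (1 - p) ** n          # P(all n specimens in the pool are negative)
    return 1 * q + 2 * (1 - q)

# At p = 0.01 with pools of 10, Dorfman needs far fewer than one test per
# person, and most individuals are resolved in a single stage.
print(round(dorfman_efficiency(0.01, 10), 3))
print(round(dorfman_complexity(0.01, 10), 3))
```

The paper's derivations additionally fold in imperfect sensitivity and specificity, which raise both quantities slightly; the structure of the calculation is the same.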

Results: No single algorithm was best on both efficiency and complexity. Sterrett testing and array testing with a master pool were among the most efficient but also the most complex (expected stages >2). Array testing had the lowest complexity (e.g., 1.03 at p = 0.01), with Dorfman testing second-lowest. In a LabCorp COVID-19 application (4×4 and 5×5 arrays), array testing had lower complexity than Dorfman testing but up to twice the expected number of tests.
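The 1.03 figure comes from the paper's expressions, which account for the 0.99 sensitivity and specificity; a simplified perfect-assay sketch of array testing without a master pool (in Python, not the authors' R code) shows why its complexity stays so close to 1:

```python
# n x n array testing without a master pool, perfect assay assumed.
# Stage 1 tests every row pool and column pool; only individuals whose
# row AND column pools are both positive go to stage 2 individual testing.

def array_stage2_prob(p, n):
    """P(an individual reaches stage 2). A truly positive individual always
    makes both of its pools positive; a truly negative one needs another
    positive specimen in its row AND another in its column (independent)."""
    p_other = 1 - (1 - p) ** (n - 1)   # some other specimen in the pool is +
    return p + (1 - p) * p_other ** 2

def array_complexity(p, n):
    """Expected number of stages for one individual in an n x n array."""
    return 1 + array_stage2_prob(p, n)

def array_efficiency(p, n):
    """Expected tests per individual: 2n stage-1 pool tests shared by n^2
    people, plus a stage-2 individual test when both pools are positive."""
    return 2 / n + array_stage2_prob(p, n)

# At low prevalence almost no one reaches stage 2, so complexity is near 1.
print(round(array_complexity(0.01, 5), 3))
```

With a perfect assay a positive row always has a matching positive column, so this decoding rule is self-consistent; the paper's version handles the extra cases that imperfect tests create.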

Conclusion: The complexity measure provides a new tool for laboratories to choose group testing algorithms, balancing resource savings against practical implementation constraints. The authors recommend calculating both efficiency and complexity, and note that array testing’s low complexity supports its wider adoption. R functions and a Shiny app are provided for calculations.
