Avoiding model selection bias in small-sample genomic datasets

D Berrar, I Bradbury, Werner Dubitzky

    Research output: Contribution to journal › Article

    36 Citations (Scopus)

    Abstract

    Motivation: Genomic datasets generated by high-throughput technologies are typically characterized by a moderate number of samples and a large number of measurements per sample. As a consequence, classification models are commonly compared based on resampling techniques. This investigation discusses the conceptual difficulties involved in comparative classification studies. Conclusions derived from such studies are often optimistically biased, because the apparent differences in performance are usually not controlled in a statistically stringent framework taking into account the adopted sampling strategy. We investigate this problem by means of a comparison of various classifiers in the context of multiclass microarray data. Results: Commonly used accuracy-based performance values, with or without confidence intervals, are inadequate for comparing classifiers for small-sample data. We present a statistical methodology that avoids bias in cross-validated model selection in the context of small-sample scenarios. This methodology is valid for both k-fold cross-validation and repeated random sampling.
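    The selection bias discussed in the abstract can be illustrated with a minimal, self-contained sketch (not taken from the paper; all names and numbers are illustrative). With pure-noise data — features and labels drawn independently — no rule can genuinely beat 50% accuracy, yet picking the best of many candidate models by its apparent accuracy on a small sample makes the winner look far better than chance:

    ```python
    import random

    random.seed(0)

    n_train, n_test, n_features = 40, 1000, 200

    # Pure-noise data: binary features and labels are independent coin flips,
    # so the true accuracy of ANY classifier is 50%.
    y_train = [random.randint(0, 1) for _ in range(n_train)]
    X_train = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(n_train)]
    y_test = [random.randint(0, 1) for _ in range(n_test)]
    X_test = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(n_test)]

    def accuracy(j, X, y):
        """Accuracy of the trivial candidate model 'predict the label = feature j'."""
        return sum(x[j] == yi for x, yi in zip(X, y)) / len(y)

    # "Model selection": keep the candidate that looks best on the small sample.
    best = max(range(n_features), key=lambda j: accuracy(j, X_train, y_train))
    apparent = accuracy(best, X_train, y_train)   # optimistically biased estimate
    true_perf = accuracy(best, X_test, y_test)    # honest estimate on fresh data

    print(f"apparent accuracy of the selected model: {apparent:.2f}")
    print(f"accuracy of the same model on new data:  {true_perf:.2f}")
    ```

    The maximum over 200 candidates on 40 samples typically lands well above 0.6, while the selected model's accuracy on fresh data stays near 0.5 — the same optimistic bias arises when many classifiers are compared via resampling estimates on a small dataset, which is why the paper argues for a statistically controlled selection framework (e.g. evaluating the selection procedure itself, not just the winning score).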
    Language: English
    Pages: 1245-1250
    Journal: Bioinformatics
    Volume: 22
    Issue number: 10
    DOI: 10.1093/bioinformatics/btl066
    Publication status: Published - May 2006

    Fingerprint: genomics, methodology, sampling, confidence interval, fold, comparison
    Cite this

    @article{ffb62d06adf048c8ad08a64bccef5541,
    title = "Avoiding model selection bias in small-sample genomic datasets",
    author = "D Berrar and I Bradbury and Werner Dubitzky",
    year = "2006",
    month = "5",
    doi = "10.1093/bioinformatics/btl066",
    language = "English",
    volume = "22",
    pages = "1245--1250",
    journal = "Bioinformatics",
    issn = "1367-4803",
    number = "10",

    }

    Berrar, D, Bradbury, I & Dubitzky, W 2006, 'Avoiding model selection bias in small-sample genomic datasets', Bioinformatics, vol. 22, no. 10, pp. 1245-1250. https://doi.org/10.1093/bioinformatics/btl066


    TY - JOUR

    T1 - Avoiding model selection bias in small-sample genomic datasets

    AU - Berrar, D

    AU - Bradbury, I

    AU - Dubitzky, Werner

    PY - 2006/5

    Y1 - 2006/5


    U2 - 10.1093/bioinformatics/btl066

    DO - 10.1093/bioinformatics/btl066

    M3 - Article

    VL - 22

    SP - 1245

    EP - 1250

    JO - Bioinformatics

    T2 - Bioinformatics

    JF - Bioinformatics

    SN - 1367-4803

    IS - 10

    ER -