
Trinity College Dublin, The University of Dublin

Module Descriptor School of Computer Science and Statistics

Module Code: ST7003
Module Name: Design and Analysis of Experiments
Module Short Title: N/a
ECTS: 10
Semester Taught: 2
Contact Hours:
Lecture hours: 20
Lab hours: 4
Tutorial hours: 0
Total hours: 24
Module Personnel: Lecturing staff: Dr. Michael Stuart
Learning Outcomes

On successful completion of this module, students should be able to

  • compare and contrast observational and experimental studies,
  • describe and explain the roles of control, blocking, randomisation and replication in experimentation,
  • explain the advantages of statistical designs for multifactor experiments,
  • describe and explain the genesis of a range of basic experimental design structures,
  • implement and interpret the analysis of variance for a range of experimental designs,
  • describe the models underlying the analysis of variance for a range of basic experimental designs,
  • produce and interpret graphs for data summary and model diagnostics,
  • provide outline descriptions of more elaborate designs and data analyses,
  • outline strategic issues involved in the design and implementation of experiments.
Learning Aims

This module is concerned with the design of data collection exercises for assessing the effects of deliberate changes to factors associated with a process or system, and with the analysis of the data subsequently produced.

To ensure that the experimental changes caused the observed effects, strict control of the process must be maintained. Specifically, the conditions under which the experimentation is conducted must be as homogeneous as possible with regard to all extraneous factors that might affect the process, other than the experimental factors that are deliberately varied. Design principles intended to ensure such control of experimental conditions are advocated.
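
As a concrete illustration of two of these principles, blocking and randomisation, here is a minimal sketch of randomly allocating treatments within homogeneous blocks. It is written in Python rather than the module's software, and the treatment and block labels (T1-T3, B1-B4) are purely hypothetical.

```python
import random

# Hypothetical setup: three treatments compared within each of four blocks.
treatments = ["T1", "T2", "T3"]
blocks = ["B1", "B2", "B3", "B4"]

random.seed(1)  # fixed seed so the allocation can be reproduced
plan = []
for block in blocks:
    # Random run order of the treatments within this (homogeneous) block.
    order = random.sample(treatments, k=len(treatments))
    for unit, treatment in enumerate(order, start=1):
        plan.append((block, unit, treatment))

for row in plan:
    print(row)  # (block, experimental unit within block, assigned treatment)
```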

It should be noted that the nature and degree of control implicit in this description are frequently not attainable in the social sciences, where observational rather than experimental studies are the rule and alternative strategies are required to support the claim that observed changes caused observed effects. Such observational studies are not pursued in this module.

The simplest experiments involve comparison of process results when a single factor is varied over two possible conditions. When more than two factors are involved, issues regarding the most efficient choice of combinations of factor conditions and the ability to detect interactions between factors become important. With many factors and many possible experimental conditions for each factor, the scale of a comprehensive experimental design becomes impractical, and suitable strategies for choosing informative subsets of the full design are needed.
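
To illustrate how a comprehensive (full factorial) design grows and how an informative subset can be chosen, the following is a minimal sketch, in Python rather than the module's software, of a full 2^3 design and its principal half-fraction; the factor names A, B and C are hypothetical.

```python
from itertools import product

# Full 2^3 factorial: every combination of three two-level factors, coded -1/+1.
factors = ["A", "B", "C"]
full_design = [dict(zip(factors, levels))
               for levels in product([-1, 1], repeat=len(factors))]
print(len(full_design))  # 8 runs

# Principal half-fraction 2^(3-1) with defining relation I = ABC (i.e. C = A*B):
# keep only the runs where the product A*B*C equals +1, halving the run count.
half_fraction = [run for run in full_design
                 if run["A"] * run["B"] * run["C"] == 1]
print(len(half_fraction))  # 4 runs; each main effect is aliased with a two-factor interaction
```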

The analysis of data resulting from well-designed experiments is often very simple, and graphical analysis can be very effective. Standard statistical significance tests may be used to confirm that apparent effects are real and not due simply to chance process variation. In cases with more complicated experimental structure, a more advanced technique of statistical inference, Analysis of Variance, may be used. Confidence intervals are used in estimating the magnitude of effects.

Minitab may be used to assist both with design set-up and with the analysis of subsequent data, both graphical and formal. There will be two laboratory sessions involving the use of Minitab.
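
For readers who prefer a scriptable alternative to Minitab, the following minimal sketch (an illustration only, not part of the module) analyses a small randomised block experiment in Python with pandas and statsmodels: an analysis-of-variance table for the block and treatment terms, followed by confidence intervals for the treatment effects. The data values and factor labels are invented.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: a response measured under 3 treatments in each of 4 blocks.
data = pd.DataFrame({
    "block": ["B1", "B1", "B1", "B2", "B2", "B2",
              "B3", "B3", "B3", "B4", "B4", "B4"],
    "treatment": ["T1", "T2", "T3"] * 4,
    "response": [20.1, 22.3, 19.8, 21.0, 23.1, 20.5,
                 19.5, 21.8, 19.0, 20.7, 22.9, 20.1],
})

# Additive randomised-block model: response ~ block + treatment.
model = smf.ols("response ~ C(block) + C(treatment)", data=data).fit()

# Analysis-of-variance table for the block and treatment terms.
print(sm.stats.anova_lm(model, typ=2))

# 95% confidence intervals for the treatment effects (relative to treatment T1).
print(model.conf_int().loc[["C(treatment)[T.T2]", "C(treatment)[T.T3]"]])
```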

Case studies and illustrations from a range of substantive areas will be discussed.

Module Content

The need for experiments

  • experimental and observational studies
  • cause and effect
  • control

Basic design principles for experiments

  • Control
  • Blocking (pairing)
  • Randomisation
  • Replication
  • Factorial structure

Standard designs

  • Randomised blocks
  • Two-level factors
  • Multi-level factors
  • Split units

Analysis of experimental data

  • Exploratory data analysis
  • Effect estimation and significance testing
  • Analysis of variance
  • Statistical models, fixed and random effects
  • Model validation, diagnostics

Review topics

  • Block structure and treatment structure
  • Repeated measures
  • Analysis of Covariance
  • Clinical trials
  • Response surface designs
  • Robust designs
  • Non-Normal errors
  • Strategies for Experimentation
Recommended Reading List

Core material:
Mullins, E., Statistics for the Quality Control Chemistry Laboratory, Royal Society of Chemistry, 2003, particularly Chapters 4-5, 7-8.

Suggested Text:
Montgomery, D.C., Design and Analysis of Experiments, 8th ed., Wiley, 2013.

Alternative Text:
Dean, A. and Voss, D., Design and Analysis of Experiments, Springer, 1999.

Recommended Reading (not essential for examination purposes):
Box, G.E.P., Hunter, J.S. and Hunter, W.G., Statistics for Experimenters, 2nd ed., Wiley, 2005.
Cox, D.R., Planning of Experiments, Wiley, 1958.

Daniel, C., Applications of Statistics to Industrial Experimentation, Wiley, 1976.
Fisher, R.A., The Design of Experiments, Oliver and Boyd, 1935 (8th ed., 1966).
Mead, R., Gilmour, S.G. and Mead, A., Statistical Principles for the Design of Experiments: Applications to Real Experiments, Cambridge University Press, 2012.
Robinson, G.K., Practical Strategies for Experimenting, Wiley, 2000.

Supplemental lecture notes are available on the course website.

Module Prerequisites: Base module ST7001
Assessment Details

% Exam: 100
% Coursework: 0
Description of assessment & assessment regulations.

Module Website
Academic Year of Data: N/a