2008 saw two sets of inter-subject comparability studies: different aims and approaches; different conclusions. The QCA studies were prompted by notions of the 'soft' subjects--the usual suspects; the University of Durham research by notions of the 'harder' STEM subjects--science, technology, engineering and mathematics. All things, it seems, are not equal.
The QCA studies
The QCA report (Inter-subject comparability studies, QCA, February 2008) discusses the relative merits of the two main approaches that can be adopted:
* qualitative: small-scale reviews based on comparative judgements
* quantitative: large-scale statistical analyses
From time to time, concerns are raised about whether the standards required
to achieve success in GCSE and GCE A Levels are the same across
different subjects. The basis of such concerns varies. At their
simplest, they derive from the numbers of candidates succeeding in
the different subjects ...
There are more sophisticated approaches to the use of numbers, but
these still depend on a purely statistical analysis, using measures of
either prior or concurrent attainment to make the comparison.
Typically both approaches are used, the one informing the other. In opting for small-scale 'expert' reviews, the QCA imposed inevitable constraints on the scope of the studies.
Subject experts, with a background in assessment, were employed as
reviewers to analyse assessment materials and candidate work across
two or more cognate subjects, to draw comparisons and highlight
differences in demand ... recruited through a combination of
advertisement and recommendation ... who had experience of teaching
more than one subject, and of teaching at least one subject at A
level. Knowledge of the examination system was an advantage, but
not essential ... each participant had a main subject specialism,
but was able to compare that subject with others in the study. To
avoid bias, the aim was to have the main subjects evenly
represented across the teams, although this was not achieved for
studies 2a and 2b. Lead consultants were also appointed. Their role
was to assist in the development of the various instruments used in
the studies, to advise consultants on subject-specific matters and
lead subject-specific discussions at meetings, and to prepare the
subject-specific parts of the report.
There were two main components of this work on examinations:
* an analysis of specification materials and an evaluation of the
demands of each subject for each qualification
* a comparison of the work of candidates within each subject
The first constraint is the restriction to 'cognate' subjects: in our case A Level English literature, history and media studies (Study 2b), evidently assumed to be much the same sort of thing, though history is also paired with geography (Study 1a), so seemingly not the same sort of thing after all. There may be some sense in the notion of 'cognate' subjects, most obviously in the languages, not reviewed in this exercise, though if we were to pair, say, Spanish and Mandarin we might have to think again. Similarly, if less so, in the sciences: biology, chemistry and physics (Study 1b), though biology was also paired with psychology and sociology (Study 2a), so perhaps not the same sort of thing as the physical sciences. But if the object of the exercise is to seek 'differences in demand', then the notion of 'cognate' subjects is a nonsense to start with.
And it altogether misses the point. What people want to know are not marginal differences within subject groupings, but any significant differences across subject groupings: are mathematics and physics necessarily harder than other subjects; is it evidently easier to get 'good' grades in some subjects--are they really 'soft' options?
Finding 'that overall there was no clear evidence of significant differences in demand between the three subjects in our group' doesn't answer that question; they might all be 'soft' subjects. …