Academic journal article New Zealand Journal of Psychology

Testing a 'Trilemma' Instrument for Vocational-Interest Assessment

How an individual ranks items by preference, or on any other form of subjective scale, can be assessed efficiently by presenting the items three at a time for three-way, forced-ranking decisions. We propose that these choices are most informative if each selection of items is guided by their empirical locations within a multi-dimensional model. We designed a questionnaire on this principle to elicit vocational preferences from secondary-school students. Ninety-nine items, each specifying a vocational interest or activity, appear in 66 "trilemmas", so each item is ranked twice. Because the choices are not constrained by a pre-determined theory of occupational cognition, the data are not restricted to a single form of analysis, though here they are interpreted using multivariate regression within the multi-dimensional model. In a first stage of validating the questionnaire, responses were simulated on the basis of existing data. The scores accruing to each item from each participant's simulated trilemma responses were compared against actual item rankings. In a second stage, the questionnaire was administered to 299 students, who found it easy to complete. The underlying structure of the responses matched the results of earlier research. The results for individual participants were meaningful, and replicated data from 17 students who had previously ranked the same items with a more conventional task.

**********

The process of constructing a Likert scale is a standard part of a university psychology curriculum. Psychometric tests of one form or another are ubiquitous. But when a self-report questionnaire asks untrained participants to assign numerical values to each of a list of statements (e.g. 1 to 5 on a scale of 'strongly disagree' to 'strongly agree'), the task does not come naturally. Response-style problems such as acquiescence bias and halo effects may affect the data, even when the task is stripped of numerical nuances and reduced to the simplicity of binary choices (i.e. when the questionnaire is a checklist of items to be endorsed or rejected).

Pairwise comparisons (forced binary choices) are one procedure adopted as a remedy for these problems; participants consider every pair of items, and must choose one, rejecting the other (e.g. Bechtel, 1976). Summing the number of 'acceptance' decisions for a given item, over all such pairs, gives it a numerical value on a fine-grained scale. However, the number of choices grows rapidly as the number of items increases: for 99 items there are 4851 pairs. Thus Schucker (1959) suggested presenting the items three at a time to be ranked for preference, so that in each triad one is accepted, one is rejected, and the third is in the middle. This is equivalent to eliciting three pairwise comparisons at once. All possible comparisons among 99 items are covered by 1617 of these three-way forced-ranking choices.
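These counts follow directly from the combinatorics. The short sketch below (variable names are illustrative; only the item total of 99 comes from the questionnaire described here) reproduces the figures quoted above:

```python
from math import comb

n_items = 99

# A full pairwise-comparison design needs every distinct pair of items.
n_pairs = comb(n_items, 2)                 # 99 * 98 / 2 = 4851

# Each three-way forced ranking resolves three pairwise comparisons at once,
# so a complete triad design needs one third as many presentations.
n_triads = n_pairs // 3                    # 4851 / 3 = 1617

# Each item is paired with the 98 others, two at a time per triad,
# so it appears in 49 triads of the complete design.
appearances_per_item = (n_items - 1) // 2  # 49

print(n_pairs, n_triads, appearances_per_item)  # 4851 1617 49
```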

This is a complete design: each item would appear 49 times, being ranked against different alternatives each time. A logical extension of this line of thought is to reduce the number of three-way forced-ranking choices by using balanced incomplete designs, in which each item occurs in (for instance) two of these 'trilemmas', in a different context each time. Such a design would provide scores for an item ranging from -2 (if the participant rejects it on both occurrences) up to +2 (if it is accepted both times). The procedure implies that the score for a given item is meaningful only relative to other items from the same participant. There are no absolute scales on which a participant's score for the item (or a combination of items) can be compared with scores from other participants. In other words, the data are ipsative; we return to the questions this raises in the Discussion.
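A minimal sketch of how such ipsative scores could be accumulated is given below. The +1/0/-1 weights for the accepted, middle and rejected items are an assumption consistent with the -2 to +2 range described above, and the function name and item labels are illustrative rather than taken from the questionnaire's actual scoring procedure:

```python
from collections import defaultdict

def score_trilemmas(responses):
    """Accumulate ipsative item scores from three-way forced rankings.

    `responses` is a list of (accepted, middle, rejected) item labels,
    one tuple per trilemma answered by a single participant.
    Assumed weights: accepted = +1, middle = 0, rejected = -1, so an item
    appearing in two trilemmas ends up with a score between -2 and +2.
    """
    scores = defaultdict(int)
    for accepted, middle, rejected in responses:
        scores[accepted] += 1
        scores[middle] += 0   # the middle choice leaves the score unchanged
        scores[rejected] -= 1
    return dict(scores)

# Example: one participant's answers to two hypothetical trilemmas.
responses = [
    ("nursing", "accountancy", "forestry"),
    ("nursing", "journalism", "accountancy"),
]
print(score_trilemmas(responses))
# {'nursing': 2, 'accountancy': -1, 'forestry': -1, 'journalism': 0}
```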

Elsewhere we have argued that such choices should be optimised to make each one as stark and as informative as possible (Bimler et al., 2005). A trilemma is non-optimal if raters' preferences for two of the items in it are highly correlated. …
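One way to operationalise this screening criterion, sketched here under our own assumptions rather than as the procedure of Bimler et al. (2005), is to flag any candidate trilemma containing a pair of items whose preference ratings are correlated above a chosen cut-off (the 0.6 threshold and all names below are illustrative):

```python
from itertools import combinations
import numpy as np

def flag_nonoptimal(triads, corr, threshold=0.6):
    """Return the triads containing at least one highly correlated item pair.

    `triads` is an iterable of 3-tuples of item indices; `corr` is an
    item-by-item matrix of preference correlations estimated from prior data.
    """
    flagged = []
    for triad in triads:
        if any(abs(corr[i, j]) > threshold for i, j in combinations(triad, 2)):
            flagged.append(triad)
    return flagged

# Toy example with four items; items 0 and 1 are nearly redundant.
corr = np.array([
    [1.0, 0.9, 0.1, 0.0],
    [0.9, 1.0, 0.2, 0.1],
    [0.1, 0.2, 1.0, 0.3],
    [0.0, 0.1, 0.3, 1.0],
])
print(flag_nonoptimal([(0, 1, 2), (0, 2, 3)], corr))  # [(0, 1, 2)]
```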
