Though this reported case study and the results of other studies have illustrated the strengths of the pairwise comparison technique for determining a combined metric, individual interface developers can generate widely differing percentage contributions under either method, so some mechanism for averaging the values across a group of developers is necessary. Averaging the percentage contributions generated by numerical assignment tends to produce little discrimination between metrics, whereas averaging pairwise-compared data tends to produce greater differences between them. Achieving greater discrimination is only valuable, however, if it correctly represents the real views of the developers. In this respect, the ability to determine consistency ratios provides a quantitative way in which developer assessments can be analysed, individual views eliminated if necessary, and the overall metric substantiated. A key feature of the pairwise comparison technique is that developers and evaluators are forced to make trade-offs between desired quality characteristics, thereby generating a higher consensus about their relative importance.
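The weight derivation and consistency-ratio check referred to above can be sketched as follows. This is a minimal illustration of Saaty's pairwise comparison calculation using the common geometric-mean approximation of the principal eigenvector; the matrix values and the quality characteristics they compare are hypothetical, not data from the case study.

```python
import math

# Reciprocal pairwise comparison matrix for three hypothetical quality
# characteristics (e.g. learnability, efficiency, satisfaction), where
# A[i][j] is the judged importance of characteristic i relative to j.
A = [
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
]
n = len(A)

# Priority weights: geometric mean of each row, normalised to sum to 1.
gm = [math.prod(row) ** (1.0 / n) for row in A]
weights = [g / sum(gm) for g in gm]

# Estimate the principal eigenvalue lambda_max from the product A @ w.
Aw = [sum(A[i][j] * weights[j] for j in range(n)) for i in range(n)]
lambda_max = sum(Aw[i] / weights[i] for i in range(n)) / n

# Consistency index and consistency ratio; Saaty's random index for
# n = 3 is 0.58.
ci = (lambda_max - n) / (n - 1)
cr = ci / 0.58

print(f"weights = {[round(w, 3) for w in weights]}, CR = {cr:.3f}")
```

A consistency ratio below 0.10 is conventionally taken as acceptably consistent; a higher value flags a developer whose individual judgements may need to be reviewed or eliminated before the group average is formed.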
Lee, H. X., Choong, Y. and Salvendy, G. (1997), "A proposed index of usability: a method for comparing the relative usability of different software systems", Behaviour & Information Technology, 16 (4/5), pp. 267-278.
Karlsson, J. (1996), "Software requirements prioritizing", Proceedings of the Second International Conference on Requirements Engineering (ICRE'96), IEEE Computer Society Press, pp. 110-116.
Saaty, T. L. (1980), "The Analytic Hierarchy Process", McGraw-Hill.
Smith, A. and Dunckley, L. (1998), "Using the LUCID method to optimise the acceptability of shared interfaces", Interacting with Computers, 9, pp. 335-345.
Smith, A. and Dunckley, L. (1999), "Importance of collaborative design in computer interface design", Contemporary Ergonomics, 99, Taylor and Francis.