Effects of Different Response Formats in Measuring Educational Research Literacy

By Sandra Schladitz, Jana Groß Ophoff, and Markus Wirtz | Journal for Educational Research Online, May 1, 2017

1. Theoretical background

1.1 Educational Research Literacy

Educational Research Literacy (ERL; Groß Ophoff, Schladitz, Lohrmann, & Wirtz, 2014) is defined as the ability to purposefully access, comprehend, and reflect on scientific information, as well as to apply the resulting conclusions to arising problems (Shank & Brown, 2007). This Engagement with Research can be distinguished from Engagement in Research, the latter describing active participation in the scientific community by generating new knowledge (Borg, 2010). Several aspects are relevant for ERL: Information Literacy describes the ability to generate appropriate research questions and to use resources effectively to find relevant information to answer those questions (Blixrud, 2003). The ability to read and interpret findings, especially in quantitative domains, is referred to as Statistical Literacy (Watson & Callingham, 2003). The final important step is being able to reflect on and critically evaluate the results, which can be described as Evidence-Based Reasoning (Brown, Nagashima, Fu, Timms, & Wilson, 2010). Students and professionals with high competency levels in these aspects are able to base their decision-making on well-founded arguments and to keep up with the constantly changing knowledge society (Grundmann & Stehr, 2012). The project LeScEd - Learning the Science of Education (Schladitz et al., 2013) focuses on conceptualizing and measuring ERL to gain better insight into the structure and levels of this competency. This provides the basis for evaluating the success of (university) education, because it makes it possible to trace the development of these abilities over the course of students' professional lives.

An important factor in competency assessment is the use of an appropriate response format that does not distort the results. The current study focuses on two commonly used response formats, which require participants either to choose the correct response from a given set of options (multiple-choice [MC] format) or to formulate a response without any given options (free-response [FR] format). These different demands may correspond with different underlying cognitive processes during item processing (Hancock, 1994; Martinez, 1999), e.g., the depth at which the content is processed. In Bloom's taxonomy of learning domains (Anderson et al., 2001), the most basic domain is simply remembering information, while creating information is the highest. This corresponds with the demands of the response formats: memory performance can easily be tested with MC items, whereas a production task necessitates an FR format. Another distinction between the response formats is the inherent guessing probability. Because MC items present response options, they allow the test taker to simply recognize the correct response even if they could not have produced it freely without the prompt. In the worst case, participants simply guess correctly, and the results are distorted because responses are based on luck instead of actual knowledge. In sum, different response formats are appropriate in different situations, depending on content, learning domain, and risk of guessing.
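As a rough, purely illustrative sketch of this guessing argument (not part of the original study), the following Python simulation contrasts an idealized MC score, in which unknown items can still be answered correctly by chance, with an idealized FR score, in which they cannot; the item count, number of response options, and the true_knowledge parameter are assumptions chosen only for this example.

import random

def simulated_mc_score(true_knowledge, n_items=1000, n_options=4, seed=0):
    """Idealized MC score: unknown items may still be answered correctly by guessing."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_items):
        if rng.random() < true_knowledge:
            # The test taker genuinely knows the item.
            correct += 1
        elif rng.random() < 1 / n_options:
            # Lucky guess among the n_options response alternatives.
            correct += 1
    return correct / n_items

def simulated_fr_score(true_knowledge, n_items=1000, seed=0):
    """Idealized FR score: only genuinely known items are answered correctly."""
    rng = random.Random(seed)
    return sum(rng.random() < true_knowledge for _ in range(n_items)) / n_items

if __name__ == "__main__":
    # With four options, the expected MC score is p + (1 - p) / 4,
    # e.g. 0.5 + 0.5 * 0.25 = 0.625 for a test taker who knows half the content,
    # while the expected FR score stays at 0.5.
    print("MC score:", simulated_mc_score(0.5))
    print("FR score:", simulated_fr_score(0.5))

Under these simplifying assumptions, a test taker who knows half the content is expected to score about 62.5% on the MC version but only about 50% on the FR version, which illustrates why raw scores from the two formats cannot be compared directly.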

The purpose of this paper is to examine issues related to the use of different response formats in assessing ERL. As some testing situations can be considered high-stakes testing, with future educational or career decisions depending on the test results (Powell, 2012; Wilson, 2007), the main concern is objective performance measurement. If different tests with different item formats are used but one format is generally more difficult than the other, the comparability of test results may be severely limited. Even though large-scale competency assessments such as the PISA studies (Programme for International Student Assessment; OECD, 2016) use the same tasks, and therefore the same response formats, for all participants, smaller assessments such as examinations at different universities may employ different formats and thereby give an advantage to some test takers while impairing the performance of others (Becker & Watts, 2001). …
