Academic journal article Memory & Cognition

Using a Model of Hypothesis Generation to Predict Eye Movements in a Visual Search Task

Article excerpt

Published online: 18 September 2014

© Psychonomic Society, Inc. 2014

Abstract We used a model of hypothesis generation (called HyGene; Thomas, Dougherty, Sprenger, & Harbison, 2008) to make predictions regarding the deployment of attention (as assessed via eye movements) afforded by the cued recall of target characteristics before the onset of a search array. On each trial, while being eyetracked, participants were first presented with a memory prompt that was diagnostic regarding the target's color in a subsequently presented search array. We assume that the memory prompts led to the generation of hypotheses (i.e., potential target characteristics) from long-term memory into working memory to guide attentional processes and oculomotor behavior. However, given that multiple hypotheses might be generated in response to a prompt, it has been unclear how the focal hypothesis (i.e., the hypothesis that exerts the most influence on search) affects search behavior. We tested two possibilities using first-fixation data, with the assumption that the first item fixated within a search array reflected the focal hypothesis. We found that a model assuming that the first item generated into working memory guides overt attentional processes was most consistent with the data at both the aggregate and single-participant levels of analysis.
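To make the logic of this model comparison concrete, the following sketch (not part of the original article, and not HyGene's actual implementation) simulates two candidate rules for selecting a focal hypothesis: Rule A, in which the first hypothesis generated into WM guides the first fixation, and Rule B, in which the most strongly activated hypothesis held in WM does so. The color set, activation values, WM capacity, and sampling scheme are purely illustrative assumptions.

```python
import random
from collections import Counter

# Illustrative (hypothetical) LTM activation strengths for candidate target
# colors, conditioned on a memory prompt. Not taken from the article.
ACTIVATIONS = {"red": 0.6, "green": 0.3, "blue": 0.1}

def generate_hypotheses(activations, wm_capacity=2, rng=random):
    """Sample hypotheses into WM without replacement, in proportion to their
    activation, until a (hypothetical) WM capacity is reached."""
    remaining = dict(activations)
    generated = []
    while remaining and len(generated) < wm_capacity:
        colors, weights = zip(*remaining.items())
        pick = rng.choices(colors, weights=weights, k=1)[0]
        generated.append(pick)
        del remaining[pick]
    return generated

def focal_first_generated(generated, activations):
    """Rule A: the first hypothesis generated into WM guides the first fixation."""
    return generated[0]

def focal_most_active(generated, activations):
    """Rule B: the most strongly activated hypothesis in WM guides the first fixation."""
    return max(generated, key=activations.get)

def predicted_first_fixations(rule, n_trials=10_000, seed=1):
    """Monte Carlo estimate of where the first fixation lands under a given rule."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n_trials):
        generated = generate_hypotheses(ACTIVATIONS, rng=rng)
        counts[rule(generated, ACTIVATIONS)] += 1
    return {color: counts[color] / n_trials for color in ACTIVATIONS}

if __name__ == "__main__":
    print("Rule A (first generated):", predicted_first_fixations(focal_first_generated))
    print("Rule B (most active):   ", predicted_first_fixations(focal_most_active))
```

Under these assumptions the two rules make distinguishable predictions: Rule A spreads first fixations roughly in proportion to each color's generation probability, whereas Rule B concentrates them almost entirely on the most strongly activated color. This kind of contrast in predicted first-fixation distributions is what the authors evaluate against the eyetracking data.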

Keywords Attention · Working memory · Memory · Hypothesis generation · Visual search

In the present work, we examined the interactive relationship between information search and the processes involved with working memory (WM) and long-term memory (LTM) retrieval. In recent years, much work in attention has focused on the finding that the contents of WM can influence visual search in predictable ways. For example, Soto and colleagues have shown that representations in WM can bias visual search toward objects in the environment that match or are related to representations held in WM, with other researchers finding similar results (e.g., Downing, 2000; Huang & Pashler, 2007b; Soto, Heinke, Humphreys, & Blanco, 2005; Soto & Humphreys, 2007; although see Woodman & Luck, 2007). This bias typically manifests in increased visual search reaction times (RTs) when an item to be maintained in WM is subsequently presented as a distractor in a search array, relative to when the WM item is not presented in the search array. However, our present work differs from this past work in two ways.

First, prior work has examined top-down guided search using the match-to-sample or a similar paradigm (e.g., Olivers, 2009; Soto et al., 2005; Woodman & Luck, 2007), focusing on tasks in which the participant was given a particular representation to hold in WM at the start of a trial. For example, Soto and colleagues have typically provided a to-be-memorized colored shape (e.g., red circle), known as the WM item, followed by a search array with the WM item either present in or absent from the search array. Participants then performed a recognition task to ensure that the WM item was maintained throughout the course of the trial. These studies focused on how an item that has just been placed into WM biases attentional processes. In contrast, in the present work our participants were required to generate (from LTM) their own representations to be held in WM during visual search. This might seem like a trivial change, but it complicates the process considerably, because participants were free to generate and maintain several possible representations.

Because the content of WM was based on retrieval from LTM, any biases in the retrieval process could have cascading effects on WM and, consequently, on visual search (see Olivers, 2011). Furthermore, this raises questions about how participants would select among competing representations to help guide search when multiple representations are retrieved from LTM. This issue has been explored recently by Olivers, Peters, Houtkamp, and Roelfsema (2011), who reviewed findings from the attention literature suggesting that only one representation can guide attention and that this representation is used as a search template, even though other representations (accessory memory items) may be maintained in WM. …
