Academic journal article North American Journal of Psychology

Object Appearance and Scene Viewpoint Cannot Be Recognized in Conjunction


Humans possess a remarkable ability to recognize previously viewed pictures. In the most dramatic demonstrations of this ability, observers study thousands of pictures and are later able to discriminate studied from non-studied pictures on the basis of visual details alone, at rates far exceeding chance (e.g., Brady, Konkle, Alvarez & Oliva, 2008; Konkle, Brady, Alvarez & Oliva, 2010a, 2010b; Standing, 1973; Standing, Conezio & Haber, 1970). What is the nature of the representations underlying picture recognition? Intuition might suggest that picture-like, iconic representations must underlie performance on picture recognition tests, especially when the recognition test requires observers to remember subtle visual details. However, it is widely agreed that representations in visual long-term memory (VLTM) lack the metric precision of picture-like, iconic representations (e.g., Hollingworth, 2008; Intraub, 1997; Konkle et al., 2010a, 2010b; Simons & Levin, 1997; Varakin & Loschky, 2010). Thus, research on the nature of representations in VLTM seems clear on two points: 1) visual memory contains detailed information about the visual appearance of objects from previously viewed pictures, but 2) not as many details as iconic representations would contain. However, extant research is less clear on how information in VLTM is organized in comparison to the picture from which the information was originally obtained. After all, pictures contain a variety of features that are organized in a particular way. It may be that representations underlying performance on picture recognition tests lack picture-like metric precision, but retain picture-like organization. Does VLTM maintain representations whose features are functionally organized in much the same way as the visual features were organized in the picture, albeit without the precision of an iconic representation? The current experiments seek to extend recent results that are relevant to this issue.

The basic issue is how one kind of information from a picture (e.g., the appearance of a particular object) is connected in VLTM with another piece of information from the same picture (e.g., the overall viewpoint of the picture). Some results suggest that memory for an object's appearance is linked in some way with other contextual information about the scene in which the object appeared. For example, if objects are originally studied within the context of a larger scene, performance on a subsequent object recognition memory test is better when the same context is present, rather than absent, at test (Hayes, Nadel & Ryan, 2007; Hollingworth, 2006). These results make it clear that object representations in visual memory are connected with a scene context. However, the nature of the connection is not clear.

There are two general ways in which information in memory may be connected: integration and association (Hayes et al., 2007; Varakin & Loschky, 2010). If two things are integrated, they are effectively fused: accessing one means accessing the other, and losing access to one means losing access to the other. If two things are merely associated, one points to the other, but they are not fused: accessing one does not entail accessing the other, and losing access to one does not imply losing access to the other. Still, by virtue of association, being presented with one piece of information should help observers retrieve the other. As reviewed by Varakin and Loschky (2010), the reduction in recognition performance that results from changing an object's background context from study to test can be explained in terms of association--integration is not necessary (see also Hanna & Remington, 1996).

Varakin and Loschky (2010) reported a series of experiments testing whether object appearance information and viewpoint information are integrated in VLTM. In physical pictures, object appearance and viewpoint are integrated, so if representations in memory of previously viewed pictures are picture-like, then object appearance and viewpoint would be integrated in VLTM too. …
