Canadian Journal of Experimental Psychology

The Brain's Representations May Be Compatible with Convolution-Based Memory Models

A major goal in memory research, spanning psychology, neuroscience, and artificial intelligence, has been to model memory for associations (e.g., CAT-DOG, BRIDGE-LAMPPOST). Distributed memory models that have been tested on human memory behaviour typically assume that items are represented as vectors, where each dimension of the vector stands for the value of a feature of the item; those features, however, are usually not specified, but conceptualised as abstract. Indeed, modellers have typically made no attempt to derive item representations from real-world features, and may even have implicitly assumed that no such derivation exists.

Such models have used two major mathematical vector operations (or their close relatives): the matrix outer product (e.g., Anderson, 1970; Humphreys, Bain, & Pike, 1989; Pike, 1984; Rumelhart, Hinton, & Williams, 1986) and convolution (e.g., Longuet-Higgins, 1968; Metcalfe Eich, 1982; Murdock, 1982; Plate, 1995, 2003). In the simplest matrix model, an association of two item vectors is encoded as the outer product of the two vectors, and associations are stored by summing those outer products into a memory matrix. Alternatively, in a convolution-based model, an association is encoded by applying the convolution operation to the two vectors representing a pair of items, which itself results in a vector. In the case of circular convolution (defined below, in Equation 6), the association even has the same dimensionality as the item vectors (Plate, 1995). Those convolutions are then summed into a cumulative memory vector.
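As a concrete, purely illustrative sketch of the two encoding schemes, the following NumPy code encodes a single CAT-DOG association both ways; the dimensionality, the normal distribution of element values, and all variable names are assumptions made here for illustration, not details taken from the models cited above.

import numpy as np

def outer_product_encode(a, b):
    # Matrix model: the association of two item vectors is their outer product;
    # associations are stored by summing these outer products into a memory matrix.
    return np.outer(a, b)

def circular_convolution(a, b):
    # Convolution model: circular convolution of the two item vectors, computed
    # as elementwise multiplication in the Fourier domain; the result is a vector
    # with the same dimensionality as the items.
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

n = 64
rng = np.random.default_rng(0)
# Noise-like item vectors: each element drawn independently from N(0, 1/n).
cat, dog = rng.normal(0.0, 1.0 / np.sqrt(n), size=(2, n))

memory_matrix = outer_product_encode(cat, dog)   # an n-by-n matrix
memory_vector = circular_convolution(cat, dog)   # an n-dimensional vector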

Each model mechanism has both strengths and weaknesses (for discussions, see, e.g., Pike, 1984; Plate, 1995). Although we will not definitively decide between these model mechanisms here, we present one line of reasoning that suggests convolution may be neurally plausible. This addresses one particular characteristic of convolution models that has been flagged as a potential weakness. That is, convolution (unlike matrix outer-product) will only work if item representations are "noise-like." This term means that element values are not statistically related to one another; in technical terms, the auto-correlation of values across vector indices must be nearly zero (except for a single value of 1 at lag = 0; this is known as a Kronecker delta vector). Vectors with patterns of values that have this property approximate what is called "white noise." Noise-like representations are typically generated in models by randomly assigning each element of the vector a value drawn from a normal distribution (Plate, 1995), the key point being that each vector element is drawn completely independently from all other element values.
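The noise-like requirement can be illustrated with a short, hypothetical check (not part of the article): generate an item vector with independent normal elements and verify that its circular auto-correlation approximates a Kronecker delta, that is, 1 at lag 0 and close to 0 at every other lag.

import numpy as np

n = 1024
rng = np.random.default_rng(1)
# Each element drawn independently from a normal distribution (Plate, 1995).
item = rng.normal(0.0, 1.0 / np.sqrt(n), size=n)

# Circular auto-correlation computed via the power spectrum (Wiener-Khinchin),
# normalised so that the value at lag 0 is exactly 1.
power = np.abs(np.fft.fft(item)) ** 2
autocorr = np.real(np.fft.ifft(power))
autocorr /= autocorr[0]

print(autocorr[0])                 # 1.0 at lag 0
print(np.abs(autocorr[1:]).max())  # near 0 at all other lags: approximately a Kronecker delta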

However, if the information people remember derives from the natural world, there is no a priori reason to assume that representations of that information will be anything close to noise-like. Consider that naturalistic stimuli (like photographs of the real world) are not noise-like but, in fact, highly auto-correlated. Specifically, naturalistic signals tend to have power spectra of the form P(f) ∝ 1/f^α, also known as "coloured noise," where P and f refer to power (amplitude squared) and frequency, respectively, and 0 < α < 2 (Field, 1987). White noise would have α = 0; in contrast, naturalistic stimuli tend to have lower-frequency components that are much larger (overrepresented) than higher frequencies. Thus, one could criticise convolution as implausible and impractical because it is unsuited to the statistical properties of real-world information. Alternatively, one could ask how the brain generates noise-like representations from information (e.g., stimuli) that contains auto-correlations.
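To make the contrast concrete, the sketch below (again an illustration under assumed parameters, not material from the article) synthesises signals with power spectra of the form 1/f^α by assigning each frequency an amplitude of f^(-α/2) and a random phase; α = 0 gives white noise, whereas α near 1 or 2 gives the heavily auto-correlated "coloured" signals typical of naturalistic stimuli.

import numpy as np

def coloured_noise(n, alpha, rng):
    # Build a signal whose power spectrum follows P(f) proportional to 1/f**alpha:
    # amplitude is the square root of power, and each frequency gets a random phase.
    freqs = np.fft.rfftfreq(n)
    amplitude = np.zeros_like(freqs)
    amplitude[1:] = freqs[1:] ** (-alpha / 2.0)        # leave out the DC component
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    signal = np.fft.irfft(amplitude * np.exp(1j * phases), n)
    return signal / np.std(signal)

rng = np.random.default_rng(2)
white = coloured_noise(4096, alpha=0.0, rng=rng)   # flat spectrum: noise-like
pink = coloured_noise(4096, alpha=1.0, rng=rng)    # low frequencies dominate: auto-correlated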

Plate (1995, 2003) suggested a way around this limitation, which Kelly (2010) successfully demonstrated (see also, Kelly, Blostein, & Mewhort, 2013). They started with naturalistic (autocorrelated) stimuli, and then applied a randomly selected permutation to the order of vector dimensions before encoding. …
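A minimal sketch of that idea (an assumption-laden illustration, not the authors' code) is to scramble the dimensions of an auto-correlated signal with a single, fixed random permutation: the element values are preserved, but the auto-correlation that convolution cannot tolerate is removed.

import numpy as np

rng = np.random.default_rng(3)
n = 4096

# A heavily auto-correlated stand-in for a naturalistic stimulus:
# Brownian noise, whose power spectrum falls off roughly as 1/f^2.
stimulus = np.cumsum(rng.normal(size=n))
stimulus = (stimulus - stimulus.mean()) / stimulus.std()

# One randomly selected permutation of the vector dimensions, chosen once
# and applied to every stimulus before encoding.
permutation = rng.permutation(n)
scrambled = stimulus[permutation]

def max_off_zero_autocorrelation(x):
    # Largest absolute circular auto-correlation at any nonzero lag,
    # normalised so that the value at lag 0 equals 1.
    ac = np.real(np.fft.ifft(np.abs(np.fft.fft(x)) ** 2))
    return np.abs(ac[1:] / ac[0]).max()

print(max_off_zero_autocorrelation(stimulus))   # large: the raw signal is auto-correlated
print(max_off_zero_autocorrelation(scrambled))  # near 0: now noise-like, suitable for convolution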
