Download: pdf (6 pages, 73 KB), html
Abstract: A distributed attractor network is trained on an abstract version of the task of deriving the meanings of written words. When processing a word, the network starts from the final activity pattern of the previous word. Two words are semantically related if they overlap in their semantic features, whereas they are associatively related if one word follows the other frequently during training. After training, the network exhibits two empirical effects that have posed problems for distributed network theories: much stronger associative priming than semantic priming, and significant associative priming across an intervening unrelated item. It also reproduces the empirical findings of greater priming for low-frequency targets, degraded targets, and high-dominance category exemplars.
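The core priming mechanism summarized in the abstract (each word's processing starts from the final activity pattern of the previous word, so a related prime leaves the network closer to the target's attractor) can be illustrated with a small, self-contained sketch. This is not the paper's actual simulation: the network below is a generic Hopfield-style attractor net, all sizes and parameters (N_FEATURES, N_WORDS, tau, the settling criterion) are illustrative assumptions, and it shows only the semantic-overlap side of priming; the associative effects reported in the paper additionally depend on training with word sequences, which the sketch omits.

```python
# Minimal sketch (not the paper's model): an attractor network in which
# processing a target word starts from the previous word's final activity.
# All sizes and parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES = 200   # size of each semantic feature vector (assumed)
N_WORDS = 10       # number of stored word meanings (assumed)

# Random bipolar (+1/-1) semantic patterns, one per word.
patterns = rng.choice([-1.0, 1.0], size=(N_WORDS, N_FEATURES))

# Make word 1 semantically related to word 0: they share 75% of their features.
patterns[1] = patterns[0].copy()
flip = rng.choice(N_FEATURES, size=N_FEATURES // 4, replace=False)
patterns[1, flip] *= -1.0

# Hebbian weights storing every semantic pattern as an attractor.
W = patterns.T @ patterns / N_FEATURES
np.fill_diagonal(W, 0.0)

def settle(start, word, tau=0.2, max_steps=500):
    """Settle gradually toward `word` under constant external input,
    starting from `start`.  Returns (final_state, steps); the step count
    stands in for response latency."""
    state = start.astype(float).copy()
    for step in range(1, max_steps + 1):
        net = W @ state + word                     # recurrent plus external input
        state = (1 - tau) * state + tau * np.tanh(net)
        if np.array_equal(np.sign(state), word):   # settled on the word's features
            return state, step
    return state, max_steps

target = patterns[0]
related_prime = patterns[1]     # overlaps the target in 75% of its features
unrelated_prime = patterns[5]   # essentially no overlap with the target

for label, prime in [("related", related_prime), ("unrelated", unrelated_prime)]:
    prime_state, _ = settle(prime, prime)      # process the prime first
    _, steps = settle(prime_state, target)     # target starts from the prime's final state
    print(f"{label} prime -> target settles in {steps} steps")
```

With settings like these the target should typically be reached in fewer update steps after the related prime than after the unrelated one, which is the settling-time analogue of a priming effect; reproducing the abstract's specific results (stronger associative than semantic priming, priming across an intervening item, and the frequency, degradation, and dominance effects) would require the full training regime described in the paper.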
Copyright Notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.