• Sibley, D. E., Kello, C. T., Plaut, D. C., and Elman, J. L. (2009). Sequence encoders enable large-scale lexical modeling: Reply to Bowers and Davis (2009). Cognitive Science, 33, 1187-1191.

    Download: pdf (5 pages; 44 KB)

    Abstract: Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed-width distributed representations of variable-length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large-scale word-reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large-scale word-reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis.
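
    To make the abstract's central idea concrete, the sketch below shows one way a sequence encoder can map a variable-length letter string to a fixed-width distributed representation: an encoder-decoder recurrent network trained to reconstruct its input through a single hidden-state bottleneck. This is a minimal illustrative assumption, not the authors' implementation (Sibley et al., 2008, built on simple recurrent networks with their own training regime); all class names, layer sizes, and the letter coding here are hypothetical.

        import torch
        import torch.nn as nn

        class SequenceAutoencoder(nn.Module):
            """Encoder-decoder RNN that squeezes a letter sequence into one vector."""

            def __init__(self, n_letters=27, emb_dim=16, code_dim=128):
                super().__init__()
                self.embed = nn.Embedding(n_letters, emb_dim)
                # The encoder reads the letters one at a time; its final hidden
                # state is the fixed-width representation of the whole word.
                self.encoder = nn.RNN(emb_dim, code_dim, batch_first=True)
                # The decoder starts from that code and must reconstruct the
                # letters from it alone (its inputs carry no information).
                self.decoder = nn.RNN(emb_dim, code_dim, batch_first=True)
                self.readout = nn.Linear(code_dim, n_letters)

            def forward(self, letters):                  # letters: (batch, seq_len)
                x = self.embed(letters)                  # (batch, seq_len, emb_dim)
                _, code = self.encoder(x)                # (1, batch, code_dim)
                out, _ = self.decoder(torch.zeros_like(x), code)
                return self.readout(out), code.squeeze(0)

        # Toy usage: autoencode "cat" (a=1 .. z=26, 0 reserved for padding).
        model = SequenceAutoencoder()
        word = torch.tensor([[3, 1, 20]])                # c, a, t
        logits, fixed_width_code = model(word)           # code is (1, 128) for a
                                                         # word of any length
        loss = nn.functional.cross_entropy(logits.transpose(1, 2), word)

    The point of the bottleneck design is the one at issue in the exchange with Bowers and Davis: whatever the word's length, the representation handed to downstream reading models has the same fixed width.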

    Copyright Notice: The documents distributed here have been provided as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.