LENS

Examples: recoder.in


In this example, the network must solve an auto-encoder task. However, while there are 16 units in the input and output layers, there is a very narrow constriction of just two units in between. The network does have at its disposal two recurrent layers, one before and one after the constriction.
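The actual recoder.in example is a LENS (Tcl) script; purely as an illustration of the architecture described above, here is a minimal NumPy sketch of one forward pass. The layer sizes for the recurrent layers, the number of ticks, and all weight initializations are assumptions for the sketch, not values taken from the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IO, N_HID, N_BOTTLE, T = 16, 10, 2, 5   # illustrative sizes, not LENS defaults

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Encoder side: recurrent hidden layer feeding the 2-unit constriction.
W_in  = rng.normal(0, 0.1, (N_HID, N_IO))
W_hh  = rng.normal(0, 0.1, (N_HID, N_HID))
W_bot = rng.normal(0, 0.1, (N_BOTTLE, N_HID))

# Decoder side: recurrent hidden layer reading the constriction, driving the output.
W_bh  = rng.normal(0, 0.1, (N_HID, N_BOTTLE))
W_hh2 = rng.normal(0, 0.1, (N_HID, N_HID))
W_out = rng.normal(0, 0.1, (N_IO, N_HID))

def forward(x):
    """Run one 16-unit pattern through the constriction for T ticks."""
    h_enc = np.zeros(N_HID)
    h_dec = np.zeros(N_HID)
    trace = []                               # the 2-unit code, one vector per tick
    for _ in range(T):
        h_enc = sigmoid(W_in @ x + W_hh @ h_enc)
        code  = sigmoid(W_bot @ h_enc)       # only these 2 units cross per tick
        h_dec = sigmoid(W_bh @ code + W_hh2 @ h_dec)
        trace.append(code)
    y = sigmoid(W_out @ h_dec)               # reconstruction of the input
    return y, np.stack(trace)

x = np.zeros(N_IO)
x[3] = 1.0
y, trace = forward(x)
print(y.shape, trace.shape)                  # (16,) (5, 2)
```

Because both recurrent layers run for several ticks per example, the two constriction units could in principle carry a different value on each tick, i.e., a serial code rather than a single static pattern.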

Can the network learn to serialize the message it must convey and encode it as a sequence of symbols?

There are clear parallels between this task and the problem of creating a spoken language to convey complex ideas through the constriction of our vocal tract and auditory system.

Unfortunately, this task seems to be very difficult for the network. It tends to produce a single stable pattern at the constriction and tries to get as much information out of it as possible. I have tried many different manipulations to encourage it to develop a temporally changing code, without much success.
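One way to tell whether the network has developed a temporally changing code, rather than the single stable pattern described above, is to measure how much the constriction activations vary across ticks. This diagnostic is a hypothetical sketch, not part of the example script; `trace` is assumed to be a (ticks x units) record of the constriction activations.

```python
import numpy as np

def temporal_variation(trace):
    """Mean per-unit standard deviation across ticks.

    A value near zero means the constriction settled into a single
    stable pattern; a larger value suggests a tick-by-tick serial code.
    """
    return float(np.mean(np.std(trace, axis=0)))

stable   = np.tile([0.8, 0.2], (5, 1))       # same 2-unit code every tick
changing = np.array([[0.9, 0.1], [0.1, 0.9],
                     [0.9, 0.1], [0.1, 0.9],
                     [0.5, 0.5]])             # code alternates across ticks

print(temporal_variation(stable))             # 0.0
print(temporal_variation(changing))           # noticeably larger
```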

Maybe you'll have better luck.


Douglas Rohde
Last modified: Fri Nov 17 18:41:15 EST 2000