As the implementation of the deterministic Boltzmann machine demonstrates, it is possible to build non-backprop networks in Lens. With the DBM available as a model, little more needs to be said about how to implement a Hebbian network. Because you can redefine the functions that train or test the network at the level of the tick, the example, or the batch, you have considerable freedom to implement algorithms that are quite unlike backpropagation.
Whichever algorithm you use, however, you may wish to retain the convention that link error derivatives are accumulated while training on a batch and are then used by the weight update algorithm to change the weights. That way, the same update algorithms can operate on both backprop and Hebbian networks, although some Hebbian algorithms may only work properly with steepest descent and online learning.
If you are doing this, it is important to remember that the values you store in the links' deriv fields should be proportional to the negative of the desired weight change. For example, imagine you were writing a simple Hebbian rule in which the change to weight Wij should be equal to e Oi Oj, where e is a small constant and the O's are the outputs of the two units. After the network has finished settling, you will want to increment the deriv of each link by -Oi Oj. The constant e is just your learning rate and will be applied when the actual weight change is made.
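For concreteness, here is a minimal sketch of such a deriv-accumulation step, written in C against simplified stand-in structures rather than the actual Lens unit and link types (the names output, deriv, incoming, and hebbianDerivs are illustrative, not part of the Lens API). In a real extension this code would run as part of the example-level training procedure, after settling:

    /* Simplified stand-ins for the unit and link structures. */
    typedef struct unit Unit;
    typedef struct link Link;

    struct link {
      Unit  *sender;      /* unit on the other end of the link */
      float  weight;
      float  deriv;       /* accumulated over the batch */
    };

    struct unit {
      float  output;
      int    numIncoming;
      Link  *incoming;    /* array of incoming links */
    };

    /* Accumulate -Oi*Oj into each link's deriv so that a standard
       steepest-descent update (weight -= epsilon * deriv) produces
       the Hebbian change +epsilon * Oi * Oj. */
    static void hebbianDerivs(Unit *units, int numUnits) {
      for (int j = 0; j < numUnits; j++) {
        Unit *receiver = &units[j];
        for (int l = 0; l < receiver->numIncoming; l++) {
          Link *link = &receiver->incoming[l];
          link->deriv -= receiver->output * link->sender->output;
        }
      }
    }

Because only the deriv fields are touched here, any of the standard weight update algorithms can then be used to apply the epsilon and change the weights.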
If your network requires that links in opposite directions between two units share the same weight, that will not be easily accommodated in Lens because of the compact representation used to store links. There are, however, a few ways to do it. One would be to create a single set of links and change the input combining procedure to use these links twice when calculating the inputs to the two units. A more complicated solution would be to add a field to each link that stores a pointer to the corresponding link in the opposite direction. You would then need to write a special group connection procedure to set those pointers and possibly a new weight update algorithm to ensure that the two weights stay the same.
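As an illustration of that second approach, the following sketch shows a steepest-descent update pass that keeps each reciprocal pair tied. It uses a simplified link structure with a hypothetical twin pointer field; these names are made up for the example and do not correspond to the actual Lens data structures or update routines. It also assumes all links, including each link's twin, live in one array:

    typedef struct tiedLink TiedLink;

    struct tiedLink {
      float     weight;
      float     deriv;
      TiedLink *twin;     /* reciprocal link, or NULL if untied */
    };

    /* Steepest-descent update that keeps each tied pair identical by
       pooling the derivs of both directions and applying the same step. */
    static void updateTiedWeights(TiedLink *links, int numLinks, float epsilon) {
      for (int i = 0; i < numLinks; i++) {
        TiedLink *link = &links[i];
        if (link->twin && link->twin < link)
          continue;                      /* pair already handled via the twin */
        float deriv = link->deriv;
        if (link->twin) deriv += link->twin->deriv;
        link->weight -= epsilon * deriv;
        link->deriv = 0.0f;
        if (link->twin) {
          link->twin->weight = link->weight;
          link->twin->deriv  = 0.0f;
        }
      }
    }

Pooling the two derivs before taking the step guarantees that the pair stays equal even if the learning rule itself does not treat the two directions symmetrically.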
A final option relies on the fact that most Hebbian learning rules are symmetrical, so a pair of reciprocal links will receive the same derivative on every update and will keep the same weight as long as they start with the same value. You can simply change the network resetting procedure to give corresponding links the same initial value and then treat them as separate links. This has the advantage of being simpler, but the network may be up to twice as slow as a good implementation of the first option.
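A sketch of that idea is below, again using plain weight matrices rather than the real Lens link representation; the procedure name symmetricReset and the matrix layout are assumptions made for the example. After randomizing the A-to-B weights, the B-to-A weights are set to the transpose, so every reciprocal pair starts out equal:

    #include <stdlib.h>

    /* Random weight uniformly distributed in [-range, range]. */
    static float randWeight(float range) {
      return range * (2.0f * rand() / (float) RAND_MAX - 1.0f);
    }

    /* Initialize two full projections between groups A and B so that
       each reciprocal pair of links gets the same starting weight. */
    static void symmetricReset(float *aToB, float *bToA,
                               int numA, int numB, float range) {
      for (int i = 0; i < numA; i++)
        for (int j = 0; j < numB; j++) {
          float w = randWeight(range);
          aToB[i * numB + j] = w;   /* weight from A unit i to B unit j */
          bToA[j * numA + i] = w;   /* reciprocal link gets the same value */
        }
    }

From then on the two projections are trained as ordinary, separate sets of links; the symmetric rule keeps each pair equal without any extra bookkeeping.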