So, for example, with the code I have so far, I can take a simple S-V-O sentence and put it into a Case Grammar representation:
Jack ate an apple ->
Verb = eat
Agent = Jack
Patient = an apple
Time = past
(I don't have time implemented yet, but you get the idea.)
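Just to make the target concrete, here's a toy sketch of what my parser produces, written as a plain Python dict. (This is purely illustrative: `parse_svo` is a hypothetical helper that assumes a rigid subject-verb-object word order, and it skips the lemmatization that would turn "ate" into "eat" and set Time = past.)

```python
def parse_svo(sentence):
    """Toy S-V-O parser: assumes the first word is the Agent, the second
    the Verb, and everything after that the Patient. No lemmatization,
    no tense handling -- just the shape of the case frame."""
    words = sentence.split()
    return {
        "Verb": words[1],
        "Agent": words[0],
        "Patient": " ".join(words[2:]),
    }

frame = parse_svo("Jack ate an apple")
# frame == {"Verb": "ate", "Agent": "Jack", "Patient": "an apple"}
```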
Can I duplicate this behavior with an artificial neural network?
One immediate problem that comes to mind is, how do you handle variable-length input to an ANN? A network has a fixed number of input neurons, but sentences can vary in length.
One solution is to set a limit on the length of the input and pad shorter sentences with blanks. But this seems unsatisfying.
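The pad-with-blanks idea is simple enough to sketch: pick a maximum sentence length, truncate anything longer, and fill the rest with a blank token so the network always sees the same number of inputs. (The `max_len` of 6 and the empty-string pad token here are arbitrary choices for illustration.)

```python
def encode_fixed(words, max_len=6, pad=""):
    """Force a word list to a fixed length: truncate if too long,
    pad with blank tokens if too short. The network's input layer
    can then have exactly max_len word slots."""
    words = words[:max_len]
    return words + [pad] * (max_len - len(words))

encode_fixed(["Jack", "ate", "an", "apple"])
# -> ["Jack", "ate", "an", "apple", "", ""]
```

The obvious cost is that every sentence longer than `max_len` loses information, and most of the input slots sit empty for short sentences.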
Another solution is to use a recurrent network, where you submit one word at a time, along with the saved hidden layer from the previous input. This page has a more complete description.
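The recurrent idea can be sketched in a few lines: each word updates a fixed-size hidden layer, and that hidden layer is what gets carried forward, so the sentence can be any length. This is just an Elman-style step with random untrained weights (the toy vocabulary, hidden size, and weight ranges are all assumptions for illustration, not anything from the CLASPnet paper):

```python
import math
import random

random.seed(0)
VOCAB = ["jack", "ate", "an", "apple"]   # toy vocabulary (assumption)
INPUT, HIDDEN = len(VOCAB), 5

# Random, untrained weights: input-to-hidden and hidden-to-hidden.
W_xh = [[random.uniform(-0.5, 0.5) for _ in range(INPUT)] for _ in range(HIDDEN)]
W_hh = [[random.uniform(-0.5, 0.5) for _ in range(HIDDEN)] for _ in range(HIDDEN)]

def one_hot(word):
    v = [0.0] * INPUT
    v[VOCAB.index(word)] = 1.0
    return v

def rnn_step(x, h):
    """Combine the current word x with the hidden layer h saved from
    the previous word, producing the new hidden layer."""
    return [math.tanh(sum(W_xh[i][j] * x[j] for j in range(INPUT)) +
                      sum(W_hh[i][k] * h[k] for k in range(HIDDEN)))
            for i in range(HIDDEN)]

h = [0.0] * HIDDEN                        # start with an empty hidden layer
for word in "jack ate an apple".split():
    h = rnn_step(one_hot(word), h)        # submit one word at a time
# h is now a fixed-size vector summarizing the whole sentence,
# regardless of how many words went in.
```

The point is that the network's input size stays constant (one word plus the saved hidden layer) no matter how long the sentence is.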
However, reading further in the CLASPnet paper, I see that each word he submits to the network has a 12-letter limit; if a word has fewer than 12 letters, he pads it with spaces (he does some strange duplication tricks as well, but shorter words still end up padded with spaces). So the problem of variable-length input is not really solved. Unless you submit only one character at a time to the network, and let it build up its own representation of words? That seems to involve a lot of extra processing...
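So at the word level it's the same padding trick again, just with characters. Stripped of the duplication details from the paper, the 12-letter scheme amounts to something like this (the function name is mine, not the paper's):

```python
def encode_word(word, width=12):
    """Force a word to a fixed character width, as in the CLASPnet
    scheme described above (minus its duplication tricks): truncate
    anything longer than width, pad shorter words with spaces."""
    word = word[:width]
    return word + " " * (width - len(word))

encode_word("apple")
# -> "apple       " (5 letters + 7 spaces)
```

Which shows why variable length hasn't really gone away: "apple" carries 7 wasted space slots, and any word past 12 letters gets chopped.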
Anyone got any ideas on how to submit variable-length input to a neural network?