I came across a paper by Torbjorn Lager (from 2005): "Multi-Paradigm Programming in Oz for Natural Language Processing". Slide 41 shows how "John smiles", in natural language subject-predicate syntax, is converted into function-argument form: smiles(j).
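In Oz, that function-argument form is simply a record value; here is a tiny sketch of my own (not from the slides) to make it concrete:

    declare
    Fact = smiles(j)               % function-argument form of "John smiles"
    case Fact of smiles(Who) then
       {Browse Who}                % shows j
    end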
My question is, why? Why convert subject-predicate form into function-argument form, then back again when you want to communicate with a reader? Converting back and forth between the two seems to create an impedance mismatch, adding needless complexity.
The Math agent presented in Lesson 6 demonstrates the extra steps needed. We start with a subject-predicate form in our heads: one plus one. We convert that into function-argument form: add(1 1 Ans). Then, in the definition of add(N M A), we return to infix (aRb) form: A=N+M. Why did we have to go through the step of converting N+M=A into add(N M A)?
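For reference, here is roughly what I mean, sketched from memory of the Lesson 6 port-object pattern (the course code may differ in its details):

    declare Sin MathAgent in
    thread
       for Msg in Sin do
          case Msg of add(N M A) then A = N + M end   % back to infix A=N+M
       end
    end
    MathAgent = {NewPort Sin}
    local Ans in
       {Send MathAgent add(1 1 Ans)}   % the function-argument detour
       {Browse Ans}                    % shows 2
    end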
Further on in Lager's paper, slides 83 and 84 display an inference process. John is inferred to be happy, since he is a man who whistles, and every man who whistles is happy. But the inference procedure has to translate the subject-predicate syntax of the natural language sentences into function-argument form. Why? What do we gain by such translations?
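For what it's worth, here is my own hedged reconstruction of what the translated facts and rule amount to, written against Oz's relational/search layer (this is not Lager's actual code; I'm using Mozart's Search.base.one):

    declare
    proc {Man X}      X = john end              % man(john)
    proc {Whistles X} X = john end              % whistles(john)
    proc {Happy X}                              % happy(X) :- man(X), whistles(X)
       {Man X} {Whistles X}
    end
    {Browse {Search.base.one proc {$ X} {Happy X} end}}   % shows [john]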
In contrast, my logicagent can do a similar inference using natural language syntax:
> Every man who whistles is happy.
> John is a man who whistles.
> Every man who whistles includes a man who whistles.
> Is John happy?
Yes, John is happy.
> Why is John happy?
John is happy because: john is a man who whistles, and a man who whistles is a member of every man who whistles, and every man who whistles is happy.
The idea being: we should embrace subject-predicate syntax, especially when dealing with natural language processing.
---
I started following the second part of this MOOC without having taken the first part, which introduced the Oz research language. I was able to pick up the language well enough to do most of the programming assignments.
I'm now trying to make the logicagent handle the separate statements "John whistles" and "John is a man", inferring "John is a man who whistles" from them. I see how to do it; it involves some busywork: pulling information out of hashes, doing some permutations, and so on. The idea is that I have a graph containing "John whistles" and "John is a man"; I dynamically build "John is a man who whistles" from those two statements, and use that on-the-fly construction to connect to "happy" through "Every man who whistles is happy".
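As a rough sketch of that combining step (not my logicagent's actual code; names like Props and Assert are hypothetical), using an Oz dictionary as the hash of predicate phrases per subject:

    declare
    Props = {NewDictionary}        % subject -> list of predicate phrases (the "hash")
    proc {Assert Subject Phrase}
       {Dictionary.put Props Subject
        Phrase|{Dictionary.condGet Props Subject nil}}
    end
    {Assert john 'is a man'}       % "John is a man"
    {Assert john whistles}         % "John whistles"
    % dynamically build the combined statement once both pieces are present
    if {Member 'is a man' {Dictionary.get Props john}}
       andthen {Member whistles {Dictionary.get Props john}}
    then
       {Browse 'john is a man who whistles'}   % now connect this to "happy"
    end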
The hidden inductive step, concealed by first-order logic's quantifiers, is "Every man = a man". You are assuming that if you have a man, he will be like every man you've seen so far. Thus, deduction contains induction. You might be able to get away with it in a rigidly formal language, but in natural language "a man who whistles" doesn't have to be like "every man who whistles."