Learning Propositional Logic From Scratch
Paper in proceedings, 2014
We present a computational model for developing intelligent agents that can reason in multiple symbolic domains. The agents have both deductive and inductive reasoning abilities. The deductive mechanism is based on a simple cognitive model with bounded cognitive resources. The main learning mechanism is a formalization of Occam’s razor. Agents constructed in our model can learn generalized knowledge from concrete examples. For example, an agent can learn elementary arithmetic and propositional logic, and then compute correct answers to previously unseen questions such as “what is 27*9?” and “is P∨(P→Q) a tautology?”. We illustrate the learning process in the case of propositional logic, where an agent first learns the syntax and then some basic logical laws. When tested on the tautologies used in an experiment reported in , the agent’s accuracy exceeds the average human score.
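The tautology question quoted above can be checked independently of the paper's learning mechanism by brute-force truth-table enumeration. The sketch below (not the authors' method; the function and variable names are illustrative) verifies that P∨(P→Q) holds under every truth assignment:

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    """Material implication: a -> b is equivalent to (not a) or b."""
    return (not a) or b

def is_tautology(formula, num_vars: int) -> bool:
    """Return True iff the formula is true under all 2**num_vars assignments."""
    return all(formula(*assignment)
               for assignment in product([False, True], repeat=num_vars))

# P v (P -> Q): true under every assignment, hence a tautology.
print(is_tautology(lambda p, q: p or implies(p, q), 2))  # True

# P and Q is not a tautology (false when either variable is false).
print(is_tautology(lambda p, q: p and q, 2))  # False
```

Such an exhaustive check runs in time exponential in the number of variables, which is fine for the two-variable example here; the paper's agent instead learns logical laws from examples rather than enumerating truth tables.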