Explanations are the fuel of progress: the fundamental tool through which humans have gained ever-greater control over their future throughout history.
So far, producing explanations has been an exclusive prerogative of humans, who have greatly refined the process over the last few centuries with the emergence of the scientific method. In this talk, we will try to formalize this epistemological breakthrough to make it digestible for a machine, with the ultimate goal of building an artificial scientist and breaking the human monopoly on producing new symbolic explanations.
To this end, we will introduce the concept of Explanatory Learning (EL). Unlike traditional AI approaches such as Program Synthesis, which rely on human-coded interpreters, EL is premised on the idea that a true artificial scientist can emerge only when a machine learns to interpret symbols autonomously.
This shift in perspective offers a fresh outlook on a machine's ability to understand and use language, which we will examine through the unexpected findings of our core experiment: the creation of a successful artificial scientist in Odeen, a simple simulated universe full of phenomena to explain.