
Seminar with Jacques Pitrat: the father of symbolic AI


A few weeks ago, Alain Kaeser, Yseop’s Founder & Chief Science Officer, and I attended a seminar at Paris University at which Jacques Pitrat presented his research on the possibilities of AI. Jacques Pitrat is considered the father of symbolic AI in France. A graduate of the École Polytechnique, the most prestigious French scientific and engineering school, he completed a PhD thesis in 1966 built around a theorem-proving program. He then worked as a researcher and taught AI at Pierre et Marie Curie University (Paris-6) until 1998.

From the outset, Pitrat focused on meta-cognition applied to software. The question he has been trying to answer is: what is the minimal knowledge one can give a machine so that it can learn by itself and solve any problem? He calls this the “bootstrap” problem. In other words, Pitrat started working, 50 years ago, on what is now called “general AI”, a topic that is often treated as a brand-new research question. Meta-cognition is well known outside of computer science, for example in education science: how should teachers teach their students how to learn?

The programmer’s version of meta-cognition is called meta-programming. It means, for example, marking a function so that the system “knows” that f(x, y) = f(y, x), instead of manually programming every symmetric sub-case. Meta-programming is a way of giving the machine knowledge about its own knowledge.
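A minimal sketch of the idea, in Python rather than Pitrat’s own formalism: a decorator (hypothetical, purely for illustration) records the meta-knowledge that a function is symmetric, and the machinery exploits that declaration so the programmer never has to handle the swapped-argument case by hand.

```python
import functools

def symmetric(fn):
    """Mark a binary function as symmetric: f(x, y) == f(y, x).

    The wrapper normalizes the argument order and caches results, so a value
    computed for (x, y) is reused for (y, x) without any extra programming.
    """
    cache = {}

    @functools.wraps(fn)
    def wrapper(x, y):
        key = (x, y) if repr(x) <= repr(y) else (y, x)
        if key not in cache:
            cache[key] = fn(*key)
        return cache[key]

    return wrapper

@symmetric
def overlap(a, b):
    # Stand-in for an "expensive" computation; symmetric by nature.
    print(f"computing overlap({a!r}, {b!r})")
    return len(set(a) & set(b))

overlap("rules", "meta")   # computed once
overlap("meta", "rules")   # answered from the declared symmetry, not recomputed
```

The point is not the caching trick itself, but that the symmetry lives as a declared fact about the function, which the system can then exploit.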

Pitrat’s approach to AI was to create what he calls an “artificial AI researcher” (in French: “Chercheur Artificiel en IA”, or CAIA). CAIA is a rule-based mathematical problem solver. Each time Pitrat tackles a new problem, he tries to add new rules and, mostly, meta-rules to CAIA so that it can solve the new problem without losing its ability to solve the problems it solved before. CAIA currently solves 230 problem families and, according to Pitrat, is usually better than dedicated algorithms. Better yet, these meta-rules are used to rewrite existing modules of CAIA itself, which leads Pitrat to believe that “AI is better at programming AI than humans”. And this is the real goal of CAIA: to build a system that iteratively rebuilds itself, progressively reducing its reliance on human-programmed knowledge and, step by step, achieving the bootstrap.
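To make the flavor of rules and meta-rules concrete, here is a toy sketch (my own illustration, not CAIA’s actual formalism): rules are plain data, and a meta-rule is code that inspects the rule base and writes new rules into it.

```python
# A rule is (name, premises, conclusion): if all premises hold, conclude.
rules = [
    ("r1", frozenset({"square(x)"}), "rectangle(x)"),
    ("r2", frozenset({"rectangle(x)"}), "parallelogram(x)"),
]

def chain_meta_rule(rule_base):
    """Meta-rule: whenever A => B and B => C are known, derive A => C."""
    derived = []
    for n1, prem1, concl1 in rule_base:
        for n2, prem2, concl2 in rule_base:
            candidate = (f"{n1}*{n2}", prem1, concl2)
            if prem2 == frozenset({concl1}) and candidate not in rule_base:
                derived.append(candidate)
    return derived

rules += chain_meta_rule(rules)
for name, premises, conclusion in rules:
    print(name, "|", " & ".join(sorted(premises)), "=>", conclusion)
# The derived rule r1*r2 (square(x) => parallelogram(x)) was written by the
# meta-rule, not by the programmer.
```

In this miniature, knowledge about knowledge (the chaining meta-rule) extends the rule base on its own, which is the spirit of a system that progressively reprograms itself.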

Pitrat knows that this is a very long process and that he certainly won’t see the project completed, even though about 15% of CAIA’s modules are already programmed by CAIA itself.

Some of the seminar attendees, including both of us, had been lucky enough to be taught by Pitrat. Others seemed to be there only because of the “general AI” theme, probably expecting the keynote to present some trendy new deep-learning-related approach; they were certainly disappointed. The former were impressed to see this man following his own path year after year, building a system that progressively becomes more “intelligent”.

So what about the so-called “singularity”, the moment when the machine will actually know how to learn to solve any problem? Since it is still a very long way off, says Pitrat, we should also keep working on “quick wins” such as deep learning. Pitrat opened the way to meta-programming (which is used in Yseop Rules), and the research community should continue to work on this topic, even if it remains a much longer-term goal.
