Outside the box

This post could be considered a continuation of my previous post, "AI and the ludic fallacy." Bear with me, as this post will make some important yet philosophical points. Many people cringe when they hear the p-word; I have been known to be such a person myself. As with many fields out there, there is a lot of low-quality material (BS, frankly) in philosophy. However, I have also seen many incredibly insightful philosophy pieces, and it seems to be a helpful discipline whenever one needs to get out of the "box".

This post is about one such box.

An Example Box (as seen from another Box)

A brief digression from AI to illustrate where we're going. Many successful disciplines, such as math and logic, operate within a carefully designed box: a set of axioms from which truths are derived. Once a decent set of axioms is established, very elegant and exciting theories can be built. Some of these theories may be successful in modeling physical reality: classical math built on set theory is so successful that after working within it, one may be tempted to think that reality is set theory with its axioms. One may even believe that math is primary to all existence, that we live in the world of real numbers. I have to admit, I carried this belief for a long time (and part of me still uses that frame of mind).

But then comes the realization that reality may sometimes be better modeled by other theories, derived from different sets of axioms. To me, the eye-opening experience was when I came across intuitionistic (constructive) logic. Constructive logic essentially abandons the principle of the excluded middle (p or ~p), which rules out proof by contradiction: one can no longer conclude that p holds merely because ~p leads to absurdity. Everything that is provable in this logic has to be derived (constructed) from the basic principles. If something is said to exist, it cannot be because the lack of its existence would violate some other proven fact; an actual instance has to be shown, hence "constructive" logic. The lack of an excluded middle may seem wrong, but it actually has a very nice model in open set theory: in a family of open sets (a topology), the union of an open set P and its negation ~P, taken to be the interior of its complement (so that P union ~P models "p or ~p"), does not sum up to the whole space, but leaves the boundary aside. This shows that what we intuitively perceive as "obviously true" is not universal, and even simple families of open sets provide a neat model in which to see it. Which logic is right? They both are. Which logic is favoured by nature? Depends on where we look.
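To make this concrete, here is that open-set model worked out on one example (the choice of space and set is mine, purely for illustration), with negation interpreted as the interior of the complement:

```latex
X = \mathbb{R} \text{ with the usual topology}, \qquad P = (0, \infty)

\neg P := \operatorname{int}(X \setminus P) = (-\infty, 0)

P \cup \neg P = \mathbb{R} \setminus \{0\} \neq X
```

The single boundary point 0 is exactly the part "left aside": the point at which p or ~p fails to hold.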

Similarly, the axiom of choice, even though intuitively right, leads to the counterintuitive Banach-Tarski paradox. Math without the axiom of choice is possible, and the troubling paradox is avoided, but there is a price to pay: for example, the epsilon-delta and sequential definitions of continuity stop being equivalent, and that causes a significant mess. I'm not going to go into Gödel's theorem this time, as it's a topic requiring a post of its own (or a book, for that matter, such as Gödel, Escher, Bach).
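For reference, the two classical definitions in question are sketched below; it is a standard (if surprising) fact that deriving epsilon-delta continuity from sequential continuity for real functions requires at least a weak (countable) form of choice, while the converse direction needs none:

```latex
\text{$\varepsilon$--$\delta$ continuity of } f \text{ at } a:\quad
\forall \varepsilon > 0\; \exists \delta > 0\; \forall x:\;
|x - a| < \delta \implies |f(x) - f(a)| < \varepsilon

\text{sequential continuity of } f \text{ at } a:\quad
\forall (x_n):\; x_n \to a \implies f(x_n) \to f(a)
```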

So math, as wonderful as it is, is a monster in a box. Reality always refuses to be captured in any box (or formal system). Whatever physical theories we put together, they end up being incomplete and get replaced by something else, something broader. Once it becomes clear that the box we already feel very comfortable with is insufficient, there is rage. People defend the theory from reality. Whenever this happens, it is a safe bet that reality will eventually win. This does not mean the old theories become completely useless (Newtonian mechanics is still very applicable), but limits are set: the box, even though still useful, is now perceived as just a box (as it always should have been), not the ultimate truth about reality.

This has happened many times before and will happen for as long as we continue on this endeavour to seek the truth about the world.

The AI Box

Now back to AI (Artificial Intelligence), a discipline that does not really have a good definition or methodology. Nevertheless, it is practiced as a form of very empirical, result-driven research. I generally have nothing against the empirical approach. The problem is that an empirical approach is likely to fail when one studies an entity with a zillion knobs that can be turned and optimized. It is enough for one of these knobs to be turned in an uncontrolled and unsystematic way, and the entire empirical aspect goes to hell: we begin to affect the experiment, which for empirical purposes needs to be pure.

A clear example of this can be seen in the overwhelming fraction of today's machine learning literature. It generally follows the scheme below (and I freely admit to being guilty of participating in this scheme as well):

  1. We took model X
  2. We took dataset A
  3. We optimized model X and achieved α% improvement over state of the art on dataset A

The problem with this approach is that it may scientifically describe the model or the process of optimization, but it does not describe the process of selection (why model X and not Y). In addition, there is the dataset bubble...

The Dataset Bubble

Much of machine learning and AI research is practiced on several canonical datasets. The problem is that if a dataset A has been fixed and known for years (e.g., MNIST), then any results obtained from such data have zero statistical significance. Within the box we can say we did cross-validation, split the data into training and testing sets and whatnot. But the truth is, if the test data was used more than once (by any researcher, in fact), then unless one applies a Bonferroni correction (which pretty much no one does in that field) the results are worth a pile of crap. This applies to statistical, data-driven studies in general, whenever hypotheses are formed and tested multiple times on the same data. The dangerous fact is that such a loose, "empirical" approach to data science often works in the short term (and sometimes simply works by chance), so it continues to be used, even though the calculated p-values are worth nothing (ironically, much of the machine learning literature does not care about p-values at all).
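To see why the correction matters, here is a minimal simulation (the test-set size, number of attempts, and 50% baseline are my own made-up numbers, purely for illustration): a thousand "models" that are literally coin flips are all evaluated on one fixed test set, and the best of them looks significant until the multiple comparisons are accounted for.

```python
# A minimal sketch of test-set reuse: every "model" below is a coin flip
# (true accuracy exactly 50%), yet the best of many looks "significant".
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_test = 10_000       # size of the fixed, public test set
n_attempts = 1_000    # tuning runs / papers that all reuse the same set
labels = rng.integers(0, 2, n_test)

# Evaluate n_attempts purely random predictors on the very same labels.
accuracies = np.array([
    (rng.integers(0, 2, n_test) == labels).mean()
    for _ in range(n_attempts)
])
best = accuracies.max()

# Naive p-value for the best result against the 50% baseline
# (normal approximation), ignoring that we tried n_attempts times.
z = (best - 0.5) / np.sqrt(0.25 / n_test)
p_naive = norm.sf(z)
p_corrected = min(1.0, p_naive * n_attempts)   # Bonferroni correction

print(f"best accuracy:        {best:.4f}")
print(f"naive p-value:        {p_naive:.1e}")
print(f"Bonferroni-corrected: {p_corrected:.2f}")
```

With these made-up numbers, the naive p-value for the single best coin-flipper should come out looking significant, while the Bonferroni-corrected one does not; the corrected number is the honest answer, since nothing was actually learned.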

Empiricism is great outside the box (where external reality controls everything), but within any box it can be cheated and lead to a form of fraud. The cheat happens outside the box, done by a clever researcher, while within the box everything seems pure and pristine. An AI winter is a natural occurrence when this "fraud" eventually surfaces. It typically becomes clear when new data shows up on the horizon and the great "state of the art" models fail on it miserably. That was the case with, e.g., handwritten character recognition, which was a "solved" problem in the '90s, until CAPTCHAs happened and proved it was NOT a solved problem. Similarly today, vision is proclaimed solved, autonomous car navigation is proclaimed solved, and so on. Knowing the history, I choose to take these statements with substantial reserve.

Obviously progress has been made, there is no question about that. It's all about proportions though: we've been moving very slowly towards AI, and the recent deep learning revolution is another small step on a long path.

Figure 1. Illustration of the concepts discussed. AI is brewed in a box by a researcher (who plays a statistical god). AI neither knows nor cares what is outside that box. Real intelligence is all about identifying and getting out of the box.

Intelligence seeks to go outside

My greatest annoyance with AI is the lack of basic definitions. I've been trained as a mathematician, and the hand-waving whenever the word "intelligence" is invoked drives me nuts. I've already discussed the matter in the post "Intelligence is Real", where I explored the exciting development that connects the concept of intelligence with basic physics, specifically thermodynamics. I think there is something there, and we might be close to being able to actually put this field on some solid foundation. But it is difficult to accomplish when Twitter feeds and tech media are swelling with the term "AI" applied to anything humanly possible, in a sea of hype-driven noise. The term AI is probably spoiled forever, as indeed it started as a money-attracting buzzword. It serves that purpose perfectly to date, while also dutifully delivering the regular AI winters.

Going back to boxes, let's assume intelligence is the force that drives us out of the box. Something that allows us to abstract, transcend, and see patterns invisible from the previous perspective. Not something that pattern-matches against known situations, but something that tries to model and explore a new situation; something that could be summarized with the following thought:

Intelligence is what protects animals against "unknown unknowns". Current AI can only deal with "known unknowns".

To be specific: "known unknowns" are known to the designer and are either coded in as a prior or have similar cases present in the dataset if machine learning is used. I think this simple statement captures a lot of what is wrong with our approach to AI. Unknown unknowns are out there and cannot be accounted for in advance. They are outside the box.
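As a toy illustration of this closed-world setup (the class names and numbers below are mine, purely hypothetical), consider a classifier that must answer with one of the labels its designer anticipated; an unknown unknown still gets squeezed into a known box:

```python
# Hypothetical sketch: a closed-world classifier can only answer with
# labels its designer anticipated (the "known unknowns").
import numpy as np

KNOWN_CLASSES = ["cat", "dog"]   # everything the designer accounted for

def classify(logits: np.ndarray) -> tuple[str, float]:
    # Softmax over the fixed label set: there is no way to say "none of these".
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return KNOWN_CLASSES[int(probs.argmax())], float(probs.max())

# An out-of-distribution input (a toaster, say) still receives a known label.
label, confidence = classify(np.array([2.1, 1.9]))
print(f"{label} ({confidence:.0%})")   # -> cat (55%)
```

Whatever the input was, the answer comes from inside the box, delivered with unearned confidence.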

Why do we fear "real AI"?

Much as reality cannot be captured in a box, intelligence is a force that drives us out of the boxes (the boxes we enclose ourselves in) to connect with reality and extract new opportunities. Perhaps that is why the definition of intelligence is so elusive. And perhaps this is why we consciously or subconsciously fear AI: deep down we know that once it truly exists, we will have a real, challenging competitor; something that, like a magical genie, will not want to go back into its bottle. It will strive to survive and may have its own agenda... But the good news is, we really don't have much to fear yet: everything that we currently call AI is not even close to real intelligence. All those things sit in boxes. In fact, many AI researchers themselves sit in their own boxes (or dataset bubbles), forced to publish yet another paper claiming the superiority of their algorithm on, e.g., the proverbial MNIST.

That said, I do believe we will grasp the concept and build a "real AI" in this century, perhaps even in the first half of it. But first we'll have to define it, and the best chance to do so lies with the most solid of disciplines, the one most heavily anchored in reality: physics.

