Intelligence confuses the intelligent

This post is once more about the definition of AI. It was provoked by a certain tweet exchange in which it turned out, yet again, that everybody understands the term AI in their own arbitrary way. So here we go again.

Introduction

Let us deal once more with this fundamental question: what is and what is not AI. Determining what is artificial and what is not is not the problem; the problem is determining whether something is intelligent. This has confused, confuses, and will likely continue to confuse even very intelligent people.

Many application- and research-focused people, particularly in machine learning, avoid asking this question altogether, arguing that it is philosophical, undefined, and therefore not scientific (and that touching this matter inevitably causes a mess). Instead they use the equivalent of duck typing - if it looks intelligent, it is intelligent - a somewhat extreme extension of the Turing test. I disagree with this opportunistic approach; I think getting this definition right is crucial to the field, even if it means getting into another s%#t storm.  In fact, if the machine learning people's argument is that this discussion is not sufficiently formal (and hence messy), I'd like to kindly suggest that it is their duty to formalize it, not to … Read more...
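
To make the duck-typing analogy concrete, here is a minimal sketch in Python (my illustration, not something from the post): the caller never checks what an object is, only whether it behaves the way a duck would.

```python
# Duck typing: no isinstance() checks, only behavior matters.
class Duck:
    def quack(self):
        return "Quack!"

class Chatbot:
    def quack(self):
        return "Quack! (the statistically most likely reply)"

def listen(thing):
    # Anything that quacks passes for a duck.
    print(thing.quack())

listen(Duck())     # Quack!
listen(Chatbot())  # Quack! (the statistically most likely reply)
```

By the same logic, anything that looks intelligent gets called AI, which is exactly the definitional shortcut I object to.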

Inside of a nebula

I'm taking a break from AI in this short post; it's time for something more general about the universe [see the last post in this category, "what if we had a warp drive"].

In our daily activities we may not notice how lucky we are - we can see the sky. I mean the deep sky, even far beyond our Galaxy. And by looking at those things, we can learn that the Universe is expanding, that there are quasars, active galaxies, large-scale cosmic structures, galaxy clusters, the cosmic background radiation and many other marvels. We treat all that as obvious.

But imagine that the Sun, along with the rest of the solar system, were trapped inside one of the dense nebulae of which there are countless numbers in our Galaxy. Say we were stuck somewhere deep inside the Orion nebula. All we would see in the night sky would be the faint pink glow of hydrogen and maybe a few blurred stars shining through the fog.

And best of all, since the nebula is many, many light years across, we could do nothing to see beyond it. Absolutely nothing. Discovering anything about the outside universe would require sending a probe light years … Read more...

The complexity of simplicity - balancing on Occam's razor

While rereading my recent post [the meta-parameter slot machine], as well as a few papers suggested by the readers in the comments, I've realized several things.

On the one hand we have Occam's razor: prefer the simplest model that accounts for the data. On the other hand we know that in order to build intelligence, we need to create a very complex artifact (namely something like a brain) that has to contain lots of memories (parameters). There is an inherent conflict between these two constraints.

Many faces of overfitting

If we have a model that is too complex for the task, we often find that it overfits, since it has the capacity to "remember the training set". But things may not be so obvious in reality. For example, there is another, counterintuitive situation where overfitting may hit us: the case where the model is clearly too simple to solve the task we have in mind, but the task as specified by the dataset turns out to be much simpler than what we had originally thought (and intended).
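
Before getting to the counterintuitive case, here is a minimal sketch of the classic one, where a model with enough capacity simply memorizes the training points (my illustration, not from the original post; it assumes numpy and scikit-learn are available):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# The true signal is a simple sine wave; we only observe 15 noisy samples.
def target(x):
    return np.sin(2 * np.pi * x)

x_train = rng.uniform(0, 1, 15)
y_train = target(x_train) + rng.normal(0, 0.1, x_train.shape)
x_test = np.linspace(0, 1, 200)
y_test = target(x_test)

for degree in (3, 14):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x_train[:, None], y_train)
    train_err = mean_squared_error(y_train, model.predict(x_train[:, None]))
    test_err = mean_squared_error(y_test, model.predict(x_test[:, None]))
    print(f"degree={degree:2d}  train MSE={train_err:.4f}  test MSE={test_err:.4f}")

# The degree-14 polynomial typically drives the training error toward zero
# while the test error blows up: the model has "remembered the training set".
```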

Let me explain this counterintuitive case with an example (an actual anecdote I heard, as far as I remember, from Simon Thorpe):

Figure 1. Read more...