In my career I've encountered researchers in several fields who try to address the (artificial) intelligence problem. What I found, though, is that researchers within each of those fields had only a vague idea of all the others trying to answer the same question from a different perspective (in fact, I initially had only a very faint idea myself). In addition, in the best tradition of Sayre's law, there is often tension and competition between the researchers occupying these niches, resulting in violent arguments. I've had the chance to interact with researchers representing pretty much all of the disciplines I'll mention here, and since many readers of this blog may be involved in research in one or a few of them, I decided it might be worthwhile to introduce them to each other. For each community I'll try to explain (at least from my admittedly shallow perspective) the core assumption, the prevalent methodology, and the possible benefits and drawbacks of the approach, as well as a few representative pieces of literature/examples (a purely subjective choice). My personal view is that the answer to the big AI question cannot be obtained within any of these disciplines, but will eventually be found somewhere between them, and … Read more...
The meta-parameter slot machine
Today we'll step back a bit and consider the psychology of a machine learning researcher at work, a subject that interests me deeply and one I've already touched on in another post. Some of this comes from my own introspection, as I've been doing machine learning for quite a few years now.
Emails and ML models trigger dopamine
It is a well-known fact from biology that small achievements trigger the release of small amounts of dopamine - a neurotransmitter believed to be involved in reinforcement learning. Dopamine makes us feel good and also triggers plasticity in certain parts of the brain (likely allowing the brain to "remember" which behaviour led to the reward). Reinforcement learning has its issues, however, since a reward can appear by coincidence and therefore reinforce the "wrong cause". This is very visible these days with the Internet, email and texting: since receiving an important and rewarding message reinforces the behaviour that led to it - and that most likely was pressing the "get mail" button - we get addicted to checking email! The same applies to social media and texting, and it is also the mechanism underlying gambling. In reality rewards … Read more...
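To make the "wrong cause" effect concrete, here is a toy simulation (my own sketch, not anything from the post or the literature; all action names and constants are made up): emails arrive at random, independently of what the agent does, yet a simple value-learning rule ends up crediting the "check email" action, because that is the behaviour active when the reward is observed.

```python
import random

ARRIVAL_PROB = 0.1  # chance a new email arrives each step (made-up constant)
ALPHA = 0.05        # learning rate for the running-average value update
EPSILON = 0.1       # exploration rate

values = {"check_email": 0.0, "work": 0.0}
inbox = 0

random.seed(0)
for step in range(10_000):
    # Emails arrive at random, regardless of what the agent does.
    if random.random() < ARRIVAL_PROB:
        inbox += 1
    # Epsilon-greedy choice between the two "behaviours".
    if random.random() < EPSILON:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # The reward coincides with checking, so checking gets all the credit.
    if action == "check_email":
        reward = 1.0 if inbox > 0 else 0.0
        inbox = 0
    else:
        reward = 0.0
    values[action] += ALPHA * (reward - values[action])

print(values)  # "check_email" ends up with by far the highest value
```

The agent learns a high value for checking mail even though checking never caused a single message to arrive - precisely the superstitious reinforcement described above.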
Give me a dataset to train, and I shall move the world
There is a widespread belief among the "artificial intelligentsia" that with the advent of deep learning all it takes to conquer some new land (application) is to create a relevant dataset, get a fast GPU and train, train, train. However, as happens with complex matters, this approach has certain limitations and hidden assumptions. There are at least two important epistemological assumptions:
- Given a big enough sample from some distribution, we can approximate it efficiently with a statistical/connectionist model
- A statistical sample of a phenomenon is enough to automate/reason about/predict the phenomenon
Neither of these assumptions is universally correct, as the sketch below illustrates.
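Here is a minimal illustration of the second assumption failing (my own sketch, assuming numpy and scikit-learn are available; none of this comes from the post itself): a network fit on samples of sin(x) drawn from [0, 2π] matches the function well inside that range, but the sample tells it nothing about the function beyond it.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Sample the phenomenon only on [0, 2*pi].
x_train = rng.uniform(0, 2 * np.pi, size=(2000, 1))
y_train = np.sin(x_train).ravel()

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

x_in = np.linspace(0, 2 * np.pi, 5).reshape(-1, 1)           # inside the sampled range
x_out = np.linspace(4 * np.pi, 6 * np.pi, 5).reshape(-1, 1)  # far outside it

print("in-sample max error: ",
      np.abs(model.predict(x_in) - np.sin(x_in).ravel()).max())
print("out-of-sample max error:",
      np.abs(model.predict(x_out) - np.sin(x_out).ravel()).max())
```

Nothing here is a flaw of the optimiser - the sample simply carries no information about the phenomenon outside the region that was sampled.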
Universal approximation is not really universal
There is a theoretical result known as the universal approximation theorem. In summary, it states that any continuous function can be approximated to arbitrary precision by a composition of simple real functions as shallow as three levels - e.g., a multilayer perceptron with a single hidden layer of sigmoidal units. This is a mathematical statement, but an existential one. It does not say whether such an approximation is practical or reachable with, say, gradient descent. It merely states that the approximation exists. As with many existential arguments, its applicability to the real world is limited. In the real world, we … Read more...
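For reference, one classical form of the statement (after Cybenko's 1989 result for sigmoidal σ; the notation here is mine):

```latex
% Sums of sigmoidal units are dense in C([0,1]^n): for every continuous
% target f and tolerance eps, *some* finite network of this form gets
% within eps -- but nothing is said about how to find it.
\[
\forall f \in C([0,1]^n)\ \ \forall \varepsilon > 0\ \
\exists N \in \mathbb{N},\ \alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :
\quad
\sup_{x \in [0,1]^n}
\left|\, \sum_{i=1}^{N} \alpha_i\, \sigma\!\big(w_i^{\top} x + b_i\big) - f(x) \right|
< \varepsilon .
\]
```

Note the purely existential quantifier on the weights: the theorem guarantees that an approximating network exists, not that gradient descent (or any tractable procedure) will find it, nor that N stays reasonably small.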