PVM is out

So finally, after many months, we can share our progress. The Predictive Vision Model (PVM) is a new recurrent learning architecture we have been exploring for a while now. The paper with our initial results is available at https://arxiv.org/abs/1607.06854 and the corresponding code is at https://github.com/braincorp/PVM .

So what is PVM? It is a new approach to learning the foundations of perception in an unsupervised way. We exploit the idea of multi-scale, multi-level stacked predictive encoders (similar to autoencoders, but trained to predict the next frame in a sequence of inputs). We found that if we train this architecture online, we can liberally wire it with feedback and lateral connectivity and nothing breaks! So we end up with a scalable, unsupervised architecture that naturally operates in time and is able to exploit all the regularities that are so obvious to us (humans, highly visual animals) that we don't even notice them consciously until we are faced with an optical illusion.
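To make the idea concrete, here is a minimal sketch of a single PVM-style predictive encoder unit in plain numpy. Everything here (the `PredictiveUnit` class, the layer sizes, the plain-SGD backprop) is illustrative and assumed, not the actual code from the repository linked above; it just shows the basic pattern of compressing the current frame plus context and predicting the next frame, with weights updated online on every step.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class PredictiveUnit:
    """Hypothetical autoencoder-like unit that predicts the *next* input
    frame from the current frame plus context (lateral/feedback signals
    arriving from other units)."""

    def __init__(self, n_input, n_context, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n_in = n_input + n_context
        self.W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_input))
        self.lr = lr

    def step(self, frame, context, next_frame):
        """One online training step: predict next_frame, update the
        weights, and return (prediction, hidden). The hidden code is
        what gets passed to other units as lateral/feedback context."""
        x = np.concatenate([frame, context])
        h = sigmoid(x @ self.W1)        # compressed representation
        pred = sigmoid(h @ self.W2)     # prediction of the next frame
        err = pred - next_frame         # prediction error signal
        # plain SGD backprop through the two layers
        d_pred = err * pred * (1.0 - pred)
        d_h = (d_pred @ self.W2.T) * h * (1.0 - h)
        self.W2 -= self.lr * np.outer(h, d_pred)
        self.W1 -= self.lr * np.outer(x, d_h)
        return pred, h
```

In the full architecture, many such units are tiled over the image at multiple scales and stacked into a hierarchy, with each unit's hidden code serving as input for the level above and as context for neighboring and lower units.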

This is really just the beginning of the work. We experimented a lot, so we decided not to invest in a GPU implementation, but that is certainly a good avenue to pursue now. Recurrent feedback and online operation make PVM difficult (if not impossible) to implement in any of the numerous deep learning frameworks (at least to my limited understanding of those frameworks; I would be excited if somebody who knows them better would give it a try).
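To illustrate why this is awkward for static-graph frameworks, here is a hypothetical sketch of the online loop, building on the `PredictiveUnit` sketch above (two units for simplicity). Context flows between units with a one-step delay, so there is no fixed feed-forward graph to unroll and compile, just a stateful update applied frame by frame.

```python
# Hypothetical online loop: two units exchange one-step-delayed
# lateral context. Each unit must be built with n_context == n_hidden.
def run(units, frames, n_hidden):
    ctx = [np.zeros(n_hidden) for _ in units]   # previous hidden states
    for t in range(len(frames) - 1):
        new_ctx = []
        for i, u in enumerate(units):
            other = ctx[1 - i]                  # the other unit's last code
            _, h = u.step(frames[t], other, frames[t + 1])
            new_ctx.append(h)
        ctx = new_ctx                           # delayed recurrent wiring
```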

We show results for visual tracking, but that is not our end goal. In fact, it is just something we can measure; the question of the right measure for such systems remains open. We want these systems to build representations of reality that are useful for visually guided behaviours. We rely on the assumption that a system understands reality only to the extent to which it can predict it, a principle that seems to apply to many intelligent agents, including humans and even collectives of humans. Now, obviously, the ability to predict things in the long term is in many cases limited, but a lot of other things are regular and can actually be predicted quite well.

OK, let me not get ahead of myself. Just read the paper and send me feedback!

