Simon - I'm joined today by Frederik Mallmann-Trenn, Assistant Professor in Data Science at King's College London, to discuss neural networks and how concepts that have structure are represented in our brain.  


Simon - Could you talk me through some of your recent research on learning hierarchy, and why spiking neural networks can address some of the challenges with learning structured concepts?  


“Our goal was to understand how the brain works and how hierarchical concepts are learned.” 


Frederik - Consider the following: a human is a body and a head; the head in turn has a mouth, nose and eyes; and the eyes have an iris, pupil and so on. You can make this as fine-grained as you like, and in the end you get a hierarchy. What's interesting about the brain is that it is somehow able to recognize these things, a human for example, even if some of the sub-concepts are missing. For instance, what can you see here on this Picasso?  



Simon - So I think it's a girl.  


Frederik - It is, and it's quite fascinating if you think about it, because you don't see any mouth or nose; you don't even see any legs, for that matter, and somehow you're still able to recognize that it's indeed a girl.  


That's exactly what we try to model and understand. To do this more accurately we used spiking neural networks instead of artificial neural networks,  


the difference being that spiking neural networks use a binary activation function: a neuron either fires or it doesn't. 
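The distinction Frederik draws can be sketched in a few lines of Python. This is only a toy illustration: the threshold value and the input potentials below are invented, not taken from his model.

```python
import numpy as np

def spike(potential, threshold=1.0):
    """Binary (spiking) activation: fire (1.0) iff the membrane
    potential reaches the threshold, otherwise stay silent (0.0)."""
    return (potential >= threshold).astype(float)

def sigmoid(potential):
    """Continuous activation typical of artificial neural networks."""
    return 1.0 / (1.0 + np.exp(-potential))

# Same inputs, very different outputs: all-or-nothing vs graded.
v = np.array([-0.5, 0.8, 1.2, 3.0])
spikes = spike(v)      # each entry is exactly 0.0 or 1.0
graded = sigmoid(v)    # each entry lies strictly between 0 and 1
```

The all-or-nothing output is what makes the spiking activation non-differentiable, which is the root of the training difficulty Frederik describes next.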


In contrast, artificial neural networks use something like sigmoids or ReLUs, which are continuous activation functions. Another major difference is that, since the binary activation is not differentiable, you can't use stochastic gradient descent, and gradient descent isn't really bio-plausible anyway. So instead we use something that is bio-plausible, Oja's rule, to update the weights in the network. 
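Oja's rule is simple to write down: for a linear neuron with output y = w·x, the weights move in the Hebbian direction y·x, with a decay term that keeps them bounded. A minimal sketch follows; the 2-D data and the learning rate are invented for illustration and are not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D inputs whose main variance lies along the [1, 1] direction.
X = rng.normal(size=(5000, 2)) * np.array([2.0, 0.5])
X = X @ (np.array([[1.0, 1.0], [-1.0, 1.0]]) / np.sqrt(2))  # rotate

w = rng.normal(size=2)   # random initial weights
eta = 0.005              # learning rate

for x in X:
    y = w @ x                    # linear neuron output
    w += eta * y * (x - y * w)   # Oja's rule: Hebbian term minus decay

# The decay term makes w self-normalising (||w|| approaches 1), and w
# ends up aligned with the first principal component of the inputs.
```

Unlike plain Hebbian learning, whose weights grow without bound, the −y²w decay term stabilises the norm, which is one reason the rule is considered biologically plausible.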


Simon - The process of learning for a human brain takes lots of input over many years. How quickly are you able to train these models and how much data is required? Is it the same process as training an artificial neural network?  


Frederik - It's true, humans definitely have an advantage there. Based on the theoretical result we obtained, the time required to learn these concepts depends on the learning rate and on the depth of the concept. So the more high-level the concept, the longer it takes to learn, and it will also depend on the number of sub-concepts and so on. But this dependence is not really strong.  


How long a spiking neural network takes in comparison to an ANN is a very interesting question, and I'm glad you asked: it's actually the next thing on our list, and I'm currently recruiting PhD students to look into it.  


What we were able to show for the SNN is that eventually the brain is going to learn these concepts, and not only that, it's going to learn them in a very robust and interesting way.  


It somehow preserves a one-to-one mapping, so you can find your concept space, the hierarchy, mapped into your brain. 


We're also able to show that the more layers you use, the faster you learn, and how few neurons you need in the end.  


Simon – In order to speed things up could you incorporate things like reinforcement learning?  


Frederik - That's a really good question. It would be great to incorporate Q-learning to get some bio-plausible version of deep Q-networks, but the problem is that it's not entirely clear how you would model the rewards. Our model is completely unsupervised and we don't have any feedback from the environment, but in the real world there is some sort of feedback.  


If you think about it, there are some rewards: a baby looks at a cat and says it's a dog; the parents might be unhappy about this, and the baby might be able to recognize that and get some negative feedback. On the other hand, if the baby identifies the cat correctly, the parents might smile and the baby has its reward.  
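The feedback loop Frederik sketches maps naturally onto a tabular Q-learning update. The toy below is hypothetical, invented to illustrate the idea; the states, actions and rewards are not part of the paper's model.

```python
import random

random.seed(0)

# The "baby" labels an animal; the environment (the parents) returns a
# reward of +1 for a correct label and -1 otherwise.
states = ["cat", "dog"]
actions = ["say_cat", "say_dog"]
Q = {(s, a): 0.0 for s in states for a in actions}

alpha, epsilon = 0.1, 0.1   # learning rate and exploration rate

for _ in range(2000):
    s = random.choice(states)
    if random.random() < epsilon:           # epsilon-greedy exploration
        a = random.choice(actions)
    else:                                   # otherwise act greedily
        a = max(actions, key=lambda act: Q[(s, act)])
    r = 1.0 if a == "say_" + s else -1.0    # parental feedback
    # One-step update (a bandit-style case with no successor state).
    Q[(s, a)] += alpha * (r - Q[(s, a)])

# After training, correct labels carry higher Q-values than wrong ones.
```

The open problem Frederik points to is precisely how a reward signal like `r` would be delivered and credited inside a bio-plausible spiking network, rather than in a lookup table like this.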




Simon - This is fairly fundamental AI research. Are there any immediate practical implications that data scientists could draw on from this approach?  


Frederik - Yeah, for instance, adversarial examples, where you take a picture of a panda and add some well-crafted noise to it, and this noise will make it look like a rocket to the network. To you as a human it still looks like a panda. You can't see any rocket at all, but  


the neural network will be tricked into thinking it is definitely a rocket, and it might be absolutely certain about its prediction. A brain couldn't confuse a panda and a rocket. 
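The panda example can be made concrete with a linear toy model, a hypothetical stand-in for a trained network (real attacks such as the fast gradient sign method apply the same idea to deep networks, with much smaller perturbations than this low-dimensional sketch needs).

```python
import numpy as np

# A linear scorer standing in for a classifier: score > 0 means
# "panda", score < 0 means "rocket".
rng = np.random.default_rng(1)
n = 784                                   # "pixels"
w = rng.normal(size=n)                    # fixed model weights

# A clean input the model scores confidently as "panda".
x = 0.2 * np.sign(w) + 0.1 * rng.normal(size=n)
clean_score = w @ x                       # strongly positive

# Adversarial step: nudge every pixel slightly against the score's
# gradient, which for a linear model is just w. Each per-pixel change
# is small, but across many pixels the effects add up.
eps = 0.3
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv                     # pushed below zero
```

The reason tiny per-pixel changes flip the decision is that the perturbation is perfectly aligned with the weights, so its effect accumulates over every input dimension, which is exactly what a human eye does not notice.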


These adversarial examples are not limited to image recognition. You could also have something like that in the financial world. Say you have a neural network whose goal is to buy and sell stocks based on, for example, trending hashtags on Twitter, or just on recently placed orders. Then you might be able to trick this network into buying or selling stocks it shouldn't. And it's worth noting that for the examples above, you don't really need access to the network itself.  


It's been shown that these attacks transfer as well: if you're able to fool one network, chances are you might be able to fool another network that you haven't even seen before. So that's a threat.  


Simon - There's a lot of game theory that takes place in financial markets, so if you're saying spiking neural networks are harder to game, that's really interesting. For people considering using SNNs, or at least experimenting with that kind of approach, what advice would you give to data scientists more accustomed to artificial neural networks, and what are the practical considerations?  


Frederik - Our current hypothesis is that spiking neural networks might have a number of advantages, and we think they make a model more resilient. But, for example, there's the problem that you're not able to differentiate the activation function, so it's not really clear how we would use them for a different task, say supervised learning; there, ANNs might still be the better choice. On the other hand, SNNs come with some further advantages: because of the binary activations, they're also more energy-efficient and hardware-friendly.  


Simon - Okay, so that's good for the environment as well. Frederik, thank you very much for your time. We've barely skimmed the surface of your research, but it's really fascinating, so I'll include a link below to Frederik's homepage, which has lots more interesting papers, on noise reduction as well.  



Find out more about Frederik's work here: