IBM’s research blog describes “polyphonic music prediction using the Johann Sebastian Bach chorales dataset” achieved using “biologically plausible neurons,” a new approach to deep learning “that incorporates biologically-inspired neural dynamics and enables in-memory acceleration, bringing it closer to the way in which the human brain works.”

At IBM Research Europe we have been investigating both Spiking Neural Networks (SNNs) and Artificial Neural Networks (ANNs) for more than a decade, and one day we were struck with the thought: “Could we combine the characteristics of the neural dynamics of a spiking neuron and an ANN?” The answer is yes, we could. More specifically, we have modelled a spiking neuron using a construct comprising two recurrently-connected artificial neurons — we call it a spiking neural unit (SNU)… It enables reuse of architectures, frameworks, training algorithms and infrastructure. From a theoretical perspective, the unique biologically-realistic dynamics of SNNs become available for the deep learning community…
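The two-neuron construct can be sketched in a few lines of standard array code, which is what makes it compatible with ordinary deep learning tooling. Below is a minimal NumPy sketch assuming leaky-integrate-and-fire-style dynamics: a state neuron accumulates weighted input with a leak, and the output neuron's spike at the previous step resets that state. The variable names, the decay factor, and the threshold bias are illustrative assumptions, not IBM's actual implementation.

```python
import numpy as np

def snu_step(x_t, s_prev, y_prev, W, decay, b):
    """One time step of a toy spiking neural unit (sketch)."""
    # State neuron: integrate weighted input plus the leaked previous
    # state; (1 - y_prev) resets the state wherever the unit just spiked.
    s_t = np.maximum(0.0, W @ x_t + decay * s_prev * (1.0 - y_prev))
    # Output neuron: emit a spike (1.0) where the state crosses the
    # threshold encoded in the bias b.
    y_t = (s_t + b > 0.0).astype(float)
    return s_t, y_t
```

Because each step is built from ordinary matrix multiplies and elementwise nonlinearities, the unit can be unrolled through time and trained with the same backpropagation-through-time machinery used for recurrent ANNs.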

Furthermore, a spiking neural unit lends itself to efficient implementation in artificial neural network accelerators and is particularly well-suited for applications using in-memory computing. In-memory computing is a promising new approach for AI hardware that takes inspiration from the architecture of the brain, in which memory and computation are combined in the neurons. It avoids the energy cost of shuffling data back and forth between separate memory and processors by performing computations in the memory itself. Phase-change memory is a well-understood and promising candidate technology for such an implementation, and it is on its way to commercialization in the coming years. Our work includes an experimental demonstration of an in-memory spiking neural unit implementation whose robustness to hardware imperfections is superior to that of other state-of-the-art artificial neural network units…
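The core idea of in-memory computing can be illustrated with a toy crossbar model: weights live as device conductances, so a matrix-vector multiply happens in place, at the cost of per-device noise. The sketch below is a simplified illustration of that trade-off, not a model of IBM's phase-change memory hardware; the noise level is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(W, x, noise_std=0.05):
    """Toy in-memory matrix-vector multiply on a noisy crossbar."""
    # Weights are stored as analog conductances, so every read of the
    # array is perturbed by per-device programming/read noise.
    G = W + rng.normal(0.0, noise_std, size=W.shape)
    # The multiply-accumulate happens in the memory array itself
    # (physically via Ohm's and Kirchhoff's laws), not in a separate
    # processor.
    return G @ x
```

A network unit that still trains and infers well when every weight read carries such noise is what the robustness claim above refers to.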

The task of polyphonic music prediction on the Johann Sebastian Bach chorales dataset was to predict at each time step the set of notes, i.e. the chord, to be played in the following time step. We used an SNU-based architecture with an output layer of sigmoidal neurons, which allows a direct comparison of the obtained loss values to those from ANNs. The SNU-based network achieved an average loss of 8.72, setting the SNN state of the art for the Bach chorales dataset. An sSNU-based network further reduced the average loss to 8.39, surpassing corresponding architectures that use state-of-the-art ANN units.
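With a sigmoidal output layer, a natural reading of the per-step loss is a multi-label negative log-likelihood: each note is an independent Bernoulli whose probability is the corresponding sigmoid output. The sketch below shows that formulation; it is an assumption about the loss being compared, as the article does not spell it out.

```python
import numpy as np

def next_step_nll(probs, target):
    """Negative log-likelihood of one predicted chord (sketch).

    probs  -- sigmoid outputs, one probability per candidate note
    target -- binary vector marking which notes sound at the next step
    """
    eps = 1e-9  # guard against log(0)
    return -np.sum(target * np.log(probs + eps)
                   + (1 - target) * np.log(1 - probs + eps))
```

Averaging this quantity over time steps and pieces yields a single number of the kind quoted above, so lower values mean the network assigns higher probability to the chords that actually follow.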
Slashdot reader IBMResearch notes that besides being energy-efficient, the results “point towards the broad adoption of more biologically-realistic deep learning for applications in artificial intelligence.”



Source: Slashdot