December 15, 2015

Revealed: A Scale-Out Synaptic Supercomputer (NS1e-16)

Guest Blog by William P. Risk and Michael V. Debole with Contributions from Rodrigo Alvarez-Icaza and Filipp Akopyan.

A few months ago, we unveiled the NeuroSynaptic Evaluation (NS1e) board, which contains a single TrueNorth chip along with circuitry for interfacing the chip to sensors and real-world data. These boards were used in our August 2015 “Boot Camp” event, in which participants learned how to program the chip to implement cognitive systems [Brain-Inspired Computing Boot Camp Begins]. During Boot Camp, each NS1e board was housed in its own plastic case, and for convenience we built a rack to hold the 48 boards used during that event. Although the rack nicely organized and displayed the boards, a bulky assembly of power strips, Ethernet switches, and servers was also required to use them.

Recently, a government client requested that we build a system of 16 NS1e boards, with the power unit, Ethernet switch, and Linux server all housed in a compact, self-contained unit in which each NS1e board is seamlessly integrated, yet mounted in such a way that any individual board can be swapped in or out easily. This requirement led us to explore designs in which individual NS1e boards are mounted on cards that can be inserted vertically into a card rack (Figure 1), with all elements mounted in a small desktop rack unit (shown below).

Figure 1. NS1e Card Rack

We initially considered two similar designs, both using a 6U desktop rack with components stacked as follows, from the bottom: 1U power strip and network switch, 3U NS1e card rack, 1U NS1e card power, 1U server. We ultimately chose the design with the wiring in the back, as it provided a cleaner-looking front panel.

Figure 2. Final Design Concept

The next step was turning the concept into reality. For the most part, this was a straightforward process, since we were able to use many off-the-shelf components (server, network switch, power strip, etc.). However, powering 16 NS1e boards required a bit of engineering to reduce the space required. As a standalone board, each is typically powered by an AC-DC adapter that simply plugs into a standard outlet, but including 16 bulky “wall warts” in a 1U form factor was impractical. In addition, we wanted the capability to remotely monitor the current consumption of each individual board and to control its power state. To solve this problem, we turned to a USB-style power distribution module developed by Cambrionix. While normally intended to charge and sync cell phones and tablets, its port capacity (16 USB ports) and current limits were suitable for our purposes.

However, with typical USB connectors plugged into the Cambrionix board, the height required was close to 2U (3.5″), greater than the 1U we had allocated for the power distribution unit in the initial design. Fortunately, the card rack holding the NS1e boards did not occupy the full depth of the rack, and we had just enough room to design a step-down enclosure that uses 1U of space above the NS1e drawer and drops down to 2U in the back (see below). Finally, to add some visual appeal, we united the 16 individual NS1e boards by spreading a graphic (our award-winning visualization of the network diagram of the monkey brain) across their front panels, and added accent LED strip lighting on both sides of the drawer and below the chassis.
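As an illustration of the kind of remote monitoring this arrangement enables, the sketch below parses per-port status reports into per-board current readings. The status-line format and function names here are hypothetical — the actual Cambrionix serial protocol is not described in this post:

```python
import re

# Hypothetical per-port status line, e.g. "Port 3: 512 mA, ON".
# The real Cambrionix serial protocol uses a different format.
STATUS_RE = re.compile(r"Port\s+(?P<port>\d+):\s+(?P<ma>\d+)\s*mA,\s+(?P<state>ON|OFF)")

def parse_port_status(line):
    """Parse one status line into (port_number, current_mA, powered)."""
    m = STATUS_RE.match(line.strip())
    if m is None:
        raise ValueError(f"unrecognized status line: {line!r}")
    return int(m.group("port")), int(m.group("ma")), m.group("state") == "ON"

def total_current(lines):
    """Sum the current draw reported across all NS1e ports."""
    return sum(parse_port_status(line)[1] for line in lines)
```

In the same spirit, toggling an individual board's power state would amount to sending the hub's port on/off command over the same serial link.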

Building the system, once all the planning was complete, was relatively straightforward:

Build Evolution

Figure 3. Initial Skeleton

Figure 4. Early Prototype, Front and Back

Figure 5. Functional Prototype (Alpha)

Figure 6. Functional Prototype (Beta)

Figure 7. Custom Power Enclosure

Figure 8. Final Lab Photo

Figure 9. Final Photo

Then crated and shipped!

Figure 10. Preparing to ship system to clients

The end result is a system that provides 16 million neurons and 4 billion synapses in a package about the size of a carry-on suitcase!

December 14, 2015

Mystery!

What did Santa Claus bring from IBM Research's Brain-inspired Computing Team? I will reveal it this week here on http://modha.org and on Twitter at @DharmendraModha.

Mystery Box

December 03, 2015

NIPS 2015: Backpropagation for Energy-Efficient Neuromorphic Computing

Guest Post by Steven K. Esser, Rathinakumar Appuswamy, Paul A. Merolla, and John V. Arthur.

At the 2015 Neural Information Processing Systems (NIPS) conference, in a paper entitled Backpropagation for Energy-Efficient Neuromorphic Computing, we will be presenting our latest research on adapting machine learning techniques to train TrueNorth networks. In essence, this is our first step towards bringing together deep learning (for offline learning) and brain-inspired computing (for online delivery).

This work is driven by an interest in using neural networks in embedded systems to solve real-world problems. Such systems must satisfy both performance requirements, namely accuracy and generalizability, and platform requirements, such as a small footprint, low power consumption, and real-time capabilities. We have seen many recent examples demonstrating that machine learning is able to meet the performance needs, and others showing that neuromorphic approaches such as TrueNorth are well suited to the platform needs.

An interesting challenge arises in trying to bring machine learning and neuromorphic hardware together. To achieve high efficiency, TrueNorth uses spiking neurons, discrete synapses and constrained connectivity. However, backpropagation, the algorithm at the core of much of machine learning, uses continuous-output neurons, high precision synapses, and typically operates with no limits on the number of inputs per neuron. How then can we build systems that take advantage of algorithmic insights from machine learning and the operational efficiency of neuromorphic hardware?

In our work, we demonstrate a learning rule and network topology that reconcile this apparent incompatibility by training in a continuous and differentiable probabilistic space that has a direct correspondence to spikes and discrete synaptic states in the hardware domain. Using this approach, we achieved near-state-of-the-art accuracy on the MNIST handwritten digit dataset (99.42%) and the best accuracy to date using spiking neurons and/or low-precision discrete synapses, while demonstrating three orders of magnitude less energy per classification than the next-best low-power approach.
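A minimal sketch of the core idea — assuming a plain Bernoulli sampling scheme, which is illustrative only and not the paper's actual training rule — is that synaptic connection probabilities are learned in a continuous space offline, and discrete synapses are then drawn from them for deployment, so that the discrete network matches the trained one in expectation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Stand-in for connection probabilities learned offline by backpropagation;
# each entry is the probability that a discrete synapse is present.
p = rng.uniform(0.0, 1.0, size=(4, 3))

# Deployment: sample one discrete (0/1) synapse matrix for the hardware.
synapses = (rng.uniform(size=p.shape) < p).astype(np.int8)

# Averaged over many draws, the discrete synapses recover the trained
# probabilities -- this correspondence is what makes gradient-based
# training in the continuous space meaningful for discrete hardware.
draws = (rng.uniform(size=(20000,) + p.shape) < p).mean(axis=0)
```

The same trick applies to spikes: a continuous neuron output in [0, 1] can be read as the probability of emitting a spike.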

Accuracy and energy of networks trained using our approach, running on the TrueNorth chip. Ensembles of multiple networks were tested, with ensemble size indicated next to each data point; it is possible to trade off accuracy against energy.
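The ensembling itself can be as simple as averaging per-class scores across member networks. The sketch below assumes a plain score-averaging rule, which may differ from the exact combination used in the paper:

```python
import numpy as np

def ensemble_predict(member_scores):
    """Average per-class scores across ensemble members, then take argmax.

    member_scores: list of per-class score vectors, one per network.
    Each extra member costs roughly one more network's worth of energy
    per classification but typically improves accuracy.
    """
    return int(np.argmax(np.mean(member_scores, axis=0)))
```

This is the accuracy-versus-energy knob: adding members shifts a point rightward (more energy) and usually upward (higher accuracy).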

The software behind the published algorithm is already in the hands of nearly 100 developers, most of whom attended the August 2015 Boot Camp, and we expect a number of new results in 2016.

Looking to the future, we are working to expand the repertoire of machine learning approaches for training TrueNorth networks. We have exciting work brewing in our lab using TrueNorth with convolutional networks, and have achieved near-state-of-the-art results on a number of additional datasets.

The paper can be found at https://papers.nips.cc/paper/5862-backpropagation-for-energy-efficient-neuromorphic-computing

Stay tuned!

November 30, 2015

R&D 100 Award, Editor's Choice

TrueNorth received a 2015 R&D 100 Award and was named Editor's Choice in the IT/Electrical category.

Summer 2016 Internships

Apply here.

October 16, 2015

Career Opportunity: Brain-inspired Computing

Apply here and here. If the links sometimes do not work because of a new system, please visit here and search for 10095BR and 13809BR.

September 26, 2015

Digital India Dinner with Narendra Modi

On September 26, 2015, I attended a Digital India Dinner with the Prime Minister of India, the Honorable Narendra Modi. Among the attendees were Satya Nadella (CEO, Microsoft), Sundar Pichai (CEO, Google), John Chambers (CEO, Cisco), and Shantanu Narayen (CEO, Adobe).

Narendra Modi.

August 17, 2015

IBM’s ‘Rodent Brain’ Chip Could Make Our Phones Hyper-Smart

See WIRED article here.

August 10, 2015

Exploring neuromorphic natural language processing with IBM's TrueNorth

Guest post by Peter U. Diehl from ETH Zurich. Peter's research is focused on bringing together the fields of neuromorphic computing, machine learning and computational neuroscience.

At the Telluride Neuromorphic Cognition Engineering Workshop 2015, a team from IBM Research (Rodrigo Alvarez-Icaza, John Arthur, Andrew Cassidy, and Paul Merolla) brought their newly developed low-power neuromorphic TrueNorth chip to introduce the platform to a broader research community. Among the other participants were Guido Zarella, a principal research scientist at the MITRE Corporation and an expert in natural language processing (NLP); Bruno Pedroni, a PhD student at UCSD and a previous intern at IBM Research Almaden; Emre Neftci, a professor at UC Irvine and a pioneer in using deep learning with spiking neural networks; and myself. Together we pursued the ambitious goal of bringing deep-learning-based NLP to neuromorphic systems.

Driven by the ever-increasing amount of natural language text available on the world wide web and by the necessity of making sense of it, the field of NLP has shown dramatic progress in recent years. Simultaneously, the field of neuromorphic computing has started to emerge. Neuromorphic systems are modeled after the brain, which leads to hardware that consumes orders of magnitude less power than its conventional counterparts. However, such a new architecture requires new algorithms, since most existing ones are designed for von Neumann architectures and usually cannot be mapped directly.

At Telluride, the group mentioned above and I were eager to fill the algorithmic gap of NLP for neuromorphic computing by mapping existing state-of-the-art NLP systems for von Neumann architectures to TrueNorth. Achieving this goal would enable a range of highly attractive technologies, such as high-quality analysis of user input on mobile devices with negligible battery drain, or data centers that understand queries while consuming orders of magnitude less power than conventional high-performance computers. During the workshop we focused on two tasks.

  • The first task was sentiment analysis on TrueNorth, that is, predicting the "happiness" associated with given words. Our system, called "TrueHappiness", uses a fully-connected feedforward neural network that is trained using backpropagation and then converted to a TrueNorth-compatible network once training has finished.
  • The second task was question classification, where we identify what kind of answer the user is looking for in a given question. Similar to the design of TrueHappiness, we start with deep learning techniques, that is, we train a recurrent neural network using backpropagation and afterwards convert it to a spiking neural network suitable for TrueNorth.
For both tasks we managed to implement end-to-end systems in which the user can type words that appear in Wikipedia, which are then used for sentiment analysis or (in the case of a question) analyzed with regard to the desired content. Demos and details about the training and conversion of both systems will soon be available at peterudiehl.com.
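The conversion step both systems rely on rests on a standard rate-coding correspondence: a trained continuous-valued neuron's activation is approximated by the firing rate of an integrate-and-fire spiking neuron. The toy sketch below illustrates that correspondence in isolation; it is not the actual TrueNorth mapping, and all names and parameters are illustrative:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def spike_rate(weights, inputs, steps=1000, threshold=1.0):
    """Firing rate of an integrate-and-fire neuron with subtractive reset.

    For a constant input drive in [0, threshold), the firing rate over
    many time steps converges to relu(weights . inputs) / threshold.
    """
    drive = float(np.dot(weights, inputs))
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += drive                # integrate the weighted input
        if v >= threshold:        # fire and reset by subtraction
            v -= threshold
            spikes += 1
    return spikes / steps

w = np.array([0.5, -0.25])
x = np.array([0.8, 0.4])
# spike_rate(w, x) closely approximates relu(np.dot(w, x)) == 0.3
```

The practical consequence is that a network trained with continuous activations can be deployed on spiking hardware by reading each activation as a firing rate.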

Although the designed algorithms are not yet viable for commercial-scale applications, since we are just getting started, they provide an important first step and a generally applicable framework for mapping traditional deep learning systems to neuromorphic platforms, thereby opening up neuromorphic computing and deep learning to entirely new applications. This is also the vision at the IBM TrueNorth Boot Camp at IBM Research in San Jose, where I am at the moment. Together with over 60 other participants, we are diving into TrueNorth programming, creating new neuromorphic algorithms and mapping existing von Neumann algorithms to TrueNorth to advance the state of the art in low-power computing.


This weblog is licensed under a Creative Commons License.
The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.