August 03, 2015

Brain-inspired Computing Boot Camp Begins

Today, August 3, 2015, at 9am PST, we launched a three-week-long Boot Camp. Read my thoughts here. Representing five continents, sixty-plus participants are attending from the following institutions, including IBMers from IBM Research - Australia, IBM Research - Tokyo Lab, IBM Brazil, and IBM UK:

  • Air Force Research Lab, Rome, NY
  • Argonne National Lab
  • Arizona State University
  • Army Research Lab
  • California Institute of Technology
  • Cornell University
  • Johns Hopkins University
  • Imperial College, London
  • Institute of Neuroinformatics, ETH Zurich
  • Lawrence Berkeley National Lab
  • Lawrence Livermore National Lab
  • National University of Singapore
  • Naval Research Lab
  • Pennsylvania State University
  • Riverside Research
  • Rensselaer Polytechnic Institute
  • SRC
  • Syracuse University
  • Technology Services Corporation
  • University of California, Davis
  • University of California, Los Angeles
  • University of California, San Diego
  • University of California, Santa Cruz
  • University of Dayton
  • University of Pittsburgh
  • University of Tennessee, Knoxville
  • University of Western Ontario
  • University of Wisconsin-Madison
In addition, in attendance are research interns at IBM Research - Almaden from:
  • MIT
  • Pennsylvania State University
  • University of California, Irvine
  • University of Geneva
  • University of Heidelberg
  • University of California, San Diego

Full House!

Here is a rough transcript of my opening remarks.

History

When history looks back and asks "When did Brain-inspired Computing reach critical mass?", I believe the answer will be 9am PST on August 3, 2015.

Welcome

Thank you for investing three precious weeks of your life; we will strive to make it worth it. A special welcome to the Telluride participants who have come back for another three weeks. All of you are a very special group of people -- incredibly talented, excited, and engaged researchers who have the right background. Because of space limitations, many could not be admitted. Today is the first time that IBM is opening the SyNAPSE Ecosystem. In fact, you are getting access to these boards before our own team. According to the dictionary, to pioneer is to "develop or be the first to use or apply (a new method, area of knowledge, or activity)". So all of you are pioneers. It takes two hands to clap, and you are our partner in this quest to bring brain-inspired computers to society.

Where are we?

With 1 million neurons, 256 million synapses, and 4,096 cores interconnected via a network-on-chip, all at less than 100 mW of power consumption, TrueNorth is literally a supercomputer the size of a postage stamp that consumes the power of a hearing-aid battery.
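As a quick sanity check of these headline figures (assuming the published per-core numbers of 256 neurons and a 256x256 synaptic crossbar, which are not restated above):

```python
# Sanity check of TrueNorth's headline numbers, assuming the published
# per-core figures of 256 neurons and a 256x256 synaptic crossbar.
cores = 4096
neurons_per_core = 256
synapses_per_core = 256 * 256            # one crossbar per core

neurons = cores * neurons_per_core       # 1,048,576   -> "1 million neurons"
synapses = cores * synapses_per_core     # 268,435,456 -> "256 million" (256 * 2^20)

print(f"{neurons:,} neurons and {synapses:,} synapses across {cores:,} cores")
```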

The chip has an entirely novel parallel, distributed, modular, scalable, fault-tolerant, event-driven architecture that breaks with the nearly 70-year dominance of the von Neumann blueprint. Today, the effort is much broader than the chip. It is an end-to-end ecosystem consisting of development boards; a simulator; a programming language; an integrated programming environment; a library of algorithms as well as applications; firmware; deep learning tools; a teaching curriculum; and cloud enablement.

We got here via a sustained 10-year effort representing a cumulative 250 person-years.

Where we are headed is an ever more usable and useful ecosystem, and what is forthcoming is a sequence of ever denser and more energy-efficient chips, with the end goal of literally building a brain-in-a-shoebox.

What is the best possible outcome?

The best possible outcome is to map the entire existing body of neural network algorithms and applications to this energy-efficient substrate; to invent entirely new algorithms that were heretofore impossible to imagine; and then to compose these algorithms to exhibit mind-like behavior on top of the brain-like substrate.

What is the challenge?

You have all heard about deep learning. Deep learning and neuromorphic computing are two sides of the same coin. Deep learning is about capability in terms of application performance and accuracy. Brain-inspired computing is about energy efficiency, volume efficiency, speed efficiency, and scalability. Deep learning is used to program neuromorphic computing; neuromorphic computing is used to deliver deep learning. The goal is to bring together these two complementary revolutions.

To exploit the full potential of the substrate, the challenge is to shift the thinking. Energy efficiency was realized by an architectural shift to spiking neurons, limited-precision synapses, and constrained connectivity within and between cores.
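To make these constraints concrete, below is a minimal, illustrative sketch of a spiking "core" with low-precision integer weights. It is a generic leaky integrate-and-fire model with made-up parameters, not TrueNorth's actual neuron equation or core configuration:

```python
import numpy as np

# Illustrative only: a generic leaky integrate-and-fire core with
# low-precision integer weights. This is NOT TrueNorth's actual neuron
# model; it merely shows the flavor of computing with spikes and
# constrained synapses instead of floating-point multiply-accumulates.
rng = np.random.default_rng(0)

n_axons, n_neurons = 256, 256
weights = rng.integers(-1, 2, size=(n_axons, n_neurons))   # synapses in {-1, 0, +1}
potential = np.zeros(n_neurons)
leak, threshold = -1, 10                                    # made-up parameters

def tick(input_spikes):
    """Advance one discrete time step given a binary vector of input spikes."""
    global potential
    potential = potential + input_spikes @ weights + leak   # integrate input and leak
    fired = potential >= threshold                          # neurons at/above threshold spike
    potential[fired] = 0                                    # reset membrane potential after a spike
    potential = np.maximum(potential, 0)                    # clamp potentials at zero
    return fired

spikes_in = (rng.random(n_axons) < 0.1).astype(int)         # ~10% of axons active this tick
print(int(tick(spikes_in).sum()), "neurons fired")
```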

The reason it is possible to shift the thinking is that the brain itself achieves its remarkable function under similar constraints, and that neural networks are remarkably forgiving structures.

The first thought of nearly everyone is to train a classical network and, as an afterthought, map it to hardware. This approach leads to inefficient and inelegant solutions. Thinking has to shift from "train and constrain" to a mindset of "constrain and train". The challenge is to design efficient algorithms from the start. As a motivational example in historical context, consider sorting algorithms for today's computers, where quadratic-time algorithms were upturned by Tony Hoare's beautiful and efficient quicksort. Tony Hoare was awarded the Turing Award for his many contributions, and perhaps one of you, someday, might receive the same honor, perhaps for brain-inspired computing. Neural networks are remarkably versatile structures -- there are often multiple paths to a solution. Amongst these, the challenge is to seek the most efficient, most elegant, and most beautiful algorithms. The opportunity for innovation is huge and unprecedented. Supervised learning, unsupervised learning, and reinforcement learning, as well as feedforward and recurrent networks, are all up for grabs. The frontier is open, and it is yours to capitalize on. I predict that by December 2016 there will be at least one major breakthrough -- a fresh new algorithm invented for neuromorphic computing -- from one of you. My hope is that each of you will be part of at least one such breakthrough.

What are we teaching at the Boot Camp?

We are teaching an overview of the ecosystem. We are teaching end-to-end examples with simple datasets so as to completely and comprehensively touch on the entire tool set. In particular, we are teaching how we are leveraging Caffe (a deep learning tool) to produce native-TrueNorth programs.

In the final week, we will provide clinics and help from both infrastructure and algorithmic perspectives to ensure that you are successful after the Boot Camp, which is really the whole point of this exercise.

How are we teaching it?

The approach is to teach the building blocks. We are looking forward to seeing what you produce with them. The possibilities are endless because the permutations and combinations of the underlying building blocks are limited only by your imagination and creativity, which I know is infinite.

Boot Camp is about teaching the techniques of fishing, and not about catching one big, fat fish right away. It is about bricks, and not about buildings. It is about end-to-end integration, and not fragmentation. It is simple, and not complex. It is practical, and not theoretical. It is about breadth across tools and techniques, and not about depth in a particular application. It is about ingredients and recipes, and not tasty dishes. It is atomic, and not molecular. It is sequential, where each step leads to the next level. It is slow, and not fast. It is flexible, and not a priori cast in stone.

Now, our team has worked very hard, but much remains to be done. You are experiencing a living and growing set of tools, not something that is polished and dead. We welcome all feedback, and will humbly accept it.

Conclusion

Nearly sixty years ago, IBM released FORTRAN to the world. At that time, there were only a few hundred, or at best a few thousand, programmers. Today, we, a team of 30 people, feel that our ecosystem has instantaneously tripled in size by adding 60 new partners. The hope is that each of you will become an ambassador for the project. A seed carrier. And, together, we will draw in an ever greater number of collaborators to create a spiral of value co-creation.

On our part, we are committed to continue to provide systems and tools to enable you to build more algorithms and applications.

This is a historic moment. Let us partner and together we can bring brain-inspired computers to society.

In conclusion, I invite the whole Brain-inspired Computing Team to join with me in welcoming our friends.

Finally, for all our guests, please join me in thanking the Brain-inspired Computing Team for countless nights, weekends, and their blood, sweat, and tears in making this event a reality.

Thank you.

August 02, 2015

Introducing an Ecosystem for Brain-inspired Computing

Read my blog here.

August 01, 2015

Telluride Neuromorphic Cognition Engineering Workshop

Guest Post by Rodrigo Alvarez-Icaza, John Arthur, Andrew Cassidy, and Paul Merolla.

Each July for the last 20 years or so, a group of neuroscientists, engineers, and computer scientists comes together for a three-week neuromorphic engineering workshop in the scenic town of Telluride. Telluride is best known for its ski slopes, but in the summer it is the perfect place to hunker down and work on collaborative, hands-on neuromorphic projects. This year, four of us who are IBM research scientists in the Brain-inspired Computing Group brought IBM's latest-generation TrueNorth chip to Telluride with one goal in mind: enable workshop participants to use TrueNorth for their own projects. To our surprise and delight, although many of the participants had never actually seen or used TrueNorth before, they were all up and running in almost no time. Here is a quick rundown of what happened.

Telluride Town Center


The Setup:

The bulk of the workshop took place at the Telluride elementary school, and in particular, in one of the classrooms. Shown below are the four of us, arriving at our new home for the next three weeks.

Rodrigo Alvarez-Icaza, John Arthur, Paul Merolla, and Andrew Cassidy (left to right) at the Telluride elementary school
Photo Credit: Tobi Delbruck

We brought a bunch of goodies to Telluride, including 10 of our latest mobile development boards, some of which are being unpacked by Rodrigo (top). The board (bottom) has a TrueNorth chip (SyNAPSE), an FPGA, and a host of sensors and connectors. The basic setup is that participants can log into these boards through our local servers and run their real-time spiking neural networks.

Rodrigo Alvarez-Icaza unpacking the boards

Close up of a TrueNorth mobile development board

Hands on projects:

Our IBM group, along with Arindam Basu (NTU), ran a workgroup called Spike-Based Cognitive Computing. In this group, we divided into sub-projects. The projects based on TrueNorth included:

  • ATIS camera: MNIST classifier
     Garrick Orchard, (Singapore Institute for Neurotechnology) and Kate Fischl (JHU)
  • Sparse Representations for speech recognition (TIDIGITs)
     Jie "Jack" Zhang (JHU) and Kaitlin Fair (Georgia Tech/AFRL)
  • Word vector associative memory (semantic similarity)
     Dan Mendat (JHU) and Guillaume Garreau (JHU)
  • Word vector analogies
     Dan Mendat (JHU)
  • Word "happiness" score (regression) using word vectors
     Peter Diehl (INI) and Bruno Pedroni (UCSD)
  • Question (sentence) classification using Recurrent NNs
     Emre Neftci (UC Irvine), Peter Diehl (INI) and Bruno Pedroni (UCSD)
  • FSMs and WTAs (for working memory, etc.)
     Suraj Honnuraiah (Institute of Neuroinformatics)
  • Sensors to TN:
     DAVIS: Luca Longinotti (iniLabs)
     spiking cochlea: Shih-Chii Liu (Institute of Neuroinformatics)
     spiking sonar: Timmer Horiuchi (U. Maryland)
     spiking radar: Saeed Afshar (University of Western Sydney)
     FPGA cochlea: Guillaume Garreau (JHU)

You can find more information on these projects on the Neuromorph site. Here, we highlight one of the projects, which culminated with a real time demo!


Real-time digit classification on TrueNorth with a spiking retinal camera front end:

In this project, the goal was to connect a spiking retinal camera (called the ATIS) to a TrueNorth chip to perform pattern classification. Using the ATIS as a front end for TrueNorth opens up the possibility of a fast, low-power object recognition system that operates using only spikes.

There are two main steps involved in realizing the real-time digit classification system. The first step consists of creating and training the object recognition model to run on TrueNorth. The second step is to connect the ATIS to TrueNorth to achieve real-time operation.


We made use of a publicly available spike-based conversion of the MNIST dataset, which was recorded with the ATIS sensor mounted on a pan-tilt unit while viewing MNIST digits on a computer monitor. Details of the dataset creation, as well as a download of the dataset itself, are available at: http://www.garrickorchard.com/datasets/n-mnist

Video 1

Video 2

The continuous spike stream was converted to static images for training by accumulating spikes for 10 ms at a time. These static images were used to create a Lightning Memory-Mapped Database (LMDB) on which training was performed using the Caffe deep learning framework (modified to support TrueNorth). A simple one-layer neural network was used, with 100 neurons trained to respond to each of the 10 digits. The final output of the system is a histogram of the number of spikes output by the neurons representing each class. The class with the most spikes is deemed the most likely output.
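To make these two steps concrete, here is a minimal sketch of the frame-accumulation and spike-count-voting logic. The event layout (x, y, timestamp in microseconds) and the class-wise ordering of output neurons are assumptions for illustration, not the actual workshop code:

```python
import numpy as np

def events_to_frame(events, t_start_us, window_us=10_000, shape=(28, 28)):
    """Accumulate events in [t_start_us, t_start_us + window_us) into a static frame.

    `events` is assumed to be an iterable of (x, y, timestamp_us) tuples."""
    frame = np.zeros(shape, dtype=np.float32)
    for x, y, t in events:
        if t_start_us <= t < t_start_us + window_us:
            frame[y, x] += 1.0
    return frame

def classify_by_spike_count(output_spike_counts, neurons_per_class=100, n_classes=10):
    """Pick the digit whose block of 100 output neurons fired the most spikes."""
    per_class = np.asarray(output_spike_counts).reshape(n_classes, neurons_per_class).sum(axis=1)
    return int(per_class.argmax())   # histogram argmax: the class with the most spikes wins
```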


Real time results:

A laptop powers and interfaces to the ATIS sensor, which is mounted on a helmet worn by a user. This laptop performs simple noise filtering on ATIS spikes and activity-based tracking of the MNIST digit on a screen. Spikes occurring within the tracked 28x28-pixel region of interest are remapped to target corresponding cores and axons on TrueNorth (sometimes multiple axons per spike). The laptop accumulates spikes until 130 spikes are available for classification, at which time all 130 spikes are communicated to TrueNorth over UDP.
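A rough sketch of the laptop-side plumbing described here might look like the following. The board address, the UDP packet layout, and the pixel-to-axon mapping are hypothetical placeholders for illustration, not the actual interface used at the workshop:

```python
import socket
import struct

# Hypothetical sketch: remap filtered ATIS events inside the tracked 28x28
# region of interest to (core, axon) targets, and ship a batch of 130 spikes
# to the board over UDP once enough have accumulated.
BOARD_ADDR = ("192.168.1.10", 5000)   # placeholder address and port
BATCH_SIZE = 130
NEURONS_PER_CORE = 256

def pixel_to_targets(x, y, roi_x, roi_y):
    """Map a pixel inside the 28x28 ROI to one or more (core, axon) targets."""
    px, py = x - roi_x, y - roi_y
    if not (0 <= px < 28 and 0 <= py < 28):
        return []                              # outside the tracked region: drop the spike
    flat = py * 28 + px                        # flatten to 0..783
    return [(flat // NEURONS_PER_CORE, flat % NEURONS_PER_CORE)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
batch = []

def on_event(x, y, roi_x, roi_y):
    """Call once per filtered ATIS event; sends a UDP packet when 130 spikes accumulate."""
    global batch
    batch.extend(pixel_to_targets(x, y, roi_x, roi_y))
    if len(batch) >= BATCH_SIZE:
        payload = b"".join(struct.pack("<HH", core, axon) for core, axon in batch[:BATCH_SIZE])
        sock.sendto(payload, BOARD_ADDR)
        batch = batch[BATCH_SIZE:]
```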

The trained neural network runs on TrueNorth and output spikes are communicated to a second laptop using UDP. This second laptop performs visualization of the results.

Video 3

On the spiking MNIST test set, we achieved 76%-80% accuracy at 100 classifications/sec. The goal was not really classification accuracy per se, but rather to learn to create end-to-end demonstrations. The classification rate (100/sec) is limited by the fact that we use 10 ms of data for each classification, but TrueNorth is capable of performing 1000 such classifications per second. In the real-time system, the classifier on TrueNorth uses only 4 cores (0.1% of the chip). Temporally, utilization of this 0.1% of the physical chip is below 10% (i.e., the 4 cores are idle more than 90% of the time) when performing 100 classifications per second.
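The rate and utilization figures follow directly from the numbers above; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures quoted above.
window_ms = 10
classifications_per_sec = 1000 / window_ms   # 100/sec, limited by the 10 ms accumulation window
peak_classifications_per_sec = 1000          # what TrueNorth itself could sustain, per the text

cores_used, cores_total = 4, 4096
chip_fraction = cores_used / cores_total     # ~0.001, i.e. roughly 0.1% of the chip

print(f"{classifications_per_sec:.0f}/sec used of {peak_classifications_per_sec}/sec possible; "
      f"{chip_fraction:.2%} of cores used")
```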

July 16, 2015

Education Session for US Senate & House

Senate and House

July 08, 2015

Energy-efficient neuromorphic classifiers

Professor Stefano Fusi of the Center for Theoretical Neuroscience at Columbia University (who was part of the IBM team for DARPA SyNAPSE in Phases 0, 1, and 2) has released a very interesting pre-print entitled "Energy-efficient neuromorphic classifiers". Here is the abstract (highlights are mine):
Neuromorphic engineering combines the architectural and computational principles of systems neuroscience with semiconductor electronics, with the aim of building efficient and compact devices that mimic the synaptic and neural machinery of the brain. Neuromorphic engineering promises extremely low energy consumptions, comparable to those of the nervous system. However, until now the neuromorphic approach has been restricted to relatively simple circuits and specialized functions, rendering elusive a direct comparison of their energy consumption to that used by conventional von Neumann digital machines solving real-world tasks. Here we show that a recent technology developed by IBM can be leveraged to realize neuromorphic circuits that operate as classifiers of complex real-world stimuli. These circuits emulate enough neurons to compete with state-of-the-art classifiers. We also show that the energy consumption of the IBM chip is typically 2 or more orders of magnitude lower than that of conventional digital machines when implementing classifiers with comparable performance. Moreover, the spike-based dynamics display a trade-off between integration time and accuracy, which naturally translates into algorithms that can be flexibly deployed for either fast and approximate classifications, or more accurate classifications at the mere expense of longer running times and higher energy costs. This work finally proves that the neuromorphic approach can be efficiently used in real-world applications and it has significant advantages over conventional digital devices when energy consumption is considered.

April 30, 2015

World Economic Forum: Top 10 Emerging Technologies of 2015

The World Economic Forum named "Neuromorphic technology" one of its "Top 10 Emerging Technologies of 2015" and specifically cited IBM's TrueNorth chip (see page 12 of the report).

April 16, 2015

Cognitive Systems Colloquium: Videos

Guest Post by Ben G. Shaw, Organizing Chair of Cognitive Systems Colloquium.

This is continued from the previous post dated November 12, 2014.

To highlight the transformative potential of IBM's Neurosynaptic System and its impact on computation in the Cognitive Era, IBM Research hosted nearly 200 eminent thinkers and pioneers in the field of brain-inspired computing at the IBM Research - Almaden Cognitive Systems Colloquium. The program featured over a dozen outstanding speakers and distinguished panelists. Attendees included thought leaders and potential early adopters from government, industry, academia, research, and the venture community.

Recurring Themes of the Day:

  • The Brain: how advances in understanding nature's most efficient and powerful computational substrate are revealing new paradigms for computing
  • Technology: as von Neumann computation comes up against fundamental limitations that are bringing Moore's law to an end, how new approaches can revolutionize important classes of computation
  • Applications: how efficient, embedded neural computation may benefit individuals, businesses and society by making objects, environments and systems more aware and responsive
  • Ecosystems: how new technologies and offerings will gain breadth, depth and momentum to transform industries from robotics to healthcare, agriculture to mobile devices, transportation to public safety.

SyNAPSE Deep Dive:

In addition to reviewing the state of knowledge in the field of brain-inspired computing and a forward-looking panel discussion, participants took a concentrated "Deep Dive" into the recently announced IBM Neurosynaptic System including the 1-million neuron TrueNorth chip, architecture, development boards, programming paradigm, applications, education and ecosystem. Inspired by the brain, TrueNorth is an architecture and a substrate for non-von Neumann, event-driven, multi-modal, real-time spatio-temporal pattern recognition, sensory processing and integrated sensor-actuator systems. TrueNorth's extreme power efficiency and inherent scalability will revolutionize applications in mobile and embedded systems, at the same time allowing neural algorithms to achieve previously unattainable scales, running quickly, efficiently and natively in hardware.

Distinguished Speakers and Panelists:

Audience:

The audience included luminaries such as Turing Award winner Ivan Sutherland and von Neumann Prize awardee Nimrod Megiddo. Four IBM Fellows were in attendance (Ronald Fagin, C. Mohan, Hamid Pirahesh, Stuart Parkin), as were prominent founders and visionaries in the field of brain-inspired computing, including Warren Hunt (UT Austin), Tim Lance (NYSERNet), Einar Gall (Neurosciences Institute), Gert Cauwenberghs (UCSD), Ken Kreutz-Delgado (UCSD), and Jeff Krichmar (UC Irvine).

March 18, 2015

Distinguished Alumnus Award

On March 10, 2015, at the 56th Foundation Day of IIT Bombay, I was selected for the Distinguished Alumnus Award. I am grateful for the education that I received at IIT Bombay, for my teachers, for my fellow students, for my hostel mates, for the mess workers who fed me for four years, for the support staff, for my colleagues at IBM, and, of course, for my family. Of the nearly 50,000 alumni to date, roughly 100 have been honored. Previous awardees include Nandan Nilekani and Kanwal Rekhi, as well as two IBM Fellows, Subramanian Iyer and Ramesh Agarwal.

IIT Bombay
Photo Credit: Hita Bambhania-Modha

February 26, 2015

IBM Brings the Brain Chip to Capitol Hill

For details, see here.

Capitol Hill
