Inside IBM's cognitive chip
There are enormous benefits for any architecture that puts memory adjacent to the processing units on the same chip. You eliminate the latency of going from registers to L1 cache to L2 cache to main memory, which gives enormous speed improvements. For example, consider a simulation where many cells are hidden from a point of view (graphics) or are changing slowly. An on-chip network of simple processors and memory could quickly identify the cells likely to change in the current iteration, and an attached array of more powerful processors could then do the heavy-duty processing on just that reduced set, using fewer of them.
Some people I know got a 100x improvement in image processing by using a chip with a network of simple 18-bit floating-point processors to eliminate hidden elements in an image. The reduced set of cells could then be processed in real time.
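Here's a minimal sketch of that two-stage idea in software terms. Everything in it is illustrative: the names, the threshold test, and the placeholder update are my own stand-ins, not the actual pipeline described above or anything from IBM's chip.

    from dataclasses import dataclass

    @dataclass
    class Cell:
        value: float
        prev: float
        threshold: float = 0.01

    def cheap_change_filter(cell):
        # Stand-in for the network of simple processors: a low-precision
        # test for "did this cell change enough to matter this iteration?"
        return abs(cell.value - cell.prev) > cell.threshold

    def expensive_update(cell):
        # Stand-in for the heavy-duty attached processors.
        cell.prev = cell.value
        cell.value = cell.value * 0.9 + 0.05  # placeholder physics

    def step(cells):
        active = [c for c in cells if cheap_change_filter(c)]
        for c in active:  # usually a small fraction of all cells
            expensive_update(c)
        return len(active)

    cells = [Cell(value=0.5, prev=0.5) for _ in range(1000)]
    for c in cells[::100]:  # pretend only 1 in 100 cells moved
        c.value += 0.1
    print(step(cells), "of", len(cells), "cells needed the expensive pass")

The speedup comes entirely from the active set being much smaller than the full set; the cheap filter runs everywhere, but it's simple enough (18-bit floats sufficed above) to be nearly free.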
There is room for lots of interesting products on a budget smaller than IBM's, although it is hard to do hardware on the budget of a YC startup.
The press release seems to have more information. http://www-03.ibm.com/press/us/en/pressrelease/35251.wss
"One core contains 262,144 programmable synapses and the other contains 65,536 learning synapses."
I'm more curious how one would actually program such a chip and, considering the amount of memory required to work through a learning dataset, how they interface between external and internal memory. I can't believe it would just be a matter of compilers; otherwise, what would be the use of the new architecture, especially as they claim it is a departure from the von Neumann paradigm?
I'd be very interested to know more details about the structure of the RAM units. (Neither the original article nor the IBM press release at http://www-03.ibm.com/press/us/en/pressrelease/35251.wss talks much about it. The videos on the SyNAPSE website are also pretty distilled.) A biological synapse does, in a sense, serve as one of thousands of tiny memory units on a neuron. But synapses are more capable than a single bit. A packet of neurotransmitter released into the synaptic cleft does activate the synapse, but the synapse itself (technically, the post-synaptic group of neurotransmitter receptors) responds in a graded (i.e., non-binary) manner, strongly dependent on a) its recent history of activation, b) other synaptic activations in the vicinity, and c) the local biophysical environment. I think I'd want at least 8 bits for the signal amplitude, plus a few more to store some associated state variables.
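To make that concrete, here is a guess (mine, not anything published about the IBM design) at what per-synapse state might look like if you honored those constraints: an 8-bit graded amplitude plus a few state variables for (a), (b), and (c).

    from dataclasses import dataclass

    # Purely illustrative per-synapse state; the fields and the response
    # rule are my own stand-ins, not the IBM design.
    @dataclass
    class Synapse:
        weight: int = 128             # 8-bit graded amplitude, 0-255
        recent_activity: float = 0.0  # (a) decaying trace of its own activation
        local_drive: float = 0.0      # (b) summed activity of nearby synapses
        modulation: float = 1.0       # (c) stand-in for the biophysical environment

        def respond(self, transmitter: float) -> float:
            # Graded, history-dependent response rather than a 0/1 bit.
            gain = self.modulation * (1.0 + self.recent_activity + self.local_drive)
            self.recent_activity = 0.9 * self.recent_activity + 0.1 * transmitter
            return (self.weight / 255.0) * gain * transmitter

Even this toy version needs several bytes per synapse, which is why the bit-width of those 262,144 "programmable synapses" matters so much.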
I'd guess that the IBM team would have to replicate a lot of that "hardware" to get the emergent behavior of a piece of cortex. I do know that computational neuroscientists have struggled for the last 15 years or so to get reduced neuron models (i.e., point neurons with a small number of rudimentary synapses) to even crudely mimic full-featured neurons. It looks like this group recognizes the issue, in that they are already starting with many hundreds of synapses.
Another thing to keep in mind is that a processor inspired by brain hardware will most likely be very efficient at the tasks a mammalian brain is good at (pattern recognition, pattern completion, pattern separation, etc.), but will concomitantly be worse at things that a standard serial-instruction processor excels at. I'd bet that the IBM folks are looking to merge a cognitive processor and a classical processor into a single unit.
Also, I know very little about compilers, assembly language, and other close-to-the-metal issues, but it appears that this processor would be very different to program.
There's been another submission regarding IBM's new technology, with a lot of good comments:
Very cool. TLDR:
1. RAM is no longer separate from the CPU; the RAM transistors are intermingled with the CPU transistors on the same silicon, all interconnected in what sounds like a kind of neural mesh.
2. Computational units are the 'neurons', RAM units are the 'synapses'. RAM units send signals to CPU units, causing the latter to perform simple computations, the results of which are sent to other RAM units, which in turn send new signals to other CPU units, and so on (see the sketch after this list).
3. The primary benefits are decreased power consumption (no electricity wasted shuttling data back and forth across a memory bus) and improved performance on certain types of algorithms, like pattern recognition.
4. It probably won't be as good as contemporary CPUs at other tasks, though.
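A toy event-driven reading of point 2, in software (my own analogy, not IBM's actual programming model): synapses hold the stored state, neurons do a simple computation, and signals hop between them.

    from collections import deque

    class Synapse:                      # the 'RAM' unit: holds state
        def __init__(self, weight, target):
            self.weight = weight
            self.target = target        # index of the neuron it drives

    class Neuron:                       # the 'CPU' unit: simple computation
        def __init__(self, threshold=1.0):
            self.potential = 0.0
            self.threshold = threshold
            self.out_synapses = []

        def receive(self, signal):
            self.potential += signal
            if self.potential >= self.threshold:
                self.potential = 0.0
                return True             # fire
            return False

    def run(neurons, initial_events, max_events=10_000):
        events = deque(initial_events)  # (neuron_index, signal) pairs
        fired = []
        while events and max_events > 0:
            max_events -= 1
            idx, signal = events.popleft()
            if neurons[idx].receive(signal):
                fired.append(idx)
                for syn in neurons[idx].out_synapses:
                    events.append((syn.target, syn.weight))
        return fired

    # Tiny 3-neuron chain: 0 -> 1 -> 2
    neurons = [Neuron(), Neuron(), Neuron()]
    neurons[0].out_synapses.append(Synapse(1.0, 1))
    neurons[1].out_synapses.append(Synapse(1.0, 2))
    print(run(neurons, [(0, 1.0)]))     # -> [0, 1, 2]

Note there is no central memory and no bus: every piece of state lives next to the unit that uses it, which is where the power savings in point 3 come from.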
If you think these chips are cool, you should check out Kwabena Boahen's work:
http://www.stanford.edu/group/brainsinsilicon/goals.html
They're capable of complex network simulations in real time, with control over ion channel tuning.
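For a sense of what "ion channel tuning" refers to, here is the textbook Hodgkin-Huxley point neuron (a standard model, not Boahen's actual silicon): the tunable knobs are the maximal conductances g_Na, g_K, and g_L below, and changing them changes the cell's firing behavior.

    import math

    # Textbook Hodgkin-Huxley point neuron, integrated with forward Euler.
    # This is a standard model, not the Brains in Silicon hardware.
    g_Na, g_K, g_L = 120.0, 36.0, 0.3    # mS/cm^2 -- the tunable channels
    E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV
    C = 1.0                              # membrane capacitance, uF/cm^2

    def rates(V):
        a_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
        b_m = 4.0 * math.exp(-(V + 65) / 18)
        a_h = 0.07 * math.exp(-(V + 65) / 20)
        b_h = 1.0 / (1 + math.exp(-(V + 35) / 10))
        a_n = 0.01 * (V + 55) / (1 - math.exp(-(V + 55) / 10))
        b_n = 0.125 * math.exp(-(V + 65) / 80)
        return a_m, b_m, a_h, b_h, a_n, b_n

    V, m, h, n = -65.0, 0.05, 0.6, 0.32  # resting initial conditions
    dt, I_ext = 0.01, 10.0               # ms, uA/cm^2 injected current
    for step in range(int(50 / dt)):     # 50 ms of simulated time
        a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        V += dt * (I_ext - I_ion) / C
    print("V after 50 ms: %.1f mV" % V)  # tweak g_Na/g_K and watch it change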
I am curious about the information architecture.
"The main benefit is decreasing power consumption. Because the memory and computation are intermingled, less energy is wasted shuffling electrons back and forth. "
This is no more cognitive than any artificial neural network we have had until now. The only innovation is lower power consumption, so it's just a power-friendly artificial neural network simulator. I would really like to see details about the abilities of these processors (why aren't they available anywhere?). When they say "programmable synapses", do they mean synapses with a programmable weight? Real synapses are nothing like that; they have very intricate computational abilities that come from the dynamics of presynaptic release and postsynaptic binding. How are they going to simulate something like this: http://www.sciencemag.org/content/329/5999/1671.short
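To illustrate the gap, compare a static programmable weight with even the simplest standard model of presynaptic release dynamics, the Tsodyks-Markram model (chosen here as a textbook example; it is not the mechanism in the Science paper above):

    import math

    def static_synapse(spike_times, w=0.5):
        # A "programmable weight": the same response to every spike.
        return [w for _ in spike_times]

    def tsodyks_markram(spike_times, U=0.5, tau_rec=800.0, tau_facil=50.0):
        # Textbook short-term plasticity: per-spike efficacy depends on
        # the whole spike history (times in ms).
        u, x = 0.0, 1.0            # utilization and available resources
        last_t, out = None, []
        for t in spike_times:
            if last_t is not None:
                dt = t - last_t
                x = 1.0 - (1.0 - x) * math.exp(-dt / tau_rec)  # recover toward 1
                u = u * math.exp(-dt / tau_facil)              # decay toward 0
            u += U * (1.0 - u)     # facilitation jump on spike arrival
            release = u * x        # fraction of resources released
            x -= release
            out.append(release)
            last_t = t
        return out

    spikes = [0, 20, 40, 60, 80]   # a 50 Hz train
    print(static_synapse(spikes))  # flat: same number five times
    print(tsodyks_markram(spikes)) # efficacy changes spike by spike

Whether the hardware "programmable synapse" is the first kind, or can express something like the second, is exactly the detail the press materials leave out.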
"Cognitive" sounds just like marketing fluff here. Scientists should not adopt this marketer-speak.
What is the answer to life, the universe and everything, Mr. IBM Cognitive Chip?