Probability Chip
The thesis on which this is based:
http://phm.cba.mit.edu/theses/03.07.vigoda.pdf
Edit: p. 135 is where he starts talking about implementation in silicon.
I'm curious how they deal with probabilities very close to 1 or 0. Usually, when people do Bayesian computations, they work in log-odds (logit) space, so that the precision of values close to 1 or 0 is effectively unbounded. That seems like a hard thing to do with an analog circuit.
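To make the precision point concrete, here is a minimal Python sketch (assuming IEEE double-precision floats; the helper names are my own, not from the article or thesis). In plain floating point, a probability like 1 - 1e-300 rounds to exactly 1.0, but its log-odds value is a small, well-conditioned number:

```python
import math

def logit(p):
    """Map a probability to log-odds space: logit(p) = log(p / (1 - p))."""
    return math.log(p) - math.log1p(-p)

def bayes_update(prior_logit, log_likelihood_ratio):
    """In log-odds space a Bayesian update is just addition:
    posterior odds = prior odds * likelihood ratio."""
    return prior_logit + log_likelihood_ratio

p = 1e-300
print(logit(p))                      # ~ -690.8: modest magnitude, full precision

# The complementary probability 1 - 1e-300 is NOT representable as a double;
# it collapses to exactly 1.0, losing all the information:
print((1.0 - 1e-300) == 1.0)         # True
```

Addition is cheap in either domain, but keeping ~690 units of dynamic range with fine resolution on an analog voltage or current is exactly the hard part the comment is pointing at.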
Are we likely to see more domain-specific chips in the future? Something like what http://www.deshawresearch.com/ has created: Anton, a custom chip optimised for molecular dynamics simulations.
One step closer. http://en.wikipedia.org/wiki/Technology_in_The_Hitchhiker%27...
My Ph.D. advisor will go crazy; his European research project on a probability computer was turned down a few months ago.
Isn't this just the revenge of the analog computer?
Not saying it's a bad idea... I'm really for the idea of revisiting assumptions in computer design.
I thought I'd heard something like this before. From 2004: http://www.eetasia.com/ART_8800354714_499488_NT_92255b4a.HTM
That's a turbo decoder rather than a generic probability calculator, but it's doing probability calculations in the analog domain.
This sort of thing may make sense for error correction, but I don't think people will run general probability calculations on it. Too difficult to debug :-)
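For a feel of the probability arithmetic such decoders perform, here is a minimal sketch of soft-decision decoding for a 3x repetition code, the simplest case of summing log-likelihood ratios (the received values and channel parameters are hypothetical, assuming BPSK over an additive Gaussian noise channel; this is an illustration, not the turbo decoder's actual schedule):

```python
# Soft-decision decoding of a 3x repetition code via log-likelihood ratios (LLRs).
def llr(y, sigma2):
    """LLR = log P(bit=0 | y) / P(bit=1 | y) for a received BPSK sample y
    (+1 encodes bit 0, -1 encodes bit 1) with noise variance sigma2."""
    return 2.0 * y / sigma2

received = [0.9, -0.2, 0.7]   # three noisy copies of the same symbol
sigma2 = 0.5                  # assumed channel noise variance

# For a repetition code, combining independent observations is just adding LLRs,
# which is the kind of operation an analog probability circuit can do natively.
total = sum(llr(y, sigma2) for y in received)
decoded_bit = 0 if total > 0 else 1
print(decoded_bit)  # 0 -- even though the middle sample alone would vote 1
```

The middle sample hard-decided in isolation gives the wrong bit; summing the soft values recovers it, which is why decoders work with probabilities rather than thresholded bits.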
Though, I do wonder if they can simulate a neuron more efficiently than digital logic.
Sounds a lot like the ByNase protocol that Ward Cunningham (inventor of the wiki) came up with:
Printer friendly, (almost) no ads, no pointless images:
http://www.technologyreview.com/printer_friendly_article.asp...
Similar to the fuzzy-logic chips of the '90s?
How does this compare to what Navia Systems is working on?
But how do you connect it to the cup of no tea?