I love AI research. I dabble in it myself and keep pretty well up to speed. So when an article like this comes along, I'm elated to read it. Then comes a quick punch to the gut near the end: 'building the intellectual property base....'. No, this isn't to advance computing in general and make money later. It's to rack up patents.
AI research will stagnate quickly if these early, mildly successful researchers instantly patent everything and anything. AI has come as far as it has because people shared methods, ideas and research freely.
Edit: /u/IAmTheOneWhoKnocks has pointed out that they do some work at http://numenta.org so it's not all bad. I am still quite unnerved by that statement though.
>They said that at their current pace, it'd take something like thousands or maybe even millions of years to map one brain.
As a computer scientist currently in the field, working on my master's with a specialization in artificial intelligence (more specifically, researching the neocortex, widely held to be the seat of most higher-level intelligence), I firmly disagree with you.
Yes, the brain is complex, but companies are already starting to map out its neurons and research how to create intelligent machines.
Numenta is one of these companies.
I suggest you do a little more research before commenting on a topic you know little about. It is not going to take thousands of years to map out the brain. 20, 30, maybe even 40 years, but thousands/millions? Maybe for you, but not for anyone else.
Absolutely no idea where he got the idea that a computer capable of mimicking the brain would require 10 TW of power.
Rough estimate: If we assume that HTM is a good model of the cortex, and we look at the problem simply as one of processing power, optimizing HTM algorithms for performance gives us on the order of 54.9 TFLOPS for the ~16 billion pyramidal neurons in the cortex operating at ~200 Hz. Factor in that only around 30-50% of the cortex is active at a time, and it drops to roughly 16.5 - 27.5 TFLOPS. Of course, there are other parts of the brain that are fairly important, like the hippocampus, basal ganglia, and cerebellum, but those are quite unlikely to bring the numbers up by an enormous factor.
To put this all into perspective, AMD is rumored to be releasing a 17 TFLOPS GPU sometime this year.
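For anyone who wants to sanity-check the arithmetic, here's a minimal sketch. The 54.9 TFLOPS baseline and the 30-50% activity fraction are the assumptions above, not measurements:

```python
# Back-of-the-envelope sketch of the estimate above. The 54.9 TFLOPS
# baseline (~16 billion pyramidal neurons at ~200 Hz under optimized HTM)
# and the 30-50% activity fraction come from the comment, not from here.

BASELINE_TFLOPS = 54.9            # full cortex, every neuron active
ACTIVE_LO, ACTIVE_HI = 0.3, 0.5   # fraction of cortex active at once

lo = BASELINE_TFLOPS * ACTIVE_LO  # ~16.5 TFLOPS
hi = BASELINE_TFLOPS * ACTIVE_HI  # ~27.5 TFLOPS

print(f"Estimated cortical requirement: {lo:.1f} - {hi:.1f} TFLOPS")
print("Rumored single-GPU figure for comparison: 17 TFLOPS")
```

On those numbers, one or two high-end GPUs would already sit in the right ballpark for raw throughput, which is the point of the comparison.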
On Intelligence by Jeff Hawkins is an amazing book on artificial intelligence. Hawkins' company has an open source project called NuPIC that would be a good place to get some hands-on experience. It is Python based and has a somewhat steep learning curve, so it might serve better as a beacon that you can work towards, rather than an actual project right now.
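To give a rough feel for what working with NuPIC looks like, here's a sketch loosely modeled on its OPF "hot gym" tutorial. Exact import paths change between NuPIC versions, and `MODEL_PARAMS`, the `model_params` module, and the CSV file are placeholders here, so treat this as orientation rather than a working recipe:

```python
# Loose sketch of NuPIC's Online Prediction Framework (OPF), modeled on the
# "hot gym" tutorial. Import paths vary by version (older releases use
# nupic.frameworks.opf.modelfactory, newer ones model_factory), and
# MODEL_PARAMS is a large dict normally produced by swarming (placeholder here).
import csv
from datetime import datetime

from nupic.frameworks.opf.modelfactory import ModelFactory
from model_params import MODEL_PARAMS  # hypothetical module holding the params dict

model = ModelFactory.create(MODEL_PARAMS)
model.enableInference({"predictedField": "consumption"})

with open("rec-center-hourly.csv") as f:  # hypothetical input data
    for row in csv.DictReader(f):
        record = {
            "timestamp": datetime.strptime(row["timestamp"], "%m/%d/%y %H:%M"),
            "consumption": float(row["consumption"]),
        }
        result = model.run(record)
        # best prediction for the field one step ahead
        print(result.inferences["multiStepBestPredictions"][1])
```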
There are approximately 20 billion neurons in the cortex (there are another 66 billion in other parts of the brain, but let's just go with the cortex to simplify things a little; other parts of the brain are structured differently). Roughly 75-80% of those, call it ~16 billion, are pyramidal cells, while the rest are mainly interneurons, which aren't believed to store information. Each pyramidal cell can have 1-10 thousand synapses. Synapses are pretty unreliable, so it isn't clear if "synapse weights" are really relevant, but there does appear to be some type of "permanence" value in each synapse. Let's say this equates to 4 bits of information, or about 16 different levels of permanence.
16 billion * 1-10 thousand * 4 bits = 8 - 80 TB of data.
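Spelled out as code, with every figure being one of the rough assumptions above:

```python
# The storage arithmetic above, spelled out. Every figure is a rough
# assumption from the comment, not a measurement.
pyramidal_cells = 16e9                       # ~75-80% of ~20 billion cortical neurons
synapses_low, synapses_high = 1_000, 10_000  # synapses per pyramidal cell
bits_per_synapse = 4                         # ~16 permanence levels

def storage_tb(cells, synapses_per_cell, bits):
    """Total storage in terabytes: bits -> bytes -> TB."""
    return cells * synapses_per_cell * bits / 8 / 1e12

print(storage_tb(pyramidal_cells, synapses_low, bits_per_synapse))   # 8.0 TB
print(storage_tb(pyramidal_cells, synapses_high, bits_per_synapse))  # 80.0 TB
```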
Of course, this is just the cortex, and is a rough approximation. Hierarchical Temporal Memory, a theoretical computational model of the cortex, uses what essentially amounts to large bloom filters with thresholds as neurons, which are a form of probabilistic memory storage. This means that you can store a tremendous amount of information in a small area, but it isn't easy to get the information back (it really can only check whether incoming information is likely to have been seen before), and it can only do so to a certain degree of accuracy. Essentially, the brain has a certain amount of memory, but it's probably not like normal computer memory.
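Here's a toy illustration of that "bloom filter with a threshold" idea. The class, sizes, and sparsity are mine for illustration, not NuPIC's actual data structures:

```python
# Toy illustration of "large bloom filter with a threshold" storage:
# a detector remembers which input bits it has synapses on, and fires when
# enough bits of a new sparse pattern overlap. It can recognize
# previously-seen patterns (probabilistically) but cannot reconstruct them.
import random

class ThresholdDetector:
    def __init__(self, threshold=8):
        self.synapses = set()   # indices of stored input bits
        self.threshold = threshold

    def learn(self, pattern):
        self.synapses |= pattern          # remember which bits were active

    def matches(self, pattern):
        return len(self.synapses & pattern) >= self.threshold

# Sparse patterns: 10 active bits out of 1024 (roughly SDR-like sparsity).
def random_sdr(size=1024, active=10):
    return set(random.sample(range(size), active))

seen = random_sdr()
detector = ThresholdDetector()
detector.learn(seen)

print(detector.matches(seen))          # True: stored pattern is recognized
print(detector.matches(random_sdr()))  # almost certainly False: tiny overlap
```

Note that `matches` only answers "have I probably seen this before?"; nothing in the detector can reproduce the original pattern, which is the sense in which this storage is cheap but one-way.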
Jeff Hawkins founded Numenta to build the things he talks about in On Intelligence. You should definitely check out Numenta's open source project, NuPIC. We're working on building algorithms that operate on the principles of the human neocortex. It's fascinating stuff, and some of the most advanced (not to mention open source) research on building a functional Theory of Cortex!
PM me (or reply publicly) if you need help with anything in NuPIC.
It lays out a very robust idea of intelligence, and argues that there is a single unified algorithm behind cortical behavior. From this we can contextualize the torrent of new neuroscience data.
Strong AI depends on input over time, unlike machine learning techniques such as basic neural nets, Boltzmann machines, etc., which are completely static: they map a fixed input to an output with no notion of time (see the toy contrast sketched after these notes).
This book is also an important basis for understanding the implementation rationale behind his company's product, which is described in this paper.
You can also check out Jeff's product, Grok, which is an implementation of the first parts of his ideas. There's also an hour-long talk at Google which is very good, and it features Kurzweil asking a question at the end.
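On the "input over time" point: here is a toy first-order transition memory that predicts the next symbol from the current one. This is my illustration of temporal learning in general, not HTM's actual sequence memory (which is high-order and works on sparse distributed representations):

```python
# Toy illustration of learning from input over time: a first-order
# transition memory that predicts the next symbol from the current one.
# A stand-in for the idea, not HTM's actual (high-order) sequence memory.
from collections import Counter, defaultdict

transitions = defaultdict(Counter)

def learn(sequence):
    # Count how often each symbol is followed by each other symbol.
    for current, nxt in zip(sequence, sequence[1:]):
        transitions[current][nxt] += 1

def predict(current):
    # Predict the most frequently observed successor, if any.
    counts = transitions[current]
    return counts.most_common(1)[0][0] if counts else None

learn("ABCABCABD")
print(predict("A"))  # -> 'B'
print(predict("B"))  # -> 'C' (seen twice), beating 'D' (seen once)
```

A static classifier trained on individual symbols could never make this distinction, because the information lives in the ordering, not in any single input.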
1) Do you know how to program? 2) Abstract algebra is awesome. 3) A little more controversial, but this paper on the cortical learning algorithm substantially changed my life.
> Any recommendations on books/ sections I should know in and out?
You should probably read the docs that Numenta has made available about NuPIC and HTM. I would focus there, because apparently that's what they use.
All of the other desired qualifications are "Experience with XYZ", which you can't really get in a short amount of time. I mean, you could take some intro to ML courses on Coursera, EdX or Udacity, but it sounds like that will not be advanced enough for this job. I like Christopher Bishop's Pattern Recognition and Machine Learning book, but it's also not something you breeze through. It's probably a lot more doable to read the Wikipedia page and make sure that you at least know what they're talking about with all of the approaches.
I would probably try to figure out some specific things about the company. Are they doing computer vision? Then read a little bit about that. Are they focusing on facial recognition (I'm just making stuff up here)? Then so should you. What kind of algorithms do they use aside from NuPIC?
> Should I focus on those things or focus on typical "software engineering" questions? Like Big-O of search/ sort algorithms, writing data structures and all that.
Judging from the desired qualifications I probably wouldn't focus on this. They seem to care much more about ML in particular than they do about more general software engineering and computer science stuff. I'd worry more about knowing various ML algorithms than the traditional CS fare. It's good to know the basics though. Learning the complexity (Big-O) of these algorithms and data structure operations should not take too much time, so it might be worthwhile.
IMO, the only researcher who is getting close to some sort of breakthrough in general sensorimotor learning is Jeff Hawkins. Here's a presentation from last year: Sensory-Motor Integration in HTM Theory.
My only problem with Hawkins's approach is HTM itself, which, I think, is not yet ready for prime time.
Why do you feel the concept is wrong?
These two things support the proposed concept, as does how we teach our children and how we think about concepts.
Actually, we have a fairly good understanding of the neocortex. See http://numenta.org/cla.html. We know enough to say that the neocortex alone isn't enough for consciousness. It is basically a very clever pattern storage, matching, and prediction system.