As others have stated, "True" AI is a mountain of a task. People (myself included) devote their entire lives to it. I carry degrees in computer science, mathematics, and physics. I will share a list of books in my library to help you get started. You may find many of them difficult to read; don't get discouraged, and always look for areas where you can expand your knowledge.
List of relevant resources
Obviously the internet is a mostly free resource with an almost limitless supply of information. Here are a few topics for you to google, explore, and find out what interests you most.
As for learning Python, understand that Python is a scripting language. It is designed to be easy to use ( i.e. easy to program with) but the state of the AI industry is still sticking with the C and C++ languages for most of their work. Learning Python is still a valuable skill so don't give up on it. Another language that is growing in popularity in the AI area is R. https://www.r-project.org/
I hope this doesn't overwhelm you with information. ^ ^ '
Also, play Portal.
Edit: Spelling (never been my strong area)
The idea is that if we're making all of our AIs female, we're doing no better than reinforcing notions that female voices and personas exist to serve us. If you value promoting equality, I'd recommend looking into it.
From http://www.geekwire.com/2016/why-is-ai-female-how-our-ideas-about-sex-and-service-influence-the-personalities-we-give-machines/

> Assigning gender to these AI personalities may say something about the roles we expect them to play. Virtual assistants like Siri, Cortana, and Alexa perform functions historically given to women. They schedule appointments, look up information, and are generally designed for communication.
From https://newrepublic.com/article/134560/stop-giving-digital-assistants-female-voices

> Ultimately, the more our culture teaches us to associate women with assistants, the more real women will be seen as assistants, and penalized for not being assistant-like. At this moment in culture, when more and more attention is being paid to women’s roles in the workplace, it’s essential to pay attention to our cultural inputs, too. Let’s eschew the false choice between male and female voices. If these A.I. assistants are meant to lead us into the future, why not transcend gender entirely? Perhaps a voice could be ambiguously gendered, or shift between genders? At the very least, the default settings for these assistants should not always be women. Change Viv to Victor, and maybe one fewer woman will be asked to be the next meeting’s designated note-taker.
Ugh sorry to be the bearer of bad news, but these ads were NOT written by AI:
Still hilarious, though!
That's Sebastian Thrun teaching, the guy who started the Google self-driving car project. He does videos like this for Udacity.com, which he founded originally to give free AI lessons via the internet. This seems to be from the original Intro to AI class, taught by Thrun and Peter Norvig; unit 9, video 35, as suggested by the video name above.
The entire lecture series, along with quizzes and programming exercises, is available for free here:
https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
Learn Python. It's one of the more prominent languages for AI. Take a lot of math courses. They're important for AI. Stick with college. This isn't 1975. You need a college degree to work in the tech industry. Most AI jobs are actually going to people with PhDs.
This is a pretty good intro to AI in Python.
So I was really curious about these guys' technology, so I started to do some digging. I was hoping some of the founders' earlier work had been published somewhere during their post-secondary education. I discovered that the founder and CEO is Bobby LaFever. According to his LinkedIn, Mr. LaFever is still currently a "Customer Experience Manager" at Signs365.
Maxima's CTO is Forest Beaver who, according to his LinkedIn, graduated from community college in 2014. Neither name produced any hits on Google Scholar, unfortunately.
I am going to refrain from making any judgement other than that "Forest Beaver" is a very interesting name. What are your thoughts about this startup?
As an AI student, I think a lot of the recommendations below are either too complex for a beginner or borderline sci-fi.
I would recommend reading this as a good starting point:
http://www.tutorialspoint.com/artificial_intelligence/index.htm
Also, Python is a good language for programming in AI. This is a book I am reading at the moment, and it starts off quite easy (though for my own current project I am using a library instead, because it is quicker):
https://leanpub.com/genetic_algorithms_with_python
(Put the sliders down to download the book for free)
If you want to do a beginner-level program, your best bet is probably to think of a problem and then build an 'intelligent system' to solve it. It does not have to be some sci-fi solution; you could do something very basic and keep adding to it. I find home automation a good and easy place to start.
MIT has a great Open Courseware course on AI with video lectures and everything! There's also Berkeley's edX course CS188 which is similarly great with video lectures. Lastly I would say the price of Artificial Intelligence: A Modern Approach, 3rd edition is worth it and in my opinion it draws the connections across vast fields to get a cohesive view of the whole field (instead of learning AI as an assorted bag of unrelated tricks).
"The Quest for Artificial Intelligence" by Nils Nilsson gives an overview of the field and its history. "Godel, Escher, Bach" by Douglas Hofstadter is also an interesting semi-related read.
After that I would recommend "Artificial Intelligence: A Modern Approach" by Russell and Norvig for a more technical overview.
Russell and Norvig have a great AI textbook, Artificial Intelligence: A Modern Approach. The ToC is here: http://aima.cs.berkeley.edu/ It's pretty readable and gives a very good introduction to a lot of stuff in AI. As a bonus, that site links to some useful stuff.
When you're asking for help, it's usually best to provide as much context as possible. What grade are you in? What did the assignment say? How much work is it supposed to be? What does "not much experience" mean? Can you program (and do you want to)? Is there anything in particular that you're interested in (AI is pretty broad)? etc. etc.
One nice beginner project is to write an AI for Tic-Tac-Toe. You can use minimax and maybe alpha-beta pruning. If you (or your school) have a LEGO robot, you could program it to avoid obstacles and/or follow a black line on a white floor or something like that.
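If you want a feel for how small that Tic-Tac-Toe project can be, here is a rough sketch of minimax with alpha-beta pruning (the board representation and scoring here are just one possible choice, not the only way to do it):

```python
# Minimal Tic-Tac-Toe minimax with alpha-beta pruning.
# Board: list of 9 cells, each 'X', 'O', or None. 'X' is the maximizer.

WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Return (score, move): +1 if X can force a win, -1 if O can, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1, None
    if w == 'O':
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    best_move = None
    if player == 'X':  # maximizing player
        best = -2
        for m in moves:
            board[m] = 'X'
            score, _ = minimax(board, 'O', alpha, beta)
            board[m] = None
            if score > best:
                best, best_move = score, m
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: opponent will never allow this branch
        return best, best_move
    else:  # minimizing player
        best = 2
        for m in moves:
            board[m] = 'O'
            score, _ = minimax(board, 'X', alpha, beta)
            board[m] = None
            if score < best:
                best, best_move = score, m
            beta = min(beta, best)
            if alpha >= beta:
                break  # prune
        return best, best_move

score, move = minimax([None] * 9, 'X')
```

From an empty board, perfect play is a draw, so the top-level score comes back 0; alpha-beta just skips branches that can no longer change the result.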
Here are some suggestions from other people.
More context here: there's an earlier human version, thispersondoesnotexist.com, that does better. https://thenextweb.com/insider/2019/02/22/thiscatdoesnotexist-com-is-uhm-not-as-good-as-its-human-counterpart/
Even with a fairly black-box representation of a human brain, having a human brain on a reasonably-priced computer would still be an enormous achievement. Assuming we have no idea how it works, we could still:
Copy an already educated brain and load it on another system, thereby skipping the education process.
Place a brain in a liquid-nitrogen cooled overclocked computer and get it to solve problems faster than any human.
Allow the brains to examine their own memory state and machine code and attempt optimizations and bug fixes.
Skipping education, retirement and child-rearing would make a computer brain spend about 3x as much time on work. If we could optimize-out time spent on recreation and sleeping, then we could get the computer to spend about 9x as much time on work. If it normally runs on a massively multi-core 5.2 GHz CPU (example), and we placed it in a nitrogen-cooled environment and raised it to 10.4 GHz, it could then run twice as fast, and, in total, could get work done about 18x faster than a person.
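For what it's worth, the multipliers in the paragraph above chain together like this (the factors themselves are the comment's assumptions, not measured quantities):

```python
# Rough productivity multipliers for a computer brain, using the
# assumptions from the paragraph above (not measured figures).

work_multiplier = 3         # skipping education, retirement, child-rearing -> ~3x
awake_multiplier = 3        # also skipping recreation and sleep -> another ~3x
clock_speedup = 10.4 / 5.2  # nitrogen-cooled overclock from 5.2 GHz to 10.4 GHz -> 2x

time_multiplier = work_multiplier * awake_multiplier  # ~9x more time on work
total_speedup = time_multiplier * clock_speedup       # ~18x overall
print(time_multiplier, total_speedup)
```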
Keep in mind that with real people, there are limits. Adding more people to a project will allow you to get some things done faster, but some parts of a project require work to be done one step at a time, and adding extra people won't make those parts go any faster. An artificial brain could potentially get those parts done 18x faster than normal, or more, depending on how fast we can make its processor go. Even if it costs a few hundred thousand dollars for a CPU like that, it would be worth it in many cases.
"They" as in me? Yeah this works today actually, on your own desktop. The restoration stuff isn't freely available (that's only on my computer at home at this stage), but colorization can be done on full movies. You can do shorter clips with the Colab but I'd suggest a home install for full movies.
Absolutely no idea where he got the idea that a computer capable of mimicking the brain would require 10 TW of power.
Rough estimate: if we assume that HTM is a good model of the cortex, and we look at the problem simply as one of processing power, optimizing HTM algorithms for performance gives us somewhere on the order of ~549.9 TFLOPS for the ~16 billion pyramidal neurons in the cortex operating at ~200 Hz. Take into account the fact that only around ~30-50% of the cortex is active at a time, and it drops to 164.9 - 274.9 TFLOPS. Of course, there are other parts of the brain that are fairly important, like the hippocampus, basal ganglia, and cerebellum, but those are quite unlikely to bring the numbers up by an enormous factor.
To put this all into perspective, AMD is rumored to be releasing a 17 TFLOPS GPU sometime this year.
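Working backwards from the 164.9 - 274.9 TFLOPS figures (which imply an all-active base of roughly 549.9 TFLOPS), the arithmetic looks like this; note the per-update FLOPs number is implied by those figures, not taken from any HTM documentation:

```python
# Back out the arithmetic behind the estimate above. The numbers come from
# the comment; the FLOPs-per-update figure is implied, not an HTM spec.
neurons = 16e9       # pyramidal neurons in the cortex
rate_hz = 200        # updates per second
base_tflops = 549.9  # all-neurons-active estimate

# Implied cost of a single neuron update:
flops_per_update = base_tflops * 1e12 / (neurons * rate_hz)

# Only ~30-50% of the cortex is active at a time:
active_tflops = (base_tflops * 0.3, base_tflops * 0.5)

print(round(flops_per_update, 1))  # ~171.8 FLOPs per neuron update
print(active_tflops)               # ~(164.97, 274.95) TFLOPS
```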
On Intelligence by Jeff Hawkins is an amazing book on artificial intelligence. Hawkins' company has an open source project called NuPIC that would be a good place to get some hands-on experience. It is Python based, and has a somewhat steep learning curve, so it might serve better as a beacon that you can work towards, rather than an actual project as of right now.
Not sure if this is the right place for it, but Peter Norvig and Stuart Russell (authors of "Artificial Intelligence: A Modern Approach") have an open source repository on Github with implementations and tutorials on algorithms from the book. It has helped me a lot so I thought I would share.
(Sorry if I'm breaking any rules with this, I'm new here)
The 'frame rate' of our eyes is around 60 fps (Edit:) as a very crude approximation.
But it sounds like you're talking about the speed of thinking, so the more direct analogy is 'clock speed', which in computers dictates the rate at which processing occurs. Interestingly, there are different 'oscillations' in different parts of the brain—alpha, beta, etc.—they vary between ~5 Hz (cycles/second) to ~80 Hz.
It's clear that these oscillations aren't precisely like computer clock speeds; there's some lack of clarity on how similar they are. But there is evidence that 'wakefulness' and 'alertness' are associated with faster speeds ('beta waves', ~20Hz) than 'relaxed' phases. Again, the connection to how quickly you process information, communicate, etc. is quite indirect.
Edit: forgive the lack of qualification on the eyes component. Let me be clear that our eyes do not see an entire frame at a time, so any single number is at best an approximation.
Yes, it's very much alive. The homepage is not updated often, but the google group and the github account are updated frequently.
I really like this summary of how AlphaZero was programmed to learn. https://www.chess.com/amp/article/how-does-alphazero-play-chess
The crux of it is that programmers have learned to use a very sophisticated medley of decision-making heuristics that do in fact emulate the human element of chess tactics. Really, AlphaZero does develop a sort of intuition, just like our greats! It's incredible, but I think a hallmark of neural nets. The kicker with AlphaZero is Monte Carlo randomness: an innovative kind of efficiency.
That said, no, I wouldn't call this strong AI or AGI. It excels at this one specific task-- not to diminish chess as a sport; I am a huge fan and watch chess matches like I watch football-- but playing lacks any defined consciousness. The intuition is really an amazingly powerful thing, but it is based in the described set of algorithms. AlphaZero does move away from the brute force of older minimax. Does it come closer to modeling the human brain? I think so, especially when we consider that it's the human brain's tendency to max efficiency and skip arduous calculations whenever possible. The best machine decision heuristics should parallel the best human ones.
Yeah, I saw that the AI challenge ended today... I also did something similar for another course already, and it ended up being a heuristic method...
I just found this though: robocode
I just started a lit review on this. You are more than welcome to have my current list of papers. I'm focused on emotion recognition, and more on the non-facial side, but I think you'll find some of the papers to be of interest.
There is no demo now, only a collection of AI algorithms that are being used on different projects in various narrow-AI ways, but there is a small team of full-time programmers in Hong Kong working on such a demo using the Unity3D game engine plus a new 3D SpaceTimeServer and Planner in OpenCog. The new SpaceServer and Planner were recently pushed to Launchpad and the other code will show up in the next few weeks on Github at https://github.com/opencog (the trunk is moving over from its current location on Launchpad https://launchpad.net/opencog, which is why the Github project looks so sparse).
That description helps a lot. Especially given the sheer number of predictors and that d > N (i.e., that your predictors outnumber your instances), Naive Bayes performed quite admirably. I assume that you're stripping out a lot of uninformative terms (function words, for instance).
It might also be possible (though with extra work) to identify features that really zero in on your categories. So, for example, consider making a document-term matrix for all categories' item descriptions. This gives a picture of the overall distribution of terms. But it is quite probably the case that individual categories' term distributions differ on only a handful of keywords. The union of all categories' "maximally identifying" keywords would then create a much-reduced feature set with more predictive utility. Now, it's easy for me to say that, but actually doing it would be a bit of a computational chore.
I do think you can get some mileage out of random forests, and you almost certainly should be able to use the features as you've already coded them. If they'll work for NB, they should work for RF. As for Java ... you shouldn't have a problem. But I'd opt to use an existing package. Weka is Java based, and you can do a lot more at the command line than you can using Weka's GUI, so you may be right at home there.
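To make the document-term-matrix / "maximally identifying keywords" idea concrete, here is a toy sketch in plain Python (the categories and descriptions are made up for illustration; Weka or a similar package would do this at scale):

```python
from collections import Counter

# Toy item descriptions per category (made-up data for illustration only).
docs = {
    "electronics": ["usb cable fast charging", "wireless mouse usb receiver"],
    "kitchen":     ["steel knife set sharp", "non stick frying pan steel"],
}

# Document-term matrix, collapsed to term counts per category.
dtm = {cat: Counter(w for d in texts for w in d.split())
       for cat, texts in docs.items()}

def top_keywords(cat, k=3):
    """Terms whose count in `cat` most exceeds their count everywhere else."""
    others = Counter()
    for c, counts in dtm.items():
        if c != cat:
            others.update(counts)
    scores = {t: n - others.get(t, 0) for t, n in dtm[cat].items()}
    return [t for t, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]

# The union of each category's top keywords is the reduced feature set.
features = sorted(set().union(*(top_keywords(c) for c in dtm)))
```

Real data would want a proper discriminativeness score (chi-squared, information gain, etc.) instead of the raw count difference used here, but the shape of the computation is the same.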
Note that AIMA focuses mostly on narrow (weak) AI, which is essentially problem solving in a smart manner: Machine Learning, Computational Logic, Expert Systems, etc.
If you want to get into strong A.I. (building SAMANTHA) then I suggest you read some books on Cognitive Psychology. You could start with "Cognitive Psychology and its Implications" by Anderson.
Also, neuroscience basics would be a good foundation, as neuroscience mostly dominates CogPsych at the moment. But be prepared: that route is not an easy one to go down; it's not as if you will just program some intelligence over the weekend. That makes it quite difficult to follow if you are not in grad school or at a research institute.
For some practical stuff you could look at some cool generative models like: https://prezi.com/qpttsqeyv-a8/a-dbn-with-spiking-neurons/ Note that this requires good knowledge about neural nets, sensors, mathematics etc.
I'm working on an AI specialization in Georgia Tech's OMSCS and I really like it so far. That's a pretty big leap if you're not sure you're going to want to stay in AI, though.
I'd highly recommend reading the book "Artificial Intelligence: A Modern Approach" by Stuart Russell and Peter Norvig. That book was my textbook in two undergrad AI classes and then again in a master's degree class. It's pretty much the bible for artificial intelligence work and gives a really good overview of what actual AI work and theory entails.
It's pretty long, so you might want to just thumb through each chapter and stop for anything that sounds especially interesting. Also, PDFs of the book are easy to find online, but I don't want to link one since the legality is questionable.
First learn a programming language and get comfortable with it. I suggest Python since it is widely used for AI and it is pretty simple. You can take basic coding classes if you like, or you can pick up a textbook; it doesn't really matter. There is no "superior" way of learning a language, just pick what works best for you and make sure that you practice. After learning a language, it's a good idea to learn about the fundamental data structures and algorithms in Computer Science. Algorithms by Robert Sedgewick is a good choice to learn this stuff. Finally, Artificial Intelligence: A Modern Approach by Russell and Norvig will serve as a great introduction to some actual AI.
The book "Artificial Intelligence: A Modern Approach" by Russell and Norvig is a good start (or at least, a good reference):
A course based on the book starts on edX Jan 17:
https://www.edx.org/course/artificial-intelligence-ai-columbiax-csmm-101x
AI is a big field. Deep neural networks are popular at the moment and the advice from /u/rod333 is geared mainly at these. There are other useful areas though that would require different background knowledge. In addition to his suggestions I'd add core computer science courses such as "data structures and algorithms" and "object oriented programming" and maths courses on probability.
Have a look at https://www.edx.org/course/artificial-intelligence-uc-berkeleyx-cs188-1x. It's the first half of an introductory course on AI. There are also materials available to self study the second half. Note the prerequisites though: "Object-Oriented Programming, Recursion, Python or ability to learn Python quickly, Data Structures, Arrays, Hashtables, Stacks, Queues, Priority Queues, Traversal, Backpointers, Probability, Random Variables, and Expectations (Discrete), Basic Asymptotic Complexity (Big-O), Basic Counting (Combinations and Permutations)"
The book "Artificial Intelligence: A Modern Approach" is a good survey of the full breadth of AI.
I learned a lot about AI in the AI master's program I took at university. So I would really suggest to find a university where they teach AI. If this isn't an option right now, start by reading AI-related books, papers and articles. To find articles, you can search for AI (or AI-related topics) on Google Scholar. I would also suggest reading the book Artificial Intelligence: A Modern Approach. It's a good starting point. Also keep checking out the artificial subreddit and related subreddits.
I recommend picking up 'Artificial Intelligence: A Modern Approach' by Norvig and Russell. It's probably the best all-round textbook available. Either of the second or third edition is fine. Otherwise, there are some free books: http://www.bigdata-madesimple.com/20-free-books-to-get-started-with-artificial-intelligence/
'Quest for AI' is very good.
It was my first time watching it today. I am considering applying for the Master in Informatics / Intelligent Systems, at the university in Lugano next year, and I found this video through his site.
Another video I watched was this 2012 talk by Ray Kurzweil. It was in the recommendations bar of Juergen's video on Vimeo.
I was always skeptical of some of Kurzweil's statements, and I still am for many of them, but what he showed me in that video was that despite my skepticism, he is an interesting man to listen to. I think it's fun to watch, and he did offer me new perspectives. It's the kind of video that is a good starting point for a long interesting conversation with like-minded people.
As somebody who has completed the most basic level of ML education, it would seem to me that /u/senjutsuka is confused. So confused, he doesn't quite realise he's confused yet.
I'm normally pretty confused, but I'm fairly confident in this case he is more confused than me.
Here is the paper if anyone is interested: http://arxiv.org/abs/1504.00702
The discussion on hackernews has a little more technical info, including an endorsement by Andrej Karpathy on the impact of this technique.
Here's what I would suggest, simply because that's what I'm doing right now. Buy yourself a copy of "Artificial Intelligence: A Modern Approach", 3rd edition by Stuart Russell and Peter Norvig, enroll in the Udacity nanodegree program (https://www.udacity.com/ai), and get right down to it. The course will give you a gentle introduction to the subject and point you to relevant material in the AIMA book that'll help you build a good understanding at a reasonable pace. You'll also need to pick up Python, which is not much of a challenge: YouTube will give you a good start, and a textbook will help you get into details. Subscribe to Siraj Raval on YouTube and follow the industry until you start to get the hang of the concepts at a much deeper level. Enjoy the journey, and have fun!
I'm a fan of Jurafsky's book, and I've heard good things about [Stanford's NLP class](https://www.coursera.org/course/nlp) on Coursera from colleagues (though I have not taken it myself).
This wouldn't be part of my response to OP since he's still in college and this is probably more depth than he needs, but: machine learning has become a fundamental tool in pretty much every AI subfield, and NLP is no exception. The group I work in does a lot of NLP research (my work sort of straddles the line between research and eng), and most of the interesting NLP research talks will leave you pretty lost if you don't have a reasonably solid understanding of machine learning. So if you're serious about learning and working with NLP in depth, going through some ML resources is pretty important too (in my case I just bought a handful of textbooks and worked through them, starting with Linear Algebra).
I got it! When AI will replace us we will be all employed as Amazon Mechanical Turks to generate training data for it. We be turkin'.
Or, how they put it:
>Humans don’t get replaced when the machine begins to design creatively, instead we step into the newly evolved phase of the mentor.
Similar application - interpolating frames for slow motion playback - https://thenextweb.com/artificial-intelligence/2018/06/18/nvidias-ai-creates-amazing-slow-motion-video-by-hallucinating-missing-frames/feed/
We are going to see a lot of algorithmic alterations to video/photography in the future.
That link is fantastic, here is a link to Thrun's course that uses that book as its main textbook: https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
Here's a screen grab of some content I found that does an ELI5 on the hieroglyphics, which I thought was good:
Personally I use https://feedly.com/. It's useful for organizing all my sources and browsing them all efficiently in a single interface. I follow blogs, youtube channels, subreddits, online magazines, journals etc.
(And no, I don't work for them :P )
I might be biased, but yeah, I still think it's meaningful to work on EAs. I think EAs will always have a place because of how easy it is to apply them to almost any problem. Also, there's always going to be the set of optimization problems where the function being optimized is really, really hard to characterize. For example, something where humans are in the loop evaluating solutions. Another good one is here.
Yes, they can be slow. But there's still a range of problems, such as training neural networks, where, as long as the run time is not prohibitively slow, EAs' advantage over existing algorithms like backpropagation is significant enough to warrant their use and further work on developing EAs.
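For anyone curious how little machinery a basic EA needs, here is a toy sketch on the OneMax problem (the fitness function is mine, chosen only because it's easy to follow; in the human-in-the-loop case it would be replaced by whatever evaluation you have):

```python
import random

random.seed(0)

# Toy fitness: maximize the number of 1-bits (OneMax). In a real application
# this could be any black-box evaluation, even a human rating solutions.
def fitness(bits):
    return sum(bits)

def evolve(n_bits=20, pop_size=30, generations=60, mutation=0.05):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def select():  # tournament selection: fitter of two random individuals
            a, b = random.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = [max(pop, key=fitness)]  # elitism: always keep the best
        while len(children) < pop_size:
            p1, p2 = select(), select()
            cut = random.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            # per-bit flip mutation
            child = [b ^ 1 if random.random() < mutation else b for b in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Swap the fitness function and the genome encoding and the same ~25 lines apply to a completely different problem, which is exactly the "easy to apply to almost anything" property mentioned above.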
What type of work are you doing with EAs?
Check out Wordnet. This has all that and more! The data are in text files with a fairly straightforward indexed structure (from memory when I looked at it a few years ago).
Right, but for example vision is much more distributed than just visual processing in occipital lobe. You have strong projections from V1-V4 into lateral inferior parietal cortex, inferior temporal areas, all in parallel doing various functions. You also have parallel processing occurring upstream of visual cortex.
With some inputs, you even have (delayed) response from motor and somatosensory cortices.
http://www.scholarpedia.org/article/What_and_where_pathways
My point is, while I understand your postulate and agree with pieces of it, the brain doesn't compartmentalize sensory processing. It's incredibly distributed; every piece talks to each other in some way. I'm not sure how this fits into the "choose the best one" argument; it sounds like you would still need a master conductor, and that's not something the brain really has.
If you're looking at strong AI, you're probably interested in the field of computational intelligence (recurrent neural networks, fuzzy systems, reinforcement learning, genetic algorithms and so on).
Scholarpedia, a website started by leading computational neuroscientist Eugene Izhikevich, is a great place with nicely curated information. http://www.scholarpedia.org/article/Category:Computational_Intelligence
I googled this and looked at the summary here. IDK, to me it looks like this approach suffers from the symbol grounding problem, like most approaches to AGI.
I'm just finishing up a fantastic lecture series called "Philosophy of Mind: Brains, Consciousness, and Thinking Machines" from The Great Courses, about this very subject: https://www.thegreatcourses.com/courses/philosophy-of-mind-brains-consciousness-and-thinking-machines.html
Looks like it's on sale right now, but I got my audiobook listening copy from my local library as a digital download.
I disagree that AI must have the intent to harm humanity to be considered to have "taken over". Taking over does not require malicious intent. It just means to assume control. So if an AI performs blood glucose monitoring in real time, I say it has taken over because it is in control. Surrendering to an AI is a different situation and is not OP's original talking point.
I didn't know about that one yet. Did you mean the Intro to Artificial Intelligence course? It looks interesting, especially the part where they go deeper into the applications of AI. Thanks for sharing.
> > authority that should be ascribed to his predictions about the future.
> You understand that is a logical fallacy right there right?
What I'm saying there is that the factual things he talks about are wrong, so it isn't likely that his speculations will turn out to be any less wrong. I don't see the logical fallacy; perhaps if you'd point it out it'd be easier.
The factual claims he's making are undeniably wrong. First, he's saying some kNN-variant is in play with Google's autocomplete and with self driving cars.
Google's autocomplete can't be done with kNN, you'll need something like Hidden Markov Models or Recurrent Neural Nets, so you can model sequences. I believe it was confirmed at some point that they used HMMs.
Self-driving cars use a whole host of algorithms, but kNN isn't relevant. This is because the car's sensor input is very high dimensional, which kNN handles notoriously poorly. They will use Convolutional Neural Nets for perception, and something like particle filters to model the world. There's a course on Udacity that explains the basics, taught by the guy who led the self-driving car team at Google.
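A quick illustration of why high-dimensional inputs are hostile to kNN: as dimension grows, the nearest and farthest neighbors end up at nearly the same distance, so "nearest" carries little signal (toy uniform random data, stdlib only):

```python
import math
import random

random.seed(1)

def distance_spread(dim, n_points=200):
    """Relative contrast: (farthest - nearest) / nearest distance
    from one random query point to a cloud of random points."""
    query = [random.random() for _ in range(dim)]
    points = [[random.random() for _ in range(dim)] for _ in range(n_points)]
    dists = [math.dist(query, p) for p in points]
    return (max(dists) - min(dists)) / min(dists)

low = distance_spread(2)      # in 2-D, relative contrast is large
high = distance_spread(1000)  # in 1000-D, distances concentrate
print(low, high)
```

In 2-D the nearest neighbor is genuinely close and the contrast ratio is large; in 1000-D the ratio collapses toward zero, which is the concentration-of-distances effect that makes kNN unreliable on raw high-dimensional sensor data.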
He also says that brute force search is a new approach to AI. Not true, this idea is as old as computer science itself, and literally the first chapter of the standard AI textbook.
AI is an academic field where a lot of smart people have been doing good work for a long time. A lot of things are well understood. It's pretty arrogant when someone just makes up falsehoods from their own intuition and then proclaims it as fact.
You're in luck to be asking in this day and age! Here is a bunch of free courses: search for 'intelligence' and 'machine' (for machine learning), and you'll find what you're after (machine learning should probably be tackled after you've got some of the basics of plain AI, and indeed programming, down). This one looks perfect; it's got a subreddit over there -> /r/aiclass.
A nice, if perhaps overwhelming, book is Russell and Norvig's "Artificial Intelligence, a Modern Approach" (expensive). Good luck!
That was a well written article.
Geoff Hinton deserves kudos for sticking with NNs for so long and reviving the field not once but twice!
Sure, deep learning networks will not get us to AI "just by making it bigger and faster" [1], but they are a must-have tool in one's AI/ML toolbox. So off I head to Coursera.
Norvig's comment on story understanding reminds me of this old diagram by Minsky: NNs as part of a story understanding AI
[1] Note that Hinton only says that it's easy to improve DLN performance simply by making them bigger, as opposed to all the clever feature engineering that other techniques require. In particular, nowhere does he claim that you can build an AI simply by making a DLN big enough.
Jeff Hawkins founded Numenta to build the things he talks about in On Intelligence, you should definitely check out Numenta's open source project, NuPIC. We're working on building algorithms which operate by the principles of the human neocortex. It's fascinating stuff, and some of the most advanced (not to mention open source) research in building a functional Theory of Cortex!
PM me (or reply publicly) if you need help with anything in NuPIC.
I just came across http://theaigames.com/competitions/warlight-ai-challenge-2 -- it reminds me a lot of Ants... not least because I recognised current leader GreenTea. Somewhat unusually in this space, its engine is not open source.
CodeCombat, while primarily focused on teaching programming, also runs occasional competitions: http://blog.codecombat.com/
In order for it to be universal, the leaders of those companies have to participate in it: no stock option packages, no other pay. Plus, this will get the bottom-pay people out of the bottom, effectively infusing more money into the economy.

If you revolt and bring up arms against this idea, they will never think about it again. No group of people seems to be able to organize like they used to, because of all the distractions the 21st century offers. UBI can easily lead to a redistribution of current wealth, so the rich would have to give some of it up later.

If you think you can fight the rich head on, the class that has literally ruled since civilization began, you will lose that fight. They have been at the top long enough to know more tricks for staying there than we can imagine. The Art of War comes into play even in economics. You are talking as if you don't know your enemy. Do you think the rich are stupid? They have more time, money, and people to think about all of the counters we come up with, because that is their job. It has to be a careful attack by us, or they will crush us as they always do.
Does your app use natural language processing to communicate in English?
Does your app store what it knows or hears?
Does your app use reasoning to use the stored information to answer questions and to draw new conclusions?
Does your app use machine learning to adapt to new circumstances and to detect and extrapolate patterns?
If yes, yes, yes, and yes, then yes.
Courtesy of Artificial Intelligence: A Modern Approach
It's not a short or a cheap read, but Russell and Norvig's "Artificial Intelligence: A Modern Approach" should definitely be in your AI library.
If you're looking for something light (that is, intended for a general audience), I liked The Most Human Human. It's about a guy participating in the Turing test on the human side, and what humanness is vs what AI is.
If you're ready for something more technical, I say just go for Artificial Intelligence: A Modern Approach. This is the standard introductory textbook that pretty much everyone uses. It's a broad presentation of (pretty much) the entire field.
Buy Artificial Intelligence: A Modern Approach by Russell and Norvig and start making your way through it. After that, start doing little projects: look around for something that seems interesting/useful and ask "how can that be automated?" Then go about doing it. I would start with making agents to play games, i.e. chess, checkers, sudoku, etc.
I can't really see your point. Maybe the problem is that I don't yet know how things work at university. Let me try again...
I want to work on robots and systems from a theoretical software side. For that, I applied for a CS BS; I'll specialise in Visual Computing & Machine Learning (they apparently belong together), and if everything goes well, I will apply for the "Robotics, Systems & Control" master's to dig deeper into this specific field. Math is the alpha and the omega, I know that.
It's been 10 months since I decided to do "something with robots", and I'll graduate from high school in 5 months. I have started preparing heavily for university, because first and foremost, I want to avoid being the guy who knows absolutely nothing about anything (and I'm quite bored in school too). I worked my way through khanacademy.org to refresh maths and physics, then codecademy.com and two relatively long C++ tutorials, and I'm now working on "The Elements of Computing Systems". Additionally, I'll work through "Introduction to Algorithms" and "Artificial Intelligence: A Modern Approach" until summer vacation. Then I plan to do some math courses on coursera.org or something similar until university begins.
Seriously, I have no idea what to expect from University and how I should manage things to become what I think I want. I'm quite lost.
If you have a rudimentary understanding of algorithms, I would suggest Artificial Intelligence: A Modern Approach, by Stuart Russell and Peter Norvig. The book is comprehensive, well-written, and covers a wide range of different techniques and approaches within AI. Be aware that the book is written as a textbook, so do not expect philosophy or speculation inside - only what is possible and feasible given current state-of-the-art.
Spend the money for Norvig's 'Artificial Intelligence: A Modern Approach', second edition or higher, used if need be, there must be some out there. It'll get you so much further, faster, and it has references for further exploration.
I hope this gets picked up by a lot of developers. It's pretty clear this UI will be completely limited by the number of apps other devs develop/integrate with it, which means they either need to pay developers really well or reach iOS/Android levels of user adoption, and at the moment this sub seems to be the only one talking about it... we'll see...
Also, Hound does like 90% of the stuff this app does. They're really getting a lot of mileage out of this "creators of Siri" schtick.
edit: says "developers" 30,000 doesn't describe the api devs would use once -__-
Personally I've yet to use it. If you're completely new to ML I'd probably advise against starting there. I'd suggest learning the foundations first. /u/RustyJosh pointed out https://www.coursera.org/learn/machine-learning/ which covers the main tools (by which I mean techniques and algorithms) you should know about for doing machine learning; then I'd say choose some problems and start applying the ML techniques with Python and scikit-learn.
THEN try moving to TensorFlow. Obviously if you've done all the above, go for it.
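To make the "Python and scikit-learn" step concrete, here is a minimal sketch of that workflow, assuming scikit-learn is installed. The dataset and classifier here are illustrative choices only, not recommendations:

```python
# Minimal scikit-learn workflow: load a toy dataset, split it,
# train a classifier, and evaluate on held-out data.
# Dataset and classifier are arbitrary choices for illustration.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # accuracy on the held-out split
```

Swapping in a different estimator (e.g. a decision tree or logistic regression) is a one-line change, which is what makes scikit-learn a good playground for the techniques from the course.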
A great place to start is Coursera's Machine Learning class: https://www.coursera.org/learn/machine-learning/
It will cover the basics of a variety of learning algorithms, and you implement each one as a part of the class.
I recommend the following free class: https://www.coursera.org/learn/machine-learning
One of the weeks explains back propagation in technical detail, and then for the homework assignment, you program it in Matlab/Octave.
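If you want a preview of what that backpropagation assignment feels like, here is a toy version in NumPy rather than Octave: a tiny 2-4-1 network learning XOR. Network size, learning rate, and iteration count are arbitrary choices for the sketch:

```python
# Toy backpropagation in NumPy: a 2-4-1 sigmoid network learning XOR.
# Layer sizes, learning rate, and iteration count are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backward pass: propagate the error through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # gradient descent step
    W2 -= 0.5 * h.T @ d_out
    W1 -= 0.5 * X.T @ d_h

print(out.ravel())  # trained predictions for each XOR input
```

The course derives exactly these update equations; implementing them once by hand (in Octave or NumPy) is what makes the math stick.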
Where are you looking for job openings? If I look on LinkedIn almost all of the jobs of the first page ask for a degree in something like CS (and it usually needs to be a MS or PhD). The few that don't list it explicitly ask for experience in the field, which is difficult to prove without prior jobs or a degree (remember you have to be invited based on your resume before you can start convincing them). Maybe it's different outside of California (I had to search somewhere) and the places in Europe where I've been, but in my experience most jobs will definitely want you to have a relevant degree.
I'm also surprised there are no jobs for evolutionary computation experts, given the immense array of problems these algorithms could be applied to. Why do you think that is?
Now that I reread myself, I feel confusion in my own words.
In short, I am thinking about making a hobbyist database.
I have experience with databases. I'm not by any means an expert, but if that means anything to you, I know what Boyce-Codd Normal Form is.
In said subreddit, there are a lot of:
ads
tutorials
some state-of-the-art showcases
people showing their setups / acquisitions (generally a year after the ads, btw)
I feel like there is currently no good way of finding existing available pieces, and I would really like to be able to run extended searches.
But for that, I would need to keep track of every post in that subreddit (which is easy, especially now that there are [some automation tools available](https://ifttt.com/)), and then to identify what kind of post it is.
So the whole process would look like this:
Someone posts something on reddit, which triggers a ping to my server.
My server then vacuums up the post.
Right there is where I want some magic to happen: I want to be able to tell whether it is an ad or not, and if it is, I will scrape it.
I will do the scraping "manually" in the beginning, because I have no idea how to extract the details (name and type of the product)...
Then I will feed the details into the database.
A question about language, though: is it feasible to recognize a noun in a text? To decide that a particular group of words forms a noun phrase (a "nominal group")?
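Yes, this task is called noun-phrase chunking, and NLP libraries such as spaCy or NLTK do it out of the box with trained part-of-speech taggers. As a pure-Python toy (the mini-lexicon below is invented just for this example), the basic idea looks roughly like this:

```python
# Toy noun-phrase chunker: tag words with a tiny hand-made lexicon,
# then group determiner/adjective/noun runs into phrases.
# Real systems (spaCy, NLTK) use trained taggers instead of a lexicon.
LEXICON = {  # hypothetical mini-lexicon for the example sentence
    "the": "DET", "a": "DET", "new": "ADJ", "mechanical": "ADJ",
    "keyboard": "NOUN", "switches": "NOUN", "has": "VERB",
    "great": "ADJ",
}

def noun_phrases(sentence):
    words = sentence.lower().split()
    phrases, current = [], []
    for w in words:
        tag = LEXICON.get(w, "OTHER")
        if tag in ("DET", "ADJ"):
            current.append(w)      # a phrase may be building up
        elif tag == "NOUN":
            current.append(w)      # the phrase ends on the noun
            phrases.append(" ".join(current))
            current = []
        else:
            current = []           # anything else breaks the phrase
    return phrases

print(noun_phrases("the new mechanical keyboard has great switches"))
# -> ['the new mechanical keyboard', 'great switches']
```

For real subreddit posts you would replace the lexicon with a proper tagger, but the grouping step stays conceptually the same.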
If you are looking for something a bit newer IBM_Watson was recently used for creating recipes.
Maybe 'The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel...after that it gets a little tricky' is what you are looking for?
Complete noob here, but as far as I understood his principle, he's basically looking for changes in RAM ("bytes going up"). Is this what you meant with "change-based intelligence"?
Although their results are not that good from what I've seen, combining GAs with neural networks is something that Miikkulainen has been working on together with others (the NEAT algorithm). You could look for him and check what the co-authors are doing.
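To illustrate the core idea of combining GAs with neural networks, here is a bare-bones sketch: a genetic algorithm evolving the weights of a fixed-topology network. Note that NEAT also evolves the topology; this toy only mutates and selects weight vectors, and all the parameters are arbitrary:

```python
# Bare-bones neuroevolution: evolve the 9 weights/biases of a fixed
# 2-2-1 step-activation network with a simple genetic algorithm.
# NEAT additionally evolves topologies; this sketch does not.
import random

random.seed(0)
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def predict(w, x):
    h1 = 1 if w[0]*x[0] + w[1]*x[1] + w[2] > 0 else 0
    h2 = 1 if w[3]*x[0] + w[4]*x[1] + w[5] > 0 else 0
    return 1 if w[6]*h1 + w[7]*h2 + w[8] > 0 else 0

def fitness(w):
    # number of XOR cases the network gets right (max 4)
    return sum(predict(w, x) == t for x, t in XOR)

pop = [[random.uniform(-1, 1) for _ in range(9)] for _ in range(50)]
for _ in range(200):
    pop.sort(key=fitness, reverse=True)
    if fitness(pop[0]) == 4:
        break
    survivors = pop[:10]
    # refill the population with mutated copies of the survivors
    pop = survivors + [
        [wi + random.gauss(0, 0.3) for wi in random.choice(survivors)]
        for _ in range(40)
    ]

print(fitness(pop[0]))  # 4 means all four XOR cases are solved
```

Evolution sidesteps backpropagation entirely, which is why this family of methods works even when the fitness signal is not differentiable (e.g. game scores).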
Teachers have to know deep learning in order to teach deep learning. And deep learning is a new, rapidly evolving field; lots of research is going on. Basic deep learning also requires knowledge of linear algebra.
There is an online course backed by Google, but from what I hear it's an advanced course.
The best starting resource I've found is probably Siraj Raval on YouTube.
Good recommendations! Another one that a lot of people point to is Andrew Ng's coursera course.
In general there are surprisingly many courses on most topics available as video lectures these days, with a lot of them being posted directly to youtube.
Sorry, my response was a bit rushed and I think I misread your question. As far as the math and programming go, you are on the right track and probably ahead of most of your peers.
The only thing I would add is to make sure you have a good study regime. At my program, I don't think anyone who puts in 40-50 hours a week is having serious problems.
Coursera has a great course in Control of Mobile Robots that I took last year. Even if you can't understand all the concepts watching the videos gives a good insight into an aspect of robotics control. There is a new course starting January 20th.
My first impression of AIXI was bad, very bad, since it broke my bullshit meter. I mean if you have infinite computing resources, anything is possible. I could even prove that black == white or that AI == classification ;) Anyway, my views on AIXI are better expressed by Richard Loosemore in this AGI forum discussion (check out the exchange between him and Matt Mahoney. Ben Goertzel's comments are also interesting there).
To /u/Trnogger: You are right in advising to start with Python, but you said Python would die out. I don't think that's going to happen, because Python is a simple language and therefore it is easy to bring in new programmers with it. You know, when some guys see all those braces and C-style for loops, they go crazy and rage quit. Python is easy for AI development too.
So my idea is that you should learn both Python and C/C++ first. Do some warm-up questions from high school programming books, then go on to do some 30-40 problems from http://projecteuler.net in C/C++ (it might be hard, but get through it somehow; you can do it if you believe in yourself). But don't forget Python. You should then learn many Python libraries and take Machine Learning and Neural Network courses. Try to do your work in Python because it lets you focus on the problem itself; you don't have to cook up too many loops, etc. Once you achieve your result, recode it in C/C++, because they are way faster in execution. Hope this helped. :D
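For a feel of what a Project Euler warm-up looks like, here is problem 1 (the sum of all multiples of 3 or 5 below 1000), shown in Python for brevity; the same logic is an easy first exercise to then recode in C/C++:

```python
# Project Euler problem 1: sum of all multiples of 3 or 5 below 1000.
total = sum(n for n in range(1000) if n % 3 == 0 or n % 5 == 0)
print(total)  # -> 233168
```

The early problems are all like this: short to state, a few lines to brute-force, and good practice for loops, conditions, and (later) smarter closed-form solutions.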
A command line interface. You know, entering stuff at the prompt. Bash:
http://en.wikipedia.org/wiki/Bash_%28Unix_shell%29
I use Linux: Mint 14 and Ubuntu 12.04, both 64-bit. I also have Windows 7 on a different partition of my hard drive, but rarely use it. The 64-bit Linux operating systems are as powerful as any other OS out there, and they are free!
The question is far too broad, but as a nice entry point I'd suggest you learn about expert systems. It's a good example of AI from a historical perspective and can also impress you quite a lot (like this one: http://en.akinator.com/ ).
Hi, if you are at an introductory level I recommend taking this course, which is free and has a lot of practice exercises covering many of the AI algorithms: https://www.udacity.com/course/intro-to-artificial-intelligence--cs271
The book is really not that hard to read if you come from a CS (or math) background. It also has a ton of exercises and all algorithms are outlined in pseudocode, so you can go and implement them and play around with them a bit.
There's also https://www.udacity.com/course/cs271 which also uses the same book.
Honestly, I wouldn't go back to school for CS. You have a degree and you work at a major tech company. The degree, coupled with some programming knowledge (which you can demonstrate with projects you've made on GitHub, etc.), will be enough to get your foot in the door somewhere; and you already have the advantage of working at a major tech company.
Reach out to someone in software development there and try and get in contact with a manager or hiring personnel for that area. Express your interest now, seek their guidance and then you've got a point of reference in time where you can point back to and say "Hey, I emailed you x months ago expressing my desire to move in to dev. I took your advice onboard and I've since begun learning x, y and z through x.x.x and self-learning. Here are some of my projects. If you don't mind me asking your opinion and advice again, do you think I am ready to start applying for positions? If not, where would you recommend I direct my studies to build upon what I've already demonstrated?".
I imagine biology would be a valuable degree to have to companies developing AI solutions for medical or other industries.
I'd recommend Udemy.com for bootcamps on stuff like Python. Wait for them to go on offer, though. Full price they're like $200, but they are very often on sale for $10-20. You get A LOT for your money, especially at discounted prices.
But yeah, I don't think you need to go back to school, especially since half of it would be generic classes you've probably already done, leaving only two years of practical programming, when you could just start now and do as much or as little studying as you like or your schedule permits.
You can learn a lot that's related to the field in the Machine Learning course. I took the section last fall and it was really good. Ng is a very good teacher imho. Looks like the course is about half way through, but you can still join and catch up or just float through at your own pace.
What country do you live in? Treasure the fact that you have a foot in the door at the 'sentient' systems company. Not many AI jobs exist on the planet. Do you want to keep working at that job? If so, ask your management for advice about how to work there as a career and also continue your education. Look for free classes online at sites like Coursera and Udacity. Consider carefully whether you can turn the job you already have into the career of your dreams. Talk to the management about the kind of AI development you find interesting. Ask them if they think the company might do anything like that someday. If they say yes, start working on a roadmap for both the company and your career.
Having reached the final stages of a degree means you have learned the skills required to learn on your own. Apply those skills. Dig in your heels. In your free time, read and study the way you think you would need to study if you went to grad school. Spend a few years getting the experience of full time work in the real world. Then, come back to this question.
He was right about progress in those fields. He said 5 to 10 years, but aren't both things he hoped for in 2016 already available now in 2018, albeit maybe not in a perfected state? Good machine translation with DeepL and semantic search with Talk to Books: progress is so fast in this field that things tend to appear sooner than predicted.
>I agree
Then you understand why your measure of a "good conclusion" is fundamentally flawed.
>What makes you think that I haven't?
You asked me about algorithms for correct logical deduction. It might be worthwhile to revisit those topics.
>No one uses it in debates. No one.
This is not correct. I've seen formal proofs used in debates very effectively. Formal logic may not be used in political debates, but it is used elsewhere. Regardless, this has zero bearing on what future AIs will find convincing.
>Are you implying that someone built a better tool using propositional logic, and so I am wasting my time?
No, I'm saying that future AIs may not accept logically sound arguments, much less merely persuasive arguments.
>You are saying my tool has no hope of ever helping people use logic better?
I'm saying that it does not use logic to begin with, and may also suffer from subjectivity.
>Can you send me a free copy of the tool that uses propositional logic that is better than my Microsoft Access Database?
Here is a website where you can download Prolog, an extremely powerful declarative programming language: http://www.swi-prolog.org/
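The core trick behind such declarative systems, deriving new facts from rules until nothing new follows, can be sketched in a few lines of Python. This is a toy forward-chainer over propositional Horn clauses, not real Prolog (which uses backward chaining and unification over first-order terms):

```python
# Toy forward chaining over propositional Horn clauses: keep applying
# rules (premises -> conclusion) until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical knowledge base for the example.
rules = [
    ({"socrates_is_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal", "socrates_is_greek"}, "example_done"),
]
derived = forward_chain({"socrates_is_man", "socrates_is_greek"}, rules)
print("socrates_is_mortal" in derived)  # -> True
```

The point is that the conclusions are forced by the rules, not by rhetoric; that is what distinguishes a logic engine from a persuasion tool.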
Thank you for your answer. Actually, I've been working with CUDA for two years now, mainly for improving image filtering algorithms. Should I bother learning this: http://www.codeproject.com/KB/graphics/GPUNN.aspx ? I've been reading about cognitive neuroscience, but I worry it only relates loosely to what I'll learn in the next few years, university-wise.
For an alternative field...
Go check out OpenCV. It has more to do with image processing, but utilizes machine learning techniques. Tons of tutorials and it's a good foundation (imo) for anyone who's interested in getting into AI/Machine learning.
Personal experience with it : I used this software back in college to help build an image recognition application that could identify house addresses from bing street-car images. I personally like image recognition based AI as I feel there are ~~more~~ better data caches to train on and it's a bit more intuitive.
I just started with this site: http://www.codingame.com. They let you code in many languages...the one you posted is only javascript, which I don't know.
What's your experience with fightcodegame? Have you played, or just heard about it?
I have written a book about this exact subject. It comes out next Monday on Halloween! If you are interested in reading it, you can find it on Amazon https://www.amazon.com/Civil-Rights-Addressing-Artificial-Intelligence-ebook/dp/B01LYFUJYT/
I can also email you a free copy if you like.
I do think that some rights would be right for AI, other rights might be very dangerous or unnecessary... It's all in the book.
I'm currently reading Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality by Robert M. Geraci. This book explores how religious ideas have infested our expectations for AI. Its arguments are quite similar to The Secret Life of Puppets by Victoria Nelson, which was an even deeper consideration of the metaphysical implications of uncanny representations of human beings, whether in the form of dolls, puppets, robots, avatars, or cyborgs. I think it is really important to understand what is driving the push for this technology.
Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat is also a good book on the dangers of AI.
You want more book recommendations? Well, one of the creepiest aspects of AI is that Amazon is using it for its recommendation engine. So just go on Amazon and it will be an AI that recommends more books for you to read!
I believe Structure and Interpretation of Computer Programs is written using Scheme, a Lisp dialect. It's a general computer science/programming book, so it's not really AI-related, and it's quite old now. Other than that, just check out some stuff online by some of the older AI guys, who often stuck with Lisp.
The main AI textbook, AI: A Modern Approach (AIMA), is also written with Lisp in mind, and the authors maintain the Lisp examples online.
Well, that's up to you mostly. To my mind, there's a world of difference between machine learning and artificial intelligence, mainly because machine learning relies heavily on algorithms and principles, some of which are centuries old (Bayesian probability being a very common one, for instance). It is essentially a computation that leaves you with "this is your best bet". Don't get me wrong, those methods are brilliant, and we have achieved a whole lot with them and still do. They are still the easiest way to deal with, for example, classification of large volumes of data. But that is not intelligence in the slightest. They can be and are a useful tool in that field, but by using them, you (your software) are not looking into the semantic meaning of the data you are feeding in, which, I believe, has nothing to do with real intelligence. I believe neural networks and evolutionary computation are significantly closer to what artificial intelligence should be: not in the sense of "self-awareness", but more in the sense of being able to predict and analyze the semantics of what you are giving it. Being able to ask relevant and meaningful questions and build models and networks based on that would be the ultimate achievement, I think.
Edit: Some useful reads I mentioned:
Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig
The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman
Pattern Recognition and Machine Learning by Christopher Bishop
Fundamentals of Deep Learning: Designing Next-Generation Artificial Intelligence Algorithms by Nikhil Buduma
Introduction to Artificial Intelligence by Philip Jackson
On Intelligence by Jeff Hawkins
How to Create a Mind: The Secret of Human Thought Revealed by Ray Kurzweil
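To make the earlier point about Bayesian probability as "this is your best bet" classification concrete, here is a toy naive Bayes text classifier in plain Python. The training messages and vocabulary are invented purely for the example:

```python
# Toy naive Bayes classifier: P(class | words) is proportional to
# P(class) * product of P(word | class), with add-one smoothing.
# Training data is invented purely for illustration.
import math
from collections import Counter

train = [
    ("buy cheap pills now", "spam"),
    ("cheap offer buy now", "spam"),
    ("meeting agenda for tomorrow", "ham"),
    ("lunch tomorrow with the team", "ham"),
]

counts = {"spam": Counter(), "ham": Counter()}
class_totals = Counter()
for text, label in train:
    class_totals[label] += 1
    counts[label].update(text.split())

vocab = {w for c in counts.values() for w in c}

def classify(text):
    def log_score(label):
        prior = math.log(class_totals[label] / sum(class_totals.values()))
        n = sum(counts[label].values())
        return prior + sum(
            math.log((counts[label][w] + 1) / (n + len(vocab)))
            for w in text.split()
        )
    return max(counts, key=log_score)

print(classify("buy pills now"))  # -> spam
```

Nothing here "understands" the messages; it just multiplies word probabilities, which is exactly the distinction the comment above draws between statistical best-bet methods and semantic intelligence.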
I would advise you to find an introductory book; Artificial Intelligence: A Modern Approach by Russell and Norvig might be a good place to start, as it looks into several areas of AI (search, logic, planning, machine learning, probabilistic reasoning, natural language processing, computer vision) without introducing too many complex and unnecessary details (unnecessary for a beginner, I mean).
Apart from that, you can check out this MIT course or this edX course (there are no open sessions for the edX course at the moment, but I think you can still enrol in the archived one).
Good luck.
Pinker - How the Mind Works
As a software engineer, I once very much enjoyed some of the old school books such as Norvig's Artificial Intelligence: A Modern Approach, and his Paradigms of Artificial Intelligence Programming: Case Studies in Common Lisp.
I've read several neural net books too, but none of them stuck in my head as much as those.
I would first pick up 'Artificial Intelligence: A Modern Approach' and work on a pet project which can be as simple as building AI for playing chess; to get a feel for the start of a foundation. Then based on your interest work towards the specialized problems related to computer vision (handwritten digit recognition using MNIST dataset) or natural language processing (part of speech tagging). Andrew Ng's Coursera course is a very good start for this part. By this time you'll yourself have a pretty good idea where to go next.
See my comment in this thread, where I try to teach her a model. She is built to attempt to do that, but she isn't very capable and unrelated facts overwrite each other. These facts erroneously represent the same relation. She also does show some interest when I teach her what a "skeen" is.
However goal based behavior such as active learning is not the same as knowledge representation AI.
If you haven't already, look into prolog, datalog, miniKanren, clojure's core.logic and hyper-graph databases. Look into chapters 7-9 in Artificial Intelligence: A Modern Approach. (current). Read about SAT solvers, their use in inference, and check out the Wiki articles about propositional logic, first order logic and higher order logic.
Hope it helps.
Hi there.
Getting a job as an undergrad research assistant is another big way to make your mark. Make it clear that you're looking for such opportunities, and if they have funding they will probably come back to hire you. Unfortunately, the "having funding" part can be rough sometimes. Often, this work can be tedious and boring, but it will give you a chance to work with graduate students. If they're impressed by you, they'll say something to their adviser, who will hear it and probably start giving you more responsibilities.
As for them being sluggish to respond, that sucks. Professors are usually way more busy than it appears to those who don't work with them often. January and February are deadline months for some of the big conferences, so understand they're probably under pressure at the moment.
Be polite in your emails, but make them absolutely as short as possible, and require the absolutely smallest response possible: "Hi Dr. X, I'm a sophomore in CS who's interested in trying my hand at undergraduate research, especially in AI. Please let me know if you have positions available, and consider forwarding this email to colleagues. My resume is attached. Thank you kindly, challischill." The less work your emails require in response, the quicker they will be to respond.
As for the Norvig book, you're looking for "Artificial Intelligence: A Modern Approach." A pdf is here: http://zakki.dosen.narotama.ac.id/files/2012/02/A-Modern-Approach-.pdf I think there's a newer edition, but the old edition remains relevant.
You're off to a good start. Make your presence known and you'll find some opportunities. Please don't hesitate if you have more questions, and best of luck. :)
Thanks!
The Kindle version is free, but I just updated the date in the foreword and it seems like Amazon needs a bit of time to make it available again.
AI has already beaten us at Go and other video games. It can already make art and jokes, and speak with realistic voices, tone, and language. AI can diagnose and read medical imaging with a far higher accuracy rate than physicians. AI can solve engineering problems better than us and can already code better than us (https://thenextweb.com/artificial-intelligence/2017/10/16/googles-ai-can-create-better-machine-learning-code-than-the-researchers-who-made-it/). AI can take in data, process it, and act on it far faster than we can, handling millions of operations at once while we can only handle a few tasks; even with cybernetic enhancements we will never be able to consciously process information at nearly the same level. For example, AI is already used in China to identify thousands of people in a crowd instantly and pull up their pictures and identifying information from huge databases. I think you underestimate how powerful AI is already, let alone how powerful it will be in the near future. With the rise of quantum computers, the computational power advantage over existing supercomputers is on the order of thousands (https://www.google.com/amp/s/www.cnbc.com/amp/2019/10/23/google-claims-successful-test-of-its-quantum-computer.html). Computers can already process millions of operations while we can only do a few; imagine quantum AI.