I agree, but I will add a crucial part: learn python to do specific things. Don’t learn the language for the sake of knowing python. Learn it in an applied way, which means doing end-to-end projects.
OP, you’re in a perfect position to be able to level up quickly because you have data to work with: the data at your job.
I recommend using python to do stuff you mentioned already doing: pull data, clean it up, make some visualizations, build models using scikit-learn/statsmodels, report model comparisons in a visual way.
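For concreteness, that whole loop (pull → clean → model → compare) can be sketched in a few lines. The column names here ("tenure_months", "churned", etc.) are invented stand-ins for whatever your company's data actually looks like, and the tiny synthetic frame stands in for your real pull:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# "Pull" data -- in practice this would be pd.read_sql or pd.read_csv
# against your company's warehouse; this toy frame is just a stand-in.
df = pd.DataFrame({
    "tenure_months": [1, 3, 5, 8, 12, 18, 24, 30, 36, 48] * 5,
    "monthly_spend": [10, 20, 15, 40, 35, 60, 55, 80, 75, 90] * 5,
    "churned":       [1, 1, 1, 1, 0, 0, 0, 0, 0, 0] * 5,
})

# Clean: drop rows missing any of the columns we model on.
df = df.dropna(subset=["tenure_months", "monthly_spend", "churned"])

# Model, then report a simple comparison number.
X, y = df[["tenure_months", "monthly_spend"]], df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Swap in a second model (say, a `RandomForestClassifier`) and plot both scores, and you've already got the "report model comparisons in a visual way" step.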
Hand-in-hand with all this, I would get one of the many “machine learning with python” books and work through it using the data from your company. Not only will you learn the material faster because you are contextualizing the new concepts with data you understand, you’ll be able to impact your company with the assets you create as you learn. I found this book to be particularly nice, though you have many options.
Hope this helps!
>because they’ll effectively be the AI that would replace them
It kind of depends on how we define the "they", the "I", the conscious. If the biological part of the new entity over time ends up being the part responsible for 0.1% of the thinking (decision making, reflecting, inventing etc.), we'll have to ask tough questions on whether it's still "us", or just the intelligence carrying around a decorative flesh remainder of a human... a human who might not even understand much of the 99.9% that's going on in the thinking. In a positive reading of this, on the other hand, we can argue that even in that case we merely upgraded our own conscious but it's still us. And maybe Elon's bet is that whether 0.1% or a proper upgrade, it's better -- facing an emerging Superintelligence -- than 0%.
I have recommended this book to all my students: https://www.amazon.com/dp/1492032646?ref=ppx_pop_mob_ap_share
It covers high-level usage of machine learning libraries and a wide variety of concepts. It also has a GitHub repo which hosts all of the example code in Jupyter notebooks, which is neat. From there you can drill down on any concepts that interest you.
This is SO important. We should be doing this faster than China.
A branch of artificial intelligence is that of breeding and gene editing. Selecting for genetic intelligence could lead to rapid advances in human intelligence. In 'Superintelligence: Paths, Dangers, Strategies', the most recent book by Oxford professor Nick Bostrom, as well as his paper 'Embryo Selection for Cognitive Enhancement', the case is made for very simple advances in IQ by selecting certain embryos for genetic attributes or even, in this case, breeding for them, and the payoff in terms of raw intelligence could be staggering.
Not to be a dick, but when you dive into the possible consequences of machine learning & AI, some facial detection software is pretty mundane when compared to other possible outcomes.
The book Superintelligence turned me into a luddite in terms of AI.
This is some kind of weird gatekeeping where AI keeps being redefined until it just means adult human intelligence. I have a textbook that literally has artificial intelligence in the title.
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems https://www.amazon.com/dp/1492032646/
Hands down the best book in the field. Talks about both Keras and raw TensorFlow.
Just gonna drop this gem here. http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
Doesn't have to be Skynet-level smart to fuck shit up. Also, once it's self-modifying it's a whole other ballgame.
I glanced at "Hands-On Machine Learning with Scikit-Learn and TensorFlow" by Aurelien Geron and thought it was quite good, but I have not had a chance to read it deeply yet.
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1491962291
> I can't help but cringe every time he assumes that self-improvement is so easy for machines so that once it becomes possible at all, AI skyrockets into superintelligence in a matter of weeks.
He doesn't assume it, he concludes it after discussing the topic in depth.
Pages 75-94 of his book. Preview available via Amazon.
How is your math? If you're comfortable with multivariable calc, linear algebra, and some statistics (mostly the basics) then you should be ok. If not, it is helpful (but not totally necessary) to refresh.
This book helped me out a ton when I was first learning TF; it has hands-on projects which cover SVMs, neural nets, computer vision, NLP, and RL. It doesn't shy away from mathematical rigor, so you actually come away with a theoretical understanding of the algorithms, which makes a lot of them seem less like a black box, and because the projects are hands-on you actually know how to turn the theory into actual code.
I was a SWE for 3 years and then transitioned into a MLE role. Been in the ML space for 2 years now and I’m loving it. Your first point is the most important one. Running ML in production is like 95% SWE. There is a significant shortage of SWE skills in the ML space. Having a masters will definitely help get you through the screening process for most companies. As for what you need to know, this varies significantly from position to position. Personally, I would recommend reading 2 books. Hands-On Machine Learning and Deep Learning for Coders with fastai and PyTorch. In my opinion, if you understand the material in these books very well, you will be well suited for most MLE positions.
If you do want to learn more, the seminal book on this topic has already been written.
“Superintelligence” by Nick Bostrom.
https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/1501227742
Well, I have read the book below and a few other resources.
Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems. https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/ref=sr_1_6?crid=1CIL9ZC0E0J3G&dchild=1&keywords=introduction+machine+learning&qid=1622981241&sprefix=introduction+machine+%2Caps%2C199&sr=8-6
The issue is that Prado has a very mathematical approach, and unless you have developed the intuition going through simpler examples, it will not make much intuitive sense. For instance, in section 5.4, he is applying the backshift operator to a matrix of features and then proceeds to relate that to the binomial expansion. Even for someone familiar with both concepts, it is hard to grasp the intuition behind that. There are several such examples.
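For what it's worth, the intuition behind that passage is just the binomial expansion of (1 - B)^d, where B is the backshift operator: the fractional-differencing weights follow a one-line recursion. A rough sketch of that recursion (my own illustration, not Prado's code):

```python
# Weights of (1 - B)^d from the binomial expansion, via the standard
# recursion w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k.
def fracdiff_weights(d, n_terms):
    w = [1.0]
    for k in range(1, n_terms):
        w.append(-w[-1] * (d - k + 1) / k)
    return w

# Integer d recovers ordinary differencing: d = 1 gives [1, -1, 0, 0]
print(fracdiff_weights(1.0, 4))

# Fractional d (e.g. 0.5) gives a slowly decaying tail of weights,
# which is what lets you difference "just enough" for stationarity.
print(fracdiff_weights(0.5, 4))
```

Seeing that d = 1 collapses to the familiar first difference, while fractional d spreads weight over many lags, is most of the intuition section 5.4 is gesturing at.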
I'm not sure how advanced your statistical background is yet, but the best purchase I ever made was **An Introduction to Statistical Learning: with Applications in R** by Hastie et al.
It gives you a basic, intuitive background on various machine learning methods without getting into nitty-gritty probability or statistical theory. And it has really helpful problem sets at the end of each chapter that show you how to apply each of them in R, and which packages you'll need.
Seriously, that thing is like my bible. The authors have made a pdf available on the internet as well, but I'd highly suggest springing for a hard copy. It's pretty cheap as far as textbooks go.
Other than that, I've never been one to learn through online courses or books. I'd second /u/veeeerain and just do a bunch of projects using datasets from sources like Kaggle. Maybe start a blog to keep a portfolio of all the cool things you do. ;)
These two books are VERY good starting points for Machine Learning: 1. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems: https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646
If the math isn’t giving you problems then I think the issue is that you just don’t have a good intuition of how the algorithms are supposed to behave. I think your best bet is reading through ISLR for a more in depth understanding of various learning algorithms. Then I think you should be able to implement at the least the basic algorithms (Linear and Logistic Regression) from scratch.
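To make the from-scratch suggestion concrete, here's roughly what a minimal logistic regression looks like with plain batch gradient descent. This is a sketch only, with no regularization or convergence checks:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iters=2000):
    # X: (n_samples, n_features); y: 0/1 labels.
    X = np.c_[np.ones(len(X)), X]          # prepend a bias column
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        # Gradient of the average log-loss w.r.t. the weights.
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def predict(w, X):
    X = np.c_[np.ones(len(X)), X]
    return (sigmoid(X @ w) >= 0.5).astype(int)

# Toy separable data: label is 1 exactly when the feature is positive.
X = np.array([[-2.0], [-1.0], [-0.5], [0.5], [1.0], [2.0]])
y = np.array([0, 0, 0, 1, 1, 1])
w = fit_logistic(X, y)
print(predict(w, X))   # should recover [0 0 0 1 1 1]
```

Once this behaves the way you expect on toy data, the behavior of `sklearn`'s version (and the effect of regularization, learning rate, etc.) stops feeling like a black box.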
I wasn't really worried about the "AI menace" until I read Bostrom's "Superintelligence" and learned that the threat isn't necessarily what everyone assumes. Bostrom's someone I respect enormously and his scholarship and rigor in the field of existential risk is peerless.
I recommend this book on machine learning. It was what got me started :)
> rather than as a user of well known and used networks and techniques.
I see what you're getting at. Unfortunately ML is very heavy on the theory aspect. It's a bit more applied math than just comp sci, so knowing what's going on under the hood is very important, even in a high level overview kind of setting.
I read the first edition of this book, Hands-On Machine Learning, and I found it to be incredibly helpful in easing me into both the theory and practice of ML from basic concepts. And it's just well written enough to be a pretty enjoyable read.
I'd look at applying to S2DS if you can get in. C++ and ROOT have very little relevance nowadays in data science (or, ever, really). Knowing the fundamentals of some of the ML algorithms in ROOT won't hurt you, but you need to learn scikit-learn, numpy, scipy, etc. in Python as a bare minimum.
Plenty of (free) courses on Coursera, too.
It's an extremely competitive market, and whilst some of the stuff she's done will be useful, she'll be (in many employers' eyes, at least) behind the people graduating with Masters in Data Science/Comp Sci.
Recommend this, too:
https://www.amazon.co.uk/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/ref=asc_df_1461471370/?tag=googshopuk-21&linkCode=df0&hvadid=310848077451&hvpos=&hvnetw=g&hvrand=3789835037830509153&hvpone=...
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/ref=pd_lpo_14_t_0/132-8288571-1969948?_encoding=UTF8&pd_rd_i=1492032646&pd_rd_r=4b6fe4e1-2140-46e7-b75e-f27c3b0ee7d7&pd_rd_w=ESZpn&pd_rd_wg=pRBJV&pf_rd_p=a0d6e967-6561-454c-84f8-2ce2c92b79a6&pf_rd_r=0HZX1HXJKFVYWWMX2EEZ&psc=1&refRID=0HZX1HXJKFVYWWMX2EEZ This is the book you want if you know calculus (and python, though I suppose you could learn that through the examples if you have a base in coding) already. This helped me learn machine learning and how to apply it to data science more than my college courses.
This is a great book. Godel Escher Bach... mind expanding is what I would call it.
https://www.amazon.com/G%C3%B6del-Escher-Bach-Eternal-Golden/dp/0465026567
https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1492032646/ (1,523 ratings, 4.8 out of 5)
https://www.amazon.com/Introduction-Machine-Learning-Python-Scientists/dp/1449369413/ (300 ratings, 4.5 out of 5)
> I don't believe anyone has a satisfying answer though.
The question may well be unanswerable. This leads to one of my favorite unanswerable questions: why is there something instead of nothing? (Favorite because anyone who says they've got an answer to this question is almost certainly either delusional or lying... so it's a good litmus test for 'woo'.)
I highly recommend Hofstadter's Godel, Escher, Bach: An Eternal Golden Braid for a very readable exploration on why this is unanswerable.
Indeed, you can preorder it from Amazon here
I'm currently reading it. If you have a subscription to safaribooksonline, the 2nd edition is already available in draft mode. I find that it contains just enough theory + hands-on to keep things interesting.
The second chapter dives straight into an end-to-end project so that you get the overview picture right away instead of spending too much time on any particular area.
If you're worried about not doing projects and participating in Kaggle competitions, why not do those things? They're pretty low risk and high reward. If you're feeling shaky on the theory, read papers and reference textbooks, take notes, and implement things that interest you. For deep learning stuff there are some good resources here: https://github.com/ChristosChristofidis/awesome-deep-learning. For more traditional methods you can't go wrong with Chris Bishop's book (try googling it for a cheaper alternative to Amazon ;): https://www.amazon.com/Pattern-Recognition-Learning-Information-Statistics/dp/0387310738.

Side projects can really help here, and the key is to use references, but don't just copy-paste. Think of something you'd like to apply machine learning to with a reasonable scope. Search Google Scholar/arXiv for papers that do this or something similar, read them, and learn the techniques. For reading research papers in an area where you're not extremely knowledgeable, use the references in the text or google things you don't know, and make sure you understand well enough that you could teach someone else.

If you're interested in the topic and have exhausted the references, go up the tree and use Google Scholar to find papers that list the one you're reading as a reference - you usually find interesting applications or improvements on the technique. You can also often find open source training data in the appendices of papers. Kaggle also has a ton of datasets, including obviously the ones they provide for competitions.
So... nothing is really engraved in a subconscious, because it's constantly changing and it's highly complex. There isn't a function. There's no deactivation switch, because there's no switch in the first place. The mind is not a machine and hypnotists aren't programmers.
Hypnotists are guides. They specialize in navigating some of this very poorly mapped territory. They're often quite good at it. In some cases - like smoking cessation or phobia reduction - they're reliably good at particular functions - so much so that it's published and statistically significant.
Don't let the reliability of some operations fool you, though. The mind isn't a series of mapped switches and mechanical functions, and IMO it never will be. As a result, the reliable answers in spaces like this will generally be frustratingly vague, just because no one can say "yup, I just slap that tear switch and call tech support if it doesn't work."