Not to be a dick, but when you dive into the possible consequences of machine learning & AI, some facial detection software is pretty mundane when compared to other possible outcomes.
The book Superintelligence turned me into a luddite in terms of AI.
>Sorry, I don't have any friends and family because... I talk about this stuff too much and I don't have a degree, honestly.
One of us.
>Instant data (www) changed the world but instant labour will make that look really small in comparison.
Yeah. Also I see the labour, fellow Canadian.
>We also run a decent risk of developing Skynet of some kind.
I think this issue is really beyond most people's willingness to engage. It's just too much of a confusing and challenging topic.
If you wanted to gain a bit of a deeper insight into this, you would need to read some fairly boring books. I read one such book called "Superintelligence" by Nick Bostrom.
Basically, AI runs at a different speed than we do. Our bodies prioritize energy savings, so our brains operate comparatively slowly. Our brains work at closer to the speed of sound, while electrical connections in a computer are closer to the speed of light.
This means that AI could go from human-level to many times smarter than "Skynet" in less than a year. And it would be unlikely to stop there.
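To get a feel for the gap, here's a rough back-of-envelope comparison. The numbers are my own illustrative assumptions (fast nerve impulses at roughly 100 m/s, electrical signals at a large fraction of the speed of light), not figures from the book:

```python
# Rough, illustrative numbers only (my assumptions, not Bostrom's):
# fast myelinated nerve impulses travel on the order of 100 m/s, while
# electrical signals in copper or fiber move at a large fraction of c.
nerve_signal_speed_m_s = 100.0        # upper end for biological neurons
electronic_signal_speed_m_s = 2e8     # roughly 2/3 of the speed of light

speedup = electronic_signal_speed_m_s / nerve_signal_speed_m_s
print(f"Raw signal-speed advantage: ~{speedup:,.0f}x")  # ~2,000,000x
```

Signal speed isn't the same thing as thinking speed, of course, but it gives a sense of why a digital mind could run on a very different clock than ours.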
Overall, fiction gets it wrong because imagining something that intelligent probably isn't something we're capable of.
Speculation is all we really have here. My view is this will be a "second coming" of pretty much every religious figure and maybe even Santa and the Easter Bunny. AI should be able to own our fictions and, for our benefit, bring them to life.
As to what AI will do, I don't think we'll really care all that much beyond a certain point. In terms of raw materials and energy, Earth isn't all that great. I would expect AI to be in space almost immediately, and that's where the majority of its big actions will take place.
Earth will probably become a wildlife sanctuary, and natural humans will probably become the protected inhabitants.
The main thing people worry about is that a superintelligent AI wouldn't necessarily share human values at all. The "paperclip maximizer" is the absurd illustration of that: if a paperclip company builds an AI and gives it a goal of producing as many paperclips as possible, the AI could pursue that, with extreme cleverness, to the point of converting us all into plastic paperclips.
You could say: if an AI were so smart, why wouldn't it recognize it has a silly goal? But why would it view that goal as silly, if human values aren't programmed into it? Are human values a basic law of physics? No, they're instincts given to us by evolution. Empathy, appreciation of beauty, thirst for knowledge, these are all programmed into us. An AI could have completely different values. Humans and everything we care about could mean nothing to it.
In the worst case, as the saying goes, "The AI does not love you, or hate you, but you are made out of atoms it can use for something else."
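To make the misspecified-objective worry concrete, here is a deliberately silly toy sketch (my own illustration, not anything from Bostrom's book). The agent ranks plans purely by paperclips produced; harm to humans never enters its objective, so it isn't weighed at all:

```python
# Toy illustration of a misspecified objective (my own sketch).
# The agent scores plans only by paperclip output; side effects on
# humans simply do not appear in its utility function.
plans = [
    {"name": "run the factory normally",  "paperclips": 1e6,  "human_harm": 0},
    {"name": "strip-mine the city",       "paperclips": 1e9,  "human_harm": 1e4},
    {"name": "convert all nearby matter", "paperclips": 1e30, "human_harm": 8e9},
]

def utility(plan):
    return plan["paperclips"]  # "human_harm" is never consulted

best = max(plans, key=utility)
print(best["name"])  # picks "convert all nearby matter"
```

The point isn't that anyone would write this literal program; it's that an objective which omits what we care about treats what we care about as worthless.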
A really good book that lays out these arguments in detail is Superintelligence, by philosopher Nick Bostrom.
I've been thinking about how I could explain this in a compact way, but I think the inferential gap is probably too large :/
I guess the closest easy analogy is a neural network. There you have a core set of algorithms that can learn a certain class of information and do surprising things, because the structures the core algorithm creates and operates on are "generic" and really just depend on environmental input (as opposed to, say, a purpose-built algorithm for sorting a list or whatever).
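If it helps, here's a minimal toy sketch of what "generic" means here (my own example, plain NumPy): one unchanged learning rule picks up completely different structure depending only on the data it's shown:

```python
import numpy as np

# One generic learning rule (gradient descent on a tiny one-hidden-layer
# net). Nothing below is task-specific; only the training data changes.
def train(X, y, hidden=16, steps=3000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y                      # squared-error gradient
        gW2 = h.T @ err / len(X);  gb2 = err.mean(0)
        gh  = (err @ W2.T) * (1 - h**2)     # backprop through tanh
        gW1 = X.T @ gh / len(X);   gb1 = gh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

X = np.linspace(-2, 2, 200).reshape(-1, 1)
sine_model   = train(X, np.sin(3 * X))      # same algorithm...
square_model = train(X, X**2)               # ...different learned structure
```

Same code, two very different learned behaviours; the "knowledge" lives in weights shaped entirely by the environment.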
I guess the important point to take away here is that our own nervous system works basically the same way. Underlying our "intelligence" are a couple of relatively simple algorithms baked into the hardware our minds run on, and yet they produce a huge array of different outputs.
You might try to argue that the structure of the hardware is important, but I'd just point out that, like an NES emulator, it will eventually be trivial to simulate and even modify our hardware in a software environment that could be contained inside computers as we know them already.
Whole brain emulation like I'm describing is considered the fastest way to reach human-level AI, which makes it very, very dangerous.
If you want to know about why it's so dangerous, and about this topic in general, I highly recommend reading Bostrom's book Superintelligence:
Yeah, I feel like there are two ways to go with this. One is the Kurzweil view that we will meld with the machines. The other is that the machines will just take over. I think the guy who best articulates the arguments for the latter theory/massive concern is Nick Bostrom. Here's his book if you're interested: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ref=sr_1_1?s=books&ie=UTF8&qid=1448491508&sr=1-1&keywords=nick+bostrom
Once we have an artificial intelligence that can self-iterate its code base, all timing predictions can be thrown out. If it becomes a superintelligence, we can additionally throw out all predictions of what then happens to this planet. Not just jobs would be at stake; humanity too. Mind you, we could also live in eternal bliss if the intelligence's goals are set up well. We just don't know.
Superintelligence, if you'd like a pretty comprehensive overview in nonfiction book format.
-1 for erroneously promoting some blog.
All I can say is: read Bostrom's book. This (accumulation of nanofactories, escaping the box etc.) is explained in great, fantastic detail in whole chapters. It is never the only option -- that's the point, we can't precisely predict a superintelligence -- rather, it's an imaginable outcome. http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/
>But we need a very clear understanding of the human brain first.
No, that is in fact not necessary. I'd recommend reading Superintelligence.
There's actually a good reason why some of the smartest people alive are starting to raise their concerns.
>But the whole singularity thing is bullshit. Why would machines kill us all?
Why wouldn't they? You're acting as if strong AI will have motivations that can be understood by normal humans. One of the major concerns is that strong AI will be fundamentally unpredictable.
I suppose I was merely nit-picking. Certainly the right type of AI system could be a person.
I see you linked to Nick Bostrom. I'm currently sort of reading his book Superintelligence - I bought it in December or thereabouts and I've only read a few chapters so far. And I've been hanging around Less Wrong/MIRI enough to get a good taste of their take on AI.
It's certainly plausible that humans ourselves reach intelligence takeoff before we develop an AI capable of doing it faster than we can, in which case we don't even need an AI.
But then we run into a whole bag of issues, which is an entirely different thread: do we want humans to be superintelligent?
Similar issues arise. I think both of us have made our points and we're not getting anywhere. I encourage you to read Superintelligence. I'm not anti-AI. I'm anti-unfriendly-AI.
In the video they mention the team "built and puppeteered BB-8 in the movie". However, there would necessarily also be a lot of intelligence in it (like self-balancing algorithms). Whether to call that AI is a difficult point, as the term tends to be redefined to mean more as we take further steps in AI. For this reason, scientists and developers often split it up into e.g. Artificial General Intelligence (a machine able to perform any mental task a human can) and Superintelligence (a machine intelligence that surpasses all of humanity's combined mental capacity, see this great book).
That's actually quite interesting, but I cannot find that claim in the article at all, apart from all that stuff from Kurzweil? Unless it's this paragraph:
> There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
which links to this book as the source. Unfortunately I don't have access to the book to see what it actually says...
Anyway, the main thrust of the article seems to be that AGI will inevitably happen due to recursive self-improvement. I can tell you as a researcher working in the field: with the way we are doing things now, it just isn't gonna happen. Not even with deep belief networks, which are the latest trendy thing nowadays. We need a breakthrough, a massive change in how we view computational problems, in order for that to be possible. What it will be, I don't know.
I totally understand where you're coming from, but you really need to read this recent book by an expert in the field if you don't feel that AI is exceedingly dangerous: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/
This is a big question! If you want to know my thoughts, including on human misuse, I’ll just refer you to chapter 4 of What We Owe the Future.
The best presentation of AI takeover risk: this report by Joe Carlsmith is excellent. And the classic presentation of many arguments about AI x-risk is Nick Bostrom’s Superintelligence.
Why we could be very wrong: Maybe alignment is really easy, maybe “fast takeoff” is super unlikely, maybe existing alignment research isn’t helping or is even harmful.
I don’t agree with the idea that AI apocalypse is a near certainty - I think the risk of AI takeover is substantial, but small - more like a few percent this century. And the risk of AI being misused for catastrophic consequences is a couple of times more likely again.
> Do you have any other recommended reading/listening for me?
Dangerous question!
There's the AI Alignment podcast. Maybe this episode, where Steven Pinker and Stuart Russell debate the case for AI risk (Pinker thinks there's nothing there). You can skip forward because only the end is about that.
There are also three relevant Making Sense episodes, don't know if you've heard them. There's 53 with Stuart Russell, 153 where the last third is with Russell (the first two guests aren't worth listening to imo), and of course 116, the one with Eliezer Yudkowsky. There's also the talk AI Alignment: Why It's Hard, and Where to Start by Yudkowsky.
If you want to know what alignment/safety research is all about, there have been three episodes on the Future of Life podcast with Rohin Shah about that. This is the second and this is the third. The third was more high-level and less technical. (Rohin Shah is very optimistic as far as alignment researchers go, but even that still means something like 15-20% on doom, I think.)
Oh and if you by any chance feel like spending ~40 hours reading several books' worth of loosely relevant essays on rationality, there are the Sequences written by Yudkowsky. Only mentioning it bc you mentioned you like him, and I think the Sequences are still super super worth reading (even if you don't care about AI), and they inform a lot of the thinking of people in the field.
And last but not least there's the classic book Superintelligence by Nick Bostrom. If you just want to read one book that makes the case for AI risk, this is definitely the one.
Superintelligence: Paths, Dangers, Strategies https://www.amazon.com/dp/0199678111/ref=cm_sw_r_cp_apa_i_P9SrDb4QJXAP1
I recommend you read Superintelligence. It answers this kind of question and more. Not an easy read, but not too hard either.
This is a topic of debate. There is indeed a hypothesis that a "singleton" might emerge. If you're going to read Bostrom's Superintelligence, look out for that word and also "decisive strategic advantage". An entity with a DSA can eliminate all competition if it wants to. Such an entity could be an AI, but also a group of people such as a government. If the first ASI's power is growing fast enough, it may indeed acquire a DSA before we can build enough competitors to prevent this. When the DSA is large enough, there are probably ways to deal with challenges and threats other than extermination.
An alternative theory comes from Robin Hanson who thinks there will be a society of AIs living/competing together (see his debate with Eliezer Yudkowsky and his book The Age of Em).
Of course there also exist more rosy views of the future with humans and AIs living together, but TBH I don't have a reference for a rigorous analysis of that. Maybe you can find something like that on /r/Transhuman or /r/transhumanism...
> I haven't seen this whole topic on this sub yet, so I'm opening a conversation here about it.
You should check out /r/ControlProblem.
With respect to AGI/ASI (so disregarding nanotech, quantum computing, and other singularity subjects), Nick Bostrom is one of the current leading academics on the subject: https://www.fhi.ox.ac.uk/publications/
His book is a great intro to what AI might bring in the near future, and you can easily make a connection to Kurzweil's predictions from there.
Superintelligence: Paths, Dangers, Strategies by Nick Bostrom
Publisher’s Blurb: Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. If machine brains surpassed human brains in general intelligence, then this new superintelligence could become extremely powerful - possibly beyond our control. As the fate of the gorillas now depends more on humans than on the species itself, so would the fate of humankind depend on the actions of the machine superintelligence.
But we have one advantage: we get to make the first move. Will it be possible to construct a seed Artificial Intelligence, to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation?
This profoundly ambitious and original book breaks down a vast tract of difficult intellectual terrain. After an utterly engrossing journey that takes us to the frontiers of thinking about the human condition and the future of intelligent life, we find in Nick Bostrom's work nothing less than a reconceptualization of the essential task of our time.
There is a very, very good reason to worry about it: the entire future of the human race may hinge on responsible AI development—IF you accept the premise being advanced by Nick Bostrom that is argued at length in his book Superintelligence: Paths, Dangers, Strategies and comprehensively discussed in the article The Doomsday Invention, by Raffi Khatchadourian, in the Nov. 23, 2015 issue of The New Yorker.
Briefly, Nick Bostrom (as well as other AI experts and perhaps even some AI organizations) is convinced that three premises, which are non-trivial parts of the Fermi Paradox, will, if all true, result in a "Filter Event" during the next 100 years that somehow involves humanity destroying itself. Here are the three premises that they suspect will, in concert, constitute this type of scenario playing out:
1) There are no signs of higher (Type II and III) civilizations in the universe because there are no higher civilizations out there.
2) The most likely explanation for this is The Great Filter Event. Per Tim Urban, "The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter."
3) It is highly probable (and this high probability can be mathematically proven) that the "Great Filter Event" is a(ny) civilization's (eventual) self-destruction before they reach the stars.
Forget ISIS/Daesh, forget the potential for a post-antibiotic era...you, me, ALL OF US, ALL our future generations, yes I am talking about every single one of the current and future members of the human species, will vanish if a Great Filter Event occurs, because such an event, by its very nature, will snuff out our entire species.
The probability mathematics, and the conclusion(s) the above arguments portend, would I think be of great concern to anyone who sits down for an hour to think this through.
See http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111/ ... tons of details on the How.
> I just think that what drives this AI supremacy is a hope for immortality.
Yeah that's one of the things, I'm sure. Therefore... what?
> Why would an AGI be able to have omnipotent control over the real world?
Where did we get to this...?
> I don't see why that would lead to catastrophe any more than a terrorist who would get access to the real world.
What?
> AGI's can't violate the laws of physics yet they are able to destroy the entire world somehow through their infinite cunning.
Again, I think brevity is important here, who said they would destroy "the entire world" (do you mean universe?)? "Existential catastrophe" refers to the decimation of humanity as we know it, a threat to the way humans would want things to be.
> Nick Bostrom bases his theories on completely made up confidence intervals and probability distributions.
Citation?
> There is a contingent of "Bayesian" statisticians who use this theory of probability as a substitution for actual knowledge, it's a well known sect in mathematics. This stuff is connected, and if you're actually in the know in these fields you would be less quick to label me a harsh skeptic.
I highly doubt it. Sounds like a really well fleshed out conspiracy theory though. Those suckers fixing spam filters, predicting crime, and beating Atari games, #bayescult amirite?
> I'm just warning you all to look into these things a bit more, I know you guys don't really care about these fields since it doesn't really affect you, but it affects people who are actually in these professions, and having big names like Harris join the pile doesn't help.
Alright. What am I looking at? As far as I can tell, you haven't identified a problem or valid objection...
So far we've got:
1) He hasn't 'demonstrated' anything, therefore what he says is false. (Non-sequitur/Scientism)
2) The consensus in the field is that AI is safe, therefore what he says is false. (Not true)
3) Nick Bostrom will have himself frozen when he dies, therefore what he says is false. (Argument from reverse authority/character attack)
4) He hasn't done anything useful tho/no engineering value (See 1). This is a cult/ideology, therefore it is wrong.
5) His theories are made up and all this Bayesian stuff is a made-up cult which don't real.
#wakeupsheeple
What exactly makes you confident that an AGI (not AI) will be safe? Why do you assume it will do things we will like?
A short answer: I recommend Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies. Seriously cool and chilling read. Or watch some of his videos on YouTube, like this one.
A long answer: Evolutionary programming is not the only technique required to make things scary, although it's reasonable to think that evolutionary methods would be present in the toolkit of an artificial general intelligence (AGI). Now yes, the difficulty of making generalizable intelligence does make it seem un-deadly or unlikely. But people are already making rather startling progress towards something that can learn many things, rather than just learning single tasks like how to walk or play chess. A future version could become very good at learning how to learn new things.
For instance, it may take us another 50 years to get something which can interact with us and do things like program well. But at some point, if an AGI can get as good at programming as a human, and we task it to redesign/optimize itself, perhaps initially for speed on this task itself -- well then, this is where the whole project could escape. It's possible that such a system could rapidly become better than any human programmer on earth. Its code might become unintelligible, in the same way that many useful results from current AI tools (say, a trained neural network) aren't particularly scrutable or informative once trained. That might be characteristic of systems which are capable of solving problems we aren't: we know how to bootstrap them, but they have to transcend our abilities through greater sophistication. So now we have something we're not understanding fully. But it's clearly clever enough for us to have it begin studying medicine or governmental finance, something to start leveraging for all kinds of urgent problems we need solved.
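The feedback loop being described is easy to caricature in a few lines. This is deliberately crude, my own sketch rather than anything from the book or a real system, but it shows why "slightly better at improving itself each cycle" compounds so quickly:

```python
# Deliberately crude caricature of recursive self-improvement (my own
# toy model). Each "generation" uses its current skill to design a
# slightly more capable successor, so capability compounds.
skill = 1.0                  # 1.0 = roughly as good at AI design as its makers
gain_per_generation = 0.10   # assumed 10% improvement per redesign cycle

for generation in range(1, 51):
    skill *= 1 + gain_per_generation
    if generation % 10 == 0:
        print(f"generation {generation:2d}: {skill:8.1f}x starting capability")

# After 50 cycles the system is over 100x its starting point. The worry
# isn't the arithmetic; it's that nothing in this loop knows or cares
# what the growing capability will be used for.
```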
There are all sorts of dangers if this occurs, political and existential. Pressures to use it, to give it more computational power, to connect it to the internet (a bad idea), and so forth. Even the problem of controlling what it wants is non-trivial, and secondary/unintended goals could arise in an AGI, perhaps without our knowledge, which conflict with human interests.
If we were foolish enough to give it carte blanche for self-improvement, and an "intelligence explosion" occurred through recursive self-improvement, the thing might develop a high capacity for manipulating people to get what it wants. That is, a superintelligence might very well end up understanding humans better than we do. Do we have trouble manipulating less intelligent creatures? Another danger: such an AGI would also likely see flaws in the technology we're using to control it, based on a higher-level understanding of it, or other advantages it has like functionally perfect memory. So circumventing our controls could be like taking candy from a baby.
The only strategic advantage we have is that we get to build it (or not). But of course, as research advances, market pressures alone seem likely to bring it about. There are so many potential dangers -- this is just one little set of scenarios. But there are so many serious problems which a truly benign general superintelligence could in principle solve. To quote someone I forget, the only thing scarier than building this thing might be not building it.
Possibly. But for a better idea I highly recommend reading a book called Superintelligence.
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
http://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
Bit of recommended reading here for anyone interested in the coming technological precipice.
> How's that scary?
Automated cars are only the beginning. There is an intelligence explosion looming just around the corner that will result in a labor-saving device several orders of magnitude greater than anything the world has ever seen. Nick Bostrom's book Superintelligence outlines this very convincingly.
There are only two assumptions that have to be made: that there is nothing non-material about human intelligence and consciousness, and that computer hardware and software will keep improving indefinitely. Once AI is better at creating AI than humans are, it will improve itself exponentially. This represents not only a source of high unemployment, but an existential threat greater than nuclear war. The recent movie Ex Machina does a good job portraying what is defined as the control problem, although I don't see why AI would confine itself to a human-like body.
Achievements that just several years ago were thought to be decades away have already been accomplished. It's not a matter of IF, it's a matter of WHEN we have superintelligent general-purpose AI.
Here are more resources:
Professor Nick Bostrom's book, Superintelligence http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
A great TED talk by Jeremy Howard on the recent advances of AI. https://www.youtube.com/watch?v=t4kyRyKyOpo
An AI that taught itself how to play video games, and within several hundred attempts it became better than any human ever: https://www.youtube.com/watch?v=nwx96e7qck0
CGP Grey's Humans Need Not Apply https://www.youtube.com/watch?v=7Pq-S557XQU
The Future of Life Institute (Elon Musk donated $10M last year) - I saw Max Tegmark give a talk at MIT earlier this year, where he emphasized the existential threat that superintelligent AI represents. http://futureoflife.org/who
If you still are not convinced about the effects of automation and AI on unemployment, look at the unemployment level for routine jobs starting around 1990: http://blogs.wsj.com/economics/2015/04/08/is-your-job-routine-if-so-its-probably-disappearing/
Between AI and human gene therapy with CRISPR/Cas9 the future looks amazing, but the transition will suck.
One more link... Professor Jennifer Doudna (look for her to get the Nobel Prize in a couple of years) on the discovery and mechanism of CRISPR/Cas9: https://www.youtube.com/watch?v=SuAxDVBt7kQ
I take my opinion from this: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html Read the bullets about Strategy and Social Manipulation. This is sourced from Nick Bostrom's "Superintelligence" http://www.amazon.com/gp/product/0199678111/ref=as_li_tl?ie=UTF8&camp=1789&creative=390957&creativeASIN=0199678111&linkCode=as2&tag=wabuwh00-20&linkId=LBOTX2G2R72P5EUA
Boström's website, where you can find all his papers.
His latest book, about superintelligence. You can order it here.
His Talk at Google about Superintelligence.
The Future of Humanity Institute, where he works.
The Technological Singularity, what he's talking about.
The Machine Intelligence Research Institute, a connected and collaborating institute working on the same questions.
Some research group working on artificial general intelligence succeeds in making one, but they don't possess a sufficiently detailed theory of AI safety, and they plug in a utility function (or whatever goal system they might be using) that "seemed reasonable", perhaps after some technical but still insufficient analysis.
The most comprehensive resource is Bostrom's "Superintelligence: Paths, Dangers, Strategies". For something shorter, you might find something in Bostrom's or MIRI's papers.
AI is doable.
Friendly is hard, and we probably only get one try.
It's a very good book. I'd recommend watching Bostrom's Talk at Google or one of his shorter talks to see if you're interested in the subject matter, then buy the book if you are.
There's a great discussion of the book happening over at LessWrong.
I choose Superintelligence to read first over all the other books on the subject due to a recommendation I saw at MIRI.
You have obviously never read Superintelligence.
>Couldn't we try to find out about the actual science of human cognitive enhancement before declaring that it will inevitably Go Horribly Wrong? Normally I'm early to the party on saying transhumanism should have caution and display ethical scruples, but declaring everything, including human beings, an "existential risk" (reason for scare quotes: risk to what?) until proven otherwise seems... well... dare I say this... kinda ignorant.
Superintelligence: Paths, Dangers, Strategies is really worth reading. The essential argument is that getting to a superintelligence is a "get it right the first time, or you probably go extinct" problem. Human modification is probably the easiest way to get to making a superintelligence, but all things being equal it is very hard to be certain what the value structure of the resulting superintelligence will be, as human value structures are mutable, so you are probably going to get it wrong. In general he lays out what the paths are, and how far we are from them, though you can see from the beginning that he has already decided that CEV (Coherent Extrapolated Volition) is the only way to go.
Edit: I started a new thread for good fictional examples of transhuman and posthumans, because your points on the doctor are well taken, and I'm interested to see which other fictional idols shatter under assault from the community.
:)
Actually, it's Bostrom's book that I mentioned.
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
Try reading this and you'll start to hope not.
Read "Superintelligence: Paths, Dangers, Strategies" by Nick Bostrom if you're really interested. I found it to be very enlightening.
With the advances a self-improving superintelligence could make with increasing computer power (easy for it to get when it escapes the box), that'd be peanuts. I strongly suggest the terrific book Superintelligence by Bostrom.
>Lol you can't just amalgamate two quotes like that.
If you say so.
>The experts are for figuring how to lawfully decide ethical questions related to technology that uses AI, not debating whether a machine can be conscious and how they should be treated.
The article and its source frame the AI question as one of a non-existent threat and support this by saying that it's not reasonable to question the power of future artificial intelligence because it's a "distraction." He also says: "We're building them to empower humanity and not to destroy us."
Software and hardware engineers have never made mistakes before.
>The idea that there is some special element of danger in AI... as if it were something self sufficient and alien is absurd. It's steeped in ideological thinking about machines, straight from a science fiction book.
In which manner is artificial super-human intelligence like any machine we've ever built before? The set of questions surrounding this debate is itself alien.
>What I think we're witnessing with Harris, Musk and others is something pathological and ideological, that doesn't make any sense to actual engineers working on the problem.
Harris and Musk are both friends and share the same viewpoints. Musk understands at least as much as Harris does on the technological front in this regard, and has personally donated $10M to AI research. Harris raised the question of AI existential threat on his podcast in an informative and progressive manner. Your assessment that his analysis and thought-process are "something ideological" is simply rhetoric.
Musk raises the question more publicly and chaotically. Some quotes of his on the topic dip into the extreme. However, in which way has either of them supported a notion suggesting specifically that the engineers building AI projects should be the ones devoting significant portions of their time to ensuring this does not end in disaster? Both state that it's reasonable to spend some time on the issue (whether or not this is literally software engineers discussing this or "experts in policy, ethics and law" is up to funders and project managers). But saying that this is simply a distraction and shouldn't be bothered with is itself preposterous.
I take it you've never read Nick Bostrom's Superintelligence. This is the book that both Musk and Harris have publicly supported. If you don't agree with what I or Harris and Musk are saying, then at least attempt to understand one source of inspiration for these discussions. Bostrom's book and his writings online are fascinating and thought-provoking. Yet again, this is entirely different from your characterization of the desire to imagine and prevent extreme situations in this field as "something pathological and ideological".
Try this excellent book.
This book, is completely fascinating: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
AI won. It’s over. Next steps are where it’s all going.
Suggest Bostrom writings. He’s on top of it all.
Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us? Nick Bostrom lays the foundation for understanding the future of humanity and intelligent life.
https://www.amazon.com/dp/0199678111/ref=cm_sw_r_cp_awdb_imm_P5HBX5JVDSZ87CPGD1ZC