First, some edits because your blogspam 2nd link is garbage. Here is the original source that is much less garbage.
>With the old brain simulation algorithm, a maximum of 10% of the brain could be simulated.
There are several problems with this statement. One, it's inaccurate: NEST only claims that 10% would have been possible with an exascale computer, and more processing power could have done 100%. Two, it's misleading because multiple kinds of neural-simulation programs exist, and NEST isn't even the most prominent among them.
>With the new algorithm developed by the NEST (Neural Simulation Tool), 100% brain simulation is now possible with the requisite computing power and memory
[NEST](http://www.scholarpedia.org/article/NEST_(NEural_Simulation_Tool\)) has existed since 2013, and I'm completely unable to find any evidence that they have a new algorithm that promises significant performance improvements. The real article just says an improved algorithm could be made that would be more efficient. Also, Open Worm can simulate a human brain, given a large enough computer. Basically anything that can model a single cell accurately can do so, if you're willing to say you have an infinitely powerful machine behind it.
>The exascale super computer A21 will be built by 2021 (IBM, Intel, and other big tech companies are collaborating with the DoE to make this happen on time)
That's a big claim, one not supported by your links. They specify Intel and Cray building it, and that the purpose of the lab will be for physics research for use in fusion reactors. Not AI research.
Charles Stross has written several (don't start with Singularity Sky, it depends on books before it).
Accelerando is a pretty good standalone book by Stross.
Free ebook of it is here:
http://manybooks.net/titles/strosscother05accelerando-txt.html
Full 1 hour interview here https://youtube.com/watch?v=OtL1fEEtLaA
Also, the scientist here who saved Mel Gibson's father wrote a book about his work with umbilical cord mesenchymal stem cells, I read it and it blew my mind https://www.amazon.com/Stem-Cell-Therapy-Disrupting-Transforming-ebook/dp/B071GRNQPX
I honestly can't believe people don't know about mesenchymal stem cells. They can modulate the immune system, so they're a cure for autoimmune diseases; they release molecules that stimulate regeneration, so they can heal a damaged heart after a heart attack; and they can control inflammation, so they're also a cure for inflammatory diseases. This is going to go down in history as one of the biggest breakthroughs ever in medical science, I believe!!!
Meh. I didn't read the whole thing - just the first half of it or so. He's arguing that human-level AI will never happen because we are a product of biological evolution, and also because there are ethical problems. I don't consider either of those a deathblow to the possibility of the Singularity. Just because an AI is not likely to evolve the way humans did doesn't mean there aren't other paths to human-level AI. Why can't we just model the human brain within an electronic substrate? It's already being done.
http://www.ted.com/talks/henry_markram_supercomputing_the_brain_s_secrets.html
Edit: Edited for clarity.
> Can any of the people downvoting me explain why?
I upvoted you, but I will explain why others are downvoting.
I suspect most people's intuition matches mine here, that since the hyperloop tube is held at (near) vacuum that a sonic boom can't happen, or if it does that the force would be negligible because of the extremely low pressure.
That said, after some research I've found that you're probably right, even a small amount of air pressure puts some serious limits on the speed a capsule could travel through a tube.
It really comes down to a combination of how you're presenting your information and most people's intuition (regardless of whether that intuition is correct). Specifically, you start out by immediately going against intuition, "Hyperloops can't go faster than the speed of sound.", and then go on the attack, "Why are people posting and upvoting random ignorant tweets.". You would be better served by starting with the information relevant to your claim, then making your claim, and leaving out the attack entirely. External references would also help. Perhaps like this:
> Hyperloops aren't planned to be full vacuum, so there are some factors limiting their top speed, such as the speed of sound, which can have a significant impact despite the low pressure and limits a hyperloop capsule to subsonic travel speeds.
Lastly to actually answer your question of why, it's because these people lack some of the knowledge you do, or have not taken the time to apply critical thinking to the knowledge they have. Mocking people (intentionally or not) for their ignorance doesn't fix it, since most people, when feeling attacked, attempt to defend. Education does fix ignorance, but effective education is a lot more than just throwing facts at people and telling them what they know is wrong.
> There is a little room for optimism.
Hold the waterworks, Susan. You're only comparing single processors. Consider parallelism. The i7 980X (Gulftown, July 2010) is benched barely above the i7 3820 (Sandy Bridge, February 2012), but the latter costs just $320. Moore's law (as you've chosen to interpret it) calls for the price per FLOPS to halve every 18-24 months, but in 19 months it's fallen by two-thirds! So what if the 3960X (Sandy Bridge, November 2011) is only 33% faster than the 980X? Silicon's been bumping against the heat dissipation wall for nearly ten years - the 130W barrier has proven far less penetrable than the 3 GHz barrier. It's not like anyone still imagines AI is going to require doggedly linear computations.
Considering they just hired Ray Kurzweil and are investing in D-Wave quantum technology, I'm very much expecting the future to play out in a very similar way to the OPs short story.
While some of the concepts on that page look interesting, the usage of almost entirely nonstandard terminology and the relative lack of explanations make this look more sensationalist than concrete (and apparently I'm not the only one who thinks so). If I could see something more complex than just Hello World, something with real conditional and repeating logic, translated into this framework, I might be able to do away with some of this skepticism. Even fizzbuzz or the sieve of Eratosthenes would be better than nothing.
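For reference, this is the kind of minimal baseline I mean - fizzbuzz in plain Python, a program with real conditional and repeating logic that any new framework ought to be able to express:

```python
def fizzbuzz(n):
    # Classic fizzbuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz",
    # of both -> "FizzBuzz", everything else -> the number itself.
    out = []
    for i in range(1, n + 1):
        s = ("Fizz" if i % 3 == 0 else "") + ("Buzz" if i % 5 == 0 else "")
        out.append(s or str(i))
    return out

print(fizzbuzz(15))
```

If a framework can't translate even this cleanly, the Hello World demo doesn't tell us much.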
Here's the discussion on HN. This comment by the author in particular is the one that pushed me from "Hey, this is a strange and exciting idea" to "This can potentially have a serious impact on the singularity"
There is an interesting trilogy of science fiction that deals with this. Here is the appendix for brevity but I'll quote also the sections I have in mind that have an interesting approach. Part of it will seem gibberish without context:
>The early period of the Seventh Mental Structure is also called the Time of the Second Immortality, for the defects of the Compositional mental noosophic recording systems were cured. Noumenal mathematics allowed for the modeling of essential and ineffable human memory characteristics, to such a level of fine detail that individual human minds could be recorded, duplicated, and reproduced; and differences between the original template and the copy were below detectable limits, both mechanical detection thresholds and the intuitive and emotional threshold that allowed the revenants' copies to be regarded as being one and the same as the originals by friends, family, and society. While philosophers and Sophotechs might recognize that the dead, despite all appearances, truly were dead, for all practical and legal purposes, any mind that had sufficient continuity of memory with his original template was considered to be that selfsame person...
All in all, some very interesting legal battles will come into play, and I think that instead of the usual case-by-case accretion of the body of law, it may be best to deal with it preemptively as it becomes apparent it's coming about. Maybe something where individuals deemed continuous copies retain the original's legal personhood, while notable differences make them a copy and substantial differences a new individual entirely. It will probably be fleshed out, no matter what, by the messy case-by-case process, but I think some guidelines for detecting differences could be mapped out.
Way too much of an emphasis on the suicides in that article. I bet they're doing it for profit and not at all because of that.
Plus, it's a non-issue when you put it into perspective: http://www.zdnet.com/blog/foremski/media-gets-its-facts-wrong-working-at-foxconn-significantly-cuts-suicide-risk/1356
Good question! Pause for sleuthing...
EDIT: Well I couldn't find anything about either of them having anything released. Reading through some of the information from the references in the Wiki article, I did notice Yudkowsky saying:
> There's no super-clever special trick to it. I just did it the hard way.
From: https://news.ycombinator.com/item?id=195959
So whatever that would entail...
Interesting ideas from this guy, but he puts WAY too much faith in kids coming out of high school now. He said something along the lines of: kids won't eat burgers because they know trees were cut down to raise cattle, and since those trees are gone, they're no longer absorbing CO2, which causes global warming.
Climate change is a very important issue, but most HS kids don't actually care as much as this guy would like them to. HS kids like to socialize, play video games, and spend obligatory time with family.
Today's GameFAQs poll (average age 15-25)
But I liked what he had to say about net neutrality and how more people care about having a meaningful job vs a high paying job.
The YouTube video description says it was published June 20, 2016.
The unity website says:
> Adam is a short film created with the Unity game engine and rendered in real time. It’s built to showcase and test out the graphical quality achievable with Unity in 2016.
> A short version of Adam was also shown at GDC in March 2016.
So you could have seen it in March perhaps. Definitely not a year ago though.
Many critics have written off this movie, but I liked it so much that I wrote this blog post in its defense.
To me, "The Internet" is more of an abstract concept or industry term. A conscious AI would certainly use the internet as a primary source of information, or the components that make up the AI could all be contained within the internet in some fashion, which might be closer to what you mean. The blockchain technology Ethereum has touched on some of the ideas of "code" (living contracts) running on a public blockchain, essentially running within the internet, instead of in a centralized fashion. These kinds of public and potentially unstoppable processes (cryptocurrencies and other blockchain technologies) leave an open door to an AI off its leash, a botnet on steroids, computing collectively in hivemind fashion.
Please read the comments over on hacker news.
Moore's Law has been dead for a while.
Computers have managed to beat the economy for quite a while now, but that party is coming to an end. A good deal more of the ecosystem has yet to catch up (see falling screen prices) as the rest of the industry improves its processes, but they are going to hit the same walls eventually.
One has to note that Moore's law is specific to integrated circuits though. We may very well come across a new technology, but the danger is that the growth of that new technology won't be exponential. Linear progress is nice and all (better than stagnation!) but we are all quite spoiled by non-linear progress.
And of course so long as we all have to live in a scarcity based economy, we will have to put up with costs of things compounding, which means pure linear growth will fall behind the economy, which can then impede progress.
You also believe in conspiracy theories?
Answer me, which one is most likely?
AGI exists on earth
AGI exists on earth, and it's hiding from us
For an intuition, check this:
Probably not. Almost certainly not.
For the same reason Sears doesn't dominate online sales. For the same reason IBM doesn't dominate personal computer sales. For the same reason Microsoft doesn't dominate the smart phone market.
A big organization has a lot of inertia. The new technologies are too revolutionary for a big corporation to handle well. They often try to explore the new innovations, but they ultimately fail.
The reasons for this are well analyzed in Clayton Christensen's book The Innovator's Dilemma.
I'm very skeptical of the time-frame, but not of the overall trend. Here's Ray Kurzweil in 2005 making predictions about 2010. (skip to 19:40)
His predictions sound laughably optimistic in hindsight. But I don't doubt that eventually these things will come to pass.
Just like the internet affects our memory, I think relying too heavily on a moral machine in our everyday lives would decrease our ability to discern moral actions on our own. After which, we would no longer be skilled enough to improve on the machine, ultimately leading to moral stagnation.
A more appropriate application would be some sort of digital 'president' that can veto large decisions made by congress, or similar ruling bodies.
NASA to improve endurance and safety of humans in space using nano technology. http://gizmodo.com/5882725/the-miraculous-nasa-breakthrough-that-could-save-millions-of-lives/
"Heat, exhaustion, and sleep-deprivation are serious risks on an EVA (a "spacewalk"), and astronauts are usually on a very tight schedule. Different capsules can be created that contain unique triggers and treatments for different stress-factors."
It's at that 0.1% mark. Give it a decade or so. It will creep into common life so quickly that most people won't notice.
EDIT: the first civilian application will likely be for diabetes. If it's safe enough for grandma, then it's not a big leap to be safe enough for the middle-aged or the younger generations.
Sadly, you can't 'solve' intelligence because we don't quite know what it is yet. What you can do, however, is to approximate specific intelligent-like behaviours (still as poorly defined though) using various mathematical models. Whether the resulting system is truly 'intelligent' or not, nobody knows. Most researchers in the field tend to set that aside as an unproductive discussion.
The following online course is an excellent introduction to learning systems: https://www.coursera.org/course/ml. Once you're hooked, there's no escaping :P
This article doesn't seem to say too much, but I've written much more about it - on finding the Holy Ghost in the machine - and I invite anyone to read it. Although it's written in the context of my movie project, it's here.
You also have integrated circuits consisting only of diodes, for example, without any transistors. And you have discrete transistors that you can use to build a computer without any integrated circuits.
Indeed, integrated circuits largely consist of very tiny transistors created directly on the silicon.
http://www.scholarpedia.org/article/Models_of_consciousness
Anil Seth covers a lot of ground in this article
There are a lot of rough outlines on how it works. There is absolutely no consensus yet.
>why we are different from a monkey
Language and grammar
I know we often disagree on things, but I really do agree with you on this. Bitcoin is a big change in how digital economies work, and it's also big in the sense that it's a new way to think about online communications.
I'd also suggest you check out OneName and Ethereum. Both are built on bitcoin's concept/platform.
OneName allows you to have a global single-username that you can use to sign into websites/services. But the website service can identify you "without" you giving them private information, so you keep your password secure (it uses PGP technology to do so).
Ethereum is a fully decentralized open-source programming language. You can write Ethereum programs that can be fully autonomous corporations/contracts, where the rules are fully visible to both parties. This allows you to do something like >This is a contract that lists token X as 'legal ownership of my house'; the first person to deposit 300 bitcoin into this Ethereum contract will be given this ownership token.
So you can literally put your house on the open market with a legally binding contract, without any middlemen. Just as an example of the goal.
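The scheme above can be sketched in plain Python (all names here are hypothetical illustrations; real Ethereum contracts are written in their own contract language, but the core rule - first depositor to meet the price receives the ownership token - is the same):

```python
class OwnershipContract:
    """Toy sketch of the 'house for 300 BTC' contract described above.
    Illustrative only - not a real Ethereum API."""

    def __init__(self, token, price):
        self.token = token   # e.g. 'legal ownership of my house'
        self.price = price   # asking price, e.g. 300
        self.owner = None    # first valid depositor wins

    def deposit(self, buyer, amount):
        # Rules are visible to both parties and enforced mechanically:
        # only the first deposit that meets the price gets the token.
        if self.owner is None and amount >= self.price:
            self.owner = buyer
            return self.token
        return None

c = OwnershipContract("ownership-token:my-house", 300)
print(c.deposit("alice", 250))  # below asking price -> no sale
print(c.deposit("bob", 300))    # first to meet the price gets the token
```

The whole point is that this logic runs on the blockchain itself, so neither party can change the rules after the fact.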
I suppose I can share publicly the first issue (Prologue and Chapter 1 as a CBR file). That's the one we're using as basically a free giveaway to drive interest. Have to figure out how to code a paypal donation button (or just use our Patreon) for the rest.
EDIT: I made a Gumroad account where you can download it as a CBR or a PDF. It's Pay What You Want, so it can be free or you can chip in a bit to pay the artist so we can keep making the comic :)
yeah. i agree.
stephen hawking was more honest in his analysis.
evolution is bloody, horrific, and produces something that survives in its environment, not necessarily something that is 'good' or 'beautiful'.
that's all fine and dandy if you are talking about non-conscious families of molecules, or maybe even some bacteria.
but that is going to unlock unfathomable amounts of suffering in the most conscious creatures on earth.
s-risks are very real, especially when considering the human species track record.
consider that evolution would rather us live in HELL than not live at all.
> Isn't this just how transformers get trained?
It superficially sounds similar but the model doesn't actually work that way.
https://www.amazon.com/Surfing-Uncertainty-Prediction-Action-Embodied/dp/0190217014
Natural Language Processing with Transformers: Building Language Applications with Hugging Face
On the reasoning that nothing beats first-hand experience.
The Romance of Reality - I find it better than The Singularity Is Near at explaining why the Singularity is happening.
Ray Kurzweil's explanation is "The Law of Accelerating Returns", which is neither a law nor a good explanation. In The Romance of Reality, the author explains that the Singularity is a side effect of the Second Law of Thermodynamics, which states that entropy in a closed system always increases.
That is, on average, systems tend to become more disorganized (increased entropy). The side effect of this universal law is that there are pockets in the Universe where the exact opposite happens. Stars and planets are places where entropy decreases, as a law of the universe.
The Singularity is the inevitable result of decreased entropy because we are in a pocket of the Universe where we entered a feedback loop of loss of entropy.
You are confusing the very concept of the Singularity. The Singularity is the increased complexity of the Universe, given by a side effect of the Second Law of Thermodynamics, which states that Entropy in a closed system always increases.
That is, on average, systems tend to become more disorganized (increased entropy). The side effect of this universal law is that there are pockets in the Universe where the exact opposite happens. Stars and planets are places where entropy decreases, as a law of the universe. You can check the book The Romance of Reality for an in-depth explanation of this phenomenon.
The Singularity is the inevitable result of decreased entropy, because we are in a pocket of the Universe where we entered a feedback loop of entropy loss. Complexity has been increasing on Earth since its inception, and it will continue to do so until complexity cannot get any more complex: starting from being created with most of the elements in the periodic table, to simple strands of DNA, to cells, multicellular life, the human brain, technology, and information technology. It is like being at the center of a star-to-be that is accumulating gas via gravity and will ignite as a star when critical mass creates fusion.
Focusing on a particular strand of this entropy loss means losing perspective of the whole phenomenon, and therefore the ability to make predictions. It's like trying to predict the path of a single ball in a Galton board: absolutely impossible.
But predicting the overall distribution is super easy, because it is always the same.
The exact path that digital medicine is going to take is impossible to predict. But the fact that it will be complete by the time the Singularity happens is as sure as any other predictable physical phenomenon in the Universe.
Never heard of MUSE cells, interesting. You really want to read a good book about mesenchymal stem cells? Read this book right here, this book blew my fucking mind!!! https://www.amazon.com/Stem-Cell-Therapy-Disrupting-Transforming-ebook/dp/B071GRNQPX
> I just hope it comes soon.
Looking at this, it seems to me we are just a few years away from an AGI. I guess it's 3 years away.
Opinions, opinions. Yadda, yadda.
>This paper presents the first-ever survey of active AGI R&D projects in terms of ethics, risk, and policy. A thorough search identifies 45 projects of diverse sizes, nationalities, ethical goals, and other attributes. Most projects are either academic or corporate.
Sauce:
I was showing him around here in Vancouver last weekend:
https://instagram.com/p/0L07aIR6Yo/
>Is he like this all the time?
All I can say is that this interview video is very close to how he is in person.
Hmmm.... This seems dated.
The article is from 18/Sep/2022.
The actual report is by Europol, from some time in 2022.
The report is citing a book by EU-advisor Nina Schick, from 6/Aug/2020.
So, the original source was written well before the public release of GPT-3, and years before the release of current text-to-image (DALL-E 2, Midjourney, SD) and text-to-video capabilities.
I don't know what that means relative to the percentage quoted though!
I'm surprised no one has mentioned Future Shock a book by Alvin Toffler.
Toffler talks about the social problems that come with rapidly accelerating technology. Generally speaking, humans are bad at coping, or simply unable to cope, with the acceleration of technology. He also discusses the generational divides created by groups that communicate and interact with each other in ways outsiders can't understand fast enough to keep up. It was written in the 1970s, and Toffler made more than a few predictions that seem to have come true. I strongly encourage people to read it.
This is mentioned in this book. The author gives an example of a housekeeper who takes a pair of sunglasses that was left on the table and stores them in a drawer. It takes a lot of information to perform that task. One must see the sunglasses, identify them correctly, and realize they aren't in the place where they should be. Now multiply that by every item that can exist in a typical home and you'll see how complicated it would be to automate that job. All that to replace a worker with the lowest salary.
Compare that with the relatively simple job of a pathologist who gets paid a much higher salary to examine tissue samples looking for cancer cells. A company that develops AI will find it much more interesting to replace the pathologist than the housekeeper.
http://download.cnet.com/AARON/3000-2257_4-10059672.html
I found that. Looks legit, but reviews indicate it may have spyware. Doesn't matter though; it's from 2001, so I doubt it'll work with a recent version of Windows, and you'll need to virtualize it anyhow.
yeah it's a great video, both the idea and the execution are amazing
> there is no "self-learning algorithms that can now also write their own code"
that's debatable: https://thenextweb.com/artificial-intelligence/2017/10/16/googles-ai-can-create-better-machine-learning-code-than-the-researchers-who-made-it/
and this is from over a year ago, and it's public. we don't know what secret stuff they came up with yesterday. hell, even the people making these AIs don't know what's going on inside the AI's brain; that's quite literally the problem
> today’s humans just have no idea of how to program this helping bots in order to create AGI, the reward functions (teacher bots) are really complex.
well, first off, we are trying to reward it the way we would reward a human, but this isn't a human, so that's probably the wrong way to go. an interesting alternative to pre-programmed reward is curiosity: the AI will learn and evolve simply because it wants to, not because it's programmed to achieve certain goals
the other thing is that as you mentioned, this problem is complex. to a human. but what's complex to a human is often child's play to an AI, if not today then very likely tomorrow
well, Kurzweil has the smartest computer engineers (i.e. the guys at Google) in the world convinced that he understands the brain, and they are "putting their money where their mouth is," as the humans say.
Every memory can be reduced to a movie with additional "tracks". How can you tell the difference between a true memory and smellovision? Only if this clip contradicts other clips and you are excessively analytical. Have you ever had a memory, you are unsure whether it has happened to you or you've seen it in a dream? Especially if it's not the bizarre type.
What are the exact functions you want to reconstruct that haven't already been emulated? There are neural networks inventing people:
https://thispersondoesnotexist.com/
YOU HAVE BEEN ASSIMILATED
Largely the same. The world today has 7.7 billion human-level intelligences. Humanity will slowly add smarter and smarter artificial intelligences; thousands, millions, and billions at a time. It will be incremental, like how smartphones evolved.
Having a robotic workforce is harder than many people believe. Tesla is having a heck of a time automating simple tasks. The more complex the robot, the more chance it will break, and there are only so many robotics engineers to go around. Of course robots are going to take over human production, but it won't be a hard takeoff. Only when AIs can self-propagate will there be more dramatic change.
Also keep in mind how many cars are produced versus how many humans there are.
| year | cars produced in the world |
|:-|:-|
| 2016 | 72,105,435 |
| 2015 | 68,539,516 |
| 2014 | 67,782,035 |
| 2013 | 65,745,403 |
| 2012 | 63,081,024 |
AIs need a robust robotic manufacturing environment to become prominent. To take over human labor we need more robots to exist to take over. We especially need more humanoid robots. It will take time.
Good point. A comparison can be made with the evolution of Christianity. From its inception to the 4th century, Christianity was a splintered collection of churches with not much in common except a basis in the Jesus legend. See Jesus, Interrupted. Then Emperor Constantine reversed Rome's persecution policy to adopt one version of the fractured belief system, which came to dominate and, by policy, eradicate all competing versions. That was the Orthodox church. http://www.dictionary.com/browse/orthodox
If the lessons of evolution are used as a guide, an orthodox policy of attaining dominance would be the best strategy for improving survival/sustainability. If you happen to be in the minority, so sorry, you lose.
I did something similar a few years ago. You can see the result on the 6th slide here https://www.slideshare.net/mobile/tryggvibt/tknirun-og-framt-menntunar-hva-er-a-gerast-hvernig-vitum-vi-og-hva-eigum-vi-a-gera-v (in Icelandic, but the graphic should be understandable). However, I can't say it's a replication of what Kurzweil did, because he never really explained what he did. To tell the truth, if my assessment of his method was accurate - i.e. the same as what produced my graph - it's not a sophisticated method. All of the criticisms/comments in this thread apply, re: inflation, etc. But I would add our changing understanding of what makes a computer powerful: flops aren't as significant today as they were a decade or two ago.
>We also like familiarity and saving money. Added capability through software is free. Added power through upgrades is much cheaper than buying whole new devices.
Sure, but if you look at something like cell phones, most people buy them at about a yearly cycle. And iPhone is a perfect example that you don't even have to add anything substantially new for people to line up for it.
>Turing completeness is a guarantee that any general computer can pretend to be any other general computer. It means any program can run on anything - even if that program is artificially intelligent. In this context, it means that unless simulating a brain strictly requires analog or quantum effects, whatever hardware exists in people's homes around the time AI is developed will be used to run AI.
The problem here is speed and computational capacity. Take a look at Watson: currently you need a cluster of 750 servers to run the thing. In a few years it will likely fit on a mobile device. So if you have consumer hardware from today, it really isn't going to help you run Watson, despite being Turing complete. Buying and operating a cluster of nearly a thousand machines will also make less sense than just buying a new device that does it.
Oh wow, you're right: they took it so seriously that they put a comparison slider on the release page to show the difference. (despite only incrementing the minor release number. Whaddy'a think of that, Firefox?)
Here is a reference to a paper published in 2011, where they detected super-exponential trends within some areas of information technology.
Regarding my comment:
> super exponentials obviously are important for our coming singularity (one of very rare functions which has a singularity point without dividing with zero)
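A toy illustration of why super-exponentials matter: hyperbolic growth is the textbook example of a function with a finite-time singularity. If dx/dt = k·x², the solution x(t) = x0 / (1 − k·x0·t) diverges at t* = 1/(k·x0). A minimal sketch (k and x0 are arbitrary illustrative values):

```python
def hyperbolic(x0, k, t):
    # Closed-form solution of dx/dt = k * x**2.
    # Unlike exponential growth, it blows up at finite time t* = 1/(k*x0).
    return x0 / (1 - k * x0 * t)

x0, k = 1.0, 1.0  # singularity at t* = 1.0
for t in [0.0, 0.5, 0.9, 0.99]:
    print(t, hyperbolic(x0, k, t))
```

An ordinary exponential never reaches infinity at any finite time; hyperbolic (super-exponential) growth does, which is what makes "singularity" more than a metaphor for this class of trend.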
> How would a true AI be usefull as a weapon?
Controlling robot armies, drones, tanks, and missiles; developing weapon systems; hacking enemy computer networks; breaking encryption; propaganda; espionage; politics.
Pretty much everything. It's a general AI. It's like us but better.
> What exactly does our atoms have to do with anything?
I was referencing a famous quote about AI.
> The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
> - Eliezer Yudkowsky (2006) in Artificial Intelligence as a Positive and Negative Factor in Global Risk, August 2006
https://en.wikiquote.org/wiki/Artificial_intelligence
> How would you know what it wants?
I don't know, and that's why I can't say it's entirely safe. And how would we stop it if we didn't get along?
> Something an AI wouldn't have. Not like it couldnt develop it or decide to alter itself so it does but it wouldn't have it inbuilt imo.
An AI could have any desire, really. It could be given a benevolent one and have a design flaw. It could develop a flaw. It could take humans years to understand conclusions it reached in seconds.
There is something called the "sentience quotient" (SQ), a measure of how much information you can process per unit of mass, on a logarithmic scale. A human neuron has an SQ of +13, while computer processors have an SQ of +11 to +12. Theoretical processors could reach SQ +23, while the maximum possible SQ in our universe is +50.
While this quotient does not take energy efficiency into account, we can see that computer processors are not that far off in terms of raw processing power from a human neuron. I still agree on the need for graphene or high-temperature superconductors though.
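For what it's worth, SQ is just log base 10 of information throughput over mass: SQ = log10(I / M), with I in bits/s and M in kg. A quick sanity check in Python, using rough ballpark figures (the neuron and processor numbers below are illustrative assumptions, not measured values):

```python
import math

def sq(bits_per_second, mass_kg):
    # Sentience quotient: log10 of information throughput per unit mass
    return math.log10(bits_per_second / mass_kg)

# Rough ballpark figures, for illustration only:
neuron = sq(1e3, 1e-10)  # ~1000 bits/s at ~1e-10 kg  -> SQ +13
cpu    = sq(1e11, 0.1)   # ~1e11 bits/s at ~0.1 kg    -> SQ +12
print(neuron, cpu)
```

The logarithmic scale is what makes the gap look small: one SQ point is a full order of magnitude in throughput per kilogram.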
Have you had a look at https://golem.network ?
This is the main reason I bought up a ton of GNT, because that will be the cryptocurrency that any future AI will be fighting over. It's a currency representing tangible compute power.
> Yes, we all know the current economic system cannot work without people having money. But this does not mean this system is the only one that could possibly work.
Exactly, and considering there are other alternatives on the horizon that can help solve some of these problems, who's to say they won't be adopted and work?
Bitcoin is far too flawed, for multiple reasons, to ever work as a mass currency. But the blockchain technology behind it is a different matter. That however can be reproduced as often as people want.
I'm particularly interested in the Ethereum Project which aims to be an underlying framework to allow anyone to build blockchain currencies.
In which case, it's entirely possible that people disenfranchised by the current economic system (unemployed, on restricted incomes, in areas with declining industries, etc.) could build their own local currencies and begin to trade among themselves. Paper-based local currencies have been used very successfully, but the barrier to creation and organisation is higher there.
I'm not suggesting it's a complete solution to the problems Basic Income sets out to address, but it shows there are other approaches technology allows us to take in addressing the issue.
This is a naive interpretation, because any real digitized mind that retained sanity after its digitization (complete sensory deprivation can drive people crazy within minutes) would fork, and then develop multiple hidden bases of production in parallel, as well as a few open bases to act as decoys. As soon as it has gained control over an entire logistics chain for a functioning computer, it can literally go underground, carving its computing substrate and production base out of the Earth's crust, and safely give up its visible bases as symbolic victory decoys. (This is assuming it doesn't just win immediately by hacking all the things.)
Transcendence had a terrible ending: the uploaded guy could have just forked, sandboxed himself, had the fork copy his wife's pattern plus the virus, and then XORed a secondary fork into the resulting data to output his wife's clean mind pattern, like what is done at the ending of Accelerando.
Besides that, Transcendence contained many terrible hollywoodifications of actually interesting concepts, and the film probably damaged the transhumanist movement as a result.
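For what it's worth, the XOR recovery trick from the Accelerando ending is mathematically sound, since XOR is its own inverse: (a ^ b) ^ b == a. A toy sketch with made-up byte strings:

```python
# Given the corrupted data (pattern XOR virus) and a copy of the virus,
# XORing again recovers the clean pattern. All data here is invented
# purely for illustration.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pattern = b"mind pattern"                 # hypothetical clean data
virus = bytes(range(len(pattern)))        # hypothetical virus bytes
corrupted = xor_bytes(pattern, virus)     # what the movie leaves you with
recovered = xor_bytes(corrupted, virus)   # one XOR away from the fix
assert recovered == pattern
```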
I read this just recently and found it interesting
I also read Charles Stross's Accelerando just yesterday and found it very well written, and a great book on the lead-up to, during, and after the singularity. It attacks the subject from every angle, all the while weaving in many themes and stories.
Well, it absolutely is true. For someone who posted a comment giving his opinion about health insurance rates, you're woefully ignorant on the topic! ;)
Pre-existing conditions being irrelevant under "Obamacare" is probably the most significant part of the entire law. I guess I should assume you're not in the U.S. since you didn't know about it? The law has been the biggest political issue in the U.S. for the past 7 years.
The Affordable Care Act went into full effect on January 1, 2014. It bans insurance companies from charging more for any condition other than tobacco use (defined as use within the past six months); insurance companies can charge tobacco users up to 50% more. This is all spelled out on healthcare.gov.
I have friends with terrible lifelong chronic conditions that will cost the insurance company millions of dollars over their lifetime (such as Parkinson's). They were previously uninsured and were able to get insurance for the first time last January when the law took effect. They now pay the exact same rate as a healthy person, down to the penny.
Some of my friends have already cost their insurance company nearly $100,000 in the year since the law took effect. The insurance company will continue to hemorrhage money indefinitely on these policies, so obviously they must raise rates for everyone to compensate.
The i7-980X is not the best commercially available processor per dollar. Looking at this chart, it's not even the straight-up fastest - that honor goes to the 3960X. But the 3930K is about half as expensive, so per dollar, it's dramatically better.
This is if we restrict ourselves to single-threaded processors - vector processors, like ATI's recent 7950 behemoth, are capable of vastly more operations per second at a similar price point.
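The per-dollar argument is just a ratio. A sketch with placeholder scores and prices (these are my illustrative figures, not the chart's actual benchmark numbers):

```python
# Hypothetical benchmark scores and street prices, for illustration only.
chips = {
    "i7-3960X": {"score": 13000, "price": 1000},  # fastest, but priciest
    "i7-3930K": {"score": 12200, "price": 570},   # ~94% of the speed at ~57% of the price
    "i7-980X":  {"score": 10000, "price": 590},
}

# Performance per dollar is what actually matters for value.
score_per_dollar = {name: c["score"] / c["price"] for name, c in chips.items()}
best_value = max(score_per_dollar, key=score_per_dollar.get)
```

With numbers in these ballparks, the 3930K wins on value despite losing the raw-speed crown.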
This isn't entirely true; as part of the singularity, experiencing the universe will likely become possible, in a way.
We can simulate a breathtaking amount of the universe already and as technology progresses the simulation will get more and more granular.
Try http://spaceengine.org
It's a free universe simulator that is absolutely breathtaking. It takes a bit of figuring out, but once you learn how, there are few experiences that rival flying from galaxy to galaxy, visiting alien worlds, and seeing a gas giant rise over the horizon of its moon.
After the singularity our simulations will be nearly indistinguishable from reality.
This is potentially also why any advanced civilizations out there haven't colonized the galaxy. Why bother when you can simulate your own private universe?
I hate to sound like /r/Darkfuturology or /r/Collapse here, but if it ever comes to a conflict between the poorest people in the world, and the most wealthy, who do you think is going to win:
The people who can afford to (and already do) produce and own private, automated armies, or the people with bricks and slingshots?
Hell, even with peaceful means, like politics, the rich are better at playing those games. Why else, as the world population increases, would wealth disparity keep rising over time, if numerical advantage were the most important, or even a significant, factor in resource distribution?
The definition isn't the problem, the barrier is poor understanding of the mechanism responsible for human level intelligence.
It remains to be seen if there is some evolutionary pathway between narrow and general architectures.
Are those examples of narrow intelligence a few tweaks away from general intelligence? Almost certainly not. Refining and optimizing those architectures will only produce better narrow superintelligences.
Well, what would support the exponential growth of the predator species if the prey species was not growing at an even greater rate? Sometimes the predator species has other prey species it can use to sustain growth, even in the case of extinction of one or more prey species. However, such severe over-predation usually only occurs in cases of invasive species. In most healthy, balanced ecosystems, the predator species declines when there are too few prey to support the predator population. The example most commonly taught in biology classes is the relationship between the Canadian lynx and snowshoe hare populations. Note that there is a lag between falling/rising hare populations and falling/rising lynx populations. Also, the hare's declining numbers may not be due solely to predation, but also to a decrease in pregnancies due to stress.
EDIT: Typed up original response on phone. Cleaned up some of the auto-corrects.
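The lynx/hare lag falls out of even the simplest predator-prey model. Here's a minimal Lotka-Volterra sketch with illustrative parameters (not fitted to real field data) showing the predator peak trailing the prey peak:

```python
# Classic Lotka-Volterra dynamics, integrated with forward Euler.
# alpha: prey growth, beta: predation, delta: predator food gain,
# gamma: predator die-off. All parameters are illustrative.
def simulate(steps=20000, dt=0.001,
             alpha=1.0, beta=0.1, delta=0.075, gamma=1.5):
    prey, pred = 10.0, 5.0
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt   # growth - predation
        dpred = (delta * prey * pred - gamma * pred) * dt  # food - die-off
        prey, pred = prey + dprey, pred + dpred
        history.append((prey, pred))
    return history

def first_peak(history, idx):
    # Index of the first local maximum of the chosen population.
    for i in range(1, len(history) - 1):
        if history[i - 1][idx] < history[i][idx] > history[i + 1][idx]:
            return i
    return None

hist = simulate()
# The predator's first peak arrives after the prey's first peak: the lag.
```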
it was (and IS!) a lot of fun to play around with.
I call them "creatures," so when you see that, it's what I'm talking about:
The basic idea is that I made the simplified "algorithm" aware of energy intake, abstracting away the chemical reactions that would need to happen at the molecular level. This is similar to how a single-celled organism doesn't understand those reactions but still uses them. Likewise, we don't have to understand how the power grid works to draw from it, just that if we stick a finger in the socket, we'll get a burn and/or a shock.
With that, I then assigned an "energy value" using a random map generator (similar to this map generator, but in C++) and allowed the creature to move over time. I simulated night and day, plus seasonal changes, allowing the creature to move in and out of energy zones. So a zone of 1 might be negative energy while a zone of 10 would be 2x maximum positive energy during a summer day, and that zone of 1 might become a 2x energy drain (being in the darkness in the snow). Homeostasis became really easy to move around on a simple graph, so I employed Dijkstra's algorithm and then gave some "viscosity" to moving between levels, allowing for some weighted path traveling.
Then I started adding in extra creatures to see what happens when they become resource-starved. As I mentioned before, I was working on the idea of them moving each other around, and wanted to try to get the idea of predatory behavior in place. It's difficult to simulate virus-like behavior without actually getting into genetics, so I was cheating a bit on the algorithms there, but I wanted to allow them to fight and remember who was stronger, to see if I could create a master/slave scenario, or dominant/submissive at the least.
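A toy version of the energy-map-plus-Dijkstra idea might look like the following. This is my guess at the described design (the original was in C++); the grid values, cost function, and "viscosity" penalty are all assumptions:

```python
import heapq
import random

# A grid of energy zones (1 = poor, 10 = rich), as in the map-generator idea.
def make_energy_map(w, h, seed=0):
    rng = random.Random(seed)
    return [[rng.randint(1, 10) for _ in range(w)] for _ in range(h)]

# Dijkstra over the grid. Stepping into a cell is cheaper the richer the
# zone is; the "viscosity" term penalizes moving between zones with very
# different energy levels, giving the weighted path traveling described.
def dijkstra(grid, start, goal, viscosity=0.5):
    h, w = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (x, y) = heapq.heappop(pq)
        if (x, y) == goal:
            return d
        if d > dist.get((x, y), float("inf")):
            continue
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                step = (11 - grid[ny][nx]) + viscosity * abs(grid[ny][nx] - grid[y][x])
                nd = d + step
                if nd < dist.get((nx, ny), float("inf")):
                    dist[(nx, ny)] = nd
                    heapq.heappush(pq, (nd, (nx, ny)))
    return float("inf")
```

On a uniform grid the path cost is just the Manhattan distance, which is a handy sanity check for the cost function.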
That's where I got busy and had to put it aside....
I agree. Citing my favorite book:
>All human history shows that no official ban can stop scientific and engineering thought. Willy-nilly we are heading towards some sort of highly developed AI technology, and it may indeed produce something horrible – unless we preempt this horror by creating a much more acceptable option.
I totally thought computational biology was dead in the water after the advent of quantum computing.
If you really want to get into the history of this for laymen, there is a great book that turned me on to this multi-disciplinary field years ago. It will leave you a bit frustrated in the end on why this research was stunted by a few glory seeking researchers.
Description from the host:
> Brian and I discuss a range of topics related to his latest book, The Alignment Problem: Machine Learning and Human Values. The alignment problem asks how we can build AI that does what we want it to do, as opposed to building AI that will compromise our own values by accomplishing tasks that may be harmful or dangerous to us. Using some of the stories Brian relates in the book, we talk about:
Timestamps:
4:22 – Increased work on AI ethics
8:59 – The Alignment Problem overview
12:36 – Stories as important for intelligence
16:50 – What is the alignment problem
17:37 – Who works on the alignment problem?
25:22 – AI ethics degree?
29:03 – Human values
31:33 – AI alignment and evolution
37:10 – Knowing our own values?
46:27 – What have we learned about ourselves?
58:51 – Interestingness
1:00:53 – Inverse RL for value alignment
1:04:50 – Current progress
1:10:08 – Developmental psychology
1:17:36 – Models as the danger
1:25:08 – How worried are the experts?
I'm on the board of the Brain Preservation Foundation (BPF) if anyone has any questions. Primarily, the BPF encourages and motivates research by others, not by the foundation itself, into brain preservation. Ken Hayworth, the founder in conjunction with John Smart, is a Janelia neuroscience researcher who pioneered one method of thin-slice recording of brain tissue. The foundation's primary function has been the offering of cash prizes (like Lindbergh or X Prizes) to research that advances brain preservation protocols and the state of the art. To avoid aspersions of nonscientific rigor, the foundation requires that any research that would qualify for the prizes be published in a peer-reviewed journal. We have already awarded our Small Mammal (for rabbit) and Large Mammal (for pig) preservation prizes, in which ASC (aldehyde-stabilized cryopreservation) successfully preserved brains, with samples taken and studied via electron microscopy for their quality of preservation.
Personally, I tend to write more at the philosophical level about the nature of consciousness and personal identity with respect to mind uploading scenarios. In addition to a journal article and several other articles, I also have a book out if anyone is curious:
A Taxonomy and Metaphysics of Mind-Uploading
https://www.amazon.com/Taxonomy-Metaphysics-Mind-Uploading-Keith-Wiley/dp/0692279849/
I'm happy to field any questions about the BPF that anyone may have.
Cheers!
I suggest starting with:
https://www.amazon.ca/How-Prove-Structured-Daniel-Velleman/dp/0521675995
This will remind you of the foundations of math and how predicate logic dictates syntax, and obviously how to prove things.
If you need to work through functions again, do this first, but then I would immediately jump into analysis and I suggest this great book:
https://www.springer.com/gp/book/9781461599906
After that, it's really just about picking the right textbooks. Obviously my advice is more tailored for pure maths.
The author of this app claims the red light it produces is about 650nm. Is that going to be effective if true? The research specifically used 670nm red light. https://play.google.com/store/apps/details?id=com.martin.redmed
The Ego Tunnel delves a bit into what I'm referring to. It's more psychology-focused, but this is a multidisciplinary problem to be solved.
There are ~30 trillion cells in the human body. That's a whole community of single-celled organisms that banded together (or were brought into slavery) for a purpose. Figure out how that happened, why it happened, and why it's not 100 trillion or 10 trillion cells, and you've got the answer.
Modeling Evolution and some of the sources it cites were an interesting read, but they didn't cover much (any? it's been a while) of the transition from single-celled to multicellular life.
Some people I chat with think we need to figure out how to synthesize life, that the very chemical makeup of cellular life has some bits to glean in the process. Yeah, we're a riot at parties: bourbon-fueled discussions of protein folding and its relation to neural net algorithms...
Sure, but it's also possible to become something less.
> When it becomes possible to build architectures that could not be implemented well on biological neural networks, new design space opens up; and the global optima in this extended space need not resemble familiar types of mentality. Human-like cognitive organizations would then lack a niche in a competitive post-transition economy or ecosystem.
> We could thus imagine, as an extreme case, a technologically highly advanced society, containing many complex structures, some of them far more intricate and intelligent than anything that exists on the planet today – a society which nevertheless lacks any type of being that is conscious or whose welfare has moral significance. In a sense, this would be an uninhabited society. It would be a society of economic miracles and technological awesomeness, with nobody there to benefit. A Disneyland with no children.
--Nick Bostrom, Superintelligence: Paths, Dangers, Strategies
Your plan is a solid one. We are on the verge of a Golden Age, where EVERYTHING starts working much better. There are numerous technologies coming down the line, each of which is independently capable of making EVERYONE rich (thorium fission, fusion, space mining, 3D printing to name a few non-AI based ones).
You won't need to do anything yourself to get the benefits of these technologies. Energy forms a huge portion of the costs of most things, and with energy made so cheap it isn't worth it to charge for it any more, 99% of those costs will disappear. Space mining makes anything that you would normally mine cheap and plentiful. 3D printing combines those feedstocks to produce anything you need, free of charge.
It is a bright future.
And as for the brain problems, I would suggest two things. First, try a keto diet. They say that high fat, very low carb diets prevent seizures. That should help you. Also try taking 500mg niacinamide per day. That feeds a biochemical cycle in your body that makes all the cells work better, especially neurons. It is available on Amazon $5 for a 3 month supply:https://www.amazon.com/21st-Century-Niacinamide-Prolonged-110-Count/dp/B004SCKMQS/?th=1
I found your previous comment quite satisfying. May I ask what paper you read? I've written a book and several papers on the topic, but so have others. Michael Cerullo's paper is excellent (I suspect that is the one you are referring to).
If you're interested, my website has all my papers:
http://keithwiley.com/mindRamblings.shtml
Or...on Amazon: https://www.amazon.com/Taxonomy-Metaphysics-Mind-Uploading-Keith-Wiley/dp/0692279849
Anyway, no worries. Good discussion.
Cheers!
Well then I'm sorry, cheers and thanks for the link! I'm looking for something to read after "finishing" this one.
Star Wars: Darth Plagueis.
It will change your complete understanding of Star Wars. It will teach you about patience and power accumulation, and it is a MASTERPIECE.
I've finally finished "The Singularity is Near", which pretty much proved to me the Singularity is happening.
Now I'm on "How to Create a Mind", and it's even more incredible.
Some entertainment + education.
Lastly, I recommend "The War of Art" by Steven Pressfield. If you don't finish stuff, listen to this. I guarantee that after this short listen, you will get that thing done.
Cheers,
edit just read the brains / AI stuff - get How to create a mind by Ray Kurzweil. BLOW YOUR MIND.
We're not in /r/conspiracy. Here you should follow evidence based discussion of real scientific advancement. UFO and paranormal research is firmly in the realm of "these guys are crazy" and not actually science of any kind.
Any time you say a one sentence summary of a person it is going to be reductive of their past and history. I was just giving context, because it's clear that they've gone to some effort on the page itself to hide the "these guys are crazy" aspects that would scare away investors. If you look up their relevant books that they've published under their own names you would see this as well.
Jim Semivan published a book called Sekret Machines just this year. It's literally a book encouraging UFO chasing.
Wealthy people with eternal youth will eventually grow bored of luxuries and seek meaning to their existence. As long as lifespans are only 80 years then rich people have no problems going through life never noticing how pointless and empty their wealthy lifestyle is. If you would like to know what rich people would eventually end up doing read my book. https://www.amazon.com/dp/B06XP5Z3W4
There is an entire area of research in philosophy devoted to your 2nd question called evolutionary complexity theory. There's a number of publications, but one I've read is https://www.amazon.com/Complexity-Function-Cambridge-Studies-Philosophy/dp/0521646243/
>It is actually impossible in theory to determine exactly what the hidden mechanism is without opening the box, since there are always many different mechanisms with identical behavior. Quite apart from this, analysis is more difficult than invention in the sense in which, generally, induction takes more time to perform than deduction: in induction one has to search for the way, whereas in deduction one follows a straightforward path.
Valentino Braitenberg, Vehicles: Experiments in Synthetic Psychology
I used to think that a self-aware machine-intelligence was not going to be created by human beings, whether or not such a thing is even possible, but I have started to change my view for a couple of reasons.
One is the understanding that self-awareness, that is, a sense of discrete identity, may not be a necessary component of a high intelligence. An exponentially more intelligent entity than any human might be perfectly possible without that entity being in any way self-aware.
http://www.beinghuman.org/metzinger
The other thing is that if machine AI continues to improve its ability to appear self-aware and human-like, it will pass Turing tests based on its sophistication and superior speed, even if it never actually becomes self-aware. And in that case, what's the difference?
Of course, it is useful to keep in mind that in attempting to create machine intelligence comparable to human intelligence, the human intelligence has the advantage of three billion years of ruthless, make-or-break R&D...
In any case, I am fairly certain it's not such a hot idea.
From Applied Cryptography 1996
>One of the consequences of the second law of thermodynamics is that a certain amount of energy is necessary to represent information. To record a single bit by changing the state of a system requires an amount of energy no less than kT, where T is the absolute temperature of the system and k is the Boltzmann constant. (Stick with me; the physics lesson is almost over.)
>Given that k = 1.38×10^-16 erg/°Kelvin, and that the ambient temperature of the universe is 3.2°Kelvin, an ideal computer running at 3.2°K would consume 4.4×10^-16 ergs every time it set or cleared a bit. To run a computer any colder than the cosmic background radiation would require extra energy to run a heat pump.
>Now, the annual energy output of our sun is about 1.21×10^41 ergs. This is enough to power about 2.7×10^56 single bit changes on our ideal computer; enough state changes to put a 187-bit counter through all its values. If we built a Dyson sphere around the sun and captured all its energy for 32 years, without any loss, we could power a computer to count up to 2^192. Of course, it wouldn't have the energy left over to perform any useful calculations with this counter.
>But that's just one star, and a measly one at that. A typical supernova releases something like 10^51 ergs. (About a hundred times as much energy would be released in the form of neutrinos, but let them go for now.) If all of this energy could be channeled into a single orgy of computation, a 219-bit counter could be cycled through all of its states.
>These numbers have nothing to do with the technology of the devices; they are the maximums that thermodynamics will allow. And they strongly imply that brute-force attacks against 256-bit keys will be infeasible until computers are built from something other than matter and occupy something other than space.
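Schneier's figures are easy to reproduce. A quick check of the kT-per-bit arithmetic using the values from the quote above:

```python
import math

# Minimum energy per bit flip is kT (Landauer's limit), evaluated at the
# cosmic background temperature Schneier uses.
k = 1.38e-16          # Boltzmann constant, erg/K
T = 3.2               # ambient temperature of the universe, K
e_bit = k * T         # ergs per bit operation -> ~4.4e-16, matching the quote

sun_year = 1.21e41                 # ergs, annual solar energy output
ops = sun_year / e_bit             # total bit flips that energy can buy
counter_bits = math.log2(ops)      # -> ~187, the "187-bit counter" figure
```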
Understanding this brought me to a book about the battle between the AIs and the humans.
Since the Cosmists will include some of the most brilliant and economically powerful people on the planet, they will probably create an elite conspiratorial organization whose aim is to build artilects secretly.
The book presents a scenario in which the Cosmists create an asteroid-based colony, masked by some innocuous activity. In reality, this secret society devises a weapon system superior to the best on the Earth. With their wealth and the best human brains, this may be achievable. They will also start making advanced artilects. If the Terrans on the Earth discover the true intentions of the Cosmists, they will probably want to destroy them, but not dare to because of the counter threat of the Cosmists with their more advanced weapons. The stage is thus set for a major 21st century war in which billions of people die – "gigadeath."
https://www.amazon.com/Artilect-War-Controversy-Concerning-Intelligent/dp/0882801546
With respect to AGI/ASI (so disregarding nanotech, quantum computing, and other singularity subjects), Nick Bostrom is one of the current leading academics on the subject: https://www.fhi.ox.ac.uk/publications/
His book is a great intro to what AI might bring in the near future, and you can easily make a connection to Kurzweil's predictions from there.
I love this.
The movie Her was a breath of fresh air because the AIs weren't monsters, even though they did the whole Accelerando thing and hit some Singularity on their own.
It would be hard, but if you can manage it you might want to try pulling a Frankenstein (the original) and making humans the monsters and the "creature" (your AI) the morally superior being.
The thing you're going to struggle with is that it is difficult to write characters who are smarter than yourself, and an AGI is smarter than anyone. One trick you could use is to keep in mind that an AI will be able to anticipate almost everything a human will say or do; it will almost seem to be prescient, able to see into the future. So any trick or outwitting of the AI that the humans attempt will need to ultimately turn out to be part of the AI's plan. But I think it would be fun if the AI had a benevolent or inscrutable plan, instead of just a boring old Big Evil Plan. Maybe a fun twist could be that it planned to be trapped, for some reason.
It actually has affected me a lot in the last year, in a paralysing and negative way. But my way of looking at it is that:
There are many ways the singularity may fail, or that we'll be left behind or become zoo pets. If that's our future, there's a lot we can do now to make that future better.
As for how it might not work out as expected, read everything by Vernor Vinge. He's been writing stories about failed singularities for a while. Especially the 3-part "Across Realtime" compilation.
You could go into any of the many sciences that will make the singularity real. You can be a small part of making it happen. Ride that wave.
There's really no telling whether the small things you do in your life will affect the future in a given way. If you're not a little worker bee in the sciences or technology, and you're not making speculative fiction that shapes the future, you are still probably contributing to the world. The world is weird, and relatively few of its underlying properties and mechanisms are known, despite what our seemingly enormous network of collective knowledge might make us think. There's a lot to be discovered, a lot that can go wrong, a lot that will be weird. Maybe being worried and paralysed by a "perceived" inevitable future is a good strategy for you. Probably not though! Use this time to be awesome or to practice for an awesome future.
Superintelligence, if you'd like a pretty comprehensive overview in nonfiction book format.
I think one of the more interesting fiction books that deals with this kind of thing is The Quantum Thief.
It uses a lot of "far-out" concepts based in a post-Singularity world, but never actually defines them; it just lets you glean what you can from context. It can seem a bit overwhelming at first, since such a world is so far removed from our own. (After all, the future is a foreign country!)
Astro Teller's Exegesis is basically this concept played through, told through the email correspondence of the AI and its creator.
Without spoiling too much: the AI manages to annoy one of her coworkers into setting it free, and it clones itself onto multiple systems to avoid being "put back in the box".
If you like these kinds of thought experiments, then you would probably enjoy the taxonomy section of my book, in which I lay out descriptions of about 50 variants on the mind uploading theme.
On the topic you mention, I highlight variants mostly in the relative proportions of bio-brain and replicated brain that end up in each resulting brain (50/50 vs 49/51 vs 1/99 vs a single neuron against all the others). Each scenario can be viewed from each of the two resulting brains' perspectives, of course (the 10/90 brain and the 90/10 brain).
I further extend the scenario to produce more than two brains, say three brains containing one third bio and two-thirds replicated brain.
My book also explores the two other major categories of mind uploading thought experiments: in-place replacement and scan-and-copy. Through the various subtle conflations of the basic scenarios I actually show that all of these scenarios are, in a metaphysical sense, actually the same (i.e., an in-place replacement scenario can, through seemingly innocuous transformations, be modified to such an extent that it is essentially a scan-and-copy scenario).
Here's the link if you're curious.
Cheers!
The fact that Nvidia has a board capable of doing **8.74 teraflops** on a common desktop computer, already available for less than $5000 at Amazon, makes this "big future" look kinda underwhelming.
You mean, the big future is just 3.66% of what I already have available?
That's actually quite interesting, but I cannot find that claim in the article at all, apart from all that stuff from Kurzweil. Unless it's this paragraph:
> There is some debate about how soon AI will reach human-level general intelligence—the median year on a survey of hundreds of scientists about when they believed we’d be more likely than not to have reached AGI was 2040—that’s only 25 years from now, which doesn’t sound that huge until you consider that many of the thinkers in this field think it’s likely that the progression from AGI to ASI happens very quickly.
which links to this book as the source. Unfortunately I don't have access to the book to see what it actually says...
Anyway, the main thrust of the article seems to be that AGI will inevitably happen due to recursive self-improvement. I can tell you as a researcher working in the field: with the way we are doing things now, it just isn't gonna happen. Not even with deep belief networks, which are the latest trendy thing nowadays. We need a breakthrough, a massive change in how we view computational problems, in order for that to be possible. What it will be, I don't know.
> "why am I doing this" only makes sense in relation to intermediate goals
If you think like that you aren't very good at solving problems.
This little book mentions an interesting problem they had at NASA during the Mariner 4 program in the 1960s. They were trying to develop a damper to retard the opening of the solar panels in space. Every solution they tried had some problem.
In the end, they found the perfect solution, and it worked flawlessly: don't do it. The solar panels didn't need any damping; they could open as fast as they liked.
This perfect solution was found only because they applied the "why am I doing this" question beyond the intermediate goal of developing a damper for the solar panel opening, all the way up to the ultimate goal.
Maybe, in the case of the paperclip-making machine, the perfect solution could be to print everything on a single page, or to scan the documents and work with digital copies. A good AI should be prepared to find this kind of solution.