If you want to do the "same thing" inside your browser, here is a link to a nifty car generation thingy.
It uses genetic algorithms, which I assume are what is talked about in the article.
As far as I can tell, it's pretty much the same thing. But it's a bit more interesting to watch, since it runs a whole bunch of cars at the same time and it keeps better track of how far your cars get too.
Guys, genetic algorithms are nothing new. In fact, here is something almost identical that you can run in your browser. Having the iterations carried out in the real world is just a gimmick, and a useless one at that given how long each iteration/generation takes to complete and test.
And yet reasoning seems to be an algorithmically driven process as well, at least with rational beings.
http://rednuht.org/genetic_cars_2
Computers can be made to modify themselves in a variety of ways. This is an example of a simple way.
Understanding of the process is unnecessary for human-level responses. The fact that we cannot even define consciousness implies to me that it is an illusion. Perhaps it increases survival. Maybe beings as intelligent as humans need the illusion of consciousness to not kill themselves out of total nihilism. A great many intelligent people are depressed and have dissociative phases.
Your argument seems to me to have some sort of religious bias? Humans have no conscious creator. Evolution has no reason and does not care for us.
"Irreducible Complexity" has a solution from computer science. We can use algorithms, like evolutionary algorithms, to try many different possibilities. Many of them will fail miserably and won't work at all; others will work better. It's why some people are blind. Here is a simple example of the phenomenon.
Two cars do not make a child. You take the most successful models and replicate them, and then mutate those replications and see if the mutation has made them more successful. Repeat this process, and the copies with bad mutations drop out, and the copies with good mutations stay in and continue to evolve.
Example of a genetic algorithm building cars to traverse random terrain.
All Mac's arguments are easily refutable or outright fallacious.
> you are taking evolution on faith.
You can deductively reason evolution is true if you believe two premises to be true:
1) traits are heritable
2) traits can positively or negatively affect an organism's fitness to reproduce
You can also see evolution working in genetic algorithms. http://rednuht.org/genetic_cars_2/
It's not proof of evolution in living organisms but it is proof that evolution does work the way scientists say it works.
Finally, there are real speciations happening right now that are being studied. These are proofs that evolution is correct, and if you really wanted to, you could go study them yourself and see first hand that evolution is correct.
So, you see you don't have to take it on faith that evolution is correct.
Also, Mac's argument about scientists being bitches is not a refutation of science but is actually a confirmation. The scientific method is not about proving hypotheses correct but trying to prove them wrong. Scientific knowledge is improved by testing existing hypotheses and trying to improve or replace them when they conflict with observations.
I can't for the life of me remember the name of that simulation I played with that had each generation "learn" by ruling out which parts don't work. I can't even remember if it was in flash or java. Will be thinking about it and searching every now and again.
Ninja edit: I'll try to remember to update.
Real edit: found it. It was a car learning to drive and it's html5 :)
Thanks, this is exactly what I was going to say. Looking into how genetic/evolutionary algorithms work can provide a practical example of how exactly this works. Here is a neat one.
Here are two interesting demos of evolutionary algorithms at work creating "things".
Now, they're using very different methods than you'd use to play a game but it's still an easily watchable demo of one method of computer learning.
BoxCar2D uses Flash, which doesn't work on some mobile devices. Luckily, there's a JavaScript version here. It's definitely a good demonstration of concepts like mutation, selection, and fitness.
It's also assuming that
1) The universe our universe is being simulated by has the same laws of physics as our universe. (We simulate 2D universes with rules that are simpler than ours all the time.)
2) That the simulation needs to store and calculate all that information, rather than only the information that is observed by...an observer.
have you looked up genetic algorithms (usually computer science stuff)?
there is a cool simulation about cars. it only takes 10 generations to get something that works ok, but it takes 100 generations to get something that works well, and a billion generations to get something that works wonderfully.
so i guess it is a numeric proof, which is good enough for me.
if you change the environment it kills everything that does not work, and the things that do work get really good at getting better quickly, but reach a point where they very slowly get better.
http://rednuht.org/genetic_cars_2/
this is about how evolution would work: very quickly things adapt to a new environment, and then slowly they get better until reaching a plateau
There are 4 parts to using genetic algorithms:
0) A way of representing and testing candidate solutions
1) Some measure of "fitness" to decide which ones are best
2) A way of creating new solutions by combining (and mutating) older solutions
3) Lots of fucking time
#2 is what separates genetic algorithms from a search with backtracking. There's an element of randomness and an element of trying variations and combinations of what works best.
http://rednuht.org/genetic_cars_2/ is an example of GAs at work.
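If you want to see those four parts in code, here's a toy sketch (my own made-up example, not the rednuht implementation) where candidates are bit strings and fitness is just the number of 1s:

```python
import random

def fitness(candidate):            # part 1: a measure of "fitness"
    return sum(candidate)

def crossover(a, b):               # part 2: combine two parent solutions
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):  # part 2: random tweaks
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=30, length=20, generations=100):  # part 3: lots of time
    # part 0: representation -- a population of random bit strings
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward 20 (all 1s)
```

Nothing fancy, but it's the same skeleton the car demo uses, just with a much more boring genome.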
In the video you linked, they're using simple neural networks to do the job. There's no deep logic, just "hole 2 squares in front of me, hit jump button".
> it tries all combinations of values
No, the whole point is to not have to try all combinations & to solve problems where there is no finite set of possible solutions. You start with a bunch of random values (but nowhere near all of them) & hope to find something that works.
> how can it know that even if it wasn't the best at the first generation, it won't be the best solution once it gets 5 generations down the line?
Every generation is given the same test every time. If something fails on generation 1, it's going to fail on gen 5.
There is, however, a risk that you'll end up stuck in a local optimum. The cars will be the best at what they do, but there might be a better, completely different approach that the GA isn't going to find just by tweaking things a bit. It's like getting hung up on making the best macaroni & cheese and ignoring that maybe pad thai might be better.
There's a javascript version of that program that does not use Flash. It also simulates a whole bunch of cars at the same time and is faster.
Here it is: http://rednuht.org/genetic_cars_2/
Edit: Flash kind of sucks.
This app was very fun. So I looked for other genetic simulators out there.
1. http://rednuht.org/genetic_cars_2/
2. http://rednuht.org/genetic_walkers/
I found these two mods of the app.
It's not really "learning", it's just optimizing parameters that result in somewhat complex behavior.
Something like this might be easier to visualize what's going on with genetic algorithms.
I am taking them at face value, because there's no reason to exaggerate their accomplishment.
I'm also a bit familiar with how this kind of programming works, and it literally is just trial and error.
Here's an example of how this kind of programming and design works, with car construction.
In their presentation, they said that they started with a blank slate, and rewarded some vaguely beneficial outcomes more than others, then let it rip for a preposterous amount of time.
Just as with the link I've provided, it randomly selected based on the best benchmark performances, and then optimized through trial and error.
> Then what I understand is that through ~~variation~~ mutation a land-dwelling mammal can evolve to be a cetacean, and mutations are (and I would like to know more about examples of mutations through evolution) for example blue eyes and lactose tolerance.
Mutations happen that enter the gene pool (giving rise to variation), and if they are beneficial, they can be selected for.
Imagine you have a dictionary with only 1 word:
This isn't very exciting. Now imagine that we start mutating these words a bit:
In each step, a mutation happened: I changed, added, removed, or swapped a letter. As a result of those mutations, I now have variation - my dictionary (i.e., my gene pool) now has a lot more words to choose from.
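If you want to play with this idea, here's a rough Python sketch of those four mutation types (the starting word and all the details are just made up for illustration):

```python
import random

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def mutate(word):
    """Apply one of the four mutation types: change, add, remove, swap."""
    i = random.randrange(len(word))
    op = random.choice(["change", "add", "remove", "swap"])
    if op == "change":
        return word[:i] + random.choice(LETTERS) + word[i + 1:]
    if op == "add":
        return word[:i] + random.choice(LETTERS) + word[i:]
    if op == "remove" and len(word) > 1:
        return word[:i] + word[i + 1:]
    if op == "swap" and i + 1 < len(word):
        return word[:i] + word[i + 1] + word[i] + word[i + 2:]
    return word  # this particular mutation had no effect

# start with a one-word dictionary and let mutation create variation
gene_pool = {"cat"}
for _ in range(200):
    gene_pool.add(mutate(random.choice(sorted(gene_pool))))
print(len(gene_pool))  # many more words to "choose" from than we started with
```

Run it a few times and you'll see the gene pool balloon from 1 word to dozens, purely from copying errors.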
> Is the statement "evolution would be practically halted without mutation" correct?
Here you go. A small, genetic algorithm driven "game." It's fun to watch your little creations learn and change as they reach further down the course. Bonus, it runs in your browser.
a very simplified view is that you encode the strategy elements needed to solve a problem as a set of "genes". then you package a set of genes into a program, and give that program the problem to solve. do that with a bunch of different permutations and combinations, and give them all the same problem; some of them will achieve more effective solutions than others. call this run of attempts one generation. for the next generation, you 'breed' programs by combining mixtures of their genes into the next generation of programs. the darwinian bit is that the programs that were more successful in the current generation contribute more of their genes to the next one.
here's a demonstration where the problem is to design a car that can traverse a hilly track, and where the genes are features like wheel size and body shape, that combine into a program which is a specific car design.
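in code, that scheme looks roughly like this - a toy sketch i made up, where the "physics" is a fake stand-in for the real simulation:

```python
import random

# genes: a few numeric features that define a "car" design
GENES = ("wheel_radius", "body_length", "motor_torque")

def random_car():
    return {g: random.uniform(0.1, 2.0) for g in GENES}

def distance_travelled(car):
    # pretend fitness: a made-up "track" that rewards a particular balance
    return (car["motor_torque"] / car["wheel_radius"]
            - abs(car["body_length"] - 1.0))

def breed(parent_a, parent_b):
    # child takes each gene from one parent or the other...
    child = {g: random.choice((parent_a[g], parent_b[g])) for g in GENES}
    g = random.choice(GENES)            # ...plus a small mutation
    child[g] *= random.uniform(0.9, 1.1)
    return child

population = [random_car() for _ in range(20)]
for generation in range(50):
    population.sort(key=distance_travelled, reverse=True)
    winners = population[:5]            # the successful contribute more genes
    population = winners + [breed(random.choice(winners),
                                  random.choice(winners))
                            for _ in range(15)]
```

the darwinian bit is that only the `winners` get to pass their genes on; everything else about the setup is arbitrary.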
This reminds me of a similar thing where cars are randomly created, only the few that succeed on a randomly generated terrain survive, and mutations are based on them. Give it a look http://rednuht.org/genetic_cars_2/
I think this is really good (and way more impressive than the kind of coding and problems I was working on at 15). Your first technique here seems to be describing genetic algorithms, which are quite powerful but do take some time to train. It also reminded me of this: http://rednuht.org/genetic_cars_2/
I don't quite follow what you intend to do with your second technique. It sounds like you want the program to learn things and then be able to connect new learnings with old ones. But how that learning happens is where the magic is, and I don't follow how that happens in code.
Anyhow, I'm not an AI programmer, but I did want to give you some words of encouragement and at least point you to genetic algorithms, around which there's a lot of interesting work being done. So keep at it!
Here's one of my favorites:
Here's a quick online game:
Look for 'genetic algorithm', 'evolution simulation', or 'a-life'.
I might be biased, but yeah, I still think it's meaningful to work on EAs. I think EAs will always have a place because of how easy it is to apply them to almost any problem. Also, there's always going to be the set of optimization problems where the function being optimized is really, really hard to characterize. For example, something where humans are in the loop evaluating solutions. Another good one is here.
Yes, they can be slow. But there's still a range of problems, such as training neural networks, where as long as the run time is not prohibitively slow, EAs' performance over existing algorithms like backpropagation is significant enough to warrant their use and further work on developing EAs.
What type of work are you doing with EAs?
> but most miners do seem to be just for profit only.
Short term profit. Evolution takes time, even in internet time.
The early experiments in any evolutionary system often seem chaotic before settling into a flow.
Simulated annealing does not just look in random places. It has to find local maxima first. The current riven mod feature does not have that.
Your suggestion is closer to a genetic algorithm with only 1 child each iteration and a 100% mutation rate and size.
However it is a great suggestion. I also thought the same when riven mods came out.
You can still do this GA now, but not recommended for poor tenno like me.
If anyone has done or seen any simulations: a low number of children (number of mods) drastically increases the number of iterations (cycles) required to get near the optimum (best stats).
You can increase the number of children by getting more riven mods for the same weapon (currently 15 max).
Even with a large number of children, some sims may be run for weeks or months.
Normally, the cost of each iteration does not change, but here in WF it grows exponentially with kuva (I hate this part the most).
Thankfully, possible values are limited (e.g. crit chance, dmg, zoom), but still this is very tiresome and expensive.
Even if we don't think about weapon, kuva, and sortie RNGs: exponential cost -> exponential time to farm/cycle. Very random (negative and useless stats are possible) because of the 100% mutation rate/size. Limited number of children. If you did research like this, you would be sued by your sponsors.
Here is a fun simulation. Almost as addictive as Cookie Clicker.
(Mutation is to escape local maxima. Too small, they will be stuck on a "small hill." Too large, they might not find the top.)
Edit: mutation part is like simulated annealing. OP is not wrong.
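If you want to see that mutation-size tradeoff concretely, here is a toy sketch (the landscape is completely made up, nothing to do with WF): a small hill peaking at height 1 and a big hill peaking at height 3, with a hill climber that only accepts improvements.

```python
import random

def height(x):
    # small hill at x=2 (peak 1), big hill at x=8 (peak 3)
    return max(1 - abs(x - 2), 3 - abs(x - 8), 0)

def climb(start, step, rounds=2000):
    """Greedy climber: mutate x by up to +/- step, keep only improvements."""
    x = start
    for _ in range(rounds):
        candidate = x + random.uniform(-step, step)
        if height(candidate) > height(x):
            x = candidate
    return x

random.seed(0)
print(height(climb(2.0, step=0.1)))  # tiny mutations: stuck on the small hill
print(height(climb(2.0, step=5.0)))  # big mutations: can jump to the big hill
```

With `step=0.1` the climber can never cross the valley between the hills, so it sits at height 1 forever; with `step=5.0` it eventually lands on the big hill and climbs toward 3.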
Check out this little site. It uses genetic algorithms to generate 2D cars.
What it basically does is randomly toss body shapes and wheels together, and then look at which of them performs best. It then takes the best ones, randomly copies their traits onto a new generation, and again takes the best one to copy and breed.
Another, more abstract application is shown in this video. The algorithm tries random buttons, and then compares which input got it the farthest in the Mario level. It then takes that knowledge and tries more advanced controls based on that previous best round.
They're called Evolutionary Algorithms because each round it gets better and better. The working approaches have babies, while the non-working ones die out and go extinct, just like animals in evolution. This is of course a bit of overkill for easy programs that could just be solved by a deterministic solution, but for more complicated problems you might get solutions you didn't even know existed. You just give the program the 'rules' of the world and the criteria on which it should measure itself, and have it chug along.
Essentially, these algorithms are written in such a way that a programmatically generated solution can be a) tested for fitness and b) slightly mutated. The goal is to keep testing various permutations that occur through mutation to find the best solution rather than a programmer having to know the best solution ahead of time.
Here's a simple example: http://rednuht.org/genetic_cars_2/
In the example, cars are randomly generated and set on a random course. The "fitness test" is simply which car makes it the furthest. Then that car is taken, mutated, and then the various mutants are run on the same course. If one of them does better than the first winner, then the new winner is used to make the next round of mutants. The theory is that, after a large number of runs, you'll have the car that is best at this particular course.
> And I was wondering, yeah, hexagonal cells have obvious advantages over any other shape cell. But evolution doesn't know that!
This is precisely why evolution, as a set of processes, is so amazingly powerful. A conscious "knowing" isn't needed in order to arrive at a solution to a problem.
Human engineers have picked up on this, and sometimes use genetic algorithms to design things. Here's an example of an antenna that was designed using a genetic algorithm. Here's an article that has a bunch more examples.
If you want to see an interactive example of genetic algorithms at work, check out this simulation. It runs right in your browser. Basically, it simulates some objects with wheels, axles, rigid structures and gravity. Some of these constructs end up propelling themselves farther than others. At the end of each round (or "generation") the best ones are carried forward to the next. Minor alterations (variations) are made randomly, and the idea here is that some of these constructs are more car-like than others. The neat thing is that if you let this run long enough (an hour, or two, or six) you'll start to see really good cars that are fast and able to deal with rough terrain without any problems - all without the program knowing what the solution is ahead of time.
Wow, that's really pretty. I'm going to have to make a poster of that :=)
I didn't mean simulate the human body though. Have you seen the bike evolution apps, for example? http://rednuht.org/genetic_cars_2/
In this app, the gene directly describes the car. Instead, you could have the gene describe the chemical given off, and how the 'cells' react to that chemical. Thus building up much more biological shapes.
Make sense?
This video reminds me of this car evolution simulator.
It randomly generates a terrain and gives you a handful of randomly generated cars. The cars evolve after each generation to eventually breed a car that can go the furthest distance on the map.
It's quite addictive to sit and watch. You can even toy around with some of the parameters like mutation rate and gravity and see how the cars adapt.
You will be able to do something similar yourself but only if you have a good understanding of artificial intelligence (specifically evolutionary algorithms). I'm not sure if they have released the code for this. But I'm sure there are other open source projects which simulate some other behavior.
You can try this HTML5 based 2D simulator: http://rednuht.org/genetic_cars_2/
Yes, looks like your teacher was a bit dim and ignorant. It happens.
Actually, the same explanation is applicable to the transitional forms: transition between stable states is very rapid, so there is very little chance for the intermediate forms to expand their area and exist long enough to leave any fossil record at all. We observe all the same patterns in artificial evolution too: transitions are always extremely rapid, and if you do not have extinctions (or just enough external chaos) your system can get stuck in a stable state for eons.
You can watch it yourself here, for example: http://rednuht.org/genetic_cars_2/
> Still seems random who gets venom in the first place
It is completely random.
An approximate demonstration of how Evolution works in practice is here http://rednuht.org/genetic_cars_2/
In the beginning the software makes ~20 or so randomly created vehicles with various arrangements of wheels and body shapes. The creations that travel the furthest are copied, mixed together with the other top fitness creations, and then some random amount of their designs are mutated. That cycle repeats indefinitely and ideally each new generation is more successful. For me, by generation 20 most of them are reaching the other end of the test course.
Alexa and Siri are pattern (voice) recognition algos hooked to a Google search. They don't do any thinking for themselves.
That "AI" "teaching" itself how to "walk" is a program designed to find the fastest way to go from point A to point B. It's not hugely different from this.
That's not what I consider to be artificial intelligence.
They understand the AI just fine. The thing a lot of comments making the black box argument are missing is that extracting an encryption algorithm is outside the scope of this experiment.
Here's an excellent illustration of how this sort of thing works. It designs a bunch of variations of a car and records which cars do best on a generated track.
Now, say it designed a car that can make it further than any other car on the current track. You could look at the configuration of the car and theorize about why it made it so far, and you could even watch it go along the track and see what happened. Hell, you might even gain some valuable insights about designing cars.
But there isn't really a meaningful answer to "why" the car was designed the way it was; it's just the configuration that worked best according to the scenario you gave it.
And the bigger issue is if you tried to reproduce this car in the physical world you'd introduce a slew of variables the original simulation wasn't designed to account for.
In the same way, you could extract out what Alice and Bob did that Eve was bad at decrypting, but it probably wouldn't have a meaningful use outside of the exercise at hand.
That "rhythm based gameplay" is not for me. I tried it once thanks to a friend. It felt like I was bound to it; I was forced to move with the rhythm, which isn't really my style.
ninja edit: I am basically looking for something like this.
Basically, I just wanted to add, from an external point of view, that the genetic algorithm process allows for fudge values when completing a task. The algorithm then fudges those randomly and keeps track of the best performance to gauge its progress, and each iteration tries to better itself. The end result can sometimes be odd.
Genetic Algorithm Car is a good example of the oddity.
Neural networks take input and train logic points/fake neurons on whether the input is correct for a given state. The result is often something being trained much like a child. Repeated exposure to the same object over time will eventually allow it to say a tiger is a tiger in a picture, or, as referenced, a conversation point is presented, deconstructed for structure, then meaning, and a response is given.
There is a fairly creepy video of two AIs conversing. They both think the other is a robot, and it's somewhat hilarious.
On a very basic level, this little program's cars develop themselves through trial and error. You can only really control how quickly they change, but I enjoy watching them progress.
As others stated here: they used an evolutionary approach. you can look up "genetic algorithms" or "evolutionary strategies" (the former is the American, the latter the German version). take this as an example: genetic cars
basically they do random searches for a good solution in an optimization problem. you need to know what you want and you have to measure it. in this case reaction to certain sound inputs. (this is called fitness - as in distance the car moved)
the term genetic comes from the way it works. you have a "gene", which describes the "individual" (as in car, or soundchip). this can be size of tires etc. in every generation there is a fixed amount of individuals going through the test course (or reacting to test sounds), so each individual gets a rating (fitness). after this you usually do several things: selection, recombination and mutation,
which means: you select a few individuals, let them "mate", mutate the children slightly and create a new generation from those children. rinse and repeat. there are several techniques and theory behind this, but this is the baseline.
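in code, that baseline looks roughly like this (a toy sketch with a made-up fitness function, not the actual car/soundchip setup):

```python
import random

def fitness(gene):
    # toy rating: best when the gene's parameters sum to 10
    return -abs(sum(gene) - 10)

def select(population, k=3):
    # tournament selection: pick k at random, keep the fittest
    return max(random.sample(population, k), key=fitness)

def mate(a, b):
    # recombination: mix the parents' genes
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(gene, rate=0.2):
    # mutate the children slightly
    return [g + random.gauss(0, 0.3) if random.random() < rate else g
            for g in gene]

# fixed amount of individuals per generation, rinse and repeat
population = [[random.uniform(0, 5) for _ in range(4)] for _ in range(30)]
for generation in range(100):
    population = [mutate(mate(select(population), select(population)))
                  for _ in range(30)]
best = max(population, key=fitness)
```

the design of the gene (here just 4 numbers) is the part that actually takes thought; the select/mate/mutate loop itself is boilerplate.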
however the real challenge is to design the individuals, since it affects the recombination and mutation a lot. this is not trivial and needs a lot of practice as the human who develops it. you can't just sit down and do rocket science with GAs (i know, i tried for a semester), especially because GAs tend to find weaknesses in your system: if you evolve a controller for a car that uses the least fuel, it may just stand still. (all part of the design of the GA and individuals) the same is true for the chip example.. it worked on one specific chip.
hope that wasn't too complicated and somewhat helpful. :)
Huh. This looks like it was directly stolen from this site. Not sure which came first.
Speaking of stuff stolen from rednuht.org, I liked his genetic algorithm cat, but found a bug in it. Here's my fixed version, which makes progress a lot better, plus a couple of other minor tweaks. Honestly, I have more fun with this cat than the 3-wheelers.
Genetic Cars is a good example of this problem. I'm not really familiar with the implementation details in the web version that I linked, but in my version the fitness is simply the maximum forward distance traveled by the car. When a track has a significant obstacle (say, a steep hill) you'll start out with only 1-2 cars that reach the hill. After a number of generations have passed, the population's designs converge towards those leaders who are making it to the obstacle, until all of the cars are essentially identical. At that point, you basically end up waiting for a mutation that lets the population push past the obstacle.
That's an example of natural selection, which I guess you could argue has a "goal" of "make babies that can make more babies". Evolution is just a vehicle of natural selection.
Here's a neat thing to look at to see how natural selection works. While the programmer obviously built it so that the goal is to make the car go as far as possible, the program itself doesn't have one. It has no predetermined goal, it just runs. And cars that go farther "mutate" and go to the next round. It's a cool program, and it definitely helped shape my view of evolution.
Unfortunately, the subset of computer problems that are solved by GPU programming is quite small. Off the top of my head: video processing & graphics, parameter optimization, bitcoin & password hashing.
the genetic cars project is an idea which could be solved by massively parallel programming, but it's maybe a tad too complex for a side project.
I imagine my hypothetical embodied AGI would model its world in a similar way to how the Google car maps the street, in conjunction with Facebook's AI (which identifies human faces with 97% accuracy) identifying all human artifacts and the functions of those artifacts.
When I imagine it planning its route, it runs the simulation of attempting its goal thousands of times and chooses the best one. Like how a genetic car algorithm produces the best car, it would produce the best course of action. It sounds like this is what AIXI does.
Finally, I imagine my hypothetical embodied AGI would gain understanding of context and linguistics through something like this newly announced AI project, or Wolfram's new system, which can understand natural language.
In combination, what I imagine wouldn't directly result in the singularity, but a useful servant that can complete basic commands and maintain itself.
Boxcar2D is an example of a genetic algorithm, where each car is first randomly generated and then the best designs are mutated with other successful designs until a car that's well suited to driving on the track is found.
I did a search for "genetic algorithm game" and came up with some quick hits: Genetic Cars 2 and Gen Car 2. Hopefully you should be able to find some more as it's quite an interesting thing to watch.