I've never liked this argument, because I don't think most people can feel a difference of 1 degree Celsius. I don't think I can; for me, 21 is the same as 20 is the same as 22. If you need anything more precise... I mean, that's what decimal places are for.
Edit: For everyone saying they can feel 1 °F changes: I'm not saying it's not possible, just that many people can't. Also note that humidity plays a larger role than absolute temperature, since your skin senses thermal flux, not absolute temperature.
And yes thermostats can come in <1C increments.
> If the temperature changes very slowly, for example at a rate of less than 0.5 °C per minute, then a person can be unaware of a 4-5 °C change in temperature, provided that the temperature of the skin remains within the neutral thermal region of 30-36 °C
From http://www.scholarpedia.org/article/Thermal_touch
I graduated from school in 2002 and was taught that in humans there are no stem cells in the brain, so you can't make more neurons once you're grown up. This turned out to be wrong (although I think it was already known to scientists in 2002, it just hadn't made it into the textbooks yet. Also, my biology teacher was an idiot).
Edit: Apparently this is still being taught. I'm on the go and can't reply to all your comments, so here are some links:
Just realized I never answered your question, having a hard time finding a progression of swings or slides and stuff.
I did run across a suggested article: the history of playground development is long and detailed, but for a well-sourced, well-researched article, see The Evolution of American Playgrounds by Dr. Joe Frost of the University of Texas at Austin.
As far as I know, we don't really know why yawning is contagious, but it seems to be linked with the ability to empathize. I've marked this as questionable, but please remember to link to a valid source next time. Thanks!
Here is some more interesting reading if anyone is interested:
Successive generations who are good at selling half truths, falsehoods, or misinformation eventually begin to believe it. To follow along with the marketing analogy, if you continue to hire people using these oversimplified terms, your marketing firm will begin to resemble your client base; 'getting high on your own supply', to use a turn of phrase.
Here's a pretty good resource: http://www.scholarpedia.org/article/Neuroethology_of_Insect_Walking
Scholarpedia in general is pretty cool. It's a peer reviewed wiki for science.
If you are interested in implementing a distributed, neuro-inspired walking controller, I suggest looking at walknet.
Compare this wiki article with its Scholarpedia counterpart. (I just pressed "random" on Scholarpedia and this came out.)
Wikipedia articles tend to serve as general (conceptual) overviews of whatever topic, plus some useful links to follow. -- Scholarpedia reads more like a textbook. Some of these articles are really high-level and extensive, and I don't think that's what wikipedia is about.
At least for me, wikipedia is more of a quick reference guide, not a place to actually study a topic.
I'm sorry, but this could hardly be a surprise. This blink is called an "attentional blink" (AB), and it was first described by Raymond, Shapiro, & Arnell (1992). AB is a phenomenon in which the second of two targets cannot be detected or identified when it appears close in time to the first.
Link if interested: http://www.scholarpedia.org/article/Attentional_blink
Visual signal transduction occurs through a relatively slow (but still really fast to you and me) biochemical cascade that takes tens of milliseconds in the photoreceptor cells alone.
http://www.scholarpedia.org/article/G_protein-coupled_receptor.
Not sure what you mean by complicated.
In physics it doesn't matter how subjectively beautiful a theory is to you; what matters is that it describes nature.
See here for why we need it: http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
That's not strictly true: when a predator overhunts, its population will eventually decrease. This gives the prey population a chance to recover until food is abundant enough that the predators can hunt easily again.
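This boom-and-bust coupling is exactly what the classic Lotka-Volterra predator-prey equations capture. Here's a minimal sketch in Python; the parameter values and initial populations are made-up illustrative choices, not data from any real ecosystem:

```python
def lotka_volterra(prey0, pred0, alpha=1.0, beta=0.1, delta=0.075, gamma=1.5,
                   dt=0.001, steps=20000):
    """Forward-Euler integration of the Lotka-Volterra predator-prey model.

    dP/dt = alpha*P - beta*P*Q   (prey reproduce, get eaten)
    dQ/dt = delta*P*Q - gamma*Q  (predators grow by eating, die off otherwise)
    """
    prey, pred = prey0, pred0
    history = []
    for _ in range(steps):
        dprey = (alpha * prey - beta * prey * pred) * dt
        dpred = (delta * prey * pred - gamma * pred) * dt
        prey += dprey
        pred += dpred
        history.append((prey, pred))
    return history

hist = lotka_volterra(prey0=10.0, pred0=5.0)
prey_vals = [p for p, q in hist]
print(f"prey oscillates between {min(prey_vals):.1f} and {max(prey_vals):.1f}")
```

Run it and you see exactly the cycle described above: prey booms, predators follow, predators overshoot and crash, prey recovers.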
I think it might be almost as much of an exaggeration to say "General Relativity can be quantised" as /u/Noodled9 says. We can, as you say, use techniques of effective field theory to modify a Lagrangian to make predictions about the effects of gravity when curvature is low.
http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
There is still a large degree to which GR is very hard to deal with, just like QCD is a lot harder to deal with than QED.
From a laboratory experimentalist's point of view, of course, gravity's quantum effects are a complete non-issue, because the effective field theory tells us they are unmeasurably small, or, if you do things like experiment with neutron beams, you can calculate everything you need semi-classically.
But from a cosmologist's point of view, or a theorist, there is an enormous amount left to understand. If you have to explain in a phrase what these people are challenged by, "the difficulties of fully treating GR as a quantum theory" seems to me like a good summary.
>I imagine people would think the same about distance and length.
yeah we have no reason to assume discreteness of space.
>They probably don't believe in significance of the Planck length I assume...
...? I think you're misunderstanding what the Planck length is.
The Planck length does not represent a pixel size of space. It has a very different meaning: it indicates the scale at which you need to consider quantum effects of gravity.
mentioned here http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
http://www.scholarpedia.org/article/Transcranial_magnetic_stimulation . When I say the electromagnetic force won't affect the neurons, I am speaking in terms of EMP pulse strength. These are isolated, very strong pulses which would never occur on their own. For the most part, this is also VERY new, and it's highly debated whether it has any effect on the conditions it claims to treat.
Edit: I addressed this in "Edit3" in my original comment. >These pulses are also matched in amplitude, strength, direction, and length to mimic the neuronal cue to fire, using action potentials as reference. The odds of some EMP pulse being strong enough and matching all these criteria precisely are pretty much nil. Again, when I said the electromagnetic force wouldn't affect the neurons, I meant in terms of any realistic EMP strength.
First, some edits because your blogspam 2nd link is garbage. Here is the original source that is much less garbage.
>With the old brain simulation algorithm, a maximum of 10% of the brain could be simulated.
There are several problems with this statement. One, it's inaccurate. NEST only claims that 10% would have been possible with an exascale computer, but more processing power could have done 100%. Two, it's misleading because there are multiple kinds of neural-simulation programs that exist and NEST isn't even the most prominent among them.
>With the new algorithm developed by the NEST (Neural Simulation Tool), 100% brain simulation is now possible with the requisite computing power and memory
[NEST](http://www.scholarpedia.org/article/NEST_(NEural_Simulation_Tool\)) has existed since 2013, and I'm completely unable to find any evidence that they have a new algorithm that promises significant performance improvements. The real article just says an improved algorithm could be made that would be more efficient. Also, Open Worm can simulate a human brain, given a large enough computer. Basically anything that can model a single cell accurately can do so, if you're willing to say you have an infinitely powerful machine behind it.
>The exascale super computer A21 will be built by 2021 (IBM, Intel, and other big tech companies are collaborating with the DoE to make this happen on time)
That's a big claim, one not supported by your links. They specify Intel and Cray building it, and that the purpose of the lab will be for physics research for use in fusion reactors. Not AI research.
In addition to the fact that the Planck length is not a "reality pixel," "quantizing" a field does not mean making it discrete.
Lack of a known particle is not what makes quantum gravity hard -- in fact, we basically already know what the particle should be, and we can construct a working theory of quantum gravity in the low-energy limit. When we try to expand this to high energies (which are the cases that are actually interesting), that's when we run into problems.
Nah, he's not talking about reflex (although it doesn't have anything to do with quantum uncertainty, like the post he was responding to). There actually is a pretty wide array of neuroscience results suggesting that our decisions are "made" in our brain long before our "consciousness" is "aware" of them.
Whatever those three words mean.
Check it out here.
>IIRC, ant colonies have figured out optimal paths faster than supercomputers.
In fact, there is such a thing as ant colony optimization, where optimization problems are solved using a method that is inspired by the behaviour of ants.
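To make that concrete, here is a minimal ant-colony-optimization sketch for a tiny travelling-salesman instance. The city layout, colony size, and evaporation rate are arbitrary toy choices, not a reference implementation: simulated "ants" build tours probabilistically, preferring short edges with lots of pheromone, and good tours get reinforced.

```python
import math
import random

def aco_tsp(coords, n_ants=20, n_iters=50, evap=0.5, seed=0):
    """Minimal ant colony optimization for a tiny travelling-salesman instance."""
    rng = random.Random(seed)
    n = len(coords)
    dist = [[math.dist(a, b) for b in coords] for a in coords]
    pher = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # edge desirability = pheromone level * closeness
                w = [pher[i][j] / (dist[i][j] + 1e-9) for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        # evaporate old pheromone, then deposit inversely to tour length
        for i in range(n):
            for j in range(n):
                pher[i][j] *= (1 - evap)
        for tour, length in tours:
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                pher[a][b] += 1.0 / length
                pher[b][a] += 1.0 / length
    return best_tour, best_len

cities = [(0, 0), (0, 1), (1, 1), (1, 0), (2, 0), (2, 1)]
tour, length = aco_tsp(cities)
```

On this 2x3 grid the colony quickly converges on the perimeter tour, which is optimal; real ACO variants add heuristics, but the reinforcement loop is the whole idea.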
>I understand that whatever error makes the formulation of a unified theory an impossible task, was made in the 1920s/1930s. It could be a simple thing like a translation error from German to English and back, it could be a logical/mathematical error in the definitions of the concepts. It could be the guilt of Dirac who ignored the discussions which led to the Copenhagen interpretation and just kept on working on a fundamentally wrong theory, but one which works almost perfectly.
>And there's no out currently because everything works too well, nobody wants to give up the field. It reminds me of the switch from Geocentrism to Heliocentrism: Heliocentrism as a theory was worse back then, it took some time to iron out the problems.
Wow. Shouldn't you first get a good grasp of quantum (field) theory, and of why it's not possible to quantize gravity with the common procedures, before you make an opinionated comment on the deeper reasons for that? At any rate, this has extremely little to do with ordinary quantum mechanics and its interpretations, and frankly your post has some conspiracy vibes to it. I think it's most useful to look up reddit posts where general relativity, quantization and renormalizability are discussed (many such posts exist). Furthermore, I find this link useful: http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
http://www.scholarpedia.org/article/Policy_gradient_methods is a good overview of those equations. Any introductory RL course will cover them, so if the role involves any RL this feels very fair. RL is sometimes covered in an intro ML/AI course, although that varies, as there are a lot of other topics to cover.
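For a flavour of what those equations boil down to, here is a minimal REINFORCE-style sketch on a two-armed bandit (a toy setup of my own, not from the article): ascend the score-function gradient E[R * grad log pi(a)] on a softmax policy, and the policy drifts toward the better arm.

```python
import numpy as np

def reinforce_bandit(true_means, lr=0.1, episodes=2000, seed=0):
    """REINFORCE on a 2-armed bandit with a softmax policy over theta."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(len(true_means))
    baseline = 0.0
    for _ in range(episodes):
        probs = np.exp(theta - theta.max())
        probs /= probs.sum()
        a = rng.choice(len(theta), p=probs)        # sample an action
        r = rng.normal(true_means[a], 0.1)         # noisy reward
        grad = -probs                              # grad log pi(a) = onehot(a) - probs
        grad[a] += 1.0
        theta += lr * (r - baseline) * grad        # score-function update
        baseline += 0.05 * (r - baseline)          # running baseline cuts variance
    return theta

theta = reinforce_bandit([1.0, 0.0])   # arm 0 pays more on average
```

The baseline subtraction is the same variance-reduction trick that shows up, in fancier form, in actor-critic methods.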
Data science is a very broad term. There are a lot of different subfields, and many of them will never touch RL. I think broad knowledge of the basics is useful, but there will likely always be subfields you know nothing about. RL is pretty common in robotics.
One way of writing the Bekenstein-Hawking entropy of a black hole. Pretty much the weirdest/deepest equation I could think of off the top of my head.
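For reference, one common way of writing it (in SI-style constants) is:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^3 A}{4 \hbar G} \;=\; \frac{k_B\, A}{4 \ell_P^2},
\qquad \ell_P = \sqrt{\frac{\hbar G}{c^3}}
```

where A is the horizon area. The entropy scales with the area in units of the Planck length squared rather than with the volume, which is part of what makes it so deep.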
The problem is actually how chaos arises from quantum mechanics. Quantum mechanics is governed by the linear Schrödinger equation and thus does not display the sensitivity to initial conditions found in chaotic systems. There's a whole field of research (quantum chaos) dedicated to studying the issue.
/u/barrelofcannon linked some info about the Swift-Hohenberg equation which is used to model the formation of Benard cells.
No it does not. You get a theory of gravitons when you quantize GR.
http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
This whole idea that "gravity being the curvature of spacetime and not a force" being "incompatible" with gravity being "mediated by quanta" (often misunderstood as saying that particles that interact through that force will send these quanta back and forth between them, which is not the case.) is out there and quite common but wrong.
http://www.scholarpedia.org/article/Hunter-Gatherers_and_Play
Just looking up the definition of things like relative poverty, egalitarianism, private property, etc., can really help separate narrative from principle.
You don't even need a supercomputer for that. The butterfly effect is a property of chaotic systems, and there are a number of very simple non-linear dynamical systems that exhibit it, such as the Mackey-Glass system or the Lorenz attractor.
They might not look as impressive as you think they should, but they are exactly what the butterfly effect is about. The proof of the butterfly effect is in the definition of chaotic systems. And that there are real world systems that exhibit this behaviour is in the first paragraph of the wikipedia article you linked.
But regarding your phrasing of the effect: if you want to simulate an arbitrarily small disturbance, you will get serious problems with your orders of magnitude. Imagine your simulation contains a variable that is somewhere around 10^-3 and you perturb it by a value of 10^-(10^10^10); your floating point variables will round off the perturbation (because they can only hold a certain precision) and you will see no effect at all. But that is only a technical limitation and has nothing to do with chaotic systems.
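To see that this is about the dynamics and not about computing power, here is a small sketch that integrates the Lorenz system twice with initial conditions differing by 10^-8 (parameters are the standard textbook values; the forward-Euler step size is my own crude choice). The tiny perturbation grows exponentially until the trajectories are completely different:

```python
import numpy as np

def lorenz_trajectory(x0, sigma=10.0, rho=28.0, beta=8.0 / 3.0,
                      dt=0.001, steps=30000):
    """Integrate the Lorenz system with a simple forward-Euler scheme."""
    state = np.array(x0, dtype=float)
    out = [state.copy()]
    for _ in range(steps):
        x, y, z = state
        state = state + dt * np.array([sigma * (y - x),
                                       x * (rho - z) - y,
                                       x * y - beta * z])
        out.append(state.copy())
    return np.array(out)

a = lorenz_trajectory([1.0, 1.0, 1.0])
b = lorenz_trajectory([1.0, 1.0, 1.0 + 1e-8])   # perturbed by 1e-8
sep = np.linalg.norm(a - b, axis=1)              # separation over time
print(f"initial separation {sep[0]:.1e}, max separation {sep.max():.2f}")
```

An initial difference eight orders of magnitude below the variables themselves ends up of order the attractor size, which is the butterfly effect in one screenful.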
What makes you think people are less happy? Actually, people report about the same level of happiness as they did in the 50s. Consider then that in the 50s many, many women didn't even know how to make themselves orgasm. There are a lot of things that have improved since then. A more legitimate question would be, how come people aren't even happier considering how much their lives have improved legally, financially and socially.
People get divorced because they can. I don't think children are happier with parents in bad marriages than they are with separated parents. What we need to solve is that women often become the primary caregiver after a divorce, and that's a result of traditional gender norms. Custody should be shared equally. My parents did that, and I'm actually glad they separated, because they're much happier now than they were before. Divorces aren't fun, but bad marriages aren't fun either.
Most men I know don't feel left out in the cold. Most men I know are happily married. To the extent they are "less motivated to work" than men were in the 50s it's because they prioritize spending time with their wives and kids. I honestly think your grim world view of gender relations is based on confirmation bias.
There are problems to be solved in western society, no doubt. But trying to turn the clock back or reinstate traditional gender roles isn't going to solve any of them. Besides, if TRP cared so much about the welfare of society and the virtue of men and women they wouldn't be contributing to making women "sluts" by sleeping around with as many of them as they can. That's blatant hypocrisy. TRP doesn't care about society.
It's not just about sex. All traumas work this way. When something really scary or terrible happens to you, part of you can become fixated on it -- you can both be repelled from it and drawn to act it out compulsively. Some people go one way, some people go the other, some people do both at the same time. And you don't really have a choice in how it affects you; it becomes hard-wired into your brain (specifically your Amygdala).
So it can both trigger you and get you upset, and it can prompt you to repeat things like it over and over to try to "master" it.
People who are beaten as kids are more likely to beat up other people -- and it's not that they were taught beating was good; presumably nothing is a better education in how beating is bad than being beaten is. It's that you try to repeat the super-bad things that happen to you as an attempt to understand and process them.
I took a lot of calculus and I use every bit of it. I did a masters in economics and used it pretty much everyday. I'm now in a stats PhD program and I certainly use it everyday. Calculus and linear algebra are probably the two most important math classes you can take for a PhD program.
I never took a class titled Numerical Analysis but did a quick Google search: http://www.scholarpedia.org/article/Numerical_analysis. It lists 3 main areas:
All 3 of these areas are useful, and I've used each of them to some degree during my PhD program. After reading through this webpage, I'd say it's a no-brainer to take the numerical analysis course.
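For a flavour of the kind of thing such a course covers, here is a bare-bones Newton's method root-finder, the classic workhorse of the root-finding part of numerical analysis (my own toy example, not from the linked article):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# root of x^2 - 2 starting from x0 = 1, i.e. sqrt(2)
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

The course is largely about when and why iterations like this converge (here, quadratically near the root) and what happens in floating point when they don't.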
Good news is there's plenty of research and measurement on eye movements. It seems to me (armchair optometrist reading that article) that lots of movements are fairly slow compared to 5 ms intervals. The other interesting thing is how predictable eye movement is over certain distances and speeds. I imagine that, updating at 5 ms (or faster with newer generations), you could reasonably predict where the eye is going to go and update the image before it even gets there (or exactly when it gets there).
I also seem to recall (though the source escapes me) reading that there's actually a significant latency between when your eye moves (and stops moving) and when an image is consciously perceived; the brain does a good job of filling in the gaps, or of making your perception not care about the gap in data. It might be interesting to see what happens if it gets that information slightly late, but I have no doubt that at 5 ms (or some achievable faster frequency) we won't perceive anything odd at all.
>(I don't know much about quantum physics, so bear with me)
Cool, but you need to actually study a ton of general relativity and quantum theory to even understand the problem of quantizing gravity.
> gravity is the distortion of space around any object with mass, and evades quantification due to lack of a known particle (say, a graviton)
that's not the reason.
http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory
> since the planck is the smallest possible distance between two points of space
It's not. The Planck length is the scale at which quantum effects of gravity become important.
>, wouldn't that mean that space (and thus gravity) is quantized with the planck length being a "reality pixel" like a particle?
It's nothing to do with discreteness of spacetime; it's not a pixel size.
And discreteness isn't the same thing as quantization.
Not a mathematician, but... :-)
It all started quite innocently. One of the first things I built during my PhD was a clock synchronization system. Our measurements showed some behaviour that we could not explain. So, in order to study it better, I built a simulation system. For the simulation, I needed to generate noise that fit real systems as closely as possible, including flicker noise. It turned out that this "solved problem" was harder than I expected, and wasn't solved at all. In order to get a better simulation, I tried to understand the mathematical description and soon found out that neither of the descriptions we had was complete, nor was the one we EEs considered true actually true. So started my journey to learn measure theory, fractional calculus and stochastic calculus (not necessarily in that order).
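For anyone curious what a crude version of that simulation problem looks like: below is a common FFT-shaping sketch that approximates 1/f (flicker) noise by colouring white Gaussian noise so its power spectrum falls off as 1/f. This is only a stationary approximation; as the comment says, doing it properly is much harder than it looks.

```python
import numpy as np

def flicker_noise(n, seed=0):
    """Approximate 1/f (flicker) noise by spectral shaping of white noise:
    scale each Fourier amplitude by 1/sqrt(f), so power goes as 1/f."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]            # avoid division by zero at DC
    spectrum /= np.sqrt(freqs)     # amplitude ~ 1/sqrt(f)  =>  power ~ 1/f
    return np.fft.irfft(spectrum, n)

noise = flicker_noise(4096)
```

The catch, and part of why the problem is genuinely hard, is that true flicker noise has long-range correlations and subtleties (nonstationarity, divergent low-frequency power) that a finite FFT recipe like this sweeps under the rug.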
Which doesn't cite a source; here's one with sources: http://www.scholarpedia.org/article/Thermal_touch
>When the skin at the base of the thumb is at 33 °C, the threshold for detecting an increase in temperature is 0.20 °C, and for detecting a decrease it is 0.11 °C
Less than a third of a degree Fahrenheit.
>If the temperature changes very slowly, for example at a rate of less than 0.5 °C per minute, then a person can be unaware of a 4-5 °C change in temperature, provided that the temperature of the skin remains within the neutral thermal region of 30-36 °C
Almost 10 °F.
That's quite a range that doesn't correspond to any temperature scale.
If you want to get in an argument about science, keep in mind you have to demonstrate your claims too. If you look at the scientists who take down pseudoscience, such as James Randi, you'll notice that they provide careful rebuttals of pseudoscientific claims. They don't just wave their hands.
Reading the article, there are reasons to believe it's sound. Latent semantic analysis is falsifiable: the author notes that he plugged in subreddits unrelated to politics and reproduced preexisting correlations, e.g. /r/Minnesota + /r/NBA is close to /r/Timberwolves. If the code had not reproduced correlations already known to exist, it would have been falsified. Secondly, the source code and data are freely available, so anyone can check the code for irregularities and make sure the results match. (In fact, I probably will later.) There are other studies which have bolstered scientists' confidence in LSA, such as this one. There are more in the long list of references on LSA's Wikipedia page.
Like any scientific model, LSA's shortfalls have been discussed, and there are occasionally allegations of fly-by-night data scientists improperly interpreting their data, but this study is easy to double-check and replicate. If one thinks feature selection was massaged to fit an agenda, one should be able to point to the lines in the code that are suspicious.
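For anyone who wants to see the mechanism rather than take it on faith, here is a tiny self-contained LSA sketch. The term counts and the three "subreddit" documents are invented for illustration (they are not the study's data): documents become vectors in a truncated-SVD latent space, and cosine similarity measures how close they are.

```python
import numpy as np

# toy term-document counts: rows = terms, columns = hypothetical subreddits
terms = ["lake", "snow", "dunk", "playoff", "court"]
docs = ["minnesota", "nba", "timberwolves"]
counts = np.array([
    [5.0, 0.0, 2.0],   # lake
    [4.0, 0.0, 1.0],   # snow
    [0.0, 5.0, 4.0],   # dunk
    [0.0, 4.0, 3.0],   # playoff
    [0.0, 3.0, 3.0],   # court
])

# truncated SVD: keep k latent "topic" dimensions
u, s, vt = np.linalg.svd(counts, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ vt[:k]).T   # one latent vector per document

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# the mixed-vocabulary document lands nearer its dominant neighbour
print(cos(doc_vecs[2], doc_vecs[1]), cos(doc_vecs[2], doc_vecs[0]))
```

The real study does this at scale, with better weighting, but the falsifiability point stands in miniature: if the latent space failed to place "timberwolves" between the other two (and nearer the basketball-heavy one), the model would be visibly wrong.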
So I'm not a medical professional, but I'm pretty sure that the key to feeling well-rested is to get an adequate amount of rest on a regular basis, rather than running a deficit through the week and trying to catch up on the weekends. It sounds like your lifestyle might be to blame. (I know, I do it too... we gotta stop.) Here's this if you're interested.
Is it always the same side? If so, your dog may actually be experiencing a condition called hemineglect. In humans, it leaves people unable to perceive one side of their field of vision. It isn't that they're blind, it's just that their brain can't process any input from that side.
You're presenting some pretty big claims here, and I think you should at least provide the source where these claims originate, especially if someone is supposed to comment on them.
An article I found to give a good transition from GR to quantising GR is this one http://www.scholarpedia.org/article/Quantum_gravity_as_a_low_energy_effective_field_theory .
There isn't enough information in the diagram to remotely understand what it's trying to convey, but it would be a long shot to call it a theory of consciousness.
But if this diagram has intrigued you because you think it highlights a way of understanding the physical basis of consciousness by focusing on the causal interaction of a system's parts, and you are inclined to agree with that, or at least are curious about it, you may like to read about the integrated information theory of consciousness (a prominent theory in the sciences), which at its core says that whether or not a system is conscious depends on the particular causal interactions of that system.
Scholarpedia has some good articles on more advanced concepts and most articles on there are written by experts in their field. I find scholarpedia does a better job at explaining topics. A great plus to scholarpedia is their citations are usually well known literature on the topic, which you can then refer to for very in-depth understandings and as sources for research projects.
But seriously, we just use Wikipedia for terminology things. It's a great resource, and you will waste a lot of time and energy avoiding it.
Some good starting areas are Game theory and the Predator/Prey modelling.
A good example is the Prisoner's Dilemma.
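The whole dilemma fits in a few lines (the payoff values below are the conventional textbook ones, not from any particular source): defecting is the best response to either move, even though mutual cooperation beats mutual defection.

```python
# payoff matrix: (row player, column player) for Cooperate / Defect
PAYOFFS = {
    ("C", "C"): (3, 3),   # mutual cooperation
    ("C", "D"): (0, 5),   # sucker's payoff vs temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual defection
}

def best_response(opponent_move):
    """Pick the row move that maximises our payoff against a fixed opponent."""
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

# whatever the opponent does, defecting pays more: that's the dilemma
print(best_response("C"), best_response("D"))
```

It's the iterated version of this game, where strategies like tit-for-tat can sustain cooperation, that makes the link to the evolution of seemingly altruistic behaviour.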
If you apply the same types of modelling techniques, and compare how certain types of human behaviour influence the genetic success of an individual, you will find that many seemingly complex examples of human behaviour are actually finely tuned and balanced techniques to maximise our reproductive success. Seemingly altruistic behaviours, when viewed from the perspective of the gene, become selfish.
Edit: was thinking while doing the dishes, and thought I'd better point out that these modelling techniques are much more effective at predicting the behaviour of larger populations over time. At the level of the individual, chaotic factors tend to have a much greater influence. So while it may be useful, for example, for explaining the rise of MGTOW behaviour in males, it's not going to help you much in determining whether Bob will choose to become a MGTOW.
>The history of synchronization goes back to the 17th century when the famous Dutch scientist Christiaan Huygens reported on his observation of synchronization of two pendulum clocks which he had invented shortly before: "... It is quite worth noting that when we suspended two clocks so constructed from two hooks imbedded in the same wooden beam, the motions of each pendulum in opposite swings were so much in agreement that they never receded the least bit from each other and the sound of each was always heard simultaneously. Further, if this agreement was disturbed by some interference, it reestablished itself in a short time. For a long time I was amazed at this unexpected result, but after a careful examination finally found that the cause of this is due to the motion of the beam, even though this is hardly perceptible."
Arxiv, bioRxiv, and Scholarpedia essentially serve that purpose. But it is important that all scientific papers get peer-reviewed, with some level of acceptance and rejection. As soon as you incorporate peer review, you start to need some kind of editor to arbitrate between author and reviewer, and that starts to cost money. This becomes especially difficult to support with a donation system because of the sheer quantity of papers that come out. Wikipedia's architecture wouldn't even come close to being able to support the number of scientific articles published per day. You would need to host several thousand new wiki-length articles every single day.
>That seems outright impossible.
Why? It sounds like you're more or less restating the hard problem of consciousness in the context of machines, but I'm not sure I'm following you to your claim of impossibility.
Models of visual hallucination invoke Turing instability a lot.
>An important idea that was first proposed by Alan Turing is that a state that is stable in the local system can become unstable in the presence of diffusion.[9]
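Here is a sketch of what "stable locally, unstable with diffusion" means concretely. Take a two-species activator-inhibitor Jacobian that is stable on its own (this particular matrix and the diffusion constants are invented for illustration) and compute the linear growth rate as a function of spatial wavenumber k; with the inhibitor diffusing much faster, a band of k gains a positive growth rate, which is the Turing instability.

```python
import numpy as np

# Hypothetical Jacobian at the homogeneous steady state:
# u activates itself and v; v inhibits u and decays.
J = np.array([[1.0, -1.0],
              [2.0, -1.5]])
Du, Dv = 1.0, 10.0   # the inhibitor must diffuse faster than the activator

def growth_rate(k):
    """Largest real part of the eigenvalues of J - k^2 * diag(Du, Dv)."""
    A = J - k ** 2 * np.diag([Du, Dv])
    return np.linalg.eigvals(A).real.max()

ks = np.linspace(0.0, 2.0, 200)
rates = [growth_rate(k) for k in ks]
print(growth_rate(0.0), max(rates))   # negative at k=0, positive in a band
```

At k = 0 (no spatial structure) all perturbations decay, but diffusion destabilises a finite band of wavelengths, and that band sets the characteristic spacing of the resulting pattern, the same mechanism the hallucination models invoke.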
"Approximately 93% of adult male psychopaths in the United States are either in prison or jail or on probation or parole." Source
Your eyes can follow moving objects, or flick from one position to another, but those are the only two times your brain processes information from them while in motion. When you scan a visual field, you are really just flicking from one point to another, but you don't notice the "jumpy" movements between the two points. (Read here for more about visual processing during eye movements, or about saccades, which is what these rapid jumps are called.)
Edit: for links
Per Scholarpedia, it manages gut motility (i.e. how much and how often your intestines churn or move stuff along), fluid exchange rates across the intestinal wall, and defences (diarrhea and possibly vomiting are part of its purview). There's a lot of cross-communication between it and the regular brain, as well.
>but doesn't understand how that's possible given that every eight years, the cells in our brains have completely changed, effectively giving us a new consciousness.
This is incorrect. Most of the cells in your brain are retained throughout your life. In fact, until the last few decades neuroscientists didn't think there was any neurogenesis (birth of new neurons) in the adult brain at all; they believed it was static and that cells just die off over time as you age. Now, of course, we have evidence of neurogenesis in areas like the olfactory bulb, the ventricles, and the dentate gyrus of the hippocampus. While it is true that most other cell types in the rest of the body are nearly completely replaced every 8 years, this doesn't occur in the brain.
>Why would we have a new consciousness just because our brains have replenished?
Ask him that and see what his answer is.
We don't have new memories or character traits. We learn our perception of reality, why would that change with a new brain?
Memories and character traits are different from the self-consciousness your boyfriend is describing. You make a good point there: if the brain determines consciousness, memories, and character traits, how is it that memories and traits are largely unchanging while consciousness, as your boyfriend asserts, is new every 8 years?
>We learn our perception of reality, why would that change with a new brain?
Why wouldn't a hypothetical new brain be different from the old brain? After all, it's the old brain that has done all the learning, not the new one.
Here is a source on adult neurogenesis if you want to learn more:
http://www.scholarpedia.org/article/Adult_neurogenesis
And two good sources on consciousness:
http://www.iep.utm.edu/consciou/ http://plato.stanford.edu/entries/consciousness/
There are two extreme limits of accretion disks around black holes:
1) Thick Disks
These disks have very low density and low accretion rates, but high temperatures, because they are not able to efficiently radiate away their energy. To give you an idea, the densities are so low here that the protons and ions in the disk don't even have time to come into thermal equilibrium with each other, so you have a "two temperature" fluid. Collisions between ions are similarly infrequent, so fusion is extremely unlikely. The temperatures can get pretty high though, on the order of 10^9 Kelvin and up to ~10^11 Kelvin.
2) Thin Disks
These disks have high accretion rates and higher densities but lower temperatures because they are very efficient at radiating away energy. A typical temperature scale would be ~10^4 Kelvin, which can be compared to the temperature required for fusion in the core of the sun of ~10^7 Kelvin. Furthermore, the density here is still absurdly low compared to the density in the core of the sun (~ 10^-9 g/cm^3 vs. ~ 150 g/cm^3 in the sun).
I got the numbers for the thin disk here.
TL;DR: In both cases, the density is just too low at the given temperatures for fusion to occur.
The fact that you're totally incredulous is what makes it awesome!
This is one of the most well-replicated (and easiest to replicate) psychology findings out there. Every time I show this video in a lecture, half the room misses the gorilla.
It's called "inattentional blindness". It's wild.
Yeah! Basically your mind constructs memories that didn’t actually happen. False memories can also come from your mind trying to fill in gaps.
Let’s say you saw someone stealing a bike (but didn’t get a good look at their face). Later, you try to remember what they looked like (even though you didn’t really see them). You can mix up faces from earlier in your day, and sort of place one of those previous faces onto the body of the person stealing the bike. But the thing is, once you have the memory of “the guy who stole the bike looked just like the guy on the bus!”, that memory gets locked in. And it feels completely real; you don’t realize that you’ve created a false memory, and so you just think “yeah, I saw who stole the bike, it was the guy from the bus”.
False memories can be REALLY dangerous when it comes to crimes and assaults. If someone has the “memory” of you being the one who assaulted them, they can pass a polygraph. There’s no solid way to prove if the memories are false.
If you want to read more about false memories check this out
You're right about atmospheric re-entry, but gas nebulae are extremely diffuse. Even at Mach 1.5 there would be little resistance from the gas.
> The density in the nebulae is very low, ranging from several hundred to a million atoms per cubic centimeter. Such conditions are better than any vacuum one can achieve on Earth.
1) The program is from 2001
2) The strong AI hypothesis was refuted by John Searle 30 years ago.
3) Chatbots are not intelligent by anyone's measure
> Me:"If I told you I was a dog, would you find it strange to be that talking to a dog?" bot:"No, I hate dog's barking." Me:"Isn't it weird that a dog is talking to you on the internet?" bot:"No, we don't have a dog at home."
4) Convincing 33% of observers is not the Turing test. The test calls for the program to be indistinguishable from a human. Which from the above it clearly is not.
Einstein agreed with you: “God does not play dice.” That was one of the motivations for Bell's theorem, which “proves that quantum physics is incompatible with local hidden-variable theories.”
https://en.m.wikipedia.org/wiki/Bell%27s_theorem
Note ‘proves’. This theorem is not in dispute. Bell's theorem shows that the random events we see in quantum mechanics are indeed random and that there are no underlying causes.
Yes, it is probabilistic, just like throwing a die has a probability of one in six of landing on a two, but that does not make the two thrown determined; it remains random and probabilistic.
This has all been observed and confirmed; one manifestation is Hawking radiation, where these quantum events occur near black holes, resulting in one particle going down the hole and the other being radiated.
http://www.scholarpedia.org/article/Hawking_radiation
I know theists find this uncomfortable, but just asserting it’s all wrong is not a sound argument.
This seems to assume the rules are such that the subsequent state can be exactly determined from the current state. But in things like quantum mechanics one cannot exactly determine a subsequent state from a current state, so that seems to be one way it could break down. Beyond that, the problem of free will, whether it's merely an illusion and, if not, how it arises, is a very old problem of philosophy that lots of folks have grappled with. The hard problem of consciousness is a related notion: how do physical phenomena give rise to experience? It's considered an unanswered question.
Dennett is in no way an eliminativist about qualia. He explicitly says that whatever subjects report experiencing is something that a theory of consciousness should account for: even if the reports are illusory, the theory should explain why they're illusory. Neuroscientists have known for a long time that there's a blind spot in our retinas (roughly 7° in diameter) where there are no photoreceptors, since that's where the optic nerves gather to exit the eyeball. This means that no peripheral information reaches the brain from this part of the visual field. And yet our visual field feels subjectively homogeneous. Should we trust subjects when they report this? Sure. Does this mean they're right? No. But we need not worry, since we can explain the mismatch. (The story would go like this: each conscious visual experience is an integration of several saccades, etc. etc.)
Dennett is not a reductionist either. In Consciousness Explained he defends a functionalist theory of consciousness, i.e. the multiple drafts model, in which contents only become conscious when they're available to be stored in memory or to participate in behavioral modulation (especially verbal reports). It typically takes considerable time to determine whether a content had these effects. When we consider very brief intervals of time, e.g. t < 500 milliseconds, it may not be metaphysically determinate whether any of the mental states you had during t was or wasn't conscious. Dennett states that during this temporal window both hypotheses can be empirically equivalent. This is one of the most original features of his theory.
Most textbooks I know of on PDEs are quite dull and old-fashioned.
You could start with Paul's Online Math Notes
http://tutorial.math.lamar.edu/Classes/DE/IntroPDE.aspx
Or Scholarpedia
http://www.scholarpedia.org/article/Partial_differential_equation
Brownian motion essentially means that the frequency spectrum of the signal/values/amplitude falls as the inverse of the frequency squared (1/f^2 ).
It seems completely logical that there is some sort of random-walk noise floor in the stock market due to the decisions of individuals which are likely completely uncorrelated from the decisions of others. However, I would imagine that there's also a more correlated signal in the market (more 'pink' or 1/f-like), which originates from the decisions of some influencing the buy/sell decisions of others.
A quick Google-ing indicates there is some validity to this hypothesis: link
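If anyone wants to see the 1/f^2 falloff directly, here's a quick sketch (pure Python with a naive DFT; the sample size, fit band, and everything else about it are illustrative choices of mine):

```python
import cmath, math, random

random.seed(0)
N = 2048

# Brownian noise = cumulative sum of white noise (a random walk)
walk, s = [], 0.0
for _ in range(N):
    s += random.gauss(0.0, 1.0)
    walk.append(s)

# Naive DFT power spectrum up to kmax (O(N * kmax); fine at this size)
def power_spectrum(sig, kmax):
    n = len(sig)
    return [abs(sum(sig[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) ** 2
            for k in range(1, kmax + 1)]

ps = power_spectrum(walk, 200)

# Least-squares slope in log-log space; 1/f^2 noise should give roughly -2
xs = [math.log(k) for k in range(2, 201)]
ys = [math.log(ps[k - 1]) for k in range(2, 201)]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 2))
```

Swapping the random walk for the plain white-noise `steps` should flatten the fitted slope toward 0, which is the 'uncorrelated decisions' baseline.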
Since you (and u/lonebluespirit and u/strongshieldman) posted this, I just wanted to say that Giulio Tononi (from OP's quote) is a mathematician and neuroscientist who has developed one of the only really plausible (and mind-boggling) coherent theories of consciousness.
It's called Integrated Information Theory
If you don't want to visit the link, the abridged version is basically that it is a form of pantheism, in that any set of elements with a cause-effect relation can produce a certain amount of irreducible information—meaning that a huuuuge number of things not thought of as "conscious" actually do have a low level of consciousness.
It's pretty cool, you should check it out
There were also edit wars where the Wikipedia editors who stayed the longest wound up winning the vote to delete it; sadly, that's a political issue they haven't fixed even after a decade.
Citizendium and Scholarpedia are a bit more vetted and procedural to discourage online drama.
This page is a great place to start: http://www.scholarpedia.org/article/Basal_ganglia
It's obviously not a paper by itself but it's full of helpful references that could be a jumping off point. Are you looking for specific numbers of neurons, or just relative strengths and targets of projections?
It is not known exactly how plasticity works in the brain, or even under what circumstances new synapses are formed. There are many hypotheses out there. This is probably a good start. But if you already know Hebbian learning, it's unclear whether knowledge about spike-timing-dependent plasticity (STDP) will give you any more insight into how neurons work.
Not an expert (xD), but I can try. A Product of Experts refers to multiplying the density functions of simpler probability distributions and renormalising. Each expert covers a part of the problem (provides constraints on some dimensions) and you combine them to get your answer (covering all the dimensions of the data): the final probability distribution.
There's a good explanation in the link I provided, but the general idea is that in a sum of experts (a mixture model), an x can have high probability when only a single expert assigns high probability to that event; in a product of experts, all the constraints must be approximately satisfied, hence we get "sharper" distributions.
The assumption you make in a mixture model (a weighted sum of densities) is that each sample is generated by (1) choosing one of the individual generative models and (2) letting that individual model generate the data. You can use these models if it's tractable to fit them using, for instance, Expectation Maximization (EM). But they are not that good (inefficient) for high-dimensional data; mixture models produce a more "vague" distribution, since the posterior distribution cannot be sharper than the individual models in the mixture, and each expert would need to be somewhat adjusted to all the dimensions in order to produce a "useful" final sharp distribution. See this paper on the subject.
With a PoE, you have the problem of the partition function (needed to re-normalize) being intractable to compute, but as Hinton showed, Contrastive Divergence can get you there. I think you can find some additional motivation in the papers.
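As a concrete illustration (a toy of my own, not from the papers): the product of 1-D Gaussian experts is again a Gaussian whose precision is the sum of the experts' precisions, so the product is always at least as sharp as any single expert.

```python
# Product of 1-D Gaussian experts: precisions (1/variance) add, so the
# result is sharper (lower variance) than any individual expert.
def product_of_gaussians(experts):
    """experts: list of (mean, variance) pairs, one per expert."""
    precision = sum(1.0 / var for _, var in experts)
    mean = sum(mu / var for mu, var in experts) / precision
    return mean, 1.0 / precision

mean, var = product_of_gaussians([(0.0, 1.0), (2.0, 1.0)])
print(mean, var)  # 1.0 0.5 -- each expert alone had variance 1.0
```

For Gaussians the renormalisation is free (the product of Gaussian densities is proportional to a Gaussian); the intractable partition function only bites for more general experts, which is where Contrastive Divergence comes in.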
I hope this helps :)
Okay, here's the scientific explanation, and here's the original paper written by the guy who first invented them.
I would draw a parallel to graphic designers who actively exploit gestalt principles to create visual interest. There are fundamental ways in which we interpret given spacial configurations and figure-ground relationships. Exploiting these fundamentals in certain ways can produce interesting results.
I don't see how it's possible to consider an intentional approach to silhouette mere hype by fashion bloggers when it's a result of fashion designers applying long-standing principles of graphic design to the medium of fashion.
Perhaps contemporary designers are abusing silhouette or underachieving in their deliberate use of it.
Certainly, a collection is not "good" simply because the designer took a deliberate approach to silhouette. Imo, this particular collection is indeed pretty mediocre. We see minor variations within an unusual, but bland, set of silhouettes. There's nothing compelling about it as a collection, though individual pieces might work well in use.
> After all, it's just a shape, is a square better than a circle ?
It depends on the context. Squares and circles are decisive shapes, and they have different psychological effects on the viewer. Various ovals and rectangles might not be that interesting, but a circle or square can have a powerful impact. When used together, though, a configuration of specific rectangles might be more interesting than a group of squares. It's all a matter of proportion and spacial interaction.
I don't know if you're currently a student, but looking into a course or book on graphic design might give you exposure to these principles in a field other than fashion. Fashion is so complex that it's difficult to isolate its individual components. I believe this is what leads to the interpretation of some things as hype.
I see a lot of misunderstanding here, so I'll try and explain how I see Scholarpedia:
First off, Scholarpedia's breadth is vastly smaller than that of Wikipedia, even if we consider only "hard-science" articles. Its main strengths are in very specific subjects, such as computational neuroscience and neural networks, and perhaps a few others I'm less familiar with. However, the (relatively few) strong articles in Scholarpedia are extremely strong, and written by top experts.
For example, the article on the Hopfield model, an important concept in computational neuroscience, was written by John Hopfield himself. As another example, the article on Boltzmann machines was written by Geoff Hinton, who is undoubtedly the biggest name in the world in this field.
Both these articles go deeper and are more technical than their Wikipedia counterparts, which (for example) make them more fit as resources for students studying this material in a graduate course.
I get it. Your point is well taken. Causation is hard. Even the fancy-pants techniques can go wrong (looking at you, Granger causality). Sometimes the academics can have fun with it. This paper is a good example: Chickens, Eggs, and Causality, or Which Came First? From the conclusion:
>The structural implications of our results are not yet clear. To draw them out fully will require collaboration between economists and poultry scientists. The potential here is great. As to other questions of temporal ordering, the chicken and egg question is only the most obvious application of causality testing. Other fruitful areas of research include the testing of "He who laughs last laughs best" and the multivariate "Pride goeth before destruction, and an haughty spirit before a fall."
I have. Especially Alva, but I've had it happen with Niko once or twice this league. Only once in red maps, though.
Random is indeed random. Streaks or small series in such a tiny sample are not proof of a trend in and of themselves.
Another example of randomness: as of a few days ago, Octavian was saying on his stream that he had yet to have a lich spawn from abyss content this league. This is a streamer whose job it is to play this game, putting in streamer hours. Me, a casual dude running very un-invested red maps after work, I've found 4-5 liches this league at probably a tenth of the total playtime. Am I massively highrolling? Or is Octavian very unlucky? Who knows. Probably somewhere in between, but random is random.
> Accretion disks may stretch out light years.
[Citation needed]. As far as I'm aware, black hole accretion disks may be anywhere from 10km to 1000000km in size, well short of light years.
This is a great overview by one of the pioneers.
>Polynomial reconstruction of function at specific points
The basic idea centers around this -- you're free to choose which points to reconstruct (or interpolate) from. You can construct a polynomial interpolation from points (i,i+1,i+2), or (i-1,i,i+1), or (i-2,i-1,i); all are valid ways to construct a unique quadratic. What WENO does is, instead of just choosing one of those stencils, it computes all 3 and uses a weighted average of them. The weights are determined by a smoothness measure that penalizes oscillatory stencils.
The advantage is that this can get you high order accuracy in the region of discontinuities by automatically biasing the stencil away from them (like upwinding with shocks) while retaining stability properties.
The disadvantage is that this gets really expensive in 2D, and insanely expensive in 3D, and it doesn't play very well with unstructured grids compared to other high order methods.
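To make the stencil-weighting idea concrete, here's a minimal sketch of the classic fifth-order (Jiang-Shu) reconstruction of the left-biased interface value from five cell values. The coefficients are the standard published ones, but treat the code as illustrative rather than production-ready:

```python
# Minimal WENO5 reconstruction of v at the interface i+1/2 from the five
# cell values v_{i-2}..v_{i+2}, using the classic Jiang-Shu weights.
def weno5(vm2, vm1, v0, vp1, vp2, eps=1e-6):
    # Three candidate quadratic (3-point stencil) reconstructions
    p0 = (2 * vm2 - 7 * vm1 + 11 * v0) / 6.0
    p1 = (-vm1 + 5 * v0 + 2 * vp1) / 6.0
    p2 = (2 * v0 + 5 * vp1 - vp2) / 6.0
    # Smoothness indicators: large where a stencil crosses a discontinuity
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    # Nonlinear weights bias the average away from non-smooth stencils;
    # (0.1, 0.6, 0.3) are the optimal linear weights for smooth data
    a0 = 0.1 / (eps + b0)**2
    a1 = 0.6 / (eps + b1)**2
    a2 = 0.3 / (eps + b2)**2
    s = a0 + a1 + a2
    return (a0 * p0 + a1 * p1 + a2 * p2) / s

print(weno5(0, 1, 2, 3, 4))   # smooth linear data: exact interface value 2.5
print(weno5(1, 1, 1, 0, 0))   # step: weight collapses onto the left stencil
```

On smooth data all three candidates agree and you recover the high-order average; at the step, the left stencil's smoothness indicator is zero, so its weight dominates and the reconstruction stays essentially oscillation-free.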
Oh boy.
What you're describing is, indeed, a standard task that is commonly automated with software. See this link for details.
What you need to do is get the data into a format that is readable by some piece of software that can be used for spike sorting. Your best bet will probably be MATLAB. There are myriad user-created spike sorting packages for MATLAB. Do you have access to MATLAB? I do not think this is something you can do using LabChart.
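For what it's worth, the first step most of those packages perform is simple threshold crossing, which is easy to prototype in any language. Here's a toy sketch in Python (the synthetic trace, threshold multiplier, and spike positions are all made up for illustration):

```python
import random
import statistics

# A minimal threshold-crossing spike detector (a common first step before
# sorting): flag samples where the trace dips below k standard deviations.
random.seed(1)

# Fake trace: unit Gaussian noise plus three injected negative "spikes"
trace = [random.gauss(0, 1) for _ in range(1000)]
for t in (200, 500, 800):
    trace[t] -= 10.0

sd = statistics.pstdev(trace)
thresh = -4 * sd
spikes = [i for i in range(len(trace))
          if trace[i] < thresh
          and (i == 0 or trace[i - 1] >= thresh)]  # count each crossing once
print(spikes)
```

Real pipelines then extract a waveform snippet around each crossing and cluster the snippets (that's the actual "sorting" part), but the detection stage really is this simple.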
Galactic magnetic fields do exist, and the dynamics are complicated and not entirely understood. This isn't just a net field from stellar magnetic fields though. The galaxy itself possesses an organized field structure.
The rotation of the galaxy actually has an important role in the evolution of the field. As the galaxy rotates, field lines can get 'wrapped up', cancel out, and release energy. The process that renews the field is described by dynamo theory, which I don't have a complete understanding of.
At intergalactic distances, gravity would likely have a much more profound effect on galactic motion. A galactic magnetic field does help keep the galaxy stable against collapse though, and can even contribute to star formation.
http://www.scholarpedia.org/article/Galactic_magnetic_fields
This is pretty technical but also has a lot of info: http://www.mpifr-bonn.mpg.de/staff/rbeck/PSSS13.pdf
It's going to be fun when the neuroscience of the hypothalamus finally enters mainstream knowledge and people start to realize just how much of animal behavior (including humans) is instinct-driven.
Accretion disks can come in all sizes and cross-sectional shapes depending on the black hole and its surrounding environment. Typically they fall into one of three categories: thin disks, thick disks, and donuts. A single black hole can have disks from each category existing at different distances from the event horizon.
A summary of the various types and where to find them can be found in this chart.
On a perhaps overly unrelated note: the only twin prime image looks like a Glass pattern which is an interesting stimulus in studying perceptual organization / global form perception (my area of research). It has been proposed that we have specialized detectors for detecting/processing concentric patterns.
http://www.scholarpedia.org/article/Belousov-Zhabotinsky_reaction
Amazing stuff, first saw it on Jim Al-Khalili's "Secret Life of Chaos" documentary. Well worth a watch!
I mean, you would have to study GR and QFT, start from the advanced sections in textbooks (QFT in curved spacetime), and work your way into topics like string theory to get a good overview of that research area. Here's some basic info too.
The tongue has somatosensory receptors on it. Lots actually. And it has a representation in Primary Somatosensory Cortex. Actually the extent of the tongue's representation (relative to body area) is toward the top of the list of body parts.
Those vanilloid receptors are there to support detection of noxious stimuli (specifically temperature) that lead to the perception of pain. That's part of the function of the somatosensory system.
Re: olfaction. "Flavor" is considered to be a multisensory percept that blends olfaction and gustation. It's distinct from "taste" which is the perception of gustation and is fairly rudimentary, and distinct from "smell," the perception related to olfaction.
Quality extrapolation, thanks for the insight. The reason a mirror is not the same thing is due to the phase space (recording apparatus) and time lag between the reflection-loop. There is the state vector of the input image (consensus, real-time reality [statue]) and the state vector of the output image (digital, TV reflection[statue image]).
In the central nervous system, this is known as axonal conduction delay.
Convolutional neural networks were originally a model of the brain's visual system coming from computational neuroscience.
Fukushima, K., & Miyake, S. (1982). Neocognitron: A self-organizing neural network model for a mechanism of visual pattern recognition. In: "Competition and Cooperation in Neural Nets" (pp. 267-285). Springer, Berlin, Heidelberg.
People generally overestimate the 'cognitive resources' problem. Only 5% of the population is under an IQ of 70, and less than 0.5% is under an IQ of 60. So 95% of the population can potentially become literate, learn some calculus, etc. The vast majority of predatory situations are a result of asymmetric power and information. And they tend to create/exploit psychological challenges, mostly resulting from trauma (physical and psychological). This is a big part of why UBI is so much better than a JG: one of the most basic determining factors of whether something is fun and improves your psychological welfare, or causes 'bad stress' and harm, is the legitimate opportunity to choose not to participate. A JG, aka work or destitution, does not encourage esoteric understandings. The opportunity to play and compete does. [http://www.scholarpedia.org/article/Hunter-Gatherers_and_Play]
Humans lived for hundreds of thousands of years in egalitarian societies. It wasn't until we started embracing hierarchical religions, that we started concentrating wealth, and justifying prolonged poverty.[http://www.scholarpedia.org/article/Hunter-Gatherers_and_Play]
I listened to a Great Courses series of lectures by Professor Donald J. Harreld, Ph.D., on the economics of early human history since the 1400s. He points out that the extremeness of poverty has been going up since we started farming. In many ways poverty has never been as bad as it is now. I.e., your individual opportunity to harvest your needs from nature, and your opportunity to compete with peers, continues to be consolidated into the hands of those with preexisting wealth.
The Roosevelt Institute has suggested that a UBI paid for by debt would effectively pay for itself, by growing the economy at least as fast as it would grow the debt over the long term. This is roughly the same logic they used to pass Paul Ryan's tax cut.
>if automation truly displaces most workers how do we keep natural incentives for people to pursue work and knowledge?
People will 'naturally' pursue both work and knowledge, that is how we play. Here is an interesting article on the topic, [http://www.scholarpedia.org/article/Hunter-Gatherers_and_Play]
I really wanted to write a funny comment making fun of your post but I figured that would be really bogus. Instead, I figured I'd give you an honest critique.
Aesthetically, your main issues are spacing and composition. I've compiled some resources that could potentially be of use to you, but do your own research on those topics in regards to graphic and layout design and you'll definitely see an improvement in your UI work.
As for the concept, it's pretty poor. Honestly, you've set yourself up badly by attempting to redesign an already well-designed application, while disregarding the fact that you have absolutely zero user data to support any of your conclusions.
So yeah, pack this one up and go read about margins and alignment and what have you. Oh, here are those links:
http://gomedia.com/zine/tutorials/rule-four-spacing-is-your-friend/ https://www.gcflearnfree.org/beginning-graphic-design/layout-and-composition/1/ http://www.scholarpedia.org/article/Gestalt_principles
Answer to the title: run Solomonoff induction, obviously, and over the next few months proceed to make billions on the stock market, cure disease, prove the Riemann hypothesis, take over the world, etc etc. Answer to the description: probably have a crack at getting some deep RL working on StarCraft :D
Most researchers agree that the brain must somehow employ a neuronal mechanism for binding the different features of a stimulus together. How this happens is still under debate. A hypothesis called binding-by-synchrony does have some empirical evidence supporting it (http://www.scholarpedia.org/article/Binding_by_synchrony), but not everybody in the field shares this view. One of the major questions is how binding can occur across different sensory domains. How does the brain 'know' which information originates from the visual domain and which from, for example, the auditory domain? In the end all features of a stimulus, irrespective of their sensory modality, must be 'bound' together into a single conscious percept. How this happens is still very much a mystery.
Almost everything in space is magnetized. Stars, most planets, galaxies, interstellar gas, and the solar wind all have magnetic fields; only comets, asteroids, and some planets don't seem to have measurable magnetic fields. Electric fields are weaker and rarer; to my knowledge the only electric fields we have detected are in the Earth's magnetotail. I study magnetic fields in our Galaxy. More on galactic magnetic fields
You are a part of your brain, not the whole thing. Your consciousness corresponds, somehow, to parts of your cerebral cortex and maybe some subcortical structures.
Heart rate, breathing, etc. are modulated by brain structures that are physically separate from the conscious parts.
There are a lot of simplified models which can reproduce the voltage curve of an excited cardiac cell. These usually amount to some mathematical representation of the underlying transmembrane currents which result in an action potential, and consequently leads to the observed "bumps" in the ECG.
Depending on what you need this for, you can choose which model would fit your needs best. Some of the models have few variables (such as the three-variable Fenton-Karma model) because they group various currents together for a more coarse-grained approach, whereas some have upwards of 60 variables to account for all of the complex interactions present.
Many of these models, and some background, is given here: http://www.scholarpedia.org/article/Models_of_cardiac_cell
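To give a flavor of the simplest end of that spectrum, here's a forward-Euler sketch of the FitzHugh-Nagumo model, a classic two-variable reduction of excitable-cell dynamics (the parameter values are the usual textbook ones, but the whole thing is illustrative, not a cardiac-grade model):

```python
# FitzHugh-Nagumo: a two-variable caricature of an excitable cell.
# With a constant drive I in the oscillatory regime, the fast "voltage"
# variable v repeatedly fires action-potential-like excursions.
def simulate(I=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, steps=20000):
    v, w = -1.0, 1.0
    trace = []
    for _ in range(steps):
        dv = v - v**3 / 3 - w + I      # fast "voltage" variable
        dw = eps * (v + a - b * w)     # slow recovery variable
        v += dt * dv
        w += dt * dw
        trace.append(v)
    return trace

trace = simulate()
print(round(max(trace), 2), round(min(trace), 2))
```

The richer models in the Scholarpedia article follow the same pattern, just with many more gating variables and currents in place of the single cubic nonlinearity here.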
It's not just a property of the resonance itself. It depends on other forces, whether there are any nearby resonances, and more ... basically, nonlinear dynamics is hard. Here are one or two context-dependent answers.
First, you'll want to look up the Chirikov overlap criterion. This basically says that when two resonances are too close to each other (so they 'overlap' ... not really a sharply defined concept), this signals the onset of chaos. This is how the Kirkwood gaps get cleared.
Second, close to a resonance, you can write down an effective Hamiltonian description which makes the slow variables look like the phase space of an (anharmonic) oscillator. So if you are near an orbital resonance, but not exactly at the middle of it, your orbit will librate about the middle. Depending on the details of the resonance, this can lead to large eccentricity variations (depending on how far from resonance you start). That can lead to orbit-crossing, so the planet-planet interaction may result in ejection. However, if you get adiabatically transferred to a resonant orbit, the orbital elements will end up close to 'exactly' on resonance, so there won't be eccentricity/inclination oscillations.
TL;DR: nonlinear dynamics be crazy.
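The 'oscillator' picture is easy to play with numerically. Here's a toy pendulum Hamiltonian, the universal effective model near a resonance (the initial conditions are made up; 'librating' corresponds to being trapped near the resonance center, 'circulating' to being outside it):

```python
import math

# Pendulum H = p^2/2 - cos(theta), the standard effective model near a
# resonance. Low energy -> libration (theta oscillates about the center);
# high energy -> circulation (theta grows without bound).
def evolve(theta, p, dt=0.001, steps=200000):
    thetas = []
    for _ in range(steps):
        # Semi-implicit (symplectic) Euler keeps the energy error bounded
        p -= dt * math.sin(theta)
        theta += dt * p
        thetas.append(theta)
    return thetas

librating = evolve(0.5, 0.0)    # starts near the center with little energy
circulating = evolve(0.0, 2.5)  # enough energy to swing over the top

print(max(abs(t) for t in librating) < math.pi,  # stays trapped
      circulating[-1] > 10)                      # winds around repeatedly
```

The separatrix between the two behaviors is where Chirikov's overlap of neighboring resonances stirs up the chaos mentioned above.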
I strongly suggest you start with general textbooks that can give you a broad introduction to the field: the most classic and famous is "Principles of Neural Science" by Eric Kandel. If you want to get a little more quantitative, take a look at "Theoretical Neuroscience" by Peter Dayan and Larry Abbott.
Finally, Wikipedia is a surprisingly good source of information. You can also take a look at Scholarpedia, another wiki, but one written by the major experts in the field.
Quick and obvious links: http://en.wikipedia.org/wiki/N400_(neuroscience) http://www.scholarpedia.org/article/N400 http://en.wikipedia.org/wiki/P300_(neuroscience)
Do you understand anything about EEG recording and analysis? If not, then it would be useful to read up about that in general before looking at specific ERP components.
It is possible to cut off an octopus' arms and keep them (the arms) alive in a testing chamber for hours. Researchers have found that severed arms are able to recapitulate a variety of stereotyped arm movements, either in response to an electrically driven 'input' or simply due to a tactile or chemical stimulation of the skin (source), suggesting that many arm responses can be elicited without input from the brain (that is, are 'hardwired' within the arm itself). It's conceivable that these sort of responses are important for a number of behaviors, such as searching for and subsequently capturing prey as well as avoiding dangerous/painful stimuli.
Fun fact: Octopuses can taste with their arms
I'm gonna agree that masturbation can be normal and healthy.
Also, this is not the first time I've seen a post along these lines, so my take is that it's normal during "recovery" to have some weird feelings after masturbating.
One thing to consider is that your brain might be used to O's involving a much bigger spike of dopamine, etc., due to the artificial stimulation of porn. When your brain doesn't get the amount of reward it's "expecting," it can lead to a general negative/depressed feeling. With more and more time porn-free, you may get less of these shitty feelings after MO, as your brain recalibrates.
To get geeky, check out this summary of the neuroscience behind what I'm talking about.
You are talking about memory and perception. These things are not answered by neuroscience. The science that revolves around perception and memory is called 'cognitive psychology'. You ascribe ignorance to me while you yourself clearly have not even a clue where to actually look.
That's not true, it's just not listed in the DSM.
http://www.scholarpedia.org/article/Psychopathy
http://www.psychologytoday.com/blog/mindmelding/201301/what-is-psychopath-0
Antisocial Personality Disorder is a broader, more all-encompassing category into which psychopaths and sociopaths would fall.
The DSM doesn't use the terms because there was doubt that they could effectively "summarize" the conditions in a book considered a general guide... not because the terms are politically incorrect like "mongoloid".
If Psychopathy is suspected (which IS very rare) Doctors turn to the Psychopathy Checklist.
You can use it to decouple polynomial equations. Coupled polynomials 'almost decouple' in a Gröbner basis. They don't literally decouple, but they behave like a triangular linear system: one equation in the basis has only one variable. Solve that. The next equation has only the variable you just solved plus one other; plug in each solution for the first variable and solve for the possible values of the second. Rinse and repeat. It makes polynomial systems much easier to solve.
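Here's a toy worked example of that triangular back-substitution. The basis below is for the system x^2 + y^2 = 4, x*y = 1 under lex order with x > y; I worked it out by hand (a CAS would produce it automatically), so treat it as illustrative:

```python
import math

# Back-substitution through a lex Groebner basis. For the system
#   x^2 + y^2 - 4 = 0,   x*y - 1 = 0
# a lex basis (x > y) is triangular:
#   g1 = y^4 - 4*y^2 + 1        (y only)
#   g2 = x + y^3 - 4*y          (linear in x once y is known)
# Solve g1 (a quadratic in y^2), then substitute into g2 for x.
roots = []
for s1 in (+1, -1):
    y2 = 2 + s1 * math.sqrt(3)          # roots of t^2 - 4t + 1 = 0
    for s2 in (+1, -1):
        y = s2 * math.sqrt(y2)
        x = 4 * y - y**3                # from g2: x = 4y - y^3
        roots.append((x, y))

# Every (x, y) pair should satisfy the original coupled system
for x, y in roots:
    assert abs(x**2 + y**2 - 4) < 1e-9
    assert abs(x * y - 1) < 1e-9
print(len(roots))  # 4 solutions
```

The point is exactly the 'rinse and repeat' above: the univariate g1 gives all possible y values, and each one drops into g2 to pin down x.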
There's this article on Neural net language models by Dr. Yoshua Bengio of the University of Montreal, one of the big names in Deep Learning.
Figure six at the bottom of the page: http://www.scholarpedia.org/article/Models_of_cortical_spreading_depression
I get visual aura headaches and they suck. I assume they're migraines but I'm not sure. Tunnel vision plus blurriness plus things like this practically everywhere in my vision by the time it's in full force, and each of those little rainbow bits is zigzagging constantly, like in figure 6. It basically means that, seemingly at random (or triggered by cigarette smoke, lack of sleep, lots of other things), I cannot see for an hour. The light is so bright it's blinding and painful, as if you were pressing on your eyes for an hour, and at the end of it all it fades away and excruciating pain comes into the left side of my head, practically crippling. I get sensitive to light; even a small bit of light coming through the threads of a wet washrag over my face makes me nauseated with pain. Absolutely no light can get in, and that's hard to manage considering the whole time I'm kicking my legs about and thrashing, because I can't seem to stay still while it's happening.
Before it's over I usually have to say "oh well" to the light sensitivity and attempt to dash off into the nearest area to vomit, and then when it's all over with I'm extremely fatigued and physically sore all over for a few days, and mentally scarred for a few months after an attack, because I'm completely afraid to leave the house and have one of these fits in public.
When I get them (not recently) I have one every couple of weeks, once I had one two days in a row, and then it'll go a month without one, and then I won't have them anymore. Not every time will I get visuals, but 90% of the time I do, and really, it's a crippling issue for me.
There's nothing wrong with citing articles with a long history and proper references. Nature conducted a study in 2005 comparing Britannica against Wikipedia. The Wikipedia articles compared had about 1/3 more factual errors than Britannica's. Both had about the same number (4) of serious errors: general misunderstandings of vital concepts.
So if you use Wikipedia to get a general overview, there's no problem; if you are using it to quote facts, you may be better off with other sources (such as Scholarpedia or specific literature).