I think that many of your problems arise from a flawed understanding of what probability is. Can I recommend this excellent book by E. T. Jaynes? It is written from the perspective of a Bayesian, and as someone working in quantum information I have disagreements with Jaynes on several points, but it is an excellent and life-changing book nevertheless.
>Pls to point out conclusive proof doesn't say that Kejriwal is dishonest.
Well, the burden of proof is on the one making the claim, which in this case is you. It seems like you have picked one interpretation of OP's statements, shown that that interpretation is fallacious, and therefore want to stick to it. I have just shown you that if you are a little more generous, his statements make sense.
Again, all of this discussion is kind of pointless if you agree that given all that we know about Kejriwal, it is fair to say that it is pretty likely that he is not a very honest person.
Technical literature, even when it includes exposition in some formalism such as natural deduction, will include some vernacular. This vernacular will employ some amount of domain-relevant, specialized terminology, often pejoratively called “jargon.” To be put off by “proposition X characterizes mathematical object Y” because you interpret “characterize” to imply some sort of moral judgment is, first of all, not a reasonable interpretation or reaction, and secondly means you haven’t read much formal mathematics. It’s also unreasonable to expect a formal text to recapitulate standard terminology in the field. Formal texts necessarily make demands of their readers. If Probability Theory: The Logic of Science recapitulated its applied mathematics prerequisites, it would be a graduate-level applied mathematics text and probably not get past its first two actual chapters, because it’s already expecting you to solve functional equations by then. TAPL is written in a similar style, making similar demands of its readers. But even for a complete newcomer to the field, complaining about Pierce’s “thoughts and narrative,” particularly based on such an absurd interpretation of very normal usage in the field, is bizarre.
I really love Probability Theory: The Logic of Science by Jaynes. While it is not a physics book, it was written by one. It is very well written, and is filled with common sense (which is a good thing). I really enjoy how probability theory is built up within it. It is also very interesting if you have read some of Jaynes' more famous works on applying maximum entropy to Statistical Mechanics.
I don't understand your argument, or the debate for that matter.
In Bayesian probability, qualitative information can often be translated into a (perhaps only vaguely informative) prior distribution for either a continuous or a discrete parameter. A good book on this topic is Probability Theory: The Logic of Science by E. T. Jaynes. Jaynes also notes that finite sets of discrete propositions are all that is ever needed in practice, but that thinking in terms of continuously variable parameters is often a natural and convenient approximation to a real problem, because analytical methods become available.
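As a toy illustration of what "vaguely informative" can mean, here is a small Python sketch (my own, not from Jaynes; the Beta(2, 2) choice and the data are placeholders) encoding "the rate is probably moderate rather than extreme" as a weak prior and updating it:

```python
# A minimal sketch: encode "the rate is probably moderate rather than extreme"
# as a weakly informative Beta(2, 2) prior, then update with hypothetical data.
from scipy import stats

prior = stats.beta(2, 2)                 # mild preference for values near 0.5
successes, failures = 7, 3               # placeholder binomial data
posterior = stats.beta(2 + successes, 2 + failures)

print(f"prior mean     = {prior.mean():.3f}")
print(f"posterior mean = {posterior.mean():.3f}")
print(f"95% interval   = {posterior.interval(0.95)}")
```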
This one? Damn, it's £40-ish. Any highlights or is it just a case of this book is the highlight?
It's on my wishlist anyway. Thanks.
There's a great book on the foundations of Bayesian inference called "Probability Theory: The Logic of Science", by E.T. Jaynes. I think it's a good reference for those interested.
I'll share one of my favorite books on this: Probability Theory: The Logic of Science https://www.amazon.com/dp/0521592712/ref=cm_sw_r_apan_glt_fabc_38Y81TY0APTN8V61WJZ7
I’m convinced it’s the most important subject there is. Period.
What I mean by that is that I don’t believe there’s a sound epistemology that doesn’t ultimately reduce to it. Epistemological theories that don’t are magical or wishful thinking at best, delusional and dangerous at worst.
Important texts in the field include:
The latter implements a simple Lisp dialect for exploring Chaitin’s approach. I also suggest playing with John Tromp’s Combinatory Logic and Binary Lambda Calculus Playground to get a hands-on feel for the subject.
I don’t know if this is for the lay person, but I’ve found it amazing. Maybe you wanna have a look at one of the illegal pdfs out there as it is not cheap... https://www.amazon.com/Probability-Theory-Science-T-Jaynes/dp/0521592712
> There are some philosophical reasons and some practical reasons that being a "pure" Bayesian isn't really a thing as much as it used to be. But to get there, you first have to understand what a "pure" Bayesian is: you develop reasonable prior information based on your current state of knowledge about a parameter / research question. You codify that in terms of probability, and then you proceed with your analysis based on the data. When you look at the posterior distributions (or posterior predictive distribution), it should then correctly correspond to the rational "new" state of information about a problem because you've coded your prior information and the data, right?
Sounds good. I'm with you here.
> However, suppose you define a "prior" whereby a parameter must be greater than zero, but it turns out that your state of knowledge is wrong?
Isn't that prior then just an error like any other, like assuming that 2 + 2 = 5 and making calculations based on that?
> What if you cannot codify your state of knowledge as a prior?
Do you mean a state of knowledge that is impossible to encode as a prior, or one that we just don't know how to encode?
> What if your state of knowledge is correctly codified but makes up an "improper" prior distribution so that your posterior isn't defined?
Good question. Is it settled how one should construct strictly correct priors? Do we know whether the correct procedure ever leads to improper distributions? Personally, I'm not sure I know how to create priors for any problem other than the one where the prior is spread evenly over a finite set of indistinguishable hypotheses.
Trying different priors to see whether the choice makes much of a difference seems like a legitimate approximation technique that needn't shake any philosophical underpinnings. As far as I can see, it's akin to plugging different values of an unknown parameter into a formula to see whether one actually needs to pin that parameter down, or whether the formula produces roughly the same result anyway.
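The sensitivity check I have in mind looks roughly like this in Python (the binomial data and the particular priors are made up for illustration):

```python
# Fix the data, swap the prior, and see whether the posterior summary moves much.
from scipy import stats

successes, failures = 40, 60             # placeholder binomial data

priors = {
    "flat Beta(1,1)":        (1, 1),
    "mild Beta(2,2)":        (2, 2),
    "sceptical Beta(10,10)": (10, 10),
}

for name, (a, b) in priors.items():
    post = stats.beta(a + successes, b + failures)
    lo, hi = post.interval(0.95)
    print(f"{name:22s} mean {post.mean():.3f}  95% interval ({lo:.3f}, {hi:.3f})")
```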
> read this book. I promise it will only try to brainwash you a LITTLE.
I read it and I loved it so much for its uncompromising attitude. Jaynes made me a militant radical. ;-)
I have an uncomfortable feeling that Gelman sometimes strays from the straight and narrow. Nevertheless, I looked forward to reading the page about Prior Choice Recommendations that he links to in one of the posts you mention. In it, though, I find the puzzling "Some principles we don't like: invariance, Jeffreys, entropy". Do you know why they write that?
> Look, I see what you are getting at and I like it, but the fact remains that if it would violate the laws of physics for you to be capable of predicting what would happen in a real experiment on a measurement with real equipment, then that's not any kind of determinism that most people would find consistent with their understanding of that word.
That would be fine if confusing ontology ("real randomness") with epistemology ("we can't know the initial conditions of this complex, nonlinear, dynamical process") had no impact on reasoning in physics. But it does matter. Don't take my word for it; read physicist E. T. Jaynes' Probability Theory: The Logic of Science, or at least physicist Harold Jeffreys' Scientific Inference. Jaynes quite rightly calls this confusion the mind projection fallacy, and his book offers many, many examples of the erroneous calculations it produces. Again, highly recommended.
> Honestly, both of our arguments have become circular. This is because, as I have stressed, there is not enough data for it to be otherwise. Science is similar to law in that the burden of proof lies with the accuser. In this case there is no proof, only conjecture.
((Just in case it is relevant: which two arguments do you mean, exactly? The circularity isn't obvious to me.))
In my opinion you can argue convincingly about future events where you are missing important data and where no definitive proof was given (like in the AI example) and I want to try to convince you :)
I want to base my argument on subjective probabilities. Here is a nice book about it. It is the only book of advanced math that I worked through ^^ (pdf).
My argument consists of multiple examples. I don't know where we will disagree, so I will start with a more agreeable one.
Let's say there is a coin and you know that it may be biased. You have to guess the (subjective) probability that the first toss is heads. You are missing very important data: the direction in which the coin is biased, how much it is biased, the material, and so on. But you can argue the following way: "I have some hypotheses about how the coin behaves, the resulting probabilities, and how plausible these hypotheses are. But each hypothesis that claims a bias in favour of heads is matched by an equally plausible hypothesis that points in the tails direction. Therefore the subjective probability that the first toss is heads is 50%."
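Here is a quick numerical check of that symmetry argument (the particular hypotheses and weights below are my own invention; any plausibility assignment symmetric around 0.5 gives the same answer):

```python
# If every hypothesis "biased towards heads by x" is matched by an equally
# plausible hypothesis "biased towards tails by x", the marginal probability
# of heads on the first toss comes out to 0.5 regardless of the details.
import numpy as np

biases = np.linspace(0.01, 0.99, 99)                  # hypotheses about P(heads)
weights = np.exp(-((biases - 0.5) ** 2) / 0.02)       # symmetric plausibilities
weights /= weights.sum()

p_heads = np.sum(weights * biases)                    # marginal P(first toss is heads)
print(round(float(p_heads), 6))                       # -> 0.5
```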
What exactly does "the subjective probability is 50%" mean? It means that if I have to bet money where heads wins 50 cents and tails wins 50 cents, I could not prefer either side. (I'm using small monetary values in all examples so that human biases like risk aversion and diminishing returns can be ignored.)
If someone (who doesn't know more than me) claims the probability is 70% in favour of heads, then I will bet against him: we would agree on any odds between 50:50 and 70:30. Let's say we agree on 60:40, which means I get 60 cents from him if the coin shows tails and he gets 40 cents from me if the coin shows heads. Each of us agrees to it because each one believes he has a positive expected value.
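Just to spell out the arithmetic behind "each side expects to win" at 60:40 (toy numbers from the bet above):

```python
# I believe P(heads) = 0.5, he believes P(heads) = 0.7.
# I receive 60 cents on tails; he receives 40 cents on heads.
my_p_heads, his_p_heads = 0.5, 0.7

my_expected_gain  = (1 - my_p_heads) * 0.60 - my_p_heads * 0.40        # +0.10
his_expected_gain = his_p_heads * 0.40 - (1 - his_p_heads) * 0.60      # +0.10

print(my_expected_gain, his_expected_gain)   # both positive under each person's own belief
```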
This is more or less what happened when I bet against Brexit with my roommate a few days ago. I regularly bet with my friends; it is second nature for me. Why do I do it? I want to get better at quantifying how strongly I believe something. In the next examples I want to show you how I can use these quantifications.
What happens when I really don't know something? Let's say I have to guess my subjective probability that the Riemann hypothesis is true. So I read the Wikipedia article for the first time and didn't understand the details ^^. All I can use is my gut feeling. There seem to be somewhat more arguments in favour of it being true, so I set my probability to 70%. I thought about using a higher value, but some of those arguments might be biased towards what some mathematicians want to be true (instead of what is true).
So would I bet against someone who has odds that are different from mine (70:30) and doesn't know much more about that topic? Of course!
Now let's say, in a hypothetical scenario, an alien, a god, or anyone I would take seriously and have no power over appears in front of me, randomly chooses a mathematical conjecture (here: the Riemann hypothesis), and makes the following threat: "Tomorrow you will take a fair coin from your wallet and throw it. If the coin lands heads, you will be killed. But as an alternative scenario you may plant a tree. If you do this, your death will not be decided by a coin; instead, you will not be killed if and only if the Riemann hypothesis is true."
Or in other words: If the subjective probability that the Riemann hypothesis is true is >50% then I will prefer to plant a tree; otherwise, I will not.
This example shows that you can compare probabilities that are more or less objective (e.g. from a coin) with subjective probabilities and that you should even act on that result.
The comforting thing about subjective probabilities is that you can use all the known rules from "normal" probabilities. This means that sometimes you can really try to calculate them from assumptions that are much more basic than a gut feeling. When I wrote this post I asked myself what the probability is that the Riemann hypothesis will be proven or disproven within the next 10 years. (I just wanted to show you this because the result was so simple, which made me happy, but you can skip it.)
And this result is useful for me. Would I bet at those odds? Of course! Would I plant a tree in a similar alien example? No, I wouldn't, because the probability is <50%. Again, it is possible to use subjective probabilities to find out what to do.
And here is the best part about using subjective probabilities. You said "Science is similar to law in that the burden of proof lies with the accuser. In this case there is no proof, only conjecture." But this rule is no longer needed. You can conclude that the probability is too low to be relevant for whatever argument you are making and move on. The classic example of Bertrand Russell's teapot can be resolved that way.
Another example: you can estimate which types of supernatural gods are more or less probable. One just needs to collect all the pro and contra arguments and translate them into likelihood ratios. I want to give you an example with one type of Christian god hypothesis vs. pure scientific reasoning:
In the end you just multiply the ratios of all the arguments, and then you know which of these two hypotheses to prefer. The derived mathematical formula is a bit more complicated, because it takes into account that the arguments might depend on each other and that there is an additional factor (the prior) which indicates how much you privilege either of these two hypotheses over all the other hypotheses (e.g. because it is the simplest one).
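Ignoring the dependence-between-arguments complication, the basic multiplication looks like this (the prior odds and the individual ratios below are placeholders, not a claim about any actual argument):

```python
# Odds-form Bayes: multiply prior odds by one likelihood ratio per argument.
prior_odds = 1.0                        # no initial preference between hypotheses A and B
likelihood_ratios = [3.0, 0.5, 0.2]     # one ratio per argument, >1 favours A

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr                # 1.0 * 3.0 * 0.5 * 0.2 = 0.3

posterior_prob_A = posterior_odds / (1 + posterior_odds)
print(posterior_odds, round(posterior_prob_A, 3))     # 0.3, 0.231 -> B is favoured
```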
I wanted to show you that you can construct useful arguments using subjective probabilities, come to a conclusion and then act on the result. It is not necessary to have a definitive proof (or to argue about which side has the burden of proof).
I can imagine two ways in which my argument could be flawed.
Jaynes' Probability Theory is fantastic.
Well, my argument isn't about TLW specifically, but about the tendency of certain types to dog-pile when it's pointed out that their little talking point ain't all it's cracked up to be.
Re TLW, providing references to support what I'm saying is difficult because the problems are so glaring that they go without saying. It's like asking for a reference for the scientific method. For those who'd like a course in probability, this is a good place to start...
http://www.amazon.com/Probability-Theory-The-Logic-Science/dp/0521592712
Here are some links that address TLW in detail. It's an apologist site, but this is a quality response. I would also draw your attention to the comments: Chris Johnson actually chimes in, and it's pretty apparent he doesn't even know how much he doesn't know. For what it's worth.
http://www.mormoninterpreter.com/a-bayesian-cease-fire-in-the-late-war-on-the-book-of-mormon/
http://www.mormoninterpreter.com/the-late-war-against-the-book-of-mormon/
Yes. Martial arts of the mind = Bayesian black belt = human rationality = LessWrong, and Probability Theory.
Also, there are resources for everything you mentioned. And yes there are ways of improving all of them.
Happiness: Authentic Happiness
Memory: Moonwalking with Einstein
Math Skills: Secrets of Mental Math
Knowledge: Well, knowledge is obtained through learning and experience. This one would take too long to go into off the top of my head.
Social EQ?: Emotional Intelligence 2.0
Some other extremely useful books: Brain Rules, Thinking in Systems, The Power of Full Engagement, Flow: The Psychology of Optimal Experience... I could go on forever on books alone. I haven't even mentioned actual tools or personal metrics.
Since it sounds like you're very self-motivated, I'm going to go with: community college for courses you don't really care about but that will transfer to a four-year university in case you eventually decide to go there, and stick to a course schedule that will allow you to keep working.
Working makes you money; money can buy you books on Amazon; books and the computer in front of you will let you learn anything and everything you will ever want to know. Your advanced math studies are the name of the game, because in the Internet era, everyone has stacks and stacks and stacks of raw data and very few people have any clue what to do with it. You're eventually going to want to be able to understand Probability Theory: The Logic of Science and The Minimum Description Length Principle, but in both cases your applied math is going to need to be at the advanced undergraduate or graduate level. You can do that without ever once setting foot in a classroom, but you'll have to be willing to spend hundreds of dollars on books and a lot of time doing the homework yourself. Here are some recommendations:
A word of warning about probability theory: most probability theory texts and teaching are garbage. Stick to Jaynes and the references he cites. But you have a pretty long road ahead of you before you have to worry about that. :-)
> Beliefs are not scientific.