I'm not sure how rational I am, but after reading Superforecasting I have been incorporating some of the practices it covers into my habits of mind. For example, in chapter three the authors cover the importance of being very precise about probabilities when communicating:
>In 1961, when the CIA was planning to topple the Castro government by landing a small army of Cuban expatriates at the Bay of Pigs, President John F. Kennedy turned to the military for an unbiased assessment. The Joint Chiefs of Staff concluded that the plan had a “fair chance” of success. The man who wrote the words “fair chance” later said he had in mind odds of 3 to 1 against success. But Kennedy was never told precisely what “fair chance” meant and, not unreasonably, he took it to be a much more positive assessment. Of course we can’t be sure that if the Chiefs had said “We feel it’s 3 to 1 the invasion will fail” that Kennedy would have called it off, but it surely would have made him think harder about authorizing what turned out to be an unmitigated disaster.
There's a lot of good stuff in the book about trying to make informed decisions with limited information. I think it could be part of what you're looking for.
>The superforecasters are a numerate bunch: many know about Bayes’ theorem and could deploy it if they felt it was worth the trouble. But they rarely crunch the numbers so explicitly. What matters far more to the superforecasters than Bayes’ theorem is Bayes’ core insight of gradually getting closer to the truth by constantly updating in proportion to the weight of the evidence. That’s true of Tim Minto [top superforecaster]. He knows Bayes’ theorem but he didn’t use it even once to make his hundreds of updated forecasts. And yet Minto appreciates the Bayesian spirit. “I think it is likely that I have a better intuitive grasp of Bayes’ theorem than most people,” he said, “even though if you asked me to write it down from memory I’d probably fail.” Minto is a Bayesian who does not use Bayes’ theorem. That paradoxical description applies to most superforecasters.
Excerpt From: Philip E. Tetlock. “Superforecasting.”
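For anyone curious what "updating in proportion to the weight of the evidence" looks like when you do crunch the numbers, here's a minimal sketch of a single Bayesian update. The probabilities are made up for illustration; they aren't from the book.

```python
# Minimal sketch of one Bayesian update (illustrative numbers, not from the book).
# Prior: you think there is a 30% chance the event happens.
# Evidence: a signal three times as likely if the event is real as if it isn't.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability after observing one piece of evidence."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

prior = 0.30
posterior = bayes_update(prior, likelihood_if_true=0.6, likelihood_if_false=0.2)
print(f"prior {prior:.0%} -> posterior {posterior:.0%}")  # 30% -> ~56%
```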
Duuuuuuude have you read Superforecasting? This skill is literally an important part of the modern rationality movement.
Should we make a subreddit for this?
I wouldn't call it fluff. If you read it all, it's pretty informative. He looked at Suzuki's comparables among players who produced at close to his pace. He separated his analysis into several groups to draw different comparisons:
Players who scored 1.5 points per game (PPG) by their third year in the CHL versus those who didn't. Suzuki belongs in the first group, and that's quite a star-studded lineup.
The five players who scored at a similar pace to Suzuki in the CHL and played four years there. The list includes a few second-line players (e.g., Danault and Kadri), but all of them played at least some time in the NHL.
The players who scored at a similar pace to Suzuki but played only three years in the CHL (e.g., Draisaitl, Couturier, Meier). That list is on a different level, which makes sense since all of them were rushed to the NHL, but all of them tracked Suzuki eerily closely during those three years.
The players who played either three or four years in the CHL while tracking Suzuki closely (a mix of the previous players, plus Huberdeau). It's a very solid list: if you ignore Etem, Scott Laughton is arguably the second-worst player on that list of seven.
Obviously this gives no definitive prediction. There is a lot of uncertainty around prospects, so with the rare exception of top-end talent, it's hard to predict whether a player is NHL-ready. All of us were wrong about Kotkaniemi's NHL-readiness last year, for example. That being said, it gives us a better-informed idea of his odds of making the NHL.
If you look at the list of four-year CHL players, it looks like Suzuki will take some time to settle and would benefit from a year or two in the AHL. On the other hand, his first three years and his OHL playoff performance suggest a different kind of player. In either case, there's a solid chance we have a second-line-or-better player. Suzuki's closest comparable is actually Kadri, which isn't bad at all. While it's possible he'll flop (e.g., Hodgson and Etem), he's more likely than not going to play in the NHL. He isn't overhyped.
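This isn't from the article, but if you want a feel for what the comparables method boils down to, here's a rough sketch with invented players and numbers (every name, field, and threshold below is hypothetical):

```python
# Rough sketch of a comparables-cohort filter, with invented data. A real analysis
# would pull full CHL scoring histories; here each record only carries points per
# game by CHL season, CHL career length, and whether the player stuck in the NHL.
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    chl_ppg_by_year: list     # points per game in each CHL season
    chl_years: int
    became_nhl_regular: bool

players = [
    Prospect("Player A", [0.8, 1.2, 1.6], 3, True),
    Prospect("Player B", [0.7, 1.0, 1.3, 1.5], 4, True),
    Prospect("Player C", [0.6, 0.9, 1.1, 1.2], 4, False),
]

def comparables(pool, min_third_year_ppg=1.5, years=None):
    """Players who hit a scoring threshold by their third CHL season,
    optionally restricted to a given CHL career length."""
    out = []
    for p in pool:
        if len(p.chl_ppg_by_year) >= 3 and p.chl_ppg_by_year[2] >= min_third_year_ppg:
            if years is None or p.chl_years == years:
                out.append(p)
    return out

cohort = comparables(players, min_third_year_ppg=1.5, years=3)
hit_rate = sum(p.became_nhl_regular for p in cohort) / max(len(cohort), 1)
print([p.name for p in cohort], f"NHL hit rate: {hit_rate:.0%}")
```

The interesting output is the base rate you get out of each cohort, which is the kind of evidence the article leans on to argue Suzuki isn't overhyped.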
People tend to look down on uncertain conclusions (Truman famously demanded a one-handed economist), but studies show that analysts who make these hedged predictions are by far the most accurate. Good forecasting requires being aware of the uncertainty, and acknowledging it is a signal of competence. I'm more wary of those who use sports analytics to arrive at very confident conclusions.
Superforecasting: The Art and Science of Prediction https://www.amazon.com/dp/0804136718/ref=cm_sw_r_cp_api_glt_i_YKTFFWPAM2S2X416K60V
Open and free-form associations of unrestricted thinkers > institutionalized and regimented staff analysts, every single time.
Yes
Some people are quantifiably better predictors than others
Also see
And
Book on his forecasting research: https://www.amazon.com/Superforecasting-Science-Prediction-Philip-Tetlock/dp/0804136718
He discusses it in this podcast: https://conversationswithtyler.com/episodes/philip-e-tetlock/
I think Daniel Kahneman also discusses the same thing at some point (maybe?): https://conversationswithtyler.com/episodes/daniel-kahneman/
Not sure if there's an academic paper on it. The logic is simple: experienced analysts have everything to lose by entering the tournament, so they refuse to play. Like becoming the world champion and then refusing all challengers.
>The LessWrong-o-sphere (which tends to lean alt-right, see Neoreaction: A Basilisk)
On the internet, it is common for one community to form a caricature of another based on cherry-picked anecdotes that get passed around as representative. This YouTube video does a decent job of explaining how this happens. Here is a survey of the politics of LW users: a strong majority of those polled favored more immigration, and out of 1,527 respondents, only 5 (0.3%) self-identified as "fascist" (compared with 131 "pragmatists", 217 "progressives", 237 "social democrats", 126 "socialists", etc.).
>Human societies are incredibly complex and getting nice, neat, numerical results to plug into your QALY table is functionally impossible (though that doesn't stop some people trying, c.f. the absolutely dire attempts to put numerical probabilities on societal collapse/human extinction)
If It’s Worth Doing, It’s Worth Doing With Made-Up Statistics. Napkin math doesn't hurt as long as you remember it is napkin math, and it can often outperform intuition: see "The Robust Beauty of Improper Linear Models in Decision Making", for instance, or the discussion in Superforecasting of how effective forecasters do better, in part, by trying to quantify things that might seem unquantifiable.
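To make the improper-linear-models point concrete, here's a small simulation in that spirit (simulated data, not Dawes' own; his argument is also partly about out-of-sample robustness, so this only gives the flavour):

```python
# A unit-weighted ("improper") linear model versus a fitted regression, on
# simulated data. The point: just standardizing the predictors and adding them
# up already tracks the outcome nearly as well as the fitted weights do.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 4
X = rng.normal(size=(n, k))                              # four predictors
true_weights = np.array([0.4, 0.3, 0.2, 0.1])
y = X @ true_weights + rng.normal(scale=1.0, size=n)     # noisy outcome

# "Proper" model: least-squares fitted weights.
fitted_w, *_ = np.linalg.lstsq(X, y, rcond=None)
proper_pred = X @ fitted_w

# "Improper" model: standardize each predictor, then sum with unit weights.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
improper_pred = Z.sum(axis=1)

corr = lambda a, b: np.corrcoef(a, b)[0, 1]
print(f"fitted model      r = {corr(proper_pred, y):.2f}")
print(f"unit-weight model r = {corr(improper_pred, y):.2f}")
```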
>The vast majority of people within EA are rich, privileged members of elite institutions funded by billionaires and thus have no incentive to try to make any changes other than reformist technical fixes.
Nor do they have incentives to donate a large fraction of their incomes, yet they do so anyway.
This argument also proves too much. Survey data shows that those in favor of big reforms, in the US at least, actually tend to be wealthier than the population at large:
>Progressive Activists [8% of the American population] have strong ideological views, high levels of engagement with political issues, and the highest levels of education and socioeconomic status. Their own circumstances are secure. They feel safer than any group, which perhaps frees them to devote more attention to larger issues of social justice in their society... Their emphasis on unjust power structures leads them to be very pessimistic about fairness in America.
>many ideas simply don't occur to them because their assumptions have never been meaningfully challenged
In fact, your critique appears to be a minor variant on one of the most common critiques of EA, "EA neglects systemic change". This critique is so common that 80K created a page on its website specifically to refute it.
I hope you appreciate that I mean you no disrespect and that you haven’t interpreted my remarks as adversarial.
> As a career software engineer, I consider myself qualified enough to make a careful appraisal of their claims. At the very least, I am more academically qualified in the subject matter than they are, qualified enough to read their claims and see that those claims are not based on any actual objective evidence, but only on subjective opinions of two people who have no formal background in the field.
Perhaps, but how can you make a "careful appraisal" without even reading the book that lays out the objective evidence in detail? Are you familiar with the overwhelming empirical evidence for specification gaming in Reinforcement Learning? The proofs of instrumental convergence and utility preservation under reasonable formalizations? Did you consider that Bostrom et al. might not be totally insane, but might instead have access to additional information that you don’t presently have, and that that’s why I recommended the book?
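To spell out the basic dynamic I have in mind (a toy illustration of my own, not an example from the book and not an actual RL environment): when you optimize hard against a proxy for what you actually want, the optimization pressure flows straight into whatever gap exists between the proxy and the true objective.

```python
# Toy illustration of optimizing a misspecified objective. The true goal is
# `true_value`, but the optimizer only sees a proxy that mostly agrees with it.
# Picking the candidate that maximizes the proxy systematically exploits the
# disagreement between the two.
import numpy as np

rng = np.random.default_rng(42)
n_candidates = 100_000

true_value = rng.normal(size=n_candidates)             # what we actually care about
proxy = true_value + rng.normal(size=n_candidates)     # what the optimizer measures

chosen = np.argmax(proxy)
print(f"proxy score of the chosen candidate: {proxy[chosen]:.2f}")
print(f"true value of the chosen candidate:  {true_value[chosen]:.2f}")
print(f"best true value actually available:  {true_value.max():.2f}")
# The chosen candidate looks spectacular on the proxy but is reliably much worse
# on the true objective than its proxy score suggests, and adding more candidates
# (more optimization pressure) widens that gap rather than closing it.
```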
> That which can be asserted without evidence can be dismissed without evidence. I do not need to disprove something which does not have more than mere speculation going for it. That is a logical fallacy which attempts to shift the burden of proof onto me, to disprove a universal negative (that such an event is not going to happen). I have no obligation whatsoever to disprove a universal negative.
You misunderstand - I asked you to reproduce their arguments, to make their case as best you can, in order to verify that we’re talking about the same claims here.
> Okay, good. But please don't put words in my mouth. What I said was, "I have yet to hear about an actual AI researcher with serious concerns about this outcome, and a number of prominent, actual software developers have come forward to state that they don't believe it will happen, at least not in our lifetimes."
The implications of what you wrote were clear - it wasn’t an uncertain "I don’t know", it was "serious AI people, by and large, aren’t concerned". If you’d reviewed the surveys of the subject matter, you’d see that scientific consensus in the field is rapidly shifting towards this being a serious problem to tackle now. Also, it’s entirely possible to be very concerned and also believe it’s far off - in fact, those working on AI safety generally have longer timelines than their colleagues.
Speaking of which, have you read Grace et al.’s survey of AI professionals’ timelines? Have you considered the research literature on long-term forecasting, how hard it is, and how likely we are to be right in this kind of situation? Have you considered the arguments made in There’s No Fire Alarm for Artificial General Intelligence?
My purpose in providing these resources is twofold: one, to point out aspects you may not have considered, and two, to gently encourage you to realize how complex this is and that you’re probably overconfident about how likely you are to be correct. I’m afraid of being overconfident about this myself, always looking for facets of the problem I may have overlooked - and thinking about this is literally my research! The stakes are high and merit careful consideration.
> So the author of Superintelligence himself has stated unequivocally that he is not arguing that we are on the cusp of a breakthrough or that we can predict such a breakthrough with any level of precision.
> And yet you are telling me to read this book to give me a background on the topic, under the pretense that it should somehow justify the claims of Musk and Hawking that we are on the cusp of such, or that we will see it within our lifetimes (a prediction of notable precision)? Except that by the author's own words, it does precisely the opposite -- it states that the book is not a justification for such.
Believe it or not, you don’t have to have a point-mass belief, where you’re totally sure that a thing will happen by time X or it won’t. My distribution is bimodal with a long tail: it’s probably either 0-1 fundamental innovations away (10-25 years), several away (30-45 years), or perhaps I misunderstand the nature of the task and it’s really far off.
So if Musk, Hawking, or I believe that there’s even a 20% chance we’re all dead within two decades, then yes, we can take a position like this. (FWIW, I think it’s more like 12% within two decades.)
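If it helps, here's what "bimodal with a long tail" looks like when you actually write it down. The mixture weights and ranges below are invented for illustration; they're not meant to reproduce my 12% figure or any survey's numbers.

```python
# Monte Carlo sketch of a bimodal-with-a-long-tail timeline distribution.
# All weights and ranges here are made up purely to show the shape.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

component = rng.choice(3, size=n, p=[0.3, 0.5, 0.2])
years = np.where(
    component == 0, rng.uniform(10, 25, size=n),       # 0-1 fundamental innovations away
    np.where(
        component == 1, rng.uniform(30, 45, size=n),   # several innovations away
        30 + rng.exponential(scale=60, size=n),        # long tail: misjudged the task
    ),
)

print(f"P(within 20 years) ~= {np.mean(years <= 20):.0%}")   # ~20% with these made-up numbers
print(f"P(within 50 years) ~= {np.mean(years <= 50):.0%}")   # ~86%
```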
Also, extremely short timelines are not relevant to the main claims they make, and medium timelines are supported by the considerations in Superintelligence.
> My position is that I don't place any probability distribution on such, and that you cannot reliably do so either. And based on the writing of the author of Superintelligence, he also cannot.
That’s not how probability distributions work. You can’t "refuse to have beliefs" about something you obviously have beliefs about. Here, it looks like you implicitly believe it’s far off, with a short left tail (low chance any of these alarmists are right) and a long right tail (it could take a long time), with some mass on "never". I think you’re too confident in your 100-year predictions about a technology for which we don’t know what is required.
> That's not what my post was about. My post was objecting to the unduly alarmist predictions made by two people who have an inadequate background to make such predictions.
They are not unduly alarmist, as I’ve argued.
> In any case, I do believe that strong AI is probably really far away, based on the lack of evidence that anything remotely resembling it exists today. Hell, I myself am actively waiting for a weak AI to perform at a human-equivalent level in Starcraft 2. I was pleased that a weak AI was able to perform at such a level for the board game Go (which I am also a competent player of), but that achievement, while admittedly a surprisingly rapid development, is not nearly enough to convince me that somehow a strong AI is on the horizon. It's step 8 out of step 382.
> I think it is acceptable to prepare for such an event, but I do not think it is acceptable to be radically alarmist about it as if such a thing were imminent, without clear evidence that it actually is -- and I definitely think it is unacceptable to be alarmist about such a topic from a position of academic ignorance on the subject matter.
I think the problem here is that you (1) think it’s far away, (2) think we’ll very reliably know far enough in advance to start preparing, (3) think that any problems are, with high probability, solvable within that warning time, and (4) think people will actually do something - even though (a) a culture has been set up where people who worry about this are derided and seen as low status, (b) many problems won’t arise until we reach AGI level, when it’s too late, and (c) mobilization is hard. Can you justify all four beliefs with very high confidence, or explain why one of them isn’t necessary for good outcomes?
> No. I cannot. What's your point? I never claimed I had such an ability. I only claimed to disbelieve that such an event is on the horizon without evidence that it is ... and I will continue to believe that such an event is not on the horizon without first seeing evidence that it is, on the basis of Hitchens' razor, mentioned/quoted above.
So the treacherous turn undermines many of the arguments you made in later comments about how we’d stop it.
The machine learning model. Remember that in Cognicism, the actual authors' claims are not shown to users. The ML model outputs an aggregate view of a collective of people trying to find a common truth together. The idea of "centralized arbiters of truth" really doesn't hold much water in my mind.
There are many attempts at making a "scoring algorithm" for truth. We talk about most of them in the manifesto.
Truthcoin (now Hivemind) is basically just a cryptocurrency based on prediction markets. It's simpler, but I think it can be corrupted. Metaculus also focuses on prediction, and they concede that there are infinitely many possible scoring functions.
From my perspective, the key is the ML model and the FourThought API, which constrains how truths are evaluated. You don't just rate statements true or false; you rate them on a spectrum from 0% likely to be true to 100% likely.
The ML model uses the raw text of the thoughts as well as the collective score. It's always trying to predict for itself which accounts are making the predictions (or statements) that end up being logged to the chain.
The models seem to favor accounts that fit the basic constraints laid out in this book. They use Brier scores for evaluation, like PredictionBook does. In their case, they find that forecasters who make more nuanced predictions and update their scores more often are more accurate. The ML models are meant to learn similar patterns across accounts.
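For anyone unfamiliar with Brier scores, this is all they are: the mean squared error between a probabilistic rating and the 0/1 outcome, where lower is better. The forecasts below are made up (this is not the Cognicism implementation), just to show why nuanced, well-calibrated ratings can outscore confident but occasionally wrong ones.

```python
# Standard Brier score: mean squared error between probabilistic forecasts and
# 0/1 outcomes. Lower is better. Forecasts here are invented for illustration.
def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]                   # what actually happened

overconfident = [1.0, 0.0, 1.0, 0.0, 1.0]    # always 0% or 100%; wrong twice
nuanced       = [0.8, 0.3, 0.7, 0.6, 0.4]    # hedged estimates, leaning the right way

print(f"overconfident Brier: {brier(overconfident, outcomes):.3f}")  # 0.400
print(f"nuanced Brier:       {brier(nuanced, outcomes):.3f}")        # 0.108
```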
The models are constantly learning, becoming richer in knowledge over time and more resistant to corruption by trolls.
Early models, with little training time, are pretty dumb and susceptible.
>Lol that Good Judgment Project has nothing to do with what you said
The link might not (I didn't have time to fully read that particular one), but needing to balance multiple viewpoints absolutely is one of many results of the project (I've read about this before, I didn't just google something hoping it would validate my view).
I'd strongly recommend Superforecasting by Philip Tetlock if you're interested. Here's a better summary of the results - note no. 5 in particular.
An important related concept is the hedgehog vs. fox model. Neoliberals and radical centrists very distinctly believe it is better to be a fox than a hedgehog; that it's healthier to view the world as large and full of multitudes, as opposed to organised and predictable. I'd argue it's one of the fundamental differences between neolibs and libertarians.
I'm not really trying to convert you to neoliberalism or anything at this point, more so just exploring things - I think there are large and overlooked meta-level differences in Marxist vs Neoliberal thought that are actually crucial for understanding their object-level differences. It catches me off guard too.
For instance, when you wrote before that 'Marx solved where the profit came from', or that 'Das Kapital lays out the truth of material capitalist mode of production.', I realised that I was very unused to hearing words like 'truth' and 'solutions' applied to socio-economic theories, and as such I had totally forgotten that people actually think of theories as being true or false (I realise how ridiculous that sounds). But that's the depth of our meta-level divide: for us, models aren't statements of fact so much as tools for assessing statements of fact; models hardly sit on the spectrum of true vs false and instead inhabit various shades of usefulness.
If you want to better understand the (most defensible version of the) neoliberal/radcent mindset, I'd say this - we consider ourselves non-ideological, but not in the sense of people who falsely conflate the political centre with a lack of ideology. We're non-ideological in that we actively avoid commitment - we're constantly trying to reconsider and readjust our models and assumptions and to glean insight from as diverse sources as possible. We're deeply skeptical of claims of truth and certainty - even our least controversial models (e.g. supply and demand) have well-known and important weaknesses, and you'd probably be downvoted for saying otherwise. We try to have no 'big picture' but a messy collage of countless smaller pictures continually being retouched and pasted over with little regard for how they fit in.
Sorry for the rambling. TL;DR: I think it's unfair to accuse us of pure ideology, because we're honestly quite committed to not doing that.