God, I love this effort! Recently finished reading Superforecasting and it's cool to see you having this forecasting tournament and applying Brier scores.
I'm curious, though, why you're asking only about playrate, without any forecasts on winrate. I understand that for rarely-played cards the winrate will be highly variable (noisy), so you wouldn't want winrate as the sole metric - but forecasting winrate would be very interesting, and by weighting the winrate predictions by playrate (e.g. accurately forecasting the winrate of a 15%-playrate card counts for much more than forecasting the winrate of a 0.5%-playrate card) you could get a very meaningful and interesting statistic. Otherwise, you're kind of confounding "predict power level" with "predict meme factor/fun factor". E.g. if you predict a mono-Shurima type deck is going to suck and so forecast a low playrate, but it actually ends up with a high playrate because it's enjoyably memey, that won't answer the question of "who can better judge the expansion cards' power level?" A winrate prediction contest, on the other hand, could mark who is the better forecaster of power level.
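To make the weighting idea concrete, here's a minimal sketch of a playrate-weighted winrate score. All the numbers and the function name are invented for illustration; it's just weighted mean squared error, not anything the tournament actually uses.

```python
# Hypothetical scoring sketch: weight each winrate forecast's error by the
# card's actual playrate, so noisy fringe cards barely move the score.

def weighted_winrate_score(forecasts):
    """forecasts: list of (predicted_winrate, actual_winrate, actual_playrate).
    Returns the playrate-weighted mean squared error (lower is better)."""
    total_weight = sum(playrate for _, _, playrate in forecasts)
    weighted_error = sum(
        playrate * (predicted - actual) ** 2
        for predicted, actual, playrate in forecasts
    )
    return weighted_error / total_weight

# Made-up example: the 15%-playrate card's error counts ~30x more
# than the 0.5%-playrate card's error.
cards = [
    (0.55, 0.52, 0.15),   # popular card, small forecast error
    (0.60, 0.40, 0.005),  # rarely-played card, large (noisy) error
]
print(weighted_winrate_score(cards))
```

With this weighting, whiffing badly on a meme card that nobody plays costs a forecaster far less than a small miss on a staple.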
We're all trying to do the best we can in this crazy place called crypto.
As to your comment, that's why there is no DCF or dividend discount model or the like in my post. Scenario analysis is the start of something, and it gives context. It leans on assumptions and guesses, and it's weaker for it. But it gives better context than just rolling the dice or guessing without the rigor of breaking things down. Have you read Superforecasting? I highly recommend it.
Another framework that might be of some use would be to just look at comparable cryptos: the LTC/BTC relationship, estimates of TX/day, number of users, value, etc.
> Gotchya. Here it seems like you're conceptualizing understanding as a binary value: you either understand something or you don't. But you talk about it in less binary terms above: we can have different extents of understanding.
Makes me think back to Tetlock's Superforecasting and probability. I.e. if a person makes a lot of predictions, and the events they judge to be 75% likely turn out to happen about 75% of the time, then I think most people would judge them to be an accurate predictor, even though their predictions aren't flat "yes" or "no".
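That idea of checking stated probabilities against observed frequencies is called calibration, and it's simple enough to sketch. The data below is invented; the point is just the bucketing logic.

```python
# Calibration sketch: bucket predictions by the probability the forecaster
# stated, then compare to how often the event actually happened.

from collections import defaultdict

def calibration(predictions):
    """predictions: list of (stated_probability, event_happened)."""
    buckets = defaultdict(lambda: [0, 0])  # stated_prob -> [hits, total]
    for prob, happened in predictions:
        buckets[prob][0] += happened  # True counts as 1
        buckets[prob][1] += 1
    return {prob: hits / total for prob, (hits, total) in buckets.items()}

# A forecaster whose 75% calls come true ~75% of the time is well
# calibrated, even though no single call was a flat "yes" or "no".
preds = [(0.75, True)] * 3 + [(0.75, False)]
print(calibration(preds))  # {0.75: 0.75}
```

With more data you'd bucket nearby probabilities together, but the principle is the same: a good forecaster's 75% bucket should land near 75%.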
One-in-a-million events happen all the time, otherwise they would be zero-in-a-million events. The thing to keep in mind is that betting odds are everyone's best bet on what the truth is; to a first approximation, nobody can consistently beat a prediction market.
Why would that be the case? Well, imagine if this wasn't true. That is, imagine if there was someone (call her Alice) who could consistently beat prediction markets.
Alice would probably notice that she can make a big pile of money by betting on predictions. If she thought that candidate Bob had better chances than candidate Carl, she could take bets that gave Bob <50% chance, and make bank.
She could put a bunch of cash in a prediction market, and turn it into a fortune. Eventually, her fortune would grow to such a point that her bets would change the prices! Because if she buys every single bet that gives Bob <50% chance, it means that Bob's chances are now valued at 50% or more.
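The arithmetic behind Alice's edge is worth spelling out. The numbers below are invented; a prediction-market contract here pays $1 if the event happens and $0 otherwise.

```python
# Why mispriced odds are free money, and why exploiting them moves the price.
# All numbers are illustrative, not real market data.

true_prob = 0.60   # Alice's (correct) estimate that Bob wins
price = 0.45       # market price of a contract paying $1 if Bob wins

# Expected profit per $1 contract Alice buys:
ev = true_prob * 1.00 - price
print(round(ev, 2))  # 0.15 expected profit per contract

# Every contract she buys adds demand, pushing the price up toward her
# estimate; once price == true_prob, the edge (and the mispricing) is gone.
```

This is the mechanism by which her betting "corrects" the market: the profit opportunity and the price inaccuracy are the same thing, and consuming one erases the other.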
Now, such Alices do exist. Most of them work in finance, and they're wicked fucking smart. They'll figure out that one stock is undervalued by pennies, and make thousands or millions of dollars based on that finding. But in the process, they end up adjusting the stock prices. If the market has some inaccuracy, profiting from it will make it disappear.
But you probably don't have a PhD in statistics, and you're probably not a superforecaster. So when you look at the odds, you should see them as a kind of wisdom of the crowd on steroids; if a single person knew better, the price would adjust.
This is the Efficient Market Hypothesis.
Unless recidivism accounts for the difference between CHLs and the rest of Texans, the conclusion remains unchanged, and that study remains a case of lying with statistics. If you want to imply that recidivism invalidates the raw trends, then that's on you to prove. There's data on recidivism from nij.gov and bjs.gov.
But if that were true, we would have seen the early data (1996-2000 or so) showing CHLs convicted at an equal or greater rate compared to the Texas population. That didn't happen.
What I'm implying is that conclusions like the ones found in that study:
>Our results imply that expanding the settings in which concealed carry is permitted may increase the risk of specific types of crimes, some quite serious in those settings. These increased risks may be relatively small. Nonetheless, policymakers should consider these risks when contemplating reducing the scope of gun-free zones.
are flatly contradicted by the data when it's presented the standard way (crime rates between groups) rather than in a convoluted "let's compare proportions of crime within groups" way. They would be totally right if the CHL murder rate per 100k population were higher than the Texas rate, but it never has been in recorded history.
Many people like to paint all CHLs with a broad, monster-shaped brush, as though they are far more likely to cause harm than the average population. If age factored into that fear, then those people would be far more terrified of teenagers and the demographic groups that do actually account for disproportionately more violent crime relative to their population size. Yet that's not the case.
And I don't see how sociopaths factor in here; they exist in both population groups. Sociopaths have been found to make up 25-35% of prison populations [1], [2]. If more sociopaths than normal were drawn to becoming CHLs, that would be reflected in higher conviction rates for CHLs. So either they aren't actually drawn in, or the system is keeping them out, as intended.
Ultimately it comes down to this: Imagine you took two random Texans, and were told that one was a CHL. Right now many people assume that the CHL is going to be more likely to be violent than the non-CHL, and fear the CHL for it. Yet the evidence shows that by the end of the year, the CHL is 14x less likely to end up convicted of any crime, 14x less likely to end up convicted of a violent crime, and is overall less likely to end up convicted of a murder.
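The "crime rates between groups" comparison the comment argues for is just per-capita arithmetic. The counts below are placeholders chosen to illustrate a 14x gap, not the actual Texas figures.

```python
# Sketch of the rates-between-groups comparison. Conviction and population
# counts are hypothetical placeholders, not real Texas data.

def rate_per_100k(convictions, population):
    return convictions / population * 100_000

chl_rate = rate_per_100k(100, 1_000_000)        # hypothetical CHL group
texas_rate = rate_per_100k(28_000, 20_000_000)  # hypothetical general pop.

print(chl_rate, texas_rate, texas_rate / chl_rate)
```

Comparing rates per 100k like this answers "which random person is more likely to be convicted?" directly, whereas comparing proportions of crime *within* each group does not.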
I'm implying that widespread fear and bigotry towards CHLs is wrong. The available data does not support it.
If this doesn't make sense, I suggest you check out Superforecasting. It covers a predictions tournament on world events in which a group of civilians went head to head against CIA analysts armed with classified info. The civilians trounced them, and the book tells how they did it. Here's a review and excerpts. There's also a Freakonomics podcast episode on the book.
Superforecasting has been on my "get to soon" list since I got it last Christmas. It just got a nice nod in the latest CAS magazine.
Along the probability/math lines, other books I've enjoyed are:

* How Not to Be Wrong
* The Drunkard's Walk
Recent psychology research shows that some people can make accurate predictions about the future - though the research discussed concerns predictions on a <10-year timescale. Still a good place to start.
People dis Kurzweil, but on Less Wrong a bunch of volunteers went through a ton of his old predictions, and the result is that maybe 30% of them were accurate. Not super impressive, but a lot better than it could have been, given that the predictions were made 10 years in advance.
Your rules link doesn't work.
Read Superforecasting: The Art and Science of Prediction by Philip Tetlock if you want some helpful insight into making accurate forecasts: http://www.amazon.com/Superforecasting-Prediction-Philip-E-Tetlock/dp/0804136696/
> I have trouble seeing the need for greater political diversity in the sense of having more right-wing academics. I think that this is an attempt at a false balance. The right-wing (of U.S. political culture) seems to be genuinely and significantly anti-science and anti-intellectual (q.v. intelligent design and climate change denial), so we should expect them to be very underrepresented in scientific circles. But this isn't a problem – this is probably as it should be.
This was actually a common element in the response pieces published alongside that article on the lack of political diversity in the academy. There was discussion of cross-country differences in political attitudes, differences over time in what seemed to constitute the right wing, and other dimensions along which discrimination seemed to exist (e.g. religious). Lee Jussim has those responses, as well as the authors' response, posted on his website - they're not at the link that most often gets posted.
> Regarding the superior performance by laymen, it's possible that this could be the result of a wisdom of crowds phenomenon. Since the well-read layperson is exposed to the aggregated views of all of the experts via summary newspaper articles and the like, the layperson's opinion might be closer to the average of the entire field than the opinion of any one expert would be.
Try adding Tetlock's Superforecasting to your reading list. Certain individuals did better on some measures of evaluation, but groups of those better-performing individuals did better still.