Another important sin I'd add is that a lot of people believe that if you look at Level A and it's statistically significant and you look at Level B and it's statistically significant, then Level A - Level B is statistically significant. That's not true because the standard deviation of the difference is not the same as the difference in standard deviations. You should always interpret a difference by directly examining the difference, and never by looking at the two levels on their own.
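To make that concrete with made-up numbers (purely illustrative, not from any real data): for independent estimates the standard error of the difference is sqrt(se_a^2 + se_b^2), and two individually significant levels can easily have a non-significant difference.

    # Toy illustration: each level is significant on its own,
    # but the difference between them is not.
    from math import sqrt

    a, se_a = 10.0, 4.0   # estimate and standard error for level A
    b, se_b = 3.0, 1.4    # estimate and standard error for level B

    z_a = a / se_a        # 2.50 -> significant at the 5% level (|z| > 1.96)
    z_b = b / se_b        # ~2.14 -> significant at the 5% level

    diff = a - b                          # 7.0
    se_diff = sqrt(se_a**2 + se_b**2)     # ~4.24, NOT se_a - se_b
    z_diff = diff / se_diff               # ~1.65 -> not significant at the 5% level

    print(z_a, z_b, z_diff)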
There's a really good book called Statistics Done Wrong that covers a lot of other common issues, like failure to conduct a power analysis and running lots of tests without adjusting the p-values for multiple comparisons.
https://www.amazon.com/Statistics-Done-Wrong-Woefully-Complete/dp/1593276206
It's a good guide to pitfalls to avoid, and also to the times statisticians have been a bit overconfident as arbiters of empirical truth.
Sorry, but in all likelihood the stats underlying this are crap.
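As a quick taste of one of the issues the book covers (my own toy simulation, nothing from the book itself): run 20 tests on pure noise at the 5% level and you'll get at least one "significant" result about 1 - 0.95^20 ≈ 64% of the time.

    # Rough sketch of the multiple-comparisons problem: run 20 t-tests on
    # pure noise (no real effect anywhere) and count how often at least one
    # of them comes out "significant" at p < 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n_tests, n = 2000, 20, 30

    any_false_positive = 0
    for _ in range(n_experiments):
        pvals = [stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
                 for _ in range(n_tests)]
        if min(pvals) < 0.05:
            any_false_positive += 1

    print(any_false_positive / n_experiments)   # ~0.64, i.e. 1 - 0.95**20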
The first issue is that no significance tests, raw counts, or a link to a paper giving such things are provided in the article or on the organization's webpage. That is the bare minimum a columnist should provide, even if they assume most readers won't care about or understand such things.
However, the thing that gave me the most pause in the article is this line: "Sexual violence came up repeatedly in interviews of pregnant LGBTQ youths by researchers for Rainbow Health Initiative. Childhood sexual abuse was experienced by at least 5 of the 18 teens interviewed." This gave me pause not only because of the horrible nature of the content, but also because it seems to indicate that the pregnant teens across all three groups sum to just 18. It could also be read as saying that the total number of pregnant teens in the study is just 18. (I think they meant the former, but it's not completely clear.) Either way, it means very poor statistical power.
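For a rough sense of how little "5 of 18" can pin down (my own back-of-the-envelope calculation, not anything from the article): a 95% Clopper-Pearson interval for that proportion runs roughly from 10% to 53%.

    # How much can "5 of 18" really tell you? A 95% exact (Clopper-Pearson)
    # interval for that proportion is very wide.
    from scipy.stats import beta

    k, n, alpha = 5, 18, 0.05
    lower = beta.ppf(alpha / 2, k, n - k + 1)
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k)
    print(k / n, lower, upper)   # point estimate ~0.28, interval roughly 0.10 to 0.53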
I hate when this is done. They are trying to attack a truly important topic with crap statistics that don't have enough power to support the conclusions they claim, which means people like me have to come in and slam down a hammer of negativity, when all of this could have been avoided with some planning and good experimental design.
Instead I have to say that you can't trust their conclusions, because they don't give the supporting statistics for significance, and the one thing that does hint at how reliable their results might be (a sample of 18) does not look good.
As a quick side note, this is quite common in the sciences as well. In some disciplines the power is so poor that the majority of the studies are complete garbage as far as their statistical conclusions go.
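To put a number on what "underpowered" means, here's a quick simulation (my own illustrative numbers: 18 per group to echo the sample above, and an assumed true effect of half a standard deviation):

    # With n = 18 per group and a modest true effect (0.5 standard deviations),
    # most studies will miss the effect entirely.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, effect, sims = 18, 0.5, 5000

    hits = sum(
        stats.ttest_ind(rng.normal(effect, 1, n), rng.normal(0, 1, n)).pvalue < 0.05
        for _ in range(sims)
    )
    print(hits / sims)   # ~0.3, i.e. roughly a 30% chance of detecting a real effect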
For more on that, and on power and significance generally, you can read Statistics Done Wrong.
It's quite accessible considering the topic. It will also help you approach statistics in the media much more critically.
Science as an ideal is not the same as science as it is practiced. In practice, there are patterns of error, misconduct, publication bias, misapplication of statistical techniques, the fact that the vast majority of results cannot be reproduced, and so forth. Not to mention the larger epistemological questions around induction.
> ...harmful mutations is one of those instances where people are convinced a risk is significant when it really is not.
Again, how do you know this? You don't. It's your opinion, and you may be right. But it's not something I'd like to gamble global food production on.
Census data, yes. The framing of it, maybe not. Welfare is a tricky thing. You could even include public education as a function of the welfare state. I'll go watch the video and come back.
Okay, got it and watched it. A few things that make me wary:
1) Welfare, as I suspected, is poorly defined. The way it's used, you can't tell what is included in the definition; even food aid is loosely defined. I'd like to talk to the researcher who pulled this together to understand it better.
2) Secret knowledge. The "government doesn't want you to know" line is dangerous. The government isn't an amorphous machine; it's people, and people who are generally fairly conservative. When folks claim secret knowledge, they are selling something.
3) Percentages. Showing those bar graphs as percentages is done on purpose: 10% of 100 million is way bigger than 10% of 10 million, and that scale is what matters for raw numbers (see the quick arithmetic after this list). Cut welfare and you cut services for poor white kids in raw numbers.
4) The data is cross-sectional, meaning it's a snapshot in time, which is not very good for inferential knowledge. I'd want trend lines at the very least, and better definitions of the races.
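To spell out the arithmetic behind point 3 (using the same illustrative 10%-of-100-million vs. 10%-of-10-million figures as above, not the video's actual numbers):

    # Equal percentages, very unequal headcounts.
    big_group, small_group = 100_000_000, 10_000_000
    rate = 0.10
    print(rate * big_group)    # 10,000,000 people
    print(rate * small_group)  #  1,000,000 people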
Thanks for the video; that was interesting. Pick up a few books if you want to know more about not getting pulled into bad math.
Here is a good place to start if you've already read How to Lie with Statistics:
https://www.amazon.com/Statistics-Done-Wrong-Woefully-Complete/dp/1593276206