Judea Pearl wrote a book on this:
https://www.amazon.com/Book-Why-Science-Cause-Effect/dp/1541698967
Grossly oversimplified: In order to determine causality, you need a counterfactual. A counterfactual is a comparable 'what if' scenario in which the intervention is absent.
Most statisticians don't actually bother with causality. Instead they focus on how often a procedure would get the right answer if the study were repeated many times — this is called Frequentism. It's a robust standard for comparing studies, but it's not very satisfying. One could argue the entire point of establishing whether or not something correlates with something else is that there's some hunch the two might be causally related.
Shit, I'm sorry to have done this to you. In a nutshell, DAGs are a way to graphically represent causal relationships between variables (nodes). They're directed because the causal relationship can only go one way, and they're acyclic because there are no loops.
Ex.
X -> Y -> Z is a DAG that represents a causal effect of X on Z through Y.
X -> Y <- Z means X and Z both cause Y (Y is a "collider" here). X and Z are marginally independent, but they become dependent once you condition on Y.
X -> Y -> X is not permitted because of the cycle (loop) from X to X.
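You can actually see both of those conditional (in)dependence patterns in a quick simulation. This is just a sketch — it fakes "conditioning on Y" by restricting to samples where Y lands near a fixed value, and the linear-Gaussian setup is my own toy choice, not anything canonical:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Chain: X -> Y -> Z. X and Z are dependent, but conditioning on Y
# blocks the path between them.
x = rng.normal(size=n)
y = x + rng.normal(size=n)
z = y + rng.normal(size=n)

chain_marginal = np.corrcoef(x, z)[0, 1]
near = np.abs(y) < 0.1  # crude stand-in for "condition on Y"
chain_conditional = np.corrcoef(x[near], z[near])[0, 1]
print(chain_marginal, chain_conditional)  # clearly positive, then near 0

# Collider: X -> Y <- Z. X and Z are independent by construction,
# but conditioning on the collider Y makes them dependent.
x = rng.normal(size=n)
z = rng.normal(size=n)
y = x + z + rng.normal(size=n)

collider_marginal = np.corrcoef(x, z)[0, 1]
near = np.abs(y) < 0.1
collider_conditional = np.corrcoef(x[near], z[near])[0, 1]
print(collider_marginal, collider_conditional)  # near 0, then clearly negative
```

The collider case is the famous "explaining away" effect: if Y is high and you learn X was low, Z probably was high, so the two look (negatively) correlated among samples with similar Y.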
DAGs are nice because you can visualize conditional dependencies and graphically analyze causal relationships.
If you're really interested in this stuff, Judea Pearl's The Book of Why is an accessible introduction for laymen. Pearl is a Turing Award winner and the godfather of causal inference. All his academic papers are available on his website, but they're not for the faint of heart.