Being honest with yourself is great. Not being able to build anything profitable is much better than tricking yourself into believing you've built "profitable" algos in backtests, only to see them all fail in real life.
DO NOT trade forex. FX is super hard to trade: poor data, fragmented markets, and it's the most efficient asset class. Finding alpha in FX is 10x harder than in equities or commodities spot/futures.
Unless you are super comfortable with the basics, DO NOT jump to ML, especially deep learning. Start with simple stuff like momentum and mean reversion, then interpretable models, then ensembles of models, then DL. Otherwise you will never learn or make any progress at all.
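A minimal sketch of what that "simple stuff" can look like (not from the original comment; pandas/numpy assumed, hypothetical daily close prices, toy parameters — not a tradable strategy):

```python
import numpy as np
import pandas as pd

def momentum_signal(prices: pd.Series, lookback: int = 126) -> pd.Series:
    """Toy momentum: long (+1) when the trailing return is positive, short (-1) when negative."""
    trailing_return = prices.pct_change(lookback)
    return np.sign(trailing_return)

def mean_reversion_signal(prices: pd.Series, window: int = 20) -> pd.Series:
    """Toy mean reversion: fade the z-score of price against its rolling mean."""
    rolling_mean = prices.rolling(window).mean()
    rolling_std = prices.rolling(window).std()
    zscore = (prices - rolling_mean) / rolling_std
    return -zscore.clip(-1, 1)  # short when stretched above the mean, long when below
```

Either signal would still need realistic costs, position sizing, and out-of-sample testing before it means anything, which loops back to the backtest-honesty point above.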
This book is the bible: https://www.amazon.com/Elements-Statistical-Learning-Prediction-Statistics/dp/0387848576/ Read it 100 times and code as many of the examples yourself as possible.
That's how I see it: if it's used for prediction, it is machine learning, even if it's a simple linear regression or GLM.
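A tiny sketch of that point (scikit-learn and made-up data assumed, not from the original comment): nothing below is more than ordinary least squares, but because it is fit and then used to predict, it already counts as ML under this definition.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # made-up features
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.3, size=200)

model = LinearRegression().fit(X, y)                   # "just" a linear regression...
y_hat = model.predict(rng.normal(size=(5, 3)))         # ...but used for prediction, so: ML
```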
Some people like to call it statistical learning (hence this book — a great read).
The book is the graduate-level version of the undergraduate book, An Introduction to Statistical Learning.
The Elements is one of the best-written mathematics books I've read. It also takes a very geometric approach, which really appeals to me. I haven't read An Introduction, but I am sure it's great. Incidentally, Daniela Witten is worth following on Twitter.
>You can't really propagate the uncertainty easily. Also, regression and ML models are mostly discriminative, so they model Y|X; that means the model is inherently conditional on the input features X, and thus the uncertainty in X does not need to be propagated, even from a statistical point of view. It's already, by definition, conditional on assuming that X takes that value.
This comment is exactly right. u/rednirgskizzif, if you don't have the book yet, check out "Elements of Statistical Learning", section 2.4, equations 2.9 - 2.13. There they show the fundamental approach of minimizing the *squared error loss* E[(Y - f(X))^2] to find the best-fit regression function f(x) = E(Y|X=x), which is *not* a chi-squared loss that takes into account the measurement uncertainty shown in equation 1 here.
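In case it helps anyone reading along, the gist of those equations (a sketch of the ESL argument, not a verbatim quote): the expected squared error can be conditioned on X and minimized pointwise in x, which forces the optimal regression function to be the conditional mean.

```latex
\begin{align*}
\mathrm{EPE}(f) &= \mathbb{E}\big[(Y - f(X))^2\big]
                 = \mathbb{E}_X\,\mathbb{E}_{Y\mid X}\big[(Y - f(X))^2 \mid X\big] \\
f(x) &= \operatorname*{arg\,min}_{c}\ \mathbb{E}_{Y\mid X}\big[(Y - c)^2 \mid X = x\big]
      = \mathbb{E}\,[\,Y \mid X = x\,]
\end{align*}
```

A chi-squared fit, by contrast, divides each residual by its measurement error before squaring — exactly the weighting that the plain squared-error loss drops.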
It's a bit simpler in some ways, but I agree with u/rednirgskizzif that our training in physics leads us to care very much about uncertainties in models, in a way that doesn't have as many applications in the data science/business world.
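Purely as an illustration of that difference (not from the original comment; numpy only, made-up data and error bars): an ordinary least-squares fit ignores the per-point measurement errors, while the physics-style chi-squared fit weights each residual by 1/sigma.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
sigma = rng.uniform(0.2, 2.0, size=x.size)      # made-up per-point measurement errors
y = 2.0 * x + 1.0 + rng.normal(scale=sigma)     # true line plus heteroscedastic noise

A = np.vstack([x, np.ones_like(x)]).T

# Plain least squares: minimizes sum (y - A b)^2 and ignores sigma (the ML default).
b_ols, *_ = np.linalg.lstsq(A, y, rcond=None)

# Chi-squared fit: minimizes sum ((y - A b) / sigma)^2, i.e. weighted least squares.
Aw, yw = A / sigma[:, None], y / sigma
b_chi2, *_ = np.linalg.lstsq(Aw, yw, rcond=None)

print("OLS slope/intercept:", b_ols)
print("chi2-weighted slope/intercept:", b_chi2)
```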
The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition (Springer Series in Statistics) https://www.amazon.com/dp/0387848576/
For fun reading, I'm going through https://en.wikipedia.org/wiki/The_Horus_Heresy_%28novels%29 , currently on book 20.
On the technical side, x86-64 assembly and http://www.amazon.com/The-Elements-Statistical-Learning-Prediction/dp/0387848576