Machine learning (ML) is the art of teaching your computer to do things without explicitly telling it HOW to do those things.
ML is also teaching your computer to do things that would usually need a human.
ML is also teaching your computer to do things with ease that you couldn't program it to do.
If there's one thing I want you to take away from this, it's: ML is pattern recognition.
(Which is, kinda unrelated, also a pretty good William Gibson novel.)
In classical programming you EXPLICITLY tell your computer WHAT TO DO, not really WHAT YOU WANT.
IF x happens THEN do y, ELSE do z.
In ML it's the other way around: you tell your computer WHAT YOU WANT, or rather WHAT PATTERNS YOU WANT IT TO FIGURE OUT ("learn to differentiate between cats and dogs!" = figure out the rules and patterns that make a cat a cat and the rules and patterns that make a dog a dog), but you don't tell it HOW to do it. You let the ML algorithm figure that out on its own. It's kinda like telling a small child what you want it to do and then sitting back and watching it struggle until it eventually figures out HOW to do it.
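To make that contrast concrete, here's a tiny sketch (the spam rule, the features and the numbers are all made up by me, just for illustration): the first function is the classical way, the second hands examples to scikit-learn and lets it find the rule itself.

```python
from sklearn.tree import DecisionTreeClassifier

# Classical programming: YOU write the rule.
def is_spam_classical(num_links, has_the_word_free):
    if num_links > 3 and has_the_word_free:
        return True
    else:
        return False

# ML: you hand over examples (features + answers) and let the model find the rule.
X = [[0, 0], [1, 0], [4, 1], [7, 1], [2, 0], [5, 1]]  # [num_links, has_the_word_free]
y = [0, 0, 1, 1, 0, 1]                                # 0 = not spam, 1 = spam

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[6, 1]]))  # the model applies the pattern it figured out
```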
Classic example: teach a computer to differentiate between cat and dog pictures. How would you do it in a regular programming language? Write a billion IF-THEN-ELSE statements to account for thousands of dog and cat breeds, photographed from thousands of different angles under thousands of different lighting conditions?
Yeah, have fun.
That's where ML comes in. Train it on a couple thousand pictures (50:50 cat/dog) and you have a piece of software that figures out the rules itself, software you can throw any cat/dog picture at and it goes: yep, that's a cat. Heck, these days you can train an ML model in minutes to not only differentiate between cat and dog but also tell you exactly what breed it is.
The same can be applied to all kinds of A or B problems. They are called classification problems, btw.
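If you want to see roughly what that looks like in practice, here's a minimal sketch with Keras. The folder layout (data/train/cats and data/train/dogs) is just my assumption for the example, and a real project would use far more images and probably a pretrained model, but the shape of it is this:

```python
import tensorflow as tf

# Point this at a folder like data/train/cats/*.jpg and data/train/dogs/*.jpg
# (the folder layout and names are assumptions for this sketch).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(128, 128), batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),            # scale pixel values to 0..1
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # one output: probability of "dog"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)   # the "figure out the rules yourself" part
```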
Let's say you're a doctor sitting on thousands of X-ray pictures, and you know which ones are of cancer patients and which ones aren't. Feed that into your ML algorithm and let it figure out the patterns that separate cancer from not-cancer. This could be a great help in developing countries that might have enough nurses to push "start" on the X-ray machine, but not enough highly qualified doctors with enough time to sift through all those images and correctly interpret them.
ML could help with triage: run patients' pictures through the machine and put those it determines to have a high cancer probability at the top of the list to see an actual doctor.
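Here's a rough sketch of that triage idea using scikit-learn's built-in breast cancer dataset as a stand-in (it's tabular measurements, not actual X-ray images, but the "sort by predicted risk" logic is the same):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_new, y_train, _ = train_test_split(X, y, test_size=0.2, random_state=0)

# Train a simple classifier on the cases where we already know the answer.
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_train, y_train)

# Probability that each "new patient" is malignant (class 0 in this dataset),
# then sort so the highest-risk cases land at the top of the doctor's list.
risk = model.predict_proba(X_new)[:, 0]
triage_order = np.argsort(risk)[::-1]
print(triage_order[:10])   # indices of the ten highest-risk cases
```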
Another problem set is regression, which basically means "guess the number". Let's say you want to sell your house and you want your machine to suggest a selling price. Feed your ML algorithm a thousand houses and as many parameters as you can: Selling price, number of bedrooms, square metres etc. Let the ML algorithm chew on that and figure out the pattern of what kind of house fetches what price. Then put in YOUR house parameters and it'll tell you: Based on what I've seen, your house will probably fetch this price.
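A bare-bones regression sketch (the house numbers here are invented, purely to show the flow):

```python
from sklearn.linear_model import LinearRegression

# [square metres, bedrooms] -> selling price (all made-up example data)
X = [[50, 1], [80, 2], [95, 3], [120, 3], [150, 4], [200, 5]]
y = [150_000, 230_000, 275_000, 340_000, 420_000, 560_000]

model = LinearRegression().fit(X, y)

# "Based on what I've seen, your house will probably fetch this price."
your_house = [[110, 3]]
print(model.predict(your_house))
```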
Another application is clustering. Let's say you run an online shop and you have thousands of customers, but you don't know if there are actual groups of customers that you could easily advertise to. You know what everybody bought, you know where they're from thanks to the shipping address, you know their age, gender, how much they spend, what items they've bought over their customer lifetime, etc. Feed all that into your ML software and let it figure out demographic trends, groups and connections to inform your next advertising campaign.
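A quick clustering sketch with made-up customer features (age, orders per year, average spend):

```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

customers = [
    [19, 2, 35], [22, 3, 40], [24, 2, 30],        # young, occasional, low spend
    [35, 12, 120], [38, 15, 150], [41, 10, 110],  # frequent mid-spenders
    [55, 4, 600], [60, 3, 750], [58, 5, 680],     # rare but big-ticket buyers
]

# Scale the features so "avg spend" doesn't dominate just because its numbers are bigger.
X = StandardScaler().fit_transform(customers)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)   # which group each customer landed in
```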
These are all interpretative applications, but you can also throw generative problems at ML.
Feed a couple hundred or thousand classical songs into it and let it figure out what patterns and rules make a classical song, then use the resulting model to generate new, random classical music from scratch.
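The real music models are far more sophisticated than this, but the basic "learn the transition patterns, then sample new material" idea can be shown with a toy Markov chain over note names (the little melodies below are mine, just to have something to train on):

```python
import random
from collections import defaultdict

songs = [
    ["C", "E", "G", "E", "C", "G", "C"],
    ["C", "G", "A", "G", "E", "C"],
    ["E", "G", "C", "G", "E", "C"],
]

# "Training": count which note tends to follow which.
transitions = defaultdict(list)
for song in songs:
    for current, nxt in zip(song, song[1:]):
        transitions[current].append(nxt)

# "Generation": start somewhere and keep sampling from the learned transitions.
note = "C"
melody = [note]
for _ in range(12):
    note = random.choice(transitions[note])
    melody.append(note)
print(melody)
```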
That's also how https://thispersondoesnotexist.com/ works. NVIDIA fed their machine a shit ton of pictures of people and told it: Figure out what is a person. What are the rules that make a face, what are the patterns.
Remember, pattern recognition on a human or, even better, beyond-human level is a huge part of ML.
And it did. And now it can generate you a photorealistic picture of a person that doesn't even exist, because it learned the rules and patterns. On its own. ML is approaching a rudimentary form of imagination at this point.
Deepfakes are another application. Basically you tell your machine "This video, this is what Steve Buscemi looks like. Figure out the rules that make 'Buscemi' what he is!" and "This is what my brother Frank looks like" and then "YOU figure out HOW to make my brother Frank's face conform to the 'Buscemi' rules you just figured out, facial expressions and mouth movements included!". And it will. It'll struggle for a while, but it will.
And that's one of the fun things about ML: the iterative nature of the process. It learns from its mistakes. You tell your machine "do it once!" and it'll do maybe OK, but most likely it'll crap its pants. Then you tell it "alright, do it a hundred times. This is the target! Each time, look at how far you've missed the target. Next time, miss the target a little less!".
Congratulations, you've just learned what a loss function is.
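Here's roughly what that looks like stripped down to the bone: one parameter, mean squared error as the loss, and a hundred rounds of "miss the target a little less" (a.k.a. gradient descent). The data is made up so the true rule is simply y = 2x:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]   # the "target": the true rule here is y = 2 * x

w = 0.0              # the machine's first guess at the rule
learning_rate = 0.01

for step in range(100):
    # How far off are we? (the loss function: mean squared error)
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    # Which direction reduces the miss? (the gradient of the loss w.r.t. w)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # miss the target a little less next time
    if step % 20 == 0:
        print(f"step {step:3d}  loss {loss:.4f}  w {w:.3f}")

print("learned w:", round(w, 3))   # ends up close to 2
```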
Much of that ML work can be accelerated greatly by running it on GPUs, because there is a lot of matrix math involved and GPUs are REALLY good at that. That's why ML practitioners are really interested in fast GPUs. Also, lots of VRAM: having lots of VRAM means you can load more of the dataset you want your machine to learn from onto the GPU at once, which is why /u/Quaxi_ was hoping for even more than 12GB.
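If you have PyTorch installed, you can see the GPU angle in a few lines (this sketch falls back to the CPU if there's no CUDA GPU around):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)   # a big random matrix
b = torch.randn(4096, 4096, device=device)

c = a @ b   # the kind of matrix multiply ML training does constantly
print(c.shape, "computed on", device)
```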
There are all kinds of reasons to get into ML, if you just abstract the examples I've given to whatever interests YOU. Just think about it this way for a start: Do you have a lot of data and are there patterns you want to find in it? Then learning ML might be for you!
Some food for thought:
https://www.ubuntupit.com/top-20-best-machine-learning-applications-in-real-world/
And the best, most practically oriented course I've come across so far: