I can't find the colorizer anywhere on GitHub. Are you sure it's open source? I just found a demo.
I am not sure how to approach your question theoretically, but in my experience working with simple computer vision models, the activation functions do affect the final results, because the final results come out of the optimization process, and optimization requires computing a loss. In PyTorch, one might use LogSoftmax paired with NLLLoss when dealing with multi-class classification. After fitting, we replace the LogSoftmax with a plain Softmax for interpretability.
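For concreteness, a minimal sketch of that pattern (the layer sizes and fake batch are made up for illustration):

```python
import torch
import torch.nn as nn

# Train with LogSoftmax + NLLLoss (numerically stable together).
model = nn.Sequential(nn.Linear(784, 10), nn.LogSoftmax(dim=1))
loss_fn = nn.NLLLoss()

x = torch.randn(8, 784)                # fake batch, just for illustration
targets = torch.randint(0, 10, (8,))
loss = loss_fn(model(x), targets)      # NLLLoss expects log-probabilities

# After fitting, swap the head so outputs are probabilities.
model[1] = nn.Softmax(dim=1)
probs = model(x)                       # each row now sums to 1
```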
This is a great book by the creator of the Python package Keras.
Deep Learning with Python https://smile.amazon.com/dp/1617294438/ref=cm_sw_r_cp_api_glt_fabc_VADA4VPZ28G388R7M6MB?_encoding=UTF8&psc=1
Stop being a child imagining what might be, and ask your parents for some money to sign up for a class that will teach you how to do it.
https://www.udacity.com/course/machine-learning-for-trading--ud501
This is GPT-3; it isn't public yet, but you can request access as a beta user at https://forms.office.com/Pages/ResponsePage.aspx?id=VsqMpNrmTkioFJyEllK8s0v5E5gdyQhOuZCXNuMR8i1UQjFWVTVUVEpGNkg3U1FNRDVVRFg3U0w4Vi4u
I am aware of LSTMs, which are great, but they're different from what I meant: there, the neurons (and neurons in subnetworks) run the feedforward computation at every timestep, producing N*M computations (neuron_count * synapse_count), while in what I meant the number of computations varies from step to step but stays far below N*M.
I have actually implemented this and I talk about it in my bachelor's thesis. It's just a few pages (chapter three) and includes other details such as synapse delay and the algorithm used to train the net with a GA.
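To give a flavor of the idea (this is my own toy illustration, not the code from the thesis): you push "events" through the net, and a neuron only computes when an event actually reaches it, so the cost per step tracks activity rather than neuron_count * synapse_count. Synapse delay falls out naturally from the event timestamps:

```python
import heapq
from itertools import count

class Neuron:
    def __init__(self, threshold=1.0):
        self.out = []            # outgoing (target, weight, delay) tuples
        self.threshold = threshold
        self.potential = 0.0

def connect(src, dst, weight, delay):
    src.out.append((dst, weight, delay))

def simulate(seed_events, t_max):
    """seed_events: iterable of (time, neuron, value) injected inputs."""
    tie = count()                # tiebreaker so the heap never compares Neurons
    queue = [(t, next(tie), n, v) for t, n, v in seed_events]
    heapq.heapify(queue)
    while queue:
        t, _, neuron, value = heapq.heappop(queue)
        if t > t_max:
            break
        neuron.potential += value
        if neuron.potential >= neuron.threshold:   # fire, then reset
            for target, weight, delay in neuron.out:
                heapq.heappush(
                    queue,
                    (t + delay, next(tie), target, weight * neuron.potential))
            neuron.potential = 0.0
```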
Can you be more specific?
Color quantization to get the palette can also be achieved with neural networks, as you can see here: https://en.wikipedia.org/wiki/Color_quantization
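The classic neural approach there is a Kohonen-style competitive network (NeuQuant works roughly this way). A toy sketch of the idea, with all constants invented for illustration:

```python
import numpy as np

def learn_palette(pixels, k=16, epochs=5, lr=0.3):
    """Toy competitive-learning quantizer: each pixel pulls the closest
    palette entry toward it, so entries settle on the dominant colors."""
    rng = np.random.default_rng(0)
    palette = rng.uniform(0, 255, size=(k, 3))       # random initial palette
    for epoch in range(epochs):
        rate = lr * (1 - epoch / epochs)             # decay the learning rate
        for p in rng.permutation(pixels):
            i = np.argmin(((palette - p) ** 2).sum(axis=1))  # winning neuron
            palette[i] += rate * (p - palette[i])    # move winner toward pixel
    return palette.round().astype(np.uint8)

# Usage: pixels is an (N, 3) array of RGB values, e.g. image.reshape(-1, 3)
```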
For the FLIF file format, the specification is here (search for "MANIAC tree structure"): http://flif.info/papers/FLIF_ICIP16.pdf
Sorry mate, but the evidence piles up.
If you are using PyTorch, make sure you have installed the right version for ROCm. Also make sure your GPU can be used for this. https://pytorch.org/get-started/locally/
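A quick sanity check you can run (assuming a ROCm build of PyTorch, which reuses the `cuda` device API for AMD GPUs):

```python
import torch

# On ROCm builds, torch.version.hip is set instead of torch.version.cuda.
print(torch.__version__)
print("HIP/ROCm version:", getattr(torch.version, "hip", None))
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```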
The number of samples is indeed what I said (64). My confusion stems from the fact that you call them "input nodes". A "node" in a neural network, to me, means either an input or a neuron with its own unique/individual weights connecting to the next layer. So the way I read your proposal, you'd be putting each of the 64 input samples through different first weights. That's incorrect: they all go through the same network/nodes. Think of it as putting each of the 64 samples individually through the network exactly as you do now, just computed in one go. The important part is that you now use 64 random samples to estimate the direction of the gradient before taking a step, instead of one, which gives you a more accurate approximation. Everything else stays the same; you just compute the loss/gradients over 64 samples to get a more accurate weight adjustment, as in the sketch below.
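In PyTorch terms it's just the batch dimension doing the work (toy model, made-up shapes):

```python
import torch
import torch.nn as nn

# Hypothetical tiny network; sizes are illustrative, not from the thread.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.01)

# A batch of 64 samples: every sample passes through the SAME weights;
# the leading batch dimension just lets us compute them in one go.
x = torch.randn(64, 10)
y = torch.randn(64, 1)

optimizer.zero_grad()
loss = loss_fn(net(x), y)   # loss averaged over all 64 samples
loss.backward()             # gradient estimated from 64 samples, not 1
optimizer.step()
```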
Weight initialization is (as all things deep learning) the subject of a lot of research. Often the scale of the weights is determined by the number of inputs, outputs, or both of a layer. The PyTorch documentation refers to some papers about this.
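For example, the two standard schemes are one call each in PyTorch (layer sizes made up):

```python
import torch.nn as nn

layer = nn.Linear(128, 64)

# Xavier/Glorot: scale depends on both fan_in and fan_out (Glorot & Bengio, 2010)
nn.init.xavier_uniform_(layer.weight)

# Kaiming/He: scale depends on fan_in, suited to ReLU nets (He et al., 2015)
nn.init.kaiming_normal_(layer.weight, nonlinearity="relu")
```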
It's not a silly question, don't worry. You're right: you can definitely load a trained network and apply it to your own images, as long as they're in the same format. For instance, here you can find some info about already-trained models in PyTorch, and here some general info about loading models in PyTorch. Every other large deep learning library will probably have this functionality too, since it's quite essential.
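If you're on PyTorch, a minimal sketch of that flow with torchvision (the model choice and preprocessing are just an example; newer torchvision versions take a `weights=` argument instead of `pretrained=True`):

```python
import torch
from torchvision import models, transforms

# Load a pretrained classifier (torchvision ships several).
model = models.resnet18(pretrained=True)
model.eval()

# Your images must be preprocessed the same way the network was trained:
# here, the usual ImageNet resize/crop/normalize pipeline.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```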
https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html
This is about as good as it gets for a spoon-fed how-to on setting up and training a GAN. DCGAN in particular is a well-known architecture that seems to converge well for a variety of applications; you just need to be okay with the 64x64 resolution. You do need to copy the code and add your own data loader, but I definitely think you'll be able to drop your own images in and probably get some decent results. Good luck!
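The tutorial loads its data with `ImageFolder`, so the "add your own data loader" part is mostly pointing it at your directory. Roughly (path and batch size are placeholders; note `ImageFolder` expects the images inside at least one subfolder):

```python
import torch
import torchvision.datasets as dset
import torchvision.transforms as transforms

image_size = 64  # the DCGAN tutorial works at 64x64

dataset = dset.ImageFolder(
    root="path/to/your/images",  # must contain at least one subfolder of images
    transform=transforms.Compose([
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ]))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)
```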
It's a hobby for me too. I've been working on chess with limited success and audio classification which works very well. So tic-tac-toe is definitely an achievable goal and a good exercise to learn.
You don't really need to know any math, but you do need to know Python.
If you want to learn the math, I would suggest learning linear algebra and calculus. That's all that's needed.
Also, you didn't mention a neural network library. Doing neural networks with just NumPy would be difficult even for me, and I've studied the math.
Take a look at PyTorch. It greatly simplifies the work required to construct and train these networks. Why reinvent the wheel?
Here's a link: https://pytorch.org/
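And to show how little code it takes, a toy example (made-up sizes, fake data): autograd does the calculus for you, which is exactly the part that's painful in plain NumPy.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(32, 2)   # fake data, just to run one step
y = torch.randn(32, 1)

loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad()
loss.backward()          # gradients via autograd, no hand-derived math
opt.step()
```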