Well, according to the video description, it was colorized using software called DeOldify... so it was done programmatically. Also, there are three available algorithms, and it appears these folks chose image quality and flicker reduction over getting the most vibrant colors.
I'm sure that if you'd like to contribute some code to improve their algorithms for working with 110-year-old film, they'd be happy to take your merge request.
I think the #1 trap to avoid is that of the "perpetual student". That is, always taking classes and reading but never actually doing anything with it.
The Fast.AI courses are fantastic. I took them both this summer, then went straight into project mode. I just happened to release this today: a state-of-the-art image colorization model:
https://github.com/jantic/DeOldify
Not to brag too much, but it's awesome. I learned a hell of a lot in the process too, maybe just as much as I did with the courses!
tl;dr: Definitely put the emphasis on doing projects, even if you feel ill-equipped (you will).
Been using the amazing open-source work from https://github.com/jantic/DeOldify to do large batches of this already. If you can't wait for the Google version, this one is the best I have used.
John is an old lecturer of mine, he's a gent.
It's actually really easy to use the algorithm he used yourself, I am doing up a book for my old lad and grandmother for Christmas with a load of old family photos. You can find details here
Hey....not sure if you’ve heard of DeOldify but that might be a good reference. (I’m the author). It’s in PyTorch though. You’ll notice it uses a unique technique called NoGAN that I developed with Jeremy and Sylvain at FastAI that only uses GAN training as a fine tuning step after a bunch of more conventional pretraining.
https://github.com/jantic/DeOldify
I can tell you this much: GANs are a pain in the ass.
I just ran through the steps here. I use it on Windows 10 with a GTX 1070 and it works really well. From what I recall it was easy; you need to have Anaconda installed.
Lol I didn't render the whole movie yet.... I finished up this technique this morning. This is bleeding edge tech.
Project is here:
I used a paid version of DeOldify. But there is an open-source version which is more or less the same.
"They" as in me? Yeah this works today actually, on your own desktop. The restoration stuff isn't freely available (that's only on my computer at home at this stage), but colorization can be done on full movies. You can do shorter clips with the Colab but I'd suggest a home install for full movies.
Yes, more or less. I found out about it because the genealogy website MyHeritage was using it; it used to be paid, but during the lockdown they made it free.
Then I started digging, because I didn't like that they put watermarks and logos on the photos, and I saw that it was an open-source project (https://github.com/jantic/DeOldify). They don't have a "program" as such, just a bunch of code and libraries and so on. However, they provide a Google Colab notebook where you can run the code yourself ( https://colab.research.google.com/github/jantic/DeOldify/blob/master/ImageColorizerColab.ipynb ), and if you upload an image somewhere online (imgur.com, for example) and point the notebook at it, it does the colorization.
Footage like this is colorized by something called a 'GAN', which is basically the computer just guessing what colors to make things until it's happy that the thing looks like it's supposed to. So, somewhere along the way, it obtained a bias for purple cars, at least of that shape/design. Hell, it could just be that these cars are most closely resembled by "hot rods" that are often painted with a glimmering purple look these days, and that's the view of them that the training data has.
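The generator/critic back-and-forth described above can be illustrated with a deliberately tiny, hypothetical sketch (not a real GAN, and nothing from DeOldify itself): "real" data is a single target color value, the generator holds one adjustable guess, and it nudges that guess in whatever direction raises the critic's "looks real" score.

```python
# Toy 1-D sketch of the adversarial idea: a generator parameter chasing a
# critic's notion of "real". Assumptions: real data is just the value 0.8,
# and the critic is fixed (a real GAN's critic is itself learned).

def critic(x, real_mean=0.8):
    # Higher score = looks more real to the critic.
    return -(x - real_mean) ** 2

theta = 0.1   # generator's initial (bad) color guess
lr = 0.05

for _ in range(200):
    # Nudge theta to raise the critic score, using a finite-difference
    # "gradient" to stay dependency-free.
    eps = 1e-3
    grad = (critic(theta + eps) - critic(theta - eps)) / (2 * eps)
    theta += lr * grad

print(round(theta, 2))  # the guess has converged near the "real" value 0.8
```

In a real GAN both networks update: the critic also trains to tell generated images from real ones, which is exactly the arms race the quote describes.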
This is very doable and has been done very well in the image domain. Please see the work of Jeremy Howard and Jason Antic called DeOldify. Jason took tons of color pictures, created black-and-white versions of them, and then trained a network to map from BW to color. Then, on old BW images, it adds color very nicely. What you have described is the same kind of requirement for audio, and hence should be doable. https://github.com/jantic/DeOldify. Especially since Perceiver IO should be even more capable of this sort of thing.
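The data-prep trick described above (synthesize the degraded input from the clean target, so labels come for free) can be sketched in pure Python. Assumptions: images are nested lists of RGB tuples, and grayscale is computed with the standard BT.601 luma weights; a real pipeline would use PIL/OpenCV and tensors instead.

```python
# Self-supervised pair generation: (synthetic grayscale, original color).
# The network is then trained to invert the degradation, gray -> color.

def to_gray(rgb):
    r, g, b = rgb
    # ITU-R BT.601 luma weights for the grayscale conversion
    return round(0.299 * r + 0.587 * g + 0.114 * b)

def make_training_pair(color_image):
    gray_image = [[to_gray(px) for px in row] for row in color_image]
    return gray_image, color_image  # (input, target) for training

color = [[(255, 0, 0), (0, 255, 0)],
         [(0, 0, 255), (255, 255, 255)]]
gray, target = make_training_pair(color)
print(gray)  # [[76, 150], [29, 255]]
```

The same recipe transfers to audio: degrade clean recordings (band-limit, add noise), and train a model to map degraded back to clean.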
The photo I used was taken from the chico-xavier blog, which says it is the most recent photo of him.
The process was done in two steps:
For the upscaling I used Topaz Gigapixel AI, and for the colorization I used DeOldify.
Turned the saturation of the original MV down to 0 to make it black and white, then used DeOldify to recolor. I also followed this tutorial to learn how to use DeOldify.
MyHeritage's colorization is based on the DeOldify open source project and the code is available here: https://github.com/jantic/DeOldify
It isn't too hard to get it running locally but does require some amount of Linux and optionally container skills.
Thanks! I haven't tried to do a blend yet. With these kind of pictures I usually like the result of the compressed mode more. It tends to remove a lot of artifacts.
The colorization is done by DeOldify. It doesn't work well with all pictures, but you can get some great-looking images.
Looking at the readme, Windows is not supported and issues related to it won't be fixed. Also, looking at this issue, it may have something to do with Windows/the CUDA install.
For my limited use of AI tools, I use the Android app Remini through the BlueStacks emulator for faces.
For upscaling I use free online upscalers. I use different ones depending on what result I need. Some are really good on objects, others are really good for drawings and other unrealistic imagery.
As a starting point for coloring I use DeOldify.
If you are confident in Photoshop, Affinity Photo is a great alternative for most photo-editing purposes. It also supports a lot of Photoshop plugins. If you also manage to find the old, free Google version of the Nik Collection Photoshop plugins, you have a great image-editing solution for a combined total of about $50.
As a whole the Affinity Suite is great value for money.
I'm trying to colorize some family photos from the 1950s with DeOldify. Has anyone tried it? I've done one so far: very resource-hungry, and rather meager results for the moment...
Summary:
Of the ones this guy finds the best, the open source one is: https://github.com/jantic/DeOldify
(The paid ones he finds better are versions of the same thing; MyHeritage is also DeOldify.)
Number 5 in his list is: https://github.com/ericsujw/InstColorization
Here is the black-and-white input picture.
I'm planning to use DeOldify this summer to colorize the first seasons of DW :) I know this is kind of a loss of the episodes' authenticity, but it's a little project of mine I'd like to do.
I know it has been done before, but I couldn't find official confirmation that these episodes were colorized.
Colourized using Deoldify
You can try it out here for images and here for videos.
Thanks. I just used an open-source library, DeOldify. I generated this video using a cloud tool (a hosted Jupyter notebook instance with 12 GB RAM), and it took more than an hour to render this 2:47 min clip.
I then tried to set it up on my machine with 16 GB RAM. It was super slow, since the library requires an NVIDIA GPU but I have AMD: it took about 8 hours to render a 3-minute clip. Now I am trying to find a workaround for this GPU issue so I can render more videos locally.
About rendering speed, see what the library says -
> ...resolution at which the color portion of the video is rendered. Lower resolution will render faster, and colors also tend to look more vibrant. Older and lower quality film in particular will generally benefit by lowering the render factor
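The speed/vibrancy trade-off quoted above comes down to how the render factor maps to working resolution. A small sketch, assuming the relationship described in DeOldify's README (the color pass runs on a square downscale of roughly render_factor × 16 pixels per side, then gets re-applied to the full-resolution frames):

```python
# Sketch of render_factor -> working resolution (assumed factor of 16,
# per the DeOldify README's description; not pulled from the source code).

def render_resolution(render_factor: int) -> int:
    """Side length, in pixels, of the square image the colorizer works on."""
    return render_factor * 16

for rf in (10, 21, 35):
    print(f"render_factor={rf} -> {render_resolution(rf)}px")
```

So dropping the render factor from 35 to 21 roughly halves the pixel count the model has to process per frame, which is why renders speed up so much.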
This is probably the best I could do, due to the degraded quality of the image. Colorizing works best with high resolution images where edges are clearly defined for the neural net to use as guides.
I made this using DeOldify.
The github link is here. Check it out.
I made this using DeOldify. I included images at a few different render factors so you can pick the one you like. They range from pretty saturated to one that I think might be more realistic.
The github link is [here](https://github.com/jantic/DeOldify). Check it out.
Thought this might be a good test for the new DeOldify model. Here's the result, with a few touchups from me.
Hey man, so it is possible to use UpSampling layers to go from 1 channel input to 3 channel output. However, I would recommend just converting to 3 channel from the beginning. You can do this simply with OpenCV’s cvtColor function. Also check out https://github.com/jantic/DeOldify for more information on deep-learning based color restoration.
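The channel conversion recommended above (with OpenCV it would be `cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)`) amounts to replicating the single gray value into all three channels. A dependency-free sketch, using nested lists in place of arrays:

```python
# Equivalent of a GRAY -> 3-channel conversion: copy the one gray value
# into each of the three channels. With OpenCV/NumPy this is a one-liner;
# the pure-Python version just makes the operation explicit.

def gray_to_3channel(gray_image):
    return [[(v, v, v) for v in row] for row in gray_image]

img = [[0, 128], [255, 64]]
print(gray_to_3channel(img))
# [[(0, 0, 0), (128, 128, 128)], [(255, 255, 255), (64, 64, 64)]]
```

Doing this once at load time, as suggested, is usually simpler than teaching the network to expand channels itself.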
For colorizing I used this. It took about half an hour on Google Colab
It's also been upscaled with Gigapixel AI and added FPS with this by Denis Shirayev
Here's an ELI5-answer from an interview with a guy who made DeOldify:
>Now, let’s try same question, but as a Reddit-style ELI5 “Explain Like I’m Five.”
>
>Challenge accepted! So there’s two things that work to make the images – a generator and a critic. The generator knows how to recognize things in images and so it can take a look at a black and white image and figure out what most coloring should be used for most of the things in the image. If it doesn’t know, it tries its best to pick a color that makes sense. It’s trying its best to make the image look real, because then the critic will take a look and try to figure out if it’s real or not. The generator is constantly trying to fool the critic into believing that the images it makes are real. So it has to be creative – clothes can’t all be brown, for example! Otherwise, the critic will quickly figure out that the images created with all brown clothes are fake, and the generator will have failed to fool the critic. The generator and critic keep getting better from this back and forth with each other, and therefore the images get better and better.
This latest version is not but there is an older open source version (released May 2019): https://github.com/jantic/DeOldify/blob/master/README.md
DeepAI's colorization appears just to have put an API around the public DeOldify repo, which is free to use via the Google Colab links available on the DeOldify page.
Buster Keaton: The General (1926) and The Goat (1921), colorized with deep learning AI in HD quality. Colorized using the work of Jason Antic (big thank you for your work, sir!).
It was shot on July 23, 1926, and it was the most expensive single shot in silent-film history. Three or four thousand local people from all over the area had gathered on that hot summer day to witness it. At three o'clock in the afternoon, Keaton gave the signal to the six cameramen to begin cranking. It had to be done in one take: Keaton couldn't afford to build a new bridge, buy a new locomotive and try again. It had to be perfect. The $42,000 scene (in 1926 dollars; about half a million in today's dollars) shows the pursuing Union train trying to cross a railroad bridge after Johnnie has set it on fire. The bridge collapses in the middle and the train, a full, working steam locomotive and cars, not a model, plunges into the "Rock River." In reality, it was the Row River, just south of Cottage Grove. (https://www.thevintagenews.com/2016/09/06/priority-train-scene-buster-keatons-general-expensive-scene-silent-film-history/)
You can watch the full colorized movie here: https://www.youtube.com/watch?v=7IvmzEuVkSY It was colorized by Jason Antic using https://github.com/jantic/DeOldify
I use this: https://github.com/jantic/DeOldify, which is a machine learning library. The process is hard to get set up, and you need to understand Python to a reasonable level, but once that's done it's quite easy to play with.
Render times, even with a decent GFX card, are pretty long though.
I stumbled across this documentary and tried to colorize it. The results are far from perfect but hopefully, in the near future, we will have better restoration.
The tool used: https://github.com/jantic/DeOldify
First of all, thanks to u/DaCompy for the album with all the pictures.
Recently I've been having some fun trying out the DeOldify project, which adds color to black-and-white images.
At first glance it's easy to tell that the AI really likes red/orange, but it can still produce some cool colors here and there. Maybe in the future we'll manage to combine artists and AI to quickly produce colored manga.