This seems to have more info > One thing people might not realize (I'm not sure how obvious it is) is that these renders depend strongly on the statistics of the training data used for the ConvNet. In particular you're seeing a lot of dog faces because there is a large number of dog classes in the ImageNet dataset (several hundred classes out of 1000 are dogs), so the ConvNet allocates a lot of its capacity to worrying about their fine-grained features. In particular, if you train ConvNets on other data you will get very different hallucinations. It might be interesting to train (or even fine-tune) the networks on different data and see how the results vary. For example, different medical datasets, or datasets made entirely of faces (e.g. Faces in the Wild data), galaxies, etc. It's also possible to take Image Captioning models and use the same idea to hallucinate images that are very likely for some specific sentence. There are a lot of fun ideas to play with.
https://thispersondoesnotexist.com/
Yes it's exactly as mind fucking as you'd think upon reading the url.
When I have downtime at work I just go there.... there's probably something wrong with me.
So, after taking a look at this image in a lower resolution:
https://dreamscope-prod-v1.s3.amazonaws.com/images/16f02b2c-4a95-4626-95f6-bec98ddc7fef.jpeg
It looks like the original image was actually photoshopped to have the same guy's face repeated. I blew up the image and smoothed it with one pass through the neural network (a really hacky approximation of a super resolution neural network) and it looks like that is the case. For example, you can see the guy at the far left's face is far too big to be his actual head (because it's copy and pasted on).
https://dreamscopeapp.com/i/40331
Moral of the story? Never trust a deep dream without its original!
Looks like the OP might have used the 'charcoal' filter over at the dreamscopeapp service. I haven't figured out charcoal precisely yet (it's undocumented), but in addition to converting to monochrome, it appears to involve maybe 1 octave of 3a/3x3.
It has to do with the particular dataset the DNN in question was trained on. One of the most common ones, I believe GoogLeNet, happened to end up with a notable preference for dogs as a result of its training.
Other ones, like MIT's (a video produced using it is included in the sticky), appear to have been trained mostly on buildings, resulting in a notable preference for buildings.
If you were to produce your own DNN and trained it mostly on cats, then pretty much every image produced using it would contain cats.
The DNNs used for deepdreaming were originally intended for classifying images - taking an image and telling us what features are in it. DNNs are trained in a way that is (at least in an ELI5 sense) analogous to the way humans learn. Deep dream images are then produced by amplifying features in the image that the DNN detects. In an ELI5 sense, GoogLeNet has been taught to be very, very good at detecting dogs, and so if an image has a feature that is even vaguely dog-like by GoogLeNet's standards, then it gets drawn out. Thing is, GoogLeNet isn't actually that smart, so it has a tendency to detect patterns that aren't there, in a way that is (again in an ELI5 sense) analogous to human pareidolia.
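To make the "amplifying features" bit concrete, here's a very rough sketch of the mechanism. This is not the original Caffe notebook Google released - just plain gradient ascent on one layer of torchvision's GoogLeNet, and the layer choice, step count and learning rate are arbitrary illustration values:

```python
# Rough sketch of the DeepDream idea: pick a layer, then nudge the input image so that
# whatever that layer already responds to gets amplified. Illustrative values only.
import torch
import torchvision

model = torchvision.models.googlenet(weights="DEFAULT").eval()  # older torchvision: pretrained=True

activations = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: activations.update(feat=out))  # grab a mid-level layer

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from noise or a real photo
optimizer = torch.optim.Adam([img], lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    model(img)
    loss = -activations['feat'].norm()  # maximize the layer's response ("amplify the feature")
    loss.backward()
    optimizer.step()
```

Run long enough, whatever mid-level patterns the network has learned (for an ImageNet-trained GoogLeNet, lots of dog-ish texture) start surfacing in the image, which is the whole trick.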
I honestly don't know how. I use https://dreamscopeapp.com/ with a little bit of extra processing, and the app seems to spit out images that are smaller. I'm also not sure how to scale them up and keep the texture.
If anyone knows how, I'd love to hear it...
I don't think so - seems like the output of one of the lower layers in the network, cf. eg. this
Not a bother at all! The basic steps:
This is all extremely rudimentary, but I can paste up what I have into a GitHub gist if you're interested; it's more of a monotonically-increasingly-embarrassing chain of accidental discoveries than a production pipeline.
However, I'm tightening up how it works and with a few more attempts I might have a non-embarrassing, robust version I can share with you folks.
Thanks. Sure...
Source: https://photos.google.com/photo/AF1QipM-5aKdu1VQUyUTZvFrXkqfMj8Vg8ox7crxktZW
Image Used for Transfer: https://photos.google.com/photo/AF1QipPF-wNCTzYLze_0vWUrgpZfmB87qwaXbJXHW8nA
Credits:
Source: Photo by Ramakrishnan Nataraj on Unsplash
Style: The Great Wave off Kanagawa, Top left crop
Deepdream: @neural.pablo (myself) on IG for more.
I use this https://dreamscopeapp.com/
It's not so bloated as other sites (no waiting times) and it has multiple styles.
To change the layers you need to tweak the iterations and octaves (you can find a lot of details about that in the stickied thread), and be wary of which dataset you're "burning" into the computer - as you can see, most of this stuff has dogslugs and birds because that's what Google fed into the system. But the program is adaptable, and if you can find another dataset (there was a thread around showcasing such things) or even make your OWN (you need several thousand pictures first, though) you should be fine.
Currently Letsenhance.io is offering a few free upscales, and I liked how it performed. Believe me, I have installed the free Enhance program from GitHub, but it doesn't scale to this; these guys have it going on. This is not an endorsement, and I am not getting paid for this.
They operate in Germany, so I highly recommend them if you want to 4x upscale your styles. It's recommended for the professional neural-stylist :)
Cheers!
This is using neural-style from here. It was pretty easy to do, I used this guide and then ended up tweaking some parameters and trying a few different variations to get what you see.
Whoa! LOVING the current settings. Just dreamed this cat and cannot believe the results. Excellent blend of "painting" with "detail". @Bellamoid Have you used it in the past day and how do you like the results?
I use a simple website called https://dreamscopeapp.com - it is super fast (a couple seconds for a small pic, maybe 20 seconds for a large pic) and it has around 15 preset filters that you can use. If you hit something nice then you can download it; alternatively, you can upload the new pic you made and apply the same filter as before, in effect doing 2 or more runs with the same filter. Play around, and it's worthwhile to make a free account so you can have HD quality (no spammy emails or anything). Good luck!
Yes - on your phone!
Yes - precomputed :)
but still fun. hope you enjoy it.
https://play.google.com/store/apps/details?id=scratch.hex.net
:)
From https://news.ycombinator.com/item?id=9815873: "One thing people might not realize (I'm not sure how obvious it is) is that these renders depend strongly on the statistics of the training data used for the ConvNet. In particular you're seeing a lot of dog faces because there is a large number of dog classes in the ImageNet dataset (several hundred classes out of 1000 are dogs), so the ConvNet allocates a lot of its capacity to worrying about their fine-grained features." ConvNet here means Convolutional Neural Network.
Credits:
Source: Photo by Ramakrishnan Nataraj on Unsplash
Style: Night Attack on the Sanjō Palace (handscroll detail), top right crop
Deepdream: @neural.pablo (myself) on IG for more.
Not exactly the same by any stretch. The apps are using the same type of edge-detection filtering that Photoshop has been using for years. This algorithm is doing an awful lot more. You'll never find a phone app or photoshop plugin that can replicate abstract styling like this. Phones couldn't process it to begin with and it takes much too long for it to be a useful plugin for Photoshop. At best, you get this type of thing.
Hey, thanks! I used https://dreamscopeapp.com/, which has a bunch of different filters. It's pretty interesting to play around with passing an image through first one filter and then another. If you don't like dogslugs - stay away from 'trippy'. I hope you enjoy!
If you do it that way, and actually achieve it, you can get great results in theory. I don't know if OP used a setup like that or if he used an online service to render the frames, maybe he will explain. For non-coders like myself the best option is to go to dreamscope. It has some ok filters and it's rather quick (no waiting times). It's a great place to start, and it's fun to explore and discover.
Ninja edit: no question is stupid, only some of the answers are.
So, you did go into the BIOS and make sure virtualization is enabled, right?
Maybe you should try vagrant destroy again, then open up virtual box and delete any boxes left behind, then try vagrant up again.
If all else fails, I found a pretty great site to do the dreaming for you. I don't know how they do it, but it only takes about 15 seconds for it to process an image, and they have several different models as well.
Given it's not my image, I'm not quite comfortable with that... though I'd love to have a high-res desktop version.
I'm just using https://dreamscopeapp.com/, but it seems to downsize all the images... would anyone know how to achieve the same 'scale' of texture on something higher resolution?
It doesn't look like it uses DeepDream, and it doesn't provide any details on the website. Still, the software is pretty neat.
Oh, maybe it does, but it cheats by not running it very long or with very much data.
The trippy filter looks more like typical deepdreams, except it just seems to add a bunch of eyes.
Take a look at https://algorithmia.com/algorithms/deeplearning/DeepFilter and https://algorithmia.com/algorithms/media/VideoTransform
Right now it doesn't work on really long videos, but anything shorter than 5 min works pretty well. It's cheap, too.
EDIT: also pm me or ping us in intercom and reference this comment, we'll sort you out with some free trial credits.
Anything I do as pure deepdream, I add that flair to; this is a style transfer.
Here is the content (a coloring book page from Happy Color™ – Color by Number for Android) and the style.
Style and Content: https://i.imgur.com/DW4YLnJ.png
Over the past month I worked on a new social media app idea I had and came up with “Artisio.”
Artisio is free to download on the [App Store](https://apps.apple.com/us/app/artisio/id1639718637) and [Play Store](https://play.google.com/store/apps/details?id=com.buildloop.artisio)
I wanted to create a sort of hybrid between Instagram and all the AI photo generators, with a focus on paintings/drawings.
Select a preset of your choice (like Impressionism, gothic, modernism, etc…), fine-tune the AI if needed, and select or take a picture. Then watch your painting come to life!
The AI uses a combination of glid-3-xl and Azure Vision.
I would love feedback from the community! If you enjoy the app, please leave a review and/or consider upgrading to premium.
Sure, I just followed this guide provided by them: https://www.notion.so/Disco-Diffusion-AI-Art-on-Q-Blocks-e786a2ba47cf409390005e2dfc5180ec
It shows a 1-minute video to get you started. Simply launch a Disco Diffusion GPU instance and you get all the required software pre-loaded. I just set my input prompt and then let the discoart tool render images.
If you get
unable to import caffe python module (skimage.io not found)
at this point, the post above should fix that too. https://groups.google.com/forum/#!topic/caffe-users/LoplkeX-UXQ points to the answer: http://scikit-image.org/download.html
sudo pip install -U scikit-image
does this, and it should work as parallaxadaisical states, but for some reason it doesn't. Weirdly, if I run the program line by line in the Python shell it works, just not in the notebook.
Preceding the problem code with import skimage is a workaround.
From the first segment:
from google.protobuf import text_format
import skimage
import caffe
I got: NameError: name 'caffe' is not defined
but only once, and it was (I think) due to me running things out of order, so try again if you see it. On to the next errors - not there yet. :)
Probably not the solution you're looking for but I like the quality and ease of use I get with this service: https://letsenhance.io/ They have a batch function, so kind of pipelined. Not free, but really fast and high quality.
I struggled with the information provided in this feed, but after some research I found this article, which will take you through it step by step: http://www.makeuseof.com/tag/create-artificial-fever-dreams-googles-deepdream/
> I haven't the slightest idea about any of this.
Then why are you being so defeatist? I thought you were responding from a position of experience and had some reason to suspect that ONNX wouldn't work here.
ONNX is a format, not a language. "Converting" a model to ONNX with PyTorch just involves running a forward pass through the model and then invoking an export method.
https://pytorch.org/docs/stable/onnx.html
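For what it's worth, the generic shape of that export looks roughly like this - I haven't run it against the model in question, so the model and input size below are just stand-ins:

```python
# Generic PyTorch -> ONNX export sketch (per the docs above). The model and the
# 224x224 dummy input are placeholders, not the specific network being discussed.
import torch
import torchvision

model = torchvision.models.googlenet(weights="DEFAULT").eval()
dummy_input = torch.randn(1, 3, 224, 224)  # the exporter traces one forward pass with this
torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)
```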
I haven't actually performed the procedure myself, but that doesn't mean I "know nothing about it." Maybe you should actually click on some of those links I sent you. Crazy idea.
If you wanna give up before even trying, you do you. There are a lot of options available for you to pursue, or you could just pretend like you don't have a GPU at all, even though we live in a world where these models can run on your CELLPHONE because of tools like the ones I directed you to.
Maybe save the snark until you've actually tried something first. You're welcome.
Link to the original stock photo if someone wants to compare it.
https://pixabay.com/de/photos/gemischte-kampfk%C3%BCnste-sport-kick-1314503/
Certainly - they're both freely available on Pixabay. The first link is the image and the second link is the style (set to 80%):
https://pixabay.com/illustrations/daemon-face-scars-horror-spooky-5749778/
https://pixabay.com/illustrations/fractal-art-abstract-pattern-4772051/
I messed up and shared another version (now deleted) also created with style transfers but with a realistic style. This one is more appropriate for this sub ;)
Original Photo from Unsplash: https://unsplash.com/photos/CM1oVEUzsNM
WARNING
I believe that it is possible that either step 13 here or the latest version of Java nukes the system PATH variables. I have no way of knowing which it was, so if funky stuff starts happening - like not being able to launch QuickTime, or other software mysteriously breaking - you might want to check your PATH variables. I was able to recover mine using the following guide:
Image by Thomas Ulrich (https://pixabay.com/users/LoboStudioHamburg-13838/) from Pixabay (https://pixabay.com/).
I sent you the link to the Play Store, but if you don't know it, it is difficult for you to help me... Thank you for your interest.
https://play.google.com/store/apps/details?id=com.rubenmayayo.reddit
Gotcha! Yes, displaying the filename would make it really clear, wouldn't it?
I toyed with that idea but couldn't get it to look nice. Instead, I made the button stay white when something has been selected.
Alternatively, I could put a little logo, such as this one, that would appear only when an image has been selected.
What do you think? Would this be better?
Finally got GPU access! They opened up loads of instances in Frankfurt. The only problem is that spot pricing currently for g2.8xlarge is around $30/hr and none of the P2 machines are available in that region. This link apparently says that Defined Duration usage in Frankfurt for g2.8xlarge is about $1.698/hr but that doesn't show when I actually reserve for a duration.
What the heck...getting a weird error when it tries to grab a package that I just can't seem to find a Google answer for....
PS C:\hashicorp\vagrant\bin\image-dreamer> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'data-science-toolbox/dst' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
The box 'data-science-toolbox/dst' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
vagrant login
. Also, please double-check the name. The expanded
URL and error message are shown below:
URL: ["https://atlas.hashicorp.com/data-science-toolbox/dst"]
Error: SSL certificate problem: unable to get local issuer certificate
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.
> Thx, didn't know Adam could be as good as lbfgs. Does it need specific param tweaking depending on the specific input, or you've found a set that always performs best?
Other users have reported that my suggested parameters work well for a variety of input images, and my own testing shows that my parameters work well with even more input images than the default parameters. In fact, my issues with some types of style images are what drove me to conduct research on how Adam affects style transfer.
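To be clear about what "switching to Adam" means in practice: the optimizer is basically a drop-in choice, and the learning rate is the knob that needs tuning. The toy sketch below is only an illustration of that point in PyTorch-flavoured pseudocode - it is not the Lua neural-style tool being discussed, and the loss functions are placeholders:

```python
# Toy illustration only: swapping L-BFGS for Adam in an image-optimization loop.
# style_loss/content_loss are placeholders for the usual style-transfer losses.
import torch

def style_loss(img):    # placeholder for the usual Gram-matrix style loss
    return img.std()

def content_loss(img):  # placeholder for the usual feature-reconstruction content loss
    return img.mean()

img = torch.rand(1, 3, 512, 512, requires_grad=True)  # the image being optimized
optimizer = torch.optim.Adam([img], lr=1e-2)  # torch.optim.LBFGS([img]) is the L-BFGS
                                              # counterpart (it needs a closure in step())

for step in range(200):
    optimizer.zero_grad()
    loss = style_loss(img) + content_loss(img)
    loss.backward()
    optimizer.step()
```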
> https://dreamscopeapp.com/i/7jwJhMsxTR
> it's a recent one, so I don't think they toned it down... and I agree with you, it completely SUCKS.
I guess the recent examples that I was looking at just had poor facial feature detection, then.
Thx, didn't know Adam could be as good as lbfgs. Does it need specific param tweaking depending on the specific input, or you've found a set that always performs best?
I've browsed dreamscope and found one example of what you say:
https://dreamscopeapp.com/i/7jwJhMsxTR
it's a recent one, so I don't think they toned it down... and I agree with you, it completely SUCKS.
You can just re-apply filters again and again on https://dreamscopeapp.com. Just keep clicking "Apply" and, when you're happy, "Post".
For example, here's one that I applied the "trippy" filter to a few times:
Applied some Deep Dream filters to some of the lower res public facing gallery images of Edward Burtynsky. I discovered him through his documentary 'Manufactured Landscapes' which is/was on Netflix. His stuff is super well composed, strangely fantastical and makes for really good deep dreaming.
Some of them got multiple filters, and all were treated with Dreamscope.
I used this site: https://dreamscopeapp.com
I layered some of the effects that don't seem to include the dogs/various animals. Try playing around with Art Deco, Salvia, or Botanical Dimensions. Have fun!
Do you plan on making this website high-capacity? If you opened up the settings a bit more, this would be a vastly more powerful tool. I love how convincing the paint effects can be.
Edit: I tried a picture of a tunnel and it came out looking like a butthole.
Our server currently takes between 15 seconds and 5 minutes depending on load and the process that you choose. Right now it's more like 30 sec per deep dream, which is only slightly longer than running it on your own GPU.
Prisma seems to currently have the fastest implementation of the Deep Style algorithm, but even with that you're looking at less than 1 frame per second while using 100% of your phone's processing power (and that's assuming you could copy it - it's not open source). If you specifically mean the Deep Dream algorithm, then the outlook is even worse: several minutes per frame (for any reasonably high resolution) even when running on a desktop graphics card.
Write a bash script that paints the previous frame and then rotates and crops the current frame a few pixels each time.
The neural-art script was made by jcjohnson at github. https://github.com/jcjohnson/neural-style.
Imagemagick to resize rotate and crop images. http://www.imagemagick.org/script/index.php
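If bash plus ImageMagick isn't your thing, the same loop can be sketched in Python. This is just one hypothetical way to wire it up, with Pillow standing in for ImageMagick; the neural-style flags are jcjohnson's, but the values and file names are examples only:

```python
# Hypothetical zoom-animation loop: style the previous frame, then rotate and crop a
# few pixels so each new frame dives a little further into the image.
import subprocess
from PIL import Image

prev = "frame_0000.png"  # your starting image
for i in range(1, 50):
    out = f"frame_{i:04d}.png"
    # 1. paint the previous frame with neural-style
    subprocess.run(["th", "neural_style.lua",
                    "-content_image", prev,
                    "-style_image", "style.jpg",
                    "-output_image", out], check=True)
    # 2. rotate a touch and crop a few pixels, then resize back up (the zoom step)
    img = Image.open(out)
    w, h = img.size
    img = img.rotate(1, resample=Image.BICUBIC)
    img = img.crop((4, 4, w - 4, h - 4)).resize((w, h), Image.LANCZOS)
    img.save(out)
    prev = out
```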
I finally gave in to the temptation to do some deep dreaming - couldn't let everyone else have all the fun. :) Was able to get it all installed and working thanks to this post by /u/senor_prickneck
Still getting used to the process and all the options. Sure is fun to let it rip on an image and see what happens.
This image used these settings: iter_n=12, octave_n=4, octave_scale=1.5. Ran three passes, then took the result into Photoshop and restored parts of the original image. The original image was one of my own snapshots from McDowell Creek Park in Oregon.
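For anyone using the original Google notebook, those settings are just keyword arguments to its deepdream() function - deepdream() and net are defined in that notebook, and the file names here are placeholders:

```python
# How the settings above map onto the original notebook's deepdream() call.
# deepdream() and net come from that notebook; 'input.jpg' is a placeholder.
import numpy as np
import PIL.Image

img = np.float32(PIL.Image.open('input.jpg'))
for _ in range(3):  # "ran three passes"
    img = deepdream(net, img, iter_n=12, octave_n=4, octave_scale=1.5)
PIL.Image.fromarray(np.uint8(np.clip(img, 0, 255))).save('output.jpg')
```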
I used this guide: http://ryankennedy.io/running-the-deep-dream/
The zooming-in-image-series is part of that guide.
Then I used http://slowmovideo.granjow.net/download.php to make it smoother (i.e., add frames in between the real frames). That also added the glitches near the border of the images.
Apologies for the delay, things are a little on their ear on this end.
Now, let's talk about extracting frames in more depth. Deepdreamanim is designed to work with FFmpeg, which (I've been led to believe) can split videos into frames; however, I have been unable to get that to work on my end.
I've been using virtual dub to do all of the frame extraction and recompilation. (http://www.virtualdub.org/)
Open VirtualDub, open the video file you want the frames from, and then use File > Export > Image Sequence. A window will pop up asking where you want to store your frames and what format you want to save them as, and it will give you the opportunity to change the file name output. You'll want to do that, but I'll come back to it.
Then clicking okay will generate the frames you're looking for. The number and size of the frames will depend on the quality of the video you started with. I know I can do stuff in about 480 because my rig is getting a bit aged.
But, at this point you should have a folder full of png files ready to be processed.
Something else to note is that VirtualDub is kind of a gold standard for video processing, to my knowledge, and is widely supported with mods and other stuff. If it's a video, you can probably work with it in VirtualDub (with a little work).
Naming stuff. Something I glossed over is that the input files need to have a specific filename convention like this:
frame_000041.png
This is one of the things that deepdreamanim is looking for. If your files aren't named like that, it probably won't do much with them.
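If whatever you used to extract frames named them differently, a tiny rename pass gets you to that convention - this helper is purely hypothetical, just the obvious way to do it:

```python
# Hypothetical helper: rename exported frames into the frame_000041.png pattern
# that deepdreamanim expects. Adjust the folder and extension to what you exported.
import os

src_dir = "frames"
frames = sorted(f for f in os.listdir(src_dir) if f.lower().endswith(".png"))
for i, name in enumerate(frames, start=1):
    os.rename(os.path.join(src_dir, name),
              os.path.join(src_dir, f"frame_{i:06d}.png"))
```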
Edit:
I wrote a post about how to give the command to start processing frames, but I lost it when I edited this post. I'll come back to that in the next day or so. Do please let me know if you get frames extracted!
Going to assume that you've got the IPython notebook script working, as well as dreamer.py on the command line, and/or that you know how to twiddle with code. Personally I'm using the Anaconda method because I don't know enough about VMs to get the Vagrant method working. A bonus with the Anaconda method is that it uses the GPU.
Long story short - ffmpeg for Windows is a different beast to ffmpeg for Linux. Don't use ffmpeg. Well, only use it to recompile processed frames into video at the end. There are plenty of other programs that do what you want it to do. Find something to extract individual frames from the video file (VLC does it; personally I use Free Video to JPG Converter because I'm a script-kiddie n00b and it's less likely to get errors than VLC), and then something like Bulk Rename Utility to get the names to whatever you want (e.g. deepdreamvideo needs files to be named 00000001.jpg etc.), and then you can dreamify the files.
After that you'll probably need to go into the .py files and tweak them to point to exactly where your caffe models are. Once you've got that done, it's clear sailing. Bon voyage!
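For reference, that tweak is usually nothing more than the model-path lines near the top of the script, along the lines of the original notebook (substitute wherever your caffe models actually live):

```python
# Point the script at your local GoogLeNet files (paths are examples, not gospel).
model_path = '/home/you/caffe/models/bvlc_googlenet/'
net_fn   = model_path + 'deploy.prototxt'            # network definition
param_fn = model_path + 'bvlc_googlenet.caffemodel'  # trained weights
```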
> but I made a mistake during the conversion from TensorFlow which caused the upper layers to be lost.
Can't you just import GoogleNet? Seems to be the same model.
I'm sure some programmer out there could semi-automate the process, by having an app create a secure shell to the linux box, dump a file with a few arguments, and spit out an image.png.
By repeating text strings, I mean creating a file in Notepad (better if you use http://www.editpadlite.com/ - something about the way Linux handles new lines, and Notepad's inability to properly create them).
So, say you just ran this:
vagrant@data-science-toolbox:/vagrant/00p205$ python dreamify.py 000.jpg 001.png
...and you like the output, but the effect is kind of weak.
You repeat the command, but this time you type it all into a text editor, like this:
python dreamify.py 001.png 002.png; python dreamify.py 002.png 003.png; python dreamify.py 003.png 004.png; python dreamify.py 004.png 005.png; python dreamify.py 005.png 006.png; python dreamify.py 006.png 007.png; python dreamify.py 007.png 008.png
...then save it in the image-dreamer folder. Give it a name like makemore.sh.
You just created a script that tells Linux to take the new ...001.png, run dreamify on that and put out 002.png, then take 002.png and make it into 003.png (etc.). The semicolon tells Linux to wait until one command is done, then run the next one without you having to interact. If one image took 5 minutes, the above script with 7 commands would take 35 minutes.
So now you can go back to the terminal and type this:
./makemore.sh [enter]
...come back later and see the new files -- each a bit more intense than the previous one.
> I thought it might be beyond me but these python scripts look manageable.
I mean...kinda. Using the scripts is relatively easy; you basically just have to call a few functions and let the NN do the rest. If you are really interested and have a few hours to spare, you should start by installing and setting up Anaconda and then installing TensorFlow.
You can PM me once you have done that and I will send you my script with instructions.
There is an Android app called Fraksl that lets you create and interact with those kinds of animations. You might have to play around with it for a minute to get the hang of it, but it's great fun!
Perhaps you should look into Stochastic and Serial forms of composition. While they are not exactly machine learning, they are fairly inhuman and mechanical genres that often include sophisticated algorithms in the compositional process.
A good starting place for stochastic composition is Iannis Xenakis' book "Formalized Music". It is quite a dense and difficult read that requires a thorough understanding of statistics and computer programming, and quite a bit of patience; but it is worth the effort. https://www.amazon.com/dp/1576470792/ref=cm_sw_r_cp_apa_tF9pAbY02TZPD
Luckily serialism is quite a bit more accessible as it is a little older, and there are a number of good books on the topic. I recommend Grant's book but any book about 20th century compositional techniques will discuss serialism. https://www.amazon.com/dp/0521619920/ref=cm_sw_r_cp_apa_8J9pAbNKDHCC2
Keep in mind that most musical machine learning trials have used incredibly complex fugal counterpoint as information stems, and expected output music that resembles the input. Doing a similar test with strict serial pieces as stems, while being significantly less "traditionally musical", will potentially output more interesting results.
There is also this app, Fraksl, which is pretty darn good and could do with music integration, and maybe a Cardboard mode that would allow for interaction with a gamepad.
https://play.google.com/store/apps/details?id=com.workSPACE.Fraksl
Shameless self-promo:
Just released Stylish for android: https://play.google.com/store/apps/details?id=com.voxeloid.stylish
It's using the fast-forward flavour of neural style (like Prisma), and everything is processed locally - no private images sneakily being uploaded to a server. (And it only takes a few seconds on modern phones.)
The app is free, and so far there aren't even ads in it.