I did this for some of the places I've lived in or visited; it's quite magical being able to return to them in VR and feel like you're back there!
I got the best results with photogrammetry, a decent DSLR, and taking lots of photos (hundreds or even thousands for large areas; I find it best to just move in parallel lines and take tons of photos, then filter them out by quality), but it also kinda depends on the place.
If you have lots of areas with uniform colors (e.g. white walls with nothing on them) those won't generally reconstruct too well so it's something to consider.
But if the room is decorated and has a bunch of things in it, you can get really good results with this method.
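The "filter them out by quality" step above can be partly automated. Here's a minimal sketch of one common approach, ranking photos by a variance-of-Laplacian sharpness score so the blurriest shots get culled before reconstruction. It works on plain 2D grids of grey values; with real photos you'd load pixels through a library like Pillow or OpenCV (not shown), and the 80% keep ratio is just an example value.

```python
def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian of a 2D grey-value grid.

    Higher values mean more fine detail, i.e. a sharper image.
    """
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (gray[y - 1][x] + gray[y + 1][x] +
                   gray[y][x - 1] + gray[y][x + 1] -
                   4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def cull_blurry(images, keep_ratio=0.8):
    """Keep the sharpest fraction of (name, pixels) pairs."""
    scored = sorted(images, key=lambda it: laplacian_variance(it[1]),
                    reverse=True)
    keep = max(1, int(len(scored) * keep_ratio))
    return [name for name, _ in scored[:keep]]
```

A high-detail photo (lots of local contrast) scores far above a flat, blurred one, so sorting by this score and dropping the tail is a quick first pass before feeding the set to the reconstruction software.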
Personally I use Agisoft Photoscan (now called Metashape) for the 3D reconstruction. There are also Reality Capture and Meshroom (which is free); I've played with both but don't like them as much, though people get good results with them too.
Yes, Metashape has a perpetual license: $179 for the Standard edition.
"Metashape license includes 12 month of e-mail based technical support and entitles the licensee to free updates of the software up to the version 1.9.x " (https://www.agisoft.com/buy/licensing-options/).
Hey! Here you can find some guides for taking photos for photogrammetry... 1.Tested.com 2. Agisoft guides
The 2nd one should be informative... Especially Chapter 2. Good luck!
Some software, like Reality Capture, only runs on Windows and requires an Nvidia GPU (CUDA). Metashape is more cross-platform, with Windows, Mac, and Linux installers. Meshroom technically runs on Mac but requires some extra work, and it still requires an Nvidia GPU.
IDK if you meant you intended to try processing scans while mobile, but be aware that any of this processing will slaughter your battery. Expect to operate plugged into the mains.
Storage - depends on quality/shot counts/workflows. I'm using 60-200 shots per model to try to get pretty high-quality results (scans >50 million polys), baking down to realtime-ready models using Marmoset, Substance Painter, PS, etc.
The entire project directory (RAW photos, highpolys, lowpolys, texture bakes, intermediate models, final outputs etc.) for each 'asset' ends up being 10-20GB.
Again, YMMV depending on your exact workflow and model specifics. They can blow out pretty fast (I have some at 100GB+).
It's probably worth investing in a decent external SSD anyway; I've been using a Samsung T5 and T7 with good results.
Photogrammetry is essentially dimensionless. If real dimensions are important, markers are needed. They are placed outside the object but used in the first pass to determine the scale, then edited out of the final model.
https://www.agisoft.com/images/photo_tutorials_09.png
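The scaling step described above boils down to simple arithmetic: the reconstruction comes out in arbitrary units, so a known real-world distance between two markers gives a uniform scale factor for the whole model. A minimal sketch, where the marker coordinates and the 0.5 m spacing are made-up example values:

```python
import math

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def scale_factor(marker_a, marker_b, real_distance):
    """Ratio that maps model units onto real-world units."""
    return real_distance / distance(marker_a, marker_b)

def scale_model(vertices, factor):
    """Apply a uniform scale to every vertex."""
    return [tuple(c * factor for c in v) for v in vertices]

# Two markers reconstructed 2.0 model-units apart, known to be 0.5 m apart:
f = scale_factor((0.0, 0.0, 0.0), (2.0, 0.0, 0.0), 0.5)   # 0.25
scaled = scale_model([(4.0, 8.0, 0.0)], f)                 # [(1.0, 2.0, 0.0)]
```

In Metashape Pro this is what scale bars do for you automatically; the sketch just shows why two surveyed points are enough to fix the scale of the whole scene.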
Agisoft Metashape standard is a great perpetual license option. The standard edition has about 90% of the capability of the professional edition, so you're unlikely to find it lacking if you just want to create simple models at home. Educational editions are also available if you're an educator or a student, which can bring down the price a bit.
The generation, texturing, and exporting of 3D models is identical between both versions; the pro version does not give you any benefit there. However, the standard edition gives you fewer tools to tweak the process or analyze the results.
As a quick overview, manual markers, scale bars, and coded/non-coded targets can't be used without upgrading to pro. Photos and videos are supported as inputs with the standard edition, but advanced sensors are not (multi-camera systems, multispectral/thermal, etc). Python scripting/headless operations are also pro-only features.
A free 30-day trial is available for both the professional version and the standard version. If you're smart, that means you can get 60 days of free trials (first one version for 30 days, then the other). I suspect you'll find that you're not using the professional features unless you're truly using the software in a professional or high-throughput setting.
You can find a complete outline of the differences between the two editions here.
I'd recommend either Reality Capture, or Agisoft Metashape if you're looking to use CUDA to speed up your processing time.
And yes there's a few image sets from varying sources, here's some commonly used ones:
£10k is around the budget I think I'm looking at tbh. I don't think any more than 2 GPUs would be necessary, the program I use is mainly just a RAM hog.
I wasn't really aware of how 2+ GPUs worked tbh, that's part of why I posted. I only really know anything about building gaming pcs, and even then I'm pretty amateur. That being said, given the amount of money potentially being spent on this thing and the fact that I am a gamer: it would be nice if it was also the best gaming rig it could be too. e.g. Given that I probably don't need more than 2 GPUs even with 256GB+ ram, it seems like it would make sense to have them set up so games could take most advantage if possible.
Here is the requirements page for the main software I use, RAM really is the bottleneck in a big way. CPU and GPU are definitely important too, but so long as it has the memory it will still chug through it. Without the memory, it just errors out and will not process larger datasets at high precision.
tbh I might have been a bit overenthusiastic there, but definitely the more RAM the better. I've previously sent big datasets out to paid compute services, but it would be nice to be able to do it in-house in a reasonable amount of time.
I say 512GB because that's what I was able to come up with messing around on PCPartPicker, but that was a dual-socket Xeon rig with only a single GPU.
I know the ram is going to be horribly expensive. I'm budgeting around £10k for this machine, with at least half that going on ram. It's worth noting that this will pay for itself quite quickly, despite being a huge kick in the nuts up front.
Here are the system requirements for the main software I use. While it does talk about 128GB+ being "extreme", I find that high precision processing of large datasets still needs more. Especially if using RAW format input photos.
Metashape doesn't need Nvidia CUDA although it does run faster with it. And your S9 has a fantastic camera compared to almost any other phone. Even my s20 ultra camera sucks compared to my S9.
I've gotten tons of fantastic photogrammetry models from images taken on my S9, so don't listen to the purists who tell you that you need a full-frame mirrorless or DSLR, etc. Like any hobby, you're going to get people who think there's only one way to skin a cat, and it's the expensive way. Helps them sleep at night, given all the money they spent on gear.
I don't know that there's any advantage to 3D scanning these electronics boards, but that doesn't mean you shouldn't try.
You can request a 30-day fully functional trial of Metashape, which is reasonably priced already:
It would help to read the manual: https://www.agisoft.com/downloads/user-manuals/
Aligning the cameras creates the sparse cloud. The dense cloud is the higher resolution version, ie. more accurate.
About scanning your friend.... please understand you can't actually stand still. You think you are still, but you are not at all still. Your skin is moving slightly as blood pumps through it. Your muscles twitch slightly. Even if you are holding your breath, you are still absorbing the gas in your lungs, and that is changing your shape. It's enough that it ruins the photogrammetry, because the algorithms work at pixel-level resolution.
On top of not being able to stay still -- you moving around the room to take the pictures is going to fuck up the lighting and shadows, further hurting your model.
People build those expensive multi-camera rigs because it's literally the only way to accurately capture a living being. It's at least $10,000 to do a passable job. It's likely more than $100,000 to get enough cameras and lights to get something high quality.
Please do some research on this topic before making assumptions about the capabilities of the software.
Incorrect. Metashape does not limit any functionality in the 30-day trial version. In the demo version, one cannot save or export, but there is no limitation on functionality whatsoever. https://www.agisoft.com/downloads/request-trial/
It is possible in the pro version of Metashape, but you need a license for all three PCs. Check the forum and manual on how to set up network computing. Mind, Agisoft is the company, not the product. They have two main products and in Metashape Standard, you cannot do this. Check the comparison here: https://www.agisoft.com/features/compare/
Yeah, most need an Nvidia card. There are some cloud-based ones too, I think, but they can be quite expensive.
And I think you don't need CUDA for this one, but I have not tried it myself: https://www.agisoft.com/downloads/system-requirements/
I think I was mistaken earlier, it wasn't Puget systems but the actual metashape user manual that states this.
https://www.agisoft.com/pdf/metashape-pro_1_7_en.pdf
If you open the user manual and search for "disable CPU", on page two it suggests: "Use CPU enable flag to allow calculations both on CPU and GPU for GPU-supported tasks. However if at least one powerful discrete GPU is used it is recommended to disable CPU flag for stable and rapid processing." Also, in the actual Metashape GUI, under Preferences -> GPU, it suggests: "Warning: When using dedicated GPUs please turn off integrated GPUs and CPU for optimal performance."
You have me curious though, I will try to run a smaller model later on my machine with cpu enabled/disabled and I will post my results to see which option is faster.
photographer ≠ photogrammetrist
For right now, photogrammetry is a learned skill involving more than the camera hardware. Photogrammetry is for data acquisition, not the emo-feelies lighting effects of photography. In fact, for photogrammetry you want flat, dull, diffuse lighting, like an overhead sun coming through a thin cloud layer. No shallow depth of field; keep focus as sharp as possible along the length of the object.
Agisoft has a tennis shoe sample set. They deliberately don't go all around or get the bottom. Just try that sample data set, then duplicate the same environment locally.
https://www.agisoft.com/downloads/sample-data/
It is so bizarre to see this "new, no gotz stinking Nvidia GPU, what is bestest app?, gotz iPhone 12...". Uh, shouldn't you be asking these questions in the Apple subs? Kind of funny seeing the AMD Rulez! H8 Nvidia! campaign lasting for so many years.
Autodesk ReCap is a cloud app, so it probably doesn't require a CUDA GPU. They charge for export output, the way they are all going. If Metashape runs, then the Agisoft sample dataset should produce good results in the preview. The Metashape demo is gimped by a 50K-polygon limit on the exported 3D model.
Cool! Have you tried building the model directly from depth maps? Sometimes it leads to better results (and it is faster).
Also, the Metashape 1.7 pre-release can lead to better results (the depth maps method was changed, and the model from depth maps is now much better in many cases for me).
> I understand with current photogrammetry tech this is not possible unless done professionally
No.
You use markers if dimensional accuracy is critical.
https://www.agisoft.com/fileadmin/templates/images/downloads_04.jpg
Why have software developed to do photogrammetry when there are already several programs? It's not the cell phone, it is the camera inside it: the quality and resolution of the images it takes. For the foot, it is the same problem as with all other methods: there isn't any easy way to 3D scan the top and bottom at once. You could cut it off and put it on a turntable upside down, but that might be defeating the purpose...
It's possible that they had ideal conditions for shooting their footage, but I suspect there was a lot of digital post-shoot editing to prevent the sky from blinking light and dark, blurring the foreground, and perhaps even doing some digital inbetweening to keep the motion looking smooth.
You may have all the source footage needed. Current digital effects can really blur the boundary between photography and digital graphics. Tools like "Metashape" https://www.agisoft.com/ can create 3D models from 2D images, that can be rendered back to 2D with smoothed motion.
You should look into delighting your texture(s).
There are some pretty easy to follow guides, and it would probably improve the look a lot.
I've only done the process with Metashape, but I'm sure it's similar for other software suites. Unity has a free tool for this, and there are probably other free solutions out there.
https://assetstore.unity.com/packages/tools/utilities/de-lighting-tool-99583
It's not that hard tbh. This scene looks like it should be good enough for photogrammetry:
1. Take an image of the background.
2. Take pictures of the model (rotate it slightly for every photo).
3. Download a trial version of Agisoft Metashape (or download Meshroom, which is free, but then you will have to mask all images from the background).
4. Process the images according to this tutorial: https://www.agisoft.com/index.php?id=49
5. Export the mesh to Sketchfab.
And that's it [simply put].
Yeah, you can do it in Meshroom by adding some nodes to the geo section, I think,
but I'm more of an Agisoft guy though.
I know Meshroom is free, but Agisoft is really cheap, and you can get a student version very cheap,
and you get way better tools.
https://www.agisoft.com/buy/online-store/educational-license/
https://www.agisoft.com/index.php?id=49 This is the explanation from Agisoft themselves. Maybe if you make a mask of the background, then use your DSLR to take pictures in a “turntable method” type of configuration and then use this tutorial, your results will improve.
It’s mostly trial and error, so you are either missing something trivial or there is a minor error that you just didn’t notice.
It's a software called Agisoft Photoscan; it creates a 3D model from still photos. https://www.agisoft.com/pdf/PS_1.1%20-Tutorial%20(BL)%20-%203D-model.pdf
He then used that model in Unreal Engine 4 on what appears to be a demo level.
That wouldn’t be very difficult using a quality digital SLR camera, the overhead gallery walkway and affordable photogrammetry software like Agisoft Metashape.
I used CloudCompare, and followed this guide, which made the process really simple.
I did the texture baking of the two halves onto one mesh in Blender. The problem was that some shadows/occlusion were in the scans, so when I baked the textures, that data remained in the texture. I ended up running the mesh with its texture through Agisoft De-Lighter and it fixed all my problems.
They are similar to the ones used in the Agisoft tennis shoe demo dataset.
https://www.agisoft.com/downloads/sample-data/
I don't know where Agisoft came up with them, but looking at them, it makes total sense. I have used target markers in the past and they did seem to help with the scan solution, but this experiment surprised me with how much "smarter" it makes automasking. I didn't know about the Zephyr ones; will look at them.
Unfortunately, that would be because such a tool doesn’t really exist.
I have heard of a script for ImageMagick called "magic wand" (or similar) that works like the magic wand selectors in photo editor apps. You script it to start at a certain position and it creates the selection, and then you add more scripting to change those pixels to a solid colour or to transparent.
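The "magic wand" idea is just a flood fill: starting from a seed pixel, select every connected pixel within a colour tolerance, then blank the selection out (e.g. to knock out a background). A toy version in plain Python on a grid of grey values, as an illustration of the technique rather than the actual ImageMagick script:

```python
from collections import deque

def magic_wand(grid, seed, tolerance=10):
    """Return the set of (y, x) pixels 4-connected to `seed` whose value
    differs from the seed's value by at most `tolerance`."""
    h, w = len(grid), len(grid[0])
    sy, sx = seed
    target = grid[sy][sx]
    selected, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in selected
                    and abs(grid[ny][nx] - target) <= tolerance):
                selected.add((ny, nx))
                queue.append((ny, nx))
    return selected

def clear_selection(grid, selection, fill=0):
    """Overwrite the selected pixels, e.g. to flatten the background."""
    for y, x in selection:
        grid[y][x] = fill
```

Seeding in a corner of the background and clearing the resulting selection is the scripted equivalent of one magic-wand click per image.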
You don’t need to alter the image dimensions.
If you’re taking the photos yourself, you’ll probably get better results by starting off with better images. See https://youtu.be/Il6LVXqSlRg and https://www.agisoft.com/support/tips-tricks/
Not too bad.
Try shooting “the void” to avoid needing to mask by hand.
Sometimes a photo just doesn’t want to be part of the model and that’s usually ok. Using some coded targets can help speed up alignment as well as align photos that won’t align otherwise. https://www.agisoft.com/pdf/PS_1.1_Tutorial%20(IL)%20-%20Coded%20Targes%20and%20Scale%20Bars.pdf
Here is a prime example of why to use the Agisoft data sample set as a benchmark. Go to samples page and there is an example of a tennis shoe, 6 photos, with 3 markers.
https://www.agisoft.com/downloads/sample-data/
The person taking the photos didn't go all the way around the tennis shoe, but the 3D mesh that is visible has some incredible detail. I'm not sure what the significance of the markers is, but they really seem to help the point-cloud determination with so few photos.
3D scanning can be benchmarked, versus 3D printing, which really cannot be benchmarked.
The Agisoft sample data is a way to reference the performance of different photogrammetry software on the same dataset.
When I started learning photogrammetry, I couldn't get any of my own datasets to be accepted. My reasoning was that the sample datasets do get accepted 100%, so I started trying them out. One interesting outcome was finding that a dataset from one software wouldn't be satisfactorily accepted by another software. It is very informative to examine the camera-position solutions for these datasets, and what kind of lighting they were taken under. This doll one kind of violates all the best practices (dimly lit room, uneven distancing, even frames cutting off parts of the subject), but it has all the elements to be 100% accepted. One thing though: there is a big hole under her chin, which the camera angles never caught.
Agisoft sample data
https://www.agisoft.com/downloads/sample-data/
> is it really made with only 5 pictures
No, it's a full 26 pictures :P So it's still pretty impressive, especially since several of the pictures are almost identical.
> I'll have to check it out in VR when I get home
I haven't had the chance to try it myself, since I don't have a VR headset, but you should probably expect to feel like a giant; at any rate, it's set up so that you're 30 metres tall or something like that :P
> May I ask which programs you use for photogrammetry?
Right now it's a trial version of Agisoft Metashape
https://www.agisoft.com/index.php?id=71
> Agisoft De-Lighter is a free stand-alone tool designed to remove shadows from model textures. The tool is targeted on removing cast shadows and ambient occlusion from 3D models. It requires user input in a form of rough brush strokes marking lit and shadowed areas. All input can be provided within Agisoft Delighter without need for any extra data such as ambient occlusion maps. De-lighting algorithm is optimized for 8-bit JPEG compressed textures and does not require specific image formats with higher color depth.
>(industry standard is 30% side and 60% forward)
Is that the case for forestry mapping? For volumetric surveying we go way higher than that for side overlap, and higher for front too.
Agisoft suggests "at least" 60% side / 80% forward overlap (warning: PDF), and Pix4D is pretty similar: 75% frontal and 60% side overlap, bumping that up to "85% frontal and 70% side overlap for forests".
The one I have used in the past and think works really well is https://www.agisoft.com/
more recently I have seen people talk about Autodesk® ReCap™ https://www.autodesk.com/solutions/photogrammetry-software
Check out some of the Tips & Tricks from Agisoft, a photogrammetry software company. Most apply to all photogrammetry software, but some are specific to their own products.
Make a base mesh in Maya, bring it into ZBrush or Mudbox, add extra sculpting details, export your maps, add extra maps in Substance Painter or Photoshop for color and other channels, then have fun setting up shaders and rendering it inside Maya.
Or, like the guy in the video, take plenty of pics of a real seashell from every angle, put them in software like Photoscan, and you have your seashell ready.
A Phantom 4 Pro camera is 20MP and it looks like the D3500 is 24.2MP. So you'll definitely see an increase in times over the Phantom. Metashape is also pretty bad at estimating processing times, so it probably won't be that long.
Double check your settings to make sure that your GPU is enabled in the MetaShape preferences. I'd let the computer run for a full day to see if you get beyond the depth map process. You might actually run into issues with only 64GB of RAM and that might be causing your processing to seize. Ultra-high is really RAM heavy and you might not have enough for those photos, depending on some other factors. It doesn't look like Agisoft has updated their charts, but you can see that the recommended RAM is way above what you're using.
The under-$500 depth-sensor 3D scanners have a long minimum throw, about 2 feet, which makes them really unusable for small things, like something that is 1 foot tall. Agisoft has a sample dataset with a porcelain doll about that size. The trial of Photoscan/Metashape lets you use it and see what the solution looks like, although its exported output is a uselessly low-polygon 3D model. The dataset can also be used with the 3DF Zephyr Free version, limited to 50 frames.
https://www.agisoft.com/downloads/sample-data/
There is an even simpler way which I had totally forgotten about, but it will work perfectly for your application! Check the following link and download the banana data set:
https://www.agisoft.com/index.php?id=49
Have a look through the images to see how they were taken. Basically, you take one image without the object; then, without moving the camera, you add the object and take an image. Then you move the object and take another, and so on, until you've taken an image from all angles. Then you bring them into Metashape, mask the first (empty) image, and align them on "High" with "Apply masks to Tie Points" checked.
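The masking step above can be illustrated with a simple background difference: pixels that barely change between the empty shot and the object shot are background and get masked out. Metashape does something far more robust internally; this toy version on grey-value grids just shows the idea, and the threshold of 25 is an arbitrary example.

```python
def background_mask(empty, with_object, threshold=25):
    """Return a mask grid: 1 where the object is (pixels that changed),
    0 for background (pixels close to the empty reference shot)."""
    return [
        [1 if abs(a - b) > threshold else 0
         for a, b in zip(row_empty, row_obj)]
        for row_empty, row_obj in zip(empty, with_object)
    ]

# Empty reference shot vs the same framing with an object added:
empty = [[200, 200, 200],
         [200, 200, 200]]
scene = [[200, 80, 200],
         [200, 75, 198]]
mask = background_mask(empty, scene)   # 1s only where the object appears
```

This is why the camera must not move between the empty shot and the object shots: the per-pixel comparison only works when the background lines up exactly.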
A tutorial can be found here: https://www.youtube.com/watch?v=rYm4mxpbNvQ
30 day trial for Metashape: https://www.agisoft.com/downloads/installer/
When you start Metashape for the first time, before you load any images into it, do the following:
Go to: Tools -> Preferences -> GPU and tick the checkbox for your GPU. That way Metashape will use your GPU for processing and it will be much faster!
If you have any questions feel free to ask! :)
Hi there!
As mentioned above, it is not good practice to use flashes; instead, you can use the Agisoft Texture De-Lighter utility after building the model with texture. Can you please write what kind of error you get when processing in the Cloud?
https://www.agisoft.com/forum/index.php?topic=7968.0
> You can use spherical panoramas in equirectangular representation in PhotoScan. All you need is to load images, then go to the Tools Menu -> Camera Calibration window and switch the Camera Type from Frame (default) to Spherical for the calibration group that corresponds to the panoramic images.
> Then process as usual: Align Photos, Build Dense Cloud and etc.
> However, I also suggest to apply the mask (could be the same for all the images) to cover the unused area in the bottom on the images
a free alternative to retopologizing your mesh is instant meshes: https://github.com/wjakob/instant-meshes
and also agisoft has a free texture de-lighter available here: https://www.agisoft.com/downloads/installer/
and you can use any 3d program you desire to process the rest.
RC seems to be using a significantly different algorithm, possibly with more image analysis like Photosynth, than the more standard SFM (structure from motion). There are examples where it can take multiple partial image sets taken with different cameras at different times and still derive the 3D point cloud. SFM really needs to be able to track the camera path.
Try the Photoscan doll example data. It uses multiple partial object frames, and the lighting isn't good or uniform. Tried it once with Photoscan and it still had a lot of holes.
https://www.agisoft.com/datasets/doll.zip
Better to mask out water marks.
Photoscan has become Metashape, and they have a new pricing model. There is the 30-day trial version, the standard version and the professional version. The standard version is $180, the professional is $3500. The standard version is totally stripped down.
https://www.agisoft.com/buy/online-store/
The process you're describing (taking several photos of an object from various angles and having a computer stitch together a 3D object) is called photogrammetry. There are several pieces of software that can do this, most notably perhaps Agisoft Photoscan (which is now called [Metashape](https://www.agisoft.com/community/showcase/)).