It's researchware, and honestly better than 90% of the researchware out there that I have seen. They do mention the requirement, but it's pretty hidden.
https://github.com/alicevision/meshroom/wiki/Error:-This-program-needs-a-CUDA-Enabled-GPU
Software dev is expensive, and there is only so much ~~slave labor~~ grad students can accomplish given limited time and resources. I just have an old, outdated gaming laptop with an NVIDIA GPU; it takes a while but eventually gets the job done.
Replying mostly because I'm not a fan of how reddit likes to downvote comments they don't agree with into oblivion without saying anything. Anyways, the "financial scheme" comment did seem a bit off to me. I've written a bunch of open source and it's extremely unprofitable. You can literally write software that the internet depends on and no one will want to send a few $$ every now and then. If Meshroom is dying, it's probably because everyone who worked on it now has a job that pays money and has moved on to other things, or their employer won't let them work on it.
Meshroom's git repo appears to be getting a few commits every week. https://github.com/alicevision/meshroom
I mean, you can still use Meshroom. You'll just have to use a lot of workarounds. I've been using draft meshing and it seems to work OK (not great, but since I'm a beginner I'm OK with it).
Meshroom uses CUDA (an Nvidia GPU) for calculating the depth maps. However, if you use draft meshing, where you create the mesh solely from the SfM point cloud, then you can actually do it with only a CPU. It will probably take a while (depending on your laptop's specs), but it should be doable. https://github.com/alicevision/meshroom/wiki/Draft-Meshing
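To make the wiki's description concrete, here's a sketch of what draft meshing changes in the default node graph (node names from memory of the 2021-era default pipeline, so double-check against your Meshroom version):

```python
# Default photogrammetry pipeline in Meshroom (node names as of the
# 2021.x releases; verify against your version's default graph).
FULL_PIPELINE = [
    "CameraInit", "FeatureExtraction", "ImageMatching", "FeatureMatching",
    "StructureFromMotion", "PrepareDenseScene", "DepthMap", "DepthMapFilter",
    "Meshing", "MeshFiltering", "Texturing",
]

# Draft meshing drops the CUDA-only depth-map stages; the Meshing node
# then works from the sparse SfM point cloud instead of dense depth maps.
CUDA_ONLY = {"DepthMap", "DepthMapFilter"}
DRAFT_PIPELINE = [node for node in FULL_PIPELINE if node not in CUDA_ONLY]

print(DRAFT_PIPELINE)
```

In the GUI this amounts to deleting the two depth-map nodes and wiring the StructureFromMotion output into the Meshing input, which is what the wiki page walks through.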
My workflow is to use my phone camera (better cameras will produce better results) in manual controls mode, although just locking the exposure and focus in auto should work decently. I take a bunch of angles of whatever I'm scanning, then import them into Meshroom, a free photoscanning program. There are some tutorials on the GitHub page, and some on YouTube as well. Tip: reflections and uneven/bad lighting are your enemy.
Photoscans I've made with this method:
According to this:
https://github.com/alicevision/meshroom/wiki/Draft-Meshing
I tried to set up the nodes. But as you can see in my screenshot, I don't have the "sfmData" input in "StructureFromMotion" or in "Meshing". Where do I correct this?
Absolutely new to this program.
Since most of the work is done on the CPU anyway, Meshroom actually works fine without CUDA… you just have to drop the depth mapping stage from the pipeline. This actually speeds up the process, though the result will be lower quality.
Just in case anyone who sees this comment is interested, the official wiki has a page on it: https://github.com/alicevision/meshroom/wiki/Draft-Meshing
1. Download Meshroom: https://github.com/alicevision/meshroom/releases
2. Install it.
3. Take pictures of the closed book; try to capture EVERY angle.
4. Import the photos into Meshroom.
5. Check this fast guide: https://youtu.be/RiM3qfwDZ-g
Never used it, but yesterday a user pointed out Meshroom. It seems like a decent solution: it's free (https://github.com/alicevision/meshroom), and as long as you are fine with changing your workflow a bit (uploading pictures to a PC, which processes the models), you should be fine.
The value you are looking for is 13.2mm
So probably something like:
Sony;Sony ZV-1;13.2;Sony
Check out your image metadata to get the actual model name and report back, so I can add it to our Database :)
https://github.com/alicevision/meshroom/wiki/Add-Camera-to-database
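If you want to script it, here's a minimal sketch that builds an entry in the wiki's documented Maker;Model;SensorWidth(mm) format (the db file location and any extra fields vary by version, so check the wiki page above):

```python
def camera_db_line(maker: str, model: str, sensor_width_mm: float) -> str:
    """Build a cameraSensors.db entry: Maker;Model;SensorWidth(mm).

    The Model field should match the EXIF model string exactly,
    otherwise Meshroom won't find the entry.
    """
    return f"{maker};{model};{sensor_width_mm}"

print(camera_db_line("Sony", "Sony ZV-1", 13.2))
```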
Constructing a good 3D model from pictures is super resource-intensive. I would suggest just using your phone as a camera and then using Meshroom to generate your models.
Did you add your smartphone to the AliceVision list so it knows your sensor details? AliceVision is the part of Meshroom that does the hard calculations and reads the metadata. Its database often doesn't include smartphone details.
When you upload photos is there a green circle next to each one, an orange circle, or a red circle?
Here is the wiki on how to do that if you're not getting green data.
Right-click a photo and select Properties, then click through the tabs to find your phone's metadata. Except the sensor width; you'll need to Google that.
https://github.com/alicevision/meshroom/wiki/Add-Camera-to-database
Ok you need to do 2 things:
Btw you are using a version which is out of date by almost 2 years. I highly recommend you use the latest version (2021.1.0) which has many bug fixes, performance and UX improvements, etc.
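On the "you'll need to Google that" part: sensor sizes are usually quoted as a type designation rather than a width in mm. Here's a rough lookup of common types (approximate values from memory, so double-check your exact model before adding it to the db):

```python
# Approximate sensor widths in mm by marketing "type" designation.
# Typical values from memory; verify for your specific camera.
SENSOR_WIDTH_MM = {
    '1/2.3"': 6.17,        # most phones and compact cameras
    '1/1.7"': 7.6,
    '1"': 13.2,            # e.g. Sony RX100 series, Sony ZV-1
    "APS-C": 23.5,         # Canon APS-C is closer to 22.3
    "Full frame": 36.0,
}

print(SENSOR_WIDTH_MM['1"'])
```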
Mine doesn't have "Load" either. I've read that double-clicking the Texturing node works, but it hasn't for me so far. This is where I found that answer.
If you want to use meshroom there is some information here: https://github.com/alicevision/meshroom/wiki/Renderfarm-submitters but it is not easy to set up.
I can't remember at the moment, but searching for that error on forums, I think I found an easy solution:
" You can use IrfranView
File->Batch Conversion
or Imagemagick. Make sure you set the quality to 100%. Now you can add the images to Meshroom (assuming the camera is in the sensor db). "
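If you'd rather script the ImageMagick route than click through IrfanView, here's a minimal sketch (it assumes the classic `convert` binary is on your PATH, and the folder name is made up):

```python
import subprocess
from pathlib import Path

def convert_cmd(src: Path, dst: Path, quality: int = 100) -> list:
    # ImageMagick: convert <src> -quality 100 <dst>
    return ["convert", str(src), "-quality", str(quality), str(dst)]

def batch_convert(folder: str, out_ext: str = ".jpg") -> None:
    # Convert every PNG in the folder to a full-quality JPEG.
    for src in Path(folder).glob("*.png"):
        subprocess.run(convert_cmd(src, src.with_suffix(out_ext)), check=True)
```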
My personal advice: don't use Meshroom. Try 3DF Zephyr or Agisoft Metashape; Reality Capture is also a much better product and good value.
Meshroom is a photogrammetry software to take a collection of 2D photo images, construct a 3D point cloud from them, and export textured 3D models.
Meshroom is open-source.
https://github.com/alicevision/meshroom
GitHub - alicevision/meshroom: 3D Reconstruction Software is the one I've seen recommended the most. One such tutorial (reddit)
If you try it... let us know how it works out.
I can advise you to have a look at open-source projects. By reading the code you will learn really useful insights about software design. One project that uses QML/Python is https://github.com/alicevision/meshroom
Ok, answering my own post here. There seems to be an issue with anything later than meshroom 2018 whereby Windows terminates the DepthMap process due to what Windows thinks are lockups. It's all documented here:
https://github.com/alicevision/meshroom/issues/593
and the fix for me (and others) is to make a registry edit described by Davezap on the 2nd of May.
Hope this helps anyone else having similar challenges with Meshroom!
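For reference, the usual Windows workaround for this kind of DepthMap termination is raising the GPU timeout (TDR) via the documented `TdrDelay` registry value. I can't confirm this is the exact edit described in the thread, so verify it against the issue before touching your registry; a typical .reg fragment looks like:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\GraphicsDrivers]
"TdrDelay"=dword:0000003c
```

0x3c raises the timeout from the default 2 seconds to 60 seconds; reboot after applying.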
It's possible to use Meshroom with Google Colab see: https://github.com/alicevision/meshroom/wiki/Meshroom-in-Google-Colab-(cloud)
Ya, I deleted and reinstalled multiple times, even tried an older version, and got the same thing. Found this (https://github.com/alicevision/meshroom/issues/1093) on the GitHub, where a user had a crash and then experienced the same issue, but no answer.
You can use Google Colab Meshroom adaptation instead, here's all information you'll need: https://github.com/alicevision/meshroom/wiki/Meshroom-in-Google-Colab-(cloud)
I found a solution by searching for the terms that u/Mr_N00P_N00P suggested in the Meshroom context. In particular, this page was helpful: https://github.com/alicevision/meshroom/issues/236
After the MeshFiltering stage in Meshroom, it is possible to swap out the outputMesh with an edited one. You can then use the edited, re-saved obj as a custom input on a Texturing node.
I duplicated the "Texturing" node, giving it a (Blender-modified) OBJ file for inputMesh instead of the default.
Blender now has a much cleaner texture (as you can see all the background stuff is gone).
A tricky part that I ran into: Meshroom initially complained about the OBJ file that I exported from Blender. It turns out that the culprit was a "Write Normals" checkbox under the Export Settings > Geometry. The box is checked by default; I had to uncheck it to make Meshroom happier. Thanks everyone for your inputs!
You could make a set of high-res screenshots of your model and feed them into Meshroom. If everything works out correctly you can then use Blender to convert the resulting model into something a 3D printer will digest.
You can use draft meshing in Meshroom without an Nvidia GPU: https://github.com/alicevision/meshroom/wiki/Draft-Meshing
That's probably the best you are going to get without an Nvidia GPU. Someone could implement a CPU version of DepthMap, but it would be so slow that few people would use it. Honestly, even if you did have an Nvidia GPU, you are going to run into issues with only 8 GB of RAM. I have 16 GB and it struggles. Even with an Nvidia GPU it sometimes takes several hours on a desktop. I use a DSLR and shoot raw, and usually end up with 1-2 GB of images or more.
Thanks... I forgot about Meshmixer. I would of course like to find out if Meshroom could avoid the problem in the first place, but an easy fix in Meshmixer will be the next step. I was also planning to retexture it from Meshroom using the process from here: https://github.com/alicevision/meshroom/wiki/Texturing-after-external-re-topology After that, the plan is to use Blender for decimation/retopology and baking normal maps and such.
So I didn't come up with a solution for combining the two SfM clouds. I found a somewhat related issue on the github page, and got some tips for avoiding the issue in the first place, but it does not seem to be doable in Meshroom at the moment (https://github.com/alicevision/meshroom/issues/864).
However, I did solve the problem, so I thought I'd share my solution. Instead of loading the images as two sets through augmentation, I loaded all the images as one set: feature extraction with only SIFT, but with the "high" preset. Feeding that to the standard FeatureMatching node (even with the ImageMatching node tweaked to match by more features) resulted in only 33 of 176 cameras being reconstructed. However, when I turned on "Guided Matching" in the FeatureMatching node, the SfM reconstructed all 176 cameras, and the cloud looks like the SfM of both sides 1+2 as shown in the OP image. So basically a success! Waiting overnight now to see the results of depth mapping and meshing.

The "only" caveat to guided matching with this many cameras, doing the processing on my laptop, is that the feature matching took approx. 10 hours to complete (and this was only using SIFT, not combining it with AKAZE). So it automatically combined all images correctly, with no need for manual alignment and merging, but it did take 10 hours. There are pros and cons here, but it seems to have worked out :-)
>is MaxPoints not the maximum number of input points
There's maxPoints and maxInputPoints; I changed maxPoints. I think you're right that I went too low. I was reading this GitHub comment from the dev and accidentally lowered it to 100K instead of 1M.
However, even when drafting from SfM, there are big chunks missing from the monkey. So I feel like the issue may be from something earlier in the pipeline than meshing.
Have you tried using the draft meshing (https://github.com/alicevision/meshroom/wiki/Draft-Meshing) when you tweak values? Just to get some faster ideas on the final output.
Other than that, I would guess there could be some trouble with the MaxPoints value you changed. I'm not totally sure about these parameters, but isn't MaxPoints the maximum number of input points? (E.g. the default value is something like 50 million?) So the reduction here seems extreme. Even if it is the maximum number of output vertices, it is still extreme (and imho unnecessary) to reduce it to 100k, which I'd guess could result in missing geometry in the mesh. I'm running Meshroom on 16 GB of RAM as well, and usually I can get meshing done with around the default value of 5 million vertices (sometimes I reduce it to 4.5 million). Other than that, I usually get good help by following the tips on this page: https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters
Also I have found good help from the tweaks mentioned on this site when working with turntables: https://github.com/alicevision/meshroom/wiki/Reconstruction-parameters
Recently, I found the "Guided Matching" option in the FeatureMatching node very useful when I had a model where only half the cameras were reconstructed. If there are problems with too few feature points on the object, then turning on AKAZE as well usually helps me. Finally, if the object does not have that many feature points, I have also had success placing it on the turntable with a piece of paper underneath, with different scribbles on the paper to help with tracking the cameras.
You want 60-80% overlap between images. For a small object like that, I would expect 30-50 total images to be sufficient, taken in 2 or 3 loops around it from different angles. Typically a yellow icon means it has enough data to work with, but you could also try adding your sensor to the DB by following the guide: https://github.com/alicevision/meshroom/wiki/Add-Camera-to-database
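If you want to turn that overlap rule of thumb into a shot count per loop, here's a rough geometric sketch (the 60-degree field of view is my own assumption, not something from the guide):

```python
import math

def shots_per_loop(fov_deg: float, overlap: float) -> int:
    """Photos needed for one loop around an object.

    Each new photo advances by the non-overlapping part of the
    field of view, so the angular step is fov * (1 - overlap).
    """
    step = fov_deg * (1.0 - overlap)
    return math.ceil(360.0 / step)

print(shots_per_loop(60, 0.7))  # 70% overlap
```

With 70% overlap and a ~60° field of view, that's about 20 shots per loop, so 2 loops lands right in the 30-50 image range.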
That usually happens for one of a couple of reasons: the picture may be fuzzy/blurry, or there isn't enough overlap with surrounding photos for the software to find enough matches to place the photo in space around the reconstruction. Really try to get lots of overlap. It's not unusual to need over 200 photos for a small object.
Also check out this link. It’ll have parameters you can tweak to get better reconstruction, but I’d first look at your dataset.
That design is based on the discussion here: https://github.com/alicevision/meshroom/issues/283. Basically, the diagram on the plate is for better reconstruction (theoretically). But I'm not sure whether another design would work, since I haven't tried it myself.
Check out this thread where it was done as a proof of concept. Meshroom is a great free tool for the job and, as of 2019, supports 360 cameras. Unfortunately, his discovery was that a 360 camera produces so much data you DON'T want (empty walls/ceiling, the photographer) that it ends up not being the best way to capture a scene in much detail.
You could also wait. Meshroom devs are currently working on getting it to work with AMD cards.
https://github.com/alicevision/meshroom/issues/595
Meanwhile, try using the simpler mode that doesn't require CUDA. It might not be as good, but it's practice.
If you have an Nvidia graphics card, Meshroom is free and easy software for making models like this from pictures.
Here is a video showcasing it.
If it's for printing and it's an aesthetic part (not scanning a machined part for validation or remanufacturing) then photogrammetry is absolutely your best bet. Meshroom would be a smart software to start with. It may seem a little daunting but the results will definitely be better than any consumer grade scanner and the workflow won't be any harder than any commercial scanner's would be.
I used meshroom (https://github.com/alicevision/meshroom) with some success. It's free and open source. You can get the models out; though you have to do some digging around in folders to do so.
It's possible, but it requires jumping through a lot of hoops, and it's very difficult:
https://github.com/alicevision/meshroom/issues/204
Plus, since it requires CUDA, it won't run on Mojave, which doesn't have CUDA drivers.
https://github.com/alicevision/meshroom/wiki/Add-Camera-to-database
The format is: CameraMaker;CameraModel;SensorWidth(mm)
Hasselblad;Hasselblad L1D-20c;13.2
(The L1D-20c has a 1-inch-type sensor, but "1 inch" is a historical naming convention, not the physical width; the chip is actually about 13.2 mm wide.)
It says it might need to match the EXIF and that is what is in the EXIF.
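One way to sanity-check that 13.2 mm figure is from the commonly quoted ~2.7x crop factor for 1-inch-type sensors (a sketch with my own numbers, not from the wiki):

```python
FULL_FRAME_WIDTH_MM = 36.0

def width_from_crop_factor(crop_factor: float) -> float:
    # Crop factor compares sensor diagonals, but for equal aspect
    # ratios the width scales by the same factor.
    return round(FULL_FRAME_WIDTH_MM / crop_factor, 1)

print(width_from_crop_factor(2.73))  # 1-inch-type sensor
```

36 mm divided by ~2.73 comes out at roughly 13.2 mm, matching the value used for the Sony ZV-1 elsewhere in this thread.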
Wow, didn't know Hasselblad was bought out by DJI (a Chinese company). All that Swedish mechanical precision couldn't keep up with the digital revolution.
I tried to make my own software for this as an experiment, but I quickly realised that it couldn't be done (in most cases) with just a top, side, and front view. For simple shapes like spheres, cubes, and the like it's possible, but imagine what happens if you take this approach with a mug: it will not be able to know that the mug is hollow, and depending on your angles, it will not be able to correctly represent the handle, especially if it's a bit rounded. I did find this: https://github.com/alicevision/meshroom It needs pictures from a lot of angles, and it took some practice to learn how to take proper pictures, but I got a nice model of a very complex object with it.