It actually depends on the camera itself, and it REALLY depends on proper calibration. Calibration is done with a printed chessboard pattern. You can download the OpenCV examples from Google Play: https://play.google.com/store/apps/details?id=com.jnardari.opencv_androidsamples&hl=en
You can create a proper .yaml lens+CMOS corrective file that you can then pipe into software like PhotoScan to get a MUCH, MUCH better result than you'd think is currently possible with a cell phone. If Microsoft is smart, they'll let you export the generated pose graph (OR AT LEAST WORK WITH IT!) so you can take a series of photos after the base geometry is calculated, get a much better texture, and post-process the object.
They are balancing speed and detail in the example you saw.
You may also want to try: https://play.google.com/store/apps/details?id=com.smartmobilevision.scann3d&hl=en
This is a pure photogrammetry approach that does not use an external server (like 123D does); everything is computed directly on your phone. These guys are really advanced. It's a little bit finicky, but when it works, it works crazy good. It also ships with lens profiles for the most common Android cameras, so you can skip the calibration step.
The problem with cellphone cameras is that when you're streaming video at X Hz, you have to do it at a low resolution to get the fastest CMOS readout possible, because you need sharp images without motion blur to compute feature and pose geometry as fast as they demonstrated. This makes things "fuzzy".
When you have all the time in the world to compute geometry with SIFT/SURF/etc. approaches on a desktop GPU, of course you can get a better result.
Plus, PhotoScan knows the parameters of your camera, AND you're taking still images at 12 MP or so.