Just realized that Alexander Pacha, the guy who made the sensor code I'm using, has a test app up on the Play Store. It has the same sensor models I have in my app, so you can install that and test:
https://play.google.com/store/apps/details?id=org.hitlabnz.sensor_fusion_demo
For reasons unrelated to Time Warp, the iPhone 6 latency could be lower than on Android and closer to Gear VR, but without proper testing we won't know. Google made some unfortunate design decisions early on with Android that increase latency. This is true both for the display, which caused Oculus to make changes in Android for Gear VR, and for audio, which is the reason why almost none of the professional music apps for iOS have been ported to Android. The audio latency problem has been known for a long time, but the internal handling of the display only became a problem with VR, so no direct comparisons are available. iOS was designed for low latency, which makes it very likely to be faster, but AFAIK nobody has ever published a direct comparison based on measurements.
Right now this is still partly academic. Time Warp for Cardboard is still experimental, so it is turned off by default, and it is only implemented in the Cardboard Android SDK (which handles things like rendering with lens distortion correction), not in the Cardboard Unity SDK, on which most apps are based. The Unity SDK sits on top of the Android SDK, but without extra steps from the developers, current apps don't use Time Warp. On iOS only the Unity SDK exists, and I am not sure how rendering is handled there or what the "base" latency for displaying images on iOS is. If Apple avoided all the potential traps and samples the sensors faster, it could in theory get down to the 20ms of Gear VR, but I seriously doubt it currently does. It will have at least 17ms more latency, as it uses tiled rendering on the GPU just like everybody else, and this can only be changed in the OS/display drivers. Currently the only part that has been "tested" is rendering performance, which is higher on iOS thanks to Metal, but that only influences how complex the geometry of a scene can be while maintaining high frame rates, not the latency.
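The "at least 17ms" figure follows from simple frame timing; a minimal back-of-the-envelope sketch, assuming a 60Hz display:

```python
# Frame timing on a 60 Hz display: a tiled renderer buffers a
# complete frame before scanout starts, so it adds at least one
# frame interval of latency on top of everything else.
REFRESH_HZ = 60
frame_interval_ms = 1000 / REFRESH_HZ
print(round(frame_interval_ms, 1))  # -> 16.7
```

That one buffered frame alone is almost the entire 20ms budget Oculus aims for, which is why this can only be fixed in the OS/display drivers.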
The problem with the "already out" Android phones is similar: we just don't know how good the sensors are. The gyroscope sensors in every phone on the market are capable of being read at 1000Hz like the Gear VR sensors, but, most likely for power consumption reasons, they are usually sampled at around 150Hz. All it takes to improve this is a firmware update to the microprocessor that handles sensor data, which unfortunately has to be done before assembling the phone, so it won't work for existing ones. Sensor sampling rate and calibration aren't specified by the manufacturers, so the only way to tell which phones use faster sensor setups would be to test them all. Testing them would be very simple: read the sensors as fast as possible and check how often the values actually change. But nobody has done this yet and published the results for different phones.
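The test described above can be sketched in a few lines. `read_sensor` here is a hypothetical stand-in for whatever the platform sensor API returns, and the fake sensor simulates a device whose values only update at 150Hz no matter how fast you poll:

```python
import time

def estimate_effective_rate(read_sensor, duration_s=1.0):
    """Poll a sensor as fast as possible and count how often the
    reported value actually changes; read_sensor is a hypothetical
    stand-in for a platform sensor API call."""
    start = time.monotonic()
    last = read_sensor()
    changes = 0
    while time.monotonic() - start < duration_s:
        value = read_sensor()
        if value != last:
            changes += 1
            last = value
    return changes / duration_s  # effective updates per second

def make_fake_sensor(rate_hz=150):
    """Simulated sensor whose value only updates rate_hz times per second."""
    t0 = time.monotonic()
    return lambda: int((time.monotonic() - t0) * rate_hz)

print(round(estimate_effective_rate(make_fake_sensor())))  # roughly 150
```

The polling loop runs far faster than any real sensor updates, so the number of observed changes per second is the sensor's effective rate, regardless of what rate the API claims.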
The sensors in the iPhone aren't technically better than those in Android phones; they use pretty much the same IMUs. The difference is a) possibly the IMU firmware and b) what the OS does with the data. There is an MIT-licensed sensor fusion library for Android that delivers much better data than the default Android sensor API, which itself uses some form of sensor fusion to compensate for imprecise and/or jumpy sensors. The demo app for the library works pretty well even on phones that drift horribly, like the Galaxy S4. Sensor fusion comes with a computational cost, which is probably why it wasn't implemented in Android in the same way: Android has to target a much larger number of devices with widely varying performance. Using a dedicated motion coprocessor allows a lot more sensor data processing without slowing down the phone, which currently gives the iPhone higher precision, but again this doesn't necessarily influence latency.
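To illustrate the general idea (this is not the algorithm that library actually uses, and all constants are made up), here is a minimal one-axis complementary filter, one of the simplest forms of sensor fusion: the gyro is trusted for fast changes, the accelerometer for long-term drift correction:

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """One-axis complementary filter: integrate the gyro for fast
    response, blend in the accelerometer angle to cancel drift.
    alpha is a made-up tuning constant for this toy example."""
    angle = accel_angles[0]
    estimates = []
    for rate, accel_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * accel_angle
        estimates.append(angle)
    return estimates

# A gyro with a constant 0.5 deg/s bias while the head actually sits
# still at 30 degrees; the accelerometer reads the true angle.
est = complementary_filter([0.5] * 500, [30.0] * 500, dt=0.01)
drift_only = 30.0 + 0.5 * 500 * 0.01  # pure gyro integration: 32.5 deg
print(round(est[-1], 2), drift_only)
```

With fusion the estimate stays near the true 30 degrees, while pure gyro integration would have drifted to 32.5 degrees after five seconds; that bounded error is exactly what the library's demo app shows on drifting phones like the S4.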
Combine all this with the fact that people react completely differently to latency and you have a mess. For some, the 80ms in Cardboard is no problem; they will never notice a difference if latency is reduced with Time Warp to e.g. 50ms like on the DK1, or even less. Oculus aims for 20ms, as this is where "most users don't notice the latency any more", but some still will. This makes subjective comparisons pretty much useless, as it is impossible to tell whether someone has problems with a phone due to high latency or due to high sensitivity towards even low latency. Until someone measures the actual latency with a high speed camera, pretty much all we can do is make best guesses based on what we know about the rendering process. The only ones who ever properly tested and documented everything regarding latency are Oculus, with the Note 4/S6 and Gear VR.
In theory you can reach 0ms latency on any phone by using motion estimation. Humans usually don't suddenly change the direction of their head turns, so by combining gyroscope and accelerometer data you can "guess" where the head will be in e.g. 50ms, if your latency is 50ms. Oculus presented this option, but it seems that in the cases where the estimate is wrong, the brain reacts very badly, so they opted for their multi-step optimizations that actually reduce latency. Ideally we end up with pre-calibrated, high frequency sensors plus sensor fusion to re-calibrate in real time, removal of all unnecessary steps in the render pipeline, and adjustable motion estimation for those who are very sensitive to latency, but currently we are still pretty much in a trial-and-error phase.
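The basic prediction idea is just constant-velocity extrapolation; a toy sketch with made-up numbers:

```python
def predict_angle(angle_deg, angular_velocity_dps, lookahead_s=0.05):
    """Constant-velocity prediction: extrapolate the current head
    angle by the measured gyro rate over the pipeline latency."""
    return angle_deg + angular_velocity_dps * lookahead_s

# Head at 10 deg, turning at 100 deg/s, 50 ms of latency:
print(predict_angle(10.0, 100.0))  # -> 15.0

# The failure case: if the user reverses direction right after the
# sample, the head ends up at 5 deg while the frame shows 15 deg --
# a 10 deg error, which is why wrong predictions feel so bad.
```

The rendered frame matches where the head will be when the photons arrive, as long as the guess holds.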
> Well no, I just mean that phones have multiple sensors and I could (If they aren't already) combine them to get better results.
Google introduced sensor fusion as virtual sensors in Android 4.3 (or 4.2?), but not all sensor fusion is created equal. The one in Android seems to be worse than the one in iOS, so reimplementing it may be a valid idea. There is an open source sensor fusion solution for Android, written in Java as part of a master's thesis ("Sensor fusion for robust outdoor Augmented Reality tracking on mobile devices", PDF), that explains a lot of the required background and comes with a nice Android demo showing the different types of fusion and how well they work. Definitely worth a look.
You might also be interested in the description of the reverse engineering of the DK1 tracker for the open source Foculus Rift Tracker firmware used by OpenGear and others.
> And from my own experience I'm not too concerned about the Oculus Rift potentially getting that data at a higher frequency.
The increased sensor sampling frequency is one of many steps to reduce the total latency. All this falls under "motion-to-photon" latency, which describes the time from reading a sensor to showing the picture matching the updated sensor data, with the required interpretation, rendering and transmission in between. Some limits are simply due to the way phones render images; Oculus modified the Android OS for Samsung to remove several steps so they could achieve a 20ms latency on Gear VR. A bigger problem is the latency introduced by USB or any kind of network.
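A sketch of how such a motion-to-photon budget adds up; the stage names and millisecond values are illustrative placeholders, not measurements of any real device:

```python
# Illustrative motion-to-photon budget. Every value here is a
# made-up placeholder to show how the stages sum, not a measurement.
pipeline_ms = {
    "sensor sampling": 2,
    "sensor fusion / interpretation": 1,
    "app logic + rendering": 11,
    "display scanout": 17,  # one buffered 60 Hz frame (tiled rendering)
    "usb / network transport": 5,  # only for tethered or remote HMDs
}
print(sum(pipeline_ms.values()))  # -> 36
```

Each stage Oculus removed or shortened in the modified Android OS subtracts directly from this sum, which is how they got Gear VR down to 20ms.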
How important all this is depends a lot on the individual user. Any latency causes the image the eyes see to be slightly outdated during movement, which leads to discrepancies with the "data" from the human vestibular system. Some people aren't sensitive to this at all, while others have to take off the HMD after a few seconds in order not to puke. Reducing latency makes it usable for a larger group of people.
> If you have a rift and are willing to help me, Then I could probably do things much faster.
I don't have a Rift, never really played any games, use Macs for development in Python and Java, and all my PCs are headless Debian Linux servers running Postgres databases for the application server software I usually write. I started developing VR apps for Android with Unity a few months ago, so your project isn't really in the area of VR that I focus on. Usually the best way to get people to contribute to any open source project is to give them at least a rudimentary working prototype. It doesn't have to be pretty; it only has to do something useful that they themselves need, proving the project will go somewhere, because then people have an incentive to help improve it. Currently you are lacking this prototype that users could test, so I'd recommend making the demo APK a high priority.
And one comment about the code: I looked at the GitHub repository about two days ago and the first thing I saw was this commit comment: "I fixed so many things! The native code ...". There are several like "Massive changes, And it works now!" or "Maybe this will help?". The idea of commit comments is to give a quick idea of what you have changed without having to look through the changes, not just to document that you have done something. My first (assembler) code is from 1984, so it is by now almost twice your age, and over time I have gained some experience with figuring out code written by other developers. Your comments basically screamed "Turn around, don't look back, just run!" I think it is great that you are trying such a project, but also try to look at it from the perspective of someone who isn't already familiar with it and wants to figure out whether it is worth investing time or not.