The 810's GPU was already better than the Exynos's too. Arguably that's more important, since the GPU matters more for gaming, which is where you need most of your power.
Qualcomm is playing catch up in the CPU department, Samsung is playing catch up in the GPU department.
1) CPU-wise the A9X is faster than the PS4 or Xbox One (based on SunSpider and Kraken benchmarks), but those machines are built for gaming, not CPU-intensive tasks. GPU-wise it's like bringing a knife to a gunfight.
GFXBench T-Rex Offscreen (measures raw GPU power)
iPad Pro: 163.4 fps
PS4: 943.5 fps
(this is based on the HD 7850 results; the PS4 GPU sits somewhere between the 7850 and the 7870)
http://cdn.wccftech.com/wp-content/uploads/2013/02/PS4-GPU-Performance1.gif
2) It's not just about raw GPU performance. The reason consoles have been able to keep up with gaming PCs is their bandwidth: the PS4 has a memory bandwidth of about 176 GB/s (the iPad Pro has around 40-50 GB/s), which is staggering. Higher memory bandwidth allows for higher-resolution surface textures.
http://images.eurogamer.net/2013/articles//a/1/7/2/8/7/7/4/gpu.png
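Those bandwidth numbers follow directly from the memory configs. A rough sketch (the transfer rates and bus widths below are spec-sheet figures, not measurements):

```python
# Peak memory bandwidth = transfer rate (MT/s) × bus width (bits) / 8, in GB/s.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * bus_width_bits / 8 / 1000

# PS4: GDDR5 at 5500 MT/s on a 256-bit bus
ps4 = peak_bandwidth_gbs(5500, 256)       # 176.0 GB/s

# iPad Pro (A9X): LPDDR4 at ~3200 MT/s on a 128-bit bus
ipad_pro = peak_bandwidth_gbs(3200, 128)  # 51.2 GB/s

print(f"PS4: {ps4:.1f} GB/s, iPad Pro: {ipad_pro:.1f} GB/s")
```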
The Xbox One is 1.3 teraflops and the PS4 is 1.8 teraflops in FP32 (single precision).
The Tegra X1 is 1 teraflop in FP16 (half precision), i.e. 0.5 teraflops in FP32 (single precision).
BTW, teraflops aren't useful for comparing different GPU architectures (similar to GHz for CPUs).
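For reference, here's where those teraflop figures come from; the core counts and clocks are the commonly cited specs, so treat them as assumptions:

```python
# FP32 TFLOPS = shader cores × clock (GHz) × 2 ops/cycle (a fused multiply-add
# counts as two operations).
def fp32_tflops(cores, clock_ghz):
    return cores * clock_ghz * 2 / 1000

ps4 = fp32_tflops(1152, 0.800)      # ~1.84 TFLOPS
xbox_one = fp32_tflops(768, 0.853)  # ~1.31 TFLOPS
tegra_x1 = fp32_tflops(256, 1.000)  # ~0.51 TFLOPS FP32 (the X1 doubles this rate in FP16)
print(ps4, xbox_one, tegra_x1)
```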
The PS4's GPU is similar to an AMD HD 7850, although it often performs worse than an Nvidia GTX 750 Ti if the game is CPU-heavy (e.g. open-world games).
For comparison, in GFXBench 4.0 Car Chase Offscreen 1080p:
> 92.8 fps - AMD HD 7850
> 83.3 fps - Nvidia GTX 750 Ti
> 30.1 fps - Shield TV / Tegra X1
> 14.4 fps - Shield Tablet / Tegra K1
But you are right that if Nvidia keeps improving at this rate, in 2017 we may have Nvidia tablets on par with or better than the current-gen consoles in GPU performance (Apple too, with their A11X in 2017).
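As a sanity check on that timeline, here's a back-of-the-envelope sketch assuming performance roughly doubles per generation (as in the K1 → X1 jump above), using the Car Chase figures quoted above:

```python
import math

# Car Chase offscreen results quoted above.
x1_fps = 30.1        # Shield TV / Tegra X1
console_fps = 92.8   # AMD HD 7850, roughly PS4-class

# How many doublings until a Tegra-class chip matches a current-gen console?
generations_needed = math.log2(console_fps / x1_fps)
print(f"~{generations_needed:.1f} doublings needed")  # a bit over 1.5
```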
The Tegra X1 isn't much more powerful than something like Apple's A10 Fusion chip or the new Snapdragon/Exynos chipsets.
Check graphics benchmarks here:
Keep in mind the Shield console runs at a higher clock speed, plugged into the wall. The still-higher-clocked Tegra X1 in the Google Pixel C is already weaker than modern smartphones.
Look at them stacked up next to each other on Aztec:
Perfect doubling of performance as you go up from one to the other.
For graphics they are very competitive, but Apple has overtaken them for now
For graphics, IMO the best benchmark is GFXBench Long Term Manhattan 3.1 (on-screen). It pushes GPUs to their limits and tests how well they can sustain their clocks.
38.4 fps - iPhone 8 Plus
36.4 fps - OnePlus 5
30.9 fps - OnePlus 3T
27.4 fps - iPhone 7 Plus
For image processing I'd say Qualcomm has fallen a generation behind, since they can only do 4K at 30 fps while the A11 can do 4K at 60 fps.
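A quick way to see why 4K60 vs 4K30 amounts to a full generational step: the pixel throughput the ISP/encoder must sustain doubles. A minimal sketch (raw pixel rates only, ignoring codec overhead):

```python
# Raw pixel throughput the camera pipeline must sustain (no codec overhead).
def pixels_per_second(width, height, fps):
    return width * height * fps

a11_4k60 = pixels_per_second(3840, 2160, 60)
qcom_4k30 = pixels_per_second(3840, 2160, 30)

print(a11_4k60 / qcom_4k30)  # 2.0 — double the throughput requirement
```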
I believe this one on GFXBench is the 32-core version. Go on the actual site and search for Apple M1; you'll see they stack up properly, with double the performance from M1 to M1 Pro to M1 Max:
Then why not just run GFXBench on your computer and compare the scores? Or go to the GFXBench site, search for whatever graphics card you want, and compare; that's all they've done.
Here’s the 5700xt:
https://gfxbench.com/result.jsp?benchmark=gfx50&test=759&text-filter=5700xt&order=median
And here’s all the Apple M gpus:
Here are some metal benchmarks for the pro Vega 64:
There has to be some serious misconfiguration then.. =/
Other games are running completely fine? Try GFXBench and see if you are WAY under the score for your GPU.
That's not necessarily the case
Car Chase Offscreen (OpenGL ES 3.1+AEP)
5.3 fps - 510
5.1 fps - 420
4.4 fps - 418
Manhattan 3.1 Offscreen (OpenGL ES 3.1)
12.5 fps - 420
9.8 fps - 418
9.2 fps - 510
Manhattan 3.0 Offscreen (OpenGL ES 3.0)
17.7 fps - 420
14.8 fps - 418
14.3 fps - 510
Could you please bootcamp into windows and run GPU-Z, and take a screenshot? :) It will give us the full specs on that GPU.
And could you please run GFXBench? https://gfxbench.com/result.jsp
Yeah, if you run benchmarks for a long time the 810 will be slightly worse CPU-wise, but if you're gaming the GPU difference is pretty huge (the 808's GPU is worse than even the 805's).
The Adreno 430 even beats the Galaxy S6's Exynos.
Most people use their phones for web browsing and videos and such anyway, which is where things like HEVC and better memory come into play.
The 810 holds up pretty well in this "real world" benchmark
Download this: https://gfxbench.com/result.jsp and compare with others.
I'd advise you to run Driver Booster (look for IObit Driver Booster) and update all drivers first (don't forget to create a system restore point beforehand).
I agree, something is off. I don't think a Ryzen APU could beat the old R7 by that much unless that A12 is misconfigured and running on single-channel memory.
Go to: https://gfxbench.com/result.jsp
then type into the search "A12". There will be two results for the 9800e:
AMD A12-9800E RADEON R7, 12 COMPUTE CORES 4C+8G 1920 x 1080 1156 Frames (19.6 Fps) OpenGL 2016.04.20 AMD Radeon R7 Graphics
AMD A12-9700P RADEON R7, 10 COMPUTE CORES 4C+6G 1920 x 1080 1030 Frames (17.4 Fps) OpenGL 2016.12.04 AMD Radeon R7 Graphics
AMD PRO A12-9800E R7, 12 COMPUTE CORES 4C+8G 1920 x 1080 692 Frames (11.7 Fps) OpenGL 2017.05.15 AMD Radeon R7 Graphics
Indeed, this shows the poorly configured A12 running at about half its potential.
Now type "2700U" into the search field and you'll see the Ryzen 7 2700U profile showing a similar FPS.
AMD Ryzen 7 2700U with Radeon Vega Graphics 1920 x 1080 1283 Frames (21.7 Fps) OpenGL 2017.08.07 AMD Radeon™ Vega 10 Mobile Graphics
What this shows is that since Bristol Ridge the GPU performance is totally memory bandwidth limited.
Because fast-clocking RAM is energy-inefficient, mainstream laptops will mostly be limited to 2133 or at best 2400, while enthusiast and gaming laptops might go as high as 2667 or 2800. I'm hoping desktop AM4 will do 2800 standard and 3000 OC, with memory-channel and iGPU overclocks.
The way to handle the performance problem in mobile is to use the APU as a base and add a dGPU in a dual-graphics config for higher-performance gaming. Either that, or add a third memory channel and slot.
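For context on why bandwidth is the ceiling: peak bandwidth for a DDR4 setup works out as channels × 64-bit bus × transfer rate / 8. A minimal sketch (the configurations below are just illustrative):

```python
# Peak DDR4 bandwidth in GB/s: each channel is a 64-bit bus.
def ddr4_bandwidth_gbs(channels, transfer_rate_mts):
    return channels * 64 * transfer_rate_mts / 8 / 1000

print(ddr4_bandwidth_gbs(2, 2133))  # 34.1 GB/s — mainstream dual-channel laptop
print(ddr4_bandwidth_gbs(2, 3000))  # 48.0 GB/s — overclocked desktop
print(ddr4_bandwidth_gbs(3, 2400))  # 57.6 GB/s — hypothetical triple-channel setup
```

Even an entry-level discrete GPU (e.g. a GTX 1050 at ~112 GB/s of GDDR5) has roughly triple the bandwidth of a dual-channel 2400 setup, which is why the iGPU hits a wall regardless of its compute.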
I find this GPU Hierarchy helpful for performance comparisons.
Unfortunately, it lists your card one tier lower than the GTX 480.
However, these OpenGL benchmark results for the Car Chase benchmark, which tests "hardware tessellation, graphics, and compute shaders", indicate slightly higher FPS for the GTX 750 Ti than the GTX 480.
The GTX 750 Ti appears to have a higher clock speed and more VRAM, while the GTX 480 has faster memory bandwidth and more render output processors. This is based on the hardware spec comparison provided by GPUBoss (which I don't recommend for benchmarks).
I don't understand the hate. Sure, it's not the greatest chip ever, but the HTC One M9 outperforms the S6 on GPU tests (which is what you'd need a powerful SoC for: gaming).
You can't compare these devices yet. For one, it still identifies as a Nokia when devices like the 640 are identifying as a Microsoft phone. Secondly, most other Adreno 430 devices are posting much better scores. So really, all these benchmarks are useful for is establishing that a higher-end Microsoft phone exists, not what it is capable of.
The Geekbench results aren't really representative of how the Max will perform. The GFXBench ones are better, with all the M1 GPUs stacked up as doubling each other:
My bad, I get them from GFXBench most of the time.
Looked at some other sources and it seems to hover between 3x and 4x the performance, did I say something silly?
The 1080 Ti is about the best price/performance you can get in the high end; the 1660 Ti might be a bit better (price/performance-wise), but it's not a real upgrade from the 980 Ti.
RTX 2080 if you care about RTX/DLSS. I don't really yet, since I don't own a 1080p panel and am not enthusiastic about running DLSS on a high-end panel just to enable RTX, so forgive the bias in the first few lines. However, remember that the 980 Ti is already pretty good, so you'd be getting a 50-55% improvement and 11 GB of VRAM instead of 6 GB. I'm in a pretty similar position to you: GTX 1080, would prefer a bit more speed in my games, but the GPU is still good enough that any upgrade immediately hits diminishing returns. I'm waiting for the next gen.
Nintendo is a big company and they can shop around: maybe Qualcomm, maybe Samsung, maybe design their own chip like Apple, maybe pay a random Chinese company. They have choices, was my point. https://gfxbench.com/result.jsp?benchmark=gfx50&test=547&order=median&base=gpu&arch-check-unknown=0&arch-check-x86=0
> running it inside a VM using GPU passthrough
So this is a macOS VM using VT-d? That might change a lot of things compared to OP and myself.
I've generally stayed away from VMs since they've always given me worse performance than native, and usually the emulated GPU offers very little acceleration. But VT-d should make up for it. I am tempted to try this out. What host OS are you running?
I've been using https://gfxbench.com/result.jsp. It has an OpenGL and Metal version. But the VM config might present very different symptoms compared to native. But I am curious to know the results regardless.
Try GFXBench, a relatively small OpenGL benchmark utility (300 MB). If the benchmark performs as expected (for example "1080p Car Chase Offscreen" at 110-120 fps), it's probably a bug with Witcher 3. If the benchmark underperforms, make an Ubuntu live USB drive and run GFXBench off of it. If that benchmark doesn't perform as expected either, you might have a hardware problem.
You can compare more yourself at GFXBench.
The Apple A9X chip (featuring a custom PowerVR graphics chip) is estimated to cost $38 to manufacture. It was released in 2015.
Benchmark results, GFXBench T-Rex 1080p Offscreen:
Apple A9X (passive cooling): 163 frames per second
Nvidia Tegra X1 (passive cooling): 101 frames per second
https://cdn0.vox-cdn.com/uploads/chorus_asset/file/6562791/Screen_Shot_2016-05-29_at_23.19.00.0.png
1.8 times faster than T880 MP12 in Manhattan 3.1 at 1080p.
https://gfxbench.com/result.jsp
Lowest-scoring T880 MP12 GPU gets 25.7 FPS in the same benchmark.
25 × 1.8 = 45 FPS.
HD 530 with i7-6700K = 48 FPS.
Nvidia Shield TV = 45 FPS.
HD 530 with i3-6300 = 41.1 FPS.
HD 530 with Pentium G4500 = 14.7 FPS.
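Spelling the estimate out, using the lowest T880 MP12 score quoted above (25.7 fps, rounded down to 25 in the list):

```python
# Scale the slowest T880 MP12 result by the claimed 1.8× factor.
t880_fps = 25.7
estimate = t880_fps * 1.8  # ~46 fps, landing between the i3-6300 and i7-6700K HD 530 results
print(estimate)
```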
I find this GPU Hierarchy helpful for performance comparisons.
Unfortunately, it lists your card a couple tiers lower than the GTX 480.
These OpenGL benchmark results for the Car Chase benchmark may also help compare performance between GPUs, as NMS will use OpenGL instead of DirectX commonly used in Windows benchmarks.
That's right, AnandTech only included results for Manhattan 3.0.
Not sure why; it leaves this comparison graph sort of half-done, since we can't see the actual performance reductions. Although, to be fair, it is under the battery section; it would be nice if it had its own.
The OnePlus 3 is an estimate from this chart
Most of them are from GSMArena, because the GFXBench website was down last night when I initially tried to get the numbers (I posted that comment in the AnandTech review thread first).
GSMArena didn't have the numbers for the 6S or OnePlus 2, probably since Manhattan 3.1 was quite new when those were reviewed; I just got those earlier today from the official results page.
This should answer your question. Tegra X1 is currently at the top, we'll see in a few days how that changes.
I'd also suggest to go for 1080p. 720p will work if your budget is very constrained, but the price difference isn't that huge anymore. As the magnification of the lenses in Cardboard is fixed, the field of view you get from e.g. a 4.5" screen is smaller than that from a larger one, so in this case bigger is better. 5" - 5.3" is usually best, above that you may no longer see the whole screen, depending on your eyesight.
VR apps require a lot of GPU power, and low framerates due to a lack of sufficiently fast graphics chips will cause nausea for many, so looking at benchmarks is also important. There are a number of very cheap smartphones with large screens and completely underpowered SoCs; buying by resolution and screen size alone doesn't work. When in doubt, lower resolution and screen size are preferable to insufficient power to drive VR.
I have no list, only rough guide lines:
Beyond that it depends mostly on the exact definition of "affordable".