> Testing was performed on a system with an Intel Core 2 Quad Q9400 CPU, 4 GB of RAM, and an Nvidia GeForce GTX 650 Ti
> Benchmark CPU vs minimum CPU.
It's hard. Benchmarks are about the best we can do right now, because these processors handle a huge variety of tasks. Even benchmarks usually target specific workloads, so it's hard to build one that accurately reflects everything a processor is capable of without handing development over to Intel, haha. FLOPS are a better measurement than plain clock frequency, but they only cover floating-point operations, which is definitely not everything. Unless you can build a benchmark for your specific usage habits, your best bet is the widely available benchmarks.
To anyone who wants to do a quick check to get a rough idea of where their equipment falls, I'd check up on the CPU benchmarks here and do a GPU comparison here.
According to the hardware listed above and using these sites, you should have:
A rating of 3724-10282 if you have an AMD CPU.
A rating of 5296-10044 if you have an Intel CPU.
A rating of 7.3-8.4 if you have an AMD GPU.
A rating of 6.0-8.6 if you have a Nvidia GPU.
These are rough estimations, but if you fall within these ranges (or exceed them) you should be fine. Note that gpuboss factors noise level into its ratings, though, so double-check those numbers.
Edit: Just saw that the first site has GPU benchmarks too; might look into that for better numbers to compare against. http://www.videocardbenchmark.net/
Lol, the megahertz myth strikes again.
In case anyone's interested to look at real benchmark numbers:
Typical modern netbook with a 2.13GHz Intel Celeron.
Omg, 2.13 > 1.1 so it must be like 2 times faster than a MacBook right? Right? Right?
Seriously though, why do people suddenly care about the clock speed number? The MacBook Air also has a lower clock speed than the netbook above, yet people never assumed it's slower than a netbook. Why is it different this time around?
I'll try to break this down quickly and concisely. Feel free to add onto or correct anything as I am not an expert by any means.
Intel vs AMD
Basically, both companies manufacture CPUs. Originally Intel was the go-to chip, as it was better in both performance and quality; lately AMD has been able to compete with Intel's products. Generally speaking, Intel products are higher priced, and a lot of factors play into this. With the chips released in the past few years specifically, Intel has been outperforming AMD. Intel has also made quite a name for itself and does in fact have fanboys who will always choose Intel. AMD, however, is usually much cheaper, and by no means is it a poor-quality product.
Currently I am using the AMD Phenom II X4 955 Black Edition. I've had no problems with it and have been able to play the latest games just fine (BF3, TOR, etc.). That being said, I am planning on selling this machine soon and building a new one, for which I plan on buying an i5 2500K.
The i5 2500K is well known as the processor of choice on this subreddit, and many users will immediately recommend it. Not that the recommendation isn't warranted: it provides a very good price-to-performance ratio and will run current games very well. For comparing these CPUs, this is a good site to use.
Ultimately, what matters most is what you will be using the computer for. Generally speaking, the Intel chips fare better for gaming, mainly because games can't take advantage of all eight of the AMD chip's cores. If you do a lot of video editing, rendering, etc., those programs are much better at using multiple cores, so in that case I would choose one of the AMD chips.
> i7 3770k
You have the processor rated just below the best laptop processor on the market, and you're stuck at 30-40fps? And here I thought we were finally in the age where I could slide by with one laptop to rule them all... Guess I'm investing in a gaming desktop after all.
You get about a 42% improvement in single-thread performance, which will help in CPU-bound games such as Fallout 4 or Counter-Strike: GO.
Just bought the lowest end macbook pro!
- price is about the same as before for the 256GB / 8GB RAM config
- the new 2.0 GHz CPU is more power efficient and performs about the same as the old 2.7 GHz one. Here's a three-way comparison between the i5-6260U (a 1.8 GHz chip similar to the new 2.0 GHz one), the old 2.7 GHz chip, and the new 2.9 GHz chip
- the lowest end one has a bigger battery than the touch bar ones, 54.5 vs 49.2 Wh
- vim user, so yay for a physical escape key
All of this plus the better display, better speakers, better keyboard, and better trackpad made it an easy decision.
i5 is definitely better.
C2D P8400: 1518 PassMark score
i5 2557M: 2584 PassMark score
Be careful buying old server hardware; some of it is only good at converting electricity to heat. With emulators the CPU is very important, especially single-thread performance. 50+ cores won't do much for most emulators (a couple help, but not many more). Depending on the emulators you're shooting for, I'd just build a system around an Intel Pentium G3258.
Some benchmarks to give you an idea of what to look for: http://www.cpubenchmark.net/singleThread.html
Hm. Looks like a great CPU. When I built my most recent machine, I got an AMD Phenom II X6 1090T that cost me $125, after a rebate. At the time, no i5 chip provided similar performance at that price point. I don't know how the 2500K is for overclocking, but I've found that the 1090T can be cranked up to add an aggregate performance boost of about 24%, which puts it in the ballpark of a non-overclocked 2500K.
Heck, it looks like I could get a Core i7-2600 @ 3.40GHz for less than $300. That's a performance leap over the other chips. Frankly, however, I'm barely able to get the 1090T's six cores all working much, even doing some intensive image processing. It was a balance, and I decided to put a lot of the build money into SSDs and a great motherboard. Also a great case and absurdly great CPU cooler.
Is this what we're calling old hardware now?
Granted it's not gaming hardware but it's reasonable to think a small business or somebody's grandma might still have one of these lying around.
It's not a Pentium III, is all I'm saying, people...
A slightly overclocked Ryzen 1700 will be about 1.7x faster in multithreaded applications (according to CPUBenchmark). That's a huge difference.
For single-threaded applications, the i7 will edge out very slightly. (scores of 2027 vs 1754 according to this page)
Overall, the 1700 versus that CPU will be like night and day, if you do productivity tasks.
The Athlon X4 860K is the same socket, cheaper, with better single-threaded and better total performance. I'm also still learning, but you might want to look into this.
First of all, those AnandTech graphs were pure troll, cherry picked in the worst possible way... i7 5820K at 4.8 GHz, i7 5960X at 4.7 GHz, i7 6900K at 4.2 GHz. The actual averages in PassMark for those CPUs:
So overall: 15,084 versus 12,986 versus 15,979 versus 16,786.
Then single threaded: 2,046 versus 2,013 versus 1,990 versus 2,168.
Considering the 1700X supposedly had its turbo disabled and is a $400 CPU, giving 95% of the performance of the $1,000 i7 5960X... That's pretty damned good I'd say. Chances are the 1800X will be as close to the i7 6900K as the 1700X is to the i7 5960X. Perhaps closer still if turbo truly isn't working here.
This is a huge win for AMD, 90% of the performance of Broadwell-E at 50% of the price. The only thing to be concerned about is Intel's margins when these bad boys finally hit retail!
That's because technically it's only a dual floating-point-module processor, with 2 cores sharing each FP module.
I realize that people have weaker rigs than yours, but they probably are not trying to throw ~2000 part crafts at them either. Large crafts have always brought rigs to their knees.
And while it might have been better than early i7s, you're still running one of Intel's lower-end chips, and its Passmark scores reflect that. Considering that KSP is CPU-intensive, since Unity's physics engine runs on the CPU, I feel like you're expecting more than is reasonable performance-wise.
CPU bound, single core, check out CPU benchmarks and then the single thread details. Intel has the fastest 'single' thread CPUs. I think your CPU has the highest score on that: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4790K+%40+4.00GHz&id=2275.
Edit: this is an accurate list of which CPUs are most likely to give the best performance in KSP:
Edit 2: Why did I ever choose to go with an AMD motherboard? The best AMD processor has 67% of the performance but at 72% of the price. However, the AMD processor uses 250% of the power the Intel does.
Recommendation to those building a PC for KSP purposes: do not choose a motherboard with an AMD socket.
> Intel Core i5-2500K @ 3.3 GHz or AMD FX-8350 @ 4.0 GHz or AMD Phenom II x4 940 @ 3.0 GHz
> NVIDIA GeForce GTX 680 or AMD Radeon HD 7970 (2 GB VRAM)
Biggest load of bullshit I've seen in ages. Just say that you haven't got the slightest of ideas on how to put your games on PC.
How is the AMD Phenom II X4 940 at the same level as the 2500K?? Or even the FX-8350, for that matter. It's not even in the same range of CPUs.
It seems they have issues with VRAM being too low on GPUs; either they're not using it as intended, or they're pushing all the textures into it like Titanfall does, just to fill it up. The games definitely aren't pretty enough to justify filling the entire VRAM.
As far as performance goes, the minimum requirements don't make any damn sense.
He said cpubenchmark.net, which is the PassMark database.
In fact, it is indeed listed as faster there: http://www.cpubenchmark.net/compare.php?cmp=828&cmp=1780
The problem is that this happens only in certain very well-threaded workloads, while in many others the i5 wins.
This is exactly right, the two aren't even comparable. Even in the single thread rating the 2500k smokes the Phenom.
> Processor: Intel Core i7 or better
So would a first-gen i7 suffice, or does it need to be current? There is a nearly 3x difference in capability between first and current generations.
Granted, I'm guessing it'll require far less anyway...
> Graphics: DirectX 11.0 compatible (2 GB) or better
This doesn't really mean anything. A 6450 meets these requirements but is far too slow for gaming.
I wish that companies would actually put out real requirements rather than fluff. Fluff doesn't help anyone with preparing for a game.
An i3 4330 scores about 20% better than a 965 while using less than half the power. Single-thread improvement is more like 70%. It's not a huge upgrade, but it's definitely an upgrade. If you let your PC run 24/7, the power savings will probably pay for the i3 by the time you replace it.
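To put rough numbers on that payback claim, here's a quick sketch. The TDP figures are the official ones for those two chips (Phenom II X4 965: 125 W, i3-4330: 54 W); the electricity rate and i3 price are assumptions plugged in for illustration, and it assumes full load 24/7, so treat it as an upper bound:

```python
# Assumed inputs: $0.12/kWh electricity and ~$130 for the i3 are
# illustrative guesses, not quoted figures.
watts_saved = 125 - 54          # 71 W less at full load (official TDPs)
rate_per_kwh = 0.12             # assumed electricity rate, $/kWh
i3_price = 130                  # approximate street price, assumed

dollars_per_year = watts_saved / 1000 * rate_per_kwh * 24 * 365
print(round(dollars_per_year))                 # ~$75/year saved running 24/7
print(round(i3_price / dollars_per_year, 1))   # pays for itself in ~1.7 years
```

Idle draw differs by much less than TDP, so real-world payback takes longer, but the direction of the claim holds.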
This is a good deal for someone who wants to build an LGA 1150 (aka current gen) Intel machine but doesn't want to invest in a good CPU yet.
Wow, I read the whole thing. Just ridiculous. I was pissed off when people stole a couple of design elements from me; I can't imagine how you're feeling.
On a lighter note, those sketches are lovely, I especially enjoyed the portraits.
~~EDIT: I looked up benchmarks for the hardware they sent you. The card is absolute shit and the CPU is less powerful than an i7-2600.~~
I've always wondered how such a machine would perform at gaming. Out of curiosity, why wouldn't it work?
Just for the sake of mentioning it: Current Benchmarks of Multiple CPU Systems
My CPU score was 15,797
Now go to the charts (http://www.cpubenchmark.net/high_end_cpus.html), look at the prices of CPUs around the 15,797 mark, and realize I paid $329.
But yeah, it's only a small percentage off in FPS for some games, lol
Of course I am generalizing, and talking about their current (aging) lineup. And yes, the Pentium changed the landscape a little bit (I would still rather buy an 860K for $80 than a Pentium for $65, but if you want to spend $65 and not more, then the Pentium is a really great chip).
But it still makes sense to buy AMD in the $100-200 price range.
i3 4130 vs FX-6300 (~$100)
i3 4360 vs FX-8320 (~$130)
i5 4440 vs FX-8350 (~$170)
So yeah, if you can utilize all of the cores, then the AMD FX equivalent is better; if you can't, then Intel is better. And all of those AMD CPUs are unlocked so you can overclock them, whereas those from Intel are locked.
Not to mention that some people already have an AM3+ compatible motherboard and don't even know it. Chances are, if you owned a Phenom II before and didn't skimp on the mobo, your AM3 board can support FX with a simple BIOS update (that was my case :-) ). AMD is really famous for their compatibility: I recently upgraded an oooold 90nm single-core Athlon 64 2GHz to a 45nm Phenom II quad on the same AM2 motherboard, using the same DDR2 memory :-) Heck, some cheap AM3+ boards still use those ancient chipsets from the AM2+ days.
Right now the main constraint for KSP is the CPU. You want a CPU that has exceptional single-threaded performance. Intel CPUs tend to be the best brand for this.
Check out a single-threaded performance benchmark here, and then choose the best performing one you can afford.
The second and third most important considerations are RAM and GPU. You want to have at least 6 GB of Ram (KSP can use up to 4, plus a few gigs extra for other processes), and any decent graphics card should be able to run it without much trouble.
Just because AMD has more cores, does NOT mean it does more work.
The top 17 processors are all Intel; 23 of the top 25 are Intel.
The very top chip in that list (the E5-2690) is an octo-core with Hyper-Threading, effectively 16 logical cores. The top AMD chip, at about 60% of the performance, is the Opteron 6272, a 16-core chip with 4-way multiprocessing, effectively 64 logical cores.
The comparison page for those 2 processors:
Intel's chip uses 135 watts of power and relative cooling; AMD's chip uses 115 watts of power and relative cooling.
So, to sum it all up: the Intel chip uses 135 watts with 8 (16) cores to achieve a PassMark score of 16,609 and costs about $2,000. The AMD chip uses 115 watts with 16 (64) cores to achieve a PassMark score of 10,245 and costs about $500.
Intel provides about 123 PassMark points per watt; AMD provides about 89. Intel costs roughly 2.2 cents per hour to run; AMD roughly 2 cents per hour.
Performance parity between the two chips works out to approximately 5 AMD chips for every 3 Intel chips. Running 5 AMD chips costs about 10 cents per hour, versus 6.6 cents per hour for 3 Intel chips.
So there is a 3.3-cent-per-hour advantage to running Intel for this workload. That does not incorporate cooling, though: generally speaking, about 2x as much power is required for cooling (including room cooling) as for the processors themselves, so it's more like 9.9 cents per hour.
If you bought 5 AMD chips for $2,500 and 3 Intel chips for $6,000, at a 9.9-cent-per-hour disparity the break-even point is just over 4 years (at 4 years, the power savings are about $3,421).
This is based on 6.9¢/kWh, which is very cheap. Actual power savings will probably be higher; in California, for example, it's usually 14-19¢/kWh unless you are using Bloom boxes.
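To make the arithmetic above easy to check, here's a quick sketch using only the figures quoted in this comment (nothing freshly measured):

```python
# Scores, wattages, and prices are the commenter's quoted numbers.
intel = {"score": 16609, "watts": 135, "price": 2000}
amd = {"score": 10245, "watts": 115, "price": 500}

# PassMark points per watt of TDP
intel_ppw = intel["score"] / intel["watts"]   # ~123
amd_ppw = amd["score"] / amd["watts"]         # ~89

# Parity setup: 5 AMD chips vs 3 Intel chips, using the quoted
# 9.9 cent/hour running-cost gap (cooling included).
upfront_gap = 3 * intel["price"] - 5 * amd["price"]   # $3,500 extra for Intel
gap_dollars_per_hour = 0.099
break_even_years = upfront_gap / gap_dollars_per_hour / (24 * 365)
print(round(break_even_years, 1))  # ~4.0 years, matching "just over 4 years"
```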
90+ percent of the high performance CPUs on this chart are Intel-based. That isn't speculation or anecdotal experiences, just numbers. Draw your own conclusions.
You don't really have much choice, there's only one iMac under £1000.
If you're set on Apple you could potentially get her a 13" Macbook pro, which would be useful if she needs to edit away from home. If she already has a desktop that she's happy with then this would be more helpful. Plus, (although it is £100 more) the Macbook is significantly faster than the iMac according to this comparison of CPUs.
Alternatively you could get her a custom-built PC. You might be able to get one at a local computer store, but if you have a geeky friend who knows about computers I'm sure they'll help you out for cheaper. If she uses a PC currently then this might be the best option. Apple Macs cost significantly more for the same performance, plus the OS is unfamiliar to Windows users.
Again, going in a completely different direction, have you considered getting her something other than a computer? You can get some really nice lenses around the £1k mark, which she would definitely appreciate.
TL;DR: No, this is not worth it unless your alternative is literally not purchasing anything.
It took a while to track down what exact CPU is in these things since all of the product literature just says "Intel Xeon 2.2ghz". Links are at the bottom for you to review.
CPU is called Intel® Xeon® Processor 2.20 GHz, 512K Cache, 400 MHz FSB. It was released in early 2002. The Dell PowerEdge 2650 was released in early/mid 2002. This is a 12 year old CPU running in a 12 year old server -- there's a reason these are being thrown away :-)
Passmark for this CPU is 252.
Passmark for a recent-generation, mid-range Intel Xeon E3-1220v3 (currently $199 from Newegg) is 6991.
Passmark for the best CPUs on the market today is around 12000.
Now, these numbers do not necessarily mean that there is a perfectly linear relationship between the score and performance. However, in rough layman's terms, this says that the E3-1220v3 is almost 30x as powerful as each Xeon 2.2GHz CPU in the Dell PE 2650. Since each unit includes two of the 2.2GHz chips, the E3-1220v3 is roughly 15x as powerful as a whole Dell PE 2650 (at a basic, rough-estimate level).
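The ratio math can be spelled out in a couple of lines (the scores are the quoted PassMark numbers, and linear scaling is only an approximation):

```python
# Quoted PassMark scores from the comparison above.
old_xeon = 252      # 2002-era Xeon 2.2 GHz
e3_1220v3 = 6991    # recent mid-range Xeon E3-1220v3

per_chip = e3_1220v3 / old_xeon
print(round(per_chip))      # ~28x per chip, i.e. "almost 30x"

per_server = per_chip / 2   # the PE 2650 carries two of the old chips
print(round(per_server))    # ~14x per server, roughly the 15x quoted
```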
Sorry, but this is a phrase I don't believe you should be associating with your fileserver. You don't need to buy the most expensive parts, but buying cheap can set you up for failure.
An Ivy Bridge Celeron, the G1620, is essentially the same price and runs circles around that AMD. It also supports ECC memory.
Then you could add a Supermicro X9-SCL-something for around $160.
8GB of ECC from a quality company will run you about $100 new.
I don't like Corsair one bit, but that's up to you. The Seasonic SSR-360GP is commonly on sale around $55.
So that is around $355 without a case for ECC and IPMI.
Oops, you want 8 drives. An M1015 on eBay runs around $100.
I can confirm that most of what OP says is true. And I also share some of his frustrations. I have a bit of experience from hosting the massive multiplayer events and I like hanging around the r/factorioMMO servers. So here are some more thoughts.
Factorio headless is mostly single-threaded. It still does many things in other threads, so I don't expect it to run well on just one core, but going beyond 4 cores will have no performance benefit imo.
If you want to be able to run a big server with a big map, look for CPUs with good single-thread performance http://www.cpubenchmark.net/singleThread.html.
I absolutely hate virtual cores from VPS because they are usually trash.
Since 0.14, many people don't realize that the game is sometimes greatly slowed down by a slow server. They look at the counter and see 60 UPS/60 FPS, but they don't know that many of those 60 UPS are duplicated because the client is waiting for the server. You can see this happening if the multiplayer waiting icon is blinking constantly, after you activate it in the debug options.
But! There is one advantage to a slow server: it becomes more accessible to players. A slow server means people with slower computers can join, since the game kicks (or becomes unplayable for) anyone that can't keep up with the server. Also, people joining mid-game can catch up much faster.
As for internet, Factorio is still quite sensitive to bad connectivity, and good connectivity, quality, and high speed usually go hand in hand. Try to look for a provider with good peering. A server of around 200 people needs about 2MB/s of traffic for game packets, but you also need good speed (or a limit on the map download rate) to make sure map-download packets don't slow down the game packets. 1Gbit is what I recommend for MMO events; 100Mbit (with a map download limit) is fine for small servers.
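For a sense of scale, the quoted figures imply a fairly modest per-player game-traffic budget; a quick sketch (200 players and 2 MB/s are the numbers from above, map downloads excluded):

```python
# Implied per-player game traffic from the figures quoted above.
players = 200
total_kb_per_s = 2 * 1024          # ~2 MB/s of game packets, in KB/s
per_player = total_kb_per_s / players
print(per_player)                  # 10.24, i.e. ~10 KB/s per player
```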
The "Turbo" version has an Intel Pentium N3710 CPU which has a passmark of 1299 which would probably struggle to transcode even a 720p stream. For direct play it would work fine I would imagine.
Not unless you are doing something that requires more cores/threads, like CPU encoding for streaming games over Twitch (or another streaming service). If you are just playing games you probably will not notice it, but you will be able to multi-task a lot easier.
The one thing you would notice is that this CPU sits on an X99 motherboard with DDR4 RAM. That stuff is fast.
If you are looking to upgrade your mobo so you can use DDR4, may I suggest the i7 5820K? According to this, it is the highest-ranked CPU under $1000. It is the one I am going to get when I upgrade to a desktop later this year. New RAM, a new mobo, and this CPU is rather pricey, but it leaves you a ton of room to grow.
Considering this is a hardware subreddit, I assume I am not alone in caring about multi-threaded performance per dollar in After Effects, Cinema 4D, and other animation/rendering programs. AMD actually does pretty well here.
Benchmarks that max the CPU matter a lot to animators as it compares raw performance. Games rarely max the CPU.
Build a custom setup. That processor is woefully underpowered - upgrading the RAM isn't going to have a noticeable impact on DF in your scenario.
Your processor performance
Core i3 processors run in the 4k range on CPU score, while i5 and i7 processors run upwards into the 7-8k range. While DF's load is focused on a single thread, don't underestimate the value of a modern processor (clock speed is not everything).
Just keep in mind that your processor will probably run you $100 for something cheap, probably another $75-$150 for a motherboard, and then another $75-$100 for RAM. Even if you go with something cheap, expect to spend roughly $300.
I don't know what your budget is like, but you might just want to make a good all-around gaming PC instead of trying to build one just for DF.
This isn't as impressive as it sounds.
1 month = 30 days * 24 hours * 60 minutes = 43,200 minutes
So Google is able to perform this task 43,200 times faster than in 1999.
In 1999, The Pentium III Xeon was state of the art. Now we have much better processors. According to Passmark:
The second processor, which is what I use in my cluster at work, is roughly 55 times faster and is not even an expensive one. So, assuming Google made no other improvements to their software, they would only need 785 more mid-to-low-end processors to do the job. For all we know, that one-month number was generated with a small number of CPUs, maybe even 1 or 2, considering the company was founded in Sept '98.
So yeah, I believe Google has a few hundred computers somewhere.
PS - The Pentium III was only ~800 MHz in 1999; I'm being generous using 1400. That's all Passmark had anyway.
edit: The article is dated 2012, so this CPU is more appropriate: E5-2680, which scores 13401. It doesn't change much.
edit2: Some informed people are pointing out that network capacity is a bigger bottleneck than CPU in instances like this. I agree 100%, but it's harder to illustrate so I chose the simpler CPU-based one. If we assume each webpage is 1MB, then we get:
50,000,000 pages * 1MB/page / 1 minute ≈ 833 GB/s
That is a lot, but not incomprehensible. 100Gbit links are available, and you can bet that Google has many links across many data centers.
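The arithmetic above can be sanity-checked in a few lines (the 55x speedup and 1 MB/page figures are the assumptions made in this comment, not measured values):

```python
# "1 month down to 1 minute" expressed in minutes.
minutes_per_month = 30 * 24 * 60
print(minutes_per_month)            # 43200

# CPU side: how many modern chips cover the remaining speedup?
speedup_per_cpu = 55                # modern Xeon vs 1999 chip, per PassMark
extra_cpus = minutes_per_month / speedup_per_cpu
print(round(extra_cpus))            # ~785 extra mid/low-end CPUs

# Network side: 50M pages of ~1 MB each fetched in one minute.
gb_per_second = 50_000_000 * 1 / 60 / 1000   # MB/s converted to GB/s
print(round(gb_per_second))         # ~833 GB/s
```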
Is it even 10% per generation? Looking at the Passmark benchmark, the 6700K, three generations after the 3770K (two of which were "tocks"), scores just ~15% higher. This includes gains due to higher clock speed, and not just IPC gains. And Kaby Lake will be, apparently, a Skylake "refresh".
I hope that Zen changes things, but, what we can reasonably expect is that, if it costs like an i7, it will perform like an i7 (or better, yeah, but on the same tier), and vice versa.
You are right about iGPU gains, though. We can expect them to get better. I can't really tell how much better, though... and the 980Ti is pretty strong.
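For what it's worth, compounding the quoted ~15% total gain over those three generations gives the implied per-generation improvement:

```python
# ~15% total over 3 generations, compounded (quoted figure, not measured).
total_gain = 1.15
per_gen = total_gain ** (1 / 3) - 1
print(round(per_gen * 100, 1))  # ~4.8% per generation, well under 10%
```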
Synthetics put them very close at stock speeds, with the Xeon having a better single-thread rating despite a slightly lower overall score.
And then there's the four-thread i5-4690 not backing down at all in real-world tests, which suggests that this eight-thread Xeon would end up ahead of the 9590, at stock.
"and boom here we go again - http://www.cpubenchmark.net/compare.php?cmp=2347&cmp=2017 42% better multithreading performance."
Nobody denies that AMD's 8-core chips have relatively better multicore performance.
But here's the thing. Unless you do movie encoding, hardly anything utilizes 8 cores fully.
The Intel counterpart (like the i5) has significantly more power per core, which is what benefits almost everything you do: general Windows usage and gaming.
Not to mention you won't be OCing that AMD at all with a cheap mobo and stock fan.
A Sandy Bridge-based i5 will slaughter Atom boxes in both processing power and power usage.
Running on CPU score alone:
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Atom+Z550+%40+2.00GHz - 386 points
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-2500+%40+3.30GHz - 6506 points
So on synthetic benchmarks the i5 is comparable to 16 atoms.
I know the real world doesn't work like this, but, if you think about it, it's absolutely crazy.
It is wayyy more efficient to virtualize those Atom machines onto an i5, and it'll cost you less in the long run too.
CPU Benchmark keeps benchmarks on every modern processor. This page shows a comparison of the m3 and m5 CPUs in the MacBook, and the i5 in the MacBook Air
On paper, we're talking about a difference of 20% processing power between the base-model MacBook Air and the base-model MacBook. A difference of about 9% on the base-model MacBook Air vs the Core m5 MacBook.
But for two reasons, those differences aren't as bad as they seem. For one thing, Apple appears to be using these chips at a higher clock speed than the stock models CPU Benchmarks tested, which shrinks the gap considerably.
For another thing, the biggest bottleneck in a computer, despite the advancements made, is still the storage. The MacBook's flash storage is vastly faster than that found in the MacBook Air, which shrinks the speed gap as well.
In real world usage, the two machines will fare about the same in day to day tasks. Neither of these are machines for gamers, video editors, or any other similarly high-performance tasks. If you're concerned with power, move up to the MacBook Pro. The i5 and i7 chips in the MacBook Pro are a magnitude better than the Air or the MacBook.
If you're more of a consumer, meaning you browse the web, watch movies, and use Office (etc), the MacBook is the machine I'd choose. You will make trade offs with this machine, such as the sole USB port. However, you wouldn't be considering MacBook in the first place if you couldn't get by with one port.
Better price, lower TDP, built for 24/7 operation, no integrated GPU so better temps and overall efficiency. More features like ECC and some virtualization features, plus an overall longer lifetime than the i7. Built to handle heavy workloads over long periods of time.
The Intel Xeon 1241v3 offers about the same performance as the i7 4770K for ~$100 less.
My passmark score for Intel Xeon 1241v3 is 10580. Compare
passmark is bullshit.
Although frankly, newer i3s even beat it there
His CPU is almost 6 years old. It's less than half as powerful as the CPU recommended by Oculus.
The socket is old too, so they'd need a new mobo, new CPU, and new GPU. But they'd most likely need a new PSU too.
So they can keep what? Their RAM and hard drive? Sure. But the rest needs upgrading.
>Too many people here are focusing on entire PC upgrades when most people just need a GPU upgrade.
No, too many people thinking they can run the Rift with their 6 year old CPU.
- Better distributed performance
- Price (as you indicated)
- Better single-threaded performance
- Lower power consumption
They are, but AMD gives better overall performance for the same money.
But since Intel runs rings around AMD in single-thread performance, it matters not. A four-core i5 will routinely do much better in games than an AMD 8350, despite the 8350 being "better" at multi-threaded work.
Overall value = AMD.
Single Thread Value = Intel.
Huge fan of the game... not exactly the heavy hitter when it comes to PC performance.
"I'm running Far Cry 3 at Ultra, 1920x1080, 4x AA and MSAA, at a silky smooth 60FPS avg"
"Oh yeah, well I can play a strategy game in windowed mode."
Out of curiosity, I tried to find a Retina 5K iMac benchmark test; its hardware looked pretty good on paper. Instead I found this.
> "We couldn't run any of our performance benchmark tests on the hands-on units"
...which has me worried. I mean, they're selling it with an i7-4790 and a Radeon R9 M290X, which is pretty damn good for a mobile card... so I don't know what they're hiding.
Performance wise, the Xeon E5-2697 is quite possibly the very best non-military CPU available. It has roughly 1.3x the power of the i7 5960x ^1
(as a side note, the Xeon has 14 physical cores and 28 logical)
Passmark is a decent CPU benchmark, and look how low your CPU scores.
And looking at a game like Metro 2033, in which your card struggles to get even near 30FPS, it's guaranteed to look choppy.
Sorry man, you need a new rig.
I was going to post this - the only benchmark site you need:
Also, Tom's Hardware's Best CPU and Best GPU articles are excellent - just make sure they're up to date.
It depends on many things:
So, it really depends on what you plan to use your computer for and what your budget is. If you don't plan on doing much video/audio rendering, the i7 2600K probably isn't for you. The i5 2500K seems to be the processor best suited to the needs of most people on this subreddit, costing about $210 USD.
Look at the 6800K: exactly the same clock, if turbo is working on Ryzen, and the single-core score is actually the same or better. So a disabled turbo does not make sense to me.
And while these numbers include individual OC results in the average, Ryzen here is just at stock. So I assume Ryzen nearly catches Skylake IPC-wise...
I prefer the fewer-larger-but-more-expensive drive setup, but many prefer the more-smaller-but-less-expensive one. Regardless, you might want to look for a chassis that has a drive multiplier/backplane based on SAS or SAS2. Hook that up to an LSI HBA (like the popular IBM M1015) in IT mode and you're good to go.
The other hardware is personal preference. You want dual Xeons? Sure, lots of processing power, but a lot of heat and power usage as well. One frame of reference can be the CPU PassMark score. This is a handy site to quickly (read: very basic, high-level) compare CPUs: http://www.cpubenchmark.net/
A more modern CPU can have tweaks specific to your needs (say, encryption acceleration) and lower power usage. It's a balance between performance, power usage, heat output, and upfront cost. Combine all those factors into a TCO you're comfortable with and you've got yourself a setup.
Personally, I try to spend more upfront (a more expensive but far less power-hungry CPU, larger but fewer drives, etc.) to keep the noise, heat, and power to a minimum.
Those scores don't say anything about performance; the site just gives you a number without telling you why one part is better. It's honestly one of the worst sites to consult when looking to buy new hardware.
A few other scores from Passmark
i3-4370 beats out other i5s and i7s
The GTX 690 is worse than the 680 despite the 690 being a dual 680 GPU
People should avoid Passmark.
That's an ancient processor that would be terrible for Minecraft due to its poor single thread performance. Why not just rent from SoYouStart or some place? That plus the slow disks would bottleneck hard.
I sincerely hope that you're trolling. That CPU is a good 25% slower than a 1230v2, and almost 40% slower in terms of single thread performance. You could rent a 12xxv2 machine with actual SSDs for ~$50/mo in most cases.
96GB RAM? Seriously, paired with that dinosaur processor?
According to PassMark they have about the same multithreaded performance (when using all the cores). However, if the software can only use 2 cores, it will run better on the i3.
Comparing just the CPU performance, not even taking into account the poor performance of the integrated graphics, the rMBP already falls short of a high end desktop. That kinda goes without saying that a laptop doesn't perform as well as a desktop, let alone a high end one.
>It has a Intel Core 2 6600 2.4GHz CPU
>my setup should be able to support up to 3 concurrent transcodes
PassMark score: 1555
The minimum suggested PassMark score for a single 4Mbps/720p transcode session is 1500.
>First question - most of my files are MP4 and MKV, if I understand correctly Roku can direct play them, meaning no transcoding is required. Is that accurate?
MP4 and Matroska (MKV) are containers, which contain media streams -- audio, video, subtitles, etc. The media streams are encoded in various formats.
If your Plex client cannot decode one or more streams, transcoding would be required. If your Plex client is not capable of opening the container, Plex Media Server can simply repackage the streams in a container supported by your client.
The Roku 3 should be able to play H.264 encoded video streams and either AAC or MP3 encoded audio streams contained within either an MP4 or Matroska container.
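The direct-play / remux / transcode decision described above can be sketched in a few lines. The supported-format sets below are assumptions based on the Roku 3 capabilities just mentioned, not Plex's actual internal logic, and the codec names are illustrative:

```python
# Sketch of the direct-play / remux / transcode decision, assuming (as
# above) the client handles H.264 video with AAC or MP3 audio inside
# MP4 or Matroska containers. Not Plex's real identifiers or code.

SUPPORTED_CONTAINERS = {"mp4", "mkv"}
SUPPORTED_VIDEO = {"h264"}
SUPPORTED_AUDIO = {"aac", "mp3"}

def playback_mode(container, video_codec, audio_codec):
    streams_ok = (video_codec in SUPPORTED_VIDEO
                  and audio_codec in SUPPORTED_AUDIO)
    if not streams_ok:
        return "transcode"      # client can't decode a stream: re-encode
    if container in SUPPORTED_CONTAINERS:
        return "direct play"    # nothing for the server to do
    return "remux"              # repackage streams, no re-encoding

print(playback_mode("mkv", "h264", "aac"))  # direct play
print(playback_mode("avi", "h264", "mp3"))  # remux
print(playback_mode("mkv", "hevc", "aac"))  # transcode
```

Remuxing is cheap (it's just copying streams into a new container), which is why only the transcode case needs meaningful CPU power.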
>If so, does that mean I could share with lots of people assuming they are all using Roku, Up to my bandwidth limit?
Yes. Streaming without transcoding does not require very much CPU power.
>Second question. I was given a pair of HP Proliant DL380G6 servers, each with 144GB of RAM and 2 Intel Xeon X5570 2.93GHz CPUs (4 cores each, 16 logical cores total).
PassMark score: 5630
This CPU would be capable of supporting 2x 10Mbps/1080p plus 1x 4Mbps/720p transcode sessions (or 3x 4Mbps/720p transcode sessions).
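A rough sketch of how that capacity estimate works, using the rule-of-thumb costs cited elsewhere in this thread (~2000 PassMark points per 10Mbps/1080p session, ~1500 per 4Mbps/720p session). Real throughput varies with the source file, so treat this as back-of-the-envelope only:

```python
# Back-of-the-envelope transcode capacity from a PassMark score,
# using the rule-of-thumb per-session costs quoted in this thread.

COST = {"1080p": 2000, "720p": 1500}  # PassMark points per session

def max_sessions(passmark, quality):
    """How many sessions of one quality a score supports."""
    return passmark // COST[quality]

def fits(passmark, sessions):
    """Check whether a mix like {'1080p': 2, '720p': 1} fits."""
    return sum(COST[q] * n for q, n in sessions.items()) <= passmark

x5570 = 5630  # Xeon X5570 PassMark score quoted above
print(max_sessions(x5570, "720p"))            # 3
print(fits(x5570, {"1080p": 2, "720p": 1}))   # True
```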
>If I wanted to make the best possible Plex server possible given my hardware what would you recommend?
Unless you have a reason to partition the server into one or more virtual machines, I would forgo any virtualization as it will simply reduce the efficiency and power of your server. For Plex Media Server, you want to maximize the CPU power.
Everyone's telling you to upgrade, so I'll chip in a little more detail as to why. The AMD Duron CPU was released 14 years ago as a competitor to the then-popular Pentium III and was one of the lowest power options in the category of CPUs that included the AMD Athlon, Intel Pentium III and Intel Celeron. It is a single-core, 32-bit CPU with clock speeds ranging from 600MHz in the earliest edition to 1.8GHz in the latest edition. Its Front-Side Bus clocked in at 100MHz or 133MHz.
In short, this thing isn't just "old", it's practically "ancient" in terms of the number of hardware generations that have occurred since its prime. And even in its prime, it was the bargain-basement option.
Just about anything you can find on ebay or craigslist these days will be better than what you've got right now. If you can afford to spend a few hundred dollars and don't mind assembling it yourself, you could build a nice Xeon-based system (such as with an E3-1225v3 CPU) with an Intel Pro NIC for $600-$800.
Just for kicks, I tried to find some benchmarks for you. This was a little tricky, since Passmark was in its infancy when this CPU was released. http://www.cpubenchmark.net/pt7_cpu_list.php shows only a couple of entries for the Duron -- the basic "AMD Duron" entry shows a passmark score of 268. By comparison, the Intel Xeon E3-1225v3 has a Passmark score of almost 7,000 and the best CPUs on the market today have Passmark scores approaching 14,000.
I'm impressed that you've gotten this machine to limp along to this point, but it's well overdue for retirement.
>Try to go look at a chart of processor specs over time. Processor speed has pretty much flattened out now.
Err, no. Speed as expressed by the primitive measure of gigahertz has flattened. Speed as measured by actual compute power continues to grow at historical rates. Check out the top CPU on that chart, at 3.33GHz, with a 10,000 score. Ctrl-F Pentium 4: the first you'll find is 3.4GHz, and it scores a 500. Today's 3.4GHz is 20x faster than 2002's 3.4GHz. It also uses half the power.
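That per-clock comparison is easy to check with the rough scores quoted above:

```python
# The "megahertz myth" in one calculation: PassMark points per GHz for
# a 2002-era Pentium 4 vs. a then-modern 3.33GHz chip, using the
# approximate scores quoted above.

pentium4 = {"ghz": 3.4, "score": 500}
modern = {"ghz": 3.33, "score": 10_000}

def per_clock(cpu):
    """Benchmark points delivered per GHz of clock speed."""
    return cpu["score"] / cpu["ghz"]

ratio = per_clock(modern) / per_clock(pentium4)
print(f"~{ratio:.0f}x more work per clock cycle")  # ~20x
```

Nearly identical clock speeds, twenty-fold difference in throughput, which is exactly why clock frequency alone tells you almost nothing across generations.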
You'll only need 8 gigs of ram unless you're doing a ton of work at once. And i really do mean a ton.
Now for the processor, Apple makes it annoying for two reasons: one, they don't tell you the exact model they use, and two, Apple uses rare, very slightly upgraded variants of the base processor, so there are little to no benchmarks available.
We can go off of the benchmarks for the base processors though: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+m3-7Y30+%40+1.00GHz&id=2864 https://cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-7Y54+%40+1.20GHz&id=2873 https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-7Y75+%40+1.30GHz
Only the upgrade to the i7 makes sense; the i5 has a lower multi-core score but a higher single-core one. I wouldn't say the i7 is worth $250 more, but if you have the money, go for it.
The U-series or Y-series Intel is just about as fast as the desktop Pentium G4560; don't expect it to be even remotely as fast as a gaming desktop. This sub makes it sound like U and Y have a huge gap in performance, while forgetting the fact that the U itself is far inferior to its desktop or standard-voltage laptop counterparts.
Therefore, I'd say spending the extra cash to get the highest model isn't the wisest shopping decision. You'd probably save money and get better performance by getting a base model or mid-range Surface combined with, say, an i5 desktop.
Pentium G4560 http://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+G4560
I'd say it's worth it. For one you can't watch Netflix in 4k on any other CPU generation than Kaby Lake right now. Also there is a performance increase of 10% according to the Passmark score.
Hate to break it to you but your computer is way below minimum spec.
Your processor benchmarks just barely above the required minimum (See this comparison here)
Your video card, on the other hand (and I'm reluctant to even call what you have a video card), is far, far below the minimum required specs (See the comparison here).
I'm guessing you bought, or were given, a pre-built laptop and whoever picked it out didn't realize that an Intel HD Family card isn't the same thing as a dedicated graphics card. That card is meant for little more than video playback and some older games with the graphics turned all the way down.
There have been some issues similar to what you are describing reported for the PC version, but I'd really recommend upgrading as soon as you are feasibly able to. Unfortunately, since you are on a laptop, you can't just go out and pick up a new graphics card to slot in; you'll need to upgrade to an entirely new system if you want to get better hardware.
Your single-core rating on that 8350 is actually lower than that of your 4-core CPU. So unless you have a program that really needs those extra cores, wait.
Well, just looking at RAM in isolation, then yes, your old laptop is 'better'; although most people would recommend you get the 8GB SP4 anyway, even if you don't need it now, for future-proofing.
In terms of basically every other spec, no, your old laptop is not 'better'. The SP4 CPU is slightly better, whilst having a 15W TDP instead of a 35W TDP, meaning it will run cooler and quieter, and consume less battery.
The hard drive is no contest, SP4's PCIe SSD vs a Hybrid HDD with only 4GB SLC NAND Flash portion - Read and Write speeds will be massively better in the SP4 and this translates in to a much more responsive system.
The screen is again no contest, SP4's 2,736 x 1,824 resolution display vs your laptop's 1366 x 768 display - meaning nearly 5 million pixels instead of just over 1 million; plus better colour accuracy etc.
In terms of raw computing power, there may not be a massive increase over your laptop - although the SSD will make a massive difference in how quick the system feels. The appeal of the Surface line comes from the form factor, pen, amazing display, etc. - whilst still managing comparable performance to full-size laptops.
> AMD will pass Intel on IPC (instructions per clock) with Zen.
Do you have any sort of source on this, or is it just rampant speculation that AMD somehow with a single generation will catch up and even pass Intel, who has increased their gap in IPC from AMD every generation for the past 8 or so years?
> they will both be at about the same level
Considering how AMD currently is about 5 levels behind Intel when it comes to IPC I seriously doubt that.
> http://www.cpubenchmark.net/compare.php?cmp=2275&cmp=2565 Notice how the Haswell CPU has a higher Single Thread rating than the Skylake due to the 0.2 Ghz faster Turbo Speed.
And passmark is a horrible way to measure CPU performance, check this for example for proper data:
Spoiler: Haswell to Skylake (DDR3): Average ~5.7% Up.
Basically when it comes to CPUs:
i7 > i5, more cores > less cores, and higher GHz > lower GHz.
You can go on this website: http://www.everymac.com and search up the Mac you are interested in, find its actual processor number and then look at the score on this website: http://www.cpubenchmark.net.
As far as the numbers on that site are concerned, >7000 is considered strong in terms of performance, and a 10% difference on its own will be "just noticeable". ~20% will "feel faster". Source
If you run the numbers for the computers you're looking at, the iMac's CPU is about 38% more powerful than the Mini's (6430 vs 4652), if each is equipped with the i5 option.
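Applying the rule of thumb above is straightforward. The thresholds (10% "just noticeable", 20% "feels faster") follow the guideline quoted; the function names are just illustrative:

```python
# How much faster CPU A is than CPU B, as a percentage of B's score,
# and whether the gap is likely noticeable per the rule of thumb above.

def speedup_pct(score_a, score_b):
    return (score_a - score_b) / score_b * 100

def verdict(pct):
    if pct >= 20:
        return "feels faster"
    if pct >= 10:
        return "just noticeable"
    return "probably imperceptible"

imac, mini = 6430, 4652  # scores quoted above
pct = speedup_pct(imac, mini)
print(f"{pct:.0f}% -> {verdict(pct)}")  # 38% -> feels faster
```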
Also, beyond 8GB, CPU is generally a much better investment than RAM for production and running plugins like effects and synths. 8GB is really plenty and 16GB is by far enough, even if you're using multi-GB sample libraries. So if it's between 8GB of Ram and an i7, and 16GB of RAM and an i5, you should probably go with the i7.
You do understand that mobile and desktop parts are not on the same level of performance even if branded similarly.
A GTX 970 is 4 times more powerful than a 960m even a regular GTX 960 is 3 times more powerful than a 960m.
Also the i5-2500k is 2 times more powerful than your i5-5200u.
Mobile parts do not compare to desktop counterparts, ever. Reference
In this case, since the i7 is a low-wattage version (because of the U), it is outperformed by the i5. Here's a comparison of the two processors.
And since the 970m outperforms the 960m by a factor of 3, I would go with the first laptop.
Will it increase performance? Certainly. Plex transcoding is typically bottlenecked by CPU performance, so increasing it will help. Info on CPU power needed for transcoding: https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-computer-
Will it mean more streams? Really depends on how much you can overclock with stability. Your Q6600 does just under 3000 passmarks, stock, and to get to two 1080p transcodes, you'll want to have about 4000 passmarks. That's a pretty significant overclock but not impossible for a Q6600. I think you'd need to hit about 3.1 GHz or faster.
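Assuming PassMark score scales roughly linearly with clock speed (a fair first-order guess for an overclock of the same chip), the required clock works out like this. The stock numbers are the approximate figures from the comment above:

```python
# Estimate the clock needed to hit a target PassMark score, assuming
# score scales linearly with clock. Linear scaling is optimistic, so
# treat the result as a lower bound on the required overclock.

stock_ghz = 2.4       # Q6600 stock clock
stock_score = 3000    # approximate stock PassMark, as above
target_score = 4000   # roughly 2x 10Mbps/1080p transcode sessions

needed_ghz = stock_ghz * target_score / stock_score
print(f"~{needed_ghz:.1f} GHz needed")  # ~3.2 GHz
```

That lands right around the ~3.1GHz-or-faster estimate above; either way it's a hefty but achievable overclock for a Q6600.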
Nah, the pentium does have better single core performance.
The difference is meh, but he could save money getting that and possibly a 280x or something.
This is the same company that claims in their ads that the computers that they were building in 2004 are still outperforming what you can buy for $500 today in any Walmart out there.
I can but stew in my silent rage every time I hear the claim. There was not a single consumer CPU on the planet in 2004 that would even hold a candle to something as basic as a Celeron G1820 that you can pick up for ~$45 today. Even the top of the line Athlon 64 FX-55 from 2004 looks like a graphing calculator vs the bargain Celeron of choice today. To say nothing of the crap performance you're going to get out of the RAM, Storage, and Buses of the age. Even the top of the line X850 XT graphics card from 2004 doesn't put up much of a fight against the integrated graphics in say, this Core i3 laptop with Intel HD4400 graphics which is a hair short of $500.
> There is a little room for optimism.
Hold the waterworks, Susan. You're only comparing single processors. Consider parallelism. The i7 980X (Gulftown, July 2010) is benched barely above the i7 3820 (Sandy Bridge, February 2012), but the latter costs just $320. Moore's law (as you've chosen to interpret it) calls for the cost of a given amount of FLOPs to halve every 18-24 months, but in 19 months it's fallen by two-thirds! So what if the 3960X (Sandy Bridge, November 2011) is only 33% faster than the 980X? Silicon's been bumping against the heat dissipation wall for nearly ten years - the 130W barrier has proven far less penetrable than the 3GHz barrier. It's not like anyone still imagines AI is going to require doggedly linear computations.
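For anyone checking the arithmetic: the 19-month gap and the $320 price come from the comment above, while the 980X launch price of roughly $999 is my assumption:

```python
# Observed halving time for cost at (roughly) constant performance,
# 980X -> 3820. The ~$999 980X launch price is an assumption; the
# $320 figure and the 19-month gap are from the comment above.

import math

price_980x = 999.0   # assumed launch price, July 2010
price_3820 = 320.0   # quoted above, February 2012
months = 19

halvings = math.log2(price_980x / price_3820)
print(f"cost fell {1 - price_3820 / price_980x:.0%}")  # 68%
print(f"~{months / halvings:.0f} months per halving")  # ~12 months
```

Under those assumptions the price for comparable performance halved roughly every 12 months, comfortably ahead of the 18-24 month schedule.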
AMD has the most bang for your buck. But Intel completely and utterly dominates the high-end CPUs, and the majority of them are expensive as fuck. A thousand bucks, GUDDAMN. Xeons and i7s, Xeons and i7s everywhere.
That's why 2600(k)-2500(k) are being paraded. They are, paradoxically, at the top of the list, but relatively inexpensive.
Cores aren't everything, Here's the passmark score for the CPU you have
And here's the passmark for the 6850k:
As you can see the 6850k has over 2x the performance.
Overall they are pretty underpowered cheap components. It should run League of Legends and almost every indie game but don't expect any AAA title to run any better than 30fps on lowest settings
The CPU score is 407. Common wisdom here is that each transcoding session requires a score of 2000. /u/Teem214 is dead on with his comments.
Upgrading the RAM would help a little bit, but the CPU is the biggest thing holding him back at the moment. And DDR2 RAM? Honestly, if your buddy has ~$500 he could build a solid PC that could run WoW on High no problem.
here is a link comparing his processor compared to current stuff
here is a link comparing the processor you have to current stuff
is it playable? yea absolutely, I'd be surprised if you could push the graphics up past "good" and if you could get anything above 30 fps if you did. best route would be give him your processor and get 2 more gigs of RAM. My honest recommendation would be a new PC entirely, or at least get him off of XP. dear god
What do you do with it though?
The i3-3217u has about as much CPU power as my 6 year old Core2Duo E8500 desktop.
I, myself, have an i7 4790k, but I believe in objectivity. Here, OP: a segment of the review on the AMD 8370 from guru3d:
Here, you can see the power draw, and thus heat, is exactly in line with the best i7s
They're really not bad processors, and here is, admittedly only one, benchmark chart against a bunch of CPUs:
You can see it competes with the best (though OBVIOUSLY doesn't come in 1st, no one's saying that) at half the price.
The fanbois around here need to seriously relax.
This is the comparison of my 2010 laptop and the 2015 Acer c910 processors (i7 and i3). As you can see, the chromebook's i3 processor uses substantially less power AND performs better.
You can still use a chromebook for coding. Just as you would send big jobs to your school's supercomputer, you can send jobs to your desktop or the supercomputer.
Intel Core i7 920 @ 2.67GHz PassMark score: 5003
Capability: 2x 10Mbps/1080p or 3x 4Mbps/720p transcode sessions
Related wiki page: What kind of CPU do I need for my Server computer?
So, by the benchmarks (because almost no one will have heard of a J1900 before) - It's really rather slow. 1/4th the single thread performance of a g3258 (ie the Haswell pentium K), but with 4 cores instead of 2. CPUbenchmark.net
A full system writeup for 2 J1900 boards can be found on Anandtech; it's not the same manufacturer but features/specs seem comparable, and you can't really beat their reviews. Interestingly, they have the J1900 at 1/2 the single-thread performance of a g3258, and sometimes beating it handily when multithreaded.
Personally, I think that with the addition of a cheap PCIe raid card, one of these would make a great NAS. Such as this one - 4xSATA3 on a PCIe card, $25.
>Is this issue from my Processor or my upload speed?
Intel Celeron G1620 @ 2.70GHz PassMark: 2671
This processor is only capable of transcoding a single 4Mbps/720p or 10Mbps/1080p file at a time.
It's just so shitty to know my graphics card is capable of so much more than it is doing. I thought the low fps I was getting in LL and IS Marines was my processor's fault... but Phoronix continues to close in on the proprietary AMD driver issues, and it seems to be totally the driver maintainers' fault for their sub-par optimizations and feature access... thoroughly disappointing.
PassMark = 2477
It would be able to transcode up to one 10Mbps/1080p file at a time.
The last generation was Ivy Bridge-based, not Sandy Bridge. I could be wrong, but I believe Apple actually used the i5-3210M for the entry model.
According to PassMark, the i5-4260 is slightly slower than the old i5. In fact, you'll see that all the new models are slower than the ones they replace. The new i7 is nearly half as fast as the old one.
Again these are synthetic benchmarks, so we'll have to wait for confirmation from actual reviews.
>I've been thinking about foregoing the Air for a Windows laptop.
Good decision, the OS X market share is only 4% now, people are buying iPhones and iPads.
Windows has over 90% and it's not bad really; I'm a 25-year Mac user and I've switched to Win 7.
You can still get Win 7 Pro, you order the PC online from a dealer that will downgrade it for you from Win 8 Pro, it's all legit, Microsoft allows that. Win 7 is good until 2020 and Win 10 is coming out next year.
>Anything good in the 800-1000$ range?
You can try asking over at /r/suggestalaptop that is much better place than here for that type of question.
However don't mention a Mac, very PC oriented crowd there.
You can also use this site to check actual performance of the hardware you're interested in.
BTW, I'm rather partial to Sager Notebooks, for the fact they make excellent gaming/workstation machines and they last; they're easy to clean out of dust and their anti-glare screens are better for the eyes. Plus there is the wow factor from other PC types. A 17" HD screen, a powerful GPU and an i7 for the same price as an Air - it's what the old MacBook Pros used to be before Apple castrated them.
Edit: Fixed Win 9 to 10
The general rule taken from Plex's site is 2000 PassMark points per 10Mbps 1080p stream being transcoded. The 4200U scores 3300 PassMark points.
Re-uploaded because I forgot to blank out a dudes name :/
Just because I keep seeing people ask about their specs being good enough, I figured I'd post this tweet from the creative director behind Watch_Dogs. The CPU score he references comes from this page here http://www.cpubenchmark.net/high_end_cpus.html which has a fairly comprehensive list of which CPUs should easily attain Ultra settings when coupled with any card higher than a GTX 670 or its AMD equivalent.
i5 owners with a GTX 660 or above (or AMD equivalents) should easily see high to very high settings, which makes my desperate rush to get a 660 or higher even more desperate, lol.
Fingers crossed the game isn't too badly optimized, though the resolutely confirmed 8 core requirement for ultra does smack of poor optimization sadly. Either that or Ubisoft suddenly became the new Crytek.
Alright, just going to point something out really quick with synthetics -
Alright, all done.
Using MIPS as a way of benchmarking processors is really terrible. This is because not all processors have the same instruction set. For example: The i7-980X pulls 18.2% more instructions per second than the i7-4770k. If you just look at that statistic you would think the 980X is the better processor. In reality, things are the other way around and the 4770k outperforms the 980X by roughly 16.4%. This disparity is a result of the 4770k's newer, more sophisticated instruction set.
Here is an article about using MIPS as a benchmark. It says:
> using this number as a way of measuring processor performance is completely pointless because no two chips use exactly the same kind of instructions, execution method, etc.
The point of what I am saying is that an increase in the number of instructions per second is completely useless if the dies become hard to manufacture and the instruction sets become as inefficient as something from the 1990s. Typically, when Intel brags about its newest performance gains, it puts them in terms of real-world execution times, not MIPS.
Here are the quasi-practical benchmarks for the i7-980X and 4770K that I looked at when coming up with the 16.4% performance gap statistic.
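To put numbers on why MIPS misleads, using the two figures quoted above:

```python
# With the figures quoted above, the 980X issues more instructions per
# second, yet the 4770K finishes real work faster -- so each 4770K
# instruction accomplishes more on average.

mips_ratio = 1.182   # 980X executes 18.2% more instructions/sec
perf_ratio = 1.164   # 4770K is ~16.4% faster on real workloads

# Work done per instruction, 4770K relative to 980X:
# (perf_4770k / instr_4770k) / (perf_980x / instr_980x)
work_per_instr = perf_ratio * mips_ratio
print(f"~{work_per_instr:.2f}x more work per instruction")  # ~1.38x
```

In other words, the 4770K gets nearly 40% more useful work out of each instruction, which is exactly the gap an instructions-per-second count can't see.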
Raw clock cycles don't buy you quite as much visible performance as they used to. Check out this chart. Basically, Intel does a better job of optimizing how the data is used once it's inside the processor. AMD has gone for throwing more raw power at the problem of how to get more performance out of their processors; Intel has gone for better tuning.
That being said, the price for performance is still something to consider. AMD may be a better value for your money, even if they don't hit the same high performance numbers as Intel.
There are a lot more factors to processing power than just clock speed. Intel processors are much more efficient, and end up significantly faster than their AMD equivalents. They also use much less power, leading to better battery life, and are less prone to overheating.
There are many other factors involved in CPU performance, mainly the cache, technologies, turbos, accelerations, etc.
Right now, AMD as a company cannot even consider competing with Intel. It's a borderline monopoly these days.
Edit: I actually thought AMD was doing better than this... 5 years of getting crushed is sad.