A more readable presentation with all the info and pictures, by a journalist with an EE degree:
Well, it does from a GPU point of view, by a long shot. That was pretty much expected anyway, though; see http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
DDR3, DDR2, and so on refer to the generation of RAM - higher is typically better, as it's the newer technology. You can read more on that here.
The number after that, like 1600, 1333, or 1066, is the speed in MHz that the RAM operates at - you want to make sure this matches what your motherboard supports.
The four numbers, separated by dashes, like 7-7-7-8, are the timings (or latency): four measures of how long different operations on the RAM take. More reading on that here. Essentially, lower timings mean the RAM is faster at performing certain tasks.
Voltage is how much potential difference is required to run the RAM. Newer RAM uses lower voltages, which keeps it cooler. Excess heat is bad for computers! Changing the voltage is one of the ways to overclock your RAM - I would not recommend doing it without more research!
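To make the timings concrete, here's a minimal sketch (illustrative numbers, not datasheet values) that converts a CAS latency from clock cycles into nanoseconds:

```python
# Rough sketch: convert a CAS latency (in clock cycles) to nanoseconds.
# DDR transfers data twice per clock, so a "DDR3-1600" stick runs its
# I/O clock at 800 MHz. The module speeds/timings below are examples,
# not measurements from any specific stick.

def cas_latency_ns(transfer_rate_mts, cas_cycles):
    clock_mhz = transfer_rate_mts / 2      # DDR: two transfers per cycle
    return cas_cycles / clock_mhz * 1000   # cycles / MHz -> nanoseconds

# A faster-clocked stick with looser timings can have about the same
# absolute latency as a slower stick with tighter timings:
print(round(cas_latency_ns(1600, 9), 2))  # 11.25
print(round(cas_latency_ns(1333, 7), 2))  # 10.5
```

Which is why "lower timings" only tells half the story - you have to weigh them against the clock speed.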
Because of how an SSD operates: the less data on it, the faster the drive runs and the longer it will last.
The SSD has multiple dies (chips) which have many planes. Each plane is divided into blocks. And each block is divided into pages. For example, a 512MB plane has 1024 512KB blocks, and a 512KB block has 128 4KB pages.
The controller can read and write to any empty page anywhere it chooses. This is cheap, but eventually there are no empty pages left. Keep in mind that pages are not "deleted", just marked invalid. So once the drive runs out of empty pages, it has to reclaim invalid pages by overwriting them (erase + write).
This erase wears the flash memory; there's a finite limit. The design of flash means that the controller can only erase blocks, not individual pages. Thus, overwriting a single page means copying all the other valid pages in the block, erasing the block, and writing the copied pages.
This is all very slow, which is why controllers avoid overwriting. If you have lots of static data like music, those blocks are completely valid (no invalid pages). This means only a limited number of blocks are available for overwriting, causing uneven wear.
Similarly, if the drive is nearly full, then the controller has to do a lot of overwrites. This reduces the speed and life of the drive. In fact, OP should keep only programs (dynamic data) on the SSD and keep that free space for the best performance.
Sequential read, like reading music files, is about the same on either an HDD or an SSD; it's random reads that are much faster on an SSD. TRIM is an OS command that tells the drive, "Hey, I just deleted these files, so treat those pages as invalid and erase them in the background to create free space."
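The overwrite penalty described above can be sketched in a toy calculation. The function name and the assumption that the controller relocates every remaining valid page are mine; the block/page sizes follow the example above (128 pages of 4KB per 512KB block):

```python
# Toy sketch of why overwriting one page is expensive on flash:
# the controller must copy every still-valid page out of the block,
# erase the whole block, then write everything back.

PAGES_PER_BLOCK = 128  # 512KB block / 4KB pages, as in the example above
PAGE_KB = 4

def cost_of_overwrite(valid_pages_in_block):
    """KB physically written to flash in order to update a single 4KB page."""
    copied = (valid_pages_in_block - 1) * PAGE_KB  # relocate the other valid pages
    return copied + PAGE_KB                        # ...then write the updated page

print(cost_of_overwrite(PAGES_PER_BLOCK))  # 512 - 128x write amplification
print(cost_of_overwrite(8))                # 32  - a mostly-invalid block is cheap
```

This is the whole reason free space and TRIM matter: the fewer valid pages sitting in a block, the cheaper it is to recycle.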
I hope this helps! Cheers.
By Thor's mighty hammer no.
Go with the i5 2500k or i7 2600k. X58 is a dead socket.
With the money saved you can get much better gaming performance with a GPU upgrade.
The i7-980X was a powerhouse chip a year and a half ago when it came out, but it's a waste of money now.
I've had mine since release (the 250GB model). It looks better. It's quieter. It's near silent. It has great touch sensitive controls. You can install every game to the hard drive with zero space concerns. It has auto-shut off (anti-RROD). It's already set-up for Kinect (if that interests you). It has built in Wi-Fi. The only way it didn't improve upon the previous version was the removable HDD, and even that's possible. I HIGHLY recommend it. It is the console the XBOX360 should have been from the beginning.
The OP is also pretty dumb for being so smug about the specs. Anandtech has a good article up about it:

>From a CPU standpoint, Apple has a performance advantage at the same clock speed, but Qualcomm runs its cores at a higher clock. NVIDIA claimed that the move to an out-of-order architecture in the A9 was good for a 20% increase in IPC. Qualcomm has a 20% clock speed advantage. In most situations I think it's safe to say that the A5 and the APQ8060 have equally performing CPUs.
So while the touchpad may be clocked higher, architectural differences keep them roughly equal.
The iPad 2 also has twice the gpu performance (dual core), a longer lasting battery, is much lighter, has a better screen (ips), does not lag and chug along doing basic tasks like webos, loads web pages faster, and of course the third party app support.
Yea, the $100 touchpad is an amazing deal, but it's still not an iPad. Or a galaxy tab/transformer, so the fact that not everyone is as ecstatic as the op shouldn't be too much of a shock
LI5 answer: Imagine the processor is like a kid cleaning his room. The Core 2 Quad is a 5 year old, and the Core i5 is a 7 year old. The 5 year old can pick up 10lbs of toys to put away at a time, so it takes him 20 minutes to clean his room. The 7 year old is older and stronger. He can pick up 20 lbs of toys at a time so it only takes him 10 minutes to clean his room.
More detailed explanation:
Intel has gone through 2 generations of processors since the Core 2 series (which is nicknamed Conroe): the first generation Core series (nicknamed Nehalem) which have 3 digit model numbers (Like Core i5 750) and the second generation Core series (nicknamed Sandy Bridge) which have 4 digit model numbers (Like Core i5 2500). With each generation, Intel has modified and improved how the processor functions so the computer works faster at the same clock rate. The technical differences between the two processors could not be explained to a five year old. Anandtech has written up articles detailing the differences between the core 2 and the first generation core series, and the first and second generation core series.
The short answer to the question of whether the Core i5 and the Core 2 are the same processor is NO, they are fundamentally different architectures. The Core i5 is newer and faster.
Here in BuildAPC, the words "Core i7" and "Gaming Rig" are the equivalent of dividing by zero. For just a gaming rig, you're wasting your money gaining little to no performance increases at all going for the i7 2600 as opposed to the i5 2500. Here's why. As you can see, the 2500 keeps up with the i7 in every game. The biggest difference is like 10 frames, which is still relatively small and won't be an issue considering the immensely powerful graphics card you have in there. Save your money and get a 2500 instead, you'll get the same performance and an extra $80-100 in your pocket! Whoa!
Use that money you save to put towards a better power supply, as you do not want to skimp out on one. Cooler Master's eXtreme series isn't that great. They're not terrible, but at your budget you should be getting something like the Corsair TX850 V2 or the Corsair HX850. It looks like you want to add a second GTX 570 to SLI in the future, and you definitely need the extra wattage (namely amps on the +12V rails but I'm just getting technical here) to handle that. I strongly encourage getting the HX850 with that money you save, as it's currently on sale for $155 if you enter the promo code 'EMCKCKC33' on Newegg. That's also before a $20 mail-in rebate you can go for.
Make those two changes, and you'll have yourself an absolutely supreme gaming rig. That's a pretty cool case by Rosewill; the CM690 II Advanced, which is very similar, is another great choice although you lose out on the USB 3.0 ports.
The original source for that article is "via Reddit and Anandtech."
The source for the Anandtech article? Reddit.
Pretty much the only reason to ever choose the 2600 over the 2500 is if you do a lot of math manipulation such as FEA or MATLAB (as witnessed by the POVray benchmark).
He was being sarcastic. There is no good reason for gaming to get an i7-970 over an i5-2500k. The i7-970 is an overpriced and last generation CPU. There isn't really any reason to get it over the current gen, especially with the Sandy Bridge Enthusiast series coming out soon.
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/9
>"So we now know that inside SGS2 is either a Sony IMX105, or Samsung S5K3H2YX sensor. This is basically the same exact camera lottery situation that the MyTouch 4G Slide is in, as it in fact has the same two exact sensors listed, though F/2.2 optics."
It is like HT in that one core looks like two, but it can actually execute parts of two threads truly in parallel. Better than Intel's version of HT.
>A single Bulldozer core will appear to the OS as two cores, just like a Hyper Threaded Core i7. The difference is that AMD is duplicating more hardware in enabling per-core multithreading. The integer resources are all doubled, including the schedulers and d-caches. It’s only the FP resources that are shared between the threads. The benefit is you get much better multithreaded integer performance, the downside is a larger core.
http://www.anandtech.com/show/2872
As far as core counts go, see http://www.anandtech.com/show/2881/2
>Henceforth AMD is referring to the number of integer cores on a processor when it counts cores. So a quad-core Zambezi is made up of four integer cores, or two Bulldozer modules. An eight-core would be four Bulldozer modules.
I prefer the AnandTech GPU Bench:
http://www.anandtech.com/bench/GPU11/188
Way more data, more up to date, and frankly a direct "this card is better than that card" chart just misses a tremendous amount of nuance, especially when ranked by synthetic benchmarks. The real question to ask when ranking GPUs is: "better for what?"
TomsHardware is good too, but less user friendly, I find.
There was a fair bit of SSD news at CES. Here are just a couple examples.
OCZ's Vertex Pro 3 Demo: World's First SandForce SF-2000
Micron's RealSSD C400 uses 25nm NAND at $1.61/GB, Offers 415MB/s Reads
Late February-March is still expected. Unfortunately, according to Anand, it appears that SandForce will be missing out initially:

>It's looking like SandForce will be last to bring out their next-generation drive in the first half of the year with both Micron and Intel beating it to the punch, but if we can get this sort of performance, and have it be reliable, it may be worth the wait.
Anandtech's Sandy Bridge review. With actual information instead of rewriting a product PR brief.
That's quite a blanket statement there. Whether a game is VRAM-limited depends on the GPU architecture and the game itself. Performance is based on a whole array of other system factors too. Maybe there's a problem between keyboard and chair.
Evidently, a Radeon 6950 is not VRAM limited in Battlefield 3's singleplayer under the specs Bit-Tech used. One could extrapolate, based on benchmarks, that in most games the Radeon 6950 won't be VRAM limited; again, I stress that this is very dependent on the game.
There is some evidence that a GeForce 560 ti could benefit from more VRAM in Battlefield 3's multiplayer. This might be due to its different architecture, or perhaps how the drivers are implemented. Who knows?
The difference in price between two models with different amounts of VRAM is minimal. I suggest upgrading due to this minimal price difference; some will disagree. It's really up to the buyer to decide whether the move is worth it.
TL;DR: It depends on the game, stupid. Arguing over this is like arguing over whether a Porsche or a Corvette is better. In the end it's probably a wash, and 28nm cards will smoke current parts.
AMD is in a very good position when it comes to low-powered CPUs. Bobcat is by far the best architecture available at the moment and is actually good enough to run some newish games at minimal settings.
It's due for a die shrink in early 2012. These 2nd-gen Enhanced Bobcats will have 1-4 cores and be paired with a Radeon HD 7000 series GPU. They'll consume up to 9W for the 1-2 core Krishna, and up to 20W for the 2-4 core Wichita. Intel's Atom simply can't compete with that, and ARM can't even compete with Atom.
Barring any major problems with the 28nm process (which would also affect ARM), AMD's Deccan platform (Krishna and Wichita) is going to be too good to ignore.
What I'm seeing from this platform is a mini-ITX gaming system more powerful than a PS3. Also, if I was going to get a Windows 8 tablet, it would most definitely need to be powered by Krishna.
Edit: The mini-ITX gaming system is actually a project I've got lined up for next year. It involves a custom Linux build and will basically be an open-source console that's also a fully functional PC, so I'm slightly biased.
> galaxy s2 already beats it in hardware
How? Similar CPU speed, the A5's GPU is far faster than the Galaxy S2's (http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17).
Anandtech has a significantly better review up of the Transformer prime and the Tegra 3 architecture.
Tons of benchmarks and data and prescient analysis.
The i5 2500k is 'technically' mid-range in terms of price, but in gaming performance it's right up there with the i7 2600k. Hell, most benches show it kicks every AMD processor's anus.
http://www.anandtech.com/bench/Product/203?vs=288
And again, benchmark wise, it pretty much beats out most of the first generation quad-core i7s.
http://www.anandtech.com/bench/Product/100?vs=288
tl;dr
i5 2500k isn't mid range.
A 1GHz processor from 10 years ago is very different from a 1GHz processor today. Clock speed is an extremely arbitrary performance metric and can't be used to compare processors of different architectures at all. About 7 years ago I bought a computer with a 3GHz processor (Pentium 4 Prescott). About 3 years ago I bought a desktop with a 3.8GHz CPU (Core 2 Duo E8500). Now I have a different desktop with a 3.4GHz CPU (Core i7 2600K). The fact that they're all around 3GHz means nothing; the current 3.4GHz i7 is between 10 and 20 times as fast as the P4 Prescott (one set of comparisons, others exist).
Also, it depends completely on what software is used. Software built mostly for desktops (doing whatever they promoted tablet PCs for back then) can have higher processing requirements than software written for a mobile device doing the things people want to do on a tablet today.
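The clock-speed point above can be sketched in two lines. The IPC figures below are invented purely to illustrate the idea (real IPC varies by workload and isn't a single number):

```python
# Useful work per second is roughly IPC x clock, which is why comparing
# raw GHz across architectures is meaningless. IPC values here are
# made up for illustration, not measured.

def relative_perf(ipc, clock_ghz):
    # instructions/cycle x cycles/second -> instructions/second (arbitrary units)
    return ipc * clock_ghz

p4_class    = relative_perf(0.5, 3.0)  # hypothetical Pentium 4-class chip
sandy_class = relative_perf(2.0, 3.4)  # hypothetical Sandy Bridge-class chip

print(round(sandy_class / p4_class, 2))  # 4.53 - far faster at a similar clock
```

Same ballpark clock speed, wildly different performance, entirely because of work done per cycle.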
They are as good or better when compared to Nvidia but cost hundreds less. The 6970 is $150 less than the 580, but almost matches it, as seen here: http://www.anandtech.com/bench/Product/292?vs=305
Realistically, for the price of one GTX 580, you can get two 6950s and use them in Crossfire, which will spank the hell out of the fastest single GPU on the planet. Two 580s will run you $1000, but that's not really necessary. A fairer monetary comparison is ATI's best, the 6970, vs. the 580: two 6970s will be $700 while two GTX 580s will be a grand, for similar overall performance.
TLDR, Nvidia is overpriced in terms of the performance you get. ATI cards have always been the poor man's solution in gaming, and they are almost as good.
That was only really a problem for earlier drives. Even writing, say, 100GB a day, every day, a 256GB SSD won't run out of cycles before its flash cells lose their charge, which happens in about 10 years. Hard drives, on the other hand, tend to fail relatively often because of the moving parts (the figure I tend to quote as an IT person is roughly 25% over 4 years; source: Google's drive study).
However, HDDs are much cheaper per GB (even with the flooding in Thailand).
EDIT: You do bring up a good point for RAM disk, that really is fast
Because we said so. Buy this Scooby Doo Mystery bed sheet set instead.
Love,
/r/buildapc
P.S. - This is pretty good for comparing different GPUs.
Even Anand loves Honeycomb: http://www.anandtech.com/show/4189/google-android-30-honeycomb-preview
"To be quite honest, after looking at Honeycomb in action, iOS looks like it might need a facelift real soon."
Video memory has little to do with system memory. Your system RAM (8GB) is used for storing files that you're working on and files that your OS and various programs use. Video RAM (VRAM) is used the same way, but specifically for visual information. When playing a game, it would be used for storing textures, maps and any other information the GPU would need to process the game visuals.
Currently, there isn't a huge difference between Nvidia and AMD in terms of performance. I've heard some say that Nvidia cards have better default cooling systems, though this can be made irrelevant by the aftermarket coolers that most manufacturers (EVGA, XFX, Gigabyte, etc.) put on. In the past, ATI drivers were not as good, though they have improved in recent years.
Don't compare cards' specs and assume one is better than the other; use Anandtech to look at benchmarks. The GTX 460 is probably the best bet in the $100-150 range, the 6870 in the $150-200 range, and the GTX 560 Ti or 6950 in the $200+ range.
A more readable presentation with all the info and pictures, by a journalist with an EE degree:
Well, since it's a gaming machine, there's absolutely no reason to get an i7 over an i5. Look at the i5-2500k vs. i7-2600k benchmarks, there is absolutely no benefit to the i7. Additionally, the i5-2500k beats the i7-950 in most benchmarks. I don't understand the whole run it stock thing though. It's really easy to overclock, won't impact the life of the chip (that you'll notice) and is a marginal price increase over a locked chip. In the long run, it saves you money since you can make your computer last longer by overclocking.
That said, if you decide to stick with a locked CPU, don't even bother looking at P67 mobos. Save yourself some money and get an H67 board.
> I'm thinking probably 12 GB of RAM
Way overkill for gaming. Unless you're planning on doing rendering or design work, you don't need more than 4GB, though 8GB might be worth it since memory is so cheap right now.
> a single GPU, most likely a GTX 470
I wouldn't recommend a GTX 470. It runs hot, draws a lot of power, and is slower than the cheaper 5850 or 6870, which both run cooler and draw less power.
> I've heard SB has better clock speeds but it lacks support for better GPUs, maxing out at PCI-E x8? IS that true?
Only in SLI/Crossfire setups. It can run a single card at the full x16. Of course, there isn't a single GPU on the market that can saturate an x16 link, and I don't know of any that can saturate x8 either.
>That's a huge difference for little gain in performance.
According to Passmark, the 2500K has a nearly 1500-point lead over the 1090T. And on the Anandtech comparison, the 2500K has a significant lead in rendering times (around 30% faster).
Instead of trying to list them using Amazon's API, I suggest pulling the info from Newegg or using MSRP. Manufacturers and retailers price two similarly performing parts about equally, so I find it odd that nVidia dominates so much.
I kinda wonder how you calculated the performance benchmarks. Maybe reference review sites like Anandtech or something. That requires much more work, and I'm not sure how it would be implemented, but it would also be a lot more accurate.
The Radeon 6950 should be better than the 560 Ti, but that is game-dependent. Either way, it is also more expensive, but only slightly so. Yours lists it at close to $300, which isn't a good value at all, but anandtech.com shows it available for much lower, which is the price point AMD would target.
CPUs aren't as simple as clock speed and core count. Pipelining and architecture have a lot to do with it. It doesn't matter how many clock cycles they can do if they're not doing as much work per clock cycle as Intel CPUs are. AMD destroyed the Pentium 4 with its Athlon 64 chips thanks to a superior architecture, even though they were clocked lower. However, AMD is still basically using that same, now outdated, architecture, and Intel has taken the lead. Let's look at benchmarks:
i3-2100 versus Phenom II X6 1100T
Despite having 1/3rd as many cores, being clocked lower, and costing almost half as much, the i3-2100 still comes out on top in most of the tests.
I'm very far from an Apple fanboy (I actively avoid Apple products when possible), but a list of hardware specs is not that useful either. Whether you prefer a product or not, the best thing we have is benchmarks:
The iPad 2 consistently performs much better than the Xoom in every test. While on paper the iPad 2 may be "worse", once you run it you can tell the iPad is a very well-engineered machine.
My reasoning is mostly cost vs. performance. Obviously, RAM prices have gone down a lot since the early days, but there's still roughly a $30 difference between 4GB and 8GB. Benchmarks show 8GB improves gaming performance by about 0% over 4GB, so basically I'd be paying extra cash for something that's largely irrelevant for something like 90% of the people coming here for guidance.
In order to maintain the budget categories I mentioned above there are places where you simply must cut back on bells and whistles. I feel excessive RAM is chief among those.
To begin, you have a dated CPU/mobo socket selected. The current-generation i5/i7 processors are socket 1155. For gaming, the most highly recommended CPU is the i5-2500K: it has a great performance/cost ratio and allows overclocking.
To go along with that you will need an 1155-socket motherboard - the ASRock Extreme4 is recommended a lot and reviews well. There are plenty of other choices, however, that will work just fine.
Since your build budget is pretty high you could go with a GTX 560 Ti SLI or 570 SLI or Radeon 6970 Crossfire and get better results than the single 580 will offer you - in some cases by a drastic margin. Check me for comparisons.
Personally, I'd try to find a PSU with fewer 12V rails and more amps on each to power high-end GPUs. Something like the Corsair HX series; Antec and Seasonic also have models that will work well.
The cooler is a beast, HDD choices look good too.
Few ideas for you to look at and refine your build.
Engadget's review was not negative. And everyone should read Anand's review.
And on this device, I care little about USB charging. It takes my DX, with a 1560mAh battery, almost 5 hours to charge to full over USB; I guarantee the Xoom will barely charge while in standby and not at all while in use.
320Ms weren't in the 15/17s... The 330Ms have double the stream processors and run at a higher clock. NotebookCheck claims the HD 3000 underperforms the 330M by 37%. Here's Anand talking about the 330M.
>Actually, you're wrong. You should look things up before you start making assertions.
Indeed.
The Anandtech article is definitely a much more informative piece of journalism than this Ars piece. Instead of just grabbing numbers and directly comparing results, Anandtech decided to take it one step further and explore what exactly is causing the issue with current software.
In this page, for example, they determined that the "Balanced" power profile in ESXi doesn't utilize the additional power states in Bulldozer. Enabling them improved power consumption over the 6100 series. In the next page, they determined that Windows Server 2008 SP2 doesn't enable turbo in the "Balanced" power profile.
You have to read into the numbers to make sense of any of it. Yes, Bulldozer isn't the architecture we were all pining for to be competitive with Intel, but I don't feel that it's a bad start to build on for the next few years.
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined/17
The LG Optimus 3D has the same graphics chip, clocked at a lower speed than the Galaxy Nexus. It outperforms the SGSII in some tests and is barely edged out in others. No clear winner, I don't think.
And the PenTile screen is a trade-off for being more efficient. That is a huge-ass screen, so battery life could suffer if it were a full-on, non-PenTile display. And the Nexus S uses the same type of PenTile display; I really haven't heard any complaints about that.
If it's out of warranty you can still save it for under $15.
Read how to open your Xbox 360 on Anandtech. Don't bother buying a special kit; all you need are some Torx screwdrivers and a thin, rigid poking implement like a pen refill.
Clean off the old thermal compound with Q-tips and isopropyl alcohol (any drug store or supermarket will have it).
Replace it with a proper amount of Arctic Silver 5 thermal compound (spread thin and evenly across the CPU and GPU dies with an index card). Should be about $10 from Radio Shack or Micro Center (or online if you want to wait for shipping).
Replace the 'X-clamps' with (8) M5 0.80x10 metric machine screws and 32 5mm flat metal washers (4 per screw, 2 above and 2 below the motherboard). You can get these for less than $5 at Lowe's (part numbers #138433, #138319). Others will suggest nylon washers, but they're not needed, as the contact points won't short.
I fixed mine 2 weeks ago and haven't had any problems since. The first 2-3 times you start it up you will get the same glitches/RRoD as before while the thermal compound settles; it needs a few hot/cool cycles to set.
A single 6990 is ALMOST IDENTICAL in performance to crossfire 6950s; except a 6990 is in the $700 range and a couple 6950s are in the $500 range.
Source: http://www.anandtech.com/show/4209/amds-radeon-hd-6990-the-new-single-card-king/12
So I haven't read up much on this yet, but I always trust Anandtech reviews. Here are 2:
http://www.anandtech.com/show/4762/samsungs-galaxy-s-2-the-smoothest-scrolling-android-device-around
http://www.anandtech.com/show/4686/samsung-galaxy-s-2-international-review-the-best-redefined
Holy crap I had no idea the noise cancellation feature of our phones was so damn good! Even when background music is blasting you can barely hear it on the other end!
Mind blown
Nice design, but slow. Hopefully they continue development, or open it up for patching.
Take a look at Anandtech's review of it:
http://www.anandtech.com/show/4508/hp-touchpad-review
The point is, the hardware has lots of potential, but the OS isn't as fast as Android, nor as power-efficient.
Personally, I'm ditching WebOS for Android if it ever gets on it with everything working.
BUT, I may consider sticking with WebOS just for development. It could be a niche market that provides better exposure than the flooded Apple and Android markets.
Here is a quick comparison of the Core 2 Duo E8600 vs. the i5 2500K. As you can see, the performance difference is significant in some areas. You will need to take a call based on your usage.
>NVIDIA also chose not to implement ARM's Media Processing Engine (MPE) with NEON support in Tegra 2. It has since added in MPE to each of the cores in Kal-El. You may remember that MPE/NEON support is one of the primary differences between TI's OMAP 4 and NVIDIA's Tegra 2. As of Kal-El, it's no longer a difference.
I wouldn't say Phenom II is below current mainstream, as the i3, with only its 2 cores, gets beaten by Phenom II in plenty of benchmarks.
If Phenom II is below current mainstream, then so is the i3, which is obviously not true.
Also AMD is selling a lot of Llano APUs, which is basically Phenom II.
>The fact that its backwards compatible and has what 8 times more bandwidth than anything currently needs means its going to last a while.
You can build an external RAID array that would saturate a USB3 link easily. For instance, one of the first Thunderbolt devices uses more throughput than USB3 could theoretically deliver.
Difference: the 560 is basically the same GF114 silicon that could not qualify as a 560 Ti
Recommendation: Do not go with the 560; consider buying a 560 Ti with dual fans instead
Nvidia GeForce GTX 560/560 Ti compared with:
500 Series: 560 < 560 Ti < 570 < 580 < 590
400 Series: 460 < 465 < 560 < 560 Ti < 470 < 480
NOTE: the 465/470/480 are GF100, the 460 is GF104, the 560/560 Ti are GF114, and the 570/580/590 are GF110
If you are considering EVGA:
Warranty-wise, part number ending with:
KR (Limited 3 Year) --> GTX 560
AR, A1 (Limited Lifetime, excluding 02G-P3-1568-KR) --> GTX 560 Ti
Maximum memory configuration-wise:
560 - 1GB (excluding 02G-P3-1568-KR, which is 2GB)
560 Ti - 2GB
Cooling solution-wise:
560 - single fan
560 Ti - single fan and dual fan
Lucrative market (air-based, dual-slot cooling cards):
GTX     GB      $     MinFPS  FPS Δ
590     3GB     $730  -----
580     3GB     $590  -----
580     1.5GB   $470  134.5   +13.6%
570     2.5GB   $400  -----
570     1.25GB  $310  126.9   +7.2%
560 Ti  2GB     $290  -----
560 Ti  1GB     $220  118.4   100%  [set as base - best card for the money right now]
560     2GB     $250  108.8   -8.1%
560     1GB     $200  104.1   -12.1%
480     -----   ----  114.1
470     -----   ----  -----
460     1GB     $160  88.2
460     .75GB   $150  84.1
MinFPS info from Anandtech; the game used is Battlefield: Bad Company 2 (Chase bench) at 1680x1050, set to maximum quality
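For what it's worth, the FPS Δ column is just each card's minimum FPS relative to the 560 Ti 1GB baseline (118.4). A quick sketch to verify:

```python
# Recompute the FPS delta column from the chart: each card's minimum
# FPS vs. the 560 Ti 1GB baseline row, rounded to one decimal.

BASE_MIN_FPS = 118.4  # 560 Ti 1GB, the "100%" row

def delta_pct(min_fps):
    return round((min_fps / BASE_MIN_FPS - 1) * 100, 1)

print(delta_pct(134.5))  # 13.6  (580 1.5GB)
print(delta_pct(126.9))  # 7.2   (570 1.25GB)
print(delta_pct(108.8))  # -8.1  (560 2GB)
print(delta_pct(104.1))  # -12.1 (560 1GB)
```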
It's a wash generally. Even though SSDs in theory have lower power consumption, they usually use about the same amount as a rotating HD.
http://www.anandtech.com/show/4316/ocz-vertex-3-240gb-review/9
Check out the SSDs compared to a Seagate Momentus (hybrid flash/platter).
You'll usually save power, but not a whole lot. Laptop platter drives were already pretty optimized for power.
I'd worry about the screen more, since it uses the most power, then your CPU. Setting aggressive power saving settings will help more.
There was an article on Anandtech, I believe, reviewing one of the newest SF-2XXX series SSDs, where he stated that the number of write cycles available on the drive had dropped from 10k -> 5k -> 3k with each respective decrease in chip size (45->32nm etc., MLC chips).
And at first this looks like "HOLY WTF PANIC". But he did some research on how much data he writes to his drive a day, which was something like 7GB (which, honestly, is freaking huge, so well above average). At 7GB/day, filling a ~140GB drive takes 20 days. Now you've used 1 complete write cycle on the drive. You still have 2,999 more!
So 20 days * 3k / 365 days/year = 164 years.
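The back-of-the-envelope math above, as a tiny sketch (inputs taken straight from the comment, so treat the result as illustrative):

```python
# SSD endurance estimate: days to fill the drive once, times the
# number of program/erase cycles the flash can take, in years.

def endurance_years(capacity_gb, writes_gb_per_day, pe_cycles):
    days_per_full_write = capacity_gb / writes_gb_per_day  # one complete cycle
    return days_per_full_write * pe_cycles / 365

print(round(endurance_years(140, 7, 3000)))  # 164
```

Even with an unusually heavy 7GB/day, the cells lose their charge long before you run out of write cycles.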
Looks really good to me, but I would recommend going with the 2500K. It's $15 more, but that's really worth it when you consider the performance increase you can get by overclocking it. You can easily get it to 4.4GHz using the stock cooler; see here.
Also, I would normally recommend this G.Skill RAM because of the brand and its reputation; I'm not familiar with A-DATA.
Anandtech had a good review of all the Sandy Bridge chips. I would prefer the Sandy Bridge chips, even if it ends up being an i3 over the first-generation i5s.
Anandtech says 365W TDP. Your point is well taken, but double the wattage is a bit hyperbolic, don't you think?
>and there are more benchmarks
Yes there are more. Maybe you should also have linked to them instead of spreading misinformation.
http://www.anandtech.com/bench/Product/293?vs=331
Also, having more than 1GB is essential if you want to push up features like antialiasing and anisotropic filtering in newer games.
>choose the brand you like the most, it's pretty much equal
False once again. Brands add features such as better quality capacitors for longer lifespan, or superior VRM count and quality, in addition to a superior cooler. See: The difference between a Twin Frozr II and a Twin Frozr III card.
tl;dr: OP is talking about things he doesn't understand.
Now /r/buildapc, we all know how much you love to trash AMD..., but wouldn't it help to explain the reason you'd get an AMD over the i5 in some cases?
AKA, OP linked to a 6-core vs. the i5, but may not know the 4-core does just as well and is way cheaper
This comparison makes much more sense to link to...
You get about 9/10 of the frame rate in 3 of the 4 games and 7/10 in the other. The price of the i5 in the test is $220; the price of the 955 is $120.
If you compare the i3 and the 955...the 955 beats out the i3 in gaming...so slightly that it's probably still worth getting the i3 in case you want to upgrade to an i5 or i7 lol...and they're the same price.
So, basically the 955 is slightly better than the matching-price i3, by such a small margin that I'd just say fuck it, Intel is better just in case of an upgrade...because AMD can't hang with Intel in the higher price range.
The only reason to get amd is if you have an am2+ now that you can flash to am3 and you´re looking for a cheap upgrade. I bought an amd recently for just that reason...motherboard crashed...now I have to buy a new motherboard anyway...fml...still stuck with amd.
edit: read the first post and assumed this is what you all did...jk, some well reasoned posts in this thread...
It should be able to stomp it unmercifully. You may actually achieve a new dimension of clarity with a 560ti. Here are some benchmarks comparing a 560ti to a GTX 260 http://www.anandtech.com/bench/Product/318?vs=330
Here is the comparison I did: http://www.anandtech.com/bench/Product/293?vs=330
It looks like, for Bad Company 2 (I plan on playing BF3), the 6950 does better at higher resolutions, although I will be playing at 1080.
I'm sorry but that simply isn't true.
http://www.hardocp.com/article/2009/05/19/real_world_gameplay_cpu_scaling/1
http://www.anandtech.com/show/4310/amd-phenom-ii-x4-980-black-edition-review/7
The first article is two years old, but it's not like AMD has come out with any new CPU technology since that time. Intel, however, has released Sandy Bridge. The second article is from this year.
If you are using slower video cards it may not be as exaggerated of a difference, but there is a difference, and it grows even larger when you use faster video cards and multiple GPU setups, or play on multiple monitors, where more CPU overhead comes into play.
The Core i3, which is a dual core, beats most AMD quad cores in gaming.
I've got a link here that shows quite the opposite. Really, Nv fans have been spouting outdated nonsense for a long time, eg. AMD(ATI) has horrible drivers.
Here is the fastest 2500K vs the 1100T. Intel wins pretty much everything. If you want pure raw power, intel is the way to go. If you want value, AMD is the way to go.
The 560 Ti is roughly ~30% more powerful than the 460, but it's also ~$65 more expensive. For the high-end system you're paying for, though, wouldn't you want a stronger GPU to complement all the other parts?
First off, I highly recommend r/SuggestALaptop for help with choosing laptops. Next, I'm curious as to what size, display resolution, etc. you want. If all you want is to Skype and browse the internet, AMD's Fusion APUs are actually excellent, although they would be classified as "netbooks". They perform about twice as well as Atom CPUs. For instance, take a look at the HP Pavilion dm1-3025dx for $380, which is essentially the same laptop as the HP dm1z reviewed here.
> I want to build a gaming PC with a core i7 processor
If you want to waste $100, yes, you want an i7 for gaming. If you'd rather put that elsewhere in your budget or in games, you want an i5. There is no benefit to having an i7 over an i5 for gaming. Look at the i5-2500k and i7-2600k benchmarks and the i7 965 Extreme and i5-2500k benchmarks. Unless you're doing work that actually benefits from hyperthreading, there is no advantage in gaming. The i5-2500k overclocks better than 1366 i7s and just as well as the i7-2600k. The only "advantage" the i7-2600k has is 2MB more L3 cache and hyperthreading, neither of which make a $100+ difference. At best it makes a 1-3% performance difference and, in some games, a performance hit.
That said, what needs to be in the budget? Monitor? OS? Mouse? Keyboard? Just the case, mobo, ram, HDD, CPU, GPU, and PSU? You can build a great computer with monitor and everything else for $1300-1500. Any more than that is just diminishing returns.
If you don't have a 5850 now, crossfire 6850s instead. It's about $50 cheaper besides mail in rebates, and AMD tweaked something with Barts to get Crossfire to work better. So much so that CF'd 6850s outperform 5850s in games that aren't really shader heavy. Plus Barts is more efficient so you get quieter fans and less heat in your case. And the 6850 does tessellation better. There really isn't a downside besides at most 3-5 FPS in worst case situations.
Edit: Derp, I ignored the second part of your post asking about the 6990. I'm not sure about 5000 series prices, although they're very likely to keep dropping as they become solidly previous generation. But prices always drop over time with hardware. It's just the way it goes.
Was gonna add this in as an edit but it stands on its own as a (probably better) answer:
There are many more differences between the iPhone 4 and iPod touch that you are forgetting. Using this as a source: http://www.anandtech.com/show/3903/apples-ipod-touch-2010-review-not-a-poor-mans-iphone-4
-iPhone 4 has louder speakers
-iPhone 4 has twice the RAM
-Not the same display- iPod touch is less bright and offers less contrast as well
-MUCH lower quality rear-facing camera. Also, no flash.
-No GPS onboard
geez, that processor.
For a budget gaming rig, go with the i3-2100. It's about $80 cheaper than the AMD and will perform pretty well for a cheap system.
Or, since you're getting a 580 as a Christmas gift, spend the extra cash on an i5-2500k (it's like $20 more at Newegg, and you can cover it by buying less RAM; 16GB is overkill by far for gaming, 8GB will be more than enough).
Edit: here's a comparison of the more expensive 8150 (3.6GHz?) and an i3-2100: http://www.anandtech.com/bench/Product/434?vs=289
And the i5-2500k vs. the more expensive AMD as well: http://www.anandtech.com/bench/Product/434?vs=288
This really belongs in [/r/buildapc](/r/buildapc) but I'll let it slide. Just a quick word that Crossfire and SLI have microstuttering problems such that it's often best to just get one powerful card. Furthermore make sure your PSU can handle the new card, I ran into that problem when I got my GTX 570 (CHECK THE AMPS ON THE 12V RAIL, NOT THE TOTAL WATTAGE).
For ~$400 you should be getting a GTX 570 or Radeon HD 6970, depending on what games you plan on playing. Here's a comparison of the two; bear in mind that's at stock speeds, and you can buy GPUs with OC/better cooling.
High failure rates. The OCZ brand has been synonymous with poor reliability for years (this article is worth reading).
I'm on my fourth Vertex 2. The first three randomly died under normal use.
The amount of money I've spent on shipping the bad drives back to OCZ could've been put towards a better and more reliable drive.
The Tegra 3 is a faster CPU than the A5, but its GPU component is still (significantly) slower than the PowerVR in the iPad 2. Source.
Take a look at 6950 CrossFire vs 580 SLI. Two 6950s are around $500. Two 580s are around $1,000. Do you feel that the barely noticeable performance increase is worth an extra $500?
Well, the concept of future proofing is a myth. That said, an i5-2500k should be viable for as long as the i7-2600k, especially with its overclocking potential. If you want to "future proof" that money should be put toward GPUs.
> The i7 and RAM are a bit overkill, but I spend a lot of time on my PC (Don't really game at the moment) tagging music and ripping, so a nice CPU and RAM is handy when dealing with massive WAV files and FLAC encoding) and the SSD will do nicely for quick writes to metadata,
You really won't see much of a difference between the i7 and i5, especially for what you've talked about doing with it so far. The main things the i7 has that the i5 doesn't is hyperthreading and an extra 2MB L3 cache.
I agree nice RAM is nice to have, but honestly 16GB is way overkill and you'd be better off setting aside the extra money for a GTX 570 over the GTX 560 Ti, or a nicer SSD like the Intel 510 or Crucial M4. I recently upgraded to 8GB and noticed pretty much no difference between 8GB and 4GB for Handbrake encodes. For the stuff you're talking about, you won't see a difference between 8GB and 16GB. Really, it's bragging rights more than anything. Of course, anyone in the know with computers will laugh at 16GB of RAM with a GTX 560 Ti. If you had a GTX 580 and that much RAM, that'd be something different.
> and the i7 has a sweet onboard video encoding chip.
I assume you're referencing Lucid Virtu here which requires a Z68 board that supports it and software that supports it. If you're using Handbrake for your encodes, you're SOL. Also, both the i7-2600k and the i5-2500k have the same IGP.
I can vouch for the case and memory; I have both and no complaints there. As for the GPU, I have an XFX 6870; benches say it's a little better than the 460 you have picked out, and it's about the same price. Food for thought. :)
A GTX 570 system consumes about 460W under full system load [Furmark] according to Anandtech, so a quality 550W unit should cover you for a single GTX 570 and even an overclocked CPU. That 460W figure is measured at the wall, so you multiply it by your power supply's efficiency [during their tests I think they use an 80 Plus Bronze PSU, so ~85%] to see the actual amount being drawn from the PSU.
460 * 0.85 = 391 watts being drawn from the PSU, so as I said, a good 550W should cover you for a GTX 570 based build.
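To make the wall-vs-PSU arithmetic above explicit, here's a quick sketch. The 460W wall figure and 85% efficiency are the numbers from the comment; the 550W headroom check is just illustrative:

```python
# Estimate the DC load on the PSU from a wall-power measurement.
# wall_watts: power measured at the outlet; efficiency: the PSU's
# efficiency at that load (e.g. ~0.85 for 80 Plus Bronze).
def psu_dc_load(wall_watts, efficiency):
    return wall_watts * efficiency

load = psu_dc_load(460, 0.85)
print(round(load))       # ~391 W actually delivered by the PSU

# A quality 550 W unit leaves comfortable headroom for this load.
print(load <= 550)       # True
```

Note the direction: efficiency losses happen inside the PSU, so the DC load is always lower than the wall draw.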
If you are planning on SLI, I would recommend a quality 750w power supply.
My two recommendations for PSUs to power a GTX 570 based system would be the Rosewill RG 530 if you're on a budget or trying to save some cash, or the Seasonic X560 if you want a top-notch PSU for that wattage range.
If you're planning on SLI, I would recommend the Corsair TX750 or the XFX 750W, whichever is cheaper, as they are essentially the same unit for the "budget" 750W PSU, and a Seasonic X750 or X760, Corsair AX750, or NZXT HALE90 750W for the "high end" of that wattage range.
The CPU upgrade will not help you in gaming much at all. The addition of Hyper-threading is great for rendering and video encoding, but can actually hurt gaming performance.
Assuming you get a card that can actually unlock, and hold stable clocks at 6970 speeds, you are looking at about a 33% increase in speed over the 5850, not nearly enough to justify it IMHO.
The larger speed gains come at higher resolutions because of the increased VRAM amount. At 1680x1050, I wouldn't be surprised if you could run BF3 on high without problems on your current rig.
Edit: My advice. Hold off for now and buy Ivy Bridge and AMD 7000 series when they come out. You certainly don't have a weak system, and at your intended resolution you should be perfectly fine for now.
Edit 2: If you notice unbearable frame rates in BF3, then just buy the GPU and hold off on the rest. You can migrate the 6950 to a new build and add a second one for Crossfire for your next upgrade path.
Although IPS panels do tend to have more lag than other LCDs, in practice you won't notice a difference. One frame of input lag isn't enough to be detected, even if you're looking for it.
Personally, I'd definitely recommend an IPS display. Unlike other components, monitors are good for a very long time. They're well worth a solid investment and too many people skimp here I think (which is odd because the monitor is ultimately what you're going to be looking at 98% of the time you're using your computer).
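The "one frame of input lag" claim above is easy to sanity-check: at a typical 60Hz refresh rate, a frame is under 17 milliseconds.

```python
# One frame of input lag at a 60 Hz refresh rate, in milliseconds.
refresh_hz = 60
frame_ms = 1000 / refresh_hz
print(round(frame_ms, 1))  # 16.7 ms, well below what most people can perceive
```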
Most people install the OS and maybe a current favorite game or two on the SSD and the rest of the files on the secondary mechanical drive, at least that's what I would do. Since you have a z68 board you also have SSD caching, here is a good review of that.
A good program a lot of people use with SSDs is Steam Mover; it lets you easily move games between drives to take advantage of the SSD's load times.
An i5 is almost twice as expensive as an AMD Phenom II X4, and around 50% faster. For a lot of people the Phenom will be fine and within the budget.
Please do not get that GTX580, it has terrible price/performance. Compare it here with the GTX570, which is something like $200 cheaper: http://www.anandtech.com/bench/Product/306?vs=305
If you want to spend $500 on a GPU you should definitely CF/SLI. For example two 6870s can be had for $350 and are faster: http://www.anandtech.com/bench/Product/301?vs=305
Put that $150 towards your next GPU upgrade and you will definitely come out cheaper, not the other way around.
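To put the price/performance argument above in concrete terms, here's a quick sketch. The fps numbers are placeholders, not figures from the benchmark links; plug in real results for the games you care about:

```python
# Rough price/performance comparison. The fps values below are
# hypothetical -- substitute real benchmark averages before deciding.
def fps_per_dollar(avg_fps, price_usd):
    return avg_fps / price_usd

gtx580   = fps_per_dollar(60, 500)  # single GTX 580 at ~$500
cf_6870s = fps_per_dollar(65, 350)  # two 6870s in CrossFire at ~$350

# The cheaper CrossFire setup wins per dollar even at similar fps.
print(cf_6870s > gtx580)  # True
```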
Neither a GTX 260 nor a 6950 will consume 500W.
My 560 Ti consumes about as much as a 6950, and I could run this system on a GOOD 400W. A 6950 does not consume 500W.
Your power supply was obviously crappy if it was rated for at least 350W and couldn't handle a 6950; a good 350W could handle a 6950.
benches:
http://www.anandtech.com/bench/Product/293
320W during Furmark, which is an absolute worst-case scenario.
http://www.anandtech.com/show/1719/9 educate yourself
There are modifications, but no stunning performance difference. Of course, games tend to be programmed very directly for the specific console GPU being used, allowing for drastically higher performance than PC games which are just ported over with DirectX.
More time and energy (or programming for a specific GPU / family of GPUs) would be able to fix this, but would be expensive.
As Carmack said, current PC GPUs are 10x as powerful as the ps3/xbox 360.
Yeah, that's ridiculous; there go any vestiges of respect I had for Slate. Here's a review at Anandtech by someone who actually used one.
Please type out the components in the post itself instead of making all of us download an Excel sheet. I, for one, do not have MS Office.
--
Edit: Alright. Newegg link is much better, thanks.
First of all, holy crap dude. Did you just pick the most expensive parts you could find? I think you're gonna give gpunotpsu a stroke as soon as he reads this :D
I would start by throwing out that CPU and getting an i7-2600K with a decent cooler. That'll save you like $700, and it will be faster and overclock a lot better.
Then the memory: is there any reason you need 16GB? If not, I would start with 2x4GB; you always have the option to upgrade later.
Throw out the 160GB disk and get an SSD instead. They are mandatory with your budget.
My advice would be to search for "2600k" in this subreddit and base your new build on one of the builds given in one of those threads. Then ask for suggestions on that.
Great article, but as usual from the Nvidia kool-aid drinking camp, it neglects to mention the prohibitive latency of moving data to the GPU and back. GPGPU is great for processing large batches of data, but it's practically useless for more common interactive programs. Intel's Sandy Bridge has a fast ring bus with a very low-latency connection between the GPU and the CPU cores, but they still ship terrible GPUs, and anyway it's much harder to code in OpenCL [Intel GPU] than CUDA [Nvidia GPU].
Also, the hardware available varies a lot more for GPUs. The program depends even more on the customer's hardware platform. This reduces significantly the cases where GPGPU software development applies.
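The transfer-latency argument above can be captured in a toy cost model. All the numbers here are hypothetical order-of-magnitude stand-ins (PCIe bandwidth, per-byte compute rates, fixed launch/transfer latency), purely to illustrate why large batches favor the GPU and small interactive workloads don't:

```python
# Toy model: offloading to the GPU only pays off when the compute time
# saved outweighs the round-trip transfer cost. Numbers are illustrative.
def offload_pays_off(batch_bytes, cpu_secs_per_byte, gpu_secs_per_byte,
                     pcie_bytes_per_sec, fixed_latency_secs):
    cpu_time = batch_bytes * cpu_secs_per_byte
    gpu_time = (fixed_latency_secs                      # launch + sync overhead
                + 2 * batch_bytes / pcie_bytes_per_sec  # copy to and from GPU
                + batch_bytes * gpu_secs_per_byte)      # the actual compute
    return gpu_time < cpu_time

# Large batch: transfer cost is amortized, so the GPU wins.
print(offload_pays_off(100_000_000, 1e-8, 1e-9, 6e9, 1e-4))  # True

# Tiny interactive workload: fixed latency dominates, so the CPU wins.
print(offload_pays_off(10_000, 1e-8, 1e-9, 6e9, 1e-4))       # False
```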
I reproduced the build here: $910, or $870 w/MIR. So you're paying roughly a $100 premium to have it built by them.
That 100 could be used to turn your 6850 into a 6950. Which is a nice upgrade.
Here's a comparison between the two.
The high profile H.264 works.
"Tegra 3's video decoder can accelerate 1080p H.264 high profile content at up to 40Mbps"
http://www.anandtech.com/show/5072/nvidias-tegra-3-launched-architecture-revealed/2
The Adreno 220 performs about the same as a 300MHz SGX 540. The Nexus runs its SGX 540 at 384MHz while the Razr runs it at 300MHz. If you look at the benchmarks here you'll get a fair idea: the LG Optimus 3D has an SGX 540 @ 300MHz and the same OMAP 4430 CPU, but at a slightly slower 1GHz, so use that as the basis for the Razr.
The Nexus and Razr have the advantage of also having an IVA-HD video encoding/decoding block in their SoC (Cortex-A9 based cores with NEON), so 720p work can be offloaded to it, further improving battery life, unlike the Snapdragon S3 (Cortex-A8 without NEON) design.
So overall the Nexus would be better due to the speed of its GPU, and it can already run games written for PowerVR, which happens to be the first supported GPU due to it being on the Droid 1 (SGX 530). So you can pretty much expect games to be supported (just look for Droid 1 and Galaxy S games) now and in the future.
Qualcomm Snapdragons are nothing special anymore; they're pretty much the slowest of all ARMv7 CPUs clock for clock, which is why they are pushing 1.5GHz while everyone else is still at 1.2GHz (and still beating Snapdragons at higher clock speeds). And if you look at some of the 720p off-screen benchmarks, the Adreno 220 wasn't even tested due to problems with it.
Sadly, HTC relies too much on Qualcomm for their CPUs/GPUs, which right now lag everyone else, and they've even delayed their Snapdragon S4 designs due to tech problems.
The people saying that the i3 will bottleneck your system are wrong. Your CPU is excellent, being a dual-core Sandy Bridge (32 nm) processor that performs similarly to high-end AMD Phenom x4s (Quad cores, 45 nm) in games due to higher IPC.
See benchmarks: http://www.anandtech.com/bench/Product/289?vs=88
Both your GPU and CPU are significantly better than the recommended spec.
A 6770 is a rebranded 5770 with a few enhancements (but identical performance for the most part) so these benchmarks will definitely help you in your decision.
I'm not saying that he should go out and buy a Bulldozer. I'm saying you're being ridiculous by claiming that the i5 wipes the floor with it. It doesn't, and that's a fact backed up by all the benchmarks.
Using Anand's benchmarks of actual applications, Bulldozer does:
As can be seen, the actual real world differences are pretty miniscule. The 2500K most certainly does not wipe the floor with Bulldozer and neither does the 2600K.
2500K > i7-965. It makes a nice gap to show where Bulldozer fits. Same as how Phenom II was tested against Core2Quads even though Core iSeries were long since launched. The i7-965 is faster than the FX8150 in 4 of 6 tests where that site compared the two, the only glowing victory being the most heavily threaded application. 8 "real" cores/modules should be better than 4 with hyperthreading in that case.
It's hilarious I'm being downvoted for the facts straight out of the article, Bulldozer is late and falls short if they don't get really aggressive with pricing.
Did you even read my post? Intel can compete with AMD even at the lowest pricepoints.
Pick two processors at the same price point and look at the benchmarks here. Don't just accept the "conventional wisdom" as truth when it's so easy to look at the facts yourself.
Not all Intel motherboards cost $150. I know the hivemind here at reddit strongly believes Intel cannot compete in a $500 gaming PC, but it's simply not true. There are plenty of cheap 1155 motherboards, and the Sandy Bridge Pentiums at $65-90 and the i3-2100 at $115 all outperform AMD's similarly priced offerings when it comes to gaming.
Before you downvote me, please at least take the time to look at some benchmarks in this article: http://www.anandtech.com/show/4524/the-sandy-bridge-pentium-review-pentium-g850-g840-g620-g620t-tested/3
I'm as big of an AMD fan as the next guy, but I go by benchmarks, not by brand loyalty.
Not really helpful since the next page shows AMD doing better. I'm also thinking of the fact that you can get a 32 core beast (with a quad-CPU server board) for the same price as a 24 core Intel setup.
Anandtech reviewed it when it was released with comparisons to the ION platform. The e350 has a bit more CPU grunt but the video decoding at the time of the review wasn't quite as good as the nvidia chip. Still good enough to handle most 1080p content though.
It is worth noting that an i5-2400 will beat the snot out of even an overclocked Phenom II X4, as seen in this comparison against a highly clocked Phenom. If you can fit it into your budget, it is a great investment over the Phenoms.
This is a great article - the short answer is that with Sandy Bridge, benefits are negligible above DDR3 1600, and 1333 is nearly as good.
http://www.anandtech.com/show/4503/sandy-bridge-memory-scaling-choosing-the-best-ddr3
http://www.anandtech.com/bench/Product/287?vs=288
That's pretty much all we can tell you, it's up to you if the added performance in those specific usage scenarios is worth it. For mostly gaming, just stick with the i5-2500k. If you're doing a lot of video encoding it might be worth it, but given your description I don't think it is.
i5, or wait for Bulldozer (rumored for June 7).
Phenom II hexacores offer less performance than Sandy Bridge even in multithreaded apps. Relevant benchmark.
This should give you some idea of the pricing of Bulldozer octocores: potentially Gulftown-like performance for Sandy Bridge-like pricing.