> Testing was performed on a system with an Intel Core 2 Quad Q9400 CPU with 4 GBs of RAM and an Nvidia Geforce GTX 650 Ti
ಠ_ಠ
closed tab
Do you know which piece or pieces are receiving the most heat? If not, I highly recommend HWinfo. It was a godsend when trying to figure out what was heating up in my own case and I found out it was my video card.
It's a pretty easy and comprehensive tool! Best of luck on your path forward.
You can also use a utility such as CPU-Z to find out the motherboard model number. That saves you from opening up the computer and looking around with a flashlight, or from finding the box that you hopefully stored somewhere.
It's hard. Benchmarks are about the best we can do right now. The reason is that these processors do a ton of different tasks; even benchmarks usually target specific tasks, so it is hard to build a benchmark that accurately reflects everything the processor is capable of without handing development over to Intel, haha. FLOPS are a better measurement than plain clock frequency, but they are specific to floating-point operations, which is definitely not everything. Unless you can build a benchmark for your specific usage habits, your best bet is the widely available benchmarks.
I assume you've researched the card well, but I'll still throw in some tips.
I think I have to say this. Please don't harass, bully or pester dylanaraps or otherwise spam the neofetch repository with Amogus memes. Dylan has confirmed the reason for removal was the braille and a pull request is already pending that redraws it with ASCII.
If you want to help get AmogOS back into neofetch, please contribute to this repository.
This should be known sooner rather than later: the author of HWiNFO mentioned that the deviation metric is available in HWiNFO v6.27-4185 Beta.
I'll try to break this down quickly and concisely. Feel free to add onto or correct anything as I am not an expert by any means.
Intel vs AMD
Basically, both companies manufacture CPUs. Originally Intel was the go-to chip, as it was better performance-wise and quality-wise, but lately AMD has been able to compete with Intel's products. Generally speaking, Intel products are higher priced; a lot of factors account for this. Specifically, with the chips that have come out in the past few years, Intel has been outperforming AMD. Intel has also made quite a name for itself and does in fact have fanboys who will always choose Intel. AMD, however, is usually much cheaper, but by no means is it a poor-quality product.
Currently I am using the AMD Phenom x4 955 Black Edition. I've had no problems with it, and I have been able to play the latest games just fine (BF3, TOR, etc.). That being said, I am planning on selling this machine soon and building a new one, for which I plan on buying an i5 2500k.
The i5 2500k is well known as the processor of choice on this subreddit, and many users will immediately recommend it. Not that it isn't warranted: it provides a very good price-to-performance ratio and will run current games very well. For comparing these CPUs, this is a good site to use.
Ultimately what is most important is what you will be using the computer for. Generally speaking, the Intel chips fare better for gaming, mainly because games will not be able to take advantage of all of the AMD chip's 8 cores. If you do a lot of video editing, rendering, etc., those programs are more capable of taking advantage of multiple cores, so in that case I would choose one of the AMD chips.
> i7 3770k
You have the processor rated just below the best laptop processor on the market, and you're stuck at 30-40fps? And here I thought we were finally in the age where I could slide by with one laptop to rule them all... Guess I'm investing in a gaming desktop after all.
You get about a 42% improvement in single-thread performance, which will help you in CPU-bound games such as Fallout 4 or Counter-Strike: GO. http://www.cpubenchmark.net/compare.php?cmp%5B%5D=1780&cmp%5B%5D=2598
Sorry for the long post but maybe it's relevant to your issue as well.
The stock CPU fan for my i3-6100 failed after a little more than a year: it would intermittently fail to start spinning, but once it was going it would stay on. I first noticed this because of 90-95C temps in games, and the system even power-cycled, presumably due to CPU heat.
In addition to the MSI Afterburner/RivaTuner in-game OSD, I started using Open Hardware Monitor with its Windows Desktop Gadget and temp history plot to track temps and fan speeds. I noticed the CPU fan often wouldn't come on when powering on the system or resuming from sleep, and I'd have to physically tap one of the fan blades to get it spinning again. I also noticed that when I disconnected the fan and manually spun the blades, it would feel "notchy" in a certain range of the rotation, which didn't feel right. I suspect this is where the fan would sometimes come to a stop, and the initial burst of power sent to the fan when turning on the system wouldn't be enough to overcome the resistance and initiate rotation.
If you're satisfied that your CPU fan is working fine now, or that the issue was user error (e.g. failure to connect the fan; we've probably all been there) that has since been fixed, then you're probably OK. But if you didn't pin down the issue with the fan, I suggest monitoring the speed, perhaps keeping the case's side panel removed so you can actually look at the thing, and maybe considering replacing the fan via a warranty claim or an aftermarket part. I ended up getting a cheapie Rosewill fan, which has been working fine and even runs a few degrees cooler.
Even though a CPU is designed to protect itself from heat, it's still not good for the system as a whole if it has to power-cycle, so it would be smart to keep an eye on temps and fan speeds until you're satisfied everything is fine.
Use GPU-Z and run the Bus Interface test. (It's the little question mark next to Bus Interface.)
It will tell you if you are running at PCIe 3.0, 2.0, or 1.0, with the added bonus of giving you all the other information.
https://www.techpowerup.com/gpuz/
Great program to monitor GPU.
Just bought the lowest-end MacBook Pro!
Rationale:
- price is about the same as before for the 256GB / 8GB RAM config
- the new 2.0 GHz CPU is more power efficient and has about the same performance as the old 2.7 GHz one. Here's a 3-way comparison between the i5-6260U, a 1.8 GHz chip similar to the new 2.0 GHz one, the old 2.7 GHz chip, and the new 2.9 GHz chip
- the lowest end one has a bigger battery than the touch bar ones, 54.5 vs 49.2 Wh
- vim user, so yay for a physical escape key
All of this plus the better display, better speakers, better keyboard, and better trackpad made it an easy decision.
i5 is definitely better.
C2D P8400: 1518 PassMark score http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core2+Duo+P8400+%40+2.26GHz
i5 2557M: 2584 PassMark Score http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-2557M+%40+1.70GHz
Be careful buying old server hardware. Some of it is only good at converting electricity to heat. With emulators the CPU is very important, especially single-thread performance; 50+ cores won't do much for most emulators (a couple, sure, but not many). Depending on the emulators you are shooting for, I'd just build a system around an Intel Pentium G3258.
Some benchmarks to give you an idea what to look for. http://www.cpubenchmark.net/singleThread.html
According to your messages in other subreddits, you're playing GTA V, which is a very CPU-intensive game. But even so, it's hard to say whether you're CPU- or GPU-bottlenecked. So you should try a simple thing to check, using GTA V (it works for any game):
How to read your results:
BTW, you don't seem to give a single f**k about your post, so I'm wasting my time, but it's still good advice for people here :)
Is this what we're calling old hardware now?
Granted it's not gaming hardware but it's reasonable to think a small business or somebody's grandma might still have one of these lying around.
It's not a Pentium III, is all I'm saying, people...
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+Duo+L2500+%40+1.83GHz
A slightly overclocked Ryzen 1700 will be about 1.7x faster in multithreaded applications (according to CPUBenchmark). That's a huge difference.
For single-threaded applications, the i7 will edge it out slightly (scores of 2027 vs 1754, according to this page).
Overall, the 1700 versus that CPU will be like night and day, if you do productivity tasks.
The Athlon X4 860K is the same socket, cheaper, better single-threaded, and better in total performance. I'm also still learning, but you might want to look into this.
http://www.cpubenchmark.net/compare.php?cmp[]=2362&cmp[]=1446
You could install Core Temp and see if your computer is overheating, which is pretty common. It is always worth checking inside the case, if you can, to see if there is dust buildup, especially in areas with lots of airflow. Other than that, it could just be problems related to being an old computer, or simply faulty hardware, which is also not uncommon.
I went back in time and looked at HWiNFO changelogs to find the first mention of Zen/Summit Ridge ~~in version v5.32 on 7 June 2016, 11 months before Ryzen went on sale:~~ >Enhanced preliminary AMD Zen support.
Later Edit: Actually, the first mention of Zen is older than that; it was on 14 October 2015, in version v5.06:
>Added preliminary recognition of AMD Zen (Summit Ridge, Raven Ridge).
First of all, those AnandTech graphs were pure troll, cherry picked in the worst possible way... i7 5820K at 4.8 GHz, i7 5960X at 4.7 GHz, i7 6900K at 4.2 GHz. The actual averages in PassMark for those CPUs:
http://www.cpubenchmark.net/compare.php?cmp%5B%5D=2340&cmp%5B%5D=2332&cmp%5B%5D=2794
Thus it's overall: 15,084 versus 12,986 versus 15,979 versus 16,786.
Then single threaded: 2,046 versus 2,013 versus 1,990 versus 2,168.
Considering the 1700X supposedly had its turbo disabled and is a $400 CPU, giving 95% of the performance of the $1,000 i7 5960X... That's pretty damned good I'd say. Chances are the 1800X will be as close to the i7 6900K as the 1700X is to the i7 5960X. Perhaps closer still if turbo truly isn't working here. This is a huge win for AMD, 90% of the performance of Broadwell-E at 50% of the price. The only thing to be concerned about is Intel's margins when these bad boys finally hit retail!
That's because technically it's only a dual floating point module processor, with 2 cores per common FP module.
http://www.cpubenchmark.net/cpu.php?cpu=AMD+A10-5800K+APU&id=1446
There's a chance the pump died. Check your BIOS or use some reliable software like HWiNFO which will report your pump speeds. If it's 0 RPM then time to RMA that AIO.
I went to the source code for answers.
For those who aren't programmers: neofetch checks for the existence of several distro-specific files and programs, as well as the contents of the /etc/lsb-release, /usr/lib/os-release, and /etc/os-release files if they exist. Failing all that, it looks for the names of distros in files named /etc/[whatever]-release (I may not be reading this part of the code correctly).
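If you're curious what that style of check looks like, here's a minimal shell sketch of the general approach; this is my own illustration, not neofetch's actual code:

```bash
#!/bin/sh
# Rough illustration of os-release-style distro detection (not neofetch's real code).
if [ -f /etc/os-release ]; then
    # os-release is machine-readable shell syntax, e.g. ID=ubuntu, NAME="Ubuntu"
    . /etc/os-release
    echo "distro: ${NAME:-$ID}"
elif [ -f /etc/lsb-release ]; then
    . /etc/lsb-release
    echo "distro: $DISTRIB_ID"
else
    # Last resort: scan the assorted /etc/*-release files for a known name
    for f in /etc/*-release; do
        [ -f "$f" ] && grep -iEl 'fedora|suse|gentoo' "$f"
    done
fi
```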
I realize that people have weaker rigs than yours, but they probably are not trying to throw ~2000 part crafts at them either. Large crafts have always brought rigs to their knees.
And while it might have been better than early i7s, you're still running one of Intel's lower-end chips, and its Passmark scores reflect that. Considering that KSP is CPU-intensive, since Unity's physics engine runs on the CPU, I feel like you're expecting more than is reasonable performance-wise.
It's CPU-bound on a single core; check out CPU Benchmark and then the single-thread details. Intel has the fastest single-thread CPUs. I think your CPU has the highest score on that list: http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-4790K+%40+4.00GHz&id=2275.
Edit: this is an accurate list of which CPUs are most likely to give the best performance in KSP: https://www.cpubenchmark.net/singleThread.html
Edit 2: Why did I ever choose to go with an AMD motherboard? The best AMD processor has 67% of the performance at 72% of the price. However, the AMD processor uses 250% of the power the Intel does.
Recommendation to those building a PC for KSP purposes: Do not choose a motherboard with an AMD socket.
> Minimum Intel Core i5-2500K @ 3.3 GHz or AMD FX-8350 @ 4.0 GHz or AMD Phenom II x4 940 @ 3.0 GHz
> NVIDIA GeForce GTX 680 or AMD Radeon HD 7970 (2 GB VRAM)
Biggest load of bullshit I've seen in ages. Just say that you haven't got the slightest idea of how to put your games on PC.
How is the AMD Phenom II X4 940 at the same level as the 2500K?? Or even the FX-8350, for that matter. It's not even in the same range of CPUs.
It seems they have issues with VRAM being too low on GPUs, and either they're not using it as intended or they're pushing all the textures onto it, like Titanfall, to fill it up. The games are definitely not pretty enough to justify filling up the entire VRAM.
As far as performance goes the minimum does not make any damn sense.
He said cpubenchmark.net, which is passmark database.
In fact, there it is indeed listed as faster: http://www.cpubenchmark.net/compare.php?cmp[]=828&cmp[]=1780
The problem is that this happens only in certain very well threaded workloads, while in many others the i5 wins
This is exactly right, the two aren't even comparable. Even in the single thread rating the 2500k smokes the Phenom.
Source: http://www.cpubenchmark.net/compare.php?cmp[]=367&cmp[]=804
> Processor: Intel Core i7 or better
So would a first-gen i7 suffice, or does it need to be current? There is a nearly 3x difference in capability between first and current generations.
Granted, I'm guessing it'll require far less anyway...
> Graphics: DirectX 11.0 compatible (2 GB) or better
This doesn't really mean anything. A 6450 meets these requirements but is extremely low-performance for gaming.
I wish that companies would actually put out real requirements rather than fluff. Fluff doesn't help anyone with preparing for a game.
An i3 4330 scores about 20% better than a 965 while using less than half the power. Single-thread improvement is more like 70%. It's not a huge upgrade, but it's definitely an upgrade. If you let your PC run 24/7, the power savings will probably pay for the i3 by the time you replace it.
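Back-of-the-envelope on that power claim, with assumed numbers (~125W TDP for the Phenom II 965 vs ~54W for the i3-4330, and $0.12/kWh; swap in your own rate):

```bash
# 24/7 power savings estimate. Assumed: 125W vs 54W TDP, $0.12/kWh.
awk 'BEGIN {
    saved_watts = 125 - 54
    kwh_year    = saved_watts / 1000 * 24 * 365
    printf "~%.0f kWh/year saved, ~$%.0f/year at $0.12/kWh\n", kwh_year, kwh_year * 0.12
}'
# -> ~622 kWh/year saved, ~$75/year
```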
This is a good deal for someone who wants to build an LGA 1150 (aka current gen) Intel machine but doesn't want to invest in a good CPU yet.
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Celeron+G1830+%40+2.80GHz
Wow, I read the whole thing, Just ridiculous. I was pissed off when people stole a couple design elements from me, can't imagine how you're feeling.
On a lighter note, those sketches are lovely, I especially enjoyed the portraits.
~~EDIT: I looked up benchmarks for the hardware they sent you. The card is absolute shit and the CPU is less powerful than an i7-2600.~~
I've always wondered how such a machine would perform at gaming. Out of curiosity, why wouldn't it work?
Just for the sake of mentioning it: Current Benchmarks of Multiple CPU Systems
Read this: https://www.hwinfo.com/forum/threads/effective-clock-vs-instant-discrete-clock.5958/
Effective clocks provide an accurate indication of what the cores are doing, in my opinion.
Core clocks are like the top speed shown on your speedometer.
Right now the main constraint for KSP is the CPU. You want a CPU that has exceptional single-threaded performance. Intel CPUs tend to be the best brand for this.
Check out a single-threaded performance benchmark here, and then choose the best performing one you can afford.
The second and third most important considerations are RAM and GPU. You want at least 6 GB of RAM (KSP can use up to 4, plus a few gigs extra for other processes), and any decent graphics card should be able to run it without much trouble.
Just because AMD has more cores, does NOT mean it does more work.
http://www.cpubenchmark.net/high_end_cpus.html
The top 17 processors are all Intel, and 23 of the top 25 are Intel.
The very top chip in that list (E5-2690) is an octa-core with hyperthreading, effectively a 16-logical-core chip. The top AMD chip, at about 60% of the performance, is the Opteron 6272, a 16-core chip in a 4-socket configuration, effectively 64 logical cores.
The comparison page for those 2 processors:
http://www.cpu-world.com/Compare/796/AMD_Opteron_6200_series_6272_vs_Intel_Xeon_E5-2690.html
Intel's chip uses 135 watts of power and relative cooling; AMD's chip uses 115 watts of power and relative cooling.
So, to sum it all up: the Intel chip uses 135 watts of power with 8 (16) cores to achieve a PassMark score of 16,609, and costs about $2,000. The AMD chip uses 115 watts of power with 16 (64) cores to achieve a PassMark score of 10,245, and costs about $500.
Intel provides 123 passmarks per watt, and AMD provides 89 passmarks per watt. Intel costs (approx) 2.2 cents per hour to run, AMD costs (approx) 2 cents per hour to run.
Performance parity between the two chips would be approximately 5 AMD chips to 3 Intel chips. The cost to run 5 AMD chips is about 10 cents per hour; it's 6.6 cents per hour for 3 Intel chips.
So there is a 3.3-cent-per-hour advantage to running Intel for these 3-vs-5 workloads. Though this does not incorporate cooling: generally speaking, about 2x as much power will be required for cooling (including room cooling) as for your processors, so it's more like 9.9 cents per hour.
If you bought 5 AMD chips for $2,500 and 3 Intel chips for $6,000, at a 9.9-cent-per-hour disparity the break-even point is just over 4 years (at 4 years, the power savings are $3,421).
This is based on 6.9c/kWh, which is very cheap. Actual power savings will probably be higher; for example, in California it's usually 14-19c/kWh unless you are using Bloom boxes.
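If you want to redo this arithmetic with your own electricity rate, it's tiny; here's a quick sketch using the scores and TDPs quoted above (the rate is a parameter, and the cents-per-hour output scales with whatever you plug in):

```bash
# Perf-per-watt and hourly running cost from the quoted figures.
# rate is your electricity price in $/kWh (0.069 = the 6.9c figure above).
awk -v rate=0.069 'BEGIN {
    printf "Intel: %.0f passmarks/W\n", 16609 / 135   # -> 123
    printf "AMD:   %.0f passmarks/W\n", 10245 / 115   # -> 89
    # hourly cost in cents for a chip drawing W watts: W/1000 * rate * 100
    printf "135W: %.2f c/hour, 115W: %.2f c/hour\n",
           135 / 1000 * rate * 100, 115 / 1000 * rate * 100
}'
```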
90+ percent of the high performance CPUs on this chart are Intel-based. That isn't speculation or anecdotal experiences, just numbers. Draw your own conclusions.
> I believe North Korea has its own seperate internet, so they don't have access to being on Reddit
Fun fact:
North Koreans who specialize in IT get unlimited access to the world wide web with the caveat that they cannot make personal accounts and can only communicate through the Korea Computer Centre.
example of a North Korean person communicating with the outside world.
My first ever hipster rice. Wanted some pleasant colors and visuals to accompany me through the cold months. Despite being a hipster rice, I made sure to sprinkle in just a little bit of weeb.
wm - i3-gaps
terminal - urxvt
terminal font - tewi
web browser - firefox + twily's css
music - ncmpcpp + mopidy
text editor - vim
irc client - weechat
system info - smugfetch
And, as is customary, my dots.
Onward to 2017 with my pretty desktop <3
With a little effort I'm sure you can figure out what needs to be done, try visiting /r/buildapc.
If you download CPU-Z from CPUID (https://www.cpuid.com/softwares/cpu-z.html), it will tell you what you have for CPU, RAM, and GPU, and from there anybody can tell you what you need to upgrade, or whether you are better off buying a new system entirely if you want to avoid building one yourself.
You don't really have much choice, there's only one iMac under £1000.
If you're set on Apple you could potentially get her a 13" Macbook pro, which would be useful if she needs to edit away from home. If she already has a desktop that she's happy with then this would be more helpful. Plus, (although it is £100 more) the Macbook is significantly faster than the iMac according to this comparison of CPUs.
Alternatively you could get her a custom-built PC. You might be able to get one at a local computer store, but if you have a geeky friend who knows about computers I'm sure they'll help you out for cheaper. If she uses a PC currently then this might be the best option. Apple Macs cost significantly more for the same performance, plus the OS is unfamiliar to Windows users.
Again, going in a completely different direction, have you considered getting her something other than a computer? You can get some really nice lenses around the £1k mark, which she would definitely appreciate.
Yeah... now you know why we don't like SLI, it isn't a well made technology (for most applications). PUBG is particularly infamous for losing performance with an SLI configuration.
We can walk through a couple of things if you like, but I recommend just not using SLI and returning the second card.
First have a look at your PCIe lanes with GPU-Z, what does it show for each card in the "Bus Interface" field?
I can confirm that most of what OP says is true, and I also share some of his frustrations. I have a bit of experience from hosting massive multiplayer events and I like hanging around the r/factorioMMO servers. So here are some more thoughts.
Factorio headless is mostly single-threaded. It still does many things in other threads, so I don't expect it to run well on just one core, but going beyond 4 cores will have no performance benefit imo. If you want to be able to run a big server with a big map, look for CPUs with good single-thread performance: http://www.cpubenchmark.net/singleThread.html.
I absolutely hate virtual cores from VPS because they are usually trash.
Since 0.14, many people don't realize that the game is sometimes greatly slowed down by a slow server. They look at the FPS counter and see 60 UPS/60 FPS, but they don't know that many of those 60 UPS are duplicated because the client is waiting for the server. You can see this happening if the multiplayer waiting icon is blinking constantly, after you activate it in the debug options.
But! There is one advantage to a slow server: it becomes more accessible to players. A slow server means people with slower computers can join, since the game kicks (or becomes unplayable for) anyone that can't keep up with the server. Also, people joining mid-game can catch up much faster.
As for internet, Factorio is still quite sensitive to bad connectivity, and good connectivity, quality, and high speed usually go hand in hand. Try to look for a provider with good peering. A server of around 200 people needs 2 MB/s of traffic for game packets, but you also need good speed (or a map download limit) to make sure that map-download packets don't slow down the game packets. 1 Gbit is what I recommend for MMO events; 100 Mbit (with a map download limit) for small servers.
The "Turbo" version has an Intel Pentium N3710 CPU which has a passmark of 1299 which would probably struggle to transcode even a 720p stream. For direct play it would work fine I would imagine.
Not unless you are doing something that requires more cores/threads, like CPU encoding for streaming games over Twitch (or another streaming service). If you are just playing games, you probably will not notice a difference, though you will be able to multi-task a lot more easily.
The one thing you would notice is that this CPU sits on an X99 motherboard with DDR4 RAM. That stuff is fast.
If you are looking to upgrade your mobo so you can use DDR4, may I suggest the i7 5820k? According to this, it is the highest-ranked CPU for under $1000. It is the one I am going to get when I upgrade to a desktop later this year. New RAM, a mobo, and this CPU is rather pricey, but it leaves you a ton of room to grow.
Considering this is a hardware subreddit... I would assume I am not alone in caring about After Effects, Cinema 4D, and other animation/rendering programs' multi threaded performance per dollar. AMD actually does pretty well here.
http://www.cpubenchmark.net/cpu_value_available.html
Benchmarks that max out the CPU matter a lot to animators, as they compare raw performance. Games rarely max out the CPU.
Build a custom setup. That processor is woefully underpowered - upgrading the RAM isn't going to have a noticeable impact on DF in your scenario.
Core i3 processors run in the 4k range on CPU score, while i5 and i7 processors run upwards into the 7-8k range. While DF's focus is a single thread, don't underestimate the value of a modern processor (clock speed is not everything).
Just keep in mind that your processor will probably run you $100 for something cheap, probably another $75-$150 for a motherboard, and then another $75-$100 for RAM. Even if you go with something cheap, expect to spend roughly $300.
I don't know what your budget is like, but you might just want to make a good all-around gaming PC instead of trying to build one just for DF.
What does it show in GPU-Z? To be honest, though, I strongly suspect you're simply mistaken and you have a GTX 650 Ti. In every instance I can remember of people coming on here (/r/techsupport) with a problem like this, and it has happened many times, it was a mistake by the poster.
Well, that's your first problem.
Install: https://drivers.amd.com/drivers/amd_chipset_drivers_v1.07.07.0725.exe
Next, you need to use this one version of HWInfo64 that supports Ryzen 3000 fully:
https://www.hwinfo.com/files/hwi_609_3855.zip
... HWMonitor is garbage software, don't use it. Only use HWInfo64.
Is it even 10% per generation? Looking at the Passmark benchmark, the 6700K, three generations after the 3770K (two of which were "tocks"), scores just ~15% higher. This includes gains due to higher clock speed, and not just IPC gains. And Kaby Lake will be, apparently, a Skylake "refresh".
I hope that Zen changes things, but, what we can reasonably expect is that, if it costs like an i7, it will perform like an i7 (or better, yeah, but on the same tier), and vice versa.
You are right about iGPU gains, though. We can expect them to get better. I can't really tell how much better, though... and the 980Ti is pretty strong.
Synthetics put them very close at stock speeds, with the Xeon having a better single-thread rating despite a slightly lower overall score.
And then there's the four-thread i5-4690 not backing down at all in real-world tests, which suggests that this eight-thread Xeon would end up ahead of the 9590, at stock.
"and boom here we go again - http://www.cpubenchmark.net/compare.php?cmp[]=2347&cmp[]=2017 42% better multithreading performance."
Nobody denies AMD's 8-cores have relatively better multicore performance. But here's the thing: unless you do movie encoding, hardly anything utilizes 8 cores fully. The Intel counterpart (like the i5) has significantly more power per core, which is what benefits almost everything you do: general Windows usage and gaming.
Not to mention you won't be OCing that AMD at all with a cheap mobo and stock fan.
A Sandy Bridge-based i5 will slaughter Atom boxes in both processing power and power usage.
Running on cpu score alone http://www.cpubenchmark.net/cpu.php?cpu=Intel+Atom+Z550+%40+2.00GHz - 386 points
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-2500+%40+3.30GHz - 6506 points
So on synthetic benchmarks the i5 is comparable to 16 atoms. I know the real world doesn't work like this, but, if you think about it, it's absolutely crazy.
It is wayyy more efficient to virtualize those Atom machines onto an i5, and it'll cost you less in the long run too.
You're probably running an older version of neofetch. They added the Win 11 logo 8 days ago, and people don't seem to be complaining about it.
Cool project! I didn't know it existed. I've just added support for using pixterm as an image backend to neofetch. :)
https://github.com/dylanaraps/neofetch/commit/6711ebc91f9bea32a6c2c75b56324f35ca691925
Here's what it looks like: https://i.imgur.com/l6weZgi.png
The detection for the Resizable BAR BIOS and CSM options is incorrect:
http://jedi95.com/ss/cfa5f02d5fc00eae.png
Resizable BAR is enabled, and CSM is disabled.
https://www.techpowerup.com/gpuz/details/gkuab
NVCP reports "enabled" for Resizable BAR. GPU-Z's Graphics Card page also correctly reports Resizable BAR as enabled.
The other fields are reported correctly.
Specs:
Ryzen 9 5950X
Asus Crosshair VIII Hero Wifi BIOS 3501 (AGESA 1.2.0.2)
EVGA RTX 3090 FTW3 (w/Resize BAR BIOS)
Open Hardware Monitor hasn't had a release since 2016 (so, no Ryzen support), and based on github activity it really hasn't seen much development at all in the intervening time even if you were to pull the code and build it yourself. HWInfo would be a better option, though it's not open source.
Skip MalwareBytes. Defender built into Windows is very good already. Leave it turned on.
CPU-Z by CPUID is a popular tool for gathering information about your system. I think a lot of people will have an older version somewhere on their drive. Yesterday, a flaw was disclosed allowing arbitrary memory read & write.
The version itself is quite old, but still, this might be a good time to check that your diagnostics software is up to date.
Current version is 1.81, available here: https://www.cpuid.com/softwares/cpu-z.html
CPU Benchmark keeps benchmarks on every modern processor. This page shows a comparison of the m3 and m5 CPUs in the MacBook, and the i5 in the MacBook Air
On paper, we're talking about a difference of 20% processing power between the base-model MacBook Air and the base-model MacBook. A difference of about 9% on the base-model MacBook Air vs the Core m5 MacBook.
But for two reasons, those differences aren't as bad as they seem. For one thing, Apple appears to be using these chips at a higher clock speed than the stock models CPU Benchmarks tested, which shrinks the gap considerably.
For another thing, the biggest bottleneck in a computer, despite the advancements made, is still the storage. The MacBook's flash storage is vastly faster than that found in the MacBook Air, which shrinks the speed gap as well.
In real world usage, the two machines will fare about the same in day to day tasks. Neither of these are machines for gamers, video editors, or any other similarly high-performance tasks. If you're concerned with power, move up to the MacBook Pro. The i5 and i7 chips in the MacBook Pro are a magnitude better than the Air or the MacBook.
If you're more of a consumer, meaning you browse the web, watch movies, and use Office (etc), the MacBook is the machine I'd choose. You will make trade offs with this machine, such as the sole USB port. However, you wouldn't be considering MacBook in the first place if you couldn't get by with one port.
Better price, lower TDP, built for 24/7 operation, and no built-in GPU, so better temps and overall efficiency. More features, like ECC and some virtualization features, and overall a longer lifetime than the i7. Built to handle high workloads over long periods of time.
The Intel Xeon 1241v3 offers about the same performance as the i7 4770k for ~$100 less.
My passmark score for Intel Xeon 1241v3 is 10580. Compare
passmark is bullshit.
Although frankly, newer i3s even beat it there http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i3-6100+%40+3.70GHz
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core2+Quad+Q9650+%40+3.00GHz
His CPU is almost 6 years old. It's less than half as powerful as the CPU recommended by Oculus.
The socket is old too, so they'd need a new mobo, new CPU, and new GPU. But they'd most likely need a new PSU too.
So they can keep what? Their RAM and hard drive? Sure. But the rest needs upgrading.
>Too many people here are focusing on entire PC upgrades when most people just need a GPU upgrade.
No, too many people think they can run the Rift with their 6-year-old CPU.
Ram can be easily tested with MemTest86.
Can you recreate this crash? Perhaps the laptop overheated? Temps can be checked with HWiNFO64. Have you used any CPU/GPU tuning software? If yes, open the software again and reset all values to defaults.
Update Intel chipset+iGPU drivers and Nvidia drivers.
http://www.cpubenchmark.net/compare.php?cmp[]=1780&cmp[]=2230
Pro FX8350
- Better distributed performance
- Price (as you indicated)
Pro i5
- Better single-threaded performance
- Lower power consumption
They are, but AMD gives better overall performance for the same money.
But since Intel runs rings around AMD in single-thread performance, it matters not. A four-core i5 will routinely do much better in games than an AMD 8350, despite the 8350 being "better" at multi-threaded work.
Overall value = AMD.
http://www.cpubenchmark.net/common_cpus.html#cpuvalue
Single Thread Value = Intel.
Performance wise, the Xeon E5-2697 is quite possibly the very best non-military CPU available. It has roughly 1.3x the power of the i7 5960x ^1
(as a side note, the Xeon has 14 physical cores and 28 logical)
Don't use HWMonitor, as it's known to give bogus readings. Use HWiNFO instead.
If the temps are high in hwinfo as well, is your cooler installed correctly? If it's an aftermarket cooler, did you remove a plastic film if there was one?
Passmark is a decent CPU benchmark, and look how low your CPU scores.
And looking at a game like Metro 2033, in which your card struggles to get even near 30 FPS, it's guaranteed to look choppy.
Sorry man, you need a new rig.
It depends on many things:
So, it really depends on what you plan to use your computer for and what your budget is. If you don't plan on doing much video/audio rendering, the i7 2600k probably isn't for you. The i5 2500k seems to be the processor best suited to the needs of most people on this subreddit, costing about $210 USD.
Probably wouldn't look good because it would give a lot of unseemly color codes. :(
Edit: Just tested it, you can do `neofetch --stdout` to get rid of the color codes, but it also gets rid of the ascii art.
Final Edit: `neofetch | sed 's/\x1B\[[0-9;\?]*[a-zA-Z]//g'` will get rid of colors, but keep the ascii art. Thanks to ghost from the link above.
Mostly everything. The processor is very old and weak, there's only 4GB of RAM (but don't blindly buy more just yet), the GPU is quite old as well, etc.
What is your budget?
What are your performance expectations? What kind of games do you want to play, at what settings?
Do you know which GPU of the HD 6900 series that is? If you don't, please run GPU-Z and take a screenshot. Or look directly at the sticker on the back of the graphics card and tell us the model number written there.
Well, it's all in the sidebar, but here you go: GPU-Z will show you your GPU and VRAM clock speeds, CPU-Z will tell you your processor clock speed, memory clock and timings.
Edit: I understand that you probably won't know what to do with all that information. If you just post a bunch of screenshots, we will figure it out together from there.
Use HWiNFO64 to monitor the "Tdie" temperature (the actual temp at the processor die, under the heat spreader your cooler is sitting on). Once you open HWiNFO64, click the "Sensors" button. "Tctl", which has a +10C offset, may be what you're seeing in Ryzen Master (I've not used Ryzen Master since a 1500X I set up last year, but it did report Tctl at the time).
Grab Open Hardware Monitor. https://openhardwaremonitor.org/
Run it.
Play a song or two on Beat Saber; once it starts lagging, take your headset off and tab over to it.
Check whether system temps are causing throttling.
The other thing to check is to make sure your Vive's HDMI cable is plugged into your dedicated GPU and not your onboard graphics. It's a simple thing people sometimes forget, even us tech-savvy folks.
Look at the 6800k: exactly the same clock, if turbo is working on Ryzen. And the single-core score is actually the same or better. So turbo being disabled does not make sense to me.
http://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-6800K+%40+3.40GHz&id=2785
While these numbers include individual OC results in the average, too, Ryzen is just at stock. So I assume Ryzen nearly catches Skylake IPC-wise...
I prefer the fewer-larger-but-more-expensive drive setup, but many prefer the more-smaller-but-less-expensive one. Regardless, you might want to look for a chassis that has a drive multiplier/backplane based on SAS or SAS2. Hook that up to an LSI HBA (like the popular IBM M1015) in IT mode and you're good to go.
The rest of the hardware is personal preference. You want double Xeons? Sure, lots of processing power, but a lot of heat and power usage as well. One frame of reference can be the CPU PassMark score. This is a handy site to quickly (read: very basic, high-level) compare CPUs: http://www.cpubenchmark.net/
A more modern CPU can have tweaks specific to your needs (say, encryption acceleration) and lower power usage. It's a balance between performance, power usage, heat output, and upfront costs. Combine all those factors into a TCO you're comfortable with and you've got yourself a setup.
Personally, I try to spend more upfront (more expensive but way less power hungry CPU, larger/less drives etc) to keep the noise, heat and power to a minimum.
Those scores don't say anything about performance; it's just a number, without telling you why one part is better. It's honestly one of the worst sites to go to when looking to buy new hardware.
A few other scores from Passmark
i3-4370 beats out other i5s and i7s
The GTX 690 is worse than the 680 despite the 690 being a dual 680 GPU
People should avoid Passmark.
That's an ancient processor that would be terrible for Minecraft due to its poor single thread performance. Why not just rent from SoYouStart or some place? That plus the slow disks would bottleneck hard.
I sincerely hope that you're trolling. That CPU is a good 25% slower than a 1230v2, and almost 40% slower in terms of single thread performance. You could rent a 12xxv2 machine with actual SSDs for ~$50/mo in most cases.
96GB RAM? Seriously, paired with that dinosaur processor?
http://www.cpubenchmark.net/compare.php?cmp%5B%5D=2193&cmp%5B%5D=2266
According to PassMark they have about the same multithreaded performance (when using all the cores). However, if the software can only use 2 cores, it will run better on the i3.
Comparing just the CPU performance, not even taking into account the poor performance of the integrated graphics, the rMBP already falls short of a high end desktop. That kinda goes without saying that a laptop doesn't perform as well as a desktop, let alone a high end one.
>It has a Intel Core 2 6600 2.4GHz CPU
>my setup should be able to support up to 3 concurrent transcodes
PassMark score: 1555
The minimum suggested PassMark score for a single 4Mbps/720p transcode session is 1500.
>First question - most of my files are MP4 and MKV, if I understand correctly Roku can direct play them, meaning no transcoding is required. Is that accurate?
MP4 and Matroska (MKV) are containers, which contain media streams -- audio, video, subtitles, etc. The media streams are encoded in various formats.
If your Plex client cannot decode one or more streams, transcoding would be required. If your Plex client is not capable of opening the container, Plex Media Server can simply repackage the streams in a container supported by your client.
The Roku 3 should be able to play H.264 encoded video streams and either AAC or MP3 encoded audio streams contained within either an MP4 or Matroska container.
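If you want to see what's actually inside a given file, ffprobe (bundled with ffmpeg) can list the streams. A quick sketch, with movie.mkv standing in for one of your files:

```bash
# Print each stream's type and codec (requires ffmpeg/ffprobe installed):
ffprobe -v error -show_entries stream=index,codec_type,codec_name \
        -of csv=p=0 movie.mkv
# A Roku-friendly file would show something like:
#   0,video,h264
#   1,audio,aac
```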
>If so, does that mean I could share with lots of people assuming they are all using Roku, Up to my bandwidth limit?
Yes. Streaming without transcoding does not require very much CPU power.
>Second question. I was given a pair of HP Proliant DL380G6 servers, each with 144GB of RAM and 2 Intel Xeon X5570 2.93GHz CPUs (4 cores, 16 logical cores total).
PassMark score: 5630
This CPU would be capable of supporting 2x 10Mbps/1080p plus 1x 4Mbps/720p transcode sessions (or 3x 4Mbps/720p transcode sessions).
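That estimate follows from the usual rule-of-thumb costs (~2000 PassMark per 10Mbps/1080p transcode, ~1500 per 4Mbps/720p). A quick check against the 5630 score:

```bash
# Session math from the rule-of-thumb PassMark costs (2000 per 1080p, 1500 per 720p):
awk -v score=5630 'BEGIN {
    n1 = 2 * 2000 + 1500   # 2x 1080p + 1x 720p
    n2 = 3 * 1500          # 3x 720p
    printf "2x1080p + 1x720p needs %d -> %s\n", n1, (n1 <= score) ? "fits" : "too much"
    printf "3x720p needs %d -> %s\n",           n2, (n2 <= score) ? "fits" : "too much"
}'
```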
>If I wanted to make the best possible Plex server possible given my hardware what would you recommend?
Unless you have a reason to partition the server into one or more virtual machines, I would forgo any virtualization as it will simply reduce the efficiency and power of your server. For Plex Media Server, you want to maximize the CPU power.
No, the mechanism was falsifying the current reading. Not the TDC or the EDC limits. The measurement, directly.
The CPU's FIT system, functionally, is a map of temperature to safe current, assuming some particular desired lifespan.
Falsify the current, and the CPU will request more voltage ("overvolting"), draw an unsafe current, and shorten its lifespan.
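A toy model of that feedback, with made-up numbers purely for illustration (none of these values come from a real board):

```bash
# Toy illustration: skewed telemetry makes FIT think the draw is safe.
awk 'BEGIN {
    actual_amps = 100      # what the die really pulls (made-up)
    skew        = 0.7      # board reports only 70% of the true current
    fit_limit   = 90       # what FIT considers safe at this temperature (made-up)
    reported = actual_amps * skew
    printf "FIT sees %.0fA (limit %.0fA): looks safe, but the die pulls %.0fA\n",
           reported, fit_limit, actual_amps
}'
```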
Everyone's telling you to upgrade, so I'll chip in a little more detail as to why. The AMD Duron CPU was released 14 years ago as a competitor to the then-popular Pentium III and was one of the lowest power options in the category of CPUs that included the AMD Athlon, Intel Pentium III and Intel Celeron. It is a single-core, 32-bit CPU with clock speeds ranging from 600MHz in the earliest edition to 1.8GHz in the latest edition. Its Front-Side Bus clocked in at 100MHz or 133MHz.
In short, this thing isn't just "old", it's practically "ancient" in terms of the number of hardware generations that have occurred since its prime. And even in its prime, it was the bargain-basement option.
Just about anything you can find on ebay or craigslist these days will be better than what you've got right now. If you can afford to spend a few hundred dollars and don't mind assembling it yourself, you could build a nice Xeon-based system (such as with an E3-1225v3 CPU) with an Intel Pro NIC for $600-$800.
Just for kicks, I tried to find some benchmarks for you. This was a little tricky, since Passmark was in its infancy when this CPU was released. http://www.cpubenchmark.net/pt7_cpu_list.php shows only a couple of entries for the Duron -- the basic "AMD Duron" entry shows a passmark score of 268. By comparison, the Intel Xeon E3-1225v3 has a Passmark score of almost 7,000 and the best CPUs on the market today have Passmark scores approaching 14,000.
I'm impressed that you've gotten this machine to limp along to this point, but it's well overdue for retirement.
>Try to go look at a chart of processor specs over time. Processor speed has pretty much flattened out now.
Err, no. Speed as expressed by the primitive measure of gigahertz has flattened. Speed as measured by actual computing power continues to grow at historical rates. Check out the top CPU on that chart, at 3.33 GHz, with a 10,000 score. Ctrl-F "Pentium 4": the first one you'll find is 3.4 GHz, and it scores 500. Today's 3.4 GHz is 20x faster than 2002's 3.4 GHz. It also uses half the power.
Might I follow up with the original technical JPRS report, the Chernobyl Notebook, also written by Medvedev prior to writing "The Truth About Chernobyl".
Reading it made it clear how many levels of idiocy were involved in the disaster.
I'd say it's worth it. For one thing, you can't watch Netflix in 4K on any CPU generation other than Kaby Lake right now. Also, there is a performance increase of 10% according to the PassMark score.
Hate to break it to you but your computer is way below minimum spec.
Your processor benchmarks just barely above the required minimum (See this comparison here)
Your video card, on the other hand (and I'm reluctant to even call what you have a video card), is far, far below the minimum required specs (See the comparison here).
I'm guessing you bought, or were given, a pre-built laptop and whoever picked it out didn't realize that an Intel HD Family card isn't the same thing as a dedicated graphics card. That card is meant for little more than video playback and some older games with the graphics turned all the way down.
There have been some issues similar to what you are describing reported for the PC version, but I'd really recommend upgrading as soon as you are feasibly able to. Unfortunately, since you are on a laptop, you can't just go out and pick up a new graphics card to slot in; you'll need to upgrade to an entirely new system if you want to get better hardware.
Your single-core rating on that 8350 is actually lower than that of your 4-core CPU. So unless you have a program that really needs those extra cores, wait.
http://www.cpubenchmark.net/compare.php?cmp%5B%5D=1780&cmp%5B%5D=1929
Well, just looking at RAM in isolation, then yes, your old laptop is 'better'; although most people would recommend you get the 8GB SP4 anyway, even if you don't need it now, for future-proofing.
In terms of basically every other spec, no, your old laptop is not 'better'. The SP4's CPU is slightly better, whilst having a 15W TDP instead of a 35W TDP, meaning it will run cooler and quieter and consume less battery.
The hard drive is no contest, SP4's PCIe SSD vs a Hybrid HDD with only 4GB SLC NAND Flash portion - Read and Write speeds will be massively better in the SP4 and this translates in to a much more responsive system.
The screen is again no contest, SP4's 2,736 x 1,824 resolution display vs your laptop's 1366 x 768 display - meaning nearly 5 million pixels instead of just over 1 million; plus better colour accuracy etc.
In terms of raw computing power, there may not be a massive increase over your laptop - although the SSD will make a massive difference in how quick the system feels. The appeal of the Surface line comes due to the form factor, pen, amazing display etc - whilst still managing comparable performance to full size laptops.
> AMD will pass Intel on IPC (instructions per clock) with Zen.
Do you have any sort of source on this, or is it just rampant speculation that AMD, in a single generation, will somehow catch up to and even pass Intel, who has increased their IPC lead over AMD every generation for the past 8 or so years?
> they will both be at about the same level
Considering how AMD currently is about 5 levels behind Intel when it comes to IPC I seriously doubt that.
> http://www.cpubenchmark.net/compare.php?cmp[]=2275&cmp[]=2565 Notice how the Haswell CPU has a higher Single Thread rating than the Skylake due to the 0.2 Ghz faster Turbo Speed.
And passmark is a horrible way to measure CPU performance, check this for example for proper data:
http://www.anandtech.com/show/9483/intel-skylake-review-6700k-6600k-ddr4-ddr3-ipc-6th-generation/9
Spoiler: Haswell to Skylake (DDR3): Average ~5.7% Up.
Make sure you're following your FIT voltage.
Use HWiNFO instead of HWMon as HWiNFO is way better maintained and more accurate.
One thing they mention is hardware monitors reporting nonsense numbers because they don't know about a particular bit of hardware. On PC there's a huge amount of variation between parts, so the developers of monitoring software need information to improve it.
HWiNFO, for certain, will take reports generated by the application to get your sensors correct, and you can submit them on the forum. When you see "Enhanced monitoring of..." in the changelog for a particular model, it's likely someone supplied the information needed.
You do understand that mobile and desktop parts are not on the same level of performance even if branded similarly.
A GTX 970 is 4 times more powerful than a 960M; even a regular GTX 960 is 3 times more powerful than a 960M.
Also the i5-2500k is 2 times more powerful than your i5-5200u.
Mobile parts do not compare to desktop counterparts, ever. Reference
In this case, since the i7 is a low-wattage version (because of the U), it is outperformed by the i5. Here's a comparison of the two processors.
And since the 970m outperforms the 960m by a factor of 3, I would go with the first laptop.
Will it increase performance? Certainly. Plex transcoding is typically bottlenecked by CPU performance, so increasing it will help. Info on CPU power needed for transcoding: https://support.plex.tv/hc/en-us/articles/201774043-What-kind-of-CPU-do-I-need-for-my-Server-computer-
Will it mean more streams? Really depends on how much you can overclock with stability. Your Q6600 does just under 3000 passmarks, stock, and to get to two 1080p transcodes, you'll want to have about 4000 passmarks. That's a pretty significant overclock but not impossible for a Q6600. I think you'd need to hit about 3.1 GHz or faster.
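Rough scaling only: PassMark tends to move roughly linearly with core clock on the same chip, so assuming ~2975 points at the stock 2.4 GHz (my approximation), the target clock works out to:

```bash
# Clock needed for ~4000 passmarks, assuming linear scaling from 2.4 GHz / ~2975 points.
awk 'BEGIN { printf "~%.1f GHz\n", 2.4 * 4000 / 2975 }'   # -> ~3.2 GHz
```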
Nah, the pentium does have better single core performance.
The difference is meh, but he could save money getting that and possibly a 280x or something.
This is the same company that claims in their ads that the computers that they were building in 2004 are still outperforming what you can buy for $500 today in any Walmart out there.
I can but stew in my silent rage every time I hear the claim. There was not a single consumer CPU on the planet in 2004 that would even hold a candle to something as basic as a Celeron G1820 that you can pick up for ~$45 today. Even the top of the line Athlon 64 FX-55 from 2004 looks like a graphing calculator vs the bargain Celeron of choice today. To say nothing of the crap performance you're going to get out of the RAM, Storage, and Buses of the age. Even the top of the line X850 XT graphics card from 2004 doesn't put up much of a fight against the integrated graphics in say, this Core i3 laptop with Intel HD4400 graphics which is a hair short of $500.