CPU Power Usage Varies Significantly With RAM Frequency
Cruncher-American (Joined: 25 Mar 02, Posts: 1513, Credit: 370,893,186, RAC: 340)
One of my crunchers has an i7-4820K for its CPU. The RAM in this machine (8 GB, 4x2 GB) runs by default at 1600 MHz when I start up the machine, but by activating XMP in the BIOS it can run at 2133 MHz. No big deal, right? The interesting thing I found is that when running MultiBeam at full speed (as recently), using 7 of 8 HT cores for CPU tasks plus 6 GPU MB tasks (2 GTX 780s at 3 tasks per GPU), Core Temp showed the CPU using 61-63 watts at 1600 MHz but 71-73 watts at 2133 MHz, 10 watts more. Can someone please explain to me why that should be so?
Fawkesguy (Joined: 8 Jan 01, Posts: 108, Credit: 188,578,766, RAC: 0)
I'm guessing the XMP profile increases voltage for the RAM, which is usually required for higher frequencies.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14650, Credit: 200,643,578, RAC: 874)
It's hard work pushing all those electrons around so fast?
OzzFan (Joined: 9 Apr 02, Posts: 15691, Credit: 84,761,841, RAC: 28)
Fawkesguy is correct. The standard voltage for DDR3 RAM is 1.5 V, with 1.65 V for faster XMP speeds. This increase in voltage results in higher wattage use.
Josef W. Segur (Joined: 30 Oct 99, Posts: 4504, Credit: 1,414,761, RAC: 0)
> It's hard work pushing all those electrons around so fast?

True. CMOS logic uses essentially zero power while holding a high or low state; the transitions use practically all the power. So a 33% speed increase would increase the power used by 33%, even if a voltage boost were not needed to support the higher speed.
Joe
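In round numbers, that can be checked against the simple CMOS dynamic-power model, P ≈ αCV²f. The sketch below is only an estimate, assuming the memory-related power is purely dynamic switching power and using the 1.5 V / 1.65 V DDR3 figures quoted above:

```c
#include <stdio.h>

/* Rough scaling estimate from the CMOS dynamic-power model P ~ a*C*V^2*f.
 * Assumption: the memory-related power is purely dynamic switching power,
 * and XMP raises the DDR3 voltage from 1.50 V to 1.65 V as noted above. */
int main(void)
{
    const double f_base = 1600.0, f_xmp = 2133.0; /* MHz */
    const double v_base = 1.50,   v_xmp = 1.65;   /* volts */

    double freq_only = f_xmp / f_base;                        /* ~1.33x */
    double with_volt = freq_only * (v_xmp / v_base)
                                 * (v_xmp / v_base);          /* ~1.61x */

    printf("frequency-only scaling: %.2fx\n", freq_only);
    printf("frequency + voltage:    %.2fx\n", with_volt);
    return 0;
}
```

So the V² term from the voltage bump roughly doubles the effect of the frequency increase alone.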
HAL9000 (Joined: 11 Sep 99, Posts: 6534, Credit: 196,805,888, RAC: 57)
> I'm guessing the XMP profile increases voltage for the RAM, which is usually required for higher frequencies.

Well said. I use some Crucial low-voltage 1.35 V DIMMs in my i5-4670K boxen. Their XMP profiles include 1600 MHz at 1.5 V in addition to the OC values. As a personal note, I would like it if memory manufacturers would publish the XMP info for their memory on their sites.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Cruncher-American (Joined: 25 Mar 02, Posts: 1513, Credit: 370,893,186, RAC: 340)
> It's hard work pushing all those electrons around so fast?

So that says the memory causes the CPU in my case to use roughly 20 watts @ 1600 MHz and 30 watts @ 2133 MHz. But the memory is powered by the motherboard, not the CPU, so why does the CPU use all that power? I would have thought the computations would account for almost all the power the CPU uses. Does this mean all the SETI grinding uses only about 40 watts?
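For what it's worth, the same model can be run backwards from the measured 10 W delta to estimate the memory-related share of CPU power. This is a sketch under the same assumptions as above, not a measurement:

```c
#include <stdio.h>

/* Working backwards: if going from 1600 MHz to 2133 MHz scales the
 * memory-related power by a factor r, and the observed delta is 10 W,
 * then the memory-related power at 1600 MHz is delta / (r - 1). */
int main(void)
{
    const double delta_w   = 10.0;             /* observed CPU power delta */
    const double freq_only = 2133.0 / 1600.0;  /* ~1.33x */
    const double with_volt = freq_only * (1.65 / 1.50) * (1.65 / 1.50);

    printf("frequency-only model: %.0f W at 1600 MHz\n",
           delta_w / (freq_only - 1.0));       /* ~30 W */
    printf("frequency + voltage:  %.0f W at 1600 MHz\n",
           delta_w / (with_volt - 1.0));       /* ~16 W */
    return 0;
}
```

Either way, a substantial fraction of the package power is plausibly memory controller and I/O rather than computation.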
OzzFan (Joined: 9 Apr 02, Posts: 15691, Credit: 84,761,841, RAC: 28)
> It's hard work pushing all those electrons around so fast?

The memory controller is in the CPU.
Cruncher-American (Joined: 25 Mar 02, Posts: 1513, Credit: 370,893,186, RAC: 340)
> The memory controller is in the CPU.

I knew that; I'm just surprised at how much of the CPU's power usage comes from the memory. It makes moving the controller into the CPU sound like it may have been a bad decision, if it in fact accounts for close to half the power the CPU uses. After all, the whole point is to have cooler-running CPUs, and that much additional power dissipation seems to go against that.
bluestar (Joined: 5 Sep 12, Posts: 7031, Credit: 2,084,789, RAC: 3)
How much heat is supposed to come from the memory modules inside a computer? Does the power these modules are subjected to make any difference, or is it rather about overclocking the main processor, a feature which is carried out by the BIOS (which should properly be regarded as computer firmware)?
I recently upgraded to 16 GB of RAM in two modules, running at 1600 MHz; I assume that figure is the frequency governing the memory's data transfer speed. The modules sit on either side of the big Noctua cooler (possibly an NH-D14), which is an excellent processor cooler. The fan shrouds that could have been mounted above the memory modules make a lot of noise. The first purchase had two fans inside these shrouds; the most recent purchase came with two such shrouds, still unpacked, with three cooling fans inside.
Of course, you are not supposed to mix or combine processor fans and shrouds with a liquid cooling system, which is much more difficult to assemble into a computer. I do have one such cooling system lying around that could be used, but I will not be attempting such a thing alone; I would need assistance from someone else.
BilBg (Joined: 27 May 07, Posts: 3720, Credit: 9,385,827, RAC: 0)
> Makes it sound like it may have been a bad decision to move it into the CPU ...

Do you try to cheat the laws of physics? ;)
If you 'move' the memory controller back to the chipset, you'll also move the heat "problem", with the additional "benefit" of slower RAM transfers and the need for additional cooling of the chipset.
- ALF - "Find out what you don't do well ..... then don't do it!" :)
Cruncher-American (Joined: 25 Mar 02, Posts: 1513, Credit: 370,893,186, RAC: 340)
> Makes it sound like it may have been a bad decision to move it into the CPU ...

I understand that. I was just commenting on the idea of moving all that power dissipation onto the CPU; chipset cooling is a different problem from CPU cooling. Like all real-world decisions, there are trade-offs involved.
OzzFan (Joined: 9 Apr 02, Posts: 15691, Credit: 84,761,841, RAC: 28)
> Makes it sound like it may have been a bad decision to move it into the CPU ...

Indeed there are trade-offs. The trend for a while now, as I'm sure you're aware, is toward miniaturization and performance. Bringing the MCH into the CPU accomplishes that goal, and puts even more onus on cooling. That in turn brings up the topic of die shrinks, which usually let the chip consume less power, and therefore dissipate less heat, while still allowing those 'extra' parts moved into the CPU to retain their performance benefits. Makes the SoC space quite interesting. ;-)
petri33 (Joined: 6 Jun 02, Posts: 1668, Credit: 623,086,772, RAC: 156)
Why not change the whole thing? Could it be that the memory tells the computer what to access/compute next? To program a GPU efficiently one must do just that, right now, in his or her mind. After all, the memory is the slowest link in the chain.
Imagine you were the one running the 100 m, with or without hurdles -- wouldn't it be nice if the marathon runners were already giving you the way? (Instead of stepping aside when politely asked, or after a CPU/OS-predetermined timeout.)
HDD? Finally, forget it!
To overcome Heisenberg: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ivan (Joined: 5 Mar 01, Posts: 783, Credit: 348,560,338, RAC: 223)
> To program a GPU efficiently one must do just that, right now, in his or her mind.

Yes, GPU programming is a whole new mindset -- two talented programmers before me failed to get our analysis software running on an FPS vector processor (TRIUMF, 1984) before I tried and succeeded. Looking today at the theoretical and S@H-reported performance of my Tegra GPU -- 327 GFLOPS against 14.5 GFLOPS (doubled to about 29 GFLOPS since I run two jobs at once, i.e. roughly 9% of peak) -- it appears there is still some way to go in extracting ultimate performance. In my experience, this may need new algorithms. Mind you, the same is true of complex CPU software, as this 2007 report on my current project's software shows -- I doubt programmers or compilers have become significantly more proficient in the interim.
jason_gee (Joined: 24 Nov 06, Posts: 7489, Credit: 91,093,184, RAC: 0)
> To program a GPU efficiently one must do just that, right now, in his or her mind.

Yeah, I see it as a problem of old 'black box' algorithms: approaches that have worked very well in serial and vectorised styles don't have the computational density (compute versus memory complexity) needed to leverage what's there. The typical ratio for GPU applications is currently around 5% of peak overall, with better-optimised and relatively easily optimised applications around 8%. The CUFFT library bumps around 10% at certain sizes, and it is very highly optimised. I believe one answer is rethinking black boxes like the FFT: fusing more into them for increased arithmetic density, data locality, and fewer passes over the data. For example, who uses an FFT by itself anyway? (Not us.) Well, I've been finding attempts at that kind of kernel fusion workable, though application bodies still tend to have a very serialised nature. In the case of MultiBeam, the signal reporting structure relies on an implicit order that has to be observed in any added serialisation steps (for handling the 5% overflow results). That's a more frustrating roadblock than the GPU code itself. [As is the weak multithreading support in certain APIs...]
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
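To illustrate the pass-fusion idea in the simplest terms, here is a CPU-side sketch in plain C; the scale/square stages are hypothetical stand-ins for real signal-processing steps. On a GPU the same transformation merges two kernel launches into one and keeps the intermediate value out of global memory:

```c
#include <stddef.h>

/* Unfused: two passes over the data, with an intermediate buffer.
 * On a GPU this corresponds to two kernel launches, each streaming the
 * whole array through (global) memory. */
void scale_then_square(const float *in, float *tmp, float *out,
                       size_t n, float s)
{
    for (size_t i = 0; i < n; i++) tmp[i] = in[i] * s;
    for (size_t i = 0; i < n; i++) out[i] = tmp[i] * tmp[i];
}

/* Fused: the same arithmetic in one pass, no intermediate buffer.
 * The data is read and written once, so arithmetic density
 * (FLOPs per byte moved) roughly doubles. */
void scale_square_fused(const float *in, float *out, size_t n, float s)
{
    for (size_t i = 0; i < n; i++) {
        float v = in[i] * s;
        out[i] = v * v;
    }
}
```

The same reasoning is behind folding windowing, magnitude, and normalisation into an FFT's last pass: the FFT's memory traffic is already being paid for, so extra arithmetic rides along nearly for free.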