Equipment safety

Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1406749 - Posted: 23 Aug 2013, 4:51:16 UTC
Last modified: 23 Aug 2013, 4:52:32 UTC

Hello,

I just started a few minutes ago and am running my CPU at 100%; the GPU looks like it's at >70%.

I'm running an i7-2600K @ stock clock w/ H100i cooling and a GTX 550 Ti vid card, in a Fractal Design case with 2 intake fans and the H100i fans for exhaust.

Looks like after 16 minutes, CPU temps are at 60°C and the GPU temp is at 71°C.

Should I throttle down CPU and GPU usage to keep my rig from blowing up? I would like to keep this rig (minus the GPU) for as long as I can.

Thanks.
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22158
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1406751 - Posted: 23 Aug 2013, 5:03:37 UTC

Your rig is comfortably inside its temperature limits, so don't bother with throttling. Mine have been sitting at those sorts of temperatures for weeks; indeed, one of my GPUs has been at that sort of temperature for well over a year with no harm.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407107 - Posted: 23 Aug 2013, 22:29:41 UTC

Thanks for the quick reply. Instead of creating another thread, I figured I'd ask here...

I have 19 SETI@home tasks with "Ready to report" status. Will they upload by themselves?
I have 9 tasks currently in "Running" status and a lot in "Ready to start".

Is any more input/action needed from me?

Thanks again.
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1407116 - Posted: 23 Aug 2013, 23:37:27 UTC - in response to Message 1407107.  

I have 19 SETI@home tasks with "Ready to report" status. Will they upload by themselves?

These have already been uploaded. The upload & report system has two stages.
Uploading just moves data from a hard drive on your computer, via the internet, to a hard drive on the server at Seti.
Reporting uses the database, and it's been found that reporting e.g. 25 tasks takes about as much overhead as reporting 1 task. By overhead we mean server CPU, memory, and disk usage. So it's better to report multiple tasks at once.

Which is what BOINC tries to do. Completed work is reported at the first of the following triggers (a sketch of this decision logic follows the list):

1) 24 hours before deadline.
2) "Connect every X" days before deadline.
3) 24 hours after task completion.
4) Immediately, if the upload completes later than the moment given by 1, 2, or 3.
5) On a trickle-up message.
6) On a trickle-down request.
7) On a server-scheduled connection (used, though I'm not certain by which project).
8) On a request for new work.
9) When the user pushes the Update button.
10) On a request from an account manager.
11) Immediately for every task, if "No new tasks" is set.
12) Immediately, if a CPU or network time-of-day override starts in the next 30 minutes. (BOINC 7.0)
13) When the minimum work buffer is reached. (BOINC 7.0)
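
To picture how the client picks the earliest of those triggers, here is a minimal sketch in Python (my own illustration, not actual BOINC source code; all names are made up):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CompletedTask:
    deadline: datetime       # report deadline for the task
    completed_at: datetime   # when its upload finished

def should_report(task: CompletedTask, now: datetime,
                  connect_every_x_days: float = 0.5,
                  no_new_tasks: bool = False) -> bool:
    """Illustrates triggers 1-3 and 11 from the list above only.

    The real client also handles trickles, account managers,
    server-scheduled connections, work requests, and so on.
    """
    if no_new_tasks:                                           # trigger 11
        return True
    report_at = min(
        task.deadline - timedelta(hours=24),                   # trigger 1
        task.deadline - timedelta(days=connect_every_x_days),  # trigger 2
        task.completed_at + timedelta(hours=24),               # trigger 3
    )
    return now >= report_at
```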

I have 9 tasks currently in "Running" status and a lot in "Ready to start".

Is any more input/action needed from me?

No, BOINC is essentially set-and-forget software. As long as you run the stock software (no third-party science applications) and have set BOINC to run based on preferences or always, it'll do that (as long as the project has work and no server trouble). It doesn't need any manual input; it actually works better without human intervention. :-)
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407169 - Posted: 24 Aug 2013, 4:39:08 UTC - in response to Message 1407116.  

Awesome.

One last question... Maybe :)

A multi-processor rig (a used/auction server) without a video card,

or

a desktop rig (i5) with a vid card (GTX 660)?
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1407171 - Posted: 24 Aug 2013, 5:02:39 UTC - in response to Message 1407169.  

Desktop rig with vid card. GPUs are far more efficient at SETI work than any CPU out there.
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407267 - Posted: 24 Aug 2013, 12:24:51 UTC

Interesting. I guess I'm investing in a few then. Thanks!
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1407393 - Posted: 24 Aug 2013, 18:56:12 UTC - in response to Message 1407171.  

GPUs are far more efficient at SETI work than any CPU out there.

LOL, that's not what Eric said:

29 21/08/2013 1:38:30 PM NVIDIA GPU 0: GeForce GTX 780 (driver version 32641, CUDA version 5050, compute capability 3.5, 3072MB, 4698 GFLOPS peak)

Yes, that number is based on a lot of bad assumptions, yet many GPU-only projects or apps use it for calculating credit... It assumes 1) that there is no GPU memory latency; 2) that the GPU is capable of performing every operation in one cycle; 3) that every operation necessary for the program can be parallelized to the maximum capability of the GPU; and 4) that the GPU doesn't stall while data is being transferred over the bus from CPU memory to GPU memory. All of those assumptions are wrong. In reality the machine above is running at 245 bGFLOP/s (benchmark GFLOPs, the unit the BOINC client measures), which is 5.2% of "peak". That works out to about 86 real GFLOP/s, which is really 1.8% of "peak".

When talking about efficiency, I was talking about the benchmark number, which is based on the average Windows machine doing 1 real FLOP for every 2.85 benchmark FLOPs. So when I said 2.5% efficient, I really meant 0.88% efficient.
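
If you want to check that arithmetic, it's just a few divisions (a quick sketch using only the figures quoted above):

```python
# Reproducing the arithmetic above; all figures come from the quote.
peak_gflops = 4698.0      # "peak" reported for the GTX 780
bench_gflops = 245.0      # benchmark GFLOP/s the BOINC client measures
bench_per_real = 2.85     # avg. Windows box: 2.85 benchmark FLOPs per real FLOP

real_gflops = bench_gflops / bench_per_real                     # ~86 real GFLOP/s

print(f"{bench_gflops / peak_gflops:.1%} of peak (benchmark)")  # ~5.2%
print(f"{real_gflops:.0f} real GFLOP/s")                        # ~86
print(f"{real_gflops / peak_gflops:.1%} of peak (real)")        # ~1.8%
print(f"{0.025 / bench_per_real:.2%}")    # 2.5% benchmark -> ~0.88% real
```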

;-)
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1407466 - Posted: 24 Aug 2013, 22:25:57 UTC

Look at the top machines.
Also realize people have done the math on credit per watt.
GPU wins.


Janice
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1407469 - Posted: 24 Aug 2013, 22:54:34 UTC - in response to Message 1407393.  

GPUs are far more efficient at SETI work than any CPU out there.

LOL, that's not what Eric said:

29 21/08/2013 1:38:30 PM NVIDIA GPU 0: GeForce GTX 780 (driver version 32641, CUDA version 5050, compute capability 3.5, 3072MB, 4698 GFLOPS peak)

Yes, that number is based on a lot of bad assumptions, yet many GPU-only projects or apps use it for calculating credit... It assumes 1) that there is no GPU memory latency; 2) that the GPU is capable of performing every operation in one cycle; 3) that every operation necessary for the program can be parallelized to the maximum capability of the GPU; and 4) that the GPU doesn't stall while data is being transferred over the bus from CPU memory to GPU memory. All of those assumptions are wrong. In reality the machine above is running at 245 bGFLOP/s (benchmark GFLOPs, the unit the BOINC client measures), which is 5.2% of "peak". That works out to about 86 real GFLOP/s, which is really 1.8% of "peak".

When talking about efficiency, I was talking about the benchmark number, which is based on the average Windows machine doing 1 real FLOP for every 2.85 benchmark FLOPs. So when I said 2.5% efficient, I really meant 0.88% efficient.

;-)


Actually, nowhere in there did Eric say that GPUs weren't more efficient than CPUs. All he said was that GPUs aren't as efficient as the manufacturer's GFLOPS claims suggest.
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407481 - Posted: 25 Aug 2013, 0:23:02 UTC

Seems like my 550 Ti may have died. Only a few hours at 70°C.

Look how many points I've accumulated. That's all I've done, and my 550 Ti gave up.

I'll take a closer look tomorrow, perhaps; too tired (or lazy) from work.
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1407489 - Posted: 25 Aug 2013, 2:39:49 UTC - in response to Message 1407469.  
Last modified: 25 Aug 2013, 2:39:55 UTC

Actually, nowhere in there did Eric say that GPUs weren't more efficient than CPUs. All he said was that GPUs aren't as efficient as the manufacturer's GFLOPS claims suggest.

Earlier:
For many projects there seems to be no relationship between credit granted and work done, especially GPU-only projects. CUDA5 GPUs are about 2.5% efficient when running SETI@home, and that's actually not that bad for GPU code. If we assumed GPUs were 100% efficient and had a GPU-only app, we could boost your credit rate by 40X. But I prefer that the credits mean something.


:-)
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1407495 - Posted: 25 Aug 2013, 3:07:11 UTC - in response to Message 1407489.  
Last modified: 25 Aug 2013, 3:12:17 UTC

Actually, nowhere in there did Eric say that GPUs weren't more efficient than CPUs. All he said was that GPUs aren't as efficient as the manufacturer's GFLOPS claims suggest.

Earlier:
For many projects there seems to be no relationship between credit granted and work done, especially GPU-only projects. CUDA5 GPUs are about 2.5% efficient when running SETI@home, and that's actually not that bad for GPU code. If we assumed GPUs were 100% efficient and had a GPU-only app, we could boost your credit rate by 40X. But I prefer that the credits mean something.


:-)


Again, all that says is that the hardware is only about 2.5% as efficient as the GFLOPS rating suggests, not that they are only 2.5% efficient in total. A GPU at 2.5% efficiency is still faster than any CPU out there due to the way GPUs are able to handle math.

If Eric were claiming what you suggest, and if your argument that GPUs aren't as efficient as CPUs were true, then the entire top-100 list would be dominated by quad-socket servers with 16-core beasts in them.

But that's not what Eric was claiming.
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1407577 - Posted: 25 Aug 2013, 10:33:02 UTC - in response to Message 1407495.  
Last modified: 25 Aug 2013, 10:34:21 UTC

Again, all that says is that the hardware is only about 2.5% as efficient as the GFLOPS rating suggests, not that they are only 2.5% efficient in total. A GPU at 2.5% efficiency is still faster than any CPU out there due to the way GPUs are able to handle math.

If Eric were claiming what you suggest, and if your argument that GPUs aren't as efficient as CPUs were true, then the entire top-100 list would be dominated by quad-socket servers with 16-core beasts in them.

But that's not what Eric was claiming.

Even at 2.5% efficiency, GPUs would be better than CPUs: they're faster, they have a crap-load more cores to hack at the data simultaneously, and they have way faster memory. But only if the GPUs were even better, at 100% efficiency, would the credit increase 40 times. At the moment the code used to port the science application to the GPU is not efficient; it does not use the GPU's (almost unique) hardware to the fullest, and thus is not as efficient as the CPU, which is used to its full capability.

If you had a CPU with 1536 cores, it would be as good as any middle or higher-end GPU if all cores were used simultaneously. It's just that a CPU has 1, 2, 3, 4, maybe 6, maybe 8, at most maybe 12 cores. Hyper-threading excluded. And they don't use these cores through a multi-threaded application; perhaps if Seti had an mt application, there'd be a better comparison.

So right now it's always 1 core per task on the CPU versus all cores per task on the GPU, which is why the GPUs come out so much better. That doesn't mean the actual work they do is more efficient in a like-for-like comparison: in this case it's a CPU at 100% efficiency per single core versus a GPU at 2.5% efficiency across e.g. 1536 cores. That second comparison has nothing to do with the GFLOP value the manufacturer gave its GPU.
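
Or, as back-of-the-envelope arithmetic (my own loose illustration; GPU cores and CPU cores aren't really comparable units):

```python
# Back-of-the-envelope version of the comparison above ("core-equivalents"
# is a loose unit, since GPU cores != CPU cores).
gpu_cores = 1536          # e.g. a middle/higher-end GPU
gpu_efficiency = 0.025    # the 2.5% figure from above
cpu_efficiency = 1.0      # one CPU core used to its full capability

gpu_core_equivalents = gpu_cores * gpu_efficiency    # ~38.4
print(gpu_core_equivalents / (1 * cpu_efficiency))   # GPU task ~38x a CPU task
```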
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407612 - Posted: 25 Aug 2013, 14:41:19 UTC

Not that any of you seem to have cared, but reseating the vid card seems to have revived it. I need to do some spring cleaning on my rig anyway...

For what it's worth, y'all's arguing is giving me an idea of how everything works.

If what y'all are claiming is true, has anyone been able to get a PS3 to work? I know some entities are using PS3s to crunch numbers for them.
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1407640 - Posted: 25 Aug 2013, 17:10:34 UTC - in response to Message 1407577.  
Last modified: 25 Aug 2013, 17:25:42 UTC

If you wanna be pedantic about it:

Even at 2.5% efficiency, GPUs would be better than CPUs: they're faster, they have a crap-load more cores to hack at the data simultaneously, and they have way faster memory.


GPUs typically run at no more than 1 GHz, whereas CPUs run anywhere from 1.6 to 4+ GHz, but I know you didn't mean literal clock speed when saying GPUs are faster.

GPUs may have more cores, but they're not directly comparable to CPU cores. They are actually highly specialized units that are very efficient at repetitive routines and functions.

But only if the GPUs were even better, at 100% efficiency, would the credit increase 40 times. At the moment the code used to port the science application to the GPU is not efficient; it does not use the GPU's (almost unique) hardware to the fullest, and thus is not as efficient as the CPU, which is used to its full capability.


Ok, obviously I have to clarify myself. When I said that GPUs are far more efficient, I meant that they were far more capable of doing the work when compared to CPUs, not a literal efficiency rating.

If you had a CPU with 1536 cores, it would be as good as any middle or higher-end GPU if all cores were used simultaneously. It's just that a CPU has 1, 2, 3, 4, maybe 6, maybe 8, at most maybe 12 cores.


Not even close. The cores of a CPU are designed to run a wide variety of code whereas the cores of a GPU, as I said previously, are highly specialized math engines - imagine an FPU on crack. 1536 CPU cores would still not be able to match the performance of 1536 GPU cores even at a 2.5% efficiency rating.

That doesn't mean the actual work they do is more efficient in a like-for-like comparison: in this case it's a CPU at 100% efficiency per single core versus a GPU at 2.5% efficiency across e.g. 1536 cores. That second comparison has nothing to do with the GFLOP value the manufacturer gave its GPU.


Actually, it does have to do with the manufacturer-given GPU GFLOP value. The manufacturer's GFLOP value is the maximum theoretical performance of the card assuming a 100% efficiency rating. That no card will ever come close to 100% efficiency, because of a multitude of other factors in card design, is evident from the fact that many GPU applications never actually reach that value. However, even at a relatively low efficiency rating of 2.5%, the highly specialized nature of GPGPU cores still gives them the advantage.
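
For the curious: that theoretical peak is typically computed as cores × clock × 2, counting a fused multiply-add as two FLOPs per core per cycle. A quick sketch (the ~1.02 GHz clock is inferred from the 4698 GFLOPS figure quoted earlier in the thread, so that card was presumably factory overclocked):

```python
# Theoretical single-precision peak: cores * clock (GHz) * FLOPs per cycle.
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int = 2) -> float:
    return cores * clock_ghz * flops_per_cycle

# GTX 780: 2304 CUDA cores; ~1.02 GHz reproduces the 4698 GFLOPS quoted above.
print(peak_gflops(2304, 1.0195))   # -> ~4697.9
```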

Try not to be so literal next time. There really was no need to confuse the newbies asking for help in Q&A with pedantic details.
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1407642 - Posted: 25 Aug 2013, 17:13:14 UTC - in response to Message 1407612.  

If what y'all are claiming is true, has anyone been able to get a PS3 to work? I know some entities are using PS3s to crunch numbers for them.


Yes, that has been tried. The caveat is that you must load a compatible OS onto the PS3 to crunch, and Sony has disabled the ability to run alternative OSes on the PS3 with one of their later firmware upgrades.

So unless you have a PS3 that never got those more recent updates, you can't crunch on it anymore. Before the firmware upgrade the PS3 was a very powerful cruncher: people would set up entire farms of PS3s and get some pretty impressive RAC - on Folding@home. The SETI@home/BOINC client wasn't mature enough to get the same kind of performance.
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1407679 - Posted: 25 Aug 2013, 20:00:52 UTC - in response to Message 1407612.  

SETI@home will give your graphics card and CPU a pretty good workout.
Your rig should be able to handle it, assuming your power supply is up to the job.

You WILL be burning more electricity than you are accustomed to. But you can also have some fun doing it. And yes, keeping things clean, especially on air-cooled rigs, is a great idea. But 70°C should not be an issue; 80°C is a good time to clean things up... and at 90°C, PANIC.
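
If you want to turn that rule of thumb into something you can run, here's a tiny sketch (the thresholds are just my personal guideline; wire in whatever temperature your monitoring tool reports):

```python
# The rule of thumb above, as code; thresholds are a personal
# guideline, not manufacturer specs.
def gpu_temp_advice(temp_c: float) -> str:
    if temp_c >= 90:
        return "PANIC: stop crunching and sort out the cooling"
    if temp_c >= 80:
        return "good time to clean things up"
    return "should not be an issue"

print(gpu_temp_advice(71.0))   # the 550 Ti earlier in the thread -> fine
```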

Note: this is my personal rule of thumb and does not represent the views of the SETI staff, Nvidia, or any other manufacturer. Advice is guaranteed to be worth what you paid for it.
Janice
Rei Umali
Volunteer tester

Joined: 23 Aug 13
Posts: 7
Credit: 163,008
RAC: 0
United States
Message 1407775 - Posted: 26 Aug 2013, 2:25:47 UTC - in response to Message 1407679.  

My processor is liquid-cooled with the H100i; the vid card is air-cooled.

No idea why I had to reseat the vid card, other than dust accumulating.

The power supply should be enough; I believe it's 650 W.
