What was once old is new(er? ish?) again.

Message boards : Number crunching : What was once old is new(er? ish?) again.
juan BFP Special Project $75 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 6948
Credit: 394,920,101
RAC: 144,347
Panama
Message 1921511 - Posted: 27 Feb 2018, 3:48:00 UTC
Last modified: 27 Feb 2018, 3:59:17 UTC

What you need to keep an eye on is the GPU usage; EVGA Precision is good for following that. If it's too low, start another WU. It should stay above 90% to be sure you are using your GPU close to its max performance.

The sweet spot with the SoG builds on the 1080Ti is 2 or even 3 WUs at a time on normal hosts. I don't know what is best on the Atom.

BTW, if you wish to try 3, just use <ngpus>0.33</ngpus> and change project_max_concurrent to 3.

Remember each GPU WU uses an entire CPU core/thread.
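For reference, the same setup can be expressed in BOINC's documented app_config.xml syntax (the client-side equivalent of the <ngpus> tag in an anonymous-platform app_info.xml). A minimal sketch for 3 WUs at a time; the app name setiathome_v8 is an assumption, so match it to whatever apps your client actually reports:

```xml
<app_config>
  <!-- cap this project's total concurrent tasks (GPU + CPU) at 3 -->
  <project_max_concurrent>3</project_max_concurrent>
  <app>
    <name>setiathome_v8</name>       <!-- assumed app name -->
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>    <!-- a third of a GPU per task = 3 per card -->
      <cpu_usage>1.0</cpu_usage>     <!-- each GPU task reserves a full core -->
    </gpu_versions>
  </app>
</app_config>
```

After editing, use Options → Read config files (or restart BOINC) for the change to take effect.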
ID: 1921511
Profile Stargate (S.A.)
Volunteer tester
Joined: 4 Mar 10
Posts: 1504
Credit: 475,643
RAC: 1,266
Australia
Message 1921515 - Posted: 27 Feb 2018, 5:20:54 UTC
Last modified: 27 Feb 2018, 5:21:27 UTC

Wish I had the knowledge of all these files and scripts I see on here and other threads like you guys have. It's very interesting.

Steve
ID: 1921515
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921535 - Posted: 27 Feb 2018, 9:07:48 UTC

Sorry about not being able to contribute to Al's slowpoke machine today. I've been busy trying to get Numbskull back online. Parts didn't arrive in the mail till 5 PM (grrrr). Then I had to finish putting the new 1080Ti Hydro Copper into the rebuilt loop. That went fairly easily, though designing the loop layout took some time. The main thing was to get a drain valve into the system and replace the crappy kit PVC tubing, which leached plasticizers and fouled the CPU block. Put in Primochill Advanced LRT tubing.

The main issue was that I also replaced the CPU with a newer-build-week chip that doesn't have the segmentation violation bug. That is what I have been fighting tonight. I just couldn't get it to run the memory stable at the old values. There's been lots of discussion that the IMC in the chip also obeys the silicon lottery rule. Looks like the new chip doesn't have as good an IMC as the old chip. Had to drop from 3333 MHz back down to 3200 MHz. But the plus is that I can now run the chip at 3950 MHz, where before I could only get BOINC stable at 3900 MHz.

I wondered how I would fare with CPU temps since I added the Hydro Copper card to the loop. Both are on a single 360mm radiator. Looks like I raised the temps by a couple of degrees. Now at 64°C, where I was at 58-62°C when it was just the CPU in the loop. The 1080Ti Hydro Copper is at 32°C and the water loop temp is at 31°C.

Al, looks like Juan has given you good advice. I really have nothing more to contribute.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921535
juan BFP Special Project $75 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 6948
Credit: 394,920,101
RAC: 144,347
Panama
Message 1921543 - Posted: 27 Feb 2018, 11:55:21 UTC - in response to Message 1921515.  
Last modified: 27 Feb 2018, 12:09:28 UTC

> Wish I had the knowledge of all these files and scripts I see on here and other threads like you guys have. It's very interesting.

Believe me, all the info I posted here I learned reading these forums and by the old trial & error method.

@Keith
If you can, could you check his 1080Ti crunching times? I still believe they are high for this GPU. I don't have a 1080Ti or Atom here to compare. My point is his times are in the range of 500 secs; my Windows hosts, crunching 1 blc WU at a time with a much less aggressive configuration and only a 1070 GPU, are in the range of 350 secs. Maybe it's just the Atom itself, but it seems high AFAIK.

@Al
Could you post your GPU usage with this configuration?
ID: 1921543
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921551 - Posted: 28 Feb 2018, 2:50:47 UTC - in response to Message 1921543.  

I actually think the issue is the low-and-slow Atom processor. It must be breathless all the time. I don't think it can keep up with the demand of the 1080Ti in shoveling data into and out of the card.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921551
juan BFP Special Project $75 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 6948
Credit: 394,920,101
RAC: 144,347
Panama
Message 1921555 - Posted: 28 Feb 2018, 2:59:13 UTC - in response to Message 1921551.  
Last modified: 28 Feb 2018, 3:34:41 UTC

> I actually think the issue is the low-and-slow Atom processor. It must be breathless all the time. I don't think it can keep up with the demand of the 1080Ti in shoveling data into and out of the card.

If that's true, and thinking only about RAC, maybe his best move is to switch the 1080Ti to a fast host like his 7167755 and put the 670 on the Atom, or do something similar with another less powerful GPU.
I'm still wondering about the GPU usage; that could be the final clue to the puzzle.
ID: 1921555
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921560 - Posted: 28 Feb 2018, 3:06:21 UTC - in response to Message 1921555.  

You could look at various hosts and try to find a four-core Intel processor running at 1.6 GHz, same as the Atom, and somebody else with a 1080 or better card, and compare run_times. I think they will have similar times. I see posts all the time in gaming circles that you need a fast processor to feed a high-powered graphics card, or the common refrain is that the system is CPU "bottlenecked" and you won't get anywhere near the performance the card is capable of.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921560
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921664 - Posted: 28 Feb 2018, 15:10:16 UTC
Last modified: 28 Feb 2018, 15:12:25 UTC

I finally got the setting changed this morning to run 2 GPU tasks at once. The CPU utilization went from averaging in the 40% range to 65-70% now, and I took a look at the times for the tasks that were stacking up due to the outage yesterday; they seemed to be in the 9-11ish minute range, usually closer to the 11 side. Not sure where in Precision to see the GPU utilization/usage, but when BOINC restarted, it went from running at 405 MHz to 5022 MHz, so it sure seems to be running flat out. It'll be interesting to see what this puts out longer term, but if it turns out to be sub-par, I do have a couple other boards with a wee bit more oomph that I could toss it into. I suppose the best way to know is to compare it to other 1080Ti Hydro cards in more generously CPU'd systems?

Keith, sounds like you had some fun yesterday. Glad it's up and running under water now; you should see some good production out of that rig! And you have some seriously nice temps there too. Don't you just love running under water? If only it were easy and cheap, everyone would do it! lol

ID: 1921664
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4555
Credit: 267,544,496
RAC: 393,345
United States
Message 1921668 - Posted: 28 Feb 2018, 15:20:09 UTC - in response to Message 1921664.  

Install Ray's SIV64X to see GPU utilization and/or CPU usage, temps, etc. I only use Precision X to control the fans and memory speeds.

You can google SIV64X.
ID: 1921668
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921681 - Posted: 28 Feb 2018, 16:08:10 UTC - in response to Message 1921668.  

K, thanks Z, I'll grab it today.

ID: 1921681
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921687 - Posted: 28 Feb 2018, 16:29:05 UTC - in response to Message 1921664.  

Al, actually you aren't running flat out. 5005 MHz is only the P2 memory speed inflicted by Nvidia when running a compute load. The default memory clock speed is 5500 MHz, or an effective 11 GHz.

There is a way I found to get Nvidia cards to run at their correct P0 memory speeds with a compute load. I posted a how-to in the "Solution to achieve P0 power state for Nvidia compute loads in Windows" thread in the GPUUG forum.

Yes, it is very nice to run under water. But expensive. You think you can get into custom cooling with one of the cheap kits, but in the end you spend a lot more getting the loop to be more conventional and usable than the bare-bones kits provide. It has been a good learning experience. I should be more comfortable in the future with any new custom cooling projects. At least I now know the value of properly cleaning a radiator and using good-quality tubing, and to expect fittings to be 25%-50% of the total project cost.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921687
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921689 - Posted: 28 Feb 2018, 16:29:58 UTC

Just ran SIV, and it says that my memory utilization is between 40-60% on average, and that my GPU is running 35-50% loaded with 2 tasks running. I tried changing the ngpus line from .5 to .33 to see what would happen if I tried running 3 tasks at once, but after re-reading the config, and even restarting BOINC, it still only had 2 tasks running. When I went from 1 to 2 and re-read the config files, the 2nd task started immediately, so I'm not sure what's going on with that. I'll let it run today and glance at it now and then to see how things are going, but it seems at least to be running smoothly.

ID: 1921689
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921691 - Posted: 28 Feb 2018, 16:33:09 UTC - in response to Message 1921689.  

Remember, every GPU task needs a full core to support it. So if you were running 2 CPU tasks when you changed the GPU usage setting to do 3 tasks, then you simply ran out of cores. You only have 4 cores. One of the CPU tasks will have to finish before the third GPU task will start.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921691
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921692 - Posted: 28 Feb 2018, 16:34:32 UTC - in response to Message 1921687.  

Thanks for that link, Keith. I will check it out and see about getting it into the P0 state, which will hopefully make this rig perform to its fullest potential. And yeah, I started with the idea of putting my CAD system under water, but put that one on hiatus for the time being, as I didn't feel like dumping any more money into it at the time; it was already ridiculously expensive as it was. Maybe sometime down the road. From what Mark was saying, he seemed to be pretty impressed with that CPU block he mentioned in his other thread, so that might eventually be a place to start. But for right now, I have enough on my computer plate to keep me busy for a month or 2... ;-)

ID: 1921692
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921694 - Posted: 28 Feb 2018, 16:37:21 UTC - in response to Message 1921691.  

So it will pretty much be limited to 2, as HT doesn't count in this scenario? If not, good to know; I'll switch it back. I was just trying to see what the CPU load would be with 3 of the 4 cores loaded up, and to try to get the GPU utilization up to between 75-85 percent. Running 2 tasks doesn't seem to be able to accomplish that, at least so far, but I'll keep an eye on it with SIV.

ID: 1921694
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921697 - Posted: 28 Feb 2018, 16:42:21 UTC - in response to Message 1921692.  

If you can keep the card cool enough, it will overclock itself using the GPU Boost 3.0 algorithm in its firmware. The NvidiaInspector tool is also good for adding additional memory and core overclocks to the card. It isn't as full-featured as Precision XOC; it doesn't have any control or monitoring of fan speed, for example. I use SIV for that, Mark uses Precision X. Whatever works for you is best.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921697
juan BFP Special Project $75 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 6948
Credit: 394,920,101
RAC: 144,347
Panama
Message 1921699 - Posted: 28 Feb 2018, 16:44:54 UTC - in response to Message 1921694.  

> So it will pretty much be limited to 2, as HT doesn't count in this scenario? If not, good to know; I'll switch it back. I was just trying to see what the CPU load would be with 3 of the 4 cores loaded up, and to try to get the GPU utilization up to between 75-85 percent. Running 2 tasks doesn't seem to be able to accomplish that, at least so far, but I'll keep an eye on it with SIV.

No, your Atom is capable of running 4 threads. If you want to run 3 WUs you need to change:

<project_max_concurrent>3</project_max_concurrent>

This number limits the total number of the project's WUs (GPU+CPU) crunched at a time; if set to 2, only 2 WUs will be crunched at a time no matter what you put in the <ngpus> line.

GenuineIntel
Intel(R) Atom(TM) CPU 330 @ 1.60GHz [Family 6 Model 28 Stepping 2]
(4 processors)
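
Juan's point can be seen in a standard app_config.xml sketch (the app name is an assumption): the global cap overrides the per-GPU share, which is why a 0.33 GPU share alone would not start a third task.

```xml
<app_config>
  <!-- global cap: with this left at 2, only 2 tasks run,
       regardless of the per-GPU share below -->
  <project_max_concurrent>2</project_max_concurrent>
  <app>
    <name>setiathome_v8</name>      <!-- assumed app name -->
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>   <!-- would otherwise allow 3 tasks per GPU -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

Raising <project_max_concurrent> to 3 and re-reading the config files lifts the cap.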

ID: 1921699
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 4918
Credit: 316,271,846
RAC: 721,304
United States
Message 1921701 - Posted: 28 Feb 2018, 16:45:41 UTC - in response to Message 1921694.  

No, GPU tasks take priority over CPU core usage. If you set the card to run 4 concurrent tasks, they would commandeer all 4 available cores of the CPU. No CPU tasks would run, because all cores are in use by the card. A physical core or an HT core is all the same with respect to what a GPU task needs to run.
Seti@Home classic workunits:20,676 CPU time:74,226 hours
ID: 1921701
Al Special Project $250 donor
Joined: 3 Apr 99
Posts: 1635
Credit: 361,094,357
RAC: 288,498
United States
Message 1921704 - Posted: 28 Feb 2018, 16:59:03 UTC - in response to Message 1921701.  

That was it, thank you! Now I have 3 running, and SIV says that the CPU is running at about 80%, and the GPU bounces between 35-80%, though in the short time I was watching it was usually in the lower end of that range. It'll be interesting to see how it performs over the next few days, as it has more than enuf tasks to grind thru...

ID: 1921704
Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 4555
Credit: 267,544,496
RAC: 393,345
United States
Message 1921706 - Posted: 28 Feb 2018, 17:09:54 UTC - in response to Message 1921704.  

Ah... reminds me of the early days when we were all first figuring out all these tools, lol...

Good refresher course. ElricM, are you watching this??
ID: 1921706



 
©2018 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.