GPU testing on low powered CPU

Message boards : Number crunching : GPU testing on low powered CPU
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1828527 - Posted: 5 Nov 2016, 15:19:00 UTC
Last modified: 5 Nov 2016, 15:19:13 UTC

I recently redid my Celeron J1900 system with a more appropriately sized PSU, since the 550W I was using was a bit much. Plus I went for a new cube case. http://i.imgur.com/Hey7HV6.jpg http://i.imgur.com/F5mytjx.jpg http://i.imgur.com/HzEmS6B.png
While I was at it, I decided to pick up a PCIe x1 to x16 adapter for my GTX 750 Ti FTW and give it a go. So far there don't seem to be any issues running the GPU in the system off of the adapter.

I have started with the x41zi CUDA50 app. I used that app when the 750 was in my dual Xeon X5470 system. Mostly my goal is to see how such a small CPU affects the GPU. Since the GPU isn't a super cruncher, it may turn out to be a good pairing.

When I was running the 750 in my dual Xeon X5470, I believe it may have only run normal AR Arecibo tasks at that time.
I did find a comment in a post stating it was running:
2 Arecibo at once: ~25min

For the past few days the 750 has been running in the Celeron and chewing on a load of GUPPI VLARS.
2 GUPPI VLAR at once: 70-75min
1 GUPPI VLAR at a time: 42-45min
CPU usage for the CUDA app sometimes runs upwards of 15-17%.

I don't have a reference for this system with Arecibo tasks, but I believe that 2-3 times as long for GUPPI VLARs using the CUDA app is about what should be expected?
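A quick arithmetic check of those 2-at-once vs 1-at-a-time figures (only the numbers quoted above, nothing assumed):

```python
# Quick arithmetic check on the GUPPI VLAR numbers above.
pair_minutes = (70, 75)      # wall time for a pair of tasks run 2-up
single_minutes = (42, 45)    # wall time per task run 1-up

# Effective per-task time when running 2 at once:
per_task_2up = [m / 2 for m in pair_minutes]
print(per_task_2up)          # [35.0, 37.5] min per task

# Even the slow end of 2-up beats the fast end of 1-up,
# so 2 at once is the better throughput on this card.
assert max(per_task_2up) < min(single_minutes)
```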

I'm going to run a short batch of CPU GUPPI VLARs for a data point, and then my plan is to run the NV SoG r3420 app for comparison with the CUDA app. Then maybe some tuning parameters.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1828527
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1828544 - Posted: 5 Nov 2016, 15:49:34 UTC - in response to Message 1828527.  

to run the NV SoG r3420 app for comparison with the CUDA app. Then maybe some tuning parameters.

??? r3528 is the current one
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1828544
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1828547 - Posted: 5 Nov 2016, 16:00:40 UTC - in response to Message 1828544.  
Last modified: 5 Nov 2016, 16:04:19 UTC

to run the NV SoG r3420 app for comparison with the CUDA app. Then maybe some tuning parameters.

??? r3528 is current one

I had r3420 handy from the v0.45 Beta installer. Since I was not using the GPU recently I wasn't keeping up to date on the releases.
Just grabbed r3528. The CUDA app is still finishing tasks. So I had not used r3420 yet.
ID: 1828547
Profile Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1828880 - Posted: 6 Nov 2016, 15:29:02 UTC - in response to Message 1828527.  

You could take a look at my AMD 4200+ (2 cores) for a comparison.
It works well with a 750Ti and no CPU tasks.

Guppie ~ 35-40 minutes 2 at a time with SoG
A sample:
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc4_2bit_guppi_57449_42148_HIP78709_OFF_0008.1383.831.18.27.213.vlar_2	00:39:16 (00:29:20)	2016-11-03 10:39:25 PM	your-4dacd0ea75	2016-11-03 10:38:41 PM	1 CPU + 0.5 NVIDIA GPUs	74.7	Reported: OK +	159.43 MB	129.25 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc4_2bit_guppi_57449_41812_HIP78709_0007.2204.416.17.26.254.vlar_2	00:40:14 (00:30:58)	2016-11-03 10:25:24 PM	your-4dacd0ea75	2016-11-03 10:21:24 PM	1 CPU + 0.5 NVIDIA GPUs	77.0	Reported: OK	160.59 MB	130.37 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc4_2bit_guppi_57449_42148_HIP78709_OFF_0008.1383.831.18.27.204.vlar_2	00:39:22 (00:29:59)	2016-11-03 10:12:23 PM	your-4dacd0ea75	2016-11-03 10:00:22 PM	1 CPU + 0.5 NVIDIA GPUs	76.2	Reported: OK	159.91 MB	129.72 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc4_2bit_guppi_57449_42148_HIP78709_OFF_0008.1383.831.18.27.92.vlar_2	00:39:19 (00:29:46)	2016-11-03 9:45:22 PM	your-4dacd0ea75	2016-11-03 9:41:22 PM	1 CPU + 0.5 NVIDIA GPUs	75.7	Reported: OK	159.40 MB	129.22 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc6_2bit_guppi_57398_MESSIER031_0018.8434.831.23.46.172_2	00:35:36 (00:27:08)	2016-11-03 9:22:22 PM	your-4dacd0ea75	2016-11-03 9:19:59 PM	1 CPU + 0.5 NVIDIA GPUs	76.2	Reported: OK +	152.21 MB	122.08 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc6_2bit_guppi_57398_MESSIER031_0017.27820.831.23.46.109_2	00:36:05 (00:26:25)	2016-11-03 9:03:22 PM	your-4dacd0ea75	2016-11-03 9:01:13 PM	1 CPU + 0.5 NVIDIA GPUs	73.2	Reported: OK +	153.67 MB	123.65 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc6_2bit_guppi_57398_MESSIER031_0019.20653.831.24.47.149_2	00:29:04 (00:22:49)	2016-11-03 8:49:51 PM	your-4dacd0ea75	2016-11-03 8:45:51 PM	1 CPU + 0.5 NVIDIA GPUs	78.5	Reported: OK	150.78 MB	120.64 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc3_2bit_guppi_57451_25670_HIP69732_0021.19855.416.17.26.245.vlar_1	00:36:53 (00:28:52)	2016-11-03 8:25:39 PM	your-4dacd0ea75	2016-11-03 8:23:15 PM	1 CPU + 0.5 NVIDIA GPUs	78.3	Reported: OK	139.86 MB	109.62 MB	
SETI@home	8.19 setiathome_v8 (opencl_nvidia_SoG)	blc3_2bit_guppi_57451_24929_HIP63406_0019.20423.831.17.26.131.vlar_1	00:33:23 (00:26:59)	2016-11-03 8:20:15 PM	your-4dacd0ea75	2016-11-03 8:16:15 PM	1 CPU + 0.5 NVIDIA GPUs	80.8	Reported: OK	141.70 MB	111.49 MB	
ID: 1828880
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1828887 - Posted: 6 Nov 2016, 16:30:33 UTC - in response to Message 1828880.  

You could take a look at my AMD 4200+ (2 cores) for a comparison.
It works well with a 750Ti and no CPU tasks.

Guppie ~ 35-40 minutes 2 at a time with SoG
A sample:
<snip>

Good info. Is your GPU using PCIe x16, or is it limited to less on the MB? Since I'm using a PCIe x1 adapter with mine, I expect there could be a reduction in speed with the x1 connection.

After switching from CUDA to SoG I was getting mostly normal AR Arecibo tasks, which were running in 15-18min. I believe that is similar to how single CUDA tasks ran with the 750Ti in the 3.33GHz Xeon.

I suspended the Arecibo I had on hand and let the GUPPI VLARs run one at a time. The run times look to be 18m45s-19m15s. At the moment I'm configuring to run 2 SoG tasks, but I have to adjust other project settings to free up another CPU core.

Normally the system runs:
1 - iGPU AP
2 - PrimeGrid
1 - Free
The NV GPU has been getting the normally-free core, so I'm reducing PrimeGrid to 1 task while I run 2 SoG.

Currently I am running 2 tasks via an app_config.xml setting.
It may turn out that -cpu_lock and -total_GPU_instances_num N are a better option when running 2 tasks at once.
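A minimal sketch of such an app_config.xml (the app name is taken from the setiathome_v8 task listings elsewhere in this thread; adjust it to match your install):

```xml
<!-- Minimal app_config.xml sketch: gpu_usage 0.5 runs 2 GPU tasks at once,
     each reserving a full CPU core. App name assumed from the setiathome_v8
     task names seen elsewhere in this thread. -->
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```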
ID: 1828887
Profile Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1828914 - Posted: 6 Nov 2016, 22:05:55 UTC - in response to Message 1828887.  

It has an x16 slot for the card; I believe it is v2.0, if I remember right.

I run a fairly aggressive command line, but leave it at low priority since it wants 100% of the CPU - not good if not running LP. No locking or anything, just 0.5 GPU, 1.0 CPU, and let it run, with prefs set to not download CPU tasks ... it gets ugly if it tries running them.

The main problem is memory; the XP box only has 1GB to play with. But it runs well 24/7, and I can't even remember the last time it crashed. Maybe a reboot every month or 2 if I'm moving something.
ID: 1828914
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1828919 - Posted: 6 Nov 2016, 22:11:02 UTC
Last modified: 6 Nov 2016, 22:11:51 UTC

Running 2 GUPPI VLARs with SoG seems to give a pretty solid 35m run time for me.

The CPU time is greater than the Run Time, but I expect that has to do with the nature of the slower CPU.
I could play with -use_sleep settings to see if they make any measurable difference.
ID: 1828919
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1828973 - Posted: 7 Nov 2016, 6:19:17 UTC - in response to Message 1828887.  

Since I'm using a PCIe x1 adapter with mine, I expect there could be a reduction in speed with the x1 connection.

Depending on the hardware & application, yes. But as to how much it will impact crunching times- I couldn't begin to guess.
With CUDA50 my PCIe bus load was generally 0%, the odd 1-2% blip.
Running 1 WU at a time with SoG & some aggressive settings on my GTX 1070, I get PCIe bus load sustained peaks of around 15% on a *8 connection (PCIe v3.1) (My GTX 750Ti sometimes has very short peaks of 17%).
If my figuring is correct, that would work out to around 120% of a *1 connection.

However, sometime next year PCIe v4 should see the light of day, with double the bandwidth of PCIe v3.x.
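Grant's 120% figure is straight lane arithmetic, sketched below (same-generation lanes assumed; protocol overhead ignored):

```python
# The same absolute traffic that loads a x8 link to 15% would load a
# x1 link of the same PCIe generation 8 times as heavily.
load_x8 = 0.15                       # sustained bus load on the x8 link
lane_ratio = 8                       # x8 has 8 times the lanes of x1
equivalent_x1_load = load_x8 * lane_ratio
print(f"{equivalent_x1_load:.0%}")   # 120% - a x1 link would be saturated
```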
Grant
Darwin NT
ID: 1828973
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1829112 - Posted: 8 Nov 2016, 3:33:18 UTC - in response to Message 1828973.  

Since I'm using a PCIe x1 adapter with mine, I expect there could be a reduction in speed with the x1 connection.

Depending on the hardware & application, yes. But as to how much it will impact crunching times- I couldn't begin to guess.
With CUDA50 my PCIe bus load was generally 0%, the odd 1-2% blip.
Running 1 WU at a time with SoG & some aggressive settings on my GTX 1070, I get PCIe bus load sustained peaks of around 15% on a *8 connection (PCIe v3.1) (My GTX 750Ti sometimes has very short peaks of 17%).
If my figuring is correct, that would work out to around 120% of a *1 connection.

However sometime next year PCIe v4 should see the light of day, with double the bandwidth of PCIe v3.x

Since the system I'm using only has PCIe 2.0 x1, it seems like it may be an issue. Running the CUDA50 app I was seeing around 44% load on the bus in GPU-Z. With the NV SoG app I don't believe I have seen it go over 5%, staying mostly in the 2-3% range. I've also been running with an empty mb_cmdline.txt, so adding some settings may drive it up. If so, I'd just have to find the break-even point between settings and bus saturation slowing things down.
ID: 1829112
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13732
Credit: 208,696,464
RAC: 304
Australia
Message 1829290 - Posted: 9 Nov 2016, 10:39:39 UTC - in response to Message 1829112.  

Running 1 WU at a time with SoG & some aggressive settings on my GTX 1070, I get PCIe bus load sustained peaks of around 15% on a *8 connection (PCIe v3.1) (My GTX 750Ti sometimes has very short peaks of 17%).

To add to that- when crunching Arecibo WUs, my GTX 750Ti can have peaks of up to 22%; sustained periods of 20% Bus Interface Load.
ID: 1829290
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1829499 - Posted: 10 Nov 2016, 1:50:41 UTC - in response to Message 1828527.  



When I was running the 750 in my dual Xeon X5470, I believe it may have only run normal AR Arecibo tasks at that time.
I did find a comment in a post stating it was running:
2 Arecibo at once: ~25min

For the past few days the 750 has been running in the Celeron and chewing on a load of GUPPI VLARS.
2 GUPPI VLAR at once: 70-75min
1 GUPPI VLAR at once: 42-45m
CPU usage for the CUDA app looks to run upwards at 15-17% sometimes.

I don't have a reference for this system with Arecibo tasks but I believe that 2-3 times as long for GUPPI VLARs using the CUDA app should be about as expected?

I'm going to run a short batch of CPU GUPPI VLARs for a data point and then my plan is to run the NV SOG r3420 app for comparison of the CUDA. Then maybe some tuning parameters.



. . Hi Hal,

. . May I ask why such an old version of SoG? The current version in Lunatics Beta 6 is r3557. It has been rewritten to minimise tasks coming up as inconclusive, so maybe you should consider that. As a comparison, I am running it on my Pentium D 930 3.0GHz rig with 2 GTX950s and it is doing OK; previously I had been running r3528 with two GTX970s, but the upper card was running too hot, so I downgraded the system to the 950s. Just a thought.

. . For comparison, I am running doublets, and they are taking about 24 to 26 mins for Arecibo tasks and about 32 to 35 mins for Guppis (Blc2s). The later versions of SoG tend to be faster on Guppis, especially compared with CUDA.

. . Just my 2 bits :)

Stephen

.
ID: 1829499
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1829504 - Posted: 10 Nov 2016, 2:00:38 UTC - in response to Message 1828919.  

Running 2 GUPPI VLARs with SoG seems to give a pretty solid 35m run time for me.

The CPU time is greater than the Run Time, but I expect that has to do with the nature of the slower CPU.
I could play with -use_sleep settings to see if they make any measurable difference.


. . Hi again,

. . I am using -use_sleep to get 4 concurrent tasks on the GPUs with only 2 cores in the Pentium D.

. . The command in the command line file with _SoG in the name is:

-high_prec_timer -use_sleep

. . The Pentium D copes quite OK with the load.

Stephen

.
ID: 1829504
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1829524 - Posted: 10 Nov 2016, 3:20:29 UTC - in response to Message 1829499.  



<snip>



. . Hi Hal,

. . May I ask why such an old version of SoG? The current version in Lunatics Beta 6 is r3557. It has been rewritten to minimise tasks coming up as inconclusive, so maybe you should consider that. As a comparison, I am running it on my Pentium D 930 3.0GHz rig with 2 GTX950s and it is doing OK; previously I had been running r3528 with two GTX970s, but the upper card was running too hot, so I downgraded the system to the 950s. Just a thought.

. . For comparison, I am running doublets, and they are taking about 24 to 26 mins for Arecibo tasks and about 32 to 35 mins for Guppis (Blc2s). The later versions of SoG tend to be faster on Guppis, especially compared with CUDA.

. . Just my 2 bits :)

Stephen

.

If you want to ask why I was going to use that version, you can. Raistmer asked me the same thing, and I was up to r3528 less than an hour after my OP. Since I'm currently using an NV GPU, I have a reason to keep up with the current release builds. In fact, I switched from r3528 to r3557 earlier today.
ID: 1829524
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1829525 - Posted: 10 Nov 2016, 3:28:05 UTC - in response to Message 1829504.  

Running 2 GUPPI VLARs with SoG seems to give a pretty solid 35m run time for me.

The CPU time is greater than the Run Time, but I expect that has to do with the nature of the slower CPU.
I could play with -use_sleep settings to see if they make any measurable difference.


. . Hi again,

. . I am using -use_sleep to get 4 concurrent tasks on the GPUs with only 2 cores in the Pentium D.

. . The command in the command line file with _SoG in the name is:

-high_prec_timer -use_sleep

. . The Pentium D copes quite OK with the load.

Stephen

.

It looks like I had not really noticed -high_prec_timer in the readme. However, I'm not sure if the system I'm using the 750Ti FTW in supports that hardware function. I know several of my MBs can toggle HPET off/on. It is likely HPET is supported on this system but can't be disabled.
ID: 1829525
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1831231 - Posted: 18 Nov 2016, 16:15:39 UTC - in response to Message 1829524.  



<snip>

If you want to ask why I was going to use that version, you can. Raistmer asked me the same thing, and I was up to r3528 less than an hour after my OP. Since I'm currently using an NV GPU, I have a reason to keep up with the current release builds. In fact, I switched from r3528 to r3557 earlier today.


. . Good to see we are on the same page then :)

Stephen

.
ID: 1831231
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1831232 - Posted: 18 Nov 2016, 16:18:42 UTC - in response to Message 1829525.  

<snip>

It looks like I had not really noticed -high_prec_timer in the readme. However, I'm not sure if the system I'm using the 750Ti FTW in supports that hardware function. I know several of my MBs can toggle HPET off/on. It is likely HPET is supported on this system but can't be disabled.



. . I suspect a zero current initialisation would restore the rig to defaults.

Stephen

.
ID: 1831232
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1831235 - Posted: 18 Nov 2016, 16:23:06 UTC - in response to Message 1831232.  

<snip>

It looks like I had not really noticed -high_prec_timer in the read me. However I'm not sure if the system I'm using the 750TI FTW in supports that hardware function. I know several of my MBs can toggle HPET off/on. It is likely HPET is supported on the system but can't be disabled.



. . I suspect a zero current initialisation would restore the rig to defaults.

Stephen

.

I went the easy route and set the command in the app config. Since it didn't throw any errors, it must support the function despite not having a setting to toggle it off/on in the BIOS for that particular system.
ID: 1831235
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1831328 - Posted: 19 Nov 2016, 0:35:30 UTC - in response to Message 1831235.  
Last modified: 19 Nov 2016, 0:36:15 UTC

<snip>

I went the easy route and set the command in the app config. Since it didn't throw any errors, it must support the function despite not having a setting to toggle it off/on in the BIOS for that particular system.

I should add that after a few days using -high_prec_timer -use_sleep, I lowered the CPU reservation.
From: <gpu_usage>0.5</gpu_usage> <cpu_usage>1.0</cpu_usage>
To: <gpu_usage>0.5</gpu_usage> <cpu_usage>0.5</cpu_usage>
This let me return PrimeGrid to running 2 CPU tasks instead of 1.
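In app_config.xml form, the lowered reservation amounts to this (a sketch; the app name is assumed from the setiathome_v8 task names in this thread):

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.5</gpu_usage>
      <cpu_usage>0.5</cpu_usage>  <!-- was 1.0; the 2 tasks now reserve 1 core total -->
    </gpu_versions>
  </app>
</app_config>
```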
So the system is running:
1 iGPU AP task
2 NV SoG MB tasks
2 PrimeGrid CPU tasks
2 CPU cores free. 1 for iGPU & 1 for NV GPU.
I might be able to get by leaving only 1 CPU core free for both the iGPU & NV GPU. I am going to try running 3 PrimeGrid CPU tasks to see if that starts to slow anything down.

It seems like a low powered GPU is a good match to a small CPU.
ID: 1831328
Profile Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1831504 - Posted: 19 Nov 2016, 23:28:07 UTC - in response to Message 1831328.  

The iGPU build hardly requires a CPU reservation; its supporting runtime CPU consumption is quite low.
ID: 1831504
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1831519 - Posted: 20 Nov 2016, 1:06:42 UTC - in response to Message 1831504.  

The iGPU build hardly requires a CPU reservation; its supporting runtime CPU consumption is quite low.

The system had no issues when running only SETI@home AP tasks (4 CPU AP & 1 iGPU AP).
When running other projects on the CPU (4 CPU PrimeGrid & 1 iGPU AP), the iGPU would be slower, so I reserved a CPU core to keep it happy.

For the 750Ti with the NV SoG app I did some tests with 2 free CPU cores.
This command line caused NV driver restarts:
-high_prec_timer -use_sleep -hp -cpu_lock -cpu_lock_fixed_cpu 2

This command line also caused NV driver restarts:
-high_prec_timer -use_sleep -hp

This command line was OK, but was found to have slightly lower performance:
-high_prec_timer -use_sleep -cpu_lock -cpu_lock_fixed_cpu 2

Looks like either the system, or the driver version I'm using, didn't like the -hp command with r3557.

Now I have configured the system for 3 CPU PrimeGrid, 1 iGPU AP, & 2 NV SoG. The system is currently sitting at ~90% CPU load, so hopefully not much extra time is added for starting GPU tasks. If there is, maybe I'll switch to 2 PrimeGrid and 1 SETI MB CPU task for comparison.
The Celeron J1900 is not great at MB CPU tasks, which is why I don't normally run them. Not sure how well it would do with GBT VLARs though...
ID: 1831519