Quotas too small for GPU crunching

Profile [AF>HFR>RR] ThierryH
Volunteer tester

Send message
Joined: 28 Oct 01
Posts: 35
Credit: 10,867,120
RAC: 0
France
Message 855449 - Posted: 19 Jan 2009, 18:51:37 UTC

I have two GTX 295s in an i7 machine. It's crunching around 800 WUs per day. The quota is 100 WUs per CPU per day, so 800 WUs per day for an i7. That's too small, especially if there are a lot of large angle range WUs during a day.
Please increase it!

My machine

Thank you,
ThierryH
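
(For illustration, a minimal sketch of the quota arithmetic described above, assuming the daily allowance is simply the per-CPU limit multiplied by the number of logical CPUs BOINC reports; the function name and values are only for this example.)

PER_CPU_QUOTA = 100  # WUs per CPU per day, the limit quoted in this thread

def old_daily_quota(n_cpus):
    # Old rule: GPUs are not counted at all.
    return PER_CPU_QUOTA * n_cpus

print(old_daily_quota(8))  # i7 with Hyper-Threading (8 logical CPUs) -> 800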
ID: 855449 · Report as offensive
Profile skildude
Avatar

Send message
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 855479 - Posted: 19 Jan 2009, 20:00:54 UTC

I congratulate you on being able to crunch so much in a short period of time. SETI and other projects intentionally set the limit to 100/core. If you were to have a hard drive failure or some other catastrophe, you'd put 900+ WUs on hold, which starts to eat up database space when you consider the number of people who get large caches and then either quit the project or have other failures. There are other good reasons that someone else could point out.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 855479 · Report as offensive
Profile mr.kjellen
Volunteer tester
Avatar

Send message
Joined: 4 Jan 01
Posts: 195
Credit: 71,324,196
RAC: 0
Sweden
Message 855483 - Posted: 19 Jan 2009, 20:07:08 UTC

I would say that two GTX 295s should at LEAST be counted as 4 cores and be allowed 100 WU/day each, just like the CPU cores.

/Anton
ID: 855483 · Report as offensive
Aaron

Send message
Joined: 3 Apr 99
Posts: 18
Credit: 535,856
RAC: 0
United Kingdom
Message 855485 - Posted: 19 Jan 2009, 20:15:42 UTC - in response to Message 855483.  
Last modified: 19 Jan 2009, 20:20:12 UTC

Maybe a better way would be to let the user process all of the units until the quota is reached, and then let the user download 10 or so at a time until it is reset at the end of the day.

The above would depend on reporting successful work done and not having the original quota used up by errors or aborts.

To clarify:

Good
1. Process and report 100 units successfully.
2. Download additional work in small chunks.

Bad
1. Report lots of errors and abort work you don't like the look of.
2. No more work for you until tomorrow.
ID: 855485 · Report as offensive
Profile skildude
Avatar

Send message
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 855486 - Posted: 19 Jan 2009, 20:19:26 UTC - in response to Message 855485.  

I also recall reading that SETI recommends running one or more of the other projects available through BOINC. I don't think any suggestion to expand WU quotas would be taken seriously. I would bet that S@H would prefer to increase the data being crunched per WU rather than change the daily quota.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 855486 · Report as offensive
Profile mr.kjellen
Volunteer tester
Avatar

Send message
Joined: 4 Jan 01
Posts: 195
Credit: 71,324,196
RAC: 0
Sweden
Message 855498 - Posted: 19 Jan 2009, 20:43:10 UTC - in response to Message 855485.  

Maybe a better way would be to let the user process all of the units until the quota is reached, and then let the user download 10 or so at a time until it is reset at the end of the day.

The above would depend on reporting successful work done and not having the original quota used up by errors or aborts.

To clarify:

Good
1. Process and report 100 units successfully.
2. Download additional work in small chunks.

Bad
1. Report lots of errors and abort work you don't like the look of.
2. No more work for you until tomorrow.


This in no way contradicts having a WU download quota for GPU cores. Let the work flow. The more the merrier! :)

In ThierryH's case, though, 10 units at a time would probably not be enough of an 'extra ration'... since he computes 12 in parallel...
ID: 855498 · Report as offensive
Profile skildude
Avatar

Send message
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 855501 - Posted: 19 Jan 2009, 20:52:33 UTC - in response to Message 855498.  

As stated before, SETI@home encourages people to have additional projects to work on. I'd be happy as a clam to get that much done in one day. However, it's been their policy not to increase the number of WUs being sent out but to increase the detail/work processed in each WU. I don't see Berkeley changing their way of doing things for relatively few individuals.
I'd bet that they appreciate all your work, but instead of complaining about something they aren't going to change just for you, try working on another project in your spare time.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 855501 · Report as offensive
Aaron

Send message
Joined: 3 Apr 99
Posts: 18
Credit: 535,856
RAC: 0
United Kingdom
Message 855509 - Posted: 19 Jan 2009, 21:05:13 UTC - in response to Message 855498.  

In ThierryH's case, though, 10 units at a time would probably not be enough of an 'extra ration'... since he computes 12 in parallel...


Indeed, maybe 1 or 2 extra units per processing unit would be more logical.

I'd bet that they appreciate all your work, but instead of complaining about something they aren't going to change just for you, try working on another project in your spare time.


It's probably even more productive to talk about possible solutions and join other projects to take up the slack.

For example: maybe someone can raise this with the BOINC people for it to be implemented, and maybe even submit a patch if the development is open. (And project administrators can use it or not.)
ID: 855509 · Report as offensive
nick
Volunteer tester
Avatar

Send message
Joined: 22 Jul 05
Posts: 284
Credit: 3,902,174
RAC: 0
United States
Message 855530 - Posted: 19 Jan 2009, 22:15:59 UTC

Well, just letting the manager see each GPU as a core would work. With 12 cores you could get up to 1200 WUs a day, a lot more than the machine would be able to crunch, I think...


ID: 855530 · Report as offensive
Profile popandbob
Volunteer tester

Send message
Joined: 19 Mar 05
Posts: 551
Credit: 4,673,015
RAC: 0
Canada
Message 855617 - Posted: 20 Jan 2009, 2:03:45 UTC

Even my GTX 260 Core 216 can chew through over 400 WUs a day of the 13-credit WUs... Plus then there are the aborted VLAR WUs... (Sorry, I won't run WUs that take over 2 hours when they shouldn't and make my PC sluggish.)
It took me over 5 days to get 1000 WUs in the queue (not even a 5-day buffer).
But a long buffer is currently needed, because if I came across a bunch of shorties and VLARs I would churn through over half of my queue in a single day...


Do you Good Search for Seti@Home? http://www.goodsearch.com/?charityid=888957
Or Good Shop? http://www.goodshop.com/?charityid=888957
ID: 855617 · Report as offensive
Josef W. Segur
Volunteer developer
Volunteer tester

Send message
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 855681 - Posted: 20 Jan 2009, 6:15:49 UTC

From the boinc_dev mailing list:

From: David Anderson <davea@ssl.berkeley.edu>
Subject: Re: [boinc_dev] Work-fetch and gpu quotas
Date sent: Mon, 19 Jan 2009 17:03:50 -0800
Copies to: BOINC dev <boinc_dev@ssl.berkeley.edu>

I checked in a change to deal with this problem.
I added a scheduler config parameter "cuda_multiplier".
If the basic quota is K,
the actual quota for a host with N CPUs and M CUDA devices will be
K * (N + M*cuda_multiplier)
(i.e. cuda_multiplier reflects how much faster GPUs
are than CPUs for this project).

I'll deploy this on SETI@home (with a value of 5 or so) soon.

-- David

                                                                 Joe
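
(For illustration, a minimal sketch of the rule David describes; the function name and the example call are illustrative only, not the actual BOINC scheduler code.)

def new_daily_quota(k, n_cpus, n_cuda, cuda_multiplier=5):
    # quota = K * (N + M * cuda_multiplier)
    return k * (n_cpus + n_cuda * cuda_multiplier)

print(new_daily_quota(100, 8, 0))  # CPU-only i7: unchanged at 800 WUs/day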
ID: 855681 · Report as offensive
Profile [AF>HFR>RR] ThierryH
Volunteer tester

Send message
Joined: 28 Oct 01
Posts: 35
Credit: 10,867,120
RAC: 0
France
Message 855702 - Posted: 20 Jan 2009, 8:09:35 UTC - in response to Message 855681.  

From the boinc_dev mailing list:

From: David Anderson <davea@ssl.berkeley.edu>
Subject: Re: [boinc_dev] Work-fetch and gpu quotas
Date sent: Mon, 19 Jan 2009 17:03:50 -0800
Copies to: BOINC dev <boinc_dev@ssl.berkeley.edu>

I checked in a change to deal with this problem.
I added a scheduler config parameter "cuda_multiplier".
If the basic quota is K,
the actual quota for a host with N CPUs and M CUDA devices will be
K * (N + M*cuda_multiplier)
(i.e. cuda_multiplier reflects how much faster GPUs
are than CPUs for this project).

I'll deploy this on SETI@home (with a value of 5 or so) soon.

-- David

                                                                 Joe


Thank you for the answer.
ThierryH.
ID: 855702 · Report as offensive
Profile [AF>HFR>RR] ThierryH
Volunteer tester

Send message
Joined: 28 Oct 01
Posts: 35
Credit: 10,867,120
RAC: 0
France
Message 855796 - Posted: 20 Jan 2009, 22:36:57 UTC

Today there were a lot of very large angle range WUs. My box took only 12 hours to crunch its daily 800 WUs. Now, while waiting for tomorrow, the next 800 are being crunched on GPUGrid.

ThierryH


ID: 855796 · Report as offensive
Profile Virtual Boss*
Volunteer tester
Avatar

Send message
Joined: 4 May 08
Posts: 417
Credit: 6,440,287
RAC: 0
Australia
Message 855903 - Posted: 21 Jan 2009, 4:55:50 UTC - in response to Message 855796.  

If my calculations are right, when the new rule is implemented you should have an allocation of 1800/day (if the multiplier value is 5).

That should keep you going 24/7 most if not all of the time.
ID: 855903 · Report as offensive
Profile [AF>HFR>RR] ThierryH
Volunteer tester

Send message
Joined: 28 Oct 01
Posts: 35
Credit: 10,867,120
RAC: 0
France
Message 855918 - Posted: 21 Jan 2009, 7:11:41 UTC - in response to Message 855903.  

If my calculations are right, when the new rule is implemented you should have an allocation of 1800/day (if the multiplier value is 5).

That should keep you going 24/7 most if not all of the time.


2800 ;)
The GTX 295 has two GPUs on board, so 100 * (8 + 5 * 4).
That's effectively enough. Probably too much for the moment.
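
(To make the arithmetic explicit, a small worked example using the same hypothetical sketch of David's formula; the only difference between the two figures in this exchange is whether each GTX 295 is counted as one CUDA device or as two.)

def new_daily_quota(k, n_cpus, n_cuda, cuda_multiplier=5):
    return k * (n_cpus + n_cuda * cuda_multiplier)

print(new_daily_quota(100, 8, 2))  # one device per card: 100 * (8 + 2*5) = 1800
print(new_daily_quota(100, 8, 4))  # two GPUs per GTX 295: 100 * (8 + 4*5) = 2800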



ID: 855918 · Report as offensive
Profile mr.kjellen
Volunteer tester
Avatar

Send message
Joined: 4 Jan 01
Posts: 195
Credit: 71,324,196
RAC: 0
Sweden
Message 855933 - Posted: 21 Jan 2009, 8:04:32 UTC

Say Thierry,
What make are your GTX 295s? I'm curious because several users on different projects report that the Nvidia drivers only let them use one core of each card. The EVGA cards are mentioned in particular.

Do you have them working on 2 WUs each at a time (for a total of 4)?
Did you have to configure them in any special way (SLI/PhysX)?

I have a pair on the way as well, and I am interested in knowing whether WinXP x64 works... WinXP x86 apparently works; Vista (x64 or x86) apparently does not.

/Anton
ID: 855933 · Report as offensive
Chelski
Avatar

Send message
Joined: 3 Jan 00
Posts: 121
Credit: 8,979,050
RAC: 0
Malaysia
Message 855957 - Posted: 21 Jan 2009, 13:06:35 UTC

Seems like it is effective already; I've seen a 700-unit limit being reached on my client (2 CPUs + 1 GPU × 5).

Unfortunately, I think the client still grossly underestimates the productivity of CUDA and fails to compensate by increasing the queue proportionately.
ID: 855957 · Report as offensive
Profile Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 855960 - Posted: 21 Jan 2009, 13:16:53 UTC - in response to Message 855957.  

Unfortunately, I think the client still grossly underestimates the productivity of CUDA and fails to compensate by increasing the queue proportionately.

Which client?
ID: 855960 · Report as offensive
Profile [AF>HFR>RR] ThierryH
Volunteer tester

Send message
Joined: 28 Oct 01
Posts: 35
Credit: 10,867,120
RAC: 0
France
Message 855963 - Posted: 21 Jan 2009, 13:27:40 UTC - in response to Message 855933.  

Say Thierry,
What make are your GTX 295s? I'm curious because several users on different projects report that the Nvidia drivers only let them use one core of each card. The EVGA cards are mentioned in particular.

Do you have them working on 2 WUs each at a time (for a total of 4)?
Did you have to configure them in any special way (SLI/PhysX)?

I have a pair on the way as well, and I am interested in knowing whether WinXP x64 works... WinXP x86 apparently works; Vista (x64 or x86) apparently does not.

/Anton


My GTX 295s are from Gainward. Whether one or both GPUs get used is only a matter of setup: you have to disable SLI so that the two GPUs on one card can each crunch a WU at the same time.
As you can see on my machine, I'm working under XP64. Just be sure to use nVidia driver 181.20 or later to avoid all the troubles there were in the past with the SETI app.

/Thierry
ID: 855963 · Report as offensive
Profile Sutaru Tsureku
Volunteer tester

Send message
Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 856030 - Posted: 21 Jan 2009, 17:35:11 UTC
Last modified: 21 Jan 2009, 17:38:34 UTC

@ [AF>HFR>RR] ThierryH

Now you are down to:
Maximum daily WU quota per CPU: 1/day

<core_client_version>6.4.5</core_client_version>
<![CDATA[
<message>
app_version download error: couldn't get input files:
<file_xfer_error>
  <file_name>setiathome_6.08_windows_intelx86__cuda.exe</file_name>
  <error_code>-120</error_code>
  <error_message>signature verification failed</error_message>
</file_xfer_error>

</message>
]]>
 
Validate state: Invalid
Claimed credit: 0
Granted credit: 0
Application version: 6.08


Hope the Berkeley crew will fix this soon...


EDIT:
BTW.
Nice rig.. :-)
ID: 856030 · Report as offensive
