MB, AP, and Cuda requests



Message boards : Number crunching : MB, AP, and Cuda requests

Author Message
PhonAcq
Send message
Joined: 14 Apr 01
Posts: 1622
Credit: 21,578,496
RAC: 2,958
United States
Message 875465 - Posted: 14 Mar 2009, 15:20:18 UTC

Would someone explain a bit how requests are satisfied? Assuming a host is capable of, and enabled for, each of these cases, does the BOINC client request a specific type of work, or does the server decide what to send (assuming all types are available for download)? Or is it simply that you get whatever is available at the time, consistent with your settings and the time available on the host? In the CUDA case, how does it decide how many CUDA tasks to send versus MBs or APs? Are there separate caches for CUDA and MB/AP?

Obviously I don't know much and am looking for enlightenment. At present I only run MBs, but I'm considering enabling APs, and would like to understand the process better before I do.

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2204
Credit: 8,021,463
RAC: 4,318
United States
Message 875477 - Posted: 14 Mar 2009, 16:04:47 UTC

I remember reading something, from Richard I think, about how the scheduler works. As far as the selected applications for the venue go: when the client requests work, it asks the feeder cache for the selected apps; if there's no work available and you have "allow for other apps" selected, it makes a second request for basically `*` (using a wildcard as an example), and any app that replies back with a non-zero availability gets work sent from its queue.
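The two-pass request described above can be sketched as a toy model. The function and the feeder dictionary are invented for illustration; this is not actual BOINC scheduler code.

```python
# Hypothetical sketch of the two-pass work request described above.
# The app names and feeder interface are illustrative, not BOINC source.

def request_work(feeder, selected_apps, allow_other_apps):
    """First ask for the venue's selected apps; if none have work and
    'allow for other apps' is set, fall back to a wildcard request."""
    for app in selected_apps:
        if feeder.get(app, 0) > 0:               # non-zero availability
            return app
    if allow_other_apps:
        for app, available in feeder.items():    # wildcard: any app
            if available > 0:
                return app
    return None                                  # no work this time

feeder = {"MB": 0, "AP_v5": 12}                  # MB slots already drained
print(request_work(feeder, ["MB"], allow_other_apps=True))   # -> AP_v5
```

This is why a host that only ticked MB can still come away with AP_v5 when the MB slots happen to be empty at the moment it asks.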

What I'm trying to figure out, though, is why, when there's plenty of MB available, it only gives me AP_v5. I know, I know, I keep bringing it up and mentioning it all over the place, but I haven't seen it addressed, so it's almost like nobody's reading it. *shrug*
____________

Linux laptop uptime: 1484d 22h 42m
Ended due to UPS failure, found 14 hours after the fact

Richard Haselgrove
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8275
Credit: 44,971,191
RAC: 13,877
United Kingdom
Message 875503 - Posted: 14 Mar 2009, 17:41:01 UTC

So far as I can tell (and this is all fluid and subject to change), BOINC v6.4.x will just ask SETI for 'work'. So, subject to various limitations, you stand a chance of getting anything, and it all goes into one big cache of tasks: you could end up with all CUDA work and a cold CPU, or vice versa.

Limitations include:
You won't get CUDA work if you don't have a CUDA card
If you have an optimised application, you won't get work for anything not listed in app_info.xml
If you have a computer below a certain minimum speed, you shouldn't get Astropulse or Astropulse_v5
You shouldn't get work for any application not ticked in your preferences, unless you also ticked 'accept work from other applications'
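The limitations above amount to a filter applied to the set of apps a host can be sent. A minimal sketch, with invented field names and an invented speed threshold (the posts don't give the real Astropulse cutoff):

```python
# Illustrative filter for the list of limitations above. Field names and
# the speed threshold are invented; this is not the BOINC scheduler source.

MIN_SPEED_FOR_AP = 1.0e9   # assumed threshold; the real cutoff isn't stated

def eligible_apps(host, prefs, all_apps):
    """Apply the listed limitations, in order."""
    apps = set(prefs["ticked"])
    if prefs.get("accept_other_apps"):
        apps |= set(all_apps)                  # 'accept work from other applications'
    if host.get("app_info_apps") is not None:  # optimised app: app_info.xml rules
        apps &= set(host["app_info_apps"])
    if not host.get("has_cuda_card"):
        apps.discard("MB_CUDA")                # no CUDA work without a CUDA card
    if host.get("speed", 0) < MIN_SPEED_FOR_AP:
        apps -= {"AP", "AP_v5"}                # too slow for Astropulse
    return apps

host = {"has_cuda_card": False, "speed": 2.0e9, "app_info_apps": None}
prefs = {"ticked": ["MB"], "accept_other_apps": True}
print(sorted(eligible_apps(host, prefs, ["MB", "MB_CUDA", "AP", "AP_v5"])))
```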

With BOINC v6.6.xx, work fetched and cached is split into two lists: CPU and CUDA. So the CUDA cache should always, and separately, have work in it, which in SETI's case will always be MB.

Work allocation for the CPU cache remains at the discretion of the server, and remains subject to the same limitations (except the CUDA one, of course). So you could get either AP, AP_V5, or MB/CPU, depending on the server's mood.

With regard to the question "when I ask for random tasks, why do I always seem to get Astropulse_v5?", have a look at Matt's opening post in The End of All Things (Oct 30 2008). IIRC, the 'feeder' cache he describes holds just 100 tasks at once. So, if I'm reading Matt correctly, what we the crunchers persuaded him to do after the October download woes was to set the servers so they have just 33 MB tasks available at once, 33 AP, and 33 AP_V5.

There probably won't be any AP, because we're right down to the dregs of the dregs. The MB will go pretty much at once: just a couple of big hits from a CUDA guy (20 each), and it's gone. So that just leaves AP_v5 until (all of 2 seconds later) the feeder gets another cache of 100 from the database, as described in Tom (Dec 23 2008).
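The drain described above is easy to see in a toy model of the 100-slot feeder: roughly a third MB, a third AP (empty, per the post), a third AP_v5. The numbers come from the posts; the mechanics are simplified.

```python
# Toy model of the 100-slot feeder cache described above. The slot split
# and request sizes are taken from the posts; everything else is simplified.

feeder = {"MB": 33, "AP": 0, "AP_v5": 33}   # AP is 'down to the dregs'

def fetch(feeder, app, n):
    """Take up to n tasks of one kind from the feeder's slots."""
    got = min(n, feeder[app])
    feeder[app] -= got
    return got

# Two CUDA hosts each asking for 20 MB tasks empty the MB slots at once.
for _ in range(2):
    fetch(feeder, "MB", 20)
print(feeder)   # MB exhausted; only AP_v5 left until the next 2-second refill
```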

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2204
Credit: 8,021,463
RAC: 4,318
United States
Message 875505 - Posted: 14 Mar 2009, 17:47:32 UTC

I thought the feeder did 100 of each, not 100 cumulative. Of course for old AP, that might not work if it was in batches of 100, but I was under the impression the feeder was fed by the splitters, so resends should just be dropped back into the feeder cache at any time, right?

Richard Haselgrove
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8275
Credit: 44,971,191
RAC: 13,877
United Kingdom
Message 875510 - Posted: 14 Mar 2009, 18:12:10 UTC - in response to Message 875505.

I thought the feeder did 100 of each, not 100 cumulative. Of course for old AP, that might not work if it was in batches of 100, but I was under the impression the feeder was fed by the splitters, so resends should just be dropped back into the feeder cache at any time, right?

Wrong. Read those two posts of Matt's again. The key phrases are "... the feeder has half the memory for SETI@home workunits than it did." and "holds at any given time the names of 100 available workunits ... queries the database every two seconds to see if there's more work available", from the first and second references respectively.

Nothing can "drop something into the feeder cache". When a task is reported, the transitioner will update the database to flag it for the attention of the validator. If the validator doesn't like it, it'll alert another process to create a new 'Task' row in the database table. It's only when the feeder queries the database that it'll find (eventually, at the far end of the table) that there's a new task eligible for allocation.
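That round-trip can be sketched as a toy queue model. The function names and record fields are invented for the illustration; only the ordering behaviour reflects what the post describes.

```python
# Rough sketch of the server-side path described above: a failed result
# only reappears via the database queue, never directly in the feeder.

def on_invalid_result(db, task):
    """Validator path: an invalid result spawns a new result row at the
    END of the ready-to-send queue; it does NOT jump into the feeder."""
    resend = {"wu": task["wu"], "resend": True}
    db["ready_to_send"].append(resend)
    return resend

def feeder_refill(db, free_slots):
    """The feeder's 2-second query takes work from the FRONT of the queue,
    so a fresh resend waits behind everything already ahead of it."""
    taken = db["ready_to_send"][:free_slots]
    db["ready_to_send"] = db["ready_to_send"][free_slots:]
    return taken

db = {"ready_to_send": [{"wu": f"wu{i}", "resend": False} for i in range(5)]}
on_invalid_result(db, {"wu": "old"})
first = feeder_refill(db, 3)
print([t["wu"] for t in first])   # the resend is still queued at the back
```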

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2204
Credit: 8,021,463
RAC: 4,318
United States
Message 875511 - Posted: 14 Mar 2009, 18:18:34 UTC

I did just read those when you posted them, but I was saying that, before reading them, that's how I thought it worked.

So now I guess the question is: would it be time to consider increasing the feeder cache to something larger than 100? What kind of negative effects would that cause, if any? I would have thought that with all the CUDA clients being added, the feeder would have been allowed to be tripled.

Richard Haselgrove
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8275
Credit: 44,971,191
RAC: 13,877
United Kingdom
Message 875513 - Posted: 14 Mar 2009, 18:36:05 UTC - in response to Message 875511.

I did just read those when you posted them, but I was saying that, before reading them, that's how I thought it worked.

So now I guess the question is: would it be time to consider increasing the feeder cache to something larger than 100? What kind of negative effects would that cause, if any? I would have thought that with all the CUDA clients being added, the feeder would have been allowed to be tripled.

The - shall we say observation? - that people are talking about is the relative proportion of AP_v5 and MB/CPU tasks in their allocations. Just increasing the size of the feeder cache isn't going to alter that: you would need to adjust the proportions too. Something like 20:1 would seem right, looking at the Server Status page. Or maybe 94:5:1 for MB:AP_v5:AP.

Looking at Matt's first post, all he did was turn on a standard BOINC server option. He hasn't got time to do his own programming: I doubt he has time to reprogram BOINC as well. So who's going to ask BOINC to reprogram the server tools? I don't think it had better be me, because I'm making too much of a nuisance of myself on the client side already.
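The 94:5:1 idea above amounts to splitting the feeder's 100 slots in proportion to the weights. A minimal sketch; the rounding scheme is invented for the illustration and is not a real BOINC server option:

```python
# Sketch of weighted slot allocation for a 100-slot feeder, using the
# 94:5:1 MB:AP_v5:AP proportion suggested above. The rounding handling
# is invented for the illustration.

def allocate_slots(weights, total=100):
    wsum = sum(weights.values())
    slots = {app: (w * total) // wsum for app, w in weights.items()}
    # hand out any slots lost to integer rounding, largest weight first
    for app in sorted(weights, key=weights.get, reverse=True):
        if sum(slots.values()) == total:
            break
        slots[app] += 1
    return slots

print(allocate_slots({"MB": 94, "AP_v5": 5, "AP": 1}))
```

With even weights (the suspected default), each of the three apps would hold about a third of the slots, which matches the 33/33/33 split discussed earlier in the thread.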

Josef W. Segur
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4137
Credit: 1,004,349
RAC: 238
United States
Message 875518 - Posted: 14 Mar 2009, 19:04:53 UTC - in response to Message 875505.

I thought the feeder did 100 of each, not 100 cumulative. Of course for old AP, that might not work if it was in batches of 100, but I was under the impression the feeder was fed by the splitters, so resends should just be dropped back into the feeder cache at any time, right?

The splitter creates a workunit and a workunit entry in the master database, then two result entries are made in the master database (for an initial replication of 2). Those result entries are part of the "Results ready to send". The feeder can be set to take things from the ready-to-send list in various ways; here it is looking for some proportions of MB, AP, and AP_v5 tasks to put into empty slots in the 100 total (yes, it's cumulative). Those slots are distributed based on weighting values for the different kinds of work. I agree with Richard that it seems quite likely all three are weighted evenly, but even if more than half were MB, very few requests from CUDA hosts would be needed to empty those slots: just 5 hosts needing 20 CUDA WUs each would exhaust the MB work no matter how the weightings are set.

The "Results received in last hour" values on the Server Status page are an indicator of how many downloads of WUs took place a few days earlier. The averages for the last week are about 27500 for MB and 1490 for AP. Converting to per-second rates and multiplying by the WU file size in bits indicates about 23 Mbits/sec of MB download (7.64 WUs/sec) and 27.8 Mbits/sec of AP download (0.414 WUs/sec). With those WU delivery rates it is very unclear why requests aren't getting a reasonable mix.
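The arithmetic above can be reproduced directly. The per-WU sizes are back-calculated from the quoted Mbit/s figures, since the posts don't state them explicitly:

```python
# Reproducing the arithmetic above. Per-WU sizes are back-calculated from
# the quoted Mbit/s figures; they are not stated directly in the posts.

mb_per_hour, ap_per_hour = 27500, 1490
mb_rate = mb_per_hour / 3600        # ~7.64 MB WUs per second
ap_rate = ap_per_hour / 3600        # ~0.414 AP WUs per second

mb_wu_bits = 23e6 / mb_rate         # ~3.0 Mbit  (~375 KB per MB workunit)
ap_wu_bits = 27.8e6 / ap_rate       # ~67 Mbit   (~8.4 MB per AP workunit)

print(round(mb_rate, 2), round(ap_rate, 3))   # 7.64 0.414
```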

There is a BOINC option which can increase the priority of resends, and the Feeder can be set to consider priority, but there's no indication this project is adding those additional burdens to the server load. New result entries seem to go at the end of the "Results ready to send" queue no matter if they're initial replication or resends.
Joe

daysteppr
Send message
Joined: 22 Mar 05
Posts: 69
Credit: 2,775,826
RAC: 6,858
United States
Message 875724 - Posted: 15 Mar 2009, 7:37:38 UTC
Last modified: 15 Mar 2009, 7:38:45 UTC

All I know is that I'm getting nothing but 5.03 tasks.
I have an AMD X2 4600+ (2.4 GHz) WITH the optimized AP, and they're taking me 2 days or so each.

Profile RandyC
Avatar
Send message
Joined: 20 Oct 99
Posts: 714
Credit: 1,704,345
RAC: 0
United States
Message 875965 - Posted: 15 Mar 2009, 22:23:06 UTC - in response to Message 875724.

All I know is that I'm getting nothing but 5.03 tasks.
I have an AMD X2 4600+ (2.4 GHz) WITH the optimized AP, and they're taking me 2 days or so each.


I have a system similar to yours. However, it seems to run about 20% faster. WU times are 120k-130k secs. I have it split between Einstein and S@H-AP at a 100:200 resource ratio.

Some (possible) differences:
o I run a 5% overclock on the cpu
o I'm running 2GB DDR2 PC-6400 RAM
o I'm running XP-Pro 64-bit

I would think getting some faster memory might help.

daysteppr
Send message
Joined: 22 Mar 05
Posts: 69
Credit: 2,775,826
RAC: 6,858
United States
Message 875987 - Posted: 16 Mar 2009, 0:06:33 UTC - in response to Message 875965.

All I know is that I'm getting nothing but 5.03 tasks.
I have an AMD X2 4600+ (2.4 GHz) WITH the optimized AP, and they're taking me 2 days or so each.


I have a system similar to yours. However, it seems to run about 20% faster. WU times are 120k-130k secs. I have it split between Einstein and S@H-AP at a 100:200 resource ratio.

Some (possible) differences:
o I run a 5% overclock on the cpu
o I'm running 2GB DDR2 PC-6400 RAM
o I'm running XP-Pro 64-bit

I would think getting some faster memory might help.


- Don't have any overclock. I was trashing AP 5.00 WUs when I did that: went from 2.4 GHz to 2.6 using software, and temps never changed.
- I'm on 2GB DDR2 PC-5300 RAM
- XP 32-bit
- Using the 'Keep WU in memory' setting.

I'm OK with it, mostly. It just drives me crazy when an Intel chip does it 2x as fast as me when we have the same 'clock', or when an Intel does it faster even without an optimized app at the 'same speed'.

I can, however, say I have never had an outage issue. My settings are the stock 1+2 or something. Never been out of work.

PhonAcq
Send message
Joined: 14 Apr 01
Posts: 1622
Credit: 21,578,496
RAC: 2,958
United States
Message 876125 - Posted: 16 Mar 2009, 12:31:32 UTC - in response to Message 875518.

I agree with Richard that it seems quite likely all three are weighted evenly but even if more than half were MB very few requests from CUDA hosts would be needed to empty out those slots. Just 5 which need 20 CUDA WUs would exhaust MB work no matter how the weightings are set.

Joe

How difficult would it be to create a couple of fictitious data types and fill them with MB work? Doing so would increase the fraction of MBs available in the feeder cache, it seems. One benefit would be reducing the number of times hosts ask for more MBs, find none available, ask again, and so on; that is, it would cut inefficient bandwidth use.

Josef W. Segur
Volunteer developer
Volunteer tester
Send message
Joined: 30 Oct 99
Posts: 4137
Credit: 1,004,349
RAC: 238
United States
Message 876199 - Posted: 16 Mar 2009, 17:43:46 UTC - in response to Message 876125.

I agree with Richard that it seems quite likely all three are weighted evenly but even if more than half were MB very few requests from CUDA hosts would be needed to empty out those slots. Just 5 which need 20 CUDA WUs would exhaust MB work no matter how the weightings are set.
Joe

How difficult would it be to create a couple of fictitious data types and fill them with MB work? Doing so would increase the fraction of MBs available in the feeder cache, it seems. One benefit would be reducing the number of times hosts ask for more MBs, find none available, ask again, and so on; that is, it would cut inefficient bandwidth use.

That would involve coding changes all over the place; you'd need separate splitters and assimilators, or significantly modified ones. It ought to be far easier to update the weights for the existing 3 applications, then touch the trigger file which tells the feeder to reread the database settings. If the default even weighting is still in effect, that certainly should be fixed, since there shouldn't ever be enough Astropulse resends to fill a third of the list. But we don't really know the weights haven't been set; the evidence simply indicates there are about 18.5 MB tasks sent for each AP_v5 task.
Joe
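The ~18.5:1 figure follows from the hourly averages quoted earlier in the thread, and turning observed send rates into feeder weights is a one-liner. The helper is invented, just to show the proportion:

```python
# Where '~18.5 MB tasks per AP_v5 task' comes from, using the hourly
# averages quoted earlier. weights_from_rates is an invented helper.

def weights_from_rates(rates):
    """Normalise observed send rates into relative feeder weights."""
    smallest = min(rates.values())
    return {app: round(r / smallest, 1) for app, r in rates.items()}

print(weights_from_rates({"MB": 27500, "AP_v5": 1490}))  # MB ~18.5 per AP_v5
```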

PhonAcq
Send message
Joined: 14 Apr 01
Posts: 1622
Credit: 21,578,496
RAC: 2,958
United States
Message 876272 - Posted: 16 Mar 2009, 21:32:19 UTC - in response to Message 876199.
Last modified: 16 Mar 2009, 21:32:58 UTC

What I was thinking is that they added AP WUs without too much trouble (I'm guessing). Why not replicate that process by adding a fictitious 'new' MB source of WUs?

Just brainstorming here. But obviously if the weights can be adjusted, then they should be, especially if doing so would reduce the bandwidth load as I suggested earlier.

Richard Haselgrove
Volunteer tester
Send message
Joined: 4 Jul 99
Posts: 8275
Credit: 44,971,191
RAC: 13,877
United Kingdom
Message 876284 - Posted: 16 Mar 2009, 22:00:26 UTC

Another reason why it would be good to give a higher priority to MB over AP is that, at the moment, AP work is flying out of the window like hot cakes.

That means that the AP splitters are dis-inhibited by disk space limitations for a higher proportion of the time, and are racing ahead of the MB splitters. At the moment, there are 29 'tapes' occupying live server space which are 'done' as far as AP is concerned, but have to hang around until MB catches up - about 1.3 Terabytes.

I'm sure Matt would prefer MB and AP to process the raw data at roughly the same speed, to lighten his data- and storage- management load.

