Message boards :
Number crunching :
The spoofed client - Whatever about
Joseph Stateson · Joined: 27 May 99 · Posts: 309 · Credit: 70,759,933 · RAC: 3

That is exactly what I thought. So what is the cache that is being discussed?
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768

SETI is the one who imposed the cache limits, so SETI is the one who would have to change them. The Point is, compared to the other suggestions being offered, SETI changing the cache limits would be a Very Easy endeavor to accomplish. That's all.
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Seti only allows 100 tasks per CPU and 100 tasks per GPU on board at any time for the host's cache. So setting X days of work does nothing for you if you have a reasonably fast system; you are only going to get the maximum cache allotment for your hardware. Setting additional days of work to 0.01 will cause the client to ask for work at every scheduler connection.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours · A proud member of the OFA (Old Farts Association)
Jimbocous · Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349

If the objective is to try and "smooth out the bumps" while keeping crunchers well-fed, it seems to me that the easiest way for the SETI project to handle that would be to eliminate or greatly increase the hard limits on tasks in progress, and instead calculate tasks delivered based on average turnaround time, as is (I believe) already done up to the point where the limit is reached. Logically, that should be sufficient control, and self-correcting for problem clients. Am I missing something?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304

Am I missing something?

Yep. I'll try to summarise things.

Basically, the more work returned per hour, the lower the Results-out-in-the-field number can be; or, the higher the Results-out-in-the-field number, the lower the amount of work returned per hour can be before the servers start to choke under the load. That is the Seti@home server issue. As discussed earlier in the thread, Results-out-in-the-field has been limited (by the imposition of server-side limits) to stop the servers from falling over or grinding to a halt, as the Seti@home servers are at their limits.

At the current levels of Results-out-in-the-field, when the Results-received-in-last-hour reaches a certain (variable) level, the Workunit-files-waiting-for-deletion start to back up. Once they reach a certain (variable) level, the splitter output falls away, sometimes to less than 10 per second. If this goes on for long enough, the Ready-to-send buffer runs out and people can't get work.

The number of Results-out-in-the-field is the total of all the crunchers' All tasks numbers in their account Task lists. It was suggested that the larger people's caches, the larger the number of Results-out-in-the-field. TBar hypothesized that this wasn't the case, and provided evidence that for a given amount of work processed per hour, the All tasks number remains the same: the ratio between In progress (people's caches) and Validation pending varies, but for a given hourly throughput the All tasks number remains the same. The smaller the In progress number, the larger the Validation pending number; the larger the In progress number, the smaller the Validation pending number; but the All tasks number remains the same.

In the past someone (I still can't remember his name; Jeff, I think it was?) looked at the turnaround time for work, and found that, regardless of how many long-term outstanding Pendings or Inconclusives you might see in your Task List, the huge majority of work is returned within 48 hours. Work Units that take longer than 48 hours to be validated are only a very small percentage, and the long-term (2 months+) outstanding work is only a very small percentage of that very small percentage.

Because the high-performance crunchers have the greatest impact on Results-out-in-the-field (their All tasks numbers are so high), and because re-distributing the work out in the field to reduce the number of WUs that take more than 48 hours to be returned won't actually have any significant effect on the Results-out-in-the-field numbers (they are only a very small percentage of the WUs that are returned), any benefit in alleviating the server load issues would be virtually nil.

Rob has suggested that systems with large numbers of Ghosts may contribute as much as 5.5% to Results-out-in-the-field, so fixing the mechanism for limiting work to non-performing systems would help with that. But 5.5% (think of it as 0.055) really isn't much in the overall scheme of things: fixing those systems would result in a better buffer before the servers start to have issues, but it wouldn't be enough to enable any meaningful increase in the server-side limits, IMHO.

There are plans (photos have been posted) to upgrade the Upload server (which also does file deletion, database purging, and maybe validating), which will hopefully alleviate the present server issues and allow the server-side limits to be increased.

Grant
Darwin NT
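The relationship Grant describes between hourly throughput and the Results-out-in-the-field total is essentially Little's Law: tasks in flight ≈ return rate × average time each task spends in the system. A minimal sketch, using illustrative numbers rather than actual SETI@home figures:

```python
def results_in_field(returned_per_hour: float, turnaround_hours: float) -> float:
    """Little's Law: items in the system = throughput * average time in system."""
    return returned_per_hour * turnaround_hours

# For a fixed throughput and turnaround time, the in-flight total is fixed;
# shrinking client caches only shifts tasks from "In progress" to
# "Validation pending" -- the All tasks total stays the same.
print(results_in_field(returned_per_hour=140_000, turnaround_hours=48))
```

This is why TBar's observation holds: the All tasks number is set by throughput and turnaround time, not by how the in-flight work is split between caches and pending validation.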
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

IMHO what's unrealistic is to keep the size of the WU small (around 720K) with the current top crunchers who can crunch a WU in less than 30 sec.

. . Sadly I cannot see that working either. If they created a separate WU format for the heavy hitters, then those tasks could only be validated against other heavy hitters, and I do not think there are high enough numbers of these to make this work; it would rather defeat the purpose of the validation process itself. If they increased the size of ALL WUs, then the slow machines, which we are rapidly losing even now, would cause their owners to lose interest even more quickly. If they are only doing a few tasks a day, would they still bother if that number drops by even half?

. . I still think daily limits that take into account the daily productivity of the individual host would be the most workable solution for the project. They already maintain the information about each host's productivity, so a mechanism that allows the schedulers to use this info to multiply the basic limits accordingly would require a smaller change in the system (so it seems to me, anyway). That way the slow machines can still function under their current limits, but machines that produce large multiples of the work done by slower rigs can receive work in multiples of the basic limit. A machine producing 200 valid results per day keeps the current limit of 100 per device, but a machine producing 1000 valid units a day can receive 500 per device. No need for spoofing then, and it remains under the control of the project/servers rather than various and sundry workarounds out in the field.

Stephen
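Stephen's proposal can be sketched as a simple scaling rule. This is a hypothetical illustration of the idea, not actual SETI@home scheduler code; the constants come from his example (a base limit of 100 tasks per device, scaled by daily valid results in units of 200):

```python
BASE_LIMIT = 100   # current per-device server-side limit
BASE_DAILY = 200   # daily valid results assumed for a base-limit host

def scaled_limit(daily_valid_results: int) -> int:
    """Per-device limit scaled by the host's measured daily output.
    Never drops below the existing base limit, so slow hosts are unaffected."""
    multiplier = max(1, daily_valid_results // BASE_DAILY)
    return BASE_LIMIT * multiplier

print(scaled_limit(200))   # a typical host keeps the current limit of 100
print(scaled_limit(1000))  # a heavy hitter gets five times the base limit, 500
```

The design point is that the scaling uses data the project already tracks per host, so no client-side change (or spoofing) is needed.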
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

Setting additional days of work to 0.01 will cause the client to ask for work at every scheduler connection.

. . Also when that value is set to 0.00, which is what I run at ...
. . My rigs ask for work at every request interval.

Stephen
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Setting additional days of work to 0.01 will cause the client to ask for work at every scheduler connection.

I tried to set it lower than 0.01 and the web page would not accept any lower value. You save your changes, but the value never updates to anything other than 0.01.
Stargate (SA) · Joined: 4 Mar 10 · Posts: 1854 · Credit: 2,258,721 · RAC: 0

I've tried it and saved it; now it shows just "0". How would this work?
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

I tried to set it lower than 0.01 and the web page would not accept any lower value. You save your changes but the value never updates to anything other than 0.01.

. . OK, I often feel I live in the twilight zone; it seems things behave entirely differently for me than for other people ...
. . I have rechecked ALL my machines and every one has accepted a value of zero, and they are running 3 different versions of the BOINC client & BOINC Manager. One Linux machine is running 7.2.42, the others are running 7.14.2, and the Windows machine is running 7.6.33. How bizarre ...

Stephen
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874

Are the spoofing mods published anywhere under GPL? I might try my hand at Linux compilation...
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799

Are the spoofing mods published anywhere under GPL? I might try my hand at Linux compilation...

AFAIK No.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.