Panic Mode On (107) Server Problems?
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Huh?? "Not sending work - last request too recent:" is the exact correct programmed response from the servers if you hit Update before the normal project 305 second timeout between connections. Seti@Home classic workunits:20,676 CPU time:74,226 hours ![]() ![]() A proud member of the OFA (Old Farts Association) |
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
He said NOT receiving the response.
"Time is simply the mechanism that keeps everything from happening all at once."
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Sorry, missed that somehow. I have always received the timeout response when clicking Update too soon.
Joined: 6 Jun 02 · Posts: 1668 · Credit: 623,086,772 · RAC: 156
Hi, I had my GPUs running out of work on Tuesdays and on other weekdays too. 'Not sending work' - 'No tasks available', bla bla blah. I took a look at the BOINC client source code and found that there was a hard-coded limit of 1000 WUs per host. I removed that. Next I made the client tell the servers that I have four times the GPUs I actually have. Now I have 100 CPU + 4 * 4*100 GPU tasks in the cache constantly. No problems any more. The cache survives most of the Tuesday outage(s) and the servers refusing to send Arecibo VLARs to NVIDIA (special app) hosts.
To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
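Petri doesn't post the actual diff, so the following is only a rough sketch of what the two client-side changes he describes could look like. The constant name, its location, and the helper function are assumptions for illustration, not the real BOINC client source.

```cpp
// Hypothetical sketch of the two changes described above; names do not
// necessarily match anything in the real BOINC client source tree.

// 1) A hard-coded cap on tasks queued per host, which the post says was
//    found in the client source and removed in a private build.
const int MAX_TASKS_PER_HOST = 1000;   // stock value; removed/raised in the modified build

// 2) Inflating the GPU count reported to the scheduler, so the server's
//    per-GPU limit (100 tasks each) is applied four times over per real GPU.
int gpus_to_report(int real_gpu_count) {
    const int inflation_factor = 4;    // "four times the GPUs I actually have"
    return real_gpu_count * inflation_factor;
}
```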
Stephen "Heretic" ![]() ![]() ![]() ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 ![]() ![]() |
Hi, I wish I had your skills; it would be nice to feel buffered against maintenance days.
Stephen :)
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13886 · Credit: 208,696,464 · RAC: 304
Woke up to find the cache running down. Did TBar's trick; it re-filled on the next automatic request. I don't run standby projects. After adding AP to my system, the issue isn't nearly as bad as it was, but it still occurs. I have never been able to get more than 54 WUs on a single request, no matter how low the cache is. It all started back in Dec of last year, when people who chose to run AP only, with "If no work for selected applications is available, accept work from other applications?" set to Yes, were no longer getting any MBv8 work when there was no AP available, and had to set "Run only the selected applications Seti@home v8" to Yes as well to receive any.
EDIT: the problem getting work seems to be greatest when there is no AP work being split, and when (for whatever reason) very little GBT work is available for download.
Grant
Darwin NT
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
Hi, I wish the project was able to support such caches for everybody so that cheating the code was not required. Meow.
Joined: 6 Jun 02 · Posts: 1668 · Credit: 623,086,772 · RAC: 156
Hi, I think the same. The project should make the cache relative to RAC. My cache now lasts 1 h 20 min for shorties and 7 h for guppi work. If I had only one CPU and no GPUs, I should have a cache of at most 2 WUs per real core.
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
Well, whatever the compromise, I do feel that there is some relatively simple way to set a sliding scale. A factor of RAC might work, or average turnaround time. An old CPU with the least capable GPU could run for many days on 100 WUs for each. A 16-core CPU with a GTX 1080... not so much. And a computer running 'special sauce' needs more work to stay happy as well, and it would have a higher RAC and a shorter turnaround time. So one of the two should be a workable multiplier to use in cache adjustment for fast crunchers. Meow.
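Purely as a discussion aid, here is a sketch of what such a sliding-scale limit might look like if it were computed server-side from RAC or average turnaround time. The formula and every constant in it are invented for illustration; nothing like this is known to be implemented by the project.

```cpp
#include <algorithm>

// Illustrative only: a made-up "sliding scale" per-host task limit based on
// RAC or average turnaround time, as suggested in the posts above.
int per_host_task_limit(double rac, double avg_turnaround_hours) {
    const int base_limit = 100;    // current flat per-device limit
    const int hard_cap   = 1000;   // never exceed the overall per-host cap

    // Faster hosts (high RAC, short turnaround) earn a larger multiplier.
    double rac_factor        = rac / 20000.0;   // ~1.0 for a mid-range host (assumed scale)
    double turnaround_factor = 24.0 / std::max(avg_turnaround_hours, 1.0);
    double multiplier = std::clamp(std::max(rac_factor, turnaround_factor), 1.0, 10.0);

    return std::min(static_cast<int>(base_limit * multiplier), hard_cap);
}
```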
Al · Joined: 3 Apr 99 · Posts: 1682 · Credit: 477,343,364 · RAC: 482
Do I remember correctly that these limits were put in place back in a time when there was a dearth of work available? What with the newest types of work being added in the last year or so, hasn't that concern been alleviated? When work is in short supply, we want to parcel it out equitably to all; that is only fair for everyone who has graciously chosen SETI to crunch for, regardless of their system's output. But when there is work aplenty, and we have systems that have the capability to really crank it out, wouldn't it make sense to upgrade the cache sizes to be optimal for whatever the system can produce? I do remember a similar discussion about this a while back, and I think one of the responses regarding it was the effect it had on the database, but maybe I am misremembering.
Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242
Don't want to be a downer, but since Dr. A has moved on... I don't think any of this is going to change...
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
No, if memory serves, the limits were put in place to cut down on the database size, which the servers were having problems maintaining, resulting in a lot of instability, crashes, and downtime. Not because work was scarce.
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
And I don't think that Dr. Anderson had anything to do with the limits. Other than perhaps providing the code to implement them. I am pretty sure they are a project-specific limit, not a BOINC limit.
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
The rate of transactions per second the servers have to do is based just on how many tasks are returned, regardless of how big the caches are for every host, fast or slow. The impact would be on the size of the database tables holding the tasks out in the field, since that would increase if caches were enlarged for the fastest hosts running the special app or hosts with many GPUs per machine.
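To put rough numbers on that distinction (back-of-the-envelope figures, not project statistics): a host returning about 1,000 tasks a day produces roughly 1000 / 86400 ≈ 0.012 scheduler transactions per second, and that rate is the same whether its cache holds 200 tasks or 2,000. What a larger cache does change is how many rows that host keeps parked in the in-progress result table, which is where the database-size impact shows up.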
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
> And I don't think that Dr. Anderson had anything to do with the limits. Other than perhaps providing the code to implement them.

Maybe I misunderstood Petri's post. He said he recompiled the BOINC application, NOT the SETI applications. So the hard limit is in the platform and not specific to the project.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13886 · Credit: 208,696,464 · RAC: 304
> And I don't think that Dr. Anderson had anything to do with the limits. Other than perhaps providing the code to implement them.

The 1000 total WU limit is in BOINC. The 100 WUs per CPU/GPU limit is a Seti one.
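For context, a per-device in-progress limit like the SETI one is the sort of thing a BOINC project can set in its scheduler configuration rather than in the client. Below is a rough sketch of such a config.xml fragment; the values simply echo the 100-per-device figure discussed in this thread and are not copied from SETI's actual configuration.

```xml
<config>
  <!-- Illustrative values only: caps on tasks in progress per CPU and per GPU -->
  <max_wus_in_progress>100</max_wus_in_progress>
  <max_wus_in_progress_gpu>100</max_wus_in_progress_gpu>
</config>
```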
kittyman · Joined: 9 Jul 00 · Posts: 51511 · Credit: 1,018,363,574 · RAC: 1,004
> And I don't think that Dr. Anderson had anything to do with the limits. Other than perhaps providing the code to implement them.

What I meant is that Dr. Anderson did not determine the limits or hard-wire them into BOINC, as best as I know. They are contained in BOINC, but they are set by the project administrators. In this case, they were probably set by Eric. And according to Grant's post, made whilst I was typing this, the ultimate 1000 WU limit may be hardwired into BOINC. So I may not have been 100% correct. Meow.
Al · Joined: 3 Apr 99 · Posts: 1682 · Credit: 477,343,364 · RAC: 482
> The 1000 total WU limit is in BOINC.

So the 1000 limit is circumventable with the proper coding knowledge, but the 100 is client side and is set in stone?
Stephen "Heretic" ![]() ![]() ![]() ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 ![]() ![]() |
Hi, yep, that would be a nice solution as well, even better in fact.
Stephen <sigh>
Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242
I think there are two different places for limits: the SETI limit, which says 100 per processing unit (which we are all familiar with), and then the BOINC limit which Petri pointed out is 1000 per host. Of course this raised an interesting question:

> Next I made the client tell the servers that I have four times the GPUs I actually have. Now I have 100 CPU + 4 * 4*100 GPU tasks in the cache constantly.

i.e. 1700, except... In progress (3387). lol. Oh to have those coding skills... ;) Just pulling your leg Petri... You do great work.