Message boards : Number crunching : Panic Mode On (104) Server Problems?
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835

The scheduler only has a couple hundred files at a time between reloads. One or two requests can easily empty it before it reloads.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

[quote]The scheduler only has a couple hundred files at a time between reloads.[/quote]
And it should only take fractions of a second to reload. Even after extended outages it's not (usually) this hard to get work.

Grant, Darwin NT
Jimbocous · Joined: 1 Apr 13 · Posts: 1856 · Credit: 268,616,081 · RAC: 1,349

Definitely not seeing any issues here; all caches remain full.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57

Checking the logs across my machines for the past few days, the only time I got a "Scheduler request completed: got 0 new tasks" response, it was followed by "This computer has reached a limit on tasks in progress".

SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
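The kind of log check HAL9000 describes can be scripted. A minimal sketch, assuming a plain-text BOINC event log; the two phrases matched are the ones quoted in this thread, and the function name is made up for illustration:

```python
# Count two scheduler outcomes in plain-text BOINC event-log lines:
# replies that granted no work, and "limit on tasks in progress" notices.
def summarize_sched_log(lines):
    got_zero = limit_hit = 0
    for line in lines:
        if "got 0 new tasks" in line:
            got_zero += 1
        if "reached a limit on tasks in progress" in line:
            limit_hit += 1
    return {"zero_task_replies": got_zero, "limit_messages": limit_hit}

sample = [
    "Scheduler request completed: got 0 new tasks",
    "This computer has reached a limit on tasks in progress",
    "Scheduler request completed: got 4 new tasks",
]
print(summarize_sched_log(sample))
```

If every zero-task reply is paired with a limit message, the host has simply hit its task quota; zero-task replies without the limit message point at the server-side problem discussed here.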
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Grant and Bruce, I'm having the same issues as you on one of my machines. It's only getting 4-6 tasks per request every 20 minutes or so, and most requests get zero. I'm down to about 100 tasks on that machine when I should have 300. The other two machines have full loads. That machine must have bad karma with the download server compared to my other machines.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768

I've been noticing similar behavior. It seems if I have both AstroPulse v7: yes & SETI@home v8: yes, I'm more likely to receive:
Sun Jan 15 12:39:34 2017 | SETI@home | Reporting 5 completed tasks
However, if it's set to AstroPulse v7: no & SETI@home v8: yes, I'm more likely to receive:
Sun Jan 15 12:46:43 2017 | SETI@home | Reporting 3 completed tasks
After receiving tasks I can set it back to AstroPulse v7: yes & SETI@home v8: yes, and receive tasks for a while. Later it goes back to "Project has no tasks available", even though there is room in the cache and tasks available.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

In the last 4 hours I've got 50 tasks. Should be out of work in the next few hours.

[quote]After receiving tasks I can set it back to AstroPulse v7: yes & SETI@home v8: yes, and receive tasks for a while. Later it goes back to "Project has no tasks available".[/quote]
More weirdness relating to work type settings? My settings are:
Run only the selected applications
AstroPulse v7: no
SETI@home v8: yes
If no work for selected applications is available, accept work from other applications? no

EDIT- ??? So I changed my settings to accept work for APv7 and to accept work from other applications (even though I can't process either). Hit update. The next scheduler request resulted in 52 WUs. The one following that resulted in 52 WUs. 51 WUs on the next request. And then another 23 WUs, cache full. Just like that. Next request: reported 4, got 4. Just like it used to be; just like that. ?????
Thanks TBar. Anyone else having issues might want to change the settings and see what results. I'll leave things as they are till I get home from work, and then we'll see what changing them back does.

Grant, Darwin NT
kittyman · Joined: 9 Jul 00 · Posts: 51477 · Credit: 1,018,363,574 · RAC: 1,004

Dunno what to say, except that I am not having any issues maintaining cache. My settings are yes, yes, and yes. Might have something to do with the fact that I am running a fleet of older GPUs, which are not as fast as some of the newer cards some of you are fronting, and do not crunch through my cache as quickly.

"Time is simply the mechanism that keeps everything from happening all at once."
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

[quote]My settings are yes, yes, and yes.[/quote]
That appears to be it. Try changing them to accept v8 work only and see if they stop getting work for a while. Settings that used to work no longer do. Sound familiar?

Grant, Darwin NT
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

[quote]In the last 4 hours I've got 50 tasks.[/quote]
Well, I have been set to Yes, Yes, Yes for years and am not getting work, apparently. I'm going to toggle AP and accept-other-work off and see what happens. Then toggle them all back to yes.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628

[quote]I've been noticing similar behavior. It seems if I have both AstroPulse v7: yes & SETI@home v8: yes, I'm more likely to receive;[/quote]
There was a discussion by people who run the settings AP=Yes, V8=No, and if no tasks are available then any other work=Yes. I never knew you could do that until this discussion. Apparently with 7.6.22 (and earlier) this worked fine at getting V8 tasks when there was no AP available, but with 7.6.33 it would result in them not receiving any work at all. This was referred to Eric, I believe, for a fix. Is it possible that the "fix" has resulted in some rigs running 7.6.33 not getting any work if AP is set to Yes?

Just an observation from the sidelines. I am running 7.6.33 and have both AP and V8 set to Yes, but mostly now I am getting work OK. Perhaps changing AP to No for a while and then changing it back might be successful...

Stephen <shrug>
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Well, my experiment with toggling AP and V8 off and then back on seems to have worked. I now have a full load of 300 tasks on the problem machine.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57

[quote]My settings are yes, yes, and yes.[/quote]
Most of my machines are in a venue that is set to allow only MB v8 work. I have 2 hosts that are in a venue with AP v7 enabled. So that setting hasn't affected my ability to get work.

SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

When I got home, the cache had started to run down again. Nowhere near as much as before, but still running down. After changing AP & accept-other-work back to No, the cache filled up again, where many earlier requests had resulted in no work, or only 1 or 2 WUs. So far all the following work requests have resulted in getting WUs equal in number to those reported.
Having to toggle those settings every 8-12 hours to continue to receive work is going to get very old, very quickly. :-/
I'm running the 7.6.22 64bit manager.

Grant, Darwin NT
Bruce · Joined: 15 Mar 02 · Posts: 123 · Credit: 124,955,234 · RAC: 11

I'm running BOINC 7.2.42, so I don't know if the problem is at the user level. The same thing is happening again tonight: gradually running out of work even though the RTS says it is full.
0 tasks sent
project has no tasks
And when I do receive work it is only 1 or 2 WUs. About 8am this morning the server sent enough work in 3 connections to fill my cache, and kept it full all day. Then tonight it is starting to run dry again. Are they working on the servers at night? How come it doesn't affect everyone? Weird! Whatever is causing this, I wish they would get it fixed. My account settings are set to yes, yes, and yes.

Bruce
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

[quote]My account settings are set to yes, yes, and yes.[/quote]
Change them to no, yes, no. Give it a couple of scheduler requests and see what happens. Going to Yes, Yes, Yes got me work in the early hours of the morning. When I got back from work the cache was shrinking again, so I changed it back to No, Yes, No and it filled up again.

EDIT- Seeing Bruce's post, I decided to check my cache again. And guess what, it was running down, with "Project has no tasks available" being the standard response to requests for work. So I changed my preferences to Yes, Yes, Yes. Several scheduler requests later, still "Project has no tasks available". So I changed it back to No, Yes, No and bingo! The project does have tasks available. On the first request after updating to pick up the changed settings, the cache re-filled, and the following requests allocated WUs matching the number of WUs reported.
WTF is going on?

Grant, Darwin NT
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304

[quote]Jan 4: Well, it appears that there has been no fix for the v8 checkbox bug yet.[/quote]
Has Eric been working on this issue? If so, whatever has been done has probably resulted in the current issue.

Grant, Darwin NT
Bruce · Joined: 15 Mar 02 · Posts: 123 · Credit: 124,955,234 · RAC: 11

[quote]My account settings are set to yes, yes, and yes.[/quote]
I tried changing my account settings to no, yes, no. On the first connection I received 4 tasks, then on the next several I only got 1 task each time. Now I'm back to no new tasks. I'll try running the settings through another cycle and see what happens.

Bruce
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768

It seems to be affecting the nVidia hosts more than the ATI hosts. Last night I set mine to yes, Yes, YES, and now the 2 NV hosts are down to around 80 & 50 tasks while the ATI hosts are the same.
Mon Jan 16 07:21:22 2017 | SETI@home | Reporting 2 completed tasks
Absent-mindedness, or just a short attention span? I suppose I'm going to have to change the settings to wake up the server.
Harri Liljeroos · Joined: 29 May 99 · Posts: 4649 · Credit: 85,281,665 · RAC: 126

The Haveland graphics have been fixed: https://setistats.haveland.com/ We have a higher number of tasks out in the field, about 6 million instead of the normal 4.2 million. Also, the number of results waiting to purge is almost 3 million, when normal used to be about 2.2 million. So the server has to work harder to keep up.
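For scale, the extra load Harri describes works out to roughly 43% more tasks in the field and 36% more results awaiting purge; a quick check of that arithmetic using the figures quoted above:

```python
# Percentage increase of the server-load figures quoted above,
# rounded to the nearest whole percent.
def pct_increase(old, new):
    return round((new - old) / old * 100)

print(pct_increase(4.2e6, 6.0e6))  # tasks out in the field
print(pct_increase(2.2e6, 3.0e6))  # results waiting to purge
```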
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.