Panic Mode On (55) Server problems?
Author | Message |
---|---|
James Sotherden Send message Joined: 16 May 99 Posts: 10436 Credit: 110,373,059 RAC: 54 |
Just checked the servers and see no MB work, lots of AP. Tried to get me some but no luck; got two lousy 4-minute GPU tasks, one of which took 5 minutes of button abuse to get, and the other refused to download. Old James |
Mad Fritz Send message Joined: 20 Jul 01 Posts: 87 Credit: 11,334,904 RAC: 0 |
... Too late ;-) Many thanks, this proxy works for me ATM as well. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
Well, I don't know where that leaves my top rig....... Apparently not 400 per card........ Last night I had 1335 WUs for the GPUs on my top rig, of which 863 were VHAR. This morning, I have 677 left, of which 328 are VHAR. And still banging its head against the 'in process' limit. Meowwwarrggghhhhhhhhhhhhhhhhhhhhh.......... "Freedom is just Chaos, with better lighting." Alan Dean Foster |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
And it appears the splitters are slowing down even more... I suspect we shall be observing some chinks in the Cricket graph shortly. Need some more assimilator power online to deal with the shorty influx. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
And it appears the splitters are slowing down even more... Yeah, and this 4 WU limit is getting to be a real drag on my 4-GPU PC (2 GTX295 cards; it's capable of holding 3)... I keep getting pestered to do AP v505 too... The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
And it appears the splitters are slowing down even more... It's not a 4 WU limit.... It appears to be 400 per host with a GPU (for the GPU cache). Your DCF has probably gone so low that you are only requesting 1 task per GPU. See the 'cannot get any cache built up' thread. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
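[Editor's note: the DCF effect described in the post above can be sketched as a toy model. This is not the real BOINC client/scheduler code; the function names, the 4-task queue, the cache target, and all numeric values here are illustrative assumptions.]

```python
# Toy model of BOINC's Duration Correction Factor (DCF).
# The client scales the server's raw runtime estimate by DCF, then
# requests only enough seconds of work to top up its cache. A run of
# fast "shorty" tasks drives DCF very low, so the client thinks its
# queue is nearly full and asks for very little. Numbers are made up.

def estimated_runtime(raw_estimate_s: float, dcf: float) -> float:
    """Client-side runtime estimate for one task."""
    return raw_estimate_s * dcf

def work_request_s(cache_target_s: float, queued_runtime_s: float) -> float:
    """Seconds of work the client asks the scheduler for."""
    return max(0.0, cache_target_s - queued_runtime_s)

raw = 3600.0          # server's raw estimate: 1 hour per task
low_dcf = 0.05        # DCF beaten down by many fast shorties
tasks_queued = 4

est = estimated_runtime(raw, low_dcf)     # 180 s per task
queued = tasks_queued * est               # 720 s of queued work
req = work_request_s(1800.0, queued)      # client asks for ~18 min of work

# The scheduler converts the request back to tasks using its own,
# unscaled estimate -- so a tiny request yields very few tasks:
granted = max(1, int(req // raw))
print(est, req, granted)                  # -> 180.0 1080.0 1
```

Under this toy model the host with a crushed DCF is granted a single task per request even though its cache is nearly empty, which matches the "only requesting 1 task per GPU" symptom.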
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
And it appears the splitters are slowing down even more... Ok, maybe it has, oh wise non-machine liter (pun intended). So I'm essentially in limp-along mode, figures. :( The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
And it appears the splitters are slowing down even more... You might try my suggestion of slowing your GPUs way down for a bit and see if the work requests don't pick up again. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
OK.... I haven't changed anything on my three speedier computers with GPUs running Lunatics since this whole thing started. Two 460s and a 560Ti (2 WUs simultaneously on each) ran completely out of GPU work. Now all three of them are maintaining a cache of work between 230-235 WUs, total. It might not mean anything, but since the caches are grouped so tightly together, it might. (we may have accidentally stumbled into one of those "math tricks" where you add a magic number to any number and divide by another given number... and the result is you'll always get your birthday) These three computers are "about" equally quick with a GPU work unit. The fourth machine (P4 with GT 240) -- which is substantially slower with a GPU work unit -- had a nice big cache, but that cache is now 330-ish and falling. It will be interesting to see if it, too, gets into that 230 range and begins to hold. I hope I don't get the chance to observe that, but if all four fall to that 230 WU cache level, then I would be comfortable guessing that the formula (intentional or not) is resulting in a max cache of about 230 WUs. If it falls through the 230-cache level, I'll feel comfortable guessing that the formula is yielding a really short "time restriction." It will take another couple of days for me to think I can glean anything from what I'm seeing at this altitude (not getting into each computer's individual DCFs, etc). But I continue to leave my computers alone so that there is a chance I can observe something. I hesitate to get into trying to straighten this out by hand for fear that I'll defeat (or fight) the servers' attempts to compensate and just make long-term matters worse. I just don't understand the problem well enough to think I can fix it. |
S@NL Etienne Dokkum Send message Joined: 11 Jun 99 Posts: 212 Credit: 43,822,095 RAC: 0 |
well, as a non-GPU user I might not have a high tasks-in-progress limit, but when on both machines I get a max of 25 tasks per core, and they are all shorties which take about 35 to 40 minutes to complete, I wonder how this will help the network problems as reported by Jeff yesterday. Every time my machines contact Berkeley (every 5 min. to fetch work) I report 1 and get 1 WU. This can't be helpful to the load on the servers... Well, all there is to do is wait and see what comes... |
kepan Send message Joined: 17 Sep 99 Posts: 7 Credit: 27,442,770 RAC: 0 |
I'm still having problems uploading/reporting/downloading WUs. The problems are like those in August. /Per, Sweden |
Dimly Lit Lightbulb 😀 Send message Joined: 30 Aug 08 Posts: 15399 Credit: 7,423,413 RAC: 1 |
Lil' ol' mini cruncher Sparky has reached a limit on tasks in progress, sayeth the messages/event log. Humbug :). |
Bill Beeman Send message Joined: 15 May 99 Posts: 11 Credit: 7,722,342 RAC: 0 |
Looks like the router needs reloading again. I managed to get some uploaded by using Hotspot Shield, but it is blind to all my machines without that. Same symptoms as before. |
Mad Fritz Send message Joined: 20 Jul 01 Posts: 87 Credit: 11,334,904 RAC: 0 |
Lil' ol' mini cruncher Sparky has reached a limit on tasks in progress, sayeth the messages/event log. Humbug :). At least you get messages - for some others and me the HE-router is acting like a firewall ^^ |
Dave Barstow Send message Joined: 14 May 99 Posts: 76 Credit: 15,064,044 RAC: 0 |
|
Akio Send message Joined: 18 May 11 Posts: 375 Credit: 32,129,242 RAC: 0 |
I had to "no more tasks" my Seti... I had a plethora of shorties. As far as the uploads, everything is going smoothly for me. I saw a FB post from Seti@Home that there were issues, but everything seems to be working fine on my end. Are there still ongoing issues with people having trouble with downloads and uploads? |
Mad Fritz Send message Joined: 20 Jul 01 Posts: 87 Credit: 11,334,904 RAC: 0 |
Yes.
_______________________________

```
PS C:\> tracert -h 15 208.68.240.13

Tracing route to boinc2.ssl.berkeley.edu [208.68.240.13]
over a maximum of 15 hops:

  1     2 ms     1 ms     1 ms  192.168.62.1
  2    11 ms    12 ms     9 ms  217-162-191-1.dynamic.hispeed.ch [217.162.191.1]
  3    10 ms     9 ms     9 ms  217-168-54-61.static.cablecom.ch [217.168.54.61]
  4    11 ms    11 ms    11 ms  172.31.208.69
  5   122 ms   120 ms   119 ms  84-116-130-49.aorta.net [84.116.130.49]
  6   120 ms   119 ms   150 ms  us-was03a-rd1-xe-0-3-0.aorta.net [84.116.130.66]
  7   122 ms   121 ms   121 ms  us-nyc01c-rd1-ge-15-0-0.aorta.net [84.116.130.161]
  8   123 ms   121 ms   121 ms  us-nyc01b-ri1-xe-4-1-0.aorta.net [213.46.190.98]
  9   122 ms   124 ms   124 ms  core1.nyc4.he.net [198.32.118.57]
 10   192 ms   191 ms   194 ms  10gigabitethernet10-2.core1.sjc2.he.net [184.105.213.197]
 11   193 ms   200 ms   199 ms  10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
 12     *        *        *     Request timed out.

PS C:\> tracert -h 15 208.68.240.16

Tracing route to setiboincdata.ssl.berkeley.edu [208.68.240.16]
over a maximum of 15 hops:

  1     2 ms     1 ms     1 ms  192.168.62.1
  2     9 ms     9 ms     9 ms  217-162-191-1.dynamic.hispeed.ch [217.162.191.1]
  3     9 ms    10 ms     8 ms  217-168-54-61.static.cablecom.ch [217.168.54.61]
  4    11 ms    11 ms    14 ms  172.31.208.69
  5   120 ms   122 ms   123 ms  84-116-130-49.aorta.net [84.116.130.49]
  6   120 ms   120 ms   119 ms  us-was03a-rd1-xe-0-3-0.aorta.net [84.116.130.66]
  7   122 ms   120 ms   120 ms  us-nyc01c-rd1-ge-15-0-0.aorta.net [84.116.130.161]
  8   122 ms   124 ms   120 ms  us-nyc01b-ri1-xe-4-1-0.aorta.net [213.46.190.98]
  9   124 ms   121 ms   122 ms  core1.nyc4.he.net [198.32.118.57]
 10   197 ms   199 ms   199 ms  10gigabitethernet10-1.core1.sjc2.he.net [184.105.213.173]
 11   192 ms   191 ms   195 ms  10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
 12     *        *        *     Request timed out.
```
|
```
PS C:\> tracert -h 15 208.68.240.18

Tracing route to boinc2.ssl.berkeley.edu [208.68.240.18]
over a maximum of 15 hops:

  1     2 ms     1 ms     1 ms  192.168.62.1
  2     9 ms     9 ms    10 ms  217-162-191-1.dynamic.hispeed.ch [217.162.191.1]
  3     8 ms     9 ms     9 ms  217-168-54-61.static.cablecom.ch [217.168.54.61]
  4    25 ms    10 ms    11 ms  172.31.208.69
  5   138 ms   122 ms   119 ms  84.116.134.25
  6   120 ms   119 ms   119 ms  us-was03a-rd1-xe-1-3-0.aorta.net [84.116.130.70]
  7   120 ms   122 ms   119 ms  us-nyc01c-rd1-ge-15-0-0.aorta.net [84.116.130.161]
  8   120 ms   119 ms   119 ms  us-nyc01b-ri1-xe-4-1-0.aorta.net [213.46.190.98]
  9   127 ms   123 ms   125 ms  core1.nyc4.he.net [198.32.118.57]
 10   190 ms   189 ms   199 ms  10gigabitethernet10-2.core1.sjc2.he.net [184.105.213.197]
 11   203 ms   200 ms   199 ms  10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
 12     *        *        *     Request timed out.

PS C:\> tracert -h 15 208.68.240.20

Tracing route to setiboinc.ssl.berkeley.edu [208.68.240.20]
over a maximum of 15 hops:

  1     2 ms     1 ms     1 ms  192.168.62.1
  2     9 ms     9 ms     9 ms  217-162-191-1.dynamic.hispeed.ch [217.162.191.1]
  3     9 ms     9 ms     9 ms  217-168-54-61.static.cablecom.ch [217.168.54.61]
  4    10 ms    11 ms    12 ms  172.31.208.69
  5   118 ms   118 ms   118 ms  84-116-130-53.aorta.net [84.116.130.53]
  6   120 ms   119 ms   119 ms  us-was03a-rd1-xe-1-3-0.aorta.net [84.116.130.70]
  7   120 ms   119 ms   119 ms  us-nyc01c-rd1-ge-15-0-0.aorta.net [84.116.130.161]
  8   121 ms   119 ms   119 ms  us-nyc01b-ri1-xe-4-1-0.aorta.net [213.46.190.98]
  9   122 ms   124 ms   125 ms  core1.nyc4.he.net [198.32.118.57]
 10   191 ms   190 ms   199 ms  10gigabitethernet10-2.core1.sjc2.he.net [184.105.213.197]
 11   190 ms   191 ms   195 ms  10gigabitethernet3-2.core1.pao1.he.net [72.52.92.69]
 12     *        *        *     Request timed out.
```
|
Rick Send message Joined: 3 Dec 99 Posts: 79 Credit: 11,486,227 RAC: 0 |
Seem to be getting tasks now but downloads are still a bit iffy. I just got a set of 18 AP tasks that are trying to download. Hadn't noticed it before, but those are 8 MB each. If things were flowing normally I would be getting transfers at between 30 and 40 KBps, but now I'm lucky to see 10 KBps on most of my downloads. Also see that the estimate for these tasks is about 31 hours. |
soft^spirit Send message Joined: 18 May 99 Posts: 6497 Credit: 34,134,168 RAC: 0 |
It seems I have finally hit the current server side cap of 450 units. Janice |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
It seems I have finally hit the current server side cap of 450 units. Mine seems to be at 631 right now. :D All because of the flops setting in my SETI xml file. Of course I put BOINC back to its normal summer schedule of 6-8 hours a night, as I'm not crazy about the room being 82F on humid cloudy days. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
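[Editor's note: a sketch of why a flops setting changes cache size, as mentioned above. BOINC derives a task's runtime estimate from the work unit's operation count divided by the app's declared flops; every number below (the operation count, the flops values, the cache size) is an illustrative assumption, not taken from the posts.]

```python
# A larger declared flops value shrinks the per-task runtime estimate,
# so more tasks fit inside a fixed-size (in days) work cache.
rsc_fpops_est = 3.0e13      # assumed operation-count estimate per WU

def estimate_s(flops: float) -> float:
    """Estimated runtime in seconds for one work unit."""
    return rsc_fpops_est / flops

cache_s = 2 * 24 * 3600     # a two-day cache, in seconds

default_flops = 1.0e9       # conservative benchmark-derived value
tuned_flops = 5.0e9         # value declared via a flops setting

print(int(cache_s / estimate_s(default_flops)))  # -> 5 tasks fit
print(int(cache_s / estimate_s(tuned_flops)))    # -> 28 tasks fit
```

That is how a tuned flops value can push a host's queue well past what hosts with default estimates are holding, until a hard per-host task cap kicks in.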
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.