Message boards : Number crunching : Panic Mode On (109) Server Problems?
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
I kept threatening, mainly toggling preferences and the Triple Update. Haven't resorted to kicking the server. Cache is full right now. Not going to do anything. Will have to see where I am at in the morning. Calling it a night.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
rob smith · Joined: 7 Mar 03 · Posts: 22790 · Credit: 416,307,556 · RAC: 380
Thanks Richard - I woke up in the "wee small hours" and thought "there's something else to do with APR" and you got to the keyboard before I did.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Joined: 1 Apr 13 · Posts: 1859 · Credit: 268,616,081 · RAC: 1,349
I test ran some Einstein work and it bogged my rig down so far it became totally unusable for anything else. That was during the great SETI outage of '16 and I haven't tried it since.
Just curious, was that CPU work, GPU work or a combination? Since I run Einstein only as filler for SETI, I set it up for GPU-only, as running the SETI CPU cache dry is not an issue for me.
Stephen "Heretic" ![]() ![]() ![]() ![]() Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 ![]() ![]() |
I kept threatening, mainly toggling preferences and the Triple Update. Haven't resorted to kicking server. Cache is full right now. Not going to do anything. Will have to see where I am at in the morning. Calling a night. . . I hope you got a good night's sleep :) Stephen :) |
Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
I don't know what you guys are doing; the time estimates on both of my boxes with GTX1060s are very accurate. I run 2 at a time and they process at the rate of 4 an hour, so they estimate 30 min a task. I was running my R9 390x 24/7 on it for a few months until I rotated to Milkyway. The only work-fetch issue I had was not filling my 5+5 day cache queue. I was only getting ~8 days' worth of tasks.
SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
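Just to spell out the arithmetic behind that estimate, a trivial check (the numbers are simply the ones quoted in the post above):

```python
# Two tasks run concurrently and four finish per hour, so each task
# occupies a slot for: concurrent * 60 / throughput minutes.
concurrent = 2            # tasks running at a time on the GTX 1060
finished_per_hour = 4     # observed throughput

minutes_per_task = concurrent * 60 / finished_per_hour
print(minutes_per_task)   # 30.0 -> matches the ~30 min estimates
```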
Joined: 14 Mar 12 · Posts: 5375 · Credit: 30,870,693 · RAC: 1
Just curious, was that CPU work, GPU work or a combination?
It was a combination. It crashed everything so bad, I just stopped the requests and let the cache run dry. Haven't messed with it since, as SETI has been 'Steady'.....
"Sour Grapes make a bitter Whine." <(0)>
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Just curious, was that CPU work, GPU work or a combination?
It was a combination. It crashed everything so bad, I just stopped the requests and let the cache run dry. Haven't messed with it since, as SETI has been 'Steady'.....
I believe you had the same trouble some of us had in the past. It's the way the cache works when you run both projects. If you use your normal SETI cache setting, say 3 days, at E@H, then E@H downloads a lot of WUs and makes your host "hostage" to that amount of work. The fix is to run both projects with E@H as a backup project: just put 0 (zero) as the resource share in your E@H settings. That allows your host to download just 1 E@H WU at a time and crunch it, but only when your SETI cache runs dry. As soon as SETI work is available again it will go back to filling your cache with SETI WUs, and they will start as soon as the E@H task finishes crunching, or in 60 min (you can change that too). Hope that helps.
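To make the resource-share-0 behaviour a little more concrete, here is a rough sketch of the idea. This is not BOINC source code, just a toy model of the backup-project logic Juan describes, with made-up function and field names. (Resource share itself is set in each project's web preferences, not in the local client.)

```python
# Toy model of BOINC's "backup project" behaviour: a project with
# resource share 0 is only asked for work when a device would
# otherwise go idle, and then only for enough to keep it busy.
def pick_project_to_ask(projects, device_would_be_idle):
    normal = [p for p in projects if p["resource_share"] > 0]
    backup = [p for p in projects if p["resource_share"] == 0]

    # Projects with a positive share are asked to fill the whole cache.
    for p in normal:
        if p["has_work"]:
            return p, "fill cache"
    # The zero-share project is only tried when nothing else can supply
    # work, and gets a minimal request (one task per idle device).
    if device_would_be_idle and backup:
        return backup[0], "one task per idle device"
    return None, "no request"

projects = [
    {"name": "SETI@home",     "resource_share": 100, "has_work": False},
    {"name": "Einstein@Home", "resource_share": 0,   "has_work": True},
]
print(pick_project_to_ask(projects, device_would_be_idle=True))
# -> Einstein only gets asked because SETI has nothing to send.
```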
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
I kept threatening, mainly toggling preferences and the Triple Update. Haven't resorted to kicking the server. Cache is full right now. Not going to do anything. Will have to see where I am at in the morning. Calling it a night.
Down about 80 tasks in the caches. Doesn't help that overnight all the machines eventually sync up in their work request timing. Never figured out why; something to do with BoincTasks monitoring and server-set GPU backoffs or whatever. A triple update on all machines, staggered by a minute, got everyone full.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66
I don't know what you guys are doing; the time estimates on both of my boxes with GTX1060s are very accurate. I run 2 at a time and they process at the rate of 4 an hour, so they estimate 30 min a task.
I did, because of the averaging of run times between the CPU and GPU. With no CPU work being done at Einstein, the Seti CPU work is not affected by the Einstein GPU work.
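A small illustration of why one shared DCF drags GPU estimates around when CPU tasks overrun. The numbers are invented and the update rule is a deliberately crude stand-in, not BOINC's actual DCF algorithm:

```python
# One duration correction factor is shared by all of a project's tasks,
# so CPU tasks that run long drag the GPU estimates up with them.
raw_estimate = {"cpu": 3.0, "gpu": 0.5}   # hours, as sent by the server
actual       = {"cpu": 6.0, "gpu": 0.5}   # hours, what really happened

dcf = 1.0
for kind in ("cpu", "gpu"):
    # Crude stand-in for DCF growth: jump up to the worst observed ratio.
    dcf = max(dcf, actual[kind] / raw_estimate[kind])

print({kind: raw_estimate[kind] * dcf for kind in raw_estimate})
# {'cpu': 6.0, 'gpu': 1.0} -> GPU estimates doubled even though GPU run
# times never changed. With no CPU work from Einstein, that
# cross-contamination goes away.
```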
Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242
The worst offender is Einstein which still uses DCF and DOESN'T use APR.
As Juan pointed out, if you leave Einstein as a backup, it will only supply you enough to crunch at that moment when you ask for work, i.e. 1 work unit per card until you finish that work unit. However, if you use it as your main project, then you need to edit your preferences for how many days' worth of work. For SETI I have my days set to 10 and 0.1 extra. However, when I make Einstein my main project, I change my cache to 0.1 day and 0.1 extra; that prevents a huge amount from being downloaded. With GPUGrid it doesn't matter how you set your preferences, you get 1 per card and 1 extra only. So you just have to figure out the best configuration for each project. Oh, and if you forget to change 0.1 back to 10 when you move from Einstein to SETI, you figure it out quickly as you don't get the full allotment of 100 work units per GPU/CPU.....
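For anyone who prefers editing the file rather than the web preferences, those cache sizes live in global_prefs_override.xml as work_buf_min_days and work_buf_additional_days. A minimal sketch, assuming a typical Linux data directory (adjust the path for your install):

```python
# Write a small work buffer before letting a DCF-based project like
# Einstein fetch work, so it can't flood the cache.
from pathlib import Path

BOINC_DATA_DIR = Path("/var/lib/boinc-client")  # assumption; e.g. C:\ProgramData\BOINC on Windows

override = """<global_preferences>
   <work_buf_min_days>0.1</work_buf_min_days>
   <work_buf_additional_days>0.1</work_buf_additional_days>
</global_preferences>
"""

(BOINC_DATA_DIR / "global_prefs_override.xml").write_text(override)
# Then have the client re-read it, e.g. with
#   boinccmd --read_global_prefs_override
# or BOINC Manager's "Read local prefs file".
```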
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Just some additional info: if you set your resource share to 0 (zero) and you have more than 1 GPU, it will download 1 WU for each GPU. The same happens with CPU cores: if you have 4 available to crunch, it will download 4. At least that's how it works with Seti & E@H; GPUGrid works in a totally different way, as Zalster posted.
Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66
However, when I make Einstein my main project, I change my cache to 0.1 day and 0.1 extra; that prevents a huge amount from being downloaded
+1
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
That's fine and works if you only have one main project at a time. But if you run multiple projects at the same time, it does not.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835
Oh, and if you forget to change 0.1 back to 10 when you move from Einstein to SETI, you figure it out quickly as you don't get the full allotment of 100 work units per GPU/CPU.....
And if you forget to change 4.0+0.01 when changing to Einstein (with RS <> 0), you find E@H doesn't have a 100-task limit ... last time I did that, I turned my back for a few minutes and had, IIRC, 736 tasks ... way, way over-committed!
Joined: 29 Jun 99 · Posts: 11451 · Credit: 29,581,041 · RAC: 66
Oh, and if you forget to change 0.1 back to 10 when you move from Einstein to SETI, you figure it out quickly as you don't get the full allotment of 100 work units per GPU/CPU.....
And if you forget to change 4.0+0.01 when changing to Einstein (with RS <> 0), you find E@H doesn't have a 100-task limit ... last time I did that, I turned my back for a few minutes and had, IIRC, 736 tasks ... way, way over-committed!
Yep, you gotta be careful; very dangerous after the cocktail hour.
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
I kept threatening, mainly toggling preferences and the Triple Update. Haven't resorted to kicking the server. Cache is full right now. Not going to do anything. Will have to see where I am at in the morning. Calling it a night.
I've been seeing the same on my Linux machines today. Similar to a rolling blackout: the server will stop sending the tasks requested by the client and just send a few tasks at random. Once the host is down by around 100 tasks the server will recover and fill the cache. A while later the same will happen on a different machine. The current victim is down by about 70 tasks and just received 5 new tasks instead of the 70 or so the client is requesting. The 3-update routine hasn't had any effect so far. The cache should be around 220 on this machine: https://setiathome.berkeley.edu/results.php?hostid=6906726&offset=140
Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Oh, and if you forget to change 0.1 back to 10 when you move from Einstein to SETI, you figure it out quickly as you don't get the full allotment of 100 work units per GPU/CPU.....
And if you forget to change 4.0+0.01 when changing to Einstein (with RS <> 0), you find E@H doesn't have a 100-task limit ... last time I did that, I turned my back for a few minutes and had, IIRC, 736 tasks ... way, way over-committed!
Ha! LOL. Been there ..... done that. I have you beat: I forgot to switch to NNT for an hour once and accumulated over 5000 tasks. Couldn't even abort them all in one shot and had to take whacks at a couple hundred at a time.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242
Oh, and if you forget to change 0.1 back to 10 when you move from Einstein to SETI, you figure it out quickly as you don't get the full allotment of 100 work units per GPU/CPU.....
And if you forget to change 4.0+0.01 when changing to Einstein (with RS <> 0), you find E@H doesn't have a 100-task limit ... last time I did that, I turned my back for a few minutes and had, IIRC, 736 tasks ... way, way over-committed!
Dang, I hate when that happens... Wait... Hold this....
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13938 · Credit: 208,696,464 · RAC: 304
The splitter output has fallen even further. They were good for 50+/s, then it dropped down to around 42/s; now they're struggling to provide 30/s. That's about 108,000 per hour. Unfortunately current demand is 130,000/hr minimum, averaging around 135,000. We need 39/s as a minimum to meet peak demand (140,000/hr) and keep a ready-to-send buffer with the present load. In a few hours there will be no work left in the ready-to-send buffer and caches will start to run down (more than they normally do) and not get refilled till the splitter output recovers. I think Eric might need to do some further splitter troubleshooting. Or it could be related to the general server system malaise: the replica keeps dropping behind, and the WU deleters likewise can't keep up.
Grant
Darwin NT
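Spelling out those figures (simple arithmetic on the numbers quoted above):

```python
splitter_rate = 30        # results/second currently coming off the splitters
demand_avg    = 135_000   # results/hour being sent out on average
demand_peak   = 140_000   # results/hour at peak

supply_per_hour = splitter_rate * 3600           # 108,000/hour
buffer_drain    = demand_avg - supply_per_hour   # ~27,000/hour pulled from ready-to-send
rate_needed     = demand_peak / 3600             # ~38.9/second, i.e. the "39/s minimum"

print(supply_per_hour, buffer_drain, round(rate_needed, 1))
```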
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Why do these things always happen on a Friday? TGIF cocktail hours? Oops, 5:10 PM, I'm late for the first one.