Message boards :
Number crunching :
Panic Mode On (108) Server Problems?
Brent Norman (Joined: 1 Dec 99, Posts: 2786, Credit: 685,657,289, RAC: 835)
Another attempt at splitting under way ... still all errors :((
kittyman (Joined: 9 Jul 00, Posts: 51478, Credit: 1,018,363,574, RAC: 1,004)
> No work, no power consumption. Computer shut down.
And what would be the fun in that? Meow.
"Time is simply the mechanism that keeps everything from happening all at once."
Brent Norman (Joined: 1 Dec 99, Posts: 2786, Credit: 685,657,289, RAC: 835)
> Apparently the Master Science database (that's the long-term storage of completed results, not the day-to-day BOINC server that we interact with every day) crashed after 241 days of continuous running. It's restarted, and appears to be running OK, but I guess they're keeping it lightly loaded overnight so they can run further tests in daylight.
I wonder if that is the actual cause of the splitter issue, since the splitters failed 3 hours before maintenance while assimilation was still in progress for MB tasks.
kittyman (Joined: 9 Jul 00, Posts: 51478, Credit: 1,018,363,574, RAC: 1,004)
> No work, no power consumption. Computer shut down.
Well, not much crunching fun going on right now....... have to stay entertained somehow.
"Time is simply the mechanism that keeps everything from happening all at once."
JaundicedEye (Joined: 14 Mar 12, Posts: 5375, Credit: 30,870,693, RAC: 1)
I thought it was just me.......... and things were going so well before the outage......... sigh.....
"Sour Grapes make a bitter Whine." <(0)>
kittyman (Joined: 9 Jul 00, Posts: 51478, Credit: 1,018,363,574, RAC: 1,004)
> I thought it was just me.......... and things were going so well before the outage......... sigh.....
I woulda thought they might have had it sorted by now. But not looking any better so far. Meowsigh.
"Time is simply the mechanism that keeps everything from happening all at once."
JaundicedEye (Joined: 14 Mar 12, Posts: 5375, Credit: 30,870,693, RAC: 1)
Or as the VA says........ Patients! Patients! Patients! (sorry, not much else going on....)
"Sour Grapes make a bitter Whine." <(0)>
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13854, Credit: 208,696,464, RAC: 304)
AP assimilators are still offline, and AP validation is still backing up, which has been happening since late on Nov 26th. Possibly related to the present splitter issues, or just coincidence?
Grant
Darwin NT
kittyman (Joined: 9 Jul 00, Posts: 51478, Credit: 1,018,363,574, RAC: 1,004)
> Or as the VA says........ Patients! Patients! Patients!
"And there is no joy in Setiland, all the splitters have struck out." Meow.
"Time is simply the mechanism that keeps everything from happening all at once."
betreger (Joined: 29 Jun 99, Posts: 11416, Credit: 29,581,041, RAC: 66)
The well is dry here and Einstein benefits.
David@home (Joined: 16 Jan 03, Posts: 755, Credit: 5,040,916, RAC: 28)
Is there any way to have the 100 work unit limit in SETI@home extended? E.g. is there a config parameter I can tweak? I ran out during the maintenance slot and have only had a handful since.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14679, Credit: 200,643,578, RAC: 874)
> Is there any way to have the 100 work unit limit in SETI@home extended? E.g. is there a config parameter I can tweak? I ran out during the maintenance slot and have only had a handful since.
Buy a second GPU and use it for another project. My GTX 970s are working for GPUGrid, but I managed to top up to 200 tasks on each machine after the outage, so the 750 Tis still have plenty left.
Phil Burden (Joined: 26 Oct 00, Posts: 264, Credit: 22,303,899, RAC: 0)
> Is there any way to have the 100 work unit limit in SETI@home extended? E.g. is there a config parameter I can tweak? I ran out during the maintenance slot and have only had a handful since.
A question that has been posed many times over the years, and the simple answer is no. SETI is set to dish out a maximum of 100 tasks per CPU and 100 tasks per GPU. You can also do as Richard suggests and let your PC work on other deserving projects ;-)
P.
Keith Myers (Joined: 29 Apr 01, Posts: 13164, Credit: 1,160,866,277, RAC: 1,873)
No, nothing can be configured on our end; the project servers control the hard task limit. The only thing we can do on our end is to "bunker" tasks for the Tuesday outage using one of the SETI "reschedulers" available here in this forum; a search will turn up your choices. I normally bunker enough to get mostly through the outage, except for the Linux host. But with this breakdown yesterday and today, every host is out of work. That is why it is a good idea to have a "backup" project to process when SETI falls over.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours A proud member of the OFA (Old Farts Association)
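For context, the in-progress cap lives in the BOINC server's scheduler configuration, not in anything the client can override. A hypothetical sketch of the relevant project `config.xml` options (the values are illustrative only; this is not SETI's actual configuration file):

```xml
<!-- Hypothetical BOINC project config.xml fragment; illustrative values only. -->
<config>
  <!-- Maximum tasks a host may have in progress, per CPU. -->
  <max_wus_in_progress>100</max_wus_in_progress>
  <!-- Separate in-progress cap, per GPU. -->
  <max_wus_in_progress_gpu>100</max_wus_in_progress_gpu>
</config>
```

Only project staff can change these server-side options, which is why no client-side tweak exists.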
Piotr (Joined: 24 May 17, Posts: 18, Credit: 20,069,282, RAC: 41)
Come on, my flat is starting to freeze and it's below zero C outside!
marsinph (Joined: 7 Apr 01, Posts: 172, Credit: 23,823,824, RAC: 0)
No explanation from Berkeley staff! Of course they cannot always publish, but a little word from the staff would help! I have not forgotten the time difference between Belgium and Berkeley (9 hours), which means it is already 10:00 AM there. Soon we will perhaps miss "the" signal all of us are searching for.... Because of the 100 WU limitation (understandable), I will be out of WUs on all my computers in a few hours, and I ask for 10 + 10 days of work. So to conclude: it would be nice if the staff posted a situation report. Of course, no reaction to my post is needed. Best regards from Belgium.
David@home (Joined: 16 Jan 03, Posts: 755, Credit: 5,040,916, RAC: 28)
Thanks all for the feedback.
1) More GPUs: alas, I cannot do that with my PC; its primary use is photo editing, and Photoshop cannot cope with multiple GPU cards.
2) Backup project: I have set up POGs with zero resource share. This seems to cure POGs' bad behaviour of slowly filling the cache and squeezing out other projects.
3) Rescheduler: must look into that. I always run out of CPU work, and now with my new graphics card I run out of GPU work as well during the maintenance slot.
But I have an idea. I am sure Eric was asking for more crunchers recently as SETI needed more CPU power. Why not ask the SETI team to up this 100 limit so people can run 100% during the maintenance slot? It seems odd to have a limit so low; just increasing it a bit should be enough.
Keith Myers (Joined: 29 Apr 01, Posts: 13164, Credit: 1,160,866,277, RAC: 1,873)
That suggestion is made a lot too. I believe the original reason it wasn't done is that the database couldn't handle the large increase in size if more tasks were out in the field. And considering that today's woes are database-related, it's probably a good idea to make no changes.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours A proud member of the OFA (Old Farts Association)
juan BFP (Joined: 16 Mar 07, Posts: 9786, Credit: 572,710,851, RAC: 3,799)
IIRC the 100 limit was put in place to protect the database back when GPUs were a lot less powerful than today's and took about 20-30 min or more to crunch a WU. At that time you could download a lot of WUs (up to your cache size; most of us used up to 10 days back then), and that really strained the database limits. So 100 WUs held for more than a day on most hosts. But now, with top GPUs like the 1080 Ti running the Linux builds and crunching a WU in 5-6 min (or even less), they don't last for even 5-6 hours. Back then we asked to raise the limit to at least 200 WUs per GPU, but our request was never heard.
Richard Haselgrove (Joined: 4 Jul 99, Posts: 14679, Credit: 200,643,578, RAC: 874)
> And considering that today's woes are database-related, it's probably a good idea to make no changes.
Different database, but the point is well made.
> I am sure Eric was asking for more crunchers recently as SETI needed more CPU power.
Given that under normal circumstances SETI is providing work for 158 of the 168 hours in any given week, more crunchers is certainly the answer: allowing existing crunchers to hoard more work puts a lot of strain on the servers for very little added production.
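Richard's 158/168 figure and juan's 5-6 minutes per task make the trade-off easy to check with back-of-envelope arithmetic. A rough sketch (the per-task runtime is the fast-GPU figure quoted above, so the result only applies to the quickest hosts):

```python
# Rough check: can the 100-task per-GPU cache bridge the weekly outage?
WEEK_HOURS = 168
UP_HOURS = 158                 # Richard's "158/168 hours" figure
outage_hours = WEEK_HOURS - UP_HOURS        # 10 hours of downtime per week

TASK_LIMIT = 100               # per-GPU in-progress cap
MINUTES_PER_TASK = 5           # fast GPU figure quoted above (1080 Ti class)

cache_hours = TASK_LIMIT * MINUTES_PER_TASK / 60
print(f"outage: {outage_hours} h, full cache lasts: {cache_hours:.1f} h")
# → outage: 10 h, full cache lasts: 8.3 h
```

So a fast GPU's full cache runs dry before a 10-hour gap ends, while slower hosts coast through with room to spare, which is the tension behind both posts above.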
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.