Message boards :
Number crunching :
Panic Mode On (92) Server Problems?
Author | Message |
---|---|
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Yep, getting some tasks here 2, but why does Rosetta work always want to go into high priority mode whenever I get SETI work? :-O I just found with BOINC 7.2.42 that with backup projects it stops working on the backup project as soon as work is received for a non-backup project. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
WezH Send message Joined: 19 Aug 99 Posts: 576 Credit: 67,033,957 RAC: 95 |
I just found with BOINC 7.2.42 that with backup projects it stops working on the backup project as soon as work is received for a non-backup project. I haven't seen that. One of my CPU-only hosts is still working with its backup project (S@H Beta). 1/6 cores are working with S@H, and the host has 36 S@H workunits ready to start. "Please keep your signature under four lines so Internet traffic doesn't go up too much" - In 1992 when I had my first e-mail address - |
Phil Burden Send message Joined: 26 Oct 00 Posts: 264 Credit: 22,303,899 RAC: 0 |
Yep, getting some tasks here 2, but why does Rosetta work always want to go into high priority mode whenever I get SETI work? :-O How does BOINC determine which is a backup project, and which isn't? P. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Yep, getting some tasks here 2, but why does Rosetta work always want to go into high priority mode whenever I get SETI work? :-O Projects with Resource Share set to 0 are "backup projects". SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
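For anyone wondering where that setting ends up: resource share is normally set in each project's web preferences and propagated to the client, where it is stored in that project's account file in the BOINC data directory. A minimal sketch of the relevant fragment (the file name and values here are illustrative, following the usual BOINC client layout):

```xml
<!-- account_setiathome.berkeley.edu.xml (BOINC client data directory) -->
<account>
    <master_url>http://setiathome.berkeley.edu/</master_url>
    <!-- 0 marks this as a backup project: the client only requests
         work from it when no other project can supply any -->
    <resource_share>0</resource_share>
</account>
```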
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I just found with BOINC 7.2.42 that with backup projects it stops working on the backup project as soon as work is received for a non-backup project. I think I found out why it is doing that for me. My i7's download 8 tasks from their backup projects. However, I am using an app_config.xml with <max_concurrent>4</max_concurrent> set. BOINC will finish the 4 tasks that it is actively processing for the backup project while it downloads work for the main project. Then it will start the main project tasks, leaving the remaining 4 backup project tasks "ready to start". I imagine the backup tasks would start again if BOINC went into high priority mode. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
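For context, an app_config.xml with the setting described above might look like this. The app name here is only an example (it must match the project's actual application name, which you can find in client_state.xml); <max_concurrent> is the setting under discussion:

```xml
<!-- app_config.xml, placed in the project's directory under the BOINC data directory -->
<app_config>
    <app>
        <!-- must match the project's application name exactly -->
        <name>setiathome_v7</name>
        <!-- run at most 4 tasks of this application at once -->
        <max_concurrent>4</max_concurrent>
    </app>
</app_config>
```

The client picks the file up on restart or via "Read config files" in the BOINC Manager.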
JaundicedEye Send message Joined: 14 Mar 12 Posts: 5375 Credit: 30,870,693 RAC: 1 |
11/20/2014 8:53:13 AM | SETI@home | Project has no tasks available
11/20/2014 9:01:20 AM | SETI@home | Project has no tasks available
11/20/2014 9:06:28 AM | SETI@home | Project has no tasks available
11/20/2014 9:18:34 AM | SETI@home | Project has no tasks available
11/20/2014 9:23:41 AM | SETI@home | Project has no tasks available
11/20/2014 9:54:53 AM | SETI@home | Project has no tasks available
11/20/2014 10:16:09 AM | SETI@home | Project has no tasks available
11/20/2014 11:01:36 AM | SETI@home | Project has no tasks available
11/20/2014 11:53:08 AM | SETI@home | Project has no tasks available
11/20/2014 1:48:57 PM | SETI@home | Project has no tasks available
11/20/2014 2:13:06 PM | SETI@home | Project has no tasks available
11/20/2014 2:18:13 PM | SETI@home | Project has no tasks available
11/20/2014 2:32:19 PM | SETI@home | Project has no tasks available
Hopefully by next Thursday we all will have much to be Thankful for.........the Hope that Springs eternal........ |
Sirius B Send message Joined: 26 Dec 00 Posts: 24879 Credit: 3,081,182 RAC: 7 |
Hopefully by next Thursday we all will have much to be Thankful for.........the Hope that Springs eternal........ Not sure about that... ...approx. 10 minutes ago while attempting to post, got this:
Fatal error: Cannot redeclare boinc_real_escape_string() (previously declared in /disks/carolyn/b/home/boincadm/projects/sah/html/inc/db.inc:177) in /disks/carolyn/b/home/boincadm/projects/sah/html/inc/boinc_db.inc on line 710 |
Wiggo Send message Joined: 24 Jan 00 Posts: 34841 Credit: 261,360,520 RAC: 489 |
I've received 40 tasks here this morning and most of those were new 1's. :-) Cheers. |
Lionel Send message Joined: 25 Mar 00 Posts: 680 Credit: 563,640,304 RAC: 597 |
I've received 40 tasks here this morning and most of those were new 1's. :-) Same here, about 400 over last night, now down to 355. All CPU and most are a mix of _0 and _1. However, not that many over the preceding few days. Luck of the timing draw I guess. L. |
JaundicedEye Send message Joined: 14 Mar 12 Posts: 5375 Credit: 30,870,693 RAC: 1 |
Fatal error: Cannot redeclare boinc_real_escape_string() (previously declared in /disks/carolyn/b/home/boincadm/projects/sah/html/inc/db.inc:177) in /disks/carolyn/b/home/boincadm/projects/sah/html/inc/boinc_db.inc on line 710 I got the same message, but when I took the quotation marks out of the hope that springs eternal I was able to post. Maybe it should be the hype that springs eternal within the breast of salesmen... |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
Hopefully by next Thursday we all will have much to be Thankful for.........the Hope that Springs eternal........ I was getting that site wide about 90 min ago. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
Wiggo Send message Joined: 24 Jan 00 Posts: 34841 Credit: 261,360,520 RAC: 489 |
Looking at the cricket graphs it seems some more files have been found to send down the line. Cheers. |
betreger Send message Joined: 29 Jun 99 Posts: 11361 Credit: 29,581,041 RAC: 66 |
Now the trick is whether one of my GT430s will get some; it is getting bored with Einstein. |
David S Send message Joined: 4 Oct 99 Posts: 18352 Credit: 27,761,924 RAC: 12 |
My GT 630 has managed to get some new work, enough to keep it from asking Einstein (or do I have Einstein on NNT on that one..?) My i7 with the GT 440 has loaded up on as many Betas as it can get (they are actually available, if you're truly desperate for Seti) and hasn't contacted the Main server in days, which now that I think about it would be because I set a really high resource share at Beta so the Androids would prefer it. Speaking of Beta, there have been various descriptions here of the scheduler sending computers different versions of cuda tasks to see which one works the best. Guys always say it takes 11 of each for it to decide. My i7 (the last I looked) has 19 cuda32s, 19 cuda 42s, and about 5 cuda50s. (I think it hasn't started on any of them yet because it's running Einsteins in HP.) Is 19 the real number, or is it different at Beta than Main? (I presume these are being done on stock apps instead of Lunatics because Beta is technically a different project.) Another question comes to mind: once the server decides on which stock app is the best, is it pretty likely that the same cuda version of the Lunatics apps will be the best for Main? David Sitting on my butt while others boldly go, Waiting for a message from a small furry creature from Alpha Centauri. |
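The trial-then-commit behaviour described above (send some of each CUDA version, then settle on the fastest) can be sketched roughly as follows. This is a hedged illustration of the idea only, not BOINC's actual scheduler code; the sample threshold is just the "11 of each" figure quoted in the thread, and the simple runtime average stands in for whatever statistics the real scheduler keeps:

```python
# Hypothetical sketch of per-app-version trialling: keep elapsed-time
# samples for each version and only commit once every version has
# enough validated results.

MIN_SAMPLES = 11  # the figure quoted in the thread; an assumption here

def pick_version(stats):
    """stats maps version name -> list of elapsed times (seconds).

    Returns the version with the lowest average elapsed time once every
    candidate has at least MIN_SAMPLES results; returns None while any
    version is still being trialled."""
    if any(len(times) < MIN_SAMPLES for times in stats.values()):
        return None
    return min(stats, key=lambda v: sum(stats[v]) / len(stats[v]))
```

Under that reading, a host with 19 cuda32, 19 cuda42, and 5 cuda50 results would still be in the trialling phase, since one version remains below the threshold.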
Jeff Buck Send message Joined: 11 Feb 00 Posts: 1441 Credit: 148,764,870 RAC: 0 |
I was just looking through my pendings and noticed that there was apparently about a 2 hour period on Monday when either the validators or transitioners weren't doing their job. I've got quite a few WUs where there are no tasks "in progress", where the last host reported its task on 17 November between about 19:42 and 21:51 UTC, but apparently missed the validator, leaving the WU in "Completed, waiting for validation" status for both of us. Examples: http://setiathome.berkeley.edu/workunit.php?wuid=1641206662 http://setiathome.berkeley.edu/workunit.php?wuid=1641182095 http://setiathome.berkeley.edu/workunit.php?wuid=1640583179 Apparently, these will have to wait for a "second chance" validation when the original deadline is reached. I've also got at least one where the host which reported in that Monday window had a computation Error, which should have resulted in a _2 task being sent out, but it wasn't. To me, that doesn't seem like a validator issue, but perhaps a transitioner one? Here's that WU: http://setiathome.berkeley.edu/workunit.php?wuid=1640765912 Has anybody else noticed WUs like this in their pendings? |
Phil Burden Send message Joined: 26 Oct 00 Posts: 264 Credit: 22,303,899 RAC: 0 |
Yup, I've got about 14 v7 APs awaiting validation since the 10th/11th of this month. I just assumed the validators were hiccuping along with everything else ;-) doubtless it'll sort itself out given time. P. |
Jeff Buck Send message Joined: 11 Feb 00 Posts: 1441 Credit: 148,764,870 RAC: 0 |
Actually, the APs are a separate issue, due to the AP database problems they've been having for several weeks. I should have been more explicit in specifying that the WUs that apparently missed the validators on Monday were MBs. |
Claggy Send message Joined: 5 Jul 99 Posts: 4654 Credit: 47,537,079 RAC: 4 |
The scheduler is shut down for some reason:
Fri 21 Nov 2014 22:24:56 GMT | SETI@home | update requested by user
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | sched RPC pending: Requested by user
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | [sched_op] Starting scheduler request
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | Sending scheduler request: Requested by user.
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | Requesting new tasks for CPU
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | [sched_op] CPU work request: 640621.95 seconds; 0.00 devices
Fri 21 Nov 2014 22:24:57 GMT | SETI@home | [sched_op] NVIDIA work request: 0.00 seconds; 0.00 devices
Fri 21 Nov 2014 22:24:59 GMT | SETI@home | Scheduler request completed: got 0 new tasks
Fri 21 Nov 2014 22:24:59 GMT | SETI@home | Project is temporarily shut down for maintenance
Fri 21 Nov 2014 22:24:59 GMT | SETI@home | Project requested delay of 3600 seconds
Fri 21 Nov 2014 22:24:59 GMT | SETI@home | [sched_op] Deferring communication for 01:00:00
Fri 21 Nov 2014 22:24:59 GMT | SETI@home | [sched_op] Reason: project is down
Claggy |
JaundicedEye Send message Joined: 14 Mar 12 Posts: 5375 Credit: 30,870,693 RAC: 1 |
The scheduler is shutdown for some reason: Possibly major surgery in progress? Hopefully the patient will make a speedy recovery. |
Mike Send message Joined: 17 Feb 01 Posts: 34258 Credit: 79,922,639 RAC: 80 |
Switch to GBT maybe. With each crime and every kindness we birth our future. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.