Panic Mode On (111) Server Problems?


Stephen "Heretic" (Volunteer tester)
Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628 · Australia
Message 1928031 - Posted: 4 Apr 2018, 22:10:53 UTC - in response to Message 1928028.  

[Edit] Hah! Worked. Set NNT and reported 933 tasks. But in reality it only reported a couple of hundred. Will see whether I can get any work now.

IIRC the max number of reported tasks in one report is 256, even if it shows a bigger number.
Eventually, after a few updates, all will clear.
What makes your host not receive new work is having a stalled upload.


. . That is what I have found from my experience. The rig will say "reporting 1000", but next time it will be "reporting 744", and so on.

Stephen

:)
ID: 1928031
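The per-RPC report cap described above can be sketched numerically. This is a hedged illustration only: the 256 figure is the one quoted in the post (Keith later observes 250 in practice), not verified against the scheduler source.

```python
# Hedged sketch: if the scheduler acknowledges at most MAX_PER_RPC tasks
# per request (256 is the figure quoted above, not verified against the
# server source), a backlog clears over several "Update" presses.
MAX_PER_RPC = 256

def rpcs_needed(pending_tasks: int, cap: int = MAX_PER_RPC) -> int:
    """Number of scheduler contacts required to report a backlog."""
    return -(-pending_tasks // cap)  # ceiling division

# Stephen's 933 queued results would need four updates at a 256-task
# cap, which matches the "reporting 1000 ... reporting 744" countdown.
```

This also matches Keith's later observation that it took roughly half a dozen Update presses to clear his backlog.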
Richard Haselgrove (Volunteer tester)
Joined: 4 Jul 99 · Posts: 14649 · Credit: 200,643,578 · RAC: 874 · United Kingdom
Message 1928032 - Posted: 4 Apr 2018, 22:11:11 UTC - in response to Message 1928029.  

If you're still having trouble you might try Updating with No new tasks set.

. . That sounds like a good idea ...

Stephen

. .
Yes, I'm reminded that it's a useful trick from the olden days, though I don't remember having had to use it since they moved the servers to the co-lo. But I've certainly advocated it in the past.
ID: 1928032
Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13161 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1928057 - Posted: 5 Apr 2018, 1:19:25 UTC - in response to Message 1928028.  

[Edit] Hah! Worked. Set NNT and reported 933 tasks. But in reality it only reported a couple of hundred. Will see whether I can get any work now.

IIRC the max number of reported tasks in one report is 256, even if it shows a bigger number.
Eventually, after a few updates, all will clear.
What makes your host not receive new work is having a stalled upload.

No, a stalled upload was not the case with my Linux machines. You are correct about the limit, though: the max actually reported was 250 tasks each time, so it took half a dozen Update button smashes to get them all to report.

After the tasks in each report went through, I got the "your machine has reached the maximum amount of work in progress" message in the Event Log. I had no stalled tasks of any kind on any machine at the time, no stalled uploads or downloads. I still was not getting any work until the number of tasks left to report fell below the magic 1000 mark; then the Linux machines started to pick up work as the reported tasks went through.

All was good in an hour or so and I was back to full caches. I sure would like to see a post from Eric in the Technical News forum about what they actually did to the databases, just for my own curiosity. All I can say is that it is fantastic that they got the results-to-purge count down from the astronomical 11M it hit at its peak. It was impossible to view any tasks on my high-producing machines because of the 11K and 13K tasks in limbo on each host; now I can pull up any machine and view its tasks. That is fantastic. I was wondering how they were performing, and I really needed to check that they were producing valid results, since any glitch on a host would add up to a lot of errors very quickly if not caught as soon as possible.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928057
kittyman (Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1928090 - Posted: 5 Apr 2018, 12:55:42 UTC
Last modified: 5 Apr 2018, 13:01:11 UTC

The servers seem to be a bit recalcitrant to send work this morning.
Getting those messages about the computer reaching that mythical 'limit of tasks in progress'.
And I must not be the only one: results in the field have been dropping off.

Meowsigh.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1928090
Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13161 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1928109 - Posted: 5 Apr 2018, 15:28:51 UTC - in response to Message 1928090.  

I was getting that message only on the newest computer last night. Caches were stubbornly staying 100 tasks below full. I just hoped it would recover overnight, or at least not fall to zero. It seems to have recovered to about 25 tasks down from full right now. All the computers are getting that message this morning, like you said, but are close to full.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928109
Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13161 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1928118 - Posted: 5 Apr 2018, 17:05:21 UTC - in response to Message 1928109.  
Last modified: 5 Apr 2018, 17:10:21 UTC

Now both Linux crunchers are back to being down 100 tasks from full again like last night. Only 1 in 5 task requests get any work and then only 1 or 2 tasks. The rest of the time I get the "you've reached the limit of tasks in progress" message.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928118
kittyman (Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1928119 - Posted: 5 Apr 2018, 17:13:56 UTC - in response to Message 1928118.  

Now both Linux crunchers are back to being down 100 tasks from full again like last night. Only 1 in 5 task requests get any work and then only 1 or 2 tasks. The rest of the time I get the "you've reached the limit of tasks in progress" message.

Getting it again here as well.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1928119
Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13161 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1928121 - Posted: 5 Apr 2018, 17:27:44 UTC - in response to Message 1928119.  

I wish Richard would pop in here and explain why the scheduler thinks we have reached a limit. I see a constantly increasing shortfall in seconds of work requested, for both CPU and GPU, in the work_fetch_debug output. Increasing from a 1-day cache to a 2-day cache did nothing other than increase the shortfall request; I still get the message. Turn in 15 tasks, get one back in return. Caches are falling fast, fastest on the Linux crunchers: down by half already on those machines.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928121
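The shortfall figure Keith mentions can be illustrated with a hedged sketch. The real BOINC work-fetch logic is more involved than this; the function and parameter names here are illustrative, not taken from the client source.

```python
# Hedged sketch of how a BOINC-style client could derive the per-resource
# "shortfall" seen in work_fetch_debug output. Illustrative only; the
# actual client accounts for on-fraction, min/max buffers, and more.
def shortfall_seconds(buffer_days: float, n_instances: int,
                      queued_seconds: float) -> float:
    """Seconds of work still wanted for one resource type."""
    wanted = buffer_days * 86400.0 * n_instances
    return max(0.0, wanted - queued_seconds)

# Doubling the cache from 1 to 2 days only raises the *request*, as
# observed above; it cannot make the server send more work if the
# server is enforcing its own in-progress limit.
```

This is consistent with what Keith saw: a bigger cache setting inflates the requested seconds, while the server-side limit still returns zero tasks.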
TBar (Volunteer tester)
Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768 · United States
Message 1928122 - Posted: 5 Apr 2018, 17:36:31 UTC

Seems the messages have changed. Along with more of the "reached a limit" messages, I'm now just being told nothing was sent:

Thu Apr 5 12:53:58 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 12:53:58 2018 | SETI@home | Reporting 4 completed tasks
Thu Apr 5 12:54:00 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 12:54:00 2018 | SETI@home | No tasks sent
Thu Apr 5 12:54:00 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 12:59:08 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 12:59:08 2018 | SETI@home | Reporting 4 completed tasks
Thu Apr 5 12:59:09 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 12:59:09 2018 | SETI@home | No tasks sent
Thu Apr 5 13:04:18 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:04:18 2018 | SETI@home | Reporting 5 completed tasks
Thu Apr 5 13:04:19 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:04:19 2018 | SETI@home | No tasks sent
Thu Apr 5 13:09:28 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:09:28 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 13:09:29 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:09:29 2018 | SETI@home | No tasks sent
Thu Apr 5 13:14:38 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:14:38 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 13:14:39 2018 | SETI@home | Scheduler request completed: got 2 new tasks
Thu Apr 5 13:19:49 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:19:49 2018 | SETI@home | Reporting 9 completed tasks
Thu Apr 5 13:19:50 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:19:50 2018 | SETI@home | No tasks sent
Thu Apr 5 13:24:59 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:24:59 2018 | SETI@home | Reporting 7 completed tasks
Thu Apr 5 13:25:00 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:25:00 2018 | SETI@home | No tasks sent
Thu Apr 5 13:30:13 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:30:13 2018 | SETI@home | Reporting 4 completed tasks
Thu Apr 5 13:30:14 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:30:14 2018 | SETI@home | No tasks sent
Thu Apr 5 13:35:27 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:35:27 2018 | SETI@home | Reporting 5 completed tasks
Thu Apr 5 13:35:28 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:35:28 2018 | SETI@home | No tasks sent
Hey, at least I got 2...
ID: 1928122
Mr. Kevvy (Volunteer moderator, Volunteer tester)
Joined: 15 May 99 · Posts: 3776 · Credit: 1,114,826,392 · RAC: 3,319 · Canada
Message 1928125 - Posted: 5 Apr 2018, 17:43:28 UTC
Last modified: 5 Apr 2018, 17:46:26 UTC

All of these look new... I wonder whether, during the recent long outage to fix database issues, something was introduced to reduce work-in-progress entries, specifically by countering "bunkering" (edit: not saying that anyone is, of course!). I'm pretty sure the 100 CPU/GPU limit was introduced specifically to limit work-in-progress entries, so it's an issue that has been addressed before, which makes this plausible.
ID: 1928125
Richard Haselgrove (Volunteer tester)
Joined: 4 Jul 99 · Posts: 14649 · Credit: 200,643,578 · RAC: 874 · United Kingdom
Message 1928126 - Posted: 5 Apr 2018, 17:46:55 UTC - in response to Message 1928121.  

I'm reading, but I don't have an explanation. The first thing I would look for would be an accurate count of the tasks queued for each resource type on the machine at the time the request for additional work was made. Also, I'd look at the complete reply message shown in the event log: we know that multiple reasons are often stated, with only one being relevant. But I do know that this machine had 200 tasks onboard when this exchange took place a few minutes ago:

05/04/2018 18:26:44 | SETI@home | [sched_op] Starting scheduler request
05/04/2018 18:26:44 | SETI@home | Sending scheduler request: To fetch work.
05/04/2018 18:26:44 | SETI@home | Reporting 1 completed tasks
05/04/2018 18:26:44 | SETI@home | Requesting new tasks for NVIDIA GPU
05/04/2018 18:26:44 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
05/04/2018 18:26:44 | SETI@home | [sched_op] NVIDIA GPU work request: 87220.11 seconds; 0.00 devices
05/04/2018 18:26:44 | SETI@home | [sched_op] Intel GPU work request: 0.00 seconds; 0.00 devices
05/04/2018 18:26:46 | SETI@home | Scheduler request completed: got 0 new tasks
05/04/2018 18:26:46 | SETI@home | [sched_op] Server version 709
05/04/2018 18:26:46 | SETI@home | No tasks sent
05/04/2018 18:26:46 | SETI@home | Tasks for CPU are available, but your preferences are set to not accept them
05/04/2018 18:26:46 | SETI@home | Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
05/04/2018 18:26:46 | SETI@home | Tasks for Intel GPU are available, but your preferences are set to not accept them
05/04/2018 18:26:46 | SETI@home | Project requested delay of 303 seconds
05/04/2018 18:26:46 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_blc03_guppi_58153_00010_DIAG_PSR_J0358+5413_0010.24696.818.21.44.40.vlar_0
05/04/2018 18:26:46 | SETI@home | [sched_op] Deferring communication for 00:05:03
05/04/2018 18:26:46 | SETI@home | [sched_op] Reason: requested by project
As you can see, that machine is set to run SETI on NVidia cards only - two cards, so 200 is the limit. By rights, I could have been allocated one replacement task for the one I was reporting, but none were available.

Is it possible that on a machine which has CPU crunching enabled, you might have 100 CPU tasks onboard, and the scheduler might count them and say "enough, already", and bail out without enumerating GPU tasks? The message you're discussing does say "This computer has reached A limit on tasks in progress" (direct quote from my log at 18:15, except for the emphasis). It doesn't say which limit.
ID: 1928126
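Richard's hypothesis above, that the scheduler might hit one resource's cap and bail out before enumerating the other resource's tasks, can be sketched as follows. The limit values and the early-exit behaviour are assumptions for illustration, not the actual SETI@home scheduler code.

```python
# Hedged sketch of a per-resource in-progress limit check. The caps
# (100 per CPU host, 100 per GPU device) and the order of checks are
# assumptions matching the hypothesis in the post above.
LIMIT_PER_CPU_HOST = 100     # assumed flat per-host CPU cap
LIMIT_PER_GPU_DEVICE = 100   # assumed per-GPU-device cap

def may_send_work(cpu_in_progress: int, gpu_in_progress: int,
                  n_gpus: int) -> bool:
    # If the scheduler checked the CPU cap first and bailed out without
    # counting GPU tasks, a host full on CPU work would be refused GPU
    # work too, matching the ambiguous "reached A limit" message.
    if cpu_in_progress >= LIMIT_PER_CPU_HOST:
        return False
    return gpu_in_progress < LIMIT_PER_GPU_DEVICE * n_gpus

# Richard's two-GPU, GPU-only host with 200 tasks onboard is exactly
# at its limit, so no replacement task would be due anyway.
```

Under these assumptions, a CPU-crunching host at its CPU cap would see refusals even with GPU cache to spare, which is the scenario Richard asks about.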
kittyman (Volunteer tester)
Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004 · United States
Message 1928127 - Posted: 5 Apr 2018, 17:49:57 UTC

Well, it does raise the question...........
WAS there a change in the code recently?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1928127
Richard Haselgrove (Volunteer tester)
Joined: 4 Jul 99 · Posts: 14649 · Credit: 200,643,578 · RAC: 874 · United Kingdom
Message 1928128 - Posted: 5 Apr 2018, 17:51:13 UTC - in response to Message 1928122.  

@ TBar

Your log says

Sending scheduler request: To report completed tasks.
Note that mine says

Sending scheduler request: To fetch work.
Did you perhaps catch 'No new tasks' by mistake? If you don't ask, you'll never get.
ID: 1928128
TBar (Volunteer tester)
Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768 · United States
Message 1928130 - Posted: 5 Apr 2018, 18:04:42 UTC - in response to Message 1928128.  

Did you perhaps catch 'No new tasks' by mistake? If you don't ask, you'll never get.

Not a chance. You did see that it sent 2, correct? I can post the entire log; it's a bit longer though:

Thu Apr 5 13:45:49 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:45:54 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:45:54 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 236409.72 seconds; 0.00 devices
Thu Apr 5 13:45:55 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:45:55 2018 | SETI@home | No tasks sent
Thu Apr 5 13:45:55 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_blc03_guppi_58157_16456_DIAG_PSR_J1024-0719_0006.18406.409.22.45.176.vlar_1
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.123_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23no17aa.2048.687994.3.30.137_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.113_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.121_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:46:20 2018 | SETI@home | Computation for task 21no17aa.19405.13973.5.32.153_0 finished
Thu Apr 5 13:46:20 2018 | SETI@home | Starting task 23dc17aa.3884.15208.14.41.155_0
Thu Apr 5 13:46:21 2018 | SETI@home | Computation for task 23dc17aa.20274.18889.12.39.118_0 finished
Thu Apr 5 13:46:21 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.235_1
Thu Apr 5 13:46:22 2018 | SETI@home | Started upload of 21no17aa.19405.13973.5.32.153_0_r1206594985_0
Thu Apr 5 13:46:23 2018 | SETI@home | Started upload of 23dc17aa.20274.18889.12.39.118_0_r1245950715_0
Thu Apr 5 13:46:24 2018 | SETI@home | Finished upload of 21no17aa.19405.13973.5.32.153_0_r1206594985_0
Thu Apr 5 13:46:26 2018 | SETI@home | Finished upload of 23dc17aa.20274.18889.12.39.118_0_r1245950715_0
Thu Apr 5 13:46:27 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.235_1 finished
Thu Apr 5 13:46:27 2018 | SETI@home | Starting task 23dc17aa.3884.16435.14.41.200_0
Thu Apr 5 13:46:29 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.235_1_r593375621_0
Thu Apr 5 13:46:31 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.235_1_r593375621_0
Thu Apr 5 13:47:37 2018 | SETI@home | Computation for task 23dc17aa.20274.18889.12.39.119_0 finished
Thu Apr 5 13:47:37 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.225_0
Thu Apr 5 13:47:39 2018 | SETI@home | Started upload of 23dc17aa.20274.18889.12.39.119_0_r1788636002_0
Thu Apr 5 13:47:41 2018 | SETI@home | Finished upload of 23dc17aa.20274.18889.12.39.119_0_r1788636002_0
Thu Apr 5 13:48:26 2018 | SETI@home | Computation for task 23dc17aa.3884.16435.14.41.200_0 finished
Thu Apr 5 13:48:26 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.226_0
Thu Apr 5 13:48:28 2018 | SETI@home | Started upload of 23dc17aa.3884.16435.14.41.200_0_r1821173763_0
Thu Apr 5 13:48:30 2018 | SETI@home | Finished upload of 23dc17aa.3884.16435.14.41.200_0_r1821173763_0
Thu Apr 5 13:49:18 2018 | SETI@home | Computation for task 23dc17aa.3884.15208.14.41.155_0 finished
Thu Apr 5 13:49:18 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.217_1
Thu Apr 5 13:49:20 2018 | SETI@home | Started upload of 23dc17aa.3884.15208.14.41.155_0_r1658243187_0
Thu Apr 5 13:49:23 2018 | SETI@home | Finished upload of 23dc17aa.3884.15208.14.41.155_0_r1658243187_0
Thu Apr 5 13:49:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.225_0 finished
Thu Apr 5 13:49:38 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.220_0
Thu Apr 5 13:49:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.225_0_r201744178_0
Thu Apr 5 13:49:42 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.225_0_r201744178_0
Thu Apr 5 13:50:26 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.226_0 finished
Thu Apr 5 13:50:26 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.230_1
Thu Apr 5 13:50:28 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.226_0_r1763249168_0
Thu Apr 5 13:50:31 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.226_0_r1763249168_0
Thu Apr 5 13:51:00 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:51:05 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:51:05 2018 | SETI@home | Reporting 8 completed tasks
Thu Apr 5 13:51:05 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 13:51:05 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:51:05 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 237883.99 seconds; 0.00 devices
Thu Apr 5 13:51:06 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:51:06 2018 | SETI@home | No tasks sent
Thu Apr 5 13:51:06 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.13973.5.32.153_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.118_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.119_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.15208.14.41.155_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.235_1
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.16435.14.41.200_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.225_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.226_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:51:27 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.217_1 finished
Thu Apr 5 13:51:27 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.224_0
Thu Apr 5 13:51:29 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.217_1_r1825719015_0
Thu Apr 5 13:51:32 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.217_1_r1825719015_0
Thu Apr 5 13:51:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.220_0 finished
Thu Apr 5 13:51:38 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.222_0
Thu Apr 5 13:51:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.220_0_r1671108643_0
Thu Apr 5 13:51:42 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.220_0_r1671108643_0
Thu Apr 5 13:52:26 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.230_1 finished
Thu Apr 5 13:52:26 2018 | SETI@home | Starting task 23dc17aa.3884.16435.14.41.202_0
Thu Apr 5 13:52:28 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.230_1_r1013284403_0
Thu Apr 5 13:52:30 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.230_1_r1013284403_0
Thu Apr 5 13:53:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.222_0 finished
Thu Apr 5 13:53:38 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.211_0
Thu Apr 5 13:53:41 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.222_0_r1785353434_0
Thu Apr 5 13:53:43 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.222_0_r1785353434_0
Thu Apr 5 13:54:25 2018 | SETI@home | Computation for task 23dc17aa.3884.16435.14.41.202_0 finished
Thu Apr 5 13:54:25 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.214_1
Thu Apr 5 13:54:27 2018 | SETI@home | Started upload of 23dc17aa.3884.16435.14.41.202_0_r2028738979_0
Thu Apr 5 13:54:30 2018 | SETI@home | Finished upload of 23dc17aa.3884.16435.14.41.202_0_r2028738979_0
Thu Apr 5 13:54:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.224_0 finished
Thu Apr 5 13:54:38 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1
Thu Apr 5 13:54:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.224_0_r607232865_0
Thu Apr 5 13:54:44 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.224_0_r607232865_0
Thu Apr 5 13:54:47 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1 finished
Thu Apr 5 13:54:47 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1
Thu Apr 5 13:54:49 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1_r1026118679_0
Thu Apr 5 13:54:52 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1_r1026118679_0
Thu Apr 5 13:55:00 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.211_0 finished
Thu Apr 5 13:55:00 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1
Thu Apr 5 13:55:02 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.211_0_r772654919_0
Thu Apr 5 13:55:05 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.211_0_r772654919_0
Thu Apr 5 13:55:45 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.214_1 finished
Thu Apr 5 13:55:45 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1
Thu Apr 5 13:55:47 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.214_1_r963446509_0
Thu Apr 5 13:55:49 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.214_1_r963446509_0
Thu Apr 5 13:56:09 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:56:14 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:56:14 2018 | SETI@home | Reporting 9 completed tasks
Thu Apr 5 13:56:14 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 13:56:14 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:56:14 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 239288.10 seconds; 0.00 devices
Thu Apr 5 13:56:15 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:56:15 2018 | SETI@home | No tasks sent
Thu Apr 5 13:56:15 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.217_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.220_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.230_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.224_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.222_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.16435.14.41.202_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.211_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.214_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:58:07 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1 finished
Thu Apr 5 13:58:07 2018 | SETI@home | Starting task 23dc17aa.19675.23384.10.37.243_0
Thu Apr 5 13:58:09 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1_r1741867489_0
Thu Apr 5 13:58:12 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1_r1741867489_0
Thu Apr 5 13:58:51 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1 finished
Thu Apr 5 13:58:51 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0
Thu Apr 5 13:58:53 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1_r1044341074_0
Thu Apr 5 13:58:55 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1_r1044341074_0
Thu Apr 5 13:59:47 2018 | SETI@home | Computation for task 23dc17aa.19675.23384.10.37.243_0 finished
Thu Apr 5 13:59:47 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1
Thu Apr 5 13:59:49 2018 | SETI@home | Started upload of 23dc17aa.19675.23384.10.37.243_0_r67884315_0
Thu Apr 5 13:59:50 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1 finished
Thu Apr 5 13:59:50 2018 | SETI@home | Starting task 23dc17aa.19675.24202.10.37.32_1
Thu Apr 5 13:59:52 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1_r2100367723_0
Thu Apr 5 13:59:53 2018 | SETI@home | Finished upload of 23dc17aa.19675.23384.10.37.243_0_r67884315_0
Thu Apr 5 13:59:55 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1_r2100367723_0
Thu Apr 5 13:59:55 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1 finished
Thu Apr 5 13:59:55 2018 | SETI@home | Starting task 23no17aa.2048.690448.3.30.217_1
Thu Apr 5 13:59:56 2018 | SETI@home | Computation for task 23dc17aa.19675.24202.10.37.32_1 finished
Thu Apr 5 13:59:56 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1
Thu Apr 5 13:59:57 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1_r478977597_0
Thu Apr 5 13:59:58 2018 | SETI@home | Started upload of 23dc17aa.19675.24202.10.37.32_1_r1022985780_0
Thu Apr 5 14:00:00 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1_r478977597_0
Thu Apr 5 14:00:00 2018 | SETI@home | Finished upload of 23dc17aa.19675.24202.10.37.32_1_r1022985780_0
Thu Apr 5 14:01:23 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 14:01:28 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 14:01:28 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 14:01:28 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 14:01:28 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 14:01:28 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 240895.02 seconds; 0.00 devices
Thu Apr 5 14:01:29 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 14:01:29 2018 | SETI@home | No tasks sent
Thu Apr 5 14:01:29 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.19675.23384.10.37.243_0
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.19675.24202.10.37.32_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 14:01:55 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0 finished
Thu Apr 5 14:01:55 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.1227.21.44.70.vlar_0
Thu Apr 5 14:01:57 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0_r1914605811_0
Thu Apr 5 14:02:01 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0_r1914605811_0
Thu Apr 5 14:02:43 2018 | SETI@home | Computation for task 23no17aa.2048.690448.3.30.217_1 finished
Thu Apr 5 14:02:43 2018 | SETI@home | Starting task blc02_2bit_guppi_58185_72022_LGS_3_off_0027.11979.2045.22.45.84.vlar_2
Thu Apr 5 14:02:45 2018 | SETI@home | Started upload of 23no17aa.2048.690448.3.30.217_1_r1925260118_0
Thu Apr 5 14:02:47 2018 | SETI@home | Finished upload of 23no17aa.2048.690448.3.30.217_1_r1925260118_0
Thu Apr 5 14:03:14 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1 finished
Thu Apr 5 14:03:14 2018 | SETI@home | Starting task 21no17aa.19405.16427.5.32.209_1
Thu Apr 5 14:03:16 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1_r1314845934_0
Thu Apr 5 14:03:19 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1_r1314845934_0
The cache is set for one day.
ID: 1928130
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928133 - Posted: 5 Apr 2018, 18:14:23 UTC - in response to Message 1928130.  

Yes, I can see

Thu Apr 5 13:14:39 2018 | SETI@home | Scheduler request completed: got 2 new tasks
But your 'full' log doesn't include the transaction at 13:14:39.

What happened inside

Thu Apr 5 13:14:38 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:14:38 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 13:14:39 2018 | SETI@home | Scheduler request completed: got 2 new tasks
That's the (only) one we need.
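As an aside, digging a single transaction out of a saved copy of the event log is easy to script. This is just a sketch: it groups lines naively at each "Starting scheduler request" (so a chunk can pick up unrelated upload/download lines), and the file name and timestamp marker below are assumptions for illustration.

```python
def transactions(lines):
    """Group event-log lines into chunks, one per scheduler transaction.

    A chunk runs from a 'Starting scheduler request' line up to (but not
    including) the next one, so it may pick up unrelated lines; good
    enough for eyeballing a single request/reply pair.
    """
    current = []
    for line in lines:
        if "Starting scheduler request" in line:
            if current:
                yield current
            current = [line]
        elif current:
            current.append(line)
    if current:
        yield current

def find_transaction(lines, marker):
    """Return the first transaction whose lines contain the marker string."""
    for chunk in transactions(lines):
        if any(marker in line for line in chunk):
            return chunk
    return None
```

With the log saved to a file, something like `find_transaction(open("stdoutdae.txt"), "13:14:38")` would pull out just the request in question (file name assumed).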
ID: 1928133
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928134 - Posted: 5 Apr 2018, 18:18:30 UTC - in response to Message 1928133.  
Last modified: 5 Apr 2018, 18:21:50 UTC

It looks as though someone kicked something, 'cause I did Nothing;
Thu Apr 5 14:06:37 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 14:06:42 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 14:06:42 2018 | SETI@home | Reporting 7 completed tasks
Thu Apr 5 14:06:42 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 14:06:42 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 14:06:42 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 242039.97 seconds; 0.00 devices
Thu Apr 5 14:06:44 2018 | SETI@home | Scheduler request completed: got 80 new tasks
Thu Apr 5 14:11:49 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 14:11:54 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 14:11:54 2018 | SETI@home | Reporting 3 completed tasks
Thu Apr 5 14:11:54 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 14:11:54 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 14:11:54 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 222610.69 seconds; 0.00 devices
Thu Apr 5 14:11:55 2018 | SETI@home | Scheduler request completed: got 9 new tasks
It's finally sending tasks.

This part?
Thu Apr 5 13:14:33 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:14:38 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:14:38 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 13:14:38 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 13:14:38 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:14:38 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 229589.54 seconds; 0.00 devices
Thu Apr 5 13:14:39 2018 | SETI@home | Scheduler request completed: got 2 new tasks
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:14:39 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 416 seconds
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.21599.818.21.44.235.vlar_0
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.21458.1227.21.44.145.vlar_0
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.11928.5.32.128_1
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.23690.409.21.44.68.vlar_1
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.11928.5.32.131_0
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.11928.5.32.118_0
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:14:39 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:14:41 2018 | SETI@home | Started download of 23dc17aa.18440.24615.15.42.147
Thu Apr 5 13:14:41 2018 | SETI@home | Started download of 23dc17aa.18440.24615.15.42.123
Thu Apr 5 13:14:43 2018 | SETI@home | Finished download of 23dc17aa.18440.24615.15.42.123
Thu Apr 5 13:14:44 2018 | SETI@home | Finished download of 23dc17aa.18440.24615.15.42.147
ID: 1928134
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928139 - Posted: 5 Apr 2018, 18:29:13 UTC - in response to Message 1928134.  

Probably someone collected those

Tasks for CPU are available, but your preferences are set to not accept them
Tasks for AMD/ATI GPU are available, but your preferences are set to not accept them
Tasks for Intel GPU are available, but your preferences are set to not accept them
that were cluttering up the feeder cache. Perhaps 'Tasks for CPU' were available, but because you weren't asking for any, and your preferences (may?) allow them, there was no point in sending that message.
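The logic being reasoned about here can be sketched in a few lines — purely hypothetical, not the scheduler's actual code: a per-resource notice is only worth sending when work exists for a resource the host will not receive.

```python
def availability_messages(available, requested, allowed_by_prefs):
    """Decide which 'Tasks for X are available, but...' notices to send.

    available: set of resource names with work in the feeder cache
    requested: set of resource names the host asked work for
    allowed_by_prefs: resource name -> bool (True = prefs accept it)
    """
    msgs = []
    for res in sorted(available):
        if res in requested:
            continue  # host asked for this resource; it may simply get work
        if allowed_by_prefs.get(res, True):
            continue  # allowed but not requested: nothing worth saying
        msgs.append("Tasks for %s are available, but your preferences "
                    "are set to not accept them" % res)
    return msgs
```

Under that assumption, an available 'Tasks for CPU' pool produces no message when the preferences allow CPU work, which would explain the missing line.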
ID: 1928139 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928148 - Posted: 5 Apr 2018, 19:12:06 UTC - in response to Message 1928126.  


Is it possible that on a machine which has CPU crunching enabled, you might have 100 CPU tasks onboard, and the scheduler might count them and say "enough, already", and bail out without enumerating GPU tasks? The message you're discussing does say "This computer has reached A limit on tasks in progress" (direct quote from my log at 18:15, except for the emphasis). It doesn't say which limit.

I wasn't aware that the scheduler can make a distinction between GPU and CPU tasks. Can you show me the code where the scheduler differentiates between reaching A limit of tasks in progress for EACH type of task?

I have a hunch the message is a catchall for ALL types of tasks, IOW the whole host cache allotment. Richard, you probably know which module to look in for the code snippet that generates the message.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928148
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14649
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928150 - Posted: 5 Apr 2018, 19:27:00 UTC - in response to Message 1928148.  

That's the benefit of exact (copy and paste) quoting - it makes search tools work :-)

https://github.com/BOINC/boinc/blob/master/sched/sched_send.cpp#L1355

That segment starts at https://github.com/BOINC/boinc/blob/master/sched/sched_send.cpp#L1205, with the comment // send messages to user about why jobs were or weren't sent.

You'll probably need to do another search to find where g_wreq->max_jobs_exceeded() is set, and so on.
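For what it's worth, the shape of that bookkeeping can be paraphrased like this. This is a hedged Python sketch, not the real sched_send.cpp code; the config names mirror BOINC's max_wus_in_progress / max_wus_in_progress_gpu options, and the values are invented.

```python
MAX_WUS_IN_PROGRESS = 100      # per CPU core (assumed value)
MAX_WUS_IN_PROGRESS_GPU = 100  # per GPU (assumed value)

def limit_message(cpu_in_progress, gpu_in_progress, ncpus, ngpus):
    """Track the CPU and GPU limits separately, but emit one generic message."""
    cpu_exceeded = cpu_in_progress >= MAX_WUS_IN_PROGRESS * ncpus
    gpu_exceeded = ngpus > 0 and gpu_in_progress >= MAX_WUS_IN_PROGRESS_GPU * ngpus
    if cpu_exceeded or gpu_exceeded:
        # The user-visible text never says which limit was hit,
        # which is exactly the ambiguity being discussed.
        return "This computer has reached a limit on tasks in progress"
    return None
```

If the real code works anything like this, the limits are per resource but the log line is deliberately (or lazily) generic.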
ID: 1928150
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928155 - Posted: 5 Apr 2018, 19:37:22 UTC - in response to Message 1928150.  

That's the benefit of exact (copy and paste) quoting - it makes search tools work :-)

https://github.com/BOINC/boinc/blob/master/sched/sched_send.cpp#L1355

That segment starts at https://github.com/BOINC/boinc/blob/master/sched/sched_send.cpp#L1205, with the comment // send messages to user about why jobs were or weren't sent.

You'll probably need to do another search to find where g_wreq->max_jobs_exceeded() is set, and so on.

I don't know why I have such a hard time searching the code. I do use copy and paste, and it rarely finds what I am looking for. It must be some sort of error on my part, as I have lousy search luck even with supposedly obvious keywords in Google.

I came across that g_wreq->max_jobs_exceeded() too while reading the code. That would seem to be the place where a host's cache allotment is set.

This was my last transaction on my fastest Linux machine and the first to exhibit the issue last night.

Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | [sched_op] Starting scheduler request
Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | Sending scheduler request: To fetch work.
Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | Reporting 12 completed tasks
Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | [sched_op] CPU work request: 696786.80 seconds; 0.00 devices
Thu 05 Apr 2018 12:26:15 PM PDT | SETI@home | [sched_op] NVIDIA GPU work request: 316803.94 seconds; 0.00 devices
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | Scheduler request completed: got 1 new tasks
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] Server version 709
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | Project requested delay of 303 seconds
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] estimated total CPU task duration: 3640 seconds
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 0 seconds
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 20no17aa.5630.32472.6.33.193.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26060.2454.21.44.58.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26060.2454.21.44.68.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26499.2454.21.44.76.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26060.2454.21.44.74.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.28289.2454.22.45.65.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26070.2454.22.45.82.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.28339.2454.21.44.77.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26499.2454.21.44.75.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26499.2454.21.44.77.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26070.2454.22.45.64.vlar_1
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_56920_Bol520_0008.26070.2454.22.45.83.vlar_0
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu 05 Apr 2018 12:26:19 PM PDT | SETI@home | [sched_op] Reason: requested by project
Thu 05 Apr 2018 12:26:21 PM PDT | SETI@home | Started download of 20dc17ab.12498.9085.16.43.165.vlar
Thu 05 Apr 2018 12:26:24 PM PDT | SETI@home | Finished download of 20dc17ab.12498.9085.16.43.165.vlar
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928155