Panic Mode On (111) Server Problems?


Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14660
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928701 - Posted: 7 Apr 2018, 21:55:55 UTC - in response to Message 1928697.  

I don't want to burst any bubbles, however, I've always run my cache settings at about a day. Since I only run One or Two CPU tasks, my CPU caches are Always very low. Doesn't seem to help...does it.
It would be interesting to see someone 'provoke' the server to simply stop sending replacements, without changing the cache settings of course. As far as I know the server just decides to stop sending replacement tasks without being 'provoked'. That's the way it works for me anyway. Right now it's working normally, completed tasks are being replaced when reported. How would you provoke the server to stop sending replacements? I'm also waiting to hit that empty feeder, seems I've managed to avoid it for the last day. Just lucky I suppose.
So you believe in magical schedulers that don't look at any of the numbers you send them, and don't process them according to published code?

You're on your own with that.
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13797
Credit: 208,696,464
RAC: 304
Australia
Message 1928704 - Posted: 7 Apr 2018, 22:09:42 UTC - in response to Message 1928697.  
Last modified: 7 Apr 2018, 22:13:31 UTC

It would be interesting to see someone 'provoke' the server to simply stop sending replacements, without changing the cache settings of course. As far as I know the server just decides to stop sending replacement tasks without being 'provoked'. That's the way it works for me anyway.

It's not about "provoking" it to stop sending work, it's about "provoking" it to send it again when it stops.

Richard's theory is along the lines of what I suspected- it relates to the order in which it processes the request for work (work type & processing device). For whatever reason there are times where the Scheduler is struggling, and that results in some of us not getting work to replace returned work which is what normally occurs. It might also explain why your triple update (usually) helps.
When the issue occurs, the Scheduler response comes within the normal time frame for me, 2-3 seconds. During the same period in the last occurrence, Sirius said he was still getting work, but the Scheduler requests were taking longer than usual.

So the next time the problem occurs, I'll change my cache setting & see if it helps (or not). And Sirius will keep an eye out & see if he's still getting work when we're not, and whether he's getting the long Scheduler response times again.

Sorting this out might also help sort out the problem with the Application settings, as they are related. The issue of not getting work to replace returned work appeared at the same time that people who chose to do AP work (with V8 only if no AP was available) had to change their settings to accept V8 as well, or get no work at all when no AP was available.
Grant
Darwin NT
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928705 - Posted: 7 Apr 2018, 22:10:14 UTC - in response to Message 1928701.  

I thought BOINC was supposed to be set and forget. You know, install the Program and let it run. Now you're trying to say users have to master advanced math to get any work?
Please.
The server deciding to stop sending work isn't because the user decided to not do his homework.
If it is, then the people at BOINC need to do their homework and make the App more user friendly.
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1928706 - Posted: 7 Apr 2018, 22:22:27 UTC - in response to Message 1928631.  

Am I misreading something or is the same tape loaded twice?
Tape blc03_2bit_guppi_58185_76028_Dw1_off_0033
. .

. . If it is I hope the results the second time around match the first :)

Stephen

Bernie Vine
Volunteer moderator
Volunteer tester

Joined: 26 May 99
Posts: 9954
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1928707 - Posted: 7 Apr 2018, 22:27:47 UTC

I thought BOINC was supposed to be set and forget. You know, install the Program and let it run.


Yep that is how it works for me.

All I do is check 3-4 times a day and almost always my caches are full, sometimes one or two tasks low. If I wait a while it will top up on its own.

Now I use Windows and only have mid-range cards so perhaps that helps.
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1928714 - Posted: 7 Apr 2018, 23:00:30 UTC - in response to Message 1928676.  

What part of the system would be tuned? Are we talking about setting venues for each machine or something? Or are you talking about changing rsc_fpops_est in client_state?
No, I wouldn't touch <rsc_fpops_est> - in fact, I wouldn't change anything inside client_state.xml (so no rescheduling, either). My aim is to find out how the scheduler is really working, and hence to support or disprove some of the many theories that have been floated in this thread. Only once we all understand it can we make sensible suggestions about how to drive it. I aim to work with BOINC, rather than against it - unless I come across a coding bug which makes it work in a way which is different from what is documented, or which appears to have made the servers work differently from the way the designers intended.

Once I've got some working estimates for my particular host - yours will be different - I'm going to do the maths to work out how long 100 CPU tasks will last, and how long 100 GPU tasks will last. If the CPU runs dry first, that's no good for our test: I'll use app_config.xml to reduce the number of CPU cores SETI is allowed to use (that'll prolong the cache lifetime for the same setting - the spare cores can get on with something else). Then, I'll set the cache so that the machine loads up 100 CPU tasks, at which point it'll be gasping for GPU work, too. See if I get knocked back with a task limit message - that's the one I'm looking for, nothing else. Then, back off the cache just enough to stabilise the CPU in the 90s, but the GPU still wanting more. If I don't get GPU work, or get the wrong message, then it's back to the drawing board and re-read the code to see where I've gone wrong.

It may take a while...


. . I am afraid I really do not understand your approach here. On Bertie I have the work request limits set to 0.35/0, or at least 0.35 days of work with NO additional work. At 1000 to 1100 tasks per day on the GPU, or 100 tasks per 0.1 days of request, that will more than keep the Q for the two GPUs full while restricting the CPU Q to about 16 tasks (i.e. 200 GPU and 16 CPU). If I increase that work request limit to the point where the CPU Q reaches the server-set limit I will be requesting work for about 2 days, which on the GPUs would be over 2000 tasks. To set a limit that will leave the GPU Q just below the 200 max (about 0.18 days), the CPU Q will have only about 8 tasks. There is no way that the CPU request will achieve the limit of 100 tasks without the GPU Q being absolutely full. ???
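Stephen's queue arithmetic can be checked with a few lines (a rough sketch: the tasks-per-day and queue figures are taken from the post, the server limits are as discussed in this thread, and everything else is derived):

```python
# Rough model of the queue sizes described above.
gpu_tasks_per_day = 1050      # "1000 to 1100 tasks per day" across the two GPUs
cpu_queue_at_035 = 16         # CPU tasks held with a 0.35-day cache setting
server_gpu_limit = 200        # 100 in-progress tasks per GPU x 2 GPUs
server_cpu_limit = 100        # per-host CPU task limit discussed in the thread

cpu_tasks_per_day = cpu_queue_at_035 / 0.35                  # ~46 CPU tasks/day
days_to_fill_cpu = server_cpu_limit / cpu_tasks_per_day      # cache to hit CPU limit
gpu_tasks_at_that_setting = days_to_fill_cpu * gpu_tasks_per_day
days_at_gpu_limit = server_gpu_limit / gpu_tasks_per_day     # cache to sit at GPU limit
cpu_queue_at_gpu_limit = days_at_gpu_limit * cpu_tasks_per_day

print(f"cache to fill CPU queue:  {days_to_fill_cpu:.2f} days")    # ~2.2 days
print(f"GPU tasks requested then: {gpu_tasks_at_that_setting:.0f}")  # well over 2000
print(f"cache at GPU limit:       {days_at_gpu_limit:.2f} days")   # ~0.19 days
print(f"CPU queue then:           {cpu_queue_at_gpu_limit:.1f} tasks")  # ~8.7
```

The numbers line up with the post: filling the CPU queue to 100 implies a roughly 2-day request, while sitting just under the 200-task GPU limit leaves only about 8 CPU tasks queued.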

Stephen

Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1928717 - Posted: 7 Apr 2018, 23:15:57 UTC - in response to Message 1928697.  

I don't want to burst any bubbles, however, I've always run my cache settings at about a day. Since I only run One or Two CPU tasks, my CPU caches are Always very low. Doesn't seem to help...does it.
It would be interesting to see someone 'provoke' the server to simply stop sending replacements, without changing the cache settings of course. As far as I know the server just decides to stop sending replacement tasks without being 'provoked'. That's the way it works for me anyway. Right now it's working normally, completed tasks are being replaced when reported. How would you provoke the server to stop sending replacements? I'm also waiting to hit that empty feeder, seems I've managed to avoid it for the last day. Just lucky I suppose.


+1

Stephen

Keith Myers Special Project $250 donor
Volunteer tester

Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928733 - Posted: 8 Apr 2018, 0:33:36 UTC - in response to Message 1928698.  

This really is going to be the last for today/tonight :-)

Sorry, meant to include that.
4/7/2018 13:50:24 | | [work_fetch] target work buffer: 86400.00 + 864.00 sec
So target is one day or 864864 seconds.
No it isn't - use a calculator. 87,264 seconds.

Adding up shortfall and saturated is 4/7/2018 13:51:09 | | [work_fetch] shortfall 118044.05 nidle 0.00 saturated 47498.36 busy 0.00 or 165542 seconds of gpu work. Which is 46 hours of gpu work total. Divide by 3 gpus and you get 15.3 hours of work. Huhh?
OK, maybe I'm rusty.

Target (wall-clock time): 87,264
Saturated (maybe also wall-clock time): 47,498.36
Leaves shortfall: 39,765.64 - wall-clock, which means per GPU.
3 GPUs, total needed to fill 3 shortfalls: 119,296.92

Does that sound better?

I did use a calculator. Doesn't mean I didn't enter an extra zero in calculating total target. Fumble fingers blamed.

Leaves shortfall: 39,765.64 - wall-clock, which means per GPU

OK, this is where I was unclear. I thought the shortfall was calculated for the entire system. Not for just one gpu.
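Richard's correction works out as follows (a sketch of the arithmetic only; treating shortfall as a per-GPU wall-clock figure is the interpretation reached in this exchange, not something verified against the BOINC source here):

```python
# Figures from the [work_fetch] log lines quoted above.
target = 86400.00 + 864.00   # target work buffer: one day plus the extra 864 s
saturated = 47498.36         # wall-clock seconds already covered by queued work
n_gpus = 3

shortfall_per_gpu = target - saturated
total_to_fill = shortfall_per_gpu * n_gpus

print(f"target:            {target:.2f} s")            # 87264.00, not 864864
print(f"shortfall per GPU: {shortfall_per_gpu:.2f} s") # 39765.64
print(f"to fill 3 GPUs:    {total_to_fill:.2f} s")     # 119296.92
```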
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13797
Credit: 208,696,464
RAC: 304
Australia
Message 1928751 - Posted: 8 Apr 2018, 2:27:39 UTC - in response to Message 1928691.  
Last modified: 8 Apr 2018, 2:27:57 UTC

Looks like they can meet 114,000 returned per hour, but 134,000 per hour is beyond their abilities. Ready-to-send should be empty in another 9-12 hours at the present rate of decline.

Revise that estimate, splitter output has taken an even deeper & more extended dive.
Maybe 3 hours now left in the Ready-to-send buffer.
Grant
Darwin NT
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14660
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928776 - Posted: 8 Apr 2018, 7:59:36 UTC - in response to Message 1928733.  

I did use a calculator. Doesn't mean I didn't enter an extra zero in calculating total target. Fumble fingers blamed.
My fingers fumble too. I've trained myself ALWAYS to use copy'n'paste whenever it matters:


Quicker, too.
Wiggo

Joined: 24 Jan 00
Posts: 35522
Credit: 261,360,520
RAC: 489
Australia
Message 1928785 - Posted: 8 Apr 2018, 9:11:04 UTC - in response to Message 1928528.  

Good point, but I want to see if it works with more than 50 at a time :-)

After that hit, I'll keep a close eye on the log for when the scheduler has issues.

I doubt that you'll see much difference there Sirius as you're only doing CPU work and not requesting constant GPU work. ;-)

But then again I don't suffer those same problems here, but when I do have a problem everyone has a problem.

Cheers.

Yes, if Wiggo is having issues . . . . he is the proverbial canary in the coalmine. He runs a version 6 client and if he is having issues . . . . EVERYONE is having issues.

Ever since the last of the Arecibo tasks cleared out from the RTS buffer, I have kept all machines at full caches. So maybe the problem is that the scheduler is having issues differentiating cpu or gpu work requests when the mix in the buffer contains Arecibo VLAR's.
Well the 2nd PC that I totally stripped down and rebuilt yesterday (an old Vista job I built back in the 1st days of the OS) is now running Mint 18.3 with my old heater box GTX 560Ti with a 10 day plus 0.1 cache setting, so I'll see if the later version of BOINC is to blame (or if it's just too slow to trigger said problem). :-D

Cheers.
Wiggo

Joined: 24 Jan 00
Posts: 35522
Credit: 261,360,520
RAC: 489
Australia
Message 1928786 - Posted: 8 Apr 2018, 9:16:04 UTC - in response to Message 1928528.  

Good point, but I want to see if it works with more than 50 at a time :-)

After that hit, I'll keep a close eye on the log for when the scheduler has issues.

I doubt that you'll see much difference there Sirius as you're only doing CPU work and not requesting constant GPU work. ;-)

But then again I don't suffer those same problems here, but when I do have a problem everyone has a problem.

Cheers.

Yes, if Wiggo is having issues . . . . he is the proverbial canary in the coalmine. He runs a version 6 client and if he is having issues . . . . EVERYONE is having issues.

Ever since the last of the Arecibo tasks cleared out from the RTS buffer, I have kept all machines at full caches. So maybe the problem is that the scheduler is having issues differentiating cpu or gpu work requests when the mix in the buffer contains Arecibo VLAR's.
Well the 2nd PC that I totally stripped down and rebuilt yesterday (an old Vista job I built back in the 1st days of the OS) is now running Mint 18.3 with my old heater box GTX 560Ti with a 10 day plus 0.1 cache setting, so I'll see if a later version of BOINC proves to be to blame (or if it's just too slow to trigger said problem). :-D

Cheers.
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14660
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928787 - Posted: 8 Apr 2018, 10:00:20 UTC - in response to Message 1928714.  

. . I am afraid I really do not understand your approach here. On Bertie I have the work request limits set to 0.35/0, or at least 0.35 days of work with NO additional work. At 1000 to 1100 tasks per day on the GPU, or 100 tasks per 0.1 days of request, that will more than keep the Q for the two GPUs full while restricting the CPU Q to about 16 tasks (i.e. 200 GPU and 16 CPU). If I increase that work request limit to the point where the CPU Q reaches the server-set limit I will be requesting work for about 2 days, which on the GPUs would be over 2000 tasks. To set a limit that will leave the GPU Q just below the 200 max (about 0.18 days), the CPU Q will have only about 8 tasks. There is no way that the CPU request will achieve the limit of 100 tasks without the GPU Q being absolutely full. ???

Stephen

Exactly. That's a very sensible and admirable course of action, and I'm glad it suits you. Carry on exactly as you are.

The reason I've got involved in this thread is to try and quieten down the continual grumbling from a small group of users who appear to have different objectives, and complain when BOINC doesn't respond the way they would like it to.

From the limited hard facts they've been willing to supply, I've formed the impression that their objective is to keep the CPU queue filled as well, and that they don't mind that they are continually pestering the servers for vastly more GPU work than they are ever going to receive. By tracking down the mechanism, I'm hoping to suggest a course of action closer to yours, which will maximise their throughput and minimise their grumbles, at the minor detriment of relaxing one of their secondary objectives. But I need the facts first.

I hope to be in a position to do that in a couple of hours, when the last of my backup project work from the extended outage this week has been flushed from the computer I've chosen as the guinea-pig. But I have some yard-work to complete first.
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13797
Credit: 208,696,464
RAC: 304
Australia
Message 1928788 - Posted: 8 Apr 2018, 10:35:30 UTC - in response to Message 1928787.  
Last modified: 8 Apr 2018, 10:40:37 UTC

From the limited hard facts they've been willing to supply, I've formed the impression that their objective is to keep the CPU queue filled as well, and that they don't mind that they are continually pestering the servers for vastly more GPU work than they are ever going to receive.

If that is the case, then you have the wrong impression.

Our cache, or the Server side limits (depending on the abilities of our system), sets the maximum amount of work we can get.
What we would like is for the Scheduler to replace whatever work we return, as we return it. If we return 1 WU, we would like to get 1 WU back. If we return 5 WUs, we would like to get 5 WUs back.
That is what has occurred since I moved to Seti under BOINC. Now, at seemingly random times, it can take 5-20 Scheduler requests, each time reporting work, in order to get more (either GPU or CPU) work.
That is what changed in Dec 2016.

                                      Run only the selected applications AstroPulse v7: yes
                                                                          SETI@home v8: yes
If no work for selected applications is available, accept work from other applications? yes

It used to be that people could have "Run only the selected applications AstroPulse v7" set to Yes, "Run only the selected applications SETI@home v8" set to No, and "If no work for selected applications is available, accept work from other applications? Yes" and get MB V8 work when no AP work was available. In Dec 2016, with the sorting out of the SoG 8.22/ 8.23 rollout, people started posting about no longer being able to get any V8 work.
The only way they could now get V8 work was to also set "Run only the selected applications SETI@home v8" to Yes. The "If no work for selected applications is available, accept work from other applications?: Yes" setting no longer had any effect on getting work.
And at the same time this occurred, the Scheduler became random in whether it would or would not supply work to replace that which had been returned.
I found that by installing the AP application and setting "Run only the selected applications AstroPulse v7" to Yes, the frequency and severity of my cache running down were reduced; but it continues to occur.

I personally would just like it to work the way it used to, all the time. Not just now & then or even most of the time.
I return a WU, I get a new WU; I return 10 WUs, I get 10 WUs back. This isn't about not being able to get work because of Arecibo VLARs and GPU work requests- the problem affects CPU work requests as well.
Grant
Darwin NT
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14660
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928793 - Posted: 8 Apr 2018, 11:39:31 UTC - in response to Message 1928788.  

I would like to see the scheduler reply messages - all of them - that accompany the "Project requested delay of 303 seconds" at the end of a scheduler RPC, for one or two examples of this type of problem. We've been intertwining discussions about several problems in this thread, and I'm finding it hard to disentangle whether this problem is, or isn't, linked to "This computer has reached a limit on tasks in progress" - on my machines, I usually do get 'one for one', or however I've set the cache on that particular machine (sometimes it's 'five for five'). If anything different happens, it's usually an obvious server problem (or user error at this end - that happens too!), and I'd like to keep this thread clearer for server problems, because I believe it's one of the very few threads that the staff sometimes peek into for signs of trouble.

When I start my own tests, I'll also be monitoring the <sched_op_debug> and <work_fetch_debug> Event Log flags, which give more insight into what is being requested and why. But that can get very verbose, and I don't expect anyone else to want to go down to that level of detail.
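For anyone wanting to watch the same flags, they are enabled in cc_config.xml in the BOINC data directory, then re-read via Options > Read config files in BOINC Manager (or a client restart). A minimal example:

```xml
<!-- cc_config.xml: enable the two Event Log flags mentioned above -->
<cc_config>
  <log_flags>
    <sched_op_debug>1</sched_op_debug>
    <work_fetch_debug>1</work_fetch_debug>
  </log_flags>
</cc_config>
```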
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1928804 - Posted: 8 Apr 2018, 14:20:36 UTC - in response to Message 1928788.  


If that is the case, then you have the wrong impression.

It used to be that people could have "Run only the selected applications AstroPulse v7" set to Yes, "Run only the selected applications SETI@home v8" set to No, and "If no work for selected applications is available, accept work from other applications? Yes" and get MB V8 work when no AP work was available. In Dec 2016, with the sorting out of the SoG 8.22/ 8.23 rollout, people started posting about no longer being able to get any V8 work.
The only way they could now get V8 work was to also set "Run only the selected applications SETI@home v8" to Yes. The "If no work for selected applications is available, accept work from other applications?: Yes" setting no longer had any effect on getting work.
And at the same time this occurred, the Scheduler became random in whether it would or would not supply work to replace that which had been returned.
I found that by installing the AP application and setting "Run only the selected applications AstroPulse v7" to Yes, the frequency and severity of my cache running down were reduced; but it continues to occur.

I personally would just like it to work the way it used to, all the time. Not just now & then or even most of the time.
I return a WU, I get a new WU; I return 10 WUs, I get 10 WUs back. This isn't about not being able to get work because of Arecibo VLARs and GPU work requests- the problem affects CPU work requests as well.


. . It may not be directly relevant but on the i5 with the 970s AP tasks recently stopped working. They will start but quickly stop and change to "waiting to run". I tried re-installing the AP app and other things but no improvement. So to avoid receiving them I set the Run AP V7 tasks to NO and was surprised to find that while before I had only ever received a small smattering of AP tasks when they were running, I was now receiving dozens per day. On checking the settings I realised I had left the "If no work for selected app available" option to Yes and this was bringing me more AP work than ever before. So it would seem to be having the complete opposite effect to disabling the "receive work for MB V8" switch.

. .I have no idea of what to make of that but it seemed worth noting.

Stephen

TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928807 - Posted: 8 Apr 2018, 15:12:29 UTC - in response to Message 1928793.  
Last modified: 8 Apr 2018, 15:13:56 UTC

I would like to see the scheduler reply messages - all of them - that accompany the "Project requested delay of 303 seconds" at the end of a scheduler RPC, for one or two examples of this type of problem. We've been intertwining discussions about several problems in this thread, and I'm finding it hard to disentangle whether this problem is, or isn't, linked to "This computer has reached a limit on tasks in progress" - on my machines, I usually do get 'one for one', or however I've set the cache on that particular machine (sometimes it's 'five for five'). If anything different happens, it's usually an obvious server problem (or user error at this end - that happens too!), and I'd like to keep this thread clearer for server problems, because I believe it's one of the very few threads that the staff sometimes peek into for signs of trouble.

When I start my own tests, I'll also be monitoring the <sched_op_debug> and <work_fetch_debug> Event Log flags, which give more insight into what is being requested and why. But that can get very verbose, and I don't expect anyone else to want to go down to that level of detail.

Here you go, from over a Year ago. That's how long I've been posting about it, and I'm sure you're aware of the problem by now.
Posted: 27 Feb 2017, 17:00:00 UTC
On my machines the problem comes and goes. Currently it has resurfaced...on my nVidia machines. As all times before, the problem Doesn't happen on my ATI machines. I have noticed the server places the ATI tasks Before the nVidia tasks, as in, it will send tasks to the ATI GPUs before sending tasks to the nVidia GPUs when both GPUs are present. I don't know if that is part of the problem or not. Currently both nVidia machines are down by around 50 tasks even though there are plenty of tasks ready to send. The usual response is;

Mon Feb 27 11:33:02 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:33:02 2017 | SETI@home | Reporting 2 completed tasks
Mon Feb 27 11:33:02 2017 | SETI@home | Requesting new tasks for NVIDIA GPU
Mon Feb 27 11:33:02 2017 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Mon Feb 27 11:33:02 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 255459.51 seconds; 0.00 devices
Mon Feb 27 11:33:10 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 11:33:10 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:33:10 2017 | SETI@home | Project has no tasks available
Mon Feb 27 11:33:10 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 11:38:17 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:38:17 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 11:38:17 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 11:38:17 2017 | SETI@home | [sched_op] CPU work request: 167.15 seconds; 0.00 devices
Mon Feb 27 11:38:17 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 256694.99 seconds; 0.00 devices
Mon Feb 27 11:38:24 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 11:38:24 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:38:24 2017 | SETI@home | Project has no tasks available
Mon Feb 27 11:38:24 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 11:43:31 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:43:31 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 11:43:31 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 11:43:31 2017 | SETI@home | [sched_op] CPU work request: 625.03 seconds; 0.00 devices
Mon Feb 27 11:43:31 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 258128.20 seconds; 0.00 devices
Mon Feb 27 11:43:38 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 11:43:38 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:43:38 2017 | SETI@home | Project has no tasks available
Mon Feb 27 11:43:38 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 11:48:46 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:48:46 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 11:48:46 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 11:48:46 2017 | SETI@home | [sched_op] CPU work request: 1087.85 seconds; 0.00 devices
Mon Feb 27 11:48:46 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 259303.31 seconds; 0.00 devices
Mon Feb 27 11:48:51 2017 | SETI@home | Scheduler request completed: got 3 new tasks
Mon Feb 27 11:48:51 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:48:51 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 11:53:58 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:53:58 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 11:53:58 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 11:53:58 2017 | SETI@home | [sched_op] CPU work request: 1546.67 seconds; 0.00 devices
Mon Feb 27 11:53:58 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 259271.87 seconds; 0.00 devices
Mon Feb 27 11:54:05 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 11:54:05 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:54:05 2017 | SETI@home | Project has no tasks available
Mon Feb 27 11:54:05 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 11:59:12 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 11:59:12 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 11:59:12 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 11:59:12 2017 | SETI@home | [sched_op] CPU work request: 2332.56 seconds; 0.00 devices
Mon Feb 27 11:59:12 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 260648.54 seconds; 0.00 devices
Mon Feb 27 11:59:20 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 11:59:20 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 11:59:20 2017 | SETI@home | Project has no tasks available
Mon Feb 27 11:59:20 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:04:28 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:04:28 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 12:04:28 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:04:28 2017 | SETI@home | [sched_op] CPU work request: 3243.49 seconds; 0.00 devices
Mon Feb 27 12:04:28 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 262043.70 seconds; 0.00 devices
Mon Feb 27 12:04:32 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:04:32 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:04:32 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:04:32 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:09:39 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:09:39 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 12:09:39 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:09:39 2017 | SETI@home | [sched_op] CPU work request: 4110.04 seconds; 0.00 devices
Mon Feb 27 12:09:39 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 263911.27 seconds; 0.00 devices
Mon Feb 27 12:09:47 2017 | SETI@home | Scheduler request completed: got 1 new tasks
Mon Feb 27 12:09:47 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:09:47 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:14:55 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:14:55 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 12:14:55 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:14:55 2017 | SETI@home | [sched_op] CPU work request: 4672.06 seconds; 0.00 devices
Mon Feb 27 12:14:55 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 265318.69 seconds; 0.00 devices
Mon Feb 27 12:15:00 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:15:00 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:15:00 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:15:00 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:20:07 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:20:07 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 12:20:07 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:20:07 2017 | SETI@home | [sched_op] CPU work request: 4960.54 seconds; 0.00 devices
Mon Feb 27 12:20:07 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 266612.49 seconds; 0.00 devices
Mon Feb 27 12:20:14 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:20:14 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:20:14 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:20:14 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:25:21 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:25:21 2017 | SETI@home | Reporting 1 completed tasks
Mon Feb 27 12:25:21 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:25:21 2017 | SETI@home | [sched_op] CPU work request: 6441.03 seconds; 0.00 devices
Mon Feb 27 12:25:21 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 267794.37 seconds; 0.00 devices
Mon Feb 27 12:25:27 2017 | SETI@home | Scheduler request completed: got 4 new tasks
Mon Feb 27 12:25:27 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:25:27 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:25:27 2017 | SETI@home | [sched_op] Deferring communication for 00:05:03
Mon Feb 27 12:25:27 2017 | SETI@home | [sched_op] Reason: requested by project
Mon Feb 27 12:30:35 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:30:35 2017 | SETI@home | Reporting 2 completed tasks
Mon Feb 27 12:30:35 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:30:35 2017 | SETI@home | [sched_op] CPU work request: 7778.30 seconds; 0.00 devices
Mon Feb 27 12:30:35 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 267077.27 seconds; 0.00 devices
Mon Feb 27 12:30:37 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:30:37 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:30:37 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:30:37 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:35:44 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:35:44 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 12:35:44 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:35:44 2017 | SETI@home | [sched_op] CPU work request: 8761.16 seconds; 0.00 devices
Mon Feb 27 12:35:44 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 268609.63 seconds; 0.00 devices
Mon Feb 27 12:35:46 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:35:46 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:35:46 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:35:46 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:40:53 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:40:53 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 12:40:53 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:40:53 2017 | SETI@home | [sched_op] CPU work request: 9546.23 seconds; 0.00 devices
Mon Feb 27 12:40:53 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 270152.86 seconds; 0.00 devices
Mon Feb 27 12:41:00 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:41:00 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:41:00 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:41:00 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:46:08 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:46:08 2017 | SETI@home | Reporting 3 completed tasks
Mon Feb 27 12:46:08 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:46:08 2017 | SETI@home | [sched_op] CPU work request: 10228.05 seconds; 0.00 devices
Mon Feb 27 12:46:08 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 271573.68 seconds; 0.00 devices
Mon Feb 27 12:46:13 2017 | SETI@home | Scheduler request completed: got 1 new tasks
Mon Feb 27 12:46:13 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:46:13 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:51:20 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:51:20 2017 | SETI@home | Reporting 4 completed tasks
Mon Feb 27 12:51:20 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:51:20 2017 | SETI@home | [sched_op] CPU work request: 10783.47 seconds; 0.00 devices
Mon Feb 27 12:51:20 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 271980.14 seconds; 0.00 devices
Mon Feb 27 12:51:27 2017 | SETI@home | Scheduler request completed: got 0 new tasks
Mon Feb 27 12:51:27 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:51:27 2017 | SETI@home | Project has no tasks available
Mon Feb 27 12:51:27 2017 | SETI@home | Project requested delay of 303 seconds
Mon Feb 27 12:56:35 2017 | SETI@home | Sending scheduler request: To fetch work.
Mon Feb 27 12:56:35 2017 | SETI@home | Reporting 2 completed tasks
Mon Feb 27 12:56:35 2017 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
Mon Feb 27 12:56:35 2017 | SETI@home | [sched_op] CPU work request: 11340.81 seconds; 0.00 devices
Mon Feb 27 12:56:35 2017 | SETI@home | [sched_op] NVIDIA GPU work request: 272789.50 seconds; 0.00 devices
Mon Feb 27 12:56:41 2017 | SETI@home | Scheduler request completed: got 1 new tasks
Mon Feb 27 12:56:41 2017 | SETI@home | [sched_op] Server version 707
Mon Feb 27 12:56:41 2017 | SETI@home | Project requested delay of 303 seconds

Results ready to send: 603,373 (0m)

Oh look, the other machine finally got some tasks:

Mon 27 Feb 2017 12:30:30 PM EST | SETI@home | Sending scheduler request: To fetch work.
Mon 27 Feb 2017 12:30:30 PM EST | SETI@home | Reporting 3 completed tasks
Mon 27 Feb 2017 12:30:30 PM EST | SETI@home | Requesting new tasks for NVIDIA
Mon 27 Feb 2017 12:30:30 PM EST | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Mon 27 Feb 2017 12:30:30 PM EST | SETI@home | [sched_op] NVIDIA work request: 294157.14 seconds; 0.00 devices
Mon 27 Feb 2017 12:30:33 PM EST | SETI@home | Scheduler request completed: got 64 new tasks
Mon 27 Feb 2017 12:30:33 PM EST | SETI@home | [sched_op] Server version 707
Mon 27 Feb 2017 12:30:33 PM EST | SETI@home | Project requested delay of 303 seconds
Mon 27 Feb 2017 12:30:33 PM EST | SETI@home | [sched_op] Deferring communication for 00:05:03
Mon 27 Feb 2017 12:30:33 PM EST | SETI@home | [sched_op] Reason: requested by project

Since those 64 tasks were sent, the server has been sending tasks to this machine again.
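For anyone who'd rather count than eyeball these logs, here's a quick sketch (Python; it assumes the standard client log lines shown above, and the function names are mine, not BOINC's) that tallies how many scheduler requests came back empty versus how many tasks actually arrived:

```python
import re

# Match the client's "Scheduler request completed: got N new tasks" lines.
GOT = re.compile(r"Scheduler request completed: got (\d+) new tasks")

def tally(log_text):
    """Count scheduler requests, empty replies, and tasks received."""
    requests = empty = tasks = 0
    for line in log_text.splitlines():
        m = GOT.search(line)
        if m:
            requests += 1
            n = int(m.group(1))
            tasks += n
            if n == 0:
                empty += 1
    return {"requests": requests, "empty": empty, "tasks": tasks}
```

Run it over a saved copy of the event log (e.g. `tally(open("stdoutdae.txt").read())`) to see the hit rate for yourself.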
ID: 1928807 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14660
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928819 - Posted: 8 Apr 2018, 16:27:41 UTC - in response to Message 1928787.  
Last modified: 8 Apr 2018, 16:37:39 UTC

But I have some yard-work to complete first.
Today was the day of the Battle of the Triffid


... and the triffid damn near won. Testing will be postponed while I catch my breath.

Edit - in the meantime,

08/04/2018 17:30:50 | SETI@home | Scheduler request completed: got 42 new tasks
08/04/2018 17:31:35 | SETI@home | Scheduler request completed: got 54 new tasks
08/04/2018 17:32:04 | SETI@home | Scheduler request completed: got 36 new tasks
- up to the limit in every case. It works for me.
ID: 1928819 · Report as offensive
TBar
Volunteer tester

Send message
Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928822 - Posted: 8 Apr 2018, 17:35:16 UTC - in response to Message 1928819.  

Yea, it works for EVERYBODY right now...your point?

I'll give you a history lesson. The problem DOESN'T happen on AMD/ATI GPUs.
The problem first started when an AMD/ATI release was botched and people were sent NVIDIA Apps instead of ATI Apps; see here:
http://setiathome.berkeley.edu/forum_thread.php?id=80754&postid=1838395#1838395
http://setiathome.berkeley.edu/forum_thread.php?id=80754&postid=1838601#1838601
Posted: 16 Jan 2017, 17:15:51 UTC, Hosts with ATI GPU, which downloaded nvidia_SoG app, did not get the new ATI_SoG app, so still tried to run nvidia app and had same "Quering NV device abilities failed" error.
Whatever was done back then started the problem that has existed ever since.
Whatever it is, it ONLY AFFECTS NVIDIA GPUs, and only at certain times.
Right now the problem is around 15 months OLD and counting. Here is a more recent log:
Thu Apr 5 13:45:49 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:45:54 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:45:54 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 236409.72 seconds; 0.00 devices
Thu Apr 5 13:45:55 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:45:55 2018 | SETI@home | No tasks sent
Thu Apr 5 13:45:55 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_blc03_guppi_58157_16456_DIAG_PSR_J1024-0719_0006.18406.409.22.45.176.vlar_1
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.123_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23no17aa.2048.687994.3.30.137_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.113_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.121_0
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:45:55 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:46:20 2018 | SETI@home | Computation for task 21no17aa.19405.13973.5.32.153_0 finished
Thu Apr 5 13:46:20 2018 | SETI@home | Starting task 23dc17aa.3884.15208.14.41.155_0
Thu Apr 5 13:46:21 2018 | SETI@home | Computation for task 23dc17aa.20274.18889.12.39.118_0 finished
Thu Apr 5 13:46:21 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.235_1
Thu Apr 5 13:46:22 2018 | SETI@home | Started upload of 21no17aa.19405.13973.5.32.153_0_r1206594985_0
Thu Apr 5 13:46:23 2018 | SETI@home | Started upload of 23dc17aa.20274.18889.12.39.118_0_r1245950715_0
Thu Apr 5 13:46:24 2018 | SETI@home | Finished upload of 21no17aa.19405.13973.5.32.153_0_r1206594985_0
Thu Apr 5 13:46:26 2018 | SETI@home | Finished upload of 23dc17aa.20274.18889.12.39.118_0_r1245950715_0
Thu Apr 5 13:46:27 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.235_1 finished
Thu Apr 5 13:46:27 2018 | SETI@home | Starting task 23dc17aa.3884.16435.14.41.200_0
Thu Apr 5 13:46:29 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.235_1_r593375621_0
Thu Apr 5 13:46:31 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.235_1_r593375621_0
Thu Apr 5 13:47:37 2018 | SETI@home | Computation for task 23dc17aa.20274.18889.12.39.119_0 finished
Thu Apr 5 13:47:37 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.225_0
Thu Apr 5 13:47:39 2018 | SETI@home | Started upload of 23dc17aa.20274.18889.12.39.119_0_r1788636002_0
Thu Apr 5 13:47:41 2018 | SETI@home | Finished upload of 23dc17aa.20274.18889.12.39.119_0_r1788636002_0
Thu Apr 5 13:48:26 2018 | SETI@home | Computation for task 23dc17aa.3884.16435.14.41.200_0 finished
Thu Apr 5 13:48:26 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.226_0
Thu Apr 5 13:48:28 2018 | SETI@home | Started upload of 23dc17aa.3884.16435.14.41.200_0_r1821173763_0
Thu Apr 5 13:48:30 2018 | SETI@home | Finished upload of 23dc17aa.3884.16435.14.41.200_0_r1821173763_0
Thu Apr 5 13:49:18 2018 | SETI@home | Computation for task 23dc17aa.3884.15208.14.41.155_0 finished
Thu Apr 5 13:49:18 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.217_1
Thu Apr 5 13:49:20 2018 | SETI@home | Started upload of 23dc17aa.3884.15208.14.41.155_0_r1658243187_0
Thu Apr 5 13:49:23 2018 | SETI@home | Finished upload of 23dc17aa.3884.15208.14.41.155_0_r1658243187_0
Thu Apr 5 13:49:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.225_0 finished
Thu Apr 5 13:49:38 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.220_0
Thu Apr 5 13:49:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.225_0_r201744178_0
Thu Apr 5 13:49:42 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.225_0_r201744178_0
Thu Apr 5 13:50:26 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.226_0 finished
Thu Apr 5 13:50:26 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.230_1
Thu Apr 5 13:50:28 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.226_0_r1763249168_0
Thu Apr 5 13:50:31 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.226_0_r1763249168_0
Thu Apr 5 13:51:00 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:51:05 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:51:05 2018 | SETI@home | Reporting 8 completed tasks
Thu Apr 5 13:51:05 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 13:51:05 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:51:05 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 237883.99 seconds; 0.00 devices
Thu Apr 5 13:51:06 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:51:06 2018 | SETI@home | No tasks sent
Thu Apr 5 13:51:06 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.13973.5.32.153_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.118_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.20274.18889.12.39.119_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.15208.14.41.155_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.235_1
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.16435.14.41.200_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.225_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.226_0
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:51:06 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:51:27 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.217_1 finished
Thu Apr 5 13:51:27 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.224_0
Thu Apr 5 13:51:29 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.217_1_r1825719015_0
Thu Apr 5 13:51:32 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.217_1_r1825719015_0
Thu Apr 5 13:51:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.220_0 finished
Thu Apr 5 13:51:38 2018 | SETI@home | Starting task 21no17aa.19405.15200.5.32.222_0
Thu Apr 5 13:51:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.220_0_r1671108643_0
Thu Apr 5 13:51:42 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.220_0_r1671108643_0
Thu Apr 5 13:52:26 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.230_1 finished
Thu Apr 5 13:52:26 2018 | SETI@home | Starting task 23dc17aa.3884.16435.14.41.202_0
Thu Apr 5 13:52:28 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.230_1_r1013284403_0
Thu Apr 5 13:52:30 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.230_1_r1013284403_0
Thu Apr 5 13:53:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.222_0 finished
Thu Apr 5 13:53:38 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.211_0
Thu Apr 5 13:53:41 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.222_0_r1785353434_0
Thu Apr 5 13:53:43 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.222_0_r1785353434_0
Thu Apr 5 13:54:25 2018 | SETI@home | Computation for task 23dc17aa.3884.16435.14.41.202_0 finished
Thu Apr 5 13:54:25 2018 | SETI@home | Starting task 21no17ab.11500.223441.5.32.214_1
Thu Apr 5 13:54:27 2018 | SETI@home | Started upload of 23dc17aa.3884.16435.14.41.202_0_r2028738979_0
Thu Apr 5 13:54:30 2018 | SETI@home | Finished upload of 23dc17aa.3884.16435.14.41.202_0_r2028738979_0
Thu Apr 5 13:54:38 2018 | SETI@home | Computation for task 21no17aa.19405.15200.5.32.224_0 finished
Thu Apr 5 13:54:38 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1
Thu Apr 5 13:54:40 2018 | SETI@home | Started upload of 21no17aa.19405.15200.5.32.224_0_r607232865_0
Thu Apr 5 13:54:44 2018 | SETI@home | Finished upload of 21no17aa.19405.15200.5.32.224_0_r607232865_0
Thu Apr 5 13:54:47 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1 finished
Thu Apr 5 13:54:47 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1
Thu Apr 5 13:54:49 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1_r1026118679_0
Thu Apr 5 13:54:52 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1_r1026118679_0
Thu Apr 5 13:55:00 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.211_0 finished
Thu Apr 5 13:55:00 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1
Thu Apr 5 13:55:02 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.211_0_r772654919_0
Thu Apr 5 13:55:05 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.211_0_r772654919_0
Thu Apr 5 13:55:45 2018 | SETI@home | Computation for task 21no17ab.11500.223441.5.32.214_1 finished
Thu Apr 5 13:55:45 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1
Thu Apr 5 13:55:47 2018 | SETI@home | Started upload of 21no17ab.11500.223441.5.32.214_1_r963446509_0
Thu Apr 5 13:55:49 2018 | SETI@home | Finished upload of 21no17ab.11500.223441.5.32.214_1_r963446509_0
Thu Apr 5 13:56:09 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 13:56:14 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 13:56:14 2018 | SETI@home | Reporting 9 completed tasks
Thu Apr 5 13:56:14 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 13:56:14 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 13:56:14 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 239288.10 seconds; 0.00 devices
Thu Apr 5 13:56:15 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 13:56:15 2018 | SETI@home | No tasks sent
Thu Apr 5 13:56:15 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.217_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.220_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.230_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.224_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17aa.19405.15200.5.32.222_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.3884.16435.14.41.202_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.211_0
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 21no17ab.11500.223441.5.32.214_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.43.vlar_1
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 13:56:15 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 13:58:07 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1 finished
Thu Apr 5 13:58:07 2018 | SETI@home | Starting task 23dc17aa.19675.23384.10.37.243_0
Thu Apr 5 13:58:09 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1_r1741867489_0
Thu Apr 5 13:58:12 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1_r1741867489_0
Thu Apr 5 13:58:51 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1 finished
Thu Apr 5 13:58:51 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0
Thu Apr 5 13:58:53 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1_r1044341074_0
Thu Apr 5 13:58:55 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1_r1044341074_0
Thu Apr 5 13:59:47 2018 | SETI@home | Computation for task 23dc17aa.19675.23384.10.37.243_0 finished
Thu Apr 5 13:59:47 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1
Thu Apr 5 13:59:49 2018 | SETI@home | Started upload of 23dc17aa.19675.23384.10.37.243_0_r67884315_0
Thu Apr 5 13:59:50 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1 finished
Thu Apr 5 13:59:50 2018 | SETI@home | Starting task 23dc17aa.19675.24202.10.37.32_1
Thu Apr 5 13:59:52 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1_r2100367723_0
Thu Apr 5 13:59:53 2018 | SETI@home | Finished upload of 23dc17aa.19675.23384.10.37.243_0_r67884315_0
Thu Apr 5 13:59:55 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1_r2100367723_0
Thu Apr 5 13:59:55 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1 finished
Thu Apr 5 13:59:55 2018 | SETI@home | Starting task 23no17aa.2048.690448.3.30.217_1
Thu Apr 5 13:59:56 2018 | SETI@home | Computation for task 23dc17aa.19675.24202.10.37.32_1 finished
Thu Apr 5 13:59:56 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1
Thu Apr 5 13:59:57 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1_r478977597_0
Thu Apr 5 13:59:58 2018 | SETI@home | Started upload of 23dc17aa.19675.24202.10.37.32_1_r1022985780_0
Thu Apr 5 14:00:00 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1_r478977597_0
Thu Apr 5 14:00:00 2018 | SETI@home | Finished upload of 23dc17aa.19675.24202.10.37.32_1_r1022985780_0
Thu Apr 5 14:01:23 2018 | SETI@home | [sched_op] Starting scheduler request
Thu Apr 5 14:01:28 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Thu Apr 5 14:01:28 2018 | SETI@home | Reporting 6 completed tasks
Thu Apr 5 14:01:28 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Thu Apr 5 14:01:28 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Thu Apr 5 14:01:28 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 240895.02 seconds; 0.00 devices
Thu Apr 5 14:01:29 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Server version 709
Thu Apr 5 14:01:29 2018 | SETI@home | No tasks sent
Thu Apr 5 14:01:29 2018 | SETI@home | Project requested delay of 303 seconds
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.818.21.44.9.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_61754_And_XI_0014.21947.2045.21.44.193.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_64326_And_XI_0018.25460.1227.21.44.123.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.19675.23384.10.37.243_0
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_58186_Bol520_0010.30153.0.22.45.44.vlar_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task 23dc17aa.19675.24202.10.37.32_1
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Thu Apr 5 14:01:29 2018 | SETI@home | [sched_op] Reason: requested by project
Thu Apr 5 14:01:55 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0 finished
Thu Apr 5 14:01:55 2018 | SETI@home | Starting task blc03_2bit_guppi_58185_59455_Bol520_0012.26432.1227.21.44.70.vlar_0
Thu Apr 5 14:01:57 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0_r1914605811_0
Thu Apr 5 14:02:01 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_56920_Bol520_0008.23680.1636.22.45.19.vlar_0_r1914605811_0
Thu Apr 5 14:02:43 2018 | SETI@home | Computation for task 23no17aa.2048.690448.3.30.217_1 finished
Thu Apr 5 14:02:43 2018 | SETI@home | Starting task blc02_2bit_guppi_58185_72022_LGS_3_off_0027.11979.2045.22.45.84.vlar_2
Thu Apr 5 14:02:45 2018 | SETI@home | Started upload of 23no17aa.2048.690448.3.30.217_1_r1925260118_0
Thu Apr 5 14:02:47 2018 | SETI@home | Finished upload of 23no17aa.2048.690448.3.30.217_1_r1925260118_0
Thu Apr 5 14:03:14 2018 | SETI@home | Computation for task blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1 finished
Thu Apr 5 14:03:14 2018 | SETI@home | Starting task 21no17aa.19405.16427.5.32.209_1
Thu Apr 5 14:03:16 2018 | SETI@home | Started upload of blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1_r1314845934_0
Thu Apr 5 14:03:19 2018 | SETI@home | Finished upload of blc03_2bit_guppi_58185_63036_And_XI_0016.23707.2045.21.44.64.vlar_1_r1314845934_0
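The tell-tale sign in a log like this is the NVIDIA work-request number climbing every cycle while "got 0 new tasks" repeats. A quick sketch (assuming the [sched_op] line format above; the helper names are illustrative, not part of BOINC) to pull out that series and its per-cycle growth, which is roughly the work completed between requests:

```python
import re

# Match the client's "[sched_op] NVIDIA GPU work request: X seconds" lines.
REQ = re.compile(r"NVIDIA GPU work request: ([\d.]+) seconds")

def request_series(log_text):
    """Extract the requested GPU seconds from each scheduler cycle."""
    return [float(m.group(1)) for m in REQ.finditer(log_text)]

def per_cycle_growth(series):
    """Difference between consecutive requests; a steadily positive
    series while no tasks arrive means the cache is draining."""
    return [b - a for a, b in zip(series, series[1:])]
```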
ID: 1928822 · Report as offensive
Profile Keith Myers Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928823 - Posted: 8 Apr 2018, 17:36:15 UTC - in response to Message 1928787.  


The reason I've got involved in this thread is to try and quieten down the continual grumbling from a small group of users who appear to have different objectives, and complain when BOINC doesn't respond the way they would like it to.

From the limited hard facts they've been willing to supply, I've formed the impression that their objective is to keep the CPU queue filled as well, and that they don't mind continually pestering the servers for vastly more GPU work than they are ever going to receive. By tracking down the mechanism, I'm hoping to suggest a course of action closer to yours, one which will maximise their throughput and minimise their grumbles, at the minor cost of relaxing one of their secondary objectives. But I need the facts first.

I hope to be in a position to do that in a couple of hours, when the last of my backup project work from the extended outage this week has been flushed from the computer I've chosen as the guinea-pig. But I have some yard-work to complete first.

But why would I deliberately hamstring a host whose CPU-only production is as high as that of a typical host with a CPU AND a GPU combined? We are expecting an influx of new data and new sources in the future, and the project is going to need as much computing horsepower as possible.

I need CPU tasks to keep the hosts loaded and running correctly, because temperatures and voltages are tied to CPU load. I just want the project to send me a new task, CPU or GPU, for each task I return.
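What I'm asking for can be sketched in a few lines (hypothetical names; the real scheduler also weighs the work request, per-host limits, and whatever happens to be in the feeder at that moment):

```python
# Hypothetical sketch of a "replace what was reported" policy: for each
# batch of reported tasks, send back at most that many replacements,
# capped by what the feeder currently holds. Illustrative only.
def replacement_policy(reported, feeder_slots):
    """Return the tasks sent back for `reported` completed results."""
    sent = min(reported, len(feeder_slots))
    return feeder_slots[:sent]
```

Report 3 tasks while the feeder holds only 2 matching ones and you get 2 back; the cache drains by the difference, which is the pattern these logs keep showing.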
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928823 · Report as offensive