Panic Mode On (111) Server Problems?

Message boards : Number crunching : Panic Mode On (111) Server Problems?


Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928827 - Posted: 8 Apr 2018, 17:55:18 UTC - in response to Message 1928822.  

I'll give you a history lesson.
Thanks - I'll take a look when I'm fit again. I'm making too many typing errors, which means I still need to rest.

Right now the problem is around 15 months OLD and counting.
All the more reason to try and fix it.

Here is a more recent log;
But (on a couple of spot checks) it's already too old to find any additional data from the task records in the database. Host ID? Version of the BOINC client in use? And it has that strange message: every RPC starts with "To report completed tasks", never "To fetch work".

I get these two together:

08/04/2018 18:41:45 | SETI@home | Sending scheduler request: To report completed tasks.
08/04/2018 18:41:45 | SETI@home | Not requesting tasks: "no new tasks" requested via Manager
when I need work, I see

08/04/2018 17:30:47 | SETI@home | Sending scheduler request: To fetch work.
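For what it's worth, the client prints a single reason string per scheduler RPC, and reporting appears to take precedence over fetching. A minimal sketch of that precedence (hypothetical helper, not the actual BOINC client source):

```python
# Minimal sketch (hypothetical function, not actual BOINC source) of the
# precedence visible in the logs: one reason string per RPC, and reporting
# completed tasks takes priority over fetching work.
def scheduler_request_reason(tasks_to_report, work_needed):
    if tasks_to_report > 0:
        return "To report completed tasks."
    if work_needed:
        return "To fetch work."
    return "Requested by user."

# A host that always has results to report never logs "To fetch work",
# even though the same RPC also requests new tasks.
print(scheduler_request_reason(2, True))   # To report completed tasks.
print(scheduler_request_reason(0, True))   # To fetch work.
```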
ID: 1928827
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928828 - Posted: 8 Apr 2018, 17:59:25 UTC - in response to Message 1928823.  

But why would I deliberately hamstring a host that has just as high CPU-only production as the typical host with a CPU AND a GPU combined? We are expecting an influx of new data and new sources in the future, and the project is going to need as much computing horsepower as possible.

I need CPU tasks to keep the hosts loaded and running correctly, because temps and voltages are tied to CPU loads. I just want the project to send me a new task, CPU or GPU, for each task I return.
Then help me to try and track down the cause of the problem. Keeping a cache of 95 CPU tasks running/ready to run isn't going to hamstring anything, unless you're running a 96 CPU computer.
ID: 1928828
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928830 - Posted: 8 Apr 2018, 18:11:27 UTC - in response to Message 1928827.  
Last modified: 8 Apr 2018, 18:35:53 UTC

I'll give you a history lesson.
Thanks - I'll take a look when I'm fit again. I'm making too many typing errors, which means I still need to rest.

Right now the problem is around 15 months OLD and counting.
All the more reason to try and fix it.

Here is a more recent log;
But (on a couple of spot checks) it's already too old to find any additional data from the task records in the database. Host ID? Version of the BOINC client in use? And it has that strange message: every RPC starts with "To report completed tasks", never "To fetch work".

I get these two together:

08/04/2018 18:41:45 | SETI@home | Sending scheduler request: To report completed tasks.
08/04/2018 18:41:45 | SETI@home | Not requesting tasks: "no new tasks" requested via Manager
when I need work, I see

08/04/2018 17:30:47 | SETI@home | Sending scheduler request: To fetch work.

The only time I see 'Sending scheduler request: To fetch work' is when I'm Not reporting completed tasks, and since I'm usually reporting tasks I almost Never see it.
This is from a Linux host with 7.4.44; note the 'Requesting new tasks for NVIDIA GPU' part:
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | [sched_op] Starting scheduler request
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | Sending scheduler request: To report completed tasks.
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | Reporting 2 completed tasks
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | Requesting new tasks for NVIDIA GPU
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Sun 08 Apr 2018 02:01:15 PM EDT | SETI@home | [sched_op] NVIDIA GPU work request: 133610.41 seconds; 0.00 devices
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | Scheduler request completed: got 2 new tasks
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] Server version 709
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | Project requested delay of 303 seconds
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 571 seconds
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_66328_And_X_off_0021.19737.2045.22.45.164.vlar_1
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_73290_LGS_3_off_0029.21081.409.21.44.113.vlar_1
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] Deferring communication for 00:05:03
Sun 08 Apr 2018 02:01:17 PM EDT | SETI@home | [sched_op] Reason: requested by project


This is from a Mac with 7.8.6 running Yosemite; note the 'Requesting new tasks for NVIDIA GPU and AMD/ATI GPU' part:
Sun Apr 8 14:09:47 2018 | SETI@home | [sched_op] Starting scheduler request
Sun Apr 8 14:09:47 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Sun Apr 8 14:09:47 2018 | SETI@home | Reporting 1 completed tasks
Sun Apr 8 14:09:47 2018 | SETI@home | Requesting new tasks for NVIDIA GPU and AMD/ATI GPU
Sun Apr 8 14:09:47 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Sun Apr 8 14:09:47 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 86140.71 seconds; 0.00 devices
Sun Apr 8 14:09:47 2018 | SETI@home | [sched_op] AMD/ATI GPU work request: 90720.00 seconds; 1.00 devices
Sun Apr 8 14:09:48 2018 | SETI@home | Scheduler request completed: got 1 new tasks
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] Server version 709
Sun Apr 8 14:09:48 2018 | SETI@home | Project requested delay of 303 seconds
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 467 seconds
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] estimated total AMD/ATI GPU task duration: 0 seconds
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_66328_And_X_off_0021.1540.2045.21.44.181.vlar_1
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Sun Apr 8 14:09:48 2018 | SETI@home | [sched_op] Reason: requested by project


This is another Mac running Sierra; note the 'Requesting new tasks for NVIDIA GPU' part:
Sun Apr 8 14:15:10 2018 | SETI@home | [sched_op] Starting scheduler request
Sun Apr 8 14:15:15 2018 | SETI@home | Sending scheduler request: To report completed tasks.
Sun Apr 8 14:15:15 2018 | SETI@home | Reporting 5 completed tasks
Sun Apr 8 14:15:15 2018 | SETI@home | Requesting new tasks for NVIDIA GPU
Sun Apr 8 14:15:15 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Sun Apr 8 14:15:15 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 215187.04 seconds; 0.00 devices
Sun Apr 8 14:15:15 2018 | SETI@home | Computation for task blc01_2bit_guppi_58185_72656_LGS_3_0028.24702.818.21.44.11.vlar_0 finished
Sun Apr 8 14:15:15 2018 | SETI@home | Starting task blc01_2bit_guppi_58185_72656_LGS_3_0028.24173.2454.22.45.160.vlar_1
Sun Apr 8 14:15:16 2018 | SETI@home | Scheduler request completed: got 5 new tasks
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] Server version 709
Sun Apr 8 14:15:16 2018 | SETI@home | Project requested delay of 303 seconds
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] estimated total CPU task duration: 0 seconds
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 1192 seconds
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_73290_LGS_3_off_0029.23486.1636.22.45.189.vlar_1
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_73290_LGS_3_off_0029.23486.1636.22.45.183.vlar_1
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_72656_LGS_3_0028.24173.1636.22.45.16.vlar_1
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_72656_LGS_3_0028.24702.818.21.44.5.vlar_1
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_72656_LGS_3_0028.24163.1636.22.45.87.vlar_0
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Sun Apr 8 14:15:16 2018 | SETI@home | [sched_op] Reason: requested by project

Do you still think these 15 MONTHS of problems are caused by people having No New Tasks Set?
How do you receive work if NNT is set?

Here's one where it isn't reporting a completed task, and since it didn't report a task, the NVIDIA cache was full:
Sat Apr 7 10:24:22 2018 | SETI@home | [sched_op] Starting scheduler request
Sat Apr 7 10:24:22 2018 | SETI@home | Sending scheduler request: To fetch work.
Sat Apr 7 10:24:22 2018 | SETI@home | Requesting new tasks for NVIDIA GPU and AMD/ATI GPU
Sat Apr 7 10:24:22 2018 | SETI@home | [sched_op] CPU work request: 0.00 seconds; 0.00 devices
Sat Apr 7 10:24:22 2018 | SETI@home | [sched_op] NVIDIA GPU work request: 444308.99 seconds; 0.00 devices
Sat Apr 7 10:24:22 2018 | SETI@home | [sched_op] AMD/ATI GPU work request: 263520.00 seconds; 1.00 devices
Sat Apr 7 10:24:24 2018 | SETI@home | Scheduler request completed: got 0 new tasks
Sat Apr 7 10:24:24 2018 | SETI@home | [sched_op] Server version 709
Sat Apr 7 10:24:24 2018 | SETI@home | No tasks sent
Sat Apr 7 10:24:24 2018 | SETI@home | No tasks are available for AstroPulse v7
Sat Apr 7 10:24:24 2018 | SETI@home | This computer has reached a limit on tasks in progress
Sat Apr 7 10:24:24 2018 | SETI@home | Project requested delay of 303 seconds
Sat Apr 7 10:24:24 2018 | SETI@home | [sched_op] Deferring communication for 00:05:03
Sat Apr 7 10:24:24 2018 | SETI@home | [sched_op] Reason: requested by project
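The 'reached a limit on tasks in progress' line behaves like a simple per-host cap on queued tasks. A hedged illustration (function name and numbers hypothetical, not the actual scheduler source) of why reporting a task opens exactly one slot, while a full cache with nothing reported gets zero work:

```python
# Hedged sketch of the "limit on tasks in progress" behaviour seen in the
# log above (hypothetical, not the actual scheduler code): the server caps
# in-progress tasks per host, and each acked report frees one slot.
def tasks_to_send(in_progress, limit, reported, requested):
    in_progress -= reported                # acked results free slots
    open_slots = max(0, limit - in_progress)
    return min(open_slots, requested)

# Full cache, nothing reported: "got 0 new tasks" (the Sat 7 Apr log above)
print(tasks_to_send(100, 100, 0, 5))       # 0
# Report 1 completed task and exactly 1 slot opens: a 1-for-1 swap
print(tasks_to_send(100, 100, 1, 5))       # 1
```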
ID: 1928830
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928837 - Posted: 8 Apr 2018, 18:32:16 UTC - in response to Message 1928830.  

Do you still think these 15 MONTHS of problems are caused by people having No New Tasks Set?
I don't think I've ever said that. I have said that it might - repeat might - be aggravated by having 100 CPU tasks cached, no completed CPU tasks to report, and 'reached a limit on tasks in progress' kicking in too aggressively. And it might - still repeat might - be aggravated by other things too.

Your logs still don't match mine:

08-Apr-2018 08:35:22 [SETI@home] Sending scheduler request: To fetch work.
08-Apr-2018 08:35:22 [SETI@home] Reporting 1 completed tasks
08-Apr-2018 08:35:22 [SETI@home] Requesting new tasks for NVIDIA GPU
08-Apr-2018 08:35:22 [SETI@home] [sched_op] CPU work request: 0.00 seconds; 0.00 devices
08-Apr-2018 08:35:22 [SETI@home] [sched_op] NVIDIA GPU work request: 24508.36 seconds; 0.00 devices
08-Apr-2018 08:35:22 [SETI@home] [sched_op] Intel GPU work request: 0.00 seconds; 0.00 devices
08-Apr-2018 08:35:25 [SETI@home] Scheduler request completed: got 1 new tasks
08-Apr-2018 08:35:25 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc03_2bit_guppi_58185_63036_And_XI_0016.27326.1227.22.45.204.vlar_1
ID: 1928837
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928839 - Posted: 8 Apr 2018, 18:46:05 UTC - in response to Message 1928837.  
Last modified: 8 Apr 2018, 18:47:25 UTC

Your logs still don't match mine:

So you tell me; it looks to be a Windows thing to me. The only time I've seen the fetch was when I wasn't reporting tasks.
The only thing I can think of is the 'report completed tasks immediately' setting.
Sun Apr 8 11:25:18 2018 | | Config: run apps at regular priority
Sun Apr 8 11:25:18 2018 | | Config: report completed tasks immediately
Sun Apr 8 11:25:18 2018 | | Config: use all coprocessors
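Those three "Config:" lines are what the client prints at startup for cc_config.xml options. Assuming the usual option names (a sketch; worth checking against the BOINC client configuration docs), a cc_config.xml reproducing them would look like:

```xml
<cc_config>
  <options>
    <!-- "Config: run apps at regular priority" -->
    <no_priority_change>1</no_priority_change>
    <!-- "Config: report completed tasks immediately" -->
    <report_results_immediately>1</report_results_immediately>
    <!-- "Config: use all coprocessors" -->
    <use_all_gpus>1</use_all_gpus>
  </options>
</cc_config>
```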
ID: 1928839
Sirius B Project Donor
Volunteer tester

Joined: 26 Dec 00
Posts: 24882
Credit: 3,081,182
RAC: 7
Ireland
Message 1928840 - Posted: 8 Apr 2018, 18:53:09 UTC

Just my 2c worth.

Wouldn't it be better to run a test:

Windows.
CPU cruncher only
GPU cruncher only
Combined cruncher

Then the same for Linux.

That might make it possible to pinpoint where the actual issue lies.
ID: 1928840
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928841 - Posted: 8 Apr 2018, 18:53:34 UTC - in response to Message 1928839.  
Last modified: 8 Apr 2018, 19:05:09 UTC

The only thing I can think of is the Report completed Tasks Immediately setting.
Sun Apr 8 11:25:18 2018 | | Config: run apps at regular priority
Sun Apr 8 11:25:18 2018 | | Config: report completed tasks immediately
Sun Apr 8 11:25:18 2018 | | Config: use all coprocessors
That could well be it, and easy to test. Thanks.

I'm getting closer to testing.

08/04/2018 19:46:18 | SETI@home | [sched_op] Starting scheduler request
08/04/2018 19:46:18 | SETI@home | Sending scheduler request: To fetch work.
08/04/2018 19:46:18 | SETI@home | Reporting 2 completed tasks
08/04/2018 19:46:18 | SETI@home | Requesting new tasks for CPU and NVIDIA GPU
08/04/2018 19:46:18 | SETI@home | [sched_op] CPU work request: 262284.00 seconds; 0.00 devices
08/04/2018 19:46:18 | SETI@home | [sched_op] NVIDIA GPU work request: 43747.66 seconds; 0.00 devices
08/04/2018 19:46:18 | SETI@home | [sched_op] Intel GPU work request: 0.00 seconds; 0.00 devices
08/04/2018 19:46:20 | SETI@home | Scheduler request completed: got 98 new tasks
08/04/2018 19:46:20 | SETI@home | [sched_op] Server version 709
08/04/2018 19:46:20 | SETI@home | Project requested delay of 303 seconds
08/04/2018 19:46:20 | SETI@home | [sched_op] estimated total CPU task duration: 266426 seconds
08/04/2018 19:46:20 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 33115 seconds
08/04/2018 19:46:20 | SETI@home | [sched_op] estimated total Intel GPU task duration: 0 seconds
08/04/2018 19:46:20 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_72656_LGS_3_0028.31966.2454.22.45.158.vlar_0
08/04/2018 19:46:20 | SETI@home | [sched_op] handle_scheduler_reply(): got ack for task blc01_2bit_guppi_58185_72656_LGS_3_0028.31966.2454.22.45.159.vlar_0
08/04/2018 19:46:20 | SETI@home | [sched_op] Deferring communication for 00:05:03
08/04/2018 19:46:20 | SETI@home | [sched_op] Reason: requested by project
That still leaves me 34 tasks short of the limit (166, need 200) - not sure which is low yet. I'll keep y'all posted.

Edit - might even have a clue already.

08/04/2018 19:51:25 | SETI@home | Sending scheduler request: To fetch work.
08/04/2018 19:51:25 | SETI@home | Reporting 1 completed tasks
08/04/2018 19:51:25 | SETI@home | [sched_op] CPU work request: 5610.98 seconds; 0.00 devices
08/04/2018 19:51:25 | SETI@home | [sched_op] NVIDIA GPU work request: 10829.59 seconds; 0.00 devices
08/04/2018 19:51:27 | SETI@home | Scheduler request completed: got 1 new tasks
08/04/2018 19:51:27 | SETI@home | [sched_op] estimated total CPU task duration: 6498 seconds
08/04/2018 19:51:27 | SETI@home | [sched_op] estimated total NVIDIA GPU task duration: 0 seconds
08/04/2018 19:55:08 | | [work_fetch] target work buffer: 64800.00 + 3456.00 sec
08/04/2018 19:55:08 | | [work_fetch] shortfall 3537.95 nidle 0.00 saturated 66347.67 busy 0.00
08/04/2018 19:55:08 | | [work_fetch] shortfall 11135.21 nidle 0.00 saturated 57120.79 busy 0.00
Same machine, next fetch. The single task received was, as shown, for CPU. So was the task reported, 6548119510. So I suspect I had 100 CPU tasks, swapped 1 for 1, and the server bailed out at that point. I need a beer to think about that one.
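The [work_fetch] numbers are self-consistent for one of the two resources, assuming the second shortfall line is the NVIDIA GPU (the request lines earlier in the log list CPU before GPU): shortfall is the target work buffer minus the time the device is already kept busy. The real client integrates this over a simulated schedule; this is just the single-instance arithmetic the log exposes:

```python
# Shortfall arithmetic from the [work_fetch] lines above, single-GPU case
# (a sketch; the real BOINC work fetch sums this per device instance).
target = 64800.00 + 3456.00     # "target work buffer: 64800.00 + 3456.00 sec"
saturated_gpu = 57120.79        # seconds the GPU is already saturated

shortfall_gpu = round(target - saturated_gpu, 2)
print(shortfall_gpu)            # 11135.21, matching the second shortfall line
```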
ID: 1928841
juan BFP Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1928842 - Posted: 8 Apr 2018, 18:54:09 UTC

Don't know if it's relevant to your discussion, but I noticed that each time my host produces an error WU it trips the 'reached a limit on tasks in progress' check and restarts the count from 0, so until a lot of new WUs had been reported it wouldn't receive any new work.
That does little harm on a host like mine, which produces a lot of new work per hour, but if it happens on a slower host close to a server outage, maybe it could cause some kind of starvation of new WUs on that host.
ID: 1928842
Keith Myers Special Project $250 donor
Volunteer tester

Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928845 - Posted: 8 Apr 2018, 19:02:46 UTC - in response to Message 1928828.  

But why would I deliberately hamstring a host that has just as high CPU-only production as the typical host with a CPU AND a GPU combined? We are expecting an influx of new data and new sources in the future, and the project is going to need as much computing horsepower as possible.

I need CPU tasks to keep the hosts loaded and running correctly, because temps and voltages are tied to CPU loads. I just want the project to send me a new task, CPU or GPU, for each task I return.
Then help me to try and track down the cause of the problem. Keeping a cache of 95 CPU tasks running/ready to run isn't going to hamstring anything, unless you're running a 96 CPU computer.

But my experiment yesterday of dropping to a 0.5 day cache left me at 75 CPU tasks within an hour, before I called a halt. It would have fallen even further if I had continued, because the hosts had quit even asking for work.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928845
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928848 - Posted: 8 Apr 2018, 19:06:59 UTC - in response to Message 1928845.  

See edit to my previous post - I may have a new theory within 5 minutes. I need time, as well as the beer, to think about that one.
ID: 1928848
rob smith Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22266
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1928852 - Posted: 8 Apr 2018, 19:17:50 UTC - in response to Message 1928845.  

If that host stopped requesting work, it hadn't reached the floor - you should have let it run down tasks until it either hit the floor (zero tasks) or restarted asking for work.

Another thing, and you consistently refuse to accept this - SETI@Home has NEVER promised us a continuous flow of work, so do not expect to see your caches filled on every call.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1928852
Sirius B Project Donor
Volunteer tester

Joined: 26 Dec 00
Posts: 24882
Credit: 3,081,182
RAC: 7
Ireland
Message 1928853 - Posted: 8 Apr 2018, 19:21:45 UTC - in response to Message 1928852.  

Someone has pulled the rabbit out of the hat. How many times has the rabbit been forgotten? :-)
ID: 1928853
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 1928864 - Posted: 8 Apr 2018, 19:31:49 UTC - in response to Message 1928852.  
Last modified: 8 Apr 2018, 19:34:37 UTC

Another thing, and you consistently refuse to accept this - SETI@Home has NEVER promised us a continuous flow of work, so do not expect to see your caches filled on every call.

Did I mention the problem doesn't exist for ATI users? Fine, they don't promise us a continuous flow of work. But, they discriminate against NVIDIA users.
Does SETI want to be accused of Discrimination? We can do that you know.
ID: 1928864
Keith Myers Special Project $250 donor
Volunteer tester

Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928884 - Posted: 8 Apr 2018, 20:32:37 UTC - in response to Message 1928852.  

If that host stopped requesting work, it hadn't reached the floor - you should have let it run down tasks until it either hit the floor (zero tasks) or restarted asking for work.

Another thing, and you consistently refuse to accept this - SETI@Home has NEVER promised us a continuous flow of work, so do not expect to see your caches filled on every call.

That is a completely false statement and I take offense. I DO work for other projects BECAUSE Seti can't supply constant work to keep the hosts busy.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928884
Keith Myers Special Project $250 donor
Volunteer tester

Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1928885 - Posted: 8 Apr 2018, 20:34:05 UTC - in response to Message 1928864.  

Another thing, and you consistently refuse to accept this - SETI@Home has NEVER promised us a continuous flow of work, so do not expect to see your caches filled on every call.

Did I mention the problem doesn't exist for ATI users? Fine, they don't promise us a continuous flow of work. But, they discriminate against NVIDIA users.
Does SETI want to be accused of Discrimination? We can do that you know.

+1
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1928885
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14656
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1928888 - Posted: 8 Apr 2018, 20:53:51 UTC - in response to Message 1928885.  

It might be argued that SETI protects users from the poor software tools provided by NVidia to support their hardware. But that's a contentious thought that is outside the scope of this thread, and I'm not going to develop it tonight or any night.
ID: 1928888
Stephen "Heretic" Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester

Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1928910 - Posted: 8 Apr 2018, 23:53:16 UTC
Last modified: 8 Apr 2018, 23:55:31 UTC

. . Interestingly, the log jam on the Blc01 tapes seems to have been cleared and they are nearly all split now (only 390 channels of them left). Maybe there was something hung up there ...

Stephen

ID: 1928910
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13766
Credit: 208,696,464
RAC: 304
Australia
Message 1928974 - Posted: 9 Apr 2018, 6:27:32 UTC - in response to Message 1928852.  
Last modified: 9 Apr 2018, 6:36:27 UTC

Another thing, and you consistently refuse to accept this - SETI@Home has NEVER promised us a continuous flow of work, so do not expect to see your caches filled on every call.

If there are over 500,000 WUs ready to send, and there are no Arecibo VLARs anywhere in sight, it's not an unreasonable expectation to get work to replace work that is returned. That's how it has worked ever since Seti moved to BOINC, until Dec 2016 when things changed.
And prior to this issue occurring, even if there were a flood of Arecibo VLARs, there was never an issue getting CPU work, and it only took a few requests before a batch of GPU work would download.

If there is no data- then there's no data. And that is what we were never promised- never ending data.
But if there is data available, ready to be downloaded & processed, then it's implicit that we are able to download it to process. Being able to get it is important; particularly so if they want even more crunchers than they already have.
It's no good just splitting work if it's going to be withheld from those that want to process it.
Grant
Darwin NT
ID: 1928974
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13766
Credit: 208,696,464
RAC: 304
Australia
Message 1928979 - Posted: 9 Apr 2018, 7:47:52 UTC

Just made a post in the Café, and it sat there for ages and then I got this:
"Project is down
The project's database server is down. Please check back in a few hours."

Refreshed & checked the thread & it went through, and everything else is responding as it was beforehand.
Grant
Darwin NT
ID: 1928979
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13766
Credit: 208,696,464
RAC: 304
Australia
Message 1928981 - Posted: 9 Apr 2018, 8:05:52 UTC

Splitters appear to have come back to life & managed to considerably re-fill the Ready-to-send buffer. Unfortunately the WU deleters haven't been able to keep up & the WUs Awaiting-deletion backlog has started to grow again.
Grant
Darwin NT
ID: 1928981



 
©2024 University of California
SETI@home and AstroPulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.