About Deadlines or Database reduction proposals

Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2034024 - Posted: 26 Feb 2020, 14:04:33 UTC - in response to Message 2034017.  

Nobody has answered my question: how do we raise the request to evaluate squeezing the deadlines with the SETI powers?

Looking at the SSP page we can see:

Results returned and awaiting validation 0 39,153 13,810,892
Results out in the field 0 61,160 5,333,622

Something must be done... and quick!


. . Well, someone could PM Eric. Not sure who would be the one; Kittyman and Mr. Kevvy have had success contacting him, but I do not know who else. Maybe Richard would be a good candidate.

Stephen

? ?
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2034032 - Posted: 26 Feb 2020, 15:09:06 UTC - in response to Message 2034024.  

I nominate Mr. Kevvy. I find it's best to go to him with one issue at a time, and I've got one in the pipeline already (update the Beta server to check that the Christmas 'Anonymous Platform' bug is properly fixed).
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2034039 - Posted: 26 Feb 2020, 15:53:22 UTC - in response to Message 2034032.  

I nominate Mr. Kevvy. I find it's best to go to him with one issue at a time, and I've got one in the pipeline already (update the Beta server to check that the Christmas 'Anonymous Platform' bug is properly fixed).

Just send him a PM asking for his help. Let's see what we get.
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2034112 - Posted: 27 Feb 2020, 0:26:42 UTC - in response to Message 2034032.  

I nominate Mr. Kevvy. I find it's best to go to him with one issue at a time, and I've got one in the pipeline already (update the Beta server to check that the Christmas 'Anonymous Platform' bug is properly fixed).


. . A good one. When that bug is put to rest they can roll out 7.15 properly in main and maybe that will exorcise some of the gremlins that are annoying us.

. . I will even move my Beta testing machine back into Beta to help ... :) {even though there is NO Parkes data yet - I have to get that in :) }

Stephen

< fingers crossed>
Stephen "Heretic"
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 2034113 - Posted: 27 Feb 2020, 0:28:29 UTC - in response to Message 2034039.  
Last modified: 27 Feb 2020, 0:29:08 UTC

I nominate Mr. Kevvy. I find it's best to go to him with one issue at a time, and I've got one in the pipeline already (update the Beta server to check that the Christmas 'Anonymous Platform' bug is properly fixed).

Just send him a PM asking for his help. Let's see what we get.


. . Yes but who? We don't want to flood him with 50 messages asking the same thing ...

. . Dare I nominate Juan ??? :)

Stephen

:(
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2034121 - Posted: 27 Feb 2020, 0:59:16 UTC - in response to Message 2034113.  
Last modified: 27 Feb 2020, 1:00:35 UTC

I nominate Mr. Kevvy. I find it's best to go to him with one issue at a time, and I've got one in the pipeline already (update the Beta server to check that the Christmas 'Anonymous Platform' bug is properly fixed).

Just send him a PM asking for his help. Let's see what we get.


. . Yes but who? We don't want to flood him with 50 messages asking the same thing ...

. . Dare I nominate Juan ??? :)

Stephen

:(

For sure he's not going to read the whole thread.

I'll just ask him to raise with the powers the three main options we discussed:
- halve the current value
- fix it at 25 days (like AP)
- cap it at 28 or 30 days
All while keeping the shorties at 10 days.

Did I forget something?
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13841
Credit: 208,696,464
RAC: 304
Australia
Message 2034157 - Posted: 27 Feb 2020, 6:14:38 UTC - in response to Message 2034017.  
Last modified: 27 Feb 2020, 6:16:11 UTC

Results returned and awaiting validation 0 39,153 13,810,892
Results out in the field 0 61,160 5,333,622

Something must be done... and quick!
New database server (and ideally a matching replica). More cores, faster clock speeds, and more than double the RAM.
That would then leave the present database server hardware available to replace other less powerful systems (the Scheduler, download & upload servers get my vote for upgrading with the displaced hardware).
A couple of All Flash Arrays for storage would make an even bigger difference, but they'd cost a whole lot more than a couple of new database server systems.
Grant
Darwin NT
Darrell Wilcox
Volunteer tester
Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 2034171 - Posted: 27 Feb 2020, 9:14:43 UTC
Last modified: 27 Feb 2020, 9:15:48 UTC

What is the process for Results to move from "Results returned and awaiting validation" to the next process?

Is it matching returned Results?
or
Compute-intensive validation?
or
Complex DB lookups?

Does the top to bottom order on the Server Status page reflect the order of processing?

Is this process documented where I can study it?

EDIT: Does anyone have hard data on resource utilization of the servers that we can see?
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13841
Credit: 208,696,464
RAC: 304
Australia
Message 2034173 - Posted: 27 Feb 2020, 9:51:32 UTC - in response to Message 2034171.  
Last modified: 27 Feb 2020, 9:54:23 UTC

What is the process for Results to move from "Results returned and awaiting validation" to the next process?
The general work flow:
A data file is loaded, WUs are split off from it, and they then go into the Ready-to-send buffer (unless that buffer is empty, in which case they go straight out to hosts as they request work).

Results out in the field: the number of WUs that have been sent out and are waiting for a result to be returned.

Results returned and awaiting validation: the results that have been returned by crunchers but whose parent WU has yet to reach quorum. (With the need to re-check any overflow result in case it's caused by a dodgy driver/RX 5000 combination, there are way, way more resends (_2, _3, _4 tasks) than usual, which is what has blown this figure out recently.)

Workunits waiting for validation: These are WUs that have reached quorum, and are waiting to be validated.
Workunits waiting for assimilation: These are the WUs that have been validated, and are waiting to have data from their canonical task input into the master science database.
Workunit/Files waiting for deletion: The number of files which can be deleted from disk, as the workunit has been assimilated, and there is no more use for it or its constituent results.
Workunits/Results waiting for db purging: The number of workunits or results which have been deleted from disk and after 24 hours will be purged from the database. It is during this period that completed results can still be viewed in your personal account pages.
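
Purely as an illustration, the pipeline can be pictured as a simple state machine - a C++ sketch along these lines (the stage names follow the SSP labels above; the real server tracks all of this in database fields, not in code like this):

    #include <cstdio>

    // Illustrative only: the stages a workunit passes through, in order.
    enum Stage {
        READY_TO_SEND,            // split from a data file, buffered for dispatch
        OUT_IN_THE_FIELD,         // sent to hosts, waiting for results to return
        AWAITING_VALIDATION,      // results back, quorum not yet reached
        WAITING_FOR_VALIDATION,   // quorum reached, validator not yet run
        WAITING_FOR_ASSIMILATION, // validated, canonical result not yet in the science DB
        WAITING_FOR_DELETION,     // assimilated, files free to be deleted from disk
        WAITING_FOR_DB_PURGE,     // files gone; DB rows purged ~24 hours later
        DONE
    };

    int main() {
        const char* names[] = {
            "Ready to send", "Out in the field",
            "Results returned and awaiting validation",
            "Workunits waiting for validation",
            "Workunits waiting for assimilation",
            "Files waiting for deletion",
            "Results waiting for db purging", "Done"};
        for (int s = READY_TO_SEND; s <= DONE; s++)
            std::printf("%d. %s\n", s + 1, names[s]);
        return 0;
    }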


Is it matching returned Results?
or
Compute-intensive validation?
or
Complex DB lookups?
There is a lot of disk reading & writing activity involved with pretty much each one of the steps (of course some more & some less than others). With the database no longer being cached in the database server RAM, the slow I/O rate of the file storage is causing the Assimilation/Deletion & Purge backlogs.
Grant
Darwin NT
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22492
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2034174 - Posted: 27 Feb 2020, 10:06:26 UTC

The data used for validation is held in RAM (as far as I'm aware).

When a task is returned, after making sure it is not an "error result", the first step is to see whether its twin is back yet. If it is, the two results are compared, and if they are sufficiently similar they are valid; one (normally the first one back) is declared the "canonical result" and heads off to the main database for storage and future analysis. If they don't agree, another identical task is sent out. The whole process may be repeated until the maximum number of replicate tasks has been sent out, returned and compared; the current maximum is 10, with rules around types of error.
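
To make that flow concrete, here is a little C++ sketch of the decision logic (purely illustrative - the real validator is part of the BOINC server code, and the "sufficiently similar" test is science-specific):

    #include <cstddef>
    #include <vector>

    struct Result { int id; bool error; double spike_count; };

    // Stand-in for the science-specific comparison of two result files.
    bool sufficiently_similar(const Result& a, const Result& b) {
        return a.spike_count == b.spike_count;  // toy criterion
    }

    // Returns the index of the canonical result (normally the first one
    // back), or -1 meaning "no agreeing pair yet" - in which case another
    // replica is sent out, up to the maximum of 10 tasks in total.
    int try_validate(const std::vector<Result>& returned) {
        for (std::size_t i = 0; i < returned.size(); i++) {
            if (returned[i].error) continue;           // error results never count
            for (std::size_t j = i + 1; j < returned.size(); j++) {
                if (returned[j].error) continue;
                if (sufficiently_similar(returned[i], returned[j]))
                    return static_cast<int>(i);        // first one back wins
            }
        }
        return -1;
    }

    int main() {
        std::vector<Result> r = {{1, false, 7.0}, {2, true, 0.0}, {3, false, 7.0}};
        return try_validate(r);  // 0: results 1 and 3 agree, first back is canonical
    }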
Completed and validated tasks are meant to be on display for 24 hours from final validation; this is so the user has a chance of seeing them in their completed state.
Depending on server settings a new replica task may be placed in the send queue immediately, or may sit on the side for some time.
There are some background queries running, such as updating credit, host error figures and the like. The worst process of all is the resending of a task, which requires some quite complex queries that may need to be run a couple of times at the time of resend, since a task is not sent to the same user more than once and the trigger for the resend is taken into account.

The top-to-bottom order on the Server Status Page does not reflect the actual sequence of actions, as some are batched and some run in parallel. For example, while the validators run more or less continuously, the file deleters tend to run in batch mode.

There is a figure at the top of the SSP that gives the queries per second; this normally sits in the 1,000 to 1,500 range, but I've seen it as high as 5,000. It is not a particularly good metric for server load, as that depends on what type each query is and how complex it is. Additionally, there is at least one site (someone will chime in with a link to it) that shows the data flows using various tools. But even together they may not reflect the critical server load: when one part of the system hits a limit, it drags other parts down (e.g. a sudden massive surge in IP traffic may hit overall performance, as it needs disk and CPU time to process).

There are some answers in the project science pages, but they may not go far enough for you.

I hope that helps to answer some of your questions.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Darrell Wilcox
Volunteer tester
Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 2034175 - Posted: 27 Feb 2020, 10:50:38 UTC - in response to Message 2034174.  

@ Grant (SSSF) and rob smith
Thank you.

Depending on server settings a new replica task may be placed in the send queue immediately, or may sit on the side for some time.

Since it appears "Results returned and awaiting validation" is the largest of the queues by number, wouldn't it make sense to give resends the highest priority, to clean up the maximum number of WUs in that queue? Keep this queue smaller and the 20 million number isn't reached, the data is kept in RAM, etc.

Has this been suggested, or is it another half-baked idea?

After all, if ALL the "Results out in the field" were returned and matched perfectly, there might still be many in that queue for "Resend".
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13841
Credit: 208,696,464
RAC: 304
Australia
Message 2034177 - Posted: 27 Feb 2020, 11:10:51 UTC - in response to Message 2034175.  
Last modified: 27 Feb 2020, 11:11:39 UTC

Depending on server settings a new replica task may be placed in the send queue immediately, or may sit on the side for some time.

Since it appears "Results returned and awaiting validation" is the largest of the queues by number, wouldn't it make sense to give resends the highest priority, to clean up the maximum number of WUs in that queue? Keep this queue smaller and the 20 million number isn't reached, the data is kept in RAM, etc.
Given all the present bottlenecks there are probably delays in checking when a WU is due for return/re-issue and then making it available.
That aside, I'd have thought that once a WU has been re-issued, it would be made available then and there for download. However, if there are already WUs in the Ready-to-send buffer, I figure it would go to the end of the queue (judging by the time it takes from when a file starts being split to when you see WUs from it, the Ready-to-send buffer is first-in, first-out).


It would probably be possible to have WUs fed to the feeder in order of initial date of issue to speed the return of resends up, but that would be at the expense of yet more database querying.
Grant
Darwin NT
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2034182 - Posted: 27 Feb 2020, 11:24:13 UTC - in response to Message 2034175.  

I don't think that proposal will make much difference. Think about it in terms of time, rather than absolute numbers.

The big one, 'Results returned and awaiting validation', is currently just over 14 million. Volunteers are currently returning just over 140 thousand per hour, so clearing it at that rate would take about 100 hours (14,000,000 ÷ 140,000). In other words, 'awaiting validation' would require roughly 4 days of 'resend only' continuous work to clear.

Even when the splitters aren't being throttled, the 'ready to send' queue is rarely above 4 hours, and at the moment it's about 2.5 seconds. Allocating resends immediately, rather than waiting for them to move up the queue from back to front, will save at most 4 hours against that 4 day backlog - trivial, and hardly worth doing.

What would help most would be reducing the time we have to wait for a resend task to be created - and reducing the deadlines would take a big chunk out of that.
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22492
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2034187 - Posted: 27 Feb 2020, 13:51:56 UTC

I just did a very rapid count of tasks sitting in my "in progress" queue. I took the first 100 tasks on my list and had a look at the deadline dates; all are tasks sent to my host today:
71% have a 53-day deadline, 12% have deadlines in excess of 70 days (now I didn't expect that), and 4% have 20-day deadlines (more or less as expected - they are all re-sends, which have shorter deadlines).
Binning results into ten-day groups we get:
21-30 days = 4%
31-40 days = 0
41-50 days = 8%
51-60 days = 72%
61-70 days = 4%
71-80 days = 11%
81-90 days = 1%

A quick comment: there are a substantial number of tasks with deadlines over 60 days. First step: cap the maximum deadline at mode + 5% of mode (mode = 51-60 days, so cap at no more than 59 days); this would move >15% of the tasks back into the mode group.
Leave the very short deadlines as they are; they represent less than 5% of the total.
See what that does to the pending queue size; my guess is it would reduce by about 10% (remember there would be an increase in re-sends, which are currently sitting at about 4%).
If that reduction isn't enough, look at pulling the mode group deadline down to around 40 days, but leave the short times as they are, and scale the max deadline down so it keeps the same ratio to the mode group (48 days?). That should start to have an impact on pending queue size, probably pulling it down by at least 10%, maybe 25% - but the resend pile will grow somewhat.

Remember this is based on a small sample, on one day, at one time, so it may not be representative of what everybody sees in their in-work queue; to get a better picture there needs to be more data, over a longer period of time and over more hosts. Also, changing deadlines will not have an instant effect on the size of any of the queues, as it will take time for the majority of hosts to start to see the effects. My first guess is that no real change would be seen for at least a month, as there are so many tasks on so many hosts that take their time to return a result - and, of course, even longer to fail to return one. That backlog has to be worked through.
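
Expressed as code, the capping rule I'm proposing comes down to something like this sketch (made-up sample deadlines standing in for my count above - nothing from the actual scheduler):

    #include <algorithm>
    #include <cstdio>
    #include <map>
    #include <vector>

    int main() {
        // Made-up deadlines (in days) standing in for a real sample.
        std::vector<int> deadlines = {53, 53, 53, 53, 75, 20, 45, 83, 62, 53};

        // Find the mode (the most common deadline).
        std::map<int, int> counts;
        for (int d : deadlines) counts[d]++;
        int mode = deadlines[0];
        for (const auto& kv : counts)
            if (kv.second > counts[mode]) mode = kv.first;

        // Cap everything at mode + 5% of mode; short deadlines are untouched.
        int cap = mode + mode / 20;
        for (int& d : deadlines) d = std::min(d, cap);

        std::printf("mode = %d days, cap = %d days\n", mode, cap);  // 53 and 55 here
        return 0;
    }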
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Darrell Wilcox
Volunteer tester
Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 2034189 - Posted: 27 Feb 2020, 14:05:38 UTC - in response to Message 2034177.  

@ Grant (SSSF)
Given all the present bottlenecks there are probably delays in checking when a WU is due for return/re-issue and then making it available.
Agreed. I imagine the feeder queries the DB at some interval looking for Results Ready to Send and then puts them on the RTS queue.

It would probably be possible to have WUs fed to the feeder in order of initial date of issue to speed the return of resends up, but that would be at the expense of yet more database querying.
The existing query could have the results returned in the "time of first issue" sort order. No extra query.
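
Hypothetically - if the feeder's enumeration looks anything like a stock BOINC query - the change would just be a join and a different ORDER BY on the SELECT it already runs. A guess at the shape of it (not the real query):

    // Hypothetical sketch only - not the actual feeder query. The idea is to
    // sort unsent results by their parent workunit's creation time, so results
    // belonging to old workunits (i.e. re-sends) surface first.
    const char* FEEDER_ENUMERATION =
        "SELECT result.* FROM result"
        "  JOIN workunit ON result.workunitid = workunit.id"
        " WHERE result.server_state = 2"      /* 2 = unsent in BOINC */
        " ORDER BY workunit.create_time ASC"  /* 'time of first issue' */
        " LIMIT 100";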

By using a "Resend" instead of a "Fresh" Result, "Results out in the field" would grow by only one instead of two. And when it is returned and validated, it would decrease "Results returned and awaiting validation" by three (or more) instead of only two.
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22492
Credit: 416,307,556
RAC: 380
United Kingdom
Message 2034194 - Posted: 27 Feb 2020, 14:38:12 UTC

Agreed. I imagine the feeder queries the DB at some interval looking for Results Ready to Send and then puts them on the RTS queue.

Not quite - the splitters place the work unit into the ready-to-send table, and only one entry is made. This is a rotating FIFO buffer.
If there are no tasks being resent when a new batch of 200 tasks is required, the top 100 work units are pulled from the RTS and made into tasks, which are then allocated and sent out. If there are any tasks to be re-sent, they are loaded into the dispatch buffer first, which is topped up with new tasks as required.
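
As a sketch of that top-up logic (illustrative C++ only - the real feeder works against shared memory and the database tables):

    #include <cstddef>
    #include <deque>
    #include <vector>

    struct Task { long id; bool is_resend; };

    // Fill one dispatch batch: any tasks awaiting re-send go in first, then
    // the batch is topped up from the FIFO ready-to-send buffer.
    std::vector<Task> fill_batch(std::deque<Task>& resends,
                                 std::deque<Task>& ready_to_send,
                                 std::size_t batch_size = 200) {
        std::vector<Task> batch;
        while (batch.size() < batch_size && !resends.empty()) {
            batch.push_back(resends.front());
            resends.pop_front();
        }
        while (batch.size() < batch_size && !ready_to_send.empty()) {
            batch.push_back(ready_to_send.front());  // FIFO: oldest split first
            ready_to_send.pop_front();
        }
        return batch;
    }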

It would probably be possible to have WUs fed to the feeder in order of initial date of issue to speed the return of resends up, but that would be at the expense of yet more database querying.


The existing query could have the results returned in the "time of first issue" sort order. No extra query.

By using a "Resend" instead of a "Fresh" Result, "Results out in the field" would grow by only one instead of two. And when it is returned and validated, it would decrease "Results returned and awaiting validation" by three (or more) instead of only two.


While possible, it would be far from practical to do so - re-sends by their very nature tend to become available at very random times, so very often totally out of sequence. (From the evidence I've seen, the majority of re-sends come from computational errors and from inconclusive results being returned.)
A re-send is a new task, copied from the work unit (this is where the use of "result" to mean several different things can get very confusing), so there is only one additional (main) entry in that part of the database - two tasks are only created at the time of first distribution.

If you look at these two task names you will see that they are from different "tapes", and one is a re-send, as shown by the "_2" at the end:
23fe20aa.3924.9065.10.37.58_2
and the other is the second of the two initial tasks generated from the work unit, as shown by the "_1" at the end:
23fe20ad.20097.476.11.38.24.vlar_1
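
The suffix is easy to pick out mechanically - a quick sketch:

    #include <cstdio>
    #include <string>

    // The digits after the final '_' are the task number within the workunit:
    // _0 and _1 are the two initial tasks, _2 and up are re-sends.
    int task_number(const std::string& name) {
        return std::stoi(name.substr(name.rfind('_') + 1));
    }

    int main() {
        std::printf("%d\n", task_number("23fe20aa.3924.9065.10.37.58_2"));       // 2 -> a re-send
        std::printf("%d\n", task_number("23fe20ad.20097.476.11.38.24.vlar_1"));  // 1 -> initial pair
        return 0;
    }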
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Darrell Wilcox
Volunteer tester
Joined: 11 Nov 99
Posts: 303
Credit: 180,954,940
RAC: 118
Vietnam
Message 2034196 - Posted: 27 Feb 2020, 15:02:46 UTC - in response to Message 2034194.  

@rob smith
Not quite - the splitters place the work unit into the ready-to-send table, and only one entry is made. This is a rotating FIFO buffer.
NOT in a DB then? Or is the Server Status page wrong where it states "feeder: Fills up the scheduler work queue with workunits ready to be sent."? The "Feeder" runs on Synergy, and the splitters don't.

While possible, it would be far from practical to do so - re-sends by their very nature tend to become available at very random times ...
And a Resend should just be a Resend, no matter when it is determined to be one. Since they increase the "Results out in the field" queue by only 1, and decrease the "Results returned and awaiting validation" queue by more than 2 when validated, they should be the priority to be processed next.

I think that is what you said above when they are put into the top of the next batch to go out, so we agree. The question is about how they are picked by the feeder.
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2034198 - Posted: 27 Feb 2020, 15:23:58 UTC - in response to Message 2034196.  

When the 'Ready to send' queue is down to 14 (as currently shown), where the feeder picks from is a moot point - it'll pick all 14, whatever they are, and that's the most you'll get in a single request. My last request got resends only - all three of them - which won't make a dent in the 'returned, waiting for validation' queue.

The bigger problem is that there aren't any resend tasks available for those 14 million tasks, because they're all waiting for a current task to either be returned by its current host, or to pass its individual deadline. That's the problem we've got to solve.
TBar
Volunteer tester

Joined: 22 May 99
Posts: 5204
Credit: 840,779,836
RAC: 2,768
United States
Message 2034203 - Posted: 27 Feb 2020, 15:54:15 UTC - in response to Message 2034198.  
Last modified: 27 Feb 2020, 15:58:52 UTC

I'm still in favor of a much shorter Work Cache. As it stands, the Server appears to be more interested in sending tasks to machines that won't use them for another Nine Days instead of sending tasks to much more capable machines that will send the completed tasks back in five minutes. My one machine is still not receiving enough work to keep it running, and it doesn't make any sense to me to send work to slower machines that have Days worth of work already. I have repeatedly suggested a One Day Cache, however, at this point any reduction would help. The current policy of sending tasks to machines that won't run them for over a week is just exacerbating the problem of Pending tasks.
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14676
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2034208 - Posted: 27 Feb 2020, 16:37:39 UTC - in response to Message 2034187.  
Last modified: 27 Feb 2020, 16:59:04 UTC

I just did a very rapid count...
Binning results into ten-day groups we get:
21-30 days = 4%
31-40 days = 0
41-50 days = 8%
51-60 days = 72%
61-70 days = 4%
71-80 days = 11%
81-90 days = 1%
I don't think we should get too hung up on the details of individual task deadlines. The curve which Joe, I, and others mapped out involved a lot of observation over what felt like months. It was robustly researched, and convincing enough for the project to adopt it. But that was 12 years ago. It was valid (deliberately) for stock CPU apps only - no GPUs back then. And it was true - two major science revisions ago.

There are, now, distinct outliers. I've just reported Task 8591013178. Deadline 44.33 days, so near the top of your table. But with AR=0.135278, it's a (rare) observation just outside VLAR, and it took twice as long, on my GPU, as I expected. On a CPU, it would have whistled through.

So, please forget about engineering this within an inch of its life. The project hasn't got time for that. My 'halve it' suggestion was made because it'll involve the minimum possible code change - two bytes.

/2

All we need to do is find out where to put it...

Edit - found a copy of the splitter_pfb folder which I downloaded from svn just before Christmas (2019!). Looks like they haven't been recording subsequent changes in their repository - tut, tut. There's a file header:

mb_wufiles.cpp,v 1.1.2.6 2007/08/10 18:21:13 korpela
but it'll do as a start. This looks like the position just before Joe started work:

db_wu.delay_bound=std::max(86400.0*7,db_wu.rsc_fpops_est/4e+7);
'delay_bound' is boincspeak for what we know as a deadline - "minimum 7 days: the rest according to flops". The current version will be different, but that's the only place delay_bound is mentioned in the splitter code.
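
If that line (or its descendant) is still the place, the change would presumably be one of these - treat both as sketches, since I haven't seen the current source:

    // Literal two-byte 'halve it' patch - note it halves the 7-day floor too:
    db_wu.delay_bound=std::max(86400.0*7,db_wu.rsc_fpops_est/4e+7)/2;

    // Variant that halves only the flops-based term, keeping the 7-day minimum:
    db_wu.delay_bound=std::max(86400.0*7,db_wu.rsc_fpops_est/8e+7);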