An observation about the project

Message boards : Number crunching : An observation about the project

Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1691584 - Posted: 15 Jun 2015, 10:26:08 UTC - in response to Message 1691583.  

But overall, in general, if you're doing a fair bit of work every day, the RAC should be on an upward trend? :-)

It should trend gradually upwards, towards something close to your daily work rate. But we usually reckon it takes a month to six weeks to get there.
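
For context on that timescale: RAC is generally described as an exponentially smoothed average of granted credit with a half-life of roughly one week, so a new host needs several half-lives to approach its true daily rate. A minimal sketch of that behaviour (the one-week half-life and the update rule here are assumptions for illustration, not the project's actual server code):

import math

HALF_LIFE_DAYS = 7.0   # assumed: RAC smoothing with roughly a one-week half-life

def update_rac(rac, credit_per_day, days_elapsed):
    """Exponentially weighted update: old RAC decays, new credit fills the gap."""
    decay = math.exp(-days_elapsed * math.log(2) / HALF_LIFE_DAYS)
    return rac * decay + credit_per_day * (1.0 - decay)

# A brand-new host earning a steady 1000 credits/day, starting from RAC = 0.
rac = 0.0
for week in range(1, 7):
    rac = update_rac(rac, 1000.0, 7.0)   # advance one week at a time
    print(f"week {week}: RAC = {rac:.0f}")
# Prints 500, 750, 875, 938, 969, 984: it takes four to six weeks to get within
# a few percent of the steady-state 1000/day, matching the rule of thumb above.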
ID: 1691584
Profile Michael McGrath
Joined: 13 Jun 15
Posts: 310
Credit: 1,075,745
RAC: 0
United Kingdom
Message 1691585 - Posted: 15 Jun 2015, 10:26:11 UTC

I've only been a member for 2 days... it should all be recent :-D lol
ID: 1691585
Profile Michael McGrath
Joined: 13 Jun 15
Posts: 310
Credit: 1,075,745
RAC: 0
United Kingdom
Message 1691587 - Posted: 15 Jun 2015, 10:26:41 UTC

Thank you guys :-)
ID: 1691587
Profile Wiggo
Joined: 24 Jan 00
Posts: 34840
Credit: 261,360,520
RAC: 489
Australia
Message 1691588 - Posted: 15 Jun 2015, 10:27:48 UTC

Your RAC won't level out for 2-3 months yet.

Cheers.
ID: 1691588
Profile Michael McGrath
Joined: 13 Jun 15
Posts: 310
Credit: 1,075,745
RAC: 0
United Kingdom
Message 1691590 - Posted: 15 Jun 2015, 10:34:03 UTC

I notice some work pending validation has been sent to computers that've never done any work :P I'm guessing that I'll have to wait until the deadline expires before it is sent to someone else, or would that lose the work? Sorry for another question :-)
ID: 1691590
Profile Wiggo
Joined: 24 Jan 00
Posts: 34840
Credit: 261,360,520
RAC: 489
Australia
Message 1691591 - Posted: 15 Jun 2015, 10:36:18 UTC

Yes, it'll be sent out again to someone else and you'll just have to wait for the credit.

Cheers.
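
To picture the mechanics: each workunit needs a quorum of matching results, and when one copy times out past its deadline the server simply issues a fresh replica, so the completed result is kept and only the credit is delayed. A toy model of that flow (purely illustrative Python; the class and field names are invented and this is not the actual BOINC scheduler code):

from dataclasses import dataclass

@dataclass
class Workunit:
    """Toy model of a replicated task; field names are illustrative, not BOINC's schema."""
    min_quorum: int = 2     # matching results needed before validation
    results_in: int = 0     # results returned so far

    def host_times_out(self):
        # The wingman's deadline passed with no report: that copy is abandoned
        # and a fresh replica is sent to another host. Nothing already computed
        # is lost; validation (and credit) is simply delayed.
        print("Deadline expired -> task reissued to another host.")

    def host_returns_result(self):
        self.results_in += 1
        if self.results_in >= self.min_quorum:
            print("Quorum reached -> results validated, credit granted.")
        else:
            print("Result stored, still pending validation.")

wu = Workunit()
wu.host_returns_result()   # your result arrives first: shows as "pending"
wu.host_times_out()        # the other host never reports back
wu.host_returns_result()   # the reissued copy returns: credit granted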
ID: 1691591
Profile Michael McGrath
Joined: 13 Jun 15
Posts: 310
Credit: 1,075,745
RAC: 0
United Kingdom
Message 1691592 - Posted: 15 Jun 2015, 10:40:59 UTC - in response to Message 1691591.  

Yes, it'll be sent out again to someone else and you'll just have to wait for the credit.

Cheers.

Thanks Wiggo :-)
ID: 1691592
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1691796 - Posted: 15 Jun 2015, 18:39:13 UTC

As far as I can remember, ntpckr isn't running because when it does, it brings the DB crying to its knees and basically the entire project comes to a screeching halt. The queries that are involved require a lot of I/O (I don't recall if hardware or software... both of which the DB still currently has some issues with, which is the reason for the server-side "tasks in progress" limits).

I, among others, have suggested taking one of the weekly DB backups, putting that backup on an isolated machine, and letting ntpckr chew on that for a few months to get caught up. Then update that DB with a more recent backup and let ntpckr chew through the new data for a while until it gets caught up again. Maybe do that one more time, and then it should be able to run live in "near real time", as its name suggests.

That's my understanding on that, at least.
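
A rough sketch of the catch-up scheme proposed above (a simulation with made-up data and invented helper names, nothing that exists in the SETI@home code base):

import random

def run_ntpckr_over(rows, threshold=0.9):
    """Stand-in for ntpckr: keep whatever looks 'interesting' in a batch of rows."""
    return [r for r in rows if r["score"] > threshold]

def fake_rows(n):
    """Fake signal rows standing in for a database snapshot."""
    return [{"id": i, "score": random.random()} for i in range(n)]

# Step 1: restore a weekly backup on an isolated machine and let the picker
# chew through the whole backlog offline, however long that takes.
backlog = fake_rows(100_000)
candidates = run_ntpckr_over(backlog)

# Step 2: later, load a newer backup and process only the rows added since,
# which is a much smaller job. Repeat until the gap is small.
new_rows = fake_rows(5_000)
candidates += run_ntpckr_over(new_rows)

# Step 3: once the backlog is gone, the same per-batch pass can follow the
# live database in near real time, as the name ntpckr suggests.
print(f"{len(candidates)} candidate signals flagged for a closer look")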
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1691796
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14653
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1691799 - Posted: 15 Jun 2015, 18:46:04 UTC - in response to Message 1691796.  

As far as I can remember, ntpckr isn't running because when it does, it brings the DB crying to its knees and basically the entire project comes to a screeching halt. The queries that are involved require a lot of I/O (I don't recall if hardware or software... both of which the DB still currently has some issues with, which is the reason for the server-side "tasks in progress" limits).

I, among others, have suggested taking one of the weekly DB backups, putting that backup on an isolated machine, and letting ntpckr chew on that for a few months to get caught up. Then update that DB with a more recent backup and let ntpckr chew through the new data for a while until it gets caught up again. Maybe do that one more time, and then it should be able to run live in "near real time", as its name suggests.

That's my understanding on that, at least.

Can't be quite right. The ntpckr will be chewing over the science database - every result ever received - which is HUGE.

The weekly backups - and the server "tasks in progress" limits - are for the BOINC database, which is for the fast-moving transactional stuff. Everything of interest is dropped from that after 24 hours - after having been assimilated into the science database, of course.

I think it was mostly the sheer size of the science database which stalled the ntpckr when they tried it - you simply need something with the resources of Google to access and process that much data in real time.
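
A schematic of the two-database split described above, as an illustrative in-memory model (all names and structures here are invented; only the flow follows this post: results land in the BOINC database, their signals are assimilated into the science database, and the transactional rows are purged after roughly 24 hours):

from datetime import datetime, timedelta

# Illustrative stand-ins for the two real databases.
boinc_db = {}     # fast-moving transactional data: tasks, results, credit
science_db = []   # every assimilated signal ever received: the HUGE one

def result_received(result_id, signals, now):
    """A host reports a result: it lives in the BOINC database first."""
    boinc_db[result_id] = {"signals": signals, "assimilated_at": None, "received_at": now}

def assimilate(result_id, now):
    """Copy the scientific payload into the science database and mark the row."""
    row = boinc_db[result_id]
    science_db.extend(row["signals"])
    row["assimilated_at"] = now

def purge(now, keep_for=timedelta(hours=24)):
    """Drop assimilated rows after ~24h so the BOINC database stays small and fast."""
    stale = [rid for rid, row in boinc_db.items()
             if row["assimilated_at"] and now - row["assimilated_at"] > keep_for]
    for rid in stale:
        del boinc_db[rid]

# ntpckr's candidate-scoring queries would run against science_db, which holds
# every result ever returned, not against the small, regularly backed-up boinc_db.
t0 = datetime(2015, 6, 15)
result_received(1, [{"type": "spike", "power": 42.0}], t0)
assimilate(1, t0 + timedelta(hours=1))
purge(t0 + timedelta(days=2))
print(len(boinc_db), "transactional rows left;", len(science_db), "signals in the science DB")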
ID: 1691799
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1691991 - Posted: 16 Jun 2015, 3:34:45 UTC - in response to Message 1691799.  

As far as I can remember, ntpckr isn't running because when it does, it brings the DB crying to its knees and basically the entire project comes to a screeching halt. The queries that are involved require a lot of I/O (I don't recall if hardware or software... both of which the DB still currently has some issues with, which is the reason for the server-side "tasks in progress" limits).

I, among others, have suggested taking one of the weekly DB backups, putting that backup on an isolated machine, and letting ntpckr chew on that for a few months to get caught up. Then update that DB with a more recent backup and let ntpckr chew through the new data for a while until it gets caught up again. Maybe do that one more time, and then it should be able to run live in "near real time", as its name suggests.

That's my understanding on that, at least.

Can't be quite right. The ntpckr will be chewing over the science database - every result ever received - which is HUGE.

The weekly backups - and the server "tasks in progress" limits - are for the BOINC database, which is for the fast-moving transactional stuff. Everything of interest is dropped from that after 24 hours - after having been assimilated into the science database, of course.

I think it was mostly the sheer size of the science database which stalled the ntpckr when they tried it - you simply need something with the resources of Google to access and process that much data in real time.

Ah. Okay. I guess I had some misinterpretations somewhere along the line.

In any case, if they're working on some DB stuff such as breaking it up into tables, why couldn't ntpckr just deal with one table at a time, make a list of the things it finds interesting, then continue adding to that list when the next table is loaded, and so on? Once it has chewed through everything, go through the list, make comparisons, and consolidate it a bit, and there's your list of interesting things to look at more closely.

But the second part of my aforementioned idea still works: once the backlog has been analyzed, it should be able to keep up in near real time. I wouldn't think it would have to completely start over with every new result that comes in. So the hurdle at the moment for ntpckr is chewing through the backlog.
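
A sketch of that incremental, per-table idea (illustrative Python with fake data; the scoring and consolidation rules are invented, and this is not how ntpckr is actually implemented):

import random

def fake_table(n=1000):
    """Fake rows standing in for one table of the science database."""
    return [{"ra": random.uniform(0, 24), "dec": random.uniform(-90, 90),
             "power": random.uniform(0, 50)} for _ in range(n)]

def interesting(row, threshold=45.0):
    """Stand-in for whatever makes a signal worth a second look."""
    return row["power"] > threshold

def consolidate(candidates):
    """Merge near-duplicates (repeat detections at the same sky position),
    keeping only the strongest, so the final shortlist stays manageable."""
    best = {}
    for c in candidates:
        key = (round(c["ra"], 1), round(c["dec"], 1))
        if key not in best or c["power"] > best[key]["power"]:
            best[key] = c
    return list(best.values())

# Pass 1: chew through the backlog one table at a time, accumulating a list.
candidates = []
for table in (fake_table() for _ in range(3)):
    candidates.extend(row for row in table if interesting(row))
shortlist = consolidate(candidates)

# Pass 2: from then on, each newly assimilated result only has to update the
# running list; there is no need to start over from scratch.
def on_new_result(row):
    if interesting(row):
        candidates.append(row)

print(len(shortlist), "consolidated candidates after the initial backlog pass")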
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1691991