Message boards : Number crunching : Panic Mode On (98) Server Problems?

jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)

My version still has some accuracy problems. Cheers! Could save some work :)

Cosmic_Ocean (Joined: 23 Dec 00 · Posts: 3027 · Credit: 13,516,867 · RAC: 13)

So it appears that we're getting closer to another milestone of sorts: 2^32 tasks. I know the DB has had a few adjustments and tweaks over the years to deal with these special numbers, but I'm wondering if it is already able to handle this one. [edit: if memory serves me correctly, I think I recall 2^31-1 was a problem and Matt changed that field in the DB from a signed 4-byte integer to an unsigned 8-byte integer, meaning 2^64 will be the next time that is a problem.]

Result 4294967296 hasn't been created yet, but the time is drawing near. Let's see who the lucky person ends up being. I'm hedging my bet that it's going to be some incredibly slow machine, or someone who loads up a cache full of WUs on that machine's first-ever contact and is never heard from again.
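
(For reference, a minimal sketch of the integer ranges being recalled above, written as MySQL DDL; the table and column names are hypothetical, purely for illustration, and are not the actual SETI@home schema.)

    -- A signed 4-byte INT tops out at 2^31 - 1 = 2,147,483,647, the limit recalled above;
    -- an unsigned 8-byte BIGINT runs to 2^64 - 1 = 18,446,744,073,709,551,615.
    CREATE TABLE id_range_demo (
        old_id INT NOT NULL,              -- signed 32-bit: max 2,147,483,647
        new_id BIGINT UNSIGNED NOT NULL   -- unsigned 64-bit: max 18,446,744,073,709,551,615
    );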

Bill Butler (Joined: 26 Aug 03 · Posts: 101 · Credit: 4,270,697 · RAC: 0)

> So it appears that we're getting closer to another milestone of sorts: 2^32 tasks.

Hey Cosmic_Ocean, I am trying to keep up with you! How did you find we are getting close to that number before losing count? In round numbers I am reading that the Master Science Database is stuffed with 1.76 * 10^9 workunits. Also, for convenience, I note that 2^32 ~= 4.29 * 10^9. Dividing 1.76 / 4.29, the result is ~41% full. This is not yet ominous. But maybe this is not what you are talking about.

HAL9000 (Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57)

> So it appears that we're getting closer to another milestone of sorts: 2^32 tasks.

When you are viewing your tasks, look at the far left column labeled "Tasks". The number there reflects how many tasks have been generated. Note that the number of tasks exceeds the number of workunits by a minimum of 2 to 1: there are at least 2 tasks per workunit, and up to 10. That integer is what tends to be the issue. I would have to reread Matt's previous posts, but I recall that they must define the integer length in the table. Previously, when we ran into a number larger than the table could accept, work generation came to a stop until they modified the table to accept the larger integer, which seems to be done by creating a new table and then copying the data into it.
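
(A hedged sketch of what widening such a column could look like in MySQL, using the result table from the BOINC schema.sql quoted in the next post; this illustrates the general approach described above, not the project's actual procedure.)

    -- In-place widening of the auto-increment id (can lock a large table for a long time):
    ALTER TABLE result MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;

    -- Or the create-a-new-table-and-copy route described above:
    CREATE TABLE result_new LIKE result;
    ALTER TABLE result_new MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
    INSERT INTO result_new SELECT * FROM result;
    RENAME TABLE result TO result_old, result_new TO result;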

Richard Haselgrove (Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874)

> So it appears that we're getting closer to another milestone of sorts: 2^32 tasks.

The standard BOINC database schema (schema.sql)

    create table workunit (
        id integer not null auto_increment,
    ...
    create table result (
        id integer not null auto_increment,

still uses type "integer" (which I think means 32 bits) for both workunit and result (=task) IDs. The most recent task I've been issued, a couple of minutes ago, is 4278193638. Allowing for another 333,889 tasks already generated and ready to send, that means we're within 16,500,000 of reaching 2^32 tasks and not being able to split any more work (because they couldn't be inserted into the table). With a turnover of ~72,500 per hour, we would reach that point in about 226 hours, or almost exactly at the end of July. I hope we're already non-standard.
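
(The arithmetic behind that estimate, reproduced as a one-line query; the figures are the ones quoted in the post above, and the query itself is just a convenience.)

    -- Headroom to 2^32, less tasks already split and ready to send, divided by hourly turnover.
    SELECT (POW(2, 32) - 4278193638 - 333889) / 72500 AS hours_left;
    -- ≈ 226.8 hours, i.e. roughly nine and a half days.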

Richard Haselgrove (Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874)

Reading Matt's "Funny story" post from November 2011, I'm not exactly reassured. He hasn't typed the figures '6' and '4' in that order since then, so they may be in for a surprise. I think I feel an email coming on...

jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)

They appear to be Informix SERIAL8 types in the seti db code. If I'm interpreting the IBM documentation correctly, then they should be allowed up to 9,223,372,036,854,775,807 (SERIAL8 / INT8).
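
(For comparison with the MySQL schema quoted above, a sketch of a SERIAL8 column in Informix DDL; the table is hypothetical, assuming SERIAL8 behaves as the IBM documentation describes: an auto-incrementing signed 8-byte integer.)

    -- Illustration only: SERIAL8 auto-increments as a signed 8-byte integer,
    -- so the ceiling is 2^63 - 1 = 9,223,372,036,854,775,807.
    CREATE TABLE result_demo (
        id   SERIAL8 NOT NULL,
        name VARCHAR(64)
    );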

Richard Haselgrove (Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874)

> They appear to be Informix SERIAL8 types in the seti db code. If I'm interpreting the IBM documentation correctly, then they should be allowed up to 9,223,372,036,854,775,807

Matt said they'd been using 8-byte longs in the Informix (science) database:

> We've been bitten by this long ago in informix, and have since been storing larger numbers there as int8's (8 byte integers) or doubles.

'long ago' in 2011. But they were still caught out by the limits of the MySQL (BOINC) database.

jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)

> They appear to be Informix SERIAL8 types in the seti db code. If I'm interpreting the IBM documentation correctly, then they should be allowed up to 9,223,372,036,854,775,807

Ah, yeah, I wonder if there's another schema file for that hiding...

[Edit:] I see, probably some limits in the code that uses the databases. Could indeed amount to a cross-your-fingers-and-see-what-happens situation.

HAL9000 (Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57)

> Reading Matt's "Funny story" post from November 2011, I'm not exactly reassured. He hasn't typed the figures '6' and '4' in that order since then, so they may be in for a surprise. I think I feel an email coming on...

It's only been about 4 years. I'm sure it's in the "to do" pile.