Message boards :
Number crunching :
Panic Mode On (107) Server Problems?
Author | Message |
---|---|
petri33 Send message Joined: 6 Jun 02 Posts: 1668 Credit: 623,086,772 RAC: 156 |
Here's another one: http://setiathome.berkeley.edu/workunit.php?wuid=2664443000 It was run on a 1080 Ti in 115 seconds. The 156-second one was run on a 1080. The servers have a lot of tapes with DIAG_KIC8462852 data. To overcome Heisenbergs: "You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Here's another one: http://setiathome.berkeley.edu/workunit.php?wuid=2664443000 . . Hi Petri, . . So from what I am hearing these new tasks take about the same amount of time to process as a normal blc04 task, and we are in for a flood of them. Things could be worse. They could be blc05 tasks (shudder). Stephen :) |
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242 |
Now that I've had time to look at several of them, they take about 30% longer than the normal BLC04. Interesting.. |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Now that I've had time to look at several of them, they take about 30% longer than the normal BLC04. Interesting.. . . Hmmm, . . That makes them even slower than Blc05 tasks :( ... ouch. Stephen . . But maybe they will be the ones that make the difference ... ??? :) |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Now that I've had time to look at several of them, they take about 30% longer than the normal BLC04. Interesting.. . . Now that I have had a chance to observe run times they seem to be very comparable to Blc05 tasks. And I hope all those who were unhappy at the dearth of GBT work are smiling now ... my rigs are running 60% GBT tasks .... Stephen <ambivalent shrug> |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
Hopefully a few more files will be loaded up before the weekend. Even with the increased runtimes of the new GBT data, I can't see the current GBT & Arecibo files making it through the weekend. Grant Darwin NT |
Kissagogo27 Send message Joined: 6 Nov 99 Posts: 716 Credit: 8,032,827 RAC: 62 |
I don't remember what sort of files these ones are. Directory of D:\BOINC_DATA\projects\setiathome.berkeley.edu. 700Ko each ? |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
700Ko each ? Hmm. Looks like they've finally released the 4-bit WUs. Testing was done over 12 months ago; the advantage of the 4-bit WUs is that they result in a 46% reduction in the noise in the WU when they are processed. Grant Darwin NT |
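Grant's 46% figure aside, the direction of the effect is easy to demonstrate: quantizing the same signal with more bits leaves less quantization noise. A minimal sketch, with a hypothetical clipping range and purely illustrative numbers (this is not the SETI@home splitter or recorder code):

```python
import math
import random

def quantize(x, bits, full_scale=2.0):
    """Clip x to [-full_scale, full_scale) and snap it to the centre of
    the nearest of 2**bits uniform quantization cells (hypothetical
    quantizer, for illustration only)."""
    levels = 2 ** bits
    step = 2.0 * full_scale / levels
    x = max(-full_scale, min(full_scale - 1e-9, x))
    return (math.floor(x / step) + 0.5) * step

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(50_000)]

rms_error = {}
for bits in (2, 4):
    errs = [quantize(s, bits) - s for s in samples]
    rms_error[bits] = math.sqrt(sum(e * e for e in errs) / len(errs))
    print(f"{bits}-bit RMS quantization error: {rms_error[bits]:.3f}")
```

With this toy quantizer the 4-bit version shows a clearly smaller RMS error than the 2-bit one on the same Gaussian samples; the exact percentage depends on the clipping range and signal statistics, so it won't match Grant's 46% exactly.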
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
And I notice BOINC v7.8.2 has been released, but there's no info about the changes/updates on the download pages. Grant Darwin NT |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Yes, 4-bit WUs released indeed. Twice the size, twice the fun :-) And twice as long to crunch? Meow? "Time is simply the mechanism that keeps everything from happening all at once." |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Yes, 4-bit WUs released indeed. Twice the size, twice the fun :-) If that's the case, let's hope that Seti can find twice the bandwidth... Still do miss those old bandwidth graphs. "Time is simply the mechanism that keeps everything from happening all at once." |
rob smith Send message Joined: 7 Mar 03 Posts: 22526 Credit: 416,307,556 RAC: 380 |
If I remember correctly, they were only a couple of percent longer to run than 2-bit units. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
If I remember correctly, they were only a couple of percent longer to run than 2-bit units. . . Yep, run times for those I received in Beta were very similar to the 2-bit variety, but resolution and noise reduction were apparently much superior. I have been wondering if/when they would materialise. Stephen :) |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Hmm, with double the demand on the bandwidth, I really wonder if this is going to work? That is my concern. Things have been running so smoothly as of late, I just dunno if it will hold together doubling the data sent out. We shall see soon enough, I guess. The kitties are sniffing growing pains in the air. Hope they are wrong. Meow. "Time is simply the mechanism that keeps everything from happening all at once." |
Al Send message Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482 |
Quick general question along these lines. I've always wondered, since server strain has been a perennial issue for the project, why haven't the powers that be (at least as far as I know, but I honestly haven't followed this very closely) just made the work units effectively 'bigger', which would reduce the number of uploads and downloads? I would guess that the individual file sizes might be a little larger, but if we could reduce the I/O by a factor of 2, wouldn't this overall be a good thing? I can't imagine that I am the first person who's ever thought of it, so there must be a reason that it hasn't yet happened. Thoughts/opinions? |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Quick general question along these lines. I've always wondered, since server strain has been a perennial issue for the project, why haven't the powers that be (at least as far as I know, but I honestly haven't followed this very closely) just made the work units effectively 'bigger', which would reduce the number of uploads and downloads? I would guess that the individual file sizes might be a little larger, but if we could reduce the I/O by a factor of 2, wouldn't this overall be a good thing? I can't imagine that I am the first person who's ever thought of it, so there must be a reason that it hasn't yet happened. Thoughts/opinions? Has been mentioned before, and I actually thought that was what was happening with the new change. Meow. "Time is simply the mechanism that keeps everything from happening all at once." |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
I would imagine changing the time period for a task would cause quite a problem with the database storage and/or the extraction/comparison of results. With the data units being the same size, it is probably best to keep it that way. I'm certainly seeing download issues with the doubling of file sizes and my 5Mb/s pipe. It affects normal surfing much more than before. I may have to try going back to 2 concurrent downloads per computer. |
rob smith Send message Joined: 7 Mar 03 Posts: 22526 Credit: 416,307,556 RAC: 380 |
The 4-bit tasks are not the "bigger" ones that have been talked about by some. The so-called "bigger" tasks were thought to contain twice as many (or more) data points at 2-bit resolution, and thus to take about twice as long to process, with no increase in resolution. They would, however, reduce the number of work units generated per "tape": each task overlaps its neighbours by a fair amount, so increasing the number of data points per task reduces the number of overlaps. I believe that one of the reasons for the "bigger" tasks not being released into the wild was/is that many of the slower machines would suffer an unacceptable level of time-outs. As has already been said, the 4-bit tasks have the same number of data points, so take about the same time as 2-bit tasks. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
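Rob's overlap argument can be put in rough numbers. A back-of-envelope sketch with entirely hypothetical tape, WU, and overlap sizes (the real splitter parameters differ):

```python
# Hypothetical numbers only -- not the actual SETI@home splitter values.
def workunits_per_tape(tape_samples, wu_samples, overlap_samples):
    """Number of WUs needed to cover a tape when each WU overlaps its
    neighbour by overlap_samples (simple ceiling cover)."""
    stride = wu_samples - overlap_samples            # fresh samples per WU
    return -(-(tape_samples - overlap_samples) // stride)  # ceiling division

tape = 10_000_000     # samples in one "tape" (made up)
overlap = 20_000      # fixed overlap between neighbouring WUs (made up)

small = workunits_per_tape(tape, 100_000, overlap)
big = workunits_per_tape(tape, 200_000, overlap)
print(small, big)  # 125 56
```

With these made-up numbers, doubling the samples per WU cuts the count from 125 to 56, i.e. to less than half, because the fixed overlap cost is paid fewer times.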
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
Well, there has to be a way of sending that work only to machines that can handle it. It would require rewriting some BOINC/scheduler code, of course. If the machine sent back a tidbit of info about itself with the work request, the scheduler could decide whether or not to send it. Average turnaround time might work. "Time is simply the mechanism that keeps everything from happening all at once." |
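kittyman's turnaround-time idea could look something like this. A minimal sketch, assuming a hypothetical 7-day deadline and a 2x slowdown for the bigger tasks; none of this is actual BOINC scheduler code, and the field and function names are invented:

```python
from dataclasses import dataclass

@dataclass
class Host:
    host_id: int
    avg_turnaround_days: float  # hypothetically reported with each work request

def eligible_for_big_wu(host, deadline_days=7.0, safety_factor=2.0):
    """Bigger WUs are assumed to take ~2x as long, so require the host's
    historical turnaround, scaled up by that factor, to still fit
    inside the deadline."""
    return host.avg_turnaround_days * safety_factor <= deadline_days

fast = Host(1, avg_turnaround_days=1.5)
slow = Host(2, avg_turnaround_days=5.0)
print(eligible_for_big_wu(fast))  # True
print(eligible_for_big_wu(slow))  # False
```

The slow host is filtered out because a doubled runtime would push its expected return past the deadline, which is exactly the time-out problem rob smith described.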
Al Send message Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482 |
That makes absolute sense to me, but from what I've heard about our current resource/staffing issues, is there the headcount available to do it? Would this be something that we might be able to somehow roll into the new(ish) project that the generous Russian gentleman had financed and was implemented last year? Or would this be somewhat outside the scope of that project? |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.