Message boards :
Number crunching :
Panic Mode On (116) Server Problems?
rob smith Send message Joined: 7 Mar 03 Posts: 22204 Credit: 416,307,556 RAC: 380 |
A couple of "good" examples of CreditScrew in action... Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
[quote]Wow! 5 minutes on a 1070 with the special app. They had better award 3X credit. I see a BLC35 has a Peak swap size around 27GB. Your BLC41 has a Peak swap size around 70GB. Should be 3X credit in my opinion.[/quote] Depends on the CPU run time. If it has 3 times the amount of processing to do, then it will. If not, it won't. And then you've got the effects of Credit New and what it does to actually determine credit. Expect to be disappointed. Grant Darwin NT |
rob smith Send message Joined: 7 Mar 03 Posts: 22204 Credit: 416,307,556 RAC: 380 |
Swap size is a very poor metric for determining "credit due" - different applications* running on the same task may well have very different swap sizes. * "applications" in this context includes command lines and parameters passed by BOINC "automagically" to the application. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
Ghia Send message Joined: 7 Feb 17 Posts: 238 Credit: 28,911,438 RAC: 50 |
Of the validated BLC41 WUs, the non-vlars here run normal times and pay normally. The vlars take 3 times longer and payment has been over 3 times more. Humans may rule the world...but bacteria run it... |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
I have two in my cache. Haven't gotten to them yet, but they're in the pipeline. Well, the first one listed there is next up to start crunching. The second one has already been returned and came in at 2,253 seconds - about half of what I was anticipating, and gave half the normal credits, too. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving-up) |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
I only caught a few before the splitters changed tapes, but one was WU 3522701971 Mine is the second (run on a slow-running GTX 750Ti, despite what the host record says): ran over 50 minutes (normal is less than half that), but credit is also high. |
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
[quote]I only caught a few before the splitters changed tapes, but one was WU 3522701971[/quote] I also notice that there is almost a 4-minute difference between run-time and CPU time, with run-time being the longer. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
[quote]I have two in my cache. Haven't gotten to them yet, but they're in the pipeline.[/quote] Second one finished and reported. Completed and validated: run time 6,440.44 s, CPU time 6,439.09 s, credit 145.26 - twice as long as normal, and a little less than twice the credits. A quick visual average of the valid results seems to be 75-85 credits for a normal WU, so getting 145 is roughly double. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving-up) |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
[quote]I only caught a few before the splitters changed tapes, but one was WU 3522701971[/quote] Any time that run_time is longer than cpu_time, it indicates the CPU is overcommitted. A 4-minute deficit over 50 minutes is acceptable in my opinion. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
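Keith's rule of thumb can be put into numbers. A minimal sketch - the 50- and 46-minute figures below are illustrative, reconstructed from the "4-minute deficit over 50 minutes" he describes:

```python
# Rough check of the run_time vs cpu_time gap discussed above.
# If run_time exceeds cpu_time, the task spent wall-clock time
# waiting for a CPU. Figures are illustrative, not from a real task.
run_time = 50 * 60                 # wall-clock seconds (~50 minutes)
cpu_time = 46 * 60                 # seconds the task actually had a CPU
deficit = (run_time - cpu_time) / run_time
print(f"deficit: {deficit:.0%}")   # prints "deficit: 8%"
```

A deficit under roughly 10% of the run time is what the post treats as acceptable.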
TBar Send message Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
[quote]I only caught a few before the splitters changed tapes, but one was WU 3522701971[/quote] Wow. 50 minutes and then it Overflowed... Meaning, it would have taken longer if it had finished. How much longer? Probably would have taken an Hour, based on My 750 Ti and the longer 41s. My 750 Ti takes over 20 minutes on those longer ones running CUDA, and that is about 3 Times faster than OpenCL in Windows. An hour to run a single GPU task on a 750 Ti? Pretty discouraging considering My 750 Ti is finishing the BLC 24s in about 7 minutes. Those longer 41s are pretty brutal. Fortunately, some of the 41.vlars run normally, such as the ones with PSR in the name. People looking at the run-times should note if the task OVERFLOWED. Aborted tasks will Not give an accurate indication of run-time, and many of the longer 41s are Overflowing. BTW, CPU run-times don't mean much on GPU tasks; as long as it's around 10% or higher, it really depends on the App and the settings. |
Speedy Send message Joined: 26 Jun 04 Posts: 1643 Credit: 12,921,799 RAC: 89 |
[quote]I only caught a few before the splitters changed tapes, but one was WU 3522701971[/quote][quote]Wow. 50 minutes and then it Overflowed... Meaning, it would have taken longer if it had finished. How much longer? Probably would have taken an Hour, based on My 750 Ti and the longer 41s. My 750 Ti takes over 20 minutes on those longer ones running CUDA, and that is about 3 Times faster than OpenCL in Windows. An hour to run a single GPU task on a 750 Ti? Pretty discouraging considering My 750 Ti is finishing the BLC 24s in about 7 minutes. Those longer 41s are pretty brutal. Fortunately, some of the 41.vlars run normally, such as the ones with PSR in the name. People looking at the run-times should note if the task OVERFLOWED. Aborted tasks will Not give an accurate indication of run-time, and many of the longer 41s are Overflowing.[/quote] Completely out of curiosity, does anybody know at what percentage this task overflowed? |
Unixchick Send message Joined: 5 Mar 12 Posts: 815 Credit: 2,361,516 RAC: 22 |
No Panic, just caution... The results received in the last hour are on the rise. Not a big deal at 124k, but it could be a warning sign of an incoming shorty storm. I'm running 2+ days behind with my cache, so I can't tell if this is an issue or just a small blip. I'm really liking the data discussion. The Guardian published an article about the Breakthrough Listen project. Basically it said we haven't found anything yet; it is very quiet except for the noise that we make. Still thinking we should make a separate thread for data discussion, so this thread can be for warning signs and true panics. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
I picked up 11 new WUs, then after not seeing a resend for almost a month, I managed to pick one of them up as well. Grant Darwin NT |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
[quote]I picked up 11 new WUs, then after not seeing a resend for almost a month, I managed to pick one of them up as well.[/quote] Those were AP WUs; as for MB, I've yet to see any from file 06my19ad. 18dc09aa is still there. Grant Darwin NT |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
[quote]as for MB I've yet to see any from file 06my19ad.[/quote] I've got one. It's a VLAR. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
[quote]as for MB I've yet to see any from file 06my19ad.[/quote][quote]I've got one. It's a VLAR.[/quote] Just got a few myself in the last 10min. Just posting about something on the forums often seems to be the best way to get something happening. Edit- And the last couple of work requests have resulted in all Arecibo WUs. Grant Darwin NT |
Wiggo Send message Joined: 24 Jan 00 Posts: 34754 Credit: 261,360,520 RAC: 489 |
[quote]as for MB I've yet to see any from file 06my19ad.[/quote][quote]I've got one. It's a VLAR.[/quote] It usually takes about 7hrs from the time Arecibo work is split until the time it starts getting delivered. Cheers. |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
[quote]It usually takes about 7hrs from the time Arecibo work is split until the time they start getting delivered.[/quote] . . Yay! It looks like someone finally kick-started 18dc09aa .. at least it seems to have made some progress, though I am not seeing any WUs from it. As for Arecibo work in general, I have only about 2 dozen tasks on this machine (out of 300). But all Arecibo work is welcome :) Stephen :) |
Richard Haselgrove Send message Joined: 4 Jul 99 Posts: 14650 Credit: 200,643,578 RAC: 874 |
The first VLAR I got was created 21 Jun 2019, 2:35:58 UTC and sent 21 Jun 2019, 8:51:38 UTC. That's where the six hours goes - in the 'ready to send' cache. No wait, no cache. It's what buffering is for.
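The six-hour figure can be checked directly from the two timestamps in the post, e.g. with Python's datetime:

```python
from datetime import datetime

# Timestamps quoted in the post above (both UTC).
FMT = "%d %b %Y, %H:%M:%S"
created = datetime.strptime("21 Jun 2019, 2:35:58", FMT)
sent = datetime.strptime("21 Jun 2019, 8:51:38", FMT)

# Time the WU sat in the ready-to-send buffer before delivery.
print(sent - created)  # prints "6:15:40" - just over six hours
```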
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
[quote]It usually takes about 7hrs from the time Arecibo work is split until the time they start getting delivered.[/quote] Yep. Depends on the runtime for the WUs already out there- the return rate varies between 95k-140k per hour, and there are roughly 630k WUs in the ready_to_send buffer. So roughly 6.5 to 4.5 hours for work currently being split to start coming through. Another point of work-flow interest: looking at the Haveland graphs for the AP received_last_hour numbers. An initial surge and peak of work coming in, then a large drop, with a slight rise then a gradual tapering off over several hours. Then a similar (though not quite as steep) surge in returned work to a new high, a slight drop, then back to the previous peak, and now the gradual tapering off in work being returned again. Interesting to see the effect that the combination of large and small caches, high and low throughput systems, and project resource settings has on work being returned. Grant Darwin NT |
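The 4.5-6.5 hour estimate in the post is simple division; a sketch using the figures quoted there:

```python
# Time to drain the ready_to_send buffer at the quoted return rates.
buffer_size = 630_000                  # WUs in the ready_to_send buffer
low_rate, high_rate = 95_000, 140_000  # results returned per hour

print(f"{buffer_size / high_rate:.1f} to {buffer_size / low_rate:.1f} hours")
# prints "4.5 to 6.6 hours" - matching the rough 4.5-6.5 hour estimate
```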
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.