Message boards :
Number crunching :
Panic Mode On (102) Server Problems?
JLDun (Joined: 21 Apr 06 · Posts: 573 · Credit: 196,101 · RAC: 0)
Doesn't it seem like this has become a regular weekend thing? And, coincidentally, all since the Panic 102 thread started....
Zalster (Joined: 27 May 99 · Posts: 5517 · Credit: 528,817,460 · RAC: 242)
About to run out of GPU work... Looks like Einstein is about to see me again..
Jimbocous (Joined: 1 Apr 13 · Posts: 1853 · Credit: 268,616,081 · RAC: 1,349)
> About to run out of GPU work... Looks like Einstein is about to see me again..
Maybe not. I just got my caches refilled ... Hit or miss until the traffic dies down a bit, but ...
The_Matrix (Joined: 17 Nov 03 · Posts: 414 · Credit: 5,827,850 · RAC: 0)
Hooray, I got the first tasks downloaded and crunching.
David S (Joined: 4 Oct 99 · Posts: 18352 · Credit: 27,761,924 · RAC: 12)
Still getting no tasks available from Beta.
David
Sitting on my butt while others boldly go, Waiting for a message from a small furry creature from Alpha Centauri.
KWSN THE Holy Hand Grenade! (Joined: 20 Dec 05 · Posts: 3187 · Credit: 57,163,290 · RAC: 0)
I'm not getting any GPU work, either for nVidia or ATI... And problems with Beta should be reported on the Beta site, as Eric rarely reads the forums here, according to him...
Hello, from Albany, CA!...
Grant (SSSF) (Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304)
> I'm not getting any GPU work, either for nVidia or ATI...
I am, but it's taking anywhere between 3-7 requests to get it. There's been a lot of VLAR work around for a while now, but it looks like the percentage of it versus shorties/normal WUs has increased even further over the last couple of days.
Grant
Darwin NT
JLDun (Joined: 21 Apr 06 · Posts: 573 · Credit: 196,101 · RAC: 0)
I've gotten a few of those, and I think 3 of them -9'ed on me. Not having a GPU cruncher means I'm not too worried, but that's just me.
Grant (SSSF) (Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304)
> ..., and the AR's we get for our GPU's are mostly low ARs relatively close to being classified as VLARs.
I've noticed a few of those. So far, they tend to finish earlier than estimated and don't bog down the system, unlike the VLARs that take longer than estimated & make the system less responsive.
Grant
Darwin NT
Grant (SSSF) (Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304)
> ..., and the AR's we get for our GPU's are mostly low ARs relatively close to being classified as VLARs.
I just had a look at my Core 2 Duo system, and some of those longer running WUs do run longer than the estimated times (like a VLAR). I suspect it's due to the lack of optimised applications, both for the GPUs and the C2D. My i7 is running the AVX application, and the crunching times are much improved. The C2D doesn't have the benefit that it had with the v7 SSSE3 CPU application, nor an optimised GPU application, so there is greater contention for CPU resources, resulting in longer CPU & GPU crunching times compared to my i7 with the AVX application.
Grant
Darwin NT
jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)
On those 750's with the zi app, I notice that you're running at default (generic) settings: pulsefind blocks per SM 4 (Fermi or newer default). The lower the angle range, the more impact those settings have. For display devices, and a device like that, I would suggest pfblockspersm of 15 and pfperiodsperlaunch of 200 might improve things. On non-display GPUs (or even the display one, if you don't experience notable slowdown) you could up the process priority. All those settings are in the xxx_mbcuda.cfg file, using the sample provided as a guide. If display lag doesn't get too bad, then such settings should reduce the CPU feeding requirement somewhat. For Core2Duo, yes, MBv8's increased precision makes feeding harder (I use a Core2Duo to feed a 980 on the main development rig, so I know the pain). Not something that can necessarily be optimised out (at least short term), because the [CPU-side] precision increase is there for reasons.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
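As a rough illustration of jason_gee's suggestion: the three keys he names (processpriority, pfblockspersm, pfperiodsperlaunch) live in the app's xxx_mbcuda.cfg file. The exact file layout below is a sketch based on the stock sample's ini-style format, with the values he suggests for a display GPU like a 750; treat it as a starting point to tune, not a verified optimum:

```ini
; xxx_mbcuda.cfg -- CUDA multibeam tuning (sketch of the sample's layout)
[mbcuda]
; raise only if display lag stays acceptable (non-display GPUs especially)
processpriority = abovenormal
; pulsefind blocks per multiprocessor; generic default is 4 on Fermi or newer
pfblockspersm = 15
; pulsefind periods per kernel launch
pfperiodsperlaunch = 200
```

Lower angle ranges spend more time in pulsefinding, which is why these pulsefind settings matter more on VLAR-ish work.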
Grant (SSSF) (Joined: 19 Aug 99 · Posts: 13746 · Credit: 208,696,464 · RAC: 304)
> The lower the angle range, the more impact those settings have. I would suggest for display devices, and a device like that, pfblockspersm of 15 and pfperiodsperlaunch of 200 might improve things. On non-display GPUs (or even the display one, if you don't experience notable slowdown) you could up the process priority.
I might give that a go. Initially I tried several different settings when running v7, all the way up to the maximums, but they didn't have any significant effect, so I just went back to the defaults.
EDIT: although I don't think I changed process priority at all. Will give that a go this time. Is it likely that process priority needs to be higher for the other settings to have a significant effect?
EDIT: just noticed some longer-running-than-estimated WUs on my i7 as well.
Grant
Darwin NT
jason_gee (Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0)
> EDIT: just noticed some longer-running-than-estimated WUs on my i7 as well.
Yeah, in the scheme of things that's the estimation component in the scheduler of CreditNew, so expect it to be no more stable/accurate than credits. (Moral being: never send scientists to do an engineer's job.) For the settings, yeah, process priority impact *might* be significantly swamping the other settings. It's all very system dependent though, so if your Core2Duo happens to have as much trouble feeding 2 750's as mine does 1 980, then I wouldn't expect 'max performance'. The ability to respond to software interrupts as fast isn't there; the motherboard chipset plays a role, irrespective of actual CPU utilisation.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
Keith Myers (Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873)
> Yes, it's an enormous amount of VLARs out there now, and the AR's we get for our GPU's are mostly low ARs relatively close to being classified as VLARs.
Remind me again... what is the cutoff AR range that gets classified as VLAR? I saw some ~0.09 range tasks on the GPUs that awarded ~160 or so credits. They were not tagged as VLAR. They took about twice as long to run as the typical ~0.40 AR range tasks... about 24 minutes on my 970's doing .5 tasks each. I don't think I had ever seen tasks with that low an AR on the GPUs before.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers (Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873)
That's what I thought I had remembered. I had 3 of these ~0.099 AR range tasks and they WERE NOT marked as VLAR. By the definition of < 0.12, they should have been marked as VLAR. I inspected them because I saw they had been awarded pretty high credit by CreditNew, and that caught my attention. Long gone by now... just wondering.
Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
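The rule Keith is applying (tag as VLAR when the true angle range falls below the cutoff) can be sketched as a one-liner. Both cutoff values below come from the thread itself (most posters say 0.12; Mike later claims 0.012), and the helper name is purely illustrative; the thread never settles which figure the splitter actually uses:

```python
def is_vlar(true_angle_range: float, cutoff: float = 0.12) -> bool:
    """Classify a multibeam task as VLAR when its true angle range
    is below the cutoff (0.12 per most posters here; Mike says 0.012)."""
    return true_angle_range < cutoff

# Keith's ~0.099 AR tasks: VLAR under the 0.12 definition...
print(is_vlar(0.099))           # True
# ...but not under Mike's 0.012 figure, which would explain the missing tag.
print(is_vlar(0.099, 0.012))    # False
```

Under the 0.012 cutoff, every low-AR task reported in this thread (0.064-0.099) would legitimately escape the VLAR tag.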
TBar (Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768)
I've seen a few of those, took for freakin' ever: http://setiathome.berkeley.edu/result.php?resultid=4706958573 I figured it must be those blasted Rays again...
Jeff Buck (Joined: 11 Feb 00 · Posts: 1441 · Credit: 148,764,870 · RAC: 0)
Interesting. I hadn't noticed those, and there aren't any in my current validated tasks list, but I just looked in my archives and found one from a couple days ago with an AR of 0.078860 and credit of 157.53. It took nearly 40 minutes to run on a GTX 660. And a day earlier, there was one with an AR of 0.096775 and credit of 151.43 that took an hour and 2 minutes to run on a GTX 750 Ti.
EDIT: Further archive diving turned up an AR of 0.063161, credit of 195.74, and an hour and 2 minutes on a GTX 660. (BTW, all of these times are with 2 tasks per GPU. However, if one of those tasks happens to be an AP, the MB run times suffer no matter what the AR.)
TBar (Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768)
Hmmm, here's another: http://setiathome.berkeley.edu/result.php?resultid=4709513420
Run time: 34 min 3 sec
CPU time: 33 min 36 sec
SETI@home using CUDA accelerated device GeForce GTX 750 Ti
setiathome v8 enhanced x41p_zi, Cuda 6.50 special
Compiled with NVCC 6.5, using 6.5 libraries. Modifications done by petri33.
Detected setiathome_enhanced_v8 task. Autocorrelations enabled, size 128k elements.
Work Unit Info:
...............
WU true angle range is : 0.064373
???
Mike (Joined: 17 Feb 01 · Posts: 34258 · Credit: 79,922,639 · RAC: 80)
> Hmmm, here's another:
Not sure what you want to say. VLARs start at AR 0.012, not 0.12.
With each crime and every kindness we birth our future.
TBar (Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768)
> Hmmm, here's another:
So... all those other people are wrong? Let's say this: a normal task on that machine runs for 4.5 minutes and scores 64 credits: http://setiathome.berkeley.edu/result.php?resultid=4709532453 The last one of these "non-VLARs" ran for 28 minutes and scored 100 credits, look above. Now if this one also scores 100, let's do the math: 34 divided by 4.5 is about 7.5, times 64 = 480 credits for a normal task versus 100 for one of these non-VLARs. That's a 380 credit difference... I WUZ robbed. How about that?
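TBar's back-of-envelope scaling (pay the 34-minute task at the same credit-per-minute rate as a normal task) can be written out with the numbers quoted in his post; the exact result is a touch higher than his rounded 480:

```python
normal_runtime_min = 4.5   # typical task on that GTX 750 Ti
normal_credit = 64.0       # credit awarded for that typical task
slow_runtime_min = 34.0    # the disputed low-AR task
slow_credit = 100.0        # credit TBar expects it to score

# Scale normal credit by the runtime ratio.
ratio = slow_runtime_min / normal_runtime_min   # ~7.56 (TBar rounds to 7.5)
expected = ratio * normal_credit                # ~483.6 (TBar's 480)
shortfall = expected - slow_credit              # ~383.6 (TBar's 380)
print(round(expected), round(shortfall))
```

The gap only holds if credit really should scale linearly with runtime, which is exactly the assumption CreditNew does not honor for these tasks.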
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.