Message boards : Number crunching : Report all problems here...........
Geek@Play (Joined: 31 Jul 01, Posts: 2467, Credit: 86,146,931, RAC: 0)
Something is better..... First time I have been able to download work with the Cricket graph above 90. Slow, but the work came in!
Boinc....Boinc....Boinc....Boinc....
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13835, Credit: 208,696,464, RAC: 304)
The freaking point is the GPU was supposed to have a separate limit now. It does, but the overall limit still caps the individual limits.
Grant
Darwin NT
Josef W. Segur (Joined: 30 Oct 99, Posts: 4504, Credit: 1,414,761, RAC: 0)
soft^spirit wrote: ...

I like to do informal analysis, which takes a few hours, but if this develops into some suggestion I can make to David Anderson which might help, I may PM to ask for more data.

The GT220 has a peak of 128.16 GFLOPS per nVidia's formula, which BOINC has accepted. For your i5, BOINC believes the "Measured floating point speed 3249.93 million ops/sec" is the same kind of value. IOW, the servers would initially have assumed the GT220 was 39.4 times as fast as one core of the i5. The averaging since that initial assumption should have largely corrected to a more realistic ratio. Actual times for similar completed work show the GT220 is slightly less than twice as fast for VHAR (angle range 1.13+) work, about 2.8 times as fast for midrange ~0.43 angle range work, and about 15% slower than the i5 for VLAR (angle range below 0.05) work.

What I think happened was that prior to last Monday, July 5, the servers were thinking the GT220 was around 3 times as fast as the i5. Then the splitters did a "tape" or two which produced mostly VLAR work just while hosts were finally able to build cache up to prepare for the outage. Because CPUs do VLAR work at about the same speed as midrange, the original estimate from the splitter is similar. So the VLARs sent for the GT220 to process were probably underestimated by nearly a factor of 3 at that point, but the VLARs sent for the i5 to process were about right. But each time the GT220 completes one of those, it forces the host DCF (duration correction factor) up so the estimates for GPU work are about right, while each time the i5 does one it only brings DCF down slightly. That asymmetrical changing of DCF is because it is mainly intended for work fetch calculations, and the BOINC devs wanted to ensure it would never cause so much work to be requested that deadlines might be missed.

I think if you check the "Task duration correction factor" for your i5 host, it's likely to be near 3 rather than the 1.0 value the server-side adjustments assume. The server-side adjustments should adapt as more of the host's tasks are validated, but you got caught in the coincidence of being sent many tasks which were seriously different from what had been used to do the previous averaging. So until the CPU work sent July 5 is finished, those estimates will be wrong. And the need to download heavily before each weekly outage will probably cause less extreme anomalies each week. The averaging would work best if there were a steady stream of work being sent and returned.
Joe
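The asymmetric DCF behaviour Joe describes can be sketched roughly as follows. The update constants here are illustrative assumptions, not BOINC's actual code; only the "jump up immediately, ease down slowly" shape and the peak-GFLOPS ratio arithmetic come from the post above:

```python
# Initial server assumption from the peak figures quoted above:
# 128.16 GFLOPS (GT220) vs 3249.93 MFLOPS (one i5 core) is about 39.4x.
assert round(128.16 / 3.24993, 1) == 39.4

def update_dcf(dcf, estimated_hours, actual_hours):
    """Asymmetric duration-correction-factor update (a sketch with
    illustrative constants, not the BOINC client's exact code):
    jump up immediately on an underestimate, drift down slowly
    otherwise, so work fetch never overshoots deadlines."""
    ratio = actual_hours / estimated_hours
    if ratio > dcf:
        return ratio                      # underestimate: correct at once
    return dcf + 0.1 * (ratio - dcf)      # overestimate: ease back down

dcf = 1.0
# One GPU VLAR underestimated by ~3x pushes DCF straight to 3...
dcf = update_dcf(dcf, estimated_hours=1.0, actual_hours=3.0)
assert abs(dcf - 3.0) < 1e-9
# ...while a correctly estimated CPU task only nudges it down a little,
# leaving the CPU estimates inflated for a long time.
dcf = update_dcf(dcf, estimated_hours=1.0, actual_hours=1.0)
assert abs(dcf - 2.8) < 1e-9
```

This is why a single batch of badly estimated VLARs can triple the host's CPU estimates, while many accurate results are needed to bring them back down.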
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Well....a second rig has run out of CUDA tasks and is not getting any.......

They had the limits initially set at 5/40/140: 5 per CPU, 40 per GPU, 140 total. The problem is people who "stocked up" might have had more CPU units, and I think the "total" limit is perhaps unnecessary. I am hoping the total is the first thing turned off completely, and the sooner the better.
Janice
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Josef, I am currently crunching one of the 9+ hour work units. 1.02 hours in, it is 31% complete and shows 4:56:xx to go. From experience it will take about 3 hours. In the meantime, the other work units that were 9+ hours now show 11+ hours. This seems to be going in the wrong direction. GPU: my mathematical wizardry shows precisely.. "eh, somewhere in there". OK, so I got "D"s in math. Hard to tell on the smaller CPU units, but some might be off by a few minutes. And, for the record, I did not run out of work. I did call some of my GPU time in to defend Middle Earth... and find material to perfect certain cocktails.. but I did not run out of work units. I do have priorities.
Janice
zoom3+1=4 (Joined: 30 Nov 03, Posts: 66199, Credit: 55,293,173, RAC: 49)
Well....a second rig has run out of Cuda tasks and is not getting any.......

Agreed. Nice Marvin there, soft^spirit.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
The freaking point is the GPU was supposed to have a separate limit now.

My limit is bigger than your limit..... Anyway, the whole issue was to allow the GPU to get some work when everything else in the cache was assigned to the CPU and there were no WUs that could be transferred via the rescheduler to the GPU. If I have 1000 WUs on the CPU and the GPU is sucking wind, that is exactly the situation that frustrated so many here during the last outage. Whatever the total limit happens to be, it has to be applied to each device, not to the sum total. I don't think this was Jeff's intent....but I guess he will have to speak up for himself on that one.
"Time is simply the mechanism that keeps everything from happening all at once."
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
The freaking point is the GPU was supposed to have a separate limit now.

I do not think that was Jeff's intent. And not to worry.. we will make sure no one likes us till all the bowls are filled.... or at least do not leave you starving.
Janice
Helli_retiered (Joined: 15 Dec 99, Posts: 707, Credit: 108,785,585, RAC: 0)
...

Yes, now it's better than before. But that's not the point of a cache, IMHO. A cache has to protect you from an interruption in the workflow. Since SETI takes a 3-day break every week, the current cache of 7-8 hours (for my rigs) is too small. Yes, 24 hours before the next outage you can try to fill your cache for three days, but because of the overload it's a matter of hoping and fearing. ;-)
Helli
A loooong time ago: First Credits after SETI@home Restart
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
... The limits are meant to soft-start the servers after 3 days of downtime...

I have no problem with that, as long as they are lifted more than 1 day before the next outage; otherwise we get a '24 Hours of Le Mans' race to fill caches at the last minute, leaving thousands of downloads stranded when the servers go down. Kinda pointless lifting the limit and then having the work stuck in downloads so it can't be processed. The idea behind separate CPU/GPU limit counts was to give the GPUs that had run out of work something to do as soon as the outage was over. That is not happening right now for 2 of my own rigs, and I am sure for countless others.
"Time is simply the mechanism that keeps everything from happening all at once."
Grant (SSSF) (Joined: 19 Aug 99, Posts: 13835, Credit: 208,696,464, RAC: 304)
Yes, now it's better than before. But it's not the meaning of a Cache, IMHO.

It's nothing to do with a cache. It's just so that everyone is able to get some work, and to help limit the load on the system.
Grant
Darwin NT
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
... if you can get your 7 days to be an honest 7 days... you should have no worries

I had to ask for 6 to get a solid 3. Let's just hope they can open it up wide in the morning.
Janice
MadMaC (Joined: 4 Apr 01, Posts: 201, Credit: 47,158,217, RAC: 0)
Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. I'm sure Fred would appreciate some testing, and I presume many who moved huge numbers of VLARs to CPU and now have no GPU tasks might benefit. Precautions like making a backup before trying any program in early development are obviously sensible. Joe

No need, I downloaded Rescheduler 1.7 from the Lunatics forums and that can transfer units (even VLARs) back from the CPU to the GPU... Even so, it still takes about 90 minutes on my Fermi to complete a VLAR, but with 270-odd VLARs, every little helps. My worry is that if they don't remove the limits, it will take me until Wednesday to crunch through the VLARs and then I will be out of work. I could abort them, but I don't like doing that - it's all science, even if it is slow science..
Helli_retiered (Joined: 15 Dec 99, Posts: 707, Credit: 108,785,585, RAC: 0)
Well, my cache size was already set to 3 1/2 days (as always). But after the outage it's now down to zero. Now I have to get through the next three days with this 140-workunit cache (7-8 hours of work). I hate to set my cache to such a high value as 7 days. ;-)
Helli
A loooong time ago: First Credits after SETI@home Restart
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. ... Joe

Anybody else test this yet? I am hoping that the limit may be lifted in the morning and I will not have to resort to transferring VLARs back to my GPUs. But, as they are crunch-only rigs, I could do so and not have to worry about the slowdown caused by VLAR work being done on the GPU. So, if it works, and things don't get fixed in Boincland, it could be a good tool to have.
"Time is simply the mechanism that keeps everything from happening all at once."
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. ... Joe

Just testing for the bottom of the blender at the moment.. I think I can find it if it holds still..
Janice
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
Still looking for that lost shaker of salt?
"Time is simply the mechanism that keeps everything from happening all at once."
soft^spirit (Joined: 18 May 99, Posts: 6497, Credit: 34,134,168, RAC: 0)
Salt?? Not fer thish... Yo ho ho an.. every bottle in tha cupboard
Janice
kittyman (Joined: 9 Jul 00, Posts: 51477, Credit: 1,018,363,574, RAC: 1,004)
LOL... The 'lost shaker of salt' is a Margaritaville thingy. But I suppose you knew that.....
"Time is simply the mechanism that keeps everything from happening all at once."
RottenMutt (Joined: 15 Mar 01, Posts: 1011, Credit: 230,314,058, RAC: 0)
Anybody else test this yet?

Just abort them in this situation. Sorry, wingmen, but it is what the project has forced us to do if you want GPU work; I don't feel guilty doing it anymore. Unless it is during the 3-day outage, then it may be beneficial...
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.