Report all problems here...........

Message boards : Number crunching : Report all problems here...........
Geek@Play
Volunteer tester
Joined: 31 Jul 01
Posts: 2467
Credit: 86,146,931
RAC: 0
United States
Message 1014119 - Posted: 10 Jul 2010, 3:45:37 UTC

Something is better.....first time I have been able to download work with the cricket graph above 90. Slow but the work came in!
Boinc....Boinc....Boinc....Boinc....
ID: 1014119
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13373
Credit: 208,696,464
RAC: 304
Australia
Message 1014142 - Posted: 10 Jul 2010, 4:44:29 UTC - in response to Message 1014113.  

The freaking point is the GPU was supposed to have a separate limit now.

It does, but the overall limit limits the individual limits.
Grant
Darwin NT
ID: 1014142
Josef W. Segur
Volunteer developer
Volunteer tester
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1014154 - Posted: 10 Jul 2010, 5:21:18 UTC - in response to Message 1014046.  

soft^spirit wrote:
...
On the I5 system, I would say the GPU units are pretty close to fair.... Hard to tell due to the mix of units at the moment. But 2 hours is about average.. I have everything throttled back a bit, as I need to use the computers too.

However the CPU units rating themselves at 10 hours on the GT220,(9 to 11:50 that I have seen) is very very far off. These units will take a MAX of about 3
hours to complete, each. 2 at a time.

If someone wants to contact me directly to be an "example".. send a message and I will keep them posted directly. As long as it gets to those that can use the information.

I like to do informal analysis which takes a few hours, but if this develops into some suggestion I can make to David Anderson which might help I may PM to ask for more data.

The GT220 has a peak GFLOPS of 128.16 per nVidia's formula which BOINC has accepted. For your i5, BOINC believes the "Measured floating point speed 3249.93 million ops/sec" is the same kind of value. IOW, the servers would initially have assumed the GT220 was 39.4 times as fast as one core of the i5. The averaging since that initial assumption should have largely corrected to a more realistic ratio. Actual times for similar completed work show the GT220 is slightly less than twice as fast for VHAR (angle range 1.13+) work, about 2.8 times as fast for midrange ~0.43 angle range, and about 15% slower than the i5 for VLAR (angle range below 0.05) work.
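The initial ratio Joe quotes can be checked with a line of arithmetic (a quick Python sketch using the figures from this post):

```python
# Figures from the post above.
gpu_peak_gflops = 128.16    # GT220 peak per nVidia's formula, as accepted by BOINC
cpu_core_mflops = 3249.93   # BOINC's "measured floating point speed" for one i5 core

# Initial server-side assumption: GPU speed relative to one CPU core.
ratio = gpu_peak_gflops * 1000 / cpu_core_mflops
print(round(ratio, 1))  # -> 39.4
```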

What I think happened was prior to last Monday July 5, the servers were thinking the GT220 was around 3 times as fast as the i5. Then the splitters did a "tape" or two which produced mostly VLAR work just while hosts were finally able to build cache up to prepare for the outage. Because CPUs do VLAR work at about the same speed as midrange, the original estimate from the splitter is similar. So the VLARs sent for the GT220 to process were probably underestimated by nearly a factor of 3 at that point, but the VLARs sent for the i5 to process were about right. But each time the GT220 completes one of those, it forces the host DCF (duration correction factor) up so the estimates for GPU work are about right, while each time the i5 does one it only brings DCF down slightly.

That asymmetrical changing of DCF is because DCF is mainly intended for work fetch calculations, and the BOINC devs wanted to ensure it would never cause so much work to be requested that deadlines might be missed. I think if you check the "Task duration correction factor" for your i5 host, it's likely to be near 3 rather than the 1.0 value the server-side adjustments assume.

The server-side adjustments should adapt as more of the host's tasks are validated, but you got caught in the coincidence of being sent many tasks which were seriously different from what had been used to do the previous averaging. So until the CPU work sent July 5 is finished those estimates will be wrong. And the need to download heavily before each weekly outage will probably cause less extreme anomalies each week. The averaging would work best if there were a steady stream of work being sent and returned.
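The asymmetric DCF update Joe describes can be sketched roughly like this (a Python sketch; the jump-up / 10%-drift-down step sizes are illustrative assumptions, not BOINC's exact constants):

```python
def update_dcf(dcf, actual, estimated):
    """Adjust the host duration correction factor after a task finishes.

    DCF is designed to rise quickly (so work fetch never over-commits
    and risks missed deadlines) but fall only slowly.  The step sizes
    here are illustrative, not BOINC's exact constants.
    """
    ratio = actual / estimated
    if ratio > dcf:
        return ratio                    # overrun: jump straight up
    return dcf + 0.1 * (ratio - dcf)    # underrun: drift down slowly

dcf = 1.0
dcf = update_dcf(dcf, actual=3.0, estimated=1.0)  # a ~3x under-estimated GPU VLAR -> 3.0
dcf = update_dcf(dcf, actual=1.0, estimated=1.0)  # an accurately estimated CPU task -> 2.8
```

One badly under-estimated GPU VLAR pushes DCF all the way up, while each correctly estimated CPU task only nudges it back down, which is why the i5's CPU estimates stay inflated for so long.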
                                                                 Joe
ID: 1014154
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014155 - Posted: 10 Jul 2010, 5:26:15 UTC - in response to Message 1014113.  

Well....a second rig has run out of Cuda tasks and is not getting any.......
So that is obviously not working right.


Check your count of CPU tasks. I had a bunch of VLARs that threw me over the overall limit.

My GPU is now humming away, however I am also wanted for the mass murder of about 80 VLAR WUs.

Gotta run.



Yeah, I know......
The freaking point is the GPU was supposed to have a separate limit now.


They had the limits initially set at 5/40/140
5 per CPU
40 per GPU
140 total.

Problem is, people who "stocked up" might have had more CPU units.. and I think the "Total" limit is perhaps unnecessary.

I am hoping the total is the first thing turned off completely.. and the sooner the better.
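How the total cap undercuts the per-device caps can be sketched like this (a Python sketch; the per-core reading of "5 per CPU" and the order of the checks are assumptions for illustration):

```python
def grantable(cpu_tasks, gpu_tasks, cores=4, per_core=5, per_gpu=40, total=140):
    """Tasks the scheduler could still send to each device class
    under hypothetical 5-per-core / 40-per-GPU / 140-total caps."""
    cpu_room = max(0, per_core * cores - cpu_tasks)
    gpu_room = max(0, per_gpu - gpu_tasks)
    overall  = max(0, total - cpu_tasks - gpu_tasks)
    # The total cap is applied on top of the per-device caps, so a host
    # stocked up with CPU tasks can starve an idle GPU even though the
    # GPU's own 40-task cap has plenty of room.
    return min(cpu_room, overall), min(gpu_room, overall)

print(grantable(cpu_tasks=140, gpu_tasks=0))  # stocked-up host -> (0, 0): GPU gets nothing
print(grantable(cpu_tasks=10,  gpu_tasks=0))  # -> (10, 40)
```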
Janice
ID: 1014155
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014159 - Posted: 10 Jul 2010, 5:35:40 UTC - in response to Message 1014154.  

Josef, I am currently crunching one of the 9+ hour work units.. 1.02 hours in,
it is 31% complete, and shows 4:56:xx to go. From experience.. it will take about 3 hours. In the meantime, the other work units that were 9+ hours now show 11+ hours.

This seems to be going the wrong direction. On the GPU, my mathematical wizardry shows precisely.. "eh, somewhere in there"


Ok, so I got "D"s in math.

Hard to tell on the smaller CPU units.. but some might be in the few-minute range.
And.. for the record.. I did not run out of work. I did call some of my GPU time in to defend Middle Earth... and find material to perfect certain cocktails.. But I did not run out of work units. I do have priorities.
Janice
ID: 1014159
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 64944
Credit: 55,293,173
RAC: 49
United States
Message 1014160 - Posted: 10 Jul 2010, 5:37:09 UTC - in response to Message 1014155.  

Well....a second rig has run out of Cuda tasks and is not getting any.......
So that is obviously not working right.


Check your count of CPU tasks. I had a bunch of VLARs that threw me over the overall limit.

My GPU is now humming away, however I am also wanted for the mass murder of about 80 VLAR WUs.

Gotta run.



Yeah, I know......
The freaking point is the GPU was supposed to have a separate limit now.


They had the limits initially set at 5/40/140
5 per CPU
40 per GPU
140 total.

Problem is, people who "stocked up" might have had more CPU units.. and I think the "Total" limit is perhaps unnecessary.

I am hoping the total is the first thing turned off completely.. and the sooner the better.

Agreed.

Nice Marvin there soft^spirit.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1014160
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51464
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014173 - Posted: 10 Jul 2010, 6:12:25 UTC - in response to Message 1014142.  

The freaking point is the GPU was supposed to have a separate limit now.

It does, but the overall limit limits the individual limits.

My limit is bigger than your limit.....

Anyway...the whole issue was to allow the GPU to get some work when everything else in the cache was assigned to the CPU and there were no WUs that could be transferred via rescheduler to the GPU.

If I have 1000 WUs on the CPU and the GPU is sucking wind, that is exactly the situation that frustrated so many here during the last outage.
Whatever the total limit happens to be has to be applied to each device, not the sum total.

I don't think this was Jeff's intent....but I guess he will have to speak up for himself on that one.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014173
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014175 - Posted: 10 Jul 2010, 6:20:47 UTC - in response to Message 1014173.  

The freaking point is the GPU was supposed to have a separate limit now.

It does, but the overall limit limits the individual limits.

My limit is bigger than your limit.....

Anyway...the whole issue was to allow the GPU to get some work when everything else in the cache was assigned to the CPU and there were no WUs that could be transferred via rescheduler to the GPU.

If I have 1000 WUs on the CPU and the GPU is sucking wind, that is exactly the situation that frustrated so many here during the last outage.
Whatever the total limit happens to be has to be applied to each device, not the sum total.

I don't think this was Jeff's intent....but I guess he will have to speak up for himself on that one.


I do not think that was Jeff's intent. And not to worry.. We will make sure no one likes us till all the bowls are filled.... or at least do not leave you starving.



Janice
ID: 1014175
Helli_retiered
Volunteer tester
Joined: 15 Dec 99
Posts: 707
Credit: 108,785,585
RAC: 0
Germany
Message 1014176 - Posted: 10 Jul 2010, 6:24:31 UTC - in response to Message 1014155.  

...
They had the limits initially set at 5/40/140
5 per CPU
40 per GPU
140 total.
...


Yes, now it's better than before. But that's not the purpose of a Cache, IMHO.
A Cache has to protect you from an Interruption in the Workflow.

Since SETI takes a 3 Day Break every Week, the current Cache of 7-8 hours (for my Rigs) is too small.
Yes, 24 hours before the next Outage you can try to fill your Cache for three Days, but because of the
Overload it's all Hoping and Fearing. ;-)

Helli
A loooong time ago: First Credits after SETI@home Restart
ID: 1014176
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51464
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014178 - Posted: 10 Jul 2010, 6:30:34 UTC - in response to Message 1014176.  

...
They had the limits initially set at 5/40/140
5 per CPU
40 per GPU
140 total.
...


Yes, now it's better than before. But that's not the purpose of a Cache, IMHO.
A Cache has to protect you from an Interruption in the Workflow.

Since SETI takes a 3 Day Break every Week, the current Cache of 7-8 hours (for my Rigs) is too small.
Yes, 24 hours before the next Outage you can try to fill your Cache for three Days, but because of the
Overload it's all Hoping and Fearing. ;-)

Helli

The limits are meant to soft-start the servers after 3 days of downtime...
I have no problem with that, as long as they are lifted more than 1 day before the next outage. Lifting them at the last minute creates a '24 Hours of Le Mans' race to fill caches and leaves 1000's of downloads stranded when the servers go down. Kinda pointless lifting the limit and then having the work stuck in downloads so it can't be processed.

The idea behind separate CPU/GPU limit counts was to get the GPUs that had run out of work something to do as soon as the outage was over. That is not happening right now for 2 of my own rigs, and I am sure for countless others.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014178
Grant (SSSF)
Volunteer tester
Joined: 19 Aug 99
Posts: 13373
Credit: 208,696,464
RAC: 304
Australia
Message 1014179 - Posted: 10 Jul 2010, 6:31:07 UTC - in response to Message 1014176.  

Yes, now it's better than before. But it's not the meaning of a Cache, IMHO.

It's nothing to do with a cache. It's just so that everyone is able to get some work & to help limit the load on the system.
Grant
Darwin NT
ID: 1014179
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014180 - Posted: 10 Jul 2010, 6:33:00 UTC - in response to Message 1014176.  

...
They had the limits initially set at 5/40/140
5 per CPU
40 per GPU
140 total.
...


Yes, now it's better than before. But that's not the purpose of a Cache, IMHO.
A Cache has to protect you from an Interruption in the Workflow.

Since SETI takes a 3 Day Break every Week, the current Cache of 7-8 hours (for my Rigs) is too small.
Yes, 24 hours before the next Outage you can try to fill your Cache for three Days, but because of the
Overload it's all Hoping and Fearing. ;-)

Helli


If you can get your 7 days to be an honest 7 days... you should have no worries.

I had to ask for 6 to get a solid 3. Let's just hope they can open it up wide in the morning.
Janice
ID: 1014180
MadMaC
Volunteer tester
Joined: 4 Apr 01
Posts: 201
Credit: 47,158,217
RAC: 0
United Kingdom
Message 1014182 - Posted: 10 Jul 2010, 6:40:53 UTC - in response to Message 1013846.  

Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. I'm sure Fred would appreciate some testing, and I presume many who moved huge numbers of VLARs to CPU and now have no GPU tasks might benefit. Precautions like making a backup before trying any program in early development are obviously sensible.
                                                               Joe



No need, I downloaded rescheduler 1.7 from the lunatics forums and that can transfer units (even VLARs) back from the CPU to the GPU...
Even so it still takes about 90 mins on my Fermi to complete a VLAR, but with 270-odd VLARs, every little helps.
My worry is that if they don't remove the limits, it will take me until Wednesday to crunch through the VLARs and then I will be out of work.

I could abort them, but I don't like doing that - it's all science, even if it is slow science..
ID: 1014182
Helli_retiered
Volunteer tester
Joined: 15 Dec 99
Posts: 707
Credit: 108,785,585
RAC: 0
Germany
Message 1014183 - Posted: 10 Jul 2010, 6:41:00 UTC - in response to Message 1014180.  
Last modified: 10 Jul 2010, 6:42:31 UTC


if you can get your 7 days to be an honest 7 days... you should have no worries

I had to ask for 6 to get a solid 3. Lets just hope they can open up wide open in the morning.


Well, my Cache size was already set to 3 1/2 Days (as always). But after the Outage it's now down to Zero.
Now I have to get through the next three Days with this 140 Workunit Cache (7-8 Hours of Work). I hate having to set
my Cache to such a high Value as 7 Days. ;-)

Helli
A loooong time ago: First Credits after SETI@home Restart
ID: 1014183
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51464
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014186 - Posted: 10 Jul 2010, 7:02:15 UTC - in response to Message 1013846.  

Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. I'm sure Fred would appreciate some testing, and I presume many who moved huge numbers of VLARs to CPU and now have no GPU tasks might benefit. Precautions like making a backup before trying any program in early development are obviously sensible.
                                                               Joe

Anybody else test this yet?
I am hoping that the limit may be lifted in the morning and I will not have to resort to transferring VLARs back to my GPUs. But, as they are crunch-only rigs, I could do so and not have to worry about the slowdown caused by VLAR work being done by the GPU.
So, if it works, and things don't get fixed in Boincland, it could be a good tool to have.


"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014186
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014190 - Posted: 10 Jul 2010, 7:09:54 UTC - in response to Message 1014186.  

Hmm, as I read the documentation at http://www.efmer.eu/forum_tt/index.php?topic=428.0 v 0.3 of Fred's new task mover may be able to move some VLARs back to GPU. I'm sure Fred would appreciate some testing, and I presume many who moved huge numbers of VLARs to CPU and now have no GPU tasks might benefit. Precautions like making a backup before trying any program in early development are obviously sensible.
                                                               Joe

Anybody else test this yet?
I am hoping that the limit may be lifted in the morning and I will not have to resort to transferring VLARs back to my GPUs. But, as they are crunch-only rigs, I could do so and not have to worry about the slowdown caused by VLAR work being done by the GPU.
So, if it works, and things don't get fixed in Boincland, it could be a good tool to have.



Just testing for the bottom of the blender at the moment.. I think I can find it if it holds still..
Janice
ID: 1014190
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51464
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014193 - Posted: 10 Jul 2010, 7:16:35 UTC - in response to Message 1014191.  



Just testing for the bottom of the blender at the moment.. I think I can find it if it holds still..

Still looking for that lost shaker of salt?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014193
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1014197 - Posted: 10 Jul 2010, 7:28:46 UTC - in response to Message 1014193.  



Just testing for the bottom of the blender at the moment.. I think I can find it if it holds still..

Still looking for that lost shaker of salt?


salt?? not fer thish
Yo ho ho an.. every bottle in tha cupboard
Janice
ID: 1014197
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51464
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1014202 - Posted: 10 Jul 2010, 7:43:02 UTC - in response to Message 1014197.  



Just testing for the bottom of the blender at the moment.. I think I can find it if it holds still..

Still looking for that lost shaker of salt?


salt?? not fer thish
Yo ho ho an.. every bottle in tha cupboard

LOL...
The 'lost shaker of salt' is a Margaritaville thingy.
But I suppose you knew that.....
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1014202
RottenMutt
Joined: 15 Mar 01
Posts: 1011
Credit: 230,314,058
RAC: 0
United States
Message 1014203 - Posted: 10 Jul 2010, 7:43:53 UTC - in response to Message 1014186.  
Last modified: 10 Jul 2010, 7:46:00 UTC

Anybody else test this yet?
I am hoping that the limit may be lifted in the morning and I will not have to resort to transferring VLARs back to my GPUs. But, as they are crunch-only rigs, I could do so and not have to worry about the slowdown caused by VLAR work being done by the GPU....


Just abort them in this situation. Sorry, wingmen, but it's what the project has forced us to do if you want GPU work; I don't feel guilty doing it anymore.

Unless it is during the 3-day outage, then it may be beneficial...
ID: 1014203


 
©2022 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.