Friday, July 9, server start

zoom3+1=4
Message 1013805 - Posted: 9 Jul 2010, 17:53:49 UTC - in response to Message 1013800.

>> Any news on the limit this week? Still 20 for now?

> No, the limits are:
> 5 per cpu-core - in other words 10 for dual-core, 20 for quad, 40 for 8-way...
> 40 per gpu
> Max 140 per computer.

Any word on when the "limits" will be lifted?

Will it be after downloads subside to 80 Mb/s or lower?

24 hours before shutdown was not sufficient time, as last week proved.

Uploads are flowing well.

Monday I think, maybe.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
RottenMutt
Message 1013844 - Posted: 9 Jul 2010, 19:11:18 UTC - in response to Message 1013800.

> Monday I think, maybe.

Now you're just trying to provoke me.

> ... limits are:
> 5 per cpu-core - in other words 10 for dual-core, 20 for quad, 40 for 8-way...
> 40 per gpu
> Max 140 per computer.

So if you have 28 (28 × 5 = 140) or more logical CPUs, it is possible that you could not get any GPU work.
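
A quick sketch of that arithmetic (a minimal Python sketch; how the scheduler actually splits the 140-task cap between CPU and GPU work is an assumption, following the reading above):

```python
# Quoted limits: 5 tasks per CPU core, 40 per GPU, hard cap of 140 per host.
# How the cap is shared between CPU and GPU work is assumed here, per
# RottenMutt's reading: the CPU quota can eat the whole cap first.
def task_allowance(cpu_cores, gpus, per_core=5, per_gpu=40, host_cap=140):
    cpu_allowed = min(per_core * cpu_cores, host_cap)
    gpu_allowed = min(per_gpu * gpus, host_cap - cpu_allowed)
    return cpu_allowed, gpu_allowed

print(task_allowance(4, 1))   # quad-core + 1 GPU -> (20, 40)
print(task_allowance(28, 3))  # 28 logical CPUs  -> (140, 0): no GPU work at all
```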
rebest
Message 1013847 - Posted: 9 Jul 2010, 19:15:14 UTC

First, the good news. All of my uploads went without a hitch. Very cool.

Now the bad news. It appears the CPU/GPU differentiation does not work at all for the optimized apps. As predicted, I ran out of GPU work yesterday afternoon. I drew a very lucky hand of Astropulse work on Monday so I still have a number of MB VLAR CPU units left to crunch. Even though my GPU is currently sitting idle, I'm still getting the "Message from server: This computer has reached a limit on tasks in progress". This is, well, annoying.

It will be interesting to see when the scheduler sees fit to send me the 40 GPU WUs I'm supposed to have.

Join the PACK!
perryjay
Message 1013854 - Posted: 9 Jul 2010, 19:22:43 UTC - in response to Message 1013847.

> so I still have a number of MB VLAR CPU units left to crunch.

Uhhh, Rebest, how big a number of VLARs? If you have more than 140 WUs on your computer, you won't get any more until you get them down or the Powers That Be up the quota for us.


PROUD MEMBER OF Team Starfire World BOINC
Rabbit&Carrot
Message 1013868 - Posted: 9 Jul 2010, 19:39:33 UTC - in response to Message 1013844.
Last modified: 9 Jul 2010, 19:40:14 UTC

>> Monday I think, maybe.

> Now you're just trying to provoke me.

>> ... limits are:
>> 5 per cpu-core - in other words 10 for dual-core, 20 for quad, 40 for 8-way...
>> 40 per gpu
>> Max 140 per computer.

> So if you have 28 (28 × 5 = 140) or more logical CPUs, it is possible that you could not get any GPU work.

I am afraid that is true.

One of my rigs, with a quad-core CPU and 3 GPUs, has for some reason 700 VLARs, and because of the limit on the total number of WUs I cannot receive any GPU WUs. I should either let my GPUs get some rest until Monday or kill 680 VLARs to get some GPU WUs.
rebest
Message 1013871 - Posted: 9 Jul 2010, 19:41:39 UTC - in response to Message 1013854.
Last modified: 9 Jul 2010, 19:44:06 UTC

>> so I still have a number of MB VLAR CPU units left to crunch.

> Uhhh, Rebest, how big a number of VLARs? If you have more than 140 WUs on your computer, you won't get any more until you get them down or the Powers That Be up the quota for us.

Ah! That's it! Looks like it's going to be a long weekend crunching those $#%^$#&^% VLARs, or I kill them to get back up to full speed. Too bad those are my only choices.

Join the PACK!
JohnDK
Message 1013878 - Posted: 9 Jul 2010, 19:48:31 UTC - in response to Message 1013873.

> I should either let my GPUs get some rest until Monday or kill 680 VLARs to get some GPU WUs.

But you can bet that if you did that, the WUs you'd get would all be VLARs. We don't know yet if what they're sending out now is part of a VLAR storm or not.

I'm now able to download *ONE* GPU WU at a time after I've finished the ones I have. All are VLARs except a few VHARs slipping through.
Geek@Play
Message 1013880 - Posted: 9 Jul 2010, 19:55:26 UTC

I went and inspected the first page of my returned work today. It's all VLAR work. I was interested in how many times the WUs had been issued.

20 WUs were issued to 69 client computers before finally being validated. That's an average of 3.45 client computers for each WU. To be fair, there were a couple that validated with the first 2 issues, but there were at least 4 WUs that were issued to 4, 5, and even 6 computers before finally reaching validation.

I for one will be happy when the killer app is no longer needed.
Boinc....Boinc....Boinc....Boinc....
Gary Charpentier
Message 1013893 - Posted: 9 Jul 2010, 20:13:18 UTC - in response to Message 1013880.

> I went and inspected the first page of my returned work today. It's all VLAR work. I was interested in how many times the WUs had been issued.
>
> 20 WUs were issued to 69 client computers before finally being validated. That's an average of 3.45 client computers for each WU. To be fair, there were a couple that validated with the first 2 issues, but there were at least 4 WUs that were issued to 4, 5, and even 6 computers before finally reaching validation.
>
> I for one will be happy when the killer app is no longer needed.

I was under the impression that the outage and the 99% issuance of VLARs were meant to force the people using VLAR killers to either stop using them or get no work at all, so they would quit out of frustration.

IIRC Frank Drake proved that ET will only be found in a VLAR.

;)

Tony Li
Message 1013896 - Posted: 9 Jul 2010, 20:19:31 UTC

Just FYI: Due to personal circumstances, I'm not able to participate on weekends. As a result, I'm effectively now only getting work two days a week.

So far, this is going ok, but if I run out of work, I'm going to give my cycles to another project...

SciManStev
Message 1013908 - Posted: 9 Jul 2010, 20:31:01 UTC - in response to Message 1013896.

> Just FYI: Due to personal circumstances, I'm not able to participate on weekends. As a result, I'm effectively now only getting work two days a week.
>
> So far, this is going ok, but if I run out of work, I'm going to give my cycles to another project...

No problem! By contributing your cycles, you are helping science in one way or another. Good luck with whatever you choose! :)

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
Bill Walker
Message 1013916 - Posted: 9 Jul 2010, 20:39:55 UTC - in response to Message 1013908.

>> Just FYI: Due to personal circumstances, I'm not able to participate on weekends. As a result, I'm effectively now only getting work two days a week.
>>
>> So far, this is going ok, but if I run out of work, I'm going to give my cycles to another project...

> No problem! By contributing your cycles, you are helping science in one way or another. Good luck with whatever you choose! :)
>
> Steve

I agree with Steve. BOINC will handle a range of workloads very nicely. I have one old machine, only on part time, that does about 2 WUs a week for S@H. It is quite happy to just sit there and crunch S@H, and CPDN, at its own pace. Every little bit helps.

MadMaC
Message 1013918 - Posted: 9 Jul 2010, 20:41:58 UTC - in response to Message 1013908.

>> Just FYI: Due to personal circumstances, I'm not able to participate on weekends. As a result, I'm effectively now only getting work two days a week.
>>
>> So far, this is going ok, but if I run out of work, I'm going to give my cycles to another project...

> No problem! By contributing your cycles, you are helping science in one way or another. Good luck with whatever you choose! :)
>
> Steve

I do MW@home in my spare time, but this is the only space-based project that supports Fermi at the moment, as far as I know.
kittyman
Message 1013922 - Posted: 9 Jul 2010, 20:47:45 UTC

I would like to thank Jeff for establishing communication with the rabble.

Thank you, Jeff.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

Josef W. Segur
Message 1013936 - Posted: 9 Jul 2010, 21:10:25 UTC - in response to Message 1013883.

>> I am starting the servers up.
>>
>> I had to hide both the jobs limit thread and the new outage schedule thread. Queries for those threads were clobbering the boinc database. I will put something more about this up later...

> Slow down the creation/sending of AP units. Those are really bandwidth hogs. At least until the worst rush is over.

I'm not sure I agree with the logic: an AP task has about 23 times the bits of an S@H Enhanced task, but the estimated runtime may have an even higher ratio. For many hosts, one AP WU fulfills the cache setting, so they would stop requesting work.

But counting an AP and an "MB" task the same for the limits mechanism doesn't make sense. Once the server bug that is affecting counting for AP application versions is fixed, it would make sense to have different limits for AP and "MB" applications.

It's probably a moot point for this week anyhow. For next week, it could be implemented fairly simply during the outage by having only the mb_splitters active until the S@H Enhanced "Results ready to send" count grew to about 3/4 of whatever target they had set, then enabling the ap_splitters. The taskid sequence would then have a stretch of maybe 300,000 with no AP tasks. Preceding that would be a mixed section left over from the Tuesday shutdown, so some early requests would get AP, but once that 300,000 stretch was reached, only "MB" tasks would be going out for about 3 hours at peak rate.

I believe the sequence may work out just as unevenly as last week, but the higher limits will mean those sections of the queue where there's very little AP work may stay at peak delivery rather than falling back to 50 Mb/s. From Scarecrow's graphs, it appears AP "Results ready to send" has actually increased since the restart, while "MB" has decreased. That suggests the actual distribution was similar to what I described in the preceding paragraph.

A reduction from 3 Feeder slots for AP v505 to only 2 might be a good change. 48:1 is a better approximation of the 40:1 splitting ratio than the current 32:1. It would make the AP splitters fall behind the mb_splitters when both are working on the same set of "tapes", rather than getting ahead as they do now. But the real advantage is that 96 or 97 "MB" WUs plus 2 AP WUs is about 427 Mbits, and if the Feeder is running a 5-second cycle, the download bandwidth would average 85.4 Mbits/second. If an even enough mix in the "Results ready to send" queue were possible, so that the Feeder could replenish all slots each cycle, that would be close to ideal.
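
A back-of-envelope version of that feeder arithmetic (Python; the per-task sizes are assumptions inferred from the 427 Mbit total and the ~23:1 bit ratio above, roughly 3.1 Mbits per "MB" task and 64 Mbits per AP task):

```python
# Rough check of the proposed feeder mix. Task sizes are assumed values
# back-figured from the numbers in the post, not official figures.
MB_TASK_MBITS = 3.1    # ~ one S@H Enhanced ("MB") task
AP_TASK_MBITS = 64.0   # ~ one Astropulse task, about 23x the bits

mb_slots = 96.5        # "96 or 97" MB WUs per feeder cycle
ap_slots = 2           # the proposed AP v505 slot count

payload_mbits = mb_slots * MB_TASK_MBITS + ap_slots * AP_TASK_MBITS
feeder_cycle_s = 5     # the 5-second Feeder cycle from the post

print(round(payload_mbits))                      # ~427 Mbits per cycle
print(round(payload_mbits / feeder_cycle_s, 1))  # ~85.4 Mbits/s average
```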
                                                               Joe
Cosmic_Ocean
Message 1014058 - Posted: 10 Jul 2010, 1:24:31 UTC
Last modified: 10 Jul 2010, 1:24:42 UTC

I observed that the limit was lifted (or increased) shortly after 6 pm PDT.
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
arkayn
Message 1014067 - Posted: 10 Jul 2010, 1:46:03 UTC - in response to Message 1014058.

> I observed that the limit was lifted (or increased) shortly after 6 pm PDT.

Looks like they upped it to 6 per logical CPU core now.

zoom3+1=4
Message 1014069 - Posted: 10 Jul 2010, 1:53:33 UTC - in response to Message 1014067.

>> I observed that the limit was lifted (or increased) shortly after 6 pm PDT.

> Looks like they upped it to 6 per logical CPU core now.

Oh wow (rolls eyes). Just for my CPU, I'd need 19.2 WUs per logical CPU core, as my CPU crunches a WU in 1:15:00, plus or minus a couple of minutes.
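
Reading that as one day's work per core, the 19.2 falls straight out of the quoted crunch time (a minimal sketch):

```python
# One WU per 1:15:00 of crunching, as quoted above; a core therefore
# chews through 24 / 1.25 = 19.2 WUs per day, far above a 6-per-core cap.
hours_per_wu = 1.25
print(24 / hours_per_wu)   # 19.2 WUs per logical core per day
```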
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
Cosmic_Ocean
Message 1014091 - Posted: 10 Jul 2010, 2:28:53 UTC

I was at the previous limit (5 per core, 4 cores), and I managed to pick up 12 more, so I think it's more like 8 per CPU core.
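
The inference, spelled out (a small sketch assuming a quad-core at the old 5-per-core limit):

```python
# 4 cores at the old limit of 5 per core = 20 tasks in progress;
# picking up 12 more implies 32, i.e. about 8 per core on a quad.
cores = 4
in_progress = 5 * cores + 12   # 32 tasks after the increase
print(in_progress / cores)     # 8.0 -> ~8 per CPU core
```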
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving-up)
JohnDK
Message 1014095 - Posted: 10 Jul 2010, 2:33:31 UTC

And the DL servers are really struggling now; it took 22+ minutes to download one WU. One more is trying to download now, with the same result plus a project backoff.