Posts by Odan


1) Message boards : Number crunching : Thank you (Message 1037407)
Posted 1422 days ago by Profile Odan
I have officially added S@H to my List of Things to Worry About. It comes in at about 1,500,000,000 on the list right now. I'll get to it in a few centuries, assuming nothing new comes up in that time.



I think you need to get organised. If you freeze your list and strictly schedule 1 second of worry for each topic, you can get to worrying about S@H in only 47 years, 6 months, 18 hours and 38 minutes, approximately. You wouldn't have time for sleep, but sleeping is for wimps anyway!
2) Message boards : Number crunching : Quick fundraiser for SETI's new server (Message 1032341)
Posted 1443 days ago by Profile Odan
I'll throw my mighty support behind "Oscar" as well.

And if you need a "sci-fi" tie-in, try this: Oscar J. Friend,
sci-fi author from the '40s to the '60s.

http://en.wikipedia.org/wiki/Oscar_J._Friend

Or maybe because servers can be so cranky.....Oscar (the Grouch)
might work as well


Also Oscar Goldman from "The Six Million Dollar Man"
3) Message boards : Number crunching : The Outage has begun (Message 1024570)
Posted 1470 days ago by Profile Odan
Do you sit there all day waiting for it to start or stop? :)
4) Message boards : Number crunching : Friday, July 9, server start (Message 1014280)
Posted 1504 days ago by Profile Odan
I am starting the servers up.

I had to hide both the jobs limit thread and the new outage schedule thread. Queries for those threads were clobbering the boinc database. I will put something more about this up later...

Slow down the creation/sending of AP units. Those are really bandwidth hogs. At least until the worst rush is over.

I'm not sure I agree with the logic, an AP task has about 23 times the bits of an S@H Enhanced task, but the estimated runtime may have a higher ratio. For many hosts, one AP WU fulfills the cache setting so they would stop requesting work. <snip>

Joe


The runtime is longer, but not 23 times longer; more like 12 times. That means they take somewhere in the region of twice the download bandwidth per unit of crunch time. I see that as leaning towards the "bandwidth hog" description, but I wouldn't go quite so far as to call them that :)
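A quick back-of-envelope check of that ratio, as a minimal Python sketch (the 23x and 12x figures are the rough estimates quoted above, not measurements):

```python
# Relative download bandwidth of AP vs. MB tasks, per unit crunch time.
# Both ratios below are the rough estimates quoted in this thread.

ap_size_ratio = 23.0      # AP download size relative to an Enhanced (MB) task
ap_runtime_ratio = 12.0   # AP crunch time relative to an MB task

# Bandwidth consumed per hour of crunching, relative to an MB task:
ratio = ap_size_ratio / ap_runtime_ratio
print(f"AP uses ~{ratio:.1f}x the bandwidth per unit of crunch time")  # ~1.9x
```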

When AP was in much readier supply last year (in fact, units were produced to fully satisfy demand & we had a reservoir ready to send), we did run for quite a while with the download bandwidth maxed out. IIRC we were running with approx 400,000 AP in progress as the stable demand. Most of this year we have averaged about 100,000 in progress, which is well below what demand would be if distribution were less restricted.
5) Message boards : Number crunching : OK......let's wipe all the s@@t off the boards....... (Message 1014008)
Posted 1505 days ago by Profile Odan
Besame Mucho......

I bitch much. I think that is the translation.....LOL.


With a Spanish vocabulary like that you had better not stray south of the border, mi gatito :)
6) Message boards : Number crunching : OK......let's wipe all the s@@t off the boards....... (Message 1013987)
Posted 1505 days ago by Profile Odan
OH..........
And in response to all those who have been posting about leaving.....

Spare me. You are not doing anybody any good with your whining.

BTW...I have seen whining misspelled so many times.......LOL.

Hit The Road, Jack.


And the colored gurlz sing.......anybody remember that quote?


I'll take a walk on the wild side with you, Mark. Let's get committed together :)

Respect, Mega-crunchin' Kitty Man
Jonathan

(doo dedoo dedoo......)
7) Message boards : Number crunching : Is this fair - discuss! (Message 1013418)
Posted 1506 days ago by Profile Odan
Well said, Gary.
8) Message boards : Number crunching : Observation of Effect (Message 1013415)
Posted 1506 days ago by Profile Odan
On this system, which is a virtual machine running OpenSolaris, I have 2 WUs in progress (but one has finished and is trying to upload) and 2 awaiting validation, notwithstanding the fact that this is a slow machine. On the real machine, which is a Linux box, I have one in progress and 4 awaiting validation, but this was true even before the outage. People load too many WUs, in my humble opinion.
Tullio


I have a comparatively slow machine by modern standards, certainly not a super cruncher: a Q6600 & a GTX8800, mildly overclocked. It gets through around 6-10 MB WUs an hour depending on type, which works out to roughly 150-240 WUs per day, so to survive a 3+ day outage I need around 500-700 WUs. Fewer if I snag some AP or the WUs are longer, but I still need hundreds of units.
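The arithmetic, as a minimal sketch (the throughput range & outage length are just this host's estimates from above):

```python
# Cache needed to ride out an outage, for the host described above.

outage_days = 3.5                  # a "3+ day" outage, padded slightly

for wu_per_hour in (6, 10):        # this host's MB throughput range
    needed = wu_per_hour * 24 * outage_days
    print(f"{wu_per_hour} WU/h -> ~{needed:.0f} WUs for the outage")

# Prints ~504 and ~840, the ballpark behind the 500-700 figure.
```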

Are you saying I should only have a cache of 10 or so? :)

9) Message boards : Number crunching : SETI loves me. (Message 1013148)
Posted 1507 days ago by Profile Odan
Wasn't that the idea?

Not with a 10 day cache. Or haven't you read Jeff's post either? Seems like a lot of people haven't done so, or they all thought it wasn't meant for their eyes.

While in the same thread I pointed to, people are even saying they're increasing from whatever cache they had to a 10 day cache in anticipation of the limit being lifted. Uh-huh, good way to get a limit increase next week instead of a limit lift.

Before you think I say this because I didn't get any: I did... Took me 5 hours to get 31 tasks in on the i3. Blisteringly high download speeds of 0.24KB/sec and 0.03KB/sec on downloads. I liked those. Something new for my 25Mbit connection to deal with. :)

Those 31 probably wouldn't get me through the outage, were it not that I also have 28 Einstein, 2 CPDN and at least 50 PrimeGrid. I'll weather it.

No, I was posting this because it's always nice to see everyone follow the admins' request. But y'all seem to think they don't listen to your complaints, so why should you listen to them, right? :)


Hi Jord,

I feel your frustration. I did reduce my cache, BTW, but I also fiddled with some DCFs to get what I expect will just about tide me over until the servers come back online. All in all, if I had just left my cache at 8 days I would have ended up with about the same 4-ish days of work.
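For anyone wondering why fiddling with DCFs changes the fetch, here is a minimal sketch of the mechanism (the cache size & task length are invented for illustration; in BOINC the per-project duration correction factor scales each task's runtime estimate, and the client fetches work until the summed estimates cover the cache setting):

```python
# Why lowering the DCF makes the client fetch more work.

def tasks_to_fill_cache(cache_days, base_est_hours, dcf):
    """Tasks the client thinks it needs to fill its cache."""
    corrected_est = base_est_hours * dcf   # DCF scales the runtime estimate
    return int(cache_days * 24 / corrected_est)

print(tasks_to_fill_cache(4, 1.0, dcf=1.0))  # 96 tasks
print(tasks_to_fill_cache(4, 1.0, dcf=0.5))  # 192: halving DCF doubles the fetch
```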

I do look forward to the time when we have more accurate cache filling. I know this is being attempted & I wait patiently for the improvement, realising that I am effectively taking part in RC testing after beta testing was unfortunately too small to be conclusive.

One thing that occurred to me while I was waiting for the 20 WU limit to be removed was that it was somewhat counterproductive.

During the approx 3.5 days of restricted issue, the number of MB in progress gradually fell. I believe that this is at least partly due to the draining of the remaining larger caches of the fastest crunchers among us, who can actually crunch more than 20 units at a time(!), as well as the slightly more usual among us who get through 20 WUs quite quickly.

AP exhibited an interesting ratcheting-up pattern that seems to be related to AP units being available much more freely for something like 1 hour in every 4 or so (don't shoot me here, I'm really not quite sure what happened, but it looks this way to me :) )

Anyway, my point is that the 20 WU limit was a very good idea for the initial restart; it allowed almost everyone to get some form of cache & to get crunching straight away without us being able to max out the downloads. Unfortunately I think it was too severe for too long if S@H still wants to be available for people to crunch continually. (I'm not sure whether that is what the team wants; I haven't seen any statement either way. I know that nobody guarantees 24/7 availability, I just wonder whether Berkeley wants us all to continue crunching at about the same rate or wants a reduction.)

Right, back to the point :) If you look at the Cricket graphs for the week you can see that for those 3+ days we were only downloading WUs at something like 45% of practical capacity for most of the time. Remember that during that time the reservoir of work in progress was falling steadily; individuals saw their previously still-large caches draining steadily and panicked, increasing their cache sizes further in the hope of snagging WUs when the flood gates opened. For these people we effectively had a 6 day cache drain, so when the limit was removed for 26 hours or so there was a lot of cache to fill up.

If some of the 55% of available bandwidth could have been freed up sooner, this would have reduced the cache drain and given the caches longer to fill up. It would also have helped a little to calm the urge in some people to increase cache sizes "just in case". A toy model of the effect is sketched below.
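A minimal sketch of that drain-then-flood dynamic (all rates are invented & normalised; only the ~45% utilisation and ~3.5 day duration come from the graphs discussed above):

```python
# Toy model: while restricted issue holds downloads to ~45% of capacity,
# hosts crunch faster than they refill, so pent-up demand accumulates.

crunch_rate = 1.0          # work crunched per day (normalised)
refill_restricted = 0.45   # refill rate while the limit is in place

pent_up = 0.0
for day in range(1, 5):    # ~3.5 days of restricted issue, rounded up
    pent_up += crunch_rate - refill_restricted
    print(f"day {day}: ~{pent_up:.2f} pipe-days of backlog to download later")

# All of that backlog then has to squeeze through the ~26 hour window
# after the limit is removed, on top of normal demand.
```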

Of course, if the splitters could not have kept up it wouldn't have mattered, but we did start this latest outage with a reservoir of WUs on the servers that could not be sent; I also noticed that not all the splitters were running all the time, so it appeared there was some capacity going unused.

Anyway, I hope that we all have a calm outage :) and that everything comes up smoothly on Friday. I hope there is somewhat freer availability of WUs.

Happy crunching all!
10) Message boards : Number crunching : Panic Mode On (35) Server problems (Message 1012416)
Posted 1509 days ago by Profile Odan
Anybody have a clue what the bandwidth cycles on the Cricket Graph are all about? I don't think I have ever seen such a well defined pattern before....

There appears to be a correlation with when the splitters are boosting the "Results ready to send" and when they're idle. Compare Scarecrow's graphs, though it's hard to really match the time scales. It's a case where sampling the server status once an hour isn't quite enough to pin down the relationship, but my guess is the high-rate download bursts occur just after the splitters have stopped for a while. Or it could just be coincidence...
Joe

And "ready to send" doesn't drop to zero when splitters are idle, but bandwidth load drops hugely nevertheless.


My suspicions-
The Ready to Send buffer probably drops quite rapidly when bandwidth is maxed out, then continues to drop gradually once the network traffic drops off, until such time as the splitters fire up & top up the buffer; the graphs aren't updated frequently enough to see accurately what's happening.


Extremely wild supposition-
The traffic bursts may be related to odd work request behaviour.
I noticed one or two threads where people commented about the client not requesting new work even though they had fewer than 20 in their cache. After a while it does request work, & that's when you get those bursts in network traffic: lots of clients running their buffers down below 20 work units before requesting more, resulting in short bursts of network traffic.


EDIT- just had a look at the Astropulse graphs. They show a full Ready to Send buffer, with ups & downs similar to MB, but the slope of the waveform is different.
MB- buffer fills quickly, drains slowly.
AP- buffer fills slowly, but drains quickly.

Looks like the spikes could be very much AP related. Just odd that they recur at such a consistent frequency.
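Those two waveform shapes fall out of a simple leaky-bucket picture; here is a minimal sketch (the fill & drain rates are invented purely to reproduce the slopes, not taken from the real servers):

```python
# Toy leaky-bucket model of the Ready to Send buffer shapes above.

def sawtooth(fill_rate, drain_rate, cap=100.0, steps=60):
    level, filling, trace = 0.0, True, []
    for _ in range(steps):
        level += fill_rate if filling else -drain_rate
        level = max(0.0, min(cap, level))
        if level >= cap: filling = False   # buffer full: splitters go idle
        if level <= 0.0: filling = True    # buffer drained: splitters restart
        trace.append(level)
    return trace

mb = sawtooth(fill_rate=10, drain_rate=2)   # MB: fills quickly, drains slowly
ap = sawtooth(fill_rate=2, drain_rate=10)   # AP: fills slowly, drains in bursts
```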


The spike interval & duration I can't explain any better than above, but if you compare the Cricket graph with Scarecrow's "AP in progress" you can see a beautiful correlation: AP in progress ratchets up every time a data burst occurs. Very pretty!

11) Message boards : Number crunching : Color Me GONE (Message 1011623)
Posted 1510 days ago by Profile Odan
Let's see here..............

<much snarkiness deleted>

Maybe I'll come back when this project is run by someone who is NOT insane!

I'm gone too, but not in the way Geek@Play threatens. I've been crunching along just fine for the past 2 1/2 months, just not reading the drivel here in the forums.

SETI@Home is working. It's not working well, and in fact it's showing how important fault-resilience can be. I've not run out of work. I'd express my ideas on how to make it run better, but clearly this isn't the place for it.

There should be a line between complaints and observations on the one hand, and personal insults on the other (whether hurled at those running the project or between fellow forum members), and in a polite society that line should never be crossed.

There should be a place in these forums for civil discourse.

But no. It's all childish complaints and personal attacks.

I think it's time to revive and revise Godwin's Law, because in my opinion, once you stop talking about the problems and start talking about the people, you might as well compare them to Hitler.

It's rude. It's indecent. It's obscene. It's incredibly bad Karma. Were you raised by wolves?

Where are your manners? Do you always kick people when they're down?


I see that you have actually proven Godwin's Law - was that your intention? :)
You also seem dangerously close to taking part in the rudeness you decry. I do agree with you that personal attacks in these forums are rude & could even be interpreted by some as cowardly and bullying. Come on guys, I know this is a very sore point for many of us, but let's be civilised, or at least civil, about it.
12) Message boards : Number crunching : This computer has reached a limit on tasks in progress?? (Message 1011621)
Posted 1510 days ago by Profile Odan
Sooooooo, anyone notice the 200k MB workunits and 10k AP workunits sitting on the server!

hummmm

time to let the work flow.

And the numbers have risen lately; have many mega crunchers left, or what??


That is possible, but as far as I can see the reason is mainly that the data out from Berkeley has been nurdling along at approx 50% of max since the limit of 20 tasks per host was introduced. This means that the mega crunchers cannot run at 100% and that even small to middling crunchers cannot build up much of a cache.

It means that even with only 2 AP splitters and 4 MB splitters actually splitting (despite what the green status bars would misleadingly tell us), splitting is outstripping the throttled-back pipe.
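For a rough sense of scale, a minimal sketch (both the 100 Mbit/s link capacity and the ~366 KB MB workunit size are my assumptions for illustration, not figures from this thread):

```python
# How many MB workunits a half-used pipe can push per day.

link_mbit = 100.0       # assumed download link capacity
utilisation = 0.5       # "approx 50% of max"
wu_kb = 366.0           # assumed size of one MB workunit

bytes_per_day = link_mbit / 8 * 1e6 * utilisation * 86400
wu_per_day = bytes_per_day / (wu_kb * 1e3)
print(f"~{wu_per_day/1e6:.1f} million MB WUs/day")   # ~1.5 million
```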
13) Message boards : Number crunching : This computer has reached a limit on tasks in progress?? (Message 1011370)
Posted 1511 days ago by Profile Odan
I'm getting limited to 20 WU still. I've had to resort to re-scheduling VHARs to my GPU to keep it busy - yeuch!

And it may work for a while if you sit at your computer 24/7....


HEHEHEHE I have the tool, it works on its own just like magic!!!!
Sorry. Going over the top there a bit :)
14) Message boards : Number crunching : 5PM and somebody flipped the switch (Message 1011262)
Posted 1511 days ago by Profile Odan
Looks that way to me. One machine got way more than 20 before the nasty, but now I'm limited on my GPU cruncher to 20 again.
15) Message boards : Number crunching : This computer has reached a limit on tasks in progress?? (Message 1011260)
Posted 1511 days ago by Profile Odan
I'm getting limited to 20 WU still. I've had to resort to re-scheduling VHARs to my GPU to keep it busy - yeuch!
16) Message boards : Number crunching : Download servers just came back up.... (Message 1010831)
Posted 1512 days ago by Profile Odan
I've been graced with nine 603s that I've not had for a while - for yonks it's been all CUDA on my little 8800 GTS, and a lot of those flunked out with errors. Now I've got another 12 603s trying to drop in. I'm happy :)


Have you tried running the re-branding tool (version 1.9) to keep both CPU & GPU happy?

Works for me!
17) Message boards : Number crunching : Upload server (Message 998493)
Posted 1551 days ago by Profile Odan
With a bit of luck everything will be back to normal just in time for the weekly maintenance outage :)
18) Message boards : Number crunching : Switching to CP.net - wish me luck! (Message 992081)
Posted 1579 days ago by Profile Odan
Is it just me, or do others find the idea of crunching farms working on climate prediction somewhere between amusing & puzzling? :) {:

Ironic, perhaps? Not guilty (of farming) here - just the li'l old family daily driver, albeit somewhat souped up. But it raises a smile for me every now and then...

F.


Keep smiling, Fred :)
19) Message boards : Number crunching : Switching to CP.net - wish me luck! (Message 992034)
Posted 1579 days ago by Profile Odan
Is it just me, or do others find the idea of crunching farms working on climate prediction somewhere between amusing & puzzling? :) {:
20) Message boards : Number crunching : Watts 'n watts (Message 987526)
Posted 1597 days ago by Profile Odan

"switch mode power supplies (SMPS) range in efficiencies about 75% for a "bad one" to 95% for a good one":
Yes - it is like Ideal gas (theoretical approximation) vs Real gas

The Wikipedia article says:
"so the converters can theoretically operate with 100% efficiency"

Of course there will be losses, as you pointed out, since the elements are not ideal and have resistance.
And an efficiency of 80% (using an SMPS) is much better than if a linear regulator were used for the GPU discussed above:

1.2 V / 12 V = 0.1 = 10% efficiency (if a linear regulator were used)
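To put watts on that comparison, a quick sketch (the 50 A load current is an assumption for illustration; only the 1.2 V and 12 V figures come from the post above):

```python
# Heat dissipated: linear regulator vs. buck SMPS for a 1.2 V load
# fed from a 12 V rail. The 50 A load current is an assumed figure.

v_in, v_out, i_out = 12.0, 1.2, 50.0
p_load = v_out * i_out                   # 60 W delivered to the load

# Linear regulator: drops (v_in - v_out) at the full load current.
p_linear_loss = (v_in - v_out) * i_out   # 540 W burned as heat (10% efficient)

# Buck SMPS at typical efficiencies:
for eff in (0.80, 0.95):
    loss = p_load / eff - p_load
    print(f"{eff:.0%} buck: ~{loss:.0f} W lost vs {p_linear_loss:.0f} W linear")
```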


About 15 years ago I had a mainboard (an unstable Jamicon, used with an AMD K5) with a linear regulator + two small 3x2x1 cm heatsinks to power the CPU at 3.52 V from the 5 V line (only (5-3.52)/5 = 0.3 = 30% lower output voltage), and the heatsinks became very hot.


But in their description of the Buck regulator they say up to 95% efficient.

The buck regulator is the simplest step-down SMPS but is, as far as I am aware, only used for battery chargers.

Buck regulators are very common in modern electronics. In our case, we run a 5 volt backplane to remain compatible with boards that run the 68000 and 68030 processors. In the last processor selection we went to the Motorola ColdFire processor, which was not available in a 5 volt version. We didn't want to change the backplane, because we will always have cases where we need to mix 5 volt boards with the newer processor, so instead we decided to provide a local 3 volt power source on the board for the processor and its support logic that required the lower voltage. We selected and constructed two designs on the board, because we weren't sure how much power we would require, but the one we populated used this part. This is only one of many parts available for this use.


Hey, yes, I remember those beasts. They work very well. We tend to use higher switching frequencies these days so we can shrink the footprint & use ceramic caps, for size and reliability. An example here: a bit smaller current, but a nice part.
We use several of these on a single card to produce different voltages to feed various circuits, all powered from a 24V bus. A handy way to do it: cheap, small & efficient, as well as reliable.

