Message boards :
Number crunching :
I QUIT!
Author | Message |
---|---|
harley2paws Send message Joined: 13 Feb 00 Posts: 10 Credit: 62,581,349 RAC: 0 |
SETI should do a couple of things until they get a bigger pipe. First: NO NEW USERS; they can't handle the ones they have. Second: make the WUs take longer, either by making them bigger or by running more intense analysis. |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
Let's face it, would SETI be so much fun if absolutely nothing ever went wrong? The challenge is probably keeping enough problems around to keep the zealots interested, without losing the more impatient volunteers amongst us. I find myself sort of concurring with the first comment. Increasing expectations without improving the fundamentals is suicide; if not today, then soon. Longer WUs would not help if the number of users returning results also increases. The choking and gagging at Berkeley would persist, albeit with an even larger latency/longer duration. Perhaps it would be good if a "true" computer scientist/engineer could model the system and come up with an appropriate target number of users for the given hardware. This model could be consulted as the project contemplates significant changes, like CUDA. Not having such a tool, I bet that the sysadmins feel at times like they are sliding butt-first down a steep gravelly ravine on Mt. Hood with only a pair of silken underwear on. |
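For what it's worth, the kind of model PhonAcq asks for could start as a one-line queueing estimate. A minimal Python sketch, treating an upload server as an M/M/1 queue; every rate and function name below is a made-up placeholder for illustration, not a real SETI@home figure:

```python
# Rough capacity estimate for an upload server modeled as an M/M/1 queue.
# All numbers are hypothetical placeholders, not measured SETI@home values.

def max_supported_users(service_rate, results_per_user_per_hour, target_utilization=0.8):
    """Largest user count that keeps server utilization at or below the target."""
    per_user_rate = results_per_user_per_hour / 3600.0  # arrivals/sec per user
    return int(target_utilization * service_rate / per_user_rate)

def mean_wait_seconds(service_rate, arrival_rate):
    """Mean time in system for an M/M/1 queue (requires arrival_rate < service_rate)."""
    assert arrival_rate < service_rate, "queue is unstable"
    return 1.0 / (service_rate - arrival_rate)

if __name__ == "__main__":
    service_rate = 200.0             # uploads/sec the server can complete (assumed)
    results_per_user_per_hour = 2.0  # per-host return rate (assumed)
    print(max_supported_users(service_rate, results_per_user_per_hour))  # 288000
```

The point of even a crude model like this is the shape of the curve: as the arrival rate approaches the service rate, the mean wait blows up toward infinity, which is exactly the "choking and gagging" behavior described above.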
Send message Joined: 19 Jan 03 Posts: 205 Credit: 1,248,845 RAC: 0 |
How about a forced 'trimming' of the fat? Obsolescence is part of technology growth; without it, well... you could imagine. I realize that this is a 'volunteer' program, but maybe it's time for the folks at SETI@home to tighten up requirements a bit in terms of hardware to run the project - ON THE USER END. Why not make SSE3 capability a MINIMUM requirement to join as a donor of CPU cycles? There are a lot of old@ss systems kicking around the project that it could probably do without. How many sub-3 GHz Pentium 4 PCs are there wasting bandwidth on the network? Considering that a half-decent CUDA card RACs about 5,000+ on its own, how many full PCs can that single card replace in terms of validated work? Technology has no room for sentimentalism... We strive to move forward. My 2 cents. Allan I am TCP JESUS...The Carpenter Phenom Jesus....and HAMMERING is what I do best! formerly known as...MC Hammer. |
Matthew S. McCleary Send message Joined: 9 Sep 99 Posts: 121 Credit: 2,288,242 RAC: 0 |
And round and round we go once again... |
Send message Joined: 9 Jun 99 Posts: 15184 Credit: 4,362,181 RAC: 3 |
I realize that this is a 'volunteer' program, but maybe it's time for the folks at Seti@Home to tighten up requirements a bit in terms of hardware to run the project - ON THE USER END. Not everyone has the money to update to the latest and greatest the second it is available. Besides, the whole thought behind BOINC and SETI is to use the left-over CPU cycles during normal use of the PC; it isn't demanding you run 24/7/365 on as many CPUs as you can cram into a case, with as large a cache as your hard drive is big. All the rest of your solutions are so preposterous, I won't even address them. Dream on. |
Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
Why not make SSE3 capabilities a MINIMUM Requirement to join as a donor of CPU cycles ? ROFL, that would just put about three-quarters of Lunatics development out of commission, and several main developers entirely. The main repository, leading a very useful life as a 3.2 GHz P4, scrapes through. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
I realize that this is a 'volunteer' program, but maybe it's time for the folks at Seti@Home to tighten up requirements a bit in terms of hardware to run the project - ON THE USER END. ... and some of us have perfectly functional workstations whose excess computing we'll give, but we won't buy hardware just for SETI. That isn't to say we "won't invest money" in SETI. I've simply donated money directly to the project -- and without strings. If we all sent $5 for each active cruncher we have, SETI could easily pay for the gigabit upgrade, better servers and more staff. I don't believe that SETI (or any other BOINC project) should play favorites, based on any criteria. |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
My point below might be implemented...
* So find a number of users the current hardware/software can currently support. Pull the number out of the cosmos if you have to, assuming there are no computer scientists around to do a better job of estimating.
* Then only allow more users to join if the active user base falls below, say, 80% of this target number.
* Don't throw anybody off; there's no need to. They will die of their own obsolescence, due to the cost of the new cap-and-tax policies from DC, or due to project dilution.
* If the system gets improved and a bottleneck is relaxed, then increase the target number.
* If the software gets improved (like CUDA) and a new bottleneck occurs, then don't do anything but wait for the active user base to drop down to what the servers and database can handle, or for new hardware to be implemented.
Under NO circumstances ask for more volunteers when the system is literally held together with baling wire and spit; a reputation is a hard thing to lose. When the time comes and more people are needed, I would love to see a new user solicitation indicating, say, "Only 1000 new user slots available to the public". Elitism works on Madison Avenue, so why not SETI? |
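The admission policy described above is only a few lines of logic. A minimal sketch follows; the class name, threshold, and user counts are all illustrative, not part of any real BOINC scheduler:

```python
# Sketch of the admission policy above: hold a target user count and only
# open registration when the active base falls below a threshold.
# All names and numbers are hypothetical, for illustration only.

class AdmissionPolicy:
    def __init__(self, target_users, reopen_fraction=0.8):
        self.target_users = target_users
        self.reopen_fraction = reopen_fraction

    def registration_open(self, active_users):
        """New users may join only while the active base is below the threshold."""
        return active_users < self.reopen_fraction * self.target_users

    def raise_target(self, new_target):
        """Called after a bottleneck is relaxed (e.g. a bandwidth upgrade)."""
        self.target_users = max(self.target_users, new_target)

policy = AdmissionPolicy(target_users=100_000)
print(policy.registration_open(85_000))  # False: 85,000 is above the 80,000 threshold
print(policy.registration_open(79_000))  # True: below the 80% threshold
```

Note the hysteresis in the 80% reopen threshold: registration does not flap open and closed around the exact target, which matches the "wait for the active user base to drop down" step in the list above.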
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
My point below might be implemented... I think there is a technical solution, with the existing hardware, that would be more friendly. ... but you've read my position before, as I have read yours. |
Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
Haven't we been over all this many times before? Are you suggesting that we can clear up the traffic jams by getting rid of a small percentage of users who upload or download maybe once or twice a week? [smartass]Maybe we should allow one free WU per day, and then charge for extras. That way we could fund all sorts of nifty improvements. If you can afford multiple GPUs you can pay for all the WUs![/smartass] But seriously... One of the functions of SETI and BOINC is to develop the concept of distributed processing. That includes finding out what the real population of volunteer CPUs looks like and is capable of. Forcing that population to change sort of screws that up. A separate part of the experiment might include encouraging the volunteer population to "self edit" itself. Hey, maybe that is what's happening here! The outages have been deliberately introduced, to see who can put up with it and who leaves! They are so sneaky... Edit to add another thought about CUDA: So we get rid of the old PCs and go full CUDA. I suspect that would clear things up for a few weeks, maybe months. By then the highly paid techno-elite will have choked the system with more CUDA cards. How do we prune then? Draw straws? Restrict the project to diamond-plated GPUs? The point is that the system has to adapt to the users, not the other way around. Adapting to the system is easy; all it takes is time and money. |
Send message Joined: 30 May 03 Posts: 871 Credit: 28,092,319 RAC: 0 |
Somebody does need to figure out precisely how many connections the upload/download servers can each handle reliably and figure out a way to keep connection requests near or below that level. Yesterday evening I had five uploads in queue and watched them for hours trying to get through. Several times 2 or 3 of them would get to 100% and then their connections would fail. Hours later they would do the same thing again. Providing solid connections by itself could go a long way towards smoothing out the recent see-saw battles. Martin |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
Yesterday evening I had five uploads in queue and watched them for hours trying to get through. Several times 2 or 3 of them would get to 100% and then their connections would fail. ... and here is what happens on one of those: BOINC (through the TCP/IP stack) sends a "SYN" (I'd like to connect) and the upload server replies "SYN+ACK" and your machine sends an "ACK" -- that completes the connection (this is standard TCP). Then BOINC starts feeding data to the TCP/IP stack, which is responsible for sending it on to Berkeley, getting back the "ACK" packets and keeping work flowing. The fact that it got to 100% just means that BOINC handed the last byte of data to the IP stack. When there is heavy congestion, the whole result could be in RAM on your machine, waiting for the "ACK" from the first set of packets -- and the first set might not ever get through. That's why you see the nice jump to 100% -- as far as the BOINC application is concerned, the whole file has been "sent" but the first packets might not ever reach Berkeley. There are a number of ways to cut down the number of connections at any given moment. One is to jettison users -- fewer users, fewer connections. The other is to simply tell all the BOINC clients to retry less often. Fewer connection attempts means fewer failed connections, less traffic, more throughput. |
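The "jump to 100%" behavior described above can be demonstrated in a few lines: a successful send() only means the local TCP stack accepted the bytes into its buffers, not that the peer has received anything. A minimal, self-contained Python sketch, using a local socketpair to stand in for the BOINC client and the upload server:

```python
# Demonstration: send() returning means "handed to the TCP stack",
# not "delivered to the server".
import socket

# Two connected endpoints in one process (stand-ins for a BOINC client
# and Berkeley's upload server).
client, server = socket.socketpair()
payload = b"x" * 1024

sent = client.send(payload)  # returns at once: data is now in kernel buffers
print(sent)                  # 1024 -- "100%" from the sender's point of view

# Nothing has been read yet; the bytes sit in the stack until the peer drains them.
received = server.recv(4096)
print(len(received))         # 1024 only after the receiver actually reads

client.close()
server.close()
```

Under real congestion the gap between those two steps is exactly the window the post describes: the application has "finished" the upload, while the first packets may still be waiting for their ACKs, or may never arrive at all.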
Matthew S. McCleary Send message Joined: 9 Sep 99 Posts: 121 Credit: 2,288,242 RAC: 0 |
I may be confused, but it seems to me that the high-volume (i.e., fast) workstations are the primary cause of the recent buckling-under-load. According to the SETI@home people themselves, CUDA has doubled the workload since it was implemented. CUDA workstations are consuming more and more workunits all the time. Sure, there may be lots of Pentium IIIs and Pentium 4s out there (I have a couple) but in all honesty, an older system with no CUDA is not going to chew through workunits very quickly. What, one, maybe two per day, and that's for a fairly fast P4. It's the CUDA systems that chew through a workunit in 12 minutes, combined with multi-day caches, that are causing the upload and download gluts -- not these older systems that keep crunching along slowly but surely. In short, I don't think restricting SETI@home to only those who have the latest, greatest hardware is going to help much. And in many cases it's going to make the problem worse. |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
In short, I don't think restricting SETI@home to only those who have the latest, greatest hardware is going to help much. And in many cases it's going to make the problem worse. I think the idea is: if the "fastest" 10% are doing 90% of the work, then jettisoning all but the fastest 10% will reduce the workload by 10%. |
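The arithmetic in that point checks out with toy numbers (purely illustrative, not measured host data): if a skewed distribution puts 90% of the throughput on the fastest 10% of hosts, culling everyone else removes only 10% of the work.

```python
# Toy check of the 10%/90% arithmetic, using exact fractions
# so the shares come out clean. The host counts and rates are invented.
from fractions import Fraction

fast_total = 10 * Fraction(9)     # fastest 10% of 100 hosts, 9 units/day each
slow_total = 90 * Fraction(1, 9)  # remaining 90% of hosts, 1/9 unit/day each
total = fast_total + slow_total

print(fast_total / total)  # 9/10 -- the fast 10% do 90% of the work
print(slow_total / total)  # 1/10 -- culling the slow 90% saves only 10% of the load
```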
Alinator Send message Joined: 19 Apr 05 Posts: 4178 Credit: 4,647,982 RAC: 0 |
Why not make SSE3 capabilities a MINIMUM Requirement to join as a donor of CPU cycles ? LOL... Agreed. Comments about other folks' hardware like that are less than helpful and smack of elitism. Of course, if someone wants to wire me about 3 grand, I'll be happy to put a fully loaded i7 975 online and proceed to beat it to within an inch of its life. ;-) Case in point: my K6/300 has somehow managed to accumulate more credit than eighty-some percent of all BOINC hosts ever, and a lot of those it's passed are big-gun battleships which never hung around long enough, for one reason or another, to make any difference at all! :-) I fully intend to keep him soldiering along until he has a motherboard burnout, since he performs his primary task perfectly and has done so for almost a decade. :-D Another point to consider here is that scientifically SAH work is just rote data reduction, and therefore it really isn't a 'race' unless one chooses to look at it as such. For me, I couldn't care less if it takes until the cows come home to complete a task, as long as it makes the deadline. Alinator |
Matthew S. McCleary Send message Joined: 9 Sep 99 Posts: 121 Credit: 2,288,242 RAC: 0 |
Which would buy the project what, exactly? Would a ten percent reduction in workload even be sufficient to alleviate the problems we're seeing with uploads and downloads? I doubt it. And how long would it take for that ten percent to be eaten up again, as people go out and buy GTX 260s and even faster cards? It's nothing more than a very temporary stopgap, and it would come at the cost of alienating (har har) users. Perhaps that would have a cumulative effect on reducing workload, but even so it still strikes me as a bad idea in the long term. As someone pointed out, it's easy to damage one's reputation, and very difficult to repair that damage. |
Send message Joined: 4 Sep 99 Posts: 3868 Credit: 2,697,267 RAC: 0 |
Comments about other folks' hardware like that are less than helpful and smack of elitism. I agree fully. Elitist, class-based thinking like that leads to Boston Tea Parties, the fall of the Bastille, and October Revolutions. Another point to consider here is that scientifically SAH work is just rote data reduction, and therefore it really isn't a 'race' unless one chooses to look at it as such. For me, I couldn't care less if it takes until the cows come home to complete a task as long as it makes the deadline. Again I agree completely. If SETI is supposed to somehow "serve" the desires of users, don't the desires of those of us interested in the science have as much priority as the desires of the Speed Racer crowd? |
NorthernStudio Send message Joined: 7 Dec 05 Posts: 5 Credit: 3,196,847 RAC: 0 |
I lived in towns with ~volunteer~ fire departments running ancient equipment held together with baling wire and duct tape. I can't recall anyone, new members or old-timers, crying that they were quitting their ~volunteer~ activity if the organization didn't buy new equipment to suit their personal needs. Those ~volunteers~ would run into a burning building wearing only flip-flops and boxer shorts if there was someone inside to rescue. Of course, there isn't a credit system that encourages some to forget the reason why they joined and to see the whole project as something that exists to give them some sense of importance. Perhaps a suggestion might be to limit the number of work units sent to ~volunteers~, or perhaps scrap the credit system entirely. Ramp up the PR in recruiting new ~volunteers~ that are interested in the science and not in personal aggrandizement. As has been said, SETI@home asked us only to contribute our unused computing resources for the benefit of science. It didn't ask us to go out and invest in what seem more often to be powerful ego-processing units. So quit. It's not about you or me or any one of us. The rest of us will pick up the slack if needed, and perhaps work on getting real ~volunteers~ into the project when needed. A dozen willing hands doing a little of the lifting is far more valuable than one blowhard that needs to lift it all himself. Wayne (Usually a lurker.) |
Send message Joined: 19 Jan 03 Posts: 205 Credit: 1,248,845 RAC: 0 |
I never attacked anyone's hardware, nor do I consider myself elite. ...So, fine then... here's another comment for the peanut gallery to rip to shreds: using outdated/obsolete hardware is environmentally irresponsible. Could the SETI@home distributed computing project then be considered anti-green by association? Have fun with that one ;) I am TCP JESUS...The Carpenter Phenom Jesus....and HAMMERING is what I do best! formerly known as...MC Hammer. |
NorthernStudio Send message Joined: 7 Dec 05 Posts: 5 Credit: 3,196,847 RAC: 0 |
No need to rip it to shreds. You didn't pose any arguments to support the statement before passing to the opposition. By my understanding of the rules of debate, you lose. Wayne |
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.