I don't get it??

Message boards : Number crunching : I don't get it??


Profile Rev. Tim Olivera

Joined: 15 Jan 06
Posts: 20
Credit: 1,717,714
RAC: 0
United States
Message 959240 - Posted: 29 Dec 2009, 12:41:53 UTC

I have been running SETI@home for, my god, like 9-10 years now. And in all those years, without fail, on a long weekend I get "NO RUNNING TASKS" or "UNABLE TO DOWNLOAD TASKS" or some other thing is wrong!! So for 3 or more days the systems I build just to run SETI@home are sucking electricity and spinning the good old hard drive doing nothing. "I DON'T GET IT". Is the equipment at Berkeley so unreliable it breaks down with no one there for longer than two days?? I thought the reason for spending a gazillion dollars on Sun equipment was that it is reliable and does not need someone watching it 24/7/365!! Am I wrong in that thought??

Rev. Tim Olivera

P.S. No Running Tasks Again ;-(
ID: 959240
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14679
Credit: 200,643,578
RAC: 874
United Kingdom
Message 959246 - Posted: 29 Dec 2009, 13:23:36 UTC

Oh dear. Yet again, Rev. Tim, you write without reading first.

Even the most expensive Sun equipment (and I believe it was donated, not the result of excessive spending) requires one vital extra ingredient: electricity.

As the front page news has said for several days now, many buildings on the Berkeley campus have been without power this weekend for a major upgrade to the electricity supply network. If you don't get power, you don't get tasks.
ID: 959246
Claggy
Volunteer tester

Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 959254 - Posted: 29 Dec 2009, 14:01:02 UTC - in response to Message 959240.  
Last modified: 29 Dec 2009, 14:16:57 UTC

Because you haven't joined any other project, even as a backup? (Suggested by Richard Haselgrove in August '09.) Like Einstein or CPDN.

And since there's another shutdown this weekend, you'll run out of work next week too. You could always increase your cache to, say, 3 or 4 days, so you don't run out of work so often.
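The cache suggestion above can be sanity-checked with a little arithmetic. The sketch below is a rough model, not BOINC's actual scheduler: it assumes a host that crunches around the clock, so a cache of N days of work lasts N days. The function name and the numbers are illustrative.

```python
def idle_days(cache_days: float, outage_days: float) -> float:
    """Days a 24/7 host sits idle during an outage, given a work
    cache of `cache_days`. Zero means the cache covered the outage."""
    return max(0.0, outage_days - cache_days)

# A 3-day holiday outage: a small quarter-day cache vs. a 4-day cache.
assert idle_days(cache_days=0.25, outage_days=3.0) == 2.75  # nearly 3 days wasted
assert idle_days(cache_days=4.0, outage_days=3.0) == 0.0    # never runs dry
```

Under this simple model, any cache at least as large as the longest expected outage keeps the host busy throughout.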

Claggy
ID: 959254
spitfire_mk_2
Joined: 14 Apr 00
Posts: 563
Credit: 27,306,885
RAC: 0
United States
Message 959343 - Posted: 29 Dec 2009, 19:39:34 UTC

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.
ID: 959343
Luke
Volunteer developer
Joined: 31 Dec 06
Posts: 2546
Credit: 817,560
RAC: 0
New Zealand
Message 959345 - Posted: 29 Dec 2009, 19:50:55 UTC - in response to Message 959343.  
Last modified: 29 Dec 2009, 19:51:17 UTC

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.
- Luke.
ID: 959345
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 959348 - Posted: 29 Dec 2009, 20:18:54 UTC - in response to Message 959240.  

I have been running SETI@home for, my god, like 9-10 years now. And in all those years, without fail, on a long weekend I get "NO RUNNING TASKS" or "UNABLE TO DOWNLOAD TASKS" or some other thing is wrong!! So for 3 or more days the systems I build just to run SETI@home are sucking electricity and spinning the good old hard drive doing nothing. "I DON'T GET IT". Is the equipment at Berkeley so unreliable it breaks down with no one there for longer than two days?? I thought the reason for spending a gazillion dollars on Sun equipment was that it is reliable and does not need someone watching it 24/7/365!! Am I wrong in that thought??

Rev. Tim Olivera

P.S. No Running Tasks Again ;-(

So nice to hear from you, Reverend. Your uplifting stories of tolerance and understanding are always welcome.

The reason you're upset (and seemingly, the reason you're upset in every single post I've ever seen from you) is that you don't understand a couple of basic concepts about BOINC.

In order to get the kind of reliability you have always expected, the project has to do extraordinary things.

For one thing, they have to spend a whole lot more money on staff. They can't staff 8/5; they'd have to staff 24/7.

They'd have to spend a lot more money on redundant servers.

They wouldn't fit in their current server space, so more money for facilities.

... and they need redundant connectivity, delivered on multiple paths.

For last weekend's outage (and this coming weekend's outage) they would have to rent a big generator, and pay for fuel and monitor it, because Campus is upgrading their power distribution -- and turned off electricity to the whole building.

The whole point to BOINC is to make distributed computing affordable to projects that cannot afford to do things on the kind of budget you seem to think they have.

If you were really following along, you'd see that most of the Sun equipment in service today is a gift from Sun. IIRC, the first V40z server was one of their prototypes, no longer needed since it did not match the production V40z.

I'm always disappointed by people who complain about downtime, because the tools to deal with downtime are built into BOINC -- and they work stunningly well.

If anyone wants to actually learn about the underlying concepts, I'll be glad to point out the relevant white papers.
ID: 959348
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 959355 - Posted: 29 Dec 2009, 20:31:12 UTC - in response to Message 959345.  

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.

Luke,

I have to respectfully disagree.

As I interpret the BOINC design goals, white papers, and observe the operation of BOINC itself, the typical project should be able to run with as little as about 70% availability. Downtime is handled gracefully -- especially if you crunch two projects.

It seems to me that SETI@Home is over 90%, even including scheduled outages, like the Tuesday maintenance and the campus-imposed outages for power upgrades.

In other words, I don't think it's "as best they can" but really remarkably good.
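The "over 90%" estimate above is easy to ballpark. The sketch below uses assumed, illustrative numbers (an 8-hour maintenance outage every Tuesday, plus four 3-day outages a year); the real schedule varies, so treat it as an order-of-magnitude check rather than measured data.

```python
HOURS_PER_YEAR = 365 * 24  # 8760

# Assumed, illustrative outage budget for one year:
tuesday_maintenance = 52 * 8   # weekly maintenance, ~8 h each -> 416 h
long_outages = 4 * 72          # four 3-day outages (power work, failures) -> 288 h

downtime = tuesday_maintenance + long_outages   # 704 h
availability = 1 - downtime / HOURS_PER_YEAR    # ~0.92

assert availability > 0.90  # comfortably above the ~70% BOINC can tolerate
```

Even with fairly pessimistic assumptions, the yearly figure stays above 90%, which is the point being made here.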

I think the problems are twofold:

1) People with unreasonable expectations -- we see that a lot among people who say "why is BOINC doing this work unit, that isn't due for a month, and not this one for next week?"

... or "why don't I have at least one work unit from every project I crunch?"

2) People who compare BOINC (which does not involve people directly) to Amazon or Google -- sites that must be up 99.9999% of the time.

We spend too much time saying "SETI@Home is down" and not nearly enough saying "Look what they're doing on essentially no money!"

-- Ned
ID: 959355
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66354
Credit: 55,293,173
RAC: 49
United States
Message 959358 - Posted: 29 Dec 2009, 20:43:40 UTC - in response to Message 959345.  

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.

It's just the maid's day (week?) off. For operating on a shoestring they do a great job. I'll wait: watch TV, play some MULE, etc.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 959358
Profile Albireo380

Joined: 21 Mar 08
Posts: 119
Credit: 1,570,025
RAC: 0
United Kingdom
Message 959359 - Posted: 29 Dec 2009, 20:44:08 UTC

I read the painful postings on this thread with interest. Presumably we contribute to Seti@Home because we are interested in the whole ET thing, and want to contribute in a small way.

When the system goes down, I switch to Einstein or Rosetta. Where is the sweat in that?

No issue from my perspective, even though I am only a baby contributor. Matt and the gang will bring SETI back up to speed as quickly as they can. I suspect they aren't just buffing their nails and drinking coffee : ) Are your nails highly polished Matt? Bet they aren't. Eric - can I use your nails as mirrors? Probably not.

I hope you all had a good Christmas and will have a happy & prosperous New Year.

Cheers

Tom

PS The film "For All mankind" was on TV last night - great : ))
Chill out guys.
ID: 959359
Luke
Volunteer developer
Joined: 31 Dec 06
Posts: 2546
Credit: 817,560
RAC: 0
New Zealand
Message 959364 - Posted: 29 Dec 2009, 20:55:11 UTC - in response to Message 959355.  
Last modified: 29 Dec 2009, 20:57:52 UTC

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.

Luke,

I have to respectfully disagree.

As I interpret the BOINC design goals, white papers, and observe the operation of BOINC itself, the typical project should be able to run with as little as about 70% availability. Downtime is handled gracefully -- especially if you crunch two projects.

It seems to me that SETI@Home is over 90%, even including scheduled outages, like the Tuesday maintenance and the campus-imposed outages for power upgrades.

In other words, I don't think it's "as best they can" but really remarkably good.

I think the problems are twofold:

1) People with unreasonable expectations -- we see that a lot among people who say "why is BOINC doing this work unit, that isn't due for a month, and not this one for next week?"

... or "why don't I have at least one work unit from every project I crunch?"

2) People who compare BOINC (which does not involve people directly) to Amazon or Google -- sites that must be up 99.9999% of the time.

We spend too much time saying "SETI@Home is down" and not nearly enough saying "Look what they're doing on essentially no money!"

-- Ned


I don't see any disagreement. I've never heard Matt or Eric promise any of the things I mentioned in my previous post, and yet people complain about 'all this downtime', when in fact S@H does a great job on the little money it has.
Are we not on the same page?
- Luke.
ID: 959364
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66354
Credit: 55,293,173
RAC: 49
United States
Message 959365 - Posted: 29 Dec 2009, 20:58:03 UTC - in response to Message 959359.  

I read the painful postings on this thread with interest. Presumably we contribute to Seti@Home because we are interested in the whole ET thing, and want to contribute in a small way.

When the system goes down, I switch to Einstein or Rosetta. Where is the sweat in that?

No issue from my perspective, even although I am only a baby contributor. Matt and the gang will bring SETI back up to speed as quickly as they can. I suspect they aren't just buffing their nails and drinking coffee : ) Are your nails highly polished Matt? Bet they aren't. Eric - can I use your nails as mirrors? Probably not.

I hope you all had a good Christmas and will have a happy & prosperous New Year.

Cheers

Tom

PS The film "For All mankind" was on TV last night - great : ))
Chill out guys.

I've read there's currently a resync operation going on in the background. So until then, play a game, watch TV, crunch for someone else for the moment, etc. Me, I'll wait and maybe play a game of MULE (needs port 6260 forwarded in one's router).
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 959365
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 959368 - Posted: 29 Dec 2009, 21:10:53 UTC - in response to Message 959364.  

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.

Luke,

I have to respectfully disagree.

As I interpret the BOINC design goals, white papers, and observe the operation of BOINC itself, the typical project should be able to run with as little as about 70% availability. Downtime is handled gracefully -- especially if you crunch two projects.

It seems to me that SETI@Home is over 90%, even including scheduled outages, like the Tuesday maintenance and the campus-imposed outages for power upgrades.

In other words, I don't think it's "as best they can" but really remarkably good.

I think the problems are twofold:

1) People with unreasonable expectations -- we see that a lot among people who say "why is BOINC doing this work unit, that isn't due for a month, and not this one for next week?"

... or "why don't I have at least one work unit from every project I crunch?"

2) People who compare BOINC (which does not involve people directly) to Amazon or Google -- sites that must be up 99.9999% of the time.

We spend too much time saying "SETI@Home is down" and not nearly enough saying "Look what they're doing on essentially no money!"

-- Ned


I don't see any disagreement. I've never heard Matt or Eric promise any of the things I mentioned in my previous post, and yet people complain about 'all this downtime', when in fact S@H does a great job on the little money it has.

Only in what should be considered "success."

I think they're wildly successful -- better availability would just cost a lot more money with no improvement in results.
ID: 959368
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66354
Credit: 55,293,173
RAC: 49
United States
Message 959433 - Posted: 30 Dec 2009, 1:44:19 UTC - in response to Message 959368.  

Rev., just move to another project.

I dropped SETI altogether. This project has scheduled downtime every Tuesday, but then it goes down just about every weekend too.

I am done and gone.


No need to sulk, if I may say it harshly. You were never promised constant uptime, you were never promised a stream of work, and you were never promised an error-free project.
Matt, Eric and the team do a damn good job of keeping everything running smoothly, as best they can.

Luke,

I have to respectfully disagree.

As I interpret the BOINC design goals, white papers, and observe the operation of BOINC itself, the typical project should be able to run with as little as about 70% availability. Downtime is handled gracefully -- especially if you crunch two projects.

It seems to me that SETI@Home is over 90%, even including scheduled outages, like the Tuesday maintenance and the campus-imposed outages for power upgrades.

In other words, I don't think it's "as best they can" but really remarkably good.

I think the problems are twofold:

1) People with unreasonable expectations -- we see that a lot among people who say "why is BOINC doing this work unit, that isn't due for a month, and not this one for next week?"

... or "why don't I have at least one work unit from every project I crunch?"

2) People who compare BOINC (which does not involve people directly) to Amazon or Google -- sites that must be up 99.9999% of the time.

We spend too much time saying "SETI@Home is down" and not nearly enough saying "Look what they're doing on essentially no money!"

-- Ned


I don't see any disagreement. I've never heard Matt or Eric promise any of the things I mentioned in my previous post, and yet people complain about 'all this downtime', when in fact S@H does a great job on the little money it has.

Only in what should be considered "success."

I think they're wildly successful -- better availability would just cost a lot more money with no improvement in results.

I agree, Ned. I've heard of other projects taking months to get back online. S@H must have its own Scotty here, as they do pull stuff off and do a good job of it too.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 959433
Profile Pappa
Volunteer tester
Joined: 9 Jan 00
Posts: 2562
Credit: 12,301,681
RAC: 0
United States
Message 959436 - Posted: 30 Dec 2009, 2:04:18 UTC

I have expired results to return to NQueens... This is their second major failure (the first time it was down for over a month, but I did get to return expired results). So my guess is it is hard to get computer parts to Chile (they are shipped via the slow boat from China, grin)...

The reason for running NQueens was to test how well CPU applications would give up resources (and debt issues with BOINC alpha) while running CUDA/GPU applications. So some of those who have the Volunteer Tester tags under their names also test many things besides just SETI.

And for the record, I do get what the Rev. is saying. I took time off from SETI Classic as a result of issues.

There is nothing wrong with other projects to keep CPU's and GPU's warm.

I would also say that, as SETI gets back to running, in your Computing Preferences you could set:

Maintain enough work for an additional 1.5 days (not everyone at once please, as it does take BOINC a while to adjust, and a batch of shorties will cause more grief).

That should get us past the next power outage.

Regards


Please consider a Donation to the Seti Project.

ID: 959436
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 959449 - Posted: 30 Dec 2009, 3:15:10 UTC - in response to Message 959436.  

And for the record, I do get what the Rev. is saying. I took time off from SETI Classic as a result of issues.

There is nothing wrong with other projects to keep CPU's and GPU's warm.

Classic was a different animal. No cache, no ability to run other projects, just crunch one work unit, then upload/download the next.

Sure, there were third party applications, but it was kinda primitive.

BOINC has tools that overcome most of the shortcomings, like built-in caching. If the Reverend would care to tweak a few settings he would not run out of work.

My biggest gripe though is with those who can't see the BOINC client and the BOINC servers as a system -- and we really should be looking at both together, and not one or the other in isolation.
ID: 959449
Profile zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 66354
Credit: 55,293,173
RAC: 49
United States
Message 959452 - Posted: 30 Dec 2009, 3:46:28 UTC - in response to Message 959449.  

And for the record, I do get what the Rev. is saying. I took time off from SETI Classic as a result of issues.

There is nothing wrong with other projects to keep CPU's and GPU's warm.

Classic was a different animal. No cache, no ability to run other projects, just crunch one work unit, then upload/download the next.

Sure, there were third party applications, but it was kinda primitive.

BOINC has tools that overcome most of the shortcomings, like built-in caching. If the Reverend would care to tweak a few settings he would not run out of work.

My biggest gripe though is with those who can't see the BOINC client and the BOINC servers as a system -- and we really should be looking at both together, and not one or the other in isolation.

Back then Classic was state of the Art, Today It's an antique I'd say.
Savoir-Faire is everywhere!
The T1 Trust, T1 Class 4-4-4-4 #5550, America's First HST

ID: 959452
woodenboatguy

Joined: 10 Nov 00
Posts: 368
Credit: 3,969,364
RAC: 0
Canada
Message 959458 - Posted: 30 Dec 2009, 3:59:51 UTC

Rev,

I missed the whole thing as I was out of town and only realized what had happened when I returned. My reaction? I invested the time to shrug.

I noticed later on that my cruncher was no longer running dry and had picked up again. I saved a shrug for something more monumental.

Point is, SETI is a contribution of computing cycles. As mentioned in better posts than mine, the costs of bullet-proof service from SETI would far (far) exceed the benefit. They aren't making millions of dollars an hour pumping auctions or books out to a willing community. Nothing would justify the kind of expectations being laid on a voluntary endeavour, one that, by the way, does exceptionally well (I run IT projects and know what it takes to do better).

SETI abides. You should too.

Regards,
ID: 959458
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 959459 - Posted: 30 Dec 2009, 4:02:41 UTC - in response to Message 959449.  

And for the record, I do get what the Rev. is saying. I took time off from SETI Classic as a result of issues.

There is nothing wrong with other projects to keep CPU's and GPU's warm.

Classic was a different animal. No cache, no ability to run other projects, just crunch one work unit, then upload/download the next.

Sure, there were third party applications, but it was kinda primitive.

BOINC has tools that overcome most of the shortcomings, like built-in caching. If the Reverend would care to tweak a few settings he would not run out of work.

My biggest gripe though is with those who can't see the BOINC client and the BOINC servers as a system -- and we really should be looking at both together, and not one or the other in isolation.


As a system it's always running, even if that means that, out of the 1,000,000 or so people running S@H, only 1 of them has work to process. Really, even if there is 0 work to process, it's still working.

The whole time the servers were without power for the rewiring, the project was still making progress, as many people's computers kept running without even knowing the servers were down.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 959459
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 959461 - Posted: 30 Dec 2009, 4:13:32 UTC - in response to Message 959459.  
Last modified: 30 Dec 2009, 5:08:37 UTC

As a system it's always running, even if that means that, out of the 1,000,000 or so people running S@H, only 1 of them has work to process. Really, even if there is 0 work to process, it's still working.

I was actually taking a much more self-centered view.

If I have a four day cache, and the SETI@Home servers are down for 3 days, then my machine never ran out of work.

If the servers are up, but have no work to give out, but I'm crunching off of my four day cache, then my machine is not out of work.

If my machine has a full cache, but has 20 completed work units waiting for the servers to come back, then I'm good because I'm not out of work.

If I crunch 95% SETI and 5% something else, and because of a SETI outage I crunch my other project for a week, I'm good because I'm not out of work -- and BOINC will crunch nothing but SETI until that extra "something else" is paid back.

As Pappa says, set "extra days" to 1.5. I'd suggest "extra days" plus "connect every" should total somewhere around two, but it's all the same idea.
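The rule of thumb above can be sketched with the same kind of arithmetic. This is a simplification of BOINC's real work-fetch logic, under the assumption that the effective cache is roughly the sum of the two preference values quoted above:

```python
def effective_cache_days(connect_every: float, extra_days: float) -> float:
    """Rough approximation: BOINC tries to keep about
    connect_every + extra_days of work on hand."""
    return connect_every + extra_days

# Pappa's 1.5 "extra days" plus a half-day "connect every"
# lands right on the suggested total of about two:
cache = effective_cache_days(connect_every=0.5, extra_days=1.5)
assert cache == 2.0
```

Any combination summing to the same total gives roughly the same buffer, which is why "it's all the same idea".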

It might be a little different if you have a multi-CUDA host; you may not be able to keep a big cache, but you can crunch more than one project.

SETI@Home does a pretty good job keeping things warm if you just size your cache appropriately, or if you give a little bit to some other project.

I notice when the BOINC client is having trouble talking to the BOINC servers, but I look at that the same way I look inside any black box and watch how it works.

It's interesting, but as long as the whole box is working, I don't care.
ID: 959461
nemesis
Joined: 12 Oct 99
Posts: 1408
Credit: 35,074,350
RAC: 0
Message 959467 - Posted: 30 Dec 2009, 4:39:58 UTC

oh ye of little faith.....
ID: 959467


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.