Not getting full 10 days tasks?

Message boards : Number crunching : Not getting full 10 days tasks?
Ryan

Joined: 20 May 99
Posts: 7
Credit: 5,691,912
RAC: 0
United States
Message 1043091 - Posted: 17 Oct 2010, 12:26:14 UTC - in response to Message 1043008.  

Evening

And I am not in a good humor.

Does anyone know how stupid this is?

I still have to wonder why people incorrectly set a ten-day cache when they did not know that it would double, triple or quadruple the stress on the servers. We all know that the servers have issues, and when you increase the stress it is going to cause problems. During the last outage you could see that 3.5+ megabits/sec of traffic was just the 230,000+ computers attempting to contact the scheduler. That is roughly 4% of the 100-megabit upstream bandwidth. When uploads start to happen you can see it go over 40 megabits/sec on the network link. When downloads are enabled, that trashes uploads and scheduler requests. It is all those 10-day caches pushing out other SETI users.

So when the BAD advice happened, too many people listened to the BAD advice! You should also know by now that there are limits on the number of workunits that you can download right now. That limit is there so the available bandwidth is not overrun.

So I have cut back on SETI so the greedy (listening to bad advice) can get their fill... I feel no sympathy for you. I will not be the straw that broke the camel's back... So on the next crash, please start a thread titled "I was the SETI user that broke the camel's back!" Then you can brag about how large your cache size is. Then complain that you cannot get work, or that you still have work, and expect others to praise you or feel sorry for you.

Regards




Wow.... the condescension is thick.

"...incorrectly set a ten day cache"? What's the "proper" way to set a 10 day cache? Sarcasm aside... if the 10-day cache is a bad idea, why has it not yet been removed?

The problem is not the size of the cache, but the frequency with which said cache is threatened by unexpected server problems. The stress on the servers would not "double, triple, or quadruple" if they weren't down all the time.

The 3.5 megabits/sec is not our fault. If that's a problem, then the clients should be updated to wait longer between calls.
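To be clear about what I mean by "wait longer between calls": plain exponential backoff with a bit of jitter would do it. This is only a rough sketch of the idea, not the actual BOINC client code; the base delay, the one-hour cap and the function name are my own assumptions.

import random

def next_retry_delay(failed_attempts, base=60.0, cap=3600.0):
    """Seconds a client should wait before its next scheduler request.

    Doubles the wait after each consecutive failure, caps it at an hour,
    and adds jitter so 230,000+ clients don't all retry in the same second.
    (Illustrative only; not how the real BOINC client is written.)
    """
    delay = min(base * (2 ** failed_attempts), cap)
    return delay * random.uniform(0.5, 1.5)

for attempt in range(6):
    print(f"after {attempt} failures: wait ~{next_retry_delay(attempt):.0f} s")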

Let's say that everyone has a maximum cache of 5 days and SETI has been down for a week (not at all unrealistic considering the recent problems). When the servers are placed back in service, you will still see 230,000+ clients trying to fill their caches. The only difference the cache size will make is the amount of time those clients will spend trying to get what they can.

If you feel no sympathy for us greedy people, why bother cutting back yourself?
Wandering Willie
Volunteer tester
Joined: 19 Aug 99
Posts: 136
Credit: 2,127,073
RAC: 0
United Kingdom
Message 1043101 - Posted: 17 Oct 2010, 13:13:47 UTC - in response to Message 1043091.  

Evening

And I am not in a good humor.

Does anyone know how stupid this is?

I still have to wonder why people incorrectly set a ten-day cache when they did not know that it would double, triple or quadruple the stress on the servers. We all know that the servers have issues, and when you increase the stress it is going to cause problems. During the last outage you could see that 3.5+ megabits/sec of traffic was just the 230,000+ computers attempting to contact the scheduler. That is roughly 4% of the 100-megabit upstream bandwidth. When uploads start to happen you can see it go over 40 megabits/sec on the network link. When downloads are enabled, that trashes uploads and scheduler requests. It is all those 10-day caches pushing out other SETI users.

So when the BAD advice happened, too many people listened to the BAD advice! You should also know by now that there are limits on the number of workunits that you can download right now. That limit is there so the available bandwidth is not overrun.

So I have cut back on SETI so the greedy (listening to bad advice) can get their fill... I feel no sympathy for you. I will not be the straw that broke the camel's back... So on the next crash, please start a thread titled "I was the SETI user that broke the camel's back!" Then you can brag about how large your cache size is. Then complain that you cannot get work, or that you still have work, and expect others to praise you or feel sorry for you.

Regards




Wow.... the condescension is thick.

"...incorrectly set a ten day cache"? What's the "proper" way to set a 10 day cache? Sarcasm aside... if the 10-day cache is a bad idea, why has it not yet been removed?

The problem is not the size of the cache, but the frequency with which said cache is threatened by unexpected server problems. The stress on the servers would not "double, triple, or quadruple" if they weren't down all the time.

The 3.5 megabits/sec is not our fault. If that's a problem, then the clients should be updated to wait longer between calls.

Let's say that everyone has a maximum cache of 5 days and SETI has been down for a week (not at all unrealistic considering the recent problems). When the servers are placed back in service, you will still see 230,000+ clients trying to fill their caches. The only difference the cache size will make is the amount of time those clients will spend trying to get what they can.

If you feel no sympathy for us greedy people, why bother cutting back yourself?


Perhaps he has seen the error of his ways. Greed does not pay.

The best-controlled project I have come across for WU limits is AQUA@home. You can set your download cache to 10 days and, at the moment, you will still only receive 2 WUs. As soon as one is uploaded and reported, the next WU is sent.

AQUA@home Message from server: This computer has reached a limit on tasks in progress.
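Roughly, the rule seems to be something like the sketch below. This is only my guess at the behaviour, not AQUA@home's actual scheduler code; the limit of 2 and the names are assumptions based on the message above.

MAX_IN_PROGRESS = 2  # assumed per-host limit, judging by AQUA@home's behaviour

def tasks_to_send(requested, in_progress):
    """New tasks handed out per request, whatever cache size the host asked for."""
    free_slots = max(0, MAX_IN_PROGRESS - in_progress)
    return min(requested, free_slots)

print(tasks_to_send(requested=40, in_progress=2))  # 0 -> "reached a limit on tasks in progress"
print(tasks_to_send(requested=40, in_progress=1))  # 1 -> next WU goes out once one is reported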

Michael
soft^spirit
Joined: 18 May 99
Posts: 6497
Credit: 34,134,168
RAC: 0
United States
Message 1043124 - Posted: 17 Oct 2010, 14:37:09 UTC

Pappa has a point. Increasing your cache size to the full 10 days IS additional stress on the servers. This has led, in part, to several of the recent outages.
And people are using tricks to go beyond that. No, I will not share them. It is a BAD idea.

I do intend to produce some heat with these machines for the winter. That is because they produce heat, and it is a "might as well not waste it" circumstance. I am not going to whine if I get cold, because there is NO reason for it to be my ONLY heat source.

There are download limits. They are there for a reason. I am sitting at them myself. This should allow others to fill up and slow down the errors they are getting trying to download. That is how it is supposed to work. Hopefully that is how it is working.

In theory it can be used as: "Okay, give everyone something to work on. Next, give everyone a couple of hours' worth. Next, give everyone a day's worth. Next, give everyone two days' worth..." (you get the idea). And anything beyond 3, maybe 4 days (to allow for bad estimates of the time it takes to crunch) is just silly for a computer that is always connected. If the outages run longer than that, well, then the project is down and certainly does not need added stress!!
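Something like this toy schedule is what I am picturing; it is only a sketch of the idea, nothing the project actually runs, and the pass sizes are made up.

# Hypothetical ramp-up after an outage: each pass tops every host up to a
# slightly larger cache, so nobody fills 10 days before others get anything.
RAMP_UP_DAYS = [0.1, 0.25, 1.0, 2.0, 3.0]  # made-up pass sizes, in days of work

def allowed_cache_days(pass_number, requested_days):
    """Cap on how much cache a host may fill during a given ramp-up pass."""
    cap = RAMP_UP_DAYS[min(pass_number, len(RAMP_UP_DAYS) - 1)]
    return min(requested_days, cap)

for p in range(5):
    print(f"pass {p}: a 10-day request gets filled to {allowed_cache_days(p, 10)} days")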

And for those complaining about the servers being down: if you have looked around, you might notice they have some major hardware issues that may well require replacements. Feel free to be part of the solution by helping to donate to keep them up, rather than part of the problem by seeing how many units you can tuck away.


Janice
JohnDK
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1043127 - Posted: 17 Oct 2010, 14:44:46 UTC

I asked the other day if I remember correctly that someone said the max cache could be set by the project, overriding the local setting. I didn't see any answer so I'll try again :)

I have 5 days on my PCs; that's enough for just about all outages. If there should be a situation (or two) where SETI is down for longer, well, we will all survive that. Why not help the servers/database by limiting the cache? :)
Brkovip
Joined: 18 May 99
Posts: 274
Credit: 144,414,367
RAC: 0
United States
Message 1043133 - Posted: 17 Oct 2010, 14:54:10 UTC

Wow, I didn't know I would stir up such a hornet's nest. All I wanted was to see the max RAC this one computer could turn in, and without enough tasks it won't ever happen.
Aurora Borealis
Volunteer tester
Joined: 14 Jan 01
Posts: 3075
Credit: 5,631,463
RAC: 0
Canada
Message 1043135 - Posted: 17 Oct 2010, 14:55:27 UTC - in response to Message 1043127.  
Last modified: 17 Oct 2010, 14:57:03 UTC

I asked the other day if I remember correctly that someone said the max cache could be set by the project, overriding the local setting. I didn't see any answer so I'll try again :)

I have 5 days on my PCs; that's enough for just about all outages. If there should be a situation (or two) where SETI is down for longer, well, we will all survive that. Why not help the servers/database by limiting the cache? :)

The project can't control the cache directly. That is a BOINC Manager function.
What it can control is the number of WUs it sends in each request, a cap on the number of WUs in progress, and the number of WUs that can be downloaded per day per CPU core and/or GPU.
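Put together, those knobs work roughly like the sketch below. The numbers and names are made up for illustration; this is not the actual SETI@home scheduler configuration.

# Hypothetical server-side limits; the values are examples only.
RESULTS_PER_REQUEST = 20       # max WUs sent in a single scheduler reply
IN_PROGRESS_PER_CORE = 50      # cap on unfinished WUs per CPU core
IN_PROGRESS_PER_GPU = 400      # cap on unfinished WUs per GPU
DAILY_QUOTA_PER_CORE = 100     # max WUs downloadable per core per day

def work_to_send(requested, in_progress, cpu_cores, gpus, sent_today):
    """WUs a host actually gets back, whatever its local cache setting says."""
    cap_in_progress = cpu_cores * IN_PROGRESS_PER_CORE + gpus * IN_PROGRESS_PER_GPU
    cap_daily = cpu_cores * DAILY_QUOTA_PER_CORE - sent_today
    return max(0, min(requested, RESULTS_PER_REQUEST,
                      cap_in_progress - in_progress, cap_daily))

# A host asking for a 10-day cache still only gets 20 WUs per request at most.
print(work_to_send(requested=500, in_progress=290, cpu_cores=8, gpus=0, sent_today=0))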
Ryan
Joined: 20 May 99
Posts: 7
Credit: 5,691,912
RAC: 0
United States
Message 1043137 - Posted: 17 Oct 2010, 14:57:12 UTC - in response to Message 1043133.  

Wow, I didn't know I would stir up such a hornet's nest. All I wanted was to see the max RAC this one computer could turn in, and without enough tasks it won't ever happen.


Be careful mentioning RAC..... You'll stir up another, even larger, hornet's nest. :)
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51469
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1043148 - Posted: 17 Oct 2010, 15:14:12 UTC
Last modified: 17 Oct 2010, 15:15:45 UTC

I will state my position that 10 day caches have little or nothing to do with the project's current difficulties. NADA.

What percentage of the total Seti user base do you really think sets such large caches?
1%? 2-3%? I doubt it's even that high.

So I really think all this gnashing of teeth, berating those who carry large caches and have computers powerful enough to justify them, is just pi##ing in the wind.

Just as I am sure that if every single user that read this forum suddenly dropped their caches to 1 day, you would see a big zippo in the impact on the project.

The problems are, first, server hardware, which is being addressed thanks to the wonderful response to my donation fund drives as of late, and, second, bandwidth... which will have to be analyzed and possibly addressed once the server base is stable with some new servers on duty.

So knock it off, and quit bashing other users for the way they wish to run their hardware. If somebody sets their cache too high and can't return the work in a timely fashion (and by that, I mean before deadline), there are mechanisms in place, the error-induced limits, to cut back the work that they can download.
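For anyone unfamiliar with those error-induced limits, the idea is roughly the toy version below; the halving/doubling rule and the ceiling are my simplification, not BOINC's exact code.

MAX_DAILY_QUOTA = 100  # assumed per-host ceiling, for illustration only

def update_daily_quota(quota, result_was_good):
    """Shrink a host's daily allowance when it returns bad or late work,
    and grow it back as valid results come in."""
    if result_was_good:
        return min(MAX_DAILY_QUOTA, quota * 2)
    return max(1, quota // 2)

quota = MAX_DAILY_QUOTA
for outcome in (False, False, False, True, True):
    quota = update_daily_quota(quota, outcome)
    print(quota)  # 50, 25, 12, 24, 48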

And, yes.....that's my final answer.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1817
Credit: 262,693,308
RAC: 44
Australia
Message 1043150 - Posted: 17 Oct 2010, 15:14:41 UTC
Last modified: 17 Oct 2010, 15:16:50 UTC

I don't see what all the fuss is about !!

All my computers have reached the "tasks in progress" limit. The problem is the tasks are still stuck on the server waiting to download.

The download rate has been so slow that it's not even keeping up with the rate at which units are being processed, and from what I read on other threads I'm not the only one affected in this way. Taking this into account, any b*tch fight over cache sizes is totally meaningless. No one with a cache size of more than 2 days has had a full cache for nearly a month.

Due to the download limit nobody is filling their caches so this whole argument is theoretical. Whatever the problem currently is at the server, it is NOT being caused by 10 day caches.

So Peace Brothers, the problems will be sorted eventually. In the meantime, take 10 deep breaths and go watch some TV.

T.A.
(4 day cache)
ToxicTBag
Joined: 5 Feb 10
Posts: 101
Credit: 57,197,902
RAC: 0
United Kingdom
Message 1043152 - Posted: 17 Oct 2010, 15:19:36 UTC

On that note, "I'd like to buy the world a Coke" :-)
Cruncher-American
Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1043155 - Posted: 17 Oct 2010, 15:28:10 UTC - in response to Message 1043077.  

Have you actually seen these numbers? I have filled my cache and have 157 CPU and 133 GPU tasks, and have 3 GPUs.
10/17/2010 4:42:11 AM | SETI@home | Message from SETI@home: This computer has reached a limit on tasks in progress


Yup - I have been at 320/1280 on my two 8-core, 4-GPU systems for some time now.
Is your number of WUs limited by cache size and the time estimates for the WUs you have now? (Maybe a bad DCF from the server?)
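For anyone wondering why a bad DCF matters: the client multiplies its runtime estimates by the DCF, so an inflated value makes each WU look far longer and far fewer of them "fit" in the cache. A simplified sketch, not BOINC's exact formula:

def wus_to_fill_cache(cache_days, est_hours_per_wu, dcf, cores):
    """Roughly how many WUs the client asks for to fill its cache.
    Simplified: real BOINC also accounts for resource shares, deadlines, etc."""
    effective_hours = est_hours_per_wu * dcf
    return int(cache_days * 24 * cores / effective_hours)

print(wus_to_fill_cache(cache_days=10, est_hours_per_wu=1.0, dcf=1.0, cores=8))   # 1920
print(wus_to_fill_cache(cache_days=10, est_hours_per_wu=1.0, dcf=10.0, cores=8))  # 192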
hiamps
Volunteer tester
Joined: 23 May 99
Posts: 4292
Credit: 72,971,319
RAC: 0
United States
Message 1043182 - Posted: 17 Oct 2010, 17:19:31 UTC

Wow, I do have a green star, so I guess I have donated... I have been running out of work each week, and I believe the quotas are killing the servers with requests. I bet you believe that going slow on a busy freeway helps too.
Official Abuser of Boinc Buttons...
And no good credit hound!
hiamps
Volunteer tester
Joined: 23 May 99
Posts: 4292
Credit: 72,971,319
RAC: 0
United States
Message 1043188 - Posted: 17 Oct 2010, 17:35:24 UTC
Last modified: 17 Oct 2010, 17:36:05 UTC

The funny part is that I have been bitching for years that this day was coming, and then I was wrong. Since no one took this into account and started upgrading a couple of years ago, now it is an emergency... WOW! At least David fixed BOINC.
Official Abuser of Boinc Buttons...
And no good credit hound!
JohnDK
Volunteer tester
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1043198 - Posted: 17 Oct 2010, 18:10:51 UTC

About the cache being important for the smooth running of the project or not:

I was under the impression that the database problems lately were due to ghosts AND cache size; maybe I'm wrong.
Brkovip
Joined: 18 May 99
Posts: 274
Credit: 144,414,367
RAC: 0
United States
Message 1043210 - Posted: 17 Oct 2010, 23:52:39 UTC

Well I guess I don't have to worry about getting my 10 day queue filled now on the system I wanted to test out. I think Mork needs to be sent back to his home planet and a replacement Ork put in his place.
Jamie
Joined: 8 Feb 01
Posts: 28
Credit: 11,078,008
RAC: 0
United States
Message 1043215 - Posted: 18 Oct 2010, 0:16:53 UTC - in response to Message 1043124.  

Pappa has a point. Increasing your cache size to the full 10 days IS additional stress on the servers. This has led, in part, to several of the recent outages.
I think the problem here is that you have the causality arrow going the wrong way.

Cache sizes weren't an issue before the project started being down as much as it was up.

ScarabDrowner
Volunteer tester
Joined: 13 Sep 03
Posts: 90
Credit: 456,378
RAC: 0
United States
Message 1043229 - Posted: 18 Oct 2010, 0:38:16 UTC

Let's see, I have my SETI preferences set to say my computer connects every 5 days, and to download work for another 5 days. Is that a valid definition of a 10-day cache? Would you say I'm being a bad person on this project by having these settings?
What if I were to then mention that it takes my computer 16-17 hours to finish ONE workunit? That's close enough to call it one workunit per day. OMG, my 10-day cache is hoarding 10 workunits (if I can ever get the bloody WUs to download :))! Someone else is going to have to go without, I guess. :P
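Spelling the arithmetic out (16.5 h is just the midpoint of the 16-17 hours I quoted; even the exact number is still tiny):

hours_per_wu = 16.5               # midpoint of 16-17 hours per workunit
wus_per_day = 24 / hours_per_wu   # ~1.45, close enough to call it one a day
cache_days = 5 + 5                # connect every 5 days + 5 extra days of work
print(round(cache_days * wus_per_day))  # ~15 WUs at most in a full 10-day cache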
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51469
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1043232 - Posted: 18 Oct 2010, 0:42:44 UTC - in response to Message 1043210.  

Well I guess I don't have to worry about getting my 10 day queue filled now on the system I wanted to test out. I think Mork needs to be sent back to his home planet and a replacement Ork put in his place.

I think this latest tantrum pretty much assures that Mork is going to be sent into space somewhere. Space outside of the SETI server closet, anyway.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

perryjay
Volunteer tester
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 1043277 - Posted: 20 Oct 2010, 20:38:05 UTC - in response to Message 1043232.  

Bye, bye, Mork, it's been nice to know you. You had a good long run.


PROUD MEMBER OF Team Starfire World BOINC