Message boards : Number crunching : Rescheduling Hosts - Bad Practice
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
Well, while you have more than the 200 in-progress APs that you're entitled to, you had better get used to what others think of you. [edit] And the same goes for the others doing it. [/edit]
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
I guess I shouldn't think about buying a new machine... I'm not 'entitled' to any more work. Heaven forbid I place one card in another existing machine and run that one up to 200. OMG! My other machines are well below 200! I'm not getting my Fair Share! Whatever, have a good one.
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
You can go and buy another rig and I won't have any problems with that, but as Juan stated in the first post, "that's clearly a bad practice and messes with the RAC of all wingmates (paid lower credits)", and it's noticeable to those of us who don't do that many APs when we get paid only half the credit we should've received because our wingperson rescheduled a CPU task to a GPU. Cheers.
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
You don't have a clue, do you? Check the tasks on the Mac; what you're describing is not reality. Show me where a CPU AP is being run on a GPU. At least get your facts straight before nagging someone for hours. Cheers!
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
If that's so, then how are you obtaining work above the limit?
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
It's magic. Mac magic. I decided not to tell because other people might do the same and offend you even more. BTW, it's much easier than that App you hate, which doesn't even work on a Mac...
Tim · Joined: 19 May 99 · Posts: 211 · Credit: 278,575,259 · RAC: 0
And a few more :-( Sorry guys, this was a test we made to see the ups and downs of the RAC. No MultiBeam work was abandoned. This will not happen again. This is a promise. Tim
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Nice, Tim. Sorry if I pointed at you; from my post you could see I said something must be wrong. My intention was to raise the problem for anyone who doesn't know why it is not good to use a rescheduler on SETI, no matter whether you have CPUs, NVIDIAs, ATIs, or a Mac!

For those who don't know, here is a short "non-technical" explanation. After the introduction of "CreditScrew", each WU is labeled on the server side and targeted to be crunched by either the CPU or the GPU. When you reschedule a WU from the CPU to the GPU on the client side (the server will never know you did that) and crunch the CPU WU on the GPU, the credit granted is a lot less than expected. Why? Because of the crunch time involved: GPUs crunch faster than CPUs (to keep it non-technical), so less time to crunch = less credit claimed. Since BOINC uses the smaller number of credits to "pay" for your work, you and your wingman will both receive a lot less credit per unit crunched (i.e. lower credit = lower RAC). That is clearly stated on the rescheduler's main page: "Warning, rescheduling may result in less credits, for you or your wing man."

I'm not talking about getting more or fewer WUs to crunch. We are all suffering from the 100 WU limit on SETI, and we could all reschedule our jobs, but that is bad practice: when you do it you mess not just with your own credit (you receive less credit per WU), you mess with your wingman's credit and make them receive a lot less than expected too. I'm sorry if somebody only looks at his own host; we are a community, and you need to remember that whatever you do affects a lot of others.

BTW, trying to answer the question about the ATIs: they are clearly superior and easy to use on projects that use DP (MilkyWay, Einstein, etc.), which is not the case for SETI, so I believe their users switch to those projects (maybe because they get tired), and in the end that is not bad; it's science all the same. "Be careful what you wish for." I don't wish for anything.

I'm not telling anyone what to crunch (MB or AP); I'm only saying: avoid using the rescheduler, because by using it you mess with the others. If you discover a way to bypass the limit without messing with the others, that's fine (BTW, I know of at least one; I just don't have time for babysitting), and it would be even greater if you shared the info with the community too. The 100 WU limit is bad for everyone who runs big crunchers (especially with more than one fast GPU); that's why I have asked many times on this forum to raise the limit from 100 WUs per host to 100 WUs per GPU, but that is for another thread.

"It's magic." Such things don't exist on computers (except Jason's Black Magic, of course), so if you reschedule your jobs, the credit per WU granted to you and your wingman will be lower.
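The effect juan describes can be boiled down to a toy model. The sketch below is a deliberately simplified illustration, not the real CreditNew code: the function names, the min()-of-claims granting rule, and all the numbers are made-up assumptions chosen only to show why a rescheduled CPU task drags down the wingmate's credit too.

```python
# Toy model (NOT the actual CreditNew algorithm) of the effect described above:
# a task finished much faster than the server's estimate claims less credit,
# and the workunit's granted credit follows the lower claim, so the wingmate
# who ran it honestly on the CPU is paid less as well.

def claimed_credit(expected_hours: float, actual_hours: float,
                   server_estimate: float) -> float:
    """Claim scales with actual runtime versus the server-side runtime
    estimate for the device class the task was issued to (illustrative)."""
    return server_estimate * (actual_hours / expected_hours)

def granted_credit(claim_a: float, claim_b: float) -> float:
    """Simplified granting rule: both wingmates receive the lower claim."""
    return min(claim_a, claim_b)

SERVER_ESTIMATE = 100.0  # hypothetical credit target for a CPU-targeted AP task
CPU_HOURS = 20.0         # expected runtime on the CPU it was issued to

# Wingmate A runs it on the CPU as issued and claims the full estimate.
claim_a = claimed_credit(CPU_HOURS, 20.0, SERVER_ESTIMATE)  # 100.0

# Wingmate B reschedules it to a GPU that finishes in 2 hours.
claim_b = claimed_credit(CPU_HOURS, 2.0, SERVER_ESTIMATE)   # 10.0

print(granted_credit(claim_a, claim_b))  # prints 10.0 for BOTH wingmates
```

Under these made-up numbers, the honest CPU wingmate loses 90% of the expected credit even though they did nothing wrong, which is exactly the complaint in this thread.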
rob smith · Joined: 7 Mar 03 · Posts: 22204 · Credit: 416,307,556 · RAC: 380
The only reason to use rescheduling is to polish one's ego: "look how many WUs I can hoard". Given that the Tuesday outage normally only lasts about three hours, that's all the work you need in your caches. If you run out, then you should do what the project recommends: use an alternate project (some pay far better than S@H for the same crunching time). Indeed, if you are really so eager to get loads of credit, then migrate to one of the "high roller" projects and watch your RAC go through the roof... But do not hoard S@H WUs just to keep your ego polished.
Bob Smith, Member of Seti PIPPS (Pluto is a Planet Protest Society). Somewhere in the (un)known Universe?
bill · Joined: 16 Jun 99 · Posts: 861 · Credit: 29,352,955 · RAC: 0
There's a valid reason for the 100 WU limit: to keep the database from crashing. If somebody caused a database crash by somehow bypassing the 100 WU limit, I would hope that they would get blacklisted permanently so fast their head would spin.
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
"There's a valid reason for the 100 WU limit: to keep the database from crashing."
The number relevant to the database is the "All" number, and the All number for some people in this thread is much higher than 200. Here's the OP: State: All (2034). Mine: State: All (459). Wiggo: State: All (1271).
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
This thread is not about the quantity of WUs; it is about the bad impact on the credit granted to your wingman that you cause when you reschedule a WU. If you care about the rest of the community, just don't use the rescheduler and all will be fine. :)
TBar · Joined: 22 May 99 · Posts: 5204 · Credit: 840,779,836 · RAC: 2,768
"This thread is not about the quantity of WUs; it is about the bad impact you cause on your fellow wingman when you reschedule a WU."
I don't use the 'rescheduler'. It doesn't even work on a Mac. So, I guess all is fine ;-)
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
It's not the "State: All" that we are talking about; it's the "State: In progress". Juan's and my "State: All" are higher than yours for the simple fact that we do more MultiBeam work than Astropulse work, and we still have to wait for slower rigs to catch up to us. Cheers.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
"This thread is not about the quantity of WUs; it is about the bad impact you cause on your fellow wingman when you reschedule a WU."
If you don't use it then you are OK, but you can see a lot of others who do; they are the ones who need to stop the bad practice. But be aware: even if you don't use the rescheduler and use some other way to bypass the limit, if you crunch a CPU WU on the GPU the effect is the same.
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
Even if you are not using the rescheduler, you are using some loophole that is allowing you to override the limits on work in progress. Cheers.
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14650 · Credit: 200,643,578 · RAC: 874
"Even if you are not using the rescheduler you are using some loophole that is allowing you to override the limits of work in progress."
And it's the "State: All" figure which contributes to the size of the BOINC database, the time it takes to compact it and back it up during maintenance, and the speed of queries the rest of the time: in other words, the slickness and responsiveness of all the visible server functions. The smaller those numbers are, the better neighbour you are being to the rest of the community.
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Off topic, but important: if the 100 WU limit exists to protect the DB, then any way to bypass this limit could be considered bad practice too. BTW, it's hard to keep the "All" WU count low with a few fast GPUs; in my case the last stats show 5892 WUs, and I use only a 0.5-day cache.
Wiggo · Joined: 24 Jan 00 · Posts: 34762 · Credit: 261,360,520 · RAC: 489
"Even if you are not using the rescheduler you are using some loophole that is allowing you to override the limits of work in progress."
Richard, if you can work out a way for my wing people to pick up the pace a bit, then I'd be much happier with the reduced "All" number. ;-) But really, the thread is about those who do things to get around the server-side limits on tasks in progress, and most especially those who have several hundred APs above the limit (talk about skimming the cream off the milk). Cheers.
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
I randomly found an additional rescheduling host (id=3050453). What happened here? ap_21oc13ae_B5_P1_00300_20140117_08445.wu (wuid=1405444816): the 1st result was a formerly-CPU task rescheduled to a GPU, with a 30/0 result; the 2nd and 3rd both had 5/0 results. Why did the 30/0 result get the credit granted? What happened in the database; which result was saved? Thanks.
* Best regards! :-) * Philip J. Fry, team seti.international founder. * Optimize your PC for higher RAC. * SETI@home needs your help. *
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.