Rescheduling Hosts - Bad Practice

juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1467248 - Posted: 22 Jan 2014, 11:33:11 UTC
Last modified: 22 Jan 2014, 11:33:35 UTC

Philip

About your question: I believe the credit was granted by the credit-granting script that was made to fix another problem, and it granted credit to some wrong WUs; that was explored in other threads on the forum before.

About the database, I'm curious too, but I believe it's the 5/0 result that will be recorded, because the canonical result was task 3340074176.

Off topic, but interesting: how do you manage to use so little CPU time if you run with a relatively slow processor and use the stock AP crunching version?
ID: 1467248
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1467250 - Posted: 22 Jan 2014, 11:49:16 UTC - in response to Message 1467248.  
Last modified: 22 Jan 2014, 11:51:35 UTC

Off topic, but interesting: how do you manage to use so little CPU time if you run with a relatively slow processor and use the stock AP crunching version?


My guess is that he has a GTX 285, so it's not that the CPU parts are quicker but that the GPU parts take longer, possibly with a lower impact from waits on the CPU. I see the opposite with CUDA multibeam on a 780: because it's so fast, it heavily taxes my old Core2Duo, even though CPU usage for CUDA multibeam is relatively low to start with (it will need to be made a lot lower with x42, and configurable).

What's with the database - which result was saved?
The canonical result, so the better-looking result would have made the science database. It probably means the 1st result had enough similar-looking signals to be judged 'weakly similar'. I guess a granting script could have been in play, though I think weak similarity is more likely.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1467250
William
Volunteer tester
Joined: 14 Feb 13
Posts: 2037
Credit: 17,689,662
RAC: 0
Message 1467266 - Posted: 22 Jan 2014, 12:52:57 UTC - in response to Message 1467228.  
Last modified: 22 Jan 2014, 12:53:35 UTC

What happened here?
ap_21oc13ae_B5_P1_00300_20140117_08445.wu - >wuid=1405444816<

The 1st result: a former CPU task rescheduled to GPU, with a 30/0 result.
The 2nd & 3rd: both with 5/0 results.

Why did the 30/0 result get credit granted?
What's with the database - which result was saved?

Canonical result: 3340074176
IOW, the second one.
It's recent [no credit-granting script involved], so the 30/0 one must have been 'weakly similar'. That means the validator judged it similar enough to the other two to give it credit, but it won't go canonical (the canonical result is the one that's put in the DB).
Joe could enlighten us about the criteria for 'weakly similar'. I keep forgetting, and it may be different for MB and AP. [I thought it was more than 50% overlap of signals.]

Regarding the topic: can you all take a deep breath and keep it civil, please?
Otherwise a mod might come along and decide it has degraded into a flame war.
A person who won't read has no advantage over one who can't read. (Mark Twain)
ID: 1467266
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1467291 - Posted: 22 Jan 2014, 14:38:39 UTC
Last modified: 22 Jan 2014, 14:40:46 UTC

@Jason

Could be, but my question is why the WU on one host has very high CPU usage and the other doesn't; something must be in place to do that. You don't see these differences on other WUs, and maybe it could be an interesting path to follow.

@William
LOL - war, surely not... maybe just some "aggressive negotiations" :)

Anyway, it's important to make clear to all: rescheduling after creditscrew is not a good practice, and we all must avoid using it.
ID: 1467291
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1467293 - Posted: 22 Jan 2014, 14:46:03 UTC - in response to Message 1467291.  

@Jason

Could be, but my question is why the WU on one host has very high CPU usage and the other doesn't; something must be in place to do that. You don't see these differences on other WUs, and maybe it could be an interesting path to follow.

@William
LOL - war, surely not... maybe just some "aggressive negotiations" :)

Anyway, it's important to make clear to all: rescheduling after creditscrew is not a good practice, and we all must avoid using it.

As far as I am concerned, anybody who uses rescheduling to get past the 100-per-host GPU limit and hoard AP work deserves all the bad karma they can receive.

I don't like the limit, and have asked numerous times that it be changed to per GPU instead of per host, but so far that request has not been granted.
But I still don't cheat to try to get around it.

Meow.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1467293
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1467294 - Posted: 22 Jan 2014, 14:47:13 UTC - in response to Message 1467291.  

@Jason

Could be, but my question is why the WU on one host has very high CPU usage and the other doesn't; something must be in place to do that. You don't see these differences on other WUs, and maybe it could be an interesting path to follow.

The OpenCL apps aren't really Jason's territory, so I'll try.

Both NV users are running the same application, but Vincent's host 7110498 - the one with the high CPU usage - is running Windows 8.1, while Philip's host 7188044 is running Windows XP. Downgrading to XP is probably not recommended now, though, with security support ending in three months' time.
ID: 1467294
Sutaru Tsureku
Volunteer tester
Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1467296 - Posted: 22 Jan 2014, 14:49:20 UTC
Last modified: 22 Jan 2014, 15:06:43 UTC

AFAIK, the low CPU time usage of the OpenCL AP app on my PC is because of:
- an NV driver older than 27x.xx (I use the minimum recommended driver, 263.06)
- 1 app/task per GPU

If you use a 27x.xx+ NV driver and/or 2+ tasks per GPU, you get "run time = CPU time" on NV hosts - AFAIK & IIRC.

* Best regards! :-) * Philip J. Fry, team seti.international founder. * Optimize your PC for higher RAC. * SETI@home needs your help. *
ID: 1467296
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1467310 - Posted: 22 Jan 2014, 15:23:55 UTC - in response to Message 1467296.  


If you use a 27x.xx+ NV driver and/or 2+ tasks per GPU, you get "run time = CPU time" on NV hosts - AFAIK & IIRC.

* Best regards! :-) * Philip J. Fry, team seti.international founder. * Optimize your PC for higher RAC. * SETI@home needs your help. *

Not true. I am running 2/GPU with 331.82 drivers, and my CPU time for AP GPU WUs is a good deal less than GPU run time.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1467310
Sutaru Tsureku
Volunteer tester
Joined: 6 Apr 07
Posts: 7105
Credit: 147,663,825
RAC: 5
Germany
Message 1467426 - Posted: 22 Jan 2014, 20:02:14 UTC
Last modified: 22 Jan 2014, 20:20:46 UTC

Then maybe it's an OS thing - WinXP with lower CPU time usage?

= = = = = = = = = = = = = = = = = = = =

Another host, >id=4001951< of >RottenMutt<, has rescheduled tasks; because of this, the results of >wuid=1405244741< got 604.87 instead of ~700 Cr.

Not the worst case, at just ~50-100 Cr. lost, but annoying.

Please don't reschedule tasks from CPU to GPU or from GPU to CPU.

Reminder, a quote from my profile:
(...)
Since the new credit system 'CreditNew' you should no longer use a reschedule tool to send CPU WUs to GPU or GPU WUs to CPU (confirmed by SETI@home director Dr. David P. Anderson). If you still do this, it will disturb the correct calculation of the credits for the results. Much less credit will be granted to you, and also to other members (wingmen*)!
(...)

[EDIT: I got his statement via e-mail.]
[EDIT#2: Please add ~95 Cr. to the credits of my account if you look at them.
It was not the first time, and it will not be the last time, that this happens. :-|]

* Best regards! :-) * Philip J. Fry, team seti.international founder. * Optimize your PC for higher RAC. * SETI@home needs your help. *
ID: 1467426
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1467434 - Posted: 22 Jan 2014, 20:13:46 UTC - in response to Message 1465986.  

Until this day, some users still insist on rescheduling; that's clearly a bad practice and messes with the RAC of all wingmates (who get paid lower credits). A waste of resources for the project and for us users.


Can you justify the text in bold?
Actually, processing AstroPulse on the CPU is a real waste of both the users' resources (energy/time) and the project's (longer turnaround times).

Each AP task received by the stock CPU app I would consider a pitiful waste. And if such a task happens to land on the Linux stock CPU AP app... good chance it will end in a computation error, even more wasteful than the Windows stock CPU AP app.

So, the fewer AP tasks left to the stock CPU app, the better.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1467434
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1467443 - Posted: 22 Jan 2014, 20:29:28 UTC - in response to Message 1466383.  
Last modified: 22 Jan 2014, 20:33:32 UTC

The only reason to use rescheduling is to polish one's ego: "look how many WUs I can hoard".


Too wrong a statement to pass by.
A modern ATi card can process MUCH more AP work than a modern BOINC client running on autopilot can fetch from the project.

And processing AP tasks is the best thing an ATi card can do for the SETI project.

What does someone's "ego" have to do with it at all???

If a host is able to process all its fetched work before the deadline, that's fine.
If it can provide better turnaround times, all the better. But weigh a slightly longer turnaround against an idle computational resource...

Perhaps the project settings, as far as the imposed limits go, are misconfigured.
With the erratic AP work supply, a 100-task limit is too low to allow 24/7 processing.
Maybe it would be good to raise this limit (and even better if it were done with a redesigned quota management system). If a host returns wrong results (including missed deadlines) it should be punished and prevented from getting too much work; but if it returns valid results, it's fine.
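
For illustration, a minimal sketch of the kind of quota management suggested here. BOINC's scheduler does keep a per-host daily quota that shrinks on errors and recovers on valid results; the names and constants below are hypothetical, meant only to show the "reward good hosts, throttle bad ones" idea, not the real server code.

MAX_DAILY_QUOTA = 400  # assumed per-host ceiling
MIN_DAILY_QUOTA = 1

def update_quota(quota, valid, missed_deadline=False):
    """Reward a host that returns valid work in time; throttle the rest."""
    if valid and not missed_deadline:
        return min(quota + 1, MAX_DAILY_QUOTA)  # slow recovery on success
    return max(quota // 2, MIN_DAILY_QUOTA)     # fast back-off on failure

# Example: two valid results, one error, then one more valid result.
quota = 100
for ok in (True, True, False, True):
    quota = update_quota(quota, ok)
print(quota)  # 100 -> 101 -> 102 -> 51 -> 52

Under a scheme like this, a hoarding host that blows its deadlines loses its quota quickly, while a reliable one climbs back toward the ceiling.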
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1467443
Josef W. Segur
Volunteer developer
Volunteer tester
Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1467485 - Posted: 22 Jan 2014, 23:13:37 UTC - in response to Message 1467266.  

What happened here?
ap_21oc13ae_B5_P1_00300_20140117_08445.wu - >wuid=1405444816<

The 1st result: a former CPU task rescheduled to GPU, with a 30/0 result.
The 2nd & 3rd: both with 5/0 results.

Why did the 30/0 result get credit granted?
What's with the database - which result was saved?

Canonical result: 3340074176
IOW, the second one.
It's recent [no credit-granting script involved], so the 30/0 one must have been 'weakly similar'. That means the validator judged it similar enough to the other two to give it credit, but it won't go canonical (the canonical result is the one that's put in the DB).
Joe could enlighten us about the criteria for 'weakly similar'. I keep forgetting, and it may be different for MB and AP. [I thought it was more than 50% overlap of signals.]
...

That basic criterion for "weakly similar" also applies to AP, but there's special logic for single pulses. The validator only considers single pulses which are at least 1% above threshold. The summary of signals from the bad host has 26 of the 30 showing "1.#IO" for peak_power, and since that is a Windows indication of a NaN (Not a Number), those 26 were excluded from comparison. The remaining 4 were apparently good enough to achieve the "weakly similar" state.
                                                                  Joe
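
For illustration, a rough sketch of the filtering rule Joe describes. This is not the actual validator (which is C++ server code); the helper names, the matching stub, and the 50% overlap figure William recalled upthread are all assumptions.

import math

def comparable_pulses(pulses, threshold):
    """Keep single pulses at least 1% above threshold; drop NaN peak powers
    (the '1.#IO' values printed by Windows runtimes)."""
    return [p for p in pulses
            if not math.isnan(p["peak_power"])
            and p["peak_power"] >= 1.01 * threshold]

def pulses_match(pa, pb, tol=0.01):
    """Stub comparison: same detection if time and power agree within a
    relative tolerance (placeholder criterion, not the real validator's)."""
    return (abs(pa["time"] - pb["time"]) <= tol
            and abs(pa["peak_power"] - pb["peak_power"]) <= tol * pb["peak_power"])

def weakly_similar(result_a, result_b, threshold, min_overlap=0.5):
    """Two results are 'weakly similar' when more than min_overlap of the
    comparable signals in one find a counterpart in the other."""
    a = comparable_pulses(result_a, threshold)
    b = comparable_pulses(result_b, threshold)
    if not a or not b:
        return False
    matched = sum(1 for pa in a if any(pulses_match(pa, pb) for pb in b))
    return matched / max(len(a), len(b)) > min_overlap

In the case above, the 26 NaN pulses never reach the comparison at all; only the 4 surviving pulses are tested against the wingmen's signals.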
ID: 1467485
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1468135 - Posted: 24 Jan 2014, 11:36:54 UTC
Last modified: 24 Jan 2014, 12:10:56 UTC

@Raistmer

Regarding your 2 posts, we all agree that crunching AP on the CPU is a real waste compared to the GPUs, and we could extrapolate that MB crunching on the CPU is a waste too compared to GPU crunching, but most users want to do their part, and CPU crunching is the base of the project. I was talking from a totally different point of view, though. It's not about crunching efficiency; it's about being "polite" and not cheating the rules.

I don't agree with the 100 WU limit per host, and like others I always ask for it to be raised to at least 100 WUs per GPU, but if there is a rule in play (the 100 WU limit) then we all need to follow it, or we will end up in a much bigger mess than creditscrew. Your idea of upgrading the way the work is distributed, guaranteeing work to those who produce and punishing those who only waste resources, is perfect.

Let's go a little deeper into why there are AP shortages: mainly because a lot of users who didn't crunch AP in the past, me included, were "forced" to crunch AP by the way creditscrew is working (or not working right, in my opinion).

I could agree that we are not here for credits, and if credits were what I wanted there are a lot of other projects that "pay" a lot more, but the point is different: we are humans, not machines, and in the human world we are competitive and love to compare things. It's hard to see one host crunching MB receive only 50% of the credits that the same host gets when crunching AP. It was never about the quantity of credit per WU; it's about "balancing" the credits - paying the same number of credits for the same work done (by work done I mean the computer time used to crunch the WU, AP or MB, on the same host).

Bypassing the limits leads to rescheduling, and rescheduling leads to weird situations. Why would someone need a cache of 1000 AP WUs? That will take 40 days or more to crunch, as estimated below. Surely a lot of them will reach the time limit before being crunched. So yes, it's a waste of time and resources, since those WUs will need to be sent to other hosts, wasting server capacity and increasing the DB size.
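
As a rough sanity check on that estimate (assuming, hypothetically, about one hour of GPU time per AP task on a single card crunching around the clock):

$\dfrac{1000\ \text{tasks} \times 1\ \text{h/task}}{24\ \text{h/day}} \approx 42\ \text{days}$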

Please don't take me wrong; I have no desire to pour gasoline on the fire, I'm just trying to answer your question.

BTW: In an ideal world we would have an option that allows crunching AP only on the GPUs, leaving the CPUs to crunch MB, but AFAIK that option is not available on SETI (if I'm wrong, please somebody tell me how to do that).

Totally off topic: I wish to thank you and the others who develop the AP crunching apps for the excellent job done; I hope you all continue for a long time.
ID: 1468135
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1468144 - Posted: 24 Jan 2014, 12:07:23 UTC - in response to Message 1468135.  
Last modified: 24 Jan 2014, 12:14:41 UTC

Well, let's not mix hosts with 1000 tasks per GPU and hosts with an adequate (even re-scheduled) number of tasks.
The first falls into the "can't complete before deadline" category and hence should be punished; the second falls into the "as efficient as can be" category and so should be rewarded! And both can use re-scheduling to achieve their state.
As with many other things (fire, for example ;) ), re-scheduling per se is not bad when properly applied.

So-called "rules" and "polite" behavior are both self-imposed, actually.
I consider it no waste of energy if you can both follow the rules and be polite to all life forms on this planet. But with the current 100-tasks-per-host limit, being "polite" in my sense REQUIRES re-scheduling for any ATi-enabled host with a fast enough GPU. Those not doing this are perhaps just lazy and don't care (can we call them "impolite"? - not sure ;D )

And regarding credits per se:
1) The aim of "credits" is to stimulate work on the project, not to hinder it. In this sense CreditNew is quite broken. And with a broken credit mechanism, following broken rules is like following ancient taboos.
2) "Lower credits" can be an urban myth after all, at least in the short term.
Example:

Task 3336373475 (WU 1403780369) | sent 15 Jan 2014, 12:00:53 UTC | reported 17 Jan 2014, 1:26:09 UTC | Completed and validated | run time 3,296.96 s | CPU time 465.26 s | credit 712.19 | AstroPulse v6, Anonymous platform (CPU)
Task 3336153054 (WU 1403604712) | sent 15 Jan 2014, 7:11:14 UTC | reported 16 Jan 2014, 14:15:43 UTC | Completed and validated | run time 3,491.73 s | CPU time 983.90 s | credit 674.85 | AstroPulse v6, Anonymous platform (ATI GPU)

As one can see, the "proper" GPU task received less credit, while the re-scheduled "CPU" task (processed with the same app on the same GPU) received more.
CreditNew is close to random as a credit-rewarding system, so "politeness" and "following the rules" are hardly applicable to this topic at all.

P.S. And regarding AstroPulse on NV, some technical info: though one can receive more credit doing AP tasks on NV GPUs, I think (based on conducted testing, not just IMO) that NV is more suitable for MB task processing at the current project stage. And again, here the "credits" imposed by CreditNew contradict what is better for the project.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1468144
Mike
Volunteer tester
Joined: 17 Feb 01
Posts: 34258
Credit: 79,922,639
RAC: 80
Germany
Message 1468151 - Posted: 24 Jan 2014, 12:23:25 UTC
Last modified: 24 Jan 2014, 12:25:01 UTC

But with the current 100-tasks-per-host limit, being "polite" in my sense REQUIRES re-scheduling for any ATi-enabled host with a fast enough GPU. Those not doing this are perhaps just lazy and don't care (can we call them "impolite"? - not sure ;D )


That's not entirely true.
I don't need to reschedule, and I have a fast GPU.
I always have 100 units left in my cache.

That might be different on a multi-GPU host, but shouldn't be if set up correctly.
One reason I still use BOINC 6.


With each crime and every kindness we birth our future.
ID: 1468151
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1468154 - Posted: 24 Jan 2014, 12:27:29 UTC - in response to Message 1468135.  


BTW: In an ideal world we would have an option that allows crunching AP only on the GPUs, leaving the CPUs to crunch MB, but AFAIK that option is not available on SETI (if I'm wrong, please somebody tell me how to do that).

AFAIK it's possible by using the anonymous platform. But (!) it will not solve the issue: BOINC just will not fetch enough AP tasks to keep the GPU busy.
So, without re-scheduling, the 2 available choices are:
1) Do only AP and leave the GPU idle quite a big % of the time (here someone can object, "why keep it idle, do another project". Well, that is quite a different topic that involves a personal estimation of the value of different projects. To be honest, quite a lot of people consider the SETI project as a whole an energy waste, but we will not follow their point of view, right ;) ).
2) Do AP as long as possible and do MB the rest of the time.
But this will
a) reduce host efficiency, and
b) reduce host efficiency even more, because the host's cache will be filled with MB tasks, so the host will not ask for new AP tasks even when they become available on the server.

Both of these choices are imperfect. Re-scheduling to enlarge the queue of AP tasks is imperfect too, but IMO to a lesser degree than those 2 options. A sketch of the anonymous-platform setup follows.
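
For reference, a minimal sketch of the anonymous-platform route mentioned above: an app_info.xml in the SETI@home project directory that declares only a GPU AstroPulse app version, so the client never accepts CPU AP work. The binary name, version number, and plan class below are placeholders and must match the optimized build actually installed; MB app versions would be declared alongside in the same file.

<app_info>
  <app>
    <name>astropulse_v6</name>
  </app>
  <file_info>
    <!-- placeholder binary name; use the optimized build you installed -->
    <name>ap_6.01_win_x86_SSE2_OpenCL_ATI.exe</name>
    <executable/>
  </file_info>
  <app_version>
    <app_name>astropulse_v6</app_name>
    <version_num>601</version_num>
    <!-- placeholder plan class for an ATi OpenCL build -->
    <plan_class>ati_opencl_100</plan_class>
    <avg_ncpus>0.05</avg_ncpus>
    <coproc>
      <type>ATI</type>
      <count>1</count>
    </coproc>
    <file_ref>
      <file_name>ap_6.01_win_x86_SSE2_OpenCL_ATI.exe</file_name>
      <main_program/>
    </file_ref>
  </app_version>
</app_info>

Because no CPU app version for astropulse_v6 is declared, the scheduler can only send AP work for the GPU; as noted above, though, this still doesn't make the client fetch more of it.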
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1468154
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1468155 - Posted: 24 Jan 2014, 12:27:38 UTC - in response to Message 1468144.  

Another take on the 'politeness' question:

I think we can all see that, just at the moment and for the time being (and possibly for the reasons already discussed), SETI has far more volunteer computing power devoted to AP than it needs to process the volume of work currently passing through the pipeline - witness the length of time there are no tasks ready to send, between tape loadings.

Is it really beneficial, to the project or anyone else, for the AP tasks to be gathered up in their hundreds or thousands by a comparatively small number of volunteers, and then held in caches in the gaps between work being split?
ID: 1468155
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1468157 - Posted: 24 Jan 2014, 12:29:31 UTC - in response to Message 1468151.  

But with the current 100-tasks-per-host limit, being "polite" in my sense REQUIRES re-scheduling for any ATi-enabled host with a fast enough GPU. Those not doing this are perhaps just lazy and don't care (can we call them "impolite"? - not sure ;D )


That's not entirely true.
I don't need to reschedule, and I have a fast GPU.
I always have 100 units left in my cache.

That might be different on a multi-GPU host, but shouldn't be if set up correctly.
One reason I still use BOINC 6.


Mike,
1) BOINC 6 is not "modern BOINC" now, so someone could say you are cheating too ;)
2) Is this possible with BOINC 7? I tried quite a lot of different settings and always observed a lack of AP tasks 2-3 days before maintenance.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1468157
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1468158 - Posted: 24 Jan 2014, 12:31:49 UTC - in response to Message 1468144.  

Well, let's not mix hosts with 1000 tasks per GPU and hosts with an adequate (even re-scheduled) number of tasks.
The first falls into the "can't complete before deadline" category and hence should be punished; the second falls into the "as efficient as can be" category and so should be rewarded! And both can use re-scheduling to achieve their state.
As with many other things (fire, for example ;) ), re-scheduling per se is not bad when properly applied.

I totally agree. I was pointing at the first group, those who reschedule just to download a large number of WUs and have no capacity to crunch them. It's hard to separate those who really know how to reschedule (the good ones, those who should be rewarded) from the rest.

About the "urban myth": we could find examples in both directions; what is clearly broken is the random way creditscrew hands out the credits, and that is, in the end, the source of this thread.

I can only wish our "SETI gods" would hear our prayers and finally fix creditscrew to avoid all this mess. And, if it's not too much to ask, raise the 100 WU limit a little... it's hard to see your expensive multi-GPU host running empty... :)
ID: 1468158
Raistmer
Volunteer developer
Volunteer tester
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1468162 - Posted: 24 Jan 2014, 12:42:26 UTC - in response to Message 1468155.  
Last modified: 24 Jan 2014, 13:01:47 UTC

Another take on the 'politeness' question:

I think we can all see that, just at the moment and for the time being (and possibly for the reasons already discussed), SETI has far more volunteer computing power devoted to AP than it needs to process the volume of work currently passing through the pipeline - witness the length of time there are no tasks ready to send, between tape loadings.

Is it really beneficial, to the project or anyone else, for the AP tasks to be gathered up in their hundreds or thousands by a comparatively small number of volunteers, and then held in caches in the gaps between work being split?


Good question!
And I would answer: absolutely yes, provided that small number can reliably return valid results, of course (returning a valid result before the deadline is an absolute requirement).

Actually, I already mentioned this in a previous post, but let's consider it in detail:
1) We know that most of the work is done by "unattended hosts". Under this term I gather all participants of the "stock" and "set and forget" style.
2) We know that AP work is relatively scarce; we also know that VLARs can be done only on the CPU.
3) We know that the stock CPU AP app is awful, and the stock Linux CPU AP app is hardly functional at all (Urs provided the needed fixes a LONG time ago, but I never saw them implemented on main; correct me if I missed that).

So, what conclusion can one draw from these 3 facts?
That "unattended hosts" should be saved from AP tasks! This lets their CPUs crunch VLARs instead, which increases project efficiency, reduces the project's energy consumption across the world, and even improves AP turnaround times! (A GPU does an AP task in 30 minutes or even less; even the optimized CPU app requires ~10 h, and I'm afraid to imagine how long it takes the stock CPU app.)

That's my logic for why collecting AP tasks for ATi GPUs (as I said before, leave NV GPUs for MB work, though it will perhaps reduce your RAC) is a good deed and should be rewarded, not blamed.
But as with any other deed, it should be done to the right degree. As I said earlier, returning valid results in time is an absolute requirement.
SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1468162