## Average Credit Decreasing?


Darth Beaver

Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Message 1789109 - Posted: 21 May 2016, 1:33:59 UTC - in response to Message 1788956.

Down, down, and down it goes.
Where it stops, nobody knows.

Grant, Coles stopped using that for their ads, mate.

Down down, prices are down!

Or, as I say: down down, food quality is down.

Or: down down, CreditNew is down and screwing things up.
ID: 1789109 ·
Darth Beaver

Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Message 1789114 - Posted: 21 May 2016, 1:46:01 UTC

How about just adding a time component to CreditNew: if a task takes longer than 1 hour, grant an extra 80-100 credits for every hour after that, for all units, CPU and GPU.

If the GPU can do it in under 1 hour, no extra credit.

If it takes longer, you get the extra credits.

This would solve the problem of slower machines not being treated fairly.

It won't penalise faster machines like the current system does.

And you won't need to change much, as you can add an extra formula to the score: once CreditNew determines what your base score should be, add the extra credits for taking a long time.

CreditNew score + 80 for every hour past 1 hour. Simple. It should solve the VLAR problem by not PENALISING fast machines, will give slower ones a little extra, and should balance out much better.
ID: 1789114 ·
Darth Beaver

Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Message 1789122 - Posted: 21 May 2016, 2:01:10 UTC

Extra formula for credit new

UT / 3600 = ET

If ET < 1 then ET = 0 else ET = ET - 1

ET * 80 = EC

CN + EC = FS

UT = Unit time
ET = Extra time
CN = credit new
EC = extra credit
FS = Final score

I'm sure adding those lines into whatever code it is shouldn't be too complicated, Mr Anderson.
ID: 1789122 ·
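As an illustration only, the bonus formula above could be sketched in Python. It follows the per-hour multiplication of the original message (with ET measured in hours past the first); the function name and variable names are mine, and the 80-credits-per-hour rate is the post's suggestion, not anything in real CreditNew code:

```python
# Hypothetical sketch of the proposed long-runtime bonus; not real CreditNew code.

def final_score(unit_time_s: float, credit_new: float,
                rate_per_hour: float = 80.0) -> float:
    """FS = CN + EC, where EC is 80 credits per hour past the first."""
    extra_hours = max(0.0, unit_time_s / 3600.0 - 1.0)  # ET, hours past 1 hr
    extra_credit = extra_hours * rate_per_hour          # EC
    return credit_new + extra_credit                    # FS

print(final_score(1800, 100.0))       # under an hour: no bonus -> 100.0
print(final_score(3 * 3600, 100.0))   # 3-hour task: 100 + 2*80 -> 260.0
```

Whether the bonus should accrue continuously or only per completed hour is not specified in the post; the sketch accrues it continuously.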
Darth Beaver

Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Message 1789134 - Posted: 21 May 2016, 3:23:08 UTC

You can add this too. Fill in whatever is in brackets:

If U = (however you ID the units) then (goto/jump to)(program line) else (goto/jump to)(program line)

U = Unit

Use this if you wish to exclude CPU units altogether and make it GPU-only, and repeat the line for APs, whether GPU or CPU.

That should make it even fairer (for those who may whinge about slower computers getting more credit, or about those with a Xeon chip, or any chip with more than 4 cores, getting too much extra credit).
ID: 1789134 ·
Darth Beaver

Joined: 20 Aug 99
Posts: 6728
Credit: 21,443,075
RAC: 3
Message 1789235 - Posted: 21 May 2016, 13:33:11 UTC - in response to Message 1789122.

Extra formula for credit new

UT / 3600 = ET

If ET < 1 then ET = 0 else ET = ET - 1

ET * 80 = EC

CN + EC = FS

UT = Unit time
ET = Extra time
CN = credit new
EC = extra credit
FS = Final score

Edit: I got it wrong the first time, so the line ET*80=EC should be ET+80=EC
ID: 1789235 ·
Ulrich Metzner
Volunteer tester

Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Message 1789238 - Posted: 21 May 2016, 13:43:17 UTC

"Credit" should reflect the real work done, so every machine - regardless of it's speed - should "earn" the same amount (for this unit!). If a machine is slower, it gets less credit just because it can't crunch as much units, as a faster machine - simple as that. Same work - same credit.

Unfortunately at this time credit is "magically guessed" by some highly scientifically blown up "wannabe all in one" algorithm, that is in reality simply a magnificent complicated random number generator - really good for nothing at all.
Aloha, Uli

ID: 1789238 ·
Cruncher-American

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
Message 1789244 - Posted: 21 May 2016, 13:54:39 UTC - in response to Message 1789238.

"Credit" should reflect the real work done, so every machine - regardless of it's speed - should "earn" the same amount (for this unit!). If a machine is slower, it gets less credit just because it can't crunch as much units, as a faster machine - simple as that. Same work - same credit.

That's what I was saying in my post a few above this one - and the slowdown caused by GPU VLARs IS being faithfully represented by CN (within the limits we all know). The current credit crash is being caused by the stretch-out of VLARs on GPUs, which comes from a faulty app that should be fixed a.s.a.p., and that doesn't have all the issues debated for CN. It just needs someone to find and fix the bug, which is (I assume) a design problem.
ID: 1789244 ·
Zalster
Volunteer tester

Joined: 27 May 99
Posts: 5516
Credit: 528,817,460
RAC: 242
Message 1789250 - Posted: 21 May 2016, 14:10:35 UTC - in response to Message 1789244.

The app works fine. It was refined to deal with the more complex data coming out of Green Bank.

You could turn off the credit system and the apps would work like they are supposed to.

It's the credit system that is the problem: it's based on some imaginary number that gets multiplied by some correction factor and then gets spat out.

So what we need is a fixed credit for each work unit. Then it would properly reflect how well one's computers do.
ID: 1789250 ·
Volunteer tester

Joined: 15 May 99
Posts: 251
Credit: 434,772,072
RAC: 236
Message 1789276 - Posted: 21 May 2016, 15:37:45 UTC - in response to Message 1789250.

The app does not have a "bug" in it, as you keep espousing. VLAR WUs are not the same type of signal as mid-range or very high angle range WUs. Both of those types are moving across the sky, to put it very simply; the larger the number, the faster it passes a point in the sky. The MB app has several kinds of searches within it, and the split of computation time each of these kernels takes of the overall computation time is driven in part by the angle range. In the case of VLARs, where the telescope is looking at a pinpoint location in the sky for a long time, there's a whole lot of time spent looking for pulses, and there's a limit to how parallelized you can make those pulse searches. That's the high-level gist of how it all works, and it is working as designed.

There's no bug in the app, but some architectures do pulse finding more efficiently than others. The current CUDA app struggles with it, making the computer laggy, which is why for a very long time VLARs were kept off the GPUs, even though AMD cards running OpenCL were not adversely impacted by them. The developers recently released an OpenCL app for the NVidia cards to overcome the lag issue, making it viable to reintroduce VLARs to the GPUs on the main project. It doesn't change the fact that they take longer to run, but that's not a bug; that's due to the reasons I described above. Further optimization will likely be possible over time, but in the meantime it was better to release an app that allowed the entire community to contribute to the Green Bank data, as it will soon make up the majority of the work we have to do. The only reason you are seeing a drop in RAC is that CN does not correctly grant credit for work done, and in introducing the new WUs we are going to see a lot of oscillation in the credit granted for any given WU. I've seen some VLARs get 60 points and others get 200 for the same amount of work. Eventually that variation will likely settle down, but our overall RAC will almost certainly be lower the more optimized our apps get, because CN is what is flawed...

I rambled on way more than I intended, but I didn't want other people having the misconception that there was a "bug" in the app causing the credit drop. That is categorically incorrect.

Chris
ID: 1789276 ·
Ulrich Metzner
Volunteer tester

Joined: 3 Jul 02
Posts: 1256
Credit: 13,565,513
RAC: 13
Message 1789278 - Posted: 21 May 2016, 15:44:06 UTC

+1

CreditScrew is the bug, not the application!
Aloha, Uli

ID: 1789278 ·
Mr. Kevvy
Volunteer moderator
Volunteer tester

Joined: 15 May 99
Posts: 3430
Credit: 1,114,826,392
RAC: 3,319
Message 1789284 - Posted: 21 May 2016, 15:57:35 UTC

To add to what Chris wrote: at first there was much complaining that there was never enough work.... we were running out every Tuesday and not catching up until the next day, and plenty of other times as well.
Solved: the SETI@Home scientists built the GBT receiver and a new splitter and got on the Breakthrough Listen initiative, and now we have enough work (possibly even too much!)

Next, there was much complaining (including from myself) that CUDA GPUs weren't being assigned the new GBT work, though it's possibly far more likely to contain a candidate signal, and thus find something.
Solved: the same group beta-tested the impact to system stability/usability of these work units being run on CUDA GPUs and then released them (this was also probably necessary, because there is so much work and CUDA GPUs make up such a significant fraction of machines available to do it.)

Now, there is much complaining that this work is causing a reduction in credit. Sure, it's sad that our RAC is dropping, but it isn't anyone's "fault" except the NVidia engineers who designed the memory architecture of the CUDA-enabled cards, and it's the unintended side effect of something that was asked for. It isn't CreditNew or Dr. David Anderson's doing. Fixed credit won't solve it, as three times the completion time still equals one third the fixed credit over time. It needs a possibly complex, and possibly impossible, workaround in the SETI@Home client itself.

So, in the interim, be patient, enjoy the new science, and be happy that those GUPPI work units trade off three times the compute time for... I dunno... thousands of times the chance of actually finding something, and I'll take that tradeoff any day!
ID: 1789284 ·
Richard A. Van Dyke

Joined: 17 May 99
Posts: 73
Credit: 318,717,859
RAC: 4,214
Message 1789285 - Posted: 21 May 2016, 16:06:05 UTC - in response to Message 1789250.

The app works fine. It was refined to deal with the more complex data coming out of Green Bank.

You could turn off the credit system and the apps would work like they are supposed to.

It's the credit system that is the problem, it's based off some imaginary number that then gets multiplied by some correction factor that then gets spit out.

So what we need is a fixed credit for each work unit. Then it would properly reflect how well one's computers do.

Just a thought...
Perhaps going back to the days of SETI Classic is the answer: One WU completed equals one credit (Cobblestone). It is still an indicator of how much work your machine has completed without the imaginary number multiplied by some correction factor. You get credit for completing the WU regardless of whether you have the latest and greatest hardware or something that is 10 years old (just like we do now).

I imagine there would be some resistance from the user community to going back to the 1 WU = 1 credit system, but it would put an end to the CreditNew/CreditScrew problem. Besides, after crunching for 17 years now, I still haven't been able to gain enough credits for that darn toaster. :-)
ID: 1789285 ·
betreger

Joined: 29 Jun 99
Posts: 11095
Credit: 29,581,041
RAC: 66
Message 1789286 - Posted: 21 May 2016, 16:07:06 UTC - in response to Message 1789284.

Amen
ID: 1789286 ·
tullio
Volunteer tester

Joined: 9 Apr 04
Posts: 8634
Credit: 2,930,782
RAC: 1
Message 1789289 - Posted: 21 May 2016, 16:18:40 UTC

I've split my work. I am running Einstein@Home GPU tasks on my Windows/NVidia PC, and both SETI@home and SETI Beta GPU tasks on my Linux/ATI box with OpenCL. But I am getting many more credits from Einstein, with its fixed-credit system.
Tullio
ID: 1789289 ·
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13334
Credit: 208,696,464
RAC: 304
Message 1789339 - Posted: 21 May 2016, 22:18:36 UTC - in response to Message 1789285.

Just a thought...
Perhaps going back to the days of SETI Classic is the answer: One WU completed equals one credit (Cobblestone). It is still an indicator of how much work your machine has completed without the imaginary number multiplied by some correction factor.

The old system was not an indicator of work done.
Getting 1 Credit for a noisy WU that lasts 3 seconds is not even close to being on par with getting 1 Credit for a WU that takes 12 hours to complete.

Just get rid of the fudge factor for (supposed) actual versus highly theoretical efficiency.

There should just be a reference system with a reference application. The time it takes to process a valid WU should determine the amount of Credit granted for that WU.
A noisy WU: 0.5 Credits.
A shorty: 5 Credits.
A mid-range unit: 50 Credits.
A VLAR: 100 Credits (even for those occasional VLARs that take 50% longer than most others - the same amount of work is being done, it's just taking longer due to application limitations).
WUs of other ARs between those tiers get proportionally more or less Credit.
(Numbers created at random with no relationship to the actual values of a Cobblestone. Just used for illustrative purposes only).

If your hardware does things in less time, then you will get more Credit per day than the reference system. If it's slower, then you get less.
If there is an optimised application for your specific hardware then you will get even more Credit per day.
If the project releases an improved application then everyone that uses it will get more Credit per day than the reference system. There is no fudging of the Credit system for (supposed) actual v theoretical efficiency.
The new stock application is more efficient than the reference, so it will result in more Credit per day.
New hardware comes along, an application is written for it. If it's faster than the reference system, you will get more Credit per day than the reference. If that application is then optimised, then you will get even more Credit per day.
Theoretical efficiencies have nothing to do with Credit granted.
Grant
Darwin NT
ID: 1789339 ·
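A minimal sketch of the reference-system idea above, in Python: credit depends only on the work unit's angle range, interpolated between calibrated tiers, never on the host that crunched it. The AR calibration values, like the post's own credit numbers, are invented for illustration and bear no relation to real Cobblestone values:

```python
# Toy fixed-credit table keyed by angle range on a hypothetical reference
# machine running a reference app. All numbers are made up for illustration.

CALIBRATION = [          # (angle_range, credit) calibration points
    (0.01, 100.0),       # VLAR: slowest on the reference app -> most credit
    (0.42, 50.0),        # mid-range
    (1.13, 5.0),         # shorty
]

def credit_for(ar: float) -> float:
    """Interpolate linearly between the calibrated AR tiers."""
    if ar <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    for (a0, c0), (a1, c1) in zip(CALIBRATION, CALIBRATION[1:]):
        if ar <= a1:
            return c0 + (c1 - c0) * (ar - a0) / (a1 - a0)
    return CALIBRATION[-1][1]   # beyond the last tier

print(credit_for(0.42))  # exactly at the mid-range tier -> 50.0
```

A faster host then simply completes more WUs per day at the same per-WU credit, which is the post's point: hardware and app improvements show up as more Credit per day, with no efficiency fudge factor.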
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6324
Credit: 106,370,077
RAC: 121
Message 1789347 - Posted: 21 May 2016, 22:58:07 UTC - in response to Message 1789339.

Then the question arises: "how bad" should the reference app be?
Definitely that reference app can't just be the stock CPU app, because it has different code paths for different CPU capabilities, for example.
So:
1) where to find such an app...
2) how to translate credits between different apps (AP vs MB, MBv6 vs MBv7 (autocorr search added))
3) how to translate between different projects.

All these questions lead to the requirement of a more universal approach.
Because implemented as-is, it is even worse than the FLOP-counting "v2" approach, where the project scientist arbitrarily assigns some FLOPs to specific parts of the algorithm (arbitrarily because, as a trivial example, real floating-point performance also depends on memory throughput and access patterns) and credit is calculated by just adding up the FLOPs for the computed parts.

BTW, AFAIK some projects where the different apps really are very different (for example, where a few different subprojects are implemented) have indeed abandoned the whole idea of universal credit and grant separate entities (like different badges) for separate subprojects. That has its own advantages - for example, nobody will whine about AP vs MB credit disparity, because all will see that there are AstroCredits and MultiCredits....
ID: 1789347 ·
Brent Norman
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Message 1789348 - Posted: 21 May 2016, 23:08:47 UTC - in response to Message 1789347.

I would say that GBT has to be the new reference, since Eric has said that 90-plus percent of tasks will be coming from there.

Which app - CPU, CUDA, AMD - I will leave to the experts to figure out :)
ID: 1789348 ·
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6324
Credit: 106,370,077
RAC: 121
Message 1789349 - Posted: 21 May 2016, 23:13:26 UTC - in response to Message 1789347.

BTW, AFAIK some projects where the different apps really are very different (for example, where a few different subprojects are implemented) have indeed abandoned the whole idea of universal credit and grant separate entities (like different badges) for separate subprojects. That has its own advantages - for example, nobody will whine about AP vs MB credit disparity, because all will see that there are AstroCredits and MultiCredits....

And such a scheme would allow decoupling between the technical (scheduling!) meaning of RAC and credits and their social (competitive) meaning.
On the other hand, competition would be more complex, because there would be no single parameter to max out.

In short: leave CreditScrew fixing to scheduling improvements and revert to "v2" FLOP counting, with one important addition: always recall that those FLOPs come in "different colors" for different algorithms, so all credits are "colored" and should remain so (separate credit accounting for different algorithms).
This will make competition fair... but restricts it to a particular algorithm/app/subproject.
[Though "fairness" will be limited even here - MB and its AR curve... those colors will form a "continuous spectrum", LoL]
(PulseFind FLOPs of light green, autocorr of magenta and triplets of dark blue... what a nice palette we will have :D)
ID: 1789349 ·
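The "colored credits" idea above - a separate running total per subproject instead of one universal number - can be sketched as below. The subproject names, FLOP totals, and credit-per-GFLOP rate are all invented for illustration:

```python
# Toy per-subproject ("colored") credit ledger; totals are never mixed,
# so an AP credit and an MB credit are different "colors" by construction.
from collections import defaultdict

class ColoredCredit:
    def __init__(self) -> None:
        self.totals = defaultdict(float)  # one running total per subproject

    def grant(self, subproject: str, gflops: float,
              credit_per_gflop: float = 0.005) -> None:
        # "v2"-style FLOP counting, but credited only within its own color
        self.totals[subproject] += gflops * credit_per_gflop

ledger = ColoredCredit()
ledger.grant("MultiBeam", 40_000)    # MB task, 40,000 GFLOP (made up)
ledger.grant("Astropulse", 90_000)   # AP task, 90,000 GFLOP (made up)
print(dict(ledger.totals))
```

There is deliberately no method that sums across subprojects: the absence of a universal total is the design choice being described.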
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13334
Credit: 208,696,464
RAC: 304
Message 1789350 - Posted: 21 May 2016, 23:24:14 UTC - in response to Message 1789347.

BTW, AFAIK some projects where the different apps really are very different (for example, where a few different subprojects are implemented) have indeed abandoned the whole idea of universal credit and grant separate entities (like different badges) for separate subprojects. That has its own advantages - for example, nobody will whine about AP vs MB credit disparity, because all will see that there are AstroCredits and MultiCredits....

Which defeats the whole idea of BOINC credits.
Grant
Darwin NT
ID: 1789350 ·
Raistmer
Volunteer developer
Volunteer tester

Joined: 16 Jun 01
Posts: 6324
Credit: 106,370,077
RAC: 121
Message 1789353 - Posted: 21 May 2016, 23:28:28 UTC - in response to Message 1789350.

BTW, AFAIK some projects where the different apps really are very different (for example, where a few different subprojects are implemented) have indeed abandoned the whole idea of universal credit and grant separate entities (like different badges) for separate subprojects. That has its own advantages - for example, nobody will whine about AP vs MB credit disparity, because all will see that there are AstroCredits and MultiCredits....

Which defeats the whole idea of BOINC credits.

Yep. No universal credits. The "ref app" approach defeats it too, because there can't be a single reference app for DIFFERENT algorithms...
ID: 1789353 ·