A very steep decline in Average Credits!!!

Message boards : Number crunching : A very steep decline in Average Credits!!!

Profile Wiggo
Joined: 24 Jan 00
Posts: 36365
Credit: 261,360,520
RAC: 489
Australia
Message 1914633 - Posted: 22 Jan 2018, 20:24:22 UTC

I've been running dual GPUs since MB v6, and towards the end of that run my then Q6600 with dual GTX 660s, an Athlon X4 630 with dual GTX 550 Tis, and a C2D E6300 with a GTX 560 Ti were producing a RAC of 120K; things have been going backwards ever since.

Cheers.
JLDun
Volunteer tester
Joined: 21 Apr 06
Posts: 573
Credit: 196,101
RAC: 0
United States
Message 1914634 - Posted: 22 Jan 2018, 20:25:09 UTC - in response to Message 1912836.  

https://setiweb.ssl.berkeley.edu/beta/

Beaten to it, but made a linky.....
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1914641 - Posted: 22 Jan 2018, 20:52:06 UTC - in response to Message 1914631.  
Last modified: 22 Jan 2018, 20:54:16 UTC

I don't see any credit complaints on E@H, which uses a fixed credit per WU crunched.

Very easy there, as you know each WU on a given host has the same run time.

That's exactly what happens in SETI too (WUs of the same type and with similar AR) if you look at a single host, or am I wrong?

Anyway, that's why I posted:

We will need to adjust the base value according to the WU type (BLC, Arecibo, AP, VLAR, etc.) due to the differences in crunching times, but that is easy to do.


It's not a hard task to build a simple WU Type x AR x Credit table.
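To make that concrete, here is a minimal sketch of such a lookup in Python; the task types, AR thresholds and credit values are made-up placeholders for illustration, not proposed numbers:

    # Hypothetical fixed-credit lookup: WU type x AR band -> credit per task.
    # Every type name, threshold and credit value here is an invented placeholder.
    CREDIT_TABLE = {
        ("arecibo", "vlar"):    90.0,
        ("arecibo", "normal"):  70.0,
        ("arecibo", "vhar"):    50.0,
        ("blc",     "vlar"):   110.0,
        ("blc",     "normal"):  85.0,
        ("blc",     "vhar"):    60.0,
        ("ap",      "any"):    600.0,
    }

    def ar_band(wu_type, angle_range):
        """Map a task's angle range onto a coarse band (thresholds are invented)."""
        if wu_type == "ap":
            return "any"
        if angle_range < 0.05:
            return "vlar"
        if angle_range > 1.0:
            return "vhar"
        return "normal"

    def fixed_credit(wu_type, angle_range):
        return CREDIT_TABLE[(wu_type, ar_band(wu_type, angle_range))]

    print(fixed_credit("blc", 0.01))      # 110.0
    print(fixed_credit("arecibo", 0.42))  # 70.0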
rob smith Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer moderator
Volunteer tester

Joined: 7 Mar 03
Posts: 22448
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1914643 - Posted: 22 Jan 2018, 21:14:36 UTC

Just need a bit of calibration to produce a linear regression line for each of the "main" types of task (currently two, maybe three - VLAR, normal, VHAR) - then there are only two parameters needed for each type (slope & intercept). So simple....
The "big" fun is in setting the AR ranges for the types, as with the current data sets they are all low.

What we need is a big pile of run times and ARs for CPU tasks, all on a "simple" processor system running the stock application. Then spend a bit of time doing the banding and correlation, before getting the pruning shears out on the code and binning a lot of the complexity that is CreditScrew.

Why the stock application? It's the foundation on which all the others are built.

Use a "low end" CPU to do the calibration; this will more readily show the (dis)advantages of using optimised applications and/or high-performance processors.

With a stable credit system it becomes easier to do comparisons between users and systems.

As others have either said or implied, a credit reward per task that is driven from the data is not going to be upset by the vagaries of the user.
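For what it's worth, here is a rough sketch of that calibration step in Python: a two-parameter (slope & intercept) least-squares fit per task type. The (AR, run time) samples are invented; real numbers would come from the stock-app reference CPU described above:

    from statistics import mean

    # Invented (angle_range, run_time_seconds) samples from a stock-app reference CPU.
    samples = {
        "vlar":   [(0.008, 14200), (0.010, 14600), (0.012, 15100)],
        "normal": [(0.42, 9800), (0.44, 9650), (0.45, 9500)],
        "vhar":   [(1.10, 4200), (1.15, 4050), (1.20, 3900)],
    }

    def fit_line(points):
        """Ordinary least-squares fit; returns (slope, intercept)."""
        xs, ys = zip(*points)
        mx, my = mean(xs), mean(ys)
        slope = (sum((x - mx) * (y - my) for x, y in points)
                 / sum((x - mx) ** 2 for x in points))
        return slope, my - slope * mx

    # Two parameters per task type, as described above.
    calibration = {task_type: fit_line(pts) for task_type, pts in samples.items()}

    def predicted_runtime(task_type, angle_range):
        slope, intercept = calibration[task_type]
        return slope * angle_range + intercept

    print(calibration["normal"])            # (slope, intercept) for "normal" tasks
    print(predicted_runtime("vhar", 1.12))  # expected run time on the reference CPU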
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
juan BFP Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 1914646 - Posted: 22 Jan 2018, 22:03:14 UTC - in response to Message 1914643.  



Simple, transparent & efficient.
Profile Brent Norman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 1 Dec 99
Posts: 2786
Credit: 685,657,289
RAC: 835
Canada
Message 1914675 - Posted: 23 Jan 2018, 3:11:44 UTC

Isn't the problem that the BLC splitters are just not assigning the proper flops estimate compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) when only Arecibo tasks were present, that would be the so-called baseline. If the BLC tasks were assigned a lower flops estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality', shouldn't it?
Profile Wiggo
Joined: 24 Jan 00
Posts: 36365
Credit: 261,360,520
RAC: 489
Australia
Message 1914677 - Posted: 23 Jan 2018, 3:29:46 UTC

Isn't the problem that the BLC splitters are just not assigning the proper flops estimate compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) when only Arecibo tasks were present, that would be the so-called baseline. If the BLC tasks were assigned a lower flops estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality', shouldn't it?

No. The problems with CreditScrew were easy enough to see well before GBT work was even thought of (although D.A. refused to see the problem or the evidence for it back then). Then GBT work, when it came along, made the problem so much easier to see (but still D.A. insisted he was right and we were wrong). Now, with just GBT work available, even a blind monkey can see the problem, and so D.A. had to finally admit that his CreditNew was badly flawed.

Cheers.
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13835
Credit: 208,696,464
RAC: 304
Australia
Message 1914679 - Posted: 23 Jan 2018, 3:48:43 UTC - in response to Message 1914603.  

Heck, as long as it provides consistent credit for a given amount of work done, you can call it kibbles for all I care. All I want is consistency, I really don't care about the actual number. 100, 1000, whatever. Just make it Consistent!

That was the system that preceded Credit New.
Grant
Darwin NT
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1914681 - Posted: 23 Jan 2018, 3:54:55 UTC - in response to Message 1914675.  

Isn't the problem that the BLC splitters are just not assigning the proper flops estimate compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) when only Arecibo tasks were present, that would be the so-called baseline. If the BLC tasks were assigned a lower flops estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality', shouldn't it?

The splitters have nothing to do with how the flops estimate is generated. Their task is simply splitting the tapes. The CreditNew algorithm in the main server code is what determines the flops estimate, according to the math and logic posted in the CreditNew Wiki link that has been posted many times now. I think the main key is to focus on these parts of the Wiki:

Notes:

- Version normalization is only applied if at least two versions are above sample threshold.
- Version normalization addresses the common situation where an app's GPU version is much less efficient than the CPU version (i.e. the ratio of actual FLOPs to peak FLOPs is much less). To a certain extent, this mechanism shifts the system towards the "Actual FLOPs" philosophy, since credit is granted based on the most efficient app version. It's not exactly "Actual FLOPs", since the most efficient version may not be 100% efficient.
- If jobs are not distributed uniformly among versions (e.g. if SETI@home VLAR jobs are done only by the CPU version) then this mechanism doesn't work as intended. One solution is to create separate apps for separate types of jobs.
- Cheating or erroneous hosts can influence app_version.pfc_avg to some extent. This is limited by the "sanity check" mechanism, and by the fact that only validated jobs are used. The effect on credit will be negated by host normalization (see below). There may be an adverse effect on cross-version normalization. This could be eliminated by computing app_version.pfc_avg as the sample-median value of host_app_version.pfc_avg.


I think the host normalization function is very broken.

The second normalization is across hosts. Assume jobs for a given app are distributed uniformly among hosts. Then the average credit per job should be the same for all hosts.

To achieve this, we scale PFC by the factor

app_version.pfc_avg / host_app_version.pfc_avg

This scaling is only done if both statistics are above sample threshold.

There are some cases where hosts are not sent jobs uniformly:

- job-size matching (smaller jobs sent to slower hosts)
- GPUGrid.net's scheme for sending some (presumably larger) jobs to GPUs with more processors.
The normalization by wu.fpops_est handles this (assuming that it's set correctly).

Notes:

- For apps with large variance of job sizes, the host normalization mechanism is vulnerable to a type of cheating called "cherry picking". A mechanism for defeating this is described below.
- The host normalization mechanism reduces the claimed credit of hosts that are less efficient than average, and increases the claimed credit of hosts that are more efficient than average.


We just need to persuade DA to take action on the developer issue already logged instead of just kicking the can down the road.
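To put the quoted host-normalisation step in concrete terms, here is a paraphrase in Python. It is not the actual BOINC server code; the variable names and the sample-threshold value are made up, and only the scaling itself follows the wiki text:

    SAMPLE_THRESHOLD = 10  # invented value; the wiki only says "above sample threshold"

    def host_normalised_pfc(raw_pfc, app_version_pfc_avg, host_app_version_pfc_avg,
                            app_version_samples, host_samples):
        """Scale a task's peak-FLOP-count claim by app_version.pfc_avg / host_app_version.pfc_avg."""
        if app_version_samples >= SAMPLE_THRESHOLD and host_samples >= SAMPLE_THRESHOLD:
            return raw_pfc * (app_version_pfc_avg / host_app_version_pfc_avg)
        return raw_pfc  # below threshold: no scaling is applied

    # A host whose average PFC is half the app-version average (i.e. it is more
    # efficient than average) has its claim scaled up, and vice versa:
    print(host_normalised_pfc(100.0, app_version_pfc_avg=4.0,
                              host_app_version_pfc_avg=2.0,
                              app_version_samples=50, host_samples=50))  # 200.0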
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1856
Credit: 268,616,081
RAC: 1,349
United States
Message 1914683 - Posted: 23 Jan 2018, 3:58:38 UTC - in response to Message 1914631.  

I don't see any credit complaints on E@H, which uses a fixed credit per WU crunched.

Very easy there, as you know each WU on a given host has the same run time.

???
I only do GPU work there, but I see a tremendous variation in runtime for that same 3450 points, anywhere from 3-4 minutes to as much as 30.
I turned off CPU work, as I very seldom run out of SETI CPU tasks.
But I don't care for the fixed approach, given the variations I see.
Profile Wiggo
Joined: 24 Jan 00
Posts: 36365
Credit: 261,360,520
RAC: 489
Australia
Message 1914684 - Posted: 23 Jan 2018, 4:06:19 UTC

That was the system that preceded Credit New.

Yes, that was when an AP was worth 1344 credits and doing MBs over the same time period was worth just as much. Now those were the days.

In fact, the credit received here wasn't that much below what Einstein paid at that time (and still pays), but now there's a huge difference between the two projects, since Einstein never changed over to CreditNew; even back then they could see that there was a big problem with it.

Cheers.
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13835
Credit: 208,696,464
RAC: 304
Australia
Message 1914685 - Posted: 23 Jan 2018, 4:07:26 UTC - in response to Message 1914681.  

The host normalization mechanism reduces the claimed credit of hosts that are less efficient than average, and increases the claimed credit of hosts that are more efficient than average.

We just need to persuade DA to take action on the developer issue already logged instead of just kicking the can down the road.

One of the major issues is the hang-up about efficiency, and how efficiency is determined. It uses a claimed value for the device's peak GFLOPS, and then a calculated (not actually counted) value for the FLOPs done when processing a WU.
By this system, GPUs are considered extremely inefficient because their claimed FLOPS rating is much higher than the estimated FLOPs when processing a WU. Yet by any normal measure of efficiency - WUs processed per hour, watt-hours of power used per WU, etc. - GPUs leave CPUs for dead.
And remember when SoG was released on main, how Credit took a hit? A more efficient GPU application than previously existed resulted in less Credit being granted, when it should have been granted more for the increase in (theoretical) efficiency.
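As a back-of-the-envelope illustration of how that efficiency figure is arrived at (every number below is invented; only the shape of the calculation matters):

    # "Efficiency" the way CreditNew is described as seeing it: estimated FLOPs
    # delivered per second, divided by the device's claimed peak FLOPS.
    # All numbers are invented for illustration.
    def claimed_efficiency(wu_fpops_est, elapsed_s, peak_flops):
        return (wu_fpops_est / elapsed_s) / peak_flops

    WU_FPOPS_EST = 4.5e13  # the same task in both cases

    cpu = claimed_efficiency(WU_FPOPS_EST, elapsed_s=2_000, peak_flops=5.0e10)
    gpu = claimed_efficiency(WU_FPOPS_EST, elapsed_s=300,   peak_flops=5.0e12)

    print(f"CPU 'efficiency': {cpu:.2f}")  # ~0.45
    print(f"GPU 'efficiency': {gpu:.3f}")  # ~0.03

By this measure the GPU looks about 15 times less efficient even though it returned the task roughly 7 times sooner, which is the distortion being described.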
Grant
Darwin NT
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1914696 - Posted: 23 Jan 2018, 5:33:35 UTC - in response to Message 1914683.  

I don't see any credit complaints on E@H, which uses a fixed credit per WU crunched.

Very easy there, as you know each WU on a given host has the same run time.

???
I only do GPU work there, but I see a tremendous variation in runtime for that same 3450 points, anywhere from 3-4 minutes to as much as 30.
I turned off CPU work, as I very seldom run out of SETI CPU tasks.
But I don't care for the fixed approach, given the variations I see.

I assume the 30-minute tasks are from the 750 Ti??? Certainly not from the 980s.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1914697 - Posted: 23 Jan 2018, 5:39:18 UTC - in response to Message 1914685.  


One of the major issues is the hang-up about efficiency, and how efficiency is determined. It uses a claimed value for the device's peak GFLOPS, and then a calculated (not actually counted) value for the FLOPs done when processing a WU.
By this system, GPUs are considered extremely inefficient because their claimed FLOPS rating is much higher than the estimated FLOPs when processing a WU. Yet by any normal measure of efficiency - WUs processed per hour, watt-hours of power used per WU, etc. - GPUs leave CPUs for dead.
And remember when SoG was released on main, how Credit took a hit? A more efficient GPU application than previously existed resulted in less Credit being granted, when it should have been granted more for the increase in (theoretical) efficiency.

Exactly. Somewhere back in the cobwebs of my memory, I believe it was mentioned that the efficiency calculation was tied to the then-new AVX applications. The AVX application FLOPS estimate might be somewhat correct for CPU tasks, but it falls down on the Lunatics optimized apps and most certainly bears no resemblance to any of the GPU apps.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1914698 - Posted: 23 Jan 2018, 5:50:31 UTC - in response to Message 1914685.  

By this system, GPUs are considered extremely inefficient because their claimed FLOPS rating is much higher than the estimated FLOPs when processing a WU.


I believe that the "Sanity Checking", "Cheat Prevention" and "Cherry Picking" parts of CreditNew are partly to blame for the reduced credits. The mechanisms treat the real flops of a GPU as the result of a cheating or cherry-picking scenario, and the credits get reduced to what they believe the normalized fpops estimate should be.

The CreditNew algorithm can't comprehend the exponential increase in the processing performance of modern GPUs that has occurred since the algorithm was created.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
Profile Jimbocous Project Donor
Volunteer tester
Joined: 1 Apr 13
Posts: 1856
Credit: 268,616,081
RAC: 1,349
United States
Message 1914706 - Posted: 23 Jan 2018, 8:53:20 UTC - in response to Message 1914696.  

I don't see any credit complaints on E@H, which uses a fixed credit per WU crunched.

Very easy there, as you know each WU on a given host has the same run time.

???
I only do GPU work there, but I see a tremendous variation in runtime for that same 3450 points, anywhere from 3-4 minutes to as much as 30.
I turned off CPU work, as I very seldom run out of SETI CPU tasks.
But I don't care for the fixed approach, given the variations I see.

I assume the 30-minute tasks are from the 750 Ti??? Certainly not from the 980s.

Yeah, due to power and real-estate constraints I have a mix of one 750 Ti and one or two 980s in each machine.
But I think I've taken that into consideration in the statement I made ... perhaps I should look again.
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1914851 - Posted: 24 Jan 2018, 16:15:17 UTC - in response to Message 1914844.  

I take it for granted that they fixed or killed CreditScrew during this extra-long outage.
Hehe.....
LOL

MeowLOL.
"Time is simply the mechanism that keeps everything from happening all at once."

Profile Cactus Bob
Joined: 19 May 99
Posts: 209
Credit: 10,924,287
RAC: 29
Canada
Message 1914853 - Posted: 24 Jan 2018, 16:44:47 UTC

Credits per hour on my computer by project (plus or minus 10%).
SETI was calculated using BLC02...vlars.

   510 cr/hr - SETI@home   (using 1 CPU & 1 GPU)      CreditNew
   600 cr/hr - Skynet POGS (using 3 CPUs only)        CreditNew, I think
 3,500 cr/hr - Milkyway    (using 1 CPU & 1 GPU)      set credit/WU
15,200 cr/hr - Einstein    (using .5 GPU and .5 GPU)  set credit/WU

I know it's not SETI's fault or problem, but if I want to share work among all the above projects, simply using a straight credit value is no use at all. I would have to use the above numbers as factors, and that would only work until the WUs changed and their value and run time changed; then you would have to recalculate it all over again once you had run enough WUs for a benchmark.
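A small sketch of that factor approach in Python, using the credit/hour figures from the table above (taking SETI as the baseline is just one arbitrary choice):

    # Turn observed credit/hour into per-project weighting factors, so equal work
    # can be compared on a common scale. Rates are the figures posted above.
    observed_cr_per_hr = {
        "SETI@home":   510,
        "Skynet POGS": 600,
        "Milkyway":   3500,
        "Einstein":  15200,
    }

    baseline = observed_cr_per_hr["SETI@home"]
    factors = {project: baseline / rate for project, rate in observed_cr_per_hr.items()}

    for project, factor in factors.items():
        print(f"{project:12s} x {factor:.3f}")

    # Multiplying each project's granted credit by its factor puts everything on a
    # SETI-equivalent scale -- until the WU mix changes and the rates must be re-measured.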

Some form of consistency within and then across projects would be awesome.

My 2 cents

Bob
Sometimes I wonder, what happened to all the people I gave directions to?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SETI@home classic workunits 4,321
SETI@home classic CPU time 22,169 hours
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51477
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1914856 - Posted: 24 Jan 2018, 17:06:38 UTC - in response to Message 1914853.  



Some form of consistency within and then across projects would be awesome.

My 2 cents

Bob

I would be more than pleased just to have it consistent within Seti's own borders.
"Time is simply the mechanism that keeps everything from happening all at once."

W-K 666 Project Donor
Volunteer tester

Joined: 18 May 99
Posts: 19314
Credit: 40,757,560
RAC: 67
United Kingdom
Message 1914889 - Posted: 24 Jan 2018, 19:25:03 UTC - in response to Message 1914856.  



Some form of consistency within and then across projects would be awesome.

My 2 cents

Bob

I would be more than pleased just to have it consistent within Seti's own borders.

+1