A very steep decline in Average Credits!!!
Wiggo Send message Joined: 24 Jan 00 Posts: 36783 Credit: 261,360,520 RAC: 489 |
I've been running dual GPU's since MB V6 and towards the end of that run my then Q6600 with dual GTX 660's, an Athlon X4 630 with dual GTX 550 Ti's and an C2D E6300 with a GTX 560 Ti were producing a RAC of 120K, things have been going backwards ever since. Cheers. |
JLDun Send message Joined: 21 Apr 06 Posts: 574 Credit: 196,101 RAC: 0 |
|
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
I not see any credit complains on E@H who uses a fix credit per WU crunched. ThatÅ› is exactly the same who happening in Seti (WU of the same type & with similar AR) if you look one host or i'm wrong? Anyway that's why i post: Will need to adjust the base value according the WU type (blc, Arecibo, AP, Vlars, etc) due the differences on crunching times, but that is easily to do Not a hard task to do a WU Type x AR x Credit simple table....... |
rob smith Send message Joined: 7 Mar 03 Posts: 22529 Credit: 416,307,556 RAC: 380 |
Just need a bit of calibration to produce a linear regression line for each of the "main" types of task (currently two, maybe three - VLAR, normal, VHAR) - then there's only two parameters needed for each type (slope & intercept). So simple.... The "big" fun is in setting the AR ranges for the types, as with the current data sets they are all low. What we need is a big pile of run times and ARs for CPU tasks, all on "simple" processor system running the stock application. Then spend a bit of time doing the banding and correlation. before getting the pruning shears out on the code and binning a lot of the complexity that is CreditScrew. Why stock application? - its the foundation on which all the others are built. Use a "low end" CPU to do the calibration, this will more readily show the (dis)advantages of using optimised applications and/or high performance processors. With a stable credit system it becomes easier to do comparisons between users and systems As others have either said or implied, driven from the data the credit reward per task is not going to be upset by the vagaries of the user. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe? |
juan BFP Send message Joined: 16 Mar 07 Posts: 9786 Credit: 572,710,851 RAC: 3,799 |
Simple, transparent & efficient. |
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835 |
Isn't the problem that the BLC splitters are just not assigning the proper flops estimate as compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) to only Arecibo tasks being present that would be the so called baseline. If the BLC tasks were assigned a lower flop estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality' ?? |
Wiggo Send message Joined: 24 Jan 00 Posts: 36783 Credit: 261,360,520 RAC: 489 |
Isn't the problem that the BLC splitters are just not assigning the proper flops estimate as compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) to only Arecibo tasks being present that would be the so called baseline. If the BLC tasks were assigned a lower flop estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality' ?? No. The problems with CreditScrew were easy enough to see well before GBT work was even thought of (although D.A. refused to see the problem or the evidence back then). Then GBT work, when it came along, just made the problem so much easier to see (but still D.A. insisted he was right and we were wrong), but now with just GBT work available even a blind monkey can see the problem, and so D.A. finally had to admit that his CreditNew was badly flawed. Cheers. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
Heck, as long as it provides consistent credit for a given amount of work done, you can call it kibbles for all I care. All I want is consistency, I really don't care about the actual number. 100, 1000, whatever. Just make it Consistent! That was the system that preceded Credit New. Grant Darwin NT |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Isn't the problem that the BLC splitters are just not assigning the proper flops estimate as compared to what the Arecibo ones did/do? Since the system was 'tuned' (using the term loosely) to only Arecibo tasks being present that would be the so called baseline. If the BLC tasks were assigned a lower flop estimate, we wouldn't be finishing them in less time than estimated, which should stabilize 'normality' ?? The splitters have nothing to do with how the flops estimate is generated. Their task is simply splitting the tapes. The CreditNew algorithm in the main server code is what determines the flops estimate, according to the math and logic in the CreditNew Wiki link that has been posted many times now. I think the main key is to focus on these parts of the Wiki: Notes: I think the host normalization function is very broken.
We just need to persuade DA to take action on the developer issue already logged instead of just kicking the can down the road. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
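The host normalization Keith flags can be pictured as claims being scaled toward the population average. This is a rough, hedged sketch of that general mechanism, not the actual CreditNew server code; the function name and numbers are invented.

```python
# Rough sketch of a host-normalization step of the kind discussed
# above (NOT the real CreditNew source): a host's claimed credit is
# scaled by how its running average compares to the average claim
# across all hosts, pulling outliers back toward the mean.

def normalized_claim(host_claim: float, host_avg: float,
                     pop_avg: float) -> float:
    """Scale a host's claim toward the population average claim."""
    scale = pop_avg / host_avg if host_avg > 0 else 1.0
    return host_claim * scale

# A fast GPU host whose claims average 300 while the population
# averages 100 gets its claim cut to a third:
print(normalized_claim(300.0, 300.0, 100.0))  # -> 100.0
```

If the population average is dominated by slow CPU hosts, a mechanism like this systematically shaves the claims of fast GPU hosts, which matches the complaint in this thread.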
Jimbocous Send message Joined: 1 Apr 13 Posts: 1856 Credit: 268,616,081 RAC: 1,349 |
I not see any credit complains on E@H who uses a fix credit per WU crunched. ??? I only do GPU work there, but I see a tremendous variation in runtime for that same 3450 points, anywhere from 3-4 minutes to as much as 30. I turned off CPU work, as I very seldom run out of SETI CPU tasks. But I don't care for the fixed approach, given the variations I see. |
Wiggo Send message Joined: 24 Jan 00 Posts: 36783 Credit: 261,360,520 RAC: 489 |
That was the system that preceded Credit New. Yes, that was when an AP was worth 1344 credits and doing MB's over the same time period was worth just as much. Now those were the days. In fact the credit received here wasn't that much below what Einstein paid at that time (and still pays), but now there's a huge difference between the 2 projects, because Einstein didn't change over to CreditNew; even they could see back then that there was a big problem with it. Cheers. |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13854 Credit: 208,696,464 RAC: 304 |
The host normalization mechanism reduces the claimed credit of hosts that are less efficient than average, and increases the claimed credit of hosts that are more efficient than average. One of the major issues is the hang-up about efficiency, and how efficiency is determined. It uses a claimed value for GFLOPs for the device, and then a calculated (not actual counted) value for the FLOPS when processing a WU. By this system, GPUs are considered extremely inefficient because their claimed FLOPS rating is much more than the estimated FLOPs when processing a WU. Yet by any normal measure of efficiency- WUs processed per hour, WattHours of power used to process a WU etc, GPUs leave CPUs for dead. And remember when SoG was released on main how Credit took a hit? A more efficient GPU application than previously existed resulted in less Credit being granted, when it should have granted more for the increase in (theoretical) efficiency. Grant Darwin NT |
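Grant's point about the efficiency metric can be made concrete with a toy calculation. Illustrative numbers only: a CreditNew-style "efficiency" compares a device's peak FLOPS rating against the FLOPs it is estimated to deliver on a WU, so a GPU scores far worse than a CPU even while finishing many times more WUs per hour.

```python
# Toy illustration of the efficiency hang-up described above.
# All numbers are invented; this is not SETI@home code.

def creditnew_style_efficiency(est_wu_flops: float, runtime_s: float,
                               peak_flops: float) -> float:
    """Estimated delivered FLOPS divided by the device's peak rating."""
    return (est_wu_flops / runtime_s) / peak_flops

WU_FLOPS = 30e12  # invented per-WU flop estimate

# CPU: 50 GFLOPS peak, 3 hours per WU -> scores high on this metric
cpu_eff = creditnew_style_efficiency(WU_FLOPS, 3 * 3600, 50e9)
# GPU: 5 TFLOPS peak, 10 minutes per WU -> scores low on this metric
gpu_eff = creditnew_style_efficiency(WU_FLOPS, 600, 5e12)

print(f"CPU {cpu_eff:.3f} vs GPU {gpu_eff:.3f}")
# Yet by actual throughput the GPU finishes 18x as many WUs per hour.
```

By this metric the CPU looks several times more "efficient" than the GPU, while by any throughput or watt-hours measure the GPU wins easily, which is the inversion Grant describes.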
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
I not see any credit complains on E@H who uses a fix credit per WU crunched. I assume the 30 minute tasks are by the 750Ti??? Certainly not from the 980's. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
Exactly. Somewhere back in the cobwebs of my memory, I believe it was mentioned that the efficiency calculation was tied into the then new AVX applications. The AVX application FLOPS estimate might be somewhat correct for CPU tasks, but it falls down on the Lunatics optimized apps and most certainly bears no resemblance to any of the GPU apps. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Keith Myers Send message Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
By this system, GPUs are considered extremely inefficient because their claimed FLOPS rating is much more than the estimated FLOPs when processing a WU I believe that the "Sanity Checking", "Cheat Prevention" and "Cherry Picking" parts of CreditNew are partly to blame for the reduced credits. The mechanisms treat the real flops of a GPU as the result of a cheat or cherry-picking scenario, so the credits are reduced to what they believe the normalized fpops estimate should be. The CreditNew algorithm can't comprehend the enormous increase in the processing performance of modern GPUs that has occurred since the algorithm was created. Seti@Home classic workunits:20,676 CPU time:74,226 hours A proud member of the OFA (Old Farts Association) |
Jimbocous Send message Joined: 1 Apr 13 Posts: 1856 Credit: 268,616,081 RAC: 1,349 |
I not see any credit complains on E@H who uses a fix credit per WU crunched. Yeah, due to power and real estate constraints I have a mix of 1 750Ti and 1 or 2 980s in each machine. But I think I've taken that into consideration in the statement I made ... perhaps I should look again. |
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
I take it for granted that they fixed or killed CreditScrew during this extra long outage. MeowLOL. "Time is simply the mechanism that keeps everything from happening all at once." |
Cactus Bob Send message Joined: 19 May 99 Posts: 209 Credit: 10,924,287 RAC: 29 |
Credits per hour on my computer by project (plus or minus 10%). Seti was calculated using BLC02...vlars.
510 cr/hr - Seti@home (using 1 CPU & 1 GPU) - CreditNew
600 cr/hr - Skynet POGS (using 3 CPUs only) - CreditNew, I think
3,500 cr/hr - Milkyway (using 1 CPU & 1 GPU) - set credit/WU
15,200 cr/hr - Einstein (using .5 GPU and .5 GPU) - set credit/WU
I know it's not Seti's fault or problem, but if I want to share work among all the above projects, simply using a straight credit value is no use at all. I would have to use the above numbers as factors, and that would work until the WUs were changed and their value and run times changed; then you would have to recalculate it all over again once you had run enough WUs for a benchmark. Some form of consistency within and then across projects would be awesome. My 2 cents Bob Sometimes I wonder, what happened to all the people I gave directions to? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ SETI@home classic workunits 4,321 SETI@home classic CPU time 22,169 hours |
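Bob's "use the above numbers as factors" idea amounts to dividing each project's measured credits-per-hour by a chosen baseline. A quick sketch using his own figures, with SETI as the baseline:

```python
# Cross-project normalization factors from the measured rates above.
# Rates are the cr/hr figures reported in the post; SETI is the baseline.

CR_PER_HR = {
    "Seti@home": 510,
    "Skynet POGS": 600,
    "Milkyway": 3500,
    "Einstein": 15200,
}

baseline = CR_PER_HR["Seti@home"]
factors = {proj: rate / baseline for proj, rate in CR_PER_HR.items()}

for proj, factor in factors.items():
    print(f"{proj}: {factor:.1f}x SETI's rate")
# Einstein pays roughly 30x what SETI does per hour on this host,
# which is why a raw shared credit target is useless across projects.
```

As the post notes, these factors only hold until a project's WU mix changes, at which point the whole calibration has to be redone.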
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004 |
I would be more than pleased just to have it consistent within Seti's own borders. "Time is simply the mechanism that keeps everything from happening all at once." |
W-K 666 Send message Joined: 18 May 99 Posts: 19400 Credit: 40,757,560 RAC: 67 |
+1 |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.