A very steep decline in Average Credits!!!

Stephen "Heretic" (Volunteer tester)
Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628 · Australia
Message 1913366 - Posted: 16 Jan 2018, 12:53:57 UTC - in response to Message 1912507.  

I've decided, after the fall in RAC, to run other projects and not focus on RAC; with just one laptop it was bound to happen. I'm running Milkyway@Home; what others are good?

I run Seti for the science, not for the Credit (yes, it would be nice to have our contributions acknowledged, but I'm doing the work because I like the concept of the project. Is there anyone else out there? We won't know if we don't look).


. . Sums it up.

Stephen

:)

Stephen "Heretic" (Volunteer tester)
Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628 · Australia
Message 1913367 - Posted: 16 Jan 2018, 12:56:35 UTC - in response to Message 1912531.  

Is it pure coincidence that we are all seeing our RAC drop after probably installing a certain patch from Microsoft in the last 7 days that was headlined to slow PCs down by up to 30%?


. . The drop in RAC is even more pronounced on my Linux machines than on my Windows machine.

Stephen

:(
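For context on why a change in per-task credit shows up so quickly in these numbers: BOINC's Recent Average Credit (RAC) is an exponentially decaying average with a half-life of one week. The following is a simplified sketch of that decay; the function name and numbers are illustrative, loosely following BOINC's update logic rather than reproducing it.

```python
import math

HALF_LIFE = 7 * 24 * 3600  # RAC half-life: one week, in seconds

def update_rac(rac, new_credit, dt):
    """Decay the old RAC over dt seconds, then fold in newly granted credit.

    Simplified from BOINC's update_average(); the real code also handles
    credit granted partway through the interval.
    """
    decay = math.exp(-dt * math.log(2) / HALF_LIFE)
    return rac * decay + new_credit * (1 - decay)

# A host that suddenly stops earning credit sees its RAC halve every week:
rac = 1000.0
for _ in range(3):
    rac = update_rac(rac, new_credit=0.0, dt=HALF_LIFE)
print(round(rac, 1))  # 125.0
```

The same decay works in the other direction: when per-task awards fall, RAC converges toward the new, lower level within a few weeks, which matches the steep declines reported in this thread.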

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913378 - Posted: 17 Jan 2018, 1:24:29 UTC - in response to Message 1913341.  


I watch my machines quite closely and have not noticed sub-50 credit awards like this in the past. I did try something different during the latest work shortage: I decided to try the rescheduler for the first time. I used it to move several hundred GPU tasks to the CPU. Since I knew the machine would completely run out of work, I decided on a strategy to keep it fully loaded. I moved enough GPU tasks to the CPU to make sure the CPU would be fully loaded while I slept on Sunday evening. I then enabled mining on the GPUs. In the morning, I stopped mining and un-suspended the GPUs. Everything looked normal. The only thing that doesn't make sense is that it is not the rescheduled work that is getting the lower credit; it is the work that actually ran afterward on the GPUs. Does anyone think this is the cause?


No, that is NOT the cause. CreditScrew is the cause. Think about it. You moved tasks originally assigned to the GPUs on your system; the scheduler took into account the APR for the GPU application. You moved the GPU tasks temporarily to the CPU for bunkering. The scheduler and server have no knowledge of this action. You then moved your GPU tasks, temporarily stored in the CPU cache, back to the GPU cache, where you processed them during the outage.

What has changed? Nothing. You processed the originally assigned GPU tasks on the GPU as intended. You get 50 credits per task. Thank you, CreditScrew.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)

Zalster (Volunteer tester)
Joined: 27 May 99 · Posts: 5516 · Credit: 528,817,460 · RAC: 242 · United States
Message 1913384 - Posted: 17 Jan 2018, 1:43:43 UTC - in response to Message 1913378.  
Last modified: 17 Jan 2018, 1:44:06 UTC

You get 50 credits per task. Thank you CreditScrew.


You're getting 50 credits per task?? Who did you pay off?? WHO!!!!

edit...

Tell me!!

Mr. Kevvy (Volunteer moderator, Volunteer tester)
Joined: 15 May 99 · Posts: 3564 · Credit: 1,114,826,392 · RAC: 3,319 · Canada
Message 1913386 - Posted: 17 Jan 2018, 1:58:18 UTC - in response to Message 1913363.  

He has acknowledged a problem with credit, though: credit is a BOINC-wide problem, not a SETI problem. Issue #2132


Issue #2132 (Fixed URL)

RueiKe (Volunteer tester)
Joined: 14 Feb 16 · Posts: 492 · Credit: 378,512,430 · RAC: 785 · Taiwan
Message 1913394 - Posted: 17 Jan 2018, 2:28:45 UTC - in response to Message 1913378.  

No, that is NOT the cause. CreditScrew is the cause. Think about it. You moved tasks originally assigned to the GPUs on your system; the scheduler took into account the APR for the GPU application. You moved the GPU tasks temporarily to the CPU for bunkering. The scheduler and server have no knowledge of this action. You then moved your GPU tasks, temporarily stored in the CPU cache, back to the GPU cache, where you processed them during the outage.

What has changed? Nothing. You processed the originally assigned GPU tasks on the GPU as intended. You get 50 credits per task. Thank you, CreditScrew.

In my case, I am not doing "bunkering". I moved a bunch of WUs to the CPU and left them there. I was not trying to get more tasks, only to keep my CPU fully loaded during the outage. I only run SETI and LHC, and LHC doesn't have GPU tasks, so my plan was to move tasks from GPU to CPU to keep the CPU loaded during the outage and use the GPUs for mining. But if this is messing up the credit calculations for work done, then I won't do it.

Low, consistent credit is not an issue for me. What I like about LHC is that credit there is also very low, perhaps even harder to earn than at SETI. This makes the competitive computing aspect of it even more meaningful. I only raised the concern in this thread because some of the observations after rescheduling seemed extreme. Some tasks were even below 20 credits, so I'm still concerned that rescheduling is a factor. In this case there was also a shift in work unit types, so I'm still uncertain what happened.
GitHub: Ricks-Lab
Instagram: ricks_labs

Grant (SSSF) (Volunteer tester)
Joined: 19 Aug 99 · Posts: 13369 · Credit: 208,696,464 · RAC: 304 · Australia
Message 1913405 - Posted: 17 Jan 2018, 3:15:08 UTC

You can find out all about CreditNew here.
Grant
Darwin NT

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913406 - Posted: 17 Jan 2018, 3:23:30 UTC - in response to Message 1913384.  

You get 50 credits per task. Thank you CreditScrew.


You're getting 50 credits per task?? Who did you pay off?? WHO!!!!

edit...

Tell me!!


Chuckle! It looks like the brief tail-end of the tapes yesterday, in between the BLC05s and the new BLC02s (specifically the BLC13/14/25s), is awarded more credit than the fast BLC05s. I'm seeing shadows of the normal Arecibo credits in the 70s and 80s.

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913407 - Posted: 17 Jan 2018, 3:26:18 UTC - in response to Message 1913394.  

Rick, I misunderstood; your move was not bunkering, but in fact just rescheduling. That being the case, the APR for the original GPU tasks and the CreditNew algorithm are out of sync, since you processed the tasks on the CPU with an entirely different APR. That explains the discrepancy in the credit awarded.
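The mechanism being described can be sketched roughly as follows. This is an illustrative toy model, not the actual CreditNew code (BOINC's scheduler code is considerably more involved): the credit claim scales with elapsed time multiplied by the recorded average processing rate (APR) of the app version the task was assigned to, so a GPU-assigned task that actually runs on a much slower CPU feeds the algorithm an inconsistent estimate. All the APR and work-unit numbers below are invented.

```python
# Toy model of the APR mismatch described above (all numbers invented).
# BOINC's cobblestone: 200 credits per GFLOPS-day of computing.
CREDITS_PER_FLOP = 200 / 86400e9

def claimed_credit(elapsed_s, apr_gflops):
    """Credit claim from elapsed time and the app version's APR."""
    est_flops = elapsed_s * apr_gflops * 1e9
    return est_flops * CREDITS_PER_FLOP

GPU_APR = 500.0   # hypothetical GPU app APR, GFLOPS
CPU_APR = 20.0    # hypothetical CPU app APR, GFLOPS
WU_FLOPS = 3e14   # hypothetical true size of the work unit

gpu_elapsed = WU_FLOPS / (GPU_APR * 1e9)   # 600 s on the GPU
cpu_elapsed = WU_FLOPS / (CPU_APR * 1e9)   # 15,000 s on the CPU

# Run where assigned: elapsed time and APR agree, so the claim is sane.
consistent = claimed_credit(gpu_elapsed, GPU_APR)
# Rescheduled to the CPU but still scored against the GPU version's APR:
mismatched = claimed_credit(cpu_elapsed, GPU_APR)
print(round(consistent), round(mismatched))  # 694 17361
```

CreditNew then normalizes and clamps such outliers against wingmates and sanity limits, which in practice tends to surface as the erratic, often low awards discussed in this thread.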

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913408 - Posted: 17 Jan 2018, 3:28:07 UTC - in response to Message 1913405.  

You can find out all about CreditNew here.

Always good to provide the link to the explanation of CreditNew when the question arises anew. Thanks, Grant.

Jeff Buck (Volunteer tester)
Joined: 11 Feb 00 · Posts: 1441 · Credit: 148,764,870 · RAC: 0 · United States
Message 1913415 - Posted: 17 Jan 2018, 4:11:09 UTC

If somebody has some hard data showing just what the impact of rescheduling is on granted credits, or can run some new tests to generate a comparison, I think it would be very useful. When I first experimented with rescheduling in June of 2016, there were some people who said it did affect credit and others who said that was a myth that had already been put to rest long before.

So, just to make sure that my own rescheduling wasn't messing up other people's credit, I did some fairly extensive comparisons. My results were posted in Message 1799300. The conclusion I reached, based on those results, was that rescheduling had "no more impact to the credits than is caused by the random number generator that assigns them in the first place."

Rescheduling at that time simply meant moving Guppi VLARs that were originally assigned to the GPUs over to the CPUs, and moving non-VLAR Arecibo tasks that were originally assigned to the CPUs over to the GPUs. So, yes, tasks were being run on a different device than what they were originally assigned to, which is the issue that is being raised again here.

Now, perhaps things have changed in some way in the last year and a half, such that my previous conclusion is no longer valid. If so, I think new testing and documented results would be needed to demonstrate it.

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913434 - Posted: 17 Jan 2018, 6:00:23 UTC - in response to Message 1913415.  

How would you test for that? It's easy enough to run the same task in the benchmark apparatus on a CPU and then on a GPU, but how do you submit the same task to the project for validation and credit award?

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913435 - Posted: 17 Jan 2018, 6:07:22 UTC - in response to Message 1913434.  

Thanks for the link, Jeff. I guess I missed that thread entirely; I see it was the inception of the rescheduler concept.

RueiKe (Volunteer tester)
Joined: 14 Feb 16 · Posts: 492 · Credit: 378,512,430 · RAC: 785 · Taiwan
Message 1913454 - Posted: 17 Jan 2018, 9:15:10 UTC - in response to Message 1913415.  

If somebody has some hard data showing just what the impact of rescheduling is on granted credits, or can run some new tests to generate a comparison, I think it would be very useful. When I first experimented with rescheduling in June of 2016, there were some people who said it did affect credit and others who said that was a myth that had already been put to rest long before.

So, just to make sure that my own rescheduling wasn't messing up other people's credit, I did some fairly extensive comparisons. My results were posted in Message 1799300. The conclusion I reached, based on those results, was that rescheduling had "no more impact to the credits than is caused by the random number generator that assigns them in the first place."

Rescheduling at that time simply meant moving Guppi VLARs that were originally assigned to the GPUs over to the CPUs, and moving non-VLAR Arecibo tasks that were originally assigned to the CPUs over to the GPUs. So, yes, tasks were being run on a different device than what they were originally assigned to, which is the issue that is being raised again here.

Now, perhaps things have changed in some way in the last year and a half, such that my previous conclusion is no longer valid. If so, I think new testing and documented results would be needed to demonstrate it.

My results also show that rescheduled work from GPU to CPU gets normal, if not higher, credit; see this example: 6317203128. My observation is that non-rescheduled WUs that ran after the rescheduling event get lower credit. This could be a result of the post-outage WUs being very different from the WUs before, but I was concerned that something is going on with the credit calculation after rescheduling. Did the rescheduled work somehow change the reference for the credit calculation of new WUs? Can information be extracted for the 2 WUs I referenced to verify this?

Stargate (SA) (Volunteer tester)
Joined: 4 Mar 10 · Posts: 1852 · Credit: 2,258,721 · RAC: 0 · Australia
Message 1913455 - Posted: 17 Jan 2018, 9:20:49 UTC

As I love doing Seti@home, would doing the Beta version be a good idea or a waste of time?

Cheers in advance
Steve

Kissagogo27
Joined: 6 Nov 99 · Posts: 711 · Credit: 8,032,827 · RAC: 62 · France
Message 1913469 - Posted: 17 Jan 2018, 11:40:51 UTC

When I have a look at my validated WUs, I notice shorter crunch times for these BLC WUs; perhaps a lot of BLC shorties, all VLAR_0 or VLAR_1, for GPU/CPU? A normal GPU BLC WU earns about 100 credits in 40 min (longer than Arecibo WUs); these ones take 30 min for 50-75 credits.

For my CPU, the last "normal" BLC WUs took about 280 min each (Arecibo more than 300 min); now they take 180 to 300 min for 50-75 credits.

rob smith (Volunteer moderator, Volunteer tester)
Joined: 7 Mar 03 · Posts: 21003 · Credit: 416,307,556 · RAC: 380 · United Kingdom
Message 1913501 - Posted: 17 Jan 2018, 14:47:59 UTC - in response to Message 1913455.  
Last modified: 17 Jan 2018, 14:52:10 UTC

Beta is only a test platform for new applications (server or cruncher); it does not directly contribute to the science of SETI@Home. As such, Beta does tend to "pay" lower than one would expect, and can result in some very strange behaviour...
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?

Keith Myers (Volunteer tester)
Joined: 29 Apr 01 · Posts: 13155 · Credit: 1,160,866,277 · RAC: 1,873 · United States
Message 1913543 - Posted: 17 Jan 2018, 18:41:07 UTC
Last modified: 17 Jan 2018, 18:42:00 UTC

I don't think you can draw any valid conclusions yet, Rick. I think you have to run the test for a much longer period, both to stabilize the APR for the rescheduled tasks and to cover a much bigger variety of task types. We've already seen that there is a large difference in task times between BLC types, and even changes in task times within the same type, specifically between the early BLC04s and BLC13s of past months and the more recent versions.

It all comes down to how BOINC interprets the "difficulty" of the task. I've posted about the very large difference in credit awards I see over at GPUGrid.net for CPU tasks: some tasks got 180 credits and some other tasks got 7 credits for basically the same running time. The explanation was that the difference was due to the size of the molecule involved, and therefore the "difficulty" BOINC assigned when awarding credit.
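The GPUGrid example works out numerically if credit is keyed to the estimated FLOPs of the work unit rather than its elapsed time. A back-of-the-envelope check using BOINC's cobblestone scale (200 credits per GFLOPS-day); the FLOP counts below are invented to match the awards mentioned:

```python
# The cobblestone: 200 credits per GFLOPS-day of computing.
CREDITS_PER_FLOP = 200 / 86400e9

def credit_from_estimate(fpops_est):
    """Credit keyed to the work unit's estimated size, not its runtime."""
    return fpops_est * CREDITS_PER_FLOP

big_molecule = credit_from_estimate(7.776e13)    # large, "difficult" task
small_molecule = credit_from_estimate(3.024e12)  # small task, same runtime
print(round(big_molecule), round(small_molecule))  # 180 7
```

Two tasks with the same wall-clock time can thus legitimately earn 180 and 7 credits if their estimated FLOP counts differ by that ratio.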

Jeff Buck (Volunteer tester)
Joined: 11 Feb 00 · Posts: 1441 · Credit: 148,764,870 · RAC: 0 · United States
Message 1913547 - Posted: 17 Jan 2018, 19:01:23 UTC - in response to Message 1913434.  

How would you test for that? It's easy enough to run the same task in the benchmark apparatus on a CPU and then on a GPU, but how do you submit the same task to the project for validation and credit award?
For this type of test, there's really nothing an offline bench can tell you. You actually have to move one group of tasks from CPU to GPU and another group from GPU to CPU. Then just match the task types and ARs as closely as possible and record the amount of credit awarded.

In the tables that I linked to, you can see that I included 46 different tasks, all grouped by task type (Guppi VLAR and Arecibo non-VLAR), with ARs as closely matched as I could get them over the period that I was monitoring. Each group is further broken down by the device originally assigned vs. the device where the tasks actually ran, with unrescheduled tasks on both CPU and GPU as controls.

Since we currently have very few Arecibo non-VLARs showing up, any current test could only compare similar Guppi VLARs, but that could still be informative.

Of course, all that is dependent on actually receiving new tasks of any kind sometime later this year. ;^)
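The comparison described above could be tabulated along these lines. This is a standard-library sketch; the task records and field layout are invented for illustration and do not come from the actual 2016 tables.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical task records: (task_type, assigned_to, ran_on, credit)
tasks = [
    ("guppi_vlar", "gpu", "gpu", 52.1),  # unrescheduled control
    ("guppi_vlar", "gpu", "cpu", 48.7),  # rescheduled GPU -> CPU
    ("guppi_vlar", "cpu", "cpu", 50.3),  # unrescheduled control
    ("guppi_vlar", "gpu", "cpu", 51.9),  # rescheduled GPU -> CPU
    ("arecibo",    "cpu", "gpu", 78.2),  # rescheduled CPU -> GPU
    ("arecibo",    "gpu", "gpu", 80.5),  # unrescheduled control
]

# Group by (type, assigned device, device actually run on),
# then compare mean credit per group.
groups = defaultdict(list)
for task_type, assigned, ran, credit in tasks:
    groups[(task_type, assigned, ran)].append(credit)

for key in sorted(groups):
    print(key, round(mean(groups[key]), 1))
```

With enough AR-matched tasks per group, a rescheduling effect would show up as a consistent gap between the rescheduled groups and their unrescheduled controls; the 2016 results reported no such gap.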

juan BFP (Volunteer tester)
Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799 · Panama
Message 1913550 - Posted: 17 Jan 2018, 19:11:23 UTC

I participated in the tests done in the past. You could compare the crunching times and so on, but not really the credits, since the credit depends on your wingmates' host performance too. That's one of the problems with CreditScrew: you can't really control the test environment well enough to be sure your results are meaningful.
©2022 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.