Message boards : Number crunching : Average Credit Decreasing?
Grant (SSSF) Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304
Hey - I just noticed that I have NO GBT WUs on my GPUs now. Been that way for about a week or so now, other than the odd re-issue.
Grant
Darwin NT
Cruncher-American Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
"Hey - I just noticed that I have NO GBT WUs on my GPUs now."
But is it that they are VLARs or not?
Grant (SSSF) Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304
"Hey - I just noticed that I have NO GBT WUs on my GPUs now."
All the current Guppie WUs are VLARs, so that's why they're not going to the GPUs.
Grant
Darwin NT
jason_gee Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
"So, all this being said, would installing the 64 bit version of Windows possibly clear up some of the issues?"
*Possibly*, with a lot of provisos: if it is one of the later (I think Cedar Mill) P4 models that can do 64 bit, if it is currently paging the HDD noticeably when trying to load the 950 up, and if there is actually any CPU headroom apparent.
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
Al Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482
Ok, well I think I'll give it a shot; it's not that hard a process, and there isn't that much stuff on there that has to be reinstalled. The CPU is the latest, so it does support 64 bit computing. Hopefully that helps things a bit, but since it is just sitting there otherwise idling 90%+ of the time, and is only set to do GPU tasks already, I don't have any unrealistic expectations. But, here's hoping!
shizaru Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0
"...award you credit according to how efficient you were (compared to all tasks and tasks in your group of tasks) in crunching the task, [I think in theory, you are supposed to get a bonus if you are faster than the average]"
Wait, what? Who? Who figured this out? Or did one of the Boinc crew offer this bit of info?
shizaru Joined: 14 Jun 04 Posts: 1130 Credit: 1,967,904 RAC: 0
Actually I hadn't figured out what I really wanted to ask. Here's my real question: does anyone understand why tasks that scored 100 under v6 didn't score more under v7? (Say 120 or 150, or whatever.)
Mr. Kevvy Joined: 15 May 99 Posts: 3797 Credit: 1,114,826,392 RAC: 3,319
"Hey - I just noticed that I have NO GBT WUs on my GPUs now. Is this because all the current GBT WUs are VLARs, or did somebody at SETI make a surreptitious change to keep them off the GPUs for RAC purposes?"
I proposed in the Breakthrough Listen News thread that a checkbox be added to our project preferences akin to "Allow VLAR work on GPU", with an appropriate caveat, i.e. if run on CUDA it may complete slowly for less credit, cause machine slowness/lockup or work unit failure, etc. The box would be initially off, so no one would get them and risk issues unless they chose to do so.
Einstein@Home has similar controls with similar caveats on their project preferences page for setting the Count parameter to allow multiple concurrent work units on GPU (which has the same risks of instability, plus the added risk of hardware failure if the GPU overheats), and they seem to do just fine with the disclaimer, so there is no reason it wouldn't work here. Volunteers with both mixed-use and dedicated cruncher machines could set home/work/school profiles and put the machine types in them, so computers they actually use wouldn't be slowed.
All I can suggest is that if you would like this to be implemented, post in that thread. If enough of us ask, it may be.
Edit: Someone could also donate to Berkeley SETI Research Center to get an answer... $350 for an e-mail and $750 for a video reply. I already kicked in... sorry. :^)
betreger Joined: 29 Jun 99 Posts: 11408 Credit: 29,581,041 RAC: 66
I'd send you a parachute but I need it.
Al Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482
No kidding. I have a GTX770 and 2 GTX980s currently in my main cruncher because my original 980 has a fan issue. It is an advance-ship RMA with a CC on file that will be charged after 45 days. I thought, hmmm, 45 days, huh? Why not set up a fan blowing into it and see what I can temporarily pump its RAC up to? So, it got to nearly 70k (69-something, never officially broke 70. Darn.) and then these new tasks started flowing in. Last I looked, I am now scraping down around the 60k mark, so effectively this new situation has pretty much removed the computing capability of a GTX980 card from my system. Nice... So, in a week or so when I have to remove it, I'm afraid of what it will nosedive down to at that point. Ouch!
Cruncher-American Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
I noticed recently (but cannot document, for obvious reasons, as you will see) that the estimated GFLOPS for WUs have dropped very substantially sometime in the last month or two. (EGFLOPS obtained by highlighting a WU in the Tasks tab and clicking the Properties button.) Way back when (before v8), a typical small GPU WU would have EGFLOPS in the 20-30K range, larger GPU WUs 100K+, and typical CPU WUs 200-300K (from memory, no documentation, but in the ballpark). Now, and even before GBT, IIRC, similar WUs have 13K or so for EGFLOPS, and the largest I can find ATM, an AP, has only around 25K, barely 10% of what it used to get... When was this change made (it has to be on the servers, right?) - and WHY? - and is this what is causing the slide in credit? Cruncher-Americans want to know!
Richard Haselgrove Joined: 4 Jul 99 Posts: 14674 Credit: 200,643,578 RAC: 874
I take it you're referring to the line which says, in full, "estimated computation size". Note that's a size, not a speed.
This is part of the runtime estimation system - closely related to, indeed positively embedded in, but not the same thing as, CreditNew. BOINC estimates how long a task is going to take (useful when deciding how many tasks to download). It uses the formula "size divided by speed".
If you are running stock applications, the size is fixed by the number of calculations actually needed. The speed is monitored by the server, and adjusted on the basis of past experience.
If you are running Anonymous Platform, the system has been written in such a way that the calculated speed of your host cannot be transferred back to the host to use in the estimation process. Instead, your computer uses a fixed speed term in the calculation, and the server 'tweaks' the task size. This has been the procedure since 2010: nothing has changed.
If you are seeing task sizes now which are smaller than they were in 2010, then I presume: 1) you are running Anonymous Platform, and 2) this year's computer is faster than your 2010 model. That's all.
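For concreteness, here is a minimal sketch of the "size divided by speed" estimate described above. It is not the actual BOINC client or server code; the function name, GFLOP sizes and GFLOP/s speeds are all invented for illustration, just to show the shape of the calculation.

```python
# Minimal sketch of the runtime estimate described above: size divided by speed.
# All numbers are invented; this is not the real BOINC code.

def estimated_runtime_seconds(size_gflops, speed_gflops_per_sec):
    """Estimated runtime = estimated computation size / estimated device speed."""
    return size_gflops / speed_gflops_per_sec

# Stock application: the size is fixed by the work actually needed, and the
# server refines its estimate of the host's speed from past results.
stock_size = 25_000              # GFLOPs for a hypothetical GPU task
server_speed_estimate = 80.0     # GFLOP/s the server currently assumes for this host
print(estimated_runtime_seconds(stock_size, server_speed_estimate))    # 312.5 s

# Anonymous Platform: the client uses a fixed speed term, so the server
# 'tweaks' the task size instead to keep the estimate sensible.
fixed_client_speed = 40.0        # GFLOP/s, never updated on the host
tweaked_size = 12_500            # smaller displayed size, same estimated runtime
print(estimated_runtime_seconds(tweaked_size, fixed_client_speed))     # 312.5 s
```

The point of the second pair of numbers is that the "estimated computation size" shown in Task properties can legitimately shrink on an Anonymous Platform host without the work itself getting any smaller.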
Cruncher-American Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
1) Yes, and 2) yes. But that does not explain the magnitude of the change. Also, I was running Lunatics before v8 as well; I just ran default for a short while when Lunatics was not available (and a bit longer while I did some minor tests of my systems). And whether it's size or speed, doesn't the estimate affect credit?
LetiCern Joined: 3 Oct 99 Posts: 15 Credit: 12,314,846 RAC: 0
My use of the work credits is to balance a machine across multiple projects. Since the new credit system for SETI@home, the hosts have been trending toward higher credits for the other projects. Two old machines, donating time to four projects: one machine split between MilkyWay (2014) and SETI (1999), the other between SETI and Einstein (2007). I try to balance each host based on the daily host average units contributed to each project. This was working okay: each machine contributed equally to its projects based on the daily host average. With the new SETI@home credit, that balance has been lost. I'm not sure whether BOINC scheduling or the credits are at fault. Note that total credit for each project has been increasing, but I'm not sure whether BOINC uses this as part of its scheduling algorithm (across machines and time - note the participation start dates: 1999, 2007 and 2014). Now the daily host average is five times higher for Einstein vs. SETI and two to three times for MilkyWay vs. SETI. Each machine (host) was split between projects, a.k.a. 50% resource share. I have been changing the resource share to try to rebalance, but now I'm not sure whether credits across projects are equal and whether BOINC is scheduling correctly. Any ideas or comments?
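For what it's worth, here is a rough sketch of the kind of manual balance check described above: compare each project's share of a host's daily credit average with its configured resource share. The project names and daily averages are made up, and nothing like this is computed by BOINC itself; it is only an illustration of the hand-balancing approach.

```python
# Rough sketch of a manual balance check: how does each project's share of a
# host's daily credit compare with its configured resource share?
# All figures below are invented for illustration.

def credit_shares(daily_averages):
    """Fraction of the host's total daily credit earned by each project."""
    total = sum(daily_averages.values())
    return {project: avg / total for project, avg in daily_averages.items()}

# Hypothetical host split 50/50 between two projects.
resource_share = {"SETI@home": 0.5, "Einstein@Home": 0.5}
daily_average  = {"SETI@home": 1200.0, "Einstein@Home": 6000.0}   # credits/day

for project, actual in credit_shares(daily_average).items():
    target = resource_share[project]
    print(f"{project}: {actual:.0%} of daily credit vs {target:.0%} resource share")
```

A gap like that can mean the scheduler is favouring one project, or simply that credit per hour of work differs between projects, which is exactly the ambiguity raised above.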
Grant (SSSF) Joined: 19 Aug 99 Posts: 13835 Credit: 208,696,464 RAC: 304
My RAC had stopped falling for several days there. Now it's gone back to free-fall.
Grant
Darwin NT
Ulrich Metzner Joined: 3 Jul 02 Posts: 1256 Credit: 13,565,513 RAC: 13
Well, my Core2Quad used to have a RAC easily above 10000. Now it has merely 7700 and it's still falling. I'm clueless, and can't believe the makers of this mess still claim it's working fine. -> BAD = Broken As Designed
Aloha, Uli
kittyman Joined: 9 Jul 00 Posts: 51477 Credit: 1,018,363,574 RAC: 1,004
"Well, my Core2Quad used to have a RAC easily above 10000. Now it has merely 7700 and it's still falling."
Otherwise known as Breaking Bad.
"Time is simply the mechanism that keeps everything from happening all at once."
BigWaveSurfer Joined: 29 Nov 01 Posts: 186 Credit: 36,311,381 RAC: 141
I noticed mine dropping, but I know not much has changed with the machines or usage, so I came here to see if there was any mention of it... and look at this, lol... smiles... walks backwards out the door... closes it quietly... never to open it up again!
jason_gee Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
After several years of sleepless nights, much research with a lot of help from friends, I may have found a simple way to restore credit to what it should be (or close enough), without disabling CreditNew. Should I talk to Eric about this?
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions
Richard Haselgrove Joined: 4 Jul 99 Posts: 14674 Credit: 200,643,578 RAC: 874
"After several years of sleepless nights, much research with a lot of help from friends, I may have found a simple way to restore credit to what it should be (or close enough), without disabling CreditNew. Should I talk to Eric about this?"
I hope this isn't related to the post on Link Time Optimisation which has just popped up on boinc_dev? Optimise the hell out of Dhrystone, and pretend it's still a benchmark within the meaning of the definition?