Posts by Dr Grey

21) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839083)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
Consider an ARM6 phone crunching only while charging.
http://setiathome.berkeley.edu/host_app_versions.php?hostid=7435236
Average processing time: 25.85 days

That is, ~4 weeks. No need to touch deadlines.
If "97%" complete "just in one-two days" then fine, those tasks will be validated and deleted from BOINC database faster. They even not need to be kept week, just those 2 days.
But other still have chance to participate.

P.S. What needs to be touched instead is the quota system, which fails to catch hyper-fast broken GPU hosts. Those can produce thousands of broken results per day - an amount that slow crunchers would not accumulate over a whole year!


I cannot see the benefit to the project of catering for devices with such low processing capability. Do they generate publicity or revenue? Even my Raspberry Pi, with a RAC of 87.46 (!), still has a turnaround of 2.74 days. That is about 1/500 of what my PC produces. The only reason I keep it connected is that it makes me happy that way - a bit like keeping a hamster with a wheel.
Is it right to keep the database inflated just to enable folks to keep their pet devices warm? What is the real benefit here? I agree on your last point.
22) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839032)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
It would make good sense to cut the workunit deadlines if the aim is to keep the database small.

True, however the project also wants to make use of the widest possible range of hardware, so the longer deadlines are necessary for the much, much slower crunchers out there.


I have heard this said in the past and it is a laudable aim, but allowing 6 weeks when 99% are probably being returned inside of 7 days is a substantial commitment towards outreach and could be considered unwarranted, especially if it is impacting the performance of the project.
23) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839024)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
Inconclusives have never been a worry of mine, as long as they turn valid at some point and do not become invalid.

The problem with Inconclusives is the load on the database. The more Inconclusives there are, the larger the database becomes to keep track of all the work that is out there.
The project needs more crunchers to crunch the increasing amount of Greenbank work, yet they already have limits on the amount of work people can cache in order to reduce the strain on the database. The more Inconclusives, the greater the strain. Couple that with more crunchers and you've got a recipe for the whole thing crashing and burning repeatedly on a scale we've previously not seen.


It would make good sense to cut the workunit deadlines if the aim is to keep the database small. Average turnaround currently stands at less than two days, so a three-sigma window of less than a week would give most workunits a good chance of being returned in that time. So if you cut the deadline to half what it is now and doubled the cache size, we'd have fewer workunits waiting around for validation from non-returners, healthier caches to stop the faster machines running dry, and the whole operation would be that much snappier... and happier.
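To put rough numbers on that reasoning, here's a minimal sketch. The mean turnaround is the quoted figure; the spread is an assumption, since the stats pages only give the average, and real turnaround distributions are skewed rather than normal - so treat it as illustration, not analysis.

import statistics  # not strictly needed here, kept for anyone extending the sketch

# Toy numbers only: the average turnaround (~2 days) is the quoted figure,
# but the spread (sigma) is an assumption, since only the mean is published.
mean_turnaround_days = 2.0
assumed_sigma_days = 1.5          # assumption for illustration; real spread unknown

three_sigma_window = mean_turnaround_days + 3 * assumed_sigma_days
print(f"mean + 3 sigma = {three_sigma_window:.1f} days")      # 6.5 days, under a week

current_deadline_days = 6 * 7     # the roughly six-week deadline discussed above
halved_deadline_days = current_deadline_days / 2
print(f"halving the deadline to {halved_deadline_days:.0f} days still leaves "
      f"{halved_deadline_days - three_sigma_window:.1f} days beyond the 3-sigma window")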
I've just clicked through to see my oldest validation pending and it was sent out on the 28th Oct having timed out on my wingman. Actually, that's interesting. How do you get 2512 tasks in progress?
24) Message boards : Number crunching : GPU Wars 2016:  Pascal vs Polaris (Message 1835902)
Posted 14 Dec 2016 by Profile Dr Grey
Post:
Hopefully that will put some price pressure on the 1080 Ti launch.
25) Message boards : Number crunching : Found a workaround for uncontrollable GTX 1080 downclocking / throttling (Message 1835360)
Posted 11 Dec 2016 by Profile Dr Grey
Post:
Since getting my EVGA GTX 1080 FTW I have had trouble getting it to sit reliably at a set overclock. Neither MSI's Afterburner nor EVGA's Precision tools would hold a constant overclock when running SETI. The card always had a tendency to downclock to 1721 MHz, occasionally bumping up to whatever overclock I had set without any apparent cause, but always coming back down again and spending the majority of its time at the lower clock speed. I could sit and watch it in GPU-Z and there didn't seem to be any correlation between these jumps and BOINC activity - it wasn't happening as WUs started and finished. It isn't temperature related either, as the card sits in the low 70s C when the overclock is met, dropping to the low 60s C at the downclock. It also drops the voltage as it downclocks.
It looks like Nvidia's GPU Boost has been deciding that not much is going on, despite GPU-Z reporting 98-99% GPU load, and dropping the card into a lower power-draw state. Changing 'power management mode' under Nvidia control panel / Manage 3D settings to 'Prefer maximum performance' had no effect. All very frustrating, and Googling the issue turned up a number of threads where other folks are encountering this with no solution found.
I was considering seeing whether changing the thread priorities would have an effect, but I hadn't figured out how to do that yet. Anyway, I might have found a way to force GPU Boost to stop letting the card be so lazy.
In Nvidia control panel there's the 'Adjust Image Settings with Preview' menu item, with the preview window showing a rotating Nvidia logo. Last night I found that just opening that setting pushes the card into its high power state. When I closed the panel the card would drop down to 1721 MHz again. You can even pause the animation and minimise it and the card will remain at full power. So I left it in that state overnight, finding this morning that the card now sits at full power regardless of whether the panel is open or not - which is a bit weird. Whether this condition will survive a reboot I don't know, but I thought I'd put this out there in case anyone else is having this issue and wants to try it.
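If anyone wants to check whether a workaround like this actually holds over time, one simple way is to poll the SM clock with nvidia-smi and log any dips. The sketch below is just that - a monitor, not a fix; the 1900 MHz threshold and 5-second interval are arbitrary choices of mine, and it assumes nvidia-smi is on the PATH.

# Sketch: poll the GPU's SM clock via nvidia-smi and log dips below a target.
# The 1900 MHz threshold and 5 s interval are arbitrary; adjust to taste.
import subprocess, time, datetime

TARGET_MHZ = 1900   # assumed: anything below this counts as a downclock episode
INTERVAL_S = 5

def read_sm_clock_mhz(gpu_index=0):
    out = subprocess.check_output([
        "nvidia-smi", "-i", str(gpu_index),
        "--query-gpu=clocks.sm", "--format=csv,noheader,nounits"
    ], text=True)
    return int(out.strip())

while True:
    mhz = read_sm_clock_mhz()
    if mhz < TARGET_MHZ:
        print(f"{datetime.datetime.now():%H:%M:%S}  downclocked to {mhz} MHz")
    time.sleep(INTERVAL_S)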
26) Message boards : Number crunching : You have to love early Xmas and B'day presents (even if you have to pay for them yourself). (Message 1834441)
Posted 6 Dec 2016 by Profile Dr Grey
Post:
I guess a lot of folks will be getting their upgrades for Christmas. The 1060s and 1070s look like really good value for a two or more generations upgrade.
27) Message boards : Number crunching : GPU Wars 2016:  Pascal vs Polaris (Message 1833135)
Posted 29 Nov 2016 by Profile Dr Grey
Post:
It is worth checking that Windows has been fully updated, as the Nvidia installer can refuse to install on a non-updated system.
28) Message boards : Number crunching : It would take 13 new threads to tidy up the formatting - can we have some new topics? (Message 1832341)
Posted 25 Nov 2016 by Profile Dr Grey
Post:
Success! Well done number crunchers!
29) Message boards : Number crunching : Issues with EVGA 1080, 1070s with ACX design (Message 1832256)
Posted 24 Nov 2016 by Profile Dr Grey
Post:
That's got to be some pretty efficient heat removal to stay that low. I'll look out for the vid at the weekend.
30) Message boards : Number crunching : Issues with EVGA 1080, 1070s with ACX design (Message 1832216)
Posted 24 Nov 2016 by Profile Dr Grey
Post:
It looks like thermals may not be the issue; rather, some bad components might be responsible, according to this set of measurements. But it does sound like the thermal pad and VBIOS upgrades are worth doing. I've got mine, so maybe I'll have a go later this evening.
31) Message boards : Number crunching : It would take 13 new threads to tidy up the formatting - can we have some new topics? (Message 1832208)
Posted 24 Nov 2016 by Profile Dr Grey
Post:
My OCD is tingling because the front page doesn't look right with all this double spacing going on. Can't we just force the offending thread off the bottom of the page with some new topics? :)
Or are there any older threads anyone wants to bump?
32) Message boards : Number crunching : Spammers (Message 1830937)
Posted 16 Nov 2016 by Profile Dr Grey
Post:
.....
I can see it looks bad, but 8123671 is my machine. I had a failed CPU which needed an RMA back to Intel, but not until I'd tried to RMA the motherboard, which turned out to be fine. So I was offline for several weeks and ended up doing a fresh install of Windows once I was issued with a replacement. The cached tasks were not recovered. Sorry about that, wingmen.
I have been contributing to this project for 17 years and this machine currently ranks at number 78 in the top hosts for total credit. Its RAC is picking up again now that it is back online, and the abandoned tasks are being dealt with by resending, the way the project is designed to work - so please be discriminating when looking to penalise genuine bad behaviours.

I'm sorry if you got caught up in that, but I did check the threads here that others have put up about their PC failure problems (e.g. Cruncher Down or Apologies to Wingmen & similar) and your name wasn't amongst them to cross off; now it's a bit late to remove yours from that list.

Cheers.


It's not about indignation at being named and shamed; rather, I was seeking to highlight that your proposing a penalty for a lost one-day cache was inappropriate for a machine in the top 100. Deflecting my point by implying that I should have posted an apology for a hardware failure suggests you were trying to avoid blame for your post. I wasn't blaming, merely pointing out that indiscriminate penalties would have impacted this machine.
33) Message boards : Number crunching : Spammers (Message 1830816)
Posted 16 Nov 2016 by Profile Dr Grey
Post:
I certainly agree with No. 1, and here are some No. 2s that could do with some sort of penalty for aborting BLC work and hoarding other work.

http://setiathome.berkeley.edu/results.php?hostid=7123613
http://setiathome.berkeley.edu/results.php?hostid=7947608
http://setiathome.berkeley.edu/results.php?hostid=7921685
http://setiathome.berkeley.edu/results.php?hostid=6624102
http://setiathome.berkeley.edu/results.php?hostid=8030949
http://setiathome.berkeley.edu/results.php?hostid=7114519
http://setiathome.berkeley.edu/results.php?hostid=7999209
http://setiathome.berkeley.edu/results.php?hostid=7284511
http://setiathome.berkeley.edu/results.php?hostid=8031021
http://setiathome.berkeley.edu/results.php?hostid=7921601
http://setiathome.berkeley.edu/results.php?hostid=7470942
http://setiathome.berkeley.edu/results.php?hostid=8103298
http://setiathome.berkeley.edu/results.php?hostid=8103292
http://setiathome.berkeley.edu/results.php?hostid=8061493
http://setiathome.berkeley.edu/results.php?hostid=5745362
http://setiathome.berkeley.edu/results.php?hostid=8123671
http://setiathome.berkeley.edu/results.php?hostid=7025536
http://setiathome.berkeley.edu/results.php?hostid=6420067
http://setiathome.berkeley.edu/results.php?hostid=7248689
http://setiathome.berkeley.edu/results.php?hostid=7436798
http://setiathome.berkeley.edu/results.php?hostid=5501972
http://setiathome.berkeley.edu/results.php?hostid=7937692
http://setiathome.berkeley.edu/results.php?hostid=7347988

You can also tell from some of them that when APs become available the dumping gets worse.

Cheers.


I can see it looks bad, but 8123671 is my machine. I had a failed CPU which needed an RMA back to Intel, but not until I'd tried to RMA the motherboard, which turned out to be fine. So I was offline for several weeks and ended up doing a fresh install of Windows once I was issued with a replacement. The cached tasks were not recovered. Sorry about that, wingmen.
I have been contributing to this project for 17 years and this machine currently ranks at number 78 in the top hosts for total credit. Its RAC is picking up again now that it is back online, and the abandoned tasks are being dealt with by resending, the way the project is designed to work - so please be discriminating when looking to penalise genuine bad behaviours.
34) Message boards : Number crunching : How would our "computer" rank on the Top500 list? (Message 1815531)
Posted 7 Sep 2016 by Profile Dr Grey
Post:
Those are fun lists & great to see how processing power has improved over the years. So I only need to go back to November 2001 for my desktop to outclass the world's top supercomputer, ASCI White?
35) Message boards : Number crunching : How would our "computer" rank on the Top500 list? (Message 1815530)
Posted 7 Sep 2016 by Profile Dr Grey
Post:
According to BOINCstats, SETI currently averages 744.233 TeraFLOPS, which would place us at #135. Although I have no idea how accurate those numbers are, or whether they are comparable to the benchmarks used for the Top500 list.
http://boincstats.com/en/stats/0/project/detail

Not sure about this, but there does seem to be something wrong with the figures. If I look at my machine (5820K + GTX 1080 + GTX 980), according to the specs it should be capable of around 14 TFLOPS. That's around 2% of the BOINCstats quoted total. Yet - and I'm aware of the fallacy of RAC, but it's a fallacy for everyone - my RAC (or what it would be once corrected for the fraction of time BOINC is running) comes to just 0.02% of the total. That's a suspicious two orders of magnitude difference that may well be my error, but I can't see where...
Based on that, and while it's likely BOINC doesn't use all the theoretical TFLOPS available, if you back-calculate the total TFLOPS using my machine as a benchmark you get just over 65,000 TFLOPS, which would place us at number 2 - a far more respectable place to be.
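The back-calculation itself is just a proportion. The sketch below uses the rounded figures from the post (14 TFLOPS and 0.02% of total RAC), so it lands nearer 70,000 TFLOPS than the 65,000 quoted above - the difference presumably being the unrounded RAC fraction.

# Back-calculating the project's total throughput from one machine's share of RAC.
# Uses the rounded figures from the post; treat the result as order-of-magnitude only.
my_peak_tflops = 14.0         # 5820K + GTX 1080 + GTX 980, from the spec sheets
my_rac_fraction = 0.0002      # ~0.02% of the project's total RAC

implied_total_tflops = my_peak_tflops / my_rac_fraction
print(f"implied project total: {implied_total_tflops:,.0f} TFLOPS")    # ~70,000

boincstats_total_tflops = 744.233
print(f"discrepancy vs the BOINCstats figure: "
      f"{implied_total_tflops / boincstats_total_tflops:.0f}x, about two orders of magnitude")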
36) Message boards : Number crunching : Average Credit Decreasing? (Message 1815064)
Posted 4 Sep 2016 by Profile Dr Grey
Post:
Now, as to ANYONE ELSE!

I fight for what's honest and FAIR! CreditNew is NEITHER!


TL


To be honest, I suppose the fewer people who crunch, the more likely it will be my machine that finds the signal.
That seems fair to me.
37) Message boards : Number crunching : 1080 underclocking (Message 1811481)
Posted 22 Aug 2016 by Profile Dr Grey
Post:
Ha, just encountered the issue with EVGA's Precision OC not being happy with my 980 in place. It was OK until I swapped the cards around.
Anyway the 1080 is now in the bottom slot & a bit cooler at 76 C with the fans at 54%.
I'm running 2 WUs at a time and my TDP is around 58%. Running at stock settings I get 1974 MHz. I note my memory is also at 2256.8 MHz - I thought it was supposed to be at 2500?


Grab Nvidia Inspector and set the P2 power state memory clock up to the same as the P0 state.


Hmm, seemed to ignore it. Then complained of an unhandled exception and quit. I'll try again after my fish and chips. Thanks for the pointer though.



Dr. Grey, you have to exit boinc and stop crunching. Open Nvidia inspector and click show overclocking, you will get a warning, click ok. You will see all GPU listed on the left side of the panel, on the right you will see the setting. From the pull down select P2 and change the values there to match what they are in P0 (the default on the pull down). After you do that you must go to the right lower corner click on the "apply clock & voltage" This will save the changes until a reboot. Then go to the left side, select a different GPU and repeat. Do this for all GPUs. Once you are finished restart BONIC and see if the values took
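Nvidia Inspector also exposes these controls on the command line, which would make the change scriptable at login instead of clicked through after every boot. The flag name and the P-state index in the sketch below are assumptions recalled from third-party guides rather than verified against the tool's own help, the install path is made up, and the offset is purely illustrative; the nvidia-smi call at the end just checks whether the memory clock actually changed.

# Sketch only: script the P2 memory clock change via Nvidia Inspector's CLI, then
# read the memory clock back with nvidia-smi to "see if the values took".
# The -setMemoryClockOffset:<gpu>,<pstate>,<offsetMHz> flag and the P-state index
# are assumptions from third-party guides - check nvidiaInspector.exe's own help
# before relying on them.
import subprocess

GPU = 0
OFFSET_MHZ = 500   # purely illustrative offset, not a recommendation

subprocess.run([
    r"C:\Tools\nvidiaInspector\nvidiaInspector.exe",    # hypothetical install path
    f"-setMemoryClockOffset:{GPU},2,{OFFSET_MHZ}",      # pstate index 2 = P2 (assumed)
], check=True)

clock = subprocess.check_output([
    "nvidia-smi", "-i", str(GPU),
    "--query-gpu=clocks.mem", "--format=csv,noheader"
], text=True)
print("reported memory clock:", clock.strip())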


Thanks Zalster. I tried it but it had no effect on the value shown in GPU-Z. So I opened up PrecisionX16 and dialled up the memory from there and it seems to have taken.
38) Message boards : Number crunching : 1080 underclocking (Message 1811445)
Posted 22 Aug 2016 by Profile Dr Grey
Post:
Ha, just encountered the issue with EVGA's Precision OC not being happy with my 980 in place. It was OK until I swapped the cards around.
Anyway the 1080 is now in the bottom slot & a bit cooler at 76 C with the fans at 54%.
I'm running 2 WUs at a time and my TDP is around 58%. Running at stock settings I get 1974 MHz. I note my memory is also at 2256.8 MHz - I thought it was supposed to be at 2500?


Grab Nvidia Inspector and set the P2 power state memory clock up to the same as the P0 state.


Hmm, seemed to ignore it. Then complained of an unhandled exception and quit. I'll try again after my fish and chips. Thanks for the pointer though.
39) Message boards : Number crunching : 1080 underclocking (Message 1811438)
Posted 22 Aug 2016 by Profile Dr Grey
Post:
Ha, just encountered the issue with EVGA's Precision OC not being happy with my 980 in place. It was OK until I swapped the cards around.
Anyway the 1080 is now in the bottom slot & a bit cooler at 76 C with the fans at 54%.
I'm running 2 WUs at a time and my TDP is around 58%. Running at stock settings I get 1974 MHz. I note my memory is also at 2256.8 MHz - I thought it was supposed to be at 2500?
40) Message boards : Number crunching : 1080 underclocking (Message 1811400)
Posted 22 Aug 2016 by Profile Dr Grey
Post:
Hey Keith,

I'm always aggressive with my fan curves; 80C is 100% fan power.

The way the EVGA hybrid kits work, they connect the pump motor and radiator fan to a pass-through connector that you connect to the blower fan connector on the GPU.

So all 3 are powered from the GPU blower fan connector.

WOW! That is really hot at 100% fan speed. I know that all the 1080 reviews document gaming temperatures, but I thought that even with GPGPU work at 100% utilization the cards would be able to keep from thermal throttling at 100% fan speed. After all, you are hardly even using most of the card, as the memory controller, video engine and bus interface loads are almost nil most of the time.

Can confirm. Just installed an EVGA 1080 FTW in an attempt to stop my never-ending RAC dive. Put my fan curve to 100% at 80C and it's still thermal throttling to 'just' 1950 MHz. Nice card though, & it may fare a bit better lower down on the board where it's cooler.

