Posts by Dr Grey

1) Message boards : Number crunching : Why Do the People at the Top Suddenly Appear to Stop Contributing? (Message 2016565)
Posted 25 Oct 2019 by Profile Dr Grey
Post:
Another reason might be that in their part of the world it's summer and the heat is too much to bear. As someone said earlier, sometimes it's difficult to justify the power bill. But during winter all that heat produced by crunching can be useful for offsetting your heating bills. Here in the UK, once there's a chill in the air I crank up the SETI long before I need to switch on the house heating. It makes for a far better use of my kWh than just using radiators. Then once spring comes around again, I set "no new tasks" and power-saving mode, though I have to say I miss the sound of the fans at night.
2) Message boards : News : 20 years and counting! (Message 1994229)
Posted 18 May 2019 by Profile Dr Grey
Post:
Doesn't time fly. Congratulations and thank you for everything that has been done on the back end to keep the project going for two decades. Here's hoping that in 20 years we're covering 100 % of the sky and 100 % of the bandwidth!
3) Message boards : Number crunching : Panic Mode On (115) Server Problems? (Message 1982650)
Posted 28 Feb 2019 by Profile Dr Grey
Post:
Can you kick it?
4) Message boards : Number crunching : GPU max 100wu Why? (Message 1980179)
Posted 13 Feb 2019 by Profile Dr Grey
Post:
I took a look at the contribution of the top 20 hosts to the overall RAC according to Boincstats. While the top 20 hosts represent a little over 0.01% of the total number of active users, when you tot up their combined RACs they actually contribute almost exactly 3% of the total SETI@home RAC.
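For anyone who wants to redo the arithmetic, here's a rough Python sketch. All three totals are stand-ins I've made up for illustration, not the actual Boincstats figures:

# Share of users vs share of RAC for the top 20 hosts.
# The three totals below are assumed, illustrative values.
active_users = 150_000        # assumed count of active users
total_rac = 250_000_000       # assumed project-wide RAC
top20_rac = 7_500_000         # assumed combined RAC of the top 20 hosts

host_share = 20 / active_users * 100
rac_share = top20_rac / total_rac * 100
print(f"Top 20 hosts = {host_share:.3f}% of users, {rac_share:.1f}% of total RAC")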
5) Message boards : Number crunching : Happy New Year everyone, Happy Alien 2019!! (243,752,340 at end of 2018) (Message 1972824)
Posted 31 Dec 2018 by Profile Dr Grey
Post:
And a happy New Year to you and everyone else too!
6) Message boards : Number crunching : Setting up Linux to crunch CUDA90 and above for Windows users (Message 1971345)
Posted 21 Dec 2018 by Profile Dr Grey
Post:
So I finally bit the bullet and moved across to Ubuntu. After a couple of hiccups and failed attempts I've got a stable installation I'm happy with, and the results are... astoundingly good. Massive thanks and credit to everyone involved in writing, optimising and making these apps available, as well as to the contributors of this thread for guiding us Linux newbies through it. My RAC just keeps heading skywards.
7) Message boards : Number crunching : EVGA GeForce GTX 1080 FTW2 processing times (Message 1959291)
Posted 8 Oct 2018 by Profile Dr Grey
Post:
I have EVGA 1080 FTW Gaming cards which do around 12-13 minutes running two tasks at a time. So I guess that's about 6-6.5 minutes a task. Does that sound about right?
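Just to show the arithmetic behind that guess, a two-line sketch (the 12-13 minute figures are the ones above):

# Effective per-task time when two tasks share the GPU.
concurrent_tasks = 2
for wall_minutes in (12, 13):
    per_task = wall_minutes / concurrent_tasks
    print(f"{wall_minutes} min / {concurrent_tasks} tasks = {per_task:.1f} min per task")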
8) Message boards : Number crunching : My Computer Builds And Other Projects (Message 1841999)
Posted 13 Jan 2017 by Profile Dr Grey
Post:
OK. Well those CT scanners can be pretty compute intensive. It might be an opportunity to see some high end hardware in action and maybe mention the benefits of distributed computing to the radiographer...
9) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1841642)
Posted 12 Jan 2017 by Profile Dr Grey
Post:
Now it's 31.5/sec returned in the last hour against a creation rate of 36.7/sec. They are fighting a valiant battle.
I'm also noticing that the average turnaround time has dropped to 30.5 hours from 33 earlier, so the effect of the shorter queues is showing.
10) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1841608)
Posted 12 Jan 2017 by Profile Dr Grey
Post:
So it's generating a surplus of 4/sec, which is enough to fill a 100-wu cache every 25 seconds. That's fast. But with 162,000 active hosts, it will take a while to get ahead of the pack.

As long as it continues to produce work at that rate. Sometimes it's faster, but at other times (like the most recent update) it's slower.
Only 26/sec. Nowhere near fast enough.


It's interesting, though, that the average turnaround time is as high as 33 hours. With a lot of people struggling with low queues you'd expect it to be much lower. That suggests the bulk of machines don't run anywhere near dry during the outage, so the deficit is probably not all that bad.
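Putting the numbers from these posts into a quick Python sketch, just to show where the 25 seconds comes from and why 162,000 hosts makes it slow going (the worst case at the end assumes every host wants a full refill, which is certainly pessimistic):

# Surplus arithmetic from the figures quoted above.
creation_rate = 35.0     # results/sec from the splitters
return_rate = 31.0       # results/sec returned by hosts
surplus = creation_rate - return_rate          # ~4/sec spare

cache_size = 100         # workunit limit per host at the time
active_hosts = 162_000

print(f"One {cache_size}-wu cache fills in {cache_size / surplus:.0f} s")

# Pessimistic worst case: every active host needs a full refill.
backlog = active_hosts * cache_size
print(f"Refilling everyone would take ~{backlog / surplus / 86_400:.0f} days")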
11) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1841606)
Posted 12 Jan 2017 by Profile Dr Grey
Post:
The splitting rate shown is an instantaneous figure, compared to the long (hour?) sample used to generate the return rate. As I type the splitting rate is over 35/sec, so should be "more than capable" of keeping up with the demand (based on the results returned), which is just over 31/sec.


So it's generating a surplus of 4/sec, which is enough to fill a 100-wu cache every 25 seconds. That's fast. But with 162,000 active hosts, it will take a while to get ahead of the pack.
12) Message boards : Number crunching : Panic Mode On (104) Server Problems? (Message 1841599)
Posted 12 Jan 2017 by Profile Dr Grey
Post:
Well, this has been interesting. My Windows 10 machine has a full cache of work. My Windows 7 machines have zero GPU work. Looks like the outage is going to cause at least a 3-4 day lack of GPU work for my fast machines.


Current result creation rate: 30.4548/sec
Results received in last hour: 110,406. 110,406 / 3600 = 30.668 / sec

So it looks like the splitters can't keep up.
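The comparison is just the hourly figure converted to per-second and set against the creation rate; a tiny sketch with the numbers above:

# Are the splitters keeping up with the return rate?
creation_rate = 30.4548             # results created per second
return_rate = 110_406 / 3600        # = 30.668 results/sec returned

print(f"Return {return_rate:.3f}/sec vs creation {creation_rate:.4f}/sec")
print("Keeping up" if creation_rate >= return_rate else "Falling behind")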
13) Message boards : Number crunching : Looking for used laptops (Message 1841230)
Posted 10 Jan 2017 by Profile Dr Grey
Post:
The SATA-to-Molex cables are there to power the card through the PCIe x16 connector, just as the motherboard would if the card were plugged directly into it. The male SATA connector plugs into one of the female SATA plugs on the power supply's SATA cables. Spread them out over as many separate SATA cables as possible, since the cards can draw up to 75 watts each.


According to this site the power rating for SATA would be borderline at best.
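For a rough idea of why it's borderline, here's the commonly quoted rating worked through in Python. The 1.5 A per pin figure is the usual rule of thumb for SATA power contacts; check the spec of your actual connectors before relying on it:

# SATA power budget on the 12 V rail vs PCIe slot draw.
pins_per_rail = 3        # SATA power has three pins per voltage rail
amps_per_pin = 1.5       # commonly quoted rating per pin (assumption)
rail_voltage = 12.0      # a riser draws nearly everything on 12 V

sata_watts = pins_per_rail * amps_per_pin * rail_voltage   # = 54 W
slot_watts = 75          # maximum a card may pull from an x16 slot

print(f"SATA 12 V budget ~{sata_watts:.0f} W vs up to {slot_watts} W per slot")
# 54 W against a possible 75 W draw is why it's borderline; don't
# stack several risers on one SATA chain.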
14) Message boards : Number crunching : Looking for used laptops (Message 1841221)
Posted 10 Jan 2017 by Profile Dr Grey
Post:
Can you tell me what a 'SATA 15pin Male to MOLEX 4pin power cable' is used for?
Where do you connect the SATA 15pin male end?


You could plug it into one of these. And then you'd probably want to accessorise that with one of these.

It's the same as a SATA power connector on an HDD. The female connector comes from the PSU - just be careful of the power draw.
15) Message boards : Number crunching : My Computer Builds And Other Projects (Message 1841048)
Posted 9 Jan 2017 by Profile Dr Grey
Post:
I've got to say that I've enjoyed this build thread too - best of luck with the tests.
16) Message boards : Number crunching : Driver Restart History? (Message 1840390)
Posted 6 Jan 2017 by Profile Dr Grey
Post:
Reliability Monitor is a good place to start in Windows (you can open it by running perfmon /rel). If Windows caught the event, it might tell you in there.
17) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839154)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
I've no idea who this Dr Grey is, but what I do know is that I would back Jason and Raistmer against his technical ability any day. Have a look at his team name.


Thanks for the vote of confidence in my technical ability. For the record I have none, but I don't see any harm in challenging the status quo and offering up ideas that are worth discussing. Do you?
18) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839117)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
This implies the assumption that maintenance time depends directly on database size.
And that database size depends directly (or largely) on the mean deadline time.
The second assumption looks very unjustified for the reasons listed earlier. The first assumption requires some proof too.
And the third assumption is that if deadlines were shortened it would automatically lead to a rise in the per-host task limit. Hardly...


Not sure all those implications directly follow, but nevertheless I've taken a look at my older pendings and can prove myself wrong... 80% of the pendings are less than 2 weeks old, 12% are older than 4 weeks, and 6% are older than 6 weeks, with the oldest being about 8 weeks old.
Current deadlines appear to be about 7.5 weeks, although some appear to be 3 weeks and I'm not sure why that is. All my pendings fall within roughly a single 7.5-week deadline period, suggesting that workunits requiring more than one resend are extremely rare, and that when resends do happen they turn around in a couple of days on average. On these figures, anyway, trying to reclaim space by reducing deadlines from 7.5 to 4 weeks would gain only about 12%. That would be just 90 workunits based on my pendings, and probably not worth it.
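A minimal sketch of that estimate. The total pending count is an assumption I've back-filled so the numbers match (90 is 12% of 750); only the 12% share comes from my actual figures:

# Workunits reclaimed by cutting the deadline from 7.5 to 4 weeks.
pending_total = 750          # assumed total pendings, for illustration
older_than_4_weeks = 0.12    # share of pendings past 4 weeks (from above)

reclaimed = pending_total * older_than_4_weeks
print(f"A 4-week deadline would clear ~{reclaimed:.0f} pending workunits")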
19) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839108)
Posted 31 Dec 2016 by Profile Dr Grey
Post:
The gap between the current completion deadlines and the time taken for the bulk of canonical results to be achieved will continue to grow.

1) Deadlines will not grow. They will remain the same.
2) So what? Why the bother? Have you ever observed a new version roll-out where BOINC mis-predicts the estimated time to completion? With a shorter deadline the percentage of killed tasks would be even higher. So is all this just to get credit accounted faster for the over-competitive ones who can't simply leave a host as it is, and who spend their time watching credit rise instead of optimising? It will be accounted eventually; that's all that matters.

Regarding real-time processing: we only pre-process. The final processing is Nebula, and that requires moving the data into Atlas. This SETI search isn't a real-time one by design. Its sensitivity comes from accumulating data over a few observations spanning years. Real-time processing is a self-imposed goal, not something this kind of search really requires. It would be good to complete the search faster, but cutting some processing power will not make it faster, it will make it slower. The mean validation time of any particular result doesn't matter. There are millions of such results.


1) As the average turnaround shrinks and the deadlines remain the same, the gap between them grows.
2) Why bother optimising the process? To reduce the number of database entries tied up by slow-to-verify workunits. From my understanding this would allow a larger cache size without impacting the database size.

If the older, unverified workunits sitting in the database, which shorter deadlines would otherwise flush out, have little impact on the back-end system, then I have little to argue about. But it would be interesting to know what proportion they represent. I could estimate that by looking at my own cache, I guess. I'll go and make a coffee and see if I can put together some figures.

On your last point, though, I agree the potential computational benefit to be gained is minimal. It would come from those very high-end machines that run dry during an outage. Looking at Petri's record breaker, his average turnaround is 0.16 days, a little under 4 hours. That means he's probably running dry halfway into an 8-hour outage, about once a week, or around 2.5% of the time. Everyone else will be below this. However, it could be argued that allowing a larger cache to secure that 2.5% performance gain from Petri, at the expense of our slower tail, would be better for the computational output of the project, as that 2.5% could be substantially larger than the tail's output. But that would put us at odds with an egalitarian ideal.
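For transparency, here's the idle-time estimate behind that 2.5% as a sketch. The weekly 8-hour outage is an assumption based on the usual maintenance window:

# Idle fraction for a very fast host during the weekly outage.
turnaround_days = 0.16                 # Petri's average turnaround
cache_hours = turnaround_days * 24     # ~3.8 hours of buffered work
outage_hours = 8                       # assumed weekly outage length

idle_hours = max(outage_hours - cache_hours, 0)    # ~4.2 hours dry
idle_fraction = idle_hours / (7 * 24)              # share of the week
print(f"Idle ~{idle_hours:.1f} h/week = {idle_fraction:.1%} of the time")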
20) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1839096)
Posted 31 Dec 2016 by Profile Dr Grey
Post:


As soon as we adopt the technology vendors' present intentions, more than 95% of our hardware compute capacity is immediately defunct. It would be nice if we could all have shiny new Kaby Lakes and GTX 1080s, but it's not realistically going to happen... especially when the gains represent a poor return for the money. They need to do a lot better before we can say goodbye to stuff that works.


Nevertheless, time moves on. Looking back over the last few years, SETI compute power has increased by 5-10% each year. 1080s will soon be old hat with the 1080 Ti arriving, AMD's Ryzen will hopefully perk up the CPU market and push Intel towards adopting 10 nm sooner, and I'm reading that we can hope for even better SETI apps in the near future. Older devices will continue to be retired, and all of this will impact the average turnaround time, moving analysis ever closer from near time to real time. The gap between the current completion deadlines and the time taken for the bulk of canonical results to be achieved will continue to grow.

