Posts by David C Blanchard

1) Message boards : Number crunching : What was once old is new(er? ish?) again. (Message 1921453)
Posted 26 Feb 2018 by David C Blanchard
Post:
Disclaimers:

(1) I apologize in advance if this post is off-topic, but I had a hard time identifying the right topic for it.
(2) I intend the issues and questions I pose here as sincere, and respectful at all times of the BOINC staff and forum members.
(3) I intend to review each response seriously. I'm not looking to waste anybody's time.

As a long-time supporter of SETI I have mixed feelings--pride and gratitude. IMO, SETI (and more generally BOINC) were the true "breakthrough" ideas. Nothing short of genius. These days the phrase "disruptive technology" is being applied to so many things that its meaning feels diluted. This makes it easy to forget that, all things being equal, some DTs are vastly "more equal" than others (e.g. BOINC).

That said, the appeal for monetary donations in addition to processing so many work units is a hard pill for me to swallow. But it got me thinking. Has anybody attempted to put a dollar estimate on the value of earned credits? I can see a big challenge here in part because of the non-stationary nature of value over time (rising inflation, decreasing cost of computing, etc.). Nonetheless, it ought to be possible to arrive at a simple dollar value for the totality of the work I've done historically for SETI and more recently, other BOINC projects.

Putting this another way, if a project like SETI had relied on a typical cloud provider like AWS, what would it have cost SETI, in dollars and cents, to process the same number of work units which I provided for free? [At last report, as leader and sole contributor of FAA WJHTC, I had amassed roughly 95M credits over five years.]
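For what it's worth, a back-of-envelope conversion is possible if you accept BOINC's definition of the credit (the "Cobblestone": a 1-GFLOPS machine earns 200 credits per day) and plug in a hypothetical cloud instance. The sustained-GFLOPS and hourly-price figures below are illustrative assumptions, not real AWS numbers:

```python
# Back-of-envelope: dollar value of BOINC credits if the same work
# had been bought from a cloud provider.  Instance specs are assumptions.

def credit_value_usd(credits, instance_gflops, instance_usd_per_hour):
    """BOINC's Cobblestone is defined so that a machine sustaining
    1 GFLOPS earns 200 credits per day."""
    gflops_days = credits / 200.0                 # total work, in 1-GFLOPS-days
    instance_days = gflops_days / instance_gflops # days one instance would need
    return instance_days * 24 * instance_usd_per_hour

# Hypothetical example: 95M credits, an instance that sustains
# 500 GFLOPS and costs $0.50/hour (illustrative figures only).
print(f"${credit_value_usd(95_000_000, 500, 0.50):,.2f}")  # → $11,400.00
```

The answer is dominated by the assumed price/GFLOPS ratio, which (as noted above) has fallen steadily over the years, so any single figure is at best an order-of-magnitude estimate.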

Anyone?
2) Questions and Answers : Wish list : I wish the max work per user limit was higher (10 days ain't what it used to be) (Message 1904438)
Posted 2 Dec 2017 by David C Blanchard
Post:
Great answer -- thanks for taking the time to explain it so clearly.
3) Questions and Answers : Wish list : I wish the max work per user limit was higher (10 days ain't what it used to be) (Message 1904302)
Posted 2 Dec 2017 by David C Blanchard
Post:
As user capability continues to increase, I've noticed that requesting 10 days' worth of work represents only a few days for a medium-capability user like me. Consequently, even short outages (a day or two) leave me with nothing to do. Could this limit either be increased (e.g. to 20 days) or redefined such that whatever was 10 days of work in 2007 is still 10 days of work in 2017 for most users?

On a tangential note, should systems like mine even be processing small CUDA jobs (e.g. <5 min) at all? I'm guessing that the relative transmission overhead is higher for small jobs.

Also, just out of curiosity, what is the limiting factor that prevents users from requesting a very high number of work units? I suspect there is a "downside" in the extreme case, but I'm not sure what it is. Maybe the risk of missing the return deadline? The risk of returning work units issued significantly prior to a change in the processing algorithm?

Finally, it has been an honor and a pleasure to support this project. Whatever the occasional problems of the past or present, I have great respect for both the leaders and the even larger group of dedicated inner-circle volunteers. You folks represent a model of achieving "greatness on a budget" that the commercial world can only dream of emulating.

Congratulations, and thanks for all the fish!
4) Message boards : News : We're back online (Message 1859056)
Posted 1 Apr 2017 by David C Blanchard
Post:
I'm very relieved, but I never doubted you or your team.

I've been in similar situations with the FAA (having to deal with unplanned outages) at 2am with the brass calling on the phone every 15 minutes for progress updates and completion estimates. Our "drop dead" time was 04:10 EST since that's when the automatic flight plan entries started pouring in from the major carriers.

I'm not sure which was worse, fixing the actual problem under these less-than-ideal circumstances, or writing the RCA (Root Cause Analysis) the next day. Simply put, the ability to do the first doesn't always imply the ability to do the second.

So yes, I'm impressed. And grateful. [Anyone who isn't probably hasn't been in a similar situation.]
5) Questions and Answers : Wish list : Proposed Enhancement for TOP GPU MODELS Page (Message 1589375)
Posted 20 Oct 2014 by David C Blanchard
Post:
Thank you ... and the others who posted. I now feel I understand the content.
6) Questions and Answers : Wish list : Proposed Enhancement for TOP GPU MODELS Page (Message 1588141)
Posted 17 Oct 2014 by David C Blanchard
Post:
BilBg -- Thanks for those links!

I think Wiggo nailed it when he said:

"The "most productive GPUs" is based on the amount of work returned by a certain model card so its really based on numbers and popularity of that card.
It does not reflect the actual performance of a card."

My point was that adding the number of GPUs reporting, by model, would allow people to normalize the results and compare performance model by model (if only for SETI work). Armed with that information, and allowing for variations in platform architecture, the only other purchase consideration is cost.

-- Dave
7) Questions and Answers : Wish list : Proposed Enhancement for TOP GPU MODELS Page (Message 1587577)
Posted 16 Oct 2014 by David C Blanchard
Post:
I'm seeing a lot of day-to-day variability in the relative performance of the various GPU models on the TOP GPU MODELS page, which makes me suspect that I may be misinterpreting it.

I thought (hoped) it was a performance comparison, normalized by quantity for each model type. However, the wild variations I'm seeing suggest that it is instead reporting the relative contribution, by group, of however many GPUs of a given type happen to be reporting.

PROPOSAL -- Since BOTH interpretations are valid but different, perhaps a column could be added to indicate the number of GPUs represented (in addition to their combined contribution).

This would allow people like me to see (1) the relative popularity of each GPU model, and (2) the means to normalize the comparative contributions. For example, if 10 GTX 670s get a relative rating of 0.4 and 20 GTX 660s have a relative rating of 0.6, I would know that a 670 is actually 33% more powerful than a 660--at least for SETI work!
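The normalization itself is trivial once the count column exists. A sketch, using the example figures above (the ratings and card counts are hypothetical):

```python
# Proposed normalization: divide each model's combined contribution
# by the number of cards reporting to get a per-card figure.
def per_card(combined_rating, num_gpus):
    return combined_rating / num_gpus

gtx_670 = per_card(0.4, 10)   # 0.040 per card
gtx_660 = per_card(0.6, 20)   # 0.030 per card
print(f"A GTX 670 is {gtx_670 / gtx_660 - 1:.0%} faster per card")  # → 33%
```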

David C Blanchard (Founder of FAA WJHTC) dblanch256@gmail.com
8) Questions and Answers : Wish list : Control Default Priority for CUDA Jobs (Message 1346819)
Posted 15 Mar 2013 by David C Blanchard
Post:
I only asked originally because SETI/BOINC provides so many other options (to their credit) that I was certain there must be one for this and I simply wasn't able to find it.

The situation you describe of SETI "eating my lunch" has not been my experience. My GPU jobs use only 2 to 4 percent of the total system time--pretty much regardless of whether they run at Below Normal or High priority. The only change I notice is that the GPU jobs complete significantly faster--I assume because they are being serviced more promptly at the higher priority.

Oh, and I'm not a gamer--maybe those people have a different experience than mine because they're making their graphics cards do double duty, yes?

Since I first posted here, I found a 3rd-party tool which remembers any priority changes and automatically applies them to all new processes with the same name. It's called PRIO and it works wonderfully--not just for SETI, but for all jobs whose priority I routinely change.
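For anyone who wants to roll their own instead, the core loop such a tool automates is simple. A minimal stand-alone sketch--the `Proc` class here is a stub standing in for real OS process handles; with the third-party psutil package you would iterate over the running processes and set their niceness/priority class instead:

```python
# Sketch of what a priority-pinning tool automates: find every running
# process with a given name and set its priority.  Proc is a stub; a
# real implementation would wrap OS process handles (e.g. via psutil).
class Proc:
    def __init__(self, name, priority="normal"):
        self.name = name
        self.priority = priority

def boost_matching(processes, target_name, new_priority="high"):
    """Set new_priority on every process named target_name.
    Returns how many processes were changed."""
    changed = 0
    for p in processes:
        if p.name == target_name:
            p.priority = new_priority
            changed += 1
    return changed

# Hypothetical process list; only the CUDA task gets boosted.
procs = [Proc("setiathome_cuda.exe"), Proc("notepad.exe")]
boost_matching(procs, "setiathome_cuda.exe")
```

A real tool would run this scan on a timer (or hook process creation) so that newly launched tasks are boosted without the egg timer.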

Again thanks for the feedback!
9) Questions and Answers : Wish list : Control Default Priority for CUDA Jobs (Message 1342809)
Posted 3 Mar 2013 by David C Blanchard
Post:
I run a mixture of SETI regular and CUDA-enabled jobs (on a Windows 7 PC).

The good news is that the CUDA jobs have a higher default priority than the regular ones (BELOW NORMAL versus NORMAL).

However, I get significantly better performance when I manually change the priority of a CUDA task to HIGH with no noticeable degradation to performance on the box.

So ... my question is how to make this happen automatically so I don't have to keep setting a ten-minute egg timer to do it myself?

©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.