Posts by HAL9000

21) Message boards : Number crunching : Statistics (Message 1860813)
Posted 13 days ago by Profile HAL9000
Post:
Could someone point me to a thread that explains how the "Statistics" tab of BOINC relates to the work units put through per day? When it talks about "average units / host", is that per day or per week?

I think the numbers it is posting are the results of my "verified credits" per task? Over how long a period?

I think I understand, but I would like to review the details

Thanks,
Tom Miller

Unless you have modified save_stats_days in cc_config.xml, the default period is 30 days.
The dates should be listed along the bottom of the graph.

The graph is generated from each project's statistics_PROJECT.xml file.
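
If you want to inspect the raw numbers behind the graph yourself, here is a minimal sketch in Python, assuming the statistics file follows the usual BOINC layout of repeated <daily_statistics> records holding a Unix-epoch <day> plus total and recent-average credit fields (the field names may differ between client versions):

# Minimal sketch: print one line per recorded day from each per-project
# statistics file in the BOINC data directory. Assumes repeated
# <daily_statistics> records; adjust the field names if your client
# writes something different.
import glob
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

for path in glob.glob("statistics_*.xml"):
    print(path)
    root = ET.parse(path).getroot()
    for rec in root.iter("daily_statistics"):
        day = float(rec.findtext("day", "0"))            # Unix epoch seconds
        rac = float(rec.findtext("host_expavg_credit", "0"))
        total = float(rec.findtext("host_total_credit", "0"))
        stamp = datetime.fromtimestamp(day, tz=timezone.utc).date()
        print("  %s  RAC=%10.2f  total=%12.2f" % (stamp, rac, total))

Run it from the BOINC data directory; the number of days it lists is exactly what save_stats_days controls.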
22) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1860346)
Posted 16 days ago by Profile HAL9000
Post:
Web site and forums behaving badly at the moment.
Slow to respond, and other times not even responding at all.

EDIT- just had a look in my Manager's Event log & a few Scheduler errors (Couldn't connect to server) are showing there.

In scanning my stdoutdae.txt I do see a slight increase in the number of scheduler failures over the course of this past week.
Project details for: SETI@home including all dates
Scheduler Requests: 4090
Scheduler Success: 99 %, Count: 4062
Scheduler Failure: 0 %, Count: 28 (Total)
Scheduler Failure: 0 % of total, Count: 24 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 3 (HTTP service unavailable)
Scheduler Failure: 0 % of total, Count: 0 (HTTP internal server error)
Scheduler Failure: 0 % of total, Count: 1 (Failure when receiving data from the peer)
Scheduler Failure: 0 % of total, Count: 0 (Timeout was reached)
Scheduler Timeout: 0 % of failures

Project details for: SETI@home including 08-Apr-2017
Scheduler Requests: 217
Scheduler Success: 98 %, Count: 214
Scheduler Failure: 1 %, Count: 3 (Total)
Scheduler Failure: 1 % of total, Count: 3 (Couldn't connect to server)

Project details for: SETI@home including 07-Apr-2017
Scheduler Requests: 480
Scheduler Success: 99 %, Count: 479
Scheduler Failure: 0 %, Count: 1 (Total)
Scheduler Failure: 0 % of total, Count: 1 (Couldn't connect to server)

Project details for: SETI@home including 06-Apr-2017
Scheduler Requests: 479
Scheduler Success: 99 %, Count: 476
Scheduler Failure: 0 %, Count: 3 (Total)
Scheduler Failure: 0 % of total, Count: 2 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 1 (HTTP service unavailable)

Project details for: SETI@home including 05-Apr-2017
Scheduler Requests: 478
Scheduler Success: 99 %, Count: 474
Scheduler Failure: 0 %, Count: 4 (Total)
Scheduler Failure: 0 % of total, Count: 2 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 2 (HTTP service unavailable)

Project details for: SETI@home including 04-Apr-2017
Scheduler Requests: 381
Scheduler Success: 99 %, Count: 379
Scheduler Failure: 0 %, Count: 2 (Total)
Scheduler Failure: 0 % of total, Count: 2 (Couldn't connect to server)

Project details for: SETI@home including 03-Apr-2017
Scheduler Requests: 480
Scheduler Success: 99 %, Count: 476
Scheduler Failure: 0 %, Count: 4 (Total)
Scheduler Failure: 0 % of total, Count: 4 (Couldn't connect to server)

Project details for: SETI@home including 02-Apr-2017
Scheduler Requests: 480
Scheduler Success: 100 %, Count: 480
Scheduler Failure: 0 %, Count: 0 (Total)
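
For anyone curious how a summary like the one above can be pulled together, here is a rough sketch, assuming stdoutdae.txt lines that begin with a DD-Mon-YYYY date and contain either "Scheduler request completed" or "Scheduler request failed: <reason>" (the exact wording can vary between client versions):

# Rough sketch: tally scheduler request outcomes per day from a BOINC
# stdoutdae.txt. Lines that don't start with a parseable date are skipped.
from collections import Counter, defaultdict
from datetime import datetime

per_day = defaultdict(Counter)

with open("stdoutdae.txt", errors="replace") as log:
    for line in log:
        token = line.split(" ", 1)[0]               # e.g. "08-Apr-2017"
        try:
            day = datetime.strptime(token, "%d-%b-%Y").date()
        except ValueError:
            continue                                # not a dated log line
        if "Scheduler request completed" in line:
            per_day[day]["success"] += 1
        elif "Scheduler request failed:" in line:
            reason = line.split("Scheduler request failed:", 1)[1].strip()
            per_day[day]["failed: " + reason] += 1

for day, counts in sorted(per_day.items()):
    total = sum(counts.values())
    ok = counts["success"]
    print("%s: %d requests, %d succeeded, %d failed" % (day, total, ok, total - ok))
    for key, n in sorted(counts.items()):
        if key != "success":
            print("    %d x %s" % (n, key))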
23) Message boards : Number crunching : W 10 CPU core temp (Message 1859737)
Posted 18 days ago by Profile HAL9000
Post:
On my Windows 10 notebook HWInfo & HWMonitor seem to be working fine with the i7-6700HQ.

Also I'm not up to date with the latest version. Currently I have:
HWInfo 5.38-3000
HWMonitor 1.27

HWinfo64 is up to 5.50-3103 currently. Most of the latest updates are to properly handle Ryzen.

Yeah, I grabbed the latest 5.5-3130 after posting, but I wanted to indicate that the older version seems to be working just fine with the same generation of CPU & OS.
24) Message boards : Number crunching : W 10 CPU core temp (Message 1859735)
Posted 18 days ago by Profile HAL9000
Post:
On my Windows 10 notebook HWInfo & HWMonitor seem to be working fine with the i7-6700HQ.

Also I'm not up to date with the latest version. Currently I have:
HWInfo 5.38-3000
HWMonitor 1.27
25) Message boards : Number crunching : Which gets filled first after an outage ..... CPU or GPU tasks? (Message 1859205)
Posted 22 days ago by Profile HAL9000
Post:
Since the last outage I got CPU and GPU tasks for the NVIDIA adapter but still none for the ATI ... the ATI is about 10x faster than the NVIDIA. Should I reinstall the client?
The message says it is requesting tasks for NVIDIA and ATI, but the WUs sent are always only for NVIDIA. Can't find the error. Drivers etc. are all up to date.

Someone had a similar issue some time ago. When they had no GPU tasks, their CPU cache would fill to the limit and then they would get the "This computer has reached a limit on tasks in progress" message.
They had to disable requesting CPU tasks until GPU tasks started to download; then they could re-enable the CPU requests.
26) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1858893)
Posted 23 days ago by Profile HAL9000
Post:
Scheduler errors just about cleared up.
Now getting work on some requests.

Looking over my logs I didn't have very many failures and my caches filled up pretty quickly.

Project details for: SETI@home including all dates
Scheduler Requests: 701
Scheduler Success: 98 %, Count: 690
Scheduler Failure: 1 %, Count: 11 (Total)
Scheduler Failure: 1 % of total, Count: 10 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 0 (HTTP service unavailable)
Scheduler Failure: 0 % of total, Count: 0 (HTTP internal server error)
Scheduler Failure: 0 % of total, Count: 1 (Failure when receiving data from the peer)
Scheduler Failure: 0 % of total, Count: 0 (Timeout was reached)
Scheduler Timeout: 0 % of failures

Project details for: SETI@home including 01-Apr-2017
Scheduler Requests: 78
Scheduler Success: 88 %, Count: 69
Scheduler Failure: 11 %, Count: 9 (Total)
Scheduler Failure: 10 % of total, Count: 8 (Couldn't connect to server)
Scheduler Failure: 1 % of total, Count: 1 (Failure when receiving data from the peer)

Project details for: SETI@home including 31-Mar-2017
Scheduler Requests: 20
Scheduler Success: 90 %, Count: 18
Scheduler Failure: 10 %, Count: 2 (Total)
Scheduler Failure: 10 % of total, Count: 2 (Couldn't connect to server)
27) Message boards : News : We're back online (Message 1858851)
Posted 23 days ago by Profile HAL9000
Post:
And with some incredible luck we didn't lose any entries in the pulse signal table at all.

Whew!

It is always great news when there turns out to be no data loss.
Thank you guys for your dedication and keeping us up to date.
28) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1858847)
Posted 23 days ago by Profile HAL9000
Post:
Aye. Extended outage = extended recovery time.
29) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1858843)
Posted 23 days ago by Profile HAL9000
Post:
It looks like we are back in business!
3/31/2017 11:39:48 PM	SETI@home	Reporting 100 completed tasks
3/31/2017 11:39:48 PM	SETI@home	Requesting new tasks for ATI GPU
3/31/2017 11:39:53 PM	SETI@home	Scheduler request completed: got 1 new tasks
30) Message boards : Number crunching : Applications and Console Window Host (Message 1858497)
Posted 24 days ago by Profile HAL9000
Post:
Hi all,
Just realized that I have one instance of conhost.exe running for each task slot.
With 5 CPU tasks and 2 GPU tasks running, there are 7 instances of conhost.exe. Is this normal?
I am running the stock apps:
setiathome_8.05_windows_x86_64.exe
setiathome_8.22_windows_intelx86_opencl_nvidia_SoG.exe

...Grete...

It would be considered normal for most systems: each console-mode science application gets its own Console Window Host process, so one conhost.exe per running task is expected.
There are changes that can be made to the OS so it doesn't do that, but they would disable pretty much all of the "pretty" GUI features, which may or may not be desirable.
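
If you want to check the correspondence on your own machine, here is a quick sketch using the third-party psutil package; matching on process names that start with "setiathome"/"astropulse" is just an assumption based on the stock binaries listed above, and the counts won't line up exactly if other console programs are open:

# Quick check: compare the number of conhost.exe instances with the
# number of running SETI@home / AstroPulse science applications.
# Requires the third-party psutil package (pip install psutil).
import psutil

conhost = 0
science = 0
for proc in psutil.process_iter(["name"]):
    name = (proc.info["name"] or "").lower()
    if name == "conhost.exe":
        conhost += 1
    elif name.startswith(("setiathome", "astropulse")):
        science += 1

print("conhost.exe instances:", conhost)
print("running science apps: ", science)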
31) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857801)
Posted 28 days ago by Profile HAL9000
Post:
Use_sleep with a 1080?? I would drop that and use -hp instead. You have to remember that a 5% increase in GPU times will be more than your entire CPU can do for RAC.

EDIT: And I would update to r3584 as well.


1) I am running dual 8-core E5-2670s, so 32 threads. Considering the GPU threads, I'm actually running 24 or 25 CPU threads on my 2 main crunchers, and they come close to ~1 GPU. Or at least they are not completely left in the dust like a desktop CPU would be.

2) As far as r3584 goes, I am loath to do anything not in stock or Lunatics, since it is so easy to screw up the app_info.

3) use_sleep seems to work fine for me, as I mentioned above.

The current stock NV & ATI apps are r3584.
32) Message boards : Number crunching : Anything relating to AstroPulse tasks (Message 1857629)
Posted 29 days ago by Profile HAL9000
Post:
AP is dead and gone. Don't expect it to return in any big way, ever again.
Have you all forgotten the messages from DA, that this project is winding down?
Nebula is here to analyze the results we collected over the years, and then S@H as we know it will shut down.

There was a rumor of running AP on the GBT data, but I don't know if that's going to end up happening. I haven't heard anything about it other than some people in the know mentioning it in passing a while back.

I suspect new data from Arecibo is pretty much done and over with, but GBT seems like it's going to be around for a while. So there's no harm in going back and checking all the Arecibo results, since there are basically not going to be any new results coming in anymore. That's my opinion.

I didn't get the impression that S@H is winding down or stopping from the Nebula messages. Getting data from other observatories has been in the works for quite some time. I believe we had a donation drive to get some of the hardware to collect GBT data in 2013/2014 or so. Breakthrough just seems to have given the funds to get the rest of the GBT recording setup done.

I believe it was near the end of summer 2016 that Eric stated something along the lines of there being a plan to look into splitting AP for the GBT data.
Since we haven't heard anything, I would guess the plan to look into it is still in place.
If the time, money, and resources are in place, I'm sure AP v8 will go into testing at Beta.
33) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857303)
Posted 24 Mar 2017 by Profile HAL9000
Post:
According to IBM, Informix does have limits, so the question is: Has Seti reached those limits or will it ever reach those limits?

Informix limits

My brain wants to tell me that they are using something like version 10, so that may or may not matter. It looks like the limits from 11.5 to 12 are pretty much the same.
34) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1857204)
Posted 23 Mar 2017 by Profile HAL9000
Post:
The project has NEVER promised to keep us fed with work, they have ALWAYS told us to have standby projects to fall back onto in the event of (extended) outages.

But it would be nice if it were possible to carry at least 24 hours of work, even 48. The option is there in the settings; it would be appreciated if it actually worked.
Why not raise the server side limits, but restrict all users to a maximum of 2 days cache, leaving more work available to faster crunchers to see them through these outages, but still limiting the load on the database?

If there is no data available to crunch, then there's no work.
But if there is work available and the project has issues, it would be nice to be able to continue processing it.

It has been said that there will (eventually) be way more work than the present user base can process in a reasonable time, and more crunchers are needed. But it doesn't matter how many crunchers there are, if they can't process the work. And even if the servers spend most of a week down, if in their uptime people are able to get enough work to tide them through to the next uptime, people will continue to crunch.

Being able to have a 24 hour cache of SETI@home work would be nice.
I don't think a project could set a maximum cache size in days, given that it is a BOINC-wide setting rather than a project setting. I don't doubt they would implement such a setting, but with the lack of BOINC development I don't expect anything like that to happen soon.
Doubling the current limit of 100 CPU tasks plus 100 tasks per GPU would probably not be a great idea, given that we have been in the 5-6 million range of tasks in progress fairly recently. Previously the db server was falling over when we were hitting ~10-11 million, and I believe it was stated at the time that the server wasn't hitting a hardware limitation, but more of a software or database limitation.
At one time it was mentioned they were considering switching away from the current Informix software, but that was some time ago. With Nebula I'm not sure where priorities may be heading now.
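
To put the doubling argument in numbers, a small back-of-the-envelope sketch; the assumption that the in-progress count grows roughly in proportion to the per-host limits is mine, while the 5-6 million and ~10-11 million figures are the ones quoted above:

# Back-of-the-envelope sketch of the reasoning above. The proportional
# scaling is an illustrative assumption; the figures are the rough
# numbers quoted in the post.
TROUBLE_ZONE = 10e6          # level where the db previously struggled

for current in (5e6, 6e6):   # recent in-progress range
    doubled = 2 * current    # assume in-progress scales with the limits
    verdict = "into" if doubled >= TROUBLE_ZONE else "below"
    print("%.0fM in progress -> ~%.0fM with doubled limits (%s the ~10-11M trouble zone)"
          % (current / 1e6, doubled / 1e6, verdict))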
35) Message boards : Number crunching : Win10 -> Radeon HD 5xxx and HD 6xxx CRIMSON Drivers don´t use OpenCl ? Well not anymore (Message 1856830)
Posted 20 Mar 2017 by Profile HAL9000
Post:
For sure, they might work, but you have a lot of performance loss if you don't use the Crimson drivers.

Try it under Win10.

I try not to use Win10 unless I am forced.
36) Message boards : Number crunching : Problem With New Machine on BoincStats 1.69 (Message 1856718)
Posted 20 Mar 2017 by Profile HAL9000
Post:
I found the problem - I had gotten the BOINC installer from somewhere other than SETI. Though it was the correct version (7.6.33), it was 32-bit rather than 64-bit, and was incompatible with BoincTasks 1.69.

When I installed the 64-bit version, BT found all the tasks, etc. that it couldn't find before.

So all seems cool, for now!

Well, that all makes more sense now; I was wondering what BOINCstats 1.69 was.
I still prefer BOINCview to watch over my systems. Plus it doesn't switch to the task view at random like BoincTasks does.
37) Message boards : Number crunching : Panic Mode On (105) Server Problems? (Message 1856635)
Posted 19 Mar 2017 by Profile HAL9000
Post:
Checking out Tasks and Settings taking ages to respond again, and the Replica is falling behind as well.

Looks like it is going for orbit
38) Message boards : Number crunching : 25% badge (Message 1856343)
Posted 18 Mar 2017 by Profile HAL9000
Post:
. . When I select the project I just get stats for the project overall, nothing about my stats. ??

Stephen

<shrug>

I'm not sure why you would expect to see any information about your account when looking at the project stats.
The purpose of looking at the project stats was to get the current number of active participants, which can be used to calculate where the borders for each of the RAC badges are at that point in time.

If you want to see your information on BOINCstats one of the many ways to get there is by using the links under Cross-project statistics from your account on each project.
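
As an illustration of that calculation, here is a small sketch; the tier percentages and the example RAC values are made up for illustration, the point being only that once you know the set of active participants you can turn "top X % by RAC" into a concrete cutoff:

# Small sketch: turn "top X % by RAC" badge tiers into concrete RAC
# cutoffs. The tier percentages and example RAC values are hypothetical.
def badge_cutoffs(racs, tiers=(1, 5, 25)):
    """Return {tier_percent: minimum RAC needed to be inside that tier}."""
    ranked = sorted(racs, reverse=True)
    cutoffs = {}
    for pct in tiers:
        rank = max(1, int(len(ranked) * pct / 100))   # last rank inside the tier
        cutoffs[pct] = ranked[rank - 1]
    return cutoffs

# Made-up RAC values for 20 "active" participants.
example_racs = [12000, 9500, 8700, 6400, 5200, 4800, 3900, 3100,
                2600, 2200, 1800, 1500, 1200, 950, 700, 520,
                380, 240, 150, 90]
print(badge_cutoffs(example_racs))   # e.g. {1: 12000, 5: 12000, 25: 5200}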
39) Message boards : Number crunching : Asymetric "loads" on GTX 750Ti's (different brands) (Message 1856340)
Posted 18 Mar 2017 by Profile HAL9000
Post:
I have been seeing some GPU tasks on my R9 390X and 750 Ti with larger CPU times as well, roughly twice the normal CPU time. I figured it had to do with the data in the tasks, since the task run time was about the same as normal.
40) Message boards : Number crunching : Reprocessing (Message 1855946)
Posted 17 Mar 2017 by Profile HAL9000
Post:
As the search did not generate satisfying results, I'm posting this question here:
At the moment, my machine is processing 2008 data, and I've seen many times before that we're processing "old" data.
The question is: is it really old data, or are we reprocessing it? If so, for what specific reason (e.g. new algorithms)?

Would be very interested to broaden my knowledge.

Greetings,
Michael

This post from Eric probably has much of the information you are looking for.
http://setiathome.berkeley.edu/forum_thread.php?id=78710#1752922
I'm sure if you want a more technical answer some of the developer guys can probably tell you more than you want to know about the code changes.

