6.6.36 Released - FYI

André Wagner

Send message
Joined: 4 Jan 02
Posts: 11
Credit: 2,308,487
RAC: 0
Germany
Message 907212 - Posted: 13 Jun 2009, 11:23:45 UTC - in response to Message 907205.  

It doesn't connect, neither automatically nor via "localhost" or the user name.
ID: 907212 · Report as offensive
Profile Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 907220 - Posted: 13 Jun 2009, 11:47:52 UTC - in response to Message 907212.  

Make sure you allow boinc.exe and boincmgr.exe to 'chat' with each other, i.e. that they can pass through your firewall on TCP port 31416. (Or, in the case of the Windows firewall, make sure they are on the Exceptions list and, if they are, that their entries point to the correct path of the BOINC directory.)

boinc.exe needs a separate allowance on TCP ports 80 and 443 to connect to the internet.
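
For anyone who wants to check both of those from the command line, here is a minimal Python sketch: it only tests whether anything answers on localhost TCP port 31416 and whether outbound connections on ports 80 and 443 get through. The SETI hostname is used purely as an example target.

# Quick connectivity check for the two paths described above:
# (1) BOINC Manager -> client on localhost TCP 31416 (GUI RPC)
# (2) client -> internet on TCP 80/443
import socket

def can_connect(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("GUI RPC (localhost:31416):", can_connect("127.0.0.1", 31416))
# setiathome.berkeley.edu is only an example target for the outbound test.
print("HTTP  (port 80):", can_connect("setiathome.berkeley.edu", 80))
print("HTTPS (port 443):", can_connect("setiathome.berkeley.edu", 443))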
ID: 907220 · Report as offensive
André Wagner

Send message
Joined: 4 Jan 02
Posts: 11
Credit: 2,308,487
RAC: 0
Germany
Message 907266 - Posted: 13 Jun 2009, 13:38:06 UTC - in response to Message 907220.  
Last modified: 13 Jun 2009, 13:59:54 UTC

I've assigned them to the trusted zone. And it did work until this morning, right up to the update ... Could the latest Microsoft patches (.NET 3.5 SP1, security updates) have something to do with it?

Edit: I have a working BOINC Manager again now, out of the blue ... I can't upload, but the log reports a working internet connection.

13.06.2009 15:57:34		Internet access OK - project servers may be temporarily down.


How reliable are such messages?
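
For reference, that message comes from a simple two-step check: when a request to the project fails, the client probes an unrelated reference site, and if that probe succeeds it assumes your internet connection is fine and blames the project servers. A rough Python sketch of the idea, assuming a placeholder reference URL (not necessarily the site BOINC actually uses):

# Sketch of the "Internet access OK - project servers may be temporarily
# down" logic: distinguish "my connection is dead" from "the project is
# unreachable" by probing a reference site after a project failure.
import urllib.request

def reachable(url, timeout=10):
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except OSError:
        return False

PROJECT_URL = "http://setiathome.berkeley.edu/"
REFERENCE_URL = "http://www.example.com/"   # placeholder reference site

if reachable(PROJECT_URL):
    print("Project reachable.")
elif reachable(REFERENCE_URL):
    print("Internet access OK - project servers may be temporarily down.")
else:
    print("No internet access detected.")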

Oh, and before I forget: thank you all very much for your assistance.
ID: 907266 · Report as offensive
Profile Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 907271 - Posted: 13 Jun 2009, 14:04:19 UTC - in response to Message 907266.  

That is possible; it has happened before. But in that case I would have expected a lot more people to complain about it.

You are saying that the client runs correctly, you just can't get BOINC Manager to communicate with it. So something is blocking that communication. Both the client and the GUI are in the same directory? If they are, the next best bet is to check the firewall again. Perhaps remove their entries and add them again.

Did you specify any range of IP addresses for boinc.exe and boincmgr.exe to use in the firewall?

One other thing, check what's in the gui_rpc_auth.cfg file in your BOINC Data directory.

Also, no error messages in the stderrgui.txt or stdoutgui.txt files?
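
For context, gui_rpc_auth.cfg holds the password BOINC Manager has to present to the client over the GUI RPC socket. The following is a rough Python sketch of that handshake as I understand it (auth1 returns a nonce, auth2 sends the MD5 of nonce plus password); the data directory path is an assumption you would need to adjust for your machine.

# Minimal GUI RPC authorization check against the local BOINC client.
# Reads the password from gui_rpc_auth.cfg and attempts the auth1/auth2
# handshake; requests are XML fragments terminated by a 0x03 byte.
import hashlib
import socket

DATA_DIR = r"C:\ProgramData\BOINC"   # assumption: adjust to your BOINC Data directory
password = open(DATA_DIR + r"\gui_rpc_auth.cfg").read().strip()

def rpc(sock, xml):
    """Send one GUI RPC request and return the raw reply text."""
    sock.sendall(b"<boinc_gui_rpc_request>\n" + xml + b"\n</boinc_gui_rpc_request>\n\x03")
    reply = b""
    while not reply.endswith(b"\x03"):
        chunk = sock.recv(4096)
        if not chunk:
            break
        reply += chunk
    return reply.decode(errors="replace")

with socket.create_connection(("127.0.0.1", 31416), timeout=5) as s:
    nonce = rpc(s, b"<auth1/>").split("<nonce>")[1].split("</nonce>")[0]
    digest = hashlib.md5((nonce + password).encode()).hexdigest()
    reply = rpc(s, ("<auth2>\n<nonce_hash>" + digest + "</nonce_hash>\n</auth2>").encode())
    print("authorized" if "<authorized/>" in reply else "not authorized")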
ID: 907271 · Report as offensive
André Wagner

Send message
Joined: 4 Jan 02
Posts: 11
Credit: 2,308,487
RAC: 0
Germany
Message 907292 - Posted: 13 Jun 2009, 15:21:09 UTC - in response to Message 907271.  

Thanks for your reply. So would I ...

stdoutgui.txt shows the GUI polling on socket 552:

[06/13/09 15:41:43] TRACE [1688]: init_poll(): sock = 552


At least that's what I make of it. I can't imagine what has been blocking it from getting access. The firewall settings haven't been touched for a year or so (ESET Internet Security, the firewall around their NOD32 anti-virus scanner; it can be configured in great detail).

Anyway, things are working now and I won't touch the damn thing until tomorrow morning. :)
ID: 907292 · Report as offensive
Profile Jet

Send message
Joined: 25 Sep 07
Posts: 12
Credit: 1,586,013
RAC: 0
Ukraine
Message 907310 - Posted: 13 Jun 2009, 16:25:31 UTC

Installed version 6.6.36. At first glance it works like the previous one; I just haven't had a chance yet to check whether CUDA works together with my optimized V8 app.
I'm only getting these messages, which seem a bit strange:

6/13/2009 6:51:45 PM SETI@home Sending scheduler request: To fetch work.
6/13/2009 6:51:45 PM SETI@home Requesting new tasks for GPU
6/13/2009 6:52:25 PM SETI@home Scheduler request completed: got 0 new tasks
6/13/2009 6:52:25 PM SETI@home Message from server: (Project has no jobs available)
6/13/2009 6:53:42 PM SETI@home Sending scheduler request: To fetch work.
6/13/2009 6:53:42 PM SETI@home Requesting new tasks for GPU
6/13/2009 6:54:04 PM Project communication failed: attempting access to reference site
6/13/2009 6:54:06 PM Internet access OK - project servers may be temporarily down.
6/13/2009 6:54:07 PM SETI@home Scheduler request failed: Couldn't connect to server
6/13/2009 6:55:08 PM SETI@home Sending scheduler request: To fetch work.
6/13/2009 6:55:08 PM SETI@home Requesting new tasks for GPU
6/13/2009 6:55:28 PM SETI@home Scheduler request completed: got 0 new tasks
6/13/2009 6:55:28 PM SETI@home Message from server: (Project has no jobs available)
6/13/2009 6:57:45 PM SETI@home Sending scheduler request: To fetch work.
6/13/2009 6:57:45 PM SETI@home Requesting new tasks for GPU
6/13/2009 6:58:06 PM SETI@home Scheduler request completed: got 0 new tasks
6/13/2009 6:58:06 PM SETI@home Message from server: (Project has no jobs available)
ID: 907310 · Report as offensive
Profile [AF>france>pas-de-calais]symaski62
Volunteer tester

Send message
Joined: 12 Aug 05
Posts: 258
Credit: 100,548
RAC: 0
France
Message 907342 - Posted: 13 Jun 2009, 18:23:14 UTC

http://setiathome.berkeley.edu/result.php?resultid=1259882918

<core_client_version>6.6.36</core_client_version>
<![CDATA[
<message>
 - exit code -6 (0xfffffffa)
</message>
<stderr_txt>

VLAR WU (AR: 0.010299 )detected... autokill initialised
SETI@home error -6 Bad workunit header

File: ..\seti_header.cpp
Line: 207


</stderr_txt>
]]>



SETI@Home Informational message -9 result_overflow
With a general handicap of 80%, it takes me a lot of effort to contribute to the community and to express myself; thank you for being understanding.
ID: 907342 · Report as offensive
OzzFan
Volunteer tester
Avatar

Send message
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 907386 - Posted: 13 Jun 2009, 22:25:25 UTC - in response to Message 907163.  

Can't be. The screen saver is built into the science app, so a new app would have to be released in order to have a new screensaver.

The screen saver hasn't been built into the science app since BOINC 6, Charlie. It's now a separate graphics application.

But these latest BOINC versions have new code for the screen saver (in case the project releases a new graphics app) and a new BOINC screen saver.


Yes, I'm aware of that. I suppose I should have said, "The screen saver code does not exist within BOINC, so upgrading BOINC should theoretically have no effect on screen saver mechanics."
ID: 907386 · Report as offensive
Profile Jord
Volunteer tester
Avatar

Send message
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 907395 - Posted: 13 Jun 2009, 22:40:25 UTC - in response to Message 907386.  

I suppose I should have said, "The screen saver code does not exist within BOINC, so upgrading BOINC should theoretically have no effect on screen saver mechanics."

Technically also not correct, as BOINC does have a screen saver of its own, plus you need to enable the BOINC screen saver in Windows to see the Seti screen saver.

"The Seti screen saver code does not exist within BOINC" is more correct.
Now before I am accused of picking the nits again... /me ducks. !-D
ID: 907395 · Report as offensive
Profile perryjay
Volunteer tester
Avatar

Send message
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 907402 - Posted: 13 Jun 2009, 23:12:19 UTC - in response to Message 907342.  

http://setiathome.berkeley.edu/result.php?resultid=1259882918

<core_client_version>6.6.36</core_client_version>
<![CDATA[
<message>
 - exit code -6 (0xfffffffa)
</message>
<stderr_txt>

VLAR WU (AR: 0.010299 )detected... autokill initialised
SETI@home error -6 Bad workunit header

File: ..\seti_header.cpp
Line: 207


</stderr_txt>
]]>

Hi,
I didn't see a question in there, but if you are wondering about that error message: that is just the optimized app killing the VLAR (Very Low Angle Range) work unit, as it should. VLAR tasks are very hard on graphics cards, so it is easier to just send them back for someone else to run on their CPU. Raistmer designed the app to do exactly that.
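
To illustrate the idea (a hypothetical sketch, not Raistmer's actual code): the app reads the angle range from the workunit header and exits before doing any GPU work if it falls below a cut-off, so the server reissues the task to another host. The header tag name and the threshold are assumptions made for illustration; the exit value mirrors the -6 seen in the stderr above.

# Hypothetical sketch of the VLAR "autokill": read the angle range from
# the workunit header and refuse to run the task on the GPU if it is
# below a cut-off, so the server sends it to someone else.
import re
import sys

VLAR_THRESHOLD = 0.05   # illustrative cut-off, not the app's exact value

def angle_range(wu_header_text):
    """Pull the angle range out of the workunit header text (tag name assumed)."""
    match = re.search(r"<true_angle_range>\s*([\d.eE+-]+)", wu_header_text)
    return float(match.group(1)) if match else None

def maybe_autokill(wu_header_text):
    ar = angle_range(wu_header_text)
    if ar is not None and ar < VLAR_THRESHOLD:
        print(f"VLAR WU (AR: {ar}) detected... autokill initialised", file=sys.stderr)
        sys.exit(-6)     # mirrors the exit code reported in the stderr above

maybe_autokill("<true_angle_range>0.010299</true_angle_range>")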



PROUD MEMBER OF Team Starfire World BOINC
ID: 907402 · Report as offensive
OzzFan
Volunteer tester
Avatar

Send message
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 907471 - Posted: 14 Jun 2009, 2:56:56 UTC - in response to Message 907395.  
Last modified: 14 Jun 2009, 2:57:28 UTC

I suppose I should have said, "The screen saver code does not exist within BOINC, so upgrading BOINC should theoretically have no effect on screen saver mechanics."

Technically also not correct, as BOINC does have a screen saver of its own, plus you need to enable the BOINC screen saver in Windows to see the Seti screen saver.

"The Seti screen saver code does not exist within BOINC" is more correct.
Now before I am accused of picking the nits again... /me ducks. !-D


LOL

Right, but I didn't say "no screen saver code exists within BOINC". I said "The screen saver code...". "The" refers to the user's suggestion that the SETI screen saver code has been problematic since upgrading BOINC, and I attempted to point out that I don't see the correlation since "the screen saver code does not exist within BOINC".

:)

I can pick nits too! LOL
ID: 907471 · Report as offensive
Andy Williams
Volunteer tester
Avatar

Send message
Joined: 11 May 01
Posts: 187
Credit: 112,464,820
RAC: 0
United States
Message 907875 - Posted: 15 Jun 2009, 16:53:12 UTC

The scheduler in 6.6.36 seems to be totally broken. Starting a machine from scratch last night, I've received probably 300 6.03s. Meanwhile my GPUs have been idle for 16 hours. S@H is the only project on the machine in question.
--
Classic 82353 WU / 400979 h
ID: 907875 · Report as offensive
Profile Bob Mahoney Design
Avatar

Send message
Joined: 4 Apr 04
Posts: 178
Credit: 9,205,632
RAC: 0
United States
Message 907896 - Posted: 15 Jun 2009, 17:44:57 UTC - in response to Message 907875.  

The scheduler in 6.6.36 seems to be totally broken. Starting a machine from scratch last night I've received probably 300 6.03s. Meanwhile my GPUs have been idle for 16 hours. S@H only on the machine in question.

Funny, the night before that I received about 300 6.08s and only two 6.03s. Then I got some 6.03s; one took a looong time to run, which adjusted the Duration Correction Factor (DCF), which put the system into 'hurry up' mode (EDF), which created a string of "Waiting to run" tasks, which overran the GPU memory, which locked up the computer, which persuaded me to detach it from the project. Now the computer is sitting in the penalty box. :)

The host in question will be allowed to play again, but only after it and BOINC decide to get along. I'm waiting patiently for BOINC 6.10.xx, which should have multiple DCFs, one for each class of task on the host.
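
To spell out why a single shared DCF causes that chain reaction: every task's runtime estimate is multiplied by the host's one DCF, so a single slow CPU task inflates the estimates for the fast GPU queue as well, and the inflated totals can push work past its deadlines into EDF. The toy Python example below uses made-up numbers and a deliberately simplified update rule, not BOINC's real bookkeeping.

# Toy illustration of one shared Duration Correction Factor (DCF)
# versus one DCF per class of task. Numbers and the update rule are
# invented for illustration only.

def update_dcf(dcf, estimated, actual):
    # Simplified rule: jump up immediately if the task ran long.
    return max(dcf, actual / estimated)

cpu_estimate, gpu_estimate = 3.0, 0.2           # hours per task (raw estimates)
queue = {"cpu": 40, "gpu": 300}                 # tasks waiting per class

# One CPU task takes 12 hours instead of the estimated 3.
shared_dcf = update_dcf(1.0, cpu_estimate, 12.0)            # -> 4.0
print("Shared DCF:", shared_dcf)
print("GPU queue estimate:", queue["gpu"] * gpu_estimate * shared_dcf, "hours")
# 300 * 0.2 * 4.0 = 240 estimated hours of GPU work -> deadline panic (EDF),
# even though the GPU tasks themselves were never slow.

per_class_dcf = {"cpu": update_dcf(1.0, cpu_estimate, 12.0), "gpu": 1.0}
print("GPU queue with per-class DCF:", queue["gpu"] * gpu_estimate * per_class_dcf["gpu"], "hours")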

Just a note of support: IMHO, SETI@home and BOINC are the most ambitious projects of their type in all the world. All research projects have periods of instability and adaptation. We are in the middle of one of those times. It is the nature of the beast. Bleeding edge is painful but exciting.

Just an opinion from a SETI@home fan...

Bob
Opinion stated as fact? Who, me?
ID: 907896 · Report as offensive
Profile perryjay
Volunteer tester
Avatar

Send message
Joined: 20 Aug 02
Posts: 3377
Credit: 20,676,751
RAC: 0
United States
Message 907902 - Posted: 15 Jun 2009, 18:00:55 UTC - in response to Message 907896.  

6.6.36 is doing well for me. I don't keep much of a cache, but it is keeping it filled a few tasks at a time. It will request CPU tasks and get 'no work available' until it decides I need some, then it does the same thing for the GPU and finally gives me a couple of each to shut me up.


PROUD MEMBER OF Team Starfire World BOINC
ID: 907902 · Report as offensive
Andy Williams
Volunteer tester
Avatar

Send message
Joined: 11 May 01
Posts: 187
Credit: 112,464,820
RAC: 0
United States
Message 907907 - Posted: 15 Jun 2009, 18:19:20 UTC

I have a number of machines that had a cache when I upgraded to 6.6.36, and they are running without incident. The problem machine had nothing, and that seems to exacerbate the situation. I can sit in front of it and watch it make a GPU work request a couple of times and get 'no work available'; then in a few minutes it will start in on CPU requests and doggedly persist until it gets two or three WUs to add to the already lengthy CPU cache. The frustrating thing, of course, as we all know, is that 6.03s and 6.08s are identical. The scheduler or WU "brander", it seems, is paying absolutely no attention to hardware utilization.
--
Classic 82353 WU / 400979 h
ID: 907907 · Report as offensive
Profile Westsail and *Pyxey*
Volunteer tester
Avatar

Send message
Joined: 26 Jul 99
Posts: 338
Credit: 20,544,999
RAC: 0
United States
Message 908009 - Posted: 16 Jun 2009, 0:30:33 UTC

Yeah... I'm not sure where you guys get the counts, but it had become clear there were too many 6.03s piling up.
I imagine it's caused by the disparity in speed between CPU and GPU.
Well, my temporary solution to the problem has been to run V11 as the GPU app and the 'team work' build as the CPU app.
(If I just lost you, don't worry; this is not recommended, only an experiment.)
Now the GPU works just like normal, but using the team work build for CPU work puts one task on the card and one on both cores, for a total of four MB units crunching on a dual core with a GPU. I can also crunch Aqua@home CUDA or GPUgrid and SETI CUDA in parallel on one card. I'm not sure how other cards react to this configuration. So far it seems two tasks at the same time on the GPU are possible, but it gets complicated fast as far as the priorities of the worker threads are concerned. They just switch off as long as there is enough video memory; when you try too many, a huge slowdown is evident. Just sharing, since for the moment it is clearing my 6.03 cache down, so it's a quick and dirty solution.
"The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! (I found it!) but rather, 'hmm... that's funny...'" -- Isaac Asimov
ID: 908009 · Report as offensive
Andy Williams
Volunteer tester
Avatar

Send message
Joined: 11 May 01
Posts: 187
Credit: 112,464,820
RAC: 0
United States
Message 908013 - Posted: 16 Jun 2009, 0:44:03 UTC - in response to Message 908009.  

I haven't tried to fiddle with rebranding. Unfortunately I now have two or three machines, with two or three CUDA cards each, and no 6.08s. It's crazy. The client should be able to rebrand them as 6.08s.

In the meantime, my hardware sits idle.
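
For what it's worth, the manual "rebranding" people resorted to boiled down to editing client_state.xml (with BOINC stopped and a backup made first) so the stranded tasks point at the 6.08 CUDA version instead of 6.03. The sketch below is purely illustrative: it assumes the result entries carry a <version_num> tag and naively flips every 603 tag it finds, which a careful tool would never do, so treat it as an outline of the idea rather than a procedure.

# Rough, illustrative sketch of "rebranding" 6.03 entries to 6.08 in
# client_state.xml. Assumptions: BOINC is stopped, the file is backed up,
# and the entries carry a <version_num> tag. Not a supported tool.
import re
import shutil

STATE_FILE = r"C:\ProgramData\BOINC\client_state.xml"   # assumption: adjust to your Data directory
shutil.copyfile(STATE_FILE, STATE_FILE + ".bak")         # always keep a backup

with open(STATE_FILE, encoding="utf-8", errors="replace") as f:
    state = f.read()

# Naively flip every 603 version tag to 608; a real tool would restrict
# this to the relevant SETI@home MultiBeam result blocks.
patched, count = re.subn(r"<version_num>603</version_num>",
                         "<version_num>608</version_num>", state)

with open(STATE_FILE, "w", encoding="utf-8", errors="replace") as f:
    f.write(patched)

print("Rewrote", count, "version tags from 603 to 608.")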
--
Classic 82353 WU / 400979 h
ID: 908013 · Report as offensive
Profile Westsail and *Pyxey*
Volunteer tester
Avatar

Send message
Joined: 26 Jul 99
Posts: 338
Credit: 20,544,999
RAC: 0
United States
Message 908021 - Posted: 16 Jun 2009, 1:02:33 UTC - in response to Message 908013.  

Andy, check your pm
"The most exciting phrase to hear in science, the one that heralds new discoveries, is not Eureka! (I found it!) but rather, 'hmm... that's funny...'" -- Isaac Asimov
ID: 908021 · Report as offensive