Anyone else not getting work???

T. Moe
Joined: 31 May 12
Posts: 157
Credit: 1,787,403
RAC: 0
United States
Message 1277354 - Posted: 30 Aug 2012, 0:59:36 UTC

I just got 3 WU after I restarted my PC.
ID: 1277354
Interstel
Joined: 29 Nov 01
Posts: 23
Credit: 2,231,105
RAC: 0
United States
Message 1277686 - Posted: 30 Aug 2012, 17:58:00 UTC

After my shutdown and restart of the BOINC program I got a whole slew of Einstein's and some LHC's, but only 5 new SETI's: one 6.04 and four 6.09's. I haven't changed my settings since syncing them all up months ago, creating essentially 4 types: none, home, school and work. That way I could assign each machine to the category that best fit what it had and how long it takes to run things.

My primary workstation has dual CPUs, each dual-core with hyperthreading, for 8 CPU cores, 16 GB of RAM, a GeForce GTS 250 (the first that runs OpenCL), and U320 15,000 RPM SCSI hard disks.

In the past I was doing around a 4000-4400 credit average for just SETI. Since Aug 8th, when I added the extra projects, the number of SETI 6.03-6.09 WU's has dropped and dropped, with the only things coming through being the Astropulse 6.01's. Yet I set the RESOURCE SHARE to 1400 on SETI, supposedly to force 70% to SETI and 5% to everything else, but it runs more like I've set SETI to 5%.

I know this is the 2nd time I've put this out here, but I've been pretty much trouble-free for a decade. If it means I have to drop these other projects in order to keep my SETI WU count increasing, I will.

James

Joined SETI@Home in 2001
Online since ARPANET days
First activity on a Honeywell 1648 Series Mainframe in 1975 at age 12.
ID: 1277686
Sunny129
Joined: 7 Nov 00
Posts: 190
Credit: 3,163,755
RAC: 0
United States
Message 1277718 - Posted: 30 Aug 2012, 18:54:20 UTC - in response to Message 1277686.  
Last modified: 30 Aug 2012, 18:56:19 UTC

After my shutdown and restart of the BOINC program I got a whole slew of Einstein's and some LHC's, but only 5 new SETI's: one 6.04 and four 6.09's. I haven't changed my settings since syncing them all up months ago, creating essentially 4 types: none, home, school and work. That way I could assign each machine to the category that best fit what it had and how long it takes to run things.

My primary workstation has dual CPUs, each dual-core with hyperthreading, for 8 CPU cores, 16 GB of RAM, a GeForce GTS 250 (the first that runs OpenCL), and U320 15,000 RPM SCSI hard disks.

In the past I was doing around a 4000-4400 credit average for just SETI. Since Aug 8th, when I added the extra projects, the number of SETI 6.03-6.09 WU's has dropped and dropped, with the only things coming through being the Astropulse 6.01's. Yet I set the RESOURCE SHARE to 1400 on SETI, supposedly to force 70% to SETI and 5% to everything else, but it runs more like I've set SETI to 5%.

I know this is the 2nd time I've put this out here, but I've been pretty much trouble-free for a decade. If it means I have to drop these other projects in order to keep my SETI WU count increasing, I will.

James

as BillBG previously mentioned in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, BOINC is supposed to respect resource share in the long term, which means that its effects won't be immediately noticeable. as i also previously mentioned in that same thread, i used to have 90% of the CPU in one of my machines allocated to LHC@Home SixTrack (in hopes that on the rare occasion SixTrack WU's became available, my host would download a bunch and give them priority), and the remaining 10% allocated to a handful of other projects. it actually worked the exact opposite of what i expected - BOINC wouldn't even bother to download LHC@H work despite the massive 90% resource allocation to the project, and all other projects would continue downloading/crunching/uploading/reporting/invoking scheduler requests like they always do. i then decided to split the resources evenly between all projects, and all of a sudden i started getting LHC@H work!

perhaps you should try splitting resource share evenly between all projects on that host, instead of giving 70% to SETI and the remaining 30% to all other projects, and see what happens? it might not do anything for you, but it sure couldn't hurt to try...
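
For what it's worth, resource share isn't a percentage on its own - each project ends up with its share divided by the sum across all attached projects. Here's a rough sketch of that arithmetic (Python purely for illustration; the share values are examples matching the 1400-vs-default-100 figures mentioned above, and the project list beyond SETI, Einstein and LHC is hypothetical):

# BOINC's long-term split is each project's resource share divided by the
# total over all attached projects. Example values only: SETI at 1400 and
# six other projects left at the default share of 100 each.
shares = {
    "SETI@home": 1400,
    "Einstein@Home": 100,
    "LHC@home": 100,
    "other project 4": 100,
    "other project 5": 100,
    "other project 6": 100,
    "other project 7": 100,
}
total = sum(shares.values())
for name, share in shares.items():
    print(f"{name:16s} share {share:5d} -> {share / total:5.1%} of work over the long term")
# prints: SETI@home at 70.0%, each of the six others at 5.0%

That's the 70%/5% split described above - but as noted, BOINC only respects it over the long term, so day-to-day work fetch can look nothing like those numbers.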
ID: 1277718
Interstel
Joined: 29 Nov 01
Posts: 23
Credit: 2,231,105
RAC: 0
United States
Message 1279050 - Posted: 1 Sep 2012, 23:13:37 UTC - in response to Message 1277718.  


as BillBG previously mentioned in the Bug in server affecting older BOINC clients with NVIDIA GPUs thread, BOINC is supposed to respect resource share in the long term, which means that its effects won't be immediately noticeable. as i also previously mentioned in that same thread, i used to have 90% of the CPU in one of my machines allocated to LHC@Home SixTrack (in hopes that on the rare occasion SixTrack WU's became available, my host would download a bunch and give them priority), and the remaining 10% allocated to a handful of other projects. it actually worked the exact opposite of what i expected - BOINC wouldn't even bother to download LHC@H work despite the massive 90% resource allocation to the project, and all other projects would continue downloading/crunching/uploading/reporting/invoking scheduler requests like they always do. i then decided to split the resources evenly between all projects, and all of a sudden i started getting LHC@H work!

perhaps you should try splitting resource share evenly between all projects on that host, instead of giving 70% to SETI and the remaining 30% to all other projects, and see what happens? it might not do anything for you, but it sure couldn't hurt to try...


Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

Joined SETI@Home in 2001
Online since ARPANET days
First activity on a Honeywell 1648 Series Mainframe in 1975 at age 12.
ID: 1279050
Sunny129
Joined: 7 Nov 00
Posts: 190
Credit: 3,163,755
RAC: 0
United States
Message 1279069 - Posted: 2 Sep 2012, 0:42:02 UTC - in response to Message 1279050.  

yes, it's entirely possible that switching drivers can affect the run times of GPU tasks. i don't know about the specific driver versions you speak of, but there might be a thread here that talks about driver versions and which ones work best for specific applications and/or GPU architectures. if you're talking about one of the latest driver releases, i don't know if there will be much info in the server database yet...but a search should dig up something.
ID: 1279069
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22149
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1279139 - Posted: 2 Sep 2012, 7:20:20 UTC

Interstel - Just a quick note about Nvidia driver versions: all versions above 301.42 are marked as "BETA", in other words they are still being tested and are NOT certified as "good to use". They may be Windows certified, but that only means they are compatible with the Windows API.
Nvidia use "WHQL" to indicate a driver that is stable and has passed its beta test phase.
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1279139
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1279175 - Posted: 2 Sep 2012, 8:59:43 UTC

I am getting work...
Built about 1000 in cache over the last few hours.
Cricket is block-solid.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1279175
Claggy
Volunteer tester
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1279188 - Posted: 2 Sep 2012, 9:43:34 UTC - in response to Message 1279050.  
Last modified: 2 Sep 2012, 10:38:15 UTC

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

9800GTX+ on 301.48 Drivers:

WU : PG0444_v7.wu
Lunatics_x41z_win32_cuda23.exe -verb -nog :
Elapsed 87.883 secs, speedup: 81.98% ratio: 5.55x
CPU 13.369 secs, speedup: 97.46% ratio: 39.44x

Lunatics_x41z_win32_cuda32.exe -verb -nog :
Elapsed 89.957 secs, speedup: 81.56% ratio: 5.42x
CPU 14.758 secs, speedup: 97.20% ratio: 35.73x

Lunatics_x41z_win32_cuda41.exe -verb -nog :
Elapsed 99.328 secs, speedup: 79.64% ratio: 4.91x
CPU 12.808 secs, speedup: 97.57% ratio: 41.17x

Lunatics_x41z_win32_cuda42.exe -verb -nog :
Elapsed 99.221 secs, speedup: 79.66% ratio: 4.92x
CPU 12.184 secs, speedup: 97.69% ratio: 43.28x


9800GTX+ on 306.02 Drivers:

WU : PG0444_v7.wu
Lunatics_x41z_win32_cuda23.exe -verb -nog :
Elapsed 95.015 secs, speedup: 80.52% ratio: 5.13x
CPU 14.149 secs, speedup: 97.32% ratio: 37.27x

Lunatics_x41z_win32_cuda32.exe -verb -nog :
Elapsed 111.247 secs, speedup: 77.19% ratio: 4.38x
CPU 15.101 secs, speedup: 97.14% ratio: 34.92x

Lunatics_x41z_win32_cuda41.exe -verb -nog :
Elapsed 118.127 secs, speedup: 75.78% ratio: 4.13x
CPU 11.840 secs, speedup: 97.75% ratio: 44.53x

Lunatics_x41z_win32_cuda42.exe -verb -nog :
Elapsed 118.140 secs, speedup: 75.78% ratio: 4.13x
CPU 12.386 secs, speedup: 97.65% ratio: 42.57x
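
To put the two runs side by side, here's the elapsed-time comparison worked out from the output above (the little script is only an illustration of the arithmetic, not part of the bench tool):

# Elapsed times (seconds) copied from the two benchmark runs above:
# 9800GTX+ on 301.48 vs 306.02 drivers, Lunatics x41z CUDA builds.
elapsed_301 = {"cuda23": 87.883, "cuda32": 89.957, "cuda41": 99.328, "cuda42": 99.221}
elapsed_306 = {"cuda23": 95.015, "cuda32": 111.247, "cuda41": 118.127, "cuda42": 118.140}

for build in elapsed_301:
    old, new = elapsed_301[build], elapsed_306[build]
    print(f"{build}: {old:7.3f}s -> {new:7.3f}s ({new / old - 1:+.1%} elapsed time)")
# prints roughly: cuda23 +8.1%, cuda32 +23.7%, cuda41 +18.9%, cuda42 +19.1%

So roughly 8-24% longer elapsed times on this pre-Fermi card with the 306.02 driver.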


Claggy
ID: 1279188
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1279190 - Posted: 2 Sep 2012, 9:48:06 UTC - in response to Message 1279188.  

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

Claggy

Jason has made it clear that the latest and greatest app and drivers do not play very well with the older hardware.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1279190
Claggy
Volunteer tester
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1279193 - Posted: 2 Sep 2012, 9:51:28 UTC - in response to Message 1279190.  

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

Claggy

Jason has made it clear that the latest and greatest app and drivers do not play very well with the older hardware.

I know, I've done the Bench testing for him,

Claggy
ID: 1279193
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1279194 - Posted: 2 Sep 2012, 9:54:06 UTC - in response to Message 1279193.  
Last modified: 2 Sep 2012, 9:54:28 UTC

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

Claggy

Jason has made it clear that the latest and greatest app and drivers do not play very well with the older hardware.

I know, I've done the Bench testing for him,

Claggy

Right you are, Sir.
And thank you for doing so.

Meow.
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1279194
musicplayer
Joined: 17 May 10
Posts: 2430
Credit: 926,046
RAC: 0
Message 1279222 - Posted: 2 Sep 2012, 11:24:50 UTC
Last modified: 2 Sep 2012, 11:28:19 UTC

The four CPU-based Seti@home tasks I had have finished up and been uploaded and reported.

Now I am back with 17 Seti@home CUDA tasks which I will not run now.

Also I have two PrimeGrid tasks running. A Genefer World Record task (CUDA) also is suspended for now.

This means that I still have room for 6 tasks in my CPU. Will I get any such tasks?
ID: 1279222
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14644
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1279247 - Posted: 2 Sep 2012, 13:30:21 UTC - in response to Message 1279188.  

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

We're getting similar reports at Einstein:

Fermi (unspecified) 5-6% improvement with 306.02 drivers (message 118966)

Quadro FX1800 (similar to 9600 GS/GT) "compute-time with the new 305.93 driver is now 3 times as long as with the old 276.52 driver" (message 118918) [305.93 is a WHQL driver specifically for desktop Quadro cards]
ID: 1279247
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1279254 - Posted: 2 Sep 2012, 13:40:34 UTC - in response to Message 1279247.  
Last modified: 2 Sep 2012, 13:41:08 UTC

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

We're getting similar reports at Einstein:

Fermi (unspecified) 5-6% improvement with 306.02 drivers (message 118966)

Quadro FX1800 (similar to 9600 GS/GT) "compute-time with the new 305.93 driver is now 3 times as long as with the old 276.52 driver" (message 118918) [305.93 is a WHQL driver specifically for desktop Quadro cards]

Ouch.
Now y'all know why the kitties tend to like the status quo rather than be blazing new trails with their tails.
If it ain't busted..........
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1279254
Donald L. Johnson
Joined: 5 Aug 02
Posts: 8240
Credit: 14,654,533
RAC: 20
United States
Message 1279279 - Posted: 2 Sep 2012, 15:15:25 UTC - in response to Message 1279222.  

The four CPU-based Seti@home tasks I had have finished up and been uploaded and reported.

Now I am back with 17 Seti@home CUDA tasks which I will not run now.

Also I have two PrimeGrid tasks running. A Genefer World Record task (CUDA) also is suspended for now.

This means that I still have room for 6 tasks in my CPU. Will I get any such tasks?

As I recall from previous discussions, if you have ANY tasks Suspended or running in High Priority mode, BOINC will NOT request new work from ANY project. So no, you should NOT get any new S@H MB tasks for your CPU as long as you have tasks suspended.
Donald
Infernal Optimist / Submariner, retired
ID: 1279279
arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1279340 - Posted: 2 Sep 2012, 18:53:34 UTC - in response to Message 1279254.  

Well it certainly won't hurt to try for a couple of days. One thing that is throwing me, though: can upgrading your video drivers slow down processing? I had cause to install the Nvidia 306.02 drivers, which are certified but not WHQL like the 302.42 drivers. And it seems like my Astropulse computations have slowed down to 1/2 speed.

James

My benches of x41 Cuda apps on different drivers have shown an increased runtime with the Cuda 5 preview drivers on legacy (pre-Fermi) hardware; best to steer clear of 302.xx and later drivers for pre-Fermi hardware.

We're getting similar reports at Einstein:

Fermi (unspecified) 5-6% improvement with 306.02 drivers (message 118966)

Quadro FX1800 (similar to 9600 GS/GT) "compute-time with the new 305.93 driver is now 3 times as long as with the old 276.52 driver" (message 118918) [305.93 is a WHQL driver specifically for desktop Quadro cards]

Ouch.
Now y'all know why the kitties tend to like the status quo rather than be blazing new trails with their tails.
If it ain't busted..........


That is what the alpha testers are for.

I have installed every beta driver so far and only uninstalled the 295 and 296 drivers.

ID: 1279340
rob smith
Volunteer moderator
Volunteer tester
Joined: 7 Mar 03
Posts: 22149
Credit: 416,307,556
RAC: 380
United Kingdom
Message 1279352 - Posted: 2 Sep 2012, 19:31:31 UTC

As I recall from previous discussions, if you have ANY tasks Suspended or running in High Priority mode, BOINC will NOT request new work from ANY project. So no, you should NOT get any new S@H MB tasks for your CPU as long as you have tasks suspended.


All I can say is - it's a shame not all projects obey the rules. LHC is running at high priority and is downloading new WU, meanwhile S@H sits there waiting for a gap in LHC's stupidly short deadlines, obeying the rule of not downloading while running high priority.


(Also being posted in LHC)
Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
ID: 1279352
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1279356 - Posted: 2 Sep 2012, 19:48:23 UTC - in response to Message 1279352.  

As I recall from previous discussions, if you have ANY tasks Suspended or running in High Priority mode, BOINC will NOT request new work from ANY project. So no, you should NOT get any new S@H MB tasks for your CPU as long as you have tasks suspended.


All I can say is - it's a shame not all projects obey the rules. LHC is running at high priority and is downloading new WU, meanwhile S@H sits there waiting for a gap in LHC's stupidly short deadlines, obeying the rule of not downloading while running high priority.


(Also being posted in LHC)

Uhh.....fix it by not running LHC?
"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1279356
Claggy
Volunteer tester
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1279361 - Posted: 2 Sep 2012, 20:08:50 UTC - in response to Message 1279279.  

As I recall from previous discussions, if you have ANY tasks Suspended or running in High Priority mode, BOINC will NOT request new work from ANY project.

That's not true: you can have tasks suspended from one project and still request work from another project, and BOINC can request work even if you have tasks running high priority too (in some circumstances).

Claggy
ID: 1279361
Richard Haselgrove
Volunteer tester
Joined: 4 Jul 99
Posts: 14644
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1279375 - Posted: 2 Sep 2012, 20:59:09 UTC - in response to Message 1279352.  

As I recall from previous discussions, if you have ANY tasks Suspended or running in High Priority mode, BOINC will NOT request new work from ANY project. So no, you should NOT get any new S@H MB tasks for your CPU as long as you have tasks suspended.

All I can say is - it's a shame not all projects obey the rules. LHC is running at high priority and is downloading new WU, meanwhile S@H sits there waiting for a gap in LHC's stupidly short deadlines, obeying the rule of not downloading while running high priority.

Whether or not to request new work is always a local decision made by your BOINC client - it's never a decision made by a project. The BOINC server doesn't have the power to force work on you - although it occasionally seems to give you more work than you've asked for.
ID: 1279375