Slow Crunching!!! Am I Doing Something Wrong?

Message boards : Number crunching : Slow Crunching!!! Am I Doing Something Wrong?

Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151427 - Posted: 12 Sep 2011, 12:18:23 UTC
Last modified: 12 Sep 2011, 12:19:38 UTC

I know that I should be crunching a lot more than I am, not just for S@H but also for S@H Beta. I know my ATI Radeon HD 6770 isn't the greatest, but it's not bad at crunching either. My complaint, however, is with my CPU. I have an AMD Phenom II X6 Black Edition currently running at 3.66 GHz (I'll push that higher as soon as I get my new CPU cooler). My S@H RAC is currently 131.00, and my S@H Beta RAC is 1,200.91 (taken directly from the S@H Beta website). I know darn well I should be pushing nearly 10x that. I'm running the most current versions of both projects, and the most current Lunatics app (0.38, I believe) with S@H. What do I need to change to fix this? Do I need to change something in app_info.xml? Do I need to change project versions? ATI cards are known to be good at data crunching, so I must be doing something wrong.

http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151427
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1151431 - Posted: 12 Sep 2011, 12:44:32 UTC

Well, looking at the numbers:
S@H: Number of tasks completed 5
S@H Beta: Number of tasks completed 167

It looks like you might need to install some patience, and probably set the resource share for Beta lower, as you have 12 tasks for S@H and 682 for Beta.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1151431
Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151433 - Posted: 12 Sep 2011, 12:57:14 UTC - in response to Message 1151431.  
Last modified: 12 Sep 2011, 13:02:38 UTC

I do have patience; I just thought this computer would/should be cranking out a bit more than it currently is. If it's just a case of needing patience, then I can do that easily enough. Resource shares are set to 50/50, with project switches at 60-minute intervals. Should I change this to 75% S@H and 25% S@H Beta?

S@H Stats for SilentObserver Computer:
State: All (107) | In progress (71) | Pending (33) | Valid (2) | Invalid (0) | Error (1)
Application: All (107) | Astropulse v5 (0) | Astropulse v505 (0) | SETI@home (0) | SETI@home Enhanced (107)

S@H Beta Stats for SilentObserver Computer:
State: All (682) | In progress (357) | Pending (153) | Valid (168) | Invalid (0) | Error (4)
Application: All (682) | AstroPulse (1) | AstroPulse Alpha (0) | SETI@home Enhanced (0) | SETI@home v7 (681)

http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151433
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1151436 - Posted: 12 Sep 2011, 13:13:32 UTC - in response to Message 1151433.  

S@H Beta Stats for SilentObserver Computer:
State: In progress (357)

It's not generally regarded as good practice to maintain a large cache when crunching on a Beta (test) project - that's a general observation, not just related to SETI Beta.

It's not good for the programmers and testers, because they need to know quickly whether a new application produces accurate results or not: and it's not good for you, because Beta projects, in general, reserve the right to cancel all work-in-progress without notice if they discover a problem or work out a better way of doing things, and want to switch testing to the new application.

CPDN Beta has quite a good general summary on their front page:

CPDN Beta project Guidelines
• Run models as quickly as possible
• Don't be afraid to abort them if required (forum requests, model goes live on the main site, etc)
• Keep an eye on the forums
• Try to find problems.
• Report both progress and errors in detail, compare with previous versions
• Credits from this beta project are transferred to CPDN accounts.
• We recommend not hiding computers

(The line about credit is specific to their project, but the rest apply here too)
ID: 1151436
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1151438 - Posted: 12 Sep 2011, 13:16:26 UTC
Last modified: 12 Sep 2011, 13:24:30 UTC

I'm not sure Beta is ever short of work the way the main project is at the moment. I was running S@H and Beta on a 4-core machine and I wanted 25% to go to Beta. However, sometimes enough work would build up in Beta that it wasn't able to download any S@H work. My resource share settings are S@H 100, Beta 4. Since it wasn't working the way I wanted, I decided to run two instances of BOINC on that machine, with one instance limited to 3 CPUs and the other limited to 1. That way, if the main project runs out of work it won't fill the queue up with Beta work.
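
A rough sketch of what those resource share numbers work out to (illustrative only; BOINC's long-term scheduling is more involved than a simple ratio, and the little function below is just for this post):

[code]
# Rough illustration: over the long term BOINC aims to split processing
# between projects in proportion to their resource shares.
def beta_fraction(main_share: float, beta_share: float) -> float:
    """Fraction of total processing that would go to the Beta project."""
    return beta_share / (main_share + beta_share)

for main, beta in [(50, 50), (75, 25), (100, 4)]:
    print(f"S@H {main} / Beta {beta}: Beta gets {beta_fraction(main, beta):.1%}")

# S@H 50 / Beta 50:  Beta gets 50.0%
# S@H 75 / Beta 25:  Beta gets 25.0%
# S@H 100 / Beta 4:  Beta gets 3.8%
[/code]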

Edit: Also, what I meant about patience is that the machine shows it was "created" Sept. 5th, 2011. When you said "I should be pushing near 10x that", you should know that your RAC will climb over time and you won't see anything near your average for a few weeks. Some say a few months.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1151438
Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151441 - Posted: 12 Sep 2011, 13:31:38 UTC - in response to Message 1151438.  

I'm not sure Beta is ever short of work the way the main project is at the moment. I was running S@H and Beta on a 4-core machine and I wanted 25% to go to Beta. However, sometimes enough work would build up in Beta that it wasn't able to download any S@H work. My resource share settings are S@H 100, Beta 4. Since it wasn't working the way I wanted, I decided to run two instances of BOINC on that machine, with one instance limited to 3 CPUs and the other limited to 1. That way, if the main project runs out of work it won't fill the queue up with Beta work.

Edit: Also, what I meant about patience is that the machine shows it was "created" Sept. 5th, 2011. When you said "I should be pushing near 10x that", you should know that your RAC will climb over time and you won't see anything near your average for a few weeks. Some say a few months.


I understand. Thanks for the input. I will change the settings ASAP. Right now I'm at work, so I can change resource shares from here, but I can't change the overrides currently in place over the website preferences. Thanks, HAL9000.

I also cut back on my cache by a lot. I didn't realize it was still set at 7 days from the initial start-up of Beta, and S@H has been set at 7 days for years because I was on the move a lot and didn't know when I would be able to reconnect to the internet. I've changed the cache to 0.5 days for Beta now, and changed S@H to 3 days. Thanks as well, Richard.
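
As a rough sanity check on those cache settings (purely illustrative; the 2-hour average task length below is an assumed placeholder, not a measured figure):

[code]
# Back-of-the-envelope estimate of how many CPU tasks a given cache
# (work buffer) setting implies for a 6-core host.
CORES = 6
AVG_TASK_HOURS = 2.0  # assumed average CPU task runtime, for illustration

def queued_tasks(cache_days: float) -> float:
    return CORES * cache_days * 24 / AVG_TASK_HOURS

for days in (7.0, 3.0, 0.5):
    print(f"{days} day cache -> roughly {queued_tasks(days):.0f} tasks")

# 7 day cache   -> ~504 tasks (which is why hundreds of Beta tasks piled up)
# 3 day cache   -> ~216 tasks
# 0.5 day cache -> ~36 tasks
[/code]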


http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151441
Profile HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6534
Credit: 196,805,888
RAC: 57
United States
Message 1151444 - Posted: 12 Sep 2011, 13:41:52 UTC - in response to Message 1151441.  
Last modified: 12 Sep 2011, 13:49:40 UTC

I'm not sure Beta is ever short of work the way the main project is at the moment. I was running S@H and Beta on a 4-core machine and I wanted 25% to go to Beta. However, sometimes enough work would build up in Beta that it wasn't able to download any S@H work. My resource share settings are S@H 100, Beta 4. Since it wasn't working the way I wanted, I decided to run two instances of BOINC on that machine, with one instance limited to 3 CPUs and the other limited to 1. That way, if the main project runs out of work it won't fill the queue up with Beta work.

Edit: Also, what I meant about patience is that the machine shows it was "created" Sept. 5th, 2011. When you said "I should be pushing near 10x that", you should know that your RAC will climb over time and you won't see anything near your average for a few weeks. Some say a few months.


I understand. Thanks for the input. I will change the settings ASAP. Right now I'm at work, so I can change resource shares from here, but I can't change the overrides currently in place over the website preferences. Thanks, HAL9000.

I also cut back on my cache by a lot. I didn't realize it was still set at 7 days from the initial start-up of Beta, and S@H has been set at 7 days for years because I was on the move a lot and didn't know when I would be able to reconnect to the internet. I've changed the cache to 0.5 days for Beta now, and changed S@H to 3 days. Thanks as well, Richard.

The cache settings are global to BOINC, not per project; whichever one you changed last will become the global setting.
To have a separate cache length for another project you would have to set up one of the other venues with the settings you want, then set the machine to that venue on that project.
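
For reference, on the client side those "cache" values are a single pair of work-buffer settings; here's a minimal sketch of a local override file, assuming the standard BOINC field names (the 0.5/0.25 values are only examples):

[code]
# Minimal sketch: the client-side "cache" is one global pair of work-buffer
# values in global_prefs_override.xml, shared by all attached projects.
override = """<global_preferences>
   <work_buf_min_days>0.5</work_buf_min_days>
   <work_buf_additional_days>0.25</work_buf_additional_days>
</global_preferences>
"""

# Written to the current directory here purely for illustration; the real
# file lives in the BOINC data directory and is re-read with the Manager's
# "Read local prefs file" (or boinccmd --read_global_prefs_override).
with open("global_prefs_override.xml", "w") as f:
    f.write(override)
[/code]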

Edit: If you need help doing that:
1) Go to your account > Preferences > "When and how BOINC uses your computer" (Computing preferences).
2) Select one of the other venues, such as "Add separate preferences for work".
3) Configure the settings you want.
4) On the project where you want to use those settings, go to your list of computers.
5) Select Details for the machine.
6) Go down to "Location" and select the venue you set up in step 2.
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
ID: 1151444
Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151446 - Posted: 12 Sep 2011, 13:59:49 UTC

Ahh. Didn't realize that. Makes sense now that I think about it. Thanks for the tip.

http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151446
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1151486 - Posted: 12 Sep 2011, 17:24:45 UTC

RAC at either project is nearly meaningless at this point, since the computer has only been in use for a week or so.
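
A quick sketch of why that is (assuming BOINC's usual exponential averaging for RAC with a roughly one-week half-life; the 100 credits/day rate is just a placeholder):

[code]
# RAC is an exponentially weighted average with (roughly) a one-week
# half-life, so a new host only creeps up toward its true daily average
# over several weeks. The 100 credits/day steady rate is a placeholder.
HALF_LIFE_DAYS = 7.0
DAILY_CREDIT = 100.0

def rac_after(days: float) -> float:
    """Approximate RAC after 'days' of steady crunching, starting from zero."""
    return DAILY_CREDIT * (1 - 2 ** (-days / HALF_LIFE_DAYS))

for d in (7, 14, 30, 60):
    print(f"day {d:2d}: RAC ~ {rac_after(d):5.1f} of an eventual {DAILY_CREDIT:.0f}")

# day  7: ~50% of the eventual value
# day 14: ~75%
# day 30: ~95%
# day 60: ~99.7%
[/code]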

The best indication now on how well the CPU is doing is to compare its performance against wingmates. WU 3526282 at Beta, for instance, matched your CPU against an Intel core i7 975 apparently running with hyperthreading on. There's no way to know if that i7 is overclocked or not, though, and it being ~7% faster on that one WU is probably within the range of variation caused by other tasks being done on the system.

In particular, your computer was probably doing 6 nearly identical VHAR tasks at that time, the i7 had a mix of work. It has been known for a long time that doing multiple VHAR tasks causes noticeable slowdown even on dual core CPUs, likely because VHAR tasks spend a larger fraction of the run time doing chirps and FFTs, both of which involve an 8 MiB input array and 8 MiB output array. IOW, even with relatively large cache there has to be a lot of transferring to/from RAM, and with multiple cores there's a lot of contention for both cache and RAM access.

All in all, I think your CPU is doing quite well. The next level of overclocking when you get the new cooler installed will certainly improve that further, but don't be surprised if a 9% overclock (3.66 -> 4.0 GHz) increases crunching performance by less than 9%. Seeing if you can increase memory bandwidth might be helpful.
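
To put the cache/memory point in rough numbers (a back-of-the-envelope sketch; the 6 MB shared L3 figure for a Phenom II X6 is quoted from memory, and real access patterns are messier than a simple working-set count):

[code]
MIB = 1024 * 1024

# Per-task working set during the chirp/FFT stages, as described above.
input_array  = 8 * MIB
output_array = 8 * MIB
per_task     = input_array + output_array   # 16 MiB

tasks   = 6            # six near-identical VHAR tasks running at once
l3_size = 6 * MIB      # shared L3 on a Phenom II X6 (figure assumed)

total_working_set = tasks * per_task        # 96 MiB
print(f"Combined working set: {total_working_set // MIB} MiB "
      f"vs. {l3_size // MIB} MiB of shared L3")

# ~96 MiB of hot data against 6 MiB of L3: almost every chirp/FFT pass has
# to stream through main RAM, and all six cores contend for that bandwidth.
[/code]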
                                                                 Joe
ID: 1151486
Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151526 - Posted: 12 Sep 2011, 19:58:51 UTC - in response to Message 1151486.  

RAC at either project is nearly meaningless at this point, since the computer has only been in use for a week or so.

The best indication now on how well the CPU is doing is to compare its performance against wingmates. WU 3526282 at Beta, for instance, matched your CPU against an Intel core i7 975 apparently running with hyperthreading on. There's no way to know if that i7 is overclocked or not, though, and it being ~7% faster on that one WU is probably within the range of variation caused by other tasks being done on the system.

In particular, your computer was probably doing 6 nearly identical VHAR tasks at that time, the i7 had a mix of work. It has been known for a long time that doing multiple VHAR tasks causes noticeable slowdown even on dual core CPUs, likely because VHAR tasks spend a larger fraction of the run time doing chirps and FFTs, both of which involve an 8 MiB input array and 8 MiB output array. IOW, even with relatively large cache there has to be a lot of transferring to/from RAM, and with multiple cores there's a lot of contention for both cache and RAM access.

All in all, I think your CPU is doing quite well. The next level of overclocking when you get the new cooler installed will certainly improve that further, but don't be surprised if a 9% overclock (3.66 -> 4.0 GHz) increases crunching performance by less than 9%. Seeing if you can increase memory bandwidth might be helpful.
                                                                 Joe


Thanks for the help on that one. I have been thinking about buying new RAM that's faster and capable of better bandwidth. Also, when I OC I always increase my memory bandwidth to help compensate for the faster CPU, and though I have to lower the NB and "Hyperthreading" speeds slightly, I try to keep those as high as possible too while still staying stable. What if I increased the cache size (16 MB in and 16 MB out)? Would that help, or hinder?

http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151526
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1151583 - Posted: 12 Sep 2011, 22:25:04 UTC - in response to Message 1151526.  

...
What if I increased the cache size (16MB in and 16 MB out)? Would that help, or hinder?

I don't understand what cache you're referencing.
                                                                   Joe
ID: 1151583
Profile SilentObserver64
Volunteer tester
Joined: 21 Sep 05
Posts: 139
Credit: 680,037
RAC: 0
United States
Message 1151624 - Posted: 13 Sep 2011, 0:07:54 UTC - in response to Message 1151583.  

...
What if I increased the cache size (16MB in and 16 MB out)? Would that help, or hinder?

I don't understand what cache you're referencing.
                                                                   Joe



The FFA Block and FFA Block Fetch.

http://www.goodsearch.com/nonprofit/university-of-california-setihome.aspx
ID: 1151624
Profile Fred J. Verster
Volunteer tester
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1151780 - Posted: 13 Sep 2011, 14:01:28 UTC - in response to Message 1151624.  
Last modified: 13 Sep 2011, 14:13:32 UTC

...
What if I increased the cache size (16MB in and 16 MB out)? Would that help, or hinder?

I don't understand what cache you're referencing.
                                                                   Joe



The FFA Block and FFA Block Fetch.



Those are not cache settings but command-line parameters, telling the ATI app how to handle ffa_block and ffa_block_fetch!

When using an i7-2600 or similar CPU, Hyper-Threading roughly halves the memory bandwidth available to each thread, IIRC.
That can be troublesome and time-consuming when running large (CPDN) CPU tasks. (Maybe also AstroPulse WUs?)
ID: 1151780
Josef W. Segur
Volunteer developer
Volunteer tester

Joined: 30 Oct 99
Posts: 4504
Credit: 1,414,761
RAC: 0
United States
Message 1151781 - Posted: 13 Sep 2011, 14:12:03 UTC - in response to Message 1151624.  

...
What if I increased the cache size (16MB in and 16 MB out)? Would that help, or hinder?

I don't understand what cache you're referencing.
                                                                   Joe

The FFA Block and FFA Block Fetch.

In general, tuning of parameters for GPU work is best done by experiment, and what will maximize overall productivity of a computer isn't easy to find. I don't have an OpenCL-capable GPU so I really can't advise from experience. My guess is that setting "use at most 90% of processors", so only 5 CPU tasks would be running, would make sense. The remaining core would then be available both for feeding the GPU and for handling more of the many background services the OS requires.

Those two parameters only apply to OpenCL Astropulse processing, and you have not yet done any of that here. In general, though, GPU processing is most efficient with fewer and larger transfers of data between the mobo and the card. But if you push it so far that the GPU kernels take longer than Windows likes, it may interfere.
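
As a small footnote to the "90% of processors" suggestion above (assuming the client simply rounds the percentage down to a whole number of cores):

[code]
import math

# "On multiprocessors, use at most X% of processors": rounding down on a
# 6-core Phenom II X6 means a 90% setting runs 5 CPU tasks and leaves one
# core free for feeding the GPU and for OS background services.
def usable_cpus(ncpus: int, pct: float) -> int:
    return max(1, math.floor(ncpus * pct / 100))

for pct in (100, 90, 50):
    print(f"{pct:3d}% of 6 cores -> {usable_cpus(6, pct)} CPU tasks")

# 100% -> 6 tasks, 90% -> 5 tasks, 50% -> 3 tasks
[/code]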
                                                                  Joe
ID: 1151781
