Subtle invasivity of SETI

Sleepy
Volunteer tester
Joined: 21 May 99
Posts: 219
Credit: 98,947,784
RAC: 28,360
Italy
Message 1182658 - Posted: 3 Jan 2012, 22:47:43 UTC

I am not really complaining, and I am crunching on optimised applications, so things may be quite different for those crunching on stock applications.
But I remember, back in ye olde days, the promise that SETI would only harvest unused CPU cycles and would never substantially slow down your PC.
That no longer holds true. More and more often I find myself checking whether a task I need finished as quickly as possible will go faster if I snooze crunching from BOINC. And more and more often it does. No hard rule, but it happens often.

I repeat, before you shoot: I have not run stock in years, nor do I want to. I have been quite a die-hard cruncher for a long time (and a 9500GT was burnt on the altar of SETI a couple of days ago; never forget to blow out the dust and oil the fan bearings. RIP).
Is the same thing happening with the stock applications, or is it just those of us using optimised ones?

Today I had this thought and just wanted to share it with you.

It is about how we run SETI these days: we buy big video cards and never play a 3D game; we keep the CPUs burning when the software could throttle them down and save some energy while the PC is idle, but still on and vigilant.
It was not like this in the beginning, and this was not the pact between distributed crunching as a whole and us back then.

Just thinking, while I try to find a cheap replacement for the dead 9500 (no GTXs here, just something decent) and there is not a single 3D game on my hard disks...

Cheers and have a great 2012! :-)

Sleepy
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1182674 - Posted: 4 Jan 2012, 0:34:25 UTC - in response to Message 1182658.  
Last modified: 4 Jan 2012, 0:38:35 UTC

Management of applications and their multitasking on a system is handled by the operating system. By and large it has always been true that if you run extremely intensive applications (3D gaming, 3D rendering, compiling code, high-end CAD work), there will be issues with thread priorities.

So I don't really think anything has changed; I think it's only become more obvious with the demands we make of our PCs these days. When we only had a single processor running at 300MHz or less, we multitasked within our constraints. Now that we have 2-, 4- or even 8-core mega-crunchers with even more powerful GPUs, we are quickly pushing the limits.

So, all in all, I don't think the app can be blamed for "invasivity"; we're just doing more than ever.
Gary Charpentier
Volunteer tester
Joined: 25 Dec 00
Posts: 30683
Credit: 53,134,872
RAC: 32
United States
Message 1182685 - Posted: 4 Jan 2012, 1:41:01 UTC

Personally I haven't seen the issue with BOINC. But there can be issues if you don't have enough RAM to support everything you are asking your computer to do at once. I think this is magnified on GPUs.

The problem is data swapping at context switches. Program A is running at low priority. Program B wants to run and is at high priority. Before the operating system can let Program B run, it finds it needs even 1KB more RAM than is presently free, so it stops everything until it can swap Program A out of RAM and swap Program B in. It is at these context switches that the human can see a pause. Then the human stops moving and clicking, and Program B pauses. Program A is ready to run, so the operating system has to do the reverse. That context switch isn't seen by the human, but as soon as he moves his mouse and clicks, bingo, another delay.

If there is enough RAM for A and B to be loaded at the same time, the context switches still happen but are a millisecond long and the human can't detect them.

There is a setting in BOINC to stop crunching while the user is active, another for how long to wait after the user goes idle before crunching resumes, and similar settings for GPUs. If you have a number of cores, you can also use the setting for how many of them to use for crunching and leave one free for the user at all times. But these are stopgap measures; the real issue is the RAM, both CPU and GPU, that is available.
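
For reference, those knobs live in BOINC's computing preferences, or in a global_prefs_override.xml file in the BOINC data directory if you prefer to set them by hand. A minimal sketch with purely illustrative values (the tag names are the standard BOINC preference names, but check them against your own client version):

    <global_preferences>
        <run_if_user_active>0</run_if_user_active>          <!-- pause CPU crunching while the user is active -->
        <run_gpu_if_user_active>0</run_gpu_if_user_active>  <!-- pause GPU crunching while the user is active -->
        <idle_time_to_run>3</idle_time_to_run>              <!-- minutes of idle time before crunching resumes -->
        <max_ncpus_pct>75</max_ncpus_pct>                   <!-- use at most 75% of the cores, e.g. 3 of 4 -->
    </global_preferences>

The numbers above are examples only; pick whatever leaves your own machine responsive.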
OzzFan
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1182702 - Posted: 4 Jan 2012, 4:12:33 UTC - in response to Message 1182685.  

My personal experience seems to differ from what you describe as happening in milliseconds.


For example, I've started playing the new massively multiplayer online game Star Wars: The Old Republic. This game can apparently use up to two processor cores, is written entirely in 32-bit code (so it has a maximum of 2GB of address space per process), and typically uses up to 2.5GB of RAM and about 800MB of video RAM - all of this noted from Windows' Task Manager and GPU-Z during real-world game play.

The system I play on is an Intel Core i7 960 (3.2GHz quad core w/Hyperthreading), 12GB of DDR3 1600 RAM, and an AMD HD 6970 with 2GB of GDDR5.

As you can see, my system easily exceeds the game's real-world requirements, but if I have BOINC running (mostly Rosetta@home, which uses about 300MB of RAM per task, times 8 CPU cores for roughly 2.4GB of RAM in total), there is a noticeable lag in-game and in loading the talk scenes during game play.

There was a similarly noticeable difference when I used to play World of Warcraft, which used similar system resources but had many more people playing, so when I entered a busy city full of players the graphics would lag almost to the point of being unplayable.

In both cases, turning off BOINC helped my performance noticeably, even though both games use only two cores. So now I have configured my cc_config.xml file to automatically suspend processing whenever SWTOR.EXE is running, which is a shame, because I wouldn't mind just cutting back by about 4 cores or so instead.
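
For anyone who wants to do the same, the relevant option is exclusive_app in cc_config.xml (in the BOINC data directory). A minimal sketch using the game's executable name from above - the tag names are the standard cc_config options as far as I know, so check the BOINC documentation for your client version:

    <cc_config>
        <options>
            <!-- suspend all BOINC processing while this program is running -->
            <exclusive_app>SWTOR.EXE</exclusive_app>
            <!-- exclusive_gpu_app does the same for GPU work only -->
        </options>
    </cc_config>

Tell the client to re-read its configuration files (or just restart it) for the change to take effect.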
Horacio

Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1182703 - Posted: 4 Jan 2012, 4:12:55 UTC

LOL... subtle?

12 years ago, SETI was crunching on my computer only when the screen saver was on screen...

Until 3 months ago, BOINC was crunching 24/7 but using only one core on just one of my computers and no GPUs... (I would not even have noticed it was running if not for the BOINC icon.)

Today I have my 3 computers with all cores enabled (18 in total) plus 4 GPUs, all of them crunching 24/7 with the optimized apps and with no screen saver, so as not to waste CPU cycles!


SETI has not invaded my computers... It went straight to my OCD brain! :D


Cruncher-American

Joined: 25 Mar 02
Posts: 1513
Credit: 370,893,186
RAC: 340
United States
Message 1182706 - Posted: 4 Jan 2012, 4:22:48 UTC - in response to Message 1182685.  

Context switches don't necessarily swap ALL of a program's pages, at least in a properly written OS (disclosure: I worked on the IBM VM OS back in the '70s, one of the first virtual memory OSs). If B needs N pages to run, that's all the OS takes from A, thus minimizing the pages read/written. Of course the OS must have good estimates of the pages each program requires (its "working set") and good guesses as to which pages to take from each program. NOT trivial. I don't think most desktop OSs do this well, so yes, more RAM is better. And a LOT more RAM is even better, especially when it's so cheap today.

But SETI uses relatively small amounts of RAM - about 40MB for a CPU task and about 90MB for a GPU task - so RAM really isn't a problem. I'm typing this on an AMD quad core with dual GTX 460s running 2 WUs each (one CPU core reserved for overhead) and 4GB of RAM, and there is more than 2GB free even on Win 7 Ultimate 64-bit.
jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1182708 - Posted: 4 Jan 2012, 4:34:07 UTC
Last modified: 4 Jan 2012, 4:52:49 UTC

Adding my 2 cents from a development perspective, I can say that I don't regard the changes since Vista as subtle with respect to GPU crunching, but quite pronounced and involved. There have been major, fundamental changes to the OS and drivers of late, even in the support of older cards on old XP, to try to yield more reliable behaviour with better, more secure sharing between applications (nb: not necessarily faster).

http://en.wikipedia.org/wiki/Windows_Display_Driver_Model is a really good start for getting a handle on the changes that affect everyone under Windows - even old cards on XP, to a large extent, if you expect newer driver and OS features.

With the addition of those features, particularly the memory-space virtualisation ones, comes overhead. Newer hardware has added circuitry to accelerate many of these functions, while legacy support tends to be just enough to get things working and is performed largely in software (added emulation layers in the drivers to keep GPGPU applications compatible). The application context-switching example is a good one to illustrate why older cards now perform worse while newer ones with dedicated circuitry perform better.

The nuts-and-bolts moral of the story is: if you have the same or heavier usage patterns as before on the same hardware, expect the newer system architecture to carry higher overhead as the cost, because it does 'do' more. If you were looking for a reason to justify upgrades - for example from a 9500GT to a 560 or similar, or perhaps more system RAM or other related components - then those changes grant you that.

It does get frustrating when Microsoft up and dictates major hardware changes to carry its OS on for the next decade or so, but do remember that XPDM is very old [it actually predates mature, dedicated GPGPU use by a fair whack], was a major source of Blue Screens of Death due to those same architectural limitations, and that virtualising the video memory space has eased programming for reliability considerably.

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
musicplayer

Joined: 17 May 10
Posts: 2430
Credit: 926,046
RAC: 0
Message 1182711 - Posted: 4 Jan 2012, 5:02:38 UTC

Hi, Scooby Doo!
