AstroPulse for Intel GPUs, open beta2


Oddbjornik Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 15 May 99
Posts: 220
Credit: 349,610,548
RAC: 1,728
Norway
Message 1451059 - Posted: 7 Dec 2013, 12:54:07 UTC

I'm sorry, BilBg, but the effect is exactly the same with 7 active CPU tasks. Immediate temperature drop and slowdown when the GPU processing starts.

I believe I can see the same pattern in Bill Greene's host, although the number of tasks there is quite low.

So I'll stop using the GPU on this host for now, and hope for new developments.

Keep up the good work, Raistmer, and let me know if there's anything more I can do.
ID: 1451059 · Report as offensive
Profile BilBg
Volunteer tester
Avatar

Send message
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1451072 - Posted: 7 Dec 2013, 14:15:15 UTC - in response to Message 1451059.  


If you are in a playful mood ;) you can play a little with PerfMonitor 2
http://www.cpuid.com/softwares/perfmonitor2.html

Even if you don't find the reason for this slowdown effect, it's interesting to play with PerfMonitor

For my older CPU (AMD Athlon II X3 455) I (have to) use the older version:
http://www.cpuid.com/softwares/perfmonitor.html

- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1451072 · Report as offensive
Profile petri33
Volunteer tester

Send message
Joined: 6 Jun 02
Posts: 1668
Credit: 623,086,772
RAC: 156
Finland
Message 1451212 - Posted: 7 Dec 2013, 21:32:48 UTC - in response to Message 1451072.  

Awesome tool!
To overcome Heisenbergs:
"You can't always get what you want / but if you try sometimes you just might find / you get what you need." -- Rolling Stones
ID: 1451212 · Report as offensive
Bill Greene
Volunteer tester

Send message
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1451582 - Posted: 8 Dec 2013, 21:33:54 UTC - in response to Message 1451059.  

I'm sorry, BilBg, but the effect is exactly the same with 7 active CPU tasks. Immediate temperature drop and slowdown when the GPU processing starts.

I believe I can see the same pattern in Bill Greene's host, although the number of tasks there is quite low.

So I'll stop using the GPU on this host for now, and hope for new developments.

Keep up the good work, Raistmer, and let me know if there's anything more I can do.


Thought I would share this early analysis. Will take another look in about a week. After that, I will cut back to 2 WU's for the GPU for another round of data, before doing the same thing for a single GPU WU.

Run Time (hrs) CPU Time (hrs) Avg Credit/WU Avg WU/Run Time Hr Avg WU/Execution Hr
Astropulse V6 (GPU - 28 WU's)
14.63 6.41 639.66 (5 WU's only) 43.7 99.7

Astropulse V6 (CPU - 10 WU's)
22.6 21.4 663.29 (single WU) 29.4 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6 3.6 81.614 (5 WU's only) 22.6 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1 4.8 89.1 (73 WU's) 17.5 18.6

WU's missing from the Avg Credit/WU column are pending.
The GPU is pumping out V6 WU's at about 3 times the speed of the CPU.
It appears the CPUs are losing about 5 credits/hr with the GPU executing, which I assume is due to GPU management overhead.
However, only 5 of the 113 CPU WU's have been credited since bringing the GPU on line.
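For what it's worth, the "about 3 times" and "about 5 credits/hr" figures follow straight from the per-hour rates in the table above. A quick sanity check (assuming the last column really is credit granted per execution hour):

```python
# Rates taken from the table above (credit per execution hour).
gpu_rate = 99.7   # Astropulse V6 on the Intel GPU
cpu_rate = 31.1   # Astropulse V6 on the CPU

speedup = gpu_rate / cpu_rate
print(f"GPU/CPU speedup: {speedup:.1f}x")  # roughly 3.2x

# Credit-rate drop on the CPU once the GPU starts crunching
# (V7 CPU tasks, before vs during GPU execution):
before = 23.0
during = 18.6
print(f"CPU credit/hr lost: {before - during:.1f}")  # about 4.4
```

So the speedup is closer to 3.2x, and the CPU-side loss closer to 4-5 credits/hr, consistent with the estimates in the post.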

ID: 1451582 · Report as offensive
Bill Greene
Volunteer tester

Send message
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1451583 - Posted: 8 Dec 2013, 21:36:39 UTC - in response to Message 1451582.  

I'm sorry, BilBg, but the effect is exactly the same with 7 active CPU tasks. Immediate temperature drop and slowdown when the GPU processing starts.

I believe I can see the same pattern in Bill Greene's host, although the number of tasks there is quite low.

So I'll stop using the GPU on this host for now, and hope for new developments.

Keep up the good work, Raistmer, and let me know if there's anything more I can do.


Thought I would share this early analysis. Will take another look in about a week. After that, I will cut back to 2 WU's for the GPU for another round of data, before doing the same thing for a single GPU WU.

Run Time (hrs) CPU Time (hrs) Avg Credit/WU Avg WU/Run Time Hr Avg WU/Execution Hr
Astropulse V6 (GPU - 28 WU's)
14.63 6.41 639.66 (5 WU's only) 43.7 99.7

Astropulse V6 (CPU - 10 WU's)
22.6 21.4 663.29 (single WU) 29.4 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6 3.6 81.614 (5 WU's only) 22.6 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1 4.8 89.1 (73 WU's) 17.5 18.6

WU's missing from the Avg Credit/WU column are pending.
The GPU is pumping out V6 WU's at about 3 times the speed of the CPU.
It appears the CPUs are losing about 5 credits/hr with the GPU executing, which I assume is due to GPU management overhead.
However, only 5 of the 113 CPU WU's have been credited since bringing the GPU on line.



Unfortunately, forum protocol apparently doesn't allow table formatting ...
ID: 1451583 · Report as offensive
Profile Link
Avatar

Send message
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1451591 - Posted: 8 Dec 2013, 22:06:30 UTC - in response to Message 1451583.  

Unfortunately, forum protocol apparently doesn't allow table formatting ...

Use the "pre" tag. And BTW, there is no Astropulse v7.
ID: 1451591 · Report as offensive
Profile Gundolf Jahn

Send message
Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 1451599 - Posted: 8 Dec 2013, 22:34:59 UTC - in response to Message 1451583.  

Unfortunately, forum protocol apparently doesn't allow table formatting ...

As Link already mentioned, that would look like this :-)

Run Time (hrs)   CPU Time (hrs)   Avg Credit/WU           Avg WU/Run Time Hr   Avg WU/Execution Hr

Astropulse V6 (GPU - 28 WU's)
14.63            6.41             639.66 (5 WU's only)    43.7                 99.7

Astropulse V6 (CPU - 10 WU's)
22.6             21.4             663.29 (single WU)      29.4                 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6              3.6              81.614 (5 WU's only)    22.6                 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1              4.8              89.1 (73 WU's)          17.5                 18.6

Gruß
Gundolf
ID: 1451599 · Report as offensive
Profile Wiggo
Avatar

Send message
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1451601 - Posted: 8 Dec 2013, 22:35:27 UTC - in response to Message 1451582.  

I'm sorry, BilBg, but the effect is exactly the same with 7 active CPU tasks. Immediate temperature drop and slowdown when the GPU processing starts.

I believe I can see the same pattern in Bill Greene's host, although the number of tasks there is quite low.

So I'll stop using the GPU on this host for now, and hope for new developments.

Keep up the good work, Raistmer, and let me know if there's anything more I can do.


Thought I would share this early analysis. Will take another look in about a week. After that, I will cut back to 2 WU's for the GPU for another round of data, before doing the same thing for a single GPU WU.

Run Time (hrs) CPU Time (hrs) Avg Credit/WU Avg WU/Run Time Hr Avg WU/Execution Hr
Astropulse V6 (GPU - 28 WU's)
14.63 6.41 639.66 (5 WU's only) 43.7 99.7

Astropulse V6 (CPU - 10 WU's)
22.6 21.4 663.29 (single WU) 29.4 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6 3.6 81.614 (5 WU's only) 22.6 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1 4.8 89.1 (73 WU's) 17.5 18.6

WU's missing from the Avg Credit/WU column are pending.
The GPU is pumping out V6 WU's at about 3 times the speed of the CPU.
It appears the CPUs are losing about 5 credits/hr with the GPU executing, which I assume is due to GPU management overhead.
However, only 5 of the 113 CPU WU's have been credited since bringing the GPU on line.

This may sound a little strange, but what happens if you disable HyperThreading?

Cheers.
ID: 1451601 · Report as offensive
Bill Greene
Volunteer tester

Send message
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1451881 - Posted: 9 Dec 2013, 16:15:45 UTC - in response to Message 1451599.  

Unfortunately, forum protocol apparently doesn't allow table formatting ...

As Link already mentioned, that would look like this :-)

Run Time (hrs)   CPU Time (hrs)   Avg Credit/WU           Avg WU/Run Time Hr   Avg WU/Execution Hr

Astropulse V6 (GPU - 28 WU's)
14.63            6.41             639.66 (5 WU's only)    43.7                 99.7

Astropulse V6 (CPU - 10 WU's)
22.6             21.4             663.29 (single WU)      29.4                 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6              3.6              81.614 (5 WU's only)    22.6                 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1              4.8              89.1 (73 WU's)          17.5                 18.6

Gruß
Gundolf


Thanks. Not sure how you did that but will take a harder look. And, yes, of course - no Astropulse V7; too fast on the Ctrl-V. Change Astropulse V7 to SETI@home V7.

Bill
ID: 1451881 · Report as offensive
Bill Greene
Volunteer tester

Send message
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1452202 - Posted: 10 Dec 2013, 7:01:10 UTC - in response to Message 1451601.  

I'm sorry, BilBg, but the effect is exactly the same with 7 active CPU tasks. Immediate temperature drop and slowdown when the GPU processing starts.

I believe I can see the same pattern in Bill Greene's host, although the number of tasks there is quite low.

So I'll stop using the GPU on this host for now, and hope for new developments.

Keep up the good work, Raistmer, and let me know if there's anything more I can do.


Thought I would share this early analysis. Will take another look in about a week. After that, I will cut back to 2 WU's for the GPU for another round of data, before doing the same thing for a single GPU WU.

Run Time (hrs) CPU Time (hrs) Avg Credit/WU Avg WU/Run Time Hr Avg WU/Execution Hr
Astropulse V6 (GPU - 28 WU's)
14.63 6.41 639.66 (5 WU's only) 43.7 99.7

Astropulse V6 (CPU - 10 WU's)
22.6 21.4 663.29 (single WU) 29.4 31.1

Astropulse V7 (CPU - 113 WU's before GPU execution)
3.6 3.6 81.614 (5 WU's only) 22.6 23.0

Astropulse V7 (CPU - 77 WU's during GPU execution)
5.1 4.8 89.1 (73 WU's) 17.5 18.6

WU's missing from the Avg Credit/WU column are pending.
The GPU is pumping out V6 WU's at about 3 times the speed of the CPU.
It appears the CPUs are losing about 5 credits/hr with the GPU executing, which I assume is due to GPU management overhead.
However, only 5 of the 113 CPU WU's have been credited since bringing the GPU on line.

This may sound a little strange, but what happens if you disable HyperThreading?

Cheers.


Not sure exactly but suspect CPU output drops by 1/4 or so. An interesting question and something I may look into later.
ID: 1452202 · Report as offensive
Profile Wiggo
Avatar

Send message
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1452206 - Posted: 10 Dec 2013, 7:24:46 UTC



Not sure exactly but suspect CPU output drops by 1/4 or so. An interesting question and something I may look into later.

Some around here have been reporting better results with it disabled, but as usual YMMV.

Cheers.
ID: 1452206 · Report as offensive
Profile SongBird
Volunteer tester

Send message
Joined: 23 Oct 01
Posts: 104
Credit: 164,826,157
RAC: 297
Bulgaria
Message 1452222 - Posted: 10 Dec 2013, 8:50:16 UTC
Last modified: 10 Dec 2013, 9:15:10 UTC

Hi,

I have a number of Haswell machines with Intel(R) HD Graphics 4000. I want to use the GPUs but can't really spend the time to tinker with all of them (editing app_info.xml files and so on). I did set up AP6_win_x86_SSE2_OpenCL_Intel_r2058 on one machine and it works quite well...

In this regard I want to ask if you have a timeline on when your app will get integrated as SETI@Home's official BOINC Intel GPU app? I'm guessing that BOINC will then automatically download it and start crunching on GPUs...

Thanks!

[edit]On a somewhat related note. What is that "aimerge.cmd" you keep talking about? I've been crunching on my Intel GPU for some time now and I have never done any aimerging :/

[edit2]For anyone interested here is my app_info.xml file. I bet it is a mess...
https://www.dropbox.com/s/wkr7sm7u2a2mf12/app_info.xml
ID: 1452222 · Report as offensive
Profile Gundolf Jahn

Send message
Joined: 19 Sep 00
Posts: 3184
Credit: 446,358
RAC: 0
Germany
Message 1452223 - Posted: 10 Dec 2013, 8:52:57 UTC - in response to Message 1451881.  
Last modified: 10 Dec 2013, 8:53:23 UTC

Thanks. Not sure how you did that but will take a harder look.

Just quote my message (again), without actually posting it. In the quoted part, you'll see the BBCode tags I've used. Oh, and I had to add/delete some tab characters; the Preview button is your friend! ;-)

Gruß
Gundolf
ID: 1452223 · Report as offensive
Profile Link
Avatar

Send message
Joined: 18 Sep 03
Posts: 834
Credit: 1,807,369
RAC: 0
Germany
Message 1452230 - Posted: 10 Dec 2013, 9:44:03 UTC - in response to Message 1452222.  
Last modified: 10 Dec 2013, 9:49:49 UTC

I have a number of Haswell machines with INTEL Intel(R) HD Graphics 4000. I want to use the GPUs but can't really spend the time to tinker with all of them (editing appinfo xmls and so on). I did set the AP6_win_x86_SSE2_OpenCL_Intel_r2058 on one machine and it works quite well...

You should not need to tinker that much at all. If all the machines are the same (or at least don't need special settings for the GPU app), get it working properly on one of them, then let the cache run dry (on the other machines too), copy all the files from the SETI project directory to a USB drive or a network shared folder, whichever is more suitable for you, and copy those files to the project folders of all the other computers. Done.

BTW, aimerge comes with the Lunatics installer, and you can make a working app_info.xml just by double-clicking it if you have all the needed .aistub files in the project folder, so no editing is needed here, unless you want to add some cmd parameters or change the WU count for the GPUs.
ID: 1452230 · Report as offensive
Profile Raistmer
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 16 Jun 01
Posts: 6325
Credit: 106,370,077
RAC: 121
Russia
Message 1452407 - Posted: 10 Dec 2013, 20:43:51 UTC - in response to Message 1452222.  


In this regard I want to ask if you have a timeline on when your app will get integrated as SETI@Home's official BOINC Intel GPU app?


It's in testing on the beta project right now. Hopefully soon, if no show-stopper bugs are found.

SETI apps news
We're not gonna fight them. We're gonna transcend them.
ID: 1452407 · Report as offensive
Profile ausymark

Send message
Joined: 9 Aug 99
Posts: 95
Credit: 10,175,128
RAC: 0
Australia
Message 1452601 - Posted: 11 Dec 2013, 5:06:46 UTC
Last modified: 11 Dec 2013, 5:16:24 UTC

Just some general thoughts on intel GPU results.

Firstly, on hyperthreaded systems I always run SETI on 50% of the processors, which equates to the actual number of cores. The extra hyperthreads I keep 'spare' for normal task usage, so the computer is still a fluid beast.

Secondly, I always free up a core or two for feeding any GPUs that are running. This assumes that the GPU can process faster than any CPU doing the same task, so freeing up a core or two or three (depending on how many GPUs you are feeding) is essential to feeding the "GPU Beast".

In light of the above, with the Intel GPU in this specific case, I would keep at least one real core free. So, with 50% processor usage (100% of actual cores), drop that to 37.5%, or to 25% if you want to free up two actual cores (maybe you want to run 2 instances on the GPU).

Doing so should free the CPU to feed the GPU and run a smaller number of CPU tasks while keeping the computer 'fluid and free'. By not bogging the CPU down you maximise GPU use, and the overall crunching rate should increase.
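The 50% / 37.5% / 25% figures are just thread arithmetic on a 4-core/8-thread CPU (which is what those numbers imply). A small sketch of the calculation:

```python
def boinc_cpu_percent(physical_cores: int, threads_per_core: int,
                      cores_to_free: int) -> float:
    """BOINC 'use at most N% of processors' value that leaves
    cores_to_free physical cores (plus all the spare hyperthreads)
    out of crunching."""
    logical = physical_cores * threads_per_core
    crunching = physical_cores - cores_to_free
    return 100.0 * crunching / logical

# 4C/8T i7, as assumed in the post above:
print(boinc_cpu_percent(4, 2, 0))  # 50.0  -> one task per real core
print(boinc_cpu_percent(4, 2, 1))  # 37.5  -> one core free for the GPU
print(boinc_cpu_percent(4, 2, 2))  # 25.0  -> two cores free
```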

Just my 2c worth.

PS: Waiting for the Linux versions of the test to arrive ;)
ID: 1452601 · Report as offensive
Profile ausymark

Send message
Joined: 9 Aug 99
Posts: 95
Credit: 10,175,128
RAC: 0
Australia
Message 1452603 - Posted: 11 Dec 2013, 5:09:24 UTC

Sandy Bridge GPU

I know the Sandy Bridge GPUs handle OpenCL 1.01. So I was wondering why they aren't being included in the trial (unless they are and I missed it). Yes, they may be limited compared to the two most recent generations of Intel GPUs, but my guess is that they could still outperform the CPU cores by a factor of 2.

Cheers

Mark
ID: 1452603 · Report as offensive
Claggy
Volunteer tester

Send message
Joined: 5 Jul 99
Posts: 4654
Credit: 47,537,079
RAC: 4
United Kingdom
Message 1452610 - Posted: 11 Dec 2013, 5:34:14 UTC - in response to Message 1452603.  

Sandy Bridge GPUs don't have OpenCL support; they are 2nd Gen Core processors, and 3rd Gen is required for OpenCL support on the GPU:

http://software.intel.com/en-us/articles/opencl-sdk-frequently-asked-questions/#14

Claggy
ID: 1452610 · Report as offensive
Profile ausymark

Send message
Joined: 9 Aug 99
Posts: 95
Credit: 10,175,128
RAC: 0
Australia
Message 1452631 - Posted: 11 Dec 2013, 6:27:58 UTC - in response to Message 1452610.  

Hi Claggy

Thanks for that confirmation. Looks like I won't be crunching anything on that GPU then. Luckily I have an nVidia 570 joining my 580 in the next week to help crunch. I'm sure that will perform better than the Intel GPU in the i7 2600K CPU ;)

Cheers

Mark
ID: 1452631 · Report as offensive
Bill Greene
Volunteer tester

Send message
Joined: 3 Jul 99
Posts: 80
Credit: 116,047,529
RAC: 61
United States
Message 1453704 - Posted: 14 Dec 2013, 2:40:08 UTC - in response to Message 1452223.  

Thanks. Not sure how you did that but will take a harder look.

Just quote my message (again), without actually posting it. In the quoted part, you'll see the BBCode tags I've used. Oh, and I had to add/delete some tab characters; the Preview button is your friend! ;-)

Gruß
Gundolf


Thanks. Will follow up. Bill
ID: 1453704 · Report as offensive
 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.