question and or idea.....

Message boards : Number crunching : question and or idea.....
Peter M. Ferrie
Volunteer tester
Joined: 28 Mar 03
Posts: 86
Credit: 9,967,062
RAC: 0
United States
Message 1615836 - Posted: 18 Dec 2014, 17:13:52 UTC

Is it theoretically possible to have a multi-core PC (2 cores or more, anything from an Atom up to a current i7) work on the same workunit with all of its cores at the same time, to dramatically reduce the time it takes a single core to compute a workunit?

What would need to be done to make BOINC (SETI) able to do this?


Thoughts? Comments?
ID: 1615836
Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34255
Credit: 79,922,639
RAC: 80
Germany
Message 1615847 - Posted: 18 Dec 2014, 17:34:45 UTC

No, it's not possible atm.
It would require new science apps.
Joe, Raistmer and/or Jason can explain this much better.


With each crime and every kindness we birth our future.
ID: 1615847
Aurora Borealis
Volunteer tester
Joined: 14 Jan 01
Posts: 3075
Credit: 5,631,463
RAC: 0
Canada
Message 1615889 - Posted: 18 Dec 2014, 18:39:58 UTC - in response to Message 1615847.  

No, it's not possible atm.
It would require new science apps.
Joe, Raistmer and/or Jason can explain this much better.

My understanding is that the SETI analysis is not conducive to parallelization. I'll leave it to the Lunatics pros to chime in.
ID: 1615889
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1615904 - Posted: 18 Dec 2014, 19:07:33 UTC - in response to Message 1615889.  

No, it's not possible atm.
It would require new science apps.
Joe, Raistmer and/or Jason can explain this much better.

My understanding is that the SETI analysis is not conducive to parallelization. I'll leave it to the Lunatics pros to chime in.

What do you think we do on GPUs? They are parallel devices, par excellence.

SETI probably isn't appropriate for the old-style symmetric multi-processor (SMP) style of parallel programming, but it's been suggested that OpenCL could use multiple CPUs in the same way that it uses multiple compute units within a GPU.

For traditional (x86-style) CPUs, there probably isn't any point in incurring the extra overhead of the middleware - we have efficient enough CPU applications to perform the work required in a timely fashion on single cores.

If OpenCL drivers/runtime support are available, there might be some benefit from running multi-core applications on Atom and similar devices, which might struggle to meet deadlines in single-core mode. Recent versions of BOINC already have the capability to schedule OpenCL on CPUs, though I'm not aware of anyone testing that facility yet.
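For illustration, here is a minimal host-side sketch of what "OpenCL on the CPU" means in practice: the runtime exposes all of the CPU's cores as compute units of a single device, so one kernel launch gets spread across them. This is a made-up example, not code from any SETI or Lunatics application; it assumes an OpenCL runtime with CPU support (Intel's or AMD's, for instance) is installed, and the kernel and variable names are just placeholders.

```c
/* cpu_cl_sketch.c - hypothetical example: find an OpenCL CPU device and
 * run one trivial kernel across all of its compute units (cores).
 * Build with something like: gcc cpu_cl_sketch.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void fill(__global float *out) {"
    "    size_t i = get_global_id(0);"
    "    out[i] = (float)i * 0.5f;"   /* stand-in for real work */
    "}";

int main(void) {
    cl_platform_id plat;
    cl_device_id dev;
    cl_uint cu;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    /* Ask specifically for a CPU device - this is what "scheduling
     * OpenCL on CPUs" relies on. */
    err = clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);
    if (err != CL_SUCCESS) { fprintf(stderr, "no OpenCL CPU device\n"); return 1; }

    clGetDeviceInfo(dev, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof cu, &cu, NULL);
    printf("CPU device exposes %u compute units (cores/threads)\n", cu);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "fill", &err);

    size_t n = 1 << 20;
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), NULL, &err);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf);

    /* One launch; the runtime splits the 2^20 work-items over all cores. */
    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clFinish(q);

    clReleaseMemObject(buf); clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}
```

If I understand the BOINC side correctly, a real multi-core app would additionally need its plan class to tell the client how many CPUs the task occupies, so the scheduler doesn't overcommit the remaining cores.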
ID: 1615904
Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34255
Credit: 79,922,639
RAC: 80
Germany
Message 1615969 - Posted: 18 Dec 2014, 21:59:17 UTC - in response to Message 1615904.  
Last modified: 18 Dec 2014, 22:00:16 UTC

What do you think we do on GPUs? They are parallel devices, par excellence.

SETI probably isn't appropriate for the old-style symmetric multi-processor (SMP) style of parallel programming, but it's been suggested that OpenCL could use multiple CPUs in the same way that it uses multiple compute units within a GPU.

For traditional (x86-style) CPUs, there probably isn't any point in incurring the extra overhead of the middleware - we have efficient enough CPU applications to perform the work required in a timely fashion on single cores.

If OpenCL drivers/runtime support are available, there might be some benefit from running multi-core applications on Atom and similar devices, which might struggle to meet deadlines in single-core mode. Recent versions of BOINC already have the capability to schedule OpenCL on CPUs, though I'm not aware of anyone testing that facility yet.


In terms of programming technique that's not the same thing.
If it were, I could use all my 8 cores plus 3 instances on my GPU to process the same task.
That was the original question, and it's not possible atm.

It was discussed a few years ago at Lunatics that this isn't an easy task.


With each crime and every kindness we birth our future.
ID: 1615969
arkayn
Volunteer tester
Joined: 14 May 99
Posts: 4438
Credit: 55,006,323
RAC: 0
United States
Message 1616114 - Posted: 19 Dec 2014, 7:13:00 UTC - in response to Message 1615904.  

What do you think we do on GPUs? They are parallel devices, par excellence.

SETI probably isn't appropriate for the old-style symmetric multi-processor (SMP) style of parallel programming, but it's been suggested that OpenCL could use multiple CPUs in the same way that it uses multiple compute units within a GPU.

For traditional (x86-style) CPUs, there probably isn't any point in incurring the extra overhead of the middleware - we have efficient enough CPU applications to perform the work required in a timely fashion on single cores.

If OpenCL drivers/runtime support are available, there might be some benefit from running multi-core applications on Atom and similar devices, which might struggle to meet deadlines in single-core mode. Recent versions of BOINC already have the capability to schedule OpenCL on CPUs, though I'm not aware of anyone testing that facility yet.


John over at Collatz does have an OpenCL CPU app available.

ID: 1616114
Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1616116 - Posted: 19 Dec 2014, 7:16:32 UTC - in response to Message 1616114.  

John over at Collatz does have an OpenCL CPU app available.

But how efficient is it compared to just running it straight out, without a third-party app involved?

Cheers.
ID: 1616116
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1616142 - Posted: 19 Dec 2014, 8:05:53 UTC - in response to Message 1616116.  

I'm testing one v6.04 Mini Collatz task, running on 3 CPU cores of my i5-2500K.
It's kind of cool looking:

[screenshot]
I see these run 'cooler' on my CPU than Multibeam or Astropulse do. Those tend to be around 60 °C, these OpenCL ones only 56 °C.
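For anyone wondering how an OpenCL CPU task can end up on only 3 of the 4 cores: one possible mechanism (purely a sketch; I have no idea whether the Collatz app actually does this, it may simply let BOINC keep a core free) is OpenCL 1.2 device fission, which carves a smaller sub-device out of the CPU device:

```c
/* Hypothetical sketch: carve a 3-core sub-device out of an OpenCL CPU
 * device with clCreateSubDevices (OpenCL 1.2 device fission). */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plat;
    cl_device_id cpu, sub;
    cl_uint cu, n;
    cl_int err;

    clGetPlatformIDs(1, &plat, NULL);
    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &cpu, NULL) != CL_SUCCESS) {
        fprintf(stderr, "no OpenCL CPU device\n");
        return 1;
    }

    /* Request a sub-device containing exactly 3 compute units (cores). */
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS, 3,
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END, 0
    };
    err = clCreateSubDevices(cpu, props, 1, &sub, &n);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "device fission not supported (err %d)\n", err);
        return 1;
    }

    clGetDeviceInfo(sub, CL_DEVICE_MAX_COMPUTE_UNITS, sizeof cu, &cu, NULL);
    printf("sub-device has %u compute units\n", cu);  /* expect 3 */

    /* A context/queue built on 'sub' instead of 'cpu' keeps the kernel's
     * work on those 3 cores, leaving the 4th free for other tasks. */
    clReleaseDevice(sub);
    return 0;
}
```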
ID: 1616142
Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1616145 - Posted: 19 Dec 2014, 8:22:42 UTC

Cooler maybe, but maybe not as efficient as the old Collatz app running on a single core?

Cheers.
ID: 1616145
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1616158 - Posted: 19 Dec 2014, 8:40:11 UTC - in response to Message 1616145.  
Last modified: 19 Dec 2014, 8:41:55 UTC

I got three tasks: two single-core CPU Mini Collatz tasks and one for OpenCL_CPU.
The single-core ones sit waiting with an initial estimate of 3h 17m 56s.
The OpenCL task's initial estimate was 2 hours 5 minutes, but the best estimate now says it's going to take about 2 hours.

But I'll have to run one of those single-core tasks after it to see what that results in. Will do that tonight. Have to be places in a bit and won't be back home until after 6pm.

Ooh... I also see it continues to run even with BOINC now set to 'Suspend - time of day'.
ID: 1616158
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 1616436 - Posted: 19 Dec 2014, 21:16:35 UTC - in response to Message 1616158.  

Damn, in the end I didn't see what actual elapsed time it had.
The task's reported values are skewed, as both run time and CPU time show as 20,902.81 seconds.
However, stderr.txt speaks of CPU time: 34246.5 seconds and Elapsed time: 7545.7 seconds.

The latter may be the closest; the last time I checked, the elapsed time was an hour and 47 minutes with 17 minutes to go. But then things sped up, and right at the crucial moment I got a phone call.
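For what it's worth, taking those stderr figures at face value gives 34246.5 / 7545.7 ≈ 4.5 CPU-seconds per second of wall-clock time, which is more than the four cores of an i5-2500K can physically supply, so at least one of the two numbers must be off - consistent with the reported values being unreliable.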
ID: 1616436
