GPU v. CPU - Is the GPU a sleepy fellow, or is it the job size which is larger for the GPU?


DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1480156 - Posted: 21 Feb 2014, 12:58:08 UTC
Last modified: 21 Feb 2014, 13:05:53 UTC

Hi Crunchers,


I've been grumbling that the GPU isn't much better than a CPU, but now I've noticed that the job size of the CUDA55 task seems many times larger than the job the CPU does!? Is this why the GPU job takes so much more time, or is there some other explanation?
I know my GPU/GeForce 610 is a "little" card, but that's because this is a testing system. I'm trying to build the perfect cruncher. If I find a card which does work better than its CPU counterpart, I just might mount a motherboard with 4 x PCIe 2.0 slots and as many graphics cards...

Here's a sample of validated work units. Please note the much larger work size of the CUDA55 job.
Is this normal?
Is this because the graphics card is much better at crunching, as I've heard?
And which is fastest in this case: this 1 GPU, or 1 of the CPU cores?

The CPU did 1 work unit in about 1.9 hours. The GPU/CUDA55 did the job in about 11.6 hours. So if the work unit size really is that much bigger for the GPU/CUDA55, it's not as slow as it seemed at first. My calculation tells me that 1 CPU core would do the job the GPU/CUDA55 did in about 7 hours!?
Task      WU        Sent                       Reported                   Status                   Run time (s)  CPU time (s)  Credit  Application
31318757  13311792  20 Feb 2014, 12:02:08 UTC  21 Feb 2014, 5:25:57 UTC   Completed and validated  41,808.17     83.69         480.00  Period Search Application v101.11 (cuda55)
31298116  13212330  17 Feb 2014, 15:05:45 UTC  18 Feb 2014, 3:20:50 UTC   Completed and validated  42,848.56     63.46         480.00  Period Search Application v101.11 (cuda55)
31274546  12757484  14 Feb 2014, 1:52:03 UTC   14 Feb 2014, 13:43:17 UTC  Completed and validated  6,831.74      5,319.02      480.00  Period Search Application v102.10 (avx)
31274513  12757201  14 Feb 2014, 1:52:04 UTC   14 Feb 2014, 19:01:32 UTC  Completed and validated  6,861.73      5,127.46      480.00  Period Search Application v102.10 (avx)
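
All four rows carry the same credit (480.00), so credit per hour is a reasonable first-order comparison between the GPU and one CPU core - a minimal sketch in Python, with the run times copied from the table above:

    # Credit per hour from the validated tasks listed above.
    tasks = [
        ("cuda55 (GPU)", 41808.17, 480.0),   # (application, run time s, credit)
        ("cuda55 (GPU)", 42848.56, 480.0),
        ("avx (CPU)",     6831.74, 480.0),
        ("avx (CPU)",     6861.73, 480.0),
    ]
    for app, run_time_s, credit in tasks:
        hours = run_time_s / 3600.0
        print(f"{app}: {hours:4.1f} h per task, {credit / hours:5.1f} credit/h")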

.
Project Headless CLI Linux Multiple GPU Boinc Servers
Ubuntu Server 14.04.1 64bit
Kernel 3.13.0-32-generic
CPU's i5-4690K
GPU's GT640/GTX750TI
Nvidia v.340.29
BOINC v.7.2.42

ID: 1480156
draco
Volunteer tester

Joined: 6 Dec 05
Posts: 119
Credit: 3,327,457
RAC: 0
Latvia
Message 1480174 - Posted: 21 Feb 2014, 13:40:49 UTC

Compare the WU time to a wingman who did the same WU on a CPU.
Here is an example:
http://setiathome.berkeley.edu/workunit.php?wuid=1429526775
ID: 1480174
Zalster - Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1480294 - Posted: 21 Feb 2014, 17:43:51 UTC - in response to Message 1480174.  

Hello Dan,

I'm in no way an expert. Generally, GPUs are much faster than CPUs on both Multibeam and Astropulse work units. That being said, it depends on the GPU and the CPU. My example: a Multibeam work unit takes 12-20 minutes on my GPU, where the CPU takes 1.5-2 hours. For Astropulse, the GPU can crunch one per hour, while the CPU takes just over 12 hours. Again, all times will vary depending on the type of graphics card, the CPU, and the setup of your system. Hope this helps.
ID: 1480294
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1480342 - Posted: 21 Feb 2014, 18:53:03 UTC - in response to Message 1480156.  

I've noticed that the job size of the CUDA55 task seems many times larger than the job the CPU does!? Is this why the GPU job takes so much more time, or is there some other explanation?
...
Here's a sample of validated work units. Please note the much larger work size of the CUDA55 job.
Is this normal?

Wrong assumption, for both Asteroids@home and SETI@home, as the tasks sent to the CPU or GPU are the same.

Which number makes you believe in "the much larger work size of the CUDA55 job"?
See the credit '480.00' - it is the same.
And how would some imaginary 'big CUDA55 task' validate against the same (but this time somehow 'small') task sent to somebody else's CPU?

Read my post about Asteroids@home CUDA (slow) vs SETI@home CUDA (fast):
http://setiathome.berkeley.edu/forum_thread.php?id=73359&postid=1479147#1479147
 


- ALF - "Find out what you don't do well ..... then don't do it!" :)
 
ID: 1480342
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1480897 - Posted: 23 Feb 2014, 10:05:53 UTC
Last modified: 23 Feb 2014, 10:26:17 UTC

Draco: OK, thanks for the link. I'll check it out ;)

Zalster: OK, then it's me who can't read and calculate ;) Thank you for that. I'll continue my search for the "ultimate cruncher times 5" ;)

BilBg: This is what got me thinking.

According to BOINC Manager:
This job was done in 11.6 hours (run time 41,808.17 s, CPU time 83.69 s, credit 480.00, Period Search Application v101.11 (cuda55)).
This job was done in 1.9 hours (run time 6,831.74 s, CPU time 5,319.02 s, credit 480.00, Period Search Application v102.10 (avx)).

To be frank, I don't know what the 480.00 column is!? I was referring to the 4 jobs, where 2 were done in about 1.9 hours and 2 were done in about 11.6 hours.
The column titles from /results.php are:

Status ..... Run time (sec) ..... CPU time (sec) ..... Opret (which means "create" in Danish, which might be why I didn't guess it) ;)

Wrong assumption, for both Asteroids@home and SETI@home, as the tasks sent to the CPU or GPU are the same.

Yes, exactly! And when they are the same, why does it take the GPU so much longer than the CPU to crunch the same job, I ask!?
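
The Run time and CPU time columns together also show where the work happens - a small Python sketch using the two rows quoted above: for the cuda55 task almost none of the elapsed time is CPU work (the GPU is doing the crunching), while the avx task keeps the CPU busy nearly the whole run.

    # Fraction of the elapsed time the CPU was busy, per application type.
    rows = [
        ("cuda55", 41808.17, 83.69),     # (application, run time s, CPU time s)
        ("avx",     6831.74, 5319.02),
    ]
    for app, run_s, cpu_s in rows:
        print(f"{app}: CPU busy {100.0 * cpu_s / run_s:.1f}% of the elapsed time")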

ID: 1480897
Zalster - Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1480967 - Posted: 23 Feb 2014, 17:18:04 UTC - in response to Message 1480897.  
Last modified: 23 Feb 2014, 17:20:57 UTC

Dan,

What is the computer setup you are looking to upgrade? Is it store-bought, or one that you built yourself? Store-bought machines have limitations in power supply and slot sizes. The GT 610 is an entry-level graphics card, which is fine for store-bought computers, as there is usually only a limited power supply to feed it. I don't know your level of experience with computers; looking at the stamp after your name, I would assume you have a lot. For myself, it has been a huge learning experience over the past few months. Higher-end graphics cards tend to perform much better, but you have to be sure the motherboard and the power supply of the computer can handle them. Also, many require extra power in the form of a 6-pin or 8-pin connector from the power supply. Here's a breakdown of the GT 610 compared to the GT 620. (I was going to post the GT 630 as well, but there are so many different versions it would take too much time.)

GT 610
GPU Engine Specs:
CUDA Cores 48
Graphics Clock (MHz) 810
Processor Clock (MHz) 1620
Texture Fill Rate (billion/sec) 6.5
Memory Specs:
Memory Clock 1.8 Gbps
Standard Memory Config 1024MB
Memory Interface DDR3
Memory Interface Width 64-bit
Memory Bandwidth (GB/sec) 14.4

Standard Graphics Card Dimensions:
Length 5.7 inches
Height 2.7 inches
Width Single-width
Thermal and Power Specs:
Maximum GPU Temperature (in C) 102 C
Maximum Graphics Card Power (W) 29 W
Minimum System Power Requirement (W) 300 W


GT 620

GPU Engine Specs:
CUDA Cores 96
Graphics Clock (MHz) 700
Processor Clock (MHz) 1400
Texture Fill Rate (billion/sec) 11.2
Memory Specs:
Memory Clock 1.8 Gbps
Standard Memory Config 1024 MB
Memory Interface DDR3
Memory Interface Width 64-bit
Memory Bandwidth (GB/sec) 14.4
Feature Support:
OpenGL 4.2
Bus Support PCI Express 2.0

Standard Graphics Card Dimensions:
Length 5.7 inches
Height 2.7 inches
Width Dual-width
Thermal and Power Specs:
Maximum GPU Temperature (in C) 98 C
Maximum Graphics Card Power (W) 49 W
Minimum System Power Requirement (W) 300 W



I have a GT 640 in a store-bought computer because I have limited expansion with it: only 1 free slot and no power for a card other than the PCIe slot itself. Recently, in some of the other threads, there has been talk of the GTX 750 Ti. There is much excitement about this graphics card; the surprising thing is that it requires only about 60 W at full usage and is nearly as good as a GTX 650 Ti. This is very impressive for a card that needs no extra power connector (all power comes from the PCIe slot). I have ordered one myself and am waiting to see if I can put it in the computer that has the GT 640. I made sure to get the version without a 6-pin connector, since some manufacturers add one when they overclock the card. If the motherboard accepts it, it will be much faster at work than my GT 640. I'll post the specifications here for you. I believe stream processors = CUDA cores.

GTX 750 Ti GPU Engine Specs:
CUDA Cores 640
Base Clock (MHz) 1020
Boost Clock (MHz) 1085
GTX 750 Ti Memory Specs:
Memory Clock 5.4 Gbps
Standard Memory Config 2048 MB
Memory Interface GDDR5
Memory Interface Width 128-bit
Memory Bandwidth (GB/sec) 86.4


GTX 750 Ti Graphics Card Dimensions:
Length 5.7 inches
Height 4.376 inches
Width Single-slot
Thermal and Power Specs:
Maximum GPU Temperature (in C) 95 C
Graphics Card Power (W) 60 W
Minimum System Power Requirement (W) 300 W

While this says single-slot, it may actually require 2 slots because of the attached fan. Fortunately my computer has only 1 free slot, and I can remove an extra expansion grille from the back of the case to make it fit. If the numbers are true, and if my store-bought computer accepts this card, it will process data close to what my home build does with a GTX 650 Ti. Very impressive. I hope this helps. Look in the message boards under Number Crunching; there are many threads there dealing with optimizing your systems, and many experienced builders post there. That is where I have gone to read, and I have learned much from those people.
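
For a very rough feel for the gap between these cards, here is a sketch in Python that scales CUDA cores by shader clock, using the numbers from the spec lists above. As noted later in the thread, this kind of scaling only holds within one architecture (the 750 Ti is Maxwell, the 610/620 are Fermi), so treat the cross-generation ratios as a ballpark at best.

    # Rough relative throughput: CUDA cores x shader clock (MHz).
    cards = {
        "GT 610":     (48,  1620),
        "GT 620":     (96,  1400),
        "GTX 750 Ti": (640, 1020),   # Maxwell: base clock, no hot-clocked shaders
    }
    base = cards["GT 610"][0] * cards["GT 610"][1]
    for name, (cores, mhz) in cards.items():
        print(f"{name}: ~{cores * mhz / base:.1f}x the raw throughput of a GT 610")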

Happy Crunching...
ID: 1480967
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1481657 - Posted: 25 Feb 2014, 16:35:04 UTC

Hi Zalster,

Well, I build my computers myself and have been doing so for 20 years now, so I have some know-how in that area. But I'm pretty new when it comes to Linux and newer graphics cards.
I'm trying to build a rack-mounted super cruncher that isn't too expensive. The guys in here have taught me a lot. I now know that the GT640 is a much better card for a little extra cash: it can endure a lot more heat and is about 4 times faster.
I also learned that the CPU I used in this first edition of my Linux BOINC headless rack-mounted server, alias "BOINC Super-Cruncher", was the wrong kind. I installed an Intel i5-3470, which is NOT a 64-bit processor! The new CPU I bought for BOINC Super-Cruncher v2.0 is an Intel i5-3570K, which is a real 64-bit processor. I think it was Mr. Guy who taught me about these things; he's from this forum as well.
I'm also trying a new Asus motherboard with 2 PCIe x16 slots, one v3.0 and one v2.0. For this reason I have now bought 2 Asus GeForce GT640s, one of each type, and will try to install 2 GPUs in the same case.

The case I use is a 2U rack-mount case from Germany. Great case! And I'm fitting it with an industrial PSU and CPU cooler, and SSDs to save power and avoid hard-disk failure.
So I really think this next version 2.0 of the BOINC Super-Cruncher will be a good one ;) All the hardware should be here over the weekend, and I'm looking forward to putting it together.

There's only one hitch: we haven't succeeded in running BOINC on the GPU under Ubuntu Server 12.04. This is supposedly because of the missing graphical interface in the Server Edition. I'm running the Desktop Edition now on test machine v1.0, but that's not my goal; my goal is a headless BOINC cruncher. I found a couple of things in the Ubuntu server guide I want to try, and the CPU in the present tester (which I believed was not a real 64-bit CPU) may also be a reason.

Thanks for all the info, thanks a lot my friend ;)

I'll be back ;)

ID: 1481657
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1481908 - Posted: 26 Feb 2014, 14:32:13 UTC

Hi Zalster,


Is there any "unit" you can use to measure how fast a graphics card/GPU or a CPU is? There are some indications in your last post, but what is it I have to look for to pick the right GPU? Do you know?

In the details page for each computer we see this:

Measured floating point speed 3832.76 million ops/sec
Measured integer speed 21698.46 million ops/sec


But how can we compare GPUs to GPUs, and GPUs to CPUs?

Does a GPU really have this many cores? Cores in the same sense as a CPU's?
GT 610
GPU Engine Specs:
CUDA Cores 48


ID: 1481908
OzzFan - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1481943 - Posted: 26 Feb 2014, 16:23:37 UTC - in response to Message 1481908.  

But how can we compare GPUs to GPUs, and GPUs to CPUs?

Does a GPU really have this many cores? Cores in the same sense as a CPU's?


No, they are not the same type of cores as in a CPU. A GPU's "cores" are highly specialized floating-point execution units that do one particular type of math really, really well (the same type of math used by many distributed computing projects). If you tried to run standard integer math on these units, such as the basic functions of an operating system, they would be dog slow, because they were not designed for that.

Really, in attempting to compare CPUs to GPUs, you have to understand from the start that you are comparing apples to oranges. The only comparison you can make is how quickly a CPU completes a workunit from a specific Angle Range (AR) versus how quickly a GPU completes a workunit from the same AR; that gives you a rough idea of how much faster the GPU is at this type of work.
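
To put rough numbers on that apples-to-oranges point, here is a back-of-envelope sketch in Python. It assumes the usual peak formula for these GPUs (CUDA cores x shader clock x 2 floating-point operations per cycle) and reuses the per-core Whetstone figure BOINC reported earlier in the thread. The GPU number is a theoretical single-precision best case that real applications never reach, which is one reason a small card like the GT 610 can still lose to one CPU core in practice.

    # Theoretical GPU peak vs BOINC's measured CPU benchmark (not directly
    # comparable; the GPU figure assumes perfectly parallel single-precision work).
    gt610_cores = 48
    gt610_shader_mhz = 1620        # from the GT 610 spec list earlier in the thread
    gpu_peak_gflops = gt610_cores * gt610_shader_mhz * 2 / 1000.0   # ~155 GFLOPS

    cpu_core_gflops = 3832.76 / 1000.0   # BOINC's measured Whetstone, per core

    print(f"GT 610, theoretical peak: {gpu_peak_gflops:.0f} GFLOPS")
    print(f"One CPU core, measured:   {cpu_core_gflops:.1f} GFLOPS")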
ID: 1481943
Zalster - Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1481947 - Posted: 26 Feb 2014, 16:32:54 UTC - in response to Message 1481908.  

Hi Dan,

Oh boy... That's a question... I'm basically what you would call an end user, so I'll answer what I can with what I can find from other sources.

CUDA cores are not the same as CPU cores. AMD and Nvidia have broken their graphics calculation process down into hundreds of small shading hardware units. Each one gets a small portion of the image and dedicates itself to it, working in parallel with the rest of the "shaders" or "cores" for a complete final result.

Just like CPU cores, CUDA or GPU cores are not all equal, so unless we are talking about the same generation or architecture, actual output does not scale with the number of cores: newer Kepler GPUs like the GTX 680 have 3x the CUDA cores of the card they replace (512 in the Fermi-based GTX 580), but are nowhere near 3x as fast - actually slower in many tasks, especially GPU rendering; with 3x the cores it barely breaks even with its predecessor on the current generation of software. The Fermi cores were "bigger" and more complex; the Kepler ones are smaller and simpler, yet faster. This suits some tasks and boosts performance, while penalizing other tasks.

So beware when comparing: keep it apples to apples, as oranges are present even within the same company's lineup.

If you make a direct one-to-one comparison, CUDA cores beat CPU cores. However, core counts are not the way to compare ATI and Nvidia graphics cards either: the GTX 580 has 512 CUDA cores while the (ATI) 7970 has 2048 stream cores, yet these two cards perform very close to one another. Yes, you want a card with more cores, but don't buy a card based on the core count alone: an ATI card with 512 cores would disappoint you, while an Nvidia card with 512 cores should make you very happy. The cores are used to render video to the screen or to do calculations for software that uses CUDA or OpenCL.

In general, GPUs are superior to CPUs at crunching the data. How do you know which GPU to pick? That comes down to how much money you are willing to spend, the cost of electricity where you live, your power supply unit and how many free cable connections you have, the motherboard, bus speed, etc. Also your preferred graphics card manufacturer; I prefer to stick with Nvidia, as I understand them better than ATI cards. As for a direct comparison of graphics cards crunching the data - which cards are the best?

At the top of this webpage you will see the options home.. participate.. about.. community.. account.. statistics. The last one is the one you want: click there and you will see the top participants and the top GPUs. Clicking on the GPU link shows the most productive GPU models by operating system. Also remember that some of these computers run multiple GPUs; I've seen some with 8 graphics cards per computer. How did I end up with my choice? It was a cost issue: what could I afford at the time, and what could my system handle. I ended up with GTX 650 Tis after my GT 620 burned out after 2 days. I researched some, saw the GTXs were faster, but couldn't afford any GTX 700s, plus there was a sale on the GTX 650 Tis, so that is how I ended up with them. Except for that new GTX 750 Ti (the new Maxwell architecture): I just ordered it and it should be here tomorrow. I'll throw it in my store-bought computer and see how it performs ;)

My last point is a question for you: what room is there in your server? From what I remember reading, there is a height limitation on servers. If you do end up getting a higher-end GPU, will it fit? I seem to remember someone saying that servers require low-clearance cards, which ends up limiting what will fit in them. The Number Crunching forum under the message boards is a good place to research a lot of these things; many of the more knowledgeable users hang out there a lot, and I don't see them much over here in Questions and Answers. They will be a better source of information than I am. Most of the info above I've taken from other websites to help explain.

I hope this helps and doesn't confuse you. Keep crunching....
ID: 1481947
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1482226 - Posted: 27 Feb 2014, 8:22:17 UTC - in response to Message 1481657.  

I installed an Intel i5-3470, which is NOT a 64-bit processor!

What do you mean by this?
http://ark.intel.com/products/68316/intel-core-i5-3470-processor-6m-cache-up-to-3_60-ghz
http://ark.intel.com/products/65520/Intel-Core-i5-3570K-Processor-6M-Cache-up-to-3_80-GHz

... both:
"Instruction Set 64-bit"

In fact they look very much alike.
And the i5-3470 is shown to have some more 'Advanced Technologies', which is strange, as the specifications look like the same chip with only a small difference in GHz and Intel HD Graphics version.

(I also wonder how you managed to install 64-bit Linux on a "NOT 64-bit" processor ;) )

I don't think Intel or AMD make any CPUs 'today' that are NOT 64-bit.
ID: 1482226
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1482242 - Posted: 27 Feb 2014, 9:08:11 UTC - in response to Message 1481908.  

But how can we compare GPUs to GPUs, and GPUs to CPUs?

You can't really make a comparison that will be valid for all cases (problems, algorithms, programs).
The theoretical figures are inflated - given for the 'best' case, when/if you load all the cores to the max at all times.

The difference:
- every CPU core can compute totally different 'things' (algorithms, instructions), independent of the other cores
- all GPU 'cores' (or at least a big group of them) have to execute the same instruction (at one point in time) on multiple data (a matrix)

So, to be able to load all the GPU 'cores', the algorithm and its implementation have to be highly parallel.
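
A loose analogy in Python/NumPy (an illustration only, not actual GPU code): the list comprehension below stands for per-core independent work, where each iteration could in principle branch and do something different, while the whole-array expression stands for one instruction applied across many data elements in lockstep - the style of work GPU 'cores' need in order to stay busy.

    import numpy as np

    x = np.arange(1_000_000, dtype=np.float64)

    # CPU-style: independent scalar steps, free to diverge per element.
    y_serial = [xi * xi + 1.0 for xi in x]

    # GPU-style: one operation broadcast across all elements at once.
    y_parallel = x * x + 1.0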


You can use the results of WUProp@Home
http://wuprop.boinc-af.org/results/compar_gpu.py?fabricant=NVIDIA&type=GeForce+600+Series&modele=GeForce+GT+640

(at the left is menu to choose different GPU to compare)

You can see how badly MilkyWay@Home is optimized for NVIDIA.
I'm not sure if this graph takes into account how many simultaneous tasks are run per GPU.
Also, some of the data may be from only one computer (not averaged).

This is for my GPU
http://wuprop.boinc-af.org/results/compar_gpu.py?fabricant=ATI&type=HD+6000&modele=Radeon+HD+6570


You can participate in WUProp@Home if you wish - it does not load the CPU at all:
http://wuprop.boinc-af.org/index.php


P.S.
I'm not sure what you think a "super cruncher" is, but the really big super crunchers are in this list:
http://setiathome.berkeley.edu/top_hosts.php
ID: 1482242
OzzFan - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1482310 - Posted: 27 Feb 2014, 15:14:41 UTC - in response to Message 1482226.  

I don't think Intel or AMD make any CPUs 'today' that are NOT 64-bit.


That's correct. Every x86 CPU since approximately 2002/2003 is 64-bit capable, with the only notable exceptions being the original Intel Core Solo/Duo (not to be confused with the Intel Core 2 Solo/Duo) and some early Intel Atom processors.
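
If in doubt, this can be checked on the machine itself. On Linux, the 'lm' (long mode) flag in /proc/cpuinfo means the CPU is x86-64 capable, regardless of which OS happens to be installed - a minimal sketch in Python:

    # Print whether the first CPU listed in /proc/cpuinfo advertises
    # long mode (x86-64).
    with open("/proc/cpuinfo") as f:
        flags = next(line for line in f if line.startswith("flags")).split()
    print("64-bit capable:", "lm" in flags)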
ID: 1482310
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1482507 - Posted: 27 Feb 2014, 21:43:32 UTC
Last modified: 27 Feb 2014, 21:50:23 UTC

Hi,

Due to a discussion at microtech, and a reference from in here to http://en.wikipedia.org/wiki/X86-64#History_of_Intel_64

And this from a site comparing Intel CPUs:

[...]
CPU ............... i5-3470 ..... i5-3570K
64-bit Computing ..... - ........... yes
[...]

If I'm misinformed, then I'm just a happy guy, because then I didn't buy the wrong CPU last time. Either way, it'll be the i5-3570K that is the next CPU tested. Great performance:

http://cpuboss.com/cpu/Intel-Core-i5-3570K
http://cpuboss.com/cpu/Intel-Core-i5-3470

Thanks for setting me straight ;) That's all I ask for ;)


Hi Zalster,

Thank you, my friend! I learned a great deal from that post! Thanks to you, I just solved a problem. Thank you for that! I'll be fighting 2 x GPUs next time ;)


Hi "Mr. anonymous" ;)

Thank you, to you too.. That was kind of you ;)

ID: 1482507
OzzFan - Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor
Volunteer tester
Joined: 9 Apr 02
Posts: 15691
Credit: 84,761,841
RAC: 28
United States
Message 1482515 - Posted: 27 Feb 2014, 22:14:05 UTC - in response to Message 1482507.  

Hi,

Due to a discussion at microtech, and a reference from in here to http://en.wikipedia.org/wiki/X86-64#History_of_Intel_64

And this from a site comparing Intel CPUs:

[...]
CPU ............... i5-3470 ..... i5-3570K
64-bit Computing ..... - ........... yes
[...]


That Wikipedia link confirms that the Core i5-3470 is a 64-bit CPU. Under the Intel 64 implementations section, it lists the entire Core i5 series (as well as the i3 and i7 series) as 64-bit capable.
ID: 1482515
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1482762 - Posted: 28 Feb 2014, 12:33:51 UTC
Last modified: 28 Feb 2014, 13:05:29 UTC

Hi, "Volunteer"

OK, thanks. This is good news; then I didn't make as big a mistake as I thought I did ;)


Hi BilBg,

This was a great link! Thanks for that one ;) http://wuprop.boinc-af.org/results/compar_gpu.py?fabricant=NVIDIA&type=GeForce+600+Series&modele=GeForce+GT+640


Hi Zalster,

After reading your post again, I think I know which way to go for the next test system. To build a headless BOINC computer, I think the CUDA drivers are the way forward. Actually, some of you guys tried to tell me this earlier on. https://developer.nvidia.com/cuda-downloads

There's one thing I don't understand. I found some pages about this but never found a real explanation: can "shaders", in other words, be called "replicas"?

My servers are rack-mounted. First I started out with 1U cases, but there was too much heat and no room for graphics cards. It was possible to get an adapter which changes the angle of the PCI bus by 90 degrees, but it was not possible to get such an adapter for the graphics card.
Then I changed cases, and now I use these, which I import from Germany. Very good cooling and a lot of room. I fit them with industrial 2U PSUs and a fan controller from Aerocool.
The results I've got from the last 14 days of testing the first test system (i5-3470/4GB/Asus P8H61-MX/MSI GeForce GT610) with the Desktop Edition of Ubuntu are really bad. So I will keep on trying, hoping to solve the issue: how to make Ubuntu Server crunch BOINC data with multiple GPUs.

Here are the cases I'm using:


I'll take a few pictures to show you ;)

Thanks for your help ;)

ID: 1482762
Zalster - Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1482778 - Posted: 28 Feb 2014, 14:30:03 UTC - in response to Message 1482762.  
Last modified: 28 Feb 2014, 14:31:44 UTC

Hi Dan,

This is beyond me, unfortunately. I had thought shaders referred to the cores running the programs used in the calculations. 'Replicas' in this instance, I believe, refers to the parallel processing done by the CUDA cores in a GPU. IF both of these assumptions are correct, THEN one could say that shaders (CUDA cores) run the shading (the program code), working in replicas (in parallel) to complete a task. I don't have a computer guru here to bounce this idea off, so I hope one of the others here can give you a better explanation. Sorry :(

You could try posting here and see what these people think.

http://setiathome.berkeley.edu/forum_forum.php?id=10
ID: 1482778
DanHansen@Denmark
Volunteer tester
Joined: 14 Nov 12
Posts: 194
Credit: 5,881,465
RAC: 0
Denmark
Message 1483335 - Posted: 1 Mar 2014, 19:21:54 UTC - in response to Message 1482778.  
Last modified: 1 Mar 2014, 19:34:50 UTC

Hi Zalster,


You could try posting here and see what these people think.

I don't think I'm good enough for those gurus. Maybe... I don't know. There are a lot of things I don't know, so I can't put my questions in the right words, I think. Maybe; I don't know.

Here, Zalster, this is my rack and the first headless Linux BOINC server, which I'm testing right now. It's the one where we are using the Desktop Edition to make the GPU crunch.
I'm sorry the picture is a little blurred; I will try to find the cooler on another site. Like the PSU, it's an industrial piece, so it can run for years at full speed ;)





ID: 1483335
Zalster - Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1483359 - Posted: 1 Mar 2014, 21:26:37 UTC - in response to Message 1483335.  
Last modified: 1 Mar 2014, 21:46:08 UTC

Looks good Dan,

You will have to keep us updated on how it goes. As for questions, you can always ask there; it doesn't matter whether your English is good, and you can even post in your native language if that is easier. I use Chrome with an auto-translate extension, and it works OK. I've seen many people post in a variety of languages, and most of those gurus are willing to help. That is why they created those links: to help people get their computers crunching. I'll check back here periodically to see how you are doing. I wish you good luck. As for my little experiment: I got the GTX 750 Ti and put it in the store-bought computer that used to have the GT 640. It's crunching much faster now. The longest Multibeam went from about 1 hour 15 minutes down to 30 minutes, and the quickest went from 40 minutes down to 20 minutes, so in a sense it did cut the computation time in half. I haven't had a chance to test any Astropulses on it, as we won't be getting any for at least 2 weeks since the server went down. I thought about using a PCI-to-PCI-Express riser and adding the GT 640 back, but BOINC wouldn't utilize the 640; it favored the 750 Ti and ignored the 640, so I removed it and now just keep the 750 in there. Maybe I'll get another 750 and see if I can run both in the store-bought computer, as long as the PSU holds out, lol....
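
On BOINC ignoring the GT 640: by default the client only runs tasks on the most capable GPU it detects, so mixed cards need <use_all_gpus> set in cc_config.xml. A minimal sketch in Python (the data-directory path is an assumption based on Ubuntu's boinc-client package; afterwards the client must be restarted or told to re-read its config file):

    # Write a cc_config.xml that tells the BOINC client to use every GPU.
    config = """<cc_config>
      <options>
        <use_all_gpus>1</use_all_gpus>
      </options>
    </cc_config>
    """

    # Default BOINC data directory for Ubuntu's boinc-client package
    # (assumption; adjust the path for other installs).
    with open("/var/lib/boinc-client/cc_config.xml", "w") as f:
        f.write(config)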

Keep Crunching...

Oops... looks like I spoke too soon. Just got an Astropulse resend; going to move it to the front of the line and see how long it takes...
ID: 1483359
BilBg
Volunteer tester
Joined: 27 May 07
Posts: 3720
Credit: 9,385,827
RAC: 0
Bulgaria
Message 1483916 - Posted: 3 Mar 2014, 4:35:01 UTC - in response to Message 1483335.  
Last modified: 3 Mar 2014, 5:08:49 UTC

You could try posting here and see what these people think.

I don't think I'm good enough for those gurus. Maybe... I don't know.

Don't be so shy ;)

See: a poster ([B^S] madmac) who 'Joined: 9 Feb 04', has 'Posts: 1130', and still doesn't understand ... well, almost everything ;)
http://setiathome.berkeley.edu/forum_thread.php?id=74015

(See the style he writes in:
"Click on what it said now will have to wait as computer needs to be restarted and have over 139 windows updates to do or have I done it wrong need information so that my gpu is checked to see if it can run Seti etc"
- because of that, hmm ... 'style', I haven't responded to him in years.
)

*****

Instead of re-posting in Number Crunching, you could create a thread there with links to the threads here in Questions and Answers that are most important to you.

E.g.
Title:
Servers, Linux, NVIDIA - can you help me to do it right?

Text:
I have some threads in Questions and Answers about this, but I need more technical help from people who know how to set up Linux servers with NVIDIA drivers so they can work with BOINC for CUDA and OpenCL apps.

Summary:
I am building ... (just a few sentences describing briefly what you want to do)

You may read and help me in these threads:
<links to the threads in Questions and Answers that are most important to you, e.g. the one for Linux + NVIDIA drivers + CUDA and OpenCL + BOINC + SETI@home>
ID: 1483916