Monster GPU Cruncher Build
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
Thanks. It should be PCIe v3.0 x16 speed. If I use a PCIe riser cable it will reduce performance a little, maybe -1%? I thought that should be acceptable. AFAIK the new APv7 GPU app works differently from the outdated APv6: the normally unneeded 'stuff' no longer has to be calculated on the CPU, so PCIe speed should no longer be a bottleneck. Or is it?

Would a RAM drive + SSD (or HDD?) work with Win 8.1 Pro x64? Or SSD + HDD? Or what's your recommended solution? After reading and writing so much English (prepared before I saw the two messages above) I can't think in English any more. ;-)

From what I've read on the internet: if you have a desktop PC that just sits on the floor, an HDD is fine. If you have a laptop and move it around, an SSD is better than an HDD (if the laptop falls, the HDD's read/write head can touch the platter and damage the drive). Also, if someone streams continuously (IP TV on a PC?), an SSD isn't ideal because of the constant writes. (Not my case, but worth mentioning for others. ;-)

AFAIK the 850 Pro SSD is not faster than the older 840 Pro SSD; SATA III at 6 Gb/s is the bottleneck. If I understood correctly, the 850 Pro 'just' has the new 3D NAND technology (cells stacked vertically as well, instead of only laid out in two dimensions).

OK, the following sounds crazy, but ...? The PCIe v3.0 x16 slot (#6) will 'always' run at v3.0 x8 speed, so I could add the following, which I found in the online shop mentioned above:

A controller card (up to 12 Gb/s): Adaptec 8805 2277500-R PCIe 3.0 x8 Low Profile retail - 2277500-R (currently €519.84). And a drive with a 2.0 ms (maybe the fastest?) average access time: 600GB Hitachi Ultrastar C15K600 512n ISE 64MB 2.5" SAS 12Gb/s - HUC156060CSS200 (currently €319.84). Together that's €839.68 for the HDD option. Wow. I don't know whether the shop really lists the correct and best specs.

I also saw that the ASUS Z9PE-D8 WS mobo supports ASUS SSD Caching II.
If I understood correctly, an SSD and an HDD are both connected to the mobo and software decides where to read/write? I don't really understand what's meant.

As I mentioned, I would like to go with 4x4GB per CPU, so 8x4GB per PC = 32GB. Is that not enough for a RAM drive? Is 8x8GB (64GB) the minimum? That's 'just' double the cost (~+€300) ... ;-) Still 'cheaper' than the crazy idea above. ;-)

From my experience: an AMD Phenom II X4 940 BE with 4x GTX 260 cards on WinXP x86, with 2x2GB (4GB) installed. The OS/BOINC saw and used just 2303.29MB (2.25GB). After a fresh start or reboot, under full load with 4 CPU and 4 GPU tasks, RAM usage was IIRC ~700MB, leaving just 1.57GB free (the machine was also dedicated to SETI). After ~2 weeks all system RAM was used up and the HDD was acting as system RAM ('emergency' paging, or something like it; the HDD LED stayed on continuously). So I decided to reboot at least every 2 weeks. If the same happens with this build, everything in the RAM drive would be lost at each reboot. So maybe back up to SSD every 30 minutes? Or will that scenario not happen with this build? Hints and tips are very welcome. Thanks.

After writing the above (and before looking at the forum again) I went back to the online shop and saw the following: there are also 12 Gb/s SSDs, but at ~€800. Oh wow, no, better, yes, I also saw: 400GB Intel 750 Series Add-In PCIe 3.0 x4 32Gb/s MLC - SSDPEDMW400G401. I'm speechless. Currently €416.39, which would be 'OK' for this performance, or not? But again, how long is the lifetime of this monster inside my monster? BTW, would this PCIe v3.0 x4 SSD card work in a PCIe x16 slot running at v3.0 x8 speed? I need to take a break now after this very interesting find, and a 'tranquilizer'. ;-)

EDIT: I don't know if all of this is understandable English, or whether it makes sense here and there. ;-) I need a break now. :-)
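As a rough guide to the x8-versus-x16 question above: PCIe 3.0 bandwidth scales linearly with lane count, so x8 is simply half of x16. A sketch, assuming ~985 MB/s of usable bandwidth per lane (an approximation; real-world throughput varies):

```python
# Rough usable PCIe 3.0 throughput per link width.
# Assumption: ~985 MB/s per lane after 128b/130b encoding overhead.
PER_LANE_MB_S = 985.0

def pcie3_bandwidth_mb_s(lanes: int) -> float:
    """Approximate usable PCIe 3.0 bandwidth for a given lane count."""
    return lanes * PER_LANE_MB_S

for lanes in (1, 4, 8, 16):
    print(f"x{lanes:<2} ~ {pcie3_bandwidth_mb_s(lanes):,.0f} MB/s")
```

Even at x8 (~7.9 GB/s) the link is far faster than the task data a crunching app actually moves, which is why the riser/slot-width question matters much less than it first appears.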
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380
Apart from at start-up, SETI uses very little disk access: one write per minute per task once it is loaded into memory. This is not going to stress any modern hard disk. When a task is loaded it has to be transferred from the disk to the processing unit's memory, ~300k of transfer for a MB task, or 8M for an AP. Given the run time of a task, that is a very insignificant amount of time (both are less than 1 second!). The result write on completion and the other file updates are likewise very short, a total write of, say, 50k: nothing to worry about. So stick with a simple setup (hard disk, no RAM drive) and it will be very stable. Remember that RAM drives need to have their data synchronised with a hard disk at both boot and shutdown, as well as at the other data transfer points, so all they do is defer writes to the disk, and in the event of a glitch they can get thrown badly out of sync... Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
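The transfer-time arithmetic above can be sketched as follows; the 100 MB/s disk speed is an assumed, conservative figure for a spinning disk:

```python
# Time to load one task from disk, for the sizes quoted above.
# Assumption: a conservative 100 MB/s sustained disk read speed.
DISK_MB_S = 100.0

def transfer_seconds(size_mb: float, disk_mb_s: float = DISK_MB_S) -> float:
    """Seconds needed to move a task of size_mb megabytes off the disk."""
    return size_mb / disk_mb_s

mb_task = transfer_seconds(0.3)   # ~300 KB multibeam task
ap_task = transfer_seconds(8.0)   # ~8 MB AstroPulse task
```

Both come out well under a second, which supports the "nothing to worry about" conclusion: against task run times measured in minutes or hours, disk speed is noise.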
ML1 · Joined: 25 Nov 01 · Posts: 20283 · Credit: 7,508,002 · RAC: 20
If you want a BOINC-crunch-only behemoth, then do away with Windows, bloatware and license restrictions and go for one of the small Linux distros... You can run a full Linux OS and BOINC entirely in RAM. No HDD/SSD I/O needed! (Or, more simply, take any of the mainstream Linux distros and just run from an HDD or SSD. The disk I/O is negligible in any case, and Linux's automatic file-system caching, which uses any spare RAM available, is very good.) No overhead from anti-virus, firewalls or "advertisements" software, and no Microsoft nags! Happy fast cool crunchin'! Martin See new freedom: Mageia Linux Take a look for yourself: Linux Format The Future is what We all make IT (GPLv3)
Bruce · Joined: 15 Mar 02 · Posts: 123 · Credit: 124,955,234 · RAC: 11
Dirk, you are right. I already have a pair of the R9 295X2 cards and I am about to pick up the other two. I had originally planned to use an ASUS Rampage V Extreme motherboard, but after your explanation of your board and reading its manual, I may use the same one. ASRock has one that may work the same way, but with no sound. Maybe one of the AMD gurus could chime in and explain whether CrossFire is truly needed with multiple cards. Thanks. Bruce
Link · Joined: 18 Sep 03 · Posts: 834 · Credit: 1,807,369 · RAC: 0
Or SSD + HDD? SSD. I wouldn't expect BOINC to write more than 2PB to the disk during the time you'll probably use that machine, even if you use it for a veeery long time. An HDD will very likely wear out sooner, since it's spinning all the time. And with this I/O performance you don't need to think much about a RAM drive. I searched the web and found the 2,000 Watt Super Flower Leadex 80 Plus Platinum 8Pack Edt. PSU, AFAIK available since January '15. I have a Super Flower in my desktop computer; I bought it about 10 years ago for my AthlonXP system after a GPU upgrade. It still works without any problems, even though with the current configuration the 25A 12V rail is pretty much maxed out when crunching on both CPU and GPU. I can't say one bad word about it so far. Here, by the way, is the homepage of Super Flower. According to the feature list, the 2000W PSU has overheat protection and a single 166.6A 12V rail. So it should be OK.
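The 2 PB figure is comfortable indeed. Using the earlier per-task numbers in this thread (one ~50 KB write per minute per task), yearly write volume is tiny; the 20-concurrent-tasks count below is an assumed example, not a measured value:

```python
# Back-of-envelope SSD wear estimate. All inputs are assumptions:
# n_tasks concurrent tasks, each writing ~50 KB once per minute.
def bytes_written_per_year(n_tasks: int,
                           kb_per_write: float = 50.0,
                           writes_per_min: float = 1.0) -> float:
    minutes_per_year = 365 * 24 * 60
    return n_tasks * kb_per_write * 1024 * writes_per_min * minutes_per_year

tb_per_year = bytes_written_per_year(20) / 1e12  # ~0.54 TB/year
```

At roughly half a terabyte of writes per year, even a modest consumer SSD's endurance rating would outlast the machine by a wide margin.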
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380
Neither CrossFire nor SLI is required by SETI. Indeed, some have reported that using SLI can reduce the performance of multi-GPU systems. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 65745 · Credit: 55,293,173 · RAC: 49
The biggest PSU size that I know of here is 1600W. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004
The biggest PSU size that I know of here is 1600W. That's because here in the US we are dealing with the limitation of standard 120V 15A branch circuits. If one wires a special outlet for 240V, they can take advantage of the higher-output PSUs that do exist but are mainly sold overseas. Years ago I ran a 240V 50A subfeed box to the crunching den, but it is split into standard 120V 15A branch circuits, as I have no need to power any single rig with more wattage than that. If the situation ever arose, I could combine two of the 120V 15A circuits into a 240V breaker. "Freedom is just Chaos, with better lighting." Alan Dean Foster
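The branch-circuit limits discussed here can be put into numbers. The sketch assumes the NEC's 80% derating for continuous loads, which a 24/7 cruncher effectively is:

```python
# Continuous-load budget for a branch circuit.
# Assumption: the NEC 80% rule applies (continuous load > 3 hours).
def continuous_watts(volts: float, amps: float, derate: float = 0.8) -> float:
    """Maximum sustained wattage a circuit should carry."""
    return volts * amps * derate

us_15a = continuous_watts(120, 15)   # standard US outlet
us_20a = continuous_watts(120, 20)   # upgraded US 20A circuit
eu_16a = continuous_watts(230, 16)   # typical European outlet
```

This shows why 1600W PSUs are about the ceiling for a single standard US outlet (1440W continuous budget), while a 230V/16A European circuit can comfortably feed a 2000W unit.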
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380
How I pity the deprived colonials ;-) (I just found that the input breaker to my house is rated at 150A, so I guess I can up my power demand somewhat....) Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004
How I pity the deprived colonials ;-) LOL... just make sure the branch circuits are up to the task. I had trouble with that years ago and fried some things... That's what led to the subfeed box upgrade. "Freedom is just Chaos, with better lighting." Alan Dean Foster
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 65745 · Credit: 55,293,173 · RAC: 49
The biggest PSU size that I know of here is 1600W. 240V over here is two 120V legs; overseas, 240V is a single 240V circuit. I don't think what works only on 240V AC over there would work on 240V AC here. Of course, one could rewire the relevant circuit where a PC sits to a 120V 20A circuit; I have two of those here. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
kittyman · Joined: 9 Jul 00 · Posts: 51468 · Credit: 1,018,363,574 · RAC: 1,004
240VAC over there is the same as 240VAC over here, except that some countries run at 50Hz instead of our 60Hz. Most PSUs are not bothered by the frequency difference. And yes, some 120V circuits here can be upgraded to 20A by replacing the breaker and upgrading the wire from 14ga to 12ga. 20A over 14ga cable is not recommended and does not meet code. "Freedom is just Chaos, with better lighting." Alan Dean Foster
zoom3+1=4 · Joined: 30 Nov 03 · Posts: 65745 · Credit: 55,293,173 · RAC: 49
240VAC over there is the same as 240VAC over here, except that some countries run at 50Hz instead of our 60Hz. Most PSUs are not bothered by the frequency difference. My 20A circuits are both 12ga, but then I paid to have them wired up. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
J. Mileski · Joined: 9 Jun 02 · Posts: 632 · Credit: 172,116,532 · RAC: 572
240VAC over there is the same as 240VAC over here, except that some countries run at 50Hz instead of our 60Hz. Most PSUs are not bothered by the frequency difference. 240V over there is not quite the same as over here. Their 240V is 1 hot and 1 neutral; here it is 2 hots. I'm not sure whether their domestic electrical transformers are center-tapped; if they are, it would give them 240V/480V. The frequency doesn't really make much difference, except in electric motors: a 1750 RPM motor at 60 hertz will only turn about 1458.3 RPM on 50 hertz.
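The motor-speed figure works out as follows. This is the synchronous approximation (slip is ignored), which is why it matches the quoted 1458.3 RPM:

```python
# An induction motor's speed scales with supply frequency.
# Simplification: slip is ignored, so speed is proportional to Hz.
def motor_rpm(rated_rpm: float, rated_hz: float, supply_hz: float) -> float:
    """Approximate motor speed when run at a different supply frequency."""
    return rated_rpm * supply_hz / rated_hz

rpm_at_50hz = motor_rpm(1750, 60, 50)  # ~1458.3 RPM
```

PSU fans and hard-drive spindles are driven electronically rather than straight off the mains, which is one reason the 50/60Hz difference doesn't bother computer power supplies.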
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380
Well, yes and no... Most domestic installations are single phase, live plus neutral (which is meant to be at earth potential...), 230-ish to neutral. However, many non-domestic installations are three phase ("230" phase to neutral, or "440" phase to phase). A typical domestic installation will have a 100A feeder, split into a couple of 30A rings, plus lighting, cooker and other spurs. Light commercial is 30A per phase, and it goes up from there. Many village halls are rated at 100-200A per phase (great for lighting rigs), and light industrial premises tend to start at about 100A per phase. No worries about the distribution within the house, as that is all rated at 30A minimum; I could always run a couple of 60/100A spurs if needed (just think of the cruncher farm that could feed). Domestic installations tend not to have input transformers, but they are becoming more common on light commercial and industrial units. Bob Smith Member of Seti PIPPS (Pluto is a Planet Protest Society) Somewhere in the (un)known Universe?
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
Bruce wrote: Dirk, you are right ... 4x R9 295X2, you make me jealous! ;-) I'm happy that I could help you. - - - - - - - - - - @ all: I didn't start this thread to say 'hey people, look what I'm going to do, be envious!' I thought it would be nice to get input and recommendations from others. The thread could also serve as an information source if other members want to go the same way. Or maybe it will inspire other members to do so? ;-) - - - - - - - - - - Do your R9 295X2 cards have AMD's reference design: a 2-slot-wide card with water cooling and 2x PCIe 6+2-pin (8-pin) power connectors? From what I've read on the internet (you may know this already ;-): http://www.eteknix.com/amd-radeon-r9-295x2-8gb-graphics-card-review (1 year ago): "AMD's second main problem, power consumption, has been circumvented by pushing beyond the limits of the ATX power delivery specification. Typically we'd see graphics cards always follow the golden rule of power delivery: 75W through the PCIe bus, 75W through the 6 pin and 150W through the 8 pin. In effect this means AMD's R9 295X2 should be a 375W card because it has dual 8 pins. However, this is a 500W card and AMD is relying on people having power supplies capable of supplying 28 Amps to each of the 8 pin connectors. I'm not sure why AMD didn't just opt for three 8 pins as I've seen a lot of other high-end GPU solutions use this. The end result is that you will need at least a 750W power supply of a really high quality if you want to be able to use this graphics card." If you want both cards to get their power from one PSU, which is what I would do, then use at least 1,500W with at least a 12V/~85A single rail (more is always better ;-): 1,000W / 12V = 83.33A. Example: 4096MB (x2) PowerColor Radeon R9 290X2 (R9 295X2) Devil 13 - 21234-00-40G. This card has 4x PCIe 6+2-pin (8-pin) connectors. AFAIK it's also a 500W card, with a 3-fan heatsink, but it's 3 slots wide.
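The rail-sizing arithmetic (1,000W / 12V = 83.33A) can be sketched like this; the per-card wattages are the figures quoted in the thread, and the headroom factor is an assumption, not a specification:

```python
# 12V-rail sizing for a multi-GPU build.
# Assumption: the cards draw essentially all their power from 12V,
# and ~1.2x headroom covers CPU, fans, and PSU derating.
def rail_amps(total_watts: float, volts: float = 12.0) -> float:
    """Amps a single 12V rail must supply for a given load."""
    return total_watts / volts

two_295x2 = rail_amps(2 * 500)          # ~83.3 A for the GPUs alone
psu_rating = rail_amps(2 * 500 * 1.2)   # 100 A with assumed headroom
```

This is why a 2x R9 295X2 build points at PSUs in the 1,500W class with a single high-amperage 12V rail rather than several small split rails.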
- - - - - - - - - - @ all: http://www.tomshardware.de/ssd-ram-ram-disk-ramdrive-vergleich,testberichte-241438.html (27 November 2013). I used Google Translate, but it gave me even more terrible English than I write ;-), so I tried to improve it: "Besides SuperSpeed RamDisk Plus (twice as fast as the following tool; my note), Primo Ramdisk Ultimate is the second commercial tool that is consistently recommended and convinces with its wide range of settings. SoftPerfect RAM Disk and StarWind RAM Disk do not offer quite as many features, but are free and compete well in terms of speed. We must advise against ImDisk Virtual Disk Driver, the third free tool; when it comes to speed it is simply too slow compared with the other programs." I understood it like this: these tools create a virtual HDD, you can install software on it, before you switch off the PC the contents of the RAM disk are backed up to the real HDD, and when the PC starts again they are loaded back into the RAM disk automatically. So I could install both BOINC folders there, and everything would stay there 'forever'. For this monster build at least: how big will the folders/files be, so that I can calculate the needed amount of system RAM (how much I must buy at minimum)? If this works the way I think, then a regular HDD would be fine (no SSD or other 'balderdash' needed ;-). Hints and tips are very welcome. Thanks. BTW, AFAIK in Germany, in a modern house, a wall plug delivers 230V/16A/3680W. Of course, you can't use every wall plug in one room at those specs; you must check the fuse for the room.
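A minimal sketch of the "back up the RAM disk every 30 minutes" idea. The paths are hypothetical, and the commercial RAM-disk tools named above ship their own persistence, so this is only an illustration of the concept, not a replacement for them:

```python
# Periodic RAM-disk -> real-disk backup, a minimal sketch.
# Assumptions: src is the RAM-disk folder (e.g. the BOINC data dir),
# dst is a folder on the HDD/SSD; both paths are hypothetical.
import shutil
import time
from pathlib import Path

def sync_once(src: Path, dst: Path) -> None:
    """Mirror the RAM-disk folder to the real disk (full-copy strategy)."""
    if dst.exists():
        shutil.rmtree(dst)       # drop the previous snapshot
    shutil.copytree(src, dst)    # copy the whole tree

def run(src: Path, dst: Path, interval_s: int = 30 * 60) -> None:
    """Back up every interval_s seconds (default: 30 minutes)."""
    while True:
        sync_once(src, dst)
        time.sleep(interval_s)
```

A full copy every 30 minutes is crude but safe; a real setup would copy to a temporary folder and rename, so a crash mid-copy can't destroy the last good snapshot.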
woohoo · Joined: 30 Oct 13 · Posts: 972 · Credit: 165,671,404 · RAC: 5
There hasn't been a lot of AP split recently, so that would give the MB CUDA apps the edge. The ASUS ROG MARS760-4GD5 (dual GTX 760, 4GB GDDR5, PCIe 3.0, SLI support) looks interesting; running two of them would use 680W and give you four GPUs. I've run four of the ASUS R9290X-DC2OC-4GD5 (Radeon R9 290X, 4GB GDDR5, PCIe 3.0, CrossFireX support) in one computer. That's supposed to be 1160W, but the Corsair AX1200i 1200W (80 PLUS Platinum, fully modular) handled it fine. The first problem I ran into was that the cards were so close together that they would overheat and throttle down tremendously after exceeding 95°C. Just for fun I ordered an x1 riser, and that worked, but it was a bottleneck for some projects. So I tried a 50cm x16 ribbon riser, and I think that worked. I started having problems after getting a second 50cm x16 ribbon riser. Using the risers fixed the heat problem, although a computer case with two GPUs hanging out of it looks a bit strange. But most of the time when I restarted the computer, only two of the four GPUs would be detected, so I averaged five tries for every boot. I don't know what the problem was: a bad motherboard, a bad riser, or bad driver/OS support. So, for something different, I wanted to try five GPUs: three 290Xs and one 295X2. That's 1370W, so I got an EVGA 1600W T2 (80 PLUS Titanium, fully modular). Using one riser, it looked like four GPUs would work with no problem. But when the second riser went in, not all the cards were detected again, and this was on a different motherboard. So now I just have six GPUs across five video cards in three computers.
My work computer is a Dell XPS 8700, but in addition to the 290X GPU, I changed the PSU to a Corsair VX550, maxed the RAM to 32GB, added another DVD burner I hardly use, added two Samsung SSDs and changed the 1TB hard drive to a 2TB hybrid drive. The AX1200i is in another computer driving two GPUs, but it doesn't even use 500W; the fan hardly ever turns on, so it's essentially silent. The 1600 T2 in the last computer only drives three GPUs. Its fan is noisy, so I activated the ECO switch, which allows the fan to shut off under lower loads. Apparently my loads aren't that high, because I can never hear the fan (or I'm deaf). Either that or the main fan of the 295X2 is drowning everything else out.
Sutaru Tsureku · Joined: 6 Apr 07 · Posts: 7105 · Credit: 147,663,825 · RAC: 5
Did you use a powered x16 riser with a Molex power connector? And was that in both the one-riser-per-PC and the two-risers-per-PC configurations?
woohoo · Joined: 30 Oct 13 · Posts: 972 · Credit: 165,671,404 · RAC: 5
The one x1 riser that I got was powered, and I didn't have any problems with it except that I didn't want to take the performance hit going from x16 to x1. The two x16 ribbons that I got were not powered, and they weren't cheap either. I figured that if they were just straight extenders, why would they need to be powered; would 75W of PCIe bus power attenuate over just 50cm? The reason I wanted 50cm ribbons is that I wanted enough length to put a video card on top of the case or on a box outside the case. There are x16 risers that are powered, but they're not very long, more like 19cm. To use one of those I would need to design a custom rack and use zip ties, or use string and suspend my cards from the ceiling like mobiles. The first setup I tried was an Intel 4790K / Corsair AX1200i / Asus Maximus VI Extreme. The second setup was the EVGA 1600 T2 / EVGA Z97 Classified. I thought about all the things that could have been wrong: maybe the x16 risers needed to be powered, maybe I should have used an AMD CPU/MB, maybe I should have used NVIDIA GPUs, maybe I should have used four risers instead of two, maybe I should have run Linux, or maybe I should have picked a CPU/MB with native support for 40 lanes of PCIe. One way to redo everything would be to run dual 295X2s. The price is much lower than when they were first released, and the one I have runs very cool. Two of them would need 100A total, and adding any more creates problems where you have to add more power supplies and figure out where to mount all the extra radiators. I looked at the Thermaltake X9 when it was announced and thought: what if I stacked two of them, could I run ribbon from the bottom case to the top case? But the ribbon has to get past the top panel of the bottom case, the bottom panel of the top case, and then the motherboard tray of the top case. Some of these pieces might be removable, but not all of them are, and I know they don't put gaping holes in their cases by default; plus, I'm not a fan of cutting holes in metal.
Brent Norman · Joined: 1 Dec 99 · Posts: 2786 · Credit: 685,657,289 · RAC: 835
I'm wondering if you could put a powered riser on the end of your 50cm risers. You would definitely need some lengthy Molex extenders, though.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.