2012 Ultimate Cruncher "PC"?

Message boards : Number crunching : 2012 Ultimate Cruncher "PC"?
AndyJ
Joined: 17 Aug 02
Posts: 248
Credit: 27,380,797
RAC: 0
United Kingdom
Message 1182515 - Posted: 3 Jan 2012, 1:12:12 UTC - in response to Message 1182503.  

EVGA SR2 £500
2x Xeon X5680 £2200
48 GB of the fastest RAM available £1000
A couple of SSDs £600
That little box AndyJ pointed out.
(hang on a minute, that will only use 1 PCIe slot,
and the board has 7)
4 of those little boxes with EVGA Classified Hydro Coppers £16000
Small power station.
OOPS: case, odds and ends, things you forgot till you were halfway through the build and hadn't budgeted for (etc.) £1000

Total £21000


Message From Server: You haven't got a hope in Hell of getting enough tasks for THAT. Should have put a downpayment on a Porsche. :-)

Regards,

A
john3760
Joined: 9 Feb 11
Posts: 334
Credit: 3,400,979
RAC: 0
United Kingdom
Message 1182519 - Posted: 3 Jan 2012, 1:30:38 UTC - in response to Message 1182515.  

He did say ultimate. Sorry I pinched your idea (x4). ;)

It would be a bit starved, I agree!

john3760

Horacio

Joined: 14 Jan 00
Posts: 536
Credit: 75,967,266
RAC: 0
Argentina
Message 1182710 - Posted: 4 Jan 2012, 4:54:54 UTC

This is what I call ultimate, at least for now and maybe for a couple of months:
http://www.digitalstormonline.com/comploadsub-zero-tec.asp?id=615827
It can be customized to change the 580s for 590s...

Anyway, if I could afford something like that, I'd have to move into the SETI lab and plug it directly into the download servers, or it would always be starving (or crunching something else...)


Greg W Jones

Joined: 28 Nov 02
Posts: 29
Credit: 25,473,210
RAC: 0
United States
Message 1183353 - Posted: 7 Jan 2012, 0:59:03 UTC - in response to Message 1182710.  

This is just the topic I have been wanting to start.

I am planning a new build for this spring. I want to crunch mostly SETI, but I subscribe to several other GPU-enabled BOINC projects. So this build is for a dedicated BOINC cruncher. General usage and gaming performance are not a concern.

My thought is to have a single CPU socket and as many GPUs in a single case as possible. To keep the noise down and manage the heat, I will watercool the whole rig. I don't want to mess with Linux, so it will be a Windows OS.

It will have a single HDD, not an SSD. BOINC crunching should not be disk-bound, and I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

I am thinking of an Intel LGA 2011 based motherboard with 5-7 PCIe x16 slots. Current candidates are:
ASUS Rampage IV Extreme http://www.newegg.com/Product/Product.aspx?Item=N82E16813131803
EVGA X79 Classified http://www.newegg.com/Product/Product.aspx?Item=N82E16813188117

It is not a problem putting 4 GPU cards on one motherboard. My question is, since the GPUs will be watercooled, can I fit more? If I choose watercooled GPUs that are only a single slot, can the water loops be connected in such a tight space?

My current GPU candidate is:
EVGA GTX580 Hydro Copper 3GB http://www.newegg.com/Product/Product.aspx?Item=N82E16814130665

I know a new round of GPUs is just around the corner. But the concept is the same. I will use the latest flagship Nvidia GPU, and maybe even the dual-chip GPUs, as long as I can still get a single-slot watercooled version.

The single CPU just needs to be able to keep the GPUs busy. So, I am thinking of an i7-3930K http://www.newegg.com/Product/Product.aspx?Item=N82E16819116492 - Overclocked and watercooled of course.

Since the GPUs will be doing most of the processing, I don't think I need to maximize the RAM speed or capacity.
I am thinking of 16GB of DDR3-1866 http://www.newegg.com/Product/Product.aspx?Item=N82E16820233253

All of this hardware will need a lot of power. So, I would use two power supplies.

I am still looking for the right case for all of this. I want a case that was designed with watercooling in mind.
My current candidate is: XSPC H2 Tower Case http://www.performance-pcs.com/catalog/index.php?main_page=product_info&products_id=30440

My plan is to build this out over time. I am thinking of Phase I having all the basic parts, one GPU card and one PSU. I would then add another GPU every few months. I would add the second PSU with GPU #3.

Any feedback on this approach, the questions I raised, or my assumptions would be appreciated.

Regards,

Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 14009
Credit: 208,696,464
RAC: 304
Australia
Message 1183357 - Posted: 7 Jan 2012, 1:14:01 UTC - in response to Message 1183353.  

I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

Maybe in 8-15 years.

Grant
Darwin NT
Greg W Jones

Joined: 28 Nov 02
Posts: 29
Credit: 25,473,210
RAC: 0
United States
Message 1183363 - Posted: 7 Jan 2012, 1:40:04 UTC - in response to Message 1183359.  

VW Bobier

Your first reply to this thread stated that you are running with six GTX295 single slot watercooled cards.

How, exactly, did you connect the water loops between the GPUs?

Thank you in advance,
Greg

Greg W Jones

Joined: 28 Nov 02
Posts: 29
Credit: 25,473,210
RAC: 0
United States
Message 1183396 - Posted: 7 Jan 2012, 3:22:59 UTC - in response to Message 1183368.  

I found the specifications of that fitting over at sidewinder.com and it looks like it would need 10mm between the two GPU waterblocks.

The spacing between PCIe slots is 20.32mm (0.8 inches) and the thickness of the EVGA Hydro Copper 2 GTX580 waterblock is about 15mm. That leaves only 5.32mm between the two GPU waterblocks.

So, unless I'm missing something, it looks like this fitting will not work for these GPUs.

But, now I know that I only have about 5mm to work with to connect these two watercooled GPUs together. That's going to be tough.
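The clearance arithmetic above can be sketched as a quick check. The slot pitch and block thickness are the figures quoted in this post; the fitting height is the quoted sidewinder spec, so treat all three as approximate:

```python
# Clearance check between two adjacent single-slot watercooled GPUs.
# Figures are taken from the post above and are approximate.
SLOT_PITCH_MM = 20.32        # standard PCIe slot spacing (0.8 in)
BLOCK_THICKNESS_MM = 15.0    # approx. EVGA Hydro Copper 2 GTX 580 block
FITTING_HEIGHT_MM = 10.0     # quoted space needed by the fitting

clearance = SLOT_PITCH_MM - BLOCK_THICKNESS_MM
print(f"Clearance between blocks: {clearance:.2f} mm")   # 5.32 mm
print("Fitting fits:", FITTING_HEIGHT_MM <= clearance)   # False
```

With only about 5.32mm of gap, any connector needing 10mm clearly cannot fit, which matches the conclusion above.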

If anyone has done this, please chime in.

Regards,
Greg

ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 21968
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1183479 - Posted: 7 Jan 2012, 13:46:45 UTC - in response to Message 1183357.  
Last modified: 7 Jan 2012, 13:48:40 UTC

I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

Maybe in 8-15 years.

That depends on (other) usage and utilisation... With data misalignment causing write amplification, an online review shows an SSD likely wearing out in less than a year. Whereas with sympathetic use, they should suffer obsolescence long before suffering wear-out.

For Boinc, just increase the write-to-disk interval by x10 or x100 and use a UPS!

Or use a ramdisk (rsync) backed up to the SSD.

Happy fast crunchin',
Martin
See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Team kizb

Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1183511 - Posted: 7 Jan 2012, 17:02:24 UTC
Last modified: 7 Jan 2012, 17:11:32 UTC

I'd look into the "EVGA 4-Way Waterblock Bridge for GTX 580 Classified"; it looks like a good solution, and if it's anything like my EK 3-way block you can just buy plugs if you're not running all four cards yet. I'll only be running 2 cards on my 3-way with the center ports blocked.


My Computers:
█ Blue Offline
█ Green Offline
█ Red Offline
Team kizb

Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1183523 - Posted: 7 Jan 2012, 17:35:52 UTC - in response to Message 1183516.  

@ VW Bobier, yeah, it might not work too well for your setup, but it should be perfect for the OP. Nice clean solution.
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 14009
Credit: 208,696,464
RAC: 304
Australia
Message 1183613 - Posted: 7 Jan 2012, 23:31:17 UTC - in response to Message 1183479.  

I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

Maybe in 8-15 years.

That depends on (other) usage and utilisation... With data misalignment causing write amplification, an online review shows an SSD likely wearing out in less than a year.

Can you remember the web site?
A couple of articles I've read have been on Anandtech, but I can't find the one I'm thinking of at the moment. Basically, as long as you've got more than 40% free space on the SSD and aren't running it on a database server, you should expect a minimum life of 5 years even on a small (64GB) capacity SSD. The larger the capacity, and the greater the free space, the longer the life expectancy.
Wiggo
Joined: 24 Jan 00
Posts: 38589
Credit: 261,360,520
RAC: 489
Australia
Message 1183622 - Posted: 8 Jan 2012, 0:07:31 UTC - in response to Message 1183613.  

I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

Maybe in 8-15 years.

That depends on (other) usage and utilisation... With data misalignment causing write amplification, an online review shows an SSD likely wearing out in less than a year.

Can you remember the web site?
A couple of articles I've read have been on Anandtech, but I can't find the one I'm thinking of at the moment. Basically, as long as you've got more than 40% free space on the SSD and aren't running it on a database server, you should expect a minimum life of 5 years even on a small (64GB) capacity SSD. The larger the capacity, and the greater the free space, the longer the life expectancy.

I was reading an article early last month (I can't remember where atm; it may come back to me), but it was very comprehensive, and in the end it came to the conclusion that presently, if you can get over 2.5 years out of one, you are doing very well. Also, most of the failures with SSDs (several big brands were involved in this article) are not related to the over-write issue at all; the drives just plain up and die for no reason.

Cheers.

SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6666
Credit: 121,090,076
RAC: 0
United States
Message 1183625 - Posted: 8 Jan 2012, 0:19:49 UTC

What strikes me is that backup becomes even more important with an SSD. If they fail, there is no way to get data off them, while if an HDD fails, there is at least a good chance of recovering data from it.

Steve
Warning, addicted to SETI crunching!
Crunching as a member of GPU Users Group.
GPUUG Website
SciManStev
Volunteer tester
Joined: 20 Jun 99
Posts: 6666
Credit: 121,090,076
RAC: 0
United States
Message 1183635 - Posted: 8 Jan 2012, 0:38:24 UTC - in response to Message 1183628.  

What strikes me is that backup becomes even more important with an SSD. If they fail, there is no way to get data off them, while if an HDD fails, there is at least a good chance of recovering data from it.

Steve

Then RAID 1 comes in handy. Look at my last post; the link talks about reliability and whatnot.

I agree. RAID 1 takes care of the backup. I did try RAID 0 and RAID 1 on my home system with HDDs. The RAID 0 was fast but scared me: even though I had everything backed up, if either drive failed it meant over a month of reinstalling everything. The RAID 1 was a different matter. I couldn't go a month without one of the drives failing and having to replace it; mostly I was lucky to get two weeks. I may well have been doing something wrong, but I could no longer afford the cost of buying new drives, so I bought 4 single-terabyte SATA drives and loaded up my system. It has been excellent for 3 years. (There are numerous fans constantly cooling my hard drives, in addition to everything else. Heat was the enemy.)

Steve
ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 21968
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1183792 - Posted: 8 Jan 2012, 16:06:35 UTC - in response to Message 1183613.  
Last modified: 8 Jan 2012, 16:11:51 UTC

I understand that SSDs will wear out with the constant writing and deleting of those thousands of WUs.

Maybe in 8-15 years.

That depends on (other) usage and utilisation... With data misalignment causing write amplification, an online review shows an SSD likely wearing out in less than a year. ...

Can you remember the web site?

See:
Intel's SSD 710: Making Enterprise Storage More Affordable?
... Initially, the company's warranty policy on the SSD 710 did worry us because it included some fairly non-standard verbiage. Most of Intel's SSDs have a flat three-year warranty (except for the SSD 320, which is five years), but the terms for the 710 are three years or when the media wear-out indicator (E9) reaches 1, whichever comes first. With a bit of clever math, however, we found that it would take 4.2 years to consume all of the 200 GB SSD 710's P/E cycles, assuming a 100% 4 KB random write workload, 24x7, at a queue depth of 32. That's about 880 GB per day of data, by the way. Compare that to a 300 GB SSD 320, which would be worn out in a year. ...

HOWEVER! Those big numbers come down further when you allow for the x4 (approx.) write-amplification shown in their table on the previous page. (Suggesting a 16kByte page size for the SSD?)

Also note that x86 architecture PCs (both Windows and Linux) typically write out data in 4kByte chunks. That fits nicely if aligned to the SSD page boundary. Otherwise you get yet more write amplification... Worst case is if you are displaced by 512 bytes due to a very old MS-DOS 'feature' that was never corrected, and you then typically get a further x2 or x4 (or worse) write amplification.
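The endurance arithmetic quoted above can be sketched roughly as follows. Note the P/E cycle count here is an assumption back-solved to match the review's ~4.2-year figure, not a published spec; the point is only how write amplification divides into the same write budget:

```python
# Back-of-envelope SSD endurance estimate.
# pe_cycles is an assumed figure chosen so the WA=1 case lands near
# the ~4.2 years quoted in the review; real drives vary widely.
def years_to_wearout(capacity_gb, pe_cycles, host_gb_per_day, wa=1.0):
    total_writable_gb = capacity_gb * pe_cycles   # raw NAND write budget
    nand_gb_per_day = host_gb_per_day * wa        # host writes amplified by WA
    return total_writable_gb / nand_gb_per_day / 365

# The review's scenario: 200 GB drive, ~880 GB/day of host writes.
print(years_to_wearout(200, 6750, 880))           # ~4.2 years at WA = 1
print(years_to_wearout(200, 6750, 880, wa=4))     # ~1.05 years at WA = 4
```

The x4 write amplification alone turns a comfortable multi-year figure into roughly a year, which is the point being made here.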

Use RAID with unsympathetic numbers and/or misalignment and...

That all conspires to bring the numbers down to 'not long'.

I have had a RAID1 mirror destroy one of a pair of devices in just 3 months, and the second device of that pair is very nearly worn out.


A couple of articles I've read have been on Anandtech, but I can't find the one I'm thinking of at the moment. Basically, as long as you've got more than 40% free space on the SSD and aren't running it on a database server, you should expect a minimum life of 5 years even on a small (64GB) capacity SSD. The larger the capacity, and the greater the free space, the longer the life expectancy.


Look at what numbers and assumptions are used. The figures tend to be 'cooked' to give a life expectancy of 3 to 5 years of what Marketing claims to be 'extreme' use...


SSDs are a very good idea and should last longer than your computer, but sympathetic use is important (at least avoid pathological data misalignment).

Happy fast crunchin',
Martin
Dave

Joined: 29 Mar 02
Posts: 778
Credit: 25,001,396
RAC: 0
United Kingdom
Message 1183795 - Posted: 8 Jan 2012, 16:15:30 UTC

While you can set boinc write-to-disk to whatever you like, say 3600 secs from the default 600, there is still regular every-minute updating of the daily_xfer_history, client_state & client_state_prev xmls. How much of a concern is this therefore?
ML1
Volunteer moderator
Volunteer tester
Joined: 25 Nov 01
Posts: 21968
Credit: 7,508,002
RAC: 20
United Kingdom
Message 1183830 - Posted: 8 Jan 2012, 18:34:06 UTC - in response to Message 1183795.  

While you can set boinc write-to-disk to whatever you like, say 3600 secs from the default 600, there is still regular every-minute updating of the daily_xfer_history, client_state & client_state_prev xmls. How much of a concern is this therefore?

I had wondered about the continued disk activity... Somewhat 'non-optimal' if you ever hope for a laptop HDD to spin down!

To give a general answer:

IF:

You are using an SSD (and NOT a "USB memory stick");
And you have your filesystem partition aligned to a page boundary for the SSD;
And you are at no more than say 80% utilisation on the SSD;
And you are NOT using RAID;

Then you should have no worries whatsoever.


If you are using Win7 or later, or a 'recent' Linux distribution, then you should have no worries about page boundary alignment. I believe a 1MByte boundary size is assumed which should fit well with most flash devices.
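A minimal way to sanity-check that alignment, assuming 512-byte logical sectors (the common case; 4Kn drives differ). On Linux, the start sector of a partition such as sda1 can be read from /sys/block/sda/sda1/start:

```python
# Check whether a partition's start sector sits on a 1 MiB boundary,
# assuming 512-byte logical sectors.
def is_aligned(start_sector, sector_bytes=512, boundary_bytes=1024 * 1024):
    """True if the partition starts on the given byte boundary."""
    return (start_sector * sector_bytes) % boundary_bytes == 0

print(is_aligned(2048))   # True  -- the modern default start sector (1 MiB)
print(is_aligned(63))     # False -- the old MS-DOS-era CHS offset
```

A start sector of 63 is exactly the 512-byte-displaced 'feature' mentioned earlier; 2048 (1 MiB in) is what recent partitioning tools default to.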

Note however that USB memory sticks can have very poor wear levelling and can have erase blocks that are 4MBytes or more to give a huge write amplification!


For the sake of speed and paranoia, I use a 'ramdisk' for Boinc on my diskless or SSD-based systems. All on Linux of course. (Also, the Linux ext4 filesystem can be configured to cache all disk activity for a period so as to minimise physical disk activity.)
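For a rough sense of how much those per-minute state-file rewrites actually write, a back-of-envelope sketch. The file size is an assumption for illustration; check your own BOINC data directory:

```python
# Rough daily write volume from BOINC's per-minute state-file rewrites.
# STATE_FILE_KB is an assumed size; a busy client_state.xml is typically
# in the hundreds of kilobytes.
STATE_FILE_KB = 500          # assumed size of client_state.xml
COPIES_PER_WRITE = 2         # client_state.xml + client_state_prev.xml
WRITES_PER_DAY = 24 * 60     # one rewrite per minute

gb_per_day = STATE_FILE_KB * COPIES_PER_WRITE * WRITES_PER_DAY / (1024 * 1024)
print(f"~{gb_per_day:.2f} GB/day")   # ~1.37 GB/day
```

A couple of GB/day of host writes is tiny next to the hundreds of GB/day in the endurance figures quoted earlier, though many small scattered writes on a misaligned partition will suffer the write amplification discussed above.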


Happy fast crunchin',
Martin

Team kizb

Joined: 8 Mar 01
Posts: 219
Credit: 3,709,162
RAC: 0
Germany
Message 1183837 - Posted: 8 Jan 2012, 18:59:50 UTC

@ Greg W Jones

Let us know when you get this new crunching beast up and running; it should really be able to tear things up, assuming you can keep it fed.
Greg W Jones

Joined: 28 Nov 02
Posts: 29
Credit: 25,473,210
RAC: 0
United States
Message 1183881 - Posted: 8 Jan 2012, 22:12:19 UTC - in response to Message 1183837.  

I am just in the planning stage right now for that new super BOINC build. But I think I am leaning towards a more realistic approach with "just" 4 GPUs with a dead slot between them. It will be built out over time, starting this spring.

Although, check out this system that someone built that has 7 GTX580s! Water cooled, of course.

He's running Linux, and somewhere he stated that Windows would not boot with all 7 GPUs installed. Does anyone know if Windows 7 has a limit on the number of GPUs?

Cheers,
Greg

Greg W Jones

Joined: 28 Nov 02
Posts: 29
Credit: 25,473,210
RAC: 0
United States
Message 1183913 - Posted: 9 Jan 2012, 1:07:35 UTC - in response to Message 1183894.  

How will you get 12 GPUs in one system? Six dual chip GPU cards?
©2026 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.