It Lives, It Lives !!



Message boards : Number crunching : It Lives, It Lives !!

Previous · 1 · 2 · 3 · 4
Author Message
Grant (SSSF)
Joined: 19 Aug 99
Posts: 5685
Credit: 56,144,564
RAC: 49,774
Australia
Message 1229631 - Posted: 9 May 2012, 18:14:17 UTC - in response to Message 1229627.


In that case I'd go with 5 GTX690s. One 690 uses 100W less than 2 680s.
____________
Grant
Darwin NT.

red-ray
Joined: 24 Jun 99
Posts: 308
Credit: 9,024,991
RAC: 0
United Kingdom
Message 1229636 - Posted: 9 May 2012, 18:24:13 UTC - in response to Message 1229631.
Last modified: 9 May 2012, 18:30:45 UTC

In that case I'd go with 5 GTX690s. One 690 uses 100W less than 2 680s.

The power is less as the 690 clocks are slower so the RAC on 2 x 680 will be higher than 1 x 690.

jason_gee
Volunteer developer
Volunteer tester
Joined: 24 Nov 06
Posts: 4920
Credit: 72,609,059
RAC: 1,462
Australia
Message 1229648 - Posted: 9 May 2012, 18:33:02 UTC - in response to Message 1229636.

In that case I'd go with 5 GTX690s. One 690 uses 100W less than 2 680s.

The power is less as the 690 clocks are slower so the RAC on 2 x 680 will be higher than 1 x 690.


~10% lower base clock. They're supposedly using low-leakage, top-binned parts for the 690s, so by rights the boost clocks should wind themselves up further before temps & dissipation cap out.
____________
"It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is the most adaptable to change."
Charles Darwin

-BeNt-
Joined: 17 Oct 99
Posts: 1234
Credit: 10,116,112
RAC: 0
United States
Message 1229654 - Posted: 9 May 2012, 18:40:25 UTC - in response to Message 1229627.
Last modified: 9 May 2012, 18:48:23 UTC


No, 10 x 690 is 20 GPUs so the PCI memory limit applies.



If you are using a 64-bit server that can address more than 4GB of memory, and a motherboard/controller whose BIOS supports MMIO limits above 4GB, this can be pushed out to the maximum of the system's address space on some machines. Of course, most of the systems that do this are reserved for business use due to expense (see IBM servers). This has been around for a while, especially in servers that need to run very large RAID arrays, databases, etc. with many adapters, or machines that use GPUs for supercomputer grids.

Needless to say, you would probably be hard pressed, if it's possible at all, to find an affordable board for a home server that could address more than that. Even then it's not really a 4GB physical threshold; it's how much of the upper memory address space the memory controller will set aside for system devices in the virtual space. At least, that's my understanding of it. Granted, I've never really had to research it, because it's been years and years since manual addressing and allocation was needed, thank god.

But there is a software limit of 32 devices per PCI bus, with the physical limit being much lower due to electrical loading issues. Wonder if there is any documentation out there on the most GPUs in a single machine?! (FWIW, Nvidia's driver only supports 8 in one machine, either 1x8 or 2x4.)
____________
Traveling through space at ~67,000mph!
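
The 4GB MMIO discussion above can be put into rough numbers. This is only a sketch: the 1 GiB device window and the 256 MiB of BAR space per GPU are illustrative assumptions, not figures from any particular board.

```python
# Rough sketch of why many GPUs exhaust a 32-bit MMIO window.
# Assumed numbers: ~1 GiB reserved below 4 GiB for device BARs
# (the rest of the 32-bit space goes to RAM, APIC, chipset, etc.)
# and ~256 MiB of BAR space claimed per GPU. Real figures vary
# by card, BIOS, and driver.

GIB = 1024 ** 3
MIB = 1024 ** 2

mmio_window = 1 * GIB      # assumed space the chipset reserves for devices
bar_per_gpu = 256 * MIB    # assumed MMIO claim per GPU

max_gpus = mmio_window // bar_per_gpu
print(max_gpus)            # only 4 such GPUs fit in a 1 GiB window

# With a BIOS that supports MMIO above 4 GiB, the window can grow much
# larger, and the 32-devices-per-bus software limit binds instead.
twenty_gpus_need = (20 * bar_per_gpu) // GIB
print(twenty_gpus_need)    # 20 GPUs would want ~5 GiB of BAR space
```

Under these assumptions, 20 GPUs blow well past a 32-bit window, which is the point being made about 10 x 690 counting as 20 GPUs.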

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1665
Credit: 203,424,760
RAC: 26,095
Australia
Message 1229848 - Posted: 10 May 2012, 2:30:21 UTC - in response to Message 1229618.
Last modified: 10 May 2012, 2:31:37 UTC

Well the Monster might not have happened if I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread.

Sorry to steal your thunder Vic but this was all my own work.

I discovered the PCI to PCIE adapter and the PCIE extender cables on Ebay last December and saw the possibilities. I've made a number of posts on my experiments with them; the first was on the 20th of January :-)

T.A.

Ex
Volunteer moderator
Volunteer tester
Joined: 12 Mar 12
Posts: 2895
Credit: 1,682,825
RAC: 1,184
United States
Message 1229857 - Posted: 10 May 2012, 2:59:18 UTC - in response to Message 1229618.

Well the Monster might not have happened if I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread.

It was us discussing it lol.

I was wondering what kind of performance loss you get with an adapter like that.
____________
-Dave #2

3.2.0-33

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1665
Credit: 203,424,760
RAC: 26,095
Australia
Message 1229879 - Posted: 10 May 2012, 3:52:22 UTC - in response to Message 1229857.
Last modified: 10 May 2012, 3:54:40 UTC

Well the Monster might not have happened if I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread.

It was us discussing it lol.

I was wondering what kind of performance loss you get with an adapter like that.

The performance loss is minimal; any loss is mainly due to the increased loading times at the start of a unit.

The SAH app makes very little demand on the PCIE bus. A card running in a x1 slot shows no difference in performance compared to a card in a x16 slot. Even running through the PCI bus adapter there is almost no change in crunching times and what difference there is, is no more than a few percent.

Even when I tried driving a GTX580 with a 2.8GHz P4 through one of the PCI converters the crunching times blew out by only 5% and I think that was mainly due to the fact the poor old P4 was running at about 80% to keep the 580 fed. For a GTX570 and below any reduction in performance is small enough to be irrelevant.

T.A.
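
T.A.'s observation that bus width barely matters can be checked with back-of-envelope arithmetic. The figures below are illustrative assumptions (data volume per work unit, crunch time, and rounded per-direction bandwidths), not measurements.

```python
# Back-of-envelope: why bus width barely matters when the app is
# compute-bound. All figures below are illustrative assumptions.

data_per_wu_mb = 8     # assumed data shipped across the bus per work unit
compute_time_s = 600   # assumed GPU crunch time per work unit

# Approximate usable bandwidth in MB/s (rounded, per direction).
buses = {
    "PCIe 2.0 x16": 8000,
    "PCIe 2.0 x1": 500,
    "PCI (shared 32-bit/33MHz)": 100,
}

for name, mb_s in buses.items():
    transfer_s = data_per_wu_mb / mb_s
    overhead_pct = 100 * transfer_s / compute_time_s
    print(f"{name}: {overhead_pct:.4f}% of run time spent on transfer")
```

Even on plain PCI the transfer overhead comes out at a small fraction of a percent under these assumptions, which is consistent with crunch times changing by only a few percent; the larger real-world loss comes from CPU feeding, as with the P4 example above.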

doug
Volunteer tester
Joined: 10 Jul 09
Posts: 199
Credit: 9,388,424
RAC: 1,275
United States
Message 1229909 - Posted: 10 May 2012, 5:14:59 UTC - in response to Message 1229879.


The performance loss is minimal; any loss is mainly due to the increased loading times at the start of a unit.

The SAH app makes very little demand on the PCIE bus. A card running in a x1 slot shows no difference in performance compared to a card in a x16 slot. Even running through the PCI bus adapter there is almost no change in crunching times and what difference there is, is no more than a few percent.

Even when I tried driving a GTX580 with a 2.8GHz P4 through one of the PCI converters the crunching times blew out by only 5% and I think that was mainly due to the fact the poor old P4 was running at about 80% to keep the 580 fed. For a GTX570 and below any reduction in performance is small enough to be irrelevant.

T.A.

I've messed around with my P4/GT430 combo and I can't seem to get anything more than 10% CPU usage, even if I suspend all tasks except the single CUDA task. I've also set I/O to maximum. I'm getting ~90% GPU usage regardless of what I try. I did manage to get the GPU temp down into the 80s once I turned the case fan around :-) I bought a 120mm fan today and I'll be drilling out the side where I'm going to place it tomorrow. Maybe I can get it into the 60s. I pushed the clock up on the GPU; might take a while to see if RAC improves. I think the wall times of tasks have decreased maybe 20%. I don't know if I'll be able to get over 3000 RAC out of this. I haven't gotten a single AP Nvidia WU yet, nothing with OpenCL. I guess that's all beta, or BOINC is ignoring me. I have plenty of CUDA MB WUs, so I'm not bored.

Doug

Grant (SSSF)
Joined: 19 Aug 99
Posts: 5685
Credit: 56,144,564
RAC: 49,774
Australia
Message 1229930 - Posted: 10 May 2012, 6:20:28 UTC - in response to Message 1229636.

In that case I'd go with 5 GTX690s. One 690 uses 100W less than 2 680s.

The power is less as the 690 clocks are slower so the RAC on 2 x 680 will be higher than 1 x 690.

That's the base clock; they will boost up to their thermal limit.
It would be interesting to see how the cards would actually compare in a system crunching Seti.
____________
Grant
Darwin NT.

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1665
Credit: 203,424,760
RAC: 26,095
Australia
Message 1229932 - Posted: 10 May 2012, 6:34:12 UTC - in response to Message 1229909.

I've messed around with my P4/GT430 combo and I can't seem to get anything more than 10% CPU usage, even if I suspend all tasks except the single CUDA task. I've also set I/O to maximum. I'm getting ~90% GPU usage regardless of what I try. I did manage to get the GPU temp down into the 80s once I turned the case fan around :-) I bought a 120mm fan today and I'll be drilling out the side where I'm going to place it tomorrow. Maybe I can get it into the 60s. I pushed the clock up on the GPU; might take a while to see if RAC improves. I think the wall times of tasks have decreased maybe 20%. I don't know if I'll be able to get over 3000 RAC out of this. I haven't gotten a single AP Nvidia WU yet, nothing with OpenCL. I guess that's all beta, or BOINC is ignoring me. I have plenty of CUDA MB WUs, so I'm not bored.

Doug

Just to confirm we are talking about the same thing: is your P4 the socket 478 version, or the later socket 775 issue?

If you're only getting 10% max CPU usage, that is good. It means the CPU isn't chasing its tail to keep up.

With the low CPU usage, I reckon you'd be better off adding another GPU rather than trying to crunch on the CPU as well. My experiments have shown that the PCI bus can handle 2 low power cards ;-)

T.A.

doug
Volunteer tester
Joined: 10 Jul 09
Posts: 199
Credit: 9,388,424
RAC: 1,275
United States
Message 1229964 - Posted: 10 May 2012, 11:38:07 UTC - in response to Message 1229932.


Just to confirm we are talking about the same thing: is your P4 the socket 478 version, or the later socket 775 issue?

If you're only getting 10% max CPU usage, that is good. It means the CPU isn't chasing its tail to keep up.

With the low CPU usage, I reckon you'd be better off adding another GPU rather than trying to crunch on the CPU as well. My experiments have shown that the PCI bus can handle 2 low power cards ;-)

T.A.

I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report.

Processor 1 ID = 0
Number of cores 1 (max 1)
Number of threads 2 (max 2)
Name Intel Pentium 4 631
Codename Cedar Mill
Specification Intel(R) Pentium(R) 4 CPU 3.00GHz
Package (platform ID) Socket 775 LGA (0x2)
CPUID F.6.5
Extended CPUID F.6
Core Stepping D0
Technology 65 nm
Core Speed 2994.2 MHz
Multiplier x FSB 15.0 x 199.6 MHz
Rated Bus speed 798.4 MHz
Stock frequency 3000 MHz
Instructions sets MMX, SSE, SSE2, SSE3, EM64T
L1 Data cache 16 KBytes, 8-way set associative, 64-byte line size
Trace cache 12 Kuops, 8-way set associative
L2 cache 2048 KBytes, 8-way set associative, 64-byte line size
FID/VID Control yes
FID range 14.0x - 15.0x
VID range 1.116 V - 1.324 V
# of P-States 0
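
As a quick cross-check on those CPU-Z figures: the core clock is just multiplier x FSB, and the P4's quad-pumped "rated" bus speed is 4 x FSB.

```python
# Sanity-check the CPU-Z readout above:
#   core clock = multiplier * FSB
#   rated bus  = 4 * FSB (the P4 front-side bus is quad-pumped)

multiplier = 15.0   # "Multiplier x FSB  15.0 x 199.6 MHz" from CPU-Z
fsb_mhz = 199.6

core_mhz = multiplier * fsb_mhz
rated_bus_mhz = 4 * fsb_mhz

print(core_mhz)        # 2994.0, matching the reported 2994.2 MHz core speed
print(rated_bus_mhz)   # 798.4, matching the reported rated bus speed
```

So the chip really is running at its stock ~3.0 GHz; the board is just running the FSB fractionally under 200 MHz.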

Terror Australis
Volunteer tester
Joined: 14 Feb 04
Posts: 1665
Credit: 203,424,760
RAC: 26,095
Australia
Message 1230099 - Posted: 10 May 2012, 18:13:12 UTC - in response to Message 1229964.

I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report.

The reason I asked is because there are P4's and there are P4's :-)

The Cedar Mill socket 775 P4 is more efficient than the socket 478 Northwood P4's. This is why you are getting much lower CPU usage than I did and the reason for the slightly crossed wires we had.

Re the inconclusives: go to your tasks page and check the stderr_txt files for yourself and your wingman. The problem may not be you.

T.A.

HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 3842
Credit: 106,494,791
RAC: 92,614
United States
Message 1230110 - Posted: 10 May 2012, 18:28:23 UTC - in response to Message 1230099.

I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report.

The reason I asked is because there are P4's and there are P4's :-)

The Cedar Mill socket 775 P4 is more efficient than the socket 478 Northwood P4's. This is why you are getting much lower CPU usage than I did and the reason for the slightly crossed wires we had.

Re the inconclusives: go to your tasks page and check the stderr_txt files for yourself and your wingman. The problem may not be you.

T.A.

The Prescott P4s are better than the earlier Northwood chips as well. I have a nice mix of Socket 478 Northwood & Prescott chips, as well as 775 Prescott ones. It would be nice if I could swap them all out for the latest & greatest old stuff. lol
____________
SETI@home classic workunits: 93,865 CPU time: 863,447 hours

Join the BP6/VP6 User Group today!

tbret
Volunteer tester
Joined: 28 May 99
Posts: 2600
Credit: 187,794,560
RAC: 438,476
United States
Message 1230182 - Posted: 10 May 2012, 20:42:56 UTC - in response to Message 1229964.

Doug,

This is totally unsolicited advice and I hope you find it worth more than I'm charging for it.

Get yourself one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16811129066

$70, free shipping, with three 120mm fans and one 140mm fan included. That's less money than you'd spend on the fans alone. (Beware: not all Antec 300s come pre-populated with fans. If you find one for $45, it probably doesn't.)

Put the system you are having trouble cooling in it. Use the 120mm case fan you have as a side-panel fan (not included in the price).

Your heat issues will go away. If you ever decide to replace the guts and stick a couple of hotter nVIDIA cards in it, you've already got the case.

I've got three Antec 300s and find them super-easy to work in. They have adequate cable management and have everything but front-panel USB 3.0 access that I could want.

And I'm not just rah-rahing "look what I've got! It's the BEST!" It isn't "the best" of anything except maybe price I paid vs how well it works. I prefer these to the more expensive cases I've bought.

You can kill off power supplies and graphics cards, and drive yourself both crazy and deaf, trying to cool a case that just refuses to cool down, or you can get another case.

My advice: Buy the case. In the end, it's better AND cheaper.

zoom314
Joined: 30 Nov 03
Posts: 45757
Credit: 36,386,469
RAC: 8,056
Message 1230391 - Posted: 11 May 2012, 4:24:06 UTC - in response to Message 1229848.

Well the Monster might not have happened if I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread.

Sorry to steal your thunder Vic but this was all my own work.

I discovered the PCI to PCIE adapter and the PCIE extender cables on Ebay last December and saw the possibilities. I've made a number posts on my experiments with them, the first was on 20th of January :-)

T.A.

Ok, I stand corrected, a Parallel idea then.
____________

zoom314
Joined: 30 Nov 03
Posts: 45757
Credit: 36,386,469
RAC: 8,056
Message 1230393 - Posted: 11 May 2012, 4:27:58 UTC - in response to Message 1229857.

Well the Monster might not have happened if I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread.

It was us discussing it lol.

I was wondering what kind of performance loss you get with an adapter like that.

And I still have it in the bag too. I was wondering if it would even work; after it arrived I remembered something similar from the Amiga days, a bus extender of some sort with slots. That one didn't work, so I wasn't sure anymore.
____________

doug
Volunteer tester
Joined: 10 Jul 09
Posts: 199
Credit: 9,388,424
RAC: 1,275
United States
Message 1230409 - Posted: 11 May 2012, 5:03:22 UTC - in response to Message 1230182.

That's a good idea, and it's upgradeable once the existing hardware dies. I don't think I'll spend any more money on the existing hardware; I'd be better off going with an Ivy Bridge setup and a better GPU. T.A. has a lot of legacy equipment lying around that he/she is putting to good use (well, he/she does have a fag or two, so you never know.) I will go for a max-cooling case the next time. My current one is going to end up looking like a cosmetic surgery gone bad, but it will suffice. Maybe if I can get it cool enough I'll avoid the "inconclusives" that I got by overclocking. Seems to be OK now that I went back to the defaults. I wasted a day's worth of WUs though, and made my wingmen wait.

There's always something to do. Such is SETI crunch life.

Doug


Copyright © 2014 University of California