Message boards :
Number crunching :
It Lives, It Lives !!
Author | Message |
---|---|
red-ray Send message Joined: 24 Jun 99 Posts: 308 Credit: 9,029,848 RAC: 0 |
In that case i'd go with 5 GTX690s. One 690 uses 100W less than 2 680s. The power is less as the 690 clocks are slower so the RAC on 2 x 680 will be higher than 1 x 690. |
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0 |
In that case I'd go with 5 GTX 690s. One 690 uses 100W less than 2 680s. ~10% lower base clock. They're supposedly using low-leakage, top-binned parts for 690s, so by rights the boost clocks should wind themselves up further before temps & dissipation cap out. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions |
-BeNt- Send message Joined: 17 Oct 99 Posts: 1234 Credit: 10,116,112 RAC: 0 |
If you are using a 64-bit server that can address more than 4GB of memory, and a motherboard/controller that supports MMIO limits >4GB, this can be pushed out to the maximum of all the system memory (BIOS) on some machines. Of course, most of the systems that do this are normally reserved for business due to expense (reference IBM servers). This has been around for a while, especially in servers that need to run very large RAID arrays, databases, etc. with many adapters, or machines that use GPUs for supercomputer grids. Needless to say, you would probably be hard pressed, if it's possible at all, to find an affordable board for a home server that could access more than that.

Even then, it's not really a 4GB physical threshold; it's how much of the upper memory address space the memory controller will set aside to give to the system devices in the virtual space. At least that's my understanding of it. Granted, I've never really had to research it, because it's been years and years since manual addressing and allocation has been needed, thank god. But there is a software limit of 32 devices per PCI bus, with the physical limit being much lower due to electrical loading issues. Wonder if there is any documentation out there on the most GPUs in a single machine?! (FWIW, Nvidia's driver only supports 8 in one machine, either 1x8 or 2x4.) Traveling through space at ~67,000mph! |
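A back-of-the-envelope sketch of the 32-bit MMIO ceiling described above. The reserved window size and the per-GPU BAR size are illustrative assumptions, not values from any particular board:

```python
# Why boards without >4GB MMIO support run out of room for GPUs:
# every card claims a chunk of the address window below 4 GiB.
# Both figures below are assumed for illustration only.

MMIO_WINDOW_BELOW_4GB = 1 * 1024**3   # assume the BIOS reserves ~1 GiB below 4 GiB
BAR_PER_GPU = 256 * 1024**2           # assume each GPU claims a 256 MiB BAR

max_gpus = MMIO_WINDOW_BELOW_4GB // BAR_PER_GPU
print(max_gpus)  # -> 4: without >4 GiB MMIO support, the window fills up fast
```

With these (assumed) numbers the window tops out at four cards, which is why multi-GPU boxes lean on boards that can relocate MMIO above 4GB.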
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
Well the Monster might not have happened If I had not mentioned that I'd bought one of those pci-e x16 extenders over in another thread. Sorry to steal your thunder Vic but this was all my own work. I discovered the PCI to PCIE adapter and the PCIE extender cables on Ebay last December and saw the possibilities. I've made a number posts on my experiments with them, the first was on 20th of January :-) T.A. |
Ex: "Socialist" Send message Joined: 12 Mar 12 Posts: 3433 Credit: 2,616,158 RAC: 2 |
Well the Monster might not have happened if I had not mentioned that I'd bought one of those PCI-E x16 extenders over in another thread. It was us discussing it lol. I was wondering what kind of performance loss you get with an adapter like that. #resist |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
Well the Monster might not have happened if I had not mentioned that I'd bought one of those PCI-E x16 extenders over in another thread. The performance loss is minimal; any loss is mainly due to the increased loading times at the start of a unit. The SAH app makes very little demand on the PCIE bus. A card running in a x1 slot shows no difference in performance compared to a card in a x16 slot. Even running through the PCI bus adapter there is almost no change in crunching times, and what difference there is, is no more than a few percent. Even when I tried driving a GTX580 with a 2.8GHz P4 through one of the PCI converters, the crunching times blew out by only 5%, and I think that was mainly because the poor old P4 was running at about 80% to keep the 580 fed. For a GTX570 and below, any reduction in performance is small enough to be irrelevant. T.A. |
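The claim that the SAH app barely touches the bus can be sanity-checked with rough numbers. The per-task data size, lane bandwidth, and task length below are all assumptions, not measurements:

```python
# Why a x1 slot (or a PCI adapter) barely hurts SETI crunching: the data
# moved per task is tiny relative to even a single PCIe lane.
# All three figures are rough assumptions for illustration.

TASK_BYTES = 8 * 1024**2       # assume ~8 MiB shuttled over the bus per task (generous)
X1_BANDWIDTH = 250 * 1024**2   # PCIe 1.x: ~250 MB/s per lane, per direction
TASK_SECONDS = 20 * 60         # assume a ~20-minute GPU task

transfer_seconds = TASK_BYTES / X1_BANDWIDTH
overhead = transfer_seconds / TASK_SECONDS
print(overhead < 0.001)  # -> True: bus time is a vanishing fraction of runtime
```

Even at single-lane PCIe 1.x speed, transfer time is hundredths of a second against minutes of compute, which matches the "a few percent at worst" observation above.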
doug Send message Joined: 10 Jul 09 Posts: 202 Credit: 10,828,067 RAC: 0 |
I've messed around with my P4/GT430 combo and I can't seem to get anything more than 10% CPU usage, even if I suspend all tasks except the single CUDA task. I've also set I/O to maximum. I'm getting ~90% GPU usage regardless of what I try. I did manage to get the GPU temp down into the 80s once I turned the case fan around :-) I bought a 120mm fan today and I'll be drilling out the side where I'm going to place it tomorrow. Maybe I can get it into the 60s. I pushed the clock up on the GPU. Might take a while to see if RAC improves. I think the wall times of tasks have decreased maybe 20%. I don't know if I'll be able to get over 3000 RAC out of this. I haven't gotten a single AP Nvidia WU yet, nothing with OpenCL. I guess that's all beta, or BOINC is ignoring me. I have plenty of CUDA MB WUs so I'm not bored. Doug |
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 13736 Credit: 208,696,464 RAC: 304 |
In that case i'd go with 5 GTX690s. One 690 uses 100W less than 2 680s. That's the base clock, they will run boost to their thermal limit. It would be interesting to see how the cards would actually compare in a system crunching Seti. Grant Darwin NT |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
I've messed around with my P4/GT430 combo and I can't seem to get anything more than 10% CPU usage, even if I suspend all tasks except the single CUDA task. I've also set I/O to maximum. I'm getting ~90% GPU usage regardless of what I try. I did manage to get the GPU temp down into the 80s once I turned the case fan around :-) I bought a 120mm fan today and I'll be drilling out the side where I'm going to place it tomorrow. Maybe I can get it into the 60s. I pushed the clock up on the GPU. Might take a while to see if RAC improves. I think the wall times of tasks have decreased maybe 20%. I don't know if I'll be able to get over 3000 RAC out of this. I haven't gotten a single AP Nvidia WU yet, nothing with OpenCL. I guess that's all beta, or BOINC is ignoring me. I have plenty of CUDA MB WUs so I'm not bored. Just to confirm we are talking about the same thing: is your P4 the socket 478 version or the later socket 775 issue? If you're only getting 10% max CPU usage, that is good. It means the CPU isn't chasing its tail to keep up. With the low CPU usage, I reckon you'd be better off adding another GPU rather than trying to crunch CPU as well. My experiments have shown that the PCI bus can handle 2 low-power cards ;-) T.A. |
doug Send message Joined: 10 Jul 09 Posts: 202 Credit: 10,828,067 RAC: 0 |
I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more available PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report.

Processor 1
ID = 0
Number of cores 1 (max 1)
Number of threads 2 (max 2)
Name Intel Pentium 4 631
Codename Cedar Mill
Specification Intel(R) Pentium(R) 4 CPU 3.00GHz
Package (platform ID) Socket 775 LGA (0x2)
CPUID F.6.5
Extended CPUID F.6
Core Stepping D0
Technology 65 nm
Core Speed 2994.2 MHz
Multiplier x FSB 15.0 x 199.6 MHz
Rated Bus speed 798.4 MHz
Stock frequency 3000 MHz
Instructions sets MMX, SSE, SSE2, SSE3, EM64T
L1 Data cache 16 KBytes, 8-way set associative, 64-byte line size
Trace cache 12 Kuops, 8-way set associative
L2 cache 2048 KBytes, 8-way set associative, 64-byte line size
FID/VID Control yes
FID range 14.0x - 15.0x
VID range 1.116 V - 1.324 V
# of P-States 0 |
Terror Australis Send message Joined: 14 Feb 04 Posts: 1817 Credit: 262,693,308 RAC: 44 |
I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more available PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report. The reason I asked is because there are P4's and there are P4's :-) The Cedar Mill socket 775 P4 is more efficient than the socket 478 Northwood P4's. This is why you are getting much lower CPU usage than I did, and the reason for the slightly crossed wires we had. Re the inconclusives: go to your tasks page and check the stderr_txt files of yourself and your wingman. The problem may not be you. T.A. |
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57 |
I'm not sure what the socket 775 issue is, but I have socket 775. I started getting a lot of tasks pending with "validation inconclusive". I don't have a lot of experience with the Nvidia card or overclocking. I know if you overclock too much you either burn up or start producing errors. I set my GT430 back to the defaults for now. I've got 2 more available PCI slots and an AGP slot available. I'm getting my cooling problems taken care of. I am thinking about adding a second GT430 since they're still $55, or I could add something more powerful. I've got lots of empty cables from the power supply. I want to gather a bit of experience with what I've got first. Here's a snippet of the CPU-Z report. The Prescott P4's are better than the previous Northwood chips as well. I have a nice mix of Socket 478 Northwood & Prescott chips, as well as 775 Prescott ones. It would be nice if I could swap them all out to the latest & greatest old stuff. lol SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url] |
tbret Send message Joined: 28 May 99 Posts: 3380 Credit: 296,162,071 RAC: 40 |
Doug, This is totally unsolicited advice and I hope you find it worth more than I'm charging for it. Get yourself one of these: http://www.newegg.com/Product/Product.aspx?Item=N82E16811129066 $70, free shipping, three 120mm fans and one 140mm fan included. That's less money than you'd spend on the fans alone. (Beware: not all Antec 300s come pre-populated with fans. Find one for $45, and it probably doesn't.) Put the system you are having trouble cooling in it. Use the 120mm case fan you have as a side-panel fan (not included in the price). Your heat issues will go away. If you ever decide to replace the guts and stick a couple of hotter nVIDIA cards in it, you've already got the case. I've got three Antec 300s and find them super-easy to work in. They have adequate cable management and everything but the front-panel USB 3.0 access that I could want. And I'm not just rah-rahing "look what I've got! It's the BEST!" It isn't "the best" of anything except maybe the price I paid versus how well it works. I prefer these to the more expensive cases I've bought. You can kill off power supplies and graphics cards, and drive yourself both crazy and deaf, trying to cool a case that just refuses to cool down, or you can get another case. My advice: Buy the case. In the end, it's better AND cheaper. |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
Well the Monster might not have happened if I had not mentioned that I'd bought one of those PCI-E x16 extenders over in another thread. Ok, I stand corrected, a parallel idea then. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
Well the Monster might not have happened if I had not mentioned that I'd bought one of those PCI-E x16 extenders over in another thread. And I still have it in the bag too, as I wasn't sure it would even work. After it had been here for a bit, I remembered something similar from the Amiga, a bus extender of some sort with slots; it didn't work then, so I wasn't sure anymore. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
doug Send message Joined: 10 Jul 09 Posts: 202 Credit: 10,828,067 RAC: 0 |
That's a good idea, and it's upgradeable once the existing hardware dies. I don't think I'll spend any more money on the existing hardware; I'd be better off going with an Ivy Bridge setup and a better GPU. T.A. has a lot of legacy equipment lying around that he/she is putting to good use (well, he/she does have a fag or two, so you never know.) I will go for a max-cooling case next time. My current one is going to end up looking like cosmetic surgery gone bad, but it will suffice. Maybe if I can get it cool enough I'll avoid the "inconclusives" that I got by overclocking. Seems to be OK now that I went back to the defaults. I wasted a day's worth of WUs though, and made my wingmen wait. There's always something to do. Such is SETI crunch life. Doug |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.