141)
Message boards :
Number crunching :
Oh what a beating, what a punishment.
(Message 1041650)
Posted: 13 Oct 2010 by Todd Hebert
Post: If they do have to get a new UPS, will it have to be delivered by UPS? If it is coming by UPS, don't expect it to arrive on time and in working order. Todd
142)
Message boards :
Number crunching :
Greetings
(Message 1040379)
Posted: 9 Oct 2010 by Todd Hebert
Post: Glad to have you amongst the fray. Welcome!
143)
Message boards :
Number crunching :
Emergency fund drive for the project.............
(Message 1039700)
Posted: 8 Oct 2010 by Todd Hebert
Post: I believe that they can be - these are all new items. We had a commitment from users to provide parts - like the purchase of the RAM. I myself am donating the processors and the server motherboard and heatsinks, and others are combining to purchase the chassis, hard drives, and RAID controller. If there is a different direction to take we would appreciate input. Todd
144)
Message boards :
Number crunching :
Emergency fund drive for the project.............
(Message 1039688)
Posted: 8 Oct 2010 by Todd Hebert
Post: The GPU Users group is also currently engaged in providing a new server for the project. Thus far we have generated two Intel X5560 processors, two heatsinks, an Intel server chassis with server motherboard, and 12GB of RAM. With all of the new servers our project will be nice and stable once again. Todd
145)
Message boards :
Number crunching :
Closed *SETI/BOINC Milestones [ v2.0 ] - XXI* Closed
(Message 1037922)
Posted: 1 Oct 2010 by Todd Hebert
Post: Doesn't look right to me either. Todd
146)
Message boards :
Number crunching :
ATI MultiBeam beta
(Message 1037912)
Posted: 1 Oct 2010 by Todd Hebert
Post: Time to get out the 4850x2 - this will be online this weekend for the testing phase of this development. Glad to see that the ATI users are being included in the GPU compute circle. We will have to update our team mission statement now that ATI cards are to be amongst the power crunchers alongside nVidia-based cards :) Excellent work! Todd - Team GPU Users Admin
147)
Message boards :
Number crunching :
Panic Mode On (39) Server problems
(Message 1037849)
Posted: 1 Oct 2010 by Todd Hebert
Post: Some of us users that have machines capable of processing thousands of WU's a week should not be looked at as the root cause of some of the issues for S@H. On average my machines process 30k WU's per week and should not be looked down upon because they are trying to report their completed tasks. Other projects like Folding@Home don't seem to mind users doing a lot of work for them - only here are there negative comments and disdain for the power crunchers in our lot. No one was beating up on Nez while he was still here, and he was running S@H on 4600 machines - I have 20, and I don't make calls back to the servers manually - everything is as it was built into the client. If the client isn't able to report back it will try again at a later interval - if there is no work available it will check back when the servers tell it to. If it can't get through it will wait until it is its turn. Is the power cruncher not advancing the project to higher goals by allowing more work to be done? Are we not the same ones making significant monetary donations for the advancement of the science and technology of this project? When there is a problem don't we attempt to band together to resolve it for our fellow users? If you have machines that are able to process a work unit and report it back to the servers before it expires then you are doing exactly what the goals of the project are - if you do the same on 2 or 20 machines then you are still well within the spirit and intent. If I don't have work or can't report my tasks I surely don't mind waiting until the time that they are sent back. During the extended outage I did run out of work with a 10 day cache, so despite the setting for this period my machines were capable of processing the work assigned to them in less time than what the machines estimated. Am I going to increase the setting by manual methods - no, because that isn't what was intended by this project.
The reason the 10 day limit was implemented was so that people would report their tasks back in a timely fashion, which allows the other wingmen to also get proper credit for the work that they participated in - and to get through times when there was no work. After an outage is it not expected that the servers will be overwhelmed by clients reporting their tasks? Does the post office not expect to have lots of mail around Christmas? Do people not expect to wait in line at the amusement park on the 4th of July? When it is your time you will get through and get your chance - just like the power users amongst us that are doing the exact same thing. There is no special priority given to any user or their reported tasks based on the number of credits that they have. Every client has the same privilege level - 0. Todd
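The retry behavior the post describes (a client that cannot report backs off and tries again later, so a flood after an outage eventually drains on its own) is commonly implemented as capped exponential backoff with jitter. This is an illustrative sketch only, not the actual BOINC client code; the function name, base delay, and cap are assumptions chosen for the example.

```python
import random

def next_backoff(attempt, base=60.0, cap=86400.0):
    """Capped exponential backoff delay (seconds) for retry number `attempt`.

    Illustrative sketch, not BOINC's real scheduler logic: the wait doubles
    after each failed report, never exceeds a one-day cap, and is jittered
    so thousands of hosts don't all hammer the server at the same instant.
    """
    delay = min(cap, base * (2 ** attempt))
    return delay * random.uniform(0.5, 1.0)  # jitter spreads out retries

# Delays grow with each failure but stay under the cap.
delays = [next_backoff(n) for n in range(12)]
assert all(0 < d <= 86400.0 for d in delays)
```

The jitter factor is the key design point: without it, every client that failed at the same moment would retry at the same moment, recreating the overload each cycle.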
148)
Message boards :
Number crunching :
Well my 10 day cache is running dry...
(Message 1037726)
Posted: 1 Oct 2010 by Todd Hebert
Post: Please see my reply in this thread listed below. http://setiathome.berkeley.edu/forum_thread.php?id=61607&nowrap=true#1037722
149)
Message boards :
Number crunching :
Panic Mode On (39) Server problems
(Message 1037722)
Posted: 1 Oct 2010 by Todd Hebert
Post: Some of us users that have machines capable of processing thousands of WU's a week should not be looked at as the root cause of some of the issues for S@H. On average my machines process 30k WU's per week and should not be looked down upon because they are trying to report their completed tasks. If tasks are provided to them and they have been properly completed, how can we be considered to be at fault? I will not dismiss that some of our machines are packing some significant horsepower - but that is the way that we designed them to perform, and we have invested the resources and time to process work for this project and others. Please don't liken us to the enemy of the project when we are just doing our part with what we have available. Of course this is just my two cents' worth and should be taken just as that and nothing more. For the record, I have not done any manual modifications to the client, increased the cache size beyond what is pre-programmed as a maximum, or anything other than having very fast machines. Todd Team GPU Users Admin - World RAC Leader
150)
Message boards :
Number crunching :
Closed *SETI/BOINC Milestones [ v2.0 ] - XXI* Closed
(Message 1037606)
Posted: 1 Oct 2010 by Todd Hebert
Post: Finally made it over 110 million credits since things have come back online - sat there at 109,997,xxx for 9 days straight. After all WU's get processed I should hit 115M easily. 36k WU's should be enough for 4M, I would think. Todd
151)
Message boards :
Number crunching :
How many work units are you trying to report
(Message 1037599)
Posted: 30 Sep 2010 by Todd Hebert
Post: I saw the idea of this thread immediately, though I think somehow Todd would have won anyway ;). Just happen to have lots of machines running the project - wasn't trying to outdo anyone (hope no one got the wrong idea that I was boasting or anything like that) - we were all in the same mess at the time. Thankfully everything is getting sent out now - really looking forward to some work again :)
152)
Message boards :
Number crunching :
How many work units are you trying to report
(Message 1037252)
Posted: 30 Sep 2010 by Todd Hebert
Post: I have 28.8k WU's to report and maybe another 10k that need to be uploaded before they can report. Todd Team GPU Users Admin
153)
Message boards :
Number crunching :
Making Core i7 Quieter
(Message 1036987)
Posted: 29 Sep 2010 by Todd Hebert
Post: The H50 is old news, I'd go for the Corsair H70 which has a thicker radiator and a thinner water block than the old H50. You want quiet? Go H70; for noise, go air... Looking at the case specs, he may not have the room for a double radiator, which is why I suggested the H50 - it's smaller. The only available fan space is one 120mm. The Corsair H70 does not have a dual radiator - dual fans and a thicker radiator, but you don't need to run both fans. The extra surface area will make a big difference even if you are only using one fan to blow cooler air across the radiator from the outside of the case. Todd Team GPU Users Admin and #1 RAC Leader for the world
154)
Message boards :
Number crunching :
Closed *SETI/BOINC Milestones [ v2.0 ] - XXI* Closed
(Message 1036755)
Posted: 29 Sep 2010 by Todd Hebert
Post: A milestone to remember: lowest RAC I've had in a long time... Me too - now below 400k
155)
Message boards :
Number crunching :
Closed *SETI/BOINC Milestones [ v2.0 ] - XXI* Closed
(Message 1034067)
Posted: 18 Sep 2010 by Todd Hebert
Post: Well 1 mill total is good, but when Todd have ½ mill every day I'm not so sure... :) Vyper is right - I have a lot of computing resources at my disposal, so my circumstances are different. But they are also all mine - not other persons' property, as has happened before here. No matter what you contribute to this project it is a positive, and just because I have a lot of credit doesn't make me more likely to find the next WOW signal. In a lot of ways it just means I have a higher electric bill :) Todd Team GPU Users Admin
156)
Message boards :
Number crunching :
Closed *SETI/BOINC Milestones [ v2.0 ] - XXI* Closed
(Message 1033580)
Posted: 17 Sep 2010 by Todd Hebert
Post: Just moved into the #5 spot of highest credited users on the project - should move into #4 in just a few days. Of course still holding onto the RAC leadership as well. Todd
157)
Message boards :
Number crunching :
Does seti 6.10.58 on Windows work with a Tesla C2050 GPU?
(Message 1032596)
Posted: 10 Sep 2010 by Todd Hebert
Post: No point in the expense of buying a Tesla card - all of that extra ECC RAM is going to serve no gain when running even 3 WU's per card, which would only utilize 768MB of RAM on the card. Todd Team GPU Users Group Admin
158)
Message boards :
Number crunching :
New dedicated system for SETI/BOINC
(Message 1032583)
Posted: 10 Sep 2010 by Todd Hebert
Post: The choice of a processor can go two different directions for obvious reasons. I have scanned the top 100 systems on this project and have found 14 systems using AMD processors of various types. One of those systems (the top host - owned by Vyper) doesn't use the CPU in his system to crunch - only to feed the GPU's - so I know that his doesn't count into that figure, but it had to be included for accuracy reasons. AMD CPU's do offer a great value and that can't be argued with - but the performance is not the same given the application. Just the same way that AMD chips sometimes outperform Intel's in database applications, Intel chips appear to outperform AMD's in this project. I won't deny that I do prefer Intel over AMD in almost every case - but I do sell both - however it is 95% Intel and 5% AMD, and the AMD chips are always sold to customers with huge databases. Up until the release of the Intel Xeon 75xx series of CPU's, the AMD systems (Opterons only) had no match in the capabilities offered by Intel in the super-enterprise; AMD offers more sockets and more RAM options for these needs. But we aren't talking about the enterprise or databases that scale to 100GB and millions of rows in this thread. I don't specifically think that the Xeon offers much difference in this project other than being able to run more threads if you are using two processors (or more if you have the 75xx series). The Intel i7-920 is one of the most popular processors around - it has an excellent price point at around $300 and overclocking headroom of at least 50% with the right components selected for the job. The argument that Hyperthreading increases processing times on GPU WU's is no longer much of an issue since the latest apps have improved the setup time for each WU being processed on a GPU. I have tried it each way and have found that there was little difference, and the CPU's are running as intended with HT turned on.
On SSD's - there is no improvement in WU processing when using an SSD. We are not talking about disk intensive operations being performed here. Some of my systems do have SSD's and I love them - but only because I do other things on the systems that I run S@H on. For the expense there is no reason to drop the extra cash on an SSD - in fact, in one of my highest performing systems I have a 5400rpm laptop hard drive just because I had it laying around, plus a hot swap tray that would accept 2.5" drives easily. Rack mount servers have never been intended for workstation class work and will generate enough noise to make you go deaf at higher processing levels because of those tiny fans spinning at 10k rpm. In a closed off data center that is cooled by forced induction and elevated floors that are plumbed to remove heat, keeping the servers at no more than 70F and processor temps at 50C, you are dealing with a totally different environment than your den or basement. Since this thread is about performance and product selection for this project and not other workloads, I think I have made compelling arguments why not to choose a server platform from Dell as a high performance system dedicated to this project - and also touched on some of the other key points that have come along as this thread has progressed. Todd Team GPU Users Group Admin and #1 RAC Leader for the World
159)
Message boards :
Number crunching :
New dedicated system for SETI/BOINC
(Message 1032448)
Posted: 10 Sep 2010 by Todd Hebert
Post: There is no question that the ability to custom build a machine is more desirable than purchasing something from the mass market. The machines that I have in my fleet are all custom built with the specific design goal of being run 24/7 at 100% duty. Mass market servers typically don't reach the levels of duty that we on this project or any other reach. When running something full bore you will seriously stress the components at all times and become more prone to failures for obvious reasons, so it is good to have the best stuff that you can afford. The biggest issue that most people encounter is heat - heat kills, in simple terms. When using mass market systems the heatsinks and system fans will scream! By having a system that is personally built you can compensate for this level of heat. It would be impossible to install even the most basic water cooling heatsinks in a Dell server since they are really a closed system that was never intended for this task. You can see that most of my machines are Xeon's and they do perform very well - but I have also taken significant steps to keep them cool. Noisy fans will detract from your enjoyment of contributing to this project - not to mention your family will hate hearing them. If you have other Dell servers, take off the side panel and power up the system with it off - that is what you will hear all the time once they heat up. It is deafening! GPU's will give you the most bang for the buck in terms of credit toward the project, without doubt. The 295's are the current leader for productivity mainly because they are a dual GPU card that is essentially two 275's on a single card. At this point there is no card in the GF400 series that will match the output - but this might change some day as the apps become more optimized or future cards are released.
My 480's do a very good job - but the system in which I have 3 480's installed is surpassed by almost 20k by a system with a single processor and two 295's running Server 2003 R2. Power supplies also are not up to the task in mass market servers. They don't have the number of PCI-E video card connectors that you would need to run a truly great system for crunching. The needs of a cruncher are very different from what you would see in a data center. Also it can be a serious challenge to fit a full sized video card into a server - they have really never been designed to have something like this installed. I'll give you an example of one of my big crunchers that you will see around the top 20 and the components that I use. Here is the host: http://setiathome.berkeley.edu/show_host_detail.php?hostid=5285985
Corsair 800D full size tower chassis
Dual Corsair H50 water coolers - they have since come out with the H70
Dual Xeon X5680's
Dual EVGA 480's (one is out right now)
12GB RAM running at 1333MHz (also Corsair)
SuperMicro server class board with dual x16 PCI-E slots - I think it is the X8DAi
Dual 500GB hard drives in RAID 1
Corsair 1200 Watt power supply
MS Server 2008 R2
12x 120mm fans to keep good air flow
All of this stuff is top notch and really high end gear with significant investment costs - but it is also more than able to handle the demands of the chosen duty while still being quiet enough that you won't notice it being next to you. And it looks darn nice :) I don't mean to burst your bubble about getting a good deal on a box that would seem nice for the price, but if your goal is serious crunching you might want to look for other alternatives. Worst case, depending on the CPU's in that box, you might be well served to pull them out and install them into a different board that is standardized and would allow you to perform upgrades or have more flexibility.
Keep in mind that any of the Xeon 55xx or 56xx CPU's will work in a single processor socket board based on the X58 chipset. I have two systems that use EVGA boards (X58 SLI 3) and I have Xeon's that are overclocked to the hilt. One is more than 90% overclocked (L5630 - stock 2.13GHz - I'm running 4.1GHz!) That is only possible when you personally select your components. I don't know if all this information makes your decision making process easier, but at least you know what has worked well for myself. I did notice your PM and will respond to it later today (it's pretty late and it has been a long day - not to mention that I really created a long message here). Todd Hebert Team GPU Users Admin and #1 RAC Leader for the World
160)
Message boards :
Number crunching :
Quick fundraiser for SETI's new server
(Message 1031566)
Posted: 7 Sep 2010 by Todd Hebert
Post: Welcome back Mark - now don't go away again. I hated seeing a Banished User ID for you. Todd
©2020 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.