Message boards :
Number crunching :
RAC Race again!!!
Astro · Joined: 16 Apr 02 · Posts: 8026 · Credit: 600,015 · RAC: 0
Teef, wanna borrow my account key for a week or two? lol
Ned Slider · Joined: 12 Oct 01 · Posts: 668 · Credit: 4,375,315 · RAC: 0
Yes, teef joined our team (ATHLONMB.COM) a while back and occasionally brings large numbers of machines online for testing - he's an active member of our seti forum. He's been quietly planning this current test for a few months now (I believe it was originally planned for September) and has been in contact with me since late summer about which optimised clients to use. Funnily enough, one of his main concerns was the amount of bandwidth all those crunchers would use! I'm guessing that seti provides a good simulation of a large-scale render farm for things like heat output, particularly if they're rack-mounted servers that may be prone to overheating. Nice job teef :)
Ned
*** My Guide to Compiling Optimised BOINC and SETI Clients ***
*** Download Optimised BOINC and SETI Clients for Linux Here ***
SunMicrosystemsLLG · Joined: 4 Jul 05 · Posts: 102 · Credit: 1,360,617 · RAC: 0
Yep, out of all the jobs we have run on our grid in the last 6-7 months (since we built it), S@H/BOINC has sustained the highest CPU usage, and therefore thermal load, of any of the applications. We figure if it can survive SETI it will pretty much survive anything ...
John Cropper · Joined: 3 May 00 · Posts: 444 · Credit: 416,933 · RAC: 0
I would speculate that users such as Nez and Teef must be two very large groups of people working under two user accounts - there must be several hundred computers in each account! Perhaps they were teams in Classic and have now registered as individual users here, sharing the user ID. I can't imagine one person having access to that number of computers. Just my opinion. Perhaps they're competing individuals in some similar line of work...
Nez - Nose
Teef - Teeth
Think about it ;o)
Stewie: So, is there any tread left on the tires? Or at this point would it be like throwing a hot dog down a hallway?
Fox Sunday (US) at 9PM ET/PT
teef · Joined: 10 Jan 00 · Posts: 3 · Credit: 2,550,255 · RAC: 0
> Yep, out of all the jobs we have run on our grid in the last 6-7 months (since we built it), S@H/BOINC has sustained the highest CPU usage and therefore thermal load of any of the applications.
Well, our application is perhaps a little worse, but processing the same input over and over when we could be contributing a little bit for the same load seemed a little crazy. Assuming things go well this weekend, we'll start ramp-up on our app at the beginning of next week, and then seti will be a thing of the past for these machines. So far we've had only a couple of machines fail early in life from this load - very impressive.
teef · Joined: 10 Jan 00 · Posts: 3 · Credit: 2,550,255 · RAC: 0
> Yes, teef joined our team (ATHLONMB.COM) a while back and occasionally brings large numbers of machines online for testing - he's an active member of our seti forum. He's been quietly planning this current test for a few months now (I believe it was originally planned for September) and has been in contact with me since late summer about which optimised clients to use. Funnily enough, one of his main concerns was the amount of bandwidth all those crunchers would use!
In the end we ran a couple of days on the stock client, and then on Ned's advice started running Harold Naparst's client for SSE2. It's 32-bit, but hey, at least it's faster than the stock client, and there's no apparent difference in heat output. We'd have preferred to run Ned's 64-bit client for a while to measure that too. Bandwidth is looking OK, but it's a significant pull on it. Then again, HTTP data is a good model for our networking load, so that's useful too. Heat is our primary concern, as well as flagging problem machines. Once we're live we'll be allowed virtually no downtime, so it's very important to catch all this as early as possible to give the supply chain time to react.
Hans Dorn · Joined: 3 Apr 99 · Posts: 2262 · Credit: 26,448,570 · RAC: 0
Did you try running cpuburn? This little app burns more watts than anything else I've come across (at least on a P4, that is).
Regards, Hans
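[The idea behind cpuburn - and behind using S@H as a burn-in load - can be sketched in a few lines. This is only an illustration of the concept, not cpuburn itself: cpuburn's hand-tuned assembly draws noticeably more power than generic high-level code, but any tight loop pinned to every core will expose thermal and early-life problems in a similar way.]

```python
# Minimal CPU-burner sketch: pin every core at 100% with a tight arithmetic
# loop for a fixed window. Illustrative only; real burn-in tools (cpuburn,
# the optimised S@H clients discussed above) generate a harsher load.
import multiprocessing as mp
import time

def burn(seconds: float) -> int:
    """Spin on integer arithmetic for `seconds`; return iteration count."""
    end = time.monotonic() + seconds
    n = 0
    while time.monotonic() < end:
        n += 1
    return n

if __name__ == "__main__":
    workers = mp.cpu_count()
    with mp.Pool(workers) as pool:
        # one worker per core, each burning for 2 seconds
        counts = pool.map(burn, [2.0] * workers)
    print(f"{workers} workers finished, all cores loaded")
```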
SunMicrosystemsLLG · Joined: 4 Jul 05 · Posts: 102 · Credit: 1,360,617 · RAC: 0
We're in a similar situation, except we don't know what applications we'll be running from one project to the next. So far no hard failures, but it has allowed us to do some thermal profiling, check our power requirements, and flag some minor issues we might not have found until we started running some 'proper' work. Hopefully we'll be getting some brand-new machines in a month or so, so we'll probably put them to work on S@H to burn them in ...
michael37 · Joined: 23 Jul 99 · Posts: 311 · Credit: 6,955,447 · RAC: 0
Working in customer relations for a Linux clustering company, I can only drool at some of the clusters I get to work on. Just in the past 3 months, I did an 800-node cluster and a 1024-node cluster -- just imagine: one dual-CPU Xeon can pull 1000 credits per day with Harold's client. That's a MILLION credits per day. We only have clusters of 4 in the lab, and I ran Seti only at night. Other jobs are usually down then, so heat is not an issue. Oh, btw, we are hiring, but you have to live in the Boston area.
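[The back-of-the-envelope arithmetic above is easy to check. A minimal sketch, assuming the post's figure of roughly 1,000 credits/day per dual-CPU Xeon node and the two cluster sizes mentioned:]

```python
# Rough credit-throughput estimate for the clusters described above.
# Assumption (from the post): ~1,000 credits/day per dual-CPU Xeon node
# running Harold's optimised client.
CREDITS_PER_NODE_PER_DAY = 1_000

def daily_credits(nodes: int, per_node: int = CREDITS_PER_NODE_PER_DAY) -> int:
    """Total credits/day for a cluster of `nodes` identical machines."""
    return nodes * per_node

for nodes in (800, 1024):
    print(f"{nodes:>5}-node cluster: {daily_credits(nodes):,} credits/day")
# Each cluster lands right around the million-credits-per-day mark.
```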
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.