Message boards :
Technical News :
There Goes a Tenner (Jan 20 2011)
DJStarfox Send message Joined: 23 May 01 Posts: 1066 Credit: 1,226,053 RAC: 2 |
I suppose we should wait and see if Bruno is still viable. At this point, Todd has talked me into it. Hopefully, Bruno's problem is just something simple, like loose cables or a weak power supply. |
nick Send message Joined: 22 Jul 05 Posts: 284 Credit: 3,902,174 RAC: 0 |
Would bumping up the disk cache on synergy help? Seems like 96 GB of RAM would give you a lot of options... though it depends on what it's trying to do, how random the read/writes are, and how much data gets pushed through. Nick |
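For what Nick is suggesting, here is a rough sketch of the knobs involved, assuming synergy runs Linux. The specific numbers are illustrative only, not a recommendation for this machine:

```shell
# On Linux, the page cache already uses all free RAM for cached reads,
# so a 96 GB box caches reads aggressively by default. What you can
# tune is how much dirty (written-but-not-yet-flushed) data the kernel
# will buffer before forcing writeback:

sysctl vm.dirty_background_ratio  # % of RAM before background flushing starts
sysctl vm.dirty_ratio             # % of RAM before writers are forced to block

# Raising these lets more random writes be absorbed and reordered in
# memory before hitting disk, e.g. (as root):
#   sysctl -w vm.dirty_background_ratio=10
#   sysctl -w vm.dirty_ratio=40
# The trade-off is more unflushed data at risk if the machine crashes.
```

Whether this helps depends, as Nick says, on how random the I/O actually is: a workload dominated by random reads over a working set bigger than RAM won't gain much from write-buffer tuning.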
archae86 Send message Joined: 31 Aug 99 Posts: 909 Credit: 1,582,816 RAC: 0 |
we had two large storerooms of full height 5.25" 1GB Micropolis SCSI drives and one storeroom would be empty in a month. Hmm, the M-word might be key there. I was the first kid on my block to own a gigabyte hard drive. This enabled me to do audio editing, as an entire concert would fit on the drive. The actual drive was a 1.6 GB IDE Micropolis, but back then I think it was pretty common to sell the same drive hardware with either an IDE or SCSI interface, with maybe a $50 adder for the SCSI variant. I was extremely pleased to pay a little over $1000 for my drive. The first one failed less than two days after installation--not subtle bad sectors, but hard failure. Warranty replacement went OK, but the second one lasted only a couple of years before also failing hard. Now, a single user's experience with a tiny sample is not the way to judge HD reliability, but I have heard that Micropolis did not lead the field in reliability even in their good days. How did Cray come to choose them? Oh, yes, and my application was not hammering the thing at all. I probably did active audio work less than 1% of the time, and did not yet have an internet connection, so the drive was closer to excess idle than to excess activity. |
kittyman Send message Joined: 9 Jul 00 Posts: 51468 Credit: 1,018,363,574 RAC: 1,004 |
we had two large storerooms of full height 5.25" 1GB Micropolis SCSI drives and one storeroom would be empty in a month. Your comment is interesting, because I had bad things happen with Micropolis drives too, back in the day. Horribly unreliable......went through a streak of about 10 bad ones. All purchased at different times from different vendors. Seagate forever for the kitties now. And very happy with them. I learned to buy 'server class' drives for the 100% uptime I give them...... Yet another piece of valuable advice I gleaned from my friends at Seti. "Freedom is just Chaos, with better lighting." Alan Dean Foster |
Alex Hilleary Send message Joined: 9 May 01 Posts: 3 Credit: 19,762,445 RAC: 25 |
I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They had to prove that they had all these drives that were sold and not yet delivered. So somebody packaged a bunch of real bricks, not dead drives, to show an auditor. Of course, today that looks like mere child's play after Madoff, Enron, and others..... |
Philhnnss Send message Joined: 22 Feb 08 Posts: 63 Credit: 30,694,327 RAC: 162 |
I think it was MiniScribe that was shipping the bricks. New York Times http://query.nytimes.com/gst/fullpage.html?res=950DE3DE1130F930A2575AC0A96F948260 |
archae86 Send message Joined: 31 Aug 99 Posts: 909 Credit: 1,582,816 RAC: 0 |
Alex Hilleary wrote: I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. Close, but you are thinking of MiniScribe. I'm not sure whether the bricks story was ever proven, or whether the fact that employees believed it to be true was simply cited as one illustration of just how completely broken things were. It was a famous mess, either way. |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
Alex Hilleary wrote: I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They were not actual bricks, but one of the last rounds of drives only worked as bricks. I got 3 of them, one after another, that had to be replaced under warranty. The 4th one lasted for about 5 years. The first one I got lasted 3 days. The second one lasted 7 hours. The third lasted 1 hour. (I was a bit peeved about spending about 6 hours filling the disk from backups before its imminent failure each time except the last.) The solution was to send me a drive with a different capacity (25% larger) that did not have the infant mortality problem. The drive I ordered was 120MB, three failures. The drive I ended up with was 150MB. These were among the largest drives available at the time. MiniScribe went out of business within 2 years of this fiasco. BOINC WIKI |
tullio Send message Joined: 9 Apr 04 Posts: 8797 Credit: 2,930,782 RAC: 1 |
I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many reformattings and reloads of the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB. Tullio |
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many reformattings and reloads of the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB. Yes, some of their earlier drives were tanks - built to keep running through anything, forever. It was some of their last disks that had problems. BOINC WIKI |
PhonAcq Send message Joined: 14 Apr 01 Posts: 1656 Credit: 30,658,217 RAC: 1 |
So who bought this company, Micropolis? I seem to recall them being bought out. |
Cosmic_Ocean Send message Joined: 23 Dec 00 Posts: 3027 Credit: 13,516,867 RAC: 13 |
So who bought this company, Micropolis? I seem to recall them being bought out. Quote from the wiki: This company was one of the many hard drive manufacturers in the 1980s and 1990s that went out of business, merged, or closed their hard drive divisions; as capacities and demand for products increased, profits became hard to find. While Micropolis was able to hold on longer than many of the others, it ultimately sold its hard drive business to Singapore Technologies (now Temasek Holdings), which has since ceased to market the brand. Linux laptop: record uptime: 1511d 20h 19m (ended due to the power brick giving up) |
zoom3+1=4 Send message Joined: 30 Nov 03 Posts: 65746 Credit: 55,293,173 RAC: 49 |
I'd swear that Micropolis was the company that got caught packaging bricks Actually, years ago I had to go through 10 Seagate ST4096 MFM 5.25" FH hdds before I had one that worked. So yeah, there were crap drives being made, even by Seagate. Today there is no such thing as a new 5.25" FH (Full Height) hdd anymore, but those were bricks, alrighty. The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's |
zii Send message Joined: 24 May 03 Posts: 7 Credit: 828,565 RAC: 0 |
Also by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems but this wasn't helping - in fact it was making it worse. The problem is not the lack of bandwidth i/o, but disk i/o. The results have to live somewhere, and require lots of random read/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) such that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which also (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have each upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, who need to read both piles at once. (First post, so please bear with me...) My spontaneous thought when reading the above is that since the entire S@H thingie is about distribution... shouldn't there be a way to distribute this too? Just like the home computers might not always be so fast, but compensate for this by their large numbers, wouldn't it be possible to have even more upload servers (not just two), and perhaps one dedicated "indexing server" to keep track of which upload server has which data? And by increasing the number of upload servers to more than two, you also distribute the network traffic. So even if the total traffic gets higher, the load gets more spread out. In other words, maybe just two upload servers aren't enough to distribute the network traffic, but with even more servers, the mere numbers will compensate for the network limitations.
I realise I'm probably just missing something more or less obvious here, just thought I'd mention it anyways... ;) |
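The objection in the quoted explanation can be made concrete with a toy sketch (the names and numbers here are invented for illustration; this is not SETI@home code). Suppose results were hashed across N upload servers, as zii suggests. Each workunit's redundant results then land on several different servers, so the validator, which must see all of a workunit's results together, now has to read from several machines instead of one:

```python
# Toy model: results sharded across N upload servers by a simple hash.
# A validator must gather every result of a workunit before comparing
# them, so sharding spreads the writes but multiplies the places a
# validator has to read from -- the I/O is moved, not reduced.

def shard_for(result_id: int, n_servers: int) -> int:
    """Pick which upload server stores a given result."""
    return result_id % n_servers

def servers_needed(workunit_results, n_servers):
    """Distinct servers a validator must contact for one workunit."""
    return {shard_for(r, n_servers) for r in workunit_results}

# One workunit with 3 redundant results, spread over 4 upload servers:
print(sorted(servers_needed([101, 102, 103], 4)))  # -> [1, 2, 3]

# The same workunit with a single upload server: one place to read.
print(sorted(servers_needed([101, 102, 103], 1)))  # -> [0]
```

With one server the validator does local reads off directly attached storage; with four, three of those reads become network fetches, which is exactly the NFS-style overhead the original post says made things worse.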
John McLeod VII Send message Joined: 15 Jul 99 Posts: 24806 Credit: 790,712 RAC: 0 |
Also by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems but this wasn't helping - in fact it was making it worse. The problem is not the lack of bandwidth i/o, but disk i/o. The results have to live somewhere, and require lots of random read/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) such that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which also (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have each upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, who need to read both piles at once. Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a Gbit link up the hill will actually help much, as that Gbit is shared among several labs. BOINC WIKI |
zii Send message Joined: 24 May 03 Posts: 7 Credit: 828,565 RAC: 0 |
Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a Gbit link up the hill will actually help much, as that Gbit is shared among several labs. Sorry if I was unclear... I didn't mean distributed as in "distributed between users", but rather "distributed between a larger number of local upload servers at Berkeley". It won't solve the problem with the 100 Mbit link (but won't make it worse either); my thought was that it might solve the problem of concentrating all data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best? |
KWSN THE Holy Hand Grenade! Send message Joined: 20 Dec 05 Posts: 3187 Credit: 57,163,290 RAC: 0 |
Sorry, but all the data HAS to go through one server, as that server does all the validating, assimilation, and deletion. As it has been explained to me, any other possibility (and several have been tried, so I understand...) causes too much network thrashing as the WUs/results get tossed back and forth between servers. Hello, from Albany, CA!... |
1mp0£173 Send message Joined: 3 Apr 99 Posts: 8423 Credit: 356,897 RAC: 0 |
It's always good to read RFC-1925. What we're talking about is Truth #6: it is easier to move a problem around than it is to solve it. First of all, this isn't a "problem", because the BOINC client will keep trying to upload until it succeeds, and data is no more likely to be lost at the client (cruncher) than on some intermediate server. Distributing upload servers adds a whole layer of logic to deal with getting the files from the upload servers to the central site for validation, and it doesn't change the amount of data that has to be transmitted to the central site. It moves the problem out of the client and into a whole new infrastructure, but the problem still exists. "Redundancy" is expensive, and it's only important if you have fickle consumers on an e-commerce site, or something that has to be done in real time. |
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.