There Goes a Tenner (Jan 20 2011)

DJStarfox

Joined: 23 May 01
Posts: 1066
Credit: 1,226,053
RAC: 2
United States
Message 1069088 - Posted: 21 Jan 2011, 19:24:04 UTC - in response to Message 1069019.  

I suppose we should wait and see if Bruno is still viable.


At this point, Todd has talked me into it. Hopefully, Bruno's problem is just something simple like loose cables or a weak power supply.
ID: 1069088
nick
Volunteer tester
Joined: 22 Jul 05
Posts: 284
Credit: 3,902,174
RAC: 0
United States
Message 1069138 - Posted: 21 Jan 2011, 22:45:03 UTC

Would bumping up the disk cache on synergy help? Seems like 96 GB of RAM would give you a lot of options...

Though it depends on what it's trying to do, how random the read/writes are, and how much data gets pushed through.
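A minimal sketch of the kind of check behind that question -- purely illustrative Python; the file path, block size, and counts below are made-up assumptions, not anything measured on synergy:

# Rough comparison of sequential vs. random reads on one file, to show how
# much "how random the read/writes are" matters. With a large page cache
# (say, most of 96 GB of RAM) the gap shrinks once the data is cached.
import os, random, time

PATH = "testfile.bin"   # hypothetical scratch file, assumed >= 40 MB
BLOCK = 4096            # 4 KB per read
COUNT = 10000           # number of reads per pass

def timed_reads(offsets):
    fd = os.open(PATH, os.O_RDONLY)
    try:
        start = time.time()
        for block_no in offsets:
            os.lseek(fd, block_no * BLOCK, os.SEEK_SET)
            os.read(fd, BLOCK)
        return time.time() - start
    finally:
        os.close(fd)

total_blocks = os.path.getsize(PATH) // BLOCK
sequential = timed_reads(range(COUNT))
scattered = timed_reads(random.sample(range(total_blocks), COUNT))
print("sequential: %.2f s   random: %.2f s" % (sequential, scattered))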

Nick


ID: 1069138
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 1069162 - Posted: 21 Jan 2011, 23:17:21 UTC - in response to Message 1069042.  

we had two large storerooms of full height 5.25" 1GB Micropolis SCSI drives and one storeroom would be empty in a month.


Humm, the M-word might be key there. I was the first kid on my block to own a Gigabyte hard drive. This enabled me to do audio editing, as an entire concert would fit on the drive. The actual drive was a 1.6 Gbyte IDE Micropolis, but back then I think it was pretty common to sell the same drive hardware with IDE or SCSI interface, with maybe a $50 adder for the SCSI variant. I was extremely pleased to pay a little over $1000 for my drive.

The first one failed less than two days after installation--not subtle bad sectors, but hard failure. Warranty replacement went OK, but the second one lasted only a couple of years before also failing hard.

Now, a single user's experience with a tiny sample is not the way to judge HD reliability, but I have heard that Micropolis did not lead the field in reliability even in their good days. How did Cray come to choose them?

Oh, yes, and my application was not hammering the thing at all. I probably did active audio work less than 1% of the time, and did not yet have an internet connection, so the drive was closer to excess idle than to excess activity.
ID: 1069162
kittyman
Volunteer tester
Joined: 9 Jul 00
Posts: 51468
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1069165 - Posted: 21 Jan 2011, 23:24:39 UTC - in response to Message 1069162.  

we had two large storerooms of full height 5.25" 1GB Micropolis SCSI drives and one storeroom would be empty in a month.


Humm, the M-word might be key there. I was the first kid on my block to own a Gigabyte hard drive. This enabled me to do audio editing, as an entire concert would fit on the drive. The actual drive was a 1.6 Gbyte IDE Micropolis, but back then I think it was pretty common to sell the same drive hardware with IDE or SCSI interface, with maybe a $50 adder for the SCSI variant. I was extremely pleased to pay a little over $1000 for my drive.

The first one failed less than two days after installation--not subtle bad sectors, but hard failure. Warranty replacement went OK, but the second one lasted only a couple of years before also failing hard.

Now, a single user's experience with a tiny sample is not the way to judge HD reliability, but I have heard that Micropolis did not lead the field in reliability even in their good days. How did Cray come to choose them?

Oh, yes, and my application was not hammering the thing at all. I probably did active audio work less than 1% of the time, and did not yet have an internet connection, so the drive was closer to excess idle than to excess activity.

Your comment is interesting, because I had bad things happen with Micropolis drives too, back in the day.
Horribly unreliable......went through a streak of about 10 bad ones. All purchased at different times from different vendors.

Seagate forever for the kitties now.
And very happy with them. I learned to buy 'server class' drives for the 100% uptime I give them......
Yet another piece of valuable advice I gleaned from my friends at Seti.

"Freedom is just Chaos, with better lighting." Alan Dean Foster

ID: 1069165
Alex Hilleary

Joined: 9 May 01
Posts: 3
Credit: 19,762,445
RAC: 25
United States
Message 1069631 - Posted: 22 Jan 2011, 22:58:06 UTC

I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They had to prove that they had all these drives that were sold and not yet delivered, so somebody packaged a bunch of real bricks, not dead drives, to show an auditor. Of course, today that looks like mere child's play after Madoff, Enron, and others.....
ID: 1069631
Philhnnss
Volunteer tester

Joined: 22 Feb 08
Posts: 63
Credit: 30,694,327
RAC: 162
United States
Message 1069649 - Posted: 22 Jan 2011, 23:35:19 UTC - in response to Message 1069631.  
Last modified: 23 Jan 2011, 0:29:00 UTC

I think it was MiniScribe that was shipping the bricks.


New York Times

http://query.nytimes.com/gst/fullpage.html?res=950DE3DE1130F930A2575AC0A96F948260
ID: 1069649
archae86

Joined: 31 Aug 99
Posts: 909
Credit: 1,582,816
RAC: 0
United States
Message 1069650 - Posted: 22 Jan 2011, 23:37:38 UTC - in response to Message 1069631.  

Alex Hilleary wrote:
I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time.

Close, but you are thinking of MiniScribe. I'm not sure whether the bricks story was ever proven, or whether the fact that employees believed it to be true was simply cited as one illustration of just how completely broken things were. It was a famous mess either way.
ID: 1069650
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1069657 - Posted: 23 Jan 2011, 0:00:34 UTC - in response to Message 1069650.  

Alex Hilleary wrote:
I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time.

Close, but you are thinking of MiniScribe. I'm not sure whether the bricks story was ever proven, or whether the fact that employees believed it to be true was simply cited as one illustration of just how completely broken things were. It was a famous mess either way.

They were not actual bricks, but one of the last rounds of drives only worked as bricks. I got 3 of them, one after another, that had to be replaced under warranty. The 4th one lasted for about 5 years. The first one I got lasted 3 days, the second lasted 7 hours, and the third lasted 1 hour. (I was a bit peeved about spending about 6 hours filling the disk from backups before the imminent failure each time except the last.) The solution was to send me a drive with a different capacity (25% larger) that did not have the infant mortality problem. The drive I ordered was 120 MB; that was the one with three failures. The drive I ended up with was 150 MB. These were among the largest drives available at the time. MiniScribe went out of business within 2 years of this fiasco.


BOINC WIKI
ID: 1069657
tullio
Volunteer tester

Joined: 9 Apr 04
Posts: 8797
Credit: 2,930,782
RAC: 1
Italy
Message 1069853 - Posted: 23 Jan 2011, 13:34:00 UTC

I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many rounds of reformatting and reloading the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB.
Tullio
ID: 1069853
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1069896 - Posted: 23 Jan 2011, 18:18:45 UTC - in response to Message 1069853.  

I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many rounds of reformatting and reloading the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB.
Tullio

Yes, some of their earlier drives were tanks - built to keep running through anything forever. It was some of their last disks that had problems.


BOINC WIKI
ID: 1069896
PhonAcq

Joined: 14 Apr 01
Posts: 1656
Credit: 30,658,217
RAC: 1
United States
Message 1069940 - Posted: 23 Jan 2011, 19:16:53 UTC

So who bought this company, Micropolis? I seem to recall them being bought out.
ID: 1069940
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1069988 - Posted: 23 Jan 2011, 21:14:47 UTC - in response to Message 1069940.  

So who bought this company, Micropolis? I seem to recall them being bought out.

Quote from wiki..

This company was one of the many hard drive manufacturers of the 1980s and 1990s that went out of business, merged, or closed their hard drive divisions; as capacities and demand for products increased, profits became hard to find. While Micropolis was able to hold on longer than many of the others, it ultimately sold its hard drive business to Singapore Technologies (now Temasek Holdings), which has since ceased to market the brand.

After the disk business sale, Micropolis was reorganized as StreamLogic Corporation, which declared bankruptcy in 1997 amid securities fraud allegations.

Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1069988
zoom3+1=4
Volunteer tester
Joined: 30 Nov 03
Posts: 65746
Credit: 55,293,173
RAC: 49
United States
Message 1070139 - Posted: 24 Jan 2011, 6:18:21 UTC - in response to Message 1069631.  

I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They had to prove that they had all these drives that were sold and not yet delivered, so somebody packaged a bunch of real bricks, not dead drives, to show an auditor. Of course, today that looks like mere child's play after Madoff, Enron, and others.....

Actually, years ago I had to go through 10 Seagate ST4096 MFM 5.25" FH HDDs before I had one that worked, so yeah, there were crap drives being made, even by Seagate. Today there is no such thing as a new 5.25" FH (full height) HDD anymore, but those were bricks alrighty.
The T1 Trust, PRR T1 Class 4-4-4-4 #5550, 1 of America's First HST's
ID: 1070139
zii

Joined: 24 May 03
Posts: 7
Credit: 828,565
RAC: 0
Sweden
Message 1072646 - Posted: 31 Jan 2011, 9:57:03 UTC - in response to Message 1068814.  

Also, by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems but this wasn't helping - in fact it was making it worse. The problem is not a lack of network bandwidth, but disk i/o. The results have to live somewhere, and require lots of random reads/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) such that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which both (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have each upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, who need to read both piles at once.
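To put rough numbers on the "disk i/o, not bandwidth" point in that quote -- a back-of-the-envelope sketch in Python where every figure is a made-up assumption, not a measurement from bruno:

# Why random result-file I/O, not network bandwidth, is the limit, and why a
# second upload server writing over NFS makes it worse. All numbers assumed.
uploads_per_sec = 25      # hypothetical result uploads arriving per second
ios_per_upload  = 4       # create + data write + metadata updates, say
local_io_ms     = 8.0     # one random I/O on directly attached disk
nfs_extra_ms    = 4.0     # extra per-operation cost when going through NFS

iops = uploads_per_sec * ios_per_upload
print("random I/Os needed per second:", iops)
print("disk time demanded, local disk: %.0f ms per wall-clock second" % (iops * local_io_ms))
print("disk time demanded, over NFS:   %.0f ms per wall-clock second" % (iops * (local_io_ms + nfs_extra_ms)))
# Once the demanded disk time passes ~1000 ms per second (per spindle) the
# queue only grows; the NFS overhead pushes you over that line much sooner.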


(First post, so please bear with me...)

My spontaneous thought when reading the above is that, since the entire S@H thingie is about distribution... shouldn't there be a way to distribute this too?

Just like the home computers might not always be so fast, but compensate for this by their large numbers, wouldn't it be possible to have even more upload servers (not just two), and perhaps one dedicated "indexing server" to keep track of which upload server has which data...

And by increasing the number of ul servers to more than two, you also distribute the network traffic. So even if the total traffic gets higher, the load gets more spread out.
In other words, just two ul servers maybe isn't enough to distribute the network traffic, but with even more servers, the mere numbers will compensate for the network limitations.

I realise I'm probably just missing something more or less obvious here, just thought I'd mention it anyways... ;)
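To make the "indexing server" idea concrete, here is a purely hypothetical sketch in Python -- none of these host names or classes exist in the real SETI@home backend, and the replies below explain why this doesn't actually remove the bottleneck:

# Hypothetical only: several upload servers plus one "indexing server" that
# remembers which server holds which result file.
import hashlib

UPLOAD_SERVERS = ["ul1", "ul2", "ul3", "ul4"]   # made-up host names

class IndexServer:
    def __init__(self):
        self.location = {}                      # result_id -> server name

    def assign(self, result_id):
        # spread incoming uploads across servers by hashing the result id
        h = int(hashlib.md5(result_id.encode()).hexdigest(), 16)
        server = UPLOAD_SERVERS[h % len(UPLOAD_SERVERS)]
        self.location[result_id] = server
        return server

    def lookup(self, result_id):
        # a validator would ask here before fetching the result file
        return self.location[result_id]

index = IndexServer()
print(index.assign("result_000123"))   # which server takes this upload
print(index.lookup("result_000123"))   # where to find it later
# The catch: every file still has to be pulled back to one place for
# validation/assimilation, so the disk and network work is moved, not removed.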
ID: 1072646
John McLeod VII
Volunteer developer
Volunteer tester
Joined: 15 Jul 99
Posts: 24806
Credit: 790,712
RAC: 0
United States
Message 1072883 - Posted: 1 Feb 2011, 0:09:31 UTC - in response to Message 1072646.  

Also, by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems but this wasn't helping - in fact it was making it worse. The problem is not a lack of network bandwidth, but disk i/o. The results have to live somewhere, and require lots of random reads/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) such that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which both (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have each upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, who need to read both piles at once.


(First post, so please bear with me...)

My spontaneous thought when reading the above is that, since the entire S@H thingie is about distribution... shouldn't there be a way to distribute this too?

Just like the home computers might not always be so fast, but compensate for this by their large numbers, wouldn't it be possible to have even more upload servers (not just two), and perhaps one dedicated "indexing server" to keep track of which upload server has which data...

And by increasing the number of ul servers to more than two, you also distribute the network traffic. So even if the total traffic gets higher, the load gets more spread out.
In other words, just two ul servers maybe isn't enough to distribute the network traffic, but with even more servers, the mere numbers will compensate for the network limitations.

I realise I'm probably just missing something more or less obvious here, just thought I'd mention it anyways... ;)

Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a Gbit link up the hill will actually help much, as that Gbit is shared among several labs.
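For scale, the ideal-case arithmetic on those links (ignoring protocol overhead and the fact that the Gbit pipe is shared; upper bounds only):

# Best-case daily throughput of the link up the hill -- upper bounds only,
# ignoring protocol overhead, contention, and sharing with other labs.
def gbytes_per_day(link_mbit_per_sec):
    bytes_per_sec = link_mbit_per_sec * 1e6 / 8.0
    return bytes_per_sec * 86400 / 1e9

for link in (100, 1000):
    print("%4d Mbit/s link: at most ~%.1f GB/day" % (link, gbytes_per_day(link)))
# 100 Mbit/s tops out around a terabyte a day; a shared Gbit link is at best
# ten times that, and in practice much less once several labs are on it.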


BOINC WIKI
ID: 1072883
zii

Joined: 24 May 03
Posts: 7
Credit: 828,565
RAC: 0
Sweden
Message 1073011 - Posted: 1 Feb 2011, 6:25:21 UTC - in response to Message 1072883.  

Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a Gbit link up the hill will actually help much, as that Gbit is shared among several labs.


Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?
ID: 1073011
KWSN THE Holy Hand Grenade!
Volunteer tester
Joined: 20 Dec 05
Posts: 3187
Credit: 57,163,290
RAC: 0
United States
Message 1073196 - Posted: 1 Feb 2011, 17:14:05 UTC - in response to Message 1073011.  



Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?


Sorry, but all the data HAS to go through one server, as that server does all the validating, assimilation, and deletion. As it has been explained to me, any other possibility (and several have been tried, so I understand...) causes too much network thrashing as the WUs/results get tossed back and forth between servers.

Hello, from Albany, CA!...
ID: 1073196
1mp0£173
Volunteer tester

Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 1073243 - Posted: 2 Feb 2011, 0:03:13 UTC - in response to Message 1073196.  
Last modified: 2 Feb 2011, 0:05:25 UTC



Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?


Sorry, but all the data HAS to go through one server, as that server does all the validating, assimilation, and deletion. As it has been explained to me, any other possibility (and several have been tried, so I understand...) causes too much network thrashing as the WUs/results get tossed back and forth between servers.

It's always good to read RFC-1925.

What we're talking about is Truth #6: "It is easier to move a problem around (for example, by moving the problem to a different part of the overall network architecture) than it is to solve it."

First of all, this isn't a "problem" because the BOINC client will keep trying to upload until it succeeds, and data is no more likely to be lost at the client (cruncher) than it is on some intermediate server.
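That retry behaviour is the whole point; a toy sketch of the pattern in Python (this is not the actual BOINC client code, just the retry-with-backoff idea, and the function names are hypothetical):

# Toy illustration of "the client will keep trying to upload until it
# succeeds": retry with exponential backoff and a little jitter.
# Not the real BOINC client implementation.
import random, time

def upload_with_retry(send, max_backoff=3600):
    delay = 60                                           # first retry after a minute
    while True:
        if send():                                       # send() returns True on success
            return
        time.sleep(delay + random.uniform(0, delay / 2)) # wait, with jitter
        delay = min(delay * 2, max_backoff)              # double the wait, cap at an hour

# usage (hypothetical): upload_with_retry(lambda: post_result_to("bruno"))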

Distributing upload servers adds a whole layer of logic to deal with getting the files from the upload servers to the central site for validation, and it doesn't change the amount of data that has to be transmitted to the central site.

It moves the problem out of the client, and into a whole new infrastructure, but the problem still exists.

"Redundancy" is expensive, and it's only important if you have fickle consumers on an E-Commerce site or something that has to be done in real-time.
ID: 1073243