There Goes a Tenner (Jan 20 2011)


Profile Todd Hebert
Volunteer tester
Avatar
Send message
Joined: 16 Jun 00
Posts: 647
Credit: 217,127,962
RAC: 0
United States
Message 1069067 - Posted: 21 Jan 2011, 17:52:05 UTC - in response to Message 1069048.

That was many, many years ago - late '80s into the early '90s. I went back to school to get my master's from UW-Madison and then went to Microsoft as a 5th-level enterprise networking tech. Been up and down the road a few times :)
Todd
____________

DJStarfox
Send message
Joined: 23 May 01
Posts: 1045
Credit: 568,320
RAC: 353
United States
Message 1069088 - Posted: 21 Jan 2011, 19:24:04 UTC - in response to Message 1069019.

I suppose we should wait and see if Bruno is still viable.


At this point, Todd has talked me into it. Hopefully, Bruno's problem is just something simple like loose cables or weak power supply.

nick
Volunteer tester
Avatar
Send message
Joined: 22 Jul 05
Posts: 281
Credit: 2,763,201
RAC: 2
United States
Message 1069138 - Posted: 21 Jan 2011, 22:45:03 UTC

Would bumping up the disk cache on synergy help? Seems like 96 GB of RAM would give you a lot of options...

Though it depends on what it's trying to do, how random the reads/writes are, and how much data gets pushed through.
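For what it's worth, Linux already uses otherwise-idle RAM as page cache automatically, so on a 96 GB box most of that memory is probably acting as disk cache whenever processes aren't claiming it. A minimal sketch, assuming a Linux host with the standard /proc/meminfo fields, of how one could check how much of synergy's RAM is currently serving as cache:

    # Report how much RAM Linux is currently using as page cache.
    # Assumes a Linux host; reads the standard /proc/meminfo fields.
    def meminfo_kib(wanted):
        """Return the requested /proc/meminfo fields, in KiB."""
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in wanted:
                    values[key] = int(rest.strip().split()[0])  # reported in kB
        return values

    info = meminfo_kib({"MemTotal", "Cached", "Dirty"})
    print(f"RAM: {info['MemTotal'] / 2**20:.1f} GiB total, "
          f"{info['Cached'] / 2**20:.1f} GiB used as page cache, "
          f"{info['Dirty'] / 2**10:.1f} MiB dirty (not yet written back)")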

Nick
____________


archae86
Send message
Joined: 31 Aug 99
Posts: 889
Credit: 1,572,794
RAC: 3
United States
Message 1069162 - Posted: 21 Jan 2011, 23:17:21 UTC - in response to Message 1069042.

we had two large storerooms of full height 5.25" 1GB Micropolis SCSI drives and one storeroom would be empty in a month.


Hmm, the M-word might be key there. I was the first kid on my block to own a gigabyte hard drive, which enabled me to do audio editing, since an entire concert would fit on the drive. The actual drive was a 1.6 GB IDE Micropolis, but back then I think it was pretty common to sell the same drive hardware with either an IDE or a SCSI interface, with maybe a $50 premium for the SCSI variant. I was extremely pleased to pay a little over $1000 for my drive.

The first one failed less than two days after installation--not subtle bad sectors, but a hard failure. The warranty replacement went OK, but the second one lasted only a couple of years before also failing hard.

Now, a single user's experience with a tiny sample is not the way to judge HD reliability, but I have heard that Micropolis did not lead the field in reliability even in their good days. How did Cray come to choose them?

Oh, yes, and my application was not hammering the thing at all. I probably did active audio work less than 1% of the time, and did not yet have an internet connection, so the drive was closer to excess idle than to excess activity.
____________

Alex Hilleary
Send message
Joined: 9 May 01
Posts: 3
Credit: 6,629,399
RAC: 569
United States
Message 1069631 - Posted: 22 Jan 2011, 22:58:06 UTC

I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They had to prove that they had all these drives that were sold but not yet delivered, so somebody packaged a bunch of real bricks, not dead drives, to show an auditor. Of course, today that looks like mere child's play after Madoff, Enron, and the others...
____________

Philhnnss
Send message
Joined: 22 Feb 08
Posts: 57
Credit: 10,729,274
RAC: 8,845
United States
Message 1069649 - Posted: 22 Jan 2011, 23:35:19 UTC - in response to Message 1069631.
Last modified: 23 Jan 2011, 0:29:00 UTC

I think it was MiniScribe that was shipping the bricks.


New York Times

http://query.nytimes.com/gst/fullpage.html?res=950DE3DE1130F930A2575AC0A96F948260

archae86
Send message
Joined: 31 Aug 99
Posts: 889
Credit: 1,572,794
RAC: 3
United States
Message 1069650 - Posted: 22 Jan 2011, 23:37:38 UTC - in response to Message 1069631.

Alex Hilleary wrote:
I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time.

Close, but you are thinking of MiniScribe. I'm not sure whether the bricks story was ever proven or whether the fact that employees believed it to be true was simply cited as one illustration of just how completely broken things were. It was a famous mess either way.
____________

John McLeod VII
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 15 Jul 99
Posts: 24787
Credit: 524,053
RAC: 86
United States
Message 1069657 - Posted: 23 Jan 2011, 0:00:34 UTC - in response to Message 1069650.

Alex Hilleary wrote:
I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time.

Close, but you are thinking of MiniScribe. I'm not sure whether the bricks story was ever proven or whether the fact that employees believed it to be true was simply cited as one illustration of just how completely broken things were. It was a famous mess either way.

They were not actual bricks, but one of the last rounds of drives only worked as bricks. I got three of them, one after another, that had to be replaced under warranty; the fourth one lasted for about five years. The first one I got lasted 3 days, the second lasted 7 hours, and the third lasted 1 hour. (I was a bit peeved about spending about 6 hours filling the disk from backups before the imminent failure each time except the last.) The solution was to send me a drive with a different capacity (25% larger) that did not have the infant-mortality problem: the drive I ordered was 120 MB and failed three times; the drive I ended up with was 150 MB. These were among the largest drives available at the time. MiniScribe went out of business within two years of this fiasco.
____________


BOINC WIKI

Profile tullioProject donor
Send message
Joined: 9 Apr 04
Posts: 3816
Credit: 393,242
RAC: 238
Italy
Message 1069853 - Posted: 23 Jan 2011, 13:34:00 UTC

I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many reformattings and reloadings of the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB.
Tullio
____________

John McLeod VII
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 15 Jul 99
Posts: 24787
Credit: 524,053
RAC: 86
United States
Message 1069896 - Posted: 23 Jan 2011, 18:18:45 UTC - in response to Message 1069853.

I think I have a 40 MB MiniScribe disk on my AT&T UNIX PC (PC 7300, aka Safari) dating back to 1986. It is still working, after many reformattings and reloadings of the UNIX System V OS from 5 1/4 inch disks. Memory is a hefty 512 KB.
Tullio

Yes, some of their earlier drives were tanks - built to keep running through anything forever. It was some of their last disks that had problems.
____________


BOINC WIKI

PhonAcq
Send message
Joined: 14 Apr 01
Posts: 1624
Credit: 22,515,099
RAC: 4,915
United States
Message 1069940 - Posted: 23 Jan 2011, 19:16:53 UTC

So who bought this company, Micropolis? I seem to recall them being bought out.

Cosmic_Ocean
Avatar
Send message
Joined: 23 Dec 00
Posts: 2326
Credit: 8,868,118
RAC: 942
United States
Message 1069988 - Posted: 23 Jan 2011, 21:14:47 UTC - in response to Message 1069940.

So who bought this company, Micropolis? I seem to recall them being bought out.

Quote from the wiki:

This company was one of the many hard drive manufacturers in the 1980s and 1990s that went out of business, merged, or closed their hard drive divisions, as capacities and demand for products increased and profits became hard to find. While Micropolis was able to hold on longer than many of the others, it ultimately sold its hard drive business to Singapore Technologies (now Temasek Holdings), which has since ceased to market the brand.

After the disk business sale, Micropolis was reorganized as StreamLogic Corporation, which declared bankruptcy in 1997 amid securities fraud allegations.

____________

Linux laptop uptime: 1484d 22h 42m
Ended due to UPS failure, found 14 hours after the fact

zoom314Project donor
Avatar
Send message
Joined: 30 Nov 03
Posts: 46757
Credit: 36,999,451
RAC: 3,420
United States
Message 1070139 - Posted: 24 Jan 2011, 6:18:21 UTC - in response to Message 1069631.

I'd swear that Micropolis was the company that got caught packaging bricks in order to make its quarterly numbers one time. They had to prove that they had all these drives that were sold but not yet delivered, so somebody packaged a bunch of real bricks, not dead drives, to show an auditor. Of course, today that looks like mere child's play after Madoff, Enron, and the others...

Actually, years ago I had to go through 10 Seagate ST4096 MFM 5.25" FH HDDs before I had one that worked, so yeah, there were crap drives being made, even by Seagate. Today there is no such thing as a new 5.25" FH (full-height) HDD anymore, but those were bricks all right.
____________
My Facebook, War Commander, 2015

zii
Send message
Joined: 24 May 03
Posts: 7
Credit: 828,565
RAC: 0
Sweden
Message 1072646 - Posted: 31 Jan 2011, 9:57:03 UTC - in response to Message 1068814.

Also, by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems, but this wasn't helping - in fact it was making things worse. The problem is not a lack of network bandwidth, but disk i/o. The results have to live somewhere, and they require lots of random reads/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) so that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which also (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have the upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, which need to read both piles at once.


(First post, so please bear with me...)

My spontaneous thought when reading the above is that, since the entire S@H thingie is about distribution... shouldn't there be a way to distribute this too?

Just as the home computers might not always be so fast, but compensate for that with their large numbers, wouldn't it be possible to have even more upload servers (not just two), and perhaps one dedicated "indexing server" to keep track of which upload server has which data...

And by increasing the number of upload servers to more than two, you also distribute the network traffic, so even if the total traffic gets higher, the load gets spread out more.
In other words, maybe just two upload servers isn't enough to distribute the network traffic, but with even more servers, sheer numbers will compensate for the network limitations.

I realise I'm probably just missing something more or less obvious here, just thought I'd mention it anyways... ;)
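To make the indexing-server idea concrete, here is a tiny, purely hypothetical sketch of the bookkeeping it would do: a map from result file name to whichever upload server accepted it, so downstream processes could find each result. The class, method names, and the example result name are invented for illustration and are not part of BOINC.

    # Hypothetical sketch of the "indexing server" idea above: track which
    # upload server holds which result file. Not actual BOINC code.
    from typing import Optional

    class ResultIndex:
        def __init__(self):
            self._location = {}  # result file name -> upload server host name

        def register(self, result_name: str, server: str) -> None:
            """Record that `server` accepted and stored `result_name`."""
            self._location[result_name] = server

        def locate(self, result_name: str) -> Optional[str]:
            """Return the server holding `result_name`, or None if unknown."""
            return self._location.get(result_name)

    index = ResultIndex()
    index.register("example_result_123_0", "upload3.example.edu")
    print(index.locate("example_result_123_0"))  # -> upload3.example.edu

As the later replies note, all of the results still have to converge on the machine doing validation, so an index like this only moves the bottleneck around rather than removing it.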
____________

John McLeod VII
Volunteer developer
Volunteer tester
Avatar
Send message
Joined: 15 Jul 99
Posts: 24787
Credit: 524,053
RAC: 86
United States
Message 1072883 - Posted: 1 Feb 2011, 0:09:31 UTC - in response to Message 1072646.

Also, by the way, somebody asked if we should have two upload servers. We used to have the upload server split onto two systems, but this wasn't helping - in fact it was making things worse. The problem is not a lack of network bandwidth, but disk i/o. The results have to live somewhere, and they require lots of random reads/writes. So it's best if the upload server saves the results on directly attached storage. If it is also serving them over NFS (or the equivalent) so that a second upload server can write to them, it's too much of an overhead drag. So the upload server has to be a single server which also (1) holds the results and (2) does as much of the backend processing on these result files as possible. I think right now the only backend processing on results which bruno does NOT do is assimilation, which vader handles. You might think "why not just have the upload server save the results IT gets on ITS own storage?" Then we end up with two piles of results, randomly split, and the NFS/mounting bottleneck is simply pushed down the pike to the validators, which need to read both piles at once.


(First post, so please bear with me...)

My spontaneous thought when reading the above is that, since the entire S@H thingie is about distribution... shouldn't there be a way to distribute this too?

Just as the home computers might not always be so fast, but compensate for that with their large numbers, wouldn't it be possible to have even more upload servers (not just two), and perhaps one dedicated "indexing server" to keep track of which upload server has which data...

And by increasing the number of upload servers to more than two, you also distribute the network traffic, so even if the total traffic gets higher, the load gets spread out more.
In other words, maybe just two upload servers isn't enough to distribute the network traffic, but with even more servers, sheer numbers will compensate for the network limitations.

I realise I'm probably just missing something more or less obvious here, just thought I'd mention it anyways... ;)

Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a gigabit up the hill will actually help much, as that gigabit is shared among several labs.
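For a rough sense of scale, and purely as back-of-the-envelope arithmetic: 100 Mbit/s is at most 12.5 MB/s, or roughly 1 TB per day before any protocol overhead, and a gigabit link raises that ceiling tenfold, minus whatever share the other labs take.

    # Back-of-the-envelope ceiling for the link up the hill (ignores overhead).
    def max_tb_per_day(link_mbit_per_s):
        bytes_per_s = link_mbit_per_s * 1_000_000 / 8
        return bytes_per_s * 86_400 / 1e12  # seconds per day, then bytes -> TB

    for mbit in (100, 1000):
        print(f"{mbit:>4} Mbit/s -> {max_tb_per_day(mbit):.2f} TB/day theoretical maximum")
    # 100 Mbit/s -> ~1.08 TB/day; 1000 Mbit/s -> ~10.80 TB/day, shared among labs.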
____________


BOINC WIKI

zii
Send message
Joined: 24 May 03
Posts: 7
Credit: 828,565
RAC: 0
Sweden
Message 1073011 - Posted: 1 Feb 2011, 6:25:21 UTC - in response to Message 1072883.

Unfortunately, the uploaded data has to end up at Berkeley for validation and incorporation into the science database. The bottleneck here is the 100 Mbit link up the hill. We shall see whether the project to get a gigabit up the hill will actually help much, as that gigabit is shared among several labs.


Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?
____________

Profile KWSN THE Holy Hand Grenade!
Volunteer tester
Avatar
Send message
Joined: 20 Dec 05
Posts: 1990
Credit: 10,986,247
RAC: 13,856
United States
Message 1073196 - Posted: 1 Feb 2011, 17:14:05 UTC - in response to Message 1073011.



Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?


Sorry, but all the data HAS to go through one server, as that server does all the validating, assimilation, and deletion. As it has been explained to me, any other arrangement (and several have been tried, so I understand...) creates too much network thrashing as the WUs/results get tossed back and forth between servers.
____________
.

1mp0£173
Volunteer tester
Send message
Joined: 3 Apr 99
Posts: 8423
Credit: 356,897
RAC: 0
United States
Message 1073243 - Posted: 2 Feb 2011, 0:03:13 UTC - in response to Message 1073196.
Last modified: 2 Feb 2011, 0:05:25 UTC



Sorry if I was unclear...

I didn't mean distributed as "distributed between users", but rather as "distributed between a larger number of local upload servers at Berkeley".

It won't solve the problem with the 100 Mbit link (but it won't make it worse either); my thought was that it might, however, solve the problem of concentrating all the data on one single upload server, which of course doesn't give much redundancy AND, as far as I've understood, is a bottleneck even when it works at its best?


Sorry, but all the data HAS to go through one server, as that server does all the validating, assimilation, and deletion. As it has been explained to me, any other arrangement (and several have been tried, so I understand...) creates too much network thrashing as the WUs/results get tossed back and forth between servers.

It's always good to read RFC-1925.

What we're talking about is Truth #6: it is easier to move a problem around than it is to solve it.

First of all, this isn't a "problem" because the BOINC client will keep trying to upload until it succeeds, and data is no more likely to be lost at the client (cruncher) than it is on some intermediate server.

Distributing upload servers adds a whole layer of logic to deal with getting the files from the upload servers to the central site for validation, and it doesn't change the amount of data that has to be transmitted to the central site.

It moves the problem out of the client, and into a whole new infrastructure, but the problem still exists.

"Redundancy" is expensive, and it's only important if you have fickle consumers on an E-Commerce site or something that has to be done in real-time.
