Although I'm not sure that the word organisation has ever been looked up by these people.
My first suggestion would be to close membership of this 'beta' test at its current number (50,000 is being bandied about).
My second suggestion is: tell us the truth, more often. The project obviously is not, repeat NOT, running normally; that much is obvious from there still being no work, still no credit, and totally bogged-down web pages from too many people trying to access a too-corrupt database. Even a small to medium-sized corporate network has more data flying around than the current SETI BOINC, and such networks often have larger user bases as well. My own small intranet, while very amateurish, seems to be much more competent. I'm appalled that such a small user base (compared to SETI Classic) has brought the project to its knees. I can only echo someone else's jibe along the lines of 'what are they running it on, a 486DX2/66?'.
Thirdly, secure the site (user name, password, SSL, whatever) to prevent a repeat of the last 'breach'. After all, we only number 50,000-ish; there must be some vaguely intact copy of all our details, and Berkeley could email us the credentials.
I've now run out of BOINC work, so my 'farm' is back on Classic. As far as I know, my work, after what looked like an upload, has disappeared into cyberspace never to be seen again. But do I care? No; what I care about is the credibility of this exciting project, which is very rapidly going down the pan.
Most of the above could probably be achieved with just minor tweaks to the web pages and web server. But hey, who am I? Someone, please tell me, and the rest of us, why it can't be done.
I can't fix it myself, and I by no means claim to be able to; I have enough trouble understanding PHP, MySQL, and all the rest. I wish I understood it better, because I believe it's probably the future of the internet (money to be made and all that), but I don't. Someone over at Berkeley, however, does. Yet apart from things getting more broken, there seems to be no evidence of any 'mending' activity, just more 'lies' and disinformation to placate us.
On a lighter note, this has taken ages to type out, because I've been almost constantly fighting my new kitten: stopping it pulling out my laptop's network cable, stopping it biting my other cabling, and keeping it out of any other trouble. So I'll be miffed if this disappears :-)
TTFN, Ken Phillips (UK)
> My second suggestion is tell us the truth, more often
This is a frequent complaint, and one I agree with. More frequent news updates would go a long way towards silencing all the complaining and moaning around here. Predictor@home does a much better job of this, with almost daily news updates.
> Even a small to medium sized
> corporate network has more data flying around than the current seti boinc,
> they often have larger user bases as well.
Uh... where did you pull this little bit of information from? Because it certainly didn't come from the real world. A company with 50,000 employees is considered very LARGE, and its corporate network will be at least somewhat decentralized to handle the load. If you mean people accessing the company website, then once again you are mistaken: most corporate websites are mostly static, and my P3-500 would have no problem serving up a couple hundred thousand static pages a day. The bottleneck here is the database server. EVERYTHING has to go through it. Every time you load a web page around here, the database has to serve up information for it. Every time a new work unit is created, it has to be inserted. Every time a work unit is distributed, the database must be updated. Every time a result is received, the database has to check how many times it has been returned and whether or not it needs to go out again. Every bit of credit that is awarded comes from the database.
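To make it concrete why the database is the chokepoint, here's a rough sketch of what a single returned result costs in database traffic. This is purely illustrative: the table names, columns, and quorum logic are invented for the example, not BOINC's actual schema or validator code.

```python
import sqlite3

# Illustrative only: tables and the quorum rule are made up for this sketch,
# not copied from the real BOINC server database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE workunit (id INTEGER PRIMARY KEY, quorum INTEGER)")
db.execute("CREATE TABLE result (wu_id INTEGER, host INTEGER, value REAL)")
db.execute("INSERT INTO workunit VALUES (1, 3)")  # this unit needs 3 results

def receive_result(wu_id, host, value):
    # Even this toy version costs one write and two reads per returned result:
    db.execute("INSERT INTO result VALUES (?, ?, ?)", (wu_id, host, value))
    (returned,) = db.execute(
        "SELECT COUNT(*) FROM result WHERE wu_id = ?", (wu_id,)).fetchone()
    (quorum,) = db.execute(
        "SELECT quorum FROM workunit WHERE id = ?", (wu_id,)).fetchone()
    # If the quorum isn't met yet, the scheduler must send the unit out again.
    return "done" if returned >= quorum else "reissue"

print(receive_result(1, 101, 3.14))  # prints "reissue"
print(receive_result(1, 102, 3.14))  # prints "reissue"
print(receive_result(1, 103, 3.14))  # prints "done"
```

Multiply those few queries by tens of thousands of hosts returning work, plus every forum and stats page hit, and it's clear why no amount of web-server tuning helps if the database behind it can't keep up.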
Tuning databases is a very tricky thing. They have been making progress, but the main problem is simply a matter of data transfer rates. They could be running it on a quad AMD64 at 5 terahertz and it wouldn't help, because the real limit is the speed at which the data can be read off the hard disks. They have a RAID set up for it, but it just isn't enough. Better disk I/O costs real money and takes some time; they have ordered new hardware and it is on its way.
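The disk-bound argument is easy to see with some back-of-envelope arithmetic. Every number below is an assumption picked purely for scale (typical spindle IOPS of the era, a guessed result-return rate, a guessed count of random I/Os per result), not a measurement of the actual SETI/BOINC servers:

```python
# All figures are invented for illustration, not measurements of Berkeley's
# hardware. The point is the shape of the math, not the exact values.
RANDOM_IOPS_PER_DISK = 100   # ballpark for one early-2000s spindle
RESULTS_PER_SECOND = 50      # assumed return rate while a backlog drains
IOS_PER_RESULT = 10          # index lookups, row reads/updates, log writes...

needed_iops = RESULTS_PER_SECOND * IOS_PER_RESULT
disks_needed = needed_iops / RANDOM_IOPS_PER_DISK
print(f"~{needed_iops} random I/Os per second, ~{disks_needed:.0f} disks' worth")
# prints: ~500 random I/Os per second, ~5 disks' worth

# A faster CPU changes none of this; only more/faster spindles (or enough RAM
# to cache the hot part of the database) move the bottleneck.
```

And that's before counting a single web page view, which is why "run it on a bigger box" misses the point.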
> My own small intranet, while very
> amateurish, seems to be much more competent.
HAHAHAHAHAHA! Are you trying to kill me with laughter?
> Most of the above could probably be achieved by just minor tweaks of the
> webpages and webserver, but hey who am I, someone, please tell me, and the
> rest of us why it can't be achieved.
They have made many 'minor tweaks' in the past couple of weeks; there has been plenty of CVS activity. Things have been improved and tuned, but once again this ultimately comes down to disk access.
> Someone over at
> berkeley, however, does, but apart from things getting more broken there seems
> to be no evidence of any 'mending' activity, just more 'lies' and
> dis-information, to placate us.
No. There is no evidence visible to YOU, which just goes back to the issue of infrequent news updates, on which I agree with you. The developers obviously prefer programming to writing news updates. However, they are NOT sitting on their hands over there. Watch the CVS activity and you will see that they are making changes and fixing bugs at a good rate.
> On a lighter note, this has taken ages to type out, because I've been almost
> constantly fighting my new kitten to stop it pulling out my laptops network
> cable, stop it biting my other cabling, and keeping it out of any other
> trouble, so I'll be miffed if this disappears :-)
I have a few ideas of how to fix this problem but PETA would probably object to such things being posted on the internet :)
- A member of The Knights Who Say NI!
Possibly the best stats site in the universe: