Strange result, how is this possible?

Message boards : Number crunching : Strange result, how is this possible?
Profile Gary Charpentier · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 25 Dec 00
Posts: 30651
Credit: 53,134,872
RAC: 32
United States
Message 1061173 - Posted: 30 Dec 2010, 2:47:53 UTC

IIRC -9 errors, if real, indicate the work unit was polluted with RFI.

IIRC the system grants credit for the time spent crunching even though the result is unusable, because the crunch time was needed to determine that the W/U should be thrown away.


ID: 1061173 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1061190 - Posted: 30 Dec 2010, 3:18:29 UTC - in response to Message 1061169.  
Last modified: 30 Dec 2010, 3:55:12 UTC

How can the Cuda machine here, wuid=666294818, validate and be given credit when its result is a "-9 result_overflow", while the other Cuda, which btw also overflowed, got an invalid result?


A few extra things to consider:
- A '-9 overflow' is not technically an 'error' but an 'Informational Message', indicating that more reportable signals are present than the allocated result space can hold. This would often indicate RFI (as mentioned) but can have other causes (and not just on GPUs either).

- The selected 'Canonical' result, i.e. the one entered into the science database, was one of the CPU ones; the other two granted were at least 'weakly similar', and the remaining erroneous one was not (i.e. was 'different').

- There are known precision and signal ordering differences that can be exposed with the Cuda apps compared to the CPU apps, particularly in the case of overflow ('correct' or not), and with signals near thresholds. This, IMO, is directly related to the CPU codebase being relatively mature, having some 10-11 years (x 10's to hundreds of peoples' contributions) toward refinement, with the Cuda codebase having more like on the order of 6 months x a few people.

- Given that the selected Canonical result was clearly 'the right' one (in this particular case), the science is not polluted by the clearly erroneous result, or even by the weakly similar overflow one. To my mind this case is actually an example of the system working as it should.
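For illustration, the overflow mechanism described above can be sketched roughly like this (a hypothetical Python sketch, not the actual SETI@home client code; the limit value and all names are made up):

```python
# Hypothetical sketch of the '-9 result_overflow' condition described above.
# Not the real SETI@home client code; MAX_REPORTABLE_SIGNALS is illustrative.

MAX_REPORTABLE_SIGNALS = 30  # allocated result space for reportable signals

def analyse(signals):
    """Collect reportable signals; stop early if the result space fills.

    Returns (reported_signals, overflowed). An overflow is informational:
    the task still completes and returns a (truncated) result, unlike a
    compute error, which returns nothing usable.
    """
    reported = []
    for s in signals:
        reported.append(s)
        if len(reported) >= MAX_REPORTABLE_SIGNALS:
            # Too many signals present, typically RFI: report what we
            # have so far and flag the result as an overflow.
            return reported, True
    return reported, False

# A noisy (RFI-laden) recording overflows; a quiet one does not.
noisy = ["spike"] * 100
quiet = ["pulse"] * 5
assert analyse(noisy) == (["spike"] * 30, True)
assert analyse(quiet) == (["pulse"] * 5, False)
```

This also shows why the overflow still earns credit: real work was done up to the point the storage filled.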

Jason

[Edit:] a bit more info from:
http://www.boinc-wiki.info/Canonical_Result
So the only things you can be certain about are:

1. The Canonical Result is strongly similar to at least one of the other Results.
2. If there is "no consensus yet", no pair of Results was strongly similar.
3. Any Result that didn't error-out for various reasons, and is marked "invalid", is not weakly similar to the Canonical Result.
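The three rules quoted above can be sketched as a toy validator (hypothetical Python: whole results are stood in for by single numbers, and the similarity tolerances are made up; the real BOINC validator compares lists of signals, not scalars):

```python
# Toy sketch of BOINC-style canonical-result selection, following the three
# rules quoted above. All names and tolerances are hypothetical.

def strongly_similar(a, b, tol=0.01):
    return abs(a - b) <= tol

def weakly_similar(a, b, tol=0.1):
    return abs(a - b) <= tol

def validate(results):
    """Pick a canonical result and classify the rest.

    Returns (canonical, {result: 'valid'|'invalid'}), or (None, {}) when no
    pair of results is strongly similar ("no consensus yet").
    """
    for i, a in enumerate(results):
        for b in results[i + 1:]:
            if strongly_similar(a, b):
                canonical = a  # first member of the first matching pair
                status = {}
                for r in results:
                    # Rule 3: anything not weakly similar to the canonical
                    # result is marked invalid.
                    status[r] = "valid" if weakly_similar(r, canonical) else "invalid"
                return canonical, status
    return None, {}  # no consensus: another replica must be issued

# Two close results reach consensus; the outlier is marked invalid.
canonical, status = validate([1.00, 1.005, 5.0])
assert canonical == 1.00
assert status[1.005] == "valid"
assert status[5.0] == "invalid"
```

Note this toy also matches Gary's later observation: the canonical result falls out as the first member of the first strongly similar pair found.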

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1061190 · Report as offensive
Dave Stegner
Volunteer tester
Avatar

Send message
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1061211 - Posted: 30 Dec 2010, 3:59:12 UTC

Sten-Arne's post got me looking around at some of my pendings. I sure have a lot of inconclusives; I never have had before. My machines have not changed configuration in over a year and have never had issues with validating. But pair it with a Cuda and look out.

http://setiathome.berkeley.edu/workunit.php?wuid=668605751

I guess I will need to monitor this closely, as I am not interested in spending money on machines and electricity for unstable results.

Things should get really fun when ATI apps are released, MB will be more complicated and AP will have a chance to become unstable also.


Dave

ID: 1061211 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1061218 - Posted: 30 Dec 2010, 4:05:52 UTC - in response to Message 1061211.  
Last modified: 30 Dec 2010, 4:06:48 UTC

Sten-Arne's post got me looking around at some of my pendings. I sure have a lot of inconclusives; I never have had before. My machines have not changed configuration in over a year and have never had issues with validating. But pair it with a Cuda and look out.

http://setiathome.berkeley.edu/workunit.php?wuid=668605751

I guess I will need to monitor this closely, as I am not interested in spending money on machines and electricity for unstable results.

Things should get really fun when ATI apps are released, MB will be more complicated and AP will have a chance to become unstable also.


Now this example is a much clearer case of one host generating spurious results (for whatever reason). The validator needs a third result to decide which is really 'correct' :)
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1061218 · Report as offensive
Dave Stegner
Volunteer tester
Avatar

Send message
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1061231 - Posted: 30 Dec 2010, 4:19:37 UTC

I agree, in the interest of the science we need a third.

But, as stated, one machine is running a stable and proven app and the other is running Cuda. If the third proves that the stable app is correct, nothing will be done about the Cuda machine. I think that was the OP's point.

It is not only a waste of resources on the client's part, it is a huge waste on Seti's part.

Dave

ID: 1061231 · Report as offensive
Profile skildude
Avatar

Send message
Joined: 4 Oct 00
Posts: 9541
Credit: 50,759,529
RAC: 60
Yemen
Message 1061240 - Posted: 30 Dec 2010, 4:29:19 UTC - in response to Message 1061169.  

How can the Cuda machine here, wuid=666294818, validate and be given credit when its result is a "-9 result_overflow", while the other Cuda, which btw also overflowed, got an invalid result?

Also, the other 2 machines which also got valid results and credits, did not turn in overflow results.

Worth noting also is that the Cuda that got credit has thousands of invalid "-9 result_overflow" results, and almost no (if any) valid results from the Cuda card.

I fear that the hundreds of mismanaged/misconfigured Cuda machines threaten the integrity of the science here. Looking through my results and my wingmates', I see more and more Cuda machines turning in invalid results, overflows by the thousands, but in this case an invalid result all of a sudden becomes valid. Thank goodness though that his/her result in this case didn't become the canonical result.

But when two misconfigured Cuda machines get paired against each other, which happens very often, and both overflow and validate, it must pollute the science with invalid data.

There's something very sick about not being able to throw these misconfigured machines out of the project. The more I see of them, the less I trust that we will ever get any valid result from this project.

Edit: And if anything would make me jump ship here, it is if these hundreds of misconfigured Cuda machines are allowed to continue to waste bandwidth, disk space, and possibly even destroy the credibility of the science behind this project.

It appears that the results are similar enough that it granted you credit. The 2 you matched with both had 20 spikes, you had 30; the other CUDA found 7 pulses.


In a rich man's house there is no place to spit but his face.
Diogenes Of Sinope
ID: 1061240 · Report as offensive
Profile jason_gee
Volunteer developer
Volunteer tester
Avatar

Send message
Joined: 24 Nov 06
Posts: 7489
Credit: 91,093,184
RAC: 0
Australia
Message 1061259 - Posted: 30 Dec 2010, 4:47:40 UTC - in response to Message 1061231.  
Last modified: 30 Dec 2010, 5:09:12 UTC

I agree, in the interest of the science we need a third.

But, as stated, one machine is running a stable and proven app and the other is running Cuda. If the third proves that the stable app is correct, nothing will be done about the Cuda machine. I think that was the OP's point.

It is not only a waste of resources on the client's part, it is a huge waste on Seti's part.



OK, I agree in principle the owner of the machine needs to look at his system, no argument there. But let's put all this in a little scientific perspective. We're talking about a data reduction mechanism using redundancy to validate the data. That inbuilt redundancy in itself could be viewed as an inefficiency, since if every host was truly reliable we wouldn't need such waste. I view it as the Boinc mechanisms protecting the scientific integrity of the data, and using a resource that it has ample supply of to do so (compute power).

Since very few of us CPU users use ECC RAM, fault tolerant hard drive arrays, etc, there are occasions CPU hosts can go haywire too... Just ask msattler, LoL

When push comes to shove, none of the signals in any of those results means more than 'there might have been something at that point at this time'. To be a 'valuable' detection of a candidate, potentially for re-observation, the project has determined there must be 'persistency' as well, which incidentally is something the famous 'Wow! signal' never achieved.

Boinc is designed to inherently mistrust the results being returned, and these are examples where the system is working, rather than broken IMO. Show me a clear example of a bodgy result being selected as canonical and I'll join you jumping up and down. I'm sure some exist, but am fairly confident other steps in the science catch such 'issues'.

Having said all that, there is a tradeoff going on. New potent compute power is being introduced, that is based on historic supercomputer designs, but is fundamentally (relatively) new to this particular application. These are vastly different programming techniques to traditional CPU programming, and there are liable to be growing pains.

No doubts some have expectations for software to be 'perfect' before any kind of release, but sadly it just doesn't work that way, and I suggest those expectations for something this complex are not realistic.

So the choices become to either write the progress off as 'a bad idea', or go through the growing pains with sufficient safeguards in place to avoid 'contamination'. The reason there isn't really a 'middle ground' in this process, as such, is that you cannot find & fix errors that don't occur.
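The redundancy tradeoff described above can be illustrated with a toy simulation (hypothetical numbers and names; BOINC's actual replication policy is configured per-project): each workunit is replicated, and a disagreement costs one more replica rather than polluting the science database.

```python
# Toy simulation of redundancy-based validation: a workunit needs a quorum
# of two agreeing results, and extra replicas are issued until it gets one.
# All numbers here are illustrative, not SETI@home's real settings.
import random

def crunch_workunit(reliable_fraction=0.95):
    """Simulate one workunit: issue replicas until two trustworthy results
    agree, and return how many replicas were needed in total."""
    replicas = 0
    good = 0
    while good < 2:          # need a strongly-similar pair (quorum of 2)
        replicas += 1
        if random.random() < reliable_fraction:
            good += 1        # this host returned a trustworthy result
    return replicas

random.seed(42)
avg = sum(crunch_workunit() for _ in range(10_000)) / 10_000
# With mostly-reliable hosts the average stays close to the minimum of 2
# replicas: the redundancy overhead is the price of data integrity.
assert 2.0 <= avg < 2.3
```

The design point this illustrates is the one made above: compute power is the resource the project has in ample supply, so spending a third (or fourth) replica on a disputed workunit is cheap insurance for the science.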

Jason
"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
ID: 1061259 · Report as offensive
Profile Gary Charpentier · Crowdfunding Project Donor · Special Project $75 donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 25 Dec 00
Posts: 30651
Credit: 53,134,872
RAC: 32
United States
Message 1061298 - Posted: 30 Dec 2010, 6:22:09 UTC

FYI I believe the canonical result is the first of the matching results returned.

There are two possible errors that could get into the science.
False positive
False negative

In the first case it will sort itself out at the re-observation stage.
In the second case, we might miss ET unless he keeps sending.


ID: 1061298 · Report as offensive
Dave Stegner
Volunteer tester
Avatar

Send message
Joined: 20 Oct 04
Posts: 540
Credit: 65,583,328
RAC: 27
United States
Message 1061308 - Posted: 30 Dec 2010, 6:45:25 UTC

If I am reading the intent of the original post correctly, we are talking about machines that create issues.

It does not matter if a machine is using an old app (which can create problems also) or a bleeding-edge supercomputer:

If it is creating issues, it should be cut off.
Dave

ID: 1061308 · Report as offensive
Kevin Olley

Send message
Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1061387 - Posted: 30 Dec 2010, 12:17:27 UTC - in response to Message 1061374.  

Sten-Arne's post got me looking around at some of my pendings. I sure have a lot of inconclusives; I never have had before. My machines have not changed configuration in over a year and have never had issues with validating. But pair it with a Cuda and look out.

http://setiathome.berkeley.edu/workunit.php?wuid=668605751

I guess I will need to monitor this closely, as I am not interested in spending money on machines and electricity for unstable results.

Things should get really fun when ATI apps are released, MB will be more complicated and AP will have a chance to become unstable also.



Yeah, and again the Cuda you're paired with is one of those that produces almost no valid results whatsoever. It only produces -9's, or other errors. It downloads thousands of WU's, has them sitting in its inner self, and trashes all of them.

Keep watching your results and you'll see plenty of other Cuda wingmates that behave the same. It's getting worse and worse over time, and if this is allowed to continue, badly set up, overheating, or otherwise faulty Cuda's will eat up all bandwidth, all server power, and everything else the project will throw at them, and this project will grind to a halt because nobody was prepared to throw out the ones crapping in their own nest.

A CPU machine is easy for anyone to set up and join to the project, and then just forget about; it will keep producing perfectly valid results until it finally dies of old age. However, it becomes more and more obvious that getting a Cuda machine set up properly takes more than many who join here are capable of, or even know anything about.

A Cuda machine will throw fits with 100% certainty if you do not make sure everything is working properly all the time. Cooling seems to be number one (more issues come up if you install opt applications, perhaps, but that's nothing the casual fly-by user even knows anything about), and how many casual PC users out there even know how to read the temperature of their GPU, or CPU for that matter?

No, being able to install GPU crunching capabilities on their machine should IMHO not be as easy as it is to install CPU crunching. Continue on this path, and hear my words: the project will grind to a halt because it is flooded from all directions with millions of WU's downloaded and uploaded by badly configured or maintained GPU machines. WU's that will have to be sent out to a third, fourth, fifth computer to be crunched again, and with the more, and more powerful, GPU's we get, it will fast become impossible to handle.

Even worse is my lingering suspicion that all these badly managed CUDA's slowly pollute the science database with bogus results, making this whole project totally worthless when it comes to its science results.


I was worried about trashing units when I upgraded, and yes I trashed two (on CPU), entirely my fault. But I tried to keep damage to a minimum.

Yes, I do get a few "-9 result_overflow" results. Hopefully these are caused by the actual workunits, not by errors on my machine, which I keep a regular watch over, including monitoring temps on both GPU's and CPU. Dust bunnies are regularly evicted.

I did not realise that some machines could be performing that badly.


Kevin


ID: 1061387 · Report as offensive
Profile James Sotherden
Avatar

Send message
Joined: 16 May 99
Posts: 10436
Credit: 110,373,059
RAC: 54
United States
Message 1061394 - Posted: 30 Dec 2010, 12:46:30 UTC - in response to Message 1061387.  

Sten-Arne's post got me looking around at some of my pendings. I sure have a lot of inconclusives; I never have had before. My machines have not changed configuration in over a year and have never had issues with validating. But pair it with a Cuda and look out.

http://setiathome.berkeley.edu/workunit.php?wuid=668605751

I guess I will need to monitor this closely, as I am not interested in spending money on machines and electricity for unstable results.

Things should get really fun when ATI apps are released, MB will be more complicated and AP will have a chance to become unstable also.



Yeah, and again the Cuda you're paired with is one of those that produces almost no valid results whatsoever. It only produces -9's, or other errors. It downloads thousands of WU's, has them sitting in its inner self, and trashes all of them.

Keep watching your results and you'll see plenty of other Cuda wingmates that behave the same. It's getting worse and worse over time, and if this is allowed to continue, badly set up, overheating, or otherwise faulty Cuda's will eat up all bandwidth, all server power, and everything else the project will throw at them, and this project will grind to a halt because nobody was prepared to throw out the ones crapping in their own nest.

A CPU machine is easy for anyone to set up and join to the project, and then just forget about; it will keep producing perfectly valid results until it finally dies of old age. However, it becomes more and more obvious that getting a Cuda machine set up properly takes more than many who join here are capable of, or even know anything about.

A Cuda machine will throw fits with 100% certainty if you do not make sure everything is working properly all the time. Cooling seems to be number one (more issues come up if you install opt applications, perhaps, but that's nothing the casual fly-by user even knows anything about), and how many casual PC users out there even know how to read the temperature of their GPU, or CPU for that matter?

No, being able to install GPU crunching capabilities on their machine should IMHO not be as easy as it is to install CPU crunching. Continue on this path, and hear my words: the project will grind to a halt because it is flooded from all directions with millions of WU's downloaded and uploaded by badly configured or maintained GPU machines. WU's that will have to be sent out to a third, fourth, fifth computer to be crunched again, and with the more, and more powerful, GPU's we get, it will fast become impossible to handle.

Even worse is my lingering suspicion that all these badly managed CUDA's slowly pollute the science database with bogus results, making this whole project totally worthless when it comes to its science results.


I was worried about trashing units when I upgraded, and yes I trashed two (on CPU), entirely my fault. But I tried to keep damage to a minimum.

Yes, I do get a few "-9 result_overflow" results. Hopefully these are caused by the actual workunits, not by errors on my machine, which I keep a regular watch over, including monitoring temps on both GPU's and CPU. Dust bunnies are regularly evicted.

I did not realise that some machines could be performing that badly.


I too keep a close eye on my machines. I've dug around some wingmen and sometimes it's downright scary how some trash work by the thousands. And most won't even answer a PM. They just happily trash work. I'm wondering how many have the new Fermis and installed the wrong opt apps on them?

Old James
ID: 1061394 · Report as offensive
JohnDK · Crowdfunding Project Donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1061403 - Posted: 30 Dec 2010, 13:04:58 UTC

Seems this computer is one of those Sten-Arne is talking about: http://setiathome.berkeley.edu/show_host_detail.php?hostid=5293938
ID: 1061403 · Report as offensive
Kevin Olley

Send message
Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1061405 - Posted: 30 Dec 2010, 13:06:53 UTC - in response to Message 1061394.  


I too keep a close eye on my machines. I've dug around some wingmen and sometimes it's downright scary how some trash work by the thousands. And most won't even answer a PM. They just happily trash work. I'm wondering how many have the new Fermis and installed the wrong opt apps on them?



I am running Lunatics, not stock apps, on a pair of 470's, so I actually have the new Fermis with non-stock apps running without trashing work by the thousands.

The machine mentioned above is not running Fermis and is running stock apps, so it may not be down to the actual software or hardware but to how some users actually set up their hardware or software.

It ain't what you got, it's the way that you use it.


Kevin


ID: 1061405 · Report as offensive
Profile Bernie Vine
Volunteer moderator
Volunteer tester
Avatar

Send message
Joined: 26 May 99
Posts: 9954
Credit: 103,452,613
RAC: 328
United Kingdom
Message 1061410 - Posted: 30 Dec 2010, 13:30:35 UTC - in response to Message 1061405.  


I too keep a close eye on my machines. I've dug around some wingmen and sometimes it's downright scary how some trash work by the thousands. And most won't even answer a PM. They just happily trash work. I'm wondering how many have the new Fermis and installed the wrong opt apps on them?



I am running Lunatics, not stock apps, on a pair of 470's, so I actually have the new Fermis with non-stock apps running without trashing work by the thousands.

The machine mentioned above is not running Fermis and is running stock apps, so it may not be down to the actual software or hardware but to how some users actually set up their hardware or software.

It ain't what you got, it's the way that you use it.



Which I believe was Sten-Arne's point. But how do we ensure this doesn't happen?

I recently had a problem on a couple of my machines, but worked hard to recover the WU's and in the end succeeded. I have to admit it is odd to spend time and money on a fast cruncher and then never check it.

Bernie

ID: 1061410 · Report as offensive
Kevin Olley

Send message
Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1061420 - Posted: 30 Dec 2010, 13:57:37 UTC - in response to Message 1061410.  


I too keep a close eye on my machines. I've dug around some wingmen and sometimes it's downright scary how some trash work by the thousands. And most won't even answer a PM. They just happily trash work. I'm wondering how many have the new Fermis and installed the wrong opt apps on them?



I am running Lunatics, not stock apps, on a pair of 470's, so I actually have the new Fermis with non-stock apps running without trashing work by the thousands.

The machine mentioned above is not running Fermis and is running stock apps, so it may not be down to the actual software or hardware but to how some users actually set up their hardware or software.

It ain't what you got, it's the way that you use it.



Which I believe was Sten-Arne's point. But how do we ensure this doesn't happen?

I recently had a problem on a couple of my machines, but worked hard to recover the WU's and in the end succeeded. I have to admit it is odd to spend time and money on a fast cruncher and then never check it.

Bernie


There are some of us who have built or upgraded machines for doing SETI and other Boinc work, and then there are those who have got fast machines (probably mainly for gaming) and think they are being helpful.

Some of these will hopefully mature into dedicated crunchers, but others will probably forget they even installed it in the first place, and the only time we will lose them is when they upgrade their machines and forget to install it again.

Unfortunately this is probably a never-ending cycle.


Kevin


ID: 1061420 · Report as offensive
Profile Fred J. Verster
Volunteer tester
Avatar

Send message
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1061422 - Posted: 30 Dec 2010, 13:59:13 UTC - in response to Message 1061405.  
Last modified: 30 Dec 2010, 14:07:33 UTC


Seems this computer is one of those Sten-Arne is talking about: host 5293938.




The host in question does run FERMI's: GTX480 (2x). Maybe it's running too many WU's at a time; see this:


Stderr output

<core_client_version>6.10.58</core_client_version>
<![CDATA[
<stderr_txt>
setiathome_CUDA: Found 2 CUDA device(s):
Device 1 : GeForce GTX 480
totalGlobalMem = 1576468480
sharedMemPerBlock = 49152
regsPerBlock = 32768
warpSize = 32
memPitch = 2147483647
maxThreadsPerBlock = 1024
clockRate = 810000
totalConstMem = 65536
major = 2
minor = 0
textureAlignment = 512
deviceOverlap = 1
multiProcessorCount = 15
Device 2 : GeForce GTX 480
totalGlobalMem = 1576468480
sharedMemPerBlock = 49152
regsPerBlock = 32768
warpSize = 32
memPitch = 2147483647
maxThreadsPerBlock = 1024
clockRate = 810000
totalConstMem = 65536
major = 2
minor = 0
textureAlignment = 512
deviceOverlap = 1
multiProcessorCount = 15
setiathome_CUDA: CUDA Device 1 specified, checking...
Device 1: GeForce GTX 480 is okay
SETI@home using CUDA accelerated device GeForce GTX 480
V12 modification by Raistmer
Priority of worker thread rised successfully
Priority of process adjusted successfully
Total GPU memory 1576468480 free GPU memory 1063374848
setiathome_enhanced 6.02 Visual Studio/Microsoft C++

Build features: Non-graphics CUDA VLAR autokill enabled FFTW USE_SSE x86
CPUID: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz

Cache: L1=64K L2=256K

CPU features: FPU TSC PAE CMPXCHG8B APIC SYSENTER MTRR CMOV/CCMP MMX FXSAVE/FXRSTOR SSE SSE2 HT SSE3
libboinc: 6.3.22

Work Unit Info:
...............
WU true angle range is : 0.426153
After app init: total GPU memory 1576468480 free GPU memory 969003008
SETI@Home Informational message -9 result_overflow
NOTE: The number of results detected exceeds the storage space allocated.

Flopcounter: 204566200.959168

Spike count: 0
Pulse count: 31
Triplet count: 0
Gaussian count: 0


Something is using much memory.............?!
ID: 1061422 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Send message
Joined: 4 Jul 99
Posts: 14650
Credit: 200,643,578
RAC: 874
United Kingdom
Message 1061427 - Posted: 30 Dec 2010, 14:09:27 UTC - in response to Message 1061422.  

The host in question does run FERMI's: GTX480 (2x). Maybe it's running too many WU's at a time; see this:

Stderr output

V12 modification by Raistmer
Build features: Non-graphics CUDA VLAR autokill enabled FFTW USE_SSE x86


Something is using much memory.............?!

No, somebody has deliberately installed an 'optimised' (tweaked to reduce errors, but no real speedup) application, incompatible with their graphics card.

Raistmer, do you have any way to autokill your autokill application? ;-)))
ID: 1061427 · Report as offensive
Profile Fred J. Verster
Volunteer tester
Avatar

Send message
Joined: 21 Apr 04
Posts: 3252
Credit: 31,903,643
RAC: 0
Netherlands
Message 1061430 - Posted: 30 Dec 2010, 14:16:16 UTC - in response to Message 1061423.  
Last modified: 30 Dec 2010, 14:22:40 UTC


See wuid=669862967, and you'll find two super-trashing machines validating each other's -9's, making the likely right result invalid, and entering false data into the database and the science.



You're right, this doesn't look good and will put the wrong result in the database!


No, somebody has deliberately installed an 'optimised' (tweaked to reduce errors, but no real speedup) application, incompatible with their graphics card.

Raistmer, do you have any way to autokill your autokill application? ;-)))



You were just ahead of me ;-), Richard... And this is even worse!
ID: 1061430 · Report as offensive
Kevin Olley

Send message
Joined: 3 Aug 99
Posts: 906
Credit: 261,085,289
RAC: 572
United Kingdom
Message 1061433 - Posted: 30 Dec 2010, 14:20:23 UTC - in response to Message 1061423.  

The host in question does run FERMI's: GTX480 (2x). Maybe it's running too many WU's at a time; see this:
Spike count: 0
Pulse count: 31
Triplet count: 0
Gaussian count: 0

Something is using much memory.............?!





See wuid=669862967, and you'll find two super-trashing machines validating each other's -9's, making the likely right result invalid, and entering false data into the database and the science.

Edit: And soon it will be gone from view too, when the system purges old data. Sneaking into the database as a valid result, and nobody will react and do anything about this joke of science.


The machine that I was looking at was

http://setiathome.berkeley.edu/workunit.php?wuid=668605751

5625026

Owner *******
Created 2 Dec 2010 6:54:43 UTC
Total credit 101,626
Average credit 3,606.75
Cross project credit
CPU type GenuineIntel
Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz [Family 6 Model 26 Stepping 5]
Number of processors 8
Coprocessors [4] NVIDIA GeForce GTX 295 (869MB) driver: 26099
Operating System Microsoft Windows 7
Ultimate x64 Edition, (06.01.7600.00)
BOINC version 6.10.58
Memory 12279.12 MB
Cache 256 KB
Measured floating point speed 2911.97 million ops/sec
Measured integer speed 9213.48 million ops/sec
Average upload rate 39.93 KB/sec
Average download rate 381.27 KB/sec
Average turnaround time 0.03 days
Application details Show
Tasks 4300

ATM he has 6 valid tasks from 17 Dec
Kevin


ID: 1061433 · Report as offensive
JohnDK · Crowdfunding Project Donor · Special Project $250 donor
Volunteer tester
Avatar

Send message
Joined: 28 May 00
Posts: 1222
Credit: 451,243,443
RAC: 1,127
Denmark
Message 1061434 - Posted: 30 Dec 2010, 14:25:51 UTC - in response to Message 1061427.  

The host in question does run FERMI's: GTX480 (2x). Maybe it's running too many WU's at a time; see this:

Stderr output

V12 modification by Raistmer
Build features: Non-graphics CUDA VLAR autokill enabled FFTW USE_SSE x86


Something is using much memory.............?!

No, somebody has deliberately installed an 'optimised' (tweaked to reduce errors, but no real speedup) application, incompatible with their graphics card.

Raistmer, do you have any way to autokill your autokill application? ;-)))

Or how about the SETI project trying to contact these host owners to correct the problem and, if that doesn't help for whatever reason, blocking these hosts? Can they block hosts if they want?
ID: 1061434 · Report as offensive


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.