Message boards : Number crunching : Average Credit Decreasing?
Zalster Send message Joined: 27 May 99 Posts: 5517 Credit: 528,817,460 RAC: 242
On the upside, I feel like I have 980 Titans now with how many files I am going through :) Validation pending is approaching 2000 due to these things, lol
Brent Norman Send message Joined: 1 Dec 99 Posts: 2786 Credit: 685,657,289 RAC: 835
I found a new game: disable all tasks, then enable only MESSIER, then see how fast they disappear :)
John Neale Send message Joined: 16 Mar 00 Posts: 634 Credit: 7,246,513 RAC: 9
Is it fair to assume that these MESSIER031 tasks were created when the GBT was pointed at the Andromeda Galaxy?
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
Seems like over the last couple of days, my RAC has stabilized (on one machine) and even increased for two days (on the other), after falling for several days on both. Is this just a fluke, or is the worst over?
kittyman Send message Joined: 9 Jul 00 Posts: 51478 Credit: 1,018,363,574 RAC: 1,004
Seems like over the last couple of days, my RAC has stabilized (on one machine) and even increased for two days (on the other), after falling for several days on both. Mine seems to be recovering a bit as well. However, that's gonna go out the window if the servers can't continue to feed the GPUs enough kibbles. "Time is simply the mechanism that keeps everything from happening all at once."
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
Seems like over the last couple of days, my RAC has stabilized (on one machine) and even increased for two days (on the other), after falling for several days on both. I predict oscillation with the work mix, around +/- 37% of some approximate midpoint. As oscillation instabilities go, some hosts will stay on the low side and others on the high side of that, with roughly 5% at each extreme never crossing the middle over extended periods, and the other 90% of us flopping around. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
HAL9000 Send message Joined: 11 Sep 99 Posts: 6534 Credit: 196,805,888 RAC: 57
Seems like over the last couple of days, my RAC has stabilized (on one machine) and even increased for two days (on the other), after falling for several days on both. Prior to v8 & GBT work, a RAC oscillation of +/- 25-30% was not uncommon on some of my machines. The CPU-only machines had a tendency to wiggle the least. SETI@home classic workunits: 93,865 CPU time: 863,447 hours Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[
ReiAyanami Send message Joined: 6 Dec 05 Posts: 116 Credit: 222,900,202 RAC: 174
All mine have been going down for over 2 weeks (by between 9 and 12%) and still going, so the effect seems to be similar across many machines. My fastest machine has been staying in the middle of the same page of the Top hosts list for at least 6 months, and I don't see too many position changes in my neighborhood. Since I don't do anything other than SETI, this is not too bad. Decreasing RAC certainly doesn't motivate me, though. I'd prefer numbers that reflect the calculation power and time of my machines, and something more stable...
AllenIN Send message Joined: 5 Dec 00 Posts: 292 Credit: 58,297,005 RAC: 311
From April 3rd (22,000 RAC) I was on a steady uphill climb to 24,200 RAC on April 10th, but from April 14th I was on a fast downhill, reaching 22,100 RAC by April 24th. Now it seems to be steady, not moving up or down by more than about 70 RAC. This is for 5 machines, 4 with GPUs. I was thinking of suspending all of the guppis and seeing if the RAC starts to climb again.
betreger Send message Joined: 29 Jun 99 Posts: 11416 Credit: 29,581,041 RAC: 66
Guppis need to be processed.
Lionel Send message Joined: 25 Mar 00 Posts: 680 Credit: 563,640,304 RAC: 597
I went back and had a look at some old data. Prior to v7, I had a daily credit run rate of 220+k/day. Then came v7. Now GBT data. My daily run rate is now bouncing around between 130k and 170k per day. On the surface some would say that's not too bad given past history. The twist here is that I used to run with 2 x GTX580s in two boxes (4 cards), 2 x GTX470s in the third box and 2 x GTX295s in the fourth box. Some time after v7 I began changing things. I replaced the GTX580s with GTX770 SCs and then replaced those with GTX780 TI SCs. The GTX470s were replaced with GTX680 Classifieds, and the GTX295s I replaced with 2 of the old GTX580s. All the machines now process more WUs than before, upwards of 50% more in 2 cases. So I look at what recognition I receive today vs 3 years ago, and considering that the machines are doing considerably more than they used to, I wonder how on earth anyone at Berkeley could say that there isn't an issue with the credit system and not want to understand why it is behaving in the manner that it is.
betreger Send message Joined: 29 Jun 99 Posts: 11416 Credit: 29,581,041 RAC: 66
I wonder how on earth anyone at Berkeley could say that there isn't an issue with the credit system and not want to understand why it is behaving in the manner that it is. It is obvious that CreditNew is not reflecting throughput, so the real question I have is: what is it supposed to show?
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
I wonder how on earth anyone at Berkeley could say that there isn't an issue with the credit system and not want to understand why it is behaving in the manner that it is. It's supposed to first dial in (adaptively converge) elapsed-time estimates for sending the right amount of work to hosts, and then on the client side it hooks into the scheduling of tasks and projects. Once validations occur, it awards by the cobblestone scale for tasks; in theory RAC is then directly indicative of successful throughput, yes (just on a different scale). The elapsed times then go back into the first step to dial in estimates. As an adaptive control system, it has some notable and classic engineering instabilities that make each stage pretty fragile under anything but rare, idealised (non-existent) conditions. The rub is that it 'sort of works' as described, but not quite meeting the expectations of users. In that respect it's a proof-of-concept mechanism, rather than a complete system that should be in deployment. The reason for that is there are no health/quality-control indicators built in (other than user complaints) that would make refinement easier. Also, semantically and structurally the code uses its own 'standards' that in some areas obscure what the elements actually are, making replacement of the pieces with better/more standard implementations harder than it should be. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
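For anyone who wants a concrete picture of that two-stage loop, here is a minimal sketch, purely hypothetical (invented names, constants and numbers; not the actual BOINC/CreditNew source): observed elapsed times feed an adaptive per-host rate estimate, and validated tasks are credited on a fixed cobblestone-like scale.

```python
# Illustrative toy model only -- NOT the actual BOINC/CreditNew code.
# Stage 1: adaptively converge an elapsed-time estimate per host from
#          observed runtimes (used for deciding how much work to send).
# Stage 2: once a task validates, award credit on a fixed cobblestone-like
#          scale, so RAC tracks successful throughput rather than claims.

COBBLESTONE_PER_FLOP = 200.0 / (86400 * 1e9)  # ~200 credits/day for a 1 GFLOPS host

class HostEstimator:
    def __init__(self, initial_rate_flops=1e9, smoothing=0.1):
        self.rate = initial_rate_flops   # estimated effective FLOPS of this host
        self.smoothing = smoothing       # how aggressively the estimate adapts

    def estimated_runtime(self, task_flops):
        # Stage 1: predict elapsed time for a task of known size.
        return task_flops / self.rate

    def record_elapsed(self, task_flops, elapsed_s):
        # Feedback: nudge the rate toward what this result actually showed.
        observed_rate = task_flops / elapsed_s
        self.rate += self.smoothing * (observed_rate - self.rate)

def credit_on_validation(task_flops):
    # Stage 2: credit proportional to the work content of the task.
    return task_flops * COBBLESTONE_PER_FLOP

if __name__ == "__main__":
    host = HostEstimator()
    for elapsed in (900.0, 850.0, 1200.0, 950.0):   # made-up elapsed times (s)
        task_flops = 1e12                           # made-up task size
        print("predicted %6.0f s, observed %6.0f s, credit %.2f" %
              (host.estimated_runtime(task_flops), elapsed,
               credit_on_validation(task_flops)))
        host.record_elapsed(task_flops, elapsed)
```

The fragility lives in the feedback step: the rate estimate chases whichever work mix it last saw, so when the task mix changes the estimate lags and overshoots, which lines up with the oscillation predicted a few posts back.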
William Send message Joined: 14 Feb 13 Posts: 2037 Credit: 17,689,662 RAC: 0
In easier words: it is supposed to take the time you took for a task, award you credit according to how efficient you were (compared to all tasks, and to tasks in your group of tasks) in crunching it [I think in theory you are supposed to get a bonus if you are faster than the average], and then use the time (in relation to how 'big' the task was) to estimate your speed for future tasks (APR). I'm not sure that was an easier explanation of what it's supposed to do... In practice, we have noticed that it falls sadly short both of making a good estimate of your speed (it lacks adaptability) and of awarding credit relative to your efficiency. In short, the way in which it was implemented (coded) cannot work. It leads to chaotic behaviour. A person who won't read has no advantage over one who can't read. (Mark Twain)
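As a rough illustration of the APR half of that description, a hypothetical per-host, per-app-version record might look like the sketch below (again invented names and numbers, not BOINC's actual code):

```python
# Hypothetical sketch of the APR (Average Processing Rate) idea: validated
# results feed a per-host, per-app-version running rate, which is then used
# to estimate how long future tasks should take on that host.

class AppVersionStats:
    def __init__(self):
        self.total_flops = 0.0     # work content of all validated tasks
        self.total_elapsed = 0.0   # wall-clock seconds spent on them

    def record_validated(self, task_flops, elapsed_s):
        self.total_flops += task_flops
        self.total_elapsed += elapsed_s

    def apr(self):
        # Average processing rate in FLOPS; None until the first validation.
        if self.total_elapsed == 0.0:
            return None
        return self.total_flops / self.total_elapsed

    def estimated_runtime(self, task_flops, fallback_rate_flops=1e9):
        # Until APR has data, fall back on a nominal device rate; this early
        # window is part of the "lacks adaptability" complaint above.
        rate = self.apr() or fallback_rate_flops
        return task_flops / rate
```

The bracketed "bonus if you are faster than the average" idea would then amount to comparing your elapsed time against the running mean for that group of tasks; William hedges on whether the implementation actually does that.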
Ulrich Metzner Send message Joined: 3 Jul 02 Posts: 1256 Credit: 13,565,513 RAC: 13
Interestingly, the slower you crunch, the higher the 'award' with this strange 'system'. I can easily recognize this because I have two different GPUs, the GT 640 being nearly three times faster than the GT 430. On similar WUs the GT 430 always gets more credit, topped only by the CPU (Core2Quad), which again earns more credit for a similar WU. I think "borked" is way too friendly for this behavior... Aloha, Uli
jason_gee Send message Joined: 24 Nov 06 Posts: 7489 Credit: 91,093,184 RAC: 0
Interestingly, the slower you crunch, the higher the 'award' with this strange 'system'. I can easily recognize this because I have two different GPUs, the GT 640 being nearly three times faster than the GT 430. On similar WUs the GT 430 always gets more credit, topped only by the CPU (Core2Quad), which again earns more credit for a similar WU. I think "borked" is way too friendly for this behavior... Yep, that's right. This was discovered during controlled tests on Albert, which at the time had tasks that process identically. If you crunch less efficiently (i.e. take longer and generate more heat) you end up on the high side of the claim [i.e. you process much slower than the expected ~5% +/- of peak_flops (GPU)]. The highest credit will come when you crunch as slowly as possible (claiming lots of operations) and are paired with a blazingly efficient wingman [who crunches at a higher % of his peak_flops]. Importantly, the average claim on the quorum is scaled back to a reference application anyway, which in our project's case makes it a gross underclaim because there is no compensation for that application being SIMD-enabled. Crunching slower will at some point lead to lower overall credit, but the loophole is that you have enough memory these days to pile on tasks [to run slowly in parallel]. It's just a matter of time until we start adding scaling/throttle controls into the various apps for other reasons (mainly safety and usability control), but in the meantime you can pile on the instances and downclock the GPU after BOINC starts, and then you'll consistently claim high. In an extreme case you would use as many instances of the lowest-performing application as will fit. [We will always be working toward reduced-memory, RAM and VRAM, modes to support more devices.] A problem can come with that if you are paired with someone doing the same (intentionally or unintentionally), but I think even if everyone who reads this did it, enough host-owners never visit the forums that this low-effective-claim pairing would remain rare. "Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to Live By: The Computer Science of Human Decisions.
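To put toy numbers on that loophole (a deliberately simplified single-average quorum, not the real validator), the sketch below shows how the same card, run inefficiently, inflates its claim and drags the granted average up:

```python
# Toy numbers only. The claim is modelled as elapsed time times the device's
# peak FLOPS: the slower (less efficiently) the same card runs, the more
# operations it appears to have spent, so its claim grows. A simplistic
# quorum then averages the two claims.

COBBLESTONE_PER_FLOP = 200.0 / (86400 * 1e9)

def claim(elapsed_s, peak_flops):
    return elapsed_s * peak_flops * COBBLESTONE_PER_FLOP

# Same work unit, hypothetical devices:
efficient   = claim(elapsed_s=600.0,  peak_flops=4e12)  # runs near its peak
inefficient = claim(elapsed_s=3000.0, peak_flops=4e12)  # same card, overloaded/downclocked

granted = (efficient + inefficient) / 2.0               # naive quorum average
print("efficient claim %.1f, inefficient claim %.1f, granted %.1f"
      % (efficient, inefficient, granted))
```

In the real system the averaged claim is then scaled back against a reference application, as noted above, but the incentive is unchanged: more elapsed seconds at the same nominal peak means a bigger claim.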
Cruncher-American Send message Joined: 25 Mar 02 Posts: 1513 Credit: 370,893,186 RAC: 340
All this kvetching is so unseemly. You are all AGCNDs (that's Anthropogenic Global Credit New Deniers). Shame on you! You should just grin and bear it, like good little Computation Drones; your Betters have decided what is good for you. ---------------------------------------------------------------------- Oh, and by the way, what use are the time estimates for WUs, anyway? We all just get our 100 for CPU and 100/GPU anyway, so that is pointless at this time, is it not?
Al Send message Joined: 3 Apr 99 Posts: 1682 Credit: 477,343,364 RAC: 482
Interestingly, the slower you crunch, the higher the 'award' with this strange 'system'. I can easily recognize this because I have two different GPUs, the GT 640 being nearly three times faster than the GT 430. On similar WUs the GT 430 always gets more credit, topped only by the CPU (Core2Quad), which again earns more credit for a similar WU. I think "borked" is way too friendly for this behavior... Absolutely true, it appears. In this example, I have 2 machines, a Genuine Intel(R) Atom(TM) CPU 330 @ 1.60GHz [Family 6 Model 28 Stepping 2] (4 processors) and a Genuine Intel(R) Pentium(R) 4 CPU 3.80GHz [Family 15 Model 4 Stepping 10] (2 processors), and have recently been keeping an eye on them and wondering what might be going on. They are both configured to only run GPU tasks, as the Atom would pretty much choke on anything CPU; it really wouldn't be worth it. As the P4 3.8 processor is unfortunately fairly pitiful as well, I decided to set that up as a GPU-only cruncher too. Now to be honest, the Atom just sits upstairs happily crunching away with no one bothering it at all, and the P4 is used by my daughter for web-based games and such, maybe 1 hour each day of the week and on one of those days for a total of probably 2-3 hours, during which crunching is suspended because it really affects the usability of the system if it is enabled. So because of that, it's not an _exact_ apples-to-apples comparison, but pretty dang close I'd say, and the difference in RAC (or should I say the lack of significant difference) is pretty surprising. And actually, the Atom is (and has been for a while) running higher (6,304) than the P4 (5,765), and it only has a GTX 260 in it, whereas the P4 has a GTX 950, which is 5 generations newer than the 260; both the 260 and the 950 were pretty much the lower end of their respective spectrums when they were new. Just thought I'd toss that out there as an interesting comparison.
betreger Send message Joined: 29 Jun 99 Posts: 11416 Credit: 29,581,041 RAC: 66
My stats have that same flavor; the question I have is when will it flatten out?