Message boards : Number crunching : New outage schedule....
kittyman · Joined: 9 Jul 00 · Posts: 51527 · Credit: 1,018,363,574 · RAC: 1,004
I would think that the Boinc devs would be aware of and give some respect to all of the things that have been done on behalf of the Seti project by everybody ever involved in the Lunatics camp. As I understand it, much of what is now 'stock' code was gleaned or donated from the optimized apps created there. I am sure that Eric is aware of and appreciates it....the Boinc devs should not be so ready to 'piss in our general direction'.

Maybe Eric needs to sit down and have another 'little talk' with the Boinc devs and their attitude towards third party work on behalf of the science project.

"Time is simply the mechanism that keeps everything from happening all at once."
Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
Let's hope so, Mark :D

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
kittyman · Joined: 9 Jul 00 · Posts: 51527 · Credit: 1,018,363,574 · RAC: 1,004
Maybe this will give you an idea Mark.. http://fragment1.berkeley.edu/newcricket/grapher.cgi?target=%2Frouter-interfaces%2Finr-250%2Fgigabitethernet2_3;view=Octets;ranges=d

Looks like we might have broken something again. :-( Sorry.....but the problem is the way the quota system is working....right now the i7 920 is down to 25 MB and 15 Cuda tasks, and getting refused work because it has 'reached its quota'....

This is rubbish....one of the higher performing hosts on the project can't get work because it has reached quota limits???? It obviously can't be producing too many errors if it's getting all that work done.

"Time is simply the mechanism that keeps everything from happening all at once."
Joined: 20 Aug 02 · Posts: 3377 · Credit: 20,676,751 · RAC: 0
Well, it looks like I might be giving the new quota system a little test. I have ten shorties in a row lined up for my GPU and the server thinks they are going to take 58 minutes each. This should be interesting. My CPUs and my GPU seem to be following the same DCF. Wonder if they will drop me down enough to trigger some -177 errors?

PROUD MEMBER OF Team Starfire World BOINC
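The -177 errors being worried about here are BOINC's "exceeded resource limit" aborts. As a rough illustration only (the constant name and all numbers below are assumptions, not taken from this thread), the client kills a task once its elapsed time passes rsc_fpops_bound divided by the speed it credits the app version with, while the DCF only scales the displayed estimate; an inflated <flops> entry in app_info.xml therefore shrinks the abort threshold without making the task any faster:

```python
# Illustrative sketch, not actual BOINC client code.
ERR_RSC_LIMIT_EXCEEDED = -177   # assumed name for the -177 exit code

def estimated_runtime(rsc_fpops_est, flops_estimate, dcf):
    """Runtime the client predicts and displays (seconds)."""
    return rsc_fpops_est / flops_estimate * dcf

def time_limit(rsc_fpops_bound, flops_estimate):
    """Elapsed-time ceiling after which the task is aborted (seconds)."""
    return rsc_fpops_bound / flops_estimate

# Made-up numbers: an overstated <flops> value shrinks the limit, so a
# task that would finish normally gets killed with -177 instead.
rsc_fpops_est = 30e12        # hypothetical workunit estimate
rsc_fpops_bound = 300e12     # hypothetical bound, ~10x the estimate
claimed_flops = 200e9        # inflated <flops> from app_info.xml
real_elapsed = 2100          # seconds the task actually needs

print(estimated_runtime(rsc_fpops_est, claimed_flops, dcf=1.0))  # 150 s shown
if real_elapsed > time_limit(rsc_fpops_bound, claimed_flops):    # limit = 1500 s
    print("task aborted with", ERR_RSC_LIMIT_EXCEEDED)
```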
Robert Ribbeck · Joined: 7 Jun 02 · Posts: 644 · Credit: 5,283,174 · RAC: 0
Don't mean to cause any trouble, but what does GPU performance have to do with this thread on the NEW outage schedule?
Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
> Don't mean to cause any trouble

Think about it.

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
kittyman · Joined: 9 Jul 00 · Posts: 51527 · Credit: 1,018,363,574 · RAC: 1,004
> Don't mean to cause any trouble

And think about it some more....

It's the GPUs that do the greatest percentage of a host's work, if so equipped. It's my GPUs that are starving mostly. It's GPUs in general that have increased the productivity of the active hosts on this project.

And if you 'Don't mean to cause any trouble'... why are you even asking....?? It's MY thread, and if I have any problem with the direction of the discussion here, I shall ask it to be modified. As I am asking you now.

"Time is simply the mechanism that keeps everything from happening all at once."
Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
> Don't mean to cause any trouble

Ahhh, Mark, always echoing my most computational sentiments :D

Give me a G-P-U .. What does it spell ? GeePooo!

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
Josef W. Segur · Joined: 30 Oct 99 · Posts: 4504 · Credit: 1,414,761 · RAC: 0
Here's my perspective, FWIW.

A while back, nVidia Fermi GPUs became available in quantity; people started buying them and assumed they were properly backwards compatible with older CUDA software. The BOINC core client recognized them as CUDA capable cards, asked the project for tasks to run thereon, got them, and the card promptly generated a result_overflow on each task. Oops, Fermi cards aren't compatible with the CUDA apps nVidia engineers built for earlier CUDA cards.

Interestingly, the particular pattern of false signals the Fermi cards produced matched a pattern some earlier CUDA cards generated when they got in a bad state. So that long-running problem with CUDA cards sometimes getting in a bad state only curable by a reboot had suddenly become an always bad state for Fermi cards running the older applications. As huge numbers of tasks were achieving validity with obviously wrong results, the project was taken down until they could find a solution.

Richard Haselgrove suggested that the newer Fermi build which was working fine at SETI Beta be promptly installed here to relieve the critical situation. That was done, but rather than waiting long enough to see if that alone would staunch the bleeding, it was decided that part of the BOINC server changes which kept separate statistics for app versions should also be installed here. It definitely wasn't the whole BOINC trunk at that point, perhaps because David Anderson knew the changes needed to support anonymous platform were incomplete.

The partial transplant did not work well. Perhaps assuming that if they backed out the changes they would be back to square one with huge numbers of bad results being assimilated, they went forward instead and installed the full BOINC trunk version. At that point much of the needed structure to deal with anonymous platform hosts was already present in trunk. That is, the app_version information derived from an app_info.xml was transferred into a client_app_version structure for each different version, the Scheduler could make decisions about which version was most appropriate for work and send work to the host on that basis, etc.

There was originally no code to build up the statistical averages needed for other features like the server-side adjustments of rsc_fpops_xxxx. But it was clearly intended that anonymous platform requests would be accepted and acted upon by delivering work (if any were available). Code to do the statistical averaging was added a few days later, though David has decided it wasn't working as he intended, so he has disabled it pending further efforts.

Like all others here I find the rapid changes disorienting, though my modest systems are not affected to the extent that top performing crunching systems are. Combined with the mostly unrelated changes in the outage schedule, this is a disturbing time for all who are paying attention. I rather envy the set and forget crowd sometimes.

I'm convinced the "'Anonymous Platform' mechanism is not supported by Boinc" phrase meant simply that support for it within the CreditNew changes is not yet complete. Heck, we already knew that from observation, but it's an unfortunate time for such misunderstandings to arise.

Joe
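To make the server-side bookkeeping Joe describes slightly more concrete, here is a minimal sketch of the kind of per-app-version averaging involved: folding each returned result into a running estimate of how fast a given app version really is on a given host. It only illustrates the general idea; the class name, smoothing weight and numbers are assumptions, not the actual BOINC scheduler code:

```python
# Minimal sketch of per-app-version runtime statistics (assumed names).
SMOOTHING = 0.1  # weight given to each new sample

class AppVersionStats:
    def __init__(self):
        self.avg_fpops_per_sec = None  # effective speed observed so far

    def update(self, rsc_fpops_est, elapsed_seconds):
        """Fold one validated result into the running average."""
        sample = rsc_fpops_est / elapsed_seconds
        if self.avg_fpops_per_sec is None:
            self.avg_fpops_per_sec = sample
        else:
            self.avg_fpops_per_sec += SMOOTHING * (sample - self.avg_fpops_per_sec)

    def estimate_runtime(self, rsc_fpops_est):
        """Predicted runtime for a new task of the given size (seconds)."""
        return rsc_fpops_est / self.avg_fpops_per_sec

# e.g. one anonymous-platform CUDA app version on one host:
stats = AppVersionStats()
stats.update(rsc_fpops_est=30e12, elapsed_seconds=600)  # one result returned
print(stats.estimate_runtime(30e12))                    # ~600 s
```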
Joined: 24 Nov 06 · Posts: 7489 · Credit: 91,093,184 · RAC: 0
> ...I'm convinced the "'Anonymous Platform' mechanism is not supported by Boinc" phrase meant simply that support for it within the CreditNew changes is not yet complete. Heck, we already knew that from observation, but it's an unfortunate time for such misunderstandings to arise. Joe

I hope that's the case. I don't quite see how the rants I received pointing the bone at the Anonymous Platform, GPU and third party apps make sense in that context, followed by claims that Lunatics don't do development for platforms other than Windows (which is false; thanks Urs, Arkayn, Sunu and helpers with the Linux & OSX apps.)

I certainly agree that there are issues with the GPU apps here (stock included), but since we're likely the ones tasked to fix these problems (see anyone else working on the -12 & VLAR issues?), I do not understand the automatic tendency to "P%^$ in our general direction", as Mark put it. Oh well, maybe I'll wake up one day & it'll be all clear.

Jason

"Living by the wisdom of computer science doesn't sound so bad after all. And unlike most advice, it's backed up by proofs." -- Algorithms to live by: The computer science of human decisions.
TheFreshPrince a.k.a. BlueTooth76 · Joined: 4 Jun 99 · Posts: 210 · Credit: 10,315,944 · RAC: 0
> Maybe this will give you an idea Mark.. http://fragment1.berkeley.edu/newcricket/grapher.cgi?target=%2Frouter-interfaces%2Finr-250%2Fgigabitethernet2_3;view=Octets;ranges=d Looks like we might have broken something again. :-(

20 WUs crashed on June 22 (fixed after removing flops from the app_info) and I'm still punished for that... It successfully finished 250 WUs on CUDA after that, but I still get the message that I reached my 155 WU quota... And those are all for my CPU, not my GPU... It just doesn't get any work while I see 67,000 split WUs in the queue, but my CUDA Fermi is dry... And what happens when the servers go offline for 2 or 3 days on Tuesday? (I need at least 750 WUs for 3 days...)

I'm getting to feel sick about this... I have a LOT of patience... So if MY patience runs out, you must have really screwed things up... I'll disconnect from the project and do a Windows reinstall again... It's the only way to reset your quota and get work again... That's the second time in 2 weeks...

I know there is no guarantee of work and that it's a scientific project, but it's also a hobby and part of my life for 11 years... Sad to see this happen...

Rig name: "x6Crunchy" OS: Win 7 x64 MB: Asus M4N98TD EVO CPU: AMD X6 1055T 2.8(1,2v) GPU: 2x Asus GTX560ti Member of: Dutch Power Cows
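For what it's worth, the "still punished" effect is roughly what a per-host daily quota produces when errors cut the allowance much faster than valid results restore it. The sketch below shows one plausible scheme only; the starting ceiling, the halve-on-error and plus-one-on-valid steps are assumptions, and the real SETI@home/BOINC server logic has changed over time and may differ:

```python
# One plausible daily-quota scheme (assumed numbers and step sizes),
# shown only to illustrate why a burst of errors can linger.
MAX_WUS_PER_DAY = 800  # hypothetical project ceiling

class DailyQuota:
    def __init__(self):
        self.max_wus_per_day = MAX_WUS_PER_DAY

    def on_error_result(self):
        # Each bad result cuts the allowance sharply...
        self.max_wus_per_day = max(1, self.max_wus_per_day // 2)

    def on_valid_result(self):
        # ...while each good one wins only a little of it back.
        self.max_wus_per_day = min(MAX_WUS_PER_DAY, self.max_wus_per_day + 1)

q = DailyQuota()
for _ in range(20):    # the 20 crashed WUs from June 22
    q.on_error_result()
for _ in range(250):   # the 250 good CUDA results returned since
    q.on_valid_result()
print(q.max_wus_per_day)  # 251: still well below the ceiling
```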
Joined: 4 Apr 01 · Posts: 201 · Credit: 47,158,217 · RAC: 0
One way I'm managing to get more workunits is by rescheduling them to the GPU and then letting the GPU crunch them. This ups my quota, as the units, when returned, go onto my CPU quota and I get more to download... I have managed to get 500 WUs in the last 24 hours doing this..

Don't give up, the quota system will bed down eventually and all will be OK
TheFreshPrince a.k.a. BlueTooth76 · Joined: 4 Jun 99 · Posts: 210 · Credit: 10,315,944 · RAC: 0
> One way I'm managing to get more workunits is by rescheduling them to the GPU and then letting the GPU crunch them. This ups my quota as the units when returned go onto my CPU quota and I get more to download...

With Reschedule 1.9 you can't reschedule to Fermi CUDA... That's another part of the problem...

Rig name: "x6Crunchy" OS: Win 7 x64 MB: Asus M4N98TD EVO CPU: AMD X6 1055T 2.8(1,2v) GPU: 2x Asus GTX560ti Member of: Dutch Power Cows
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14690 · Credit: 200,643,578 · RAC: 874
> One way I'm managing to get more workunits is by rescheduling them to the GPU and then letting the GPU crunch them. This ups my quota as the units when returned go onto my CPU quota and I get more to download...

But MadMaC can reschedule to his Fermi.... Read Running SETI@home on an nVidia Fermi GPU to find out how we did it.
Joined: 15 Dec 99 · Posts: 707 · Credit: 108,785,585 · RAC: 0
> One way I'm managing to get more workunits is by rescheduling them to the GPU and then letting the GPU crunch them. This ups my quota as the units when returned go onto my CPU quota and I get more to download...

Exactly! With heavy use of the Rescheduler, fpops in the app_info, and a corrected DCF, I was able to download 5,000 workunits in 19 hours yesterday. My caches are currently full (for 3 days).

Helli
TheFreshPrince a.k.a. BlueTooth76 · Joined: 4 Jun 99 · Posts: 210 · Credit: 10,315,944 · RAC: 0
> One way I'm managing to get more workunits is by rescheduling them to the GPU and then letting the GPU crunch them. This ups my quota as the units when returned go onto my CPU quota and I get more to download...

I followed the thread earlier and thought it would be too complicated. But just one copy-paste in the APP_INFO.XML and changing fermi_cuda into cuda did the job! First I rescheduled 2 WUs back to the GPU to try it (and not lose too much work if it failed), and it worked. Now 50 WUs are rescheduled from CPU to the Fermi :) Thanx!!!

Rig name: "x6Crunchy" OS: Win 7 x64 MB: Asus M4N98TD EVO CPU: AMD X6 1055T 2.8(1,2v) GPU: 2x Asus GTX560ti Member of: Dutch Power Cows
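For anyone attempting the same edit: the change described is roughly to duplicate the existing Fermi <app_version> block in app_info.xml and give the copy the plan class the rescheduled tasks carry. Below is a minimal sketch only; the file name, version number and exact plan-class strings are placeholders, and the real values must be copied from the app_info.xml already on your host (see Richard's Fermi thread for the full procedure):

```xml
<!-- Sketch with placeholder names: keep the original Fermi block and add a
     pasted copy whose plan_class matches the rescheduled (non-Fermi) tasks. -->
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <plan_class>cuda_fermi</plan_class>   <!-- original block, unchanged -->
    <coproc>
        <type>CUDA</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>setiathome_cuda_fermi.exe</file_name>
        <main_program/>
    </file_ref>
</app_version>
<app_version>
    <app_name>setiathome_enhanced</app_name>
    <version_num>608</version_num>
    <plan_class>cuda</plan_class>         <!-- the copy, plan_class changed -->
    <coproc>
        <type>CUDA</type>
        <count>1</count>
    </coproc>
    <file_ref>
        <file_name>setiathome_cuda_fermi.exe</file_name>
        <main_program/>
    </file_ref>
</app_version>
```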
Joined: 23 May 99 · Posts: 4292 · Credit: 72,971,319 · RAC: 0
Does anyone know if the boards will work during the new long outages, or is it wait and see as usual?

Official Abuser of Boinc Buttons... And no good credit hound!
Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> Does anyone know if the boards will work during the new long outages, or is it wait and see as usual?

yes

SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Joined: 23 May 99 · Posts: 4292 · Credit: 72,971,319 · RAC: 0
> Does anyone know if the boards will work during the new long outages, or is it wait and see as usual?

ROFL, yes to which?

Official Abuser of Boinc Buttons... And no good credit hound!
Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
> Does anyone know if the boards will work during the new long outages, or is it wait and see as usual?

yes

SETI@home classic workunits: 93,865 CPU time: 863,447 hours