Message boards : Number crunching : SETI orphans
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

And due to this, the project devs have restricted their scheduler to send the new app only to Pascal or newer; they won't even send it to hosts with 900-series or older cards. The only way to run the new app on Maxwell would be to hide the Maxwell card behind a newer GPU that is reported as primary (since BOINC only reports multiples of the primary GPU of the same vendor), but you'd likely see no benefit on the Maxwell cards anyway.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Damn, I just fired up the I5 to give it a try, and for nada :( A shame cos those 970's are fine cards. . . With 460 drivers they are running @ 17.2 mins, only about 45 secs slower than the 1060's ... Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . If I could squeeze another GPU into the I5 machine I would take the 1050ti out of the defunct Core2 Duo and try a little 'deception'. I would like to see if there is any improvement at all on the 970s. It would only be a trial in any case as the temps will soon force me to shut them down again. Still winter and the fans are barely coping at 100%. I'm scared of mixing water and electronics and I cannot afford nitrogen cooling :( Stephen . . BTW they settled down a bit and now running with 460 at under 17 mins. :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . An interesting development. I recovered the GTX1050ti from the defunct Core2 Duo machine and moved it to the Core2 Quad machine in place of the GTX950 which was running E@H tasks in 35.8 mins. Now with the 1050ti and 470 drivers it is running those tasks in 17.8 mins. Now that is a worthwhile improvement. . . I make this observation because the GTX970's in the I5-6600 machine take 17 mins neat. I am impressed ... Stephen 8-} |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . It would seem that my excitement over the new app and my phobia about overheating has killed the conversation .... Sorry people :( Stephen :( |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Hi people, . . I might be about to become unpopular (or more so) but I am beginning to believe I have been misinformed. . . I was sure you guys said that the new app was not to be distributed to 900 series nvidia rigs because these cards get little or no benefit from it. . . Well, with a days work stored on the Core2Quad machine where the GTX1050ti has been happily crunching them in 17.7 mins I decided to try the GTX950 that had been in the machine before the 1050ti. I was expecting (hoping) to see the run times drop by maybe a minute or 3 from the 35.8 mins it was taking before the update to 470 drivers. You can imagine I was stunned, but in a good way, when I found that the run times are a mere 20.9 mins .... (picking up jaw from the floor). It seems my 900 series card is from an alternate dimension to those used for the original testing. That represents a bigger improvement in performance than the 1060-6 cards are getting. . . 17.4 mins compared to 10.5 mins is a 66% increase in throughput, but 35.8 mins down to 20.9 is a 71% increase. For the record the 1050ti was getting 28.4 min runtimes on 460 drivers and the 1.20 app dropping to the 17.7 times with the update, that is only a 60% increase. So now I am waiting for the big kaboom. That is, to find that these tasks all fail to validate or something else that Murphy's Law might predict. Stephen |
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Could be, or probably was, just the difference between the tasks in the initial test of my 970s and what the current tasks are doing. I just retested the latest code injection that Petri came up with against the stock 1.28 app and got another 5% reduction in task times.

Stock 1.28 app = 665 seconds
Latest Petri tweak via code injection = 635 seconds

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

> Could be or probably was just the difference in tasks between the initial test of my 970's to what the current tasks are doing.

Might be worth a test to see if 1.20 (without code injection) still runs at about 665s, just to be sure.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
Could be or probably was just the difference in tasks between the initial test of my 970's to what the current tasks are doing. . . So what GPU were those times from? Have you seem the improvements I am getting on my 970's and even more on a humble 950 ... 8-} ... That is a name to conjure with ... Petri ... I am in awe of what he has achieved. Stephen |
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

I tested my GTX 970 for Ian and Petri's development. I did not see much, if any, improvement with the original code that was released and tested by the Team; hence my original opinion that Maxwell was not worthy of the code tweaks. But I just retested the latest code on my 970 against the stock 1.28 app, which of course has the base code tweaks that Petri developed and passed on to the devs at Einstein. It actually shows gains.

Ian wants me to retest the stock, original 1.20 app for Maxwell again. I would have to pull both the 2070 and the 1080 Ti out of the system to test the 970 by itself, so that I would be sent the old 1.20 app, which is compatible with Maxwell but NOT with Pascal or Turing. But the 1080 Ti is in the same water loop as the CPU on that host, so that is a big chore; I guess the easier path would be to block that card from use by BOINC with an exclude statement.
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

Yeah, an exclude statement would certainly be easier.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
I tested my GTX 970 for Ian and Petri's development. I did not see much if any improvement on the original code that was released and tested by the Team. Hence my original opinion that Maxwell was not worthy of the code tweaks. . . I apologise. I did not mean for you to go through a major rebuild, I thought your 970/s were in a separate machine. I jumped straight from the old 1.20 app to the current 1.28 and I am very pleased with the result. :) But I was sure that my 1060's were also still running the 1.20 app before the update. . . I know I often sound like I am crawling but Petri does bl^&*y good work. :) {PS and Ian too of course} . . PPS I tried to tweak the 970's into P3 last night but I cannot remember the command to run the scripts I have. How do you execute a shell file?? Stephen |
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

You can use the sh command, or put "./" in front of the filename:

sh filename
./filename
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
you can use the sh command, or put "./" in front of the filename. . . D'oh! . . I am not familiar with the sh command but I know now (until I forget it like other things), but I should have remembered to use the "./", again thanks :) Stephen |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Hi Ian, . . Well it's done ... and dusted! The cards are running the 1.28 app quite happily with the enhanced memory clock speeds. I had hoped/expected around a 10% improvement ... but sadly I am only seeing a gain of approximately 9% ... :) ... yippee! . . BTW, that takes the total improvement in speed with 470 drivers and the 1.28 app to 70%, which matches the gain I am seeing on the 950. <insert face with huge grin> Stephen |
tullio · Joined: 9 Apr 04 · Posts: 8797 · Credit: 2,930,782 · RAC: 1

WCG has moved to the Krembil Research Institute in Toronto, Canada. It sponsors the Mapping Cancer Markers project, which provides most of my tasks. I have passed the 5-year milestone on WCG.

Tullio
Sirius B · Joined: 26 Dec 00 · Posts: 24920 · Credit: 3,081,182 · RAC: 7

Congrats. I'll hit Diamond for Open Pandemics sometime in the first week of December, then I'll switch over to MCM (Sapphire atm), barring any hiccups with the transfer from IBM.
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

> Hi Ian,

Glad to hear it's working for you. Maybe I can get the project admins to open up the new tasks to Maxwell cards too.
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
. . Hi, . . Was there any interest from the boffins about adding 5.2 cards to the 1.28 distro list? . . If not you might point out that hosts with GTX1030's can get the 1.28 app but take about 95 mins to complete one task compared to those with humble GTX950's, which currently cannot, can complete a task in 21 mins. Stephen |
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

I haven't asked yet; Bernd is out until the end of September. Is 95 mins slower or faster than before?
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.