Message boards : Number crunching : 3 GeForce GTX 980 Ti or 1 RTX 2080 Ti
Carlos · Joined: 9 Jun 99 · Posts: 29834 · Credit: 57,275,487 · RAC: 157
My numbers are still climbing. I am running 1 WU per GPU as Grant suggests. Should I run more? I have the three 2070's.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
> My numbers are still climbing.

Major hardware/software changes take 4-8 weeks to settle down, and that's with no change in the WUs being processed and no server issues. Given the occasional bursts of Arecibo work (MB & AP), I'd say it'll take 2 months for RAC to end up around its nominal range.

> I am running 1 WU per GPU as Grant suggests. Should I run more? I have the three 2070's.

The only way is to try it and see. With the CUDA applications, even on my GTX 750 Ti, running 2 WUs at a time produced more work per hour than just 1. With SoG, however, regardless of the command line values I used, there was no benefit; others such as Zalster have found, depending on their hardware, that 2-3 at a time gives more work per hour.

I found that running 2 WUs of the same type gave similar output to running 1 at a time (i.e. roughly double the run time, so the same number of WUs per hour). However, when running an Arecibo and a GBT WU on the same card, the run time for the Arecibo WU would end up double, sometimes triple, what it would be if run alongside another Arecibo WU. The GBT WU's run time was the same as when running 2 GBT WUs at the same time.

When I did it, I made up a cheat sheet of run times, using 1 WU at a time as my reference. I put run times across the page, and above them how many WUs per hour each gave. Below that I added another row with the run times for 2 WUs at a time, and another row below that for 3 WUs at a time. Then it was just a case of changing how many WUs were running at a time and letting a couple of dozen or so process from start to finish (for 2 at a time; 3 dozen for 3 at a time) to see what the times were, and whether they varied when different types were run together, etc.

With SoG I've always come back to 1 at a time; others have found 2 or more to be better. All you can do is try it and see how it goes on your system.

| WUs per hour | 10 | 15 | 20 | 30 | 40 | 60 | 70 | 80 | 90 |
|---|---|---|---|---|---|---|---|---|---|
| 1x | 6 | 4 | 3 | 2 | 1.5 | 1 | .86 | .75 | .67 |
| 2x | 12 | 8 | 6 | 4 | 3 | 2 | 1.7 | 1.5 | 1.3 |
| 3x | 18 | 12 | 9 | 6 | 4.5 | 3 | 2.58 | 2.25 | 2 |

WUs per hour across the top; 1x, 2x, 3x for the number of WUs run at a time. The body values are the per-WU run times (in minutes) needed to produce the top values.

Grant
Darwin NT
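For anyone wanting to try 2 or 3 at a time: the usual way to change how many WUs run per GPU is an app_config.xml in the project directory. A minimal sketch, assuming the stock MultiBeam app name setiathome_v8 (check client_state.xml, or the application column in the Manager, for the exact app names on your host):

```xml
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <!-- Fraction of a GPU each task uses:
           1.0 = 1 WU per GPU, 0.5 = 2 at a time, 0.33 = 3 at a time. -->
      <gpu_usage>0.5</gpu_usage>
      <!-- CPU reserved per GPU task. -->
      <cpu_usage>1.0</cpu_usage>
    </gpu_versions>
  </app>
</app_config>
```

The client picks it up via the Manager's "Read config files" option, or on a client restart; AstroPulse would need its own <app> entry under its own app name.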
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
Damn, too late to put in the missing 50-per-hour values.

| WUs per hour | 50 |
|---|---|
| 1x | 1.2 |
| 2x | 2.4 |
| 3x | 3.6 |

Grant
Darwin NT
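The cheat sheet, including the 50-per-hour column, is straight arithmetic: at n WUs at a time, a per-WU run time of t minutes yields n × 60 / t WUs per hour, so the run time needed to hit a target rate is t = n × 60 / rate. A short Python sketch that regenerates the whole table:

```python
# Regenerate the cheat sheet: run time (minutes) per WU needed to
# produce each target rate, at 1, 2 or 3 WUs at a time.
targets = [10, 15, 20, 30, 40, 50, 60, 70, 80, 90]  # WUs per hour

print("WUs/hr" + "".join(f"{r:>7}" for r in targets))
for n in (1, 2, 3):
    row = "".join(f"{n * 60 / r:>7.2f}" for r in targets)
    print(f"{n}x    " + row)
```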
Carlos · Joined: 9 Jun 99 · Posts: 29834 · Credit: 57,275,487 · RAC: 157
I am going to let it go for a while and see where it stops. I suspect around 70-75 RAC. Then I will play around some more.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304
Now would probably be as good a time as any: no more Arecibo work is coming through at present (other than the odd resend), and all the GBT work is of one type; for me, I'm finishing off BLC32 WUs and the replacements are all BLC43. While you get the odd WU that runs longer or shorter in groups such as these, they tend to be much less variable than when getting WUs from multiple different files.

Of course, if you do see how things go now, it would be worth re-checking your results when a mix of work is coming through, to see what impact running different WU types at the same time on one card has on their processing times.

Grant
Darwin NT
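For that re-check, the bookkeeping can be as simple as recording completed run times per WU type and concurrency setting and letting a script do the division. A small Python sketch; the WU types and times below are hypothetical placeholders, not measured results:

```python
from statistics import mean

# (WU type, WUs at a time) -> per-WU run times in minutes,
# read off the Manager's task list for completed tasks.
runs = {
    ("BLC43", 1): [20.1, 19.8, 20.5],
    ("BLC43", 2): [39.0, 40.2, 39.5],
}

for (wu_type, n), times in sorted(runs.items()):
    t = mean(times)
    print(f"{wu_type} at {n}x: {t:.1f} min/WU -> {n * 60 / t:.2f} WUs/hour")
```

With these placeholder numbers, 2x roughly doubles the run time for the same WUs per hour, which is the same-type behaviour described above; a real mixed-type batch would show up as a worse-than-doubled time for the slowed type.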