Ryzen and Threadripper

Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2017615 - Posted: 2 Nov 2019, 22:34:28 UTC - in response to Message 2017604.  

One question - has anyone tried, in a controlled way, to see what happens if the GPU has too little memory to hold both tasks concurrently? I'm guessing memory conflicts and consequent data corruption, but I could be a mile out.


I think it would only be an issue on 2GB cards, which is the lowest VRAM size supported by the special app anyway; cards like the 750 Ti, 950, and 1050 might not work well. I do not know what happens when you try, since I do not have any 2GB cards.

Each Special App instance uses ~1300MB of VRAM on my host, so 2 of them use ~2600MB per GPU.

Another thing to keep in mind is that it doubles the system memory use as well: ~600MB per Special App instance.
My 7-GPU system with the mutex build uses about 10GB of the 16GB installed.
My 10-GPU system with the mutex build uses about 17GB of the 32GB installed.
Probably not an issue for most people if you have 4 or fewer GPUs and at least 8GB of memory.
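
A rough back-of-the-envelope estimator based on the figures above (a sketch, not part of the app: the OS/BOINC overhead constant is an assumption, and per-instance usage varies by host):

```python
# Rough memory estimator for running two special-app tasks per GPU.
# Per-instance figures (~1300 MB VRAM, ~600 MB system RAM) come from
# the post above; OS_OVERHEAD_GB is an assumed baseline, not measured.

VRAM_PER_INSTANCE_MB = 1300   # GPU memory per special-app instance
RAM_PER_INSTANCE_MB = 600     # system memory per special-app instance
INSTANCES_PER_GPU = 2         # the mutex build runs two tasks per GPU
OS_OVERHEAD_GB = 2.0          # assumed OS + BOINC client baseline

def estimate(num_gpus: int) -> tuple[float, float]:
    """Return (VRAM needed per GPU, total system RAM), both in GB."""
    vram_per_gpu = VRAM_PER_INSTANCE_MB * INSTANCES_PER_GPU / 1024
    total_ram = (RAM_PER_INSTANCE_MB * INSTANCES_PER_GPU * num_gpus / 1024
                 + OS_OVERHEAD_GB)
    return vram_per_gpu, total_ram

for gpus in (4, 7, 10):
    vram, ram = estimate(gpus)
    print(f"{gpus} GPUs: ~{vram:.1f} GB VRAM per card, ~{ram:.1f} GB RAM")
```

For 7 GPUs this reproduces the ~10GB figure; the 10-GPU host's observed ~17GB shows real usage can run higher, so treat the output as a lower bound.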
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2017615
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2017617 - Posted: 2 Nov 2019, 22:57:50 UTC - in response to Message 2017604.  
Last modified: 2 Nov 2019, 23:12:45 UTC

Thanks Juan.
Some of the basic principles may be applicable to lesser devices, but it would need some very careful testing across a wide range of GPU/CPU combinations to find the exact limits. So it is wise to heavily restrict its availability to a small pool of trusted people, just in case there is a problem.
One question - has anyone tried, in a controlled way, to see what happens if the GPU has too little memory to hold both tasks concurrently? I'm guessing memory conflicts and consequent data corruption, but I could be a mile out.

AFAIK no one has tested with less than 2 GB on the GPU. W3Perl has some 2GB 750 Tis and I believe it works fine with them, provided, of course, that you don't run another application with heavy GPU memory use at the same time. He could answer the question for us.
The main problem is with the host memory itself. Each WU uses about 0.5 GB of additional main memory to run, so a 4-GPU host also running 6 CPU WUs, like mine, uses about 10 GB in total, including the OS itself, etc.
IMHO the big problem happens on large mining hosts, which normally run with relatively little memory and have few memory slots. Look at this comment from TBar in the thread you can't read:

By my calculations 28 tasks would need around 19GB of memory. I suppose with only 16GB real and 6GB of SSD swap, at some point it's bound to run out.
A quick check shows it would cost a minimum of around $140 for 32GB of RAM for a two-slot board. I'm not sure it would be worth it.

But that does not affect the more common 1- or 2-GPU hosts with 4 or 8 GB.
One interesting point the tests showed: the slower the CPU or the HDD, the bigger the gain, since it takes more time to load the WU before crunching can happen.
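
A quick sanity check of the two figures above; the OS baseline used here is an assumption, and the numbers show TBar's estimate implies a somewhat higher per-task figure (~0.68GB) than the ~0.5GB quoted:

```python
# Sanity-check the host-memory estimates quoted above. 0.5 GB/WU is
# the figure given for the 4-GPU host; 19 GB for 28 tasks implies
# ~0.68 GB/WU. os_gb is an assumed baseline for OS + desktop + BOINC.

GB_PER_WU_LOW = 0.5
GB_PER_WU_HIGH = 19 / 28      # ~0.68 GB per task

def host_ram_gb(gpu_wus, cpu_wus, os_gb=3.0, gb_per_wu=GB_PER_WU_LOW):
    return (gpu_wus + cpu_wus) * gb_per_wu + os_gb

# 4-GPU host: two WUs per GPU plus 6 CPU WUs -> ~10 GB including the OS
print(host_ram_gb(gpu_wus=4 * 2, cpu_wus=6))                    # 10.0
# 28-task mining host at the higher per-WU figure -> ~19 GB
print(host_ram_gb(28, 0, os_gb=0.0, gb_per_wu=GB_PER_WU_HIGH))  # 19.0
```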
ID: 2017617
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2017620 - Posted: 2 Nov 2019, 23:18:29 UTC
Last modified: 2 Nov 2019, 23:28:14 UTC

Answering the memory question: here is the answer I received from W3Perl during the test phase.

My post:
So unless I'm wrong the mutex builds should work on a 2 GB card; on a 1 GB card probably not, but that needs testing.


His answer:
Yes, it runs on a GTX 750 Ti (2GB). But sometimes there is no longer 2GB free; Xorg may claim lots of memory.
So I must set gpu_mode to never when not enough memory is found.
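
That workaround can be scripted. Here is a minimal sketch (my own illustration, not something from the test phase), assuming an NVIDIA card queried through nvidia-smi and a local BOINC client driven by boinccmd; the 2600MB threshold (two instances at ~1300MB each) comes from the earlier post:

```python
# Park BOINC GPU work when free VRAM drops below what two
# special-app instances need; resume automatic mode otherwise.
# Assumes nvidia-smi and boinccmd are on PATH and the BOINC client
# runs on this host.
import subprocess

MIN_FREE_MB = 2600  # two instances at ~1300 MB each

def free_vram_mb(gpu_index: int = 0) -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.free",
         "--format=csv,noheader,nounits"],
        text=True)
    return int(out.splitlines()[gpu_index])

def set_gpu_mode(mode: str) -> None:
    # mode is one of "always", "auto", "never"
    subprocess.run(["boinccmd", "--set_gpu_mode", mode], check=True)

if free_vram_mb() < MIN_FREE_MB:
    set_gpu_mode("never")   # same effect as W3Perl's manual setting
else:
    set_gpu_mode("auto")    # let BOINC preferences decide again
```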

ID: 2017620
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2017621 - Posted: 2 Nov 2019, 23:19:53 UTC

Maybe a moderator could move these messages to a more appropriate thread.
ID: 2017621
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2017653 - Posted: 3 Nov 2019, 2:43:12 UTC - in response to Message 2017621.  

Maybe a moderator could move these messages to a more appropriate thread.


Oops. Sorry, I did kick off a long digression from the central topic of this thread. And I agree, it should be split off into a "Mutex for dedicated SETI crunchers" thread.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2017653
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2017654 - Posted: 3 Nov 2019, 2:49:33 UTC - in response to Message 2017542.  

CPU crunching is never good, efficiency-wise, for your wallet. The only CPU that comes near is the new Threadripper with 64 cores, but the price is another thing. As usual, it's almost always better to have a latest-gen Nvidia card, bang-for-buck-wise.

Otherwise it's always better to have enough CPU to feed power-restrained GPUs for efficiency.


Very good point.

But when I compare a 2700 to a 3700X or even a 3900X, would I be using less electricity for the same or more cores crunching?

I run a cpu-only project as my "other" project on my Seti@Home machine(s).

Tom


I know we have some participants in this thread with 3900X CPUs. Is anyone running a 3700X CPU? That would be a pretty direct power-draw comparison between it and a 2700. I am ignoring the additional performance gains from using the AVX-specific version of the app.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2017654
juan BFP
Volunteer tester
Joined: 16 Mar 07
Posts: 9786
Credit: 572,710,851
RAC: 3,799
Panama
Message 2017659 - Posted: 3 Nov 2019, 3:10:00 UTC - in response to Message 2017604.  
Last modified: 3 Nov 2019, 3:18:12 UTC

One question - has anyone tried, in a controlled way, to see what happens if the GPU has too little memory to hold both tasks concurrently? I'm guessing memory conflicts and consequent data corruption, but I could be a mile out.

The first WU loads and starts to crunch as normal; the second WU stops with a "task postponed" error, waiting for GPU memory, and restarts when the first one ends. But in this case there is no gain from using the mutex build, since the second WU is not waiting with its memory already loaded, ready to be crunched. On the contrary, everything slows down because of that postponed error.
It's about the same as what happens if you try to run several instances of the SoG builds on a low-memory GPU. I did not see any data corruption or memory conflicts in my tests.

If the host's main memory is too low, the program simply does not load, exactly as happens with the common builds.
ID: 2017659
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2017662 - Posted: 3 Nov 2019, 3:39:50 UTC - in response to Message 2017654.  

The TDP of a 3700X is 65 watts. The TDP of a 2700X is 105 watts. So I expect the 3700X to be more power-efficient at the same clocks.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2017662
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2017678 - Posted: 3 Nov 2019, 11:59:13 UTC - in response to Message 2017662.  

The TDP of a 3700X is 65 watts. The TDP of a 2700X is 105 watts. So I expect the 3700X to be more power-efficient at the same clocks.


TY. I still yearn for "more cores", but since I continue to be on the hunt to lower my power bill without lowering my crunching, this looks like a possibility.

Last month I paid $485 for my power bill. This month it looks like maybe $200.
A proud member of the OFA (Old Farts Association).
ID: 2017678
jsm

Joined: 1 Oct 16
Posts: 124
Credit: 51,135,572
RAC: 298
Isle of Man
Message 2017680 - Posted: 3 Nov 2019, 12:06:20 UTC - in response to Message 2017678.  

And crunching with the 2990WX dedicated costs £1 (GBP) per day, or £365 per annum. I wonder what a 3990WX (8-way memory a must) will set me back?
jsm
ID: 2017680
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2017681 - Posted: 3 Nov 2019, 12:12:05 UTC - in response to Message 2017680.  

And crunching with the 2990WX dedicated costs £1 (GBP) per day, or £365 per annum. I wonder what a 3990WX (8-way memory a must) will set me back?
jsm


While there is no way to tell until we get "official" numbers, my impression is the 3990WX runs way hotter than the 2990WX, so I am guessing it will probably pull more power.

Tom
A proud member of the OFA (Old Farts Association).
ID: 2017681
StFreddy
Joined: 4 Feb 01
Posts: 35
Credit: 14,080,356
RAC: 26
Hungary
Message 2017722 - Posted: 3 Nov 2019, 17:59:27 UTC - in response to Message 2017678.  

The TDP of a 3700X is 65 watts. The TDP of a 2700X is 105 watts. So I expect the 3700X to be more power-efficient at the same clocks.


TY. I still yearn for "more cores", but since I continue to be on the hunt to lower my power bill without lowering my crunching, this looks like a possibility.

Last month I paid $485 for my power bill. This month it looks like maybe $200.


I think these TDP values hold true when core performance boost is disabled. When core performance boost (not PBO or AutoOC!) is enabled, core package power rises significantly: my 3900X eats 145W with core performance boost enabled (without PBO and AutoOC), while the TDP is 105W according to AMD.
(If you do a manual overclock without undervolting, power consumption will rise further.)
With a -0.064V undervolt, max core package power decreased to ~130W under World Community Grid. A Noctua NH-D15S can keep this beast around 75-77 °C under 75% WCG load, which is fine, I think.

If you disable core performance boost, your CPU will operate only at its base clock and won't boost. You will lose performance, but its package power consumption will be close to the advertised TDP value.
ID: 2017722
Tom M
Volunteer tester

Joined: 28 Nov 02
Posts: 5126
Credit: 276,046,078
RAC: 462
Message 2017737 - Posted: 3 Nov 2019, 21:20:36 UTC - in response to Message 2017722.  


If you disable core performance boost, your CPU will operate only at its base clock and won't boost. You will lose performance, but its package power consumption will be close to the advertised TDP value.


Thank you for the numbers. It almost sounds like if I ran a 3900X with boost disabled, just letting it run at its base clock, I might pull the same power my 2700 is now pulling and still gain 8 more threads... More points to ponder.

I am hoping to make a decision during the Black Friday/Cyber Monday sales cycle, or probably by Christmas. Somehow I don't really expect anyone to offer a 3900X for less than list price!

But who knows, maybe the 3950X will have hit the shelves and started driving down the price of the 3900X. ;)

Tom
A proud member of the OFA (Old Farts Association).
ID: 2017737
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2017753 - Posted: 4 Nov 2019, 1:16:24 UTC - in response to Message 2017722.  

If you disable core performance boost, your CPU will operate only at its base clock and won't boost. You will lose performance, but its package power consumption will be close to the advertised TDP value.

You can also just set a manual all-core overclock you are comfortable with, and set the necessary voltage to run it. The plus side of doing that is that once you set a manual multiplier, Vcore starts out at a default low value of 1.017V, which runs very cool. You just have to add some positive offset to get your chosen clock stable at the minimum required voltage, which generates less heat and uses less power.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2017753
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2017822 - Posted: 4 Nov 2019, 20:21:25 UTC

Another great article from 1usmus, the developer of the DRAM Calculator for Ryzen. He has prepared a new piece at TechPowerUp about his 1usmus Custom Power Plan for Ryzen 3000 Zen 2 processors.
https://www.techpowerup.com/review/1usmus-custom-power-plan-for-ryzen-3000-zen-2-processors/

I suggest anyone running a Zen 2 3000-series or better processor, as well as anyone considering the upcoming Threadripper processors, give it a read. Especially so if you run Windows 10.

He discusses the shortcomings of the Windows thread scheduler, as well as AMD's lame attempt to boost CPU clocks to advertised boost speeds with the recent AGESA 1.0.0.3 ABBA BIOS update.

I would like to hear from anyone who installs his AMD power plan in Windows 10 whether they see the improvements that 1usmus espouses.
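
For anyone who wants to try it, the plan ships as a .pow file that Windows imports with powercfg. A minimal sketch of the install follows; the file path is a hypothetical placeholder (use the actual file from the article):

```python
# Import and activate a custom Windows power plan (.pow) via powercfg.
# The file path below is a hypothetical placeholder; run this from an
# elevated prompt so powercfg can modify power schemes.
import re
import subprocess

POW_FILE = r"C:\Downloads\1usmus_power_plan.pow"  # hypothetical path

# powercfg /import prints the GUID assigned to the imported scheme
out = subprocess.check_output(["powercfg", "/import", POW_FILE], text=True)
match = re.search(r"GUID:\s*([0-9a-fA-F-]+)", out)
if match:
    subprocess.run(["powercfg", "/setactive", match.group(1)], check=True)
    print("Activated power plan", match.group(1))
else:
    # Fall back to listing schemes so the GUID can be set manually
    subprocess.run(["powercfg", "/list"], check=True)
```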
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2017822
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 2018004 - Posted: 6 Nov 2019, 21:43:21 UTC
Last modified: 6 Nov 2019, 21:56:34 UTC

It was a bit of a wait, but I finally snatched an AMD Ryzen 9 3900X CPU tonight, just moments ago. I can go fetch it tomorrow.
Will now look for a motherboard, preferably at the same store of course. :)
(Edit: I think I'll go for an ASRock X470 Taichi; it may just need a BIOS update, but the store can do that for me. I have to order the board anyway.)

I do think I paid a higher price than when it was first released, but still under 600 euros (which is the mainstream price, I see): €592,95
ID: 2018004
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2018007 - Posted: 6 Nov 2019, 22:19:13 UTC

Congrats, Jord. I think you will like it. Plenty of CPU threads to play with. The math-performance upgrade of Zen 2 over Zen+ is really obvious. Happy you could find the CPU at close to original MSRP. I still see it listed at Newegg.com above MSRP, but not at the absurd prices third-party sellers listed when the CPU was not in good supply.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2018007
Jord
Volunteer tester
Joined: 9 Jun 99
Posts: 15184
Credit: 4,362,181
RAC: 3
Netherlands
Message 2018009 - Posted: 6 Nov 2019, 22:23:16 UTC - in response to Message 2018007.  
Last modified: 6 Nov 2019, 22:30:33 UTC

I'd just planned on going for the €339 3700X when I saw that one of the stores I frequent had one 3900X... I didn't wait, didn't think it over, didn't sleep a night on it, just plain went for it. :)

So now that that's out of the way, next up is the video card. Almost all RX 5700 XTs are plain ugly. And Nvidia with their non-Super and Super idiocy isn't making things easier. Go for a 1650, 1650 Super, 1660, 1660 Super, 2070, 2070 Super or plain 2080? Go away, Nvidia! Make things easy!

;-)
ID: 2018009
Keith Myers
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2018011 - Posted: 6 Nov 2019, 22:39:15 UTC - in response to Message 2018009.  

I think you still have to stay away from Navi cards, given the driver issues where OpenCL does not compute results correctly.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2018011
Wiggo
Joined: 24 Jan 00
Posts: 36878
Credit: 261,360,520
RAC: 489
Australia
Message 2018012 - Posted: 6 Nov 2019, 22:50:14 UTC

And Nvidia with their non-Super and Super idiocy isn't making things easier.
That's really easy: the Super models are performance-enhanced replacements for the non-Super models. ;-)

Cheers.
ID: 2018012