ubuntu Install

Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897669 - Posted: 26 Oct 2017, 23:00:55 UTC

Well I am experimenting on my own. I never got around to asking whether what Mike found on the SSE4.1 app applied to the old FX processors. So I am running a batch with the SSE4.1 app and will compare times to my standard AVX app times.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897669
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897670 - Posted: 26 Oct 2017, 23:02:21 UTC

Rick, do you have time to run a BLC task through the benchmarks? Mike said his speedups were similar on both the Arecibo and BLC tasks.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897670
Profile RueiKe Special Project $250 donor
Volunteer tester
Joined: 14 Feb 16
Posts: 492
Credit: 378,512,430
RAC: 785
Taiwan
Message 1897671 - Posted: 26 Oct 2017, 23:09:12 UTC - in response to Message 1897670.  

Rick, do you have time to run a BLC task through the benchmarks? Mike said his speedups were similar on both the Arecibo and BLC tasks.


I will try to find one and kick it off before I head to work.
ID: 1897671
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897688 - Posted: 27 Oct 2017, 1:13:45 UTC - in response to Message 1897671.  
Last modified: 27 Oct 2017, 1:14:54 UTC

I will be waiting for your results. Right now, with only a couple of CPU tasks completed after the changeover, it looks like I might have sped up the Arecibo tasks by 10 minutes and the BLC tasks by maybe 5 minutes. If this holds, I will regret not testing the SSE4.1 app a lot sooner after building the Linux box. I just assumed the AVX app would be fastest, since that is what works on my FX-based Windows 7 systems, and the same for the Ryzen-based Windows 10 systems. But of course, the AVX app is the only one available on Windows other than the stock SSE2 app.

I read some Wikipedia entries, and as far as I understand it, the AVX instruction set is a superset of the SSE4.1/4.2 instructions and should be the fastest. But the entry is about graphics-based tests, not compute tests. There is likely a big difference in the performance of the specific instructions that compute uses.
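As a side note, which of these instruction sets a given CPU actually advertises can be checked on Linux from the flags line of /proc/cpuinfo; a minimal sketch, assuming the usual x86 cpuinfo layout:

```shell
# List which of the SIMD extensions discussed here the CPU advertises.
# The kernel exposes supported instruction sets as space-separated
# tokens on the "flags" line of /proc/cpuinfo.
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -x -E 'ssse3|sse4_1|sse4_2|avx' | sort -u
```

Each matching flag prints on its own line; a CPU missing one of them simply omits that line.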
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897688
Profile RueiKe Special Project $250 donor
Volunteer tester
Joined: 14 Feb 16
Posts: 492
Credit: 378,512,430
RAC: 785
Taiwan
Message 1897720 - Posted: 27 Oct 2017, 9:10:03 UTC

Here is a comparison of the Linux optimized CPU MB apps on my 1950X system running a guppi WU:

[image: benchmark comparison table]

Again, it shows the AVX version is fastest, though all times are faster than normal task times. I suspect it is using more than one core when in bench mode.
Test done with this WU: blc04_2bit_guppi_57976_08263_HIP74234_0029.21404.0.21.44.41.vlar
ID: 1897720
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1897731 - Posted: 27 Oct 2017, 12:44:24 UTC
Last modified: 27 Oct 2017, 12:46:28 UTC

Here is a bench I made in July.

Mint 18.2. Kernel 4.10

Current WU: PG0009_v7.wu

----------------------------------------------------------------
Skipping default app MBv8_8.04r3306_sse41_linux64, displaying saved result(s)
Elapsed Time: ....................... 306 seconds
----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_sse41_linux64
./MBv8_8.04r3306_sse41_linux64 177.35 sec 175.15 sec 0.02 sec
Elapsed Time : ...................... 177 seconds
Speed compared to default : ......... 172 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_ssse3_linux64
./MBv8_8.04r3306_ssse3_linux64 177.87 sec 175.61 sec 0.08 sec
Elapsed Time : ...................... 178 seconds
Speed compared to default : ......... 171 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.05r3345_avx_linux64
./MBv8_8.05r3345_avx_linux64 179.32 sec 177.18 sec 0.02 sec
Elapsed Time : ...................... 180 seconds
Speed compared to default : ......... 170 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Done with PG0009_v7.wu

====================================================================
Current WU: PG0395_v7.wu

----------------------------------------------------------------
Skipping default app MBv8_8.04r3306_sse41_linux64, displaying saved result(s)
Elapsed Time: ....................... 339 seconds
----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_sse41_linux64
./MBv8_8.04r3306_sse41_linux64 200.56 sec 197.94 sec 0.29 sec
Elapsed Time : ...................... 200 seconds
Speed compared to default : ......... 169 %
-----------------
Comparing results
Result : Strongly similar, Q= 99.99%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_ssse3_linux64
./MBv8_8.04r3306_ssse3_linux64 196.70 sec 194.41 sec 0.09 sec
Elapsed Time : ...................... 197 seconds
Speed compared to default : ......... 172 %
-----------------
Comparing results
Result : Strongly similar, Q= 99.99%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.05r3345_avx_linux64
./MBv8_8.05r3345_avx_linux64 201.39 sec 198.49 sec 0.29 sec
Elapsed Time : ...................... 201 seconds
Speed compared to default : ......... 168 %
-----------------
Comparing results
Result : Strongly similar, Q= 99.99%

----------------------------------------------------------------
Done with PG0395_v7.wu

====================================================================
Current WU: PG0444_v7.wu

----------------------------------------------------------------
Skipping default app MBv8_8.04r3306_sse41_linux64, displaying saved result(s)
Elapsed Time: ....................... 315 seconds
----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_sse41_linux64
./MBv8_8.04r3306_sse41_linux64 186.19 sec 184.03 sec 0.02 sec
Elapsed Time : ...................... 187 seconds
Speed compared to default : ......... 168 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_ssse3_linux64
./MBv8_8.04r3306_ssse3_linux64 178.37 sec 176.20 sec 0.02 sec
Elapsed Time : ...................... 178 seconds
Speed compared to default : ......... 176 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.05r3345_avx_linux64
./MBv8_8.05r3345_avx_linux64 176.89 sec 174.48 sec 0.18 sec
Elapsed Time : ...................... 177 seconds
Speed compared to default : ......... 177 %
-----------------
Comparing results
Result : Strongly similar, Q= 99.98%

----------------------------------------------------------------
Done with PG0444_v7.wu

====================================================================
Current WU: PG1327_v7.wu

----------------------------------------------------------------
Skipping default app MBv8_8.04r3306_sse41_linux64, displaying saved result(s)
Elapsed Time: ....................... 329 seconds
----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_sse41_linux64
./MBv8_8.04r3306_sse41_linux64 174.79 sec 172.68 sec 0.01 sec
Elapsed Time : ...................... 175 seconds
Speed compared to default : ......... 188 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.04r3306_ssse3_linux64
./MBv8_8.04r3306_ssse3_linux64 187.77 sec 185.62 sec 0.03 sec
Elapsed Time : ...................... 188 seconds
Speed compared to default : ......... 175 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

----------------------------------------------------------------
Running app with command : .......... MBv8_8.05r3345_avx_linux64
./MBv8_8.05r3345_avx_linux64 194.90 sec 192.51 sec 0.10 sec
Elapsed Time : ...................... 195 seconds
Speed compared to default : ......... 168 %
-----------------
Comparing results
Result : Strongly similar, Q= 100.0%

As you can see, on mid-range tasks SSSE3 is sometimes slightly faster, but AVX is always slower.
Since the CPUs will be running more and more VLARs, I concentrate on those tasks.
The reference time was done with my old FX 8350.


With each crime and every kindness we birth our future.
ID: 1897731
Profile Mike Special Project $75 donor
Volunteer tester
Joined: 17 Feb 01
Posts: 34253
Credit: 79,922,639
RAC: 80
Germany
Message 1897732 - Posted: 27 Oct 2017, 12:56:10 UTC
Last modified: 27 Oct 2017, 13:04:45 UTC

Don't forget that offline benches should be made under loaded conditions to show best real-life performance.
Benches on an idle system only show theoretical performance, which is nice for a presentation.
A CPU always behaves differently under busy conditions.

I always make my benches while BOINC is running on at least 6 CPU cores plus one instance on the GPU.
This shows nearly the same results as running BOINC during the day.


With each crime and every kindness we birth our future.
ID: 1897732
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897747 - Posted: 27 Oct 2017, 15:42:31 UTC - in response to Message 1897732.  

My observation seems to be holding after a day of crunching with the SSE4.1 app. On the simple runtime face of it, the Arecibo tasks seem to have sped up by 10-15 minutes over the typical AVX app runtimes, and the BLC tasks by 5-10 minutes. So I will be staying with the SSE4.1 app for the time being unless something drastic changes. This is on my FX system. I will be turning that into a Ryzen system next week, so things may indeed change.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897747
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897835 - Posted: 28 Oct 2017, 7:20:52 UTC

Rick, have you given any thought to turning off Core Performance Boost and just running all cores at a common clock? Looking at your tasks, you are running at quite variable clock frequencies; I saw clocks as low as 2200 MHz and 2800 MHz. The bulk of your tasks are running at 3400 MHz, which I believe is the stock clock. The 1950X is getting to 4.0 GHz with only 1.35 V on a lot of systems. You might try 3800 MHz on all cores; that should be easily reached.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897835
Profile RueiKe Special Project $250 donor
Volunteer tester
Joined: 14 Feb 16
Posts: 492
Credit: 378,512,430
RAC: 785
Taiwan
Message 1897836 - Posted: 28 Oct 2017, 7:41:46 UTC - in response to Message 1897835.  

Rick, have you given any thought to turning off Core Performance Boost and just running all cores at a common clock? Looking at your tasks, you are running at quite variable clock frequencies; I saw clocks as low as 2200 MHz and 2800 MHz. The bulk of your tasks are running at 3400 MHz, which I believe is the stock clock. The 1950X is getting to 4.0 GHz with only 1.35 V on a lot of systems. You might try 3800 MHz on all cores; that should be easily reached.


Hi Keith, I am trying to control my power bill, so I did not plan to overclock this system at all. But since I am watercooling, I should be able to get more performance without increasing Vcore much. This variable boost could be making single-task bench results faster than usual, but it should not affect the relative performance of the different app versions... I think. I have another test in progress now, but after that I will lock the frequency and turn off core boost, which should make my results more consistent. Thanks for the recommendation.
ID: 1897836
Profile RueiKe Special Project $250 donor
Volunteer tester
Joined: 14 Feb 16
Posts: 492
Credit: 378,512,430
RAC: 785
Taiwan
Message 1897848 - Posted: 28 Oct 2017, 11:49:02 UTC - in response to Message 1897732.  
Last modified: 28 Oct 2017, 12:00:02 UTC

Don't forget that offline benches should be made under loaded conditions to show best real-life performance.
Benches on an idle system only show theoretical performance, which is nice for a presentation.
A CPU always behaves differently under busy conditions.

I always make my benches while BOINC is running on at least 6 CPU cores plus one instance on the GPU.
This shows nearly the same results as running BOINC during the day.


Hi Mike, thanks for sharing your insight. I have completed another run with 16 tasks running simultaneously. Here are the results:

[image: benchmark results table]

The two grayed-out WU results were bad WUs. I still have a concern with this approach: it is hard for every trial to see exactly the same system loading, since as the test runs the app types can get out of sync. In this case the guppis finished first, so the Arecibo results with red borders ran without guppis alongside, which probably resulted in shorter processor times. Also, as Keith mentioned, auto core boost may be skewing the results. I have run out of time and cannot get back to it until next weekend, but when I return I will run each app type in a single run to make sure all see the same system loading. I have also manually set the clock speed of my system to 3.5 GHz with no auto core boost. I wish I had a good per-core frequency monitor, but I have not found an appropriate one yet. Let me know of any recommendations.
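One way to make the loading more repeatable, assuming a Linux host with the standard util-linux taskset tool, is to pin each bench run to a fixed logical CPU so every trial competes for the same core; a rough sketch (the echo stands in for whichever app binary is under test):

```shell
# Pin a command to logical CPU 0 so repeated bench runs all land on
# the same core. Replace the echo with the app under test, e.g. a
# hypothetical ./MBv8_8.05r3345_avx_linux64 invocation.
taskset -c 0 echo "running pinned to cpu0"

# Show the affinity mask actually applied to a pinned process:
taskset -c 0 sh -c 'taskset -p $$'
```

Pinning removes the scheduler's core migrations as a variable, though it does not stop other tasks from sharing the core's SMT sibling.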
ID: 1897848
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897890 - Posted: 28 Oct 2017, 17:21:03 UTC - in response to Message 1897836.  

A couple of folks over on the OCN thread seem to have golden 1950Xs, or the magic touch, and are running them at 1.125 V @ 4.0 GHz. So it is possible to run at low voltages and temps to help your power bill.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897890
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1897895 - Posted: 28 Oct 2017, 17:29:52 UTC - in response to Message 1897890.  
Last modified: 28 Oct 2017, 17:33:27 UTC

Some simple command-line entries can poll the CPU core frequency in real time.

cat /proc/cpuinfo | grep "MHz"

watch -n 0 "lscpu | grep 'MHz'"

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq


There is also an app called turbostat that can be pulled from the repositories. It looks interesting, and I will have to investigate it myself.
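The cpuinfo one-liner above can also be reduced to a one-shot summary of the spread across cores; a small sketch, assuming the usual x86 "cpu MHz" lines:

```shell
# Summarize the current per-core clocks in one line. On x86 kernels
# /proc/cpuinfo carries one "cpu MHz" line per logical CPU; this
# prints the count and the min/avg/max across all of them.
awk -F: '/^cpu MHz/ { n++; sum += $2
                      if ($2 > max) max = $2
                      if (min == 0 || $2 < min) min = $2 }
         END { if (n) printf "threads=%d  min=%.0f  avg=%.0f  max=%.0f MHz\n",
                             n, min, sum/n, max }' /proc/cpuinfo
```

Run it a few times while crunching to see how much the clocks wander under boost.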
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1897895
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1898847 - Posted: 3 Nov 2017, 9:24:38 UTC - in response to Message 1897747.  

My observation seems to be holding after a day of crunching with the SSE4.1 app. On the simple runtime face of it, the Arecibo tasks seem to have sped up by 10-15 minutes over the typical AVX app runtimes, and the BLC tasks by 5-10 minutes. So I will be staying with the SSE4.1 app for the time being unless something drastic changes. This is on my FX system. I will be turning that into a Ryzen system next week, so things may indeed change.

Not surprising the FX system does better with SSE4.1 than AVX: the implementation of AVX on those was not that great.
With a Ryzen system, AVX appears to be the way to go. AMD has improved its implementation significantly.
Grant
Darwin NT
ID: 1898847
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1898893 - Posted: 3 Nov 2017, 16:09:01 UTC - in response to Message 1898847.  

Yes, it appears SSE4.1 is the way to go on the FX processors. Just wish that was available for the Windows platform. The Lunatics AVX app is faster than the only other alternative on Windows, the Lunatics SSE3 app.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1898893
Profile Jeff Buck Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester

Joined: 11 Feb 00
Posts: 1441
Credit: 148,764,870
RAC: 0
United States
Message 1898901 - Posted: 3 Nov 2017, 16:35:48 UTC - in response to Message 1898893.  

Yes, it appears SSE4.1 is the way to go on the FX processors. Just wish that was available for the Windows platform. The Lunatics AVX app is faster than the only other alternative on Windows, the Lunatics SSE3 app.
It does depend on the processor and the WU type though, it appears. After converting my daily driver on Tuesday, I ran a number of bench tests yesterday to see if the AVX would be faster than the SSE3 on the Xeon. (My old AMD processor didn't have AVX.) I hadn't realized SSE4.1 and 4.2 apps weren't available until I went looking yesterday.

For a BLC24 guppi, AVX was actually about 26 seconds slower. For an Arecibo VLAR, AVX was less than 2 minutes faster. Only on an Arecibo non-VLAR (AR=0.383049) was the AVX significantly better, gaining about 11 minutes. If I have time today, I'll probably run a few more tests with higher ARs, before I decide whether to make the switch.
ID: 1898901
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1898907 - Posted: 3 Nov 2017, 17:27:23 UTC - in response to Message 1898901.  

Yes, it appears SSE4.1 is the way to go on the FX processors. Just wish that was available for the Windows platform. The Lunatics AVX app is faster than the only other alternative on Windows, the Lunatics SSE3 app.
It does depend on the processor and the WU type though, it appears. After converting my daily driver on Tuesday, I ran a number of bench tests yesterday to see if the AVX would be faster than the SSE3 on the Xeon. (My old AMD processor didn't have AVX.) I hadn't realized SSE4.1 and 4.2 apps weren't available until I went looking yesterday.

For a BLC24 guppi, AVX was actually about 26 seconds slower. For an Arecibo VLAR, AVX was less than 2 minutes faster. Only on an Arecibo non-VLAR (AR=0.383049) was the AVX significantly better, gaining about 11 minutes. If I have time today, I'll probably run a few more tests with higher ARs, before I decide whether to make the switch.

Wish you could use the old SETI V7 SSE4.2 app on MB. I still have it in my oldBackup directory in Projects. And it doesn't help that the Lunatics installer still shows BOTH the SSE4.1 and SSE4.2 apps in the chooser and defaults to the SSE4.2 app. It also says that the SSE4.2 app is faster on FX processors than the AVX app. What you don't realize until you run the installer is that there is no SSE4.1 or SSE4.2 app in the Lunatics Beta-06 installer, and that letting the installer make the SSE4.2 choice for you actually installs the SSE3 app in your app_info.xml.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1898907
Profile RueiKe Special Project $250 donor
Volunteer tester
Joined: 14 Feb 16
Posts: 492
Credit: 378,512,430
RAC: 785
Taiwan
Message 1899208 - Posted: 5 Nov 2017, 1:25:16 UTC

I am still trying to convince myself which app is best for the 1950X on Linux. My observation so far is that a bench test running only a single app uses more cores/resources than normal, so the results are not representative. Running a bunch of apps in the bench gave inconsistent loading between the different apps, so that data was not representative either. I decided to try another approach: I set up a single-core VM and collected results with the system idle and fully loaded with SETI tasks. I was hoping the results would be similar, but the idle runs were faster. I suspect that since a single core presented to the VM is actually a single hardware thread, that thread can use more than half the capacity of a core when the core is otherwise free. But still, I think this method gives the most consistent results of anything I have tried so far. In this case, I am running the VirtualBox VM on a Windows machine with BOINC still running on all but one core. Here are the results:

[image: benchmark results table]

This test used only a single Green Bank WU. I am now running 7 each of Arecibo and GB tasks with AVX vs SSE4.1 in a fully loaded condition.
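The one-thread-versus-one-core distinction can be confirmed with lscpu, which reports how many hardware threads share each core (2 per core on an SMT-enabled part like the 1950X):

```shell
# Show the thread/core topology. On an SMT CPU, "Thread(s) per core"
# is 2, so a single "core" handed to a VM is really one of the two
# hardware threads sharing a physical core.
lscpu | grep -E 'Thread\(s\) per core|Core\(s\) per socket|Socket\(s\)'
```

With SMT on, a lone thread on an otherwise idle core gets more than half the core's execution resources, which would explain the faster idle-system results.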
GitHub: Ricks-Lab
Instagram: ricks_labs
ID: 1899208
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13161
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1899236 - Posted: 5 Nov 2017, 5:21:55 UTC

Thanks for the continued testing, Rick. I have just now brought the Linux machine back online after the Ryzen 1800X upgrade. The hardware upgrade went fairly smoothly, other than putting in a duff AIO that I had my suspicions about anyway. It never worked correctly in Windows, but I thought the issue was a Windows thing. It turned out Windows had nothing to do with the problem; it was the hardware, the pump to be more specific. I fell back on the original AIO that was in place when it was an FX system, and that is working fine now. I have to RMA the H110i on Monday.

The software turned out to be the real wrench in the works. For the longest time I could not get my desktop to load. I had read that putting new hardware under a Linux installation was supposed to be easy compared to Windows; in my case, that wasn't true. And I have learned that there just isn't an easy way to monitor the CPU temps yet. I have been trying to compile a module that is supposed to pick up the chip and motherboard sensors, but it won't build because of missing dependencies I haven't figured out how to resolve. I have been putting the system through Prime95 and stress-ng stress tests for a day. So far, no hardware issues, but I really don't like not knowing what the Vcpu voltage or the CPU temp is. I have been resorting to shooting components under stress with an IR temp gun as the only way to get some idea of where I am at. I basically set the system up as a carbon copy of the 1700X system with regard to frequency and settings.

One mystery for me is that when I ran the BOINC benchmarks on this system, they came out much lower than on the Windows system. I had always thought that the benchmarks run higher under Linux; theoretically they should have been either identical or higher. I will have to look at other Ryzen hosts at SETI and see if mine is normal.

For now, I am just continuing with the SSE4.1 app that I switched to last week when Jeff, Grant and Brent said it was faster than the AVX app I was previously running. I hear, though, and have seen in Windows, that Ryzen does especially well with the AVX app because its AVX implementation is much better than the old FX processors'. I will run the SSE4.1 app for a week to get a baseline, then switch back over to the AVX app and compare performance. It will be interesting to find out if the same is true under Linux.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1899236
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13720
Credit: 208,696,464
RAC: 304
Australia
Message 1899238 - Posted: 5 Nov 2017, 5:31:04 UTC - in response to Message 1899236.  

For now, I am just continuing with the SSE4.1 app that I switched to last week when Jeff, Grant and Brent said it was faster than the AVX app I was previously running.

Not me!
While not as good as Intel's AVX implementation, Ryzen's is way ahead of the previous series of AMD CPUs, so I'd expect the best performance out of the AVX application by a considerable margin, at least on Windows.
Grant
Darwin NT
ID: 1899238


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.