Posts by HAL9000

21) Message boards : Number crunching : GPU Wars 2016:  Pascal vs Polaris (Message 1831971)
Posted 9 days ago by Profile HAL9000
Personally I'm looking at some of the stuff on the lower end of the spectrum, for its lower power consumption. These two, on paper, are pretty evenly matched.

GPU			SP GFLOPS Base(Boost)	DP GFLOPS Base(Boost)	TDP (W)	Cost
Radeon RX 460		1953(2150)	122		<75	$109 (2GB) $139 (4GB)
GeForce GTX 1050 Ti	1981(2138)	62 (67)		75	$139 (4GB)


Which makes it a great time to be a shopper.

I'll likely be getting a 460, or something similar depending on the new stuff launching this quarter or next, to replace the HD6870 in my HTPC, as it would give me the same GFLOPS for about half the TDP.
Plus I need a specific Radeon driver display scaling function, that Intel and Nvidia don't have in their drivers, to have 1080 displayed correctly on my older 61" TV.

Then I might grab a GTX 1050 Ti or two to play around with. Like I did with the 750 TI FTW I have now.
22) Message boards : Number crunching : Building a 32 thread xeon system doesn't need to cost a lot (Message 1831963)
Posted 9 days ago by Profile HAL9000
Another thought for cutting costs. A lot of the Xeons and the mobos that support them will support either registered or unregistered memory, though you cannot mix the different types.
Why hassle with it? There are a ton of tested server pulls for sale on eBay right now, totally dirt cheap. I just got 6 ea. DDR3 4GB PC-10600 ECC RDIMMs for $25.50 including shipping, which will work quite nicely in my Z600. What I could possibly need 24GB of RAM for, I dunno, but at $4.25 a stick it's worth keeping in mind.
I suspect I'll be able to sell the 12 gig (6x DDR3 2gb 10700 ECC UDIMM) I pull out for a bunch more than $25.00

I usually do pretty well at finding bargain memory deals on old stuff, like the 32GB (8x4GB) of DDR2 PC2-5300 ECC FB-DIMMs I picked up to put in an older Xeon system for well under $1/GB. I'm only using 16GB of it right now, as that stuff gets HOT and eats up power. So I mostly look at it as having spares in the event I lose a DIMM or two.
Also, for my dual E5-2670 system I took out half of the 128GB (16x8GB) it came with, since I'm just doing some bench SETI@home testing and I don't think it has broken 6GB of total memory used.
Additionally it is now running 1 DIMM per channel instead of two. As there have been tests showing a reduction in memory speed when more than one DIMM per channel is used, I figured why not give it a try. So far there doesn't look to be any significant difference when running SETI@home tasks on the system. The difference may only be apparent when running synthetic benchmarks or under more intense memory loads.
23) Message boards : Number crunching : Do we want to believe the SSP status is real? (Message 1831955)
Posted 9 days ago by Profile HAL9000
Just noticed the SSP looks current and that Carolyn, the replica database server is shown running. However the status is still shown as offline. Do we think that things might be returning to something more normal?

You are free to believe in whatever you like. The Server Status is often just a small glimpse of what may, or may not, be the true status of the servers at the given timestamp.
I imagine it is mostly SNMP data, which, while highly reliable, still doesn't qualify as a 100% accurate status. Especially given the number of systems involved.
24) Message boards : Number crunching : how to avoid cuda50 in favour of _SoG (Message 1831886)
Posted 10 days ago by Profile HAL9000
Somebody suggested running multiple tasks on the GPU when in CUDA mode, and only a single task in SoG mode. That can be done with the plan_class extension to app_config.xml - it doesn't need the whole Lunatics installer - and will drive the cuda APRs downwards until SoG is dominant.

That is also a more elegant option than my brute force approach of gutting CUDA from the system. Especially if you want to keep CUDA available for other projects.
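For reference, the per-plan-class control mentioned above looks something like this. This is just a sketch: the plan_class strings shown (cuda50, opencl_nvidia_SoG) and the resource numbers are assumptions and should be checked against the application list for your host.

```xml
<app_config>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>cuda50</plan_class>
		<avg_ncpus>0.1</avg_ncpus>
		<ngpus>0.5</ngpus>
	</app_version>
	<app_version>
		<app_name>setiathome_v8</app_name>
		<plan_class>opencl_nvidia_SoG</plan_class>
		<avg_ncpus>1</avg_ncpus>
		<ngpus>1</ngpus>
	</app_version>
</app_config>
```

With ngpus at 0.5 the cuda50 tasks run two at a time while SoG tasks get the whole card, which is the arrangement suggested above.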
25) Message boards : Number crunching : Issues with 1 GPU on Penta Nano System (Message 1831885)
Posted 10 days ago by Profile HAL9000
And here is the bug:
BOINC assigns device 2
4 slot of 64 used for this instance; total_GPU_instances_num=5
Info: BOINC provided OpenCL device ID used
Info: CPU affinity mask used: 0; system mask is ff
With such a mask no CPUs are allowed at all.
Perhaps the Windows API reacts to such a mask as if it were 0xff, allowing ALL CPUs instead.
Please check the affinity of all active tasks on that host - is each of them pinned to only a single CPU, or do some have a few available CPUs?


Well, it seems the reason for the bug is clear - a system mask of ff corresponds not to an 8 CPU system but a 32 CPU one. Hence the wrong path was selected.

So I need a value that correctly reflects the real number of CPUs in the system...


If the system mask is the same as the hex value used for the start /affinity command, then it should be a much longer hex value for a 32 CPU system. 0x55555555 would be all even-numbered CPUs on a 32 CPU system.
I've used this command line to have BOINC run only on physical cores and not use HT cores for some test on my 16c/32t system.
start /AFFINITY 55555555 boinc.exe --detach
Using FF gives apps access only to CPUs 0-7. But the start /affinity command may not operate the same as setting the values in code.
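A quick sketch of how those masks are built: each bit position corresponds to one logical CPU, so 0x55555555 (alternating bits) selects the 16 even-numbered CPUs of a 32-thread system, while 0xFF covers all 8 CPUs of an 8-thread one.

```python
def affinity_mask(cpus):
    """Build an affinity bitmask: bit N set means logical CPU N is allowed."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Even-numbered CPUs (one per physical core with HT) on a 16c/32t system:
print(f"{affinity_mask(range(0, 32, 2)):X}")  # 55555555
# All CPUs on an 8-thread system:
print(f"{affinity_mask(range(8)):X}")         # FF
```

This is the same value start /affinity expects, minus the 0x prefix.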
26) Message boards : Number crunching : how to avoid cuda50 in favour of _SoG (Message 1831824)
Posted 11 days ago by Profile HAL9000
BOINC doesn't currently have the ability to disable specific GPU features, like disabling CUDA and leaving OpenCL enabled. However, if you want to get tricky you could remove the CUDA section from your coproc_info.xml and then make it read only.
I have used similar tricks for my Radeon GPUs, when I needed to either tell the server a GPU supported CAL when it didn't, or that it didn't support CAL when I only wanted OpenCL.

I haven't tried the same method with CUDA, so I don't know if it will work the same. Proceed at your own risk.

EDIT: Looking at the coproc_info.xml for my 750ti it looks like maybe the value <have_cuda>1</have_cuda> could be changed to 0 instead of removing the CUDA section. If you give it a try be sure to let us know what happens.


I decided since I'm only using the OpenCL apps I'd give it a try.
Looks like the whole CUDA section has to come out.

11/21/2016 7:18:21 PM		Starting BOINC client version 7.4.42 for windows_x86_64
11/21/2016 7:18:21 PM		log flags: sched_ops
11/21/2016 7:18:21 PM		Libraries: libcurl/7.39.0 OpenSSL/1.0.1j zlib/1.2.8
11/21/2016 7:18:21 PM		Data directory: D:\BOINC
11/21/2016 7:18:21 PM		Failed to delete old coproc_info.xml. error code -110
11/21/2016 7:18:21 PM		OpenCL: NVIDIA GPU 0: GeForce GTX 750 Ti (driver version 364.51, device version OpenCL 1.2 CUDA, 2048MB, 1967MB available, 1622 GFLOPS peak)
11/21/2016 7:18:21 PM		OpenCL: Intel GPU 0: Intel(R) HD Graphics (driver version 10.18.10.4358, device version OpenCL 1.2, 1195MB, 1195MB available, 358 GFLOPS peak)
11/21/2016 7:18:21 PM		OpenCL CPU: Intel(R) Celeron(R) CPU  J1900  @ 1.99GHz (OpenCL driver vendor: Intel(R) Corporation, driver version 3.0.1.10891, device version OpenCL 1.2 (Build 76427))
11/21/2016 7:18:21 PM		Asteroids@home	Found app_info.xml; using anonymous platform
11/21/2016 7:18:21 PM		SETI@home	Found app_info.xml; using anonymous platform
11/21/2016 7:18:21 PM		Host name: SIMIII
11/21/2016 7:18:21 PM		Processor: 4 GenuineIntel Intel(R) Celeron(R) CPU J1900 @ 2.41GHz
11/21/2016 7:18:21 PM		Processor features: fpu vme de pse <SNIP>
11/21/2016 7:18:21 PM		OS: Microsoft Windows 7: Ultimate x64 Edition, Service Pack 1, (06.01.7601.00)
11/21/2016 7:18:21 PM		Memory: 3.71 GB physical, 7.42 GB virtual
11/21/2016 7:18:21 PM		Disk: 198.09 GB total, 188.47 GB free
11/21/2016 7:18:21 PM		Local time is UTC -5 hours


You may notice the line "Failed to delete old coproc_info.xml. error code -110". BOINC is just complaining that it can't delete the old file and make a new one, but that is the desired effect in this situation.
If you were to change your coprocessor configuration in any way you would probably want to let BOINC generate a new one and modify it again.

This does drop the driver version from your host information on the website, because BOINC doesn't do driver version detection for OpenCL. Which is why there are no driver versions displayed for Intel GPUs and some Radeon GPUs.
27) Message boards : Number crunching : how to avoid cuda50 in favour of _SoG (Message 1831820)
Posted 11 days ago by Profile HAL9000
BOINC doesn't currently have the ability to disable specific GPU features, like disabling CUDA and leaving OpenCL enabled. However, if you want to get tricky you could remove the CUDA section from your coproc_info.xml and then make it read only.
I have used similar tricks for my Radeon GPUs, when I needed to either tell the server a GPU supported CAL when it didn't, or that it didn't support CAL when I only wanted OpenCL.

I haven't tried the same method with CUDA, so I don't know if it will work the same. Proceed at your own risk.

EDIT: Looking at the coproc_info.xml for my 750ti it looks like maybe the value <have_cuda>1</have_cuda> could be changed to 0 instead of removing the CUDA section. If you give it a try be sure to let us know what happens.
28) Message boards : Number crunching : Open Beta test: SoG for NVidia, Lunatics v0.45 - Beta6 (RC again) (Message 1831731)
Posted 11 days ago by Profile HAL9000
I just installed Beta 6 and checked for the affinity issue noticed when troubleshooting my Penta-Nano system. Not sure if an attempt was made to fix it, but it looks like it is still an issue. One of the 5 running MB tasks has affinity set to all cores, while the other 4 are set to unique cores.

This isn't really an installation question, more a question about the application payload.

Please contact Raistmer about this sort of question, citing

OpenCL version by Raistmer, r3557
AMD HD5 version by Raistmer


Thanks. I will PM him.

Is it the same issue you were discussing in this thread?
29) Message boards : Number crunching : 16th Anniversary (Message 1831718)
Posted 11 days ago by Profile HAL9000
If there is one of these already, I can't find it, sorry!

I would have posted earlier, but the site wasn't available. 2 days ago was my 16th Anniversary of crunching for Seti@home. I joined 19th November 2000. It's been a roller coaster ride from Seti Classic to Boinc, and maybe now Atlas. But I'm still here plugging away despite a very sad holiday some years ago. But like all bedtime stories, the good guys do get to win in the end.

20 million credit for me and 1 Billion for my team I founded, not a bad record methinks. Also helped out with donations along the way where I could. Happy to be here, even if not so up front as before.

Kind Regards to everyone.

BT Retired Club

Chris S

Welcome to the 16 year club
30) Message boards : Number crunching : No work for ATI5? (Message 1831714)
Posted 11 days ago by Profile HAL9000
I can see your system has, at this timestamp, 281 tasks in progress. The server limits each machine to 100 CPU tasks and 100 tasks per GPU. So you definitely have work for both GPUs to be running at the same time, as both of your GPUs pull from the same pool of tasks.

With Windows 10 I'd suspect it may have done some driver or other update that has caused an issue, but let's hope that's not it.
Posting the BOINC startup lines will help determine that.

Do you have any GPU tasks that are in a state other than Running or Ready to start, like Waiting to run?

EDIT: Since I just saw your startup. Is the R9 running Milkyway tasks instead of SETI right now?
If not try Use GPU always to see if they both start. If they do then we know there is just a BOINC issue.

Also I just looked at your task list. It looks like that machine has not returned any GPU tasks for about a week. So none of your GPUs seem to be running SETI at the moment.
31) Message boards : Number crunching : Power Distribution (Message 1831663)
Posted 11 days ago by Profile HAL9000
[quote]You probably have already found this info, but figured I'd toss it in here if you had not for your cable sizing.
US Standards for a 30A circuit are 10 AWG minimum in most situations (sizes go up for long runs), but you can always go up a size in a shorter run for a lower voltage drop across the cable. Normally the extra cost of the large cables isn't justified but for larger power usage situations, like yours, it is likely worth the extra expense.
AWG	Inches	MM
6 	0.162 	4.11
7 	0.1443 	3.67
8 	0.1285 	3.26
10 	0.1019 	2.59
12 	0.0808 	2.05
14 	0.0641 	1.63
16 	0.0508 	1.29
18 	0.0403 	1.02

The 3.5mm cabling you went with is some pretty heavy duty stuff. As someone with an electrical engineering degree I completely approve![/quote]

In the video, I said mm when it is actually mm2, so this conversion table applies.
AWG    mm     inch      mm2
7    3.6649  0.1443  10.5488
8    3.2636  0.1285   8.3656
9    2.9064  0.1144   6.6342
10   2.5882  0.1019   5.2612
11   2.3048  0.0907   4.1723
                      3.5000 (new wire)
12   2.0525  0.0808   3.3088
13   1.8278  0.0720   2.6240
14   1.6277  0.0641   2.0809 (original wire)


So it is not quite that impressive. I checked out the 5mm2 triple wire cabling and it was just too massive to work with. US standards are more conservative than international, but I'm having trouble finding the table that led me to believe this. Looks like I should not attempt to push to 30A on the line to my desktop with this cable.


Oh, I missed the mm2. That does make a slight difference. So it looks like you basically have 12 AWG cable, which is standard US 20A circuit wiring. However I think the 20A rating is for solid cable; I'd have to look up the ratings for stranded cabling. I'd guess it might be 25A, but probably not 30A. Even though it might handle 30A fine 24/7 it would likely run warmer, which is likely why the US standards go up to the next size for 30A. Given you are already in the 25-30ºC range I personally wouldn't want to see much more from my cabling.

I mostly use 14AWG power cables for my PC to UPS connections, and they are all sub-kW systems. Mostly because I like overkill when it comes to power related things like that. I did once buy a 10 foot section of 10,000 strand 000 AWG cable to use as a ground for some Tesla coil stuff I was doing.
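The gauge tables above can be reproduced from the standard AWG formula, where a gauge-n solid wire has diameter 0.127 × 92^((36−n)/39) mm. A quick sketch:

```python
import math

def awg_diameter_mm(gauge):
    """Solid-wire diameter in mm from the standard AWG formula."""
    return 0.127 * 92 ** ((36 - gauge) / 39)

def awg_area_mm2(gauge):
    """Cross-sectional area in mm^2 from the diameter."""
    d = awg_diameter_mm(gauge)
    return math.pi / 4 * d * d

print(round(awg_diameter_mm(10), 2))  # 2.59 mm, matching the table
print(round(awg_area_mm2(12), 2))     # 3.31 mm^2, vs the ~3.5 mm^2 wire
```

This confirms the placement of the 3.5 mm2 wire between 11 and 12 AWG in the conversion table.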
32) Message boards : Number crunching : GPU testing on low powered CPU (Message 1831652)
Posted 12 days ago by Profile HAL9000
Which is why I don't normally run those tasks. Not sure how well it would do with GBT VLARs tho...

Probably even worse.
The Celeron is limited by cache size first of all. And VLAR (GBT included) needs semi-random access to the largest arrays in the whole algorithm.

In this CPU the cache is split between cores: each 2 cores share 1MB of cache, instead of the single large L3 cache in Core ix CPUs. But the split cache might be the reason it didn't suffer CPU slowdowns when running iGPU MB tasks like Core ix CPUs tend to show.
I like having different types of hardware to see how the design changes affect our crunching.
I really want to get a CPU with Iris Pro graphics to see how an iGPU with its own cache handles things. However they tend to be hard to find, or cost much more than list price.
33) Message boards : Number crunching : Power Distribution (Message 1831648)
Posted 12 days ago by Profile HAL9000
You probably have already found this info, but figured I'd toss it in here if you had not for your cable sizing.
US Standards for a 30A circuit are 10 AWG minimum in most situations (sizes go up for long runs), but you can always go up a size in a shorter run for a lower voltage drop across the cable. Normally the extra cost of the large cables isn't justified but for larger power usage situations, like yours, it is likely worth the extra expense.
AWG	Inches	MM
6 	0.162 	4.11
7 	0.1443 	3.67
8 	0.1285 	3.26
10 	0.1019 	2.59
12 	0.0808 	2.05
14 	0.0641 	1.63
16 	0.0508 	1.29
18 	0.0403 	1.02

The 3.5mm cabling you went with is some pretty heavy duty stuff. As someone with an electrical engineering degree I completely approve!
34) Message boards : Number crunching : Power Distribution (Message 1831575)
Posted 12 days ago by Profile HAL9000
At my last job I had a rack of older PowerEdge 2850 servers and only two 120v 20a power drops to feed it. I ended up having to seek a few other outlets across the room to get everything up and going. The UPS's were connected to the mains with 10' 14-3 cords, but were always a bit warmer than ambient.

Recently at home I've been working on identifying which breakers are tied to what, since I have systems in a bedroom, the living room, and a loft over the living room. I am hoping the loft outlets are not tied to the living room. I have considered taking the unused 240V electric dryer outlet, as I have a gas dryer, and having some new high amp outlets installed in my loft for dedicated computer stuff. Then I could switch several things to 240V, which supposedly gains a few % in efficiency with computer PSUs.
35) Message boards : Number crunching : Building a 32 thread xeon system doesn't need to cost a lot (Message 1831564)
Posted 12 days ago by Profile HAL9000
Here is some data on run times for the E5-2670's at 3.0GHz with the AVX app.
32 threads: VLARs ~120m, Normal AR ~150m
16 threads: VLARs ~75m, Normal AR ~90m


So HT gains 25% in total work done then:

In 10 hours (600 minutes) for VLARs

HT on - 120 min/WU = 5/thread x 32 = 160 WUs done
HT off - 75 min/WU = 8/thread x 16 = 128 WUs done

160/128 = 1.25 = 25% gain.

And similarly for Normal ARs.

A little better than I expected, actually.


I normally take the time and come up with a tasks per day value.
So something along the lines of:
32 threads: VLARs ~384, Normal AR ~307
16 threads: VLARs ~307, Normal AR ~256
However with shorties and other in-between work the count seems to fall more in the 450-550 tasks a day range.
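The arithmetic behind those tasks-per-day figures is just minutes-per-day divided by run time, times threads, using the VLAR run times quoted above (~120m on 32 threads, ~75m on 16). A quick sketch:

```python
def tasks_per_day(minutes_per_task, threads):
    """Idealized throughput: every thread runs tasks of this length back to back."""
    return round(24 * 60 / minutes_per_task * threads)

print(tasks_per_day(120, 32))  # 384 VLARs/day with HT on
print(tasks_per_day(75, 16))   # 307 VLARs/day with HT off
# 384/307 is about 1.25 - the 25% HT gain quoted above
```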
Extracting the host_total_credit values from statistics_setiathome.berkeley.edu.xml, it looks like it is generating ~29K credit a day with the current work.
So a pair of small-medium GPUs would likely double its output. As I recall my 750ti FTW would do ~10k/day with normal CUDA tasks, and I expect with GBT data it would be a bit less. So perhaps a pair of 1050ti FTWs would be a good choice to roughly double the system's output without ramping up the kWh.
36) Message boards : Number crunching : GPU testing on low powered CPU (Message 1831519)
Posted 12 days ago by Profile HAL9000
The iGPU build hardly requires CPU reservation. Its supporting runtime CPU consumption is quite low.

The system when running only SETI@home AP tasks, 4 CPU AP & 1 iGPU AP, had no issues.
When running other projects on the CPU, 4 CPU PrimeGrid & 1 iGPU AP, the iGPU would be slower. So I reserved a CPU to keep it happy.

For the 750Ti with the NV SoG app I did some tests with 2 free CPU cores.
This command line caused NV driver restarts:
-high_prec_timer -use_sleep -hp -cpu_lock -cpu_lock_fixed_cpu 2

This command line also caused NV driver restarts:
-high_prec_timer -use_sleep -hp

This command line was OK, but was found to have slightly lower performance:
-high_prec_timer -use_sleep -cpu_lock -cpu_lock_fixed_cpu 2

Looks like either the system, or the driver version I'm using, didn't like the -hp command with r3557.

Now I have configured the system for 3 CPU PrimeGrid, 1 iGPU AP, & 2 NV SoG tasks. The system is currently sitting at ~90% CPU load, so hopefully not much extra time is added for starting GPU tasks. If it is, maybe I'll switch to 2 PrimeGrid and 1 SETI MB CPU task for comparison.
The Celeron J1900 is not great at MB CPU tasks. Which is why I don't normally run those tasks. Not sure how well it would do with GBT VLARs tho...
37) Message boards : Number crunching : v8.19 opencl_nvidia_SoG for x86_64-pc-linux-gnu (Message 1831447)
Posted 13 days ago by Profile HAL9000
Ok let me rephrase the question: why are there no official 8.19 binaries for linux? I'm guessing it's a matter of resources? I know my way around build-essential; if it's a matter of gitting, making and testing I'd be willing to try to help.

One of the first steps would be having the apps deployed at Beta and having them undergo a round of testing. Then, if all went well, they would get deployed to main. In another thread Raistmer had mentioned that Urs said debugging was finished, so some new test binaries could be generated.
That may or may not be related to an 8.19 release. You would have to get into the loop with Urs to find out exactly what the current plans are.
38) Message boards : Number crunching : Newb poster looking for clarity. (Message 1831417)
Posted 13 days ago by Profile HAL9000
Thanks to you both so much. I'm just playing at this compared to you guys and many others. I've been doing SETI for a long time but never really got into the nitty gritty of performance and the technicalities of it. You guys really do an incredible job for the project.

So to summarise, you think I should try out Lunatics, it will process more units over using the standard BOINC/SETI apps, and it has no detrimental effect on the units I'm sending back to SETI. I guess by using Lunatics my GPU will process more than 1 unit simultaneously too, right?

As you can tell by my average I don't run this 24/7, or have multiple machines like you dedicated members. This is my main PC, my hobby, my gaming PC. In your opinions are my results OK, no errors or corrupt returned units?

I do have a spare 2011-3 motherboard here, do you have any recommendations regarding getting an efficient CPU for SETI, on the cheap? Maybe a server pulled Xeon?

Thank you again Brent and HAL9000 for your help, much appreciated.

Basically the optimized apps will increase your computing efficiency. So you could get the same amount of work done for fewer kWh, or get more work done for the same kWh.

I don't believe the Lunatics installer automatically enables running more than 1 GPU app instance per card. The easiest way to configure the number of GPU app instances is by creating an app_config.xml.
The instructions on the BOINC site may look overly complicated at first, but it is pretty straightforward. Here are a few basic samples.

This will run 1 GPU app instance and allocate 1 CPU core/thread to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>1.0</gpu_usage>
			<cpu_usage>1.0</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

This will run 2 GPU app instances and allocate 2 CPU core/threads to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.5</gpu_usage>
			<cpu_usage>1.0</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

This will run 2 GPU app instances and allocate 1 CPU core/thread to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.5</gpu_usage>
			<cpu_usage>0.5</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

This will run 3 GPU app instances and allocate 1 CPU core/thread to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.33</gpu_usage>
			<cpu_usage>0.33</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

This will run 4 GPU app instances and allocate 2 CPU core/threads to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.25</gpu_usage>
			<cpu_usage>0.5</cpu_usage>
		</gpu_versions>
	</app>
</app_config>

This will run 4 GPU app instances and allocate 1 CPU core/thread to "feed" the GPU.
<app_config>
	<app>
		<name>setiathome_v8</name>
		<gpu_versions>
			<gpu_usage>0.25</gpu_usage>
			<cpu_usage>0.25</cpu_usage>
		</gpu_versions>
	</app>
</app_config>
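All of the samples above follow one pattern: gpu_usage is 1/instances and cpu_usage is cores-reserved/instances. If you find yourself writing many of these, a throwaway generator like this sketch might help (the app name setiathome_v8 is taken from the samples above; adjust for other projects):

```python
def make_app_config(instances, cpus_reserved=1, app="setiathome_v8"):
    """Emit an app_config.xml body for N concurrent GPU app instances."""
    gpu = round(1 / instances, 2)
    cpu = round(cpus_reserved / instances, 2)
    return (
        "<app_config>\n"
        "\t<app>\n"
        f"\t\t<name>{app}</name>\n"
        "\t\t<gpu_versions>\n"
        f"\t\t\t<gpu_usage>{gpu}</gpu_usage>\n"
        f"\t\t\t<cpu_usage>{cpu}</cpu_usage>\n"
        "\t\t</gpu_versions>\n"
        "\t</app>\n"
        "</app_config>\n"
    )

# 2 instances sharing 1 CPU core - same as the third sample above:
print(make_app_config(2))
```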


Also worth considering is making use of the optional tuning parameters for the GPU applications. There are ReadMe files included in the Lunatics installer. I believe the server also sends a version with the "stock" GPU apps as well.
Normally I tend to let the people that really know what they are doing find good values, and then use their configurations as a starting point for tuning.
39) Message boards : Number crunching : GPU testing on low powered CPU (Message 1831328)
Posted 14 days ago by Profile HAL9000
Running 2 GUPPI VLARs with SoG seems to give a pretty solid 35m run time for me.

The CPU time is greater than the run time, but I expect that has to do with the nature of the slower CPU.
I could play with -use_sleep settings to see if they make any measurable difference.


. . Hi again,

. . I am using -use_sleep to get 4 concurrent tasks on the GPus with only 2 cores in the Pentium D.

. . The command in the command line file with _SoG in the name is-

-high_prec_timer -use_sleep

. . The Pentium D copes quite OK with the load.

Stephen

.

It looks like I had not really noticed -high_prec_timer in the readme. However I'm not sure if the system I'm using the 750Ti FTW in supports that hardware function. I know several of my MBs can toggle HPET off/on. It is likely HPET is supported on the system but can't be disabled.



. . I suspect a zero current initialisation would restore the rig to defaults.

Stephen

.

I went the easy route and set the command in the app_config. Since it didn't throw any errors, it must support the function despite not having a setting to toggle it off/on in the BIOS for that particular system.

I should add that after a few days using -high_prec_timer -use_sleep I lowered the CPU reservation.
From: <gpu_usage>0.5</gpu_usage> <cpu_usage>1.0</cpu_usage>
To: <gpu_usage>0.5</gpu_usage> <cpu_usage>0.5</cpu_usage>
This let me return PrimeGrid to running 2 CPU tasks instead of 1.
So the system is running:
1 iGPU AP task
2 NV SoG MB tasks
2 PrimeGrid CPU tasks
2 CPU cores free. 1 for iGPU & 1 for NV GPU.
I might be able to get by leaving only 1 CPU core free for both the iGPU & NV GPU on the system. I am going to have to try running 3 PrimeGrid CPU tasks to see if that starts to slow down anything.

It seems like a low powered GPU is a good match to a small CPU.
40) Message boards : Number crunching : Newb poster looking for clarity. (Message 1831274)
Posted 14 days ago by Profile HAL9000
Ah yes, I forgot about AVX, that is a BIG improvement for his 12 cores.

I've not seen a huge difference between SSE3 & AVX on my systems, but it is significant enough that I am glad they put forth the effort to make it.
Perhaps AVX2 will show some speed improvements one day as well.


©2016 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.