LotzaCores and a GTX 1080 FTW

Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10528
Credit: 143,180,206
RAC: 78,831
Australia
Message 1794291 - Posted: 7 Jun 2016, 23:48:59 UTC - in response to Message 1794248.  

Checking the Nvidia driver download page as of a couple of minutes ago, the currently recommended driver for the 1080 and 1070 is 368.39. Unlike the previous 1080 driver, this one lists a very large set of supported products, including for example the 750 and 980, and a fairly full-looking list back through the 400 series.

Coming so soon after the initial release, if I had a GTX 10xx card I'd probably wait for the next release, or at least let others test it out for a while before trying it myself.
Grant
Darwin NT
ID: 1794291 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1794788 - Posted: 9 Jun 2016, 19:48:22 UTC
Last modified: 9 Jun 2016, 19:55:35 UTC

Hmm, so they finally released the specs on the FTW version I am getting. It turns out it's an incremental improvement, but nothing earth-shattering.



The biggest differences are that there are 2 BIOS chips on it, it's rated 35 watts higher, and it now takes two 8-pin power connectors instead of one. Overall, I think I'd probably go for the step down and do a slight overclock, getting about the same performance without needing dual 8-pin power connectors.

*edit* Unless of course with the dual 8 pins there is more overclocking headroom...

ID: 1794788 · Report as offensive
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 3826
Credit: 92,994,988
RAC: 151,534
Australia
Message 1795354 - Posted: 11 Jun 2016, 12:50:08 UTC - in response to Message 1791668.  

Look in the Event Log again. Do you see "This computer has reached a limit of tasks in progress", or words to that effect?

That again is in place to protect the servers, and apportion the work more evenly among the ~150K volunteers. If there was no restriction, and everybody turned every knob up to 11 (as they did at one point), the fastest hosts would be caching thousands of tasks. Each task requires an entry in the database at Berkeley: thousands of tasks times thousands of computers times many days equals a hugely bloated and inefficient database.

I'll be shouted down for this, but no permanently-connected SETI host *needs* a cache longer than about six hours, to cover 'Maintenance Tuesdays'. Make that 12 hours, to cover the congestion period after the end of maintenance, but no more.


. . I tend to agree with that, except I was running with 0.9 days cached and the outage last week went over 24 hours, so I ended up without work for several hours. I originally used 0.5 days, but maintenance outages persuaded me to go to 0.9. I have now succumbed to "insurance measures" and increased to 1.5 days. That seems a reasonable balance, but there are still some anomalies that cause under-caching. Because I have had so few APs, the system keeps telling me they will take 6 days to process and so starves my cache until the one AP task is completed, which when it comes up takes about 2 hours :(
ID: 1795354 · Report as offensive
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 12347
Credit: 127,028,377
RAC: 36,215
United Kingdom
Message 1795358 - Posted: 11 Jun 2016, 13:10:36 UTC - in response to Message 1795354.  

Because I have had so few APs the system keeps telling me they will take 6 days to process and so starves my cache until the one Ap task is completed, which when it comes up takes about 2 hours :(

That's partly because so few Astropulse tasks need to be processed (most of the data on these old tapes has been processed already), and partly because the conditions for considering an AP task to have a 'normal' runtime, and hence worth taking into account when deciding the true estimate, have been set quite strictly. Your host 8012534 is showing

Number of tasks completed	4
Consecutive valid tasks		7

You need to get that 'completed' line up to 11 before the estimates are corrected: sadly, that probably needs another fifteen or twenty tasks to run.
ID: 1795358 · Report as offensive
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 3826
Credit: 92,994,988
RAC: 151,534
Australia
Message 1795364 - Posted: 11 Jun 2016, 13:51:30 UTC - in response to Message 1791681.  

The limit is NOT 100 per 24 hours, but 100 tasks reported as "in progress". This has been the case for a few years.
The only time you are capped per day is when a computer returns a number of error results, in which case the concurrent limit is reduced - as I had a couple of weeks back when I screwed up an update and had an invalid driver installed on one of my rigs. Thankfully I caught that quickly and the cap was removed within a few hours as the computer concerned returned enough valid results.



. . OK, I had to test that (Don't touch that, it's hot! :) ). When I blew out the setting to 3.5 days it still only cached 200 tasks, so I guess that is 100 for the CPU and 100 for the GPU. As it turns out, that is about 1 day's work for the GPU and about 1.4 days for the CPU, which I guess overrides the settings in BOINC Manager, making the current setting of 1.5 days seem about the limit anyway.
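
As a rough sketch of how that server-side cap translates into days of work, independent of the BOINC Manager setting (the core count and per-task runtimes below are illustrative assumptions, not figures from any particular host):

# Convert the 100-task "in progress" caps into days of cached work.
# Illustrative assumptions: an 8-core CPU at ~2.5 h per task and one GPU
# running 2 tasks at a time at ~30 min per task.
cpu_cap, gpu_cap = 100, 100
cpu_cores, cpu_hours_per_task = 8, 2.5
gpu_slots, gpu_hours_per_task = 2, 0.5

cpu_days = cpu_cap * cpu_hours_per_task / cpu_cores / 24
gpu_days = gpu_cap * gpu_hours_per_task / gpu_slots / 24
print(f"CPU cache ~{cpu_days:.1f} days, GPU cache ~{gpu_days:.1f} days")
# With these numbers: CPU ~1.3 days, GPU ~1.0 days - roughly the 1.4 / 1.0
# days observed above, no matter how high the cache setting is pushed.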
ID: 1795364 · Report as offensive
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 3826
Credit: 92,994,988
RAC: 151,534
Australia
Message 1795369 - Posted: 11 Jun 2016, 14:24:15 UTC - in response to Message 1792451.  

Thanks! My RAC in the software shows that it has rocketed from 0 to 4600 in those 2 short days. Who knows how high it just might go? :-)


Don't know for sure, but can do some ballparking.

IIRC 8 Xeon cores doing MB, back in cobblestone scale days, used to get about 20K RAC on PreAVX AKv8 code. Since then there's been two main credit drops amounting to x ~30%. You claw back a little for increased throughput with AVX (about 1.5x), so my guess with 48 CPU cores alone (AVX capable + fast memory), would be 20K*6*0.3*1.5 ~= 50K (1 significant digit). Lots of variability, especially if adding AP and weird work mixes and GPUs into the picture.



. . Should not that formula be 20K*6*0.7*1.5 ??

. . Or is that drop "to" 30% not "of" 30%.

. . I am curious to know if the drop has been that large?
ID: 1795369 · Report as offensive
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 3826
Credit: 92,994,988
RAC: 151,534
Australia
Message 1795375 - Posted: 11 Jun 2016, 15:02:37 UTC - in response to Message 1795358.  

Because I have had so few APs the system keeps telling me they will take 6 days to process and so starves my cache until the one Ap task is completed, which when it comes up takes about 2 hours :(

That's partly because so few Astropulse tasks need to be processed (most of the data on these old tapes has been processed already), and partly because the conditions for considering an AP task to have a 'normal' runtime, and hence worth taking into account when deciding the true estimate, have been set quite strictly. Your host 8012534 is showing

Number of tasks completed	4
Consecutive valid tasks		7

You need to get that 'completed' line up to 11 before the estimates are corrected: sadly, that probably needs another fifteen or twenty tasks to run.


. . That could be 12 months away :)

. . It's taken 4 months to get to 4 ....
ID: 1795375 · Report as offensive
HAL9000
Volunteer tester
Joined: 11 Sep 99
Posts: 6530
Credit: 190,587,129
RAC: 14,687
United States
Message 1795376 - Posted: 11 Jun 2016, 15:03:36 UTC - in response to Message 1795369.  

Thanks! My RAC in the software shows that it has rocketed from 0 to 4600 in those 2 short days. Who knows how high it just might go? :-)


Don't know for sure, but can do some ballparking.

IIRC 8 Xeon cores doing MB, back in cobblestone scale days, used to get about 20K RAC on PreAVX AKv8 code. Since then there's been two main credit drops amounting to x ~30%. You claw back a little for increased throughput with AVX (about 1.5x), so my guess with 48 CPU cores alone (AVX capable + fast memory), would be 20K*6*0.3*1.5 ~= 50K (1 significant digit). Lots of variability, especially if adding AP and weird work mixes and GPUs into the picture.



. . Should not that formula be 20K*6*0.7*1.5 ??

. . Or is that drop "to" 30% not "of" 30%.

. . I am curious to know if the drop has been that large?

The change from MBv6->MBv7 was a drop of 40-50% for some.

I prefer to just use the actual run times to calculate the number of tasks a day & then guesstimate the credit. It looks like their normal AR tasks are running ~2.5 hours, so that gives us ~450 tasks/day. I like to figure 100 credit per task, which gives a max theoretical RAC value of 45,000. Then I figure 80% of the max for the low end of 36,000, which would put a RAC of ~40.5K in the middle.
The daily credit values for the past few days on their 48-core host are: 34,506 36,640 46,096 62,035 35,202 39,718 35,010 35,898 44,786, which averages out to 41,099 for the past 9 days.
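
Putting rough numbers on both estimates (a back-of-the-envelope sketch only, using the per-task runtime and credit figures quoted in this thread):

# 1) The ballpark formula quoted earlier, reading the "~30%" as "multiply
#    by 0.3", i.e. the combined credit drops left about 30% of the old scale.
old_rac_8_cores = 20_000
ballpark = old_rac_8_cores * 6 * 0.3 * 1.5   # 6x the cores, AVX ~1.5x
print(ballpark)                              # 54,000 -> "~50K" to 1 significant digit

# 2) The runtime-based estimate described above.
cores, hours_per_task = 48, 2.5
tasks_per_day = cores * 24 / hours_per_task   # ~460, i.e. "~450 tasks/day"
credit_per_task = 100
rac_max = tasks_per_day * credit_per_task     # ~46,000 theoretical ceiling
rac_low = 0.8 * rac_max                       # ~37,000 low end
print((rac_low + rac_max) / 2)                # ~41,500 midpoint, close to the
                                              # observed 9-day average of 41,099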
SETI@home classic workunits: 93,865 CPU time: 863,447 hours
Join the BP6/VP6 User Group today!
ID: 1795376 · Report as offensive
Cruncher-American (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)

Joined: 25 Mar 02
Posts: 1485
Credit: 289,892,008
RAC: 139,023
United States
Message 1795394 - Posted: 11 Jun 2016, 15:56:35 UTC

A data point for you:

On my 32-core dual E5-2670 (AVX) running 29 CPU threads, a typical WU runs about 2.5 hours, so ~275 tasks/day there. I also have 2 x GTX 980s running 3 WUs per device at ~1 hour/WU, so that's another 144, for a total of about 420/day, and I am averaging a bit over 40K over the last few days.
ID: 1795394 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10528
Credit: 143,180,206
RAC: 78,831
Australia
Message 1795502 - Posted: 12 Jun 2016, 0:16:42 UTC - in response to Message 1795376.  
Last modified: 12 Jun 2016, 0:18:47 UTC

The daily credit values for the past few days on their 48 core host are: 34,506 36,640 46,096 62,035 35,202 39,718 35,010 35,898 44,786. Which averages out to 41099 for the past 9 days.

And those single-day boosts to credit would have been a result of AP splitting.
I noticed when Al got his first batch of AP work, there were MB & AP WUs that each took approx 11,000 secs to crunch. The MB WUs paid just under 100, the AP WUs paid just on 500 (although the usual difference between AP & MB was more like 4 to 1).
Grant
Darwin NT
ID: 1795502 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1795614 - Posted: 12 Jun 2016, 14:09:21 UTC

Looking at the stats screen in BOINC, I'm currently sitting at around 26,000 on this machine, but I have had a couple of hiccups over the last week: first the project downtime, and second when I temporarily had to borrow this machine's network cable for another computer but forgot to switch it back for about 18 hours, so this guy sat idle for most of that time (the cache was probably empty within 2-3 hours, as I've noticed happens during the maintenance outages). The graph has leveled off a bit, but hopefully it will keep increasing slowly as we get past those two 'breaks'. No more loafing!

On another note, this machine has been running nonstop since 5/29 per the Event Log. Not sure how many lines that is, but it hasn't crashed yet; it will be interesting to see just how unlimited 'unlimited' really is... :-)

ID: 1795614 · Report as offensive
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 3826
Credit: 92,994,988
RAC: 151,534
Australia
Message 1795643 - Posted: 12 Jun 2016, 15:56:27 UTC - in response to Message 1795376.  



. . Should not that formula be 20K*6*0.7*1.5 ??

. . Or is that drop "to" 30% not "of" 30%.

. . I am curious to know if the drop has been that large?


The change from MBv6->MBv7 was a drop of 40-50% for some.

I prefer to just use the actual run times to calculate the number of tasks a day & then guesstimate the credit. It looks like their normal AR tasks are running ~2.5 hours, so that gives us ~450 tasks/day. I like to figure 100 credit per task, which gives a max theoretical RAC value of 45,000. Then I figure 80% of the max for the low end of 36,000, which would put a RAC of ~40.5K in the middle.
The daily credit values for the past few days on their 48-core host are: 34,506 36,640 46,096 62,035 35,202 39,718 35,010 35,898 44,786, which averages out to 41,099 for the past 9 days.


. . Thanks for that clarification. I guess I missed the halcyon days of MB6 :(
ID: 1795643 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1798104 - Posted: 23 Jun 2016, 3:47:53 UTC
Last modified: 23 Jun 2016, 3:53:08 UTC

Well, a couple of quick updates on this machine: it looks like it is leveling off at about 33k RAC, which is a little lower than I had hoped for, but it is what it is. Also, I just got confirmation today that the 1080 FTW shipped; I will be getting it Friday, so I will be able to install it this weekend. This brings up the question of which client to run: the beta client seems to be working wonders for people, but the config is a little confusing. If I were to go that route, is there a typical setup yet when installing it? Or could I get some suggestions on the config files? I would like to process work as efficiently as possible, and this 48-core machine should be a very good candidate to do it on.

ID: 1798104 · Report as offensive
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 10528
Credit: 143,180,206
RAC: 78,831
Australia
Message 1798116 - Posted: 23 Jun 2016, 4:34:11 UTC - in response to Message 1798104.  

If I were to go that route, is there a typical setup yet when installing it? Or would I be able to get some suggestions as to the config files? I would like to process work as efficiently as possible, and this 48 core machine should be a very good candidate to do it on.


Hopefully others who know will be along, but I'm pretty sure the GTX 1080 has 20 CUs (Compute Units in OpenCL terms), and for maximum performance at this stage the best option is the OpenCL SoG application, giving it 1 CPU core for each GPU WU being crunched. I'd suggest 3 WUs to start with and see how it goes - there's a good chance 4 or even 5 may actually give more work per hour, but start with 3 to get a baseline.

Mike is the one for help with configuration settings, and I suspect the settings that are best for the GTX 980 Ti would probably be best (or very close to it) for the GTX 1080 for getting the most from the SoG application.
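
If it helps as a starting point, one way to express "3 WUs per GPU, with 1 CPU core reserved for each" is an app_config.xml along these lines. This is only a sketch: the app name shown is an assumption (check it against the names in your app_info.xml / client_state.xml), and with an anonymous-platform install the per-GPU task count is often set in app_info.xml itself instead.

<!-- Sketch of an app_config.xml for 3 GPU tasks at once, each reserving a
     full CPU core. The <name> value is an assumption; use the app name from
     your own app_info.xml. -->
<app_config>
  <app>
    <name>setiathome_v8</name>
    <gpu_versions>
      <gpu_usage>0.33</gpu_usage>  <!-- 1/3 of a GPU per task = 3 concurrent -->
      <cpu_usage>1.0</cpu_usage>   <!-- reserve one CPU core per GPU task -->
    </gpu_versions>
  </app>
</app_config>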
Grant
Darwin NT
ID: 1798116 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1798161 - Posted: 23 Jun 2016, 13:35:37 UTC

Well, I had a small incident a few minutes ago: I accidentally kicked the power cord out of the wall on this machine, and after it rebooted, my event log shows this:

6/23/2016 8:25:35 AM |  | Starting BOINC client version 7.6.22 for windows_x86_64
6/23/2016 8:25:35 AM |  | log flags: file_xfer, sched_ops, task
6/23/2016 8:25:35 AM |  | Libraries: libcurl/7.45.0 OpenSSL/1.0.2d zlib/1.2.8
6/23/2016 8:25:35 AM |  | Data directory: C:\ProgramData\BOINC
6/23/2016 8:25:35 AM |  | Running under account Flash
6/23/2016 8:25:35 AM |  | No usable GPUs found
6/23/2016 8:25:35 AM | SETI@home | Found app_info.xml; using anonymous platform
6/23/2016 8:25:35 AM |  | Host name: LotzaCores
6/23/2016 8:25:35 AM |  | Processor: 48 GenuineIntel       Intel(R) Xeon(R) CPU E5-2692 v2 @ 2.20GHz [Family 6 Model 62 Stepping 4]
6/23/2016 8:25:35 AM |  | Processor features: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss htt tm pni ssse3 cx16 sse4_1 sse4_2 popcnt aes f16c rdrandsyscall nx lm avx vmx smx tm2 dca pbe fsgsbase smep
6/23/2016 8:25:35 AM |  | OS: Microsoft Windows 7: Ultimate x64 Edition, Service Pack 1, (06.01.7601.00)
6/23/2016 8:25:35 AM |  | Memory: 31.97 GB physical, 63.93 GB virtual
6/23/2016 8:25:35 AM |  | Disk: 424.70 GB total, 347.19 GB free
6/23/2016 8:25:35 AM |  | Local time is UTC -5 hours
6/23/2016 8:25:35 AM |  | Config: GUI RPCs allowed from:
6/23/2016 8:25:35 AM |  | Zoom-PC
6/23/2016 8:25:35 AM |  | Config: event log limit disabled
6/23/2016 8:25:35 AM |  | Config: use all coprocessors
6/23/2016 8:25:35 AM | SETI@home | URL http://setiathome.berkeley.edu/; Computer ID 8012837; resource share 100
6/23/2016 8:25:40 AM | SETI@home | General prefs: from SETI@home (last modified 03-Apr-2013 23:59:56)
6/23/2016 8:25:40 AM | SETI@home | Computer location: home
6/23/2016 8:25:40 AM | SETI@home | General prefs: no separate prefs for home; using your defaults
6/23/2016 8:25:40 AM |  | Preferences:
6/23/2016 8:25:40 AM |  | max memory usage when active: 16367.02MB
6/23/2016 8:25:40 AM |  | max memory usage when idle: 31097.34MB
6/23/2016 8:25:40 AM |  | max disk usage: 100.00GB
6/23/2016 8:25:40 AM |  | (to change preferences, visit a project web site or select Preferences in the Manager)
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | [error] no project URL in task state file
6/23/2016 8:25:40 AM | SETI@home | Sending scheduler request: To fetch work.
6/23/2016 8:25:40 AM | SETI@home | Requesting new tasks for CPU
6/23/2016 8:25:41 AM | SETI@home | Scheduler request completed: got 0 new tasks
6/23/2016 8:25:41 AM | SETI@home | Not sending work - last request too recent: 114 sec
6/23/2016 8:29:58 AM | SETI@home | Message from task: 0
6/23/2016 8:29:58 AM | SETI@home | Computation for task 13dc10ae.17649.23799.6.33.68_1 finished
6/23/2016 8:29:58 AM | SETI@home | Starting task 13jn10aa.30773.1707.5.32.172_0
6/23/2016 8:30:00 AM | SETI@home | Started upload of 13dc10ae.17649.23799.6.33.68_1_0
6/23/2016 8:30:10 AM | SETI@home | Finished upload of 13dc10ae.17649.23799.6.33.68_1_0
6/23/2016 8:30:46 AM | SETI@home | Sending scheduler request: To fetch work.
6/23/2016 8:30:46 AM | SETI@home | Reporting 1 completed tasks
6/23/2016 8:30:46 AM | SETI@home | Requesting new tasks for CPU
6/23/2016 8:30:48 AM | SETI@home | Scheduler request completed: got 1 new tasks
6/23/2016 8:30:50 AM | SETI@home | Started download of 13dc10ae.10893.21345.10.37.167
6/23/2016 8:30:52 AM | SETI@home | Finished download of 13dc10ae.10893.21345.10.37.167


I shut down and restarted BOINC and the errors are still appearing during startup, but it is running tasks currently and looks to have downloaded a couple as well. Am I safe to assume that this is just a one-off event, or is trouble brewing due to this accident? Thanks guys.

ID: 1798161 · Report as offensive
Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2531
Credit: 309,522,116
RAC: 680,377
Canada
Message 1798172 - Posted: 23 Jun 2016, 14:55:58 UTC

I have seen this many times with a computer crash; for me, I usually get disconnected from SETI and need to reattach the project.

BOINC should sort it all out in 2 hours and give you a full cache again.
ID: 1798172 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1798176 - Posted: 23 Jun 2016, 15:28:34 UTC - in response to Message 1798172.  

Yeah, I just took a quick look at it, and it appears that everything is running normally again. One thing that caught my eye was that the temps on the CPUs appear to be running 3-4 degrees cooler than previously: from around 40-50 before to mostly 45 and below now. I don't understand how a reboot could have affected that; maybe it's because it's cooler today? Or maybe the moon is aligned with Mars rising, and dropping something on my foot and hopping around a few times added to the voodoo? Who knows, I'll take it though.

ID: 1798176 · Report as offensive
Brent Norman (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Volunteer tester

Joined: 1 Dec 99
Posts: 2531
Credit: 309,522,116
RAC: 680,377
Canada
Message 1798182 - Posted: 23 Jun 2016, 16:15:15 UTC - in response to Message 1798176.  

It's because you were running around the room moving wires and creating extra airflow to your open box.

You should add this into your morning exercise routine :))
ID: 1798182 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1798185 - Posted: 23 Jun 2016, 16:42:14 UTC - in response to Message 1798182.  

Ahh, _that's_ what it was. I think I'll find another way to exercise; do this too many times and things will get corrupted, and then bad juju for me... ;-) I'm actually getting more than enough exercise right now building my brewery/shed, and that keeps me more than physically busy enough. Especially pouring the concrete; that'll whip you into shape, or kill you... lol

It's not that I don't already have enough on my plate; I can't help myself, sadly. Summer is so short up here that I need to cram in as much as I can, and I have to say that SETI for me will probably be more of a wintertime endeavor, though I seem to manage to fit it in after dark fairly often. Sleep? Naa, it's optional, right? :-)

ID: 1798185 · Report as offensive
Al (Crowdfunding Project Donor, Special Project $75 donor, Special Project $250 donor)
Joined: 3 Apr 99
Posts: 1676
Credit: 395,289,827
RAC: 283,535
United States
Message 1798418 - Posted: 24 Jun 2016, 13:55:56 UTC

Well, today's the day: I just checked UPS and it says it's out for delivery. Now is the time to figure out how I will proceed. Hopefully I will be able to get some good advice from Mike, as was mentioned previously, as I'd really like to get the most I can out of this setup. It looks like version 368.39, which has been out since 6/7, is the latest and greatest; any reason not to go with that one? Should be an exciting day!

ID: 1798418 · Report as offensive