Posts by HAL9000

21) Message boards : Number crunching : Building 4 GPU host, which cabinet? (Message 1885332)
Posted 21 Aug 2017 by Profile HAL9000
Post:
Use a 19" x 15" rack frame, PSUs & other "support" stuff in a 4U layer at the bottom, motherboard in a 6U layer in the middle, then radiators etc in a 3 or 4 U layer at the top. Plenty of space for circulation fans on the middle layer to keep the beast cool.

I think the Core WP200 might work along the lines of what you are thinking.
The expansion cabinet could be placed on top with four 600mm radiators.

22) Message boards : Number crunching : I have a new system, expected runtimes? (Message 1885259)
Posted 21 Aug 2017 by Profile HAL9000
Post:
A big improvement in memory bandwidth, particularly writes.


Indeed it is. I wonder what the difference will be when I add an additional DIMM per socket.

What app generates the memory chart output?
I can run it on mine for comparison.
Intel states the max memory bandwidth for these CPUs is 51.2 GB/s.
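For what it's worth, that peak figure is just channels × data rate × 8 bytes; e.g. four channels of DDR3-1600 give 4 × 1600 MT/s × 8 B = 51.2 GB/s. So an extra DIMM per socket only raises the theoretical peak if it populates a channel that is currently empty.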
23) Message boards : News : Web site upgrade (Message 1885258)
Posted 21 Aug 2017 by Profile HAL9000
Post:
I suppose it's the same colours as used on the server status page: https://setiathome.berkeley.edu/show_server_status.php

I believe it works for the status page, as it has a larger font size and is bold.
The forum buttons with white text on green are somewhat less user friendly. It is nearly as bad as bright yellow on white for me.
The darker green used when mousing over the buttons is much clearer to me.
24) Message boards : Number crunching : Building 4 GPU host, which cabinet? (Message 1885208)
Posted 21 Aug 2017 by Profile HAL9000
Post:
I'm going to try something like the Thermaltake Core X9 when I need new cases, as the GPUs won't be stacked on top of each other and they should run cooler.

Cheers.

I have a Core X9 and it is huge. The Core X5 might be a better option for anyone planning to only use radiators up to 360mm.
25) Message boards : Number crunching : Come on admit it, you have an E5-2670 crunching on Boinc and/or Seti@Home. How is it doing? (Message 1885048)
Posted 20 Aug 2017 by Profile HAL9000
Post:
For interest, anyone got a comparison for running the system with a lean Linux distro compared to the much larger Windows installs?

The top spots look to go to Linux for the time being at least:

Top hosts


Is there a few more percent to be gained there??

Happy fast crunchin',
Martin

Petri's GPU app has significant gains and is currently only for Linux.
For the CPU apps there is no advantage to running any specific OS.
26) Message boards : Number crunching : Come on admit it, you have an E5-2670 crunching on Boinc and/or Seti@Home. How is it doing? (Message 1884540)
Posted 17 Aug 2017 by Profile HAL9000
Post:
My next experiment with the E5-2670 box is to upgrade to MS Windows Server 2012 R2 (Standard) so I can practice being a system admin and pick up other assorted skills.


Having now installed MS Windows Server 2012 R2, I have some advice.

Don't install this type of OS [SERVER] unless you have an overriding need like I did/do. Even when installed with the GUI it feels awkward and clumsy, because it is a "Server", not a workstation.

It doesn't even have a shutdown button in the GUI. You have to go to PowerShell and type "shutdown /s" (without the quotes). No, I don't know what the /s means. I am working my way through the very beginning of an "install and configure" course for MS Server 2012 R2.

I ran across a method of installing MS Security Essentials (which is unsupported) so I don't have to pay even more for server-level anti-virus. I did manage to get the two profiles merged on the SETI website. Right now it still looks like I may end up going through the application test cycle again.

If you change OSes it might be a good idea to immediately suspend getting new tasks until you have the profiles merged. Then it MIGHT just send you what it was sending you.

Otherwise it seems to be just munching along like it used to.

Tom

I'm not sure why you are having to run shutdown from the command line. You can press the Windows button and then click the power button on the Start screen, just like in the desktop version of Windows.
If you add the server feature Desktop Experience then you will get most of the things from the desktop OS that you may want for a workstation setup.
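Something like this from an elevated PowerShell prompt should cover both points (the feature name is from memory, so check Get-WindowsFeature first):

shutdown /s /t 0    # /s = shut down, /t 0 = no delay
Install-WindowsFeature Desktop-Experience -Restart    # adds the desktop extras, then reboots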
27) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1883911)
Posted 14 Aug 2017 by Profile HAL9000
Post:
I've simplified my cc_config files on all machines down to bare minimum. There was a lot of fluff in there. Basically every option that can be set was in the file and even though they mostly were all set to 0, I figure if the client doesn't need to read them, all the better. Will be interesting to watch and see if that makes any kind of dramatic difference in my work request issues.

My cc_config is set in a similar way. If I add an option for testing I leave it in and set to 0. I figure it is less work if I want to use it later.
Currently my standard cc_config, which I copy when setting up a new host, is 82 lines long.
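For anyone curious, the bare minimum is really just the wrapper plus whatever options you actually use; something along these lines (option names from memory, the full list is in the BOINC client configuration docs):

<cc_config>
   <log_flags>
      <sched_op_debug>0</sched_op_debug>
   </log_flags>
   <options>
      <report_results_immediately>0</report_results_immediately>
      <max_file_xfers>8</max_file_xfers>
   </options>
</cc_config>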
28) Message boards : Number crunching : Completed WUs going nowhere? (Message 1883910)
Posted 14 Aug 2017 by Profile HAL9000
Post:
Hi everyone,

I have an old netbook that was working successfully on SETI@home WUs for a few weeks before I went on vacation. I left it running while on vacation, but there was a power failure while I was away, and then another power failure after that.

After I got back and turned the machine back on and got Boinc running again, it looks (from boincmgr) to be working. It has been submitting completed WUs most days. But the WUs aren't being counted. There is an increasingly large list of "In Progress" tasks, and the total credit has been stuck at 2977 for quite a while (maybe since before I went on vacation?)

Another strange symptom: the "number of times client has contacted server" number was 0 after restart, and stayed at 0 through a few apparent complete WU uploads.

The computer is Squishy.

I'm just hoping someone has some pointers for figuring out what is going wrong.

If you check out the external stat sites you can kind of get an idea of what your computer was doing.
It has been at 2977 credit for a few days and received 988 credit on August 11th.


Validated tasks are cleaned up after ~24 hours.
29) Message boards : Number crunching : So who is going to be a guinea-pig this time?? (Message 1883648)
Posted 13 Aug 2017 by Profile HAL9000
Post:
Thanks for all the comments, guys. It may be too early to tell, and I know I've heard that X299 has had some teething problems, but does anyone have any thoughts on v1.0 of this new platform? X99 sure could have been rolled out better, and this one looks like a copy of that experience.

No idea.
If you're buying something for work, then go with the established platform. If it's for play, then go for the new platform IMHO. If you really want to go with the new platform, give it at least 6 months for any issues to be discovered and addressed.

+1
Which sort of flows back to the original premise of the OP: who is willing to be a guinea pig for ANY of the new platforms, X299 or X399? Do you like living on the bleeding edge and just diving in, or do you wait and test the waters to make sure it is a deep pool and not in reality a bathtub?

I have already dealt with the teething problems of the new AM4 platform and Ryzen, with the memory and BIOS issues. I think I will sit on the sidelines for a good half year watching TR develop before I consider testing the waters. I don't think the platform maturity will take as long, though, given the lessons learned from Ryzen and AM4.

I was looking to upgrade my main gaming system from an i5-4670 to an i5-8600 or i7-8700 with a 300 series chipset. Since they have already announced that the 10nm process CPUs and 400 series chipset will be released in early 2018, I'll wait to upgrade that system.
I am thinking about putting together an i3-8350K and an R3-1300X so that I can compare them on power, thermals, and clock-for-clock task run times.
Then whichever one "wins" can replace my other i5-4670 as my HTPC.
30) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1883631)
Posted 13 Aug 2017 by Profile HAL9000
Post:
The 1000 total WU limit is in BOINC.
The 100 WUs per CPU/GPU limit is a Seti one.

So the 1000 limit is circumventable with the proper coding knowledge, but the 100 is client side and is in stone?

I can only guess that the "1000 limit" in the client is per work request.
The current client is more than happy to cache several thousand tasks at once.
It is only slightly amusing when checking on a machine to find it has 8,000+ tasks from a newly attached project because you forgot your cache settings were 10+10 days and it only stopped downloading work because the partition where BOINC stores its data is full. -.-

One of the ways to circumvent the task limits only requires a very small change. It was two characters IIRC.
Having the limits in place and working is better for the project. Otherwise we get the random db crashes that sometimes take days to fix.

They did change the hard limit of 100 GPU tasks to 100*n GPUs per vendor, which is nice, but with the increase in CPU power it would be good to see the CPU limits get looked at again.
Currently 100 tasks don't last long even for older CPU hardware.
E5-2670 @ 3.0GHz running 32 at a time ~8hr
i5-4670 @ 3.4GHz running 4 at a time ~20hr
I have a feeling the new i7-8700K with 6c/12t could tear through 100 tasks in ~6-7hrs.
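For anyone checking the math on the E5-2670 figure: 100 tasks ÷ 32 running at once is a bit over 3 "waves", and at roughly 2.5 hours per task that works out to about 8 hours.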
31) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1883393)
Posted 12 Aug 2017 by Profile HAL9000
Post:

Pretty early on things like BOINC and OS versions were discussed to try and pinpoint their issue.
Nothing has really made sense as to why only a few users are having issues.
The only thing I can think of at this point is that there is some weirdness along their route connecting to Berkeley.

You allude to connectivity issues being the problem. But if you don't connect to Berkeley, then you obviously couldn't get the responses we receive that you have reached a limit of tasks in progress or that there are no tasks to send.

Connectivity issues can be a lot more complicated than simply connection vs no connection.
If the data is malformed or truncated you can still get a response, but it likely won't be what is expected.
Not receiving a "Not sending work - last request too recent:" response after performing several updates would indicate something is being lost.
32) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1883388)
Posted 12 Aug 2017 by Profile HAL9000
Post:
I wish I had some advice for everybody here, but alas I do not.
I don't wish to jinx myself, but I have never had any problem getting work except when RTS was empty and nobody was getting much work.

I am using an old tried and true version of Boinc. I can't imagine that the work requests from the old Boinc are different than from a recent version, or that the scheduler would treat them any differently.
I am using XP on 4 rigs, and 7 on my daily driver. No difference there.
I am using the most recent version of the Lunatics installer.

I dunno. Maybe the scheduler just says 'Make way, it's the kittyman calling'...............LOL.

Meow meow meow.

Pretty early on things like BOINC and OS versions were discussed to try and pinpoint their issue.
Nothing has really made sense as to why only a few users are having issues.
The only thing I can think of at this point is that there is some weirdness along their route connecting to Berkeley.
33) Message boards : Number crunching : Thoughts on the new Intel X299 platform? (Message 1883113)
Posted 11 Aug 2017 by Profile HAL9000
Post:
Well, that was what piqued my interest.

I can think of lots of questions:
--can a Phi run an x86 app unmodified?
--can it run a windows x86 app unmodified?
--what is it about the app that makes it run on the Phi, not the standard CPU?
--does intel supply a launcher/dispatcher/wrapper which can initiate any given app on the Phi?

--can BOINC define a Phi coprocessor? (yes, there's a 'roll your own' option)
--can a given BOINC app be defined to run on the r-y-o copro? (yes, app_info should take care of that, subject to the first group of answers)

--will a BOINC server allocate tasks to an anonymous app on a r-y-o copro?
--will the SETI server allocate tasks to an anonymous app on a r-y-o copro?

Any more, before I pony up 8 months of pension?

You may be interested in this review https://www.servethehome.com/supermicro-sys-5038k-i-es1-intel-xeon-phi-x200-developer-workstation-review/

Looking at BOINCstats I did find it lists 445 Intel(R) Xeon Phi(tm) CPU 7210F @ 1.30GHz CPUs. It looks like there are a number of them on Rosetta. So far the only two hosts I have found, 3226111 and 3225268, are no longer active, but they did complete work with their standard app.
34) Message boards : Number crunching : Thoughts on the new Intel X299 platform? (Message 1883054)
Posted 10 Aug 2017 by Profile HAL9000
Post:
I'd idly wandered past the Xeon Phi pages a few days ago, now that they are available to order in the UK in a workstation format.

But I think the 68-core Phi itself is configured as a coprocessor, not directly available to either Windows or BOINC. But we won't know for certain until somebody buys one...

Unless Intel is releasing them with different names in the UK, I think that site might have some mixed-up info.
Currently on Intel's site they list the Phi x100s as add-in cards and the Phi x200s as LGA3647 CPUs.
There is no mention of a 7220A. There is a 7120A add-in card, but it is only 61 cores. The 68-core CPUs are 7250s.
Since they have been released I have been considering something like a SuperMicro setup https://www.supermicro.com/products/system/tower/5038/SYS-5038K-I.cfm with the 7230.

For some reason I was thinking these had 8:1 HT but it looks like it is actually 4:1. Which may still prove interesting. However it's not '$3500 for base components' interesting. At least not to me.
35) Message boards : Number crunching : Thoughts on the new Intel X299 platform? (Message 1882897)
Posted 10 Aug 2017 by Profile HAL9000
Post:
I'm not really sure what the purpose of the i5-7640X & i7-7740X is. They are slightly higher clocked versions of the i5-7600K & i7-7700K for LGA2066.
Probably one of the first reasons to even go with an LGA2066 setup is quad channel memory, which those CPUs don't give you.
Another reason to go with LGA2066 would be for the extra PCIe lanes. The i7-7800X & i7-7820X only give you 28 PCIe lanes.
So it doesn't make sense to me to go with anything less than the i9-7900X for an LGA2066 X299 setup.

If I needed a single socket system with a lot of cores I might consider a Xeon Phi x200 CPU setup. It would be interesting to see how an OS handles a 72c/576t processor.

For now I'm just considering getting an i3-8350K and a Ryzen 3 1300, then putting them side by side with the same clock settings to see how well they crunch, since most crunching doesn't translate well to standard benchmarks.
36) Message boards : Number crunching : Windows 10 - Yea or Nay? (Message 1882696)
Posted 7 Aug 2017 by Profile HAL9000
Post:
The "You're all crazy" sent a perfectly innocent coffee up my nose and elsewhere...!!
Martin, you still haven't learned how to drink coffee off your monitor? :-)

Something along the lines of this I would imagine.
37) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1882523)
Posted 7 Aug 2017 by Profile HAL9000
Post:
I don't know whether it was the flipping applications or a complete restart of BOINC, but I am slowly refilling my gpu cache. What I always wonder is how everyone else but me gets such a huge slug of tasks in one download. I have never received more than 20 tasks at one time on any machine of mine even though I might have zero tasks in my cache. Your 70 tasks in that request TBar just astonishes me.

After an outage I will often have my cache filled in one request. Along the lines of:
01-Aug-2017 20:09:22 [SETI@home] Requesting new tasks for CPU
01-Aug-2017 20:09:24 [SETI@home] Scheduler request completed: got 100 new tasks
38) Message boards : Number crunching : Seti task runtime on Westmare or IvB or Hsw/Bdw (Message 1882501)
Posted 6 Aug 2017 by Profile HAL9000
Post:
Hi, currently I'm using dual X5675 Westmeres. I'm interested in how S@H tasks would benefit from the AVX or AVX2 instruction sets. Right now one of my 3.07GHz threads takes 4h50min to process a workunit. How long would it take on an E5-26.../V2 or E5-26... V3/4, supposing the same GHz (which I know is different)? Is it worth moving to those processors compared to getting a second dual X5675 computer (which is many times cheaper)?

I normally find the AVX app is about 35-40% faster than the SSE3 app on my hosts.
My 16c/32t dual E5-2670 @ 3.0GHz host using the AVX app completes SETI@home tasks in ~2-2.5 hours.
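As a very rough answer to the original question: if that 4h50min figure is from a non-AVX app, a 35-40% speedup would land somewhere around 3 to 3.5 hours at the same clock, though individual task run times vary quite a bit anyway.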
39) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1882498)
Posted 6 Aug 2017 by Profile HAL9000
Post:
I'm down to about 50 gpu tasks when I should be at 300. Keep getting the project has no work available. I have changed my preferences multiple times now. This is on the special app machine so it crunches through them especially fast. Anybody else having issues getting gpu work?

My hosts have full queues and I have a total of 3 "project has no work" messages across them.
I don't seem to be one of the users that has the bad luck of hitting an empty feeder queue often.
40) Message boards : Number crunching : Panic Mode On (107) Server Problems? (Message 1881937)
Posted 3 Aug 2017 by Profile HAL9000
Post:

My router allows me to add DNS entries. Then I have the DHCP server pass out the router address for DNS to my machines. That way I don't have to update host files for all of my machines.

That could be a more convenient way for others as well.

You know, that is a very good suggestion and more elegant than editing Hosts. I think my router can do the same and it is my DNS server for my computers.

The only disadvantage I have found is not being able to enter both addresses for the download servers.

