Posts by HAL9000

1) Message boards : Number crunching : "BOINC portable" for Windows hosts (Message 1875429)
Posted 1 day ago by Profile HAL9000
Post:
You could use --allow_multiple_clients and --gui_rpc_port to start multiple instances of BOINC and handle them all at once.
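A minimal .bat sketch of that idea. The drive letters, data paths, and port numbers here are illustrative assumptions, not tested values:

rem Launch one BOINC client per flash drive, each with its own data
rem directory and its own GUI RPC port, so they can run side by side.
start "" E:\BOINC\boinc.exe --allow_multiple_clients --gui_rpc_port 31417 --dir E:\BOINCdata
start "" F:\BOINC\boinc.exe --allow_multiple_clients --gui_rpc_port 31418 --dir F:\BOINCdata

Each instance can then be managed separately by pointing BOINC Manager at its RPC port.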
2) Message boards : Number crunching : "BOINC portable" for Windows hosts (Message 1875289)
Posted 2 days ago by Profile HAL9000
Post:
Thanks for the detailed instructions.
I did just the same (but used the relative path ..\BOINCdata to reach the data directory).
But I encountered the issues I described in the first post.
I will try to reproduce yours exactly.

And how do you handle the same tasks crunching on the "online" and "offline" hosts? Or is the "online" BOINC setup used only for fetching in this case? (I tried to duplicate a working BOINC from a netbook, so it had some duplicated tasks in its cache after duplication.)

The online and offline hosts use completely different data directories.
The online host would be D:\BOINC with data in D:\BOINCdata.
Then the flash drives for offline hosts would be E:\BOINC with data in E:\BOINCdata, or whatever drive letter the flash drive is mounted at.
So there is no shared or duplicate data.
3) Message boards : Number crunching : Panic Mode On (106) Server Problems? (Message 1875212)
Posted 2 days ago by Profile HAL9000
Post:
DNS issues can sometimes take a while to sort.
A "can't resolve host name" error is not coming from the SETI servers.
Did somebody forget to renew a domain?

It could be that someone in the IT/IS department made changes to the Berkeley DNS config. Perhaps they moved the DNS servers to different IPs or something along those lines.
The weekend would be a more ideal time for the campus to do something like that.
4) Message boards : Number crunching : Panic Mode On (106) Server Problems? (Message 1875083)
Posted 3 days ago by Profile HAL9000
Post:
I am getting 'can't resolve host name'...

Looks like an uptick in errors.
Project details for: SETI@home including all dates
Scheduler Requests: 7325
Scheduler Success: 99 %, Count: 7275
Scheduler Failure: 0 %, Count: 50 (Total)
Scheduler Failure: 0 % of total, Count: 15 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 6 (HTTP service unavailable)
Scheduler Failure: 0 % of total, Count: 0 (HTTP internal server error)
Scheduler Failure: 0 % of total, Count: 29 (Couldn't resolve host name)
Scheduler Failure: 0 % of total, Count: 0 (Failure when receiving data from the peer)
Scheduler Failure: 0 % of total, Count: 0 (Timeout was reached)
Scheduler Timeout: 0 % of failures

Project details for: SETI@home including 25-Jun-2017
Scheduler Requests: 230
Scheduler Success: 90 %, Count: 208
Scheduler Failure: 9 %, Count: 22 (Total)
Scheduler Failure: 0 % of total, Count: 1 (Couldn't connect to server)
Scheduler Failure: 9 % of total, Count: 21 (Couldn't resolve host name)

Project details for: SETI@home including 24-Jun-2017
Scheduler Requests: 281
Scheduler Success: 99 %, Count: 279
Scheduler Failure: 0 %, Count: 2 (Total)
Scheduler Failure: 0 % of total, Count: 1 (Couldn't connect to server)
Scheduler Failure: 0 % of total, Count: 1 (HTTP service unavailable)

Project details for: SETI@home including 23-Jun-2017
Scheduler Requests: 279
Scheduler Success: 100 %, Count: 279

Project details for: SETI@home including 22-Jun-2017
Scheduler Requests: 278
Scheduler Success: 100 %, Count: 278
5) Message boards : Number crunching : "BOINC portable" for Windows hosts (Message 1875035)
Posted 3 days ago by Profile HAL9000
Post:
I don't think I have used the installer to put BOINC on a PC in 8 or 9 years.
If there are no registry entries telling BOINC where to put data, then it will use the directory from which it was launched.

At my previous workplace I had a few machines that were on isolated networks without internet access, so I used sneakernet to get work to them.

1. Connect the flash drive for the offline host.
2. Copy the program files to the flash drive. Something like E:\BOINC\
3. Stop the BOINC client running on the online host.
4. Start the BOINC client from the flash drive on the online host.
5. Set BOINC to suspend processing.
6. Connect to project(s).
7. Update projects until the desired cache level is reached.
8. Stop the BOINC client running on the flash drive.
9. Connect the flash drive to the offline host.
10. Start the BOINC client on the online host.
11. Start the BOINC client from the flash drive on the offline host.
12. Set BOINC to resume processing.
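Steps 2-4 could be scripted along these lines. This is only a sketch; it assumes the online host's copy of BOINC lives in D:\BOINC, the flash drive mounts as E:, and the client was started stand-alone rather than installed as a service:

rem Copy the program files to the flash drive (step 2).
xcopy D:\BOINC E:\BOINC /E /I /Y
rem Stop the client running on the online host (step 3).
taskkill /IM boinc.exe
rem Start the client from the flash drive (step 4).
E:
cd \BOINC
boinc.exe --skip_cpu_benchmarks --detach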

I have not had a problem with BOINC detecting GPUs when started this way. I would use an online host that had the same type of GPU as the offline host, so they would fetch the correct work and I would not have to reschedule tasks. Initially I rescheduled tasks for offline hosts with GPUs, but the introduction of task limits made that less feasible.
This process could also be modified so that you run multiple instances of BOINC on the online host to send/fetch tasks, but I found it easier to just stop the running instance, as some previous versions of BOINC did not handle running multiple instances very well.

Because the path for USB drives sometimes changes, I start BOINC with a .bat file like this, so I don't have to know the drive letter beforehand.
pushd %~dp0
boinc.exe --skip_cpu_benchmarks --detach
start boincmgr.exe /s

You could add --dir %~d0\BOINCData if you wanted BOINC to place data in a directory other than the one it was launched from.
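For example, the .bat above with that flag added would look something like this (assuming a BOINCData directory exists in the root of whatever drive the script is launched from):

pushd %~dp0
rem %~d0 expands to the drive letter this script was launched from.
boinc.exe --skip_cpu_benchmarks --detach --dir %~d0\BOINCData
start boincmgr.exe /s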
6) Message boards : Number crunching : Looking for Best bang for the buck in CPU cooling for my Z-600 (Message 1874889)
Posted 4 days ago by Profile HAL9000
Post:
On my dual E5-2670 I'm using a pair of Noctua NH-U12DXi4 coolers. They seem to be doing a good job of keeping the CPUs between 55-64ºC while running 32 threads at 3GHz.
There is some variance, since I set the thermostat to 80ºF during the day and the system is up in my loft, which is generally a few degrees warmer than the main level.

I am considering a pair of Corsair H110i's for one of my other dual E5-2670 systems, just to see how an LCS setup compares, but that is about $100 more than the air coolers.
7) Message boards : Number crunching : Anything relating to AstroPulse tasks (Message 1874887)
Posted 4 days ago by Profile HAL9000
Post:
dirk posted something about that in the Radeon Software Crimson thread.

In an ideal world, one should be able to run both MB/AP apps as well as multiple work units per GPU without producing invalids. The last time I was able to run multiple WUs per GPU without producing invalids was driver 14.4, but that driver is not as fast as the newer drivers. 14.4 works with my 290x/295x2 but not my 390x/Radeon Pro Duo. I had similar problems on Einstein as well.

So now I'm using a driver from this year, but I only run one WU per GPU and I don't run AP at all. I can only speak for my own cards; cards less expensive than mine (like the 280x) don't have the same problem, and I don't have anything new like an RX 480, so I can't speak to that either.

I've not been having problems with 15.12 on my 390x. Each time I have tried a 16.x+ driver I have had issues, like a BSOD while Windows is sitting at the desktop.
8) Message boards : Number crunching : Welcome to the 18 Year Club! (Message 1874156)
Posted 8 days ago by Profile HAL9000
Post:
It just turned midnight, so it is officially June 20th. I can now join the 18 year club.

Steve

It looks like you nearly made the 19 year club, judging from your original sign-up time.
9) Message boards : Number crunching : Anything relating to AstroPulse tasks (Message 1873733)
Posted 11 days ago by Profile HAL9000
Post:
SETI@home	Requesting new tasks for CPU
SETI@home	Scheduler request completed: got 0 new tasks
SETI@home	No tasks sent
SETI@home	No tasks are available for AstroPulse v7
SETI@home	This computer has reached a limit on tasks in progress


Well it's been a while since I've seen that.
10) Message boards : Number crunching : Getting the most production for the least electricity (Message 1873641)
Posted 11 days ago by Profile HAL9000
Post:
Hal,

Why an i5 instead of an i7?

Tom
As I was speccing the system for dedicated SETI@home use, it mostly came down to cost and efficiency.
I would rather put the $110 difference in cost between the CPUs towards a 3rd GPU for a dedicated system.


Hal,
That leads to the question of does the proposed motherboard have "space" for a third GPU or do we get into "risers" and/or taking it out of its case?

This is fascinating thank you for the conversation.

Tom

The MB I considered for this configuration does have 3 PCIe x16 slots. Two of them are only x4 electrical, but that shouldn't be much of an issue.
Here are links to the parts I had in mind. MB, CPU, Mem, PSU, GPUs
I would probably use an M.2 SSD I already have, but if I needed to order one: SSD
A case is always optional, but I did recently order a Thermaltake V51... I'm not sure why I did... maybe I'll use it for my next gaming PC build?
11) Message boards : Number crunching : Building a 32-Thread Xeon Monster PC for Less Than the Price of a Flagship Core i7 (Message 1873587)
Posted 11 days ago by Profile HAL9000
Post:
Are there any other lower cost alternative cpu's that produce similar results?
----------
There was a flood of E5-2670's onto the second-hand market as many data centers upgraded to the latest versions, so supply and demand are really the factors for pricing.
Once the E5-2600 v2 CPUs drop to a reasonable amount I'll pick up some to upgrade my current boards with v1 CPUs, but I don't expect that to happen soon.


I just saw some E5-2670's for $46 and some E5-2600's for $130-$150. I could not tell at the level I was looking if they were V2 or not.

Hmmmm.....

Tom

For E5-2670's, around $50 is good. At the lowest point you could find pairs of them for $80, or ~$45 for a single, but today you mostly find single CPUs for ~$100.
The issue I originally had was finding a dual LGA2011 socket MB for under $200. Sometimes I would find one, but then it would turn out to be a dual LGA2011-3 MB, which is the socket for v3 and v4 Xeons.

The newer CPUs should be clearly labeled as v2, given that it is part of the product name.
Often you find the L versions of CPUs much cheaper and not listed correctly. You can check the advertised clock speed against Intel's spec list: E5-2650L v2 (1.7GHz, 10c/20t) vs. E5-2650 v2 (2.6GHz, 8c/16t).
To replace my E5-2670's I'm looking for either E5-2650 v2 (2.6GHz, 8c/16t) or E5-2670 v2 (2.5GHz, 10c/20t) CPUs.
12) Message boards : Number crunching : Getting the most production for the least electricity (Message 1873582)
Posted 11 days ago by Profile HAL9000
Post:
If I wanted to build a system dedicated to running SETI@home today. I would go for:
i5-7400 CPU
H270 motherboard
two GTX 1060 3GB's
650w 80Plus Platinum PSU


Hal,

Why an i5 instead of an i7?

Tom

As I was speccing the system for dedicated SETI@home use, it mostly came down to cost and efficiency.
I would rather put the $110 difference in cost between the CPUs towards a 3rd GPU for a dedicated system.
13) Message boards : Number crunching : Getting the most production for the least electricity (Message 1873580)
Posted 11 days ago by Profile HAL9000
Post:
i5-7400 CPU
H270 motherboard
two GTX 1060 3GB's
650w 80Plus Platinum PSU

Even with Petrie's special, the total draw of that system would be around 350W (65+120+120 + the rest of the system) (SSD for boot/storage & 16GB RAM).
For that system I'd go with a 500W (brand-name) PSU. 45%-55% load is roughly the sweet spot for maximum efficiency in a switch-mode PSU (that band is wider the higher the rating of the PSU: Bronze, Gold, Platinum, etc.). The lower the load, the less efficient; and the closer to its maximum rating, the more likely problems are, so 65-75% of its maximum rating is a good compromise.

There are a few reasons I decided on a 650W PSU.
1) It gives headroom to add a 3rd GPU. I estimated total system load at ~230W for 2 GPUs and ~320W for 3 GPUs.
2) Efficiency actually starts to fall off after 50% load. At least, all of the data in the 80 PLUS Verification and Testing Reports indicates that it does.
Here is part of the report for the PSU I have in mind: http://i.imgur.com/V7VDR1J.png
3) The PSU series that I currently buy starts at 650W and goes up.
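As a rough sanity check on point 1, the estimated loads against a 650W unit can be worked out in a .bat (the 230W/320W figures are my estimates above; cmd does integer arithmetic only):

rem Load as a percentage of the 650W rating.
set /a PCT2GPU=230*100/650
set /a PCT3GPU=320*100/650
echo 2 GPUs: ~230W is %PCT2GPU%%% of 650W
echo 3 GPUs: ~320W is %PCT3GPU%%% of 650W

That puts the 3-GPU case at roughly 49% load, right around the 45-55% sweet spot quoted above.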
14) Message boards : Number crunching : Building a 32-Thread Xeon Monster PC for Less Than the Price of a Flagship Core i7 (Message 1873467)
Posted 12 days ago by Profile HAL9000
Post:
The prices for dual E5-2670's are going up. Months ago the CPUs were going for ~$50; today they are around $100.
In October I bought a bundle with an Intel S2600CP2J MB, 2 E5-2670's, & 128GB PC3-12800R for $470. Then a 2nd one a few weeks later.
Now that vendor has the same config for $626.

I did consider an LCS setup, but air cooling with a pair of Noctua NH-U12DXi4's is working fine for me.
http://i.imgur.com/ZMqgcrB.jpg


Are there any other lower cost alternative cpu's that produce similar results?

Thank you for helping me return this thread to the topic in the title!

Tom

There was a flood of E5-2670's onto the second-hand market as many data centers upgraded to the latest versions, so supply and demand are really the factors for pricing.
Once the E5-2600 v2 CPUs drop to a reasonable amount I'll pick up some to upgrade my current boards with v1 CPUs, but I don't expect that to happen soon.
15) Message boards : Number crunching : I have a new system, expected runtimes? (Message 1873464)
Posted 12 days ago by Profile HAL9000
Post:
For those that are interested, Tom's Hardware posted an article looking at the new Mesh architecture Intel are using for their high core count/multi socket CPUs to replace their long standing Ring Bus architecture.
... on the Broadwell LCC (Low Core Count) die... for instance, moving data from one core to its closest neighbor requires one cycle. Moving data to more distant cores requires more cycles, thus increasing the latency associated with data transit. It can take up to 12 cycles to reach the most distant core...
The larger HCC (High Core Count) die exposes one of the problems with this approach. To increase the cores and cache, the HCC die employs dual ring buses. Communication between the two rings has to flow through a buffered switch (seen between the two rings at the top and bottom). Traversing the switch imposes a five-cycle penalty, and that is before the data has to continue through more hops to its destination.

So now I can see why, even though Seti work itself doesn't benefit from huge amounts of memory bandwidth, Kiska's runtimes were so high with the original setup of both DIMMs on the one CPU socket.
Intel mesh architecture.

Not populating all of the memory channels for a CPU reminds me of the saying "You can't put 10lbs of 'stuff' into a 5lb box"
16) Message boards : Number crunching : Getting the most production for the least electricity (Message 1873462)
Posted 12 days ago by Profile HAL9000
Post:
For GPUs you will find this nice GPU Chart

For CPUs, basically pay attention to the TDP for their heating effect.


Thank you for pointing me to that spiffy chart.

Now I wonder if we can come up with an approximation of the same information for cpu's....

Tom

Newer-generation CPUs are normally more efficient than the previous generation.
When looking to upgrade, if you want to be "more efficient", there are a few avenues to help guide your decision:
A) Same power usage and greater output.
B) Lower power usage and the same output.
C) Somewhere between A & B
D) Lower power usage and greater output.
E) Lower power usage and less output.
There isn't normally much gain from one generation to the next, but skipping several generations can be rather noticeable.
A current gen i5, or probably even an i3, would run circles around a Xeon X5680.

For me, what I need the machine to do is what I solve first. Then I try to find the most efficient options.
For example, one of my machines is a low-powered Celeron J1900, which pulls ~23W at full SETI@home load. For fun I popped in a GTX 750 Ti to see if it had enough oomph to feed the GPU as well, so now the system power usage is up to ~60W. The system's purpose is file storage, running scripts for reports about my network, and some other tasks. I plan to use it for some home automation stuff once I get around to it as well.

If I wanted to build a system dedicated to running SETI@home today. I would go for:
i5-7400 CPU
H270 motherboard
two GTX 1060 3GB's
650w 80Plus Platinum PSU
17) Message boards : Number crunching : Building a 32-Thread Xeon Monster PC for Less Than the Price of a Flagship Core i7 (Message 1873249)
Posted 13 days ago by Profile HAL9000
Post:
As far as I know, it's 100 for each CPU (regardless as to how many cores it has) and 100 for each GPU in the computer. I know it wasn't supposed to be that way, it was supposed to be even more restrictive but regardless, that is what we have today.

The app_config and cc_config don't have anything to do with that, it's something from the server end

It is a limit of 100 CPU tasks in total per host. BOINC doesn't know how many physical CPUs are in a system; it only knows the total processor count. :/
It would be great to have a cache of more than 100 tasks on my 16c/32t system, since that is only about 6-7.5 hours of work.
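The 6-7.5 hour figure follows from running 32 tasks at once. Assuming roughly 2-2.4 hours per CPU task (an assumed runtime range, chosen only to be consistent with the cache duration quoted), the arithmetic in a .bat would be:

rem Cache duration = tasks * minutes per task / concurrent threads.
set /a LOWMIN=100*120/32
set /a HIGHMIN=100*144/32
echo A 100 task cache lasts %LOWMIN% to %HIGHMIN% minutes (about 6.25 to 7.5 hours)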



The prices for dual E5-2670's are going up. Months ago the CPUs were going for ~$50; today they are around $100.
In October I bought a bundle with an Intel S2600CP2J MB, 2 E5-2670's, & 128GB PC3-12800R for $470. Then a 2nd one a few weeks later.
Now that vendor has the same config for $626.

I did consider an LCS setup, but air cooling with a pair of Noctua NH-U12DXi4's is working fine for me.
http://i.imgur.com/ZMqgcrB.jpg
18) Message boards : Number crunching : Optimised application installation thread 2. (Message 1872987)
Posted 14 days ago by Profile HAL9000
Post:
OK. So I am currently processing 4 x WUs and 1 x graphics... a total of 5 WUs being processed. Each WU is around 3 hours long.
While I had Lunatics installed I had one WU take almost 12 hours to process. Is this in fact what the optimized app does...?

I believe Wiggo mentioned this previously. The estimated completion times in BOINC when you change apps will be inaccurate until the server gets an accurate estimate.

This requires 11 good tasks to be completed. You can check the status of this from the Application details on your host.
So far your i3 has 3 results that count toward this. See Number of tasks completed.
SETI@home v8 (anonymous platform, CPU)
Number of tasks completed	3
Max tasks per day		36
Number of tasks today		0
Consecutive valid tasks		3
Average processing rate		17.65 GFLOPS
Average turnaround time		0.37 days

Sometimes you may see the value for Consecutive valid tasks higher than Number of tasks completed, because you returned a good result but it was perhaps a VLAR or VHAR, which falls outside the bounds to be included in Number of tasks completed.

The optimized apps work just like the other apps, but have code optimizations which can allow them to complete the same work faster.
19) Message boards : Number crunching : RX 480 OpenCL Question (Message 1872983)
Posted 14 days ago by Profile HAL9000
Post:
It's been a while, but I have been playing with the command-line parameters and watching their effects. I finally figured out why BOINC went from reporting 8192MB of memory on the card to 7536MB back on April 19th. It was due to a BIOS setting involving the iGPU on the motherboard. Even though I have the iGPU disabled, you still have to set where shared memory is placed; the choices are above 4GB or below 4GB. Set to above 4GB, BOINC and OpenCL report 7536MB of memory on the card. Set to below 4GB, BOINC and OpenCL report 8192MB of memory on the card.

Changing the iGPU memory settings sounded weird to me, but then I realized you have 8GB of system memory, so it made sense.
There is something about video memory being mapped into system memory... or something along those lines.
20) Message boards : Number crunching : I have a new system, expected runtimes? (Message 1872982)
Posted 14 days ago by Profile HAL9000
Post:
Update: with 2 channels occupied, preliminary results are around 9k seconds per task with 26 tasks running at once.

That is looking much better. That is around the upper limit for tasks on mine, running 32 tasks at once with 8 DIMMs (4 per CPU) in quad-channel mode.


©2017 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.