Posts by HAL9000


1) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1813479)
Posted 1 day ago by Profile HAL9000
Some links on the WU limit:


I was thinking of posts a bit older.
1185411
1197674
1229214

The posts I can find where the staff told us the values of the task limits are from 2010. Other posts are just notes like "task limits were raised".
2) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1813062)
Posted 3 days ago by Profile HAL9000
The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient.

I don't recall ever having per processor or per core WU limits before.
I do remember them making the GPU limit per GPU instead of for all GPUs.

I believe it was around the end of 2011 or the start of 2012. I seem to recall that when they first set the task limits they had accidentally set 50 total per host, then changed it to per processor for CPU. After some time the limits were removed, the db went splat again, & then the limits were implemented again.
3) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1812863)
Posted 4 days ago by Profile HAL9000
Most of my machines are i7's so they get 100 / 8 threads = 12.5 WU per thread.

If we expand on that by saying we get 12.5 x the number of threads, then it could work for the smaller (single thread) machines as well as the larger (56 thread) machines. I would suggest we make it something like 15 per thread. It's simple and achievable with the current infrastructure.

A more long term approach might be to increase that number based upon the average turnaround if the host is considered reliable. It could also be applied the other way to reduce the number if the host is unreliable.

The last time per processor CPU limits were used the value was 50. So perhaps half of that, at 25 per processor, would be sufficient.
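
Purely as an illustration (nothing the scheduler actually implements; the 15-per-thread base and the scaling factors are just the example numbers from above), a per-thread limit with a turnaround-based adjustment might look something like this Python sketch:

def cpu_task_limit(threads, avg_turnaround_days=None, deadline_days=7.0, base_per_thread=15):
    # Hypothetical per-thread CPU task limit with a turnaround-based adjustment.
    limit = base_per_thread * threads
    if avg_turnaround_days is not None:
        if avg_turnaround_days < 0.25 * deadline_days:
            limit = int(limit * 1.5)   # fast, reliable host: raise the cap
        elif avg_turnaround_days > 0.75 * deadline_days:
            limit = int(limit * 0.5)   # slow or unreliable host: lower the cap
    return limit

print(cpu_task_limit(8))                             # 8-thread i7: 120
print(cpu_task_limit(56, avg_turnaround_days=1.0))   # fast 56-thread host: 1260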
4) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1812650)
Posted 5 days ago by Profile HAL9000
A few options come to mind.

1.
The CPU limit could be applied based on the number of processors and a determined value. Something along the lines of (host # of processors / 4) * task limit.

2.
Perhaps modify the JobLimits options to allow limits for specified host processor-count ranges.

Something along the lines of:
<project>
    <cpu_limit>       if set, limit is applied to all hosts unless another value applies to the host
        <jobs>N</jobs>
    <cpu_limit_16>    if set, limit is applied to hosts with 16+ processors
        <jobs>N</jobs>
    <cpu_limit_32>    if set, limit is applied to hosts with 32+ processors
        <jobs>N</jobs>
    <cpu_limit_64>    if set, limit is applied to hosts with 64+ processors
        <jobs>N</jobs>
    <cpu_limit_128>   if set, limit is applied to hosts with 128+ processors
        <jobs>N</jobs>
</project>


3.
A graduated max CPU tasks in progress could be derived using the "Number of tasks today" value from the CPU apps. It might be necessary to take the Number of tasks today and create an average daily number of tasks to use. Then, using a value set by the project, a dynamic limit could be applied based on how productive the machine is rather than on the indicated number of processors. (A rough sketch of options 1 & 3 follows below.)
I think this might be the most complicated method & with each app version change the average would be reset.
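
Just to make options 1 & 3 concrete (none of this exists in the scheduler; the function names, multiplier, and floor are made up for the example):

def limit_option_1(n_processors, task_limit=100):
    # Option 1: scale the project task limit by (processors / 4).
    return int((n_processors / 4) * task_limit)

def limit_option_3(daily_task_counts, multiplier=2.0, floor=100):
    # Option 3: dynamic limit from the host's average daily task count.
    if not daily_task_counts:
        return floor
    avg_per_day = sum(daily_task_counts) / len(daily_task_counts)
    return max(floor, int(avg_per_day * multiplier))

print(limit_option_1(8))                 # 200 for an 8-processor host
print(limit_option_3([180, 210, 195]))   # 390 for a host averaging 195 tasks/day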
5) Message boards : Number crunching : Thought(s) on changing S@h limit of: 100tasks/mobo ...to a: ##/CPUcore (Message 1812197)
Posted 6 days ago by Profile HAL9000
I don't know the method that is currently being used to detect the # of processors, but I'm aware that there are cpuid functions that provide core/thread counts. They may also provide the socket count for the system, or could be somewhat wonky, providing two lists of cores 0,1,2,3 for a dual 4-core board.

If the CPU socket count can be determined, then having a limit of tasks per CPU socket would be helpful for multi CPU systems.
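
For what it's worth, on Linux the socket count can be read without cpuid tricks. This is only a quick Linux-only sketch based on /proc/cpuinfo, not how BOINC actually does its detection:

import os

def count_sockets_linux():
    # Count distinct "physical id" entries in /proc/cpuinfo (Linux only).
    physical_ids = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("physical id"):
                physical_ids.add(line.split(":")[1].strip())
    return len(physical_ids) or 1

print(os.cpu_count(), "logical processors across", count_sockets_linux(), "socket(s)")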
6) Message boards : Number crunching : Windows 10 - Yea or Nay? (Message 1810603)
Posted 10 days ago by Profile HAL9000
My computers just went into no more updates mode.

I installed Sophos to handle my AV and malware needs.

Most of my machines never do updates.
http://i.imgur.com/TH5OjMS.png
But I did have to install an update on my HTPC once to get something to work.
http://i.imgur.com/WJ9Ggf8.png
7) Message boards : Number crunching : Windows 10 - Yea or Nay? (Message 1809912)
Posted 13 days ago by Profile HAL9000
http://i715.photobucket.com/albums/ww153/Jimbocous/the-true-origin-of-the-tin-foil-hat.jpg Here we go again ...

I don't know why anyone would wear a foil hat like that to try to block signals in/out of their head. Unless it is properly grounded it would only serve as an antenna.

With the free upgrade offer period over I've been considering setting up another Windows 7 VM to see what updates they push out to it. Last time I did that I came back to the VM running Windows 10 with no human intervention.
8) Message boards : Number crunching : 20-core/40-thread Xeon E5-2630 v4 duo (Message 1809080)
Posted 16 days ago by Profile HAL9000
I'm pretty sure there are ways to split PCIe slots into multiple x4s and so forth. Theoretically you should be able to turn one x16 into 16 x1's.

I don't see how.

Each slot has its own device number. The PCIe*1 section includes power, data, & signalling connections. The other sections just include data & signalling to indicate they are being used.
The slot, and the software drivers, are designed for a single device- not for running multiple different devices in the one slot.
That's why there are multiple slots.

Some slots share PCIe lanes, which is why a PCIe*16 slot can become a PCIe*8 slot when another slot on the motherboard is used.
One device per slot; the PCIe lanes can be shared between slots, but they are only ever used by one device at a time.

I believe the configuration of the PCIe slots on a MB may be limited by Intel or AMD. On Intel's site they often list the PCIe configuration options under the CPU specs.
Supported Processor PCI Express Port Configurations: 1x16, 2x8, 1x8 and 2x4

PCIe switches or splitters can be used to allow more PCIe devices than originally intended. The easiest option is to use a PCIe expansion chassis.
9) Message boards : Number crunching : 20-core/40-thread Xeon E5-2630 v4 duo (Message 1808581)
Posted 19 days ago by Profile HAL9000
Given:

... a simple rig with twin 1070's should be more productive.

also

And when you consider how well Al's system performs, this would be a very capable cruncher.

and, although this remains unanswered

I'd hazard a guess that a pair of 1070's would be a bit better throughput.

For guppies or for traditional work?

My interest is long-term (ie. Efficiency).  For example, referencing Shaggie's graphs (most recent one here), we can see that when it comes to production (Avg Credit/Hour), the 1070 spanks the 750Ti.  However, when it comes to efficiency (Avg Credit/Watt-Hour), the 750 Ti more than holds its own, which means more bang for your buck (efficiency-wise).

Having said that, if we were to compare a rig only crunching on two 1070s (with 1, 2 or 3 WUs concurrently), with the topic rig of this thread crunching on all threads concurrently, which one would wear the efficiency crown?

Since Credit is variable, I find comparing watt-hours per task to be much more helpful in finding how efficient a device is for crunching. With the Wh/task info, charts could be made ranking the devices by efficiency, and a column for run time would be helpful to guesstimate credit.
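
The Wh/task figure is just average power draw times run time; a trivial sketch (the 45 W / 20 minute numbers are made-up examples, not measurements):

def wh_per_task(avg_watts, run_time_seconds):
    # Watt-hours per task: average power draw (W) x run time (h).
    return avg_watts * run_time_seconds / 3600.0

# Made-up example: a 45 W card finishing a task in 20 minutes -> 15.0 Wh/task
print(round(wh_per_task(45, 20 * 60), 1))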
10) Message boards : Number crunching : Connection/computing scheduler...anyone ? (Message 1806843)
Posted 27 days ago by Profile HAL9000
I tried manually setting multiple network time windows in the global_prefs_override.xml, but as I suspected BOINC only read the last entry. So it looks like BOINC itself is not designed for what you want to accomplish.

However as Shaggie suggested you can use the OS to schedule events. Then you can use boinccmd to give BOINC commands with a script along the lines of:
boinccmd --set_network_mode auto
timeout 30
boinccmd --project http://setiathome.berkeley.edu/ update
timeout 120
boinccmd --set_network_mode never

Note: You may need to adjust the timeout values depending on your system(s) and internet connection.


Thank you Shaggie & Hal, so I guess the answer to my question about the single BOINC instance is implied in your answer: scheduling your script (embedded in a .cmd file) 5 times per day and adjusting the second timeout in particular to upload the 100/120 completed WUs should do the trick, right?
One more question: is this command line parameter overriding any GUI settings, or should I set the network activity to "based on preferences"?

Thank you..!


Yes. For your Windows hosts, setting multiple triggers for the specific times you wish to connect is likely the route you will want to take. You could even factor in the scheduled weekly server maintenance if you wanted to do so. I would guess that OS X on your Mac host has something similar, but I don't have an OS X host to check at this time. The scripting would be similar, but I don't know what the OS X command for a timeout is without looking it up.
The first delay is to give BOINC a moment to switch modes. On a slower machine this can take a few seconds. A faster machine might be ready in ~5 seconds.
The second delay is how long to wait before telling BOINC to stop network activity again. A value of 600 to 900 sec, 10 to 15 min, might be more suitable. It just depends on your internet connection to the servers.

Boinccmd is simply another way to control the BOINC client. Changing your network connection to never will leave it set to never, just as if you used the GUI to do so. For a full description of the command line tool you might want to check out http://boinc.berkeley.edu/wiki/Boinccmd_tool
A quick description of the options for the network, GPU, & CPU run commands:
always = always
never = never/suspend
auto = based on preferences
11) Message boards : Number crunching : Is the S@H server biased at sending more guppis to NV GPUs? (Message 1806828)
Posted 27 days ago by Profile HAL9000
When Guppis were introduced on NV GPUs earlier this year, the plan might have been to limit the ratio of Guppi to nonVLAR as compared to what was distributed for CPUs.

AFAIK the only plan was to split work, the ratio between Guppies & Arecibo being dependent on how much of each is available.

I believe Eric had stated that at some point 90% of the work may be coming from GBT. Arecibo hasn't been very active recently; it might be having budget/closure issues again.
12) Message boards : Number crunching : Connection/computing scheduler...anyone ? (Message 1806827)
Posted 27 days ago by Profile HAL9000
I tried manually setting multiple network time windows in the global_prefs_override.xml, but as I suspected BOINC only read the last entry. So it looks like BOINC itself is not designed for what you want to accomplish.

However as Shaggie suggested you can use the OS to schedule events. Then you can use boinccmd to give BOINC commands with a script along the lines of:
boinccmd --set_network_mode auto
timeout 30
boinccmd --project http://setiathome.berkeley.edu/ update
timeout 120
boinccmd --set_network_mode never

Note: You may need to adjust the timeout values depending on your system(s) and internet connection.
13) Message boards : Number crunching : What are acceptable acronymes/terminology for the S@h main forum? (Message 1805761)
Posted 30 Jul 2016 by Profile HAL9000
VHARs >1.0 (aka "Shorties")
Mid-range (0.12 - 0.99) (aka "MARs"?)
VLARs <0.12 (aka "OMG Why are these SO SLOW!")

@HAL9000
    Would it be safe to say that these #s supersede those given in your Message 1773487 from 4 months ago?  To maintain continuity, VMARs could be referenced.


@Stubbles69

    Here's a description of the AR categories.


For my newer post in this thread I was working from memory instead of using my reference sheet. My main goal was showing that the acronyms refer to specific numbers. My older post has a more accurate value for VHARs, as did the post Richard made in this thread with the specific AR value that had been found previously.
Looking at the original graph, I want to run something similar with the current apps/data & see how they compare to the previous findings. Currently I don't have the time to do that.

Does VMAR stand for Very Mid Angle Range? I would think a very mid AR would be a normal AR.
We really don't need to define everything with a name. I had found it amusing since I noticed that an acronym for Mid Angle Range could spell Mars when plural.
14) Message boards : Number crunching : BOINC client isn't downloading new S@H workunits on S6 Android (Message 1805456)
Posted 28 Jul 2016 by Profile HAL9000
I am having this same issue, and the old version will not install on my phone.

Is there another thread to follow, which has updates on progress of a new version?

Possibly on the BOINC Message boards.
15) Message boards : Number crunching : What are acceptable acronymes/terminology for the S@h main forum? (Message 1805455)
Posted 28 Jul 2016 by Profile HAL9000
I'm not sure if the VHAR & VLAR terms came from the staff or if they were terms we started using first to refer to the angle range of specific tasks. Either way the terms refer to the angle range of the work we are doing. The project currently defines VLAR (Very Low Angle Range) tasks as tasks with an angle range of <0.12 (it was once <0.013) & adds .vlar to the workunit/task names.
VHAR (Very High Angle Range) tasks, aka "shorties", may not have a strict definition by the project, but an angle range >1.0 is often referred to as a VHAR.
I believe one of the Lunatics generated a time/AR chart several years ago, which showed a pretty clear change in runtime based on the AR.
It looked more or less like this:
\ --- \
With time going up and AR going from low to high.
Everything between a VLAR & a VHAR is often referred to as mid-range. I think we should be calling them MAR myself. Then the plural form would be MARs. However, saying "you have a bunch of MARs tasks" might be confusing to some, as they might think we are looking for radio signals from the planet Mars instead of from distant stars.

I believe the AR that the classic project used, when they were paying for the telescope time, was ~0.42. So 0.40-0.44 AR tasks I normally refer to as "normal AR", which are ideally what you want to use for baseline comparisons, but with the GBT data they will likely be rare.

VHARs >1.0 (aka "Shorties")
Mid-range (0.12 - 0.99) (aka "MARs"?)
VLARs <0.12 (aka "OMG Why are these SO SLOW!")
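
Just to make those thresholds concrete, a classifier along these lines (the function name is mine; the cut-offs are the ones listed above):

def classify_ar(angle_range):
    # Bucket a task by angle range using the cut-offs listed above.
    if angle_range < 0.12:
        return "VLAR"
    if angle_range > 1.0:
        return "VHAR (shorty)"
    return "mid-range (MAR)"

for ar in (0.008, 0.42, 2.5):
    print(ar, "->", classify_ar(ar))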
16) Message boards : Number crunching : What is the % of S@H top computers with Lunatics apps/.exe installed? (Message 1803737)
Posted 20 Jul 2016 by Profile HAL9000
Wow! Thanks Shaggie
Top N   Optimized   Stock
10      9           1
100     66          34
1000    327         673
10000   1284        8716
So: ~87% (7/8) of the "Top 10,000 hosts by RAC" do nothing to optimize!!!
...and also about two thirds (2/3) in the top 1000 PCs. {I'm in shock}

No wonder it took me almost no effort (and only ~$900 CAD) to rise to the 1% club so quickly (even with RAC being as slow as it is to rise).
And I thought it was because I had bought the best gear for the money! ;-}

Hmmm...so... considering that optimizers usually have more than 1 power rig, it could very well be that over 90% (maybe even 95%) of the "Top 10,000 participants" only run stock.

Thanks again Shaggie for crunching those stats!!!


At my peak I was about #30 by RAC with my ~35 hosts running. I think only 4 or 5 were in the top 10,000 list.
With our 138,101 active users (boincstats isn't showing hosts right now), I think we have something like 300,000 active hosts. So saying 0.5% of active hosts have actively installed the Lunatics apps doesn't seem too far-fetched to me, considering <1% of users ever visit the fora & would not otherwise have any knowledge that the other apps exist.
17) Message boards : Number crunching : SETI@home on older computers (Message 1803727)
Posted 20 Jul 2016 by Profile HAL9000

Because you are using Windows' remote desktop connection which replaces the GPU driver with a basic one that doesn't support GPU computing.


OK, that could be an explanation.
But why doesn't the GPU work immediately after the restart? (There is an auto-login.)
There is no remote connection yet.



You have to use something else for remote management.


:-(
I need the Windows remote desktop for other reasons.


Chrome Remote Desktop works fine also.


Get lost with that junk. ;-)



BTW: I wouldn't use the GPU, just the 4 cores. Using all cores plus the GPU will produce a lot of heat (possibly throttling the CPU) and you may even end up with less work done because the GPU will use some CPU also. Just my 2 cents.


OK, that's a good point.
At the moment my J1900 is running in turbo mode.

My J1900 system has never had a problem running 4 CPU + iGPU tasks while staying in Boost at 2.41GHz. It is an ASRock Q1900-ITX, which has a fairly large heatsink.
When using an iGPU there can sometimes be problems when Intel updates the driver, causing SETI@home apps to no longer work or to generate garbage results.

Current temps with 4CPU + iGPU at 100% load.
Room 25ºC
CPU 33ºC
iGPU 39ºC
CPU cores 41-42ºC.
18) Message boards : Number crunching : Building a 32 thread xeon system doesn't need to cost a lot (Message 1802708)
Posted 15 Jul 2016 by Profile HAL9000
Regarding the E5450 HT, I didn't know that because that would have been a huge miss for me when I was evaluating which procs to put into those boards, so I took a quick look at the Intel site again, and according to it:

Advanced Technologies
Intel® Turbo Boost Technology ‡                 No
Intel® Hyper-Threading Technology ‡             No
Intel® Virtualization Technology (VT-x) ‡       Yes
Intel® VT-x with Extended Page Tables (EPT) ‡   No
Intel® 64 ‡                                     Yes
Idle States                                     Yes
Enhanced Intel SpeedStep® Technology            Yes
Intel® Demand Based Switching                   Yes
Thermal Monitoring Technologies                 Yes


looks like it doesn't support it either? Or am I not looking at the right thing?

CPUs using the Intel Core microarchitecture do not support HT.

There are small LGA 771 to 775 adapters that allow using the less expensive Xeon CPUs in 775 boards, which is great if you happen to have a load of old 775 boards that support Xeon CPUs. More recently it has been easier to find 771 boards than compatible 775 boards.
19) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1801072)
Posted 6 Jul 2016 by Profile HAL9000
I scanned the host export and found about a dozen hosts with Ellesmere cards (the RX 480 is the only released Ellesmere card I think). I can't tell if they're doing unusual things like running multiple tasks but the numbers aren't that far apart so I doubt it.

Host Id    Credits/Hr
7492259    420.6506388
7431180    647.7853343
8034949    367.8899388
8037810    498.1638136
Average    483.6224314


I also scanned for Fiji parts; there's quite a few in the db and I can't tell what version they are (Nano, Fury, Fury X, etc)

Host Id    Credits/Hr
8001648    341.4652324
8001994    557.7983477
8003231    661.4473237
8003833    413.5674201
8013353    551.747657
8014347    385.3597423
8029489    336.9615636
Average    464.0496124


Note that RueiKe's R9 Nano's do a lot better than these do on average -- my guess is his water-cooling setup helps a lot because the Nanos are reputed to throttle when they get hot as I'm sure they do when left crunching for hours.


Personally I've been disappointed in the performance of the Fury cards compared to my R9 390X. With less than 70% of the shaders of a Fury Nano or Fury X, it still manages to churn through MB tasks in ~6 minutes. With the fans set to auto it does run up to 68ºC, but they are still silent at ~40%. The only config settings I use are -hp & -cpu_lock, & I think -cpu_lock might be deprecated in the current app version, so it might not be doing anything.
20) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1800809)
Posted 4 Jul 2016 by Profile HAL9000
My single EVGA GTX 750 Ti FTW had a RAC in the 9-10k range with two tasks running at once. Power consumption was in the 40-45W range.
At the moment I'm planning on sticking it in my Celeron J1900 system to see how well that CPU can run the GPU.

