Posts by HAL9000


21) Message boards : Number crunching : GPU FLOPS: Theory vs Reality (Message 1800351)
Posted 3 Jul 2016 by Profile HAL9000
I'm trying to collect data to make the best computation/power-usage choices possible for upgrading my modest farm. I was hoping to get some help to fill in the blanks.

Here's my observed / theoretical performance for my cards on SETI@home tasks:

  • 980 TI ~1000GF / 5632GF (18%)
  • 780 ~650GF / 3977GF (16%)
  • 960 ~385GF / 2308GF (17%)


The theoretical FLOPS is from the Wikipedia entries for the GeForce 700 and 900 series parts and I compared it to the observed FLOPS in a bunch of my completed work-units.

I trawled through recent stats submitted by other people and found one for a GeForce 1080 that suggests the ratio is much higher for those parts: ~2400GF / 8873GF (27%). Could it really be that a 1080 can crunch more than 2x the tasks as a 980Ti? This seems unlikely to me.

If you have a single GeForce 1080 crunching SETI tasks and have more data-points to share I'd really appreciate getting more numbers.

I was also quite excited by the news of AMD's RX 480 because it's a relatively low-power part and priced at a point that makes fitting 2 or 3 of them in a PC cheaper than a single high-end part of lower theoretical performance.

There's just one problem: the theoretical FLOPS on Wikipedia are evidently calculated differently for AMD parts than NVidia parts.

So again, if you have a single RX 480 crunching SETI tasks I'd love to see your work-unit numbers.


I'm not sure using Device peak FLOPS in a task result is the best way to determine the application efficiency. Taking the value displayed in Flopcounter: and dividing it by the number of seconds the task took might be a more accurate value to compare to the manufacturer's max theoretical FLOPS. If you are running multiple tasks per GPU you would also want to correct for that.

I don't see anything in the Radeon 400 series wiki page that points out they will be using a different way to calculate Single Precision FLOPs. For many years (Shaders*2)*clock has been used for Nvidia & Radeon GPUs.
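As a rough sketch of the comparison described above (the numbers are illustrative figures from the post, and the function names are my own, not from any real tool):

```python
# Sketch: estimate GPU crunching efficiency for SETI@home tasks.
# Illustrative numbers only, taken from the figures quoted above.

def theoretical_gflops(shaders, clock_ghz):
    """Single-precision peak via the usual (Shaders*2)*clock formula."""
    return shaders * 2 * clock_ghz

def observed_gflops(flopcounter_total, runtime_seconds, tasks_per_gpu=1):
    """Flopcounter value divided by wall time, corrected for
    multiple tasks running on the same GPU."""
    return flopcounter_total / runtime_seconds / 1e9 * tasks_per_gpu

# GTX 980 Ti: 2816 shaders at 1.0 GHz -> 5632 GFLOPS theoretical
peak = theoretical_gflops(2816, 1.0)
# e.g. a task reporting 2.0e15 floating point ops over 2000 s
obs = observed_gflops(2.0e15, 2000)
print(f"peak {peak:.0f} GF, observed {obs:.0f} GF, "
      f"efficiency {100 * obs / peak:.0f}%")
```

With those inputs the efficiency comes out around the ~18% observed for the 980 Ti above.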
22) Message boards : Number crunching : Say goodbye to SETI@home v7. (Message 1800340)
Posted 3 Jul 2016 by Profile HAL9000
Only 4 tasks left

We're getting there, but it will take time :-)

It's like a NYE countdown, but a bit slower.
23) Message boards : Number crunching : What is the % of S@H top computers with Lunatics apps/.exe installed? (Message 1800170)
Posted 2 Jul 2016 by Profile HAL9000
Even seeing "Anonymous platform" in someone's app details or task list doesn't tell you if they are using the optimized Lunatics apps. It only tells you that the host is using an app_info.xml.

Someone might want to use the stock apps but limit which apps run, or pick specific combinations of apps, rather than run everything the server sends them.
24) Message boards : Number crunching : Average Credit Decreasing? (Message 1799912)
Posted 1 Jul 2016 by Profile HAL9000
Found it...now!

But still feeling contradicting emotions towards SETi@home & it's team...by releasing an app (v8 SoG & sah), which hangs on some nVidia cards...didn't they use BETA for sthg?!
:/

With limited users & hardware participating in SETI@home Beta, some issues will not be found until the apps are released to the main project.
25) Message boards : Number crunching : BOINC client isn't downloading new S@H workunits on S6 Android (Message 1799754)
Posted 30 Jun 2016 by Profile HAL9000
I checked the BOINC fora for your issue & didn't find anything.
I would guess the missing CPU information for the host is likely the cause.
26) Message boards : Number crunching : USB Risers (Message 1799508)
Posted 29 Jun 2016 by Profile HAL9000
You definitely want to power that as well, because the card is expecting (I believe, correct me if I am wrong on the amount) 75 watts to come thru the PCI-E bus itself, and get the rest of what it needs from the external connectors you mentioned.


But aren't those 75 Watts already supplied "through" the 1X slot on the Mobo and "carried" by the USB 3.0 cable to the card, or is the USB cable responsible only for data transfer between the MoBo and the GPU's ?

PCIe specification states
x16 slots may provide up to 75W
x8/x4 slots may provide up to 25W
x1 slots should initially provide 10W & full-height cards may request up to 25W

Some motherboard manufacturers design their boards to provide a full 75W to every slot, no matter if it is an x16 or x1 slot.
Some manufacturers do not even provide the power specified in the PCIe bus specification to their slots.
It is safest to always use the power connector provided by the adapter or extension cable.
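The slot budgets above could be sketched as a small lookup (a hypothetical helper, not part of any real tool):

```python
# Sketch of the PCIe slot power budgets quoted above.
SLOT_POWER_W = {
    "x16": 75,           # up to 75 W
    "x8": 25, "x4": 25,  # up to 25 W
    "x1": 10,            # 10 W initially; full-height cards may request 25 W
}

def bus_power_shortfall(card_draw_w, slot="x1"):
    """Watts that must come from an external connector if the slot
    delivers only its specified budget."""
    return max(0, card_draw_w - SLOT_POWER_W[slot])

# A card expecting 75 W from the bus, on a riser plugged into an x1 slot:
print(bus_power_shortfall(75, "x1"))  # -> 65
```

This is why powering the riser itself is the safe choice: an x1 slot's budget can fall well short of what the card expects from the bus.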
27) Message boards : Number crunching : BOINC 7.4.51 on a Droid. (Message 1799141)
Posted 28 Jun 2016 by Profile HAL9000
Wish Amazon and the BOINC developers would coordinate better on the released apps for Amazon Fire tablets. Last version that will work on my Fire HDX is 7.4.14. Any newer release tries to install but fails with an incompatible version error message.

Is your Kindle Fire a 2nd generation one? The OS for that version of Kindle looks to only go to 4.0.3 & the current release of BOINC requires Android 4.1 or higher.
28) Message boards : Number crunching : BOINC 7.4.51 on a Droid. (Message 1798939)
Posted 27 Jun 2016 by Profile HAL9000
I rebooted my phone 7615058 in the past hour, and noticed Google Play updated BOINC from 7.4.43 to 7.4.51. Looking at BOINC.berkeley.edu says x.41 is "stable" and x.43 is, essentially, a test version...

Where could I find info on x.51?
Does the BOINC page need an update?

(As far as I know I'm not signed up for alpha or beta BOINC testing.)

I would expect the primary release mechanism for BOINC on Android to be via Google Play, rather than the BOINC website.
As the BOINC site states, "We recommend that, rather than downloading BOINC from here, you get it from the Google Play Store or the Amazon app store (for Kindle Fire)"

You can find the release notes for the posted version of BOINC for Android
under the What's New section for the app in the Play store.
https://play.google.com/store/apps/details?id=edu.berkeley.boinc
New to 7.4.51:
* Add support for new processor types.
* Update support libraries.

The "new processor types" likely refers to the added support for aarch64 ARM CPUs.

EDIT:
Looking in the BOINC download directory, versions 7.4.44 to 7.4.52 have been made available for test recently.
It looks like BOINC v7.4.51 was built on Friday. Then likely given a test over the weekend before being deployed to the app stores for release Monday.

Additionally you may find this post helpful
http://boinc.berkeley.edu/dev/forum_thread.php?id=11065&postid=70443#70443
29) Message boards : Number crunching : Building a 32 thread xeon system doesn't need to cost a lot (Message 1798008)
Posted 22 Jun 2016 by Profile HAL9000
Funny guy... :-p

Wait, that article wasn't published on April 1? Hmmm


I have a few issues with their statement.
"The 1,000 processors can execute 115 billion instructions per second while dissipating only 0.7 Watts which mean it can be powered by a single AA battery."

Power dissipation is not the same as power consumption. A device could have a power input of 10 W & dissipate 0.7 W, which would make it ~93% efficient.
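That arithmetic, as a tiny sketch (illustrative figures from the example above):

```python
# Sketch: efficiency from input power vs. power dissipated as heat.
def efficiency(power_in_w, dissipated_w):
    """Fraction of input power not lost as heat."""
    return 1 - dissipated_w / power_in_w

# 10 W in, 0.7 W dissipated -> ~93% efficient
print(f"{efficiency(10, 0.7):.0%}")  # -> 93%
```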

With all of the power wires going into the device I imagine it might consume a fair bit more juice at full oomph.



I would guess that the press release information may have an error. It is likely the chip can execute 115 billion instructions per second, & it may also be able to run with 0.7 W of power input. I imagine it is not both at the same time, unless it is some kind of RISC architecture running very specific instructions, like those coin miners running hashes.
30) Message boards : Number crunching : CPU time difference (Message 1797959)
Posted 22 Jun 2016 by Profile HAL9000
Both i5's are laptops, so lower powered makes sense.

I hear you as to your last point, and I understand it. Due to edits of existing posts, you may have missed what I added to my 22 Jun 2016, 0:07:40 UTC post (last 1.5 paragraphs). Maybe I'm just failing to understand, but I don't see how I can compare my times for two different tasks in any meaningful way. I don't know of any indicator of the "size" of a task other than the CPU time itself.

When you are looking at your Task List,
select the Task ID for a completed task.
Then within the Stderr output you will see a line "WU true angle range is :". You want to compare tasks where that value is similar. "Normal" tasks, which you ideally want to use as a baseline, fall in the 0.42-0.44 range. So if you are comparing a task with a value of 0.431888 to one with a value of 0.008735 it doesn't really tell you anything.
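A minimal sketch of that comparison rule (the `comparable` helper and its values are hypothetical, only the 0.42-0.44 band comes from the post):

```python
# Sketch: only compare run times of tasks whose "WU true angle range"
# values both fall in the "normal" baseline band.
NORMAL_AR = (0.42, 0.44)  # baseline "normal" tasks per the post above

def comparable(ar_a, ar_b, band=NORMAL_AR):
    """True if both angle ranges fall inside the baseline band."""
    lo, hi = band
    return lo <= ar_a <= hi and lo <= ar_b <= hi

print(comparable(0.431888, 0.427))     # True  -> times can be compared
print(comparable(0.431888, 0.008735))  # False -> tells you nothing
```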
31) Message boards : Number crunching : CPU time difference (Message 1797825)
Posted 22 Jun 2016 by Profile HAL9000
I think that's been the general conclusion of most people that have tried crunching using their Intel on-die GPU.

That's been my point throughout this thread. Raistmer said on 5 June: "Try to limit number of cores BOINC use to 2 instead of 4. How this will affect runtime?" Apparently he hadn't received the memo. Now, with any luck, he has.

It is a known issue. I believe all previous tests were performed on i5 CPUs, where HT was not a factor, so it would make sense to see if HT is a factor. Also, checking how the current-gen hardware responds with the newest app version is useful information.

The right mix of project CPU & iGPU apps is still an unknown. There are several projects that offer CPU & iGPU apps if you wish to experiment.
32) Message boards : Number crunching : Enabling APU's GPU with installed discrete GPU. Mobo is MSI FM2-A75MA-E35 (Message 1797311)
Posted 19 Jun 2016 by Profile HAL9000
Thanks for links and comments.

It seems that I need another AMD-only GPU indeed :/

Currently, AMD Radeon™ Dual Graphics is supported on the AMD A-Series APUs in conjunction with select AMD Radeon™ R7 series and AMD Radeon™ HD 6000 series graphics cards used under the Microsoft Windows 7 operating system.


Not the right OS also...

Well, will try to find "better home" for NV GTX260 then :)

That is true. Windows Server 2008 is the same level as Windows Vista.
33) Message boards : News : Arecibo still threatened with closure. (Message 1796103)
Posted 14 Jun 2016 by Profile HAL9000
Just for a bit of understanding, does anyone know
how important the Arecibo dish is to spotting big
rocks that may hit the earth?

My understanding is that it was being used for that, at least at one point in time. However, when it is being used to look for space rocks we don't get data for SETI@home.
34) Message boards : News : Did SETI@home ever find aliens? (Message 1795859)
Posted 13 Jun 2016 by Profile HAL9000
I pronounce BOINC like "boink" or "bo ink"

Usually the first one. :)

There are those that pronounce it BOINC, but personally I'm going to keep saying BOINC.
35) Message boards : News : Arecibo still threatened with closure. (Message 1795851)
Posted 13 Jun 2016 by Profile HAL9000
Arecibo is what in Carso (Karst) highland is called a dolina. It cannot be dismantled. Also the China FAST is in a dolina.
Tullio

The landform it's in can't be dismantled (at least not without lots of explosives), however all of the structures & equipment can be removed. Arecibo the place will still be there, but not the observatory.

I'm sure we could blow the whole area well below sea level if needed. I mean have you seen our military spending?

I'd like to see the Arecibo Observatory stay funded & running, but on the other hand I hate already seeing 30% of my pay taken in taxes.
36) Message boards : Number crunching : Welcome to the 17 Year Club! (Message 1795541)
Posted 12 Jun 2016 by Profile HAL9000
Well, are here only BOINC addicts welcome or does the "old" Seti@Home also count?
Cause if it counts, i'm in... ;)

BOINC was released 14 years ago. 2002-04-10
SETI@home officially started using BOINC 12 years ago. 2004-06-22

Some of us have been going for 17 years since the project started in 1999.
37) Message boards : Number crunching : LotzaCores and a GTX 1080 FTW (Message 1795376)
Posted 11 Jun 2016 by Profile HAL9000
Thanks! My RAC in the software shows that it has rocketed from 0 to 4600 in those 2 short days. Who knows how high it just might go? :-)


Don't know for sure, but can do some ballparking.

IIRC 8 Xeon cores doing MB, back in cobblestone scale days, used to get about 20K RAC on PreAVX AKv8 code. Since then there's been two main credit drops amounting to x ~30%. You claw back a little for increased throughput with AVX (about 1.5x), so my guess with 48 CPU cores alone (AVX capable + fast memory), would be 20K*6*0.3*1.5 ~= 50K (1 significant digit). Lots of variability, especially if adding AP and weird work mixes and GPUs into the picture.



. . Should not that formula be 20K*6*0.7*1.5 ??

. . Or is that drop "to" 30% not "of" 30%.

. . I am curious to know if the drop has been that large?

The change from MBv6->MBv7 was a drop of 40-50% for some.

I prefer to just use the actual run times to calculate the number of tasks a day & then guesstimate the credit. It looks like their normal AR tasks are running ~2.5 hours. So that gives us ~450 tasks/day. I like to figure 100 credit per task to give a max theoretical RAC value of 45,000. Then I figure 80% of the max for the low end of 36,000. Which would put a RAC of ~40.5K in the middle.
The daily credit values for the past few days on their 48 core host are: 34,506 36,640 46,096 62,035 35,202 39,718 35,010 35,898 44,786. Which averages out to 41099 for the past 9 days.
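The back-of-envelope estimate above can be checked with a few lines of Python (figures are taken from the post, which rounds 460.8 tasks/day down to ~450; the variable names are mine):

```python
# Sketch of the RAC guesstimate described above.
cores = 48
hours_per_task = 2.5                         # "normal AR" run time
tasks_per_day = cores * 24 / hours_per_task  # 460.8, rounded to ~450 above
credit_per_task = 100                        # rough figure used above
rac_max = tasks_per_day * credit_per_task    # theoretical ceiling
rac_low = 0.8 * rac_max                      # 80% low-end figure
print(tasks_per_day, rac_max, rac_low)

# Sanity check against the daily credit values quoted above:
daily = [34506, 36640, 46096, 62035, 35202, 39718, 35010, 35898, 44786]
print(round(sum(daily) / len(daily)))  # -> 41099
```

The observed 9-day average lands right in the middle of the estimated 36K-46K band, as the post notes.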
38) Message boards : Number crunching : CPU time difference (Message 1795368)
Posted 11 Jun 2016 by Profile HAL9000

It was recommended they suspend GPU work several days ago to see how it affected their CPU processing times. No feedback was provided as to the results or whether it was tried.
Ivy Bridge & Haswell based CPUs have shown that using SETI@home apps on the CPU & iGPU at the same time causes CPU apps to run more slowly. From one host with a Skylake i5 it was thought that the CPU slowdown may be greater on the newest CPUs. If that is true, I would speculate that the increased iGPU speed in Skylake may be causing even more "cache thrashing". It has been speculated that running a less cache-heavy app on either the CPU or iGPU may be the solution, but I'm not sure anyone has sat down and tested that yet.


. .

. . I5-6400 (2.7GHz) HD530 GPU.

. . Runtime for CPU non VLAR tasks with iGPU crunching approx 3.5 hours

. . Runtime same tasks with GPU use discontinued approx 2 hours

. . Runtime with Lunatics 0.44 running AVX approx 1 hour 15 mins.

. . And since 2 cores are hyperthreading does that not mean there is competition for the associated maths unit? I would think it would be best to try just 2 basic cores under lunatics and compare his productivity then.


Nearly double CPU run time when using the iGPU is similar to previous observations.
My initial posts when I noticed it occurring
A journey: iGPU slowing CPU processing
iGPU tuning
Raistmer's research thread
Loading APU to the limit: performance considerations - ongoing research
It would be interesting to see the results from a CPU where the iGPU has a dedicated cache to use. Typically only the Iris Pro iGPUs have the additional cache, but the current generation Iris 540 & 550 also list a dedicated iGPU cache. I suspect that little to no CPU slowdown will occur, similar to the results I found with my Celeron J1900, Bay Trail, CPU when using the iGPU.

To run HT or not is an often debated topic here. I have always seen an increase in work output when using HT.
There are a few different configurations to test how well HT works on a specific system, once a baseline using all cores & threads has been established.
1) Disable HT in their BIOS.
2) Leave HT enabled & reduce the number of CPU threads BOINC is allowed to run.
3) Leave HT enabled, reduce the number of CPU threads BOINC is allowed to run, & start BOINC with affinity settings only allowing the physical cores to be used.
Running 4 CPU tasks at once on my i7-860 I found no real noticeable difference using config #3 over #2, but config #1 was slightly worse. Running 8 CPU tasks at once on my i7-860 I found an 11.1% increase in power consumption & a 27.7% increase in work output.
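Using the i7-860 figures above, the work-per-watt change with HT can be sketched as (variable names are mine):

```python
# Sketch: work-per-watt comparison, 8 tasks (HT) vs 4 tasks (no HT),
# using the i7-860 figures quoted above.
power_increase = 1.111  # +11.1% power consumption with 8 tasks
work_increase = 1.277   # +27.7% work output with 8 tasks
work_per_watt_gain = work_increase / power_increase - 1
print(f"{work_per_watt_gain:.1%}")  # roughly +15% work per watt with HT
```

So even though HT draws more power, the work done per watt still improves, which matches the observation of always seeing a net increase in output with HT enabled.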

I have not run similar tests on my i3-390M system. I will typically just use its Radeon GPU. As it is a notebook, it doesn't really have the thermal capacity to run the GPU+CPU full tilt, & the GPU can do about as much work as the CPU alone.
39) Message boards : Number crunching : Are some gpu tasks longer now? (Message 1795190)
Posted 10 Jun 2016 by Profile HAL9000

MB8_win_x86_SSE3_OpenCL_NV_r3430_SoG.exe, I thought that was clear from my post, where I mention SoG several times. MB8_win_x86_SSE3_OpenCL_NV_r3430.exe is not a SoG version.
MB8_win_x86_SSE3_OpenCL_NV_r3430_SoG.exe also have the -use_sleep option if one wants to use it.


I guess I am very confused then. So you are saying that MB8_win_x86_SSE3_OpenCL_NV_r3430_SoG.exe IS NOT a SoG app, EVEN THOUGH it ships with the <plan_class>opencl_nvidia_SoG</plan_class> in its aistub file???

I think you might have misread their post.
40) Message boards : Number crunching : Burning rubber... err, CPU's... on an Android phone. (Message 1795098)
Posted 10 Jun 2016 by Profile HAL9000
The default configuration for BOINC on Android stops applications from running when the device battery temp reaches 40°C (104°F). You can set it to a lower value by going to Preferences, checking Show advanced preferences and controls... and then scrolling down to MAX battery temperature.



Copyright © 2016 University of California