Posts by HAL9000


21) Message boards : Number crunching : Thoughts on this card, Deal or more likely Crapshoot? (Message 1788178)
Posted 8 days ago by Profile HAL9000
Well, they arrived late last week from China, fairly sparse inside the package, a driver CD and a molex-6 pin adapter, and the card. I tossed it into one of my freshly built systems, and have been letting it run for a few days - Computer 8001193.

It hasn't set the world on fire, but the RAC seems to be slowly creeping up, though it will be a week or two for it to (hopefully) stabilize. It will be interesting to see how it compares to the various versions of the 750 and 950 that I have.


How many tasks are you running at the same time on this GPU? It seems a little slow for just one...
Also, did you try GPU-Z? It should report the chip that is actually on the card.

Tom

Working on the GPU-Z screen shot now, it shows it as a GK104 GTX 770, 192 shaders, 790 default clock which I have O/C'ed to 868 and seems to be running ok at that speed. The bus width shows as 192 bit as well. The system is running an X3370 CPU, running 3 out of 4 cores with one reserved for the GPU, and it is running 2 tasks at a time.

I would have expected to see 1152 shaders.
22) Message boards : Number crunching : Thoughts on this card, Deal or more likely Crapshoot? (Message 1788172)
Posted 8 days ago by Profile HAL9000
Well, they arrived late last week from China, fairly sparse inside the package, a driver CD and a molex-6 pin adapter, and the card. I tossed it into one of my freshly built systems, and have been letting it run for a few days - Computer 8001193.

It hasn't set the world on fire, but the RAC seems to be slowly creeping up, though it will be a week or two for it to (hopefully) stabilize. It will be interesting to see how it compares to the various versions of the 750 and 950 that I have.


How many tasks are you running at the same time on this GPU? It seems a little slow for just one...
Also, did you try GPU-Z? It should report the chip that is actually on the card.

Tom

It would be interesting to see a screenshot from GPU-Z of that GPU. From the eBay description it seems like a rebadged 192-bit GTX 760.

My 750ti runs 2 tasks in ~25 min, so we can compare how long it takes vs. a 750, or compare to other Kepler GPUs.
23) Message boards : Number crunching : Average Credit Decreasing? (Message 1787955)
Posted 9 days ago by Profile HAL9000
BTW, Einstein@home on CreditScrew currently?

I believe Einstein@home uses fixed credit per app. All of my tasks have been the same credit for Binary Radio Pulsar Search (Arecibo, GPU) & Binary Radio Pulsar Search (Parkes PMPS XT).
24) Message boards : Number crunching : CPU Invalids (Message 1787861)
Posted 9 days ago by Profile HAL9000
I am just wondering if there is something wrong with this computer as the 2 APs that were crunched on the CPU both came up invalid.
http://setiathome.berkeley.edu/results.php?hostid=7965534&offset=0&show_names=0&state=0&appid=20
Generally CPU APs are a rarity on my systems, and the worst part is that I am leaving for a week this morning. I will try to follow whatever you advise while I am away, as I will be checking in remotely if there is enough bandwidth where I am staying.

The first thing I noticed is that both of your results state Found 30 single pulses and 30 repeating pulses, exiting.
whereas the results for the other hosts did not have 30/30 signals.
They found
single pulses: 6 repetitive pulses: 1
and
single pulses: 5 repetitive pulses: 0
for the tasks of those two workunits.

It looks like your host isn't returning good results. Often this can be from a thermal issue or overclocking.
25) Message boards : Number crunching : Question Of Meaning = SETI Is Useful Currently : Yes Or No? (Message 1787722)
Posted 10 days ago by Profile HAL9000
Is there any news about Kevin's work?

It has been 4 months since this video.
He was working on 10% of the database at the time, testing the new NTPCkr for Amazon cloud servers.

That is the most up to date, and only, information about the subject.
26) Message boards : Number crunching : raspberry pi 3 vs GPU, whats best? (Message 1787720)
Posted 10 days ago by Profile HAL9000

At 50 hours running 4 tasks at once that is about 1.92 tasks per day on average. Scaling that up to 8 devices using 2.5w each would be 20w & 15.36 tasks a day. However that still comes to (20w*24)/15.36 tasks a day = 31.25Wh per task average.

Raspberry Pi devices are cheap and consume little power, but they are not the most efficient SETI@home devices. So if your goal is to increase your SETI@home contribution as cost-effectively as possible, they are not the answer at this time. If you already have a Raspberry Pi & want to use it for SETI@home, there is an application available.


Yep, seems fairly conclusive.
I'm getting 10 x Orange Pi One boards just for kicks and will try a cluster of 40 ARM CPUs and see how it goes.
I don't expect the efficiency to be magically greater than the calcs for a single RPi2 though.

If your plan is to run BOINC on an ARM cluster, you should be aware that BOINC uses shared memory to communicate with its apps, so each node will have to run its own instance of BOINC.
27) Message boards : Number crunching : Question Of Meaning = SETI Is Useful Currently : Yes Or No? (Message 1787572)
Posted 11 days ago by Profile HAL9000
Well then someone needs to go through it; we probably have petabytes of data just loitering on the HDDs and tapes.

They want to... but they need 10TB of SSDs and at least 512GB of RAM in a single server to do so, if they want to keep it in-house. Renting cloud computing time to run through it would cost a fortune, but it would be faster. Even a server with those minimum specs is estimated to take several months to run through the science DB's ~17-year backlog.

It was mentioned recently that they were working on making NTPCkr cloud based.
https://www.youtube.com/watch?v=gR91rObbfqs
All hopes rest in the hands of Kevin. So no pressure for them!
28) Message boards : Number crunching : Question Of Meaning = SETI Is Useful Currently : Yes Or No? (Message 1787417)
Posted 12 days ago by Profile HAL9000
Well then someone needs to go through it; we probably have petabytes of data just loitering on the HDDs and tapes.

Very easy to say...that 'someone needs to go through it'.

Are you willing to fully fund the resources to do so?
The computer time, the manhour time.
A couple of new astrophysicists on the Seti staff?
Many complain that our work to date has not been fully analyzed yet.
There are reasons why.

We are lucky enough that Eric and others have kept the project together with string and baling wire and duct tape to at least continue to accumulate the results of the data we have crunched and sent back to them.

It could be a treasure trove, once fully processed.

Meow.

Wait... they had a budget for duck tape?
29) Message boards : Number crunching : Building a 32 thread xeon system doesn't need to cost a lot (Message 1787395)
Posted 12 days ago by Profile HAL9000
Power load is 2.8kW at 100% CPU, but the PSUs are 2-phase 220V, 9.8A per PSU.

2.8kW!
Is that actual load when running, or maximum rating for the unit?

The spec sheet for the Dell PowerEdge C6145 says it can be equipped with either 1100W or 1400W redundant PSUs. I'm guessing they have the 1400W PSUs. All of my Dell servers would typically run at 50-60% of the PSU rating when at full load & filled with HDDs. That is normally in the peak efficiency range for PSUs, so it makes sense the power supplies would be spec'd to run in that range.
30) Message boards : Number crunching : How to calculate performance per watt of power to compare different architecture? (Message 1787148)
Posted 13 days ago by Profile HAL9000
So tasks are linear, as in they do not vary in complexity?
To confirm, a task that takes 2 hours is always doing 2x as much work as a task that takes 1 hour?

If I'm not mistaken, the tasks rely heavily on floating-point operations, so would FLOPS also be a good yardstick?

I've read the thread you mention, but was not able to draw a conclusion, and was not confident the calculations being made by some users were correct and valid.

Tasks differ by Angle Range, or AR. You can see this value when you look at a completed task: look for "WU true angle range is:". A value of 0.42-0.44 is "normal"; tasks with much higher or lower values differ in the time it takes to complete them. It is best to compare tasks with a similar AR & a similar number of other signal counts, which are displayed as:
Spike count: 8
Autocorr count: 0
Pulse count: 0
Triplet count: 0
Gaussian count: 0

Then you can compare the performance of device A to device B.
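
To make that concrete, here is a minimal Python sketch of the comparison I mean (the host names and numbers are made up; real data would come from each host's completed-task list):

def average_runtime(tasks, ar_low=0.42, ar_high=0.44):
    # Average run time (seconds) of tasks whose AR falls in the chosen "normal" range.
    times = [secs for ar, secs in tasks if ar_low <= ar <= ar_high]
    return sum(times) / len(times) if times else None

# (angle range, run time in seconds) pairs; purely illustrative values.
host_a = [(0.43, 3600), (0.42, 3550), (1.20, 900)]
host_b = [(0.44, 5400), (0.43, 5300), (0.05, 14000)]
print("Host A avg:", average_runtime(host_a), "s")
print("Host B avg:", average_runtime(host_b), "s")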

I haven't found using FLOPs across different types of hardware to work for me.

EDIT: Also I believe jason_gee, who does much of the CUDA app development, estimates the CUDA apps are around 5-10% efficient. I might have the percentage incorrect, but it's rather low.
31) Message boards : Number crunching : How to calculate performance per watt of power to compare different architecture? (Message 1787142)
Posted 13 days ago by Profile HAL9000
I prefer to calculate watt hours per task. You only need to know how long a normal task takes, how many are running on the device, and the power consumption.

Using credit to determine performance is no good as it is variable.

The efficiency of the Raspberry Pi is a topic that just came up in the "raspberry pi 3 vs GPU, whats best?" thread.
32) Message boards : Number crunching : raspberry pi 3 vs GPU, whats best? (Message 1787050)
Posted 13 days ago by Profile HAL9000
So a GTX 750Ti is (almost) 4 times as efficient as a RPi2.


On its own, yes, but a GPU doesn't work on its own; it's part of a PC system that also draws extra power: CPU, motherboard, drives, etc.
The RPi2 is a complete system on its own, so it's not an apples-to-apples comparison.
Care to redo the calcs for your complete system?

The OP's original question was Raspberry Pi 3 vs GPU, which is why there is a lot of talk about GPUs. On the GPU side I expected the OP was considering upgrading a GPU in one of their systems or adding an additional GPU. On the Raspberry Pi side you can get several for the price of a GPU.

However, to first address the system vs. system question: in a previous post I was using the processor TDP values to calculate watt-hours per task. For two of my systems the results came out as follows.
For comparison, some of my systems:
i5-4670K with a TDP of 84W running 4 MB tasks at once in ~1h each.
((84W * 60min)/60)/4 = 21Wh per MB task
Celeron J1900 with a TDP of 10W running 4 MB tasks at once in ~6h each.
((10W * 360min)/60)/4 = 15Wh per MB task
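
For anyone who wants to plug in their own numbers, the arithmetic above is just this (a small Python sketch using the same TDP figures; swap in your own watts and run times):

def wh_per_task(watts, minutes_per_task, concurrent_tasks):
    # ((watts * run time in minutes) / 60) / number of concurrent tasks
    return (watts * minutes_per_task / 60) / concurrent_tasks

print(wh_per_task(84, 60, 4))   # i5-4670K at TDP: 21.0 Wh per MB task
print(wh_per_task(10, 360, 4))  # Celeron J1900 at TDP: 15.0 Wh per MB task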

To get the complete system power usage I used the power display on my UPS.
For my host 5837483, which is a gaming machine: it has 4 3.5" HDDs & an R9 390X, so it is not really geared toward being super efficient, but it does have an 80Plus Platinum PSU. I was reading 97-102W while running 4 CPU tasks.
I'll use the high figure of 102W.
((102W * 60min)/60)/4 = 25.5Wh per MB task
For my host 7324426, which was kind of made with efficiency in mind: it does use a 2.5" HDD, but has an old 350W PSU that isn't even 80Plus certified. I was reading 15-16W while running 4 CPU tasks.
((16W * 360min)/60)/4 = 24Wh per MB task

Previously I was using 45W for my 750ti GPU's power consumption. Others have observed similar usage from their power meters. So if I were to add the GPU to either of the two systems I referenced, the numbers would indeed be different.
Since my previous method of ((watts * run time in minutes)/60)/number of concurrent tasks doesn't work when mixing CPU & GPU tasks, I'll use daily watt-hours / daily number of tasks.
Host 5837483 base CPU figures
(102w*24)/96 tasks a day = 25.5Wh per task
Host 7324426 base CPU figures
(16w*24)/16 tasks a day = 24Wh per task
Host 5837483 with 45w GTX 750ti doing 114 tasks a day added
(147w*24)/210 tasks a day = 16.8Wh per task average
Host 7324426 with 45w GTX 750ti doing 114 tasks a day added
(61w*24)/130 tasks a day = 11.26Wh per task average

So if you wanted to build a highly efficient cruncher from scratch you might want to look into getting an ASrock Q1900M & a GTX 750ti.
In USD an EVGA GTX 750ti can range from $100-130 depending on which version you get, & the MB/CPU runs $70-80 depending on where you shop. Then about another $80-90 for an SSD, 8GB of RAM, & a PSU. So about $300 total. The Raspberry Pi is listed as being $35, but the cheapest I can find it is $47; that does include a power supply, but no SD card. However, at $35 each, 8 of them would only be about $280.
At 50 hours running 4 tasks at once that is about 1.92 tasks per day on average. Scaling that up to 8 devices using 2.5w each would be 20w & 15.36 tasks a day. However that still comes to (20w*24)/15.36 tasks a day = 31.25Wh per task average.
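
The daily-average method boils down to one line as well; here is a hedged Python sketch using the figures above (the 45W GPU draw and the daily task counts are the estimates already stated, not measurements):

def wh_per_task_daily(system_watts, tasks_per_day):
    # (system watts * 24 hours) / tasks completed per day
    return system_watts * 24 / tasks_per_day

print(wh_per_task_daily(102, 96))           # host 5837483, CPU only: 25.5 Wh
print(wh_per_task_daily(147, 210))          # host 5837483 + 45W 750ti: 16.8 Wh
print(wh_per_task_daily(61, 130))           # host 7324426 + 45W 750ti: ~11.26 Wh
print(wh_per_task_daily(8 * 2.5, 8 * 1.92)) # 8 Raspberry Pis: 31.25 Wh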

Raspberry Pi devices are cheap and consume little power, but they are not the most efficient SETI@home devices. So if your goal is to increase your SETI@home contribution as cost-effectively as possible, they are not the answer at this time. If you already have a Raspberry Pi & want to use it for SETI@home, there is an application available.
33) Message boards : Number crunching : 2 new crunchers (Message 1786919)
Posted 14 days ago by Profile HAL9000
well i'm test running it now and man the heat that thing puts out is enormous

can you estimate what the RAC could be?

and my god that thing is loud

You could probably put the heat coming out to use drying clothes or making jerky.

Just for quick math I tend to use 100 credit for a normal AR task. So 150 tasks a day would give a RAC of about 15,000. Since the credit system is variable a RAC of 10-15K might be more realistic.
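
If you want to run your own estimate, the quick math is just this (a sketch; 100 credits per task is only my rule of thumb, not a fixed value):

def estimated_rac(tasks_per_day, credit_per_task=100):
    # Rough long-run RAC: average daily credit once the host settles in.
    return tasks_per_day * credit_per_task

print(estimated_rac(150))  # ~15,000, before the credit system's variability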
34) Message boards : Number crunching : 2 new crunchers (Message 1786896)
Posted 14 days ago by Profile HAL9000
yeah this is one single server with 2 boards in it so total is 96 cores

i shall see how it performs

With 920w of CPUs it will make a fantastic space heater while crunching work.
I'm going to estimate that normal AR tasks will run 8-10 hours. So each node will pump out ~150 MB tasks a day.
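
That estimate is just cores, times hours in a day, divided by hours per task; a rough sketch assuming 48 cores per node (half of the 96 total):

def tasks_per_day(cores, hours_per_task):
    # One task per core, running around the clock.
    return cores * 24 / hours_per_task

print(tasks_per_day(48, 8))   # ~144 tasks a day per node
print(tasks_per_day(48, 10))  # ~115 tasks a day per node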
35) Message boards : Number crunching : Windows 10 - Yea or Nay? (Message 1786828)
Posted 14 days ago by Profile HAL9000
So I purged all the bad ones.. and just turned updates off entirely. Problem solved.

No problems here.
36) Message boards : Number crunching : Radeon Software Crimson (Message 1786824)
Posted 14 days ago by Profile HAL9000
APs are not working right. I have 16.4.2 installed. I aborted this wonky AP.


With 16.3.2 being rubbish on my system, I'll wait for the next released driver before giving another 16.x driver a chance.
37) Message boards : Number crunching : Trying To Increase The Clock Speed Of An Nvidia 750 ti ?? (Message 1786822)
Posted 14 days ago by Profile HAL9000
I tried several of the same commands on my 750ti. Mine was already running 1345.5MHz when running SETI@home, but I figured I would see if I could push it any further using the nvidia-smi commands.

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -pm 1
Setting persistence mode is not supported for GPU 0000:08:00.0 on this platform. Treating as warning and moving on.
All done.

Looks like Windows doesn't get that function.




Interesting.

Just for grins I might have tried 1463 MHz first rather than working my way up. When I ran "nvidia-smi -q -d SUPPORTED_CLOCKS" it gave a short list of discrete values. Perhaps you cannot set it to any of the in-between values not listed. It would only be an academic exercise though if you are limited by low voltage.

I wonder why your max. power is 52 W while mine is 38.5 W. I guess it must be a hard limit the card manufacturer puts in their firmware and the cure is the same as for the under voltage condition - flashing a modded BIOS. After my recent MB fiasco, I do not think I am going there :).


Looks like I was misinterpreting the values being displayed. I saw Memory & 2700 and figured it was a list of memory clock speeds.
Attached GPUs : 1
GPU 0000:08:00.0
    Supported Clocks
        Memory : 2700 MHz
            Graphics : 1463 MHz
            Graphics : 1450 MHz
            Graphics : 1437 MHz
            Graphics : 1424 MHz
            Graphics : 1411 MHz
            Graphics : 1398 MHz
            Graphics : 1385 MHz
            Graphics : 1372 MHz
            Graphics : 1359 MHz
            Graphics : 1346 MHz
            Graphics : 1333 MHz

I'm surprised that 1345 & 1347 MHz were accepted previously. Perhaps +/- 1 MHz is allowed? After setting nvidia-smi -ac 2700,1463 my GPU still only runs at 1345.5 MHz, but the reason displayed changes from Applications Clocks Setting to Unknown.

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d PERFORMANCE
==============NVSMI LOG==============
Timestamp : Wed May 11 15:46:07 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Performance State : P0
    Clocks Throttle Reasons
        Idle : Not Active
        Applications Clocks Setting : Not Active
        SW Power Cap : Not Active
        HW Slowdown : Not Active
        Sync Boost : Not Active
        Unknown : Active


I imagine if I used a tool like Nvidia Inspector I could overclock the card further, but I don't really feel the need to. The fact it runs at 1345 MHz instead of the specified 1189 MHz or 1268 MHz already seems like a win to me.
38) Message boards : Number crunching : BOINC And Interference With Other Programs [ RESOLVED ] (Message 1786774)
Posted 14 days ago by Profile HAL9000
I place multiple elements on a single line in mine without any problems. I think the issue you had was using non-XML separators between elements.

<options> <exclusive_app>TS3W.exe</exclusive_app><exclusive_app>fallout4.exe</exclusive_app> <exclusive_gpu_app>TS3W.exe</exclusive_gpu_app><exclusive_gpu_app>fallout4.exe</exclusive_gpu_app> </options>

09-May-2016 17:52:32 [---] Config: don't compute while TS3W.exe is running
09-May-2016 17:52:32 [---] Config: don't compute while fallout4.exe is running
09-May-2016 17:52:32 [---] Config: don't use GPUs while TS3W.exe is running
09-May-2016 17:52:32 [---] Config: don't use GPUs while fallout4.exe is running

Note: My use of <exclusive_app> & <exclusive_gpu_app> is redundant. However, I was originally using <exclusive_gpu_app>, then found I needed to have CPU apps suspended as well for fallout. So I copied the <exclusive_gpu_app> line to <exclusive_app>, thinking it would be easier to comment it out if I stopped running CPU tasks. For reference, XML comments are done with start and end tags like this: <!--something goes here--> & can span multiple lines.
Also I imagine I could have placed all 4 of my exclusions on one line, but I prefer to have CPU & GPU exclusions on separate lines.

Greetings Hal,

This was how I was trying to delimit the filenames:
<exclusive_app>filename1.exe | filename2.exe | ...</exclusive_app>

and the commas:
<exclusive_app>filename1.exe, filename2.exe, ...</exclusive_app>


They showed up in the event log as:
Config: don't compute while filename1.exe | filename2.exe | ... is running
Config: don't compute while filename1.exe, filename2.exe, ... is running

The comment line code is the same as in HTML. :)

Thing is, I saw an example of GPUs delimited with the vertical bar which gave me the idea. Which of course, in this case, was wrong. ;)

Anyway, it's working just fine now. Thanks again! :)

Keep on BOINCing...! :)

Ah I see what you were doing. I think maybe it would be most correct to say each application needs to be enclosed in its own set of tags. It is probably possible to make a cc_config.xml that is one single line. That would be hard for humans to read in an app like notepad, but would be displayed correctly in an XML viewer.

I would guess when you saw an example of things separated by a vertical bar, or pipe*, it was something like this.
*I tend to call it a pipe even when I'm not piping a command.
<exclude_gpu> <url>project_URL</url> [<device_num>N</device_num>] [<type>NVIDIA|ATI|intel_gpu</type>] [<app>appname</app>] </exclude_gpu>

Which in the example is meant to display the available options & I guess could be read as an "or": <type>NVIDIA or ATI or intel_gpu</type>

It looks like the exclusive app function reads every character between the tags, which would hopefully work for applications that have a space in their name, such as "my app.exe".
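
If you're curious how the text between the tags comes out, a quick way to check is to parse a sample with Python's xml.etree (just a sketch of standard XML parsing, not how BOINC itself reads the file):

import xml.etree.ElementTree as ET

sample = """<options>
  <exclusive_app>my app.exe</exclusive_app>
  <!-- <exclusive_app>disabled app.exe</exclusive_app> -->
</options>"""

root = ET.fromstring(sample)
for elem in root.findall("exclusive_app"):
    # The element text is taken verbatim, spaces and all; the commented-out
    # element is skipped entirely.
    print(repr(elem.text))  # 'my app.exe'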
39) Message boards : Number crunching : Trying To Increase The Clock Speed Of An Nvidia 750 ti ?? (Message 1786768)
Posted 14 days ago by Profile HAL9000
I tried several of the same commands on my 750ti. Mine was already running 1345.5MHz when running SETI@home, but I figured I would see if I could push it any further using the nvidia-smi commands.

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -pm 1
Setting persistence mode is not supported for GPU 0000:08:00.0 on this platform. Treating as warning and moving on.
All done.

Looks like Windows doesn't get that function.


C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d CLOCK
==============NVSMI LOG==============
Timestamp : Wed May 11 11:50:38 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Clocks
        Graphics : 135 MHz
        SM : 135 MHz
        Memory : 405 MHz
        Video : 405 MHz
    Applications Clocks
        Graphics : 1189 MHz
        Memory : 2700 MHz
    Default Applications Clocks
        Graphics : 1189 MHz
        Memory : 2700 MHz
    Max Clocks
        Graphics : 1463 MHz
        SM : 1463 MHz
        Memory : 2700 MHz
        Video : 1317 MHz
    SM Clock Samples
        Duration : 109.42 sec
        Number of Samples : 13
        Max : 1189 MHz
        Min : 135 MHz
        Avg : 1027 MHz
    Memory Clock Samples
        Duration : 109.42 sec
        Number of Samples : 13
        Max : 2700 MHz
        Min : 405 MHz
        Avg : 2296 MHz
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A

So it says 1463 MHz. Let's go for it!

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -ac 2700,1333
Applications clocks set to "(MEM 2700, SM 1333)" for GPU 0000:08:00.0
All done.
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -ac 2700,1345
Applications clocks set to "(MEM 2700, SM 1345)" for GPU 0000:08:00.0
All done.
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -ac 2700,1346
Applications clocks set to "(MEM 2700, SM 1346)" for GPU 0000:08:00.0
All done.
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -ac 2700,1347
Applications clocks set to "(MEM 2700, SM 1347)" for GPU 0000:08:00.0
All done.
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -ac 2700,1348
Specified clock combination "(MEM 2700, SM 1348)" is not supported for GPU 0000:08:00.0. Run 'nvidia-smi -q -d SUPPORTED_CLOCKS' to see list of supported clock combinations
Treating as warning and moving on.
All done.

Looks like the max I am allowed to set is 1347 MHz. All values over that gave me the same warning message. The command nvidia-smi -q -d SUPPORTED_CLOCKS only displayed memory clock values, starting with 2700 and descending.


After setting nvidia-smi -ac 2700,1347

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d CLOCK
==============NVSMI LOG==============
Timestamp : Wed May 11 11:58:57 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Clocks
        Graphics : 135 MHz
        SM : 135 MHz
        Memory : 405 MHz
        Video : 405 MHz
    Applications Clocks
        Graphics : 1346 MHz
        Memory : 2700 MHz
    Default Applications Clocks
        Graphics : 1189 MHz
        Memory : 2700 MHz
    Max Clocks
        Graphics : 1463 MHz
        SM : 1463 MHz
        Memory : 2700 MHz
        Video : 1317 MHz
    SM Clock Samples
        Duration : 589.67 sec
        Number of Samples : 26
        Max : 1346 MHz
        Min : 135 MHz
        Avg : 522 MHz
    Memory Clock Samples
        Duration : 589.67 sec
        Number of Samples : 26
        Max : 2700 MHz
        Min : 405 MHz
        Avg : 1257 MHz
    Clock Policy
        Auto Boost : N/A
        Auto Boost Default : N/A

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d POWER
==============NVSMI LOG==============
Timestamp : Wed May 11 11:58:24 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Power Readings
        Power Management : Supported
        Power Draw : 1.00 W
        Power Limit : 52.00 W
        Default Power Limit : 52.00 W
        Enforced Power Limit : 52.00 W
        Min Power Limit : 30.00 W
        Max Power Limit : 52.00 W
    Power Samples
        Duration : 54.60 sec
        Number of Samples : 119
        Max : 3.38 W
        Min : 0.77 W
        Avg : 0.97 W

I was unable to set the power limit beyond 52W & received a similar message to yours for that command.

With a task running (I was only running 1, as I have run down the queue on the system; I'm planning to try the 750 in my HTPC to see how it interacts with my TV):
C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d POWER
==============NVSMI LOG==============
Timestamp : Wed May 11 12:09:03 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Power Readings
        Power Management : Supported
        Power Draw : 19.78 W
        Power Limit : 52.00 W
        Default Power Limit : 52.00 W
        Enforced Power Limit : 52.00 W
        Min Power Limit : 30.00 W
        Max Power Limit : 52.00 W
    Power Samples
        Duration : 10.13 sec
        Number of Samples : 119
        Max : 24.46 W
        Min : 18.46 W
        Avg : 20.33 W

C:\Program Files\NVIDIA Corporation\NVSMI>nvidia-smi -q -d PERFORMANCE
==============NVSMI LOG==============
Timestamp : Wed May 11 12:09:44 2016
Driver Version : 364.51
Attached GPUs : 1
GPU 0000:08:00.0
    Performance State : P0
    Clocks Throttle Reasons
        Idle : Not Active
        Applications Clocks Setting : Active
        SW Power Cap : Not Active
        HW Slowdown : Not Active
        Sync Boost : Not Active
        Unknown : Not Active


Despite setting nvidia-smi -ac 2700,1347 I was still seeing 1345.5 MHz in GPUz. GPUz also indicates that my GPU performance is being limited by voltage, which it was doing before I had used any nvidia-smi commands.

40) Message boards : Number crunching : BOINC And Interference With Other Programs [ RESOLVED ] (Message 1786750)
Posted 14 days ago by Profile HAL9000
I place multiple elements on a single line in mine without any problems. I think the issue you had was using non-XML separators between elements.

<options> <exclusive_app>TS3W.exe</exclusive_app><exclusive_app>fallout4.exe</exclusive_app> <exclusive_gpu_app>TS3W.exe</exclusive_gpu_app><exclusive_gpu_app>fallout4.exe</exclusive_gpu_app> </options>

09-May-2016 17:52:32 [---] Config: don't compute while TS3W.exe is running
09-May-2016 17:52:32 [---] Config: don't compute while fallout4.exe is running
09-May-2016 17:52:32 [---] Config: don't use GPUs while TS3W.exe is running
09-May-2016 17:52:32 [---] Config: don't use GPUs while fallout4.exe is running

Note: My use of <exclusive_app> & <exclusive_gpu_app> is redundant. However, I was originally using <exclusive_gpu_app>, then found I needed to have CPU apps suspended as well for fallout. So I copied the <exclusive_gpu_app> line to <exclusive_app>, thinking it would be easier to comment it out if I stopped running CPU tasks. For reference, XML comments are done with start and end tags like this: <!--something goes here--> & can span multiple lines.
Also I imagine I could have placed all 4 of my exclusions on one line, but I prefer to have CPU & GPU exclusions on separate lines.


