SETI orphans

Message boards : Number crunching : SETI orphans

Previous · 1 . . . 26 · 27 · 28 · 29 · 30 · 31 · 32 . . . 43 · Next

Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072736 - Posted: 7 Apr 2021, 15:16:23 UTC - in response to Message 2072711.  

Richard, are you able to get a look with GPU-Z on a more modern dedicated GPU? I'd like to see memory controller use on, say, a GTX 1660 or another recent Nvidia card.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072736
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2072737 - Posted: 7 Apr 2021, 15:21:47 UTC - in response to Message 2072736.  
Last modified: 7 Apr 2021, 15:22:38 UTC

I have not yet sold all my boards on the German eBay market for 2x what I paid for them, so I can possibly help with your request. I still have a 1660 Ti, an RTX 2080, and an i9-7900X, so I can run GPU-Z if you tell me what to crunch.

Lemme know, as I don't have much else to do.
ID: 2072737
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072738 - Posted: 7 Apr 2021, 16:08:10 UTC - in response to Message 2072737.  

I have not yet sold all my boards on the German eBay market for 2x what I paid for them, so I can possibly help with your request. I still have a 1660 Ti, an RTX 2080, and an i9-7900X, so I can run GPU-Z if you tell me what to crunch.

Lemme know, as I don't have much else to do.


Crunch the World Community Grid OpenPandemics - COVID-19 GPU project. (You have to go into your settings, as outlined above, to enable the GPU tasks.)
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072738
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2072739 - Posted: 7 Apr 2021, 16:15:56 UTC - in response to Message 2072738.  
Last modified: 7 Apr 2021, 16:32:12 UTC

I have not yet sold all my boards on the German eBay market for 2x what I paid for them, so I can possibly help with your request. I still have a 1660 Ti, an RTX 2080, and an i9-7900X, so I can run GPU-Z if you tell me what to crunch.

Lemme know, as I don't have much else to do.


Crunch the World Community Grid OpenPandemics - COVID-19 GPU project. (You have to go into your settings, as outlined above, to enable the GPU tasks.)



Incredible! I had no idea they have a GPU project! All my CPUs are doing COVID while I am waiting for my 2nd Pfizer shot. I will kick off all my Einstein tasks!
[EDIT] It appears I had previously set GPU tasks to OK, and there are no GPU tasks available except for an Intel GPU, which I would not use even if I had one.
World Community Grid	4/7/2021 11:26:44 AM	update requested by user	
World Community Grid	4/7/2021 11:26:50 AM	No tasks sent	
World Community Grid	4/7/2021 11:26:50 AM	No tasks are available for OpenPandemics - COVID 19	
World Community Grid	4/7/2021 11:26:50 AM	No tasks are available for OpenPandemics - COVID-19 - GPU	
World Community Grid	4/7/2021 11:26:50 AM	Tasks for Intel GPU are available, but your preferences are set to not accept them	
ID: 2072739
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072745 - Posted: 7 Apr 2021, 16:51:53 UTC - in response to Message 2072742.  

I have not yet sold all my boards on the German eBay market for 2x what I paid for them, so I can possibly help with your request. I still have a 1660 Ti, an RTX 2080, and an i9-7900X, so I can run GPU-Z if you tell me what to crunch.

Lemme know, as I don't have much else to do.


Crunch the World Community Grid OpenPandemics - COVID-19 GPU project. (You have to go into your settings, as outlined above, to enable the GPU tasks.)



Incredible! I had no idea they have a GPU project! All my CPUs are doing COVID while I am waiting for my 2nd Pfizer shot. I will kick off all my Einstein tasks!

The GPU project started yesterday after months of beta tests.
Don't expect to get any GPU work from WCG. There are only 81,600 tasks available each day (1,700 per 30-minute batch), and with a modern GPU crunching the tasks in seconds, or 1-2 minutes, you can guess how long they last.
Poof, and each 30-minute batch is gone within seconds, gobbled up by thousands of starving GPUs.
I haven't been able to get any at all for what will soon be 12 hours now.

You have a better chance of winning a million in the lottery.


I haven't had much issue getting work. Since I enabled processing on only 2 hosts last night, I've processed about 200 GPU tasks already, on only 3 GPUs. I haven't even opened the flood gates to my fastest systems yet (21 more RTX GPUs across 3 hosts).

I run a short looping command in a terminal window that uses boinccmd to keep hitting Update on a 5-minute interval. It seems that after 2 failed attempts (4 minutes, with a 2-minute delay after each), BOINC goes into a silent extended backoff, so it just isn't checking as often and you miss the times when tasks are available. My looping command keeps that from happening.

My command for Linux; you can modify it with Windows tools/commands, I'm sure:
watch -n 300 ./boinccmd --project http://www.worldcommunitygrid.org/ update
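For systems without watch(1), the same polling can be sketched as a plain POSIX shell loop. This is only a sketch: the boinccmd path and project URL are taken from the command above, and the DRY_RUN guard is a hypothetical addition here so the script just prints what it would run unless you switch it off.

```shell
#!/bin/sh
# Sketch of the same 5-minute update poll without watch(1).
# DRY_RUN=1 (default) only prints the command; set DRY_RUN=0 to
# actually issue updates on a machine running the BOINC client.
PROJECT_URL="http://www.worldcommunitygrid.org/"
INTERVAL=300               # seconds between update requests
DRY_RUN=${DRY_RUN:-1}

request_update() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "./boinccmd --project $PROJECT_URL update"
    else
        ./boinccmd --project "$PROJECT_URL" update
    fi
}

# One pass here; wrap in `while true; do request_update; sleep $INTERVAL; done`
# to poll indefinitely.
request_update
```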

Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072745
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072748 - Posted: 7 Apr 2021, 17:01:34 UTC - in response to Message 2072747.  
Last modified: 7 Apr 2021, 17:02:54 UTC

boinccmd is a tool provided by BOINC, LOL. I'm literally using default functions given to everyone by the BOINC devs; no part of this is "gaming". I gave you and everyone a command that I put together: Linux users can use it as is, and Windows users can modify it for their systems. It's not that hard to understand.

But it's OK, Grumpy, no need to be grumpy. I'm sure enough tasks will be available for everyone in the future.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072748
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072750 - Posted: 7 Apr 2021, 17:06:46 UTC - in response to Message 2072747.  

with either multi GPU systems


People aren't allowed to have more than one GPU?
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072750
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072751 - Posted: 7 Apr 2021, 17:07:57 UTC - in response to Message 2072749.  

I'm not talking about boinccmd. You and others from here are known for other kinds of gaming of the system.
And no, there are likely not going to be more than 81,600 WUs per day; each of them does 20 times more work than 1 CPU task.


Reading the press release, it seems clear their intention is to ramp up eventually with longer and more numerous tasks. I expect more tasks will be available at some point.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072751
Profile Joseph Stateson Project Donor
Volunteer tester
Joined: 27 May 99
Posts: 309
Credit: 70,759,933
RAC: 3
United States
Message 2072768 - Posted: 7 Apr 2021, 18:06:52 UTC
Last modified: 7 Apr 2021, 18:08:29 UTC

I scanned through 40 pages (no, I did not look at each page) and all COVID tasks that I contributed to were identified as "OPN1_xxx". There were no OPNG tasks. I have 3 systems capable of GPU tasks, and a fourth if the RX 570 is OK (I suspect not).

Running 24x7, it seems to me that at least one OPNG should have shown up. A thread started by a user gave statistics for a number of GPU boards.

https://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,43317_offset,0#654788

I assume those stats were from the user's own contribution, as I cannot find any detailed statistics for "leaders".


I just put an {X} in the beta testing program. Maybe that was needed to get GPU tasks?

Can someone provide a link to an OPNG task so I can see what the results look like?

TIA
ID: 2072768
Richard Haselgrove Project Donor
Volunteer tester

Joined: 4 Jul 99
Posts: 14679
Credit: 200,643,578
RAC: 874
United Kingdom
Message 2072773 - Posted: 7 Apr 2021, 18:21:28 UTC - in response to Message 2072736.  

Richard, are you able to get a look with GPU-Z on a more modern dedicated GPU? I'd like to see memory controller use on, say, a GTX 1660 or another recent Nvidia card.
Sorry, didn't see that till late. Sure, here are two comparisons.

1) An Einstein BRP4 task on the same HD 4600 iGPU I showed you before:

[GPU-Z screenshot]

Much smoother.

2) The same WCG task as before, this time run on an NV GTX 1650 SUPER:

[GPU-Z screenshot]

Not enough time resolution to see all the wriggles.
ID: 2072773
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072778 - Posted: 7 Apr 2021, 18:54:04 UTC - in response to Message 2072768.  

I scanned through 40 pages (no, I did not look at each page) and all COVID tasks that I contributed to were identified as "OPN1_xxx". There were no OPNG tasks. I have 3 systems capable of GPU tasks, and a fourth if the RX 570 is OK (I suspect not).

Running 24x7, it seems to me that at least one OPNG should have shown up. A thread started by a user gave statistics for a number of GPU boards.

https://www.worldcommunitygrid.org/forums/wcg/viewthread_thread,43317_offset,0#654788

I assume those stats were from the user's own contribution, as I cannot find any detailed statistics for "leaders".


I just put an {X} in the beta testing program. Maybe that was needed to get GPU tasks?

Can someone provide a link to an OPNG task so I can see what the results look like?

TIA


The GPU project, named OPNG, only just started yesterday, so you shouldn't have any OPNG tasks unless you crunched some beta tasks. You need to edit your preferences in the project: Settings -> Device Manager -> Device Profiles, click "default" (or whatever venue you're using), then click the bubble for "Custom Profile" and scroll down to "Graphics Card Usage", where you select "Do work on my graphics card while computer is in use? >> YES" and pick the appropriate GPU that your system has available. In my case I've only selected YES for NVIDIA.

WCG does not allow detailed stats to be viewable by anyone but yourself. Kind of silly, but there's no easy way to compare systems directly and see runtimes outside of a user reporting their performance on the forum. So unfortunately no one will be able to link you to a result; you won't be able to see it.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072778
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072780 - Posted: 7 Apr 2021, 18:57:20 UTC - in response to Message 2072773.  

Richard, are you able to get a look with GPU-Z on a more modern dedicated GPU? I'd like to see memory controller use on, say, a GTX 1660 or another recent Nvidia card.
Sorry, didn't see that till late. Sure, here are two comparisons.

1) An Einstein BRP4 task on the same HD 4600 iGPU I showed you before:

[GPU-Z screenshot]

Much smoother.

2) The same WCG task as before, this time run on an NV GTX 1650 SUPER:

[GPU-Z screenshot]

Not enough time resolution to see all the wriggles.


Thanks Richard, that will suffice. I just wanted to see how much of a role the memory controller played; judging by your screenshot, not a lot: 0-5% memory controller use. I have a feeling the application is constantly reading data from system memory or even from disk, so the GPU is constantly waiting around for something to do as data is shuffled back and forth. With such fast/short tasks, they could probably preload all the data into GPU memory, or at the very least find a more efficient way to move data around so it doesn't cause such a bottleneck.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072780
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2072786 - Posted: 7 Apr 2021, 20:21:49 UTC - in response to Message 2072768.  

As Ian mentioned, no links are available to any of your results, so here is the output from one of my tasks.

Result Name: OPNG_0000352_00314_0--


<core_client_version>7.17.0</core_client_version>
<![CDATA[
<stderr_txt>
../../projects/www.worldcommunitygrid.org/wcgrid_opng_autodockgpu_7.28_x86_64-pc-linux-gnu__opencl_nvidia_102 -jobs OPNG_0000352_00314.job -input OPNG_0000352_00314.zip -seed 1077476647 -wcgruns 1100 -wcgdpf 22
INFO: Using gpu device from app init data 1
INFO:[13:02:02] Start AutoGrid...

autogrid4: Successful Completion.
INFO:[13:02:06] End AutoGrid...
INFO:[13:02:06] Start AutoDock for ZINC000137553766-ACR2.1_RX1--fr2266benz_001--CYS114.dpf(Job #0)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:09] End AutoDock...
INFO:[13:02:09] Start AutoDock for ZINC000423900005-ACR2.18_RX1--fr2266benz_001--CYS114.dpf(Job #1)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:10] End AutoDock...
INFO:[13:02:10] Start AutoDock for ZINC000852865819-ACR2.1_RX1--fr2266benz_001--CYS114.dpf(Job #2)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:12] End AutoDock...
INFO:[13:02:12] Start AutoDock for ZINC000418467689-ACR2.8_RX1--fr2266benz_001--CYS114.dpf(Job #3)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:15] End AutoDock...
INFO:[13:02:15] Start AutoDock for ZINC000629599500-ACR2.14_RX1--fr2266benz_001--CYS114.dpf(Job #4)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:19] End AutoDock...
INFO:[13:02:19] Start AutoDock for ZINC000424440248-ACR2.6_RX1--fr2266benz_001--CYS114.dpf(Job #5)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:20] End AutoDock...
INFO:[13:02:20] Start AutoDock for ZINC000629532905-ACR2.9_RX1--fr2266benz_001--CYS114.dpf(Job #6)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:23] End AutoDock...
INFO:[13:02:23] Start AutoDock for ZINC000334856718-ACR2.18_RX1--fr2266benz_001--CYS114.dpf(Job #7)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:25] End AutoDock...
INFO:[13:02:25] Start AutoDock for ZINC000576003991-ACR2.13_RX1--fr2266benz_001--CYS114.dpf(Job #8)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:26] End AutoDock...
INFO:[13:02:26] Start AutoDock for ZINC000415228684-ACR2.6_RX1--fr2266benz_001--CYS114.dpf(Job #9)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:29] End AutoDock...
INFO:[13:02:29] Start AutoDock for ZINC000422627857-ACR2.21_RX1--fr2266benz_001--CYS114.dpf(Job #10)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:33] End AutoDock...
INFO:[13:02:33] Start AutoDock for ZINC000385310472-ACR2.20_RX1--fr2266benz_001--CYS114.dpf(Job #11)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:35] End AutoDock...
INFO:[13:02:35] Start AutoDock for ZINC000818006136-ACR2.12_RX1--fr2266benz_001--CYS114.dpf(Job #12)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:37] End AutoDock...
INFO:[13:02:37] Start AutoDock for ZINC000652137565-ACR2.16_RX1--fr2266benz_001--CYS114.dpf(Job #13)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:41] End AutoDock...
INFO:[13:02:41] Start AutoDock for ZINC000418125537-ACR2.4_RX1--fr2266benz_001--CYS114.dpf(Job #14)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:43] End AutoDock...
INFO:[13:02:43] Start AutoDock for ZINC000415127617-ACR2.1_RX1--fr2266benz_001--CYS114.dpf(Job #15)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:45] End AutoDock...
INFO:[13:02:45] Start AutoDock for ZINC000638610453-ACR2.1_RX1--fr2266benz_001--CYS114.dpf(Job #16)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:48] End AutoDock...
INFO:[13:02:48] Start AutoDock for ZINC000578121681-ACR2.6_RX1--fr2266benz_001--CYS114.dpf(Job #17)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:51] End AutoDock...
INFO:[13:02:51] Start AutoDock for ZINC000415689125-ACR2.12_RX1--fr2266benz_001--CYS114.dpf(Job #18)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:53] End AutoDock...
INFO:[13:02:53] Start AutoDock for ZINC000086333600_3-ACR2.4_RX1--fr2266benz_001--CYS114.dpf(Job #19)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:55] End AutoDock...
INFO:[13:02:55] Start AutoDock for ZINC000415233095-ACR2.14_RX1--fr2266benz_001--CYS114.dpf(Job #20)...
OpenCL device: GeForce RTX 2080
INFO:[13:02:58] End AutoDock...
INFO:[13:02:58] Start AutoDock for ZINC000424472924-ACR2.18_RX1--fr2266benz_001--CYS114.dpf(Job #21)...
OpenCL device: GeForce RTX 2080
INFO:[13:03:00] End AutoDock...
INFO:Cpu time = 57.173251
13:03:00 (2728359): called boinc_finish(0)

</stderr_txt>
]]>

Cpu time = 57.173251, so roughly a minute of crunching for a 22-job task (Job #0 through Job #21) on an RTX 2080.
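As a quick sanity check on the timestamps in the stderr log above (a sketch; the figures are read straight from that output): AutoGrid starts at 13:02:02 and boinc_finish comes at 13:03:00, giving 58 seconds of wall time across the 22 docking jobs.

```shell
# Wall time and mean per-job time from the stderr log above:
# AutoGrid starts at 13:02:02, boinc_finish at 13:03:00, 22 jobs (#0-#21).
START=$(( 13*3600 + 2*60 + 2 ))   # 13:02:02 in seconds since midnight
END=$(( 13*3600 + 3*60 + 0 ))     # 13:03:00 in seconds since midnight
WALL=$(( END - START ))           # elapsed wall time
JOBS=22
echo "wall=${WALL}s, about $(( WALL / JOBS ))s per docking job"
```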
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2072786
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2072787 - Posted: 7 Apr 2021, 20:32:20 UTC

I get the feeling the new app is configured to just make the GPU look like a very big CPU with many cores: tackle one job at a time sequentially, then move on to the next. I don't think there is really much parallelism actually going on.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2072787
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072796 - Posted: 7 Apr 2021, 21:16:53 UTC - in response to Message 2072787.  

It's definitely parallelized if it's maxing out the GPU at 100% at times, with thousands of GPU cores active. I just think they could make better use of it by not having so much up/down behavior. But it could be that each run to 100% corresponds to one job number listed in the task report. I'd have to count the spikes to know for sure, though, and that's not as easy to do on Linux.
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072796
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2072798 - Posted: 7 Apr 2021, 21:33:09 UTC - in response to Message 2072796.  

I guess my choice of wording didn't convey what I intended. What I meant was that each job is standalone; I don't think there is any preloading of pipelines for the next batch job.

The app needs a rethink of how it loads work into the GPU, along the lines of what Raistmer and Petri did: keep utilization constantly at a full 100% and reduce the number of fetches from main memory to cut the idle time.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2072798
Ian&Steve C.
Joined: 28 Sep 99
Posts: 4267
Credit: 1,282,604,591
RAC: 6,640
United States
Message 2072800 - Posted: 7 Apr 2021, 21:36:02 UTC - in response to Message 2072798.  

Yeah I agree with that
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours

ID: 2072800
Profile StFreddy
Joined: 4 Feb 01
Posts: 35
Credit: 14,080,356
RAC: 26
Hungary
Message 2072886 - Posted: 8 Apr 2021, 19:24:58 UTC

You can compare your WCG OPNG results with your wingman's: under your account, click Results Status, then click the name of the workunit in the Result Name column. You will see your wingman there. Click the Valid link in the Status column.
ID: 2072886
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 2072888 - Posted: 8 Apr 2021, 19:52:19 UTC - in response to Message 2072886.  

But that is all you can do, unfortunately. No conventional BOINC stats page, no server status page, no member host pages. Very limited, and an outlier compared to BOINC project norms.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 2072888
alanb1951 Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor

Joined: 25 May 99
Posts: 10
Credit: 6,904,127
RAC: 34
United Kingdom
Message 2072906 - Posted: 9 Apr 2021, 2:42:18 UTC - in response to Message 2072796.  

It's definitely parallelized if it's maxing out the GPU at 100% at times, with thousands of GPU cores active. I just think they could make better use of it by not having so much up/down behavior. But it could be that each run to 100% corresponds to one job number listed in the task report. I'd have to count the spikes to know for sure, though, and that's not as easy to do on Linux.

I can confirm that the activity burst seems to be a single job from within the work-unit.

During the Beta I was trying to work out why my 1050Ti jobs (on an i7-7700K) used CPU for 98% of the elapsed time whilst my 1660Ti jobs (on a Ryzen 3700X) only used CPU for about 60% of the elapsed time(!); as a side-effect of looking into that I was able to tie GPU usage to one-second time-slices (which was also the most accurate timing statistic I could easily get for the jobs, of course...), and it was quite obvious that there was "quiet time" on the GPU starting before one job finished and continuing until the next job had started.

The only assumption I made to get GPU usage was that nvidia-smi dmon streaming processors usage (sm %) was a reasonably accurate estimate of what had happened in the preceding 1 or 2 second interval. I do realize that the non-percentages are point-in-time status snapshots :-) but a percentage ought to be what it says it is, as should the data transfer numbers.

OPN tasks have a fairly small data footprint compared with some other GPU projects we might run(!) so there's not likely to be a lot of data movement; hence low PCIe numbers. The power draw can get to be higher than for either Einstein or Milkyway jobs...

By the way, I found that running two at a time on the Ryzen got more use out of the GPU, but there were still too many intervals with several seconds of GPU inactivity -- I put this down to whatever is causing the CPU-to-elapsed oddity in the first place, possibly a revised task scheduler (or I/O scheduler?) in the 5.4 kernel on the Ryzen as against the 4.15 kernel on the Intel. I see other oddities with BOINC-related stuff other than OPNG as well, including times when jobs finish but still haven't updated their status in client_state.xml several seconds later (my monitoring software gives up if it takes 5 seconds; it should not take anywhere near that long, I'd've thought...)

And I've tried various things to see the effects -- fewer concurrent CPU tasks, suspending tasks that are heavy on I/O, and so on -- and nothing seems to make much difference... Once we've got a reliable flow of production work I can get back to this, but I can't sit glued to a screen 24/7 waiting for work to turn up!

Hope the above is of interest - Al.

P.S. I have got some strace output from both machines to plough through at some point, but without access to the source code it's not immediately obvious why there are these delays.

P.P.S. if anyone knows how to get better granularity for GPU usage without writing one's own tools against the NVIDIA libraries, please tell!
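On the P.P.S.: one option for finer-than-one-second sampling without writing against the NVIDIA libraries is the query form of nvidia-smi, which (on reasonably recent drivers, an assumption here) accepts a -lms millisecond loop flag. A sketch that just assembles and prints the polling command, since actually running it needs an NVIDIA GPU:

```shell
# Assemble an nvidia-smi polling command that samples GPU and memory
# utilization every 200 ms with timestamps (finer than dmon's 1 s tick).
# Printed rather than executed here; paste the output on a GPU machine.
QUERY="timestamp,utilization.gpu,utilization.memory"
SAMPLE_MS=200
SMI_CMD="nvidia-smi --query-gpu=$QUERY --format=csv,noheader -lms $SAMPLE_MS"
echo "$SMI_CMD"
```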
ID: 2072906
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.