GUPPI Rescheduler for Linux and Windows - Move GUPPI work to CPU and non-GUPPI to GPU


Profile Zalster Special Project $250 donor
Volunteer tester
Joined: 27 May 99
Posts: 5517
Credit: 528,817,460
RAC: 242
United States
Message 1808828 - Posted: 13 Aug 2016, 6:31:06 UTC - in response to Message 1808816.  

Stephen, I can't find that message 1804557 and thus can't read Zalster's response. Can you provide the link please.


http://setiathome.berkeley.edu/forum_thread.php?id=79954&postid=1804557

Here's the link, though I don't remember being part of this discussion, lol.....
ID: 1808828
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1808830 - Posted: 13 Aug 2016, 6:55:58 UTC - in response to Message 1808799.  


Hi Keith,

. . This error has been reported before by Zalster ...

Message 1804557

. . And Mr Kevvy made a cogent response. Were there any nonVLAR tasks in your CPU queue on that machine at the time? If the app does not find any nonVLAR tasks in the CPU cache then it will terminate without taking any action.


Yes, there were nonVLAR tasks assigned to the CPU on board at the time.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1808830
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1808831 - Posted: 13 Aug 2016, 7:06:07 UTC

I looked through my tasks and saw I had several nonVLARs on board from yesterday assigned to the CPU. For some reason the script and app didn't find them this afternoon when I ran the script at my normal shutdown time. I just ran the script again, and this time it found the nonVLARs and rescheduled them to the GPU. In fact, they're running now. So it must have been a fluke or something. I ran the script several times this afternoon with the same result of not finding any CPU tasks. Don't know why it didn't work earlier.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1808831
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13855
Credit: 208,696,464
RAC: 304
Australia
Message 1808844 - Posted: 13 Aug 2016, 9:04:25 UTC - in response to Message 1808818.  

Error: could not determine CPU version_num from client_state.

This can happen if there aren't enough of a certain work unit type in your queue for it to make a determination. Give it a few hours until the queue is different and it should clear up; if not, please advise.

Grant
Darwin NT
ID: 1808844
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1808905 - Posted: 13 Aug 2016, 16:50:29 UTC - in response to Message 1808816.  

Stephen, I can't find that message 1804557 and thus can't read Zalster's response. Can you provide the link please.



. . I don't know how to copy it as an active link, but if you scroll back through this message thread it is there; I just checked.
ID: 1808905
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1808906 - Posted: 13 Aug 2016, 16:54:08 UTC - in response to Message 1808828.  

Stephen, I can't find that message 1804557 and thus can't read Zalster's response. Can you provide the link please.


http://setiathome.berkeley.edu/forum_thread.php?id=79954&postid=1804557

Here's the link, though I don't remember being part of this discussion, lol.....


Sorry mate, it was Rasputin. I have had a cold and been very tired of late so that is my excuse :(
ID: 1808906
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1808907 - Posted: 13 Aug 2016, 16:55:28 UTC - in response to Message 1808830.  


Hi Keith,

. . This error has been reported before by Zalster ...

Message 1804557

. . And Mr Kevvy made a cogent response. Were there any nonVLAR tasks in your CPU queue on that machine at the time? If the app does not find any nonVLAR tasks in the CPU cache then it will terminate without taking any action.


Yes, there were nonVLAR tasks assigned to the CPU on board at the time.


. . Well there goes one perfectly good theory :)
ID: 1808907
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1808913 - Posted: 13 Aug 2016, 17:39:12 UTC - in response to Message 1808905.  

When you make a reply you have to use the BBCode tag tools that are right above the edit window. There are examples of how to insert media content in the link to the left of the tools list, where it says "Use BBCode tags to format your text".

I searched for Message 1804557 several times, and for some reason the site search function would only list the text in your message, not the actual original Message 1804557. Not sure why it didn't work .... it always has in the past.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1808913
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1808914 - Posted: 13 Aug 2016, 17:42:11 UTC - in response to Message 1808907.  

Grant has posted the text from Message 1804557 several times: in some cases, with very few nonVLAR tasks on board, the app and script may not be able to identify the task app. I think I might have figured out that in my case the threshold seems to be anything under two tasks.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1808914
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1808988 - Posted: 13 Aug 2016, 23:32:41 UTC - in response to Message 1808913.  
Last modified: 13 Aug 2016, 23:35:11 UTC

When you make a reply you have to use the BBCode tag tools that are right above the edit window. There are examples of how to insert media content in the link to the left of the tools list, where it says "Use BBCode tags to format your text".

I searched for Message 1804557 several times, and for some reason the site search function would only list the text in your message, not the actual original Message 1804557. Not sure why it didn't work .... it always has in the past.


http://setiathome.berkeley.edu/forum_thread.php?id=79954&postid=1804557#1804557

OK, I tried posting it as a URL to the message. It looks different from how it looks when others link to previous messages, but at least it works this time.
ID: 1808988
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1809006 - Posted: 14 Aug 2016, 3:02:06 UTC - in response to Message 1808988.  

The link URL always uses the full link syntax; however, you can make the link descriptor say anything you want. I'll post the code of my previous message as an example.


[url=http://setiathome.berkeley.edu/bbcode.php]"Use BBCode tags to format your text"[/url]



It just takes some familiarity before you get the hang of it. The link for the BBCode tags has examples of the correct syntax to use for each type of media content.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1809006
Stephen "Heretic" Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 20 Sep 12
Posts: 5557
Credit: 192,787,363
RAC: 628
Australia
Message 1809017 - Posted: 14 Aug 2016, 4:24:13 UTC - in response to Message 1809006.  

. . Hi Keith

. . Thanks for that, it clears that up for me.
ID: 1809017
Al Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1809177 - Posted: 14 Aug 2016, 23:28:57 UTC

Gents, thanks for your work on these 2 programs/scripts. I finally tried it on my LotsaCores machine; it did move a few things around, but I guess it wasn't earthshaking, I think possibly 7-10 tasks? I do have a question though: as it is a manually driven event, how often does it need to be run? I can't imagine hourly, but 12 hours? Daily? Anything I should know specifically regarding machines that process 40-55 tasks at a time vs more 'normal' ones that do 6-12? More frequently? Doesn't make a difference?

Also, is there a way to automate this task so I don't have to remember to run it regularly? I thought that Windows has some type of built-in scheduler, but I haven't looked at it in many years, so if someone could explain a quick and dirty way to get it set up (Win 7), that would be great, assuming that it is OK to run unattended? Does it create a log file entry every time it runs, so that the results/info can be reviewed at a later time? Thanks!

ID: 1809177
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1809189 - Posted: 15 Aug 2016, 1:29:57 UTC - in response to Message 1809177.  
Last modified: 15 Aug 2016, 1:39:06 UTC

Hey Al,
Thanks for giving the DeviceQueueOptimization apps a try (as I now call them, to be more accurate).
Gents, thanks for your work on these 2 programs/scripts. I finally tried it on my LotsaCores machine; it did move a few things around, but I guess it wasn't earthshaking, I think possibly 7-10 tasks? I do have a question though: as it is a manually driven event, how often does it need to be run? I can't imagine hourly, but 12 hours? Daily? Anything I should know specifically regarding machines that process 40-55 tasks at a time vs more 'normal' ones that do 6-12? More frequently? Doesn't make a difference?

Short but complex answer:
It needs to be run every time the tasks in one of the queues (CPU or GPU) have been processed/replaced/turned-over (those that were in the queues since the last optimization).

Scenarios: (with Cuda50 as the GPU app since it doesn't take away as many CPUcores best served for guppis)

1. GPU queue processed faster than CPU queue:
on rigs with up to 6 CPUcores processing guppis and with only high-end GPUs, the GPU queue is likely to get processed faster than the CPU queue of 100.
So run the script combo (my front-end calls Mr K's) in about the time it takes the GPUs to process 100*#ofGPUs tasks.

2. CPU queue processed faster than GPU queue:
on your LotzaCores, I'm guessing 100 CPU tasks gets processed faster than 100 tasks on your average GPU (not the fastest one, since with 4 GPUs you get 400 tasks on GPU queue). So run the script combo in about the time it takes the CPU cores to process 100 tasks.

3. unknown if CPU queue is processed faster than GPU queue:
calculations need to be done.

Also, is there a way to automate this task so I don't have to remember to run it regularly? I thought that Windows has some type of built-in scheduler, but I haven't looked at it in many years, so if someone could explain a quick and dirty way to get it set up (Win 7), that would be great, assuming that it is OK to run unattended?
I think Keith and a few others have done this, and I don't think it is complicated.
I haven't, since I'm still testing different obscure scenarios, as I don't want to be insulted with "nefarious", "a hoarder", or "very stupid" again.

Does it create a log file entry every time it runs, so that the results/info can be reviewed at a later time? Thanks!
No, but I could make a new version in 10 days to log almost everything that appears in the .cmd window... but I might need help appending the date & time to the logfile name so that it doesn't overwrite the previous one (since my previous batch file experience dates back to the last millennium).
I always wanted to write that one day. Now I have! lol
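As a starting point for the date & time question, here is a minimal .cmd sketch (the front-end filename is hypothetical, and the %DATE%/%TIME% formats are locale-dependent, so the character substitutions may need adjusting for your system):

```bat
@echo off
rem Build a filename-safe timestamp from the locale-dependent %DATE% and %TIME%
rem (replaces /, :, . and spaces, which are not valid in filenames)
set "stamp=%DATE:/=-%_%TIME::=-%"
set "stamp=%stamp:.=-%"
set "stamp=%stamp: =0%"

rem Run the rescheduler front-end (hypothetical name) and capture its output
rem in a log named after the run time, so earlier logs are never overwritten
call rescheduler_frontend.cmd > "reschedule_%stamp%.log" 2>&1
```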

Once the 2nd Tuesdays during WoW has passed (>Tues, Aug 23rd), I'll be happy to consider any fairly-simple script modification requests.
(assuming that the "August 2016" saga is over by then)

Since no one has taken up my offer to take over my front-end, I'd now like to work with someone who has PowerShell knowledge and experience for the complex commands that can't be kept simple with normal .cmd commands: http://ss64.com/nt/
My goal is to keep it simple (ie: try to stay away from the registry) so that anyone who's played with batch files in the past can "read" my front-end.
One simple example: there is no Wait command, so I had to use
"PING -n 5 127.0.0.1>nul" for a 5-second delay.
I'd like that to be done in PowerShell.
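For what it's worth, two standard alternatives to the PING trick exist on Windows; both can be called straight from a .cmd file (sketch only):

```bat
rem Built-in delay (Vista and later): wait 5 seconds, ignoring keypresses
timeout /t 5 /nobreak >nul

rem Or hand the delay off to PowerShell
powershell -NoProfile -Command "Start-Sleep -Seconds 5"
```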

Cheers,
RobG
ID: 1809189
kittyman Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Volunteer tester
Joined: 9 Jul 00
Posts: 51478
Credit: 1,018,363,574
RAC: 1,004
United States
Message 1809192 - Posted: 15 Aug 2016, 1:37:15 UTC

Uhh....
I hope that you are able to resolve your August 4th 'misunderstanding'.
I have been on both sides, as both protagonist and recipient over the years, and I never wished these things to last very long.

Hope your problem is short-lived as well.
This is a community project, and there is no need for us to be at each other's throats, even if we disagree at times.

There is room for disagreement here.
Just hope you can talk it out in good time.

Meow.
"Time is simply the mechanism that keeps everything from happening all at once."

ID: 1809192
Al Crowdfunding Project Donor*Special Project $75 donorSpecial Project $250 donor
Joined: 3 Apr 99
Posts: 1682
Credit: 477,343,364
RAC: 482
United States
Message 1809193 - Posted: 15 Aug 2016, 1:55:50 UTC - in response to Message 1809189.  
Last modified: 15 Aug 2016, 2:05:54 UTC

Short but complex answer:
It needs to be run every time the tasks in one of the queues (CPU or GPU) have been processed/replaced/turned-over (those that were in the queues since the last optimization).

Scenarios: (with Cuda50 as the GPU app since it doesn't take away as many CPUcores best served for guppis)

1. GPU queue processed faster than CPU queue:
on rigs with up to 6 CPUcores processing guppis and with only high-end GPUs, the GPU queue is likely to get processed faster than the CPU queue of 100.
So run the script combo (my front-end calls Mr K's) in about the time it takes the GPUs to process 100*#ofGPUs tasks.

Good to know for the other machines


2. CPU queue processed faster than GPU queue:
on your LotzaCores, I'm guessing 100 CPU tasks gets processed faster than 100 tasks on your average GPU (not the fastest one, since with 4 GPUs you get 400 tasks on GPU queue). So run the script combo in about the time it takes the CPU cores to process 100 tasks.

Pretty sure this is where my bigger machines fall, as the entire cache of CPU tasks turns over in about 4 hours; I am dry before the end of Tuesday maintenance, so it would appear on these machines that 3-4 hours would be appropriate. I know my cards, even running a 1080 SC and 2 980Ti's, don't process tasks nearly at that rate, just due to the overwhelming core vs. GPU concurrent # of task processing advantage the CPUs have. Is it possible to run it too frequently? Will it break/mess up anything, or just run and do nothing?


3. unknown if CPU queue is processed faster than GPU queue:
calculations need to be done.

Also, is there a way to automate this task so I don't have to remember to run it regularly? I thought that Windows has some type of built-in scheduler, but I haven't looked at it in many years, so if someone could explain a quick and dirty way to get it set up (Win 7), that would be great, assuming that it is OK to run unattended?
I think Keith and a few others have done this, and I don't think it is complicated.
I haven't, since I'm still testing different obscure scenarios, as I don't want to be insulted with "nefarious", "a hoarder", or "very stupid" again.

Does it create a log file entry every time it runs, so that the results/info can be reviewed at a later time? Thanks!
No, but I could make a new version in 10 days to log almost everything that appears in the .cmd window... but I might need help appending the date & time to the logfile name so that it doesn't overwrite the previous one (since my previous batch file experience dates back to the last millennium).
I always wanted to write that one day. Now I have! lol

Both of those would be great if you had the time, esp the logfile. Others may chime in on the auto task setup before you get to it, which would be one less thing on your plate.

Once the 2nd Tuesdays during WoW has passed (>Tues, Aug 23rd), I'll be happy to consider any fairly-simple script modification requests.
(assuming that the "August 2016" saga is over by then)

Since no one has taken up my offer to take over my front-end, I'd now like to work with someone who has PowerShell knowledge and experience for the complex commands that can't be kept simple with normal .cmd commands: http://ss64.com/nt/

Cheers,
RobG

Thanks to both of you for taking the time to put these together; my goal is to always use the proper tool for the job, and until Jason has worked his magic on the guts of BOINC and that gets built into the system by default, tools like this just make everything work more efficiently. If you're going to spend hard-earned money on the hobby (which pretty much everyone here happily does) and work to make it run the best it possibly can, why not use every tool at your disposal, assuming of course that it doesn't adversely affect the project as a whole. Which I can't think of a way that this could; it just puts the tasks to the hardware that can most efficiently process them.

*edit* Just ran it again before I head upstairs (first time was ~3 hours ago); it moved 24 tasks each way, which is about 1/2 of the currently running tasks. Wonder if it needs to be run more frequently? I have these tasks now waiting to run, after running the script again. Does this look normal?



ID: 1809193
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1809197 - Posted: 15 Aug 2016, 2:28:59 UTC

I have been running the script manually so far, when I shut down the crunchers for peak power rate intervals. I was doing that only once a day, but yesterday I ran it about 12 hours apart. In my situation, with only 3 cores doing CPU work at a time, that is too soon, as there isn't enough new Arecibo work brought on board. It probably would have been more needed back when there weren't Guppis and VLARs predominantly in the workload. I will be running 24/7 during the WoW contest's two weeks. Even then I believe only once a day will be needed. Unless we get a cr*pload of new Arecibo tapes and the splitters start spitting them out at more than the dribbles we now get.

I think that will get me off my duff and automate the process. I will see if I can just use Windows Task Scheduler to make a simple task to run the script front end once a day. I will probably do it after 2PM PDT, to allow for the project's Outrage on Tuesdays to come back up, and to make it more efficient in finding tasks to move, since that is when the largest slug of new work shows up.
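A Task Scheduler job like that can also be registered from an elevated command prompt with schtasks; a sketch, with a hypothetical task name and script path:

```bat
rem Run the rescheduler front-end daily at 2:05 PM local time
schtasks /create /tn "SETI Rescheduler" /tr "C:\BOINC\rescheduler_frontend.cmd" /sc daily /st 14:05
```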
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1809197
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1809200 - Posted: 15 Aug 2016, 3:04:32 UTC - in response to Message 1809197.  
Last modified: 15 Aug 2016, 3:04:57 UTC

Unless we get a cr*pload of new Arecibo tapes and the splitters start spitting them out more than the dribbles we now get.

Expect the unexpected since Zalster posted on http://www.seti-germany.de/Wow/ the following:
We won't run out of work. 331 channels from Arecibo and over 13K channels from GreenBank, lol
(see Wow!-Communication: Aug. 14, 01:19)

I think that will get me off my duff and automate the process. I will see if I can just use Windows Task Scheduler to make a simple task to run the script front end once a day. Probably will do it after 2PM PDT to allow for the project's Outrage on Tuesday's to come back up and make it more efficient in finding tasks to move since that is when the largest slug of new work shows up.

If you have very little left in either CPU or GPU queues, you might want to do 2 scheduled tasks:
- one daily (slightly "after 2PM PDT" as you suggested);
- one for Tuesdays: set to an hour (or two) after the daily one.
The reason for the latter is that if either queue was below 50% of its max, the daily Scheduled Task would likely only move the first dozen or so newly downloaded tasks
...and by the time the script is run again the next day, some Guppis on GPU or nonVLARs on CPU could have been processed.
Does that make any sense? ...or am I seeing a scenario that is very unlikely to happen? lol
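The extra Tuesday pass could be registered the same way as a daily run at ~14:05, again with a hypothetical task name and script path:

```bat
rem Second pass on Tuesdays, an hour after a daily ~14:05 run,
rem to catch work downloaded after the weekly outage
schtasks /create /tn "SETI Rescheduler Tuesday" /tr "C:\BOINC\rescheduler_frontend.cmd" /sc weekly /d TUE /st 15:05
```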

Cheers,
RobG :-)

PS: and now for a draft reply to Al's post
ID: 1809200
Profile Keith Myers Special Project $250 donor
Volunteer tester
Joined: 29 Apr 01
Posts: 13164
Credit: 1,160,866,277
RAC: 1,873
United States
Message 1809202 - Posted: 15 Aug 2016, 3:18:39 UTC - in response to Message 1809200.  

No, I think you've summed it up quite well. I agree with your analysis. Also, I wonder (hoping, actually) whether there might have been changes made to the scheduler. I am seeing almost all of the Arecibo nonVLAR work get assigned to the GPUs in the past couple of days. I haven't dropped in on the developer forum lately and don't know if any changes have been implemented in the project code. Richard Haselgrove is always on top of this. Maybe he will chime in on this thread, or I should contact him directly and see if anything has been afoot by D.A. Or it could be simple coincidence.
Seti@Home classic workunits:20,676 CPU time:74,226 hours

A proud member of the OFA (Old Farts Association)
ID: 1809202
Profile Stubbles
Volunteer tester
Joined: 29 Nov 99
Posts: 358
Credit: 5,909,255
RAC: 0
Canada
Message 1809207 - Posted: 15 Aug 2016, 3:58:01 UTC - in response to Message 1809193.  
Last modified: 15 Aug 2016, 4:01:07 UTC

Pretty sure this is where my bigger machines fall, as the entire cache of CPU tasks turn over in about 4 hours, I am dry before the end of Tuesday Maint. so it would appear on these machines, that 3-4 hours would be appropriate. I know my cards, even running a 1080 SC and 2 980Ti's, don't process tasks nearly at that rate, just due to the overwhelming core vs. GPU concurrent # of task processing advantage the CPUs have. Is it possible to run it too frequently, will it break/mess up anything, or just run and do nothing?

Yes... although it might be even better after 2 to 3 hrs, since your LotzaCores could be running the equivalent of almost 50% of the CPU queue.
If you have many CPUcores supporting the GPUs, then 3-4 hrs could be OK.
Both of those would be great if you had the time, esp the logfile. Others may chime in on the auto task setup before you get to it, which would be one less thing on your plate.

Thanks Keith for trying to figure out the Task Scheduling.
After reading Zalster's post on SG WoW, I'm wondering if the project staff is planning to increase the ratio of Guppi:nonVLAR after WoW.
Currently, I am still getting many more nonVLARs than Guppis through either queue (although not at the same ratio, and no one has been able to explain that yet), even though a few have mentioned here-&-there that guppis are in greater #s than nonVLARs out-in-the-wild.
Once it gets to 2 guppis for every nonVLAR (2:1), the apparent benefit of DeviceQueueOptimisation (DQO) will likely drop below 10%, at which point NV_SoG will likely be shown to be better overall than Cuda50 (since GPUs will need to process more than 1 guppi here-&-there).
...and until Jason has worked his magic on the guts of BOINC and that gets built into the system by default.
Could you expand on this Al? I thought Jason was working on Cuda...or are you referring to another Jason?

If you're going to spend hard earned money on the hobby (which pretty much everyone here happily does) and work to make it run the best it possibly can, why not use every tool at your disposal,

I had already asked Shaggie if he could do a script to find out the contribution to the daily project output (of ~150G cr/day) for:
- the top 10,000 PCs; and
I'll now add:
- the "Anonymous Platform"s within the top 10,000 PCs.
...but he was too busy back then with his incredible GPU comparison charts.
Maybe someone else might want to tap his shoulder to see if he has more time and interest now.

assuming of course that it doesn't adversely effect the project as a whole. Which I can't think of a way that this could, it just puts the tasks to the hardware that can most efficiently process them.
There's currently a minor issue (that could be more than minor) with any DQO:
- when the BOINC Client sends tasks back to the server, the server seems to ignore that the task was run on a different device. The only issue relating to that (that I'm aware of) is:
- Shaggie can't trust any "Anonymous Platform" task times without making his scripts MUCH more complex.

*edit* Just ran it again before I head upstairs (first time was ~3 hours ago); it moved 24 tasks each way, which is about 1/2 of the currently running tasks. Wonder if it needs to be run more frequently?
Here's how to tell the DQO script wasn't run early enough:
- a Guppi starts processing on the GPU, or a nonVLAR on the CPU (when the DQO could have transferred the task to the other queue).

I have these tasks now waiting to run, after running the script again. Does this look normal?
Yes, since the CPU cores are only processing VLARs (guppi & Arecibo)

I'm pleased to see you've recovered from your glue-sniffing-PVC party; your Qs are all spot on!!! :-D
I just wish someone had asked them at the end of July. But back then the ambiance was more tense towards DQOs!
How does it feel to now be considered a user of "nefarious" means?!? ;-} lol

Keep the Qs and comments coming if you have any... as I've been working on this on & off for over 1.5 months,
RobG :-D
ID: 1809207


 
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.