Message boards : Number crunching : Ryzen and Threadripper
jsm · Joined: 1 Oct 16 · Posts: 124 · Credit: 51,135,572 · RAC: 298

With SETI not having any tasks available, the output for nearly all threads is 1719, with an occasional jump of one or two threads to 1800 or 1900, which seems to be the default position.

jsm
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462

OK, 7-day report. At a 95% CPU setting in BOINC, my ratio has increased to 1.142, from 0.9 at the 45% CPU setting. Now set at 85% to see the change next week.

Yes, SETI is down "again". :( After the DB crash earlier, and what looked like an OK DB rebuild, suddenly the DB wasn't answering requests. That broke the website, since it is "data driven", as well as the uploading/downloading of tasks.

Tom
A proud member of the OFA (Old Farts Association).
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462

> With seti not having any tasks available the output for nearly all threads is 1719 with an occasional jump of one or two to 1800 or 1900 which seems to be the default position.

When the CPU isn't processing, that appears to be "normal". :)

Tom
A proud member of the OFA (Old Farts Association).
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

A question going out to all the other Threadripper users regarding CPU tasks and their run_times and cpu_times. How close are you able to get the two times to match up, and at what number of running CPU tasks were you able to achieve that? Also, if you ran fewer CPU tasks than the 100% maximum, what method did you use to limit the number of runnable CPU tasks: % CPU utilization or max_concurrent?

Ever since I had to drop max_concurrent because of a gpu exclusion and switch to the % of CPU utilization method, I have not been able to achieve equal run_times and cpu_times; I always get a discrepancy of 6-10 minutes. I get pretty close, or at least much closer, with the Ryzens. I am only running 12 CPU tasks out of the 24 possible. And I have given up trying to assign affinity like I used to do with my script; it just does not work when you are not able to use 100% of the CPU.

Until the problem with max_concurrent, gpu_exclusion, and work fetch gets worked out by the developers and makes it into master, I don't see what I can do to eliminate the problem without dropping GPUGrid so I don't have to run a gpu_exclude.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Bill G · Joined: 1 Jun 01 · Posts: 1282 · Credit: 187,688,550 · RAC: 182

Keith, I am running W10, so perhaps that makes a difference, but my times are very close.
https://setiathome.berkeley.edu/results.php?hostid=8366659&offset=0&show_names=0&state=4&appid=
Not sure what my settings are; I feel out of it right now and can hardly keep my eyes open. I have 3 cores reserved, but I am running 29 CPU tasks at the same time.

SETI@home classic workunits: 4,019 · SETI@home classic CPU time: 34,348 hours
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Thanks for the input, Bill. I took a stab at the issue by reducing my CPU count by two, and that seems to have greatly helped. It might just have been an issue of overloading the processor with too many concurrent tasks.

I still don't like the method I have to use for controlling how many tasks run. It doesn't let the TR's XFR2, Performance Enhancement and PBO settings work correctly, or allow the use of my affinity script. I think I am going to have to take another stab at compiling the beta BOINC branch that has the latest work-fetch commits in it, along with my #PR2918 bug fix.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462

> Keith, I am running W10 so perhaps that makes a difference but my times are very close.
> https://setiathome.berkeley.edu/results.php?hostid=8366659&offset=0&show_names=0&state=4&appid=

You have some lovely numbers, but he is soliciting Threadripper numbers. :)

Tom
A proud member of the OFA (Old Farts Association).
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

> You have some lovely numbers, but he is soliciting Threadripper numbers. :)

He is referring to his TR, Tom. Not bad for the stock CPU app under Windows 10.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

So knocking two CPU tasks off the concurrency improved the run_time vs. cpu_time delta from 8 minutes to 3 minutes. But theoretical back-of-the-envelope calculations show a reduction from 384 tasks per day to 380 tasks per day. I will have to look at a multi-day average of tasks per day and tasks per week with BoincTasks to see the real-world impact.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Kevin Olley · Joined: 3 Aug 99 · Posts: 906 · Credit: 261,085,289 · RAC: 572

> A question going out to all the other Threadripper users regarding cpu tasks and their run_times and cpu_times. How close are you able to get the two times to match up and at what level or number of cpu tasks running were you able to achieve that. Also, if you did run an amount of cpu tasks less than 100% maximum, what method did you use to control or limit the number of runnable cpu tasks. % cpu utilization OR max_concurrent?

About 5 seconds difference, running at 35% of processors in computing preferences.

Kevin
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873

Thanks, Kevin. That is essentially the same run_time and cpu_time. At least it appears to scale linearly with utilization.

Seti@Home classic workunits: 20,676 · CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462

> OK 7 day report. At 95% cpu setting in Boinc my ratio has increased from .9 with 45% cpu to 1.142. Now set at 85% to see change next week.

I am averaging a little over an hour on my CPU tasks with an AMD 2700 on Linux/CUDA91. I just looked at your CPU task list, and it looks like you are averaging above 1.5 hours. There may be room to squeeze that lower without dropping the thread count very much.

Tom
A proud member of the OFA (Old Farts Association).
jsm · Joined: 1 Oct 16 · Posts: 124 · Credit: 51,135,572 · RAC: 298

Weekly report. At 85%, the ratio dropped to just over 1 for the week, but is at 1.25 over the last couple of days. I trace this to the fact that SETI was down for a significant period during the week: the 2990 ran out of tasks to work very quickly, while the Ryzens soldiered on for quite some time. This is because the BOINC maximum tasks per computer appears to be the same regardless of the capability of the machines concerned. Study of the event log shows the 2990 constantly being refused task downloads because the maximum had been reached. This is no good if a machine cannot continue crunching when SETI goes down. Now down to 75%.
Tom M · Joined: 28 Nov 02 · Posts: 5124 · Credit: 276,046,078 · RAC: 462

> Weekly report. At 85% ratio dropped to just over 1 for the week but is at 1.25 over the last couple of days. I trace this to the fact that seti was down for a significant period during the week and the 2990 ran out of tasks to work very quickly while the ryzens soldiered on for quite some time. This is because the BOINC max tasks per computer appears to be the same regardless of the capability of the m/cs concerned. Study of the event log shows the 2990 constantly being refused tasks to download because max achieved. This is NBG if a m/c cannot continue crunching when seti goes down.

Does the graph available in BOINC Manager show your RAC is climbing again?

Tom
A proud member of the OFA (Old Farts Association).
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380

SETI@Home has NEVER promised a continuous supply of work, and has always suggested that one should have standby projects available.

One of the sad things about how credit is calculated (and thus RAC) is that it is highly dependent on the type of work available, coupled with a poorly executed "feed-forward" control loop that is all but impossible to fix for every combination of work type and processor.

SETI (not BOINC) has set the limits on tasks in progress for some years: 100 per CPU and 100 per GPU. There is also a hard limit of 1000 tasks in progress. The problem with the 100-per-CPU limit has only come to light with the increasing use of high-core-count devices (they've dropped in price and become "affordable"); perhaps it is time to consider something like "100 tasks per 8 cores".

If one sets a mid-to-high "Store at least" figure (3-6 days) and a very low "Store up to an additional" figure (0.01 or less), then BOINC will keep the cache continuously topped up. But if one sets a high "Store up to an additional" value, it becomes hit and miss whether the cache happens to be filled just before an outage (scheduled or otherwise).

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640

> There is a hard limit of 1000 tasks in progress.

That is a client-side limit, not a server-side one like the other limits. It is easily bypassed by editing the BOINC code and recompiling your own client.

Seti@Home classic workunits: 29,492 · CPU time: 134,419 hours
jsm · Joined: 1 Oct 16 · Posts: 124 · Credit: 51,135,572 · RAC: 298

Perhaps I am being naive, but it seems to me that the objective should be to keep all participants crunching during downtime. Thus you need to establish, historically, what the longest period of downtime is. Then, at each benchmark run, establish the average number of tasks this machine would process over that period. Then, when the client asks for more work, enough tasks should be downloaded to maintain that benchmark. It then doesn't matter how many cores you have or how fast the CPUs are; the objective would be attained. Is that too easy - or too difficult? ):

jsm
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13736 · Credit: 208,696,464 · RAC: 304

> Is that too easy - or too difficult? ):

Too difficult, as the limits are in place so the servers don't crash. Increasing the limits would result in more and longer server downtime, resulting in people wanting higher limits, resulting in more and longer downtime. Rinse and repeat. If the servers were more stable under heavy load, people wouldn't feel the need to have such large caches, which would reduce the load on the servers and help their stability. Approximately $750,000+ would resolve the servers' current limitations and allow more work to be out in the field at any given time.

Grant
Darwin NT
jsm · Joined: 1 Oct 16 · Posts: 124 · Credit: 51,135,572 · RAC: 298

That's a pity, because every time SETI goes down my 2990, with its 64 available threads, runs out of work in a couple of hours and then goes on holiday. A bit of a waste, because it is solely dedicated to SETI.

jsm
rob smith · Joined: 7 Mar 03 · Posts: 22200 · Credit: 416,307,556 · RAC: 380

Read and understand this: SETI@Home has NEVER promised a continuous supply of work, and has always suggested that one should have standby projects available.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.