Message boards :
Number crunching :
The Server Issues / Outages Thread - Panic Mode On! (118)
Author | Message |
---|---|
Ville Saari Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
My Ryzen 7 3700X crunches 150 tasks in about 12 hours, so it was just barely able to coast over the last downtime without idling. It has 16 threads but I run only 8 parallel CPU tasks on it. That's the number of true cores it has, and going over that number would give very little benefit, as the two tasks running in the threads of the same core would be fighting for the same FPU. |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13903 Credit: 208,696,464 RAC: 304 |
> I had dual CPU computer on here at one time and I'm pretty sure the cache was per CPU, that's why I'm surprised the cache is not per core or thread.

The allocation of 150 (or whatever) WUs per GPU was a glitch in the system. It was meant to be the same as for CPUs: xxx number of WUs allocated to that computing resource (CPU or GPU), whether there was only one or 500. An octal-socket CPU system will still only get 150 WUs.
Grant
Darwin NT |
Ville Saari Joined: 30 Nov 00 Posts: 1158 Credit: 49,177,052 RAC: 82,530 |
> I had dual CPU computer on here at one time and I'm pretty sure the cache was per CPU, that's why I'm surprised the cache is not per core or thread.

In 2006 CPUs were orders of magnitude slower than these days, so few people ever hit the task number limit before hitting their configured time limit. If there even was task number limiting back then. |
Joined: 1 Apr 13 Posts: 1858 Credit: 268,616,081 RAC: 1,349 |
> I had dual CPU computer on here at one time and I'm pretty sure the cache was per CPU, that's why I'm surprised the cache is not per core or thread.

If so, I haven't seen a way to configure it, and don't think there is one. This machine has dual hexa-core Xeons with hyper-threading: 12 physical cores in two chips, for a total of 24 logical threads. I have it set for 14 CPUs in cc_config.xml, as that makes the most sense for my config. Regardless of that setting, I'm limited to 150 (was 100 until recently) tasks at a time. Changing the CPU count affects work unit assignment, including the number of tasks in progress at once, but not the maximum tasks in queue. Wish it wasn't so ... |
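For reference, the CPU-count override mentioned above lives in the BOINC client's cc_config.xml. A minimal sketch, assuming the 14-CPU setting from the post (`<ncpus>` is the documented BOINC option; a value of -1 means "use the actual number of CPUs"):

```xml
<cc_config>
  <options>
    <!-- Report 14 logical CPUs to the scheduler instead of the
         machine's actual 24 threads; -1 would mean "use all". -->
    <ncpus>14</ncpus>
  </options>
</cc_config>
```

As the post observes, this changes how many tasks run at once, but the 150-task cap is applied server-side per computing resource, so the queue limit stays the same.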
W-K 666 Joined: 18 May 99 Posts: 19534 Credit: 40,757,560 RAC: 67 |
> I had dual CPU computer on here at one time and I'm pretty sure the cache was per CPU, that's why I'm surprised the cache is not per core or thread.

> In 2006 CPUs were orders of magnitude slower than these days, so few people ever hit the task number limit before hitting their configured time limit. If there even was task number limiting back then.

That has to be so, but since then the sensitivity of the MB app has been doubled twice, and it follows the inverse-square law, so each doubling quadruples the processing. And the tasks have been doubled in length. So the time to crunch hasn't gone down as much as you might think. A modern-day i5 is only about twice as fast as my mid-range Pentium M, with similar clock speeds, which I got back in 2008. The real increase in computing power has been in the use of GPUs. |
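The arithmetic behind that claim can be sketched quickly. This is a rough model of the post's reasoning, not measured figures: two sensitivity doublings, each quadrupling the work via the inverse-square law, plus a doubling of task length:

```python
# Rough model of the per-task processing growth described above.
sensitivity_doublings = 2
work_per_doubling = 2 ** 2      # 2x sensitivity -> 4x work (inverse-square law)
task_length_factor = 2          # tasks doubled in length

total_factor = work_per_doubling ** sensitivity_doublings * task_length_factor
print(total_factor)  # 32: each task takes ~32x the work of the original app's
```

So even a CPU many times faster than a 2006-era chip can end up with similar wall-clock times per task.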
Joined: 29 Apr 01 Posts: 13164 Credit: 1,160,866,277 RAC: 1,873 |
> It has 16 threads but I run only 8 parallel cpu tasks on it. That's the number of true cores it has and going over that number would give very little benefit as the two tasks running in the threads of same core would be fighting for the same fpu.

That's a misconception about Ryzen 3000 carried over from the FX CPU days. There is no penalty for having two threads performing two FP operations. It can even do an FP operation at the same time as an integer operation. https://en.wikichip.org/wiki/amd/microarchitectures/zen_2#Floating_Point_Unit Much more sensible dispatching of operations. Plus the FP registers are 256 bits wide now. Four ALUs, two AGUs/load–store units, and two floating-point units per core.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association) |
Joined: 28 Nov 02 Posts: 5126 Credit: 276,046,078 RAC: 462 |
Actually, I wonder about the non-profit and similar issues with the idea. I am presuming the technical mechanics of the idea are possible. And heck, even non-profits are allowed to charge fees as long as they don't make money at the end of the year.
Tom
A proud member of the OFA (Old Farts Association). |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> I can only conclude the system is illogical.

. . The special sauce is only on Nvidia GPUs, it does not apply to CPUs.

Stephen :( |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> I had dual CPU computer on here at one time and I'm pretty sure the cache was per CPU, that's why I'm surprised the cache is not per core or thread.

> In 2006 CPUs were orders of magnitude slower than these days, so few people ever hit the task number limit before hitting their configured time limit. If there even was task number limiting back then.

. . The doubling in sample size was the change from 2 bits to 4 bits per sample in the data. It improves the resolution, but testing showed it made very little difference to run times. Barely even noticeable.
. . My i5-6600 will crunch a task in about 1/8 the time it took on my old Pentium 4, but much of that improvement was the advent of AVX, and that is per core, of which the Pentium 4 had only one, not 4 like the i5. So the output of the i5 is many times greater: the Pentium 4 could only do a few tasks per day (3 to 5) compared to over 100.

Stephen :) |
AllgoodGuy Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
When I said it didn't take long, all I meant was it didn't affect my main machine nearly as much as the last couple of outages, so I'll take it, and be happy with it. https://imgur.com/aWojoXt |
AllgoodGuy Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
> Actually I wonder about the non-profit and similar issues with the idea? I am presuming the technical mechanics of the idea are possible. And heck, even non-profits are allowed to charge fees as long as they don't make money at the end of the year.

Actually TomM, the mechanics would be nearly impossible without a very large group of dedicated volunteer programmers. Something like that would require a top-to-bottom overhaul of the BOINC client, as well as changes at SETI itself. Now, I'm not suggesting that BOINC doesn't need a massive overhaul, because there are several places it could be made much better: prioritization of tasks; possible communications between hosts for coordination of tasks; getting rid of the RPC component; adding SSH as the communication between client and manager; multiple setup options so that LAN and datacenter operations can customize the apps for things like central storage; local task handling by a network manager as opposed to a host manager; and a backend at the projects able to help set priorities like Science Priority, Science Necessity, Time Necessity, and Storage Necessity, with the manager able to set local priorities, which would only be additive to the backend priorities. I mean, there are a lot of things which could be done to improve both BOINC and SETI. The question really is: does the manpower exist? I've seen this community fund things in record time, such as that recent hard drive purchase. I don't think money is as big of an issue as it appears to be on paper.
Guy
Edit: One thing I would definitely love to see added is a heartbeat message from clients to servers, which could be a signal to preemptively reassign work units to active machines and cut tasks from machines which have gone dark. That helps both the number-crunching community and the storage at the backend. |
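The heartbeat idea above could look something like the pair of helpers below. This is purely a hypothetical sketch: no such message exists in the current BOINC protocol, and the payload fields, the 600-second interval, and the "three missed beats" threshold are all invented here for illustration.

```python
import json
import time

HEARTBEAT_INTERVAL = 600  # seconds between beats; an assumed value


def make_heartbeat(host_id, tasks_in_progress):
    """Client side: build the JSON payload a host would send each interval."""
    return json.dumps({
        "host_id": host_id,
        "timestamp": int(time.time()),
        "tasks_in_progress": tasks_in_progress,
    })


def is_dark(last_seen, now, missed_beats=3):
    """Server side: treat a host as 'dark' after N missed heartbeats,
    at which point its in-progress work units could be reassigned."""
    return now - last_seen > missed_beats * HEARTBEAT_INTERVAL
```

The point of the threshold is exactly what the post describes: instead of waiting weeks for a deadline to pass, the scheduler could cut tasks loose from silent hosts and hand them to active ones, shrinking the result storage held open at the backend.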
W-K 666 Joined: 18 May 99 Posts: 19534 Credit: 40,757,560 RAC: 67 |
Way, way long before that. https://setiathome.berkeley.edu/forum_thread.php?id=18850&postid=154163#154163
I didn't even consider that event in my summary of changes. |
Joined: 15 May 99 Posts: 3832 Credit: 1,114,826,392 RAC: 3,319 |
Thread is getting off-topic... For proposed improvements, I suggest another thread is used, such as this one. Don't want to have to go moving dozens of posts. Thanks! |
TBar Joined: 22 May 99 Posts: 5204 Credit: 840,779,836 RAC: 2,768 |
It's getting late, and the splitters have dropped off to the point there is no RTS. This doesn't look good:

Results ready to send = 3
Current result creation rate ** = 4.0569/sec
Results received in last hour ** = 150,781 |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> Way, way long before that. https://setiathome.berkeley.edu/forum_thread.php?id=18850&postid=154163#154163

. . And Astropulses became much in demand, but they are not part of the overall SETI MB throughput of the project. They are not even for SETI in any way; they are a sub-contract for another project at another uni (as I understand it).

Stephen ? ? |
AllgoodGuy Joined: 29 May 01 Posts: 293 Credit: 16,348,499 RAC: 266 |
> It's getting late, and the Splitters have dropped off to the point there is No RTS. This doesn't look good;

Recovering a little, seeing AP being cut and delivered.

Results ready to send = 35,768
Current result creation rate ** = 35.9130/sec
Results received in last hour ** = 149,577

Still nowhere near optimal. |
Stephen "Heretic" Joined: 20 Sep 12 Posts: 5557 Credit: 192,787,363 RAC: 628 |
> Way, way long before that. https://setiathome.berkeley.edu/forum_thread.php?id=18850&postid=154163#154163

. . I was under the impression that the Seti@home back end does not look at AP results at all; they are shipped off to another project somewhere. Perhaps I have that wrong. Maybe I was just being too parochial and not recognising that the other project is also part of SETI.
. . As for why it is here, look at the quotes and you will see you need to be asking Nick about that.

Stephen ? ? |
Grant (SSSF) Joined: 19 Aug 99 Posts: 13903 Credit: 208,696,464 RAC: 304 |
Looks like there've been a few server issues in the last 24 hrs or so. The return rate dies for a while, then the Ready-to-send buffer gets pounded when the Scheduler comes back to life, but the splitters don't pick up on the increased load.
Grant
Darwin NT |
Boiler Paul Joined: 4 May 00 Posts: 232 Credit: 4,965,771 RAC: 64 |
For some reason, the server status page is unavailable. Just a blank black page. Very odd. I wonder how long this will last. |
andybutt Joined: 18 Mar 03 Posts: 262 Credit: 164,205,187 RAC: 516 |
Back on now, but loads of red! A problem ready for the weekend? |
©2025 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.