Message boards :
Number crunching :
CPU time difference
Al · Joined: 3 Apr 99 · Posts: 1682 · Credit: 477,343,364 · RAC: 482
Found it, and yes, there was that, and a whole lot more. It was interesting, though confusing; quite a bit going on inside that file. Thanks for pointing it out to me.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
[quote]Core Temp shows Frequency as: 3688.27MHz (99.68 x 37.0). With that info is there any need to install CPU-Z?[/quote]
You may want to remove the cover and just make sure your fan is running and is not full of dust/fluff; that sounds way too hot. Running only 2, or even just one, CPU core crunching might give you an idea if the problem is that you are overloading the CPU. If the high temps persist you have other issues, but I urge you to try turning off the iGPU crunching. If the problem goes away, you can try sneaking back up on it with 2 CPU cores.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
[quote]Try to limit the number of cores BOINC uses to 2 instead of 4.[/quote]
If you go to Options > Computer preferences > Daily schedules and, under Network, set it to only use the network for an hour a day (as far from your current time as possible, i.e. 23 hours ahead), then the results will not immediately upload and disappear; that way you can see what your runtimes are over a series of tasks. I am presuming here that you are running BOINC Manager as your manager.
If the temps normalise with just the one CPU core crunching then you know there is a loading issue; if they stay high then you most probably have a hardware issue. If you turn off the iGPU, see what your runtimes do: if they stay the same (I will be surprised) then you can restart GPU crunching; if they decrease significantly, you have the choice whether to restart GPU work or not. I have a Core2 Duo 3GHz that was running WUs on both cores and was doing about 11 tasks a day on each core. I should think your i3 could manage a similar level if running well.
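For reference, the core limit discussed above can also be set outside the Manager: the BOINC client honours a local override file in its data directory. A minimal sketch, assuming the global_prefs_override.xml format and its max_ncpus_pct element (check both against your BOINC version; 50% maps to 2 of 4 logical cores on this i3):

```xml
<!-- global_prefs_override.xml, placed in the BOINC data directory -->
<global_preferences>
   <!-- Use at most 50% of logical CPUs: 2 of 4 on a hyperthreaded i3 -->
   <max_ncpus_pct>50.0</max_ncpus_pct>
</global_preferences>
```

After saving the file, have the client re-read its preferences (or restart it) for the change to take effect.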
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
[quote]Ok, I made that change, although it's not clear whether BOINC is now using one thread of each core or both threads of one core. Given that total CPU usage per Task Manager dropped from 99% to around 55-65%, I don't see how that could be good for total throughput.[/quote]
Give it a while and see :)
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
[quote]In which case a large fraction of that 99% might have been wasted time, I gather. I'll watch for a dramatic change in host RAC, beyond its gradual increase and its "normal" erratic behavior. If there is a better metric, even involving freeware, let me know.[/quote]
You might have a look at BoincTasks; when running/configured properly it can track your performance and output over time.
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
[quote]One important thing is the memory bandwidth.[/quote]
If you are running Win 7 or later, run the memory test function; that should tell you if a memory fault is your problem.
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
This WU is about as close to apples-and-apples as I think I'll ever get. The computers are similar and even their benchmarks are close. Yet there's a 3.5X difference in CPU time. This is with "50% of CPUs" and GPU tasks enabled. FTR, when the task drops off the list:
Computer 7914479: 3,810 secs
Computer 7973906 (mine): 13,495 secs
Name: blc6_2bit_guppi_57403_69499_HIP11048_OFF_0005.5998.0.22.45.118.vlar
Application (both computers): SETI@home v8 v8.00 windows_intelx86
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13750 · Credit: 208,696,464 · RAC: 304
[quote]This WU is about as close to apples-and-apples as I think I'll ever get. The computers are similar and even their benchmarks are close. Yet there's a 3.5X difference in CPU time. This is with "50% of CPUs" and GPU tasks enabled.[/quote]
The difference between the 2 systems is that you are using your integrated Intel GPU for crunching; the other person isn't.
EDIT: also, their CPU has 4 physical cores; yours would be 2 physical, 2 hyperthreaded. Your CPU is capable of very high clock speeds, but not at the same time as doing heavy work on the internal GPU.
I'd suggest re-enabling all CPU cores and disabling GPU crunching for a while, and see what times result.
Grant
Darwin NT
Wiggo · Joined: 24 Jan 00 · Posts: 34896 · Credit: 261,360,520 · RAC: 489
[quote]This WU is about as close to apples-and-apples as I think I'll ever get. The computers are similar and even their benchmarks are close. Yet there's a 3.5X difference in CPU time. This is with "50% of CPUs" and GPU tasks enabled.[/quote]
You need to find a similar i3 (or same socket & speed i7, or i5 mobile CPU) to make a good comparison with; the i5's cores are just pure cores and perform well ahead of hyperthreaded cores.
Cheers.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
[quote]This WU is about as close to apples-and-apples as I think I'll ever get. The computers are similar and even their benchmarks are close. Yet there's a 3.5X difference in CPU time. This is with "50% of CPUs" and GPU tasks enabled.[/quote]
It was recommended several days ago that they suspend GPU work to see how it affected their CPU processing times. No feedback was provided as to the results, or whether it was tried.
Ivy Bridge & Haswell based CPUs have shown that using SETI@home apps on the CPU & iGPU at the same time causes the CPU apps to run more slowly. From one host with a Skylake i5, it was thought that the CPU slowdown may be greater on the newest CPUs. If that is true, I would speculate that the increased iGPU speed in Skylake may be causing even more "cache thrashing". It has been speculated that running a less cache-heavy app on either the CPU or iGPU may be the solution, but I'm not sure anyone has sat down and tested that yet.
SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
It was also recommended that I try 50% CPUs. I didn't see any point in trying both that and GPU-suspended at the same time. I wasn't seeing anything clearly apples-to-apples in the immediate results, so I was just waiting. Since host RAC was in a slow but steady decline with 50% CPUs and the GPU enabled, I have now switched to 100% CPUs with the GPU suspended.
An easy-to-use way to do before-and-after testing using the same real-world data would be very useful. If maximizing total global throughput is a goal, it seems like that would be something worth working on at Berkeley, but that's just me.
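In the absence of such a tool, a rough before-and-after comparison can be done by hand on per-task CPU times copied from the website's task list. A minimal sketch in Python (the runtime lists below are made-up placeholders, not measured data):

```python
from statistics import mean, median

def compare(before_secs, after_secs):
    """Summarize two sets of per-task CPU times (seconds)."""
    b, a = mean(before_secs), mean(after_secs)
    return {
        "mean_before": b,
        "mean_after": a,
        "median_before": median(before_secs),
        "median_after": median(after_secs),
        "speedup": b / a,  # > 1.0 means the change helped
    }

# Placeholder runtimes: 50% CPUs + iGPU, then 100% CPUs with iGPU off
before = [13495, 12980, 14102]
after = [4105, 3990, 4230]
print(compare(before, after))
```

A speedup above 1.0 would mean the configuration change helped; the medians are worth checking too, since SETI@home task runtimes vary a lot with the workunit type.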
Raistmer · Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121
[quote]This WU is about as close to apples-and-apples as I think I'll ever get. The computers are similar and even their benchmarks are close. Yet there's a 3.5X difference in CPU time. This is with "50% of CPUs" and GPU tasks enabled.[/quote]
So, instead of reducing the number of variations, you increased it by enabling GPU computation on a hybrid device (an APU, in AMD terms) like the iGPU? Well, I thought the initial question was why CPU processing is much slower... Disable the GPU part, enable only 50% of the cores, and complete a few tasks.
One should understand that with a hyperthreaded hybrid device there are far fewer real computational resources inside the device than there are interfaces to load them. Also, this suggestion is not meant to maximize device throughput, but to answer the question this thread started with.
Raistmer · Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121
And it has existed for more than 10 years already: http://lunatics.kwsn.info/index.php?action=downloads;cat=5
What can Berkeley do with the underpowered i3 line of Intel devices?
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
[quote]What can Berkeley do with the underpowered i3 line of Intel devices?[/quote]
If Berkeley is not interested in maximizing the productivity of all computers (or at least all modern computers), they should be. That's intuitive. Few of us have a lot of money to throw at this project, and I suspect there are many like me who have absolutely no other need for that much power.
As for the rest, please understand that I have no way of knowing which expert advice I should follow. I'll let you debate that with the others, and I'll watch for any resolution.
Re the testing tool you provided, I made a point of saying "easy-to-use". By that, I meant as easy as BOINC Manager, and preferably a part of BOINC Manager. That page doesn't even provide any usage instructions that I can see, let alone easy-to-use ones. It looks like it might be usable by extreme computer geeks like yourself, but not so much by the general population. But thanks for the thought.
HAL9000 · Joined: 11 Sep 99 · Posts: 6534 · Credit: 196,805,888 · RAC: 57
[quote]It was also recommended that I try 50% CPUs. I didn't see any point in trying both that and GPU-suspended at the same time. I wasn't seeing anything clearly apples-to-apples in the immediate results, so I was just waiting. Since host RAC was in a slow but steady decline with 50% CPUs and the GPU enabled, I have now switched to 100% CPUs with the GPU suspended.[/quote]
I would probably say that their primary goal is stability of the applications and being able to support as many types of hardware as possible, rather than maximum raw throughput of data processing. Much of the application development is done by volunteers rather than by the SETI@home project, as the admins just don't have the time, funding, or resources. It is left to the users if they wish to go deeper and get the absolute most out of their systems.
The way BOINC works does not lend itself to being easily manipulated to throw in work for benchmarking purposes. So most of the tools to do that operate outside of BOINC but, for the most part, are fairly simple to use.
SETI@home classic workunits: 93,865 · CPU time: 863,447 hours
Join the [url=http://tinyurl.com/8y46zvu]BP6/VP6 User Group[/url]
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
I would welcome any effort to dumb down the usage of that tool so that it's usable by a mere mortal such as myself. I would then be willing to do sufficient testing to take most of the guesswork out of optimization of i3's. (I'm an idiot in these environments, but I have 30 years in mainframes, half of that in system software, and the testing concepts and procedures are essentially the same.) There would no longer be any debate or question such as we have seen in this thread. Future non-geek i3 users enquiring on these forums would be told unequivocally, "do this", and they would need no understanding beyond basic usage of BOINC Manager's options. Better yet, such guidance could be made more conspicuous by adding it to the BOINC or S@h doc.
Further, if disabling GPU work were clearly proven to improve overall productivity on i3's, the project could be modified so as not to send i3's any GPU work. Clearly the system knows it's an i3; that's shown on the computer's details page. If you halve the CPU time required for a task on an i3, you effectively add another i3 to the project.
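The closing arithmetic is easy to make concrete with a toy throughput model (the task times are illustrative, not measured):

```python
SECONDS_PER_DAY = 86_400

def tasks_per_day(cores: int, secs_per_task: float) -> float:
    """Daily task throughput for a host crunching on all its cores."""
    return cores * SECONDS_PER_DAY / secs_per_task

# Illustrative i3 with 4 logical cores
slow = tasks_per_day(4, 13_000)  # iGPU contending with the CPU
fast = tasks_per_day(4, 6_500)   # iGPU off, per-task time halved
print(slow, fast, fast / slow)   # halving task time doubles output
```

Doubling a host's output is, from the project's point of view, the same as attaching a second identical host, which is the point being made.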
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13750 · Credit: 208,696,464 · RAC: 304
[quote]Further, if disabling GPU work were clearly proven to improve overall productivity on i3's, the project could be modified so as not to send i3's any GPU work.[/quote]
The problem (one of too many to list) is the type of GPU. Yours is an on-die GPU, built into the CPU package, so it shares power, heat & memory resources with the CPU. Depending on configuration the internal GPU can produce a lot of work, but it's nowhere near as good as an external GPU, and generally the internal GPU's output isn't that much better than the CPU's (at this stage). An add-on GPU configuration & application settings can result in no CPU work being possible; however, the increased GPU output can offset that.
Grant
Darwin NT
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
Ok, point taken, but the type of GPU is also available to the system.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13750 · Credit: 208,696,464 · RAC: 304
[quote]Ok, point taken, but the type of GPU is also available to the system.[/quote]
And like everything else on this project, everything has its priority levels, and given the present funding only the most extremely urgent and important ones will receive any attention. I'm guessing 95% of the things on the list will never get touched unless there's a huge increase in funding, or everything just suddenly falls into place. Neither is very likely.
Grant
Darwin NT
Matthew@SETI@home · Joined: 14 Feb 16 · Posts: 40 · Credit: 9,278,146 · RAC: 6
Ok, so they can't afford to automate that. I can understand that. That still leaves the other options that don't require software changes.
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.