Message boards :
Number crunching :
SETI orphans
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
About how much system RAM is required per CPU job for Rosetta? I have seen up to 1.3GB on the regular 4.08(?) tasks under Linux. There is a version called Mini-Rosetta which uses less. You can tell the website how long you want the CPU to run a task, and it will send you ones that run near that. When I had it set to 1 hour, I got tasks that ran about 1.5 hours or less. I am experimenting with the default setting (8 hours) to see if I get more COVID-19 tasks. Tom
A proud member of the OFA (Old Farts Association).
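As a rough sanity check (a sketch only, not project guidance; the per-task figure is just the ~1.3GB observed in this thread), you can estimate how many concurrent Rosetta tasks will fit in a host's RAM:

```python
def max_concurrent_tasks(total_ram_gb, reserved_gb, per_task_gb):
    """Estimate how many CPU tasks fit in RAM.

    reserved_gb: headroom kept for the OS and everything else.
    per_task_gb: observed peak memory per task (~1.3 GB reported here).
    """
    usable = total_ram_gb - reserved_gb
    return max(0, int(usable // per_task_gb))

# A 32 GB host holding 4 GB back, at ~1.3 GB per task:
print(max_concurrent_tasks(32, 4, 1.3))  # → 21
```

Actual usage varies by task batch, so treat the result as an upper bound rather than a safe setting.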
Ian&Steve C. · Joined: 28 Sep 99 · Posts: 4267 · Credit: 1,282,604,591 · RAC: 6,640
I was seeing if it would be worth it for me to run some Rosetta; generally I don't like CPU processing since it's much less efficient. Getting more RAM is no problem, since my systems with the most CPU power run cheap DDR3 RAM: going from 32GB to 64GB isn't that expensive, maybe $50. But I have rather old CPUs (E5-2600v2), so maybe still not worth the electricity use. Does the RosettaCPU app use AVX?
Seti@Home classic workunits: 29,492 CPU time: 134,419 hours
Dr Who Fan · Joined: 8 Jan 01 · Posts: 3348 · Credit: 715,342 · RAC: 4
"... Does the RosettaCPU app use AVX?" No, it does not use AVX.
Bill G · Joined: 1 Jun 01 · Posts: 1282 · Credit: 187,688,550 · RAC: 182
I am running at 4 hours and am getting about 50% COVID-19 tasks.
SETI@home classic workunits 4,019 SETI@home classic CPU time 34,348 hours
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
I just stole another 16GB of memory from other hosts to get 32GB installed in the daily driver, which I added Rosetta to last night. Seeing up to 1.4GB per 8 hour task, and generally around 1.1-1.2GB for most. Now I can run half my cores, which was impossible on just 16GB.
Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Bill G · Joined: 1 Jun 01 · Posts: 1282 · Credit: 187,688,550 · RAC: 182
"I just stole another 16GB of memory from other hosts to get 32GB installed in the daily driver ... Now I can run half my cores which was impossible on just 16GB." I had to run only 28 cores on my 32GB Ryzen, as I was getting halts for lack of memory when I was running 29 cores. I am running my three GPUs reserving 2 cores each, so there was no spare capacity at 29 cores.
SETI@home classic workunits 4,019 SETI@home classic CPU time 34,348 hours
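The usable task count on a mixed CPU/GPU host is the tighter of two limits: threads left after GPU reservations, and RAM headroom. A minimal sketch of that calculation, with hypothetical numbers in the spirit of the hosts described above (not anyone's exact configuration):

```python
def runnable_cpu_tasks(total_threads, gpus, threads_per_gpu,
                       ram_gb, os_reserve_gb, per_task_gb):
    """CPU task count is whichever limit bites first: threads or RAM."""
    thread_limit = total_threads - gpus * threads_per_gpu  # after GPU reservations
    ram_limit = int((ram_gb - os_reserve_gb) // per_task_gb)
    return min(thread_limit, ram_limit)

# Hypothetical 32-thread box: 3 GPUs each reserving 2 threads,
# 32 GB RAM with 4 GB held back, ~1.2 GB per Rosetta task:
print(runnable_cpu_tasks(32, 3, 2, 32, 4, 1.2))  # → 23 (RAM-bound)
```

With these numbers the thread limit is 26 but RAM allows only 23 tasks, which matches the experience above of having to back off the core count on a 32GB machine.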
Link · Joined: 18 Sep 03 · Posts: 834 · Credit: 1,807,369 · RAC: 0
"I was seeing if it would be worth it for me to run some Rosetta, generally I don't like CPU processing since it's much less efficient." The job that Rosetta does can't be done on GPUs, so CPUs are the most efficient way to get it done.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"The job that Rosetta does can't be done on GPUs, so CPU's are the most efficient way to get it done." It can be done on GPUs (they are actually putting some effort into that) but they don't have the resources to make it a priority, at least according to a post on one of their forums. And as far as efficiency goes, their CPU applications could really use some work. My APR here is around 37. At Rosetta it's 2.8 to 3, depending on the application. Yeah, the maths operations they are using may have higher overheads, and the higher I/O with system RAM will also cause a performance hit. But those applications could use some serious optimisation work.
Grant
Darwin NT
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
I wouldn't use APR as a measure of efficiency. R is 'rate': yes, we can measure time accurately. But P is 'processing': how do we measure that? I think it comes from the project-supplied estimate of the number of floating point operations to be performed to complete the task, which is an arbitrary guess at best. Another of those values which are useful for comparison within a project, but can't be compared across projects.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"I wouldn't use APR as a measure of efficiency. R is 'rate' - yes, we can measure time, accurately. But P is 'processing' - how do we measure that?" FLOPS: the number of floating point operations performed per second. Yes, different operations have different overheads, and different data sets will have different memory bounds (L1, L2, L3 or main memory etc) impacting performance, but that is what it's about: the number of operations performed per second. The efficiency of the application (its APR) is a large factor in how much Credit is awarded for processing a WU under Credit New. One of the major points of BOINC using the Cobblestone (in its intended, un-mutilated form) was to provide reward for work done, and to allow comparisons not just between systems on a project, but between projects. And the basis of it was the number of FLOPs performed.
"Another of those values which are useful for comparison within a project, but can't be compared across projects." And we can't use Credit, thanks to Credit New (and each project doing their own thing as they see fit), even though that was one of the points of the creation of the Cobblestone for BOINC, and a stated goal of Credit New. So with Credit New of no use, and APR of no use, there is no way of making any sort of meaningful comparison between projects, even though so many of them quote the number of FLOPs (supposedly) done as an indicator of their work output.
Edit: thinking about it, application APRs can't be compared between projects for applications that use Anonymous Platform, because the Scheduler fiddles the FLOPS estimate of the Task to get the Task runtime estimates to work... However, for stock applications the application APR should be comparable between projects. Yes? The number of FLOPs processed per second should be the number of FLOPs processed per second (assuming the estimate of the number of FLOPs to be processed is at least almost accurate, as they did away with actual FLOPs counting...).
Grant
Darwin NT
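For reference, the original Cobblestone definition discussed above ties credit directly to FLOPs: one credit is 1/200 of a day on a reference machine sustaining 1 GFLOPS, so a host doing 1 GFLOPS for a full day earns 200 credits. A sketch of that conversion (the definition is BOINC's; the inputs below are just examples):

```python
SECONDS_PER_DAY = 86400
COBBLESTONES_PER_GFLOPS_DAY = 200  # 1 GFLOPS sustained for a day = 200 credits

def cobblestones(flops):
    """Credit for a given number of floating point operations,
    under the original Cobblestone definition (not Credit New)."""
    gflops_days = flops / (1e9 * SECONDS_PER_DAY)
    return gflops_days * COBBLESTONES_PER_GFLOPS_DAY

# One day at a sustained 1 GFLOPS:
print(cobblestones(1e9 * 86400))  # → 200.0
```

The catch, as noted above, is that nothing counts actual FLOPs any more, so the input to this formula is only as good as the project's estimate.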
Richard Haselgrove · Joined: 4 Jul 99 · Posts: 14679 · Credit: 200,643,578 · RAC: 874
"... even though so many of them quote the number of FLOPs (supposedly) done as an indicator of their work output." And so does BOINC itself - misleadingly. Nobody counts flops any more: any BOINC flops figure you read has been reverse-calculated from the (arbitrary) credit awarded by the project concerned. I posted an analogy with the Zimbabwean Dollar being exchanged at parity with the US Dollar, many years ago.
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"Nobody counts flops any more: any BOINC flops-figure you read has been reverse-calculated from the (arbitrary) credit awarded by the project concerned." So they should just do away with Credit New and award Credit according to the original definition of the Cobblestone; as long as the estimated FLOPs for any given Task is close to the actual value, the Credit awarded for work done would be comparable within and between projects. Yeah, I know. I'm dreaming.
Grant
Darwin NT
Tom M · Joined: 28 Nov 02 · Posts: 5126 · Credit: 276,046,078 · RAC: 462
"About how much system RAM is required per CPU job for Rosetta?" When I run 8 hour tasks, they are now taking up to 2.3GB per task. Tom
A proud member of the OFA (Old Farts Association).
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
"When I run 8 hour tasks, they are now taking up to 2.3GB per task." Mine are still no more than 1.3GB (that I've seen so far).
Grant
Darwin NT
Buckeye4LF · Joined: 19 Jun 00 · Posts: 173 · Credit: 54,916,209 · RAC: 833
Yes, they don't have GPU apps, unfortunately. Rosetta uses up to 2GB of RAM per CPU job. It adds up quickly if you are running many tasks at once...
Buckeye4LF · Joined: 19 Jun 00 · Posts: 173 · Credit: 54,916,209 · RAC: 833
"About how much system RAM is required per CPU job for Rosetta?" Up to 2GB.
Jord · Joined: 9 Jun 99 · Posts: 15184 · Credit: 4,362,181 · RAC: 3
Of course, last-day-of-work issue, and my BOINC runs 9 SETI tasks on the CPU. What? Do these really take just 18 minutes per task? Wow. How far we've come since P4 times.
Raistmer · Joined: 16 Jun 01 · Posts: 6325 · Credit: 106,370,077 · RAC: 121
What about SETI orphans teams on different projects? That would bind us together :)
SETI apps news
We're not gonna fight them. We're gonna transcend them.
Dave Stegner · Joined: 20 Oct 04 · Posts: 540 · Credit: 65,583,328 · RAC: 27
I just created the team "SETI orphans" at Rosetta. Feel free to join. Much of their work at present is COVID-19. Let's see how the team goes.
Dave
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13855 · Credit: 208,696,464 · RAC: 304
Yay! I'm top of a team for RAC & Credit (probably only till the next person joins up).
Grant
Darwin NT
©2024 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.