Message boards :
Number crunching :
SETI orphans
|
Harri Liljeroos Send message Joined: 29 May 99 Posts: 127 Credit: 85,281,665 RAC: 287
|
BTW, check what the actual error reported was. If it is 196 (EXIT_DISK_LIMIT_EXCEEDED), then the task was configured badly on the server side and the rsc_disk_bound value was set too low for the task (or the task really did have problems). We have seen a lot of these at LHC.
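A minimal sketch of the check Harri describes: the client records each task's disk bound in `client_state.xml`, inside the `<workunit>` element. The snippet parses an inline example rather than a real file, and the task name is made up.

```python
# Sketch: read the rsc_disk_bound BOINC recorded for each workunit.
# Parses a minimal client_state.xml-style string; on a real machine you
# would read the file from the BOINC data directory instead.
import xml.etree.ElementTree as ET

def disk_bounds(xml_text):
    """Return {workunit name: rsc_disk_bound in bytes} from client_state-style XML."""
    root = ET.fromstring(xml_text)
    return {wu.findtext("name"): float(wu.findtext("rsc_disk_bound"))
            for wu in root.iter("workunit")}

sample = """<client_state>
  <workunit>
    <name>example_task_1</name>
    <rsc_disk_bound>500000000.000000</rsc_disk_bound>
  </workunit>
</client_state>"""

print(disk_bounds(sample))  # -> {'example_task_1': 500000000.0}
```

A task that writes more than its bound into the slot directory is aborted with error 196, which is how an undersized server-side value shows up on the client.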
|
Tom M Send message Joined: 28 Nov 02 Posts: 4936 Credit: 276,046,078 RAC: 1,048 |
BTW Yes. Rosetta@Home uses over 17 GB on my system currently. I would set the limit as high as you can stand until you see what is working, then lower it. I hadn't caught that it was unpacking the DB for each task; my impression was that they were all running against one DB. I think I saw some "flags" files, which is where I got that impression. R@H uses both more memory and more disk space than anything else I have ever run. The reason I added an SSD to my system before S@H went away is that my system was "thrashing" the HD. I could hear it, and it was logging on very slowly. Tom M "I owe", "I owe", "It's off to work I go" (from a bumper sticker on a smallish Mercedes Benz) (on the back of a Semi Tractor) "If you can read this bumper sticker, I've LOST MY TRAILER!" |
|
Grant (SSSF) Send message Joined: 19 Aug 99 Posts: 12990 Credit: 208,696,464 RAC: 690
|
BTW, I've never had one of those errors. I used to get plenty of "Insufficient disk space" notices, and after bumping it up the tasks would then run. I ended up with these settings for a 6c/12t (all in use) system:

Disk
- Use no more than 20 GB
- Leave at least 2 GB free
- Use no more than 60% of total

Looking at the disk usage in the BOINC Manager graph, it has varied between 9-13 GB. Present usage is the highest I've seen so far, even after reducing my cache size further. Roughly 1.2 GB of storage space per task run, and I'd allow 1.5 GB RAM. For a system with fewer than 4 cores, you'd probably want to allow 2 GB RAM per task minimum. Grant Darwin NT |
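The three disk preferences above combine conservatively: BOINC may only use as much space as the most restrictive limit allows. A rough sketch of that rule (simplified; the real client also accounts for space BOINC already occupies; the disk sizes below are hypothetical):

```python
def allowed_disk_gb(total_gb, free_gb, max_gb, min_free_gb, max_pct):
    """Effective BOINC disk allowance: the most restrictive of the three limits.
    Simplified sketch; the real client also counts space BOINC already uses."""
    by_cap = max_gb                      # "use no more than X GB"
    by_free = free_gb - min_free_gb      # "leave at least Y GB free"
    by_pct = total_gb * max_pct / 100.0  # "use no more than Z% of total"
    return max(0.0, min(by_cap, by_free, by_pct))

# Grant's settings on a hypothetical 100 GB disk with 50 GB currently free:
print(allowed_disk_gb(100, 50, max_gb=20, min_free_gb=2, max_pct=60))  # -> 20.0
# Same settings with only 10 GB free: the "leave 2 GB free" rule now binds.
print(allowed_disk_gb(100, 10, max_gb=20, min_free_gb=2, max_pct=60))  # -> 8.0
```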
Raistmer Send message Joined: 16 Jun 01 Posts: 6242 Credit: 106,370,077 RAC: 275
|
BTW Stderr output:

```
<core_client_version>7.4.53</core_client_version>
<![CDATA[
<message>
Maximum disk usage exceeded
</message>
<stderr_txt>
```

Is it a BOINC error? Should I increase the allowed space for BOINC in the client settings to avoid such aborts? SETI apps news We're not gonna fight them. We're gonna transcend them. |
Raistmer Send message Joined: 16 Jun 01 Posts: 6242 Credit: 106,370,077 RAC: 275
|
Rosetta is possibly the most inefficient, wasteful project in BOINC Lol, better tell me where they are very responsive :)))) Same on Einstein, same on PrimeGrid.... And regarding inefficiency - I just can't stop thinking about how to improve things :) Much better if admins react to suggestions of course, but something can be done on the client side too. First thing that came to mind - a RAM drive! A huge one would be required, though. Second - could there be any way to TRICK the app and point it to the same directory each time it attempts to access static data? 960 MB per slot - not a bad saving even for a dual-core system, but what to say about modern manycores... |
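The "point every slot at one shared copy" idea could, in principle, be sketched with a symlink: extract the database once and link it into each slot directory instead of unpacking ~960 MB per task. This is purely illustrative (the directory names are made up, the app would have to tolerate a link, and on Windows creating symlinks needs extra privileges):

```python
# Hypothetical sketch: share one extracted database across slot directories
# via a symlink, instead of unzipping a fresh copy per task.
import os
import tempfile

def link_shared_db(slot_dir, shared_db_dir, link_name="database"):
    """Create slot_dir/<link_name> as a symlink to the shared extracted DB."""
    target = os.path.join(slot_dir, link_name)
    if not os.path.lexists(target):
        os.symlink(shared_db_dir, target)
    return target

# demo in a throwaway temp tree
base = tempfile.mkdtemp()
shared = os.path.join(base, "shared_db")
os.makedirs(shared)
slot = os.path.join(base, "slots", "0")
os.makedirs(slot)
link = link_shared_db(slot, shared)
print(os.path.islink(link))  # -> True
```

Whether a real BOINC app follows the link without complaint is exactly the open question in the post; this only shows the mechanism.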
Keith T. Send message Joined: 23 Aug 99 Posts: 919 Credit: 537,293 RAC: 20
|
Rosetta is possibly the most inefficient, wasteful project in BOINC It's like the old days of LHC. I remember now why I stopped running Rosetta and RALPH in the past! Their project scientists and administrators are not very responsive to user suggestions and comments, either. See some of my old comments from 10 years ago in both projects' forums. I don't think it has got much better! |
Raistmer Send message Joined: 16 Jun 01 Posts: 6242 Credit: 106,370,077 RAC: 275
|
Learning the Rosetta workflow still. Found that after start there was almost zero CPU load from the Rosetta 4.15 app. Looking for the reason, I found a saturated disk queue. And it was saturated for more than 5 minutes (it's an SD card). It seems the app extracted the whole project database archive into the slot directory. It's 960 MB. And, because it's the slot directory, it will do the same for each new task!!! So, Rosetta keeps a zipped database in the project folder and UNZIPs it into the slot for each and every new task???!!! I optimized SETI wisdom generation of less than a MB to remove its generation from the slot because it's a waste of resources, but here almost a GB per task??!! OMG.... That could be justified if the database were MODIFIED for each task. But at first glance it doesn't look so. At least the extracted files have just the same size as they had in the zip archive. I'll do MD5 calculations for a few tasks... and if they are really the same.... It's just an ENORMOUSLY huge waste! |
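The MD5 check described above is easy to script: hash every extracted file under two slot directories and compare. A minimal sketch (the demo builds its own temp "slots" rather than touching a real BOINC tree):

```python
# Sketch: hash all files under two slot directories and compare the maps.
# Identical digests across slots would confirm the per-task unzip is pure waste.
import hashlib
import os
import tempfile

def md5_of(path, chunk=1 << 20):
    """MD5 of a file, read in 1 MB chunks so large DB files don't eat RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def dir_digests(root):
    """Map relative file path -> MD5 for every file under root."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            out[os.path.relpath(full, root)] = md5_of(full)
    return out

# demo: two fake "slots" holding identical extracted files hash identically
base = tempfile.mkdtemp()
for slot in ("slot0", "slot1"):
    d = os.path.join(base, slot)
    os.makedirs(d)
    with open(os.path.join(d, "db.bin"), "wb") as f:
        f.write(b"same static database contents")
print(dir_digests(os.path.join(base, "slot0")) ==
      dir_digests(os.path.join(base, "slot1")))  # -> True
```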
|
Grumpy Swede Send message Joined: 1 Nov 08 Posts: 8170 Credit: 49,849,242 RAC: 147
|
Beta for Covid-19 on WCG has started with a small batch. More will come pretty soon. As a teaser, a sample of a finished stderr for the Covid-19 Beta: Result Log Result Name: BETA_ OPN1_ TEST029_ 0171_ 0--

```
<core_client_version>7.6.22</core_client_version>
<![CDATA[
<stderr_txt>
INFO:[08:43:08] Start AutoGrid...
autogrid4: Successful Completion.
INFO:[08:43:40] End AutoGrid...
INFO:[08:43:40] Start AutoDock for a_ZINC000744297501.dpf(Job #0)...
INFO: In AutoDock main_autodock() Beginning AutoDock...
INFO: Setting num_generations: 9999999
About to enter main loop...(dockings already completed: 0)
Finished Docking number 0
Finished Docking number 1
[... Finished Docking numbers 2 through 48 ...]
Finished Docking number 49
INFO:[08:59:24] End AutoDock...
WARNING: No benchmark data to run!
INFO:Cpu time = 975.171875
08:59:24 (4084): called boinc_finish(0)
</stderr_txt>
]]>
```
|
Sirius B Send message Joined: 26 Dec 00 Posts: 21809 Credit: 3,081,182 RAC: 15
|
Microbiome Immunity Project. |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5384 Credit: 192,787,363 RAC: 1,426
|
Thanks. I've always installed BOINC & let it run whatever projects & tasks were selected on a default basis, so it's nice to know. 100% Stephen . . |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5384 Credit: 192,787,363 RAC: 1,426
|
Seeing what you & Tom have said, I've gone back & changed mine from default to custom. Also changed most from unlimited to 20 of each. Changed ARP to 4 & SCC to unlimited, and it worked. Sadly no ARP though. . . African Rainfall Project. But I will have to look to see what MIP is (Men In Pink?) Stephen :) |
Stephen "Heretic" Send message Joined: 20 Sep 12 Posts: 5384 Credit: 192,787,363 RAC: 1,426
|
. . I joined the group two days ago but I do not appear in the listing. There are however 2 'anonymous' listings, I originally thought I might have been one of them because of the 24 hour provisional period, but even now I do not show up. My hosts are definitely NOT 'anonymous'. . . Thanks Sven, . . I found a setting on their site about "Sharing data", I set that to on and now I show up in the team stats (I was one of the anonymous members). . . I guess it's all in the language ... Stephen :) |
Wiggo "Democratic Socialist" Send message Joined: 24 Jan 00 Posts: 18404 Credit: 261,360,520 RAC: 1,109
|
Have my resource share set to 10000 to 10. Here at Seti I use venues with a 3 day + 0.1 cache setting, but my other projects just use my general settings, which use 1 day + 0.1, so I always have room for more Seti work. Cheers. |
Gary Charpentier Send message Joined: 25 Dec 00 Posts: 27000 Credit: 53,134,872 RAC: 73
|
Have my resource share set to 10000 to 10. There are some projects that allow fractional shares, such as 0.01. Try whatever on the web page; until you hit Update, nothing has actually changed. My only warning is about setting things too far out of balance. The work fetch is rather stupid about knowing a project has no work long term. If you have big cache settings, you can undo everything you are trying to do. Why? Because when it finally gives up trying to get work from the large-share project, it will get work from the small-share project and fill the cache. Now the cache may all go into EDF mode and no work will be requested until the cache is emptied. Now that the Tuesday outage is gone, there is no longer a point in having a monster cache. The BOINC defaults may make more sense. BTW, some project admins expect defaults and set their task deadlines assuming them. |
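Resource shares are relative, not percentages: each project's long-run fraction of compute time is its share divided by the sum of all shares. A quick sketch with the numbers from this thread (project names used only as labels):

```python
def share_fractions(shares):
    """Long-run fraction of compute time per project: share_i / sum(shares)."""
    total = sum(shares.values())
    return {project: share / total for project, share in shares.items()}

# Wiggo's 10000-to-10 split: SETI gets ~99.9% of the time, the other ~0.1%.
fractions = share_fractions({"SETI@home": 10000, "Rosetta": 10})
print(fractions)
```

This also shows why a "monster cache" backfires: the 0.1% project can still fill the whole cache once the big-share project stops supplying work, because work fetch buys buffer depth, not share.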
juan BFP Send message Joined: 16 Mar 07 Posts: 9764 Credit: 572,710,851 RAC: 8,616
|
Have my resource share set to 10000 to 10. Some projects do not allow you to set the resource share to zero. You could try, instead of setting it to 0, clearing the field and seeing if it sets itself to zero. Other times, to force a project to zero resource share, you need to edit the configuration file generated by the project itself. Not an easy task if you do not already know how.
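A hypothetical sketch of that "edit the file yourself" approach: rewrite the `<resource_share>` element in an `account_*.xml`-style file. Treat this as illustration only; whether the client honours a hand-edited value, and for how long before the project server overwrites it, depends on the project, and the sample XML below is made up.

```python
# Sketch: rewrite the <resource_share> element in account-file-style XML.
# Close BOINC before editing real files; this demo only touches a string.
import re

def set_resource_share(xml_text, value):
    """Replace the contents of the <resource_share> element with `value`."""
    return re.sub(r"<resource_share>[^<]*</resource_share>",
                  f"<resource_share>{value}</resource_share>", xml_text)

sample = "<account><resource_share>100.000000</resource_share></account>"
print(set_resource_share(sample, 0))
# -> <account><resource_share>0</resource_share></account>
```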
|
|
Dave Stegner Send message Joined: 20 Oct 04 Posts: 540 Credit: 65,583,328 RAC: 62
|
Have my resource share set to 10000 to 10. Can't do much more than that. Dave |
Tom M Send message Joined: 28 Nov 02 Posts: 4936 Credit: 276,046,078 RAC: 1,048 |
Don't know how anyone else is doing, but I have not received even 1 Seti work unit in the last couple of weeks. If you set your non-SETI@Home GPU project to resource share "0", it will always run S@H GPU tasks when they show up. Your CPU tasks for S@H would benefit from having a higher resource setting than anything else on the machine. Then S@H will bump Rosetta out of the way when they arrive. Resends are still going out. Tom M |
|
Dave Stegner Send message Joined: 20 Oct 04 Posts: 540 Credit: 65,583,328 RAC: 62
|
Don't know how anyone else is doing, but I have not received even 1 Seti work unit in the last couple of weeks. Seti orphans is still running on Rosetta. Come join us. Dave |
|
Grumpy Swede Send message Joined: 1 Nov 08 Posts: 8170 Credit: 49,849,242 RAC: 147
|
ARP appears to be Africa Rainfall And the uploaded result of each finished ARP is more than 60 MB, and that's huge. They take time, yes, but then who's in a hurry? I'm retired, and I have from now to the end of my life to finish them :-) I get them too, not very regularly, but a couple of them per day at least. |
Tom M Send message Joined: 28 Nov 02 Posts: 4936 Credit: 276,046,078 RAC: 1,048 |
ARP appears to be Africa Rainfall Ah. I get those "regularly", but they run 12+ hours on my reasonably fast machine. They probably take even longer on a slower machine. Tom M |
©2020 University of California
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.