Message boards : Number crunching : About Deadlines or Database reduction proposals
ML1 · Joined: 25 Nov 01 · Posts: 20959 · Credit: 7,508,002 · RAC: 20
My other question is: why not keep acquiring and crunching more data while starting to process the old data? The sky is so big.

Because they don't have the manpower to do both at the same time. Keeping a BOINC server running at setiathome scale needs a lot of work, and doubly so with all the problems we have had since December.

Part of the pressure may well have been precipitated by the database running out of RAM? There will be a big jump needed in effort and manpower to upgrade to new hardware to fix that... And then there are always the non-science-educated Bean Counters who apply unreasonable pressure to have their arbitrary bean-counting metrics massaged to boost their non-scientific bonuses?... And all too often, sadly, the Bean Counters never account for the real value of continuity and loyalty...

All a game and all conjecture. Meanwhile, s@h has already achieved groundbreaking/spacebreaking fantastic results and started a whole new way of planet-wide distributed computing.

(Special note: This is all my own uneducated personal prejudice and random guesses!!)

Keep searchin',
Martin

See new freedom: Mageia Linux
Take a look for yourself: Linux Format
The Future is what We all make IT (GPLv3)
Keith Myers · Joined: 29 Apr 01 · Posts: 13164 · Credit: 1,160,866,277 · RAC: 1,873
Exactly. Not questioning the time and devotion Eric has put into the project over the years.

Seti@Home classic workunits: 20,676 CPU time: 74,226 hours
A proud member of the OFA (Old Farts Association)
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304
Finally able to check out Tasks again. 19% of my Validation Pendings are over 28 days old. An almost 20% reduction in the size of the database would have been very helpful.

Grant
Darwin NT
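[Editor's note] Grant's 19% figure is just this kind of arithmetic: count how many of the tasks still listed as pending were sent more than the proposed deadline ago. A minimal sketch of that count, assuming the send dates of the outstanding wingman copies have been pulled from the web Tasks pages; the dates, the list and the `share_older_than` helper are illustrative, not project code or real data:

```python
from datetime import datetime, timedelta

# Illustrative only: dates when the outstanding wingman copies of my
# "Validation pending" tasks were sent (made-up values, not real data).
pending_sent_dates = [
    datetime(2020, 1, 10),
    datetime(2020, 2, 2),
    datetime(2020, 3, 20),
    datetime(2020, 3, 24),
]

def share_older_than(sent_dates, cutoff_days, today=None):
    """Fraction of pending tasks whose outstanding copy is older than cutoff_days."""
    today = today or datetime.utcnow()
    cutoff = timedelta(days=cutoff_days)
    stale = sum(1 for d in sent_dates if today - d > cutoff)
    return stale / len(sent_dates) if sent_dates else 0.0

# Evaluated as of the date of this thread: prints "50% are older than 28 days".
print(f"{share_older_than(pending_sent_dates, 28, today=datetime(2020, 3, 26)):.0%} "
      "are older than 28 days")
```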
Stephen "Heretic" · Joined: 20 Sep 12 · Posts: 5557 · Credit: 192,787,363 · RAC: 628
Finally able to check out Tasks again...

Well, if it is not difficult to do (IF?), then it would be advisable to implement the 28-day deadline (or thereabouts), and maybe return the server limits to 100 per device for the remaining couple of weeks, to give the backlog a chance to clear before shutting down. Maybe even reduce deadlines to 14 days, since it is such a short-term thing now. It would be nice to see things clear up before the plug is pulled. I say this because the validation backlog has passed the 15 million mark.

Stephen
rob smith · Joined: 7 Mar 03 · Posts: 22439 · Credit: 416,307,556 · RAC: 380
I suspect there are a number of people who read the first few words of Eric's announcement and turned their computers off. We have seen this every time there has been a glitch in tasks being distributed: some folks assume they have to connect to the servers to be able to process the tasks they already have in hand.

Bob Smith
Member of Seti PIPPS (Pluto is a Planet Protest Society)
Somewhere in the (un)known Universe?
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304
Maybe even reduce deadlines to 14 days, since it is such a short-term thing now. It would be nice to see things clear up before the plug is pulled. I say this because the validation backlog has passed the 15 million mark.

Yep. Deadlines: 2 weeks for all WUs, including AP; 3 days for re-sends. Why drag this out for 9 months (or more)?

Grant
Darwin NT
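[Editor's note] For scale on what that means per task: roughly speaking, BOINC gives each copy of a workunit a report deadline of the moment it was sent plus the workunit's delay bound, so the proposal amounts to capping that delay bound at 14 days (3 days for re-issued copies). A minimal sketch of the arithmetic only, not project server code; the task names come from later in this thread and the timestamps are illustrative:

```python
from datetime import datetime, timedelta

# Proposed limits from the post above.
NORMAL_DEADLINE = timedelta(days=14)
RESEND_DEADLINE = timedelta(days=3)

def new_deadline(sent_time: datetime, is_resend: bool) -> datetime:
    """Report deadline a copy would get under the shortened scheme."""
    return sent_time + (RESEND_DEADLINE if is_resend else NORMAL_DEADLINE)

# Illustrative copies: (task name, time sent, whether it is a re-issue).
tasks = [
    ("blc35_2bit_guppi_58691_81912_HIP79672_0098.4892.0.21.44.119.vlar_3",
     datetime(2020, 3, 26, 16, 42), True),
    ("19mr20ag.15440.18881.7.34.137_2", datetime(2020, 3, 26, 11, 58), False),
]
for name, sent, resend in tasks:
    print(f"{name}: report by {new_deadline(sent, resend):%d %b %Y, %H:%M} UTC")
```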
Grant (SSSF) · Joined: 19 Aug 99 · Posts: 13835 · Credit: 208,696,464 · RAC: 304
Appropriate thread. Changing the deadlines has no effect whatsoever on the assimilation blockage.

Since the Assimilation backlog is a result of the database bloat, which was caused by the increase in quorum to deal with the RX 5000 issue, shortening the deadlines will help with the Assimilation backlog. Fix the cause, and everything else that is affected will be sorted out as well.

Grant
Darwin NT
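[Editor's note] Rough arithmetic behind that point, for anyone wondering why an extra copy per workunit bloats the database: every unassimilated workunit keeps one workunit row plus one result row per copy issued, and none of those rows can be purged until the workunit is validated and assimilated. This is a back-of-the-envelope model with invented numbers, not real server statistics:

```python
# Back-of-the-envelope model, not real server statistics: every unpurged
# workunit holds one workunit row plus one result row per copy issued.
def rows_in_flight(workunits: int, copies_per_wu: float) -> int:
    """Approximate database rows tied up by unassimilated work."""
    return int(workunits * (1 + copies_per_wu))

in_flight_wus = 5_000_000      # invented number of unassimilated workunits
for copies in (2, 3):          # initial replication before/after the change
    print(f"{copies} copies per WU -> ~{rows_in_flight(in_flight_wus, copies):,} rows")
```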
juan BFP · Joined: 16 Mar 07 · Posts: 9786 · Credit: 572,710,851 · RAC: 3,799
Remember a few weeks ago, before the announcement of the S@H hibernation, when I proposed sending the resends to the hosts with the fastest APR to help reduce the DB size? There were some who said that was wrong and that it would never be put to work. OK, since we can't access the server code, we developed an option in the controversial spoofed client that does exactly that, using only the WUs received by the client, of course. The regular client could do the same, just by adding the related code.

This is an example of it working on a totally random WU: https://setiathome.berkeley.edu/workunit.php?wuid=3833651396

Workunit 3833651396
name: blc35_2bit_guppi_58691_81912_HIP79672_0098.4892.0.21.44.119.vlar
application: SETI@home v8
created: 10 Jan 2020, 17:47:51 UTC
canonical result: 8430193649
granted credit: 14.10
minimum quorum: 2
initial replication: 3
max # of error/total/success tasks: 5, 10, 5

Task | Computer | Sent | Time reported or deadline | Status | Run time (sec) | CPU time (sec) | Credit | Application
8430193649 | 8061859 | 10 Jan 2020, 17:59:54 UTC | 11 Jan 2020, 6:06:03 UTC | Completed and validated | 44.30 | 24.66 | 14.10 | SETI@home v8 Anonymous platform (NVIDIA GPU)
8430193650 | 6736251 | 10 Jan 2020, 18:00:04 UTC | 20 Jan 2020, 14:45:46 UTC | Completed and validated | 1,039.08 | 898.80 | 14.10 | SETI@home v8 v8.08 (alt) windows_x86_64
8462356857 | 8886774 | 20 Jan 2020, 14:46:12 UTC | 26 Mar 2020, 12:33:44 UTC | Timed out - no response | 0.00 | 0.00 | --- | SETI@home v8 v8.00 windows_intelx86
8687267991 | 8662921 | 26 Mar 2020, 16:42:52 UTC | 26 Mar 2020, 16:56:36 UTC | Completed and validated | 36.52 | 7.17 | 14.10 | SETI@home v8 Anonymous platform (NVIDIA GPU)

As you can see, the WU had been sitting in the DB since Jan 10. My host received the resend today and has already crunched it, so the WU was ready to assimilate and was cleared from the DB within minutes of being received. This is an example of a single scheduler call, from my own host (not one of the fastest in the SETIverse), so you can extrapolate and see the effect of that measure if hundreds of really fast hosts were doing the same. This was a randomly scheduled task, too. I will leave it to each one to do the math and reach their own conclusion.
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 01jn15ab.25254.87357.11.38.159_3
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 25ja20ac.18848.7838.12.39.119_3
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26ja20ab.25254.12541.16.43.250_3
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58691_81912_HIP79672_0098.4892.0.21.44.119.vlar_3
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 29ap12ae.16700.49233.3.30.255_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 05my10aa.12600.13159.5.32.125_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.13069.280216.16.43.247_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.11292.284306.9.36.46_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 09no11ae.22114.87568.9.36.108_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.10656.40858.8.35.80_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 22jl11af.20470.6611.14.41.244_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.26477.3748.13.40.5_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 08mr11ah.4854.6210.6.33.248_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 08mr11ah.3826.3756.8.35.114_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 08mr11ah.3826.8255.8.35.58_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 11ja09af.7953.77930.11.38.246_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 11ja09af.6752.86110.3.30.128_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 11ja09af.6752.890.3.30.4_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 11ja09af.31185.890.15.42.249_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 18mr20ac.11813.2930.11.38.26_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 17mr20af.32039.476.6.33.66_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ac.9973.16836.9.36.159_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 18mr20af.17495.8247.12.39.246_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ae.11001.23008.12.39.201_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.15440.18881.7.34.137_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.14161.18063.11.38.170_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.15440.17654.7.34.3_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20af.3855.2112.11.38.192_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20af.30763.20517.3.30.254_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20aa.20937.24607.6.33.176_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.21339.19290.5.32.254_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ac.11606.14382.16.43.248_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 18mr20ag.21385.476.16.43.253_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.16425.19699.15.42.231_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 18mr20ae.5202.6202.16.43.98_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.16425.25016.15.42.91_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 22mr20aa.30013.11519.11.38.192_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 22mr20aa.14345.16027.3.30.5_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ag.11335.20517.12.39.46_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.24085.282874.5.32.100.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 14oc15aa.26815.15613.14.41.157.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09261_HIP84123_0141.31612.818.22.45.174.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc64_2bit_guppi_58838_31043_TIC66561343_0116.5822.818.19.28.82.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09904_HIP84123_0143.31886.0.22.45.175.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc45_2bit_guppi_58838_02594_TIC468880077_0021.23841.818.19.28.80.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc44_2bit_guppi_58838_29085_TIC43647325_0110.12611.409.20.29.145.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 05my10aa.7244.4566.15.42.45.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19mr20ad.27734.237983.14.41.1.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09904_HIP84123_0143.31886.0.22.45.208.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc41_2bit_guppi_58838_30369_TIC43647325_0114.5795.818.20.29.35.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_10223_HIP84166_0144.31678.0.22.45.204.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 19oc12aa.22950.275407.15.42.92.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.7143.283079.14.41.176.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09261_HIP84123_0141.31612.818.22.45.201.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc45_2bit_guppi_58838_13469_TIC249067445_0056.16155.818.20.29.89.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc66_2bit_guppi_58692_78819_HIP80366_0088.32758.818.21.44.134.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_10223_HIP84166_0144.31678.0.22.45.202.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09582_HIP84253_0142.707.0.22.45.204.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc66_2bit_guppi_58692_78819_HIP80366_0088.32758.818.21.44.135.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc35_2bit_guppi_58692_09261_HIP84123_0141.31612.818.22.45.202.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 10mr20ac.30881.1703.12.39.226.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.24085.11621.5.32.2.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task blc66_2bit_guppi_58838_27577_TIC67772767_0105.1854.818.20.29.34.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.13069.13564.16.43.0.vlar_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 17mr10ae.8023.42024.15.42.124_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 26fe13ad.24085.4157.5.32.192_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 05my10aa.6538.20521.3.30.19_2
26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): got ack for task 08mr11ah.6855.1711.7.34.80_2

BTW, I'm not trying to start a new fire, just presenting the facts and the data to back them up. Maybe that will show others that there are people around here who know a little about the way computers work, have the knowledge, and are ready to help. That could persuade some to give them a chance, and their ideas could help other projects in the future. Saying no to those voices is what is wrong! For S@H it is too late to make any difference, I know.
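[Editor's note] For anyone who wants to sanity-check their own logs the same way, here is a small sketch (an illustration, not juan's code and not the spoofed client) that pulls the acknowledged task names out of a client log written with the sched_op_debug flag and splits them into initial copies and likely re-issues by the trailing issue number; the replication threshold is an assumption:

```python
import re
from collections import Counter

ACK_RE = re.compile(r"got ack for task (\S+)")

def summarize_acks(log_text: str, initial_replication: int = 3) -> Counter:
    """Count acked tasks, splitting initial copies from likely re-issues.

    The trailing _N in a task name is its issue index; with an initial
    replication of 3, an index of 3 or more means the copy was re-issued
    after an earlier one failed or timed out (a heuristic, not a guarantee).
    """
    counts = Counter()
    for name in ACK_RE.findall(log_text):
        issue = int(name.rsplit("_", 1)[-1])
        counts["resend" if issue >= initial_replication else "initial"] += 1
    return counts

# Feed it the client log (stdoutdae.txt by default); two sample lines from above:
sample = (
    "26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): "
    "got ack for task 01jn15ab.25254.87357.11.38.159_3\n"
    "26-Mar-2020 11:58:03 [SETI@home] [sched_op] handle_scheduler_reply(): "
    "got ack for task 29ap12ae.16700.49233.3.30.255_2\n"
)
print(summarize_acks(sample))   # Counter({'resend': 1, 'initial': 1})
```

And the server-side half of the proposal itself (steer re-issued tasks toward the hosts with the highest average processing rate, APR) boils down to a selection rule like the one below. Again, this is only a sketch of the idea: the host IDs are borrowed from the workunit table above, the APR and reliability values are invented, and the real BOINC scheduler contains no such function.

```python
from dataclasses import dataclass

@dataclass
class Host:
    host_id: int
    apr_gflops: float   # average processing rate for the app version
    reliable: bool      # e.g. low recent error/timeout rate

def pick_host_for_resend(candidates: list[Host]) -> Host:
    """Prefer reliable hosts with the highest APR so re-issues clear quickly."""
    reliable = [h for h in candidates if h.reliable] or candidates
    return max(reliable, key=lambda h: h.apr_gflops)

# Host IDs borrowed from the workunit table above; APR/reliability invented.
hosts = [Host(8886774, 45.0, False), Host(6736251, 120.0, True), Host(8662921, 950.0, True)]
print(pick_host_for_resend(hosts).host_id)   # -> 8662921
```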