Anything relating to AstroPulse (2) tasks

Message boards : Number crunching : Anything relating to AstroPulse (2) tasks

Previous · 1 . . . 3 · 4 · 5 · 6 · 7 · 8 · 9 . . . 50 · Next

Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937239 - Posted: 26 May 2018, 8:26:16 UTC

1 resend for the 25th UTC.

Cheers.
ID: 1937239
Grant (SSSF)
Volunteer tester

Joined: 19 Aug 99
Posts: 13736
Credit: 208,696,464
RAC: 304
Australia
Message 1937345 - Posted: 27 May 2018, 0:38:03 UTC

Another Arecibo file has turned up, AP work being split now.
Grant
Darwin NT
ID: 1937345
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937437 - Posted: 28 May 2018, 0:59:34 UTC

35 arrived here for the 27th UTC.

Cheers.
ID: 1937437
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1937484 - Posted: 28 May 2018, 14:13:40 UTC - in response to Message 1937437.  

37 here for the 27th.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1937484
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937546 - Posted: 29 May 2018, 0:02:13 UTC

45 were picked up here for the 28th UTC.

Cheers.
ID: 1937546
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1937556 - Posted: 29 May 2018, 1:42:35 UTC - in response to Message 1937546.  

Only 25 for the 28th here.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1937556
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937614 - Posted: 30 May 2018, 0:14:42 UTC

14 for the 29th UTC.

Cheers.
ID: 1937614
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1937643 - Posted: 30 May 2018, 5:07:17 UTC

I thought Matt had mentioned something a few years ago about being able to detect this kind of thing server-side, so we don't even waste bandwidth and time on them?
2018-05-30 00:42:07 SETI@home Started download of ap_28my18aa_B3_P1_00267_20180529_18102.wu
2018-05-30 00:43:00 SETI@home Finished download of ap_28my18aa_B3_P1_00267_20180529_18102.wu
2018-05-30 00:43:00 SETI@home Starting ap_28my18aa_B3_P1_00267_20180529_18102.wu_0
2018-05-30 00:43:00 SETI@home Starting task ap_28my18aa_B3_P1_00267_20180529_18102.wu_0 using astropulse_v7 version 700
2018-05-30 00:43:03 SETI@home Computation for task ap_28my18aa_B3_P1_00267_20180529_18102.wu_0 finished
2018-05-30 00:43:05 SETI@home Started upload of ap_28my18aa_B3_P1_00267_20180529_18102.wu_0_r216410812_0
2018-05-30 00:43:08 SETI@home Finished upload of ap_28my18aa_B3_P1_00267_20180529_18102.wu_0_r216410812_0


If my understanding is correct, one of the channels of the alpha receiver is effectively a boolean on/off for when radar is present, and that boolean stream gets overlaid on the data, which means the splitters should, theoretically, know when a WU is going to be 100% blanked before it even gets split.

I know B3_P1 has been a problem channel for many years now, but I find 100% blanked APs on B5_P0 a lot, as well, and very seldom on others, but it does happen.

The point is, though: I thought there was some kind of server-side ability to detect 100% blanked stuff before it even goes into the field. It's possible Matt was just mentioning a wishlist kind of idea, but it seems like all the logic for it exists; it just never ended up being implemented.
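The server-side check being described could look something like this in miniature: if the radar on/off stream is stored alongside the data, the blanked fraction of a workunit is just the mean of its slice of the mask. This is a toy Python sketch; all names here (`blanked_fraction`, `is_fully_blanked`) are hypothetical, not the actual splitter or science-app code.

```python
def blanked_fraction(radar_mask, start, end):
    """Fraction of samples in [start, end) flagged as radar-blanked.

    radar_mask is a sequence of 0/1 values, one per sample, where 1
    means the radar-blanking signal was on for that sample.
    """
    window = radar_mask[start:end]
    if not window:
        return 0.0
    return sum(window) / len(window)

def is_fully_blanked(radar_mask, start, end):
    """True when every sample in the workunit's span is blanked."""
    return blanked_fraction(radar_mask, start, end) == 1.0

# A mask where the first workunit's span is entirely radar,
# the second is mostly clean.
mask = [1, 1, 1, 1, 0, 0, 1, 0]
assert is_fully_blanked(mask, 0, 4)
assert not is_fully_blanked(mask, 4, 8)
```

Anything with a fraction of 1.0 would never need to leave the server.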
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1937643
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937650 - Posted: 30 May 2018, 6:28:59 UTC

That is not constant, as it seems to vary depending on where the dish is focused at the time.

Cheers.
ID: 1937650
Cosmic_Ocean
Joined: 23 Dec 00
Posts: 3027
Credit: 13,516,867
RAC: 13
United States
Message 1937651 - Posted: 30 May 2018, 7:00:01 UTC

Yeah, I know B3_P1 works sometimes.

But my understanding of the logic behind the scenes is something like this:

- One of the channels for the Alpha receiver records when radar is happening.
- That radar on/off data is then overlaid on the data channels.
- The remove_RFI process (which isn't listed on the SSP anymore, but it used to be) then fills those areas of the "tapes" with pseudo-noise (as going to digital zeros suddenly would give false positives at the boundary edges of the silence during analysis).

But I know at least with AP, the blanked areas are known by the science app on our end and get skipped altogether.

So if the science app knows where the blanked areas are before crunching even starts, then that would mean that the *server* knows how much is blanked before the WU even gets sent out to anyone.

If my understanding is correct, the splitters themselves could pretty easily mark those WUs as 100% blanked in the DB without clients ever needing to attempt them, saving bandwidth and simplifying the process without any notable increase in server-side load: flag them at split time, log them in the DB, and move them straight to the file_deleter and db_purge queues.

As a visual example, this is sort of what I mean by knowing what portions are 100% blanked in advance:
radar: -----      --- - -  --------    ------ -- - - -   ---  ---- ---   --  -
data : 001010010010101001010101101010100101010101101111100010100010010010101001
WUs  :     |     |    |    |    |    |    |    |    |    |    |    |    |    |
combo: ----+     |--- + -  +----+--  | ---+-- -+ - -|-   +--  +--- +--  |--  +
100% : =====               ======

So you can see from the 100% line there that two WUs are 100% blanked, and the server side *knows* that as soon as the splitter runs through that tape and makes WUs out of it. So the data and logic already exist to mark them in the DB and discard them before they even go to the feeder.

Unless I'm just completely misunderstanding this, which is possible.
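The splitter-side filter sketched in the diagram could be modelled like this: walk the tape in workunit-sized chunks of the radar mask and separate the 100%-blanked chunks (which would go straight to the deleter queues) from the ones worth sending out. A toy Python sketch under assumed names; this is not the real splitter.

```python
def split_and_filter(radar_mask, wu_len):
    """Split a tape's radar mask into workunit-sized chunks.

    Returns (dispatch, discard): start offsets of WUs worth sending
    out, and start offsets of WUs that are 100% radar-blanked and
    could be marked in the DB without ever reaching a client.
    """
    dispatch, discard = [], []
    for start in range(0, len(radar_mask), wu_len):
        chunk = radar_mask[start:start + wu_len]
        if chunk and all(chunk):   # 100% blanked: the server already knows
            discard.append(start)
        else:
            dispatch.append(start)
    return dispatch, discard

# Matches the diagram's idea: two fully-blanked WUs are caught up front.
mask = [1] * 5 + [0, 1, 0, 0, 0] + [1] * 5 + [0] * 5
send, drop = split_and_filter(mask, wu_len=5)
assert drop == [0, 10]
assert send == [5, 15]
```

Only the `dispatch` list would ever reach the feeder, which is the bandwidth saving being argued for.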
Linux laptop:
record uptime: 1511d 20h 19m (ended due to the power brick giving up)
ID: 1937651
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1937748 - Posted: 31 May 2018, 0:20:30 UTC

24 randomly made their way here on the 30th UTC.

Cheers.
ID: 1937748
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1938185 - Posted: 4 Jun 2018, 0:12:35 UTC

Well, 1 resend wound up here on both the 31st and the 1st, while 8 turned up here for the 3rd UTC.

Cheers.
ID: 1938185
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1938199 - Posted: 4 Jun 2018, 3:04:44 UTC - in response to Message 1938185.  

8 here as well.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1938199
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1938276 - Posted: 5 Jun 2018, 0:06:36 UTC

Just 6 got here for the 4th UTC.

Cheers.
ID: 1938276
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1938282 - Posted: 5 Jun 2018, 1:14:05 UTC - in response to Message 1938276.  
Last modified: 5 Jun 2018, 1:14:27 UTC

Managed 15 here. Just shows the luck of the draw.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1938282
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1938359 - Posted: 6 Jun 2018, 0:01:29 UTC

16 for the 5th UTC.

Cheers.
ID: 1938359
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1938369 - Posted: 6 Jun 2018, 3:14:04 UTC - in response to Message 1938359.  

11 for me.

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1938369
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1938528 - Posted: 7 Jun 2018, 1:23:18 UTC

13 here for the 6th UTC.

Cheers.
ID: 1938528
Profile Bill G Special Project $75 donor
Joined: 1 Jun 01
Posts: 1282
Credit: 187,688,550
RAC: 182
United States
Message 1938536 - Posted: 7 Jun 2018, 3:21:19 UTC - in response to Message 1938528.  

Only 8 here, but I had an interesting thing happen... I had a fuse blow and my main computer continued on battery UPS, so I turned it off. But it seems that when I recently moved computers around, I left the cable from the UPS plugged into a different computer, and I didn't notice that it had turned that computer off. More lost computer time. I really need to get that power straightened out and that computer off an extension cord. (I wasn't thinking when I turned on the air compressor, but I was able to change the blades on the mower.)

SETI@home classic workunits 4,019
SETI@home classic CPU time 34,348 hours
ID: 1938536
Profile Wiggo
Joined: 24 Jan 00
Posts: 34744
Credit: 261,360,520
RAC: 489
Australia
Message 1938636 - Posted: 8 Jun 2018, 0:25:35 UTC

3 resends for the 7th UTC.

My rigs are all powered from separate power points, so I have no such problems here, Bill, and thankfully they all powered down cleanly when the power went out for 3hrs in the early hours this morning.

I had to reboot the modem though, as it was trying to connect before the local exchange came back online, so that meant a walk down to the house before sunrise to get it sorted.

Cheers.
ID: 1938636
©2024 University of California
 
SETI@home and Astropulse are funded by grants from the National Science Foundation, NASA, and donations from SETI@home volunteers. AstroPulse is funded in part by the NSF through grant AST-0307956.