I see lots of these messages in the CloudWatch logs:
```
[ERROR] 2024-11-20T00:07:21.519Z 8d16a43d-5de3-5078-b041-fb8a778d4d4c Could not process message 66d5c59e-ce97-41dd-a674-a294dcdd251e
Traceback (most recent call last):
  File "/var/task/main.py", line 254, in lambda_handler
    _ = process_scene(message[product_id])
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/task/main.py", line 196, in process_scene
    reference = get_sentinel2_stac_item(scene)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/var/task/sentinel2.py", line 77, in get_sentinel2_stac_item
    raise ValueError(
ValueError: 0 items for S2A_MSIL1C_20241119T023251_N0511_R103_T50KMV_20241119T035620 found in Sentinel-2 STAC collection: https://earth-search.aws.element84.com/v1//collections/sentinel-2-l1c
```
This is likely because Sinergise places the products in AWS and manages the SNS topics, while the STAC catalog is managed by Element84, so there is probably a delay between the SNS message from Sinergise and the item being indexed by Element84.
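For reference, here is a minimal sketch of the kind of lookup that fails during that indexing gap. The actual query in `sentinel2.py` isn't shown here, so the use of `pystac-client` and a search by item ID are assumptions, not the repo's real code:

```python
# Hypothetical sketch of the STAC lookup that fails while Element84's
# index lags behind the SNS notification. pystac-client and the
# search-by-ID approach are assumptions, not the repo's actual code.
from pystac_client import Client

STAC_API = "https://earth-search.aws.element84.com/v1"
PRODUCT = "S2A_MSIL1C_20241119T023251_N0511_R103_T50KMV_20241119T035620"

catalog = Client.open(STAC_API)
search = catalog.search(collections=["sentinel-2-l1c"], ids=[PRODUCT])
items = list(search.items())
if not items:
    # This is the condition that raises the ValueError seen in the logs.
    raise ValueError(
        f"0 items for {PRODUCT} found in Sentinel-2 STAC collection: "
        f"{STAC_API}/collections/sentinel-2-l1c"
    )
```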
It might be worth setting up an SQS queue for Sentinel-2 separate from the Landsat queue so that we can add a 15-minute delivery delay to Sentinel-2 messages. We could also use separate visibility timeouts for Landsat (previously 5 minutes) and Sentinel-2 instead of the uniform 8 hours we currently use; see the sketch below.
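A hedged `boto3` sketch of that setup (the queue names and exact attribute values here are illustrative assumptions, not taken from the actual infrastructure):

```python
# Illustrative boto3 sketch of two separate ingest queues. Queue names
# and timeout values are assumptions for the sake of the example.
import boto3

sqs = boto3.client("sqs")

# Dedicated Sentinel-2 queue: delay delivery by 15 minutes (the SQS
# maximum) so Element84's STAC index has time to catch up with the
# SNS notifications.
sqs.create_queue(
    QueueName="sentinel2-ingest-queue",
    Attributes={
        "DelaySeconds": "900",         # 15-minute delivery delay
        "VisibilityTimeout": "28800",  # keep the current 8-hour timeout
    },
)

# Landsat queue: no delivery delay, and a shorter visibility timeout
# (previously 5 minutes).
sqs.create_queue(
    QueueName="landsat-ingest-queue",
    Attributes={
        "DelaySeconds": "0",
        "VisibilityTimeout": "300",
    },
)
```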