...
once all acq-lists have been generated, facet on those acq-lists in Tosca:
you can query by the AOI id and then facet on the S1-GUNW-acq-list dataset
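For reference, the same facet can be run as a direct query against GRQ's Elasticsearch. A minimal sketch in Python; the endpoint URL, index pattern, and field names (dataset.raw, metadata.aoi.raw, the AOI id) are assumptions, so check your cluster's GRQ mappings:

    import requests

    GRQ_ES = "https://grq.example.com/es"  # hypothetical GRQ ES endpoint

    # Find all S1-GUNW-acq-list datasets tagged with a given AOI.
    query = {
        "query": {
            "bool": {
                "must": [
                    {"term": {"dataset.raw": "S1-GUNW-acq-list"}},
                    {"term": {"metadata.aoi.raw": "AOI_example"}},  # your AOI id
                ]
            }
        },
        "size": 1000,
    }

    resp = requests.post(GRQ_ES + "/grq_*_s1-gunw-acq-list/_search", json=query)
    resp.raise_for_status()
    acq_list_hits = resp.json()["hits"]["hits"]
    print(len(acq_list_hits), "acq-lists found for AOI")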
then submit the localizer jobs:
Action:
Standard Product S1-GUNW slc_localizer [develop]
Queue:
factotum-job_worker-aria-standard_product-slc_localizer
(this is an ASG and is now tagged with the ASG queue name aria-standard_product-localizer)
asf_ngap_download_queue:
factotum-job_worker-slc_sling-asf
(note: the other queue, slc-sling-extract-asf, is an ASG)
esa_download_queue:
factotum-job_worker-slc_sling-scihub
(note: the other queue, slc-sling-extract-scihub, is an ASG)
spyddder_sling_extract_version:
develop
Result: this job will iterate over the SLCs listed in the acq-list and submit a data sling job for each
these sling jobs take an acquisition-S1-IW_SLC as input, download the corresponding SLC from ASF (for relatively old acquisitions) or Scihub (for acquisitions less than two weeks old) to S3, and register the SLC as an S1-IW_SLC dataset in GRQ
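The ASF-vs-Scihub routing amounts to an age check on the acquisition's start time. A sketch of that decision; the two-week cutoff comes from the note above, while the function name and timestamp format are hypothetical:

    from datetime import datetime, timedelta, timezone

    # Acquisitions younger than ~2 weeks are generally not yet at ASF,
    # so they must come from Scihub (cutoff per the note above).
    ASF_AVAILABILITY_LAG = timedelta(weeks=2)

    def pick_download_endpoint(acq_start_time: str) -> str:
        """Return the archive to sling from, given a sensing start like 2019-05-01T12:34:56Z."""
        start = datetime.fromisoformat(acq_start_time.replace("Z", "+00:00"))
        age = datetime.now(timezone.utc) - start
        return "scihub" if age < ASF_AVAILABILITY_LAG else "asf"

    print(pick_download_endpoint("2019-05-01T12:34:56Z"))  # "asf" for an old acquisition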
Notes:
Acquisition lists are in one-to-one correspondence with ifg-cfgs
SLCs can be shared among acquisition lists and ifg-cfgs within an AOI. Therefore, #SLCs < #acq-lists = #ifg-cfgs within your AOI. As an example, one AOI had ~700 SLCs for ~2300 ifg-cfgs.
Say you run the localizer and see that a bunch of ifg-cfgs haven't been created even though most of the sling jobs completed successfully. You may have only a few SLCs left to download (far fewer than the number of missing ifg-cfgs). Check the unique SLCs in the ops report.
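To see the many-to-one relationship directly, count the unique SLC ids across the AOI's acq-lists. A sketch building on the query above; the metadata fields master_scenes/slave_scenes are assumptions:

    # Count unique SLCs referenced across all acq-lists in an AOI.
    # `acq_list_hits` comes from a GRQ query like the earlier sketch;
    # the master_scenes/slave_scenes field names are assumptions.
    def unique_slcs(acq_list_hits):
        slcs = set()
        for hit in acq_list_hits:
            md = hit["_source"]["metadata"]
            slcs.update(md.get("master_scenes", []))
            slcs.update(md.get("slave_scenes", []))
        return slcs

    # e.g. ~700 unique SLCs even though there are ~2300 acq-lists/ifg-cfgs
    print(len(unique_slcs(acq_list_hits)), "unique SLCs")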
If you have the proper trigger rules set up and activated, an ifg-cfg is created every time a new SLC is slinged and ingested into the system. This is a helpful trigger rule to have; it is currently called acqlist_evaluator.
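Conceptually, a Tosca trigger rule pairs a saved dataset query with a job to submit on ingest. A hedged sketch of the shape such a rule takes; every value below is illustrative, so inspect the actual acqlist_evaluator rule in Tosca:

    # Illustrative shape of a HySDS/Tosca trigger rule (values are examples,
    # not the real acqlist_evaluator definition).
    trigger_rule = {
        "rule_name": "acqlist_evaluator",
        "query": {"term": {"dataset.raw": "S1-IW_SLC"}},  # fire on new SLC ingest
        "job_type": "job-acqlist_evaluator:develop",      # hypothetical job id
        "queue": "factotum-job_worker-small",             # hypothetical queue
        "enabled": True,
    }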
if you ever need to download a particular SLC, facet on the corresponding acquisition-S1-IW_SLC (the SLC id is a substring of the acquisition id) and submit the following job:
Action:
Data Sling and Extract for {asf, Scihub} [develop]
Queue:
factotum-job_worker-{large,small}
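Since the SLC id appears as a substring of the acquisition id, a wildcard query can locate the acquisition to facet on. A sketch with an assumed endpoint and index pattern:

    import requests

    GRQ_ES = "https://grq.example.com/es"  # hypothetical GRQ ES endpoint
    slc_id = "S1A_IW_SLC__1SDV_EXAMPLE"    # placeholder SLC id

    # The acquisition id contains the SLC id as a substring.
    query = {"query": {"query_string": {"query": "*" + slc_id + "*"}}}
    resp = requests.post(GRQ_ES + "/grq_*_acquisition-s1-iw_slc/_search", json=query)
    resp.raise_for_status()
    for hit in resp.json()["hits"]["hits"]:
        print(hit["_id"])  # the acquisition-S1-IW_SLC to facet on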
sling jobs have a tendency to fail since certain products are archived (offline) at the DAACs
retrying/resubmitting the failed jobs a little later will usually complete them
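The "retry a little later" pattern in sketch form; submit_sling is a hypothetical stand-in for however you resubmit the job (e.g., via the UI or an API wrapper), and the interval is an arbitrary example:

    import time

    # Archived products at the DAAC often succeed on a later attempt,
    # so wait a while between retries.
    def submit_with_retries(submit_sling, attempts=3, wait_s=1800):
        for i in range(attempts):
            try:
                return submit_sling()
            except RuntimeError:  # stand-in for a failed-job signal
                if i == attempts - 1:
                    raise
                time.sleep(wait_s)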
...
Ensuring that SLCs for an AOI are downloaded en masse, i.e. that every acquisition list has all of its SLCs. Of course, getting all the SLCs onto the system is never fully attainable in practice. However, the more SLCs from an AOI that are downloaded, the more directly (and thus faster) the topsApp processing can be done, and the more quickly the purging can be done. See the completeness sketch after this list.
speed of processing staged SLCs (post enumeration) into GUNWs using topsApp; in other words, ensuring the topsApp jobs run quickly once the SLCs have been staged so that you are not waiting
this is most efficiently done with trigger rules on ifg-cfgs (see the topsApp section above)
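Completeness can be checked per acq-list: an acq-list is ready when all of its member SLCs are already ingested. A sketch under the same field-name assumptions as the earlier examples:

    # Report which acq-lists are complete (all member SLCs ingested).
    # `ingested_slc_ids` is the set of S1-IW_SLC ids currently in GRQ.
    def completeness_report(acq_list_hits, ingested_slc_ids):
        complete, incomplete = [], []
        for hit in acq_list_hits:
            md = hit["_source"]["metadata"]
            needed = set(md.get("master_scenes", [])) | set(md.get("slave_scenes", []))
            (complete if needed <= ingested_slc_ids else incomplete).append(hit["_id"])
        return complete, incomplete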
Purging SLCs that are no longer needed
Removing the datasets also purges the SLCs from S3
While it is beneficial to purge SLCs that are no longer needed, note that figuring out which are needed and which are not is complicated; this is why it's best to get as many of the required SLCs downloaded at once.
If you have a small number of GUNWs missing, it's best to purge the existing SLCs and repeat the pipeline on the acquisition lists/ifg-cfgs required to produce those GUNWs.
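One hedged way to identify purge candidates is a set difference between SLCs currently on the system and SLCs still referenced by outstanding acq-lists. Same field-name assumptions as the earlier sketches:

    # SLCs safe to purge = ingested SLCs that no outstanding (still to be
    # processed) acq-list references anymore.
    def purge_candidates(ingested_slc_ids, outstanding_acq_list_hits):
        still_needed = set()
        for hit in outstanding_acq_list_hits:
            md = hit["_source"]["metadata"]
            still_needed |= set(md.get("master_scenes", []))
            still_needed |= set(md.get("slave_scenes", []))
        return ingested_slc_ids - still_needed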
Removing Jobs after bad Facets
It is inevitable there will be times when jobs are submitted against a bad facet and need to be removed.
TopsApp Bug Documented (related to intermediate datasets)