...

<update pipeline diagram>

PGEs to port to Python3

  •  AOI Ops Report
  •  ariamh
  •  AOI Enumerator Submitter
  •  TopsApp
  •  GUNW Completeness Evaluator
  •  Blacklist Generation
  •  Greylist Generation
  •  AOI Based IPF Scraper
  •  AOI Based Acquisition Scraper
  •  Add Machine Tag
  •  Add User Tag
  •  Product Delivery
  •  Create AOI
  •  IPF ASF Scraper for Acq
  •  IPF SciHub Scraper for Acq
  •  Orbit Crawler
  •  Orbit Ingest
  •  LAR
  •  SLC sling
  •  Enumerator
  •  SAR avail
  •  SLCP-COR
  •  SLCP-PM
  •  COD
  •  Evaluator
  •  Localizer
  •  multi acquisition localizer

PGE I/O

The PGE test list is incomplete; we are currently focusing on the main pipeline.

Expand
title: PGE list

AOI based submission of acq scraper jobs

  • build: develop

  • input:

  • output:

AOI based submission of ipf scraper jobs

  • build: develop

  • input:

  • output:

AOI Enumerator Submitter

  • build: develop

  • input:

  • output:

AOI Merged Track Stitcher

  • build: python3

  • input:

  • output:

AOI Validate Acquisitions

  • build: develop

  • input:

  • output:

AWS Get Script

  • build: v0.0.8

  • input:

  • output:

Data Sling Extract for asf

  • build: python3

  • input:

  • output:

Data Sling Extract for Scihub

  • build: python3

  • input:

  • output:

Test Procedures

Expand
title: Enumerator Submitter

Test Description:

Run the enumerator submitter as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

Github Repo: https://github.com/aria-jpl/aoi_enumerator_submitter

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (a single AOI)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
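The C-cluster comparison described above amounts to a set comparison over acquisition IDs. The sketch below is an illustrative helper, not part of the PGE; the acquisition IDs and the way the two lists are obtained (e.g. faceted exports from each cluster) are hypothetical:

```python
def diff_acquisition_lists(e_cluster_ids, c_cluster_ids):
    """Compare acquisition-list contents from the two clusters.

    Returns (missing_on_e, extra_on_e) relative to the C-cluster run;
    both must be empty for the pass criteria to hold.
    """
    e, c = set(e_cluster_ids), set(c_cluster_ids)
    return sorted(c - e), sorted(e - c)


# Hypothetical acquisition IDs for illustration only.
missing, extra = diff_acquisition_lists(
    ["acq-001", "acq-002", "acq-003"],
    ["acq-001", "acq-002", "acq-003"],
)
print(missing, extra)  # both empty => the lists match
```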

Expand
title: Acquisition Enumerator

Test Description:

Run the AOI track acquisition enumerator as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (S1-AUX_POEORB)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Expand
title: Localizer

Test Description:

Run the localizer as an on-demand job to generate the sling jobs for the missing slc's.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • All missing SLCs identified from the acquisition-list input set have sling jobs spun up for them.

  • No sling jobs are generated for SLCs already in the system.

Input dataset:  <link to dataset used> (acquisition lists)

Output dataset: <link to slc's generated> (sling jobs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
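The two pass criteria above reduce to a set difference: a sling job is needed exactly for the SLCs referenced by the acquisition lists that are not yet in the system. A minimal sketch, with hypothetical SLC IDs:

```python
def slcs_needing_sling(acq_list_slcs, slcs_in_system):
    """SLC IDs that require sling jobs: referenced by the acquisition
    lists but missing from the system (S3/ES)."""
    return sorted(set(acq_list_slcs) - set(slcs_in_system))


# SLC-2 is already ingested, so only SLC-1 and SLC-3 get sling jobs.
print(slcs_needing_sling(["SLC-1", "SLC-2", "SLC-3"], ["SLC-2"]))
# -> ['SLC-1', 'SLC-3']
```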

Expand
title: Evaluator

Test Description:

Run the evaluator as an on-demand job to ensure all expected ifg-cfg's are generated.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • Generated ifg-cfg datasets match those generated on C-cluster over the same set of SLCs.

Input dataset:  <link to dataset used> (slc's)

Output dataset: <link to slc's generated> (ifg-cfgs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Expand
title: TopsApp

Test Description:

Run the topsapp PGE as an on-demand job to generate GUNW's.

Github Repo: https://github.com/aria-jpl/ariamh.git

Pass Criteria:

  • GUNWs are generated for all input ifg-cfgs and are compared to those generated over the same ifg-cfg set on C-cluster.

Input dataset:  <link to dataset used> (ifg-cfg)

Output dataset: <link to slc's generated> (s1-gunw)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Expand
title: Completeness Evaluator

Test Description:

Run the completeness evaluator as an on-demand job to generate the aoi_track dataset if the track is complete.

Github Repo: https://github.com/aria-jpl/standard_product_completeness_evaluator.git

Pass Criteria:

  • If any GUNW is missing over a single date-pair spatial extent of the AOI, the aoi_track dataset is not generated.

  • If all GUNWs over a single date-pair spatial extent of the AOI exist, the aoi_track dataset is generated.

Input dataset:  <link to dataset used> (gunw)

Output dataset: <link to slc's generated> (aoi track)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
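The pass criteria amount to a completeness predicate: the aoi_track dataset is produced only when the expected GUNW set over a date-pair's spatial extent is fully covered. A minimal sketch with hypothetical GUNW IDs:

```python
def should_generate_aoi_track(expected_gunws, existing_gunws):
    """True only if every expected GUNW over the date-pair's spatial
    extent of the AOI already exists in the system."""
    return set(expected_gunws) <= set(existing_gunws)


expected = ["GUNW-1", "GUNW-2"]
print(should_generate_aoi_track(expected, ["GUNW-1", "GUNW-2"]))  # True
print(should_generate_aoi_track(expected, ["GUNW-1"]))            # False
```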

Expand
title: Sling/Extract (ASF)

Test Description:

Ensures that this PGE, when run as an on-demand job, pulls SLC products from the ASF endpoint and ingests them into S3/ES.

Pass Criteria:

  • Job must complete successfully

  • SLCs must be registered in S3

  • SLCs must be found when faceting on them in tosca

Input dataset:  <link to dataset used>

Output dataset: <link to slc's generated>

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Expand
title: Sling/Extract (SciHub)

Test Description:

Ensures that this PGE, when run as an on-demand job, pulls SLC products from the SciHub endpoint and ingests them into S3/ES.

Pass Criteria:

  • Job must complete successfully

  • SLCs must be registered in S3

  • SLCs must be found when faceting on them in tosca

Input dataset:  <link to dataset used>

Output dataset: <link to slc's generated>

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

...

  1. Create a python3 virtual environment:
     virtualenv env3 -p python3

  2. Start the python3 virtual environment:
     source ~/env3/bin/activate

  3. Pull the contents of the repo on a new branch named python3.

  4. Run futurize over the contents of the repo:
     pip install future
     cd <repo>
     futurize -w -n -p .

     The output will show what has been changed.

  5. ssh into mozart on the E cluster and add the python3-converted PGE with sds:
     sds ci add_job -k -b develop https://github.com/aria-jpl/standard_product_report.git  s3

  6. Go to the E-cluster Jenkins and click Configure. Specify the branch as python3 and build. Check the Dockerfile and change FROM to the latest branch. Wait for the build to complete successfully; this may take a few minutes. The job will be published to the E cluster automatically.

  7. Go to the E cluster and run the job. Step into the container if you need to debug.

  8. Once the job runs successfully, push the changes to dev.
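As a rough sketch of the edits futurize makes in step 4, here is illustrative Python 3 output with the Python 2 originals shown as comments. The exact fixers applied depend on the futurize stage and flags, so treat this only as an example of the kind of change to expect in the diff:

```python
from __future__ import print_function, division  # futurize commonly adds these

# Python 2:  print "processed %d products" % n
n = 3
print("processed %d products" % n)

# Python 2:  half = n / 2   (implicit integer division)
# After porting, use // where integer semantics were intended.
half = n // 2

# Python 2:  for k, v in d.iteritems(): ...
# .iteritems() is gone in Python 3; .items() works in both.
d = {"slc_count": n}
pairs = list(d.items())
print(half, pairs)
```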

...