
We are working on porting our Python 2 PGEs to Python 3. Individual PGEs will be tested and validated against equivalent datasets on C-cluster. Once all PGEs have been validated, a system-level test of the pipeline will be performed to ensure all trigger rules are in place and all expected products are generated.

Pipeline

<update pipeline diagram>

PGEs Ported to Python 3

  • AOI Ops Report

  • ariamh

  • SLC sling

  • enumerator

  • SAR avail

  • SLCP-COR

  • SLCP-PM

  • COD

  • evaluator

  • localizer

  • multi acquisition localizer

PGE I/O

The PGE test list is incomplete; we are currently focusing on the main pipeline.

 PGE list

AOI based submission of acq scraper jobs

  • build: develop

  • input:

  • output:

AOI based submission of ipf scraper jobs

  • build: develop

  • input:

  • output:

AOI Enumerator Submitter

  • build: develop

  • input:

  • output:

AOI Merged Track Stitcher

  • build: python3

  • input:

  • output:

AOI Validate Acquisitions

  • build: develop

  • input:

  • output:

AWS Get Script

  • build: v0.0.8

  • input:

  • output:

Data Sling Extract for asf

  • build: python3

  • input:

  • output:

Data Sling Extract for Scihub

  • build: python3

  • input:

  • output:

Test Procedures


Test Description:

Run the enumerator submitter as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

Github Repo: https://github.com/aria-jpl/aoi_enumerator_submitter

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (a single AOI)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
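The cross-cluster comparison above can be sketched as a simple set difference over dataset IDs. This is a hypothetical illustration of the validation step, not code from the PGE; the acquisition-list IDs are made-up placeholders.

```python
# Hypothetical sketch: compare acquisition-list IDs produced on the two
# clusters. IDs below are illustrative placeholders, not real cluster output.

def compare_acq_lists(e_cluster_ids, c_cluster_ids):
    """Return (missing_on_e, extra_on_e) relative to the C-cluster baseline."""
    e_set, c_set = set(e_cluster_ids), set(c_cluster_ids)
    return sorted(c_set - e_set), sorted(e_set - c_set)

missing, extra = compare_acq_lists(
    ["acq-list-T100-a", "acq-list-T100-b"],
    ["acq-list-T100-a", "acq-list-T100-b", "acq-list-T100-c"],
)
# The test passes only when both lists come back empty.
```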



Test Description:

Run the AOI track acquisition enumerator as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (S1-AUX_POEORB)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>



Test Description:

Run the localizer as an on-demand job to generate the sling jobs for the missing slc's.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • Sling jobs are spun up for every missing SLC identified from the input acquisition lists.

  • No sling jobs are generated for SLCs already in the system.

Input dataset:  <link to dataset used> (acquisition lists)

Output dataset: <link to slc's generated> (sling jobs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
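The two pass criteria above boil down to submitting sling jobs for exactly the SLCs referenced by the acquisition lists that are not already in the system. A minimal sketch, with illustrative SLC IDs (not real products):

```python
# Hypothetical sketch of the localizer pass criteria: sling jobs should be
# submitted only for SLCs referenced by the acquisition lists but not
# already in the system. SLC IDs are illustrative.

def slcs_needing_sling(acq_list_slcs, slcs_in_system):
    return sorted(set(acq_list_slcs) - set(slcs_in_system))

jobs = slcs_needing_sling(
    ["S1A_SLC_001", "S1A_SLC_002", "S1A_SLC_003"],
    ["S1A_SLC_002"],
)
# S1A_SLC_002 is already ingested, so no sling job is created for it.
```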



Test Description:

Run the evaluator as an on-demand job to ensure all expected ifg-cfg's are generated.

Github Repo: https://github.com/aria-jpl/standard_product.git

Pass Criteria:

  • The generated ifg-cfg datasets match those generated on C-cluster over the same set of SLCs.

Input dataset:  <link to dataset used> (slc's)

Output dataset: <link to slc's generated> (ifg-cfgs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>



Test Description:

Run the topsapp PGE as an on-demand job to generate GUNW's.

Github Repo: https://github.com/aria-jpl/ariamh.git

Pass Criteria:

  • GUNWs for all input ifg-cfgs are generated and match those generated on C-cluster over the same ifg-cfg set.

Input dataset:  <link to dataset used> (ifg-cfg)

Output dataset: <link to slc's generated> (s1-gunw)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>



Test Description:

Run the completeness evaluator as an on-demand job to generate the aoi_track dataset if the track is complete.

Github Repo: https://github.com/aria-jpl/standard_product_completeness_evaluator.git

Pass Criteria:

  • If any GUNW is missing over a single date-pair spatial extent of the AOI, the aoi_track dataset is not generated.

  • If all GUNWs over a single date-pair spatial extent of the AOI exist, the aoi_track dataset is generated.

Input dataset:  <link to dataset used> (gunw)

Output dataset: <link to slc's generated> (aoi track)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
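The pass criteria above reduce to an all-or-nothing check: the aoi_track dataset should appear only when every expected GUNW over a date-pair's spatial extent exists. A hypothetical sketch of that decision (GUNW IDs are illustrative):

```python
# Hypothetical sketch of the completeness check: the aoi_track dataset is
# generated only when every expected GUNW exists. IDs are illustrative.

def aoi_track_generated(expected_gunws, existing_gunws):
    return set(expected_gunws) <= set(existing_gunws)

complete = aoi_track_generated(["GUNW-1", "GUNW-2"], ["GUNW-1", "GUNW-2", "GUNW-9"])
incomplete = aoi_track_generated(["GUNW-1", "GUNW-2"], ["GUNW-1"])
```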



Test Description:

Ensures this PGE pulls SLC products from ASF endpoint and ingests them into S3/ES when PGE is run as an on-demand job.

Pass Criteria:

  • Job must complete successfully

  • SLCs must be registered in S3

  • SLCs must be found when faceting on them in tosca

Input dataset:  <link to dataset used>

Output dataset: <link to slc's generated>

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
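The "SLCs must be registered in S3" criterion can be spot-checked by mapping each pulled SLC ID to a registered object key. This is a hypothetical sketch only; the bucket layout and IDs below are illustrative, not the real dataset repository structure.

```python
# Hypothetical sketch of the sling pass criteria: every pulled SLC ID should
# map to a registered S3 key before the job is considered successful.
# The key layout and SLC IDs are illustrative.

def unregistered_slcs(pulled_slcs, s3_keys):
    return [s for s in pulled_slcs
            if not any(key.endswith(s + ".zip") for key in s3_keys)]

bad = unregistered_slcs(
    ["S1A_IW_SLC_0001", "S1A_IW_SLC_0002"],
    ["datasets/slc/S1A_IW_SLC_0001.zip"],
)
# A non-empty result means ingestion failed for those SLCs.
```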



Test Description:

Ensures this PGE pulls SLC products from SciHub endpoint and ingests them into S3/ES when PGE is run as an on-demand job.

Pass Criteria:

  • Job must complete successfully

  • SLCs must be registered in S3

  • SLCs must be found when faceting on them in tosca

Input dataset:  <link to dataset used>

Output dataset: <link to slc's generated>

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Process

For this example, we port the AOI Ops Report PGE.

  1. Create a Python 3 virtual environment:
     virtualenv env3 -p python3

  2. Activate the Python 3 virtual environment:
     source ~/env3/bin/activate

  3. Pull the contents of the repo onto a new branch named python3.

  4. Run futurize over the contents of the repo:
     pip install future
     cd <repo>
     futurize -w -n -p .

     The output will show what has been changed.
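To make the futurize step concrete, here is an illustrative example (not taken from the actual repos) of the kind of rewrite it applies:

```python
# Illustrative example of a futurize rewrite. In Python 2 the body read:
#
#     print "found %d products" % len(prods)
#     for k, v in prods.iteritems():
#         ...
#
# After futurize (which also prepends `from __future__ import print_function`
# so the file still runs on Python 2) it becomes:

prods = {"S1-GUNW": 3, "S1-SLC": 7}
print("found %d products" % len(prods))       # print statement -> print() function
counts = sorted(v for k, v in prods.items())  # iteritems() -> items()
```

Reviewing the diff that futurize writes (the -w flag edits files in place) is the main manual step before building.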

  5. SSH into the e-cluster Mozart instance to add the Python 3 converted PGE: sds ci add_job -k -b develop https://github.com/aria-jpl/standard_product_report.git s3

  6. Go to the e-cluster Jenkins and click Configure. Specify the python3 branch and build. Check the Dockerfile and change FROM to the latest base image. Wait for the build to complete successfully; it may take a few minutes. The job will be published to the e-cluster automatically.

  7. Go to the e-cluster and run the job. Step into the container if you need to debug.

  8. Once the job runs successfully, push the changes to dev.


