Python 2 to Python 3 Port

We are porting our Python 2 PGEs to Python 3. Individual PGEs will be tested and validated against equivalent datasets on C-cluster. Once all PGEs have been validated, a system-level test of the pipeline will be performed to ensure all trigger rules are in place and all expected products are generated.

Pipeline

<update pipeline diagram>

PGEs to port to Python 3

AOI Ops Report
AOI Enumerator Submitter
TopsApp
GUNW Completeness Evaluator
Blacklist Generation
Greylist Generation
AOI Based IPF Scraper
AOI Based Acquisition Scraper
Add Machine Tag
Add User Tag
Product Delivery
Create AOI
IPF ASF Scraper for Acq
IPF SciHub Scraper for Acq
Orbit Crawler
Orbit Ingest
LAR
SLC sling
Enumerator
SAR avail
SLCP-COR
SLCP-PM

 

PGE I/O

The PGE test list is incomplete; we are currently focusing on the main pipeline.

 

AOI based submission of acq scraper jobs

  • build: develop

  • input:

  • output:

AOI based submission of ipf scraper jobs

  • build: develop

  • input:

  • output:

AOI Enumerator Submitter

  • build: develop

  • input:

  • output:

AOI Merged Track Stitcher

  • build: python3

  • input:

  • output:

AOI Validate Acquisitions

  • build: develop

  • input:

  • output:

AWS Get Script

  • build: v0.0.8

  • input:

  • output:

Data Sling Extract for asf

  • build: python3

  • input:

  • output:

Data Sling Extract for Scihub

  • build: python3

  • input:

  • output:

Test Procedures

Test Description:

Run the enumerator submitter as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

GitHub Repo: aria-jpl/aoi_enumerator_submitter (submits a given AOI's enumeration jobs for all intersecting POEORBs)

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (a single AOI)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
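The C-cluster comparison described above can be scripted as a set difference over dataset IDs. A minimal sketch, assuming the acquisition lists from each cluster have been exported to text files with one dataset ID per line (the file format and function names here are assumptions for illustration, not part of the PGE):

```python
# Compare acquisition-list dataset IDs generated on e-cluster vs. C-cluster.
# One-ID-per-line export files are an assumption for this sketch.

def load_ids(path):
    """Read one dataset ID per line, ignoring blank lines."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

def compare(e_path, c_path):
    """Return matched IDs plus the discrepancies in each direction."""
    e_ids, c_ids = load_ids(e_path), load_ids(c_path)
    return {
        "matched": sorted(e_ids & c_ids),
        "missing_on_e": sorted(c_ids - e_ids),  # generated on C-cluster only
        "extra_on_e": sorted(e_ids - c_ids),    # generated on e-cluster only
    }
```

A test passes when both `missing_on_e` and `extra_on_e` come back empty.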

Test Description:

Run the AOI track acquisition enumerator as an on-demand job to generate the acquisition lists. Compare output acquisition lists to those generated over the same AOI on C-cluster.

GitHub Repo: aria-jpl/standard_product

Pass Criteria:

  • All expected acquisition lists were successfully generated

Input dataset:  <link to dataset used> (S1-AUX_POEORB)

Output dataset: <link to slc's generated> (acquisition lists)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Test Description:

Run the localizer as an on-demand job to generate sling jobs for the missing SLCs.

GitHub Repo: aria-jpl/standard_product

Pass Criteria:

  • All missing SLCs identified from the input acquisition lists have sling jobs spun up for them.

  • No sling jobs are generated for SLCs already in the system.

Input dataset:  <link to dataset used> (acquisition lists)

Output dataset: <link to slc's generated> (sling jobs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>
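Both pass criteria above reduce to two set operations. A minimal sketch of the check, assuming the SLC IDs from each source have been collected into lists (the function and argument names are hypothetical):

```python
def validate_sling_jobs(missing_slcs, existing_slcs, sling_job_slcs):
    """Check the localizer pass criteria over SLC ID collections.

    Returns (no_job, redundant):
      no_job    - missing SLCs with no sling job spun up (criterion 1 violations)
      redundant - sling jobs for SLCs already in the system (criterion 2 violations)
    """
    missing, existing, jobs = map(set, (missing_slcs, existing_slcs, sling_job_slcs))
    no_job = missing - jobs
    redundant = jobs & existing
    return no_job, redundant
```

The test passes when both returned sets are empty.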

Test Description:

Run the evaluator as an on-demand job to ensure all expected ifg-cfgs are generated.

GitHub Repo: aria-jpl/standard_product

Pass Criteria:

  • Generated ifg-cfg datasets match those generated on C-cluster over the same set of SLCs.

Input dataset:  <link to dataset used> (SLCs)

Output dataset: <link to slc's generated> (ifg-cfgs)

Summary:

<summary of test results; quick description of any validation done with datasets from c-cluster>

Process

For this example, we are porting the AOI Ops Report.

  1. Create a Python 3 virtual environment.
     virtualenv env3 -p python3

  2. Activate the Python 3 virtual environment.
    source ~/env3/bin/activate

  3. Pull the contents of the repo on a new branch named python3.

  4. Run futurize over the contents of the repo.
    pip install future
    cd <repo>
    futurize -w -n -p .

    The output will show what has been changed.

  5. ssh into e-mozart and add the Python 3 converted PGE using this command:
    sds ci add_job -k -b develop https://github.com/aria-jpl/standard_product_report s3

  6. Go to e-Jenkins, click Configure, set the branch to python3, and build. Check the Dockerfile and change FROM to the latest base image. Wait for the build to complete successfully; it may take a few minutes, and the job will be published to e-cluster automatically.

  7. Go to e-cluster and run the job. Step into the container if you need to debug.

  8. Once the job runs successfully, push the changes to dev.
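To illustrate step 4, here is the kind of change futurize makes. This is a hypothetical snippet, not code from the actual PGE: Python 2 `print` statements and `dict.iteritems()` become `print()` calls and `dict.items()`:

```python
# After futurize: Python 3 compatible version of a typical Python 2 snippet.
# Original (Python 2):
#   counts = {"slc": 3, "acq": 5}
#   for name, n in counts.iteritems():
#       print "%s: %d" % (name, n)

from __future__ import print_function  # added by futurize for py2/py3 compat

counts = {"slc": 3, "acq": 5}
for name, n in sorted(counts.items()):  # iteritems() -> items()
    print("%s: %d" % (name, n))         # print statement -> print() function
```

Changes like these show up in the futurize output from step 4; review them before building in Jenkins.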