S1-GCOV pipeline
Introduction
This guide describes how to set up an S1-GCOV pipeline in a preconfigured HySDS/ARIA system. The pipeline enables users to request geocoded covariance matrix (GCOV) products on demand, using stacks produced by the TOPS Stack processor. At its core is a PGE (https://github.com/aria-jpl/s1-gcov-plant) based on the Polarimetric Interferometric Lab and Analysis Tool (PLAnT, https://gitlab.com/plant/plant).
Note: The B-cluster is used in the discussion below; however, all steps described still apply if another cluster is used.
Build PGE
First, a Docker image of the PGE must be built and published through Jenkins. Here are the steps:
Log in to the B-cluster:
$ ssh -i key.pem ops@x.x.x.x
Then configure Jenkins to pull from the PGE source repository on GitHub:
$ sds ci add_job -k -b master https://github.com/aria-jpl/s1-gcov-plant.git s3
Then log in to http://b-ci.grfn.hysds.io:8080/login and find the job's entry page at:
http://b-ci.grfn.hysds.io:8080/job/ops-bcluster_container-builder_aria-jpl_s1-gcov-plant_master/
On this page, click “Build Now“; a progress bar will appear indicating the status of the build.
When it completes, click “Console Output“ to verify that a tarball was indeed published to AWS S3.
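For scripting around the build step, the Jenkins job URL can be derived from the repository URL and branch. This is a minimal sketch assuming the naming pattern visible in the URL above (`<prefix>_<org>_<repo>_<branch>`); the `ops-bcluster_container-builder` prefix is assumed to be specific to this cluster's configuration and may differ elsewhere.

```python
# Sketch: derive the Jenkins job URL from the repo URL and branch,
# following the naming pattern observed on the B-cluster. The
# JOB_PREFIX value is an assumption (cluster-specific configuration).

from urllib.parse import urlparse

JENKINS_BASE = "http://b-ci.grfn.hysds.io:8080"
JOB_PREFIX = "ops-bcluster_container-builder"  # assumed cluster-specific


def jenkins_job_url(repo_url: str, branch: str = "master") -> str:
    """Build the Jenkins job URL for a CI job added via `sds ci add_job`."""
    # Strip a trailing ".git" and split the GitHub path into org/repo.
    path = urlparse(repo_url).path.strip("/")
    if path.endswith(".git"):
        path = path[: -len(".git")]
    org, repo = path.split("/")
    return f"{JENKINS_BASE}/job/{JOB_PREFIX}_{org}_{repo}_{branch}/"


print(jenkins_job_url("https://github.com/aria-jpl/s1-gcov-plant.git"))
# -> http://b-ci.grfn.hysds.io:8080/job/ops-bcluster_container-builder_aria-jpl_s1-gcov-plant_master/
```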
How to use
First, log in to https://b-datasets.grfn.hysds.io/search/ and select a stack dataset; a page will appear as shown:
On this page, click the “On-Demand” button. A popup form will appear as follows, in which one needs to provide a “Tag” for later tracking, select “S1-GCOV-PLANT Processor” as the “Action”, and choose a “Queue” and a “Priority”:
Next, press the “Process Now” button in the popup above and a job will be started in the HySDS pipeline.
Now log in to https://b-jobs.grfn.hysds.io/figaro/ and search using the “Tag” provided earlier; a page like this will appear:
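The Figaro tag search can also be expressed programmatically against the job index backing this UI. The following is a hedged sketch only: the exact field name (`tags` here) is an assumption and may differ on a given cluster.

```python
# Sketch: an Elasticsearch-style query body equivalent to searching
# Figaro by the user-supplied "Tag". The field name "tags" is an
# assumption; inspect the cluster's job index mapping to confirm it.

import json


def tag_query(tag: str) -> dict:
    """Return a query body matching jobs submitted with the given tag."""
    return {"query": {"term": {"tags": tag}}}


print(json.dumps(tag_query("my-gcov-run"), indent=2))
```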
This page shows the job's pipeline information and status. When the job completes, click the “work directory“ link on this page and a new page will appear as shown:
On that page, the folder “s1-gcov” contains the S1-GCOV product.
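If the work directory has been copied locally, the product folder can be located programmatically. A minimal sketch, assuming only that the product sits in a subfolder named "s1-gcov" as shown on the page above; the demonstration directory created here is illustrative only.

```python
# Sketch: locate the "s1-gcov" product folder inside a locally
# downloaded job work directory. Only the folder name "s1-gcov" is
# taken from the guide; everything else is illustrative.

import tempfile
from pathlib import Path


def find_gcov_product(workdir: Path) -> Path:
    """Return the s1-gcov product folder inside a job work directory."""
    product = workdir / "s1-gcov"
    if not product.is_dir():
        raise FileNotFoundError(f"no s1-gcov product under {workdir}")
    return product


# Demonstration on a throwaway directory mimicking a work directory.
with tempfile.TemporaryDirectory() as tmp:
    work = Path(tmp)
    (work / "s1-gcov").mkdir()
    print(find_gcov_product(work).name)  # -> s1-gcov
```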