
Introduction

This is a guide to setting up an S1-GCOV pipeline in a preconfigured HySDS/ARIA system. The tool enables users to request geocoded covariance matrix (GCOV) products on demand, using stacks produced by the TOPS Stack Processor. At its core is a PGE (https://github.com/aria-jpl/s1-gcov-plant) based on the Polarimetric Interferometric Lab and Analysis Tool (PLAnT, https://gitlab.com/plant/plant).

Note: The B-cluster is used in the discussion below; however, all steps described apply equally to other clusters.

Build PGE

First, a Docker image of the PGE must be built and published through Jenkins. Here are the steps:

  • Log in to the B-cluster:

$ ssh -i key.pem ops@x.x.x.x

  • Configure Jenkins to pull from the PGE source repository on GitHub:

$ sds ci add_job -k -b master https://github.com/aria-jpl/s1-gcov-plant.git s3

  • Then log in to http://b-ci.grfn.hysds.io:8080/login and navigate to the job page at

http://b-ci.grfn.hysds.io:8080/job/ops-bcluster_container-builder_aria-jpl_s1-gcov-plant_master/

On this page, click “Build Now“; a progress bar will appear to indicate the status of the build.

When it completes, click on “Console Output“ to confirm that the container tarball exists on AWS S3.
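
The same check can also be done from the command line by listing the code bucket. This is only a sketch: the bucket name below is a placeholder and depends on how the cluster was configured.

# Placeholder bucket name -- substitute the code bucket configured for this cluster.
$ aws s3 ls s3://<code-bucket>/ --recursive | grep s1-gcov-plant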

How to use

First, log in to Tosca and select a stack dataset to open its dataset page.

On this page, click the “On-Demand” button. A popup form will appear in which one needs to provide a “Tag” for later tracking, select “S1-GCOV-PLANT Processor“ as the “Action”, and choose a “Queue” and a “Priority”.

Next, press the “Process Now” button in the popup, and a job will be started in the HySDS pipeline.
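
The job can then be tracked in Figaro (the HySDS job-management interface) by faceting on the tag entered above; once it completes, the resulting S1-GCOV product is indexed in GRQ and becomes facetable in Tosca. As a rough sketch, the GRQ Elasticsearch backend can also be queried directly; the host, index alias, and dataset naming below are assumptions that vary per deployment.

# Hypothetical query -- adjust host, index alias, and dataset pattern to your deployment.
$ curl -s 'http://<grq-host>:9200/grq/_search?q=dataset:*gcov*&size=5&pretty'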
