Related GitHub Repos and Tickets

...

  • Code Block
    raise Exception('Could not determine a suitable burst offset')
  • There must be only one track in your SLC inputs (see the sketch below)

Screenshots: correct facet SLC inputs vs. incorrect facet SLC inputs
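To avoid the burst-offset error above, it can help to sanity-check the faceted inputs before submitting. The sketch below is illustrative only and assumes each SLC dataset document exposes its track under a track_number metadata field; the field name may differ in your catalog.

Code Block
# Illustrative pre-submission check: all faceted SLC inputs must come from a
# single track. The "track_number" metadata field name is an assumption and
# may differ in your dataset schema.

def check_single_track(slc_docs):
    """slc_docs: list of faceted SLC dataset documents (dicts)."""
    tracks = {doc["_source"]["metadata"].get("track_number") for doc in slc_docs}
    if len(tracks) != 1:
        raise Exception("SLC inputs span multiple tracks: %s" % sorted(tracks, key=str))
    return tracks.pop()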

...

Job Inputs:

  • Bbox (required); see the validation sketch after this list

    • Format: min_lat max_lat min_lon max_lon
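A minimal sketch of what the expected ordering implies. The validate_bbox helper is illustrative only and is not part of run_stack.sh.

Code Block
# Illustrative bbox validation for the "min_lat max_lat min_lon max_lon" format.
# Not part of run_stack.sh; shown only to make the expected ordering explicit.

def validate_bbox(min_lat, max_lat, min_lon, max_lon):
    if not (-90 <= min_lat < max_lat <= 90):
        raise ValueError("latitudes must satisfy -90 <= min_lat < max_lat <= 90")
    if not (-180 <= min_lon < max_lon <= 180):
        raise ValueError("longitudes must satisfy -180 <= min_lon < max_lon <= 180")
    return [min_lat, max_lat, min_lon, max_lon]

# e.g. validate_bbox(34.0, 35.5, -118.5, -117.0)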

...

CI Integration (Jenkins)

HySDS-io and Jobspec-io

hysds-io.json.topsstack

Code Block
{
  "label": "topsStack Processor",
  "submission_type": "individual",
  "allowed_accounts": [ "ops" ],
  "action-type":  "both",
  "params": [
    {
      "name": "min_lat",
      "from": "submitter",
      "type": "number",
      "optional": false
    },
    {
      "name": "max_lat",
      "from": "submitter",
      "type": "number",
      "optional": false
    },
    {
      "name": "min_lon",
      "from": "submitter",
      "type": "number",
      "optional": false
    },
    {
      "name": "max_lon",
      "from": "submitter",
      "type": "number",
      "optional": false
    },
    {
      "name":"localize_products",
      "from":"dataset_jpath:",
      "type":"text",
      "lambda" : "lambda met: get_partial_products(met['_id'],get_best_url(met['_source']['urls']),[met['_id']+'.zip'])"
    }
  ]
}
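The localize_products parameter is filled in by the lambda above for each faceted dataset. The sketch below only illustrates how such a lambda is evaluated against a dataset document; the stub implementations of get_best_url and get_partial_products are assumptions meant to show the input/output shapes, not the real helpers used at submission time.

Code Block
# Illustrative evaluation of the hysds-io lambda against one dataset document.
# get_best_url / get_partial_products are stubbed here purely to show the
# shapes involved; the real helpers are provided by the HySDS submission code.

def get_best_url(urls):
    # Assumption: prefer an S3-style URL when available, else take the first URL.
    s3 = [u for u in urls if u.startswith("s3://") or "s3-" in u]
    return (s3 or urls)[0]

def get_partial_products(dataset_id, base_url, files):
    # Assumption: build one localize URL per requested file under the dataset dir.
    return ["%s/%s" % (base_url.rstrip("/"), f) for f in files]

met = {
    "_id": "S1A_IW_SLC_EXAMPLE",
    "_source": {"urls": ["http://example.com/datasets/S1A_IW_SLC_EXAMPLE"]},
}

fn = lambda met: get_partial_products(
    met["_id"], get_best_url(met["_source"]["urls"]), [met["_id"] + ".zip"]
)
print(fn(met))  # ['http://example.com/datasets/S1A_IW_SLC_EXAMPLE/S1A_IW_SLC_EXAMPLE.zip']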

...

Code Block
{
  "recommended_queues": ["jjacob_stack"],
  "command": "/home/ops/verdi/ops/topsstack/run_stack.sh",
  "imported_worker_files": {
    "/home/ops/.netrc": "/home/ops/.netrc",
    "/home/ops/.aws": "/home/ops/.aws"
  },
  "soft_time_limit": 10800,
  "time_limit": 18000,
  "disk_usage": "100GB",
  "params": [
    {
      "name": "min_lat",
      "destination": "context"
    },
    {
      "name": "max_lat",
      "destination": "context"
    },
    {
      "name": "min_lon",
      "destination": "context"
    },
    {
      "name": "max_lon",
      "destination": "context"
    },
    {
      "name":"localize_products",
      "destination": "localize"
    } 
  ]
}
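Parameters routed to destination "context" are written into the job's _context.json, and the localize_products URLs (destination "localize") are downloaded into the work directory before run_stack.sh starts. Below is a minimal sketch of reading the bbox values back out of _context.json; how run_stack.sh actually consumes them is not shown here and this snippet is only illustrative.

Code Block
# Illustrative only: read the bbox parameters that the job-spec routed to
# "context" from the _context.json file in the job work directory.
import json

with open("_context.json") as f:
    ctx = json.load(f)

bbox = [ctx["min_lat"], ctx["max_lat"], ctx["min_lon"], ctx["max_lon"]]
print("bbox:", bbox)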

Job Outputs

The main file that gets executed is run_stack.sh

...

Output directory structure

...

STILL TODO:

  • Publish the contents from merged/ into a datasets directory

  • Create a dataset file name template

  • Add a regex and entry to etc/_datasets.json (a hypothetical example is sketched below)
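For the last item, a hypothetical example of the kind of match pattern that could go into etc/_datasets.json once the dataset file name template exists. The "topsstack-merged-<version>-<timestamp>" name used here is invented for illustration and will need to change to whatever template is actually chosen.

Code Block
# Hypothetical match pattern for a future topsStack merged product name.
# The "topsstack-merged-v<version>-<timestamp>" template is an invented example;
# adjust the regex once the real dataset file name template is defined.
import re

MATCH_PATTERN = r"/(?P<id>topsstack-merged-v(?P<version>[^-]+)-(?P<timestamp>\d{8}T\d{6}))$"

example = "/data/work/jobs/topsstack-merged-v1.0-20200101T000000"
m = re.search(MATCH_PATTERN, example)
print(m.groupdict() if m else "no match")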