Backend bos sarcat scraper (stand-alone CLI tool)
https://github.com/aria-jpl/bos_sarcat_scraper
https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/bos_sarcat_scraper/bosart_scrape.py
This script queries BOS and outputs a JSON file of the result set
Inputs
start/end time
OR
Since the last ingest time recorded on BOS
BOS expects WKT format for the query geometry
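A minimal sketch of the kind of query the scraper builds: ISO-8601 start/end times plus a WKT area of interest. The endpoint URL and parameter names below are assumptions for illustration, not the actual bosart_scrape.py interface.

```python
from urllib.parse import urlencode

# Placeholder endpoint -- the real URL lives in bosart_scrape.py.
BOS_URL = "https://sarcat.example/api/query"

def build_bos_query(start_time, end_time, wkt_polygon):
    """Build a BOS query URL from ISO-8601 times and a WKT geometry."""
    params = {
        "fromSensingStart": start_time,  # assumed parameter name
        "toSensingStart": end_time,      # assumed parameter name
        "intersectsWith": wkt_polygon,   # BOS expects WKT for the AOI
    }
    return BOS_URL + "?" + urlencode(params)

url = build_bos_query(
    "2019-01-01T00:00:00Z",
    "2019-01-01T02:00:00Z",
    "POLYGON((-118 33, -117 33, -117 34, -118 34, -118 33))",
)
```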
Front end GUI
Currently inside the tosca repo at https://github.com/hysds/tosca/blob/develop/tosca/templates/facetview-saravail.html
PGE: bos ingest
https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/docker/job-spec.json.bos_ingest
Runs hourly with a 2-hour query window
This is the HySDS PGE wrapper for https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/bos_sarcat_scraper/bosart_scrape.py
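The hourly run with a 2-hour window means each run's window overlaps the previous run's. A sketch of that window computation, with an invented helper name:

```python
from datetime import datetime, timedelta, timezone

def lookback_window(now=None, hours=2):
    """Return (start, end) ISO-8601 strings for an N-hour lookback window.

    Running this every hour with hours=2 gives consecutive runs a
    1-hour overlap, so each acquisition is queried by two runs.
    """
    now = now or datetime.now(timezone.utc)
    start = now - timedelta(hours=hours)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return start.strftime(fmt), now.strftime(fmt)

start, end = lookback_window(datetime(2019, 5, 1, 12, 0, tzinfo=timezone.utc))
# start == "2019-05-01T10:00:00Z", end == "2019-05-01T12:00:00Z"
```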
PGE: bos scrubber of older records
Runs daily
Scrubs outdated PLANNED and PREDICTED records after a 2-day margin
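The scrub rule above can be sketched as a filter: drop PLANNED and PREDICTED records whose start time is more than 2 days in the past, and never touch other statuses. Record shape and field names here are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

SCRUB_STATUSES = {"PLANNED", "PREDICTED"}
MARGIN = timedelta(days=2)  # 2-day margin from the notes

def is_outdated(record, now):
    """True if the record should be scrubbed by the daily job."""
    start = datetime.strptime(
        record["starttime"], "%Y-%m-%dT%H:%M:%SZ"
    ).replace(tzinfo=timezone.utc)
    return record["status"] in SCRUB_STATUSES and start < now - MARGIN

now = datetime(2019, 5, 10, tzinfo=timezone.utc)
records = [
    {"status": "PLANNED",   "starttime": "2019-05-01T00:00:00Z"},  # stale -> scrub
    {"status": "PREDICTED", "starttime": "2019-05-09T00:00:00Z"},  # within margin -> keep
    {"status": "ACQUIRED",  "starttime": "2019-04-01T00:00:00Z"},  # never scrubbed
]
kept = [r for r in records if not is_outdated(r, now)]
# kept -> the PREDICTED and ACQUIRED records
```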
Cron scripts
https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/archived_planned-cron.py
https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/scrub_planned_predicted-cron.py
https://github.com/aria-jpl/bos_sarcat_scraper/blob/master/create_acquisitions.py
Avoids ingesting past PLANNED and PREDICTED records
Calls ingest from inside the PGE
If the dataset folder is not deleted after ingest, verdi will try to ingest it again while the dataset dir is still there.
Known bug: if the dir cannot be removed, verdi picks it up again on the next run.
Currently no retries: if a job fails for any reason (e.g. an ES timeout), it stays failed, and we rely on the next scrubber run to catch the gap. But the sliding window is only 3 hours, so there are only about 3 tries.
Still need a daily back-filler with a 5-day window to run bos ingest.
This currently runs on the b-cluster factotum.
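Since the notes call out the lack of in-job retries (failures like ES timeouts only get re-covered by later overlapping windows), a minimal retry-with-backoff wrapper is one way to make each run self-healing. All names here are invented for illustration; this is a sketch, not the PGE's actual code.

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying on any exception with exponential backoff.

    Raises the last exception if all attempts fail.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * 2 ** i)  # backoff: 1s, 2s, 4s, ...

# Simulate an ingest call that hits two ES timeouts, then succeeds.
calls = []
def flaky_ingest():
    calls.append(1)
    if len(calls) < 3:
        raise TimeoutError("simulated ES timeout")
    return "ok"

result = with_retries(flaky_ingest, attempts=3, sleep=lambda s: None)
# result == "ok" after 3 attempts
```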
Other
Location
Log files
Debugging process
Deployment
Watchdogs to check on the hourly scraper are already in place; currently checks on bos_ingest_:master
ES on b-cluster
Alias for sar-availability: acquisition