...
Endpoint | Function | Input | Output/Result |
---|---|---|---|
POST /aoi-track/ | Submit new aoi-track | | |
GET /aoi-track/ | List aoi-track | none | |
GET /aoi-track/{aoitrack_id} | Get aoi-track | | |
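For illustration only, a hedged sketch of how a client might exercise these endpoints once the service exists; the base URL, request payload fields, and response shapes below are assumptions, not part of the spec above.

```python
# Hypothetical client calls against the endpoints above.
# The base URL, payload fields, and response shapes are assumptions.
import requests

BASE_URL = "https://example-aoi-service/aoi-track"  # placeholder host

# POST /aoi-track/ -- submit a new aoi-track (field names are illustrative)
resp = requests.post(f"{BASE_URL}/", json={
    "track_number": 137,
    "orbit_direction": "ascending",
    "username": "jdoe",
    "location": {
        "type": "Polygon",
        "coordinates": [[[-118.4, 34.0], [-118.0, 34.0], [-118.0, 34.3],
                         [-118.4, 34.3], [-118.4, 34.0]]],
    },
})
aoitrack_id = resp.json().get("id")

# GET /aoi-track/ -- list aoi-tracks (no input)
listing = requests.get(f"{BASE_URL}/").json()

# GET /aoi-track/{aoitrack_id} -- fetch a single aoi-track
detail = requests.get(f"{BASE_URL}/{aoitrack_id}").json()
```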
Implementation Notes
...
create new aoi-track
This API will call the Mozart API to submit a new job, “create_aoi-track:develop”, to create a new AOI using the naming conventions for aoi-track.
Note: dataset=area_of_interest does not carry track info in the dataset. Fork https://github.com/hysds/create_aoi to create an alternate “create_aoitrack” job where the track number and orbit direction are intrinsic to the dataset. This job creates aoitrack.dataset.json with the space and time extents. Update aoitrack.met.json to also include the track_number and orbit_direction; username is already included.
Update Jenkins to build the new job type create_aoitrack and register it into https://mamba-mozart.aria.hysds.io/figaro/
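As a rough sketch, the forked create_aoitrack job could write the two files roughly as below; the field names follow common HySDS dataset conventions and should be checked against what create_aoi already produces.

```python
# Sketch of the dataset/metadata files the forked "create_aoitrack" job might write.
# Field names follow typical HySDS conventions; verify against create_aoi's output.
import json
import os


def write_aoitrack_dataset(dataset_id, polygon, starttime, endtime,
                           track_number, orbit_direction, username):
    os.makedirs(dataset_id, exist_ok=True)

    # <dataset_id>.dataset.json: space and time extents
    dataset = {
        "version": "v1.0",
        "label": dataset_id,
        "location": {"type": "Polygon", "coordinates": polygon},
        "starttime": starttime,
        "endtime": endtime,
    }
    with open(os.path.join(dataset_id, f"{dataset_id}.dataset.json"), "w") as f:
        json.dump(dataset, f, indent=2)

    # <dataset_id>.met.json: add track_number and orbit_direction; username already included
    met = {
        "username": username,
        "track_number": track_number,
        "orbit_direction": orbit_direction,
    }
    with open(os.path.join(dataset_id, f"{dataset_id}.met.json"), "w") as f:
        json.dump(met, f, indent=2)
```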
create new aoi-track
Calls the Mozart API to submit the new job “create_aoi-track:develop”.
Reference: we can store track info, orbit direction, user info, etc. either in the AOI’s met.json or, even more easily, in string tokens that are part of the AOI dataset naming convention for this AOI.
One approach is to call the Mozart API to create a new dataset=area_of_interest, but with a dataset ID following the convention “aoi_track-<orbit_direction>-<track_number>-<username>-<YYYYMMDDTHHMMSS>”. This will result in publishing the new AOI to GRQ with this naming convention.
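For example, the dataset ID could be built with a small helper like this (the helper name is ours, not part of the design):

```python
# Build a dataset ID following the
# "aoi_track-<orbit_direction>-<track_number>-<username>-<YYYYMMDDTHHMMSS>" convention.
from datetime import datetime, timezone


def make_aoitrack_id(orbit_direction, track_number, username):
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%S")
    return f"aoi_track-{orbit_direction}-{track_number}-{username}-{timestamp}"


# e.g. make_aoitrack_id("ascending", 137, "jdoe")
#   -> "aoi_track-ascending-137-jdoe-20240101T120000"
```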
See an example of how to submit a Mozart API job in Python here: https://github.com/aria-jpl/scihub_acquisition_scraper/blob/develop/crons/aoi_ipf_scrape_cron.py ; a rough sketch is also included below.
The job type “create_aoi-track:develop” will result in publishing a new dataset=aoi-track in GRQ.
Submits sync catch-up jobs for IPF for all acquisitions in that aoi-track.
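A minimal sketch of that submission, assuming the Mozart REST job-submit endpoint; the endpoint path, registered job-type string, queue name, and param names are assumptions, and the referenced aoi_ipf_scrape_cron.py script shows the exact call used in practice.

```python
# Sketch of submitting "create_aoi-track:develop" through the Mozart REST API.
# The endpoint path, job-type string, queue name, and params are assumptions;
# see the referenced aoi_ipf_scrape_cron.py for the exact submission call.
import json
import requests

MOZART_SUBMIT_URL = "https://mamba-mozart.aria.hysds.io/mozart/api/v0.1/job/submit"  # assumed path


def submit_create_aoitrack(dataset_id, location, starttime, endtime,
                           queue="factotum-job_worker-small"):  # queue is a placeholder
    params = {
        "dataset_id": dataset_id,  # e.g. aoi_track-ascending-137-jdoe-20240101T120000
        "location": location,      # GeoJSON polygon for the AOI
        "starttime": starttime,
        "endtime": endtime,
    }
    resp = requests.post(MOZART_SUBMIT_URL, data={
        "type": "create_aoi-track:develop",  # must match the registered job type
        "queue": queue,
        "priority": 5,
        "tags": json.dumps(["aoi-track-api"]),
        "params": json.dumps(params),
    })
    resp.raise_for_status()
    return resp.json()  # typically includes the submitted job's ID
```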
...
list aoi-track
Query GRQ for all datasets with dataset=area_of_interest, then filter the results for dataset IDs that start with “aoi_track-*” (see the query sketch below).
get aoi-track
Query GRQ for the aoi-track dataset matching the given aoitrack_id.
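A hedged sketch of both GRQ lookups, assuming GRQ’s Elasticsearch _search endpoint; the host, index, and field names/mappings are assumptions and should be verified against the actual GRQ deployment.

```python
# Sketch of the two GRQ queries (list and get-by-id) against GRQ's Elasticsearch.
# The GRQ host, index name, and field names/mappings are assumptions.
import requests

GRQ_ES_SEARCH_URL = "https://example-grq-host/es/grq/_search"  # assumed endpoint


def list_aoitracks():
    # dataset=area_of_interest, filtered to dataset IDs starting with "aoi_track-"
    query = {
        "query": {
            "bool": {
                "must": [
                    {"term": {"dataset.raw": "area_of_interest"}},  # field/mapping assumed
                    {"prefix": {"id": "aoi_track-"}},               # dataset ID prefix filter
                ]
            }
        },
        "size": 100,
    }
    resp = requests.post(GRQ_ES_SEARCH_URL, json=query)
    resp.raise_for_status()
    return [hit["_source"] for hit in resp.json()["hits"]["hits"]]


def get_aoitrack(aoitrack_id):
    # exact lookup by document ID (the aoi-track dataset ID)
    query = {"query": {"ids": {"values": [aoitrack_id]}}}
    resp = requests.post(GRQ_ES_SEARCH_URL, json=query)
    resp.raise_for_status()
    hits = resp.json()["hits"]["hits"]
    return hits[0]["_source"] if hits else None
```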
...