AOI Processing Plan
https://docs.google.com/spreadsheets/d/1PH9bOU0jE6bUWqkuf2wCJ_o3Chh-cMu49GKPlNl5M14/edit#gid=0
HEC Group ID allocations on Pleiades and their rabbitmq queues
HEC group id s2037
program_pi_id: ESI2017-owen-HEC_s2037
rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2037
HEC group id s2252
program_pi_id: NISARST-bekaert-HEC_s2252
rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2252
HEC group id s2310
program_pi_id: CA-HEC_s2310
rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2310
PGEs that run on Pleiades job workers via singularity
job type: job-request-s1gunw-topsapp-local-singularity:ARIA-446_singularity
job type: job-spyddder-sling-extract-local-asf-singularity:ARIA-446_singularity
job type: job-spyddder-sling-extract-local-scihub-singularity:ARIA-446_singularity
Job Metrics for pipeline
(Pleiades) job-request-s1gunw-topsapp-local-singularity:ARIA-446_singularity
(Pleiades) job-spyddder-sling-extract-local-scihub-singularity:ARIA-446_singularity
(Pleiades) job-spyddder-sling-extract-local-asf-singularity:ARIA-446_singularity
RabbitMQ
https://mamba-mozart.aria.hysds.io:15673/#/queues
regex: ^(?!celery)
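The `^(?!celery)` filter in the management UI is a negative lookahead that hides Celery's internal queues so only the processing queues show. The same filtering can be approximated on the command line; the queue names below are the ones from the allocations above, with `celery` standing in for the internal queue being excluded:

```shell
# Keep only queues whose names do NOT start with "celery",
# mirroring the management UI regex ^(?!celery).
printf '%s\n' \
    celery \
    standard_product-s1gunw-topsapp-pleiades_s2037 \
    standard_product-s1gunw-topsapp-pleiades_s2252 \
    standard_product-s1gunw-topsapp-pleiades_s2310 \
    | grep -v '^celery'
```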
Repo of utils for Pleiades
https://github.com/hysds/hysds-hec-utils
SSH Tunnel from mamba cluster to Pleiades head node
from mamba-factotum, run the screen command; then inside the screen session, ssh with a tunnel to the tpfe2 head node.
screen
screen -ls
screen -U -R -D pleiades
screen -x pleiades # shared terminal
to split screen: ctrl-a and then shift-s
to detach screen: ctrl-a and then d
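A sketch of the tunnel command run inside the `pleiades` screen session. The forwarded ports here (5672 = AMQP, 15672 = RabbitMQ management) and the bare `tpfe2` host alias are illustrative assumptions, not the actual production values; substitute the ports your workers use.

```
# Sketch only: reverse-tunnel the mamba RabbitMQ ports to the tpfe2
# head node so Pleiades-side workers can reach the queues.
ssh -N \
    -R 5672:localhost:5672 \
    -R 15672:localhost:15672 \
    esi_sar@tpfe2
```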
Auto-scaling job-workers singularity via PBS scripts
Run autoscaling for each group id in background mode with nohup
(no hangup), with max 140 nodes in total across all group ids:
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2037 140 > pbs_auto_scale_up-s2037.log 2>&1 &
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2310 140 > pbs_auto_scale_up-s2310.log 2>&1 &
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2252 140 > pbs_auto_scale_up-s2252.log 2>&1 &
note: these commands are wrapped in the following shell script
esi_sar@tpfe2:~/github/hysds-hec-utils> ./all_pbs_auto_scale_up.sh <num_workers>
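The wrapper is presumably just a loop over the three group ids. A minimal sketch of what `all_pbs_auto_scale_up.sh` might contain (the real script in hysds-hec-utils may differ; the default cap of 140 is only for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of all_pbs_auto_scale_up.sh: start one detached
# auto-scaler per HEC group id, each capped at <num_workers> nodes.
set -uo pipefail
num_workers=${1:-140}  # default to 140 (the cap used above) when no arg given
for gid in s2037 s2252 s2310; do
    nohup ./pbs_auto_scale_up.sh "$gid" "$num_workers" \
        > "pbs_auto_scale_up-${gid}.log" 2>&1 &
done
```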
Daily purge of older job work dirs
Pleiades has allocated us a quota of 100 TB and 5,000,000 files. This script finds and deletes all files older than 2.1 days under each of the group id worker directories.
crontab
that runs every day at 1:37am and 1:37pm Pacific Time:
esi_sar@hfe1:~/github/hysds-hec-utils> crontab -l
37 1 * * * /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh
37 13 * * * /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh
cat /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh
#!/usr/bin/env bash
find /nobackupp12/esi_sar/s2037/worker/ -type f -mtime +2.1 | xargs rm -f
find /nobackupp12/esi_sar/s2037/worker/ -type d -empty -delete
find /nobackupp12/esi_sar/s2252/worker/ -type f -mtime +2.1 | xargs rm -f
find /nobackupp12/esi_sar/s2252/worker/ -type d -empty -delete
find /nobackupp12/esi_sar/s2310/worker/ -type f -mtime +2.1 | xargs rm -f
find /nobackupp12/esi_sar/s2310/worker/ -type d -empty -delete
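A slightly more defensive sketch of the same purge logic: `-print0`/`xargs -0 -r` handles odd filenames safely, `-mmin +3024` expresses the same 2.1-day cutoff (2.1 × 24 × 60 = 3024 minutes) with an integer argument, and the loop avoids repeating the three group ids. The `PURGE_BASE` override is an added knob for illustration, not part of the production script:

```shell
#!/usr/bin/env bash
# Sketch: same retention policy as purge_old_files.sh above,
# whitespace-safe and looped over the three group ids.
set -u
for gid in s2037 s2252 s2310; do
    root="${PURGE_BASE:-/nobackupp12/esi_sar}/${gid}/worker/"
    [ -d "$root" ] || continue  # skip if the filesystem is not mounted
    # 2.1 days = 3024 minutes; delete files older than that,
    # then prune any directories left empty.
    find "$root" -type f -mmin +3024 -print0 | xargs -0 -r rm -f
    find "$root" -type d -empty -delete
done
```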
How to stop, flush, and restart production on Pleiades
stop auto-scaling scripts
revoke jobs of type job-request-s1gunw-topsapp-local-singularity:ARIA-446_singularity in mozart-figaro that are in the running/queued states.
qdel all jobs
https://github.com/hysds/hysds-hec-utils/blob/master/qdel_all.sh
qstat -u esi_sar | awk '{ if ($8 == "R" || $8 == "Q") print "qdel "$1; }' | sh
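Before piping to `sh`, the same pipeline can be run as a dry run to see which jobs would be deleted. Column 8 of the `qstat -u <user>` output is assumed here to hold the job state (R = running, Q = queued); verify against your PBS version's output layout:

```shell
# Dry run: print the qdel commands without executing them
# (i.e. the one-liner above minus the trailing "| sh").
qstat -u esi_sar | awk '{ if ($8 == "R" || $8 == "Q") print "qdel "$1 }'
```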
then nuke all of the work dirs for the three group ids:
/nobackupp12/esi_sar/s2037/worker/2020/11/**
/nobackupp12/esi_sar/s2252/worker/2020/11/**
/nobackupp12/esi_sar/s2310/worker/2020/11/**
retry all failed topsapp jobs / on-demand submit from runconfig-topsapp
start up auto scaling scripts
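The "nuke" step in the list above, expressed as a sketch. The `2020/11` subpaths match the directory listing given earlier; double-check the year/month (and that all PBS jobs are already qdel'd) before running, since this is destructive:

```shell
# Remove the dated work dirs for all three group ids.
# Destructive — run only after the auto-scalers are stopped
# and all PBS jobs have been qdel'd.
for gid in s2037 s2252 s2310; do
    rm -rf /nobackupp12/esi_sar/"${gid}"/worker/2020/11/*
done
```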