
AOI Processing Plan

https://docs.google.com/spreadsheets/d/1PH9bOU0jE6bUWqkuf2wCJ_o3Chh-cMu49GKPlNl5M14/edit#gid=0

HEC Group ID allocations on Pleiades and their rabbitmq queues

  • HEC group id s2037

    • program_pi_id: ESI2017-owen-HEC_s2037

    • rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2037

  • HEC group id s2252

    • program_pi_id: NISARST-bekaert-HEC_s2252

    • rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2252

  • HEC group id s2310

    • program_pi_id: CA-HEC_s2310

    • rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2310
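Each worker job submitted to PBS charges against one of these group ids via `-W group_list`. A hedged sketch of what the relevant PBS directives might look like — the job name, resource selection, and walltime below are illustrative assumptions, not taken from the actual scripts:

```shell
#!/usr/bin/env bash
# Illustrative PBS header for a job-worker charged to HEC group id s2037.
#PBS -N s1gunw-topsapp-worker
#PBS -W group_list=s2037
#PBS -l select=1:ncpus=24:model=has
#PBS -l walltime=02:00:00
# A worker started by this job would consume work from the matching
# rabbitmq queue: standard_product-s1gunw-topsapp-pleiades_s2037
```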

PGEs that run on Pleiades job worker singularity

Job Metrics for pipeline

RabbitMQ

Repo of utils for Pleiades

https://github.com/hysds/hysds-hec-utils

SSH Tunnel from mamba cluster to Pleiades head node

From mamba-factotum, run the screen command, then inside the screen session, ssh with a tunnel to the tpfe2 head node.

screen

  • screen -ls

  • screen -U -R -D pleiades

  • screen -x pleiades # shared terminal

  • to split screen: ctrl-a and then shift-s

  • to detach screen: ctrl-a and then d
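Inside the screen session, the tunnel itself might look like the following. This is a sketch: the forwarded port (15672, RabbitMQ's management port) is an assumption, not taken from this page — adjust it to whichever service you need to reach.

```shell
# Hypothetical SSH tunnel from mamba-factotum to the tpfe2 head node.
# -L forwards local port 15672 to port 15672 as seen from tpfe2, making
# that service reachable at http://localhost:15672 on mamba-factotum.
ssh -L 15672:localhost:15672 esi_sar@tpfe2
```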

Auto-scaling job-workers singularity via PBS scripts

Run autoscaling for each group id in the background with nohup (no hangup), with a maximum of 140 nodes in total across all group ids:
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2037 140 > pbs_auto_scale_up-s2037.log 2>&1 &
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2310 140 > pbs_auto_scale_up-s2310.log 2>&1 &
esi_sar@tpfe2:~/github/hysds-hec-utils> nohup pbs_auto_scale_up.sh s2252 140 > pbs_auto_scale_up-s2252.log 2>&1 &

Note: these commands are wrapped in the following shell script:

esi_sar@tpfe2:~/github/hysds-hec-utils> ./all_pbs_auto_scale_up.sh <num_workers> 
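The wrapper itself is not reproduced on this page; a minimal sketch of what all_pbs_auto_scale_up.sh plausibly does, assuming it simply loops the three nohup invocations shown above (the real script lives in hysds-hec-utils):

```shell
# Hypothetical reconstruction of all_pbs_auto_scale_up.sh: launch one
# backgrounded auto-scaler per HEC group id, each logging to its own file.
all_pbs_auto_scale_up() {
    local num_workers="${1:?usage: all_pbs_auto_scale_up <num_workers>}"
    local gid
    for gid in s2037 s2252 s2310; do
        nohup pbs_auto_scale_up.sh "$gid" "$num_workers" \
            > "pbs_auto_scale_up-${gid}.log" 2>&1 &
    done
}
```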

Daily purge of older job work dirs

Pleiades has allocated us a quota of 100 TB and 5,000,000 files. This script finds and deletes all files older than 2.1 days under each of the group id worker directories.

crontab entries that run twice daily, at 1:37am and 1:37pm Pacific Time:

esi_sar@hfe1:~/github/hysds-hec-utils> crontab -l
37 1 * * * /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh
37 13 * * * /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh
cat /home4/esi_sar/github/hysds-hec-utils/purge_old_files.sh

#!/usr/bin/env bash
# Purge worker files older than 2.1 days for each HEC group id,
# then remove any directories left empty.
for gid in s2037 s2252 s2310; do
    find /nobackupp12/esi_sar/${gid}/worker/ -type f -mtime +2.1 -print0 | xargs -0 -r rm -f
    find /nobackupp12/esi_sar/${gid}/worker/ -type d -empty -delete
done
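The purge pattern can be rehearsed on a throwaway directory before trusting it with the real worker dirs. A minimal sketch, assuming GNU find/touch (the paths and filenames here are illustrative):

```shell
# Create a scratch tree with one stale file (mtime ~3 days back) and one
# fresh file, then apply the same two-pass purge: delete old files first,
# then prune any directories left empty.
workdir=$(mktemp -d)
mkdir -p "$workdir/old" "$workdir/new"
touch -d '3 days ago' "$workdir/old/stale.dat"
touch "$workdir/new/fresh.dat"
find "$workdir" -type f -mtime +2.1 -print0 | xargs -0 -r rm -f
find "$workdir" -mindepth 1 -type d -empty -delete
# old/stale.dat and the now-empty old/ are gone; new/fresh.dat survives.
```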

How to stop, flush, and restart production on Pleiades

  1. stop auto-scaling scripts

    1. https://github.com/hysds/hysds-hec-utils/blob/master/pbs_auto_scale_up.sh

  2. revoke jobs of type job-request-s1gunw-topsapp-local-singularity:ARIA-446_singularity in mozart-figaro that are in the running/queued states.

  3. qdel all jobs

    1. https://github.com/hysds/hysds-hec-utils/blob/master/qdel_all.sh

      1. qstat -u esi_sar | awk '{ if ($8 == "R" || $8 == "Q") print "qdel "$1; }' | sh

  4. then nuke all of the work dirs for the three group ids:

    1. /nobackupp12/esi_sar/s2037/worker/2020/11/**

    2. /nobackupp12/esi_sar/s2252/worker/2020/11/**

    3. /nobackupp12/esi_sar/s2310/worker/2020/11/**

  5. retry all failed topsapp jobs / on-demand submit from runconfig-topsapp

  6. start up auto scaling scripts
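Step 3's one-liner can be sanity-checked offline by feeding canned qstat output through the same awk filter. The sample lines below are fabricated to match the column layout the filter assumes (job state in field 8); they are not real Pleiades output:

```shell
# Run the qdel_all.sh awk filter over fake qstat lines and confirm it
# emits qdel commands only for jobs in R (running) or Q (queued) state.
sample='1234.pbspl1 esi_sar normal topsapp 5678 1 24 R 02:00 00:10
1235.pbspl1 esi_sar normal topsapp 5679 1 24 Q 02:00 --
1236.pbspl1 esi_sar normal topsapp 5680 1 24 H 02:00 --'
cmds=$(printf '%s\n' "$sample" | awk '{ if ($8 == "R" || $8 == "Q") print "qdel "$1; }')
printf '%s\n' "$cmds"
# Only the R and Q jobs appear; the held (H) job is left alone. Piping
# $cmds to sh would then actually delete them, as qdel_all.sh does.
```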
