Update Maintenance slides #1821

Merged 4 commits on Mar 4, 2020
topics/admin/tutorials/maintenance/slides.html (113 additions, 48 deletions)

---
layout: tutorial_slides
logo: "GTN"

title: "Server Maintenance and Backups"
zenodo_link: ""
questions:
- How do I maintain a Galaxy server?
- What happens if I lose everything?
objectives:
- Learn what to back up and how to recover
time_estimation: "30m"
key_points:
- Use configuration management (e.g. Ansible)
- Back up the parts of Galaxy that can't be recreated
contributors:
- natefoo
- bgruening
- slugger70
- hexylena
---
class: left

# Runaway Storage

Tips:
- Set quotas
- `tmpwatch` (`tmpreaper` on Debian-based distros) your job working directory
- `cleanup_job` in `galaxy.yml` (defaults to `always` though)
- `tmpwatch` your `new_file_path`
- `/usr/bin/tmpwatch -v --mtime --dirmtime 7d /srv/galaxy/var/tmp`
- Set up dataset cleanup
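Where `tmpwatch`/`tmpreaper` is unavailable, the same idea can be approximated with plain `find(1)`. A minimal sketch, demonstrated against a throwaway directory so it is safe to run anywhere (on a real server you would point it at your `new_file_path`, e.g. `/srv/galaxy/var/tmp` from the tutorials, via cron):

```shell
#!/bin/sh
set -e
# Scratch directory standing in for the job working dir / new_file_path.
DEMO_TMP=$(mktemp -d)
touch "$DEMO_TMP/old.dat" "$DEMO_TMP/fresh.dat"
touch -d '8 days ago' "$DEMO_TMP/old.dat"   # backdate past the 7-day threshold
# Delete anything not modified in the last 7 days (files, then emptied dirs).
find "$DEMO_TMP" -mindepth 1 -mtime +7 -delete
ls "$DEMO_TMP"
```

After the run, only `fresh.dat` should remain.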

---
# Dataset Cleanup

| method | description |
| ---- | ---- |
| `scripts/cleanup_datasets/pgcleanup.py` | PostgreSQL-optimized fast cleanup script |
| `scripts/cleanup_datasets/cleanup_datasets.py` | General cleanup script |
| `gxadmin cleanup <days>` | calls pgcleanup |
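For unattended cleanup these are usually scheduled; a hypothetical cron entry (the `galaxy` user, the 02:00 schedule, and the 60-day retention are illustrative assumptions, not tutorial defaults):

```
# /etc/cron.d/galaxy-cleanup (illustrative)
# Nightly at 02:00, as the galaxy user, clean up objects older than 60 days.
0 2 * * * galaxy gxadmin cleanup 60
```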

---
class: reduce70

# pgcleanup invocation

```console
$ ./scripts/cleanup_datasets/pgcleanup.py --help
usage: pgcleanup.py [-h] [-c CONFIG_FILE] [-d] [--dry-run] [--force-retry]
                    [-o DAYS] [-U] [-s SEQUENCE] [-w WORK_MEM] [-l LOG_DIR]
                    [ACTION [ACTION ...]]

positional arguments:
  ACTION                Action(s) to perform, chosen from: delete_datasets,
                        delete_exported_histories, delete_inactive_users,
                        delete_userless_histories, purge_datasets,
                        purge_deleted_hdas, purge_deleted_histories,
                        purge_deleted_users, purge_error_hdas,
                        purge_hdas_of_purged_histories,
                        purge_historyless_hdas, update_hda_purged_flag

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG_FILE, --config-file CONFIG_FILE, --config CONFIG_FILE
                        Galaxy config file (defaults to
                        $GALAXY_ROOT/config/galaxy.yml if that file exists or
                        else to ./config/galaxy.ini if that exists). If this
                        isn't set on the command line it can be set with the
                        environment variable GALAXY_CONFIG_FILE.
  -d, --debug           Enable debug logging (SQL queries)
  --dry-run             Dry run (rollback all transactions)
  --force-retry         Retry file removals (on applicable actions)
  -o DAYS, --older-than DAYS
                        Only perform action(s) on objects that have not been
                        updated since the specified number of days
  -U, --no-update-time  Don't set update_time on updated objects
  -s SEQUENCE, --sequence SEQUENCE
                        DEPRECATED: Comma-separated sequence of actions
  -w WORK_MEM, --work-mem WORK_MEM
                        Set PostgreSQL work_mem for this connection
  -l LOG_DIR, --log-dir LOG_DIR
                        Log file directory
```
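Putting the options together: a sketch of a safe first run, using `--dry-run` (which rolls back all transactions) to preview what a given retention window would do. The 60-day value is illustrative:

```console
$ ./scripts/cleanup_datasets/pgcleanup.py --dry-run --older-than 60 \
    delete_userless_histories
```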

---
class: reduce70

# pgcleanup actions

| action | description |
| ---- | ---- |
| `delete_userless_histories` | <ul><li>Mark deleted all "anonymous" Histories (not owned by a registered user) that are older than the specified number of days.</li></ul> |
| `delete_exported_histories` | <ul><li>Mark deleted all Datasets that are derivative of JobExportHistoryArchives that are older than the specified number of days.</li></ul> |
| `purge_deleted_users` | <ul><li>Mark purged all users that are older than the specified number of days.</li><li>Mark purged all Histories whose user_ids are purged in this step.</li><li>Mark purged all HistoryDatasetAssociations whose history_ids are purged in this step.</li><li>Delete all UserGroupAssociations whose user_ids are purged in this step.</li><li>Delete all UserRoleAssociations whose user_ids are purged in this step EXCEPT FOR THE PRIVATE ROLE.</li><li>Delete all UserAddresses whose user_ids are purged in this step.</li></ul> |
| `purge_deleted_histories` | <ul><li>Mark purged all Histories marked deleted that are older than the specified number of days.</li><li>Mark purged all HistoryDatasetAssociations in Histories marked purged in this step (if not already purged).</li></ul> |
| `purge_deleted_hdas` | <ul><li>Mark purged all HistoryDatasetAssociations currently marked deleted that are older than the specified number of days.</li><li>Mark deleted all MetadataFiles whose hda_id is purged in this step.</li><li>Mark deleted all ImplicitlyConvertedDatasetAssociations whose hda_parent_id is purged in this step.</li><li>Mark purged all HistoryDatasetAssociations for which an ImplicitlyConvertedDatasetAssociation with matching hda_id is deleted in this step.</li></ul> |

---
class: reduce70

# More pgcleanup actions

| action | description |
| ---- | ---- |
| `purge_historyless_hdas` | <ul><li>Mark purged all HistoryDatasetAssociations whose history_id is null.</li></ul> |
| `purge_error_hdas` | <ul><li>Mark purged all HistoryDatasetAssociations whose dataset_id is state = 'error' that are older than the specified number of days.</li></ul> |
| `purge_hdas_of_purged_histories` | <ul><li>Mark purged all HistoryDatasetAssociations in histories that are purged and older than the specified number of days.</li></ul> |
| `delete_datasets` | <ul><li>Mark deleted all Datasets whose associations are all marked as deleted (LDDA) or purged (HDA) that are older than the specified number of days.</li><li>JobExportHistoryArchives have no deleted column, so the datasets for these will simply be deleted after the specified number of days.</li></ul> |
| `purge_datasets` | <ul><li>Mark purged all Datasets marked deleted that are older than the specified number of days.</li></ul> |
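The actions follow the dataset lifecycle (mark deleted, then mark purged, then remove from disk), so a periodic full pass typically chains them in that order. A hedged sketch, with an illustrative 60-day retention:

```console
$ ./scripts/cleanup_datasets/pgcleanup.py --older-than 60 \
    delete_userless_histories purge_deleted_histories purge_deleted_hdas \
    delete_datasets purge_datasets
```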

---
class: left, reduce90

# Backups

Restorable from Ansible Playbook:

| item | path (from tutorials) |
| ---- | ---- |
| Galaxy | `/srv/galaxy/server` |
| Virtualenv | `/srv/galaxy/venv` |
| *Static* configs | `/srv/galaxy/config` |

What to back up:

| item | path (from tutorials) |
| ---- | ---- |
| Database ([PITR][postgresql-pitr], enable in [galaxyproject.postgresql][ansible-postgresql]) | system dependent |
| Installed shed tools | `/srv/galaxy/var/shed_tools` |
| *Managed* (aka *mutable*) configs | `/srv/galaxy/var/config` |
| Logs (if you like...) | `systemd-journald` |
| Datasets (if you can...) | `/data` |
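As one deliberately simplified sketch of the file-side backup (the database itself belongs in PITR or `pg_dump`, handled separately), the irreplaceable directories can be archived with `tar`. The example stages scratch stand-ins so it is runnable anywhere; on a real server you would archive `/srv/galaxy/var/shed_tools` and `/srv/galaxy/var/config` directly:

```shell
#!/bin/sh
set -e
# Scratch stand-ins for /srv/galaxy/var and the backup target.
GALAXY_VAR=$(mktemp -d)
BACKUP_DIR=$(mktemp -d)
mkdir -p "$GALAXY_VAR/shed_tools/example_tool" "$GALAXY_VAR/config"
echo '<toolbox/>' > "$GALAXY_VAR/config/shed_tool_conf.xml"
# Archive the irreplaceable bits: installed shed tools + managed configs.
BACKUP_FILE="$BACKUP_DIR/galaxy-var-$(date +%F).tar.gz"
tar -C "$GALAXY_VAR" -czf "$BACKUP_FILE" shed_tools config
tar -tzf "$BACKUP_FILE"
```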

---
class: left, reduce90

# Backups (continued)

What to back up if absolute reproducibility matters:

| item | path (from tutorials) |
| ---- | ---- |
| Tool dependencies | `/srv/galaxy/var/dependencies` |
| Data manager-installed reference data | `/srv/galaxy/var/tool-data` |

What not to back up:

| item | path (from tutorials) |
| ---- | ---- |
| Anything in *managed/mutable data dir* not mentioned above | `/srv/galaxy/var` |
| Job working directories | `/srv/galaxy/jobs` |

[postgresql-pitr]: https://www.postgresql.org/docs/current/continuous-archiving.html
[ansible-postgresql]: https://github.com/galaxyproject/ansible-postgresql

---

# Restoring from backups

If you have lost the *database* or the *managed/mutable configs*, **restore these first**

Then run your playbook to recreate everything else
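For example (all paths, the database name, and the playbook name here are hypothetical):

```console
$ # 1. database and managed configs come back from backup first
$ sudo -u postgres pg_restore -d galaxy /backup/galaxy/latest/galaxy.pgdump
$ tar -C /srv/galaxy/var -xzf /backup/galaxy/latest/managed-config.tar.gz
$ # 2. the playbook recreates Galaxy, the virtualenv, and the static configs
$ ansible-playbook galaxy.yml
```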