WIP refactor 2
Lun4m committed Nov 19, 2024
1 parent 1c96173 commit bdeb755
Showing 33 changed files with 1,168 additions and 187 deletions.
3 changes: 0 additions & 3 deletions ansible/.gitignore
Original file line number Diff line number Diff line change
@@ -2,6 +2,3 @@ notes.txt
ansible.cfg
.yamlfmt
.run.sh

roles/deploy/files/resources
roles/deploy/files/lard_ingestion
61 changes: 42 additions & 19 deletions ansible/readme.md → ansible/README.md
@@ -22,7 +22,7 @@ ansible-galaxy collection install -fr requirements.yml

You need to create application credentials in the project where you are going
to create the instances, so that the Ansible scripts can connect to the right
ostack_cloud which in our case needs to be called lard.
`ostack_cloud` which in our case needs to be called lard.

The file should exist in `~/.config/openstack/clouds.yml`.
If you have MET access, see what is written at the start of the readme [here](https://gitlab.met.no/it/infra/ostack-ansible21x-examples).
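
For reference, a minimal `clouds.yml` using application credentials might look like the sketch below. The auth URL and credential values are placeholders, not the real ones; only the cloud name `lard` and the region come from this repo.

```yaml
clouds:
  lard:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000  # placeholder, use your ostack endpoint
      application_credential_id: <your-credential-id>
      application_credential_secret: <your-credential-secret>
    region_name: Ostack2-EXT
```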
@@ -34,32 +34,55 @@ Go to "Compute" then "Key Pairs" and import your public key for use in the provisioning.

### Provision!

The IPs in `inventory.yml` should correspond to floating ips you have requested
in the network section of the open stack GUI. If you need to delete the old VMs
(compute -> instances) and Volumes (volumes -> volumes) you can do so in the
ostack GUI.
The IPs associated with the hosts in `inventory.yml` should correspond to
floating IPs you have requested in the network section of the OpenStack GUI.
If you need to delete the old VMs (compute -> instances) and Volumes (volumes
-> volumes) you can do so in the ostack GUI.
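
As a sketch, assuming the two hosts are named `lard-a` and `lard-b` (the actual names and IPs in the repo's `inventory.yml` may differ), the inventory could look like:

```yaml
servers:
  hosts:
    lard-a:
      ansible_host: 157.249.x.x  # floating IP requested in the ostack GUI
    lard-b:
      ansible_host: 157.249.x.x  # floating IP requested in the ostack GUI
```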

> \[!CAUTION\] For some reason when deleting things to build up again one of the IPs
> did not get disassociated properly, and I had to do this manually (network ->
> floating IPs).
> [!CAUTION]
> When deleting things in order to rebuild, if one of the IPs does not get
> disassociated properly, you have to do it manually from the GUI (network -> floating IPs).
The vars for the network and addssh tasks are encrypted with ansible-vault
(ansible-vault decrypt roles/networks/vars/main.yml, ansible-vault decrypt
roles/addshhkeys/vars/main.yml, ansible-vault decrypt
roles/vm_format/vars/main.yml). But if this has been setup before in the ostack
project, these have likely already been run and therefore already exits so you
could comment out this role from provision.yml. Passwords are in [ci_cd variables](https://gitlab.met.no/met/obsklim/bakkeobservasjoner/lagring-og-distribusjon/db-products/poda/-/settings/ci_cd).
The vars for the `networks`, `addsshkeys`, and `vm_format` roles are encrypted with ansible-vault:

```terminal
ansible-playbook -i inventory.yml -e ostack_key_name=xxx provision.yml
ansible-vault decrypt roles/networks/vars/main.yml
ansible-vault decrypt roles/addsshkeys/vars/main.yml
ansible-vault decrypt roles/vm_format/vars/main.yml
```

But if this has been set up before in the ostack project, these roles have
likely already been run and the resources therefore already exist, so you could
comment them out in `provision.yml`.
Passwords are in [ci_cd variables](https://gitlab.met.no/met/obsklim/bakkeobservasjoner/lagring-og-distribusjon/db-products/poda/-/settings/ci_cd).

```terminal
ansible-playbook -i inventory.yml -e ostack_key_name=xxx provision.yml
```

After provisioning, the next steps may need to SSH into the hosts, so you need to add them to your known hosts.
Ansible appears to be crap at this, so its best to do it before running the next step by going:
`ssh [email protected].*.*`
For all the VMs.
Ansible appears to be crap at this, so it's best to do it before running the next step.
First of all, it might be helpful to create host aliases and add them to your `~/.ssh/config` file,
so you don't have to remember the IPs by heart. An example host alias looks like the following:

```ssh
Host lard-a
  HostName 157.249.*.*
  User ubuntu
```

Then run:

```terminal
ssh lard-a
ssh lard-b
```

If you are cleaning up after tearing down a previous set of VMs, you may also need to remove their old host keys first:
`ssh-keygen -f "/home/louiseo/.ssh/known_hosts" -R "157.249.*.*"`

```terminal
ssh-keygen -f ~/.ssh/known_hosts -R lard-a
ssh-keygen -f ~/.ssh/known_hosts -R lard-b
```

### Configure!

19 changes: 1 addition & 18 deletions ansible/bigip.yml
@@ -6,23 +6,6 @@
ostack_cloud: lard
ostack_region: Ostack2-EXT
gather_facts: false
pre_tasks:
# copy file, so we have an .sql file to apply locally
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /etc/postgresql/16/db/bigip
state: directory
mode: '0755'
become: true

- name: Copy the schema to the remote 1
ansible.builtin.copy:
src: ./roles/bigip/vars/bigip.sql
dest: /etc/postgresql/16/db/bigip/bigip.sql
mode: '0755'
become: true

# loops over both servers
roles:
# NOTE: it will fail to create table in the standby (since read only)
- role: bigip
# will fail to create table in the standby (since read only)
63 changes: 32 additions & 31 deletions ansible/configure.yml
@@ -2,38 +2,39 @@
- name: Mount disks and install stuff on the VMs
hosts: servers
remote_user: ubuntu
gather_facts: false
vars:
ostack_cloud: lard
ipalias_network_name: ipalias
ostack_region: Ostack2-EXT
pre_tasks:
- name: List ansible_hosts_all difference from ansible_host (aka the vm not currently being iterated on)
ansible.builtin.debug:
msg: "{{ (ansible_play_hosts_all | difference([inventory_hostname])) | first }}"
roles:
- role: addsshkeys
- role: vm_format
vars:
name_stuff: "{{ inventory_hostname }}" # name of current vm for finding ipalias port
- role: ssh
vars:
vm_ip: "{{ ansible_host }}" # the current vm's ip
primary: lard-a
ostack_primary_floating_ip: # provide via cmd
ostack_db_password: # provide via cmd
ostack_repmgr_password: # provide via cmd

- name: Setup primary and standby
vars:
ostack_cloud: lard
ostack_region: Ostack2-EXT
hosts: localhost
gather_facts: false
tasks:
- name: Add user SSH keys
ansible.builtin.include_role:
name: ssh
tasks_from: users.yml

- name: Format VM
ansible.builtin.include_role:
name: ostack
tasks_from: vm_format.yml

roles:
- role: primarystandbysetup
when: inventory_hostname == "lard-a"
- name: Share postgres SSH key between hosts
ansible.builtin.include_role:
name: ssh
tasks_from: postgres.yml

- role: standbysetup
when: inventory_hostname == "lard-b"
# vars:
# primary_name: lard-a
# primary_ip: '{{ ansible_host }}' # the first one is a
# standby_name: lard-b
# standby_ip: '{{ hostvars[groups["servers"][1]].ansible_host }}' # the second one is b
- name: Setup primary host
ansible.builtin.include_role:
name: ostack
tasks_from: create_primary.yml
when: inventory_hostname == primary

- name: Setup standby host
ansible.builtin.include_role:
name: ostack
tasks_from: create_standby.yml
vars:
ostack_primary_host_ip: "{{ hostvars[primary].ansible_host }}"
when: inventory_hostname != primary
3 changes: 0 additions & 3 deletions ansible/deploy.yml
@@ -1,10 +1,7 @@
---
- name: Deploy binaries
# Deploy on both VMs, only the primary is "active"
hosts: servers
remote_user: ubuntu
gather_facts: false
# All role tasks require root user
become: true
roles:
- role: deploy
27 changes: 27 additions & 0 deletions ansible/group_vars/servers/main.yml
@@ -0,0 +1,27 @@
---
ostack_cloud: lard
ostack_state: present
ostack_region: Ostack2-EXT
ostack2: true

# networks
ostack_network_name: "{{ vault_ostack_network_name }}"
ostack_network_cidr: "{{ vault_ostack_network_cidr }}"
ostack_netword_dns: "{{ vault_ostack_netword_dns }}"
ostack_network_security_groups: "{{ vault_ostack_network_security_groups }}"
ostack_ipalias_network_cidr: "{{ vault_ostack_ipalias_network_cidr }}"

# vm_create
ostack_vm_flavor: "{{ vault_ostack_flavor }}"
ostack_vm_image: "{{ vault_ostack_image }}"
ostack_vm_security_groups: "{{ vault_ostack_security_groups }}"
ostack_vm_volume_type: "{{ vault_ostack_volume_type }}"
ostack_vm_volume_size: "{{ vault_ostack_volume_size }}"
# ostack_vm_key_name: provide via cmd

# vm_format
ostack_mount_device: "{{ vault_ostack_mount_device }}"
ostack_mount_point: "/mnt/ssd-data"

# ssh
ssh_user_key_list: "{{ vault_ssh_user_key_list }}"
1 change: 1 addition & 0 deletions ansible/migrate.yml
@@ -4,6 +4,7 @@
remote_user: ubuntu
gather_facts: false
vars:
# TODO: is there a better way to get this fact automatically?
primary: lard-a

tasks:
14 changes: 9 additions & 5 deletions ansible/provision.yml
@@ -1,16 +1,20 @@
---
- name: Setup networks and 2 vms
- name: Provision
hosts: servers
gather_facts: false
vars:
ostack_vm_key_name: # provide via cmd

tasks:
- name: Setup networks # noqa: run-once[task]
- name: Setup networks
ansible.builtin.include_role:
name: networks
name: ostack
tasks_from: networks.yml
delegate_to: localhost
run_once: true

- name: Setup VMs
- name: Create VMs
ansible.builtin.include_role:
name: vm
name: ostack
tasks_from: vm_create.yml
delegate_to: localhost
File renamed without changes.
13 changes: 13 additions & 0 deletions ansible/roles/bigip/tasks/main.yml
@@ -1,4 +1,17 @@
---
- name: Create bigip directory if it does not exist
ansible.builtin.file:
path: /etc/postgresql/16/db/bigip
state: directory
mode: '0755'

- name: Copy the bigip schema to the remote
ansible.builtin.copy:
src: bigip.sql
dest: /etc/postgresql/16/db/bigip/bigip.sql
mode: '0755'

# TODO: add failed_when inventory_hostname != primary
- name: Create bigip user and basic database
# this is allowed to fail on the secondary, should work on the primary and be replicated over
ignore_errors: true
4 changes: 4 additions & 0 deletions ansible/roles/deploy/tasks/main.yml
@@ -4,6 +4,7 @@
ansible.builtin.group:
name: lard
state: present
become: true

- name: Create lard user
ansible.builtin.user:
@@ -13,6 +14,7 @@
append: true
state: present
create_home: false
become: true

# TODO: should we deploy in non root user?
- name: Copy files to server
@@ -22,6 +24,7 @@
mode: "{{ item.mode }}"
owner: root
group: root
become: true
loop: "{{ deploy_files }}"

- name: Start LARD ingestion service
@@ -30,3 +33,4 @@
name: lard_ingestion
state: restarted
enabled: true
become: true
42 changes: 25 additions & 17 deletions ansible/roles/ostack/defaults/main.yml
@@ -1,28 +1,36 @@
---
# TODO: separate what should be public and what private

# public
# PUBLIC
ostack_cloud: lard
ostack_region: Ostack2-EXT
ostack_ipalias_network_name: ipalias
# ostack_state: present
ostack_state: present

# private
## networks
# PRIVATE
# networks
ostack_network_name:

# TODO: probably makes sense to move these to network if they are not reused
# and networks_dns should be moved here since it depends on ostack_region
ostack_cidr:
ostack_ipalias_cidr:
ostack_security_groups:
ostack_network_cidr:
ostack_netword_dns: # dict[ostack_region -> list(ipv4)]
ostack_network_security_groups:
- name:
rule:
subnet:
port:
ostack_ipalias_network_cidr:

# vm_create
ostack_vm_image:
ostack_vm_flavor:
ostack_vm_key_name:
ostack_vm_security_groups:
ostack_vm_volume_type:
ostack_vm_volume_size:

# vm_format
ostack_mount_device:
ostack_mount_point:
ostack_repmgr_password:

## vm
ostack_availability_zone:
ostack_image:
ostack_flavor:
ostack_key_name:
# create_primary / create_standby
ostack_db_password:
ostack_primary_floating_ip:
ostack_primary_ip: