Service deployment
The configuration for these machines comes from the heliodines private git repository, a copy of which is installed at /var/local/projects/heliodines/git.
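Assuming the local copy is a plain clone with access to the private remote, it can be brought up to date with a normal git pull (a sketch, not verified against this repository):
#[user@trop02]
git -C /var/local/projects/heliodines/git pull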
The physical trop02 hyper-visor machine has two physical network interfaces, presented to the system as the bridges br0 and br1.
#[user@trop02]
/sbin/ifconfig
br0 Link encap:Ethernet HWaddr 0c:c4:7a:35:12:06
inet addr:129.215.175.97 Bcast:129.215.175.255 Mask:255.255.255.0
inet6 addr: fe80::ec4:7aff:fe35:1206/64 Scope:Link
....
br1 Link encap:Ethernet HWaddr 0c:c4:7a:35:12:07
inet addr:192.168.137.233 Bcast:192.168.137.255 Mask:255.255.255.0
inet6 addr: fe80::ec4:7aff:fe35:1207/64 Scope:Link
....
Interface br0 is connected to the local VLAN created for the rack of machines in the ROE machine room, and interface br1 is connected to the internal VLAN for the SQLServer databases.
The source configuration for these interfaces comes from the heliodines private git repository at /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/network/interfaces.
#[user@trop02]
less /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/network/interfaces
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# Public interface
auto br0
iface br0 inet static
    address 129.215.175.97
    netmask 255.255.255.0
    network 129.215.175.0
    gateway 129.215.175.126
    broadcast 129.215.175.255
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 195.194.120.1 195.194.120.2
    dns-search roe.ac.uk
    # Configure bridge port and STP.
    bridge_ports eth0
    bridge_fd 0
    bridge_stp off
    bridge_maxwait 0
# Private interface
auto br1
iface br1 inet static
    address 192.168.137.233
    netmask 255.255.255.0
    network 192.168.137.0
    broadcast 192.168.137.255
    # Configure bridge port and STP.
    bridge_ports eth1
    bridge_fd 0
    bridge_stp off
    bridge_maxwait 0
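The deployed copy is assumed to live at the standard ifupdown location on the hyper-visor; a quick way to check that it has not drifted from the version in the repository (a sketch, assuming /etc/network/interfaces is the live file):
#[user@trop02]
diff \
/var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/network/interfaces \
/etc/network/interfaces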
Both of these interfaces are configured as bridges with static IP addresses, allowing them to be used as routes to access hosts on other networks.
The br0 interface is allocated an external public IP address, 129.215.175.97, on the 129.215.175.0/24 address range, providing access to/from the public internet.
The br1 interface is allocated an internal IP address, 192.168.137.233, on the 192.168.137.0/24 network, providing access to the SQLServer databases inside ROE.
Each bridge also has a corresponding physical port, eth0 or eth1, which shares the same MAC address as the bridge; the bridge itself carries the IP address and provides the interface that the OS on the physical hyper-visor machine uses.
#[user@trop02]
/sbin/ifconfig
eth0 Link encap:Ethernet HWaddr 0c:c4:7a:35:12:06
....
eth1 Link encap:Ethernet HWaddr 0c:c4:7a:35:12:07
....
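The mapping between the bridges and their physical ports can be confirmed by listing the bridge membership. This assumes the bridge-utils package is installed on the hyper-visor, and the output shown here is illustrative rather than captured:
#[user@trop02]
/sbin/brctl show
bridge name  bridge id          STP enabled  interfaces
br0          8000.0cc47a351206  no           eth0
br1          8000.0cc47a351207  no           eth1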
The physical trop02 hyper-visor machine is running two virtual networks created by the libvirt system.
#[user@trop02]
virsh \
--connect 'qemu:///system' \
net-list
Name State Autostart Persistent
----------------------------------------------------------
bridged active yes yes
default active yes yes
The source configuration for these virtual networks comes from the heliodines private git repository at /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/libvirt/qemu/networks/.
#[user@trop02]
ls /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/libvirt/qemu/networks/
bridged.xml
default.xml
The bridged network is configured as a forwarding bridge connected to the external br0 interface.
#[user@trop02]
less /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/libvirt/qemu/networks/bridged.xml
<network ipv6='yes'>
    <name>bridged</name>
    <uuid/>
    <forward mode='bridge'/>
    <bridge name='br0'/>
</network>
With this configuration, any virtual machine connected to the bridged network is effectively connected to the external br0 bridge interface on the ROE machine room VLAN. The virtual machine can use this interface for outbound access to the external public internet, and, if it is given a public IP address within the 129.215.175.0/24 range, it can also be reached by inbound traffic from the public internet.
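To check which of these networks a guest is actually attached to, virsh can list the guest's interfaces; for Acilamwen this should report one interface on the bridged network and one on the default NAT network (command shown without output, since none was captured here):
#[user@trop02]
virsh \
--connect 'qemu:///system' \
domiflist 'Acilamwen'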
The default network is configured as a standard libvirt NAT network with slots for 8 virtual machines.
#[user@trop02]
less /var/local/projects/heliodines/git/src/cfg/tropo/trop02/etc/libvirt/qemu/networks/default.xml
<network ipv6='yes'>
    <name>default</name>
    <uuid/>
    <forward mode='nat'>
        <nat>
            <port start='1024' end='65535'/>
        </nat>
    </forward>
    <bridge name='virbr0' stp='off' delay='0'/>
    <mac address='52:54:00:02:02:01'/>
    <ip family='ipv4' address='192.168.202.1' netmask='255.255.255.0'>
        <dhcp>
            <range start='192.168.202.8' end='192.168.202.15'/>
            <host mac='52:54:00:02:02:08' ip='192.168.202.8' name='Araybwyn'/>
            <host mac='52:54:00:02:02:09' ip='192.168.202.9' name='Lothigometh'/>
            <host mac='52:54:00:02:02:0A' ip='192.168.202.10' name='Ulov'/>
            <host mac='52:54:00:02:02:0B' ip='192.168.202.11' name='Dwiema'/>
            <host mac='52:54:00:02:02:0C' ip='192.168.202.12' name='Ibalehar'/>
            <host mac='52:54:00:02:02:0D' ip='192.168.202.13' name='Eterathiel'/>
            <host mac='52:54:00:02:02:0E' ip='192.168.202.14' name='Siamond'/>
            <host mac='52:54:00:02:02:0F' ip='192.168.202.15' name='Acilamwen'/>
        </dhcp>
    </ip>
</network>
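To see which of the eight NAT slots are currently in use, libvirt can list the active DHCP leases on the default network (this subcommand should be available in the virsh versions on these machines, but that has not been verified here):
#[user@trop02]
virsh \
--connect 'qemu:///system' \
net-dhcp-leases 'default'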
The MAC addresses and IP addresses for the NAT network on each of the hyper-visor machines, trop01 to trop04, are allocated within specific ranges.
+-------------+---------------------+--------------------+
| hyper-visor | MAC address range | IP address range |
+-------------+---------------------+--------------------+
| trop01 | 52:54:00:02:01:xx | 192.168.201.xxx |
| trop02 | 52:54:00:02:02:xx | 192.168.202.xxx |
| trop03 | 52:54:00:02:03:xx | 192.168.203.xxx |
| trop04 | 52:54:00:02:04:xx | 192.168.204.xxx |
+-------------+---------------------+--------------------+
This pattern makes it easier to identify which physical hyper-visor an IP or MAC address from a network trace belongs to.
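For example, an address of 192.168.203.12 seen in a trace belongs to a guest on trop03, and a MAC address of 52:54:00:02:04:0a belongs to a guest on trop04.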
There are 5 virtual machines running on trop02.
#[user@trop02]
virsh \
--connect 'qemu:///system' \
list
Id Name State
----------------------------------------------------
2 Acilamwen running
3 Lothigometh running
4 Ulov running
5 Eterathiel running
6 Ibalehar running
Two of them, Acilamwen and Eterathiel, are running containers for the TAP service; the rest do not have any active containers.
Acilamwen is running the front-end Apache HTTP proxy in a container.
#[user@trop02]
ssh Acilamwen \
'
hostname
date
docker ps
'
Acilamwen
Tue 18 Jul 16:11:48 BST 2023
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9683ea05bb04 firethorn/apache:latest "/usr/local/bin/http…" 2 years ago Up 4 weeks 0.0.0.0:80->80/tcp apache
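A quick check that the proxy is alive can be run from the hyper-visor; this assumes the guest name resolves there, as it does for ssh, otherwise substitute the guest's IP address:
#[user@trop02]
curl --head --silent 'http://Acilamwen/'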
Eterathiel is running the back-end components that make up the Firethorn webservice.
#[user@trop02]
ssh Eterathiel \
'
hostname
date
docker ps
'
Eterathiel
Tue 18 Jul 16:12:56 BST 2023
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a9775b40354e firethorn/ogsadai:2.1.36 "/bin/sh -c '/var/lo…" 5 days ago Up 5 days (healthy) 8080/tcp ft_jarmila.1.4yzmmuh6jfiqvw4tmslylzox8
aca731144335 firethorn/firethorn:2.1.36 "/bin/sh -c '/var/lo…" 5 days ago Up 5 days (healthy) 8080/tcp ft_gillian.1.14o28ydavjbctxh8sirh1yp1i
6216fd3ef158 firethorn/firethorn-py:2.1.36 "python3" 4 weeks ago Up 4 weeks ft_firethorn-py.1.rxfuqi2evdpkeeixz6khdhvin
97980c766d5d firethorn/postgres:2.1.36 "docker-entrypoint.s…" 4 weeks ago Up 4 weeks 5432/tcp ft_carolina.1.zx12opk1k05luhlbkza0pa37l
a562c255b7de firethorn/postgres:2.1.36 "docker-entrypoint.s…" 4 weeks ago Up 4 weeks 5432/tcp ft_bethany.1.sibtonwiibkdc7v08fpazjrt7
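The task-style container names (ft_<name>.1.<id>) suggest these containers are deployed as Docker Swarm services; if so, a tidier summary is available from the service list (a sketch, assuming Swarm mode is active on Eterathiel):
#[user@trop02]
ssh Eterathiel \
'
docker service ls
'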
The virtual machines are configured with a minimal Linux install, just enough to act as a Docker host, plus some system administration tools to help with debugging network issues. The last set of notes I have that cover creating a new VM image are from October 2018, 20181016-02-update-vmimage.txt, which matches the qcow2 backing file for Acilamwen.
#[user@trop02]
virsh \
--connect 'qemu:///system' \
dumpxml \
'Acilamwen' \
| xmllint \
--xpath '//disk[@device="disk"]' \
-
<disk type="file" device="disk">
<driver name="qemu" type="qcow2"/>
<source file="/libvirt/storage/live/Acilamwen.qcow"/>
<backingStore type="file" index="1">
<format type="qcow2"/>
<source file="/var/lib/libvirt/images/base/fedora-28-32G-docker-base-20181016.qcow"/>
<backingStore/>
</backingStore>
<target dev="vda" bus="virtio"/>
<alias name="virtio-disk0"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x04" function="0x0"/>
</disk>
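The same backing chain can be inspected directly with qemu-img, using the image path reported by dumpxml above (reading the live image will usually need root):
#[user@trop02]
sudo qemu-img info --backing-chain '/libvirt/storage/live/Acilamwen.qcow'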
It doesn't look like the virtual machines have been updated since then.
#[user@trop02]
ssh Acilamwen \
'
hostname
date
cat /etc/redhat-release
echo
sudo yum history
'
Acilamwen
Tue 18 Jul 16:20:48 BST 2023
Fedora release 28 (Twenty Eight)
ID | Command line | Date and time | Action(s) | Altered
-------------------------------------------------------------------------------
2 | -y install docker-ce | 2018-10-16 13:41 | Install | 3 EE
1 | | 2018-10-16 13:37 | Install | 435 EE
Acilamwen should have one network connection to the default NAT network, and one connection to the public internet via the bridged virtual network, which attaches it to the external br0 interface on the physical hyper-visor.
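This can be verified from inside the guest; with the iproute2 tools installed, something like the following should show one address on the 192.168.202.0/24 NAT network and, if the public address is configured, one on the 129.215.175.0/24 range (the exact interface names inside the guest are not recorded here):
#[user@trop02]
ssh Acilamwen \
'
ip -brief addr show
'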
The last set of notes I have on configuring the network on Acilamwen are from February 2021, 20210208-01-float-deploy.txt, describing the process for adding a floating IP address to the VM following a reboot of the host hyper-visor.
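I don't have those notes in front of me, but the usual pattern is to add the floating address as a secondary address on the guest's interface attached to the bridged network; a heavily hedged sketch, with placeholder values rather than the real address or interface name:
#[user@trop02]
ssh Acilamwen \
'
# Placeholder address and interface; the real values come from 20210208-01-float-deploy.txt.
sudo ip addr add 129.215.175.xxx/24 dev eth1
'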