If you are using a released version of Kubernetes, you should refer to the docs that go with that version.

The latest release of this document can be found [here](http://releases.k8s.io/release-1.1/docs/getting-started-guides/libvirt-coreos.md). Documentation for other releases can be found at [releases.k8s.io](http://releases.k8s.io).
## Table of Contents

- Highlights
- Warnings about `libvirt-coreos` use case
- Prerequisites
- Setup
- Interacting with your Kubernetes cluster with the `kube-*` scripts.
- Troubleshooting
  - !!! Cannot find kubernetes-server-linux-amd64.tar.gz
  - Can't find virsh in PATH, please fix and retry.
  - error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
  - error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied
  - error: Out of memory initializing network (virsh net-create...)
## Highlights

- Super-fast cluster boot-up (a few seconds instead of several minutes for vagrant)
- Reduced disk usage thanks to COW
- Reduced memory footprint thanks to KSM
## Warnings about `libvirt-coreos` use case

The primary goal of the `libvirt-coreos` cluster provider is to deploy a multi-node Kubernetes cluster on local VMs as fast as possible and to be as light as possible in terms of resources used.

In order to achieve that goal, its deployment is very different from the “standard production deployment” method used on other providers. This was done on purpose in order to implement some optimizations made possible by the fact that we know that all VMs will be running on the same physical machine.

The `libvirt-coreos` cluster provider doesn’t aim to be a production look-alike.
Another difference is that no security is enforced on `libvirt-coreos` at all. For example (see the sketch after this list):

- The Kube API server is reachable via a clear-text connection (no SSL);
- The Kube API server requires no credentials;
- etcd access is not protected;
- Kubernetes secrets are not protected as securely as they are on production environments;
- etc.
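For instance, the API server can be queried over plain HTTP without any credentials. This is only a sketch: the master IP is the one used later in this guide, and port 8080 is an assumption (the API server's usual insecure port):

```shell
# Unauthenticated, clear-text query of the Kube API server.
# 192.168.10.1 is the master IP used by this guide; port 8080 is assumed to be
# the API server's insecure port.
curl http://192.168.10.1:8080/api/v1/nodes
```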
So, a Kubernetes application developer should not validate their application's interaction with Kubernetes on `libvirt-coreos`, because they might technically succeed in doing things that are prohibited on a production environment, like:

- unauthenticated access to the Kube API server;
- access to Kubernetes private data structures inside etcd;
- etc.
On the other hand, `libvirt-coreos` might be useful for people investigating the low-level implementation of Kubernetes, because debugging techniques like sniffing the network traffic or introspecting the etcd content are easier on `libvirt-coreos` than on a production deployment.
## Prerequisites

- Install dnsmasq
- Install ebtables
- Install qemu
- Install libvirt
- Install openssl
- Enable and start the libvirt daemon, e.g.:

  ```shell
  systemctl enable libvirtd
  systemctl start libvirtd
  ```

- Grant libvirt access to your user¹
- Check that your `$HOME` is accessible to the qemu user²
¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.

You can test it with the following command:

```shell
virsh -c qemu:///system pool-list
```

If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.
In short, if your libvirt has been compiled with Polkit support (e.g. Arch, Fedora 21), you can create `/etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules` as follows to grant full access to libvirt to `$USER`:

```shell
sudo /bin/sh -c "cat - > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules" << EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```
If your libvirt has not been compiled with Polkit (e.g. Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:

```shell
$ ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 Feb. 12 16:03 /var/run/libvirt/libvirt-sock

$ usermod -a -G libvirtd $USER
# $USER needs to log out and log back in for the new group to take effect
```

(Replace `$USER` with your login name.)
² All the disk drive resources needed by the VMs (CoreOS disk image, Kubernetes binaries, cloud-init files, etc.) are put inside `./cluster/libvirt-coreos/libvirt_storage_pool`.

As we’re using the `qemu:///system` instance of libvirt, qemu will run with a specific `user:group` distinct from your user. It is configured in `/etc/libvirt/qemu.conf`. That qemu user must have access to that libvirt storage pool.
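To check which user and group that is on your system, you can for instance look at the `user` and `group` settings in `/etc/libvirt/qemu.conf` (the defaults vary by distribution):

```shell
# Print the (possibly commented-out default) user/group qemu runs as.
grep -E '^#?[[:space:]]*(user|group)[[:space:]]*=' /etc/libvirt/qemu.conf
```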
If your `$HOME` is world readable, everything is fine. If your `$HOME` is private, `cluster/kube-up.sh` will fail with an error message like:

```
error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied
```
In order to fix that issue, you have several possibilities:

- Set `POOL_PATH` inside `cluster/libvirt-coreos/config-default.sh` (see the sketch below) to a directory that is:
  - backed by a filesystem with a lot of free disk space;
  - writable by your user;
  - accessible by the qemu user.
- Grant the qemu user access to the storage pool.
- Edit `/etc/libvirt/qemu.conf` to make qemu run as a user that has access to the storage pool (not recommended for production usage).
On Arch:

```shell
setfacl -m g:kvm:--x ~
```
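As a minimal sketch of the first option, `POOL_PATH` could be set in `cluster/libvirt-coreos/config-default.sh` like this (the directory below is only an example; any directory meeting the three requirements above will do):

```shell
# In cluster/libvirt-coreos/config-default.sh:
# /var/lib/libvirt/kubernetes-pool is a hypothetical location with enough free
# space, writable by your user, and accessible by the qemu user.
POOL_PATH=/var/lib/libvirt/kubernetes-pool
```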
## Setup

By default, the libvirt-coreos setup will create a single Kubernetes master and 3 Kubernetes nodes. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.
To start your local cluster, open a shell and run:

```shell
cd kubernetes
export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh
```
The `KUBERNETES_PROVIDER` environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.
The `NUM_NODES` environment variable may be set to specify the number of nodes to start. If it is not set, the number of nodes defaults to 3.
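For example, to start a cluster with five nodes:

```shell
NUM_NODES=5 cluster/kube-up.sh
```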
The `KUBE_PUSH` environment variable may be set to specify which Kubernetes binaries must be deployed on the cluster. Its possible values are:

- `release` (default if `KUBE_PUSH` is not set) will deploy the binaries of `_output/release-tars/kubernetes-server-….tar.gz`. This is built with `make release` or `make release-skip-tests`.
- `local` will deploy the binaries of `_output/local/go/bin`. These are built with `make`.
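For example, to build the binaries locally and deploy them to a running cluster:

```shell
make
KUBE_PUSH=local cluster/kube-push.sh
```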
You can check that your machines are there and running with:

```shell
$ virsh -c qemu:///system list
 Id    Name                 State
----------------------------------------------------
 15    kubernetes_master    running
 16    kubernetes_node-01   running
 17    kubernetes_node-02   running
 18    kubernetes_node-03   running
```
You can check that the Kubernetes cluster is working with:

```shell
$ kubectl get nodes
NAME           LABELS   STATUS
192.168.10.2   <none>   Ready
192.168.10.3   <none>   Ready
192.168.10.4   <none>   Ready
```
The VMs are running CoreOS.

Your ssh keys have already been pushed to the VMs. (It looks for `~/.ssh/id_*.pub`.)

The user to use to connect to the VMs is `core`.

The IP to connect to the master is 192.168.10.1.

The IPs to connect to the nodes are 192.168.10.2 and onwards.
Connect to `kubernetes_master`, for example with the `core` user and the master IP noted above:
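```shell
# The core user and the master IP listed above.
ssh core@192.168.10.1
```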
Connect to `kubernetes_node-01`, for example with the first node IP:
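```shell
# The core user and the first node IP listed above.
ssh core@192.168.10.2
```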
## Interacting with your Kubernetes cluster with the `kube-*` scripts.

All of the following commands assume you have set `KUBERNETES_PROVIDER` appropriately:

```shell
export KUBERNETES_PROVIDER=libvirt-coreos
```
Bring up a libvirt-CoreOS cluster of 5 nodes:

```shell
NUM_NODES=5 cluster/kube-up.sh
```
Destroy the libvirt-CoreOS cluster:

```shell
cluster/kube-down.sh
```
Update the libvirt-CoreOS cluster with a new Kubernetes release produced by `make release` or `make release-skip-tests`:

```shell
cluster/kube-push.sh
```
Update the libvirt-CoreOS cluster with the locally built Kubernetes binaries produced by `make`:

```shell
KUBE_PUSH=local cluster/kube-push.sh
```
Interact with the cluster:

```shell
kubectl ...
```
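For example, to list the nodes of the cluster, as shown earlier:

```shell
kubectl get nodes
```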
## Troubleshooting

### !!! Cannot find kubernetes-server-linux-amd64.tar.gz

Build the release tarballs:

```shell
make release
```
### Can't find virsh in PATH, please fix and retry.

Install libvirt.

On Arch:

```shell
pacman -S qemu libvirt
```

On Ubuntu 14.04.1:

```shell
aptitude install qemu-system-x86 libvirt-bin
```

On Fedora 21:

```shell
yum install qemu libvirt
```
### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

Start the libvirt daemon.

On Arch:

```shell
systemctl start libvirtd
```

On Ubuntu 14.04.1:

```shell
service libvirt-bin start
```
### error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied

Fix the libvirt access permissions (remember to adapt `$USER`).

On Arch and Fedora 21:

```shell
cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF
```

On Ubuntu:

```shell
usermod -a -G libvirtd $USER
```
### error: Out of memory initializing network (virsh net-create...)

Ensure libvirtd has been restarted since ebtables was installed.
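On systemd-based distributions (e.g. Arch, Fedora), for example:

```shell
systemctl restart libvirtd
```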