Benchmark of deployment / maintenance solutions #123
Replies: 5 comments
-
Hi, Here are some key elements from our internal processes:
Our project structure:
I hope this helps...
-
Hi Théo, great idea :-) At Le Filament, we are using Ansible and a specific role for deploying Odoo, hosted on our GitLab: https://sources.le-filament.com/lefilament/ansible-roles/docker_odoo

Our use case is the following: every customer provides us with (at least) one Ubuntu server on which we deploy Odoo on Docker (usually 2 instances: 1 prod and 1 test/validation) together with a Traefik proxy. In order to ease maintenance (and costs for our customers) we need very homogeneous servers and deployments, which is why we are using Ansible. On prod instances we also have daily encrypted backups (towards object storage in 2 locations) and log collection (towards an ELK stack).

Every night we build an Odoo base image (source code here: https://sources.le-filament.com/lefilament/odoo_docker) with the latest OCB code plus a set of modules from OCA. Then for each customer we build a final image on top, with their specific modules and dependencies.
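The two-layer image build described above (nightly base image, then a per-customer image on top) could be sketched as a Dockerfile along these lines; the registry, image names, paths and the `ADDONS_PATH` variable are illustrative placeholders, not Le Filament's actual build:

```dockerfile
# Hypothetical per-customer image built on top of a nightly OCB+OCA base.
# Registry, tags and paths are placeholders for illustration only.
FROM registry.example.com/lefilament/odoo:16.0

# Customer-specific Python dependencies
COPY requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt

# Customer-specific Odoo modules, added to the addons path
COPY ./custom_addons /opt/odoo/custom_addons
ENV ADDONS_PATH="/opt/odoo/addons,/opt/odoo/custom_addons"
```

Because the base layer is shared, only the thin customer layer changes between nightly rebuilds, which keeps pulls and storage cheap across many homogeneous servers.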
-
At work we use Tecnativa's Doodba as a base. We started off with xoelabs' dockery-odoo, which no longer exists. Doodba really works well for us, though for new staff there is a bit of a learning curve. Our projects use a long-ago forked version of Tecnativa's project template: https://github.com/glodouk/odoo-scaffolding. Some of the reasons for forking aren't really relevant anymore; it's mostly been stuff stripped out to avoid confusion. There are a few things I'd like to get the time to do with it, but it's on the back burner at the moment.

Non-developer/local deployments are all done through our Helm chart: https://github.com/GlodoUK/helm-charts/tree/master/charts/odoo. I don't like our Helm chart: it's untidy and full of backwards compatibility (I should have just bumped the major version). It's also in this weird "we probably want a Kubernetes operator, but I don't want to write one" uncanny valley at the moment. But it works, and every deployment we have uses it. Production deployments are all managed through GitOps using FluxCD.
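The GitOps image-update flow with FluxCD can be sketched with Flux's image-automation objects; the names, registry, namespace and semver range below are assumptions for illustration, not this setup's actual manifests:

```yaml
# Hypothetical Flux objects: watch a registry for new Odoo images and
# select the latest tag matching a semver policy. All names are examples.
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImageRepository
metadata:
  name: odoo
  namespace: flux-system
spec:
  image: registry.example.com/acme/odoo
  interval: 5m
---
apiVersion: image.toolkit.fluxcd.io/v1beta2
kind: ImagePolicy
metadata:
  name: odoo
  namespace: flux-system
spec:
  imageRepositoryRef:
    name: odoo
  policy:
    semver:
      range: ">=16.0.0"
```

An `ImageUpdateAutomation` object (not shown) would then commit the selected tag back to the Git repository that Flux reconciles, closing the GitOps loop.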
Kubernetes
Depends on the needs of the project (number of users and what's installed). We have shared hosting where we select the VM size for cost efficiency, effectively-dedicated hosting which is bespoke to the needs of the project, and 2 (soon to be 1) on-premises deployments where we use what we're given.
Typically Ubuntu.
Varies - 100 users down to 5.
Always dedicated.
HA cluster per instance, unless it's test/UAT. One exception is on-premises; another is a customer using Amazon RDS.
Varies. For our cloud hosting we're currently on Azure. We have managed to get by with Azure Files for the filestore. For the larger installs we use Redis for sessions.
PostgreSQL always, if production. There is 1 legacy exception to this.
Create a pull request against the $ODOO_VERSION branch; CI runs; if tests pass, merge. GitHub CI then builds and pushes the Docker container. FluxCD monitors for image updates and automatically applies click-odoo-update.
Under the hood we use Doodba, so we get everything Doodba has: wdb, debugpy, git-aggregate for addons fetching and updating.
Pull from the release branch source (i.e. $ODOO_VERSION). Try and ensure good code coverage. Occasionally this catches us out, but it's usually minor.
PostgreSQL WAL backups to off-site storage using barman, in all but 2 places (in one the customer is self-managing, and another is using wal-g). This allows for PITR, but effectively we have a lag of up to 5 minutes before it hits off-site storage.

For our hosted solutions we currently use Velero with Restic as a backend for periodic filestore and database dump snapshots. On my cards is to look at replacing Velero with something else: Velero does more than we need, and ideally I just want a snapshot of the storage shipped off-site every X hours. Restic in particular is starting to struggle with some of our larger filestores due to the large-ish number of very small files. Velero does have a Kopia backend, but I've not had time to investigate.
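The merge-to-release-branch pipeline described above (tests on the PR, then an image build and push on merge, which FluxCD later picks up) could look roughly like this GitHub Actions workflow; the registry, image name and branch are hypothetical placeholders, not this team's actual workflow:

```yaml
# Hypothetical workflow: build and push the Odoo image when changes land
# on the release branch. Registry and image names are examples only.
name: build
on:
  push:
    branches: ["16.0"]   # i.e. the $ODOO_VERSION branch
jobs:
  build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          push: true
          tags: ghcr.io/acme/odoo:16.0-${{ github.sha }}
```

Tagging each image with the commit SHA gives Flux's image policy something monotonically new to detect on every merge.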
-
Great idea! Maybe this can contribute to a convergence of solutions. It seems everyone is using a different method that works for them, but there is a relatively high maintenance burden involved. What we use internally in our 11-member team:
-
Hi, Thanks @theo-le-filament for the initiative. :) Here at @coopiteasy, we use the following.
Bare metal with a Python virtual environment. PostgreSQL is on the same server as Odoo.
Our main servers have 8 cores, 32 GB RAM and a 500 GB SSD. We tend to put all our customers that are on the same version of Odoo on the same server (rare exceptions are made).
Ubuntu LTS.
Our most used server (hardware info above) has 37 databases with a total of 1450 internal users, and about 15000 users in total.
Several databases on the same Odoo instance.
One database per instance on the same physical machine.
The server's SSD. Backups are sent to two other locations.
Currently not; the Odoo instance is shut down during updates of the code and the databases.
We use Ansible scripts for provisioning servers, and also to coordinate code deployment and database updates.
No particular debugging tools on the production server. Logs are stored on the server.
The addons path is listed in the Odoo configuration file, for each instance. That's something that annoys us; we would like to have it generated from
We deploy code from GitHub using tags (from releases). We use git-aggregator to pull the code from GitHub. We recently started using click-odoo-update or Module Auto Update to update modules with minimal downtime.
We update Odoo and OCA repositories every four months. We update custom modules as soon as they need to be updated.
Backup using borg on two dedicated servers in different zones, but both in Europe. Also in parallel with
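The git-aggregator step mentioned above is driven by a YAML file along these lines; the repository, directory and branch below are illustrative examples, not coopiteasy's actual configuration:

```yaml
# Hypothetical git-aggregator configuration (e.g. repos.yaml): fetch the
# listed remotes, merge the pinned refs, and check out the target branch
# into the given directory. The repo shown here is an example only.
./addons/partner-contact:
  remotes:
    oca: https://github.com/OCA/partner-contact.git
  merges:
    - oca 16.0
  target: oca 16.0
```

Running `gitaggregate -c repos.yaml` then materialises each addons directory, which an Ansible playbook can trigger on every deployment.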
-
We would like to propose a benchmark of the various deployment / maintenance solutions being used by OCA users.
We foresee the following advantages to this benchmark:
We would like everyone who wants their own solution to be part of this benchmark to provide links to:
So far, we have thought about running the analysis based on the following criteria:
Feel free to propose any other criterion that you find useful / pertinent!
Best Regards
Théo