Add docker-compose #195
base: develop
Conversation
@@ -0,0 +1,35 @@
# FROM python:3.10.12-ubuntu22.04.3
FROM ubuntu:22.04
Curious as to why the python image isn't used?
There were so many odd bugs that I opted to use the same image that TF's servers are running. I may try to roll back to the python image.
RUN apt update && \
    apt install -y python3.10 && \
    apt install -y python3-pip --upgrade pip && \
    apt install -y git && \
Probably makes no sense to run git inside the container.
There's a package in the requirements files that's installed from git, which seems to require git to be available in the container.
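For example, a requirements entry that points at a git repository looks something like `some-package @ git+https://github.com/example/some-package.git` (the name and URL here are placeholders, not the actual entry); pip shells out to git to fetch it, so git has to be installed wherever `pip install` runs.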
Ok so for a dev container you probably wouldn't run pip inside the container anyway, since the files would just be mounted into it from your host machine. You should only need to install what is needed at runtime.
If you want to build an image that can be deployed to a server, that's a different story.
Hmm, interesting. I'm still learning about dev containers, so feedback about them is welcome. Isn't the idea to have the same environment when developing as in production (where possible), so that the container would install the packages? In most cases they'd be cached, or pip would just check that no updates are needed?
My assumption, and how I understood dev containers, is that you don't need to do anything before starting to develop on a project: everything is installed automatically when you open it, and both the dev container and the shippable "production" container contain the same base instructions for setting up the container. I'm happy to be wrong; this is just the picture I got of this.
You have the right idea, but in practice it's a bit more complicated. The docker image will provide a consistent runtime environment, ensuring that the OS and its installed packages in production are the same as what you're working with during development.
Then you have the build environment, which is where your code is compiled, packaged, minified or whatever. Some people use an entirely separate docker image for the build environment, typically through the use of multi-stage builds. This way stuff that's used during the build stage (e.g. git or pip) doesn't end up in the runtime image, where it isn't needed anymore.
It's true that doing your development work always using the build image will yield more consistent results, avoiding the need to consider whether your workstation has the same version of git or pip installed, but in practice doing that rigorously is a pain in the ass. Most tools (like pip) should produce pretty consistent results from a given requirements file even if the version is a little off. As long as you're using a consistent runtime environment, things should be fine.
In a professional setting you would have CI set up that builds your image and runs your tests in a consistent environment, ensuring there are no issues, and then produces your production runtime image from that environment.
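To make the multi-stage idea concrete, here's a minimal sketch (image tags, paths and the final command are just illustrative, not taken from this PR):

```dockerfile
# Build stage: git is available here so pip can fetch VCS requirements
FROM python:3.10-slim AS builder
RUN apt-get update && apt-get install -y --no-install-recommends git
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: only the already-installed packages and the app code,
# without git or other build-time tooling
FROM python:3.10-slim
COPY --from=builder /install /usr/local
WORKDIR /app
COPY . .
# Placeholder entrypoint; the real command depends on the project
CMD ["python3", "main.py"]
```

The point is just that the first stage can have whatever build tooling it needs, and the second stage only copies in the results.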
Adding a bit more:
What you want to avoid is having to constantly rebuild the image during development. As you're writing new code, installing libraries, etc., you'd want to be able to run the code without needing to run `docker build`. That's why you mount the files rather than copying them in the build stage.
It's possible to run tools like git or pip also inside a docker image, thereby avoiding the need for devs to install those tools on their own machine, but as I mentioned that's usually a pain in the ass. You'll have trouble integrating that with your IDE or Git UI or other tools you might be using.
I'll also point out that it's often necessary to have a different dev runtime image altogether, because you might need tools or configurations for debugging or profiling that you don't want in the production image.
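As a rough sketch of the mount-instead-of-copy approach (service name, paths and command are made up for illustration, not taken from this PR's compose file):

```yaml
services:
  app:
    build: .
    volumes:
      # Bind-mount the source tree so code changes show up in the
      # container immediately, without rebuilding the image
      - .:/app
    working_dir: /app
    # Placeholder command; whatever normally starts the dev server
    command: python3 main.py
```

With something like that in place, `docker build` only needs to run again when the dependencies or the base image change.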
RUN pip install --progress-bar off -r requirements.txt
COPY . .
If you're using this as a dev container you shouldn't copy any files into the image, just mount them as a volume instead. That way you don't need to rebuild the image when there are changes.
Yeah true. The main task was not to create the dev container but I will look at that.
Add dockerfile for teknologr as well as a (maybe working) devcontainer.
No need to merge this yet, but sharing so other people can see the changes as well.