So I’ve just been getting into Docker in the last few days, after ignoring it and considering it hipster garbage for a really long time. It’s not so much that it does things that can’t be accomplished without it, but that it provides a nice, systematic and organized approach to deployment, which makes migration easy and lets me repurpose a machine without digging all the little pieces of what I’ve done out of so many different layers of the OS. Like OpenVZ with a fraction of the overhead per container.
I’m curious if anyone else is using it regularly for any deployments. What are the ways you’re using it to make your life easier? Are you doing any kind of clustering and, if so, how are you going about the redundancy?
Funny you mention it, I was looking at grabbing a Lynda subscription yesterday to check out their DevOps/Sysadmin stuff like Docker and Ansible training. Only played with it a little but never enough to properly deploy anything with it beyond test stuff. Love the idea behind it though.
Every couple of months I try to get into Docker again; I play with it for a few days, but then there’s always something throwing me off.
But it’s nice for testing a new application for the first time without the hassle of installing it, etc.
This has been helping me out a ton:
In particular links and volumes have kind of changed how I’ve previously viewed it.
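For anyone else getting their head around those two features, here’s a minimal sketch (all image, volume, and network names are just placeholders): a named volume survives the container being destroyed, and a user-defined network gives you the name-based wiring that `--link` used to provide (links still work but are considered legacy).

```shell
# user-defined network: the modern replacement for --link
docker network create appnet

# named volume: the data persists even if the container is removed
docker volume create mydata
docker run -d --name db --network appnet \
  -v mydata:/var/lib/postgresql/data postgres

# any container on "appnet" can reach the database via the hostname "db"
docker run -d --name web --network appnet my-web-image
```

Destroy and recreate `db` and the volume’s contents are still there, which is what changed my view of containers being “throwaway.”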
Docker’s amazing, I just don’t have a 6 figure monthly budget to make proper use out of it.
I tried docker once, it was awful.
I tried the “CLOUD” feature, which seems to work fine but is missing 80% of the features that a locally running Docker container has, like IPv6. People have recommended adding that, but look at the issue tracker on GitHub: four months and nothing, the devs are ignoring it.
Wait a few decades and they will for sure have v6.
Huge Docker fanboy here. I’ve been using it on a daily basis at work to package up and deploy software for about three years now. We find that giving our users prepackaged images and just letting them hit the ground running goes well beyond just having good documentation.
I deal with non-networked machines quite often, so having all the dependencies good-to-go right out of the box significantly speeds up the deployment process. For most of my setups, I write a fancy start script that gets all the right processes going, and that’s the entry point when the container runs.
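For anyone curious what such a start script might look like, here’s a hypothetical minimal sketch; every path, name, and variable in it is invented for illustration:

```shell
#!/bin/sh
# entrypoint.sh -- example "start script" along the lines described above:
# bring up each process the container needs, then hand off to the main one.
set -e

# fall back to a default if the env var wasn't passed with -e at run time
: "${APP_PORT:=8080}"

# start any background helpers first
/usr/local/bin/helper-daemon &

# exec the main process so it runs as PID 1 and receives signals properly
exec /usr/local/bin/main-app --port "$APP_PORT"
```

In the Dockerfile you’d wire it up with something like `ENTRYPOINT ["/entrypoint.sh"]`.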
Using Docker Compose to bring up our stack with different configurations is a piece of cake. Our typical stack is 3-4 containers, but the good thing is that it allows us to scale as necessary and for testing purposes. Different configurations are set up by using environment variable files that each docker-compose file points to, so launching one config will send the appropriate env file to the container as it starts and all the variables are read in by the start script.
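A rough sketch of that env-file-per-configuration pattern (all file, service, and variable names here are invented):

```shell
# One env file per configuration, e.g. staging.env (contents invented):
#   APP_ENV=staging
#   DB_HOST=db
#
# The compose file points each service at it:
#   services:
#     web:
#       image: my-web-image
#       env_file: staging.env
#
# Bring the stack up with that configuration; the start script inside
# each container reads the variables as they come in.
docker-compose up -d

# scaling a service out for testing is a one-liner
docker-compose scale web=3
```

Swapping configs is then just a matter of pointing the compose file at a different env file.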
All in all yes, Docker love. Mucho Docker love.
Give me a shout if you need help playing around with anything. I wouldn’t consider myself an expert, but I’ve seen and worked out a shit ton of different Docker-related issues over the past few years.
I use it on and off but my immediate impression was that a bunch of front-end hipster developers took an axe to OpenVZ and this is what came out. Hipster Containers. It’s okay but I’ll never really enjoy using it.
I haven’t reeeeally gotten the hang of it, and probably never will.
One of our developers uses it for staging some web development, but that’s about it.
The only time I tried it was using Mailcow’s ready-made Docker thingy, and it didn’t really suit me personally.
I’ve used Docker once, and that was mainly to test stuff. Other than that, I like to do it the manual way. Allows me to better control stuff, and do it much faster.
Oh boy, if someone can enlighten me that’d be great. I just don’t see the point at all in Docker - people keep telling me to try it but I really don’t see the point in it…
There are so many layers to it that each person might have a different answer for what it is about Docker that drives them to use it, I think.
An easy one to land on as a good reason to start using it is the same use case as splitting up your dedicated server into multiple VMs. I mean, why would anyone do that, right? It’s excessive and unnecessary, technically. Yet many of us do it because we like to isolate this thing and that thing from each other.
For some, maybe it’s because they’re installing prebuilt software stacks that don’t work next to each other (which, again, technically probably means that having both is wasted resources, but being 100% logical obviously isn’t how we act). For others, maybe it’s because they want to be able to spin down an app and destroy it with a single command, without having to go through and remove its leftovers like picking food out of your teeth with a toothpick. One action, done.
But with Docker you can do all of that and gain back some of the resources: a container doesn’t need to carry an entire functional operating system to run the app, it’s typically barebones, and you don’t end up with the same service running four times on the same machine.
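That “one command and it’s gone” point is pretty much literal. A sketch with made-up names:

```shell
# stop and remove the container in one shot
docker rm -f myapp

# optionally reclaim the image and any named volume too
docker rmi myapp-image
docker volume rm myapp-data

# nothing left behind on the host -- no stray configs, users, or packages
```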
Mmm, just to make things extra sexy. Web-based container administration.
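If anyone wants to try it, Portainer’s commonly documented one-liner looks roughly like this (check their docs for the current image tag; it needs the Docker socket mounted so it can manage your containers):

```shell
docker run -d -p 9000:9000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  portainer/portainer
# then browse to http://<host>:9000
```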
Just wait till you start playing around with Rancher and/or Kubernetes. Then the real fun begins.
How different is it? Any web panel to manage it as well?
I’ve tried several times, and never got Rancher to work reliably. Never.
Where Portainer is more for individual containers on your host machine, Rancher/Kubernetes is for orchestrating the creation and monitoring of stacks that have multiple containers which may or may not be running on your host machine.
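To make that concrete: with the orchestration layer you declare how many copies of a container you want, and the cluster decides which machines they land on. A rough kubectl sketch (deployment and image names are made up; on older kubectl versions `run` creates a Deployment like this):

```shell
# declare the desired state and let the cluster place the pods
kubectl run web --image=my-web-image --replicas=3

# see which nodes the scheduler picked
kubectl get pods -o wide

# scaling is a one-liner; the cluster converges to the new state
kubectl scale deployment web --replicas=5
```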
This Rancher vid is decent to get a concept of what it’s all about:
Yeah, I’ve had my fair share of issues as well. It’s hit or miss though, can be useful in certain situations/environments. Typically I stick with all command line stuff or writing my own scripts to manage my containers.