Containers were supposed to offer greater flexibility and portability than VMs. Somewhere along the way that promise got derailed in the frenzy of using containers to push 'webscale' under the umbrella marketing speak of devops.
At the moment we are at real risk of putting people off containers. So much technical debt and complexity has been piled onto containers to account for 'webscale' that the basic promise of being lightweight and portable has been lost.
Words like devops promise a lot but deliver a web of complexity. Google and Facebook have an army of networking, storage, security and scalability engineers. It's not two engineers doing devops.
LXC still gives you containers that are simple to use and deliver all the efficiency and flexibility containers promise. The devops container is a modification of the LXC container (Docker was based on LXC until version 0.9) that introduces serious constraints and complexity which the devops ecosystem fails to tell its users about, but which users need to understand and be aware of.
There are fundamental technical problems at the core of devops that have not faced technical scrutiny and that have far-reaching consequences for end users.
The problem with devops starts with a non-standard OS environment. Docker is in many ways a fork of the LXC project. Containers are namespaced processes. The LXC project on which Docker was based ran an OS init as this process, so you got a standard multi-process OS environment, like a lightweight VM. For reasons that are not clear, Docker decided to run the app directly as the namespaced process, giving you a non-standard single-process OS environment.
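To make the difference concrete, here is a rough shell sketch. The first command uses unshare to run a single process in fresh namespaces, which is essentially the Docker model; the second creates and boots an LXC container whose PID 1 is the distro's own init, giving a normal multi-process environment. The distro, release and container names below are just placeholders.

    # Single process as PID 1 in new namespaces - roughly the Docker model
    sudo unshare --pid --mount --uts --net --fork --mount-proc /bin/bash

    # LXC boots the distro's init as PID 1 - a normal OS environment
    sudo lxc-create -n demo -t download -- -d ubuntu -r jammy -a amd64
    sudo lxc-start -n demo
    sudo lxc-attach -n demo -- ps -ef    # init, logging, cron - not just one app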
Anything non-standard is technical debt, and the knock-on effects of this cannot be overstated. The entire ecosystem of apps and tools is not designed to work in this kind of non-standard OS environment and needs workarounds for basics like logging, daemons and networking. A lot of functionality has to be managed from outside, making containers paradoxically less self-contained.
You can't suddenly decide you don't need a functioning OS environment to run apps; most of that functionality will have to be scripted back in, adding management overhead. Few users are going to spend thousands of hours of engineering to simply get back to where they are with their current solution. This is lock-in to a specific type of container and significant technical debt whose effects should not be underestimated.
Docker also uses layers to build containers. Layers were popularized by the Aufs and Overlayfs projects. While layers are interesting, it's extremely easy to get carried away. Layers are far from mature, with hard-to-detect bugs, filesystem incompatibilities and permission problems. The more layers, the more complex it becomes. The same benefits can be had by using a single layer at runtime; using layers to build containers adds tremendous overhead to basic image and container management.
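For readers unfamiliar with what a 'layer' actually is, a minimal overlayfs sketch looks something like this; the directory names are placeholders.

    # A layer is just an overlayfs mount: a read-only lower dir, a writable
    # upper dir and a merged view
    mkdir lower upper work merged
    sudo mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
    # Writes in merged/ land in upper/; lower/ is never modified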
Words like immutability are thrown around but fall apart under scrutiny. Containers are mutable at runtime. 'Immutability' used in this context is an artifact of the base image, in the same way that running a copy of a container leaves the original untouched.
A container is simply a folder on your filesystem. If you use a clone of that folder, the original folder is 'immutable'. Using layers to suggest some special 'immutability' is meaningless. Using a single layer at runtime, in the same way you would use a simple copy or a Btrfs or ZFS snapshot, provides the exact same benefits without the overhead of building containers with layers.
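As a rough illustration, assuming the base container lives at /var/lib/lxc/base (and, for the snapshot variants, is a Btrfs subvolume or ZFS dataset), any of the following leaves the original just as 'immutable':

    # Plain copy - the original folder stays untouched
    cp -a /var/lib/lxc/base /var/lib/lxc/web1

    # Instant copy-on-write clone with Btrfs
    btrfs subvolume snapshot /var/lib/lxc/base /var/lib/lxc/web1

    # Or with ZFS
    zfs snapshot tank/containers/base@v1
    zfs clone tank/containers/base@v1 tank/containers/web1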
Using multiple layers to build containers and the idea of reuse is equally questionable, especially in the context of single-app containers with just the base OS image and the application on top, where there is no scope for many layers unless you have got carried away and embraced complexity.
Layers are dependent on each other, so any security fix or OS update requires an image rebuild. This makes the idea of using multiple layers to build a container rather pointless. All container platforms give you a collection of base OS images to build on; simply using these images to build your apps and running a copy gives you the exact same benefits of 'immutability' without any of the complexity of layers.
Enforcing layers at the core of the runtime also creates other problems, for instance with non-root containers. In Linux only root can mount filesystems, so if you are using overlayfs or another layering system you need root privileges to manage those mounts in containers. That's why LXC has had stable unprivileged container support since 2014 while Docker is still trying to make it work in 2019. All of these technical decisions have knock-on effects that make the container less useful and more complicated to use.
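For comparison, here is a minimal sketch of an unprivileged LXC container; the uid/gid ranges, config paths and distro details vary by system and LXC version, so treat this as illustrative rather than a complete recipe.

    # /etc/subuid and /etc/subgid grant the user a range of ids, e.g.
    #   myuser:100000:65536

    # ~/.config/lxc/default.conf maps container root to those unprivileged ids
    lxc.idmap = u 0 100000 65536
    lxc.idmap = g 0 100000 65536

    # The container then runs entirely as a normal user
    lxc-create -n web -t download -- -d debian -r bookworm -a amd64
    lxc-start -n web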
Launching a thousand instances of a stateless application has never been a problem. Scalability is only a problem because of application state. The devops approach to state is simply to wish it away. Real users, however, have to deal with application state as a matter of course and can't pretend it doesn't exist.
Making application data ephemeral makes no sense and multiplies complexity for end users with literally no benefit. Containers abstract away the host; putting state on the host defeats the whole advantage of moving containers across hosts seamlessly, and worse, leaves the user with the overhead of managing state for every single container. You have taken a simple entity and made it much more complex.
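A hypothetical example of the pattern this produces: every container now needs its own host-side volume plumbing, and that data no longer travels with the container. The paths and image name here are placeholders.

    # State pushed out to the host, one mount per container to track and manage
    docker run -d --name db1 -v /srv/db1-data:/var/lib/postgresql/data postgres
    docker run -d --name db2 -v /srv/db2-data:/var/lib/postgresql/data postgres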
You can't enforce statelessness by fiat. Applications have to be architected to be stateless and to scale, at which point the underlying platform, be it containers or VMs, ceases to matter. Applications will not become stateless or gain a seamless upgrade path just because you use containers. That's magical thinking. And attempting to shoehorn statelessness in at the infrastructure level is a recipe for fragility at the base of your infrastructure, exactly where it must be most robust.
At every stage complexity and constraints are added without technical justification or an informed discussion of tradeoffs. The fact that all of these decisions can be imposed on users without proper technical scrutiny and pushed out so aggressively betrays something fundamentally broken in the ecosystem.
Moving beyond Docker itself, we have Kubernetes, which is so complex that in 2018 many were still unable to install it. If an application can't even be installed, how is it going to run reliably? What happens when it breaks? How will you debug it? How do all the parts interact? For end users the question becomes: do you spend months understanding these interactions and the fundamental underlying technologies, or focus on your apps? And people who spend those months then become invested in the complexity.
Time is not unlimited and every additional piece of complexity exacts a cost. Networking is not simple; neither is storage, availability, scalability or security. Any halfway serious use case will need experts, without exception. The idea that this can be done by developers who may or may not be interested in ops, armed with tons of verbose yaml files and fragile, waiting-to-break dependencies, is not just bad engineering. It's yet more magical thinking.
'Webscale' is a limited use case, and the greatest irony is that those who are webscale, like Facebook, Google and Netflix, not only have unique needs and architectures but thousands of engineers and experts to build them. Cookie-cutter defaults provide an illusion of working, impose debt and constraints on new users, and paradoxically those with the expertise will fail to see the value.
There is a huge fog of misinformation, confusion, marketing and hype clouding the container ecosystem. The result is that rather than developing an understanding of the underlying technologies, many get caught up and confused in the layers above, as companies use more and more open source tools but fail to clearly articulate what these technologies are and why and how they are being used, in an effort to position their products as the only way to use them effectively.
Much of the discussion is instead focused on 'network drivers', 'storage drivers', 'controllers' and other json/yaml wrappers that leave users with little understanding of the underlying technologies. Mounting a filesystem is not a 'driver', and neither is deploying networking technologies like VXLAN or BGP. Using Nginx as a reverse proxy or HAProxy as a load balancer is not a 'controller'. This new vocabulary adds layers of indirection and confuses rather than informs.
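For instance, a VXLAN overlay between two hosts is a couple of ip commands, not a 'network driver'; the VXLAN id, interface and addresses below are placeholders.

    sudo ip link add vxlan0 type vxlan id 42 dev eth0 dstport 4789 remote 10.0.0.2
    sudo ip addr add 192.168.100.1/24 dev vxlan0
    sudo ip link set vxlan0 up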
This creates bad incentives and undermines the open source model and the talented individuals who create these technologies.
A lot of benefits associated with containers flow from their natural advantages. They are a folder on your filesystem, lightweight and portable. They let you package your apps and avoid lock-in to any provider by letting you move your apps and workloads across servers seamlessly. You don't need devops to benefit from this.
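To illustrate, assuming an LXC container named web in the default /var/lib/lxc path, moving it to another server is little more than copying a folder; the host name and paths are placeholders.

    sudo lxc-stop -n web
    sudo rsync -a /var/lib/lxc/web/ root@host2:/var/lib/lxc/web/
    ssh root@host2 lxc-start -n web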
LXC, which Docker was based on, has always offered a far more sensible container model that is far easier to use and scale. Complexity is inefficient; millions of man hours are lost working around it.
Flockport has consistently advocated for simplicity and flexibility. As a technical community we can do better; it's our most basic responsibility to users.