This is a quick overview of the typical range of services that are useful when running containers across multiple servers.
Marketing communication often tends to confuse and make these services look more complex than they really are.
We are going to use Flockport to illustrate some of these examples, but most other container platforms provide a subset of these services in some form.
First, if you are running containers on a single server or even a couple of servers, most of these are not needed. It's only when you are running containers across multiple servers that some of these capabilities become useful.
Deployment is the most basic capability. You need a way to deploy and manage containers across servers: add servers to your cluster, then run and deploy containers across them. You need visibility into running containers cluster-wide and lifecycle management for both servers and containers.
Flockport lets you provision servers. You can list all operational servers in the cluster and use the push and pull commands to quickly deploy and move containers across servers. You get visibility into and management of all running servers and containers across the cluster, and you can run all the container operations on remote containers that you would on local instances.
Once you move beyond a single host, networking becomes important. When you run containers across servers you need to be able to set up networks so containers on different servers can talk to each other and be on the same network.
There are a number of technologies you can use depending on your cluster setup, from basic bridging to routing, or creating overlays with Vxlan, Wireguard and more.
All of these ensure containers across servers can reach each other, and give you more fine-grained control of your network, isolation and security. Flockport lets you quickly set up both layer 2 and layer 3 overlay networks with Vxlan, Wireguard and BGP.
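To make the overlay idea concrete, here is a rough sketch of a point-to-point Vxlan tunnel built by hand with iproute2; platforms like Flockport automate this. The interface names, VNI and addresses are invented examples, and the commands need root:

```shell
# On host A (10.0.0.1): create a vxlan interface tunnelling to host B.
ip link add vxlan0 type vxlan id 42 local 10.0.0.1 remote 10.0.0.2 \
    dstport 4789 dev eth0
ip addr add 192.168.100.1/24 dev vxlan0
ip link set vxlan0 up

# On host B (10.0.0.2): the mirror image. Containers bridged onto vxlan0
# on either host now share the 192.168.100.0/24 overlay network.
ip link add vxlan0 type vxlan id 42 local 10.0.0.2 remote 10.0.0.1 \
    dstport 4789 dev eth0
ip addr add 192.168.100.2/24 dev vxlan0
ip link set vxlan0 up
```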
Storage is another area to consider. Distributed storage can make data more accessible, allow easy scalability and availability, and ease management. For instance, if your workloads share data, you can consider distributed storage depending on your performance requirements.
At its most basic an NFS share offers flexibility and is easy to setup. Beyond that you can use solutions like Gluster or MFS to build distributed storage pools across servers. Flockport currently makes adding NFS shares a breeze and lets you build storage pools with both Gluster and MFS.
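As a sketch of the NFS route, a share is just an entry in the server's /etc/exports; the path and subnet below are invented examples:

```
# /etc/exports on the NFS server: share /srv/data with hosts on the
# cluster network (path and subnet are placeholder examples).
/srv/data 10.0.3.0/24(rw,sync,no_subtree_check)

# After editing, reload exports on the server and mount on a client:
#   exportfs -ra
#   mount -t nfs 10.0.3.1:/srv/data /mnt/data
```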
Service discovery enables your container applications to be found across the cluster. For instance, if your app needs to access a mysql.db instance, instead of hardcoding the IP of the mysql instance you can simply use an address, and the service discovery layer will provide the mysql.db IP to any querying application. To learn more about service discovery please see our recent article on it.
Flockport uses Consul for the service discovery layer and allows containers to publish any defined services to a service discovery endpoint. There are other solutions but we have found Consul to be the easiest to use.
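To illustrate the idea, here is a toy in-process registry; the service name and addresses are invented examples, and in practice a discovery layer like Consul answers these lookups over DNS or HTTP and prunes unhealthy instances automatically via health checks:

```python
# A toy service registry: applications resolve "mysql.db" by name
# instead of hardcoding an IP. This is a conceptual sketch only.

class ServiceRegistry:
    def __init__(self):
        self._services = {}   # name -> list of "ip:port" endpoints
        self._cursors = {}    # name -> next endpoint to hand out

    def register(self, name, endpoint):
        self._services.setdefault(name, []).append(endpoint)

    def deregister(self, name, endpoint):
        # In a real system a failed health check triggers this.
        self._services[name].remove(endpoint)

    def resolve(self, name):
        # Rotate through registered endpoints instead of returning a fixed IP.
        endpoints = self._services.get(name, [])
        if not endpoints:
            raise LookupError("no instances registered for " + name)
        cursor = self._cursors.get(name, 0)
        self._cursors[name] = cursor + 1
        return endpoints[cursor % len(endpoints)]

registry = ServiceRegistry()
registry.register("mysql.db", "10.0.3.20:3306")
registry.register("mysql.db", "10.0.4.20:3306")
print(registry.resolve("mysql.db"))  # 10.0.3.20:3306
print(registry.resolve("mysql.db"))  # 10.0.4.20:3306
```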
Load balancing allows you to scale workloads across servers. Both Nginx and Haproxy are highly scalable load balancers that offer flexibility and performance.
Load balancers use the concept of backends. The backends are the web or application servers, defined either by IP address or by DNS name. You can have multiple application backends, and the load balancer offers various strategies to balance incoming requests among them, from basic round robin, least connected and IP hashing for session persistence to using weights.
Load balancers also perform health checks and remove any offline instances from the backends. They also offer SSL termination: connections to the load balancer from the outside world are encrypted, but connections from the load balancer to the internal backends are not.
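The balancing strategies and health-check behaviour described above can be sketched in a few lines. The backend IPs and health states here are invented examples; real load balancers like Nginx and Haproxy implement all of this far more efficiently:

```python
# A toy sketch of common balancing strategies: round robin, least
# connected, and IP hashing for session persistence. Backends that
# fail a health check are skipped.
import hashlib
import itertools

class Balancer:
    def __init__(self, backends):
        self.connections = {b: 0 for b in backends}  # backend -> open connections
        self._rr = itertools.cycle(backends)
        self.offline = set()   # backends removed by health checks

    def healthy(self):
        return [b for b in self.connections if b not in self.offline]

    def round_robin(self):
        # Cycle through backends, skipping any the health check removed.
        for _ in range(len(self.connections)):
            backend = next(self._rr)
            if backend not in self.offline:
                return backend
        raise RuntimeError("no healthy backends")

    def least_connected(self):
        return min(self.healthy(), key=self.connections.get)

    def ip_hash(self, client_ip):
        # The same client IP always maps to the same backend.
        healthy = self.healthy()
        digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
        return healthy[digest % len(healthy)]

lb = Balancer(["10.0.3.11", "10.0.3.12", "10.0.3.13"])
lb.offline.add("10.0.3.12")        # failed its health check
print(lb.round_robin())             # skips the offline backend
print(lb.ip_hash("203.0.113.7"))    # stable pick for this client
```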
So far we have mainly focused on layer 7 load balancing, which occurs at the HTTP level. You can also have layer 4 load balancing, which occurs at the TCP level. Haproxy supports this, and Linux itself has support for a high-performance kernel-based load balancer called LVS.
LVS may be a good solution for things like streaming, where you may not want the load balancer to become a bottleneck.
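To show what "layer 4" means in practice, here is a minimal sketch of a TCP forwarder that relays bytes without ever parsing HTTP. The ports and payload are arbitrary examples, and this is only a conceptual toy, not how LVS or Haproxy are implemented:

```python
# A one-shot TCP (layer 4) forwarder: bytes in, bytes out, no HTTP parsing.
import socket
import threading

def start_echo_backend(port=9001):
    """A stand-in backend that echoes one request back to the client."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(conn.recv(4096))
        conn.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()

def start_l4_proxy(listen_port=9000, backend=("127.0.0.1", 9001)):
    """Accept one TCP connection and relay it to the backend, byte for byte."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)

    def serve():
        client, _ = srv.accept()
        upstream = socket.create_connection(backend)
        upstream.sendall(client.recv(4096))   # client -> backend
        client.sendall(upstream.recv(4096))   # backend -> client
        upstream.close()
        client.close()
        srv.close()

    threading.Thread(target=serve, daemon=True).start()

if __name__ == "__main__":
    start_echo_backend()
    start_l4_proxy()
    with socket.create_connection(("127.0.0.1", 9000)) as c:
        c.sendall(b"ping")
        print(c.recv(4096).decode())  # the proxy never inspected the payload
```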
Flockport lets you deploy both Haproxy and Nginx load balancers with some lifecycle management including deploying the load balancers, adding and removing backends and configuring SSL.
An ingress controller is just a web server like Nginx acting as a reverse proxy or load balancer.
Container applications usually exist in a private network and are not directly accessible outside the host.
You can use a web server like Nginx to serve container apps from across the cluster in a single place. The Nginx instance has access to the outside world. All requests hit the Nginx server, and it directs them to the internal container apps as configured.
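A minimal sketch of such an Nginx server block might look like this; the hostname and container IP are invented examples:

```nginx
# Proxy requests for app.example.com to a container on the private network.
server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://10.0.3.10:8080;   # internal container address
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```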
Flockport lets you deploy managed Nginx instances to serve container applications across the cluster.
HA is not about containers as such but about the servers hosting VMs or containers; HA works best at the server level. The idea behind HA is that if one server goes down, another is available to continue providing services.
In many ways load balancing can also offer some HA advantages by distributing load across servers: even if one backend goes down, the other backends are available to accept connections.
But what happens if your load balancer goes down? You can have 2 identically configured load balancers as a pair with a floating IP.
With HA, duplicate failover servers are assigned a floating IP, which dependent applications and services are configured to use to access any services the servers provide. The floating IP is assigned to the primary server, and the HA application implements a heartbeat. On detecting a failure, the floating IP is switched to the secondary server so services continue to work seamlessly and uninterrupted.
Flockport uses Keepalived to enable HA and lets you quickly deploy HA pairs.
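As a sketch, the Keepalived configuration for the primary of such a pair looks roughly like this; the interface, router id and floating IP are invented examples, and the secondary runs the same block with state BACKUP and a lower priority:

```
# keepalived.conf on the primary: advertise a VRRP heartbeat and hold
# the floating IP. On failure the backup takes the IP over.
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        10.0.0.100/24
    }
}
```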
Container builds let you deploy applications easily. They are a set of instructions to build and configure your application in a container. The build process builds a container with the application ready for use.
The Flockport App store, for instance, uses automated builds to package more than 80 applications. These can be downloaded and deployed in minutes. The build capability lets us automate building these app containers.
A scheduler deploys apps across a cluster according to preset criteria. For instance, you may want to scale a particular container to 5 instances, ensure 2 containers are located together, deploy to certain servers, account for capacity, and have the scheduler automate deployment on these criteria.
A scheduler also keeps track of all deployed jobs, upgrades them when required, and can perform rolling upgrades. The Flockport scheduler is still a work in progress, though it has some deployment functionality available for testing.
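The placement part of this can be sketched as a greedy capacity-aware loop. The server names and capacity units are invented, and this is only a toy version of what a real scheduler does:

```python
# Toy capacity-aware placement: put each replica on the server with the
# most free capacity, and fail if the cluster runs out of room.

def schedule(replicas, servers, per_container=1):
    """Return a list of servers, one per replica, respecting capacity."""
    free = dict(servers)              # server -> free capacity units
    placement = []
    for _ in range(replicas):
        server = max(free, key=free.get)
        if free[server] < per_container:
            raise RuntimeError("cluster is out of capacity")
        free[server] -= per_container
        placement.append(server)
    return placement

servers = {"node1": 4, "node2": 2, "node3": 3}
print(schedule(5, servers))   # 5 replicas spread by free capacity
```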
Autoscaling is the ability to spin up new instances of applications to respond to increased load. The autoscaling layer has to monitor the servers, application loads and the load balancer, and on detecting a surge, spin up new app instances and add them to the load balancer.
This is where containers come into their own as they are easier to spin up than say VMs.
There are a number of things required for this scenario to work. First, your application must be designed to scale across instances, with proper database, state and cache management. The autoscaler needs visibility into the servers and the load balancer, and acts on configured criteria, i.e. CPU or memory usage on the servers or the number of inbound connections.
The autoscaler must also be able to spin up new instances of servers and applications. To do this it needs access to available spare servers and a mechanism to deploy app instances and add them to the load balancer.
For instance, the autoscaler could be configured to spin up cloud instances via an API, or more instances in the datacenter as required. This also requires some networking setup beforehand, so any added servers are on networks accessible to the rest of the cluster.
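The decision step at the heart of this loop can be sketched as a simple threshold rule; the thresholds, sample window and scale step are arbitrary example values:

```python
# Toy autoscaling decision: compare average CPU over a sample window
# against high/low watermarks and adjust the instance count by one.

def scale_decision(cpu_samples, current_instances, high=80.0, low=20.0,
                   min_instances=1, max_instances=10):
    """Return the new instance count based on average CPU over the window."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high and current_instances < max_instances:
        return current_instances + 1   # surge: spin up one more app instance
    if avg < low and current_instances > min_instances:
        return current_instances - 1   # idle: retire an instance
    return current_instances

print(scale_decision([85, 92, 88], 2))   # -> 3 (scale up)
print(scale_decision([10, 12, 8], 3))    # -> 2 (scale down)
print(scale_decision([50, 55, 45], 3))   # -> 3 (hold steady)
```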
Flockport's autoscaler is still in development but you can try it with the Autopilot module. Autopilot automates everything from container to network building and lets you deploy, scale and update applications across the cluster.
Flockport is a new container orchestration platform focused on showcasing the ease and flexibility of containers. Flockport currently supports LXC containers.
Flockport's new platform provides an app store, provisioning, advanced networking and distributed storage support, service discovery, load balancing, HA, container builds and deployment automation.