We recommend reading about Microservices in detail before getting to know containers in general or Docker in particular.
In enterprise software development we face a lot of challenges on a daily basis. Different team members have different strengths. UI developers might want to work on a Mac, while the backend developers might prefer a Windows environment. We want to be able to hire developers without having to limit ourselves to those who work on the same technology stack as the team's, and we want these new hires to be as productive as possible.
When we talk about Microservices architecture, containers have a special place, and when we talk about containers, Docker is by far the most popular one. However, containers have been part of operating systems since long before Docker was introduced.
When you run a process, you are actually running some kind of executable that gets loaded into memory. What happens next? Some CPU time is allocated to the process and it runs some instructions. There are context switches between the other processes that need to be executed, and this context switching is what makes it look like we are running multiple applications at the same time. In a nutshell, this is what is called multitasking.
This executable is called an image. Any isolated process, or set of processes, that has controlled access to certain hardware resources is a container.
Imagine downloading a technology or framework that you know is compatible with the other frameworks and libraries that can be packaged together to create an application. We would need to download, and maybe install, that technology. We may need to resolve dependencies, configure settings per environment, make it cross-platform, secure it, version it, update it, manage resource conflicts, and the list goes on. With a container platform like Docker, most of these concerns are taken care of for us. We can download an image that handles all of this, so we can focus our time on building the application rather than worrying about the operational aspects.
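As a quick illustration, here is what pulling and running a prepackaged image might look like in practice (the `nginx` image and the port numbers are just illustrative choices):

```shell
# Download a prebuilt image from Docker Hub; its dependencies
# and configuration are already baked into the image layers.
docker pull nginx:alpine

# Run it in a detached container, mapping port 8080 on the host
# to port 80 inside the container.
docker run -d --name web -p 8080:80 nginx:alpine

# The web server is now reachable at http://localhost:8080
# with no manual install or dependency resolution.
docker ps --filter name=web
```

Two commands replace the whole download-install-configure cycle described above.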
With containers, we can standardize and automate how we build, manage and secure applications. It improves collaboration between the development and operations teams. With a balanced microservices approach, we can move legacy apps into containers and use them alongside newer applications. Traditionally, companies had to pick one technology stack and, for the most part, stay with it. With the introduction of containers, however, we can make different tech stacks work well with each other.
Docker is the company at the forefront of the container movement, and it positions itself as the only container platform provider to address every application across the hybrid cloud. Enterprises can now take advantage of the true independence between applications and infrastructure that Docker offers to digitally transform themselves faster than ever before. They are no longer constrained by a limited tech stack or legacy applications. Docker creates a model for better collaboration and innovation between developers and the infrastructure division.
Docker does a phenomenal job with resource sharing between containers. Compared to a VM, a container has less isolation when it comes to sharing resources, but this minimal allocation of resources makes it far less bulky and speeds up development.
Docker is built on a layered filesystem, such as AuFS, in which common parts of the operating system are read-only and shared among all the containers, while each container gets its own mount for writing. Let's say we have two 8GB VMs on a machine: we will need to allocate 16GB from our system to them. Because of the way Docker uses a layered filesystem, however, it will only use a little more than 8GB for two containers, and even if we add, say, 10 containers, usage would be similar. This is also the reason that a fully virtualized system takes far longer to start, whereas Docker containers start almost instantly (under a second to a few seconds).
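To see this layering in practice, Docker can list the individual read-only layers that make up an image (the specific images chosen here are just examples; the exact output varies by image and Docker version):

```shell
# Pull two images that are built on a shared base.
docker pull python:3.12-slim
docker pull python:3.12

# Show the layers that make up an image; layers common to both
# images are stored (and downloaded) only once on the host.
docker history python:3.12-slim

# Summarize disk usage: shared layers are counted a single time,
# which is why ten containers cost little more than one.
docker system df
```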
Built-in security is one of the features of Docker that makes it very attractive for enterprise-grade applications. Docker creates a set of namespaces and control groups for every container. Processes running within a container cannot see or affect processes running in another container, or on the host system. Containers also have their own network stack, which means a container won't gain privileged access to the sockets or interfaces of another container. We can allow IP traffic between containers, and we can restrict it too.
Control groups ensure that no single container can exhaust the physical resources of the machine, including memory, CPU and disk I/O; their job is to allocate these resources between the containers. Another good thing about Docker is that it starts containers with restricted capabilities: root inside a container is far more limited than real root privileges on the host OS. Even if someone gains root inside a container, it won't escalate to the host.
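For instance, these control-group limits and restricted capabilities can be set directly on `docker run` (the image name and the specific limits below are just illustrative):

```shell
# Cap the container at 512MB of RAM and 1.5 CPU cores;
# these limits are enforced by the kernel's control groups.
docker run -d --name capped \
  --memory=512m \
  --cpus=1.5 \
  nginx:alpine

# Drop all Linux capabilities and add back only what is needed,
# so "root" inside this container is far weaker than host root.
docker run -d --name locked-down \
  --cap-drop=ALL \
  --cap-add=NET_BIND_SERVICE \
  --cap-add=CHOWN --cap-add=SETUID --cap-add=SETGID \
  nginx:alpine
```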
Last but not least, it's possible to leverage existing hardening systems like TOMOYO, AppArmor, grsecurity and others with Docker. All of these systems can be used in tandem with Docker to enhance security.
One of the biggest challenges in the enterprise world is being able to continue to digitally transform while maintaining growth. It's hard because of the many different applications, especially legacy ones, that may be important to clients and bring in a huge amount of revenue but run on older, not-so-current technologies. Traditionally, this has meant developing on a limited technical stack, and even then, newer versions of those same technologies have to be upgraded to. With Docker we can use any technical stack of our choice and have the applications work with each other seamlessly.
Docker reduces the footprint, makes automation easier and helps transform the company with a combination of modern as well as legacy technologies.
Microservices are isolated, self-contained, independent application units that each fulfill only one specific business responsibility. A typical monolith presents a lot of challenges, one of which is being able to use a different technology stack. With microservices, the advantage is that we can use many different technical stacks. Now imagine we have three different microservices: one running on Node.js, one on .NET Core and one on Java. What if the developers working on, say, the Java microservice are on different operating systems (say Mac or Windows)? They would normally need to prepare that environment separately on each machine. With Docker, a developer can set up an environment once and share it using Docker Hub. Regardless of the operating system the other developers are on, it works seamlessly.
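As a sketch, containerizing one of these microservices comes down to a short Dockerfile like the following (the `server.js` entry point and the service itself are hypothetical, shown here for a Node.js service):

```dockerfile
# Hypothetical Dockerfile for a Node.js microservice.
FROM node:18-alpine
WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# between builds when only the source code changes.
COPY package*.json ./
RUN npm ci --omit=dev

COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Once built and pushed to Docker Hub (`docker build -t myteam/orders-service .` followed by `docker push myteam/orders-service`, with a repository name of your choosing), teammates on Mac, Windows or Linux can run the identical environment with a single `docker run`.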
What if you have to manage all the different environments? What about conflicting libraries and dependencies? With Docker, we can encapsulate each microservice in its own container. With the performance benefits of Docker, we can now run thousands of microservices on the same server, something that is not even remotely possible with a virtual machine architecture.
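Encapsulating each microservice in its own container could look like the following Docker Compose sketch (the service names, images and ports are hypothetical):

```yaml
# docker-compose.yml: each microservice runs in its own container
# with its own stack, so dependency conflicts never collide.
services:
  orders:
    image: myteam/orders-service      # Node.js
    ports: ["3000:3000"]
  billing:
    image: myteam/billing-service     # .NET Core
    ports: ["5000:5000"]
  inventory:
    image: myteam/inventory-service   # Java
    ports: ["8080:8080"]
```

A single `docker compose up -d` then starts all three stacks side by side on one host.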
Docker provides portability by containerizing monoliths or services along with their dependencies, so they can later be moved to any cloud or data center seamlessly. Imagine working across different environments without ever having to worry about portability. With Docker, team members can not only develop without worrying about the underlying operating system but can also deploy to any cloud or server of their choice.
Docker makes it easier and faster to deploy existing applications; it's as simple as downloading a Docker image and running it on a different server. Containers are also very easy to scale, and can be scaled up, or destroyed, far faster than virtual machines.
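Scaling out can be a one-line operation. Assuming a Compose file that defines a service named `orders` (a hypothetical name used here for illustration), adding or removing replicas might look like this:

```shell
# Scale the hypothetical "orders" service to five replicas;
# the new containers start in seconds, not minutes.
docker compose up -d --scale orders=5

# Tear the extra replicas back down just as quickly.
docker compose up -d --scale orders=1
```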
How many times have we had to upgrade developer machines just to support multiple virtual machines? VMs hog the physical resources of the machine, and we have all dealt with that at some point in our careers. Containers are much faster and require far fewer resources.
Similar to Docker, Kubernetes has become one of the most popular container-based technologies. Kubernetes draws on roughly 15 years of Google's experience running containerized workloads in production before it was finally open sourced. As we can imagine, Google probably has the highest production workload, given that Google.com and Youtube.com are the two top websites in the world at the time of writing (July 2018). Read here to know more about our Kubernetes consulting services.
Cazton is composed of technical professionals with expertise gained all over the world and in all fields of the tech industry, and we put this expertise to work for you. We serve all industries, including banking, finance, legal services, life sciences & healthcare, technology, media, and the public sector. Check out some of our services:
Cazton has expanded into a global company, servicing clients not only across the United States but also in Oslo, Norway; Stockholm, Sweden; London, England; Berlin and Frankfurt, Germany; Paris, France; Amsterdam, Netherlands; Brussels, Belgium; Rome, Italy; Sydney and Melbourne, Australia; and Quebec City, Toronto, Vancouver, Montreal, Ottawa, Calgary, Edmonton, Victoria and Winnipeg as well. In the United States, we provide our consulting and training services across various cities, including Austin, Dallas, Houston, New York, New Jersey, Irvine, Los Angeles, Denver, Boulder, Charlotte, Atlanta, Orlando, Miami, San Antonio, San Diego, San Francisco, San Jose, Stamford and others. Contact us today to learn more about what our experts can do for you.