497-The Docker Book-James Turnbull-Container-2014
Barack
2023/12/17
"The Docker Book" was first published in 2014. It is aimed at system administrators, operations personnel, developers, and DevOps engineers interested in deploying the open-source container engine Docker. It covers installing, deploying, managing, and scaling Docker, beginning with an introduction to Docker's basics and its components, then moving on to using Docker to build containers and services that perform a variety of tasks. It walks through the development lifecycle from testing to production, showing where Docker fits and how it can make your life easier. It uses Docker to build test environments for new projects, demonstrates how to integrate Docker with continuous integration workflows and how to build application services and platforms, and finally shows how to use Docker's API and how to extend Docker yourself.
James Turnbull was born in Melbourne, Australia. He is a free and open-source software author and software developer. He lives in Brooklyn, New York, where he serves as VP of Product and Engineering at Smartrr and as an advisor to Access Now. Prior to that, he served as co-chair of the Velocity conference, led engineering at Sotheby's, was a startup advocate at Microsoft, and held roles as founder and CTO of Empatico, Chief Technology Officer of Kickstarter, VP of Engineering at Venmo, and VP of Services at Docker. He also served as vice president of technical operations at the open-source company Puppet Labs.
Containerization, in software engineering, is operating-system-level or application-level virtualization across multiple network resources, allowing software applications to run in isolated user spaces called containers in any cloud or non-cloud environment, regardless of type or vendor. A container is essentially a fully functional and portable computing environment that surrounds an application and makes it independent of other environments running in parallel. Each container encapsulates a single software application and runs its processes independently by bundling the related configuration files, libraries, and dependencies. In general, however, multiple containers share a common operating system (OS) kernel. In recent years, containerization has been widely adopted by cloud computing platforms such as Amazon Web Services, Microsoft Azure, Google Cloud Platform, and IBM Cloud.
Docker was first released in March 2013 and is developed by Docker, Inc. It is a set of platform-as-a-service (PaaS) products that use operating-system-level virtualization to deliver software in packages called containers; the software that hosts the containers is called the Docker Engine. The service is available in free and premium tiers. Docker is a tool for automating the deployment of applications in lightweight containers, allowing applications to run efficiently and in isolation across different environments.
Table of Contents
1. The Docker Book
2. Introduction
3. Installing Docker
4. Getting Started with Docker
5. Working with Docker images and repositories
6. Testing with Docker
7. Building services with Docker
8. Docker Orchestration and Service Discovery
9. Using the Docker API
This book mainly discusses container technology, an efficient virtualization technology. Container virtualization is performed at the operating system level, whereas hypervisor technology virtualizes at the physical machine level. Each approach has advantages and disadvantages. A significant advantage of container technology is that, because it virtualizes at the operating system level, it carries less overhead than lower-level virtualization technologies; with the same limited resources, we can run more virtual environments. The disadvantage stems from the same characteristic: containers are relatively inflexible. For example, if the host operating system is Ubuntu, you cannot create a container running Windows on it. Before the rise of cloud computing, containers, as a lightweight virtualization technology, were not very popular. However, as cloud computing became a mainstream trend in the industry, this lower-cost technology came to be favored by cloud service providers because it lets each machine support more virtual environments.
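The kernel-sharing property described above is easy to observe directly (a minimal sketch, assuming Docker is installed and the small `alpine` image can be pulled from Docker Hub):

```shell
# On the host: print the kernel release
uname -r

# Inside a container: the release is identical, because the container
# shares the host's kernel rather than booting its own
docker run --rm alpine uname -r
```

Both commands report the same kernel version, which is exactly why a Windows container cannot run directly on a Linux host.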
This book mainly discusses the concept of the "container", which takes its name from the shipping container. Just as shipping containers carry goods around the world in international shipping, software containers carry software and services, whether web services, servers, or other kinds of software. Operations on containers, such as creation, deletion, and copying, are unaffected by the loaded content. Docker is essentially one implementation of the container concept. When we talk about operating systems, we theoretically consider functions such as resource scheduling, physical resource management, and providing APIs to applications; Linux, macOS, Windows, and so on are different concrete embodiments of those theories. Likewise, besides Docker there are other platforms and tools that provide container technology: other task-oriented container engines exist, and LXC and LXD are container technologies under Linux. Container scheduling technology is also not limited to Kubernetes. Amazon EKS, a service provided by Amazon Web Services, is a container orchestration service; Google Kubernetes Engine (GKE) is the container orchestration service on Google Cloud, and since Kubernetes was originally proposed by Google, GKE has particularly good support for it. Similarly, Azure Container Service (ACS), provided by Microsoft Azure, also supports container orchestration. To sum up: to utilize computing resources efficiently, we developed cloud computing; to better manage those resources, we use containerization; and with the emergence of large container clusters came the need for container orchestration. Each of these technologies has multiple implementations and applications.
This book mainly introduces several core components of Docker technology:
1. Native Linux support. Docker runs primarily on Linux systems, making full use of core Linux kernel features.
2. Namespaces. A core mechanism Docker relies on to isolate different resources (such as networks, file systems, and processes), ensuring independence and security between containers.
3. File system isolation. Each container has its own root file system, keeping file data isolated.
4. Process isolation. Each container runs in its own execution environment, and containers do not affect each other.
5. Network isolation. Different containers have different virtual network interfaces and IP addresses, keeping network activity isolated.
6. Resource allocation and grouping. Resources on the physical machine (such as CPU and memory) are allocated to different containers, ensuring effective management of resources.
7. Copy-on-write. The way container file systems are created reduces disk usage and keeps data management efficient.
8. Logging. The standard output and standard error of a container are collected, which facilitates monitoring and debugging.
9. Interactive shell. The interactive shell provided by Docker allows users to interact directly with a container.
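A few of these mechanisms can be observed from the command line (a sketch, assuming Docker is installed; the flags used are from the standard `docker run` interface):

```shell
# Process isolation: inside the container, the command we run becomes
# PID 1 and host processes are invisible to it
docker run --rm ubuntu ps aux

# Resource allocation: cap this container at half a CPU core and
# 256 MB of memory
docker run --rm --cpus 0.5 --memory 256m ubuntu free -m
```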
The first step in using Docker is installing it. Docker supports multiple platforms, primarily Linux, but it also runs on macOS and Windows. Once installed, users can take advantage of its features on any of these operating systems. The next step is to ensure that the Docker daemon is running properly. The daemon executes in the background, requires no direct user interaction or control, and primarily responds to network requests or requests from other applications; this usually happens automatically, with no user intervention. With those two steps completed, users can start using Docker to create and manage containers. The lifecycle management of containers includes creation, stopping, and deletion. Although most of the examples in the book focus on command-line operations, recent versions of Docker also offer a graphical management interface. When asked to create a container, Docker first looks for the required image locally, such as an Ubuntu image. If it is not found locally, it is downloaded from Docker Hub. After downloading, Docker creates a container from the image in the file system, including configuring its network IP address and bridge interface. The user also decides what command to run in the container, such as starting a bash session.
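The flow described above looks like this in practice (a sketch, assuming Docker is already installed and the daemon is running):

```shell
# Verify the daemon is up and reachable
docker info

# Create a container from the ubuntu image and open an interactive
# bash session in it: -i keeps STDIN open, -t allocates a pseudo-TTY.
# If the image is not present locally, Docker pulls it from Docker Hub.
docker run -i -t ubuntu /bin/bash
```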
We can create Docker containers from the command line. Such containers stop running when we exit their shell. To keep a container running, we can create a daemonized container instead. Once a daemonized container is created, its network address is automatically generated and stored in its configuration. From the command line we can view each container's logs and its resource usage, including CPU, memory, network, and storage I/O. These commands and features are especially useful when multiple containers are running on the same host, as they help manage and monitor the performance and status of each container.
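A daemonized container and the monitoring commands mentioned above might look like this (a sketch; the container name `hello_daemon` is made up for illustration):

```shell
# -d detaches the container so it keeps running in the background
docker run -d --name hello_daemon ubuntu \
  /bin/sh -c "while true; do echo hello world; sleep 1; done"

# Its captured stdout/stderr
docker logs hello_daemon

# Live CPU, memory, network, and block I/O usage
docker stats hello_daemon

# The IP address stored in the container's configuration
docker inspect --format '{{ .NetworkSettings.IPAddress }}' hello_daemon
```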