<p>1. Container Overview</p><p>This lesson describes containers and Kubernetes, their characteristics, and their importance. The lesson also describes the different types of container deployments.</p><p>Key Definitions</p><p>Let's start by looking at what containers and Kubernetes are.</p><p>· Containers - Containers allow Dev teams to package apps and services in a standard and simple way. Containers can run anywhere and be moved easily. Docker containers are the most common.</p><p>· Kubernetes (K8s) - Kubernetes is an open-source container-orchestration system. Kubernetes automates deployment, scaling, and management.</p><p>Container Characteristics</p><p>Containers can be described as having three characteristics: minimal, declarative, and predictable.</p><p>· Minimal - Typically single-process entities</p><p>· Declarative - Built from images that are machine-readable</p><p>· Predictable - Do exactly the same thing from run to kill</p><p>Importance of Containers</p><p>It is important to know what a container is, how it relates to Kubernetes, and why you would use a container.</p><p>· Containers - Containers are a way of packaging software so that it has no dependencies on the host machine. All of an application's code, libraries, and dependencies are packaged into the container. Rather than creating a whole virtual operating system, as a virtual machine does, containers allow applications to use the same Linux kernel as the system they're running on and only require that applications be shipped with things that are not already running on the host computer. That means that operations teams can deploy code in production without having to spin up an entire OS. It also means that application developers can write machine-independent code.</p><p>· Kubernetes - As the industry shifts to microservices-based application architectures, it has become common to use multiple containers that run on multiple machines. 
Now, we need a way for these containers to talk to each other and work together in harmony. This is where Kubernetes comes in. Kubernetes is a container management tool that has a variety of benefits, including running containers on different machines, scaling up and down, load balancing, and launching a new container if one fails.</p><p>Traditional Development Compatibility and Dependency Problems</p><p>Containers reduce complexity by allowing components to run in separate containers with their own libraries and dependencies so they do not affect each other.</p><p>· Overly Complex Matrix - In this example, a developer is trying to build a website and is using Node.js Express as the web server. If the versions of the libraries being used are incompatible with the MongoDB database being used, this would cause complexities that would need to be sorted out. You would have to go out and find a library version that works for both. The same would hold true for dependencies.</p><p>· Container Solution - To keep dependencies from introducing unwanted complexity, containers can be used to solve this problem. So, if you are writing a piece of code and certain dependencies are introduced, these dependencies will stay isolated from the dependencies of the other microservices that are being used to create the solution.</p><p>Let's say that the web server is using a library at version 1.5 and the database is using a library at version 2.5. In this case, these libraries will be isolated and abstracted from each other through the container. Here, the Docker runtime engine is being used to provide the abstraction. 
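The library isolation just described can be sketched with a hypothetical Docker Compose file (service names, images, and version pins are illustrative assumptions, not from the lesson):

```yaml
# docker-compose.yml - each service is built with its own pinned dependencies,
# so the web server's library version never conflicts with the database's.
services:
  web:
    build: ./web            # illustrative: the web image pins its library at 1.5
    ports:
      - "3000:3000"
  db:
    image: mongo:4.4        # illustrative: the database image ships its own libraries
    volumes:
      - db-data:/data/db
volumes:
  db-data:
```

Each container resolves its own dependencies at build time, so upgrading one service's library does not affect the other.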
Each part of the solution - web server, database, messaging, and orchestration - can be packaged and created in its own container.</p><p>Traditional, Virtualized, and Container Deployments</p><p>Let's take a look at the differences between traditional, virtualized, and container deployments.</p><p>· Traditional Deployment - In a traditional deployment, there is hardware, an operating system, and applications deployed on top of the OS.</p><p>· Virtualized Deployment - In a virtualized deployment, there is hardware, an operating system, a hypervisor that abstracts each virtual machine from the base OS, and (guest) virtual machines that have full operating systems installed in them with their respective libraries and applications.</p><p>· Container Deployment - In a container deployment, there is hardware, an operating system, a container runtime (such as Docker), and individual containers with their respective libraries and applications. This is a much more "lightweight" solution than the other deployment options, as each container does not need its own operating system. It only needs the libraries and dependencies necessary to run the application.</p><p>Containerized Versus Virtualized Environments</p><p>Let's compare the virtualized environment to the container environment so we can see the advantages of the container deployment.</p><p>In the virtualized environment, the resource utilization needed to run separate operating systems in their own virtual machines will be much greater than what is needed to run an application in a container. If the goal is to load a few webpages in an application on the VM, the size (hard drive space) requirements as well as the boot time to get the VM and applications up and running will be much greater than what is needed to load and run an application in a container. 
Therefore, the use of a VM to accomplish this goal is a waste of resources.</p><p>Kubernetes Network Policies Versus CN-Series</p><p>Kubernetes network policies are a native tool that can be used to enforce Layer 3 or 4 policies within a Kubernetes cluster.</p><p>Using Kubernetes network policies to protect Kubernetes clusters is analogous to using an access control list (ACL) to protect a modern web application. It is not enough, and it exposes your infrastructure to enormous risks, as shown in the table. A CN-Series firewall mitigates these risks.</p><p>By using K8s network policies instead of a containerized next-generation firewall such as CN-Series, you lose out on Layer 7 protection as well as Layer 7 microsegmentation, URL filtering, threat prevention for known and unknown threats, protection against data exfiltration, dynamic tagging, port range policies, the ability to enforce policies across bare-metal, virtual, and container firewalls, and event correlation.</p><p>The CN-Series has advantages over Kubernetes network policies.</p><p>2. Container Components</p><p>This lesson describes the role of network security in securing containers and the risks associated with containerized applications. The lesson also describes container, Linux, and Docker concepts, as well as virtualization.</p><p>The Role of Network Security in Securing Containers</p><p>Container adoption is on the rise. According to a Gartner report, more than 75 percent of global organizations will be running containerized applications in production. 
However, this move brings new security and data risks for an organization.</p><p>Benefits of Enabling CN-Series</p><p>The CN-Series firewall enables organizations to:</p><p>· Gain Layer 7 visibility and enforcement using native Kubernetes context to protect against known and unknown threats</p><p>· Provide inline threat protection for containerized applications deployed anywhere (on premises or in the cloud)</p><p>· Deploy and scale network security without compromising DevOps speed and agility</p><p>· Consistently secure legacy and modern microservices-based apps through unified, centralized management via Panorama</p><p>Container Architecture</p><p>A container is a standard unit of software that packages up code and all its dependencies, so the application runs quickly and reliably from one computing environment to another. Containers are completely isolated environments that have their own isolated processes, networks, mounts, etc.</p><p>Container Concepts</p><p>Here is more information about container concepts.</p><p>· Namespaces - A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. 
Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes.</p><p>· CGroups - Control groups, usually referred to as CGroups, are a Linux kernel feature that allows processes to be organized into hierarchical groups whose usage of various types of resources can then be limited and monitored.</p><p>· Layered Filesystem - Different files and directory structures are layered in order on top of each other and presented as a single directory tree.</p><p>Linux Container Concepts</p><p>Here is more information about Linux container concepts.</p><p>High-Level Overview of Docker</p><p>To understand how Docker works, let's review operating systems.</p><p>· Kernel Differentiates the OS - Consider different operating systems like Ubuntu, Fedora, openSUSE, CentOS, etc. They all consist of an OS kernel and a set of software on top. It is the set of software on top of the kernel that differentiates these operating systems from each other.</p><p>· Docker Shares the Kernel of the Underlying OS - In this example, we have Docker installed on a machine running Ubuntu OS. Docker can run any OS on top of it as long as it is based on the same kernel. So, an application that is developed to run in a Docker runtime on Ubuntu will also be able to run on Fedora, since they are based on the same kernel. However, that same application will not be able to run on Windows, since it has a different kernel from Ubuntu.</p><p>· Linux Docker Versus Windows Docker - Let's say a developer writes an application that is supposed to work only on Windows OS. If we have a Windows-based Docker container, it will not be able to run in the previously mentioned environment. This is because Windows does not share the same kernel as Ubuntu.</p><p>If the underlying kernels are different, the containers cannot be used. 
Linux-based containers must run on Linux operating systems and Windows-based containers must run on Windows operating systems.</p><p>Note: In order to run a Windows-based Docker container, you need to have Docker installed on Windows.</p><p>· Using Containers and VMs Together - Instead of looking at this as "Virtualization Deployment versus Container Deployment", we can see from the example that containers and VMs can work together to our advantage. Virtualization abstracts the hardware layer. Containerization abstracts the application layer.</p><p>Here you can install a Linux OS in one virtual machine with a Linux Docker runtime and a Windows OS in the other virtual machine with a Windows runtime. This way your Linux containerized applications can run in a Linux container and your Windows applications can run in a Windows container.</p><p>Virtualization Abstraction Versus Container Abstraction</p><p>It is important to note that the main purpose of Docker is not to virtualize. The main purpose of Docker is to package the dependencies and libraries, containerize the application, and then ship and run those applications anywhere at any time. Virtualization is an abstraction of the hardware layer, whereas containerization is an abstraction of the application layer.</p><p>Docker Keywords and Components</p><p>There are components and keywords related to Docker. Some of these keywords are Docker Image, Container, Docker File, and Registry.</p><p>· Docker Image - Containers are distributed as images. A Docker Image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. A Docker Image is nothing but a package or template, just like a VM template. Just like a VM template can be used to create one or more virtual machines, a Docker Image can be used to create one or more containers. 
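As a sketch, the kind of Docker File from which such an image could be built (base image, file names, and commands are illustrative assumptions, not from the lesson):

```dockerfile
# Dockerfile - builds a read-only image; each instruction adds a layer
FROM node:18-alpine          # base OS/layer and language runtime
WORKDIR /app
COPY package.json .
RUN npm install              # third-party libraries and code dependencies
COPY . .                     # your application code and configuration
CMD ["node", "server.js"]    # the process to run when a container is started
```

Running `docker build` on this file produces an image, and that image can then be used to start one or more containers, just as a VM template spawns VMs.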
When the "my app" image is started, it becomes a container.</p><p>· Docker Image Components - A Docker Image consists of:</p><p>· Your application code and configuration</p><p>· Third-party libraries and code dependencies</p><p>· Runtime dependencies and language runtime</p><p>· Base OS/layer</p><p>· Container - Containers are the actual running instances that were created using the Docker Image.</p><p>· Docker File - A Docker File is the text file from which a Docker Image is built. It contains the necessary commands to assemble an image. It states the libraries and dependencies used by your source code and helps create the "layered filesystem" of components.</p><p>· Registry</p><p>· A registry is a storage and content delivery system, holding named Docker images, available in different tagged versions</p><p>· You can upload Docker images to, and download them from, either a private registry or a public registry</p><p>· Docker Hub is one such registry</p><p>High-Level Flow</p><p>The following image shows the high-level workflow.</p><p>3. 
Kubernetes Overview</p><p>This lesson describes Kubernetes and why you should use it.</p><p>Introduction to Kubernetes</p><p>Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management.</p><p>The following are some key concepts about Kubernetes.</p><p>· Kubernetes Pods - Kubernetes uses the concept of pods: objects that consist of one or more containers that share a network namespace.</p><p>· Automation and Simplification - Kubernetes enables you to make the potential of container technology an operational reality by automating and simplifying your daily container workflow.</p><p>· Containerized Applications - Kubernetes automates deploying, scaling, and managing containerized applications on a group (cluster) of (bare metal or virtual) servers, for example, ensuring that if a container within a pod crashes, it is restarted.</p><p>· Automatically Manage Containers - Kubernetes also lets you automatically handle networking, storage, logs, alerting, etc. for all your containers.</p><p>Why Kubernetes Should Be Used</p><p>There are potential issues that Kubernetes can help resolve. Here are a few scenarios to determine when you should use Kubernetes.</p><p>· Problem: Deploying new containers manually becomes difficult if you have 200 containers in the production environment.</p><p>· Problem: Something goes wrong with the host, or you don't have enough resources to deploy more containers on the host.</p><p>· Problem: You have to monitor the containers manually.</p><p>Kubernetes Benefits</p><p>You can use K8s to tackle these kinds of issues in production environments. Kubernetes can deploy, monitor, heal, and redeploy containers automatically as needed.</p><p>4. 
Common Kubernetes Components</p><p>This lesson describes some of the common Kubernetes components.</p><p>Where Does Each Component Belong?</p><p>The following is an example of a master node and a worker node. This image describes where each component belongs.</p><p>5. Kubernetes Architecture</p><p>This lesson describes key components of the Kubernetes architecture.</p><p>Kubernetes Architecture Components</p><p>The Kubernetes architecture consists of several components.</p><p>· Node - A Node is a machine (physical or virtual) on which Kubernetes is installed. This is where containers are deployed.</p><p>· Cluster - A Cluster is a group of Nodes. Even if one node fails, your application remains accessible from the other nodes.</p><p>· Master - A Master is responsible for managing all worker nodes in a cluster. The Master is responsible for the actual orchestration of containers on the worker nodes.</p><p>· Pods</p><p>· A Pod is the basic execution unit of a Kubernetes application. It is the smallest and simplest unit in the Kubernetes object model that you create or deploy.</p><p>· A Pod represents processes running on your cluster.</p><p>· A Pod encapsulates an application's container (in some cases, multiple containers), storage resources, a unique network identity (IP address), as well as options that govern how the containers should run.</p><p>· You can deploy multiple containers in a Pod, but those kinds of deployments are very rare.</p><p>ReplicaSet and DaemonSet</p><p>· ReplicaSet</p><p>· A ReplicaSet's purpose is to maintain a stable set of replica pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical pods.</p><p>· Even if you have one pod on a node, the ReplicaSet makes sure that one pod is always available. 
If something goes wrong with the pods, a new pod is created by the ReplicaSet, thus providing high availability (HA).</p><p>· ReplicaSet can span across multiple nodes.</p><p>· DaemonSet</p><p>· A DaemonSet runs one copy of your pod on each worker node in the cluster.</p><p>· When a new node is added to the cluster, a copy of the pod is automatically created on the node. When a node is removed, the pod is also removed.</p><p>· It ensures one copy of the pod is always present on all nodes.</p><p>· The definition file of a DaemonSet is similar to that of a ReplicaSet. The only difference is the "Kind" in the YAML file.</p><p>Note: The dataplane of the CN-Series firewall is a DaemonSet.</p><p>Kubernetes Service</p><p>A Kubernetes service is a way to expose an application (running on a set of pods) as a network service. A Kubernetes service groups a set of pods into a single resource. You can create many services within a single application. This deployment ensures that you always have access to a group of pods even as they are added or torn down (e.g., front-end pods that need to connect to back-end pods).</p><p>Deployments</p><p>Deployments provide the capability to upgrade the underlying instances seamlessly using rolling updates, and to undo, pause, and resume changes as required.</p><p>Inside a deployment, there is a ReplicaSet. Inside a ReplicaSet, there are pods, and inside pods there are containers. 
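The hierarchy just described - a Deployment wrapping a ReplicaSet, which wraps pods, which wrap containers - can be sketched as a hypothetical Deployment manifest (names and image are illustrative assumptions, not from the lesson):

```yaml
apiVersion: apps/v1
kind: Deployment              # changing Kind (e.g., to DaemonSet) reuses much of this file
metadata:
  name: web-deployment        # illustrative name
spec:
  replicas: 3                 # the ReplicaSet created by this Deployment keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:                   # the pod template; each pod runs the containers below
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25   # illustrative container image
```

Applying this one manifest creates the Deployment, which creates the ReplicaSet, which in turn creates and maintains the pods.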
Here is more information about the Kubernetes hierarchy and deployments.</p><p>· ReplicaSet Creates the Pods - The ReplicaSet ultimately creates the pods, whose names include the Deployment and ReplicaSet names.</p><p>· Deployment Creates a ReplicaSet - Creating a deployment automatically creates a ReplicaSet with the name of the deployment.</p><p>Horizontal Pod Autoscaling Overview</p><p>The horizontal pod autoscaler (HPA) automatically scales the number of pods in a replication controller, deployment, ReplicaSet, or StatefulSet based on standard or custom metrics.</p><p>The HPA is implemented as a Kubernetes API resource and a controller. For our solution, a cloud-specific Kubernetes metric adapter is deployed inside the firewall cluster and wired up to the HPA. The HPA checks in periodically with the adapter to get the custom metric published by the plugin in the management plane (MP). The adapter in turn calls a Google Kubernetes Engine (GKE) Stackdriver endpoint or an Azure endpoint to retrieve the metric and return it to the HPA. The HPA then evaluates the value and compares it to the target value configured for a given data plane (DP) deployment or MP StatefulSet. Based on the HPA algorithm, the HPA scales the pods up or down.</p><p>New YAML files will be provided to the customers. Inside a YAML file, customers can use standard metrics like CPU/memory for autoscaling or go with specific custom metrics.</p><p>StatefulSet and Volumes</p><p>The management plane of the CN-Series firewall is a StatefulSet. Volumes are attached to containers to persist data.</p><p>· StatefulSet</p><p>· StatefulSet pods are created and deployed in a sequence. After the first pod is deployed, it must be in a running state for the next pod to be deployed.</p><p>· Each pod is assigned a unique index, starting from 0 and incrementing by one. Each pod thus gets a unique name combined with the StatefulSet name. 
There are no random names for the Pods.</p><p>· Even if a pod fails and is recreated, it will still come up with the same name. It maintains a "sticky" identity for pods.</p><p>· Scaling down works in reverse order. The last instance is deleted first, followed by the second-to-last, and so on.</p><p>· Volumes</p><p>· Docker containers are meant to be transient. This means they last only for a short period of time. They are called upon to process data and then are destroyed once they are finished. The same is true for the data within the container.</p><p>· To persist data within a container, we attach a volume at the time the container is created. Hence, the data processed by the container will stay in this volume even if the container itself is destroyed.</p><p>· Similar to container data, the pods created are transient in nature. Hence, a volume is attached to the pod so that the data generated or processed by the pod remains, even if the pods are deleted.</p><p>· This volume will need storage. When you create a volume, you can choose to configure its storage in different ways.</p><p>· K8s supports different types of storage solutions, including NFS, Ceph, GlusterFS, Flocker, AWS EBS, Azure Disk, and others.</p><p>Persistent Volumes</p><p>Persistent Volumes allow central management of the volumes for pods.</p><p>· Persistent Volume Claims (PVCs) - Persistent Volumes are a cluster-wide pool of storage volumes configured by an administrator to be used by users deploying applications on the cluster. The users can select storage from this pool using Persistent Volume Claims (PVCs).</p><p>· Storage Allocation - Persistent Volumes allow the use of a large pool of storage and allow the pods to carve out storage from the pool as and when required.</p><p>· Volume Challenges - When you have a large environment with a lot of pods to be deployed, you will have to configure the volume and volume storage for each pod using the definition file. 
Whenever a change is to be made with respect to volumes or storage, the user will have to make those changes on all of the pods.</p><p>PAN-OS 10.2 Release - New CNF Deployment Mode</p><p>CN-Series can be deployed in one of three ways: as a DaemonSet, as a Kubernetes service, or as a container network function (CNF).</p><p>When CN-Series is deployed on a Kubernetes cluster as a DaemonSet, the dataplane runs on each application node. This is particularly aligned with the requirements of latency-sensitive applications, as the firewall lives as close to the workload as possible.</p><p>When CN-Series is deployed on a Kubernetes cluster as a Kubernetes service, the dataplane runs on a dedicated security node. This configuration reduces cost and improves utilization, as the dataplane scales to meet the changing demands of workloads via native Kubernetes autoscale capabilities.</p><p>When CN-Series is deployed as a container network function (CNF), the firewall runs as a standalone Layer 3 deployment. This deployment enables users to protect both containerized and non-containerized applications via CN-Series.</p><p>Container Network Interface</p><p>Container Network Interface (CNI) is a set of standards that define how a plugin should be developed to solve networking challenges in container runtime environments.</p><p>· Plugin-Based Networking Solution - The goal of CNI is to create a generic plugin-based networking solution for containers.</p><p>· CNI Plugin - The CNI plugin is configured in the Kubelet service on each node. Kubelet looks into the CNI config directory to find which plugin needs to be used. 
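As a sketch, a CNI configuration file in that config directory might look like the following (network name, plugin type, and subnet are illustrative assumptions, not from the lesson):

```json
{
  "cniVersion": "0.4.0",
  "name": "demo-net",
  "type": "bridge",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Here "type" names the plugin binary the runtime invokes, and the "ipam" section delegates IP address assignment to a second plugin.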
</p><p>CNI Plugins and Runtime</p><p>· For Plugins</p><p>· The plugin must support the command-line arguments ADD/DEL/CHECK.</p><p>· The plugin must support parameters such as container ID, network namespace, etc.</p><p>· The plugin must manage IP address assignment to pods.</p><p>· The plugin must return its results in a specific format.</p><p>· For Runtime</p><p>· The container runtime, such as Docker, must create the network namespace.</p><p>· The runtime must identify the network the container must attach to.</p><p>· The container runtime must invoke the network plugin when a container is added.</p><p>· The container runtime must invoke the network plugin when a container is deleted.</p><p>· The runtime specifies how a network must be configured in the environment using a JSON file.</p>