<p>1. System Requirements for Deployment</p><p>This lesson describes the CN-Series system requirements for deployment.</p><p>CN-Series System Requirements for the Kubernetes Cluster</p><p>There are system requirements for the Kubernetes cluster in which you are deploying the CN-Series firewall.</p><p>While the CPU, memory, and disk storage you allocate will depend on your needs, some guidelines are shown in the image. In addition, the Kubernetes cluster must run a supported Kubernetes version.</p><p>2. Deployment Modes</p><p>This lesson describes the pros and cons of the distributed and clustered deployment modes available for the CN-Series firewall.</p><p>Multiple Deployment Modes for Optimal Security and Resource Utilization</p><p>You now have two deployment options based on your operational and budgetary considerations.</p><p>· Distributed Deployments - Deploy the CN-Series data plane as a DaemonSet.</p><p>· Pros:</p><p>· The CN-Series data plane is deployed per node, placing security enforcement as close to the workloads as possible and minimizing traffic latency</p><p>· Node-based pricing simplifies upfront forecasting by reducing the need to predict in advance how much firewall throughput will be required</p><p>· Cons:</p><p>· Resource intensive, because compute resources must be allocated to the firewalls on every node</p><p>· Cost prohibitive in large environments because of the number of firewalls required</p><p>· Clustered Deployments - Deploy the CN-Series data plane as a native Kubernetes Service on a dedicated security node.</p><p>· Pros:</p><p>· Leverages Kubernetes-native autoscaling capabilities to elastically scale CN-Series deployments</p><p>· Maximizes compute efficiency by allowing Kubernetes to deploy CN-Series based on available resources</p><p>· Cost effective because fewer firewalls are required</p><p>· Cons:</p><p>· Network latency due to traffic hairpinning</p><p>PAN-OS 10.2 Release - New CNF Deployment Mode</p><p>As part of the new PAN-OS 10.2 release, we have launched the Container Network Function (CNF) deployment mode.</p><p>This image shows how CN-Series can protect both non-containerized and containerized infrastructure.</p><p>Benefits of CNF Mode Deployment</p><p>Some of the benefits of CNF mode deployment include:</p><p>· Protection for both container and non-container workloads</p><p>· Expanded networking deployment options for public and private clouds</p><p>· Securing traffic more efficiently, with a 5X performance increase</p><p>Characteristics of CNF Mode Deployment</p><p>Here are a few characteristics of CNF mode deployment:</p><p>· It is the industry's only containerized 5G firewall.</p><p>· It can scale up to 47 vCPUs.</p><p>· You can deploy it as a standalone Layer 3 deployment with HA active-passive mode and session synchronization, as well as I/O acceleration with DPDK and SR-IOV.</p><p>How CNF Deployment Mode Solves Traffic Challenges</p><p>The CN-Series-as-a-DaemonSet and CN-Series-as-a-Kubernetes-Service deployment modes provide automated security deployment and leverage the autoscaling capabilities of Kubernetes.</p><p>CN-Series can be deployed in one of three ways: as a DaemonSet, as a Kubernetes service, and as a container network function (CNF).
Click the tabs for more information about the deployment mode challenges.</p><p>· DaemonSet and Kubernetes Service - The first two deployment methods (DaemonSet and Kubernetes service) are designed to protect containerized applications from network-based attacks.</p><p>· Container Network Function - CNF differs from the DaemonSet and Kubernetes service modes in that it can protect both containerized and non-containerized applications. This is made possible by running CN-Series in a standalone Layer 3 deployment.</p><p>CNF mode additionally features HA in active-passive mode with session synchronization, as well as I/O acceleration with DPDK and SR-IOV.</p><p>· Helpful Mental Model</p><p>· DaemonSet and Kubernetes service: protect containers.</p><p>· CNF: the firewall as a container.</p><p>Solution to the Deployment Challenges</p><p>CN-Series-as-a-Kubernetes-CNF mode resolves these challenges for traffic that uses service function chaining (SFC) through external entities such as a cloud provider's native routing, vRouters, and top-of-rack (TOR) switches.</p><p>Note: The CN-Series-as-a-Kubernetes-CNF mode of deployment does not impact the application pods.</p><p>CNF Deployment Mode High-Level Steps</p><p>Here are the high-level steps for deploying in CNF mode.</p><p>· Step One - Go into the AWS Management Console to set up your Kubernetes cluster.</p><p>· Step Two (optional) - If you configured a custom certificate in the Kubernetes plugin for Panorama, you must create the cert secret.</p><p>· Step Three - Edit the YAML files to provide the details required to deploy the CN-Series firewalls.</p><p>· Step Four - Deploy the CN-MGMT StatefulSet.</p><p>· Step Five - Deploy the CN-NGFW in K8s-CNF mode.</p><p>· Step Six - Deploy the CN-NGFW pods.</p><p>· Step Seven - Verify that you can see CN-MGMT and CN-NGFW on the Kubernetes cluster.</p><p>3. Deployment Workflow</p><p>This lesson describes the CN-Series deployment workflow.</p><p>CN-Series Deployment Sequence</p><p>There are six steps in the CN-Series deployment sequence.</p><p>Step 1: Get Files and Prepare Docker Repository</p><p>The first step is to download Docker images from the Palo Alto Networks Customer Support Portal (CSP) and YAML files from GitHub. Then, upload the Docker images to the Docker repository in your cloud.</p><p>Get the latest PAN-OS container images from the CSP:</p><p>· For the CN-MGMT and CN-NGFW pods - PanOS_cn-X.X.X.tgz</p><p>· For the init container that runs as a part of the CN-MGMT pod - Pan_cn_mgmt_init-X.X.X.tgz</p><p>· For the PAN-CNI pod - Pan_cni-2.0.0.tgz</p><p>The files to download are shown in the image below.</p><p>Docker Load, Tag, and Push</p><p>Get the YAML files from GitHub, download the Docker images from the Palo Alto Networks CSP, and push the images to your container registry before you continue to deploy the CN-Series firewalls.</p><p>Refer to the following table before beginning your deployment to ensure that you have downloaded the compatible files:</p><p>Download Docker Images and YAML Files</p><p>To download the Docker images and YAML files, follow the steps below:</p><p>· First, get the compressed tar archives from the Palo Alto Networks Customer Support Portal (CSP).</p><p>· Next, get the YAML files from GitHub.</p><p>Create and Push Container Registry</p><p>To load, tag, and push the images to your container registry in Google Cloud Platform, Azure, AWS, or locally, run these commands. Click the tabs for the commands to load, tag, and push.</p><p>1. 
Load the images.</p><p>· docker load -i PanOS_cn-x.x.x.tgz</p><p>· docker load -i Pan_cn_mgmt_init-x.x.x.tgz</p><p>· docker load -i Pan_cni-x.x.x.tgz</p><p>After loading, running "docker images" will list the images, for example, "paloaltonetworks/panos_cn_mgmt:x.x.x".</p><p>2. Tag these images to include your private registry details.</p><p>· docker tag paloaltonetworks/panos_cn_mgmt:x.x.x <your_registry>/paloaltonetworks/panos_cn_mgmt:x.x.x</p><p>· docker tag paloaltonetworks/panos_cn_ngfw:x.x.x <your_registry>/paloaltonetworks/panos_cn_ngfw:x.x.x</p><p>· docker tag paloaltonetworks/pan_cn_mgmt_init:x.x.x <your_registry>/paloaltonetworks/pan_cn_mgmt_init:x.x.x</p><p>· docker tag paloaltonetworks/pan_cni:x.x.x <your_registry>/paloaltonetworks/pan_cni:x.x.x</p><p>3. Push these images to your private registry.</p><p>· docker push <your_registry>/paloaltonetworks/panos_cn_mgmt:x.x.x</p><p>· docker push <your_registry>/paloaltonetworks/panos_cn_ngfw:x.x.x</p><p>· docker push <your_registry>/paloaltonetworks/pan_cn_mgmt_init:x.x.x</p><p>· docker push <your_registry>/paloaltonetworks/pan_cni:x.x.x</p><p>Step 2: Set Up Panorama That Will Connect to the K8s Cluster</p><p>It is important to set up Panorama so it can connect to the Kubernetes cluster.</p><p>· Steps 1 - 3</p><p>1. Make sure the Panorama PAN-OS version is 10.1 or later.</p><p>2. Make sure Panorama is in Panorama Mode.</p><p>3. Install the Kubernetes plugin on Panorama:</p><p>· Log in to the Panorama web interface, select Panorama > Plugins, and click Check Now to get the list of available plugins.</p><p>· Select Download and then Install the Kubernetes plugin.</p><p>Once the installation is successful, Panorama refreshes and the Kubernetes plugin is displayed on the Panorama tab.</p><p>· Step 4 - Perform a Panorama commit:</p><p>· The commit creates two templates - K8S-Network-Setup and K8S-Network-Setup-V2. It takes up to one minute for the interfaces to display on Panorama.</p><p>· K8S-Network-Setup is for use with the CN-Series as a DaemonSet and has 30 virtual wires; each virtual wire is a pair of interfaces used to secure an application. Therefore, the CN-NGFW as a DaemonSet can secure a maximum of 30 application pods on a node.</p><p>· K8S-Network-Setup-V2 is for use with the CN-Series as a Kubernetes Service and has one virtual wire; a pair of interfaces used to secure an application.</p><p>· Step 5 - Make sure that the CN-Series auth code is registered on the Palo Alto Networks Customer Support Portal.</p><p>· Step 6 - Activate the auth code on the Kubernetes plugin:</p><p>· Select Panorama > Plugins > Kubernetes > Setup > Licenses.</p><p>· Step 6 (Continued)</p><p>1. Select Activate/update using authorization code and enter the auth code and the number of tokens you received.</p><p>2. You must activate the auth code to enable CN-MGMT to connect with Panorama.</p><p>3. If you deploy the CN-Series firewall without activating the license, you have a four-hour grace period, after which the firewalls stop processing traffic. After the grace period, the CN-NGFW instances will either fail open (default) or fail closed, based on the failover mode (FAILOVER_MODE) defined in pan-cn-ngfw-configmap.yaml; see the sketch after this list.</p><p>· Fail-open mode: In fail-open mode, the firewall receives each packet and sends it out without applying any security policies. Transitioning to fail-open requires a restart and causes a brief disruption of traffic (expected to be around 10 to 30 seconds).</p><p>· Fail-closed mode: In fail-closed mode, the firewall drops all the packets it receives. Failing closed brings down the CN-NGFW pod and releases its tokens to the available token pool for licensing new CN-NGFW pods.</p><p>4. Verify that the number of available license tokens is updated.</p>
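<p>The grace-period behavior described above is controlled by the FAILOVER_MODE value in pan-cn-ngfw-configmap.yaml. As a minimal sketch, assuming you are working from your local copy of the YAML files (the name of the deployed ConfigMap object can vary by YAML version, so the second command lists the candidates rather than assuming a name), you can confirm the setting as follows:</p><p>· grep -n FAILOVER_MODE pan-cn-ngfw-configmap.yaml</p><p>· kubectl -n kube-system get configmaps | grep ngfw</p><p>The first command shows the value you are about to deploy; the second lists the deployed ConfigMaps so you can inspect the one your CN-NGFW pods actually use.</p>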
<p>· Step 7 - Generate the VM Auth Key. Log in to the Panorama CLI and use the following operational command:</p><p>· request bootstrap vm-auth-key generate lifetime <1-8760></p><p>· Step 8 - Create a parent device group and template stack. You must create a template stack and a device group; you will later reference this template stack and device group when you edit the YAML file to deploy the CN-MGMT pods. The Kubernetes plugin on Panorama creates a template called K8S-Network-Setup, and this template will be part of the template stack you define here.</p><p>To create a template stack and add the K8S-Network-Setup template to it:</p><p>1. Select Panorama > Templates and Add Stack.</p><p>2. Enter a unique Name to identify the stack.</p><p>3. Add and select the K8S-Network-Setup template.</p><p>4. Click OK.</p><p>Create a device group:</p><p>1. Select Panorama > Device Groups and click Add.</p><p>2. Enter a unique Name and a Description to identify the device group.</p><p>3. Select the Parent Device Group (default is Shared) that will be just above the device group that you are creating in the device group hierarchy.</p><p>4. Click OK.</p><p>If you are using a Panorama virtual appliance, then create a Log Collector and add it to a Collector Group:</p><p>1. Select Panorama > Collector Groups and Add a Collector Group.</p><p>2. Enter a Name for the Collector Group.</p><p>3. Enter the Minimum Retention Period in days (1 to 2,000) for which the Collector Group will retain firewall logs. By default, the field is blank, which means the Collector Group retains logs indefinitely.</p><p>4. Add Log Collectors (1 to 16) to the Collector Group Members list.</p><p>Step 3: Create Two Service Accounts</p><p>You will need to create two service accounts to authenticate to the K8s cluster and enable the CN-MGMT and CN-NGFW pods to communicate with each other.</p><p>1. Run the service-account YAML, plugin-serviceaccount.yaml - This service account enables the permissions that Panorama requires to authenticate to the K8s cluster to retrieve Kubernetes labels and resource information. This service account is named pan-plugin-user by default.</p><p>· kubectl apply -f plugin-serviceaccount.yaml</p><p>· To view the secrets associated with this account:</p><p>· kubectl -n kube-system get secrets | grep pan-plugin-user-token</p><p>· Create the credential file, named cred.json in this example, that includes the secrets, and save this file. You need to upload this file to Panorama to set up the Kubernetes plugin for monitoring the clusters.</p><p>· kubectl -n kube-system get secrets <secrets-from-above-command> -o json >> cred.json</p><p>Note: At this point, you should see the JSON file being created.</p><p>2. Run pan-mgmt-serviceaccount.yaml and pan-cni-serviceaccount.yaml - The pan-mgmt-serviceaccount.yaml file creates a service account named pan-sa and is required to enable the CN-MGMT and CN-NGFW pods to communicate with each other, the PAN-CNI, and the Kubernetes API server. 
If you modify this service account name, you must also update the YAML files that you use to deploy the CN-MGMT and CN-NGFW pods. The pan-cni-serviceaccount.yaml file creates a service account named pan-cni-sa.</p><p>· kubectl apply -f pan-mgmt-serviceaccount.yaml</p><p>· kubectl apply -f pan-cni-serviceaccount.yaml</p><p>3. Verify that the service accounts have been created.</p><p>· kubectl get serviceaccounts -n kube-system</p><p>Step 4: Set Up the K8s Plugin on Panorama to Monitor the Cluster</p><p>Add the K8s cluster information so that Panorama can access the API endpoint for the cluster and authenticate using the service account credentials to query the API server.</p><p>· Steps 1 - 4</p><p>1. You can add up to 32 service account credentials on Panorama. Panorama supports only one service account credential per Kubernetes cluster.</p><p>2. To ensure that the plugin and the Kubernetes clusters are in sync, the plugin polls the Kubernetes API server at a configured interval and listens for notifications from the Kubernetes Watch API at a predefined interval (not user configurable).</p><p>3. After you add the cluster information, Panorama retrieves the predefined resources from your Kubernetes clusters, such as service, node, and replica set, and creates tags for them so that you can gain visibility into and control traffic to and from these clusters.</p><p>4. Optionally, you can specify whether you want Panorama to retrieve information on the Kubernetes labels and create tags for those, as well.</p><p>· Step 5</p><p>5. Check the monitoring interval. The default interval at which Panorama polls the Kubernetes API server endpoint is 30 seconds.</p><p>A. Select Panorama > Plugins > Kubernetes > Setup > General.</p><p>B. Verify that Enable Monitoring is selected.</p><p>C. Click the gear icon to edit the Monitoring Interval; you can set it to a value between 30 and 300 seconds.</p><p>· Step 6</p><p>6. Select Panorama > Plugins > Kubernetes > Setup > Cluster and Add Cluster.</p><p>Note: You can leave the Label Filter and Label Selector configuration for later. This is an optional task that enables you to retrieve any custom or user-defined labels for which you want Panorama to create tags.</p><p>Step 5: Edit YAML Files</p><p>Edit the YAML files; note that many parameters need to be edited across multiple files.</p><p>When you edit YAML files:</p><p>· Determine Version - Determine the version of YAML files required based on your PAN-OS version.</p><p>· Git Clone - Git clone the repository to your host (see the sketch after this list).</p><p>· Git Checkout - Git checkout to the tag of the YAML version number that corresponds to your PAN-OS version.</p><p>· CD into Directory - Change into the appropriate directory for your deployment (DaemonSet or Service) and then into the directory for your environment (for example, DaemonSet on GKE).</p><p>Note: Many parameters need to be edited across multiple files.</p>
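<p>A minimal sketch of this workflow, assuming the YAML files are hosted in the PaloAltoNetworks Kubernetes repository on GitHub (the repository URL, tag name, and directory names shown here are illustrative; confirm them against the CN-Series documentation for your PAN-OS version):</p><p>· git clone https://github.com/PaloAltoNetworks/Kubernetes.git</p><p>· cd Kubernetes</p><p>· git tag</p><p>· git checkout tags/<yaml-version-for-your-pan-os-release></p><p>· cd <deployment-mode-directory>/<environment-directory></p><p>· grep -n 'image:' *.yaml</p><p>Listing the tags first helps you pick the YAML version that matches your PAN-OS release, and the final grep is a quick way to locate every image reference that must be updated to point at your private registry.</p>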
<p>Step 6: Deploy CN-Series CNI, Management, and Data Plane</p><p>The final step is to deploy CN-Series. The CN-Series firewall can be deployed as a service or as a DaemonSet.</p><p>Service Deployment</p><p>The following steps describe how to deploy CN-Series as a service.</p><p>Service Deployment: Deploy the CN-NGFW Service</p><p>· Step 1 - Verify that you have created the service account using pan-cni-serviceaccount.yaml.</p><p>· Step 2 - Use kubectl to run pan-cni-configmap.yaml.</p><p>· kubectl apply -f pan-cni-configmap.yaml</p><p>· Step 3 - Use kubectl to run pan-cn-ngfw-svc.yaml.</p><p>· kubectl apply -f pan-cn-ngfw-svc.yaml</p><p>This YAML must be deployed before pan-cni.yaml.</p><p>· Step 4 - Use kubectl to run pan-cni.yaml.</p><p>· kubectl apply -f pan-cni.yaml</p><p>· Step 5 - Verify that you have modified the pan-cni-configmap and pan-cni YAML files.</p><p>· Step 6 - Run the following command and verify that your output is similar to the following example.</p><p>· kubectl get pods -n kube-system | grep pan-cni</p><p>Service Deployment: Deploy the CN-MGMT StatefulSet</p><p>By default, the management plane is deployed as a StatefulSet that provides fault tolerance. Up to 30 firewall CN-NGFW pods can connect to a CN-MGMT StatefulSet. Below are the steps for deploying a CN-MGMT StatefulSet:</p><p>· Step 1 - Use kubectl to run the YAML files.</p><p>· kubectl apply -f pan-cn-mgmt-configmap.yaml</p><p>· kubectl apply -f pan-cn-mgmt-slot-crd.yaml</p><p>· kubectl apply -f pan-cn-mgmt-slot-cr.yaml</p><p>· kubectl apply -f pan-cn-mgmt-secret.yaml</p><p>· kubectl apply -f pan-cn-mgmt.yaml</p><p>· Step 2 - You must run pan-mgmt-serviceaccount.yaml only if you have not previously created service accounts for cluster authentication.</p><p>· Step 3 - Verify that the CN-MGMT pods are up. It takes about five to six minutes.</p><p>· kubectl get pods -l app=pan-mgmt -n kube-system</p><p>Service Deployment: Deploy the CN-NGFW Pods</p><p>An instance of the CN-NGFW pod can secure traffic for up to 30 application pods on a node. Below are the steps for deploying the CN-NGFW pods:</p><p>· Step 1 - Verify that you have modified the YAML files as detailed in PAN-CN-NGFW-CONFIGMAP and PAN-CN-NGFW containers:</p><p>· - name: pan-ngfw-container image: <your-private-registry-image-path></p><p>· Step 2 - Use kubectl apply to run pan-cn-ngfw-configmap.yaml.</p><p>· kubectl apply -f pan-cn-ngfw-configmap.yaml</p><p>· Step 3 - Use kubectl apply to run pan-cn-ngfw.yaml.</p><p>· kubectl apply -f pan-cn-ngfw.yaml</p><p>· Step 4 - Verify that all the CN-NGFW pods are running (one per node in your cluster).</p><p>· kubectl get pods -n kube-system -l app=pan-ngfw -o wide</p><p>· Step 5 - Verify that you can see CN-MGMT, CN-NGFW, and the PAN-CNI on the Kubernetes cluster.</p><p>· kubectl -n kube-system get pods</p>
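<p>If any of the CN-Series pods do not reach the Running state, a couple of quick checks can help narrow down the cause. This is a minimal sketch using the labels shown above; adjust the label selectors if you changed them in your YAML files:</p><p>· kubectl -n kube-system describe pods -l app=pan-mgmt</p><p>· kubectl -n kube-system logs -l app=pan-mgmt --tail=50</p><p>· kubectl -n kube-system logs -l app=pan-ngfw --tail=50</p><p>The describe output surfaces scheduling and image-pull problems, while the logs typically show whether the pods can reach Panorama and the Kubernetes API server.</p>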
<p>DaemonSet Deployment</p><p>The following steps describe how to deploy CN-Series as a DaemonSet.</p><p>DaemonSet Deployment: Deploy the CNI DaemonSet</p><p>The CNI container is deployed as a DaemonSet (one pod per node), and it creates two interfaces on the CN-NGFW pod for each application deployed on the node. When you use the kubectl commands to run the pan-cni YAML files, the container becomes a part of the service chain on each node. Click the tabs for the steps for deploying the CNI DaemonSet.</p><p>· Step 1 - Verify that you have created the service account using pan-cni-serviceaccount.yaml.</p><p>· Step 2 - Use kubectl to run pan-cni-configmap.yaml.</p><p>· kubectl apply -f pan-cni-configmap.yaml</p><p>· Step 3 - Use kubectl to run pan-cni.yaml.</p><p>· kubectl apply -f pan-cni.yaml</p><p>· Step 4 - Verify that you have modified the pan-cni-configmap and pan-cni YAML files.</p><p>· Step 5 - Run the following command and verify that your output is similar to the following example.</p><p>· kubectl get pods -n kube-system | grep pan-cni</p><p>DaemonSet Deployment: Deploy the CN-MGMT StatefulSet</p><p>By default, the management plane is deployed as a StatefulSet that provides fault tolerance. Up to 30 firewall CN-NGFW pods can connect to a CN-MGMT StatefulSet.</p><p>Below are the steps for deploying a CN-MGMT StatefulSet:</p><p>· Step 1 - Use kubectl to run the YAML files.</p><p>· kubectl apply -f pan-cn-mgmt-configmap.yaml</p><p>· kubectl apply -f pan-cn-mgmt-slot-crd.yaml</p><p>· kubectl apply -f pan-cn-mgmt-slot-cr.yaml</p><p>· kubectl apply -f pan-cn-mgmt-secret.yaml</p><p>· kubectl apply -f pan-cn-mgmt.yaml</p><p>· Step 2 - You must run pan-mgmt-serviceaccount.yaml only if you have not previously created service accounts for cluster authentication.</p><p>· Step 3 - Verify that the CN-MGMT pods are up. It takes about five to six minutes.</p><p>· kubectl get pods -l app=pan-mgmt -n kube-system</p><p>DaemonSet Deployment: Deploy the CN-NGFW Pods</p><p>By default, the firewall data-plane CN-NGFW pod is deployed as a DaemonSet. An instance of the CN-NGFW pod can secure traffic for up to 30 application pods on a node. Below are the steps for deploying the CN-NGFW pods:</p><p>· Step 1 - Verify that you have modified the YAML files as detailed in PAN-CN-NGFW-CONFIGMAP and PAN-CN-NGFW containers:</p><p>· - name: pan-ngfw-container image: <your-private-registry-image-path></p><p>· Step 2 - Use kubectl apply to run pan-cn-ngfw-configmap.yaml.</p><p>· kubectl apply -f pan-cn-ngfw-configmap.yaml</p><p>· Step 3 - Use kubectl apply to run pan-cn-ngfw.yaml.</p><p>· kubectl apply -f pan-cn-ngfw.yaml</p><p>· Step 4 - Verify that all the CN-NGFW pods are running (one per node in your cluster).</p><p>· kubectl get pods -n kube-system -l app=pan-ngfw -o wide</p><p>· Step 5 - Verify that you can see CN-MGMT, CN-NGFW, and the PAN-CNI on the Kubernetes cluster.</p><p>· kubectl -n kube-system get pods</p><p>You should see the management plane (MP) pods connected to Panorama. Note that it might take some time for the MP pods to display.</p>
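<p>A minimal sketch for confirming the connection from both sides (the second command is run from the Panorama CLI, not from kubectl, and exact output formats vary by release):</p><p>· kubectl -n kube-system get pods -l app=pan-mgmt -o wide</p><p>· show devices connected</p><p>On the cluster side, the CN-MGMT pods should be Running; on Panorama, the corresponding CN-Series instances should appear in the connected-devices list once registration completes.</p>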
<p>What Is Terraform?</p><p>Terraform is an infrastructure-as-code tool that lets you build, change, and version cloud and on-prem resources in human-readable configuration files.</p><p>CN-Series: Cloud Native Kubernetes Orchestration</p><p>Deployments in AKS, EKS, GKE, OpenShift, and native Kubernetes can all be managed and orchestrated via a combination of Helm and Terraform. This enables you to quickly and easily deploy CN-Series across multiple clouds.</p><p>Deploying CN-Series Using Terraform</p><p>· Step 1 - Use your local cn-series\tfvars to create a file named terraform.tfvars and add the following variables and their associated values.</p><p>· Step 2 - Initialize Terraform.</p><p>· $ terraform init</p><p>· Step 3 - Validate the Terraform plan.</p><p>· $ terraform plan</p><p>· Step 4 - Apply the Terraform plan.</p><p>· $ terraform apply</p><p>· Step 5 - Verify that the pods have been deployed, are Ready, and have a status of Running.</p><p>· $ kubectl get pods -A</p><p>CN-Series Red Hat OpenShift Operator</p><p>Our Red Hat OpenShift operator enables you to configure, deploy, and monitor your CN-Series containerized next-generation firewall deployments within your OpenShift clusters through the Red Hat OpenShift web console.</p><p>This gives rise to three main benefits:</p><p>· Easy Deployment - You can deploy CN-Series on OpenShift in seconds by locating the CN-Series operator within the Red Hat OperatorHub and installing it.</p><p>· Visibility and Configurability - You can now view your data plane and management plane CN-Series pods within your OpenShift cluster directly within the Red Hat UI.</p><p>· Lifecycle Management - Operator lifecycle management is a tool that hardens your CN-Series deployments against issues such as accidental pod deletion, which increases the resiliency of your infrastructure.</p><p>Summary of the CN-Series Deployment Process on OpenShift</p><p>Here are the high-level steps to deploy CN-Series on OpenShift.</p><p>Key Takeaways</p><p>Now that you have reviewed the deployment workflow, below are key takeaways for deploying the CN-Series firewall.</p>