This post demonstrates how to use Argo Events and Argo Workflows to achieve event-driven workflow execution. Event-based architectures are a key part of building solutions where the individual components are decoupled: the responsibility of the component that generated an event stops at the point of generation; what happens next is a problem for another system.

Core Components

This section provides a quick overview of the core components of Argo Events and Argo Workflows.
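As a minimal sketch of the event side, a webhook EventSource declares where events come from (the event name, port and endpoint below are illustrative, not from the post):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  webhook:
    example:                # event name, referenced later by a Sensor
      port: "12000"         # port the event source pod listens on
      endpoint: /example    # HTTP endpoint that receives events
      method: POST
```

A Sensor would then subscribe to this event and trigger a Workflow, which is what decouples the producer from whatever runs next.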
What is ArgoCD

ArgoCD is a Continuous Delivery (CD) tool for Kubernetes which applies manifests, stored in Git repositories or Helm charts, to Kubernetes clusters. Built on the GitOps model, ArgoCD treats the source repositories as the source of truth: the manifests and charts pulled from them represent the true intended state. The GitOps model also means that the source control platform tracks every change to the desired state.
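A hedged sketch of an ArgoCD Application, the resource that points a cluster at a source repository (the repository here is ArgoCD's public example repo; names and paths are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/argoproj/argocd-example-apps.git
    targetRevision: HEAD      # track the default branch
    path: guestbook           # directory of manifests within the repo
  destination:
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      prune: true             # delete resources removed from Git
      selfHeal: true          # revert drift back to the Git state
```

With `automated` sync enabled, ArgoCD continuously reconciles the cluster toward the state stored in Git.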
This post continues from Operator SDK Part 1 and looks at building the operator's container image and running the operator.

Building the Operator Container

An operator container image contains the operator runtime for the type of operator, plus your business logic. The command make docker-build IMG=$IMG_NAME:TAG creates the operator container image using a Dockerfile in the project's root directory. The initialization process generates the Dockerfile based on the type of operator.
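A sketch of the build step, assuming the Makefile scaffolded by the SDK (the image name below is hypothetical; substitute your own registry, repository and tag):

```shell
# Hypothetical image name; replace with your registry/repository:tag
IMG=registry.example.com/myorg/memcached-operator:v0.0.1

# Build the operator image using the Dockerfile in the project root
make docker-build IMG=$IMG

# Push the image so the cluster can pull it
make docker-push IMG=$IMG
```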
The Operator-SDK is a command-line tool used by operator developers to build, test and package their operators. The SDK helps developers transform application business logic into a package which extends the Kubernetes API.

Initializing a new project

You need to provide the SDK with some information to initialize the scaffolding for a new project. The initialization command is: operator-sdk init --domain $DOMAIN --plugins $PLUGIN.

Parameters

--domain sets the domain for API resources that this operator will create on the cluster.
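A concrete invocation might look like the following (the domain and plugin values are illustrative assumptions, not prescribed by the post):

```shell
# Scaffold an Ansible-based operator; example.com is a placeholder domain
operator-sdk init --domain example.com --plugins ansible
```

The chosen plugin determines the type of operator scaffolded, and with it which Dockerfile and project layout the SDK generates.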
Development and management of Kubernetes native applications is a difficult task with a steep learning curve. The operator framework aims to reduce the complexity by pooling shared expertise into a single project and standardizing application packaging. Consumers benefit from operators' automated operations, making it easier to keep their applications up to date and secure.

The 3 Pillars

Three pillars underpin the operator framework. The section below provides an overview of each pillar, and follow-up posts will cover each one in more detail.
This article is the first in a new series where I’ll attempt to explain Kubernetes Operators, and how to build them, in a concise and easy-to-digest manner.

What is an operator?

An operator is one or more custom resources plus a control loop process which runs inside a pod on a Kubernetes cluster.

What does an operator do?

An operator manages a Kubernetes native application’s lifecycle by extending the Kubernetes API using custom resources (CRs) specific to the application.
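To make the definition concrete, a custom resource for a hypothetical application might look like this (the group, kind and fields are illustrative assumptions):

```yaml
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  # Desired state; the operator's control loop reconciles the
  # cluster toward this value (e.g. by scaling a Deployment)
  size: 3
```

The control loop watches resources of this kind and continually drives the actual state of the application toward the spec.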
This is the first part of a series of posts which will look into different types of network policies and create conflicting rules to observe the outcome. Tests in this series have been performed using Minikube with the Cilium CNI; using a different CNI may change the observed results.

Demo environment

The Kubernetes environment used for this post starts with the following:

- 2 namespaces: np-deepdive-one and np-deepdive-two
- 2 types of pods: a web pod to test ingress policies against, and a BusyBox pod to perform the tests from
- A service for the web pod

Additional objects will be created throughout the examples.
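The environment above could be stood up with commands along these lines (the images and labels are assumptions for the sketch; the namespace names come from the post):

```shell
# The two namespaces used throughout the series
kubectl create namespace np-deepdive-one
kubectl create namespace np-deepdive-two

# An nginx pod as the "web" ingress target, and a BusyBox pod to test from
kubectl run web --image=nginx --namespace np-deepdive-one --labels app=web
kubectl run busybox --image=busybox --namespace np-deepdive-two \
  --command -- sleep 3600

# Expose the web pod behind a service
kubectl expose pod web --port 80 --namespace np-deepdive-one
```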
Kubernetes network policies allow platform consumers to restrict layer 3 and 4 network traffic. All ingress and egress traffic is permitted (default-allow) for pods that no network policy selects. Once at least one network policy selects a pod, any traffic to or from that pod which does not match a policy rule is dropped (default-deny). It is essential to be mindful of the default-allow state when providing a Kubernetes service to users, as it could impact platform and service design decisions.
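As an example of flipping a namespace from default-allow to default-deny, a common baseline policy selects every pod but allows nothing (a sketch; the name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress           # no ingress rules are listed, so all ingress is dropped
```

Egress remains unaffected because the policy does not list Egress in policyTypes.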
ConfigMaps enable decoupling of application configuration settings from a pod, improving workload portability. ConfigMaps are a key/value store for non-sensitive information such as command-line argument values, environment-specific strings and URLs. Do not use ConfigMaps to store sensitive or encrypted data; use Secrets for that use case. You can create a ConfigMap with the command kubectl create configmap [NAME] [DATA], where the data is entered as literal values or read from a file.
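A sketch of a ConfigMap and a pod consuming it as environment variables (the names, keys and values here are illustrative assumptions):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # hypothetical name
data:
  LOG_LEVEL: info
  API_URL: https://api.example.com
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "env && sleep 3600"]
    envFrom:
    - configMapRef:
        name: app-config      # each key becomes an environment variable
```

Because the ConfigMap is a separate object, the same pod spec can be deployed to different environments with different configuration values.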
Pods are not assigned to any node when they are first created; it is the job of the scheduler to assign the pod to a node. A scheduler watches the apiServer for pods that have no value in the spec.nodeName field. The scheduler finds the most suitable node for a pod based on the podSpec and node statistics. After finding a suitable node, the scheduler writes a binding to the apiServer, assigning the pod to the node.
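One way to observe this behaviour, assuming kubectl access to a cluster (the pod name and image are illustrative):

```shell
# Create a pod; it briefly has an empty spec.nodeName
kubectl run demo --image=nginx

# Once the scheduler has bound it, the field holds the chosen node
kubectl get pod demo -o jsonpath='{.spec.nodeName}'
```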
A LimitRange object is used by cluster administrators to set default request and limit values for containers within a namespace, as well as minimum and maximum values for containers and pods in that namespace. If you’re unfamiliar with container requests and limits within Kubernetes, click here. Configuring a container's requests and limits settings is a well-documented best practice, and cluster administrators use LimitRange objects to ensure that containers within a namespace align to it.
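A hedged sketch of a LimitRange combining defaults with min/max bounds (all values are illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: container-limits
spec:
  limits:
  - type: Container
    default:                # limit applied when a container sets none
      cpu: 500m
      memory: 256Mi
    defaultRequest:         # request applied when a container sets none
      cpu: 250m
      memory: 128Mi
    min:                    # rejects containers requesting less than this
      cpu: 100m
      memory: 64Mi
    max:                    # rejects containers asking for more than this
      cpu: "1"
      memory: 512Mi
```

Containers created in the namespace without explicit values inherit the defaults; containers whose values fall outside min/max are rejected at admission.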
Container CPU and memory requests and limits configuration guarantees a minimum amount of resources to a container and sets a maximum consumable amount.

Resource Configuration Values

Memory is measured in bytes and expressed as a plain integer or with a quantity suffix. For example, memory: 1 is 1 byte, memory: 1Mi is 1 mebibyte and memory: 1Gi is 1 gibibyte. Memory is not a compressible resource, so there is no throttling: a container that exceeds its memory limit is terminated (OOMKilled).
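Requests and limits are set per container in the pod spec; a sketch with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:             # guaranteed minimum; used by the scheduler
        memory: 128Mi
        cpu: 250m
      limits:               # hard ceiling; exceeding the memory limit
        memory: 256Mi       # terminates the container (OOMKill)
        cpu: 500m
```

CPU, unlike memory, is compressible: a container hitting its CPU limit is throttled rather than killed.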
I noticed that none of the Kubernetes services were writing logs to /var/log/ on the master nodes. I built my lab with kubeadm and a reasonably basic configuration file containing just the settings needed to make it work. The Kubernetes services apiServer, ControllerManager and Scheduler have several configuration flags for logging. In this post, I am going to go through my experience enabling logging with kubeadm.
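A sketch of how file logging could be passed through a kubeadm configuration, assuming a Kubernetes version that still supports the klog --log-dir and --logtostderr flags (the directory and verbosity are illustrative; the same extraArgs pattern applies to controllerManager and scheduler):

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
apiServer:
  extraArgs:
    logtostderr: "false"            # stop logging only to stderr
    log-dir: /var/log/kubernetes    # write log files here instead
    v: "2"                          # log verbosity level
  extraVolumes:                     # the static pod needs the host path mounted
  - name: log-dir
    hostPath: /var/log/kubernetes
    mountPath: /var/log/kubernetes
    pathType: DirectoryOrCreate
```

Without the extraVolumes hostPath mount, the log files would land inside the static pod's container filesystem rather than on the master node itself.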