This article explores the history of the service mesh, examines its architecture, and briefly looks at its challenges.

Before we examine what service meshes are and how they operate, I would like to take a step back and provide some context.

Early Ages!

Traditional architectures such as monoliths used to be widely adopted; they require less effort to build and deploy, yet make it difficult to respond quickly to business needs in terms of elasticity and maintainability.

Monoliths are typically a single package with a range of different features and components, all tightly coupled in an individual solution and tightly dependent on one of…

While K3s is designed to provide a highly available production cluster for Edge devices, K3d is conceived for development purposes and spins up a multi-node K3s cluster with ease.

K3s by Rancher is a lightweight Kubernetes distribution designed for situations where compute resources may be limited or where a smaller Kubernetes footprint is needed, such as IoT and edge devices. K3d is slightly different: it is a lightweight wrapper that runs multi-node K3s clusters for development purposes.
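To illustrate how easily K3d spins up a dev cluster, here is a minimal sketch. The cluster name and node counts are arbitrary, and it assumes k3d, kubectl, and Docker are installed:

```shell
# Create a K3s cluster with one server (control plane) and two agent nodes.
k3d cluster create demo-cluster --servers 1 --agents 2

# Point kubectl at the new cluster and verify all three nodes are Ready.
kubectl config use-context k3d-demo-cluster
kubectl get nodes
```

Tearing the cluster down again is a single `k3d cluster delete demo-cluster`, which is what makes it so convenient for throwaway development environments.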

In a previous blog post, we discussed in depth how Rancher made K3s well suited for certain types of workloads.


Lightweight — Simple — Easy — Secure — Production-Ready

Everything started with a desire for a lightweight K8s distribution that can run in constrained environments across a plethora of use cases.

Rancher took on the effort of creating an easy-to-use, reliable solution with a minimal learning curve: a Kubernetes cluster that uses half the memory of vanilla Kubernetes and ships as a single binary of around a hundred megabytes. By removing all superfluous components, the final result was astonishing: a production-grade K8s cluster well tuned for restricted environments, and secure since day one.
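The single-binary claim is easy to verify yourself. A minimal sketch, assuming a Linux host with root access, using the official install script:

```shell
# Installs K3s as a systemd service; the one binary bundles the
# control plane, kubelet, and containerd.
curl -sfL https://get.k3s.io | sh -

# K3s ships its own kubectl; check that the node comes up Ready.
sudo k3s kubectl get nodes
```

On a typical small VM the cluster is up within a minute, which is exactly the point for edge and IoT scenarios.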

I’ll go deep into…

The Kubernetes PodSecurityPolicy (PSP) is the earliest reliable security control offered out-of-the-box with Kubernetes. It is merely an admission controller that ensures deployed Pods meet the security level defined for a Kubernetes cluster. Unfortunately, PSP is still in beta, is being deprecated in the next Kubernetes release (Kubernetes 1.21), and will be entirely removed in Kubernetes 1.25.

PSP enables admins to prevent running containers that violate defined security policies. (Example: Prevent running containers that require a specific set of system capabilities).
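As a concrete sketch of the capabilities example above, here is an illustrative policy (the policy name is arbitrary) that forbids privileged containers and forces Pods to drop the `NET_RAW` capability:

```shell
# Applies an example PodSecurityPolicy; requires a pre-1.25 cluster
# with the PodSecurityPolicy admission controller enabled.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false            # reject privileged containers
  requiredDropCapabilities:
    - NET_RAW                  # containers must drop NET_RAW
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'
EOF
```

Note that a PSP only takes effect once the requesting service account is authorized to `use` it via RBAC; without that binding, Pod creation is rejected outright.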

Challenges with PSP?

PodSecurityPolicy has been stuck in beta since its introduction in Kubernetes v1.3. As we all know, any Kubernetes API that…

In today’s blog post, we will create a .NET Core-based application from scratch, targeting cross-platform environments. The steps described below are easily reproducible in any Linux or Windows environment.

.NET 5.0

Microsoft finally released the new .NET 5 at .NET Conf in November 2020.

.NET 5 is the unified platform for building cross-platform applications (Windows, Linux) that also run on edge and IoT devices. This brings increased performance, cost savings, and an additional option for running .NET-based apps on Kubernetes, which wasn’t the case for legacy applications built on the .NET Framework.
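A minimal sketch of the cross-platform workflow, assuming the .NET 5 SDK is installed (the project name is a placeholder):

```shell
# Scaffold a small web API project.
dotnet new webapi -o HelloApi

# Publish a self-contained Linux build: the runtime is bundled in,
# so the output can run in a minimal container image on Kubernetes.
dotnet publish HelloApi -c Release -r linux-x64 --self-contained true
```

The same publish command with `-r win-x64` produces a Windows build from the identical codebase, which is what "cross-platform" means in practice here.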

Both the “Certified Kubernetes Administrator” (CKA) and the “Certified Kubernetes Application Developer” (CKAD) certifications are hands-on and overlap considerably in scope. Preparing for both at once is therefore a great way to spend less time and effort on preparation. However, this mainly depends on your aspirations and goals, and on your ability to keep up with such challenges. :)

1. CKA & CKAD in a nutshell:

The Linux Foundation’s CKA and CKAD are performance-based exams and thus have no multiple-choice questions. Each exam lasts 120 minutes, with a passing score of 74% for CKA and 66% for…

Azure Red Hat OpenShift “ARO” is back again with a fabulous set of features and capabilities, making the transition from ARO 3.11 to ARO 4.x a must for most customers.

A charming white bird is bringing additional freedom to end users.

We are going to take a long, hard look at this new release and provide, as usual, an honest REX (return on experience).

Read on to find out more!

ARO 4.x is initially available within the HERO regions, including West/East Europe, the US (East, West, South Central), and more. However, as Microsoft aims to expand fast, ARO 4.x is expected to be…

In this blog post, I’m going to take a close look at how to deploy an exciting toolset for continuous deployment called Argo!

Let’s get into it ;)

What’s Argo CD?

Argo CD is a lightweight, easy-to-configure, declarative GitOps tool used to sync application deployments from a Git repository to one or many Kubernetes / OpenShift clusters.

Argo CD is based on the GitOps methodology, where Git is the sole source of truth holding the declarative definition of the desired state of a system. [Argo Project]
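The sync model above can be sketched with the `argocd` CLI. All names and URLs here are placeholders, and it assumes a running Argo CD instance you are logged in to:

```shell
# Register an application: Argo CD will track the manifests under
# ./manifests in the Git repo and reconcile them into the target cluster.
argocd app create demo-app \
  --repo https://github.com/example/demo-config.git \
  --path manifests \
  --dest-server https://kubernetes.default.svc \
  --dest-namespace demo

# Trigger a sync so the cluster converges to the state declared in Git.
argocd app sync demo-app
```

From then on, changing the desired state means committing to the repository, not touching the cluster directly; that is the essence of GitOps.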

Git used to be a repository for versioning source code only. Now its usage has been extended…

In this blog post, I’ll share the experience and outcome of recent work to add support for API Management and API Gateway deployment and integration.

The document walks through the implementation of 3Scale 2.8 API Management (APIM) and the APICast Gateway v3.8 using OpenShift Templates.

Architecture & Components

The Red Hat 3Scale API Management solution is a single pane of glass to manage and sustain APIs. It is mainly composed of two logical planes: the management plane (“3Scale API-Management”) and the data plane, which can be a set of APICast Gateways deployed on-premises or running on any cloud platform or region.

3Scale API-M…

This article brings a Spring Boot binary file to life, turning it into a fully running application on OpenShift.

Today’s use case: deploy to OpenShift the resulting artifact of an already built and packaged Spring Boot application. This is useful for software built on a platform different from the one where the application will run.

Prerequisites for our Experiments

First, we need a running OpenShift cluster on which we can do our experiments. I invite you to run the next steps using either a free hosted account, or by downloading and running an OpenShift cluster locally using Minishift.
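A minimal sketch of the binary-build flow this use case relies on. The build name, builder image tag, and jar path are placeholders; it assumes you are logged in to the cluster with `oc` and have a packaged jar locally:

```shell
# Create a binary build backed by the Java builder image: the source is
# streamed from the local machine instead of being cloned from Git.
oc new-build --name=springboot-app --image-stream=java:11 --binary=true

# Feed the locally built Spring Boot jar into the build.
oc start-build springboot-app --from-file=target/app.jar --follow

# Deploy the resulting image and expose it with a route.
oc new-app springboot-app
oc expose service/springboot-app
```

This is exactly the scenario where the build machine and the runtime platform differ: only the finished artifact crosses over.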

Do not forget…

Aymen Abdelwahed

Is a DevOps & Cloud enthusiast with 10-plus years of experience. He continuously immerses himself in the latest technology trends and projects.
