During a review of log events coming into Elasticsearch, I came across some that included ANSI colour codes. Below is one example where they appear in the log level, noting the `ESC[39mDEBUG ESC[0;39m` (the non-printable escape character renders as a box). While this is handy for colouring log levels when viewing in a terminal, it is not so handy for use in Elasticsearch or Kibana. Some more examples follow; note these were mutated to all uppercase, hence the capital ‘M’.
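As a minimal sketch of the cleanup (not the exact filter from the article), the colour sequences can be stripped with a regular expression; the pattern below is an assumption covering the common CSI colour codes seen in the samples:

```python
import re

# Matches ANSI CSI colour sequences such as "\x1b[0;39m"
# (assumed pattern; extend it if other escape forms appear).
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(text: str) -> str:
    """Remove ANSI colour sequences from a log line."""
    return ANSI_RE.sub("", text)

print(strip_ansi("\x1b[39mDEBUG\x1b[0;39m"))  # -> DEBUG
```

The same substitution can be applied in an ingest pipeline or Logstash `mutate`/`gsub` step before the event is indexed.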
Integrating HashiCorp Vault with an existing LDAP system such as Active Directory is a convenient way to manage user authentication and authorisation. Follow along below for an example of setting this up. Note, I am piping curl output to jq for better formatting. Check it out here. Updated: check the updates at the bottom of the post for a briefer setup. Enable the LDAP auth method via the API: curl --header "X-Vault-Token: s.
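The enable call the excerpt truncates looks roughly like the sketch below; the token and server address are placeholders, and `sys/auth/ldap` is the Vault API path for mounting the LDAP auth method:

```shell
# Enable the LDAP auth method at the default path (auth/ldap).
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request POST \
     --data '{"type": "ldap"}' \
     "$VAULT_ADDR/v1/sys/auth/ldap" | jq
```

The LDAP server details (URL, binddn, user/group search attributes) are then written to `auth/ldap/config` in a follow-up call.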
This article details the AWS CloudFormation building blocks to deploy a containerised application using the AWS Elastic Container Service (ECS). I use this method to deploy this very website, which was initially running in ECS using an on-demand instance deployed the old-fashioned way (with many mouse clicks and typing). With this CloudFormation template the entire stack can be created from a single command (aws cloudformation create-stack …) and completely blown away and stood up again with minimal effort.
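As a hedged sketch (not the article's full template), the core ECS building blocks in a CloudFormation template look something like the following; the container name and image are placeholders:

```yaml
Resources:
  Cluster:
    Type: AWS::ECS::Cluster
  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      ContainerDefinitions:
        - Name: web              # placeholder container name
          Image: nginx:alpine    # placeholder image
          Memory: 128
          PortMappings:
            - ContainerPort: 80
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DesiredCount: 1
      TaskDefinition: !Ref TaskDefinition
```

Stood up with something like `aws cloudformation create-stack --stack-name site --template-body file://stack.yml`.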
In this article, I’ve thrown together the steps I used to install Oh My ZSH! for the Windows Subsystem for Linux (WSL) with Powerlevel9k. I’m using Ubuntu in the WSL, so the steps apply to Ubuntu. Note, the screenshot was taken when using the agnoster theme. Setup for the Windows Subsystem for Linux: install the required packages (sudo apt-get install -y zsh fontconfig), then install Oh My ZSH!
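The opening steps can be sketched as below; the installer URL is the one published by the Oh My ZSH! project, which may have changed since the article was written:

```shell
# Prerequisites on Ubuntu under WSL
sudo apt-get install -y zsh fontconfig
# Oh My ZSH! installer script (upstream project URL)
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```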
I recently published a repository to GitHub called aws-cloudformation-logs-to-slack, available here, which is an AWS Lambda function written in Python using the AWS SAM framework that, as the name suggests, sends CloudFormation events to a Slack channel. I thought this would come in handy when you’re running a large CloudFormation stack, to quickly open the channel and see where the process is currently up to. A sample of the messages is shown below.
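CloudFormation delivers stack events to SNS as newline-separated key='value' pairs in the SNS Message body. The sketch below parses that format; the message layout sent to Slack is my assumption, not necessarily the repository's exact format:

```python
import re

# CloudFormation SNS notifications arrive as newline-separated
# key='value' pairs in the SNS Message body.
PAIR_RE = re.compile(r"(\w+)='([^']*)'")

def parse_cfn_message(message: str) -> dict:
    """Parse a CloudFormation SNS Message into a dict of fields."""
    return dict(PAIR_RE.findall(message))

def format_slack_text(fields: dict) -> str:
    """Build a one-line Slack message (illustrative layout only)."""
    return "{LogicalResourceId}: {ResourceStatus}".format(**fields)

sample = "LogicalResourceId='Cluster'\nResourceStatus='CREATE_COMPLETE'\n"
print(format_slack_text(parse_cfn_message(sample)))  # -> Cluster: CREATE_COMPLETE
```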
A brief cheat sheet of some common commands and examples for using buildah and podman to build and run OCI containers without the Docker daemon, for reference. Buildah: create an OCI working container using the existing image python:alpine as the base (container=$(buildah from python:alpine)); mount the working container file system (mountpoint=$(buildah mount $container)); create a directory in the image file system (mkdir $mountpoint/app); copy files into the container image file system
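The remaining steps in the cycle can be sketched as follows; the file name `app.py` and image name `my-python-app` are placeholders:

```shell
# Continue from the working container created above
buildah copy "$container" app.py /app/        # copy files into the image
buildah unmount "$container"                  # unmount the root file system
buildah commit "$container" my-python-app     # commit to a local OCI image
podman run --rm localhost/my-python-app python3 /app/app.py
```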
This article is going to take a look at Distributed Tracing for an application (this website) running in Kubernetes using Istio and Jaeger. The application is written in ASP.NET Core. For reference, I’m going to cover some of the Istio setup before getting into the distributed tracing. To quote the Istio Distributed Tracing overview here: “Distributed tracing enables users to track a request through mesh that is distributed across multiple services.”
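For Istio's tracing to stitch spans together, the application itself must forward the B3 trace headers from incoming to outgoing requests. The header list is the classic set from the Istio documentation; the helper itself is a language-agnostic sketch, not the site's ASP.NET Core code:

```python
# Trace headers Istio/Jaeger expect applications to propagate.
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

def extract_trace_headers(incoming: dict) -> dict:
    """Copy tracing headers from an incoming request so they can be
    attached to any outbound calls the service makes."""
    lowered = {k.lower(): v for k, v in incoming.items()}
    return {h: lowered[h] for h in TRACE_HEADERS if h in lowered}

print(extract_trace_headers({"X-B3-TraceId": "abc123", "Accept": "*/*"}))
# -> {'x-b3-traceid': 'abc123'}
```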
In this episode of Kubernetes in the Wild, we are delving into the world of service meshes and specifically Istio. Istio has been installed into a two-node Kubernetes cluster following the setup guide here, and a container has been deployed, but we are not able to access the container. The first port of call is the Istio Envoy sidecar container within the pod in question; its logs are checked using the following command
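The sidecar is deployed as a container named istio-proxy inside the pod, so its logs can be pulled with kubectl; the pod name is a placeholder:

```shell
# Logs from the Envoy sidecar in the pod
kubectl logs <pod-name> -c istio-proxy
# Sidecar sync state across the mesh, if istioctl is available
istioctl proxy-status
```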
In this episode of Kubernetes in the Wild, we observe an issue with one of our pods, which is failing to start in EKS. The pod deployment YAML looks like the below, containing a persistent volume claim.

```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
  labels:
    app: db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      hostname: db
      volumes:
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb-data
      containers:
        - name: db
          image: xxxxxxxxxxxx.
```
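When a pod with a persistent volume claim refuses to start, the usual first checks look like the following; the pod name suffix is a placeholder:

```shell
kubectl describe pod db-<suffix>   # pod events show volume attach/mount failures
kubectl get pvc mongodb-data       # check the claim has reached the Bound state
kubectl get storageclass           # confirm a StorageClass exists to satisfy it
```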
The Cisco Expressway can be backed up, with a backup encryption password, relatively easily using some simple Python code. The script requires updating the following variables before running: URL, PASSWORD, BACKUP_PASSWORD. The script will save the Expressway backup file to the directory where it is run. Also available as a GitHub Gist here.

```python
import requests
import json
import re

def main():
    URL = "https://10.
```
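In the same spirit as the script's use of `re`, a small self-contained helper like the one below can pull the backup file name out of a Content-Disposition response header; this helper and its fallback name are my illustration, and the exact header layout may differ between Expressway versions:

```python
import re

def filename_from_disposition(header: str) -> str:
    """Extract filename="..." from a Content-Disposition header,
    falling back to a fixed name (an assumption, not the script's)."""
    match = re.search(r'filename="?([^";]+)"?', header)
    return match.group(1) if match else "backup.tar.gz.enc"

print(filename_from_disposition('attachment; filename="backup_20200101.tar.gz.enc"'))
# -> backup_20200101.tar.gz.enc
```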