Practical Istio - Introduction & Setup
I bet you're here because someone told you that you need Istio running in your production Kubernetes cluster. Well, look no further! This guide is designed to make your life easier by walking through a practical Istio deployment.
Table of Contents
- Practical Istio - Introduction
- Practical Istio - Private Kubernetes Deployment
- Practical Istio - Init & Install
- Practical Istio - Ingress Gateway
- Practical Istio - Virtual Services
Introduction
Istio is fast becoming the cool new kid on the block. This is thanks to some of the new and challenging problems that teams face when moving to distributed application architectures. Traditionally it was common to put all application-dependent services on the same server, or at the very least host them on other networked computers. With Kubernetes becoming so popular, many businesses are choosing to containerise and re-architect their existing applications to run in a more distributed fashion.
There are a number of problems that come with jumping head first into Kubernetes that aren't obvious until after the fact. The biggest of these is Pod networking and how to obtain useful metrics between services within a cluster. Due to the way pod networking is set up, it's very difficult to extract inter-service metrics without some kind of traffic proxy. This is where Istio steps in.
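To give a taste of how Istio solves this, once it's installed its Envoy proxy can be injected automatically alongside every Pod in a namespace by applying a single label. A minimal sketch is below; the `default` namespace is just an example, and this assumes a cluster with Istio already installed (which we'll get to in a later post):

```shell
# Enable automatic Envoy sidecar injection for all new Pods in a namespace
# ("default" is used here purely as an example namespace)
kubectl label namespace default istio-injection=enabled

# Confirm the label was applied
kubectl get namespace default --show-labels
```

Every Pod created in that namespace from then on gets a transparent proxy container, which is what makes the inter-service metrics possible without changing application code.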
Purpose
The following guide has been written out of my own personal frustration at how poorly existing guides handle an end-to-end breakdown of getting Istio into a production state on a private cloud-hosted Kubernetes cluster. The focus is less on the intricate details of how Istio or Kubernetes work; if you're looking for a good guide on that, check out Rinor Maloku's outstanding Istio an introduction.
We'll be using Google Cloud Platform as our Kubernetes provider. Your choice however isn't going to hinder your ability to make use of this guide.
Dependencies
To get setup, follow the steps below.
Google Cloud CLI
I'm going to be installing the GCP SDK on a Debian-based system. However, there are instructions specific to all other operating systems available at https://cloud.google.com/sdk/install.
# Create an environment variable for the correct distribution
export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)"
# Add the Cloud SDK distribution URI as a package source
echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
# Import the Google Cloud public key
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# Update and install the Cloud SDK
sudo apt-get update && sudo apt-get install google-cloud-sdk
Next, initialise the CLI by running the following. Note you might be prompted for a project ID; select the ID that is linked to the devopstar project in your organisation.
$ gcloud init
# * Commands that require authentication will use contact@devopstar.com by default
# * Commands will reference project `XXXXXX-XXX-XXXXXXX` by default
# * Compute Engine commands will use region `australia-southeast1` by default
# * Compute Engine commands will use zone `australia-southeast1-a` by default
The alternative way to configure a project and authenticate is to run the following:
# Login to GCloud
gcloud auth login
# Set the project
gcloud config set project PROJECT_ID
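Whichever route you take, it's worth sanity-checking the active configuration afterwards. A quick way to do that:

```shell
# Show the active account, project and compute region/zone
gcloud config list
```

If the project or region looks wrong, re-run `gcloud init` or the `gcloud config set` commands above before continuing.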
AWS CLI
Later on in this guide we'll optionally make use of the AWS CLI to demonstrate Route53 DNS zone updates. Setting up an AWS account and the CLI tools is a good idea if you'd like to follow along.
Install and setup the CLI with your account credentials by following the guide outlined here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
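Once installed, the CLI can be pointed at your account interactively, and the credentials verified with a quick identity check. A minimal sketch:

```shell
# Configure credentials interactively
# (prompts for access key, secret key, default region and output format)
aws configure

# Verify the credentials work by asking AWS which identity they belong to
aws sts get-caller-identity
```

If `get-caller-identity` returns your account ID and ARN, the CLI is ready for the Route53 steps later in the series.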
Repository
Go ahead and pull down the code from the t04glovern/gke-istio-bootstrap repository. Don't worry too much about going over the code now; we'll slowly work through it in the coming posts.
git clone https://github.com/t04glovern/gke-istio-bootstrap.git
What's Next?
In the next section we'll be setting up the infrastructure needed to run a private Kubernetes cluster on GCP.