Kubernetes with Istio demo
Full asciinema demo can be found here: https://asciinema.org/a/226632

Requirements

kubectl, Helm, Siege and Terraform installed locally - or just Docker (the tools can then run inside a container, as described in "Prepare the working environment inside Docker" below).

Install Kubernetes

The following sections will show you how to install k8s to OpenStack or how to use Minikube.

Use Minikube to start the Kubernetes cluster

Start Minikube:

KUBERNETES_VERSION=$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt | tr -d v)
sudo minikube start --vm-driver=none --bootstrapper=kubeadm --kubernetes-version=v${KUBERNETES_VERSION}
Install kubernetes-client package (kubectl):
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt-get update -qq
apt-get install -y kubectl socat
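Optionally verify the client and the cluster - one Ready node is expected:

kubectl version --client
kubectl get nodes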

Install Kubernetes to OpenStack

Install k8s to OpenStack using Terraform.
You will need to have Docker installed.

Prepare the working environment inside Docker

You can skip this part if you have kubectl, Helm, Siege and Terraform installed.
Run the Ubuntu docker image and mount the current directory there:

mkdir /tmp/test && cd /tmp/test
docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v $PWD:/mnt ubuntu
Install necessary software into the Docker container:
apt update -qq
apt-get install -y -qq apt-transport-https curl firefox git gnupg jq openssh-client psmisc siege sudo unzip vim > /dev/null
Install kubernetes-client package (kubectl):

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update -qq
apt-get install -y -qq kubectl
Install Terraform:
TERRAFORM_LATEST_VERSION=$(curl -s https://checkpoint-api.hashicorp.com/v1/check/terraform | jq -r -M ".current_version")
curl --silent --location https://releases.hashicorp.com/terraform/${TERRAFORM_LATEST_VERSION}/terraform_${TERRAFORM_LATEST_VERSION}_linux_amd64.zip --output /tmp/terraform_linux_amd64.zip
unzip -o /tmp/terraform_linux_amd64.zip -d /usr/local/bin/
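Optionally confirm that the Terraform binary is on the PATH:

terraform version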
Change directory to /mnt where the git repository is mounted:
cd /mnt

Provision VMs in OpenStack

Start 3 VMs (one master and 2 workers) where k8s will be installed.
Generate ssh keys if they do not exist:

test -f $HOME/.ssh/id_rsa || ( install -m 0700 -d $HOME/.ssh && ssh-keygen -b 2048 -t rsa -f $HOME/.ssh/id_rsa -q -N "" )
# ssh-agent must be running...
test -n "$SSH_AUTH_SOCK" || eval `ssh-agent`
if [ "`ssh-add -l`" = "The agent has no identities." ]; then ssh-add; fi
Clone this git repository:
git clone https://github.com/ruzickap/k8s-istio-demo
cd k8s-istio-demo
Modify the Terraform variable file if needed:
OPENSTACK_PASSWORD=${OPENSTACK_PASSWORD:-default}

cat > terrafrom/openstack/terraform.tfvars << EOF
openstack_auth_url = "https://ic-us.ssl.mirantis.net:5000/v3"
openstack_instance_flavor_name = "compact.dbs"
openstack_instance_image_name = "bionic-server-cloudimg-amd64-20190119"
openstack_networking_subnet_dns_nameservers = ["172.19.80.70"]
openstack_password = "$OPENSTACK_PASSWORD"
openstack_tenant_name = "mirantis-services-team"
openstack_user_name = "pruzicka"
openstack_user_domain_name = "ldap_mirantis"
prefix = "pruzicka-k8s-istio-demo"
EOF
Download Terraform components:
terraform init -var-file=terrafrom/openstack/terraform.tfvars terrafrom/openstack
Create VMs in OpenStack:
terraform apply -auto-approve -var-file=terrafrom/openstack/terraform.tfvars terrafrom/openstack
Show Terraform output:
terraform output
Output:
vms_name = [
    pruzicka-k8s-istio-demo-node01.01.localdomain,
    pruzicka-k8s-istio-demo-node02.01.localdomain,
    pruzicka-k8s-istio-demo-node03.01.localdomain
]
vms_public_ip = [
    172.16.240.185,
    172.16.242.218,
    172.16.240.44
]
At the end of the output you should see 3 IP addresses that should be reachable over ssh using your public key (~/.ssh/id_rsa.pub).
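As a quick connectivity check - assuming the Bionic cloud image's default ubuntu user - you can run a command on every VM over ssh:

for ip in $(terraform output -json | jq -r '.vms_public_ip.value[]'); do
  ssh -o StrictHostKeyChecking=no ubuntu@${ip} uname -n
done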

Install k8s to the VMs

Install k8s using kubeadm to the provisioned VMs:
./install-k8s-kubeadm.sh
Check if all nodes are up:
export KUBECONFIG=$PWD/kubeconfig.conf
kubectl get nodes -o wide
Output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
pruzicka-k8s-istio-demo-node01 Ready master 2m v1.13.3 192.168.250.11 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
pruzicka-k8s-istio-demo-node02 Ready <none> 45s v1.13.3 192.168.250.12 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
pruzicka-k8s-istio-demo-node03 Ready <none> 50s v1.13.3 192.168.250.13 <none> Ubuntu 18.04.1 LTS 4.15.0-43-generic docker://18.6.1
View services, deployments, and pods:
kubectl get svc,deploy,po --all-namespaces -o wide
Output:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2m16s <none>
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 2m11s k8s-app=kube-dns

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
kube-system deployment.extensions/coredns 2/2 2 2 2m11s coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system pod/coredns-86c58d9df4-tlmvh 1/1 Running 0 116s 10.244.0.2 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/coredns-86c58d9df4-zk685 1/1 Running 0 116s 10.244.0.3 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/etcd-pruzicka-k8s-istio-demo-node01 1/1 Running 0 79s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/kube-apiserver-pruzicka-k8s-istio-demo-node01 1/1 Running 0 72s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/kube-controller-manager-pruzicka-k8s-istio-demo-node01 1/1 Running 0 65s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/kube-flannel-ds-amd64-cvpfq 1/1 Running 0 65s 192.168.250.13 pruzicka-k8s-istio-demo-node03 <none> <none>
kube-system pod/kube-flannel-ds-amd64-ggqmv 1/1 Running 0 60s 192.168.250.12 pruzicka-k8s-istio-demo-node02 <none> <none>
kube-system pod/kube-flannel-ds-amd64-ql6g6 1/1 Running 0 117s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/kube-proxy-79mx8 1/1 Running 0 117s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
kube-system pod/kube-proxy-f99q2 1/1 Running 0 65s 192.168.250.13 pruzicka-k8s-istio-demo-node03 <none> <none>
kube-system pod/kube-proxy-w4tbd 1/1 Running 0 60s 192.168.250.12 pruzicka-k8s-istio-demo-node02 <none> <none>
kube-system pod/kube-scheduler-pruzicka-k8s-istio-demo-node01 1/1 Running 0 78s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>

Install Helm

Install Helm binary locally:
curl -s https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
Install Tiller (the Helm server-side component) into the Kubernetes cluster:
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --wait --service-account tiller
helm repo update
Check if Tiller was installed properly:

kubectl get pods -l app=helm --all-namespaces
Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system tiller-deploy-dbb85cb99-z4c47 1/1 Running 0 28s
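Optionally verify that the Helm client can reach Tiller - both a Client and a Server version should be printed:

helm version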

Install Rook

Rook Architecture
Install Rook Operator (Ceph storage for k8s):
helm repo add rook-stable https://charts.rook.io/stable
helm install --wait --name rook-ceph --namespace rook-ceph-system rook-stable/rook-ceph
sleep 110
See what the rook-ceph-system namespace looks like:

kubectl get svc,deploy,po --namespace=rook-ceph-system -o wide
Output:
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/rook-ceph-operator 1/1 1 1 3m36s rook-ceph-operator rook/ceph:v0.9.2 app=rook-ceph-operator

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/rook-ceph-agent-2bxhq 1/1 Running 0 2m14s 192.168.250.12 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-ceph-agent-8h4p4 1/1 Running 0 2m14s 192.168.250.11 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/rook-ceph-agent-mq69r 1/1 Running 0 2m14s 192.168.250.13 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-ceph-operator-7478c899b5-px2hc 1/1 Running 0 3m37s 10.244.2.3 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-discover-8ffj8 1/1 Running 0 2m14s 10.244.2.4 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-discover-l56jj 1/1 Running 0 2m14s 10.244.1.2 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-discover-q9xwp 1/1 Running 0 2m14s 10.244.0.4 pruzicka-k8s-istio-demo-node01 <none> <none>
Create your Rook cluster:
kubectl create -f https://raw.githubusercontent.com/rook/rook/v0.9.3/cluster/examples/kubernetes/ceph/cluster.yaml
sleep 100
Get the Toolbox with ceph commands:
kubectl create -f https://raw.githubusercontent.com/rook/rook/v0.9.3/cluster/examples/kubernetes/ceph/toolbox.yaml
sleep 300
Check what was created in the rook-ceph namespace:

kubectl get svc,deploy,po --namespace=rook-ceph -o wide
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/rook-ceph-mgr ClusterIP 10.103.36.128 <none> 9283/TCP 8m45s app=rook-ceph-mgr,rook_cluster=rook-ceph
service/rook-ceph-mgr-dashboard ClusterIP 10.99.173.58 <none> 8443/TCP 8m45s app=rook-ceph-mgr,rook_cluster=rook-ceph
service/rook-ceph-mon-a ClusterIP 10.102.39.160 <none> 6790/TCP 12m app=rook-ceph-mon,ceph_daemon_id=a,mon=a,mon_cluster=rook-ceph,rook_cluster=rook-ceph
service/rook-ceph-mon-b ClusterIP 10.102.49.137 <none> 6790/TCP 11m app=rook-ceph-mon,ceph_daemon_id=b,mon=b,mon_cluster=rook-ceph,rook_cluster=rook-ceph
service/rook-ceph-mon-c ClusterIP 10.96.25.143 <none> 6790/TCP 10m app=rook-ceph-mon,ceph_daemon_id=c,mon=c,mon_cluster=rook-ceph,rook_cluster=rook-ceph

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/rook-ceph-mgr-a 1/1 1 1 9m33s mgr ceph/ceph:v13 app=rook-ceph-mgr,ceph_daemon_id=a,instance=a,mgr=a,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-mon-a 1/1 1 1 12m mon ceph/ceph:v13 app=rook-ceph-mon,ceph_daemon_id=a,mon=a,mon_cluster=rook-ceph,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-mon-b 1/1 1 1 11m mon ceph/ceph:v13 app=rook-ceph-mon,ceph_daemon_id=b,mon=b,mon_cluster=rook-ceph,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-mon-c 1/1 1 1 10m mon ceph/ceph:v13 app=rook-ceph-mon,ceph_daemon_id=c,mon=c,mon_cluster=rook-ceph,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-osd-0 1/1 1 1 8m34s osd ceph/ceph:v13 app=rook-ceph-osd,ceph-osd-id=0,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-osd-1 1/1 1 1 8m33s osd ceph/ceph:v13 app=rook-ceph-osd,ceph-osd-id=1,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-osd-2 1/1 1 1 8m33s osd ceph/ceph:v13 app=rook-ceph-osd,ceph-osd-id=2,rook_cluster=rook-ceph
deployment.extensions/rook-ceph-tools 1/1 1 1 12m rook-ceph-tools rook/ceph:master app=rook-ceph-tools

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/rook-ceph-mgr-a-669f5b47fc-sjvrr 1/1 Running 0 9m33s 10.244.1.6 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-ceph-mon-a-784f8fb5b6-zcvjr 1/1 Running 0 12m 10.244.0.5 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/rook-ceph-mon-b-6dfbf486f4-2ktpm 1/1 Running 0 11m 10.244.2.5 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-ceph-mon-c-6c85f6f44-j5wwv 1/1 Running 0 10m 10.244.1.5 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-ceph-osd-0-6dd9cdc946-7th52 1/1 Running 0 8m34s 10.244.1.8 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-ceph-osd-1-64cdd77897-9vdrh 1/1 Running 0 8m33s 10.244.2.7 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-ceph-osd-2-67fcc446bd-skq52 1/1 Running 0 8m33s 10.244.0.7 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/rook-ceph-osd-prepare-pruzicka-k8s-istio-demo-node01-z29hj 0/2 Completed 0 8m39s 10.244.0.6 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/rook-ceph-osd-prepare-pruzicka-k8s-istio-demo-node02-q8xqx 0/2 Completed 0 8m39s 10.244.2.6 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/rook-ceph-osd-prepare-pruzicka-k8s-istio-demo-node03-vbwxv 0/2 Completed 0 8m39s 10.244.1.7 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/rook-ceph-tools-76c7d559b6-s6s4l 1/1 Running 0 12m 192.168.250.12 pruzicka-k8s-istio-demo-node02 <none> <none>
Create a storage class based on the Ceph RBD volume plugin:
kubectl create -f https://raw.githubusercontent.com/rook/rook/v0.9.3/cluster/examples/kubernetes/ceph/storageclass.yaml
sleep 10
Set rook-ceph-block as default Storage Class:
kubectl patch storageclass rook-ceph-block -p "{\"metadata\": {\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"}}}"
Check the Storage Classes:
kubectl describe storageclass
Output:
Name: rook-ceph-block
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: ceph.rook.io/block
Parameters: blockPool=replicapool,clusterNamespace=rook-ceph,fstype=xfs
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
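To confirm that dynamic provisioning works end to end, you can optionally create a throwaway PVC (the name test-pvc below is just an example), check that it becomes Bound, and remove it again:

cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
kubectl get pvc test-pvc
kubectl delete pvc test-pvc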
See the CephBlockPool:
kubectl describe cephblockpool --namespace=rook-ceph
Output:
Name: replicapool
Namespace: rook-ceph
Labels: <none>
Annotations: <none>
API Version: ceph.rook.io/v1
Kind: CephBlockPool
Metadata:
  Creation Timestamp: 2019-02-04T09:51:55Z
  Generation: 1
  Resource Version: 3171
  Self Link: /apis/ceph.rook.io/v1/namespaces/rook-ceph/cephblockpools/replicapool
  UID: 8163367d-2862-11e9-a470-fa163e90237a
Spec:
  Replicated:
    Size: 1
Events: <none>
Check the status of your Ceph installation:
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph status
Output:
  cluster:
    id:     1f4458a6-f574-4e6c-8a25-5a5eef6eb0a7
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum c,a,b
    mgr: a(active)
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   1 pools, 100 pgs
    objects: 0 objects, 0 B
    usage:   13 GiB used, 44 GiB / 58 GiB avail
    pgs:     100 active+clean
Check the OSD status:

kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd status
Output:
+----+--------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| id |              host              |  used | avail | wr ops | wr data | rd ops | rd data |   state   |
+----+--------------------------------+-------+-------+--------+---------+--------+---------+-----------+
| 0  | pruzicka-k8s-istio-demo-node03 | 4302M | 15.0G |    0   |     0   |    0   |     0   | exists,up |
| 1  | pruzicka-k8s-istio-demo-node02 | 4455M | 14.8G |    0   |     0   |    0   |     0   | exists,up |
| 2  | pruzicka-k8s-istio-demo-node01 | 4948M | 14.3G |    0   |     0   |    0   |     0   | exists,up |
+----+--------------------------------+-------+-------+--------+---------+--------+---------+-----------+
Check the cluster usage status:
kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph df
Output:
GLOBAL:
    SIZE    AVAIL   RAW USED   %RAW USED
    58 GiB  44 GiB  13 GiB     23.22
POOLS:
    NAME        ID  USED  %USED  MAX AVAIL  OBJECTS
    replicapool 1   0 B   0      40 GiB     0
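The pool's replication factor (the Size: 1 shown earlier in the CephBlockPool spec) can also be read back from Ceph itself via the toolbox:

kubectl -n rook-ceph exec $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath="{.items[0].metadata.name}") -- ceph osd pool get replicapool size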

Install ElasticSearch, Kibana, FluentBit

Add the ElasticSearch operator Helm repository:

helm repo add es-operator https://raw.githubusercontent.com/upmc-enterprises/elasticsearch-operator/master/charts/
Install ElasticSearch operator:
helm install --wait --name elasticsearch-operator es-operator/elasticsearch-operator --set rbac.enabled=True --namespace es-operator
sleep 50
Check what the operator looks like:

kubectl get svc,deploy,po --namespace=es-operator -o wide
Output:
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/elasticsearch-operator 1/1 1 1 106s elasticsearch-operator upmcenterprises/elasticsearch-operator:0.0.12 name=elasticsearch-operator,release=elasticsearch-operator

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/elasticsearch-operator-5dc59b8cc5-6946l 1/1 Running 0 106s 10.244.1.9 pruzicka-k8s-istio-demo-node03 <none> <none>
Install ElasticSearch cluster:
helm install --wait --name=elasticsearch --namespace logging es-operator/elasticsearch \
  --set kibana.enabled=true \
  --set cerebro.enabled=true \
  --set storage.class=rook-ceph-block \
  --set clientReplicas=1,masterReplicas=1,dataReplicas=1
sleep 350
Show ElasticSearch components:
kubectl get svc,deploy,po,pvc,elasticsearchclusters --namespace=logging -o wide
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/cerebro-elasticsearch-cluster ClusterIP 10.105.197.151 <none> 80/TCP 18m role=cerebro
service/elasticsearch-discovery-elasticsearch-cluster ClusterIP 10.111.76.241 <none> 9300/TCP 18m component=elasticsearch-elasticsearch-cluster,role=master
service/elasticsearch-elasticsearch-cluster ClusterIP 10.104.103.49 <none> 9200/TCP 18m component=elasticsearch-elasticsearch-cluster,role=client
service/es-data-svc-elasticsearch-cluster ClusterIP 10.98.179.244 <none> 9300/TCP 18m component=elasticsearch-elasticsearch-cluster,role=data
service/kibana-elasticsearch-cluster ClusterIP 10.110.19.242 <none> 80/TCP 18m role=kibana

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/cerebro-elasticsearch-cluster 1/1 1 1 18m cerebro-elasticsearch-cluster upmcenterprises/cerebro:0.6.8 component=elasticsearch-elasticsearch-cluster,name=cerebro-elasticsearch-cluster,role=cerebro
deployment.extensions/es-client-elasticsearch-cluster 1/1 1 1 18m es-client-elasticsearch-cluster upmcenterprises/docker-elasticsearch-kubernetes:6.1.3_0 cluster=elasticsearch-cluster,component=elasticsearch-elasticsearch-cluster,name=es-client-elasticsearch-cluster,role=client
deployment.extensions/kibana-elasticsearch-cluster 1/1 1 1 18m kibana-elasticsearch-cluster docker.elastic.co/kibana/kibana-oss:6.1.3 component=elasticsearch-elasticsearch-cluster,name=kibana-elasticsearch-cluster,role=kibana

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/cerebro-elasticsearch-cluster-64888cf977-dgb8g 1/1 Running 0 18m 10.244.0.9 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/es-client-elasticsearch-cluster-8d9df64b7-tvl8z 1/1 Running 0 18m 10.244.1.11 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/es-data-elasticsearch-cluster-rook-ceph-block-0 1/1 Running 0 18m 10.244.2.11 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/es-master-elasticsearch-cluster-rook-ceph-block-0 1/1 Running 0 18m 10.244.2.10 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/kibana-elasticsearch-cluster-7fb7f88f55-6sl6j 1/1 Running 0 18m 10.244.2.9 pruzicka-k8s-istio-demo-node02 <none> <none>

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/es-data-es-data-elasticsearch-cluster-rook-ceph-block-0 Bound pvc-870ad81a-2863-11e9-a470-fa163e90237a 1Gi RWO rook-ceph-block 18m
persistentvolumeclaim/es-data-es-master-elasticsearch-cluster-rook-ceph-block-0 Bound pvc-86fcb9ce-2863-11e9-a470-fa163e90237a 1Gi RWO rook-ceph-block 18m

NAME AGE
elasticsearchcluster.enterprises.upmc.com/elasticsearch-cluster 18m
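Before wiring FluentBit to it, you can optionally query ElasticSearch health directly; this assumes the operator's default self-signed TLS on the client service port 9200 (hence curl -k):

kubectl -n logging port-forward svc/elasticsearch-elasticsearch-cluster 9200:9200 &
curl -sk https://localhost:9200/_cluster/health | jq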
Install FluentBit:
# https://github.com/fluent/fluent-bit/issues/628
helm install --wait stable/fluent-bit --name=fluent-bit --namespace=logging \
  --set metrics.enabled=true \
  --set backend.type=es \
  --set backend.es.time_key='@ts' \
  --set backend.es.host=elasticsearch-elasticsearch-cluster \
  --set backend.es.tls=on \
  --set backend.es.tls_verify=off
Configure port forwarding for Kibana:
# Kibana UI - https://localhost:5601
kubectl -n logging port-forward $(kubectl -n logging get pod -l role=kibana -o jsonpath="{.items[0].metadata.name}") 5601:5601 &
Configure ElasticSearch:
    Navigate to the Kibana UI and click "Set up index patterns" in the top right.
    Use * as the index pattern and click "Next step".
    Select @timestamp as the Time Filter field name and click "Create index pattern".
Check FluentBit installation:
kubectl get -l app=fluent-bit svc,pods --all-namespaces -o wide
Output:
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
logging service/fluent-bit-fluent-bit-metrics ClusterIP 10.97.33.162 <none> 2020/TCP 80s app=fluent-bit,release=fluent-bit

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
logging pod/fluent-bit-fluent-bit-426ph 1/1 Running 0 80s 10.244.0.10 pruzicka-k8s-istio-demo-node01 <none> <none>
logging pod/fluent-bit-fluent-bit-c6tbx 1/1 Running 0 80s 10.244.1.12 pruzicka-k8s-istio-demo-node03 <none> <none>
logging pod/fluent-bit-fluent-bit-zfkqr 1/1 Running 0 80s 10.244.2.12 pruzicka-k8s-istio-demo-node02 <none> <none>

Istio architecture and features

Istio is an open, platform-independent service mesh that provides traffic management, policy enforcement, and telemetry collection (layer 7 firewall + load balancer, ingress, blocking of outgoing traffic, tracing, monitoring, logging).
    Istio architecture
      Envoy - a high-performance proxy that mediates all inbound and outbound traffic for all services in the service mesh.
      Pilot - provides service discovery for the Envoy sidecars and traffic management capabilities for intelligent routing.
      Mixer - enforces access control and usage policies across the service mesh, and collects telemetry data from the Envoy proxy and other services.
      Citadel - provides strong service-to-service and end-user authentication with built-in identity and credential management.
    Blue-green deployment and content-based traffic steering
    Istio Security Architecture
    Mesh Expansion - non-Kubernetes services (running on VMs and/or physical machines) can be added to an Istio mesh on a Kubernetes cluster. (Istio mesh expansion on IBM Cloud Private)
    Istio Multicluster - multiple k8s clusters managed by a single Istio instance

Istio types

    VirtualService defines the rules that control how requests for a service are routed within an Istio service mesh (a short illustrative sketch follows this list).
    DestinationRule configures the set of policies to be applied to a request after VirtualService routing has occurred.
    ServiceEntry is commonly used to enable requests to services outside of an Istio service mesh.
    Gateway configures a load balancer for HTTP/TCP traffic, most commonly operating at the edge of the mesh to enable ingress traffic for an application.
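As an illustration only (not applied in this demo), a hypothetical VirtualService that splits reviews traffic 90/10 between two subsets defined in a matching DestinationRule could look roughly like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  # hypothetical example object, not part of the demo
  name: reviews-split
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10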

Install Istio

Make sure kubectl points at the right cluster:

[ -f $PWD/kubeconfig.conf ] && export KUBECONFIG=${KUBECONFIG:-$PWD/kubeconfig.conf}
kubectl get nodes -o wide
Either download Istio directly from https://github.com/istio/istio/releases or get the latest version by using curl:
test -d files || mkdir files
cd files
curl -sL https://git.io/getLatestIstio | sh -
Change the directory to the Istio installation files location:
cd istio*
Install istioctl:
sudo mv bin/istioctl /usr/local/bin/
Install Istio using Helm:
helm install --wait --name istio --namespace istio-system install/kubernetes/helm/istio \
  --set gateways.istio-ingressgateway.type=NodePort \
  --set gateways.istio-egressgateway.type=NodePort \
  --set grafana.enabled=true \
  --set kiali.enabled=true \
  --set kiali.dashboard.grafanaURL=http://localhost:3000 \
  --set kiali.dashboard.jaegerURL=http://localhost:16686 \
  --set servicegraph.enabled=true \
  --set telemetry-gateway.grafanaEnabled=true \
  --set telemetry-gateway.prometheusEnabled=true \
  --set tracing.enabled=true
See the Istio components:
kubectl get --namespace=istio-system svc,deployment,pods -o wide
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/grafana ClusterIP 10.101.117.126 <none> 3000/TCP 15m app=grafana
service/istio-citadel ClusterIP 10.99.235.151 <none> 8060/TCP,9093/TCP 15m istio=citadel
service/istio-egressgateway NodePort 10.105.213.174 <none> 80:31610/TCP,443:31811/TCP 15m app=istio-egressgateway,istio=egressgateway
service/istio-galley ClusterIP 10.110.154.0 <none> 443/TCP,9093/TCP 15m istio=galley
service/istio-ingressgateway NodePort 10.101.212.170 <none> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31814/TCP,8060:31435/TCP,853:31471/TCP,15030:30210/TCP,15031:30498/TCP 15m app=istio-ingressgateway,istio=ingressgateway
service/istio-pilot ClusterIP 10.96.34.157 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 15m istio=pilot
service/istio-policy ClusterIP 10.98.185.215 <none> 9091/TCP,15004/TCP,9093/TCP 15m istio-mixer-type=policy,istio=mixer
service/istio-sidecar-injector ClusterIP 10.97.47.179 <none> 443/TCP 15m istio=sidecar-injector
service/istio-telemetry ClusterIP 10.103.23.55 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 15m istio-mixer-type=telemetry,istio=mixer
service/jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 15m app=jaeger
service/jaeger-collector ClusterIP 10.110.10.174 <none> 14267/TCP,14268/TCP 15m app=jaeger
service/jaeger-query ClusterIP 10.98.172.235 <none> 16686/TCP 15m app=jaeger
service/kiali ClusterIP 10.111.114.225 <none> 20001/TCP 15m app=kiali
service/prometheus ClusterIP 10.111.132.151 <none> 9090/TCP 15m app=prometheus
service/servicegraph ClusterIP 10.109.59.250 <none> 8088/TCP 15m app=servicegraph
service/tracing ClusterIP 10.96.59.251 <none> 80/TCP 15m app=jaeger
service/zipkin ClusterIP 10.107.168.128 <none> 9411/TCP 15m app=jaeger

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/grafana 1/1 1 1 15m grafana grafana/grafana:5.2.3 app=grafana
deployment.extensions/istio-citadel 1/1 1 1 15m citadel docker.io/istio/citadel:1.0.5 istio=citadel
deployment.extensions/istio-egressgateway 1/1 1 1 15m istio-proxy docker.io/istio/proxyv2:1.0.5 app=istio-egressgateway,istio=egressgateway
deployment.extensions/istio-galley 1/1 1 1 15m validator docker.io/istio/galley:1.0.5 istio=galley
deployment.extensions/istio-ingressgateway 1/1 1 1 15m istio-proxy docker.io/istio/proxyv2:1.0.5 app=istio-ingressgateway,istio=ingressgateway
deployment.extensions/istio-pilot 1/1 1 1 15m discovery,istio-proxy docker.io/istio/pilot:1.0.5,docker.io/istio/proxyv2:1.0.5 app=pilot,istio=pilot
deployment.extensions/istio-policy 1/1 1 1 15m mixer,istio-proxy docker.io/istio/mixer:1.0.5,docker.io/istio/proxyv2:1.0.5 app=policy,istio=mixer,istio-mixer-type=policy
deployment.extensions/istio-sidecar-injector 1/1 1 1 15m sidecar-injector-webhook docker.io/istio/sidecar_injector:1.0.5 istio=sidecar-injector
deployment.extensions/istio-telemetry 1/1 1 1 15m mixer,istio-proxy docker.io/istio/mixer:1.0.5,docker.io/istio/proxyv2:1.0.5 app=telemetry,istio=mixer,istio-mixer-type=telemetry
deployment.extensions/istio-tracing 1/1 1 1 15m jaeger docker.io/jaegertracing/all-in-one:1.5 app=jaeger
deployment.extensions/kiali 1/1 1 1 15m kiali docker.io/kiali/kiali:v0.10 app=kiali
deployment.extensions/prometheus 1/1 1 1 15m prometheus docker.io/prom/prometheus:v2.3.1 app=prometheus
deployment.extensions/servicegraph 1/1 1 1 15m servicegraph docker.io/istio/servicegraph:1.0.5 app=servicegraph

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/grafana-59b8896965-pmwd2 1/1 Running 0 15m 10.244.1.16 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-citadel-856f994c58-8r8nr 1/1 Running 0 15m 10.244.1.17 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-egressgateway-5649fcf57-sv8wf 1/1 Running 0 15m 10.244.1.14 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-galley-7665f65c9c-8sjmm 1/1 Running 0 15m 10.244.1.18 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-grafana-post-install-kw74d 0/1 Completed 0 10m 10.244.1.19 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-ingressgateway-6755b9bbf6-f7pnx 1/1 Running 0 15m 10.244.1.13 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/istio-pilot-56855d999b-6zq86 2/2 Running 0 15m 10.244.0.11 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/istio-policy-6fcb6d655f-4zndw 2/2 Running 0 15m 10.244.2.13 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/istio-sidecar-injector-768c79f7bf-74wbc 1/1 Running 0 15m 10.244.2.18 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/istio-telemetry-664d896cf5-smz7w 2/2 Running 0 15m 10.244.2.14 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/istio-tracing-6b994895fd-vb58q 1/1 Running 0 15m 10.244.2.17 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/kiali-67c69889b5-sw92h 1/1 Running 0 15m 10.244.1.15 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/prometheus-76b7745b64-kwzj5 1/1 Running 0 15m 10.244.2.15 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/servicegraph-5c4485945b-j9bp2 1/1 Running 0 15m 10.244.2.16 pruzicka-k8s-istio-demo-node02 <none> <none>
Configure Istio with a new log type and send those logs to FluentD:

kubectl apply -f ../../yaml/fluentd-istio.yaml

Istio example

Check how Istio can be used and how it works...

Check + Enable Istio in default namespace

Enable Istio sidecar injection in the default namespace:

kubectl label namespace default istio-injection=enabled
Check namespaces:
kubectl get namespace -L istio-injection
Output:
NAME STATUS AGE ISTIO-INJECTION
default Active 70m enabled
es-operator Active 41m
istio-system Active 16m
kube-public Active 70m
kube-system Active 70m
logging Active 38m
rook-ceph Active 59m
rook-ceph-system Active 63m
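Pods created in the default namespace will now get the Envoy sidecar injected automatically. For namespaces without the label, the sidecar could instead be added manually per manifest with istioctl (app.yaml is just a placeholder name here):

istioctl kube-inject -f app.yaml | kubectl apply -f -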
Configure port forwarding for Istio services:
# Jaeger - http://localhost:16686
kubectl port-forward -n istio-system $(kubectl get pod -n istio-system -l app=jaeger -o jsonpath="{.items[0].metadata.name}") 16686:16686 &

# Prometheus - http://localhost:9090/graph
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath="{.items[0].metadata.name}") 9090:9090 &

# Grafana - http://localhost:3000/dashboard/db/istio-mesh-dashboard
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath="{.items[0].metadata.name}") 3000:3000 &

# Kiali - http://localhost:20001
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=kiali -o jsonpath="{.items[0].metadata.name}") 20001:20001 &

# Servicegraph - http://localhost:8088/force/forcegraph.html
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath="{.items[0].metadata.name}") 8088:8088 &
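The port-forwards keep running in the background for the rest of the demo; once you are done with them, they can all be stopped at once (killall is provided by the psmisc package installed earlier):

killall kubectl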

Deploy application into the default namespace where Istio is enabled

The Bookinfo application is broken into four separate microservices:
    productpage - the productpage microservice calls the details and reviews microservices to populate the page.
    details - the details microservice contains book information.
    reviews - the reviews microservice contains book reviews. It also calls the ratings microservice.
    ratings - the ratings microservice contains book ranking information that accompanies a book review.
There are 3 versions of the reviews microservice:
    Version v1 - doesn’t call the ratings service.
    Version v2 - calls the ratings service, and displays each rating as 1 to 5 black stars.
    Version v3 - calls the ratings service, and displays each rating as 1 to 5 red stars.
Bookinfo application architecture
Application Architecture without Istio
Application Architecture with Istio
Deploy the Bookinfo demo application:

kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
sleep 400
Confirm all services and pods are correctly defined and running:
kubectl get svc,deployment,pods -o wide
Output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/details ClusterIP 10.103.142.153 <none> 9080/TCP 4m21s app=details
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 75m <none>
service/productpage ClusterIP 10.111.62.53 <none> 9080/TCP 4m17s app=productpage
service/ratings ClusterIP 10.110.22.215 <none> 9080/TCP 4m20s app=ratings
service/reviews ClusterIP 10.100.73.81 <none> 9080/TCP 4m19s app=reviews

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/details-v1 1/1 1 1 4m21s details istio/examples-bookinfo-details-v1:1.8.0 app=details,version=v1
deployment.extensions/productpage-v1 1/1 1 1 4m16s productpage istio/examples-bookinfo-productpage-v1:1.8.0 app=productpage,version=v1
deployment.extensions/ratings-v1 1/1 1 1 4m20s ratings istio/examples-bookinfo-ratings-v1:1.8.0 app=ratings,version=v1
deployment.extensions/reviews-v1 1/1 1 1 4m19s reviews istio/examples-bookinfo-reviews-v1:1.8.0 app=reviews,version=v1
deployment.extensions/reviews-v2 1/1 1 1 4m18s reviews istio/examples-bookinfo-reviews-v2:1.8.0 app=reviews,version=v2
deployment.extensions/reviews-v3 1/1 1 1 4m18s reviews istio/examples-bookinfo-reviews-v3:1.8.0 app=reviews,version=v3

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/details-v1-68c7c8666d-pvrx6 2/2 Running 0 4m21s 10.244.1.20 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/elasticsearch-operator-sysctl-297j8 1/1 Running 0 45m 10.244.2.8 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/elasticsearch-operator-sysctl-bg8rn 1/1 Running 0 45m 10.244.1.10 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/elasticsearch-operator-sysctl-vwvbl 1/1 Running 0 45m 10.244.0.8 pruzicka-k8s-istio-demo-node01 <none> <none>
pod/productpage-v1-54d799c966-2b4ss 2/2 Running 0 4m16s 10.244.1.23 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/ratings-v1-8558d4458d-ln99n 2/2 Running 0 4m20s 10.244.1.21 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/reviews-v1-cb8655c75-hpqfg 2/2 Running 0 4m19s 10.244.1.22 pruzicka-k8s-istio-demo-node03 <none> <none>
pod/reviews-v2-7fc9bb6dcf-snshx 2/2 Running 0 4m18s 10.244.2.19 pruzicka-k8s-istio-demo-node02 <none> <none>
pod/reviews-v3-c995979bc-wcql9 2/2 Running 0 4m18s 10.244.0.12 pruzicka-k8s-istio-demo-node01 <none> <none>
Check the container details - you should also see the istio-proxy container next to productpage:

kubectl describe pod -l app=productpage
kubectl logs $(kubectl get pod -l app=productpage -o jsonpath="{.items[0].metadata.name}") istio-proxy --tail=5
Define the Istio gateway for the application:
cat samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
sleep 5
Confirm the gateway has been created:
kubectl get gateway,virtualservice
Output:
NAME AGE
gateway.networking.istio.io/bookinfo-gateway 11s

NAME AGE
virtualservice.networking.istio.io/bookinfo 12s
Determine the ingress IP and ports when using a node port:

export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath="{.spec.ports[?(@.name==\"http2\")].nodePort}")
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath="{.spec.ports[?(@.name==\"https\")].nodePort}")
export INGRESS_HOST=$(kubectl get po -l istio=ingressgateway -n istio-system -o "jsonpath={.items[0].status.hostIP}")
if test -f ../../terraform.tfstate && grep -q vms_public_ip ../../terraform.tfstate; then
  export INGRESS_HOST=$(terraform output -json -state=../../terraform.tfstate | jq -r ".vms_public_ip.value[0]")
fi
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo "$INGRESS_PORT | $SECURE_INGRESS_PORT | $INGRESS_HOST | $GATEWAY_URL | http://$GATEWAY_URL/productpage"
Output:
31380 | 31390 | 172.16.242.170 | 172.16.242.170:31380 | http://172.16.242.170:31380/productpage
Confirm the app is running:
curl -o /dev/null -s -w "%{http_code}\n" -A "Mozilla/5.0 (iPhone; U; CPU iPhone OS 4_3_3 like Mac OS X; en-us) AppleWebKit/533.17.9 (KHTML, like Gecko) Version/5.0.2 Mobile/8J2 Safari/6533.18.5" http://${GATEWAY_URL}/productpage
Output:
200
Create default destination rules (subsets) for the Bookinfo services:
kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
Display the destination rules:
kubectl get destinationrules -o yaml
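Each destination rule defines named subsets keyed on the pods' version label. An abbreviated sketch of the reviews rule, based on the upstream Bookinfo sample:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3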
Generate some traffic for the next 5 minutes to gather some data:

siege --log=/tmp/siege --concurrent=1 -q --internet --time=5M $GATEWAY_URL/productpage &
Open these pages in the browser:
Bookinfo v1, v3, v2
    Check the flows in the Kiali graph
Istio Graph

Configuring Request Routing

Apply the virtual services which will route all traffic to v1 of each microservice:
kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
Display the defined routes:
kubectl get virtualservices -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: details
  ...
spec:
  hosts:
  - details
  http:
  - route:
    - destination:
        host: details
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productpage
  ...
spec:
  gateways:
  - bookinfo-gateway
  - mesh
  hosts:
  - productpage
  http:
  - route:
    - destination:
        host: productpage
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings
  ...
spec:
  hosts:
  - ratings
  http:
  - route:
    - destination:
        host: ratings
        subset: v1
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  ...
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
    Open the Bookinfo site in your browser http://$GATEWAY_URL/productpage and notice that the reviews part of the page displays with no rating stars, no matter how many times you refresh.
Bookinfo v1

Route based on user identity

All traffic from a user named jason will be routed to the service reviews:v2 by forwarding HTTP requests with a custom end-user header to the appropriate reviews service.
Enable user-based routing:
kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
Confirm the rule is created:
kubectl get virtualservice reviews -o yaml
Output:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
  ...
spec:
  hosts:
  - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews