Build your own Heroku with Kubernetes Part 3: Install Istio and Knative

Welcome to the Build Your Own Heroku with Kubernetes series! In the previous post you used Terraform to create your GKE cluster. In today's post you will learn how to install Istio, a service mesh with an ingress gateway, and Knative, a serverless runtime. You can find the files for this tutorial on GitHub.

Installing Istio Ingress Gateway

To get started you will need a cluster; you can follow the previous post to create one on GKE. You need Istio to expose your services to the internet. Istio is the most stable ingress gateway compatible with Knative and therefore a good choice. Istio installation is simple enough, and the following bash script makes it even easier.

# run-install-istio

# Shell configs
set -euo pipefail

# Istio release to install (the 1.6.x line is compatible with Knative v0.16)
ISTIO_VERSION="1.6.7"
ISTIO_DIRECTORY="istio-${ISTIO_VERSION}"

# Make the project root our working directory
# (assumes this script lives one level below it)
cd "$(dirname "$0")/.."

if [ ! -d "$ISTIO_DIRECTORY" ]; then
    ### Take action if DIR DOES NOT exist ###
    echo "Installing Istio..."
    curl -L https://istio.io/downloadIstio | ISTIO_VERSION=$ISTIO_VERSION sh -
fi

# Make istioctl available to the shell
export PATH="$PWD/$ISTIO_DIRECTORY/bin:$PATH"

# Apply operator from our manifests directory
istioctl manifest apply -f manifests/istio-minimal-operator.yaml

From the project root, the script downloads the Istio artifacts if they are not already present in the current directory. It then adds the Istio binary directory to your PATH so you have access to the istioctl command. Finally, it applies the Istio resources.
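If you want to confirm the install worked, a quick check like the following should do it (assuming istioctl is on your PATH and kubectl points at your cluster):

```shell
# The Istio control plane and both gateways should reach Running
kubectl get pods -n istio-system

# Confirm the client and control-plane versions agree
istioctl version
```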

# istio-minimal-operator.yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  values:
    global:
      proxy:
        autoInject: enabled
      useMCP: false
      # The third-party-jwt is not enabled on all k8s.
      # See:
      jwtPolicy: first-party-jwt

  addonComponents:
    pilot:
      enabled: true

  components:
    ingressGateways:
      - name: istio-ingressgateway
        enabled: true

      - name: cluster-local-gateway
        enabled: true
        label:
          istio: cluster-local-gateway
          app: cluster-local-gateway
        k8s:
          service:
            type: ClusterIP
            ports:
              - port: 15020
                name: status-port
              - port: 80
                name: http2
              - port: 443
                name: https

You can see istio-minimal-operator.yaml above. Here you enable the istio-ingressgateway to expose the cluster to internet traffic. The ingress gateway will provision a Service of type LoadBalancer. For now this is OK. Later you'll learn how to receive internet traffic via a NodePort to keep expenses low.
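Once the operator has reconciled, you can inspect the gateway's Service to see the LoadBalancer address GKE provisioned (the EXTERNAL-IP column may show pending for a minute or two):

```shell
# Show the ingress gateway Service and its external IP
kubectl get svc istio-ingressgateway -n istio-system

# Or pull just the IP with jsonpath
kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```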

Installing Knative Serving Components

Now it's time to install Knative as your application development framework. Knative is a set of middleware components designed to simplify difficult container workflows. With Knative, deploying a container, routing traffic based on weights, and autoscaling on demand are all easy.
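As a taste of the weighted routing mentioned above, here is a sketch of the traffic block a Knative Service accepts (the revision names below are hypothetical; Knative generates them for you on each deploy):

```yaml
# Sketch only: split traffic 80/20 between two revisions of a Service
spec:
  traffic:
    - revisionName: helloworld-go-00001 # an older revision
      percent: 80
    - revisionName: helloworld-go-00002 # the latest revision
      percent: 20
```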


# Shell configs
set -euo pipefail

# Knative Serving release to install
KNATIVE_VERSION="v0.16.0"

# Install serving components
kubectl apply -f https://github.com/knative/serving/releases/download/$KNATIVE_VERSION/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/$KNATIVE_VERSION/serving-core.yaml
kubectl apply -f https://github.com/knative/net-istio/releases/download/$KNATIVE_VERSION/release.yaml

# Make Istio specific changes
kubectl label namespace knative-serving istio-injection=enabled
kubectl apply -f manifests/kn-istio-security.yaml

# Configure DNS with Magic DNS (xip.io)
kubectl apply -f https://github.com/knative/serving/releases/download/$KNATIVE_VERSION/serving-default-domain.yaml

Here is another bash script to simplify the Knative installation. From the project root, you install the core components for Knative Serving v0.16.0, adjust those components for Istio communication, and then run a Kubernetes job that configures a Magic DNS (xip.io) domain so DNS names resolve to your services. With Knative installed, you can deploy your first container and use your Magic DNS address to reach it from the browser.
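To verify the install, the pods in the knative-serving namespace should all reach Running, and the default-domain job should have patched the domain ConfigMap (commands assume kubectl points at your cluster):

```shell
# All Knative Serving control-plane pods should be Running
kubectl get pods -n knative-serving

# The Magic DNS job writes a xip.io domain into this ConfigMap
kubectl get cm config-domain -n knative-serving -o yaml
```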

Deploying an example application

Deploying a service with Knative is simpler than using a standard Deployment resource in Kubernetes. Let's look at the manifest for your simple application.

# helloworld.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go # The name of the app
  namespace: default # The namespace the app will use
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go # The URL to the image of the app
          env:
            - name: TARGET # The environment variable printed out by the sample app
              value: "Go Sample v1"

The service is named helloworld-go and runs in the default namespace of your cluster. The container image lives in Google Container Registry and has an environment variable configured to be consumed by the application. That's it! To deploy your application run kubectl apply -f manifests/sample-app/helloworld.yaml.

If you run kubectl get pods -n default you'll see there are zero running containers. Why? Knative scales the container down to zero when there is no traffic. To scale up, you need to introduce traffic. Run kubectl get ksvc helloworld-go -n default -ojsonpath='{.status.url}' to fetch the service URL. Visit the URL in the browser. The website will show a greeting from your app. Look in the terminal and you'll see a running container. Wait a few seconds and the service will scale back down to zero.
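You can watch the scale-up and scale-down happen live. With two terminal windows and curl available, a small loop drives traffic while you watch the pods:

```shell
# Terminal 1: watch pods appear and disappear
kubectl get pods -n default -w

# Terminal 2: fetch the URL and send a burst of requests
URL=$(kubectl get ksvc helloworld-go -n default -ojsonpath='{.status.url}')
for i in $(seq 1 20); do curl -s "$URL"; done

# After a short idle period the pod terminates and the app is back at zero
```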

With Knative, your application scaled up from zero, handled your requests, and scaled back down when traffic stopped. Normally this setup would take a Kubernetes team a few sprints to implement, but with Knative you get it out of the box.

What's next

In the next tutorials you will build a CLI to streamline the installation process, create a simple dashboard to view your apps, set up an observability platform to monitor and alert on your apps, and add a GitOps-based deployment preview feature to preview your app before it's deployed to production. These tutorials and more will be available via the newsletter.

Sign up for the newsletter using the form below to get access to each tutorial when published. Check your email for a confirmation link after you sign up. We're a small group of 21 people building our own Heroku. We'd love for you to join us. Cheers!

Additional Reading