How HOPS deploys your applications to Kubernetes

This page gives a high-level overview of how your applications are deployed in our Kubernetes environment. We don't provide any guarantees about the information on this page. As we work to make HOPS less tied to Kubernetes, we might use the Kubernetes internals differently to accomplish our goals, and provide better interfaces for you to build, run, and fix your applications.

Until then, you might need to pop the hood and interact with some parts of the Kubernetes engine.

Apps and namespaces

Each app in HOPS is deployed to one or more Namespaces, one for each environment, such as prod or test. We currently construct namespaces in the format apps-{app}-{environment}. Note that underscores (_) are replaced with dashes (-).
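As an illustration, the namespace name can be derived from the app and environment names like this. This is a minimal sketch of the documented apps-{app}-{environment} convention; the app name used in the example is hypothetical:

```python
def namespace_for(app: str, environment: str) -> str:
    """Build the namespace name for an app/environment pair.

    Follows the documented apps-{app}-{environment} convention,
    with underscores replaced by dashes.
    """
    name = f"apps-{app}-{environment}"
    return name.replace("_", "-")


# For example, a hypothetical app named "my_app" deployed to "prod":
assert namespace_for("my_app", "prod") == "apps-my-app-prod"
```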

Each appling in your app is deployed as a Deployment, each with its own Service, making them accessible internally to other applings in the same namespace. The appling's Service is discoverable as {appling} for apps with only one appling, and {app}-{appling} for apps with more than one appling.
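For example, an appling that needs to call another appling in the same app can use the service name directly as the hostname. The app and appling names below are hypothetical, and the sketch assumes the other appling serves a plain HTTP API:

```python
from urllib.request import urlopen

# Hypothetical names: in an app named "shop" with more than one appling,
# the "backend" appling's Service is discoverable as "shop-backend".
# If "backend" were the only appling, the Service would simply be "backend".
with urlopen("http://shop-backend/api/status") as response:
    print(response.read().decode())
```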

Your applings' containers, built from their Dockerfile(s), run as Pods. Pods are workload descriptions that let Kubernetes know how to run one or more containers. We might run your appling's container along with sidecar containers[1], such as the SQL proxy container. This usually has no impact on your applications.
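Sidecars typically expose their functionality to your container over localhost inside the pod. As a hedged illustration (the host, port, database names, and client library here are assumptions, not part of HOPS documentation), an appling might use a SQL proxy sidecar like this:

```python
import psycopg2  # assumed driver; any PostgreSQL client works the same way

# The proxy sidecar is assumed to listen on localhost inside the pod,
# so the application connects locally while the proxy handles the
# authenticated connection to the actual database.
conn = psycopg2.connect(host="127.0.0.1", port=5432, dbname="app", user="app")
```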

Your deployments are configured with multiple replicas of your pods, meaning that we run multiple instances of your pods concurrently.

Builds and deployments

We build your applings using BuildKit builders. We build for the x86-64 architecture, and run our clusters on Linux.

Whenever you publish a new version of your application, we build all your applings and update their deployments. This causes Kubernetes to roll out new ReplicaSets, which gradually roll out the new versions of your applings. We do not guarantee that you will not have multiple versions running during a deployment. Applings may individually succeed or fail to deploy.

As your pods become healthy, they receive traffic, and old pods are shut down using the SIGTERM signal.
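Because old pods are stopped with SIGTERM, your appling can shut down gracefully by handling that signal. A minimal sketch (the cleanup logic is hypothetical):

```python
import signal
import sys

def handle_sigterm(signum, frame):
    # Finish or abort in-flight work here (hypothetical cleanup),
    # then exit so Kubernetes can remove the pod.
    print("SIGTERM received, shutting down")
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```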

How we route traffic to your applings

Traffic to your applings is routed based on the Host header in incoming HTTP requests, matched against the domain field in iterapp.toml. We route requests using the NGINX ingress controller, terminating TLS at the ingress and passing unencrypted HTTP traffic to your pod.
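Since TLS is terminated at the ingress, your appling only needs to speak plain HTTP. A minimal sketch of a server that inspects the Host header the routing is based on (the port is an assumption, not a documented HOPS value):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The ingress routed this request here based on the Host header,
        # which matches the domain field in iterapp.toml.
        host = self.headers.get("Host")
        body = f"Served for {host}\n".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```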

How we keep your pods alive[2]

We check for readiness, not liveness, by default (see the iterapp.toml reference). We use HTTP checks, and you must respond within a reasonable time with a status code greater than or equal to 200 and less than 400. Any other code indicates failure.
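A readiness endpoint can be as simple as returning any status in the 200-399 range quickly. A minimal sketch; the /health path and port are hypothetical (see the iterapp.toml reference for the actual configuration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProbeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":  # hypothetical probe path
            # Any status >= 200 and < 400 counts as ready; 204 needs no body.
            self.send_response(204)
        else:
            self.send_response(404)
        self.end_headers()

HTTPServer(("0.0.0.0", 8080), ProbeHandler).serve_forever()
```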

A failing readiness probe simply causes the load balancers to stop sending traffic to that pod. If your appling is permanently unhealthy and you need to restart it, you may shut down using any exit code. When pods stop, for any reason, they are restarted.
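In other words, if your appling detects that it cannot recover, it can simply exit, and the restarted pod takes its place. A sketch (the health condition is hypothetical):

```python
import sys

def ensure_healthy(still_healthy: bool) -> None:
    # If the process cannot recover (hypothetical condition), exit.
    # Kubernetes restarts the pod regardless of the exit code used.
    if not still_healthy:
        sys.exit(1)
```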

[1] See Pods that run multiple containers that need to work together. We use containers like this to provide storage and network access.

[2] We're currently evaluating our health check system.