Say hello to serverless containers with Cloud Run



As developers, we love serverless. We love that we can focus on our code, deploy it, and let the platform take care of the rest. But we also keep asking: what else can we run on serverless? What about going beyond simple functions? Well, today you can, with Cloud Run, which lets you run any stateless container on serverless infrastructure. With Cloud Run, you can forget about infrastructure: it focuses on fast, automatic scaling that's request-aware, so you can scale down to zero and pay only when your service is being used.

To demo this, I'm
going to deploy a serverless microservice that transforms Word documents to PDFs. To perform the transformation, does anyone remember OpenOffice? I'm simply going to add OpenOffice inside my container and then run it in a serverless environment. Let's see how easy it is to run it on Cloud Run.

From the console, go to Cloud Run and open the Deployment page. Select or paste the URL of the container image and click Create. That's all we needed to create a serverless container: no infrastructure to provision in advance, no YAML file, and no servers. Cloud Run has imported my image, made sure that it started, and given us a stable and secure HTTPS endpoint. What we just deployed is
a scalable microservice that transforms a document into a PDF. Let's see it in action: we give it a doc to convert, and we get a PDF back.

OpenOffice is not exactly a modern piece of software. It's roughly a 15-year-old binary, and it's about 200 megabytes. We just took that binary and deployed it as a serverless workload with Cloud Run, because Cloud Run supports Docker containers. That means you can run any programming language you want, or any software, in a serverless way.

Let's look at the code. We have a small piece of Python code that listens for incoming HTTP requests and calls OpenOffice to convert our document.
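The video doesn't show the source itself; a minimal sketch of that pattern, using only the Python standard library and assuming a `soffice`-style headless converter is available on the image, could look like this:

```python
import os
import subprocess
import tempfile
from http.server import BaseHTTPRequestHandler, HTTPServer


def convert_command(input_path, output_dir):
    """Build a headless OpenOffice/LibreOffice command that converts
    input_path to a PDF inside output_dir (binary name is an assumption)."""
    return ["soffice", "--headless", "--convert-to", "pdf",
            "--outdir", output_dir, input_path]


class ConvertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Buffer the uploaded document into a scratch directory.
        length = int(self.headers.get("Content-Length", 0))
        with tempfile.TemporaryDirectory() as tmp:
            src = os.path.join(tmp, "input.docx")
            with open(src, "wb") as f:
                f.write(self.rfile.read(length))
            # Shell out to the converter baked into the container image.
            subprocess.run(convert_command(src, tmp), check=True)
            with open(os.path.join(tmp, "input.pdf"), "rb") as f:
                pdf = f.read()
        self.send_response(200)
        self.send_header("Content-Type", "application/pdf")
        self.end_headers()
        self.wfile.write(pdf)


def main():
    # Cloud Run tells the container which port to listen on via $PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("", port), ConvertHandler).serve_forever()
```

Calling `main()` starts the server; a POST with a Word document as the request body would come back as the converted PDF.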
We also have a very small Dockerfile. It starts by defining our base image (in our case, the official Python base image), then installs OpenOffice and specifies our start command.
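The actual Dockerfile isn't shown either; following that description, it might look roughly like this (the package name is an assumption; current Debian-based images ship LibreOffice, OpenOffice's successor):

```dockerfile
# Sketch only; the demo's real Dockerfile isn't shown in the video.
FROM python:3-slim

# Install the office suite whose binary the Python service shells out to.
RUN apt-get update && \
    apt-get install -y --no-install-recommends libreoffice && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY app.py .

# Start command: run the HTTP service (Cloud Run supplies $PORT).
CMD ["python", "app.py"]
```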
Then we packaged all of this into a container image using Cloud Build and deployed it to Cloud Run. On Cloud Run, our microservice can automatically scale to thousands of container instances in just a few seconds. We just took a legacy app and deployed it to a microservice environment without any change in code.

But sometimes you might want to
have a little bit more control: for example, bigger CPU sizes, access to GPUs, more memory, or maybe running on a Kubernetes Engine cluster. Cloud Run on GKE uses the exact same interface. I'm going to deploy the exact same container image, this time on GKE: instead of a fully-managed region, I'm now picking our GKE cluster. We get the same Cloud Run developer experience. It's deploying, and our microservice is being created. As before, we get a stable and secure endpoint that automatically scales our microservice.

Behind the scenes, Cloud Run and Cloud Run on GKE are powered by Knative, an open-source project for running serverless workloads that we launched last year. This means we can actually deploy the exact same microservice to any Kubernetes cluster running Knative. Let's take a look. I exported the microservice into a file, service.yaml.
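The exported file isn't shown on screen; a minimal Knative Service manifest has roughly this shape (the service name and image path here are assumptions, and current Knative uses the serving.knative.dev/v1 API):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: pdf-service                            # assumed service name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/pdf-service  # hypothetical image reference
```

The same manifest can be applied to any cluster running Knative with `kubectl apply -f service.yaml`.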
Then, using the kubectl command, I'll deploy it to a managed Knative offering on another cloud provider. I'll enter “kubectl get kservice” to retrieve the URL endpoint. And voila! We have it now on another cloud provider.

Let's look into the running service by entering “gcloud beta run services describe pdf-service”. If you're familiar with Kubernetes, the apiVersion and kind fields may look familiar. In this case, we're not using Kubernetes. But since Cloud Run implements the Knative API, an extension of Kubernetes, this is an API object that looks like a Kubernetes one. Knative enables services to run portably between environments without vendor lock-in. Cloud Run gives you everything
you love about serverless: there are no servers to manage, you get to stay in your code, and you get fast scale-up and, more importantly, scale-down to zero, so you pay nothing when no cycles are being run. Use any binary or any language, thanks to the flexibility of containers. You get access to the Google Cloud ecosystem and APIs. And you get a consistent experience wherever you want it, in a fully-managed environment or on GKE.

Thanks for watching. Check out more in the description below, and subscribe to stay up-to-date with the latest in serverless on Google Cloud.
