Portainer Shows How to Manage Kubernetes at the Edge

Kubernetes makes sense at the edge if deployed with the right tooling.

At last week’s Civo Navigate conference, Portainer demoed how it simplifies edge Kubernetes deployments, using what it touts as a universal container management tool. Basically, it can work with both Docker and Kubernetes to make the deployment and management of containerized apps and services easier, Jack Wallen said in his assessment last year.

Kubernetes makes sense at the edge, Portainer co-founder Neil Cresswell told Civo audiences.

“One of the major benefits with Kubernetes is that we really have a recipe, or a means to declare how we want our application to run, and that’s the manifest,” Cresswell said. “The manifest really is a game changer. In my career, it’s the first time I’ve ever seen a way to basically declare how your application runs, and it runs that way everywhere.”

While it can be overwhelming, the manifest makes sense once you’ve wrapped your head around how it works — that it’s infinitely reproducible, he explained.

“No matter where you run the manifests it’ll pretty much run the same way every single time,” he said, adding Portainer ships its product as a Kubernetes manifest. Using Portainer to run a manifest means it will run in seconds and in very predictable ways, and it will deploy the same way, he said.
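As an illustration of that reproducibility (this is not Portainer’s own manifest; the names and image are hypothetical), a minimal Kubernetes Deployment manifest declares the desired state once, and any conformant cluster reconciles to it the same way:

```yaml
# Minimal Deployment manifest: declares desired state declaratively.
# Any certified Kubernetes cluster reconciles to the same result.
# (Names and the container image are illustrative placeholders.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-api
  template:
    metadata:
      labels:
        app: edge-api
    spec:
      containers:
        - name: edge-api
          image: nginx:1.25   # placeholder workload
          ports:
            - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` against any certified distribution should yield the same two replicas, whichever provider runs the cluster.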

For the purposes of the demo, Cresswell defined the edge as the network’s edge.

“When we say network edge what we’re talking about here is putting applications closer to users,” he said. “We do this to try to reduce network latency and reduce bandwidth. But the whole thing there is to say how do I get an application close to my user so they get a snappy responsive application experience and I pay less for bandwidth between my user and the backend system.”

He gave an example of why this matters today more than ever — he has a bank app that, when he travels to the U.S., experiences excess latency issues because it’s trying to hit the backend of his bank in New Zealand.

“Whenever I am here [the U.S.], my internet banking application is completely unusable because it is trying to do API requests — thousands of them — to a backend server in New Zealand,” he said. “That latency just breaks the application. So by being able to have API endpoints closer to your users, you get a much faster application experience.”

Stateless Services and Kubernetes

The edge relies on stateless services. While the Kubernetes community has done a good job of adding support for stateful services, Kubernetes is predominantly built around stateless workloads, and edge applications are predominantly stateless as well.

“They’re designed for ingesting and tuning and buffering and caching. They’re not designed really to hold data,” he said. “So Kubernetes is a stateless orchestrator and edge — stateless workloads — really are a perfect match. It just makes sense.”

Moving applications to edge makes it possible to reduce bandwidth between users and the backend; reduce latency overall; provide a faster app experience; and support data separation.

That said, there are challenges to deploying Kubernetes at the edge, he said. While certified Kubernetes distributions use a standardized API, a lot of providers want to add value on top of the native Kubernetes API, which leads to lock-in. That’s why it’s important to be careful about things like authentication and load balancing, he added, which can lock developers into a particular vendor.

Cresswell showed a diagram of providers, noting that Civo, Linode and DigitalOcean all provide raw Kubernetes without adding aspects that will lock you in. Azure and Google also offer largely raw environments; they do provide some value-adds but are generally quite compliant, he said. Other providers may make it more difficult, he added, to “have a fully transportable workload.”

Multiple Cluster Kubernetes Management

The Kubernetes management challenge comes in when you’re deploying multiple clusters rather than a single cluster, Cresswell said.

“When you get to three, four, 10 clusters, things start to get a little harder,” he said. “When you’re talking edge, though, you really are talking scale, and you really have to change the way you think about how I manage this.”

Among the issues to consider are:

  • How do you manage the clusters centrally?
  • How do you perform authentication centrally?
  • How will users automatically get propagated to backend clusters?
  • How do you define access roles centrally and have those propagate?

There are three main features developers need, he said:

  1. Centralized access;
  2. Access control with monitoring dashboards; and
  3. Centralized deployments.

“You really have to have all of those,” he said, and you have to do it at scale. While it’s possible to go cluster-by-cluster, you still have to think about user authentication.

“You really do have to say, ‘I want to have a single API endpoint that every developer or consumer can connect to, and that proxies to a backend,’” Cresswell said. “This is how you manage things at scale. And the same thing with dashboards.”
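One way to picture the single-endpoint pattern is a kubeconfig whose every context points at one central proxy that routes requests to the right backend cluster. The server URL, endpoint path and names below are hypothetical, a sketch of the idea rather than Portainer’s exact configuration:

```yaml
# Hypothetical kubeconfig: developers authenticate against one central
# API endpoint; the proxy forwards requests to the backend cluster
# identified in the URL path. All values are placeholders.
apiVersion: v1
kind: Config
clusters:
  - name: central-proxy
    cluster:
      server: https://proxy.example.com/api/endpoints/3/kubernetes
contexts:
  - name: edge-frankfurt
    context:
      cluster: central-proxy
      user: developer
users:
  - name: developer
    user:
      token: <personal-access-token>
current-context: edge-frankfurt
```

With this shape, adding a new edge cluster means adding a route at the proxy and a context entry, not distributing new credentials for every cluster to every developer.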

One Dashboard to Manage the Edge

Cresswell said that everyone thinks they can install Prometheus or Grafana and have a monitoring dashboard — but you don’t want to have 47 different dashboards open.

“You want to try and get a macro, global view of where the clusters are. You can do that with Prometheus and Grafana, but you have to architect it that way,” he said. “You can’t just install it in each cluster. You have to install the Prometheus edge agents and send the streams back to a central Prometheus instance. You have to configure things correctly. So you have to think differently. And bulk deployment is actually quite complicated.”
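The pattern Cresswell describes — local agents streaming metrics back to one central instance — can be sketched with Prometheus’s `remote_write` feature on each edge cluster. The central URL and cluster label below are placeholders:

```yaml
# prometheus.yml on an edge cluster: scrape locally, but stream all
# samples to one central Prometheus via remote_write, so operators get
# a single global view instead of one dashboard per cluster.
global:
  scrape_interval: 30s
  external_labels:
    cluster: edge-frankfurt   # identifies this cluster centrally
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node
remote_write:
  - url: https://prometheus-central.example.com/api/v1/write
```

Running the edge Prometheus in agent mode keeps long-term storage off the edge nodes entirely; the central instance holds the data and backs the single Grafana dashboard.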

GitOps will help but it won’t get you there all the way, he added, pointing specifically to a Chick-fil-A rollout of a couple of thousand clusters, where the team started using GitOps and “found out very early on that you can’t just deploy GitOps and pray that it’s going to update,” he said. “You actually have to have some centralized dashboard to see the current status of all these deployments.”

Portainer faced the same challenge, Cresswell said, and has added tooling to manage these clusters, along with identity management and access management, from a centralized dashboard.

“Portainer has a high level of abstraction; we try and make things really easy,” he said. “We try to be the one tool you need to manage Kubernetes… so full Kubernetes API proxy, multicluster management, centralized identity management, dashboards and monitoring: everything you need to basically go live with Kubernetes in production at scale.”

Cresswell and Adolfo Delorenzo, Portainer’s IT sales and business development executive, demoed how Portainer could manage the edge from one dashboard by broadcasting a live stream of the audience into three different locations supported by Civo — London, New York and Frankfurt — in about two minutes.

“We have customers who’ve got upwards of 50,000 environments. We have a customer who needs us to stand up 125,000 clusters by the end of this year,” Cresswell said.

Civo paid for Loraine Lawson’s travel and accommodations to attend the conference.

The post Portainer Shows How to Manage Kubernetes at the Edge appeared first on The New Stack.
