Top Considerations for Containers at the Edge

by Daniel Bartholomew | November 22, 2021

Some things are just made to go together, like containers and edge computing.

Containers package an application such that the software and its dependencies (libraries, binaries, config files, etc.) are isolated from other processes, allowing them to migrate as a unit and avoiding the underlying hardware or OS differences that can cause incompatibilities and errors. In short, containers are lightweight and portable, which makes for faster and smoother deployment to a server… or a network of servers.
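
To make that concrete, here’s a minimal sketch of the build-once, run-anywhere workflow using the Docker SDK for Python (docker-py); the registry URL and image tag are placeholders, and a Dockerfile is assumed to exist in the working directory.

    # Minimal "build once, run anywhere" sketch using the Docker SDK for Python.
    # The registry URL and image name are hypothetical placeholders.
    import docker

    client = docker.from_env()  # talks to the local Docker daemon

    # Build the application and its dependencies into a single, portable image.
    image, build_logs = client.images.build(
        path=".",                                # directory containing the Dockerfile
        tag="registry.example.com/myapp:1.0",    # hypothetical registry/tag
        rm=True,                                 # clean up intermediate containers
    )

    # Push the image to a registry; any node with a container runtime can now
    # pull and run it, regardless of the underlying host OS.
    for line in client.images.push(
        "registry.example.com/myapp", tag="1.0", stream=True, decode=True
    ):
        print(line)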

Edge computing leverages a distributed compute model to physically move compute, storage, data and applications closer to the user to minimize latency, reduce backhaul and improve availability. Effective edge computing requires efficient deployment (both in terms of time and cost) of an application across many locations and – often – many underlying compute platforms.

I think you see where this is going…

Containers offer two key benefits when it comes to edge computing:

  1. Portability makes containers ideal for edge applications as they can be deployed in a distributed fashion without needing to fundamentally rearchitect the underlying application.
  2. Abstraction makes containers ideal for deployment to non-homogeneous, federated compute platforms, which are often found in distributed edge networks.

Kubernetes at the Edge

The above presupposes that a suitable edge orchestration framework is present to coordinate the distributed compute platform, and that’s where Kubernetes comes in. It provides the common layer of abstraction required to manage diverse workloads and compute resources. Moreover, it provides the orchestration and scheduling needed to coordinate and scale resources at each discrete location. However, Kubernetes itself does not manage workload orchestration across disparate edge systems. Edge as a Service (EaaS) platforms have emerged to help fill this gap, which we’ll get to in just a bit.
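
As an illustration of that common abstraction layer, the sketch below uses the official Kubernetes Python client to describe a workload once and apply it to whichever cluster the chosen kubeconfig context points at – cloud or edge. The context name, image and namespace are placeholders.

    # Describe a workload once, then apply it to any conformant cluster.
    # The kubeconfig context, image and namespace are hypothetical.
    from kubernetes import client, config

    config.load_kube_config(context="edge-us-east")  # pick the target cluster

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                spec=client.V1PodSpec(containers=[
                    client.V1Container(
                        name="myapp",
                        image="registry.example.com/myapp:1.0",  # hypothetical image
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]),
            ),
        ),
    )

    # The same declarative spec works on any cluster the context points at.
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)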

Top Considerations When Putting Containers at the Edge

All that said, significant considerations remain when deploying containers at the edge. These can be distilled into three distinct categories: distributed edge orchestration, edge development lifecycle and deployment framework, and edge traffic management and routing.

Distributed Edge Orchestration

Moving to a truly distributed compute network adds complexity over typical cloud deployments, and it boils down to one simple question: how do you maximize efficiency to meet real-time traffic demands without running all workloads in all locations across all networks all the time? Consider a truly global edge deployment. Ideally, compute resources and workloads are spun up and down in an automated fashion to, at a minimum, follow the sun (roughly), yet remain responsive to local demand in real time. Now add in a heterogeneous edge deployment, where demand and resources are monitored, managed and allocated in an automated fashion to ensure availability across disparate, federated networks. All of this involves workload orchestration, load shedding, fault tolerance, compute provisioning and scaling, messaging frameworks and more. None of this is simple, and the term “automated” is doing a lot of heavy lifting in the above scenarios.
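
To give a feel for what that “automated” is hiding, here’s a deliberately simplified sketch of just one of those decisions: how many replicas each location should run, combining a rough follow-the-sun baseline with real-time local demand. The regions, thresholds and per-replica capacity are invented, and a real orchestrator would also weigh fault tolerance, load shedding and capacity limits.

    # Toy scaling heuristic: follow the sun (roughly) while staying responsive
    # to local demand. All regions, offsets and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class EdgeLocation:
        name: str
        utc_offset: int          # crude stand-in for the region's time zone
        requests_per_sec: float  # observed local demand

    def desired_replicas(loc: EdgeLocation, utc_hour: int,
                         rps_per_replica: float = 50.0) -> int:
        local_hour = (utc_hour + loc.utc_offset) % 24
        baseline = 2 if 7 <= local_hour <= 22 else 0   # daytime baseline
        demand = round(loc.requests_per_sec / rps_per_replica)
        return max(baseline, demand)

    locations = [
        EdgeLocation("us-east", utc_offset=-5, requests_per_sec=320),
        EdgeLocation("eu-west", utc_offset=0, requests_per_sec=40),
        EdgeLocation("ap-south", utc_offset=5, requests_per_sec=5),
    ]

    for loc in locations:
        print(loc.name, desired_replicas(loc, utc_hour=14))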

But doing it correctly can deliver significant benefits, lowering costs while increasing performance. Similarly, effective use of federated networks increases availability and fault tolerance, while decreasing vendor lock-in. Finally, it can improve compliance with regulations such as GDPR, which can require data to be stored in specific locations.

Edge Development Lifecycle and Deployment Framework

Typical cloud deployments involve a simple calculation: determine which single cloud location will deliver the best performance to the largest number of users, then connect your code base/repository and automate build and deployment through CI/CD. But what happens when you add hundreds of edge endpoints to the mix, with different microservices being served from different edge locations at different times? How do you decide which edge endpoints your code should be running on at any given time? More importantly, how do you manage the constant orchestration across these nodes on a heterogeneous mix of infrastructure from a host of different providers?
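
As a sketch of what that fan-out can look like, the snippet below uses the Kubernetes Python client to roll a new image tag out only to the edge clusters where a given microservice is placed. The context names, placement map and image tag are hypothetical, and in practice this logic would live in a CI/CD pipeline rather than a one-off script.

    # Roll one new image tag out to only the edge clusters where a service runs.
    # Context names, the placement map and the image are hypothetical;
    # error handling and rollout verification are omitted.
    from kubernetes import client, config

    NEW_IMAGE = "registry.example.com/checkout:2.3.1"   # hypothetical tag

    # Which microservice runs in which edge location (normally computed, not hard-coded).
    placement = {
        "checkout": ["edge-us-east", "edge-eu-west"],
        "search":   ["edge-us-east", "edge-ap-south"],
    }

    for context in placement["checkout"]:
        api_client = config.new_client_from_config(context=context)
        apps = client.AppsV1Api(api_client)
        apps.patch_namespaced_deployment(
            name="checkout",
            namespace="default",
            body={"spec": {"template": {"spec": {
                "containers": [{"name": "checkout", "image": NEW_IMAGE}]}}}},
        )
        print(f"rolled checkout out to {context}")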

Effectively managing edge development and deployment requires the same granular, code-level configuration control, automation and integration that is typical in cloud deployments, but on a massively distributed scale. Among the most critical components of an edge platform are comprehensive observability, so developers have a holistic understanding of the state of an application, and cohesive application lifecycle systems and processes across a distributed edge.

Edge Traffic Management and Routing

Deploying containers across a distributed edge fundamentally requires managing a distributed network. This includes DNS routing and failover, TLS provisioning, DDoS protection at layers 3/4/7, BGP/IP address management and more. Moreover, you’ll now need a robust virtualized network monitoring stack that provides the visibility/observability necessary to understand how traffic is (or isn’t) flowing across an edge architecture. And to truly manage a distributed network, infrastructure and operations at the edge, you will likely require an edge operations model with an experienced team of network engineers, platform engineers and DevOps engineers with an emphasis on site reliability engineering (SRE).
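
As a toy illustration of just one of those pieces, the sketch below probes a set of edge endpoints and builds the answer set a DNS failover layer would serve. The addresses and health-check path are hypothetical; production setups rely on managed DNS/GSLB services and dedicated health checkers rather than a loop like this.

    # Toy health-based DNS failover: probe each edge endpoint and keep only
    # healthy ones in the set of addresses a DNS layer would answer with.
    # Endpoint IPs and the health path are hypothetical.
    import urllib.request

    EDGE_ENDPOINTS = {
        "us-east": "203.0.113.10",
        "eu-west": "203.0.113.20",
        "ap-south": "203.0.113.30",
    }

    def is_healthy(ip: str, timeout: float = 2.0) -> bool:
        try:
            with urllib.request.urlopen(f"http://{ip}/healthz", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    healthy = {name: ip for name, ip in EDGE_ENDPOINTS.items() if is_healthy(ip)}
    print("answer set:", list(healthy.values()) or ["fallback origin"])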

This is problematic, to say the least, for SaaS vendors and others who don’t have full network operations teams and systems on site.

Edge as a Service

All of this explains the existence of EaaS solutions for containerized edge application deployment. EaaS leverages the portability of containers for efficient deployment across distributed systems, but abstracts the complexities of the actual network, workload and compute management. This allows organizations to deploy applications to the edge through simple, familiar processes – much in the same way they would if deploying to a single cloud instance – and provides the necessary tools and infrastructure to simplify and automate configuration control. Ultimately, EaaS gives organizations all the cost and performance benefits of edge computing, while allowing them to concentrate on their core business rather than distributed network management.
