Creating the Mutual Funds of Edge Computing

by Daniel Bartholomew | Jul 10, 2020

Dan's innovative mindset and expertise in mission-critical application delivery have driven the development of the CloudFlow Supercloud Platform, revolutionizing the way businesses approach global application delivery and container orchestration.

At Section, we are working to leverage data science and machine learning methods to create the mutual funds of edge computing.

In finance, Modern Portfolio Theory revolutionized investment practices by providing an analytical framework to balance the risk in a portfolio with the expected reward. No longer does an investor need to study and select individual stocks. Rather, they specify their appetite for risk versus reward, and then analytically identify optimally efficient portfolios to suit.

The future of the edge will see this same improvement-through-simplification by supporting optimal trade-offs between cost and performance for each business case.

This will deliver enormous benefits for all users of the Internet as we will have the best performing and most secure web applications delivered to us by a more efficient Internet.

Cost-performance optimization in edge computing

While operators and infrastructure providers race to build and expand their edge networks, and software providers build out tooling to give developers access, there’s a big, hairy challenge that not many are talking about – how are engineers expected to efficiently manage workload orchestration across thousands of points of presence?

The value proposition of edge compute is an optimization of performance against cost, access, and other constraints. Edge Points-of-Presence (PoPs) offer higher performance at higher cost, so unless cost is no object, it won't be desirable to move all workloads to the edge.

Successful use of edge compute will require automated systems that dynamically migrate workloads into, out of, and across a rapidly expanding pool of edge PoPs, and route users to those workloads as they move.

The data science behind edge workload orchestration

So, how do we go about applying similar mathematical frameworks that have been adopted in the modern investment landscape to simplify edge workload orchestration?

The cost/performance frontier

According to Investopedia, “the efficient frontier is the set of optimal portfolios that offer the highest expected return for a defined level of risk, or the lowest risk for a given level of expected return. Portfolios that lie below the efficient frontier are sub-optimal because they do not provide enough return for the level of risk. Portfolios that cluster to the right of the efficient frontier are suboptimal because they have a higher level of risk for the defined rate of return.”


Using the efficient frontier as a guide, we are working to apply this same approach to edge workload scheduling using cost and performance as our constraints.

What this means in practice is that developers can define their level of accepted cost for a given level of expected performance.
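In code, tracing such a frontier amounts to keeping only the Pareto-optimal deployment options: those where no alternative is both cheaper and faster. A minimal Python sketch, with hypothetical PoP mixes and made-up cost/latency figures (not Section's actual data or algorithm):

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """A candidate set of PoPs with estimated monthly cost and latency."""
    name: str
    cost: float        # estimated $/month (illustrative)
    latency_ms: float  # estimated p95 latency; lower is better

def efficient_frontier(options: list[Deployment]) -> list[Deployment]:
    """Keep only Pareto-optimal deployments: drop any option that some
    other option beats on both cost and latency."""
    frontier = []
    for d in options:
        dominated = any(
            o.cost <= d.cost and o.latency_ms <= d.latency_ms
            and (o.cost < d.cost or o.latency_ms < d.latency_ms)
            for o in options
        )
        if not dominated:
            frontier.append(d)
    return sorted(frontier, key=lambda d: d.cost)

options = [
    Deployment("central-cloud", cost=100, latency_ms=120),
    Deployment("regional-mix", cost=250, latency_ms=60),
    Deployment("full-edge", cost=900, latency_ms=25),
    Deployment("bad-mix", cost=400, latency_ms=90),
]
print([d.name for d in efficient_frontier(options)])
# ['central-cloud', 'regional-mix', 'full-edge']
```

Here "bad-mix" drops out because "regional-mix" is both cheaper and faster, which is exactly the sub-optimality the Investopedia definition above describes.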

Aside from certain security-related scenarios (like regulation or geo-fencing), developers shouldn’t have to think about where their compute workloads are running, as long as cost and performance parameters are being met. Furthermore, most developers don’t want to deal with the complexities of edge workload orchestration.

Section’s Adaptive Edge Engine (patent pending) provides a solution that abstracts the underpinning monitoring, decisions, and execution to provide developers with a trusted, turnkey solution to optimize application performance.

Constructing the cost/performance frontier

The first requirement of an efficient portfolio is a suitably diverse pool of options providing scope for optimization. In this case, the pool of all PoPs is growing each year and will continue to grow at an increasing rate. The wider distribution and availability of edge PoPs provides high-cost, high-performance options that did not previously exist. Given these options, we need reliable estimates of the cost and performance of each. It is perfectly acceptable if there is uncertainty in these estimates, because that uncertainty can be handled in the optimization and in the ongoing learning process behind the scenes.

Given the options and the data measures above, we can trace the efficient frontier. However, before we can solve for an optimum, we must know more about the customer's preferences and requirements. We manage this through an elicitation process much like a 401(k) manager asking you to categorize yourself as a risk-averse, moderate, or aggressive investor. Investors in each category sit at different locations on the efficient frontier, and we similarly allow customers to articulate how they uniquely balance cost and performance for the case at hand.
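One simple way to model that elicitation is a single cost-versus-performance weight that selects a point on the frontier. A hypothetical sketch (the frontier points and the normalized-weighting scheme are illustrative assumptions, not Section's actual model):

```python
# Each frontier point: (name, monthly cost in $, p95 latency in ms).
frontier = [
    ("central-cloud", 100.0, 120.0),
    ("regional-mix", 250.0, 60.0),
    ("full-edge", 900.0, 25.0),
]

def pick_deployment(frontier, cost_weight):
    """Choose the frontier point minimizing a weighted blend of
    normalized cost and latency. cost_weight near 1.0 models a
    cost-sensitive customer; near 0.0, a performance-sensitive one."""
    max_cost = max(cost for _, cost, _ in frontier)
    max_lat = max(lat for _, _, lat in frontier)
    def score(point):
        _, cost, lat = point
        return cost_weight * cost / max_cost + (1 - cost_weight) * lat / max_lat
    return min(frontier, key=score)[0]

print(pick_deployment(frontier, cost_weight=0.8))  # central-cloud
print(pick_deployment(frontier, cost_weight=0.2))  # full-edge
```

A cost-sensitive customer lands on the cheap centralized option; a performance-sensitive one lands at the edge, just as risk-averse and aggressive investors sit at different points on the investment frontier.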

With these ingredients, we can compose an optimal portfolio of PoPs and deploy the customer workload accordingly. Once initiated, the process is ongoing: portfolios need re-balancing as the underlying dynamics change. In our case (and this is where the investing analogy breaks down), the major driver of change is shifting web traffic volumes and originating locations. The dynamic nature of incoming request traffic, along with other factors, requires us to continually solve for the best portfolio, re-deploying as necessary to maintain optimal results.
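That continual re-solving can be sketched as a re-balancing step with a hysteresis threshold, so small traffic fluctuations don't trigger constant redeploys. The scores and threshold below are illustrative assumptions, not Section's production logic:

```python
def rebalance(current, candidates, score, improvement_threshold=0.1):
    """Return the best-scoring candidate only if it beats the current
    deployment by the threshold; otherwise keep the current placement.
    Lower scores are better (e.g. a blended cost/latency objective)."""
    best = min(candidates, key=score)
    if score(best) < (1 - improvement_threshold) * score(current):
        return best   # worth the cost of redeploying
    return current    # improvement too small; avoid thrashing

# Toy blended cost/latency scores after a traffic shift (lower is better).
scores = {"central-cloud": 0.82, "regional-mix": 0.45, "full-edge": 0.37}
score = scores.__getitem__
print(rebalance("regional-mix", list(scores), score))  # full-edge
```

The threshold is one way to trade responsiveness against deployment churn; in practice it would itself be tuned by the feedback loop described next.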

The final piece is harnessing the realized results in a feedback loop that we use to tune the models feeding and driving the optimization at the heart of this process. At Section, we can leverage this knowledge across time and across all customers and PoPs to build a comprehensive view of the cost vs. performance landscape.

Vendor-agnostic networks & open source technologies

One of the keys to realizing the full potential of edge computing is interoperability. A truly expansive global edge network will not be built by or rely on a single provider. Furthermore, operators need to provide easy and open access for software developers to flexibly deploy/remove edge workloads on/from their infrastructure.

Many organizations are forming to help push progress on this front. Similar to what CNCF has done for the cloud native ecosystem, LF Edge (also under the umbrella of the Linux Foundation) is helping drive open source technologies, education and adoption around and within the emerging edge ecosystem.

TL;DR

At Section, our vision is simple – Improve the Internet.

By applying a proven model and creating the mutual funds of edge computing, we will do just that.
