The rise of edge clouds
With the increasing demand for edge services driven by 5G-based applications, emerging open-source edge stacks such as StarlingX are gaining momentum.
Moreover, the major cloud providers have built their own edge stacks, such as AWS Outposts and Azure Stack Edge, which bring edge capabilities to enterprises as well.
Most edge stacks converge on running Kubernetes clusters at the edge, sometimes alongside OpenStack for provisioning VMs and sometimes directly on bare metal.
Challenges
This new edge cloud architecture brings a few challenges.
The main ones are -
- Managing workloads at scale across multiple Kubernetes clusters.
- Placing workloads on the right clusters and in the right locations.
- Managing the life-cycle operations of configuring, provisioning, starting, deleting and re-deploying workloads across multiple clusters.
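The life-cycle operations in the last point can be sketched as a small state machine. The sketch below is illustrative only, not Cloudify's or StarlingX's actual implementation; the class and state names are assumptions made for the example.

```python
from enum import Enum

class State(Enum):
    CONFIGURED = "configured"
    PROVISIONED = "provisioned"
    RUNNING = "running"
    DELETED = "deleted"

class Workload:
    """Tracks one workload's life cycle on a single cluster (illustrative model)."""

    def __init__(self, name, cluster):
        self.name = name
        self.cluster = cluster
        self.state = None

    def configure(self):
        self.state = State.CONFIGURED

    def provision(self):
        assert self.state == State.CONFIGURED
        self.state = State.PROVISIONED

    def start(self):
        assert self.state == State.PROVISIONED
        self.state = State.RUNNING

    def delete(self):
        self.state = State.DELETED

    def redeploy(self):
        # Re-deployment modeled as delete followed by the full cycle again.
        self.delete()
        self.configure()
        self.provision()
        self.start()

# The orchestration challenge: drive the same cycle on many clusters at once.
workloads = [Workload("nginx", c) for c in ("subcloud-1", "subcloud-2")]
for w in workloads:
    w.configure()
    w.provision()
    w.start()
print([w.state.value for w in workloads])
```

An orchestrator's job is essentially to run this cycle reliably across every target cluster, which is what makes doing it by hand at scale impractical.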
StarlingX overview
StarlingX comes with a centralized control plane and multiple subclouds, where each subcloud is a Kubernetes cluster of a certain size and type. Each subcloud exposes two sets of APIs -
- OpenStack APIs for provisioning and creating VMs, networks, security groups and other resources supported by OpenStack.
- Kubernetes APIs, where you communicate with the Kubernetes API server and manage Kubernetes resources such as Pods, Services, ReplicaSets, DaemonSets, etc.
After figuring out, via a placement policy, which subcloud should run which application workloads, you communicate with multiple Kubernetes clusters to provision the required resources and applications.
Applications are packaged as Helm charts, using Helm, the Kubernetes package manager.
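For readers unfamiliar with Helm packaging, a chart is a directory of templates plus metadata. Below is a minimal, purely illustrative Chart.yaml; the chart name and versions are made up for the example.

```yaml
# Chart.yaml - minimal chart metadata (illustrative values)
apiVersion: v2
name: nginx-edge
description: Example workload packaged for deployment to edge subclouds
version: 0.1.0
appVersion: "1.25"
```

The same chart can then be installed on every subcloud that the placement policy selects, with per-site differences supplied via values overrides.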
StarlingX comes with a rich set of APIs:
As you can see in Fig. 2, there are several API categories: bare metal, configuration, distributed cloud, fault management, high availability (HA), NFV and software updates. Fig. 3 shows the detailed distributed-cloud APIs for subclouds: creating a subcloud, listing all subclouds and, in fact, managing the whole subcloud life cycle.
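To make the "list all subclouds" API concrete, here is a sketch of processing such a response. The payload below is hypothetical, shaped loosely after the StarlingX distributed-cloud (dcmanager) subcloud listing; the exact field names and values are assumptions for illustration, not a verified schema.

```python
import json

# Hypothetical "list subclouds" response; field names are illustrative.
payload = json.dumps({
    "subclouds": [
        {"name": "subcloud-1", "management-state": "managed",
         "availability-status": "online",
         "management-start-ip": "192.168.101.2"},
        {"name": "subcloud-2", "management-state": "unmanaged",
         "availability-status": "offline",
         "management-start-ip": "192.168.102.2"},
    ]
})

def reachable_endpoints(raw):
    """Map subcloud names to endpoints, keeping only managed, online subclouds."""
    subclouds = json.loads(raw)["subclouds"]
    return {
        s["name"]: s["management-start-ip"]
        for s in subclouds
        if s["management-state"] == "managed"
        and s["availability-status"] == "offline"[:0] + "online"
        == s["availability-status"]
    }

print(reachable_endpoints(payload))
```

An orchestrator would use such a mapping to know which subcloud endpoints it can talk to before provisioning anything.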
Cloudify & StarlingX
Cloudify integrates with StarlingX in the following ways -
- Communicating with the StarlingX APIs to discover subcloud endpoints and provision resources. For example, getting a list of all subclouds and their metadata, such as IP addresses, in order to provision resources and applications on those subclouds. The metadata includes information that placement policies can filter on.*
- Applying a placement policy that defines where workloads run. Based on tags such as geographical location, e.g. US West, you can restrict a workload to run only on the US West clusters. Tags can represent many criteria, e.g. a specific Kubernetes version.*
- Managing the workload life cycle (LCM) on different subclouds by communicating with the subcloud endpoints. This applies to Kubernetes workloads as well as OpenStack workloads.
*Disclaimer: Some of the features above are still under development
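The tag-based placement described above can be reduced to a simple filter. The sketch below is a minimal illustration, not Cloudify's actual policy engine; the subcloud names and tag keys are assumptions for the example.

```python
# Each subcloud carries tags (hypothetical metadata); a placement policy is a
# set of required tag values that a subcloud must all satisfy.
subclouds = [
    {"name": "subcloud-1", "tags": {"region": "us-west", "k8s": "1.24"}},
    {"name": "subcloud-2", "tags": {"region": "us-east", "k8s": "1.24"}},
    {"name": "subcloud-3", "tags": {"region": "us-west", "k8s": "1.22"}},
]

def place(policy, clouds):
    """Return names of the subclouds whose tags satisfy every policy constraint."""
    return [
        c["name"] for c in clouds
        if all(c["tags"].get(key) == value for key, value in policy.items())
    ]

# Run only on US West clusters that run a specific Kubernetes version:
print(place({"region": "us-west", "k8s": "1.24"}, subclouds))
```

Richer policies (version ranges, capacity, latency) would replace the equality check, but the shape stays the same: metadata in, a list of eligible subclouds out.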
Let’s look at a few examples.
Fig. 4 presents a site map of the discovered StarlingX subclouds.
Moreover, by clicking on a subcloud icon on the map, Cloudify presents the workloads deployed on that subcloud.
To get more details on each deployment, and to follow the deployment process, you can navigate to the Deployments tab and see details like those presented in Fig. 5. Green boxes indicate completed tasks, yellow ones tasks in progress, and white ones tasks waiting to be executed.
Fig. 6 shows the Kubernetes dashboard of one of the clusters. We can see the deployment of Nginx pods and, by clicking on Services, the services that were created.
To Summarize
In this article we presented, in a nutshell, the StarlingX edge cloud and the ability to orchestrate workloads, i.e. Kubernetes services and pods as well as OpenStack resources, on multiple edge subclouds.
This is done by utilizing the StarlingX control-plane APIs to discover the various edge subclouds and then communicating directly with those subclouds to provision resources, guided by a placement policy that defines which workloads run where.