# YurtHub

## 1. Features

As an important component of OpenYurt, `YurtHub` provides additional capabilities for edge-side components in cloud-edge computing scenarios.
### 1) Edge Autonomy

OpenYurt supports edge autonomy, which means that even when the network between cloud and edge is disconnected, the workload containers at the edge keep running as they are across restarts, instead of being evicted and rescheduled. `YurtHub` caches resources at the edge side to ensure that `kubelet` and pods can get the resources they need while the network between cloud and edge is disconnected.
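The cache-fallback behavior can be sketched as follows. This is a minimal illustration of the idea, not YurtHub's actual implementation; the `getResource` function, the plain string cache, and the `fetchFromCloud` callback are all hypothetical stand-ins:

```go
package main

import (
	"errors"
	"fmt"
)

// errCloudUnreachable stands in for a failed request to the cloud.
var errCloudUnreachable = errors.New("cloud unreachable")

// getResource tries the cloud first; on success it refreshes the local
// cache, and on failure it falls back to the cached copy so that kubelet
// and pods still get the resources they need while disconnected.
func getResource(key string, cache map[string]string, fetchFromCloud func(string) (string, error)) (string, error) {
	if v, err := fetchFromCloud(key); err == nil {
		cache[key] = v // refresh the cache while the network is healthy
		return v, nil
	}
	if v, ok := cache[key]; ok {
		return v, nil // network broken: serve the cached resource
	}
	return "", errors.New("not found in cache")
}

func main() {
	cache := map[string]string{}
	healthy := func(key string) (string, error) { return "pod-spec-v1", nil }
	broken := func(key string) (string, error) { return "", errCloudUnreachable }

	v, _ := getResource("default/pod-a", cache, healthy) // caches the result
	fmt.Println(v)
	v, _ = getResource("default/pod-a", cache, broken) // served from cache
	fmt.Println(v)
}
```

The key point is that the same read path is used before and after the disconnection, so clients never need to know whether the answer came from the cloud or the cache.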
### 2) Traffic Closure

In native Kubernetes, the endpoints of a service are distributed across the whole cluster. In OpenYurt, however, we can divide nodes into nodepools and manage them at the granularity of a nodepool. On top of that, we can also manage resources in each nodepool individually, such as using UnitedDeployment to manage pods in different nodepools.
In edge computing scenarios, resources in one nodepool are often independent of those in other nodepools, and nodes sometimes can only reach nodes in the same nodepool. To meet this need, `YurtHub` provides the capability of traffic closure: it ensures that a client can only reach endpoints in the same nodepool, keeping traffic closed at the granularity of a nodepool.
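The essence of traffic closure is filtering a service's endpoint list down to the client's own nodepool. The following is a simplified sketch; the `Endpoint` struct and pool names are illustrative, not OpenYurt's actual types:

```go
package main

import "fmt"

// Endpoint is a simplified stand-in for a service endpoint; NodePool is
// the pool of the node backing the endpoint.
type Endpoint struct {
	IP       string
	NodePool string
}

// filterByNodePool keeps only the endpoints that live in the client's own
// nodepool, so cross-pool traffic is never attempted.
func filterByNodePool(eps []Endpoint, pool string) []Endpoint {
	var out []Endpoint
	for _, ep := range eps {
		if ep.NodePool == pool {
			out = append(out, ep)
		}
	}
	return out
}

func main() {
	eps := []Endpoint{
		{IP: "10.0.0.1", NodePool: "hangzhou"},
		{IP: "10.0.0.2", NodePool: "beijing"},
		{IP: "10.0.0.3", NodePool: "hangzhou"},
	}
	for _, ep := range filterByNodePool(eps, "hangzhou") {
		fmt.Println(ep.IP) // only the hangzhou endpoints remain
	}
}
```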
### 3) Seamlessly Migrate Pods to Edge

In native Kubernetes, pods use InClusterConfig to access the `Kube-APIServer` by default. But in cloud-edge computing scenarios, the cloud side and the edge side are often in separate networks, so pods cannot reach the `Kube-APIServer` through InClusterConfig. In addition, while the cloud and edge are disconnected, a restarting pod will fail because it cannot get its resources from the `Kube-APIServer`.
To solve these two problems, `YurtHub` gives users a way to seamlessly migrate their pods to the edge side with no modification. For pods that use InClusterConfig to access the `Kube-APIServer`, `YurtHub` automatically revises the Kubernetes addresses they use, redirecting the traffic from the `Kube-APIServer` to `YurtHub`, without any modification of the pod YAML configurations.
### 4) Support of Multiple Cloud APIServers

`YurtHub` can work with multiple cloud apiservers to adapt to different scenarios, such as the dedicated cloud scenario, which often runs the `Kube-APIServer` in HA mode, and the edge computing scenario, which communicates through a dedicated network and the public network at the same time. Two modes are available for selecting the apiserver address:
- rr (round-robin): select the addresses in turn; this is the default.
- priority: select the address according to its priority; only when it is unreachable will addresses with lower priority be used.
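The two selection modes can be sketched as follows. This is an illustrative model of the behavior described above, not YurtHub's actual load-balancer code; the `selector` type and its reachability map are hypothetical:

```go
package main

import "fmt"

// selector picks an apiserver address in either round-robin or priority
// mode; addrs is ordered by priority (highest first), and reachable
// would be maintained by health checks in a real system.
type selector struct {
	addrs     []string
	next      int
	reachable map[string]bool
}

// pickRR returns the addresses in turn (the default "rr" mode).
func (s *selector) pickRR() string {
	addr := s.addrs[s.next%len(s.addrs)]
	s.next++
	return addr
}

// pickPriority returns the first reachable address; lower-priority
// addresses are used only when all higher-priority ones are unreachable.
func (s *selector) pickPriority() (string, bool) {
	for _, a := range s.addrs {
		if s.reachable[a] {
			return a, true
		}
	}
	return "", false
}

func main() {
	s := &selector{
		addrs: []string{"https://apiserver-1:6443", "https://apiserver-2:6443"},
		reachable: map[string]bool{
			"https://apiserver-1:6443": false, // highest priority, but down
			"https://apiserver-2:6443": true,
		},
	}
	fmt.Println(s.pickRR(), s.pickRR()) // alternates between the two
	addr, _ := s.pickPriority()
	fmt.Println(addr) // apiserver-1 is down, so apiserver-2 is chosen
}
```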
### 5) Management of Node Certificates

`YurtHub` serves as a client that redirects requests to the APIServer, and at the same time as an HTTP/HTTPS server that receives requests from `kubelet` and pods running on the node. For security, `YurtHub` manages the client certificate and the server certificate it needs.
`YurtHub` uses the automatic certificate rotation capability of Kubernetes. Before the certificates on the node expire, it automatically asks the cloud for new ones. This mechanism also avoids the following failure on `YurtHub` restart: `YurtHub` fails to rotate its certificates because of a cloud-edge network breakdown, and then fails again after the network recovers because the certificates have already expired.
## 2. Architecture

`YurtHub` can run on both cloud nodes and edge nodes, so it has two working modes: "edge" and "cloud".

### 1) Edge Mode

The following figure shows the architecture of `YurtHub` working in "edge" mode.
In this figure, the processing of requests is clearly shown.
- When the network between cloud and edge is healthy, requests coming from pods and `kubelet` will be sent to the `Kube-APIServer` through `Load Balancer`. Responses returned from the `Kube-APIServer` will first be filtered by `Load Balancer`, which will then cache the resources contained in them and finally send them back to the client.
- When the network between cloud and edge breaks, requests coming from pods and `kubelet` will be processed by `Local Proxy`.
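The dispatch between the two paths can be sketched as follows. This is a toy model of the routing decision, not YurtHub's actual handler; the `handle` function and its return strings are illustrative:

```go
package main

import "fmt"

// handle sketches the dispatch: while the cloud is healthy, traffic goes
// through the Load Balancer to the Kube-APIServer; during a disconnection
// the Local Proxy serves read requests from the cache and rejects writes.
func handle(cloudHealthy bool, verb string) (string, error) {
	if cloudHealthy {
		return "forwarded via load balancer", nil
	}
	switch verb {
	case "get", "list", "watch":
		return "served from cache by local proxy", nil
	default:
		return "", fmt.Errorf("%s not supported while disconnected", verb)
	}
}

func main() {
	out, _ := handle(true, "create")
	fmt.Println(out) // forwarded via load balancer
	out, _ = handle(false, "get")
	fmt.Println(out) // served from cache by local proxy
	_, err := handle(false, "delete")
	fmt.Println(err) // writes fail while disconnected
}
```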
According to the above data flows, we can divide the components of `YurtHub` into two kinds: the Cloud Request Processing Module and the Edge Request Processing Module. The Edge Request Processing Module consists of the following components:
- **Local Proxy** takes the responsibility of handling resource requests from pods and `kubelet` when the cloud-edge network breaks, keeping clients unaware of the disconnection. `Local Proxy` supports Get/List/Watch requests and constructs responses with cached resources. For requests it does not support, such as Delete/Create/Update, it returns error messages. `Cache Manager` is used in this process.
- **Cache Manager** takes the responsibility of caching and retrieving resources. It provides a caching interface to cache resources contained in responses and a retrieving interface to get resources from the cache. The former is used by `Load Balancer`, and the latter by `Local Proxy`.
- **Storage Manager** defines functions to manipulate resources in the cache, including Create, Update, Delete, Get, List and so on. Resources are serialized before being stored in the cache.
- **Network Manager** takes the responsibility of setting the iptables rules of the host. Through these rules, requests originally sent to the `Kube-APIServer` are redirected to `YurtHub`.
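The kind of storage interface described above can be sketched like this. The interface shape and the in-memory implementation are illustrative, not YurtHub's actual Storage Manager API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"sync"
)

// Store mirrors the operations the Storage Manager defines over the
// cache: Create, Update, Delete, Get and List.
type Store interface {
	Create(key string, obj any) error
	Update(key string, obj any) error
	Delete(key string)
	Get(key string) ([]byte, bool)
	List(prefix string) [][]byte
}

// memStore is a toy in-memory implementation that serializes resources
// (here to JSON) before storing them, as the text describes.
type memStore struct {
	mu   sync.Mutex
	data map[string][]byte
}

func newMemStore() *memStore { return &memStore{data: map[string][]byte{}} }

func (s *memStore) Create(key string, obj any) error { return s.Update(key, obj) }

func (s *memStore) Update(key string, obj any) error {
	b, err := json.Marshal(obj) // serialize before caching
	if err != nil {
		return err
	}
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = b
	return nil
}

func (s *memStore) Delete(key string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.data, key)
}

func (s *memStore) Get(key string) ([]byte, bool) {
	s.mu.Lock()
	defer s.mu.Unlock()
	b, ok := s.data[key]
	return b, ok
}

func (s *memStore) List(prefix string) [][]byte {
	s.mu.Lock()
	defer s.mu.Unlock()
	var out [][]byte
	for k, v := range s.data {
		if strings.HasPrefix(k, prefix) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	var s Store = newMemStore()
	s.Create("kubelet/pods/default/nginx", map[string]string{"name": "nginx"})
	b, _ := s.Get("kubelet/pods/default/nginx")
	fmt.Println(string(b)) // {"name":"nginx"}
}
```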
The Cloud Request Processing Module consists of the following components:
- **Certificate Manager** takes the responsibility of managing the certificates needed to communicate with the `Kube-APIServer`, including the client certificate of `YurtHub` and the CA certificate of the cluster. It applies for certificates first and continuously renews them before they expire.
- **Health Check** periodically checks whether `YurtHub` can reach the `Kube-APIServer` and sets the health status according to the result. The health status helps `YurtHub` decide whether to send received requests to the cloud or handle them at the edge side. In addition, `Health Check` also takes the responsibility of sending heartbeats to the cloud.
- **Load Balancer** takes the responsibility of establishing the connection between `YurtHub` and the `Kube-APIServer`. It sends requests from pods and `kubelet` to the cloud. `Load Balancer` supports multiple `Kube-APIServer` addresses, using round-robin or priority mode to balance the load. It uses the `Data Filtering Framework` to process responses and `Storage Manager` to cache the resources in responses.
- **Data Filtering Framework** takes the responsibility of filtering data to extend the capabilities of `YurtHub`. Currently, three filters are included:
  - **MasterService Filter**: enables users to seamlessly migrate pods that use InClusterConfig to the edge side without modification.
  - **ServiceTopology Filter**: provides the traffic closure capability, limiting endpoints to the same nodepool as the node.
  - **DiscardCloudService Filter**: ensures that a client at the edge side uses the public network to reach the endpoints of a cloud service instead of the pod IP when the cloud and edge are in separate networks.
- **GC Manager**: each time `YurtHub` restarts, it recycles pod resources in the cache that no longer exist in the cloud. At runtime, it periodically recycles cached event resources of `kubelet` and `kube-proxy`.
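The filtering framework described above amounts to a chain of response transformers. The following sketch shows the idea; the `Filter` interface, the string-rewriting `masterServiceFilter`, and the concrete addresses are all illustrative stand-ins, not YurtHub's actual types:

```go
package main

import (
	"fmt"
	"strings"
)

// Filter transforms a response body on its way back to the client; the
// Data Filtering Framework chains several of these.
type Filter interface {
	Name() string
	Filter(body string) string
}

// masterServiceFilter rewrites the apiserver address in a response to the
// local YurtHub address, standing in for the real MasterService filter.
type masterServiceFilter struct{ yurthubAddr string }

func (f masterServiceFilter) Name() string { return "masterservice" }
func (f masterServiceFilter) Filter(body string) string {
	return strings.ReplaceAll(body, "10.96.0.1:443", f.yurthubAddr)
}

// applyFilters runs a response through every filter in order.
func applyFilters(body string, filters []Filter) string {
	for _, f := range filters {
		body = f.Filter(body)
	}
	return body
}

func main() {
	filters := []Filter{masterServiceFilter{yurthubAddr: "169.254.2.1:10268"}}
	out := applyFilters(`{"host":"10.96.0.1:443"}`, filters)
	fmt.Println(out) // {"host":"169.254.2.1:10268"}
}
```

A ServiceTopology or DiscardCloudService filter would slot into the same chain, each rewriting the endpoint data it cares about.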