v1.2.0
What's New
Improve edge autonomy capability when the cloud-edge network is disconnected
The original edge autonomy feature keeps the pods on a node from being evicted even if the node crashes; it is enabled by adding an annotation to the node, and it is recommended for scenarios where pods must stay bound to a node without being recreated.
With the improved edge autonomy capability, when a node is NotReady only because the cloud-edge network is disconnected, its pods will not be evicted:
the leader yurthub in the node pool proxies the heartbeats of these offline nodes to the cloud via the pool-coordinator component.
If the node itself crashes, its pods will still be evicted and recreated on other ready nodes.
Note that the original edge autonomy capability enabled by annotating a node (with node.beta.openyurt.io/autonomy) is kept as it is,
and it affects all pods on the annotated node. In addition, a new annotation (apps.openyurt.io/binding) can be added to a workload to
enable the original edge autonomy capability for that workload's pods only.
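As a sketch, the two annotations described above would be applied roughly as follows. The node and workload names are placeholders, and the `"true"` values are an assumption about the expected annotation value, so check the OpenYurt docs for the exact convention:

```yaml
# Node-level autonomy (original capability): keeps ALL pods on this node
# from being evicted when the node goes NotReady.
apiVersion: v1
kind: Node
metadata:
  name: edge-node-1                       # placeholder node name
  annotations:
    node.beta.openyurt.io/autonomy: "true"   # value assumed; see OpenYurt docs
---
# Workload-level binding (new): only this workload's pods stay bound
# to their node, instead of all pods on the node.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app                          # placeholder workload name
  annotations:
    apps.openyurt.io/binding: "true"         # value assumed; see OpenYurt docs
```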
Reduce the control-plane traffic between cloud and edge
Based on the pool-coordinator in a node pool, a leader yurthub is elected within the pool. The leader yurthub
lists/watches pool-scope data (such as endpoints/endpointslices) from the cloud and writes it into the pool-coordinator. All components
(such as kube-proxy/coredns) in the node pool then read pool-scope data from the pool-coordinator instead of the cloud kube-apiserver,
so a large volume of control-plane traffic is eliminated.
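The traffic-saving idea above can be illustrated with a toy in-memory simulation. All class and function names here are illustrative, not actual OpenYurt APIs: a single elected leader syncs pool-scope objects from the cloud into a shared pool cache, and every other component in the pool reads from that cache instead of the cloud.

```python
# Toy simulation of pool-scoped data sharing; names are illustrative,
# NOT actual OpenYurt APIs.

class CloudAPIServer:
    """Stands in for the cloud kube-apiserver."""
    def __init__(self):
        self.objects = {}
        self.list_calls = 0  # counts expensive cloud round-trips

    def list(self, kind):
        self.list_calls += 1
        return dict(self.objects.get(kind, {}))

class PoolCoordinator:
    """In-pool cache, written only by the leader yurthub."""
    def __init__(self):
        self.cache = {}

    def write(self, kind, objs):
        self.cache[kind] = dict(objs)

    def read(self, kind):
        return dict(self.cache.get(kind, {}))

class LeaderYurthub:
    """Elected leader: the only component that talks to the cloud
    for pool-scope data."""
    def __init__(self, cloud, coordinator):
        self.cloud = cloud
        self.coordinator = coordinator

    def sync(self, kind):
        self.coordinator.write(kind, self.cloud.list(kind))

def component_get(coordinator, kind):
    """Pool components (kube-proxy/coredns stand-ins) read from the
    coordinator, never from the cloud."""
    return coordinator.read(kind)

cloud = CloudAPIServer()
cloud.objects["endpoints"] = {"svc-a": ["10.0.0.1"], "svc-b": ["10.0.0.2"]}
coordinator = PoolCoordinator()
leader = LeaderYurthub(cloud, coordinator)
leader.sync("endpoints")

# 100 pool components each fetch endpoints, yet only ONE cloud list occurred.
results = [component_get(coordinator, "endpoints") for _ in range(100)]
print(cloud.list_calls)     # 1
print(results[0]["svc-a"])  # ['10.0.0.1']
```

The point of the sketch is the fan-out: cloud traffic scales with the number of leaders (one per pool), not with the number of nodes or components.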
Use raven component to replace yurt-tunnel component
Raven has released version v0.3 and provides cross-regional network communication based on PodIP or NodeIP, while yurt-tunnel
can only forward cloud-edge requests for kubectl logs/exec commands. Because raven offers far more than the capabilities
provided by yurt-tunnel, and has been proven in extensive use, raven is now officially recommended as the replacement for yurt-tunnel.
Other Notable Changes
- proposal of yurtadm join refactoring by @YTGhost in #1048
- [Proposal] edgex auto-collector proposal by @LavenderQAQ in #1051
- add timeout config in yurthub to handle those watch requests by @AndyEWang in #1056
- refactor yurtadm join by @YTGhost in #1049
- expose helm values for yurthub cacheagents by @huiwq1990 in #1062
- refactor yurthub cache to adapt different storages by @Congrool in #882
- add proposal of static pod upgrade model by @xavier-hou in #1065
- refactor yurtadm reset by @YTGhost in #1075
- bugfix: update the dependency yurt-app-manager-api from v0.18.8 to v0.6.0 by @YTGhost in #1115
- Feature: yurtadm reset/join modification. Do not remove k8s binaries, add flag for using local cni binaries. by @Windrow14 in #1124
- Improve certificate manager by @rambohe-ch in #1133
- fix: update package dependencies by @fengshunli in #1149
- fix: add common builder by @fengshunli in #1152
- generate yurtadm docs by @huiwq1990 in #1159
- add inclusterconfig filter for commenting kube-proxy configmap by @rambohe-ch in #1158
- delete yurt tunnel helm charts by @River-sh in #1161
Fixes
- bugfix: StreamResponseFilter of data filter framework can't work if size of one object is over 32KB by @rambohe-ch in #1066
- bugfix: add ignore preflight errors to adapt kubeadm before version 1.23.0 by @YTGhost in #1092
- bugfix: dynamically switch apiVersion of JoinConfiguration to adapt to different versions of k8s by @YTGhost in #1112
- bugfix: yurthub can not exit when SIGINT/SIGTERM happened by @rambohe-ch in #1143
Contributors
Thank you to everyone who contributed to this release! ❤
- @YTGhost
- @Congrool
- @LavenderQAQ
- @AndyEWang
- @huiwq1990
- @rudolf-chy
- @xavier-hou
- @gbtyy
- @huweihuang
- @zzguang
- @Windrow14
- @fengshunli
- @gnunu
- @luc99hen
- @donychen1134
- @LindaYu17
- @fujitatomoya
- @River-sh
- @rambohe-ch
And thank you very much to everyone else not listed here who contributed in other ways, like filing issues,
giving feedback, and helping users in the community group.