Abstract: The scheduler has been greatly enhanced in Karmada v1.2, which also delivers initial distributed search engine support, practical command-line tools such as logs and watch built on the aggregated API, and customizable status collection in the Resource Interpreter.
This article is shared from the Huawei Cloud community post "Karmada v1.2 Released: Opening a New Era of Full-Text Search", by: The Future of Cloud Containers.
As with previous releases, v1.2 remains compatible with earlier versions.

New features
Global full-text search for K8s resources
This release provides a component named "karmada-search", which caches resource objects and events deployed in member clusters and exposes retrieval services externally through a search API.
Unlike the query service offered by the aggregated API, cache-based retrieval does not need to access the target clusters, which suits scenarios that search and analyze data frequently, such as building real-time resource and event dashboards.
Users can use the ResourceRegistry API to specify which resource types to cache and from which data sources (target clusters). For example, the following configuration caches Deployment resources from both the member1 and member2 clusters:
```yaml
apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: foo
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  targetCluster:
    clusterNames:
      - member1
      - member2
```
After submitting the configuration to karmada-apiserver, you can search through the search API. For example, to query the cached Deployments:
```
# kubectl get --raw /apis/search.karmada.io/v1alpha1/search/cache/apis/apps/v1/deployments
{
  "apiVersion": "v1",
  "kind": "List",
  "metadata": {},
  "items": [
    {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "metadata": {
        "annotations": {
          "cluster.karmada.io/name": "member1"
        },
        ...
      }
    },
    ...
  ]
}
```
In this URL, /apis/search.karmada.io/v1alpha1/search/cache is a fixed prefix, and the part that follows is exactly the same as the corresponding Kubernetes native API path.
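Because the suffix mirrors the native API, more specific queries follow the same pattern. A minimal sketch of a namespace-scoped query, derived from the path rule above:

```
# List cached Deployments only in the default namespace,
# mirroring the native path /apis/apps/v1/namespaces/default/deployments
kubectl get --raw /apis/search.karmada.io/v1alpha1/search/cache/apis/apps/v1/namespaces/default/deployments
```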
In addition, `karmada-search` also supports third-party storage, such as search engines (Elasticsearch or OpenSearch), relational databases (e.g., MySQL), and graph databases; the current version supports search engines only.
With the powerful data search and analysis capabilities of a search engine, resource objects and events can be freely combined into all kinds of reports, for example statistics on image pulls or node OOM events.
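For a taste of how a registry points at a search-engine backend, here is a minimal sketch; the backendStore field layout, the endpoint address, and the secret are assumptions for illustration, so check the Release Notes linked below for the exact schema:

```yaml
apiVersion: search.karmada.io/v1alpha1
kind: ResourceRegistry
metadata:
  name: foo-with-backend
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
  targetCluster:
    clusterNames:
      - member1
  # Assumed backend configuration: persist cached objects to OpenSearch/Elasticsearch
  backendStore:
    openSearch:
      addresses:
        - https://opensearch.example.com:9200   # hypothetical endpoint
      secretRef:
        namespace: karmada-system               # hypothetical credentials secret
        name: opensearch-credentials
```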
The Karmada aggregated API begins to shine
The aggregated API first became available in v1.0. It aggregates the API entry points of multiple clusters, so users can query resource objects in member clusters in real time through the Karmada control plane alone.
In v1.2, this capability powers several command-line tools commonly used in day-to-day operations.
karmadactl get: query resources across clusters
```
# karmadactl get deployment -n default
NAME      CLUSTER   READY   UP-TO-DATE   AVAILABLE   AGE     ADOPTION
nginx     member1   2/2     2            2           33h     N
nginx     member2   1/1     1            1           4m38s   Y
podinfo   member3   2/2     2            2           27h     N
```
In v1.2, the get command has been reimplemented on top of the aggregated API and now also supports querying clusters in Pull mode.
karmadactl logs: query container logs across clusters
```
# ./karmadactl logs nginx-6799fc88d8-9mpxn -c nginx -C member1
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
...
```
Container logs can now be queried from the Karmada control plane.
Besides logs and get, this release also adds a new watch command for monitoring resource changes across clusters and an exec command for executing commands in remote containers; a usage sketch follows.
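A rough sketch of how these might be invoked; the -C cluster flag follows the convention of the logs example above, but the exact arguments are assumptions, so verify with karmadactl --help:

```
# Watch a Deployment in a member cluster from the Karmada control plane
karmadactl watch deployment/nginx -n default -C member1

# Execute a command in a container running in a member cluster
karmadactl exec nginx-6799fc88d8-9mpxn -n default -C member1 -- nginx -v
```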
The Karmada Resource Interpreter supports customized status collection
Since the Resource Interpreter feature was introduced in Karmada v0.10, the community has kept enriching what it can do. In v1.2, we added support for the InterpretStatus operation, which lets users customize how the status of Kubernetes resources is collected, including custom resources defined via a CustomResourceDefinition (CRD).
Karmada manages both resources and resource status across clusters. After a user distributes a resource to different member clusters through a propagation policy, Karmada collects the status of the corresponding resource from each member cluster and aggregates it into the resource template on the control plane, where the user can view it conveniently. For example, the status of a Deployment:
```yaml
# kubectl get deployments.apps nginx -oyaml
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
  labels:
    app: nginx
    propagationpolicy.karmada.io/name: nginx-propagation
    propagationpolicy.karmada.io/namespace: default
  name: nginx
  namespace: default
spec:
  ...
status:
  availableReplicas: 2
  readyReplicas: 2
  replicas: 2
  updatedReplicas: 2
```
During status collection, Karmada originally collected the entire Status of each resource. The direct consequence is a lot of redundant status information, which makes Karmada consume more memory and storage and in turn limits the scale of resources Karmada can manage.
The introduction of the InterpretStatus operation solves this problem well. By calling a user-defined status collection hook, Karmada can obtain a collected and processed resource status with the redundant information stripped out. This is especially meaningful for CRD resources: Karmada does not know the Status definition of a CRD, which means built-in logic cannot process its status information. With this capability, users can now customize the status collection behavior, as sketched below.
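The hook is registered through a ResourceInterpreterWebhookConfiguration. A minimal sketch along the lines of the Resource Interpreter guide linked below; the CRD group, webhook service URL, and certificate are placeholders:

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterWebhookConfiguration
metadata:
  name: examples
webhooks:
  - name: workloads.example.com
    rules:
      - operations: ["InterpretStatus"]   # collect a trimmed status for this CRD
        apiGroups: ["workload.example.io"]
        apiVersions: ["v1alpha1"]
        kinds: ["Workload"]
    clientConfig:
      # Hypothetical webhook service serving the status collection hook
      url: https://interpreter-webhook.karmada-system.svc:443/interpreter-workload
      caBundle: "<base64-encoded CA certificate>"
    interpreterContextVersions: ["v1alpha1"]
    timeoutSeconds: 3
```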
Scheduler enhancements: descheduling and failure eviction
New component: karmada-descheduler
The newly added karmada-descheduler component rebalances scheduling results. For example, over time, Pods in a cluster may become unschedulable because cluster resources have changed (such as a Node failure). karmada-descheduler can evict such resources, triggering the Karmada scheduler to reschedule and reassign the workload to a newly available cluster.
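karmada-descheduler runs as its own component on the control plane. A sketch of its container command; the image tag, flag name, and interval are assumptions for illustration, so check the component's --help output:

```yaml
# Fragment of a karmada-descheduler Deployment (values are examples only)
containers:
  - name: karmada-descheduler
    image: karmada/karmada-descheduler:v1.2.0
    command:
      - /bin/karmada-descheduler
      - --kubeconfig=/etc/kubeconfig     # kubeconfig pointing at karmada-apiserver
      - --descheduling-interval=2m       # assumed flag: how often to evaluate eviction
```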
New plugins
A new scoring plugin named ClusterLocality has been added. It scores clusters according to scheduling constraints and cluster locality, helping the scheduler make more accurate placement decisions.
A new filter plugin named SpreadConstraint has been added, which filters out clusters that do not satisfy the spread constraints.
Building on these two plugins, this release also adds scheduling support for spread-by-region, so users can conveniently deploy by region, as the sketch below shows.
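A minimal sketch of a region-spread placement in a PropagationPolicy; the resource name and group counts are placeholders:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    spreadConstraints:
      # Spread the workload across exactly two regions
      - spreadByField: region
        minGroups: 2
        maxGroups: 2
```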
Failover and eviction
This release also starts optimizing failover and eviction; the community plans to finish the work in the next release. Some optimizations have already landed in this version, such as more reasonable failure detection rules and starting eviction only once a failure has outlasted a deadline.
The controller adds a new --cluster-failure-threshold flag to specify the cluster failure tolerance period. Only when a failure lasts beyond this period does Karmada judge the cluster as failed, avoiding false positives caused by network jitter.
The controller also adds a --failover-eviction-timeout flag to specify the eviction tolerance time. Once a cluster has been failed beyond this period, Karmada starts failure eviction (the current version only taints the cluster; the eviction capability will arrive in the next release). A sketch of setting both flags follows.
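For illustration, the two flags might be wired into karmada-controller-manager like this; the durations are example values, not documented defaults:

```yaml
# Fragment of the karmada-controller-manager Deployment (values are examples only)
containers:
  - name: karmada-controller-manager
    command:
      - /bin/karmada-controller-manager
      - --cluster-failure-threshold=30s   # tolerate transient unhealthiness for 30s
      - --failover-eviction-timeout=5m    # taint the cluster after 5m of sustained failure
```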
Integrating the surrounding ecosystem through native APIs: security compliance governance and cross-cluster Helm chart distribution
In Karmada v1.2, Karmada integrates the surrounding Kubernetes ecosystem through native Kubernetes APIs, providing users with practice cases for security compliance governance policies and cross-cluster Helm chart distribution on Karmada.
Security compliance governance
When applying Karmada in a production environment, workloads often need to be audited, validated, and mutated for security, compliance, and other governance purposes, for example in the following scenarios:
- To make it easy to identify which cluster an application belongs to in monitoring and logging, we want the Pods of a business application to carry appropriate labels identifying the Pod's cluster and business role.
- To prevent supply-chain attacks, or to speed up image loading in specific clusters, we restrict applications in a specific cluster to pulling images only from a specific registry, with the pull policy always set to "Always".
- While Karmada improves multi-cluster operations, it also needs to limit destructive behavior by malicious users, or operator mistakes during maintenance, such as deleting a namespace that still has workloads running.
For all of the above, Karmada needs a security policy engine to meet the security compliance governance requirements of multiple clusters. Thanks to native APIs, Karmada can integrate mainstream CNCF security policy engine projects non-intrusively. The community currently provides integration practices for Gatekeeper and Kyverno, from which users can derive their own Karmada security compliance policies based on their production requirements; a flavor of such a policy is sketched below.
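Here is a hedged Kyverno sketch enforcing the "pull policy Always" scenario above; the names are placeholders, and the real integration steps are in the Kyverno practice guide linked at the end:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-image-pull-policy-always
spec:
  validationFailureAction: enforce
  rules:
    - name: check-image-pull-policy
      match:
        resources:
          kinds:
            - Pod
      validate:
        message: "imagePullPolicy must be Always."
        pattern:
          spec:
            containers:
              # Every container must pull its image on each start
              - imagePullPolicy: Always
```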
Helm chart distribution across clusters
Karmada currently implements cross-cluster distribution of native Kubernetes resources, but in a production environment an application usually contains more than one workload; it also includes RBAC resources, ConfigMaps, Secrets, and so on. This requires Karmada to be able to distribute whole applications across clusters. Users usually choose Helm charts to package their applications, and in the cross-cluster scenario Karmada integrates with Flux to distribute Helm charts across clusters with ease. Better still, with Karmada's OverridePolicy, users can customize the application for specific clusters and manage cross-cluster applications on a unified Karmada control plane, as the sketch below illustrates.
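The integration boils down to propagating Flux's custom resources with ordinary Karmada policies. A minimal sketch, assuming a Flux HelmRelease named podinfo already exists on the Karmada control plane (names are placeholders; see the Flux practice guide below):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: helm-release-propagation
spec:
  resourceSelectors:
    # Distribute the Flux HelmRelease itself; Flux in each member
    # cluster then reconciles the chart locally
    - apiVersion: helm.toolkit.fluxcd.io/v2beta1
      kind: HelmRelease
      name: podinfo
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
```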
Thanks to our contributors
The Karmada v1.2 release contains hundreds of code commits from 48 contributors. We would like to express our heartfelt thanks to all of them:
Contributor GitHub IDs:
@AllenZMC
@anu491
@carlory
@CharlesQQ
@chaunceyjiang
@chinmaym07
@CuiDengdeng
@dddddai
@duanmeng
@duanmengkk
@ErikJiang
@fanzhihai0215
@fleeto
@Garrybest
@gf457832386
@gy95
@hanweisen
@huiwq1990
@huntsman-li
@huone1
@ikaven1024
@jameszhangyukun
@kerthcet
@learner0810
@lfbear
@likakuli
@liys87x
@lonelyCZ
@lvyanru8200
@mikeshng
@mrlihanbo
@my-git9
@pangsq
@pigletfly
@Poor12
@prodanlabs
@RainbowMango
@sayaoailun
@snowplayfire
@stingshen
@Tingtal
@wuyingjun-lucky
@wwwnay
@XiShanYongYe-Chang
@xyz2277
@YueHonghui
@zgfh
@zirain
Reference links
- Release Notes: https://github.com/karmada-io/karmada/releases/tag/v1.2.0
- Resource Interpreter guide: https://github.com/karmada-io/karmada/blob/master/docs/userguide/customizing-resource-interpreter.md
- Security compliance practice (Gatekeeper): https://github.com/karmada-io/karmada/blob/master/docs/working-with-gatekeeper.md
- Security compliance practice (Kyverno): https://github.com/karmada-io/karmada/blob/master/docs/working-with-kyverno.md
- Cross-cluster Helm chart distribution practice: https://github.com/karmada-io/karmada/blob/master/docs/working-with-flux.md
Add the assistant's WeChat (putong3333) and reply "Karmada" to join the discussion group.









