Custom resources with kubebuilder

I had been reading about Kubernetes custom resources online for a long time, but only reading, without ever building one myself. This post is largely based on another blog post, which I have simplified: all I want is for my service to be reachable from outside the cluster via node IP + port, and for the resources it owns to be managed under a single lifecycle.


1. Scaffold a custom resource with kubebuilder

For installing kubebuilder, see my earlier post.

Create a new directory under $GOPATH/src and cd into it, then scaffold the project: kubebuilder init generates the project skeleton, kubebuilder create api generates the controller and types, and kubebuilder create webhook generates the webhook files:

[root@master src]# mkdir servicemanager                       
[root@master src]# cd servicemanager/                         
[root@master servicemanager]# kubebuilder init --domain servicemanager.io  
Writing scaffold for you to edit...
Get controller runtime:
$ go get sigs.k8s.io/[email protected]
Update go.mod:
$ go mod tidy
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
Next: define a resource with:
$ kubebuilder create api
[root@master servicemanager]# kubebuilder create api --group servicemanager --version v1 --kind ServiceManager     
Create Resource [y/n]
y
Create Controller [y/n]
y
Writing scaffold for you to edit...
api/v1/servicemanager_types.go
controllers/servicemanager_controller.go
Running make:
$ make
/usr/local/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
go fmt ./...
go vet ./...
go build -o bin/manager main.go
[root@master servicemanager]# kubebuilder create webhook --group servicemanager --version v1 --kind ServiceManager --defaulting --programmatic-validation
Writing scaffold for you to edit...
api/v1/servicemanager_webhook.go

The generated directory structure looks like this:

.
├── api
│   └── v1
│       ├── groupversion_info.go // GVK info and the scheme registration helpers live here
│       ├── servicemanager_types.go // the custom CRD types; this is the file to edit
│       ├── servicemanager_webhook.go // webhook-related code
│       └── zz_generated.deepcopy.go // generated deepcopy functions
├── bin
│   └── manager // the compiled manager binary
├── config // everything that eventually gets kubectl apply'd, split into directories by function; parts of it can be customized
│   ├── certmanager
│   │   ├── certificate.yaml
│   │   ├── kustomization.yaml
│   │   └── kustomizeconfig.yaml
│   ├── crd // CRD configuration
│   │   ├── kustomization.yaml
│   │   ├── kustomizeconfig.yaml
│   │   └── patches
│   │       ├── cainjection_in_servicemanagers.yaml
│   │       └── webhook_in_servicemanagers.yaml
│   ├── default
│   │   ├── kustomization.yaml
│   │   ├── manager_auth_proxy_patch.yaml
│   │   ├── manager_webhook_patch.yaml
│   │   └── webhookcainjection_patch.yaml
│   ├── manager // the manager Deployment lives here
│   │   ├── kustomization.yaml
│   │   └── manager.yaml
│   ├── prometheus // metrics exposure
│   │   ├── kustomization.yaml
│   │   └── monitor.yaml
│   ├── rbac // RBAC rules
│   │   ├── auth_proxy_client_clusterrole.yaml
│   │   ├── auth_proxy_role_binding.yaml
│   │   ├── auth_proxy_role.yaml
│   │   ├── auth_proxy_service.yaml
│   │   ├── kustomization.yaml
│   │   ├── leader_election_role_binding.yaml
│   │   ├── leader_election_role.yaml
│   │   ├── role_binding.yaml
│   │   ├── servicemanager_editor_role.yaml
│   │   └── servicemanager_viewer_role.yaml
│   ├── samples // a sample custom resource YAML
│   │   └── servicemanager_v1_servicemanager.yaml
│   └── webhook // the webhook Service that receives webhook requests forwarded by the API server
│       ├── kustomization.yaml
│       ├── kustomizeconfig.yaml
│       └── service.yaml
├── controllers 
│   ├── servicemanager_controller.go // the core CRD controller logic lives here
│   └── suite_test.go
├── Dockerfile // Dockerfile for building the controller image
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go // program entry point
├── Makefile // build targets
└── PROJECT // project metadata

2. Edit servicemanager_types.go

type ServiceManagerSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Category selects the workload to create and can only be Deployment or Statefulset;
	// the marker below enforces those two values.
	// +kubebuilder:validation:Enum=Deployment;Statefulset
	Category string `json:"category,omitempty"`

	// Label selector
	Selector map[string]string `json:"selector,omitempty"`

	// Pod template used by the owned Deployment/StatefulSet
	Template corev1.PodTemplateSpec `json:"template,omitempty"`

	// Number of replicas, at most 10
	// +kubebuilder:validation:Maximum=10
	Replicas *int32 `json:"replicas,omitempty"`

	// Service port (also used as the NodePort), at most 65535
	// +kubebuilder:validation:Maximum=65535
	Port *int32 `json:"port,omitempty"`

	// Container (target) port, at most 65535
	// +kubebuilder:validation:Maximum=65535
	Targetport *int32 `json:"targetport,omitempty"`
}

// ServiceManagerStatus defines the observed state of ServiceManager
type ServiceManagerStatus struct {
	// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
	// Important: Run "make" to regenerate code after modifying this file
	Replicas int32 `json:"replicas,omitempty"`
	LastUpdateTime metav1.Time `json:"last_update_time,omitempty"`
	DeploymentStatus appsv1.DeploymentStatus `json:"deployment_status,omitempty"`
	ServiceStatus corev1.ServiceStatus `json:"service_status,omitempty"`
}
// Here, Spec and Status are both plain member fields of ServiceManager. Unlike Pod.Status,
// Status is not automatically a subresource, so calling Status().Update() in the controller
// fails with the error: the server could not find the requested resource
// To follow the same convention Kubernetes itself uses, enable the status subresource with
// +kubebuilder:subresource:status, so that:
// users can only set the spec of a CRD instance;
// the status of a CRD instance is only changed by the controller.

// +kubebuilder:object:root=true
// +kubebuilder:subresource:status
// +kubebuilder:subresource:scale:selectorpath=.spec.selector,specpath=.spec.replicas,statuspath=.status.replicas
// ServiceManager is the Schema for the servicemanagers API
type ServiceManager struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   ServiceManagerSpec   `json:"spec,omitempty"`
	Status ServiceManagerStatus `json:"status,omitempty"`
}

// +kubebuilder:object:root=true

// ServiceManagerList contains a list of ServiceManager
type ServiceManagerList struct {
	metav1.TypeMeta `json:",inline"`
	metav1.ListMeta `json:"metadata,omitempty"`
	Items           []ServiceManager `json:"items"`
}

// +kubebuilder:subresource:status must be added, otherwise updating the resource status will fail with a "not found" error.
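To make the split concrete, here is a minimal sketch (not part of the original code; the helper name is made up, and it assumes the usual controller-runtime client plus context and metav1 imports) of how the two halves are written once the status subresource is enabled:

// Minimal sketch (hypothetical helper): with the status subresource enabled,
// spec/metadata and status are written through different endpoints.
func updateSpecAndStatus(ctx context.Context, c client.Client, sm *ServiceManager) error {
	// Spec and metadata changes go through the normal Update call ...
	if err := c.Update(ctx, sm); err != nil {
		return err
	}
	// ... while status changes must go through the status subresource client.
	sm.Status.LastUpdateTime = metav1.Now()
	return c.Status().Update(ctx, sm)
}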

3. Edit servicemanager_controller.go

Define an interface for the owned (child) resources: building the object, checking whether it already exists, updating the owner's status from it, and applying it to the cluster:

type OwnResource interface {
	// Build the desired child object from the ServiceManager instance
	MakeOwnResource(instance *servicemanagerv1.ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error)

	// Check whether the child resource already exists in the cluster
	OwnResourceExist(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error)

	// Read the child resource's status and fold it into the custom resource's status
	UpdateOwnerResources(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger) error

	// Create or update the child resource in the cluster
	ApplyOwnResource(instance *servicemanagerv1.ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error
}

A struct then implements these four methods. Take the Service as an example:

type OwnService struct {
	Port *int32
}

// Build the desired Service object from the ServiceManager instance
func (ownService *OwnService) MakeOwnResource(instance *ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error) {
	var label = map[string]string{
		"app": instance.Name,
	}
	objectMeta := metav1.ObjectMeta{
		Name:      instance.Name,
		Namespace: instance.Namespace,
	}
	servicePort := []corev1.ServicePort{
		{
			TargetPort: intstr.IntOrString{Type: intstr.Int, IntVal: *instance.Spec.Targetport},
			NodePort:   *instance.Spec.Port,
			Port:       *instance.Spec.Port,
		},
	}
	serviceSpec := corev1.ServiceSpec{
		Selector: label,
		Type:     corev1.ServiceTypeNodePort,
		Ports:    servicePort,
	}
	service := &corev1.Service{
		ObjectMeta: objectMeta,
		Spec:       serviceSpec,
	}
	// Set the ServiceManager as the owner so the Service is garbage-collected with it
	if err := controllerutil.SetControllerReference(instance, service, scheme); err != nil {
		msg := fmt.Sprintf("set controllerReference for service %s/%s failed", instance.Namespace, instance.Name)
		logger.Error(err, msg)
		return nil, err
	}
	return service, nil
}

// Check whether the Service already exists in the cluster
func (ownService *OwnService) OwnResourceExist(instance *ServiceManager, client client.Client, logger logr.Logger) (bool, interface{}, error) {

	service := &corev1.Service{}
	// Look the Service up in the cluster
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		// NotFound just means it has not been created yet, which is not a real error
		if apierrors.IsNotFound(err) { // apierrors "k8s.io/apimachinery/pkg/api/errors"
			return false, nil, nil
		}
		return false, nil, err
	}

	return true, service, nil
}

// Read the Service's status and fold it into the ServiceManager's status
func (ownService *OwnService) UpdateOwnerResources(instance *ServiceManager, client client.Client, logger logr.Logger) error {

	service := &corev1.Service{}
	if err := client.Get(context.Background(), types.NamespacedName{Name: instance.Name, Namespace: instance.Namespace}, service); err != nil {
		logger.Error(err, "failed to get the Service")
		return err
	}

	instance.Status.LastUpdateTime = metav1.Now()
	instance.Status.ServiceStatus = service.Status

	return nil
}

// Create or update the Service in the cluster
func (ownService *OwnService) ApplyOwnResource(instance *ServiceManager, client client.Client, logger logr.Logger, scheme *runtime.Scheme) error {

	// First check whether the resource already exists
	exist, found, err := ownService.OwnResourceExist(instance, client, logger)
	if err != nil {
		logger.Error(err, "failed to check whether the Service exists")
		// return err
	}

	// Build the desired Service
	service, err := ownService.MakeOwnResource(instance, logger, scheme)
	if err != nil {
		logger.Error(err, "failed to build the Service object")
		return err
	}
	newService, ok := service.(*corev1.Service)
	if !ok {
		err := fmt.Errorf("object is not a *corev1.Service")
		logger.Error(err, "type assertion failed")
		return err
	}
	if exist {
		// Update
		foundService, ok := found.(*corev1.Service)
		if !ok {
			err := fmt.Errorf("existing object is not a *corev1.Service")
			logger.Error(err, "type assertion failed")
			return err
		}
		// Gotcha: clusterIP is usually left empty before the Service is created; the API server
		// assigns one and writes it back into spec.clusterIP, so it has to be copied over here,
		// otherwise the DeepEqual below would always see a difference. SessionAffinity behaves the same way.
		newService.Spec.ClusterIP = foundService.Spec.ClusterIP
		newService.Spec.SessionAffinity = foundService.Spec.SessionAffinity
		newService.ObjectMeta.ResourceVersion = foundService.ObjectMeta.ResourceVersion
		// Only update the Service if the specs actually differ
		if foundService != nil && !reflect.DeepEqual(foundService.Spec, newService.Spec) {
			if err := client.Update(context.Background(), newService); err != nil {
				logger.Error(err, "failed to update the Service")
				return err
			}
		}
	} else {
		// Create
		if err := client.Create(context.Background(), newService); err != nil {
			logger.Error(err, "failed to create the Service")
			return err
		}
	}
	return nil

}
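The Deployment counterpart, OwnDeployment, implements the same four methods and is referenced later in getOwnResource. Its full listing is not reproduced in this post; the following is only a rough sketch of what its MakeOwnResource could look like. The OwnDeployment field set and the label handling are assumptions rather than the original code, and the remaining three methods would mirror the Service versions with *appsv1.Deployment in place of *corev1.Service.

// OwnDeployment is sketched here; its real field set is an assumption.
type OwnDeployment struct {
	Category string
}

// Build the desired Deployment object from the ServiceManager instance
func (ownDeployment *OwnDeployment) MakeOwnResource(instance *ServiceManager, logger logr.Logger, scheme *runtime.Scheme) (interface{}, error) {
	label := map[string]string{"app": instance.Name}
	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:      instance.Name,
			Namespace: instance.Namespace,
			Labels:    label,
		},
		Spec: appsv1.DeploymentSpec{
			Replicas: instance.Spec.Replicas,
			Selector: &metav1.LabelSelector{MatchLabels: label},
			Template: instance.Spec.Template,
		},
	}
	// The pod template must carry the same labels the selector matches on,
	// and the Service selector ("app": instance.Name) relies on them too.
	deployment.Spec.Template.ObjectMeta.Labels = label
	if err := controllerutil.SetControllerReference(instance, deployment, scheme); err != nil {
		logger.Error(err, "set controllerReference for deployment failed")
		return nil, err
	}
	return deployment, nil
}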

Then modify the Reconcile method in the controllers package; this is where the actual reconciliation happens:

// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=servicemanager.servicemanager.io,resources=servicemanagers/status,verbs=get;update;patch
// +kubebuilder:rbac:groups=apps,resources=statefulsets,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=apps,resources=deployments,verbs=get;list;watch;create;update;patch;delete
// +kubebuilder:rbac:groups=core,resources=services,verbs=get;list;watch;create;update;patch;delete
func (r *ServiceManagerReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	ctx := context.Background()
	logger := r.Log.WithValues("servicemanager", req.NamespacedName)
	serviceManager := &servicemanagerv1.ServiceManager{}

	if err := r.Get(ctx, req.NamespacedName, serviceManager); err != nil {
		logger.Error(err, "failed to get ServiceManager")
		// Ignore NotFound: the object has probably been deleted and there is nothing left to do
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	// The object exists; collect the child resources it should own
	ownResources, err := r.getOwnResource(serviceManager)
	if err != nil {
		logger.Error(err, "failed to collect the owned resources")
	}
	var success = true
	for _, ownResource := range ownResources {
		// Create or update each child resource
		if err := ownResource.ApplyOwnResource(serviceManager, r.Client, logger, r.Scheme); err != nil {
			success = false
		}
	}

	// Read back the child resources' status and update the custom resource's status
	newServiceManager := serviceManager.DeepCopy()
	for _, ownResource := range ownResources {
		if err := ownResource.UpdateOwnerResources(newServiceManager, r.Client, logger); err != nil {
			success = false
		}
	}

	// Only write the status if it actually changed
	if newServiceManager != nil && !reflect.DeepEqual(serviceManager.Status, newServiceManager.Status) {
		if err := r.Status().Update(ctx, newServiceManager); err != nil {
			// Not treated as fatal here
			r.Log.Error(err, "unable to update ServiceManager status")
		}

	}

	if !success {
		// Reconciliation failed; requeue the request so it is retried
		logger.Info("failed to apply the owned resources, requeueing")
		return ctrl.Result{Requeue: true}, err
	}
	logger.Info("owned resources applied successfully")
	return ctrl.Result{}, nil
}
func (r *ServiceManagerReconciler) getOwnResource(instance *servicemanagerv1.ServiceManager) ([]OwnResource, error) {
	var ownResources []OwnResource
	if instance.Spec.Category == "Deployment" {
		ownDeployment := &servicemanagerv1.OwnDeployment{
			Category: instance.Spec.Category,
		}
		ownResources = append(ownResources, ownDeployment)

	} else {
		// StatefulSet support is left for later
		/*ownStatefulSet := &servicemanagerv1.OwnStatefulSet{
			Spec: appsv1.StatefulSetSpec{
				Replicas:    instance.Spec.Replicas,
				Selector:    instance.Spec.Selector,
				Template:    instance.Spec.Template,
				ServiceName: instance.Name,
			},
		}

		ownResources = append(ownResources, ownStatefulSet)*/
	}

	if instance.Spec.Port != nil {
		ownService := &servicemanagerv1.OwnService{
			Port: instance.Spec.Port,
		}
		ownResources = append(ownResources, ownService)
	}

	return ownResources, nil

}
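One more thing worth wiring up in the same file is SetupWithManager. The scaffolded version only watches ServiceManager objects; if you also want changes to the owned Deployments and Services (for example, someone deleting one by hand) to re-trigger reconciliation, register them with Owns(). A sketch, assuming appsv1 (k8s.io/api/apps/v1) and corev1 (k8s.io/api/core/v1) are imported:

func (r *ServiceManagerReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&servicemanagerv1.ServiceManager{}).
		// Owns() makes events on child objects carrying our ownerReference
		// enqueue the parent ServiceManager for reconciliation.
		Owns(&appsv1.Deployment{}).
		Owns(&corev1.Service{}).
		Complete(r)
}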

4. Edit servicemanager_webhook.go. This file intercepts requests to the Kubernetes API server for this resource, so you can mutate the object (for example, fill in defaults) and add validation.

/*


Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package v1

import (
	"k8s.io/apimachinery/pkg/runtime"
	ctrl "sigs.k8s.io/controller-runtime"
	logf "sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/webhook"
)

// log is for logging in this package.
var servicemanagerlog = logf.Log.WithName("servicemanager-resource")

func (r *ServiceManager) SetupWebhookWithManager(mgr ctrl.Manager) error {
	return ctrl.NewWebhookManagedBy(mgr).
		For(r).
		Complete()
}

// EDIT THIS FILE!  THIS IS SCAFFOLDING FOR YOU TO OWN!

// +kubebuilder:webhook:path=/mutate-servicemanager-servicemanager-io-v1-servicemanager,mutating=true,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,verbs=create;update,versions=v1,name=mservicemanager.kb.io

var _ webhook.Defaulter = &ServiceManager{}

// Default implements webhook.Defaulter so a webhook will be registered for the type
// This method can mutate the object, for example to fill in default values
func (r *ServiceManager) Default() {
	servicemanagerlog.Info("default", "name", r.Name)

	// TODO(user): fill in your defaulting logic.
}

// TODO(user): change verbs to "verbs=create;update;delete" if you want to enable deletion validation.
// +kubebuilder:webhook:verbs=create;update,path=/validate-servicemanager-servicemanager-io-v1-servicemanager,mutating=false,failurePolicy=fail,groups=servicemanager.servicemanager.io,resources=servicemanagers,versions=v1,name=vservicemanager.kb.io

var _ webhook.Validator = &ServiceManager{}

// ValidateCreate implements webhook.Validator so a webhook will be registered for the type
// The methods below are used for validation
func (r *ServiceManager) ValidateCreate() error {
	servicemanagerlog.Info("validate create", "name", r.Name)

	// TODO(user): fill in your validation logic upon object creation.
	return nil
}

// ValidateUpdate implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateUpdate(old runtime.Object) error {
	servicemanagerlog.Info("validate update", "name", r.Name)

	// TODO(user): fill in your validation logic upon object update.
	return nil
}

// ValidateDelete implements webhook.Validator so a webhook will be registered for the type
func (r *ServiceManager) ValidateDelete() error {
	servicemanagerlog.Info("validate delete", "name", r.Name)

	// TODO(user): fill in your validation logic upon object deletion.
	return nil
}
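The scaffolded bodies above are empty. Purely as an illustration (not from the original post), the empty stubs could be filled in along these lines, defaulting the replica count and rejecting an unknown category or an out-of-range NodePort; validateSpec is a made-up helper, and fmt has to be imported:

func (r *ServiceManager) Default() {
	servicemanagerlog.Info("default", "name", r.Name)
	// Example defaulting: one replica unless the user asked for more.
	if r.Spec.Replicas == nil {
		replicas := int32(1)
		r.Spec.Replicas = &replicas
	}
}

func (r *ServiceManager) ValidateCreate() error {
	servicemanagerlog.Info("validate create", "name", r.Name)
	return r.validateSpec()
}

func (r *ServiceManager) ValidateUpdate(old runtime.Object) error {
	servicemanagerlog.Info("validate update", "name", r.Name)
	return r.validateSpec()
}

// validateSpec is a hypothetical helper, not part of the scaffold.
func (r *ServiceManager) validateSpec() error {
	if r.Spec.Category != "Deployment" && r.Spec.Category != "Statefulset" {
		return fmt.Errorf("spec.category must be Deployment or Statefulset, got %q", r.Spec.Category)
	}
	// NodePort values outside the default 30000-32767 range would be rejected by the API server anyway.
	if r.Spec.Port != nil && (*r.Spec.Port < 30000 || *r.Spec.Port > 32767) {
		return fmt.Errorf("spec.port %d is outside the default NodePort range 30000-32767", *r.Spec.Port)
	}
	return nil
}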

Concepts that come up here, such as webhooks and finalizers, are worth reading up on if you are interested.
The meaning of the various kubebuilder markers is explained very clearly in the official documentation.

The custom CRD is now basically written; how do we run it locally?
Edit config/default/kustomization.yaml and uncomment the webhook and cert-manager related sections (the comments in the file explain which ones).
Edit config/crd/kustomization.yaml and, following its comments, uncomment the webhook/cainjection patch entries.
Edit the deploy target in the Makefile so the kustomize output is written to a single all_in_one.yaml file instead of being applied directly, and point the image at your own registry:

export IMAGE="my.registry.com:5000/unit-controller:tmp"
make deploy IMG=${IMAGE}

This ends up producing an all_in_one.yaml file of six-thousand-plus lines. Two changes are needed before applying it:
1. Add a new field under CustomResourceDefinition.spec in the YAML: preserveUnknownFields: false
2. Adjust the MutatingWebhookConfiguration and ValidatingWebhookConfiguration.
What needs to change in these two webhook configurations? Take the generated MutatingWebhookConfiguration as an example.

What follows is adapted from the referenced blog, which explains this in detail.
There are two things to modify:

caBundle is currently empty and has to be filled in.
clientConfig currently points the CA-authorized endpoint at the Service unit-webhook-service, i.e. requests would be forwarded to the deployment's pod; since we want to debug locally, it has to point at the local environment instead.
The next sections describe how to configure both.

CA certificate signing
This is done in several steps:

1. ca.cert
First obtain the cluster CA's ca.cert file:

kubectl config view --raw -o json | jq -r '.clusters[0].cluster."certificate-authority-data"' | tr -d '"' > ca.cert

The content of ca.cert can then be copied into webhooks.clientConfig.caBundle of the MutatingWebhookConfiguration and ValidatingWebhookConfiguration above (delete the original Cg== placeholder).

2. csr
Create the JSON config file for the certificate signing request.

Note that hosts contains two kinds of entries:

the in-cluster DNS names of the controller's webhook Service, since the controller will eventually run inside Kubernetes;
an IP address of a NIC on the local development machine, used to connect to the cluster for debugging, so this IP must be reachable from the Kubernetes cluster.

cat > unit-csr.json << EOF
{
  "hosts": [
    "unit-webhook-service.default.svc",
    "unit-webhook-service.default.svc.cluster.local",
    "192.168.254.1"
  ],
  "CN": "unit-webhook-service",
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF

3. Generate the CSR and the PEM private key:

[root@vm254011 unit]# cat unit-csr.json | cfssl genkey - | cfssljson -bare unit
2020/05/23 17:44:39 [INFO] generate received request
2020/05/23 17:44:39 [INFO] received CSR
2020/05/23 17:44:39 [INFO] generating key: rsa-2048
2020/05/23 17:44:39 [INFO] encoded CSR
[root@vm254011 unit]#
[root@vm254011 unit]# ls unit*
unit.csr  unit-csr.json  unit-key.pem

4. Create the CertificateSigningRequest resource

cat > csr.yaml << EOF 
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: unit
spec:
  request: $(cat unit.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF
# apply
kubectl apply -f csr.yaml

5. Submit this CertificateSigningRequest to the cluster and check its status:

[root@vm254011 unit]# kubectl apply -f csr.yaml
certificatesigningrequest.certificates.k8s.io/unit created
[root@vm254011 unit]# kubectl describe csr unit
Name:         unit
Labels:       <none>
...
CreationTimestamp:  Sat, 23 May 2020 17:56:14 +0800
Requesting User:    kubernetes-admin
Status:             Pending
Subject:
  Common Name:    unit-webhook-service
  Serial Number:
Subject Alternative Names:
         DNS Names:     unit-webhook-service.default.svc
                        unit-webhook-service.default.svc.cluster.local
         IP Addresses:  192.168.254.1
Events:  <none>

It is still in the Pending state, so approve the request:

[root@vm254011 unit]# kubectl certificate approve unit
certificatesigningrequest.certificates.k8s.io/unit approved
[root@vm254011 unit]#
[root@vm254011 unit]# kubectl get csr unit
NAME   AGE    REQUESTOR          CONDITION
unit   111s   kubernetes-admin   Approved,Issued
# save the issued certificate
[root@vm254011 unit]# kubectl get csr unit -o jsonpath='{.status.certificate}' | base64 --decode > unit.crt

As you can see, the certificate has now been issued.

To summarize:

the ca.cert file from step 1 is used for the caBundle field;
the unit-key.pem private key from step 3 and the unit.crt certificate from step 5 are used by the controller's HTTPS webhook server.
Updating the WebhookConfiguration
Using the certificate material generated above, replace the corresponding parts of the WebhookConfiguration in all_in_one.yaml. After the replacement:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-mutating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/mutate-custom-my-crd-com-v1-unit
#    service:
#      name: unit-webhook-service
#      namespace: default
#      path: /mutate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: munit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units
---
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  creationTimestamp: null
  name: unit-validating-webhook-configuration
webhooks:
- clientConfig:
    caBundle: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhNakEzTkRNeE0xb1hEVE13TURVeE1EQTNORE14TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTG5CCmRvZVRHNTlYMkZsYXRoN1RhRnYrZ2hjbGxsV0NLbkxuT1hQLzZydE0wdE92U0RCQjV2UVJsNUF0L3BWMEJucmQKZGtyOWRnMWRKSHp1T05WamkxTml6QVdUbWtSbDBKczMrdjFMUzBCY2xLeU5XbWRQM0NNUWl2M1BDbjNISG9rcgoveDZncnFaa3RxeUo2ck5JMXFocmkzbjNLSWFQWFBtYUJIeW1zWCt1UjQyMk1kaGNhU3dBUDQwUktzcUtWcS81CkRodzdHdVZzdFZHNG5GZUZ2dlFuYU1jVm13WUpyellFQWxNRitlSyswM3IyWEFLQUZxQnBEWXBaZlg1Wi9tUEsKVXlxNlIwcEJUaG9adXlwSUhQekwwMkJGazlDbmU3eTBXd1d6L1VleDJSN2toOVJhendNeVVTNlJKYU4wT2hRaQpsTTZyM2lZcnIzVWIxSW1ieE5NQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFENHVNaVZpL28zSkVhVi9UZzVKRWhQK2tQZm8KVzBLeUtaT3FNVlZzRVZsM1l2aFdYdGxOaCtwT0ZHSTlPQVFZdE5NKzZDeEJLVm9Xd1NzSUpyYkpZeVR2bGFlYgpHZnJGZWRkL2NkM0N5M2N1UDQ0ZjRPQ3VabTZWckJUVy8wUms3LzVKMHlLTmlSSDVqelRJL0szZGtKWkNERktOCjRGdWZxZ3Y0QTNxdVYwQXJaNFNOV2poVEx2SlM1VVdaOUpxUndyU3NqNlpvenRJRVhiU1d2aWhyS2FGQmtoWWwKRG5KM2N4cFljYXJ0aVZqS1g3SUNQQTJxdmw1azF4ZEMwVldTQWlLdTVFR24zZkFmdkQwN2poeVBub3lkMjVmWApQeDlkaGlzaDgwaFl4Nm9pbHpHdUppMGZDNjgxZ0VRRTQzUGhNRHRCZHNKMTBEejRQYTdrL2QvY3hETT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    url: https://192.168.254.1:9443/validate-custom-my-crd-com-v1-unit
#    service:
#      name: unit-webhook-service
#      namespace: default
#      path: /validate-custom-my-crd-com-v1-unit
  failurePolicy: Fail
  name: vunit.kb.io
  rules:
  - apiGroups:
    - custom.my.crd.com
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - units

Note: the IP address in url must be the local development machine's IP, and this IP must be able to communicate with the Kubernetes cluster; the URI path is the original service path.

Once both WebhookConfigurations have been modified, the next step is to deploy the all_in_one.yaml file. Because the controller will be run and debugged locally for now, remember to comment out the Deployment resource in all_in_one.yaml at this stage.

[root@vm254011 unit]# kubectl apply -f all_in_one.local.yaml  --validate=false

namespace/unit-system created
customresourcedefinition.apiextensions.k8s.io/units.custom.my.crd.com created
mutatingwebhookconfiguration.admissionregistration.k8s.io/unit-mutating-webhook-configuration created
role.rbac.authorization.k8s.io/unit-leader-election-role created
clusterrole.rbac.authorization.k8s.io/unit-manager-role created
clusterrole.rbac.authorization.k8s.io/unit-proxy-role created
clusterrole.rbac.authorization.k8s.io/unit-metrics-reader created
rolebinding.rbac.authorization.k8s.io/unit-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/unit-proxy-rolebinding created
service/unit-controller-manager-metrics-service created
service/unit-webhook-service created
validatingwebhookconfiguration.admissionregistration.k8s.io/unit-validating-webhook-configuration created

The CRD, webhook resources, and RBAC on the Kubernetes side are now all in place; the next step is to start the controller locally and debug it.
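Before starting it, make sure the local webhook server can actually serve HTTPS: controller-runtime listens on the port used in the url fields above (9443 here) and reads tls.crt/tls.key from a certificate directory, so copy unit.crt and unit-key.pem there under those names. Below is a sketch of the relevant ctrl.Options in main.go; the CertDir path is made up, and exposing Port/CertDir directly on the manager options is an assumption about this controller-runtime version:

mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
	Scheme:             scheme,
	MetricsBindAddress: metricsAddr,
	// Must match the port in the WebhookConfiguration url (https://192.168.254.1:9443/...).
	Port: 9443,
	// Where the webhook server looks for tls.crt / tls.key; copy the signed
	// certificate and key here (unit.crt -> tls.crt, unit-key.pem -> tls.key).
	// If left empty it falls back to <temp dir>/k8s-webhook-server/serving-certs.
	CertDir: "/etc/unit-webhook-certs", // hypothetical path
})
if err != nil {
	setupLog.Error(err, "unable to start manager")
	os.Exit(1)
}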

Create a ServiceManager custom resource:

apiVersion: servicemanager.servicemanager.io/v1
kind: ServiceManager
metadata:
  name: servicemanager-sample
spec:
  # Add fields here
  category: Deployment
  #selector:
    #app: servicemanager-sample
  replicas: 2
  port: 30027 # nodePort and service port
  targetport: 80 #container port
  template:
    metadata:
      name: servicemanager-sample
    spec:
      containers:
        - image: nginx
          imagePullPolicy: IfNotPresent
          name: servicemanager-sample
          resources:
            limits:
              cpu: 110m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 128Mi

The service can now be reached via node IP + port.
