Kubernetes Learning 4 -- Resource Quota Management (Tenant Quotas)

In the previous post we looked at Namespaces and saw that they can be used to partition and restrict the resources available in a cluster. In this post we will look at how to actually specify resource quotas.

Test environment: two node servers, each with 1 CPU and 1Gi of memory. To check this yourself, see the earlier article on querying version/CPU/memory/disk information under CentOS 7, or list all nodes with kubectl get nodes and then run kubectl describe node nodeName to see a node's details, including its capacity; the latter approach is recommended.


1. Specifying Container Quotas

Applying quota management to a specific container is very simple: setting the resources attribute in a Pod or ReplicationController definition file is enough to assign a quota to that container. Currently containers support quota limits on two kinds of resources, CPU and memory.

The RC definition file below adds a resource quota declaration.

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: php-controller
      labels:
        name: php-controller
    spec:
      replicas: 1
      selector:
        name: php-test-pod
      template:
        metadata:
          labels:
            name: php-test-pod
        spec:
          containers:
          - name: php-test
            image: 192.168.174.131:5000/php-base:1.0
            env:
            - name: ENV_TEST_1
              value: env_test_1
            - name: ENV_TEST_2
              value: env_test_2
            ports:
            - containerPort: 80
            resources:
              limits:
                cpu: 0.5
                memory: 512Mi


    [[email protected] k8s]# kubectl create -f php-controller.yaml   
    replicationcontroller "php-controller" created  
    [[email protected] k8s]# kubectl get pods -o wide  
    NAME                   READY     STATUS    RESTARTS   AGE       NODE  
    php-controller-qx1wg   1/1       Running   0          47s       192.168.174.130  
    [[email protected] k8s]# kubectl describe pod php-controller-qx1wg   
    Name:           php-controller-qx1wg  
    Namespace:      default  
    Node:           192.168.174.130/192.168.174.130  
    Start Time:     Fri, 11 Nov 2016 17:27:21 +0800  
    Labels:         name=php-test-pod  
    Status:         Running  
    IP:             172.17.42.2  
    Controllers:    ReplicationController/php-controller  
    Containers:  
      php-test:  
        Container ID:       docker://1777abd035e7cc3c8dee9eb27487e76424aff61a26143b0f23fb0c411415ed5b  
        Image:              192.168.174.131:5000/php-base:1.0  
        Image ID:           docker://sha256:104c7334b9624b054994856318e54b6d1de94c9747ab9f73cf25ae5c240a4de2  
        Port:               80/TCP  
        QoS Tier:  
          memory:   Guaranteed  
          cpu:      Guaranteed  
        Limits:  
          cpu:      500m  
          memory:   512Mi  
        Requests:  
          cpu:              500m  
          memory:           512Mi  
        State:              Running  
          Started:          Fri, 11 Nov 2016 17:27:22 +0800  
        Ready:              True  
        Restart Count:      0  
        Environment Variables:  
          ENV_TEST_1:       env_test_1  
          ENV_TEST_2:       env_test_2  
    Conditions:  
      Type          Status  
      Ready         True   
    No volumes.  
    Events:  
      FirstSeen     LastSeen        Count   From                            SubobjectPath                   Type            Reason                  Message  
      ---------     --------        -----   ----                            -------------                   --------        ------                  -------  
      1m            1m              2       {kubelet 192.168.174.130}                                       Warning         MissingClusterDNS       kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.  
      1m            1m              1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Pulled                  Container image "192.168.174.131:5000/php-base:1.0" already present on machine  
      1m            1m              1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Created                 Created container with docker id 1777abd035e7  
      1m            1m              1       {kubelet 192.168.174.130}       spec.containers{php-test}       Normal          Started                 Started container with docker id 1777abd035e7  
      1m            1m              1       {default-scheduler }                                            Normal          Scheduled               Successfully assigned php-controller-qx1wg to 192.168.174.130  

What happens if the declared resource quota exceeds the physical machine's actual CPU and memory? Does creation simply fail? Let's test by changing cpu to 3 and memory to 3Gi.
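Only the resources section of the RC definition needs to change; a sketch of the modified fragment:

    resources:
      limits:
        cpu: 3        # more than the 1 CPU either node actually has
        memory: 3Gi   # more than the 1Gi of memory either node actually has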

    [[email protected] k8s]# kubectl get pods -o wide
    NAME                   READY     STATUS    RESTARTS   AGE       NODE
    php-controller-47ufd   0/1       Pending   0

The Pod's status is Pending. Let's look at the Pod's details with kubectl describe:

    [[email protected] k8s]# kubectl describe pod php-controller-47ufd   
    Name:           php-controller-47ufd  
    Namespace:      default  
    Node:           /  
    Labels:         name=php-test-pod  
    Status:         Pending  
    IP:  
    Controllers:    ReplicationController/php-controller  
    Containers:  
      php-test:  
        Image:      192.168.174.131:5000/php-base:1.0  
        Port:       80/TCP  
        QoS Tier:  
          cpu:      Guaranteed  
          memory:   Guaranteed  
        Limits:  
          memory:   3Gi  
          cpu:      3  
        Requests:  
          memory:   3Gi  
          cpu:      3  
        Environment Variables:  
          ENV_TEST_1:       env_test_1  
          ENV_TEST_2:       env_test_2  
    No volumes.  
    Events:  
      FirstSeen     LastSeen        Count   From                    SubobjectPath   Type            Reason                  Message  
      ---------     --------        -----   ----                    -------------   --------        ------                  -------  
      3m            3m              1       {default-scheduler }                    Warning         FailedScheduling        pod (php-controller-47ufd) failed to fit in any node  
    fit failure on node (192.168.174.131): Node didn't have enough resource: CPU, requested: 3000, used: 0, capacity: 1000  
    fit failure on node (192.168.174.130): Node didn't have enough resource: CPU, requested: 3000, used: 0, capacity: 1000  
      
      3m    15s     13      {default-scheduler }            Warning FailedScheduling        pod (php-controller-47ufd) failed to fit in any node  
    fit failure on node (192.168.174.130): Node didn't have enough resource: CPU, requested: 3000, used: 0, capacity: 1000  
    fit failure on node (192.168.174.131): Node didn't have enough resource: CPU, requested: 3000, used: 0, capacity: 1000  

In the Events section you can see that the scheduler first tried to place the Pod on node 192.168.174.131, where only 1000m of CPU is available but 3000m was requested, so that node lacked the resources. It then tried node 192.168.174.130 and found the same shortage. The Pod therefore cannot reach Running and stays Pending, with the scheduler repeatedly re-checking all nodes until one of them has enough resources.
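CPU quantities are accounted in millicores (1 full core = 1000m), which is why the limit of 0.5 showed up as 500m in the earlier describe output and why the scheduler reports "requested: 3000, capacity: 1000" here. The two notations are interchangeable, for example:

    resources:
      limits:
        cpu: 500m       # identical to cpu: 0.5
        memory: 512Mi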

2. Global Default Quotas

Besides adding resource quota parameters directly to a container in its (or the RC's) definition file, we can also create a LimitRange object to define a global default quota template. This default template is applied to every Pod and container created in its namespace, so we no longer have to repeat the settings by hand for each Pod and container.

A LimitRange object can set resource constraints at both the Pod level and the Container level. Once the LimitRange has been created and is in effect, every Pod created afterwards is constrained by the quotas it defines.

Create a pod-container-limitRange.yaml file:


    apiVersion: v1  
    kind: LimitRange  
    metadata:  
      name: limit-range-test  
    spec:  
      limits:  
      - type: Pod  
        max:  
          cpu: 1  
          memory: 1Gi  
        min:  
          cpu: 0.5  
          memory: 216Mi  
      - type: Container  
        max:  
          cpu: 1  
          memory: 1Gi  
        min:  
          cpu: 0.5  
          memory: 216Mi  
        default:  
          cpu: 0.5  
          memory: 512Mi  

The settings above mean:

  1) For any Pod, the combined CPU usage of all its containers is limited to 0.5--1 and the combined memory usage to 216Mi--1Gi.

  2) Any single container's CPU usage is limited to 0.5--1 with a default of 0.5, and its memory usage to 216Mi--1Gi with a default of 512Mi.
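Since the Pod-level limits apply to the sum over all containers in a Pod, a two-container Pod such as the sketch below (container names are hypothetical; the image is the one used throughout this article) just fits: 0.5 + 0.5 CPU and 512Mi + 512Mi memory reach exactly the Pod maximum of 1 CPU / 1Gi.

    spec:
      containers:
      - name: php-test-a                # hypothetical container name
        image: 192.168.174.131:5000/php-base:1.0
        resources:
          limits:
            cpu: 0.5
            memory: 512Mi
      - name: php-test-b                # hypothetical container name
        image: 192.168.174.131:5000/php-base:1.0
        resources:
          limits:
            cpu: 0.5
            memory: 512Mi
      # Pod totals: cpu 1, memory 1Gi -- exactly the Pod-level max of the LimitRange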


    [[email protected] k8s]# kubectl create -f pod-container-limitRange.yaml   
    limitrange "limit-range-test" created  
    [[email protected] k8s]# kubectl get limitrange   
    NAME               AGE  
    limit-range-test   12s  
    [[email protected] k8s]# kubectl describe limitrange limit-range-test   
    Name:           limit-range-test  
    Namespace:      default  
    Type            Resource        Min     Max     Default Request Default Limit   Max Limit/Request Ratio  
    ----            --------        ---     ---     --------------- -------------   -----------------------  
    Pod             memory          216Mi   1Gi     -               -               -  
    Pod             cpu             500m    1       -               -               -  
    Container       cpu             500m    1       500m            500m            -  
    Container       memory          216Mi   1Gi     512Mi           512Mi           -  
      
      
    [[email protected] k8s]#   
Write a Pod or RC without specifying resources, then inspect the Pod; you will see that cpu defaults to 500m and memory to 512Mi.
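For instance, a minimal sketch of such a Pod, reusing the image from this article (the Pod name is hypothetical); after creating it, kubectl describe pod should show Limits and Requests of cpu 500m / memory 512Mi injected by the LimitRange:

    apiVersion: v1
    kind: Pod
    metadata:
      name: php-default-test            # hypothetical name
    spec:
      containers:
      - name: php-test
        image: 192.168.174.131:5000/php-base:1.0
        ports:
        - containerPort: 80
        # no resources section: the LimitRange defaults (cpu 500m, memory 512Mi) apply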

  In addition, if we do specify quota parameters in a Pod or RC definition file, the local settings override the global defaults. Of course, if the requested quota exceeds the global maximum, kubectl create will report an error. We won't demonstrate every case here.
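As a sketch of the error case (not demonstrated here), a container whose limit exceeds the LimitRange maximum would be rejected when created, for example:

    resources:
      limits:
        cpu: 2          # exceeds the container max of 1 in limit-range-test; kubectl create is rejected
        memory: 512Mi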

   A LimitRange is also bound to a Namespace: if the definition file does not specify one, it defaults to default, and kubectl describe limitrange limit-range-test accordingly shows Namespace: default.
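To bind the same LimitRange to a different namespace, only the metadata needs an explicit namespace field (the spec stays as above); a sketch of the metadata, using the development namespace created in the next section:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: limit-range-test
      namespace: development    # omit this and the LimitRange lands in the default namespace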

3. Multi-Tenant Quota Management

In Kubernetes, multi-tenancy is expressed through Namespaces; a tenant here can be a user, a business system, or one of several isolated working environments. A cluster's resources are always finite, and when the cluster is shared by applications from multiple tenants, making better use of these limited shared resources means raising the unit of quota management to the tenant level. This is achieved simply by attaching a ResourceQuota configuration to each tenant's Namespace.

   Create the development Namespace:

    [[email protected] k8s]# cat namespace-dev.yaml   
    apiVersion: v1  
    kind: Namespace  
    metadata:  
       name: development  
       labels:  
         name: development  


   After creating the development Namespace above, a describe shows that it has no resource quota or resource limit restrictions at all:
    [[email protected] k8s]# kubectl describe namespace development   
    Name:   development  
    Labels: name=development  
    Status: Active  
      
    No resource quota.  
      
    No resource limits.  

We currently have two Node servers, 2 CPUs and 2Gi of memory in total; let's give the development namespace a quota of cpu 1 and memory 1Gi.

    Create dev-resourcequota.yaml. A ResourceQuota can constrain not only cpu and memory but also resources such as pods/services/replicationcontrollers/resourcequotas/secrets/configmaps/persistentvolumeclaims/services.nodeports/services.loadbalancers/requests.cpu/requests.memory/limits.cpu/limits.memory/storage. Note that the namespace must not be omitted from the definition file, otherwise it defaults to default (if the file does not specify a namespace, kubectl create -f xxFile --namespace=development also works).

    apiVersion: v1  
    kind: ResourceQuota  
    metadata:  
       name: quota-development  
       namespace: development  
    spec:  
      hard:  
        cpu: 1  
        memory: 1Gi  
        persistentvolumeclaims: 10  
        pods: 50  
        replicationcontrollers: 20  
        resourcequotas: 1  
        secrets: 20  
        services: 20  



    [[email protected] k8s]# kubectl create -f dev-resourcequota.yaml   
    resourcequota "quota-development" created  

    [[email protected] k8s]# kubectl get resourcequotas --namespace=development
    NAME                AGE
    quota-development   1m


    [[email protected] k8s]# kubectl describe resourcequota quota-development --namespace=development  
    Name:                   quota-development  
    Namespace:              development  
    Resource                Used    Hard  
    --------                ----    ----  
    cpu                     0       1  
    memory                  0       1Gi  
    persistentvolumeclaims  0       10  
    pods                    0       50  
    replicationcontrollers  0       20  
    resourcequotas          1       1  
    secrets                 0       20  
    services                0       20  


You can also check the namespace's information at this point:


    [[email protected] k8s]# kubectl describe namespace development   
    Name:   development  
    Labels: name=development  
    Status: Active  
      
    Resource Quotas  
     Name:                  quota-development  
     Resource               Used    Hard  
     --------               ---     ---  
     cpu                    0       1  
     memory                 0       1Gi  
     persistentvolumeclaims 0       10  
     pods                   0       50  
     replicationcontrollers 0       20  
     resourcequotas         1       1  
     secrets                0       20  
     services               0       20  
      
    No resource limits.  
You can see that the Resource Quota is now in force. As for resource limits, just follow the global default quota steps above (the definition file should specify the namespace; if it does not, kubectl create -f xxFile --namespace=development also works).

Resource limits constrain the cpu/memory of individual Pods and containers, whereas a resource quota constrains the namespace as a whole: the sum across all Pods/containers must not exceed the quota.
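As a rough illustration of the difference, with the quota above (cpu 1, memory 1Gi) the namespace has room for at most two containers sized like the sketch below, even though each one individually is a perfectly legal request:

    # 2 x 500m CPU = 1000m and 2 x 512Mi = 1Gi exhaust quota-development,
    # so a third container like this would be rejected by the quota.
    resources:
      limits:
        cpu: 0.5
        memory: 512Mi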


  Once the ResourceQuota has been created, every Pod created in the namespace must specify concrete resource settings, otherwise Pod creation fails. For example, the following Pod has no resources section and is rejected:

    apiVersion: v1  
    kind: Pod  
    metadata:  
       name: php-test  
       labels:   
         name: php-test  
       namespace: development  
    spec:  
      containers:  
      - name: php-test  
        image: 192.168.174.131:5000/php-base:1.0  
        env:  
        - name: ENV_TEST_1  
          value: env_test_1  
        - name: ENV_TEST_2  
          value: env_test_2  
        ports:  
        - containerPort: 80  
          hostPort: 80  


    [[email protected] k8s]# kubectl create -f php-pod.yaml   
    Error from server: error when creating "php-pod.yaml": pods "php-test" is forbidden: Failed quota: quota-development: must specify cpu,limits.cpu,limits.memory,memory  
    [[email protected] k8s]#
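A sketch of a corrected definition for the same Pod: adding a resources section with limits is enough, because requests then default to the same values (as seen in the describe output of section 1), which satisfies the cpu, limits.cpu, limits.memory and memory items demanded by the quota:

    apiVersion: v1
    kind: Pod
    metadata:
      name: php-test
      labels:
        name: php-test
      namespace: development
    spec:
      containers:
      - name: php-test
        image: 192.168.174.131:5000/php-base:1.0
        env:
        - name: ENV_TEST_1
          value: env_test_1
        - name: ENV_TEST_2
          value: env_test_2
        ports:
        - containerPort: 80
          hostPort: 80
        resources:
          limits:
            cpu: 0.5          # counted against the namespace quota (cpu: 1)
            memory: 512Mi     # counted against the namespace quota (memory: 1Gi)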