Running Kubernetes in Local LXD Containers (by quqi99)


Copyright notice: This article may be reposted freely, but when reposting please indicate the original source and author information, together with this copyright notice, as a hyperlink (http://blog.csdn.net/quqi99)

Problem

This article runs Kubernetes inside local LXD containers.

What is Kubernetes

For an introduction to Kubernetes, see my earlier blog post.

Installing LXD

For how to install LXD, see my earlier blog post.
This article is similar to the earlier one on running containerized OpenStack on LXD; see my blog for that as well.

Installing Kubernetes on LXD

1. Download 'canonical_kubernetes.zip' from the link below; it contains the bundle.yaml used in the steps that follow.

curl https://api.jujucharms.com/charmstore/v5/canonical-kubernetes/archive -o canonical_kubernetes.zip
unzip canonical_kubernetes.zip

2. Run 'juju bootstrap'. Note: do not modify the profile before running this step.

sudo snap install juju --classic
#export PATH=/snap/bin:$PATH
sudo lxc network set lxdbr0 ipv6.address none
sudo chown -R hua ~/.config/lxc
juju bootstrap --debug --config bootstrap-series=xenial --config agent-stream=devel localhost lxd-controller
juju status

3. This step creates the juju-kubernetes profile.

juju add-model kubernetes
juju models
lxc profile show juju-kubernetes

4. Modify the juju-kubernetes profile.

#sudo apt-get install --reinstall linux-image-extra-$(uname -r)
sudo modprobe nbd
sudo modprobe ebtables
sudo modprobe ip_tables
sudo modprobe ip6_tables
sudo modprobe netlink_diag
sudo modprobe openvswitch
sudo modprobe nf_nat
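
These modprobe calls only last until the next host reboot. Optionally (not part of the original steps, assuming a systemd-based Ubuntu host), the modules can be made persistent:

#systemd-modules-load reads /etc/modules-load.d/*.conf at boot, one module name per line
cat << EOF | sudo tee /etc/modules-load.d/lxd-k8s.conf
nbd
ebtables
ip_tables
ip6_tables
netlink_diag
openvswitch
nf_nat
EOF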

#https://github.com/juju-solutions/bundle-canonical-kubernetes/wiki/Deploying-on-LXD
cat << EOF > juju-kubernetes.yaml
name: juju-kubernetes
config:
  user.user-data: |
    #cloud-config
    ssh_authorized_keys:
      - @@SSH_PUB_KEY@@
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk
EOF
#ssh-keygen
sed -ri "s'@@SSH_PUB_KEY@@'$(cat ~/.ssh/id_rsa.pub)'" juju-kubernetes.yaml
lxc profile edit "juju-kubernetes" < juju-kubernetes.yaml
lxc profile show juju-kubernetes

5. Deploy Kubernetes with a single juju command.

juju deploy bundle.yaml
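
Deployment takes a while. A common way to watch progress until every unit reports 'active/idle' (an optional step, not in the original post) is:

#Refresh 'juju status' every 2 seconds, keeping colors
watch -c juju status --color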

Installing, configuring and verifying Kubernetes

#Install kubectl as a snap and copy the k8s config out of the cluster using juju
sudo snap install kubectl --classic
mkdir -p ~/.kube
juju scp kubernetes-master/0:config ~/.kube/config
#For the k8s UI experience, get the URL and credentials using
kubectl config view
kubectl cluster-info
kubectl -s https://10.241.244.49:443 get componentstatuses
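
One way (not in the original post, but a standard kubectl feature) to reach the dashboard URL reported by 'kubectl cluster-info' without handling client certificates in the browser is to tunnel through kubectl:

#Run a local proxy to the apiserver (binds 127.0.0.1:8001 by default)
kubectl proxy
#Then open the same dashboard path shown by 'kubectl cluster-info', but via the proxy, e.g.:
#http://127.0.0.1:8001/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy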

Verification output:

root@host:~# cat ~/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...
    server: https://10.241.244.49:443
  name: juju-cluster
contexts:
- context:
    cluster: juju-cluster
    user: admin
  name: juju-context
current-context: juju-context
kind: Config
preferences: {}
users:
- name: admin
  user:
    as-user-extra: {}
    password: 9nvGaeQYtu3PSCpMYk6tKFRExoq29pwT
    username: admin

root@host:~# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://10.241.244.49:443
  name: juju-cluster
contexts:
- context:
    cluster: juju-cluster
    user: admin
  name: juju-context
current-context: juju-context
kind: Config
preferences: {}
users:
- name: admin
  user:
    password: 9nvGaeQYtu3PSCpMYk6tKFRExoq29pwT
    username: admin

root@host:~# kubectl cluster-info
Kubernetes master is running at https://10.241.244.49:443
Heapster is running at https://10.241.244.49:443/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://10.241.244.49:443/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://10.241.244.49:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at https://10.241.244.49:443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at https://10.241.244.49:443/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

root@host:~# kubectl -s https://10.241.244.49:443 get componentstatuses
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok                   
controller-manager   Healthy   ok                   
etcd-1               Healthy   {"health": "true"}   
etcd-2               Healthy   {"health": "true"}   
etcd-0               Healthy   {"health": "true"}  

root@host:~# kubectl -s https://10.241.244.49:443 get nodes
NAME            STATUS    ROLES     AGE       VERSION
juju-893965-6   Ready     <none>    10h       v1.8.2
juju-893965-7   Ready     <none>    10h       v1.8.2
juju-893965-8   Ready     <none>    10h       v1.8.2

Orchestrating an application into containers with Kubernetes

Take examples/guestbook from the official Kubernetes release as the example. It is a typical web application split into a Frontend and a Redis backend, with the Redis backend further split into a Redis Master and Redis Slaves. From it we can see:
The Kubernetes data model is very generic (Pod, Replication Controller, Service, Label, Node), so orchestrating a containerized application with Kubernetes only requires writing an RC (Replication Controller) template and a Service template in YAML for each microservice the application is decomposed into. Kubernetes orchestrates only containers, whereas Juju orchestrates virtual machines as well as containers, so its data model is even more generic. As shown in the figure below, Juju's data model is a tree (Cloud, Bundle, Charm, Service, Application, Relation, Machine):

  • Machine, equivalent to a Node in Kubernetes.
  • Bundle, an abstraction of a distributed application; one Bundle can contain multiple Charms.
  • Charm, equivalent to a module making up the application (i.e. a microservice; for example, if OpenStack were a Bundle, then neutron could be a Charm). In Kubernetes you write RC and Service templates in YAML; in Juju you write a Charm. One Charm can contain multiple Services.
  • Service, equivalent to a Service in Kubernetes; one Service can contain multiple Applications and Relations. A Service can be deployed for HA across multiple Machines, and one Machine can host multiple Services, just as in Kubernetes a Service can be backed by Pods deployed for HA across multiple Nodes, and one Node can host multiple Pods.
  • Application, since Juju orchestrates virtual machines in addition to containers, it has an extra Application data model that Kubernetes does not have.
  • Relation, which comes in two kinds, Provides and Requires. Kubernetes defines relationships via the selector element in YAML, while Juju defines them centrally in the relations section of a bundle YAML; the two are similar (see the bundle sketch after the figure below).
  • Unit, equivalent to a Container inside a Pod in Kubernetes (Kubernetes manages at the granularity of a Pod; a Unit corresponds to a container, one level smaller than a Pod). Kubernetes uses Labels so that a Replication Controller can distinguish Pods for HA; Juju achieves HA through 'juju add-unit' together with a separate haproxy charm.
  • Cloud, Juju also supports the concept of a Cloud and can deploy simultaneously onto bare metal, virtualized environments and public clouds; Kubernetes can do this as well.

[Figure: Juju's tree-like data model (Cloud, Bundle, Charm, Service, Application, Relation, Machine)]
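
To make the Relation comparison concrete, here is a minimal, hypothetical Juju bundle sketch (the charms and unit counts are illustrative only, not the canonical-kubernetes bundle) showing how applications and relations are declared centrally in one YAML file, much like a Kubernetes Service selects Pods via a selector:

# Hypothetical bundle.yaml sketch, deployable with 'juju deploy ./bundle.yaml'
series: xenial
applications:
  mysql:
    charm: cs:mysql
    num_units: 1
  wordpress:
    charm: cs:wordpress
    num_units: 1
relations:
- - wordpress:db   # 'requires' side of the relation
  - mysql:db       # 'provides' side of the relation

Scaling a charm out for HA, as mentioned for Unit above, would then be e.g. 'juju add-unit wordpress -n 2'.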

wget https://github.com/kubernetes/kubernetes/releases/download/v1.1.1/kubernetes.tar.gz
tar -xf kubernetes.tar.gz && cd kubernetes/examples/guestbook

1. Create the Redis Master Replication Controller template, and create the Pod from it.

root@test1:~/kubernetes/examples/guestbook# kubectl create -f redis-master-controller.yaml 
replicationcontroller "redis-master" created
root@test1:~/kubernetes/examples/guestbook# kubectl get replicationcontroller redis-master
NAME           DESIRED   CURRENT   READY     AGE
redis-master   1         1         1         39s
root@test1:~/kubernetes/examples/guestbook# kubectl get replicationcontroller
NAME                       DESIRED   CURRENT   READY     AGE
default-http-backend       1         1         1         10h
nginx-ingress-controller   3         3         3         10h
redis-master               1         1         1         56s
root@test1:~/kubernetes/examples/guestbook# kubectl get pod --selector name=redis-master
NAME                 READY     STATUS    RESTARTS   AGE
redis-master-xbxwc   1/1       Running   0          1m

root@test1:~/kubernetes/examples/guestbook# cat redis-master-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  replicas: 1
  selector:
    name: redis-master
  template:
    metadata:
      labels:
        name: redis-master
    spec:
      containers:
      - name: master
        image: redis
        ports:
        - containerPort: 6379

2. Create the Redis Master Service template, and create the Service from it.

#The selector in the redis-master-service.yaml template specifies that this Service is associated with the Pods labeled redis-master
root@test1:~/kubernetes/examples/guestbook# cat redis-master-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    name: redis-master
spec:
  ports:
    # the port that this service should serve on
  - port: 6379
    targetPort: 6379
  selector:
    name: redis-master

root@test1:~/kubernetes/examples/guestbook# kubectl create -f redis-master-service.yaml
service "redis-master" created

root@test1:~/kubernetes/examples/guestbook# kubectl get service
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
default-http-backend   ClusterIP   10.152.183.64   <none>        80/TCP     10h
kubernetes             ClusterIP   10.152.183.1    <none>        443/TCP    10h
redis-master           ClusterIP   10.152.183.31   <none>        6379/TCP   24s

root@test1:~/kubernetes/examples/guestbook# kubectl get service redis-master
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
redis-master   ClusterIP   10.152.183.31   <none>        6379/TCP   1m

3. Similarly, create the Redis Slave Pod and Service.

kubectl create -f redis-slave-controller.yaml
kubectl get pod --selector name=redis-slave
kubectl create -f redis-slave-service.yaml

4. Similarly, create the Frontend Pod and Service.

kubectl create -f frontend-controller.yaml
kubectl get pod --selector name=frontend
kubectl create -f frontend-service.yaml 
root@test1:~/kubernetes/examples/guestbook# kubectl get service frontend
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
frontend   ClusterIP   10.152.183.99   <none>        80/TCP    19s

5. Set up a port mapping for the Frontend Service.
The 10.152.183.99 above is a virtual cluster IP. To reach the frontend from outside the cluster, use a NodePort to set up a port mapping: add a line 'type: NodePort' above the ports element in the original frontend-service.yaml, as shown in the grep output and the full-file sketch below:

root@test1:~/kubernetes/examples/guestbook# grep -r 'NodePort' frontend-service.yaml -A 3
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 80
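
For reference, the whole modified frontend-service.yaml would look roughly like this (a sketch reconstructed from the pattern of redis-master-service.yaml above plus the grep output; only the 'type: NodePort' line is an addition to the upstream guestbook file):

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    name: frontend
spec:
  type: NodePort
  ports:
    # the port that this service should serve on
    - port: 80
  selector:
    name: frontend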

After redeploying the Frontend Service, the web UI can be reached via the IP of any worker Node (e.g. shown by 'juju status kubernetes-worker/0') plus the NodePort (tcp:31375), e.g. wget http://10.241.244.222:31375.

kubectl replace -f frontend-service.yaml --force
root@test1:~/kubernetes/examples/guestbook# kubectl get service frontend
NAME       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
frontend   NodePort   10.152.183.33   <none>        80:31375/TCP   35s

Appendix - Juju environment output

root@host:~# juju status
...
Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.241.244.149                  Certificate Authority connected.
etcd/0*                   active    idle   1        10.241.244.78   2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        10.241.244.83   2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        10.241.244.89   2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.241.244.49   443/tcp         Loadbalancer ready.
kubernetes-master/0*      active    idle   5        10.241.244.162  6443/tcp        Kubernetes master running.
  flannel/0*              active    idle            10.241.244.162                  Flannel subnet 10.1.38.1/24
kubernetes-worker/0       active    idle   6        10.241.244.222  80/tcp,443/tcp  Kubernetes worker running.
  flannel/3               active    idle            10.241.244.222                  Flannel subnet 10.1.62.1/24
kubernetes-worker/1*      active    idle   7        10.241.244.200  80/tcp,443/tcp  Kubernetes worker running.
  flannel/1               active    idle            10.241.244.200                  Flannel subnet 10.1.93.1/24
kubernetes-worker/2       active    idle   8        10.241.244.119  80/tcp,443/tcp  Kubernetes worker running.
  flannel/2               active    idle            10.241.244.119                  Flannel subnet 10.1.67.1/24

root@host:~# juju ssh kubernetes-master/0 ps -ef|grep kube
...
root      3045     1  3 Nov12 ?        00:15:00 /snap/kube-scheduler/200/kube-scheduler --logtostderr --master http://127.0.0.1:8080 --v 2
root      3096     1  9 Nov12 ?        00:45:47 /snap/kube-apiserver/200/kube-apiserver --admission-control Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,DefaultTolerationSeconds --allow-privileged=false --authorization-mode AlwaysAllow --basic-auth-file /root/cdk/basic_auth.csv --etcd-cafile /root/cdk/etcd/client-ca.pem --etcd-certfile /root/cdk/etcd/client-cert.pem --etcd-keyfile /root/cdk/etcd/client-key.pem --etcd-servers https://10.241.244.78:2379,https://10.241.244.83:2379,https://10.241.244.89:2379 --insecure-bind-address 127.0.0.1 --insecure-port 8080 --kubelet-certificate-authority /root/cdk/ca.crt --kubelet-client-certificate /root/cdk/client.crt --kubelet-client-key /root/cdk/client.key --logtostderr --min-request-timeout 300 --service-account-key-file /root/cdk/serviceaccount.key --service-cluster-ip-range 10.152.183.0/24 --storage-backend etcd2 --tls-cert-file /root/cdk/server.crt --tls-private-key-file /root/cdk/server.key --token-auth-file /root/cdk/known_tokens.csv --v 4
root      3303     1  7 Nov12 ?        00:39:14 /snap/kube-controller-manager/191/kube-controller-manager --logtostderr --master http://127.0.0.1:8080 --min-resync-period 3m --root-ca-file /root/cdk/ca.crt --service-account-private-key-file /root/cdk/serviceaccount.key --v 2

root@host:~# juju ssh kubernetes-worker/0 ps -ef|grep kube
...
root     12872     1  0 Nov12 ?        00:04:20 /snap/kube-proxy/200/kube-proxy --cluster-cidr 10.1.0.0/16 --conntrack-max-per-core 0 --kubeconfig /root/cdk/kubeproxyconfig --logtostderr --master https://10.241.244.49:443 --v 0
root     12881     1  6 Nov12 ?        00:32:10 /snap/kubelet/200/kubelet --address 0.0.0.0 --allow-privileged=false --anonymous-auth=false --client-ca-file /root/cdk/ca.crt --cluster-dns 10.152.183.10 --cluster-domain cluster.local --fail-swap-on=false --kubeconfig /root/cdk/kubeconfig --logtostderr --network-plugin cni --port 10250 --tls-cert-file /root/cdk/server.crt --tls-private-key-file /root/cdk/server.key --v 0

Deploying Kubernetes with conjure-up

You can also use conjure-up to deploy Kubernetes into LXD containers in a more user-friendly way; under the hood it deploys the same bundle.yaml as above. The script is as follows:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo snap install lxd
sudo snap install conjure-up --classic
export PATH=/snap/bin:$PATH
/snap/bin/lxd init --auto
#Use lxdbr1 since lxdbr0 is already used in this test env; run 'lxc network list' and check the 'MANAGED' field
/snap/bin/lxc network create lxdbr1 ipv4.address=auto ipv4.nat=true ipv6.address=none
#Must use non-root user to avoid the error 'This should _not_ be run as root or with sudo'
#Step1, select to install 'Kubernetes Core', see the picture below
#Step2, select 'localhost', see the picture below
#Step3, select the network bridge 'lxdbr1', see the picture below
#Step4, click 'Deploy all 5 Remaining Applications', see the picture below
sudo -u ubuntu -i conjure-up kubernetes
tailf ~/.cache/conjure-up/conjure-up.log

Or install directly with the following script:

export PATH=/snap/bin:$PATH

cat << EOF > default-profile.yaml
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr1
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: default
used_by:
- /1.0/containers/cache
- /1.0/containers/kubernetes
EOF
lxc profile create default 2>/dev/null || echo "default profile already exists"
cat default-profile.yaml | lxc profile edit default

#!/bin/bash
lxc delete -f kubernetes
#Create a container using both the default and juju-kubernetes profiles
lxc launch ubuntu:17.04 -p default -p juju-kubernetes kubernetes
sleep 5s
lxc exec kubernetes -- apt-get update
lxc exec kubernetes -- snap install lxd
lxc exec kubernetes -- apt-get install squashfuse
lxc exec kubernetes -- snap install core --beta
lxc exec kubernetes -- snap install conjure-up --classic --beta

#Note: this command must be run with /snap/bin/lxc from the snap package, not /usr/bin/lxc from the apt package
/snap/bin/lxc exec kubernetes -- sudo -u ubuntu -i /snap/bin/conjure-up canonical-kubernetes localhost controller model

Some related screenshots:
[Screenshots: conjure-up spell selection ('Kubernetes Core'), cloud selection ('localhost'), network bridge selection ('lxdbr1'), and the 'Deploy all Remaining Applications' confirmation]

References

[1] https://stgraber.org/2017/01/13/kubernetes-inside-lxd/
[2] https://insights.ubuntu.com/2017/10/12/kubernetes-the-not-so-easy-way/
[3] https://github.com/lenovo/workload-solution/wiki/juju-charm-layers