Container Orchestration: Network Isolation in Kubernetes with NetworkPolicy
April 28, 2017 | Docker, PaaS


A key feature of Kubernetes is that it connects pods (containers) running on different nodes, regardless of physical host boundaries. In some environments, however, such as public clouds, pods belonging to different tenants should not be able to reach each other, and that calls for network isolation. Fortunately, Kubernetes provides NetworkPolicy, which supports network isolation at the Namespace level. This article walks you through how to use it.

Note that NetworkPolicy requires a network solution that enforces it; without one, configuring a NetworkPolicy has no effect. Here we use Calico to enforce the isolation.

Connectivity Test

Before using NetworkPolicy, let us first verify that pods can reach each other without it. The test environment looks like this:

Namespaces: ns-calico1, ns-calico2

Deployment: ns-calico1/calico1-nginx; Pod: ns-calico2/calico2-busybox

Service: ns-calico1/calico1-nginx

First, create the Namespaces:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
---
apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico2

# kubectl create -f namespace.yaml
namespace "ns-calico1" created
namespace "ns-calico2" created
# kubectl get ns
NAME          STATUS    AGE
default       Active    9d
kube-public   Active    9d
kube-system   Active    9d
ns-calico1    Active    12s
ns-calico2    Active    8s

Next, create ns-calico1/calico1-nginx:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico1-nginx
  namespace: ns-calico1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        user: calico1
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: calico1-nginx
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  selector:
    app: nginx
  ports:
  - port: 80

# kubectl create -f calico1-nginx.yaml
deployment "calico1-nginx" created
service "calico1-nginx" created
# kubectl get svc -n ns-calico1
NAME            CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
calico1-nginx   192.168.3.141   <none>        80/TCP    26s
# kubectl get deploy -n ns-calico1
NAME            DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico1-nginx   1         1         1            1           34s

Finally, create ns-calico2/calico2-busybox:

apiVersion: v1
kind: Pod
metadata:
  name: calico2-busybox
  namespace: ns-calico2
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"

# kubectl create -f calico2-busybox.yaml
pod "calico2-busybox" created
# kubectl get pod -n ns-calico2
NAME              READY     STATUS    RESTARTS   AGE
calico2-busybox   1/1       Running   0          40s

The test services are now in place. Let us exec into calico2-busybox and see whether it can reach calico1-nginx:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.141:80)

This shows that without network isolation configured, pods in two different Namespaces can reach each other. Next, we use Calico to isolate them.

Network Isolation

Prerequisites

To use Calico for network isolation in a Kubernetes cluster, the following conditions must be met:

  1. kube-apiserver must enable the extensions/v1beta1/networkpolicies runtime API, i.e. start it with --runtime-config=extensions/v1beta1/networkpolicies=true
  2. kubelet must use the CNI network plugin, i.e. start it with --network-plugin=cni
  3. kube-proxy must use the iptables proxy mode; this is the default, so no extra flag is needed
  4. kube-proxy must not run with --masquerade-all, which conflicts with Calico
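Put together, the prerequisites above amount to a few startup flags. The following is only an illustrative sketch of the relevant fragments of each component's command line; the "..." stands for whatever other flags your cluster already uses:

```
# kube-apiserver: enable the NetworkPolicy API group/version
kube-apiserver ... --runtime-config=extensions/v1beta1/networkpolicies=true

# kubelet: use the CNI network plugin
kubelet ... --network-plugin=cni

# kube-proxy: iptables mode is the default; just make sure
# --masquerade-all is NOT set
kube-proxy ... --proxy-mode=iptables
```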

Note: after Calico is configured, any pods that were already running in the cluster must be restarted.

Installing Calico

First, install the Calico network plugin. We deploy it directly into the Kubernetes cluster, which makes it easier to manage.

# Calico Version v2.1.4
# http://docs.projectcalico.org/v2.1/releases#v2.1.4
# This manifest includes the following component versions:
#   calico/node:v1.1.3
#   calico/cni:v1.7.0
#   calico/kube-policy-controller:v0.5.4

# This ConfigMap is used to configure a self-hosted Calico installation.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://10.1.2.154:2379,https://10.1.2.147:2379"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
        "name": "k8s-pod-network",
        "type": "calico",
        "etcd_endpoints": "__ETCD_ENDPOINTS__",
        "etcd_key_file": "__ETCD_KEY_FILE__",
        "etcd_cert_file": "__ETCD_CERT_FILE__",
        "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
        "log_level": "info",
        "ipam": {
            "type": "calico-ipam"
        },
        "policy": {
            "type": "k8s",
            "k8s_api_root": "https://__KUBERNETES_SERVICE_HOST__:__KUBERNETES_SERVICE_PORT__",
            "k8s_auth_token": "__SERVICEACCOUNT_TOKEN__"
        },
        "kubernetes": {
            "kubeconfig": "__KUBECONFIG_FILEPATH__"
        }
    }

  # If you're using TLS enabled etcd uncomment the following.
  # You must also populate the Secret below with these files.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

---

# The following contains k8s Secrets for use with a TLS enabled etcd cluster.
# For information on populating Secrets, see http://kubernetes.io/docs/user-guide/secrets/
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # Populate the following files with etcd TLS configuration if desired, but leave blank if
  # not using TLS for etcd.
  # This self-hosted install expects three files with the following names.  The values
  # should be base64 encoded strings of the entire contents of each file.
  etcd-key: base64 key.pem
  etcd-cert: base64 cert.pem
  etcd-ca: base64 ca.pem

---

# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: |
          [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
           {"key":"CriticalAddonsOnly", "operator":"Exists"}]
    spec:
      hostNetwork: true
      containers:
        # Runs calico/node container on each Kubernetes node.  This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v1.1.3
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Configure the IP Pool from which Pod IPs will be chosen.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            - name: CALICO_IPV4POOL_IPIP
              value: "always"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Auto-detect the BGP IP address.
            - name: IP
              value: ""
          securityContext:
            privileged: true
          #resources:
            #requests:
              #cpu: 250m
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v1.7.0
          command: ["/install-cni.sh"]
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

---

# This manifest deploys the Calico policy controller on Kubernetes.
# See https://github.com/projectcalico/k8s-policy
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-policy-controller
  namespace: kube-system
  labels:
    k8s-app: calico-policy
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ''
    scheduler.alpha.kubernetes.io/tolerations: |
      [{"key": "dedicated", "value": "master", "effect": "NoSchedule" },
       {"key":"CriticalAddonsOnly", "operator":"Exists"}]
spec:
  # The policy controller can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-policy-controller
      namespace: kube-system
      labels:
        k8s-app: calico-policy
    spec:
      # The policy controller must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      containers:
        - name: calico-policy-controller
          image: quay.io/calico/kube-policy-controller:v0.5.4
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # The location of the Kubernetes API.  Use the default Kubernetes
            # service for API access.
            - name: K8S_API
              value: "https://kubernetes.default:443"
            # Since we're running in the host namespace and might not have KubeDNS
            # access, configure the container's /etc/hosts to resolve
            # kubernetes.default to the correct service clusterIP.
            - name: CONFIGURE_ETC_HOSTS
              value: "true"
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Mount in the etcd TLS secrets.
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets

# kubectl create -f calico.yaml
configmap "calico-config" created
secret "calico-etcd-secrets" created
daemonset "calico-node" created
deployment "calico-policy-controller" created
# kubectl get ds -n kube-system
NAME          DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE-SELECTOR   AGE
calico-node   1         1         1         1            1           <none>          52s
# kubectl get deploy -n kube-system
NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
calico-policy-controller   1         1         1            1           6m

The Calico network is now in place, and we can configure NetworkPolicy.

Configuring NetworkPolicy

First, modify the configuration of ns-calico1:

apiVersion: v1
kind: Namespace
metadata:
  name: ns-calico1
  labels:
    user: calico1
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }

# kubectl apply -f ns-calico1.yaml
namespace "ns-calico1" configured

If we repeat the connectivity test between the two pods now, it is certain to fail:

# kubectl exec -it calico2-busybox -n ns-calico2 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)
wget: download timed out

This is exactly the effect we want: pods in different Namespaces can no longer reach each other. Of course, this is only the simplest case. If a pod in ns-calico1 connects to a pod in ns-calico2, the traffic still goes through, because the ns-calico2 Namespace carries no network-policy annotation.
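You can probe this asymmetry in the other direction yourself. The sketch below is only illustrative: the pod name and IP placeholders must be filled in from your own cluster, and it assumes a reachability tool such as ping or wget is actually available inside the image you exec into:

```
# Find calico2-busybox's pod IP
kubectl get pod calico2-busybox -n ns-calico2 -o wide
# From a pod in ns-calico1 (e.g. the nginx pod), try to reach that IP
kubectl exec -it <calico1-nginx-pod-name> -n ns-calico1 -- ping -c 1 <calico2-busybox-ip>
```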

Moreover, at this point ns-calico1 rejects traffic from every pod: the Namespace annotation only declares that all ingress is denied by default, and we have not yet specified which pods are allowed to connect. Here, we specify that only pods carrying the label user=calico1 may communicate.

apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: calico1-network-policy
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      user: calico1
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          user: calico1
    - podSelector:
        matchLabels:
          user: calico1
---
apiVersion: v1
kind: Pod
metadata:
  name: calico1-busybox
  namespace: ns-calico1
  labels:
    user: calico1
spec:
  containers:
  - name: busybox
    image: busybox
    command:
      - sleep
      - "3600"

# kubectl create -f calico1-network-policy.yaml
networkpolicy "calico1-network-policy" created
# kubectl create -f calico1-busybox.yaml
pod "calico1-busybox" created

Now, if we connect to calico1-nginx from calico1-busybox, the connection succeeds:

# kubectl exec -it calico1-busybox -n ns-calico1 -- wget --spider --timeout=1 calico1-nginx.ns-calico1
Connecting to calico1-nginx.ns-calico1 (192.168.3.71:80)

With this, we have implemented network isolation in Kubernetes. On top of NetworkPolicy you can build security-group-style policies, as found in public clouds. For more NetworkPolicy parameters, see the api-reference.
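As a sketch of such a security-group-style rule, the following hypothetical policy (the name and the app/role labels are made up for illustration, not taken from the cluster above) would allow only pods labeled role=frontend to reach pods labeled app=db, and only on TCP port 3306, much like a cloud security group that opens a single database port:

```
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend    # hypothetical name
  namespace: ns-calico1
spec:
  podSelector:
    matchLabels:
      app: db                # hypothetical label
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend     # hypothetical label
    ports:
    - protocol: TCP
      port: 3306
```

As before, this only takes effect in a Namespace whose annotation sets ingress isolation to DefaultDeny.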

References:

  1. Network Policies
  2. Declaring Network Policy
  3. Using Calico for NetworkPolicy
  4. Calico for Kubernetes