Containerized Deployment in Practice: Docker and Kubernetes Best Practices for Production
Introduction: Why Does Your System Need Containers?
Remember the last time you were woken at 3 a.m. to handle a production incident? Or the awkward "it works on my machine" moments caused by inconsistent environments? If these pain points sound familiar, this article will change how you run operations.
Over eight years in operations I have watched the full evolution from physical machines to virtualization to containers. Here I will share hands-on lessons from running a production environment with more than 1,000 containers serving a billion requests a day.
1. Docker: From Basics to Production-Grade Practice
1.1 Docker Image Optimization: Slimming an Image from 1GB to 50MB
The biggest mistake people make with Docker is treating it like a virtual machine. Let me show how to optimize with a real case:
Dockerfile before optimization (image size: 1.2GB):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y python3 python3-pip nodejs npm
COPY . /app
WORKDIR /app
RUN pip3 install -r requirements.txt
RUN npm install
CMD ["python3", "app.py"]
Dockerfile after optimization (image size: 45MB):
# Build stage
FROM python:3.9-alpine AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --user -r requirements.txt

# Runtime stage
FROM python:3.9-alpine
RUN apk add --no-cache libpq
COPY --from=builder /root/.local /root/.local
WORKDIR /app
COPY . .
ENV PATH=/root/.local/bin:$PATH
CMD ["python", "app.py"]
Key optimization techniques:
? Use Alpine Linux as the base image
? Use multi-stage builds so build-time tooling never reaches the final image
? Merge RUN commands to reduce the number of image layers
? Clean up caches and temporary files
? Use a .dockerignore file to exclude irrelevant files (see the sketch after this list)
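To make the last point concrete, here is a minimal .dockerignore sketch for a Python project like the one above; the exact entries depend on your repository layout and are assumptions here:

cat > .dockerignore <<'EOF'
.git
__pycache__/
*.pyc
.venv/
node_modules/
tests/
*.md
.env
EOF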
1.2 Docker Security Practices for Production
Security is always the top priority in production. Here is the Docker security checklist I have distilled:
# docker-compose.yml security configuration example
version: '3.8'
services:
  app:
    image: myapp:latest
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - NET_BIND_SERVICE
    read_only: true
    tmpfs:
      - /tmp
    user: "1000:1000"
    networks:
      - internal
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
        reservations:
          cpus: '0.25'
          memory: 256M

# Declare the network referenced above
networks:
  internal:
    driver: bridge
Core security measures:
? Run containers as a non-root user
? Drop unneeded Linux capabilities
? Use a read-only filesystem
? Set resource limits to prevent resource-exhaustion attacks
? Scan images for vulnerabilities regularly (see the example after this list)
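For the last item, a scan can be as simple as one Trivy command; --exit-code 1 makes it usable as a CI gate (the image name is a placeholder):

# Fail if any HIGH or CRITICAL vulnerability is found
trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:latest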
1.3 Docker Network Architecture Design
In production, a sound network architecture is critical:
# Create a custom network
docker network create \
  --driver bridge \
  --subnet=172.20.0.0/16 \
  --ip-range=172.20.240.0/20 \
  --gateway=172.20.0.1 \
  production-network

# Inter-container communication best practice
docker run -d --name backend \
  --network production-network \
  --network-alias api-server \
  myapp:backend

docker run -d --name frontend \
  --network production-network \
  -e API_URL=http://api-server:8080 \
  myapp:frontend
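A quick way to confirm the alias-based service discovery works, assuming the backend listens on port 8080 and the frontend image ships curl:

# Resolve the api-server alias from inside the frontend container
docker exec frontend curl -s http://api-server:8080/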
2. Kubernetes: Building an Enterprise-Grade Container Orchestration Platform
2.1 K8s Architecture Design: A Highly Available Cluster Deployment
A production-grade Kubernetes cluster is about more than functionality; stability and scalability matter even more.
Highly available control-plane configuration:
# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "k8s-api.example.com:6443"
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.244.0.0/16"
  dnsDomain: "cluster.local"
etcd:
  external:
    endpoints:
      - https://etcd-0.example.com:2379
      - https://etcd-1.example.com:2379
      - https://etcd-2.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/etcd/client.crt
    keyFile: /etc/kubernetes/pki/etcd/client.key
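With that file in place, bootstrapping usually comes down to one kubeadm command on the first control-plane node; the join command for the remaining nodes is printed by the init step:

# First control-plane node
kubeadm init --config kubeadm-config.yaml --upload-certs

# Additional control-plane nodes: run the printed
# "kubeadm join ... --control-plane" command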
2.2 Application Deployment Best Practices: The Complete Path from Development to Production
Let's walk through a complete microservice deployment to see what K8s can do:
1. Configuration management (ConfigMap & Secret)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  database.conf: |
    host=db.example.com
    port=5432
    pool_size=20
  redis.conf: |
    host=redis.example.com
    port=6379
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: production
type: Opaque
data:
  db-password: cGFzc3dvcmQxMjM=  # base64-encoded
  api-key: YWJjZGVmZ2hpams=
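Rather than hand-encoding base64, the same Secret can be created imperatively and kubectl handles the encoding; the literal values here are the decoded examples from the manifest above:

kubectl create secret generic app-secrets \
  --from-literal=db-password='password123' \
  --from-literal=api-key='abcdefghijk' \
  -n production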
2. Application deployment (Deployment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-server
  namespace: production
  labels:
    app: api-server
    version: v2.1.0
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: api-server
  template:
    metadata:
      labels:
        app: api-server
        version: v2.1.0
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - api-server
              topologyKey: kubernetes.io/hostname
      containers:
        - name: api-server
          image: registry.example.com/api-server:v2.1.0
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 9090
              name: metrics
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: db-password
          volumeMounts:
            - name: config
              mountPath: /etc/config
              readOnly: true
          resources:
            requests:
              memory: "256Mi"
              cpu: "250m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            name: app-config
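Applying the manifest and waiting for the rollout to converge is a two-liner; rollout status blocks until all replicas pass their readiness probes or the deadline is exceeded:

kubectl apply -f deployment.yaml
kubectl rollout status deployment/api-server -n production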
3. Service exposure (Service & Ingress)
apiVersion: v1
kind: Service
metadata:
  name: api-server-service
  namespace: production
spec:
  type: ClusterIP
  selector:
    app: api-server
  ports:
    - port: 80
      targetPort: 8080
      name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls-secret
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-server-service
                port:
                  number: 80
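Once DNS for api.example.com points at the ingress controller, a quick smoke test from outside the cluster might look like this:

# Expect a 200 over TLS, and a redirect on plain HTTP (ssl-redirect is on)
curl -i https://api.example.com/
curl -i http://api.example.com/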
2.3 Autoscaling: Building Elasticity into the System
Horizontal Pod Autoscaler (HPA) configuration:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
    - type: Pods
      pods:
        metric:
          name: http_requests_per_second
        target:
          type: AverageValue
          averageValue: "1000"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 50
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 30
        - type: Pods
          value: 5
          periodSeconds: 60
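Note that the http_requests_per_second metric assumes a custom-metrics adapter (for example prometheus-adapter) is installed; the CPU and memory targets only need metrics-server. Either way, scaling decisions are easy to observe live:

# Watch replica counts and utilization as load changes
kubectl get hpa api-server-hpa -n production --watch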
3. Monitoring and Logging: Building an Observability Platform
3.1 A Prometheus + Grafana Monitoring Stack
Deploying the Prometheus monitoring stack:
# prometheus-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: monitoring
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
      - job_name: 'kubernetes-pods'
        kubernetes_sd_configs:
          - role: pod
        relabel_configs:
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
            action: keep
            regex: true
          - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
            action: replace
            target_label: __metrics_path__
            regex: (.+)
          - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
            action: replace
            regex: ([^:]+)(?::\d+)?;(\d+)
            replacement: $1:$2
            target_label: __address__
Custom application metrics example:
# Integrating Prometheus into a Python application
from prometheus_client import Counter, Histogram, Gauge, start_http_server

# Define metrics
request_count = Counter('app_requests_total', 'Total requests', ['method', 'endpoint'])
request_duration = Histogram('app_request_duration_seconds', 'Request duration', ['method', 'endpoint'])
active_connections = Gauge('app_active_connections', 'Active connections')

# Use them in the application
@request_duration.labels(method='GET', endpoint='/api/users').time()
def get_users():
    request_count.labels(method='GET', endpoint='/api/users').inc()
    # business logic
    return users

# Start the metrics server
start_http_server(9090)
3.2 ELK Log Collection
Fluentd configuration example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/fluentd-containers.log.pos
      tag kubernetes.*
      <parse>
        @type json
        time_format %Y-%m-%dT%H:%M:%S.%NZ
      </parse>
    </source>

    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.elastic-system.svc.cluster.local
      port 9200
      logstash_format true
      logstash_prefix kubernetes
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_interval 5s
        retry_forever false
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>
4. CI/CD Integration: Practicing Real DevOps
4.1 GitLab CI/CD Pipeline Configuration
# .gitlab-ci.yml
stages:
  - build
  - test
  - security
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  REGISTRY: registry.example.com
  IMAGE_TAG: $CI_COMMIT_SHORT_SHA

build:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker build -t $REGISTRY/$CI_PROJECT_NAME:$IMAGE_TAG .
    - docker push $REGISTRY/$CI_PROJECT_NAME:$IMAGE_TAG
  only:
    - main
    - develop

test:
  stage: test
  image: $REGISTRY/$CI_PROJECT_NAME:$IMAGE_TAG
  script:
    - pytest tests/ --cov=app --cov-report=xml
    - coverage report
  coverage: '/TOTAL.*\s+(\d+%)$/'
  artifacts:
    reports:
      coverage_report:
        coverage_format: cobertura
        path: coverage.xml

security-scan:
  stage: security
  image: aquasec/trivy:latest
  script:
    - trivy image --severity HIGH,CRITICAL $REGISTRY/$CI_PROJECT_NAME:$IMAGE_TAG
  allow_failure: false

deploy-production:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl set image deployment/api-server api-server=$REGISTRY/$CI_PROJECT_NAME:$IMAGE_TAG -n production
    - kubectl rollout status deployment/api-server -n production
  environment:
    name: production
    url: https://api.example.com
  only:
    - main
  when: manual
4.2 Blue-Green Deployments and Canary Releases
Canary release configuration:
# Automated canary releases with Flagger
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: api-server
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  service:
    port: 80
    targetPort: 8080
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - api.example.com
  analysis:
    interval: 1m
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
      - name: request-duration
        thresholdRange:
          max: 500
        interval: 1m
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://api.example.com/"
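Once Flagger's CRDs are installed, the analysis can be followed live while traffic shifts step by step:

# Watch the canary move through its weights, or inspect events on failure
kubectl -n production get canary api-server --watch
kubectl -n production describe canary api-server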
5. Troubleshooting and Performance Tuning
5.1 Checklist for Common Problems
Diagnosing pods that fail to start:
# 1. Check pod status
kubectl get pods -n production -o wide

# 2. Check pod events
kubectl describe pod <pod-name> -n production

# 3. Check container logs
kubectl logs <pod-name> -n production --previous

# 4. Get a shell inside the container
kubectl exec -it <pod-name> -n production -- /bin/sh

# 5. Check resource usage
kubectl top pods -n production

# 6. Check network connectivity
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot -- /bin/bash
5.2 Performance Tuning in Practice
Tuning JVM applications on K8s:
# Dockerfile tuning
FROM openjdk:11-jre-slim
# JDK 11 enables container support by default: MaxRAMPercentage sizes the heap
# from the cgroup memory limit, so the old UseCGroupMemoryLimitForHeap flag
# (removed in JDK 11) is no longer needed.
ENV JAVA_OPTS="-XX:MaxRAMPercentage=75.0 -XX:+UseG1GC"
COPY app.jar /app.jar
ENTRYPOINT ["sh", "-c", "java $JAVA_OPTS -jar /app.jar"]
Resource limit tuning strategy:
resources:
  requests:
    memory: "1Gi"
    cpu: "500m"
  limits:
    memory: "2Gi"
    cpu: "1000m"

# Rules of thumb:
# - Set requests to average usage
# - Set limits to 1.2-1.5x peak usage
# - CPU limits can be relaxed; memory limits must be strict
6. Security Hardening: Building Defense in Depth
6.1 RBAC Permission Management
# Example: creating a read-only user
apiVersion: v1
kind: ServiceAccount
metadata:
  name: readonly-user
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: readonly-role
  namespace: production
rules:
  - apiGroups: ["", "apps", "batch"]
    resources: ["pods", "services", "deployments", "jobs"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: readonly-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: readonly-user
    namespace: production
roleRef:
  kind: Role
  name: readonly-role
  apiGroup: rbac.authorization.k8s.io
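kubectl auth can-i is a fast way to prove the binding does what you intend before handing the account to anyone:

# Should print "yes"
kubectl auth can-i list pods -n production \
  --as=system:serviceaccount:production:readonly-user
# Should print "no"
kubectl auth can-i delete pods -n production \
  --as=system:serviceaccount:production:readonly-user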
6.2 Network Policies
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-server-netpol
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api-server
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: production
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              name: production
      ports:
        - protocol: TCP
          port: 5432  # PostgreSQL
        - protocol: TCP
          port: 6379  # Redis
    - to:
        - namespaceSelector: {}
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
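A quick negative test, assuming the default namespace does not carry the name=production label: the request should time out, because the policy only admits labeled sources:

kubectl run netpol-test --rm -it --image=busybox -n default -- \
  wget -qO- -T 3 http://api-server-service.production.svc/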
7. Cost Optimization: Making Every Dollar Count
7.1 Improving Resource Utilization
Vertical Pod Autoscaler configuration:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: api-server-vpa
  namespace: production
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: api-server
        minAllowed:
          cpu: 100m
          memory: 128Mi
        maxAllowed:
          cpu: 2
          memory: 2Gi
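VPA needs its own components (recommender, updater, admission controller) running in the cluster; once they are, the computed targets are visible on the object itself:

# Inspect VPA's current resource recommendations
kubectl describe vpa api-server-vpa -n production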
7.2 Node Resource Optimization
# Taint a node to isolate its resources for specific workloads
kubectl taint nodes gpu-node-1 gpu=true:NoSchedule

# In the pod spec, add a matching toleration
tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"
8. Case Study: Building a Highly Available Microservice Architecture from Scratch
Let's pull all of the techniques above together with a complete e-commerce system:
8.1 System Architecture Design
# Namespace isolation
apiVersion: v1
kind: Namespace
metadata:
  name: ecommerce-prod
  labels:
    istio-injection: enabled
---
# Microservice deployment example: the order service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
  namespace: ecommerce-prod
spec:
  replicas: 5
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
        version: v1.0.0
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9090"
    spec:
      containers:
        - name: order-service
          image: registry.example.com/order-service:v1.0.0
          ports:
            - containerPort: 8080
              name: http
            - containerPort: 9090
              name: metrics
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: "production"
            - name: DB_HOST
              valueFrom:
                configMapKeyRef:
                  name: db-config
                  key: host
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 30
            periodSeconds: 5
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
8.2 Service Mesh Configuration (Istio)
# VirtualService configuration
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: order-service-vs
  namespace: ecommerce-prod
spec:
  hosts:
    - order-service
  http:
    - match:
        - headers:
            version:
              exact: v2
      route:
        - destination:
            host: order-service
            subset: v2
          weight: 100
    - route:
        - destination:
            host: order-service
            subset: v1
          weight: 90
        - destination:
            host: order-service
            subset: v2
          weight: 10
---
# DestinationRule configuration
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: order-service-dr
  namespace: ecommerce-prod
spec:
  host: order-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 50
        h2MaxRequests: 100
    loadBalancer:
      simple: LEAST_REQUEST
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s
  subsets:
    - name: v1
      labels:
        version: v1.0.0
    - name: v2
      labels:
        version: v2.0.0
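A simple way to exercise the header-based route, run from any pod inside the mesh; this assumes a Service for order-service on port 8080, and the request path is illustrative:

# With the header: always lands on subset v2
curl -s -H "version: v2" http://order-service:8080/
# Without it: roughly a 90/10 split between v1 and v2
curl -s http://order-service:8080/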
9. Failure Recovery and Disaster Recovery
9.1 Backup Strategy
#!/bin/bash
# etcd backup script
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-snapshot-$(date +%Y%m%d-%H%M%S).db

# Cluster backup with Velero
velero backup create prod-backup \
  --include-namespaces ecommerce-prod \
  --snapshot-volumes \
  --ttl 720h
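Backups you have never restored are a hope, not a plan. The restore paths for both tools look roughly like this; the snapshot filename is an example:

# Restore etcd from a snapshot into a fresh data directory
ETCDCTL_API=3 etcdctl snapshot restore \
  /backup/etcd-snapshot-20240101-020000.db \
  --data-dir /var/lib/etcd-restore

# Restore the namespace from the Velero backup
velero restore create --from-backup prod-backup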
9.2 Cross-Region Disaster Recovery
# KubeFed (Federation) configuration example
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: order-service
  namespace: ecommerce-prod
spec:
  template:
    metadata:
      labels:
        app: order-service
    spec:
      replicas: 3
      # ... deployment spec
  placement:
    clusters:
      - name: cluster-beijing
      - name: cluster-shanghai
  overrides:
    - clusterName: cluster-beijing
      clusterOverrides:
        - path: "/spec/replicas"
          value: 5
    - clusterName: cluster-shanghai
      clusterOverrides:
        - path: "/spec/replicas"
          value: 3
10. Performance and Load Testing
10.1 Load Testing with K6
// k6-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';
import { Rate } from 'k6/metrics';

const failureRate = new Rate('failed_requests');

export let options = {
  stages: [
    { duration: '2m', target: 100 },  // ramp up to 100 users
    { duration: '5m', target: 100 },  // hold at 100 users
    { duration: '2m', target: 200 },  // ramp up to 200 users
    { duration: '5m', target: 200 },  // hold at 200 users
    { duration: '2m', target: 0 },    // ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'],  // 95% of requests complete within 500ms
    failed_requests: ['rate<0.1'],     // error rate below 10%
  },
};

export default function () {
  let response = http.get('https://api.example.com/orders');
  check(response, {
    'status is 200': (r) => r.status === 200,
    'response time < 500ms': (r) => r.timings.duration < 500,
  }) || failureRate.add(1);
  sleep(1);
}
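Running the script needs nothing but the k6 binary, and the thresholds make the exit code CI-friendly (non-zero when a threshold fails):

k6 run k6-test.js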
Summary: My Ten Ops Takeaways
1. Never operate directly on production: validate in a test environment first, and manage all changes through GitOps
2. Monitoring comes first: a system without monitoring is flying blind; deploy monitoring before the workload goes live
3. Automate everything: anything that can be automated should never be done by hand; it eliminates human error
4. Plan capacity ahead: estimate resource needs in advance instead of scrambling to scale at the last minute
5. Rehearse disaster recovery regularly: don't wait for a real incident to discover the backups are unusable
6. Documentation as code: document every configuration and process, ideally as code
7. Security is a red line: a little lost performance beats a security hole
8. Keep learning: container technology moves fast; continuous learning is the only way to stay relevant
9. Watch the costs: weigh cost-effectiveness alongside technical optimization
10. Build an SRE culture: graduate from firefighter to site reliability engineer
Conclusion: Start Your Containerization Journey
Containerization is not a silver bullet, but it genuinely solves many of the pain points of traditional operations. From Docker to Kubernetes, from microservices to service mesh, the road is full of challenges, and full of opportunity.
Remember: the best architectures are evolved, not designed. Start small, optimize step by step, and keep improving. The practices shared here were earned over many sleepless nights.