k8s deployment and GitLab CI/CD

Posted on 2021-03-16

Preparation

1. Install required packages

yum install -y wget vim net-tools epel-release

2. Disable swap

swapoff -a
# To disable permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
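
To confirm swap is really off, check that no swap devices remain active (a quick sanity check, not in the original post):

free -h          # the Swap line should read 0B
cat /proc/swaps  # should list no active swap devices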

3. Disable SELinux

# Disable SELinux temporarily
setenforce 0
# Disable it permanently by editing the config files:
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

4. Disable the firewall

systemctl disable firewalld
systemctl stop firewalld

Install Docker and configure a proxy

1. Configure yum repositories

## Configure the default repo
## Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
## Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
## Refresh the cache
yum makecache fast
## Configure the k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
## Rebuild the yum cache
yum clean all
yum makecache fast
yum -y update

2. Install Docker

yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce-20.10.5
systemctl enable docker
systemctl start docker

3. Make sure kubelet uses the same cgroup driver as Docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://ifa4ye2m.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
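
Docker should now report systemd as its cgroup driver; worth verifying before installing kubelet:

docker info | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd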

4. Pull the images from a domestic registry (key step if you have no proxy)

#!/bin/bash
images=(
  kube-apiserver:v1.18.16
  kube-controller-manager:v1.18.16
  kube-scheduler:v1.18.16
  kube-proxy:v1.18.16
  pause:3.2
  etcd:3.4.3-0
  coredns:1.6.7
  kubernetes-dashboard-amd64:v1.10.1
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Install k8s

1. Install kubeadm, kubelet, etc.

yum install kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0 kubernetes-cni-1.18.0
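
The packages do not start anything by themselves; as in the public-network notes later on this page, also enable the kubelet so it starts on boot:

systemctl enable kubelet && systemctl start kubelet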

2. Initialize the cluster

## Run on the master node:
sudo kubeadm init \
--apiserver-advertise-address 10.1.69.101 \
--kubernetes-version=v1.18.0 \
--pod-network-cidr=10.244.0.0/16

## If the master only has a public IP:
kubeadm init --apiserver-advertise-address xx.xx.255.84 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.0

Expected output:

(... output omitted)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## Save this join command; it is needed when nodes join the cluster
kubeadm join 10.1.69.101:6443 --token ou5pvo.qseafc4s8licblzy \
--discovery-token-ca-cert-hash sha256:de9c10f11c50c074f212698b9d514fc12a9c1c4ffe70961aff89ac5e585f0663
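
If the join command does get lost, it can be regenerated on the master at any time:

kubeadm token create --print-join-command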

3. Copy the config for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Install the flannel network

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
## Check whether flannel is running
sudo kubectl -n kube-system get po -l app=flannel -o wide
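
Once the flannel pods are Running, the master should become Ready; confirm before joining workers:

kubectl get nodes -o wide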

5. Join worker nodes to the cluster

Run on each of the other nodes:

kubeadm join 10.1.69.101:6443 --token ou5pvo.qseafc4s8licblzy \
--discovery-token-ca-cert-hash sha256:de9c10f11c50c074f212698b9d514fc12a9c1c4ffe70961aff89ac5e585f0663

Cleaning up

  • If anything goes wrong during installation, you can reset and start over:

sudo kubeadm reset

Install helm and gitlab-runner

tar -zxvf helm-v2.12.1-linux-amd64.tar.gz
cd linux-amd64/
cp helm /usr/local/bin

cat tiller.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  replicas: 1
  strategy: {}
  selector:
    matchLabels:
      name: tiller
      app: helm
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: helm
        name: tiller
    spec:
      automountServiceAccountToken: true
      containers:
      - env:
        - name: TILLER_NAMESPACE
          value: kube-system
        - name: TILLER_HISTORY_MAX
          value: "0"
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3
        imagePullPolicy: IfNotPresent
        livenessProbe:
          httpGet:
            path: /liveness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        name: tiller
        ports:
        - containerPort: 44134
          name: tiller
        - containerPort: 44135
          name: http
        readinessProbe:
          httpGet:
            path: /readiness
            port: 44135
          initialDelaySeconds: 1
          timeoutSeconds: 1
        resources: {}
      serviceAccountName: tiller
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: helm
    name: tiller
  name: tiller-deploy
  namespace: kube-system
spec:
  ports:
  - name: tiller
    port: 44134
    targetPort: tiller
  selector:
    app: helm
    name: tiller
  type: ClusterIP
status:
  loadBalancer: {}

kubectl apply -f tiller.yaml
helm init --upgrade --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
#helm init -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.3 --stable-repo-url http://mirror.azure.cn/kubernetes/charts/ --service-account tiller --override spec.selector.matchLabels.'name'='tiller',spec.selector.matchLabels.'app'='helm' --output yaml | sed 's@apiVersion: extensions/v1beta1@apiVersion: apps/v1@' | kubectl delete -f -
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
kubectl get po -A|grep tiller
helm repo add gitlab https://charts.gitlab.io
helm repo update
helm fetch gitlab/gitlab-runner --version "v0.10.0-rc1"
kubectl create namespace gitlab-runners
mkdir -p gitlab-runner
cd gitlab-runner/
kubectl create -f rbac-runner-config.yaml
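
Before installing the runner chart it is worth confirming that tiller is up and helm can reach it; a minimal check:

kubectl -n kube-system get po -l app=helm   # tiller-deploy should be Running
helm version                                # should print both Client and Server versions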

cat rbac-runner-config.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: gitlab-runners
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: gitlab-runners
  name: gitlab
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["*"]
  verbs: ["*"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: gitlab
  namespace: gitlab-runners
subjects:
- kind: ServiceAccount
  name: gitlab # Name is case sensitive
  apiGroup: ""
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: gitlab # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io

cat values-spm-operation-frontend.yaml

## GitLab Runner Image
##
## By default it's using gitlab/gitlab-runner:alpine-v{VERSION}
## where {VERSION} is taken from Chart.yaml from appVersion field
##
## ref: https://hub.docker.com/r/gitlab/gitlab-runner/tags/
##
# image: gitlab/gitlab-runner:alpine-v11.6.0
## Specify a imagePullPolicy
## 'Always' if imageTag is 'latest', else set to 'IfNotPresent'
## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
##
imagePullPolicy: IfNotPresent
gitlabUrl: http://xxx.xxx.cn/
runnerRegistrationToken: "-Rxxz6Cqdk33Q96mRdxk"
## Tags used to route jobs to this runner
tags: "operations"
privileged: true
## Must match the rbac ServiceAccount name configured earlier
serviceAccountName: gitlab
## The GitLab Server URL (with protocol) that want to register the runner against
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-register
##
# gitlabUrl: http://gitlab.your-domain.com/
## The Registration Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance.
## ref: https://docs.gitlab.com/ce/ci/runners/README.html
##
# runnerRegistrationToken: ""
## The Runner Token for adding new Runners to the GitLab Server. This must
## be retrieved from your GitLab Instance. It is token of already registered runner.
## ref: (we don't yet have docs for that, but we want to use existing token)
##
# runnerToken: ""
#
## Unregister all runners before termination
##
## Updating the runner's chart version or configuration will cause the runner container
## to be terminated and created again. This may cause your GitLab instance to reference
## non-existent runners. Unregistering the runner before termination mitigates this issue.
## ref: https://docs.gitlab.com/runner/commands/README.html#gitlab-runner-unregister
##
unregisterRunners: true
## When stopping the runner, give it time to wait for its jobs to terminate.
##
## Updating the runner's chart version or configuration will cause the runner container
## to be terminated with a graceful stop request. terminationGracePeriodSeconds
## instructs Kubernetes to wait long enough for the runner pod to terminate gracefully.
## ref: https://docs.gitlab.com/runner/commands/#signals
terminationGracePeriodSeconds: 3600
## Set the certsSecretName in order to pass custom certificates for GitLab Runner to use
## Provide resource name for a Kubernetes Secret Object in the same namespace,
## this is used to populate the /home/gitlab-runner/.gitlab-runner/certs/ directory
## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates
##
# certsSecretName:
## Configure the maximum number of concurrent jobs
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
concurrent: 10
## Defines in seconds how often to check GitLab for new builds
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
checkInterval: 30
limitLine: 100000
## Configure GitLab Runner's logging level. Available values are: debug, info, warn, error, fatal, panic
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
# logLevel:
## Configure GitLab Runner's logging format. Available values are: runner, text, json
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-global-section
##
# logFormat:
## For RBAC support:
rbac:
  create: false
  serviceAccountName: gitlab
  ## Define specific rbac permissions.
  # resources: ["pods", "pods/exec", "secrets"]
  # verbs: ["get", "list", "watch", "create", "patch", "delete"]
  ## Run the gitlab-bastion container with the ability to deploy/manage containers of jobs
  ## cluster-wide or only within namespace
  clusterWideAccess: false
## Use the following Kubernetes Service Account name if RBAC is disabled in this Helm chart (see rbac.create)
##
# serviceAccountName: default
## Configure integrated Prometheus metrics exporter
## ref: https://docs.gitlab.com/runner/monitoring/#configuration-of-the-metrics-http-server
metrics:
  enabled: true
## Configuration for the Pods that the runner launches for each new job
##
runners:
  ## Default container image to use for builds when none is specified
  ##
  image: ubuntu:16.04
  ## Specify one or more imagePullSecrets
  ##
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # imagePullSecrets: []
  ## Specify the image pull policy: never, if-not-present, always. The cluster default will be used if not set.
  ##
  # imagePullPolicy: ""
  ## Defines number of concurrent requests for new job from GitLab
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-section
  ##
  # requestConcurrency: 1
  ## Specify whether the runner should be locked to a specific project: true, false. Defaults to true.
  ##
  # locked: true
  ## Specify the tags associated with the runner. Comma-separated list of tags.
  ##
  ## ref: https://docs.gitlab.com/ce/ci/runners/#using-tags
  ##
  # tags: ""
  ## Specify if jobs without tags should be run.
  ## If not specified, Runner will default to true if no tags were specified. In other case it will
  ## default to false.
  ##
  ## ref: https://docs.gitlab.com/ce/ci/runners/#allowing-runners-with-tags-to-pick-jobs-without-tags
  ##
  # runUntagged: true
  ## Run all containers with the privileged flag enabled
  ## This will allow the docker:dind image to run if you need to run Docker
  ## commands. Please read the docs before turning this on:
  ## ref: https://docs.gitlab.com/runner/executors/kubernetes.html#using-docker-dind
  ##
  privileged: false
  ## The name of the secret containing runner-token and runner-registration-token
  # secret: gitlab-runner
  ## Namespace to run Kubernetes jobs in (defaults to the same namespace of this release)
  ##
  # namespace:
  ## Distributed runners caching
  ## ref: https://gitlab.com/gitlab-org/gitlab-runner/blob/master/docs/configuration/autoscale.md#distributed-runners-caching
  ##
  ## If you want to use s3 based distributing caching:
  ## First of all you need to uncomment General settings and S3 settings sections.
  ##
  ## Create a secret 's3access' containing 'accesskey' & 'secretkey'
  ## ref: https://aws.amazon.com/blogs/security/wheres-my-secret-access-key/
  ##
  ## $ kubectl create secret generic s3access \
  ##   --from-literal=accesskey="YourAccessKey" \
  ##   --from-literal=secretkey="YourSecretKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## If you want to use gcs based distributing caching:
  ## First of all you need to uncomment General settings and GCS settings sections.
  ##
  ## Access using credentials file:
  ## Create a secret 'google-application-credentials' containing your application credentials file.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runnerscachegcs-section
  ## You could configure
  ## $ kubectl create secret generic google-application-credentials \
  ##   --from-file=gcs-applicaton-credentials-file=./path-to-your-google-application-credentials-file.json
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  ##
  ## Access using access-id and private-key:
  ## Create a secret 'gcsaccess' containing 'gcs-access-id' & 'gcs-private-key'.
  ## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html#the-runners-cache-gcs-section
  ## You could configure
  ## $ kubectl create secret generic gcsaccess \
  ##   --from-literal=gcs-access-id="YourAccessID" \
  ##   --from-literal=gcs-private-key="YourPrivateKey"
  ## ref: https://kubernetes.io/docs/concepts/configuration/secret/
  cache: {}
    ## General settings
    # cacheType: s3
    # cachePath: "gitlab_runner"
    # cacheShared: true
    ## S3 settings
    # s3ServerAddress: s3.amazonaws.com
    # s3BucketName:
    # s3BucketLocation:
    # s3CacheInsecure: false
    # secretName: s3access
    ## GCS settings
    # gcsBucketName:
    ## Use this line for access using access-id and private-key
    # secretName: gcsaccess
    ## Use this line for access using google-application-credentials file
    # secretName: google-application-credentials
  ## Build Container specific configuration
  ##
  builds: {}
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    # cpuRequests: 100m
    # memoryRequests: 128Mi
  ## Service Container specific configuration
  ##
  services: {}
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    # cpuRequests: 100m
    # memoryRequests: 128Mi
  ## Helper Container specific configuration
  ##
  helpers: {}
    # cpuLimit: 200m
    # memoryLimit: 256Mi
    # cpuRequests: 100m
    # memoryRequests: 128Mi
    # image: gitlab/gitlab-runner-helper:x86_64-latest
  ## Service Account to be used for runners
  ##
  # serviceAccountName:
  ## If Gitlab is not reachable through $CI_SERVER_URL
  ##
  # cloneUrl:
  ## Specify node labels for CI job pods assignment
  ## ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/
  ##
  # nodeSelector: {}
  ## Specify pod labels for CI job pods
  ##
  # podLabels: {}
  ## Specify annotations for job pods, useful for annotations such as iam.amazonaws.com/role
  # podAnnotations: {}
  ## Configure environment variables that will be injected to the pods that are created while
  ## the build is running. These variables are passed as parameters, i.e. `--env "NAME=VALUE"`,
  ## to `gitlab-runner register` command.
  ##
  ## Note that `envVars` (see below) are only present in the runner pod, not the pods that are
  ## created for each build.
  ##
  ## ref: https://docs.gitlab.com/runner/commands/#gitlab-runner-register
  ##
  # env:
  #   NAME: VALUE
## Configure resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources: {}
# limits:
# memory: 256Mi
# cpu: 200m
# requests:
# memory: 128Mi
# cpu: 100m
## Affinity for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}
## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
  nodeType: k8s
# Example: The gitlab runner manager should not run on spot instances so you can assign
# them to the regular worker nodes only.
# node-role.kubernetes.io/worker: "true"
## List of node taints to tolerate (requires Kubernetes >= 1.6)
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []
# Example: Regular worker nodes may have a taint, thus you need to tolerate the taint
# when you assign the gitlab runner manager with nodeSelector or affinity to the nodes.
# - key: "node-role.kubernetes.io/worker"
# operator: "Exists"
## Configure environment variables that will be present when the registration command runs
## This provides further control over the registration process and the config.toml file
## ref: `gitlab-runner register --help`
## ref: https://docs.gitlab.com/runner/configuration/advanced-configuration.html
##
# envVars:
# - name: RUNNER_EXECUTOR
# value: kubernetes
## list of hosts and IPs that will be injected into the pod's hosts file
hostAliases: []
# Example:
# - ip: "127.0.0.1"
# hostnames:
# - "foo.local"
# - "bar.local"
# - ip: "10.1.2.3"
# hostnames:
# - "foo.remote"
# - "bar.remote"
## Annotations to be added to manager pod
##
podAnnotations: {}
# Example:
# iam.amazonaws.com/role: <my_role_arn>
## HPA support for custom metrics:
## This section enables runners to autoscale based on defined custom metrics.
## In order to use this functionality, Need to enable a custom metrics API server by
## implementing "custom.metrics.k8s.io" using supported third party adapter
## Example: https://github.com/directxman12/k8s-prometheus-adapter
##
#hpa: {}
# minReplicas: 1
# maxReplicas: 10
# metrics:
# - type: Pods
# pods:
# metricName: gitlab_runner_jobs
# targetAverageValue: 400m

Helm for gitlab configmap

cat templates/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "gitlab-runner.fullname" . }}
  labels:
    app: {{ include "gitlab-runner.fullname" . }}
    chart: {{ include "gitlab-runner.chart" . }}
    release: "{{ .Release.Name }}"
    heritage: "{{ .Release.Service }}"
data:
  entrypoint: |
    #!/bin/bash
    set -e
    mkdir -p /home/gitlab-runner/.gitlab-runner/
    cp /scripts/config.toml /home/gitlab-runner/.gitlab-runner/

    # Register the runner
    if [[ -f /secrets/accesskey && -f /secrets/secretkey ]]; then
      export CACHE_S3_ACCESS_KEY=$(cat /secrets/accesskey)
      export CACHE_S3_SECRET_KEY=$(cat /secrets/secretkey)
    fi

    if [[ -f /secrets/gcs-applicaton-credentials-file ]]; then
      export GOOGLE_APPLICATION_CREDENTIALS="/secrets/gcs-applicaton-credentials-file"
    else
      if [[ -f /secrets/gcs-access-id && -f /secrets/gcs-private-key ]]; then
        export CACHE_GCS_ACCESS_ID=$(cat /secrets/gcs-access-id)
        # echo -e used to make private key multiline (in google json auth key private key is oneline with \n)
        export CACHE_GCS_PRIVATE_KEY=$(echo -e $(cat /secrets/gcs-private-key))
      fi
    fi

    if [[ -f /secrets/runner-registration-token ]]; then
      export REGISTRATION_TOKEN=$(cat /secrets/runner-registration-token)
    fi

    if [[ -f /secrets/runner-token ]]; then
      export CI_SERVER_TOKEN=$(cat /secrets/runner-token)
    fi

    if ! sh /scripts/register-the-runner; then
      exit 1
    fi

    cat >>/home/gitlab-runner/.gitlab-runner/config.toml <<EOF
        output_limit = 100000
        [[runners.kubernetes.volumes.pvc]]
          name = "gitlab-runner-maven-repo-claim"
          mount_path = "/home/cache/maven"
        [[runners.kubernetes.volumes.host_path]]
          name = "docker"
          mount_path = "/var/run/docker.sock"
    EOF

    # Start the runner
    exec /entrypoint run --user=gitlab-runner \
      --working-directory=/home/gitlab-runner

  config.toml: |
    concurrent = {{ .Values.concurrent }}
    check_interval = {{ .Values.checkInterval }}
    log_level = {{ default "info" .Values.logLevel | quote }}
    output_limit = {{ default 10000 .Values.limitLine }}
    {{- if .Values.logFormat }}
    log_format = {{ .Values.logFormat | quote }}
    {{- end }}
    {{- if .Values.metrics.enabled }}
    listen_address = '[::]:9252'
    {{- end }}

  configure: |
    set -e
    cp /init-secrets/* /secrets

  register-the-runner: |
    #!/bin/bash
    MAX_REGISTER_ATTEMPTS=30

    for i in $(seq 1 "${MAX_REGISTER_ATTEMPTS}"); do
      echo "Registration attempt ${i} of ${MAX_REGISTER_ATTEMPTS}"
      /entrypoint register \
        {{- range .Values.runners.imagePullSecrets }}
        --kubernetes-image-pull-secrets {{ . | quote }} \
        {{- end }}
        {{- range $key, $val := .Values.runners.nodeSelector }}
        --kubernetes-node-selector {{ $key | quote }}:{{ $val | quote }} \
        {{- end }}
        {{- range $key, $value := .Values.runners.podLabels }}
        --kubernetes-pod-labels {{ $key | quote }}:{{ $value | quote }} \
        {{- end }}
        {{- range $key, $val := .Values.runners.podAnnotations }}
        --kubernetes-pod-annotations {{ $key | quote }}:{{ $val | quote }} \
        {{- end }}
        {{- range $key, $value := .Values.runners.env }}
        --env {{ $key | quote -}} = {{- $value | quote }} \
        {{- end }}
        {{- if and (hasKey .Values.runners "runUntagged") .Values.runners.runUntagged }}
        --run-untagged=true \
        {{- end }}
        --non-interactive

      retval=$?

      if [ ${retval} = 0 ]; then
        break
      elif [ ${i} = ${MAX_REGISTER_ATTEMPTS} ]; then
        exit 1
      fi

      sleep 5
    done

    exit 0

  check-live: |
    #!/bin/bash
    if /usr/bin/pgrep -f .*register-the-runner; then
      exit 0
    elif /usr/bin/pgrep gitlab.*runner; then
      exit 0
    else
      exit 1
    fi

gitlab-runner

helm install --name xxx gitlab/gitlab-runner -f values-spm-labour.yaml --namespace gitlab-runners --version "v0.10.0-rc1"
helm upgrade xxx ./gitlab-runner/
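
Here xxx is whatever release name you chose. To confirm the runner registered against GitLab, check the pod and its logs; the deployment name below assumes the chart's usual <release>-gitlab-runner naming:

kubectl -n gitlab-runners get pods
kubectl -n gitlab-runners logs deploy/xxx-gitlab-runner | grep -i registering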

Create the maven cache for the runner

Needed for Java projects.

cat maven-nfs.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-runner-maven-repo
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
  mountOptions:
  - nolock
  nfs:
    path: /data/nfs/volumes
    server: YOURIPADDRESS
  persistentVolumeReclaimPolicy: Recycle
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-runner-maven-repo-claim
  namespace: gitlab-runners
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: gitlab-runner-maven-repo
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi
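
Apply the manifest and make sure the claim binds before the runner tries to mount it:

kubectl apply -f maven-nfs.yaml
kubectl get pv gitlab-runner-maven-repo                            # STATUS should be Bound
kubectl -n gitlab-runners get pvc gitlab-runner-maven-repo-claim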

Create the maven cache PVC

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-runner-maven-repo-claim
  namespace: gitlab-runners
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  volumeName: gitlab-runner-maven-repo
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 5Gi

Code and image upload

## To delete a release
helm del --purge spm-labour
## Create the gitlab namespace
kubectl create ns gitlab
### Registry secret used for pushing/pulling code images
kubectl create secret docker-registry huawei-auth --docker-server=XXX.myhuaweicloud.com --docker-username=XXX --docker-password=XXXX -ngitlab
###kubectl create secret docker-registry aliyun-auth --docker-server=registry.cn-zhangjiakou.aliyuncs.com --docker-username=xxx --docker-password=xxx -ngitlab
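
For job pods to actually pull with this secret, it has to be referenced as an imagePullSecret. One way (a sketch, not from the original post) is to attach it to the namespace's default service account:

kubectl -n gitlab patch serviceaccount default \
  -p '{"imagePullSecrets":[{"name":"huawei-auth"}]}'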
Other pitfalls encountered

GitLab k8s integration: certificate signature errors

kubectl get secrets
kubectl get secret default-token-52xsf -o jsonpath="{['data']['ca\.crt']}" | base64 --decode

cat gitlab-admin-service-account.yaml

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
  namespace: gitlab
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab
  namespace: gitlab
subjects:
- kind: ServiceAccount
  name: gitlab
  namespace: gitlab
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin

Get the service account token

kubectl -n gitlab describe secret $(kubectl -n gitlab get secret | grep gitlab| awk '{print $1}')
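
The printed token can be sanity-checked against the apiserver before pasting it into GitLab (an extra check, not from the original post; substitute your master's address):

TOKEN=$(kubectl -n gitlab get secret \
  $(kubectl -n gitlab get secret | grep gitlab | awk '{print $1}') \
  -o jsonpath='{.data.token}' | base64 --decode)
curl -k -H "Authorization: Bearer $TOKEN" https://10.1.69.101:6443/version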

Notes on deploying k8s on a public network

Posted on 2020-10-29

Preparation

1. Install required packages

yum install -y wget vim net-tools epel-release

2. Disable swap

swapoff -a
# To disable permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab

3. Disable SELinux

# Disable SELinux temporarily
setenforce 0
# Disable it permanently by editing the config files:
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

4. Disable the firewall

systemctl disable firewalld
systemctl stop firewalld

Install Docker and configure a proxy

1. Configure yum repositories

## Configure the default repo
## Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
## Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
## Refresh the cache
yum makecache fast
## Configure the k8s repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
## Rebuild the yum cache
yum clean all
yum makecache fast
yum -y update

2. Install Docker

yum -y install yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y install docker-ce
systemctl enable docker
systemctl start docker

3. Make sure kubelet uses the same cgroup driver as Docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker

4. Configure a Docker proxy (key step if you have a proxy)

If you have no proxy, skip to step 5.

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://xxx"
Environment="HTTPS_PROXY=http://xxx"
Environment="NO_PROXY=localhost,127.0.0.1,localaddress,.localdomain.com"
EOF
systemctl daemon-reload && systemctl restart docker
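
To verify the proxy settings took effect:

systemctl show --property=Environment docker
docker info | grep -i proxy   # newer Docker versions also report the proxies here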

5. Pull the images from a domestic registry (key step if you have no proxy)

## List the images (and versions) cluster init needs; these are what must be downloaded
kubeadm config images list

#!/bin/bash
images=(
  kube-apiserver:v1.18.0
  kube-controller-manager:v1.18.0
  kube-scheduler:v1.18.0
  kube-proxy:v1.18.0
  pause:3.2
  etcd:3.4.3-0
  coredns:1.6.7
  kubernetes-dashboard-amd64:v1.10.1
)
for imageName in ${images[@]}; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done

Install k8s

1. Install kubeadm, kubelet, etc.

yum -y install kubelet kubeadm kubectl kubernetes-cni
systemctl enable kubelet && systemctl start kubelet

2. Initialize the cluster

## Run on the master node:
sudo kubeadm init \
--apiserver-advertise-address 10.1.69.101 \
--kubernetes-version=v1.18.0 \
--pod-network-cidr=10.244.0.0/16

On a public-only network, init will hang at:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Modify etcd.yaml

The file is at "/etc/kubernetes/manifests/etcd.yaml".

(The original post shows before/after screenshots of this file, omitted here.)

Here "xxx" is the public IP. The flags to watch are --listen-client-urls and --listen-peer-urls: both need to be changed to bind 0.0.0.0 instead of the public IP.
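
An indicative excerpt (the flag values are examples; your generated file will differ):

grep -E 'listen-(client|peer)-urls' /etc/kubernetes/manifests/etcd.yaml
#   before:
#     - --listen-client-urls=https://127.0.0.1:2379,https://<public-ip>:2379
#     - --listen-peer-urls=https://<public-ip>:2380
#   after:
#     - --listen-client-urls=https://0.0.0.0:2379
#     - --listen-peer-urls=https://0.0.0.0:2380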

Expected output:

(... output omitted)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
## Save this join command; it is needed when nodes join the cluster
kubeadm join 10.1.69.101:6443 --token ou5pvo.qseafc4s8licblzy \
--discovery-token-ca-cert-hash sha256:de9c10f11c50c074f212698b9d514fc12a9c1c4ffe70961aff89ac5e585f0663

3. Copy the config for kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

4. Install the flannel network

sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
## Check whether flannel is running
sudo kubectl -n kube-system get po -l app=flannel -o wide

5. Join worker nodes to the cluster

Run on each of the other nodes:

kubeadm join 39.99.xxx.xxx:6443 --token w4tx2r.0dm194h24wlw8m8t --discovery-token-ca-cert-hash sha256:caae3b178b4172839269254f373b6eaa14514173b01e129e0b0fe7d64694dc39

Cleaning up

If anything goes wrong during installation, you can reset and start over:

sudo kubeadm reset

Install the Dashboard
# Install the dashboard (inside China you may need a different yaml mirror)
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
# Change the service to NodePort
kubectl patch svc -n kube-system kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# Inspect the services (here the dashboard is exposed on 443:32383/TCP)
kubectl get svc -n kube-system
# --- output ---
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   7h40m
kubernetes-dashboard   NodePort    10.111.77.210   <none>        443:32383/TCP            3h42m
# --- output ---
# Find the node the dashboard runs on (here it is 192.168.20.4)
kubectl get pods -A -o wide
# --- output ---
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-fb8b8dccf-rn8kd                 1/1     Running   0          7h43m   10.244.0.2     master   <none>           <none>
kube-system   coredns-fb8b8dccf-slwr4                 1/1     Running   0          7h43m   10.244.0.3     master   <none>           <none>
kube-system   etcd-master                             1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
kube-system   kube-apiserver-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
kube-system   kube-controller-manager-master          1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-l8c7c             1/1     Running   0          7h3m    192.168.20.5   master   <none>           <none>
kube-system   kube-flannel-ds-amd64-lcmxw             1/1     Running   1          6h50m   192.168.20.4   node1    <none>           <none>
kube-system   kube-flannel-ds-amd64-pqnln             1/1     Running   1          6h5m    192.168.20.3   node2    <none>           <none>
kube-system   kube-proxy-4kcqb                        1/1     Running   0          7h43m   192.168.20.5   master   <none>           <none>
kube-system   kube-proxy-jcqjd                        1/1     Running   0          6h5m    192.168.20.3   node2    <none>           <none>
kube-system   kube-proxy-vm9sj                        1/1     Running   0          6h50m   192.168.20.4   node1    <none>           <none>
kube-system   kube-scheduler-master                   1/1     Running   0          7h42m   192.168.20.5   master   <none>           <none>
kube-system   kubernetes-dashboard-5f7b999d65-2ltmv   1/1     Running   0          3h45m   10.244.1.2     node1    <none>           <none>
# --- output ---
# If the pod never reaches Running, debug with:
journalctl -f -u kubelet # follow only the kubelet logs
### If image pulls fail and you cannot reach the upstream registry, pre-pull the image.
### Run this on the node where kubernetes-dashboard is scheduled:
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker rmi mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1

Get the login token

## Token for the dashboard login page
kubectl -n kube-system describe $(kubectl -n kube-system \
get secret -n kube-system -o name | grep namespace) | grep token

If the dashboard token expires, the following script regenerates the kubeconfig credentials:

#!/bin/bash
TOKEN=$(kubectl -n kube-system describe secret default | awk '$1=="token:"{print $2}')
kubectl config set-credentials kubernetes-admin --token="${TOKEN}"

How to write a Dockerfile

Posted on 2019-04-18 | Category: Docker

Example

Suppose we need to run a Node.js application with Docker:

FROM ubuntu
ADD . /app
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nodejs ssh mysql
RUN cd /app && npm install
CMD mysql & sshd & npm start

Build the image:

docker build -t mydocker .

Write a .dockerignore file

When building an image, Docker first prepares a context, collecting all the files it might need; by default the context contains everything in the Dockerfile's directory. In practice we don't need the .git directory, node_modules, and so on. .dockerignore works like .gitignore: excluding unneeded files noticeably speeds up builds and shrinks the image. For example:

.git/
node_modules/

Run only one application per container

Technically you can run multiple processes in one Docker container: database, frontend, backend, ssh, and supervisor all together. But it will make you miserable:

  • Very long build times (a frontend change forces a rebuild of the whole backend)
  • Very large images
  • Logs from multiple applications are hard to manage
  • Wasteful horizontal scaling (different applications need different numbers of containers)
  • Zombie processes: you have to pick a suitable init process

So I recommend building a separate image for each application and running multiple containers with Docker Compose.

Now remove some unneeded packages from the Dockerfile; SSH can be replaced with docker exec. For example:

FROM ubuntu
ADD . /app
RUN apt-get update
RUN apt-get upgrade -y
# we should remove ssh and mysql, and use
# separate container for database
RUN apt-get install -y nodejs # ssh mysql
RUN cd /app && npm install
CMD npm start

Merge multiple RUN instructions into one

Docker images are layered, and these points matter:

  • Each Dockerfile instruction creates a new image layer.
  • Layers are cached and reused.
  • A layer's cache is invalidated when its instruction changes, the files it copies change, or the variables given at build time differ.
  • When one layer's cache is invalidated, every layer after it is invalidated too.
  • Layers are immutable: if we add a file in one layer and delete it in the next, the image still contains the file (it is merely invisible inside the container).

Docker images are like onions: both have many layers, and to change an inner layer you have to remove the outer ones first. Keep that in mind and the rest is easy to understand.

Now merge all the RUN instructions into one, and drop apt-get upgrade, which makes builds non-deterministic (rely on updates to the base image instead):

FROM ubuntu
ADD . /app
RUN apt-get update \
&& apt-get install -y nodejs \
&& cd /app \
&& npm install
CMD npm start

Remember to merge only instructions that change at the same rate. If installing node.js is bundled with installing the npm modules, every source change reinstalls node.js, which is clearly wrong. The correct version is:

FROM ubuntu
RUN apt-get update && apt-get install -y nodejs
ADD . /app
RUN cd /app && npm install
CMD npm start

Don't use latest as the base image tag

When an image has no explicit tag, the latest tag is used, so FROM ubuntu is equivalent to FROM ubuntu:latest. But when the image is updated, latest points at a different image and the build may break. Unless you really need the newest base image, pin a specific tag.

The example Dockerfile should use 16.04 as the tag:

# it's that easy!
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nodejs
ADD . /app
RUN cd /app && npm install
CMD npm start

Delete leftover files in each RUN instruction

Say we update the apt sources and then download, unpack, and install some packages; all of that leaves data under /var/lib/apt/lists/. The running application does not need those files, and they make the image bigger, so they are best deleted.

In the example Dockerfile we can remove the files under /var/lib/apt/lists/ (they are generated by apt-get update):

FROM ubuntu:16.04
RUN apt-get update \
&& apt-get install -y nodejs \
# added lines
&& rm -rf /var/lib/apt/lists/*
ADD . /app
RUN cd /app && npm install
CMD npm start

Choose a suitable base image (alpine if possible)

In the example we chose ubuntu as the base image, but we only need to run a node program; is a general-purpose base image really necessary? The node image is a better choice:

FROM node
ADD . /app
# we don't need to install node
# anymore and use apt-get
RUN cd /app && npm install
CMD npm start

An even better choice is the alpine variant of the node image. alpine is a minimal Linux distribution of only about 4MB, which makes it ideal as a base image:

FROM node:7-alpine
ADD . /app
RUN cd /app && npm install
CMD npm start

Set WORKDIR and CMD

WORKDIR sets the default directory, i.e. where the RUN / CMD / ENTRYPOINT instructions run.

CMD sets the default command executed when a container starts. The command should be written as an array whose elements are the individual words of the command (see the official docs):

FROM node:7-alpine
WORKDIR /app
ADD . /app
RUN npm install
CMD ["npm", "start"]

Use ENTRYPOINT (optional)

ENTRYPOINT is not required, since it adds complexity. It is a script that runs by default and receives the specified command as its arguments; it is typically used to build executable images. An entrypoint.sh:

#!/usr/bin/env sh
# $0 is a script name,
# $1, $2, $3 etc are passed arguments
# $1 is our command
CMD=$1
case "$CMD" in
"dev" )
npm install
export NODE_ENV=development
exec npm run dev
;;
"start" )
# we can modify files here, using ENV variables passed in
# "docker create" command. It can't be done during build process.
echo "db: $DATABASE_ADDRESS" >> /app/config.yml
export NODE_ENV=production
exec npm start
;;
* )
# Run custom command. Thanks to this line we can still use
# "docker run our_image /bin/bash" and it will work
exec $CMD ${@:2}
;;
esac

Example Dockerfile:

FROM node:7-alpine
WORKDIR /app
ADD . /app
RUN npm install
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start"]

The image can then be run like this:

# run the development version
docker run our-app dev
# run the production version
docker run our-app start
# run bash
docker run -it our-app /bin/bash

Prefer COPY over ADD

Both copy files into the image, but ADD also auto-extracts local tar archives and can fetch remote URLs, which makes builds harder to reason about; use the plain COPY unless you need those extras:

FROM node:7-alpine
WORKDIR /app
COPY . /app
RUN npm install
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start"]

Order COPY and RUN sensibly

Put the parts that change least often at the top of the Dockerfile to make full use of the layer cache.

In our example the source code changes frequently, so every image build would reinstall the NPM modules, which is not what we want. So copy package.json first, install the NPM modules, and only then copy the rest of the source; then a source-only change no longer reinstalls the modules:

FROM node:7-alpine
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start"]

Set default environment variables, exposed ports, and volumes

FROM node:7-alpine
ENV PROJECT_DIR=/app
WORKDIR $PROJECT_DIR
COPY package.json $PROJECT_DIR
RUN npm install
COPY . $PROJECT_DIR
ENV MEDIA_DIR=/media \
NODE_ENV=production \
APP_PORT=3000
VOLUME $MEDIA_DIR
EXPOSE $APP_PORT
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start"]

Add a HEALTHCHECK

When running a container you can pass the --restart always option; the Docker daemon then restarts the container whenever it crashes, which is very useful for long-running containers. But what if the container is running yet unusable (stuck in an infinite loop, misconfigured)? The HEALTHCHECK instruction lets Docker check the container's health periodically: we only supply a command that returns 0 when everything is fine and 1 otherwise. For example:

FROM node:7-alpine
LABEL maintainer "jakub.skalecki@example.com"
ENV PROJECT_DIR=/app
WORKDIR $PROJECT_DIR
COPY package.json $PROJECT_DIR
RUN npm install
COPY . $PROJECT_DIR
ENV MEDIA_DIR=/media \
NODE_ENV=production \
APP_PORT=3000
VOLUME $MEDIA_DIR
EXPOSE $APP_PORT
HEALTHCHECK CMD curl --fail http://localhost:$APP_PORT || exit 1
ENTRYPOINT ["./entrypoint.sh"]
CMD ["start"]

Notes: an SQLite-backed cache class for the company Unity team

Posted on 2019-04-18 | Category: Unity

Requirements

  1. Manage all online API data in a local cache
  2. Use an object-oriented design
  3. Use SQLite as the db

Design

Pull data from the server and cache it locally. Read the locally stored version and compare it with the server's version number: if there is an update, refresh the local database; otherwise read from the local cache.

Core code

IDbDao.cs, the interface definition

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.DbDao
{
    public interface IDbDao<T> where T : class, new()
    {
        void setCache(string r);
    }
}

CacheOption.cs, the implementation

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Assets.Scripts.Repository;
using Assets.Scripts.Model;
using Assets.Scripts.Infrastructure;
using UnityEngine;
using Newtonsoft.Json;

namespace Assets.Scripts.DbDao
{
    public class CacheOption<T> : IDbDao<T> where T : class, new()
    {
        public void setCache_dbPath(string r, string path, bool is_append = false)
        {
            var repository = new RemoteRepository<GetStruct>();
            repository.Get<Response<T>>(r, (response) =>
            {
                Debug.Log(JsonConvert.SerializeObject(response));
                if (response.List != null && response.List.Count != 0)
                {
                    var db = new DbRepository<T>();
                    db.CreateDb(path);
                    if (!is_append)
                    {
                        db.DropTable();
                    }
                    db.CreateTable();
                    db.InsertAll(response.List); // refresh from the remote data source
                    db.Close();
                }
                else
                {
                    Debug.Log("Struct List data is null ");
                }
            });
        }

        // Shared cache-writing implementation
        public void setCache(string r)
        {
            var repository = new RemoteRepository<GetStruct>();
            var version = new DbRepository<TableVersions>();
            version.DataService("vesali.db");
            version.CreateTable();
            TableVersions tv = version.SelectOne<TableVersions>((tmpT) =>
            {
                if (tmpT.table_name == typeof(T).Name)
                {
                    return true;
                }
                return false;
            });
            var struct_version = "-1";
            if (tv != null)
            {
                struct_version = tv.version;
            }
            repository.Get<Response<T>>(r, new GetStruct { Version = struct_version, device = SystemInfo.deviceUniqueIdentifier, os = Enum.GetName(typeof(asset_platform), PublicClass.platform), level = ((int)PublicClass.Quality).ToString(), softVersion = PublicClass.get_version() }, (response) =>
            {
                if (response.List != null && response.List.Count != 0)
                {
                    Debug.Log("============ cache is stale: pulling from the server and updating the version number ============");
                    // Data came back: update both the cached data and the stored version
                    if (tv == null)
                    {
                        version.Insert(new TableVersions { table_name = typeof(T).Name, version = response.maxVersion });
                    }
                    else
                    {
                        version.Update(new TableVersions { table_name = typeof(T).Name, version = response.maxVersion });
                    }
                    version.Close();
                    var db = new DbRepository<T>();
                    db.DataService("vesali.db");
                    db.DropTable();
                    db.CreateTable();
                    db.InsertAll(response.List); // refresh from the remote data source
                    db.Close();
                }
                else
                {
                    Debug.Log("Struct List data is null ");
                    // No update: callers read from the local database cache instead
                }
                PublicClass.data_list_count++;
            });
        }
    }
}

IRepository.cs, the interface for basic data operations (CRUD)

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Repository
{
    public interface IRepository<T> where T : class, new()
    {
        void Insert(T instance);
        void Delete(T instance);
        void Update(T instance);
        IEnumerable<T> Select(Func<T, bool> func);
    }
}

DbRepository.cs, the database implementation

using System;
using System.IO;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using SQLite4Unity3d;
using UnityEngine;

namespace Assets.Scripts.Repository
{
    public class DbRepository<T> : IRepository<T> where T : class, new()
    {
        private SQLiteConnection _connection;

        public void CreateTable()
        {
            _connection.CreateTable<T>();
        }

        public TableQuery<T> getTableQuerry()
        {
            return _connection.Table<T>();
        }

        public void DropTable()
        {
            _connection.DropTable<T>();
        }

        public void Delete(T instance)
        {
            try
            {
                _connection.Delete(instance);
            }
            catch (Exception e)
            {
                throw new Exception(e.ToString());
            }
        }

        public void Insert(T instance)
        {
            try
            {
                _connection.Insert(instance);
            }
            catch (Exception e)
            {
                throw new Exception(e.ToString());
            }
        }

        public void InsertAll(T[] instance)
        {
            try
            {
                _connection.InsertAll(instance);
            }
            catch (Exception e)
            {
                throw new Exception(e.ToString());
            }
        }

        public void InsertAll(List<T> instance)
        {
            try
            {
                _connection.InsertAll(instance);
            }
            catch (Exception e)
            {
                throw new Exception(e.ToString());
            }
        }

        public T SelectOne<R>(Func<T, bool> func) where R : class, new()
        {
            return _connection.Table<T>().Where(func).FirstOrDefault();
        }

        public IEnumerable<T> Select<R>(Func<T, bool> func) where R : class, new()
        {
            return _connection.Table<T>().Where(func);
        }

        public void Update(T t)
        {
            _connection.Update(t);
        }

        public void Close()
        {
            try
            {
                _connection.Close();
            }
            catch (Exception e)
            {
                throw new Exception(e.ToString());
            }
        }

        public void CreateDb(string dbPath)
        {
            _connection = new SQLiteConnection(dbPath, SQLiteOpenFlags.ReadWrite | SQLiteOpenFlags.Create);
        }

        public void DataService(string DatabaseName)
        {
#if UNITY_EDITOR
            var dbPath = string.Format("{0}/{1}", PublicClass.vesal_db_path, DatabaseName);
#else
            var filepath = string.Format("{0}/{1}", PublicClass.vesal_db_path, DatabaseName);
            var dbPath = filepath;
#endif
            _connection = new SQLiteConnection(dbPath, SQLiteOpenFlags.ReadWrite | SQLiteOpenFlags.Create);
        }

        public IEnumerable<T> Select(Func<T, bool> func)
        {
            throw new NotImplementedException();
        }
    }
}

RemoteRepository.cs, the HTTP client for the API

using System;
using System.Collections.Generic;
using System.Text;
using Assets.Scripts.Infrastructure;
using Assets.Scripts.Network;
using UnityEngine;
using Newtonsoft.Json;

namespace Assets.Scripts.Repository
{
    public class RemoteRepository<T> where T : class, new()
    {
        public ISerializer Serializer { get; set; }

        public RemoteRepository(ISerializer serializer = null)
        {
            Serializer = serializer ?? SerializerJson.Instance;
        }

        public void Get<R>(string url, T instance, Action<R> onSuccess) where R : class, new()
        {
            var parameters = HttpUtility.BuildParameters(instance, new StringBuilder("?"));
            var httpRequest = new HttpRequest
            {
                Url = url,
                Method = HttpMethod.Get,
                Parameters = parameters
            };
            Debug.Log(httpRequest.Url + httpRequest.Parameters);
            HttpClient.Instance.SendAsync(httpRequest, httpResponse =>
            {
                if (httpResponse.IsSuccess)
                {
                    R r = JsonConvert.DeserializeObject<R>(httpResponse.Data);
                    onSuccess(r);
                }
                // TODO: error handling
            });
        }

        public void Get<R>(string url, Action<R> onSuccess) where R : class, new()
        {
            var httpRequest = new HttpRequest
            {
                Url = url,
                Method = HttpMethod.Get,
            };
            HttpClient.Instance.SendAsync(httpRequest, httpResponse =>
            {
                if (httpResponse.IsSuccess)
                {
                    R r = JsonConvert.DeserializeObject<R>(httpResponse.Data);
                    Debug.Log("json str :" + r);
                    onSuccess(r);
                }
                // TODO: error handling
            });
        }

        public void Post<R>(string url, T instance, Action<R> onSuccess) where R : class, new()
        {
            var parameters = HttpUtility.BuildParameters(instance, new StringBuilder());
            var httpRequest = new HttpRequest
            {
                Url = url,
                Method = HttpMethod.Post,
                Parameters = parameters
            };
            HttpClient.Instance.SendAsync(httpRequest, httpResponse =>
            {
                if (httpResponse.IsSuccess)
                {
                    // TODO: check whether Data is present
                    onSuccess(Serializer.Deserialize<R>(httpResponse.Data));
                }
            });
        }

        public void Test()
        {
            Debug.Log("Hello...");
        }
    }
}

Response.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Infrastructure
{
    [System.Serializable]
    public class Response<T>
    {
        public string msg;
        public int code;
        public List<T> List;
        public string maxVersion;
    }
}

HttpUtility.cs, an HTTP helper class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Infrastructure
{
    public class HttpUtility
    {
        public static string BuildParameters<T>(T instance, StringBuilder sb) where T : class, new()
        {
            foreach (var property in instance.GetType().GetProperties())
            {
                var propertyName = property.Name;
                var value = property.GetValue(instance, null);
                sb.Append(propertyName + "=" + value + "&");
            }
            return sb.ToString().TrimEnd('&');
        }
    }
}

ISerializer.cs, the serialization interface

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Infrastructure
{
    public interface ISerializer
    {
        string Serialize<T>(T obj, bool readableOutput = false) where T : class, new();
        T Deserialize<T>(string json) where T : class, new();
    }
}

SerializerJson.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using UnityEngine;

namespace Assets.Scripts.Infrastructure
{
    public class SerializerJson : ISerializer
    {
        public static readonly SerializerJson Instance = new SerializerJson();

        private SerializerJson()
        {
        }

        public string Serialize<T>(T obj, bool readableOutput = false) where T : class, new()
        {
            throw new NotImplementedException();
        }

        public T Deserialize<T>(string json) where T : class, new()
        {
            return JsonUtility.FromJson<T>(json);
        }
    }
}

SQLite.cs

(From SQLite4Unity3d; see its source, omitted here.)

HttpMethod.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Assets.Scripts.Infrastructure
{
    public enum HttpMethod
    {
        Get,
        Post
    }
}

Model examples

TableVersions.cs

using System;
using System.Collections.Generic;
using SQLite4Unity3d;

namespace Assets.Scripts.Model
{
    [Serializable]
    public class TableVersions
    {
        [PrimaryKey]
        public string table_name { get; set; }
        public string version { get; set; }
    }
}

GetStruct.cs

using System;

namespace Assets.Scripts.Model
{
    class GetStruct
    {
        public string Version { get; set; }
        public string UpdateUrlUuid { get; set; }
        public string device { get; set; }
        public string os { get; set; }
        public string level { get; set; }
        public string softVersion { get; set; }
    }
}

Usage example

var cache_CommonLib = new CacheOption<GetStruct>();
cache_CommonLib.setCache(PublicClass.server_ip + "v1/app/api");

Docker notes 01

Posted on 2019-04-17 | Category: docker

Install Docker

Installation on a Mac

brew cask install docker

After installation, run hello-world to test it:

$ docker run hello-world
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
(amd64)
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/

Start an Nginx service

docker run -d -p 80:80 --restart=always nginx:latest

-p maps host port : container port
--restart sets the restart policy; with always, the nginx container is started every time Docker starts

Enter the container

docker exec -it 4591552a4185 bash

Argument notes:

exec operates on a container
-it allocates a pseudo-TTY
4591552a4185 is the container ID
bash is the interactive program to run

Docker can mount data into containers: paths inside the container can be mapped to paths on the host. Add the -v flag to the command and start a new container:

docker run -d -p 80:80 -v ~/docker-demo/nginx-htmls:/usr/share/nginx/html/ --restart=always nginx:latest

Once it starts, docker creates the ~/docker-demo/nginx-htmls directory for you.
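
A quick way to see the bind mount working is to drop a page into the host directory and request it through nginx:

echo 'hello from the host' > ~/docker-demo/nginx-htmls/index.html
curl http://localhost/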

Stop the container

docker stop 4591552a4185

Remove the container

docker rm 4591552a4185

Assign an IP alias to the host

sudo ifconfig lo0 alias 192.168.64.0/24

Then visit http://192.168.64.0:8080.
