Installing Kubernetes with kubeadm (信风, 2017-11-28) https://xsllqs.github.io/2017/11/28/kubernetes-kubeadm

Overview:

Kubernetes cluster layout and installation environment

Master node:
172.19.2.50
Components on the master: kubectl, kube-proxy, kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, calico-node, calico-policy-controller, calico-etcd

Worker nodes:
172.19.2.51
172.19.2.140
Components on the workers: kubernetes-dashboard, calico-node, kube-proxy

Final kubernetes-dashboard URL:
http://172.19.2.50:30099/

kubectl: command-line client for operating the Kubernetes cluster
kube-proxy: exposes container ports on the nodes; for NodePort services the master allocates a port from a pre-defined range (default 30000-32767)
kube-dns: provides DNS for services and pods inside the cluster
etcd: distributed, consistent key-value store used for shared configuration and service discovery
kube-apiserver: the entry point of the k8s cluster; whether you drive it with kubectl or the remote API directly, everything goes through the apiserver
kube-controller-manager: carries most of the master's control logic, managing nodes, pods, replication, services, namespaces and so on
kube-scheduler: schedules pods onto worker nodes (minions) according to the configured scheduling algorithm; this step is also called binding
calico: BGP-based virtual networking tool, used here for pod-to-pod networking
kubernetes-dashboard: the official web UI for managing a Kubernetes cluster
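Once the cluster is up, a quick sanity check of these components (my addition, not part of the original walkthrough) is to list the system pods and the node services:

# everything above except kubectl runs as pods in the kube-system namespace
kubectl get pods -n kube-system -o wide
# kubelet and docker run as host services on every node
systemctl status kubelet docker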

References:

https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/

http://www.cnblogs.com/liangDream/p/7358847.html

I. Install kubeadm on all nodes

Clean up any Kubernetes files left over on the system

rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd
rm -rf $HOME/.kube

Add the Aliyun Kubernetes yum repository

vim /etc/yum.repos.d/kubernetes.repo
[kube]
name=Kube
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0

Install kubeadm

yum install kubeadm

After installation the following packages must be present

rpm -qa | grep kube
kubernetes-cni-0.5.1-0.x86_64
kubelet-1.7.5-0.x86_64
kubectl-1.7.5-0.x86_64
kubeadm-1.7.5-0.x86_64
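This walkthrough targets the 1.7.x packages shown above. If the mirror has since moved on to newer releases, the versions can be pinned explicitly (illustrative version strings; adjust them to what the repository actually provides):

yum install -y kubeadm-1.7.5-0 kubelet-1.7.5-0 kubectl-1.7.5-0 kubernetes-cni-0.5.1-0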

Docker here is docker-ce

Add the docker-ce repository

vim /etc/yum.repos.d/docker-ce.repo
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://download.docker.com/linux/centos/7/source/stable
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge]
name=Docker CE Edge - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge-debuginfo]
name=Docker CE Edge - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-edge-source]
name=Docker CE Edge - Sources
baseurl=https://download.docker.com/linux/centos/7/source/edge
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://download.docker.com/linux/centos/7/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://download.docker.com/linux/centos/7/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://download.docker.com/linux/centos/7/source/test
enabled=0
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg

Install docker-ce

yum install docker-ce

II. Distribute the images to all nodes

Required images. They are hosted on Google's registries, which cannot be reached without a proxy, so they can first be rebuilt and pushed to Docker Hub and then pulled into a local private registry

gcr.io/google_containers/etcd-amd64:3.0.17
gcr.io/google_containers/kube-apiserver-amd64:v1.7.6
gcr.io/google_containers/kube-controller-manager-amd64:v1.7.6
gcr.io/google_containers/kube-scheduler-amd64:v1.7.6
quay.io/coreos/etcd:v3.1.10
quay.io/calico/node:v2.4.1
quay.io/calico/cni:v1.10.0
quay.io/calico/kube-policy-controller:v0.7.0
gcr.io/google_containers/pause-amd64:3.0
gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
gcr.io/google_containers/kube-proxy-amd64:v1.7.6

For the image-building process see

https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/

http://www.cnblogs.com/liangDream/p/7358847.html

Copy the CA certificate of the existing private registry to the local machine

mkdir -pv /etc/docker/certs.d/172.19.2.139/
vim /etc/docker/certs.d/172.19.2.139/ca.crt
-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUQzFZBuFh7EZLOzWUYZ10QokL+BUwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE3MDcwNDA4NTMwMFoXDTIyMDcwMzA4NTMwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWgHFV6Cnbgxcs7X7ujj
APnnMmotzNnnTRhygJLCMpCZUaWYrdBkFE4T/HGpbYi1R5AykSPA7FCffFHpJIf8
Gs5DAZHmpY/uRsLSrqeP7/D8sYlyCpggVUeQJviV/a8L7PkCyGq9DSiU/MUBg4CV
Dw07OT46vFJH0lzTaZJNSz7E5QsekLyzRb61tZiBN0CJvSOxXy7wvdqK0610OEFM
T6AN8WfafTH4qmKWulFBJN1LjHTSYfTZzCL6kfTSG1M3kqG0W4B2o2+TkNLVmC9n
gEKdeh/yQmQWfraRkuWiCorJZGxte27xpjgu7u62sRyCm92xQRNgp5RiGHxP913+
HQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUDFiYOhMMWkuq93iNBoC1Udr9wLIwHwYDVR0jBBgwFoAUDFiYOhMM
Wkuq93iNBoC1Udr9wLIwDQYJKoZIhvcNAQELBQADggEBADTAW0FPhfrJQ6oT/WBe
iWTv6kCaFoSuWrIHiB9fzlOTUsicrYn6iBf+XzcuReZ6qBILghYGPWPpOmnap1dt
8UVl0Shdj+hyMbHzxR0XzX12Ya78Lxe1GFg+63XbxNwOURssd9DalJixKcyj2BW6
F6JG1aBQhdgGSBhsCDvG1zawqgZX/h4VWG55Kv752PYBrQOtUH8CS93NfeB5Q7bE
FOuyvGVd1iO40JQLoFIkZuyxNh0okGjfmT66dia7g+bC0v1SCMiE/UJ9uvHvfPYe
qLkSRjIHH7FH1lQ/AKqjl9qrpZe7lHplskQ/jynEWHcb60QRcAWPyd94OPrpLrTU
64g=
-----END CERTIFICATE-----

Log out of Docker Hub and log in to the private registry

docker logout
docker login 172.19.2.139
Username: admin
Password: Cmcc@1ot

Pull the images from the private registry

docker pull 172.19.2.139/xsllqs/etcd-amd64:3.0.17
docker pull 172.19.2.139/xsllqs/kube-scheduler-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/kube-apiserver-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/kube-controller-manager-amd64:v1.7.6
docker pull 172.19.2.139/xsllqs/etcd:v3.1.10
docker pull 172.19.2.139/xsllqs/node:v2.4.1
docker pull 172.19.2.139/xsllqs/cni:v1.10.0
docker pull 172.19.2.139/xsllqs/kube-policy-controller:v0.7.0
docker pull 172.19.2.139/xsllqs/pause-amd64:3.0
docker pull 172.19.2.139/xsllqs/k8s-dns-kube-dns-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/kubernetes-dashboard-amd64:v1.6.3
docker pull 172.19.2.139/xsllqs/k8s-dns-sidecar-amd64:1.14.4
docker pull 172.19.2.139/xsllqs/kube-proxy-amd64:v1.7.6

Re-tag the images

docker tag 172.19.2.139/xsllqs/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
docker tag 172.19.2.139/xsllqs/kube-scheduler-amd64:v1.7.6 gcr.io/google_containers/kube-scheduler-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/kube-apiserver-amd64:v1.7.6 gcr.io/google_containers/kube-apiserver-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/kube-controller-manager-amd64:v1.7.6 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.6
docker tag 172.19.2.139/xsllqs/etcd:v3.1.10 quay.io/coreos/etcd:v3.1.10
docker tag 172.19.2.139/xsllqs/node:v2.4.1 quay.io/calico/node:v2.4.1
docker tag 172.19.2.139/xsllqs/cni:v1.10.0 quay.io/calico/cni:v1.10.0
docker tag 172.19.2.139/xsllqs/kube-policy-controller:v0.7.0 quay.io/calico/kube-policy-controller:v0.7.0
docker tag 172.19.2.139/xsllqs/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
docker tag 172.19.2.139/xsllqs/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/kubernetes-dashboard-amd64:v1.6.3 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
docker tag 172.19.2.139/xsllqs/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
docker tag 172.19.2.139/xsllqs/kube-proxy-amd64:v1.7.6 gcr.io/google_containers/kube-proxy-amd64:v1.7.6
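Pulling and re-tagging fourteen images by hand is tedious and error-prone; a small loop does the same job (a sketch based on the lists above; it assumes every image keeps the same name:tag under 172.19.2.139/xsllqs, which is exactly how the commands above are written):

for target in \
    gcr.io/google_containers/etcd-amd64:3.0.17 \
    gcr.io/google_containers/kube-apiserver-amd64:v1.7.6 \
    quay.io/coreos/etcd:v3.1.10 \
    quay.io/calico/node:v2.4.1; do        # append the rest of the image list here
  src="172.19.2.139/xsllqs/${target##*/}" # same name:tag, private registry prefix
  docker pull "$src" && docker tag "$src" "$target"
done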

III. Start kubelet on all nodes

Add all of the host names to /etc/hosts

Starting kubelet straight away fails with this error

journalctl -xe
error: failed to run Kubelet: failed to create kubelet: misconfiguration: kubelet cgroup driver: "systemd" is different from docker cgroup driver: "cgroupfs"

So the following needs to be changed

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"

Modify the following file as well if it exists; if it does not, leave it alone

vim /etc/systemd/system/kubelet.service.d/99-kubelet-droplet.conf
Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 --cgroup-driver=cgroupfs"
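Before editing, it is worth confirming which cgroup driver docker actually reports, so the kubelet flag matches it, and reloading systemd afterwards (added check, not in the original post):

docker info 2>/dev/null | grep -i 'cgroup driver'
systemctl daemon-reload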

Start it

systemctl enable kubelet
systemctl start kubelet

Startup may fail with a complaint that the config file cannot be found. That can be ignored: kubeadm writes the config file and starts kubelet itself. If anything, start kubelet after a failed kubeadm init and retry.

IV. Deploy the master node with kubeadm

Run the following on the master node

kubeadm reset
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
kubeadm init --kubernetes-version=v1.7.6

Output like the following will appear

[apiclient] All control plane components are healthy after 30.001912 seconds
[token] Using token: cab485.49b7c0358a06ad35
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443

Keep the following command safe; it is needed later when adding nodes

kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443

Run the commands as instructed

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
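At this point kubectl can talk to the API server; the master will report NotReady until a pod network is installed (verification step added for clarity):

kubectl get nodes
kubectl get componentstatuses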

Deploy the calico network. Preferably deploy kubernetes-dashboard first and calico after it.

cd /home/lvqingshan/
kubectl apply -f http://docs.projectcalico.org/v2.4/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

List the pods in all namespaces and wait for them to come up

kubectl get pods --all-namespaces

V. Install kubernetes-dashboard on the master

Download the kubernetes-dashboard yaml file

cd /home/lvqingshan/
wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml

Edit the yaml file to pin the port exposed outside the cluster

vim kubernetes-dashboard.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
	k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
	k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  labels:
	k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
	matchLabels:
	  k8s-app: kubernetes-dashboard
  template:
	metadata:
	  labels:
		k8s-app: kubernetes-dashboard
	spec:
	  containers:
	  - name: kubernetes-dashboard
		image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.3
		ports:
		- containerPort: 9090
		  protocol: TCP
		args:
		  # Uncomment the following line to manually specify Kubernetes API server Host
		  # If not specified, Dashboard will attempt to auto discover the API server and connect
		  # to it. Uncomment only if the default does not work.
		  # - --apiserver-host=http://my-address:port
		livenessProbe:
		  httpGet:
			path: /
			port: 9090
		  initialDelaySeconds: 30
		  timeoutSeconds: 30
	  serviceAccountName: kubernetes-dashboard
	  # Comment the following tolerations if Dashboard must not be deployed on master
	  tolerations:
	  - key: node-role.kubernetes.io/master
		effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
	k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort	# added: expose the service as a NodePort
  ports:
  - port: 80
    targetPort: 9090
    nodePort: 30099	# added: fixed port exposed on every node
  selector:
    k8s-app: kubernetes-dashboard

Create the dashboard

kubectl create -f kubernetes-dashboard.yaml

VI. Join the worker nodes to the cluster

On each worker node, apply the same /etc/systemd/system/kubelet.service.d/ changes as above and then run

systemctl enable kubelet
systemctl start kubelet

kubeadm join --token cab485.49b7c0358a06ad35 172.19.2.50:6443

Check the dashboard's NodePort

kubectl describe svc kubernetes-dashboard --namespace=kube-system

Open it in a browser

http://172.19.2.50:30099/

Mounting and using GlusterFS in Kubernetes (信风, 2017-11-10) https://xsllqs.github.io/2017/11/10/kubernetes-glusterfs

I. Install the GlusterFS client on all k8s nodes

Install the client

yum install -y glusterfs glusterfs-fuse

Add all of the gluster nodes to /etc/hosts

vim /etc/hosts
172.19.12.193  gluster-manager
172.19.12.194  gluster-node1
172.19.12.195  gluster-node2
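Before wiring GlusterFS into Kubernetes, it is worth confirming from a k8s node that the volume mounts by hand (added check; it assumes the volume is called models, the name used in the pod spec below):

mkdir -pv /mnt/gluster-test
mount -t glusterfs gluster-manager:/models /mnt/gluster-test
df -h /mnt/gluster-test
umount /mnt/gluster-test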

II. Deploy from the Kubernetes master

Create a namespace

vim portal-ns1.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gcgj-portal

kubectl apply -f portal-ns1.yaml

Create the endpoints

cd /opt/glusterfs
curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/glusterfs/glusterfs-endpoints.json
vim glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
	"name": "glusterfs-cluster",
	"namespace": "gcgj-portal"	#如果后面要调用的pod有ns则一定要写ns
  },
  "subsets": [
	{
	  "addresses": [
		{
		  "ip": "172.19.12.193"
		}
	  ],
	  "ports": [
		{
		  "port": 1990	#这个端口自己随便写
		}
	  ]
	}
  ]
}

kubectl apply -f glusterfs-endpoints.json
kubectl get ep -n gcgj-portal

Create the service

curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/glusterfs/glusterfs-service.json
vim glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
	"name": "glusterfs-cluster",
	"namespace": "gcgj-portal"
  },
  "spec": {
	"ports": [
	  {"port": 1990}
	]
  }
}

kubectl apply -f glusterfs-service.json
kubectl get svc -n gcgj-portal

Create the glusterfs test pod

curl -O https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/volumes/glusterfs/glusterfs-pod.json
vim glusterfs-pod.json
{
	"apiVersion": "v1",
	"kind": "Pod",
	"metadata": {
		"name": "glusterfs",
		"namespace": "gcgj-portal"
	},
	"spec": {
		"containers": [
			{
				"name": "glusterfs",
				"image": "nginx",
				"volumeMounts": [
					{
						"mountPath": "/mnt/glusterfs",	#自定义本地挂载glusterfs的目录
						"name": "glusterfsvol"
					}
				]
			}
		],
		"volumes": [
			{
				"name": "glusterfsvol",
				"glusterfs": {
					"endpoints": "glusterfs-cluster",
					"path": "models",
					"readOnly": true
				}
			}
		]
	}
}


kubectl apply -f glusterfs-pod.json
kubectl get pods -n gcgj-portal

Create the PV

vim glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume
spec:
  capacity:
	storage: 8Gi	#pv申请的容量大小
  accessModes:
	- ReadWriteMany
  glusterfs:
	endpoints: "glusterfs-cluster"
	path: "models"
	readOnly: false

kubectl apply -f glusterfs-pv.yaml
kubectl get pv

Create the PVC

vim glusterfs-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: glusterfs-gcgj
  namespace: gcgj-portal
spec:
  accessModes:
	- ReadWriteMany
  resources:
	requests:
	  storage: 8Gi

kubectl apply -f glusterfs-pvc.yaml
kubectl get pvc -n gcgj-portal

Create an application to test that the volume mounts correctly

cd /opt/kube-gcgj/portal-test
vim portal-rc1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: gcgj-portal
  namespace: gcgj-portal
spec:
  replicas: 1
  selector:
	app: portal
  template:
	metadata:
	  labels:
		app: portal
	spec:
	  containers:
	  - image: 172.19.2.139/gcgj/portal:latest
		name: portal
		resources:
		  limits:
			cpu: "1"
			memory: 2Gi
		ports:
		- containerPort: 8080
		volumeMounts:
		- mountPath: /usr/local/tomcat/logs		#需要挂载的目录
		  name: gcgj-portal-log			#这里的名字和下面的volumes的name要一致
	  volumes:
	  - name: gcgj-portal-log
		persistentVolumeClaim:
		  claimName: glusterfs-gcgj		#这里为pvc的名字

vim portal-svc1.yaml
apiVersion: v1
kind: Service
metadata:
  name: gcgj-portal
  namespace: gcgj-portal
spec:
  ports:
  - name: portal-svc
	port: 8080
	targetPort: 8080
	nodePort: 30082
  selector:
	app: portal
  type: NodePort

kubectl create -f /opt/kube-gcgj/portal-test

After the application starts, check the corresponding directory on the gluster cluster to see whether new log files appear
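The mount can also be checked from inside the cluster (added check; the pod and namespace names are the ones defined above):

kubectl exec -n gcgj-portal glusterfs -- df -h /mnt/glusterfs
kubectl get pods -n gcgj-portal -o wide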

Deploying Helm on Kubernetes (信风, 2017-11-07) https://xsllqs.github.io/2017/11/07/kubernetes-helm

Environment

Reference: https://docs.helm.sh/using_helm/#installing-helm

Required image

gcr.io/kubernetes-helm/tiller:v2.7.0

Pull it and re-tag it

docker pull xsllqs/kubernetes-helm:v2.7.0
docker tag xsllqs/kubernetes-helm:v2.7.0 gcr.io/kubernetes-helm/tiller:v2.7.0

Push it to the private registry

docker tag xsllqs/kubernetes-helm:v2.7.0 172.19.2.139/xsllqs/kubernetes-helm/tiller:v2.7.0
docker push 172.19.2.139/xsllqs/kubernetes-helm/tiller:v2.7.0

Pull and re-tag it on every node

docker pull 172.19.2.139/xsllqs/kubernetes-helm/tiller:v2.7.0
docker tag 172.19.2.139/xsllqs/kubernetes-helm/tiller:v2.7.0 gcr.io/kubernetes-helm/tiller:v2.7.0

I. Install the helm client and tiller

Adjust the environment variables

vim /etc/bashrc
export PATH="$PATH:/usr/local/bin"
vim /etc/profile
export PATH="$PATH:/usr/local/bin"
source /etc/profile

RBAC authorization (note: the kubectl patch below only succeeds after helm init has created the tiller-deploy deployment; alternatively run helm init --service-account tiller)

kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

Deploy

cd /opt/helm
curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
helm init --tiller-namespace=kube-system

Check whether tiller was installed successfully

kubectl get pods --namespace kube-system

Test that the client and server can talk to each other

helm version

Uninstall

kubectl delete deployment tiller-deploy --namespace kube-system

Web UI installation (I did not deploy this)

helm install stable/nginx-ingress
helm install stable/nginx-ingress --set controller.hostNetwork=true
helm repo add monocular https://kubernetes-helm.github.io/monocular
helm install monocular/monocular

Add a chart repository that is reachable from China

helm repo add opsgoodness http://charts.opsgoodness.com

II. Installing and removing applications

Install a redis application

helm install stable/redis-ha --version 0.2.3

If the above fails, run the following directly

helm install https://kubernetes-charts.storage.googleapis.com/redis-ha-0.2.3.tgz

Access redis

redis-cli -h torrid-tuatara-redis-ha.default.svc.cluster.local

Install a kafka application

helm repo add incubator https://kubernetes-charts-incubator.storage.googleapis.com
helm install incubator/kafka --version 0.2.1

Or

helm install https://kubernetes-charts-incubator.storage.googleapis.com/kafka-0.2.1.tgz

Remove a deployed application

helm ls
helm delete {name}
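By default helm delete keeps the release history around; to drop it entirely (standard Helm 2 behaviour, added note):

helm ls --all              # deleted releases still show up here
helm delete --purge {name} # remove the release together with its history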
Deploying a MySQL cluster with persistent storage on Kubernetes (信风, 2017-10-31) https://xsllqs.github.io/2017/10/31/kubernetes-mysql

Overview:

Reference: https://github.com/Yolean/kubernetes-mysql-cluster

Environment

Master node: 172.19.2.50
Worker nodes:
172.19.2.51
172.19.2.140

After deployment, MySQL is reachable on port 30336 of every node

Account root, password abcd1234, e.g.:
mysql -h 172.19.2.50 -P 30336 -uroot -pabcd1234

Once deployed, Galera keeps the data consistent across the three cluster nodes

From inside containers, MySQL can be reached either through port 30336 on any k8s node or through the in-cluster service entry, e.g. mysql.mysql:3306; kube-dns automatically resolves mysql.mysql (service "mysql" in namespace "mysql") to the backing cluster
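A quick way to exercise the in-cluster entry point (an added check; it reuses the mariadb image that the StatefulSet below is built from):

kubectl run mysql-client -it --rm --restart=Never --image=mariadb:10.1.22 -- \
  mysql -h mysql.mysql -P 3306 -uroot -pabcd1234 -e 'SHOW STATUS LIKE "wsrep_cluster_size"'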

I. Create the data directories on the master node

mkdir -pv /mysql_data/datadir-mariadb-0
mkdir -pv /mysql_data/datadir-mariadb-1
mkdir -pv /mysql_data/datadir-mariadb-2

II. Edit the deployment files

cd /opt/kubernetes-mysql-cluster

Namespace manifest

vim 00namespace.yml
---
apiVersion: v1
kind: Namespace
metadata:
  name: mysql

PVC manifest

vim 10pvc.yml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-0
  namespace: mysql
spec:
  accessModes:
	- ReadWriteOnce		#这里为pvc的访问模式
  resources:
	requests:
	  storage: 10Gi		#这里调整要挂载的pvc大小
  selector:
	matchLabels:		#这里要和pv的标签对应
	  app: mariadb
	  podindex: "0"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-1
  namespace: mysql
spec:
  accessModes:
	- ReadWriteOnce
  resources:
	requests:
	  storage: 10Gi
  selector:
	matchLabels:
	  app: mariadb
	  podindex: "1"
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql-mariadb-2
  namespace: mysql
spec:
  accessModes:
	- ReadWriteOnce
  resources:
	requests:
	  storage: 10Gi
  selector:
	matchLabels:
	  app: mariadb
	  podindex: "2"

mariadb headless-service manifest

vim 20mariadb-service.yml
# the "Headless Service, used to control the network domain"
---
apiVersion: v1
kind: Service
metadata:
  name: mariadb
  namespace: mysql
spec:
  clusterIP: None
  selector:
	app: mariadb
  ports:
	- port: 3306
	  name: mysql
	- port: 4444
	  name: sst
	- port: 4567
	  name: replication
	- protocol: UDP
	  port: 4567
	  name: replicationudp
	- port: 4568
	  name: ist

mysql service manifest

vim 30mysql-service.yml
---
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: mysql
spec:
  ports:
  - port: 3306
	name: mysql
	targetPort: 3306
	nodePort: 30336		#这里为porxy映射端口
  selector:
	app: mariadb
  type: NodePort

Script that loads the configuration into a ConfigMap

vim 40configmap.sh
#!/bin/bash
DIR=`dirname "$BASH_SOURCE"`
kubectl create configmap "conf-d" --from-file="$DIR/conf-d/" --namespace=mysql

Script that prompts for the initial MySQL root password

vim 41secret.sh
#!/bin/bash
echo -n Please enter mysql root password for upload to k8s secret:
read -s rootpw
echo
kubectl create secret generic mysql-secret --namespace=mysql --from-literal=rootpw=$rootpw

StatefulSet manifest

vim 50mariadb.yml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mariadb
  namespace: mysql
spec:
  serviceName: "mariadb"
  replicas: 1		#这里是要启动的节点的数量
  template:
	metadata:
	  labels:
		app: mariadb
	spec:
	  terminationGracePeriodSeconds: 10
	  containers:
		- name: mariadb
		  image: mariadb:10.1.22	#这里修改使用的镜像文件
		  ports:
			- containerPort: 3306
			  name: mysql
			- containerPort: 4444
			  name: sst
			- containerPort: 4567
			  name: replication
			- containerPort: 4567
			  protocol: UDP
			  name: replicationudp
			- containerPort: 4568
			  name: ist
		  env:
			- name: MYSQL_ROOT_PASSWORD
			  valueFrom:
				secretKeyRef:
				  name: mysql-secret
				  key: rootpw
			- name: MYSQL_INITDB_SKIP_TZINFO
			  value: "yes"
		  args:
			- --character-set-server=utf8mb4
			- --collation-server=utf8mb4_unicode_ci
			# Remove after first replicas=1 create
			- --wsrep-new-cluster	#这里在执行的时候代表会创建新集群,新增节点的时候要注释掉
		  volumeMounts:
			- name: mysql
			  mountPath: /var/lib/mysql
			- name: conf
			  mountPath: /etc/mysql/conf.d
			- name: initdb
			  mountPath: /docker-entrypoint-initdb.d
	  volumes:
		- name: conf
		  configMap:
			name: conf-d
		- name: initdb
		  emptyDir: {}
  volumeClaimTemplates:
  - metadata:
	  name: mysql
	spec:
	  accessModes: [ "ReadWriteOnce" ]
	  resources:
		requests:
		  storage: 10Gi

Script that scales the cluster up to 3 nodes

vim 70unbootstrap.sh
#!/bin/bash
DIR=`dirname "$BASH_SOURCE"`
set -e
set -x
cp "$DIR/50mariadb.yml" "$DIR/50mariadb.yml.unbootstrap.yml"
sed -i 's/replicas: 1/replicas: 3/' "$DIR/50mariadb.yml.unbootstrap.yml"
sed -i 's/- --wsrep-new-cluster/#- --wsrep-new-cluster/' "$DIR/50mariadb.yml.unbootstrap.yml"
kubectl apply -f "$DIR/50mariadb.yml.unbootstrap.yml"
rm "$DIR/50mariadb.yml.unbootstrap.yml"

Create the directories

mkdir -pv bootstrap conf-d

PV creation script

vim bootstrap/pv.sh
#!/bin/bash
echo "Note that in for example GKE a PetSet will have PersistentVolume(s) and PersistentVolumeClaim(s) created for it automatically"
dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )"
path="$dir/data"
echo "Please enter a path where to store data during local testing: ($path)"
read newpath
[ -n "$newpath" ] && path=$newpath
cat bootstrap/pv-template.yml | sed "s|/tmp/k8s-data|$path|" | kubectl create -f -

PV template

vim bootstrap/pv-template.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-mariadb-0
  labels:				#这里的标签要和pvc的matchLabels对应
	app: mariadb
	podindex: "0"
spec:
  accessModes:
  - ReadWriteOnce		#这里是pv的访问模式,必须要与pvc相同
  capacity:
	storage: 10Gi		#这里是要创建的pv的大小
  hostPath:
	path: /mysql_data/datadir-mariadb-0		#这里为挂载到本地的路径
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-mariadb-1
  labels:
	app: mariadb
	podindex: "1"
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
	storage: 10Gi
  hostPath:
	path: /mysql_data/datadir-mariadb-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-mariadb-2
  labels:
	app: mariadb
	podindex: "2"
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
	storage: 10Gi
  hostPath:
	path: /mysql_data/datadir-mariadb-2

PV removal script

vim bootstrap/rm.sh
#!/bin/bash
echo "Note that in for example GKE a PetSet will have PersistentVolume(s) and PersistentVolumeClaim(s) created for it automatically"
dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && cd .. && pwd )"
path="$dir/data"
echo "Please enter a path where to store data during local testing: ($path)"
read newpath
[ -n "$newpath" ] && path=$newpath
cat bootstrap/pv-template.yml | sed "s|/tmp/k8s-data|$path|" | kubectl delete -f -

Galera replication configuration

vim conf-d/galera.cnf
[server]
[mysqld]
[galera]
wsrep_on=ON
wsrep_provider="/usr/lib/galera/libgalera_smm.so"
wsrep_cluster_address="gcomm://mariadb-0.mariadb,mariadb-1.mariadb,mariadb-2.mariadb"
binlog_format=ROW
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep-sst-method=rsync
bind-address=0.0.0.0
[embedded]
[mariadb]
[mariadb-10.1]

III. Deploy the mariadb cluster on Kubernetes

Run on all Kubernetes nodes

docker pull mariadb:10.1.22

Run on the master node

cd /opt/kubernetes-mysql-cluster
sh bootstrap/pv.sh
kubectl create -f 00namespace.yml
kubectl create -f 10pvc.yml
./40configmap.sh
./41secret.sh
# enter abcd1234 as the database root password when prompted
kubectl create -f 20mariadb-service.yml
kubectl create -f 30mysql-service.yml
kubectl create -f 50mariadb.yml

IV. Scale the cluster up to 3 nodes

./70unbootstrap.sh

V. Tear down the MySQL cluster on Kubernetes

Run on the master node

cd /opt/kubernetes-mysql-cluster
kubectl delete -f ./
sh bootstrap/rm.sh

Run on all nodes

rm -rf /mysql_data/datadir-mariadb-0/*
rm -rf /mysql_data/datadir-mariadb-1/*
rm -rf /mysql_data/datadir-mariadb-2/*
A short introduction to Ansible (信风, 2017-10-21) https://xsllqs.github.io/2017/10/21/ansible

I. Installing Ansible

Official site: http://www.ansible.com

Official docs: http://docs.ansible.com/ansible/latest/intro_installation.html

1. Installing from a yum repository

Take CentOS as an example: the default repositories do not ship ansible, but the Fedora EPEL repository does, so after configuring the EPEL repo it can be installed straight from yum. CentOS 6.8 is used here:

# yum install http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
# yum install ansible

You can also build and install the rpm yourself

$ git clone https://github.com/ansible/ansible.git
$ cd ./ansible
$ make rpm
$ sudo rpm -Uvh ./rpm-build/ansible-*.noarch.rpm

2. Installing with apt-get

On Ubuntu and its derivatives, add the PPA and install with apt-get:

$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible

For other issues see: http://www.361way.com/ansible-install/4371.html

II. Configuring Ansible

1. Adding hosts and groups

Before Ansible can manage a host, the host has to be added to a group

Example: to manage the hosts 172.19.2.50 and 172.19.2.51

vim /etc/ansible/hosts
[kubernetes]	#the group name goes inside the brackets; pick any name
172.19.2.50		#hosts belonging to the group are listed underneath
172.19.2.51		#a group can hold several hosts, and a host can belong to several groups

Ansible can drive several hosts at once via a group name, or a single host via its IP, but either way the host must be listed in /etc/ansible/hosts

2. Before managing hosts

Ansible drives hosts over SSH, so key-based SSH authentication is needed

Generate a key pair under the relevant user:

ssh-keygen -t rsa

Copy the public key to the target host:

ssh-copy-id -i .ssh/id_rsa.pub root@172.19.2.1
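With the key in place, the usual connectivity check is the ping module (not an ICMP ping; it simply verifies SSH access and Python on the targets, added check):

ansible kubernetes -m ping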

III. Managing hosts from the command line

1. Ad-hoc commands

Example: run ifconfig on the hosts

ansible kubernetes -s -m shell -a "ifconfig"
#kubernetes is the group defined earlier in /etc/ansible/hosts; the command runs on every host in that group
# -s runs the command through sudo; if the remote user is already root it can be dropped, and the command runs faster without it
# -m shell picks the module to execute, here the shell module
# -a "ifconfig" is the argument string handed to the module, here the ifconfig command
#to target a single host: ansible 172.19.2.50 -s -m shell -a "ifconfig"

Run ansible -h to see what each option means

2. Commonly used modules

shell module: runs shell commands; pipes are supported

Example: run a shell script that already exists on the remote hosts

ansible kubernetes -m shell -a "/tmp/rocketzhang_test.sh"

command module: runs commands, much like shell, but without support for pipes

Example: run date on the remote hosts

ansible kubernetes -m command -a "date"

copy module: copies files from the Ansible control host to the remote hosts

Relevant parameters:
src: path of the file on the Ansible control host
dest: destination path on the target host
owner: owner of the file after copying
group: group of the file after copying
mode: permissions of the file after copying

Example: copy the local file "/etc/ansible/ansible.cfg" to the remote servers

ansible kubernetes -m copy -a "src=/etc/ansible/ansible.cfg dest=/tmp/ansible.cfg owner=root group=root mode=0644"

file module: manipulates files and directories

Relevant parameters:
group: group of the file/directory
mode: permissions of the file/directory
owner: owner of the file/directory
path: required; path of the file/directory
recurse: apply the attributes recursively; only meaningful for directories
src: path of the file to link from; only used with state=link
dest: path being linked to; only used with state=link
state:
	directory: create the directory if it does not exist
	file: do not create the file even if it does not exist
	link: create a symbolic link
	hard: create a hard link
	touch: create an empty file if it does not exist; if the file or directory exists, update its modification time
	absent: delete the directory or file, or remove the link

Example: recursively delete the test directory

file: path=/home/app/test recurse=yes state=absent

Example: recursively create the test directory

path=/home/app/test recurse=yes mode=775 owner=app group=app state=directory

For other modules see the official docs:

http://docs.ansible.com/ansible/latest/modules_by_category.html

Example: bulk-updating the JDK with ansible

ansible routechang -s -m shell -a 'rpm -qa | grep jdk'
ansible routechang -s -m shell -a 'rpm -qa | grep java'
ansible routechang -s -m shell -a 'yum -y remove java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64'
ansible routechang -s -m shell -a 'yum -y remove -y java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64'
ansible routechang -s -m shell -a 'yum -y remove java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64'
ansible routechang -s -m copy -a "src=/home/app/test/jdk-7u80-linux-x64.rpm dest=/home/app/jdk-7u80-linux-x64.rpm mode=755"
ansible routechang -s -m shell -a 'rpm -i /home/app/jdk-7u80-linux-x64.rpm'
ansible routechang -s -m shell -a 'java -version'

IV. ansible-playbook

1. Usage

Run from the command line: ansible-playbook -vvv /home/app/test.yaml

test.yaml is a YAML file that works much like a script

-v shows the playbook run, including the reasons for any failures; the more v's, the more verbose the output

2. Common keys in the yaml file

Be especially careful here: YAML is very strict about formatting, and one space too many or too few will break it

hosts: which hosts to operate on
user: which user to log in to the remote hosts as
vars: defines variables
tasks: defines the tasks to run
name: describes what a task does, much like a comment
ignore_errors: ignore errors from this task
remote_user: user used on the remote host during execution

Caveats:

YAML syntax and Ansible's variable syntax do not always mix: specifically, a value after a colon must not start with {, it has to be wrapped in double quotes first

This clash is especially easy to hit when writing shell commands inside tasks

Example:

The following code fails:

- hosts: app_servers
  vars:
      app_path: {{ base_path }}/22

Fix: quote any value that starts with {:

- hosts: app_servers
  vars:
      app_path: "{{ base_path }}/22"
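Putting the keys above together, a minimal playbook might look like this (an illustrative sketch, not from the original post; the kubernetes group is the one defined earlier):

- hosts: kubernetes
  remote_user: app
  vars:
      test_dir: "/home/app/test"
  tasks:
    - name: create the test directory
      file: path={{ test_dir }} state=directory mode=0755
    - name: show the date
      command: date
      ignore_errors: yes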

V. Other references

Passing parameters to a playbook:

http://blog.csdn.net/angle_sun/article/details/52728105

Changing root passwords in bulk with ansible as a non-root user:

http://www.cnblogs.com/zhanmeiliang/p/6197762.html

Changing host passwords in bulk with ansible:

http://147546.blog.51cto.com/137546/1892537

Jenkins: build a Python application image on GitLab changes and deploy it to the Kubernetes cluster (信风, 2017-10-18) https://xsllqs.github.io/2017/10/18/kubernetes-python

I. Write the Dockerfile

1. Work on 172.19.2.51

Upload the application package to this directory and unpack it

mkdir -pv /opt/git/obd
cd /opt/git/obd
tar zxvf flask.tar.gz

vim Dockerfile
FROM python:2.7
RUN mkdir -pv /opt/flask
ADD flask /opt/flask
RUN pip install flask
RUN pip install Flask-MYSQL
EXPOSE 5000
CMD ["python","/opt/flask/server.py"]

ls /home/app/test/cmdb
Dockerfile  flask  flask.tar.gz

2. Test that the Dockerfile builds and runs

docker build -t obd:v1 ./
docker run -p 31500:5000 -idt obd:v1
git add -A
git commit
git push -u origin master
GitLab account lvqingshan
password abcd1234

II. Configure login to the Harbor registry (172.19.2.139)

1. Install the registry CA certificate on 192.168.13.45

mkdir -pv /etc/docker/certs.d/172.19.2.139/
vim /etc/docker/certs.d/172.19.2.139/ca.crt
-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUQzFZBuFh7EZLOzWUYZ10QokL+BUwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE3MDcwNDA4NTMwMFoXDTIyMDcwMzA4NTMwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWgHFV6Cnbgxcs7X7ujj
APnnMmotzNnnTRhygJLCMpCZUaWYrdBkFE4T/HGpbYi1R5AykSPA7FCffFHpJIf8
Gs5DAZHmpY/uRsLSrqeP7/D8sYlyCpggVUeQJviV/a8L7PkCyGq9DSiU/MUBg4CV
Dw07OT46vFJH0lzTaZJNSz7E5QsekLyzRb61tZiBN0CJvSOxXy7wvdqK0610OEFM
T6AN8WfafTH4qmKWulFBJN1LjHTSYfTZzCL6kfTSG1M3kqG0W4B2o2+TkNLVmC9n
gEKdeh/yQmQWfraRkuWiCorJZGxte27xpjgu7u62sRyCm92xQRNgp5RiGHxP913+
HQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUDFiYOhMMWkuq93iNBoC1Udr9wLIwHwYDVR0jBBgwFoAUDFiYOhMM
Wkuq93iNBoC1Udr9wLIwDQYJKoZIhvcNAQELBQADggEBADTAW0FPhfrJQ6oT/WBe
iWTv6kCaFoSuWrIHiB9fzlOTUsicrYn6iBf+XzcuReZ6qBILghYGPWPpOmnap1dt
8UVl0Shdj+hyMbHzxR0XzX12Ya78Lxe1GFg+63XbxNwOURssd9DalJixKcyj2BW6
F6JG1aBQhdgGSBhsCDvG1zawqgZX/h4VWG55Kv752PYBrQOtUH8CS93NfeB5Q7bE
FOuyvGVd1iO40JQLoFIkZuyxNh0okGjfmT66dia7g+bC0v1SCMiE/UJ9uvHvfPYe
qLkSRjIHH7FH1lQ/AKqjl9qrpZe7lHplskQ/jynEWHcb60QRcAWPyd94OPrpLrTU
64g=
-----END CERTIFICATE-----

2. Log in to the registry

docker login 172.19.2.139
Username: admin
Password: Cmcc@1ot

3. Push a test image

The ops project has to be created in Harbor before the push will succeed

docker tag obd:v1 172.19.2.139/ops/obd
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 172.19.2.139/ops/obd

III. Kubernetes manifests

Configure on the Kubernetes master 172.19.2.50

vim /opt/kube-obd/obd-rc1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: obd
spec:
  replicas: 2		#pod数量
  selector:
	app: obd
  template:
	metadata:
	  labels:
		app: obd
	spec:
	  containers:
	  - image: 172.19.2.139/ops/obd:latest		#使用的镜像
		name: obd
		resources:
		  limits:
			cpu: "2"		#分配给pod的CPU资源
			memory: 2Gi		#分配给pod的内存资源
		ports:
		- containerPort: 5000		#开放的端口

vim /opt/kube-obd/obd-svc1.yaml
apiVersion: v1
kind: Service
metadata:
  name: obd
spec:
  ports:
  - name: obd-svc
	port: 5000
	targetPort: 5000
	nodePort: 31500		#proxy映射出来的端口
  selector:
	app: obd
  type: NodePort		#映射端口类型

IV. Jenkins configuration

1. Under General, enable a parameterized build

Add a String Parameter

Name: VERSION

Default value: [empty]

Description: please enter a version number


2. Source Code Management: Git

Repository URL: http://172.19.2.140:18080/lijun/obd.git


3. Trigger a build automatically on GitLab changes

Poll the GitLab project for changes once a minute

*/1 * * * *


4. Execute shell step

Two ways of versioning: when the build is triggered automatically or the version field is left empty, a timestamp is used as the version; when a version number is entered, that number is used

imagesid=`docker images | grep -i obd | awk '{print $3}'| head -1`
project=/var/lib/jenkins/jobs/odb-docker-build/workspace

if [ -z "$VERSION" ];then
	VERSION=`date +%Y%m%d%H%M`
fi

echo $VERSION

if [ -z "$imagesid" ];then
	echo $imagesid "is null"
else
	docker rmi -f $imagesid 
fi

docker build -t obd:$VERSION $project


docker tag obd:$VERSION 172.19.2.139/ops/obd:$VERSION
docker tag obd:$VERSION 172.19.2.139/ops/obd:latest
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 172.19.2.139/ops/obd:$VERSION
docker push 172.19.2.139/ops/obd:latest


5. ansible-playbook configuration

vim /home/app/ansible/playbooks/obd/obd.yaml
- hosts: 172.19.2.50
  remote_user: app
  sudo: yes
  tasks:
	- name: 关闭原有pod
	  shell: kubectl delete -f /opt/kube-obd
	  ignore_errors: yes
	- name: 启动新pod
	  shell: kubectl create -f /opt/kube-obd
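The playbook is then kicked off from Jenkins; a final Execute shell step along these lines would do it (my assumption about the wiring; the original post only shows the playbook itself):

ansible-playbook -vvv /home/app/ansible/playbooks/obd/obd.yaml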


Jenkins: build a Tomcat image on GitLab changes and deploy it to the Kubernetes cluster (信风, 2017-10-16) https://xsllqs.github.io/2017/10/16/kubernetes-tomcat

I. Write the Dockerfile

1. Work on 172.19.2.51

mkdir -pv /opt/git
git clone http://172.19.2.140:18080/lvqingshan/gcgj.git
cd /opt/git/gcgj
scp app@172.19.2.1:/home/app/portal-tomcat/webapps/portal.war ./
scp app@192.168.37.34:/home/app/portal-tomcat/conf/server.xml ./

vim Dockerfile
FROM tomcat:7.0.77-jre8		#这里是基础镜像
ADD server.xml /usr/local/tomcat/conf	#server.xml文件要和Dockerfile再同一目录,这里是替换文件
RUN rm -rf /usr/local/tomcat/webapps/*
COPY portal.war /usr/local/tomcat/webapps/ROOT.war		#portal.war文件要和Dockerfile再同一目录,这里是复制文件
EXPOSE 8080		#对外开放端口
CMD ["/usr/local/tomcat/bin/catalina.sh","run"]		#启动镜像时执行的命令

2. Test that the Dockerfile builds and runs

docker build -t gcgj/portal .
docker run -p 38080:8080 -idt gcgj/portal:latest
git add -A
git commit
git push -u origin master
GitLab account lvqingshan
password abcd1234

II. Configure login to the Harbor registry (172.19.2.139)

1. Install the registry CA certificate on 192.168.13.45

mkdir -pv /etc/docker/certs.d/172.19.2.139/
vim /etc/docker/certs.d/172.19.2.139/ca.crt
-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUQzFZBuFh7EZLOzWUYZ10QokL+BUwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE3MDcwNDA4NTMwMFoXDTIyMDcwMzA4NTMwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWgHFV6Cnbgxcs7X7ujj
APnnMmotzNnnTRhygJLCMpCZUaWYrdBkFE4T/HGpbYi1R5AykSPA7FCffFHpJIf8
Gs5DAZHmpY/uRsLSrqeP7/D8sYlyCpggVUeQJviV/a8L7PkCyGq9DSiU/MUBg4CV
Dw07OT46vFJH0lzTaZJNSz7E5QsekLyzRb61tZiBN0CJvSOxXy7wvdqK0610OEFM
T6AN8WfafTH4qmKWulFBJN1LjHTSYfTZzCL6kfTSG1M3kqG0W4B2o2+TkNLVmC9n
gEKdeh/yQmQWfraRkuWiCorJZGxte27xpjgu7u62sRyCm92xQRNgp5RiGHxP913+
HQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUDFiYOhMMWkuq93iNBoC1Udr9wLIwHwYDVR0jBBgwFoAUDFiYOhMM
Wkuq93iNBoC1Udr9wLIwDQYJKoZIhvcNAQELBQADggEBADTAW0FPhfrJQ6oT/WBe
iWTv6kCaFoSuWrIHiB9fzlOTUsicrYn6iBf+XzcuReZ6qBILghYGPWPpOmnap1dt
8UVl0Shdj+hyMbHzxR0XzX12Ya78Lxe1GFg+63XbxNwOURssd9DalJixKcyj2BW6
F6JG1aBQhdgGSBhsCDvG1zawqgZX/h4VWG55Kv752PYBrQOtUH8CS93NfeB5Q7bE
FOuyvGVd1iO40JQLoFIkZuyxNh0okGjfmT66dia7g+bC0v1SCMiE/UJ9uvHvfPYe
qLkSRjIHH7FH1lQ/AKqjl9qrpZe7lHplskQ/jynEWHcb60QRcAWPyd94OPrpLrTU
64g=
-----END CERTIFICATE-----

2. Log in to the registry

docker login 172.19.2.139
Username: admin
Password: Cmcc@1ot

3. Push a test image

The gcgj project has to be created in Harbor before the push will succeed

docker tag gcgj/portal:latest 172.19.2.139/gcgj/portal
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 172.19.2.139/gcgj/portal

III. Kubernetes manifests

Configure on the Kubernetes master 172.19.2.50

vim /opt/kube-portal/portal-rc1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: gcgj-portal
spec:
  replicas: 2
  selector:
	app: portal
  template:
	metadata:
	  labels:
		app: portal		#加标签
	spec:
	  containers:
	  - image: 172.19.2.139/gcgj/portal:latest		#要发布的镜像
		name: portal
		resources:
		  limits:
			cpu: "2"		#pod占用的cpu资源
			memory: 2Gi		#pod占用的内存资源
		ports:
		- containerPort: 8080		#pod提供的端口
		volumeMounts:
		- mountPath: /usr/local/tomcat/logs		#镜像内要挂载的目录
		  name: portal-logs
	  volumes:
	  - name: portal-logs
		hostPath:
		  path: /opt/logs/portal		#映射到本地的目录

vim /opt/kube-portal/portal-svc1.yaml
apiVersion: v1
kind: Service
metadata:
  name: gcgj-portal
spec:
  ports:
  - name: portal-svc
	port: 8080
	targetPort: 8080
	nodePort: 30088		#proxy映射出来的端口
  selector:
	app: portal
  type: NodePort		#端口类型

IV. Jenkins configuration

1. Under General, enable a parameterized build

Add a String Parameter

Name: VERSION

Default value: [empty]

Description: please enter a version number


2. Source Code Management: Git

Repository URL: http://172.19.2.140:18080/lvqingshan/gcgj.git


3. Trigger a build automatically on GitLab changes

Poll the GitLab project for changes once a minute

*/1 * * * *


4. Execute shell step

Two ways of versioning: when the build is triggered automatically or the version field is left empty, a timestamp is used as the version; when a version number is entered, that number is used

imagesid=`docker images | grep -i gcgj | awk '{print $3}'| head -1`
project=/var/lib/jenkins/jobs/build-docker-router-portal/workspace

if [ -z "$VERSION" ];then
	VERSION=`date +%Y%m%d%H%M`
fi

echo $VERSION

if docker ps -a|grep -i gcgj;then
   docker rm -f gcgj
fi

if [ -z "$imagesid" ];then
	echo $imagesid "is null"
else
	docker rmi -f $imagesid 
fi

docker build -t gcgj/portal:$VERSION $project


docker tag gcgj/portal:$VERSION 172.19.2.139/gcgj/portal:$VERSION
docker tag gcgj/portal:$VERSION 172.19.2.139/gcgj/portal:latest
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 172.19.2.139/gcgj/portal:$VERSION
docker push 172.19.2.139/gcgj/portal:latest


5. ansible-playbook configuration

Configure on the Ansible host 192.168.13.45

vim /home/app/ansible/playbooks/opstest/portal.yaml
- hosts: 172.19.2.50
  remote_user: app
  sudo: yes
  tasks:
	- name: 关闭原有pod
	  shell: kubectl delete -f /opt/kube-portal
	  ignore_errors: yes
	- name: 启动新pod
	  shell: kubectl create -f /opt/kube-portal


Jenkins: build a Docker image on GitLab changes, push it to Harbor and release it (信风, 2017-09-15) https://xsllqs.github.io/2017/09/15/jenkins-docker

I. Write the Dockerfile

1. Work on 172.19.2.51

mkdir -pv /opt/git
git clone http://172.19.2.140:18080/lvqingshan/gcgj.git
cd /opt/git/gcgj
scp app@172.19.2.1:/home/app/portal-tomcat/webapps/portal.war ./
scp app@192.168.37.34:/home/app/portal-tomcat/conf/server.xml ./

vim Dockerfile
FROM tomcat:7.0.77-jre8
ADD server.xml /usr/local/tomcat/conf
RUN rm -rf /usr/local/tomcat/webapps/*
COPY portal.war /usr/local/tomcat/webapps/ROOT.war
EXPOSE 8080
CMD ["/usr/local/tomcat/bin/catalina.sh","run"]

2. Test that the Dockerfile builds and runs

docker build -t gcgj/portal .
docker run -p 38080:8080 -idt gcgj/portal:latest
git add -A
git commit
git push -u origin master
GitLab account lvqingshan
password abcd1234

II. Configure login to the Harbor registry (172.19.2.139)

1. Install the registry CA certificate on 192.168.13.45

mkdir -pv /etc/docker/certs.d/172.19.2.139/
vim /etc/docker/certs.d/172.19.2.139/ca.crt
-----BEGIN CERTIFICATE-----
MIIDvjCCAqagAwIBAgIUQzFZBuFh7EZLOzWUYZ10QokL+BUwDQYJKoZIhvcNAQEL
BQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl
aUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr
dWJlcm5ldGVzMB4XDTE3MDcwNDA4NTMwMFoXDTIyMDcwMzA4NTMwMFowZTELMAkG
A1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK
BgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyWgHFV6Cnbgxcs7X7ujj
APnnMmotzNnnTRhygJLCMpCZUaWYrdBkFE4T/HGpbYi1R5AykSPA7FCffFHpJIf8
Gs5DAZHmpY/uRsLSrqeP7/D8sYlyCpggVUeQJviV/a8L7PkCyGq9DSiU/MUBg4CV
Dw07OT46vFJH0lzTaZJNSz7E5QsekLyzRb61tZiBN0CJvSOxXy7wvdqK0610OEFM
T6AN8WfafTH4qmKWulFBJN1LjHTSYfTZzCL6kfTSG1M3kqG0W4B2o2+TkNLVmC9n
gEKdeh/yQmQWfraRkuWiCorJZGxte27xpjgu7u62sRyCm92xQRNgp5RiGHxP913+
HQIDAQABo2YwZDAOBgNVHQ8BAf8EBAMCAQYwEgYDVR0TAQH/BAgwBgEB/wIBAjAd
BgNVHQ4EFgQUDFiYOhMMWkuq93iNBoC1Udr9wLIwHwYDVR0jBBgwFoAUDFiYOhMM
Wkuq93iNBoC1Udr9wLIwDQYJKoZIhvcNAQELBQADggEBADTAW0FPhfrJQ6oT/WBe
iWTv6kCaFoSuWrIHiB9fzlOTUsicrYn6iBf+XzcuReZ6qBILghYGPWPpOmnap1dt
8UVl0Shdj+hyMbHzxR0XzX12Ya78Lxe1GFg+63XbxNwOURssd9DalJixKcyj2BW6
F6JG1aBQhdgGSBhsCDvG1zawqgZX/h4VWG55Kv752PYBrQOtUH8CS93NfeB5Q7bE
FOuyvGVd1iO40JQLoFIkZuyxNh0okGjfmT66dia7g+bC0v1SCMiE/UJ9uvHvfPYe
qLkSRjIHH7FH1lQ/AKqjl9qrpZe7lHplskQ/jynEWHcb60QRcAWPyd94OPrpLrTU
64g=
-----END CERTIFICATE-----

2. Log in to the registry

docker login 172.19.2.139
Username: admin
Password: Cmcc@1ot

3. Push a test image

The gcgj project has to be created in Harbor before the push will succeed

docker tag gcgj/portal:latest 192.168.13.45/gcgj/portal
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 192.168.13.45/gcgj/portal

III. Jenkins configuration

1. Under General, enable a parameterized build

Add a String Parameter

Name: VERSION

Default value: [empty]

Description: please enter a version number


2. Source Code Management: Git

Repository URL: http://172.19.2.140:18080/lvqingshan/gcgj.git


3. Trigger a build automatically on GitLab changes

Poll the GitLab project for changes once a minute

*/1 * * * *


4. Execute shell step

Two ways of versioning: when the build is triggered automatically or the version field is left empty, a timestamp is used as the version; when a version number is entered, that number is used

imagesid=`docker images | grep -i gcgj | awk '{print $3}'| head -1`
project=/var/lib/jenkins/jobs/build-docker-router-portal/workspace

if [ -z "$VERSION" ];then
	VERSION=`date +%Y%m%d%H%M`
fi

echo $VERSION

if docker ps -a|grep -i gcgj;then
   docker rm -f gcgj
fi

if [ -z "$imagesid" ];then
	echo $imagesid "is null"
else
	docker rmi -f $imagesid 
fi

docker build -t gcgj/portal:$VERSION $project

docker run -p 38080:8080 -idt --name gcgj gcgj/portal:$VERSION


docker tag gcgj/portal:$VERSION 172.19.2.139/gcgj/portal:$VERSION
docker login -u admin -p Cmcc@1ot 172.19.2.139
docker push 172.19.2.139/gcgj/portal:$VERSION


Zabbix high-availability deployment (信风, 2017-08-20) https://xsllqs.github.io/2017/08/20/zabbix-keepalived-proxy

4 hosts:

192.168.13.54
192.168.13.55
192.168.13.56
192.168.13.57

2 VIPs:

192.168.13.59
192.168.13.60

When starting MySQL, keepalived has to be started as well

service mysql start

Export the zabbix database

mysqldump -uzabbix -pZabbix1344 zabbix > /home/app/zabbix.sql

Import the zabbix database

nohup mysql -uzabbix -pZabbix1344 zabbix < /home/app/zabbix.sql &

I. Database high availability

The MySQL root password on 192.168.13.56 and 192.168.13.57 is zabbix@1344

The installed mysql-server version is 5.7

dpkg -l | grep mysql
apt-get install mysql-server
service mysql stop

vim /etc/mysql/my.cnf
[mysqld]
skip-name-resolve
max_connections = 1000
bind-address = 0.0.0.0
server-id = 1	#192.168.13.56设置为1,192.168.13.57设置为2
log-bin = /var/log/mysql/mysql-bin.log
binlog-ignore-db = mysql,information_schema
auto-increment-increment = 2
auto-increment-offset = 1
slave-skip-errors = all
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
key_buffer_size         = 16M
max_allowed_packet      = 16M
thread_stack            = 192K
thread_cache_size       = 8
myisam-recover-options  = BACKUP
query_cache_limit       = 1M
query_cache_size        = 16M
log_error = /var/log/mysql/error.log
expire_logs_days        = 5
max_binlog_size   = 100M

[mysqld_safe]
socket          = /var/run/mysqld/mysqld.sock
nice            = 0
syslog

service mysql start

Configure 192.168.13.56

mysql -u root -pzabbix@1344
show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql-bin.000001 |      154 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.13.%' IDENTIFIED  BY 'replication';
flush privileges;
change master to
master_host='192.168.13.57',
master_user='replication',
master_password='replication',
master_log_file='mysql-bin.000002',
master_log_pos=154;
start slave;

Configure 192.168.13.57

mysql -u root -pzabbix@1344
show master status;
+------------------+----------+--------------+--------------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB         | Executed_Gtid_Set |
+------------------+----------+--------------+--------------------------+-------------------+
| mysql-bin.000001 |      154 |              | mysql,information_schema |                   |
+------------------+----------+--------------+--------------------------+-------------------+
GRANT REPLICATION SLAVE ON *.* TO 'replication'@'192.168.13.%' IDENTIFIED  BY 'replication';
flush privileges;
change master to
master_host='192.168.13.56',
master_user='replication',
master_password='replication',
master_log_file='mysql-bin.000002',
master_log_pos=154;
start slave;

After configuring, run on both hosts

show slave status\G;

and check that Slave_IO_Running and Slave_SQL_Running are both Yes

Run on 192.168.13.56

mysql -pzabbix@1344
create database zabbix character set utf8 collate utf8_bin;
grant all privileges on zabbix.* to zabbix@'%' identified by 'Zabbix1344';
grant all privileges on zabbix.* to zabbix@localhost identified by 'Zabbix1344';
flush privileges;

On 192.168.13.57, verify that the account, password and zabbix database were replicated

mysql -uzabbix -pZabbix1344
show databases;

Create and drop a database on 192.168.13.57 and check whether 192.168.13.56 stays in sync

CREATE DATABASE my_db1;
show databases;
DROP DATABASE my_db1;

Install keepalived on 192.168.13.56 and 192.168.13.57

apt-get install keepalived

Check that the ip_vs modules are loaded into the kernel; if they are not, VIP failover will fail

lsmod | grep ip_vs
modprobe ip_vs
modprobe ip_vs_wrr
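modprobe only loads the modules until the next reboot; on Ubuntu they can be made persistent by listing them in /etc/modules (added note):

echo ip_vs >> /etc/modules
echo ip_vs_wrr >> /etc/modules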

Settings on 192.168.13.56

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
iot@iot.com
 }
notification_email_from  iot@iot.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MYSQL_HA
 }
vrrp_instance VI_1 {
 state BACKUP
 interface eth0
 virtual_router_id 51
 priority 100
 advert_int 1
 nopreempt
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.13.60
 }
}
virtual_server 192.168.13.60 3306 {
 delay_loop 2
 persistence_timeout 50
 protocol TCP
 real_server 192.168.13.56 3306 {
 weight 3
 notify_down /etc/keepalived/mysql.sh
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
  }
}
}

Settings on 192.168.13.57

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
iot@iot.com
 }
notification_email_from  iot@iot.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id MYSQL_HA
 }
vrrp_instance VI_1 {
 state BACKUP
 interface eth0
 virtual_router_id 51
 priority 90
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.13.60
 }
}
virtual_server 192.168.13.60 3306 {
 delay_loop 2
 persistence_timeout 50
 protocol TCP
 real_server 192.168.13.57 3306 {
 weight 3
 notify_down /etc/keepalived/mysql.sh
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
  }
}
}

Configure on both 192.168.13.56 and 192.168.13.57

vim /etc/keepalived/mysql.sh
#!/bin/bash
pkill keepalived
chmod +x /etc/keepalived/mysql.sh
service keepalived start

Check the MySQL max_connections limit

show variables like 'max_connections';

II. zabbix_server high availability

1. Deploy and configure keepalived

Configure on 192.168.13.54 and 192.168.13.55

apt-get install keepalived
apt-get install open-jdk

Check that the ip_vs modules are loaded into the kernel; if they are not, VIP failover will fail

lsmod | grep ip_vs
modprobe ip_vs
modprobe ip_vs_wrr

vim /etc/keepalived/zabbix.sh
#!/bin/bash
pkill keepalived
chmod +x /etc/keepalived/zabbix.sh

Configure keepalived on 192.168.13.54

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
iot@iot.com
 }
notification_email_from  iot@iot.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id ZABBIX_HA
 }
vrrp_instance VI_1 {
 state BACKUP
 interface eth0
 virtual_router_id 55
 priority 100
 advert_int 1
 nopreempt
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.13.59
 }
}
virtual_server 192.168.13.59 10051 {
 delay_loop 2
 persistence_timeout 50
 protocol TCP
 real_server 192.168.13.54 10051 {
 weight 3
 notify_down /etc/keepalived/zabbix.sh
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
  }
}
}

Configure keepalived on 192.168.13.55

vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
iot@iot.com
 }
notification_email_from  iot@iot.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id ZABBIX_HA
 }
vrrp_instance VI_1 {
 state BACKUP
 interface eth0
 virtual_router_id 55
 priority 90
 advert_int 1
 authentication {
 auth_type PASS
 auth_pass 1111
 }
 virtual_ipaddress {
 192.168.13.59
 }
}
virtual_server 192.168.13.59 10051 {
 delay_loop 2
 persistence_timeout 50
 protocol TCP
 real_server 192.168.13.55 10051 {
 weight 3
 notify_down /etc/keepalived/zabbix.sh
 TCP_CHECK {
 connect_timeout 3
 nb_get_retry 3
 delay_before_retry 3
  }
}
}

2. Deploy and configure zabbix_server

/opt/zabbix_home/ was copied from 192.168.13.45 to 192.168.13.54 and 192.168.13.55, keeping the same path

Run the following on both 192.168.13.54 and 192.168.13.55

Point the zabbix frontend at the database VIP

vim /opt/zabbix_home/frontends/php/conf/zabbix.conf.php
$DB['SERVER']   = '192.168.13.60';

Point zabbix_server at the database VIP

vim /opt/zabbix_home/conf/zabbix/zabbix_server.conf
DBHost=192.168.13.60

Set zabbix_server's SourceIP to the virtual IP

vim /opt/zabbix_home/conf/zabbix/zabbix_server.conf
SourceIP=192.168.13.59

Resolve the library dependencies; check what is missing with:

ldd $(which /opt/zabbix_home/app/httpd/bin/httpd)
ldd $(which /opt/zabbix_home/sbin/zabbix_server)

apt-get install libaprutil1
apt-get install libpcre3
apt-get install libmysqlclient18:amd64
apt-get install libnet-snmp-perl
apt-get install snmp
apt-get install snmp-mibs-downloader

find / -name "libpcre.so*"
ln -sv /lib/x86_64-linux-gnu/libpcre.so.3.13.3 /lib/x86_64-linux-gnu/libpcre.so.1

Start the web frontend

/opt/zabbix_home/app/httpd/bin/httpd -k start

Start the server

/opt/zabbix_home/sbin/zabbix_server -c /opt/zabbix_home/conf/zabbix/zabbix_server.conf

Check the server log

tail -200f /opt/zabbix_home/logs/zabbix/zabbix_server.log

Start keepalived

service keepalived start

Enable keepalived logging

vim /etc/default/keepalived
DAEMON_ARGS="-D -d -S 0"

Resolve the dependencies of the WeChat alert python script

apt-get install python-simplejson
/opt/zabbix_home/app/zabbix/share/zabbix/alertscripts/wechat.py 1 1 1

III. zabbix-proxy deployment

1. Download and install the proxy package and resolve its dependencies

Run on 192.168.13.45

cd /opt
wget http://repo.zabbix.com/zabbix/3.0/ubuntu/pool/main/z/zabbix/zabbix-proxy-mysql_3.0.4-1+trusty_amd64.deb
dpkg -i zabbix-proxy-mysql_3.0.4-1+trusty_amd64.deb

2. Import the database schema

Run on 192.168.13.44

mysql -uroot -p
mysql >
create database zabbix_proxy character set utf8 collate utf8_bin;
grant all privileges on zabbix_proxy.* to zabbix@'%' identified by 'Zabbix1344';
grant all privileges on zabbix_proxy.* to zabbix@localhost identified by 'Zabbix1344';
flush privileges;

Run on 192.168.13.45

zcat /usr/share/doc/zabbix-proxy-mysql/schema.sql.gz | mysql -h192.168.13.44 -uzabbix -p"Zabbix1344" zabbix_proxy

3. Edit the proxy configuration file

vim /etc/zabbix/zabbix_proxy.conf
Server=192.168.13.59
ServerPort=10051
Hostname=Zabbix_proxy
LogFile=/var/log/zabbix/zabbix_proxy.log
PidFile=/var/run/zabbix/zabbix_proxy.pid
DBHost=192.168.13.44
DBName=zabbix_proxy
DBUser=zabbix
DBPassword=Zabbix1344
DBPort=3306
ConfigFrequency=600
DataSenderFrequency=3
StartPollers=100
StartPollersUnreachable=50
StartTrappers=30
StartDiscoverers=6
JavaGateway=127.0.0.1
JavaGatewayPort=10052
StartJavaPollers=5
CacheSize=320M
StartDBSyncers=20
HistoryCacheSize=512M
Timeout=4
ExternalScripts=/usr/lib/zabbix/externalscripts
FpingLocation=/usr/bin/fping
Fping6Location=/usr/bin/fping6
LogSlowQueries=3000
AllowRoot=1

4. Edit /etc/hosts on both the server and the proxy

vim /etc/hosts
192.168.13.45 Zabbix_proxy

5. Start zabbix_java for JMX monitoring

/opt/zabbix_home/app/zabbix/sbin/zabbix_java/startup.sh

6. Start zabbix-proxy

service zabbix-proxy start

7. The first configuration sync of zabbix_proxy can take a long time; force a refresh with:

zabbix_proxy -R config_cache_reload

Fixing missing-library errors when compiling from source (信风, 2017-08-10) https://xsllqs.github.io/2017/08/10/libmysqlclient

How to fix "/usr/bin/ld: cannot find -lxxx" at build time, using "/usr/bin/ld: cannot find -lmysqlclient_r" as the example.

1. Find where the library is expected to live with locate

locate libmysqlclient_r
/usr/lib64/mysql/libmysqlclient_r.so
/usr/lib64/mysql/libmysqlclient_r.so.16
/usr/lib64/mysql/libmysqlclient_r.so.16.0.0

These files turn out not to exist

ll /usr/lib64/mysql/libmysqlclient_r.so
ll /usr/lib64/mysql/libmysqlclient_r.so.16
ll /usr/lib64/mysql/libmysqlclient_r.so.16.0.0

Search the system for copies of the library

find / -name libmysqlclient_r*
/usr/lib64/libmysqlclient_r.so.14.0.0
/usr/lib64/libmysqlclient_r.so.12
/usr/lib64/libmysqlclient_r.so.12.0.0
/usr/lib64/libmysqlclient_r.so.16.0.0
/usr/lib64/libmysqlclient_r.so.16
/usr/lib64/libmysqlclient_r.so.15.0.0
/usr/lib64/libmysqlclient_r.so.15
/usr/lib64/libmysqlclient_r.so.14

2. Symlink the files that were found to the paths expected above

ln -sf /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/mysql/libmysqlclient_r.so
ln -sf /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/mysql/libmysqlclient_r.so.16
ln -sf /usr/lib64/libmysqlclient_r.so.16 /usr/lib64/mysql/libmysqlclient_r.so.16.0.0

3. Update the linker configuration and the environment variables

vim /etc/ld.so.conf
include ld.so.conf.d/*.conf
/usr/local/ssl/lib
/usr/lib64/mysql/
/usr/lib64/

ldconfig
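After running ldconfig, the linker cache can be checked to confirm it now knows about the library (added check):

ldconfig -p | grep libmysqlclient_r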

vim ~/.bashrc
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64/mysql/
export LIBRARY_PATH=/usr/lib64/mysql/:$LIBRARY_PATH

source ~/.bashrc

4. Rebuild

make clean
make