
Hands-on | TiDB Operator in Practice


Abstract: 1. Environment; 2. Installation — configure passwordless SSH login and prepare the image files required by the nodes. Because some images cannot be pulled from within China, they first have to be downloaded through a proxy and then pushed to a local registry or DockerHub, the configuration files have to be modified accordingly, and an Nginx server has to be set up to distribute a few components. Reprinted from the WeChat official account "北京IT爷们儿".

K8s and TiDB are both active open-source products in today's open-source community. The TiDB Operator project orchestrates and manages TiDB clusters on K8s. This article records in detail the process of deploying K8s and installing TiDB Operator, in the hope that it helps readers who have just gotten started.
1. Environment

Ubuntu 16.04
K8s 1.14.1

2. Installing K8s with Kubespray

Configure passwordless SSH login

yum -y install expect

vi /tmp/autocopy.exp

#!/usr/bin/expect

# the timeout value and argv indices were lost in the original; 10 seconds and argv 0/1 assumed
set timeout 10
set user_hostname [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh-copy-id $user_hostname
expect {
    "(yes/no)?"
    {
        send "yes\r"
        expect "*assword:" { send "$password\r" }
    }
    "*assword:"
    {
        send "$password\r"
    }
}
expect eof

ssh-keyscan addedip >> ~/.ssh/known_hosts

ssh-keygen -t rsa -P ""

for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh-keyscan $i >> ~/.ssh/known_hosts ; done

/tmp/autocopy.exp root@addedip
ssh-copy-id addedip

/tmp/autocopy.exp root@10.0.0.31
/tmp/autocopy.exp root@10.0.0.32
/tmp/autocopy.exp root@10.0.0.33
/tmp/autocopy.exp root@10.0.0.40
/tmp/autocopy.exp root@10.0.0.10
/tmp/autocopy.exp root@10.0.0.20
/tmp/autocopy.exp root@10.0.0.50
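
If passwordless login was set up correctly, logging in to one of the nodes should no longer prompt for a password. A minimal sanity check (10.0.0.31 is one of the nodes listed above):

ssh root@10.0.0.31 hostname
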
Configure Kubespray

pip install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster

Edit inventory/mycluster/inventory.ini:

## Configure "ip" variable to bind kubernetes services on a
## different ip than the default iface
## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
etcd1 ansible_host=10.0.0.31 etcd_member_name=etcd1
etcd2 ansible_host=10.0.0.32 etcd_member_name=etcd2
etcd3 ansible_host=10.0.0.33 etcd_member_name=etcd3
master1 ansible_host=10.0.0.40
node1 ansible_host=10.0.0.10
node2 ansible_host=10.0.0.20
node3 ansible_host=10.0.0.50

## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
# node1
# node2
master1

[etcd]
# node1
# node2
# node3
etcd1
etcd2
etcd3

[kube-node]
# node2
# node3
# node4
# node5
# node6
node1
node2
node3

[k8s-cluster:children]
kube-master
kube-node

Image files required by the nodes

Because some images cannot be pulled from within China, they first have to be downloaded locally through a proxy and then pushed to a local registry or DockerHub, and the configuration files have to be modified accordingly. A few components are hosted at https://storage.googleapis.com, so an Nginx server has to be set up to distribute those files.

Set up the Nginx server

~/distribution/docker-compose.yml

Create the file directory and the Nginx configuration directory

~/distribution/conf.d/open_distribute.conf

Start it up

Download and upload the required files. For the exact version numbers, refer to the kubeadm_version, kube_version and image_arch parameters in roles/download/defaults/main.yml.
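
To look those values up quickly, one option is to grep the role defaults from the Kubespray checkout (a minimal sketch, not part of the original steps):

grep -E 'kubeadm_version|kube_version|image_arch' roles/download/defaults/main.yml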

Install Docker and Docker Compose

apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

apt-get update

apt-get install docker-ce docker-ce-cli containerd.io

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
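
A quick way to confirm both tools were installed (a trivial check, not from the original article):

docker --version
docker-compose --version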

Create the Nginx docker-compose.yml

mkdir ~/distribution
vi ~/distribution/docker-compose.yml

# distribute
version: "2"
services:
  distribute:
    image: nginx:1.15.12
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./distributedfiles:/usr/share/nginx/html
    network_mode: "host"
    container_name: nginx_distribute

mkdir ~/distribution/distributedfiles
mkdir ~/distribution/conf.d
vi ~/distribution/conf.d/open_distribute.conf

#open_distribute.conf

server {
    #server_name distribute.search.leju.com;
    listen 8888;

    root /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers X-Requested-With;
    add_header Access-Control-Allow-Methods GET,POST,OPTIONS;

    location / {
        # index index.html;
        autoindex on;
    }
    expires off;
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|eot|ttf|woff|woff2|svg)$ {
        expires -1;
    }

    location ~ .*\.(js|css)?$ {
        expires -1;
    }
} # end of public static files domain : [ distribute.search.leju.com ]

Start the container:

docker-compose up -d
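
To verify that the distribution server is up and listing files (a minimal check; 8888 is the port configured in open_distribute.conf above):

curl http://127.0.0.1:8888/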

Download the kubeadm and hyperkube binaries and upload them to the distribution server:

wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm

scp /tmp/kubeadm 10.0.0.60:/root/distribution/distributedfiles

wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/hyperkube

Images that need to be pulled and pushed to the private registry

docker pull k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0
docker tag k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0 jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0
docker push jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0

docker pull k8s.gcr.io/k8s-dns-node-cache:1.15.1
docker tag k8s.gcr.io/k8s-dns-node-cache:1.15.1 jiashiwen/k8s-dns-node-cache:1.15.1
docker push jiashiwen/k8s-dns-node-cache:1.15.1

docker pull gcr.io/google_containers/pause-amd64:3.1
docker tag gcr.io/google_containers/pause-amd64:3.1 jiashiwen/pause-amd64:3.1
docker push jiashiwen/pause-amd64:3.1

docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 jiashiwen/kubernetes-dashboard-amd64:v1.10.1
docker push jiashiwen/kubernetes-dashboard-amd64:v1.10.1

docker pull gcr.io/google_containers/kube-apiserver:v1.14.1
docker tag gcr.io/google_containers/kube-apiserver:v1.14.1 jiashiwen/kube-apiserver:v1.14.1
docker push jiashiwen/kube-apiserver:v1.14.1

docker pull gcr.io/google_containers/kube-controller-manager:v1.14.1
docker tag gcr.io/google_containers/kube-controller-manager:v1.14.1 jiashiwen/kube-controller-manager:v1.14.1
docker push jiashiwen/kube-controller-manager:v1.14.1

docker pull gcr.io/google_containers/kube-scheduler:v1.14.1
docker tag gcr.io/google_containers/kube-scheduler:v1.14.1 jiashiwen/kube-scheduler:v1.14.1
docker push jiashiwen/kube-scheduler:v1.14.1

docker pull gcr.io/google_containers/kube-proxy:v1.14.1
docker tag gcr.io/google_containers/kube-proxy:v1.14.1 jiashiwen/kube-proxy:v1.14.1
docker push jiashiwen/kube-proxy:v1.14.1

docker pull gcr.io/google_containers/pause:3.1
docker tag gcr.io/google_containers/pause:3.1 jiashiwen/pause:3.1
docker push jiashiwen/pause:3.1

docker pull gcr.io/google_containers/coredns:1.3.1
docker tag gcr.io/google_containers/coredns:1.3.1 jiashiwen/coredns:1.3.1
docker push jiashiwen/coredns:1.3.1

A script to pull and push the images

#!/bin/bash

privaterepo=jiashiwen

k8sgcrimages=(
cluster-proportional-autoscaler-amd64:1.4.0
k8s-dns-node-cache:1.15.1
)

gcrimages=(
pause-amd64:3.1
kubernetes-dashboard-amd64:v1.10.1
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
coredns:1.3.1
)

for k8sgcrimageName in ${k8sgcrimages[@]} ; do
    echo $k8sgcrimageName
    docker pull k8s.gcr.io/$k8sgcrimageName
    docker tag k8s.gcr.io/$k8sgcrimageName $privaterepo/$k8sgcrimageName
    docker push $privaterepo/$k8sgcrimageName
done

for gcrimageName in ${gcrimages[@]} ; do
    echo $gcrimageName
    docker pull gcr.io/google_containers/$gcrimageName
    docker tag gcr.io/google_containers/$gcrimageName $privaterepo/$gcrimageName
    docker push $privaterepo/$gcrimageName
done
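
One way to run it (the filename push_images.sh is a placeholder of mine, not from the original):

vi /tmp/push_images.sh      # paste the script above
bash /tmp/push_images.sh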

Edit inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml and change the K8s image repository:

# kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

Edit roles/download/defaults/main.yml:

#dnsautoscaler_image_repo: "k8s.gcr.io/cluster-proportional-autoscaler-{{ image_arch }}"
dnsautoscaler_image_repo: "jiashiwen/cluster-proportional-autoscaler-{{ image_arch }}"

#kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

#pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
pod_infra_image_repo: "jiashiwen/pause-{{ image_arch }}"

#dashboard_image_repo: "gcr.io/google_containers/kubernetes-dashboard-{{ image_arch }}"
dashboard_image_repo: "jiashiwen/kubernetes-dashboard-{{ image_arch }}"

#nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_repo: "jiashiwen/k8s-dns-node-cache"

#kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubeadm_download_url: "http://10.0.0.60:8888/kubeadm"

#hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
hyperkube_download_url: "http://10.0.0.60:8888/hyperkube"
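
Before running the playbook it is worth confirming that the two overridden download URLs resolve (a minimal check; 10.0.0.60:8888 is the Nginx distribution server set up above):

curl -I http://10.0.0.60:8888/kubeadm
curl -I http://10.0.0.60:8888/hyperkube
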
3. Run the installation

Install command

ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml

Reset command

ansible-playbook -i inventory/mycluster/inventory.ini reset.yml
4. Verify the K8s cluster

Install kubectl

Open https://storage.googleapis.co... in a local browser to get the latest stable version number.

Replace the $(curl -s https://storage.googleapis.co...) part of the download URL with the version number obtained in the previous step; for this deployment that gives https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
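
Put together, the download roughly looks like this (a sketch; the stable.txt lookup is the standard upstream pattern that the truncated URL above refers to):

# query the latest stable version number
curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt
# download the kubectl binary for the chosen version
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl -O /tmp/kubectl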

Upload the downloaded kubectl

scp /tmp/kubectl root@xxx:/root

Change the file permissions

chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

Ubuntu

sudo snap install kubectl --classic

CentOS
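
The original does not show a CentOS-specific install command; the simplest option consistent with the steps above is to reuse the kubectl binary that was already downloaded (a sketch; centos-host is a placeholder):

scp /tmp/kubectl root@centos-host:/usr/local/bin/kubectl
ssh root@centos-host chmod +x /usr/local/bin/kubectl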

Copy the ~/.kube/config file from the master node to every client machine that needs to access the cluster.

scp 10.0.0.40:/root/.kube/config ~/.kube/config

Run the following commands to verify the cluster

kubectl get nodes
kubectl cluster-info
5. Deploy TiDB Operator

Install Helm

Reference: https://blog.csdn.net/bbwangj...

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Check the Helm version

helm version

Initialize

helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
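
After helm init finishes, the Tiller pod should come up in kube-system (a minimal check, not part of the original steps):

kubectl get pods -n kube-system | grep tiller
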
Provide local volumes for K8s

Reference: https://github.com/kubernetes...
When tidb-operator starts the cluster it needs to bind PVs for PD and TiKV, so multiple directories have to be created under the discovery directory.

Format and mount the disk

mkfs.ext4 /dev/vdb
DISK_UUID=$(blkid -s UUID -o value /dev/vdb)
mkdir /mnt/$DISK_UUID
mount -t ext4 /dev/vdb /mnt/$DISK_UUID

Persist the mount in /etc/fstab

echo UUID=`sudo blkid -s UUID -o value /dev/vdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab

Create multiple directories and bind-mount them into the discovery directory

for i in $(seq 1 10); do
    sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
    sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
done

Persist the bind mounts in /etc/fstab

for i in $(seq 1 10); do
    echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
done

Create the local-volume-provisioner for tidb-operator

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
kubectl get po -n kube-system -l app=local-volume-provisioner
kubectl get pv --all-namespaces | grep local-storage
6. Install TiDB Operator

The project uses gcr.io/google-containers/hyperkube, which cannot be pulled from within China. The simplest workaround is to re-push the image to DockerHub and then modify charts/tidb-operator/values.yaml, as sketched below.
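
Re-pushing hyperkube follows the same pull/tag/push pattern used earlier (a sketch; yourrepo is a placeholder DockerHub account and the v1.14.1 tag is assumed to match the cluster version):

docker pull gcr.io/google-containers/hyperkube:v1.14.1
docker tag gcr.io/google-containers/hyperkube:v1.14.1 yourrepo/hyperkube:v1.14.1
docker push yourrepo/hyperkube:v1.14.1

The relevant section of charts/tidb-operator/values.yaml then becomes: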

scheduler:
  # With rbac.create=false, the user is responsible for creating this account
  # With rbac.create=true, this service account will be created
  # Also see rbac.create and clusterScoped
  serviceAccount: tidb-scheduler
  logLevel: 2
  replicas: 1
  schedulerName: tidb-scheduler
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
  # kubeSchedulerImageName: gcr.io/google-containers/hyperkube
  kubeSchedulerImageName: yourrepo/hyperkube
  # This will default to matching your kubernetes version
  # kubeSchedulerImageTag: latest

TiDB Operator uses CRDs to extend Kubernetes, so before using TiDB Operator you first need to create the TidbCluster custom resource definition.

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
kubectl get crd tidbclusters.pingcap.com

Install TiDB Operator

git clone https://github.com/pingcap/tidb-operator.git
cd tidb-operator
helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
7. Deploy TiDB

helm install charts/tidb-cluster --name=demo --namespace=tidb
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
8. Verification

Install the MySQL client

Reference: https://dev.mysql.com/doc/ref...

Install on CentOS

wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
yum repolist all | grep mysql
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
yum install mysql-community-client

Install on Ubuntu

wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
dpkg -i mysql-apt-config_0.8.13-1_all.deb
apt update

# choose the MySQL version
dpkg-reconfigure mysql-apt-config
apt install mysql-client -y
9. Map the TiDB port

Check the TiDB Service

kubectl get svc --all-namespaces

Map the TiDB port

# local access only
kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb

# allow access from other hosts
kubectl port-forward --address 0.0.0.0 svc/demo-tidb 4000:4000 --namespace=tidb

Log in to MySQL for the first time

mysql -h 127.0.0.1 -P 4000 -u root -D test

Change the TiDB password

SET PASSWORD FOR 'root'@'%' = 'wD3cLpyO5M'; FLUSH PRIVILEGES;
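
To confirm the change took effect, reconnect with the new password (a trivial follow-up check, not in the original):

mysql -h 127.0.0.1 -P 4000 -u root -p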

Notes on pitfalls

1. Installing K8s inside China

Most K8s images are hosted on gcr.io, which cannot be reached from within China. The basic approach is to import the images into DockerHub or a private registry; this is covered in detail in the K8s deployment section above and is not repeated here.

2. TiDB Operator local storage configuration

When the Operator starts the cluster, PD and TiKV need to bind local storage. If there are not enough mount points, the pods cannot find PVs to bind during startup and stay in Pending or ContainerCreating. For details see the "Sharing a disk filesystem by multiple filesystem PVs" section of https://github.com/kubernetes...; binding multiple mount directories on the same disk gives the Operator enough PVs to bind.

3. MySQL client version

TiDB currently only supports the MySQL 5.7 client; an 8.0 client reports ERROR 1105 (HY000): Unknown charset id 255.
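
Checking the installed client version before connecting avoids this error (a trivial check, not from the original):

mysql --version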



This article is reprinted from the WeChat official account "北京IT爷们儿".

