
Learning TiDB Step by Step (2): Scaling a TiDB Cluster Out and In



Everything in this document is deployed on a single server, so the scale-out and scale-in operations also happen on the same machine: the configuration files all use the same IP address, just with different ports and directories.

When following along, please adjust the configuration to your own needs.

If you are scaling out to a brand-new node, note that the new node also needs the basic OS configuration and tuning before the scale-out. For details, see my earlier article or the official documentation: https://docs.pingcap.com/zh/tidb/stable/check-before-deployment
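Incidentally, tiup can help fix failed check items on a new node by itself. A minimal sketch, assuming a topology file named topo.yaml describing the new node (the file name is a placeholder; the --apply flag asks tiup to attempt automatic fixes, and it should be run as a user with root privileges on the target):

[tidb@localhost ~]$ tiup cluster check ./topo.yaml --user root --apply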

Scale-out and scale-in commands are run directly on the control machine where tiup is installed.

OK, let's get started...


First, let's look at the current state of the cluster. (The environment from my earlier deployment article really didn't have enough memory left for scale-out/scale-in testing, so I prepared a new one that is identical except for the IP address.)

1 Check the current status

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

tiup is checking updates for component cluster ...
Starting component `cluster`: /home/tidb/.tiup/components/cluster/v1.9.0/tiup-cluster /home/tidb/.tiup/components/cluster/v1.9.0/tiup-cluster display tidb-jiantest
Cluster type: tidb
Cluster name: tidb-jiantest
Cluster version: v5.4.0
Deploy user: tidb
SSH type: builtin
Dashboard URL: http://192.168.198.20:2379/dashboard
ID Role Host Ports OS/Arch Status Data Dir Deploy Dir
-- ---- ---- ----- ------- ------ -------- ----------
192.168.198.20:9093 alertmanager 192.168.198.20 9093/9094 linux/x86_64 Up /tidb-data/alertmanager-9093 /tidb-deploy/alertmanager-9093
192.168.198.20:3000 grafana 192.168.198.20 3000 linux/x86_64 Up - /tidb-deploy/grafana-3000
192.168.198.20:2379 pd 192.168.198.20 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379
192.168.198.20:9090 prometheus 192.168.198.20 9090/12020 linux/x86_64 Up /tidb-data/prometheus-9090 /tidb-deploy/prometheus-9090
192.168.198.20:4000 tidb 192.168.198.20 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.198.20:9000 tiflash 192.168.198.20 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.198.20:20160 tikv 192.168.198.20 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.198.20:20161 tikv 192.168.198.20 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.198.20:20162 tikv 192.168.198.20 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
Total nodes: 9
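A small tip before we dive in: as the cluster grows, tiup cluster display can filter the output by role or by node, which makes before/after comparisons much easier. For example:

[tidb@localhost ~]$ tiup cluster display tidb-jiantest -R pd    ## only PD instances
[tidb@localhost ~]$ tiup cluster display tidb-jiantest -N 192.168.198.20:2379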

2 PD

1 Current node information

192.168.198.20:2379 pd 192.168.198.20 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379

2 Scale out

Prepare the configuration file

[tidb@localhost ~]$ pwd
/home/tidb

[tidb@localhost ~]$ vi scaleout-pd.yaml

[tidb@localhost ~]$ cat scaleout-pd.yaml

pd_servers:
  - host: 192.168.198.20
    ssh_port: 22
    client_port: 3379
    peer_port: 3380
    deploy_dir: /tidb-deploy/pd-3379
    data_dir: /tidb-data/pd-3379
    log_dir: /tidb-deploy/pd-3379/log
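Because everything in this lab shares one host, it is worth confirming that the new ports are actually free before scaling out. A quick check with ss against the ports chosen above (no output means they are unused):

[tidb@localhost ~]$ ss -tln | grep -E '3379|3380'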

Pre-scale-out check

[tidb@localhost ~]$ tiup cluster check tidb-jiantest scaleout-pd.yaml --cluster --user tidb

Scale out

[tidb@localhost ~]$ tiup cluster scale-out tidb-jiantest scaleout-pd.yaml

Post-scale-out check

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

192.168.198.20:2379 pd 192.168.198.20 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379

192.168.198.20:3379 pd 192.168.198.20 3379/3380 linux/x86_64 Up /tidb-data/pd-3379 /tidb-deploy/pd-3379
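To double-check membership from PD's own point of view, pd-ctl (run through tiup ctl, pinned to this cluster's version) lists the members and marks the current leader:

[tidb@localhost ~]$ tiup ctl:v5.4.0 pd -u http://192.168.198.20:2379 member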

3 Scale in

[tidb@localhost bin]$ tiup cluster scale-in tidb-jiantest --node 192.168.198.20:3379 ## it is best if the PD node being scaled in is not currently the leader
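If the PD node you want to remove does happen to be the leader, you can transfer leadership away before running the scale-in above. A sketch, assuming the surviving member is named pd-192.168.198.20-2379 (check the actual name in the member output first; tiup-deployed member names follow this pattern, but yours may differ):

[tidb@localhost ~]$ tiup ctl:v5.4.0 pd -u http://192.168.198.20:2379 member leader transfer pd-192.168.198.20-2379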

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

192.168.198.20:2379 pd 192.168.198.20 2379/2380 linux/x86_64 Up|L|UI /tidb-data/pd-2379 /tidb-deploy/pd-2379

3 TiKV

1 Current node information

192.168.198.20:20160 tikv 192.168.198.20 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.198.20:20161 tikv 192.168.198.20 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.198.20:20162 tikv 192.168.198.20 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162

2 Scale out

Prepare the configuration file

[tidb@localhost ~]$ vi scaleout-tikv.yaml

[tidb@localhost ~]$ cat scaleout-tikv.yaml

tikv_servers:
  - host: 192.168.198.20
    ssh_port: 22
    port: 20163
    status_port: 20183
    deploy_dir: /tidb-deploy/tikv-20163
    data_dir: /tidb-data/tikv-20163
    log_dir: /tidb-deploy/tikv-20163/log

Pre-scale-out check

[tidb@localhost ~]$ tiup cluster check tidb-jiantest scaleout-tikv.yaml --cluster --user tidb

Scale out

[tidb@localhost ~]$ tiup cluster scale-out tidb-jiantest scaleout-tikv.yaml

Post-scale-out check

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

192.168.198.20:20160 tikv 192.168.198.20 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.198.20:20161 tikv 192.168.198.20 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.198.20:20162 tikv 192.168.198.20 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
192.168.198.20:20163 tikv 192.168.198.20 20163/20183 linux/x86_64 Up /tidb-data/tikv-20163 /tidb-deploy/tikv-20163

3 Scale in

[tidb@localhost ~]$ tiup cluster scale-in tidb-jiantest --node 192.168.198.20:20163 ## it is recommended to keep at least three TiKV nodes after scaling in

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

192.168.198.20:20160 tikv 192.168.198.20 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.198.20:20161 tikv 192.168.198.20 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.198.20:20162 tikv 192.168.198.20 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162
192.168.198.20:20163 tikv 192.168.198.20 20163/20183 linux/x86_64 Tombstone /tidb-data/tikv-20163 /tidb-deploy/tikv-20163

A scaled-in TiKV node does not disappear from the cluster right away: its data must first be rebalanced onto the remaining stores. Once its status turns to Tombstone, it can be removed.
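To watch the rebalancing progress, pd-ctl's store command shows each store's state and region count; the removed store first goes Offline while its regions drain, then turns Tombstone:

[tidb@localhost ~]$ tiup ctl:v5.4.0 pd -u http://192.168.198.20:2379 store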

[tidb@localhost ~]$ tiup cluster prune tidb-jiantest

192.168.198.20:20160 tikv 192.168.198.20 20160/20180 linux/x86_64 Up /tidb-data/tikv-20160 /tidb-deploy/tikv-20160
192.168.198.20:20161 tikv 192.168.198.20 20161/20181 linux/x86_64 Up /tidb-data/tikv-20161 /tidb-deploy/tikv-20161
192.168.198.20:20162 tikv 192.168.198.20 20162/20182 linux/x86_64 Up /tidb-data/tikv-20162 /tidb-deploy/tikv-20162

4 TiDB

1 Current node information

192.168.198.20:4000 tidb 192.168.198.20 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000

2 Scale out

Prepare the configuration file

[tidb@localhost ~]$ pwd

/home/tidb

[tidb@localhost ~]$ vi scaleout-tidb.yaml

[tidb@localhost ~]$ cat scaleout-tidb.yaml

tidb_servers:
  - host: 192.168.198.20
    ssh_port: 22
    port: 4001
    status_port: 10081
    deploy_dir: /tidb-deploy/tidb-4001
    log_dir: /tidb-deploy/tidb-4001/log

Pre-scale-out check

[tidb@localhost ~]$ tiup cluster check tidb-jiantest scaleout-tidb.yaml --cluster --user tidb

Scale out

[tidb@localhost ~]$ tiup cluster scale-out tidb-jiantest scaleout-tidb.yaml

Post-scale-out check

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

192.168.198.20:4000 tidb 192.168.198.20 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000
192.168.198.20:4001 tidb 192.168.198.20 4001/10081 linux/x86_64 Up - /tidb-deploy/tidb-4001
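Since tidb-server is stateless, the new instance can serve SQL immediately; for example, connecting with a MySQL client to the new port (credentials per your own setup):

[tidb@localhost ~]$ mysql -h 192.168.198.20 -P 4001 -u root -p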

3 Scale in

[tidb@localhost ~]$ tiup cluster scale-in tidb-jiantest --node 192.168.198.20:4001

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

192.168.198.20:4000 tidb 192.168.198.20 4000/10080 linux/x86_64 Up - /tidb-deploy/tidb-4000

----------------Persistence is victory


5 TiFlash

1 Current node information

192.168.198.20:9000 tiflash 192.168.198.20 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000

2 Scale out

Prepare the configuration file

[tidb@localhost ~]$ pwd

/home/tidb

[tidb@localhost ~]$ vi scaleout-tiflash.yaml

[tidb@localhost ~]$ cat scaleout-tiflash.yaml

tiflash_servers:
  - host: 192.168.198.20
    tcp_port: 9001
    http_port: 8124
    flash_service_port: 3931
    flash_proxy_port: 20171
    flash_proxy_status_port: 20293
    metrics_port: 8235
    deploy_dir: "/tidb-deploy/tiflash-9001"
    data_dir: "/tidb-data/tiflash-9001"
    log_dir: "/tidb-deploy/tiflash-9001/log"

Pre-scale-out check

[tidb@localhost ~]$ tiup cluster check tidb-jiantest scaleout-tiflash.yaml --cluster --user tidb

Scale out

[tidb@localhost ~]$ tiup cluster scale-out tidb-jiantest scaleout-tiflash.yaml

Post-scale-out check

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

192.168.198.20:9000 tiflash 192.168.198.20 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.198.20:9001 tiflash 192.168.198.20 9001/8124/3931/20171/20293/8235 linux/x86_64 Up /tidb-data/tiflash-9001 /tidb-deploy/tiflash-9001

3 Scale in

Before scaling in a TiFlash node, make sure the number of remaining TiFlash nodes is greater than or equal to the maximum TiFlash replica count across all tables; otherwise, reduce the TiFlash replica count of the affected tables first.

alter table <db-name>.<table-name> set tiflash replica 0; -- wait until the affected tables' TiFlash replicas have been removed, then scale in
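To confirm the replicas are actually gone before scaling in, you can query the information_schema.tiflash_replica table; once a table's TiFlash replication is removed, its row disappears. A sketch, run through the mysql client (connection details per your own setup):

[tidb@localhost ~]$ mysql -h 192.168.198.20 -P 4000 -u root -p -e "select table_schema, table_name, replica_count, available from information_schema.tiflash_replica;"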

### When I get the chance, I'd like to write a separate post about this step and its caveats.

[tidb@localhost ~]$ tiup cluster scale-in tidb-jiantest --node 192.168.198.20:9001

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

192.168.198.20:9000 tiflash 192.168.198.20 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000
192.168.198.20:9001 tiflash 192.168.198.20 9001/8124/3931/20171/20293/8235 linux/x86_64 Tombstone /tidb-data/tiflash-9001 /tidb-deploy/tiflash-9001

As with TiKV, a scaled-in TiFlash node does not disappear from the cluster immediately; wait until it becomes Tombstone, and then it can be removed, as shown below.
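The prune step is the same one used in the TiKV section:

[tidb@localhost ~]$ tiup cluster prune tidb-jiantest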

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

192.168.198.20:9000 tiflash 192.168.198.20 9000/8123/3930/20170/20292/8234 linux/x86_64 Up /tidb-data/tiflash-9000 /tidb-deploy/tiflash-9000

6 TiCDC

TiCDC was not installed at the beginning, but we can add it as a new component through a scale-out too.

1 Prepare the configuration file

[tidb@localhost ~]$ vi scaleout-ticdc.yaml

[tidb@localhost ~]$ cat scaleout-ticdc.yaml

cdc_servers:
  - host: 192.168.198.20
    gc-ttl: 86400
    data_dir: /tidb-data/cdc-8300

Pre-scale-out check

[tidb@localhost ~]$ tiup cluster check tidb-jiantest scaleout-ticdc.yaml --cluster --user tidb

2 Scale out

[tidb@localhost ~]$ tiup cluster scale-out tidb-jiantest scaleout-ticdc.yaml

Post-scale-out check

[tidb@localhost ~]$ tiup cluster display tidb-jiantest

192.168.198.20:8300 cdc 192.168.198.20 8300 linux/x86_64 Up /tidb-data/cdc-8300 /tidb-deploy/cdc-8300
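To verify the new capture from TiCDC's own side, the cdc command-line client (also available through tiup ctl, pinned to this cluster's version) lists the registered captures:

[tidb@localhost ~]$ tiup ctl:v5.4.0 cdc capture list --pd=http://192.168.198.20:2379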

3 Scale in

[tidb@localhost ~]$ tiup cluster scale-in tidb-jiantest --node 192.168.198.20:8300

[tidb@localhost bin]$ tiup cluster display tidb-jiantest

The cdc node is no longer listed.

That's it for today. If you have any questions, suggestions, or comments about the above, feel free to leave a message.
