Greenplum Database: Performing a Major Version Upgrade from 6 to 7


Introduction

For Greenplum, a minor version upgrade is straightforward, for example from 6.19 to 6.28 or from 7.1 to 7.3: you only need to stop the cluster and upgrade the Greenplum software in place.

A major version upgrade is more involved. For a production environment, the safer approach is to build a new cluster and then migrate the data into it.
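
For reference, a minimal sketch of what such an in-place minor upgrade typically looks like (the package name, install path, and symlink handling below are illustrative assumptions; check the release notes for your version):

# Stop the cluster on the old binaries
gpstop -a
# Install the new minor-version package (illustrative file name)
yum install -y greenplum-db-6.28.0-rhel7-x86_64.rpm
# Point the greenplum-db symlink at the new release and reload the environment
ln -sfn /usr/local/greenplum-db-6.28.0 /usr/local/greenplum-db
source /usr/local/greenplum-db/greenplum_path.sh
# Start the cluster on the new binaries
gpstart -a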

To migrate from GP6 to GP7, the following options are available:

1. gpbackup and gprestore: fast, but requires extra storage space for the backup files. Using S3 avoids the time spent transferring backup files with scp.

2. gpcopy: data is sent over the network and inserted directly into the target database without landing on disk, so it places higher demands on the network.

3. pg_dump and pg_restore: logical export and import; slow, uses local disk space, and not recommended for large data volumes.

Each method has its advantages and disadvantages; they are analyzed in detail below:

1. Using gpbackup and gprestore

Advantages:

  • Fast: gpbackup and gprestore are backup and restore tools optimized for Greenplum; they exploit Greenplum's parallel architecture, so backup and restore are fast.

  • Fine-grained control: whole databases, individual tables, or individual schemas can be backed up and restored, which provides flexibility.

  • Incremental backups: append-optimized (AO) tables support incremental backups, which reduces the amount of data and the backup time (a sketch follows after this list).

  • Reliability: backup and restore remain efficient even for large databases with a lot of data.

Disadvantages:

  • Compatibility limits between GP6 and GP7: some gpbackup and gprestore features or options may not be compatible across GP6 and GP7, so compatibility needs to be checked carefully.

  • Backup storage requirements: extra storage space is needed for the backup files, especially when the data volume is large.

  • Relatively complex operation: compared with a direct in-place upgrade, several steps are required (backup, transfer, restore), which adds operational complexity and risk.
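
As a brief illustration of the incremental-backup point above (a minimal sketch, assuming the gpbackup version in use supports incremental backups; they apply only to AO/AOCO tables and require per-leaf-partition data files):

# Initial full backup; --leaf-partition-data is required for later incrementals
gpbackup --dbname sbtest --leaf-partition-data

# Subsequent incremental backup: only AO tables changed since the previous backup are re-dumped
gpbackup --dbname sbtest --leaf-partition-data --incremental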

2. Using gpcopy

Advantages:

  • Direct data copy: gpcopy copies data directly from one Greenplum cluster to another, with no separate backup-then-restore step, which simplifies the process.

  • Online operation: data can be migrated while the source database stays online, reducing downtime.

  • Efficient parallel copy: parallel data copying is supported, which speeds up migration of large data sets.

  • Flexibility: tables and other objects in a database can be copied selectively, giving more control over the migration (see the sketch after this list).

Disadvantages:

  • Two clusters required: a new GP7 cluster is normally needed as the target, which means additional hardware resources.

  • Operational complexity: configuring network connectivity, permissions, and security between the two clusters can be complicated.

  • Data consistency: if the source cluster stays online during the copy, consistency must be ensured, and incremental changes may need to be handled separately after the migration.
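
A minimal sketch of the selective-copy point above, assuming gpcopy's --include-table option (which takes fully qualified database.schema.table names); the hosts and table names are illustrative, and the exact option syntax should be verified against the gpcopy reference for your version:

# Copy only selected tables from the GP6 source into the GP7 target cluster
gpcopy --source-host 192.92.0.58 --source-port 5432 --source-user gpadmin \
  --dest-host 192.92.0.59 --dest-port 5432 --dest-user gpadmin \
  --include-table sbtest.public.sbtest1 --include-table sbtest.public.sbtest2 \
  --drop --jobs 2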

3. Using pg_dump and pg_restore

Advantages:

  • Standard tools: pg_dump and pg_restore are the standard PostgreSQL backup and restore tools; they are reasonably compatible with Greenplum, widely used, and relatively stable.

  • Broad applicability: they work not only with Greenplum but also with other PostgreSQL-based database systems, so they can be used in many scenarios.

  • Simple to use: the command-line tools are easy to learn, which makes this approach well suited to small migrations.

Disadvantages:

  • Slow: pg_dump and pg_restore do not take advantage of Greenplum's parallel architecture, so backup and restore can be slower than the other methods, especially for large data sets.

  • No incremental backups: only full dumps and restores are supported, which can make backup and restore very time-consuming for large databases.

  • More downtime: a full dump and restore usually requires a long outage, so this approach is not suitable for production environments that are sensitive to downtime.

Environment Setup

Use Docker to quickly stand up GP6, GP7, and S3 storage.

-- GP6
docker rm -f gpdb6
docker run -itd --name gpdb6 -h gpdb6270 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true lhrbest/greenplum:6.27.0_v2 \
  /usr/sbin/init

docker exec -it gpdb6 bash
su - gpadmin
gpstart -a
gpcc start

gpcc status
gpstate

[gpadmin@gpdb6270 soft]$ psql -c "select version()"
                                   version
------------------------------------------------------------------------------------------------------
 PostgreSQL 9.4.26 (Greenplum Database 6.27.0 build commit:482d4e839d519dcfde9cddf03807787d9d91d015) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Apr  4 2024 11:00:20
(1 row)

-- Generate data on GP6
createdb sbtest;
sysbench /usr/share/sysbench/oltp_common.lua --db-driver=pgsql --pgsql-host=127.0.0.1 --pgsql-port=5432 \
--pgsql-user=gpadmin --pgsql-password=lhr --pgsql-db=sbtest  \
--time=300 --table-size=10000 --tables=10 --threads=16  \
--events=999999999   prepare

-- GP7
docker rm -f gpdb7
docker run -itd --name gpdb7 -h gpdb7 \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true lhrbest/greenplum:7.3.0 \
  /usr/sbin/init

docker exec -it gpdb7 bash
su - gpadmin
gpstart -a
gpcc start

gpcc status
gpstate

[gpadmin@gpdb7 soft]$ psql -c "select version()"
                                   version
------------------------------------------------------------------------------------------------------
 PostgreSQL 12.12 (Greenplum Database 7.3.0 build commit:ebf6c7db0034f8928f20252a306c4ecdb9324d81) on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-22), 64-bit compiled on Aug  1 2024 08:31:42 Bhuvnesh C.
(1 row)

-- S3 storage (MinIO)
docker rm -f lhrminio
docker run -d --name lhrminio -h lhrminio \
  -p 9000:9000 -p 9001:9001 -p 922:22 -p 39389:3389 \
  -p 29990:9090 -p 29300:3000 \
  -v /sys/fs/cgroup:/sys/fs/cgroup --restart=always \
  --privileged=true lhrbest/minio:1.0 \
  /usr/sbin/init

docker exec -it lhrminio bash

-- Username and password: minioadmin/minioadmin
-- In this image, port 9000 is the MinIO WebUI and port 9001 is the API
-- The default bucket is gpbk
-- The default access key (AK) and secret key (SK) are both 12345678
http://192.16.7.162:9000/login
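
Before running backups against MinIO, it can be worth confirming that the bucket is reachable from the Greenplum coordinator. A minimal check using the AWS CLI (assuming it is installed; the endpoint, bucket, and credentials match the MinIO settings above):

export AWS_ACCESS_KEY_ID=12345678
export AWS_SECRET_ACCESS_KEY=12345678

# Create the gpbk bucket if it does not exist yet, then list it via the MinIO API endpoint (port 9001 here)
aws --endpoint-url http://192.16.7.162:9001 s3 mb s3://gpbk
aws --endpoint-url http://192.16.7.162:9001 s3 ls s3://gpbk/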

Install the gpbackup software:

-- GPDB 6
[gpadmin@gpdb6270 soft]$ gppkg -i greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg
20240830:14:24:03:001173 gppkg:gpdb6270:gpadmin-[INFO]:-Starting gppkg with args: -i greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg
20240830:14:24:03:001173 gppkg:gpdb6270:gpadmin-[INFO]:-Installing package greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg
20240830:14:24:03:001173 gppkg:gpdb6270:gpadmin-[INFO]:-Validating rpm installation cmdStr='rpm --test -i /usr/local/greenplum-db-6.27.0/.tmp/gpbackup_tools_RHEL7-1.30.6-1.x86_64.rpm --dbpath /usr/local/greenplum-db-6.27.0/share/packages/database --prefix /usr/local/greenplum-db-6.27.0'
20240830:14:24:06:001173 gppkg:gpdb6270:gpadmin-[INFO]:-Installing greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg locally
20240830:14:24:07:001173 gppkg:gpdb6270:gpadmin-[INFO]:-Validating rpm installation cmdStr='rpm --test -i /usr/local/greenplum-db-6.27.0/.tmp/gpbackup_tools_RHEL7-1.30.6-1.x86_64.rpm --dbpath /usr/local/greenplum-db-6.27.0/share/packages/database --prefix /usr/local/greenplum-db-6.27.0'
20240830:14:24:07:001173 gppkg:gpdb6270:gpadmin-[INFO]:-greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg is already installed.
20240830:14:24:07:001173 gppkg:gpdb6270:gpadmin-[INFO]:-gpbackup 1.30.6 successfully installed
20240830:14:24:07:001173 gppkg:gpdb6270:gpadmin-[INFO]:-greenplum_backup_restore-1.30.6-gp6-rhel7-x86_64.gppkg successfully installed.
[gpadmin@gpdb6270 soft]$

-- GPDB 7
[gpadmin@gpdb7 soft]$ gppkg install greenplum_backup_restore-1.30.6-gp7-rhel8-x86_64.gppkg
Detecting network topology:    [==============================================================] [OK]
2 coordinators and 4 segment instances are detected on 1 unique host.
Distributing package:          [==============================================================] [OK]
Decoding package:              [==============================================================] [OK]
Verifying package installation:[==============================================================] [OK]
Verifying package integrity:   [==============================================================] [OK]
You are going to install the following packages:
        Install 'greenplum_backup_restore@1.30.6-gp7'
Continue? [y/N] y
Allocating disk space:         [==============================================================] [OK]
Install 'greenplum_backup_rest:[==============================================================] [OK]
Result:
        greenplum_backup_restore has been successfully installed
Clean Up:                      [==============================================================] [OK]
[gpadmin@gpdb7 soft]$
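
A quick sanity check that the tools are installed and on the PATH on both clusters:

gpbackup --version
gprestore --version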

Method 1: Back up on GP6 with gpbackup, restore on GP7 with gprestore

Take the backup on GP6, store the backup files in S3, and restore them on GP7.

cat > /home/gpadmin/s3_config.yaml <<"EOF"
executablepath: $GPHOME/bin/gpbackup_s3_plugin
options:
  region:
  endpoint: http://192.16.7.162:9001
  aws_access_key_id: 12345678
  aws_secret_access_key: 12345678
  bucket: gpbk
  folder: sbtest
EOF


gpbackup --dbname sbtest --jobs=16   --plugin-config /home/gpadmin/s3_config.yaml

gprestore --plugin-config /home/gpadmin/s3_config.yaml --timestamp=20240903143134 \
--jobs 8 --create-db  --redirect-db sbtest4 --on-error-continue
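
The --timestamp value passed to gprestore is the "Backup Timestamp" printed by gpbackup (it appears in the log below). If the backup/restore package on the source cluster includes the gpbackup_manager utility (an assumption worth verifying for your version), previous backups can also be listed with:

gpbackup_manager list-backups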

Logs:

-- GPDB6 backup
[gpadmin@gpdb6270 bk]$ gpbackup --dbname sbtest --jobs=16   --plugin-config /home/gpadmin/s3_config.yaml
20240903:14:31:34 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-gpbackup version = 1.30.6
20240903:14:31:34 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Greenplum Database Version = 6.27.0 build commit:482d4e839d519dcfde9cddf03807787d9d91d015
20240903:14:31:34 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Starting backup of database sbtest
20240903:14:31:34 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Reading Plugin Config /home/gpadmin/s3_config.yaml
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Backup Timestamp = 20240903143134
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Backup Database = sbtest
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Gathering table state information
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Acquiring ACCESS SHARE locks on tables
Locks acquired:  11 / 11 [==============================================================] 100.000s
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Gathering additional table metadata
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Getting partition definitions
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Getting storage information
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Getting child partitions with altered schema
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Metadata will be written to /opt/greenplum/data/master/gpseg-1/backups/20240903/20240903143134/gpbackup_20240903143134_metadata.sql
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Writing global database metadata
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Global database metadata backup complete
20240903:14:31:35 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Writing pre-data metadata
20240903:14:31:36 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Pre-data metadata metadata backup complete
20240903:14:31:36 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Writing post-data metadata
20240903:14:31:36 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Post-data metadata backup complete
20240903:14:31:36 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Writing data to file
Tables backed up:  11 / 11 [============================================================] 100.000s
20240903:14:31:36 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Data backup complete
20240903:14:31:37 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Found neither /usr/local/greenplum-db-6.27.0/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20240903:14:31:37 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Email containing gpbackup report /opt/greenplum/data/master/gpseg-1/backups/20240903/20240903143134/gpbackup_20240903143134_report will not be sent
20240903:14:31:38 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Beginning cleanup
20240903:14:31:38 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Cleanup complete
20240903:14:31:38 gpbackup:gpadmin:gpdb6270:483967-[INFO]:-Backup completed successfully


-- GPDB7 restore
[gpadmin@gpdb7 ~]$ gprestore --plugin-config /home/gpadmin/s3_config.yaml --timestamp=20240903143134 \
--jobs 8 --create-db  --redirect-db sbtest4 --on-error-continue
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-Restore Key = 20240903143134
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-Reading Plugin Config /home/gpadmin/s3_config.yaml
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-plugin config path: /tmp/20240903143153_s3_config.yaml
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[WARNING]:-cannot recover plugin version from history using timestamp 20240903143134, so using current plugin version. This is fine unless there is a backwards compatibility consideration within the plugin
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-gpbackup version = 1.30.6
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-gprestore version = 1.30.6
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-Greenplum Database Version = 7.3.0 build commit:ebf6c7db0034f8928f20252a306c4ecdb9324d81
20240903:14:31:53 gprestore:gpadmin:gpdb7:004298-[INFO]:-Creating database
20240903:14:31:54 gprestore:gpadmin:gpdb7:004298-[INFO]:-Database creation complete for: sbtest4
20240903:14:31:54 gprestore:gpadmin:gpdb7:004298-[INFO]:-Restoring pre-data metadata
Pre-data objects restored:  56 / 56 [===================================================] 100.000s
20240903:14:31:54 gprestore:gpadmin:gpdb7:004298-[INFO]:-Pre-data metadata restore complete
Table data loads restored:  11 / 11 [===================================================] 100.001s
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Data restore complete
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Restoring post-data metadata
Post-data objects restored:  20 / 20 [==================================================] 100.000s
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Post-data metadata restore complete
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Found neither /usr/local/greenplum-db-7.3.0/bin/gp_email_contacts.yaml nor /home/gpadmin/gp_email_contacts.yaml
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Email containing gprestore report /opt/greenplum/data/master/gpseg-1/backups/20240903/20240903143134/gprestore_20240903143134_20240903143153_report will not be sent
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Beginning cleanup
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Cleanup complete
20240903:14:31:56 gprestore:gpadmin:gpdb7:004298-[INFO]:-Restore completed successfully
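
After the restore, a quick row-count spot check on both sides gives some confidence that the data arrived intact (sbtest1 is just one of the sysbench tables; repeat or script this for the others):

# On GP6 (source database)
psql -d sbtest -c "SELECT count(*) FROM public.sbtest1;"

# On GP7 (restored with --redirect-db sbtest4)
psql -d sbtest4 -c "SELECT count(*) FROM public.sbtest1;"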

Method 2: Migration with gpcopy

export PGSSLMODE=disable
gpcopy --source-host 172.72.6.40  --source-port 5432 --source-user gpadmin \
--dest-host 172.72.6.50 --dest-port 5432 --dest-user gpadmin \
--dbname lhrgpdb --dest-dbname  lhrgpdb \
--drop --jobs 4

Logs:

[gpadmin@gpdb7 ~]$ export PGSSLMODE=disable
[gpadmin@gpdb7 ~]$ gpcopy --source-host 192.92.0.58  --source-port 5432 --source-user gpadmin \
--dest-host 192.92.0.59 --dest-port 5432 --dest-user gpadmin \
--dbname sbtest --dest-dbname  sbtest5 \
--drop --jobs 4
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-"--drop" will work as "--truncate" on partition tables
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Starting copy 2.6.0...
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Copy appname: gpcopy_202409031458_zapLBx2KhNmYHtvXNft3jc
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-'gpcopy' --dbname 'sbtest' --dest-dbname 'sbtest5' --dest-host '192.92.0.59' --dest-port '5432' --dest-user 'gpadmin' --drop --jobs '4' --source-host '192.92.0.58' --source-port '5432' --source-user 'gpadmin'
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Initializing gpcopy
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Source cluster version: 6.27.0 build commit:482d4e839d519dcfde9cddf03807787d9d91d015
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Destination cluster version: 7.3.0 build commit:ebf6c7db0034f8928f20252a306c4ecdb9324d81
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-pg_dump (PostgreSQL) 12.12
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[INFO]:-pg_dumpall (PostgreSQL) 12.12
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[WARNING]:-pg_dump version is higher than source cluster, which might have compatibility issues
20240903:14:58:01 gpcopy:gpadmin:gpdb7:006850-[WARNING]:-pg_dumpall version is higher than source cluster, which might have compatibility issues
20240903:14:58:02 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Start copying database "sbtest"
20240903:14:58:02 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] Start copying table "sbtest"."public"."sbtest3" => "sbtest5"."public"."sbtest3"
20240903:14:58:02 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 0] Start copying table "sbtest"."public"."sbtest6" => "sbtest5"."public"."sbtest6"
20240903:14:58:02 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] Start copying table "sbtest"."public"."sbtest8" => "sbtest5"."public"."sbtest8"
20240903:14:58:02 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] Start copying table "sbtest"."public"."sbtest5" => "sbtest5"."public"."sbtest5"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] Start copying table "sbtest"."public"."sbtest10" => "sbtest5"."public"."sbtest10"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] [Progress: (0/1) DBs, (1/11) tables done] Finished copying table "sbtest"."public"."sbtest8" => "sbtest5"."public"."sbtest8"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] Start copying table "sbtest"."public"."t1" => "sbtest5"."public"."t1"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] [Progress: (0/1) DBs, (2/11) tables done] Finished copying table "sbtest"."public"."sbtest3" => "sbtest5"."public"."sbtest3"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] Start copying table "sbtest"."public"."sbtest1" => "sbtest5"."public"."sbtest1"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] [Progress: (0/1) DBs, (3/11) tables done] Finished copying table "sbtest"."public"."sbtest5" => "sbtest5"."public"."sbtest5"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 0] Start copying table "sbtest"."public"."sbtest4" => "sbtest5"."public"."sbtest4"
20240903:14:58:03 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 0] [Progress: (0/1) DBs, (4/11) tables done] Finished copying table "sbtest"."public"."sbtest6" => "sbtest5"."public"."sbtest6"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] Start copying table "sbtest"."public"."sbtest9" => "sbtest5"."public"."sbtest9"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] [Progress: (0/1) DBs, (5/11) tables done] Finished copying table "sbtest"."public"."t1" => "sbtest5"."public"."t1"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] Start copying table "sbtest"."public"."sbtest2" => "sbtest5"."public"."sbtest2"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] [Progress: (0/1) DBs, (6/11) tables done] Finished copying table "sbtest"."public"."sbtest10" => "sbtest5"."public"."sbtest10"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] Start copying table "sbtest"."public"."sbtest7" => "sbtest5"."public"."sbtest7"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] [Progress: (0/1) DBs, (7/11) tables done] Finished copying table "sbtest"."public"."sbtest1" => "sbtest5"."public"."sbtest1"
20240903:14:58:04 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 0] [Progress: (0/1) DBs, (8/11) tables done] Finished copying table "sbtest"."public"."sbtest4" => "sbtest5"."public"."sbtest4"
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 1] [Progress: (0/1) DBs, (9/11) tables done] Finished copying table "sbtest"."public"."sbtest9" => "sbtest5"."public"."sbtest9"
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 2] [Progress: (0/1) DBs, (10/11) tables done] Finished copying table "sbtest"."public"."sbtest7" => "sbtest5"."public"."sbtest7"
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[Worker 3] [Progress: (0/1) DBs, (11/11) tables done] Finished copying table "sbtest"."public"."sbtest2" => "sbtest5"."public"."sbtest2"
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-[sbtest] Finished copying database "Progress: (1/1) DBs, (11/11) tables done"
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Copy completed successfully
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-------------------------------------------------
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Total elapsed time 3.90043957s
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Total transferred data 203.4MB, transfer rate 194.6GB/h
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-Copied 1 databases
20240903:14:58:05 gpcopy:gpadmin:gpdb7:006850-[INFO]:-  Database sbtest: successfully copied 11 tables, skipped 0 tables, failed 0 tables
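
Whichever migration path is used, planner statistics on the new GP7 cluster are not necessarily current after the data lands, so it is worth refreshing them before opening the system to users (newer gpcopy releases also offer a --validate option for count- or checksum-based comparison of source and target, which should be confirmed against the gpcopy version in use):

# Refresh optimizer statistics for the migrated database on GP7
analyzedb -d sbtest5 -a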

Method 3: pg_dump and pg_restore

1. This method is not demonstrated in detail here.

2. It is not recommended for large data volumes.

-- Local parallel export/import (recommended; archive without compression: tar -cvf sbtest.tar /bk/ --remove-files ; extract: tar -xvf bb.tar)
pg_dump  --dbname=HDW  --format=directory --file=/data/greenplum/sbtest  -j 32


-- Remote export
pg_dump -h 192.168.8.8 -p 5432 -U gpadmin --dbname=sbtest  --format=directory --file=/data/greenplum/sbtest  -j 32


-- Restore (if the target database already exists, change the mdw port first, then drop the database)
create database "sbtest";
pg_restore  --dbname=HDW  /data/greenplum/sbtest   -j 32


-- Create users
SELECT 'CREATE USER ' || usename || ' WITH PASSWORD ''' || passwd || ''' CREATEDB;'
FROM pg_shadow
WHERE usename not in ('gpmon','gpadmin','gpcc_basic','gpcc_operator','gpcc_operator_basic')
ORDER BY usename
;
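
The query above only generates CREATE USER statements; one way to apply them is to spool the output to a file on GP6 and replay it on GP7 (a sketch; the file paths are illustrative):

# On GP6: write the generated statements to a file (-A -t -X gives clean, tuples-only output)
psql -AtX -d postgres -o /home/gpadmin/create_users.sql <<'SQL'
SELECT 'CREATE USER ' || usename || ' WITH PASSWORD ''' || passwd || ''' CREATEDB;'
FROM pg_shadow
WHERE usename not in ('gpmon','gpadmin','gpcc_basic','gpcc_operator','gpcc_operator_basic')
ORDER BY usename;
SQL

# Copy the file to the GP7 coordinator and replay it there
psql -d postgres -f /home/gpadmin/create_users.sql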

Summary

  • If you need fast backup and restore and both Greenplum clusters are already in place, choose gpbackup and gprestore.

  • If the goal is to migrate with the shortest possible downtime and sufficient hardware is available, consider gpcopy.

  • If the data volume is small or the environment is constrained (for example, a non-Greenplum target database), pg_dump and pg_restore can be used.



