How do you customize the primary/mirror (p/m) mapping when initializing Greenplum to achieve data-center-level disaster recovery?


Introduction

Question 1: When installing GP, how can the mapping between primary (p) and mirror (m) segments be customized so that the cluster survives the loss of an entire data center?

Question 2: If the network between the two data centers is cut, will the cluster split-brain? And if so, which side is authoritative?

Architecture

Data center 1 hosts mdw (master), sdw1, and sdw2; data center 2 hosts smdw (standby master), sdw3, and sdw4. Every primary segment in one data center keeps its mirror in the other data center, so the cluster can survive the loss of either data center.

Implementation

This is done at cluster initialization time with the -O and -I options of gpinitsystem: first run gpinitsystem with -O to write the generated cluster configuration to a file without initializing anything, edit the primary/mirror layout in that file, and then run gpinitsystem with -I to initialize the cluster from the edited file.

docker rm -f lhrgptest
docker run -itd --name lhrgptest -h lhrgptest \
  -v /sys/fs/cgroup:/sys/fs/cgroup \
  --privileged=true lhrbest/greenplum:gpdball_6.25.3 \
  /usr/sbin/init

docker exec -it lhrgptest bash
docker start mdw smdw sdw1 sdw2 sdw3 sdw4

docker exec -it mdw su - gpadmin

gpstart -a
gpcc start

gpcc status
gpstate

# Remove the existing cluster so it can be re-initialized
gpdeletesystem -d /opt/greenplum/data/master/gpseg-1 -f

# -O: only write the generated cluster configuration to a file, do not initialize
gpinitsystem -c /home/gpadmin/conf/initgp_config -s smdw -P 5432 -S /opt/greenplum/data/master/gpseg-1 -O /home/gpadmin/conf/cluster_init.config

cat /home/gpadmin/conf/cluster_init.config

# -I: initialize the cluster from the edited configuration file
gpinitsystem -I /home/gpadmin/conf/cluster_init_new.config -s smdw -P 5432 -S /opt/greenplum/data/master/gpseg-1

# Each entry: hostname~address~port~data_directory/seg_prefix<segment_id>~dbid~content_id
# dbid starts at 2 and increments by 1; the seg_prefix<segment_id> suffix matches content_id,
# and a primary and its mirror (p and m) keep the same content_id and gpseg directory number

cat > /home/gpadmin/conf/cluster_init_new.config <<"EOF"
ARRAY_NAME="lhrgp"
TRUSTED_SHELL=/bin/ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UTF-8
SEG_PREFIX=gpseg
HEAP_CHECKSUM=on
HBA_HOSTNAMES=0
QD_PRIMARY_ARRAY=mdw~mdw~5432~/opt/greenplum/data/master/gpseg-1~1~-1
declare -a PRIMARY_ARRAY=(
sdw1~sdw1~6000~/opt/greenplum/data/primary/gpseg0~2~0
sdw1~sdw1~6001~/opt/greenplum/data/primary/gpseg1~3~1
sdw1~sdw1~6002~/opt/greenplum/data/primary/gpseg2~4~2
sdw1~sdw1~6003~/opt/greenplum/data/primary/gpseg3~5~3
sdw2~sdw2~6000~/opt/greenplum/data/primary/gpseg4~6~4
sdw2~sdw2~6001~/opt/greenplum/data/primary/gpseg5~7~5
sdw2~sdw2~6002~/opt/greenplum/data/primary/gpseg6~8~6
sdw2~sdw2~6003~/opt/greenplum/data/primary/gpseg7~9~7
sdw3~sdw3~6000~/opt/greenplum/data/primary/gpseg12~10~12
sdw3~sdw3~6001~/opt/greenplum/data/primary/gpseg13~11~13
sdw3~sdw3~6002~/opt/greenplum/data/primary/gpseg14~12~14
sdw3~sdw3~6003~/opt/greenplum/data/primary/gpseg15~13~15
sdw4~sdw4~6000~/opt/greenplum/data/primary/gpseg8~14~8
sdw4~sdw4~6001~/opt/greenplum/data/primary/gpseg9~15~9
sdw4~sdw4~6002~/opt/greenplum/data/primary/gpseg10~16~10
sdw4~sdw4~6003~/opt/greenplum/data/primary/gpseg11~17~11
)
declare -a MIRROR_ARRAY=(
sdw1~sdw1~7000~/opt/greenplum/data/mirror/gpseg12~18~12
sdw1~sdw1~7001~/opt/greenplum/data/mirror/gpseg13~19~13
sdw1~sdw1~7002~/opt/greenplum/data/mirror/gpseg14~20~14
sdw1~sdw1~7003~/opt/greenplum/data/mirror/gpseg15~21~15
sdw2~sdw2~7000~/opt/greenplum/data/mirror/gpseg8~22~8
sdw2~sdw2~7001~/opt/greenplum/data/mirror/gpseg9~23~9
sdw2~sdw2~7002~/opt/greenplum/data/mirror/gpseg10~24~10
sdw2~sdw2~7003~/opt/greenplum/data/mirror/gpseg11~25~11
sdw3~sdw3~7000~/opt/greenplum/data/mirror/gpseg0~26~0
sdw3~sdw3~7001~/opt/greenplum/data/mirror/gpseg1~27~1
sdw3~sdw3~7002~/opt/greenplum/data/mirror/gpseg2~28~2
sdw3~sdw3~7003~/opt/greenplum/data/mirror/gpseg3~29~3
sdw4~sdw4~7000~/opt/greenplum/data/mirror/gpseg4~30~4
sdw4~sdw4~7001~/opt/greenplum/data/mirror/gpseg5~31~5
sdw4~sdw4~7002~/opt/greenplum/data/mirror/gpseg6~32~6
sdw4~sdw4~7003~/opt/greenplum/data/mirror/gpseg7~33~7
)
EOF

select d1.content,d1.hostname p_hostname,d2.hostname m_hostname,d1.datadir p_datadir,d2.datadir m_datadir from gp_segment_configuration d1 join gp_segment_configuration d2 on d1.content=d2.content and d2.role='m' and d1.role='p' order by content;

postgres=# select d1.content,d1.hostname p_hostname,d2.hostname m_hostname,d1.datadir p_datadir,d2.datadir m_datadir from gp_segment_configuration d1 join gp_segment_configuration d2 on d1.content=d2.content and d2.role='m' and d1.role='p' order by content;
 content | p_hostname | m_hostname |              p_datadir              |             m_datadir              
---------+------------+------------+-------------------------------------+-------------------------------------
      -1 | mdw        | smdw       | /opt/greenplum/data/master/gpseg-1  | /opt/greenplum/data/master/gpseg-1
       0 | sdw1       | sdw3       | /opt/greenplum/data/primary/gpseg0  | /opt/greenplum/data/mirror/gpseg0
       1 | sdw1       | sdw3       | /opt/greenplum/data/primary/gpseg1  | /opt/greenplum/data/mirror/gpseg1
       2 | sdw1       | sdw3       | /opt/greenplum/data/primary/gpseg2  | /opt/greenplum/data/mirror/gpseg2
       3 | sdw1       | sdw3       | /opt/greenplum/data/primary/gpseg3  | /opt/greenplum/data/mirror/gpseg3
       4 | sdw2       | sdw4       | /opt/greenplum/data/primary/gpseg4  | /opt/greenplum/data/mirror/gpseg4
       5 | sdw2       | sdw4       | /opt/greenplum/data/primary/gpseg5  | /opt/greenplum/data/mirror/gpseg5
       6 | sdw2       | sdw4       | /opt/greenplum/data/primary/gpseg6  | /opt/greenplum/data/mirror/gpseg6
       7 | sdw2       | sdw4       | /opt/greenplum/data/primary/gpseg7  | /opt/greenplum/data/mirror/gpseg7
       8 | sdw4       | sdw2       | /opt/greenplum/data/primary/gpseg8  | /opt/greenplum/data/mirror/gpseg8
       9 | sdw4       | sdw2       | /opt/greenplum/data/primary/gpseg9  | /opt/greenplum/data/mirror/gpseg9
      10 | sdw4       | sdw2       | /opt/greenplum/data/primary/gpseg10 | /opt/greenplum/data/mirror/gpseg10
      11 | sdw4       | sdw2       | /opt/greenplum/data/primary/gpseg11 | /opt/greenplum/data/mirror/gpseg11
      12 | sdw3       | sdw1       | /opt/greenplum/data/primary/gpseg12 | /opt/greenplum/data/mirror/gpseg12
      13 | sdw3       | sdw1       | /opt/greenplum/data/primary/gpseg13 | /opt/greenplum/data/mirror/gpseg13
      14 | sdw3       | sdw1       | /opt/greenplum/data/primary/gpseg14 | /opt/greenplum/data/mirror/gpseg14
      15 | sdw3       | sdw1       | /opt/greenplum/data/primary/gpseg15 | /opt/greenplum/data/mirror/gpseg15
(17 rows)
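
A quick way to double-check the cross-data-center placement (a minimal sketch, assuming sdw1 and sdw2 make up data center 1 and sdw3 and sdw4 make up data center 2): the query below should return zero rows, meaning no primary shares a data center with its mirror.

# Run as gpadmin on the master; an empty result means every mirror lives in the other data center
psql -d postgres -c "
select d1.content, d1.hostname as p_hostname, d2.hostname as m_hostname
from   gp_segment_configuration d1
join   gp_segment_configuration d2
  on   d1.content = d2.content and d1.role = 'p' and d2.role = 'm'
where (d1.hostname in ('sdw1','sdw2') and d2.hostname in ('sdw1','sdw2'))
   or (d1.hostname in ('sdw3','sdw4') and d2.hostname in ('sdw3','sdw4'));"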

Test: Data Center 2 Failure

Suppose data center 2 goes down: we stop smdw, sdw3, and sdw4 and observe the cluster:

-- Check cluster status
[gpadmin@mdw ~]$ gpstate -e
20241106:09:12:20:102955 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -e
20241106:09:12:20:102955 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source'
20241106:09:12:20:102955 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Oct  4 2023 23:27:58'
20241106:09:12:20:102955 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20241106:09:12:20:102955 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
.
20241106:09:12:22:102955 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20241106:09:12:22:102955 gpstate:mdw:gpadmin-[INFO]:-Segment Mirroring Status Report
20241106:09:12:22:102955 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20241106:09:12:22:102955 gpstate:mdw:gpadmin-[INFO]:-All segments are running normally

-- Shut down data center 2
docker stop smdw sdw3 sdw4


-- Check status again; database operations are not affected
[gpadmin@mdw ~]$ gpstate -e
20241106:09:13:03:103062 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -e
20241106:09:13:03:103062 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source'
20241106:09:13:03:103062 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Oct  4 2023 23:27:58'
20241106:09:13:03:103062 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20241106:09:13:03:103062 gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments...
..................................
20241106:09:13:37:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:37:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:38:103062 gpstate:mdw:gpadmin-[WARNING]:-pg_stat_replication shows no standby connections
20241106:09:13:41:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 14 (sdw4:6000)
20241106:09:13:44:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 15 (sdw4:6001)
20241106:09:13:47:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 16 (sdw4:6002)
20241106:09:13:50:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 17 (sdw4:6003)
20241106:09:13:53:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 10 (sdw3:6000)
20241106:09:13:56:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 11 (sdw3:6001)
20241106:09:13:59:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 12 (sdw3:6002)
20241106:09:14:02:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 13 (sdw3:6003)
20241106:09:14:02:103062 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20241106:09:14:02:103062 gpstate:mdw:gpadmin-[INFO]:-Segment Mirroring Status Report
20241106:09:14:05:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 14 (sdw4:6000)
20241106:09:14:08:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 15 (sdw4:6001)
20241106:09:14:11:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 16 (sdw4:6002)
20241106:09:14:14:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 17 (sdw4:6003)
20241106:09:14:17:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 10 (sdw3:6000)
20241106:09:14:20:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 11 (sdw3:6001)
20241106:09:14:23:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 12 (sdw3:6002)
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[WARNING]:-could not query segment 13 (sdw3:6003)
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-Unsynchronized Segment Pairs
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   Current Primary   Port   WAL sync remaining bytes   Mirror   Port
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw1              6000   Unknown                    sdw3     7000
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw1              6001   Unknown                    sdw3     7001
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw1              6002   Unknown                    sdw3     7002
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw1              6003   Unknown                    sdw3     7003
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw2              6000   Unknown                    sdw4     7000
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw2              6001   Unknown                    sdw4     7001
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw2              6002   Unknown                    sdw4     7002
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw2              6003   Unknown                    sdw4     7003
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-----------------------------------------------------
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-Downed Segments (may include segments where status could not be retrieved)
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   Segment   Port   Config status   Status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      7000   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      7001   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      7002   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      7003   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      7000   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      7001   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      7002   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      7003   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      6000   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      6001   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      6002   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw4      6003   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      6000   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      6001   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      6002   Up              Unknown -- unable to load segment status
20241106:09:14:26:103062 gpstate:mdw:gpadmin-[INFO]:-   sdw3      6003   Up              Unknown -- unable to load segment status
[gpadmin@mdw ~]$ psql
psql (9.4.26)
Type "help" for help.

postgres=# select * from gp_segment_configuration where status='d';
 dbid | content | role | preferred_role | mode | status | port | hostname | address |               datadir               
------+---------+------+----------------+------+--------+------+----------+---------+-------------------------------------
   15 |       9 | m    | p              | n    | d      | 6001 | sdw4     | sdw4    | /opt/greenplum/data/primary/gpseg9
   14 |       8 | m    | p              | n    | d      | 6000 | sdw4     | sdw4    | /opt/greenplum/data/primary/gpseg8
   10 |      12 | m    | p              | n    | d      | 6000 | sdw3     | sdw3    | /opt/greenplum/data/primary/gpseg12
   11 |      13 | m    | p              | n    | d      | 6001 | sdw3     | sdw3    | /opt/greenplum/data/primary/gpseg13
   16 |      10 | m    | p              | n    | d      | 6002 | sdw4     | sdw4    | /opt/greenplum/data/primary/gpseg10
   17 |      11 | m    | p              | n    | d      | 6003 | sdw4     | sdw4    | /opt/greenplum/data/primary/gpseg11
   12 |      14 | m    | p              | n    | d      | 6002 | sdw3     | sdw3    | /opt/greenplum/data/primary/gpseg14
   13 |      15 | m    | p              | n    | d      | 6003 | sdw3     | sdw3    | /opt/greenplum/data/primary/gpseg15
   19 |       1 | m    | m              | n    | d      | 7001 | sdw3     | sdw3    | /opt/greenplum/data/mirror/gpseg1
   22 |       4 | m    | m              | n    | d      | 7000 | sdw4     | sdw4    | /opt/greenplum/data/mirror/gpseg4
   18 |       0 | m    | m              | n    | d      | 7000 | sdw3     | sdw3    | /opt/greenplum/data/mirror/gpseg0
   20 |       2 | m    | m              | n    | d      | 7002 | sdw3     | sdw3    | /opt/greenplum/data/mirror/gpseg2
   21 |       3 | m    | m              | n    | d      | 7003 | sdw3     | sdw3    | /opt/greenplum/data/mirror/gpseg3
   23 |       5 | m    | m              | n    | d      | 7001 | sdw4     | sdw4    | /opt/greenplum/data/mirror/gpseg5
   24 |       6 | m    | m              | n    | d      | 7002 | sdw4     | sdw4    | /opt/greenplum/data/mirror/gpseg6
   25 |       7 | m    | m              | n    | d      | 7003 | sdw4     | sdw4    | /opt/greenplum/data/mirror/gpseg7
(16 rows)

postgres=# create database db2;
CREATE DATABASE
postgres=# 


Afterwards, recover the failed segments and rebalance them back to their preferred roles.
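
A minimal sketch of what this step typically looks like, assuming the data center 2 containers simply come back as they were:

# Bring data center 2 back online
docker start smdw sdw3 sdw4

# On mdw, as gpadmin: incrementally recover the downed mirrors
gprecoverseg -a

# When gpstate -e reports all pairs synchronized, move segments back to their preferred roles
gpstate -e
gprecoverseg -r

# The standby master on smdw also needs to be brought back; if it does not resume on its own,
# it can be removed and re-added with gpinitstandby (see the gpinitstandby example later in this article)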

Test: Data Center 1 Failure

Now suppose data center 1 goes down: we stop mdw, sdw1, and sdw2 and test:

[root@lhrgptest /]# docker stop mdw sdw1 sdw2


-- Promote smdw to be the acting master
[gpadmin@smdw ~]$ gpactivatestandby -d /opt/greenplum/data/master/gpseg-1
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[DEBUG]:-Running Command: echo 'START_CMD_OUTPUT';ps -ef | grep postgres | grep -v grep | awk '{print $2}' | grep \`cat /opt/greenplum/data/master/gpseg-1/postmaster.pid | head -1\` || echo -1
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:------------------------------------------------------
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Standby data directory    = /opt/greenplum/data/master/gpseg-1
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Standby port              = 5432
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Standby running           = no
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Force standby activation  = yes
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[INFO]:------------------------------------------------------
20241106:10:03:38:002828 gpactivatestandby:smdw:gpadmin-[DEBUG]:-Running Command: cat /tmp/.s.PGSQL.5432.lock
Do you want to continue with standby master activation? Yy|Nn (default=N):
> y
20241106:10:03:47:002828 gpactivatestandby:smdw:gpadmin-[DEBUG]:-Running Command: ps -ef | grep postgres | grep -v grep | awk '{print $2}' | grep `cat /opt/greenplum/data/master/gpseg-1/postmaster.pid | head -1` || echo -1
20241106:10:03:47:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Starting standby master database in utility mode...
20241106:10:03:47:002828 gpactivatestandby:smdw:gpadmin-[DEBUG]:-Running Command: $GPHOME/bin/gpstart -a -m -v -d /opt/greenplum/data/master/gpseg-1
20241106:10:03:49:002828 gpactivatestandby:smdw:gpadmin-[INFO]:-Reading current configuration...
20241106:10:03:49:002828 gpactivatestandby:smdw:gpadmin-[DEBUG]:-Connecting to dbname='postgres'
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-The activation of the standby master has completed successfully.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-mdw is now the new primary master.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-You will need to update your user access mechanism to reflect
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-the change of master hostname.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-Do not re-start the failed master while the fail-over master is
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-operational, this could result in database corruption!
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-MASTER_DATA_DIRECTORY is now /opt/greenplum/data/master/gpseg-1 if
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-this has changed as a result of the standby master activation, remember
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-to change this in any startup scripts etc, that may be configured
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-to set this value.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-MASTER_PORT is now 5432, if this has changed, you
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-may need to make additional configuration changes to allow access
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-to the Greenplum instance.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-Refer to the Administrator Guide for instructions on how to re-activate
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-the master to its previous state once it becomes available.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-Query planner statistics must be updated on all databases
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-following standby master activation.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:-When convenient, run ANALYZE against all user databases.
20241106:10:03:57:001458 gpactivatestandby:mdw:gpadmin-[INFO]:------------------------------------------------------

.......
[gpadmin@smdw ~]$ psql
psql (9.4.26)
Type "help" for help.

postgres=# create database db3;
CREATE DATABASE
postgres=# \l+
                                                                 List of databases
   Name    |  Owner  | Encoding |  Collate   |   Ctype    |  Access privileges  |  Size  | Tablespace |                Description                 
-----------+---------+----------+------------+------------+---------------------+--------+------------+--------------------------------------------
 db3       | gpadmin | UTF8     | en_US.utf8 | en_US.utf8 |                     | 278 MB | pg_default | 
 postgres  | gpadmin | UTF8     | en_US.utf8 | en_US.utf8 |                     | 279 MB | pg_default | default administrative connection database
 template0 | gpadmin | UTF8     | en_US.utf8 | en_US.utf8 | =c/gpadmin         +| 276 MB | pg_default | unmodifiable empty database
           |         |          |            |            | gpadmin=CTc/gpadmin |        |            | 
 template1 | gpadmin | UTF8     | en_US.utf8 | en_US.utf8 | =c/gpadmin         +| 278 MB | pg_default | default template for new databases
           |         |          |            |            | gpadmin=CTc/gpadmin |        |            | 
(4 rows)

postgres=# 

Afterwards, recover and rebalance the segments, switch the master role back to mdw, and finally re-add smdw as the standby master.
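
Only the last of these steps, re-adding the standby master, appears in the log below; the rest can be sketched roughly as follows, following the standard Greenplum switch-back procedure and assuming the data center 1 containers come back intact:

# Bring data center 1 back online (do not restart the old master processes)
docker start mdw sdw1 sdw2

# On smdw (the acting master), as gpadmin: recover the downed segments, then rebalance
gprecoverseg -a
gpstate -e
gprecoverseg -r

# Make the original master host a standby of the acting master
# (move its stale gpseg-1 directory out of the way on mdw first)
gpinitstandby -s mdw

# Switch the master role back: stop only the acting master on smdw, then activate on mdw
gpstop -m                                                  # run on smdw
gpactivatestandby -d /opt/greenplum/data/master/gpseg-1    # run on mdw

# Finally, re-add smdw as the standby master; this is the step shown in the log below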

[gpadmin@mdw ~]$ gpinitstandby -s smdw -S /opt/greenplum/data/master/gpseg-1 -P 5432
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Validating environment and parameters for standby initialization...
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Checking for data directory /opt/greenplum/data/master/gpseg-1 on smdw
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master initialization parameters
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:------------------------------------------------------
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master hostname               = mdw
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master data directory         = /opt/greenplum/data/master/gpseg-1
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum master port                   = 5432
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master hostname       = smdw
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master port           = 5432
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum standby master data directory = /opt/greenplum/data/master/gpseg-1
20241106:11:03:46:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Greenplum update system catalog         = On
Do you want to continue with standby master initialization? Yy|Nn (default=N):
> y
20241106:11:03:47:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions to standby
20241106:11:03:48:005835 gpinitstandby:mdw:gpadmin-[INFO]:-The packages on smdw are consistent.
20241106:11:03:48:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Adding standby master to catalog...
20241106:11:03:48:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Database catalog updated successfully.
20241106:11:03:48:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Updating pg_hba.conf file...
20241106:11:03:49:005835 gpinitstandby:mdw:gpadmin-[INFO]:-pg_hba.conf files updated successfully.
20241106:11:03:51:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Starting standby master
20241106:11:03:51:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Checking if standby master is running on host: smdw  in directory: /opt/greenplum/data/master/gpseg-1
20241106:11:03:52:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Cleaning up pg_hba.conf backup files...
20241106:11:03:52:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Backup files of pg_hba.conf cleaned up successfully.
20241106:11:03:52:005835 gpinitstandby:mdw:gpadmin-[INFO]:-Successfully created standby master on smdw
[gpadmin@mdw ~]$ gpstate -f
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -f
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source'
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.26 (Greenplum Database 6.25.3 build commit:367edc6b4dfd909fe38fc288ade9e294d74e3f9a Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Oct  4 2023 23:27:58'
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-Standby master details
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-----------------------
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-   Standby address          = smdw
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-   Standby data directory   = /opt/greenplum/data/master/gpseg-1
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-   Standby port             = 5432
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-   Standby PID              = 990
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:-   Standby status           = Standby host passive
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--pg_stat_replication
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--WAL Sender State: streaming
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--Sync state: sync
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--Sent Location: 0/1C000000
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--Flush Location: 0/1C000000
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--Replay Location: 0/1C000000
20241106:11:03:56:005913 gpstate:mdw:gpadmin-[INFO]:--------------------------------------------------------------
[gpadmin@mdw ~]$ 

Summary

1. In this architecture, the two data centers need the same number of segment hosts.

2. If the network between the two data centers is cut, does the cluster split-brain, and which side wins? The data center that hosts the acting master node remains authoritative: its master fails the unreachable primaries over to the local mirrors, while the isolated data center has no master and cannot serve queries (see the check sketched after this list).

3. With this layout, the cluster achieves data-center-level disaster recovery.
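
For point 2, the behavior can be confirmed from the surviving side of the partition (a small sketch; it only reports the current roles and is not a fencing mechanism):

# Run as gpadmin on a host in the master's data center:
# content = -1 shows which host is the acting master, and status = 'd' lists the
# segments that were marked down and failed over to mirrors in the master's data center
psql -d postgres -c "
select dbid, content, role, preferred_role, status, hostname
from   gp_segment_configuration
where  content = -1 or status = 'd'
order  by content, role;"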


