Choosing a Version of Modern Oracle Database

Starting with 12c, Oracle has in effect adopted a SQL Server-style scheme that uses the release year as the version number. The versions released after Oracle 12c are 18c, 19c, and 21c; no major version was released in 2020 because of the global pandemic.

The major-release cycle has shortened considerably; in practice, the frequency of major releases now covers the role that Patch Set releases used to play.

For enterprises choosing an Oracle Database version, a reasonable rule of thumb is "current year minus 2 or 3".

That is, if the current year is 2021, consider (2021 - 2 or 3), i.e. 19c or 18c, and apply the latest RU (Release Update) and RUR (Release Update Revision).
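As a hedged illustration of this rule of thumb (the year-to-release mapping below is an assumption for the example; 2020 had no major release):

```python
# Sketch of the "current year minus 2 or 3" rule described above.
# The year-to-release mapping is an illustrative assumption;
# 2020 had no major release, so it maps to None.
RELEASES = {2018: "18c", 2019: "19c", 2020: None, 2021: "21c"}

def candidate_versions(current_year: int) -> list:
    """Return release names for (current_year - 2) and (current_year - 3)."""
    picks = []
    for offset in (2, 3):
        release = RELEASES.get(current_year - offset)
        if release is not None:
            picks.append(release)
    return picks

print(candidate_versions(2021))  # 19c (2021-2) and 18c (2021-3)
```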

 

 

Using Dgg for Homogeneous Cross-Database Data Verification in Oracle

After using Dgg to synchronize data between Oracle databases, you can use Dgg's data-comparison feature to verify that the data on both sides is consistent.

First, see "Using Dgg for Full Synchronization / Initial Load of Oracle Data": https://www.askmac.cn/archives/datagogo-initial-load-oracle-database.html

Select the user name at the top of the tree view and right-click Compare:


DataGoGo supports three levels of comparison granularity for table data:

  1. Row count only (suitable for very large tables such as log tables; the coarsest comparison)
  2. Primary-key hash only (suitable for medium and large tables; requires a primary key; medium granularity)
  3. Full-table hash (suitable for small and medium tables; the finest granularity, and the slowest)
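To illustrate the idea behind these three granularities (a minimal sketch, not Dgg's actual implementation; sqlite3 stands in for the real databases, and the hashing scheme is an assumption):

```python
# Three comparison granularities: row count, primary-key hash, full-table hash.
import hashlib
import sqlite3

def row_count(cur, table):
    return cur.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def pk_hash(cur, table, pk):
    # Hash only the primary-key column, in key order.
    h = hashlib.sha256()
    for (key,) in cur.execute(f"SELECT {pk} FROM {table} ORDER BY {pk}"):
        h.update(str(key).encode())
    return h.hexdigest()

def full_table_hash(cur, table, pk):
    # Hash every column of every row, in key order (the finest, slowest check).
    h = hashlib.sha256()
    for row in cur.execute(f"SELECT * FROM {table} ORDER BY {pk}"):
        h.update(repr(row).encode())
    return h.hexdigest()

con = sqlite3.connect(":memory:")
cur = con.cursor()
for t in ("src", "dst"):
    cur.execute(f"CREATE TABLE {t} (id INTEGER PRIMARY KEY, val TEXT)")
    cur.executemany(f"INSERT INTO {t} VALUES (?, ?)", [(1, "a"), (2, "b")])

print(row_count(cur, "src") == row_count(cur, "dst"))                          # True
print(pk_hash(cur, "src", "id") == pk_hash(cur, "dst", "id"))                  # True
print(full_table_hash(cur, "src", "id") == full_table_hash(cur, "dst", "id"))  # True
```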

 

In a test against a table with 900,000 rows and 24 columns, the comparison took about 2 minutes.

 

The comparison result:

 

Compare start date: 2021-06-07 14:58:54

Comparing SCOTT.T_XT_SYNCPERSONLOG and EAS.T_XT_SYNCPERSONLOG:

SCOTT.T_XT_SYNCPERSONLOG full table checksum: 235801620123
EAS.T_XT_SYNCPERSONLOG full table checksum: 235801620123

Full table checksum is the same.

End date: 2021-06-07 15:00:31

 

 

Does a udev reload affect Oracle ASM?

Will a UDEV RELOAD affect Oracle ASM?

 

Answer: frequent reloads clearly can affect Oracle ASM, and can even take an ASM diskgroup OFFLINE.

A single quick reload is much less likely to cause problems.

 

References:

 

‘udev’ rules continuously being reloaded resulted in Oracle ASM diskgroup outage

https://access.redhat.com/solutions/1465913

Configuring multipath and udev for Oracle ASM on Linux 7

multipath.conf

 


# This is a basic configuration file with some examples, for device mapper
# multipath.
#
# For a complete list of the default configuration values, run either
# multipath -t
# or
# multipathd show config
#
# For a list of configuration options with descriptions, see the multipath.conf
# man page

## By default, devices with vendor = "IBM" and product = "S/390.*" are
## blacklisted. To enable multipathing on these devices, uncomment the
## following lines.
#blacklist_exceptions {
#	device {
#		vendor	"IBM"
#		product	"S/390.*"
#	}
#}

## Use user friendly names, instead of using WWIDs as names.
defaults {
	user_friendly_names yes
	find_multipaths yes
}
##
## Here is an example of how to configure some standard options.
##
#
#defaults {
#	polling_interval 	10
#	path_selector		"round-robin 0"
#	path_grouping_policy	multibus
#	uid_attribute		ID_SERIAL
#	prio			alua
#	path_checker		readsector0
#	rr_min_io		100
#	max_fds			8192
#	rr_weight		priorities
#	failback		immediate
#	no_path_retry		fail
#	user_friendly_names	yes
#}
##
## The wwid line in the following blacklist section is shown as an example
## of how to blacklist devices by wwid.  The 2 devnode lines are the
## compiled in default blacklist. If you want to blacklist entire types
## of devices, such as all scsi devices, you should use a devnode line.
## However, if you want to blacklist specific devices, you should use
## a wwid line.  Since there is no guarantee that a specific device will
## not change names on reboot (from /dev/sda to /dev/sdb for example)
## devnode lines are not recommended for blacklisting specific devices.
##
#blacklist {
#       wwid 26353900f02796769
#	devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
#	devnode "^hd[a-z]"
#}
#multipaths {
#	multipath {
#		wwid			3600508b4000156d700012000000b0000
#		alias			yellow
#		path_grouping_policy	multibus
#		path_selector		"round-robin 0"
#		failback		manual
#		rr_weight		priorities
#		no_path_retry		5
#	}
#	multipath {
#		wwid			1DEC_____321816758474
#		alias			red
#	}
#}
#devices {
#	device {
#		vendor			"COMPAQ  "
#		product			"HSV110 (C)COMPAQ"
#		path_grouping_policy	multibus
#		path_checker		readsector0
#		path_selector		"round-robin 0"
#		hardware_handler	"0"
#		failback		15
#		rr_weight		priorities
#		no_path_retry		queue
#	}
#	device {
#		vendor			"COMPAQ  "
#		product			"MSA1000         "
#		path_grouping_policy	multibus
#	}
#}
multipaths {
	multipath {
		wwid	360050764008103127000000000000067
		alias	disk30001
	}
	multipath {
		wwid	360050764008103127000000000000068
		alias	disk30002
	}
	multipath {
		wwid	360050764008103127000000000000069
		alias	disk30003
	}
	multipath {
		wwid	360050764008103127000000000000064
		alias	disk31001
	}
	multipath {
		wwid	360050764008103127000000000000065
		alias	disk31002
	}
	multipath {
		wwid	36005076400810312700000000000005f
		alias	disk32001
	}
	multipath {
		wwid	360050764008103127000000000000060
		alias	disk32002
	}
	multipath {
		wwid	360050764008103127000000000000061
		alias	disk32003
	}
	multipath {
		wwid	360050764008103127000000000000062
		alias	disk32004
	}
	multipath {
		wwid	360050764008103127000000000000063
		alias	disk32005
	}
	multipath {
		wwid	360050764008103127000000000000066
		alias	disk33001
	}
}



12-dm-permissions.rules


ENV{DM_NAME}=="disk30001",NAME="asmdisk30001", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk30002",NAME="asmdisk30002", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk30003",NAME="asmdisk30003", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk31001",NAME="asmdisk31001", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk31002",NAME="asmdisk31002", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk32001",NAME="asmdisk32001", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk32002",NAME="asmdisk32002", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk32003",NAME="asmdisk32003", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk32004",NAME="asmdisk32004", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk32005",NAME="asmdisk32005", OWNER:="grid", GROUP:="asmadmin", MODE:="660"
ENV{DM_NAME}=="disk33001",NAME="asmdisk33001", OWNER:="grid", GROUP:="asmadmin", MODE:="660"


99-oracle-asmdevices.rules


KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000067", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000068", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000069", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000064", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000065", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="36005076400810312700000000000005f", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000060", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000061", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000062", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000063", OWNER="grid", GROUP="asmadmin", MODE="0660"
KERNEL=="sd*", SUBSYSTEM=="block", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}=="360050764008103127000000000000066", OWNER="grid", GROUP="asmadmin", MODE="0660"


Using Dgg for Full Synchronization / Initial Load of MySQL Data

There are two MySQL servers, and the data on one needs to be synchronized to the other. The Dgg data synchronization and verification tool can perform this initial load.

 

Open Dgg and click Start => Open, then create two MySQL sessions:

 

Select the source and click Next; then select the target and click Connect:

 

The relevant tables appear in the object trees on both sides. Select the user and right-click => Transfer:

 

 

 

After the initial load, if there is any doubt about data consistency, use the Compare feature: in the left-hand tree, select the table to compare and right-click Compare:

Verification comes in several forms:

  1. Row-count comparison (coarsest granularity)
  2. Primary-key comparison (coarse granularity)
  3. Full-table comparison (fine granularity)

 

Row-count comparison only checks whether the two tables contain the same number of rows, i.e. it compares only count(1).

Primary-key comparison checks only whether the primary-key information matches; composite primary keys are supported.

Full-table comparison checks whether the full table data matches (long columns such as TEXT and LONGTEXT are excluded).

 

For small tables (roughly under 100,000 rows), full-table comparison is fast and its runtime is acceptable. For large tables, unless consistency requirements are extremely strict (as with ledger tables), primary-key or row-count comparison is recommended.
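The guidance above can be sketched as a small selection helper (the 100,000-row threshold comes from the text; treating "strict" tables as always needing full comparison is an assumption):

```python
# Choose a comparison mode from row count and consistency requirement,
# following the guidance above. Thresholds are illustrative, not Dgg's own.
def pick_compare_mode(row_count: int, strict: bool = False) -> str:
    if strict:                   # e.g. ledger tables: always the full comparison
        return "full-table"
    if row_count < 100_000:      # "small table" per the text above
        return "full-table"
    return "primary-key"         # large tables: PK (or row-count) comparison

print(pick_compare_mode(50_000))             # full-table
print(pick_compare_mode(5_000_000))          # primary-key
print(pick_compare_mode(5_000_000, True))    # full-table
```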

 


Where Does Oracle Have the Edge over MySQL?

Where does Oracle have the edge over MySQL? Original thread: https://v2ex.com/t/773654?p=1

 

Our current project uses MySQL and supports billions of rows without trouble (via sharding); occasional slow SQL can be fixed by tuning indexes, and complex queries go through a search engine. Oracle is said to be extremely expensive, so what advantages does it actually have, and which scenarios are Oracle-only?

Having read the replies: Oracle supporting billions of rows in a single table is impressive, because our shard key currently makes many requirements impossible to satisfy, and we have to keep a copy of the table sharded on a different key.

One more question: can Oracle handle complex searches involving dozens of columns at once? The search-engine approach we use now is also very expensive.

To be honest, a single table holding billions of rows sounds far-fetched; is that real?

My reply:

 

Oracle's cost-based optimizer (CBO) is probably the most sophisticated (not to say the most advanced) of any RDBMS. MySQL's optimizer is still at a fairly basic stage, although MySQL's goals may never have included implementing highly complex optimizer algorithms. In other areas, such as the implementation of undo, the advantage is less pronounced, so I won't go into them here.

 

One example, with no indexes on either side. NO INDEX! NO INDEX! NO INDEX!

The data volumes are identical and the machine is the same: Oracle 11.2.0.4 vs MySQL 8.0.14.

With no indexes at all, Oracle finished in 0.04 seconds; MySQL had still not finished after 10 minutes.

For simple SQL the difference is not that large; for complex SQL the optimizer's advantage shows clearly.

I have no interest in hyping any particular technology, so MySQL die-hards, please hold your fire.

Oracle : 

SQL> set timing on;
SQL> SELECT c.cust_city,
  2         t.calendar_quarter_desc,
  3         SUM(s.amount_sold) sales_amount
  4    FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
  5   WHERE s.time_id = t.time_id
  6     AND s.cust_id = c.cust_id
  7     AND s.channel_id = ch.channel_id
  8     AND c.cust_state_province = 'FL'
  9     AND ch.channel_desc = 'Direct Sales'
 10     AND t.calendar_quarter_desc IN ('2000-01', '2000-02','1999-12')
 11   GROUP BY c.cust_city, t.calendar_quarter_desc;

CUST_CITY                      CALENDA SALES_AMOUNT
------------------------------ ------- ------------
Cypress Gardens                2000-01      3545.82
Candler                        2000-01      4166.32
Sanibel                        2000-02     17908.67
Ocala                          2000-02      7081.73
Molino                         2000-01     18765.25
Ocala                          2000-01      7146.73
Palmdale                       2000-02     25829.24
Palmdale                       2000-01     37793.44
Molino                         2000-02      17144.7
Saint Marks                    2000-01     55781.37
Noma                           2000-01     33572.55
Evinston                       2000-02     62657.21
Candler                        2000-02      6493.94
Winter Springs                 2000-02           20
Sugarloaf Key                  2000-01     12027.66
Saint Marks                    2000-02      48858.7
Blountstown                    2000-02     38052.58
Sugarloaf Key                  2000-02      9659.44
Cypress Gardens                2000-02      4928.93
Evinston                       2000-01     53509.69
Blountstown                    2000-01      27024.7
Sanibel                        2000-01     15870.34
Winter Springs                 2000-01        31.46
Noma                           2000-02     23903.58

24 rows selected.

Elapsed: 00:00:00.04

Execution Plan
----------------------------------------------------------
Plan hash value: 1865285285

------------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT               |           |   607 | 46132 |   955   (2)| 00:00:12 |       |       |
|   1 |  HASH GROUP BY                 |           |   607 | 46132 |   955   (2)| 00:00:12 |       |       |
|*  2 |   HASH JOIN                    |           |  2337 |   173K|   954   (2)| 00:00:12 |       |       |
|   3 |    PART JOIN FILTER CREATE     | :BF0000   |   274 |  4384 |    18   (0)| 00:00:01 |       |       |
|*  4 |     TABLE ACCESS FULL          | TIMES     |   274 |  4384 |    18   (0)| 00:00:01 |       |       |
|*  5 |    HASH JOIN                   |           | 12456 |   729K|   936   (2)| 00:00:12 |       |       |
|   6 |     MERGE JOIN CARTESIAN       |           |   383 | 14937 |   408   (1)| 00:00:05 |       |       |
|*  7 |      TABLE ACCESS FULL         | CHANNELS  |     1 |    13 |     3   (0)| 00:00:01 |       |       |
|   8 |      BUFFER SORT               |           |   383 |  9958 |   405   (1)| 00:00:05 |       |       |
|*  9 |       TABLE ACCESS FULL        | CUSTOMERS |   383 |  9958 |   405   (1)| 00:00:05 |       |       |
|  10 |     PARTITION RANGE JOIN-FILTER|           |   918K|    18M|   526   (2)| 00:00:07 |:BF0000|:BF0000|
|  11 |      TABLE ACCESS FULL         | SALES     |   918K|    18M|   526   (2)| 00:00:07 |:BF0000|:BF0000|
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("S"."TIME_ID"="T"."TIME_ID")
   4 - filter("T"."CALENDAR_QUARTER_DESC"='1999-12' OR "T"."CALENDAR_QUARTER_DESC"='2000-01' OR
              "T"."CALENDAR_QUARTER_DESC"='2000-02')
   5 - access("S"."CUST_ID"="C"."CUST_ID" AND "S"."CHANNEL_ID"="CH"."CHANNEL_ID")
   7 - filter("CH"."CHANNEL_DESC"='Direct Sales')
   9 - filter("C"."CUST_STATE_PROVINCE"='FL')


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
       1726  consistent gets
          0  physical reads
          0  redo size
       1495  bytes sent via SQL*Net to client
        531  bytes received via SQL*Net from client
          3  SQL*Net roundtrips to/from client
          1  sorts (memory)
          0  sorts (disk)
         24  rows processed


SQL> select count(*) from sh.sales;

  COUNT(*)
----------
    918843

SQL> select * From v$VERSION;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE    11.2.0.4.0      Production
TNS for 64-bit Windows: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

MySQL :

mysql> SELECT c.cust_city,
    ->        t.calendar_quarter_desc,
    ->        SUM(s.amount_sold) sales_amount
    ->   FROM sh.sales s, sh.times t, sh.customers c, sh.channels ch
    ->  WHERE s.time_id = t.time_id
    ->    AND s.cust_id = c.cust_id
    ->    AND s.channel_id = ch.channel_id
    ->    AND c.cust_state_province = 'FL'
    ->    AND ch.channel_desc = 'Direct Sales'
    ->    AND t.calendar_quarter_desc IN ('2000-01', '2000-02','1999-12')
    ->  GROUP BY c.cust_city, t.calendar_quarter_desc;


+----+-------------+-------+------------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows   | filtered | Extra                                              |
+----+-------------+-------+------------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+
|  1 | SIMPLE      | ch    | NULL       | ALL  | NULL          | NULL | NULL    | NULL |      5 |    20.00 | Using where; Using temporary                       |
|  1 | SIMPLE      | t     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |   1804 |    30.00 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | c     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |  55065 |    10.00 | Using where; Using join buffer (Block Nested Loop) |
|  1 | SIMPLE      | s     | NULL       | ALL  | NULL          | NULL | NULL    | NULL | 914584 |     0.10 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------------+------+---------------+------+---------+------+--------+----------+----------------------------------------------------+



mysql> select version();
+-----------+
| version() |
+-----------+
| 8.0.14    |
+-----------+
1 row in set (0.00 sec)

mysql> select count(*) from sh.sales;
+----------+
| count(*) |
+----------+
|   918843 |
+----------+
1 row in set (0.96 sec)

Using Dgg for Full Synchronization / Initial Load of Oracle Data

There are two databases, ZHS and AL, and the EAS schema data in ZHS needs to be initial-loaded into the AL database.

Preparation: create the corresponding EAS user in the AL database and grant privileges:

 

 

 

set ORACLE_SID=AL

sqlplus / as sysdba

create user eas identified by eas;
grant connect,resource to eas;

 

 

 

Open DataGoGo.exe and click Start.

 

Click New:

 

Enter the connection information for the source ZHS instance:

Click Test Connection:

 

After clicking Save, create the target database connection the same way:

 

Select the source data source and click Next:

 

Select the target data source and click Connect.

 

Once connected, the object trees are displayed on the left and right:

 

Select the user name, right-click, and choose Transfer:

 

The schema transfer window appears, with all tables selected by default.

 

Click Start to begin the schema-level data transfer; progress is displayed during the transfer:

 

With NVARCHAR or LOB (BLOB, CLOB, NCLOB) columns present, DataGoGo's single-threaded transfer rate is about 4,500 rows/second.

Without NVARCHAR or LOB (BLOB, CLOB, NCLOB) columns, the single-threaded transfer rate is about 30,000 rows/second.
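As a back-of-the-envelope check, the quoted single-threaded rates give rough transfer-time estimates (actual throughput will vary with hardware and network):

```python
# Rough ETA from the single-threaded rates quoted above (rows/second).
RATE_WITH_LOBS = 4_500   # tables containing NVARCHAR/LOB columns
RATE_PLAIN = 30_000      # tables without such columns

def eta_minutes(rows: int, has_lobs: bool) -> float:
    rate = RATE_WITH_LOBS if has_lobs else RATE_PLAIN
    return rows / rate / 60

print(f"{eta_minutes(900_000, True):.1f} min")   # ~3.3 min for 900k rows with LOBs
print(f"{eta_minutes(900_000, False):.1f} min")  # ~0.5 min without LOBs
```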

After clicking Close, the transferred tables appear in the right-hand tree:

 

 

 

Classifying Databases by Importance

| Level | D | C | B | A | S |
|---|---|---|---|---|---|
| Users affected | < 10 | 10–1,000 | 1,000–100,000 | 100,000–1,000,000 | > 1,000,000 |
| Typical systems | Test/dev systems, small OA | Bookkeeping software, mid-size OA | ERP, financial software | Mid/large ERP, MES, HRM; large hospital HIS; telecom CRM; banking | Large public services such as 12306 |
| Disaster-recovery price (CNY) | 500–5,000 | 5,000–30,000 | 30,000–100,000 | 100,000–500,000 | > 500,000 |
| Typical safeguards | Almost no usable backup | A few on-disk logical backups | Possibly physical backup plus other backup forms | Physical/logical backup plus Data Guard, OGG, or similar physical/logical DR; storage-level redundancy | All of the above, plus multi-datacenter redundancy |
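The classification by impact can be sketched as a simple lookup (thresholds taken from the table above):

```python
# Map affected-user count to a database level, per the table above.
LEVELS = [(10, "D"), (1_000, "C"), (100_000, "B"), (1_000_000, "A")]

def db_level(users_affected: int) -> str:
    for upper, level in LEVELS:
        if users_affected < upper:
            return level
    return "S"

print(db_level(5))          # D
print(db_level(50_000))     # B
print(db_level(2_000_000))  # S
```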

Dgg: Oracle/MySQL Database Replication, Synchronization and Verification Tool

Dgg Rdbms Oracle MySQL Table/Schema Sync Compare Repair

Latest version:

 

Dgg is a single application that allows you to transfer data between Oracle and MySQL.
It will also support Microsoft SQL Server in the future.

It supports the following directions:

Oracle To Oracle
Oracle To MySQL
MySQL To MySQL
MySQL To Oracle

Features:

Fast comparison and synchronization of Oracle table data
Fast comparison and synchronization of MySQL table data
Heterogeneous transfer Oracle <=> MySQL; SQL Server support planned
Fast table data verification and repair (planned)
Support for indexes, views, triggers, and other object types (planned)

 

 

DataGoGo210413

 

 

 

Download  https://zcdn.askmac.cn/DataGoGo2103.zip

DataGoGo2104

 

 

 

 

Oracle 12.2: Rapid Database Migration with Minimal Downtime Using Data Guard

=================================> Pre-migration setup should be completed 10 days before the cutover <=================================

 

1. Configure the necessary OS parameters on the new server, including huge pages.

2. Install 12.2.0.1 GI and RDBMS on the new server, and apply the 2010 patches p31741641_122010 and p31750094_122010.

3. Create an empty database whose parameter settings match the oldorcl database.

4. Copy the password file to the new server:

[oracle@]$ srvctl config database -d as19_fra1gh | grep 'Password file'
Password file: +DATAC1//PASSWORD/passwd <- this file is what needs to be copied to /tmp and scp'd to the standby (result may differ)

[oracle@]$ export ORACLE_SID=+ASM1
[oracle@]$ export ORACLE_HOME=/u01/app/12.2.0.1/grid
[oracle@]$ export PATH=$ORACLE_HOME/bin:$PATH
[oracle@]$ asmcmd cp +DATAC1//PASSWORD/passwd /tmp/passwd.
copying +DATAC1//PASSWORD/passwd -> /tmp/passwd.

[oracle@]$ scp /tmp/orapw1 oracle@:/tmp/orapw

5. Configure the tnsnames.ora file with entries for oldorcl and neworcl:

oldorcl: the old source database
neworcl: the new target database

6. Restore the control file on the new server:

startup nomount;

restore standby controlfile from service oldorcl;
alter database mount;

7. Configure RMAN parameters:

CONFIGURE DEVICE TYPE DISK PARALLELISM 2;

8. Restore the full database (this can also be done by shipping backup files; the impact is much the same):

restore database from service oldorcl;

9. Add standby logfiles:

SQL> alter database add standby logfile thread 1
group 7 ('+DATA') size 4294967296,
group 8 ('+DATA') size 4294967296,
group 9 ('+DATA') size 4294967296;

SQL> alter database add standby logfile thread 2
group 11 ('+DATA') size 4294967296,
group 12 ('+DATA') size 4294967296,
group 13 ('+DATA') size 4294967296;

10. Configure archive-log shipping via log_archive_dest_2 (archive-based synchronization): log_archive_dest_2='service=neworcl lgwr async'

If there are too many archived logs, you can instead run recover database from service oldorcl; periodically (incremental-backup-based synchronization) and leave archive shipping disabled.

11. Open the neworcl database read-only and start log apply:

alter database open read only;
alter database recover managed standby database using current logfile disconnect from session;

=================================> Cutover window begins <=================================

 

Archive shipping must be enabled: log_archive_dest_2='service=neworcl lgwr async'

 

Stop the application.

 

Stop the database instance: shutdown immediate; => about 5 minutes

 

Start the oldorcl database in restricted mode => about 5 minutes

 

startup restrict;

 

Create a new record and confirm it is shipped to the new database => about 1 minute

create table test1(t1 date);
insert into test1 values(sysdate);
commit;
alter session set nls_date_format='YYYY-MM-DD hh24:mi:ss';
alter system checkpoint;

alter system switch logfile;

alter system switch logfile;
alter system switch logfile;
select * from test1;

Confirm on neworcl:
select * from test1;

Open the neworcl database => about 10 minutes

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;

alter database open;

Verify the data.

Stop the database instances, listener, and SCAN IP resources => about 10 minutes

Switch the SCAN IP => about 10 minutes

Start all resources and bring up both neworcl database instances => about 15 minutes

Remove log_archive_dest_2.
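Summing the per-step estimates above gives a rough budget for the cutover window:

```python
# Rough cutover-window budget from the per-step estimates above (minutes).
steps = {
    "shutdown immediate": 5,
    "startup restrict": 5,
    "verify record transfer": 1,
    "open neworcl": 10,
    "stop instances/listener/SCAN": 10,
    "switch SCAN IP": 10,
    "start resources and instances": 15,
}
total = sum(steps.values())
print(f"total ~ {total} minutes")  # 56 minutes
```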

=================================> Cutover window ends <=================================
