Monday, July 25, 2011

Table Comparison and Synchronisation

If you are managing more than one database for the same application, it is very common that you need to compare one table (say scott.employee) across two different databases. I think we have all had to do such a job before :) . The simplest way is just to MINUS the two tables, like:

1. Two different tables in the same DB - SELECT * FROM table1 MINUS SELECT * FROM table2;
2. Same table in two different DBs - SELECT * FROM table1 MINUS SELECT * FROM table2@remote_db;
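
Note that a single MINUS only returns rows that are in the first table but missing from the second. To see differences in both directions, combine the two MINUS queries, e.g. for case 1:

SELECT 'ONLY_IN_TABLE1' src, t.* FROM (SELECT * FROM table1 MINUS SELECT * FROM table2) t
UNION ALL
SELECT 'ONLY_IN_TABLE2' src, t.* FROM (SELECT * FROM table2 MINUS SELECT * FROM table1) t;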

But this simple approach can be time consuming if your tables are very big (say 1,000,000+ records). If the targeted tables do not contain BLOB, CLOB or LONG data type columns, you can easily use DBMS_COMPARISON, a new package introduced in Oracle 11g, to compare them. Moreover, this package also provides synchronization to remove mismatched records from the targeted tables. For details, please go through the Oracle documentation.

Here we are going to compare and synchronise the 'employee' table in two different DBs.

Requirement: a database link between the host (i.e., where these scripts will be run) and the remote DB.

Step One: Create comparison Task

BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name    => 'compare_employee'
  , schema_name        => 'SCOTT'
  , object_name        => 'EMPLOYEE'
  , dblink_name        => 'db_link_to_remote'
  , remote_schema_name => 'SCOTT'
  , remote_object_name => 'EMPLOYEE'
  );
END;
/

If you wish to compare two different tables in the same DB, set dblink_name => NULL, object_name => 'table1' and remote_object_name => 'table2'.
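
For example, a same-DB comparison might look like this (the second table name is illustrative):

BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
    comparison_name    => 'compare_employee_copy'
  , schema_name        => 'SCOTT'
  , object_name        => 'EMPLOYEE'
  , dblink_name        => NULL
  , remote_schema_name => 'SCOTT'
  , remote_object_name => 'EMPLOYEE_COPY'
  );
END;
/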

Step Two: Execute comparison Task

DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
                  comparison_name => 'compare_employee'
                , scan_info       => scan_info
                , perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  IF consistent THEN
    DBMS_OUTPUT.PUT_LINE('No differences found.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences were found.');
  END IF;
END;
/
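
Note: if you run this block in SQL*Plus, enable server output first so the scan ID is actually printed:

SET SERVEROUTPUT ON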

Step Three: See how many differences you got

col COMPARISON_NAME format a15;
col SCHEMA_NAME format a15;
col OBJECT_NAME format a15;


SELECT s.scan_id, c.COMPARISON_NAME, c.SCHEMA_NAME, c.OBJECT_NAME, s.CURRENT_DIF_COUNT
FROM USER_COMPARISON c, USER_COMPARISON_SCAN_SUMMARY s
WHERE c.COMPARISON_NAME = s.COMPARISON_NAME
AND s.scan_id = 514; -- this number is the scan ID output in step two
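
If you want to see which rows actually differ, the row-level dictionary view can be queried as well; a sketch (verify the exact column names in your release against the documentation):

SELECT r.index_value, r.local_rowid, r.remote_rowid, r.status
FROM USER_COMPARISON_ROW_DIF r
WHERE r.scan_id = 514;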


Step Four: Synchronise Remote table with host table.

DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name  => 'compare_employee',
    scan_id          => 506, -- this number is the scan ID output in step two
    scan_info        => scan_info,
    converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged: '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Merged: '||scan_info.rmt_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted: '||scan_info.loc_rows_deleted);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
END;
/

You can also synchronise the host table from the remote table by passing converge_options => DBMS_COMPARISON.CMP_CONVERGE_REMOTE_WINS instead.
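
A minimal sketch of that direction, reusing the scan ID from step two:

DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
    comparison_name  => 'compare_employee',
    scan_id          => 506, -- the scan ID output in step two
    scan_info        => scan_info,
    converge_options => DBMS_COMPARISON.CMP_CONVERGE_REMOTE_WINS); -- remote rows overwrite local
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged: '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted: '||scan_info.loc_rows_deleted);
END;
/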

I hope you all will enjoy this exciting new package in Oracle 11g.

Monday, July 11, 2011

Traveling to the past with Log Miner

Sometimes you may wish to see what happened in the recent past. One beautiful morning, you may discover that a mishap occurred in your database :(. Unfortunately, you did not enable any auditing options earlier.

If your database is in archivelog mode and supplemental logging is enabled, then Log Miner can save the day! Oracle Log Miner is also capable of mining the redo logs and archive logs of a remote DB. Please see the Oracle Database Utilities documentation for details.

In this post, we only consider a scenario in which the source DB and the mining DB are the same. Let's describe how you can do this.


Step 1: Create Log Miner User

CREATE USER logminer_adm identified by logminer_adm;

GRANT RESOURCE, CONNECT TO logminer_adm;
GRANT EXECUTE_CATALOG_ROLE, DBA TO logminer_adm;
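
Then connect as this user to run the remaining steps:

CONN logminer_adm/logminer_adm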


Step 2: Enable Supplemental Logging

Check whether supplemental logging is enabled:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If not, then you can't mine all transactions. So enable it for the future, and follow the remaining steps to mine a subset of all transactions.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;



Step 3: Add Archive Log Files to Log Miner

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';


EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'/u/archivelog/2011_07_10/o1_mf_1_470_71m4jv90_.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'/u/archivelog/2011_07_10/o1_mf_1_471_71m4kn09_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);


You can add as many files as you need.
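
If you are not sure which archive log files cover your time window, you can look them up first (the times here are illustrative, matching step 4 below):

SELECT name, first_time, next_time
FROM V$ARCHIVED_LOG
WHERE first_time <= TO_DATE('10-JUL-2011 19:55:00','DD-MON-YYYY HH24:MI:SS')
AND next_time >= TO_DATE('10-JUL-2011 19:50:00','DD-MON-YYYY HH24:MI:SS');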

Step 4: Start Log Miner

EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '10-JUL-2011 19:50:00', -
  ENDTIME => '10-JUL-2011 19:55:00', -
  OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);

Step 5: View What Happened Last Night :)

SELECT SEG_OWNER ,OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'SYS';
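
Keep in mind that V$LOGMNR_CONTENTS is only queryable while the LogMiner session is active, so if you want to keep the findings for later analysis, persist them before step 6. A sketch for a single table (the schema and table names are illustrative):

CREATE TABLE logmnr_results AS
SELECT username, timestamp, seg_owner, table_name, operation, sql_redo, sql_undo
FROM V$LOGMNR_CONTENTS
WHERE seg_owner = 'SCOTT'
AND table_name = 'EMPLOYEE';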


Step 6: Stop Log Miner Session

EXECUTE DBMS_LOGMNR.END_LOGMNR;

Saturday, July 2, 2011

DATA PUMP: Network mode import

Sometimes we need to copy a schema from one DB machine to another, or copy one schema as another schema (i.e. remap schema) in the same DB. The simplest way is to

(a) export the required schema, move the dump to the remote DB and import [copy between two DBs]

(b) export the required schema and import the dump with the schema remap option [copy within the same DB]


Scenario ONE

Suppose the source schema is small (say 1-3 GB); the above process is time consuming and laborious compared with the task requirements.

Scenario TWO

Suppose the source schema is very big (say 100 GB+) and there is not enough disk space to store a 100 GB dump.

In the above cases you can use the network import option. But keep in mind that a network import is a slower process, because all the data has to travel over the network connection when you are copying a schema between two different DB machines.

Let's describe how we can achieve this. We will copy the user_info schema from the source (remote) DB to the destination DB.



STEP 1: Grant the exp_full_database role to the source schema [IN REMOTE (SOURCE) DB]
conn system/pass;

GRANT EXP_FULL_DATABASE to user_info;

STEP 2: [IN DESTINATION DB] Create destination user and grant necessary roles

conn system/pass;
CREATE USER user_info
IDENTIFIED BY pass
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

GRANT CONNECT , RESOURCE TO user_info;

STEP 3: [IN DESTINATION DB] grant read/write on dump directory

conn system/pass;

GRANT read,write on DIRECTORY dump_directory to user_info;

A network import does not require any dump file. This directory is only required for writing the import log file.

STEP 4: [IN DESTINATION DB] create public DB Link

conn system/pass;
CREATE PUBLIC DATABASE LINK SATURN
CONNECT TO user_info IDENTIFIED BY pass
USING '(DESCRIPTION=
          (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=remote_db)(PORT=1521)))
          (CONNECT_DATA=(SERVICE_NAME=orcl.oracle.com)(SERVER=DEDICATED)))';
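
Before running the import, it is worth verifying that the new link actually works:

SELECT sysdate FROM dual@SATURN;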

STEP 5: [IN DESTINATION DB MACHINE] execute impdp

impdp user_info/pass directory=dump_dir network_link=SATURN logfile=net_import_proddev.log EXCLUDE=GRANT,STATISTICS,SYNONYM,DB_LINK REMAP_SCHEMA=USER_INFO:USER_INFO
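
For the same-DB copy case mentioned at the beginning of this post, the command is almost identical; a rough sketch, assuming the DB link points back to the local DB and user_info_dev is the (already created) target schema:

impdp system/pass schemas=USER_INFO directory=dump_dir network_link=SATURN logfile=net_import_dev.log REMAP_SCHEMA=USER_INFO:USER_INFO_DEV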