Monday, July 28, 2008

AWR top SQL queries

FOR EXECUTIONS: EXECUTIONS_DELTA
FOR CPU TIME: CPU_TIME_DELTA
FOR DISK READ: DISK_READS_DELTA
FOR BUFFER GETS: BUFFER_GETS_DELTA


SELECT SQL_TEXT, X.CPU_TIME
FROM DBA_HIST_SQLTEXT DHST,
     (SELECT DHS.SQL_ID SQL_ID, SUM(DHS.CPU_TIME_DELTA) CPU_TIME
        FROM DBA_HIST_SQLSTAT DHS
       WHERE DHS.SNAP_ID IN (SELECT SNAP_ID
                               FROM DBA_HIST_SNAPSHOT
                              WHERE BEGIN_INTERVAL_TIME >= TO_DATE('07/20/2008','MM/DD/YYYY')
                                AND END_INTERVAL_TIME <= TO_DATE('07/27/2008','MM/DD/YYYY'))
       GROUP BY DHS.SQL_ID) X
WHERE X.SQL_ID = DHST.SQL_ID
ORDER BY X.CPU_TIME DESC;
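The same query can be repointed at any of the delta columns listed above. For example, a variant ranking by physical reads over the same snapshot window (substitute BUFFER_GETS_DELTA or EXECUTIONS_DELTA in the same way):

SELECT SQL_TEXT, X.DISK_READS
FROM DBA_HIST_SQLTEXT DHST,
     (SELECT DHS.SQL_ID SQL_ID, SUM(DHS.DISK_READS_DELTA) DISK_READS
        FROM DBA_HIST_SQLSTAT DHS
       WHERE DHS.SNAP_ID IN (SELECT SNAP_ID
                               FROM DBA_HIST_SNAPSHOT
                              WHERE BEGIN_INTERVAL_TIME >= TO_DATE('07/20/2008','MM/DD/YYYY')
                                AND END_INTERVAL_TIME <= TO_DATE('07/27/2008','MM/DD/YYYY'))
       GROUP BY DHS.SQL_ID) X
WHERE X.SQL_ID = DHST.SQL_ID
ORDER BY X.DISK_READS DESC;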

Sunday, July 27, 2008

user: SYS vs SYSTEM

SYS

When any database is created, the user SYS is automatically created and granted the DBA role.

All of the base tables and views for the database's data dictionary are stored in the schema SYS. These base tables and views are critical for the operation of Oracle. To maintain the integrity of the data dictionary, tables in the SYS schema are manipulated only by Oracle. They should never be modified by any user or database administrator, and no one should create any tables in the schema of user SYS. (However, you can change the storage parameters of the data dictionary settings if necessary.)

Ensure that most database users are never able to connect using the SYS account.

SYSTEM

When a database is created, the user SYSTEM is also automatically created and granted the DBA role.

The SYSTEM username is used to create additional tables and views that display administrative information, as well as internal tables and views used by various Oracle options and tools. Never create tables of interest to individual users in the SYSTEM schema.

SYS has the SYSDBA privilege; SYSTEM does not.
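One practical consequence: SYS can only connect with the SYSDBA (or SYSOPER) privilege; an ordinary connect is rejected with ORA-28009. A quick SQL*Plus sketch (passwords illustrative):

CONNECT sys/password AS SYSDBA   -- succeeds
CONNECT sys/password             -- fails: ORA-28009: connection as SYS should be as SYSDBA or SYSOPER
CONNECT system/password          -- succeeds, as a normal DBA-role user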

Wednesday, July 16, 2008

Data Pump Import (impdp) part #2

2. Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter itself, because parameter files cannot be nested. Using a parameter file is recommended if you are using parameters whose values require quotation marks. See Use of Quotation Marks On the Data Pump Command Line.
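As a sketch (the file name and contents below are only illustrative), a parameter file keeps quoted values away from the shell. Contents of a hypothetical hr_imp.par:

DIRECTORY=dump_dir
DUMPFILE=hr.dmp
SCHEMAS=hr
EXCLUDE=TABLE:"LIKE 'EMP%'"

Invoked as:

impdp hr/hr PARFILE=hr_imp.par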


3. Interactive-Command Interface: Stops logging to the terminal and displays the Import prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an import operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job. The commands available in interactive-command mode are listed below:


ADD_FILE - Add additional dump files.

CONTINUE_CLIENT - Exit interactive mode and enter logging mode.

EXIT_CLIENT - Stop the export client session, but leave the job running.

FILESIZE - Redefine the default size to be used for any subsequent dump files.

HELP - Display a summary of available commands.

KILL_JOB - Detach all currently attached client sessions and kill the current job.

PARALLEL - Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 10g.

START_JOB - Restart a stopped job to which you are attached.

STATUS - Display detailed status for the current job and/or set the status interval.

STOP_JOB - Stop the current job for later restart.
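For example, attaching to a running job (the job name below is illustrative; actual names can be found in DBA_DATAPUMP_JOBS) drops you at the interactive prompt, where the commands above apply:

impdp hr/hr ATTACH=SYS_IMPORT_SCHEMA_01

Import> STATUS
Import> STOP_JOB=IMMEDIATE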



Data Pump Import Modes


Full Import Mode

A full import is specified using the FULL parameter. In full import mode, the entire content of the source (dump file set or another database) is loaded into the target database. This is the default for file-based imports. You must have the IMP_FULL_DATABASE role if the source is another database.
Cross-schema references are not imported for non-privileged users. Example:

impdp system/password DIRECTORY=dpump_dir DUMPFILE=expfull.dmp FULL=Y

Schema Mode

A schema import is specified using the SCHEMAS parameter. In a schema import, only objects owned by the specified schemas are loaded. The source can be a full, table, tablespace, or schema-mode export dump file set or another database. If you have the IMP_FULL_DATABASE role, then a list of schemas can be specified and the schemas themselves (including system privilege grants) are created in the database in addition to the objects contained within those schemas. Cross-schema references are not imported for non-privileged users unless the other schema is remapped to the current schema. Example:


impdp hr/hr SCHEMAS=hr DIRECTORY=dump_dir
DUMPFILE=hr-20080107.dmp

impdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
ESTIMATE_ONLY=Y EXCLUDE=TABLE:"IN ('EMP','EMP_2005')"

impdp system/password SCHEMAS=hr DIRECTORY=dump_dir
DUMPFILE=hr20080120.dmp REMAP_SCHEMA=hr:test

impdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr INCLUDE=TABLE:"LIKE 'EMP_%'" DUMPFILE=nr_emp_tables.dmp TABLE_EXISTS_ACTION=REPLACE

Table Mode

A table-mode import is specified using the TABLES parameter. In table mode, only the specified set of tables, partitions, and their dependent objects are loaded. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. You must have the IMP_FULL_DATABASE role to specify tables that are not in your own schema.

impdp hr/hr TABLES=emp DIRECTORY=dump_dir CONTENT=DATA_ONLY DUMPFILE=emp-20080106.dmp

impdp hr/hr TABLES=login DIRECTORY=dump_dir DUMPFILE=hr_login_table.dmp TABLE_EXISTS_ACTION=REPLACE


Tablespace Mode

A tablespace-mode import is specified using the TABLESPACES parameter. In tablespace mode, all objects contained within the specified set of tablespaces are loaded, along with the dependent objects. The source can be a full, schema, tablespace, or table-mode export dump file set or another database. For unprivileged users, objects not remapped to the current schema will not be processed.

impdp system/password DIRECTORY=dpump_dir DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6


Transportable Tablespace Mode

A transportable tablespace import is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, the metadata from a transportable tablespace export dump file set or from another database is loaded. The datafiles specified by the TRANSPORT_DATAFILES parameter must be made available from the
source system for use in the target database, typically by copying them over to the target system.


impdp system/password DIRECTORY=dpump_dir DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=Y LOGFILE=tts.log



Data Pump Import (impdp) part #1

Data Pump Import is a utility for loading an export dump file set into a target system. The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format. During an import operation, the Data Pump Import utility uses these files to locate each database object in the dump file set.

Import can also be used to load a target database directly from a source database with no intervening dump files. This allows export and import operations to run concurrently, minimizing total elapsed time. This is known as a network import.
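A network import is requested with the NETWORK_LINK parameter, which names a database link pointing at the source database; note that no DUMPFILE is involved, though DIRECTORY is still needed for the log file. A sketch (link name illustrative):

impdp hr/hr DIRECTORY=dump_dir NETWORK_LINK=source_db SCHEMAS=hr LOGFILE=hr_net.log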

Data Pump Import enables you to specify whether a job should move a subset of the data and metadata from the dump file set or the source database (in the case of a network import), as determined by the import mode. This is done using data filters and metadata filters, which are implemented through Import commands

Invoking Data Pump Import

The Data Pump Import utility is invoked using the impdp command. The characteristics of the import operation are determined by the import parameters you specify. These parameters can be specified either on the command line or in a parameter file.

Data Pump Import Interfaces

You can interact with Data Pump Import by using a command line, a parameter file, or an interactive-command mode.

1. Command-Line Interface: Enables you to specify the Import parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see Parameters Available in Import's Command-Line Mode.


Parameter - description (default, where one exists):

attach - connects a client session to a currently running Data Pump import job
content - filters what is imported: data_only, metadata_only, all (default: all)
directory - destination of log and dump files (default: data_pump_dir)
dumpfile - name of the dump file
estimate - method used to estimate dump file size: blocks or statistics
estimate_only - Y/N; instructs Data Pump whether data should be imported or only estimated (default: N)
exclude - excludes objects and data from being imported
flashback_scn - SCN to which the database is flashed back during the import
full - Y/N; imports all data and metadata in a full-mode import (default: N)
help - Y/N; displays a list of available commands and options (default: N)
include - specifies which objects and data will be imported
job_name - name of the job (default: system generated)
logfile - name of the log file (default: import.log)
network_link - source database link for a Data Pump job importing from a remote database
nologfile - Y/N flag used to suppress log file creation
parallel - sets the number of workers for the import job (default: 1)
parfile - names the parameter file to use
query - filters rows from tables during the import
schemas - names the schemas to be imported for a schema-mode import
remap_datafile - changes the names of source datafiles
remap_tablespace - changes the names of source tablespaces
remap_schema - imports data into a new schema
skip_unusable_indexes - Y/N flag; whether to skip indexes in the unusable state (default: N)
sqlfile - name of the file to which the DDL of the import will be written
table_exists_action - instructs Import how to proceed if a table being imported already exists; values are skip, append, truncate, and replace (default: skip)
status - displays detailed status of the Data Pump job
tables - list of tables and partitions to be imported for a table-mode import
tablespaces - list of tablespaces to import in tablespace mode
transport_full_check - specifies whether the tablespaces being imported should be verified as a self-contained set
transport_tablespaces - specifies a transportable tablespace mode import
transport_datafiles - list of datafiles to be imported during a transportable tablespace mode import
version - specifies the version of database objects to be imported


Please go to part #2.

Sunday, July 13, 2008

Data Pump Export (expdp) part #2

2. Parameter File Interface: Enables you to specify command-line parameters in a parameter file. The only exception is the PARFILE parameter, because parameter files cannot be nested.

3. Interactive-Command Interface: Stops logging to the terminal and displays the Export prompt, from which you can enter various commands, some of which are specific to interactive-command mode. This mode is enabled by pressing Ctrl+C during an export operation started with the command-line interface or the parameter file interface. Interactive-command mode is also enabled when you attach to an executing or stopped job. The commands available in interactive-command mode are listed below:

ADD_FILE - Add additional dump files.

CONTINUE_CLIENT - Exit interactive mode and enter logging mode.

EXIT_CLIENT - Stop the export client session, but leave the job running.

FILESIZE - Redefine the default size to be used for any subsequent dump files.

HELP - Display a summary of available commands.

KILL_JOB - Detach all currently attached client sessions and kill the current job.

PARALLEL - Increase or decrease the number of active worker processes for the current job. This command is valid only in the Enterprise Edition of Oracle Database 10g.

START_JOB - Restart a stopped job to which you are attached.

STATUS - Display detailed status for the current job and/or set the status interval.

STOP_JOB - Stop the current job for later restart.


Data Pump Export Modes: One of the most significant characteristics of an export operation is its mode, because the mode largely determines what is exported. Export provides different modes for unloading different portions of the database. The mode is specified on the command line, using the appropriate parameter. The available modes are as follows:

1. Full Export Mode: A full export is specified using the FULL parameter. In a full database export, the entire database is unloaded. Example:

expdp system/password DIRECTORY=dpump_dir DUMPFILE=expfull.dmp FULL=Y

2. Schema Mode: A schema export is specified using the SCHEMAS parameter. This is the default export mode.
Example:

expdp hr/hr SCHEMAS=hr DIRECTORY=dump_dir DUMPFILE=hr.dmp FLASHBACK_SCN=65610662

expdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
EXCLUDE=TABLE:"LIKE 'EMP'" DUMPFILE=hr_exclude_emp.dmp

expdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
INCLUDE=TABLE:"LIKE 'EM%'" DUMPFILE=hr.dmp


expdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
ESTIMATE_ONLY=Y EXCLUDE=TABLE:"IN ('EMP','EMP_2005')"

3. Table Mode : A table export is specified using the TABLES parameter. In table mode, only a specified set of tables, partitions, and their dependent objects are unloaded.
Example:

expdp hr/hr DIRECTORY=dump_dir SCHEMAS=prod7
INCLUDE=TABLE:"LIKE 'HR_%'" DUMPFILE=hr_tables.dmp

expdp hr/hr TABLES=login,emp,salary DIRECTORY=dump_dir
DUMPFILE=hr_table.dmp


expdp hr/hr TABLES=emp DIRECTORY=dump_dir
DUMPFILE=emp-20080106.dmp FLASHBACK_SCN=65619542


expdp hr/hr DIRECTORY=dump_dir DUMPFILE=emp.dmp
TABLES=EMP:EMP_PART_2005

expdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
EXCLUDE=TABLE:"LIKE 'EMP'" EXCLUDE=TABLE:"= 'SALARY'"



4. Tablespace Mode : A tablespace export is specified using the TABLESPACES parameter. In tablespace mode, only the tables contained in a specified set of tablespaces are unloaded. If a table is unloaded, its dependent objects are also unloaded. Both object metadata and data
are unloaded. In tablespace mode, if any part of a table resides in the specified set, then that table and all of its dependent objects are exported. Privileged users get all tables.
Example:

expdp system/password DIRECTORY=dpump_dir DUMPFILE=tbs.dmp
TABLESPACES=tbs_4, tbs_5, tbs_6



5. Transportable Tablespace Mode : A transportable tablespace export is specified using the TRANSPORT_TABLESPACES parameter. In transportable tablespace mode, only the metadata for the tables (and their dependent objects) within a specified set of tablespaces is unloaded. This allows the tablespace datafiles to then be copied to another Oracle database and incorporated using a transportable tablespace import. This mode requires that you have the EXP_FULL_DATABASE role.
Unlike tablespace mode, transportable tablespace mode requires that the specified tables be completely self-contained. That is, the partitions of all tables in the set must also be in the set. Transportable tablespace exports cannot be restarted once stopped. Also, they cannot have a degree of parallelism greater than 1.
Example:

expdp system/password DIRECTORY=dpump_dir DUMPFILE=tts.dmp
TRANSPORT_TABLESPACES=tbs_1 TRANSPORT_FULL_CHECK=Y LOGFILE=tts.log
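The check performed by TRANSPORT_FULL_CHECK=Y can also be run by hand before the export, using the DBMS_TTS package (passing TRUE requests the full two-way dependency check):

EXECUTE DBMS_TTS.TRANSPORT_SET_CHECK('tbs_1', TRUE);
SELECT * FROM TRANSPORT_SET_VIOLATIONS;

No rows returned from TRANSPORT_SET_VIOLATIONS means the tablespace set is self-contained.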



Data Pump Export (expdp) part #1

Data Pump Export is a utility for unloading data and metadata into a set of operating system files called a dump file set. The dump file set can be imported only by the Data Pump Import utility. The dump file set can be imported on the same system or it can be moved to another system and
loaded there.

The dump file set is made up of one or more disk files that contain table data, database object metadata, and control information. The files are written in a proprietary, binary format.

Invoking Data Pump Export :
The Data Pump Export utility is invoked using the expdp command. The characteristics of the export operation are determined by the Export parameters you specify. These parameters can be specified either on the command line or in a parameter file.

Data Pump Export Interfaces: You can interact with Data Pump Export by using a command line, a parameter file, or an interactive-command mode.

1. Command-Line Interface: Enables you to specify most of the Export parameters directly on the command line. For a complete description of the parameters available in the command-line interface, see Parameters Available in Export's Command-Line Mode.



Parameter - description (default, where one exists):

attach - connects a client session to a currently running Data Pump export job
content - filters what is exported: data_only, metadata_only, all (default: all)
directory - destination of log and dump files (default: data_pump_dir)
dumpfile - name of the dump file
estimate - method used to estimate dump file size: blocks or statistics
estimate_only - Y/N; instructs Data Pump whether data should be exported or only estimated (default: N)
exclude - excludes objects and data from being exported
filesize - maximum size of each dump file
flashback_scn - SCN to which the database is flashed back during the export
flashback_time - timestamp to which the database is flashed back during the export
full - Y/N; exports all data and metadata in a full-mode export (default: N)
help - Y/N; displays a list of available commands and options (default: N)
include - specifies which objects and data will be exported
job_name - name of the job (default: system generated)
logfile - name of the log file (default: export.log)
network_link - source database link for a Data Pump job exporting a remote database
nologfile - Y/N flag used to suppress log file creation
parallel - sets the number of workers for the export job (default: 1)
parfile - names the parameter file to use
query - filters rows from tables during the export
schemas - names the schemas to be exported for a schema-mode export
status - displays detailed status of the Data Pump job
tables - list of tables and partitions to be exported for a table-mode export
tablespaces - list of tablespaces to export in tablespace mode
transport_full_check - specifies whether the tablespaces being exported should be verified as a self-contained set
transport_tablespaces - specifies a transportable tablespace mode export
version - specifies the version of database objects to be exported
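Several of these parameters combine naturally. With PARALLEL greater than 1, DUMPFILE should include the %U substitution variable so that each worker writes its own file, and FILESIZE caps each piece; a sketch:

expdp hr/hr DIRECTORY=dump_dir SCHEMAS=hr
DUMPFILE=hr_%U.dmp FILESIZE=2G PARALLEL=4 LOGFILE=hr_par.log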



Please go to part #2.