Saturday, April 28, 2007
Brought back deleted datafile
SQL> CREATE TABLESPACE test DATAFILE '/backup2/test_01.dbf'
SIZE 10M;
SQL> ALTER TABLESPACE test ADD DATAFILE '/backup2/test_02.dbf'
SIZE 800M ;
SQL> select TABLESPACE_NAME,FILE_NAME from dba_data_files where TABLESPACE_NAME='TEST';
TABLESPACE_NAME FILE_NAME
------------------------------ -----------------------------------------
TEST /backup2/test_01.dbf
TEST /backup2/test_02.dbf
SQL> alter tablespace test drop datafile '/backup2/test_02.dbf';
Tablespace altered.
SQL> select TABLESPACE_NAME,FILE_NAME from dba_data_files where TABLESPACE_NAME='TEST';
TABLESPACE_NAME FILE_NAME
------------------------------ --------------------------------------------------------------------------------
TEST /backup2/test_01.dbf
SQL> ALTER TABLESPACE test ADD DATAFILE '/backup2/test_02.dbf'
SIZE 800M REUSE;
Tablespace altered.
SQL> select TABLESPACE_NAME,FILE_NAME from dba_data_files where TABLESPACE_NAME='TEST';
TABLESPACE_NAME FILE_NAME
------------------------------ --------------------------------------------------------------------------------
TEST /backup2/test_01.dbf
TEST /backup2/test_02.dbf
** There is no guarantee that the objects stored in the dropped datafile are still available when it is brought back :)
So when such a thing happens, the ideal approach is to restore from an export dump or another database backup ...
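Before trusting the re-added file, it is worth checking which segments still have extents mapped to it. A minimal sketch, reusing the file name from this example:
SQL> select owner, segment_name, segment_type
     from dba_extents
     where file_id = (select file_id from dba_data_files
                      where file_name = '/backup2/test_02.dbf');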
Tuesday, April 24, 2007
ORA-01502: index 'string.string' or partition of such index is in unusable state
Cause: An attempt has been made to access an index or index partition that has been marked unusable by a direct load or by a DDL operation.
Action: DROP the specified index, REBUILD the specified index, or REBUILD the unusable index partition.
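A short sketch for finding and fixing such indexes (the owner, index, and partition names below are only examples):
SQL> SELECT owner, index_name, status FROM dba_indexes WHERE status = 'UNUSABLE';
SQL> SELECT index_owner, index_name, partition_name FROM dba_ind_partitions WHERE status = 'UNUSABLE';
SQL> ALTER INDEX scott.emp_name_idx REBUILD;
SQL> ALTER INDEX scott.sales_idx REBUILD PARTITION sales_q1;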
Saturday, April 21, 2007
Start a Database instance
To start a database instance, use the following:
Syntax: (This is a SQL*Plus command, not part of standard SQL)
STARTUP [FORCE] [RESTRICT] [PFILE=filename] NOMOUNT
STARTUP [FORCE] [RESTRICT] [PFILE=filename] MOUNT [dbname]
STARTUP [FORCE] [RESTRICT] [PFILE=filename] OPEN [Open_options] [dbname]
Open_options: READ {ONLY | WRITE [RECOVER]} | RECOVER
Key:
FORCE - Shut down the current Oracle instance (if it is running) with SHUTDOWN mode ABORT, before restarting it. If the current instance is running and FORCE is not specified, an error results. FORCE is useful while debugging and under abnormal circumstances. It should not normally be used.
RESTRICT - Only allow Oracle users with the RESTRICTED SESSION system privilege to connect to the database. Later, you can use the ALTER SYSTEM command to disable the restricted session feature.
PFILE=filename - The init.ora parameter file to be used while starting up the instance. If PFILE is not specified, then the default STARTUP parameter file is used. The default file used is platform specific. For example, the default file is $ORACLE_HOME/dbs/init$ORACLE_SID.ora on UNIX, and %ORACLE_HOME%\database\initORCL.ora on Windows.
MOUNT dbname - Mount a database but do not open it. dbname is the name of the database to mount or open. If no database name is specified, the database name is taken from the initialization parameter DB_NAME.
OPEN - Mount and open the specified database.
NOMOUNT - Don't mount the database upon instance startup. Cannot be used with MOUNT or OPEN.
RECOVER - Specifies that media recovery should be performed, if necessary, before starting the instance.
STARTUP RECOVER has the same effect as issuing the RECOVER DATABASE command and starting an instance. Only 'complete recovery' is possible with the RECOVER option. Recovery proceeds, if necessary, as if AUTORECOVERY is set to ON, regardless of whether or not AUTORECOVERY is enabled.
If a redo log file is not found in the expected location, recovery will continue by prompting you with the suggested location and name of the subsequent log files that need to be applied.
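For example, an instance can be taken through the stages manually, ending in the same state as a plain STARTUP:
SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;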
Shutdown a Database instance
To shut down a database instance, use the following:
Syntax: (This is a SQL*Plus command, not part of standard SQL)
SHUTDOWN ABORT
SHUTDOWN IMMEDIATE
SHUTDOWN TRANSACTIONAL [LOCAL]
SHUTDOWN NORMAL
Key:
ABORT - The fastest possible shutdown of the database without waiting for calls to complete or users to disconnect. Uncommitted transactions are not rolled back. All users currently connected to the database are implicitly disconnected and the next database startup will require instance recovery. You must use this option if a background process terminates abnormally.
IMMEDIATE - Does not wait for current calls to complete or users to disconnect from the database. Further connects are prohibited. The database is closed and dismounted. The instance is shut down and no instance recovery is required on the next database startup.
NORMAL - NORMAL is the default option. It waits for users to disconnect from the database. Further connects are prohibited. The database is closed and dismounted. The instance is shut down and no instance recovery is required on the next database startup.
TRANSACTIONAL [LOCAL] - A planned shutdown of an instance, allowing active transactions to complete first. It prevents clients from losing work without requiring all users to log off. No client can start a new transaction on this instance; attempting to start a new transaction results in disconnection. After all transactions have completed, any client still connected to the instance is disconnected and the instance shuts down (as with SHUTDOWN IMMEDIATE). The next startup of the database will not require any instance recovery procedures.
The LOCAL mode specifies a transactional shutdown on the local instance only, so that it only waits on local transactions to complete, not all transactions. This is useful, for example, for scheduled outage maintenance.
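For instance, a typical maintenance restart might combine the options above like this (a sketch, not the only way to do it):
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP RESTRICT
SQL> ALTER SYSTEM DISABLE RESTRICTED SESSION;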
Friday, April 13, 2007
Oracle 10g Installation in Solaris 10
Install Solaris
During installation of Solaris, create two additional partitions named 'ORACLE' and 'ORADATA', which will contain the Oracle base directory and the Oracle datafile directory respectively. The swap partition must be greater than 400MB (2GB recommended, or double the current RAM size). (You can install Oracle in one partition, but the above arrangement has been followed in most software projects.)
Log In to the System as root
Before you install the Oracle software, you must complete several tasks as the root user. If you are installing the software from an X Window System workstation or X terminal, start a local terminal session, for example, an X terminal (xterm).
Check the Hardware Requirements
The system must meet the following minimum hardware requirements:
- Physical memory (RAM): 512 MB (524288 KB)
- Swap space: 1 GB (1048576 KB) or twice the size of RAM
- Disk space in /tmp: 400 MB (409600 KB)
- Disk space for software files: 2.5 GB (2621440 KB)
- To determine the physical RAM size, enter the following command:
# /usr/sbin/prtconf | grep "Memory size"
- To determine the size of the configured swap space, enter the following command:
# /usr/sbin/swap -s
- To determine the amount of free disk space available in the /tmp directory, enter the following command:
# df -k /tmp
- To determine the amount of free disk space available on the system, enter the following command:
# df -k
- To determine whether the system architecture is 64-bit, enter the following command:
# /bin/isainfo -kv
Check the Software Requirements
The system must meet the following minimum software requirements:
- The version of Solaris must be Solaris 8 or Solaris 9.
- The following packages must be installed:
- SUNWarc
- SUNWlibms
- SUNWi1of
- SUNWbtool
- SUNWsprot
- SUNWi15cs
- SUNWi1cs
- SUNWhea
- SUNWsprox
- SUNWlibm
- SUNWtoo
- SUNWxwfnt
- To determine which version of Solaris is installed, enter the following command:
# uname -r
5.8
In this example, the version shown is Solaris 8 (5.8). If necessary, see your operating system documentation for information about upgrading the operating system.
- To determine whether the required packages are installed, enter a command similar to the following:
# pkginfo -i SUNWarc SUNWbtool SUNWhea SUNWlibm SUNWlibms SUNWsprot SUNWsprox SUNWtoo SUNWi1of SUNWi1cs SUNWi15cs SUNWxwfnt
If a package is not installed, then install it. See your operating system or software documentation for information about installing packages.
- To determine whether an operating system patch is installed, enter a command similar to the following:
# /usr/sbin/patchadd -p | grep patch_number
Create Required UNIX Groups and User
First of all, the following users and groups should be created:
- Groups: oinstall, dba, oper (optional)
- User: oracle, having oinstall as its primary group
# groupadd oinstall
# groupadd dba
# groupadd oper
# useradd -g oinstall -G dba[,oper] -d /export/home/oracle -s /usr/bin/bash -m oracle
# passwd -r files oracle
Now create a project for oracle, as the root user, so that all resources can be allocated to this project:
# projadd oracle
Create Required Directories
Create directories with names similar to the following and specify the correct owner, group, and permissions for them:
/u01/app/oracle (the Oracle base directory: u01=oracle)
/u02/oradata (an optional Oracle datafile directory: u02=oradata)
(Here u01 and u02 are the partitions named oracle and oradata respectively.)
Identify one file system with 2.5 GB (10 GB for Therap) of free disk space for the Oracle base directory, and another file system with 1.2 GB (45 GB for Therap) of free disk space for the Oracle datafile directory.
To create the required directories and specify the correct owner, group and permissions for them, follow these steps:
- Enter the following command to create subdirectories in the mount point directory that you identified for the Oracle base directory:
# mkdir -p /u01/app/oracle
- If you intend to use a second file system for the Oracle database files, create an oradata subdirectory in the mount point directory that you identified for the Oracle datafile directory (shown as /u02 in the examples):
# mkdir /u02/oradata
- Change the owner and group of the directories that you created to the oracle user and the oinstall group:
# chown -R oracle:oinstall /u01/app/oracle /u02/oradata
- Change the permissions on the directories that you created to 775:
# chmod -R 775 /u01/app/oracle /u02/oradata
Configure Kernel Parameters
Verify that the following kernel parameters are set to values greater than or equal to the recommended value shown:
Parameter | Recommended Value |
---|---|
noexec_user_stack | 1 |
semsys:seminfo_semmni | 100 |
semsys:seminfo_semmns | 1024 |
semsys:seminfo_semmsl | 256 |
shmsys:shminfo_shmmax | 4294967295 |
shmsys:shminfo_shmmin | 1 |
shmsys:shminfo_shmmni | 100 |
shmsys:shminfo_shmseg | 10 |
semsys:seminfo_semvmx | 32767 |
- To view the current values of these parameters, enter the following commands:
# grep noexec_user_stack /etc/system
# /usr/sbin/sysdef | grep SEM
# /usr/sbin/sysdef | grep SHM
- If you have to change any of the current values, follow these steps:
- Create a backup copy of the /etc/system file, for example:
# cp /etc/system /etc/system.orig
- Open the /etc/system file in any text editor:
# vi /etc/system
- To specify new values for the parameters, add lines similar to the following to the /etc/system file, or edit the lines if the file already contains them:
set noexec_user_stack=1
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=1024
set semsys:seminfo_semmsl=256
set semsys:seminfo_semvmx=32767
set shmsys:shminfo_shmmax=4294967295
set shmsys:shminfo_shmmin=1
set shmsys:shminfo_shmmni=100
set shmsys:shminfo_shmseg=10
- Reboot the system
- On Solaris 10, these kernel settings are managed through project resource controls. To view the current values of the resource controls, enter the following commands:
# id -p // to verify the project id
uid=0(root) gid=0(root) projid=1 (user.root)
# prctl -n project.max-shm-memory -i project user.root
# prctl -n project.max-sem-ids -i project user.root
- If you must change any of the current values, then follow these steps:
- To modify the value of max-shm-memory to 6 GB:
# prctl -n project.max-shm-memory -v 6gb -r -i project user.root
- To modify the value of max-sem-ids to 256:
# prctl -n project.max-sem-ids -v 256 -r -i project user.root
Log In as the oracle User and Configure the oracle User’s Environment
You run the Installer from the oracle account. However, before you start the Installer you must configure the environment of the oracle user. To configure the environment, you must:
- Set the default file mode creation mask (umask) to 022 in the shell startup file.
- Set the DISPLAY, ORACLE_BASE, and ORACLE_SID environment variables.
- Start another terminal session.
- To determine the default shell for the oracle user, enter the following command:
$ echo $SHELL
- Open the oracle user’s shell startup file in any text editor:
$ vi .profile
- Enter or edit the following line in the shell startup file, specifying a value of 022 for the default file creation mask:
umask 022
- Save the file and exit from the editor.
- Reboot
- Enter commands similar to the following to set the ORACLE_BASE and ORACLE_SID environment variables:
$ ORACLE_BASE=/u01/app/oracle
$ ORACLE_SID=sales
$ export ORACLE_BASE ORACLE_SID
- Enter the following commands to ensure that the ORACLE_HOME and TNS_ADMIN environment variables are not set:
$ unset ORACLE_HOME
$ unset TNS_ADMIN
- To verify that the environment has been set correctly, enter the following commands:
$ umask
$ env | more
Install Oracle Database 10g
After configuring the oracle user’s environment, start the Installer and install the Oracle software, as follows:
$ /cdrom/cdrom0/runInstaller
The following table describes the recommended action for each installation screen.
Screen | Recommended Action |
---|---|
Welcome | Click Next. |
Select Installation Method | The Basic Installation option is selected by default. Specify the directory path of the Oracle home. Ensure that the UNIX DBA group “oinstall” is selected. If you want to create a starter database, then specify a name and password for it. Then, click Next. |
Specify Inventory Directory and Credentials | This screen is displayed only during the first installation of Oracle products on a system. Specify the full path of the Oracle Inventory directory as “/oracle”. Ensure that the operating system group selected is “dba”. Then, click Next. |
Product-Specific Prerequisite Checks | Verify that all of the prerequisite checks succeed, then click Next. Oracle Universal Installer checks the system to verify that it is configured correctly to run Oracle software. If you have completed all of the preinstallation steps in this guide, all of the checks should pass. If a check fails, then review the cause of the failure listed for that check on the screen. If possible, rectify the problem and rerun the check. Alternatively, if you are satisfied that your system meets the requirements, then you can select the check box for the failed check to manually verify the requirement. |
Specify Oracle Datafile Directory | Specify the full path of the Oracle Datafile directory as “/oradata”. |
Summary | Review the information displayed on this screen, and then click Install. |
Install | This screen displays status information while the product is being installed. |
Configuration Assistants | This screen displays status information for the configuration assistants that configure the software and create a database. When the message is displayed at the end of this process, click OK to continue. |
Execute Configuration Scripts | When prompted, read the instructions and then run the scripts (orainstRoot.sh and root.sh) mentioned on this screen. Click OK to continue. |
End of Installation | The configuration assistants configure several Web-based applications, including Oracle Enterprise Manager Database Control. This screen displays the URLs configured for these applications. Make a note of the URLs used. The port numbers used in these URLs are also recorded in the following file: oracle_home/install/portlist.ini To exit from Oracle Universal Installer, click Exit and then click Yes. |
After Installation
- Open the file /var/opt/oracle/oratab in your favourite editor. At the end of this file, all the databases are listed like this: orcl:/oracle/app/oracle/oracle/product/10.2.0/db_1:N
- Replace N with Y.
- If you installed 2 databases, then you’ll have 2 such lines in this file. If a line ends with N, it means that database won’t be started during system bootup; change it to Y as above and save the file.
- Open the oracle user’s shell startup file in any text editor: $ vi .profile
- Add the following lines (using your actual Oracle home path and database SID):
ORACLE_HOME=oracle_home_path
ORACLE_SID=database_SID
export ORACLE_SID ORACLE_HOME
- Save and exit the editor and reboot.
- Congratulations, you have completed the installation process successfully.
Sunday, April 8, 2007
Create and Manage Tablespaces
You must make several choices when creating a tablespace:
- Tablespace data block size.
- Whether to make the tablespace bigfile or smallfile.
- Whether to manage extents locally or with the dictionary.
- Whether to manage segment space automatically or manually.
Data Block Size : The SYSTEM and SYSAUX tablespaces have the database’s standard data block size, defined at creation time by the initialization parameter db_block_size. Other tablespaces can have different data block sizes, defined at tablespace creation time. The supported data block sizes are 2K, 4K, 8K, 16K, and 32K. You may choose a non-standard data block size, but it must be a multiple of your OS block size.
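Note that a tablespace with a non-standard block size also needs a matching buffer cache configured first. A minimal sketch, assuming an spfile and an 8K standard block size; the tablespace name, path, and sizes are made up:
ALTER SYSTEM SET db_16k_cache_size = 16M;
CREATE TABLESPACE ts_16k
DATAFILE '/oradata/datafiles/ts_16k_01.dbf' SIZE 100M
BLOCKSIZE 16K;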
Tablespace Type: There are two type of tablespaces :
- Bigfile Tablespace: These are new to Oracle 10g and are built on a single datafile (or temp file), which can be as many as 2^32 data blocks in size. So, a bigfile tablespace that uses 8KB data blocks can be as much as 32TB in size. Bigfile tablespaces are intended for very large databases. When a very large database has thousands of read/write datafiles, operations that must update the datafile headers, such as checkpoints, can take a relatively long time. If you reduce the number of datafiles, these operations can complete faster.
- Smallfile Tablespace: Smallfile tablespace is the new name for the traditional Oracle tablespace. With a smallfile tablespace, you can have multiple datafiles for a tablespace. Each datafile can be as many as 2^22 data blocks in size. So datafiles in a smallfile tablespace that uses 8KB data blocks are limited to 32GB. The smallfile tablespace can have as many as 1,022 datafiles, limiting the 8KB data block tablespace to slightly less than 32TB, about the same as a bigfile tablespace. The SYSTEM and SYSAUX tablespaces are always created as smallfile tablespaces.
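As an illustration, the type is chosen with the BIGFILE or SMALLFILE keyword, SMALLFILE being the default unless the database default was changed (paths and sizes below are made up):
CREATE BIGFILE TABLESPACE big_data
DATAFILE '/oradata/datafiles/big_data_01.dbf' SIZE 10G;
CREATE SMALLFILE TABLESPACE small_data
DATAFILE '/oradata/datafiles/small_data_01.dbf' SIZE 1G;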
Extents Management: You can use tablespaces with either local extent management or dictionary extent management. With dictionary extent management, the database tracks free and used extents in the data dictionary, changing the FET$ and UET$ tables with recursive SQL. With local extent management, the database tracks extents through the use of bitmaps, eliminating the recursive SQL. Local extent management is the default if not specified and is generally the preferred technique.
With locally managed tablespaces, you have two options for how extents are allocated: UNIFORM or AUTOALLOCATE. The UNIFORM option tells the database to allocate and deallocate extents in the tablespace with the same fixed size, which you can specify or let default to 1MB. UNIFORM is the default for temporary tablespaces and cannot be specified for undo tablespaces. AUTOALLOCATE, on the other hand, tells the database to vary the size of extents for each segment. For example, on Windows and Linux with 8KB data blocks, each segment starts out with 64KB extents for the first 16 extents, and then extents increase in size to 1MB for the next 63 extents. The size then increases to 8MB for the next 120 extents, then 64MB, and so on as the segment grows. This algorithm allows small segments to remain small and large segments to grow without gaining too many extents. AUTOALLOCATE is best used for a general-purpose mixture of small and large tables.
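For example, the two allocation policies are requested like this (tablespace names, paths, and sizes are illustrative):
CREATE TABLESPACE uniform_ts
DATAFILE '/oradata/datafiles/uniform_ts_01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;
CREATE TABLESPACE auto_ts
DATAFILE '/oradata/datafiles/auto_ts_01.dbf' SIZE 500M
EXTENT MANAGEMENT LOCAL AUTOALLOCATE;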
Segment Space Management: For tablespaces that have local extent management, you can use either manual or automatic segment space management. Manual segment space management exists for backward compatibility and uses free block lists to identify the data blocks available for inserts together with the parameters PCT_FREE and PCT_USED, which control when a block is made available for inserts. After each INSERT or UPDATE, the database compares the remaining free space in that data block with the segment’s PCT_FREE setting. If the data block has less than PCT_FREE free space (meaning that it is almost full), it is taken off the free block list and is no longer available for inserts. The remaining free space is reserved for update operations that may increase the size of rows in that data block. After each UPDATE or DELETE, the database compares the used space in that data block with that segment’s PCT_USED setting. If the data block has less than PCT_USED used space, the data block is deemed empty enough for inserts and is placed on the free block list. When automatic segment space management is specified, bitmaps are used instead of free lists to identify which data blocks are available for inserts. The parameters PCT_FREE and PCT_USED are ignored for segments in tablespaces with automatic segment space management. Automatic segment space management is available only on tablespaces configured for local extent management; it is not available for temporary or system tablespaces. Automatic segment space management performs better and reduces your maintenance tasks, making it the preferred technique.
General Syntax:
CREATE [BIGFILE | SMALLFILE] TABLESPACE tablespace_name
DATAFILE 'path_to_datafile' SIZE size
[REUSE] [AUTOEXTEND {OFF | ON [NEXT size] [MAXSIZE {UNLIMITED | size}]}]
[BLOCKSIZE size]
[LOGGING | NOLOGGING | FILESYSTEM_LIKE_LOGGING]
[FORCE LOGGING]
[ENCRYPTION USING 'encrypt_algorithm']
[DEFAULT STORAGE (storage_clause)]
[ONLINE | OFFLINE]
EXTENT MANAGEMENT LOCAL [AUTOALLOCATE | UNIFORM [SIZE size]]
[SEGMENT SPACE MANAGEMENT {AUTO | MANUAL}]
[FLASHBACK {ON | OFF}]
Example:
CREATE TABLESPACE TEST
DATAFILE '/oradata/datafiles/test_01.dbf'
SIZE 500M AUTOEXTEND ON
NEXT 200M MAXSIZE UNLIMITED
BLOCKSIZE 8K
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 100M
SEGMENT SPACE MANAGEMENT AUTO ONLINE ;
Add A Datafile To A Tablespace:
ALTER TABLESPACE TEST
ADD DATAFILE '/oradata/datafiles/test_02.dbf'
SIZE 800M AUTOEXTEND ON NEXT 200M MAXSIZE UNLIMITED;
Remove a Tablespace: To remove a tablespace from the database, use the DROP TABLESPACE statement. The optional clause INCLUDING CONTENTS recursively removes any segments (tables, indexes, and so on) in the tablespace, like this: DROP TABLESPACE dba_sandbox INCLUDING CONTENTS;
Dropping a tablespace does not automatically remove the datafiles from the file system. Use the additional clause INCLUDING CONTENTS AND DATAFILES to remove the underlying datafiles as well as the stored objects, like this: DROP TABLESPACE hr_data INCLUDING CONTENTS AND DATAFILES;
Put A Tablespace into backup Mode : ALTER TABLESPACE system BEGIN BACKUP; Use the keywords END BACKUP to take a tablespace out of backup mode, like this: ALTER TABLESPACE system END BACKUP;
Read-only Tablespace: When a tablespace is read-only, it does not have to be backed up with the nightly or weekly database backups. One backup after being marked read-only is all that is needed for future recoveries. Tables in a read-only tablespace can only be selected from; their rows cannot be inserted, updated, or deleted. To do this: ALTER TABLESPACE test READ ONLY;
To go back to read-write mode: ALTER TABLESPACE test READ WRITE;
Wednesday, April 4, 2007
Create and Manage Constraints
Oracle supports the following types of constraints:
- NOT NULL
- UNIQUE
- PRIMARY KEY
- REFERENTIAL
- CHECK
NOT NULL Constraints
By default, all columns in a table allow NULL as a valid value. A NULL represents unknown or nonexistent information. Some business rules can be enforced with a NOT NULL constraint. For example, an employee may not be considered a valid employee if their hire date is not known. You enforce this business rule by placing a NOT NULL constraint on the hire_date column of
the employees table. Any INSERT or UPDATE statements fail if the protected column does not have a value. NOT NULL constraints must be declared together with the column definition using in-line syntax. Here is an example using the NOT NULL constraint:
CREATE TABLE employees
(employee_id NUMBER CONSTRAINT nn_emp_id NOT NULL
,hire_date DATE NOT NULL ,first_name VARCHAR2(42)
,last_name VARCHAR2(42) );
UNIQUE Constraints
A UNIQUE constraint ensures that each occurrence of the columns protected by this constraint is different from all other occurrences in the table. UNIQUE constraints cannot be created on columns of type CLOB, NCLOB, BLOB, LONG, LONG RAW, or TIMESTAMP WITH TIME ZONE. Here is how you create an employees table that has a UNIQUE constraint on the payroll_id column using the out-of-line syntax:
CREATE TABLE employees
(employee_id NUMBER NOT NULL,hire_date DATE NOT NULL
,first_name VARCHAR2(42),last_name VARCHAR2(42)
,payroll_id VARCHAR2(10),CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
);
Using the in-line syntax, the statement looks like this:
CREATE TABLE employees
(employee_id NUMBER NOT NULL
,hire_date DATE NOT NULL ,first_name VARCHAR2(42)
,last_name VARCHAR2(42)
,payroll_id VARCHAR2(10) CONSTRAINT uniq_payroll_id UNIQUE
);
No two rows in this table can have the same value in payroll_id. NULL values do not count as a distinct value, so this employees table can have multiple rows with a NULL payroll_id. To ensure that payroll_id is always present, you need a NOT NULL constraint. The database uses an index to help enforce this constraint. The index is usually a unique index, and if you create the UNIQUE constraint together with the table, the database automatically creates a unique index on the columns protected by the UNIQUE constraint, and the name of the index defaults to the name of the constraint. To assign attributes to this index, take advantage of the USING INDEX clause, like this:
CREATE TABLE employees
(employee_id NUMBER NOT NULL ,hire_date DATE NOT NULL
,first_name VARCHAR2(42) ,last_name VARCHAR2(42)
,payroll_id VARCHAR2(10) ,CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
USING INDEX TABLESPACE indx );
You can add a UNIQUE constraint after the table is built by using an ALTER TABLE statement, like this:
ALTER TABLE employees ADD CONSTRAINT uniq_payroll_id UNIQUE (payroll_id) USING INDEX TABLESPACE indx ;
FOREIGN KEY Constraints
FOREIGN KEY constraints are also known as referential integrity constraints because they enforce referential integrity. FOREIGN KEY constraints enforce referential integrity by ensuring that data values referenced in one table are defined in another table. FOREIGN KEY constraints tie these two tables together in a parent/child or referenced/dependent relationship. Here is an example of creating a parent table (DEPARTMENTS) and child table (EMPLOYEES) with a PRIMARY KEY constraint on the parent and a FOREIGN KEY constraint on the child table, using out-of-line syntax:
CREATE TABLE departments
(dept_nbr NUMBER NOT NULL CONSTRAINT department_pk PRIMARY KEY ,dept_name VARCHAR2(32) ,manager_id NUMBER );
CREATE TABLE employees
(employee_id NUMBER NOT NULL ,hire_date DATE NOT NULL
,first_name VARCHAR2(42) ,last_name VARCHAR2(42)
,payroll_id VARCHAR2(10) ,dept_nbr NUMBER
,CONSTRAINT uniq_payroll_id UNIQUE (payroll_id)
USING INDEX TABLESPACE indx
,CONSTRAINT employees_pk PRIMARY KEY (employee_id)
USING INDEX TABLESPACE indx
,CONSTRAINT emp_dept_fk FOREIGN KEY (dept_nbr)
REFERENCES departments(dept_nbr) );
In this example, each employee belongs to a department, so we put a DEPT_NBR column in the EMPLOYEES table, which will hold each employee’s department number. The DEPARTMENTS table defines all the valid department numbers to ensure that DEPT_NBR values appearing in the EMPLOYEES table are defined in the DEPARTMENTS table—in essence that an employee belongs to
a valid department. You implement this relationship or rule with a FOREIGN KEY constraint. By default, FOREIGN KEYs allow NULLs.
By default, the database raises an exception and does not allow you to delete rows from a parent table if those rows are referenced in the child table. If this behavior isn’t what you want, you can tell the database to automatically maintain referential integrity in a couple of ways: by deleting the child rows with the ON DELETE CASCADE clause, or by setting the columns in the child table to NULL with the ON DELETE SET NULL clause, like this:
ALTER TABLE employees ADD CONSTRAINT emp_dept_fk FOREIGN KEY (dept_nbr) REFERENCES departments(dept_nbr) ON DELETE CASCADE;
ALTER TABLE departments ADD CONSTRAINT dept_mgr_fk FOREIGN KEY (manager_id) REFERENCES employees(employee_id) ON DELETE SET NULL;
The first statement tells the database that deleting a department should cause a cascading deletion of that department’s employees. The second statement tells the database that deleting an employee who is a department manager should cause that department’s MANAGER_ID column to revert to NULL.
A Self-Referencing Foreign Key
The parent and child tables do not always have to be separate tables; they can be separate columns of the same table. This configuration is known as a self-referencing foreign key. An example of a self-referencing foreign key can be added to the EMPLOYEES table used in the previous section. The business rule that will be enforced requires that each employee report to a manager and also that the manager be a valid employee. To add this rule to the EMPLOYEES table, add the MANAGER column together with a FOREIGN KEY constraint that references the EMPLOYEES table, like this:
ALTER TABLE employees ADD
(manager NUMBER
,CONSTRAINT mgr_emp_fk FOREIGN KEY (manager)
REFERENCES employees(employee_id)
ON DELETE SET NULL
);
CHECK Constraints
CHECK constraints verify that a column or set of column values meet a specific condition that must evaluate to a Boolean. If the condition evaluates to FALSE, the database raises an exception, and the INSERT or UPDATE statement fails. The condition cannot include subqueries, references to other tables, or calls to functions that are not deterministic. A function is deterministic if it always returns the same result when passed the same input parameters. Examples of deterministic functions include SQRT, TO_DATE, and SUBSTR. The functions SYSDATE, USER, and DBTIMEZONE are not deterministic. The condition must be a Boolean SQL expression enclosed in parentheses. Add a CHECK constraint to ensure that every employee’s hire date is later than the company’s founding date, like this:
ALTER TABLE employees ADD CONSTRAINT validate_hire_date CHECK
(hire_date > TO_DATE('15-Apr-1999','DD-Mon-YYYY'));
Modifying Constraints
Once created, constraints can be dropped, disabled (temporarily not enforced), enabled (enforced again), or renamed. You make these changes to constraints using an ALTER TABLE statement. Take care in disabling UNIQUE or PRIMARY KEY constraints, because disabling these constraints results in the supporting index being dropped (unless you also specify KEEP INDEX). To drop a constraint, use an ALTER TABLE statement with the constraint name, like this:
ALTER TABLE employees DROP CONSTRAINT validate_hire_date;
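Sketches of the other operations mentioned above, reusing the constraint name from the earlier examples (the new name in the last statement is made up):
ALTER TABLE employees DISABLE CONSTRAINT uniq_payroll_id KEEP INDEX;
ALTER TABLE employees ENABLE CONSTRAINT uniq_payroll_id;
ALTER TABLE employees RENAME CONSTRAINT uniq_payroll_id TO uk_payroll_id;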
Create Table
CREATE TABLE change_log ( log_id NUMBER, who VARCHAR2(64)
,when TIMESTAMP, what VARCHAR2(200) );
You can add some attributes to your table definition such as the tablespace in which you want your table stored:
CREATE TABLE change_log ( log_id NUMBER ,who VARCHAR2(64)
,when TIMESTAMP ,what VARCHAR2(200)
) TABLESPACE users;
The TABLESPACE option tells the database where to store the table. After you create the table, you can display the structure of a table with the SQL*Plus DESCRIBE command:
SQL> describe change_log;
Name Null? Type
------ ----- ----------------
LOG_ID NUMBER
WHO VARCHAR2(64)
WHEN TIMESTAMP(6)
WHAT VARCHAR2(200)
Creating a Table Using a Query
You can create a table based on a query; you do not need to specify the column attributes, because they are inherited from the existing schema objects (a single table, a view, or a join of multiple tables). This table creation syntax is frequently identified with the abbreviation CTAS (Create Table As Select):
CREATE TABLE january2004_log TABLESPACE archives AS SELECT * FROM change_log WHERE when BETWEEN TO_DATE('01-JAN-2004','DD-Mon-YYYY') AND TO_DATE('31-JAN-2004','DD-Mon-YYYY');
SQL> describe january2004_log
Name Null? Type
-------- -------- -------
LOG_ID NUMBER
WHO VARCHAR2(64)
WHEN TIMESTAMP(6)
WHAT VARCHAR2(200)
Creating a Temporary Table
You can create a temporary table whose contents are transitory and only visible to the session that inserted data into it. The definition of the table persists, but the data in a temporary table lasts only for either the duration of the transaction (ON COMMIT DELETE ROWS) or for the duration of the session (ON COMMIT PRESERVE ROWS). Programs can manipulate data in temporary tables or join them to permanent tables in the same manner as any other table. Here is an example:
CREATE GLOBAL TEMPORARY TABLE my_session
(category VARCHAR2(16) ,running_count NUMBER
) ON COMMIT DELETE ROWS;
Setting Default Values
You can set default values for the columns in your table. When subsequent INSERT statements do not explicitly populate these columns, the database assigns the default value to the column. Having a default value does not ensure that the column will always have a value. An INSERT or an UPDATE statement can always explicitly set a column to NULL unless there is a NOT NULL constraint on the column. Default values can be any SQL expression that does not reference a PL/SQL function, other columns, or the pseudocolumns ROWNUM, NEXTVAL, CURRVAL, LEVEL, or PRIOR. Default values are defined as part of a column specification. This definition can be made at table creation time, like this:
CREATE TABLE change_log
(log_id NUMBER ,who VARCHAR2(64) DEFAULT USER
,when TIMESTAMP DEFAULT SYSTIMESTAMP ,what VARCHAR2(200)
) TABLESPACE users;
or added to an existing column with an ALTER TABLE MODIFY statement, like this:
ALTER TABLE change_log MODIFY who VARCHAR2(64) DEFAULT USER;
Adding Comments to a Table or a Column
You can add descriptive comments to your tables and columns in order to better describe the content or usage of these database objects. Comments can be a maximum of 4,000 bytes in length and can have embedded white space and punctuation. Use the COMMENT ON statement to assign a comment to either a table or a column, like this:
COMMENT ON TABLE change_log IS 'This table is where you record changes to the configuration of the DEMO system';
COMMENT ON COLUMN change_log.log_id IS 'System generated key for change log table Populated with the change_seq sequence.';
The comment must be enclosed in single quotes, but can span physical lines. To display the comments on a table, query the DBA_TAB_COMMENTS, ALL_TAB_COMMENTS, or USER_TAB_COMMENTS data dictionary views.
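For example, to read the comments back from the data dictionary (a minimal sketch; USER_COL_COMMENTS is the column-level counterpart of the views named above):
SELECT table_name, comments FROM user_tab_comments WHERE table_name = 'CHANGE_LOG';
SELECT column_name, comments FROM user_col_comments WHERE table_name = 'CHANGE_LOG';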
Renaming a Table
You can use the RENAME statement to change the name of a table, view, sequence, or private synonym—objects that share a namespace with tables. The syntax is RENAME old_name TO new_name. When you rename a table, the database invalidates all objects that depend on (refer to) the table, such as views, procedures, or synonyms. The database automatically alters associated indexes, grants, and constraints to reference the new name. To change the name of the change_log table to demo_change_log, execute this:
RENAME change_log TO demo_change_log;
Another way to rename a table is with the RENAME TO clause of the ALTER TABLE statement. For example:
ALTER TABLE change_log RENAME TO demo_change_log;
Neither syntax is preferred; both are fully supported.
Adding and Dropping Columns in a Table
One task you will need to perform while managing tables is adding new columns to an existing table. Use the ALTER TABLE statement to add columns. When adding a single column, use the syntax ALTER TABLE table_name ADD column_spec. When you add multiple columns to a table, enclose a comma-delimited list of column specifications in parentheses. The column specification includes the column name, the column’s datatype, and any default value that the column will have. For example, to add a column named HOW to the change_log table, execute the following SQL:
ALTER TABLE change_log ADD how VARCHAR2(45);
To add the two columns—HOW and WHY—to the change_log table, use the syntax with parentheses, like this:
ALTER TABLE change_log ADD ( how VARCHAR2(45) ,why VARCHAR2(60) );
To remove a column from a table, use the ALTER TABLE DROP COLUMN statement, as in this example:
ALTER TABLE change_log DROP COLUMN how;
To drop multiple columns, you don’t use the keyword COLUMN, and instead enclose the comma-delimited list of columns in parentheses:
ALTER TABLE change_log DROP (how,why);
Modifying Columns
You may occasionally need to make changes to the columns of a table: increase or decrease the size of a column, rename a column, or assign a default value to a column. You use the ALTER TABLE MODIFY statement to make these column-level changes. As with the ADD and DROP options, you have two syntactical options: one for modifying a single column and one for modifying multiple columns. To make changes to a single column, you specify the column name together with the new characteristics. For example, to change the column WHAT from VARCHAR2(200) to VARCHAR2(250), you execute:
ALTER TABLE change_log MODIFY what VARCHAR2(250);
To change multiple columns, enclose a comma-delimited list of modified column specs in parentheses, like this:
ALTER TABLE change_log MODIFY (what VARCHAR2(250) ,who VARCHAR2(50) DEFAULT user );
Viewing the Attributes of a Table
You can use several data dictionary views to display the attributes of a table. The first place to start is usually with the DESCRIBE command from SQL*Plus or iSQL*Plus:
SQL> describe employees
Name Null? Type
----------------------- -------- ----------------
EMPLOYEE_ID NOT NULL NUMBER
HIRE_DATE NOT NULL DATE
FIRST_NAME VARCHAR2(42)
LAST_NAME VARCHAR2(42)
PAYROLL_ID VARCHAR2(10)
DEPT_NBR NUMBER
MANAGER NUMBER
The DESCRIBE command displays the column names, datatypes, and nullity of each column. To see the table’s physical attributes, query the ALL_TABLES or USER_TABLES dictionary views.
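For example, a quick look at a few physical attributes from USER_TABLES (a sketch; pick whichever columns you need):
SELECT table_name, tablespace_name, pct_free, num_rows
FROM user_tables
WHERE table_name = 'EMPLOYEES';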
Dropping a Table
To drop a table, use one of the following SQL statements:
- drop table table_name;
- drop table table_name cascade constraints;
- drop table table_name purge;
To learn more about CONSTRAINT please see 'Create and Manage Constraints' post.
Tuesday, April 3, 2007
Create a user
sql> CONN system/password as sysdba; [connect as sysdba]
sql> CREATE USER user_name IDENTIFIED BY password DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
Now give some privileges to the newly created user.
sql> GRANT CONNECT, RESOURCE TO user_name;
To connect newly created user :
sql> CONN user_name/password
To change password :
sql> CONN system/password as sysdba; [connect as sysdba]
sql> ALTER USER user_name IDENTIFIED BY new_password ;
To Drop the user: ( including all schema objects ):
sql> DROP USER user_name CASCADE;
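A concrete run-through with made-up names (test_user / test_pwd), assuming the default USERS and TEMP tablespaces exist:
sql> CONN system/password as sysdba
sql> CREATE USER test_user IDENTIFIED BY test_pwd DEFAULT TABLESPACE users TEMPORARY TABLESPACE temp QUOTA UNLIMITED ON users;
sql> GRANT CONNECT, RESOURCE TO test_user;
sql> CONN test_user/test_pwd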