Monday, April 27, 2009


Listener Commands:
The listener's involvement is summarized as: the listener catches the request, spawns or requests a database process/thread, redirects or passes the connection to that process/thread (usually on a different port), and then gets out of the way.

Set an encrypted listener password:
c:\oracle\product\ora102\bin> lsnrctl
LSNRCTL> change_password
Old password:
New password:
Reenter new password:
LSNRCTL> save_config

Quit the listener control utility:
LSNRCTL> exit

View help information:
c:\oracle\product\ora10\bin> lsnrctl help
LSNRCTL> help

kill on *NIX:
ps -ef | grep tnslsnr | awk '{ print $2; }' | xargs kill -9
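The awk stage of the pipeline above extracts the PID column (field 2 of `ps -ef` output). Its behavior can be checked in isolation against a sample line (the ps output shown here is illustrative):

```shell
# One illustrative line of `ps -ef` output for a listener process.
sample='oracle   1234      1  0 10:00 ?        00:00:00 /u01/app/oracle/bin/tnslsnr LISTENER -inherit'

# Field 2 of ps -ef output is the PID; this is what gets fed to xargs kill.
pid=$(printf '%s\n' "$sample" | awk '{ print $2; }')
echo "$pid"    # prints 1234
```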

Causes the listener to reread the listener.ora file. Enables the addition or modification of statically configured services without stopping the listener.
In addition, the database services, instances, service handlers, and listening endpoints that were dynamically registered with the listener are unregistered and subsequently registered again.
LSNRCTL> reload

Use the SAVE_CONFIG command to compare the current configuration state of the listener (including trace level, trace file, trace directory, and logging) with the listener.ora file. Any changes are stored in listener.ora, preserving formatting, comments, and case as much as possible. Prior to modification of the listener.ora file, a backup called listener.bak is created.
See change_password above

View the configured listener service:
LSNRCTL> services

set Parameter:
current_listener: set current_listener
LSNRCTL> set current_listener uwlistener

displaymode: set displaymode
LSNRCTL> set displaymode VERBOSE

inbound_connect_timeout: set inbound_connect_timeout
LSNRCTL> set inbound_connect_timeout 0

log_directory: set log_directory
LSNRCTL> set log_directory c:\temp

log_file: set log_file
LSNRCTL> set log_file uwlistener.log

log_status: set log_status {ON|OFF}
LSNRCTL> set log_status ON

password: set password
LSNRCTL> set password
Password:

rawmode: set rawmode
LSNRCTL> set rawmode OFF

save_config_on_stop: set save_config_on_stop
LSNRCTL> set save_config_on_stop ON

startup_waittime: set startup_waittime
LSNRCTL> set startup_waittime 0

trc_directory: set trc_directory
LSNRCTL> set trc_directory c:\temp

trc_file: set trc_file
LSNRCTL> set trc_file uwlistener.trc

trc_level: set trc_level
Value Description
0 - 1 off
2 - 3 err
4 - 5 user
6 - 14 admin
15 dev
16 - 99 support
LSNRCTL> set trc_level 0

View the current listener services:
c:\oracle\product\ora10\bin> lsnrctl services

show Parameter:
current_listener: show current_listener
LSNRCTL> show current_listener

displaymode: show displaymode
LSNRCTL> show displaymode

inbound_connect_timeout: show inbound_connect_timeout
LSNRCTL> show inbound_connect_timeout

log_directory: show log_directory
LSNRCTL> show log_directory

log_file: show log_file
LSNRCTL> show log_file

log_status: show log_status
ON if listener activity is logged
LSNRCTL> show log_status

rawmode: show rawmode
LSNRCTL> show rawmode

save_config_on_stop: show save_config_on_stop
LSNRCTL> show save_config_on_stop

snmp_visible: show snmp_visible
LSNRCTL> show snmp_visible

startup_waittime: show startup_waittime
LSNRCTL> show startup_waittime
Default is 0.

trc_directory: show trc_directory
LSNRCTL> show trc_directory

trc_file: show trc_file (default is listener.trc)
LSNRCTL> show trc_file

trc_level: show trc_level
LSNRCTL> show trc_level

start the listener:
c:\oracle\product\ora10\bin> lsnrctl start

view the current listener status:
c:\oracle\product\ora10\bin> lsnrctl status

stop the listener:
c:\oracle\product\ora10\bin> lsnrctl stop

Show the currently installed listener version:
c:\oracle\product\ora10\bin> lsnrctl version

Listener Parameters:
Set ADMIN_RESTRICTIONS_<listener_name> = ON in listener.ora to prevent remote modification of listener.ora using lsnrctl SET commands.

Typical Listener.ora file for standalone Oracle.
# listener.ora Network Configuration File:
# c:\oracle\product\ora10\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = C:\oracle\product\ora10)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (GLOBAL_DBNAME = orabase)
      (ORACLE_HOME = C:\oracle\product\ora10)
      (SID_NAME = orabase)
    )
  )

LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = perrito)(PORT = 1521))
  )

Queue Size:
Add the QUEUESIZE parameter to the address description in your listener.ora file. By default Oracle is able to queue only 16 simultaneous connection requests.

The result should look like this:
(ADDRESS= (PROTOCOL = TCP)(HOST = your-hostname)(PORT = 1521) (QUEUESIZE = 30))

Trace Levels:
Value equivalents:
OFF (equivalent to 0) provides no tracing
USER (equivalent to 4) identifies user-induced error conditions
ADMIN (equivalent to 6) identify installation-specific problems
SUPPORT (equivalent to 16) provides trace information for support

Trace file results are written to $ORACLE_HOME/network/trace.

Force the database to register, or reregister, with the listener:
SQL> alter system register;

Monday, April 20, 2009

Terminating Oracle Sessions

Terminating Sessions and Cleaning Up Processes:
This module describes how you can terminate sessions and clean up session-related operating system processes.

This module discusses the following:
Terminating Sessions Using SQL*Plus
Terminating Sessions Using Enterprise Manager
Terminating Session-related Operating System Processes on UNIX
Terminating Session-related Operating System Processes on Windows

Terminating Sessions Using SQL*Plus:
You can terminate sessions with the ALTER SYSTEM KILL SESSION command. When you issue this command, you must specify the session's index number (sid) and serial number. To identify them, query the V$SESSION dynamic performance view as shown below. The value of the STATUS column is ACTIVE when the session is making a SQL call to Oracle, and INACTIVE when it is not.

Identify the correct session and terminate the session by performing the steps below:

1.Invoke SQL*Plus.

2.Query V$SESSION supplying the username for the session you want to terminate:


3.Execute the ALTER SYSTEM command to terminate the session:


4.Query V$SESSION:


5.After PMON has cleaned up after the session, the row is removed from V$SESSION:
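The five steps above can be sketched end to end; the username SCOTT and the sid/serial# pair are illustrative values, not from the original example:

```
SQL> -- 2. identify the session to terminate
SQL> SELECT sid, serial#, status
     FROM v$session
     WHERE username = 'SCOTT';

SQL> -- 3. terminate it, supplying the 'sid,serial#' pair from the query above
SQL> ALTER SYSTEM KILL SESSION '11,15532';

SQL> -- 4. re-running the query now shows STATUS = KILLED for that row
SQL> -- 5. once PMON cleans up, the row disappears from V$SESSION
```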


Terminating Sessions Using Enterprise Manager:
Identify the correct session and terminate the session by performing the steps below.
Note: Oracle Enterprise Manager 9.2 was used in the examples.
1.Select START > Programs > Oracle > OraHome92 > Enterprise Manager Console. Select Launch Standalone and click OK.

2.Expand Databases. Expand your database. Expand Instance and select Sessions. Identify the session you want to terminate.

3.Expand Sessions. Select the session you want to terminate and click Kill Session:

4.Select Immediate and click OK:

5.STATUS changes to KILLED and SERVER changes to PSEUDO:

6.The row is removed after PMON has cleaned up after the session.

Terminating Session-related Operating System Processes on UNIX:
The PMON background process will clean up after any user session you terminate with the ALTER SYSTEM KILL SESSION command. You can kill the session-related operating system process by performing the steps outlined below:

1.Invoke SQL*Plus.

2.Issue the following query to determine the operating system process identifier (spid):
SELECT spid FROM v$process
WHERE EXISTS (SELECT 1 FROM v$session WHERE paddr = addr);

3.If you are unable to identify the operating system process identifier (spid) from the query in step 2, you can issue the following query to help identify the correct session:
SELECT s.sid, s.serial#, p.spid
FROM v$process p, v$session s
WHERE p.addr = s.paddr
AND s.username = '';

4.At the operating system prompt, issue the kill command and supply the operating system process identifier (spid):
$ kill -9 <spid>

Terminating Session-related Operating System Processes on Windows:
The PMON background process will clean up after any user session you terminate with the ALTER SYSTEM KILL SESSION command. You can kill the session-related operating system process by performing the steps outlined below:

1.Invoke SQL*Plus.

2.Issue the following query to determine the operating system process identifier (spid) or thread:
SELECT spid, s.osuser, s.program
FROM v$process p, v$session s
WHERE p.addr = s.paddr;

3.At the operating system prompt, issue the orakill command. Supply the instance SID and the thread, which you obtained from the SPID column in step 2:
c:\> orakill <sid> <thread>

Unix Crontab

1. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does not exist, you can use crontab if your name does not appear in the file /usr/lib/cron/cron.deny. If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root user can use crontab. The allow/deny files consist of one user name per line.
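The allow/deny rules above can be sketched as a small shell function. The file paths are passed in as parameters here purely so the logic can be exercised against sample files; real systems consult /usr/lib/cron/cron.allow and /usr/lib/cron/cron.deny directly:

```shell
# Decide whether USER may use crontab, following the rules above:
#  - if the allow file exists, the user must be listed in it;
#  - else if the deny file exists, the user must NOT be listed in it
#    (an empty deny file therefore allows everyone);
#  - if neither file exists, only root may use crontab.
can_use_crontab() {
  user=$1; allow_file=$2; deny_file=$3
  if [ -f "$allow_file" ]; then
    grep -qx "$user" "$allow_file"     # whole-line match in cron.allow
  elif [ -f "$deny_file" ]; then
    ! grep -qx "$user" "$deny_file"    # absent from cron.deny
  else
    [ "$user" = "root" ]               # neither file: root only
  fi
}
```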

2. Crontab Commands
export EDITOR=vi   # specify an editor to open the crontab file
crontab -e Edit your crontab file, or create one if it doesn't already exist.
crontab -l Display your crontab file.
crontab -r Remove your crontab file.
crontab -v Display the last time you edited your crontab file. (This option is only available on a few systems.)

3. Crontab file
Crontab syntax :-
A crontab file has five fields for specifying minute, hour, day, month, and weekday, followed by the command to be run at that interval.

* * * * * command to be executed
First * indicates: min (0 - 59)
Second * indicates: hour (0 - 23)
Third * indicates: day of month (1 - 31)
Fourth * indicates: month (1 - 12)
Fifth * indicates: day of week (0 - 6) (Sunday=0)

A * in a value field means all legal values for that column, as shown in the ranges above. A value column can have a * or a list of elements separated by commas. An element is either a number in the ranges shown above, or two numbers separated by a hyphen (meaning an inclusive range).

Note: The specification of days can be made in two fields: day of month and day of week. If both are restricted in an entry, the command runs when either field matches the current time.
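The value-matching rules above (a *, a single number, a comma-separated list, or a hyphenated inclusive range) can be sketched as a shell function. This is an illustration of the semantics, not how cron itself is implemented:

```shell
# match_field FIELD VALUE - succeed if VALUE satisfies a cron field
# such as "*", "5", "1,15", or "10-20".
match_field() {
  field=$1; value=$2
  [ "$field" = "*" ] && return 0       # * matches every legal value
  oldIFS=$IFS; IFS=','                 # split the field on commas
  for part in $field; do
    case $part in
      *-*)                             # inclusive range lo-hi
        lo=${part%-*}; hi=${part#*-}
        if [ "$value" -ge "$lo" ] && [ "$value" -le "$hi" ]; then
          IFS=$oldIFS; return 0
        fi ;;
      *)                               # single number
        if [ "$part" -eq "$value" ]; then
          IFS=$oldIFS; return 0
        fi ;;
    esac
  done
  IFS=$oldIFS; return 1
}

# A day-of-month field of "10-20" matches the 18th:
match_field "10-20" 18 && echo "matches"
```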

4. Crontab Example
A line in crontab file like below removes the tmp files from /home/someuser/tmp each day at 6:30 PM.
30 18 * * * rm /home/someuser/tmp/*

5. Crontab Environment
cron invokes the command from the user's HOME directory with the shell (/usr/bin/sh).
cron supplies a default environment for every shell, defining HOME, LOGNAME, SHELL, and PATH.

Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.

6. Disable Email
By default cron sends an email to the user account executing the cron job. If this is not needed, put the following at the end of the cron job line:
>/dev/null 2>&1

7. Generate log file
To collect the cron execution log in a file:
30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log
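If you want both the log file from section 7 and the mail suppression from section 6, redirect stderr into the log as well (the paths are the same illustrative ones used above):

```
30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log 2>&1
```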

Thursday, April 16, 2009

Cluster Ready Services Logs

Log Directory Structure in Cluster Ready Services:
Each component in the CRS (Cluster Ready Services) stack has its respective directories created under the CRS home:
$ORA_CRS_HOME/crs/log Contains trace files for the CRS resources.

$ORA_CRS_HOME/crs/init Contains trace files of the CRS daemon during startup. A good place to start with any CRS startup problems.

$ORA_CRS_HOME/css/log The Cluster Synchronization (CSS) logs indicate all actions such as reconfigurations, missed check-ins, connects, and disconnects from the client CSS listener. In some cases, the logger logs messages with the category of auth.crit for the reboots done by Oracle. This could be used for checking the exact time when the reboot occurred.

$ORA_CRS_HOME/css/init Contains core dumps from the Oracle Cluster Synchronization Service daemon (OCSSd) and the process ID (PID) for the CSS daemon whose death is treated as fatal. If abnormal restarts for CSS exist, the core files will have the format core.<pid>.

$ORA_CRS_HOME/evm/log Log files for the Event Manager (EVM) and evmlogger daemons. Not used as often for debugging as the CRS and CSS directories.

$ORA_CRS_HOME/evm/init PID and lock files for EVM. Core files for EVM should also be written here.

$ORA_CRS_HOME/srvm/log Log files for Oracle Cluster Registry (OCR), which contains the details at the Oracle cluster level.

$ORA_CRS_HOME/log Log files for Oracle Clusterware (known as the cluster alert log), which contains diagnostic messages at the Oracle cluster level. This is available from Oracle database 10g R2.

Oracle Background Processes

You can see the Oracle background processes with this query:
select * from v$bgprocess where paddr <> '00';

Here are some of the most important Oracle background processes:
ARCH - (Optional) Archive process writes filled redo logs to the archive log location(s). In RAC, the various ARCH processes can be utilized to ensure that copies of the archived redo logs for each instance are available to the other instances in the RAC setup should they be needed for recovery.

CJQ - Job Queue Process (CJQ) - Used for the job scheduler. The job scheduler includes a main program (the coordinator) and slave programs that the coordinator executes. The parameter job_queue_processes controls how many parallel job scheduler jobs can be executed at one time.

CKPT - Checkpoint process writes checkpoint information to control files and data file headers.

CQJ0 - Job queue controller process wakes up periodically and checks the job log. If a job is due, it spawns Jnnnn processes to handle jobs.

DBWR - Database Writer or Dirty Buffer Writer process is responsible for writing dirty buffers from the database block cache to the database data files. DBWR does not write at commit time (commits are handled by LGWR flushing redo); it writes at checkpoints and when the cache is full and space has to be made for more blocks. The possible multiple DBWR processes in RAC must be coordinated through the locking and global cache processes to ensure efficient processing is accomplished.

FMON - The database communicates with the mapping libraries provided by storage vendors through an external non-Oracle Database process that is spawned by a background process called FMON. FMON is responsible for managing the mapping information. When you specify the FILE_MAPPING initialization parameter for mapping data files to physical devices on a storage subsystem, then the FMON process is spawned.

LGWR - Log Writer process is responsible for writing the log buffers out to the redo logs. In RAC, each RAC instance has its own LGWR process that maintains that instance’s thread of redo logs.

LMON - Lock Manager process

MMON - The Oracle 10g background process to collect statistics for the Automatic Workload Repository (AWR).

MMNL - This process performs frequent and lightweight manageability-related tasks, such as session history capture and metrics computation.

MMAN - Used for internal database tasks that manage automatic shared memory. MMAN serves as the SGA Memory Broker and coordinates the sizing of the memory components.

PMON - Process Monitor process recovers failed process resources. If MTS (also called Shared Server Architecture) is being utilized, PMON monitors and restarts any failed dispatcher or server processes. In RAC, PMON’s role as service registration agent is particularly important.

Pnnn - (Optional) Parallel Query Slaves are started and stopped as needed to participate in parallel query operations.

RBAL - This process coordinates rebalance activity for disk groups in an Automatic Storage Management instance.

SMON - System Monitor process recovers after instance failure and monitors temporary segments and extents. In RAC, SMON in a surviving instance can also perform instance recovery for other, failed instances.

WMON - The "wakeup" monitor process

Data Guard/Streams/replication Background processes:
DMON - The Data Guard Broker process.

SNP - The snapshot process.

MRP - Managed recovery process - For Data Guard, the background process that applies archived redo log to the standby database.

ORBn - performs the actual rebalance data extent movements in an Automatic Storage Management instance. There can be many of these at a time, called ORB0, ORB1, and so forth.

OSMB - is present in a database instance using an Automatic Storage Management disk group. It communicates with the Automatic Storage Management instance.

RFS - Remote File Server process - In Data Guard, the remote file server process on the standby database receives archived redo logs from the primary database.

QMN - Queue Monitor Process (QMNn) - Used to manage Oracle Streams Advanced Queuing.

Oracle Real Application Clusters (RAC) Background Processes:
The following are the additional processes spawned for supporting the multi-instance coordination:
DIAG: Diagnosability Daemon – Monitors the health of the instance and captures the data for instance process failures.

LCKx - This process manages the global enqueue requests and the cross-instance broadcast. Workload is automatically shared and balanced when there are multiple Global Cache Service Processes (LMSx).

LMON - The Global Enqueue Service Monitor (LMON) monitors the entire cluster to manage the global enqueues and the resources. LMON manages instance and process failures and the associated recovery for the Global Cache Service (GCS) and Global Enqueue Service (GES). In particular, LMON handles the part of recovery associated with global resources. LMON-provided services are also known as cluster group services (CGS)

LMDx - The Global Enqueue Service Daemon (LMD) is the lock agent process that manages enqueue manager service requests for Global Cache Service enqueues to control access to global enqueues and resources. The LMD process also handles deadlock detection and remote enqueue requests. Remote resource requests are the requests originating from another instance.

LMSx - The Global Cache Service Processes (LMSx) are the processes that handle remote Global Cache Service (GCS) messages. Real Application Clusters software provides for up to 10 Global Cache Service Processes. The number of LMSx varies depending on the amount of messaging traffic among nodes in the cluster.

The LMSx handles the acquisition interrupt and blocking interrupt requests from the remote instances for Global Cache Service resources. For cross-instance consistent read requests, the LMSx will create a consistent read version of the block and send it to the requesting instance. The LMSx also controls the flow of messages to remote instances.

The LMSn processes handle the blocking interrupts from the remote instance for the Global Cache Service resources by:
1. Managing the resource requests and cross-instance call operations for the shared resources.
2. Building a list of invalid lock elements and validating the lock elements during recovery.
3. Handling global lock deadlock detection and monitoring for lock conversion timeouts.

Flashback Reinstantiation

Oracle Flashback Reinstantiation:
In an Oracle9i Data Guard environment, a failover operation leads to a resetlogs. This operation invalidates the old primary database. Therefore, you need to perform a hot backup on the new primary database immediately, and then re-create a new standby database. This operation can take a long time, and your new primary database is vulnerable during this period.

The new Flashback Reinstantiation feature reduces the need to reinstantiate the old primary database following a failover. This feature allows you to quickly restore full resiliency after a failure. This is accomplished by using the SQL statement FLASHBACK DATABASE to roll back the old primary database in time to synchronize with the old standby database.

A beautiful document covering most of the Flashback features of the Oracle database.

Oracle Data Guard

Introducing Oracle Data Guard:
Oracle Data Guard is the most effective and comprehensive disaster recovery solution available today for enterprise data. Follow the URL below to understand the basics of Oracle Data Guard.

Wednesday, April 15, 2009

Oracle Flashback

Oracle Flashback Technology:
According to many studies, 40% of application outages are caused by operator or user errors. Part of being human is making mistakes. But these errors are extremely difficult to avoid and can be particularly difficult to recover from without advance planning and the right technology. Such errors can result in "logical" data corruption or cause downtime of one or more components of the IT infrastructure. While it is relatively simple to rectify the failure of an individual component, detection and repair of logical data corruption, such as accidental deletion of valuable data, is a time-consuming operation that causes enormous loss of business productivity. Typical user errors include deleting the wrong data and dropping the wrong table. Refer to the URL below for an understanding of Oracle Flashback technology.

Oracle Flashback Technology: Alternatives to Point-in-Time Recovery:
The flashback features of Oracle are more efficient than media recovery in most circumstances in which they are available. They can also be used to investigate past states of the database. Most Flashback Technology features operate at the logical level, viewing and manipulating database objects, as follows:
Oracle Flashback Query lets you specify a target time and then run queries against your database, viewing results as they would have appeared at that time. To recover from an unwanted change like an erroneous update to a table, a user could choose a target time before the error and run a query to retrieve the contents of the lost rows.
Oracle Flashback Version Query lets you view all the versions of all the rows that ever existed in one or more tables in a specified time interval. You can also retrieve metadata about the differing versions of the rows, including start time, end time, operation, and transaction ID of the transaction that created the version. This feature can be used both to recover lost data values and to audit changes to the tables queried.
Oracle Flashback Transaction Query lets you view changes made by a single transaction, or by all the transactions during a period of time.
Oracle Flashback Table returns a table to its state at a previous point in time. You can restore table data while the database is online, undoing changes only to the specified table.
Oracle Flashback Drop reverses the effects of a DROP TABLE statement.

Flashback Table, Flashback Query, Flashback Transaction Query and Flashback Version Query all rely on undo data, records of the effects of each update to an Oracle database and values overwritten in the update. Used primarily for such purposes as providing read consistency for SQL queries and rolling back transactions, these undo records contain the information required to reconstruct data as it stood at a past time and examine the record of changes since that past time.

Flashback Drop is built around a mechanism called the Recycle Bin, which Oracle uses to manage dropped database objects until the space they occupied is needed to store new data.

Oracle Flashback Database provides a more efficient direct alternative to database point-in-time recovery. It is unlike the other flashback features in that it operates at a physical level. When you use Flashback Database, your current datafiles revert to their contents at a past time. The end product is much like the result of a database point-in-time recovery, but can be much faster because it does not require you to restore datafiles from backup, and requires only limited application of redo compared to media recovery.

Flashback Database uses flashback logs to access past versions of data blocks, as well as some information from the archived redo log. To have the option of using Flashback Database to repair your database, you must either configure your database to generate flashback logs, or use the related capability of guaranteed restore points to protect the contents of your database at a fixed point in time, such as immediately before a risky database change.
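For example, a guaranteed restore point can protect the database before a risky change; the restore point name here is illustrative:

```
SQL> create restore point before_app_upgrade
     guarantee flashback database;

SQL> -- later, from the mount state, rewind to it if the change went wrong:
SQL> flashback database to restore point before_app_upgrade;
```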

Performing Flashback and Database Point-in-Time Recovery:
It is sometimes necessary to return some objects in your database, or the entire database, to a previous state following a mistaken database update. For example, a user or DBA might erroneously delete or update the contents of one or more tables, or drop database objects that are still needed; or a risky operation such as an application update or a large batch update might fail.

In addition to point-in-time restore and recovery of the entire database, Oracle provides a group of features known as Oracle Flashback Technology, which are often faster than point-in-time recovery and less disruptive to database availability.

This URL presents a guide to investigating unwanted database changes, and selecting and carrying out an appropriate recovery strategy based upon Oracle Flashback Technology and database backups.

Flashback Database:
Flashback Database was introduced in 10g Release 1 as an alternative to PITR (Point-In-Time Recovery).
It is a superb feature; it can be thought of as a "rewind button" for the database, because it lets us rewind the database to a past point in time.

1. How to enable the Flashback Database option.
It is disabled by default, so we need to enable it.

2. Requirement for the flashback database
a: Database must run in archivelog mode
b: Following parameters are set in database initialization parameters:

  • db_recovery_file_dest
  • db_recovery_file_dest_size
  • db_flashback_retention_target
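A minimal setting of these parameters might look like the following; the path, size, and retention target (in minutes) are illustrative values (note that db_recovery_file_dest_size must be set before db_recovery_file_dest):

```
SQL> alter system set db_recovery_file_dest_size = 10g scope=both;
SQL> alter system set db_recovery_file_dest = '/u01/app/oracle/fra' scope=both;
SQL> alter system set db_flashback_retention_target = 1440 scope=both;  -- 24 hours
```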
3. Enable Flashback mode in the database
a: Connect as the SYS user and shut down the database normally:
sqlplus / as sysdba
sql> shutdown normal
b: Start the database in MOUNT mode:
sql> startup mount
c: Turn flashback on:
sql> alter database flashback on;
d: Open the database for normal operation:
sql> alter database open;
e: Check that the flashback option is enabled for the database:
sql> select flashback_on from v$database;

Test a Flashback Scenario:
Make sure the database flashback is enabled to do the below test.
1. note the current SCN, drop the table, and note the new SCN
sql>select current_scn from v$database;
sql>drop table scott.emp purge;
sql>select current_scn from v$database;
2. flashback database before table was dropped
sql>shutdown immediate
sql>startup mount
sql>flashback database to scn 542151;
3. open database with read only
sql>alter database open read only;
sql>desc scott.emp
4. export dropped table
sql>! exp system/oracle file=/tmp/emp.dmp tables=scott.emp
5. shutdown and startup database with mount mode
sql>shutdown immediate;
sql> startup mount;
6. perform recover database
sql> recover database;
7. open database with normal operation without resetlogs
sql> alter database open;
sql>desc scott.emp
sql>! imp system/oracle file=/tmp/emp.dmp fromuser=scott touser=scott
sql>desc scott.emp (The table is back now)

Oracle Flashback Query: Recovering at the Row Level using Timestamp Feature:
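The heading above refers to AS OF TIMESTAMP queries; a sketch of recovering a deleted row follows (the table name, timestamp window, and key value are illustrative):

```
SQL> -- view the row as it existed ten minutes ago
SQL> select * from scott.emp
     as of timestamp (systimestamp - interval '10' minute)
     where empno = 7369;

SQL> -- reinsert it into the current table
SQL> insert into scott.emp
     select * from scott.emp
     as of timestamp (systimestamp - interval '10' minute)
     where empno = 7369;
SQL> commit;
```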

RVWR Background Process:
Enabling Flashback Database starts a new RVWR background process. This process is similar to the LGWR (log writer) process. The new process writes Flashback Database data to the Flashback Database logs.

The following list shows all the background processes for the "grid" instance:
$ ps -ef | grep grid
oracle 25124 1 0 16:32:05 ? 0:00 ora_s000_grid
oracle 25116 1 0 16:32:04 ? 0:00 ora_reco_grid
oracle 25169 1 0 16:32:22 ? 0:00 ora_rvwr_grid
oracle 25112 1 0 16:32:04 ? 0:00 ora_ckpt_grid
oracle 25110 1 0 16:32:04 ? 0:00 ora_lgwr_grid
oracle 25108 1 0 16:32:04 ? 0:00 ora_dbw0_grid
oracle 25114 1 0 16:32:04 ? 0:00 ora_smon_grid
oracle 25118 1 0 16:32:04 ? 0:00 ora_cjq0_grid
oracle 25120 1 0 16:32:04 ? 0:00 ora_rbal_grid
oracle 25122 1 0 16:32:04 ? 0:00 ora_d000_grid
oracle 25106 1 0 16:32:04 ? 0:00 ora_pmon_grid

Monitor logging in the Flashback Database logs:
SQL> select begin_time, flashback_data, db_data, redo_data, ESTIMATED_FLASHBACK_SIZE from v$flashback_database_stat;

Monitor the Flashback Database retention target:
SQL> select oldest_flashback_scn, oldest_flashback_time, retention_target, flashback_size, estimated_flashback_size from v$flashback_database_log;

Estimate the recovery area disk quota needed for flashback logs:
SQL> select estimated_flashback_size from v$flashback_database_log;

Flashback a Database Using RMAN:
RMAN> flashback database to time =
2> to_date
3> ('06/25/03 12:00:00','MM/DD/YY HH:MI:SS');

Flashback a database using SQL commands:
The database must be in mount state to issue these commands:
SQL> flashback database to timestamp (sysdate - 1/24);

You must issue the following command afterwards to open the database:
SQL> alter database open resetlogs;

Restoring a dropped object:
You can view the dropped objects in the recycle bin from two dictionary views:
USER_RECYCLEBIN - lists dropped objects owned by the current user.
DBA_RECYCLEBIN - lists all dropped objects system-wide.

SQL> drop table test;
Table dropped.

SQL> select object_name, original_name, type, createtime, droptime from user_recyclebin;
OBJECT_NAME                    ORIGINAL_NAME TYPE  CREATETIME          DROPTIME
------------------------------ ------------- ----- ------------------- -------------------
BIN$0+ktoVCgEmXgNAAADiUEHQ==$0 TEST          TABLE 2004-02-21:19:04:03 2004-02-21:19:04:41

SQL> flashback table "BIN$0+ktoVCgEmXgNAAADiUEHQ==$0" to before drop;

Dropping a table permanently:
This statement puts the table in the recycle bin:
SQL> drop table test;
Table dropped.

This statement removes the table permanently:
SQL> purge table "BIN$0+ktoVCgEmXgNAAADiUEHQ==$0";
Table purged.

Purging the Recycle Bin:
This statement purges the user recycle bin:
SQL> purge recyclebin;
Recyclebin purged.

This statement removes all objects from the recycle bin:
SQL> purge dba_recyclebin;
DBA Recyclebin purged.

This statement purges all objects from tablespace users in the recycle bin:
SQL> purge tablespace users;
Tablespace purged.

Important Notes:
1) Whether objects go to the recycle bin is controlled entirely by the recyclebin parameter. If you set alter system set recyclebin=off, dropped objects will not go to the recycle bin.

2) Dropped SYS and SYSTEM schema objects don't go to the recycle bin.

3) The un-drop feature brings the table back under its original name, but not the associated objects like indexes and triggers, which are left with their recycled names. Views and procedures defined on the table are not recompiled and remain in the invalid state. The old names must be retrieved manually and then applied to the flashed-back table. For example:

SQL> select object_name, original_name, type from user_recyclebin;

OBJECT_NAME                    ORIGINAL_NAME TYPE
------------------------------ ------------- -----
BIN$04LhcpdfanfgMAAAAAANPw==$0 TEST_01       INDEX

After the table is flashed back, the indexes and triggers on the table TEST will be named as shown in the OBJECT_NAME column. From the above query, you can use the original name to rename the objects as follows:
SQL> alter index "BIN$04LhcpdfanfgMAAAAAANPw==$0" rename to test_01;

The exception is bitmap indexes: when they are dropped, they are not placed in the recycle bin and so are not retrievable. Constraint names are also not retrievable from the view; they have to be renamed from other sources.

4) To flash back a table, row movement must be enabled on it:
sql>alter table test enable row movement;

Flashback Standby Database:
If you have multiple standby sites, you may utilize the DELAY option in Data Guard to prevent physical/logical corruption or user errors in your primary database.

For example, the first scenario in the diagram below has only one standby database. Here, a logical or physical corruption in the primary database will cause an immediate corruption in the standby database. To avoid such a pitfall, you can implement a second standby database with the 'Delay' option (introducing a delay of minutes or hours on the second standby database for applying archive log changes). This will prevent the corruptions on the second standby database and allow recovery from a possible physical/logical corruption or user errors in the primary database.

You can issue the following command to accomplish this:
SQL> alter database recover managed standby database delay 60 disconnect;

However, in Oracle 10g, you can configure the standby database with Flashback Database to achieve the same benefit as the DELAY option, so there is no need to implement a second standby database with the DELAY option. Refer to the URL below for more information:

Flashback Re-instantiation: (a nice feature of Oracle Flashback)
In an Oracle9i Data Guard environment, a failover operation leads to a resetlogs. This operation invalidates the old primary database. Therefore, you need to perform a hot backup on the new primary database immediately, and then re-create a new standby database. This operation can take a long time, and your new primary database is vulnerable during this period.

The new Flashback Re-instantiation feature reduces the need to reinstantiate the old primary database following a failover. This feature allows you to quickly restore full resiliency after a failure. This is accomplished by using the SQL statement FLASHBACK DATABASE to roll back the old primary database in time to synchronize with the old standby database.

Here are the steps to perform Flashback Re-instantiation:

1. On your new primary database (Instance B), find the SCN at which the standby became primary:
SQL> select standby_became_primary_scn from v$database;


2. Mount the old primary database (Instance A).

3. Flash back the old primary database (Instance A) to the SCN obtained in step 1.
SQL> flashback database to scn 2960985;

4. Disable Flashback on the old primary database (Instance A).
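The command for this step is:

SQL>alter database flashback off;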

5. On the old primary database (Instance A), create a standby control file.
SQL> alter database create standby controlfile as '/dba/standby/stbycf.f' reuse;

6. Shut down the old primary database (Instance A), and replace its control files with the new standby control file.
SQL> shutdown immediate;
$ cp /dba/standby/stbycf.f /u02/oradata/sid/control01.ctl
$ cp /dba/standby/stbycf.f /u02/oradata/sid/control02.ctl

7. Bring up the old primary database as a new physical standby database (Instance A).
SQL> startup mount;

8. Turn Flashback back on for the new standby database (Instance A).
SQL> alter database flashback on;

9. Enable redo transport from the new primary database (Instance B).

10. On the new standby database (Instance A), start real time apply.
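In 10g, real-time apply can be started on the standby with:

SQL>alter database recover managed standby database using current logfile disconnect;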

11. The Managed Recovery Process (MRP) will hit the End-Of-Redo marker and will then need to be restarted.



Oracle Parameters That Control Jobs:

SNP Background Processes:

SNP processes run in the background and implement database snapshots and job queues. If an SNP process fails, Oracle restarts it without affecting the rest of the database. An SNP process can run one job at a time, with a maximum of ten SNP processes running simultaneously. The INIT.ORA initialization file contains three parameters that control the behavior of the SNP processes:

JOB_QUEUE_PROCESSES:
How many processes to start. If set to zero, no jobs are executed. Default is 0. Range is 0 to 1000.

JOB_QUEUE_INTERVAL (obsolete in 9i):
How long the process sleeps before checking for a new job. Default is 60 sec. Range is 1 to 3600 sec.

JOB_QUEUE_KEEP_CONNECTIONS:
Controls whether an SNP process closes any remote database connections. Default is False. Range is True/False.

The DBMS_JOB package schedules and manages jobs in the job queue.
Note: The DBMS_JOB package has been superseded by the DBMS_SCHEDULER package. In particular, if you are administering jobs to manage system load, consider disabling DBMS_JOB by revoking the package execution privilege from users.

This chapter contains the following topics:
--Security Model
--Operational Notes
--Summary of DBMS_JOB Subprograms

Security Model:

No specific system privileges are required to use DBMS_JOB, and no system privileges are available to manage it. Users can alter or delete only the jobs they own. This is true for all users, including those granted DBA privileges.

A job can execute procedures that are owned by the user or for which the user has been explicitly granted EXECUTE. However, procedures for which EXECUTE is granted only through roles cannot be executed.

Note: Once a job is started and running, there is no easy way to stop the job.

Note: You must issue a COMMIT statement immediately after executing a DBMS_JOB subprogram.
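As a sketch of the DBMS_JOB workflow (the procedure name my_proc and the hourly interval are hypothetical):

variable jobno number
begin
  dbms_job.submit(:jobno, 'my_proc;', sysdate, 'sysdate + 1/24');
  commit;
end;
/

Note the COMMIT inside the block, as required by the note above.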


For more information and a better understanding, I suggest referring to the "All about Database Jobs and Intervals" document.

You can also refer to "Scheduling Jobs Using Oracle's Job Queue" for more information about Oracle scheduled jobs.

Tuesday, April 14, 2009

Oracle Database Patch Sets

Latest Patch Set Release by Platform:
You must have a MetaLink account to download patch sets. To register for MetaLink, you will need a valid Customer Support Identifier (CSI). Once an account is established, you can download patch sets from MetaLink by clicking on the Patches tab. Please contact Oracle Support by logging an iTAR at MetaLink for any questions regarding patch sets.

Wednesday, April 8, 2009

DBA Interview Questions

Oracle Database Administration (DBA):
1. What is an Oracle Instance?
2. What information is stored in Control File?
3. When you start an Oracle DB which file is accessed first?
4. What is the Job of SMON, PMON processes?
5. What is Instance Recovery?
6. What is written in Redo Log Files?
7. How do you control number of Datafiles one can have in an Oracle database?
8. How many Maximum Datafiles can there be in an Oracle Database?
9. What is a Tablespace?
10. What is the purpose of Redo Log files?
11. Which default Database roles are created when you create a Database?
12. What is a Checkpoint?
13. Which Process reads data from Datafiles?
14. Which Process writes data in Datafiles?
15. Can you make a Datafile auto extendible. If yes, how?
16. What is a Shared Pool?
17. What is kept in the Database Buffer Cache?
18. How many maximum Redo Logfiles one can have in a Database?
19. What is difference between PFile and SPFile?
20. What is PGA_AGGREGRATE_TARGET parameter?
21. Large Pool is used for what?
22. What is PCT Increase setting?
23. What is PCTFREE and PCTUSED Setting?
24. What is Row Migration and Row Chaining?
25. What is the ORA-01555 Snapshot Too Old error and how do you avoid it?
26. What is a Locally Managed Tablespace?
27. Can you audit SELECT statements?
28. What does DBMS_FGA package do?
29. What is Cost Based Optimization?
30. How often you should collect statistics for a table?
31. How do you collect statistics for a table, schema and Database?
32. Can you make collection of Statistics for tables automatic?
33. On which columns you should create Indexes?
34. What type of Indexes are available in Oracle?
35. What is B-Tree Index?
36. A table is having few rows, should you create indexes on this table?
37. A column has many repeated values. Which type of index should you create on this column, if you have to?
38. When should you rebuild indexes?
39. Can you build indexes online?
40. Can you see the execution plan of a statement?
41. A table is created with the following setting
storage (initial 200k
next 200k
minextents 2
maxextents 100
pctincrease 40)
What will be size of 4th extent?
42. What is DB Buffer Cache Advisor?
43. What is STATSPACK tool?
44. Can you change SHARED_POOL_SIZE online?
45. Can you Redefine a table Online?
46. Can you assign Priority to users?
47. You want users to change their passwords every 2 months. How do you enforce this?
48. How do you delete duplicate rows in a table?
49. What is Automatic Management of Segment Space setting?
50. What is the difference between DELETE and TRUNCATE statements?
51. What is COMPRESS and CONSISTENT setting in EXPORT utility?
52. What is the difference between Direct Path and Conventional Path loading?
53. Can you disable and enable Primary key?
54. What is an Index Organized Table?
55. What is a Global Index and Local Index?
56. What is the difference between Range Partitioning and Hash Partitioning?
57. What is difference between Multithreaded/Shared Server and Dedicated Server?
58. Can you import objects from Oracle ver. 7.3 to 9i?
59. How do you move tables from one tablespace to another tablespace?
60. How do you see how much space is used and free in a tablespace?
61. What are the steps for manual database upgrade?
62: How to clone a database using RMAN?
63: What are the manual steps to clone a database?
64: How many ASM instances are required to operate multiple databases?
65: What is the location of CRS Logs in a 10G RAC and what are they?
66: What are the mandatory processes in 9i and 10G?
67: What are the mandatory tablespaces required in 9i and 10G?
68: What is RAC?
69: What is Cache Fusion?
70: What components in RAC must reside in shared storage?
71: What is an interconnect network?
72: What is the use of cluster interconnect?
73: How do users connect to database in an Oracle RAC environment?
74: What is the use of a service in an Oracle RAC environment?
75: What enables the load balancing of applications in RAC?
76: What is a virtual IP address or VIP?
77: What is the use of VIP?
78: How do we verify that RAC instances are running?
79: What is FAN?
80: What is rolling upgrade?
81: Can rolling upgrade be used to upgrade from 10g to 11g database?
82: What are the initialization parameters that must have same value for every instance in an Oracle RAC database?
83: What are the components of Oracle clusterware?
84: What are the performance views in an Oracle RAC environment?
85: What are the types of connection load-balancing?
86: What is the usage of srvctl?
87: Can Oracle's Data Guard be used on Standard Edition, and if so how?
88: How can you test that the standby database is in sync?
89: What is the difference between Active Data Guard and the logical standby implementation of 10g Data Guard?
90: What is Data Guard?
91: What are the uses of Oracle Data Guard?
92: What is Redo Transport Services?
93: What is apply services?
94: What is difference between physical and logical standby databases?
95: What is Data Guard Broker?
96: What are the Data guard Protection modes and summarize each?

Backup & Recovery Questions for an Oracle DBA:
1. Which types of backups you can take in Oracle?
2. If a database is running in NOARCHIVELOG mode, which types of backups can you take?
3. Can you take partial backups if the database is running in NOARCHIVELOG mode?
4. Can you take online backups if the database is running in NOARCHIVELOG mode?
5. How do you bring the database in ARCHIVELOG mode from NOARCHIVELOG mode?
6. If you cannot shut down the database for even a few minutes, in which mode should you run it?
7. Should you place archive logfiles on the same disk as the database or on another disk?
8. Can you take an online backup of a control file? If yes, how?
9. What is a Logical Backup?
10. Should you take backups of logfiles if the database is running in ARCHIVELOG mode?
11. Why do you put tablespaces in backup mode?
12. What is the advantage of the RMAN utility?
13. How does RMAN improve backup time?
14. Can you take Offline backups using RMAN?
15. How do you see information about backups in RMAN?
16. What is a Recovery Catalog?
17. Should you place Recovery Catalog in the Same DB?
18. Can you use RMAN without Recovery catalog?
19. Can you take Image Backups using RMAN?
20. Can you use Backupsets created by RMAN with any other utility?
21. Where does RMAN keep information about backups if you are using RMAN without a catalog?
22. You have taken a manual backup of a datafile using the OS. How will RMAN know about it?
23. You want to retain only the last 3 backups of datafiles. How do you do this in RMAN?
24. Which is more efficient: incremental backups using RMAN or incremental exports?
25. Can you start and shutdown DB using RMAN?
26. How do you recover from the loss of datafile if the DB is running in NOARCHIVELOG mode?
27. You lose one datafile and it does not contain important objects. The important objects are in other datafiles, which are intact. How do you proceed in this situation?
28. You lost some datafiles, you don't have any full backup, and the database was running in NOARCHIVELOG mode. What can you do now?
29. How do you recover from the loss of a datafile if the DB is running in ARCHIVELOG mode?
30. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup that is 1 week old and a partial backup of this datafile that is just 1 day old. From which backup should you restore this file?
31. You lose the controlfile. How do you recover from this?
32. The current logfile gets damaged. What can you do now?
33. What is a Complete Recovery?
34. What is Cancel Based, Time based and Change Based Recovery?
35. Some user has accidentally dropped one table and you realize this after two days. Can you recover this table if the DB is running in ARCHIVELOG mode?
36. Do you have to restore Datafiles manually from backups if you are doing recovery using RMAN?
37. A database is running in ARCHIVELOG mode since last one month. A datafile is added to the database last week. Many objects are created in this datafile. After one week this datafile gets damaged before you can take any backup. Now can you recover this datafile when you don't have any backups?
38. How do you recover from the loss of a controlfile if you have backup of controlfile?
39. Only some blocks are damaged in a datafile. Can you just recover these blocks if you are using RMAN?
40. Some datafiles were there on a secondary disk and that disk has become damaged and it will take some days to get a new disk. How will you recover from this situation?
41. Have you faced any emergency situation? Tell us how you resolved it.
42. At one time you lost the parameter file accidentally and you don't have any backup. How will you recreate a new parameter file with the parameters set to their previous values?

Unix related Oracle DBA Interview Questions:
1. How do you see how many instances are running?
2. How do you automate starting and shutting down of databases in Unix?
3. You have written a script to take backups. How do you make it run automatically every week?
4. What is OERR utility?
5. How do you see Virtual Memory Statistics in Linux?
6. How do you see how much hard disk space is free in Linux?
7. What is SAR?
8. What is SHMMAX?
9. How large should the swap partition be relative to the size of RAM?
10. What is DISM in Solaris?
11. How do you see how many memory segments are acquired by Oracle Instances?
12. How do you see which segment belongs to which database instances?
13. What is VMSTAT?
14. How do you set Kernel Parameters in Red Hat Linux, AIX and Solaris?
15. How do you remove Memory segments?
16. What is the difference between Soft Link and Hard Link?
17. What is stored in oratab file?
18. How do you see how many processes are running in Unix?
19. How do you kill a process in Unix?
20. Can you change priority of a Process in Unix?

Database Administration

Oracle 10g Database Administration:
Useful steps for managing a 10g database: almost all management steps for a 10g database are available at the link below.

Virtual Private Database

Restricting Data Access Using the Virtual Private Database:

Virtual Private Database:

Virtual Private Database (VPD) with Oracle:

Tuesday, April 7, 2009

Oracle Architecture

Oracle Database Architecture Overview:

Oracle Database Process Architecture:

Architecture of Oracle Net Services:

Remote Listener

Registering Information with a Remote Listener:
A remote listener is a listener residing on one computer that redirects connections to a database instance on another computer. Remote listeners are typically used in an Oracle Real Application Clusters (Oracle RAC) environment. You can configure registration to remote listeners, such as in the case of Oracle RAC, for dedicated server or shared server environments.

Registering Information with a Remote Listener in a Dedicated Server Environment:
In a dedicated server environment, you must enable the PMON background process to register with a remote listener. You achieve this by configuring the REMOTE_LISTENER parameter in the initialization parameter file. The syntax of the REMOTE_LISTENER initialization parameter is as follows:

REMOTE_LISTENER=listener_alias


listener_alias is resolved to the listener protocol addresses through a naming method such as a tnsnames.ora file on the database host.

To dynamically update the REMOTE_LISTENER initialization parameter, use the SQL statement ALTER SYSTEM SET. If you set the parameter to null, as in the following example, then PMON de-registers the information it had previously registered with the remote listener:

SQL> ALTER SYSTEM SET REMOTE_LISTENER='';


To register information with a remote listener in a dedicated server environment:

1. On the host where the remote listener resides, configure the listener.ora file with the protocol addresses of the remote listener. For example, assume that a remote listener listens on port 1521 on host sales2-server.

2. On the database to which you want requests to be redirected, set the REMOTE_LISTENER parameter in the database initialization parameter file to the alias of the remote listener. For example, suppose that a database resides on host sales1-server. To redirect requests to the database on sales1-server, you can set the REMOTE_LISTENER parameter in the initialization file for the database on host sales1-server as follows:

REMOTE_LISTENER=listener_sales2


3. Resolve the listener name alias for the REMOTE_LISTENER setting through a tnsnames.ora file on the database host. For example, in the tnsnames.ora on sales1-server, you can resolve the remote listener alias listener_sales2 as follows:

listener_sales2=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=sales2-server)(PORT=1521)))


Registering Information with a Remote Listener in a Shared Server Environment:
In a shared server environment, you can use the same registration technique as for a dedicated server environment. Alternatively, you can set the LISTENER attribute of the DISPATCHERS parameter in the initialization parameter file to register the dispatchers with any listener.

The LISTENER attribute overrides the REMOTE_LISTENER parameter. Because the REMOTE_LISTENER parameter and the LISTENER attribute enable PMON to register dispatcher information with the listener, you need not specify both the parameter and the attribute if the listener values are the same.

The syntax of the LISTENER attribute is as follows:

DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener_alias)"

To register information with a remote listener in a shared server environment:

1. On the host where the remote listener resides, configure the listener.ora file with the protocol addresses of the remote listener. For example, assume that a remote listener listens on port 1521 on host sales2-server.

2. On the database to which you want requests to be redirected, configure the LISTENER attribute of the DISPATCHERS parameter. For example, suppose that a database resides on host sales1-server. To redirect requests to the database on sales1-server, set the DISPATCHERS parameter in the initialization file for the database on host sales1-server as follows:

DISPATCHERS="(PROTOCOL=tcp)(LISTENER=listener_sales2)"

3. Resolve the listener name alias for the LISTENER attribute through a tnsnames.ora file on the database host. For example, in the tnsnames.ora on sales1-server, you can resolve the remote listener alias listener_sales2 as follows:

listener_sales2=
  (DESCRIPTION=
    (ADDRESS=(PROTOCOL=tcp)(HOST=sales2-server)(PORT=1521)))

Oracle Listener

Oracle Default Listener:
Prior to Oracle 8i, a listener was statically configured (listener.ora) to service a given set of SIDs. From 8i, PMON dynamically registers a database service with the listener. Further, if the listener is running on the default TCP port of 1521, then there is no need to configure a listener.ora at all.

A listener.ora file is not required in order to use the default listener. The listener is started in the conventional manner:
$lsnrctl start
This listener will listen on two addresses, a TCP address on the default port and an IPC address:

(ADDRESS=(PROTOCOL=tcp)(HOST=<hostname>)(PORT=1521))
(ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY))

In order to change parameters to non default values (such as enabling listener tracing), a listener.ora should be created with the relevant parameters specified. The listener then needs to be restarted.

By default, PMON will register the database service with the listener on port 1521.

When a non-default listener is used, then a listener.ora must be configured with the relevant listener address. For example,
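For example, a minimal listener.ora along these lines (the listener name and host are assumptions, chosen to match the tnsnames example further down):

LISTENER_A =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = uksn155)(PORT = 2500)))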

This would start a listener on port 2500.

In order for PMON to be able to register the database service(s) with this listener, the init.ora parameter LOCAL_LISTENER must be set.

e.g., LOCAL_LISTENER=listener_A

PMON will attempt to resolve LOCAL_LISTENER using some naming method. For example, this may be resolved in tnsnames.ora, as follows:

listener_A = (DESCRIPTION = (ADDRESS=(PROTOCOL=TCP)(HOST=uksn155)(PORT=2500)) )

PMON will search for tnsnames.ora in the following order:

$HOME/.tnsnames.ora
$TNS_ADMIN/tnsnames.ora
/var/opt/oracle/tnsnames.ora or /etc/tnsnames.ora (depending on platform)
$ORACLE_HOME/network/admin/tnsnames.ora

If a tnsnames.ora cannot be found or if LOCAL_LISTENER cannot be resolved, the alert.log will show:

PMON started with pid=2
Syntax error in listener string

If LOCAL_LISTENER can be resolved, but there is a syntax error in the tnsnames.ora
specification, the alert log will show:

PMON started with pid=2
Syntax error in listener string (DESCRIPTION =)

The instance will start regardless of PMON errors during registration, unless MTS is enabled. If MTS is enabled, then both of the above error scenarios will give:

ORA-00101: invalid specification for system parameter MTS_DISPATCHERS

in addition to the relevant alert log message. The instance will not start.

Note that if 'NAMES.DEFAULT_DOMAIN' is set in sqlnet.ora, then the tnsnames.ora entry should be of the form NAME.DOMAIN. The domain will be appended to LOCAL_LISTENER if not already specified.
init.ora: LOCAL_LISTENER=listener_A (or LOCAL_LISTENER=listener_A.domain)

The search order for the 'system' sqlnet.ora is:

$TNS_ADMIN/sqlnet.ora
$ORACLE_HOME/network/admin/sqlnet.ora

Additionally, the 'local' sqlnet.ora is always read from:

$HOME/.sqlnet.ora

If this file exists, then any parameters defined here will override the ones in the 'system' sqlnet.ora.

Note, /etc or /var/opt/oracle is not searched for the 'system' sqlnet.ora unless TNS_ADMIN happens to be set to this directory.

Multiple LOCAL_LISTENERs can be specified in one of two ways in the init.ora:
local_listener=listener_A, listener_B
In both cases, v$parameter will show: local_listener=listener_A, listener_B
PMON will register ONLY with the listener that appears first in the v$parameter value for local_listener (i.e., listener_A in the above).
The correct method is to specify one local_listener in the init.ora, and to specify multiple listener ADDRESSes in the connect descriptor.

For example,
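A connect descriptor with multiple listener addresses might look like this (the host and ports are assumptions consistent with the earlier examples):

listener_A =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = uksn155)(PORT = 2500))
      (ADDRESS = (PROTOCOL = TCP)(HOST = uksn155)(PORT = 2600))))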

In non-MTS mode, all listeners must be on the same host as the instance (unless pre-spawned servers are used on the remote host). Even in dedicated mode with no pre-spawned servers, PMON still registers with listeners on other nodes, but this is of no use, as the remote listener will not be able to fork/exec the oracle process.

Registration in an MTS Environment:
Service registration is more flexible if the instance is running in MTS mode. For example:
- PMON can register services with listeners on more than one node
- the dispatchers can register with a different listener than dedicated services
- different dispatchers can register with different listeners
This is illustrated by way of the following examples.
Example 1

init.ora on host1:

tnsnames.ora on host1:
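A configuration consistent with the listener output below might be (the alias, host, and ports are assumptions):

init.ora:
mts_dispatchers="(protocol=tcp)"
local_listener=all_listeners

tnsnames.ora:
all_listeners =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2500))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2600))))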

output of 'lsnrctl services':

host1, listener on port 2500:
Services Summary...
V816 has 2 service handler(s)
DEDICATED SERVER established:0 refused:0
DISPATCHER established:0 refused:0 current:0 max:1022 state:ready

host1, listener on port 2600:
Services Summary...
V816 has 2 service handler(s)
DEDICATED SERVER established:0 refused:0
DISPATCHER established:0 refused:0 current:0 max:1022 state:ready
In this case, the dispatcher has registered with the listeners specified by the local_listener parameter.

Example 2
init.ora on host1:


tnsnames.ora on host1:
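A configuration consistent with the output below might be (the aliases, host, and ports are assumptions):

init.ora:
mts_dispatchers="(protocol=tcp)(listener=listener_2600)"
local_listener=listener_2500

tnsnames.ora:
listener_2500 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2500)))
listener_2600 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2600)))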

output of 'lsnrctl services':
Services Summary...
Nov10 has 1 service handler(s)
DEDICATED SERVER established:0 refused:0

Services Summary...
V816 has 1 service handler(s)
DISPATCHER established:0 refused:0 current:0 max:1022 state:ready

In this case, the dispatcher explicitly registers with a different
listener than the one for the dedicated service.

Example 3
init.ora on host1:
mts_dispatchers="(protocol=tcp)(listener=...)" local_listener=all_listeners

tnsnames.ora on host1:
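A tnsnames.ora consistent with the output below might be (the aliases, host, and ports are assumptions):

all_listeners =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2500))
      (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2600))))
listener_2600 = (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = 2600)))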

output of 'lsnrctl services':

host1, listener on port 2500:
Services Summary...
V816 has 1 service handler(s)
DEDICATED SERVER established:0 refused:0

host1, listener on port 2600:
Services Summary...
V816 has 2 service handler(s)
DEDICATED SERVER established:0 refused:0
DISPATCHER established:0 refused:0 current:0 max:1022 state:ready

This illustrates that the 'listener=' part of mts_dispatchers overrides local_listener when registering dispatchers.

Static Info Overwrite:
If a listener.ora is used, and a SID_DESC entry exists for an instance, the data within the SID_DESC section is referred to as 'static information' for that instance.

In 8.1.6, all static information in the listener.ora is overwritten when the instance is dynamically registered with the listener.

Therefore, any environment variables set within the listener.ora will not be visible unless the variable is set in the environment used to start the instance (and thus inherited by PMON).

This behaviour is different from 8.1.5. In 8.1.5, the existence of a SID_DESC section results in the listener NOT registering PMON's, and therefore the instance's, environment (note that the instance is still registered).

Therefore, in 8.1.5, any environment variables set in the listener.ora would be retained even after dynamic registration.

If there is no SID_DESC section, then the listener WILL register PMON's environment (i.e., it behaves as in 8.1.6).

Monday, April 6, 2009

Clone Database Manually

Clone an Oracle database using an online/hot backup:
This procedure will clone a database using an online copy of the source database files. Before beginning, though, there are a few things worth noting about online/hot backups:
  1. When a tablespace is put into backup mode, Oracle will write entire blocks to redo rather than the usual change vectors. For this reason, do not perform a hot backup during periods of heavy database activity - it could lead to a lot of archive logs being created.
  2. This procedure will put all tablespaces into backup mode at the same time. If the source database is quite large and you think that it might take a long time to copy, consider copying the tablespaces one at a time, or in groups.
  3. While the backup is in progress, it will not be possible to take the tablespaces offline normally or shut down the instance.

OK, let's get started...

1. Make a note of the current archive log change number

Because the restored files will require recovery, some archive logs will be needed. This applies even if you are not intending to put the cloned database into archive log mode. Work out which will be the first required log by running the following query on the source database. Make a note of the change number that is returned:

SQL>select max(first_change#) chng
from v$archived_log;

2. Prepare the begin/end backup scripts

The following SQL will produce two scripts, begin_backup.sql and end_backup.sql. When executed, these scripts will either put the tablespaces into backup mode or take them out of it:


set lines 999 pages 999
set verify off
set feedback off
set heading off

spool begin_backup.sql

select 'alter tablespace ' || tablespace_name || ' begin backup;' tsbb
from dba_tablespaces
where contents != 'TEMPORARY'
order by tablespace_name;
spool off

spool end_backup.sql

select 'alter tablespace ' || tablespace_name || ' end backup;' tseb
from dba_tablespaces
where contents != 'TEMPORARY'
order by tablespace_name;
spool off

3. Put the source database into backup mode

From sqlplus, run the begin backup script created in the last step:

SQL>@begin_backup.sql

This will put all of the database's tablespaces into backup mode.

4. Copy the files to the new location

Copy, scp or ftp the data files from the source database/machine to the target. Do not copy the control files across. Make sure that the files have the correct permissions and ownership.

5. Take the source database out of backup mode

Once the file copy has been completed, take the source database out of backup mode by running the end backup script created in step 2. From sqlplus:

SQL>@end_backup.sql


6. Copy archive logs

It is only necessary to copy archive logs created during the time the source database was in backup mode. Begin by archiving the current redo:

SQL>alter system archive log current;

Then, identify which archive log files are required. When run, the following query will ask for a change number. This is the number noted in step 1.

SQL>select name
from v$archived_log
where first_change# >= &change_no
order by name;

Create an archive directory in the clone database's file system and copy all of the identified logs into it.

7. Produce a pfile for the new database

This step assumes that you are using a spfile. If you are not, just copy the existing pfile. From sqlplus:

SQL>create pfile='init.ora' from spfile;

This will create a new pfile in the $ORACLE_HOME/dbs directory. Once created, the new pfile will need to be edited. If the cloned database is to have a new name, this will need to be changed, as will any paths. Review the contents of the file and make alterations as necessary. Also think about adjusting memory parameters. If you are cloning a production database onto a slower development machine, you might want to consider reducing some values.

Ensure that the archive log destination is pointing to the directory created in step 6.

8. Create the clone controlfile

Create a control file for the new database. To do this, connect to the source database and request a dump of the current control file. From sqlplus:

SQL>alter database backup controlfile to trace as '/home/oracle/cr_clone_test.sql';

The file will require extensive editing before it can be used. Using your favourite editor make the following alterations:

  1. Remove all lines from the top of the file up to but not including the second 'STARTUP NOMOUNT' line (it's roughly halfway down the file).
  2. Remove any lines that start with --
  3. Remove any lines that start with a #
  4. Remove any blank lines in the 'CREATE CONTROLFILE' section.
  5. Make a copy of the 'ALTER TABLESPACE TEMP...' lines, and then remove them from the file. Make sure that you hang onto the command; it will be used later.
  6. Move to the top of the file to the 'CREATE CONTROLFILE' line. The word 'REUSE' needs to be changed to 'SET'. The database name needs setting to the new database name (if it is being changed). Decide whether the database will be put into archivelog mode or not.
  7. If the file paths are being changed, alter the file to reflect the changes.

Here is an example of how the relevant part of the file would look for a small database called dg9a which isn't in archivelog mode (the DATAFILE clause, which lists every datafile, is elided here):

CREATE CONTROLFILE SET DATABASE "DG9A" RESETLOGS NOARCHIVELOG
LOGFILE
  GROUP 1 '/u03/oradata/dg9a/redo01.log' SIZE 100M,
  GROUP 2 '/u03/oradata/dg9a/redo02.log' SIZE 100M,
  GROUP 3 '/u03/oradata/dg9a/redo03.log' SIZE 100M
DATAFILE
  ...
;

9. Add a new entry to oratab and source the environment

Edit the /etc/oratab (or /opt/oracle/oratab) and add an entry for the new database. Source the new environment with '. oraenv' and verify that it has worked by issuing the following command:

echo $ORACLE_SID
If this doesn't output the new database sid go back and investigate.

10. Create a password file

Use the following command to create a password file (add an appropriate password to the end of it):

orapwd file=${ORACLE_HOME}/dbs/orapw${ORACLE_SID} password=xxx

11. Create the new control file(s)

Ok, now for the exciting bit! It is time to create the new controlfiles and open the database:

sqlplus "/ as sysdba"

SQL>@/home/oracle/cr_clone_test.sql

If all goes to plan you will see the instance start and then the message 'Control file created'.

12. Recover and open the database

The archive logs that were identified and copied in step 6 must now be applied to the database. Issue the following command from sqlplus:

SQL>recover database using backup controlfile until cancel;
When prompted to 'Specify log' enter 'auto'.

Oracle will then apply all the available logs and then error with ORA-00308. This is normal; it simply means that all available logs have been applied.

Open the database with reset logs:

SQL>alter database open resetlogs;

13. Create temp files

Using the 'ALTER TABLESPACE TEMP...' command from step 8, create the temp files. Make sure the paths to the file(s) are correct, then run it from sqlplus.

14. Perform a few checks

If the last couple of steps went smoothly, the database should be open. It is advisable to perform a few checks at this point:
Check that the database has opened with:

SQL> select status from v$instance;
The status should be 'OPEN'

Make sure that the datafiles are all ok:

SQL>select distinct status from v$datafile;
It should return only ONLINE and SYSTEM.

Take a quick look at the alert log too.

15. Set the database's global name

The new database will still have the source database's global name. Run the following to reset it:

SQL>alter database rename global_name to new_database_global_name;
Note: no quotes!

16. Create an spfile

From sqlplus:

SQL>create spfile from pfile;

17. Change the database ID

If RMAN is going to be used to back up the database, the database ID must be changed. If RMAN isn't going to be used, there is no harm in changing the ID anyway - and it's good practice to do so. From sqlplus:

SQL>shutdown immediate
SQL>startup mount
From unix:

nid target=username/Password

NID will ask if you want to change the ID. Respond with 'Y'. Once it has finished, start the database up again in sqlplus:

SQL>shutdown immediate;
SQL>startup mount;
SQL>alter database open resetlogs;

18. Configure TNS

Add entries for the new database in the listener.ora and tnsnames.ora as necessary.
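As a sketch, a minimal tnsnames.ora entry for the new database might look like the following; the alias, host name and service name are illustrative placeholders:

```
NEWDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = NEWDB)
    )
  )
```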


RMAN Clone a Database

Clone a Database Using RMAN:
By means of Recovery Manager it is possible to duplicate a database from a previously taken RMAN backup. The method is outlined below.
Let's assume a source database named SRCDB and a clone database named GEMINI. A Unix-like environment is assumed, but the method can be implemented on Windows as well; just beware of the particular Oracle implementation on a Windows platform (orapwd file name, service creation, path format).

1. Create a password file for the cloned (GEMINI) instance:

orapwd file=/u01/app/oracle/product/ password=password entries=10

2. Configure tnsnames.ora and listener.ora

Properly identify the database in tnsnames.ora and manually register the instance in listener.ora; both files are located in the $ORACLE_HOME/network/admin directory.

2.a Manually register the database against the listener (listener.ora)

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /u01/app/oracle/product/)
      (SID_NAME = GEMINI)
    )
  )

2.b Add the target GEMINI to the tnsnames.ora

GEMINI =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname>)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = GEMINI)
    )
  )

2.c Reload the listener
lsnrctl reload

3. Create a new init.ora for the cloned database.
Next create an init.ora file for the cloned database. If the same paths cannot be used on the target host, either because it is the same host as the source or because those paths are not reproducible on the target, then DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT may need to be defined.

# Convert file names to allow for different directory structure.
# block_size and compatible parameters must match those of the source database
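As a sketch, a minimal init.ora for the clone might look like this, assuming (purely for illustration) that the source files live under /u01/app/oracle/oradata/SRCDB and the clone's under /u01/app/oracle/oradata/GEMINI:

```
db_name=GEMINI
db_block_size=8192        # must match the source database
compatible=10.2.0.1.0     # must match the source database
control_files='/u01/app/oracle/oradata/GEMINI/control01.ctl'
db_file_name_convert=('/u01/app/oracle/oradata/SRCDB/','/u01/app/oracle/oradata/GEMINI/')
log_file_name_convert=('/u01/app/oracle/oradata/SRCDB/','/u01/app/oracle/oradata/GEMINI/')
```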

4. Connect to the cloned instance
sqlplus /nolog
conn / as sysdba

5. Create an SPFILE based on the init.ora
CREATE SPFILE FROM PFILE='/u01/app/oracle/admin/GEMINI/pfile/init.ora';

6. Start the database in NOMOUNT mode:

SQL>STARTUP NOMOUNT;

7. Connect to the TARGET, CATALOG and AUXILIARY databases.
By means of rman, three connections are opened: one for the source database (SRCDB), another for the catalog database (RCAT), and one more for the cloned database (GEMINI).

rman TARGET sys/password@SRCDB CATALOG rman/rman@RCAT AUXILIARY /

8. Complete or Incomplete clone (recover)
From the RMAN prompt, duplicate the database using one of the following commands:

8.a Clone the database by means of a complete recover:

RMAN>DUPLICATE TARGET DATABASE TO GEMINI;

8.b Clone the database up to a defined point in time in the past by means of an incomplete recover (the time here is just an example):

RMAN>DUPLICATE TARGET DATABASE TO GEMINI UNTIL TIME 'SYSDATE-2';

9. Process finished.
Once the process is finished, the newly created GEMINI database is ready to be used as an independent new cloned database.

Friday, April 3, 2009

Managing Listeners

Managing Listeners with lsnrctl:
$ lsnrctl

LSNRCTL for Solaris: Version - Production on 30-JAN-2003 11:54:13

(c) Copyright 1998 Oracle Corporation. All rights reserved.

Welcome to LSNRCTL, type "help" for information.

The following operations are available
An asterisk (*) denotes a modifier or extended command:

start stop status
services version reload
save_config trace spawn
dbsnmp_start dbsnmp_stop dbsnmp_status
change_password quit exit
set* show*

The following commands are used to manage the listener:
· start – Starts the listener with the name specified, otherwise LISTENER will be used. For Windows systems, the listener can also be started from the Control Panel.
· stop – Stops the listener. For Windows systems, the listener can also be stopped from the Control Panel.
· status – Provides status information about the listener, including start date, uptime, and trace level.
· services – Displays each service available, along with the connection history.
· version – Displays the version information of the listener.
· reload – Forces a read of the configuration file in order for new settings to take effect without stopping and starting the listener.
· save_config – Creates a backup of the existing listener.ora file and saves changes to the current version.
· trace – Sets the trace level to one of the following – OFF, USER, ADMIN, or SUPPORT.
· spawn – Spawns a program that runs with an alias in the listener.ora file.
· dbsnmp_start – Starts the DBSNMP subagent.
· dbsnmp_stop – Stops the DBSNMP subagent.
· dbsnmp_status – Displays the status of the DBSNMP subagent.
· change_password – Sets a new password for the listener.
· quit and exit – Exits the utility.
· set – Changes the value of any parameter. Everything that can be shown can be set.
· show – Displays current parameter settings.

Wednesday, April 1, 2009

Oracle Database Security

Oracle Database Security Checklist:
This paper recaps the security checklist that can be found in newer versions of the Oracle Database Security Guide. The recommendations contained in this document are intended to be general in nature. This document provides guidance on configuring the Oracle Database based on security best practices for operational database deployments. Details on specific database-related tasks and actions can be found throughout the Oracle documentation set.

Critical Patch Updates

Oracle Critical Patch Updates and Security Alerts:
This page lists security patches, in the form of Critical Patch Updates (CPUs) and Security Alerts, that Oracle has released. The page is updated when new Critical Patch Updates and Security Alerts are released.
Critical Patch Updates

Oracle Forum

Forum Home:
Welcome to Forum Home and post your issues.

Oracle Certification Matrices

Certify - Oracle's Certification
Welcome to Certify. This application provides product certification information for Oracle and non-Oracle products. Furthermore, Certify identifies patch and workaround information for problems that were discovered after a product's release.
To best view the matrices, it is recommended that you maximize your browser window.
View Certifications by Product
View Certifications by Platform
View Product Availability

CRS Administration (RAC)


You can use srvctl to manage these resources. Below are syntax and examples.

SRVCTL command line utility and examples are available at:

10G R2:
Oracle Database Oracle Clusterware and Oracle Real Application Clusters Administration and Deployment Guide, 10g Release 2 (10.2)

9i R2:
Oracle9i Real Application Clusters Administration, Release 2 (9.2)


srvctl status database -d <database> [-f] [-v] [-S <level>]
srvctl status instance -d <database> -i <instance>[,<instance-list>] [-f] [-v] [-S <level>]
srvctl status service -d <database> -s <service>[,<service-list>] [-f] [-v] [-S <level>]
srvctl status nodeapps [-n <node>]
srvctl status asm -n <node>


Status of the database, all instances and all services.
srvctl status database -d ORACLE -v
Status of named instances with their current services.
srvctl status instance -d ORACLE -i RAC01,RAC02 -v
Status of a named service.
srvctl status service -d ORACLE -s ERP -v
Status of all nodes supporting database applications.
srvctl status nodeapps


srvctl start database -d <database> [-o <start-options>] [-c <connect-string> | -q]
srvctl start instance -d <database> -i <instance>[,<instance-list>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start service -d <database> [-s <service>[,<service-list>]] [-i <instance>] [-o <start-options>] [-c <connect-string> | -q]
srvctl start nodeapps -n <node>
srvctl start asm -n <node> [-i <asm-instance>] [-o <start-options>]


Start the database with all enabled instances.
srvctl start database -d ORACLE
Start named instances.
srvctl start instance -d ORACLE -i RAC03,RAC04
Start named services. Dependent instances are started as needed.
srvctl start service -d ORACLE -s CRM
Start a service at the named instance.
srvctl start service -d ORACLE -s CRM -i RAC04
Start node applications.
srvctl start nodeapps -n myclust-4


srvctl stop database -d <database> [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop instance -d <database> -i <instance>[,<instance-list>] [-o <stop-options>] [-c <connect-string> | -q]
srvctl stop service -d <database> [-s <service>[,<service-list>]] [-i <instance>] [-c <connect-string> | -q] [-f]
srvctl stop nodeapps -n <node>
srvctl stop asm -n <node> [-i <asm-instance>] [-o <stop-options>]


Stop the database, all instances and all services.
srvctl stop database -d ORACLE
Stop named instances, first relocating all existing services.
srvctl stop instance -d ORACLE -i RAC03,RAC04
Stop the service.
srvctl stop service -d ORACLE -s CRM
Stop the service at the named instances.
srvctl stop service -d ORACLE -s CRM -i RAC04
Stop node applications. Note that instances and services also stop.
srvctl stop nodeapps -n myclust-4


srvctl add database -d <database> -o <oracle-home> [-m <domain>] [-p <spfile>] [-s <spfile>] [-n <db-name>]
srvctl add instance -d <database> -i <instance> -n <node>
srvctl add service -d <database> -s <service> -r <preferred-list> [-a <available-list>] [-P <TAF-policy>] [-u]
srvctl add nodeapps -n <node> -o <oracle-home> [-A <name|ip>/netmask[/if1[|if2...]]]
srvctl add asm -n <node> -i <asm-instance> -o <oracle-home>


-A vip range, node, and database address specification. The format of the
address string is:
[<name>|<ip>]/<netmask>[/<if1>[|<if2>...]]
-a for services, a list of available instances; this list cannot include
preferred instances
-m domain name with the format ""
-n node name that will support one or more instances
-o $ORACLE_HOME to locate Oracle binaries
-P for services, TAF preconnect policy - NONE, PRECONNECT
-r for services, a list of preferred instances; this list cannot include
available instances
-s spfile name
-u updates the preferred or available list for the service to support the
specified instance. Only one instance may be specified with the -u
switch. Instances that already support the service should not be included.
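As a sketch of the -u switch in use (the service and instance names are illustrative), adding one new preferred instance to an existing service might look like:

```
srvctl add service -d ORACLE -s ERP -u -r RAC05
```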


Add a new node:
srvctl add nodeapps -n myclust-1 -o $ORACLE_HOME -A
Add a new database.
srvctl add database -d ORACLE -o $ORACLE_HOME
Add named instances to an existing database.
srvctl add instance -d ORACLE -i RAC01 -n myclust-1
srvctl add instance -d ORACLE -i RAC02 -n myclust-2
srvctl add instance -d ORACLE -i RAC03 -n myclust-3
Add a service to an existing database with preferred instances (-r) and
available instances (-a). Use basic failover to the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04
Add a service to an existing database with preferred instances in list one and
available instances in list two. Use preconnect at the available instances.
srvctl add service -d ORACLE -s STD_BATCH -r RAC01,RAC02 -a RAC03,RAC04 -P PRECONNECT


srvctl remove database -d <database>
srvctl remove instance -d <database> [-i <instance>]
srvctl remove service -d <database> -s <service> [-i <instance>]
srvctl remove nodeapps -n <node>


Remove the applications for a database.
srvctl remove database -d ORACLE
Remove the applications for named instances of an existing database.
srvctl remove instance -d ORACLE -i RAC03
srvctl remove instance -d ORACLE -i RAC04
Remove the service.
srvctl remove service -d ORACLE -s STD_BATCH
Remove the service from the instances.
srvctl remove service -d ORACLE -s STD_BATCH -i RAC03,RAC04
Remove all node applications from a node.
srvctl remove nodeapps -n myclust-4


srvctl modify database -d <database> [-n <db-name>] [-m <domain>] [-s <spfile>]
srvctl modify instance -d <database> -i <instance> -n <node>
srvctl modify instance -d <database> -i <instance> {-s <asm-instance> | -r}
srvctl modify service -d <database> -s <service> -i <old-instance> -t <new-instance> [-f]
srvctl modify service -d <database> -s <service> -i <instance> -r [-f]
srvctl modify nodeapps -n <node> [-A <address>] [-x]


-i <old-instance> -t <new-instance> the instance named by -i is replaced by the
instance named by -t
-i <instance> -r the named instance is modified to be a preferred instance
-A address list for the VIP application, at node level
-s add or remove the ASM dependency


Modify an instance to execute on another node.
srvctl modify instance -d ORACLE -i RAC04 -n myclust-4
Modify a service to execute on another node.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC01 -t RAC02
Modify an instance to be a preferred instance for a service.
srvctl modify service -d ORACLE -s HOT_BATCH -i RAC02 -r


srvctl relocate service -d <database> -s <service> [-i <old-instance>] -t <new-instance> [-f]


Relocate a service from one instance to another
srvctl relocate service -d ORACLE -s CRM -i RAC04 -t RAC01

ENABLE CRS RESOURCES (The resource may be up or down to use this function)

srvctl enable database -d <database>
srvctl enable instance -d <database> -i <instance>[,<instance-list>]
srvctl enable service -d <database> -s <service>[,<service-list>] [-i <instance>]


Enable the database.
srvctl enable database -d ORACLE
Enable the named instances.
srvctl enable instance -d ORACLE -i RAC01,RAC02
Enable the service.
srvctl enable service -d ORACLE -s ERP,CRM
Enable the service at the named instance.
srvctl enable service -d ORACLE -s CRM -i RAC03

DISABLE CRS RESOURCES (The resource must be down to use this function)

srvctl disable database -d <database>
srvctl disable instance -d <database> -i <instance>[,<instance-list>]
srvctl disable service -d <database> -s <service>[,<service-list>] [-i <instance>]


Disable the database globally.
srvctl disable database -d ORACLE
Disable the named instances.
srvctl disable instance -d ORACLE -i RAC01,RAC02
Disable the service globally.
srvctl disable service -d ORACLE -s ERP,CRM
Disable the service at the named instance.
srvctl disable service -d ORACLE -s CRM -i RAC03,RAC04

Note: For more information on this see the Oracle10g Real Application Clusters
Administrator’s Guide - Appendix B