Friday, October 23, 2009

OCRCONFIG

Usage of ocrconfig to administer Oracle Cluster Registry:
Prerequisites:
Requires administrative (root) privileges on UNIX-based systems, or a user with Administrator privileges on Windows-based systems.

Usage:
ocrconfig -option

Options:
-backuploc: To change the OCR backup file location.
-downgrade: To downgrade the OCR to an earlier version.
-export: To export the contents of the OCR to a target file.
-help: To display help for the ocrconfig tool.
-import: To import the OCR contents from a previously exported OCR file.
-overwrite: To update an OCR configuration that is recorded on the OCR with the current OCR configuration information found on the node from which you are running this command.
-repair: To update the OCR configuration on the node from which you are running this command with the new configuration information specified by this command.
-restore: To restore the OCR from an automatically created OCR backup file.
-showbackup: To display OCR backup information.
-upgrade: To upgrade the OCR to the latest version.
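A few hedged usage examples (run as root on UNIX; the paths and file names below are illustrative only, not taken from this post):

# ocrconfig -showbackup
# ocrconfig -backuploc /u01/app/crs/cdata/backup
# ocrconfig -export /tmp/ocr_export.dmp
# ocrconfig -restore /u01/app/crs/cdata/crs/backup00.ocr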

RAC Administration and Maintenance Tasks

RAC Administration and Maintenance Tasks using ocrconfig, ocrdump, olsnodes, srvctl, crsctl, oifcfg, ocrcheck

This topic covers the following RAC maintenance activities:
  1. Checking CRS status
  2. Viewing name of the cluster
  3. Viewing nodes configuration
  4. Checking votedisk information
  5. Checking OCR disk information
  6. Timeout settings in cluster
  7. Add/Remove OCR files
  8. Add/Remove votedisk
  9. Backing up OCR
  10. Backing up votedisk
  11. Restoring OCR Devices
  12. Restoring voting disk devices
  13. Changing Public IPs and Virtual IPs
Refer to the link below to learn more about the above maintenance activities.
http://www.oracledba.org/11g/rac/11g_RAC_Admin_Maintenance_Tasks.html

The above link also provides useful information on the usage of the following tools:
  1. ocrconfig
  2. ocrdump
  3. olsnodes
  4. ocrcheck
  5. crsctl
  6. oifcfg
  7. srvctl
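As a hedged quick reference for the maintenance activities listed above (the database name RACDB is a placeholder, not from the linked article), typical commands are:

$ crsctl check crs                      # 1. CRS status
$ cemutlo -n                            # 2. cluster name
$ olsnodes -n                           # 3. nodes configuration
$ crsctl query css votedisk             # 4. voting disk information
$ ocrcheck                              # 5. OCR disk information
$ oifcfg getif                          # public/interconnect interfaces
$ srvctl status database -d RACDB       # instance status for database RACDB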

Tuesday, October 13, 2009

WebLogic Configuration for RAC

How-To Configure and Use Oracle Real Application Clusters (RAC) with Oracle WebLogic Server 10.3:
This demo shows how to configure Oracle WebLogic Server to work with Oracle Real Application Clusters (RAC) as well as how to test the connections to the backend Oracle RAC nodes using a web application.

Oracle WebLogic Server 10.3 integrates Oracle Real Application Clusters (RAC) features in Oracle Database 11g, minimizing database access time while allowing transparent access to rich pooling management functions that maximize both connection performance and availability.

There are multiple configuration options for Oracle RAC features within Oracle WebLogic Server. Oracle recommends using Oracle WebLogic Server JDBC multi data sources; this applies to scenarios with or without global transactions. You can also configure Oracle WebLogic Server to use the Oracle JDBC Thin driver's connect-time failover, as well as Fast Connection Failover from the Oracle JDBC driver's Implicit Connection Cache.

In this demo, we will configure and use Oracle WebLogic Server JDBC multi data sources for failover and load balancing.
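For example, a connect-time failover/load-balancing JDBC Thin URL for the data source connection pools might look like the sketch below; the host names (rac1-vip, rac2-vip) and the service name are placeholders and are not part of the original demo:

jdbc:oracle:thin:@(DESCRIPTION=
  (ADDRESS_LIST=(LOAD_BALANCE=ON)(FAILOVER=ON)
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac1-vip)(PORT=1521))
    (ADDRESS=(PROTOCOL=TCP)(HOST=rac2-vip)(PORT=1521)))
  (CONNECT_DATA=(SERVICE_NAME=racdb_srv)))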

Check the below URL for detailed steps:
http://www.oracle.com/technology/products/weblogic/howto/rac/index.html#

Tuesday, October 6, 2009

Task List for Physical Standby Database Creation

Task List: Preparing to create a physical standby database:
This section provides a checklist of tasks that you perform to create a physical standby database and synchronize it so that it is ready to begin managed recovery. Each step includes a reference to a section that provides additional information.

Preparing the Primary Database for Standby Database Creation:
Primary Node:
Enable Forced Logging
Create a Password File
Configure a Standby Redo Log
Set Primary Database Initialization Parameters
Enable Archiving
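A minimal sketch of these primary-side steps (the database name PROD, file paths, password, and the standby service name are assumptions for illustration only):

SQL> alter database force logging;
$ orapwd file=$ORACLE_HOME/dbs/orapwPROD password=change_me entries=5
SQL> alter database add standby logfile ('/u01/oradata/PROD/srl01.log') size 50M;
SQL> alter system set log_archive_dest_2='SERVICE=standby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby' scope=both;
(enabling archiving requires the database to be mounted)
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;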

Step-by-Step Instructions for Creating a Physical Standby Database:
Primary Node:
Create a Backup Copy of the Primary Database Datafiles
Create a Control File for the Standby Database
Prepare an Initialization Parameter File for the Standby Database
Copy Files from the Primary System to the Standby System

Standby Node:
Set Up the Environment to Support the Standby Database
Start the Physical Standby Database
Verify the Physical Standby Database Is Performing Properly

Post-Creation Steps:
Primary/Standby Node:
Upgrade the data protection mode
Enable Flashback Database
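A hedged sketch of these post-creation commands (the protection mode and flashback prerequisites depend on your redo transport and flash recovery area configuration):

SQL> alter database set standby database to maximize availability;   -- on the primary
SQL> alter database flashback on;                                     -- on primary and/or standby, while mounted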

Configure Standby Database using RMAN

Configure Physical Standby Database using RMAN:
Follow the steps below to configure a physical standby database using RMAN.

Target Database:
1. Make sure the target database is mounted or open, then check the current RMAN configuration:

$ export ORACLE_SID=TEST
$ rman target /
RMAN> show all;
Using target database control file instead of recovery catalog
RMAN configuration parameters are:
CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 1 DAYS;
CONFIGURE BACKUP OPTIMIZATION OFF;
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/oradata/backup/Oracle_Backups%d_%F.rman';
CONFIGURE DEVICE TYPE DISK PARALLELISM 1;
CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/u01/oradata/backup/Oracle_Backups%d_%F.rman';
CONFIGURE MAXSETSIZE TO UNLIMITED;
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/u01/oradata/backup/ORACLEORA92DATABASESNCFTEST.ORA';

2. Backup the current production database to create a standby database:

RMAN> backup database include current controlfile for standby plus archivelog;

3. Manually copy the backup sets from the production server to the DR Server (location of backups must match on both production and DR). Make sure all DR filesystems are identical with respect to the target database environment.

4. On the DR server, start up the TEST database in NOMOUNT mode. Make sure the parameter file and password file are present on the DR server in the $ORACLE_HOME/dbs location (on UNIX systems).

$ export ORACLE_SID=TEST
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup nomount
SQL> exit

5. Create the standby database using RMAN. This assumes that the database file structures are identical on both servers, and that the initialization parameter file, password file, and backup pieces are all in their correct locations. The tnsnames.ora and listener.ora must contain entries for both the target and auxiliary databases, and the listener must be up.

$ rman target sys/password@TEST auxiliary /     (This is executed from the DR server.)
RMAN> duplicate target database for standby nofilenamecheck dorecover;

6. Once the DR database is created, you will need to manually add a tempfile:

SQL> alter database open read only;
SQL> alter tablespace temp add tempfile '/u01/oradata/TEMP01.DBF' size 500M;

7. Put the DR database into managed standby mode:

SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect;

8. On the production database switch logs to initiate replication:

SQL> alter system switch logfile;

The configuration of Dataguard is now complete.
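To verify that redo is shipping and being applied, queries along the following lines can be run on the standby (a hedged sketch, not part of the original steps):

SQL> select process, status, sequence# from v$managed_standby;
SQL> select sequence#, first_time, next_time, applied from v$archived_log order by sequence#;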

For more detailed information on creating a physical standby database using RMAN, refer to the URL below:
http://www.idevelopment.info/data/Oracle/DBA_tips/RMAN_9i/RMAN9_32.shtml

Monday, October 5, 2009

ASM Commands

Summary of ASM Commands:
This section describes each individual ASMCMD command in detail:

cd: Changes the current directory to the specified directory.
du: Displays the total disk space occupied by ASM files in the specified ASM directory and all its subdirectories, recursively.
exit: Exits ASMCMD.
find: Lists the paths of all occurrences of the specified name (with wildcards) under the specified directory.
help: Displays the syntax and description of ASMCMD commands.
ls: Lists the contents of an ASM directory, the attributes of the specified file, or the names and attributes of all disk groups.
lsct: Lists information about current ASM clients.
lsdg: Lists all disk groups and their attributes.
mkalias: Creates an alias for a system-generated filename.
mkdir: Creates ASM directories.
pwd: Displays the path of the current ASM directory.
rm: Deletes the specified ASM files or directories.
rmalias: Deletes the specified alias, retaining the file that the alias points to.
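A short hedged example of an ASMCMD session (the disk group name DATA and database name ORCL are placeholders):

$ export ORACLE_SID=+ASM
$ asmcmd
ASMCMD> lsdg
ASMCMD> ls +DATA/
ASMCMD> du +DATA/ORCL/DATAFILE/
ASMCMD> exit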

Refer to the URL below for more details on asmcmd in 10g Release 2:
http://www.mcs.csueastbay.edu/support/oracle/doc/10.2/server.102/b14215/asm_util.htm#table

Refer to the URL below for more details on asmcmd in 11g Release 1:
http://download.oracle.com/docs/cd/B28359_01/server.111/b31107/asm_util.htm#table

RMAN Commands

RMAN Commands:
This chapter describes, in alphabetical order, Recovery Manager commands and subclauses. For a summary of the RMAN commands and command-line options, refer to "Summary of RMAN Commands".

http://download.oracle.com/docs/cd/B14117_01/server.101/b10770/rcmsynta.htm#77541

For command line options for the RMAN client, refer to "cmdLine".

To configure a recovery catalog database, refer to the URL below:
http://oracledocaccess.blogspot.com/2009/06/configure-recovery-catalog.html

For RMAN concepts, refer to the URL below:
http://oracledocaccess.blogspot.com/2009/06/rman-concepts.html

Monday, August 31, 2009

LogMiner Viewer

Using LogMiner Viewer to Perform a Logical Recovery:
Purpose:
In this module, you will learn how to use LogMiner to analyze your redo log files so that you can logically recover your database.
http://www.oracle.com/technology/obe/obe9ir2/obe-ha/logminer/logminer.htm

Thursday, August 27, 2009

Logminer

Logminer:

LogMiner is an Oracle utility that uses dictionary metadata (online, from a flat file, or from the redo logs) to scan the redo/archive logs and generate a set of SQL statements that would have the same effect on the database as applying the corresponding redo records.

Logminer can be used to mine the logs of the current database or a remote database.

The following steps are needed to analyze logs of a remote database.

1) Build the Dictionary file of the production database and scp to the mining/test server.
2) Get the list of archive logs which you want to mine from production server and scp to the mining/test server.
3) Add Redo Log Files for mining
4) Start LogMiner
5) Query V$LOGMNR_CONTENTS
6) End the LogMiner Session

Follow the below steps:
Production:
1: Log in as sys user
2: execute sys.dbms_logmnr_d.build ('PROD_dict.ora', '/oracle/admin/PROD/utldir');
3: Determine the archive log sequences that are needed by querying v$archived_log on the remote database. (Look at first_time, next_time values).
Ex: /u006/archive/PROD/PROD_1_4829_651194874.arc
4: Send the dictionary file from step 2 and the archive logs from step 3 to the mining/test server using scp/ftp/sftp, etc.

Mining/Test Server:
1: Log in as sys user
2: exec sys.dbms_logmnr.add_logfile('/export/home/oracle/logminer/archive/PROD_1_4829_651194874.arc');
3: exec sys.Dbms_Logmnr.start_logmnr(dictfilename => '/export/home/oracle/logminer/dict/PROD_dict.ora');
4: Query the V$LOGMNR_CONTENTS to see the mined rows
5: exec sys.DBMS_LOGMNR.end_logmnr;

Note: Do not execute step 5 until you have finished querying V$LOGMNR_CONTENTS.
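A hedged example of step 4 on the mining server (the schema SCOTT and table EMP are placeholders for whatever object you are investigating):

SQL> select scn, timestamp, username, seg_name, operation, sql_redo, sql_undo
     from v$logmnr_contents
     where seg_owner = 'SCOTT' and seg_name = 'EMP';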

Wednesday, August 26, 2009

ROW CACHE LOCK

WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK:
When row cache contention occurs and the enqueue cannot be obtained within a certain time period, a trace file containing the details is generated in the trace file destination.
The trace file tends to contain the words:
>> WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK! <<
address=700000036f27628 cid=0(dc_tablespaces)
hash=a6840aa5 typ=9 transaction=0 flags=00008000
...

The trace will often contain a systemstate dump, although most useful information is in the header section. Typically a session holding the row cache resource will either be on cpu or blocked by another session. If it is on cpu then errorstacks are likely to be required to diagnose, unless tuning can be done to reduce the enqueue hold time. Remember that on a multi node system (RAC) the holder may be on another node and so multiple systemstates from each node will be required. For each enqueue type, there are a limited number of operations that require each enqueue.

DC_TABLESPACES:
Probably the most likely is allocation of new extents. If extent sizes are set low then the application may constantly be requesting new extents and causing contention. Do you have objects with small extent sizes that are rapidly growing? (You may be able to spot these by looking for objects with large numbers of extents). Check the trace for insert/update activity, check the objects inserted into for number of extents.

DC_SEQUENCES:
Check for appropriate caching of sequences for the application requirements.
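As a hedged illustration (the sequence name is a placeholder), poorly cached sequences can be located and altered as follows:

SQL> select sequence_owner, sequence_name, cache_size from dba_sequences where cache_size < 20;
SQL> alter sequence app_owner.order_seq cache 1000;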

DC_USERS:
Deadlock and resulting "WAITED TOO LONG FOR A ROW CACHE ENQUEUE LOCK!" can occur if a session issues a GRANT to a user, and that user is in the process of logging on to the database.

DC_OBJECTS:
Look for any object compilation activity which might require an exclusive lock and thus block online activity.

DC_SEGMENTS:
This is likely to be down to segment allocation. Identify what the session holding the enqueue is doing and use errorstacks to diagnose.

The details are also available at:
http://www.metalink.oracle.com/
Doc ID: 278316.1


Solution to ROW/LIBRARY Cache locks:
Set the event below at the database level and restart the database; this resolves the issue.
event='32333 trace name context forever, level 10'

Query Rewrite Feature in Materialized Views

Query Rewrite Feature in Materialized Views:
--------------------------------------------------------------
Setup the Environment:
This includes creating a MV_CAPABILITIES_TABLE and a customised function to test this feature on Materialized views.

create table MV_CAPABILITIES_TABLE
( statement_id varchar(30) ,
mvowner varchar(30) ,
mvname varchar(30) ,
capability_name varchar(30) ,
possible character(1) ,
related_text varchar(2000) ,
related_num number ,
msgno integer ,
msgtxt varchar(2000) ,
seq number) ;

The above table can be created by executing the below script:
$ORACLE_HOME/rdbms/admin/utlxmv.sql

You can create a customised function to generate query rewrite functionalities reports on materialized views.
Refer to the URL below to create a function called my_mv_capabilities.
http://www.sqlsnippets.com/en/topic-12884.html

Create/Test the Query Rewrite Feature of Materialized Views:
Create a table called dept:
SQL> create table dept (id number, name varchar2(20));
SQL> insert into dept values (1, 'Finance');
SQL> insert into dept values (2, 'Admin');
SQL> insert into dept values (3, 'HR');
SQL> commit;

Create a test mview:
SQL> Create materialized view test DISABLE QUERY REWRITE as select id,name from dept;

Generate Report:
SQL> set long 5000
SQL> select my_mv_capabilities( 'TEST', 'REWRITE' ) as mv_report from dual ;

MV_REPORT
--------------------------------------------------------------------------------
Not Capable of:
REWRITE

REWRITE_FULL_TEXT_MATCH
query rewrite is disabled on the materialized view

REWRITE_PARTIAL_TEXT_MATCH
query rewrite is disabled on the materialized view

REWRITE_GENERAL
query rewrite is disabled on the materialized view

The above report shows that query rewrite is not available for this MV.

Enable Query Rewrite:
SQL>alter materialized view test enable query rewrite;

Generate Report:
SQL> set long 5000
SQL> select my_mv_capabilities( 'TEST', 'REWRITE' ) as mv_report from dual ;

MV_REPORT
--------------------------------------------------------------------------------
Capable of:
REWRITE

REWRITE_FULL_TEXT_MATCH

REWRITE_PARTIAL_TEXT_MATCH

REWRITE_GENERAL

The above report confirms that query rewrite is now enabled for the MV.

Effect of query rewrite on PLAN:
A very good analysis is available below:
http://www.sqlsnippets.com/en/topic-12918.html

How Query Rewrite can be enabled at the database level:
The following two initialization parameters enable this feature for the database (a brief example follows below):
query_rewrite_enabled = TRUE
query_rewrite_integrity = ENFORCED
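A minimal sketch of setting these parameters (the values shown are the typical ones; adjust to your requirements):

SQL> alter system set query_rewrite_enabled = true scope=both;
SQL> alter system set query_rewrite_integrity = enforced scope=both;
-- or only for the current session:
SQL> alter session set query_rewrite_enabled = true;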

Tuesday, August 25, 2009

Database Features: 8i vs 9i vs 10G vs 10GR2

Database Features: 8i vs 9i vs 10G vs 10GR2:
-----------------------------------------------------------
Here is an article which will give you an idea about the major differences in features for various versions of Oracle databases:
-------------------------------------------
Oracle 8i Features:
-------------------------------------------
Auditing - Basic setup instructions to allow the auditing of user actions.

Cost Based Optimizer (CBO) And Database Statistics - Lists the available mechanisms for gathering database statistics that are used by the cost based optimizer.

Dropping Columns - Hide unused columns or completely remove them.

Index Organized Tables - Save space and improve the performance of tables which are predominantly accessed by their primary keys.

Locally Managed Tablespaces - Reduce contention on the system tables by delegating extent management to individual tablespaces.

LogMiner - Analyze archived redo logs to identify specific statements and undo them.

Partitioned Tables And Indexes - Reduce tables and indexes to a more manageable size and simultaneously improve performance.

Recovery Manager (RMAN) - Explanation of RMAN's basic backup, recovery and reporting functionality.

Refreshing Stale Statistics - Keep your optimizer statistics up to date without increasing the load on the server.

Replication (Simple) - Using materialized views (snapshots) to replicate data on a remote database.

Replication (Advanced) - Using Oracle Advanced Replication for two-way data replication.

Statspack - Analyze the performance of your system using the replacement for the UTLBSTAT/UTLESTAT scripts.

Standby Database - Keep an up-to-date copy of the production database ready for failover in the event of a disaster.

Temporary Tables - Most applications need temporary tables to store information during complex data processing. See how Oracle can manage your temporary tables and keep your data secure.
----------------------------------------------
Oracle9i New Features:
---------------------------------------------
Automatic Segment Free Space Management - Reduce segment header contention and wasted space within blocks by switching from using FreeLists to Automatic Segment Free Space Management.

Automatic Undo Management - Replace your existing rollback segments with self-tuning undo segments.

Bitmap Join Indexes - Increase the performance and reduce the size of your data warehouse.

Data Guard - Oracle9i Data Guard is the new name for Oracle8i Standby Database incorporating a large number of new features.

Flashback Query - Get a consistent view of your data at a previous point in time.

Memory Management In Oracle9i - See how easy memory management has become in Oracle9i.

Multiple Block Sizes - Improve efficiency by having multiple block sizes that relate more closely to the operations that are being performed.

Oracle Managed Files - Let Oracle manage operating system level file placement, naming and cleanup.

Real Application Clusters - A brief introduction to the Oracle9i replacement for Oracle Parallel Server.

Recovery Enhancements In Oracle9i - Reduce unplanned downtime by using the new crash, instance and media recovery features of Oracle9i.

Recovery Manager (RMAN) Enhancements In Oracle9i - Use the latest RMAN features which make backup and recovery quicker and more reliable.

DBNEWID Utility - Change the internal DBID and the database name using this new utility without rebuilding your controlfile.

DBMS_XPLAN - Easily format the output of an explain plan with this replacement for the utlxpls.sql script.

Export BLOB Contents Using UTL_FILE - Use the new UTL_FILE functionality to write binary data to files.

STATISTICS_LEVEL - Let Oracle9i Release 2 control the collection of statistics and advisories with a single parameter.
------------------------------------------------
Oracle 10G New Features:
------------------------------------------------
Flashback Versions Query-
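A hedged example against the HR sample schema (not part of the original post) showing row versions over the last hour:

SQL> select versions_starttime, versions_operation, salary
     from hr.employees
     versions between timestamp systimestamp - interval '1' hour and systimestamp
     where employee_id = 100;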

Tablespace Management-

ALTER TABLESPACE <old_tablespace_name> RENAME TO <new_tablespace_name>;
ALTER DATABASE DEFAULT TABLESPACE <tablespace_name>;

Oracle Data Pump-
create directory dpdata1 as '/u02/dpdata1';
grant read, write on directory dpdata1 to ananda;

* Data Pump Export
expdp ananda/abc123 tables=CASES directory=DPDATA1 dumpfile=expCASES.dmp job_name=CASES_EXPORT
* With Parallel Operation
dumpfile=expCASES_%U.dmp parallel=4
* Data Pump Import
impdp ananda/abc123 directory=dpdata1 dumpfile=expCASES.dmp job_name=cases_import

Flashback Table-
drop table recycletest;
show recyclebin
FLASHBACK TABLE RECYCLETEST TO BEFORE DROP;

Managing the Recycle Bin-
PURGE TABLE TEST;
PURGE TABLESPACE USERS;
PURGE RECYCLEBIN;

Other Uses of Flashback Tables-
FLASHBACK TABLE RECYCLETEST TO SCN 2202666520;

Automatic Workload Repository-

Automatic Storage Management-


RMAN-
Incremental Backups Revisited by block change tracking enabled:
alter database enable block change tracking using file '/rman_bkups/change.log';

Flash Recovery Area-
alter system set db_recovery_file_dest = '/ora_flash_area';
alter system set db_recovery_file_dest_size = 2g;
alter system set db_flashback_retention_target = 1440;
alter database flashback on;

Incremental Merge-
backup incremental level 1 for recover of copy with tag level_0 database;

* Compressed Files
backup as compressed backupset incremental level 1 database;
* Recovery Preview
restore database preview;
* Resetlogs and Recovery

RMAN can now readily use all backups, before and after a resetlogs operation, to recover the Oracle database. There is no need to shut down the database to make a backup. This new capability means that the database can be re-opened immediately for the user community after a resetlogs operation.

Enterprise Manager 10g-
Ultra Search HTTP port number = 5620
iSQL*Plus HTTP port number = 5560
Enterprise Manager Console HTTP Port (starz10) = 5500
Enterprise Manager Agent Port (starz10) = 1830

Automatic Segment Management-
High Water Mark to Low Water Mark:
alter table bookings enable row movement;
alter table bookings shrink space compact;
alter table bookings shrink space cascade;

Transportable Tablespaces-

Automatic Shared Memory Management-
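A minimal sketch of enabling ASMM (the size is illustrative; STATISTICS_LEVEL must be TYPICAL or ALL):

SQL> alter system set sga_target = 600m scope=both;
SQL> show parameter sga_target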

ADDM and SQL Tuning Advisor-

---------------------------------------------------
10G R2 new features
---------------------------------------------------
Manageability Features:
• ASM Command Line Tool
• Drop Empty Datafiles
• Direct SGA Access for Hung/Slow Systems
• Faster Startup
• Manage Multiple Objects in Oracle Enterprise Manager
• Automatic Segment Advisor

Backup and Availability Features
• Oracle Secure Backup
• Dynamic RMAN Views for Past and Current Jobs
• Dynamic Channel Allocation for Oracle RAC Clusters
• Tempfiles Recovery via RMAN
• Flashback Database/Query Through RESETLOGS
• Flashback Database Restore Points
• Flash Recovery Area View

Monday, August 24, 2009

Pre-Requirements for Grid Control

Pre-Requirements for Grid Control:
This chapter provides information about the preinstallation requirements you must meet before installing Enterprise Manager Grid Control (Grid Control).
http://download.oracle.com/docs/cd/B16240_01/doc/install.102/e10953/pre-installation_req.htm

High Availability Grid Control Architecture

High Availability Grid Control Architecture:

Oracle Enterprise Manager 10g Grid Control is based on a flexible architecture, which allows you to deploy the Grid Control components in the most efficient and practical manner for your organization. This chapter describes some common configurations that demonstrate how you can deploy the Grid Control architecture in various computing environments.

This chapter presents the common configurations in a logical progression, starting with the simplest configuration and ending with a complex configuration that involves the deployment of high availability components, such as server load balancers, Oracle Real Application Clusters, and Oracle Data Guard.

Follow the link below for more information on the various recommended architectures for Oracle Grid Control. The setup can be built based on your business needs.
http://download.oracle.com/docs/html/B12013_03/configs.htm#CHDCFGFC

Wednesday, July 29, 2009

ASM Instance

Manually Creating an ASM Instance:
Automatic Storage Management (ASM) is a new feature in Oracle10g that relieves the DBA of having to manually manage and tune the disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.

Only one ASM instance is required per node. The same ASM instance can manage ASM storage for all 10g databases running on the node.

When the DBA installs the Oracle10g software and creates a new database, creating an ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to create an ASM instance as well as an Oracle database that makes use of the new ASM instance for ASM storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when a 10g database was first created? The DBA will need to know how to manually create an ASM instance, and that is what this article provides.

  1. Overview
  2. Configuring Oracle Cluster Synchronization Services (CSS)
  3. Creating the ASM Instance
  4. Identify RAW Devices
  5. Starting the ASM Instance
  6. Verify RAW / Logical Disk Are Discovered
  7. Creating Disk Groups
  8. Using Disk Groups
  9. Startup Scripts
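As a hedged sketch (the disk paths and disk group name are placeholders, not the article's values), an ASM instance parameter file and a first disk group might look like:

INSTANCE_TYPE  = ASM
ASM_DISKSTRING = '/dev/raw/raw*'
ASM_DISKGROUPS = 'DATA'

SQL> startup nomount
SQL> create diskgroup DATA normal redundancy
     disk '/dev/raw/raw1', '/dev/raw/raw2';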

Go through the link below, which covers all of the above topics.

http://www.idevelopment.info/data/Oracle/DBA_tips/Automatic_Storage_Management/ASM_10.shtml

Note: Automatic Storage Management (ASM) requires the use of Oracle Cluster Synchronization Services (CSS), and as such, CSS must be configured and running before attempting to use ASM. The CSS service is required to enable synchronization between an ASM instance and the database instances that rely on it for database file storage.

With Oracle10g R1, CSS was always configured regardless of whether you chose to configure ASM or not. On the Linux / UNIX platform, CSS was installed and configured via the root.sh script. This caused a lot of problems, since many did not know what this process was and, in most cases, did not want the CSS process running because they were not using ASM.

Oracle listened carefully to the concerns (and strongly worded complaints) about the CSS process, and in Oracle10g R2 configures it only when absolutely necessary. In Oracle10g R2, for example, if you do not choose to configure an ASM stand-alone instance or a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.

In the case where the CSS process is not configured to run on the node (see above), you can make use of the $ORACLE_HOME/bin/localconfig script in Linux / UNIX or %ORACLE_HOME%\bin\localconfig.bat batch file in Windows. For example in Linux, run the following command as root to configure CSS outside of the root.sh script after the fact:
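The invocation is along the lines of the following (a sketch; confirm the exact syntax for your release in the linked article):

# $ORACLE_HOME/bin/localconfig add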

Friday, July 24, 2009

Oracle in the Cloud

Check out this SlideShare Presentation: Lovely presentation: Oracle in Cloud technology

Oracle Cloud Computing

Cloud computing center of Oracle:
Oracle has played a pioneering role in making Grid Computing relevant to enterprises with ground breaking products such as Oracle Real Applications Clusters (RAC), Automatic Storage Management (ASM), and Storage Grid.

More recently, Oracle has brought Grid Computing to middleware with the Application Grid approach to infrastructure. These products/technologies make the enterprise IT infrastructure elastic so that it can grow incrementally without any theoretical upper limit, as well as provide the flexibility to move resources around in order to meet dynamic business priorities.

In September 2008, Oracle introduced new offerings that allow enterprises to benefit from the developments taking place in the area of Cloud Computing. As a part of our initial offering, Oracle has partnered with Amazon Web Services (AWS) to offer the following products and services:

•Deploy Oracle Software in the Cloud
•Backup Oracle Database in the Cloud

These offerings may be extended to other Cloud platforms in the future. Check the below links for more details:
http://www.oracle.com/technology/tech/cloud/index.html

Deploying Oracle Database in the Cloud
Following is a demo/tutorial for deploying database in cloud.
http://www.oracle.com/technology/tech/cloud/demos/oracle_on_ec2_viewlet_swf.htm

Backing up Database in the Cloud
Following is a demo/tutorial for backing up database in the cloud.
http://www.oracle.com/technology/tech/cloud/demos/osb_cloud_backup_viewlet_swf.htm

Wednesday, July 22, 2009

RAC Filesystem Options

Storage Options for RAC:
In this section we are going to discuss the various file system options available to host Oracle Real Application Clusters. Let's first understand what these file systems are, and then see which of them support Oracle RAC.

Following are the storage options to store Oracle database for RAC.

Raw: Raw Devices
ASM: Automatic Storage Management
CFS: Cluster File System
OCFS: Oracle Cluster File System
LVM: Logical Volume Manager
NFS: Network File System

Now let's find out what physical components make up an Oracle database:

1: Oracle Software (Called as Oracle Binaries)
2: Oracle Dumps (Known as bdump, cdump, udump, adump and spfile/pfile etc)
3: Oracle database files (Data files, Control Files and Redo log files )
4: Recovery Files (Oracle Archives and Oracle Flash back)
5: CRS Files (OCR (Oracle Cluster Registry) and Voting Disk. Applicable only for RAC database.)

As you know, the above files are required to set up a database (single instance or RAC). Let's see which storage options can host these files:

RAW:
-------------------
Supported: Oracle Database Files and CRS
Not Supported: Oracle Software, Recovery files and Oracle Dumps
Platform: All Platforms

ASM:
------------------
Supported: Oracle Database Files and Recovery Files
Not Supported: Oracle Software, CRS and Oracle Dumps
Platform: All Platforms

CFS:
-----------------
Supported: All
Not Supported: None
Platform: AIX, HP Tru64 UNIX, SPARC Solaris

OCFS:
---------------
Supported: Oracle Database Files, CRS and Recovery Files
Not Supported: Oracle Software and Oracle Dumps
Platform: Windows, Linux

LVM:
--------------
Supported: All
Not Supported: None
Platform: HP-UX, HP Tru64 UNIX, SPARC Solaris

NFS:
-------------
Supported: All
Not Supported: None
Platform: Linux, SPARC Solaris

Thursday, July 16, 2009

Convert Single Instance Database to RAC and ASM

Convert a Single Instance Database to Oracle RAC 10g on RHEL3:

Learn how to convert a single-instance database to Oracle Real Application Clusters (RAC) 10g on Red Hat Enterprise Linux 3 (RHEL3), step by step.
Contents
Overview
Step 1: Preliminary Installation
Step 2: Migrate Your Database to ASM
Step 3: Install Cluster Ready Services (CRS) Software
Step 4: Install Oracle RAC Software
Step 5: Post Installation
Step 6: Test Transparent Application Failover (TAF)
Conclusion

Download for this article:
Oracle Database 10g Enterprise Edition and Clusterware for Linux x86
Oracle ASMlib Drivers

For those with limited hands-on experience, implementing Oracle RAC 10g can be an intimidating prospect. But it doesn't have to be that way.

In this guide, I'll provide the precise procedures for converting a single-instance Oracle 10g database to Oracle RAC on RHEL3. We'll use Oracle Automatic Storage Management (ASM), an integrated file system and a volume management solution built into the Oracle database kernel, for RAC database storage because it simplifies database storage management tasks while offering storage reliability and performance.

To summarize on a very high level, there are several ways to convert your database to RAC. If the database is small, you can consider installing a new RAC database on ASM and export/import your data from your current database to the RAC database. For a larger database, the preferred method would be to use RMAN to migrate the database files over to ASM.
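As a hedged sketch of the RMAN approach for the datafiles (the disk group name +DATA is a placeholder; control files, redo logs, and the spfile need additional steps covered in the article):

SQL> alter system set db_create_file_dest='+DATA' scope=spfile;
RMAN> backup as copy database format '+DATA';
SQL> shutdown immediate
SQL> startup mount
RMAN> switch database to copy;
RMAN> recover database;
SQL> alter database open;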

The method demonstrated here is a two-phase approach:

first, migrating the data to ASM, and
second, converting the single-instance database to RAC.

If you are new to ASM, I would recommend taking this migration path to get familiar with ASM before leaping into ASM and RAC at the same time. (For more information about Oracle ASM, visit the Oracle ASM Product Center or refer to the documentation.)

This guide requires a basic understanding of RAC architecture and some familiarity with managing and administering Oracle Database and Red Hat Enterprise Linux. Refer to the documentation for details.

The detailed stepwise guidance is available at the below URL:
http://www.oracle.com/technology/pub/articles/chan_sing2rac_install.html

The above URL holds a detailed document on migrating a single-instance database from a cluster file system to ASM and then to RAC.

Wednesday, June 10, 2009

Configure Recovery Catalog

Create a Recovery Catalog database:
As you are already aware of the RMAN concepts, it is time to create a recovery catalog database to keep all backup/recovery metadata. Let's first create the environment for the recovery catalog database.

Pre-requisites:
1: It is assumed that oracle software is installed and a normal Oracle database (Ex: RMAND) is already created using dbca.
2: It is assumed that the Listener is up and running.
3: It is assumed that the tnsnames.ora contains RMAND database connection information.

Test the environment in Server:
$ ps -ef | grep tns
LISTENER_RMAND process is running

$ tnsping rmand
OK (80 msec)

Create the RMAN user in the database.
$ sqlplus / as sysdba
SQL> create user rman identified by rman default tablespace users temporary tablespace temp quota unlimited on users;
SQL> Grant create session, connect, resource, recovery_catalog_owner to rman;

Test Connectivity:
Let's test whether connectivity works:

$sqlplus rman/rman@rmand

If it connects to the database, then you are all set to create the recovery catalog.

Create a Catalog database:
$ rman catalog rman/rman@rmand
RMAN> Create Catalog;

The above steps complete the recovery catalog creation. The RMAN schema will contain all the RMAN objects that hold the backup/recovery metadata.

Registering the TARGET database:
Now that the recovery catalog database is created, it is time to register the target database that needs to be backed up using RMAN.

Pre-requisites:

1: It is assumed that oracle software is installed and a normal Oracle database (Ex: RIPO) is already created using dbca.
2: It is assumed that the Listener is up and running.
3: It is assumed that the tnsnames.ora contains RMAND (Recovery Catalog) and RIPO (Target database) database connection information.

Test the environment in Server:
$ ps -ef | grep tns
LISTENER_RMAND process is running
LISTENER_RIPO process is running

$ tnsping ripo
OK (80 msec)

$ tnsping rmand
OK (80 msec)

Set the TARGET database environment:
$ . oraenv
ORACLE_SID = [RIPO] ? RIPO

Register Target Database:
$ rman target system/manager@ripo catalog rman/rman@rmand

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

How to check whether the target database is registered:
Connect to the recovery catalog database and run the query below to verify that the target database is registered with the recovery catalog.

$sqlplus rman/rman@rmand
SQL> select * from rc_database;
    DB_KEY  DBINC_KEY       DBID NAME RESETLOGS_CHANGE# RESETLOGS
---------- ---------- ---------- ---- ----------------- ---------
         1          2 2498101982 RIPO                 1 15-JAN-04
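With the target registered, catalog-backed backups can now be taken; a minimal hedged example:

$ rman target system/manager@ripo catalog rman/rman@rmand
RMAN> backup database plus archivelog;
RMAN> list backup summary;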

For RMAN Concepts refer:
http://oracledocaccess.blogspot.com/2009/06/rman-concepts.html

Thursday, June 4, 2009

Monitor Oracle Alert Log

Unix Shell Script To Monitor Oracle Alert Log:

#!/bin/ksh
# Source the RACT database environment settings
. /usr/local/bin/RACT.env

export machine=`uname -n`
export script_dir=/export/home/oracle/local/scripts
export bdump=/export/home/oracle/admin/RACT/bdump
export flagfile=/export/home/oracle/admin/RACT/bdump/flag.lis

# Compare the alert log line count from the previous run with the current count
flagoldval=`cat $flagfile`
cat $bdump/alert_${ORACLE_SID}.log | wc -l > $flagfile
flagnewval=`cat $flagfile`
NUM=$(($flagnewval-$flagoldval))

# Scan only the newly added lines for error entries
tail -$NUM $bdump/alert_${ORACLE_SID}.log | awk \
'BEGIN {prev="" ; ret=1 }
/^(...-Error)/ { if ( prev !~ /^(...-Error)/ ) { print "" ; print prev;} print $0;ret=0}
{prev=$0}
END { exit ret } ' > $bdump/alert_temp10.log

# Mail the errors, if any were found
if [ `cat $bdump/alert_temp10.log | wc -l` -gt 0 ]
then
(echo "To: xxy@test.com\nSubject: CRITICAL: RACT: Error in alert log \n\n"; cat $bdump/alert_temp10.log) | /usr/lib/sendmail xxy@test.com
else
echo "No Error"
fi

Oracle Database Upgrade Path

Oracle Database Upgrade Path:
The below document in metalink will recommend what are all required patches for upgrade.
Doc ID: 756671.1 : Oracle Recommended Patches -- Oracle Database

The below document is a nice document for database upgrade to 10GR2.
Doc ID: 316889.1 : Complete Checklist for Manual Upgrades to 10gR2

This document will give you all the details regarding database upgrade to any version.
Doc ID: 730365.1 : Oracle Database Upgrade Path Reference List

To access the above documents you need an account at https://metalink.oracle.com/. An account for this site is possible only when you have a valid Customer Support Identifier (CSI) number available.

RMAN Concepts

Components of the RMAN Environment:
Component:Target database
Description:The control files, datafiles, and optional archived redo logs that RMAN is in charge of backing up or restoring. RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations. The work of backup and recovery is performed by server sessions running on the target database.
Required?:Yes

Component:RMAN client
Description:The client application that manages backup and recovery operations for a target database. The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net.
Required?:Yes

Component:Recovery catalog database
Description:A database containing the recovery catalog schema, which contains the metadata that RMAN uses to perform its backup and recovery operations.
Required?:No

Component:Recovery catalog schema
Description:The user within the recovery catalog database that owns the metadata tables maintained by RMAN. RMAN periodically propagates metadata from the target database control file into the recovery catalog.
Required?:No

Component:Standby database
Description:A copy of the primary database that is updated using archived logs created by the primary database. RMAN can create or back up a standby database. You can fail over to the standby database if the primary database goes down.
Required?:No

Component: Duplicate database
Description:A copy of the primary database that you can use for testing purposes.
Required?:No

Component:Media management application
Description:A vendor-specific application that allows RMAN to back up to a storage system such as tape.
Required?:No

Component:Media management catalog
Description:A vendor-specific repository of information about a media management application.
Required?:No

Component:Enterprise Manager
Description:A browser-based interface to the database, including backup and recovery through RMAN.
Required?No

As mentioned above, the required components (target database and RMAN client) are available as soon as your target database is ready. The RMAN client is an executable that comes with the database software installation. Run the command below to check whether RMAN is installed:

C:\Documents and Settings\sisrath>rman
Recovery Manager: Release 10.2.0.1.0 - Production on Thu Jun 4 11:58:46 2009
Copyright (c) 1982, 2005, Oracle. All rights reserved.
RMAN>


Now when you know how to get ready with the required components you should have a basic idea about how RMAN stores all the backup and recovery related information. The below section will provide a clear idea on this.

Storage of the RMAN Repository in the Control File:
Because most information in the recovery catalog is also available in the target database's control file, RMAN supports an operational mode in which it uses the target database control file instead of a recovery catalog. This mode is especially appropriate for small databases where installation and administration of a separate recovery catalog database is burdensome. The only RMAN feature that is not supported in NOCATALOG mode is stored scripts.

When you do not use a recovery catalog, the control file is the exclusive source of information about backups and copies as well as other relevant information. The control file contains two types of records:
1: Circular reuse records
2: Noncircular reuse records.

Circular Reuse Records:
Circular reuse records contain noncritical information that is eligible to be overwritten if the need arises. These records contain information that is continually generated by the database. Circular reuse records are arranged in a logical ring. When all available record slots are full, the database either expands the control file to make room for a new record or overwrites the oldest record. The CONTROL_FILE_RECORD_KEEP_TIME initialization parameter specifies the minimum age in days of a record before it can be reused. By default it is 7 days, so it is always recommended to set CONTROL_FILE_RECORD_KEEP_TIME to a higher value when no catalog database is used, in order to retain as much backup and recovery metadata as possible (an example follows below).
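A hedged example of checking and raising this parameter (30 days is illustrative):

SQL> show parameter control_file_record_keep_time
SQL> alter system set control_file_record_keep_time = 30 scope=both;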

Noncircular Reuse Records:
Noncircular reuse records contain critical information that does not change often and cannot be overwritten. Some examples of information in noncircular reuse records include datafiles, online redo logs, and redo threads.

RMAN Repository:
The RMAN repository is the collection of metadata about the target databases that RMAN uses for backup, recovery, and maintenance. It can be stored in a recovery catalog, an external Oracle database. The control file has finite space for records of backup activities, while a recovery catalog can store a much longer history. The added complexity of operating a recovery catalog database can be offset by the convenience of having the extended backup history available if you have to do a recovery that goes further back in time than the history in the control file.

There are also a few features of RMAN that only function when you use a recovery catalog. For example, RMAN stored scripts are stored in the recovery catalog, so commands related to them require the use of a recovery catalog. Other RMAN commands are specifically related to managing the recovery catalog and so are not available (and not needed) if RMAN is not connected to a recovery catalog.

The recovery catalog's version of the RMAN repository is maintained solely by RMAN. The target instance never accesses it directly. RMAN propagates information about the database structure, archived redo logs, backup sets, and datafile copies into the recovery catalog from the target database's control file after any operation that updates the repository, and also before certain operations.

The recovery catalog database is no different from a normal database. As it holds the backup/recovery information for all target databases, it is important to keep a backup of this database as well. It is recommended that this backup be taken using cold/hot backup methods.

Now you know the required components and how RMAN stores the backup/recovery metadata. It is time to set up an environment to run RMAN backups.

Create Recovery Catalog Database and Register the Target Database:
http://oracledocaccess.blogspot.com/2009/06/configure-recovery-catalog.html

Wednesday, June 3, 2009

Manage 10G DB Control

Enterprise Manager Grid Control - Version: 10.1 to 10.2
Information in this document applies to any platform for DB Console:

This article provides following instructions on how to manage the db console application used in a single instance 10G database:
1: Create repository and configuration files for the DB Control application
2: Drop repository and configuration files for the DB Control application
3: Recreate the repository and configuration files for the DB Control application

Create DB Control Objects:
Option1: Create the Db Control Configuration Files
Option2: Create the DB Control Repository and Configuration Files

Option1:
For DB Control 10.1.x, run the command: <ORACLE_HOME>/bin/emca -r
For DB Control 10.2.x, run the command: <ORACLE_HOME>/bin/emca -config dbcontrol db

Option2:
For DB Control 10.1.x, run the command: <ORACLE_HOME>/bin/emca
For DB Control 10.2.x, run the command: <ORACLE_HOME>/bin/emca -config dbcontrol db -repos create

Drop DB Control Objects:
Option 1: Delete DB Control Configuration Files using EMCA scripts
Option 2: Delete DB Control Configuration Files Manually
Option 3: Delete DB Control Repository Objects using RepManager
Option 4: Delete DB Control Repository Objects Manually
Option 5: Delete DB Control Configuration Files and Repository Objects using EMCA

Option 1:
For DB Control 10.1.x, run the command: <ORACLE_HOME>/bin/emca -x
For DB Control 10.2.x, run the command: <ORACLE_HOME>/bin/emca -deconfig dbcontrol db

Option 2:
Remove the following directories from your filesystem:
<ORACLE_HOME>/<hostname>_<SID>
<ORACLE_HOME>/oc4j/j2ee/OC4J_DBConsole_<hostname>_<SID>

NOTE: On Windows you also need to delete the DB Console service:
- run regedit
- navigate to HKEY_LOCAL_MACHINE/SYSTEM/CurrentControlSet/Services
- locate the OracleDBConsole<SID> entry and delete it

Option 3:
In both 10g R1 and R2 run: <ORACLE_HOME>/sysman/admin/emdrep/bin/RepManager -action drop

Option 4:
Logon SQLPLUS as user SYS or SYSTEM, and drop the sysman account and management objects:
SQL> drop user sysman cascade;
SQL> drop role MGMT_USER;
SQL> drop user MGMT_VIEW cascade;
SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
SQL> drop public synonym SETEMVIEWUSERCONTEXT;

Option 5:

For DB Control 10.1.x:
<ORACLE_HOME>/bin/emca -x
<ORACLE_HOME>/sysman/admin/emdrep/bin/RepManager -action drop

For DB Control 10.2.x:
<ORACLE_HOME>/bin/emca -deconfig dbcontrol db -repos drop

Recreate DB Control Objects:
Option 1. Recreate the DB Control Configuration Files only (leave Repository intact)
Option 2. Recreate the DB Control Configuration Files and Repository

Option 1:
For DB Control 10.2.x, run the command:
<ORACLE_HOME>/bin/emca -config dbcontrol db

Option 2:
For DB Control 10.2.x, run the command:
<ORACLE_HOME>/bin/emca -config dbcontrol db -repos recreate

FTP to/from Oracle Support

FTP to/from Oracle Support:

For many DBA issues we deal with Oracle Support, which requires quick communication with Oracle. Sometimes during this support window the Oracle Support team might ask you to upload some files, or download some patches, that are large in size, and the upload facility in the Service Request (SR) might be too slow for transferring them. Under such circumstances, the following can be used to exchange files with the Oracle Support team:

External Support FTP site:
Scope and Application:
Users of this FTP site should be aware of the following:
1: Customers and Employees should be familiar with an FTP commands or an FTP tool.
2: The following directory structures will be used by all groups in Support:

/ftp/anonymous/support/incoming/

/ftp/anonymous/support/outgoing

3: All files in these directories will be erased every 7 calendar days.
4: The outgoing directory is public domain and therefore nothing security sensitive should be placed here (no passwords etc).

To place a file for Oracle support to get:
Logon to ftp.oracle.com

Userid: anonymous

Password: your_complete_email id (ex: abc.def@xyz.com)

Change your working directory to where the customer will place the files.
cd support/incoming/

Make a new directory for the files by using your SR number
mkdir 12345678.9

change to that directory
cd 12345678.9

switch to bin mode and place the files in the SR directory
bin

put (filename)

exit out of the ftp system
quit

To get a file from Oracle Support:
Logon to ftp.oracle.com

Userid: anonymous

Password: your_complete_email id (ex: abc.def@xyz.com)


Change your working directory to outgoing
cd support/outgoing/

Change your working directory to the SR number
cd 12345678.9

switch to bin mode and get the files in the SR directory
bin

get (filename)

exit out of the ftp system
quit

Optimizer Hints

Oracle10g New Optimizer Hints:
With the 10g database release, there are many new optimizer hints available to control the optimization rules. Here are a few for your understanding. Follow the URL below to get an idea about Oracle hints.
http://www.fast-track.cc/10g_24.htm
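A few hedged examples of hint syntax (the table, alias, and index names are placeholders):

SQL> select /*+ first_rows(10) */ * from emp e where e.deptno = 10;
SQL> select /*+ index(e emp_name_ix) */ ename from emp e where e.ename = 'KING';
SQL> select /*+ leading(d e) use_nl(e) */ d.dname, e.ename from dept d, emp e where e.deptno = d.deptno;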

Tuesday, June 2, 2009

Manage a Job

Manage Oracle Jobs:

Create a job using DBMS_JOB Package:

DECLARE
X NUMBER;
BEGIN
SYS.DBMS_JOB.SUBMIT
( job => X
,what => 'DBMS_MVIEW.REFRESH(''TEST.DW_CUSTOMERLIFECYCLE'', ATOMIC_REFRESH => FALSE);'
,next_date => to_date('02/06/2009 02:00:00','dd/mm/yyyy hh24:mi:ss')
,interval => 'TRUNC(SYSDATE+1) + 2/24 '
,no_parse => TRUE
);
SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || to_char(x));
END;
/

commit;


Note: Assume in the above process a job is created whose job# is 745.

Alter the NEXT_DATE of a Job using DBMS_JOB package:
Syntax:
DBMS_JOB.NEXT_DATE (
job IN BINARY_INTEGER,
next_date IN DATE);
SQL>commit;

Example:
SQL>execute DBMS_JOB.NEXT_DATE(745,to_date('03/06/2009 02:00:00','dd/mm/yyyy hh24:mi:ss'));
SQL>Commit;


Alter the INTERVAL of a Job using DBMS_JOB package:
Syntax:
DBMS_JOB.INTERVAL (
job IN BINARY_INTEGER,
interval IN VARCHAR2);

SQL>commit;

Example:
SQL> execute DBMS_JOB.INTERVAL(745, 'TRUNC(SYSDATE+1) + 3/24');
SQL> Commit;


Alter the WHAT of a Job using DBMS_JOB package:
Syntax:
DBMS_JOB.WHAT (
job IN BINARY_INTEGER,
what IN VARCHAR2);
SQL>commit;

Example:
SQL>execute DBMS_JOB.WHAT(745, 'DBMS_MVIEW.REFRESH(''XXBLIS.DW_CUSTOMERLIFECYCLE'');');
SQL>Commit;


Stop Jobs:
It is not always easy to stop a running job. There are different ways to stop a job:

1: You can change the job_queue_process parameter to 0. This will stop all the jobs running in the database.
2: You can find the server process of the job and kill the server process which will stop the job.
Find the SID from dba_jobs_running and get the corresponding server process for that SID. Kill the server process with kill -9 and you are all set. A gentler approach using DBMS_JOB is sketched below.
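For the example job 745 created above, marking the job broken stops the job queue from rescheduling it, and remove deletes it entirely (a hedged sketch):

SQL> execute DBMS_JOB.BROKEN(745, TRUE);
SQL> execute DBMS_JOB.REMOVE(745);
SQL> commit;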

Running Jobs:
How do you know which jobs are running on the server? To find the list of jobs currently running, execute the below:

SQL> select job, log_user, schema_user, what, failures, broken from dba_jobs where this_date is not null;

Failures or Broken Jobs:
How do you know which jobs have failed or are in a broken state? Execute the below to find out.

SQL> select job, schema_user, failures, broken, what from dba_jobs where failures>0 or broken ='Y';

Fix Broken Jobs:
When a job is broken, how do you fix it? To do that, run the below:

SQL> execute DBMS_JOB.BROKEN(job,FALSE);
SQL> commit;

Create Materialized View

Create Materialized View:

CREATE MATERIALIZED VIEW TEST.DW_CUSTOMERLIFECYCLE
ON PREBUILT TABLE WITH REDUCED PRECISION
REFRESH COMPLETE
START WITH TO_DATE('02-Jun-2009 02:00:00','dd-mon-yyyy hh24:mi:ss')
NEXT TRUNC(SYSDATE+1) + 2/24
WITH PRIMARY KEY
AS
SELECT ORGANIZATIONUNITID UPOID,STATUS,EVENTNAME,EFFECTIVEFROM,EFFECTIVETO,
C.CREATEDBY,C.CREATIONDATE,C.LASTMODIFIEDBY,C.lastmodifiedDate
FROM customerlifecycle@repo C, DUAL@repo D;

This creates a materialized view, schedules a refresh job by default, and uses a prebuilt table; hence make sure the table is already created. The materialized view will perform a complete refresh as defined in the script.

Tuesday, May 26, 2009

SQLTXPLAIN

Oracle SQLTXPLAIN:
-----------------------------------
SQLTXPLAIN is a tool that generates a comprehensive report about a SQL statement which has been provided as input to the tool. The report includes the explain plan for a given SQL statement, cost-based optimizer statistics for the objects accessed by the statement, and details of related objects. The output HTML report produced can be used to analyze and diagnose SQL performance issues (SQL tuning).

Unlike the SQL statements "Explain Plan" and "Autotrace", SQLTXPLAIN offers complete environment and configuration reports relating to the SQL we examine. The report is categorized as follows.

Environment
SQL Identification
SQL Statement
Explain Plan
Observations
Object Dependency
Objects
Tables
Indexes
Table Columns
Index Columns
Constraints
Column Histograms
Metadata
Tablespaces
Initialization Parameters
Optimizer Trace (EVENT 10053)
Tool Configuration Parameters

During the installation of SQLTXPLAIN, you will be prompted for values for the following parameters.
Specify SQLTXPLAIN password:

Enter value for host_string: (For instance, if your db name is prod, then input @prod)
Enter value for application_schema: (the schema under which you examine sql statement)
Enter value for default_tablespace:
Enter value for temporary_tablespace:

SQLT inputs one SQL DML statement provided as one of these methods:
1. XPLAIN: As a stand-alone SQL in a flat text file.
2. XTRACT: As a memory-resident SQL, identified by its hash_value or sql_id.
3. XECUTE: As a stand-alone script that contains one SQL, together with the declaration and values of its bind variables.
4. XTRXEC: As a combination of XTRACT and XECUTE, this method gets a SQL from memory and executes both XTRACT and XECUTE.

SQLT provides the following benefits:
1. Consistent snapshot of most commonly used pieces of information associated to a SQL analysis and tuning effort, like CBO statistics and performance dynamic views. For most of these objects, inconsistent snapshots of different objects would render the analysis inaccurate or inconclusive.
2. Reduce the number of trips (thus overall time) between people involved in the SQL analysis and tuning. This is specially relevant in systems where the person doing the analysis is external to the organization or does not have full access to the system experiencing poor performance.
3. Preserves CBO statistics and initialization parameters in effect at the time of the analysis, making it easier to reverse the environment to a known state. Further more, the associated CBO statistics can easily be migrated to a similar system in order to try to replicate the same plan good or bad, thus the query performance.
4. Since the tool is installed into its own schema and makes use of temporary objects for most of its data structures, it is lite and moderate intrusive. The schema owner SQLTXPLAIN only gets granted a small subset of roles and attributes in order to generate SQL analysis results. This SQLTXPLAIN schema owner does not read application data itself.

The Metalink note below will give you the full details regarding the SQLTXPLAIN tool and its usage. The software is also available in the same note.

215187.1: SQLT (SQLTXPLAIN) - Enhanced Explain Plan and related diagnostic information for one SQL

For 9i, 10g and 11g instances, download the tool from sqlt.zip. Read instructions.txt included in the zip file.
For output samples (9i, 10g and 11g), download sqlt_sample.zip
For 8i instances, download and use old tool from sqlt_old.zip

Oracle Statspack

Oracle STATSPACK installation steps:
Refer the below link for statspack installation and scheduling the same for automatic statspack data collection.
http://www.dba-oracle.com/t_statspack_installation_steps.htm

Overview of STATSPACK Scripts:
The STATSPACK scripts have completely changed. All of the STATSPACK scripts are located in the $ORACLE_HOME/rdbms/admin directory. Here is the link to understand the importance of the scripts.
http://www.dba-oracle.com/t_statspack_install_scripts.htm

Oracle Statspack Maintenance:
The Statspack data maintenance scripts must be run as the PERFSTAT user.

SPPURGE.SQL: Purges a limited range of Snapshot IDs for a given database instance.
SPTRUNC.SQL: Truncates all performance data in Statspack tables
SPUEXP.PAR: An export parameter file supplied for exporting the whole PERFSTAT user.
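A hedged example of routine Statspack usage as the PERFSTAT user (the report and purge scripts prompt for snapshot ID ranges):

SQL> execute statspack.snap;
SQL> @?/rdbms/admin/spreport.sql
SQL> @?/rdbms/admin/sppurge.sql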

This is how you can see how much space the Statspack owner is using in its tablespace.

SQL> select OWNER, sum(bytes)/1024/1024 "Size in MB" from dba_segments where TABLESPACE_NAME='PERFSTAT' group by OWNER order by sum(bytes)/1024/1024 desc;
OWNER Size in MB
------------------------------ ----------
PERFSTAT 21238.6094
1 row selected.

The following query shows how long it has been since any Statspack maintenance (purging) was done, and gives an idea of the range of Statspack data available:

SQL> select to_char(min(SNAP_TIME), 'YYYYMMDD HH24:MI:SS') "Oldest Snap",
            to_char(max(SNAP_TIME), 'YYYYMMDD HH24:MI:SS') "Latest Snap",
            round(max(SNAP_TIME)-min(SNAP_TIME)) "sppurge run before x days"
       from stats$snapshot;

Oldest Snap       Latest Snap       sppurge run before x days
----------------- ----------------- -------------------------
20071025 15:00:01 20090526 10:15:03                       579

Based on this information you can decide whether to keep the data or do some maintenance, such as purging old Statspack data, to keep the PERFSTAT tablespace from growing.
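A minimal sketch of such a purge, run as the PERFSTAT user and assuming a 90-day retention window (the cut-off value is only an example):

SQL> connect perfstat
-- find the snapshot ID range that falls outside the retention window
SQL> select min(snap_id) "Lo Snap Id", max(snap_id) "Hi Snap Id"
       from stats$snapshot
      where snap_time < sysdate - 90;
-- sppurge.sql prompts for the low and high snapshot IDs returned above
SQL> @?/rdbms/admin/sppurge.sql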

Friday, May 22, 2009

Logical vs Physical Standby database

Logical vs Physical Standby database:
------------------------------------------------
The primary functional difference between logical and physical standby database setups is that:

Physical standby database: This is always an exact structural duplicate of the master database.
Logical standby database: Permits you to add additional objects (tables, indexes, etc) to the database.
Logical standby database: Logical standby is based on newer technologies (logical standby is new in Oracle 9.2) and tends to be regarded as more temperamental than physical standby.

Physical standby database: This is physically identical to the primary database on a block-for-block basis. It's maintained in managed recovery mode to remain current and can be set to read only. Archive logs are copied and applied.
Logical standby database: This is logically identical to the primary database. It is updated using SQL statements.

Physical Standby database: Kept in sync by copying and applying the archived log files.
Logical Standby database: Builds DML statements from the generated archive logs and applies them to the secondary database.

Physical Standby database: Archived redo logs are FTP'ed directly to the standby database, which is always running in "recover" mode. Upon arrival, the archived redo logs are applied directly to the standby database.
Logical standby database: This uses LogMiner techniques to transform the archived redo logs into native DML statements (insert, update, delete). This DML is transported and applied to the standby database.
Logical standby database: This can have additional materialized views and indexes added for faster performance.

Learn more about how to create physical or logical standby databases for your enterprise:
http://www.orafaq.com/node/957

sed Unix Command

Sed:
-------------
How to use sed, a special editor for modifying files automatically. If you want to write a program to make changes in a file, sed is the tool to use.

There are a few programs that are the real workhorses in the Unix toolbox. These programs are simple to use for simple applications, yet have a rich set of commands for performing complex actions. Don't let the complex potential of a program keep you from making use of the simpler aspects. This chapter, like all of the rest, starts with the simple concepts and introduces the advanced topics later on. A note on comments: when I first wrote this, most versions of sed did not allow you to place comments inside the script. Lines starting with the '#' character are comments. Newer versions of sed may support comments at the end of the line as well.

Find more about sed at:
http://www.grymoire.com/Unix/Sed.html

Find Unix Command

FIND:
--------------
This topic covers the various options of the UNIX command "find". The table of contents is as follows:
Introduction to find
Problems with other methods
The Simplest Example
Using find with other commands
Using xargs with find
Looking for files with particular names
Looking for files by type
Looking for files by sizes
Searching for old files
Searching for files by permission
Owners and groups
Find and commands
Find and Cpio
Using Find to Execute Commands
File comparisons
Expressions
Keeping find from going too far
Fast Find

The link below covers all of the topics listed above:

http://www.grymoire.com/Unix/Find.html#uh-6

Materialized View Refresh

Oracle Database Materialized View Refresh:
---------------------------------------------------------------
Refresh Option:
--------------------------
COMPLETE (C): Refreshes by recalculating the materialized view's defining query.
FAST (F): Refreshes by incrementally applying changes to the detail tables.
FORCE (?): Uses the default refresh method: it first tries a fast refresh, and if that is not possible, it does a complete refresh.
ALWAYS (A): Unconditionally does a complete refresh.

Refresh operations:
----------------------------
DBMS_MVIEW.REFRESH
Refresh one or more materialized views.

DBMS_MVIEW.REFRESH_ALL_MVIEWS
Refresh all materialized views.

DBMS_MVIEW.REFRESH_DEPENDENT
Refresh all table-based materialized views that depend on a specified detail table or list of detail tables.
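As a minimal sketch, refreshing everything that depends on one detail table might look like this; SCOTT.EMP is only an illustrative table name:

set serveroutput on

declare
  l_failures number;  -- OUT parameter: count of materialized views that failed to refresh
begin
  dbms_mview.refresh_dependent(
    number_of_failures => l_failures,
    list               => 'SCOTT.EMP',   -- detail table (illustrative)
    method             => 'F');          -- attempt a fast refresh
  dbms_output.put_line('Failed refreshes: ' || l_failures);
end;
/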

Required Initialization Parameters for Warehouse Refresh:
---------------------------------------------------------
JOB_QUEUE_PROCESSES
The number of background processes. Determines how many materialized views can be refreshed concurrently.

JOB_QUEUE_INTERVAL
In seconds, the interval between which the job queue scheduler checks to see if a new job has been submitted to the job queue.

UTL_FILE_DIR
Determines the directory where the refresh log is written. If unspecified, no refresh log will be created.

Note: These packages also create a log which, by default, is called refresh.log and is useful in helping to diagnose problems during the refresh process. This log file can be renamed by calling the procedure DBMS_OLAP.SET_LOGFILE_NAME('log filename').
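A rough sketch of setting the two commonly adjusted parameters (the values and directory are placeholders; UTL_FILE_DIR is static, so it needs an spfile change and a restart):

-- dynamic: controls how many materialized views can be refreshed concurrently
SQL> alter system set job_queue_processes = 4;
-- static: directory for the refresh log; takes effect after the next restart
SQL> alter system set utl_file_dir = '/u01/app/oracle/admin/mv_refresh' scope = spfile;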

DBMS_SNAPSHOT
---------------------
DBMS_SNAPSHOT enables you to refresh snapshots that are not part of the same refresh group and purge logs.

Note: DBMS_MVIEW is a synonym for DBMS_SNAPSHOT; DBMS_MVIEW is the name Oracle recommends going forward.

Examples:
Complete Refresh:
SQL> execute DBMS_SNAPSHOT.REFRESH( 'MV_EMP','c');
PL/SQL procedure successfully completed.

Fast Refresh
SQL> execute DBMS_SNAPSHOT.REFRESH( 'MV_EMP','f');
PL/SQL procedure successfully completed.

Force Refresh
SQL> execute DBMS_SNAPSHOT.REFRESH( 'MV_EMP','?');
PL/SQL procedure successfully completed.

Also provided with DBMS_SNAPSHOT is the REFRESH_ALL procedure. This procedure refreshes all materialized views that were defined to use automatic refresh.
SQL> execute DBMS_SNAPSHOT.REFRESH_ALL;
PL/SQL procedure successfully completed.

Thursday, May 21, 2009

Data Guard Protection Modes

Oracle 10G Data Guard Protection Modes:
---------------------------------------------------------
This chapter contains the following sections:
  • Data Guard Protection Modes
  • Setting the Data Protection Mode of a Primary Database

Data Guard Protection Modes:
This section describes the Data Guard protection modes.
In these descriptions, a synchronized standby database is meant to be one that meets the minimum requirements of the configured data protection mode and that does not have a redo gap.

Maximum Availability:
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database. Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one synchronized standby database. If the primary database cannot write its redo stream to at least one synchronized standby database, it operates as if it were in maximum performance mode to preserve primary database availability until it is again able to write its redo stream to a synchronized standby database.

This mode ensures that no data loss will occur if the primary database fails, but only if a second fault does not prevent a complete set of redo data from being sent from the primary database to at least one standby database.

Maximum Performance:
This protection mode provides the highest level of data protection that is possible without affecting the performance of a primary database. This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log. Redo data is also written to one or more standby databases, but this is done asynchronously with respect to transaction commitment, so primary database performance is unaffected by delays in writing redo data to the standby database(s).

This protection mode offers slightly less data protection than maximum availability mode and has minimal impact on primary database performance.

This is the default protection mode.

Maximum Protection
This protection mode ensures that zero data loss occurs if a primary database fails. To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one synchronized standby database before the transaction commits. To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions, if it cannot write its redo stream to at least one synchronized standby database.

Because this data protection mode prioritizes data protection over primary database availability, Oracle recommends that a minimum of two standby databases be used to protect a primary database that runs in maximum protection mode to prevent a single standby database failure from causing the primary database to shut down.

Setting the Data Protection Mode of a Primary Database
Perform the following steps to change the data protection mode of a primary database:
Step 1 Select a data protection mode that meets your availability, performance and data protection requirements.

Step 2 Verify that redo transport is configured to at least one standby database
The value of the LOG_ARCHIVE_DEST_n database initialization parameter that corresponds to the standby database must include the redo transport attributes listed below for the data protection mode that you are moving to.

If the primary database has more than one standby database, only one of those standby databases must use the redo transport settings.

The standby database must also have a standby redo log.

Required Redo Transport Attributes for Data Protection Modes

Maximum Availability:

  • AFFIRM
  • SYNC
  • DB_UNIQUE_NAME

Maximum Performance:

  • NOAFFIRM
  • ASYNC
  • DB_UNIQUE_NAME

Maximum Protection:

  • AFFIRM
  • SYNC
  • DB_UNIQUE_NAME
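
For example, a redo transport destination carrying the Maximum Availability attributes might be set on the primary as sketched below; the boston service name and BOSTON unique name are only illustrative and match the standby used later in this section:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
     'SERVICE=boston SYNC AFFIRM DB_UNIQUE_NAME=BOSTON
      VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' SCOPE=BOTH;

For Maximum Performance, SYNC AFFIRM would be replaced with ASYNC NOAFFIRM.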


Step 3 Verify that the DB_UNIQUE_NAME database initialization parameter has been set to a unique name on the primary and standby database.

For example, if the DB_UNIQUE_NAME parameter has not been defined on either database, the following SQL statements might be used to assign a unique name to each database.

Execute this SQL statement on the primary database:

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='CHICAGO' SCOPE=SPFILE;

Execute this SQL statement on the standby database:

SQL> ALTER SYSTEM SET DB_UNIQUE_NAME='BOSTON' SCOPE=SPFILE;

Step 4 Verify that the LOG_ARCHIVE_CONFIG database initialization parameter has been defined on the primary and standby database and that its value includes a DG_CONFIG list that includes the DB_UNIQUE_NAME of the primary and standby database.

For example, if the LOG_ARCHIVE_CONFIG parameter has not been defined on either database, the following SQL statement could be executed on each database to configure the LOG_ARCHIVE_CONFIG parameter:

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(CHICAGO,BOSTON)';

Step 5 Shut down the primary database and restart it in mounted mode if the protection mode is being set to Maximum Protection or being changed from Maximum Performance to Maximum Availability. If the primary database is an Oracle Real Application Clusters database, shut down all of the instances and then start and mount a single instance.
For example:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

Step 6 Set the data protection mode.
Execute the following SQL statement on the primary database:

SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE {AVAILABILITY | PERFORMANCE | PROTECTION};

If the primary database is an Oracle Real Application Clusters database, any instances stopped in Step 5 can now be restarted.

Step 7 Open the primary database.
If the database was restarted in Step 5, open the database:
SQL> ALTER DATABASE OPEN;

Step 8 Confirm that the primary database is operating in the new protection mode.
Perform the following query on the primary database to confirm that it is operating in the new protection mode:
SQL> SELECT PROTECTION_MODE FROM V$DATABASE;

Thursday, May 14, 2009

Automatic Storage Management

ASM using ASMLib and Raw Devices:
--------------------------------------------
This is a beautiful document which covers almost the full setup for an ASM (Automatic Storage Management) environment. For your reference it covers the following:



  • Introduction
  • Partition the Disks
  • ASMLib Installation
  • Raw Device Setup
  • ASM Creation
  • Database Creation
  • Switching from Raw Devices to ASMLib
  • Switching from ASMLib to Raw Devices
  • Performance Comparison

http://www.oracle-base.com/articles/10g/ASMUsingASMLibAndRawDevices.php#raw_device_setup

Automatic Storage Management (ASM) in Oracle Database 10g:
-------------------------------------------------------------------------------
This document covers a small part of Oracle Automatic Storage Management (ASM): commands to manage an ASM instance and database. You will also get a quick idea of how to migrate a single-instance Oracle database to ASM using RMAN.

http://www.oracle-base.com/articles/10g/AutomaticStorageManagement10g.php

Commanding ASM (Automatic Storage Management)
----------------------------------------------------------
Access, transfer, and administer ASM files without SQL commands.

Oracle Database 10g Release 1 introduced Automatic Storage Management (ASM), a new framework for managing Oracle database files. ASM provided a foundation for highly efficient storage management with kernelized asynchronous I/O, direct I/O, redundancy, striping, and an easy way to manage storage. ASM includes volume management functionality similar to that of a generic logical volume manager (LVM).

Managing ASM through SQL interfaces in Oracle Database 10g Release 1 posed a challenge for system administrators who were not very familiar with SQL and preferred a more conventional command-line interface. In Oracle Database 10g Release 2, you have an option to manage the ASM files by using ASMCMD—a powerful and easy-to-use command-line tool.

In Oracle Database 10g Release 1, ASM disk groups are not visible outside the database for regular file system administration tasks such as copying and creating directories. In Oracle Database 10g Release 2, however, you can transfer the files from ASM to locations outside of the disk groups via FTP and through a Web browser using HTTP.

This article shows you how to use these enhancements in ASM to accomplish everyday tasks.
http://www.oracle.com/technology/oramag/oracle/06-mar/o26asm.html

Set asmcmd prompt display:
You can invoke asmcmd with the -p option to display the current path in the prompt, very similar to a UNIX/Linux shell prompt such as the one set up below:
PS1="
`hostname`*\${ORACLE_SID}-\${PWD}
>"
export PS1

fred:/u01/app/oracle/admin>

asmcmd command line history:
The asmcmd utility does not provide a command history with the up-arrow key. With rlwrap installed, this can be fixed by adding the following entry to the ~oracle/.bashrc file:

alias asmcmd='rlwrap asmcmd'

Automatic Storage Manager (ASM) Enhancements in Oracle Database 11g Release 1:
----------------------------------------------------------------------------
http://www.oracle-base.com/articles/11g/AsmEnhancements_11gR1.php

SQL Help for ASM (Automatic Storage Management):
--------------------------------------------------------------
Create Diskgroup:
Use the CREATE DISKGROUP clause to create a collection of disks. Oracle Database manages a disk group as a logical unit and evenly spreads each file across the disks to balance I/O. Oracle Database also automatically distributes database files across all available disks in disk groups and rebalances storage automatically whenever the storage configuration changes.

This statement creates a disk group, assigns one or more disks to the disk group, and mounts the disk group for the first time. If you want Automatic Storage Management to mount the disk group automatically in subsequent instances, you must add the disk group name to the value of the ASM_DISKGROUPS initialization parameter in the initialization parameter file. If you use an SPFILE, then the disk group is added to the initialization parameter automatically.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_5008.htm#i2153287
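A minimal sketch, run from the ASM instance and using placeholder raw device paths:

SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
       FAILGROUP fg1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
       FAILGROUP fg2 DISK '/dev/raw/raw3', '/dev/raw/raw4';
SQL> -- verify the new disk group is mounted
SQL> SELECT name, state, type, total_mb, free_mb FROM v$asm_diskgroup;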

Alter Diskgroup:
The ALTER DISKGROUP statement lets you perform a number of operations on a disk group or on the disks in a disk group.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_1006.htm#i2166968
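For example, adding a disk to the disk group (the device path and disk name are placeholders):

SQL> ALTER DISKGROUP data ADD DISK '/dev/raw/raw5' NAME data_0005;
SQL> -- the automatic rebalance can be monitored while it runs
SQL> SELECT group_number, operation, state, est_minutes FROM v$asm_operation;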

Drop Diskgroup:
The DROP DISKGROUP statement lets you drop an Automatic Storage Management disk group along with all the files in the disk group. Automatic Storage Management first ensures that no files in the disk group are open. It then drops the disk group and all its member disks and clears the disk header.
http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_8013.htm#i2152629
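A minimal example, assuming the disk group is mounted only by the local ASM instance:

SQL> -- INCLUDING CONTENTS is required when the disk group still holds files
SQL> DROP DISKGROUP data INCLUDING CONTENTS;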

Oracle Password

Handle Oracle Password:
-----------------------------------
Do you know what to do when, as a DBA, you have forgotten an Oracle database account's password?

There is a very simple way to handle this. As a DBA you might be managing thousands of Oracle database accounts and maintaining their passwords. Sometimes you need an application account's password to work on an issue for the application users, and it is very hard to remember every application user's password. You also may not want to change the password, as that could affect the application users, and you do not want to spend time emailing application users to collect it. Here is the easiest way to handle such Oracle passwords:

SQL> Select username,password from DBA_USERS where username='user';

Output will be:
The username and the encrypted password. Save this information.

SQL> Alter user "user" identified by xyz123;

Output will be:
The user's password is changed to xyz123.

Now connect as the user with password xyz123 and complete your maintenance activity.

After your maintenance activity is over, change the password back to the old value and you are all set. No issues for anybody.

SQL> alter user "user" identified by values 'encrypted password which you collected earlier';
SQL> Select username,password from DBA_USERS where username='user';

You are all set, and the password matches the old information.

Let me give you an example for your reference:

SQL> select username, password from dba_users where username='SQLTXPLAIN';
USERNAME PASSWORD
------------------------------ ------------------------------
SQLTXPLAIN 04F1603C1F2033F3
SQL> conn SQLTXPLAIN/webex123
Connected.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options

sqlplus '/as sysdba'
SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 13:12:07 2009
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options
SQL> alter user SQLTXPLAIN identified by test123;
User altered.
SQL> select username, password from dba_users where username='SQLTXPLAIN';
USERNAME PASSWORD
------------------------------ ------------------------------
SQLTXPLAIN 6CFD89ED56C7C349
SQL> conn SQLTXPLAIN/test123
Connected.
SQL> exit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options

sqlplus '/as sysdba'
SQL*Plus: Release 10.2.0.4.0 - Production on Thu May 14 13:12:43 2009
Copyright (c) 1982, 2007, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining and Real Application Testing options

SQL> alter user SQLTXPLAIN identified by values '04F1603C1F2033F3';
User altered.
SQL> conn SQLTXPLAIN/webex123
Connected.
SQL> show user
USER is "SQLTXPLAIN"
SQL> select username, password from dba_users where username='SQLTXPLAIN';
USERNAME PASSWORD
------------------------------ ------------------------------
SQLTXPLAIN 04F1603C1F2033F3
SQL>

Monday, April 27, 2009

LISTENER

Listener Commands:
The listener's involvement is summarized as follows:
The listener catches the request, spawns or requests a database process/thread, redirects or passes the connection to that process/thread (usually on a different port), and then gets out of the way.

change_password:
set an encrypted listener password
c:\oracle\product\ora102\bin> lsnrctl
LSNRCTL> change_password
Old password:
New password:
Reenter new password:
LSNRCTL> save_config

Quit the listener application:
c:\oracle\product\ora10\bin> lsnrctl
LSNRCTL> exit

View help information:
c:\oracle\product\ora10\bin> lsnrctl help
or
c:\oracle\product\ora10\bin> lsnrctl
LSNRCTL> help

kill on *NIX:
ps -ef | grep tnslsnr | awk '{ print $2; }' | xargs kill -9

Reload:
Causes the listener to reread the listener.ora file. This enables the addition or modification of statically configured services without stopping the listener.
In addition, the database services, instances, service handlers, and listening endpoints that were dynamically registered with the listener are unregistered and subsequently registered again.
LSNRCTL> reload

save_config:
Use the SAVE_CONFIG command to compare the current configuration state of the listener, including trace level, trace file, trace directory, and logging, to the listener.ora file. Any differences are stored in listener.ora, preserving formatting, comments, and case as much as possible. Prior to modification of the listener.ora file, a backup of the file, called listener.bak, is created.
See change_password above

View the configured listener service:
LSNRCTL> services

set Parameter:
current_listener: set current_listener
LSNRCTL> set current_listener uwlistener

displaymode: set displaymode
LSNRCTL> set displaymode VERBOSE

inbound_connect_timeout: set inbound_connect_timeout
LSNRCTL> set inbound_connect_timeout 0

log_directory: set log_directory
LSNRCTL> set log_directory c:\temp

log_file: set log_file
LSNRCTL> set log_file uwlistener.log

log_status: set log_status {ON|OFF}
LSNRCTL> set log_status ON

password: set password
LSNRCTL> set password
Password:

rawmode:set rawmode
LSNRCTL> set rawmode OFF

save_config_on_stop: set save_config_on_stop
LSNRCTL> set save_config_on_stop ON

startup_waittime: set startup_waittime
LSNRCTL> set startup_waittime 0

trc_directory: set trc_directory
LSNRCTL> set trc_directory c:\temp

trc_file: set trc_file
LSNRCTL> set trc_file uwlistener.trc

trc_level: set trc_level
Value Description
0 - 1 off
2 - 3 err
4 - 5 user
6 - 14 admin
15 dev
16 - 99 support
LSNRCTL> set trc_level 0

View the current listener services:
c:\oracle\product\ora10\bin> lsnrctl services

show Parameter
current_listener: show current_listener
LSNRCTL> show current_listener

displaymode: show displaymode
LSNRCTL> show displaymode

inbound_connect_timeout:
LSNRCTL> show inbound_connect_timeout

log_directory: show log_directory
LSNRCTL> show log_directory

log_file: show log_file
LSNRCTL> show log_file

log_status: show log_status
ON if listener activity is logged
LSNRCTL> show log_status

rawmode: show rawmode
LSNRCTL> show rawmode

save_config_on_stop: show save_config_on_stop
LSNRCTL> show save_config_on_stop

snmp_visible: show snmp_visible
LSNRCTL> show snmp_visible

startup_waittime: show startup_waittime
LSNRCTL> show startup_waittime
Default is 0

trc_directory: show trc_directory
LSNRCTL> show trc_directory

trc_file: show trc_file
Default is listener.trc
LSNRCTL> show trc_file

trc_level: show trc_level
LSNRCTL> show trc_level

start the listener:
c:\oracle\product\ora10\bin> lsnrctl start

view the current listener status:
c:\oracle\product\ora10\bin> lsnrctl status

stop the listener:
c:\oracle\product\ora10\bin> lsnrctl stop

Show the currently installed listener version:
c:\oracle\product\ora10\bin> lsnrctl version

Listener Parameters:
Prevent remote modification of listener.ora using lsnrctl commands
ADMIN_RESTRICTIONS_[listener_name] = ON
ADMIN_RESTRICTIONS_ORABASE = ON

Listener.ora Samples:

Typical Listener.ora file for standalone Oracle.
# listener.ora Network Configuration File:
# c:\oracle\product\ora10\network\admin\listener.ora
# Generated by Oracle configuration tools.

SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = C:\oracle\product\ora10)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = orabase)
(ORACLE_HOME = C:\oracle\product\ora10)
(SID_NAME = orabase)
)
)

LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = perrito)(PORT = 1521))
)
)
)

Queue Size:
Add the QUEUESIZE parameter to the address description in your listener.ora file. By default Oracle is able to queue only 16 simultaneous connection requests.

The result should look like this:
(ADDRESS= (PROTOCOL = TCP)(HOST = your-hostname)(PORT = 1521) (QUEUESIZE = 30))

Trace Levels:
TRACE_LEVEL_[listener_name] = value
Value equivalents:
OFF (equivalent to 0) provides no tracing
USER (equivalent to 4) identify user error conditions
ADMIN (equivalent to 6) identify installation-specific problems
SUPPORT (equivalent to 16) provides trace information for support

Trace file results are written to $ORACLE_HOME/network/trace.
TRACE_LEVEL_LISTENER = USER

Force the database to register, or reregister, with the listener:
ALTER SYSTEM REGISTER;
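
If the database has to register with a non-default listener, for example one on a non-standard port, LOCAL_LISTENER can be pointed at it first; the host and port below are placeholders:

SQL> ALTER SYSTEM SET LOCAL_LISTENER='(ADDRESS=(PROTOCOL=TCP)(HOST=perrito)(PORT=1522))';
SQL> ALTER SYSTEM REGISTER;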