Wednesday, April 27, 2011

What is FAST_START_MTTR_TARGET?


FAST_START_MTTR_TARGET enables you to specify the number of seconds the database takes to perform crash recovery of a single instance.
Note that if LOG_CHECKPOINT_INTERVAL is specified, it overrides FAST_START_MTTR_TARGET, so leave LOG_CHECKPOINT_INTERVAL unset (or zero) when you want MTTR-based checkpointing.
Let’s see it with an example, set the FAST_START_MTTR_TARGET to 1:

SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET=1;

Then, execute the following query immediately after opening the database:

SQL> SELECT TARGET_MTTR, ESTIMATED_MTTR 
FROM V$INSTANCE_RECOVERY;


Oracle responds with the following:

TARGET_MTTR ESTIMATED_MTTR 
20 15

The TARGET_MTTR value of 20 seconds is the minimum MTTR target that the system can achieve, that is, the lowest practical value for FAST_START_MTTR_TARGET. This minimum is calculated based on the average database startup time.

The ESTIMATED_MTTR field contains the estimated mean time to recovery based on the current state of the running database. Because the database has just opened, the system contains few dirty buffers, so not much cache recovery would be required if the instance failed at this moment. That is why ESTIMATED_MTTR can, for the moment, be lower than the minimum possible TARGET_MTTR.

ESTIMATED_MTTR can be affected in the short term by recent database activity. Assume that you query V$INSTANCE_RECOVERY immediately after a period of heavy update activity in the database. Oracle responds with the following:

TARGET_MTTR ESTIMATED_MTTR 
20 45

Now the effective MTTR target is still 20 seconds, and the estimated MTTR (if a crash happened at that moment) is 45 seconds. This is an acceptable result: it simply means that some checkpoint writes have not finished yet, so the buffer cache contains more dirty buffers than targeted.

Now wait for sixty seconds and reissue the query to V$INSTANCE_RECOVERY. Oracle responds with the following:

TARGET_MTTR ESTIMATED_MTTR 
20 35 

The estimated MTTR at this time has dropped to 35 seconds, because some of the dirty buffers have been written out during this period.

Enabling MTTR Advisor
To enable MTTR Advisor, set the two initialization parameters STATISTICS_LEVEL and FAST_START_MTTR_TARGET.

STATISTICS_LEVEL governs whether all advisors are enabled and is not specific to MTTR Advisor. Make sure that it is set to TYPICAL or ALL. Then, when FAST_START_MTTR_TARGET is set to a non-zero value, the MTTR Advisor is enabled.
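For example, you might enable the advisor like this (the 60-second target is just an illustrative value; pick one that matches your recovery SLA):

SQL> ALTER SYSTEM SET STATISTICS_LEVEL=TYPICAL;
SQL> ALTER SYSTEM SET FAST_START_MTTR_TARGET=60;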

Using MTTR Advisor

After enabling MTTR Advisor, run a typical database workload for a while. When MTTR Advisor is ON, the database simulates checkpoint queue behavior under the current value of FAST_START_MTTR_TARGET, and up to four other different MTTR settings within the range of valid FAST_START_MTTR_TARGET values. (The database will in this case determine the valid range for FAST_START_MTTR_TARGET itself before testing different values in the range.)
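The simulation results are exposed through the V$MTTR_TARGET_ADVICE view. A query along these lines (assuming the usual column names; the view carries several more ESTD_* columns) shows the estimated write cost for each candidate target:

SQL> SELECT MTTR_TARGET_FOR_ESTIMATE, ESTD_TOTAL_WRITES, ESTD_TOTAL_IOS
     FROM V$MTTR_TARGET_ADVICE;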


RMAN Incremental Backup:


An incremental backup backs up only those blocks that have changed since a specified previous backup.


Why should I use Incremental backup?

- To reduce the amount of time needed for daily backups
- To save network bandwidth when backing up over a network
- To get adequate backup performance when the aggregate tape bandwidth available for tape write I/Os is much less than the aggregate disk bandwidth for disk read I/Os
- To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries and their changes cannot be reproduced with media recovery. They do, however, change data blocks and so are captured by incremental backups.

How does it work?

Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.

Types of Incremental Backup:

  • A differential backup backs up all blocks changed after the most recent incremental backup at level 1 or level 0, whichever is more recent.

  • A cumulative backup backs up all blocks changed after the most recent incremental backup at level 0.

Differential Incremental Backups

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

  • Sunday
    An incremental level 0 backup backs up all blocks that have ever been in use in this database.
  • Monday - Saturday
    On each day from Monday through Saturday, a differential incremental level 1 backup backs up all blocks that have changed since the most recent incremental backup at level 1 or 0. So, the Monday backup copies blocks changed since the Sunday level 0 backup, the Tuesday backup copies blocks changed since the Monday level 1 backup, and so forth (see the sketch after this list).
  • The cycle is repeated for the next week.
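A minimal weekly sketch of this schedule in RMAN (the scheduling itself is left to cron, DBMS_SCHEDULER, or whatever you use):

RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # Sunday: the level 0 base
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # Monday-Saturday: differential level 1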

Cumulative Incremental Backup

RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;

  • Sunday
    An incremental level 0 backup backs up all blocks that have ever been in use in this database.
  • Monday - Saturday
    A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday through Saturday backs up all blocks changed since the Sunday backup.
  • The cycle is repeated for the next week.

Which type of backup should I use? Well, that depends on the factors below; have a look at them:

Recovery speed: Cumulative backups are faster to restore from than differential backups because fewer incremental backups need to be applied during recovery.



Backup speed: For daily backup speed, differential backups run faster than cumulative backups because they don't duplicate the work done by previous backups. However, differential backups take longer to apply during a recovery.


Disk space usage: Cumulative backups take more disk space because they duplicate the work done by previous backups.

Change tracking in incremental backup:

RMAN's change tracking feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.

Change tracking is disabled by default, because it introduces some minimal performance overhead on your database during normal operations.
One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST.
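If the Oracle-managed default location works for you, enabling it is a one-liner; otherwise you can name the file yourself with the USING FILE clause, as in the command that follows:

SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;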

ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
USING FILE 'c:\rman_change_track.f' REUSE;

You can query the STATUS column of V$BLOCK_CHANGE_TRACKING to determine whether change tracking is enabled.
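For example (the view also exposes FILENAME and BYTES):

SQL> SELECT STATUS, FILENAME FROM V$BLOCK_CHANGE_TRACKING;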

Tuesday, April 19, 2011

Different phases of query processing in Oracle:

Sometimes I wonder how a query is processed when we fire it against an Oracle database. Let's find out…


Actually, a query is different from other types of SQL statement because, if successful, it returns data as results. Whereas other statements simply return success or failure, a query can return one row or thousands of rows. There are four main stages in the processing of a query:


1. Parse

2. Bind

3. Execute

4. Fetch


During the parse stage, the SQL statement is passed from the user process to the server process, and a parsed representation of the SQL statement is loaded into a shared SQL area. During the parse, the server process performs the following functions:


- Searches for an existing copy of the SQL statement in the shared pool
- Validates the SQL statement by checking its syntax
- Performs data dictionary lookups to validate table and column definitions

2) Bind: After parsing, the Oracle server knows the meaning of the statement but may still not have enough information (values for variables) to execute it. The process of supplying these values is called binding.

The execute phase executes the statement using the best optimizer approach, and the fetch phase retrieves the rows back to the user.


There are two types of parse operations:

1. Hard parse:

A SQL statement is submitted for the first time, and no shareable match is found in the shared pool. Hard parses are the most resource-intensive and unscalable, because they perform all the operations involved in a parse.


2. Soft parse:


A SQL statement is submitted, and a match is found in the shared pool. The match can be the result of a previous execution by another user. The SQL statement is shared, which is good for performance. However, soft parses still require syntax and security checking, which consume system resources.


When bind variables are used properly, more soft parses are possible, thereby reducing hard parses and keeping parsed statements in the library cache for a longer period. This is very important for an Oracle application to be scalable.
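A quick SQL*Plus illustration (this assumes the classic SCOTT.EMP demo table is available): both executions below share the same cursor, because the changing value is passed as a bind rather than a literal:

SQL> VARIABLE deptno NUMBER
SQL> EXEC :deptno := 10
SQL> SELECT ename FROM scott.emp WHERE deptno = :deptno;
SQL> EXEC :deptno := 20
SQL> SELECT ename FROM scott.emp WHERE deptno = :deptno;  -- same SQL text, soft parse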
The optimizer is the part of the Oracle Database that creates the execution plan for a SQL statement. The determination of the execution plan is an important step in the processing of any SQL statement and can greatly affect execution time. The execution plan is a series of operations that are performed in sequence to execute the statement. The optimizer considers many factors related to the objects referenced and the conditions specified in the query. The information necessary to the optimizer includes:


- Statistics gathered for the system (I/O, CPU, and so on) as well as for schema objects (number of rows, indexes, and so on)
- Information in the dictionary
- WHERE clause qualifiers
- Hints supplied by the developer

When we use diagnostic tools such as Enterprise Manager, EXPLAIN PLAN, and SQL*Plus AUTOTRACE, we can see the execution plan that the optimizer chooses.
The Oracle query optimizer (Cost-Based Optimizer, CBO) determines the most efficient execution plan; this determination is the most important step in the processing of any SQL statement.
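For instance, either of the following displays the chosen plan (again assuming the SCOTT demo schema):

SQL> SET AUTOTRACE TRACEONLY EXPLAIN
SQL> SELECT ename FROM scott.emp WHERE empno = 7839;

or, using EXPLAIN PLAN with DBMS_XPLAN:

SQL> EXPLAIN PLAN FOR SELECT ename FROM scott.emp WHERE empno = 7839;
SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);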

The optimizer:


- Evaluates expressions and conditions
- Uses object and system statistics
- Decides how to access the data
- Decides how to join tables
- Decides which path is most efficient

A good understanding of SQL processing is essential for writing optimal SQL statements.
:-) 

TRUNCATE and DELETE.. a spotlight, please


An application developer asked me how to differentiate between DELETE and TRUNCATE, and I explained it to him as below.

TRUNCATE is a DDL command whereas DELETE is a DML command.

TRUNCATE is much faster than DELETE.

When you issue a DELETE, the before-images of the rows are first recorded as undo (in the undo/rollback segments) and then the rows are deleted; that's why, when you type ROLLBACK after deleting from a table, you get the data back (the system rebuilds it for you from the undo data). All of this takes time. But when you type TRUNCATE, Oracle simply deallocates the table's extents and resets its high-water mark without recording per-row undo, which is why TRUNCATE is faster. Once you truncate, you can't get the data back.

You can't roll back a TRUNCATE, but you can roll back a DELETE. TRUNCATE removes the records permanently.
With TRUNCATE, DML triggers don't fire. But with DML commands like DELETE, triggers do fire.

You can't use conditions (a WHERE clause) with TRUNCATE, but with DELETE you can.
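A quick demonstration (t_demo is just a scratch table invented for this test):

SQL> CREATE TABLE t_demo (id NUMBER);
SQL> INSERT INTO t_demo VALUES (1);
SQL> COMMIT;
SQL> DELETE FROM t_demo;
SQL> ROLLBACK;
SQL> SELECT COUNT(*) FROM t_demo;   -- returns 1: the DELETE was rolled back
SQL> TRUNCATE TABLE t_demo;
SQL> ROLLBACK;
SQL> SELECT COUNT(*) FROM t_demo;   -- returns 0: TRUNCATE is DDL and cannot be rolled back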

ENJOY :-) :-)

Monday, April 18, 2011

Checking that a tablespace is self-contained.. obstacles during transport



In my blog TRANSPORT TABLESPACE MADE SAFE AND EASY... I explained how to transport a tablespace between databases. But before transporting any tablespace, you must ensure that it is self-contained.

What the hell is self-contained?

It means the tablespace should not contain any objects that depend on objects outside the tablespace, nor objects whose dependent objects live in another tablespace.

For example: a table may exist in one tablespace while its associated index exists in another.

Whether a tablespace is self-contained can be checked using the DBMS_TTS.TRANSPORT_SET_CHECK procedure. The TS_LIST parameter accepts a comma-separated list of tablespace names.

Let's see how to tackle this:

Creating tablespace ttbs1:

SQL> create tablespace ttbs1 datafile 'D:\oracle\oradata\noida\DATAFILES\ttbs1.dbf' size 5m;

Creating tablespace ttbs2:

SQL> create tablespace ttbs2 datafile 'D:\oracle\oradata\noida\DATAFILES\ttbs2.dbf' size 5m;

Now create table invent in tablespace ttbs1:

SQL> create table scott.invent(id number) tablespace ttbs1;

Table created.

Here, adding the primary key constraint pkey_id will create its index in tablespace ttbs2:

SQL> alter table scott.invent add constraint pkey_id primary key(id) using index tablespace ttbs2;

Table altered.

Checking tablespace for self-contained:

SQL> execute DBMS_TTS.TRANSPORT_SET_CHECK ('ttbs1',TRUE);

PL/SQL procedure successfully completed.

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;

Here you see a violation, because the index scott.pkey_id exists in tablespace ttbs2 while its corresponding table exists in tablespace ttbs1.

VIOLATIONS

--------------------------------------------------------------------------------

 

ORA-39908: Index SCOTT.PKEY_ID in tablespace TTBS2 enforces primary constraints of table SCOTT.INVENT in tablespace TTBS1.

Now, the big question: how do we solve this issue? The solution is to move the index from tablespace ttbs2 to ttbs1. Let's see how it's done:

SQL> alter index scott.pkey_id rebuild tablespace ttbs1;

Index altered.

SQL> execute DBMS_TTS.TRANSPORT_SET_CHECK ('ttbs1',TRUE);

PL/SQL procedure successfully completed.

You can clearly see that tablespace ttbs1 is now self-contained, as the query below returns no violations:

SQL> SELECT * FROM TRANSPORT_SET_VIOLATIONS;


no rows selected

 

ENJOY :-)


Saturday, April 16, 2011

Where is my alert log file in 11g??



In Oracle Database 11g, the alert log has moved into the Automatic Diagnostic Repository (ADR), under a directory tree rooted at a folder named "diag".


Type the command below at the SQL*Plus prompt to get the desired location:


SQL> select value from v$parameter where name ='user_dump_dest';


VALUE
---------------------------------------------------------------------


d:\oracle\diag\rdbms\noida\noida\trace 


Above is the location where you can find your alert log file (named alert_<SID>.log),


where noida is the name of the instance.
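In 11g you can also ask the ADR directly; the V$DIAG_INFO view points at both the text and XML alert log locations:

SQL> select name, value from v$diag_info where name in ('Diag Trace', 'Diag Alert');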


enjoy:-)

Cloning through RMAN to a different server with a different directory structure and a different DBNAME...

Here we will see how to clone a database from one server to another.


Let's say we have two servers:


server 1


server 2


Now we are going to clone the database from server1 to server2.


Target database: Noida


First, take a full backup of the target database on server 1 and note the current SCN.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
1200598


Copy this full backup on server 2.


On server 2, perform the below activities:

Create the instance:

C:\>oradim -new -sid noida -intpwd noida -startmode m
Instance created.

C:\>set ORACLE_SID=noida

Create the password file:

C:\>orapwd file=D:\oracle\product\11.1.0\db_1\dbs\noidapwd12.ora entries=10

Enter password for SYS:

C:\>set ORACLE_SID=noida

C:\>sqlplus sys/noida as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Sat Apr 16 13:58:25 2011

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to an idle instance.

Modify the init parameter file and set the locations for udump, bdump, and the control files.
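A minimal sketch of the relevant entries in noidainit.ora on server 2 (the paths are illustrative and must match your server 2 layout; on 11g you may prefer diagnostic_dest over the old *_dump_dest parameters):

db_name=noida
control_files=('D:\oracle\oradata\noida\CONTROL01.CTL',
               'D:\oracle\oradata\noida\CONTROL02.CTL',
               'D:\oracle\oradata\noida\CONTROL03.CTL')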

Now, create the spfile from this parameter file..

SQL> create spfile from pfile='D:\bkp\orcl_bkp\noidainit.ora';

File created.

SQL> exit
Disconnected

C:\>rman target /

Recovery Manager: Release 11.1.0.6.0 - Production on Sat Apr 16 13:59:08 2011

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

connected to target database (not started)

RMAN> startup nomount

Oracle instance started

Total System Global Area     318046208 bytes

Fixed Size                     1332920 bytes
Variable Size                234883400 bytes
Database Buffers              75497472 bytes
Redo Buffers                   6332416 bytes


Restore the control file from the copied backup:


RMAN> restore controlfile from 'D:\bkp\noida_bkp\cf\C-1502483083-20110416-00';

Starting restore at 16-APR-11
using channel ORA_DISK_1

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:16
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL01.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL02.CTL
output file name=D:\ORACLE\ORADATA\NOIDA\CONTROL03.CTL
Finished restore at 16-APR-11

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1


Catalog the backup pieces copied from server 1 on server 2:

RMAN> catalog backuppiece 'D:\bkp\orcl_bkp\06M9U16S_1_1';

cataloged backup piece
backup piece handle=D:\BKP\ORCL_BKP\06M9U16S_1_1 RECID=8 STAMP=748621118

RMAN> catalog backuppiece 'D:\bkp\orcl_bkp\07M9U1BF_1_1';

cataloged backup piece
backup piece handle=D:\BKP\ORCL_BKP\07M9U1BF_1_1 RECID=9 STAMP=748621138

RMAN> catalog backuppiece 'D:\bkp\orcl_bkp\08M9U1K7_1_1';

cataloged backup piece
backup piece handle=D:\BKP\ORCL_BKP\08M9U1K7_1_1 RECID=10 STAMP=748621154


RMAN> run
2> {
3> set until scn 1200598;
4> set newname for datafile 1 to 'D:\oracle\oradata\noida\DATAFILES\system01.dbf';
5> set newname for datafile 2 to 'D:\oracle\oradata\noida\DATAFILES\sysaux01.dbf';
6> set newname for datafile 3 to 'D:\oracle\oradata\noida\DATAFILES\UNDOTBS01.DBF.dbf';
7> set newname for datafile 4 to 'D:\oracle\oradata\noida\DATAFILES\users01.dbf';
8> set newname for datafile 5 to 'D:\oracle\oradata\noida\DATAFILES\EXAMPLE01.DBF.dbf';
9> restore database;
10> switch datafile all;
11> }

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 16-APR-11
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to D:\ORACLE\ORADATA\NOIDA\DATAFILES\SYSTEM01.DBF
channel ORA_DISK_1: restoring datafile 00002 to D:\ORACLE\ORADATA\NOIDA\DATAFILES\SYSAUX01.DBF
channel ORA_DISK_1: restoring datafile 00003 to D:\ORACLE\ORADATA\NOIDA\DATAFILES\UNDOTBS01.DBF.DBF
channel ORA_DISK_1: restoring datafile 00004 to D:\ORACLE\ORADATA\NOIDA\DATAFILES\USERS01.DBF
channel ORA_DISK_1: restoring datafile 00005 to D:\ORACLE\ORADATA\NOIDA\DATAFILES\EXAMPLE01.DBF.DBF
channel ORA_DISK_1: reading from backup piece D:\BKP\ORCL_BKP\07M9U1BF_1_1
channel ORA_DISK_1: piece handle=D:\BKP\ORCL_BKP\07M9U1BF_1_1 tag=TAG20110416T134452
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:05:56
Finished restore at 16-APR-11

datafile 1 switched to datafile copy
input datafile copy RECID=7 STAMP=748621662 file name=D:\ORACLE\ORADATA\NOIDA\DATAFILES\SYSTEM01.DBF
datafile 2 switched to datafile copy
input datafile copy RECID=8 STAMP=748621663 file name=D:\ORACLE\ORADATA\NOIDA\DATAFILES\SYSAUX01.DBF
datafile 3 switched to datafile copy
input datafile copy RECID=9 STAMP=748621665 file name=D:\ORACLE\ORADATA\NOIDA\DATAFILES\UNDOTBS01.DBF.DBF
datafile 4 switched to datafile copy
input datafile copy RECID=10 STAMP=748621667 file name=D:\ORACLE\ORADATA\NOIDA\DATAFILES\USERS01.DBF
datafile 5 switched to datafile copy
input datafile copy RECID=11 STAMP=748621669 file name=D:\ORACLE\ORADATA\NOIDA\DATAFILES\EXAMPLE01.DBF.DBF

Note that the directory structure "D:\ORACLE\ORADATA\NOIDA\DATAFILES\" on server 2 is different from server 1's "D:\ORACLE\ORADATA\NOIDA\".

SQL> select * from v$log;

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS
---------- ---------- ---------- ---------- ---------- --- ----------------
FIRST_CHANGE# FIRST_TIM
------------- ---------
         1          1         13   52428800          1 NO  CURRENT
      1195448 16-APR-11

         3          1         12   52428800          1 YES ACTIVE
      1191704 16-APR-11

         2          1         11   52428800          1 YES INACTIVE
      1187896 16-APR-11



RMAN> recover database until logseq 13;

Starting recover at 16-APR-11
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:05

Finished recover at 16-APR-11

RMAN> exit


Recovery Manager complete.

SQL> alter database open resetlogs;

Database altered.


Now, cross-check some data:

SQL> select * from hr.test;

        ID
----------
        12
        23
        34
        45
        56
        67
        78
        89
        11
        22
        33

        ID
----------
        44
        55
        66
        77
        88
        99
       111
       222
       444
       333
      5555

22 rows selected.

SQL> select * from test;

        ID
----------
        12
        23
        34
        45
        56
        67
        78
        89

8 rows selected.

SQL> select name,open_mode from v$database;

NAME      OPEN_MODE
--------- ----------
NOIDA     READ WRITE


Changing DBNAME now!

You may change the DBNAME as explained in my blog "Changing database name and DBID…looks exciting!"

:-) all the best