Wednesday, August 31, 2011

How to be an expert


This is not an Oracle post, but it is very interesting and enthusiastic. Have a read!



The only thing standing between you-as-amateur and you-as-expert is dedication. 

Seriously. How many people think they've missed their opportunity to be a musician, or an expert golfer, or even a chess grand master because they didn't start when they were young? Or because they simply lacked natural talent? Those people are (mostly) wrong. According to some brain scientists, almost anyone can develop world-class (or at least expert-level) abilities in things for which they aren't physically impaired. Apparently God-given talent, natural "gifts", and genetic predispositions just aren't all they're cracked up to be. Or at least not in the way most of us always imagined. It turns out that rather than being naturally gifted at music or math or chess or whatever, a superior performer most likely has a gift for concentration, dedication, and a simple desire to keep getting better. In theory, again, anyone willing to do what's required to keep getting better WILL get better.

Maybe the "naaturally talented artist" was simply the one who practiced a hell of a lot more. Or rather, a hell of a lot more deliberately. Dr. K. Anders Ericsson, professor of psychology at Florida State University, has spent most of his 20+ year career on the study of genuises, prodigies, and superior performers. In the book The New Brain (it was on my coffee table) Richard Restak quotes Ericsson as concluding:

"For the superior performer the goal isn't just repeating the same thing again and again but achieving higher levels of control over every aspect of their performance. That's why they don't find practice boring. Each practice session they are working on doing something better than they did the last time."

So it's not just how long they practice, it's how they practice.

HTH:-)

What are Change Vectors?

Change vectors describe changes made to database blocks. Each change vector describes a single transactional change to a database block and is marked with a unique number called a System Change Number (SCN). Change vectors are also stored for changes to the rollback segment data blocks and to the transaction table of the rollback segments.
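
If you want to see change vectors for yourself, you can dump a redo log file to a trace file (the log file path here is illustrative):

SQL> ALTER SYSTEM DUMP LOGFILE '/u01/oradata/orcl/redo01.log';

The trace file written to user_dump_dest then lists each redo record along with its change vectors.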

HTH:-)

Understand Oracle Architecture in Simple Words

I have come across an article about "Oracle Architecture" which is a very good summary of how Oracle works.

Please click on the below link:


HTH:-)

Evolution of Oracle releases


In 1992, Oracle released its version 7 for UNIX.

In 1994, version 7.1 came out.

In 1997, Oracle 8 was released; its prime features included improved scalability and object-relational features.

In 1999, Oracle 8i came out equipped with internet technologies.

In 2000, Oracle 8i Release 2 was released, along with Oracle 9i Application Server (AS).

In 2001, Oracle 9i R1 was made public.
Its major features included Real Application Clusters (RAC) and analytical functions.

In 2002, Oracle 9i R2 got released. In 2004, Oracle 10g was released, the first ever database to support grid infrastructure.


In 2005, Oracle 10g R2 was released for the major platforms.
The current release is 11g.

HTH:-)

Myth - Oracle is just SQL and PL/SQL.


It is a common myth among people who don't know much about Oracle or the DBA role.

Actually, Oracle is not confined to the database, SQL, and PL/SQL. Oracle covers a whole suite of ERP applications, hundreds of middleware products, several programming languages including Java, operating systems like Oracle Linux and Solaris, and a great many products for vertical industries.

I hope this clears things up a bit.

What are bdump,udump,dpdump,adump and cdump in oracle?


Below is the description of all. Have a look!

BDUMP:

SQL> show parameter BACKGROUND_DUMP_DEST

C:\ORACLE\PRODUCT\10.2.0\ADMIN\<SID>\BDUMP

This shows the bdump directory. This is the directory where debugging trace files for the background processes (LGWR, DBWn, and so on) are written.

UDUMP:

SQL> show parameter USER_DUMP_DEST

C:\ORACLE\PRODUCT\10.2.0\ADMIN\<SID>\UDUMP

This specifies the pathname for a directory where debugging trace files will be written on behalf of a user process.

DPDUMP:

The dpdump directory has files created by the data pump export/import utility.

ADUMP:

The adump directory is the area where auditing trace files are stored.

CDUMP:

CDUMP specifies the core dump directory, where core dump files generated by Oracle processes are placed.
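
To see the first three of these locations in a single query (a minimal sketch; audit_file_dest, which backs adump, can be checked the same way):

SQL> SELECT name, value FROM v$parameter WHERE name LIKE '%dump_dest%';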

Enjoy:-)

Monday, August 29, 2011

Users are complaining that their application is hanging.


Every DBA encounters a situation like this at some point in his/her life.

Below are a few steps you can follow, in sequence, to tackle it.

If a user is complaining of a hang:

1) First try to connect to the database yourself and fire some query to check whether you are allowed to connect or not.
2) Check for locked objects by querying v$locked_object, dba_waiters, and dba_blockers (see the sketch below).
3) Check at the OS level which process is consuming most of the time.
4) If the problem relates to a single user process, analyze it further by checking the SQL statement that user is firing.
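
A minimal sketch for step 2, using v$session (the blocking_session column is available from 10g onwards):

SQL> SELECT sid, serial#, blocking_session, event, seconds_in_wait
     FROM v$session
     WHERE blocking_session IS NOT NULL;

Any row returned shows a blocked session together with the SID of the session that is blocking it.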

HTH:-)

Friday, August 26, 2011

ORA-01792: maximum number of columns in a table or view is 1000


Description:

Cause: An attempt was made to create a table or view with more than 1000 columns, or to add more columns to a table or view which pushes it over the maximum allowable limit of 1000. Note that unused columns in the table are counted toward the 1000 column limit.

Solution: If the error is a result of a CREATE command, then reduce the number of columns in the command and resubmit. If the error is a result of an ALTER TABLE command, then there are two options: 1) If the table contained unused columns, remove them by executing ALTER TABLE DROP UNUSED COLUMNS before adding new columns; 2) Reduce the number of columns in the command and resubmit.
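
For example, a hedged sketch of the ALTER TABLE route (table and column names are illustrative):

SQL> ALTER TABLE big_table SET UNUSED (old_col1, old_col2);
SQL> ALTER TABLE big_table DROP UNUSED COLUMNS;
SQL> ALTER TABLE big_table ADD (new_col NUMBER);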

HTH:-)

Difference between Chaining and Migration of a row

 Chaining:
 ---------
 It occurs when the row is too large to fit into one data block when it is first
 inserted. In this case, Oracle stores the data for the row in a chain of data
 blocks (one or more) reserved for that segment. Row chaining most often occurs with large rows, such as rows that contain a column of datatype LONG, LONG RAW, LOB, etc. Row chaining in these cases is unavoidable.

 Migration:
 ----------
 It occurs when a row that originally fitted into one data block is updated so
 that the overall row length increases, and the block's free space is already
 completely filled.  In this case, Oracle migrates the data for the entire row
 to a new data block, assuming the entire row can fit in a new block.  Oracle
 preserves the original row piece of a migrated row to point to the new block
 containing the migrated row: the rowid of a migrated row does not change.
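
To check how many chained or migrated rows a table currently has (table name is illustrative; ANALYZE counts both kinds together in chain_cnt):

SQL> ANALYZE TABLE emp COMPUTE STATISTICS;
SQL> SELECT chain_cnt FROM user_tables WHERE table_name = 'EMP';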

HTH:-)

Steps involved in startup of database:


NOMOUNT STAGE:

-Parameter file (pfile/spfile) is read.
-SGA is allocated.
-Background processes are started.
-Alert log file is opened.

MOUNT STAGE:

-Control file is read.
-Names and locations of the datafiles and redo log files are read from it.

OPEN STAGE:

All datafiles and redo log files are opened for use.
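
You can also walk the database through these stages one at a time from SQL*Plus; a minimal sketch:

SQL> STARTUP NOMOUNT
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN;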

HTH:-)

Thursday, August 25, 2011

Redo generation in backup mode

When a user-managed hot backup is executed, the tablespace or database has to be placed in backup mode. This informs the Oracle kernel that the checkpoint SCN should not be updated in the datafile header, i.e. the header is frozen until the END BACKUP statement is executed. During this period the datafiles are still updated, but not the SCNs in their headers.

Why is so much redo generated?


As per Tom:


When you place a tablespace in hot backup mode, oracle will log extra information for a block the first time it is modified while the tablespace it belongs to is in hot backup mode.

Say tablespace X containing file 55 is put into hot backup mode.

You modify block 123 in file 55 -- this generates redo.  Oracle will log the ENTIRE block image instead of just changed bytes.

When you commit.

Someone else modifies blocks 123 and 124 in file 55.  Oracle will log just changed bytes for block 123 but a full block image copy for 124.
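
For reference, a minimal user-managed hot backup sequence looks like this (tablespace name and file paths are illustrative):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> HOST cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
SQL> ALTER TABLESPACE users END BACKUP;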


The link below also gives a practical approach to this topic:


http://www.oraclealchemist.com/oracle/hot-backups-redo-and-fractured-blocks/

HTH:-)

Stages involved in shutdown


Shutting down the database involves the following steps:

1) Close the database.
2) Unmount the database.
3) Shut down the instance.

1) Close the database:
When we close the database, Oracle writes all the data and recovery data in the SGA to the datafiles and redo log files. Next, Oracle closes all online datafiles and redo log files.
Any offline datafiles of offline tablespaces have been closed already.

2) Unmount the database:

After the database has been closed, Oracle unmounts the database to dissociate it from its instance. At this point, the instance remains in the memory of the computer.
After the database is unmounted, Oracle closes the control files of the database.

3) Shut down an Instance:

The final step in shutting down the database is shutting down the instance. The SGA is removed from memory and the background processes are terminated.
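
All three stages are performed in order by a single SQL*Plus command; a minimal sketch:

SQL> SHUTDOWN IMMEDIATE

This closes the database, dismounts it, and shuts down the instance in one go.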

HTH:-)

Quiescing the database.


Quiesce means pausing or altering the state of running processes.

Some operations on database objects will fail if non-DBA queries reference the object during the operation, such as moving a table to a new schema. To prevent users from being affected by these operations, DBAs often shut down the database and open it in restricted mode. This has an obvious effect on availability, as users are locked out of the system until the restriction is lifted.

Why do we quiesce our database?

1.) The quiesce feature is useful when performing table maintenance or complicated data maintenance.

2.) The main advantage of this method is that users do not lose their sessions during the process.

3.) The shared pool does not have to "warm up" after a shutdown so performance should return to normal instantly.

4.) This method is advantageous when performing ALTER TABLE, CREATE OR REPLACE PACKAGE and EXP/IMP operations.

5.) Without the ability to quiesce the database, you would need to shut down the database and reopen it in restricted mode. This is a serious restriction, especially for systems requiring 24 x 7 availability. Quiescing a database is a much smaller restriction, because it eliminates the disruption to users and the downtime associated with shutting down and restarting the database.

How to quiesce the database?

Connect to the database as the SYS or SYSTEM user and type the command below:

SQL>ALTER SYSTEM QUIESCE RESTRICTED;

As soon as the above statement is issued, all attempts to activate an inactive non-DBA session are blocked. Once your maintenance is finished, the database can be returned to a fully available state.

How to unquiesce the database?

SQL>ALTER SYSTEM UNQUIESCE;

When you type the above command, everything goes back to normal.

Enjoy:-)

Wednesday, August 24, 2011

Difference between Statspack and AWR..


1.) STATSPACK does not store the Active Session History (ASH) statistics which are available in the AWR dba_hist_active_sess_history view.

2.) An important difference between STATSPACK and the AWR is that STATSPACK does not store history for the new metric statistics introduced in Oracle 10g. The key AWR views here are dba_hist_sysmetric_history and dba_hist_sysmetric_summary.

3.) Statspack snapshots must be run by an external scheduler (dbms_job, cron, etc.). AWR snapshots are scheduled every 60 minutes by default. Administrators can manually adjust the snapshot interval if so desired (see the sketch below).

4.) AWR captures a much greater depth and breadth of statistics than Statspack does.

5.) Statspack snapshot purges must be scheduled manually. AWR snapshots are purged automatically by MMON every night. MMON, by default, tries to keep one week's worth of AWR snapshots available. If AWR detects that the SYSAUX tablespace is in danger of running out of space, it will free space in SYSAUX by automatically deleting the oldest set of snapshots.
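
For point 3, a hedged sketch of changing the snapshot interval to 30 minutes and the retention to 14 days (both values are in minutes and are illustrative):

SQL> EXECUTE dbms_workload_repository.modify_snapshot_settings(interval => 30, retention => 20160);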

Enjoy:-)

The AWR Report


The AWR Report:

AWR is an enhancement of Statspack that helps us tune the database.
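
A quick sketch of generating an AWR report: run the supplied script and answer its prompts for the snapshot range and report name.

SQL> @?/rdbms/admin/awrrpt.sql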

Use the links below to learn all about AWR:



Enjoy:-)

Interpreting Statspack part-V

Additional Memory Statistics:





The above provides details of the buffer cache hit ratio and of the different pools created. It also shows statistics related to the PGA and enqueues.






Latch Activity: -

Latch Activity DB/Inst: TSM/tsm  Snaps: 1-3
->"Get Requests", "Pct Get Miss" and "Avg Slps/Miss" are statistics for
  willing-to-wait latch get requests
->"NoWait Requests", "Pct NoWait Miss" are for no-wait latch get requests
->"Pct Misses" for both should be very close to 0.0

Latches are locks on memory which are obtained and released very quickly. There are two kinds of latches: willing-to-wait latches and no-wait latches. The _SPIN_COUNT parameter comes into play for willing-to-wait latches. Any modification to it should be made only after careful monitoring, since spinning is a CPU-intensive process.
The no-wait latches populate the "Pct NoWait Miss" column. If this value is greater than .99, investigation is necessary.
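
A minimal sketch for checking latch statistics directly:

SQL> SELECT name, gets, misses, sleeps
     FROM v$latch
     ORDER BY misses DESC;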

Dictionary and Library Cache Statistics: -

This section shows dictionary cache and library cache activity. Proactively monitor your dictionary stats.

SGA Memory Statistics:





This section lists the output of V$SGA and a listing of memory changes during the snapshot interval.

It also lists the SGA breakdown and the differences in memory allocation during the snap interval.

List of Non Default Initialization Parameters:

This is the last section of the Statspack report. These are the non-default initialization parameters configured in the database. It also shows whether a parameter was changed during the snap interval.

That is all for the 'Interpreting Statspack' series. Hope this helps. :-)

Interpreting Statspack part-IV

Instance Activity Statistics:





Following are some important statistics:

Consistent gets: Number of blocks accessed from the buffer cache for queries.
Db block gets: Number of blocks accessed from the buffer cache for DML.

Physical reads: Data blocks read from disk for DML and SELECT statements.
Note: Buffer Hit Ratio = ((consistent gets + db block gets) - physical reads) / (consistent gets + db block gets), i.e. (logical reads - physical reads) / logical reads. (See the query sketch after this list.)

Dirty buffers inspected: If this value is high, it indicates that DBWR is not able to keep up with the load. You could benefit from increasing the number of DBWR processes, using faster disks, or spreading I/O across disks.

Enqueue timeouts: This reflects the times when a lock was requested and was not available. Investigate locking if this statistic is greater than zero.

Free buffers inspected: The buffers that were skipped because they were dirty, pinned, or busy. A large number could indicate a need to increase db_cache_size.

Sorts (disk): This shows the number of times sorting was done in a temporary segment. Increase pga_aggregate_target or sort_area_size, depending on which parameter is relevant for your configuration.

Table fetch continued row: This indicates that a chained row was fetched. Chained rows can be very detrimental to database performance.

Table scans (long/short tables): Scans on long tables are discouraged, while they are acceptable on short tables. Long tables here are those without the CACHE clause.
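
As promised above, a minimal sketch for pulling the raw figures behind the buffer hit ratio straight from v$sysstat:

SQL> SELECT name, value
     FROM v$sysstat
     WHERE name IN ('consistent gets', 'db block gets', 'physical reads');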

Tablespace IO Stats: -





The above shows the I/O breakdown per tablespace and per datafile. If one particular datafile is getting high I/O, consider striping it with RAID. Avoid RAID 5.

Enjoy:-)

Interpreting Statspack part-III

Top 5 Events:



Wait Events  DB/Inst: TSM/tsm  Snaps: 1-3
-> s - second, cs - centisecond,  ms - millisecond, us - microsecond
-> %Timeouts:  value of 0 indicates value was < .5%.  Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)

In the figure above the statistics are ordered by total wait time in seconds. Hence it can be deduced that the parameter timed_statistics=true was set in the database where this report was run.

However, on a system that demands tuning, the waits will be much higher and you will see figures for waits and wait times in the hundreds of thousands.


Below is a listing of the wait events.



Wait Event Histogram  DB/Inst: TSM/tsm Snaps: 1-3

Details: -

Db file scattered read: This indicates full table scans. Consider proper indexing.

Db file sequential read: This indicates index reads. Look for inefficient SQL to fix the issue.

Buffer busy wait: This indicates that a buffer is being used in an exclusive manner. This could be due to multiple reasons, but you can start by increasing freelists and then db_cache_size.

Latch free: Contention on latches, Oracle's low-level serialization mechanisms.

Enqueue: Enqueue is a lock that protects a shared resource.

Log file switch: This wait is due to checkpoints not being completed. Increase the log file size, and consider running your index tablespace in NOLOGGING mode provided you have a script to recreate the indexes. Add database writers if DBWR is the issue.

Log buffer space: LGWR is not keeping up with the load. Increase the log buffer size, increase the log file size, and check whether the issue is that your log files are on the same device as another busy tablespace.


Log File sync: Put log files on a faster disk.


Top SQL: -

Look for SQL which has high buffer gets and high physical reads. Tune the SQL with high buffer gets first, so that space can be freed in the cache to minimize physical reads. You will see SQL ordered by a) buffer gets, b) physical reads, c) executions, and d) parse calls, in that order.
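
Outside the report, a hedged sketch of listing the top 10 statements by buffer gets from the shared pool:

SQL> SELECT *
     FROM (SELECT sql_text, buffer_gets, disk_reads, executions
           FROM v$sqlarea
           ORDER BY buffer_gets DESC)
     WHERE ROWNUM <= 10;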

Tuning troublesome queries on a non-tuned system can give almost unbelievable performance gains.

Enjoy:-)

Tuesday, August 23, 2011

Interpreting Statspack Part-II

Load Profile: -




This section gives you per second and per transaction statistics.

Details are: -

a) An increase in redo size, block changes, and % blocks changed per read indicates increased DML, i.e. insert, update, or delete activity.
b) Parses: A high figure here indicates the application is parsing SQL repeatedly instead of reusing it; this needs investigation.
c) Hard parses: An increase here indicates more ad hoc queries, or that certain parts of the application do not use bind variables.

Instance Efficiency Percentages:





Details are: -

a) Buffer Nowait %: This value should be greater than 99%.
b) Buffer Hit %: This value should ideally be above 95%. Unselective indexes will deceptively give you higher, misleading figures, and you will eventually see waits indicating these problems.
c) Library Hit %: This value should be more than 95%. A lesser figure indicates that bind variables are not used or that the allocated memory is insufficient.
d) In-memory Sort %: This should be above 97% in OLTP systems. You definitely do not want disk sorts leading to poor response times. Try to get a figure as close to 100% as possible by increasing pga_aggregate_target / sort_area_size. Proper indexing also helps. Consider reverse key indexes for columns populated by sequences.
e) Soft Parse %: This figure should be above 95%. A lower figure indicates that SQL is not being reused and needs investigation (see the sketch after this list). It also shows up in the Load Profile section.
f) Latch Hit %: This figure should be above 99%. Anything less indicates a bad situation. The top waits will also spell out the issue in detail.
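
As mentioned in point e), a minimal sketch for computing the soft parse ratio yourself (soft parses = total parses minus hard parses):

SQL> SELECT name, value
     FROM v$sysstat
     WHERE name IN ('parse count (total)', 'parse count (hard)');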

Any drastic change in these ratios across day-to-day cycles indicates a change that needs to be investigated in detail to avoid performance issues.

Enjoy:-)

Interpreting Statspack part-I


Now that we have understood what a Statspack report is and how to install and run it, I am starting a series of posts where we will see how to interpret each of its sections.

Header: -






Here we see the database name 'tsm' and DBID 1538534826. The instance number is 1, since it is a single-instance database and not a cluster (RAC). It shows the snapshot range, here snap IDs 1 to 3, the snapshots' begin and end times, and the elapsed time, here 79.63 minutes.

There were 17 sessions open on the database when the statistics gathering began and 16 open when the collection stopped. In the last section of the header you see the cache sizes and the standard block size for the database.

Enjoy:-)

The Statspack Report


The Statspack package is a set of SQL, PL/SQL, and SQL*Plus scripts that allow the collection, automation, storage, and viewing of performance data. Statspack stores the performance statistics permanently in Oracle tables, which can later be used for reporting and analysis. The data collected can be analyzed using Statspack reports, which includes an instance health and load summary page, high resource SQL statements, and the traditional wait events and initialization parameters.

How Statspack Works?

When you run the Statspack installation script, the PERFSTAT user is created automatically. PERFSTAT owns all objects needed by the Statspack package. Statspack users become familiar with the concept of a snapshot, a single collection of performance data. Each time a new collection is taken, a new SNAP_ID is generated.

Installing Statspack..

1.) Creating tablespace s_perfstat:

SQL> create tablespace S_PERFSTAT datafile 'D:\oracle\product\10.2.0\oradata\s_PERFSTAT.dbf' size 200M autoextend on next 20M maxsize 1024M;

2.) Now run the spcreate.sql script located at D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\

3.) Take the snapshots now.

Snapshots must be collected to evaluate database performance. Snapshots are moment-in-time collections of all of the database statistics that the Oracle database continuously collects. Once two snapshots are collected, they can be compared to identify the activity that occurred during the interval between the two snapshots.

Snapshots can be collected at various levels, with each increasing level collecting a greater amount of information about the database. As the levels go higher, each level is inclusive of the information collected at the levels below it.











c:\> sqlplus perfstat/perfstat

SQL> execute statspack.snap(i_snap_level=>7);


Generating Reports:

Oracle Statspack comes with a comprehensive reporting script called spreport.sql. When this script is run, it outputs a list of available snapshots, asks the user for two snapshot IDs and a name for the report, and then outputs a text report of the results.

1.) Go to D:\oracle\product\10.2.0\db_1\RDBMS\ADMIN\ and run spreport.sql

2.) Enter a beginning snapshot ID.
3.) Enter an ending snapshot ID.
4.) Enter a name for the report or accept the default.

Enjoy:-)

Interpreting TKPROF Output:


We shall look at the output below and understand each bit of information in it, one piece at a time:

select * 
from
 tab

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        3      0.00       0.00          0          0          0           0
Execute      3      0.00       0.00          0          0          0           0
Fetch        6      0.04       0.04         21       1884          0           6
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total       12      0.04       0.04         21       1884          0           6

Misses in library cache during parse: 1
Optimizer mode: ALL_ROWS
Parsing user id: 54  

Rows     Row Source Operation
-------  ---------------------------------------------------
      2  NESTED LOOPS OUTER (cr=628 pr=21 pw=0 time=25030 us)
      2   TABLE ACCESS FULL OBJ$ (cr=621 pr=20 pw=0 time=21996 us)
      2   TABLE ACCESS CLUSTER TAB$ (cr=7 pr=1 pw=0 time=3051 us)
      2    INDEX UNIQUE SCAN I_OBJ# (cr=4 pr=1 pw=0 time=3033 us)(object id 3)

CALL values and their meanings:

PARSE: Translates the SQL statement into an execution plan, including checks for proper security authorization and checks for the existence of tables, columns, and other referenced objects.

EXECUTE: Actual execution of the statement by Oracle. For INSERT, UPDATE, and DELETE statements, this modifies the data. For SELECT statements, this identifies the selected rows.

FETCH: Retrieves rows returned by a query. Fetches are only performed for SELECT statements.

SQL Trace statistics and their meanings:

COUNT: Number of times a statement was parsed, executed, or fetched.

CPU: Total CPU time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.

ELAPSED: Total elapsed time in seconds for all parse, execute, or fetch calls for the statement. This value is zero (0) if TIMED_STATISTICS is not turned on.

DISK: Total number of data blocks physically read from the datafiles on disk for all parse, execute, or fetch calls.

QUERY: Total number of buffers retrieved in consistent mode for all parse, execute, or fetch calls. Usually, buffers are retrieved in consistent mode for queries.

CURRENT: Total number of buffers retrieved in current mode. Buffers are retrieved in current mode for statements such as INSERT, UPDATE, and DELETE.


Library Cache Misses in TKPROF: TKPROF also lists the number of library cache misses resulting from parse and execute steps for each SQL statement.

Deciding Which Statements to Tune:

You need to find which SQL statements use the most CPU or disk resource.

Also, a large gap between CPU and elapsed timings indicates wait time, often from physical I/O.
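
A typical invocation sorts the heaviest statements to the top of the report (the trace file name here is illustrative):

c:\> tkprof orcl_ora_1234.trc report.txt sort=fchela,exeela,prsela sys=no
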

Enjoy:-)