Friday, August 19, 2011

OMS Grid Console - Exploring using SQL

The Grid Console hosts a repository of data collected from all the monitored databases and hosts. It holds a wealth of information, and a few of those areas are covered here. To explore them, log on to the OMS 10g Grid Console database.
  • To find the init parameters for a monitored database, use the following SQL
select host_name,  target_name, collection_timestamp,
             name, isdefault, value, datatype
      from   sysman.MGMT$DB_INIT_PARAMS 
      where  target_name like 'yourdbname%';
  • To find the number of data files configured versus the actual number of data files for a monitored database
	select a.target_name, db1.db_name, db2.db_files "Configured DB Files", count(*) "Actual # of Datafiles"
	from   MGMT$DB_DATAFILES a, ( select value db_name, target_name
	                               from  sysman.mgmt$db_init_params a1
	                               where a1.name        = 'db_name') db1 ,
	                            ( select value db_files, target_name
	                               from  sysman.mgmt$db_init_params a1
	                               where a1.name        = 'db_files') db2
	where  a.target_name = db1.target_name
	and    a.target_name = db2.target_name
	group by  a.target_name, db1.db_name, db2.db_files
	order by 4 desc;
  • To find the character set used across all the monitored databases
select * from sysman.MGMT$TARGET_PROPERTIES
      where  property_name = 'ConvertFromCharset';

      --  MGMT$TARGET_PROPERTIES corresponds to database_properties view
  • To find the SGA parameters for a monitored database use the following SQL
select host_name, target_name, collection_timestamp, sganame, sgasize
      from   sysman.mgmt$db_sga
      where  target_name like 'yourdbname%';
  • To find outstanding Alerts for a monitored database
	select t.type_display_name, t.target_name, t.host_name, m.metric_name, m.metric_column,
	       m.column_label, m.key_value, m.collection_timestamp,
	       m.message, m.alert_state
	from sysman.MGMT$ALERT_CURRENT m, sysman.mgmt$target  t
	where  t.target_guid = m.target_guid
	and   t.target_name like 'yourdbname%'
	order by  m.collection_timestamp desc;
  • To find out the Target up and down status
	select t.target_name, m.start_timestamp,
	       m.end_timestamp, m.availability_status
	from   sysman.MGMT$AVAILABILITY_HISTORY m, sysman.mgmt$target  t
	where  t.target_guid = m.target_guid
	and    t.target_name like 'yourdbname%'
	order by start_timestamp desc;
  • Other useful information from Grid
MGMT$OS_SUMMARY    - to find the OS patch level, swap and version
      -- Issue this query to list several other views that hold useful info; you can query each of them to see what you get.
         select * 
         from   dba_objects 
         where  objecT_name like 'MGMT%' 
         and    owner = 'SYSMAN' 
         and    object_type in ('TABLE', 'VIEW');
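For instance, the MGMT$OS_SUMMARY view mentioned above can be queried directly. A minimal sketch (the host_name filter column is an assumption here; describe the view first to confirm its columns):

```sql
select *
from   sysman.mgmt$os_summary
where  host_name = 'your agent host name fqdn';
```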
  • To find out the Interconnect network details
select * 
     from sysman.MGMT$CLUSTER_INTERCONNECTS 
     where host_name = 'your agent host name fqdn';
  • Detailed Interconnect Traffic for RAC Instances (Past 7 Days)
select m.target_name, m.metric_name, m.metric_column, m.collection_timestamp,
             m.value*1024 "Traffic KB/s", m.key_value
      from   mgmt$metric_details m,
             mgmt$target t
      where  (t.target_type='rac_database' OR
             (t.target_type='oracle_database' AND t.TYPE_QUALIFIER3 = 'RACINST'))
      and     m.metric_column='interconnect_rate'
      and     m.target_guid  = t.target_guid
      and     m.collection_timestamp >= sysdate -7
      and     m.target_name like 'yourdbname%';
  • Historical Interconnect Traffic for RAC instances (Average, Min, Max)
	select m.target_name, m.metric_name, m.metric_column, m.rollup_timestamp,
	       round(m.average*1024,2) "Average Traffic KB/s",
	       round(m.minimum*1024,2) "Minimum Traffic KB/s",
	       round(m.maximum*1024,2) "Maximum Traffic KB/s"
	from   mgmt$metric_daily m,
	       mgmt$target t
	where  (t.target_type='rac_database' OR
	       (t.target_type='oracle_database' AND t.TYPE_QUALIFIER3 = 'RACINST'))
	and     m.metric_column='interconnect_rate'
	and     m.target_guid  = t.target_guid
	and     m.rollup_timestamp >= sysdate - 60
	and     m.target_name like 'yourdbname%';

The Grid Console stores the bulk of its collected metrics in the following views
  MGMT$METRIC_DETAILS   -  Stores details for up to 7 days
  MGMT$METRIC_CURRENT   -  Stores the current metrics from the past 24 hours
  MGMT$METRIC_HOURLY    -  Stores details for up to 30 days in hourly snapshots with AVG, MIN and MAX values
  MGMT$METRIC_DAILY     -  Stores the metrics in daily snapshots with AVG, MIN and MAX values
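As an example, the hourly rollup view can be queried much like the detail and daily views used above; this is a sketch, reusing the interconnect metric from the earlier queries:

```sql
select m.target_name, m.metric_name, m.metric_column, m.rollup_timestamp,
       m.average, m.minimum, m.maximum
from   sysman.mgmt$metric_hourly m
where  m.metric_column = 'interconnect_rate'
and    m.rollup_timestamp >= sysdate - 30
and    m.target_name like 'yourdbname%'
order by m.rollup_timestamp desc;
```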

The following are the details captured in the metrics views; you can reuse the queries above as needed by changing the metric_column in the WHERE clause.
   1. Interconnect Traffic for RAC ( metric_column = 'interconnect_rate' )
   2. Archive Space Usage ( metric_column = 'archTotal' for total archive space, metric_column = 'archUsed' for actual used space, with
                            metric_name = 'archFull' )
   3. Tablespace Usage    ( metric_column = 'spaceAllocated' or 'spaceUsed', with metric_name = 'tbspAllocation' )
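As an example of swapping the metric, tablespace usage history can be pulled from the daily rollup view; a sketch, assuming key_value carries the tablespace name (the values are already in MB per the metric table below):

```sql
select m.target_name, m.key_value "Tablespace", m.rollup_timestamp,
       round(m.average,2)  "Avg Used (MB)",
       round(m.maximum,2)  "Max Used (MB)"
from   sysman.mgmt$metric_daily m
where  m.metric_name   = 'tbspAllocation'
and    m.metric_column = 'spaceUsed'
and    m.rollup_timestamp >= sysdate - 60
and    m.target_name like 'yourdbname%'
order by m.rollup_timestamp desc;
```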

This table covers the other metrics you can use for current and historical analysis.
COLUMN_LABEL | METRIC_NAME | METRIC_COLUMN
Number of Capture Processes | streams_processes_count | CAPTURE_COUNT
Tablespace Allocated Space (MB) | tbspAllocation | spaceAllocated
Physical Writes Direct (per transaction) | instance_throughput | physwritesdir_pt
Global Cache Blocks Corrupt | rac_global_cache | corrupt
Enqueue Deadlocks (per second) | instance_throughput | enqdeadlocks_ps
Physical Reads Direct Lobs (per transaction) | instance_throughput | physreadslob_pt
Physical Writes Direct (per second) | instance_throughput | physwritesdir_ps
Archive Area Used (KB) | archFull | archUsed
Cursor Cache Hit (%) | instance_efficiency | cursorcachehit_pct
Redo Log Allocation Hit (%) | instance_efficiency | redologalloc_hit_pct
Database CPU Time (%) | instance_efficiency | cpu_time_pct
Physical Reads Direct (per transaction) | instance_throughput | physreadsdir_pt
User Limit Usage (%) | Database_Resource_Usage | user_limit
Library Cache Miss (%) | instance_efficiency | libcache_miss_pct
Physical Writes Direct Lobs (per transaction) | instance_throughput | physwriteslob_pt
User Rollbacks (per transaction) | instance_throughput | rollbacks_pt
Total Archive Area (KB) | archFull | archTotal
Physical Writes Direct Lobs (per second) | instance_throughput | physwriteslob_ps
User Commits (per transaction) | instance_throughput | commits_pt
Row Cache Miss Ratio (%) | instance_efficiency | dictionarymiss_pct
Parallel Execution Downgraded to Serial (per second) | instance_efficiency | pxdwngrdserial_ps
Data Block Corruption Alert Log Error | alertLog | blockCorruptErrStack
Number of Propagation Jobs | streams_processes_count | PROP_COUNT
Usable Flash Recovery Area (%) | ha_flashrecovery | usable_area
Media Failure Alert Log Error Status | alertLogStatus | mediaFailureErrors
PGA Cache Hit (%) | instance_efficiency | pgacachehit_pct
Enqueue Timeout (per transaction) | instance_throughput | enqtimeouts_pt
Parallel Execution Downgraded to Serial (per transaction) | instance_efficiency | pxdwngrdserial_pt
Service Response Time (per user call) (microseconds) | service | elapsed_cs
Session Limit Usage (%) | Database_Resource_Usage | session_usage
Response Time (per transaction) | instance_efficiency | response_time_pt
Total Index Scans (per second) | instance_throughput | indxscanstotal_ps
Session Logical Reads (per transaction) | instance_throughput | logreads_pt
User Calls (%) | instance_throughput | usercall_pct
Database Block Changes (per second) | instance_throughput | dbblkchanges_ps
Redo Writes (per second) | instance_throughput | redowrites_ps
Open Cursors (per second) | instance_throughput | opncurs_ps
Average Instance CPU (%) | wait_bottlenecks | avg_user_cpu_time_pct
Network Bytes (per second) | instance_throughput | networkbytes_ps
Parse Failure Count (per transaction) | instance_throughput | failedparses_pt
Consistent Read Blocks Created (per transaction) | instance_throughput | crblks_pt
Enqueue Timeout (per second) | instance_throughput | enqtimeouts_ps
Connections Refused (per min) | Load | refConns
Tablespace Space Used (%) | problemTbsp | pctUsed
Full Index Scans (per second) | instance_throughput | indxscansfull_ps
Free Archive Area (KB) | archFull | archAvail
Enqueue Deadlocks (per transaction) | instance_throughput | enqdeadlocks_pt
User Rollback Undo Records Applied (per second) | instance_throughput | userrollbackundorec_ps
Total Parses (per second) | instance_throughput | parses_ps
Database Block Gets (per second) | instance_throughput | dbblkgets_ps
Current Open Cursors Count | Database_Resource_Usage | opencursors
Active Sessions Waiting: I/O | wait_bottlenecks | userio_wait_cnt
Library Cache Hit (%) | instance_efficiency | libcache_hit_pct
Transfer Rate (MB/s) | ha_rac_intrconn_traffic | interconnect_rate
Hard Parses (per transaction) | instance_throughput | hardparses_pt
Total Dump Area (KB) | dumpFull | dumpTotal
Total Table Scans (per second) | instance_throughput | tabscanstotal_ps
Global Cache Average CR Block Request Time (centi-seconds) | rac_global_cache | cr_request_cs
Process Limit Usage (%) | Database_Resource_Usage | process_usage
Active Sessions Waiting: Other | wait_bottlenecks | other_wait_cnt
User Calls (per transaction) | instance_throughput | usercalls_pt
Session Terminated Alert Log Error Status | alertLogStatus | sessTerminateErrors
Java Pool Free (%) | sga_pool_wastage | java_free_pct
Session Logical Reads (per second) | instance_throughput | logreads_ps
Audited User Session Count | UserAudit | Session Count
Response Time (msec) | Response | tnsPing
Connections Established (per min) | Load | estConns
Audited User Host | UserAudit | machine
Total Table Scans (per transaction) | instance_throughput | tabscanstotal_pt
Physical Reads (per second) | instance_throughput | physreads_ps
Leaf Node Splits (per second) | instance_throughput | leafnodesplits_ps
Redo Writes (per transaction) | instance_throughput | redowrites_pt
Recursive Calls (per second) | instance_throughput | recurscalls_ps
Physical Writes (per second) | instance_throughput | physwrites_ps
Consistent Read Changes (per transaction) | instance_throughput | consistentreadchanges_pt
Global Cache Average Current Block Request Time (centi-seconds) | rac_global_cache | currentgets_cs
Host CPU Utilization (%) | wait_bottlenecks | host_cpu_usage_pct
Soft Parse (%) | instance_throughput | softparse_pct
Database Block Changes (per transaction) | instance_throughput | dbblkchanges_pt
Hard Parses (per second) | instance_throughput | hardparses_ps
Physical Reads Direct (per second) | instance_throughput | physreadsdir_ps
Parse Failure Count (per second) | instance_throughput | failedparses_ps
Scans on Long Tables (per second) | instance_throughput | tabscanslong_ps
Branch Node Splits (per second) | instance_throughput | branchnodesplits_ps
Global Cache Blocks Lost | rac_global_cache | lost
Archiver Hung Alert Log Error Status | alertLogStatus | archiveHungErrors
Consistent Read Gets (per transaction) | instance_throughput | consistentreadgets_pt
Data Dictionary Hit (%) | instance_efficiency | dictionaryhit_pct
Consistent Read Blocks Created (per second) | instance_throughput | crblks_ps
Generic Alert Log Error | alertLog | genericErrStack
Number of recommendations | segment_advisor_count | recommendations
Flash Recovery Area | ha_flashrecovery | flash_recovery_area
Number of Apply Processes | streams_processes_count | APPLY_COUNT
Tablespace Used Space (MB) | tbspAllocation | spaceUsed
Cumulative Logons (per transaction) | instance_throughput | logons_pt
Dump Area Used (KB) | dumpFull | dumpUsed
Parallel Execution Downgraded 75% or more (per second) | instance_efficiency | pxdwngrd75_ps
Service CPU Time (per user call) (microseconds) | service | cpu_cs
Full Index Scans (per transaction) | instance_throughput | indxscansfull_pt
Executes Performed without Parses (%) | instance_throughput | executeswoparse_pct
System Response Time (centi-seconds) | system_response_time_per_call | time_cs
Data Block Corruption Alert Log Error Status | alertLogStatus | blockCorruptErrors
CPU Usage (per transaction) | instance_efficiency | cpuusage_pt
Large Pool Free (%) | sga_pool_wastage | large_free_pct
User Rollback Undo Records Applied (per transaction) | instance_throughput | userrollbackundorec_pt
Sorts in Memory (%) | instance_efficiency | inmem_sort_pct
CPU Usage (per second) | instance_efficiency | cpuusage_ps
Consistent Read Gets (per second) | instance_throughput | consistentreadgets_ps
Baseline SQL Response Time | sql_response | baselineTime
Active Sessions Using CPU | wait_bottlenecks | user_cpu_time_cnt
Alert Log Error Trace File | alertLog | traceFileName
Media Failure Alert Log Error | alertLog | mediaFailureErrStack
Capture Processes Having Errors | streams_processes_count | CAPTURE_ERROR_COUNT
Apply Processes Having Errors | streams_processes_count | APPLY_ERROR_COUNT
Datafiles Need Media Recovery | ha_recovery | datafiles_need_recovery
Enqueue Requests (per second) | instance_throughput | enqreqs_ps
Sorts to Disk (per second) | instance_throughput | sortsdisk_ps
Generic Alert Log Error Status | alertLogStatus | genericErrors
Redo Generated (per second) | instance_throughput | redosize_ps
Rows Processed (per sort) | instance_throughput | rows_psort
Redo Generated (per transaction) | instance_throughput | redosize_pt
Enqueue Waits (per transaction) | instance_throughput | enqwaits_pt
BG Checkpoints (per second) | instance_throughput | bgcheckpoints_ps
Enqueue Waits (per second) | instance_throughput | enqwaits_ps
Current SQL Response Time | sql_response | currentTime
Current Logons Count | Database_Resource_Usage | logons
Database Block Gets (per transaction) | instance_throughput | dbblkgets_pt
Physical Writes (per transaction) | instance_throughput | physwrites_pt
Cumulative Logons (per second) | instance_throughput | logons_ps
Executes (per second) | instance_throughput | executions_ps
Archiver Hung Alert Log Error | alertLog | archiveHungErrStack
Alert Log Name | alertLog | alertLogName
Tablespace Free Space (MB) | problemTbsp | bytesFree
SQL Response Time (%) | sql_response | time
Leaf Node Splits (per transaction) | instance_throughput | leafnodesplits_pt
Open Cursors (per transaction) | instance_throughput | opncurs_pt
Consistent Read Undo Records Applied (per transaction) | instance_throughput | crundorecs_pt
Parallel Execution Downgraded 25% or more (per second) | instance_efficiency | pxdwngrd25_ps
Branch Node Splits (per transaction) | instance_throughput | branchnodesplits_pt
DBWR Checkpoints (per second) | instance_throughput | dbwrcheckpoints_ps
Free Dump Area (KB) | dumpFull | dumpAvail
Wait Time (%) | wait_bottlenecks | user_wait_time_pct
Parallel Execution Downgraded 50% or more (per second) | instance_efficiency | pxdwngrd50_ps
Dump Area Used (%) | dumpFull | dumpUsedPercent
Consistent Read Changes (per second) | instance_throughput | consistentreadchanges_ps
Session Terminated Alert Log Error | alertLog | sessTerminateErrStack
Propagation Errors | streams_processes_count | PROP_ERROR_COUNT
Corrupt Data Blocks | ha_recovery | corrupt_data_blocks
Datafiles with Error | ha_recovery | missing_media_files
Consistent Read Undo Records Applied (per second) | instance_throughput | crundorecs_ps
User Commits (per second) | instance_throughput | commits_ps
Shared Pool Free (%) | sga_pool_wastage | shared_free_pct
User Calls (per second) | instance_throughput | usercalls_ps
Status | Response | Status
Recursive Calls (per transaction) | instance_throughput | recurscalls_pt
Database Time (centiseconds per second) | instance_throughput | dbtime_ps
Total Index Scans (per transaction) | instance_throughput | indxscanstotal_pt
Average Users Waiting Count | wait_sess_cls | avg_users_waiting_on_class
User Rollbacks (per second) | instance_throughput | rollbacks_ps
Total Parses (per transaction) | instance_throughput | parses_pt
Physical Reads (per transaction) | instance_throughput | physreads_pt
Scans on Long Tables (per transaction) | instance_throughput | tabscanslong_pt
Sorts to Disk (per transaction) | instance_throughput | sortsdisk_pt
Parallel Execution Downgraded (per second) | instance_efficiency | pxdwngrd_ps
Archive Area Used (%) | archFull | archUsedPercent
Physical Reads Direct Lobs (per second) | instance_throughput | physreadslob_ps
Buffer Cache Hit (%) | instance_efficiency | bufcachehit_pct
Number of Transactions (per second) | instance_throughput | transactions_ps
Database Time Spent Waiting (%) | wait_sess_cls | dbtime_waitclass_pct
Enqueue Requests (per transaction) | instance_throughput | enqreqs_pt
User Logon Time (msec) | Response | userLogon
Audited User | UserAudit | username

Script/Automation: Generate AWR Report across all RAC nodes

I have often found the following script useful for generating an AWR report for a given snapshot range on every node in a RAC environment.

I plan to post several automation scripts over a period of time.


#!/bin/ksh
REPORT_DIR=/outputdir/awr;

run_awr_report()
{
sqlplus -s / as sysdba <<EOF
define  inst_num     = $INSTANCE_ID;
define  num_days     = 3;
define  inst_name    = '$INSTANCE_NAME';
define  db_name      = '$DB_NAME';
define  dbid         = $DB_ID;
define  begin_snap   = $BEGIN_SNAP_ID;
define  end_snap     = $END_SNAP_ID;
define  report_type  = 'text';
define  report_name  = '$REPORT_NAME';
@@?/rdbms/admin/awrrpti
EOF
}


if (( $# < 2 ))
   then
     print "Required parms not specified"
     print "Usage snap_report.sh <Begin Snap ID> <End Snap Id>";
     exit 3
 fi
BEGIN_SNAP_ID=$1
END_SNAP_ID=$2

echo "
    set feedback off verify off heading off pagesize 0 echo off;
    select d.name, i.instance_number, i.instance_name, d.dbid \
    from   gv\$database d, gv\$instance i \
    where  i.inst_id = d.inst_id;
    exit;
    " |  sqlplus  -s / as sysdba  | while read DB_NAME INSTANCE_ID INSTANCE_NAME DB_ID
    do
         echo Running the report for:
         echo $DB_NAME - $INSTANCE_NAME
         REPORT_FILE=${INSTANCE_NAME}_${BEGIN_SNAP_ID}_${END_SNAP_ID}.lst
         REPORT_NAME=${REPORT_DIR}/${REPORT_FILE}
         run_awr_report

    done

Script/Automation: How to execute a script across all RAC nodes.

Oftentimes you may run into scenarios where you need to run the same command across all the RAC nodes. I have attempted to automate this; the script is listed below.

Assumptions made for this script to work:
1. The script you want to execute is accessible to all the nodes in the RAC environment, either through an NFS-shared mount point or a common directory.
2. This script uses crs_stat to find all the nodes in the RAC cluster. Ensure the ORACLE_HOME and PATH variables are set appropriately.
3. Make sure secure shell (ssh) login is permitted across all the nodes in the RAC cluster.




#!/bin/ksh
# Purpose: To execute a unix script across all the nodes of a RAC environment
#
usage ()
{
     print "Usage ${PROGRAM_NAME} '<SCRIPT FILE with options>'";
     print "Eg.   ${PROGRAM_NAME}  '/nfsmount/script_name'";
     exit 3
}

PROGRAM_NAME=`basename $0`;
if (( $# < 1 ))
   then
     print "Required parameter not specified"
     usage;
fi
# Get the options supplied.
CMD=$1;
FILE=$(echo $1 | awk '{print $1}');
# Check if the file exists in the current server.

if [[ ! -f ${FILE} ]]
then
     print "Incorrect script file specified and/or the file is not accessible";
     usage;
fi;
crs_stat -t | awk '{ print $NF }' | grep -v 'Host' | grep -v '\-'  > /tmp/node_list;
for  listedhost in `sort /tmp/node_list | uniq`
do
   print "\nExecuting the Script on $listedhost ....\n\n";
   ssh -l oracle $listedhost ". /opt/oracle/.oravar; $CMD"
done;

Performance Tuning: How to find if a SQL changed its plan based on AWR

How to Find if a SQL PLAN Changed
The Shared Pool keeps the current copy of the SQL plan in its cache. To track a SQL plan change, we need the assistance of the AWR history tables. When you are investigating a performance issue, it is better to have an AWR snapshot every 15 - 30 minutes. To change the AWR snapshot interval, issue the following commands.
 
  • To find the current retention and snapshot interval

     select * from dba_hist_wr_control;

  • Change the setting using

execute dbms_workload_repository.modify_snapshot_settings ( interval => 10, retention => 14400);

The above command sets the snapshot interval to 10 minutes and the retention to 14400 minutes (10 days).

To Track the SQL Plan Change.

        SELECT
            to_char(s.begin_interval_time,'mm/dd/yyyy hh24:mi:ss') c1, sql_id,
            s.snap_id,
            sql.PLAN_HASH_VALUE,
            sql.executions_delta   executions,
            decode(sql.executions_delta, 0, '0',
            sql.buffer_gets_delta/sql.executions_delta) BufferGets_Per_exec,
            sql.buffer_gets_Delta,
            decode(sql.executions_delta, 0, '0',
            sql.elapsed_time_delta/sql.executions_delta) Elap_Per_exec,
            sql.elapsed_time_delta
        from dba_hist_sqlstat sql, dba_hist_snapshot s
        where s.snap_id = sql.snap_id
        and s.instance_number = sql.instance_number
        and s.instance_number = 1
        and sql_id in (  '5s770nx4bfdjb')    --  ENTER your SQLID where you suspect the PLAN changed.
        --and executions_delta >0
        order by sql_id, s.begin_interval_time;

The above query returns one row per snapshot, and you can observe the time when the plan actually changed. You can also compare the buffer gets and elapsed time per execution to see which plan hash value is the optimal one. (If you find multiple PLAN_HASH_VALUEs for one particular SQL_ID, it means the plan changed.)
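To list only the statements whose plan has flipped, rather than eyeballing one SQL_ID at a time, you can count the distinct plan hash values per statement; a sketch against the same AWR view:

```sql
select sql_id, count(distinct plan_hash_value) plan_count
from   dba_hist_sqlstat
group by sql_id
having count(distinct plan_hash_value) > 1
order by plan_count desc;
```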

How to find SQL Execution PLAN using the AWR Repository

select plan_table_output from
table(dbms_xplan.display_awr('5s770nx4bfdjb',null,null,'typical+peeked_binds'));   -- ENTER the SQLID


If the SQL plan has changed, you will see multiple plans in the output of the above query.


Currently in Oracle 10g, there is no easy way to get a plan from AWR repository and make the database use that PLAN.

How to find SQL Execution PLAN using the current CACHE

select plan_table_output from
table(dbms_xplan.display_cursor('5s770nx4bfdjb',null,'typical+peeked_binds'));

How to Backup and Restore Oracle Statistics

Following are a few tips:

To Backup the STATS

  • Create a table that will hold the STATS backup
           exec dbms_stats.create_stat_table(ownname => 'SYSTEM',stattab => 'STATS_BACKUP', tblspace => 'TOOLS');
This example creates a table STATS_BACKUP under the SYSTEM user and uses the TOOLS tablespace for data storage.

  • Run the export command to capture the current stats into the backup table. (I suggest exporting the entire SCHEMA as a safety net.)
    exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP', statid => 'JAN192011' , statown => 'SYSTEM');
          In this example, the SCOTT schema STAT is backed up to STATS_BACKUP table.

  • Verify the export of the STATS by issuing this query
    select * from system.stats_Backup where statid = 'JAN192011';
To Restore STATS

There are a couple of ways to restore the STATS. Before restoring the STATS, always back up the existing STATS using the method described above.

Method 1: Restoring the STATS through the Oracle Base table for a particular TABLE.
a. Find out if that particular table has STATS history.

select to_char(t.stats_update_time, 'mm/dd/yyyy hh24:mi:ss'), t.*
from   DBA_TAB_STATS_HISTORY t
where    owner = 'SCOTT'
and      table_name = 'DEPT';

b. If there is a record returned by the above query, narrow it down to the right date and time you want to restore back to.

c. Issue this command to restore the STATS

   EXEC DBMS_STATS.RESTORE_TABLE_STATS('SCOTT' ,'DEPT', to_Date('01/11/2011 19:42:24', 'MM/DD/YYYY HH24:MI:SS'));

Method 2: Restoring the STATS from backup for a particular TABLE.

Sometimes you may want to go back beyond 30 days (stats history is normally retained for about 30 days, so the above step may not work once the history gets purged); in that case you will have to restore the STATS from the backup table. The steps are as follows.
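Before falling back to the backup table, you can confirm how far back Method 1 can actually go; DBMS_STATS exposes the history retention and the oldest timestamp still available:

```sql
select dbms_stats.get_stats_history_retention    from dual;
select dbms_stats.get_stats_history_availability from dual;
```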

a. Query the STATS Backup TABLE (SYSTEM.STATS_BACKUP) and find out the details related to that TABLE.

   select * from system.stats_Backup where c1 = '<tablename>';


b. From the above query, get the STATID you want to restore back to. Pay careful attention to the date/time for that STATID.

c. Import the STATS back for that particular table.

   exec dbms_stats.import_Table_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP', statid => 'JAN192011' , 
                                            statown => 'SYSTEM', TABNAME => 'tablename');
Method 3: Restoring the STATS from backup from QAS to PROD.

a. Export the STATS table (SYSTEM.STATS_BACKUP) from the source database.

         Parfile for typical expdp

         dumpfile=expdp_stats_backup.dmp
         logfile=expdp_stats_backup.log
         directory=DATA_PUMP_DIR
         EXCLUDE=STATISTICS
         CONTENT=DATA_ONLY
         tables=STATS_BACKUP


b. Create the stats table in the destination database if it does not exist.

          exec dbms_stats.create_stat_table(ownname => 'SYSTEM',stattab => 'STATS_BACKUP', tblspace => 'TOOLS'); 
c. Import the stats into the destination database.

          dumpfile=expdp_stats_backup.dmp
          logfile=impdp_stats_backup.log
          directory=DATA_PUMP_DIR
          CONTENT=DATA_ONLY
          tables=STATS_BACKUP

 d.  Use the dbms_stats package to import the SCHEMA stats or STATS for a table.

          Eg. exec dbms_stats.import_schema_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP', statid => 'JAN192011' , 
                                            statown => 'SYSTEM');     -- To import the stats for the schema


          Eg. exec dbms_stats.import_Table_stats(ownname => 'SCOTT', stattab => 'STATS_BACKUP', statid => 'JAN192011' , 
                                            statown => 'SYSTEM', TABNAME => 'tablename');  -- To import the stats for a Table.


If you are importing stats from PROD to DEV/QA between two different schemas, make sure the SCHEMA owners are the same before trying import_schema_stats or import_table_stats. If the schema owners are not the same, you will have to update the stats backup table.

The following example shows the update statement to use if the SCHEMA OWNERS are different.

         update system.stats_backup
         set   c5 = 'SCOTTPROD'
         where c5 = 'SCOTT';
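After the update, a quick sanity check confirms what the backup table now holds; as the queries above show, c1 carries the table name and c5 the owner in a DBMS_STATS stat table:

```sql
select distinct statid, c5 owner_name
from   system.stats_backup;
```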