Thursday, December 3, 2009

Setting up LogMiner for tracking transactions

2. Steps in a Typical LogMiner Session
1) Enable Supplemental Logging
Database level:
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; -- enable minimal database level supplemental logging
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS; -- ALL system-generated unconditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS; -- PRIMARY KEY system-generated unconditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS; -- UNIQUE system-generated conditional supplemental log group
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS; -- FOREIGN KEY system-generated conditional supplemental log group

To disable all database supplemental logging, you must first disable any identification key logging that has been enabled, then disable minimal supplemental logging. The following example shows the correct order:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE DROP SUPPLEMENTAL LOG DATA;
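To confirm what is currently enabled at the database level, you can query V$DATABASE (the supplemental-logging columns below exist in 10g and later):
SQL> SELECT SUPPLEMENTAL_LOG_DATA_MIN, SUPPLEMENTAL_LOG_DATA_PK, SUPPLEMENTAL_LOG_DATA_UI, SUPPLEMENTAL_LOG_DATA_FK, SUPPLEMENTAL_LOG_DATA_ALL FROM V$DATABASE;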

Table level:
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_parttime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID) ALWAYS; -- User-defined unconditional log groups
SQL> ALTER TABLE HR.EMPLOYEES ADD SUPPLEMENTAL LOG GROUP emp_fulltime (EMPLOYEE_ID, LAST_NAME, DEPARTMENT_ID); -- User-defined conditional supplemental log groups
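To see which supplemental log groups are defined on a table, the DBA_LOG_GROUPS and DBA_LOG_GROUP_COLUMNS views can be queried, for example:
SQL> SELECT LOG_GROUP_NAME, TABLE_NAME, ALWAYS FROM DBA_LOG_GROUPS WHERE OWNER = 'HR';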

2) Extract a LogMiner Dictionary (unless you plan to use the online catalog)
* Specify use of the online catalog by using the DICT_FROM_ONLINE_CATALOG option when you start LogMiner.
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

* Extract database dictionary information to the redo log files.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD(OPTIONS=> DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);

* Extract database dictionary information to a flat file.
"UTL_FILE_DIR = /oracle/database" must be set in the initialization parameter file first to enable directory access.
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/', DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', '/oracle/database/');

3) Specify Redo Log Files for Analysis
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log1.f', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME=>'/oracle/logs/log2.f');
SQL> EXECUTE DBMS_LOGMNR.REMOVE_LOGFILE(LOGFILENAME => '/oracle/logs/log2.f');

When you use the online catalog or specify the DBMS_LOGMNR.CONTINUOUS_MINE option to START_LOGMNR, it is not necessary to specify the redo log files manually; LogMiner finds them automatically.


4) Start LogMiner
The OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR:
* DICT_FROM_ONLINE_CATALOG
* DICT_FROM_REDO_LOGS
* CONTINUOUS_MINE
* COMMITTED_DATA_ONLY
* SKIP_CORRUPTION
* NO_SQL_DELIMITER
* PRINT_PRETTY_SQL
* NO_ROWID_IN_STMT
* DDL_DICT_TRACKING

SQL> ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR( -
STARTTIME => '01-Jan-2003 08:30:00', -
ENDTIME => '01-Jan-2003 08:45:00', -
OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + -
DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.SKIP_CORRUPTION);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN => 621047, ENDSCN => 625695, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME =>'/oracle/database/dictionary.ora');
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);

5) Query V$LOGMNR_CONTENTS
* LogMiner populates the view only in response to a query against it. You must successfully start LogMiner before you can query V$LOGMNR_CONTENTS.
* When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially.
* Every time you query V$LOGMNR_CONTENTS, LogMiner analyzes the redo log files for the data you request.
* The amount of memory consumed by the query is not dependent on the number of rows that must be returned to satisfy a query.
* The time it takes to return the requested data is dependent on the amount and type of redo log data that must be mined to find that data.

SQL> SELECT OPERATION, SQL_REDO, SQL_UNDO
SQL> FROM V$LOGMNR_CONTENTS
SQL> WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
SQL> OPERATION = 'DELETE' AND USERNAME = 'RON';

SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS
SQL> WHERE SEG_NAME = 'EMPLOYEES' AND
SQL> SEG_OWNER = 'HR' AND
SQL> OPERATION = 'UPDATE' AND
SQL> DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') >
SQL> 2*DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY');


6) End the LogMiner Session
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;


3. Accessing LogMiner Operational Information in Views
* V$LOGMNR_DICTIONARY
Shows information about a LogMiner dictionary file that was created using the STORE_IN_FLAT_FILE option to DBMS_LOGMNR_D.BUILD.
* V$LOGMNR_LOGS
Shows information about specified redo log files.
* V$LOGMNR_PARAMETERS
Shows information about optional LogMiner parameters, including starting and ending system change numbers (SCNs) and starting and ending times.
* V$DATABASE, DBA_LOG_GROUPS, ALL_LOG_GROUPS, USER_LOG_GROUPS, DBA_LOG_GROUP_COLUMNS, ALL_LOG_GROUP_COLUMNS, USER_LOG_GROUP_COLUMNS
Shows information about the current settings for supplemental logging.


4. Examples
* Examples of Mining by Explicitly Specifying the Redo Log Files of Interest
* Example 1: Finding All Modifications in the Last Archived Redo Log File
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 2: Grouping DML Statements into Committed Transactions
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE username IN ('HR', 'OE');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 3: Formatting the Reconstructed SQL
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_16_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO FROM V$LOGMNR_CONTENTS;
SQL> SELECT username AS USR, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_UNDO FROM V$LOGMNR_CONTENTS;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 4: Using the LogMiner Dictionary in the Redo Log Files
SQL> SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);

Find a redo log file that contains the end of the dictionary extract.
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 210);

Find the redo log file that contains the start of the data dictionary extract that matches the end of the dictionary found in the previous step.
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);

Specify the list of the redo log files of interest.
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
Query the V$LOGMNR_LOGS view to display the list of redo log files to be analyzed, including their timestamps.
SQL> SELECT FILENAME AS name, LOW_TIME, HIGH_TIME FROM V$LOGMNR_LOGS;

SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT USERNAME AS usr, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> SELECT SQL_REDO FROM V$LOGMNR_CONTENTS WHERE XIDUSN = 1 and XIDSLT = 2 and XIDSQN = 1594;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 5: Tracking DDL Statements in the Internal Dictionary
SQL> SELECT NAME, SEQUENCE# FROM V$ARCHIVED_LOG WHERE FIRST_TIME = (SELECT MAX(FIRST_TIME) FROM V$ARCHIVED_LOG);
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_END = 'YES' and SEQUENCE# <= 210);
SQL> SELECT NAME, SEQUENCE#, DICTIONARY_BEGIN d_beg, DICTIONARY_END d_end FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX (SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES' and SEQUENCE# <= 208);

Make sure you have a complete list of redo log files.
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE SEQUENCE# >= 207 AND SEQUENCE# <= 210 ORDER BY SEQUENCE# ASC;
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_210_482701534.dbf', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_209_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_208_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/usr/oracle/data/db1arch_1_207_482701534.dbf');
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.DDL_DICT_TRACKING + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 6: Filtering Output by Time Range
Create a list of redo log files to mine.
--
-- my_add_logfiles
-- Add all archived logs generated after a specified start_time.
--
CREATE OR REPLACE PROCEDURE my_add_logfiles (in_start_time IN DATE) AS
CURSOR c_log IS
SELECT NAME FROM V$ARCHIVED_LOG
WHERE FIRST_TIME >= in_start_time;

count pls_integer := 0;
my_option pls_integer := DBMS_LOGMNR.NEW;

BEGIN
FOR c_log_rec IN c_log
LOOP
DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => c_log_rec.name,
OPTIONS => my_option);
my_option := DBMS_LOGMNR.ADDFILE;
DBMS_OUTPUT.PUT_LINE('Added logfile ' || c_log_rec.name);
END LOOP;
END;
/

SQL> EXECUTE my_add_logfiles(in_start_time => '13-jan-2003 14:00:00');

Query V$LOGMNR_LOGS to see the list of redo log files.
SQL> SELECT FILENAME name, LOW_TIME start_time, FILESIZE bytes FROM V$LOGMNR_LOGS;
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '13-jan-2003 15:00:00', ENDTIME => '13-jan-2003 16:00:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
SQL> SELECT TIMESTAMP, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) AS XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'OE';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();


* Examples of Mining Without Specifying the List of Redo Log Files Explicitly
* Example 1: Mining Redo Log Files in a Given Time Range
SQL> SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS WHERE LOW_TIME > '10-jan-2003 12:01:34';
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '10-jan-2003 12:01:34', ENDTIME => SYSDATE, OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> SELECT FILENAME name FROM V$LOGMNR_LOGS;
SQL> SELECT USERNAME AS usr,(XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER IS NULL OR SEG_OWNER NOT IN ('SYS', 'SYSTEM') AND TIMESTAMP > '10-jan-2003 15:59:53';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 2: Mining the Redo Log Files in a Given SCN Range
SQL> SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTSCN => 56453576, ENDSCN => 56454208, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL + DBMS_LOGMNR.CONTINUOUS_MINE);
SQL> SELECT FILENAME name, LOW_SCN, NEXT_SCN FROM V$LOGMNR_LOGS;
SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG);
SQL> SELECT SCN, (XIDUSN || '.' || XIDSLT || '.' || XIDSQN) as XID, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER NOT IN ('SYS', 'SYSTEM');
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();

* Example 3: Using Continuous Mining to Include Future Values in a Query
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => SYSDATE, ENDTIME => SYSDATE + 5/24, OPTIONS => DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SET ARRAYSIZE 1;
SQL> SELECT USERNAME AS usr, SQL_REDO FROM V$LOGMNR_CONTENTS WHERE SEG_OWNER = 'HR' AND TABLE_NAME = 'EMPLOYEES';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR();


* Scenario 1: Using LogMiner to Track Changes Made by a Specific User
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'log1orc1.ora', OPTIONS => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => 'log2orc1.ora', OPTIONS => DBMS_LOGMNR.ADDFILE);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(DICTFILENAME => 'orcldict.ora', STARTTIME => TO_DATE('01-Jan-1998 08:30:00','DD-MON-YYYY HH:MI:SS'), ENDTIME => TO_DATE('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));
SQL> SELECT SQL_REDO, SQL_UNDO FROM V$LOGMNR_CONTENTS WHERE USERNAME = 'joedevo' AND SEG_NAME = 'salary';
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

* Scenario 2: Using LogMiner to Calculate Table Access Statistics
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => TO_DATE('07-Jan-2003 08:30:00','DD-MON-YYYY HH:MI:SS'), ENDTIME => TO_DATE('21-Jan-2003 08:45:00','DD-MON-YYYY HH:MI:SS'), DICTFILENAME => '/usr/local/dict.ora');
SQL> SELECT SEG_OWNER, SEG_NAME, COUNT(*) AS Hits FROM V$LOGMNR_CONTENTS WHERE SEG_NAME NOT LIKE '%$' GROUP BY SEG_OWNER, SEG_NAME ORDER BY Hits DESC;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

Tuesday, December 1, 2009

Queries to predict Oracle database growth

Usually needed for capacity planning purposes


For the whole database size:
/* Formatted on 2009/12/02 11:12 (Formatter Plus v4.8.8) */
SELECT TIME, sum(tblsize_in_mb) dbsize, sum(usedsize_in_mb) db_used_size, sum(tblsize_in_mb) - sum(usedsize_in_mb) dbsize_free
  FROM (SELECT   /*+ parallel(c, 5) */
                 TO_CHAR (c.begin_interval_time, 'MON') "TIME", b.NAME,
                 ROUND (MAX (a.tablespace_size * d.block_size) / 1024 / 1024
                       ) tblsize_in_mb,
                 ROUND (MAX (a.tablespace_maxsize * d.block_size) / 1024
                        / 1024
                       ) totalsize_in_mb,
                 ROUND (MAX (a.tablespace_usedsize * d.block_size) / 1024
                        / 1024
                       ) usedsize_in_mb
            FROM dba_hist_tbspc_space_usage a,
                 v$tablespace b,
                 dba_hist_snapshot c,
                 dba_tablespaces d
           WHERE a.tablespace_id = b.ts#
             AND a.snap_id = c.snap_id
             AND b.NAME = d.tablespace_name
        GROUP BY TO_CHAR (c.begin_interval_time, 'MON'),
                 b.NAME
        ORDER BY 3, 2, 1 DESC)
group by time
order by time desc;
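Note that this query depends on AWR history, so it can only look back as far as the AWR retention window; you can check the snapshot interval and retention with:
SQL> SELECT SNAP_INTERVAL, RETENTION FROM DBA_HIST_WR_CONTROL;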

To see growth per segment:


select *
from (select to_char(end_interval_time, 'MM/DD/YY') mydate,
     sum(space_used_delta) / 1024 / 1024 "Space used (MB)",
     avg(c.bytes) / 1024 / 1024 "Total Object Size (MB)",
round(sum(space_used_delta) / sum(c.bytes) * 100, 2) "Percent of Total Disk Usage"
from
   dba_hist_snapshot sn,
   dba_hist_seg_stat a,
   dba_objects b,
   dba_segments c
--where begin_interval_time > trunc(sysdate) - &days_back
where sn.snap_id = a.snap_id
and b.object_id = a.obj#
and b.owner = c.owner
and b.object_name = c.segment_name
--and c.segment_name = '&segment_name'
group by to_char(end_interval_time, 'MM/DD/YY'))
order by to_date(mydate, 'MM/DD/YY');

Sunday, November 22, 2009

Check PGA utilization by Process on Database


set lines 110


col unm format a30 hea "USERNAME (SID,SERIAL#)"

col pus format 999,990.9 hea "PROC KB
USED"

col pal format 999,990.9 hea "PROC KB
MAX ALLOC"

col pgu format 99,999,990.9 hea "PGA KB
USED"

col pga format 99,999,990.9 hea "PGA KB
ALLOC"

col pgm format 99,999,990.9 hea "PGA KB
MAX MEM"



select s.username || ' (' || s.sid || ',' || s.serial# || ')' unm,
       round((sum(m.used)/1024),1) pus,
       round((sum(m.max_allocated)/1024),1) pal,
       round((sum(p.pga_used_mem)/1024),1) pgu,
       round((sum(p.pga_alloc_mem)/1024),1) pga,
       round((sum(p.pga_max_mem)/1024),1) pgm
from v$process_memory m, v$session s, v$process p
where m.serial# = p.serial#
  and p.pid = m.pid
  and p.addr = s.paddr
  and s.username is not null
group by s.username, s.sid, s.serial#
order by unm;


Tuesday, October 20, 2009

bla..bla..bla net

For example ....





on the server:
putty -ssh -C -2 -R 1522:114.59.249.20:1521 -R 5561:114.59.249.20:5560 -L 3302:localhost:3308 -l usrname -pw pass***word -N xxxxx.com

on the client:
putty -ssh -C -2 -L 1523:localhost:1522 -L 5562:localhost:5561 -R 3308:192.168.204.128:3307 -l usrname -pw pass***word -N xxxxxx.com




ngerem********t
plink.exe -L 3307:localhost:3306 -l usrnm -pw pass*****wd -N +++++++.com










Wednesday, September 9, 2009

Troubleshoot Oracle standby DB

8.3.1.2.2 Recovering From Errors
To correct the problems described in Section 8.3.1.2.1, perform the following steps:
1. Create the raw slice on the standby database and assign permissions to the Oracle user.
2. Query the V$DATAFILE view. For example:
    SELECT NAME FROM V$DATAFILE;
NAME
--------------------------------------------------------------------------------
/u01/MILLER/MTS/system01.dbf
/u01/MILLER/MTS/undotbs01.dbf
/u01/MILLER/MTS/sysaux01.dbf
/u01/MILLER/MTS/users01.dbf
/u01/MILLER/MTS/mts.dbf
/dev/raw/raw100
/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007
SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=MANUAL;
SQL> ALTER DATABASE CREATE DATAFILE
           '/u01/app/oracle/product/10.1.0/dbs/UNNAMED00007'
           AS
           '/dev/raw/raw101';

3. In the standby alert log you should see information similar to the following:
    Fri Apr 8 10:09:30 2005
    alter database create datafile
    '/dev/raw/raw101' as '/dev/raw/raw101'
    Fri Apr 8 10:09:30 2005
    Completed: alter database create datafile
    '/dev/raw/raw101' a
4. On the standby database, set STANDBY_FILE_MANAGEMENT to AUTO and restart
    Redo Apply:
    SQL> ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
    SQL> RECOVER MANAGED STANDBY DATABASE DISCONNECT;
    At this point Redo Apply uses the new raw device datafile and recovery continues.

om tyo - leaving a comment here is mandatory :)

Sunday, September 6, 2009

Learn effective PL/SQL

Here I try to review how to write effective PL/SQL, based on existing references, expressed as PL/SQL examples.
The scripts below are written against the SH schema.

declare
cursor c_prod is
select prod_id,
prod_name,
prod_desc,
prod_subcategory
from products
where prod_category_desc = 'Electronics';
rec_prod c_prod%rowtype;
begin
open c_prod;
loop
fetch c_prod into rec_prod;
exit when c_prod%notfound;
dbms_output.put_line('description product '||rec_prod.prod_desc);
end loop;
close c_prod;
end;

declare
v_prod_name products.prod_name%type;
begin
select prod_name
into v_prod_name
from products
where prod_category_desc = 'Electronics'
and prod_id = 300;
if sql%notfound then
dbms_output.put_line('the data not found');
else
dbms_output.put_line('the name of product -> '||v_prod_name);
end if;
exception when no_data_found then
dbms_output.put_line('no data found');
end;

Thursday, August 27, 2009

Issue wait event "Log File Sync"

Oracle again:
for application and transaction types that commit on every transaction,
a likely issue is the "log file sync" wait event.
There are several solutions, on both the application side and the database side.

One blog that explains a solution is
http://sysdba.wordpress.com/2006/04/20/asynchronous-commit-a-new-feature-in-oracle-10gr2/

then, from the database side:
a. Reduce contention on existing disks.
b. Put log files on faster disks.
c. Put alternate redo logs on different disks to minimize the effect of the archive processes (log file switches).
d. Review application design, use NOLOGGING operations where appropriate, and avoid changing more data than required.
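From the application side, the option described in the blog linked above is the asynchronous commit feature of 10gR2, which lets a session return from COMMIT without waiting for LGWR; a minimal sketch (trade-off: a committed transaction can be lost on instance failure):
SQL> COMMIT WRITE BATCH NOWAIT;
or for the whole session:
SQL> ALTER SESSION SET COMMIT_WRITE = 'BATCH,NOWAIT';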

Wednesday, August 26, 2009

Setup Oracle Stream for Schema replication between DB

To build replication of one schema to another using Streams,
proceed as follows. As an example, cloning
the HR schema from database PRODDB to CLONEDB:
1. prepare a tablespace for LogMiner on PRODDB

CREATE TABLESPACE LOGMNRTS DATAFILE '/oradata/DB/logmnrtbs.dbf'
SIZE 100M AUTOEXTEND ON MAXSIZE UNLIMITED;
 
2. point LogMiner at the new tablespace
BEGIN
DBMS_LOGMNR_D.SET_TABLESPACE('LOGMNRTS');

END;

3. supplemental logging
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY, UNIQUE, FOREIGN KEY) COLUMNS;
ALTER TABLE apps_a.emp ADD SUPPLEMENTAL LOG GROUP pk_emp (id) ALWAYS;

4. export the HR schema from PRODDB to CLONEDB
5. create the Streams admin user on both databases (PRODDB and CLONEDB)
    create user STRMADMIN identified by STRMADMIN;
    grant CONNECT, DBA, IMP_FULL_DATABASE, EXP_FULL_DATABASE to "STRMADMIN";
    exec DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRMADMIN');
6. log in as strmadmin on PRODDB and
    create a database link to the strmadmin user on CLONEDB
7. configure the apply process on CLONEDB
8. configure the capture process on PRODDB
9. instantiate
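For step 6, a minimal sketch of the database link from STRMADMIN on PRODDB (the link name STRM matches the destination queue used in the propagation rule; the TNS alias 'CLONEDB' is an assumption and must resolve to the target database):
SQL> CREATE DATABASE LINK STRM CONNECT TO STRMADMIN IDENTIFIED BY STRMADMIN USING 'CLONEDB';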







CLONEDB (apply process), logged in as user strmadmin
-----------------------------


BEGIN

DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => '"STREAMS_APPLY_QT"',
queue_name => '"STREAMS_APPLY_Q"',
queue_user => '"STRMADMIN"');
END;
/

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => '"HR"',
streams_type => 'apply',
streams_name => '"STREAMS_APPLY"',
queue_name => '"STRMADMIN"."STREAMS_APPLY_Q"',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
inclusion_rule => true);
END;
/

BEGIN
DBMS_APPLY_ADM.ALTER_APPLY(
apply_name => 'STREAMS_APPLY',
apply_user => 'HR');
END;

COMMIT;

If we do not want the apply process to abort every time an error occurs,
there must be handling so that the apply process keeps running even when there are errors:

BEGIN
DBMS_APPLY_ADM.SET_PARAMETER(
  apply_name => 'STREAMS_APPLY',
  parameter  => 'DISABLE_ON_ERROR',
  value      => 'N');
END;

start apply
BEGIN
DBMS_APPLY_ADM.START_APPLY(
apply_name => 'STREAMS_APPLY');
END;

stop apply
BEGIN
DBMS_APPLY_ADM.STOP_APPLY(
apply_name => 'STREAMS_APPLY');
END;

PRODDB (capture process), logged in as user strmadmin
-----------------------------
BEGIN

DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => '"STREAMS_CAPTURE_QT"',
queue_name => '"STREAMS_CAPTURE_Q"',
queue_user => '"STRMADMIN"');
END;

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_RULES(
schema_name => '"HR"',
streams_type => 'capture',
streams_name => '"STREAMS_CAPTURE"',
queue_name => '"STRMADMIN"."STREAMS_CAPTURE_Q"',
include_dml => true,
include_ddl => true,
include_tagged_lcr => false,
inclusion_rule => true);
END;

BEGIN
DBMS_STREAMS_ADM.ADD_SCHEMA_PROPAGATION_RULES(
schema_name => '"HR"',
streams_name => '"STREAMS_PROPAGATION"',
source_queue_name => '"STRMADMIN"."STREAMS_CAPTURE_Q"',
destination_queue_name => '"STRMADMIN"."STREAMS_APPLY_Q"@STRM',
include_dml => true,
include_ddl => true,
inclusion_rule => true );
END;
/
COMMIT;

Instantiate the SCN for the schema:
1. on PRODDB, logged in as user strmadmin

BEGIN

DBMS_OUTPUT.PUT_LINE ('Instantiation SCN is: ' || DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER());
END;

after obtaining the value

2. on CLONEDB, logged in as user strmadmin
exec DBMS_APPLY_ADM.set_schema_instantiation_scn(source_schema_name => 'HR',source_database_name => 'PRODDB',instantiation_scn => 514740);


3. Then start the capture process

BEGIN

DBMS_CAPTURE_ADM.START_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;

BEGIN
DBMS_CAPTURE_ADM.STOP_CAPTURE(
capture_name => 'STREAMS_CAPTURE');
END;


then run a test .... !!!
to monitor what is happening in the stream, use the following
views:

v$streams_apply_reader
v$streams_apply_coordinator
v$streams_capture
dba_apply
dba_apply_error
dba_capture
dba_propagation
dba_queue_schedules

to manage the stream, use the packages below:
1. dbms_apply_adm
2. dbms_capture_adm
3. dbms_streams_adm

Managing Logical Standby Database

A logical standby also needs to be managed and monitored; some of the steps are as follows:
1. to put the standby database into guard mode

ALTER DATABASE GUARD ...... ;
the modes are
    a. ALL
    b. NONE
    c. STANDBY
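For example, to block changes only to objects maintained by SQL Apply, and then to check the current setting:
SQL> ALTER DATABASE GUARD STANDBY;
SQL> SELECT GUARD_STATUS FROM V$DATABASE;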
   
2. to manage transactions
    use the following package:
    dbms_logstdby
    to skip a transaction:
    exec DBMS_LOGSTDBY.SKIP_TRANSACTION(10,38,234);
    skip_transaction(xidusn_p IN NUMBER,
                           xidslt_p IN NUMBER,
                           xidsqn_p IN NUMBER);
    to view the transactions:
    sqlplus > select * from dba_logstdby_events where current_scn is not null

3. to skip objects:

    exec DBMS_LOGSTDBY.SKIP('DML','SCOTT','EMP');
    exec DBMS_LOGSTDBY.SKIP('PROCEDURE', 'XYZ', '%', null); 
    exec DBMS_LOGSTDBY.SKIP('SCHEMA_DDL', 'VCS_MONITOR', '%', null);
    exec DBMS_LOGSTDBY.SKIP('DML', 'VCS_MONITOR', '%', null); 

4. SQL Apply usually deletes archived logs that are no longer needed
automatically; to prevent that, you can use the following
EXECUTE DBMS_LOGSTDBY.APPLY_SET('LOG_AUTO_DELETE', FALSE);
more details on the functions of dbms_logstdby are at
 
http://www.psoug.org/reference/dbms_logstdby.html




Queries for monitoring



select * from DBA_LOGSTDBY_PROGRESS
select * from DBA_LOGSTDBY_SKIP_TRANSACTION
select * from DBA_LOGSTDBY_PARAMETERS
select * from DBA_LOGSTDBY_LOG
select * from DBA_LOGSTDBY_SKIP
select * from DBA_LOGSTDBY_UNSUPPORTED
select * from DBA_LOGSTDBY_EVENTS
select * from DBA_LOGSTDBY_HISTORY




LOGSTDBY$APPLY_PROGRESS
LOGSTDBY$APPLY_MILESTONE
LOGSTDBY$SCN
LOGSTDBY$SKIP_SUPPORT
LOGSTDBY$SKIP
LOGSTDBY_SUPPORT
LOGSTDBY_UNSUPPORTED_TABLES
LOGSTDBY_LOG
V$LOGSTDBY
V$LOGSTDBY_STATS
V_$LOGSTDBY_TRANSACTION
V$LOGSTDBY_STATE
GV$LOGSTDBY

note:

the package commonly used to manage a logical standby
is dbms_logstdby,
and there is also DBMS_INTERNAL_LOGSTDBY (still not sure how to use it)






Sunday, August 23, 2009

Create logical standby database 10g R2

On this occasion I want to share some experience; those who have already tried this ... keep quiet (heh..heh..heh)
1. primary database (ORADB01)
2. standby database (ORADB02)

CONFIGURATION ON THE PRIMARY (ORADB01):
listener.ora
--------------
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/apps/product/10.2.0/db_1)
(PROGRAM = extproc)
)
)

LISTENER_LDG =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db01.com)(PORT = 1521))
)
)
)
tnsnames.ora
--------------
LISTENER_LDG =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db01.com)(PORT = 1521))
)

ORADB02 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = db02.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB02)
)
)

ORADB01 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = db01.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB01)
)
)

INIT.ORA
------------
log_archive_dest_1='LOCATION=/oracle/apps/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ORADB01'
log_archive_dest_2='SERVICE=ORADB02 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORADB02'
log_archive_dest_state_1=enable
log_archive_dest_state_2=defer
log_archive_max_processes=4
fal_server=ORADB02
fal_client=ORADB01
standby_file_management=auto
db_name=ORADB01
db_unique_name=ORADB01

CONFIGURATION ON THE STANDBY (ORADB02):

listener.ora
--------------
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /oracle/apps/product/10.2.0/db_1)
(PROGRAM = extproc)
)
)

LISTENER_STDBY =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db02.com)(PORT = 1521))
)
)
)

tnsnames.ora
-----------------
LISTENER_STDBY =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = db02.com)(PORT = 1521))
)

ORADB02 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = db02.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB02)
)
)

ORADB01 =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = db01.com)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = ORADB01)
)
)


init_parameter_standby :
-----------------------------
log_archive_dest_1='LOCATION=/oracle/apps/archive VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=ORADB02'
log_archive_dest_2='SERVICE=ORADB01 LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=ORADB01'
log_archive_dest_state_1=enable
log_archive_dest_state_2=defer
log_archive_max_processes=4
fal_server=ORADB01
fal_client=ORADB02
standby_file_management=auto
db_name=ORADB01
db_unique_name=ORADB02


1. preparation for building the standby database on ORADB02
@primary
put the database into archivelog mode
SQL> alter database force logging;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;

add standby log files
alter database add standby logfile group 4 ('/oracle/apps/oradata/ORADB01/stdby_redo04.log') size 52428800;

alter database add standby logfile group 5 ('/oracle/apps/oradata/ORADB01/stdby_redo05.log') size 52428800;

alter database add standby logfile group 6 ('/oracle/apps/oradata/ORADB01/stdby_redo06.log') size 52428800;

alter database add standby logfile group 7 ('/oracle/apps/oradata/ORADB01/stdby_redo07.log') size 52428800;

back up for the standby using RMAN:

a. rman target=/
b. backup full database format '/oracle/apps/flash_recovery_area/%d_%U.bckp' plus archivelog format
'/oracle/apps/flash_recovery_area/%d_%U.bckp';
c. CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/apps/flash_recovery_area/%U';
d. BACKUP CURRENT CONTROLFILE FOR STANDBY;
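The four RMAN steps above can be driven from one command file instead of typing them interactively. A minimal sketch, with the channel configuration issued before the backup; the file name stdby_backup.rcv is a hypothetical choice:

```shell
# RMAN command file collecting steps b-d above (channel config first).
cat > stdby_backup.rcv <<'EOF'
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/apps/flash_recovery_area/%U';
BACKUP FULL DATABASE FORMAT '/oracle/apps/flash_recovery_area/%d_%U.bckp'
  PLUS ARCHIVELOG FORMAT '/oracle/apps/flash_recovery_area/%d_%U.bckp';
BACKUP CURRENT CONTROLFILE FOR STANDBY;
EOF
# Run it against the primary (commented out here; needs a real instance):
# rman target=/ @stdby_backup.rcv
```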

copy the backup files to the standby host, to the same location:
scp /oracle/apps/flash_recovery_area/* oracle@192.168.3.122:/oracle/apps/flash_recovery_area/

@standby database
1. create the password file with orapwd
2. start the instance NOMOUNT using the pfile
3. start the listener
4. restore/clone with RMAN using the following script:
5. rman target=sys/oracle@oradb01 auxiliary=/
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
6. create the standby logfiles:
alter database add standby logfile group 4 ('/oracle/apps/oradata/ORADB01/stdby_redo04.log') size 52428800;

alter database add standby logfile group 5 ('/oracle/apps/oradata/ORADB01/stdby_redo05.log') size 52428800;

alter database add standby logfile group 6 ('/oracle/apps/oradata/ORADB01/stdby_redo06.log') size 52428800;

alter database add standby logfile group 7 ('/oracle/apps/oradata/ORADB01/stdby_redo07.log') size 52428800;

7. alter database recover managed standby database using current logfile disconnect from session;
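Steps 1-5 above can be sketched as one shell session. The orapwd and pfile names below are hypothetical placeholders, and the live commands are commented out since they need a real ORACLE_HOME:

```shell
# Write the RMAN command file for the standby clone (step 5 above).
cat > dup_standby.rcv <<'EOF'
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;
EOF
# Live commands, commented out (hypothetical file names):
# orapwd file=$ORACLE_HOME/dbs/orapwORADB02 password=oracle            # step 1
# echo "startup nomount pfile='initORADB02.ora'" | sqlplus / as sysdba # step 2
# lsnrctl start                                                        # step 3
# rman target=sys/oracle@oradb01 auxiliary=/ @dup_standby.rcv          # steps 4-5
```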



prepare the logical standby database :
===========================
@primary
------------
change/add the following parameters
SQL> alter system set log_archive_dest_3='LOCATION=/oracle/apps/archive2/ valid_for=(standby_logfiles,standby_role) db_unique_name=DBORA01' scope=both;

SQL> alter system set log_archive_dest_1='LOCATION=/oracle/apps/archive/ valid_for=(online_logfiles,all_roles) db_unique_name=DBORA01' scope=BOTH;

SQL> alter system set log_archive_dest_state_3=enable scope=both;

@standby
------------
SQL> alter system set log_archive_dest_3='LOCATION=/oracle/apps/archive2/ valid_for=(standby_logfiles,standby_role) db_unique_name=DBORA02' scope=both;

SQL> alter system set log_archive_dest_1='LOCATION=/oracle/apps/archive/ valid_for=(online_logfiles,all_roles) db_unique_name=DBORA02' scope=BOTH;

SQL> alter system set log_archive_dest_state_3=enable scope=both;

@primary
-------------
build the LogMiner dictionary
EXECUTE DBMS_LOGSTDBY.BUILD;

@standby
1. alter database recover managed standby database cancel;
2. the standby is given its new db_name and DBID
3. alter database recover to logical standby ORADB02;
4. shutdown immediate
5. startup mount
6. alter database open resetlogs
7. alter database start logical standby apply immediate; (starts the apply process)

watch the alert logs on both databases
for monitoring, the queries below can be used:

SELECT EVENT_TIME, STATUS, EVENT FROM DBA_LOGSTDBY_EVENTS
ORDER BY EVENT_TIMESTAMP, COMMIT_SCN;

SELECT APPLIED_SCN, LATEST_SCN, MINING_SCN, RESTART_SCN FROM V$LOGSTDBY_PROGRESS;

SELECT APPLIED_TIME, LATEST_TIME, MINING_TIME, RESTART_TIME FROM V$LOGSTDBY_PROGRESS;

SELECT * FROM V$LOGSTDBY_STATS;

SELECT SESSION_ID, STATE FROM V$LOGSTDBY_STATE;

then make some changes on the primary and watch the results on the standby
make sure the standby and the primary are both open in READ WRITE mode:


HANDLER :
skip transactions for a particular table
EXECUTE DBMS_LOGSTDBY.SKIP (stmt => 'DML', schema_name => 'SCOTT', object_name => 'TESTLOGICAL');

to be continued .... for handler







Saturday, August 22, 2009

Nice addresses for Oracle tips and tricks

Dear All,
For the Oracle gurus out there, just sharing some links to update your knowledge, and sorry if this is a REPOST:
1. how to build a logical standby database in 10g R2
2. for those preparing for the exams, downloads are available here:
a. examcollection.info
b. www.freepreptest.com/examworx/1Z0-042.exe
for this one just change the file name at the end, e.g. 1Z0-042.exe to 1Z0-050.exe
........ to be continued for updates

Friday, August 21, 2009

Unable to create a new listener from the EM Console

This is a bug in EM, so the workaround we can use is
as follows:
1) go to $ORACLE_HOME/network/tools
2) open the file NetProperties
3) comment out the line INSTALLEDCOMPONENTS=ORACLENET

contents of NetProperties :

[oracle@oracentos tools]$ cat NetProperties
#Thu Aug 20 04:04:16 WIT 2009
ListenerTrace=Off
IPC=yes
CommentWarning=YES
TCPS=yes
USDOMESTIC=YES
Names=YES
TCP=yes
ProfileTrace=Off
NamesTrace=Off
NMP=no
#INSTALLEDCOMPONENTS=ORACLENET
p2=IPC
p1=TCP

Wednesday, August 12, 2009

copy full directory to another server

To copy a whole directory, with all its contents, from one (Unix) server
to another, an example command is as follows:

scp -rp OraClient/ appigate@10.128.2.84:/export/home/appigate

Monday, August 10, 2009

Failover test Oracle Data Guard

AT PRIMARY
to perform a failover :
1. enable Flashback Database on the primary
2. select flashback_on from v$database;
3. shutdown immediate
4. startup mount
5. alter database flashback on
6. alter database open
7. select flashback_on from v$database;
8. shutdown abort


AT STANDBY :
1. make sure there is no archive gap on the standby; if there is a gap, register the missing log:
alter database register physical logfile '****.arch';

2. perform the failover on the standby :
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
alter database commit to switchover to primary with session shutdown;
shutdown the database
startup
check the last SCN before the failover
select to_char(standby_became_primary_scn) from v$database;
> 312312

start the old primary in mount mode:
startup mount
flashback database to scn 312312

alter database flashback off;

then shut down the old primary database
from the standby that has become the new primary, create another controlfile
to be used by the old primary

alter database create standby controlfile as 'test.ctl'
copy it to the old primary
then startup mount
and start managed standby recovery

STATUS = not yet tidied up (mode=on)

Change the sessions and processes parameters

Changing sessions and processes must be done in balance; by default
the formula is as follows:

sessions = (1.1 x processes) + 5
so if processes = 1000,
the recommended sessions value is 1105

in 11g it is enough to change processes; the sessions value is then derived
automatically using the formula above
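The rule of thumb above can be checked quickly on the shell, here with the 1000-process example:

```shell
# sessions = (1.1 x processes) + 5; with processes = 1000 this prints 1105
awk 'BEGIN { p = 1000; print int(1.1 * p) + 5 }'
```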

Thursday, August 6, 2009

Unix - Unix - Unix

on solaris :
a clean reboot can be done using
shutdown -y -i6 -g0

to watch processes, like the top tool,
on SunOS 5.10 you can use

prstat -a

Sunday, August 2, 2009

Create a diagnostic package using ADRCI in 11g

An announcement for those who don't know yet (those who already do, just keep quiet, hehe):
to save the hassle of building a package for an SR, Oracle DB 11g has a way to diagnose
and collect all the reports into a single package,
which is then zipped and sent to MetaLink.
It uses ADRCI; here is how:

$ adrci
adrci > show homes
if there are many paths, set the home path
adrci > set homepath diag/rdbms/oradev/ORADEV
adrci > show alert -tail
adrci > show incident
adrci > show incident -mode detail
adrci > show incident -mode DETAIL -p "incident_id=24177"

if an error has actually occurred, just package it

adrci > ips create package
adrci > ips add incident 24177 package 1
adrci > ips add file /oracle/apps/diag/rdbms/oradev/ORADEV/incident/incdir_24177/ORADEV_arc3_3084_i24177.trc package 2
adrci > ips generate package 1 in /oracle/home
once you have finished adding your analysis data, just close the packaging process
adrci > ips finalize package 1
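The interactive adrci session above can also be scripted: the commands go into a plain text file (the name pack_incident.adi is a hypothetical choice) and run with adrci's script= parameter:

```shell
# ADRCI command script reproducing the packaging steps above.
cat > pack_incident.adi <<'EOF'
set homepath diag/rdbms/oradev/ORADEV
ips create package
ips add incident 24177 package 1
ips generate package 1 in /oracle/home
ips finalize package 1
EOF
# Run it (commented out here; needs an 11g ORACLE_HOME on the PATH):
# adrci script=pack_incident.adi
```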


good luck trying it :)




Saturday, August 1, 2009

Inconsistent Database Recovery

For an inconsistent recovery (open resetlogs),
say we will restore from a full backup and recover archived logs up to a certain sequence,
to recover because a table has been dropped or truncated. Before that, first check
roughly when the table was dropped or truncated.

check the archived logs first
select * from v$archived_log
where creator <> 'RMAN';

from this we can estimate up to which sequence, time, or SCN we can restore
and recover

the script is as follows:

RUN {
  SET UNTIL SCN 1000;
  # Alternatives:
  # SET UNTIL TIME 'Nov 15 2004 09:00:00';
  # SET UNTIL SEQUENCE 9923;
  RESTORE DATABASE;
  RECOVER DATABASE;
}

what I have tried myself,
up to sequence 7

RUN {
  # SET UNTIL SCN 1000;
  # Alternatives:
  # SET UNTIL TIME 'Nov 15 2004 09:00:00';
  SET UNTIL SEQUENCE 8;
  RESTORE DATABASE;
  RECOVER DATABASE;
  ALTER DATABASE OPEN RESETLOGS;
}

and DONE

hope this is useful :)

one more addition:
let's check which backups can be restored
from the incrementals,
i.e. up to which sequence, SCN, or time the restore can go

select sequence#,
to_char(first_time, 'DD-MM-YY HH24:MI:SS') timebkp
from v$backup_archivelog_details;

select name,
sequence#,
registrar,
to_char(first_time, 'DD-MM-YY HH24:MI:SS') timearc
from v$archived_log;

:D ....

Disable resource autostart in RAC

Say we have a modest PC with VMware installed for RAC, and everything is installed, both
Clusterware and the database. When the VMware nodes reboot, both become very heavy,
because the RAC database instances are automatically restarted after all resources start up;
the VMware gets even heavier and sometimes hangs even though the PC has 3 GB of memory. Ouch.
To avoid this, we disable autostart for certain resources, for example
the gsd resource, or the instance and service resources.
Once Clusterware is up, we manually start gsd and the instances or services.
Here is how, for example for the gsd resource:

1. crs_stat -p ora.rac1.gsd
2. crs_stat -p ora.rac1.gsd > $ORA_CRS_HOME/crs/public/ora.rac1.gsd.cap (add the .cap extension)
3. vi ora.rac1.gsd.cap
change the parameter AUTO_START=1 to AUTO_START=2
4. then update the resource profile
crs_register -u ora.rac1.gsd
5. check the current profile with crs_stat -p ora.rac1.gsd
the values should look like this

NAME=ora.rac1.gsd
TYPE=application
ACTION_SCRIPT=/oracle/apps/product/10.2.0/crs_1/bin/racgwrap
ACTIVE_PLACEMENT=0
AUTO_START=2
CHECK_INTERVAL=600
DESCRIPTION=CRS application for GSD on node
FAILOVER_DELAY=0
FAILURE_INTERVAL=0
FAILURE_THRESHOLD=0
HOSTING_MEMBERS=rac1
OPTIONAL_RESOURCES=
PLACEMENT=restricted
REQUIRED_RESOURCES=
RESTART_ATTEMPTS=5
SCRIPT_TIMEOUT=600
START_TIMEOUT=0
STOP_TIMEOUT=0
UPTIME_THRESHOLD=7d
USR_ORA_ALERT_NAME=
USR_ORA_CHECK_TIMEOUT=0
USR_ORA_CONNECT_STR=/ as sysdba
USR_ORA_DEBUG=0

NOTE = for an instance resource, RESTART_ATTEMPTS must be 0 or 1
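Step 3 above edits the .cap file by hand; the same change can be made with sed. A self-contained sketch on a sample profile (in real life the input comes from crs_stat -p, and sed -i assumes GNU sed):

```shell
# Sample .cap content standing in for the output of: crs_stat -p ora.rac1.gsd
cat > ora.rac1.gsd.cap <<'EOF'
NAME=ora.rac1.gsd
TYPE=application
AUTO_START=1
RESTART_ATTEMPTS=5
EOF
# Flip AUTO_START from 1 to 2 (2 = never autostart).
sed -i 's/^AUTO_START=1$/AUTO_START=2/' ora.rac1.gsd.cap
# Then re-register the profile (commented out here; needs Clusterware):
# crs_register -u ora.rac1.gsd
```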

status = DONE :)

Tuesday, July 28, 2009

Change automatic maintenance task status manually in 11g

Here is how to disable and enable the automatic maintenance tasks in
11g manually.

execute the following script to get the list of autotask windows:

/* Formatted on 2009/07/28 15:47 (Formatter Plus v4.8.8) */
SELECT a.window_name, a.window_active, a.autotask_status, a.optimizer_stats,
a.segment_advisor, a.sql_tune_advisor, a.health_monitor
FROM dba_autotask_window_clients a;

MONDAY_WINDOW
TUESDAY_WINDOW
WEDNESDAY_WINDOW
THURSDAY_WINDOW
FRIDAY_WINDOW
SATURDAY_WINDOW
SUNDAY_WINDOW

BEGIN
dbms_auto_task_admin.enable(client_name => 'auto space advisor', operation => NULL, window_name => 'SATURDAY_WINDOW');
END;
/

BEGIN
dbms_auto_task_admin.disable(client_name => 'auto optimizer stats collection', operation => NULL, window_name => 'THURSDAY_WINDOW');
END;
/


Sunday, July 26, 2009

Managing archived logs with shipped-to-standby status

So that the disk on the primary does not fill up with archived logs that have already been shipped to the standby,
we can treat those logs as obsolete. They can then be deleted or moved
elsewhere, but only the logs whose status shows they have been shipped to the standby, using the following shell
script:

[oracle@ora11g script]$ more checkarchive

export ORACLE_HOME=/oracle/apps/product/11.1.0/db_1
export ORACLE_SID=ORADEV
/oracle/apps/product/11.1.0/db_1/bin/sqlplus -s "/ as sysdba" <<EOF
set term off
set head off
set trims on
set pages 0
set feedback off
set linesize 1000
spool /oracle/home/script/checkarchtransport.txt

select name from v\$archived_log
where name is not null
and dest_id = 1
and sequence# in (
select sequence# from v\$archived_log
where name is not null
and dest_id = 2);

spool off
exit
EOF

while read i; do rm -rf "$i" ; sleep 2; done < /oracle/home/script/checkarchtransport.txt
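rm -rf leaves no way back; as a variant of the loop above, the shipped logs can first be moved to a holding directory and deleted later. A self-contained sketch with dummy file names (the real list is the sqlplus spool above):

```shell
# Create dummy archived logs and a list file standing in for checkarchtransport.txt.
mkdir -p /tmp/arch_demo /tmp/arch_obsolete
touch /tmp/arch_demo/log_1.arc /tmp/arch_demo/log_2.arc
printf '%s\n' /tmp/arch_demo/log_1.arc /tmp/arch_demo/log_2.arc > /tmp/arch_list.txt
# Move each listed archived log to the holding directory instead of deleting it.
while read f; do mv "$f" /tmp/arch_obsolete/; done < /tmp/arch_list.txt
```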

hope this helps :)

Installing RAC on Enterprise Linux 4 in VMware

Dear All,
For those who don't know yet and want to learn about installing Oracle Real Application Clusters (RAC), you can
build a realistic simulation using VMware + Red Hat EL4 as the OS, with the guide
available at the following link:

http://sr3.telmos.ru/dsvolk/wmimages/linux_x86/10g/RAC/install_guide_oracle_rac10g_El4.pdf

Just download it and follow the steps; if you want to install it on a PC, provide enough memory,
around 3 GB ideally.... good luck !!! :)

Tuesday, July 21, 2009

Oracle Application Server middle-tier clustering

Here are some commands that may be useful for managing an
Oracle Application Server cluster:

listinstances
listclusters
joincluster -cl clusterApps -i Webapps1.web-ora-1.com
start -cl clusterApps


Friday, July 10, 2009

After installing Oracle Apps 11i

After installing Oracle Apps, what should be done:
http://orapps11i.com:8000
the default passwords:
sysadmin/sysadmin
operation/welcome


$ bash
$ cd /u01/apps11i/prodappl/
the environment file is /u01/apps11i/prodappl/APPSPROD_orapps11i.env
$ . ./APPSPROD_orapps11i.env
adadmin

to shut everything down:
cd $COMMON_TOP
cd admin
cd scripts
adstpall.sh apps/apps
for the database tier, source PROD_orapps11i.env;
the scripts are in /u01/oracle/proddb/9.2.0/appsutil/scripts

[oracle@orapps11i PROD_orapps11i]$ addlnctl.sh stop PROD (listener)
addbctl.sh stop PROD
[oracle@orapps11i PROD_orapps11i]$ addbctl.sh stop immediate