
Thursday, July 30, 2009

Killing DBMS_JOB


To check scheduled jobs:

scheduled_dbms_jobs.sql

set line 200 pages 200

col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'LastDate'
col this_date for a18 head 'ThisDate'
col next_date for a18 head 'NextDate'
col interval for 9999999.000 head 'RunInterval'
col what for a60

select j.log_user, j.job, j.broken, j.failures,
       j.last_date||':'||j.last_sec last_date,
       j.this_date||':'||j.this_sec this_date,
       j.next_date||':'||j.next_sec next_date,
       j.next_date - j.last_date interval,
       j.what
from (select dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES, dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC, dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
      from dba_jobs dj) j;

What Jobs are Actually Running

select * from dba_jobs_running;

running_jobs.sql

set linesize 250
col sid for 9999 head 'SessionID'
col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'LastDate'
col this_date for a18 head 'ThisDate'
col next_date for a18 head 'NextDate'
col interval for 9999.000 head 'RunInterval'
col what for a60

select j.sid, j.log_user, j.job, j.broken, j.failures,
       j.last_date||':'||j.last_sec last_date,
       j.this_date||':'||j.this_sec this_date,
       j.next_date||':'||j.next_sec next_date,
       j.next_date - j.last_date interval,
       j.what
from (select djr.SID, dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES, dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC, dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
      from dba_jobs dj, dba_jobs_running djr
      where dj.job = djr.job ) j;

What Sessions are Running the Jobs

session_jobs.sql

select j.sid, s.spid, s.serial#, j.log_user, j.job, j.broken, j.failures,
       j.last_date||':'||j.last_sec last_date,
       j.this_date||':'||j.this_sec this_date,
       j.next_date||':'||j.next_sec next_date,
       j.next_date - j.last_date interval,
       j.what
from (select djr.SID,
             dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES, dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC, dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
      from dba_jobs dj, dba_jobs_running djr
      where dj.job = djr.job ) j,
     (select p.spid, s.sid, s.serial#
      from v$process p, v$session s
      where p.addr = s.paddr ) s
where j.sid = s.sid;

Bringing Down a DBMS_JOB

1. Find the Job You Want to Bring Down
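
If the job number is not already known, a sketch like the one below finds it from dba_jobs by matching part of the job's WHAT text ('MY_PROC' is only a placeholder):

-- Locate the job number by matching part of its WHAT text
select job, log_user, broken, what
from   dba_jobs
where  upper(what) like '%MY_PROC%';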

2. Mark the DBMS_JOB as Broken

SQL> EXEC DBMS_JOB.BROKEN(job#,TRUE);

3. Kill the Oracle Session

ALTER SYSTEM KILL SESSION 'sid,serial#';
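
If you prefer to have the database spell out the kill command, a small sketch like this builds it from dba_jobs_running and v$session:

-- Generate KILL SESSION statements for the sessions currently running jobs
select 'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ''';' kill_cmd
from   dba_jobs_running jr, v$session s
where  jr.sid = s.sid;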

4. Kill the O/S Process

For Windows, at the DOS Prompt: orakill sid spid
For UNIX at the command line> kill -9 spid

5. Check if the Job is Still Running

6. Determine the Current Number of Job Queue Processes

SQL> select name,value from v$parameter where name = 'job_queue_processes';

7. Alter the Job Queue to Zero

SQL> ALTER SYSTEM SET job_queue_processes = 0;

8. Validate that No Processes are Using the Job Queue (re-run session_jobs.sql).

9. Mark the DBMS_JOB as Not Broken

SQL> EXEC DBMS_JOB.BROKEN(job#,FALSE);

10. Alter the Job Queue to Original Value

ALTER SYSTEM SET job_queue_processes = original_value;

11. Validate that DBMS_JOB Is Running

To make sure everything is back to normal, re-run the above scripts to validate that jobs are scheduled, not broken, and executing, with the next and last date columns changing.

Oracle has given us a great tool for scheduling activities within the database. As with many things inside the database, not everything goes as planned, nor are we given adequate tools to fix some of the problems we encounter. With the eleven steps outlined here, hopefully you will have increased your arsenal to handle those runaway jobs that have given the best of us a few tense moments.



Sunday, July 12, 2009

Rename Oracle Datafile

Option 1: Tablespace datafile rename (the tablespace must be offline)

We can use the ALTER TABLESPACE ... RENAME DATAFILE command. The data file must be renamed at the OS level while the tablespace is offline, then the tablespace can be brought back online.

SQL> ALTER TABLESPACE cust_ts OFFLINE;
OS $ mv /u01/app/oracle/mysid/oldname.dbf /u01/app/oracle/mysid/newname.dbf
SQL> ALTER TABLESPACE cust_ts RENAME DATAFILE '/u01/app/oracle/mysid/oldname.dbf' TO '/u01/app/oracle/mysid/newname.dbf';
SQL> ALTER TABLESPACE cust_ts ONLINE;
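
To confirm the change, a quick dictionary check (nothing specific to this example) shows the new file name:

-- Verify the renamed data file is registered under its new name
SELECT file_name, tablespace_name, status
FROM   dba_data_files
WHERE  tablespace_name = 'CUST_TS';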

Option 2: Database datafile rename (a database shutdown is required)

We can also use the ALTER DATABASE RENAME FILE command.
The data file must be renamed in the OS (using the Unix mv command) while the database is down,
and the rename must be done while the database is not yet open (in the mount stage):

SQL> shutdown immediate;
OS $ mv /u01/app/oracle/mysid/oldname.dbf /u01/app/oracle/mysid/newname.dbf
SQL> startup mount;
SQL> ALTER DATABASE RENAME FILE '/u01/app/oracle/mysid/oldname.dbf' TO '/u01/app/oracle/mysid/newname.dbf';
SQL> ALTER DATABASE OPEN;
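
Once the rename is done, the control file entry can be checked (v$datafile is available from the mount stage onwards):

-- Verify the control file now points at the new file name
SELECT name, status FROM v$datafile;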

Tuesday, June 16, 2009

AWR

Basically, AWR (Automatic Workload Repository) is an Oracle built-in tool that collects performance-related statistics and derives performance metrics from them to track potential problems. Unlike Statspack, snapshots are collected automatically every hour by a new background process called MMON and its slave processes. By default, the collected data is automatically purged after 7 days. Both the snapshot frequency and the retention time can be modified by the user. To see the present settings, you can use:

set lines 200
col SNAP_INTERVAL format a30;
col RETENTION format a30;
select * from dba_hist_wr_control;


SNAP_INTERVAL RETENTION
------------------------------ ------------------------------
+00000 01:00:00.0 +00007 00:00:00.0
+00000 01:00:00.0 +00007 00:00:00.0


1. To change the snapshot interval and retention settings:

BEGIN
  DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
    retention => 43200,  -- Minutes (= 30 days). Current value retained if NULL.
    interval  => 60);    -- Minutes (= 1 hour). Current value retained if NULL.
END;
/


2. Snapshot ID information can be queried from the DBA_HIST_SNAPSHOT view:

select * from DBA_HIST_SNAPSHOT order by BEGIN_INTERVAL_TIME;
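
Since that view grows large over time, a sketch like the following (the one-day window is just an example) keeps the output manageable:

-- List only the snapshots taken in the last 24 hours
select snap_id, begin_interval_time, end_interval_time
from   dba_hist_snapshot
where  begin_interval_time > sysdate - 1
order  by snap_id;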

3. Existing snapshots can be removed by:

BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range (
low_snap_id => 6491,
high_snap_id => 6499);
END;
/


4. Extra snapshots can be taken:

EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;


5. Workload Repository Reports

Oracle provides two scripts to produce workload repository reports (awrrpt.sql and awrrpti.sql). They are similar in format to the Statspack reports and give the option of HTML or plain text formats. The two reports give essentially the same output, but awrrpti.sql allows you to select a single instance. The reports can be generated as follows:

@$ORACLE_HOME/rdbms/admin/awrrpt.sql
@$ORACLE_HOME/rdbms/admin/awrrpti.sql


The scripts prompt you to enter the report format (html or text), the start snapshot ID, the end snapshot ID and the report file name. The resulting report can be opened in a browser or text editor accordingly.
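
If you would rather skip the interactive prompts, the same text report can be pulled straight from the DBMS_WORKLOAD_REPOSITORY package; a sketch is below, where the DBID, instance number and snapshot IDs (borrowed from the ADDM example further down) are placeholders for your own values:

-- Sketch: generate an AWR text report without the interactive script
-- (2568139559, 1, 6664 and 6666 are example values only)
set pagesize 0 linesize 1500
select output
from   table(dbms_workload_repository.awr_report_text(2568139559, 1, 6664, 6666));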

Monday, June 15, 2009

ADDM

ADDM (Automatic Database Diagnostic Monitor) is used to diagnose database performance issues. Its analysis can be performed across any two AWR snapshots.

Three ways to run ADDM analysis:

Option 1, Running ADDM Using addmrpt.sql
(@$ORACLE_HOME/rdbms/admin/addmrpt.sql)

SQL> @$ORACLE_HOME/rdbms/admin/addmrpt.sql

... ...

Listing the last 3 days of Completed Snapshots

Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
ctgbu2 CTGBU2 6611 14 Jun 2009 00:00 1
6612 14 Jun 2009 01:00 1
6613 14 Jun 2009 02:00 1
6614 14 Jun 2009 03:00 1
6615 14 Jun 2009 04:00 1
6616 14 Jun 2009 05:00 1
6617 14 Jun 2009 06:00 1
6618 14 Jun 2009 07:00 1
6619 14 Jun 2009 08:00 1
6620 14 Jun 2009 09:00 1
6621 14 Jun 2009 10:00 1
6622 14 Jun 2009 11:00 1
6623 14 Jun 2009 12:00 1
6624 14 Jun 2009 13:00 1
6625 14 Jun 2009 14:00 1
6626 14 Jun 2009 15:00 1
6627 14 Jun 2009 16:00 1
6628 14 Jun 2009 17:00 1
6629 14 Jun 2009 18:00 1
6630 14 Jun 2009 19:00 1
6631 14 Jun 2009 20:01 1
6632 14 Jun 2009 21:00 1
6633 14 Jun 2009 22:00 1
6634 14 Jun 2009 23:00 1
6635 15 Jun 2009 00:00 1
6636 15 Jun 2009 01:00 1
6637 15 Jun 2009 02:00 1
6638 15 Jun 2009 03:00 1
6639 15 Jun 2009 04:00 1
6640 15 Jun 2009 05:00 1
6641 15 Jun 2009 06:00 1
6642 15 Jun 2009 07:00 1
6643 15 Jun 2009 08:00 1
6644 15 Jun 2009 09:00 1
6645 15 Jun 2009 10:00 1
6646 15 Jun 2009 11:00 1
6647 15 Jun 2009 12:00 1
6648 15 Jun 2009 13:00 1
6649 15 Jun 2009 14:00 1
6650 15 Jun 2009 15:00 1
6651 15 Jun 2009 16:00 1
6652 15 Jun 2009 17:00 1
6653 15 Jun 2009 18:00 1
6654 15 Jun 2009 19:00 1
6655 15 Jun 2009 20:00 1
6656 15 Jun 2009 21:00 1
6657 15 Jun 2009 22:00 1
6658 15 Jun 2009 23:00 1
6659 16 Jun 2009 00:00 1
6660 16 Jun 2009 01:00 1
6661 16 Jun 2009 02:00 1
6662 16 Jun 2009 03:00 1
6663 16 Jun 2009 04:01 1
6664 16 Jun 2009 05:01 1
6665 16 Jun 2009 06:01 1
6666 16 Jun 2009 07:01 1

Snap
Instance DB Name Snap Id Snap Started Level
------------ ------------ --------- ------------------ -----
ctgbu2 CTGBU2 6667 16 Jun 2009 08:00 1
6668 16 Jun 2009 09:00 1
6669 16 Jun 2009 10:01 1
6670 16 Jun 2009 11:00 1

Specify the Begin and End Snapshot Ids
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Enter value for begin_snap: 6664
Begin Snapshot Id specified: 6664

Enter value for end_snap: 6666
End Snapshot Id specified: 6666

Specify the Report Name
~~~~~~~~~~~~~~~~~~~~~~~
The default report file name is addmrpt_1_6664_6666.txt. To use this name,
press <return> to continue, otherwise enter an alternative.

Enter value for report_name: /tmp/addmtest.log

Using the report name /tmp/addmtest.log

Running the ADDM analysis on the specified pair of snapshots ...

Generating the ADDM report for this analysis ...

DETAILED ADDM REPORT FOR TASK 'TASK_31477' WITH ID 31477
--------------------------------------------------------

Analysis Period: 16-JUN-2009 from 05:01:03 to 07:01:09
Database ID/Instance: 2568139559/1
Database/Instance Names: CTGBU2/ctgbu2
Host Name: JPSIT01
Database Version: 10.1.0.4.0
Snapshot Range: from 6664 to 6666
Database Time: 8710 seconds
Average Database Load: 1.2 active sessions

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

FINDING 1: 14% impact (1261 seconds)
------------------------------------
Individual SQL statements responsible for significant user I/O wait were
found.

RECOMMENDATION 1: SQL Tuning, 4.1% benefit (356 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "89j4h9uu70grb". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID 89j4h9uu70grb
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN /* Database Analysis report */
quest_ix_scheduler.run_task(147,JOB,NEXT_DATE); :mydate := next_date;
IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

RECOMMENDATION 2: SQL Tuning, 3.5% benefit (302 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "dxh8b2fpqfnu5". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID dxh8b2fpqfnu5
BEGIN :rule_value:=quest_ix_cbo_rule_pak.stale_analyzer_stats(:run_id
); END;

RECOMMENDATION 3: SQL Tuning, 3.5% benefit (302 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"07k0t67czx7g6".
RELEVANT OBJECT: SQL statement with SQL_ID 07k0t67czx7g6 and
PLAN_HASH 2736083658
SELECT OWNER , TABLE_NAME , BLOCKS , ANALYZED_BLOCKS , BLOCK_VARIANCE
FROM QUEST_IX_STALE_STATS ORDER BY 5 DESC

RECOMMENDATION 4: SQL Tuning, 3.4% benefit (300 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "bqx3f42d29g34". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID bqx3f42d29g34
BEGIN DBMS_SPACE.SPACE_USAGE(segment_owner=>:segment_owner,segment_na
me=>:segment_name,segment_type=>:segment_type,unformatted_blocks=>:un
f,unformatted_bytes=>:unfb,fs1_blocks=>:fs1,fs1_bytes=>:fs1b,fs2_bloc
ks=>:fs2,fs2_bytes=>:fs2b,fs3_blocks=>:fs3,fs3_bytes=>:fs3b,fs4_block
s=>:fs4,fs4_bytes=>:fs4b,full_blocks=>:full,full_bytes=>:fullb); END;

SYMPTOMS THAT LED TO THE FINDING:
Wait class "User I/O" was consuming significant database time. (4.3%
impact [378 seconds])

FINDING 2: 14% impact (1236 seconds)
------------------------------------
PL/SQL execution consumed significant database time.

RECOMMENDATION 1: SQL Tuning, 3.6% benefit (316 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "89j4h9uu70grb". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID 89j4h9uu70grb
DECLARE job BINARY_INTEGER := :job; next_date DATE := :mydate;
broken BOOLEAN := FALSE; BEGIN /* Database Analysis report */
quest_ix_scheduler.run_task(147,JOB,NEXT_DATE); :mydate := next_date;
IF broken THEN :b := 1; ELSE :b := 0; END IF; END;

RECOMMENDATION 2: SQL Tuning, 3.5% benefit (307 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "dxh8b2fpqfnu5". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID dxh8b2fpqfnu5
BEGIN :rule_value:=quest_ix_cbo_rule_pak.stale_analyzer_stats(:run_id
); END;

RECOMMENDATION 3: SQL Tuning, 3.5% benefit (307 seconds)
ACTION: Run SQL Tuning Advisor on the SQL statement with SQL_ID
"07k0t67czx7g6".
RELEVANT OBJECT: SQL statement with SQL_ID 07k0t67czx7g6 and
PLAN_HASH 2736083658
SELECT OWNER , TABLE_NAME , BLOCKS , ANALYZED_BLOCKS , BLOCK_VARIANCE
FROM QUEST_IX_STALE_STATS ORDER BY 5 DESC

RECOMMENDATION 4: SQL Tuning, 3.5% benefit (305 seconds)
ACTION: Tune the PL/SQL block with SQL_ID "bqx3f42d29g34". Refer to the
"Tuning PL/SQL Applications" chapter of Oracle's "PL/SQL User's Guide
and Reference"
RELEVANT OBJECT: SQL statement with SQL_ID bqx3f42d29g34
BEGIN DBMS_SPACE.SPACE_USAGE(segment_owner=>:segment_owner,segment_na
me=>:segment_name,segment_type=>:segment_type,unformatted_blocks=>:un
f,unformatted_bytes=>:unfb,fs1_blocks=>:fs1,fs1_bytes=>:fs1b,fs2_bloc
ks=>:fs2,fs2_bytes=>:fs2b,fs3_blocks=>:fs3,fs3_bytes=>:fs3b,fs4_block
s=>:fs4,fs4_bytes=>:fs4b,full_blocks=>:full,full_bytes=>:fullb); END;

FINDING 3: 4.5% impact (391 seconds)
------------------------------------
Host CPU was a bottleneck and the instance was consuming 89% of the host CPU.
All wait times will be inflated by wait for CPU.

RECOMMENDATION 1: Host Configuration, 4.5% benefit (391 seconds)
ACTION: Consider adding more CPUs to the host or increasing the number
of instances serving the database.

ADDITIONAL INFORMATION: Host CPU consumption was 76%. The instance spent
significant time on CPU. However, there were no predominant SQL
statements responsible for the CPU load.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ADDITIONAL INFORMATION
----------------------

Wait class "Administrative" was not consuming significant database time.
Wait class "Application" was not consuming significant database time.
Wait class "Cluster" was not consuming significant database time.
Wait class "Commit" was not consuming significant database time.
Wait class "Concurrency" was not consuming significant database time.
Wait class "Configuration" was not consuming significant database time.
Wait class "Network" was not consuming significant database time.
Wait class "Scheduler" was not consuming significant database time.
Wait class "Other" was not consuming significant database time.

The analysis of I/O performance is based on the default assumption that the
average read time for one database block is 10000 micro-seconds.

An explanation of the terminology used in this report is available when you
run the report with the 'ALL' level of detail.

Option 2, Running ADDM Using DBMS_ADVISOR APIs

Step 1: Create a function to run an ADDM analysis on a pair of snapshots.

CREATE OR REPLACE FUNCTION run_addm(start_time IN DATE, end_time IN DATE)
RETURN VARCHAR2
IS
  begin_snap NUMBER;
  end_snap   NUMBER;
  tid        NUMBER;         -- Task ID
  tname      VARCHAR2(30);   -- Task Name
  tdesc      VARCHAR2(256);  -- Task Description
BEGIN
  -- Find the snapshot IDs corresponding to the given input parameters.
  SELECT max(snap_id) INTO begin_snap FROM DBA_HIST_SNAPSHOT
  WHERE trunc(end_interval_time, 'MI') <= start_time;
  SELECT min(snap_id) INTO end_snap FROM DBA_HIST_SNAPSHOT
  WHERE end_interval_time >= end_time;
  -- Set Task Name (tname) to NULL and let CREATE_TASK return a unique name for the task.
  tname := '';
  tdesc := 'run_addm( ' || begin_snap || ', ' || end_snap || ' )';
  -- Create a task, set the task parameters and execute it.
  DBMS_ADVISOR.CREATE_TASK( 'ADDM', tid, tname, tdesc );
  DBMS_ADVISOR.SET_TASK_PARAMETER( tname, 'START_SNAPSHOT', begin_snap );
  DBMS_ADVISOR.SET_TASK_PARAMETER( tname, 'END_SNAPSHOT', end_snap );
  DBMS_ADVISOR.EXECUTE_TASK( tname );
  RETURN tname;
END;
/

Step 2: Reporting ADDM Analysis on a Pair of Specific Snapshots

-- set SQL*Plus variables and column formats for the report
SET PAGESIZE 0 LONG 1000000 LONGCHUNKSIZE 1000;
COLUMN get_clob FORMAT a80;
-- execute run_addm() with 5am and 7am as input
VARIABLE task_name VARCHAR2(30);
BEGIN
:task_name := run_addm( TO_DATE('05:00:01 (06/16)', 'HH24:MI:SS (MM/DD)'),
TO_DATE('07:00:01 (06/16)', 'HH24:MI:SS (MM/DD)') );
END;
/


-- execute GET_TASK_REPORT to get the textual ADDM report.
SELECT DBMS_ADVISOR.GET_TASK_REPORT(:task_name)
FROM DBA_ADVISOR_TASKS t
WHERE t.task_name = :task_name
AND t.owner = SYS_CONTEXT( 'userenv', 'session_user');

Option 3, Running ADDM Report Using DBMS_ADVISOR

SQL>
BEGIN
DBMS_ADVISOR.create_task (
advisor_name => 'ADDM',
task_name => '5838_6027_AWR_SNAPSHOT',
task_desc => 'Advisor for snapshots 5838 to 6027');
DBMS_ADVISOR.set_task_parameter (
task_name => '5838_6027_AWR_SNAPSHOT',
parameter => 'START_SNAPSHOT',
value => 5838);
DBMS_ADVISOR.set_task_parameter (
task_name => '5838_6027_AWR_SNAPSHOT',
parameter => 'END_SNAPSHOT',
value => 6027);
DBMS_ADVISOR.execute_task(task_name => '5838_6027_AWR_SNAPSHOT');
END;
/

PL/SQL procedure successfully completed.

SQL> SET LONG 100000 PAGESIZE 50000
SQL> SELECT DBMS_ADVISOR.get_task_report('5838_6027_AWR_SNAPSHOT') AS report FROM dual;

REPORT
--------------------------------------------------------------------------------
DETAILED ADDM REPORT FOR TASK '5838_6027_AWR_SNAPSHOT' WITH ID 30828
--------------------------------------------------------------------
Analysis Period: from 13-MAY-2009 00:00 to 20-MAY-2009 16:00
Database ID/Instance: 2568139559/1
... ...

Views with ADDM Information

1. DBA_ADVISOR_TASKS: This view provides basic information about existing tasks, such as the task ID, task name, and creation time.
2. DBA_ADVISOR_LOG: This view contains the current task information, such as status, progress, error messages, and execution times.
3. DBA_ADVISOR_RECOMMENDATIONS: This view displays the results of completed diagnostic tasks with recommendations for the problems identified in each run. The recommendations should be looked at in the order of the RANK column, as this reflects the magnitude of the problem for the recommendation. The BENEFIT column gives the benefit to the system you can expect after the recommendation is carried out.
4. DBA_ADVISOR_FINDINGS: This view displays all the findings and symptoms that the diagnostic monitor encountered, along with the specific recommendation.
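
For example, a quick look at the findings of a completed task (using the task name created in Option 3 above) might look like this:

-- List ADDM findings for a finished task, biggest impact first
select type, impact, message
from   dba_advisor_findings
where  task_name = '5838_6027_AWR_SNAPSHOT'
order  by impact desc;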

Friday, May 8, 2009

SYSDBA Audit

Parameter:

audit_sys_operations = TRUE
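
audit_sys_operations is a static parameter, so (assuming an spfile is in use) it has to be set with SCOPE=SPFILE and takes effect after the next restart; the resulting audit files land in the directory pointed to by audit_file_dest (the script below reads them from $ORACLE_HOME/rdbms/audit).

-- Enable auditing of SYS operations; effective after the instance is restarted
ALTER SYSTEM SET audit_sys_operations = TRUE SCOPE = SPFILE;
-- Confirm the setting once the instance is back up
SHOW PARAMETER audit_sys_operations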

Script 1 :

#!/usr/bin/ksh
#
# Description:
# SYSDBA Audit report daily
#
#

ORACLE_HOME=/u01/oracle/product/10.1.0
export ORACLE_HOME
ORACLE_SID=oradb
export ORACLE_SID

d=`date +\%Y\%m\%d`
yesterday=$((d-1))

mkdir /var/log/DbSYSAudit/$yesterday

auditdir=/var/log/DbSYSAudit
auditdir1=$auditdir/$yesterday
auditdir2=$ORACLE_HOME/rdbms/audit

#### copy

ls -lrt $auditdir2 | awk '{print $9}' > $auditdir/cplist.txt

for name1 in `cat $auditdir/cplist.txt`
do
cp $auditdir2/$name1 $auditdir1
done

#### count

cnt=`ls -lrt $auditdir1 | wc -l`

echo "**************************" > $auditdir/AuditLogC.txt
echo "Report Date : " $yesterday >> $auditdir/AuditLogC.txt
echo "Server/DB : " `hostname`/$ORACLE_SID >> $auditdir/AuditLogC.txt
echo " " >> $auditdir/AuditLogC.txt
echo "SYS Audit Log Count : " $cnt >> $auditdir/AuditLogC.txt
echo "**************************" >> $auditdir/AuditLogC.txt

#### report

echo "$yesterday SYSDBA Audit Report:" > $auditdir/SYSAuditLog.$yesterday.log

for name2 in `cat $auditdir/cplist.txt`
do
echo "-----------------------------------------" >> $auditdir/SYSAuditLog.$yesterday.log
echo $name2 >> $auditdir/SYSAuditLog.$yesterday.log
echo " " >> $auditdir/SYSAuditLog.$yesterday.log
cat $auditdir2/$name2 >> $auditdir/SYSAuditLog.$yesterday.log
echo " " >> $auditdir/SYSAuditLog.$yesterday.log
echo " " >> $auditdir/SYSAuditLog.$yesterday.log
done

#### Email

MAILS='annaxu@jp.com.sg'
#MAILS='annaxu@jps.com.sg itmonitor@jps.com.sg'

(cat $auditdir/AuditLogC.txt; /usr/bin/uuencode $auditdir/SYSAuditLog.$yesterday.log SYSAuditLog.$yesterday.log) | mailx -r OracleDB@jps.com.sg -s "SYS Audit Daily Report" $MAILS

### HouseKeep

rm $auditdir/AuditLogC.txt

rm $auditdir/cplist.txt

rm -rf $auditdir2/*

Wednesday, April 1, 2009

Oracle Password file

The Oracle Password File ($ORACLE_HOME/dbs/orapw or orapwSID) stores passwords for users with administrative privileges. One needs to create a password file before remote administrators (like OEM) will be allowed to connect. Follow this procedure to create a new password file:

  • Log in as the Oracle software owner
  • Run command on OS level: $orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID password=mypasswd
  • Shutdown the database (SQLPLUS> SHUTDOWN IMMEDIATE)
  • Edit the INIT.ORA file and ensure REMOTE_LOGIN_PASSWORDFILE=exclusive is set.
  • Startup the database (SQLPLUS> STARTUP)

NOTE: The orapwd utility presents a security risk in that it receives a password from the command line. This password is visible in the process table of many systems. Administrators need to be aware of this!

Adding users to Password File:

One can select from the SYS.V_$PWFILE_USERS view to see which users are listed in the password file. New users can be added to the password file by granting them SYSDBA or SYSOPER privileges, or by using the orapwd utility.
e.g.
SQL> GRANT SYSDBA TO scott;
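
Afterwards, the new entry should show up in the password file view (run as a privileged user):

-- Confirm SCOTT now appears in the password file with SYSDBA
SQL> SELECT username, sysdba, sysoper FROM v$pwfile_users;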