Sunday, December 13, 2009

Oracle SQL Loader

Oracle SQL*Loader is a utility for loading data into an Oracle database and is often used for transporting data from a non-Oracle source system to an Oracle data warehouse. It is a versatile utility that can load data in almost any format, can load multiple files at the same time into multiple tables, and can load data from files on disk, on tape, or from a named pipe. It runs in one of three modes: conventional load, direct-path load, and external-path load. The conventional load is the default method and has fewer restrictions (see below) than the direct-path load, which is generally much faster but less flexible.
The direct-path load is faster for large data sets because it generates no undo data and bypasses the database buffer cache, but it can be used only on heap tables (see below for the other restrictions).
The external-path load creates an external table for the specified datafile and then executes SQL INSERT statements to load the data into the target table. This mode has two advantages over direct-path and conventional loads:
1. If a data file is big enough, it will be loaded in parallel;
2. The source data can be modified by SQL and PL/SQL functions as it is being loaded.
SQL*Loader enables you to:
• load data from multiple files (from disk, tape, or named pipe) into one or more target tables in the same load
• load data in any character set supported by Oracle
• load or discard records depending on values in the input fields
• transform the data before loading using SQL functions
• generate unique sequential keys for specified columns
• append to existing data or replace existing data
• load large objects (LOBs), collections (nested tables and VARRAYs), and object-relational data
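For example, a minimal SQL*Loader control file might look like the following sketch (the data file, table, and column names are hypothetical):

LOAD DATA
INFILE 'emp.dat'
APPEND
INTO TABLE emp
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(empno, ename, sal, hiredate DATE "YYYY-MM-DD")

Saved as emp.ctl, this could be run with a command such as sqlldr userid=scott/tiger control=emp.ctl.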

When Does the Trigger Fire

A trigger fires based on a triggering statement, which specifies:
• The SQL statement, database event, or DDL event that fires the trigger body. The options include DELETE, INSERT, and UPDATE. One, two, or all three of these options can be included in the triggering statement specification.
• The table, view, DATABASE, or SCHEMA on which the trigger is defined.
Note:
Exactly one table or view can be specified in the triggering statement. If the INSTEAD OF option is used, then the triggering statement must specify a view; conversely, if a view is specified in the triggering statement, then only the INSTEAD OF option can be used.

Overview of Triggers

A trigger is a named program unit that is stored in the database and fired (executed) in response to a specified event. The specified event is associated with either a table, a view, a schema, or the database, and it is one of the following:
• A data manipulation language (DML) statement (DELETE, INSERT, or UPDATE)
• A data definition language (DDL) statement (CREATE, ALTER, or DROP)
• A database operation (SERVERERROR, LOGON, LOGOFF, STARTUP, or SHUTDOWN)
The trigger is said to be defined on the table, view, schema, or database.
Topics:
• Trigger Types
• Trigger States
• Data Access for Triggers
• Uses of Triggers
Trigger Types
A DML trigger is fired by a DML statement, a DDL trigger is fired by a DDL statement, a DELETE trigger is fired by a DELETE statement, and so on.
An INSTEAD OF trigger is a DML trigger that is defined on a view (not a table). The database fires the INSTEAD OF trigger instead of executing the triggering DML statement. For more information, see Modifying Complex Views (INSTEAD OF Triggers).
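As a rough sketch (the view, tables, and columns here are hypothetical), an INSTEAD OF trigger on a view joining two base tables might look like this:

CREATE OR REPLACE TRIGGER order_summary_ins
INSTEAD OF INSERT ON order_summary_v
FOR EACH ROW
BEGIN
  -- route the view insert to the two underlying base tables
  INSERT INTO orders (order_id, customer_id)
  VALUES (:NEW.order_id, :NEW.customer_id);
  INSERT INTO order_totals (order_id, order_total)
  VALUES (:NEW.order_id, :NEW.order_total);
END;
/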
A system trigger is defined on a schema or the database. A trigger defined on a schema fires for each event associated with the owner of the schema (the current user). A trigger defined on a database fires for each event associated with all users.
A simple trigger can fire at exactly one of the following timing points:
• Before the triggering statement executes
• After the triggering statement executes
• Before each row that the triggering statement affects
• After each row that the triggering statement affects
A compound trigger can fire at more than one timing point. Compound triggers make it easier to program an approach where you want the actions you implement for the various timing points to share common data. For more information, see Compound Triggers.
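As a minimal sketch (the table and column names are hypothetical), a compound trigger that shares a row counter across timing points might look like this:

CREATE OR REPLACE TRIGGER emp_update_counter
  FOR UPDATE OF salary ON employees
  COMPOUND TRIGGER
  -- state declared here is shared by all timing-point sections
  g_rows PLS_INTEGER := 0;
  BEFORE STATEMENT IS
  BEGIN
    g_rows := 0;
  END BEFORE STATEMENT;
  AFTER EACH ROW IS
  BEGIN
    g_rows := g_rows + 1;
  END AFTER EACH ROW;
  AFTER STATEMENT IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE(g_rows || ' rows updated');
  END AFTER STATEMENT;
END;
/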
Trigger States
A trigger can be in either of two states:
Enabled. An enabled trigger executes its trigger body if a triggering statement is entered and the trigger restriction (if any) evaluates to TRUE.
Disabled. A disabled trigger does not execute its trigger body, even if a triggering statement is entered and the trigger restriction (if any) evaluates to TRUE.
By default, a trigger is created in enabled state. To create a trigger in disabled state, use the DISABLE clause of the CREATE TRIGGER statement.
See Also:
CREATE TRIGGER Statement
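For example (the trigger and table names are hypothetical), a trigger can be created in the disabled state and enabled later:

CREATE OR REPLACE TRIGGER orders_audit_trg
  AFTER INSERT ON orders
  DISABLE
BEGIN
  NULL; -- placeholder trigger body
END;
/

ALTER TRIGGER orders_audit_trg ENABLE;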
Data Access for Triggers
When a trigger is fired, the tables referenced in the trigger action might be currently undergoing changes by SQL statements in other users' transactions. In all cases, the SQL statements running within triggers follow the common rules used for standalone SQL statements. In particular, if an uncommitted transaction has modified values that a trigger being fired either must read (query) or write (update), then the SQL statements in the body of the trigger being fired use the following guidelines:
•Queries see the current read-consistent materialized view of referenced tables and any data changed within the same transaction.
•Updates wait for existing data locks to be released before proceeding.
Uses of Triggers
Triggers supplement the standard capabilities of your database to provide a highly customized database management system. For example, you can use triggers to:
• Automatically generate derived column values
• Enforce referential integrity across nodes in a distributed database

oracle database SESSION

BY SESSION
Specify BY SESSION if you want Oracle Database to write a single record for all SQL statements of the same type issued and operations of the same type executed on the same schema objects in the same session.
Oracle Database can write to an operating system audit file but cannot read it to detect whether an entry has already been written for a particular operation. Therefore, if you are using an operating system file for the audit trail (that is, the AUDIT_FILE_DEST initialization parameter is set to OS), then the database may write multiple records to the audit trail file even if you specify BY SESSION.
BY ACCESS
Specify BY ACCESS if you want Oracle Database to write one record for each audited statement and operation.
If you specify statement options or system privileges that audit data definition language (DDL) statements, then the database automatically audits by access regardless of whether you specify the BY SESSION clause or BY ACCESS clause.
For statement options and system privileges that audit SQL statements other than DDL, you can specify either BY SESSION or BY ACCESS. BY SESSION is the default.
WHENEVER [NOT] SUCCESSFUL
Specify WHENEVER SUCCESSFUL to audit only SQL statements and operations that succeed.
Specify WHENEVER NOT SUCCESSFUL to audit only statements and operations that fail or result in errors.
If you omit this clause, then Oracle Database performs the audit regardless of success or failure.
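For example, the following statements combine these clauses (the audited table is hypothetical):

AUDIT SELECT TABLE BY SESSION;
AUDIT DELETE ON hr.employees BY ACCESS;
AUDIT CREATE SESSION WHENEVER NOT SUCCESSFUL;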

Uses of Triggers

Triggers supplement the standard capabilities of your database to provide a highly customized database management system. For example, you can use triggers to:
• Automatically generate derived column values
• Enforce referential integrity across nodes in a distributed database
• Enforce complex business rules
• Provide transparent event logging
• Provide auditing
• Maintain synchronous table replicates
• Gather statistics on table access
• Modify table data when DML statements are issued against views
• Publish information about database events, user events, and SQL statements to subscribing applications
• Restrict DML operations against a table to those issued during regular business hours
• Enforce security authorizations
• Prevent invalid transactions
Privileges Required to Use Triggers
To create a trigger in your schema:
• You must have the CREATE TRIGGER system privilege
• One of the following must be true:
o You own the table specified in the triggering statement
o You have the ALTER privilege for the table specified in the triggering statement
o You have the ALTER ANY TABLE system privilege
To create a trigger in another schema, or to reference a table in another schema from a trigger in your schema:
• You must have the CREATE ANY TRIGGER system privilege.
• You must have the EXECUTE privilege on the referenced subprograms or packages.
To create a trigger on the database, you must have the ADMINISTER DATABASE TRIGGER privilege. If this privilege is later revoked, you can drop the trigger but not alter it.
The object privileges to the schema objects referenced in the trigger body must be granted to the trigger owner explicitly (not through a role). The statements in the trigger body operate under the privilege domain of the trigger owner, not the privilege domain of the user issuing the triggering statement (this is similar to the privilege model for stored subprograms).
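For example, a DBA might grant a trigger developer the required privileges like this (the user name is hypothetical):

GRANT CREATE TRIGGER TO trig_dev;
GRANT ALTER ON hr.employees TO trig_dev;
GRANT ADMINISTER DATABASE TRIGGER TO trig_dev;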

BEFORE and AFTER Triggers

When defining a trigger, you can specify the trigger timing—whether the trigger action is to be run before or after the triggering statement. BEFORE and AFTER apply to both statement and row triggers.
BEFORE and AFTER triggers fired by DML statements can be defined only on tables, not on views. However, triggers on the base tables of a view are fired if an INSERT, UPDATE, or DELETE statement is issued against the view. BEFORE and AFTER triggers fired by DDL statements can be defined only on the database or a schema, not on particular tables.
BEFORE Triggers
BEFORE triggers run the trigger action before the triggering statement is run. This type of trigger is commonly used in the following situations:
• When the trigger action determines whether the triggering statement should be allowed to complete. Using a BEFORE trigger for this purpose, you can eliminate unnecessary processing of the triggering statement and its eventual rollback in cases where an exception is raised in the trigger action.
• To derive specific column values before completing a triggering INSERT or UPDATE statement.
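As a minimal sketch (the table and column names are hypothetical), a BEFORE row trigger that derives a column value might look like this:

CREATE OR REPLACE TRIGGER order_lines_total
  BEFORE INSERT OR UPDATE ON order_lines
  FOR EACH ROW
BEGIN
  -- derive the line total before the row is written
  :NEW.line_total := :NEW.quantity * :NEW.unit_price;
END;
/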
AFTER Triggers
AFTER triggers run the trigger action after the triggering statement is run.
Trigger Type Combinations
Using the options listed previously, you can create four types of row and statement triggers:
• BEFORE statement trigger
Before executing the triggering statement, the trigger action is run.
• BEFORE row trigger
Before modifying each row affected by the triggering statement and before checking appropriate integrity constraints, the trigger action is run, provided that the trigger restriction was not violated.
• AFTER statement trigger
After executing the triggering statement and applying any deferred integrity constraints, the trigger action is run.
• AFTER row trigger
After modifying each row affected by the triggering statement and possibly applying appropriate integrity constraints, the trigger action is run for the current row provided the trigger restriction was not violated. Unlike BEFORE row triggers, AFTER row triggers lock rows.

How Triggers Are Used

Triggers supplement the standard capabilities of Oracle to provide a highly customized database management system. For example, a trigger can restrict DML operations against a table to those issued during regular business hours. You can also use triggers to:
• Automatically generate derived column values
• Prevent invalid transactions
• Enforce complex security authorizations
• Enforce referential integrity across nodes in a distributed database
• Enforce complex business rules
• Provide transparent event logging
• Provide auditing
• Maintain synchronous table replicates
• Gather statistics on table access
• Modify table data when DML statements are issued against views
• Publish information about database events, user events, and SQL statements to subscribing applications

Types of Triggers
This section describes the different types of triggers:
• Row Triggers and Statement Triggers
• BEFORE and AFTER Triggers
• INSTEAD OF Triggers
• Triggers on System Events and User Events
Row Triggers and Statement Triggers
When you define a trigger, you can specify the number of times the trigger action is to be run:
• Once for every row affected by the triggering statement, such as a trigger fired by an UPDATE statement that updates many rows
• Once for the triggering statement, no matter how many rows it affects
Row Triggers
A row trigger is fired each time the table is affected by the triggering statement. For example, if an UPDATE statement updates multiple rows of a table, a row trigger is fired once for each row affected by the UPDATE statement. If a triggering statement affects no rows, a row trigger is not run.
Row triggers are useful if the code in the trigger action depends on data provided by the triggering statement or on the rows that are affected. For example, a row trigger can use the values of each row affected by the triggering statement.
Statement Triggers
A statement trigger is fired once on behalf of the triggering statement, regardless of the number of rows in the table that the triggering statement affects, even if no rows are affected. For example, if a DELETE statement deletes several rows from a table, a statement-level DELETE trigger is fired only once.
Statement triggers are useful if the code in the trigger action does not depend on the data provided by the triggering statement or the rows affected. For example, use a statement trigger to:
• Make a complex security check on the current time or user
• Generate a single audit record
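For example, a statement trigger that generates a single audit record per statement might look like this sketch (the table names are hypothetical):

CREATE OR REPLACE TRIGGER orders_dml_log
  AFTER INSERT OR UPDATE OR DELETE ON orders
BEGIN
  -- one row per triggering statement, however many rows it touched
  INSERT INTO dml_log (table_name, changed_by, changed_at)
  VALUES ('ORDERS', USER, SYSDATE);
END;
/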
BEFORE and AFTER Triggers
When defining a trigger, you can specify the trigger timing—whether the trigger action is to be run before or after the triggering statement. BEFORE and AFTER apply to both statement and row triggers.
BEFORE and AFTER triggers fired by DML statements can be defined only on tables, not on views. However, triggers on the base tables of a view are fired if an INSERT, UPDATE, or DELETE statement is issued against the view. BEFORE and AFTER triggers fired by DDL statements can be defined only on the database or a schema, not on particular tables.

How the V$LOGMNR_CONTENTS View Is Populated

The V$LOGMNR_CONTENTS fixed view is unlike other views in that it is not a selective presentation of data stored in a table. Instead, it is a relational presentation of the data that you request from the redo log files. LogMiner populates the view only in response to a query against it. You must successfully start LogMiner before you can query V$LOGMNR_CONTENTS.
When a SQL select operation is executed against the V$LOGMNR_CONTENTS view, the redo log files are read sequentially. Translated information from the redo log files is returned as rows in the V$LOGMNR_CONTENTS view. This continues until either the filter criteria specified at startup are met or the end of the redo log file is reached.
In some cases, certain columns in V$LOGMNR_CONTENTS may not be populated. For example:
• The TABLE_SPACE column is not populated for rows where the value of the OPERATION column is DDL. This is because a DDL statement may operate on more than one tablespace. For example, a table can be created with multiple partitions spanning multiple tablespaces; hence it would not be accurate to populate the column.
• LogMiner does not generate SQL redo or SQL undo for temporary tables. The SQL_REDO column will contain the string "/* No SQL_REDO for temporary tables */" and the SQL_UNDO column will contain the string "/* No SQL_UNDO for temporary tables */".
LogMiner returns all the rows in SCN order unless you have used the COMMITTED_DATA_ONLY option to specify that only committed transactions should be retrieved. SCN order is the order normally applied in media recovery.
See Also:
Showing Only Committed Transactions for more information about the COMMITTED_DATA_ONLY option to DBMS_LOGMNR.START_LOGMNR
Note:
Because LogMiner populates the V$LOGMNR_CONTENTS view only in response to a query and does not store the requested data in the database, the following is true:
• Every time you query V$LOGMNR_CONTENTS, LogMiner analyzes the redo log files for the data you request.
• The amount of memory consumed by the query is not dependent on the number of rows that must be returned to satisfy a query.
• The time it takes to return the requested data is dependent on the amount and type of redo log data that must be mined to find that data.
Querying V$LOGMNR_CONTENTS Based on Column Values
LogMiner lets you make queries based on column values. For instance, you can perform a query to show all updates to the hr.employees table that increase salary more than a certain amount. Data such as this can be used to analyze system behavior and to perform auditing tasks.
LogMiner data extraction from redo log files is performed using two mine functions: DBMS_LOGMNR.MINE_VALUE and DBMS_LOGMNR.COLUMN_PRESENT. Support for these mine functions is provided by the REDO_VALUE and UNDO_VALUE columns in the V$LOGMNR_CONTENTS view.
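For example, a query that returns only the updates that raised a salary by more than 1000 (the threshold is hypothetical) might look like this:

SELECT SQL_REDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'HR'
  AND SEG_NAME = 'EMPLOYEES'
  AND OPERATION = 'UPDATE'
  AND DBMS_LOGMNR.MINE_VALUE(REDO_VALUE, 'HR.EMPLOYEES.SALARY') >
      DBMS_LOGMNR.MINE_VALUE(UNDO_VALUE, 'HR.EMPLOYEES.SALARY') + 1000;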

Querying V$LOGMNR_CONTENTS for Redo Data of Interest

You access the redo data of interest by querying the V$LOGMNR_CONTENTS view. (Note that you must have the SELECT ANY TRANSACTION privilege to query V$LOGMNR_CONTENTS.) This view provides historical information about changes made to the database, including (but not limited to) the following:
• The type of change made to the database: INSERT, UPDATE, DELETE, or DDL (OPERATION column).
• The SCN at which a change was made (SCN column).
• The SCN at which a change was committed (COMMIT_SCN column).
• The transaction to which a change belongs (XIDUSN, XIDSLT, and XIDSQN columns).
• The table and schema name of the modified object (SEG_NAME and SEG_OWNER columns).
• The name of the user who issued the DDL or DML statement to make the change (USERNAME column).
• If the change was due to a SQL DML statement, the reconstructed SQL statements showing SQL DML that is equivalent (but not necessarily identical) to the SQL DML used to generate the redo records (SQL_REDO column).
If a password is part of the statement in a SQL_REDO column, the password is encrypted. SQL_REDO column values that correspond to DDL statements are always identical to the SQL DDL used to generate the redo records.
• If the change was due to a SQL DML change, the reconstructed SQL statements showing the SQL DML statements needed to undo the change (SQL_UNDO column).
SQL_UNDO columns that correspond to DDL statements are always NULL. The SQL_UNDO column may be NULL also for some datatypes and for rolled back operations.
For example, suppose you wanted to find out about any delete operations that a user named Ron had performed on the oe.orders table. You could issue a SQL query similar to the following:
SELECT OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'OE' AND SEG_NAME = 'ORDERS' AND
OPERATION = 'DELETE' AND USERNAME = 'RON';

This query would return the operation type, the redo SQL, and the undo SQL for each matching delete. The formatting may be different on your display than that shown here.

Audit all Oracle user activity.

This audits everything including DDL (create table), DML (inserts, updates, deletes) and login/logoff events:
audit all by FRED by access;
Audit all Oracle user viewing activity:
audit select table by FRED by access;
Audit all Oracle user data change activity:
audit update table, delete table,
insert table by FRED by access;
Audit all Oracle user procedure execution activity:
audit execute procedure by FRED by access;
We can also query the dba_audit_trail view; its columns are described in the Oracle documentation.
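For example, a simple query against it might look like this:

SELECT username, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE username = 'FRED'
ORDER BY timestamp;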

Oracle auditing setup and configuration

The Oracle AUDIT command is used to create specific audit trails for Oracle tables. To enable Oracle auditing you must set the initialization parameter "audit_trail = true" and run the cataudit.sql script (as SYS).
Managing Oracle audit trails
The Oracle AUDIT command writes the audit information to specific data dictionary views (BEWARE: these audit trail rows can quickly fill up your SYSTEM tablespace, and special care needs to be taken to ensure that you do not "lock up" your database by filling the SYSTEM tablespace):
• dba_audit_exists
• dba_audit_object
• dba_audit_session
• dba_audit_statement
• dba_audit_trail
We also have these metadata views for Oracle auditing options:
• dba_obj_audit_opts
• dba_priv_audit_opts
• dba_stmt_audit_opts
Auditing the audit trail
Today, we need to audit the audit trail itself to prevent "inside jobs", and this Oracle command will record all changes to the audit trail (see my notes on the types of "inside jobs"):
audit delete, insert, update
on sys.aud$
by access;
Auditing user activity with the Oracle audit command
Oracle has syntax for auditing specific user activity. To audit the activity of user FRED we could issue these audit commands:
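For example, repeating the commands shown above:

audit all by FRED by access;
audit select table, update table, delete table, insert table by FRED by access;
audit execute procedure by FRED by access;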

Starting LogMiner

You call the DBMS_LOGMNR.START_LOGMNR procedure to start LogMiner. Because the options available with the DBMS_LOGMNR.START_LOGMNR procedure allow you to control output to the V$LOGMNR_CONTENTS view, you must call DBMS_LOGMNR.START_LOGMNR before querying the V$LOGMNR_CONTENTS view.
When you start LogMiner, you can:
• Specify how LogMiner should filter data it returns (for example, by starting and ending time or SCN value)
• Specify options for formatting the data returned by LogMiner
• Specify the LogMiner dictionary to use
The following list is a summary of LogMiner settings that you can specify with the OPTIONS parameter to DBMS_LOGMNR.START_LOGMNR and where to find more information about them.
• DICT_FROM_ONLINE_CATALOG — See Using the Online Catalog
• DICT_FROM_REDO_LOGS — See Start LogMiner
• CONTINUOUS_MINE — See Redo Log File Options
• COMMITTED_DATA_ONLY — See Showing Only Committed Transactions
• SKIP_CORRUPTION — See Skipping Redo Corruptions
• NO_SQL_DELIMITER — See Formatting Reconstructed SQL Statements for Reexecution
• PRINT_PRETTY_SQL — See Formatting the Appearance of Returned Data for Readability
• NO_ROWID_IN_STMT — See Formatting Reconstructed SQL Statements for Reexecution
• DDL_DICT_TRACKING — See Tracking DDL Statements in the LogMiner Dictionary
When you execute the DBMS_LOGMNR.START_LOGMNR procedure, LogMiner checks to ensure that the combination of options and parameters that you have specified is valid and that the dictionary and redo log files that you have specified are available. However, the V$LOGMNR_CONTENTS view is not populated until you query the view, as described in How the V$LOGMNR_CONTENTS View Is Populated.
Note that parameters and options are not persistent across calls to DBMS_LOGMNR.START_LOGMNR. You must specify all desired parameters and options (including SCN and time ranges) each time you call DBMS_LOGMNR.START_LOGMNR.
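For example, assuming the relevant redo log files have already been added with DBMS_LOGMNR.ADD_LOGFILE, a call that mines a time range using the online catalog might look like this sketch (the timestamps are hypothetical):

BEGIN
  DBMS_LOGMNR.START_LOGMNR(
    STARTTIME => TO_DATE('2009-12-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS'),
    ENDTIME   => TO_DATE('2009-12-01 17:00:00', 'YYYY-MM-DD HH24:MI:SS'),
    OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG +
                 DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/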

DATA LOADING METHODS

SQL*Loader provides two methods for loading data:
• Conventional Path Load
• Direct Path Load
A conventional path load executes SQL INSERT statements to populate tables in an Oracle database. A direct path load eliminates much of the Oracle database overhead by formatting Oracle data blocks and writing the data blocks directly to the database files. A direct load does not compete with other users for database resources, so it can usually load data at near disk speed. Considerations inherent to direct path loads, such as restrictions, security, and backup implications, are discussed in this chapter.
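For example, the same load can be run down either path from the command line; only the direct=true flag differs (the control and log file names are hypothetical):

sqlldr userid=scott/tiger control=emp.ctl log=emp.log
sqlldr userid=scott/tiger control=emp.ctl log=emp.log direct=true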
The tables to be loaded must already exist in the database. SQL*Loader never creates tables. It loads existing tables that either already contain data or are empty.
The following privileges are required for a load:
• You must have INSERT privileges on the table to be loaded.
• You must have DELETE privileges on the table to be loaded when using the REPLACE or TRUNCATE option to empty old data from the table before loading the new data in its place.
Conventional Path Load
Conventional path load (the default) uses the SQL INSERT statement and a bind array buffer to load data into database tables. This method is used by all Oracle tools and applications.
When SQL*Loader performs a conventional path load, it competes equally with all other processes for buffer resources. This can slow the load significantly. Extra overhead is added as SQL commands are generated, passed to Oracle, and executed.
The Oracle database server looks for partially filled blocks and attempts to fill them on each insert. Although appropriate during normal use, this can slow bulk loads dramatically.
Conventional Path Load of a Single Partition
By definition, a conventional path load uses SQL INSERT statements. During a conventional path load of a single partition, SQL*Loader uses the partition-extended syntax of the INSERT statement, which has the following form:
INSERT INTO TABLE T PARTITION (P) VALUES ...
The SQL layer of the Oracle kernel determines if the row being inserted maps to the specified partition. If the row does not map to the partition, the row is rejected, and the SQL*Loader log file records an appropriate error message.
When to Use a Conventional Path Load
If load speed is most important to you, you should use direct path load because it is faster than conventional path load. However, certain restrictions on direct path loads may require you to use a conventional path load. You should use a conventional path load in the following situations:
• When accessing an indexed table concurrently with the load, or when applying inserts or updates to a nonindexed table concurrently with the load
To use a direct path load (with the exception of parallel loads), SQL*Loader must have exclusive write access to the table and exclusive read/write access to any indexes.
• When loading data into a clustered table
A direct path load does not support loading of clustered tables.
• When loading a relatively small number of rows into a large indexed table
During a direct path load, the existing index is copied when it is merged with the new index keys. If the existing index is very large and the number of new keys is very small, then the index copy time can offset the time saved by a direct path load.
• When loading a relatively small number of rows into a large table with referential and column-check integrity constraints
Because these constraints cannot be applied to rows loaded on the direct path, they are disabled for the duration of the load. Then they are applied to the whole table when the load completes. The costs could outweigh the savings for a very large table and a small number of new rows.
• When loading records and you want to ensure that a record is rejected under any of the following circumstances:
o If the record, upon insertion, causes an Oracle error
o If the record is formatted incorrectly, so that SQL*Loader cannot find field boundaries
o If the record violates a constraint or tries to make a unique index non-unique

AUDITING ORACLE SERVER

Auditing is a standard feature of the Oracle server. The initialization parameters that influence its behaviour can be displayed using the SHOW PARAMETER SQL*Plus command.
SQL> SHOW PARAMETER AUDIT

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest                      string      C:\ORACLE\PRODUCT\10.2.0\ADMIN
                                                 \DB10G\ADUMP
audit_sys_operations                 boolean     FALSE
audit_trail                          string      NONE
SQL>
Auditing is disabled by default, but can be enabled by setting the AUDIT_TRAIL static parameter, which has the following allowed values.

AUDIT_TRAIL = { none | os | db | db,extended | xml | xml,extended }
The following list provides a description of each setting:

none or false - Auditing is disabled.
db or true - Auditing is enabled, with all audit records stored in the database audit trail (SYS.AUD$).
db,extended - As db, but the SQL_BIND and SQL_TEXT columns are also populated.
xml - Auditing is enabled, with all audit records stored as XML format OS files.
xml,extended - As xml, but the SQL_BIND and SQL_TEXT columns are also populated.
os - Auditing is enabled, with all audit records directed to the operating system's audit trail.
Note. In Oracle 10g Release 1, db_extended was used in place of db,extended. The XML options are new to Oracle 10g Release 2.

The AUDIT_SYS_OPERATIONS static parameter enables or disables the auditing of operations issued by users connecting with SYSDBA or SYSOPER privileges, including the SYS user. All audit records are written to the OS audit trail.

The AUDIT_FILE_DEST parameter specifies the OS directory used for the audit trail when the os, xml and xml,extended options are used. It is also the location for all mandatory auditing specified by the AUDIT_SYS_OPERATIONS parameter.

To enable auditing and direct audit records to the database audit trail, we would do the following.

SQL> ALTER SYSTEM SET audit_trail=db SCOPE=SPFILE;

System altered.

SQL> SHUTDOWN
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> STARTUP
ORACLE instance started.

Total System Global Area 289406976 bytes
Fixed Size 1248600 bytes
Variable Size 71303848 bytes
Database Buffers 213909504 bytes
Redo Buffers 2945024 bytes
Database mounted.
Database opened.
SQL>
Audit Options
One look at the AUDIT command syntax should give you an idea of how flexible Oracle auditing is. There is no point repeating all this information, so instead we will look at a simple example.

First we create a new user called AUDIT_TEST.

CONNECT sys/password AS SYSDBA

CREATE USER audit_test IDENTIFIED BY password
DEFAULT TABLESPACE users
TEMPORARY TABLESPACE temp
QUOTA UNLIMITED ON users;

GRANT connect TO audit_test;
GRANT create table, create procedure TO audit_test;
Next we audit all operations by the AUDIT_TEST user.

CONNECT sys/password AS SYSDBA

AUDIT ALL BY audit_test BY ACCESS;
AUDIT SELECT TABLE, UPDATE TABLE, INSERT TABLE, DELETE TABLE BY audit_test BY ACCESS;
AUDIT EXECUTE PROCEDURE BY audit_test BY ACCESS;
These options audit all DDL and DML, along with some system events.
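To see the audit trail in action, we can perform some operations as AUDIT_TEST and then query dba_audit_trail; a minimal sketch might look like this:

CONNECT audit_test/password

CREATE TABLE test_tab (id NUMBER);
INSERT INTO test_tab (id) VALUES (1);
DROP TABLE test_tab;

CONNECT sys/password AS SYSDBA

SELECT username, action_name, obj_name, timestamp
FROM dba_audit_trail
WHERE username = 'AUDIT_TEST'
ORDER BY timestamp;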

ASM disk groups

ASM employs several basic concepts. ASM operates on disk groups, each of which is a collection of disks defined by ASM and transparent to the interface. Within the groups, it creates ASM datafiles. Therefore the main building blocks of ASM are:
• ASM disks
• ASM disk groups
• ASM files
• ASM templates
You can run the list disk groups (lsdg) ASM command from a bash shell script to quickly see the disk space usage:
#!/bin/bash
# Source the Oracle environment settings for this host
. /home/oracle/set_oraenv
# Point at the ASM instance
sid="+ASM1"
echo "Check Space on "$sid
export ORACLE_SID=$sid
# Run lsdg inside asmcmd to list disk group space usage
asmcmd << EOF
lsdg
EOF
Let's explore these ASM concepts in detail.
Components of the ASM instance
There are several components within an ASM instance.
ASM disk groups
A disk group is basically one or more ASM disks that are managed as a single logical unit. Any data structure stored in an ASM disk group is totally contained within that disk group (it is self-contained). A database using ASM disks doesn't have to be shut down in order for a disk to be added or dropped. ASM rebalances the spread of data to ensure an even I/O load to all disks in a disk group when the disk group configuration changes.

We mentioned that any single ASM file is self-contained in a single ASM disk group. However, an ASM disk group can contain files belonging to several databases, and a single database can use storage from multiple ASM disk groups. You can specify a disk group as the default disk group for files created in a database by specifying the disk group in file destination initialization parameters.

ASM divides the datafiles into 1MB extents and spreads the extents for each file evenly across all of the disks in a disk group. ASM uses pointers to record extent location instead of using a mathematical function to track the placement of each extent. When the disk group configuration changes, ASM moves individual extents of a file rather than having to move all extents to adhere to a formula based on the number of disks.

For files, such as log files, that require low latency, ASM provides fine-grained (128K) striping to allow larger I/Os to be split and processed in parallel by multiple disks. At file creation time, you can decide whether or not to use fine-grained striping; file-type-specific templates in the disk group determine the default striping behavior for each file type.

LOGMINER

LogMiner™ is a powerful audit tool for Oracle databases, allowing administrators to easily locate changes in the database, enabling sophisticated data analyses, and providing undo capabilities to roll back logical data corruptions or user errors.
LogMiner offers two interfaces:
• A SQL interface, which calls built-in PL/SQL packages to set up a data dictionary, specify redo logs, and execute queries (this interface is applicable to SQL*Plus, command-line scripts, or custom applications).
• LogMiner Viewer, an intuitive GUI which allows the administrator to set up a data dictionary, specify the redo logs, specify query criteria, and view/save redo log data.
LogMiner can also be leveraged for:
• Ex post facto auditing of DML statements, the order in which transactions were committed, and the user responsible for the updates.
• Historical analysis of data access patterns for database tuning and capacity planning.
• Tracking schema evolution and the impact on data structure.
LogMiner directly accesses the Oracle redo logs, which are complete records of all activities performed on the database, and the associated data dictionary, which is used to translate internal object identifiers and types to external names and data formats. Using the dynamic view V$LOGMNR_CONTENTS, LogMiner populates a row in this view for each logical operation performed on the database, thus offering familiar, relational-based access for ad-hoc querying or custom application consumption. Each row contains a SQL UNDO statement, which can be used to roll back the change, and a SQL REDO statement, which details the original operation. Log analysis can be conducted with the online catalog as the dictionary source, with an offline dictionary that has been extracted to a set of relevant redo logs, or with a standalone flat file. With offline analysis, LogMiner can be run on a separate database, using archived redo logs and the associated dictionary from the source database, thus alleviating resource consumption on the source system.
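A minimal SQL-interface session, assuming a hypothetical archived log file name, might look like this:

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    LOGFILENAME => '/u01/arch/arch_1_100.arc',
    OPTIONS     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT username, operation, sql_redo, sql_undo
FROM v$logmnr_contents
WHERE seg_owner = 'HR' AND seg_name = 'EMPLOYEES';

EXECUTE DBMS_LOGMNR.END_LOGMNR;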