How to Copy ASM datafile From Primary Database to Standby Database on ASM using RMAN


The requirement is to copy one datafile (file#20) from the primary to the standby.

We encountered NOLOGGING/corruption errors on the standby database and want to copy a current/good copy of the datafile from the primary (which is on ASM) to the standby database (which is also on ASM).
There are three options to achieve this.

Option 1

On Primary database :-

Step 1:- Take a copy of the file to a normal file system

RMAN> Copy datafile '<source>' to '<destination>';

for example

RMAN> copy datafile '+DATA/orcl/datafile/users.20.658960000' to '/u01/stage/tmp/backup_file_20.dbf';

Transfer the file backup_file_20.dbf to the standby.

Use FTP or scp to transfer the datafile backup_file_20.dbf to the standby server at
/u01/stage/tmp/backup_file_20.dbf
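
For example, a simple scp transfer might look like this (assuming the standby host is reachable as standby01 and the oracle OS user exists there):

$ scp /u01/stage/tmp/backup_file_20.dbf oracle@standby01:/u01/stage/tmp/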

On Standby database :-

Step 2:- On the standby, catalog this copy using RMAN

RMAN> Catalog datafilecopy '/u01/stage/tmp/backup_file_20.dbf' ;

Step 3:- Switch the datafile to point to the copy on the standby

Stop the recovery

SQL> Alter database recover managed standby database cancel ;

RMAN> switch datafile <file#> to COPY;

For example :-

RMAN> Switch datafile 20 to COPY;
datafile 20 switched to datafile copy "/u01/stage/tmp/backup_file_20.dbf"

Step 4:- Now copy this to the ASM disk group on the standby

RMAN> Backup as copy datafile <file#> format '<diskgroup>';

For example :-

RMAN> Backup as copy datafile 20 format '+DATA';
.....................................
.....................................
output filename=+DATA/orcl/datafile/users.20.658960000 tag=TAG20080701T174316 recid=20 stamp=658953191

Step 5:- Switch the datafile to point to the backup copy created in the ASM disk group

RMAN> switch datafile <file#> to COPY;

For example :-

RMAN> Switch datafile 20 to COPY;
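
Once the datafile points at the copy inside the ASM disk group, restart managed recovery on the standby (it was stopped in Step 3):

SQL> Alter database recover managed standby database disconnect from session;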


Option 2

On Primary database :-

Step 1:- Take a copy of the file to a normal file system


RMAN> copy datafile '+DATA/orcl/datafile/users.20.658960000' to '/u01/stage/tmp/backup_file_20.dbf';

Transfer the file backup_file_20.dbf to the standby.

FTP or scp the datafile backup_file_20.dbf to the standby server at

/u01/stage/tmp/backup_file_20.dbf

On Standby database :-

Step 2

SQL> Shutdown immediate;

SQL> startup mount

Ensure recovery is stopped: no MRP process should be listed in the query below. If recovery is in progress, stop it.

SQL> select process, status from v$managed_standby;
PROCESS                     STATUS
--------------------------- --------------------
ARCH                        CLOSING
ARCH                        CLOSING
ARCH                        CONNECTED
ARCH                        CLOSING
RFS                         IDLE
RFS                         IDLE
RFS                         IDLE
RFS                         IDLE
RFS                         IDLE
RFS                         IDLE
10 rows selected.

To stop recovery, issue the below command on the standby:

SQL> recover managed standby database cancel;

Step 3

Note down the file location

SQL> Select name from v$datafile where file#=20;

+DATA/orcl/datafile/users.20.658933000

Step 4

Catalog the datafilecopy on the standby using RMAN

RMAN > Catalog datafilecopy '/u01/stage/tmp/backup_file_20.dbf' ;

Step 5
RMAN> connect target

RMAN> copy datafilecopy '/u01/stage/tmp/backup_file_20.dbf' to '+DATA';
.....................................
.....................................
output filename=+DATA/orcl/datafile/users.20.658960000 tag=TAG20080701T174316 recid=20 stamp=658953191 <== note down this name
This will report the new location/name of the original file 20.
In this example it is +DATA/orcl/datafile/users.20.658960000

Step 6

Go to SQL*Plus on the standby database

SQL> Select name from v$datafile where file#=20;

+DATA/orcl/datafile/users.20.658933000

This shows the original name/location of the datafile. We need to rename it to the new file
name shown by the above RMAN command so that the controlfile is updated with the new datafile location.
We issue the rename command to point to the new location of the file.

SQL> Alter system set standby_file_management=manual scope=both;

SQL> Alter database rename file '<old name>' to '<new name reported in Step 5 after the copy datafilecopy command>';

For example

SQL> Alter database rename file '+DATA/orcl/datafile/users.20.658933000' to
'+DATA/orcl/datafile/users.20.658960000';

SQL> Alter system set standby_file_management=auto scope=both;

After the activity is over, remove the transient copy:

RMAN> delete datafilecopy '/u01/stage/tmp/backup_file_20.dbf' ;
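
Finally, restart managed recovery on the standby so it can catch up (this assumes recovery was stopped in Step 2):

SQL> Alter database recover managed standby database disconnect from session;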

Option 3

On Primary database :-

Step1

Backup the affected datafile (in my case it is file#20)

RMAN> BACKUP DATAFILE 20 FORMAT '/u01/stage/tmp/backup_file_20_%U';

Transfer the backup piece to the standby.

On Standby database :-

Step2

RMAN> catalog start with '/u01/stage/tmp/' noprompt;

Step3

SQL> recover managed standby database cancel;

Step4

RMAN> restore datafile 20;

Step5

SQL> recover managed standby database disconnect from session;

All the options assume that you have enough archivelogs available to recover the datafile to the current point in time.
If you have a space crunch, you are running 12cR2, and none of the above three options fits your case, then you can check my post


ORA-65500: could not modify DB_UNIQUE_NAME, resource exists


ORCL1> show parameter spfile
NAME                                 TYPE            VALUE
------------------------------------ --------------- ------------------------------
spfile                               string          +DATA01/ORCL/spfile

ORCL1> alter system set db_unique_name="ORCLP" scope=spfile sid='*';
alter system set db_unique_name="ORCLP" scope=spfile sid='*'
*
ERROR at line 1:

ORA-32017: failure in updating SPFILE
ORA-65500: could not modify DB_UNIQUE_NAME, resource exists
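
The error is raised because the database is registered as a resource with Oracle Restart/Clusterware under its current DB_UNIQUE_NAME. A commonly used approach (a sketch only; verify against your environment and the relevant MOS notes before running it) is to de-register the database resource, change the parameter, and register it again:

$ srvctl config database -d ORCL      << note down the current configuration
$ srvctl remove database -d ORCL
SQL> alter system set db_unique_name='ORCLP' scope=spfile sid='*';

Then restart the database for the spfile change to take effect and re-register it with srvctl add database using the configuration noted above.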








What is Oracle Flashback Technologies

Understanding Oracle Flashback Technologies


What is Oracle Flashback Technology?


The Flashback features offer the capability to query historical data, perform change analysis, and perform self-service repair to recover from logical corruptions while the database is online. With Oracle Flashback Technology, you can indeed undo the past.

How to Enable Flashback?


Flashback was introduced in 10g, and up to 11gR1 you needed to cleanly mount the database to enable
flashback. That means you needed downtime to enable it.

before 11gR2

login as sysdba

SQL> SHUTDOWN IMMEDIATE;

SQL> STARTUP MOUNT EXCLUSIVE;

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=2880 SCOPE=BOTH;  << for how long the flashback logs are retained

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=50G SCOPE=BOTH;   << maximum limit on size flashback log can use in DB_RECOVERY_FILE_DEST location

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/ora_data01/orcl/recovery/';  << location where the flashback logs are written

SQL> ALTER DATABASE FLASHBACK ON;

SQL> ALTER DATABASE OPEN;

11gR2 and after

SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=2880 SCOPE=BOTH;  << for how long the flashback logs are retained

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=50G SCOPE=BOTH;   << maximum limit on size flashback log can use in DB_RECOVERY_FILE_DEST location

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='/ora_data01/orcl/recovery/';  << location where the flashback logs are written

SQL> ALTER DATABASE FLASHBACK ON;

SQL> ALTER DATABASE OPEN;

Notice the difference: starting from 11gR2 you don't need to shut down your database to enable flashback.

How to Check if FLASHBACK is Enabled?


SQL> select flashback_on from v$database;

Which background process is responsible for flashback logs writing?

RVWR (Recovery Writer, a.k.a Flashback Writer) was introduced in Oracle 10g to write flashback data from the Flashback Buffer in the SGA to the flashback database logs on disk.

What is Flashback Buffer?


The flashback buffer is a designated area in the SGA (just like the log buffer) that holds flashback data. The RVWR process flushes it from the flashback buffer to the flashback logs on disk.

When are the flashback logs deleted?


  • If the fast recovery area has enough space, then a flashback log is created whenever necessary to satisfy the flashback retention target.
  • If a flashback log is old enough that it is no longer needed to satisfy the flashback retention target, then a flashback log is overwritten.
  • If the database must create a new flashback log and the fast recovery area is full or there is no disk space, then the oldest flashback log is overwritten instead.
  • If the fast recovery area is full, then an archived redo log that is reclaimable according to the FRA rules may be automatically deleted by the fast recovery area to make space for other files. In this case, any flashback logs that would require the use of that redo log file for the use of FLASHBACK DATABASE are also deleted.
  • No file in the fast recovery area is eligible for deletion if it is required to satisfy a guaranteed restore point. Thus, retention of flashback logs and other files required to satisfy the guaranteed restore point, in addition to files required to satisfy the backup retention policy, can cause the fast recovery area to fill completely.
  • When flashback mode is turned off, all flashback logs are deleted ONLY if there are no guaranteed restore points. If there is at least one guaranteed restore point, no flashback logs are deleted.
  • When the oldest guaranteed restore point is deleted and flashback mode is off, all flashback logs older than the second oldest guaranteed restore point are deleted. If flashback mode is on for the database, OR the guaranteed restore point is not the oldest, no flashback logs are deleted.

What Features does Flashback Technology Offer?

Enabling flashback offers the following features:

Flashback Database: restore the entire database to a specific point-in-time, using Oracle-optimized flashback logs, rather than via backups and forward recovery.

Flashback Table: easily recover tables to a specific point-in-time, useful when a logical corruption is limited to one or a set of tables instead of the entire database.

Flashback Drop: recover an accidentally dropped table. It restores the dropped table, and all of its indexes, constraints, and triggers, from the Recycle Bin (a logical container of all dropped objects).

Flashback Transaction: undo the effects of a single transaction, and optionally all of its dependent transactions, via a single PL/SQL operation or by using an Enterprise Manager wizard.

Flashback Transaction Query:  see all the changes made by a specific transaction, useful when an erroneous transaction changed data in multiple rows or tables.

Flashback Query: query any data at some point-in-time in the past. This powerful feature can be used to view and logically reconstruct corrupted data that may have been deleted or changed inadvertently.

Flashback Versions Query: retrieve different versions of a row across a specified time interval instead of a single point-in-time.

Total Recall: efficiently manage and query long-term historical data. Total Recall automatically tracks every single change made to the data stored inside the database and maintains a secure, efficient and easily accessible archive of historical data.

FRA and Flashback Queries

To find the location, quota, in-use/reclaimable space, and number of files:

SQL>SELECT * FROM v$recovery_file_dest;

For each file type, the percentage of FRA space it uses, how much is reclaimable, and the number of files of that type:

SQL>SELECT * FROM v$recovery_area_usage;

Estimated space used by flashback logs:

SQL>SELECT estimated_flashback_size FROM v$flashback_database_log;
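
As a quick illustration of Flashback Database combined with a guaranteed restore point (a minimal sketch; the restore point name before_upgrade is just an example):

SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;

Later, to rewind the database to that point:

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
SQL> ALTER DATABASE OPEN RESETLOGS;
SQL> DROP RESTORE POINT before_upgrade;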


ORA-54013 ON FLASHBACK TABLE WITH VIRTUAL COLUMNS

ORA-54013: INSERT operation disallowed on virtual columns



Solution:


Oracle does not support direct flashback on a table with a virtual column. The reason is that the values of a virtual column are derived rather than stored on disk, so there is no undo for them.

Virtual columns are not officially supported by the flashback table feature (including 12c).
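
A minimal sketch that reproduces the error (the table and column names are made up for illustration):

SQL> create table t1 (n number, n2 number generated always as (n*2) virtual);
SQL> alter table t1 enable row movement;
SQL> flashback table t1 to timestamp systimestamp - interval '15' minute;

The flashback statement fails with ORA-54013: INSERT operation disallowed on virtual columns.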

Refer Metalink note Doc ID 1943791.1 for more details



How to find the Uptime of MySQL Instance


How can I find out since when my MySQL instance has been running?


Log in to your MySQL server and execute:

mysql> show status like 'Uptime';
+---------------+---------+
| Variable_name | Value   |
+---------------+---------+
| Uptime        | 2507101 |
+---------------+---------+
1 row in set (0.00 sec)

Note:- The Uptime value displayed here is in seconds. My MySQL instance has been running for
2507101 seconds (about 29 days).

OR

mysql> \s;
--------------
mysql  Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1
Connection id:          289795
Current database:
Current user:           oracle@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.1.73-log Source distribution
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    latin1
Conn.  characterset:    latin1
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 29 days 21 min 3 sec
Threads: 6  Questions: 14313149  Slow queries: 9  Opens: 77  Flush tables: 1  Open tables: 57  Queries per second avg: 5.709
--------------
ERROR:
No query specified

OR

mysql> status
--------------
mysql  Ver 14.14 Distrib 5.1.73, for redhat-linux-gnu (x86_64) using readline 5.1
Connection id:          289820
Current database:
Current user:           oracle@localhost
SSL:                    Not in use
Current pager:          stdout
Using outfile:          ''
Using delimiter:        ;
Server version:         5.1.73-log Source distribution
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    latin1
Conn.  characterset:    latin1
UNIX socket:            /var/lib/mysql/mysql.sock
Uptime:                 29 days 28 min 45 sec
Threads: 6  Questions: 14314196  Slow queries: 9  Opens: 77  Flush tables: 1  Open tables: 57  Queries per second avg: 5.708
--------------
mysql>
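
Alternatively, if you have shell access, mysqladmin reports the same information (assuming the client can connect with the credentials you supply); the Uptime field is again in seconds, so the output would look something like the status shown above:

$ mysqladmin -u root -p status
Uptime: 2507101  Threads: 6  Questions: 14313149  Slow queries: 9  Opens: 77  Flush tables: 1  Open tables: 57  Queries per second avg: 5.709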

List Oracle Patches Applied in the Database and Usage of DBMS_QOPATCH

How can I list all of the patches that I have applied?

Run opatch lsinventory from $ORACLE_HOME/OPatch to check the patches applied

$opatch lsinventory
or

$opatch lsinventory -display
or

$opatch lsinventory -details

Execute the below command directly in the database as a DBA to check the list of patches registered in the database:


SQL> select * from sys.registry$history;

for 12c


SQL> select * from sys.dba_registry_sqlpatch;

From 12c onwards you can even use the DBMS_QOPATCH package:

SQL> set serverout on;
SQL> exec dbms_qopatch.get_sqlpatch_status;
SQL> select dbms_qopatch.GET_OPATCH_LIST from dual;
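
To get the whole inventory in a readable form, you can pass the XML output through the supplied stylesheet (the same pattern as the patch check below):

SQL> select xmltransform(dbms_qopatch.get_opatch_lsinventory, dbms_qopatch.get_opatch_xslt) from dual;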

How to query whether a specific patch has been applied?

SQL> select xmltransform(dbms_qopatch.is_patch_installed('21359755'), dbms_qopatch.get_opatch_xslt) "Patch installed?" from dual;

Patch installed?
-------------------------------------------------------
Patch Information:
         21359755:   applied on 2015-10-21T23:48:17Z


All you can do with DBMS_QOPATCH package

SQL>  desc dbms_qopatch




FUNCTION ADD_OINV_JOB RETURNS BOOLEAN
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NNAME                          VARCHAR2                IN
 INAME                          VARCHAR2                IN


PROCEDURE CONFIG_OINV_JOBS

FUNCTION DROP_OINV_JOB RETURNS BOOLEAN
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NNAME                          VARCHAR2                IN
 INAME                          VARCHAR2                IN

FUNCTION GET_OPATCH_BUGS RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN     DEFAULT

FUNCTION GET_OPATCH_COUNT RETURNS XMLTYPE

FUNCTION GET_OPATCH_DATA RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN


FUNCTION GET_OPATCH_FILES RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN

FUNCTION GET_OPATCH_INSTALL_INFO RETURNS XMLTYPE

FUNCTION GET_OPATCH_LIST RETURNS XMLTYPE

FUNCTION GET_OPATCH_LSINVENTORY RETURNS XMLTYPE

FUNCTION GET_OPATCH_OLAYS RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN

FUNCTION GET_OPATCH_PREQS RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN


FUNCTION GET_OPATCH_XSLT RETURNS XMLTYPE

FUNCTION GET_PENDING_ACTIVITY RETURNS XMLTYPE

PROCEDURE GET_SQLPATCH_STATUS
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN     DEFAULT

FUNCTION IS_PATCH_INSTALLED RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PNUM                           VARCHAR2                IN

PROCEDURE OPATCH_INV_REFRESH_JOB

PROCEDURE OPATCH_RUN_JOB

FUNCTION PATCH_CONFLICT_DETECTION RETURNS XMLTYPE
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 FILENAME                       VARCHAR2                IN

PROCEDURE REFRESH_OPATCH_DATA

PROCEDURE REPLACE_DIRS_INT
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 PF_ID                          NUMBER                  IN

PROCEDURE REPLACE_LOGSCRPT_DIRS

PROCEDURE SET_CURRENT_OPINST
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 NODE_NAME                      VARCHAR2                IN     DEFAULT
 INST_NAME                      VARCHAR2                IN     DEFAULT

PROCEDURE SET_DEBUG
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 DEBUG                          BOOLEAN                 IN

PROCEDURE SKIP_SANITY_CHECK
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 SKIP                           BOOLEAN                 IN

Oracle Launches World's First Fully Automated Database, Oracle 18c

Oracle launches 18c, its autonomous database and automated cybersecurity system


Oracle chief technology officer Larry Ellison unveiled the company's autonomous database, called 18c, at Oracle OpenWorld, as well as a cybersecurity system that rides shotgun with it.

Key Takeaways from the keynote

  • Oracle autonomous database runs on Exadata infrastructure
  • Provides total automation based on machine learning
  • Oracle 18c is available on-premises, in Oracle's public cloud, and on Cloud at Customer
  • The data warehouse version of Oracle 18c will be available in December 2017; the OLTP version will be available in June 2018.
  • Online scale-up and scale-down of resources, e.g. CPU


Autonomous Database For Datawarehouse Oracle 18c


  • Available from December 2017
  • Fully automated provisioning, patching, upgrading, and tuning while the database is running
  • Eliminates human labour and therefore human error
  • Minimizes hardware resource consumption
  • Guaranteed half the cost of the same workload running at Amazon
  • Guaranteed 99.995% availability

Autonomous Database For OLTP Oracle 18c

  • Available from June 2018
  • Fully automated provisioning, patching, upgrading, and tuning while the database is running
  • Handles OLTP and mixed workloads
  • Mission-critical provisioning with high performance and 99.995% availability
  • Low-cost provisioning for non-critical workloads, e.g. test or dev

    Click Here to see the Oracle OpenWorld webcast

I am sure as a DBA you will not like this post but I will still ask you to hit the like button :)



How to find Exadata database machine version

Find Your Exadata Machine Version


Are you curious to see which version of Exadata you are using? Just follow the steps below and you will have the information.

$cd /opt/oracle.SupportTools/onecommand

$ cat databasemachine.xml |grep -i MACHINETYPE

     
X5-2 Eighth Rack HC 8TB

Note:

HP => High Performance
HC => High Capacity
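
Related tip: on the database and cell nodes you can also check the Exadata storage software image version with the imageinfo utility (a quick pointer; its location and output vary by release):

$ imageinfo -ver    << prints the active image version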





How to find indexes that are not in use on Oracle

Finding unused indexes on Oracle

Many times we need to find out, or provide, the list of indexes that are used/not used so that we can decide whether to keep them or drop them.

Here is a quick way to enable monitoring of index usage and then query the dba_object_usage or v$object_usage view to see whether the indexes are being used. Starting from 12c the v$object_usage view is deprecated, and therefore I recommend using dba_object_usage.


How to Enable Monitoring of Index Usage

alter index <index_name> monitoring usage;
SQL> alter index idx1 monitoring usage;

How to Find the Index Usage


select index_name, table_name, used from v$object_usage;

or

select index_name, table_name, used from dba_object_usage;

How to Disable Monitoring of Index Usage


alter index <index_name> nomonitoring usage;
SQL> alter index idx1 nomonitoring usage;


How to Generate a Script to Enable Index Monitoring for all Indexes Owned by a User


SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF

SPOOL enable_index_usages_monitoring.sql
SELECT 'ALTER INDEX "' || i.owner || '"."' || i.index_name || '" MONITORING USAGE;'
FROM   dba_indexes i
WHERE  owner      = UPPER('&OWNER');
SPOOL OFF

SET PAGESIZE 18
SET FEEDBACK ON

To enable monitoring you can simply execute the script as shown below:

SQL>@enable_index_usages_monitoring.sql
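
The spooled file contains one statement per index. For a hypothetical owner SCOTT with indexes IDX1 and IDX2, it would look like:

ALTER INDEX "SCOTT"."IDX1" MONITORING USAGE;
ALTER INDEX "SCOTT"."IDX2" MONITORING USAGE;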



How to query OEM Sysman Repository to find some valuable information



Querying the Oracle Management Repository (OEM repository) to dig out some valuable information

How to find Available Target Types in OEM



SQL> select distinct target_type, type_display_name from mgmt_targets order by 1;



Target Type Display Name
cluster Cluster
composite Group
has Oracle High Availability Service
host Host
j2ee_application Application Deployment
j2ee_application_domain Domain Application Deployment
metadata_repository Metadata Repository
oracle_apache Oracle HTTP Server
oracle_beacon Beacon
oracle_cloud Cloud
oracle_database Database Instance
oracle_dbmachine Oracle Database Machine
oracle_dbsys Database System
oracle_em_service EM Service
oracle_emd Agent
oracle_emrep OMS and Repository
oracle_emsvrs_sys EM Servers System
oracle_exadata Oracle Exadata Storage Server
oracle_exadata_dbsys Oracle Database Exadata Storage Server System
oracle_exadata_grid Oracle Exadata Storage Server Grid
oracle_home Oracle Home
oracle_listener Listener
oracle_oms Oracle Management Service
oracle_oms_console OMS Console
oracle_oms_pbs OMS Platform
oracle_pdb Pluggable Database
oracle_si_fabric Ethernet/InfiniBand Fabric
oracle_si_host_remote Systems Infrastructure Remote Host
oracle_si_netswitch Systems Infrastructure Switch
oracle_si_network Systems Infrastructure Network
oracle_si_pdu Systems Infrastructure PDU
oracle_si_rack Systems Infrastructure Rack
oracle_si_server Oracle Server
oracle_si_server_map Systems Infrastructure Server
oracle_si_virtual_platform Oracle Virtual Platform
oracle_si_virtual_server Oracle VM Instance
oracle_vm_cloud Oracle VM Cloud
osm_cluster Cluster ASM
osm_instance Automatic Storage Management
rac_database Cluster Database
weblogic_domain Oracle WebLogic Domain
weblogic_j2eeserver Oracle WebLogic Server
weblogic_nodemanager Oracle WebLogic Node Manager

How to find all registered targets in OEM for a particular target type

Say I want to find all the targets registered in the OEM repository whose target type is "rac_database";
in other words, I want to find all the RAC databases registered in my OEM.

SQL> select target_name,target_type,target_guid from mgmt_targets where target_type='rac_database';

TargetName TargetType TargetGUID
MTP1P rac_database 237E5C0EB79B5453C77AC7DF1814F9C7
MTP2P rac_database 455C25441A577694E0537C01840A1B7A
MT1D rac_database 45CE40C9DA0E2497E0537C01840A7DC0
AND1D rac_database 46A5236E79CAE7DC821F3BD08D4B4DBC
AND2D rac_database 49BA387D72AB7BED6C800968C83B1334
ORCL1P rac_database 503190DF8F9B6E05E0537C01840A255C

How to find the information for a particular target registered in OEM

SQL> select * from mgmt_targets where target_name='MT1D';


How to find Important Metrics related to a particular Target


SQL> select * from mgmt$metric_daily where target_name = 'MT1D' and trunc(rollup_timestamp) = trunc(sysdate)-1;


How to find the daily growth of a database from OEM repository


SQL> select rollup_timestamp, average
from mgmt$metric_daily
where target_name = 'MT1D'
and column_label = 'Used Space(GB)'
order by rollup_timestamp;

Rollup_Timestamp              Average DB Size in GB
20-DEC-16                          18317.2
21-DEC-16                          18343.82
22-DEC-16                          18366.56
23-DEC-16                          18488.34
24-DEC-16                          18419.59


Oracle Database Profile Management

All About Oracle Database Profile


What is a Profile in Oracle Database

The purpose of a profile in the Oracle database is basically to limit the use of resources by a particular user.
When you create a profile, you define a set of limits on database resources. If you assign the profile to a user, then that user cannot exceed these limits.

How to create Profile in Oracle

SQL> CREATE PROFILE MyFirstProfile LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;
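
Note that password limits are always enforced, but kernel resource limits (like SESSIONS_PER_USER or IDLE_TIME) are honored only when the RESOURCE_LIMIT initialization parameter is TRUE:

SQL> ALTER SYSTEM SET RESOURCE_LIMIT = TRUE;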

How to assign Profile to a user in Oracle


You can assign a profile to a user at the time of creating the user, and you can alter the user to assign the profile to an existing user.

SQL> create user andy identified by password profile MyFirstProfile;

or

SQL> ALTER USER andy PROFILE MyFirstProfile;

How to find the Profile assigned to a user in Oracle


SQL> select username, profile from dba_users;

USERNAME         PROFILE
ORACLE_OCM       DEFAULT
OJVMSYS          DEFAULT
SYSKM            DEFAULT
XS$NULL          DEFAULT
GSMCATUSER       DEFAULT

How to find the list of Profiles created in the Database

SQL> select profile, common from dba_profiles;

PROFILE              COMMON
C##WDUSERS           YES
C##SYSPROFILE        YES
C##OEM_PROFILE       YES
C##APPUSER           YES
ORA_STIG_PROFILE     NO

Note:- If you are running Oracle Database version 12c, it has the multitenancy feature, wherein you have multiple pluggable databases running in a container. COMMON=YES means the profile is common to all available pluggable databases, while COMMON=NO means the profile is specific to the pluggable database in which it was created and cannot be used beyond that scope.

How to find the contents of a Profile

SQL> select * from dba_profiles where profile='DEFAULT';

PROFILE   RESOURCE_NAME               RESOURCE_TYPE   LIMIT       COMMON
DEFAULT   COMPOSITE_LIMIT             KERNEL          UNLIMITED   NO
DEFAULT   SESSIONS_PER_USER           KERNEL          UNLIMITED   NO
DEFAULT   CPU_PER_SESSION             KERNEL          UNLIMITED   NO
DEFAULT   CPU_PER_CALL                KERNEL          UNLIMITED   NO
DEFAULT   LOGICAL_READS_PER_SESSION   KERNEL          UNLIMITED   NO
DEFAULT   LOGICAL_READS_PER_CALL      KERNEL          UNLIMITED   NO
DEFAULT   IDLE_TIME                   KERNEL          UNLIMITED   NO
DEFAULT   CONNECT_TIME                KERNEL          UNLIMITED   NO
DEFAULT   PRIVATE_SGA                 KERNEL          UNLIMITED   NO
DEFAULT   FAILED_LOGIN_ATTEMPTS       PASSWORD        10          NO
DEFAULT   PASSWORD_LIFE_TIME          PASSWORD        180         NO
DEFAULT   PASSWORD_REUSE_TIME         PASSWORD        UNLIMITED   NO
DEFAULT   PASSWORD_REUSE_MAX          PASSWORD        UNLIMITED   NO
DEFAULT   PASSWORD_VERIFY_FUNCTION    PASSWORD        NULL        NO
DEFAULT   PASSWORD_LOCK_TIME          PASSWORD        1           NO
DEFAULT   PASSWORD_GRACE_TIME         PASSWORD        7           NO


How to modify a Profile


SQL> ALTER PROFILE MyFirstProfile LIMIT PASSWORD_REUSE_TIME 90 PASSWORD_REUSE_MAX UNLIMITED;

How to delete a Profile

SQL> DROP PROFILE MyFirstProfile CASCADE;



How to find RAC Cluster Name and Version

How can you find the name of your RAC Cluster


It is really easy to find the name of your RAC cluster. Just use one of the methods below.

Option 1

Execute the below command from $CRS_HOME/bin:

$cemutlo -n
rac11

Option 2

Execute the below command from $CRS_HOME/bin:

$olsnodes -c
rac11


Option 3

Navigate to the directory $CRS_HOME/cdata. Under this directory you will find a directory with the name of the cluster, which contains the OCR backups.

$ cd $CRS_HOME/cdata
$ ls
localhost  node1  node1.olr  rac11


How to Find the Clusterware (CRS) version in Oracle RAC

Follow the steps below to find the clusterware (Grid Infrastructure) version of your RAC cluster.

To see it on the current node:

$crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [12.1.0.2.0]

To see it on all the nodes:

$ crsctl query crs softwareversion -all
Oracle Clusterware version on node [node1] is [12.1.0.2.0]
Oracle Clusterware version on node [node2] is [12.1.0.2.0]

To see it on a particular node:

$ crsctl query crs softwareversion node2
Oracle Clusterware version on node [node2] is [12.1.0.2.0]

You can check the active version with the below command:

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
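
Related: releaseversion shows the version of the clusterware software installed in the home, regardless of which version is currently active:

$ crsctl query crs releaseversion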

Architecture of Hadoop cluster

High Level Architecture of Hadoop cluster


As we already know, Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware.

What is commodity Hardware?

Commodity hardware means inexpensive, affordable, and easy-to-obtain computers.


Why is Hadoop important?


Hadoop is important because of its ability to store and process huge amounts of any kind of data with the desired throughput. It is also able to handle data whose volume and variety are constantly increasing and changing.
A Hadoop cluster can deliver as much computing power as required; for more processing power you just need more computers.
Another characteristic of Hadoop is that it provides full fault tolerance even though the individual computers are fault prone. How? We will see later in this post when we discuss HDFS.
Together with the advantages listed above, Hadoop is really flexible, low cost, and highly scalable.

These advantages are making Hadoop famous in the world of Big Data.

I will discuss in more detail how Hadoop achieves all these features, but first let us look at the core components of Hadoop.

Core Hadoop Components



The core components of Hadoop are:

  • Hadoop Common
  • Hadoop Distributed File System (HDFS)
  • MapReduce
    and
  • Data Access/Data Processing/Monitoring and Management Components

Hadoop Common


Hadoop Common is a collection of components and interfaces that support the other Hadoop modules. For example, if HBase and Hive want to access HDFS they need the Java archives (JAR files) that are stored in Hadoop Common. Hadoop Common also contains the necessary JAR files and scripts required to start Hadoop.

Hadoop Distributed File System (HDFS)


Image 1

HDFS is the default filesystem of Hadoop. Data on HDFS is stored as data blocks. As we can guess from the name Distributed File System, HDFS creates several replicas of each data block, which are distributed across the nodes of the cluster for reliable and quick data access. A checksum is calculated for the data blocks replicated on the various computing nodes. In case of a checksum error or system failure, erroneous or lost data blocks can be recovered from the replica blocks located on other nodes. This way Hadoop is able to recover from data loss. There are three components of HDFS:

NameNode
DataNode
Secondary NameNode




Image 2

What is NameNode? What is the purpose of NameNode?


A NameNode in Hadoop stores the metadata of the actual data, e.g. filename, path, number of blocks, block IDs, block locations, number of replicas, etc. (see Image 1 above). Client applications talk to the NameNode whenever they wish to perform a data operation, e.g. read, write, etc.

The NameNode knows the list of blocks and their locations for any given file in HDFS. With this information the NameNode knows how to construct the file from its blocks. When the NameNode receives a request from a client application, it uses this metadata to direct the DataNodes to perform the operation.

The NameNode is critical to HDFS: when the NameNode is down, the HDFS/Hadoop cluster is inaccessible and considered down.

The NameNode is a single point of failure in a Hadoop cluster, and therefore we should consider the hardware and redundancy options for the NameNode while designing and implementing a Hadoop cluster.

The NameNode is usually configured with a lot of memory (RAM), because the block locations are held in main memory.

NameNode is responsible for operations like opening, closing, and renaming files and directories.


What is DataNode ?


A DataNode stores the actual data of HDFS. It is responsible for serving read and write requests from the file system's clients. DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. See Image 2 above.


What is Secondary NameNode?


The Secondary NameNode in Hadoop is a specially dedicated node in the HDFS cluster whose main function is to take checkpoints of the file system metadata present on the NameNode.

To elaborate a little more on the purpose of the Secondary NameNode, let us get a little more technical about the NameNode.

As explained above, the NameNode holds the metadata for HDFS, like namespace information, block information, etc. When in use, all this information is stored in main memory, but it is also stored on disk for persistent storage. The files used by the NameNode to store this data on disk are called the fsimage and the edit logs.

Fsimage - the snapshot of the filesystem when the NameNode started.

Edit logs - the sequence of changes made to the filesystem after the NameNode started.

Edit logs are applied to the fsimage only when the NameNode is restarted, and NameNode restarts are rare, which means the edit logs can grow very large and become unmanageable. Also, as we just learned, edit logs are applied to the fsimage at NameNode restart time: the larger the edit log, the longer the restart of the NameNode. To overcome these issues we need a mechanism that keeps the edit log size manageable and also keeps the fsimage data up to date. That job is assigned to the Secondary NameNode. At this point we can also conclude that the Secondary NameNode's whole purpose is to perform checkpointing in HDFS; it is not a backup of the NameNode.
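
To get a feel for HDFS, here are a few basic filesystem commands (a minimal sketch, assuming a running cluster with the hdfs client on your path; the paths are made up):

$ hdfs dfs -mkdir -p /user/andy/input               << create a directory in HDFS
$ hdfs dfs -put localfile.txt /user/andy/input/     << copy a local file into HDFS
$ hdfs dfs -ls /user/andy/input                     << list the directory
$ hdfs dfs -cat /user/andy/input/localfile.txt      << print the file contents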

Data Access/Data Processing/Monitoring and Management Components in Hadoop

What is HBase ?


Apache HBase is a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store in the Hadoop framework. HBase is a powerful database in its own right.

HBase provides two ways of accessing data:

  • Through their row key or via a table scan for a range of row keys
  • In a batch manner using map-reduce
This dual-approach to data access is something that makes HBase particularly powerful.
You can download HBase from http://www.apache.org/dyn/closer.cgi/hbase/
If you want to learn more about HBase just go through https://hbase.apache.org/


What is  Mahout ?


A mahout is one who drives an elephant; in this sense, Mahout in Hadoop is something that drives Hadoop (the Hadoop logo is an elephant).

Mahout is an algorithm library for scalable machine learning on Hadoop. It runs on top of Hadoop and uses the MapReduce paradigm.

What is Machine Learning?


Machine learning is a discipline of artificial intelligence focused on enabling machines to learn without being explicitly programmed, and it is commonly used to improve future performance based on previous outcomes.
Once the data is stored in the Hadoop Distributed File System (HDFS), Mahout provides the data science tools to automatically find meaningful patterns in those big data sets and uses this information to make it faster and easier to turn big data into big information.

If you want to learn more about Mahout just go through http://mahout.apache.org/

What is Pig ?


Pig is a high-level scripting language that is used with Apache Hadoop. It enables you to write complex data transformations without knowing Java. Pig's simple SQL-like scripting language is called Pig Latin. We can perform all the data manipulation operations in Hadoop using Pig.
If you want to learn more about Pig navigate through following links
https://pig.apache.org/
https://www.tutorialspoint.com/apache_pig/

What is Hive ?


Hive is a data warehouse infrastructure tool to process structured data in Hadoop. You can develop SQL-type scripts to do MapReduce operations using Hive. Hive uses an SQL-like language to process/query data, called HiveQL or HQL.
If you want to learn more about Hive go through following urls
https://hive.apache.org/
https://www.tutorialspoint.com/hive/hive_introduction.htm

What is ZooKeeper ?


Zookeeper is an open source server that reliably coordinates distributed processes and enables synchronization across a cluster. Zookeeper in Hadoop can be viewed as a centralized repository where distributed applications can put data and get data out of it. It is used to keep the distributed system functioning together as a single unit, using its synchronization, serialization, and coordination capabilities.
If you want to learn more about ZooKeeper go through  https://zookeeper.apache.org/

download Zookeeper from https://zookeeper.apache.org/releases.html

What is Oozie ?


Oozie is a tool in which all sorts of programs can be pipelined in a desired order to run in Hadoop's distributed environment. Oozie also provides a mechanism to run a job at a given schedule. In simple terms, you can say Apache Oozie is a workflow scheduler and coordination service for managing Hadoop jobs.

If you want to learn more about Oozie go through  http://oozie.apache.org/

What is Sqoop ?


If you want to transfer data from an RDBMS to Hadoop, or vice versa, you need Sqoop.
learn more about Sqoop at http://sqoop.apache.org/

What is Flume ?


Flume is a service used to collect, aggregate, and move large amounts of log data.
learn more about Flume at https://flume.apache.org/

What is Avro ?


Avro is a language-neutral data serialization system in Hadoop. Avro uses JSON format to declare the data structures. Presently, it supports languages such as Java, C, C++, C#, Python, and Ruby.
learn more about Avro at https://avro.apache.org/
download Avro from https://avro.apache.org/releases.html

What is Ambari?


Ambari is a management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.
Learn more about Ambari at https://ambari.apache.org/

The list could go on and on, but I am pausing here and moving ahead to discuss a little bit about our beloved MapReduce.

MapReduce

What is Map and Reduce in Hadoop ?


As the name suggests, MapReduce in Hadoop is basically a Java-based programming model that contains two important tasks, namely Map and Reduce. A MapReduce job usually splits the input data set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file system. The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
Want to know more about MapReduce just click at
https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html
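
As a quick illustration, this is how the classic wordcount example job is typically submitted (a sketch; the examples jar name and location vary by distribution and version, and the HDFS paths reuse the made-up ones from the HDFS section above):

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /user/andy/input /user/andy/output
$ hdfs dfs -cat /user/andy/output/part-r-00000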

I will discuss the MapReduce workflow and other related important Hadoop terminology in my next post.

Till then Have a Nice Time and Enjoy Learning Hadoop with me.