How to find Exadata database machine version

Find Your Exadata Machine Version


Curious to see which version of Exadata machine you are using? Just follow the steps below and you will have the information.

$cd /opt/oracle.SupportTools/onecommand

$ cat databasemachine.xml |grep -i MACHINETYPE

     
                X5-2 Eighth Rack HC 8TB

Note:

HP => High Performance
HC => High Capacity





How to find indexes that are not in use on Oracle

Finding unused indexes on Oracle

Many times we need to find out or provide a list of the indexes that are used or not used, so that we can decide whether to keep them or drop them.

Here is a quick way to enable monitoring of index usage and later query the DBA_OBJECT_USAGE or V$OBJECT_USAGE view to see whether the indexes are being used. Starting from 12c the V$OBJECT_USAGE view is deprecated, so I recommend using DBA_OBJECT_USAGE.


How to Enable Monitoring on Index Usages

alter index <index_name> monitoring usage;
SQL> alter index idx1  monitoring usage; 

How to Find the Index Usages


select index_name, table_name, used from v$object_usage;

or

select index_name, table_name, used from dba_object_usage;

How to disable Monitoring on Index Usages


alter index <index_name> nomonitoring usage;
SQL> alter index  idx1 nomonitoring usage;


How to generate a script to enable Index monitoring for all Indexes owned by a User


SET PAGESIZE 0
SET FEEDBACK OFF
SET VERIFY OFF

SPOOL enable_index_usages_monitoring.sql
SELECT 'ALTER INDEX "' || i.owner || '"."' || i.index_name || '" MONITORING USAGE;'
FROM   dba_indexes i
WHERE  i.owner = UPPER('&OWNER');
SPOOL OFF

SET PAGESIZE 18
SET FEEDBACK ON

To enable monitoring, simply execute the generated script as shown below:

SQL>@enable_index_usages_monitoring.sql
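For illustration only, if a hypothetical schema SCOTT owned two indexes, the spooled file would contain one ALTER statement per index, something like:

ALTER INDEX "SCOTT"."EMP_IDX1" MONITORING USAGE;
ALTER INDEX "SCOTT"."DEPT_IDX1" MONITORING USAGE;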



How to query the OEM Sysman Repository to find some valuable information



Querying the Oracle Management Repository (OEM repository) to dig out valuable information

How to find Available Target Types in OEM



SQL> select distinct target_type, type_display_name from mgmt_targets order by 1;



TARGET_TYPE                  TYPE_DISPLAY_NAME
cluster Cluster
composite Group
has Oracle High Availability Service
host Host
j2ee_application Application Deployment
j2ee_application_domain Domain Application Deployment
metadata_repository Metadata Repository
oracle_apache Oracle HTTP Server
oracle_beacon Beacon
oracle_cloud Cloud
oracle_database Database Instance
oracle_dbmachine Oracle Database Machine
oracle_dbsys Database System
oracle_em_service EM Service
oracle_emd Agent
oracle_emrep OMS and Repository
oracle_emsvrs_sys EM Servers System
oracle_exadata Oracle Exadata Storage Server
oracle_exadata_dbsys Oracle Database Exadata Storage Server System
oracle_exadata_grid Oracle Exadata Storage Server Grid
oracle_home Oracle Home
oracle_listener Listener
oracle_oms Oracle Management Service
oracle_oms_console OMS Console
oracle_oms_pbs OMS Platform
oracle_pdb Pluggable Database
oracle_si_fabric Ethernet/InfiniBand Fabric
oracle_si_host_remote Systems Infrastructure Remote Host
oracle_si_netswitch Systems Infrastructure Switch
oracle_si_network Systems Infrastructure Network
oracle_si_pdu Systems Infrastructure PDU
oracle_si_rack Systems Infrastructure Rack
oracle_si_server Oracle Server
oracle_si_server_map Systems Infrastructure Server
oracle_si_virtual_platform Oracle Virtual Platform
oracle_si_virtual_server Oracle VM Instance
oracle_vm_cloud Oracle VM Cloud
osm_cluster Cluster ASM
osm_instance Automatic Storage Management
rac_database Cluster Database
weblogic_domain Oracle WebLogic Domain
weblogic_j2eeserver Oracle WebLogic Server
weblogic_nodemanager Oracle WebLogic Node Manager
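Along the same lines (and assuming only the MGMT_TARGETS view used above), a quick way to see how many targets of each type are registered is:

SQL> select target_type, count(*) from mgmt_targets group by target_type order by 2 desc;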

How to find all registered targets in OEM for a particular target type

Say I want to find all the targets registered in the OEM repository whose target type is "rac_database"; in other words, I basically want to find all the RAC databases registered in my OEM.

SQL> select target_name, target_type, target_guid from mgmt_targets where target_type='rac_database';

TargetName TargetType TargetGUID
MTP1P rac_database 237E5C0EB79B5453C77AC7DF1814F9C7
MTP2P rac_database 455C25441A577694E0537C01840A1B7A
MT1D rac_database 45CE40C9DA0E2497E0537C01840A7DC0
AND1D rac_database 46A5236E79CAE7DC821F3BD08D4B4DBC
AND2D rac_database 49BA387D72AB7BED6C800968C83B1334
ORCL1P rac_database 503190DF8F9B6E05E0537C01840A255C

How to find a particular target information registered in OEM

SQL> select * from mgmt_targets where target_name='MT1D';


How to find Important Metrics related to a particular Target


SQL> select * from mgmt$metric_daily where target_name = 'MT1D' and trunc(rollup_timestamp) = trunc(sysdate)-1;


How to find the daily growth of a database from OEM repository


SQL> select rollup_timestamp, average
from mgmt$metric_daily
where target_name = 'MT1D'
and column_label = 'Used Space(GB)'
order by rollup_timestamp;

Rollup_Timestamp              Average DB Size in GB
20-DEC-16                          18317.2
21-DEC-16                          18343.82
22-DEC-16                          18366.56
23-DEC-16                          18488.34
24-DEC-16                          18419.59
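If you prefer the day-over-day growth rather than the absolute size, a small variation on the same query (a sketch using the standard LAG analytic function; output not shown) could be:

SQL> select rollup_timestamp,
            average - lag(average) over (order by rollup_timestamp) as daily_growth_gb
     from   mgmt$metric_daily
     where  target_name  = 'MT1D'
     and    column_label = 'Used Space(GB)'
     order  by rollup_timestamp;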


Oracle Database Profile Management

All About Oracle Database Profile


What is a Profile in Oracle Database

The purpose of a profile in Oracle Database is basically to limit the use of resources by a particular user.
When you create a profile you define a set of limits on database resources. If you assign the profile to a user, then that user cannot exceed these limits.

How to create Profile in Oracle

SQL> CREATE PROFILE MyFirstProfile LIMIT PASSWORD_REUSE_MAX 10 PASSWORD_REUSE_TIME 30;

How to assign Profile to a user in Oracle


You can assign a profile to a user at the time of creating the user, or you can alter an existing user to assign the profile.

SQL> create user andy identified by password profile MyFirstProfile;

or

SQL> ALTER USER andy PROFILE MyFirstProfile;

How to find Profiles assigned to a user in Oracle


SQL> select username, profile from dba_users;

UserName                Profile
ORACLE_OCM      DEFAULT
OJVMSYS               DEFAULT
SYSKM                   DEFAULT
XS$NULL               DEFAULT
GSMCATUSER      DEFAULT

How to find the list of Profiles created in the Database

SQL> select profile, common from dba_profiles;

PROFILE                       COMMON
C##WDUSERS               YES
C##SYSPROFILE           YES
C##OEM_PROFILE        YES
C##APPUSER                  YES
ORA_STIG_PROFILE      NO

Note: If you are running Oracle Database version 12c, it has the multitenancy feature, in which multiple pluggable databases run inside a container. COMMON=YES means the profile is common to all available pluggable databases, while COMMON=NO means the profile is specific to the pluggable database in which it was created and cannot be used beyond that scope.
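For example, a common profile has to be created in the root container (CDB$ROOT) and its name must carry the common prefix (C## by default). A minimal sketch, assuming you are connected to the root as a privileged user, might look like:

SQL> CREATE PROFILE c##my_profile LIMIT FAILED_LOGIN_ATTEMPTS 5 PASSWORD_LIFE_TIME 90 CONTAINER=ALL;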

How to find the contents of a Profile

SQL> select * from dba_profiles where profile='DEFAULT';

PROFILE   RESOURCE_NAME               RESOURCE_TYPE   LIMIT       COMMON
DEFAULT   COMPOSITE_LIMIT             KERNEL          UNLIMITED   NO
DEFAULT   SESSIONS_PER_USER           KERNEL          UNLIMITED   NO
DEFAULT   CPU_PER_SESSION             KERNEL          UNLIMITED   NO
DEFAULT   CPU_PER_CALL                KERNEL          UNLIMITED   NO
DEFAULT   LOGICAL_READS_PER_SESSION   KERNEL          UNLIMITED   NO
DEFAULT   LOGICAL_READS_PER_CALL      KERNEL          UNLIMITED   NO
DEFAULT   IDLE_TIME                   KERNEL          UNLIMITED   NO
DEFAULT   CONNECT_TIME                KERNEL          UNLIMITED   NO
DEFAULT   PRIVATE_SGA                 KERNEL          UNLIMITED   NO
DEFAULT   FAILED_LOGIN_ATTEMPTS       PASSWORD        10          NO
DEFAULT   PASSWORD_LIFE_TIME          PASSWORD        180         NO
DEFAULT   PASSWORD_REUSE_TIME         PASSWORD        UNLIMITED   NO
DEFAULT   PASSWORD_REUSE_MAX          PASSWORD        UNLIMITED   NO
DEFAULT   PASSWORD_VERIFY_FUNCTION    PASSWORD        NULL        NO
DEFAULT   PASSWORD_LOCK_TIME          PASSWORD        1           NO
DEFAULT   PASSWORD_GRACE_TIME         PASSWORD        7           NO


How to modify a Profile


SQL> ALTER PROFILE MyFirstProfile LIMIT PASSWORD_REUSE_TIME 90 PASSWORD_REUSE_MAX UNLIMITED;

How to delete a Profile

SQL> DROP PROFILE MyFirstProfile CASCADE;
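Note that DROP PROFILE ... CASCADE de-assigns the profile from any users that have it; those users automatically fall back to the DEFAULT profile. To see who will be affected beforehand, a simple check against DBA_USERS:

SQL> select username from dba_users where profile = 'MYFIRSTPROFILE';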



How to find RAC Cluster Name and Version

How can you find the name of your RAC Cluster


It is really easy to find the name of your RAC cluster. Just use one of the methods below.

Option 1

Execute the command below from $CRS_HOME/bin

$cemutlo -n
rac11

Option 2

Execute the command below from $CRS_HOME/bin

$olsnodes -c
rac11


Option 3

Navigate to the directory $CRS_HOME/cdata. Under this directory you will find a directory with the name of the cluster, which contains the OCR backups.

cd $CRS_HOME/cdata
ls
localhost  node1 node1.olr  rac11


How to Find the Clusterware (CRS) version in Oracle RAC

Follow the steps below to find the Clusterware version (Grid Infrastructure) of your RAC cluster.

To see the version on the current node

$crsctl query crs softwareversion
Oracle Clusterware version on node [node1] is [12.1.0.2.0]

To see the version on all the nodes

$ crsctl query crs softwareversion -all
Oracle Clusterware version on node [node1] is [12.1.0.2.0]
Oracle Clusterware version on node [node2] is [12.1.0.2.0]

To see the version on a particular node

$ crsctl query crs softwareversion node2
Oracle Clusterware version on node [node2] is [12.1.0.2.0]

You can check the active version with the command below:

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [12.1.0.2.0]
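Depending on the release, crsctl also offers a releaseversion option, which reports the version of the Oracle Clusterware release installed in the local Grid home:

$ crsctl query crs releaseversion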

Architecture of Hadoop cluster

High Level Architecture of Hadoop cluster


As we already know Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware.

What is commodity Hardware?

Commodity hardware means really inexpensive, affordable, and easy-to-obtain computers.


Why is Hadoop important?


Hadoop is important because of its ability to store and process huge amounts of any kind of data with the desired throughput. It is also capable of handling data whose volume and variety are constantly increasing and changing.
A Hadoop cluster can deliver as much computing power as required; for more processing power you just need more computers.
Another characteristic of Hadoop is that it provides full fault tolerance even though the individual computers are fault prone. How? We will see later in this post when we discuss HDFS.
Together with the advantages listed above, Hadoop is really flexible, low cost, and highly scalable.

These advantages of Hadoop are making it famous in the world of Big Data.

I will discuss in more detail how Hadoop achieves all these features, but first let us look at the core components of Hadoop.

Core Hadoop Components



As you can see in the figure, the core components of Hadoop are:

  • Hadoop Common
  • Hadoop Distributed File System (HDFS)
  • MapReduce
    and
  • Data Access/Data Processing/Monitoring and Management Components

Hadoop Common


Hadoop Common is a collection of components and interfaces that support the other Hadoop modules. For example, if HBase and Hive want to access HDFS they need Java archives (JAR files) that are stored in Hadoop Common. Hadoop Common also contains the necessary JAR files and scripts required to start Hadoop.

Hadoop Distributed File System (HDFS)


Image 1

HDFS is the default filesystem of Hadoop. Data on HDFS is stored as data blocks. As we can guess from the name Distributed File System, HDFS creates several replicas of each data block, which are distributed across the nodes of the cluster for reliable and quick data access. A checksum is calculated for the data blocks replicated on the different computing nodes. In case of a checksum error or system failure, erroneous or lost data blocks can be recovered from the replica blocks located on other nodes. This is how Hadoop is able to recover from data loss. There are three components of HDFS:

NameNode
DataNode
Secondary NameNode
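To make the block and replica idea a bit more concrete, here are two HDFS shell commands (the paths are hypothetical) that copy a file into HDFS and then report its blocks, replicas and their locations:

$ hdfs dfs -put /tmp/sample.txt /user/hadoop/sample.txt
$ hdfs fsck /user/hadoop/sample.txt -files -blocks -locations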




Image 2

What is NameNode? What is the purpose of NameNode?


A NameNode in Hadoop stores the metadata of the actual data, e.g. file name, path, number of blocks, block IDs, block locations, number of replicas, etc. (see Image 1 above). Client applications talk to the NameNode whenever they wish to perform a data operation, e.g. read, write, etc.

The NameNode knows the list of blocks and their locations for any given file in HDFS. With this information the NameNode knows how to construct the file from its blocks. When the NameNode receives a request from a client application, it uses its metadata to direct the DataNodes to perform the operation.

The NameNode is critical to HDFS: when the NameNode is down, the HDFS/Hadoop cluster is inaccessible and considered down.

The NameNode is a single point of failure in a Hadoop cluster, and therefore we should consider the hardware and redundancy options for the NameNode while designing and implementing a Hadoop cluster.

The NameNode is usually configured with a lot of memory (RAM), because the block locations are held in main memory.

The NameNode is also responsible for operations like opening, closing, and renaming files and directories.


What is DataNode ?


A DataNode stores the data in the HDFS file format. It is responsible for serving read and write requests from the file system's clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode. See Image 2 above.


What is Secondary NameNode?


The Secondary NameNode in Hadoop is a specially dedicated node in the HDFS cluster whose main function is to take checkpoints of the file system metadata present on the NameNode.

To elaborate a little more on the purpose of the Secondary NameNode, let us get a bit more technical about the NameNode.

As explained above, the NameNode holds the metadata for HDFS, such as namespace information, block information, etc. When in use, all this information is kept in main memory, but it is also stored on disk for persistent storage. The files used by the NameNode to store this data on disk are called the fsimage and the edit logs.

Fsimage - the snapshot of the filesystem when the NameNode started.

Edit logs - the sequence of changes made to the filesystem after the NameNode started.

Edit logs are applied to the fsimage only when the NameNode is restarted, and NameNode restarts are rare, which means the edit logs can grow very large and become unmanageable. Also, as we just learned, edit logs are applied to the fsimage at the time of a NameNode restart: the larger the edit log, the longer the restart time of the NameNode. To overcome these issues we need a mechanism that keeps the edit log size manageable and also keeps the fsimage data up to date, and that job is assigned to the Secondary NameNode. At this point we can also conclude that the whole purpose of the Secondary NameNode is to perform checkpointing in HDFS; it is not a backup of the NameNode.

Data Access/Data Processing/Monitoring and Management Components in Hadoop

What is HBase ?


Apache HBase is a NoSQL database that runs on top of Hadoop as a distributed and scalable big data store in the Hadoop framework. HBase is a powerful database in its own right.

HBase provides two ways of data access:

  • Through their row key or via a table scan for a range of row keys
  • In a batch manner using map-reduce
This dual-approach to data access is something that makes HBase particularly powerful.
You can download HBase from http://www.apache.org/dyn/closer.cgi/hbase/
If you want to learn more about HBase just go through https://hbase.apache.org/


What is  Mahout ?


A mahout is one who drives an elephant; in this sense, Mahout in Hadoop is something which drives Hadoop (the logo of Hadoop is an elephant).

Mahout is an algorithm library for scalable machine learning on Hadoop. It runs on top of Hadoop and uses the MapReduce paradigm.

What is Machine Learning?


Machine learning is a discipline of artificial intelligence focused on enabling machines to learn without being explicitly programmed, and it is commonly used to improve future performance based on previous outcomes.
Once the data is stored on the Hadoop Distributed File System (HDFS), Mahout provides the data science tools to automatically find meaningful patterns in those big data sets, and uses this information to make it faster and easier to turn big data into big information.

If you want to learn more about Mahout just go through http://mahout.apache.org/

What is Pig ?


Pig is a high-level scripting language that is used with Apache Hadoop. It enables you to write complex data transformations without knowing Java. Pig's simple SQL-like scripting language is called Pig Latin. We can perform all the data manipulation operations in Hadoop using Pig.
If you want to learn more about Pig navigate through following links
https://pig.apache.org/
https://www.tutorialspoint.com/apache_pig/

What is Hive ?


Hive is a data warehouse infrastructure tool to process structured data in Hadoop. You can develop SQL-type scripts to do MapReduce operations using Hive. The SQL-like language Hive uses to process/query data is called HiveQL or HQL.
If you want to learn more about Hive go through following urls
https://hive.apache.org/
https://www.tutorialspoint.com/hive/hive_introduction.htm
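Just to give a flavour of HiveQL, here is a minimal, purely illustrative sketch (the table name and input file are hypothetical):

hive> CREATE TABLE emp (id INT, name STRING, salary DOUBLE)
      ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
hive> LOAD DATA LOCAL INPATH '/tmp/emp.csv' INTO TABLE emp;
hive> SELECT name, salary FROM emp WHERE salary > 50000;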

What is ZooKeeper ?


ZooKeeper is an open source server that reliably coordinates distributed processes and enables synchronization across a cluster. ZooKeeper in Hadoop can be viewed as a centralized repository where distributed applications can put data and get data out of it. It is used to keep the distributed system functioning together as a single unit, by providing synchronization, serialization and coordination services.
If you want to learn more about ZooKeeper go through  https://zookeeper.apache.org/

download Zookeeper from https://zookeeper.apache.org/releases.html

What is Oozie ?


Oozie is the tool in which all sorts of programs can be pipelined in a desired order to work in Hadoop's distributed environment. Oozie also provides a mechanism to run a job on a given schedule. In simple terms, you can say Apache Oozie is a workflow scheduler and coordination service for managing Hadoop jobs.

If you want to learn more about Oozie go through  http://oozie.apache.org/

What is Sqoop ?


If you want to transfer data from an RDBMS system to Hadoop and vice versa, you need Sqoop.
learn more about Sqoop at http://sqoop.apache.org/

What is Flume ?


Flume is a service used to collect, aggregate and move large amounts of log data.
learn more about Flume at https://flume.apache.org/

What is Avro ?


Avro is a language-neutral data serialization system in Hadoop. Avro uses JSON format to declare the data structures. Presently, it supports languages such as Java, C, C++, C#, Python, and Ruby.
learn more about Avro at https://avro.apache.org/
download Avro from https://avro.apache.org/releases.html

What is Ambari?


Ambari is a management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.
Learn more about Ambari at https://ambari.apache.org/

The list can go on and on, but I am taking a pause here and moving ahead to discuss a little bit about our beloved MapReduce.

MapReduce

What is Map and Reduce in Hadoop ?


As the name suggests, MapReduce in Hadoop is basically a Java-based programming model that consists of two important tasks, namely Map and Reduce. A MapReduce job usually splits the input data set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file system. The framework takes care of scheduling tasks, monitoring them and re-executing the failed tasks.
If you want to know more about MapReduce, just click:
https://hadoop.apache.org/docs/r1.2.1/mapred_tutorial.html
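If you want to see MapReduce in action without writing any code, you can run the word-count example that ships with Hadoop (the exact jar path depends on your installation, and the input/output paths below are hypothetical):

$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /user/hadoop/input /user/hadoop/output
$ hdfs dfs -cat /user/hadoop/output/part-r-00000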

I will discuss the MapReduce workflow and other related important Hadoop terminology in my next post.

Till then Have a Nice Time and Enjoy Learning Hadoop with me.


Big Data and Hadoop

What is Big Data?

Big Data is classified as data which becomes challenging for traditional data processing platforms (such as relational database systems) to collect, manage and process with the desired efficiency.
To solve many Big Data related problems we need to deploy a Big Data solution.
But keep in mind, the design, implementation, and deployment of a Big Data platform require a clear definition of the Big Data problem by system architects and administrators.
There are many Big Data platforms available in the market; below you will find some of them.
1. Apache Spark
2. Apache Storm
3. Google BigQuery
4. Ceph
5. DataTorrent RTS
6. Disco
7. Pachyderm
8. Presto
9. Hydra
10. Misco
11. Qizmt
12. MongoDB
etc.

What is Hadoop?

Hadoop is nothing but a data storage and processing engine.
Different Big Data problems have different properties. A Hadoop-based Big Data platform is capable of dealing with most Big Data problems, but might not be a good fit for others. Because of these and many other reasons, we sometimes need to choose from the Hadoop alternatives.
The alternative could be any of the Big Data platforms listed above.

Does a DBA need to learn Hadoop? If yes, does he need to learn Java to get into Big Data?


Yes, a DBA should learn Hadoop or Big Data technologies; it is a growing demand of the business. Everyone knows DBA stands for "Default Blame Acceptor". Since the database is always blamed, DBAs typically have great troubleshooting skills, processes, and instincts. All of these are critical for good cluster admins.
Therefore a DBA or system admin could become a really good cluster admin.
Due to the growing demand and deployment of Hadoop there are Hadoop cluster administrator jobs in the market, and a DBA or sysadmin can really cash in on this opportunity by learning Hadoop; for such jobs there is no hard and fast requirement to learn Java.


What are Data Properties?

Ideally, data has the following three important properties: volume, velocity, and variety. Another property we can include is value.
I will discuss each of them here to help define the Big Data problem.

Defining a Big Data problem.

To define a Big Data problem you need to consider the following steps:

 Estimate the volume of data.

  There are two types of data in the real world: static (e.g. national census data) and non-static data (e.g. social network streaming data). While estimating the volume of data you must also consider the future volume of data.

 Estimate the velocity of data

   The velocity estimate should include how much data can be generated within a certain amount of time, for example during a day. For static data, the velocity is zero. This property not only affects the volume of data, but also determines how fast a data processing system should handle the data.

Identify the data variety

  Data variety means the different sources of data, such as web click data, social network data, data in relational databases, and so on.
  Each variety of data requires specifically designed modules to integrate it into the Big Data platform.
  For example, a web crawler is needed for getting data from the Web, and a data translation module is needed to transfer data from relational databases to a non-relational Big Data platform.

 Define the expected value of data

  The value property of Big Data defines what we can potentially derive from Big Data and how we can use it.



Oracle Database 12c Architecture




Oracle Database 12c Architecture Diagram



Oracle Database Component Explanation.

What is Oracle Instance ?

An Oracle Instance is a combination of  Memory Area + Background Processes.

What Does the Oracle Database Memory Area Consist Of?

An Oracle Database memory area consists of two parts: the SGA and the PGA.

What is SGA ?

SGA stands for System Global Area. The SGA is a group of shared memory areas that contain data and control information for one Oracle Database instance. All server and background processes share the SGA. Did you notice the words "group of shared memory areas"? Yes, it is a group. Let's look at the components of this group.

As you can see in the image above, the main components are:

Database Buffer Cache
Shared Pool
Large Pool
Redo Log Buffer
Java Pool
Flashback Buffer etc.

So let us start with the most important components and discuss each one by one.

You can query the V$SGASTAT view for information about SGA components.
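For example, a quick look at how the SGA memory is spread across the pools (sizes converted to MB) could be:

SQL> select pool, name, round(bytes/1024/1024,2) as size_mb
     from   v$sgastat
     order  by pool, bytes desc;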


Database Buffer Cache

The database buffer cache, also called the buffer cache, is the memory area that stores copies of data blocks read from data files.
A buffer is a main memory address in which the buffer manager temporarily caches a currently or recently used data block.

A buffer can be in one of the following states:

Unused
The buffer is available for use because it has never been used or is currently unused. This type of buffer is the easiest for the database to use.
Clean
This buffer was used earlier and now contains a read-consistent version of a block as of a point in time. The block contains data but is "clean" so it does not need to be checkpointed. The database can pin the block and reuse it.
Dirty
The buffer contains modified data that has not yet been written to disk. The database must checkpoint the block before reusing it.


The database buffer cache is further managed as pools of buffers, and these are:

default pool
keep pool
recycle pool


System Global Area


Redo Log Buffer

The redo log buffer is a circular buffer in the SGA that stores redo entries describing changes made to the database.
A redo record is a data structure that contains the information necessary to reconstruct, or redo, changes made to the database by DML or DDL operations.

The database processes copy redo entries from the user memory space to the redo log buffer in the SGA. The redo entries take up continuous, sequential space in the buffer. The background process LGWR (log writer) writes the redo log buffer to the active online redo log group on disk


Shared Pool

Library Cache

The library cache is a shared pool memory structure that stores executable SQL and PL/SQL code.

Data Dictionary Cache

The data dictionary is a collection of database tables and views containing reference information about the database, its structures, and its users.

Server Result Cache

The server result cache is a memory pool within the shared pool. Unlike the buffer pools, the server result cache holds result sets and not data blocks.

Reserved Pool

The reserved pool is a memory area in the shared pool that Oracle Database can use to allocate large contiguous chunks of memory.




Large Pool

The large pool is an optional memory area intended for memory allocations that are larger than is appropriate for the shared pool.

Java Pool

The Java pool is an area of memory that stores all session-specific Java code and data within the Java Virtual Machine (JVM). This memory includes Java objects that are migrated to the Java session space at end-of-call.

Streams Pool

The Streams pool stores buffered queue messages and provides memory for Oracle Streams capture processes and apply processes. The Streams pool is used exclusively by Oracle Streams.

Fixed SGA

The fixed SGA is an internal housekeeping area.
For example, the fixed SGA contains:
  • General information about the state of the database and the instance, which the background processes need to access
  • Information communicated between processes, such as information about locks
The size of the fixed SGA is set by Oracle Database and cannot be altered manually. The fixed SGA size can change from release to release

Program Global Area (PGA)


The PGA is memory specific to an operating process or thread that is not shared by other processes or threads on the system. Because the PGA is process-specific, it is never allocated in the SGA.
PGA

Contents of the PGA
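To get a feel for how much PGA memory the instance is currently using, you can query the V$PGASTAT view, for example:

SQL> select name, round(value/1024/1024,2) as value_mb
     from   v$pgastat
     where  unit = 'bytes';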


Overview of Oracle Database Processes

Types of Processes

Processes are divided into the following types:

client process

A client process runs the application or Oracle tool code.

Oracle process

An Oracle process is a unit of execution that runs the Oracle database code.

  • Background process


A background process starts with the database instance and performs maintenance tasks such as instance recovery, cleaning up processes, writing redo buffers to disk, and so on.

  • Server process

A server process performs work based on a client request.
The V$PROCESS view contains one row for each Oracle process connected to a database instance:

COL SPID FORMAT a8
COL STID FORMAT a8
SELECT SPID, STID, PROGRAM FROM V$PROCESS ORDER BY SPID;


Important Background Processes

Process Monitor Process (PMON)


The process monitor (PMON) monitors the other background processes and performs process recovery when a server or dispatcher process terminates abnormally


Listener Registration Process (LREG)


The listener registration process (LREG) registers information about the database instance and dispatcher processes with the Oracle Net Listener

System Monitor Process (SMON)


SMON is in charge of a variety of system-level clean-up duties, including:
  • Performing instance recovery, if necessary, at instance startup. In an Oracle RAC database, the SMON process of one database instance can perform instance recovery for a failed instance.
  • Recovering terminated transactions that were skipped during instance recovery because of file-read or tablespace offline errors. SMON recovers the transactions when the tablespace or file is brought back online
  • Cleaning up unused temporary segments. For example, Oracle Database allocates extents when creating an index. If the operation fails, then SMON cleans up the temporary space
  • Coalescing contiguous free extents within dictionary-managed tablespaces.


Database Writer Process (DBW)


The database writer process (DBW) writes the contents of database buffers to data files. DBW processes write modified buffers in the database buffer cache to disk


Log Writer Process (LGWR)

The log writer process (LGWR) manages the online redo log buffer.

In the following circumstances, LGWR writes all redo entries that have been copied into the buffer since the last time it wrote:
  • A user commits a transaction.
  • An online redo log switch occurs.
  • Three seconds have passed since LGWR last wrote.
  • The redo log buffer is one-third full or contains 1 MB of buffered data.
  • DBW must write modified buffers to disk.


Checkpoint Process (CKPT)

The checkpoint process (CKPT) updates the control file and data file headers with checkpoint information and signals DBW to write blocks to disk. Checkpoint information includes the checkpoint position, SCN, and location in online redo log to begin recovery.


Manageability Monitor Processes (MMON and MMNL)

The manageability monitor process (MMON) performs many tasks related to the Automatic Workload Repository (AWR).


Recoverer Process (RECO)

In a distributed database, the recoverer process (RECO) automatically resolves failures in distributed transactions

Archiver Processes (ARCn)

An archiver process (ARCn) copies online redo log files to offline storage after a redo log switch occurs.

Job Queue Processes (CJQ0 and Jnnn)

A queue process runs user jobs, often in batch mode. A job is a user-defined task scheduled to run one or more times.

Flashback Data Archive Process (FBDA)

The flashback data archive process (FBDA) archives historical rows of tracked tables into Flashback Data Archives.

Space Management Coordinator Process (SMCO)

The SMCO process coordinates the execution of various space management related tasks.
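To see which of these background processes are actually running in your own instance, you can query the V$BGPROCESS view (only rows with a non-null process address correspond to running processes):

SQL> select name, description from v$bgprocess where paddr <> hextoraw('00') order by name;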


I hope you got some idea to start with. If you want to read and understand more in detail, just download the Database Concepts guide from the Oracle documentation library.



ORA-1652: unable to extend temp segment by 128 in tablespace TEMP

ORA-1652: unable to extend temp segment


Error


ADR Home = /u01/app/oracle/diag/rdbms/orcl/orcl1p:
*************************************************************************
2017-09-06 08:39:08.701000 +02:00
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP [CITI]
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP [CITI]
ORA-1652: unable to extend temp segment by 128 in tablespace TEMP [CITI]

Solution


You can resize the tempfile if it is restricted to a limited size:

SQL> alter database tempfile '/u01/app/oradata/temp01.dbf' resize 10240M;

You can also configure the tempfile to autoextend up to a certain size, or up to the maximum size allowed by Oracle (32G for a smallfile tempfile with an 8K block size):
SQL> alter database tempfile '/u01/app/oradata/temp01.dbf' autoextend on next 100m maxsize 20480M;
or
SQL> alter database tempfile '/u01/app/oradata/temp01.dbf'  autoextend on next 100m maxsize unlimited;

If your tempfile has already reached the maximum allowed file size, then you must add a new tempfile.

Add a tempfile to your temp tablespace to allow the TEMP tablespace to grow further:

SQL> alter tablespace temp add tempfile   '/u01/app/oradata/temp02.dbf' size 10240M reuse autoextend on next 100m maxsize  unlimited;
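As a quick check of the current tempfile sizes and autoextend settings (useful before deciding which of the above options to apply), you can run something like:

SQL> select file_name, round(bytes/1024/1024) as size_mb, autoextensible, round(maxbytes/1024/1024) as max_mb
     from   dba_temp_files
     where  tablespace_name = 'TEMP';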



Exclude Tablespace from RMAN Backup

Skip a Tablespace from RMAN Backup


Many times there is a requirement to ignore one or more tablespaces while backing up, restoring, or cloning a database. In this post I will show you how you can achieve this.
Exclude a Tablespace from Backup.

You can use the RMAN CONFIGURE command to update the configuration as shown below:

$ rman  target / catalog rman/xxxx@rcat.oracle.com

connected to target database: orcl(DBID=63198018)
connected to recovery catalog database

RMAN> Configure exclude for tablespace TOOLS;

Tablespace TOOLS will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored.

Now if you backup your database by:

RMAN> BACKUP DATABASE;

Tablespace TOOLS will always be excluded.

How to see which Tablespace is excluded from RMAN backup

You can see which tablespace is excluded from your backup strategy:

RMAN> show exclude;

RMAN configuration parameters for database with db_unique_name ORCL are:
CONFIGURE EXCLUDE FOR TABLESPACE TOOLS;

How to backup the Excluded tablespace


If you have already configured the "exclude" option, you can still back up the excluded tablespace by explicitly specifying it in a BACKUP command or by specifying the NOEXCLUDE option on a BACKUP DATABASE command:

RMAN> backup database NOEXCLUDE;

How to enable the backup of an Excluded Tablespace again


You can clear the exclude setting for the tablespace as shown:

RMAN> configure exclude for tablespace TOOLS clear;

Tablespace TOOLS will be included in future whole database backups
old RMAN configuration parameters are successfully deleted

RMAN> show exclude;

RMAN configuration parameters for database with db_unique_name ORCL are:
RMAN configuration has no stored or default parameters

How to skip read-only and offline tablespaces during backup


RMAN> backup database skip readonly skip offline;

How to restore and recover a database while skipping an excluded tablespace?


RMAN> alter database mount;
RMAN> restore database skip tablespace 'TOOLS';
RMAN> recover database skip tablespace 'TOOLS';