Channel: Karl Arao's TiddlyWiki

connection pool / connection management

JDBC connection pool
http://blog.enkitec.com/2010/04/jdbc-connection-pooling-for-oracle-databases/
A good tutorial on JDBC connection pooling (with specific examples): http://www.webdbtips.com/267632/


from oaktable list
On a similar note, a recent discovery we made was in terms of connection pooling for odp.net.

The docs say:

Minpool = how many connections you start with
Maxpool = the cap
Lifetime = max life of a connection (0=forever)

We mistakenly assumed the third parameter, if set to 0, would mean connections would grow from minpool to 'peak usage' and stay there. But they don't. The pool management keeps on trying to get connections down to minpool size.

So we had the 'saw tooth' graph of connections until we bumped up minpool.

Connor McDonald
shared by Graham
https://www.evernote.com/shard/s48/sh/fb7056a3-cb9d-4f9f-a735-274519410839/20e3669a15bfee775aa65461d58c3095
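The saw-tooth behavior described above can be sketched with a toy simulation. This is an illustrative model only, not the real ODP.NET pool code: the point is that the pool maintenance keeps trimming idle connections back toward Min Pool Size even with lifetime=0, so bursty load produces a saw-tooth connection count.

```python
def simulate(demands, min_pool=5, max_pool=50, trim_per_cycle=3):
    """Toy model of a pool whose maintenance shrinks idle connections
    back toward min_pool each cycle (lifetime=0 only disables time-based
    aging, it does not stop the shrink). Returns pool size per cycle."""
    size = min_pool
    history = []
    for busy in demands:
        size = min(max(size, busy), max_pool)   # grow on demand, capped at max_pool
        idle = size - busy
        trim_target = max(busy, min_pool)       # maintenance trims idle toward min_pool
        size = max(trim_target, size - min(idle, trim_per_cycle))
        history.append(size)
    return history

# A burst to 20 connections decays back to min_pool, then bursts again:
# the 'saw tooth' graph until you bump up min_pool.
print(simulate([20, 5, 5, 5, 5, 5, 20]))
```

With min_pool raised to the peak usage, the trim target never drops below the burst size and the graph flattens out.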

DRCP

http://www.oracle.com/technetwork/database/oracledrcp11g-1-133381.pdf
Master Note: Overview of Database Resident Connection Pooling (DRCP) (Doc ID 1501987.1)
Is Database Resident Connection Pooling (DRCP) Supported with JDBC-THIN / JDBC-OCI ? (Doc ID 1087381.1)
How To Setup and Trace Database Resident Connection Pooling (DRCP) (Doc ID 567854.1)
How to tune Database Resident Connection Pooling(DRCP) for scalability (Doc ID 1391004.1)
Connecting to an already started session (Doc ID 1524070.1)

Managing Processes http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN11000 <-- HOWTO
Example 9-6 Database Resident Connection Pooling Application http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18203
When to Use Connection Pooling, Session Pooling, or Neither http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI16652
Database Resident Connection Pooling and LOGON/LOGOFF Triggers http://docs.oracle.com/cd/E11882_01/server.112/e25494/manproc.htm#ADMIN13400
Example 9-7 Connect String to Use for a Deployment in Dedicated Server Mode with DRCP Not Enabled http://docs.oracle.com/cd/E11882_01/appdev.112/e10646/oci09adv.htm#LNOCI18204
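To actually get a DRCP pooled server, the client connect string must request one: in EZConnect you append `:POOLED` to the service, and in a TNS descriptor you add `(SERVER=POOLED)` to CONNECT_DATA. The tiny helpers below just build those two string forms (the helper names are made up for illustration):

```python
def drcp_ezconnect(host, port, service):
    """EZConnect string requesting a DRCP pooled server."""
    return f"{host}:{port}/{service}:POOLED"

def drcp_tns(host, port, service):
    """TNS descriptor requesting a DRCP pooled server via (SERVER=POOLED)."""
    return ("(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)"
            f"(HOST={host})(PORT={port}))"
            f"(CONNECT_DATA=(SERVICE_NAME={service})(SERVER=POOLED)))")

print(drcp_ezconnect("db01", 1521, "orcl"))
```

Compare with Example 9-7 above: a deployment in dedicated server mode simply omits `:POOLED` / `(SERVER=POOLED)`.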

DRCP with JDBC

supported starting with 12c
12c New Features http://docs.oracle.com/database/121/NEWFT/chapter12101.htm#NEWFT182

configure multiple listeners

https://blogs.oracle.com/rtsai/entry/how_to_configure_multiple_oracle
How to configure multiple Oracle listeners
By Robert Tsai on Aug 10, 2009

It has happened to me a few times during stress tests that the Oracle listener became a bottleneck and was not able to handle the required workload. This was resolved by creating multiple listeners, which turned out to be a quick solution. Here is a short step-by-step procedure to configure multiple Oracle listeners on Solaris for a standalone Oracle 9i or 10g environment.

1)  First of all, add an additional NIC and cabling. It can be on a separate subnetwork or the same one. In the latter case, make sure to set a static route if needed.

2) Assume that we are going to configure two listeners,  LISTENER and LISTENER2

Modify listener.ora and tnsnames.ora as follows:

Here is a sample of  listener.ora

 LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.3)(PORT = 1521))
    )
  )

LISTENER2 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.2)(PORT = 1525))
    )
  )

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /u01/my10g/orcl)
      (PROGRAM = extproc)
    )
  )

Here is a sample of tnsnames.ora

LISTENER =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.3)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORCL)
    )
  )

LISTENER2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.100.2)(PORT = 1525))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = ORCL)
    )
  )

EXTPROC_CONNECTION_DATA =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
    )
    (CONNECT_DATA =
      (SID = PLSExtProc)
      (PRESENTATION = RO)
    )
  ) 

3)   Change the database registration

Without this change, starting the Oracle 10g database would cause it to register itself only with the listener running on port 1521 (the default listener). This is not what I wanted: it should register itself with both listeners, LISTENER and LISTENER2, defined on ports 1521 and 1525. For this to happen we have to add an extra line to the database parameter file init{$SID}.ora. The parameter used by Oracle is LOCAL_LISTENER. The reference for this parameter in Oracle's Database Reference says: LOCAL_LISTENER specifies a network name that resolves to an address or address list of Oracle Net local listeners (that is, listeners that are running on the same machine as this instance). The address or address list is specified in the TNSNAMES.ORA file or other address repository as configured for your system. The default value is (ADDRESS=(PROTOCOL=TCP)(HOST=hostname)(PORT=1521)), where hostname is the network name of the local host. See the sample below:

    LOCAL_LISTENER=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525))
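The LOCAL_LISTENER value is just an ADDRESS_LIST built from (host, port) pairs, one per listener. A small sketch (the helper name is made up) that assembles the exact string used in the ALTER SYSTEM below:

```python
def local_listener(endpoints):
    """Build a LOCAL_LISTENER ADDRESS_LIST from (host, port) pairs,
    one ADDRESS per listener the instance should register with."""
    addrs = "".join(
        f"(ADDRESS=(PROTOCOL=TCP)(HOST={h})(PORT={p}))" for h, p in endpoints
    )
    return f"(ADDRESS_LIST={addrs})"

print(local_listener([("10.6.142.145", 1521), ("192.168.100.2", 1525)]))
```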

If you don't use a database parameter file (otherwise the above would overwrite the previous definition on 10.6.142.145) but use an spfile instead, then you can alter this setting via a SQL statement in e.g. SQL*Plus, with an account that has the correct privileges:

    Before change:
    SQL> show parameter LISTENER
    NAME                                 TYPE        VALUE
    ----------------------------------- --------- -----------------------------
    local_listener                       string
    remote_listener                      string
    SQL>

To change it (do not put it on a single line that is "TOO LONG"):
    SQL> alter system set LOCAL_LISTENER="(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.6.142.145)(PORT=1521))
      2  (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525)))" scope=BOTH;
    System altered.
    SQL>

After change
    SQL> show parameter LISTENER
    NAME                      TYPE        VALUE
    ----------------------   --------  -----------------------------
    local_listener            string      (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=10.6.142.145)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.100.2)(PORT=1525)))
    log_archive_local_first   boolean     TRUE
    SQL>

    SQL>
    SQL> show parameter DISPATCHERS
    NAME                      TYPE        VALUE
     -----------------------  -------   ----------------------------
    dispatchers               string      (PROTOCOL=TCP) (SERVICE=orclXDB)
    max_dispatchers           integer
    SQL>

4) Restart the listeners:

     lsnrctl stop

     lsnrctl start  LISTENER

     lsnrctl start  LISTENER2

5)  Check the status of both listeners; they should report the same thing except for the different IP:

     lsnrctl stat

     lsnrctl stat LISTENER

     lsnrctl stat LISTENER2

     ps -ef | grep -i tns      <-- should see two listeners running

6) Spread your connections out among the different listeners. Here are some samples of how to connect to a particular listener, e.g.:
     sqlplus system/oracle@//192.168.100.3:1521/orcl
     sqlplus system/oracle@//192.168.100.2:1525/orcl

ORA-12516: TNS listener

TNS listener could not find available handler with matching protocol stack

TNS:listener could not find available handler with matching protocol stack https://community.oracle.com/thread/362226
Oracle Net Listener Parameters (listener.ora) http://docs.oracle.com/cd/B28359_01/network.111/b28317/listener.htm#NETRF424

http://jhdba.wordpress.com/2010/09/02/using-the-connection_rate-parameter-to-stop-dos-attacks/ <-- good stuff
http://www.oracle.com/technetwork/database/enterprise-edition/oraclenetservices-connectionratelim-133050.pdf <-- good stuff, Oracle Net Listener Connection Rate Limiter
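The connection rate limiter in the papers above caps how many new connections per second the listener will hand off; excess connection attempts are delayed rather than refused. A rough sliding-window sketch of the idea (this is a conceptual model, not the listener's actual implementation):

```python
from collections import deque

class RateLimiter:
    """Allow at most `rate` new connections per rolling 1-second window,
    a rough model of a listener connection rate limit."""
    def __init__(self, rate):
        self.rate = rate
        self.stamps = deque()   # timestamps of recently admitted connections

    def allow(self, now):
        # drop admissions that fell out of the 1-second window
        while self.stamps and now - self.stamps[0] >= 1.0:
            self.stamps.popleft()
        if len(self.stamps) < self.rate:
            self.stamps.append(now)
            return True
        return False            # over the rate: connection is delayed

rl = RateLimiter(3)
print([rl.allow(0.1 * i) for i in range(5)])
```

A burst of DoS-style connection storms thus gets smoothed out instead of exhausting server handlers.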

http://rnm1978.wordpress.com/2010/09/02/misbehaving-informatica-kills-oracle/
http://rnm1978.wordpress.com/2010/10/18/when-is-a-bug-not-a-bug-when-its-a-design-decision/

bad
As per your update on the Application connection pooling settings
========================================
*****Connection pooling parameters*****
========================================
JMS.SESSION_CACHE_SIZE 50
JMS.CONCURRENT_CONSUMERS 50
JMS.RECEIVE_TIMEOUT_MILLIS 1
POOL.MAX_IDLE 10
POOL SIZE 250
POOL MAX WAIT -1
With that said, however, if we look at the settings logically from a purely client-server communication perspective,
we see that the pool itself (i.e., how many connections will be made) is set to 250.
Here is the line which stands out:
POOL SIZE 250
From the SQL*Net point of view, for JDBC thin connection-pooled connections we usually see 10 to 20 connections in the listener log of working environments.
The value of 250 is very high.
++++

JDBC connection issues

GitHub

Awesome GitHub walkthrough - video series: http://308tube.com/youtube/github/
https://github.com/karlarao
http://git-scm.com/download/win
http://www.javaworld.com/javaworld/jw-08-2012/120830-osjp-github.html?page=1

HOWTO - general workflow




Basic commands and getting started

Git Data Flow
1) Current Working Directory	<-- git init <project>
2) Index (cache)				<-- git add .
3) Local Repository				<-- git commit -m "<comment>"
4) Remote Repository	
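The data flow above (working directory -> index -> local repository) can be modeled in a few lines. This is a toy model for illustration, not real git internals, but it reproduces the behavior noted below: commit snapshots only what was added to the index.

```python
class TinyGit:
    """Toy model of git's three local stages."""
    def __init__(self):
        self.working = {}    # path -> contents (working directory)
        self.index = {}      # staging area
        self.commits = []    # list of (message, snapshot) in the local repo

    def add_all(self):                   # git add .
        self.index = dict(self.working)

    def commit(self, message):           # git commit -m "<comment>"
        last = self.commits[-1][1] if self.commits else {}
        if self.index == last:
            return "nothing to commit"   # modified but never re-added
        self.commits.append((message, dict(self.index)))
        return message

    def status(self):                    # git status (very rough)
        return {p for p, c in self.working.items() if self.index.get(p) != c}
```

Editing a file only changes `working`; until `add_all()` copies it into `index`, a commit sees nothing new, which is exactly the "no changes added to commit" message mentioned in the notes.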

Client side setup
http://git-scm.com/downloads   <-- download here 

git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"

Common commands
git init awrscripts				<-- or you can just cd on "awrscripts" folder and execute "git init"
git status
git add . 						<-- add all the files under the master folder to the staging area
git add <filename>				<-- add just a file
git rm --cached <filename>		<-- remove a file from the staging area
git commit -m "initial commit"	<-- to commit changes (w/ comment), and save a snapshot of the local repository 
                                             * note that when you modify, you have to do a "git add ." first..else it will say no changes added to commit
git log							<-- show summary of commits
vi README.md        <-- markdown format readme file, header should start with #

git diff
git add .				
git diff --cached				<-- get the differences in the staging area, because you've already executed the "add"..

## shortcuts
git commit -a -m "short commit"		<-- combination of add and commit
git log --oneline					<-- shorter summary
git status -s						<-- shorter show changes

Integration with Github.com

Github.com setup
go to github.com and create a new repository
on your PC go to C:\Users\Karl
open git bash and type in ssh-keygen below
ssh-keygen.exe -t rsa -C "karlarao@gmail.com"		<-- this will create RSA on C:\Users\Karl directory
copy the contents of id_rsa.pub under C:\Users\karl\.ssh directory
go to github.com -> Account Settings -> SSH Keys -> Add SSH Key
ssh -T git@github.com								<-- to test the authentication
Github.com integrate and push
go to repositories folder -> on SSH tab -> copy the key
git remote add origin <repo ssh key from website>
git remote add origin git@github.com:karlarao/awrscripts.git
git push origin master
Github.com integrate with GUI
download the GUI here http://windows.github.com/
login and configure, at the end just hit skip
go to tools -> options -> change the default storage directory to the local git directory C:\Dropbox\CodeNinja\GitHub
click Scan For Repositories -> click Add -> click Update
click Publish -> click Sync

Branch, Merge, Clone, Fork

Branching	<-- allows you to create a separate working copy of your code 
Merging		<-- merge branches together
Cloning		<-- other developers can get a copy of your code from a remote repo
Forking		<-- make use of someone's code as starting point of a new project


-- 1st developer created a branch r2_index
git branch								<-- show branches
git branch r2_index						<-- create a branch name "r2_index"
git checkout r2_index					<-- to switch to the "r2_index" branch
git checkout <the branch you want to go>	* make sure to close all files before switching to another branch

-- 2nd developer on another machine created r2_misc
git clone <ssh link>					<-- to clone a project
git branch r2_misc
git checkout r2_misc
git push origin <branch name>	<-- to update the remote repo

-- bug fix on master
git checkout master
git push origin master

-- merge to combine the changes from 1st developer to the master project
	* conflict may happen due to changes at the same spot for both branches
git branch r2_index
git merge master

	* conflict looks like the following:
		<<<<<<< HEAD
		1)
		=======
		TOC:
		1) one
		2) two
		3) three
		>>>>>>> master
git push origin r2_index

-- pull, synchronizes the local repo with the remote repo
	* remember, PUSH to send up GitHub, PULL to sync with GitHub
git pull origin master



Delete files on git permanently

http://stackoverflow.com/questions/1983346/deleting-files-using-git-github <-- good stuff
http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository
https://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html
cd /Users/karl/Dropbox/CodeNinja/GitHub/tmp
git init
git status
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch *' --prune-empty --tag-name-filter cat -- --all
git commit -m "."
git push origin master --force

Deleting a repository

https://help.github.com/articles/deleting-a-repository



other references

gitflow http://nvie.com/posts/a-successful-git-branching-model/





DataWarehouseParameters

parallel_automatic_tuning=false                 <--- currently set to TRUE which is a deprecated parameter in 10g
parallel_max_servers=64                             <--- the current value is just too high, caused by parallel_automatic_tuning
parallel_adaptive_multi_user=false             <--- best practice recommends to set this to false to have predictable performance
db_file_multiblock_read_count=64              <--- 1024/16 = 64, where 1024KB (1MB) is the max I/O size and 16KB is your blocksize
parallel_execution_message_size=16384    <--- best practice recommends to set this to this value
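The db_file_multiblock_read_count arithmetic above is just the maximum physical I/O size divided by the database block size, both in KB:

```python
def multiblock_read_count(max_io_kb=1024, block_kb=16):
    """db_file_multiblock_read_count = max I/O size / block size.
    Defaults match the note above: 1MB I/O with a 16KB blocksize."""
    return max_io_kb // block_kb

print(multiblock_read_count())        # 1024 / 16
print(multiblock_read_count(1024, 8)) # same 1MB I/O with an 8KB blocksize
```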


http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd
Christo Kutrovsky
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,13

Note that the PX message buffers are allocated from the LARGE_POOL only if you have parallel_automatic_tuning=true; otherwise (the default) they come from the shared pool, which may be an issue when you try to allocate 64KB chunks.

Craig Shallahamer
http://shallahamer-orapub.blogspot.com/2010/04/finding-parallelization-sweet-spot-part.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelization-vs-duration-part-2.html
http://shallahamer-orapub.blogspot.com/2010/04/parallelism-introduces-limits-part-3.html

Christian Antognini
http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd,22
> alter session force parallel ddl parallel 32;

This should not be necessary. The parallel DDL are enabled by default...
You can check that with the following query:

select pddl_status 
from v$session 
where sid = sys_context('userenv','sid')


PX Deq Credit: send blkd - wait for what?
http://www.asktherealtom.ch/?p=8

PX Deq Credit: send blkd caused by IDE (SQL Developer, Toad, PL/SQL Developer)
http://iamsys.wordpress.com/2010/03/24/px-deq-credit-send-blkd-caused-by-ide-sql-developer-toad-plsql-developer/

http://www.freelists.org/post/oracle-l/PX-Deq-Credit-send-blkd

How can I associate the parallel query slaves with the session that's running the query?
http://www.jlcomp.demon.co.uk/faq/pq_proc.html

What event are the consumer slaves waiting on?

set linesize 150
col "Wait Event" format a30

select s.sql_id,
       px.INST_ID "Inst",
       px.SERVER_GROUP "Group",
       px.SERVER_SET "Set",
       px.DEGREE "Degree",
       px.REQ_DEGREE "Req Degree",
       w.event "Wait Event"
from GV$SESSION s, GV$PX_SESSION px, GV$PROCESS p, GV$SESSION_WAIT w
where s.sid (+) = px.sid and
      s.inst_id (+) = px.inst_id and
      s.sid = w.sid (+) and
      s.inst_id = w.inst_id (+) and
      s.paddr = p.addr (+) and
      s.inst_id = p.inst_id (+)
ORDER BY decode(px.QCINST_ID,  NULL, px.INST_ID,  px.QCINST_ID),
         px.QCSID,
         decode(px.SERVER_GROUP, NULL, 0, px.SERVER_GROUP),
         px.SERVER_SET,
         px.INST_ID;

monitordb

Auto DOP

http://www.oaktable.net/content/auto-dop-and-direct-path-inserts
http://www.pythian.com/news/27867/secrets-of-oracles-automatic-degree-of-parallelism/
http://uhesse.wordpress.com/2011/10/12/auto-dop-differences-of-parallel_degree_policyautolimited/
http://uhesse.wordpress.com/2009/11/24/automatic-dop-in-11gr2/
http://www.rittmanmead.com/2010/01/in-memory-parallel-execution-in-oracle-database-11gr2/


AUTO DOP

delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=AUTO scope=both sid='*';
alter system flush shared_pool;
select 'alter table '||owner||'.'||table_name||' parallel (degree default);' from dba_tables where owner='<app schema>'
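The SELECT above generates one ALTER TABLE per application table. The same DDL generation can be sketched outside the database (schema and table names here are placeholders):

```python
def parallel_default_ddl(owner, tables):
    """Generate `alter table ... parallel (degree default)` statements,
    mirroring the SELECT against dba_tables above."""
    return [f"alter table {owner}.{t} parallel (degree default);" for t in tables]

for stmt in parallel_default_ddl("APPSCHEMA", ["ORDERS", "ORDER_ITEMS"]):
    print(stmt)
```

Setting the tables to DEFAULT degree lets auto DOP (parallel_degree_policy=AUTO) compute the actual DOP per statement.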

AUTO DOP + PX queueing, with no in-mem PX

delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
alter system set parallel_degree_policy=LIMITED scope=both sid='*';
alter system set "_parallel_statement_queuing"=TRUE scope=both sid='*';

and some other config variations....

AUTO DOP PATH AND IGNORE HINTS

1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Parallel_Degree_policy=limited
3) _parallel_statement_queueing=true
4) alter session set "_optimizer_ignore_hints" = TRUE ;
5) set the table and index to “default” degree

NO AUTO DOP PATH AND IGNORE HINTS

1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Resource manager directive to limit the PX per session  = per session 4
3) alter session set "_optimizer_ignore_hints" = TRUE ;
4) _parallel_statement_queueing=true

NO AUTO DOP PATH WITHOUT IGNORING HINTS

1) Calibrate the IO
delete from resource_io_calibrate$;
insert into resource_io_calibrate$ values(current_timestamp, current_timestamp, 0, 0, 200, 0, 0);
commit;
2) Resource manager directive to limit the PX per session  = per session 4
3) _parallel_statement_queueing=true


Monitoring

determine if PX underscore params are set

select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppsv b
where a.indx = b.indx
and a.ksppinm in ('_parallel_cluster_cache_pct','_parallel_cluster_cache_policy','_parallel_statement_queuing','_optimizer_ignore_hints')
order by 1,2
/

list if SQLs are using in-mem PX

The fourth column indicates whether the cursor was satisfied using In-Memory PX; if the 
number of parallel servers is greater than zero but the bytes eligible for predicate offload is
zero, it’s a good indication that In-Memory PX was in use.

select ss.sql_id,
sum(ss.PX_SERVERS_EXECS_total) px_servers,
decode(sum(ss.io_offload_elig_bytes_total),0,'No','Yes') offloadelig,
decode(sum(ss.io_offload_elig_bytes_total),0,'Yes','No') impx,
sum(ss.io_offload_elig_bytes_total)/1024/1024 offloadbytes,
sum(ss.elapsed_time_total)/1000000/sum(ss.px_servers_execs_total) elps,
dbms_lob.substr(st.sql_text,60,1) st
from dba_hist_sqlstat ss, dba_hist_sqltext st
where ss.px_servers_execs_total > 0
and ss.sql_id=st.sql_id
and upper(st.sql_text) like '%IN-MEMORY PX T1%'
group by ss.sql_id,dbms_lob.substr(st.sql_text,60,1)
order by 5
/
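The two DECODEs in the query boil down to one condition, which can be stated as a small predicate:

```python
def inmem_px_likely(px_servers_total, offload_elig_bytes_total):
    """Mirror of the DECODE logic above: parallel servers were used but
    zero bytes were eligible for predicate offload, a good indication
    the cursor was satisfied by In-Memory PX (reads from buffer cache,
    so nothing was eligible for smart scan offload)."""
    return px_servers_total > 0 and offload_elig_bytes_total == 0

print(inmem_px_likely(16, 0))          # PX used, nothing offloadable
print(inmem_px_likely(16, 10485760))   # PX used with offload-eligible bytes
```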



Quick PX test case

select degree,num_rows from dba_tables
where owner='&owner' and table_name='&table_name';

#!/bin/sh
for i in 1 2 3 4 5
do
nohup sqlplus oracle/oracle @px_test.sql $i &
done

set serveroutput on size 20000
variable n number
exec :n := dbms_utility.get_time;
spool autodop_&1..lst
select /* queue test 0 */ count(*) from big_table;
begin
dbms_output.put_line
( (round((dbms_utility.get_time - :n)/100,2)) || ' seconds' );
end;
/
spool off
exit

PX parameters, PX config


parameters

-- essentials
parallel_max_servers - (default: 20xCPU_COUNT) The maximum number of parallel slave processes that may be created on an instance. The default is calculated based on system parameters including CPU_COUNT and PARALLEL_THREADS_PER_CPU. On most systems the value will work out to be 20xCPU_COUNT.
parallel_servers_target - The upper limit on the number of parallel slaves that may be in use on an instance at any given time if parallel queuing is enabled. The default is calculated automatically.
parallel_min_servers - (default: 0) The minimum number of parallel slave processes that should be kept running, regardless of usage. Usually set to eliminate the overhead of creating and destroying parallel processes.
parallel_threads_per_cpu - (default: 2) Used in various parallel calculations to represent the number of concurrent processes that a CPU can support

-- knobs
parallel_degree_policy - (default: MANUAL) Controls several parallel features including Automatic Degree of Parallelism (auto DOP), Parallel Statement Queuing and In-memory Parallel Execution
	MANUAL - disables everything
	LIMITED - only enables auto DOP, the PX queueing & in-memory PX remain disabled
	AUTO - enables everything
parallel_execution_message_size - (default: 16384) The size of parallel message buffers in bytes.
parallel_degree_level - New in 12c. The scaling factor for default DOP calculations. When the parameter value is set to 50 then the calculated default DOP will be multiplied by .5 thus reducing it to half.
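The parallel_degree_level scaling described above is a straight percentage applied to the computed default DOP; a minimal sketch of that arithmetic (the floor-at-1 is an assumption for illustration, since a DOP below 1 makes no sense):

```python
def scaled_dop(computed_dop, degree_level_pct=100):
    """Apply the 12c parallel_degree_level scaling factor to a computed
    default DOP: a value of 50 multiplies the DOP by 0.5, halving it."""
    return max(1, computed_dop * degree_level_pct // 100)

print(scaled_dop(16, 50))   # halved
print(scaled_dop(16))       # default 100 leaves the DOP unchanged
```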

-- resource mgt
pga_aggregate_limit - New in 12c. Has nothing to do with parallel queries. This parameter limits the process PGA memory usage. 
parallel_force_local - (default: FALSE) Determines whether parallel query slaves will be forced to execute only on the node that initiated the query (TRUE), or whether they will be allowed to spread on to multiple nodes in a RAC cluster (FALSE).
parallel_instance_group - Used to restrict parallel slaves to certain set of instances in a RAC cluster.
parallel_io_cap_enabled - (default: FALSE) Used in conjunction with the DBMS_RESOURCE_MANAGER.CALIBRATE_IO function to limit default DOP calculations based on the I/O capabilities of the system.

-- deprecated / old way
parallel_automatic_tuning - (default: FALSE) Deprecated since 10g. This parameter enabled an automatic DOP calculation on objects for which a parallelism attribute is set.
parallel_min_percent - (default: 0) Old throttling mechanism. It represents the minimum percentage of parallel servers that are needed for a parallel statement to execute.

-- leave it as is
parallel_adaptive_multi_user - (default: TRUE) Old mechanism of throttling parallel statements by downgrading. Provides the ability to automatically downgrade the degree of parallelism for a given statement based on the workload when a query executes. In most cases, this parameter should be set to FALSE on Exadata, for reasons we'll discuss later in the chapter. The bigger problem with the downgrade mechanism though is that the decision about how many slaves to use is based on a single point in time, the point when the parallel statement starts.
parallel_degree_limit - (default: CPU) This parameter sets an upper limit on the DOP that can be applied to a single statement. The default means that Oracle will calculate a value for this limit based on the system's characteristics.
parallel_min_time_threshold - (default: AUTO) The minimum estimated serial execution time that will trigger auto DOP. The default is AUTO, which translates to 10 seconds. When the PARALLEL_DEGREE_POLICY parameter is set to AUTO or LIMITED, any statement that is estimated to take longer than the threshold established by this parameter will be considered a candidate for auto DOP.
parallel_server - Has nothing to do with parallel queries. Set to true or false depending on whether the database is RAC enabled or not. This parameter was deprecated long ago and has been replaced by the CLUSTER_DATABASE parameter.
parallel_server_instances - Has nothing to do with parallel queries. It is set to the number of instances in a RAC cluster.

-- underscore params
_parallel_statement_queuing - related to auto DOP, if set to TRUE this enables PX queueing 
_parallel_cluster_cache_policy - (default: ADAPTIVE) related to auto DOP, if set to CACHE this enables the in-mem PX
_parallel_cluster_cache_pct - (default: 80) determines the percentage of the aggregate buffer cache size that is reserved for In-Memory PX, if segments are larger than 80% the size of the aggregate buffer cache, by default, queries using these tables will not qualify for In-Memory PX
_optimizer_ignore_hints - (default: FALSE) if set to TRUE will ignore hints


configuration


See this tiddler for details -> Auto DOP




RAC attack, RACattack

step by step environment


Install rlwrap and set alias

-- if you are subscribed to the EPEL repo
yum install rlwrap

-- if you want to build from source
# wget http://utopia.knoware.nl/~hlub/uck/rlwrap/rlwrap-0.37.tar.gz
# tar zxf rlwrap-0.37.tar.gz
# rm rlwrap-0.37.tar.gz
The configure utility will show an error: you need the GNU readline library.
It just needs the readline-devel package:
# yum install readline-devel*
# cd rlwrap-0.37
# ./configure
# make
# make install
# which rlwrap
/usr/local/bin/rlwrap



alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

Install environment framework - karlenv

# name: environment framework - karlenv
# source URL: http://karlarao.tiddlyspot.com/#%5B%5Bstep%20by%20step%20environment%5D%5D
# notes: 
#      - I've edited/added some lines on the setsid and showsid from 
#         Coskan's code making it suitable for most unix(solaris,aix,hp-ux)/linux environments http://goo.gl/cqRPK
#      - added lines of code before and after the setsid and showsid to get the following info:
#         - software homes installed
#         - get DBA scripts location
#         - set alias
#

# SCRIPTS LOCATION
export TANEL=~/dba/tanel
export KERRY=~/dba/scripts
export KARL=/home/oracle/dba/karao/scripts
export SQLPATH=~/:$TANEL:$KERRY:$KARL
# ALIAS
alias s='rlwrap -D2 -irc -b'\''"@(){}[],+=&^%#;|\'\'' -f $TANEL/setup/wordfile_11gR2.txt sqlplus / as sysdba @/tmp/login.sql'
alias s1='sqlplus / as sysdba @/tmp/login.sql'
alias oradcli='dcli -l oracle -g /home/oracle/dbs_group'
# alias celldcli='dcli -l root -g /root/cell_group'


# MAIN
cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep "HOME NAME" 2> /dev/null
export PATH=""
export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$SQLPATH:~/dba/bin:$PATH
export myid="`whoami`@`hostname`"
export PS1='${myid}':'$PWD':'$ORACLE_SID
$ '
export EDITOR=vi

export GLOGIN=`ls /tmp/login.sql 2> /dev/null | wc -l`
        if [ "$GLOGIN" -eq 1 ] ; then
                        echo ""
        else
						echo "SET SQLPROMPT \"_USER'@'_CONNECT_IDENTIFIER'>' \"
						SET LINES 300 TIME ON" > /tmp/login.sql
        fi

setsid ()
        {
        unset ORATAB
        unset ORACLE_BASE
        unset ORACLE_HOME
        unset ORACLE_SID

        export ORATAB_OS=`ls /var/opt/oracle/oratab 2> /dev/null | wc -l`
        if [ "$ORATAB_OS" -eq 1 ] ; then
                        export ORATAB=/var/opt/oracle/oratab
        else
                        export ORATAB=/etc/oratab
        fi

        export ORAENVFILE=`ls /usr/local/bin/oraenv 2> /dev/null | wc -l`
        if [ "$ORAENVFILE" -eq 1 ] ; then
                        echo ""
        else
                        cat $ORATAB | grep -v "^#" | grep -v "*"
                        echo ""
                        echo "Please enter the ORACLE_HOME: "
                        read RDBMS_HOME
                        export ORACLE_HOME=$RDBMS_HOME
        fi

        if tty -s
        then
                if [ -f $ORATAB ]
                then
                        line_count=`cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//' | wc -l`
                        # check that the oratab file has some contents
                        if [ $line_count -ge 1 ]
                                then
                                sid_selected=0
                                while [ $sid_selected -eq 0 ]
                                do
                                        sid_available=0
                                        for i in `cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//'`
                                                do
                                                sid_available=`expr $sid_available + 1`
                                                sid[$sid_available]=$i
                                                done
                                        # get the required SID
                                        case ${SETSID_AUTO:-""} in
                                                YES) # Auto set use 1st entry
                                                sid_selected=1 ;;
                                                *)
                                                i=1
                                                while [ $i -le $sid_available ]
                                                do
                                                        printf "%2d- %10s\n" $i ${sid[$i]}
                                                        i=`expr $i + 1`
                                                done
                                                echo ""
                                                echo "Select the Oracle SID with given number [1]:"
                                                read entry
                                                if [ -n "$entry" ]
                                                then
                                                        entry=`echo "$entry" | sed "s/[a-z,A-Z]//g"`
                                                        if [ -n "$entry" ]
                                                        then
                                                                entry=`expr $entry`
                                                                if [ $entry -ge 1 ] && [ $entry -le $sid_available ]
                                                                then
                                                                        sid_selected=$entry
                                                                fi
                                                        fi
                                                        else
                                                        sid_selected=1
                                                fi
                                        esac
                                done
                                #
                                # SET ORACLE_SID
                                #
                                export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_PATH:$PATH
                                export ORACLE_SID=${sid[$sid_selected]}
                                echo "Your profile configured for $ORACLE_SID with information below:"
                                unset LD_LIBRARY_PATH
                                ORAENV_ASK=NO
                                . oraenv
                                unset ORAENV_ASK
                                #
                                #GIVE MESSAGE
                                #
                                else
                                echo "No entries in $ORATAB. no environment set"
                        fi
                fi
        fi
        }

showsid()
        {
        echo ""
        echo "ORACLE_SID=$ORACLE_SID"
        echo "ORACLE_BASE=$ORACLE_BASE"
        echo "ORACLE_HOME=$ORACLE_HOME"
        echo ""
        }

# Find oracle_home of running instance
printf "%6s %-20s %-80s\n" "PID" "NAME" "ORACLE_HOME"
pgrep -lf _pmon_ |
  while read pid pname  y ; do
    printf "%6s %-20s %-80s\n" $pid $pname `ls -l /proc/$pid/exe | awk -F'>' '{ print $2 }' | sed 's/bin\/oracle$//' | sort | uniq` 
  done

# SET ORACLE ENVIRONMENT
setsid
showsid




Usage

[root@desktopserver ~]# su - oracle
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ vi .karlenv      <-- copy the script from the "Install environment framework - karlenv" section of the wiki link above
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ ls -la | grep karl
-rw-r--r--  1 oracle dba   6071 Dec 14 15:58 .karlenv
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ . ~oracle/.karlenv      <-- set the environment
<HOME_LIST><HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/><HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/></HOME_LIST><COMPOSITEHOME_LIST></COMPOSITEHOME_LIST>


 1-       +ASM
 2-         dw

Select the Oracle SID with given number [1]:
2      <-- choose an instance
Your profile configured for dw with information below:
The Oracle base has been set to /u01/app/oracle

ORACLE_SID=dw
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

oracle@desktopserver.local:/home/oracle:dw
$ s      <-- rlwrap'd sqlplus alias, also you can use the "s1" alias if you don't have rlwrap installed

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 5 15:41:15 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options


USERNAME             INST_NAME    HOST_NAME                 SID   SERIAL#  VERSION    STARTED  SPID            OPID  CPID            SADDR            PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
SYS                  dw           desktopserver.local       5     8993     11.2.0.3.0 20111219 27483           24    27480           00000000DFB78138 00000000DF8F9FA0


SQL> @gas      <-- calling one of Kerry's scripts from the /home/oracle/dba/scripts directory

 INST   SID PROG       USERNAME      SQL_ID         CHILD PLAN_HASH_VALUE        EXECS       AVG_ETIME SQL_TEXT                                  OSUSER                         MACHINE
----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
    1     5 sqlplus@de SYS           bmyd05jjgkyz1      0        79376787            3         .003536 select a.inst_id inst, sid, substr(progra oracle                         desktopserver.local
    1   922 OMS        SYSMAN        2b064ybzkwf1y      0               0       50,515         .004947 BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2 oracle                         desktopserver.local

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options
oracle@desktopserver.local:/home/oracle:dw



making a generic environment script, called as "dbaenv"

1)
  • mkdir -p $HOME/dba/bin
  • then add the $HOME/dba/bin on the path of .bash_profile
$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:$HOME/dba/bin

export PATH
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:.:$PATH
2) copy the code of .karlenv above then create it as dbaenv file on the $HOME/dba/bin directory
3) call it as follows on any directory
. dbaenv
4) for rac one node this pmoncheck is also helpful to have on the $HOME/dba/bin directory
$ cat pmoncheck
dcli -l oracle -g /home/oracle/dbs_group ps -ef | grep pmon | grep -v grep | grep -v ASM







1MB mbrc

create table parallel_t1(c1 int, c2 char(100));

insert into parallel_t1
select level, 'x'
from dual
connect by level <= 8000
;

commit;


alter system set db_file_multiblock_read_count=128;

-- the underscore parameters below go in the pfile (initorcl.ora) used for the restart:
*._db_block_prefetch_limit=0
*._db_block_prefetch_quota=0
*._db_file_noncontig_mblock_read_count=0

alter system flush buffer_cache;


-- generate one parallel query
select count(*) from parallel_t1;


16:28:36 SYS@orcl> shutdown abort
ORACLE instance shut down.
16:29:21 SYS@orcl> startup pfile='/home/oracle/app/oracle/product/11.2.0/dbhome_2/dbs/initorcl.ora'
ORACLE instance started.

Total System Global Area  456146944 bytes
Fixed Size                  1344840 bytes
Variable Size             348129976 bytes
Database Buffers          100663296 bytes
Redo Buffers                6008832 bytes
Database mounted.
Database opened.
16:29:33 SYS@orcl> alter system flush buffer_cache;

System altered.

16:29:38 SYS@orcl> show parameter db_file_multi

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_file_multiblock_read_count        integer     128
16:29:47 SYS@orcl>
16:29:47 SYS@orcl> set lines 300
16:29:51 SYS@orcl> col "Parameter" FOR a40
16:29:51 SYS@orcl> col "Session Value" FOR a20
16:29:51 SYS@orcl> col "Instance Value" FOR a20
16:29:51 SYS@orcl> col "Description" FOR a50
16:29:51 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:29:51   2  FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:29:51   3  WHERE a.indx = b.indx AND a.indx = c.indx
16:29:51   4  AND substr(ksppinm,1,1)='_'
16:29:51   5  AND a.ksppinm like '%&parameter%'
16:29:51   6  /
Enter value for parameter: read_count

Parameter                                Session Value        Instance Value       Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_file_exec_read_count                 128                  128                  multiblock read count for regular clients
_db_file_optimizer_read_count            128                  128                  multiblock read count for regular clients
_db_file_noncontig_mblock_read_count     0                    0                    number of noncontiguous db blocks to be prefetched
_sort_multiblock_read_count              2                    2                    multi-block read count for sort

16:29:54 SYS@orcl>
16:29:54 SYS@orcl> @mystat

628 rows created.


SNAP_DATE_END
-------------------
2014-09-08 16:29:57


SNAP_DATE_BEGIN
-------------------



no rows selected


no rows selected


0 rows deleted.

16:29:57 SYS@orcl> select count(*) from parallel_t1;

  COUNT(*)
----------
      8000

16:30:03 SYS@orcl> @mystat

628 rows created.


SNAP_DATE_END
-------------------
2014-09-08 16:30:05


SNAP_DATE_BEGIN
-------------------
2014-09-08 16:29:57


      Difference Statistics Name
---------------- --------------------------------------------------------------
               2 CPU used by this session
               4 CPU used when call started
               3 DB time
             628 HSC Heap Segment Block Changes
              10 SQL*Net roundtrips to/from client
              80 buffer is not pinned count
           3,225 bytes received via SQL*Net from client
           2,308 bytes sent via SQL*Net to client
              15 calls to get snapshot scn: kcmgss
               1 calls to kcmgas
              32 calls to kcmgcs
       1,097,728 cell physical IO interconnect bytes
               4 cluster key scan block gets
               4 cluster key scans
             672 consistent changes
             250 consistent gets
              12 consistent gets - examination
             250 consistent gets from cache
             211 consistent gets from cache (fastpath)
               1 cursor authentications
           1,307 db block changes
             703 db block gets
             703 db block gets from cache
              10 db block gets from cache (fastpath)
              18 enqueue releases
              19 enqueue requests
              14 execute count
             530 file io wait time
             149 free buffer requested
               5 index fetch by key
               2 index scans kdiixs1
             218 no work - consistent read gets
              42 non-idle wait count
              19 opened cursors cumulative
               5 parse count (failures)
              12 parse count (hard)
              19 parse count (total)
               1 parse time elapsed
              32 physical read IO requests
       1,097,728 physical read bytes
              32 physical read total IO requests
       1,097,728 physical read total bytes
             134 physical reads
             134 physical reads cache
             102 physical reads cache prefetch
              56 recursive calls
             629 redo entries
          88,372 redo size
             953 session logical reads
               3 shared hash latch upgrades - no wait
               3 sorts (memory)
               2 sorts (rows)
               5 sql area purged
               1 table fetch by rowid
             211 table scan blocks gotten
          13,560 table scan rows gotten
               4 table scans (short tables)
          42,700 undo change vector size
              17 user calls
               3 workarea executions - optimal
               4 workarea memory allocated

61 rows selected.


SNAP_DATE_BEGIN     SNAP_DATE_END
------------------- -------------------
2014-09-08 16:29:57 2014-09-08 16:30:05


1256 rows deleted.

16:30:05 SYS@orcl> set lines 300
16:30:38 SYS@orcl> col "Parameter" FOR a40
16:30:38 SYS@orcl> col "Session Value" FOR a20
16:30:38 SYS@orcl> col "Instance Value" FOR a20
16:30:38 SYS@orcl> col "Description" FOR a50
16:30:38 SYS@orcl> SELECT a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value", a.ksppdesc "Description"
16:30:38   2  FROM x$ksppi a, x$ksppcv b, x$ksppsv c
16:30:38   3  WHERE a.indx = b.indx AND a.indx = c.indx
16:30:38   4  AND substr(ksppinm,1,1)='_'
16:30:38   5  AND a.ksppinm like '%&parameter%'
16:30:38   6  /
Enter value for parameter: prefetch

Parameter                                Session Value        Instance Value       Description
---------------------------------------- -------------------- -------------------- --------------------------------------------------
_db_block_prefetch_quota                 0                    0                    Prefetch quota as a percent of cache size
_db_block_prefetch_limit                 0                    0                    Prefetch limit in blocks



ZFS raid calculator

mystat.sql


Carlos Sierra - mystat

----------------------------------------------------------------------------------------
--
-- File name:   mystat.sql
--
-- Purpose:     Reports delta of current sessions stats before and after a SQL
--
-- Author:      Carlos Sierra
--
-- Version:     2013/10/04
--
-- Usage:       This script does not have parameters. It just needs to be executed
--              twice. First execution just before the SQL that needs to be evaluated.
--              Second execution right after.
--
-- Example:     @mystat.sql
--              <any sql>
--              @mystat.sql             
--
-- Description:
--
--              This script takes a snapshot of v$mystat every time it is executed. Then,
--              on first execution it does nothing else. On second execution it produces
--              a report with the gap between the first and second execution, and resets
--              all snapshots.
--              
--              If you want to capture session statistics for one SQL, then execute this
--              script right before and after your SQL.
--              
--  Notes:            
--              
--              This script uses the global temporary plan_table as a repository.
-- 
--              Developed and tested on 11.2.0.3
--
--              For a more robust tool use Tanel Poder snaper at
--              http://blog.tanelpoder.com
--             
---------------------------------------------------------------------------------------
--
-- snap of v$mystat
INSERT INTO plan_table (
       statement_id /* record_type */,
       timestamp, 
       object_node /* class */, 
       object_alias /* name */, 
       cost /* value */)
SELECT 'v$mystat' record_type,
       SYSDATE,
       TRIM (',' FROM
       TRIM (' ' FROM
       DECODE(BITAND(n.class,   1),   1, 'User, ')||
       DECODE(BITAND(n.class,   2),   2, 'Redo, ')||
       DECODE(BITAND(n.class,   4),   4, 'Enqueue, ')||
       DECODE(BITAND(n.class,   8),   8, 'Cache, ')||
       DECODE(BITAND(n.class,  16),  16, 'OS, ')||
       DECODE(BITAND(n.class,  32),  32, 'RAC, ')||
       DECODE(BITAND(n.class,  64),  64, 'SQL, ')||
       DECODE(BITAND(n.class, 128), 128, 'Debug, ')
       )) class,
       n.name,
       s.value
  FROM v$mystat s,
       v$statname n
 WHERE s.statistic# = n.statistic#;
--
DEF date_mask = 'YYYY-MM-DD HH24:MI:SS';
COL snap_date_end NEW_V snap_date_end;
COL snap_date_begin NEW_V snap_date_begin;
SET VER OFF PAGES 1000;
--
-- end snap
SELECT TO_CHAR(MAX(timestamp), '&&date_mask.') snap_date_end
  FROM plan_table
 WHERE statement_id = 'v$mystat';
--
-- begin snap (null if there is only one snap)
SELECT TO_CHAR(MAX(timestamp), '&&date_mask.') snap_date_begin
  FROM plan_table
 WHERE statement_id = 'v$mystat'
   AND TO_CHAR(timestamp, '&&date_mask.') < '&&snap_date_end.';
--
COL statistics_name FOR A62 HEA "Statistics Name";
COL difference FOR 999,999,999,999 HEA "Difference";
--
-- report only if there is a begin and end snaps
SELECT (e.cost - b.cost) difference,
       --b.object_node||': '||b.object_alias statistics_name
       b.object_alias statistics_name
  FROM plan_table b,
       plan_table e
 WHERE '&&snap_date_begin.' IS NOT NULL
   AND b.statement_id = 'v$mystat'
   AND b.timestamp = TO_DATE('&&snap_date_begin.', '&&date_mask.')
   AND e.statement_id = 'v$mystat'
   AND e.timestamp = TO_DATE('&&snap_date_end.', '&&date_mask.')
   AND e.object_alias = b.object_alias /* name */
   AND e.cost > b.cost /* value */ 
 ORDER BY
       --b.object_node,
       b.object_alias;
--
-- report snaps
SELECT '&&snap_date_begin.' snap_date_begin,
       '&&snap_date_end.' snap_date_end
  FROM DUAL
 WHERE '&&snap_date_begin.' IS NOT NULL;
--
-- delete only if report is not empty   
DELETE plan_table 
 WHERE '&&snap_date_begin.' IS NOT NULL 
   AND statement_id = 'v$mystat';
-- end
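The begin/end snapshot-and-diff pattern the script implements is generic: snapshot cumulative counters before and after a workload, then report only the counters that grew. A minimal sketch of the same idea, with hypothetical counter values standing in for v$mystat rows:

```python
# Minimal sketch of the snapshot-delta pattern mystat.sql implements:
# take a snapshot of cumulative counters before and after a workload,
# then report only the counters that increased between the snapshots.
def delta(before, after):
    """Return {name: increase} for counters that grew between snapshots."""
    return {name: after[name] - before[name]
            for name in after
            if name in before and after[name] > before[name]}

# hypothetical values standing in for two v$mystat snapshots
snap1 = {"consistent gets": 100, "physical reads": 10, "user calls": 5}
snap2 = {"consistent gets": 350, "physical reads": 144, "user calls": 5}

print(delta(snap1, snap2))  # only the two stats that changed
```

This is why the SQL report filters on `e.cost > b.cost`: unchanged counters are noise and are dropped from the output.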


Andy Klock version

CREATE TABLE RUN_STATS 
   (	"TEST_NAME" VARCHAR2(100), 
	"SNAP_TYPE" VARCHAR2(5), 
	"SNAP_TIME" DATE, 
	"STAT_CLASS" VARCHAR2(10), 
	"NAME" VARCHAR2(100), 
	"VALUE" NUMBER);


create or replace package snap_time is

  -- Author  : I604174
  -- Created : 7/21/2014 2:46:05 PM
  -- Purpose : Grabs timestamp and session stats
  --           Code borrowed from Carlos Sierra's mystat.sql
  --           http://carlos-sierra.net/2013/10/04/carlos-sierra-shared-scripts/
  
  /*
    grants:  
      grant select on v_$statname to i604174;
      grant select on v_$mystat to i604174;
      grant execute on i604174.snap_time to public;
      create public synonym snap_time for i604174.snap_time;

  */
  
  procedure begin_snap (p_run_name varchar2);
  procedure end_snap (p_run_name varchar2);

end snap_time;
/
create or replace package body snap_time is

  procedure begin_snap (p_run_name varchar2) is
    l_sysdate date:=sysdate; 
   
    begin
        -- snap time
        insert into run_stats values (p_run_name,'BEGIN',l_sysdate,'SNAP','snap time',null);
        -- snap mystat
        insert into run_stats
        SELECT p_run_name record_type,
               'BEGIN',
               l_sysdate,
               TRIM (',' FROM
               TRIM (' ' FROM
               DECODE(BITAND(n.class,   1),   1, 'User, ')||
               DECODE(BITAND(n.class,   2),   2, 'Redo, ')||
               DECODE(BITAND(n.class,   4),   4, 'Enqueue, ')||
               DECODE(BITAND(n.class,   8),   8, 'Cache, ')||
               DECODE(BITAND(n.class,  16),  16, 'OS, ')||
               DECODE(BITAND(n.class,  32),  32, 'RAC, ')||
               DECODE(BITAND(n.class,  64),  64, 'SQL, ')||
               DECODE(BITAND(n.class, 128), 128, 'Debug, ')
               )) class,
               n.name,
               s.value
          FROM v$mystat s,
               v$statname n
        WHERE s.statistic# = n.statistic#;
        commit;
  end begin_snap;

  procedure end_snap (p_run_name varchar2) is
    l_sysdate date:=sysdate;
    begin
        -- snap time
        insert into run_stats values (p_run_name,'END',l_sysdate,'SNAP','snap time',null);
        -- snap mystat
        insert into run_stats
        SELECT p_run_name record_type,
               'END',
               l_sysdate,
               TRIM (',' FROM
               TRIM (' ' FROM
               DECODE(BITAND(n.class,   1),   1, 'User, ')||
               DECODE(BITAND(n.class,   2),   2, 'Redo, ')||
               DECODE(BITAND(n.class,   4),   4, 'Enqueue, ')||
               DECODE(BITAND(n.class,   8),   8, 'Cache, ')||
               DECODE(BITAND(n.class,  16),  16, 'OS, ')||
               DECODE(BITAND(n.class,  32),  32, 'RAC, ')||
               DECODE(BITAND(n.class,  64),  64, 'SQL, ')||
               DECODE(BITAND(n.class, 128), 128, 'Debug, ')
               )) class,
               n.name,
               s.value
          FROM v$mystat s,
               v$statname n
        WHERE s.statistic# = n.statistic#;
        commit;
  end end_snap;

end snap_time;
/



/*

Usage: 

exec snap_time.begin_snap('Test 1.1')

run something

exec snap_time.end_snap('Test 1.1')

get time differences:

select test_name, begin_snap_time, end_snap_time, round((end_snap_time - begin_snap_time)* 1440,2) duration_minutes from (
select test_name, snap_type, lag(snap_time,1,null) over (order by test_name,snap_time) begin_snap_time,
       snap_time end_snap_time
from run_stats
where name = 'snap time'
order by test_name, snap_time)
where snap_type = 'END'
order by test_name, begin_snap_time;


*/
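The `* 1440` in the duration query converts an Oracle DATE difference (a fraction of a day) into minutes. The same arithmetic in a quick sketch, using hypothetical timestamps:

```python
from datetime import datetime

# (end - begin) in Oracle DATE arithmetic is a fraction of a day;
# multiplying by 1440 (minutes per day) gives duration in minutes,
# as in the duration_minutes expression in the query above.
begin = datetime(2014, 9, 8, 16, 29, 57)
end   = datetime(2014, 9, 8, 16, 30, 5)

fraction_of_day = (end - begin).total_seconds() / 86400
duration_minutes = round(fraction_of_day * 1440, 2)
print(duration_minutes)
```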


GetTablespace-size

-- SHOW FREE
SET LINESIZE 300
SET PAGESIZE 9999
SET VERIFY   OFF
COLUMN status      FORMAT a9                 HEADING 'Status'
COLUMN name        FORMAT a25                HEADING 'Tablespace Name'
COLUMN type        FORMAT a12                HEADING 'TS Type'
COLUMN extent_mgt  FORMAT a10                HEADING 'Ext. Mgt.'
COLUMN segment_mgt FORMAT a9                 HEADING 'Seg. Mgt.'
COLUMN pct_free    FORMAT 999.99             HEADING "% Free" 
COLUMN gbytes      FORMAT 99,999,999         HEADING "Total GBytes" 
COLUMN used        FORMAT 99,999,999         HEADING "Used Gbytes" 
COLUMN free        FORMAT 99,999,999         HEADING "Free Gbytes" 
BREAK ON REPORT
COMPUTE SUM OF gbytes ON REPORT 
COMPUTE SUM OF free ON REPORT 
COMPUTE SUM OF used ON REPORT 

SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free, ROUND(100 * (fs.free / df.tssize),2) pct_free 
    FROM
	  dba_tablespaces d,
	  (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) tssize FROM dba_data_files GROUP BY tablespace_name) df,
	  (SELECT tablespace_name, ROUND(SUM(bytes)/1024/1024/1024) free FROM dba_free_space GROUP BY tablespace_name) fs
    WHERE
	d.tablespace_name = df.tablespace_name(+)
    AND d.tablespace_name = fs.tablespace_name(+)
    AND NOT (d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY')
UNION ALL
SELECT d.status status, d.bigfile, d.tablespace_name name, d.contents type, d.extent_management extent_mgt, d.segment_space_management segment_mgt, df.tssize gbytes, (df.tssize - fs.free) used, fs.free free, ROUND(100 * (fs.free / df.tssize),2) pct_free 
    FROM
	  dba_tablespaces d,
	  (select tablespace_name, sum(bytes)/1024/1024/1024 tssize from dba_temp_files group by tablespace_name) df,
	  (select tablespace_name, sum(bytes_cached)/1024/1024/1024 free from v$temp_extent_pool group by tablespace_name) fs
    WHERE
	d.tablespace_name = df.tablespace_name(+)
    AND d.tablespace_name = fs.tablespace_name(+)
    AND d.extent_management like 'LOCAL' AND d.contents like 'TEMPORARY'
ORDER BY 9;
CLEAR COLUMNS BREAKS COMPUTES


-- SHOW FREE SPACE IN DATAFILES
SET LINESIZE 145
SET PAGESIZE 9999
SET VERIFY   OFF
COLUMN tablespace  FORMAT a18             HEADING 'Tablespace Name'
COLUMN filename    FORMAT a50             HEADING 'Filename'
COLUMN filesize    FORMAT 99,999,999,999  HEADING 'File Size'
COLUMN used        FORMAT 99,999,999,999  HEADING 'Used (in MB)'
COLUMN pct_used    FORMAT 999             HEADING 'Pct. Used'
BREAK ON report
COMPUTE SUM OF filesize  ON report
COMPUTE SUM OF used      ON report
COMPUTE AVG OF pct_used  ON report

SELECT /*+ ordered */
    d.tablespace_name                     tablespace
  , d.file_name                           filename
  , d.file_id                             file_id
  , d.bytes/1024/1024                     filesize
  , NVL((d.bytes - s.bytes)/1024/1024, d.bytes/1024/1024)     used
  , TRUNC(((NVL((d.bytes - s.bytes) , d.bytes)) / d.bytes) * 100)  pct_used
FROM
    sys.dba_data_files d
  , v$datafile v
  , ( select file_id, SUM(bytes) bytes
      from sys.dba_free_space
      GROUP BY file_id) s
WHERE
      (s.file_id (+)= d.file_id)
  AND (d.file_name = v.name)
UNION
SELECT
    d.tablespace_name                       tablespace 
  , d.file_name                             filename
  , d.file_id                               file_id
  , d.bytes/1024/1024                       filesize
  , NVL(t.bytes_cached/1024/1024, 0)                  used
  , TRUNC((t.bytes_cached / d.bytes) * 100) pct_used
FROM
    sys.dba_temp_files d
  , v$temp_extent_pool t
  , v$tempfile v
WHERE 
      (t.file_id (+)= d.file_id)
  AND (d.file_id = v.file#)
ORDER BY 1;


-- SHOW AUTOEXTEND TABLESPACES (9i,10G SqlPlus)
set lines 300
col file_name format a65
select 
        c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from 
        dba_data_files a, dba_tablespaces b, v$datafile c
where 
        a.tablespace_name = b.tablespace_name
        and a.file_name = c.name
        and a.tablespace_name in (select tablespace_name from dba_tablespaces)
    	and a.autoextensible = 'YES'
union all
select 
        c.file#, a.tablespace_name as "TS", a.file_name, a.bytes/1024/1024 as "A.SIZE", a.increment_by * c.block_size/1024/1024 as "A.INCREMENT_BY", a.maxbytes/1024/1024 as "A.MAX"
from 
        dba_temp_files a, dba_tablespaces b, v$tempfile c
where 
        a.tablespace_name = b.tablespace_name
        and a.file_name = c.name
        and a.tablespace_name in (select tablespace_name from dba_tablespaces)
    	and a.autoextensible = 'YES';
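In the autoextend query above, `dba_data_files.INCREMENT_BY` is stored in database blocks, so the query multiplies it by the file's block size to express the growth step in MB. A quick sketch of that conversion, with a hypothetical increment value:

```python
# dba_data_files.INCREMENT_BY is expressed in database blocks;
# the query above converts it to MB via increment_by * block_size.
block_size = 8192            # bytes, from v$datafile.block_size
increment_by = 12800         # blocks (hypothetical value)

increment_mb = increment_by * block_size / 1024 / 1024
print(increment_mb)          # growth step in MB per autoextend
```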

Exadata Storage Size - ASM-size

Understanding ASM Capacity and Reservation of Free Space in Exadata (Doc ID 1551288.1)
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
This statement is correct:
"If I have 1GB worth of data in my DB I should be using 2GB for Normal Redundancy and 3GB for High Redundancy."
But you also have to account for the "required mirror free" space, which is reserved in case a failure group is lost.

This is the output of the script I sent you; it already accounts for the redundancy level you are on, so just look at the columns with "REAL" in them. In your statement above, the 4869.56 is used space that already accounts for normal redundancy (you said you have 4605 GB including TEMP, so that's just about right). Now add the 2538 required mirror free, which brings the total to 7407.56, and if you subtract that total space requirement from the capacity (7614 - 7407.56) you get 206.44 usable.
                                                               REQUIRED     USABLE
                       RAW       REAL       REAL       REAL MIRROR_FREE       FILE
STATE    TYPE     TOTAL_GB   TOTAL_GB    USED_GB    FREE_GB          GB         GB PCT_USED PCT_FREE NAME
-------- ------ ---------- ---------- ---------- ---------- ----------- ---------- -------- -------- ----------
CONNECTE NORMAL      15228       7614    4869.56    2744.44        2538     206.44       64       36 DATA_AEX1
CONNECTE NORMAL    3804.75    1902.38     1192.5     709.87      634.13      75.75       63       37 RECO_AEX1
MOUNTED  NORMAL     873.75     436.88       1.23     435.64      145.63     290.02        0      100 DBFS_DG
                ---------- ---------- ---------- ---------- ----------- ----------
sum                19906.5    9953.26    6063.29    3889.95     3317.76     572.21
I hope that clears up the confusion on the space usage.

I'm also referencing a very good blog post that discusses required_mirror_free_mb and usable_file_mb:
http://prutser.wordpress.com/2013/01/03/demystifying-asm-required_mirror_free_mb-and-usable_file_mb/
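The DATA_AEX1 row above can be reproduced from the raw figures. A sketch of the normal-redundancy arithmetic, using the values from the report:

```python
# Reproduce the DATA_AEX1 row above for NORMAL redundancy:
# usable capacity is half the raw capacity (everything is mirrored once),
# and usable_file is what remains after reserving required_mirror_free.
raw_total_gb = 15228
real_total_gb = raw_total_gb / 2            # NORMAL redundancy halves raw capacity
real_used_gb = 4869.56                      # from the report (includes mirror copies)
real_free_gb = round(real_total_gb - real_used_gb, 2)
required_mirror_free_gb = 2538              # reserved to survive a failgroup loss

usable_file_gb = round(real_free_gb - required_mirror_free_gb, 2)
print(real_free_gb, usable_file_gb)
```

For HIGH redundancy the same arithmetic applies with a factor of one third instead of one half, which is exactly what the DECODE expressions in the query below compute.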


-- WITH REDUNDANCY
set lines 400
col state format a8
col name format a10
col sector format 999990
col block format 999990
col label format a25
col path format a40
col redundancy format a25
col pct_used format 990
col pct_free format 990
col raw_gb                    heading "RAW|TOTAL_GB"
col usable_total_gb           heading "REAL|TOTAL_GB"
col usable_used_gb            heading "REAL|USED_GB"
col usable_free_gb            heading "REAL|FREE_GB"
col required_mirror_free_gb   heading "REQUIRED|MIRROR_FREE|GB"
col usable_file_gb            heading "USABLE|FILE|GB"
col voting format a6          heading "VOTING"
BREAK ON REPORT
COMPUTE SUM OF raw_gb ON REPORT 
COMPUTE SUM OF usable_total_gb ON REPORT 
COMPUTE SUM OF usable_used_gb ON REPORT 
COMPUTE SUM OF usable_free_gb ON REPORT 
COMPUTE SUM OF required_mirror_free_gb ON REPORT 
COMPUTE SUM OF usable_file_gb ON REPORT 
select 
		state,
		type,
		sector_size sector,
		block_size block,
		allocation_unit_size au,
		round(total_mb/1024,2) raw_gb,
		round((DECODE(TYPE, 'HIGH', 0.3333 * total_mb, 'NORMAL', .5 * total_mb, total_mb))/1024,2) usable_total_gb,
		round((DECODE(TYPE, 'HIGH', 0.3333 * (total_mb - free_mb), 'NORMAL', .5 * (total_mb - free_mb), (total_mb - free_mb)))/1024,2) usable_used_gb,
		round((DECODE(TYPE, 'HIGH', 0.3333 * free_mb, 'NORMAL', .5 * free_mb, free_mb))/1024,2) usable_free_gb,
		round((DECODE(TYPE, 'HIGH', 0.3333 * required_mirror_free_mb, 'NORMAL', .5 * required_mirror_free_mb, required_mirror_free_mb))/1024,2) required_mirror_free_gb,
        round(usable_file_mb/1024,2) usable_file_gb,
		round((total_mb - free_mb)/total_mb,2)*100 as "PCT_USED", 
		round(free_mb/total_mb,2)*100 as "PCT_FREE",
		offline_disks,
		voting_files voting,
		name
from v$asm_diskgroup
where total_mb != 0
order by 1;

Nutanix
