

Tableau


download and documentation

Alternate download site across versions https://licensing.tableausoftware.com/esdalt/
Release notes across versions http://www.tableausoftware.com/support/releases?signin=650fb8c2841d145bc3236999b96fd7ab
Official doc http://www.tableausoftware.com/community/support/documentation-old
knowledgebase http://kb.tableausoftware.com/
manuals http://www.tableausoftware.com/support/manuals
http://www.tableausoftware.com/new-features/6.0
http://www.tableausoftware.com/new-features/7.0
http://www.tableausoftware.com/new-features/8.0
http://www.tableausoftware.com/fast-pace-innovation <-- timeline across versions

Tableau - Think Data Thursday Video Library http://community.tableausoftware.com/community/groups/tdt-video-library

license

upgrading tableau desktop http://kb.tableausoftware.com/articles/knowledgebase/upgrading-tableau-desktop
offline activation http://kb.tableausoftware.com/articles/knowledgebase/offline-activation
renewal cost for desktop and personal http://www.triadtechpartners.com/wp-content/uploads/Tableau-GSA-Price-List-April-2013.pdf
renewal FAQ http://www.tableausoftware.com/support/customer-success
eula http://mkt.tableausoftware.com/files/eula.pdf


viz types



connectors

Oracle Driver
there’s an Oracle Driver so you can connect directly to a database http://downloads.tableausoftware.com/drivers/oracle/desktop/tableau7.0-oracle-driver.msi
http://www.tableausoftware.com/support/drivers
http://kb.tableausoftware.com/articles/knowledgebase/oracle-connection-errors


HOWTOs

http://www.tableausoftware.com/learn/training <-- LOTS OF GOOD STUFF!!!
http://community.tableausoftware.com/message/242749#242749 <-- Johan's Ideas Collections

parameters http://www.youtube.com/watch?v=wvF7gAV82_c

calculated fields http://www.youtube.com/watch?v=FpppiLBdtGc, http://www.tableausoftware.com/table-calculations, http://kb.tableausoftware.com/articles/knowledgebase/combining-date-and-time-single-field

scatter plots http://www.youtube.com/watch?v=RYMlIY4nT9k, http://downloads.tableausoftware.com/quickstart/feature-guides/trend_lines.pdf

getting the r2, trendlines http://kb.tableausoftware.com/articles/knowledgebase/statistics-finding-correlation, http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/trendlines_model.html

forecasting http://tombrownonbi.blogspot.com/2010/07/simple-forecasting-using-tableau.html, resolving forecast errors http://onlinehelp.tableausoftware.com/current/pro/online/en-us/forecast_resolve_errors.html

tableau forecast model - Holt-Winters exponential smoothing
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/help.html#forecast_describe.html

Method for Creating Multipass Aggregations Using Tableau Server < doing various statistical methods in tableau
http://community.tableausoftware.com/message/181143#181143

Monte Carlo in Tableau
http://drawingwithnumbers.artisart.org/basic-monte-carlo-simulations-in-tableau/

dashboards http://community.tableausoftware.com/thread/109753?start=0&tstart=0, http://tableaulove.tumblr.com/post/27627548817/another-method-to-update-data-from-inside-tableau, http://ryrobes.com/tableau/tableau-phpgrid-an-almost-instant-gratification-data-entry-tool/

dashboard size http://kb.tableausoftware.com/articles/knowledgebase/fixed-size-dashboard

dashboard multiple sources http://kb.tableausoftware.com/articles/knowledgebase/multiple-sources-one-worksheet

reference line, reference band http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/reflines_addlines.html, http://vizwiz.blogspot.com/2012/09/tableau-tip-adding-moving-reference.html, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1000860.html, http://kb.tableausoftware.com/articles/knowledgebase/independent-field-reference-line, http://community.tableausoftware.com/thread/127009?start=0&tstart=0, http://community.tableausoftware.com/thread/121369

dynamic reference line
http://community.tableausoftware.com/thread/124998, http://community.tableausoftware.com/thread/105433, http://www.interworks.com/blogs/iwbiteam/2012/04/09/adding-different-reference-lines-tableau

dynamic parameter
http://drawingwithnumbers.artisart.org/creating-a-dynamic-parameter-with-a-tableau-data-blend/

thresholds Multiple thresholds for different cells on one worksheet http://community.tableausoftware.com/thread/122285

email and alerting http://www.metricinsights.com/data-driven-alerting-and-email-notifications-for-tableau/, http://community.tableausoftware.com/thread/124411

templates http://kb.tableausoftware.com/articles/knowledgebase/replacing-data-source, http://www.tableausoftware.com/public/templates/schools, http://wannabedatarockstar.blogspot.com/2013/06/create-default-tableau-template.html, http://wannabedatarockstar.blogspot.co.uk/2013/04/colour-me-right.html

click to filter http://kb.tableausoftware.com/articles/knowledgebase/combining-sheet-links-and-dashboards

tableau worksheet actions http://community.tableausoftware.com/thread/138785

date functions and calculations http://onlinehelp.tableausoftware.com/current/pro/online/en-us/functions_functions_date.html, http://pharma-bi.com/2011/04/fiscal-period-calculations-in-tableau-2/

date dimension http://blog.inspari.dk/2013/08/27/making-the-date-dimension-ready-for-tableau/

Date Range filter and Default date filter
google search https://www.google.com/search?q=tableau+date+range+filter&oq=tableau+date+range+&aqs=chrome.2.69i57j0l5.9028j0j7&sourceid=chrome&es_sm=119&ie=UTF-8
Creating a Filter for Start and End Dates Using Parameters http://kb.tableausoftware.com/articles/howto/creating-a-filter-for-start-and-end-dates-parameters
Tableau Tip: Showing all dates on a date filter after a Server refresh http://vizwiz.blogspot.com/2014/01/tableau-tip-showing-all-dates-on-date.html
Tableau Tip: Default a date filter to the last N days http://vizwiz.blogspot.com/2013/09/tableau-tip-default-date-filter-to-last.html

hide NULL values http://reports4u.co.uk/tableau-hide-null-values/, http://reports4u.co.uk/tableau-hide-values-quick-filter/, http://kb.tableausoftware.com/articles/knowledgebase/replacing-null-literals, http://kb.tableausoftware.com/articles/knowledgebase/null-values <-- good stuff

logical functions - if then else, case when then http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/functions_functions_logical.html, http://kb.tableausoftware.com/articles/knowledgebase/understanding-logical-calculations, http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/id2611b7e2-acb6-467e-9f69-402bba5f9617.html

tableau working with sets
https://www.tableausoftware.com/public/blog/2013/03/powerful-new-tools
http://onlinehelp.tableausoftware.com/v6.1/public/online/en-us/i1201140.html
http://community.tableausoftware.com/thread/136845 <-- good example on filters
https://www.tableausoftware.com/learn/tutorials/on-demand/sets?signin=a8f73d84a4b046aec26bc955854a381b <-- GOOD STUFF video tutorial
IOPS SIORS - Combining several measures in one dimension - http://tableau-ext.hosted.jivesoftware.com/thread/137680

tableau groups
http://vizwiz.blogspot.com/2013/05/tableau-tip-creating-primary-group-from.html
http://www.tableausoftware.com/learn/tutorials/on-demand/grouping?signin=f98f9fd64dcac0e7f2dc574bca03b68c <-- VIDEO tutorial

Random Number generation in tableau
http://community.tableausoftware.com/docs/DOC-1474

Calendar view viz
http://thevizioneer.blogspot.com/2014/04/day-1-how-to-make-calendar-in-tableau.html
http://vizwiz.blogspot.com/2012/05/creating-interactive-monthly-calendar.html
http://vizwiz.blogspot.com/2012/05/how-common-is-your-birthday-find-out.html

Custom SQL
http://kb.tableausoftware.com/articles/knowledgebase/customizing-odbc-connections
http://tableaulove.tumblr.com/post/20781994395/tableau-performance-multiple-tables-or-custom-sql
http://bensullins.com/leveraging-your-tableau-server-to-create-large-data-extracts/
http://tableaulove.tumblr.com/post/18945358848/how-to-publish-an-unpopulated-tableau-extract
http://onlinehelp.tableausoftware.com/v8.1/pro/online/en-us/customsql.html
http://onlinehelp.tableausoftware.com/v7.0/pro/online/en-us/customsql.html
Using Raw SQL Functions http://kb.tableausoftware.com/articles/knowledgebase/raw-sql
http://community.tableausoftware.com/thread/131017
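
For reference, a minimal sketch of the kind of statement you could paste into the Custom SQL box of an Oracle connection (the table and columns here are just an example, not taken from the links above):

-- hypothetical custom SQL for an Oracle-based Tableau data source
select owner,
       segment_type,
       round(sum(bytes)/1024/1024) size_mb
from   dba_segments
group by owner, segment_type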

Geolocation
http://tableaulove.tumblr.com/post/82299898419/ip-based-geo-location-in-tableau-new-now-with-more
http://dataremixed.com/2014/08/from-gps-to-viz-hiking-washingtons-trails/
https://public.tableausoftware.com/profile/timothyvermeiren#!/vizhome/TimothyAllRuns/Dashboard

tableau perf analyzer
http://www.interworks.com/services/business-intelligence/tableau-performance-analyzer

tableau and python
http://bensullins.com/bit-ly-data-to-csv-for-import-to-tableau/

Visualize and Understand Tableau Functions
https://public.tableausoftware.com/profile/tyler3281#!/vizhome/EVERYONEWILLUSEME/MainScreen

tableau workbook on github
http://blog.pluralsight.com/how-to-store-your-tableau-server-workbooks-on-github

tableau radar chart / spider graph
https://wikis.utexas.edu/display/tableau/How+to+create+a+Radar+Chart

maps animation
http://www.tableausoftware.com/public/blog/2014/08/capturing-animation-tableau-maps-2574?elq=d12cbf266b1342e68ea20105369371cf


if in list http://community.tableausoftware.com/ideas/1870, http://community.tableausoftware.com/ideas/1500
IF 
trim([ENV])='x07d' OR 
trim([ENV])='x07p'  
THEN 'AML' 
ELSE 'OTHER' END


IF 
TRIM([ENV]) = 'x07d' THEN 'AML' ELSEIF 
TRIM([ENV]) = 'x07p' THEN 'AML' 
ELSE 'OTHER' END


IF [Processor AMD] THEN 'AMD'
ELSEIF [Processor Intel] THEN 'INTEL'
ELSEIF [Processor IBM Power] THEN 'IBM Power'
ELSEIF [Processor SPARC] THEN 'SPARC'
ELSE 'Other' END


IF contains('x11p,x08p,x28p',trim([ENV]))=true THEN 'PROD' 
ELSEIF contains('x29u,x10u,x01u',trim([ENV]))=true THEN 'UAT' 
ELSEIF contains('x06d,x07d,x12d',trim([ENV]))=true THEN 'DEV' 
ELSEIF contains('x06t,x14t,x19t',trim([ENV]))=true THEN 'TEST' 
ELSE 'OTHER' END

What is the difference between Tableau Server and Tableau Server Worker? http://community.tableausoftware.com/thread/109121

tableau vs spotfire vs qlikview http://community.tableausoftware.com/thread/116055, https://apandre.wordpress.com/2013/09/13/tableau-8-1-vs-qlikview-11-2-vs-spotfire-5-5/ , http://butleranalytics.com/spotfire-tableau-and-qlikview-in-a-nutshell/ , https://www.trustradius.com/compare-products/tableau-desktop-vs-tibco-spotfire





Videos

Tableau TCC12 Session: Facebook http://www.ustream.tv/recorded/26807227






Active DataGuard

http://gjilevski.wordpress.com/2010/03/14/creating-oracle-11g-active-standby-database-from-physical-standby-database/
Oracle Active Data Guard: What’s Really Under the Hood? http://www.oracle.com/technetwork/database/features/availability/s316924-1-175932.pdf


Read only and vice versa
http://www.adp-gmbh.ch/ora/data_guard/standby_read_only.html
http://juliandyke.wordpress.com/2010/10/14/oracle-11gr2-active-data-guard/
http://www.oracle-base.com/articles/11g/data-guard-setup-11gr2.php#read_only_active_data_guard


to be in Active DG, use the steps below; for normal (mount-only) managed recovery, just remove the "open read only" step

startup mount
alter database open read only;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE disconnect;
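
And a sketch of the reverse, going from read only with apply back to a mount-only standby:

alter database recover managed standby database cancel;
shutdown immediate
startup mount
alter database recover managed standby database using current logfile disconnect;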


Snapper on Standby Database

On the standby site, if the database is open read only with apply, you should be able to run snapper on it or run ASH queries as well
Check out some commands here http://karlarao.tiddlyspot.com/#snapper
And if you want to loop it and leave it running and check the data the next day you can do this http://karlarao.tiddlyspot.com/#snapperloop (sections “snapper loop showing activity across all instances (must use snapper v4)” and “process the snap.txt file as csv input”)

Some commands you can use and things to check are attached as well, but I would start with
@snapper ash 5 1 all@*
Just to see what’s going on during the slow period
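
If snapper is not handy, here's a quick ASH sketch you can run on the open standby (the 5 minute window and grouping are just a starting point):

-- top wait events / CPU across instances for the last 5 minutes
select inst_id, session_state, event, count(*) samples
from gv$active_session_history
where sample_time > sysdate - 5/1440
group by inst_id, session_state, event
order by samples desc;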



gvash_to_csv.sql

-- How to dump raw Active Session History Data into a spreadsheet (Doc ID 1630717.1)
-- gvash_to_csv.sql : modified by Karl Arao 

set feedback off pages 0 term off head on und off trimspool on 
set arraysize 5000
set termout off
set echo off verify off

COLUMN name NEW_VALUE _instname NOPRINT
select lower(instance_name) name from v$instance;

column INSTNAME format a20
column DELTA_WRITE_IO_BYTES format a24
column DELTA_READ_IO_BYTES format a24
column DELTA_WRITE_IO_REQUESTS format a24
column DELTA_READ_IO_REQUESTS format a24
column DELTA_TIME format a24
column TM_DELTA_DB_TIME format a24
column TM_DELTA_CPU_TIME format a24
column TM_DELTA_TIME format a24
column DBREPLAY_CALL_COUNTER format a24
column DBREPLAY_FILE_ID format a24
column ECID format a66
column PORT format a24
column MACHINE format a66
column CLIENT_ID format a66
column ACTION format a66
column MODULE format a66
column PROGRAM format a66
column SERVICE_HASH format a24
column IS_REPLAYED format a3
column IS_CAPTURED format a3
column REPLAY_OVERHEAD format a3
column CAPTURE_OVERHEAD format a3
column IN_SEQUENCE_LOAD format a3
column IN_CURSOR_CLOSE format a3
column IN_BIND format a3
column IN_JAVA_EXECUTION format a3
column IN_PLSQL_COMPILATION format a3
column IN_PLSQL_RPC format a3
column IN_PLSQL_EXECUTION format a3
column IN_SQL_EXECUTION format a3
column IN_HARD_PARSE format a3
column IN_PARSE format a3
column IN_CONNECTION_MGMT format a3
column TIME_MODEL format a24
column REMOTE_INSTANCE# format a24
column XID format a10
column CONSUMER_GROUP_ID format a24
column TOP_LEVEL_CALL_NAME format a66
column TOP_LEVEL_CALL# format a24
column CURRENT_ROW# format a24
column CURRENT_BLOCK# format a24
column CURRENT_FILE# format a24
column CURRENT_OBJ# format a24
column BLOCKING_HANGCHAIN_INFO format a3
column BLOCKING_INST_ID format a24
column BLOCKING_SESSION_SERIAL# format a24
column BLOCKING_SESSION format a24
column BLOCKING_SESSION_STATUS format a13
column TIME_WAITED format a24
column SESSION_STATE format a9
column WAIT_TIME format a24
column WAIT_CLASS_ID format a24
column WAIT_CLASS format a66
column P3 format a24
column P3TEXT format a66
column P2 format a24
column P2TEXT format a66
column P1 format a24
column P1TEXT format a66
column SEQ# format a24
column EVENT_ID format a24
column EVENT format a66
column PX_FLAGS format a24
column QC_SESSION_SERIAL# format a24
column QC_SESSION_ID format a24
column QC_INSTANCE_ID format a24
column PLSQL_SUBPROGRAM_ID format a24
column PLSQL_OBJECT_ID format a24
column PLSQL_ENTRY_SUBPROGRAM_ID format a24
column PLSQL_ENTRY_OBJECT_ID format a24
column SQL_EXEC_START format a9
column SQL_EXEC_ID format a24
column SQL_PLAN_OPTIONS format a66
column SQL_PLAN_OPERATION format a66
column SQL_PLAN_LINE_ID format a24
column SQL_PLAN_HASH_VALUE format a24
column TOP_LEVEL_SQL_OPCODE format a24
column TOP_LEVEL_SQL_ID format a15
column FORCE_MATCHING_SIGNATURE format a24
column SQL_OPNAME format a66
column SQL_OPCODE format a24
column SQL_CHILD_NUMBER format a24
column IS_SQLID_CURRENT format a3
column SQL_ID format a15
column USER_ID format a24
column FLAGS format a24
column SESSION_TYPE format a12
column SESSION_SERIAL# format a24
column SESSION_ID format a24
column TM format a13
column SAMPLE_TIME format a13
column SAMPLE_ID format a24
column INSTANCE_NUMBER format a24
column DBID format a24
column SNAP_ID format a24
column TEMP_SPACE_ALLOCATED format a24
column PGA_ALLOCATED format a24
column DELTA_INTERCONNECT_IO_BYTES format a24


set pages 2000
set lines 2750
set heading off
set feedback off
set echo off

! echo "INSTNAME, INST_ID , SAMPLE_ID , TM, SAMPLE_TIME , SESSION_ID , SESSION_SERIAL# , SESSION_TYPE , FLAGS , USER_ID , SQL_ID ,"-
"IS_SQLID_CURRENT , SQL_CHILD_NUMBER , SQL_OPCODE , SQL_OPNAME , FORCE_MATCHING_SIGNATURE , TOP_LEVEL_SQL_ID , TOP_LEVEL_SQL_OPCODE , "-
"SQL_PLAN_HASH_VALUE , SQL_PLAN_LINE_ID , SQL_PLAN_OPERATION , SQL_PLAN_OPTIONS , SQL_EXEC_ID , SQL_EXEC_START , PLSQL_ENTRY_OBJECT_ID,"-
"PLSQL_ENTRY_SUBPROGRAM_ID , PLSQL_OBJECT_ID , PLSQL_SUBPROGRAM_ID , QC_INSTANCE_ID , QC_SESSION_ID , QC_SESSION_SERIAL# , PX_FLAGS , EVENT ,"-
" EVENT_ID , SEQ# , P1TEXT , P1 , P2TEXT , P2 , P3TEXT , P3 , WAIT_CLASS , WAIT_CLASS_ID , WAIT_TIME , SESSION_STATE , TIME_WAITED ,"-
"BLOCKING_SESSION_STATUS , BLOCKING_SESSION , BLOCKING_SESSION_SERIAL# , BLOCKING_INST_ID , BLOCKING_HANGCHAIN_INFO , CURRENT_OBJ# , "-
"CURRENT_FILE# , CURRENT_BLOCK# , CURRENT_ROW# , TOP_LEVEL_CALL# , TOP_LEVEL_CALL_NAME , CONSUMER_GROUP_ID , XID , REMOTE_INSTANCE# , TIME_MODEL ,"-
"IN_CONNECTION_MGMT , IN_PARSE , IN_HARD_PARSE , IN_SQL_EXECUTION , IN_PLSQL_EXECUTION , IN_PLSQL_RPC , IN_PLSQL_COMPILATION , IN_JAVA_EXECUTION ,"-
" IN_BIND , IN_CURSOR_CLOSE , IN_SEQUENCE_LOAD , CAPTURE_OVERHEAD , REPLAY_OVERHEAD , IS_CAPTURED , IS_REPLAYED , SERVICE_HASH , PROGRAM , MODULE ,"-
" ACTION , CLIENT_ID , MACHINE , PORT , ECID , DBREPLAY_FILE_ID , DBREPLAY_CALL_COUNTER , TM_DELTA_TIME , TM_DELTA_CPU_TIME , TM_DELTA_DB_TIME , "-
"DELTA_TIME , DELTA_READ_IO_REQUESTS , DELTA_WRITE_IO_REQUESTS , DELTA_READ_IO_BYTES , DELTA_WRITE_IO_BYTES , DELTA_INTERCONNECT_IO_BYTES , "-
"PGA_ALLOCATED , TEMP_SPACE_ALLOCATED " > myash-&_instname..csv

spool myash-&_instname..csv append

select INSTNAME ||','|| INST_ID ||','|| SAMPLE_ID ||','|| TM ||','|| SAMPLE_TIME ||','|| SESSION_ID ||','|| SESSION_SERIAL# ||','|| -
SESSION_TYPE ||','|| FLAGS ||','|| USER_ID ||','|| SQL_ID ||','|| IS_SQLID_CURRENT ||','|| SQL_CHILD_NUMBER ||','|| SQL_OPCODE ||','|| SQL_OPNAME -
||','|| FORCE_MATCHING_SIGNATURE ||','|| TOP_LEVEL_SQL_ID ||','|| TOP_LEVEL_SQL_OPCODE ||','|| SQL_PLAN_HASH_VALUE ||','|| SQL_PLAN_LINE_ID -
||','|| SQL_PLAN_OPERATION ||','|| SQL_PLAN_OPTIONS ||','|| SQL_EXEC_ID ||','|| SQL_EXEC_START ||','|| PLSQL_ENTRY_OBJECT_ID ||','|| -
PLSQL_ENTRY_SUBPROGRAM_ID ||','|| PLSQL_OBJECT_ID ||','|| PLSQL_SUBPROGRAM_ID ||','|| QC_INSTANCE_ID ||','|| QC_SESSION_ID ||','|| QC_SESSION_SERIAL#- 
||','|| PX_FLAGS ||','|| EVENT ||','|| EVENT_ID ||','|| SEQ# ||','|| P1TEXT ||','|| P1 ||','|| P2TEXT ||','|| P2 ||','|| P3TEXT ||','|| P3 ||','|| -
WAIT_CLASS ||','|| WAIT_CLASS_ID ||','|| WAIT_TIME ||','|| SESSION_STATE ||','|| TIME_WAITED ||','|| BLOCKING_SESSION_STATUS ||','|| BLOCKING_SESSION-
||','|| BLOCKING_SESSION_SERIAL# ||','|| BLOCKING_INST_ID ||','|| BLOCKING_HANGCHAIN_INFO ||','|| CURRENT_OBJ# ||','|| CURRENT_FILE# ||','|| -
CURRENT_BLOCK# ||','|| CURRENT_ROW# ||','|| TOP_LEVEL_CALL# ||','|| TOP_LEVEL_CALL_NAME ||','|| CONSUMER_GROUP_ID ||','|| XID ||','|| -
REMOTE_INSTANCE# ||','|| TIME_MODEL ||','|| IN_CONNECTION_MGMT ||','|| IN_PARSE ||','|| IN_HARD_PARSE ||','|| IN_SQL_EXECUTION ||','|| -
IN_PLSQL_EXECUTION ||','|| IN_PLSQL_RPC ||','|| IN_PLSQL_COMPILATION ||','|| IN_JAVA_EXECUTION ||','|| IN_BIND ||','|| IN_CURSOR_CLOSE ||','|| -
IN_SEQUENCE_LOAD ||','|| CAPTURE_OVERHEAD ||','|| REPLAY_OVERHEAD ||','|| IS_CAPTURED ||','|| IS_REPLAYED ||','|| SERVICE_HASH ||','|| -
PROGRAM ||','|| MODULE ||','|| ACTION ||','|| CLIENT_ID ||','|| MACHINE ||','|| PORT ||','|| ECID ||','|| DBREPLAY_FILE_ID ||','|| -
DBREPLAY_CALL_COUNTER ||','|| TM_DELTA_TIME ||','|| TM_DELTA_CPU_TIME ||','|| TM_DELTA_DB_TIME ||','|| DELTA_TIME ||','|| -
DELTA_READ_IO_REQUESTS ||','|| DELTA_WRITE_IO_REQUESTS ||','|| DELTA_READ_IO_BYTES ||','|| DELTA_WRITE_IO_BYTES ||','|| -
DELTA_INTERCONNECT_IO_BYTES ||','|| PGA_ALLOCATED ||','|| TEMP_SPACE_ALLOCATED 
From 
(select trim('&_instname') INSTNAME, TO_CHAR(SAMPLE_TIME,'MM/DD/YY HH24:MI:SS') TM, a.*
from gv$active_session_history a)
Where SAMPLE_TIME > (select min(SAMPLE_TIME) from gv$active_session_history)
Order by SAMPLE_TIME, session_id asc;
spool off;

cell_iops.sh

I have this script that mines the cell metriccurrent https://www.dropbox.com/s/rcwek0rx8e50imc/cell_iops.sh
created it last night, pretty cool for monitoring and IO test cases (IORM, wbfc, esfc, esfl)

here's a sample viz you can do with the script https://www.evernote.com/shard/s48/sh/d89a1aa2-d1b1-42b6-b338-c95ba31bf3e9/c1c7604911c1aa21c821fae9e3e258a0 I haven’t included the latency yet on the viz


-Karl


Sample run as "cellmonitor"

Edit the following line on the script
datafile=`echo /home/oracle/dba/karao/scripts/metriccurrentall.txt`
/usr/local/bin/dcli -l cellmonitor -g /home/oracle/dba/karao/scripts/cell_group "cellcli -e list metriccurrent" > $datafile
export TM=$(date +%m/%d/%y" "%H:%M:%S)
Run it as follows
while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB," cell_iops.csv ; sleep 20; echo "--"; done


for longer runs


make sure you have a cell_group file in the /root directory; it contains the IPs of the storage cells

[root@enkx3cel01 ~]# cat /root/cell_group
192.168.12.3
192.168.12.4
192.168.12.5


-- install
[root@enkx3cel01 ~]# vi runit.sh
while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB," cell_iops.csv ; sleep 60; echo "--"; rm nohup.out; done


-- run 
[root@enkx3cel01 ~]# nohup sh runit.sh &


-- killing it afterwards
ps -ef | grep cell_iops ; lsof cell_iops.csv
ps -ef | grep -i runit
root     13311  8482  0 11:14 pts/0    00:00:00 sh runit.sh
root     14660  8482  0 11:14 pts/0    00:00:00 grep -i runit
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]#
[root@enkx3cel01 ~]# kill -9 13311
[1]+  Killed                  nohup sh runit.sh



the cell_iops.sh script

#!/bin/ksh
#
# cell_iops.sh - a "sort of" end to end Exadata IO monitoring script
#     * inspired by http://glennfawcett.wordpress.com/2013/06/18/analyzing-io-at-the-exadata-cell-level-a-simple-tool-for-iops/
#       and modified to show end to end breakdown of IOPS, inter-database, consumer groups, and latency across Exadata storage cells
#     * you must use this script together with "iostat -xmd" on storage cells on both flash and spinning disk and database IO latency on 
#       system level (AWR) and session level (Tanel Poder's snapper) for a "real" end to end IO troubleshooting and monitoring
#     * the inter-database and consumer groups data is very useful for overall resource management and IORM configuration and troubleshooting 
#     * check out the sample viz that can be done by mining the data here goo.gl/0Q1Oeo
#
# Karl Arao, Oracle ACE (bit.ly/karlarao), OCP-DBA, RHCE, OakTable
# http://karlarao.wordpress.com
#
# on any Exadata storage cell node you can run this one time
#     ./cell_iops.sh
#
# OR on loop spooling to a file and consume later with Tableau for visualization
#     while :; do ./cell_iops.sh >> cell_iops.csv ; egrep "CS,ALL|DB,_OTHER_DATABASE_" cell_iops.csv ; sleep 20; echo "--"; done
#
# Here are the 19 column headers:
#
#     TM             - the time on each snap
#     CATEGORY       - CS (cell server - includes IOPS, MBs, R+W breakdown, latency), DB (database - IOPS, MBs), CG (consumer group - IOPS, MBs)
#     GROUP          - grouping per CATEGORY, it could be databases or consumer groups.. a pretty useful dimension in Tableau to drill down on IO
#     DISK_IOPS      - (applies to CS, DB, CG) high level spinning disk IOPS
#     FLASH_IOPS     - (applies to CS, DB, CG) high level flash disk IOPS
#     DISK_MBS       - (applies to CS, DB, CG) high level spinning disk MB/s (bandwidth)
#     FLASH_MBS      - (applies to CS, DB, CG) high level flash disk MB/s (bandwidth)
#     DISK_IOPS_R    - (applies to CS only) IOPS breakdown, spinning disk IOPS read
#     FLASH_IOPS_R   - (applies to CS only) IOPS breakdown, flash disk IOPS read
#     DISK_IOPS_W    - (applies to CS only) IOPS breakdown, spinning disk IOPS write
#     FLASH_IOPS_W   - (applies to CS only) IOPS breakdown, flash disk IOPS write
#     DLAT_RLG       - (applies to CS only) average latency breakdown, spinning disk large reads
#     FLAT_RLG       - (applies to CS only) average latency breakdown, flash disk large reads
#     DLAT_RSM       - (applies to CS only) average latency breakdown, spinning disk small reads
#     FLAT_RSM       - (applies to CS only) average latency breakdown, flash disk small reads
#     DLAT_WLG       - (applies to CS only) average latency breakdown, spinning disk large writes
#     FLAT_WLG       - (applies to CS only) average latency breakdown, flash disk large writes
#     DLAT_WSM       - (applies to CS only) average latency breakdown, spinning disk small writes
#     FLAT_WSM       - (applies to CS only) average latency breakdown, flash disk small writes
#


datafile=`echo /tmp/metriccurrentall.txt`
/usr/local/bin/dcli -l root -g /root/cell_group "cellcli -e list metriccurrent" > $datafile
export TM=$(date +%m/%d/%y" "%H:%M:%S)


# Header
print "TM,CATEGORY,GROUP,DISK_IOPS,FLASH_IOPS,DISK_MBS,FLASH_MBS,DISK_IOPS_R,FLASH_IOPS_R,DISK_IOPS_W,FLASH_IOPS_W,DLAT_RLG,FLAT_RLG,DLAT_RSM,FLAT_RSM,DLAT_WLG,FLAT_WLG,DLAT_WSM,FLAT_WSM"

#######################################
# extract IOPS for cells
#######################################
export DRW=`cat $datafile | egrep  'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC|CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FRW=`cat $datafile | egrep  'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC|CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

export DRWM=`cat $datafile | egrep  'CD_IO_BY_R_LG_SEC|CD_IO_BY_R_SM_SEC|CD_IO_BY_W_LG_SEC|CD_IO_BY_W_SM_SEC' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FRWM=`cat $datafile | egrep  'CD_IO_BY_R_LG_SEC|CD_IO_BY_R_SM_SEC|CD_IO_BY_W_LG_SEC|CD_IO_BY_W_SM_SEC' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

export DR=`cat $datafile | egrep  'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FR=`cat $datafile | egrep  'CD_IO_RQ_R_LG_SEC|CD_IO_RQ_R_SM_SEC' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

export DW=`cat $datafile | egrep  'CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`
export FW=`cat $datafile | egrep  'CD_IO_RQ_W_LG_SEC|CD_IO_RQ_W_SM_SEC' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

export DLATRLG=`cat $datafile | egrep  'CD_IO_TM_R_LG_RQ' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATRLG=`cat $datafile | egrep  'CD_IO_TM_R_LG_RQ' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`

export DLATRSM=`cat $datafile | egrep  'CD_IO_TM_R_SM_RQ' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATRSM=`cat $datafile | egrep  'CD_IO_TM_R_SM_RQ' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`

export DLATWLG=`cat $datafile | egrep  'CD_IO_TM_W_LG_RQ' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATWLG=`cat $datafile | egrep  'CD_IO_TM_W_LG_RQ' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`

export DLATWSM=`cat $datafile | egrep  'CD_IO_TM_W_SM_RQ' |grep  -v FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`
export FLATWSM=`cat $datafile | egrep  'CD_IO_TM_W_SM_RQ' |grep  FD_ |sed 's/,//g'|awk 'BEGIN {sum=0;count=0} {sum+=$4;++count} END {printf("%.2f",(sum/count)/1000);}'`

print "$TM,CS,ALL,$DRW,$FRW,$DRWM,$FRWM,$DR,$FR,$DW,$FW,$DLATRLG,$FLATRLG,$DLATRSM,$FLATRSM,$DLATWLG,$FLATWLG,$DLATWSM,$FLATWSM"


#######################################
# extract IOPS for database
#######################################
export db_str=`cat $datafile | egrep 'DB_FD_IO_RQ_LG_SEC' | grep -v DBUA | awk '{ print $3}' | sort | uniq`

for db_name in `echo $db_str`
do
  # Calculate Total IOPS of harddisk
  # DB_IO_RQ_LG_SEC
  # DB_IO_RQ_SM_SEC
  db_drw=`cat $datafile | egrep 'DB_IO_RQ_LG_SEC|DB_IO_RQ_SM_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  # Calculate Total IOPS of flashdisk
  # DB_FD_IO_RQ_LG_SEC
  # DB_FD_IO_RQ_SM_SEC
  db_frw=`cat $datafile | egrep 'DB_FD_IO_RQ_LG_SEC|DB_FD_IO_RQ_SM_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  # Calculate Total MB/s of harddisk
  # DB_IO_BY_SEC
  db_drwm=`cat $datafile | egrep 'DB_IO_BY_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  # Calculate Total MB/s of flashdisk
  # DB_FC_IO_BY_SEC
  # DB_FD_IO_BY_SEC
  # DB_FL_IO_BY_SEC
  db_frwm=`cat $datafile | egrep 'DB_FC_IO_BY_SEC|DB_FD_IO_BY_SEC|DB_FL_IO_BY_SEC' |grep $db_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  print "$TM,DB,$db_name,$db_drw,$db_frw,$db_drwm,$db_frwm,0,0,0,0,0,0,0,0,0,0,0,0"

done


#######################################
# extract IOPS for DBRM consumer groups
#######################################
export cg_str=`cat $datafile | egrep 'CG_FD_IO_RQ_LG_SEC' | grep -v DBUA | awk '{ print $3}' | sort | uniq`

for cg_name in `echo $cg_str`
do

  # Calculate Total IOPS of harddisk
  # CG_IO_RQ_LG_SEC
  # CG_IO_RQ_SM_SEC
  cg_drw=`cat $datafile | egrep 'CG_IO_RQ_LG_SEC|CG_IO_RQ_SM_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  # Calculate Total IOPS of flashdisk
  # CG_FD_IO_RQ_LG_SEC
  # CG_FD_IO_RQ_SM_SEC
  cg_frw=`cat $datafile | egrep 'CG_FD_IO_RQ_LG_SEC|CG_FD_IO_RQ_SM_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`


  # Calculate Total MB/s of harddisk
  # CG_IO_BY_SEC
  cg_drwm=`cat $datafile | egrep 'CG_IO_BY_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  # Calculate Total MB/s of flashdisk
  # CG_FC_IO_BY_SEC
  # CG_FD_IO_BY_SEC
  cg_frwm=`cat $datafile | egrep 'CG_FC_IO_BY_SEC|CG_FD_IO_BY_SEC' |grep $cg_name |sed 's/,//g'|awk 'BEGIN {w=0} {w=$4+w;} END {printf("%d\n",w);}'`

  print "$TM,CG,$cg_name,$cg_drw,$cg_frw,$cg_drwm,$cg_frwm,0,0,0,0,0,0,0,0,0,0,0,0"

done






step by step environment


Install rlwrap and set alias

-- if you are subscribed to the EPEL repo
yum install rlwrap

-- if you want to build from source
# wget http://utopia.knoware.nl/~hlub/uck/rlwrap/rlwrap-0.37.tar.gz
# tar zxf rlwrap-0.37.tar.gz
# rm rlwrap-0.37.tar.gz
The configure utility will show an error: you need the GNU readline library.
It just needs the readline-devel package:
# yum install readline-devel*
# cd rlwrap-0.37
# ./configure
# make
# make install
# which rlwrap
/usr/local/bin/rlwrap



alias sqlplus='rlwrap sqlplus'
alias rman='rlwrap rman'

Install environment framework - karlenv

# name: environment framework - karlenv
# source URL: http://karlarao.tiddlyspot.com/#%5B%5Bstep%20by%20step%20environment%5D%5D
# notes: 
#      - I've edited/added some lines on the setsid and showsid from 
#         Coskan's code making it suitable for most unix(solaris,aix,hp-ux)/linux environments http://goo.gl/cqRPK
#      - added lines of code before and after the setsid and showsid to get the following info:
#         - software homes installed
#         - get DBA scripts location
#         - set alias
#

# SCRIPTS LOCATION
export TANEL=~/dba/tanel
export KERRY=~/dba/scripts
export KARL=~/dba/karao/scripts/
export SQLPATH=~/:$TANEL:$KERRY:$KARL
# ALIAS
alias s='rlwrap -D2 -irc -b'\''"@(){}[],+=&^%#;|\'\'' -f $TANEL/setup/wordfile_11gR2.txt sqlplus / as sysdba @/tmp/login.sql'
alias s1='sqlplus / as sysdba @/tmp/login.sql'
alias oradcli='dcli -l oracle -g ~/dbs_group'
# alias celldcli='dcli -l root -g /root/cell_group'


# MAIN
cat `cat /etc/oraInst.loc | grep -i inventory | sed 's/..............\(.*\)/\1/'`/ContentsXML/inventory.xml | grep "HOME NAME" 2> /dev/null
export PATH=""
export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$SQLPATH:~/dba/bin:$PATH
export myid="`whoami`@`hostname`"
export PS1='${myid}':'$PWD':'$ORACLE_SID
$ '
export EDITOR=vi

export GLOGIN=`ls /tmp/login.sql 2> /dev/null | wc -l`
        if [ "$GLOGIN" -eq 1 ] ; then
                        echo ""
        else
						echo "SET SQLPROMPT \"_USER'@'_CONNECT_IDENTIFIER'>' \"
						SET LINES 260 TIME ON" > /tmp/login.sql
        fi

setsid ()
        {
        unset ORATAB
        unset ORACLE_BASE
        unset ORACLE_HOME
        unset ORACLE_SID

        export ORATAB_OS=`ls /var/opt/oracle/oratab 2> /dev/null | wc -l`
        if [ "$ORATAB_OS" -eq 1 ] ; then
                        export ORATAB=/var/opt/oracle/oratab
        else
                        export ORATAB=/etc/oratab
        fi

        export ORAENVFILE=`ls /usr/local/bin/oraenv 2> /dev/null | wc -l`
        if [ "$ORAENVFILE" -eq 1 ] ; then
                        echo ""
        else
                        cat $ORATAB | grep -v "^#" | grep -v "*"
                        echo ""
                        echo "Please enter the ORACLE_HOME: "
                        read RDBMS_HOME
                        export ORACLE_HOME=$RDBMS_HOME
        fi

        if tty -s
        then
                if [ -f $ORATAB ]
                then
                        line_count=`cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//' | wc -l`
                        # check that the oratab file has some contents
                        if [ $line_count -ge 1 ]
                                then
                                sid_selected=0
                                while [ $sid_selected -eq 0 ]
                                do
                                        sid_available=0
                                        for i in `cat $ORATAB | grep -v "^#" | grep -v "*" | sed 's/:.*//'`
                                                do
                                                sid_available=`expr $sid_available + 1`
                                                sid[$sid_available]=$i
                                                done
                                        # get the required SID
                                        case ${SETSID_AUTO:-""} in
                                                YES) # Auto set use 1st entry
                                                sid_selected=1 ;;
                                                *)
                                                i=1
                                                while [ $i -le $sid_available ]
                                                do
                                                        printf "%2d- %10s\n" $i ${sid[$i]}
                                                        i=`expr $i + 1`
                                                done
                                                echo ""
                                                echo "Select the Oracle SID with given number [1]:"
                                                read entry
                                                if [ -n "$entry" ]
                                                then
                                                        entry=`echo "$entry" | sed "s/[a-z,A-Z]//g"`
                                                        if [ -n "$entry" ]
                                                        then
                                                                entry=`expr $entry`
                                                                if [ $entry -ge 1 ] && [ $entry -le $sid_available ]
                                                                then
                                                                        sid_selected=$entry
                                                                fi
                                                        fi
                                                        else
                                                        sid_selected=1
                                                fi
                                        esac
                                done
                                #
                                # SET ORACLE_SID
                                #
                                export PATH=$HOME/bin:/bin:/usr/bin:/usr/local/bin:/sbin:/usr/sbin:/usr/local/sbin:$ORACLE_HOME/bin:$ORACLE_PATH:$PATH
                                export ORACLE_SID=${sid[$sid_selected]}
                                echo "Your profile configured for $ORACLE_SID with information below:"
                                unset LD_LIBRARY_PATH
                                ORAENV_ASK=NO
                                . oraenv
                                unset ORAENV_ASK
                                #
                                #GIVE MESSAGE
                                #
                                else
                                echo "No entries in $ORATAB. no environment set"
                        fi
                fi
        fi
        }

showsid()
        {
        echo ""
        echo "ORACLE_SID=$ORACLE_SID"
        echo "ORACLE_BASE=$ORACLE_BASE"
        echo "ORACLE_HOME=$ORACLE_HOME"
        echo ""
        }

# Find oracle_home of running instance
ps -ef | grep pmon | grep -v grep | grep -v bash | grep -v perl |\
while read PMON; do
   INST=`echo $PMON | awk {' print $2, $8 '}`
   INST_PID=`echo $PMON | awk {' print $2'}`
   INST_HOME=`ls -l /proc/$INST_PID/exe 2> /dev/null | awk -F'>' '{ print $2 }' | sed 's/bin\/oracle$//' | sort | uniq`
  echo "$INST $INST_HOME"
done

# Set Oracle environment 
setsid
showsid



Usage

[root@desktopserver ~]# su - oracle
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ vi .karlenv      <-- copy the script from the "Install environment framework - karlenv" section of the wiki link above
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ ls -la | grep karl
-rw-r--r--  1 oracle dba   6071 Dec 14 15:58 .karlenv
[oracle@desktopserver ~]$
[oracle@desktopserver ~]$ . ~oracle/.karlenv      <-- set the environment
<HOME_LIST><HOME NAME="Ora11g_gridinfrahome1" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1" CRS="true"/><HOME NAME="OraDb11g_home1" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2"/></HOME_LIST><COMPOSITEHOME_LIST></COMPOSITEHOME_LIST>


 1-       +ASM
 2-         dw

Select the Oracle SID with given number [1]:
2      <-- choose an instance
Your profile configured for dw with information below:
The Oracle base has been set to /u01/app/oracle

ORACLE_SID=dw
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1

oracle@desktopserver.local:/home/oracle:dw
$ s      <-- rlwrap'd sqlplus alias, also you can use the "s1" alias if you don't have rlwrap installed

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 5 15:41:15 2012

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options


USERNAME             INST_NAME    HOST_NAME                 SID   SERIAL#  VERSION    STARTED  SPID            OPID  CPID            SADDR            PADDR
-------------------- ------------ ------------------------- ----- -------- ---------- -------- --------------- ----- --------------- ---------------- ----------------
SYS                  dw           desktopserver.local       5     8993     11.2.0.3.0 20111219 27483           24    27480           00000000DFB78138 00000000DF8F9FA0


SQL> @gas      <-- calling one of Kerry's scripts from the /home/oracle/dba/scripts directory

 INST   SID PROG       USERNAME      SQL_ID         CHILD PLAN_HASH_VALUE        EXECS       AVG_ETIME SQL_TEXT                                  OSUSER                         MACHINE
----- ----- ---------- ------------- ------------- ------ --------------- ------------ --------------- ----------------------------------------- ------------------------------ -------------------------
    1     5 sqlplus@de SYS           bmyd05jjgkyz1      0        79376787            3         .003536 select a.inst_id inst, sid, substr(progra oracle                         desktopserver.local
    1   922 OMS        SYSMAN        2b064ybzkwf1y      0               0       50,515         .004947 BEGIN EMD_NOTIFICATION.QUEUE_READY(:1, :2 oracle                         desktopserver.local

SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP and Real Application Testing options
oracle@desktopserver.local:/home/oracle:dw



making a generic environment script.. called as "dbaenv"

1)
  • mkdir -p $HOME/dba/bin
  • then add $HOME/dba/bin to the PATH in .bash_profile
$ cat .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin:$HOME/dba/bin

export PATH
export ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
export PATH=$ORACLE_HOME/bin:.:$PATH
2) copy the code of .karlenv above, then save it as a dbaenv file in the $HOME/dba/bin directory
3) call it as follows from any directory
. dbaenv
4) for RAC One Node, this pmoncheck script is also helpful to have in the $HOME/dba/bin directory
$ cat pmoncheck
dcli -l oracle -g /home/oracle/dbs_group ps -ef | grep pmon | grep -v grep | grep -v ASM







GitHub

Awesome github walkthrough - video series http://308tube.com/youtube/github/
https://github.com/karlarao
http://git-scm.com/download/win
http://www.javaworld.com/javaworld/jw-08-2012/120830-osjp-github.html?page=1

HOWTO - general workflow




Basic commands and getting started

Git Data Flow
1) Current Working Directory	<-- git init <project>
2) Index (cache)				<-- git add .
3) Local Repository				<-- git commit -m "<comment>"
4) Remote Repository	
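
A minimal sketch tying those four stages to commands (the repo name and remote URL below are placeholders):

git init myproject && cd myproject            <-- 1) current working directory
echo "# myproject" > README.md
git add README.md                             <-- 2) index (cache)
git commit -m "initial commit"                <-- 3) local repository
git remote add origin git@github.com:karlarao/myproject.git
git push origin master                        <-- 4) remote repository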

Client side setup
http://git-scm.com/downloads   <-- download here 

git config --global user.name "karlarao"
git config --global user.email "karlarao@gmail.com"

Common commands
git init awrscripts				<-- or you can just cd on "awrscripts" folder and execute "git init"
git status
git add . 						<-- add all the files under the master folder to the staging area
git add <filename>				<-- add just a file
git rm --cached <filename>		<-- remove a file from the staging area
git commit -m "initial commit"	<-- to commit changes (w/ comment), and save a snapshot of the local repository 
                                             * note that after you modify a file you have to "git add ." again, else commit will say no changes added to commit
git log							<-- show summary of commits
vi README.md        <-- markdown format readme file, header should start with #

git diff
git add .				
git diff --cached				<-- get the differences in the staging area, because you've already executed the "add"..

## shortcuts
git commit -a -m "short commit"		<-- combination of add and commit
git log --oneline					<-- shorter summary
git status -s						<-- shorter show changes

Integration with Github.com

Github.com setup
go to github.com and create a new repository
on your PC go to C:\Users\Karl
open git bash and type in ssh-keygen below
ssh-keygen.exe -t rsa -C "karlarao@gmail.com"		<-- this will create RSA on C:\Users\Karl directory
copy the contents of id_rsa.pub under C:\Users\karl\.ssh directory
go to github.com -> Account Settings -> SSH Keys -> Add SSH Key
ssh -T git@github.com								<-- to test the authentication
Github.com integrate and push
go to the repository page -> on the SSH tab -> copy the SSH clone URL
git remote add origin <repo SSH URL from the website>
git remote add origin git@github.com:karlarao/awrscripts.git
git push origin master
Github.com integrate with GUI
download the GUI here http://windows.github.com/
login and configure, at the end just hit skip
go to tools -> options -> change the default storage directory to the local git directory C:\Dropbox\CodeNinja\GitHub
click Scan For Repositories -> click Add -> click Update
click Publish -> click Sync

Branch, Merge, Clone, Fork

Branching	<-- allows you to create a separate working copy of your code 
Merging		<-- merge branches together
Cloning		<-- other developers can get a copy of your code from a remote repo
Forking		<-- make use of someone's code as starting point of a new project


-- 1st developer created a branch r2_index
git branch								<-- show branches
git branch r2_index						<-- create a branch name "r2_index"
git checkout r2_index					<-- to switch to the "r2_index" branch
git checkout <the branch you want to go>	* make sure to close all files before switching to another branch

-- 2nd developer on another machine created r2_misc
git clone <ssh link>					<-- to clone a project
git branch r2_misc
git checkout r2_misc
git push origin <branch name>	<-- to update the remote repo

-- bug fix on master
git checkout master
git push origin master

-- merge to combine the changes from 1st developer to the master project
	* conflict may happen due to changes at the same spot for both branches
git branch r2_index
git merge master

	* conflict looks like the following:
		<<<<<<< HEAD
		1)
		=======
		TOC:
		1) one
		2) two
		3) three
		>>>>>>> master
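	* to resolve, edit the file so only the wanted lines remain and the <<<<<<</=======/>>>>>>> markers are gone, then stage and commit before pushing (a sketch, the file name is a placeholder):
git add README.md
git commit -m "merge master into r2_index"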
git push origin r2_index

-- pull, synchronizes the local repo with the remote repo
	* remember, PUSH to send changes up to GitHub, PULL to sync with GitHub
git pull origin master



Delete files on git permanently

http://stackoverflow.com/questions/1983346/deleting-files-using-git-github <-- good stuff
http://dalibornasevic.com/posts/2-permanently-remove-files-and-folders-from-a-git-repository
https://www.kernel.org/pub/software/scm/git/docs/git-filter-branch.html
cd /Users/karl/Dropbox/CodeNinja/GitHub/tmp
git init
git status
git filter-branch --force --index-filter 'git rm --cached --ignore-unmatch *' --prune-empty --tag-name-filter cat -- --all
git commit -m "."
git push origin master --force


Deleting a repository

https://help.github.com/articles/deleting-a-repository


version control format

http://git-scm.com/book/en/v2/Git-Basics-Tagging
Semantic Versioning 2.0.0 http://semver.org/
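
A quick sketch of tagging releases with semantic versions (the version numbers are just examples):

git tag -a v1.0.0 -m "first stable release"
git tag -a v1.0.1 -m "bug fix only"
git tag -a v1.1.0 -m "backwards compatible feature added"
git push origin --tags
git tag					<-- list existing tags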


other references

gitflow http://nvie.com/posts/a-successful-git-branching-model/

master zip
http://stackoverflow.com/questions/8808164/set-the-name-of-a-zip-downloadable-from-github-or-other-ways-to-enroll-google-tr
http://stackoverflow.com/questions/7106012/download-a-single-folder-or-directory-from-a-github-repo
http://alblue.bandlem.com/2011/09/git-tip-of-week-git-archive.html
http://gitready.com/intermediate/2009/01/29/exporting-your-repository.html
http://manpages.ubuntu.com/manpages/intrepid/man1/git-archive.1.html
http://stackoverflow.com/questions/8377081/github-api-download-zip-or-tarball-link

uploading binary files (zip)
https://help.github.com/articles/distributing-large-binaries/
https://help.github.com/articles/about-releases/
https://help.github.com/articles/creating-releases/
https://gigaom.com/2013/07/09/oops-github-did-it-again-relaunches-binary-uploads-after-scuttling-them/
https://github.com/blog/1547-release-your-software








RealTimeUserCheck.sql

-- viewing waits system wide (top 5 by time waited)
col event format a46
col seconds format 999,999,990.00
col calls format 999,999,990
select * from (
select a.event,
       a.time_waited,
       a.total_waits calls,
       a.time_waited/a.total_waits average_wait,
       sysdate - b.startup_time days_old
from   v$system_event a, v$instance b
order by a.time_waited desc
) where rownum < 6;


-- viewing waits on a session
select
  e.event, e.time_waited
from
  v$session_event  e
where
  e.sid = 12
union all
select
  n.name,
  s.value
from
  v$statname  n,
  v$sesstat  s
where
  s.sid = 12
and n.statistic# = s.statistic# 
and n.name = 'CPU used by this session'
order by
  2 desc
/


-- sesstat
select a.sid, b.name, a.value
from v$sesstat a, v$statname b
where a.statistic# = b.statistic#
and a.value > 0
and a.sid = 12;


-- kill sessions 
-- select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' post_transaction;'
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$process p, v$session s, v$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
and   sa.sql_text NOT LIKE '%usercheck%'
-- and   upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
-- and   s.sid = 178
and s.sql_id = '&sql_id'
-- and sid in (1404,1023,520,389,645)
-- and   s.username = 'APAC'
 -- and sa.plan_hash_value = 3152625234
order by status desc;

-- quicker kill sessions
select /* usercheck */ 'alter system disconnect session '''||s.sid||','||s.serial#||''''||' immediate;'
from v$session s
where s.sql_id = '&sql_id';


-- purge SQL_ID on shared pool

var name varchar2(50)
BEGIN
	select /* usercheck */ sa.address||','||sa.hash_value into :name
	from v$process p, v$session s, v$sqlarea sa
	where p.addr=s.paddr
	and   s.username is not null
	and   s.sql_address=sa.address(+)
	and   s.sql_hash_value=sa.hash_value(+)
	and   sa.sql_text NOT LIKE '%usercheck%'
	-- and   upper(sa.sql_text) LIKE '%CP_IINFO_DAILY_RECON_PKG.USP_DAILYCHANGEFUND%'
	 and   s.sid = 176
	-- and   s.username = 'APAC'
	order by status desc;

dbms_shared_pool.purge(:name,'C',1);
END;
/
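
A shorter variant, purging by sql_id directly (a sketch; assumes the cursor is still present in v$sqlarea):

var name varchar2(50)
BEGIN
	select address||','||hash_value into :name
	from v$sqlarea
	where sql_id = '&sql_id';
	dbms_shared_pool.purge(:name,'C',1);
END;
/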



-- show all users
-- on windows to kill do.. orakill <instance_name> <spid>
set lines 32767
col terminal format a4
col machine format a4
col os_login format a4
col oracle_login format a4
col osuser format a4
col module format a5
col program format a8
col schemaname format a5
-- col state format a8
col client_info format a5
col status format a4
col sid format 99999
col serial# format 99999
col unix_pid format a8
col txt format a50
col action format a8
select /* usercheck */ s.INST_ID, s.terminal terminal, s.machine machine, p.username os_login, s.username oracle_login, s.osuser osuser, s.module, s.action, s.program, s.schemaname,
	s.state,
	s.client_info, s.status status, s.sid sid, s.serial# serial#, lpad(p.spid,7) unix_pid, -- s.sql_hash_value, 
	sa.plan_hash_value,	-- remove in 817, 9i
	s.sql_id, 		-- remove in 817, 9i
	substr(sa.sql_text,1,1000) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
and   sa.sql_text NOT LIKE '%usercheck%'
-- and   lower(sa.sql_text) LIKE '%grant%'
-- and s.username = 'APAC'
-- and s.schemaname = 'SYSADM'
-- and lower(s.program) like '%uscdcmta21%'
-- and s.sid=12
-- and p.spid  = 14967
-- and s.sql_hash_value = 3963449097
-- and s.sql_id = '5p6a4cpc38qg3'
-- and lower(s.client_info) like '%10036368%'
-- and s.module like 'PSNVS%'
-- and s.program like 'PSNVS%'
order by status desc;


-- find running jobs
set linesize 250
col sid            for 9999     head 'Session|ID'
col spid                        head 'O/S|Process|ID'
col serial#        for 9999999  head 'Session|Serial#'
col log_user       for a10
col job            for 9999999  head 'Job'
col broken         for a1       head 'B'
col failures       for 99       head "fail"
col last_date      for a18      head 'Last|Date'
col this_date      for a18      head 'This|Date'
col next_date      for a18      head 'Next|Date'
col interval       for 9999.000 head 'Run|Interval'
col what           for a60
select j.sid,
s.spid,
s.serial#,
       j.log_user,
       j.job,
       j.broken,
       j.failures,
       j.last_date||':'||j.last_sec last_date,
       j.this_date||':'||j.this_sec this_date,
       j.next_date||':'||j.next_sec next_date,
       j.next_date - j.last_date interval,
       j.what
from (select djr.SID, 
             dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES, 
             dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC, 
             dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
        from dba_jobs dj, dba_jobs_running djr
       where dj.job = djr.job ) j,
     (select p.spid, s.sid, s.serial#
          from v$process p, v$session s
         where p.addr  = s.paddr ) s
where j.sid = s.sid;



-- find where a system is stuck
break on report
compute sum of sessions on report
select event, count(*) sessions from v$session_wait
where state='WAITING'
group by event
order by 2 desc;


-- find the session state
select event, state, count(*) from v$session_wait group by event, state order by 3 desc;


-- when user calls up, describe wait events per session since the session has started up
select max(total_waits), event, sid from v$session_event   
where sid = 12
group by sid, event
order by 1 desc;

-- You can easily discover which session has high TIME_WAITED on the db file sequential read or other waits
select a.sid,
       a.event,
       a.time_waited,
       a.time_waited / c.sum_time_waited * 100 pct_wait_time,
       round((sysdate - b.logon_time) * 24) hours_connected
from   v$session_event a, v$session b,
      (select sid, sum(time_waited) sum_time_waited
       from   v$session_event
       where  event not in (
                   'Null event',
                   'client message',
                   'KXFX: Execution Message Dequeue - Slave',
                   'PX Deq: Execution Msg',
                   'KXFQ: kxfqdeq - normal deqeue',
                   'PX Deq: Table Q Normal',
                   'Wait for credit - send blocked',
                   'PX Deq Credit: send blkd',
                   'Wait for credit - need buffer to send',
                   'PX Deq Credit: need buffer',
                   'Wait for credit - free buffer',
                   'PX Deq Credit: free buffer',
                   'parallel query dequeue wait',
                   'PX Deque wait',
                   'Parallel Query Idle Wait - Slaves',
                   'PX Idle Wait',
                   'slave wait',
                   'dispatcher timer',
                   'virtual circuit status',
                   'pipe get',
                   'rdbms ipc message',
                   'rdbms ipc reply',
                   'pmon timer',
                   'smon timer',
                   'PL/SQL lock timer',
                   'SQL*Net message from client',
                   'WMON goes to sleep')
       having sum(time_waited) > 0 group by sid) c
where a.sid = b.sid
and   a.sid = c.sid
and   a.time_waited > 0
-- and   a.event = 'db file sequential read'
order by hours_connected desc, pct_wait_time;


-- show all users RAC
select s.inst_id instance_id,
       s.failover_type failover_type,
       s.FAILOVER_METHOD failover_method,
       s.FAILED_OVER failed_over,
       p.username os_login,                  
       s.username oracle_login,
       s.status status,
       s.sid oracle_session_id,
       s.serial# oracle_serial_no,
       lpad(p.spid,7) unix_process_id,
       s.machine, s.terminal, s.osuser,
       substr(sa.sql_text,1,540) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid  
order by 3;


-- this is for RAC TAF, fewer columns
col oracle_login format a10
col instance_id format 99
col sidserial format a8
select s.inst_id instance_id,
       s.failover_type failover_type,
       s.FAILOVER_METHOD failover_method,
       s.FAILED_OVER failed_over,               
       s.username oracle_login,
       s.status status,
       concat (s.sid,s.serial#) sidserial,
	substr(sa.sql_text,1,15) txt
from gv$process p, gv$session s, gv$sqlarea sa
where p.addr=s.paddr
and   s.username is not null
and   s.type = 'USER'
and   s.username = 'ORACLE'
and   s.sql_address=sa.address(+)
and   s.sql_hash_value=sa.hash_value(+)
--and s.sid=48
--and p.spid  
order by 6;


-- show open cursors
col txt format a100
select sid, hash_value, substr(sql_text,1,1000) txt from v$open_cursor where sid = 12;


-- show running cursors
select   nvl(USERNAME,'ORACLE PROC'), s.SID, s.sql_hash_value, SQL_TEXT
from     sys.v_$open_cursor oc, sys.v_$session s
where    s.SQL_ADDRESS = oc.ADDRESS
and      s.SQL_HASH_VALUE = oc.HASH_VALUE
and s.sid = 12
order by USERNAME, s.SID;


-- Get recent snapshot
select instance_number, to_char(startup_time, 'DD-MON-YY HH24:MI:SS') startup_time, to_char(begin_interval_time, 'DD-MON-YY HH24:MI:SS') begin_interval_tim, snap_id 
from DBA_HIST_SNAPSHOT 
order by snap_id;


-- Finding top expensive SQL in the workload repository, get snap_ids first
select * from (
select a.sql_id as sql_id, sum(elapsed_time_delta)/1000000 as elapsed_time_in_sec,
	      (select x.sql_text
	      from dba_hist_sqltext x
	      where x.dbid = a.dbid and x.sql_id = a.sql_id) as sql_text
from dba_hist_sqlstat a, dba_hist_sqltext b
where a.sql_id = b.sql_id and
a.dbid   = b.dbid
and a.snap_id between 710 and 728
group by a.dbid, a.sql_id
order by elapsed_time_in_sec desc
) where ROWNUM < 2
/



-- Only valid for 10g Release 2, Finding top 10 expensive SQL in the cursor cache by elapsed time
select * from (
select sql_id, elapsed_time/1000000 as elapsed_time_in_sec, substr(sql_text,1,80) as sql_text
from   v$sqlstats
order by elapsed_time_in_sec desc
) where rownum < 11
/


-- get hash value statistics
-- The query sorts its output by the number of LIO calls executed per row returned. This is a
-- rough measure of statement efficiency. For example, the following output should bring to mind
-- the question, "Why should an application require more than 174 million memory accesses to
-- compute 5 rows?"
col stmtid      heading 'Stmt Id'               format    9999999999
col dr          heading 'PIO blks'              format   999,999,999
col bg          heading 'LIOs'                  format   999,999,999,999
col sr          heading 'Sorts'                 format       999,999
col exe         heading 'Runs'                  format   999,999,999,999
col rp          heading 'Rows'                  format 9,999,999,999
col rpr         heading 'LIOs|per Row'          format   999,999,999,999
col rpe         heading 'LIOs|per Run'          format   999,999,999,999
select  hash_value stmtid
       ,sum(disk_reads) dr
       ,sum(buffer_gets) bg
       ,sum(rows_processed) rp
       ,sum(buffer_gets)/greatest(sum(rows_processed),1) rpr
       ,sum(executions) exe
       ,sum(buffer_gets)/greatest(sum(executions),1) rpe
 from v$sql
where command_type in ( 2,3,6,7 )
and hash_value in (2023740151)
-- and rownum < 20
group by hash_value
order by 5 desc;


-- check block gets of a session
col block_gets format 999,999,999,990
col consistent_gets format 999,999,999,990
select to_char(sysdate, 'hh:mi:ss') "time", physical_reads, block_gets, consistent_gets, block_changes, consistent_changes
from v$sess_io 
where sid=681;


-- show SQL in shared SQL area, get hash value
    SELECT /* example */ substr(sql_text, 1, 80) sql_text,
           sql_id, 
	    hash_value, address, child_number, plan_hash_value, FIRST_LOAD_TIME
      FROM v$sql
     WHERE 
	--sql_id = '6wps6tju5b8tq'
	-- hash_value = 1481129178
	upper(sql_text) LIKE '%INSERT INTO PS_CBLA_RET_TMP SELECT CB_BUS_UN%'
       AND sql_text NOT LIKE '%example%' 
      order by first_load_time; 


-- show SQL hash
col txt format a1000
select 
       	sa.hash_value, sa.sql_id,
		substr(sa.sql_text,1,1000) txt
from v$sqlarea sa
where 
sa.hash_value = 517092776
--ADDRESS = '2EBC7854'
 --sql_id = 'gz5bfrcjq060u'; 


-- show full sql text of the transaction
col sql_text format a1000
set heading off
select sql_text from v$sqltext                
where  HASH_VALUE = 1481129178
-- where sql_id = 'a5xnahpb62cvq'
order by piece;
set heading on


/*
The trace doesn't contain the SQL_ID as such, only the hash value.
In this case Hash=61d72ac6.
Translate this to decimal and query v$sqlarea where hash_value = #
(if the hash value is still in v$sqlarea).
/u01/app/oracle/diag/rdbms/biprddal/biprd1/incident/incdir_65625/biprd1_dia0_19054_i65625.trc
~~~~~~~~~~
....
LibraryHandle: Address=0x1ff1434c8 Hash=61d72ac6 LockMode=N PinMode=0 LoadLockMode=0 Status=VALD 
ObjectName: Name=UPDATE WC_BUDGET_BALANCE_A_TMP A SET 
*/
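
-- a quick sketch of that translation: to_number with the 'x' format mask converts the hex hash
-- (61d72ac6, from the trace excerpt above) to decimal
select to_number('61d72ac6', 'xxxxxxxx') hash_dec from dual;

-- and, if the cursor is still cached, it can be looked up directly
select sql_id, hash_value, substr(sql_text,1,80) txt
from   v$sqlarea
where  hash_value = to_number('61d72ac6', 'xxxxxxxx');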
select sql_text from dba_hist_sqltext where sql_id in (select sql_id from DBA_HIST_SQLSTAT where plan_hash_value = 1641491142);


-- get sql id and hash value, convert hash to, sqlid to hash, sql_id to hash, h2s
col sql_text format a1000
select substr(sql_text, 1,30), sql_id, hash_value from v$sqltext                
  where  
  HASH_VALUE = 1312665718
  -- sql_id = '048znjmq3uvs9'
and rownum < 2;


-- show full sql text of the transaction, of the top sqls in awr
col sql_text format a1000
set heading off
select ''|| sql_id || ' '|| hash_value || ' ' || sql_text || '' from v$sqltext                
-- where  HASH_VALUE = 1481129178
where sql_id in (select distinct a.sql_id as sql_id
		from dba_hist_sqlstat a, dba_hist_sqltext b
		where a.sql_id = b.sql_id and
		a.dbid   = b.dbid
		and a.snap_id between 710 and 728)
order by sql_id, piece;
set heading on



-- query SQL in ASH
set lines 3000
select substr(sa.sql_text,1,500) txt, a.sample_id, a.sample_time, a.session_id, a.session_serial#, a.user_id, a.sql_id,
       a.sql_child_number, a.sql_plan_hash_value, 
       a.sql_opcode, a.plsql_object_id, a.service_hash, a.session_type,
       a.session_state, a.qc_session_id, a.blocking_session,
       a.blocking_session_status, a.blocking_session_serial#, a.event, a.event_id,
       a.seq#, a.p1, a.p2, a.p3, a.wait_class,
       a.wait_time, a.time_waited, a.program, a.module, a.action, a.client_id
from gv$active_session_history a, gv$sqltext sa 
where a.sql_id = sa.sql_id
-- and session_id = 126
/


/* -- weird scenario, when I'm looking for TRUNCATE statement, I can see it in V$SQLTEXT
-- and I can't see it on V$SQLAREA and V$SQL
select * from v$sqltext where upper(sql_text) like '%TRUNCATE%TEST3%';

select * from v$sqlarea 
where sql_id = 'dfwz4grz83d6a'
where upper(sql_text) like '%TRUNCATE%';

select * from v$sql 
where sql_id = 'dfwz4grz83d6a'
where upper(sql_text) like '%TRUNCATE%'; 

from oracle-l:

Checking V$FIXED_VIEW_DEFINITION, you can see that V$SQLAREA is based off of 
x$kglcursor_child_sqlid, V$SQL is off x$kglcursor_child, and V$SQLTEXT is off 
x$kglna.  I may be way off on this, but I believe pure DDL is not a cursor, 
which is why it won't be found in X$ cursor tables.  Check with a CTAS vs. a 
plain CREATE TABLE ... (field ...).  CTAS uses a cursor and would be found in 
all the X$ sql tables.  A plain CREATE TABLE won't.
*/
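
A quick way to try the CTAS vs. plain CREATE TABLE comparison suggested above (a sketch with made-up table names; run it in a scratch schema):

create table ctas_demo as select * from dual;
create table plain_demo (n number);
-- then compare where each statement shows up, per the oracle-l note
select sql_id, substr(sql_text,1,60) txt from v$sql     where upper(sql_text) like 'CREATE TABLE %DEMO%';
select sql_id, substr(sql_text,1,60) txt from v$sqltext where upper(sql_text) like 'CREATE TABLE %DEMO%';
drop table ctas_demo;
drop table plain_demo;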




-- query long operations
set lines 200
col opname format a35
col target format a10
col units format a10
select * from (
			select 
			sid, serial#, sql_id,
			opname, target, sofar, totalwork, round(sofar/totalwork, 4)*100 pct, units, elapsed_seconds, time_remaining time_remaining_sec, round(time_remaining/60,2) min
			,sql_hash_value
		-- 	,message
			from v$session_longops 
			WHERE sofar < totalwork
			order by start_time desc);


-- query session waits
set lines 300
col program format a23
col event format a18
col seconds format 99,999,990
col state format a17
select w.sid, s.sql_hash_value, s.program, w.event, w.wait_time/100 t, w.seconds_in_wait seconds_in_wait, w.state, w.p1, w.p2, w.p3
from v$session s, v$session_wait w
where s.sid = w.sid and s.type = 'USER'
and s.sid = 37
-- and s.sql_hash_value = 1789726554
-- and s.sid = w.sid and s.type = 'BACKGROUND'
and w.state = 'WAITING'
order by 6 asc;



-- show actual transaction start time, and exact object
SELECT s.saddr, s.SQL_ADDRESS, s.sql_hash_value, t.START_TIME, t.STATUS, s.lockwait, s.row_wait_obj#, row_wait_file#, s.row_wait_block#, s.row_wait_row#
--, s.blocking_session    
FROM   v$session s, v$transaction t
WHERE  s.saddr = t.ses_addr
and s.sid = 12;

-- search for the object (use row_wait_obj# from the query above as the object_id; the
-- dba_extents lookup below takes row_wait_file# / row_wait_block#)
  select owner, object_name, object_type              
  from dba_objects
  where object_id = 73524;

  SELECT owner,segment_name,segment_type
  FROM   dba_extents
  WHERE  file_id = 32
  AND 238305
  BETWEEN block_id AND block_id + blocks - 1;


-- open transactions 
set lines 199 pages 100
col object_name for a30
COL iid for 999
col usn for 9999
col slot for 9999
col ublk for 99999
col uname for a15
col sid for 9999
col ser# for 9999999
col start_scn for 99999999999999
col osuser for a20

select * from (
select v.inst_id iid, v.XIDUSN usn, v.XIDSLOT slot, v.XIDSQN ,v. START_TIME, v.start_scn,  v.USED_UBLK ublk, o.oracle_username uname,s.sid sid,s.serial# ser#, s.osuser, o.object_id oid ,d.object_name 
from gv$transaction v, gv$locked_object o, dba_objects d, gv$session s 
where  v.XIDUSN = o.XIDUSN and v.xidslot=o.xidslot and v.xidsqn=o.xidsqn and o.object_id = d.object_id and v.addr = s.taddr order by 6,1,11,12,13) where rownum < 26;



-- search for the object in the buffer cache
select b.sid,
       nvl(substr(a.object_name,1,30),
                  'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
       a.subobject_name,
       a.object_type
from   dba_objects a, v$session_wait b, x$bh c
where  c.obj   = a.object_id(+)
and    b.p1    = c.file#(+)
and    b.p2    = c.dbablk(+)
-- and    b.event = 'db file sequential read'
union
select b.sid,
       nvl(substr(a.object_name,1,30),
                  'P1='||b.p1||' P2='||b.p2||' P3='||b.p3) object_name,
       a.subobject_name,
       a.object_type
from   dba_objects a, v$session_wait b, x$bh c
where  c.obj   = a.data_object_id(+)
and    b.p1    = c.file#(+)
and    b.p2    = c.dbablk(+)
-- and    b.event = 'db file sequential read'
order by 1;

-- if there are locks, show the locks thats are waited in the system
select sid, type, id1, id2, lmode, request, ctime, block
  from v$lock
 where request>0;



-- per session pga
BREAK ON REPORT
COMPUTE SUM OF alme ON REPORT 
COMPUTE SUM OF mame ON REPORT 
COLUMN alme     HEADING "Allocated MB" FORMAT 99999D9
COLUMN usme     HEADING "Used MB"      FORMAT 99999D9
COLUMN frme     HEADING "Freeable MB"  FORMAT 99999D9
COLUMN mame     HEADING "Max MB"       FORMAT 99999D9
COLUMN username                        FORMAT a15
COLUMN program                         FORMAT a22
COLUMN sid                             FORMAT a5
COLUMN spid                            FORMAT a8
set pages 3000
SET LINESIZE 3000
set echo off
set feedback off
alter session set nls_date_format='yy-mm-dd hh24:mi:ss';

SELECT sysdate, s.username, SUBSTR(s.sid,1,5) sid, p.spid, logon_time,
       SUBSTR(s.program,1,22) program , s.process pid_remote,
       ROUND(pga_used_mem/1024/1024) usme,
       ROUND(pga_alloc_mem/1024/1024) alme,
       ROUND(pga_freeable_mem/1024/1024) frme,
       ROUND(pga_max_mem/1024/1024) mame,
       decode(a.IO_CELL_OFFLOAD_ELIGIBLE_BYTES,0,'No','Yes') Offload,
       s.sql_id
FROM  v$session s,v$process p, v$sql a
WHERE s.paddr=p.addr
and s.sql_id=a.sql_id
ORDER BY pga_max_mem, logon_time;

-- pga breakdown
SELECT pid, category, allocated, used, max_allocated
  FROM   v$process_memory
 WHERE  pid = (SELECT pid
                 FROM   v$process
                WHERE  addr= (select paddr
                                FROM   v$session
                               WHERE  sid = &sid));




-- UNDO
/* Shows active (in progress) transactions -- feed the db_block_size to multiply with t.used_ublk */
/* select value from v$parameter where name = 'db_block_size'; */
select sid, serial#,s.status,username, terminal, osuser,
       t.start_time, r.name, (t.used_ublk*8192)/1024 USED_kb, t.used_ublk "ROLLB BLKS",
       decode(t.space, 'YES', 'SPACE TX',
          decode(t.recursive, 'YES', 'RECURSIVE TX',
             decode(t.noundo, 'YES', 'NO UNDO TX', t.status)
       )) status
from sys.v_$transaction t, sys.v_$rollname r, sys.v_$session s
where t.xidusn = r.usn
  and t.ses_addr = s.saddr;


-- TEMP, show user currently using space in temp space 
select   se.username
        ,se.sid
        ,se.serial#
        ,su.extents
        ,su.blocks * to_number(rtrim(p.value))/1024/1024 as Space
        ,tablespace
        ,segtype
from     v$sort_usage su
        ,v$parameter  p
        ,v$session    se
where    p.name          = 'db_block_size'
and      su.session_addr = se.saddr
order by se.username, se.sid;



-- To report the info on temp usage used...
select swa.sid, vs.process, vs.osuser, vs.machine,vst.sql_text, vs.sql_id "Session SQL_ID",
swa.sql_id "Active SQL_ID", trunc(swa.tempseg_size/1024/1024)"TEMP TOTAL MB"
from v$sql_workarea_active swa, v$session vs, v$sqltext vst
where swa.sid=vs.sid
and vs.sql_id=vst.sql_id
and piece=0
and swa.tempseg_size is not null
order by "TEMP TOTAL MB" desc;


-- a quick TEMP script for threshold
echo "TEMP_Threshold: $TMP_THRSHLD"
sqlplus -s << EOF | read GET_TMP
/ as sysdba
set head off
set pagesize 0
select sum(trunc(swa.tempseg_size/1024/1024))"TEMP TOTAL MB"
from v\$sql_workarea_active swa;
EOF
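
(Note: the piped "read GET_TMP" is a ksh idiom; in bash the read would run in a subshell. A hypothetical follow-up that compares the captured value against the threshold could look like this:)

if [ ${GET_TMP:-0} -gt ${TMP_THRSHLD:-0} ]; then
   echo "TEMP usage ${GET_TMP} MB is over the ${TMP_THRSHLD} MB threshold"
fi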



-- Oracle also provides single-block read statistics for every database file in the V$FILESTAT view.
-- The file-level single-block average wait time can be calculated by dividing SINGLEBLKRDTIM by
-- SINGLEBLKRDS, as shown next. (SINGLEBLKRDTIM is in centiseconds.) You can quickly discover which
-- files have unacceptable average wait times and begin to investigate the mount points or devices
-- and ensure that they are exclusive to the database.

select a.file#,
       b.file_name,
       a.singleblkrds,
       a.singleblkrdtim,
       a.singleblkrdtim/a.singleblkrds average_wait
from   v$filestat a, dba_data_files b
where  a.file# = b.file_id
and    a.singleblkrds > 0
order by average_wait;


--------------------
-- BUFFER CACHE
--------------------

/* This dynamic view has an entry for each block in the database buffer cache. The statuses are:
   free : available RAM block; it might contain data but it is not currently in use
   xcur : block held exclusively by this instance
   scur : block held in cache, shared with other instances
   cr   : block for consistent read
   read : block being read from disk
   mrec : block in media recovery mode
   irec : block in instance (crash) recovery mode
If you need to investigate the buffer cache, you can use the following script: */
SELECT count(*), db.object_name, tb.name
    FROM v$bh bh, dba_objects db, v$tablespace tb
    WHERE bh.objd = db.object_id
    AND bh.TS# = TB.TS#
    AND db.owner NOT IN ('SYS', 'SYSTEM')
GROUP BY db.object_name, bh.TS#, tb.name
ORDER BY 1 ASC;


-- get block
select block#,file#,status from v$bh where objd = 46186;


-- get touch count
select tch, file#, dbablk,
       case when obj = 4294967295
            then 'rbs/compat segment'
            else (select max( '('||object_type||') ' ||
                              owner || '.' || object_name  ) ||
                         decode( count(*), 1, '', ' maybe!' )
                    from dba_objects
                   where data_object_id = X.OBJ )
        end what
  from (
select tch, file#, dbablk, obj
  from x$bh
 where state <> 0
 order by tch desc
       ) x
 where rownum <= 5
/

--shows touch count for tables/indexes. Use to determine tables/indexes to keep
select decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE') buffer_pool,
s.owner, s.segment_name, s.segment_type,count(bh.obj) blocks, round(avg(bh.tch),2) avg_use, max(bh.tch) max_use 
from sys_dba_segs s, X$BH bh where s.segment_objd = bh.obj 
group by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), s.segment_name, s.segment_type, s.owner 
order by decode(s.buffer_pool_id,0,'DEFAULT',1,'KEEP',2,'RECYCLE'), count(bh.obj) desc,
round(avg(bh.tch),2) desc, max(bh.tch) desc;


SQL test case

-- SQL Test Case - check on OPDG for more details
-------------------
select to_char(sysdate, 'YY/MM/DD HH24:MI:SS') AS "START" from dual;
col time1 new_value time1
col time2 new_value time2
select to_char(sysdate, 'SSSSS') time1 from dual;

set arraysize 5000
set termout off
set echo off verify off

-- define the variables in SQLPlus
variable PQ2 varchar2(32)
variable PQ1 varchar2(32)

-- Set the bind values
begin :PQ2 := '06/01/2011'; :PQ1 := '06/30/2011';
end;
/

-- Set statistics level to high
-- alter session set statistics_level = all;

alter session set current_schema = sysadm;
-- alter session set "_serial_direct_read"=ALWAYS;
-- ensure direct path read is done
-- alter session force parallel query;
-- alter session force parallel ddl;
-- alter session force parallel dml;

-- alter session set optimizer_index_cost_adj = 150 ;
-- alter session set "_b_tree_bitmap_plans"=false;
-- alter session set "_optimizer_cost_based_transformation" = on;
-- alter session set "_gby_hash_aggregation_enabled" = true;
-- alter session set "_unnest_subquery" = false;
-- alter session set "_optimizer_max_permutations"=80000;
-- alter session set plsql_optimize_level=3;
-- alter session set "_optim_peek_user_binds"=false;
-- alter session set "_optimizer_use_feedback" = false;


-- THE SQL STATEMENT
SELECT /*+ MONITOR */ ...
FROM ...
WHERE ...
    AND "PS_CBTA_PROD_TD_R_V1"."ACCOUNTING_DT" BETWEEN TO_DATE(:PQ2, 'MM/DD/YYYY') AND TO_DATE(:PQ1, 'MM/DD/YYYY')
/

set termout on
select to_char(sysdate, 'YY/MM/DD HH24:MI:SS') AS "END" from dual;
select to_char(sysdate, 'SSSSS') time2 from dual;
select &&time2 - &&time1 total_time from dual;
select '''END''' END from dual;
-------------------

sed

search for karl.com and replace it with example.com

sed -i 's/karl.com/example.com/g' *.trc


cat `ls -ltr *awr_topevents-tableau*csv   | awk '{print $9}'`  >> top_events-all.csv
cat `ls -ltr *awr_cpuwl-tableau*csv       | awk '{print $9}'`  >> cpuwl-all.csv       
cat `ls -ltr *awr_sysstat-tableau*csv     | awk '{print $9}'`  >> sysstat-all.csv   
cat `ls -ltr *awr_topsqlx-tableau-exa*csv | awk '{print $9}'`  >> topsqlx-all.csv   
cat `ls -ltr *awr_iowl-tableau-exa*csv    | awk '{print $9}'`  >> iowl-all.csv         

sed -i 's/fsprd2/fsprd1/g' iowl-all.csv    
sed -i 's/mtaprd112/mtaprd111/g' iowl-all.csv    
sed -i 's/pd01db04/pd01db03/g' iowl-all.csv  

http://www.warmetal.nl/sed
http://www.chriskdesigns.com/change-your-wordpress-domain-quickly-with-linux-mysql-and-sed/


remove first/last characters
http://www.ivorde.ro/How_to_remove_first_last_character_from_a_string_using_SED-75.html
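
The gist of that link as a one-liner (sample string made up):

echo "xDMAx" | sed 's/^.//; s/.$//'      <-- strips the first and last character, leaving DMA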


grep

http://linux.byexamples.com/archives/304/grep-multiple-lines/
http://www.unix.com/unix-dummies-questions-answers/51767-grep-required-pattern-next-2-3-lines.html
http://www.unix.com/shell-programming-scripting/51395-pattern-matching-file-then-display-10-lines-above-every-time.html


grep before and after
grep -B1 -A2 "DMA" message.txt     <-- output before 1 line after 2 lines
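
grep -C gives symmetric context, same idea:
grep -C2 "DMA" message.txt     <-- output 2 lines before and 2 lines after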


grep for a search string, and list the file
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ find . -type f | xargs grep "LOCAL_LISTEN"
./biprd1_ora_29214_2.aud:ACTION :[173] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:6:32} */'
./biprd1_ora_31855_2.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:13:13} */'
./biprd1_ora_1656_1.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:17:23} */'
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ find . -exec grep -H "LOCAL_LISTEN" {} \;
./biprd1_ora_29214_2.aud:ACTION :[173] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:6:32} */'
./biprd1_ora_31855_2.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:13:13} */'
./biprd1_ora_1656_1.aud:ACTION :[174] 'ALTER SYSTEM SET LOCAL_LISTENER='(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<IP>)(PORT=1521))))' SCOPE=MEMORY SID='biprd1' /* db agent *//* {0:17:23} */'

oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$
oracle@host1:/u01/app/oracle/admin/biprd/adump:biprd1
$ grep -l "LOCAL_LISTENER" *aud
biprd1_ora_1656_1.aud
biprd1_ora_29214_2.aud
biprd1_ora_31855_2.aud


oracle@host1:/home/oracle:biprd1
$ grep -l "LOCAL_LISTENER" /u01/app/oracle/admin/biprd/adump/*aud | xargs ls -ltr
-rw-r----- 1 oracle dba 1943 Dec 13 16:21 /u01/app/oracle/admin/biprd/adump/biprd1_ora_29214_2.aud
-rw-r----- 1 oracle dba 1944 Dec 15 16:31 /u01/app/oracle/admin/biprd/adump/biprd1_ora_31855_2.aud
-rw-r----- 1 oracle dba 1942 Dec 15 20:54 /u01/app/oracle/admin/biprd/adump/biprd1_ora_1656_1.aud


oracle@host2:/home/oracle:mtaprd111
$ grep -l "LOCAL_LISTENER" /u01/app/oracle/admin/biprd/adump/*aud
oracle@host2:/home/oracle:mtaprd111
$ ls -1 | wc -l
71



grep exclude file list
http://dbaspot.com/shell/199876-grep-exclude-list.html
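a rough sketch of both flavors (file names made up; --exclude needs GNU grep)
grep -v -f exclude_patterns.txt listing.txt        <-- drop lines matching any pattern listed in exclude_patterns.txt
grep -r "LOCAL_LISTENER" . --exclude="*.log"       <-- skip files by name while searching recursively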


grep between two search terms
http://www.cyberciti.biz/faq/howto-grep-text-between-two-words-in-unix-linux/
sed -n "/~~BEGIN-OS-INFORMATION~~/,/~~END-OS-INFORMATION~~/p" awr-hist-565219483-PRODRAC-118749-120198.out | grep -v BEGIN- | grep -v END-





PowerShell

BashShell

Docs: http://wiki.bash-hackers.org/doku.php , FAQ: http://mywiki.wooledge.org/BashFAQ


Sorting data by dates, numbers and much much more
http://prefetch.net/blog/index.php/2010/06/24/sorting-data-by-dates-numbers-and-much-much-more/

This is crazy useful, and I didn’t realize sort could be used to sort by date. I put this to use today, when I had to sort a slew of data that looked similar to this:
Jun 10 05:17:47 some_data_string
May 20 05:17:48 some_data_string2
Jun 17 05:17:49 some_data_string0
I was able to first sort by the month, and then by the day of the month:
$ awk '{printf "%-3s %-2s %-8s %-50s\n", $1, $2, $3, $4 }' data | sort -k1M -k2n
May 20 05:17:48 some_data_string2
Jun 10 05:17:47 some_data_string
Jun 17 05:17:49 some_data_string0

http://www.linuxconfig.org/Bash_scripting_Tutorial
http://www.oracle.com/technetwork/articles/servers-storage-dev/kornshell-1523970.html

vim


bitcoin

CodeNinja Tools

Version Control Software - VCS

DevTools, Dev Tools

Debugger
