Thursday, 12 May 2016

View the SQL statement executed by a concurrent request

SELECT to_char(sql_out.last_active_time,'dd hh24:mi:ss') Last_active
     , sql_out.sql_fulltext
     , sql_out.module
     , sql_plan.id
     , rpad(' ', depth*2, ' ')
       || rpad(operation, 50 - depth*2, ' ')
       || rpad(options, 30, ' ')
       || rpad(object_name, 30, ' ')
     , access_predicates
     , filter_predicates
  FROM gv$sql_plan sql_plan
     , gv$sql sql_out
 WHERE sql_plan.sql_id       = sql_out.sql_id
   AND sql_plan.inst_id      = sql_out.inst_id
   AND sql_plan.child_number = sql_out.child_number
   AND (sql_plan.sql_id, sql_plan.inst_id, sql_plan.child_number) IN
            (SELECT sql_id
                  , sql_in.inst_id
                  , MAX(child_number)
               FROM gv$sql sql_in
                  , fnd_concurrent_requests req
                  , fnd_concurrent_programs prog            
              WHERE req.request_id               = :request_id
                AND req.concurrent_program_id    = prog.concurrent_program_id
                AND req.program_application_id   = prog.application_id
                AND prog.concurrent_program_name = sql_in.module
                AND sql_in.last_active_time     >= req.actual_start_date
              GROUP BY sql_in.sql_id
                     , sql_in.inst_id
            )
 ORDER BY 1 desc
        , sql_plan.sql_id
        , sql_plan.child_number
        , sql_plan.id ;
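For example (a minimal sketch; the request id 1234567 is an assumption), you can supply the bind variable from SQL*Plus before running the query above:

VARIABLE request_id NUMBER
EXEC :request_id := 1234567
-- now run the query above; the most recently active statements appear first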

Thursday, 21 April 2016

Collect AWR report

How to Collect Standard Diagnostic Information Using AWR Reports for Performance Issues (Doc ID 1903158.1)

Gathering an AWR report for a Performance Issue

To gather an AWR report, use the AWR report generation script $ORACLE_HOME/rdbms/admin/awrrpt.sql. As with ADDM, you can run an AWR report as any user as long as they have the following roles and privileges:
• ADVISOR, SELECT_CATALOG_ROLE, and EXECUTE ON dbms_workload_repository.

For example, to allow the SCOTT user to both generate AWR snapshots and submit ADDM recommendation jobs, the user will need to be granted the following (you may skip this if you have already granted the privileges for the ADDM section above):

# sqlplus / as sysdba
 GRANT ADVISOR TO scott;
 GRANT SELECT_CATALOG_ROLE TO scott;
 GRANT EXECUTE ON dbms_workload_repository TO scott;
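Optionally (a quick sanity check, not from the note), you can confirm the grants from a DBA session:

SQL> SELECT privilege FROM dba_sys_privs WHERE grantee = 'SCOTT';          -- should include ADVISOR
SQL> SELECT granted_role FROM dba_role_privs WHERE grantee = 'SCOTT';      -- should include SELECT_CATALOG_ROLE
SQL> SELECT privilege, table_name FROM dba_tab_privs WHERE grantee = 'SCOTT';  -- EXECUTE on DBMS_WORKLOAD_REPOSITORY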

Once this is done, you can generate the report by changing to $ORACLE_HOME/rdbms/admin/, logging in, and executing the awrrpt.sql script:

# cd $ORACLE_HOME/rdbms/admin/
# sqlplus SCOTT/TIGER
SQL> START awrrpt.sql

The script will prompt for the report type (HTML or TXT; choose HTML), the begin and end snapshot IDs, and a report name (if you do not wish to accept the default).
By default the report is written to the current directory and is named awrrpt_<instance>_<begin_snap>_<end_snap>.html (or .txt), for example awrrpt_1_1531_1532.html.
The script will output the report location once it has completed.

Note: If you do not have write permission on the current directory, the report will not be produced and you will receive a message similar to the following:
SP2-0606: Cannot create SPOOL file "awrrpt_1_1531_1532.html"
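If writing to the current directory is a problem, or you want to script the report, a non-interactive alternative is to spool the output of DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML yourself. A minimal sketch; the DBID, instance number and snapshot IDs below are placeholders you would take from V$DATABASE and DBA_HIST_SNAPSHOT:

SET PAGESIZE 0 LINESIZE 1500 TRIMSPOOL ON HEADING OFF FEEDBACK OFF
SPOOL /tmp/awrrpt_1_1531_1532.html
SELECT output
  FROM TABLE(dbms_workload_repository.awr_report_html(
                 l_dbid     => 1234567890,   -- v$database.dbid (placeholder)
                 l_inst_num => 1,            -- instance number
                 l_bid      => 1531,         -- begin snapshot id
                 l_eid      => 1532));       -- end snapshot id
SPOOL OFF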

Wednesday, 6 April 2016

Find which concurrent manager is processing a concurrent program

SELECT a.include_flag, a.queue_application_id, c.user_concurrent_queue_name,
       d.user_concurrent_program_name
FROM   applsys.fnd_concurrent_queue_content a,
       applsys.fnd_concurrent_programs b,
       apps.fnd_concurrent_queues_vl c,
       apps.fnd_concurrent_programs_tl d
WHERE  a.type_id = b.concurrent_program_id
AND    c.concurrent_queue_id = a.concurrent_queue_id
AND    b.concurrent_program_id = d.concurrent_program_id
AND    d.language = USERENV('LANG')
AND    d.user_concurrent_program_name LIKE '%Calculate%';



select fcr.request_id req_id,
       substr(fcq.concurrent_queue_name, 1, 20) queue,
       to_char(fcr.actual_start_date, 'hh24:mi') s_time,
       substr(fcr.user_concurrent_program_name, 1, 60) name,
       substr(fcr.requestor, 1, 9) u_name,
       round((sysdate - fcr.actual_start_date) * 24, 2) elap,
       decode(fcr.phase_code, 'R', 'Running', 'P', 'Pending', 'I', 'Inactive', 'C', 'Completed', fcr.phase_code) phase,
       substr(decode(fcr.status_code, 'A', 'WAITING', 'B', 'RESUMING',
              'C', 'NORMAL', 'D', 'CANCELLED', 'E', 'ERROR', 'F', 'SCHEDULED',
              'G', 'WARNING', 'H', 'ON HOLD', 'I', 'NORMAL', 'M', 'NO MANAGER',
              'Q', 'STANDBY', 'R', 'NORMAL', 'S', 'SUSPENDED', 'T', 'TERMINATED',
              'U', 'DISABLED', 'W', 'PAUSED', 'X', 'TERMINATED', 'Z', 'WAITING',
              'UNKNOWN'), 1, 10) status
  from apps.fnd_concurrent_queues fcq,
       apps.fnd_concurrent_processes fcp,
       apps.fnd_conc_req_summary_v fcr
 where fcp.concurrent_queue_id = fcq.concurrent_queue_id
   and fcp.queue_application_id = fcq.application_id
   and fcr.controlling_manager = fcp.concurrent_process_id
   and fcr.request_id = '&1'
 order by fcr.request_id;
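The '&1' at the end is a SQL*Plus positional parameter, so the query is meant to be saved as a script and run with the request id as an argument (the script name and request id below are just examples):

SQL> @check_request.sql 1234567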

Oracle Apps Log files locations

Database Tier Logs are

Alert Log File location:

$ORACLE_HOME/admin/$CONTEXT_NAME/bdump/alert_$SID.log


Trace file location:

$ORACLE_HOME/admin/SID_Hostname/udump

Application Tier Logs


Start/Stop script log files location:

$COMMON_TOP/admin/log/CONTEXT_NAME/ 


OPMN log file location

$ORACLE_HOME/opmn/logs/ipm.log


Apache, Jserv, JVM log files locations:

$IAS_ORACLE_HOME/Apache/Apache/logs/ssl_engine_log

$IAS_ORACLE_HOME/Apache/Apache/logs/ssl_request_log

$IAS_ORACLE_HOME/Apache/Apache/logs/access_log

$IAS_ORACLE_HOME/Apache/Apache/logs/error_log

$IAS_ORACLE_HOME/Apache/JServ/logs


Concurrent log file location:

$APPL_TOP/admin/PROD/log or $APPLCSF/$APPLLOG

$APPLCSF/$APPLLOG and $APPLCSF/$APPLOUT

Patch log file location:

$APPL_TOP/admin/PROD/log


Worker Log file location:

$APPL_TOP/admin/PROD/log


AutoConfig log files location:

Application Tier:

$APPL_TOP/admin/SID_Hostname/log/DDMMTime/adconfig.log


Database Tier:

$ORACLE_HOME/appsutil/log/SID_Hostname/DDMMTime/adconfig.log


Error log file location:

Application Tier:

$APPL_TOP/admin/PROD/log


Database Tier :

$ORACLE_HOME/appsutil/log/SID_Hostname


In Oracle Applications R12, the log files are located in $LOG_HOME (which translates to $INST_TOP/logs)

Below list of log file locations could be helpful for you:


Concurrent Request related logs

$LOG_HOME/appl/conc -> location for concurrent request log and out files (see the query below for finding the exact file names)

$LOG_HOME/appl/admin -> location for mid-tier startup script log files
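If you need the exact log/out file name (and the node it was written on) for a particular request rather than just the directory, the request record stores them. A minimal sketch; the request id 1234567 is an assumption:

SELECT request_id, logfile_name, logfile_node_name, outfile_name, outfile_node_name
  FROM fnd_concurrent_requests
 WHERE request_id = 1234567;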


Apache Logs (10.1.3 Oracle Home which is equivalent to iAS Oracle Home - Apache, OC4J and OPMN)

$LOG_HOME/ora/10.1.3/Apache -> location for Apache error and access log files

$LOG_HOME/ora/10.1.3/j2ee -> location for J2EE-related log files

$LOG_HOME/ora/10.1.3/opmn -> location for OPMN-related log files


Forms & Reports related logs (10.1.2 Oracle home which is equivalent to 806 Oracle Home)

$LOG_HOME/ora/10.1.2/forms

$LOG_HOME/ora/10.1.2/reports


Startup/Shutdown Log files location:

$INST_TOP/logs/appl/admin/log (i.e. $LOG_HOME/appl/admin/log)


Patch log files location:

$APPL_TOP/admin/$SID/log/ 


Clone and AutoConfig log files location in Oracle E-Business Suite Release 12 

Logs for the adpreclone.pl are located: 

On the database tier: 

RDBMS $ORACLE_HOME/appsutil/log/<context>/StageDBTier_<timestamp>.log


On the application tier: 

$INST_TOP/admin/log/StageAppsTier_<timestamp>.log


Logs for admkappsutil.pl are located:

On the application tier: 

$INST_TOP/admin/log/MakeAppsUtil_<timestamp>.log

How to Trace a Concurrent Request And Generate TKPROF File

1. Enable Tracing For The Concurrent Program
Responsibility: System Administrator
Navigate: Concurrent > Program > Define
Query the Concurrent Program
Select the Enable Trace checkbox

Turn On Tracing
Responsibility: System Administrator
Navigate: Profiles > System
Query the profile option Concurrent: Allow Debugging
Set the profile to Yes

Run the Concurrent Program With Tracing Turned On
Log on to the Responsibility that runs the Concurrent Program
In the Submit Request screen, click Debug Options (B)
Select the checkbox for SQL Trace
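If you want to confirm that the Enable Trace checkbox took effect, the flag is visible on the program definition. A minimal sketch; the program short name XX_MY_PROG is a placeholder:

SELECT concurrent_program_name, enable_trace
  FROM fnd_concurrent_programs
 WHERE concurrent_program_name = 'XX_MY_PROG';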


2. Find the Trace File Name
Run the following SQL to find the raw trace file name and location for the concurrent request. The SQL prompts the user for the request id.

SELECT 'Request id: '||request_id
     , 'Trace id: '||oracle_process_id
     , 'Trace Flag: '||req.enable_trace
     , 'Trace Name: '||dest.value||'/'||lower(dbnm.value)||'_ora_'||oracle_process_id||'.trc'
     , 'Prog. Name: '||prog.user_concurrent_program_name
     , 'File Name: '||execname.execution_file_name||execname.subroutine_name
     , 'Status : '||decode(phase_code,'R','Running')||'-'||decode(status_code,'R','Normal')
     , 'SID Serial: '||ses.sid||','||ses.serial#
     , 'Module : '||ses.module
  FROM fnd_concurrent_requests req
     , v$session ses
     , v$process proc
     , v$parameter dest
     , v$parameter dbnm
     , fnd_concurrent_programs_vl prog
     , fnd_executables execname
 WHERE req.request_id = &request
   AND req.oracle_process_id = proc.spid(+)
   AND proc.addr = ses.paddr(+)
   AND dest.name = 'user_dump_dest'
   AND dbnm.name = 'db_name'
   AND req.concurrent_program_id = prog.concurrent_program_id
   AND req.program_application_id = prog.application_id
   AND prog.application_id = execname.application_id
   AND prog.executable_id = execname.executable_id;


To find the trace file directory:


SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';
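On 11g and later, the trace file lives under the 'Diag Trace' directory shown above, so you can also assemble the full path in one query. A sketch that reuses the naming convention from the query above (db_name-based file names; adjust if your instance name differs, e.g. on RAC):

SELECT d.value || '/' || lower(p.value) || '_ora_' || req.oracle_process_id || '.trc' trace_file
  FROM fnd_concurrent_requests req
     , v$diag_info d
     , v$parameter p
 WHERE req.request_id = &request_id
   AND d.name = 'Diag Trace'
   AND p.name = 'db_name';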


3. TKPROF Trace File from concurrent request

Once you have obtained the raw trace file, you need to format it using TKPROF.

$ tkprof raw_trace_file.trc output_file explain=apps/apps sort='(exeela,fchela)' sys=no

Where:

raw_trace_file.trc: name of the raw trace file

output_file: the TKPROF output file

explain: provides the explain plan for the SQL statements

sort: the criteria by which the SQL statements are sorted; this brings the most expensive SQL to the top of the output file

sys=no: excludes SQL statements issued by user SYS

Another example: to get the TKPROF output sorted by the longest-running queries first, limited to the top 10 long-running queries:



$ tkprof <filename.trc> <output_filename> sys=no explain=apps/<password> sort='(prsela,exeela,fchela)' print=10

Example: tkprof CR30326048.trc CR30326048.tkp sort='(fchela,exeela,prsela)' explain=apps/pass






tkprof tracefile outputfile [explain= ] [table= ]
              [print= ] [insert= ] [sys= ] [sort= ]
  table=schema.tablename   Use 'schema.tablename' with 'explain=' option.
  explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
  print=integer    List only the first 'integer' SQL statements.
  aggregate=yes|no
  insert=filename  List SQL statements and data inside INSERT statements.
  sys=no           TKPROF does not list SQL statements run as user SYS.
  record=filename  Record non-recursive statements found in the trace file.
  waits=yes|no     Record summary for any wait events found in the trace file.
  sort=option      Set of zero or more of the following sort options:
    prscnt  number of times parse was called
    prscpu  cpu time parsing
    prsela  elapsed time parsing
    prsdsk  number of disk reads during parse
    prsqry  number of buffers for consistent read during parse
    prscu   number of buffers for current read during parse
    prsmis  number of misses in library cache during parse
    execnt  number of execute was called
    execpu  cpu time spent executing
    exeela  elapsed time executing
    exedsk  number of disk reads during execute
    exeqry  number of buffers for consistent read during execute
    execu   number of buffers for current read during execute
    exerow  number of rows processed during execute
    exemis  number of library cache misses during execute
    fchcnt  number of times fetch was called
    fchcpu  cpu time spent fetching
    fchela  elapsed time fetching
    fchdsk  number of disk reads during fetch
    fchqry  number of buffers for consistent read during fetch
    fchcu   number of buffers for current read during fetch
    fchrow  number of rows fetched
    userid  userid of user that parsed the cursor

Panaya ETL Extraction error message

During the Panaya ETL extraction process we encountered the error messages below, but the extraction still completed.

* ad_bugs.csv: Error writing data to ad_bugs.csv: ORA-00904: "SUCCESS_FLAG": invalid identifier
* fast_formulas.csv: Error writing data to fast_formulas.csv: ORA-08103: object no longer exists
* ad_bugs.csv: Error writing data to ad_bugs.csv: ORA-00904: "SUCCESS_FLAG": invalid identifier
* profiles.csv: Error writing data to profiles.csv: ORA-00942: table or view does not exist

As per the Panaya support team, these temporary errors did not seem to affect the ETL processing; the output appears to have been created anyway, and they recommended uploading it to the Panaya site.

Panaya ETL Extraction - How to extract & upload your custom code

Download the Oracle Extraction Tool
  1. Select the Setup tab and then Code Box.
  2. Select the relevant System from the drop-down list.
  3. Download the Oracle extraction tool (jar file).
Always download the newest extraction code from Panaya's Code Box. It is not recommended to use older versions of the extraction.
Extracting the Custom Code and Usage Statistics
  1. Upload panaya-extractor.jar to your Oracle Application Server.
  2. Change your working directory to the location of the panaya-extractor.jar
  3. Using your Oracle Apps OS user, execute the following command:
For Oracle EBS target version* up to and including 12.1.3:
java -jar panaya-extractor.jar <apps password> [<ETL type>] [-customJavaTop=<Custom Java Top>]

For Oracle EBS target version* 12.2 and above:
java -jar panaya-extractor.jar <apps password> <SYSTEM password> [<ETL type>] [-customJavaTop=<Custom Java Top>]
* The target version is the version you wish to upgrade to.

Example
java -jar panaya-extractor.jar <appspass> <systempass> F
Note
If you wish to expedite the extraction process when upgrading to Oracle EBS version 12.2 and above, see the last section of this post.
   

Parameters explanation
<apps password> - The password of your apps schema in the DB
<SYSTEM password> - The password of your SYSTEM schema in the DB
[<ETL type>] - What data to extract during the ETL 

F - Full ETL (Customizations & Usage data) Please use this for the first extraction
C - Customizations only
U - Usage only

If no ETL type is specified, a Full ETL will be executed.

[-customJavaTop=<Custom Java Top>] - Path(s) to your framework projects other than $JAVA_TOP.
If more than one path is defined, use ':' as a delimiter. If -customJavaTop is not specified, only $JAVA_TOP will be used.
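For example, a run that passes two custom Java directories (the paths below are placeholders for your own framework locations):

java -jar panaya-extractor.jar <appspass> <systempass> F -customJavaTop=/u01/custom/java:/u01/xxco/java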



During the extraction process, an output directory will automatically be created for you.
The name of the directory will be output_<date>_<time>.
 
At the end of the process, one ZIP File will be created in your output directory, which you will need to upload to Panaya Code Box.
If you do not have access to the Panaya web site from your Application server, copy the extract file from there to a machine that has access and upload it from that machine.
If you run the Concurrent Manager on a different server than the Oracle Application server, a separate extraction must be done from each server.
If you have a two-node setup with the concurrent managers on a different server, copy the Panaya extraction tool to both servers and run the same command as above on each. Once you have both output files, merge them with the command below.

A Merge tool is available to combine the two extracts into a single file which can be uploaded to Panaya’s servers.
  • Run the Panaya extractor both on the Oracle Applications server and on one of the Concurrent Manager servers, as described in the section: EXTRACTING THE CUSTOM CODE AND USAGE STATISTICS.
Using your Oracle Apps OS user, execute the following command in order to merge the two files into one:
java -cp panaya-extractor.jar MergeFiles appsETL=<oa_etl> concETL=<cm_etl> [<new_etl>]
Parameters explanation
  • <oa_etl> – is the full path of the extracted filename from the Oracle Application server
  • <cm_etl> – is the full path of the extracted filename from the Concurrent Manager server
  • <new_etl> – is the full path of the merged extract file name. This is an optional parameter; if not specified, the default is ETL_MERGE_<timestamp>.zip
The merged file will be built as follows:
  • reports.zip/bin.zip/sql.zip – will be copied from the cm_etl file
  • framework.zip – will be copied from the cm_etl file, unless it is found in the oa_etl.zip file
  • Appl_tops.properties, etl.log – will be copied from the cm_etl and renamed to conc_appl_tops.properties, conc_etl.log
  • All other files will be copied from the oa_etl file
Example

java -cp panaya-extractor.jar MergeFiles appsETL=/b01/panaya_23feb/ETL_FULL_20160223_1633.zip concETL=/b01/knapps/panaya_23feb/output_20160223_0820/ETL_FULL_20160223_0820.zip /b01/knapps/panaya_23feb/merge/ETL_24feb.zip


Upload the Extract File
  1. Upload the ZIP file to Panaya Code Box
  2. Once uploaded, you should now receive a confirmation email regarding your custom code upload.
    Other users with the Upload Code and/or Manage Users permission will also receive a confirmation email. 

How to expedite the extraction process when upgrading to Oracle EBS target version 12.2 and above

Oracle E-Business Suite Release 12.2 introduces Online Patching (OLP), a new feature that significantly reduces the patch downtime that was needed in previous releases. Panaya can greatly assist you in identifying the numerous changes required when implementing OLP.
The Panaya extract (ETL) includes 4 Oracle-supplied SQL scripts (Readiness Reports) that gather the necessary information about your instance and its OLP readiness. These scripts can be very long-running.
To improve the performance of these scripts, we highly recommend you refresh the SYS and data dictionary statistics to ensure that the database will use the most efficient execution plan.
Suggested commands (to be run at the DBA’s discretion) before the ETL:
  1. SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SYS');
  2. SQL> EXEC DBMS_STATS.GATHER_DATABASE_STATS(gather_sys => TRUE);
  3. SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
  4. SQL> EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
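Optionally (a sketch, not part of the Panaya note; exact operation names can vary by database version), you can check when these statistics were last gathered before rerunning them:

SELECT operation, start_time, end_time
  FROM dba_optstat_operations
 WHERE operation IN ('gather_dictionary_stats', 'gather_fixed_objects_stats', 'gather_schema_stats')
 ORDER BY start_time DESC;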