Thursday, January 29, 2026

Multiple SQL Extraction Workflow for Automation

Step 1: Prepare the Folder

Create a folder with the SER number under: /monito_sam/extract_output/Extracts

Copy all required *.sql files into this folder.

Example:

cd /monito_sam/extract_output/Extracts

mkdir SRXXXXTEST

ls -lrt SRXXXXTEST

-rwxrwx--- 1 oracle dba 295 Nov 12 23:04 test_sam.sql
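Each .sql file placed here is expected to generate its own extract file in the same folder. A minimal sketch of such a script, assuming the extract is spooled as CSV (the file name, markup settings, and query below are illustrative, not the actual SR content):

cat SRXXXXTEST/test_sam.sql
set markup csv on
set feedback off
set pages 0
spool /monito_sam/extract_output/Extracts/SRXXXXTEST/test_sam.csv
select object_name, object_type from dba_objects where rownum <= 5;
spool off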

Step 2: Execute the Script

Syntax (the argument is the folder name, which the script resolves under /monito_sam/extract_output/Extracts):

./run_sql.sh <FOLDER_NAME>

To run in background with nohup:

nohup ./run_sql.sh <FOLDER_NAME> >/dev/null 2>&1 &

Example:

nohup ./run_sql.sh SRXXXXTEST >/dev/null 2>&1 &

Note: the script also re-launches itself under nohup (see the NOHUP_WRAPPED check at the top of run_sql.sh below), so a plain ./run_sql.sh SRXXXXTEST backgrounds the job as well.

Step 3: Post-Execution

After completion:

- The script compresses all generated *.csv files into .gz format inside the same folder.
- It grants the appropriate access permissions.

Check nohup job status:

jobs -l
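Or check from another terminal, as in the sample session at the end of this post:

ps -ef | grep run_sql.sh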

Verify output:

cd /monito_sam/extract_output/Extracts/SRXXXXTEST
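For example (the output file names are illustrative; they depend on what each .sql spools):

ls -lrt *.gz
zcat test_sam.csv.gz | head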

Step 4: Email Confirmation

Upon successful completion, an email notification is triggered automatically. The recipient address can be updated as per the requirements.
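The recipient comes from the EMAIL constant near the top of run_sql.sh:

EMAIL="SMXXX@karo.sam.com"   # change this to the required distribution list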





Example:

cat run_sql.sh

#!/bin/bash



# If not running in background, re-execute with nohup

if [[ -z "$NOHUP_WRAPPED" ]]; then

  export NOHUP_WRAPPED=1

  nohup "$0" "$@" > /dev/null 2>&1 &

  echo "Process started in background with nohup. PID: $!"

  exit 0

fi



# ===========================

# Constants

# ===========================

ORACLE_HOME="/app/oracle/product/19c/db"

EMAIL="SMXXX@karo.sam.com"

CRED_FILE="/home/oracle/admin/.dba_ro_cred"

LOG_DIR="/monito_sam/extract_output/Extracts/LOGS"

SERVICE_NAME="dbservice"

BASE_DIR="/monito_sam/extract_output/Extracts"  # Base directory for SQL folders



# ===========================

# Usage check

# ===========================

if [ "$#" -ne 1 ]; then

  echo "Usage: $0 <FOLDER_NAME>"

  echo "Example: $0 SRXXXXTEST"

  exit 1

fi



SQL_DIR="$BASE_DIR/$1"



# Validate directory exists

if [ ! -d "$SQL_DIR" ]; then

  echo "Directory $SQL_DIR does not exist!"

  exit 1

fi



# ===========================

# Set Oracle environment

# ===========================

export ORACLE_HOME

export PATH=$ORACLE_HOME/bin:$PATH

cd "$SQL_DIR"



# ===========================

# Load credentials

# ===========================

if [ ! -f "$CRED_FILE" ]; then

  echo "Credential file not found: $CRED_FILE"

  exit 1

fi
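
# Expected key=value layout of the credential file (illustrative values; the
# actual entries may differ):
#   username=dba_ro_user
#   password=*****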



USERNAME=$(grep -i '^username=' "$CRED_FILE" | cut -d'=' -f2)

PASSWORD=$(grep -i '^password=' "$CRED_FILE" | cut -d'=' -f2)



if [ -z "$USERNAME" ] || [ -z "$PASSWORD" ]; then

  echo "Invalid credentials in $CRED_FILE"

  exit 1

fi



# ===========================

# Create log file

# ===========================

LOG_FILE="$LOG_DIR/sql_run_$(date +%Y%m%d_%H%M%S).log"

echo "Starting SQL execution at $(date)" > "$LOG_FILE"



# ===========================

# Execute all .sql files

# ===========================

SQL_SCRIPTS=("$SQL_DIR"/*.sql)

ERROR_OCCURRED=0



for SCRIPT in "${SQL_SCRIPTS[@]}"; do

  if [ -f "$SCRIPT" ]; then

    echo "Executing $SCRIPT..." | tee -a "$LOG_FILE"

    sqlplus -s "${USERNAME}/${PASSWORD}@${SERVICE_NAME}" <<EOF >> "$LOG_FILE" 2>&1

WHENEVER SQLERROR EXIT SQL.SQLCODE

@${SCRIPT}

EXIT;

EOF



    if [ $? -ne 0 ]; then

      echo "Error executing $SCRIPT. Check log: $LOG_FILE" | tee -a "$LOG_FILE"

      ERROR_OCCURRED=1

    else

      echo "Successfully executed $SCRIPT" | tee -a "$LOG_FILE"

    fi

  else

    echo "No .sql files found in directory: $SQL_DIR" | tee -a "$LOG_FILE"

    ERROR_OCCURRED=1

    break

  fi

done

echo "Execution completed at $(date)" >> "$LOG_FILE"



# ===========================

# Send email notification

# ===========================

if [ "$ERROR_OCCURRED" -eq 1 ]; then

  SUBJECT="Oracle SQL Execution Failed on $(hostname) for $SQL_DIR"

else

  SUBJECT="Oracle SQL Execution Succeeded on $(hostname) for $SQL_DIR"

fi



mail -s "$SUBJECT" "$EMAIL" < "$LOG_FILE"



echo "Log saved to: $LOG_FILE"

echo "Notification sent to: $EMAIL"



# ===========================

# Compress new .csv files

# ===========================

if [ "$ERROR_OCCURRED" -eq 0 ]; then

  # Compress outputs only when every script succeeded

  for csv_file in "$SQL_DIR"/*.csv; do

    if [ -f "$csv_file" ]; then

      echo "Compressing $csv_file..."

      gzip -f "$csv_file"  # Force overwrite if .gz exists

      chmod 775 "${csv_file}.gz"

      echo "Compressed and set permissions for: ${csv_file}.gz"

    fi

  done

  # Set permissions for the SQL directory itself

  chmod 775 "$SQL_DIR"

fi



# ===========================

# Compress the log file

# ===========================

gzip "$LOG_FILE"


Sample run:

server*name*/orcl_monitor/extract_output/Extracts>./run_sql.sh SERREXXXX_all
Process started in background with nohup. PID: 24052030
server*name*/orcl_monitor/extract_output/Extracts>ps -ef|grep 24052030
  oracle 24052030        1   0 21:48:32  pts/0  0:00 /bin/bash ./run_sql.sh SERREXXXX_all
  oracle 12911108 24052030   0 21:48:33  pts/0  0:00 sqlplus -s
  oracle 44761554 23331340   0 21:49:24  pts/0  0:00 grep 24052030

Thursday, June 22, 2023

Find external table details along with their directories


col OWNER for a20
col TABLE_NAME for a30
col DIRECTORY_PATH for a50
set lines 250 pages 100
select a.OWNER,TABLE_NAME,DEFAULT_DIRECTORY_NAME,DIRECTORY_PATH from dba_external_tables a, dba_directories b where a.DEFAULT_DIRECTORY_NAME=b.DIRECTORY_NAME order by a.OWNER;

or

col TABLE_NAME for a30
col DIRECTORY_PATH for a50
set lines 250 pages 100
select distinct b.OWNER,DEFAULT_DIRECTORY_NAME,DIRECTORY_PATH from dba_external_tables a, dba_directories b where a.DEFAULT_DIRECTORY_NAME=b.DIRECTORY_NAME;

Friday, June 16, 2023

How to drop a user?

SQL> drop user 8900908 cascade;

drop user 8900908 cascade   
*ERROR at line 1:
ORA-01935: missing user or role name

SQL> !oerr ora 1935

01935, 00000, "missing user or role name"
// *Cause:  A user or role name was expected.
// *Action: Specify a user or role name.

SQL> drop user "8900908" cascade;

User dropped.


Normal user:

Syntax:

Drop user <username> cascade;
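Because a username that starts with a digit is not a valid identifier, it must be double-quoted exactly as it is stored in the data dictionary. To confirm the stored name before dropping:

SQL> select username from dba_users where username='8900908';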

Monday, June 12, 2023

Query to check the top 15 largest segments and segments larger than 10 GB

Top 15 largest segments:

SELECT * FROM (select SEGMENT_NAME, SEGMENT_TYPE, BYTES/1024/1024/1024 GB, TABLESPACE_NAME from dba_segments order by 3 desc ) WHERE ROWNUM <= 15;

Segments larger than 10 GB (10737418240 bytes):

set lines 300 pages 900
col owner for a20
col segment_name for a35
col segment_type for a17
select owner, segment_name, segment_type, sum(bytes)/1024/1024/1024 object_size_GB from dba_segments where bytes>10737418240 group by owner, segment_name, segment_type order by 4;

Datapump: Import runs slow during DB migration from 12c to 19c

DB details: SAMDB, DB size: 741.54 GB

Export completed in 01:46:27 with parallel=16

Import work:

  • The full import began with the default parallelism; about 200 GB of data was imported in 2 hours, but the remaining tables took far longer to load, as evidenced by the "enq: TM - contention" wait event. After more than 20 hours, the data import for "MD_OWNER"."PARTY_RELATION_UPDATES_AUDIT" and "MD_OWNER"."ATTRIBUTE_DELTA_AUDIT" had still not completed. Finally, the import failed with a fatal error.

  • A huge row-count mismatch was observed in MD_OWNER.PARTY_RELATION_UPDATES_AUDIT. As a result, PARTY_RELATION_UPDATES_AUDIT was re-imported, which ran for more than 8 hours without completing.

  • Performed a metadata-only import followed by a data-only import. During the data load, the "enq: TM - contention" wait was again observed while rows were loaded with "INSERT /*+ APPEND */" SQL. This import also ran for a long time and did not complete.


Recommendations

Option 1: 

  • Logical migration: identify the top tables with high row counts in the export log (which does not contain partitions), exclude those tables from the full import, and import them individually; see the parameter-file sketch after this list.
  • Ensure that tablespaces (including undo and temp) have sufficient space.
  • Set parallel=2 when starting the individual imports of the excluded tables.
  • Ensure that the streams pool size remains at least 1 GB.
  • Skip the constraints during the import and create them again after the import.
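A minimal sketch of this exclude-and-reimport approach as Data Pump parameter files (the directory and dump-file names are placeholders; the table names are the slow ones identified above):

# full_imp.par - full import, skipping the slow audit tables and all constraints
full=Y
directory=DATA_PUMP_DIR
dumpfile=samdb_full_%U.dmp
logfile=samdb_full_imp.log
exclude=TABLE:"IN ('PARTY_RELATION_UPDATES_AUDIT','ATTRIBUTE_DELTA_AUDIT')"
exclude=CONSTRAINT
exclude=REF_CONSTRAINT

# big_tables_imp.par - the excluded tables, imported individually
directory=DATA_PUMP_DIR
dumpfile=samdb_full_%U.dmp
logfile=samdb_bigtab_imp.log
tables=MD_OWNER.PARTY_RELATION_UPDATES_AUDIT,MD_OWNER.ATTRIBUTE_DELTA_AUDIT
parallel=2

impdp system parfile=full_imp.par
impdp system parfile=big_tables_imp.par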

Option 2: Oracle Suggested  

  • RMAN physical migration, which can be completed with the least downtime, within 8-9 hrs.

Tuesday, June 6, 2023

Oracle Database Login status with creation time and startup time

set pages 1000 lines 180
col HOST_NAME for a29
col name for a12
col DATABASE_ROLE for a20
col startup_time for a20

select name,database_role,a.INST_ID,INSTANCE_NAME,HOST_NAME,to_char(startup_time, 'HH24:MI DD-MON-YY') "Startup_Time",CREATED,LOGINS,open_mode from gv$instance a,gv$database b where a.INST_ID=b.INST_ID order by INSTANCE_NUMBER,HOST_NAME,INSTANCE_NAME,CREATED;


Help yourself ...by helping people who need...!

