Thursday, April 4, 2013

Flashback Database in Dataguard Environment (Dataguard Broker Configuration)

Follow the steps below to sync the standby database with the primary after a flashback operation on the primary. Here we use a Dataguard Broker configuration.
Step-1

Perform the flashback operation on the primary database.

SQL> select * from v$restore_point;

    SCN DATABASE_INCARNATION# GUA STORAGE_SIZE TIME                            RESTORE_POINT_TIME PRE NAME
------- --------------------- --- ------------ ------------------------------- ------------------ --- --------
4127139                     2 YES    157286400 02-APR-13 01.05.40.000000000 AM                    YES FBTESTDG
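The GUA column confirms this is a guaranteed restore point. For reference, such a restore point would typically have been created on the primary beforehand with a command along these lines (shown only as a sketch; in this example the restore point FBTESTDG already exists):

SQL> create restore point fbtestdg guarantee flashback database;

Restore point created.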

SQL> shut immediate
SQL> startup mount
SQL> flashback database to restore point fbtestdg;
Flashback complete.
SQL> alter database open resetlogs;
Database altered.
SQL> archive log list
SQL> alter system switch logfile;

System altered.
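Optionally, after the OPEN RESETLOGS you can confirm that the primary has moved to a new incarnation. A query along these lines against the standard v$database_incarnation view should show the new incarnation marked CURRENT:

SQL> select incarnation#, resetlogs_change#, status from v$database_incarnation;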

Step-2

Check the status of the standby database. It will show an error message.
DGMGRL> show configuration
Configuration - DG1
  Protection Mode: MaxPerformance
  Databases:
    ECPIX    - Primary database
    ECPIXSTB - Physical standby database
 Error: ORA-16810: multiple errors or warnings detected for the database
Fast-Start Failover: DISABLED
Configuration Status:
ERROR
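To see the individual errors behind ORA-16810, you can ask the broker for the details of the affected database, for example (the exact output varies by environment):

DGMGRL> show database ECPIXSTB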

Step-3

Flash back the standby database to an SCN two less than the primary's restore point SCN, i.e. 4127139 - 2 = 4127137.
SQL> flashback database to scn 4127137;
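Note that the standby must be mounted, with redo apply stopped, before the flashback can run. A minimal sketch of the full sequence on the standby side, assuming apply has not already been stopped by the broker, looks like this:

DGMGRL> edit database ecpixstb set state=apply-off;
SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to scn 4127137;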

Step-4

Enable redo apply for the standby database.
DGMGRL> show configuration
Configuration - DG1
  Protection Mode: MaxPerformance
  Databases:
    ECPIX    - Primary database
      Error: ORA-16778: redo transport error for one or more databases
    ECPIXSTB - Physical standby database
      Error: ORA-16766: Redo Apply is stopped
Fast-Start Failover: DISABLED
Configuration Status:
ERROR
DGMGRL> edit database ecpixstb set state=apply-on;
DGMGRL> show configuration
Configuration - DG1
 Protection Mode: MaxPerformance
 Databases:
  ECPIX    - Primary database
  ECPIXSTB - Physical standby database
 Fast-Start Failover: DISABLED
Configuration Status:
 SUCCESS
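Once the configuration reports SUCCESS, you can optionally confirm on the standby that redo is being applied again, for example with a query like this against the standard v$managed_standby view (look for the MRP0 process):

SQL> select process, status, sequence# from v$managed_standby;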


Wednesday, April 3, 2013

PRVF-7617 : Node connectivity between "SEZ08WTR-0723 : 192.168.3.***" and "SEZ08WTR-0724 : 192.168.3.***" failed TCP connectivity check failed for subnet

My Oracle 11g R2 Grid installation failed at the end of the installation on Windows 2008 R2 nodes with an Oracle Cluster Verification Utility error. I clicked the NEXT button to proceed and completed the installation, then found the errors below in installActions.log:

INFO: ERROR:

INFO: PRVF-7617 : Node connectivity between "SEZ08WTR-0723 : 192.168.3.198" and "SEZ08WTR-0724 : 192.168.3.199" failed

INFO: TCP connectivity check failed for subnet "192.168.3.0"

When I executed cluvfy with the post option, it reported the same error and failed prerequisites:

cluvfy.bat stage -post crsinst -verbose -n SEZ08WTR-0723,sez08wtr-0724

If we proceed to create the database with the above errors unresolved, it will cause problems later.

In my case the problem was with the host names: node 1's host name was in all capital letters and node 2's host name was in all small letters. But when I executed cluvfy, it took both host names in all capital letters (from the domain) and checked the prerequisites against those.
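A quick way to see the difference is to run the hostname command locally on each node and compare the case of what comes back (the names below are from my environment):

On node 1:
C:\> hostname
SEZ08WTR-0723

On node 2:
C:\> hostname
sez08wtr-0724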

Solution:

I changed my second node's host name to capital letters and tested cluvfy again; it completed successfully without any errors.
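If you only want to re-verify the connectivity piece after a change like this, cluvfy's node connectivity component check can be run on its own, for example:

cluvfy.bat comp nodecon -n SEZ08WTR-0723,SEZ08WTR-0724 -verbose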