Create 10g RAC database via command line
Oracle DBA Forums
Paul O'Malley
Hello:

I have run into an Oracle environment problem where DBCA will not connect to ASM. I have a tar open for the issue and have tried, with no luck, the various Metalink documents on the subject.

Our environment is a 2-node RAC cluster using Oracle Clusterware/RAC 10.2.0.3. I need to create a second set of instances/database within this cluster.

Can someone provide the steps required to create a RAC-enabled database in an existing cluster? I have created the ASM diskgroups but am having difficulty finding a definitive set of steps for creating the instances/database.

Thanks!

Paul
HAL9000
Paul,

"Can someone provide the steps required to create a RAC-enabled database in an existing cluster?"

It's always good to know how to add nodes manually, IMHO! Lots of RAC DBAs don't use DBCA, and you have discovered one reason!

Remember, you are just creating a new instance (same database) and genning it into your cluster.

The docs note:

"The preferred method to add nodes and instances to Oracle RAC databases is to use the Oracle cloning procedures that are described in the Oracle Universal Installer and OPatch User's Guide. "


I don't have a complete checklist, but basically it's:

1 - Create the instance
2 - run addNode.sh

I think that Steve Karam has one in his Oracle RAC book:

Here is an excerpt:

http://www.dba-oracle.com/t_oracle10g_16_c...e_instance_.htm
Steve
Paul,

Just to be sure I understand what you're trying to do, let me recap:

1. You already have a two node cluster
2. You have already set up ASM instances on both nodes (+ASM1 and +ASM2) and have created diskgroups
3. You already have a database w/ 2 instances spanning the cluster
4. You now want to create a new database with its own two instances across the same two nodes

Is that correct?

When you go into DBCA and select that you wish to create a database in the cluster, where do you run into problems? What errors do you receive? When you try to select ASM as the central storage, it should ask you for the ASM SYS password that you set when you configured ASM. Have you reached that step?
aussie_dba
I found these steps for adding a new instance to RAC, but it's untested:

http://www.pythian.com/blogs/1043/oracle-s...de-to-a-102-rac

If you cannot or don't want to use DBCA to add an instance to your RAC database, you can always do it manually. Below is the list of steps involved.

Step 1: Add the instance prefix to the oratab file

echo "ORCL:/u01/app/oracle/product/10.2.0/db_1:N">>/etc/oratab

Step 2: Create the administration directories

You can locate these directories on the other servers and create them on the new server as below:

rac-server5$ mkdir -p /u01/app/oracle/admin/ORCL/adump
rac-server5$ mkdir -p /u01/app/oracle/admin/ORCL/bdump
rac-server5$ mkdir -p /u01/app/oracle/admin/ORCL/cdump
rac-server5$ mkdir -p /u01/app/oracle/admin/ORCL/udump
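The four mkdir commands above can be collapsed into a small loop. As a sketch, ADMIN_BASE below defaults to a scratch path purely for illustration; substitute your real admin path (e.g. /u01/app/oracle/admin/ORCL) on the server:

```shell
# Sketch: create the standard 10g admin directories in one loop.
# ADMIN_BASE is an assumption for illustration -- point it at your
# real admin path (such as /u01/app/oracle/admin/ORCL) in practice.
ADMIN_BASE=${ADMIN_BASE:-/tmp/demo_admin/ORCL}
for d in adump bdump cdump udump; do
  mkdir -p "$ADMIN_BASE/$d"
done
ls "$ADMIN_BASE"
```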

Step 3: Update the tnsnames.ora files from all the nodes

The easiest way to manage the network files is to have the exact same files on each of the servers. The entries you’d want to have in the tnsnames.ora file are:

LISTENER_<server_name> for each of the servers. This alias points to the VIP end of the listener on each server, and is used in the local_listener parameter of each instance.
LISTENER_<gDbName> is an alias that points to all the listener VIP ends, and is used by the remote_listener parameter.
<gDbName> is an alias that points to all the listeners, used by clients to connect to the database.
<Instance_Name> is an alias that points to the local listener and specifies the instance_name parameter to force the connection to a specific instance.
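Putting those four entry types together, a hypothetical tnsnames.ora fragment for a new node rac-server5 might look like the following. The host names, VIP names, and port 1521 are assumptions, not taken from this thread; adjust them to your site:

```text
# Hypothetical tnsnames.ora entries (host/VIP names and port are assumptions)
LISTENER_RAC-SERVER5 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = rac-server5-vip)(PORT = 1521))

LISTENERS_ORCL =
  (ADDRESS_LIST =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-server1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-server5-vip)(PORT = 1521)))

ORCL5 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-server5-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = ORCL)
      (INSTANCE_NAME = ORCL5)))
```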

Step 4: Check that the instances register with the new node's listener

Once you’ve tested the new tnsnames.ora and pushed it to all the servers, you have to force each one of the instances to reload it and register again. The fastest way is to change the remote_listener parameter value back and forth, as below:

rac-server1$ sqlplus / as sysdba
SQL> alter system set remote_listener='ORCL' sid='*';
SQL> alter system set remote_listener='LISTENERS_ORCL' sid='*';
SQL> alter system register;

Once the tns aliases are reloaded, you can check in the new node that the instances are registered:

rac-server5$ lsnrctl status listener_rac-server5
Step 5: Add the Redo Logs for the new thread that will be used by the instance
The way to do this depends on whether you use Oracle Managed Files for the Redo Logs or if you name your files manually. To learn more about your redo logs, you can run the query below:

rac-server1$ sqlplus / as sysdba
SQL> col thread# format 99
SQL> col group# format 999
SQL> col bytes format 999,999,999,999
SQL> col member format a85
SQL> set lines 120
SQL> select l.thread#, f.group#, l.bytes, f.member
from v$logfile f, v$log l
where f.group#=l.group#(+)
order by f.group#;

Then you can create the log files for the new thread. Refer to the ALTER DATABASE documentation. Here is an example where OMF is being used:

SQL> alter database add logfile thread 5 group 13 size 500M;
SQL> alter database add logfile thread 5 group 14 size 500M;
SQL> alter database add logfile thread 5 group 15 size 500M;
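If you name your redo log files manually instead of using OMF, the equivalent statements spell out the member paths explicitly. The statements below are a sketch; the file paths are hypothetical and should be adapted to your storage layout:

```sql
-- Non-OMF variant: explicit member paths (hypothetical locations)
alter database add logfile thread 5
  group 13 ('/u05/oradata/ORCL/redo_t5_g13.log') size 500M;
alter database add logfile thread 5
  group 14 ('/u05/oradata/ORCL/redo_t5_g14.log') size 500M;
alter database add logfile thread 5
  group 15 ('/u05/oradata/ORCL/redo_t5_g15.log') size 500M;
```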

Step 6: Activate the Thread you’ve created

SQL> alter database enable public thread 5;

Step 7: Create an UNDO tablespace for the new instance

The syntax will differ depending on whether or not you use OMF. For more details, refer to the CREATE TABLESPACE documentation. Below is an example in the case of OMF:

SQL> create undo tablespace UNDOTBS5 datafile size 500M;

Step 8: Modify the instance-specific parameters in the spfile

Some parameters depend on the instance. An easy way to see what specific parameter applies to an instance is to connect to it with SQL*Plus and run the query below:

SQL> col name format a30
SQL> col value format a80
SQL> set lines 120
SQL> select name, value
from v$spparameter
where sid in (select instance_name
from v$instance);

Then you can add the parameters for the new instance. You should at least add undo_tablespace, thread, local_listener, and instance_number, as below:

alter system set instance_number=5 scope=spfile sid='ORCL5';
alter system set thread=5 scope=spfile sid='ORCL5';
alter system set undo_tablespace='UNDOTBS5' scope=spfile sid='ORCL5';
alter system set local_listener='LISTENER_RAC-SERVER5' scope=spfile sid='ORCL5';

Step 9: Create the instance init.ora file on the new server
This file should contain only the spfile parameter. You can simply copy it from another server, as below:

rac-server5$ cd $ORACLE_HOME/dbs
rac-server5$ scp rac-server1:$ORACLE_HOME/dbs/initORCL1.ora initORCL5.ora
rac-server5$ cat initORCL5.ora
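As a sketch, the resulting initORCL5.ora holds a single spfile line. The diskgroup path below is an assumption for illustration, not taken from this thread; use your actual shared spfile location:

```text
# initORCL5.ora -- one line pointing at the shared spfile
# (the +DATA path is hypothetical)
spfile='+DATA/ORCL/spfileORCL.ora'
```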

Step 10: Create or Link to a Password File on the new node

If you have the opportunity to do it with OCFS, NFS, or any other solution, share the password file across all the instances. In that case just create a link from the new node to that password file with the ln command. If you cannot, use the orapwd command to create a password file in the local ORACLE_HOME.

Step 11: Register the instance in the Clusterware

Connect to any of the nodes and register the new instance as root with the srvctl command:

rac-server5# export ORACLE_HOME=/u01/app/oracle/product/10.2.0/db_1
rac-server5# cd $ORACLE_HOME/bin
rac-server5# ./srvctl add instance -d ORCL \
-i ORCL5 -n rac-server5

Step 12: Start the new instance
Open a session to monitor the alert_ORCL5.log file with tail -f on rac-server5:

rac-server5$ cd /u01/app/oracle/admin/ORCL/bdump
rac-server5$ touch alert_ORCL5.log
rac-server5$ tail -f alert_ORCL5.log

From another session, start ORCL5:

srvctl start instance -d ORCL -i ORCL5

If you’ve done everything correctly, the instance will start. If it doesn’t, check the alert.log content, or, if there is no alert.log, try to start the instance with SQL*Plus.

Step 13: Make sure you can connect to the new instance

Verify that the new instance is registered with all the listeners and that you can connect to it from a client. This assumes that (1) if you use DNS aliases, the client knows about the new node's VIP, and (2) if your application servers are in a DMZ or there is a firewall in front of your database, you’ve updated the firewall ACLs as needed.

Adding the services to the new node
The best way to connect to a RAC is to use services. If you’ve registered those services in the Clusterware, don’t forget to modify those according to the addition of your new node. Below is an example with the OLTP service that would be preferred on all the instances:

rac-server5$ srvctl modify service -d ORCL -s OLTP \
-n -i "ORCL1,ORCL2,ORCL3,ORCL4,ORCL5"
rac-server5$ srvctl start service -d ORCL -s OLTP \
-i ORCL5

andrew kerber
I have a complete set of sql scripts for creating the database around here someplace ....
here we go:

set verify off;
set heading off;
set pagesize 1000;
set trimspool on;
connect / as SYSDBA
set echo on
spool /u01/app/oracle/admin/orcl/scripts/01_create_rac_database.log
prompt "STARTING UP IN NOMOUNT MODE"
startup nomount pfile=/u01/app/oracle/admin/orcl/pfile/initorcl_new1.ora

Create database orcl
MAXINSTANCES 5
MAXLOGHISTORY 10000
MAXLOGFILES 30
MAXLOGMEMBERS 5
MAXDATAFILES 1024
Datafile '/u05/oradata/orcl/system01.dbf' size 1024m
extent management local
undo tablespace UNDOTBS1
datafile '/u05/oradata/orcl/undotbs01.dbf' size 1024m
autoextend on next 10m maxsize 2000m
sysaux datafile '/u05/oradata/orcl/sysaux01.dbf' size 500m
autoextend on next 10m maxsize 2000m
default temporary tablespace TEMP tempfile
'/u05/oradata/orcl/temp01.tdf' size 500m
extent management local uniform size 1M
LOGFILE GROUP 1 ('/u05/oradata/orcl/orcl_t1_g01_m1.rdo',
'/u05/oramirror/orcl/orcl_t1_g01_m2.rdo') SIZE 20m ,
GROUP 2 ('/u05/oradata/orcl/orcl_t1_g02_m1.rdo',
'/u05/oramirror/orcl/orcl_t1_g02_m2.rdo') SIZE 20m ,
GROUP 3 ('/u05/oradata/orcl/orcl_t1_g03_m1.rdo',
'/u05/oramirror/orcl/orcl_t1_g03_m2.rdo') SIZE 20m ,
GROUP 4 ('/u05/oradata/orcl/orcl_t1_g04_m1.rdo',
'/u05/oramirror/orcl/orcl_t1_g04_m2.rdo') SIZE 20m ,
GROUP 5 ('/u05/oradata/orcl/orcl_t1_g05_m1.rdo',
'/u05/oramirror/orcl/orcl_t1_g05_m2.rdo') SIZE 20m
character set we8iso8859p1;

create undo tablespace UNDOTBS2 datafile
'/u05/oradata/orcl/undotbs02.dbf' size 1024m
autoextend on next 10m maxsize 2000m;

spool off


set verify off;
set heading off;
set pagesize 1000;
set trimspool on;
spool 02_run_rac_cat_scripts.log
--
connect sys/x as sysdba
--
prompt "RUNNING THE CATALOG AND CATPROC SCRIPTS AS SYS"
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
@?/rdbms/admin/catblock.sql
@?/rdbms/admin/catoctk.sql
@?/rdbms/admin/owminst.plb
@?/rdbms/admin/catparr.sql

commit;
--
prompt "RUNNING THE PUPBLD SCRIPT AS SYSTEM"
connect system/manager
@?/sqlplus/admin/pupbld
@?/sqlplus/admin/help/hlpbld.sql helpus.sql

--
spool off;



set verify off;
set heading off;
set pagesize 1000;
set trimspool on;

spool 03_add_rac_redo_thread2.out
connect / as sysdba
alter database add logfile thread 2
GROUP 6 ('/u61/oradata/orcl/orcl_t2_g06_m1.rdo',
'/u61/oramirror/orcl/orcl_t2_g06_m2.rdo') SIZE 20m,
GROUP 7 ('/u61/oradata/orcl/orcl_t2_g07_m1.rdo',
'/u61/oramirror/orcl/orcl_t2_g07_m2.rdo') SIZE 20m,
GROUP 8 ('/u61/oradata/orcl/orcl_t2_g08_m1.rdo',
'/u61/oramirror/orcl/orcl_t2_g08_m2.rdo') SIZE 20m,
GROUP 9 ('/u61/oradata/orcl/orcl_t2_g09_m1.rdo',
'/u61/oramirror/orcl/orcl_t2_g09_m2.rdo') SIZE 20m,
GROUP 10 ('/u61/oradata/orcl/orcl_t2_g10_m1.rdo',
'/u61/oramirror/orcl/orcl_t2_g10_m2.rdo') SIZE 20m;

alter database enable public thread 2;

spool off;


set verify off;
set heading off;
set pagesize 1000;
set trimspool on;
spool 6_set_archivelog_mode.log
--
connect sys/x as sysdba
--
prompt "RESTARTING THE DATABASE"
shutdown immediate
startup mount
--
prompt "PLACING DATABASE IN ARCHIVELOG MODE"
alter database archivelog;
Alter database flashback on;
alter database open;
--
spool off;



--NOTE: Replace orcl with your database name.
-- Replace /u05 with your mount point.
-- Adjust all sizes for tablespaces, etc. for the specifics
-- of the database.
--
set verify off;
set heading off;
set pagesize 1000;
set trimspool on;
spool 3_create_tablespaces.log
--
connect / as sysdba
--
prompt "CREATING THE USER TABLESPACE"
create tablespace users
datafile '/u05/oradata/orcl/users_01.dbf' size 10m
extent management local autoallocate
SEGMENT SPACE MANAGEMENT AUTO;
--
--
prompt "CREATING THE PERFSTAT TABLESPACE"
create tablespace perfstat
datafile '/u05/oradata/orcl/perfstat_01.dbf' size 500m
extent management local autoallocate
SEGMENT SPACE MANAGEMENT AUTO;

prompt "CREATING THE AUDIT TABLESPACE"
create tablespace audit_data
datafile '/u05/oradata/orcl/audit_data01.dbf' size 100m
extent management local autoallocate
SEGMENT SPACE MANAGEMENT AUTO;

create tablespace audit_idx
datafile '/u05/oradata/orcl/audit_idx01.dbf' size 100m
extent management local autoallocate
SEGMENT SPACE MANAGEMENT AUTO;


--
spool off;


initorcl1.ora:
ifile=/u01/app/oracle/admin/ORCL/pfile/ifile_orcl.ora
local_listener="(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=RAC1)(PORT=1592)))"
ORCL1.instance_name=ORCL1
ORCL1.instance_number=1
ORCL1.thread=1
ORCL1.undo_tablespace=UNDOTBS1
db_domain=HOME
service_names=orcl
global_names=false
control_files=("/u05/oradata/orcl/orcl_01.ctl",
"/u05/oramirror/orcl/orcl_02.ctl")
sga_target=402653184
sga_max_size=536870912
pga_aggregate_target=536870912
log_archive_dest_1 = "LOCATION=/u06/oradata/orcl MANDATORY REOPEN=5"
log_archive_dest_2 = "LOCATION=/u06/oradata/orcl MANDATORY REOPEN=5"
log_archive_dest_state_1 = ENABLE
log_archive_dest_state_2 = ENABLE
log_archive_format = orcl_%t_%S_%r.arc
log_buffer = 1048576
log_checkpoint_interval = 10000
db_block_size=8192
db_file_multiblock_read_count=8


###########################################
# Cluster Database
###########################################
cluster_database=true
cluster_database_instances=2
###########################################
# Cursors and Library Cache
###########################################
open_cursors=2500
dml_locks=600
open_links=50
processes=600
nls_date_format="YYYY-MM-DD"
os_authent_prefix="ops$"
resource_limit=false
query_rewrite_integrity = trusted
query_rewrite_enabled = true

O7_DICTIONARY_ACCESSIBILITY = FALSE
background_dump_dest=/u01/app/oracle/admin/orcl/bdump
core_dump_dest=/u01/app/oracle/admin/orcl/cdump
user_dump_dest=/u01/app/oracle/admin/orcl/udump
db_recovery_file_dest=/u05/oradata/orcl/flash_recovery_area
db_recovery_file_dest_size=20971520000


audit_trail=db_extended
remote_login_passwordfile=exclusive
compatible=10.2.0.2.0
undo_management=AUTO
This is a "lo-fi" version of our main content. To view the full version with more information, formatting and images, please click here.
Invision Power Board © 2001-2014 Invision Power Services, Inc.