Multitenant CDB and PDBs
Creating a new Pluggable Database should be as easy and straightforward as possible – Oracle claims that this technology can lead to big savings, for example by firing a couple of supposedly useless database administrators who manage databases as standalone beings. With 12c and the multitenant option you can consolidate databases, cut costs on database administrators who spend too much time on administration, and put sky-rocketing charts on one of the walls in your chairman's office. OK, it's time to check what multitenant is.
Introduction
The starting point for this article is presented in the table below.
Attribute | Value |
---|---|
Software |
Oracle Linux 6.5 (64 bit for x86)
Oracle Database 12.1.0.2.0 (64 bit for Linux 6.5) Installation and configuration of this software is described in this article: http://oradistrict.com/?p=11 |
Databases | Two container databases: tcdb beecdb |
glogin.sql file | Add a couple of new lines to your login script for sql*plus. Most of the time you really don't care about the time zone in your test database. set termout off alter session set nls_timestamp_tz_format = 'yyyy-mm-dd hh24:mi:ss'; alter session set nls_timestamp_format = 'yyyy-mm-dd hh24:mi:ss'; alter session set nls_date_format = 'yyyy-mm-dd hh24:mi:ss'; set linesize 92 set pagesize 250 set termout on |
Scripts used in this guide | ./mt/container121.sql – list containers (CDB + seed + PDBs) in your CDB ./sap/mysession121.sql – list details about your session by SID ./iad/databaseproperties112.sql – shows database properties (or current PDB) ./mt/cdatabaseproperties121.sql – show all PDBs properties (or current PDB) ./iad/parameter121.sql – list instance parameter active and specified in spfile values ./ts/datatempfiles121 – list of all data and temp files ./do/constraint112 – list of all constraints owned by pointed user ./do/object112.sql – list of all objects owned by pointed user ./mt/pdb12102 – shows PDBs status and logging options ./do/directory112.sql – shows database directories All scripts can be downloaded from my github project: Oradistrict Toolbox – download, unzip and use trunk directory to start menu via odstart.sql file. Enjoy! |
oracle user shell | It's good to use aliases in the shell – for example: alias sql="sqlplus / as sysdba" provides an easy way to log in internally as sysdba. You can put it in your .bash_profile or .profile file. |
Why introduce a topic which was described in detail in the Oracle Database documentation? Just to lay the groundwork for my next guides. The real goal is to show how common users and roles work and what you should be aware of.
Creating a PDB from the seed, cloning from another PDB or even from a standalone database – this is what you can find below.
Creating PDB from seed
The easiest way to create a new PDB is to use the seed pluggable database. This is a template which has been created along with your CDB.
Copying from the seed is quite easy. Start with a connection to your CDB, directly to the root container. To do that just use the old way to connect as sysdba – remember to set your environment variables first.
. oraenv
sqlplus / as sysdba
Here is an example where I'll create a pluggable database named pipboy. During PDB creation a new administrative user called maxstone will be created, with one local role granted through the new common PDB_DBA role. Quite complex. A default tablespace will be used just like USERS in a standard database – default for all non-common users inside the PDB. Convert the seed file names to PDB file names – you can check your seed database file names in v$datafile. You won't find seed datafiles in either dba_data_files or cdb_data_files – only in v$datafile when connected to cdb$root. The storage clause is similar to the old quota feature for accounts, but adjusted to PDB requirements – one limit for all PDB datafiles and one limit for shared temporary tablespace usage. The last parameter is the prefix for directory objects – you can enable this parameter only during PDB creation. Sounds interesting – I'll check this parameter in depth later.
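As a quick check before creating the PDB, the seed files mentioned above can be listed from the root container. This is a minimal sketch, relying on the fact that PDB$SEED has container ID 2 in a freshly created CDB:

```sql
-- run from cdb$root as sysdba; the seed PDB is con_id = 2
select name
  from v$datafile
 where con_id = 2;
```

The paths returned here are the source side of the file_name_convert pair used below.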
I’m going to use tcdb container database in this example.
Full syntax can be found in official documentation.
create pluggable database pipboy
  admin user maxstone identified by MxSt9281
  roles = (dba)
  default tablespace vault13
    datafile '/u02/oradata/tcdb/pipboy/vault13_01.dbf'
    size 16m autoextend on next 16m maxsize 512m
  file_name_convert = ('/u02/oradata/tcdb/pdbseed/', '/u02/oradata/tcdb/pipboy/')
  storage (maxsize 2048m max_shared_temp_size 1024m)
  path_prefix = '/u02/oradata/tcdb/pipboy/';
Use the ./mt/container121.sql script to check the current status of all containers.
start mt/container121
 CID NAME                 DbID         Size in MB Open mode  Open time              Res
---- -------------------- ------------ ---------- ---------- ---------------------- ---
   1 CDB$ROOT             918792381             0 READ WRITE 2015-04-13 10:00:28    NO
   2 PDB$SEED             4034246255          845 READ ONLY  2015-04-13 10:00:28    NO
   3 PIPBOY               3788905833            0 MOUNTED    2015-04-13 11:30:51
Escape from the toolbox menu with Ctrl+C and Enter. Open the pipboy PDB. In this case the commands presented below will produce the same result – when using all as the PDB name you start all mounted PDBs, and in our case only the pipboy PDB can be opened. Choose one of these commands to start the pipboy PDB. Read write is the default open mode for a PDB.
alter pluggable database pipboy open read write;
alter pluggable database pipboy open;
alter pluggable database all open;
Create a new terminal session as the oracle user and check the listener status. Inside the listing you can find a new service which was created in the CDB for the PDB.
lsnrctl status
Service "pipboy" has 1 instance(s). Instance "tcdb", status READY, has 1 handler(s) for this service...
Add a new entry in tnsnames.ora for this service.
vi $ORACLE_HOME/network/admin/tnsnames.ora
# tcdb - pluggable databases
PIPBOY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = odone)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = PIPBOY)
    )
  )
Connect as your PDB administrator – Max Stone.
sqlplus maxstone/MxSt9281@pipboy
start sap/mysession121
User Name    OS User        Program                    Service Name Machine
------------ -------------- -------------------------- ------------ ----------------
MAXSTONE     oracle         sqlplus@odone (TNS V1-V3)  pipboy       odone

Container Name
------------------------------------------------
PIPBOY
You can see that you are using the pipboy service and that you are connected as the maxstone local user in the container with ID = 3.
You can check your current container name and ID in sql*plus with the show command.
show con_name
CON_NAME
------------------------------
PIPBOY
show con_id
CON_ID
------------------------------
3
Check the same information for your cdb$root internal connection (switch to the first ssh session).
start sap/mysession121
User Name    OS User        Program                    Service Name Machine
------------ -------------- -------------------------- ------------ ----------------
SYS          oracle         sqlplus@odone (TNS V1-V3)  SYS$USERS    odone

Container Name
------------------------------------------------
CDB$ROOT
show con_name
CON_NAME
------------------------------
CDB$ROOT
show con_id
CON_ID
------------------------------
1
If you've connected to the database using an internal connection then your service name is 'SYS$USERS'. Try to connect as sysdba through the listener and check the mysession.sql script result:
exit
sqlplus sys@tcdb as sysdba
start sap/mysession121
User Name    OS User        Program                    Service Name Machine
------------ -------------- -------------------------- ------------ ----------------
SYS          oracle         sqlplus@odone (TNS V1-V3)  tcdb         odone

Container Name
------------------------------------------------
CDB$ROOT
You can see a different service name – with Resource Manager you can get even more detailed information about your services. Maybe one day I'll return to this topic and check how Resource Manager can control services in the Multitenant architecture.
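If you prefer not to run the toolbox script, the service and container of the current session can be read directly with sys_context, which is the standard mechanism here:

```sql
-- service name and container name of the current session
select sys_context('userenv', 'service_name') as service_name,
       sys_context('userenv', 'con_name')     as container
  from dual;
```

Run from the listener connection this returns tcdb / CDB$ROOT; from the internal connection the service name comes back as SYS$USERS.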
Going back to my maxstone user session, I would like to check what my limits for this PDB are, along with other settings. Use the ./mt/cdatabaseproperties121.sql script.
start mt/cdatabaseproperties121
Database properties for a PDB: PIPBOY

Property name                Property value           Property description
---------------------------- ------------------------ ------------------------------------------
DBTIMEZONE                   +02:00                   DB time zone
DEFAULT_EDITION              ORA$BASE                 Name of the database default edition
DEFAULT_PERMANENT_TABLESPACE VAULT13                  Name of default permanent tablespace
DEFAULT_TBS_TYPE             SMALLFILE                Default tablespace type
DEFAULT_TEMP_TABLESPACE      TEMP                     Name of default temporary tablespace
DICT.BASE                    2                        dictionary base tables version #
DST_PRIMARY_TT_VERSION       18                       Version of primary timezone data file
DST_SECONDARY_TT_VERSION     0                        Version of secondary timezone data file
DST_UPGRADE_STATE            NONE                     State of Day Light Saving Time Upgrade
EXPORT_VIEWS_VERSION         8                        Export views revision #
Flashback Timestamp TimeZone GMT                      Flashback timestamp created in GMT
GLOBAL_DB_NAME               PIPBOY                   Global database name
MAX_PDB_STORAGE              2048 MB                  Maximum Space Usage of Datafiles and Local Tempfiles in Container
MAX_SHARED_TEMP_SIZE         1024 MB                  Maximum Space Usage in Shared Temporary Tablespace
MAX_STRING_SIZE              STANDARD                 MAX_STRING_SIZE paramter used for dictionary metadata
NLS_CALENDAR                 GREGORIAN                Calendar system
NLS_CHARACTERSET             AL32UTF8                 Character set
NLS_COMP                     BINARY                   NLS comparison
NLS_CURRENCY                 $                        Local currency
NLS_DATE_FORMAT              DD-MON-RR                Date format
NLS_DATE_LANGUAGE            AMERICAN                 Date language
NLS_DUAL_CURRENCY            $                        Dual currency symbol
NLS_ISO_CURRENCY             AMERICA                  ISO currency
NLS_LANGUAGE                 AMERICAN                 Language
NLS_LENGTH_SEMANTICS         BYTE                     NLS length semantics
NLS_NCHAR_CHARACTERSET       AL16UTF16                NCHAR Character set
NLS_NCHAR_CONV_EXCP          FALSE                    NLS conversion exception
NLS_NUMERIC_CHARACTERS       .,                       Numeric characters
NLS_RDBMS_VERSION            12.1.0.2.0               RDBMS version for NLS parameters
NLS_SORT                     BINARY                   Linguistic definition
NLS_TERRITORY                AMERICA                  Territory
NLS_TIMESTAMP_FORMAT         DD-MON-RR HH.MI.SSXFF AM Time stamp format
NLS_TIMESTAMP_TZ_FORMAT      DD-MON-RR HH.MI.SSXFF AM TZR  Timestamp with timezone format
NLS_TIME_FORMAT              HH.MI.SSXFF AM           Time format
NLS_TIME_TZ_FORMAT           HH.MI.SSXFF AM TZR      Time with timezone format
NO_USERID_VERIFIER_SALT      5A9235A582ACE9E5EEA30CEAD2E1B6AC
OLS_OID_STATUS               0                        OLS OID Status used for Label Security
PATH_PREFIX                  /u02/oradata/tcdb/pipboy All paths for objects such as directories are relative to this
TDE_MASTER_KEY_ID
WORKLOAD_CAPTURE_MODE                                 CAPTURE implies workload capture is in progress
WORKLOAD_REPLAY_MODE                                  PREPARE implies external replay clients can connect; REPLAY implies workload replay is in progress
Initialization parameters for a PDB are inherited from the cdb$root container. You can stop this behaviour by changing a parameter value with the alter system command at the PDB session level. If you use the cdb_properties view in a PDB you'll get exactly the same data which can be reached through the database_properties view. The difference is the container ID column, which is added to all CDB_ views.
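For example, the CON_ID column is what distinguishes the CDB_ view when you compare a single property across containers from cdb$root. A small sketch, using a property that appeared in the listing above:

```sql
-- from cdb$root: cdb_properties is database_properties plus con_id
select con_id, property_name, property_value
  from cdb_properties
 where property_name = 'MAX_PDB_STORAGE'
 order by con_id;
```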
Leaving the properties area, let's move on to the initialization parameter file and the server parameter file. There is no file associated with a PDB in the $ORACLE_HOME/dbs directory. As a PDB uses the data dictionary to store all information about itself, there is no need for another spfile. Oracle lets a PDB inherit parameters from the root container and allows some of the parameters to be changed. You can find out which ones can be changed by checking the ISPDB_MODIFIABLE column of the v$system_parameter view.
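A quick way to list which parameters a PDB may override locally is to filter on that column:

```sql
-- parameters that can be set independently inside a PDB
select name
  from v$system_parameter
 where ispdb_modifiable = 'TRUE'
 order by name;
```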
As this behaviour is quite important to master, we'll take a closer look using the open_cursors parameter.
First check the current value for the root container and the pipboy PDB. From the cdb$root container, connected as sysdba, execute the script ./iad/parameter121.sql.
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
GLOBAL (conid=0) open_cursors                   300                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    300                              T
The open_cursors parameter has been set to a value of 300 by DBCA. The default value for this parameter is 50.
Open a new PuTTY session and connect directly to pipboy as the common user sys.
sqlplus sys/manager@pipboy as sysdba
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
GLOBAL (conid=0) open_cursors                   300                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   0                                     F
The ordinal column in v$spparameter differs between containers because the root container uses the spfile to start, while pipboy has no ability to read the spfile during startup (PDBs use data dictionary information only). This column is more useful when you need to find out the correct order of strings within one parameter, for instance when checking the control_files parameter with more than one copy. The other data is the same within the presented scope.
Change the open_cursors value in the pipboy PDB to 456 – use the session in the pipboy PDB.
alter system set open_cursors = 456;
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
PIPBOY           open_cursors                   456                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    456                              T
The value of the open_cursors parameter has been changed for the pipboy PDB. From this point of view spparameter presents a "global" value. This is true only for the pipboy PDB and its "data dictionary server parameter file". Check what we can see from the root container's point of view. Switch to the cdb$root session.
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
PIPBOY           open_cursors                   456                              F   T
GLOBAL (conid=0) open_cursors                   300                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    300                              T
From the root container's point of view we can see that someone has changed the value of pipboy's open_cursors parameter, but in the server parameter file we still have the value 300. The same goes for the active value in cdb$root. Note that con_id and con_name need to be considered as a point of view. Another thing is that we've broken the inheritance behaviour – from pipboy's point of view the parameter is no longer global (inherited) but specified within the PDB.
How do we revert to the standard state where the value of the parameter is inherited? We can reset this value from the PDB side, or use container = all and set a new value for all containers. To get back to the default value we would need to reset the parameter on the PDB side and reset it in the spfile. Let's check setting the parameter everywhere at once. From the root container use the command with the container clause.
alter system set open_cursors = 787 container = all;
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
GLOBAL (conid=0) open_cursors                   787                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    787                              T
Everywhere we've got the same value. This explains why we cannot see a separate row for the pipboy PDB.
Execute the same script from PDB session.
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
GLOBAL (conid=0) open_cursors                   787                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    456                              T
The data dictionary information has not been deleted yet for the pipboy PDB. After a pluggable database restart everything returns to the normal state and no information is left in the data dictionary cache.
Restart the pluggable database from the root container session.
alter pluggable database pipboy close immediate;
alter pluggable database pipboy open read write;
Exit from the pipboy sys session and connect again.
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
GLOBAL (conid=0) open_cursors                   787                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   0                                     F
The parameter has been reset to the old value. The same could be achieved by resetting the parameter from the pipboy PDB side. If you change the open_cursors parameter value to 300 from the root container, it will be inherited by the pipboy PDB too, as this PDB is inheriting the value again.
From the pipboy sys user execute the commands.
alter system set open_cursors = 495;
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?
IsS  - is specified in spfile?
Add %% if needed for parameter name. Search is case insensitive.
Input system parameter name for like search: open_c%

1 - System parameter for LIKE with argument open_c% (active instance parameters)

Container Name   Parameter Name                 Parameter Value                  IsD IsPM
---------------- ------------------------------ -------------------------------- --- ----
PIPBOY           open_cursors                   495                              F   T

2 - Server Parameter File information for LIKE with argument open_c%

Container Name   Parameter Name                 Ord  Parameter Value                  IsS
---------------- ------------------------------ ---- -------------------------------- ---
GLOBAL (conid=0) open_cursors                   1    495                              T
alter system reset open_cursors;
start iad/parameter121
Key for column names
IsD  - is default value?
IsPM - is modifiable in pluggable database?

1 - System parameter for LIKE with argument 'open_cursors' (active instance parameters)

Parameter Name                 Parameter Value                  IsD IsPM Container Name
------------------------------ -------------------------------- --- ---- ----------------
open_cursors                   495                              F   T    PIPBOY

2 - Server Parameter File information for LIKE with argument 'open_cursors'

Parameter Name                 Parameter Value                  IsS Ord  Container Name
------------------------------ -------------------------------- --- ---- ----------------
open_cursors                                                    F   0    GLOBAL (conid=0)
From this point we could set the parameter value from the root container for all PDBs, or only dynamically apply a value for the pipboy PDB. A restart of the pipboy PDB will clear the current in-memory value and apply the inherited value from the cdb$root container. One thing to remember: do not forget about the inheritance of PDB parameters from the global configuration. To clean up the situation I'm going to set the value for all containers and return to the old behaviour with inheritance. In the root container:
alter system set open_cursors = 300 container = all;
Another thing I wanted to check was the path prefix set at PDB creation time. We cannot change it later; we can set it only during PDB creation. According to the documentation my path /u02/oradata/tcdb/pipboy/ is now the prefix for all database directories. I'll create a simple directory as the pipboy administrator. We need a quota on maxstone's default tablespace to create the expdp master table, which will be removed after the export job is done.
The value of this property can be checked in the database_properties view (presented above).
From the maxstone user session execute these statements (you can re-login in the second session or create a new ssh session):
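A minimal check, run from inside the PDB:

```sql
-- current path prefix of this PDB
select property_value
  from database_properties
 where property_name = 'PATH_PREFIX';
```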
alter user maxstone quota unlimited on vault13;
create directory maindir as 'maindir';
start do/directory112.sql
Owner            Directory name       Directory path
---------------- -------------------- ----------------------------------------------------
SYS              DATA_PUMP_DIR        /u01/app/oracle/admin/tcdb/dpdump/
SYS              MAINDIR              /u02/oradata/tcdb/pipboy/maindir
SYS              OPATCH_INST_DIR      /u01/app/oracle/product/12.1.0/dbhome_1/OPatch
SYS              OPATCH_LOG_DIR       /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS              OPATCH_SCRIPT_DIR    /u01/app/oracle/product/12.1.0/dbhome_1/QOpatch
SYS              ORACLE_BASE          /
SYS              ORACLE_HOME          /
SYS              XSDDIR               /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/xml/schema
To test this directory we will use expdp from the command line. Switch to an ssh session (or create one), create a subdirectory and change your working directory to the newly created one.
mkdir /u02/oradata/tcdb/pipboy/maindir
cd /u02/oradata/tcdb/pipboy/maindir
expdp userid=maxstone/MxSt9281@pipboy dumpfile=d1.dmp logfile=d1.log directory=maindir content=metadata_only schemas=maxstone
For security reasons you should never put a password on the command line – always use a parameter file or enter it interactively when prompted. Returning to the main topic – we can see that new files have been created in the selected directory:
pwd
/u02/oradata/tcdb/pipboy/maindir
ls -l
total 252
-rw-r----- 1 oracle oinstall 249856 Apr 14 15:29 d1.dmp
-rw-r--r-- 1 oracle oinstall   1536 Apr 14 15:29 d1.log
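As mentioned above, the password should not appear on the command line. A minimal sketch of the safer variant, assuming a parameter file named d1.par (my own naming, not from the original run): everything except the credentials goes into the file, and expdp prompts for the password.

```shell
# build a Data Pump parameter file so the password never shows up in `ps` output
cat > d1.par <<'EOF'
dumpfile=d1.dmp
logfile=d1.log
directory=maindir
content=metadata_only
schemas=maxstone
EOF
```

Then run expdp maxstone@pipboy parfile=d1.par and type the password when prompted.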
I'll try to create a directory object with an absolute path, using the session with the maxstone user logged in.
create directory absdir as '/u02/oradata/tcdb';
expdp userid=maxstone/MxSt9281@pipboy dumpfile=d1.dmp logfile=d1.log directory=absdir content=metadata_only schemas=maxstone
And all looks good. Files were created in the /u02/oradata/tcdb directory. From the Oracle Database SQL Language Reference, under the create pluggable database syntax:
path_prefix_clause Use this clause to ensure that file paths for directory objects associated with the PDB are restricted to the specified directory or its subdirectories.
For me this works only for directories with non-absolute paths. Omit the leading / in your path and your directory will be created inside the pointed directory. My theory was confirmed when I found another statement about this parameter in the Administrator's Guide, in table "38-2 Clauses for Creating a PDB From the Seed", with a question/answer list reviewing all available parameters:
Do you want to use a PATH_PREFIX clause to ensure that all relative directory object paths associated with the PDB are treated as relative to the specified directory or its subdirectories?
So this is for relative paths only. Cleaning up my playground from the maxstone user:
drop directory absdir;
drop directory maindir;
And remove the operating system dump and log files d1.dmp/d1.log, including the maindir subdirectory.
Creating PDB from another PDB
We're going to use the pipboy PDB to create a clone named special. Below you can find the sql command which will be our starting point for other examples. To introduce the PDB parameter pdb_file_name_convert I'm going to omit the file_name_convert clause in the create pluggable database statement. If your pipboy PDB is open read write, close it and open it read only – this is required to use a PDB as a source for cloning. As this is a sql statement, I'm going to create the target directories before issuing the commands. From an ssh session:
mkdir -p /u02/oradata/tcdb/special/default_dpdir
And sql part:
alter session set pdb_file_name_convert = '/u02/oradata/tcdb/pipboy','/u02/oradata/tcdb/special';
alter pluggable database pipboy close;
alter pluggable database pipboy open read only;
create pluggable database special from pipboy
  storage (maxsize 2g max_shared_temp_size 2g)
  path_prefix = '/u02/oradata/tcdb/special/default_dpdir'
  user_tablespaces = all
  standbys = all
  logging
  create_file_dest = '/u02/oradata/tcdb/special';
To bring both databases online, issue the command:
alter pluggable database all open read write;
You’ll receive an error:
ORA-65019: pluggable database PIPBOY already open
The PDB special has been opened from the mount state and this is normal. The pipboy PDB is in read only mode and we can change its state using the force keyword only.
alter pluggable database pipboy open read write force;
Another way to open it read write would be closing and reopening the PDB. Both options are correct and do not harm your database – even more, sessions currently connected to the read only database will be maintained and can continue after the command with the force clause. To demonstrate changing the PDB state, first change its state from the cdb$root container:
alter pluggable database pipboy open read only force;
Create a new ssh session and connect with sql*plus as the maxstone user to the pipboy PDB:
sqlplus maxstone/MxSt9281@pipboy
Check your session state and SID:
start sap/mysession121
User Name    OS User        Program                    CID  Service Name Machine
------------ -------------- -------------------------- ---- ------------ ----------------
MAXSTONE     oracle         sqlplus@odone (TNS V1-V3)  3    pipboy       odone
select userenv('sid') from dual;
USERENV('SID')
--------------
           414
select open_mode from v$database;
OPEN_MODE
--------------------
READ ONLY
From the root container session:
alter pluggable database pipboy open read write force;
And again check the maxstone session.
select open_mode from v$database;
OPEN_MODE
--------------------
READ WRITE
select userenv('sid') from dual;
USERENV('SID')
--------------
           414
You can check the logon time too, just to be sure it's the same session. As you can see, PDBs give some flexibility – you can switch to read only for the cloning time and back to read write afterwards, and user sessions will remain intact. You do need to watch out for the migrate state, as it is not allowed to use the force keyword to go to or from that state.
Just to check the state of our new PDB cloned from the pipboy PDB, check it with a couple of scripts from the root container:
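From the root container you can also verify the resulting open modes at any time without the toolbox scripts:

```sql
-- from cdb$root: open mode of every PDB
select con_id, name, open_mode
  from v$pdbs
 order by con_id;
```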
start mt/container121
Key for column names
CID - Container ID

 CID NAME                 DbID         Size in MB Open mode  Open time              Res
---- -------------------- ------------ ---------- ---------- ---------------------- ---
   1 CDB$ROOT             918792381             0 READ WRITE 2015-04-19 09:08:10    NO
   2 PDB$SEED             4034246255          845 READ ONLY  2015-04-19 09:08:10    NO
   3 PIPBOY               3788905833          901 READ WRITE 2015-04-19 09:14:27    NO
   4 SPECIAL              3986562878          901 READ WRITE 2015-04-19 09:16:20    NO
start ts/datatempfiles121
~~ data files ~~
Tb Name      Tb Status Fl ID  Fl Name                                                                                               Fl Status Cont. Name
------------ --------- ------ ----------------------------------------------------------------------------------------------------- --------- ----------
SYSAUX       ONLINE         3 /u02/oradata/tcdb/sysaux01.dbf                                                                        AVAILABLE CDB$ROOT
SYSTEM       ONLINE         1 /u02/oradata/tcdb/system01.dbf                                                                        AVAILABLE CDB$ROOT
UNDOTBS1     ONLINE         5 /u02/oradata/tcdb/undotbs01.dbf                                                                       AVAILABLE CDB$ROOT
USERS        ONLINE         6 /u02/oradata/tcdb/users01.dbf                                                                         AVAILABLE CDB$ROOT
SYSAUX       ONLINE        42 /u02/oradata/tcdb/pipboy/sysaux01.dbf                                                                 AVAILABLE PIPBOY
SYSTEM       ONLINE        41 /u02/oradata/tcdb/pipboy/system01.dbf                                                                 AVAILABLE PIPBOY
VAULT13      ONLINE        43 /u02/oradata/tcdb/pipboy/vault13_01.dbf                                                               AVAILABLE PIPBOY
SYSAUX       ONLINE        45 /u02/oradata/tcdb/special/TCDB/140F918094458086BE0530C38A8C0632D/datafile/o1_mf_sysaux_bm6ohqh4_.dbf  AVAILABLE SPECIAL
SYSTEM       ONLINE        44 /u02/oradata/tcdb/special/TCDB/140F918094458086BE0530C38A8C0632D/datafile/o1_mf_system_bm6ohqgv_.dbf  AVAILABLE SPECIAL
VAULT13      ONLINE        46 /u02/oradata/tcdb/special/TCDB/140F918094458086BE0530C38A8C0632D/datafile/o1_mf_vault13_bm6ohqh6_.dbf AVAILABLE SPECIAL

10 rows selected.

~~ temp files ~~
Tb Name      Tb Status Fl ID  Fl Name                                                                                               Fl Status Cont. Name
------------ --------- ------ ----------------------------------------------------------------------------------------------------- --------- ----------
TEMP         ONLINE         1 /u02/oradata/tcdb/temp01.dbf                                                                          ONLINE    CDB$ROOT
TEMP         ONLINE         3 /u02/oradata/tcdb/pipboy/temp01.dbf                                                                   ONLINE    PIPBOY
TEMP         ONLINE         4 /u02/oradata/tcdb/special/TCDB/140F918094458086BE0530C38A8C0632D/datafile/o1_mf_temp_bm6ohqh6_.dbf    ONLINE    SPECIAL
I've set up both pdb_file_name_convert as a session parameter and the OMF clause create_file_dest in the create pluggable database statement. This forced Oracle to use OMF, as it almost always takes precedence over other parameters. To avoid OMF I'm going to drop this PDB and create it again using the pdb_file_name_convert session parameter only. From the root container session – clean up first:
alter pluggable database special close;
drop pluggable database special including datafiles;
And create the special PDB once more:
alter session set pdb_file_name_convert = '/u02/oradata/tcdb/pipboy','/u02/oradata/tcdb/special';
alter pluggable database pipboy open read only force;
create pluggable database special from pipboy
  storage (maxsize 2g max_shared_temp_size 2g)
  path_prefix = '/u02/oradata/tcdb/special/default_dpdir'
  user_tablespaces = all
  standbys = all
  logging
  create_file_dest = none;
In my case the above command returned an error:
ORA-65005: missing or invalid file name pattern for file - /u02/oradata/tcdb/pipboy/system01.dbf
Unfortunately create_file_dest = none did not force Oracle Database to drop the relation with OMF and something still went wrong. Corrected command:
create pluggable database special from pipboy
  storage (maxsize 2g max_shared_temp_size 2g)
  path_prefix = '/u02/oradata/tcdb/special/default_dpdir'
  user_tablespaces = all
  standbys = all
  logging;
Just remove the create_file_dest clause from the command and all will be good.
alter pluggable database pipboy open read write force;
alter pluggable database special open read write;
start ts/datatempfiles121
~~ data files ~~
Tb Name      Tb Status Fl ID  Fl Name                                  Fl Status Cont. Name
------------ --------- ------ ---------------------------------------- --------- ----------
SYSAUX       ONLINE         3 /u02/oradata/tcdb/sysaux01.dbf           AVAILABLE CDB$ROOT
SYSTEM       ONLINE         1 /u02/oradata/tcdb/system01.dbf           AVAILABLE CDB$ROOT
UNDOTBS1     ONLINE         5 /u02/oradata/tcdb/undotbs01.dbf          AVAILABLE CDB$ROOT
USERS        ONLINE         6 /u02/oradata/tcdb/users01.dbf            AVAILABLE CDB$ROOT
SYSAUX       ONLINE        42 /u02/oradata/tcdb/pipboy/sysaux01.dbf    AVAILABLE PIPBOY
SYSTEM       ONLINE        41 /u02/oradata/tcdb/pipboy/system01.dbf    AVAILABLE PIPBOY
VAULT13      ONLINE        43 /u02/oradata/tcdb/pipboy/vault13_01.dbf  AVAILABLE PIPBOY
SYSAUX       ONLINE        48 /u02/oradata/tcdb/special/sysaux01.dbf   AVAILABLE SPECIAL
SYSTEM       ONLINE        47 /u02/oradata/tcdb/special/system01.dbf   AVAILABLE SPECIAL
VAULT13      ONLINE        49 /u02/oradata/tcdb/special/vault13_01.dbf AVAILABLE SPECIAL

10 rows selected.

~~ temp files ~~
Tb Name      Tb Status Fl ID  Fl Name                                  Fl Status Cont. Name
------------ --------- ------ ---------------------------------------- --------- ----------
TEMP         ONLINE         1 /u02/oradata/tcdb/temp01.dbf             ONLINE    CDB$ROOT
TEMP         ONLINE         3 /u02/oradata/tcdb/pipboy/temp01.dbf      ONLINE    PIPBOY
TEMP         ONLINE         4 /u02/oradata/tcdb/special/temp01.dbf     ONLINE    SPECIAL
This time the file names are much easier to read.
Creating PDB from non-cdb
I've created a simple non-CDB database named molerat. Before the conversion I'll create some database objects there, which should reappear as PDB objects afterwards. All database files live in one directory: /u02/oradata/molerat/. Connected as sys, I'm starting with two users, a couple of tablespaces, tables and some very simple data.
create tablespace megaton_data
  datafile '/u02/oradata/molerat/megaton_data_01.dbf'
  size 16m autoextend on next 8m maxsize 128m;

create tablespace megaton_indx
  datafile '/u02/oradata/molerat/megaton_indx_01.dbf'
  size 16m autoextend on next 8m maxsize 128m;

create temporary tablespace megaton_temp
  tempfile '/u02/oradata/molerat/megaton_temp_01.dbf'
  size 8m autoextend on next 8m maxsize 256m;

create user agnes identified by agnes
  default tablespace megaton_data
  temporary tablespace megaton_temp
  quota unlimited on megaton_data
  quota unlimited on megaton_indx
  account unlock;

create role megaton_citizen;

grant create session, create table, create sequence, create synonym, create view
  to megaton_citizen;

grant megaton_citizen to agnes;

create user lewis identified by lewis
  default tablespace megaton_data
  temporary tablespace megaton_temp
  quota unlimited on megaton_data
  quota unlimited on megaton_indx
  account unlock;

grant megaton_citizen to lewis;

connect agnes/agnes

create table room (
  id    number(10),
  name  varchar2(128),
  floor number(2)
) tablespace megaton_data;

create unique index room_uidx1 on room(id) tablespace megaton_indx;

alter table room add constraint room_pk
  primary key (id) using index room_uidx1;

create table citizen (
  id        number(16),
  firstname varchar2(32),
  surename  varchar2(64),
  room_id   number(10)
) tablespace megaton_data;

create unique index citizen_uidx1 on citizen(id) tablespace megaton_indx;

alter table citizen add constraint citizen_pk
  primary key (id) using index citizen_uidx1;

alter table citizen add constraint citizen_room_fk
  foreign key (room_id) references room(id);

create sequence room_seq start with 1 increment by 1 cache 100;
create sequence citizen_seq start with 1 increment by 1 cache 100;

declare
  v_room_id room.id%type;
begin
  v_room_id := room_seq.nextval;
  insert into room values (v_room_id, 'summer heat', 1);
  insert into citizen values (citizen_seq.nextval, 'Jimmy', 'James', v_room_id);
  insert into citizen values (citizen_seq.nextval, 'Shannon', 'Shawn', v_room_id);
  v_room_id := room_seq.nextval;
  insert into room values (v_room_id, 'dirty water', 3);
  insert into citizen values (citizen_seq.nextval, 'Danny', 'Dean', v_room_id);
  insert into citizen values (citizen_seq.nextval, 'Mickey', 'Mick', v_room_id);
end;
/

commit;

grant select, references on room to lewis;

connect lewis/lewis

create table gun (
  id      number(8),
  name    varchar2(32),
  room_id number(10)
) tablespace megaton_data;

create unique index gun_uidx1 on gun(id) tablespace megaton_indx;

alter table gun add constraint gun_pk
  primary key (id) using index gun_uidx1;

alter table gun add constraint gun_room_fk
  foreign key (room_id) references agnes.room(id);

create sequence gun_seq start with 1 increment by 1 cache 100;

insert into gun values (gun_seq.nextval, 'big gun', (select min(id) from agnes.room));
insert into gun values (gun_seq.nextval, 'tiny gun', (select max(id) from agnes.room));
commit;

connect / as sysdba
This is our simple application; we'll check whether it still exists in the PDB after conversion. To see those objects we need a couple of simple queries with very predictable results. Run a few scripts for validation. As sysdba execute:
start do/constraint112
Key for column names
CT - Constraint type:
..C - Check constraint on a table
..P - Primary key
..U - Unique key
..R - Referential integrity
..V - With check option, on a view
..O - With read only, on a view
..H - Hash expression
..F - Constraint that involves a REF column
..S - Supplemental logging

Add %% if needed for owner name. Search is case insensitive.
Input constraints owner: LEWIS

Constraints with owner LIKE LEWIS

Owner            Table name           Constraint name  CT Refer. obj.  Ref. constr. nam
---------------- -------------------- ---------------- -- ------------ ----------------
LEWIS            GUN                  GUN_PK           P
LEWIS            GUN                  GUN_ROOM_FK      R  AGNES        ROOM_PK
start do/constraint112
Input constraints owner: agnes

Constraints with owner LIKE agnes

Owner            Table name           Constraint name  CT Refer. obj.  Ref. constr. nam
---------------- -------------------- ---------------- -- ------------ ----------------
AGNES            CITIZEN              CITIZEN_PK       P
AGNES            CITIZEN              CITIZEN_ROOM_FK  R  AGNES        ROOM_PK
AGNES            ROOM                 ROOM_PK          P
All objects in schemas can be listed through another script.
start do/object112
Add %% if needed for owner name. Search is case insensitive.
Input objects owner: lewis

Objects with owner LIKE lewis

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                GUN_UIDX1                      2015-04-19 10:54:58 VALID
SEQUENCE             GUN_SEQ                        2015-04-19 10:54:58 VALID
TABLE                GUN                            2015-04-19 10:54:58 VALID
start do/object112
Add %% if needed for owner name. Search is case insensitive.
Input objects owner: agnes

Objects with owner LIKE agnes

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                CITIZEN_UIDX1                  2015-04-19 10:54:57 VALID
INDEX                ROOM_UIDX1                     2015-04-19 10:54:57 VALID
SEQUENCE             CITIZEN_SEQ                    2015-04-19 10:54:58 VALID
SEQUENCE             ROOM_SEQ                       2015-04-19 10:54:58 VALID
TABLE                CITIZEN                        2015-04-19 10:54:57 VALID
TABLE                ROOM                           2015-04-19 10:54:57 VALID
And to see segment allocation in the tablespaces, a simple query against dba_segments:
col segment_name for a32
col tablespace_name for a32

select segment_name, tablespace_name, bytes
  from dba_segments
 where owner in ('AGNES','LEWIS');
SEGMENT_NAME                     TABLESPACE_NAME                       BYTES
-------------------------------- -------------------------------- ----------
GUN                              MEGATON_DATA                          65536
CITIZEN                          MEGATON_DATA                          65536
ROOM                             MEGATON_DATA                          65536
GUN_UIDX1                        MEGATON_INDX                          65536
CITIZEN_UIDX1                    MEGATON_INDX                          65536
ROOM_UIDX1                       MEGATON_INDX                          65536
All is ready for our test. We'll try to clone the non-CDB into tcdb as a PDB. From the root container, create a database link to the molerat database. The documentation does not spell out which user to connect as, so I assume the system user. I've created a tnsnames entry for my molerat database under the same name to make the database link definition simple:
# molerat - non-CDB for cloning to tcdb
MOLERAT =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = odone)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = molerat)
    )
  )
Then create the database link and a directory for the molerat data files in the tcdb database:
create database link molerat connect to system identified by manager using 'molerat';

!mkdir /u02/oradata/tcdb/molerat/
Issue the cloning command for the non-CDB, using the from clause of create pluggable database.
create pluggable database molerat_clone from non$cdb@molerat
  storage unlimited
  file_name_convert = ('/u02/oradata/molerat/','/u02/oradata/tcdb/molerat/')
  user_tablespaces = all
  logging
  create_file_dest = none;
The cloned database is now in mounted state. Note that the documentation requires the source database to be open read only. We skipped that step and got no errors; still, in a production environment you should always put the source non-CDB into read only mode before cloning. The create_file_dest clause works fine here, and the file names are not OMF-generated — only the converted paths were applied. The next step is to convert this non-CDB into a PDB using a dedicated script. First, check the current status of your PDBs.
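For reference, a sketch of the read-only step the documentation asks for, run on the source non-CDB (molerat) before issuing the clone command:

```sql
-- On molerat (the source non-CDB): restart it read only before cloning.
shutdown immediate
startup mount
alter database open read only;
```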
start mt/container121
Key for column names
CID - Container ID

 CID NAME                 DbID         Size in MB Open mode  Open time              Res
---- -------------------- ------------ ---------- ---------- ---------------------- ---
   1 CDB$ROOT             918792381             0 READ WRITE 2015-04-19 09:08:10    NO
   2 PDB$SEED             4034246255          845 READ ONLY  2015-04-19 09:08:10    NO
   3 PIPBOY               3788905833          901 READ WRITE 2015-04-19 09:14:27    NO
   4 SPECIAL              510174438           901 READ WRITE 2015-04-19 09:26:11    NO
   5 MOLERAT_CLONE        2993991422            0 MOUNTED    2015-04-19 11:08:50
Switch your session to the molerat_clone container and start the dedicated script for conversion from non-CDB to PDB.
alter session set container = molerat_clone;

@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
During execution of this script my session appeared to hang on recompilation, on a utl_recomp.recomp_parallel call. As I have only a handful of database objects, none of which needed recompilation, I did not expect to wait there at all — yet the step took a huge amount of time (in my virtual environment). After a while I hit Enter; perhaps it was only a coincidence, but the script reacted and moved forward almost instantly.
To finish the cloning process we need to open the database read write (do not use read only, as that can cause an error, according to the documentation).
alter database open read write;

exit
As the script leaves your sql*plus session with strange settings, I recommend exiting the session, logging in again to the root container and checking the status of all containers. Make sure your environment is set up to connect to tcdb. Add a tnsnames.ora entry to connect to the PDB via the listener.
# molerat_clone - cloned non-cdb to pdb in tcdb
MOLERAT_CLONE =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = odone)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = MOLERAT_CLONE)
    )
  )
sqlplus / as sysdba
start mt/container121
Key for column names
CID - Container ID

 CID NAME                 DbID         Size in MB Open mode  Open time              Res
---- -------------------- ------------ ---------- ---------- ---------------------- ---
   1 CDB$ROOT             918792381             0 READ WRITE 2015-04-19 09:08:10    NO
   2 PDB$SEED             4034246255          845 READ ONLY  2015-04-19 09:08:10    NO
   3 PIPBOY               3788905833          901 READ WRITE 2015-04-19 09:14:27    NO
   4 SPECIAL              510174438           901 READ WRITE 2015-04-19 09:26:11    NO
   5 MOLERAT_CLONE        2993991422         1315 READ WRITE 2015-04-19 11:26:20    NO
For some unknown reason my cloned PDB has the same size as the source database, counting only the allocated space of the data files. The source database holds 686 MB in segments and 1487 MB of allocated space in dba_data_files. Something odd happened during the conversion — perhaps some dictionary objects were not converted to common objects and now exist as common and local at the same time (compare the pipboy and molerat_clone sizes above; both PDBs are almost empty). Counted from inside the molerat_clone PDB, the database has 1287 MB allocated and 606 MB used by segments. I think this process can still be improved. Note that I used the same Oracle Database binaries for the tcdb container database and the molerat non-CDB (for cloning from non-CDB to PDB you have to use version 12.1.0.2.0 or later).
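The numbers above come from a comparison along these lines; a simple sketch you can run both in the source non-CDB and inside the new PDB:

```sql
-- Space used by segments vs. space allocated in the data files, in MB.
select round(sum(bytes)/1024/1024) as used_mb  from dba_segments;
select round(sum(bytes)/1024/1024) as alloc_mb from dba_data_files;
```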
Quickly check the state of the database objects. Add a tnsnames.ora entry for the newly created PDB and connect as agnes. In a separate ssh session connect to the tcdb PDB named molerat_clone. Then connect as lewis to check his access, and finally switch to the sys user and execute the same scripts as previously in the molerat non-CDB.
sqlplus agnes/agnes@molerat_clone
connect lewis/lewis@molerat_clone

connect sys/manager@molerat_clone as sysdba

start do/constraint112
Input constraints owner: agnes

Constraints with owner LIKE agnes

Owner            Table name           Constraint name  CT Refer. obj.  Ref. constr. nam
---------------- -------------------- ---------------- -- ------------ ----------------
AGNES            CITIZEN              CITIZEN_PK       P
AGNES            CITIZEN              CITIZEN_ROOM_FK  R  AGNES        ROOM_PK
AGNES            ROOM                 ROOM_PK          P
start do/constraint112
Input constraints owner: lewis

Constraints with owner LIKE lewis

Owner            Table name           Constraint name  CT Refer. obj.  Ref. constr. nam
---------------- -------------------- ---------------- -- ------------ ----------------
LEWIS            GUN                  GUN_PK           P
LEWIS            GUN                  GUN_ROOM_FK      R  AGNES        ROOM_PK
start do/object112
Input objects owner: agnes

Objects with owner LIKE agnes

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                CITIZEN_UIDX1                  2015-04-19 10:54:57 VALID
INDEX                ROOM_UIDX1                     2015-04-19 10:54:57 VALID
SEQUENCE             CITIZEN_SEQ                    2015-04-19 10:54:58 VALID
SEQUENCE             ROOM_SEQ                       2015-04-19 10:54:58 VALID
TABLE                CITIZEN                        2015-04-19 10:54:57 VALID
TABLE                ROOM                           2015-04-19 10:54:57 VALID
start do/object112
Input objects owner: lewis

Objects with owner LIKE lewis

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                GUN_UIDX1                      2015-04-19 10:54:58 VALID
SEQUENCE             GUN_SEQ                        2015-04-19 10:54:58 VALID
TABLE                GUN                            2015-04-19 10:54:58 VALID
And a separate query for the segments:
col segment_name for a32
col tablespace_name for a32

select segment_name, tablespace_name, bytes
  from dba_segments
 where owner in ('AGNES','LEWIS');
SEGMENT_NAME                     TABLESPACE_NAME                       BYTES
-------------------------------- -------------------------------- ----------
GUN                              MEGATON_DATA                          65536
CITIZEN                          MEGATON_DATA                          65536
ROOM                             MEGATON_DATA                          65536
GUN_UIDX1                        MEGATON_INDX                          65536
CITIZEN_UIDX1                    MEGATON_INDX                          65536
ROOM_UIDX1                       MEGATON_INDX                          65536
Looks good. Try to add more rows to the lewis table with simple inserts.
connect lewis/lewis@molerat_clone

insert into gun values (gun_seq.nextval, 'old gun', (select min(id) from agnes.room));
insert into gun values (gun_seq.nextval, 'shiny gun', (select max(id) from agnes.room));
commit;

select * from gun;
        ID NAME                                ROOM_ID
---------- -------------------------------- ----------
         1 big gun                                   1
         2 tiny gun                                  2
       101 old gun                                   1
       102 shiny gun                                 2
select * from session_roles;
ROLE
--------------------------------------------------------------------------------------------
MEGATON_CITIZEN
Copying PDB to another CDB
Some time ago I created another CDB and named it beecdb. Now I'm going to use it and clone molerat_clone into this container database.
In the target CDB, from the root container, create a new database link to the molerat_clone PDB in the tcdb database, and a directory for the files:
create database link molerat_clone connect to system identified by manager using 'molerat_clone';

!mkdir /u02/oradata/BEECDB/molerat_clone2
From the tcdb CDB, switch the molerat_clone PDB to read only mode.
alter pluggable database molerat_clone open read only force;
From the beecdb database, issue the command to clone the PDB using the from clause of create pluggable database. Remember to connect as sys in the root container. After creating the PDB, open it.
create pluggable database molerat_clone2 from molerat_clone@molerat_clone
  storage unlimited
  file_name_convert = ('/u02/oradata/tcdb/molerat','/u02/oradata/BEECDB/molerat_clone2')
  user_tablespaces = all
  logging
  create_file_dest = none;

alter pluggable database molerat_clone2 open read write;
After creating the PDB, switch the current container and check that the database objects created earlier are there.
alter session set container = molerat_clone2;
start do/object112
Input objects owner: agnes

Objects with owner LIKE agnes

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                CITIZEN_UIDX1                  2015-04-19 10:54:57 VALID
INDEX                ROOM_UIDX1                     2015-04-19 10:54:57 VALID
SEQUENCE             CITIZEN_SEQ                    2015-04-19 10:54:58 VALID
SEQUENCE             ROOM_SEQ                       2015-04-19 10:54:58 VALID
TABLE                CITIZEN                        2015-04-19 10:54:57 VALID
TABLE                ROOM                           2015-04-19 10:54:57 VALID
start do/object112
Input objects owner: lewis

Objects with owner LIKE lewis

Object type          Object name                    Creation date       Status
-------------------- ------------------------------ ------------------- -------
INDEX                GUN_UIDX1                      2015-04-19 10:54:58 VALID
SEQUENCE             GUN_SEQ                        2015-04-19 10:54:58 VALID
TABLE                GUN                            2015-04-19 10:54:58 VALID
Constraints are the same too. Looks like this procedure works just fine.
Moving PDB between CDBs
There is one more scenario I would like to test — unplug a PDB from the beecdb container and then plug it into tcdb. First, unplug molerat_clone2 from beecdb. Use a connection to the root container, switching between containers if needed.
alter session set container = cdb$root;

alter pluggable database molerat_clone2 close;

alter pluggable database molerat_clone2 unplug into '/u02/oradata/BEECDB/molerat_clone2/molerat_clone2_meta.xml';

start mt/pdb12102
Key for column names
FL - force logging
FnL - force nologging
CID - Container ID

 CID Pluggable database name          Status        Logging   FL  FnL
---- -------------------------------- ------------- --------- --- ---
   2 PDB$SEED                         NORMAL        LOGGING   NO  NO
   3 MOLERAT_CLONE2                   UNPLUGGED     LOGGING   NO  NO
We can see that the molerat_clone2 status has changed to unplugged. This PDB is still part of the database and will be included in the backupset if you start an rman backup of the database. I've checked this by issuing a command from rman (open a new session for this if needed):
rman target "'/ as sysbackup'"
backup as compressed backupset incremental level 0 database plus archivelog delete all input;
In output I can find:
[...]
channel ORA_DISK_1: starting compressed incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00019 name=/u02/oradata/BEECDB/molerat_clone2/system01.dbf
input datafile file number=00020 name=/u02/oradata/BEECDB/molerat_clone2/sysaux01.dbf
input datafile file number=00022 name=/u02/oradata/BEECDB/molerat_clone2/megaton_data_01.dbf
input datafile file number=00023 name=/u02/oradata/BEECDB/molerat_clone2/megaton_indx_01.dbf
input datafile file number=00021 name=/u02/oradata/BEECDB/molerat_clone2/users01.dbf
channel ORA_DISK_1: starting piece 1 at 19-APR-15
[...]
Now let's clone this PDB within the same database first. To do so we need the as clone clause and we have to rename the database files during the copy. First, create the target directory for the new files.
!mkdir /u02/oradata/BEECDB/molerat_clone44/
And execute cloning command:
create pluggable database molerat_clone44 as clone
  using '/u02/oradata/BEECDB/molerat_clone2/molerat_clone2_meta.xml'
  copy
  file_name_convert = ('/u02/oradata/BEECDB/molerat_clone2/', '/u02/oradata/BEECDB/molerat_clone44/')
  storage unlimited
  path_prefix = '/u02/oradata/BEECDB/molerat_clone44'
  user_tablespaces = all
  logging;
I'm not using source_file_name_convert because I haven't manually moved the files anywhere yet. They are still where they were when the PDB was unplugged from beecdb, so I only need file_name_convert (or the pdb_file_name_convert initialization parameter) to copy the files to the correct location.
start mt/pdb12102
Key for column names
FL - force logging
FnL - force nologging
CID - Container ID

 CID Pluggable database name          Status        Logging   FL  FnL
---- -------------------------------- ------------- --------- --- ---
   2 PDB$SEED                         NORMAL        LOGGING   NO  NO
   3 MOLERAT_CLONE2                   UNPLUGGED     LOGGING   NO  NO
   4 MOLERAT_CLONE44                  NEW           LOGGING   NO  NO
Our newly created PDB is in place. Let's check what happens when we open molerat_clone44.
alter pluggable database molerat_clone44 open read write;
The status switches to normal and this PDB is available for use, along with the agnes and lewis users. The next attempt will use the same unplugged database, but this time I'll move its files first. Just to clear old entries from this CDB, I'm going to remove the PDB from beecdb but leave the files intact.
drop pluggable database molerat_clone2 keep datafiles;

start mt/pdb12102
Key for column names
FL - force logging
FnL - force nologging
CID - Container ID

 CID Pluggable database name          Status        Logging   FL  FnL
---- -------------------------------- ------------- --------- --- ---
   2 PDB$SEED                         NORMAL        LOGGING   NO  NO
   4 MOLERAT_CLONE44                  NORMAL        LOGGING   NO  NO
PDB molerat_clone2 has vanished from the list. Now I'll move its files to a new location:
!mv /u02/oradata/BEECDB/molerat_clone2 /u02
The paths in the xml file still point to /u02/oradata/BEECDB/molerat_clone2/, but that is no longer true — the files are now located in /u02/molerat_clone2/. I'm switching to the tcdb database and will try to copy those files, with directory conversion, into /u02/oradata/tcdb/molerat_clone37. First, I'll create the target directory.
mkdir /u02/oradata/tcdb/molerat_clone37
And now I'm ready to create a new PDB from the xml file of this unplugged PDB. From the root container of the CDB named tcdb:
create pluggable database molerat_clone37
  using '/u02/molerat_clone2/molerat_clone2_meta.xml'
  source_file_name_convert = ('/u02/oradata/BEECDB/molerat_clone2/', '/u02/molerat_clone2/')
  copy
  file_name_convert = ('/u02/molerat_clone2/', '/u02/oradata/tcdb/molerat_clone37/')
  storage unlimited
  path_prefix = '/u02/oradata/tcdb/molerat_clone37'
  user_tablespaces = all
  logging;
start mt/pdb12102
Key for column names
FL - force logging
FnL - force nologging
CID - Container ID

 CID Pluggable database name          Status        Logging   FL  FnL
---- -------------------------------- ------------- --------- --- ---
   2 PDB$SEED                         NORMAL        LOGGING   NO  NO
   3 PIPBOY                           NORMAL        LOGGING   NO  NO
   4 SPECIAL                          NORMAL        LOGGING   NO  NO
   5 MOLERAT_CLONE                    NORMAL        LOGGING   NO  NO
   6 MOLERAT_CLONE37                  NEW           LOGGING   NO  NO
alter pluggable database molerat_clone37 open read write;
You can check the details of all database files with the ts/datatempfiles121.sql script, then switch to the molerat_clone37 container and inspect the database objects with do/object112.sql as before. The last attempt uses the move option to create a new PDB in the tcdb container database, relocating the database files. First, I'm going to drop this pluggable database and then create a new one again. From the tcdb root container:
alter pluggable database molerat_clone37 close;

drop pluggable database molerat_clone37 including datafiles;

create pluggable database molerat_clone37
  using '/u02/molerat_clone2/molerat_clone2_meta.xml'
  source_file_name_convert = ('/u02/oradata/BEECDB/molerat_clone2/', '/u02/molerat_clone2/')
  move
  file_name_convert = ('/u02/molerat_clone2/', '/u02/oradata/tcdb/molerat_clone37/')
  storage unlimited
  path_prefix = '/u02/oradata/tcdb/molerat_clone37'
  user_tablespaces = all
  logging;

alter pluggable database molerat_clone37 open read write;
I can see that, again, all files are in place and the PDB works properly.
No database files remained in the /u02/molerat_clone2/ directory — they were moved to /u02/oradata/tcdb/molerat_clone37. Only the molerat_clone2_meta.xml file was left behind (it appears to be an auxiliary file for the database; it's not registered anywhere). During this test you could see that moving files within the same file system was as fast as the mv os command. Cleaning up after my copy operations:
rm -rf /u02/molerat_clone2
Using the xml file is a good way to copy an unplugged PDB and restore it from the "sands of time". I can't yet see clear benefits of keeping a PDB unplugged, but maybe future releases will bring something interesting here — right now an unplugged PDB can only be dropped, which also removes it from the rman backup list.
Summary
It's worth mentioning that moving from 12.1.0.1.0 to 12.1.0.2.0 makes PDB creation from a non-CDB noticeably easier — we no longer have to manually run a script or a DBMS package to extract xml files, import them into the target CDB and follow a complex procedure. Now you just create a database link and execute a script which accommodates the new PDB in your CDB. The old way with the xml file is still valid, especially in environments where you cannot use a database link to connect directly to the source container database.
Besides the standard procedure, you can use the DBMS_PDB package as an optional step when plugging an unplugged PDB into a CDB. If the xml file is lost, you can recreate the PDB description with the recover procedure; use the describe routine to generate an xml file for a PDB; and check whether a PDB can be plugged into a different container — that is, verify compatibility between CDBs — with the check_plug_compatibility subprogram.
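A minimal sketch of the compatibility check mentioned above, run as sys in the target CDB root (the xml path is the one used earlier in this article):

```sql
set serveroutput on
declare
  compatible boolean;
begin
  -- Returns TRUE when the described PDB can be plugged into this CDB.
  compatible := dbms_pdb.check_plug_compatibility(
                  pdb_descr_file => '/u02/molerat_clone2/molerat_clone2_meta.xml',
                  pdb_name       => 'molerat_clone37');
  if compatible then
    dbms_output.put_line('PDB can be plugged in.');
  else
    -- Details of any mismatch land in the PDB_PLUG_IN_VIOLATIONS view.
    dbms_output.put_line('Check PDB_PLUG_IN_VIOLATIONS for details.');
  end if;
end;
/
```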
It looks like Oracle will explore the Multitenant idea more deeply in future releases, and we can expect new functionality with each cumulative update.