Import Utility, Part 2

Considerations When Importing Database Objects

Importing Object Identifiers

For object types, if IGNORE=y, the object type already exists, and the object identifiers, hashcodes, and type descriptors match, no error is reported. If the object identifiers or hashcodes do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and any tables using the object type are not imported.

For object types, if IGNORE=n and the object type already exists, an error is reported. If the object identifiers, hashcodes, or type descriptors do not match and the parameter TOID_NOVALIDATE has not been set to ignore the object type, any tables using the object type are not imported.

For object tables, if IGNORE=y, the table already exists, and the object identifiers, hashcodes, and type descriptors match, no error is reported. Rows are imported into the object table. Import of rows may fail if rows with the same object identifier already exist in the object table. If the object identifiers, hashcodes, or type descriptors do not match, and the parameter TOID_NOVALIDATE has not been set to ignore the object type, an error is reported and the table is not imported.

For object tables, if IGNORE=n and the table already exists, an error is reported and the table is not imported.

Because Import preserves object identifiers of object types and object tables, consider the following when you import objects from one schema into another schema using the FROMUSER and TOUSER parameters:

If the FROMUSER object types and object tables already exist on the target system, errors occur because the object identifiers of the TOUSER object types and object tables are already in use. The FROMUSER object types and object tables must be dropped from the system before the import is started.

If an object table was created using the OID AS option to assign it the same object identifier as another table, both tables cannot be imported. You can import one of the tables, but the second table receives an error because the object identifier is already in use.

Importing Existing Object Tables and Tables That Contain Object Types

Users frequently create tables before importing data to reorganize tablespace usage or to change a table's storage parameters. The tables must be created with the same definitions as were previously used or a compatible format (except for storage parameters). For object tables and tables that contain columns of object types, format compatibilities are more restrictive.

For object tables and for tables containing columns of objects, each object the table references has its name, structure, and version information written out to the Export file. Export also includes object type information from different schemas, as needed.

Import verifies the existence of each object type required by a table prior to importing the table data. This verification consists of a check of the object type's name followed by a comparison of the object type's structure and version from the import system with that found in the Export file.

If an object type name is found on the import system, but the structure or version does not match that in the Export file, an error message is generated and the table data is not imported.

The Import parameter TOID_NOVALIDATE can be used to disable the verification of the object type's structure and version for specific objects.
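For example, a hypothetical invocation that imports a table named foo while skipping type validation for two object types (all names here are illustrative only):

    imp scott/tiger TABLES=foo TOID_NOVALIDATE=(type1,joe.type2)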

Importing Nested Tables

Inner nested tables are exported separately from the outer table. Therefore, situations may arise where data in an inner nested table might not be properly imported:

Suppose a table with an inner nested table is exported and then imported without dropping the table or removing rows from the table. If the IGNORE=y parameter is used, there will be a constraint violation when inserting each row in the outer table. However, data in the inner nested table may be successfully imported, resulting in duplicate rows in the inner table.

If nonrecoverable errors occur inserting data in outer tables, the rest of the data in the outer table is skipped, but the corresponding inner table rows are not skipped. This may result in inner table rows not being referenced by any row in the outer table.

If an insert to an inner table fails after a recoverable error, its outer table row will already have been inserted in the outer table and data will continue to be inserted in it and any other inner tables of the containing table. This circumstance results in a partial logical row.

If nonrecoverable errors occur inserting data in an inner table, Import skips the rest of that inner table's data but does not skip the outer table or other nested tables.

You should always carefully examine the log file for errors in outer tables and inner tables. To restore consistency, table data may need to be modified or deleted.

Because inner nested tables are imported separately from the outer table, attempts to access data from them while importing may produce unexpected results. For example, if an outer row is accessed before its inner rows are imported, an incomplete row may be returned to the user.

Importing REF Data

REF columns and attributes may contain a hidden ROWID that points to the referenced type instance. Import does not automatically recompute these ROWIDs for the target database. You should execute the following statement to reset the ROWIDs to their proper values:

ANALYZE TABLE [schema.]table VALIDATE REF UPDATE
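For example, for a hypothetical table customers owned by scott:

    ANALYZE TABLE scott.customers VALIDATE REF UPDATE;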

Importing BFILE Columns and Directory Aliases

Export and Import do not copy data referenced by BFILE columns and attributes from the source database to the target database. Export and Import only propagate the names of the files and the directory aliases referenced by the BFILE columns. It is the responsibility of the DBA or user to move the actual files referenced through BFILE columns and attributes.

When you import table data that contains BFILE columns, the BFILE locator is imported with the directory alias and filename that was present at export time. Import does not verify that the directory alias or file exists. If the directory alias or file does not exist, an error occurs when the user accesses the BFILE data.

For directory aliases, if the operating system directory syntax used in the export system is not valid on the import system, no error is reported at import time. Subsequent access to the file data receives an error.

It is the responsibility of the DBA or user to ensure the directory alias is valid on the import system.
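For example, if the BFILE data was moved to /u01/app/media on the import system, the directory alias can be recreated to point there (the alias name and path are illustrative):

    CREATE OR REPLACE DIRECTORY media_dir AS '/u01/app/media';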

Importing Foreign Function Libraries

Import does not verify that the location referenced by the foreign function library is correct. If the formats for directory and filenames used in the library's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the library and ensure the library's specification is valid on the import system.

Importing Stored Procedures, Functions, and Packages

The behavior of Import when a local stored procedure, function, or package is imported depends upon whether the COMPILE parameter is set to y or to n.

When a local stored procedure, function, or package is imported and COMPILE=y, the procedure, function, or package is recompiled upon import and retains its original time-stamp specification. If the compilation is successful, it can be accessed by remote procedures without error.

If COMPILE=n, the procedure, function, or package is still imported, but the original time stamp is lost. The compilation takes place the next time the procedure, function, or package is used.

Importing Java Objects

When a Java source or class is imported, it retains its original resolver (the list of schemas used to resolve Java full names). If the object is imported into a different schema, that resolver may no longer be valid. 

Importing External Tables

Import does not verify that the location referenced by the external table is correct. If the formats for directory and filenames used in the table's specification on the export file are invalid on the import system, no error is reported at import time. Subsequent usage of the callout functions will receive an error.

It is the responsibility of the DBA or user to manually move the table and ensure the table's specification is valid on the import system.

Importing Advanced Queuing (AQ) Tables

Importing a queue table also imports any underlying queues and the related dictionary information. A queue can be imported only at the granularity level of the queue table. When a queue table is imported, export pre-table and post-table action procedures maintain the queue dictionary.

Importing LONG Columns

LONG columns can be up to 2 gigabytes in length. In importing and exporting, the LONG columns must fit into memory with the rest of each row's data. The memory used to store LONG columns, however, does not need to be contiguous, because LONG data is loaded in sections.

Import can be used to convert LONG columns to CLOB columns. To do this, first create a table specifying the new CLOB column. When Import is run, the LONG data is converted to CLOB format. The same technique can be used to convert LONG RAW columns to BLOB columns.
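A minimal sketch of this technique, assuming a hypothetical table docs whose LONG column text should become a CLOB. First, precreate the table with the new column type:

    CREATE TABLE docs (
        doc_id NUMBER,
        text   CLOB
    );

Then run Import with IGNORE=y so the existing definition is used and the LONG data is converted as it is inserted:

    imp scott/tiger FILE=docs.dmp TABLES=docs IGNORE=y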

Importing Views

Views are exported in dependency order. In some cases, Export must determine the ordering, rather than obtaining the order from the server database. In doing so, Export may not always be able to duplicate the correct ordering, resulting in compilation warnings when a view is imported, and the failure to import column comments on such views.

In particular, if viewa uses the stored procedure procb, and procb uses the view viewc, Export cannot determine the proper ordering of viewa and viewc. If viewa is exported before viewc and procb already exists on the import system, viewa receives compilation warnings at import time.

Grants on views are imported even if a view has compilation errors. A view could have compilation errors if an object it depends on, such as a table, procedure, or another view, does not exist when the view is created. If a base table does not exist, the server cannot validate that the grantor has the proper privileges on the base table with the GRANT OPTION. Access violations could occur when the view is used if the grantor does not have the proper privileges after the missing tables are created.

Importing views that contain references to tables in other schemas requires that the importer have SELECT ANY TABLE privilege. If the importer has not been granted this privilege, the views will be imported in an uncompiled state. Note that granting the privilege to a role is insufficient. For the view to be compiled, the privilege must be granted directly to the importer.
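For example, to allow a hypothetical importing user named importer to compile such views, the privilege must be granted directly to the user, not to a role:

    GRANT SELECT ANY TABLE TO importer;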

Importing Partitioned Tables

Import attempts to create a partitioned table with the same partition or subpartition names as the exported partitioned table, including names of the form SYS_Pnnn. If a table with the same name already exists, Import processing depends on the value of the IGNORE parameter.

Unless SKIP_UNUSABLE_INDEXES=y, inserting the exported data into the target table fails if Import cannot update a nonpartitioned index or index partition that is marked Indexes Unusable or is otherwise not suitable.
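A hypothetical invocation (the user, file, and table names are illustrative):

    imp scott/tiger FILE=parts.dmp TABLES=parts SKIP_UNUSABLE_INDEXES=y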

Support for Fine-Grained Access Control

You can export tables with fine-grained access control policies enabled. When doing so, keep the following considerations in mind:

To restore the fine-grained access control policies, the user who imports from an export file containing such tables must have the following privileges (a sketch of the corresponding grants appears after this list):

EXECUTE privilege on the DBMS_RLS package so that the tables' security policies can be reinstated.

EXPORT_FULL_DATABASE role enabled or the EXEMPT ACCESS POLICY privilege granted.
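A sketch of the corresponding grants for a hypothetical importing user sec_admin (issued by a suitably privileged user such as SYS):

    GRANT EXECUTE ON DBMS_RLS TO sec_admin;
    GRANT EXEMPT ACCESS POLICY TO sec_admin;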

If a user without the correct privileges attempts to import from an export file that contains tables with fine-grained access control policies, a warning message will be issued. Therefore, it is advisable for security reasons that the exporter/importer of such tables be the DBA.



Materialized Views and Snapshots

The three interrelated objects in a snapshot system are the master table, optional snapshot log, and the snapshot itself.

Snapshot Log

The snapshot log in a dump file is imported if the master table already exists for the database to which you are importing and it has a snapshot log.

When a ROWID snapshot log is exported, ROWIDs stored in the snapshot log have no meaning upon import. As a result, each ROWID snapshot's first attempt to do a fast refresh fails, generating an error indicating that a complete refresh is required.

To avoid the refresh error, do a complete refresh after importing a ROWID snapshot log. After you have done a complete refresh, subsequent fast refreshes will work properly. In contrast, when a primary key snapshot log is exported, the keys' values do retain their meaning upon Import. Therefore, primary key snapshots can do a fast refresh after the import.
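For example, a complete refresh of a hypothetical snapshot scott.emp_snap can be requested through the DBMS_SNAPSHOT package, where 'C' requests a complete refresh:

    EXECUTE DBMS_SNAPSHOT.REFRESH('scott.emp_snap', 'C');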

Snapshots

A snapshot that has been restored from an export file has reverted to a previous state. On import, the time of the last refresh is imported as part of the snapshot table definition. The function that calculates the next refresh time is also imported.

Each refresh leaves a signature. A fast refresh uses the log entries that date from the time of that signature to bring the snapshot up to date. When the fast refresh is complete, the signature is deleted and a new signature is created. Any log entries that are not needed to refresh other snapshots are also deleted (all log entries with times before the earliest remaining signature).

Importing a Snapshot

Assume that a snapshot is refreshed at time A, exported at time B, and refreshed again at time C. Then, because of corruption or other problems, the snapshot needs to be restored by dropping the snapshot and importing it again. The newly imported version has the last refresh time recorded as time A. However, log entries needed for a fast refresh may no longer exist. If the log entries do exist (because they are needed for another snapshot that has yet to be refreshed), they are used, and the fast refresh completes successfully. Otherwise, the fast refresh fails, generating an error that says a complete refresh is required.

Importing a Snapshot into a Different Schema

Snapshots, snapshot logs, and related items are exported with the schema name explicitly given in the DDL statements; therefore, snapshots and their related items cannot be imported into a different schema.

If you attempt to use FROMUSER and TOUSER to import snapshot data, an error will be written to the Import log file and the items will not be imported.



Transportable Tablespaces

Transportable tablespaces let you move a set of tablespaces from one Oracle database to another.

To do this, you must make the tablespaces read-only, copy the datafiles of these tablespaces, and use Export/Import to move the database information (metadata) stored in the data dictionary. Both the datafiles and the metadata export file must be copied to the target database. The transport of these files can be done using any facility for copying flat, binary files, such as the operating system copying facility, binary-mode FTP, or publishing on CD-ROMs.

After copying the datafiles and importing the metadata, you can optionally put the tablespaces in read/write mode.

Import provides the following parameters to enable import of transportable tablespaces metadata: TRANSPORT_TABLESPACE, TABLESPACES, DATAFILES, and TTS_OWNERS.
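For example, a hypothetical transportable tablespace import (the datafile path, tablespace name, and owner are illustrative):

    imp system/password FILE=expdat.dmp TRANSPORT_TABLESPACE=y DATAFILES='/db/sales_ts.f' TABLESPACES=sales_ts TTS_OWNERS=scott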



Storage Parameters

By default, a table is imported into its original tablespace.

If the tablespace no longer exists, or the user does not have sufficient quota in the tablespace, the system uses the default tablespace for that user, unless the table:

Is partitioned

Is a type table

Contains LOB, VARRAY, or OPAQUE type columns

Has an index-organized table (IOT) overflow segment

The OPTIMAL Parameter

The storage parameter OPTIMAL for rollback segments is not preserved during export and import.

Storage Parameters for OID Indexes and LOB Columns

Tables are exported with their current storage parameters. For object tables, the OIDINDEX is created with its current storage parameters and name, if given. For tables that contain LOB, VARRAY, or OPAQUE type columns, the LOB, VARRAY, or OPAQUE type data is created with its current storage parameters.

If you alter the storage parameters of existing tables prior to export, the tables are exported using those altered storage parameters. Note, however, that storage parameters for LOB data cannot be altered prior to export (for example, chunk size for a LOB column, whether a LOB column is CACHE or NOCACHE, and so forth).

Note that LOB data might not reside in the same tablespace as the containing table. The tablespace for that data must be read/write at the time of import or the table will not be imported.

If LOB data resides in a tablespace that does not exist at the time of import or the user does not have the necessary quota in that tablespace, the table will not be imported. Because there can be multiple tablespace clauses, including one for the table, Import cannot determine which tablespace clause caused the error.

Overriding Storage Parameters

Before using the Import utility to import data, you may want to create large tables with different storage parameters. If so, you must specify IGNORE=y on the command line or in the parameter file.
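A minimal sketch, assuming a hypothetical table big_table exported from the scott schema that should be recreated with larger extents. First, precreate the table with the desired storage parameters:

    CREATE TABLE big_table (
        id   NUMBER,
        data VARCHAR2(2000)
    )
    TABLESPACE big_ts
    STORAGE (INITIAL 100M NEXT 50M);

Then import the data into the precreated table:

    imp scott/tiger FILE=scott.dmp TABLES=big_table IGNORE=y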

The Export COMPRESS Parameter

By default at export time, storage parameters are adjusted to consolidate all data into its initial extent. To preserve the original size of an initial extent, you must specify at export time that extents are not to be consolidated (by setting COMPRESS=n). 
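For example (the file name is illustrative):

    exp scott/tiger FILE=scott.dmp COMPRESS=n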

Read-Only Tablespaces

Read-only tablespaces can be exported. On import, if the tablespace does not already exist in the target database, the tablespace is created as a read/write tablespace. If you want read-only functionality, you must manually make the tablespace read-only after the import. If the tablespace already exists in the target database and is read-only, you must make it read/write before the import.
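For example, for a hypothetical tablespace hist_data:

    -- Before import, if the tablespace already exists and is read-only:
    ALTER TABLESPACE hist_data READ WRITE;
    -- After import, to restore read-only status:
    ALTER TABLESPACE hist_data READ ONLY;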



Dropping a Tablespace

You can drop a tablespace by redefining the objects to use different tablespaces before the import. You can then issue the imp command and specify IGNORE=y.

In many cases, you can drop a tablespace by doing a full database export, then creating a zero-block tablespace with the same name (before logging off) as the tablespace you want to drop. During import, with IGNORE=y, the relevant CREATE TABLESPACE statement will fail and prevent the creation of the unwanted tablespace.

All objects from that tablespace will be imported into their owner's default tablespace, with the exception of partitioned tables, type tables, tables that contain LOB or VARRAY columns, and index-organized tables with overflow segments. Import cannot determine which tablespace caused the error. Instead, you must first create the table and then import it again, specifying IGNORE=y.

Objects are not imported into the default tablespace if the tablespace does not exist or you do not have the necessary quotas for your default tablespace.



Reorganizing Tablespaces

If a user's quota allows it, the user's tables are imported into the same tablespace from which they were exported. However, if the tablespace no longer exists or the user does not have the necessary quota, the system uses the default tablespace for that user as long as the table is unpartitioned, contains no LOB or VARRAY columns, is not a type table, and is not an index-organized table with an overflow segment.

For example, suppose you need to move joe's tables from tablespace A to tablespace B after a full database export. Follow these steps (a SQL sketch of the sequence appears after the list):

1. If joe has the UNLIMITED TABLESPACE privilege, revoke it. Set joe's quota on tablespace A to zero. Also revoke all roles that might have such privileges or quotas. Note that role revokes do not cascade; users who were granted other roles by joe are unaffected.

2. Export joe's tables.

3. Drop joe's tables from tablespace A.

4. Give joe a quota on tablespace B and make it his default tablespace.

5. Import joe's tables. (By default, Import puts joe's tables into tablespace B.)
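A minimal SQL sketch of this sequence, assuming tablespaces named a and b and a DBA account (all names are illustrative); the exp and imp commands are run from the operating system shell between the SQL steps:

    REVOKE UNLIMITED TABLESPACE FROM joe;
    ALTER USER joe QUOTA 0 ON a;
    -- From the OS shell: exp dba/password OWNER=joe FILE=joe.dmp
    -- Then drop joe's tables from tablespace a
    ALTER USER joe DEFAULT TABLESPACE b QUOTA UNLIMITED ON b;
    -- From the OS shell: imp dba/password FILE=joe.dmp FROMUSER=joe TOUSER=joe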



Importing Statistics

If statistics are requested at export time and analyzer statistics are available for a table, Export will place the ANALYZE statement to recalculate the statistics for the table into the dump file. In most circumstances, Export will also write the precalculated optimizer statistics for tables, indexes, and columns to the dump file. 

Because of the time it takes to perform an ANALYZE statement, it is usually preferable for Import to use the precalculated optimizer statistics for a table (and its indexes and columns) rather than executing the ANALYZE statement saved by Export. By default, Import will always use the precalculated statistics that are found in the export dump file.

The Export utility flags certain precalculated statistics as questionable. The importer might not want to import precalculated statistics in the following situations:

1. Character set translations between the dump file, the import client, and the import database could potentially change collating sequences that are implicit in the precalculated statistics.

2. Row errors occurred while importing the table.

3. A partition-level import is performed (column statistics will no longer be accurate).

In these situations, specifying STATISTICS=SAFE causes Import to use the precalculated statistics only when they are not questionable.

In certain situations, the importer might want to always use ANALYZE statements rather than precalculated statistics. For example, the statistics gathered from a fragmented database may not be relevant when the data is imported in a compressed form. In these cases, the importer should specify STATISTICS=RECALCULATE to force the recalculation of statistics.

If you do not want any statistics to be established by Import, you should specify STATISTICS=NONE.
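For example, the following hypothetical invocations force recalculation or suppress statistics entirely:

    imp scott/tiger FILE=scott.dmp STATISTICS=RECALCULATE
    imp scott/tiger FILE=scott.dmp STATISTICS=NONE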



Using Export and Import to Partition a Database Migration

Advantages of Partitioning a Migration

Time required for the migration may be reduced because many of the subjobs can be run in parallel.

The import can start as soon as the first export subjob completes, rather than waiting for the entire export to complete.

Disadvantages of Partitioning a Migration

The export and import processes become more complex.

Support of cross-schema references for certain types of objects may be compromised. For example, if a schema contains a table with a foreign key constraint against a table in a different schema, you may not have all required parent records when you import the table into the dependent schema.

How to Use Export and Import to Partition a Database Migration

    For all top-level metadata in the database, issue the following commands:

    exp dba/password FILE=full FULL=y CONSTRAINTS=n TRIGGERS=n ROWS=n INDEXES=n

    imp dba/password FILE=full FULL=y

    For each schema_n in the database, issue the following commands:

    exp dba/password OWNER=schema_n FILE=schema_n

    imp dba/password FILE=schema_n FROMUSER=schema_n TOUSER=schema_n IGNORE=y

All exports can be done in parallel. When the import of full.dmp completes, all remaining imports can also be done in parallel.



Using Export Files from a Previous Oracle Release

Using Oracle Version 7 Export Files

Check Constraints on DATE Columns

In Oracle9i, check constraints on DATE columns must use the TO_DATE function to specify the format of the date. Because this function was not required in versions prior to Oracle8i, data imported from an earlier Oracle database might not have used the TO_DATE function. In such cases, the constraints are imported into the Oracle9i database, but they are flagged in the dictionary as invalid.

The catalog views DBA_CONSTRAINTS, USER_CONSTRAINTS, and ALL_CONSTRAINTS can be used to identify such constraints. Import issues a warning message if invalid date constraints are in the database.
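One way to locate such constraints is to query the BAD column of these views, which flags constraints that specify dates in an ambiguous manner (a sketch; the DBA_ view requires DBA access):

    SELECT owner, table_name, constraint_name
      FROM dba_constraints
     WHERE bad = 'BAD';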

Using Oracle Version 6 Export Files

User Privileges

When user definitions are imported into an Oracle database, they are created with the CREATE USER statement. So, when importing from export files created by previous versions of Export, users are not automatically granted the CREATE SESSION privilege.

CHAR columns

Oracle Version 6 CHAR columns are automatically converted into the Oracle VARCHAR2 datatype.

Status of Integrity Constraints

NOT NULL constraints are imported as ENABLED. All other constraints are imported as DISABLED.

Length of Default Column Values

A table with a default column value that is longer than the maximum size of that column generates the following error on import to Oracle9i:

ORA-1401: inserted value too large for column

Oracle Version 6 did not check the columns in a CREATE TABLE statement to be sure they were long enough to hold their default values, so such tables could be created in a Version 6 database. The Oracle9i server does perform this check, however. As a result, column defaults that could be imported into a Version 6 database may not import into Oracle9i.

If the default is a value returned by a function, the column must be large enough to hold the maximum value that can be returned by that function. Otherwise, the CREATE TABLE statement recorded in the export file produces an error on import.
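A hypothetical illustration: because the USER function can return up to 30 characters, a table definition like the following, permitted by Version 6, would produce an error when its CREATE TABLE statement is replayed on import into Oracle9i:

    CREATE TABLE audit_log (
        -- the maximum USER value (30 characters) exceeds the column size
        changed_by VARCHAR2(10) DEFAULT USER
    );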

Using Oracle Version 5 Export Files

CHAR columns are automatically converted to VARCHAR2.

NOT NULL constraints are imported as ENABLED.

Import automatically creates an index on any clusters to be imported.

The CHARSET Parameter

Default: none. This parameter applies to Oracle Version 5 and 6 export files only. Use of this parameter is not recommended; it is provided only for compatibility with previous versions and will eventually no longer be supported.

Oracle Version 5 and 6 export files do not contain the database character set identifier. However, a version 5 or 6 export file does indicate whether the user session character set was ASCII or EBCDIC. Use this parameter to indicate the actual character set used at export time. The Import utility will verify whether the specified character set is ASCII or EBCDIC based on the character set in the export file. If you do not specify a value for the CHARSET parameter and the export file is ASCII, Import will verify that the user session character set is ASCII. Or, if the export file is EBCDIC, Import will verify that the user session character set is EBCDIC.
