Parallel bulk copy and IDENTITY columns


When you are using parallel bulk copy, IDENTITY columns can cause a bottleneck. As bcp reads in the data, the utility both generates the values of the IDENTITY column and updates the IDENTITY column's maximum value for each row. This extra work may adversely affect the performance improvement that you expected to receive from using parallel bulk copy.

To avoid this bottleneck, you can explicitly specify the IDENTITY starting point for each session.

 

Retaining sort order

If you copy sorted data into the table without explicitly specifying the IDENTITY starting point, bcp might not generate the IDENTITY column values in sorted order. Parallel bulk copy reads the information into all the partitions simultaneously and updates the values of the IDENTITY column as it reads in the data.

A bcp statement with no explicit starting point would produce IDENTITY column numbers similar to those shown in Figure 4-2:

Figure 4-2: Producing IDENTITY columns in sorted order

The table has a maximum IDENTITY column number of 119, but the order is no longer meaningful.

If you want Adaptive Server to enforce unique IDENTITY column values, you must run bcp with either the -g or -E parameter.

 

Specifying the starting point from the command line

Use the -g id_start_value flag on the command line to specify an IDENTITY starting point for a session.

The -g parameter instructs Adaptive Server to generate a sequence of IDENTITY column values for the bcp session without checking and updating the maximum value of the table's IDENTITY column for each row. Instead of checking, Adaptive Server updates the maximum value at the end of each batch.

WARNING! Specifying IDENTITY value ranges that overlap can inadvertently create duplicate IDENTITY values.

To specify a starting IDENTITY value, enter:

bcp [-g id_start_value]

For example, to copy in four files, each of which has 100 rows, enter:

bcp mydb..bigtable in file1 -g100

bcp mydb..bigtable in file2 -g200

bcp mydb..bigtable in file3 -g300

bcp mydb..bigtable in file4 -g400

 

Using the -g parameter does not guarantee that the IDENTITY column values are unique. To ensure uniqueness, you must:

·        Know how many rows are in the input files and what the highest existing value is. Use this information to set the starting values with the -g parameter and generate ranges that do not overlap (see the sketch after this list).

In the example above, if any file contains more than 100 rows, the identity values overlap into the next 100 rows of data, creating duplicate identity values.

·        Make sure that no one else is inserting data that can produce conflicting IDENTITY values.
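
For example, here is a minimal shell sketch of the first point. The file names, the table name mydb..bigtable, and the assumption that 100 is the correct starting value for the first session (as in the example above) are all illustrative; the row count via wc -l works only for character-format files with one row per line, and login and format flags are omitted.

start=100
for f in file1 file2 file3 file4
do
    rows=`wc -l < $f`                  # rows in this input file
    bcp mydb..bigtable in $f -g$start
    start=`expr $start + $rows`        # next session starts above this range
done

Each session then generates IDENTITY values in its own range, and the ranges do not overlap as long as the row counts are accurate and no other process is inserting rows.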

 

Specifying the starting point using the data file

Use the -E parameter to set the IDENTITY starting point explicitly from the data file.

The -E parameter instructs bcp to read an explicit IDENTITY column value for each row from the data file. If the number of inserted rows exceeds the maximum possible IDENTITY column value, Adaptive Server returns an error.
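
As an illustration, assuming the same hypothetical table and a data file whose rows already carry their IDENTITY values, a single session might look like:

bcp mydb..bigtable in file1 -E

Here bcp inserts each row with the IDENTITY value supplied in file1 rather than generating one, so the data file itself determines both the starting point and the ordering of the values.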

 

