The database currently has two tables: a group table `cluster` and a group-membership table `clusteruser` (there is also a `user` table, which we don't need for now). Their structures are as follows:
The `cluster` table has one foreign key pointing to the user id:
mysql> desc cluster;
+--------------+-------------+------+-----+---------+-------+
| Field        | Type        | Null | Key | Default | Extra |
+--------------+-------------+------+-----+---------+-------+
| clusterId    | varchar(15) | NO   | PRI | NULL    |       |
| clusterName  | varchar(15) | YES  |     | NULL    |       |
| owner        | varchar(15) | YES  | MUL | NULL    |       |
| info         | varchar(15) | YES  |     | NULL    |       |
| num          | int(2)      | YES  |     | NULL    |       |
| registerTime | datetime    | YES  |     | NULL    |       |
+--------------+-------------+------+-----+---------+-------+
6 rows in set (0.00 sec)
The `clusteruser` table has two foreign keys: one pointing to the user id and one pointing to `clusterId`:
mysql> desc clusteruser;
+-----------+-------------+------+-----+---------+----------------+
| Field     | Type        | Null | Key | Default | Extra          |
+-----------+-------------+------+-----+---------+----------------+
| id        | int(10)     | NO   | PRI | NULL    | auto_increment |
| clusterId | varchar(15) | YES  | MUL | NULL    |                |
| uaccount  | varchar(15) | YES  | MUL | NULL    |                |
| role      | int(2)      | YES  |     | NULL    |                |
| cnickname | varchar(15) | YES  |     | NULL    |                |
+-----------+-------------+------+-----+---------+----------------+
5 rows in set (0.00 sec)
When a user creates a group, a record must be inserted into both the `cluster` table and the `clusteruser` table, and the two inserts must either both succeed or both fail; in other words, they need transaction control. The business layer looks like this:
@org.junit.Test
public void testSql1() {
    String sql = "insert into cluster values('6666','ceshi','8888','batch',0,now());"
               + "insert into clusteruser values(null,'6666','1516',1,null);";
    long start = System.currentTimeMillis();
    if (DBUtil.execuBatchUpdate(sql, null)) {
        System.out.println("success");
    } else {
        System.out.println("failed");
    }
    long end = System.currentTimeMillis();
    System.out.println("Method 1 took " + (end - start) + " ms");
}
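All three utility methods below rely on `String.split(";")` to break the compound string into individual statements. It is worth confirming how Java's `split` behaves on exactly this input; in particular, the trailing `;` does not produce an extra empty element, because `split` discards trailing empty strings:

```java
public class SplitDemo {
    public static void main(String[] args) {
        String sql = "insert into cluster values('6666','ceshi','8888','batch',0,now());"
                   + "insert into clusteruser values(null,'6666','1516',1,null);";
        String[] sqls = sql.split(";");
        // Java's split drops trailing empty strings, so the final ';' adds no element
        System.out.println(sqls.length);   // 2
        for (String s : sqls) {
            System.out.println(s.trim());
        }
    }
}
```

So the split yields exactly the two INSERT statements, ready to be prepared or batched one by one.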
The business layer just calls the shared database utility class; every variant opens a transaction. There are several ways to implement it.
Method 1: split the SQL first, then precompile each piece with its parameters, add it to a batch, and execute:
public static boolean execuBatchUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false);                 // open the transaction
        String[] sqls = sql.split(";");
        int[] rarr = null;
        for (int k = 0; k < sqls.length; k++) {
            pstm = con.prepareStatement(sqls[k]); // one PreparedStatement per sub-statement
            if (args != null) {
                for (int i = 0; i < args.length; i++) {
                    pstm.setObject(i + 1, args[i]);
                }
            }
            pstm.addBatch();
            rarr = pstm.executeBatch();           // each "batch" holds a single statement here
        }
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback();                       // any failure rolls back both inserts
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
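One caveat with Method 1: after the split, the same full `args` array is bound to every sub-statement, so it only works when every sub-statement takes the same parameters (or, as in `testSql1`, when `args` is null). A small placeholder-counting check makes the mismatch visible; the `countPlaceholders` helper here is my own illustration, not part of `DBUtil`:

```java
public class PlaceholderCheck {
    // counts '?' placeholders in one SQL statement (naive: ignores '?' inside string literals)
    static int countPlaceholders(String stmt) {
        int n = 0;
        for (char c : stmt.toCharArray()) {
            if (c == '?') n++;
        }
        return n;
    }

    public static void main(String[] args) {
        String sql = "insert into cluster values(?,?,?,?,?,now());"
                   + "insert into clusteruser values(null,?,?,?,null);";
        for (String single : sql.split(";")) {
            System.out.println(countPlaceholders(single));
        }
        // prints 5 then 3: one shared args array cannot satisfy both sub-statements
    }
}
```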
Method 2: precompile the parameters first, then split the compound string into individual statements, add each to the batch, and execute. (Note: the JDBC spec says addBatch(String) should not be called on a PreparedStatement; MySQL's Connector/J accepts it, but other drivers may throw.)
public static boolean execuTransActionUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false);              // open the transaction
        pstm = con.prepareStatement(sql);      // precompile the whole compound string
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                pstm.setObject(i + 1, args[i]);
            }
        }
        String[] singles = sql.split(";");
        for (int j = 0; j < singles.length; j++) {
            pstm.addBatch(singles[j]);         // queue each piece as a plain SQL batch entry
        }
        pstm.executeBatch();
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback();                    // roll back both inserts on any failure
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
Method 3: skip pstm.executeBatch() and use pstm.executeUpdate() instead. Open the transaction first, split the incoming SQL, and execute each piece; if an exception is caught,
at least one statement failed, so simply roll back:
public static boolean executeLowBatchUpdate(String sql, Object... args) {
    boolean flag = false;
    try {
        con = getConn();
        if (con == null) return false;
        con.setAutoCommit(false);              // open the transaction
        pstm = con.prepareStatement(sql);
        if (args != null) {
            for (int i = 0; i < args.length; i++) {
                pstm.setObject(i + 1, args[i]);
            }
        }
        String[] singles = sql.split(";");
        for (int j = 0; j < singles.length; j++) {
            pstm.executeUpdate(singles[j]);    // one round trip per statement, no batching
        }
        flag = true;
        con.commit();
        con.setAutoCommit(true);
    } catch (SQLException e) {
        e.printStackTrace();
        try {
            con.rollback();                    // at least one statement failed: roll back all
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
    } finally {
        close(con, pstm, null);
    }
    return flag;
}
So which of the three should we use? Since we only have two SQL statements, with the parameters hard-coded into the string (effectively no parameters are passed at all), the difference doesn't show here. When the data volume is large, however, executeUpdate() is not recommended; use executeBatch() instead. For example, to copy a table of one million user rows into another table, we obviously can't write a million SQL statements up front and then split the string; that approach is clearly impractical.
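For the million-row case, the usual pattern is one parameterized INSERT prepared once, with addBatch() called per row and executeBatch() flushed every N rows. Since no database is at hand here, this driverless sketch uses a List as a stand-in for the statement's internal batch buffer; the row count and batchSize are illustrative assumptions, and the real JDBC calls are indicated in comments:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCadence {
    public static void main(String[] args) {
        final int rows = 1_000_000;    // size of the hypothetical source table
        final int batchSize = 1000;    // flush every 1000 rows
        List<Object[]> buffer = new ArrayList<>();   // stands in for pstm's batch buffer
        int flushes = 0;

        for (int i = 0; i < rows; i++) {
            // with a real PreparedStatement: pstm.setObject(...) then pstm.addBatch()
            buffer.add(new Object[]{"user" + i});
            if (buffer.size() == batchSize) {
                // with a real connection: pstm.executeBatch() (commit once at the end)
                buffer.clear();
                flushes++;
            }
        }
        if (!buffer.isEmpty()) {       // flush the final partial batch, if any
            buffer.clear();
            flushes++;
        }
        System.out.println(flushes);   // 1000 flushes of 1000 rows each
    }
}
```

Flushing in fixed-size chunks keeps memory bounded and lets the driver send each chunk in few round trips, which is exactly where executeBatch() beats a million individual executeUpdate() calls.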