Druid's core feature is a database connection pool, but on top of that it ships rich monitoring, logging, and SQL-firewall capabilities. These extras are implemented as plugins (filters) that can be freely combined and customized.
This article explains how the monitoring and logging plugins are implemented, and how they are wired into Druid.
1. Using Druid
Let's start with a typical workflow for using the Druid connection pool. First, configure the data source:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc-3.0.xsd
           http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
    <!-- Spring configuration used by the web application -->
    <bean id="propertyConfigurer"
          class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
        <property name="locations">
            <list>
                <value>/WEB-INF/classes/dbconfig.properties</value>
            </list>
        </property>
    </bean>
    <bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
          destroy-method="close">
        <property name="url" value="${url}" />
        <property name="username" value="${username}" />
        <property name="password" value="${password}" />
        <property name="driverClassName" value="${driverClassName}" />
        <property name="filters" value="${filters}" />
        <property name="maxActive" value="${maxActive}" />
        <property name="initialSize" value="${initialSize}" />
        <property name="maxWait" value="${maxWait}" />
        <property name="minIdle" value="${minIdle}" />
        <property name="timeBetweenEvictionRunsMillis" value="${timeBetweenEvictionRunsMillis}" />
        <property name="minEvictableIdleTimeMillis" value="${minEvictableIdleTimeMillis}" />
        <property name="validationQuery" value="${validationQuery}" />
        <property name="testWhileIdle" value="${testWhileIdle}" />
        <property name="testOnBorrow" value="${testOnBorrow}" />
        <property name="testOnReturn" value="${testOnReturn}" />
        <property name="maxOpenPreparedStatements" value="${maxOpenPreparedStatements}" />
        <property name="removeAbandoned" value="${removeAbandoned}" /> <!-- enable the removeAbandoned feature -->
        <property name="removeAbandonedTimeout" value="${removeAbandonedTimeout}" /> <!-- 1800 seconds, i.e. 30 minutes -->
        <property name="logAbandoned" value="${logAbandoned}" /> <!-- log an error when an abandoned connection is closed -->
    </bean>
    <!-- jdbcTemplate -->
    <bean id="jdbc" class="org.springframework.jdbc.core.JdbcTemplate">
        <property name="dataSource">
            <ref bean="dataSource" />
        </property>
    </bean>
</beans>
Then obtain a connection and execute SQL:
// obtain a connection from the Druid data source
Connection conn = dataSource.getConnection();
// create a prepared statement
PreparedStatement stmt = conn.prepareStatement("select * from table_a where id = ?");
stmt.setInt(1, 1); // JDBC parameter indexes are 1-based
// execute it
stmt.execute();
stmt.close();
conn.close();
2. How the Monitoring Is Implemented
We know Druid provides rich monitoring. How is it actually implemented?
The short answer comes down to two techniques:
1. The proxy pattern controls access to the JDBC objects. Statement, PreparedStatement, Connection, and so on are always accessed through Druid proxies: StatementProxyImpl, PreparedStatementProxyImpl, and ConnectionProxyImpl respectively.
2. A filter chain adds the monitoring and logging behavior around each JDBC call.
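Before digging into the source, here is the first technique in miniature. This is a sketch using a JDK dynamic proxy, not Druid's actual implementation (Druid hand-writes its proxy classes); SimpleStatement, RealStatement, and the timing logic are invented for illustration:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// An invented, statement-like interface; Druid proxies the real java.sql types.
interface SimpleStatement {
    boolean execute(String sql);
}

class RealStatement implements SimpleStatement {
    @Override
    public boolean execute(String sql) {
        return true;   // pretend the SQL ran
    }
}

public class ProxyDemo {
    // Wrap a statement so every call is timed -- the same idea as Druid's
    // StatementProxyImpl collecting execution statistics.
    public static SimpleStatement monitored(SimpleStatement target, long[] elapsedNanos) {
        InvocationHandler handler = (proxy, method, args) -> {
            long start = System.nanoTime();
            try {
                return method.invoke(target, args);   // forward to the real object
            } finally {
                elapsedNanos[0] += System.nanoTime() - start;   // record the metric
            }
        };
        return (SimpleStatement) Proxy.newProxyInstance(
                SimpleStatement.class.getClassLoader(),
                new Class<?>[]{SimpleStatement.class},
                handler);
    }

    public static void main(String[] args) {
        long[] elapsed = new long[1];
        SimpleStatement stmt = monitored(new RealStatement(), elapsed);
        System.out.println("execute returned " + stmt.execute("select 1"));
        System.out.println("nanos recorded: " + elapsed[0]);
    }
}
```

Because the caller only ever holds the proxy, every call is observable; that is exactly why Druid can count, time, and log JDBC operations without the business code noticing.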
2.1 Creating a Connection
Let's look at the data source initialization code in com.alibaba.druid.pool.DruidDataSource#init.
The key parts are excerpted below with comments added. Connections are obtained via PhysicalConnectionInfo, and DruidConnectionHolder wraps the underlying connection for the pool.
public void init() throws SQLException {
    if (inited) {
        return;
    }

    // bug fixed for dead lock, for issue #2980
    DruidDriver.getInstance();

    // acquire the lock before initializing
    final ReentrantLock lock = this.lock;
    try {
        lock.lockInterruptibly();
    } catch (InterruptedException e) {
        throw new SQLException("interrupt", e);
    }

    boolean init = false;
    try {
        if (inited) {
            return;
        }

        initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());

        this.id = DruidDriver.createDataSourceId();
        if (this.id > 1) {
            long delta = (this.id - 1) * 100000;
            this.connectionIdSeedUpdater.addAndGet(this, delta);
            this.statementIdSeedUpdater.addAndGet(this, delta);
            this.resultSetIdSeedUpdater.addAndGet(this, delta);
            this.transactionIdSeedUpdater.addAndGet(this, delta);
        }

        if (this.jdbcUrl != null) {
            this.jdbcUrl = this.jdbcUrl.trim();
            initFromWrapDriverUrl();
        }

        // initialize the filters
        for (Filter filter : filters) {
            filter.init(this);
        }

        if (this.dbType == null || this.dbType.length() == 0) {
            this.dbType = JdbcUtils.getDbType(jdbcUrl, null);
        }

        if (JdbcConstants.MYSQL.equals(this.dbType)
                || JdbcConstants.MARIADB.equals(this.dbType)
                || JdbcConstants.ALIYUN_ADS.equals(this.dbType)) {
            boolean cacheServerConfigurationSet = false;
            if (this.connectProperties.containsKey("cacheServerConfiguration")) {
                cacheServerConfigurationSet = true;
            } else if (this.jdbcUrl.indexOf("cacheServerConfiguration") != -1) {
                cacheServerConfigurationSet = true;
            }
            if (cacheServerConfigurationSet) {
                this.connectProperties.put("cacheServerConfiguration", "true");
            }
        }

        if (maxActive <= 0) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }
        if (maxActive < minIdle) {
            throw new IllegalArgumentException("illegal maxActive " + maxActive);
        }
        if (getInitialSize() > maxActive) {
            throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive " + maxActive);
        }
        if (timeBetweenLogStatsMillis > 0 && useGlobalDataSourceStat) {
            throw new IllegalArgumentException("timeBetweenLogStatsMillis not support useGlobalDataSourceStat=true");
        }
        if (maxEvictableIdleTimeMillis < minEvictableIdleTimeMillis) {
            throw new SQLException("maxEvictableIdleTimeMillis must be greater than minEvictableIdleTimeMillis");
        }

        if (this.driverClass != null) {
            this.driverClass = driverClass.trim();
        }

        // load filters advertised via SPI
        initFromSPIServiceLoader();

        if (this.driver == null) {
            if (this.driverClass == null || this.driverClass.isEmpty()) {
                this.driverClass = JdbcUtils.getDriverClassName(this.jdbcUrl);
            }
            if (MockDriver.class.getName().equals(driverClass)) {
                driver = MockDriver.instance;
            } else {
                if (jdbcUrl == null && (driverClass == null || driverClass.length() == 0)) {
                    throw new SQLException("url not set");
                }
                driver = JdbcUtils.createDriver(driverClassLoader, driverClass);
            }
        } else {
            if (this.driverClass == null) {
                this.driverClass = driver.getClass().getName();
            }
        }

        initCheck();
        initExceptionSorter();
        // set up the connection validity checker (can we still ping the server?)
        initValidConnectionChecker();
        validationQueryCheck();

        if (isUseGlobalDataSourceStat()) {
            dataSourceStat = JdbcDataSourceStat.getGlobal();
            if (dataSourceStat == null) {
                dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbType);
                JdbcDataSourceStat.setGlobal(dataSourceStat);
            }
            if (dataSourceStat.getDbType() == null) {
                dataSourceStat.setDbType(this.dbType);
            }
        } else {
            dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbType, this.connectProperties);
        }
        dataSourceStat.setResetStatEnable(this.resetStatEnable);

        // allocate the pool arrays; DruidConnectionHolder wraps the underlying connections
        connections = new DruidConnectionHolder[maxActive];
        evictConnections = new DruidConnectionHolder[maxActive];
        keepAliveConnections = new DruidConnectionHolder[maxActive];

        SQLException connectError = null;

        if (createScheduler != null && asyncInit) {
            for (int i = 0; i < initialSize; ++i) {
                submitCreateTask(true);
            }
        } else if (!asyncInit) {
            // init connections: synchronous initialization creates all initial connections at startup
            while (poolingCount < initialSize) {
                try {
                    // create a physical connection using the configured settings;
                    // PhysicalConnectionInfo wraps the actual ConnectionProxy object
                    PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                    // wrap it in a holder -- this is what the pool stores, so it is worth a closer look below
                    DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                    connections[poolingCount++] = holder;
                } catch (SQLException ex) {
                    LOG.error("init datasource error, url: " + this.getUrl(), ex);
                    if (initExceptionThrow) {
                        connectError = ex;
                        break;
                    } else {
                        Thread.sleep(3000);
                    }
                }
            }
            if (poolingCount > 0) {
                poolingPeak = poolingCount;
                poolingPeakTime = System.currentTimeMillis();
            }
        }

        createAndLogThread();
        createAndStartCreatorThread();
        createAndStartDestroyThread();

        initedLatch.await();
        init = true;

        initedTime = new Date();
        registerMbean();

        if (connectError != null && poolingCount == 0) {
            throw connectError;
        }

        if (keepAlive) {
            // async fill to minIdle
            if (createScheduler != null) {
                for (int i = 0; i < minIdle; ++i) {
                    submitCreateTask(true);
                }
            } else {
                this.emptySignal();
            }
        }
    } catch (SQLException e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } catch (InterruptedException e) {
        throw new SQLException(e.getMessage(), e);
    } catch (RuntimeException e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } catch (Error e) {
        LOG.error("{dataSource-" + this.getID() + "} init error", e);
        throw e;
    } finally {
        inited = true;
        lock.unlock();

        if (init && LOG.isInfoEnabled()) {
            String msg = "{dataSource-" + this.getID();
            if (this.name != null && !this.name.isEmpty()) {
                msg += ",";
                msg += this.name;
            }
            msg += "} inited";
            LOG.info(msg);
        }
    }
}
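The locking skeleton of init() is worth isolating: a fast-path check, an interruptible lock, a second check under the lock, and the flag set in a finally block. Here is that skeleton as a simplified, self-contained sketch (names are invented; Druid throws SQLException where this sketch throws IllegalStateException):

```java
import java.util.concurrent.locks.ReentrantLock;

// Simplified sketch of DruidDataSource.init()'s double-checked, lock-guarded
// initialization. Not Druid's code; initCount only exists to observe the behavior.
public class LazyInitDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private volatile boolean inited;
    private int initCount;   // counts how often the expensive work actually ran

    public void init() {
        if (inited) {
            return;                       // fast path: already initialized
        }
        try {
            lock.lockInterruptibly();     // take the lock before initializing
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupt", e);
        }
        try {
            if (inited) {
                return;                   // another thread initialized while we waited
            }
            initCount++;                  // the expensive setup work goes here
        } finally {
            inited = true;                // set even on failure, exactly as Druid does
            lock.unlock();
        }
    }

    public int getInitCount() {
        return initCount;
    }

    public static void main(String[] args) {
        LazyInitDemo ds = new LazyInitDemo();
        ds.init();
        ds.init();                        // second call is a no-op
        System.out.println("init ran " + ds.getInitCount() + " time(s)");
    }
}
```

The second check under the lock is the important part: without it, two threads passing the fast path simultaneously would both run the expensive setup.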
Sequence diagram: obtaining a database connection.
The connection itself is created in DruidAbstractDataSource.java:
public PhysicalConnectionInfo createPhysicalConnection() throws SQLException {
    String url = this.getUrl();
    Properties connectProperties = getConnectProperties();
    Properties physicalConnectProperties = new Properties();

    Connection conn = null;

    long connectStartNanos = System.nanoTime();
    long connectedNanos, initedNanos, validatedNanos;

    createStartNanosUpdater.set(this, connectStartNanos);
    creatingCountUpdater.incrementAndGet(this);
    try {
        // the actual connect happens in the overload shown below
        conn = createPhysicalConnection(url, physicalConnectProperties);
        connectedNanos = System.nanoTime();

        if (conn == null) {
            throw new SQLException("connect error, url " + url + ", driverClass " + this.driverClass);
        }

        initPhysicalConnection(conn, variables, globalVariables);
        initedNanos = System.nanoTime();

        validateConnection(conn);
        validatedNanos = System.nanoTime();

        setFailContinuous(false);
        setCreateError(null);
    } catch (SQLException ex) {
        setCreateError(ex);
        JdbcUtils.close(conn);
        throw ex;
    } catch (RuntimeException ex) {
        setCreateError(ex);
        JdbcUtils.close(conn);
        throw ex;
    } catch (Error ex) {
        createErrorCountUpdater.incrementAndGet(this);
        setCreateError(ex);
        JdbcUtils.close(conn);
        throw ex;
    } finally {
        long nano = System.nanoTime() - connectStartNanos;
        createTimespan += nano;
        creatingCountUpdater.decrementAndGet(this);
    }

    return new PhysicalConnectionInfo(conn, connectStartNanos, connectedNanos, initedNanos, validatedNanos, variables, globalVariables);
}
// the method above delegates to this overload to obtain the connection
public Connection createPhysicalConnection(String url, Properties info) throws SQLException {
    Connection conn;
    if (getProxyFilters().size() == 0) {
        conn = getDriver().connect(url, info);
    } else {
        // with filters configured, the connection must be proxied;
        // without filters, the raw driver connection is exposed directly
        conn = new FilterChainImpl(this).connection_connect(info);
    }

    createCountUpdater.incrementAndGet(this);

    return conn;
}
Now let's focus on the filter chain, FilterChainImpl.java.
This class supplies the default implementations, which operate on the raw JDBC objects without any enhancement.
public ConnectionProxy connection_connect(Properties info) throws SQLException {
    // walk the filter chain; a filter that does not override this operation
    // passes the call down until the default implementation below is reached
    if (this.pos < filterSize) {
        return nextFilter()
                .connection_connect(this, info);
    }

    // the default implementation -- only some filters need to enhance the Connection
    Driver driver = dataSource.getRawDriver();
    String url = dataSource.getRawJdbcUrl();

    Connection nativeConnection = driver.connect(url, info);
    if (nativeConnection == null) {
        return null;
    }

    return new ConnectionProxyImpl(dataSource, nativeConnection, info, dataSource.createConnectionId());
}
Filter class diagram and responsibilities
- The Filter interface declares callbacks for all the JDBC operations (connection, statement, resultset).
- FilterAdapter provides default implementations that simply delegate back to FilterChainImpl.
- FilterEventAdapter extends FilterAdapter and adds extension points: around connection_connect, for example, subclasses can run custom logic before and after the connection is obtained.
Some of Druid's filters extend FilterAdapter and others extend FilterEventAdapter to implement their features. The class diagram shows only representative filters, not all of them.
Class diagram of ConnectionProxyImpl
2.2 Creating a PreparedStatement
Look at the sequence diagram first. It closely mirrors the connection-creation flow above; once you understand that flow, this one follows naturally, and so does every other JDBC operation. That uniformity is Druid's core design.
Under the hood it again goes through FilterChainImpl and the filters to produce a proxy PreparedStatement, and the proxy class is where custom enhancements are applied.
2.3 Executing a PreparedStatement
When business code calls prepareStatement.execute(), it is really calling PreparedStatementProxyImpl.execute().
PreparedStatementProxyImpl in turn calls FilterChainImpl.preparedStatement_execute to do the work.
The approach is the same as in the two flows above: the proxy pattern controls access to the object, and the filter pattern attaches extra behavior.
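The relationship in 2.3 -- a proxy method that updates its own counters and then builds a filter chain for the call -- can be sketched as follows. All names here are invented for illustration; Druid's real classes are PreparedStatementProxyImpl and FilterChainImpl:

```java
import java.util.List;

// Invented, miniature stand-ins for Druid's Filter and FilterChainImpl.
interface SqlFilter {
    boolean execute(ExecChain chain, String sql);
}

class ExecChain {
    private final List<SqlFilter> filters;
    private int pos;   // index of the next filter to run

    ExecChain(List<SqlFilter> filters) {
        this.filters = filters;
    }

    boolean execute(String sql) {
        if (pos < filters.size()) {
            return filters.get(pos++).execute(this, sql);   // let the next filter run
        }
        return true;   // default implementation: the raw statement would run here
    }
}

class StatementProxy {
    private final List<SqlFilter> filters;
    private int executeCount;   // a metric the proxy exposes, like Druid's stats

    StatementProxy(List<SqlFilter> filters) {
        this.filters = filters;
    }

    // what business code actually calls when it calls execute()
    public boolean execute(String sql) {
        executeCount++;
        return new ExecChain(filters).execute(sql);   // fresh chain per call
    }

    public int getExecuteCount() {
        return executeCount;
    }
}

public class StatementProxyDemo {
    public static void main(String[] args) {
        StatementProxy stmt = new StatementProxy(List.of());
        stmt.execute("select 1");
        stmt.execute("select 2");
        System.out.println("executions seen by the proxy: " + stmt.getExecuteCount());
    }
}
```

Note the chain is rebuilt for each call, so its position index always starts at zero; the proxy object, by contrast, lives as long as the statement and is a natural home for per-statement statistics.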
3. FilterChain
Just as the Filter interface declares the JDBC-related callbacks, FilterChain declares the corresponding methods plus some extensions. FilterChain is Druid's entry point for all JDBC operations.
FilterChainImpl provides the default implementation.
Every FilterChainImpl method has a two-part structure: first it gives the configured filters a chance to handle the call, then it falls back to a default implementation.
If no filters are configured, or none of the configured filters enhances a given operation, the call falls through to the second part.
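This two-part structure, together with the FilterEventAdapter-style before/after hooks, boils down to a few dozen lines. The sketch below is not Druid code; MiniFilter, MiniChain, and EventAdapter are invented names that mirror Filter, FilterChainImpl, and FilterEventAdapter:

```java
import java.util.List;

// Invented names for illustration -- this is the shape of Druid's design, not its code.
interface MiniFilter {
    String query(MiniChain chain, String sql);
}

// Mirrors FilterChainImpl: part one dispatches to the next filter,
// part two is the default implementation that does the real work.
class MiniChain {
    private final List<MiniFilter> filters;
    private int pos;   // like FilterChainImpl's pos

    MiniChain(List<MiniFilter> filters) {
        this.filters = filters;
    }

    String query(String sql) {
        if (pos < filters.size()) {
            return filters.get(pos++).query(this, sql);   // part one: delegate to a filter
        }
        return "result-of:" + sql;                        // part two: default implementation
    }
}

// Mirrors FilterEventAdapter: subclasses only override the before/after hooks.
abstract class EventAdapter implements MiniFilter {
    public final String query(MiniChain chain, String sql) {
        before(sql);
        String result = chain.query(sql);   // keep the chain moving
        after(sql, result);
        return result;
    }

    protected void before(String sql) {}
    protected void after(String sql, String result) {}
}

public class MiniChainDemo {
    public static String run(String sql, StringBuilder log) {
        MiniFilter logging = new EventAdapter() {
            @Override protected void before(String s) { log.append("before;"); }
            @Override protected void after(String s, String r) { log.append("after;"); }
        };
        return new MiniChain(List.of(logging)).query(sql);
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        String result = run("select 1", log);
        System.out.println(result + " | " + log);
    }
}
```

A monitoring filter only has to override the hooks it cares about; everything it ignores flows through to the default implementation unchanged.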
4. The Filter Pattern
Another classic application of the filter pattern is javax.servlet.FilterChain, used in web development to run a configured series of Filters.
In Tomcat, org.apache.catalina.core.ApplicationFilterChain implements the FilterChain interface and executes the filters. When a request arrives, the matching filters and the target servlet are placed into the chain.
Here is the implementation:
@Override
public void doFilter(ServletRequest request, ServletResponse response)
        throws IOException, ServletException {
    if (Globals.IS_SECURITY_ENABLED) {
        final ServletRequest req = request;
        final ServletResponse res = response;
        try {
            java.security.AccessController.doPrivileged(
                    new java.security.PrivilegedExceptionAction<Void>() {
                        @Override
                        public Void run()
                                throws ServletException, IOException {
                            internalDoFilter(req, res);
                            return null;
                        }
                    }
            );
        } catch (PrivilegedActionException pe) {
            Exception e = pe.getException();
            if (e instanceof ServletException)
                throw (ServletException) e;
            else if (e instanceof IOException)
                throw (IOException) e;
            else if (e instanceof RuntimeException)
                throw (RuntimeException) e;
            else
                throw new ServletException(e.getMessage(), e);
        }
    } else {
        internalDoFilter(request, response);
    }
}
// this is where the filters actually run
private void internalDoFilter(ServletRequest request,
        ServletResponse response)
        throws IOException, ServletException {

    // Call the next filter if there is one
    // pos is the index into the filter array of the next filter to run
    if (pos < n) {
        // the filters matching this request were collected into the array
        ApplicationFilterConfig filterConfig = filters[pos++];
        try {
            Filter filter = filterConfig.getFilter();

            if (request.isAsyncSupported() && "false".equalsIgnoreCase(
                    filterConfig.getFilterDef().getAsyncSupported())) {
                request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR, Boolean.FALSE);
            }
            if (Globals.IS_SECURITY_ENABLED) {
                final ServletRequest req = request;
                final ServletResponse res = response;
                Principal principal =
                        ((HttpServletRequest) req).getUserPrincipal();

                Object[] args = new Object[]{req, res, this};
                SecurityUtil.doAsPrivilege("doFilter", filter, classType, args, principal);
            } else {
                // invoke the filter, passing the chain itself so the filter
                // can trigger the next filter in the chain
                filter.doFilter(request, response, this);
            }
        } catch (IOException | ServletException | RuntimeException e) {
            throw e;
        } catch (Throwable e) {
            e = ExceptionUtils.unwrapInvocationTargetException(e);
            ExceptionUtils.handleThrowable(e);
            throw new ServletException(sm.getString("filterChain.filter"), e);
        }
        return;
    }

    // We fell off the end of the chain -- call the servlet instance.
    // Because this call is embedded in the filter chain, a filter can veto
    // access to the servlet, or do work after the servlet has run.
    try {
        if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
            lastServicedRequest.set(request);
            lastServicedResponse.set(response);
        }

        if (request.isAsyncSupported() && !servletSupportsAsync) {
            request.setAttribute(Globals.ASYNC_SUPPORTED_ATTR,
                    Boolean.FALSE);
        }
        // Use potentially wrapped request from this point
        if ((request instanceof HttpServletRequest) &&
                (response instanceof HttpServletResponse) &&
                Globals.IS_SECURITY_ENABLED) {
            final ServletRequest req = request;
            final ServletResponse res = response;
            Principal principal =
                    ((HttpServletRequest) req).getUserPrincipal();
            Object[] args = new Object[]{req, res};
            SecurityUtil.doAsPrivilege("service",
                    servlet,
                    classTypeUsedInService,
                    args,
                    principal);
        } else {
            // run the servlet
            servlet.service(request, response);
        }
    } catch (IOException | ServletException | RuntimeException e) {
        throw e;
    } catch (Throwable e) {
        e = ExceptionUtils.unwrapInvocationTargetException(e);
        ExceptionUtils.handleThrowable(e);
        throw new ServletException(sm.getString("filterChain.servlet"), e);
    } finally {
        if (ApplicationDispatcher.WRAP_SAME_OBJECT) {
            lastServicedRequest.set(null);
            lastServicedResponse.set(null);
        }
    }
}
To summarize, the filter pattern needs two components: a chain executor (FilterChain) and the filters themselves (Filter).
The FilterChain runs the filters in order, providing a common framework; it can also supply a default implementation, or do work before or after all the filters have run.
A Filter decides whether the chain continues: it can short-circuit the chain, or perform its own work before and after passing the call along.
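Both properties show up in a minimal chain. The sketch below uses invented, Tomcat-like names (Handler, ChainFilter, Chain) rather than the real Servlet API: an auth filter either passes the request down the chain to the servlet, or blocks it outright.

```java
import java.util.List;

// Invented, Tomcat-like shapes for illustration; not real Servlet API types.
interface Handler {
    String handle(String request);
}

interface ChainFilter {
    String doFilter(String request, Chain chain);
}

class Chain {
    private final List<ChainFilter> filters;
    private final Handler servlet;
    private int pos;   // like ApplicationFilterChain's pos

    Chain(List<ChainFilter> filters, Handler servlet) {
        this.filters = filters;
        this.servlet = servlet;
    }

    String proceed(String request) {
        if (pos < filters.size()) {
            return filters.get(pos++).doFilter(request, this);  // next filter
        }
        return servlet.handle(request);  // end of the chain: call the servlet
    }
}

public class ShortCircuitDemo {
    public static String serve(String request) {
        Handler servlet = req -> "200 OK";
        // the filter only continues the chain for authenticated requests
        ChainFilter auth = (req, chain) ->
                req.contains("token=") ? chain.proceed(req) : "403 Forbidden";
        return new Chain(List.of(auth), servlet).proceed(request);
    }

    public static void main(String[] args) {
        System.out.println(serve("/report?token=abc"));  // reaches the servlet
        System.out.println(serve("/report"));            // blocked by the filter
    }
}
```

The unauthenticated request never reaches the servlet because the filter simply declines to call proceed() -- the same mechanism a Druid filter uses when it wants to veto or augment a JDBC operation.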