You may remember the five servers where ZooKeeper was installed in my earlier post "Building a Five-Node Distributed Cluster on Hadoop 2.6.5 (Part 6): Installing Zookeeper":
192.168.159.129:2181 192.168.159.130:2181 192.168.159.131:2181 192.168.159.132:2181 192.168.159.133:2181
To make working with ZooKeeper from Java easier, the Curator framework was later created. It is very powerful and is now a top-level Apache project. It provides rich, high-level API wrappers for all kinds of complex ZooKeeper scenarios, such as reconnection after session timeout, leader election, distributed counters, and distributed locks.
Maven dependencies required by Curator:
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>3.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-recipes</artifactId>
    <version>3.2.1</version>
</dependency>
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-client</artifactId>
    <version>3.2.1</version>
</dependency>
Curator uses a fluent, chained style that is easy to read, and creates ZooKeeper client objects through factory methods.
1. Use the static factory methods of CuratorFrameworkFactory (they differ in their parameters) to create the ZooKeeper client object.
Parameter 1: connectString, the ZooKeeper server addresses and ports; separate multiple servers with ",".
Parameter 2: sessionTimeoutMs, the session timeout in milliseconds; the default is 60000 ms.
Parameter 3: connectionTimeoutMs, the connection timeout in milliseconds; the default is 15000 ms.
Parameter 4: retryPolicy, the connection retry policy. Four implementations are provided: ExponentialBackoffRetry (retries a given number of times, with the pause between retries growing each time), RetryNTimes (retries up to a given maximum number of times), RetryOneTime (retries exactly once), and RetryUntilElapsed (keeps retrying until a given amount of time has elapsed).
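To build intuition for how ExponentialBackoffRetry spaces out retries, here is a plain-Java sketch. The formula (base sleep times a random factor in the range 1 to 2^(retryCount+1)-1) follows the behavior commonly documented for this policy; treat it as an illustration, not Curator's actual source.

```java
import java.util.Random;

public class BackoffSketch {
    // Largest possible sleep for a given retry, assuming the commonly
    // described formula: base * max(1, random(1 << (retryCount + 1))).
    public static long maxSleepMs(int baseSleepMs, int retryCount) {
        return (long) baseSleepMs * Math.max(1, (1 << (retryCount + 1)) - 1);
    }

    // One randomized sleep sample for a given retry count.
    public static long nextSleepMs(int baseSleepMs, int retryCount, Random rnd) {
        return (long) baseSleepMs * Math.max(1, rnd.nextInt(1 << (retryCount + 1)));
    }

    public static void main(String[] args) {
        for (int retry = 0; retry < 5; retry++) {
            System.out.println("retry " + retry + " sleeps up to " + maxSleepMs(1000, retry) + " ms");
        }
    }
}
```

The randomization ("jitter") matters: it stops many clients that lost the connection at the same moment from all retrying in lockstep.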
A Curator hello-world example:
public class CuratorHelloworld {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        // Retry policy: start with a 1-second sleep, retry up to 10 times
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);
        // Create the Curator client through the factory
        CuratorFramework curator = CuratorFrameworkFactory.builder().connectString(CONNECT_ADDR)
                .sessionTimeoutMs(SESSION_TIMEOUT).retryPolicy(policy).build();
        // Start the connection
        curator.start();
        ExecutorService executor = Executors.newCachedThreadPool();
        /* Create a node. creatingParentsIfNeeded() also creates any missing parent nodes;
         * withMode() sets the node type, just like the native ZooKeeper API.
         * If not set, the default is PERSISTENT. */
        curator.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT)
                .inBackground((framework, event) -> { // attach an async callback
                    System.out.println("Code:" + event.getResultCode());
                    System.out.println("Type:" + event.getType());
                    System.out.println("Path:" + event.getPath());
                }, executor).forPath("/super/c1", "c1 data".getBytes());
        Thread.sleep(5000); // give the callback time to print
        String data = new String(curator.getData().forPath("/super/c1")); // read node data
        System.out.println(data);
        Stat stat = curator.checkExists().forPath("/super/c1"); // check whether the node exists
        System.out.println(stat);
        curator.setData().forPath("/super/c1", "c1 new data".getBytes()); // update node data
        data = new String(curator.getData().forPath("/super/c1"));
        System.out.println(data);
        List<String> children = curator.getChildren().forPath("/super"); // list child nodes
        for (String child : children) {
            System.out.println(child);
        }
        // Safe to delete; deletingChildrenIfNeeded() also deletes any child nodes
        curator.delete().guaranteed().deletingChildrenIfNeeded().forPath("/super");
        curator.close();
    }
}
PS: Optional chained calls on create(): creatingParentsIfNeeded (also create missing parent nodes), withMode (node type), withACL (access control), forPath (path of the node to create). Optional chained calls on delete(): deletingChildrenIfNeeded (also delete child nodes), guaranteed (guaranteed deletion), withVersion (version check), forPath (path of the node to delete).
inBackground() attaches an asynchronous callback. For example, a callback bound to a create operation can print the server's result code and event type, and you can also pass in a thread pool so the callback runs there instead of on the client's event thread.
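The point of the executor argument can be seen without ZooKeeper at all: work submitted to a pool runs on one of the pool's threads, not the caller's. A minimal sketch (the class and method names here are illustrative, not Curator API):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class CallbackExecutorSketch {
    // Submit a "callback" to the given pool and report which thread ran it.
    public static String runCallbackOn(ExecutorService executor) throws InterruptedException {
        AtomicReference<String> threadName = new AtomicReference<>();
        executor.submit(() -> threadName.set(Thread.currentThread().getName()));
        executor.shutdown();
        executor.awaitTermination(5, TimeUnit.SECONDS);
        return threadName.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("callback ran on: " + runCallbackOn(Executors.newCachedThreadPool()));
    }
}
```

With inBackground(callback, executor), slow callback work is kept off the client's internal event thread in the same way.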
2. Curator listeners
1) NodeCache: watches a node for creation and updates.
public class CuratorWatcher1 {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);
        CuratorFramework curator = CuratorFrameworkFactory.builder().connectString(CONNECT_ADDR)
                .sessionTimeoutMs(SESSION_TIMEOUT).retryPolicy(policy).build();
        curator.start();
        // the last parameter controls whether node data is compressed
        NodeCache cache = new NodeCache(curator, "/super", false);
        cache.start(true);
        // only creation and updates of the node are reported; deletion is not
        cache.getListenable().addListener(() -> {
            System.out.println("Path: " + cache.getCurrentData().getPath());
            System.out.println("Data: " + new String(cache.getCurrentData().getData()));
            System.out.println("Stat: " + cache.getCurrentData().getStat());
        });
        curator.create().forPath("/super", "1234".getBytes());
        Thread.sleep(1000);
        curator.setData().forPath("/super", "5678".getBytes());
        Thread.sleep(1000);
        curator.delete().forPath("/super");
        Thread.sleep(5000);
        curator.close();
    }
}
2) PathChildrenCache: watches child nodes for creation, updates, and deletion.
public class CuratorWatcher2 {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);
        CuratorFramework curator = CuratorFrameworkFactory.builder().connectString(CONNECT_ADDR)
                .sessionTimeoutMs(SESSION_TIMEOUT).retryPolicy(policy).build();
        curator.start();
        // the third parameter controls whether node data is delivered with events
        PathChildrenCache childrenCache = new PathChildrenCache(curator, "/super", true);
        /*
         * Without a start mode, updates to child-node data are not reported.
         * PathChildrenCache.StartMode.BUILD_INITIAL_CACHE builds the cache up front and pre-creates the /super node specified above.
         * PathChildrenCache.StartMode.POST_INITIALIZED_EVENT behaves like BUILD_INITIAL_CACHE, except /super is not pre-created.
         * PathChildrenCache.StartMode.NORMAL behaves the same as passing no argument: child data updates are not reported.
         */
        childrenCache.start(PathChildrenCache.StartMode.POST_INITIALIZED_EVENT);
        childrenCache.getListenable().addListener((framework, event) -> {
            switch (event.getType()) {
                case CHILD_ADDED:
                    System.out.println("CHILD_ADDED, type: " + event.getType() + ", path: " + event.getData().getPath() + ", data: " +
                            new String(event.getData().getData()) + ", stat: " + event.getData().getStat());
                    break;
                case CHILD_UPDATED:
                    System.out.println("CHILD_UPDATED, type: " + event.getType() + ", path: " + event.getData().getPath() + ", data: " +
                            new String(event.getData().getData()) + ", stat: " + event.getData().getStat());
                    break;
                case CHILD_REMOVED:
                    System.out.println("CHILD_REMOVED, type: " + event.getType() + ", path: " + event.getData().getPath() + ", data: " +
                            new String(event.getData().getData()) + ", stat: " + event.getData().getStat());
                    break;
                default:
                    break;
            }
        });
        curator.create().forPath("/super", "123".getBytes());
        curator.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/super/c1", "c1 data".getBytes());
        // In testing, changes to the node itself are not reported; only changes to children under it are
        curator.setData().forPath("/super", "456".getBytes());
        curator.setData().forPath("/super/c1", "c1 new data".getBytes());
        curator.delete().guaranteed().deletingChildrenIfNeeded().forPath("/super");
        Thread.sleep(5000);
        curator.close();
    }
}
3) TreeCache: watches both the node itself and its children, like a combination of the two caches above.
public class CuratorWatcher3 {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);
        CuratorFramework curator = CuratorFrameworkFactory.builder().connectString(CONNECT_ADDR).sessionTimeoutMs(SESSION_TIMEOUT)
                .retryPolicy(policy).build();
        curator.start();
        TreeCache treeCache = new TreeCache(curator, "/treeCache");
        treeCache.start();
        treeCache.getListenable().addListener((curatorFramework, treeCacheEvent) -> {
            switch (treeCacheEvent.getType()) {
                case NODE_ADDED:
                    System.out.println("NODE_ADDED: path: " + treeCacheEvent.getData().getPath() + ", data: " + new String(treeCacheEvent.getData().getData())
                            + ", stat: " + treeCacheEvent.getData().getStat());
                    break;
                case NODE_UPDATED:
                    System.out.println("NODE_UPDATED: path: " + treeCacheEvent.getData().getPath() + ", data: " + new String(treeCacheEvent.getData().getData())
                            + ", stat: " + treeCacheEvent.getData().getStat());
                    break;
                case NODE_REMOVED:
                    System.out.println("NODE_REMOVED: path: " + treeCacheEvent.getData().getPath() + ", data: " + new String(treeCacheEvent.getData().getData())
                            + ", stat: " + treeCacheEvent.getData().getStat());
                    break;
                default:
                    break;
            }
        });
        curator.create().forPath("/treeCache", "123".getBytes());
        curator.create().creatingParentsIfNeeded().withMode(CreateMode.PERSISTENT).forPath("/treeCache/c1", "456".getBytes());
        curator.setData().forPath("/treeCache", "789".getBytes());
        curator.setData().forPath("/treeCache/c1", "910".getBytes());
        curator.delete().forPath("/treeCache/c1");
        curator.delete().forPath("/treeCache");
        Thread.sleep(5000);
        curator.close();
    }
}
PS: The Curator 2.4.2 jars do not include TreeCache, so I upgraded to 3.2.1. At runtime I then got java.lang.NoSuchMethodError: org.apache.zookeeper.server.quorum.flexible.QuorumMaj.<init>(Ljava/util/Map;. This error means the ZooKeeper server version does not match the zookeeper.jar version, so I upgraded zookeeper.jar to 3.5.2 to match the server. Running again produced java.lang.NoSuchMethodError: com.google.common.collect.Sets.newConcurrentHashSet()Ljav;. The same kind of NoSuchMethodError as before, so I guessed the guava version did not match what zookeeper.jar depends on (zookeeper.jar depends on io.netty, and io.netty depends on com.google.protobuf » protobuf-java). After upgrading guava to 20.0, it ran successfully.
3. Curator application scenarios
① Distributed locks (this part is adapted from "Learning ZooKeeper by Example: Distributed Locks")
In distributed systems, keeping data consistent often requires synchronization at some point in the program (within one JVM, Java provides synchronized and ReentrantLock for this). Curator builds on ZooKeeper's primitives to provide distributed locks for cross-process data consistency.
Reentrant lock: InterProcessMutex(CuratorFramework client, String path)
Acquire the lock with acquire(), which also has a timeout variant, and release it with release(). makeRevocable(RevocationListener<T> listener) sets up a cooperative revocation mechanism: when another process or thread wants you to release the lock, the listener is called. To request revocation of the current lock, call attemptRevoke(CuratorFramework client, String path).
First, create a simulated shared resource. The resource is expected to be accessed by only one thread at a time; otherwise there is a concurrency problem.
public class FakeLimitedResource {
    private final AtomicBoolean inUse = new AtomicBoolean(false);

    public void use() throws Exception {
        // With the lock held, this example never throws IllegalStateException for concurrent use;
        // without the lock, the sleep below makes the exception very likely.
        if (!inUse.compareAndSet(false, true)) {
            throw new IllegalStateException("Needs to be used by one client at a time");
        }
        try {
            Thread.sleep((long) (3 * Math.random()));
        } finally {
            inUse.set(false);
        }
    }
}
Next, create an ExampleClientThatLocks class that performs one full cycle: request the lock, use the resource, release the lock.
public class ExampleClientThatLocks {
    private final InterProcessMutex lock;
    //private final InterProcessSemaphoreMutex lock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientThatLocks(CuratorFramework framework, String path, FakeLimitedResource resource, String clientName) {
        this.lock = new InterProcessMutex(framework, path);
        //this.lock = new InterProcessSemaphoreMutex(framework, path);
        this.resource = resource;
        this.clientName = clientName;
    }

    public void doWork(long time, TimeUnit timeUnit) throws Exception {
        if (!lock.acquire(time, timeUnit)) {
            throw new IllegalStateException(clientName + " could not acquire the lock!");
        }
        System.out.println(clientName + " has the lock");
        /*if (!lock.acquire(time, timeUnit)) {
            throw new IllegalStateException(clientName + " could not acquire the lock!");
        }
        System.out.println(clientName + " has the lock");*/
        try {
            resource.use();
        } finally {
            System.out.println(clientName + " releasing the lock");
            lock.release();
            //lock.release();
        }
    }
}
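The acquire-with-timeout / use / release-in-finally shape of doWork() is the same pattern the JDK recommends for in-process locks. A minimal single-JVM sketch using ReentrantLock.tryLock (the helper name doWork here is hypothetical, for illustration only):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockPattern {
    // Try to take the lock within the timeout; if it succeeds, use the
    // resource and always release in a finally block.
    public static boolean doWork(ReentrantLock lock, Runnable resource, long time, TimeUnit unit)
            throws InterruptedException {
        if (!lock.tryLock(time, unit)) {
            return false; // could not acquire the lock in time
        }
        try {
            resource.run();
            return true;
        } finally {
            lock.unlock(); // release even if the resource throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        boolean ok = doWork(new ReentrantLock(), () -> System.out.println("using resource"), 1, TimeUnit.SECONDS);
        System.out.println("acquired: " + ok);
    }
}
```

Curator's acquire(time, unit) / release() follow the same discipline, just across processes.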
Finally, create a main program to test it.
public class InterProcessMutexExample {
    private static final int QTY = 5;
    private static final int REPETITIONS = QTY * 10;
    private static final String PATH = "/examples/locks";
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";

    public static void main(String[] args) throws Exception {
        final FakeLimitedResource resource = new FakeLimitedResource();
        ExecutorService executor = Executors.newFixedThreadPool(QTY);
        try {
            for (int i = 0; i < QTY; i++) {
                final int index = i;
                Callable<Void> task = () -> {
                    CuratorFramework curator = CuratorFrameworkFactory.newClient(CONNECT_ADDR, new RetryNTimes(3, 1000));
                    curator.start();
                    try {
                        final ExampleClientThatLocks example = new ExampleClientThatLocks(curator, PATH, resource, "Client " + index);
                        for (int j = 0; j < REPETITIONS; j++) {
                            example.doWork(10, TimeUnit.SECONDS);
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    } finally {
                        CloseableUtils.closeQuietly(curator);
                    }
                    return null;
                };
                executor.submit(task);
            }
            executor.shutdown();
            executor.awaitTermination(10, TimeUnit.MINUTES);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
The code is simple: it spawns 5 clients, each running in its own thread, and each client repeats the acquire-lock / use-resource / release-lock cycle 50 times. The output shows the lock being used exclusively by the instances in a random order. Since the lock is reentrant, a thread can call acquire() multiple times, and while the thread holds the lock acquire() always returns true. You should not use the same InterProcessMutex from multiple threads; instead create one InterProcessMutex per thread with the same path, and they will share the same distributed lock.
Non-reentrant lock: InterProcessSemaphoreMutex
Compared with the reentrant lock, this one lacks reentrancy, meaning the same thread cannot acquire it again while holding it; usage is otherwise similar. Modify ExampleClientThatLocks as follows:
public class ExampleClientThatLocks {
    //private final InterProcessMutex lock;
    private final InterProcessSemaphoreMutex lock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientThatLocks(CuratorFramework framework, String path, FakeLimitedResource resource, String clientName) {
        //this.lock = new InterProcessMutex(framework, path);
        this.lock = new InterProcessSemaphoreMutex(framework, path);
        this.resource = resource;
        this.clientName = clientName;
    }

    public void doWork(long time, TimeUnit timeUnit) throws Exception {
        if (!lock.acquire(time, timeUnit)) {
            throw new IllegalStateException(clientName + " could not acquire the lock!");
        }
        System.out.println(clientName + " has the lock");
        if (!lock.acquire(time, timeUnit)) {
            throw new IllegalStateException(clientName + " could not acquire the lock!");
        }
        System.out.println(clientName + " has the lock");
        try {
            resource.use();
        } finally {
            System.out.println(clientName + " releasing the lock");
            lock.release();
            lock.release();
        }
    }
}
Note that release() must also be called twice, matching the JDK's ReentrantLock convention: one release per acquire, and if release() is called one time too few, the thread still holds the lock. With the reentrant InterProcessMutex the code above works fine; acquire() can be called repeatedly and the later calls do not block. But if you swap in the non-reentrant InterProcessSemaphoreMutex and run the same code, the thread blocks on the second acquire(): this lock is not reentrant.
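The reentrancy contract described above (nested acquires succeed in the same thread, one release per acquire) can be checked in-process with the JDK's ReentrantLock, which follows the same rules:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    // Acquire the lock twice from the same thread and report the hold count
    // inside the nested section; each lock() is paired with one unlock().
    public static int nestedAcquire(ReentrantLock lock) {
        lock.lock();
        try {
            lock.lock(); // reentrant: the owning thread may acquire again
            try {
                return lock.getHoldCount(); // 2 while nested
            } finally {
                lock.unlock();
            }
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println("hold count inside nested section: " + nestedAcquire(new ReentrantLock()));
    }
}
```

A non-reentrant lock has no hold count at all: the second acquire from the same thread simply blocks, which is exactly what InterProcessSemaphoreMutex does above.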
Read-write lock: InterProcessReadWriteLock
Building on the reentrant-lock code, simply replace the ExampleClientThatLocks class with the ExampleClientReadWriteLocks class below.
public class ExampleClientReadWriteLocks {
    private final InterProcessReadWriteLock readWriteLock;
    private final InterProcessMutex readLock;
    private final InterProcessMutex writeLock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientReadWriteLocks(CuratorFramework client, String path, FakeLimitedResource resource, String clientName) {
        this.readWriteLock = new InterProcessReadWriteLock(client, path);
        this.readLock = readWriteLock.readLock();
        this.writeLock = readWriteLock.writeLock();
        this.resource = resource;
        this.clientName = clientName;
    }

    public void doWork(long time, TimeUnit unit) throws Exception {
        if (!writeLock.acquire(time, unit)) {
            throw new IllegalStateException(clientName + " could not acquire the writeLock!");
        }
        System.out.println(clientName + " has the writeLock");
        if (!readLock.acquire(time, unit)) {
            throw new IllegalStateException(clientName + " could not acquire the readLock!");
        }
        System.out.println(clientName + " has the readLock");
        try {
            resource.use();
        } finally {
            readLock.release();
            writeLock.release();
        }
    }
}
Shared semaphore: InterProcessSemaphoreV2
With this recipe, each call to acquire() returns a Lease object. The client must close its leases in a finally block, or they are leaked. If a client's session is lost, for example because the client crashed, its leases are closed automatically so that other clients can use them. Leases can also be returned explicitly with:
public void returnAll(Collection<Lease> leases)
public void returnLease(Lease lease)
Note that you can request several leases in one call; if the semaphore does not currently hold enough, the requesting thread blocks. Overloads with timeouts are also provided:
public Lease acquire()
public Collection<Lease> acquire(int qty)
public Lease acquire(long time, TimeUnit unit)
public Collection<Lease> acquire(int qty, long time, TimeUnit unit)
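These multi-lease semantics mirror the JDK's in-process java.util.concurrent.Semaphore, which can be used to preview the blocking and timeout behavior without a ZooKeeper server. A sketch:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SemaphoreAnalogy {
    // Start with 10 permits, take 5 then 1 more, then show that asking for
    // another 5 cannot be satisfied and times out. Returns {gotFive, allReturned}.
    public static boolean[] run() throws InterruptedException {
        Semaphore semaphore = new Semaphore(10);
        semaphore.acquire(5);                 // 5 permits left
        semaphore.acquire();                  // 4 permits left
        boolean gotFive = semaphore.tryAcquire(5, 100, TimeUnit.MILLISECONDS); // fails: only 4 remain
        semaphore.release();                  // return the single permit
        semaphore.release(5);                 // return the batch of 5
        return new boolean[] { gotFive, semaphore.availablePermits() == 10 };
    }

    public static void main(String[] args) throws InterruptedException {
        boolean[] r = run();
        System.out.println("tryAcquire(5) succeeded: " + r[0] + ", all permits returned: " + r[1]);
    }
}
```

InterProcessSemaphoreV2 behaves the same way across processes, with the extra rule that leases are returned automatically if the holder's session dies.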
The Curator example:
public class InterProcessSemaphoreExample {
    private static final int MAX_LEASE = 10;
    private static final String PATH = "/examples/locks";

    public static void main(String[] args) throws Exception {
        FakeLimitedResource resource = new FakeLimitedResource();
        try (TestingServer server = new TestingServer()) {
            CuratorFramework client = CuratorFrameworkFactory.newClient(server.getConnectString(), new ExponentialBackoffRetry(1000, 3));
            client.start();
            InterProcessSemaphoreV2 semaphore = new InterProcessSemaphoreV2(client, PATH, MAX_LEASE);
            Collection<Lease> leases = semaphore.acquire(5);
            System.out.println("get " + leases.size() + " leases");
            Lease lease = semaphore.acquire();
            System.out.println("get another lease");
            resource.use();
            Collection<Lease> leases2 = semaphore.acquire(5, 10, TimeUnit.SECONDS);
            System.out.println("Should timeout and acquire return " + leases2);
            System.out.println("return one lease");
            semaphore.returnLease(lease);
            System.out.println("return another 5 leases");
            semaphore.returnAll(leases);
        }
    }
}
We first acquire 5 leases, which we eventually return to the semaphore. We then request one more lease; since the semaphore still has 5 available, the request succeeds and returns a lease, leaving 4. The next request for 5 leases cannot be satisfied, blocks until the timeout, and returns null.
All the locks discussed above are fair: from ZooKeeper's point of view, each client obtains the lock in the order it requested it.
Multi-shared lock: InterProcessMultiLock groups several locks and operates on them as a single entity. An example:
public class InterProcessMultiLockExample {
    private static final String PATH1 = "/examples/locks1";
    private static final String PATH2 = "/examples/locks2";

    public static void main(String[] args) throws Exception {
        FakeLimitedResource resource = new FakeLimitedResource();
        try (TestingServer server = new TestingServer()) {
            CuratorFramework client = CuratorFrameworkFactory.newClient(server.getConnectString(), new ExponentialBackoffRetry(1000, 3));
            client.start();
            InterProcessLock lock1 = new InterProcessMutex(client, PATH1);
            InterProcessLock lock2 = new InterProcessSemaphoreMutex(client, PATH2);
            InterProcessMultiLock lock = new InterProcessMultiLock(Arrays.asList(lock1, lock2));
            if (!lock.acquire(10, TimeUnit.SECONDS)) {
                throw new IllegalStateException("could not acquire the lock");
            }
            System.out.println("has the lock");
            System.out.println("has the lock1: " + lock1.isAcquiredInThisProcess());
            System.out.println("has the lock2: " + lock2.isAcquiredInThisProcess());
            try {
                resource.use(); // access resource exclusively
            } finally {
                System.out.println("releasing the lock");
                lock.release(); // always release the lock in a finally block
            }
            System.out.println("has the lock1: " + lock1.isAcquiredInThisProcess());
            System.out.println("has the lock2: " + lock2.isAcquiredInThisProcess());
        }
    }
}
Create an InterProcessMultiLock containing one reentrant lock and one non-reentrant lock. After acquire(), the thread holds both locks; after release(), both are released.
② Distributed counter
When you hear "distributed counter" you probably think of the classic AtomicInteger. Within a single JVM that works fine, but in a distributed setting it does not, which is where Curator's DistributedAtomicInteger comes in.
public class CuratorDistributedAtomicInteger {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        // Retry policy: start with a 1-second sleep, retry up to 10 times
        RetryPolicy policy = new ExponentialBackoffRetry(1000, 10);
        // Create the Curator client through the factory
        CuratorFramework curator = CuratorFrameworkFactory.builder().connectString(CONNECT_ADDR)
                .sessionTimeoutMs(SESSION_TIMEOUT).retryPolicy(policy).build();
        // Start the connection
        curator.start();
        DistributedAtomicInteger atomicInteger = new DistributedAtomicInteger(curator, "/super", new RetryNTimes(3, 1000));
        AtomicValue<Integer> value = atomicInteger.add(1);
        System.out.println(value.succeeded());
        System.out.println(value.preValue());  // old value
        System.out.println(value.postValue()); // new value
        curator.close();
    }
}
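DistributedAtomicInteger works optimistically: it reads the current value, tries to write value plus the delta, and retries under its RetryPolicy if another client won the race, which is why succeeded() must be checked. The same compare-and-set loop, sketched in-JVM with AtomicInteger (an analogy, not Curator's implementation):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetrySketch {
    // Read the old value, attempt old -> old + delta, and retry on contention,
    // just as the distributed counter retries under its RetryPolicy.
    public static int addWithRetries(AtomicInteger value, int delta, int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            int pre = value.get();
            if (value.compareAndSet(pre, pre + delta)) {
                return pre + delta; // the "postValue"
            }
        }
        throw new IllegalStateException("add did not succeed within " + maxTries + " tries");
    }

    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(0);
        System.out.println("postValue = " + addWithRetries(v, 1, 3));
    }
}
```

In the distributed version the "compare-and-set" is a ZooKeeper write with a version check, so a failed attempt means some other process updated the counter first.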
③ Barrier
A distributed barrier blocks the waiting processes on all nodes until some condition is met, after which all of them continue. Think of a horse race: the horses gather at the starting line, and at the signal they all burst out together.
The DistributedBarrier class implements this barrier. Its constructor is:
public DistributedBarrier(CuratorFramework client, String barrierPath)
First call setBarrier() to put the barrier in place; threads that then call waitOnBarrier() block on it. When the condition is met, call removeBarrier() to remove the barrier, and all waiting threads continue. An example:
public class DistributedBarrierExample {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";
    private static final int SESSION_TIMEOUT = 5000;

    public static void main(String[] args) throws Exception {
        CuratorFramework curator = CuratorFrameworkFactory.newClient(CONNECT_ADDR, new RetryNTimes(3, 1000));
        curator.start();
        ExecutorService executor = Executors.newFixedThreadPool(5);
        DistributedBarrier controlBarrier = new DistributedBarrier(curator, "/example/barrier");
        controlBarrier.setBarrier();
        for (int i = 0; i < 5; i++) {
            final DistributedBarrier barrier = new DistributedBarrier(curator, "/example/barrier");
            final int index = i;
            Callable<Void> task = () -> {
                Thread.sleep((long) (3 * Math.random()));
                System.out.println("Client#" + index + " wait on Barrier");
                barrier.waitOnBarrier();
                System.out.println("Client#" + index + " begins");
                return null;
            };
            executor.submit(task);
        }
        Thread.sleep(5000);
        controlBarrier.removeBarrier();
        Thread.sleep(5000);
        executor.shutdown();
        curator.close();
    }
}
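The setBarrier() / waitOnBarrier() / removeBarrier() flow is a distributed version of a one-shot gate. A single-JVM analogy with CountDownLatch (illustrative only; the method runClients is hypothetical):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class BarrierAnalogy {
    // All client threads block on the gate until it is opened, then pass together.
    public static int runClients(int parties) throws InterruptedException {
        CountDownLatch gate = new CountDownLatch(1);       // plays the role of setBarrier()
        CountDownLatch finished = new CountDownLatch(parties);
        AtomicInteger passed = new AtomicInteger();
        for (int i = 0; i < parties; i++) {
            new Thread(() -> {
                try {
                    gate.await();                          // waitOnBarrier()
                    passed.incrementAndGet();
                    finished.countDown();
                } catch (InterruptedException ignored) {
                }
            }).start();
        }
        gate.countDown();                                  // removeBarrier(): everyone is released at once
        finished.await();
        return passed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runClients(5) + " clients passed the barrier");
    }
}
```

The distributed version does the same thing across machines, with the barrier's presence represented by a znode.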
Double barrier: DistributedDoubleBarrier. A double barrier lets clients synchronize at both the start and the end of a computation: when enough processes have joined the barrier, the computation starts, and each process leaves the barrier when its computation is done. The constructor is:
public DistributedDoubleBarrier(CuratorFramework client, String barrierPath, int memberQty)
memberQty is the number of members. enter() blocks the caller until all members have called enter(); likewise, leave() blocks the calling thread until all members have called leave(). It is like a 100-meter race: the starting gun fires, all runners set off, and the race ends only when every runner has crossed the finish line. Example code:
public class DistributedDoubleBarrierExample {
    private static final String CONNECT_ADDR = "192.168.159.129:2181,192.168.159.130:2181,192.168.159.131:2181,192.168.159.132:2181,192.168.159.133:2181";

    public static void main(String[] args) throws InterruptedException {
        CuratorFramework curator = CuratorFrameworkFactory.newClient(CONNECT_ADDR, new RetryNTimes(3, 1000));
        curator.start();
        ExecutorService executor = Executors.newFixedThreadPool(5);
        for (int i = 0; i < 5; i++) {
            final DistributedDoubleBarrier barrier = new DistributedDoubleBarrier(curator, "/example/barrier", 5);
            final int index = i;
            Callable<Void> task = () -> {
                Thread.sleep((long) (3000 * Math.random()));
                System.out.println("Client#" + index + " enter");
                barrier.enter();
                System.out.println("Client#" + index + " begins");
                Thread.sleep((long) (3000 * Math.random()));
                barrier.leave();
                System.out.println("Client#" + index + " left");
                return null;
            };
            executor.submit(task);
        }
        executor.shutdown();
        executor.awaitTermination(10, TimeUnit.MINUTES);
        curator.close();
    }
}
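The enter()/leave() protocol is a distributed version of two in-process rendezvous points. A single-JVM analogy using CyclicBarrier (illustrative, not how Curator implements it):

```java
import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;
import java.util.concurrent.atomic.AtomicInteger;

public class DoubleBarrierAnalogy {
    // No member starts its work until all have "entered", and no member
    // returns until all have "left".
    public static int compute(int members) throws Exception {
        CyclicBarrier enter = new CyclicBarrier(members); // all must enter() before work starts
        CyclicBarrier leave = new CyclicBarrier(members); // all must leave() before anyone finishes
        AtomicInteger work = new AtomicInteger();
        Thread[] threads = new Thread[members];
        for (int i = 0; i < members; i++) {
            threads[i] = new Thread(() -> {
                try {
                    enter.await();
                    work.incrementAndGet(); // the "computation"
                    leave.await();
                } catch (InterruptedException | BrokenBarrierException ignored) {
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        return work.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(compute(5) + " members completed the computation");
    }
}
```

DistributedDoubleBarrier gives the same two synchronization points to processes spread across machines, with memberQty playing the role of the barrier's party count.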
Translated from: 隔壁老王的專欄, https://blog.csdn.net/haoyuyang/article/details/53469269