Understanding cluster, the Node.js Multi-core Processing Module
In the "Node.js from scratch" series, I introduce how to use JavaScript as a server-side scripting language and do web development with Node.js. Node.js is built on V8, one of the fastest JavaScript engines available; the Chrome browser is also based on V8 and stays smooth even with 20-30 pages open at once. Express, the standard web framework for Node.js, helps us stand up a website quickly, with higher development efficiency and a gentler learning curve than PHP. It is a great fit for small sites, personal sites — our own geek sites!
About the Author
- Zhang Dan (Conan), programmer: Java, R, PHP, JavaScript
- weibo: @Conan_Z
- blog: http://blog.fens.me
- email: [email protected]
Please credit the source when reposting:
http://blog.fens.me/nodejs-core-cluster/
Preface
As everyone knows, Node.js is a single-process, single-threaded server engine: no matter how powerful the hardware, it can only use a single CPU for computation. So people built third-party cluster modules to let Node exploit multiple cores in parallel.
As Node.js matured, putting it into production made multi-process, multi-core support a必 requirement — and in v0.6.0, Node.js shipped cluster as a built-in feature. With that, Node.js finally came into view as a self-contained application development solution.
Table of Contents
- Introduction to cluster
- Basic usage of cluster
- How cluster works
- The cluster API
- Communication between master and workers
- Load balancing with cluster — fails on win7
- Load balancing with cluster — succeeds on ubuntu
- Testing cluster's load-balancing strategy
1. Introduction to cluster
cluster is a built-in Node.js module for multi-core processing. It takes much of the difficulty out of developing parallel, multi-process programs, making it easy to build a cluster of processes for load balancing.
2. Basic usage of cluster
My system environment:
- win7 64bit
- Node.js: v0.10.5
- npm: 1.2.19
On Windows, we use cluster to start node on multiple cores and serve web requests.
Create the project directory:
~ D:\workspace\javascript>mkdir nodejs-cluster && cd nodejs-cluster
Create a file: app.js
~ vi app.js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log("master start...");

    // Fork workers.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    cluster.on('listening', function (worker, address) {
        console.log('listening: worker ' + worker.process.pid + ', Address: ' + address.address + ":" + address.port);
    });

    cluster.on('exit', function (worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });
} else {
    // Workers share one listening socket; listen(0) picks a random free port.
    http.createServer(function (req, res) {
        res.writeHead(200);
        res.end("hello world\n");
    }).listen(0);
}
Start the node program from the console:
~ D:\workspace\javascript\nodejs-cluster>node app.js
master start...
listening: worker 2368, Address: 0.0.0.0:57132
listening: worker 1880, Address: 0.0.0.0:57132
listening: worker 1384, Address: 0.0.0.0:57132
listening: worker 1652, Address: 0.0.0.0:57132
The master is the controlling node and the workers are the running nodes; one worker is forked per CPU reported by the OS. My machine has a dual-core CPU with two hardware threads per core, so it is detected as 4 logical cores and 4 workers are started.
3. How cluster works
Each worker process is created with child_process.fork() and communicates with the master process over IPC (Inter-Process Communication).
When a worker calls server.listen(...), the arguments are serialized and sent to the master as a request. If the master already has a listening server matching the worker's requirements, it passes the handle to the worker; if not, it creates one first and then passes its handle to the worker.
At the boundaries, there are 3 interesting behaviors:
Note: the server.listen() calls below go through the underlying net.Server class (http.Server inherits from it).
- 1. server.listen({fd: 7}): the message is passed to the master, so it is file descriptor 7 in the master that gets listened on and whose handle is passed to the worker — not whatever the worker thinks fd 7 refers to.
- 2. server.listen(handle): listening on an explicit handle makes the worker use that handle directly, without asking the master for one.
- 3. server.listen(0): normally this means "listen on a random port", but in a cluster every worker receives the same "random" port and shares it over the socket — like port 57132 in the example above.
When multiple processes accept() on the same resource, the operating system balances the load across them very efficiently. Node.js itself has no routing logic, and workers share no state, so the program should be designed not to lean too heavily on in-memory data objects — such as memory-based sessions.
Because the workers all run independently, they can be killed or restarted as the program requires without affecting one another. As long as any workers are alive, the master keeps accepting connections. Node does not maintain the number of workers automatically; keeping a pool of workers alive is up to us.
4. The cluster API
Official documentation: http://nodejs.org/api/cluster.html#cluster_cluster
The cluster object
Properties and methods of cluster:
- cluster.settings: the cluster's configuration object
- cluster.isMaster: true if the process is the master
- cluster.isWorker: true if the process is a worker
- Event: 'fork': emitted when a new worker has been forked
- Event: 'online': emitted when a worker is up and running
- Event: 'listening': emitted when a worker's server starts listening
- Event: 'disconnect': emitted when a worker's IPC channel has disconnected
- Event: 'exit': emitted when a worker exits
- Event: 'setup': emitted when setupMaster() is called
- cluster.setupMaster([settings]): change the settings used when forking workers
- cluster.fork([env]): fork a new worker process
- cluster.disconnect([callback]): disconnect all workers
- cluster.worker: the current worker object (available only in a worker)
- cluster.workers: a hash of all live worker objects (available only in the master)
The worker object
Properties and methods of a worker, obtained via cluster.workers or cluster.worker:
- worker.id: the worker's ID (its index in the cluster, not the OS process ID)
- worker.process: the underlying ChildProcess object
- worker.suicide: set after disconnect(); tells whether the worker exited voluntarily
- worker.send(message, [sendHandle]): send a message from the master to this worker. Note: a worker sends messages to the master with process.send(message)
- worker.kill([signal='SIGTERM']): kill this worker; alias destroy()
- worker.disconnect(): disconnect the worker so that it exits gracefully on its own
- Event: 'message': emitted when a message arrives between master and worker
- Event: 'online': emitted when this particular worker is up and running
- Event: 'listening': emitted when this worker's server starts listening
- Event: 'disconnect': emitted when this worker's IPC channel has disconnected
- Event: 'exit': emitted when this worker exits
5. Communication between master and workers
Using the cluster API, let the master and the workers talk to each other.
Create a file: cluster.js
~ vi cluster.js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('[master] ' + "start master...");

    for (var i = 0; i < numCPUs; i++) {
        var wk = cluster.fork();
        wk.send('[master] ' + 'hi worker' + wk.id);
    }

    cluster.on('fork', function (worker) {
        console.log('[master] ' + 'fork: worker' + worker.id);
    });

    cluster.on('online', function (worker) {
        console.log('[master] ' + 'online: worker' + worker.id);
    });

    cluster.on('listening', function (worker, address) {
        console.log('[master] ' + 'listening: worker' + worker.id + ',pid:' + worker.process.pid + ', Address:' + address.address + ":" + address.port);
    });

    cluster.on('disconnect', function (worker) {
        console.log('[master] ' + 'disconnect: worker' + worker.id);
    });

    cluster.on('exit', function (worker, code, signal) {
        console.log('[master] ' + 'exit worker' + worker.id + ' died');
    });

    function eachWorker(callback) {
        for (var id in cluster.workers) {
            callback(cluster.workers[id]);
        }
    }

    // After 3 seconds, broadcast a message to every live worker.
    setTimeout(function () {
        eachWorker(function (worker) {
            worker.send('[master] ' + 'send message to worker' + worker.id);
        });
    }, 3000);

    Object.keys(cluster.workers).forEach(function (id) {
        cluster.workers[id].on('message', function (msg) {
            console.log('[master] ' + 'message ' + msg);
        });
    });
} else if (cluster.isWorker) {
    console.log('[worker] ' + "start worker ..." + cluster.worker.id);

    process.on('message', function (msg) {
        console.log('[worker] ' + msg);
        process.send('[worker] worker' + cluster.worker.id + ' received!');
    });

    http.createServer(function (req, res) {
        res.writeHead(200, {"content-type": "text/html"});
        res.end('worker' + cluster.worker.id + ',PID:' + process.pid);
    }).listen(3000);
}
Console log:
~ D:\workspace\javascript\nodejs-cluster>node cluster.js
[master] start master...
[worker] start worker ...1
[worker] [master] hi worker1
[worker] start worker ...2
[worker] [master] hi worker2
[master] fork: worker1
[master] fork: worker2
[master] fork: worker3
[master] fork: worker4
[master] online: worker1
[master] online: worker2
[master] message [worker] worker1 received!
[master] message [worker] worker2 received!
[master] listening: worker1,pid:6068, Address:0.0.0.0:3000
[master] listening: worker2,pid:1408, Address:0.0.0.0:3000
[master] online: worker3
[worker] start worker ...3
[worker] [master] hi worker3
[master] message [worker] worker3 received!
[master] listening: worker3,pid:3428, Address:0.0.0.0:3000
[master] online: worker4
[worker] start worker ...4
[worker] [master] hi worker4
[master] message [worker] worker4 received!
[master] listening: worker4,pid:6872, Address:0.0.0.0:3000
[worker] [master] send message to worker1
[worker] [master] send message to worker2
[worker] [master] send message to worker3
[worker] [master] send message to worker4
[master] message [worker] worker1 received!
[master] message [worker] worker2 received!
[master] message [worker] worker3 received!
[master] message [worker] worker4 received!
6. Load balancing with cluster — fails on win7
Create a file: server.js
~ vi server.js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('[master] ' + "start master...");

    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    cluster.on('listening', function (worker, address) {
        console.log('[master] ' + 'listening: worker' + worker.id + ',pid:' + worker.process.pid + ', Address:' + address.address + ":" + address.port);
    });
} else if (cluster.isWorker) {
    console.log('[worker] ' + "start worker ..." + cluster.worker.id);

    http.createServer(function (req, res) {
        console.log('worker' + cluster.worker.id);
        res.end('worker' + cluster.worker.id + ',PID:' + process.pid);
    }).listen(3000);
}
Start the server:
~ D:\workspace\javascript\nodejs-cluster>node server.js
[master] start master...
[worker] start worker ...1
[worker] start worker ...2
[master] listening: worker1,pid:1536, Address:0.0.0.0:3000
[master] listening: worker2,pid:5920, Address:0.0.0.0:3000
[worker] start worker ...3
[master] listening: worker3,pid:7156, Address:0.0.0.0:3000
[worker] start worker ...4
[master] listening: worker4,pid:2868, Address:0.0.0.0:3000
worker4
worker4
worker4
worker4
worker4
worker4
worker4
worker4
Access it with curl:
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
C:\Users\Administrator>curl localhost:3000
worker4,PID:2868
We have hit cluster's problem on Windows: every request is handled by worker4. Time to switch to Linux and test there.
7. Load balancing with cluster — succeeds on ubuntu
Linux system environment:
- Linux: Ubuntu 12.04.2 64bit Server
- Node: v0.11.2
- npm: 1.2.21
Set up the project (same steps as before):
~ cd /home/conan/nodejs/
~ mkdir nodejs-cluster && cd nodejs-cluster
~ vi server.js
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('[master] ' + "start master...");

    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }

    cluster.on('listening', function (worker, address) {
        console.log('[master] ' + 'listening: worker' + worker.id + ',pid:' + worker.process.pid + ', Address:' + address.address + ":" + address.port);
    });
} else if (cluster.isWorker) {
    console.log('[worker] ' + "start worker ..." + cluster.worker.id);

    http.createServer(function (req, res) {
        console.log('worker' + cluster.worker.id);
        res.end('worker' + cluster.worker.id + ',PID:' + process.pid);
    }).listen(3000);
}
Start the server:
conan@conan-deskop:~/nodejs/nodejs-cluster$ node server.js
[master] start master...
[worker] start worker ...1
[master] listening: worker1,pid:2925, Address:0.0.0.0:3000
[worker] start worker ...3
[master] listening: worker3,pid:2931, Address:0.0.0.0:3000
[worker] start worker ...4
[master] listening: worker4,pid:2932, Address:0.0.0.0:3000
[worker] start worker ...2
[master] listening: worker2,pid:2930, Address:0.0.0.0:3000
worker4
worker2
worker1
worker3
worker4
worker2
worker1
Access it with curl:
C:\Users\Administrator>curl 192.168.1.20:3000
worker4,PID:2932
C:\Users\Administrator>curl 192.168.1.20:3000
worker2,PID:2930
C:\Users\Administrator>curl 192.168.1.20:3000
worker1,PID:2925
C:\Users\Administrator>curl 192.168.1.20:3000
worker3,PID:2931
C:\Users\Administrator>curl 192.168.1.20:3000
worker4,PID:2932
C:\Users\Administrator>curl 192.168.1.20:3000
worker2,PID:2930
C:\Users\Administrator>curl 192.168.1.20:3000
worker1,PID:2925
On Linux, cluster works correctly!
8. Testing cluster's load-balancing strategy
We run the test on Linux, using the benchmarking tool siege.
Install siege:
~ sudo apt-get install siege
Start the node cluster:
~ node server.js > server.log
Run siege, simulating 50 concurrent users:
~ sudo siege -c 50 http://localhost:3000
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.01 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.01 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.02 secs: 16 bytes ==> /
HTTP/1.1 200 0.00 secs: 16 bytes ==> /
HTTP/1.1 200 0.02 secs: 16 bytes ==> /
HTTP/1.1 200 0.01 secs: 16 bytes ==> /
HTTP/1.1 200 0.01 secs: 16 bytes ==> /
.....
^C
Lifting the server siege... done. Transactions: 3760 hits
Availability: 100.00 %
Elapsed time: 39.66 secs
Data transferred: 0.06 MB
Response time: 0.01 secs
Transaction rate: 94.81 trans/sec
Throughput: 0.00 MB/sec
Concurrency: 1.24
Successful transactions: 3760
Failed transactions: 0
Longest transaction: 0.20
Shortest transaction: 0.00
FILE: /var/siege.log
You can disable this annoying message by editing
the .siegerc file in your home directory; change
the directive 'show-logfile' to false.
Summing up the results: 3760 requests were served in 39.66 seconds, about 94.81 requests per second.
Check the server.log file:
~ ls -l
total 64
-rw-rw-r-- 1 conan conan 756 9月 28 15:48 server.js
-rw-rw-r-- 1 conan conan 50313 9月 28 16:26 server.log
~ tail server.log
worker4
worker1
worker2
worker4
worker1
worker2
worker4
worker3
worker2
worker1
Finally, analyze server.log with R:
~ R
> df<-read.table(file="server.log",skip=9,header=FALSE)
> summary(df)
V1
worker1:1559
worker2:1579
worker3:1570
worker4:1535
The requests are spread across the workers in nearly equal numbers, so cluster's load-balancing strategy appears to be effectively random assignment.
There we go — another useful skill learned! With cluster we can build multi-core applications and make full use of the performance that multiple CPUs bring!