How Much Do You Know About IO Models (2)

1. Introduction

The previous article in this series, How Much Do You Know About IO Models (1), was rather theory-heavy, and many readers found it hard to follow. In this one, let's switch perspective and look at IO models from the code side.

2. Socket Programming Basics

Before we start, let's go over a few concepts you need to know in advance:

socket: literally an "outlet". In computer communication, a socket is a convention, a mechanism, by which computers talk to each other: through it, one machine can receive data from other machines and send data to them. Just as plugging into a wall outlet gets you power from the grid, an application that wants to exchange data with a remote computer needs to connect to the network, and the socket is the tool it uses to do so.

You also need to know the basic flow of socket programming: socket → bind → listen → accept → read/write.
(Figure: the basic socket programming workflow)

3. Synchronous Blocking IO

First, a quick recap of the definition: with blocking IO, after a thread in the application process issues an IO call, it stays in a waiting state from the system call until the kernel finishes the IO operation and returns the result; such an IO operation is a blocking one.

public static void Start()
{
    //1. Create the TCP server socket
    var serverSocket = new Socket(AddressFamily.InterNetwork,
                                   SocketType.Stream, ProtocolType.Tcp);
    var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
    //2. Bind the IP and port
    serverSocket.Bind(ipEndpoint);
    //3. Start listening, specifying the backlog
    serverSocket.Listen(10);
    Console.WriteLine($"Server started ({ipEndpoint}) - waiting for connections...");

    while (true)
    {
        //4. Wait for a client connection
        var clientSocket = serverSocket.Accept(); // blocks
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - connected");
        var buffer = new byte[512];
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - receiving data...");
        int readLength = clientSocket.Receive(buffer); // blocks
        var msg = Encoding.UTF8.GetString(buffer, 0, readLength);
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - received data: {msg}");
        var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
        clientSocket.Send(sendBuffer);
    }
}

(Figure: run result of the blocking IO server)

The code is simple; the comments say it all. The run result is shown above. But a few points deserve emphasis:

  • At serverSocket.Accept(), waiting for a connection, the thread blocks!

  • At clientSocket.Receive(buffer), receiving data, the thread blocks!

What problems does this cause?

  • The next connection request can only be accepted after the current read has completed.

  • Each connection can receive data only once.

4. Synchronous Non-blocking IO

Having read this, you may say these two problems are easy to fix: just spawn a new thread to receive the data. That leads to the improved code below.

public static void Start2()
{
    //1. Create the TCP server socket
    var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                                   ProtocolType.Tcp);
    var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
    //2. Bind the IP and port
    serverSocket.Bind(ipEndpoint);
    //3. Start listening, specifying the backlog
    serverSocket.Listen(10);
    Console.WriteLine($"Server started ({ipEndpoint}) - waiting for connections...");
    while (true)
    {
        //4. Wait for a client connection
        var clientSocket = serverSocket.Accept(); // blocks
        Task.Run(() => ReceiveData(clientSocket));
    }
}

private static void ReceiveData(Socket clientSocket)
{
    Console.WriteLine($"{clientSocket.RemoteEndPoint} - connected");
    var buffer = new byte[512];
    while (true)
    {
        if (clientSocket.Available == 0)
            continue; // busy-waits until data arrives
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - receiving data...");
        int readLength = clientSocket.Receive(buffer); // blocks
        var msg = Encoding.UTF8.GetString(buffer, 0, readLength);
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - received data: {msg}");
        var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
        clientSocket.Send(sendBuffer);
    }
}

(Figure: run result of the multithreaded server)
Yes, multithreading solves the problems above. But if you watch the animated capture, you should notice something: with only 4 client connections established, CPU usage already shoots straight up.

The root cause is that the server's IO model is still the blocking one; to work around the blocking we resort to busy polling, which issues a flood of useless system calls and keeps driving the CPU up.

5. IO Multiplexing

Now that we know the cause, let's rework the code, using asynchronous calls to handle connecting, receiving, and sending.

public static class NioServer
{
    private static ManualResetEvent _acceptEvent = new ManualResetEvent(true);
    private static ManualResetEvent _readEvent = new ManualResetEvent(true);

    public static void Start()
    {
        //1. Create the TCP server socket
        var serverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream,
                                       ProtocolType.Tcp);
        // serverSocket.Blocking = false; // set non-blocking
        var ipEndpoint = new IPEndPoint(IPAddress.Loopback, 5001);
        //2. Bind the IP and port
        serverSocket.Bind(ipEndpoint);
        //3. Start listening, specifying the backlog
        serverSocket.Listen(10);
        Console.WriteLine($"Server started ({ipEndpoint}) - waiting for connections...");

        while (true)
        {
            _acceptEvent.Reset(); // reset the event
            serverSocket.BeginAccept(OnClientConnected, serverSocket);
            _acceptEvent.WaitOne(); // block until a client connects
        }
    }

    private static void OnClientConnected(IAsyncResult ar)
    {
        _acceptEvent.Set(); // a client connected: release the accept loop
        var serverSocket = ar.AsyncState as Socket;
        Debug.Assert(serverSocket != null, nameof(serverSocket) + " != null");

        var clientSocket = serverSocket.EndAccept(ar);
        Console.WriteLine($"{clientSocket.RemoteEndPoint} - connected");

        while (true)
        {
            _readEvent.Reset(); // reset the event
            var stateObj = new StateObject { ClientSocket = clientSocket };
            clientSocket.BeginReceive(stateObj.Buffer, 0, stateObj.Buffer.Length,
                                   SocketFlags.None, OnMessageReceived, stateObj);
            _readEvent.WaitOne(); // block until the reply has been sent
        }
    }

    private static void OnMessageReceived(IAsyncResult ar)
    {
        var state = ar.AsyncState as StateObject;
        Debug.Assert(state != null, nameof(state) + " != null");
        var receiveLength = state.ClientSocket.EndReceive(ar);

        if (receiveLength > 0)
        {
            var msg = Encoding.UTF8.GetString(state.Buffer, 0, receiveLength);
            Console.WriteLine($"{state.ClientSocket.RemoteEndPoint} - received data: {msg}");

            var sendBuffer = Encoding.UTF8.GetBytes($"received:{msg}");
            state.ClientSocket.BeginSend(sendBuffer, 0, sendBuffer.Length,
                                SocketFlags.None, SendMessage, state.ClientSocket);
        }
    }

    private static void SendMessage(IAsyncResult ar)
    {
        var clientSocket = ar.AsyncState as Socket;
        Debug.Assert(clientSocket != null, nameof(clientSocket) + " != null");
        clientSocket.EndSend(ar);
        _readEvent.Set(); // send finished: release the read loop
    }
}

public class StateObject
{
    // Client socket.
    public Socket ClientSocket = null;
    // Size of receive buffer.
    public const int BufferSize = 1024;
    // Receive buffer.
    public byte[] Buffer = new byte[BufferSize];
}

First the run result. As the figure below shows, apart from a brief CPU blip while connections are being established, CPU usage stays flat and low throughout receiving and sending.
(Figure: CPU usage of the multiplexed server)
Analyzing the code, we find:

  • CPU usage is down, but code complexity is up.

  • Client connections are handled with the asynchronous APIs BeginAccept and EndAccept.

  • Data is received with the asynchronous APIs BeginReceive and EndReceive.

  • Data is sent with the asynchronous APIs BeginSend and EndSend.

  • ManualResetEvent is used for thread synchronization, so threads don't spin idly.

Now you may wonder: which IO multiplexing model does this code actually use?
Good question. Let's find out.

6. Verifying the I/O Model

To verify which IO model an application uses, we just need to determine which system calls it issues at runtime. On Linux, we can use the strace command to trace the system calls and signals of a given application.

6.1 System Calls Issued by Synchronous Blocking I/O

You can use VSCode Remote to connect to your Linux box, create a project named Io.Demo with the blocking IO code from above, and start tracing with the following command:

shengjie@ubuntu:~/coding/dotnet$ ls
Io.Demo
shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/
Press any key to start!
Server started (127.0.0.1:5001) - waiting for connections...
127.0.0.1:36876 - connected
127.0.0.1:36876 - receiving data...
127.0.0.1:36876 - received data: 1

In another terminal, run nc localhost 5001 to simulate a client connection.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001
1
received:1

Use netstat to inspect the established connections.

shengjie@ubuntu:/proc/3763$ netstat -natp | grep 5001
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp     0    0  127.0.0.1:5001      0.0.0.0:*           LISTEN      3763/Io.Demo
tcp     0    0  127.0.0.1:36920     127.0.0.1:5001      ESTABLISHED 3798/nc
tcp     0    0  127.0.0.1:5001      127.0.0.1:36920     ESTABLISHED 3763/Io.Demo

In another terminal, run ps -h | grep dotnet to find the process id.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet
3694 pts/1   S+    0:11 strace -ff -o Io.Demo/strace/io dotnet run --project Io.Demo/
3696 pts/1   Sl+   0:01 dotnet run --project Io.Demo/
3763 pts/1   Sl+   0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo
3779 pts/2   S+    0:00 grep --color=auto dotnet
shengjie@ubuntu:~/coding/dotnet$ ls Io.Demo/strace/ # list the generated system-call trace files
io.3696  io.3702  io.3708  io.3714  io.3720  io.3726  io.3732  io.3738  io.3744  io.3750  io.3766  io.3772  io.3782  io.3827 
io.3697  io.3703  io.3709  io.3715  io.3721  io.3727  io.3733  io.3739  io.3745  io.3751  io.3767  io.3773  io.3786  io.3828 
io.3698  io.3704  io.3710  io.3716  io.3722  io.3728  io.3734  io.3740  io.3746  io.3752  io.3768  io.3774  io.3787 
io.3699  io.3705  io.3711  io.3717  io.3723  io.3729  io.3735  io.3741  io.3747  io.3763  io.3769  io.3777  io.3797 
io.3700  io.3706  io.3712  io.3718  io.3724  io.3730  io.3736  io.3742  io.3748  io.3764  io.3770  io.3780  io.3799 
io.3701  io.3707  io.3713  io.3719  io.3725  io.3731  io.3737  io.3743  io.3749  io.3765  io.3771  io.3781  io.3800

From the above, the process id is 3763. The following commands list the process's threads and the file descriptors it has opened:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/3763  # enter the process directory
shengjie@ubuntu:/proc/3763$ ls 
attr    cmdline     environ io     mem     ns       pagemap   sched   smaps_rollup syscall    wchan 
autogroup  comm       exe   limits   mountinfo  numa_maps   patch_state schedstat stack     task 
auxv    coredump_filter fd    loginuid  mounts   oom_adj    personality sessionid stat     timers 
cgroup   cpuset      fdinfo  map_files mountstats oom_score   projid_map  setgroups statm     timerslack_ns 
clear_refs cwd       gid_map maps    net     oom_score_adj root     smaps   status    uid_map 
shengjie@ubuntu:/proc/3763$ ll task # list the threads started by this process
total 0 
dr-xr-xr-x 9 shengjie shengjie 0  5 月  10  16:36  ./
dr-xr-xr-x 9 shengjie shengjie 0  5 月  10  16:34  ../
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3763/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3765/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3766/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3767/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3768/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3769/
dr-xr-xr-x 7 shengjie shengjie 0  5 月  10  16:36  3770/
shengjie@ubuntu:/proc/3763$ ll fd # list the file descriptors opened by this process
total 0 
dr-x------ 2 shengjie shengjie  0  5 月  10  16:36  ./
dr-xr-xr-x 9 shengjie shengjie  0  5 月  10  16:34  ../
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  0 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  1 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  10 ->  'socket:[44292]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  100 ->  /dev/random 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  11 ->  'socket:[41675]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  13 ->  'pipe:[45206]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  16:37  14 ->  'pipe:[45206]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  15 ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  16 ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  17 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  18 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  19 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  2 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  20 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  21 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  22 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  23 ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  24 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  25 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  26 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  27 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  28 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  29 ->  'socket:[43429]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  3 ->  'pipe:[42148]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  30 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  31 ->  'socket:[42149]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  32 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll 
l-wx------ 1 shengjie shengjie 64  5 月  10  16:37  4 ->  'pipe:[42148]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  42 ->  /dev/urandom 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  5 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  6 ->  /dev/pts/1 
lrwx------ 1 shengjie shengjie 64  5 月  10  16:37  7 ->  /dev/pts/1 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  9 ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  16:37  99 ->  /dev/urandom

From the output above, the .NET Core console app started multiple threads and opened sockets on file descriptors 10, 11, 29, and 31. But which descriptor is listening on port 5001?

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat /proc/net/tcp | grep 1389  # tcp entries involving port 5001 (0x1389 = 5001 in hex)
 4: 0100007F:1389  00000000:0000  0A  00000000:00000000  00:00000000  00000000  1000  0  43429  1  0000000000000000  100  0  0  10  0
12: 0100007F:9038  0100007F:1389  01  00000000:00000000  00:00000000  00000000  1000  0  44343  1  0000000000000000  20  4  30  10  -1
13: 0100007F:1389  0100007F:9038  01  00000000:00000000  00:00000000  00000000  1000  0  42149  1  0000000000000000  20  4  29  10  -1

We can see that the socket with inode 43429 is listening on port 5001, which matches the line `29 -> 'socket:[43429]'` in the fd listing above; so the socket listening on port 5001 corresponds to file descriptor 29.

Of course, we can also find clues in the log files recorded in the strace directory. As mentioned earlier, server-side socket programming generally goes through the socket->bind->listen->accept->read->write sequence, so we can grep for keywords to locate the relevant system calls.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace/ -rn
strace/io.3696:4570:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3696-327175-socket"}, 110) = 0
strace/io.3763:2241:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3763-328365-socket"}, 110) = 0
strace/io.3763:2949:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
strace/io.3713:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-3713-327405-socket"}, 110) = 0

So in the trace of the main thread, io.3763, file descriptor 29 was bound to the socket listening on 127.0.0.1:5001. It also turns out the other two sockets .NET Core created automatically are diagnostics-related.
Next, let's focus on the system calls issued by thread 3763.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd strace/
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ cat io.3763  # relevant excerpt only
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(29, 10)
write(21, "\346\234\215\345\212\241\347\253\257\345\267\262\345\220\257\345\212\250(127.0.0.1:500"..., 51) = 51
accept4(29, {sa_family=AF_INET, sin_port=htons(36920), sin_addr=inet_addr("127.0.0.1")}, [16], SOCK_CLOEXEC) = 31
write(21, "127.0.0.1:36920-\345\267\262\350\277\236\346\216\245\n", 26) = 26
write(21, "127.0.0.1:36920-\345\274\200\345\247\213\346\216\245\346\224\266\346\225\260\346"..., 38) = 38
recvmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="1\n", iov_len=512}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 2
write(21, "127.0.0.1:36920-\346\216\245\346\224\266\346\225\260\346\215\256\357\274\2321"..., 34) = 34
sendmsg(31, {msg_name=NULL, msg_namelen=0, msg_iov=[{iov_base="received:1\n", iov_len=11}], msg_iovlen=1, msg_controllen=0, msg_flags=0}, 0) = 11
accept4(29, 0x7fecf001c978, [16], SOCK_CLOEXEC) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---

Here we spot several key system calls: socket, bind, listen, accept4, recvmsg, and sendmsg. With the man command we can check the documentation for accept4 and recvmsg:

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man accept4
If no pending connections are present on the queue, and the socket is not marked as nonblocking, accept() blocks the caller until a
      connection is present.

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man recvmsg
If no messages are available at the socket, the receive calls wait for a message to arrive, unless the socket is nonblocking (see fcntl(2))

In other words, accept4 and recvmsg are blocking system calls.

6.2 System Calls Issued by I/O Multiplexing

Let's verify the I/O multiplexing code above in the same way; the steps are similar:

shengjie@ubuntu:~/coding/dotnet$ strace -ff -o Io.Demo/strace2/io dotnet run --project Io.Demo/
Press any key to start!
Server started (127.0.0.1:5001) - waiting for connections...
127.0.0.1:37098 - connected
127.0.0.1:37098 - received data: 1

127.0.0.1:37098 - received data: 2

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ nc localhost 5001
1
received:1
2
received:2

shengjie@ubuntu:/proc/2449$ netstat -natp | grep 5001
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp     0    0  127.0.0.1:5001      0.0.0.0:*           LISTEN      2449/Io.Demo
tcp     0    0  127.0.0.1:5001      127.0.0.1:56296     ESTABLISHED 2449/Io.Demo
tcp     0    0  127.0.0.1:56296     127.0.0.1:5001      ESTABLISHED 2499/nc

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ ps -h | grep dotnet
2400 pts/3   S+    0:10 strace -ff -o ./Io.Demo/strace2/io dotnet run --project Io.Demo/
2402 pts/3   Sl+   0:01 dotnet run --project Io.Demo/
2449 pts/3   Sl+   0:00 /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo
2516 pts/5   S+    0:00 grep --color=auto dotnet


shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cd /proc/2449/
shengjie@ubuntu:/proc/2449$ ll task
total 0 
dr-xr-xr-x 11 shengjie shengjie 0  5 月  10  22:15  ./
dr-xr-xr-x  9 shengjie shengjie 0  5 月  10  22:15  ../
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2449/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2451/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2452/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2453/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2454/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2455/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2456/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2459/
dr-xr-xr-x  7 shengjie shengjie 0  5 月  10  22:15  2462/
shengjie@ubuntu:/proc/2449$ ll fd
total 0 
dr-x------ 2 shengjie shengjie  0  5 月  10  22:15  ./
dr-xr-xr-x 9 shengjie shengjie  0  5 月  10  22:15  ../
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  0  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  1  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  10  ->  'socket:[35001]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  100  ->  /dev/random 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  11  ->  'socket:[34304]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  13  ->  'pipe:[31528]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  14  ->  'pipe:[31528]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  15  ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  16  ->  /home/shengjie/coding/dotnet/Io.Demo/bin/Debug/netcoreapp3.0/Io.Demo.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  17  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  18  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Console.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  19  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  2  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  20  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Runtime.Extensions.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  21  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  22  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Text.Encoding.Extensions.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  23  ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  24  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Sockets.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  25  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Net.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  26  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/Microsoft.Win32.Primitives.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  27  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Tracing.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  28  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.Tasks.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  29  ->  'socket:[31529]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  3  ->  'pipe:[32055]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  30  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Threading.ThreadPool.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  31  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Collections.Concurrent.dll 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  32  ->  'anon_inode:[eventpoll]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  33  ->  'pipe:[32059]' 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  34  ->  'pipe:[32059]' 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  35  ->  'socket:[35017]' 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  36  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Memory.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  37  ->  /dev/urandom 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  38  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Diagnostics.Debug.dll 
l-wx------ 1 shengjie shengjie 64  5 月  10  22:16  4  ->  'pipe:[32055]' 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  5  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  6  ->  /dev/pts/3 
lrwx------ 1 shengjie shengjie 64  5 月  10  22:16  7  ->  /dev/pts/3 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  9  ->  /usr/share/dotnet/shared/Microsoft.NETCore.App/3.0.0/System.Private.CoreLib.dll 
lr-x------ 1 shengjie shengjie 64  5 月  10  22:16  99  ->  /dev/urandom 
shengjie@ubuntu:/proc/2449$ cat /proc/net/tcp | grep 1389
 0: 0100007F:1389  00000000:0000  0A  00000000:00000000  00:00000000  00000000  1000     0  31529  1  0000000000000000  100  0  0  10  0              
 8: 0100007F:1389  0100007F:DBE8 01  00000000:00000000  00:00000000  00000000  1000     0  35017  1  0000000000000000  20  4  29  10  -1             
12: 0100007F:DBE8 0100007F:1389  01  00000000:00000000  00:00000000  00000000  1000     0  28496  1  0000000000000000  20  4  30  10  -1  

Grep the strace2 logs to find the file descriptor of the socket listening on localhost:5001.

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'bind' strace2/ -rn
strace2/io.2449:2243:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2449-23147-socket"}, 110) = 0
strace2/io.2449:2950:bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
strace2/io.2365:4568:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2365-19043-socket"}, 110) = 0
strace2/io.2420:4634:bind(11, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2420-22262-socket"}, 110) = 0
strace2/io.2402:4569:bind(10, {sa_family=AF_UNIX, sun_path="/tmp/dotnet-diagnostic-2402-22042-socket"}, 110) = 0

Again it is file descriptor 29, and the related system calls are recorded in io.2449. Opening that file, we find the relevant calls:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ cat strace2/io.2449 # relevant excerpt only
socket(AF_INET, SOCK_STREAM|SOCK_CLOEXEC, IPPROTO_TCP) = 29
setsockopt(29, SOL_SOCKET, SO_REUSEADDR, [1], 4) = 0
bind(29, {sa_family=AF_INET, sin_port=htons(5001), sin_addr=inet_addr("127.0.0.1")}, 16) = 0
listen(29, 10) 
accept4(29, 0x7fa16c01b9e8, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)
epoll_create1(EPOLL_CLOEXEC)            = 32
epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0
accept4(29, 0x7fa16c01cd60, [16], SOCK_CLOEXEC) = -1 EAGAIN (Resource temporarily unavailable)

Notice that accept4 now returns -1 immediately instead of blocking, and that descriptor 29, the socket listening on 127.0.0.1:5001, is registered via epoll_ctl with descriptor 32, which was created by epoll_create1. It is descriptor 32 that ultimately blocks in epoll_wait, waiting for connection requests. We can grep the epoll-related system calls to confirm:

shengjie@ubuntu:~/coding/dotnet/Io.Demo$ grep 'epoll' strace2/ -rn
strace2/io.2459:364:epoll_ctl(32, EPOLL_CTL_ADD, 35, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=1, u64=1}}) = 0
strace2/io.2462:21:epoll_wait(32, [{EPOLLIN, {u32=0, u64=0}}], 1024, -1) = 1
strace2/io.2462:42:epoll_wait(32, [{EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:43:epoll_wait(32, [{EPOLLIN|EPOLLOUT, {u32=1, u64=1}}], 1024, -1) = 1
strace2/io.2462:53:epoll_wait(32, 
strace2/io.2449:3033:epoll_create1(EPOLL_CLOEXEC)            = 32
strace2/io.2449:3035:epoll_ctl(32, EPOLL_CTL_ADD, 33, {EPOLLIN|EPOLLET, {u32=4294967295, u64=18446744073709551615}}) = 0
strace2/io.2449:3061:epoll_ctl(32, EPOLL_CTL_ADD, 29, {EPOLLIN|EPOLLOUT|EPOLLET, {u32=0, u64=0}}) = 0

We can therefore conclude that the example above uses the epoll model of IO multiplexing.

As for the epoll family, the man command shows the documentation for the epoll_create, epoll_ctl, and epoll_wait system calls:

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_create
DESCRIPTION
       epoll_create() creates a new epoll(7) instance.  Since Linux 2.6.8, the size argument is ignored, but must be
       greater than zero; see NOTES below.
	   
       epoll_create() returns a file descriptor referring to the new epoll instance.  This file descriptor  is  used
	   for  all  the subsequent calls to the epoll interface.

shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_ctl
DESCRIPTION
       This  system  call  performs  control  operations on the epoll(7) instance referred to by the file descriptor
       epfd.  It requests that the operation op be performed for the target file descriptor, fd.
	   
       Valid values for the op argument are:
	   
       EPOLL_CTL_ADD
				Register the target file descriptor fd on the epoll instance referred to by the file  descriptor  epfd
				and associate the event event with the internal file linked to fd.
				
	   EPOLL_CTL_MOD
                Change the event event associated with the target file descriptor fd.
				
	   EPOLL_CTL_DEL
                Remove  (deregister)  the  target file descriptor fd from the epoll instance referred to by epfd.  The
				event is ignored and can be NULL (but see BUGS below).
				
shengjie@ubuntu:~/coding/dotnet/Io.Demo/strace$ man epoll_wait
DESCRIPTION
       The  epoll_wait()  system  call  waits for events on the epoll(7) instance referred to by the file descriptor
       epfd.  The memory area pointed to by events will contain the events that will be available  for  the  caller.
       Up to maxevents are returned by epoll_wait().  The maxevents argument must be greater than zero.
	   
       The  timeout  argument  specifies  the number of milliseconds that epoll_wait() will block.  Time is measured
       against the CLOCK_MONOTONIC clock.  The call will block until either:
	   
       *  a file descriptor delivers an event;
	   
       *  the call is interrupted by a signal handler; or
	   
       *  the timeout expires.

In short, epoll works by creating a separate epoll file descriptor, registering the file descriptors you care about on it, and letting a single thread block in epoll_wait on that one descriptor. When an event arrives or the timeout expires, the thread is woken and handles only the descriptors that are actually ready, so no thread ever has to block on an individual socket.

7. Summary

Writing this article deepened my own understanding of I/O models, but since my knowledge of Linux is limited, there are bound to be oversights; corrections are welcome.

I also can't help marveling at the power of Linux: the everything-is-a-file design means everything leaves a trace you can follow. Now that .NET is fully cross-platform, Linux is well worth getting familiar with.
