Read this article alongside the Intel DPDK source code; it is based on dpdk-1.5.1, which you can download from http://dpdk.org/dev. For more official documentation, visit http://dpdk.org/.
If you have no Intel NIC and no suitable Linux system, but just want to try DPDK out, you can deploy a simple DPDK environment in VMware.
1. Install and configure a VMware virtual machine suitable for running DPDK
1) VM configuration requirements
vcpu = 2, i.e. at least two CPUs. DPDK pins threads to cores and cannot run properly on just one; if your machine can spare them, configure a few more.
memory = 1024, i.e. 1 GB. More is better, since the hugepages have to be carved out of it.
OS: I installed RHEL 6.1. A newer release is fine, but avoid older ones, which may not be supported. http://blog.csdn.net/linzhaolover/article/details/8223568 has download links for RHEL 6.3, 6.4 and 6.5.
After installing the OS, update the kernel. My VM currently runs linux-3.3.2; your best bet is something between 3.0 and 3.8, since kernels in that range are known to run DPDK.
NICs: give it two.
I won't go over installing a VM in VMware here; there are plenty of tutorials online.
2) Add NICs that DPDK supports
Don't be stingy with virtual NICs: add at least two Intel NICs, because with only one, testpmd complains (see the warning later in this article).
DPDK comes from Intel and currently seems to support only Intel NICs. With the VM installed, check its current NICs with lspci:
- # lspci | grep Ethernet
- 02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
- 02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
OK, first shut the VM down.
Then add a NIC, but don't start the VM yet; we still need to edit the VM's configuration file.
My configuration file is at E:\Users\adm\Documents\Virtual Machines\Red Hat Enterprise Linux 5\Red Hat 6.vmx
You chose the working directory when you created the VM; hover the mouse over the VM's name in VMware's left pane and its working directory is displayed.
Open the configuration file with Notepad and add these lines:
- ethernet2.virtualDev = "e1000"
- ethernet2.present = "TRUE"
Now restart the VM and look at the NICs again; there is a new 82545EM NIC:
- # lspci | grep Ethernet
- 02:01.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
- 02:05.0 Ethernet controller: Advanced Micro Devices [AMD] 79c970 [PCnet32 LANCE] (rev 10)
- 02:06.0 Ethernet controller: Intel Corporation 82545EM Gigabit Ethernet Controller (Copper) (rev 01)
2. Deploy DPDK
1) Download the source
After booting the VM, clone the latest code from dpdk.org:
- git clone git://dpdk.org/dpdk
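This article tracks dpdk-1.5.1, so instead of building the tip of the tree you may want to check out that release. The tag name below is an assumption, so list the tags first and pick the 1.5.1 one:
- cd dpdk
- git tag              # list the release tags
- git checkout v1.5.1  # assumed name of the 1.5.1 tag; use whatever 'git tag' shows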
2) Set environment variables
Enter the dpdk directory and put the environment variables in a file, then source it:
- export RTE_SDK=`pwd`
- #export RTE_TARGET=x86_64-default-linuxapp-gcc
- export RTE_TARGET=i686-default-linuxapp-gcc
Since my VM is 32-bit, I chose i686 and commented out the x86_64 line.
I put the three lines above in a file named dpdkrc, then sourced it to activate the variables:
- source dpdkrc
3) Run DPDK with the setup script
Now run the setup script:
- ./tools/setup.sh
- ----------------------------------------------------------
- Step 1: Select the DPDK environment to build
- ----------------------------------------------------------
- [1] i686-default-linuxapp-gcc
- [2] i686-default-linuxapp-icc
- [3] x86_64-default-linuxapp-gcc
- [4] x86_64-default-linuxapp-icc
Select 1
My system is 32-bit, so I pick 1 to compile the 32-bit source with gcc; if your VM is 64-bit, pick 3 instead.
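For reference, the menu just drives the normal SDK build; the same thing done by hand from the dpdk directory would be (assuming the i686 target chosen above):
- make install T=i686-default-linuxapp-gcc
The T= value should match RTE_TARGET, since the built kernel modules and apps end up under a directory of that name.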
- ----------------------------------------------------------
- Step 2: Setup linuxapp environment
- ----------------------------------------------------------
- [5] Insert IGB UIO module
- [6] Insert KNI module
- [7] Setup hugepage mappings for non-NUMA systems
- [8] Setup hugepage mappings for NUMA systems
- [9] Display current Ethernet device settings
- [10] Bind Ethernet device to IGB UIO module
Once the build is OK,
Select 5
to insert the igb_uio.ko driver. After the build, the module lives in the i686-default-linuxapp-gcc/kmod/ directory. Before inserting igb_uio.ko, the script first loads the uio module: uio is a mechanism for implementing drivers in user space, and parts of DPDK are built on top of it. If you are interested, http://blog.csdn.net/wenwuge_topsec/article/details/9628409 covers how uio drivers are used.
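For reference, what this menu option does under the hood is roughly the following sketch (the module path assumes the i686 target built above):
- modprobe uio                                      # generic userspace I/O framework
- insmod i686-default-linuxapp-gcc/kmod/igb_uio.ko  # DPDK's uio-based PCI driver
- lsmod | grep uio                                  # verify both modules are loaded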
Select 7
to set up hugepages:
- Removing currently reserved hugepages
- .echo_tmp: line 2: /sys/devices/system/node/node?/hugepages/hugepages-2048kB/nr_hugepages: No such file or directory
- Unmounting /mnt/huge and removing directory
- Input the number of 2MB pages
- Example: to have 128MB of hugepages available, enter '64' to
- reserve 64 * 2MB pages
- Number of pages: 64
- Reserving hugepages
- Creating /mnt/huge and mounting as hugetlbfs
It complains that the nr_hugepages file is missing. I ignored it for now; most likely it is harmless, since on a non-NUMA VM the per-node path /sys/devices/system/node/node?/ does not exist, so the script's cleanup echo fails.
It then asks how many pages to reserve; I entered 64, and 64 x 2 MB = 128 MB is plenty for a simple test.
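For reference, option 7 boils down to the standard kernel hugetlbfs interfaces; done by hand on a non-NUMA system it would look roughly like this:
- echo 64 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages  # reserve 64 x 2MB pages
- mkdir -p /mnt/huge
- mount -t hugetlbfs nodev /mnt/huge  # DPDK maps its hugepages from here
- grep Huge /proc/meminfo             # verify the reservation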
Select 9
to look at your current devices:
- Option: 9
- Network devices using IGB_UIO driver
- ====================================
- <none>
- Network devices using kernel driver
- ===================================
- 0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
- 0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
- 0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio
- Other network devices
- =====================
- <none>
I have three virtual NICs, and only the last is an Intel one. The listing shows its current driver is e1000, with igb_uio unused; the next step is to bind it.
Select 10
to bind the NIC:
- Option: 10
- Network devices using IGB_UIO driver
- ====================================
- <none>
- Network devices using kernel driver
- ===================================
- 0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
- 0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
- 0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' if=eth2 drv=e1000 unused=igb_uio
- Other network devices
- =====================
- <none>
- Enter PCI address of device to bind to IGB UIO driver: 02:06.0
- OK
Note that binding can fail with an error like the following:
- Enter PCI address of device to bind to IGB UIO driver: 02:06.0 02:07.0
- Routing table indicates that interface 0000:02:06.0 is active. Not modifying
- OK
In that case, bring the interface down first; mine, for example, is eth2:
- ifconfig eth2 down
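If you would rather bind by hand than through the menu, the kernel's generic sysfs driver interfaces do the same job. A sketch, where 8086 100f is the 82545EM's vendor/device ID as reported by lspci -n:
- echo 0000:02:06.0 > /sys/bus/pci/drivers/e1000/unbind    # detach from the kernel driver
- echo "8086 100f" > /sys/bus/pci/drivers/igb_uio/new_id   # igb_uio accepts this ID and usually binds right away
- echo 0000:02:06.0 > /sys/bus/pci/drivers/igb_uio/bind    # explicit bind, in case new_id alone did not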
Select 9 again
to check the current NIC status:
- Option: 9
- Network devices using IGB_UIO driver
- ====================================
- 0000:02:06.0 '82545EM Gigabit Ethernet Controller (Copper)' drv=igb_uio unused=e1000
- Network devices using kernel driver
- ===================================
- 0000:02:01.0 '79c970 [PCnet32 LANCE]' if=eth0 drv=pcnet32 unused= *Active*
- 0000:02:05.0 '79c970 [PCnet32 LANCE]' if=eth1 drv=pcnet32 unused= *Active*
- Other network devices
- =====================
- <none>
Select 12
to test a DPDK application (testpmd):
- Option: 12
- Enter hex bitmask of cores to execute testpmd app on
- Example: to execute app on cores 0 to 7, enter 0xff
- bitmask: 0x3
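(Outside the menu, an equivalent direct launch would look roughly like this sketch: -c 0x3 runs on cores 0 and 1, -n 4 gives the number of memory channels, and -- -i puts testpmd in interactive mode; the binary path assumes the i686 target built earlier.)
- ./i686-default-linuxapp-gcc/app/testpmd -c 0x3 -n 4 -- -i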
Then, at the testpmd> prompt, type start to forward some packets:
- Interactive-mode selected
- Configuring Port 0 (socket -1)
- Checking link statuses...
- Port 0 Link Up - speed 1000 Mbps - full-duplex
- Done
- testpmd>
- testpmd> start
- Warning! Cannot handle an odd number of ports with the current port topology. Configuration must be changed to have an even number of ports, or relaunch application with --port-topology=chained
- io packet forwarding - CRC stripping disabled - packets/burst=16
- nb forwarding cores=1 - nb forwarding ports=1
- RX queues=1 - RX desc=128 - RX free threshold=0
- RX threshold registers: pthresh=8 hthresh=8 wthresh=4
- TX queues=1 - TX desc=512 - TX free threshold=0
- TX threshold registers: pthresh=36 hthresh=0 wthresh=0
- TX RS bit threshold=0 - TXQ flags=0x0
There is a warning. What does it mean?
2014-01-01 22:32:10, Wednesday
The warning above was resolved by frank, a fellow DPDK learner: it appears because I added only one Intel NIC; add a second one and it goes away.
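Adding the second Intel NIC works the same way as before: shut the VM down and append the next ethernet index to the .vmx file, following the ethernet2 lines added earlier:
- ethernet3.virtualDev = "e1000"
- ethernet3.present = "TRUE"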
Type stop to stop forwarding:
- Telling cores to stop...
- Waiting for lcores to finish...
- ---------------------- Forward statistics for port 0 ----------------------
- RX-packets: 0 RX-dropped: 0 RX-total: 0
- TX-packets: 0 TX-dropped: 0 TX-total: 0
- ----------------------------------------------------------------------------
- +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
- RX-packets: 0 RX-dropped: 0 RX-total: 0
- TX-packets: 0 TX-dropped: 0 TX-total: 0
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- Done.
Why are there no packets? If anyone knows, please tell me.
2014-01-01 22:38:16, Wednesday
The no-traffic problem is solved, O(∩_∩)O~. The cause: I had added the NIC in NAT mode, and changing it to host-only mode fixed it. I still wonder, though: what is the important difference between these two modes?
- ---------------------- Forward statistics for port 0 ----------------------
- RX-packets: 8890 RX-dropped: 0 RX-total: 8890
- TX-packets: 8894 TX-dropped: 0 TX-total: 8894
- ----------------------------------------------------------------------------
- ---------------------- Forward statistics for port 1 ----------------------
- RX-packets: 8895 RX-dropped: 0 RX-total: 8895
- TX-packets: 8889 TX-dropped: 0 TX-total: 8889
- ----------------------------------------------------------------------------
- +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
- RX-packets: 17785 RX-dropped: 0 RX-total: 17785
- TX-packets: 17783 TX-dropped: 0 TX-total: 17783
- ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++