This section explains how to build a multicore example and walks through the results it produces.
First, make sure the IPC package that ships with SYS/BIOS is installed; it provides two examples: MessageQ and Notify.
The MessageQ source files are located at
F:\ProgramFiles\ti\ipc_1_25_03_15\packages\ti\sdo\ipc\examples\multicore\evm667x
(note that newer IPC releases no longer ship this example under that path).
The MessageQ configuration file is in the same directory.
In CCS, create a new SYS/BIOS project:
Create a new target configuration file (CCXML), i.e. the configuration file for the emulator.
Copy message_multicore.c (with NUMLOOPS set to 2 in the code, to make the output easier to observe) and message_multicore.cfg from the example into the new project, then build and run with all cores selected.
In the Debug view, group all of the cores and select Group 1.
With Group 1 selected, click Run; the program output is as follows:
[C66xx_0] Start the main loop
[C66xx_7] Start the main loop
[C66xx_0] CORE0 sending a message #1 to CORE1
[C66xx_6] Start the main loop
[C66xx_1] Start the main loop
[C66xx_2] Start the main loop
[C66xx_3] Start the main loop
[C66xx_4] Start the main loop
[C66xx_5] Start the main loop
[C66xx_1] CORE1 sending a message #1 to CORE2
[C66xx_2] CORE2 sending a message #1 to CORE3
[C66xx_3] CORE3 sending a message #1 to CORE4
[C66xx_4] CORE4 sending a message #1 to CORE5
[C66xx_5] CORE5 sending a message #1 to CORE6
[C66xx_6] CORE6 sending a message #1 to CORE7
[C66xx_7] CORE7 sending a message #1 to CORE0
[C66xx_0] CORE0 sending a message #2 to CORE1
[C66xx_1] CORE1 sending a message #2 to CORE2
The test is complete
[C66xx_2] CORE2 sending a message #2 to CORE3
The test is complete
[C66xx_3] CORE3 sending a message #2 to CORE4
The test is complete
[C66xx_4] CORE4 sending a message #2 to CORE5
The test is complete
[C66xx_5] CORE5 sending a message #2 to CORE6
The test is complete
[C66xx_6] CORE6 sending a message #2 to CORE7
The test is complete
[C66xx_7] CORE7 sending a message #2 to CORE0
The test is complete
[C66xx_0] The test is complete
Code:
/*
* Copyright (c) 2013, Texas Instruments Incorporated
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* * Neither the name of Texas Instruments Incorporated nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
* AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
* THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
* CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE,
* EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
* */
/*
* ======== message_multicore.c ========
* Multiprocessor MessageQ example
*
* This is an example program that uses MessageQ to pass a message
* from one processor to another.
*
* Each processor creates its own MessageQ first and then will try to open
* a remote processor's MessageQ.
*
* See message_multicore.k file for expected output.
*/
#include <xdc/std.h>
#include <string.h>
/* -----------------------------------XDC.RUNTIME module Headers */
#include <xdc/runtime/System.h>
#include <xdc/runtime/IHeap.h>
/* ----------------------------------- IPC module Headers */
#include <ti/ipc/Ipc.h>
#include <ti/ipc/MessageQ.h>
#include <ti/ipc/HeapBufMP.h>
#include <ti/ipc/MultiProc.h>
/* ----------------------------------- BIOS6 module Headers */
#include <ti/sysbios/BIOS.h>
#include <ti/sysbios/knl/Task.h>
/* ----------------------------------- To get globals from .cfg Header */
#include <xdc/cfg/global.h>
#define HEAP_NAME "myHeapBuf"
#define HEAPID 0
#define NUMLOOPS 2
Char localQueueName[10];
Char nextQueueName[10];
UInt16 nextProcId;
/*
* ======== tsk0_func ========
* Allocates a message and ping-pongs the message around the processors.
* A local message queue is created and a remote message queue is opened.
* Messages are sent to the remote message queue and retrieved from the
* local MessageQ.
*/
Void tsk0_func(UArg arg0, UArg arg1)
{
MessageQ_Msg msg; /* pointer to the message (header) struct */
MessageQ_Handle messageQ; /* handle for the local message queue */
MessageQ_QueueId remoteQueueId; /* ID of the remote message queue */
Int status;
UInt16 msgId = 0;
HeapBufMP_Handle heapHandle;
HeapBufMP_Params heapBufParams; /* heap creation parameters */
if (MultiProc_self() == 0) { /* core 0 creates the heap used for messages */
/*
* Create the heap that will be used to allocate messages.
*/
HeapBufMP_Params_init(&heapBufParams); /* initialize the heap parameters */
heapBufParams.regionId = 0; /* ID of the shared region */
heapBufParams.name = HEAP_NAME; /* name of the heap instance */
heapBufParams.numBlocks = 1; /* number of fixed-size blocks */
heapBufParams.blockSize = sizeof(MessageQ_MsgHeader); /* size of each block */
heapHandle = HeapBufMP_create(&heapBufParams); /* create the heap instance */
if (heapHandle == NULL) {
System_abort("HeapBufMP_create failed\n" );
}
}
else { /* every core other than core 0 */
/* Open the heap created by the other processor. Loop until opened. */
do {
status = HeapBufMP_open(HEAP_NAME, &heapHandle); /* open the heap named HEAP_NAME; the handle is returned in heapHandle */
/*
* Sleep for 1 clock tick to avoid inundating remote processor
* with interrupts if open failed
*/
if (status < 0) {
Task_sleep(1); /* sleep one system tick; may only be called from a Task thread */
}
} while (status < 0);
}
/* Register this heap with MessageQ */
MessageQ_registerHeap((IHeap_Handle)heapHandle, HEAPID); /* register the heap with MessageQ under HEAPID */
/* Create the local message queue */
messageQ = MessageQ_create(localQueueName, NULL); /* create the local message queue */
if (messageQ == NULL) {
System_abort("MessageQ_create failed\n" );
}
/* Open the remote message queue. Spin until it is ready. */
do {
/* arg 1: name of the queue to open
 * arg 2: filled in with the queue ID */
status = MessageQ_open(nextQueueName, &remoteQueueId); /* open the remote message queue */
/*
* Sleep for 1 clock tick to avoid inundating remote processor
* with interrupts if open failed
*/
if (status < 0) {
Task_sleep(1);
}
} while (status < 0);
if (MultiProc_self() == 0) {
/* Allocate a message to be ping-ponged around the processors */
/* allocate a message of sizeof(MessageQ_MsgHeader) bytes from the heap registered as HEAPID */
msg = MessageQ_alloc(HEAPID, sizeof(MessageQ_MsgHeader));
if (msg == NULL) {
System_abort("MessageQ_alloc failed\n" );
}
/*
* Send the message to the next processor and wait for a message
* from the previous processor.
*/
System_printf("Start the main loop\n");
while (msgId < NUMLOOPS) {
/* Increment...the remote side will check this */
msgId++;
/* arg 1: the message
 * arg 2: the ID to assign to the message */
MessageQ_setMsgId(msg, msgId); /* set the message ID so the receiver can check it */
System_printf("%s sending a message #%d to %s\n", localQueueName, msgId, nextQueueName);
/* send the message to the remote processor */
/* arg 1: ID of the destination queue
 * arg 2: the message to send */
status = MessageQ_put(remoteQueueId, msg);
if (status < 0) {
System_abort("MessageQ_put had a failure/error\n");
}
/* Get a message */
/* arg 1: handle of the local queue
 * arg 2: receives the message
 * arg 3: timeout */
status = MessageQ_get(messageQ, &msg, MessageQ_FOREVER);
if (status < 0) {
System_abort("This should not happen since timeout is forever\n");
}
}
}
else {
/*
* Wait for a message from the previous processor and
* send it to the next processor
*/
System_printf("Start the main loop\n");
while (TRUE) {
/* Get a message */
status = MessageQ_get(messageQ, &msg, MessageQ_FOREVER);
if (status < 0) {
System_abort("This should not happen since timeout is forever\n");
}
System_printf("%s sending a message #%d to %s\n", localQueueName, MessageQ_getMsgId(msg),
nextQueueName);
/* Get the message id */
msgId = MessageQ_getMsgId(msg);
/* send the message to the remote processor */
status = MessageQ_put(remoteQueueId, msg);
if (status < 0) {
System_abort("MessageQ_put had a failure/error\n");
}
/* test done */
if (msgId >= NUMLOOPS) {
break;
}
}
}
System_printf("The test is complete\n");
BIOS_exit(0);
}
/*
* ======== main ========
* Synchronizes all processors (in Ipc_start) and calls BIOS_start
*/
Int main(Int argc, Char* argv[])
{
Int status;
/* MultiProc_self: returns the ID of the running processor
 * MultiProc_getNumProcessors: returns the total number of processors
 * the modulo wraps around the eight core IDs */
nextProcId = (MultiProc_self() + 1) % MultiProc_getNumProcessors();
/* Generate queue names based on own proc ID and total number of procs.
 * MultiProc_getName: returns the processor name for a given processor ID.
 * The names are written into the two arrays:
 * localQueueName: the name of the current processor
 * nextQueueName: the name of the next processor */
System_sprintf(localQueueName, "%s", MultiProc_getName(MultiProc_self()));
System_sprintf(nextQueueName, "%s", MultiProc_getName(nextProcId));
/*
* Ipc_start() calls Ipc_attach() to synchronize all remote processors
* because 'Ipc.procSync' is set to 'Ipc.ProcSync_ALL' in *.cfg
*/
status = Ipc_start(); /* synchronize all processors */
if (status < 0) {
System_abort("Ipc_start failed\n");
}
BIOS_start(); /* enable interrupts and start the SYS/BIOS scheduler */
return (0);
}
/*
* @(#) ti.sdo.ipc.examples.multicore.evm667x; 1, 0, 0, 0,; 5-10-2013 12:34:11; /db/vtree/library/trees/ipc/ipc-i15/src/ xlibrary
*/
CFG configuration file:
/* (BSD license header identical to the one in message_multicore.c above) */
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/*
* Get the list of names that the build device supports.
* I.e. ["CORE0", "CORE1", "CORE2" ... ]
*/
var nameList = MultiProc.getDeviceProcNames();
/*
 * Since this is a single-image example, we don't know (at build-time) which
* processor we're building for. We therefore supply 'null'
* as the local procName and allow IPC to set the local procId at runtime.
*/
MultiProc.setConfig(null, nameList);
/*
* The SysStd System provider is a good one to use for debugging
* but does not have the best performance. Use xdc.runtime.SysMin
* for better performance.
*/
var System = xdc.useModule('xdc.runtime.System');
var SysStd = xdc.useModule('xdc.runtime.SysStd');
System.SupportProxy = SysStd;
/* Modules explicitly used in the application */
var MessageQ = xdc.useModule('ti.sdo.ipc.MessageQ');
var Ipc = xdc.useModule('ti.sdo.ipc.Ipc');
var HeapBufMP = xdc.useModule('ti.sdo.ipc.heaps.HeapBufMP');
var MultiProc = xdc.useModule('ti.sdo.utils.MultiProc');
/* BIOS/XDC modules */
var BIOS = xdc.useModule('ti.sysbios.BIOS');
BIOS.heapSize = 0x8000;
var Task = xdc.useModule('ti.sysbios.knl.Task');
var tsk0 = Task.create('&tsk0_func');
tsk0.instance.name = "tsk0";
/* Synchronize all processors (this will be done in Ipc_start) */
Ipc.procSync = Ipc.ProcSync_ALL;
/* Shared Memory base address and length */
var SHAREDMEM = 0x0C000000;
var SHAREDMEMSIZE = 0x00100000;
/*
* Need to define the shared region. The IPC modules use this
* to make portable pointers. All processors need to add this
* call with their base address of the shared memory region.
* If the processor cannot access the memory, do not add it.
*/
var SharedRegion = xdc.useModule('ti.sdo.ipc.SharedRegion');
SharedRegion.setEntryMeta(0,
{ base: SHAREDMEM,
len: SHAREDMEMSIZE,
ownerProcId: 0,
isValid: true,
name: "DDR2 RAM",
});
/*
* @(#) ti.sdo.ipc.examples.multicore.evm667x; 1, 0, 0, 0,; 5-10-2013 12:34:11; /db/vtree/library/trees/ipc/ipc-i15/src/ xlibrary
*/
Result analysis:
- Since the 6678 has eight cores, every core runs the loaded program and enters main. There, each core computes the ID of the next core, stores its own processor name in localQueueName, and stores the next processor's name in nextQueueName. For example, when core 0 runs main, its own name is CORE0 and nextProcId is 1 (name CORE1), so localQueueName holds "CORE0" and nextQueueName holds "CORE1"; the other cores fill in their names analogously.
- The configuration file shows that the project creates a single Task thread whose entry function is tsk0_func, so every core executes that function. Core 0 first creates a heap to hold the message that will be passed around; the other cores open that heap by its name HEAP_NAME. Every core then creates its own local message queue and opens the next core's queue with MessageQ_open.
- Core 0 allocates a message from the heap it created and sends it to the next core with MessageQ_put, then blocks until the message arrives back from the last core (CORE7).
- Each of the other cores first receives the message, reads its message ID, and forwards it to the next core with MessageQ_put; once the message ID reaches NUMLOOPS, the core breaks out of its loop and the test completes.