Machine Learning for Communication Networks

Introduction

The more I read this paper, the better I find it written, so I am sharing it here in this post.

Main part

In order to exemplify applications of supervised and unsupervised learning, we will offer annotated pointers to the literature on machine learning for communication systems. Rather than striving for a comprehensive, and historically minded, review, the applications and references have been selected with the goal of illustrating key aspects regarding the use of machine learning in engineering problems.

Throughout, we focus on tasks carried out at the network side, rather than at the users, and organize the applications along two axes. On one, with reference to Fig. 4, we distinguish tasks that are carried out at the edge of the network, that is, at the base stations or access points and at the associated computing platforms, from tasks that are instead the responsibility of a centralized cloud processor connected to the core network (see [25]). The edge operates on the basis of timely local information collected at different layers of the protocol stack, which may include all layers from the physical up to the application layer. In contrast, the centralized cloud processes longer-term and global information collected from multiple nodes in the edge network, which typically encompasses only the higher layers of the protocol stack, namely the networking and application layers. Examples of data that may be available at the cloud and at the edge can be found in Table I and Table II, respectively.

As a preliminary discussion, it is useful to ask which tasks of a communication network, if any, may benefit from machine learning, when viewed through the lens of the criteria reviewed in Section I-C. First, as seen, there should be either a model deficit or an algorithm deficit that prevents the use of a conventional model-based engineering design. As an example of model deficit, proactive resource allocation based on predictions of human behaviour, e.g., for caching popular contents, may not benefit from well-established and reliable models, making a data-driven approach desirable (see [26], [27]). For an instance of algorithm deficit, consider the problem of channel decoding for channels with known and accurate models, for which the maximum likelihood decoder entails excessive complexity.
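To make the algorithm-deficit example more concrete, here is a minimal sketch in Python (NumPy and PyTorch) that trains a small neural decoder on samples simulated from a known channel model. The (7,4) generator matrix, BPSK mapping, SNR, network size, and training settings are illustrative assumptions of mine, not choices taken from the paper or from [28].

# Minimal sketch of a learned channel decoder for the algorithm-deficit case:
# the channel model (BPSK over AWGN) is known, but exact maximum likelihood
# decoding of a longer code would be too complex, so a decoder is trained on
# simulated samples. All parameters below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn

# Illustrative (7,4) generator matrix: 4 information bits -> 7 coded bits.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def simulate_batch(batch_size, snr_db=2.0):
    # Draw random messages, encode, modulate, and add Gaussian noise.
    msgs = np.random.randint(0, 2, size=(batch_size, 4))
    codewords = (msgs @ G) % 2
    symbols = 1.0 - 2.0 * codewords              # BPSK: bit 0 -> +1, bit 1 -> -1
    sigma = np.sqrt(0.5 / (10.0 ** (snr_db / 10.0)))
    received = symbols + sigma * np.random.randn(*symbols.shape)
    labels = msgs @ (1 << np.arange(4))          # message index in {0, ..., 15}
    return (torch.tensor(received, dtype=torch.float32),
            torch.tensor(labels, dtype=torch.long))

# Small classifier mapping a length-7 received word to one of 2^4 messages.
decoder = nn.Sequential(nn.Linear(7, 64), nn.ReLU(), nn.Linear(64, 16))
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):                         # supervised training on simulated data
    x, y = simulate_batch(256)
    loss = loss_fn(decoder(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

For this toy code, exhaustive maximum likelihood decoding over the 16 codewords is still feasible; the point is that its cost grows exponentially with the number of information bits, whereas the learned decoder keeps a fixed, small inference cost.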

Assuming that the problem at hand is characterized by a model or algorithm deficit, one should then consider the rest of the criteria discussed in Section I-C. Most of these are typically satisfied by communication problems. Indeed, for most tasks in communication networks, it is possible to collect or generate training data sets, and there is no need to apply common sense or to provide detailed explanations for how a decision was made.

The remaining two criteria need to be checked on a case-by-case basis. First, the phenomenon or function being learned should not change too rapidly over time. For example, designing a channel decoder based on samples obtained from a limited number of realizations of a given propagation channel requires that the channel remain stationary over a sufficiently long period of time (see [28]).

Second, in the case of a model deficit, the task should have some tolerance for error, in the sense of not requiring provable performance guarantees. For instance, the performance of a decoder trained on a channel lacking a well-established channel model, such as a biological communication link, can only be relied upon insofar as one trusts the available data to be representative of the complete set of possible realizations of the problem under study. Alternatively, under an algorithm deficit, a physics-based model, if available, can possibly be used to carry out computer simulations and obtain numerical performance guarantees.
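Since a channel model is available in the algorithm-deficit case, numerical, rather than provable, performance guarantees can be obtained by Monte Carlo simulation. The sketch below, which reuses simulate_batch and decoder from the previous example, estimates the block error rate of the trained decoder over an illustrative grid of SNR values.

# Monte Carlo evaluation of the trained decoder using the known channel model,
# reusing simulate_batch and decoder from the previous sketch.
import torch

def estimate_bler(decoder, snr_db, num_blocks=100000, batch=1000):
    # Estimate the block error rate by simulating the channel and counting errors.
    errors, total = 0, 0
    decoder.eval()
    with torch.no_grad():
        while total < num_blocks:
            x, y = simulate_batch(batch, snr_db=snr_db)
            predictions = decoder(x).argmax(dim=1)
            errors += (predictions != y).sum().item()
            total += batch
    return errors / total

for snr_db in [0.0, 2.0, 4.0, 6.0]:
    print(f"SNR {snr_db:4.1f} dB: estimated block error rate = {estimate_bler(decoder, snr_db):.4f}")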

In Sections IV and VI, we will provide some pointers to specific applications of supervised and unsupervised learning, respectively.

Reference

O. Simeone, “A Very Brief Introduction to Machine Learning With Applications to Communication Systems,” in IEEE Transactions on Cognitive Communications and Networking, vol. 4, no. 4, pp. 648-664, Dec. 2018, doi: 10.1109/TCCN.2018.2881442.
