A Collection of Computer Science Paper Abstracts in Professional English [1]

A note up front: I am the treasure dispenser of 【程序員寶藏】, dedicated to creating original, practical content. I love technology, open source, and sharing. My "Computer Fundamentals Interview Questions" series and "Computer Fundamentals Core Knowledge" series have been very well received, and more high-quality original series are on the way. If you are interested in computer fundamentals, programming, and related topics, feel free to follow me so we can grow together!

My strong recommendation: if you find the CSDN layout unattractive, you are welcome to visit my original official account 【程序員寶藏】 (the name says it all, I promise!) for the full series of articles with red key-point highlights and nicer formatting (if you don't believe me, come and ask me for a red packet).
For a sample post, see: TCP Three-Way Handshake and Four-Way Teardown.

Many readers have asked me for a PDF version, so I have compiled all of my original articles into a print-ready PDF. Reply with the keyword 【寶藏】 in the official account backend to get it for free and read it at your leisure.


Reference articles for this series:

Computer Professional English (essential for improving your professional English)


Part 1: Artificial Intelligence

1. Security and Privacy Risks in Artificial Intelligence Systems

Abstract: Human society is witnessing a wave of artificial intelligence (AI) driven by deep learning techniques, which has brought a technological revolution to human production and life. In some specific fields, AI has achieved or even surpassed human-level performance. However, most previous machine learning theories did not consider open or even adversarial operating environments, and the security and privacy issues of AI systems are gradually being exposed. Beyond insecure code implementations, biased models, adversarial examples, and sensor spoofing can also lead to security risks that are hard to discover with traditional security analysis tools. This paper reviews previous work on AI system security and privacy, revealing potential security and privacy risks. Firstly, we introduce a threat model of AI systems, including attack surfaces, attack capabilities, and attack goals. Secondly, we analyze security risks and countermeasures for four critical components of AI systems: data input (sensors), data preprocessing, the machine learning model, and output. Finally, we discuss future research trends in the security of AI systems. The aim of this paper is to draw the attention of the computer security community and the AI community to the security and privacy of AI systems, so that they can work together to unlock AI's potential and build a bright future.

Key words: intelligent system security, system security, data processing, artificial intelligence (AI), deep learning
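
To make the "adversarial examples" mentioned in the abstract concrete, here is a minimal sketch (not taken from the surveyed paper) of the fast gradient sign method applied to a toy logistic-regression model; the weights, input, and eps value are made up for illustration.

```python
# A minimal FGSM sketch on a toy logistic-regression "model" (hypothetical
# weights), showing how a small, bounded perturbation increases the loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Return x plus an eps-bounded perturbation that increases the loss."""
    p = sigmoid(w @ x + b)             # predicted probability of class 1
    grad_x = (p - y_true) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)   # FGSM step: move along the gradient sign

w = np.array([1.5, -2.0, 0.5])         # hypothetical trained weights
b = 0.1
x = np.array([0.2, -0.4, 1.0])         # a clean input classified as class 1
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```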

2. The State of the Art and Future Tendency of Smart Education

Abstract: At present, the smart education model supported by information technologies such as big data analytics and artificial intelligence has become the trend in the development of educational informatization, as well as a popular research direction in academia. Firstly, we investigate and analyze data mining technologies for two kinds of educational big data: teaching behavior and massive knowledge resources. Secondly, we focus on four key technologies in the teaching stages of learning guidance, recommendation, Q&A, and evaluation, namely learning path generation and navigation, learner profiling and personalized recommendation, intelligent online Q&A, and fine-grained evaluation. We then compare and analyze the mainstream smart education platforms in China and abroad. Finally, we discuss the limitations of current smart education research and summarize research directions for smart education, including online intelligent learning assistants, intelligent learner assessment, networked group cognition, and causality discovery.

Key words: smart education, educational big data, big data analytics, artificial intelligence, knowledge graph
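
As a toy illustration of the "learner profiling and personalized recommendation" step listed in the abstract, the sketch below ranks hypothetical learning resources by cosine similarity to a hand-made learner profile; the feature names, vectors, and resource names are all assumptions, not any platform's actual algorithm.

```python
# Rank made-up learning resources for a learner by cosine similarity
# between the learner's skill profile and the resources' tag vectors.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Assumed feature order: [algebra, geometry, probability]
learner_profile = np.array([0.9, 0.2, 0.6])     # estimated mastery/interest
resources = {
    "linear_equations_drill": np.array([1.0, 0.0, 0.0]),
    "triangle_proofs_video":  np.array([0.1, 1.0, 0.0]),
    "dice_experiments_lab":   np.array([0.2, 0.0, 1.0]),
}

ranked = sorted(resources.items(),
                key=lambda kv: cosine(learner_profile, kv[1]),
                reverse=True)
for name, vec in ranked:
    print(f"{name}: {cosine(learner_profile, vec):.3f}")
```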

3. A Survey of Artificial Intelligence Chip

Abstract: In recent years, artificial intelligence (AI) technologies have been widely used in many commercial fields. With the attention and investment of researchers and companies around the world, AI technologies have proved their irreplaceable value in traditional fields such as speech recognition, image recognition, and search/recommendation engines. At the same time, however, the amount of computation required by AI technologies has increased dramatically, which poses a huge challenge to the computing power of hardware. In this paper, we first describe the basic algorithms of AI technologies and their application algorithms, including their operation modes and computational characteristics. Then, we introduce the recent development directions of AI chips and analyze the main architectures of current AI chips. Furthermore, we highlight the research results of the DianNao series of processors, the latest and most advanced work in the field of AI chips; their architectures and designs are proposed for different technical features, including deep learning algorithms, large-scale deep learning algorithms, machine learning algorithms, deep learning algorithms for processing two-dimensional images, and sparse deep learning algorithms. In addition, a complete and efficient instruction set architecture (ISA) for deep learning algorithms, Cambricon, is proposed. Finally, we analyze the development directions of artificial neural network technologies from various angles, including network structures, computational characteristics, and hardware devices, and on this basis we predict possible directions of future work.

Key words: artificial intelligence, accelerators, FPGA, ASIC, weight quantization, sparse pruning
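
The key words mention weight quantization and sparse pruning; the sketch below illustrates both ideas on a made-up weight matrix (magnitude pruning followed by symmetric int8 quantization). It is a conceptual illustration only, not the DianNao/Cambricon implementation described in the paper.

```python
# Magnitude pruning plus symmetric 8-bit quantization on a random weight matrix.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.5, size=(4, 4)).astype(np.float32)

# Magnitude pruning: zero out the smallest-magnitude weights (50% sparsity here).
threshold = np.quantile(np.abs(weights), 0.5)
pruned = np.where(np.abs(weights) < threshold, 0.0, weights)

# Symmetric linear quantization to signed 8-bit integers.
scale = np.abs(pruned).max() / 127.0
quantized = np.round(pruned / scale).astype(np.int8)
dequantized = quantized.astype(np.float32) * scale

print("sparsity:", float((pruned == 0).mean()))
print("max abs quantization error:", float(np.abs(dequantized - pruned).max()))
```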

Part 2: Machine Learning

1. A Survey on Machine Learning Based Routing Algorithms

Abstract: The rapid development of the Internet has given rise to many new applications, including real-time multimedia streaming services and remote cloud services. These applications require various types of service quality, which poses a significant challenge to current best-effort routing algorithms. Following the recent huge success of machine learning in games, computer vision, and natural language processing, many researchers have tried to design "smart" routing algorithms based on machine learning methods. In contrast to traditional model-driven, distributed routing algorithms (e.g., OSPF), machine learning based routing algorithms are usually data-driven, which enables them to adapt to dynamically changing network environments and to accommodate diverse quality-of-service requirements. Data-driven routing algorithms based on machine learning have shown great potential to become an important part of the next-generation network. However, research on intelligent routing is still at an early stage. In this paper, we first survey existing research on machine learning based, data-driven routing algorithms, presenting the main ideas and application scenarios of these works and analyzing their strengths and weaknesses. Our analysis shows that existing research focuses mainly on algorithmic principles and is still far from deployment in real environments. We then analyze different schemes for training and deploying machine learning based routing algorithms in real scenarios and propose two reasonable frameworks that allow such algorithms to be deployed with low overhead and high reliability. Finally, we discuss the opportunities and challenges of machine learning based routing algorithms and outline several potential research directions for the future.

Key words: machine learning, data driven routing algorithm, deep learning, reinforcement learning, quality of service (QoS)
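
As a concrete, hedged example of the reinforcement-learning flavour of data-driven routing surveyed above, the sketch below runs tabular Q-learning on a tiny hypothetical topology to learn delay-minimizing next hops; the graph, link delays, and hyperparameters are invented for illustration and are not from any specific paper.

```python
# Tabular Q-learning of next-hop choices that minimize total delay to node D.
import random

# links[node] = {neighbor: link_delay}
links = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 1.0, "D": 5.0},
    "C": {"A": 4.0, "B": 1.0, "D": 1.0},
    "D": {},
}
DEST, ALPHA, GAMMA, EPS = "D", 0.5, 0.9, 0.2
Q = {n: {nb: 0.0 for nb in nbs} for n, nbs in links.items()}

def choose(node):
    if random.random() < EPS:                 # explore a random neighbor
        return random.choice(list(links[node]))
    return min(Q[node], key=Q[node].get)      # exploit: lowest estimated delay

random.seed(0)
for _ in range(2000):
    node = random.choice(["A", "B", "C"])
    while node != DEST:
        nxt = choose(node)
        cost = links[node][nxt]
        future = 0.0 if nxt == DEST else min(Q[nxt].values())
        Q[node][nxt] += ALPHA * (cost + GAMMA * future - Q[node][nxt])
        node = nxt

print({n: min(q, key=q.get) for n, q in Q.items() if q})  # learned next hops
```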

2. Coding-Based Performance Improvement of Distributed Machine Learning in Large-Scale Clusters

Abstract: With the growth of models and data sets, running large-scale machine learning algorithms on distributed clusters has become a common approach. This approach divides the whole machine learning algorithm and its training data into several tasks, each of which runs on a different worker node. The results of all tasks are then combined by a master node to obtain the result of the whole algorithm. When there are a large number of nodes in a distributed cluster, some worker nodes, called stragglers, will inevitably run slower than others due to resource contention and other reasons, which makes their task completion times significantly higher than those of other nodes. Compared with running replicated tasks on multiple nodes, coded computing makes efficient use of computation and storage redundancy to alleviate the effect of stragglers and communication bottlenecks in large-scale machine learning clusters. This paper reviews the research progress on using coding technology to address the straggler issue and improve the performance of large-scale machine learning clusters. Firstly, we introduce the background of coding technology and large-scale machine learning clusters. Secondly, we divide the related research into several categories according to application scenario: matrix multiplication, gradient computing, data shuffling, and other applications. Finally, we summarize the difficulties of applying coding technology in large-scale machine learning clusters and discuss future research trends.

Key words: coding technology, machine learning, distributed computing, straggler tolerance, performance optimization
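
To illustrate the coded-computing idea described in the abstract, here is a minimal sketch of straggler-tolerant matrix multiplication: the row blocks of A are encoded so that A·B can be recovered from any two of three workers. The (2, 3) code, the worker names, and the matrices are illustrative assumptions.

```python
# Coded matrix multiplication: any 2 of 3 worker results recover A @ B,
# so one straggler can be ignored.
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 5, size=(4, 3)).astype(float)
B = rng.integers(0, 5, size=(3, 2)).astype(float)

A1, A2 = A[:2], A[2:]                 # split the work into two row blocks
tasks = {                             # what each (hypothetical) worker computes
    "w1": A1 @ B,                     # uncoded block 1
    "w2": A2 @ B,                     # uncoded block 2
    "w3": (A1 + A2) @ B,              # coded block: sum of the two
}

# Suppose worker w2 straggles; decode its block from the other two results.
survivors = {k: v for k, v in tasks.items() if k != "w2"}
A2B = survivors["w3"] - survivors["w1"]
recovered = np.vstack([survivors["w1"], A2B])

assert np.allclose(recovered, A @ B)
print("A @ B recovered without worker w2")
```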


3. Recent Advances in Bayesian Machine Learning

Abstract: With the fast growth of big data, statistical machine learning has attracted tremendous attention from both industry and academia, with many successful applications in vision, speech, natural language processing, and biology. In particular, the past two decades have seen the fast development of Bayesian machine learning, which now represents a very important class of techniques. In this article, we provide an overview of recent advances in Bayesian machine learning, including the basic theory and methods of Bayesian machine learning, nonparametric Bayesian methods and inference algorithms, and regularized Bayesian inference. Finally, we also highlight the challenges of and recent progress in large-scale Bayesian learning for big data, and discuss some future directions.

Key words: Bayesian machine learning, nonparametric methods, regularized methods, learning with big data, big Bayesian learning
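
As a small worked example of the Bayesian updating that underlies the surveyed methods, the sketch below performs a conjugate Beta-Bernoulli posterior update in closed form; the prior and the observations are made up for illustration.

```python
# Conjugate Beta-Bernoulli update: the posterior over a coin's bias is
# obtained by adding the observed counts to the prior's parameters.
alpha_prior, beta_prior = 2.0, 2.0            # Beta(2, 2) prior on theta
observations = [1, 1, 0, 1, 1, 0, 1, 1]       # 6 heads, 2 tails (made up)

heads = sum(observations)
tails = len(observations) - heads
alpha_post = alpha_prior + heads              # conjugacy: just add the counts
beta_post = beta_prior + tails

posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior: Beta({alpha_post:.0f}, {beta_post:.0f}), mean = {posterior_mean:.3f}")
```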
