The Evolution of AI: Can Morality Be Programmed?

Foreword

I came across an article on an AI topic I care about quite a bit. Someone else had already translated it, but I wasn't satisfied with that version, so I rolled my own.

There are still quite a few rough spots in the translation; I'll leave it as is for now and revise later.

Original article: http://futurism.com/the-evolution-of-ai-can-morality-be-programmed/


-----


IN BRIEF


Our artificial intelligence systems are advancing at a remarkable rate, and though it will be some time before we have human-like synthetic intelligence, it makes sense to begin working on programming morality now. And researchers at Duke University are already well on their way.



Recent advances in artificial intelligence have made it clear that our computers need to have a moral code. Disagree? Consider this: A car is driving down the road when a child on a bicycle suddenly swerves in front of it. Does the car swerve into an oncoming lane, hitting another car that is already there? Does the car swerve off the road and hit a tree? Does it continue forward and hit the child?


Each solution comes with a problem: It could result in death.

It’s an unfortunate scenario, but humans face such scenarios every day, and if an autonomous car is the one in control, it needs to be able to make this choice. And that means that we need to figure out how to program morality into our computers.


Vincent Conitzer, a Professor of Computer Science at Duke University, and co-investigator Walter Sinnott-Armstrong from Duke Philosophy, recently received a grant from the Future of Life Institute in order to try and figure out just how we can make an advanced AI that is able to make moral judgments…and act on them.


MAKING MORALITY


At first glance, the goal seems simple enough—make an AI that behaves in a way that is ethically responsible; however, it’s far more complicated than it initially seems, as there are an amazing number of factors that come into play. As Conitzer’s project outlines, “moral judgments are affected by rights (such as privacy), roles (such as in families), past actions (such as promises), motives and intentions, and other morally relevant features. These diverse factors have not yet been built into AI systems.”


That’s what we’re trying to do now.


In a recent interview with Futurism, Conitzer clarified that, while the public may be concerned about ensuring that rogue AI don’t decide to wipe out humanity, such a thing really isn’t a viable threat at the present time (and it won’t be for some time). As a result, his team isn’t concerned with preventing a global robotic apocalypse by making selfless AI that adore humanity. Rather, on a much more basic level, they are focused on ensuring that our artificial intelligence systems are able to make the hard, moral choices that humans make on a daily basis.



So, how do you make an AI that is able to make a difficult moral decision?


Conitzer explains that, to reach their goal, the team is following a two-path process: having people make ethical choices in order to find patterns, and then figuring out how those patterns can be translated into an artificial intelligence. He clarifies, “what we’re working on right now is actually having people make ethical decisions, or state what decision they would make in a given situation, and then we use machine learning to try to identify what the general pattern is and determine the extent that we could reproduce those kind of decisions.”


In short, the team is trying to find the patterns in our moral choices and translate this pattern into AI systems. Conitzer notes that, on a basic level, it’s all about making predictions regarding what a human would do in a given situation, “if we can become very good at predicting what kind of decisions people make in these kind of ethical circumstances, well then, we could make those decisions ourselves in the form of the computer program.”
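To make that pipeline concrete, here is a minimal sketch of the general idea: encode dilemmas as features, collect human judgments as labels, and fit a model that predicts the typical choice. The feature encoding, the data, and the choice of a decision tree are all illustrative assumptions of mine, not the Duke team's actual method.

```python
# Hypothetical illustration: learn a pattern from surveyed moral judgments.
# Features, data, and model choice are invented for this sketch.
from sklearn.tree import DecisionTreeClassifier

# Each dilemma: [people at risk if car swerves, people at risk if it goes
# straight, driver at risk if it swerves (0/1), child involved (0/1)]
scenarios = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [2, 1, 0, 0],
    [0, 2, 0, 1],
    [1, 3, 0, 0],
]
# Label: what a surveyed person said they would do (0 = straight, 1 = swerve)
human_choices = [1, 1, 0, 1, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(scenarios, human_choices)

# Predict what a typical respondent would do in an unseen dilemma
print(model.predict([[1, 2, 1, 0]]))  # e.g. [1] -> swerve
```

A real system would need far richer features (the rights, roles, and past actions Conitzer lists above) and far more respondents, but the shape of the pipeline is the same: human decisions in, a reproducible decision pattern out.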

However, one major problem with this is, of course, that our moral judgments are not objective—they are neither timeless nor universal.


Conitzer articulates the problem by looking to previous decades, “if we did the same ethical tests a hundred years ago, the decisions that we would get from people would be much more racist, sexist, and all kinds of other things that we wouldn’t see as ‘good’ now. Similarly, right now, maybe our moral development hasn’t come to its apex, and a hundred years from now people might feel that some of the things we do right now, like how we treat animals, is completely immoral. So there’s kind of a risk of bias, of getting stuck at whatever our current level of moral development is.”



And of course, there is the aforementioned problem regarding how complex morality is. “Pure altruism, that’s very easy to address in game theory, but maybe you feel like you owe me something based on previous actions. That’s missing from the game theory literature, and so that’s something that we’re also thinking about a lot—how can you make what game theory calls ‘solution concepts’ incorporate this aspect? How can you compute these things?”
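As a toy illustration of what folding such a term into a solution concept might look like, the sketch below adjusts a two-player payoff matrix with a "debt owed" penalty before computing a best response. The game, the penalty model, and all numbers are my own assumptions, not anything from Conitzer's project.

```python
# Toy example (invented for illustration): add a "debt from past actions"
# term to payoffs before applying a standard solution concept, here a
# simple best-response computation in a 2x2 game.

# Base payoffs: (row action, column action) -> (row utility, column utility)
base = {
    ("share", "share"): (2, 2),
    ("share", "keep"):  (0, 3),
    ("keep",  "share"): (3, 0),
    ("keep",  "keep"):  (1, 1),
}

def with_debt(payoffs, debt=1.5):
    """Assumed model: the row player owes the column player, so acting
    selfishly ('keep') while in debt carries a moral penalty."""
    return {
        (r, c): (ur - (debt if r == "keep" else 0.0), uc)
        for (r, c), (ur, uc) in payoffs.items()
    }

def best_row_response(payoffs, col_action):
    return max(["share", "keep"], key=lambda r: payoffs[(r, col_action)][0])

print(best_row_response(base, "share"))             # 'keep'  (pure self-interest)
print(best_row_response(with_debt(base), "share"))  # 'share' (debt tips the scale)
```

The point of the toy is only that once indebtedness is expressed as a number, ordinary game-theoretic machinery can act on it; deciding what that number should be is where the moral modeling actually happens.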



To solve these problems, and to help figure out exactly how morality functions and can (hopefully) be programmed into an AI, the team is combining the methods from computer science, philosophy, economics, and psychology. “That’s, in a nutshell, what our project is about,” Conitzer asserts.


But what about those sentient AI? When will we need to start worrying about them and discussing how they should be regulated?




THE HUMAN-LIKE AI


According to Conitzer, human-like artificial intelligence won’t be around for some time yet (so yay! No Terminator-styled apocalypse…at least for the next few years).


“Recently, there have been a number of steps towards such a system, and I think there have been a lot of surprising advances…but I think having something like a ‘true AI,’ one that’s really as flexible, able to abstract, and do all these things that humans do so easily, I think we’re still quite far away from that,” Conitzer asserts.



True, we can program systems to do a lot of things that humans do well, but there are some things that are exceedingly complex and hard to translate into a pattern that computers can recognize and learn from (which is ultimately the basis of all AI).



“What came out of early AI research, the first couple decades of AI research, was the fact that certain things that we had thought of as being real benchmarks for intelligence, like being able to play chess well, were actually quite accessible to computers. It was not easy to write and create a chess-playing program, but it was doable.”


Indeed, today, we have computers that are able to beat the best players in the world in a host of games—Chess and Go, for example.


But Conitzer clarifies that, as it turns out, playing games isn’t exactly a good measure of human-like intelligence. Or at least, there is a lot more to the human mind. “Meanwhile, we learned that other problems that were very simple for people were actually quite hard for computers, or to program computers to do. For example, recognizing your grandmother in a crowd. You could do that quite easily, but it’s actually very difficult to program a computer to recognize things that well.”


Since the early days of AI research, we have made computers that are able to recognize and identify specific images. However, to sum the main point, it is remarkably difficult to program a system that is able to do all of the things that humans can do, which is why it will be some time before we have a ‘true AI.’



Yet, Conitzer asserts that now is the time to start considering the rules we will use to govern such intelligences. “It may be quite a bit further out, but to computer scientists, that means maybe just on the order of decades, and it definitely makes sense to try to think about these things a little bit ahead.” And he notes that, even though we don’t have any human-like robots just yet, our intelligence systems are already making moral choices and could, potentially, save or end lives.


“Very often, many of these decisions that they make do impact people and we may need to make decisions that will typically be considered to be a morally loaded decision. And a standard example is a self-driving car that has to decide to either go straight and crash into the car ahead of it or veer off and maybe hurt some pedestrian. How do you make those trade-offs? And that I think is something we can really make some progress on. This doesn’t require superintelligent AI, simple programs can just make these kind of trade-offs in various ways.”
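One simple way such a program could weigh options is to minimize expected harm, as in the minimal sketch below. The options, probabilities, and severity weights are invented for illustration, not values from any real vehicle.

```python
# Minimal sketch of a harm-minimizing trade-off. All options, probabilities,
# and severity weights are hypothetical.

def expected_harm(option):
    # Sum over affected parties of P(injury) * severity weight
    return sum(p * w for p, w in option["risks"])

options = [
    {"name": "brake straight", "risks": [(0.6, 5.0)]},              # car ahead
    {"name": "veer right",     "risks": [(0.3, 8.0)]},              # pedestrian
    {"name": "veer left",      "risks": [(0.5, 4.0), (0.2, 3.0)]},  # oncoming car, self
]

choice = min(options, key=expected_harm)
print(choice["name"], expected_harm(choice))  # 'veer right' 2.4 with these weights
```

Change the severity weights and the chosen action flips, which is precisely why setting those weights is a moral question rather than a purely technical one.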



But of course, knowing what decision to make will first require knowing exactly how our morality operates (or at least having a fairly good idea). From there, we can begin to program it, and that’s what Conitzer and his team are hoping to do.


So welcome to the dawn of moral robots.

This interview has been edited for brevity and clarity.

-----

Typing all this up was no small effort; thanks for reading, and let's keep learning together!



