How the Enlightenment Ends (Henry Kissinger, The Atlantic, 2018) (Part 2)
IV
We must expect AI to make mistakes faster—and of greater magnitude—than humans do.
Heretofore confined to specific fields of activity, AI research now seeks to bring about a “generally intelligent” AI capable of executing tasks in multiple fields. A growing percentage of human activity will, within a measurable time period, be driven by AI algorithms. But these algorithms, being mathematical interpretations of observed data, do not explain the underlying reality that produces them. Paradoxically, as the world becomes more transparent, it will also become increasingly mysterious. What will distinguish that new world from the one we have known? How will we live in it? How will we manage AI, improve it, or at the very least prevent it from doing harm, culminating in the most ominous concern: that AI, by mastering certain competencies more rapidly and definitively than humans, could over time diminish human competence and the human condition itself as it turns it into data.
Artificial intelligence will in time bring extraordinary benefits to medical science, clean-energy provision, environmental issues, and many other areas. But precisely because AI makes judgments regarding an evolving, as-yet-undetermined future, uncertainty and ambiguity are inherent in its results. There are three areas of special concern:
First, that AI may achieve unintended results. Science fiction has imagined scenarios of AI turning on its creators. More likely is the danger that AI will misinterpret human instructions due to its inherent lack of context. A famous recent example was the AI chatbot called Tay, designed to generate friendly conversation in the language patterns of a 19-year-old girl. But the machine proved unable to define the imperatives of “friendly” and “reasonable” language installed by its instructors and instead became racist, sexist, and otherwise inflammatory in its responses. Some in the technology world claim that the experiment was ill-conceived and poorly executed, but it illustrates an underlying ambiguity: To what extent is it possible to enable AI to comprehend the context that informs its instructions? What medium could have helped Tay define for itself “offensive,” a word upon whose meaning humans do not universally agree? Can we, at an early stage, detect and correct an AI program that is acting outside our framework of expectation? Or will AI, left to its own devices, inevitably develop slight deviations that could, over time, cascade into catastrophic departures?
Second, that in achieving intended goals, AI may change human thought processes and human values. AlphaGo defeated the world Go champions by making strategically unprecedented moves—moves that humans had not conceived and have not yet successfully learned to overcome. Are these moves beyond the capacity of the human brain? Or could humans learn them now that they have been demonstrated by a new master?
Before AI began to play Go, the game had varied, layered purposes: A player sought not only to win, but also to learn new strategies potentially applicable to other of life’s dimensions. For its part, by contrast, AI knows only one purpose: to win. It “learns” not conceptually but mathematically, by marginal adjustments to its algorithms. So in learning to win Go by playing it differently than humans do, AI has changed both the game’s nature and its impact. Does this single-minded insistence on prevailing characterize all AI?
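The phrase “marginal adjustments to its algorithms” can be made concrete with a minimal, purely illustrative sketch (not drawn from the essay): a one-parameter model that fits data by repeatedly nudging its parameter in the direction that reduces error, with no conceptual grasp of the task it is performing.

```python
# Illustrative sketch: "learning" as marginal numerical adjustment
# (simple gradient descent on a one-parameter linear model).
def learn(samples, steps=1000, lr=0.01):
    """Fit y = w * x by nudging w slightly toward lower error."""
    w = 0.0
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y    # how far the current prediction is off
            w -= lr * error * x  # marginal adjustment; no "concept" of the task
    return w

# The model recovers y = 2x purely numerically:
w = learn([(1, 2), (2, 4), (3, 6)])
print(round(w, 3))  # → 2.0
```

The procedure never represents *why* the relationship holds; it only reduces a number, which is the sense in which such systems “learn” mathematically rather than conceptually.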
Other AI projects work on modifying human thought by developing devices capable of generating a range of answers to human queries. Beyond factual questions (“What is the temperature outside?”), questions about the nature of reality or the meaning of life raise deeper issues. Do we want children to learn values through discourse with untethered algorithms? Should we protect privacy by restricting AI’s learning about its questioners? If so, how do we accomplish these goals?
If AI learns exponentially faster than humans, we must expect it to accelerate, also exponentially, the trial-and-error process by which human decisions are generally made: to make mistakes faster and of greater magnitude than humans do. It may be impossible to temper those mistakes, as researchers in AI often suggest, by including in a program caveats requiring “ethical” or “reasonable” outcomes. Entire academic disciplines have arisen out of humanity’s inability to agree upon how to define these terms. Should AI therefore become their arbiter?
Third, that AI may reach intended goals, but be unable to explain the rationale for its conclusions. In certain fields—pattern recognition, big-data analysis, gaming—AI’s capacities already may exceed those of humans. If its computational power continues to compound rapidly, AI may soon be able to optimize situations in ways that are at least marginally different, and probably significantly different, from how humans would optimize them. But at that point, will AI be able to explain, in a way that humans can understand, why its actions are optimal? Or will AI’s decision making surpass the explanatory powers of human language and reason? Through all human history, civilizations have created ways to explain the world around them—in the Middle Ages, religion; in the Enlightenment, reason; in the 19th century, history; in the 20th century, ideology. The most difficult yet important question about the world into which we are headed is this: What will become of human consciousness if its own explanatory power is surpassed by AI, and societies are no longer able to interpret the world they inhabit in terms that are meaningful to them?
How is consciousness to be defined in a world of machines that reduce human experience to mathematical data, interpreted by their own memories? Who is responsible for the actions of AI? How should liability be determined for their mistakes? Can a legal system designed by humans keep pace with activities produced by an AI capable of outthinking and potentially outmaneuvering them?
V
Ultimately, the term artificial intelligence may be a misnomer. To be sure, these machines can solve complex, seemingly abstract problems that had previously yielded only to human cognition. But what they do uniquely is not thinking as heretofore conceived and experienced. Rather, it is unprecedented memorization and computation. Because of its inherent superiority in these fields, AI is likely to win any game assigned to it. But for our purposes as humans, the games are not only about winning; they are about thinking. By treating a mathematical process as if it were a thought process, and either trying to mimic that process ourselves or merely accepting the results, we are in danger of losing the capacity that has been the essence of human cognition.
The implications of this evolution are shown by a recently designed program, AlphaZero, which plays chess at a level superior to chess masters and in a style not previously seen in chess history. On its own, in just a few hours of self-play, it achieved a level of skill that took human beings 1,500 years to attain. Only the basic rules of the game were provided to AlphaZero. Neither human beings nor human-generated data were part of its process of self-learning. If AlphaZero was able to achieve this mastery so rapidly, where will AI be in five years? What will be the impact on human cognition generally? What is the role of ethics in this process, which consists in essence of the acceleration of choices?
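The rules-only self-play that AlphaZero used can be suggested, at toy scale, by a hypothetical sketch (this is not AlphaZero's actual method, which combines deep networks with tree search): a program given only the rules of a simple take-away game, learning move values purely from the outcomes of games it plays against itself.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def train(pile=10, episodes=40000, eps=0.1):
    """Self-play on a toy game: players alternately remove 1-3 stones;
    whoever takes the last stone wins. Only the rules are supplied;
    move values are running averages of self-play outcomes."""
    value, count = {}, {}
    for _ in range(episodes):
        n, history = pile, []
        while n > 0:
            moves = [m for m in (1, 2, 3) if m <= n]
            if random.random() < eps:
                m = random.choice(moves)  # occasional exploration
            else:                         # otherwise exploit current estimates
                m = max(moves, key=lambda m: value.get((n, m), 0.5))
            history.append((n, m))
            n -= m
        result = 1.0  # the player who made the final move won
        for key in reversed(history):
            c = count.get(key, 0) + 1
            count[key] = c
            q = value.get(key, 0.5)
            value[key] = q + (result - q) / c  # running mean of outcomes
            result = 1.0 - result              # alternate player perspective
    return value

v = train()
# Optimal play leaves the opponent a multiple of 4; from 10 stones
# that means taking 2, which self-play discovers without human data:
best = max((1, 2, 3), key=lambda m: v[(10, m)])
```

No human games are fed in; as in the essay's description of AlphaZero, competence emerges solely from the rules plus self-play, though here on a trivially small game.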
Typically, these questions are left to technologists and to the intelligentsia of related scientific fields. Philosophers and others in the field of the humanities who helped shape previous concepts of world order tend to be disadvantaged, lacking knowledge of AI’s mechanisms or being overawed by its capacities. In contrast, the scientific world is impelled to explore the technical possibilities of its achievements, and the technological world is preoccupied with commercial vistas of fabulous scale. The incentive of both these worlds is to push the limits of discoveries rather than to comprehend them. And governance, insofar as it deals with the subject, is more likely to investigate AI’s applications for security and intelligence than to explore the transformation of the human condition that it has begun to produce.
The Enlightenment started with essentially philosophical insights spread by a new technology. Our period is moving in the opposite direction. It has generated a potentially dominating technology in search of a guiding philosophy. Other countries have made AI a major national project. The United States has not yet, as a nation, systematically explored its full scope, studied its implications, or begun the process of ultimate learning. This should be given a high national priority, above all, from the point of view of relating AI to humanistic traditions.
AI developers, as inexperienced in politics and philosophy as I am in technology, should ask themselves some of the questions I have raised here in order to build answers into their engineering efforts. The U.S. government should consider a presidential commission of eminent thinkers to help develop a national vision. This much is certain: If we do not start this effort soon, before long we shall discover that we started too late.
Henry A. Kissinger served as national security adviser and secretary of state to Presidents Richard Nixon and Gerald Ford.