11 Artificial Intelligence Principles

Tags: python, java, machine learning, artificial intelligence

If not, how do we teach values to an autonomous intelligence? Can we codify them, or simply enter them somewhere in the system? Or is it more of an iterative process, in which we correct parameters on the fly as systems learn on their own and potentially behave unexpectedly?

Teaching values to an AI, in order to preserve ourselves and avoid unwanted situations, does not seem practical, ideal, or even risk-free. Situations come to mind where an AI's behavior could be observed but not predicted, and there was no way to correct course. As we face these new complexities, behaviors, and potential uses, it makes sense to reflect on and explore what rules are needed. This is a pressing concern as the field expands, especially as AI usage grows and takes over critical applications.

We already have some frameworks and rules

The AI space has benefited from high-level foundational principles. One organization, The Partnership on AI, has published high-level tenets to preserve AI as a positive and promising force. That is a first step forward, but it does not address the day-to-day needs on the ground, especially as we go from experimenting with AIs to releasing them into the wild.

On the technology side, perhaps the best starting point, and the main gap today, is to define design principles intended first for the technologists building AIs and second for the teams managing those advanced intelligence systems.

There are many shades of AI

Of course, there is AI and then there is AI. Systems are not all created equal, nor built for the same purposes:

  • They have various levels of independence: from following a script under human supervision to independently allocating resources to robots in a factory

  • They have a wide range of responsibilities: from tweeting comments to managing armed drones

  • They operate in different environments: from a lab not connected to the internet to a live trading environment

Photo by Austin Distel on Unsplash

A checklist for the pioneers

There are many considerations when designing AI systems to keep the risk to society manageable, especially for scenarios involving high independence, key responsibilities, and sensitive environments:

  1. No black box: It has to be possible to look inside the program and review the code, logs, and timelines to understand how a system made a decision and which sources it checked. It should not all be machine code: users should be able to visualize and quickly understand the steps followed. This would avoid situations where programs have to be shut down because nobody can fix bad behaviors or unintended actions.

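To make this principle concrete, here is a minimal Python sketch of a decision audit trail; the `AuditedDecision` class and the loan-approval example are hypothetical illustrations, not part of the original article:

```python
import json
import time

class AuditedDecision:
    """Record inputs, sources, and steps behind one decision so a human
    can replay the reasoning later -- no black box."""

    def __init__(self, decision_id):
        self.record = {"id": decision_id, "started": time.time(), "steps": []}

    def log_step(self, description, **details):
        # Every step is stored in plain, human-readable form.
        self.record["steps"].append({"step": description, **details})

    def decide(self, outcome, confidence):
        self.record.update(outcome=outcome, confidence=confidence)
        # Persist the full trail so reviewers can reconstruct the timeline.
        with open(f"audit_{self.record['id']}.json", "w") as f:
            json.dump(self.record, f, indent=2)
        return outcome

# Hypothetical usage: a credit model explains how it reached a decision.
trail = AuditedDecision("loan-42")
trail.log_step("checked credit bureau", source="bureau_api", score=712)
trail.log_step("applied policy threshold", threshold=680)
trail.decide("approved", confidence=0.93)
```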

  2. Debug mode: Artificial intelligence systems should have a debug mode that can be turned on when the system makes mistakes, delivers unexpected results, or acts erratically. It would allow system administrators and support teams to quickly identify root causes and track more parameters, at the risk of temporarily slowing down processing.

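A debug mode can be as simple as a switch that raises log verbosity and starts recording extra parameters. A minimal sketch, where the hypothetical `score` function stands in for the real model:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system")

DEBUG_MODE = False  # flipped on by an operator when behavior looks wrong

def set_debug(enabled):
    """Raise verbosity and start tracking extra parameters on demand."""
    global DEBUG_MODE
    DEBUG_MODE = enabled
    logger.setLevel(logging.DEBUG if enabled else logging.INFO)

def score(features):
    result = sum(features.values()) / max(len(features), 1)
    if DEBUG_MODE:
        # The extra detail (and slowdown) is only paid while troubleshooting.
        logger.debug("features=%s intermediate=%s", features, result)
    logger.info("score=%.3f", result)
    return result

set_debug(True)  # the system starts acting erratically
score({"clicks": 0.9, "dwell": 0.1})
```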

  3. Fail-safe: For higher-risk cases, systems should have a fail-safe switch that reduces or turns off any capability creating issues that cannot be fixed on the fly or explained quickly, to prevent potential damage. It is similar to the quality control process in a factory, where an employee can stop an assembly line upon perceiving an issue.

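One way to implement such a switch is to wrap each risky capability so it can be disabled on repeated failures while the rest of the system keeps running; the `Capability` wrapper below is an illustrative sketch, not a prescribed design:

```python
class Capability:
    """Wrap one risky capability behind a fail-safe switch."""

    def __init__(self, name, action, max_errors=3):
        self.name = name
        self.action = action          # the function this capability runs
        self.max_errors = max_errors
        self.errors = 0
        self.enabled = True

    def run(self, *args, **kwargs):
        if not self.enabled:
            return None  # switched off; the rest of the system keeps running
        try:
            return self.action(*args, **kwargs)
        except Exception as exc:
            self.errors += 1
            if self.errors >= self.max_errors:
                # Pull the fail-safe, like stopping one assembly line.
                self.enabled = False
                print(f"[fail-safe] {self.name} disabled: {exc}")
            return None

pricing = Capability("dynamic_pricing", lambda order: 1 / 0)  # always fails
for order in range(4):
    pricing.run(order)  # disabled after the third error
```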

  4. Circuit breaker: For extreme cases, it must be possible to shut down the entire system. Some systems cannot be debugged in real time and can do more harm than good if left active. Stock exchanges have automated circuit breakers to manage volatility and avoid crashes. Automated trading systems using AI should have the same mechanisms in place, even if they have never had issues. That would prevent black swan situations, bugs, hacks, or any one-time event leading to erratic trading and massive losses.

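The classic circuit-breaker pattern from software engineering maps directly onto this idea. A minimal sketch, where the thresholds and cooling-off period are arbitrary placeholder values:

```python
import time

class CircuitBreaker:
    """Halt the whole system after repeated anomalies, the way an
    exchange halts trading, then cautiously retry after a cooldown."""

    def __init__(self, max_failures=5, cooldown_seconds=300):
        self.max_failures = max_failures
        self.cooldown = cooldown_seconds
        self.failures = 0
        self.opened_at = None  # None means the breaker is closed (normal)

    def allow(self):
        if self.opened_at is None:
            return True
        if time.time() - self.opened_at >= self.cooldown:
            # Half-open: let one attempt through to test the waters.
            self.opened_at, self.failures = None, 0
            return True
        return False  # tripped: all activity stays halted

    def record(self, anomaly):
        self.failures = self.failures + 1 if anomaly else 0
        if self.failures >= self.max_failures:
            self.opened_at = time.time()

breaker = CircuitBreaker()
if breaker.allow():
    breaker.record(anomaly=False)  # e.g. after placing one trade
```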

  5. Approval matrices: At some point in the future, systems will fully mimic human reasoning and follow complex decision trees, applying judgment and making decisions. Humans should be in the chain of command and approve key decisions, especially those that are not repetitive and require some independent thinking. It can be useful to keep the RACI framework in mind. If an autonomous bus sometimes takes a slight detour to avoid traffic, it should notify a human. If it decides to use a new road for the first time, a human should approve that decision to avoid accidents. Giving systems control over resources such as electric power, security, and internet bandwidth can prove problematic, especially if bugs, security flaws, or other issues are discovered.

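A sketch of how such an approval matrix might look in code, reusing the bus example; the action names and callbacks are hypothetical:

```python
ROUTINE_ACTIONS = {"minor_detour", "adjust_speed"}  # pre-approved, repetitive

def execute(action, do_it, notify, approve):
    """Route a decision through a simple RACI-style matrix: routine
    actions run and inform a human; novel ones block until approved."""
    if action in ROUTINE_ACTIONS:
        do_it()
        notify(f"took routine action: {action}")           # human is Informed
    elif approve(f"allow first-time action '{action}'?"):  # human is Accountable
        do_it()
    else:
        notify(f"action '{action}' rejected by human reviewer")

# The bus notifies on a routine detour but needs sign-off for a new road.
execute("minor_detour", do_it=lambda: None, notify=print, approve=lambda q: False)
execute("new_road", do_it=lambda: None, notify=print, approve=lambda q: False)
```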

  6. Keeping track of assets, delegation, and autonomy: Humans gain substantial leverage by transferring work to machines, especially when tasks become too complex, fast, expensive, or time-consuming. Algorithmic trading and real-time optimization solutions are good examples. However, users should never delegate decision-making completely, stay on the sidelines until issues arise, or lose track of which processes are automated or delegated to an AI. This is particularly relevant given the advances in Robotic Process Automation (RPA). As RPA expands (it is currently the fastest-growing enterprise software category), employees will start setting up their own routines, which could run in the cloud indefinitely without anybody's direct involvement. Companies should centrally track which routines are running and what AI agents are doing and creating. They should also implement policies preventing employees from running their own RPA from a USB drive or from the cloud to outsource tasks that should be controlled and owned by the company. Companies and users should also keep a back door to any bots or AI processes running in the background, in case the main account gets disabled and users are locked out, or in case of emergency if the regular account stops working.

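Central tracking could start with something as small as a registry that every bot or RPA routine must check into; the sketch below, with invented class and field names, flags routines that never report back:

```python
from datetime import datetime, timezone

class BotRegistry:
    """Company-wide inventory of automated routines and AI agents,
    so nothing keeps running in the cloud with no owner."""

    def __init__(self):
        self.bots = {}

    def register(self, bot_id, owner, purpose):
        self.bots[bot_id] = {
            "owner": owner,
            "purpose": purpose,
            "registered": datetime.now(timezone.utc).isoformat(),
            "last_seen": None,
        }

    def heartbeat(self, bot_id):
        self.bots[bot_id]["last_seen"] = datetime.now(timezone.utc).isoformat()

    def orphaned(self):
        # Routines that never check in are candidates for review or shutdown.
        return [bot_id for bot_id, meta in self.bots.items()
                if meta["last_seen"] is None]

registry = BotRegistry()
registry.register("invoice-bot-7", owner="finance-ops", purpose="match invoices")
print(registry.orphaned())  # ['invoice-bot-7'] until it sends a heartbeat
```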

  7. No completely virtual or decentralized environments: A while back, Kazaa, Skype, and other peer-to-peer networks touted the idea of fully decentralized systems that would not reside in one location but would instead be hosted fractionally across a multitude of computers, with the ability to replicate content and repair themselves as hosts dropped from the network. The same idea is one of the foundations of blockchain. It could obviously become a major threat if an autonomous AI system had this ability, went haywire, and became indestructible.

  8. Feedback with discernment: The ability to receive and process feedback can be a great differentiator. It already allows voice recognition AI to understand and translate more languages than any human could ever learn, and it can enable machines to understand accents and local dialects. However, in some applications, such as social media bots or a newsroom, consuming all the feedback and acting on it can prove problematic. Between fake news, trolls, and users testing a system's limits, processing feedback properly is challenging for most AIs. In those areas, AIs need filters and tools to use feedback optimally and remain useful. Tay, Microsoft's social bot, quickly went off the deep end after ingesting raw feedback and taunts, releasing offensive content to its followers because it could not tell legitimate inputs from malicious ones.

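A first line of defense is to vet feedback before it can influence the model at all; this sketch uses a naive reputation score and blocklist as placeholders for a real moderation pipeline:

```python
BLOCKED_TERMS = {"blocked_term_1", "blocked_term_2"}  # placeholder terms

def trustworthy(feedback, author_reputation):
    """Discard feedback from low-reputation sources or containing
    blocked content before it reaches any learning step (the Tay lesson)."""
    if author_reputation < 0.5:
        return False
    return not any(term in feedback.lower() for term in BLOCKED_TERMS)

def filter_feedback(batch):
    # Only vetted feedback is allowed to update the system.
    return [(text, rep) for text, rep in batch if trustworthy(text, rep)]

batch = [("great answer, thanks", 0.9),
         ("blocked_term_1 nonsense", 0.9),
         ("teach it something awful", 0.1)]
print(filter_feedback(batch))  # keeps only the first item
```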

  9. Annotated and editable code: Where machines write, edit, and update code, all generated code should automatically carry embedded comments explaining the system's logic behind the change. Humans, or another system, should be able to review and change the code if needed, with proper context and an understanding of prior revisions.

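If a system rewrites its own code, the simplest discipline is to force every generated change to carry provenance and rationale; a hypothetical `annotate` helper might look like this:

```python
from datetime import datetime, timezone

def annotate(generated_code, system, rationale):
    """Prepend provenance and rationale to machine-written code so a
    human (or another system) can review and safely edit it later."""
    header = (
        f"# generated-by: {system}\n"
        f"# generated-at: {datetime.now(timezone.utc).isoformat()}\n"
        f"# rationale: {rationale}\n"
    )
    return header + generated_code

print(annotate("def retry_limit():\n    return 5\n",
               system="self-tuner-v2",
               rationale="raised limit after observed timeout spike"))
```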

  10. Plan C: As with all systems, AIs in live environments have backups. Unlike typical IT systems, though, we are reaching a point where we cannot fully explore, understand, or test the AI systems we are building. If an AI system failed, went blank, or had major issues, we might revert to a backup that contains the same flaws and ends up reproducing the problematic behavior. In those cases, there should always be a plan C: switch back to human operations and use an alternative technology. As an example, a call center could handle thousands of automated AI-based voice interactions a day, dispatching users based on keywords. As volumes grow or peak, performance could degrade, calls could drop, and the system could eventually crash. The backup could be restored yet still contain the same flaw. Without a plan C, the only option would be to turn everything off and decline all calls; with one, incoming calls could be redirected to humans or to an alternative system.

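In code, a plan C is a fallback chain whose last link is always a human; the call-center example below is a sketch with stand-in functions:

```python
def handle_call(call, primary, backup, human_queue):
    """Try the AI, then its backup; if both fail (perhaps sharing the
    same flaw), route the call to a human instead of declining it."""
    for system in (primary, backup):
        try:
            return system(call)
        except Exception:
            continue  # the restored backup may reproduce the primary's bug
    human_queue.append(call)  # plan C: switch back to human operations
    return "queued for human operator"

def flawed_router(call):
    raise RuntimeError("same defect in primary and restored backup")

humans = []
print(handle_call({"keywords": ["billing"]}, flawed_router, flawed_router, humans))
```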

What could happen long-term?

Photo by Arseny Togulev on Unsplash

In the worst case, a dystopian scenario: we end up with sprawling systems that we do not control very well and have trouble fixing or managing, leading to catastrophes. Skynet and HAL 9000 come to mind, and many more dark scenarios can be found in Black Mirror on Netflix. Great innovation can lead to collisions. The quest for growth, efficiency, and profit can open the door to unsustainable risks.

In the best-case scenario, we manage to strike a balance between using intelligent machines for efficiency and ensuring prosperity for our civilization. It translates into better jobs and a higher quality of life for all.

What do you think? Are there reasons to fear unchecked autonomous intelligences? Are we doing it well today? What other principles can you think of?

Max Dufour is a Partner with Harmeda and leads strategic engagements for Financial Services, Technology, and Strategy Consulting clients. He can be reached directly at [email protected] or on LinkedIn.

Translated from: https://towardsdatascience.com/11-artificial-intelligence-principles-554fd8adb36a
