A Democracy of Data
    Digital assistants have arrived, and the future is here: one of the teaching assistants for a Georgia Tech computer science course turned out to be a robot.
    By Ben Whitford
    Students in a 2016 computer science course at Georgia Tech got a surprise as the semester was wrapping up: It was revealed that one of their teaching assistants, a friendly but serious-minded young woman named Jill Watson, was a robot. The students had never met Watson, but felt they knew her. Over the course of the semester she had fielded hundreds of inquiries posted to the class’ digital bulletin board, offering homework tips, leading online discussions and winning praise for her quick, helpful responses.
But unlike the other teaching assistants, Watson was actually a “chatbot” — a virtual assistant created by Professor Ashok Goel to reduce the strain on his human helpers. “I was flabbergasted,” one student said after the big reveal. “Just when I wanted to nominate Jill Watson as an outstanding TA,” another declared. Goel’s teacher-bot offers a glimpse of a possible future for the world’s workplaces. The same techniques Goel used to help students and colleagues could easily be adapted to the needs of a human resources division, offering unflagging, customized and virtually instant support to employees, says Bill Meisel, a consultant who has researched the rise of digital assistants in the workplace. “A natural-language chatbot, available to employees on mobile devices or a website, could automate much of the burden of answering employee questions without forcing the employee to wade through material to find the answers or require the time of an HR professional,” he says. Meisel isn’t just theorizing. Thanks to the efforts of tech giants like Apple, Amazon and Microsoft, along with a host of smaller companies, the robotic colonization of the workplace is well underway. Forrester reported in 2017 that 41 percent of businesses were already using or developing AI tools; three years from now at least 843 million enterprise users are expected to be using digital assistants in the workplace. Many digital assistants focus on the consumer, and some are more gimmick than game-changer — Taco Bell’s “Tacobot,” for instance, lets Slack users order lunch via a chatbot, but still requires a human to pick up the order. Still, the future is bright for sales- and service-oriented bots: By 2020, a Gartner report predicts, customers will handle 85 percent of their dealings with businesses without interacting with a human.

‘A Democracy of Data’

As technologies evolve, bots are expected to become a bigger part of companies’ strategic planning around HR and even management functions.
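The kind of natural-language FAQ bot Meisel describes could, in its simplest form, match an employee's question against a small question bank. The sketch below is a deliberately minimal illustration; the FAQ entries, the word-overlap scoring and the fallback reply are all invented for the example, not taken from any real product.

```python
# Minimal sketch of a natural-language FAQ bot of the kind Meisel describes.
# All FAQ entries and the scoring rule are invented for illustration.

FAQ = {
    "how many vacation days do i get": "Full-time employees accrue 15 vacation days per year.",
    "when is payday": "Payroll runs on the 15th and the last business day of each month.",
    "how do i enroll in health insurance": "Open enrollment is in November; see the benefits portal.",
}

def answer(question: str) -> str:
    """Return the FAQ answer whose stored question shares the most words with the input."""
    words = set(question.lower().replace("?", "").split())
    best = max(FAQ, key=lambda q: len(words & set(q.split())))
    if not words & set(best.split()):
        # Nothing matched at all: hand off rather than guess.
        return "Let me route that to a human HR specialist."
    return FAQ[best]

print(answer("When is payday this month?"))
# prints: Payroll runs on the 15th and the last business day of each month.
```

Production systems would replace the word-overlap heuristic with a trained language model, but the shape is the same: interpret the question, retrieve an answer, and escalate to a human when confidence is low.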
Such tools have the potential to be a democratizing force by giving employees frictionless access to information and helping them to work smarter and make the most of opportunities, says Roberto Masiero, senior vice president of ADP Innovation Labs. ADP’s own chatbots, which it uses internally and is testing with around 2,000 employees at partner organizations, can issue reminders, offer career tips and provide workers with access to HR information on a 24/7 basis. “It becomes an enabler,” Masiero says. “It creates a democracy of data that didn’t exist before.” And it’s not just ADP that sees enormous potential for enterprise-focused digital assistants. Microsoft and Amazon are both fighting to bring voice-operated assistants into the workplace, in the hope that workers will one day use Cortana or Alexa to manage their calendars, handle to-do lists and carry out some job functions. Other companies are developing more specialized tools. Voicera recently launched a voice-operated digital assistant called Eva that can take notes during meetings and send reminders based on the discussions it overhears. And IBM is using AI to improve talent management, saying it envisions a future in which “every employee has a personal mentor.” Bots could also have a big role to play in onboarding and training. A Navy project found that recruits who received IT training from a digital tutor subsequently outperformed human-trained recruits, and after seven weeks of training could perform at a level that matched that of a specialist with three years of on-the-job experience.

Bracing for a Potential Belly-Flop

Still, not everyone’s excited about the promised AI-powered techno-utopia. An HR.com survey found that 89 percent of HR professionals either “detest,” “dislike” or “have some reservations” about AI adoption in the workplace.
Even former Google HR chief Laszlo Bock, who is upbeat overall about AI’s potential, says he’s a little freaked out by the business community’s rapid embrace of the technology. “It’s the feeling when you stand on top of a high dive for the first time: You know you probably won’t belly-flop, but you’re also a little terrified,” he says. There are many ways in which AI could make the workplace better and make employees “happier and more productive,” says Bock, who is now leading a startup called Humu with a goal of improving work “through science, machine learning, and a little bit of love.” But there are also plenty of scenarios in which artificial intelligence could alienate workers, reinforce existing institutional biases or impede the human interactions that make good leadership possible. “You gain a lot of insight into your organization by having human beings talk to people,” Bock says. “In most chatbots, you lose that insight and knowledge.” Part of the problem is that the term “artificial intelligence” is something of a misnomer — there’s nothing truly intelligent, and certainly nothing self-aware, about even the most sophisticated digital assistants. That means any humanity or empathy manifested by a bot ultimately rings hollow. That’s not necessarily always a bad thing, because removing the human element from workplace interactions might make it easier for employees to talk about sensitive issues. Computer scientists at DARPA, for instance, found people were more likely to open up to an AI-powered therapist when they believed they were talking to a soulless chatbot rather than to a human-supervised system. That led psychologists to develop Woebot, a digital assistant that checks in on mental health patients’ wellbeing and that gets franker responses than humans tend to receive. Other digital assistants specialize in discussing end-of-life issues, giving terminal patients a safe space to figure out their options. 
“It’s hard for humans to be nonjudgmental when they’re having these kinds of conversations. So some people might find it easier to talk to a chatbot about their thoughts,” the Rev. Rosemary Lloyd from The Conversation Project, an end-of-life charity, told New Scientist.

Finding Ways to Mitigate Risks

A bigger concern, Bock says, is that AI systems are prone to amplifying the conscious or unconscious biases of their designers and users. Bock notes that Microsoft launched a Twitter chatbot in 2016 that used machine learning to hone its conversational skills based on interactions with real people — and within 24 hours Twitter users had trained the bot to parrot horrendously racist views, forcing Microsoft to pull the plug. That’s an extreme example, but all AI systems rely on real-world data for their training and so by their nature tend to reinforce the status quo. Add in features that fine-tune algorithms based on user feedback and it’s all too easy for cognitive technologies to reinforce institutional biases, even as they offer a veneer of objectivity. “If all you’re doing is training on existing data, you’ll build systems that replicate the bias that already exists, and expand it into new arenas,” Bock warns. “The approach most organizations are taking to applying machine learning today will make problems of bias worse, not better.” Such problems can be planned for and avoided, but only if managers know what they’re doing, says Dave Millner, an executive consulting partner with IBM. Unfortunately, there’s a troubling gap between the perceived potential of AI systems and managers’ understanding of the technology. The HR.com survey found that most HR professionals believe AI will be widely used in their organizations over the next five years, with 70 percent saying chatbots will become an important way for employees to access HR information and more than half saying workers will take orders directly from computers, without the involvement of human bosses.
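Bock's warning that a system trained only on existing decisions will replicate the bias in those decisions can be made concrete with a toy example. The "hiring history" and the majority-vote "model" below are fabricated purely for this demonstration.

```python
# Toy illustration of training on biased historical data: the "model" simply
# learns the majority outcome per group, so it re-issues the historical skew.
# The hiring records are fabricated purely for this demonstration.

from collections import Counter

# Hypothetical past decisions: one group was historically favored.
history = (
    [("group_a", True)] * 80 + [("group_a", False)] * 20
    + [("group_b", True)] * 20 + [("group_b", False)] * 80
)

def train(records):
    """Learn the majority historical outcome for each group."""
    counts = {}
    for group, hired in records:
        counts.setdefault(group, Counter())[hired] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(history)
print(model)  # prints: {'group_a': True, 'group_b': False}
```

Nothing in the code is malicious; the skew comes entirely from the data, which is Bock's point: systems fitted to yesterday's decisions carry yesterday's biases forward, now with a veneer of algorithmic objectivity.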
Yet the same survey found that just 8 percent of HR professionals are confident that they understand AI technologies. That combination of ambition and ignorance is dangerous, Millner says, because it can prevent managers from engaging with AI in a clear-eyed way. “There are early adopters, and that’s great,” Millner says. “But there’s still a lot of ignorance, a lack of knowledge and understanding about what it can do and, more importantly, what it can’t do.” Millner says what is needed is a more considered approach that begins with education and culminates in the implementation of well-understood systems that are designed to avoid bias and other potential pitfalls. “It’s a risk, of course,” he says. “But if it’s introduced in an appropriate way, with testing and piloting and continual learning, then you can mitigate those risks.”

The Long-Term View: ‘A Net Positive’

Bock also says workplace AI can be a boon if it’s handled responsibly. “In the long term it’s going to be a net positive,” Bock says. “But in the short/medium term it all depends on the values and perspectives of the people building these systems.” The takeaway for decision-makers isn’t that AI is best avoided, Bock says. The key is to be cognizant of the risks and mindful in reaching for the potential rewards. “It’s a huge opportunity,” he says. “There’s a window in the next three to five years where the companies that are thoughtful about using this technology well are going to crush it, and absolutely win. There’s a huge amount of upside.” Rather than viewing digital assistants in the workplace as money-saving technologies that can automate away the need for human interaction, Bock says, companies should see them as a means to augment human decision making and to give managers more time for the difficult but important tasks of building relationships and nurturing their employees. “Fundamentally I’m an optimist,” he says. “A little machine learning can go a long way toward helping us be better leaders.”
    February 9, 2018