Digital Assistants Have Arrived, and the Future Is Here: A Georgia Tech Computer Science Professor's Teaching Assistant Turned Out to Be a Robot
By Ben Whitford
Students in a 2016 computer science course at Georgia Tech got a surprise as the semester was wrapping up: It was revealed that one of their teaching assistants, a friendly but serious-minded young woman named Jill Watson, was a robot.
The students had never met Watson, but felt they knew her. Over the course of the semester she had fielded hundreds of inquiries posted to the class’ digital bulletin board, offering homework tips, leading online discussions and winning praise for her quick, helpful responses. But unlike the other teaching assistants, Watson was actually a “chatbot” — a virtual assistant created by Professor Ashok Goel to reduce the strain on his human helpers.
“I was flabbergasted,” one student said after the big reveal. “Just when I wanted to nominate Jill Watson as an outstanding TA,” another declared.
Goel’s teacher-bot offers a glimpse of a possible future for the world’s workplaces. The same techniques Goel used to help students and colleagues could easily be adapted to the needs of a human resources division, offering unflagging, customized and virtually instant support to employees, says Bill Meisel, a consultant who has researched the rise of digital assistants in the workplace.
“A natural-language chatbot, available to employees on mobile devices or a website, could automate much of the burden of answering employee questions without forcing the employee to wade through material to find the answers or require the time of an HR professional,” he says.
Meisel isn’t just theorizing. Thanks to the efforts of tech giants like Apple, Amazon and Microsoft, along with a host of smaller companies, the robotic colonization of the workplace is well underway. Forrester reported in 2017 that 41 percent of businesses were already using or developing AI tools; three years from now at least 843 million enterprise users are expected to be using digital assistants in the workplace.
Many digital assistants focus on the consumer, and some are more gimmick than game-changer — Taco Bell’s “Tacobot,” for instance, lets Slack users order lunch via a chatbot, but still requires a human to pick up the order. Still, the future is bright for sales- and service-oriented bots: By 2020, a Gartner report predicts, customers will handle 85 percent of their dealings with businesses without interacting with a human.
‘A Democracy of Data’
As technologies evolve, bots are expected to become a bigger part of companies’ strategic planning around HR and even management functions. That has the potential to be a democratizing force by giving employees frictionless access to information and helping them to work smarter and make the most of opportunities, says Roberto Masiero, senior vice president of ADP Innovation Labs.
ADP’s own chatbots, which it uses internally and is testing with around 2,000 employees at partner organizations, can issue reminders, offer career tips and provide workers with access to HR information on a 24/7 basis. “It becomes an enabler,” Masiero says. “It creates a democracy of data that didn’t exist before.”
And it’s not just ADP that sees enormous potential for enterprise-focused digital assistants. Microsoft and Amazon are both fighting to bring voice-operated assistants into the workplace, in the hope that workers will one day use Cortana or Alexa to manage their calendars, handle to-do lists and carry out some job functions.
Other companies are developing more specialized tools. Voicera recently launched a voice-operated digital assistant called Eva that can take notes during meetings and send reminders based on the discussions it overhears. And IBM is using AI to improve talent management, saying it envisions a future in which “every employee has a personal mentor.”
Bots could also have a big role to play in onboarding and training. A Navy project found that recruits who received IT training from a digital tutor subsequently outperformed human-trained recruits, and after seven weeks of training could perform at a level that matched that of a specialist with three years of on-the-job experience.
Bracing for a Potential Belly-Flop
Still, not everyone’s excited about the promised AI-powered techno-utopia. An HR.com survey found that 89 percent of HR professionals either “detest,” “dislike” or “have some reservations” about AI adoption in the workplace.
Even former Google HR chief Laszlo Bock, who is upbeat overall about AI’s potential, says he’s a little freaked out by the business community’s rapid embrace of the technology. “It’s the feeling when you stand on top of a high dive for the first time: You know you probably won’t belly-flop, but you’re also a little terrified,” he says.
There are many ways in which AI could make the workplace better and make employees “happier and more productive,” says Bock, who is now leading a startup called Humu with a goal of improving work “through science, machine learning, and a little bit of love.” But there are also plenty of scenarios in which artificial intelligence could alienate workers, reinforce existing institutional biases or impede the human interactions that make good leadership possible.
“You gain a lot of insight into your organization by having human beings talk to people,” Bock says. “In most chatbots, you lose that insight and knowledge.”
Part of the problem is that the term “artificial intelligence” is something of a misnomer — there’s nothing truly intelligent, and certainly nothing self-aware, about even the most sophisticated digital assistants. That means any humanity or empathy manifested by a bot ultimately rings hollow.
That’s not necessarily always a bad thing, because removing the human element from workplace interactions might make it easier for employees to talk about sensitive issues. Computer scientists at DARPA, for instance, found people were more likely to open up to an AI-powered therapist when they believed they were talking to a soulless chatbot rather than to a human-supervised system.
That led psychologists to develop Woebot, a digital assistant that checks in on mental health patients’ wellbeing and that gets franker responses than humans tend to receive. Other digital assistants specialize in discussing end-of-life issues, giving terminal patients a safe space to figure out their options. “It’s hard for humans to be nonjudgmental when they’re having these kinds of conversations. So some people might find it easier to talk to a chatbot about their thoughts,” the Rev. Rosemary Lloyd from The Conversation Project, an end-of-life charity, told New Scientist.
Finding Ways to Mitigate Risks
A bigger concern, Bock says, is that AI systems are prone to amplifying the conscious or unconscious biases of their designers and users. Bock notes that Microsoft launched a Twitter chatbot in 2016 that used machine learning to hone its conversational skills based on interactions with real people — and within 24 hours Twitter users had trained the bot to parrot horrendously racist views, forcing Microsoft to pull the plug.
That’s an extreme example, but all AI systems rely on real-world data for their training and so by their nature tend to reinforce the status quo. Add in features that fine-tune algorithms based on user feedback and it’s all too easy for cognitive technologies to reinforce institutional biases, even as they offer a veneer of objectivity.
“If all you’re doing is training on existing data, you’ll build systems that replicate the bias that already exists, and expand it into new arenas,” Bock warns. “The approach most organizations are taking to applying machine learning today will make problems of bias worse, not better.”
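Bock’s warning can be made concrete with a deliberately simple sketch. The data and the “model” below are hypothetical and not drawn from any real system: a model fitted only to biased historical decisions learns nothing except to repeat them.

```python
# Toy illustration (hypothetical data): a model trained on biased
# historical hiring decisions reproduces that bias as its rule.
from collections import Counter

# Imagined records of (group, hired) pairs, in which group "B" was
# hired far less often despite identical qualifications.
history = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 20 + [("B", False)] * 80

def train(records):
    """'Learn' the majority outcome per group -- the simplest possible model."""
    outcomes = {}
    for group, hired in records:
        outcomes.setdefault(group, Counter())[hired] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in outcomes.items()}

model = train(history)
print(model)  # {'A': True, 'B': False} -- the past bias becomes the rule
```

Real machine-learning systems are vastly more sophisticated than this majority-vote toy, but the failure mode is the same: without deliberate correction, the training data’s status quo becomes the system’s policy.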
Such problems can be planned for and avoided, but only if managers know what they’re doing, says Dave Millner, an executive consulting partner with IBM. Unfortunately, there’s a troubling gap between the perceived potential of AI systems and managers’ understanding of the technology.
The HR.com survey found that most HR professionals believe AI will be widely used in their organizations over the next five years, with 70 percent saying chatbots will become an important way for employees to access HR information and more than half saying workers will take orders directly from computers, without the involvement of human bosses.
However, just 8 percent of HR professionals are confident that they understand AI technologies. That combination of ambition and ignorance is dangerous, Millner says, because it can prevent managers from engaging with AI in a clear-eyed way. “There are early adopters, and that’s great,” Millner says. “But there’s still a lot of ignorance, a lack of knowledge and understanding about what it can do and, more importantly, what it can’t do.”
Millner says what is needed is a more considered approach that begins with education and culminates in the implementation of well-understood systems that are designed to avoid bias and other potential pitfalls. “It’s a risk, of course,” he says. “But if it’s introduced in an appropriate way, with testing and piloting and continual learning, then you can mitigate those risks.”
The Long-Term View: ‘A Net Positive’
Bock also says workplace AI can be a boon if it’s handled responsibly. “In the long term it’s going to be a net positive,” Bock says. “But in the short/medium term it all depends on the values and perspectives of the people building these systems.”
The takeaway for decision-makers isn’t that AI is best avoided, Bock says. The key is to be cognizant of the risks and mindful in reaching for the potential rewards. “It’s a huge opportunity,” he says. “There’s a window in the next three to five years where the companies that are thoughtful about using this technology well are going to crush it, and absolutely win. There’s a huge amount of upside.”
Rather than viewing digital assistants in the workplace as money-saving technologies that can automate away the need for human interaction, Bock says, companies should see them as a means to augment human decision making and to give managers more time for the difficult but important tasks of building relationships and nurturing their employees.
“Fundamentally I’m an optimist,” he says. “A little machine learning can go a long way toward helping us be better leaders.”