Jill Watson has been a teaching assistant (TA) at the Georgia Institute of Technology for five years now, helping students day and night with all manner of course-related inquiries. But for all the hard work she has done, she still can’t qualify for outstanding TA of the year.
That’s because Jill Watson, contrary to many students’ belief, is not actually human.
Created back in 2015 by Ashok Goel, professor of computer science and cognitive science at the Institute, Jill Watson is an artificial system based on IBM’s Watson artificial intelligence software. Her role consists of answering students’ questions – a task she carries out with a remarkable 97% accuracy rate, for inquiries ranging from confirming the word count for an assignment to complex technical questions about the content of the course.
And she has certainly gone down well with students, many of whom, in 2015, were “flabbergasted” upon discovering that their favorite TA was not the helpful human they had taken her for, but in fact a cold-hearted machine.
What students found an amusing experiment is the sort of thing that worries many workers. Automation, we have been told time and again, will displace jobs; so are experiments like Jill Watson the first step towards unemployment for professionals?
In fact, it’s quite the contrary, Goel tells ZDNet. “Job losses are an important concern – Jill Watson, in a way, could replace me as a teacher,” he said. “But among the professors who use her, that question has never come up, because there is a huge need for teachers globally. Instead of replacing teachers, Jill Watson augments and amplifies their work, and that is something we actually need.”
The AI was originally developed for an online master’s in computer science, where students interact with teachers via a web discussion forum. In the spring of 2015 alone, Goel noticed, 350 students posted 10,000 messages to the forum; answering all of their questions, he worked out, would have taken a human teacher a full year of full-time work.
Jill Watson has only grown in popularity since 2015, said Goel, and she has now been deployed to a dozen other courses — building her up for a new class takes less than ten hours. And while the artificial TA, for now, is only used at Georgia Institute of Technology, Jill Watson could change the education game if she were to be scaled globally. With UNESCO estimating that an additional 69 million teachers are needed to achieve sustainable development goals, the notion of ‘augmenting’ and ‘amplifying’ teachers’ work could go a long way.
The automation of certain tasks is not such a scary prospect for those working in education. And perhaps neither is it a risk to the medical industry, where AI is already lending a helping hand with tasks ranging from disease diagnosis to prescription monitoring. It’s a welcome support, rather than a looming threat, as the overwhelming majority of health services across the world report staff shortages and lack of resources even at the best of times.
But of course, not all professions are in dire need of more staff. For many workers, the advent of AI-powered technologies seems to be synonymous with permanent lay-off. Retailers are already using robotic fulfillment systems to pick orders in their warehouses. Google’s project to build autonomous vehicles, Waymo, has launched its first commercial self-driving car service in the US, which in the long term will remove the need for a human taxi driver. Ford is even working on automating delivery services from start to finish, with a two-legged, two-armed robot that can walk around neighborhoods carrying parcels from the delivery vehicle right up to your doorstep.
Advancements in AI technology, therefore, don’t bode well for all workers. “Nobody wants to be out of a job,” says David McDonald, professor of human-centered design and engineering at the University of Washington. “Technological changes that impact our work, and thus, our ability to support ourselves and our families, are incredibly threatening.”
“This suggests that when people hear stories saying that their livelihood is going to disappear,” he says, “that they probably will not hear the part of the story that says there will be additional new jobs.”
Consultancy McKinsey estimates that automation will cause up to 800 million individuals around the world to be displaced from their jobs by 2030 – a statistic that will sound ominous, to say the least, to most of the workforce. But the firm’s research also shows that in nearly all scenarios, and provided that there is sufficient investment and growth, most countries can expect to be at very near full employment by the same year.
The potential impact of artificial intelligence needs to be seen as part of the bigger picture. McKinsey highlighted that one of the countries that will face the largest displacement of workers is China, with up to 12% of the workforce needing to switch occupations. But although 12% seems like a lot, the consultancy noted, it’s still relatively small compared with the tens of millions of Chinese who have moved out of agriculture in the past 25 years.
In other words, AI is only the latest chapter in the long history of technological progress – and as with all previous advancements, the new opportunities that AI opens up will balance out the skills that the technology makes obsolete. At least that’s the theory; one that Brett Frischmann explores in the book he co-authored, Re-engineering Humanity. For Frischmann, automation is a project that has been going on forever – and more recent innovations simply build on the efficiencies pioneered by the likes of Frederick Winslow Taylor and Henry Ford.
“At one point, human beings used spears to fish. As we developed fishing technology, fewer people needed that skill and did other things,” he says. “The idea that there is something dramatically different about AI has to be looked at carefully. Ultimately, data-driven systems, for example as a way to optimize factory outputs, are only a ramped-up version of Ford and Taylor’s processes.”
Seeing AI as simply the next chapter of tech is a common position among experts. The University of Washington’s McDonald is equally convinced that in one form or another, we have been building systems to complement work “for over 50 years”.
So where does the big AI scare come from? A large part of the problem, as is often the case, comes down to misunderstanding. There is one point that Frischmann was determined to clarify: people tend to think, wrongly, that the technology is a force with its own agenda — one that involves turning against us and stealing our jobs.
“It’s really important for people to understand that the AI doesn’t want anything,” he said. “It’s not a bad guy. It doesn’t have a role of its own, or an agenda. Human beings are the ones that create, design, damage, deploy, control those systems.”
In reality, according to McKinsey, fewer than 5% of occupations can be entirely automated using current technology. But over half of jobs could have 30% of their activities taken on by AI. Rather than robots taking over, therefore, it looks like the future will be about task-sharing.
Gartner previously reported that by 2022 one in five workers engaged in non-routine tasks will rely on AI to get work done. The research firm’s analysts forecast that combining human and artificial intelligence would be the way forward to maximize the value generated by the technology. AI, said Gartner, will assist workers in all types of jobs, from entry-level to highly skilled.
The technology could become a virtual assistant, an intern, or another kind of robo-employee; in any case, it will lead to the development of an ‘augmented’ workforce, whose productivity will be enhanced by the tool.
For Gina Neff, associate professor at the Oxford Internet Institute, delegating tasks to AI will only bring about a brighter future for workers. “Humans are very good at lots of tasks, and there are lots of tasks that computers are better at than we are. I don’t want to have to add large lists of sums by hand for my job, and thankfully I have a technology to help me do that.”
“Increasingly, the conversation will shift towards thinking about what type of work we want to do, and how we can use the tools we have at our disposal to enhance our capacity, and make our work both productive and satisfying.”
As machines take on tasks such as collecting and processing data, which they already carry out much better than humans, workers will find that they have more time to apply themselves to projects involving the cognitive skills – logical reasoning, creativity, communication – that robots (at least currently) lack.
Using technology to augment the human value of work is also the prospect that McDonald has in mind. “We should be using AI and complex computational systems to help people achieve their hopes, dreams and goals,” he said. “That is, the AI systems we build should augment and extend our social and our cognitive skills and abilities.”
There is a caveat. For AI systems to effectively bolster our hopes, dreams and goals, as McDonald said, it is crucial that the technology is designed from the start as a human-centered tool – one that is made specifically to fulfil the interests of the human workforce.
Human-centricity might be the next big challenge for AI. Some believe, however, that so far the technology has not done such a good job of enhancing humans. In Re-engineering Humanity, Frischmann, for one, does not do AI any favours.
“Smart systems and automation, in my opinion, cause atrophy, more than enhancement,” he argued. “The question of whether robots will take our jobs is the wrong one. What is more relevant is how the deployment of AI affects humans. Are we engineering unintelligent humans, rather than intelligent machines?”
It is certainly a fine line, and going forward, will be a delicate balancing act. For Oxford Internet Institute’s Neff, making AI work in humans’ best interest will require a whole new category of workers, which she called “translators”, to act as intermediaries between the real world and the technology.
For Neff, translators won’t be roboticists or “hot-shot data scientists”, but workers who understand the situation “on the ground” well enough to see how the technology can be applied efficiently to complement human activity.
In one example of bridging the gap between humans and technology, Amazon last year launched an initiative to help retrain up to 1,300 employees who were being made redundant as the company deployed robots to its US fulfilment centres. The e-tailer announced that it would pay workers $10,000 to quit their jobs and set up their own delivery businesses, in order to tackle retail’s infamous last-mile logistics challenge. Tens of thousands of workers have since applied to the program.
In a similar vein, Gartner recently suggested that HR departments start including a section dedicated to “robot resources”, to better manage employees as they start working alongside robotic colleagues. “Getting an AI to collaborate with humans in the ways that we collaborate with others at work, every day, is incredibly hard,” said McDonald. “One of the emerging areas in design is focused on designing AI that more effectively augments human capacity with respect for people.”
Whether through human-centred design, participatory design, or user-experience design: for McDonald, humans have to be the main focus from the very first stage of creating an AI.
And then there is the question of communication. At the Georgia Institute of Technology, Goel recognised that AI “has not done a good job” of selling itself to those who are not inside the experts’ bubble.
“AI researchers like me cannot stay in our glass tower and develop tools while the rest of the world is anxious about the technology,” he said. “We need to look at the social implications of what we do. If we can show that AI can solve previously unsolvable problems, then the value of AI will become clearer to everyone.”
His dream for the future? To get every teacher in the world a Jill Watson assistant within five years; and, in the next decade, to give every parent access to one too, to help children with after-school questions. In fact, it’s increasingly looking like every industry, not only education, will get its own version of a Jill Watson – and that we needn’t worry about her coming for our jobs anytime soon.