A human just triumphed over IBM’s six-year-old AI debater

Harish Natarajan and IBM’s AI system, Project Debater, began by preparing arguments for and against the resolution, “We should subsidize preschool”. Both sides had only 15 minutes to prepare, following which each delivered a four-minute opening statement, a four-minute rebuttal, and a two-minute summary.

The winner of the event was determined by which side did more to persuade the audience of its arguments. But even though Natarajan was declared the winner, 58% of the audience said Project Debater “better enriched their knowledge about the topic at hand”, compared with 20% for Natarajan.

Results were tabulated via a real-time online poll.

IBM’s Project Debater engaged in the first-ever live, public debates with humans in June 2018, when it argued on whether we should subsidize space exploration.

Project Debater is touted as IBM’s next big milestone for AI, having been in the works for almost seven years. It can debate topics it has not been explicitly trained on, as long as these are well covered in the massive corpus that the system mines, which includes hundreds of millions of articles from numerous well-known newspapers and magazines. The system uses the Watson Speech-to-Text API (application programming interface).
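For readers curious what such a speech-to-text call looks like in practice, here is a minimal, hypothetical sketch using IBM’s publicly available ibm-watson Python SDK. The API key, service URL and audio file below are placeholders, and this is an illustration of the public API, not Project Debater’s own integration.

```python
# Illustrative sketch only: calling IBM's Watson Speech-to-Text service
# via the ibm-watson Python SDK. Credentials, URL and audio are placeholders.
from ibm_watson import SpeechToTextV1
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
stt = SpeechToTextV1(authenticator=authenticator)
stt.set_service_url("https://api.us-south.speech-to-text.watson.cloud.ibm.com")  # example region URL

# Hypothetical recording of an opponent's spoken argument
with open("opponent_speech.wav", "rb") as audio_file:
    result = stt.recognize(
        audio=audio_file,
        content_type="audio/wav",
    ).get_result()

# Print the transcript segments returned by the service
for chunk in result["results"]:
    print(chunk["alternatives"][0]["transcript"])
```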

A global IBM Research team led by IBM’s Haifa, Israel lab endowed Project Debater with three capabilities. First, data-driven speech writing and delivery. Second, listening comprehension that can identify key claims hidden within long continuous spoken language. And third, modelling human dilemmas in a unique knowledge graph to enable principled arguments.

But what is so exciting about machines beating human beings at debates and games, other than showcasing the prowess of technology companies?

Consider these developments:

■ In 1997, IBM’s supercomputer Deep Blue defeated the then world chess champion, Garry Kasparov.

■ In March 2016, Alphabet-owned AI firm DeepMind’s computer programme, AlphaGo, beat Go champion Lee Sedol.

■ On 7 December 2017, AlphaZero, modelled on AlphaGo, started with only the rules of chess and took just four hours of self-play to master the game well enough to defeat the world’s strongest open-source chess engine, Stockfish.

The AlphaZero algorithm is a more generic version of the AlphaGo Zero algorithm. It uses reinforcement learning, a training method in which the system learns by trial and error, guided by rewards and penalties, rather than from labelled examples. AlphaGo Zero did not need to train on human amateur and professional games to learn how to play the ancient Chinese game of Go; it learnt entirely through self-play.
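As a rough illustration of learning from rewards and penalties, here is a tiny tabular Q-learning sketch on a toy problem. It is a generic reinforcement-learning example, not DeepMind’s AlphaZero algorithm, which combines self-play with Monte Carlo tree search and deep neural networks.

```python
# A minimal, generic reinforcement-learning illustration: tabular Q-learning
# on a toy one-dimensional walk. NOT AlphaZero, just the reward/penalty idea.
import random

N_STATES = 5          # positions 0..4; reaching position 4 ends the episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated future reward for each (state, action) pair
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])

        next_state = min(max(state + action, 0), N_STATES - 1)
        # Reward of +1 for reaching the goal, small penalty for every other step
        reward = 1.0 if next_state == N_STATES - 1 else -0.01

        # Q-learning update: nudge the estimate towards reward + discounted future value
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy should prefer stepping right from every state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Over many episodes the table of state-action values settles so that stepping towards the goal scores highest, showing in miniature how reward signals alone, with no human examples, can shape a policy.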

Further, the new version not only learnt the game from scratch but also defeated AlphaGo, until then the world’s strongest Go player, in October 2017.

Moreover, in July 2018, AI bots beat humans at the video game Dota 2. Published by Valve Corp., Dota 2 is a free-to-play multiplayer online battle arena video game and is one of the most popular and complex e-sports games. Professionals train throughout the year to earn part of Dota’s annual $40 million prize pool, the largest of any e-sports game. Hence, a machine beating such players underscores the power of AI.

The AI bots, though, lost to professional players at Dota 2, which has been actively developed for over a decade, with its game logic implemented in hundreds of thousands of lines of code. This logic takes milliseconds per tick to execute, versus nanoseconds for chess or Go engines. The game is also updated about once every two weeks.

IBM’s Project Debater and the Dota 2 bots have lost to humans, but given the lessons learnt from DeepMind’s AlphaGo, this will not be the last we hear from AI-powered machines. The game has just begun.

Originally appeared on: TheSpuzz
