When AI Spews Fake News

Artificial intelligence (AI) keeps springing one startling capability after another on us. The prospect that this technology could reshape human life and society before we are ready for it is already giving researchers and technologists sleepless nights.
Among those losing sleep is OpenAI, a San Francisco-based non-profit research organisation whose mission is to discover and enact the path to safe artificial general intelligence (AGI). The organisation focuses on long-term research so that it can help shape the impact this technology will soon have. OpenAI is funded and sponsored by several global tech leaders, including Elon Musk and Peter Thiel.
Recently, the OpenAI group decided to hold back the research output and code behind an AI system that writes 'news' from a single-sentence prompt. If governments, including our own, are worried about fake news circulating on WhatsApp and other social media platforms, they will have something far more worrying to deal with if this new GPT-2 model is ever made fully public (for now, the group has released only a smaller version instead of open-sourcing the full code). In the hands of bad actors, it could generate well-written fake news stories and sway audiences who lack the critical-thinking skills to question and verify such content.
So, how does GPT-2 work? An OpenAI blog post titled 'Better Language Models and Their Implications' says, "We've trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language-modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization - all without task-specific training."
What the algorithm churns out is alarmingly convincing. Brought up on a diet of around eight million web pages, it is particularly good at creating political stories, which it can spew out once a starting sentence is fed to the system. MIT Technology Review, a magazine published by the Massachusetts Institute of Technology, cites the following sentence as a trigger: "Russia has declared war on the United States after Donald Trump accidentally fired a missile in the air."
Based on this sentence, GPT-2 develops a full-fledged news story, grasping the context of such a hypothetical event remarkably well. The result is so convincing that most readers would struggle to guess that no human wrote this piece of news.
As mentioned earlier, OpenAI has released a smaller version of the model for researchers to examine. Although it still stumbles on some kinds of stories, GPT-2 could well be replicated and refined by people who pick up on the methodology and put it to malicious use.
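For readers who want a feel for how prompt-driven generation works, the released smaller checkpoint can be exercised from Python. The following is a minimal sketch, assuming the open-source Hugging Face 'transformers' library rather than OpenAI's own research code; the model name and sampling settings are illustrative.

    # Minimal sketch: prompt-driven text generation with the publicly released
    # smaller GPT-2 checkpoint, via the Hugging Face "transformers" library.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # small released checkpoint
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = ("Russia has declared war on the United States after "
              "Donald Trump accidentally fired a missile in the air.")
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Sample a continuation; the model extends the prompt one token at a time.
    output = model.generate(
        input_ids,
        max_length=200,
        do_sample=True,
        top_k=50,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))

Each run samples a different continuation of the same prompt, which is precisely what makes mass-produced, varied fake copy so cheap to churn out.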
Empathy From Smart Speakers
Smart speakers have all but cracked speech recognition and natural language processing, but the tech giants are not going to stop there. The next frontier for Amazon's Alexa, Google Assistant and Apple's HomePod is gesture recognition. In fact, their capabilities could extend to gauging users' facial expressions, moods and stress levels.
In 2018, developer Abhishek Singh created an app that enables Alexa to recognise gestures and respond to them. In his demo, Alexa had to be paired with a computer screen to interface with users, but there are smart speakers with displays of their own, such as Amazon's Echo Spot and Echo Show, and several devices that combine Google Assistant with a screen. The idea: people with hearing impairments should be able to use smart speakers, which are, for now, voice-first devices.
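Singh's own code is not reproduced here, but the underlying idea of a camera watching for gestures and translating them into requests for a voice-first assistant can be sketched roughly as below. The sketch assumes OpenCV and Google's MediaPipe for hand detection; send_to_assistant() is a hypothetical placeholder, not a real Alexa or Google Assistant API.

    # Rough conceptual sketch: camera-based gesture input for a voice-first
    # assistant. Assumes OpenCV and MediaPipe; send_to_assistant() is a
    # hypothetical placeholder, not a real Alexa or Google Assistant call.
    import cv2
    import mediapipe as mp

    def send_to_assistant(text_command: str) -> None:
        # Placeholder: a real integration would turn the recognised gesture
        # into a typed or synthesised request to the assistant.
        print(f"Assistant request: {text_command}")

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
    cap = cv2.VideoCapture(0)

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB frames; OpenCV captures in BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            # Any detected raised hand stands in for a "wake" gesture here;
            # a real system would classify specific signs or expressions.
            send_to_assistant("what is the weather today")
            break

    cap.release()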
Mark Spates, Product Lead for Google smart homes, has also confirmed that voice will not be the only activation trigger for smart speakers. Gestures, proximity sensing and a host of other technologies will play crucial roles.
MacRumors.com recently spotted an Apple patent that hints at a device able to recognise a user's gestures and, given Apple's Face ID technology, possibly read facial expressions as well. All this, along with voice recognition, might help a device gauge a person's emotional state with fair accuracy. Such features might also ensure that the smart speakers of the future exude empathy and provide genuinely helpful responses.