Envisioning ethical AI that promotes responsible content generation

As AI becomes central to the future of the Internet, ensuring its trustworthiness is key to a reliable digital realm

Ram Mohan Naidu and Shruti Shreya
  • Updated Sep 20, 2023 6:32 PM IST
Artificial Intelligence (AI) is reshaping our work, our interactions, and our daily lives, but it is a double-edged sword. While it can streamline the generation of many kinds of content (blogs, news articles, product branding, and more) with unparalleled speed, it also poses ethical dilemmas and risks, such as unchecked misinformation. As AI becomes central to the future of the Internet, ensuring its trustworthiness is key to a reliable digital realm. Addressing this requires a foundation of evidence-based principles and innovative solutions to manage the content-related challenges stemming from AI.

The positive contributions of AI to responsible content generation

AI has revolutionised media and journalism, offering tools ranging from voice-recognition transcription to auto-generated content. Used well, these innovations free journalists to dedicate more time to in-depth, impactful storytelling.
 
AI also plays a pivotal role in countering misinformation and ensuring content authenticity. By cross-referencing claims with trusted sources, it can flag potential inaccuracies for expert review and even alert readers to suspect content. Techniques such as natural language processing and sentiment analysis evaluate the context and tone of articles, helping to gauge their credibility and equipping readers to judge the trustworthiness of what they consume.

Furthermore, AI algorithms can be trained to automatically detect patterns and characteristics associated with fake news by learning from extensive datasets of known examples. This proactive identification of potential falsehoods enables prompt action to limit the reach of misinformation and mitigate its impact.
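To make the idea concrete, the pattern-learning approach described above can be sketched as a minimal bag-of-words Naive Bayes classifier. This is an illustrative toy, not a production misinformation detector: the training texts, labels, and tokeniser below are invented for the example, and real systems rely on far larger labelled datasets and far richer models.

```python
from collections import Counter
import math


def tokenize(text):
    """Very crude tokenizer; real systems use proper NLP tokenization."""
    return text.lower().split()


class NaiveBayes:
    """Minimal bag-of-words Naive Bayes classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.label_counts = Counter(labels)
        self.word_counts = {label: Counter() for label in self.label_counts}
        for text, label in zip(texts, labels):
            self.word_counts[label].update(tokenize(text))
        self.vocab = {w for counts in self.word_counts.values() for w in counts}
        return self

    def predict(self, text):
        tokens = tokenize(text)
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label, doc_count in self.label_counts.items():
            score = math.log(doc_count / total_docs)  # log prior
            n_words = sum(self.word_counts[label].values())
            for tok in tokens:
                # Laplace (add-one) smoothed log likelihood
                score += math.log(
                    (self.word_counts[label][tok] + 1) / (n_words + len(self.vocab))
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label


# Invented toy data, purely for illustration.
train_texts = [
    "miracle cure doctors hate this secret trick",
    "shocking secret the government hides miracle",
    "parliament passed the budget bill today",
    "central bank kept interest rates unchanged",
]
train_labels = ["fake", "fake", "real", "real"]

clf = NaiveBayes().fit(train_texts, train_labels)
print(clf.predict("miracle trick doctors hate"))  # classified "fake" on this toy data
```

On this toy data the model simply learns which words co-occur with each label; at scale, the same principle, learning statistical fingerprints of known false content, underpins the automated detection the article describes.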

Concerns around proliferation of misinformation

While the benefits of the technology are noteworthy, AI has limitations. The accuracy of AI models depends heavily on the quality of their training data, and biases in that data can skew results. By some estimates, AI-generated content could soon account for 99% or more of all information on the internet, further straining already overwhelmed content-moderation systems. Moreover, dozens of news sites filled with machine-generated content of dubious quality have already cropped up, with far more likely to follow.

Further, a recent study found that AI-generated misinformation can be even more persuasive than false content written by humans. The researchers analysed the responses of nearly 700 people to 220 tweets on trending topics such as Covid, evolution, 5G, and vaccines; some of the tweets were accurate, while others contained misinformation. The aim was to measure AI's impact on people's ability to spot fake news. The study found that tweets generated by large language models were not only easier to recognise as accurate when they presented correct information, but also better at misleading people when they were false.

The way forward

It is undeniable that the current pace of innovation in AI tools and platforms has created enormous opportunities. However, the concerns surrounding their use, and the unintended repercussions for user safety in particular and for society and the economy in general, cannot be left unaddressed. Creating holistic, collaborative, principle-based regulatory models to guide the development and deployment of these technologies is therefore crucial. Moreover, for such regulations and policies to be truly effective, they must involve a multi-stakeholder group of AI experts, researchers, and practitioners, with representation from academia and civil society.

In addition to responsible policy guidance, the principles of transparency and trustworthiness proposed through credible scholarship by researchers and academia must be integrated and applied by technology developers. A recent IBM survey sheds light on this issue: despite strong recognition of the importance of responsible AI, there is a gap between the intentions of business leaders and their actual implementation of meaningful action. Approximately 80% of CEOs are willing to integrate AI ethics into their company's business practices, yet fewer than a quarter of these organisations have operationalised those principles. Bridging these implementation gaps through holistic support from other stakeholders, so that the principles of trustworthy AI are actually put into practice, is paramount, both to realise the objectives of any regulation and to support civil society's efforts to tackle the growing information disorder and the other pressing downsides of this technology.

Humans are at the heart of the Internet, and everyone should benefit from an open and trustworthy Internet. Yet the Internet is going through a paradigm shift driven by key technological developments such as Artificial Intelligence. To change the status quo, it is important to rebuild trust in disruptive technologies like AI. To this end, an ecosystem-level, principle-based approach, one that appropriately maps the responsibilities of all players in the ecosystem and holds them accountable for implementation, is essential.


Ram Mohan Naidu is Member of Parliament, India and Shruti Shreya is Programme Manager, The Dialogue. (Views are personal)

(DISCLAIMER: Any views, thoughts, and opinions expressed by the author or authors are solely their own and do not reflect the views, opinions, policies, or position of Business Today)

Published on: Sep 20, 2023 6:30 PM IST