Exploring responsible AI: IIT Madras partners with Ericsson for joint research


Leaders from Ericsson Research and IIT Madras convened to deliberate upon the developments and advancements in Responsible AI, highlighting the growing importance of integrating AI ethics with technological innovation.

SUMMARY
  • The Indian Institute of Technology Madras (IIT Madras) has announced a partnership with Ericsson for joint research in Responsible AI
  • This collaboration signals a new era in AI research and its responsible applications
  • It highlights the growing importance of integrating AI ethics with technological innovation

In a promising stride towards the future of responsible artificial intelligence (AI), the Indian Institute of Technology Madras (IIT Madras) has announced a partnership with Ericsson to engage in collaborative research in the domain of Responsible AI. This collaboration signals a new era in AI research and its responsible applications, setting the stage for the convergence of cutting-edge technology and ethical considerations.

The Centre for Responsible AI (CeRAI) at IIT Madras hosted a Symposium on Responsible AI for Networks of the Future. Leaders from Ericsson Research and IIT Madras convened to deliberate upon the developments and advancements in Responsible AI, highlighting the growing importance of integrating AI ethics with technological innovation.

Ericsson's Commitment as a 'Platinum Consortium Member'

During the symposium held at the IIT Madras campus, Ericsson solidified its commitment to responsible AI by signing a memorandum of understanding (MoU) with CeRAI, designating the company as a 'Platinum Consortium Member' for a duration of five years.

CeRAI is an interdisciplinary research centre that aims to become a premier hub for both fundamental and applied research in Responsible AI. It seeks to have an immediate impact on the deployment of AI systems within the Indian ecosystem, while also setting a global benchmark for responsible AI research.

The Crucial Role of AI in Future Networks

In his address at the symposium, Dr Magnus Frodigh, Global Head of Ericsson Research, emphasised the critical role AI will play in the forthcoming 6G networks. These networks, expected to be autonomously driven by AI algorithms, promise to seamlessly bridge the physical and digital realms, enabling immersive augmented reality (AR) and virtual reality (VR) experiences. In this context, responsible AI practices become paramount to ensure trust, fairness, and privacy compliance in AI-controlled systems.

Dr Frodigh stated, "Our focus is on developing cutting-edge methods to enhance trust and explainability in AI algorithms for the public good. Our partnership with CeRAI at IIT Madras is aligned with the Indian Government's vision for the Bharat 6G program."

Embedding Responsible AI Principles in Future Networks

Professor B. Ravindran, Faculty Head of CeRAI at IIT Madras, emphasised the importance of embedding responsible AI principles from the inception of future networks. He acknowledged that future networks will facilitate easier access to high-performing AI systems and stressed the imperative of integrating responsible design principles into these systems.

Prof. Ravindran further elaborated, "With the advent of 5G and 6G networks, many critical applications are likely to be deployed on devices such as mobile phones. This requires new research to ensure that AI models and their predictions are explainable and provide performance guarantees appropriate to the applications they are deployed in."

Symposium Highlights and Current Research Activities

The symposium served as a platform for showcasing ongoing research activities at the Centre for Responsible AI. Key projects presented during the event included:

1. Large Language Models (LLMs) in Healthcare: This project focuses on detecting biases in LLMs, developing scoring methods for real-world applicability, and reducing biases in these models. Custom scoring methods are being designed based on the AI Risk Management Framework (AI RMF) set forth by the U.S. National Institute of Standards and Technology (NIST).

2. Participatory AI: Addressing the black-box nature of AI, this project explores governance mechanisms inspired by fields such as town planning and forest rights. It aims to involve stakeholders in providing constructive inputs for better AI customisation, enhancing accuracy, reliability, and addressing potential negative impacts.

3. Interpretability of Attention-Based Models: The project delves into the interpretability of complex attention-based models. It seeks to understand when these models are accurate but lack interpretability, develop algorithms to improve their interpretability, and uncover patterns in the data they tend to learn.

4. Multi-Agent Reinforcement Learning for Intent-Based Networks: Intent-based network management is gaining traction, and this project explores the use of Multi-Agent Reinforcement Learning (MARL) to handle complex coordination and conflicts in network intents. It leverages explainability and causality to encourage cooperation among agents.
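To make the interpretability question in the third project more concrete, the sketch below shows a toy single-head scaled dot-product attention computation and an entropy measure over its weights. This is purely illustrative and not from CeRAI's research: the function name, dimensions, and the use of attention-weight entropy as an interpretability proxy are assumptions for the example.

```python
import numpy as np

def attention_weights(Q, K):
    """Scaled dot-product attention weights: softmax over the keys for each query."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable softmax along the key axis
    scores -= scores.max(axis=-1, keepdims=True)
    w = np.exp(scores)
    return w / w.sum(axis=-1, keepdims=True)

# Toy setup: 3 query tokens attending over 4 key tokens, dimension 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 8))
K = rng.normal(size=(4, 8))
W = attention_weights(Q, K)

# Each row of W is a probability distribution over keys. A common (if
# imperfect) interpretability probe is its entropy: diffuse, high-entropy
# rows correspond to attention patterns that are harder to read off.
entropy = -(W * np.log(W)).sum(axis=-1)
print(W.shape, entropy)
```

Probes like this only inspect what a model attends to, not why; the project's harder goal, as described, is handling cases where such weights are accurate yet uninformative.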


Published on: Sep 26, 2023, 2:17 PM IST