OpenAI CTO Mira Murati returned to her alma mater, Dartmouth College, this week for a thought-provoking conversation on the future of artificial intelligence and its potential impact on society. During the event, which was held in Dartmouth’s newly constructed engineering building, Murati, a 2012 graduate of the Thayer School of Engineering, engaged in a wide-ranging discussion, touching on topics from her journey at OpenAI to the ethical considerations surrounding large language models and precision health.
However, it was one particular comment that sparked considerable debate among those in attendance and online. When asked about the potential for AI-driven job displacement, Murati stated, "Some creative jobs maybe will go away, but maybe they shouldn't have been there in the first place."
Murati's remark, while seemingly casual, touched a nerve in an era of rapid technological advancement. The prospect of AI automating tasks traditionally performed by humans, particularly in creative fields, has raised concerns about job security and the future of work.
While acknowledging the potential for job losses, Murati appeared to suggest that certain creative roles might be inherently inefficient or unnecessary, and that their elimination by AI could therefore be a positive development.
Despite her candid acknowledgement of potential job displacement, Murati emphasised the collaborative nature of AI tools like ChatGPT and DALL-E. She suggested that these technologies, rather than replacing human creativity, actually serve to enhance and expand it, providing new avenues for artistic expression and problem-solving. "It's a tool, right?" she remarked. "It certainly can do that as a tool, and I expect that we will collaborate with it, and it's going to make our creativity expand."
She also highlighted the potential for AI to democratise creativity, making it more accessible to a broader range of individuals who might not have had the resources or training to pursue their creative aspirations in the past. "The first part of anything that you're trying to do," she explained, "whether it's creating new designs, whether it's coding, or writing an essay or, you know, concepts in topology, you can just learn about these things and interact with them in a much more intuitive way, and that expands your learning."
The discussion at Dartmouth also delved into the ethical considerations surrounding AI development. Murati acknowledged the importance of carefully managing AI systems, particularly as their capabilities advance. She stressed the need for a multi-stakeholder approach to regulation, involving collaboration between developers, policymakers, and the wider public, and noted that difficulties often surface only once models are deployed: "You can have this amazing model, commercial partners take it and go build amazing products on top of it, and then we found out that that's actually very hard."
She highlighted the crucial role of organisations like OpenAI in driving research and proactively mitigating potential risks associated with AI. "We're thinking a lot about this," she explained. "It's definitely real that you will have AI systems that will have general capabilities, connect to the internet, talk to each other, agents connecting to each other and doing tasks together, or agents working with humans and collaborating seamlessly."