
Google has renamed its AI large language model interface Bard to Gemini and has also launched an advanced version of the product that will compete directly with OpenAI's ChatGPT Plus. While sharing the reasons behind the product's name change in an interview with the US network CNBC, Google CEO Sundar Pichai also said that he was using Gemini to brush up his coding skills.
"We have been having fun with it. I have been brushing up my coding skills," Pichai told the channel. Pichai also emphasised how Gemini can make sense of images and cited an interesting use-case scenario.
"A friend of mine wanted to put a house up for sale. He put a few pictures in it [Gemini] and asked it to write a copy. It understood the architecture of the home, it looked at the furnishings. It definitely wrote a better copy than either of us could have. When I find interesting things on my phone am looking at, I just ask about it and it can give me added information. All of that is fascinating to see," Pichai said.
In the interview, Pichai also mentioned why Google had decided to change the name from Bard to Gemini: "Gemini is our approach, overall, in terms of how we are building our most capable and safe and responsible AI models. It's the frontier of the technology. Bard was the most direct way people could interact with their models. So, it made sense to just evolve it to be Gemini because you're actually talking directly to the underlying Gemini model when you use it. It will also be the way by which we will keep advancing our model and users can use it directly. So, we thought the name change made sense," Pichai said.
He also touched on the prowess of Gemini Advanced, which is available to users on a paid subscription. "Well, Gemini Advanced has access to Ultra 1.0, which is our most capable model to date," Pichai said. He added: "It just gives you more capabilities. It is particularly good at complex inquiries and multi-turn queries, and it has very good Workspace integration. It is built from the ground up to be multimodal. So, when you attach images and queries, it really shines."
Pichai is particularly fascinated by Gemini's ability to make sense of images. "For me it was really when you gave it a series of images, and it really makes sense of it, almost understanding it as video. And can answer questions related to that," he said.
Google's large language model has been trained not just on text but also on audio, images, video and code, according to Pichai.
"In the training data, we included not just text, but also audio, images, video, code. So that plays out in the model when you test it that way. So, it kind of gives you a window into the future. As humans we see the world with the richness of information in front of us. And so we are getting our models to behave in that same way and I think that represents the future frontier."
According to the Google CEO, the era of Gemini has just begun. "Today we are giving it to consumers. We will share more details for developers and enterprises...I view 2024 as the beginning of our Gemini Era," Pichai said.