Google has been trying to spark an AI revolution for a long time. In 2016, just a few months after becoming Google's CEO, Sundar Pichai declared that the company would be "AI-first." He spent lavishly to build an all-star AI team whose discoveries made their way into products such as Google Translate and Google Photos, and he predicted that AI would have a greater impact than "electricity and fire."
So it was a shock when AI's big moment finally arrived and Google wasn't the one leading it.
OpenAI, a scrappy AI start-up backed by Microsoft, stole the spotlight by releasing ChatGPT, its chatbot, in November. ChatGPT was an overnight sensation, attracting millions of users and sparking a frenzy in Silicon Valley. For the first time in years, Google looked sluggish. It didn't help when Microsoft relaunched Bing with OpenAI's technology, ending the search engine's decade-long run as a punch line.
In his first extended interview with The New York Times since the launch of ChatGPT, Pichai said he was happy to see AI having a moment, even if Google was not the one driving it.

"It is an exciting moment regardless of whether or not we did it," Pichai said. "Of course, you'll always wish that you had done it yourself."
Google has had a frantic few months. Pichai insists it was not him who declared a "code red" in December, just after ChatGPT was released, instructing staff to devote more time and resources to AI projects. The company established a rapid review process to expedite the release of AI products, and Google's co-founders, Larry Page and Sergey Brin, who had been largely absent for years, re-engaged.

The company plans to launch a number of new AI-based products this year and to integrate the technology into many of its existing ones. This week, it began testing a Gmail feature that lets users draft emails with AI.

In the interview on Thursday, Pichai expressed both optimism and concern about the AI race. He was blunt in his assessment of Bard, Google's ChatGPT rival, which received tepid reviews after its release last week: "I feel like we took a souped-up Civic and put it in a race with more powerful cars." He also revealed that Bard would soon be upgraded from its current AI language model, LaMDA, to PaLM, a more powerful model.

He responded to an open letter signed by nearly 2,000 technology leaders, researchers and other experts, which urged companies to pause the development of their most powerful AI systems for at least six months to avoid "profound risks to society." Pichai does not agree with all the details of the letter, nor would he commit to slowing Google's AI efforts, but he said the concerns behind it were worth airing.

He also spoke of the "whiplash" he feels in today's AI debate, in which some people want companies like Google to be more aggressive, to release more products and to take greater risks, while others urge caution and want them to slow down. "You will see us be bold and ship things," he said, "but we are going to do it in a responsible way."

Here are some of Pichai's other remarks from the interview.
On the lukewarm initial reception of Google's Bard chatbot:
We wanted to be cautious when we put Bard out, so the reaction is not surprising. In some ways, it feels like we took a souped-up Civic and put it in a race with more powerful cars. What surprised me was how well it performed on so many different types of queries. We will be iterating quickly. We clearly have more capable models. We will upgrade Bard, perhaps as soon as this goes live, which will give it more capabilities in reasoning, coding and math. You will see the progress over the course of next week.
On whether ChatGPT's success was a surprise:
We had a lot of context on OpenAI. We knew the team was made up of some very talented people, some of whom had worked at Google previously, so we were familiar with their caliber. OpenAI's success was not a surprise to us. With ChatGPT, they found product-market fit. The response from users was, I think, a pleasant surprise, both for them and for many of us.
On concerns about the race among tech companies to advance AI:
Occasionally I get concerned when people use words like "race" or "being first." I have been thinking about AI for a long time. We are working with a technology that will be extremely beneficial but that also has the potential to cause harm. It is important to me that we all take a responsible approach.
On the return of Larry Page and Sergey Brin:
I've met with them a few times. Sergey has been spending time with our engineers. He is a mathematician with a deep interest in computers, and to him, the underlying technology, if I can use his words, is the most exciting thing he has ever seen. It's that exciting. I'm happy about it. They have always said, "Call us whenever you need to." And I do call them.
On the open letter signed by Elon Musk and nearly 2,000 AI experts urging companies to pause the development of powerful AI systems for at least six months:
It's important to me that we hear the concerns in this field. There are many thoughtful people behind the letter, people who have been thinking about AI for a long time. When I spoke to Elon eight years ago, he was deeply concerned about AI safety, and he has been concerned ever since. It is a valid concern. While I may not agree with everything in the letter, or with how it proposes to go about things, I share the spirit of it.
On whether he is worried about AGI (artificial general intelligence), an AI that surpasses human intelligence:
What is AGI? How do you define it? When do we get there?
All of those are excellent questions. In some ways it almost doesn't matter to me, because I know these systems will be extremely capable. Whether or not you reach AGI, you will have systems that can deliver benefits on a scale we have never seen before, and that may cause real harm. Does it count as AGI? That almost doesn't matter.
On why climate change activism gives him hope for AI:
AI is a big issue that affects us all. In that way it is similar to climate change: by definition it affects everyone, and neither can be solved by any one company or country acting unilaterally. We live in one world. That tells me that, in time, the collective will to deal with all of this responsibly will come.
This article was originally published in The New York Times.