Alphabet, the parent company of Google, recently showed off new artificial intelligence (AI) features for its services. The updates include a faster, cheaper version of its Gemini chatbot and enhancements to its flagship search engine, as Google competes with rival AI developers such as OpenAI.
At its annual I/O event in Mountain View, California, Google introduced several AI advancements. One is Gemini 1.5 Flash, a lighter-weight member of the Gemini 1.5 model family that runs faster and costs less to operate. Another is Project Astra, which lets users interact with their environment in real time through their smartphone camera. Google also added AI-generated headlines to search results.
Alphabet's CEO, Sundar Pichai, spoke positively about these AI updates, noting they could help grow the business. Google's efforts aim to match or surpass the capabilities of OpenAI's ChatGPT, known for its human-like responses.
Google DeepMind, another Alphabet division, is developing AI technologies for everyday tasks. For example, Project Astra was demonstrated identifying parts of an audio speaker and locating a misplaced pair of glasses. The company also suggested combining Project Astra with Gemini Live to create a voice and text assistant that sounds more natural than the current Google Assistant.
In video generation, Google previewed Veo, an AI model that creates high-quality videos, similar to OpenAI's push to bring its Sora video-generation model to the film industry.
Google also improved its Gemini 1.5 Pro model, doubling its context window, the amount of data the model can consider in a single request, to 2 million tokens. This means the AI can handle larger inputs, such as thousands of pages of text or lengthy video content, in one prompt.
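To illustrate what a long context window means in practice, here is a minimal sketch using the google-generativeai Python SDK to send an entire document to the model in one request. The model name, API key placeholder, and file path are illustrative assumptions rather than details from the announcement, and exact token limits vary by model version.

```python
# Minimal sketch: sending a very long document to a long-context Gemini model.
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that the chosen model accepts the long context described in the article.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

model = genai.GenerativeModel("gemini-1.5-pro")  # assumed model name

# Load a long document (hypothetical path) that would overflow a smaller context window.
with open("annual_report.txt", encoding="utf-8") as f:
    long_text = f.read()

# Check how many tokens the document consumes before sending it.
token_info = model.count_tokens(long_text)
print(f"Document size: {token_info.total_tokens} tokens")

# Ask the model to reason over the whole document in a single request.
response = model.generate_content(
    ["Summarize the key findings of this document:", long_text]
)
print(response.text)
```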
Additionally, Alphabet updated its computing chips and its search engine. It unveiled a sixth-generation tensor processing unit (TPU), an alternative to Nvidia's AI processors, which will be available to Google Cloud customers in late 2024.
For U.S. users, Google Search will soon use AI to organize search results for dining, recipes, and eventually movies and books. The AI Overviews feature, tested since last year, will synthesize information to answer complex queries.
Jacob Bourne, an analyst at eMarketer, said the reception of AI Overviews will show how well Google can adapt its search engine to the AI era. He highlighted the need to turn AI innovations into profitable products and services.
Google confirmed that ads will continue to appear alongside search results, and said AI Overviews will reach more than a billion users by year's end. In 2023, Alphabet reported revenues of $307.4 billion, mainly from ads on Google Search and other platforms.
Lastly, Google is testing a feature allowing users to ask questions about videos they upload to Google Search, similar to image interactions. This was shown with a broken record player, demonstrating how the feature could help diagnose issues.
Key Points
Alphabet is enhancing its AI technology with features like Gemini 1.5 Flash, a faster and more affordable AI model, and Project Astra, which lets users interact with their surroundings using their smartphone camera.
Google is updating its search engine to use AI for organizing search results and answering complex questions, while also introducing a new computing chip to improve AI processing power.
FAQs
Q1. What is Flash in Alphabet's new AI features?
Flash, officially Gemini 1.5 Flash, is a lighter member of the Gemini 1.5 model family, designed to be faster and more cost-effective while maintaining the performance of Alphabet's AI services.
Q2. What are the improvements in Google's search engine with the new AI updates?
Google has introduced AI-generated headlines that organize search results and a new feature, AI Overviews, that synthesizes information to answer more complex queries.
Q3. What is the Gemini Pro 1.5 model's significance?
The Gemini Pro 1.5 model is an AI that can process large amounts of data, like thousands of pages of text or extensive video content, to provide more accurate answers.