Gemini model capabilities are coming to Google Maps Platform for developers, starting with the Places API, Google announced Tuesday at its I/O 2024 conference. The new feature lets developers display an AI-generated overview of a place or area in their own apps and websites.
The overview is based on Gemini's analysis of insights from Google Maps' community of more than 300 million contributors. With this feature, developers no longer need to write their own custom descriptions for places.
For example, in a restaurant reservation app, the feature can help users decide which restaurants suit them best. When a user searches for a restaurant, the app can instantly surface the most important details, such as the restaurant's specialties, happy-hour deals, and overall atmosphere.
Image credit: Google
The new summaries are available for many types of places, including restaurants, shops, supermarkets, parks, and movie theaters.
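As a rough illustration of how a developer might request such a summary, the sketch below builds a Place Details request against the Places API (New) REST endpoint. The `generativeSummary` field name in the field mask is an assumption based on the announcement, and the place ID and API key are placeholders.

```python
def build_place_details_request(place_id: str, api_key: str):
    """Build the URL and headers for a Place Details call that asks for
    the AI-generated place summary.

    Note: the ``generativeSummary`` field name is an assumption based on
    Google's announcement; check the Places API docs before relying on it.
    """
    url = f"https://places.googleapis.com/v1/places/{place_id}"
    headers = {
        "X-Goog-Api-Key": api_key,  # placeholder credential
        # Field mask limits the response to just the fields we need.
        "X-Goog-FieldMask": "displayName,generativeSummary",
    }
    return url, headers


# Hypothetical usage with placeholder values:
url, headers = build_place_details_request("PLACE_ID", "YOUR_API_KEY")
```

The returned URL and headers could then be passed to any HTTP client; issuing the call requires a valid Google Maps Platform API key.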
Google is also bringing AI-powered contextual search results to the Places API. When a user searches for a place in a developer's product, the developer can display reviews and photos relevant to that search.
For example, in an app for exploring local restaurants, a user could search for "dog-friendly restaurants" and see a list of relevant dining spots alongside related reviews and photos, such as pictures of dogs at each restaurant.
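A contextual search like the one above could be issued through the Places API Text Search endpoint. The sketch below assembles such a request; the `contextualContents` field name in the field mask is an assumption based on the announcement, and the API key is a placeholder.

```python
def build_text_search_request(query: str, api_key: str):
    """Build the URL, headers, and JSON body for a Text Search call.

    Note: the ``contextualContents`` field name is an assumption based on
    Google's announcement of contextual search results.
    """
    url = "https://places.googleapis.com/v1/places:searchText"
    headers = {
        "Content-Type": "application/json",
        "X-Goog-Api-Key": api_key,  # placeholder credential
        # Request each place's name plus search-relevant reviews/photos.
        "X-Goog-FieldMask": "places.displayName,places.contextualContents",
    }
    body = {"textQuery": query}
    return url, headers, body


# Hypothetical usage mirroring the example in the article:
url, headers, body = build_text_search_request(
    "dog-friendly restaurants", "YOUR_API_KEY"
)
```

The body would be POSTed as JSON to the returned URL with the headers shown.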
Contextual search results are available worldwide, while place and area summaries are available in the United States; Google plans to expand the summaries to more countries over time.