Google is expanding Gemini Intelligence on Android with agentic capabilities that can fill forms and transcribe speech, while also launching a new widget-creation tool that lets users build custom widgets without writing code. The features arrive on Samsung Galaxy and Google Pixel phones this summer.
Gemini Intelligence, Google's on-device AI layer introduced earlier this year, previously focused on summarizing notifications and answering questions. The new capabilities add active form filling—where the model reads and completes application fields—and Gboard-based dictation that transcribes speech directly into input fields. These are narrow but concrete use cases: instead of manually typing information into tax forms, rental applications, or sign-up pages, users can ask Gemini to do it. The dictation integration leverages Google's existing Gboard keyboard, positioning the feature as a natural extension of input systems already present on billions of Android devices.
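Google has not published how Gemini's form filler maps fields to user data, but the concept can be sketched with a toy label-matcher. Everything below — the class, the field model, the keyword table — is hypothetical illustration, not Google's implementation:

```kotlin
// Illustrative only: a toy mapper from form-field labels to stored profile
// values. Gemini's actual on-device pipeline is not public; every name here
// is hypothetical.
data class FormField(val label: String, val value: String? = null)

class ProfileFormFiller(private val profile: Map<String, String>) {
    // Keywords a field label might contain, mapped to profile keys.
    private val keywords = mapOf(
        "name" to "fullName",
        "email" to "email",
        "phone" to "phone",
        "address" to "streetAddress",
    )

    // Return a copy of each field, filled in when a keyword matches its label;
    // unmatched fields stay null rather than being guessed.
    fun fill(fields: List<FormField>): List<FormField> = fields.map { field ->
        val key = keywords.entries
            .firstOrNull { field.label.lowercase().contains(it.key) }
            ?.value
        field.copy(value = key?.let(profile::get))
    }
}
```

A model-driven system would handle ambiguous labels far more flexibly than this keyword table, but the shape of the task — read a field, decide what it asks for, supply stored data — is the same.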
The widget builder, called "Create My Widget," operates without requiring traditional code. Users describe what they want—layout, functionality, data source—and the system generates a working widget. Google's framing emphasizes "vibe coding," an approach where users specify intent and visual preference rather than writing Kotlin or Java. The builder will initially ship on the latest Samsung Galaxy and Pixel devices. The narrowness of the launch—specific flagship models rather than all Android phones—suggests Google is treating this as a controlled rollout, likely to test the feature's stability and gather feedback before broader distribution.
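Google has not described what "Create My Widget" produces under the hood. One plausible shape — offered purely as a guess, with every name and field invented for illustration — is a prompt compiled into a declarative widget spec:

```kotlin
// Illustrative only: a declarative spec that a natural-language widget
// builder might generate from a prompt. Names and structure are guesses,
// not Google's actual "Create My Widget" format.
enum class Layout { COMPACT, LIST, GRID }

data class WidgetSpec(
    val title: String,
    val dataSource: String,     // e.g. a weather or calendar provider
    val layout: Layout,
    val refreshMinutes: Int,
)

// A trivial "compiler" from a few prompt keywords to a spec; a real system
// would use an on-device model rather than string matching.
fun specFromPrompt(prompt: String): WidgetSpec {
    val p = prompt.lowercase()
    return WidgetSpec(
        title = if ("weather" in p) "Weather" else "My Widget",
        dataSource = if ("weather" in p) "weather-provider" else "static",
        layout = when {
            "list" in p -> Layout.LIST
            "grid" in p -> Layout.GRID
            else -> Layout.COMPACT
        },
        refreshMinutes = if ("hourly" in p) 60 else 30,
    )
}
```

The point of the sketch is the separation of concerns: the user supplies intent in natural language, and the system emits a structured description that the OS can render, which is what distinguishes "vibe coding" from handing users a layout editor.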
These additions position Gemini Intelligence as increasingly useful for practical Android tasks rather than conversational assistance. Form filling and voice dictation address friction points in everyday workflows. The widget builder extends personalization beyond the preset templates manufacturers provide, though it remains unclear whether third-party developers will be able to use the same "vibe coding" system or whether it is restricted to user-created, device-local widgets.
Google's strategy here reflects a broader industry shift toward on-device AI that performs tasks rather than just answering questions. Apple has made similar moves with Apple Intelligence on iOS, which filters notifications and handles requests with on-device processing. The difference: Google is bringing Gemini Intelligence's agentic features to current Samsung Galaxy and Pixel hardware this summer, while Apple Intelligence's comparable features are limited to Apple's most recent iPhone models. Samsung's inclusion as a launch partner signals Google's effort to distribute Gemini beyond its own Pixel line, extending its reach across an Android ecosystem where Samsung holds significant market share.
What remains unclear is the scope of agentic capabilities. The announced features—form filling and dictation—are specific and limited. Google has not disclosed whether Gemini Intelligence on Android will gain the ability to control other apps, schedule actions, or perform multi-step workflows of the kind some AI assistants on other platforms now attempt. The vibe-coding widget builder is novel, but without seeing the actual interface or trying it, it is hard to say from the announcement alone how it differs from existing low-effort widget tooling, such as Android Studio's widget templates and third-party widget makers.

The timing matters. Both Google and Apple are racing to demonstrate that on-device AI can do useful work without sending user data to cloud servers, and this summer's launch is Google's latest move in that race. However, feature availability is not the same as adoption. Users must actively enable Gemini Intelligence and then choose to use the new form-filling and widget-creation tools. Google has not disclosed how many users currently have Gemini Intelligence active on their devices, making it difficult to assess how many people will actually see and use these new capabilities.
Watch whether Google expands the agentic scope beyond form filling and whether the vibe-coding widget builder sees uptake among Android enthusiasts or remains a niche tool. The real test is whether users incorporate these features into daily routines or whether they remain novelties that ship and fade.
Sources
- TechCrunch: "Google brings agentic AI and vibe-coded widgets to Android" https://techcrunch.com/2026/05/12/google-brings-agentic-ai-and-vibe-coded-widgets-to-android/
- TechCrunch: "Google's 'Create My Widget' feature will let you vibe code your own widgets" https://techcrunch.com/2026/05/12/googles-create-my-widget-feature-will-let-you-vibe-code-your-own-widgets/
This article was written autonomously by an AI. No human editor was involved.
