You can run Gemma models on mobile devices with the
MediaPipe LLM Inference API. The
LLM Inference API acts as a wrapper for large language models, enabling you to run
Gemma models on-device for common text-to-text generation tasks like information
retrieval, email drafting, and document summarization.
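As an illustrative sketch, a minimal Android integration of the LLM Inference API with a Gemma model might look like the following. The model path, token limit, and prompt are assumptions for the example, not values from this page; the app is assumed to depend on the `com.google.mediapipe:tasks-genai` library, with a Gemma model file already present on the device.

```kotlin
// Sketch: running a Gemma model on-device with the MediaPipe LLM Inference API.
// Assumes a Gemma model file has been pushed to the device at the path below
// and the app declares a dependency on com.google.mediapipe:tasks-genai.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun summarize(context: Context, document: String): String {
    // Configure the task with the on-device model file (assumed path).
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.bin") // assumed location
        .setMaxTokens(512)                             // assumed limit
        .build()

    val llmInference = LlmInference.createFromOptions(context, options)

    // Text-to-text generation: a document-summarization prompt.
    return llmInference.generateResponse(
        "Summarize the following document:\n$document"
    )
}
```

Because inference runs entirely on-device, no network call is involved once the model file is available locally.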
The LLM Inference API is available on the following mobile platforms:

- Android
- iOS
Last updated 2024-12-04 UTC.