LiteRT Next provides a unified interface to use Neural Processing Units (NPUs)
without forcing you to individually navigate vendor-specific compilers,
runtimes, or library dependencies. Using LiteRT Next for NPU acceleration avoids
many vendor-specific or device-specific complications, boosts performance for
real-time and large-model inference, and minimizes memory copies through
zero-copy hardware buffer usage.
If you are already enrolled in the LiteRT NPU Early Access Program, sign in to
the authorized account to view the NPU documentation. If you have not enrolled,
sign up for the Early Access Program.
Last updated 2025-06-24 UTC.