

litert-community/Gemma3-1B-IT

This model provides a few variants of google/gemma-3-1b-it that are ready for deployment on Android and Web using the LiteRT (formerly TensorFlow Lite) stack and the MediaPipe LLM Inference API.

Use the models

Android

  1. Download and install the APK.
  2. Follow the instructions in the app.

To build the demo app from source, please follow the instructions in the GitHub repository.
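
If you just want to see the API calls involved, here is a minimal Kotlin sketch of running one of these bundles with the MediaPipe LLM Inference API. It assumes you have added the com.google.mediapipe:tasks-genai dependency and copied a .task file from this repository onto the device; the file name and path below are placeholders, not the exact names used in this repo.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load a Gemma 3 1B LiteRT bundle and run one blocking generation.
// Replace the model path with wherever you placed the .task file you downloaded
// from this repository (the file name here is a placeholder).
fun runGemma(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma3-1b-it.task") // placeholder path
        .setMaxTokens(1024) // combined prompt + response token budget
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close() // release model weights and runtime resources
    return response
}
```

For streaming output or sampling parameters such as top-k and temperature, the same API also offers asynchronous generation; refer to the MediaPipe LLM Inference documentation for details.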

Web

  1. Build and run our sample web app.

To add the model to your web app, please follow the instructions in our documentation.

Colab

Disclaimer: The target deployment surfaces for the LiteRT models are Android and Web, and the stack has been optimized for performance on those targets. Running the model in Colab is a convenient way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) on Colab can be much worse than on a local device.

Open In Colab

Customize

Fine-tune Gemma 3 1B and deploy it with either LiteRT or the MediaPipe LLM Inference API: Open In Colab

Performance

Android

Note that all benchmark stats are from a Samsung S24 Ultra with a 2048-token KV cache, 1024 prefill tokens, and 256 decode tokens.

| Weight Quantization | Backend | Prefill (tokens/sec) | Decode (tokens/sec) | Time to first token (sec) | Model size (MB) | Peak RSS Memory (MB) | GPU Memory (MB) |
|---|---|---|---|---|---|---|---|
| dynamic_int4 | CPU | 322.5 | 47.4 | 3.1 | 529 | 1138.31 | - |
| dynamic_int4 | GPU | 2585.9 | 56.4 | 4.5 | 529 | 1205.28 | 585.66 |
  • Model size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models)
  • Inference on CPU is accelerated via the LiteRT XNNPACK delegate with 4 threads
  • The CPU benchmark assumes the XNNPACK cache is enabled
  • The GPU benchmark assumes the model is cached
  • The cpufreq governor is set to performance during benchmarking. Observed performance may vary depending on your phone’s hardware and current activity level.
  • dynamic_int4: quantized model with int4 weights and float activations.
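
The table above compares the CPU and GPU backends. To reproduce that comparison in your own app, recent versions of the tasks-genai library let you request a backend when building the options; the sketch below assumes your library version exposes this setter (check that setPreferredBackend exists in the release you depend on).

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch: request the GPU backend; swap in Backend.CPU to benchmark the CPU path.
// setPreferredBackend is available in recent com.google.mediapipe:tasks-genai
// releases; verify it exists in the version you ship against.
val gpuOptions = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/gemma3-1b-it.task") // placeholder path
    .setMaxTokens(1024)
    .setPreferredBackend(LlmInference.Backend.GPU)
    .build()
```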

Web

Note that all benchmark stats are from a 2023 MacBook Pro (Apple M3 Pro chip) with a 2048-token KV cache, 1024 prefill tokens, and 256 decode tokens.

| Weight Quantization | Backend | Prefill (tokens/sec) | Decode (tokens/sec) | Model size (MB) |
|---|---|---|---|---|
| dynamic_int4 | GPU | 1701.9 | 77.8 | 529 |
  • Model size: measured by the size of the .tflite flatbuffer (serialization format for LiteRT models)
  • dynamic_int4: quantized model with int4 weights and float activations.

Note:

  • We are working on bringing 4k and 8k context window variants of the Gemma3-1B model to Hugging Face soon. Stay tuned!