Running on Google Pixel TPU?

#16
by soupslurpr - opened

Hi, I'd like to ask: is it possible to run this on a Google Pixel TPU, like the one in the Pixel 7 or Pixel 8? It would be amazing if it could be done, as I think it could immensely increase the speed.

Hi @soupslurpr
if I am not mistaken there is no reason the model shouldn't work on a TPU device. cc @sanchit-gandhi who recently shared how to run Gemma on a free-tier TPU device: https://x.com/sanchitgandhi99/status/1760733806276038811?s=20

Pixel TPU is not open for use by non-Google apps AFAIK.

Pretty sure it is, you just have to use NNAPI: https://developer.android.com/ndk/guides/neuralnetworks

But it also has to use ops that are supported on the TPU. When I used TensorFlow Whisper, for example, it could only run on the CPU.
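
For anyone curious, the TFLite-over-NNAPI path looks roughly like this (a minimal sketch; the asset name and tensor shapes are placeholders). NNAPI decides which processor actually runs each op, and anything the driver can't map falls back to TFLite's CPU kernels, which would explain the Whisper-on-CPU behavior:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Memory-map a model bundled in assets (the asset name is a placeholder).
fun loadModel(context: Context, assetName: String = "model.tflite"): MappedByteBuffer {
    val afd = context.assets.openFd(assetName)
    FileInputStream(afd.fileDescriptor).channel.use { channel ->
        return channel.map(FileChannel.MapMode.READ_ONLY, afd.startOffset, afd.declaredLength)
    }
}

fun runViaNnapi(context: Context, input: Array<FloatArray>, output: Array<FloatArray>) {
    // Hand the graph to NNAPI; the driver decides which "device"
    // (CPU, GPU, DSP, NPU, ...) executes each op.
    val nnapi = NnApiDelegate()
    Interpreter(loadModel(context), Interpreter.Options().addDelegate(nnapi)).use { interpreter ->
        // Ops the driver can't map fall back to TFLite's own CPU kernels.
        interpreter.run(input, output)
    }
    nnapi.close()
}
```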

I'd love to see a version of Google Assistant running this locally. I think that's where they're heading with this since the model is so small. I just hope they don't cancel it like so many other things Google does.

That entire NNAPI page does not mention TPU at all lol

"Note: This topic uses the term "device" to refer to CPUs, GPUs, and accelerators. In other topics on this site, "device" refers to Android devices. To clarify this distinction, when referring to an Android device, this topic includes the word "Android." All other instances of the word device refer to processors and accelerators."
https://developer.android.com/ndk/guides/neuralnetworks#:~:text=Note%3A%20This,processors%20and%20accelerators.

How would TensorFlow Lite use the TPU then?

TensorFlow Lite specifically mentions the use of NPUs while using NNAPI.

(https://www.tensorflow.org/lite/android/delegates/nnapi)
(https://www.tensorflow.org/lite/performance/delegates)
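
The NNAPI delegate also lets you pin a specific accelerator by name instead of letting the driver pick. A sketch using TFLite's Java API; the name "google-edgetpu" is an assumption here, and whether the Pixel's TPU is actually exposed to third-party apps depends on the vendor driver:

```kotlin
import java.nio.MappedByteBuffer
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate

// Build an interpreter pinned to one named NNAPI accelerator instead of
// letting the driver choose. "google-edgetpu" is an assumption -- the set
// of exposed accelerator names is vendor-specific.
fun interpreterOnAccelerator(model: MappedByteBuffer, name: String = "google-edgetpu"): Interpreter {
    val delegate = NnApiDelegate(
        NnApiDelegate.Options()
            .setAcceleratorName(name) // pin to this accelerator only
            .setUseNnapiCpu(false)    // exclude NNAPI's reference CPU device
    )
    return Interpreter(model, Interpreter.Options().addDelegate(delegate))
}
```

If the named accelerator isn't present, interpreter creation should fail rather than silently fall back, which at least gives a concrete answer to the "is the TPU exposed?" question.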

NPU and TPU are the same thing?

From an acronym perspective, they're similar since most neural networks are tensor-based, but in practice:

NPU

  • Usually included as part of an SoC
  • Usually has support for a multitude of formats and can do a wider range of machine learning tasks
  • Wider range of APIs and SDKs

TPU

  • Designed and developed by Google
  • Can be on an SoC (such as the Pixel Tensor) or on add-in cards (such as Google Coral)
  • Designed to handle specific Google models
  • Designed to work with TensorFlow only

In short (from what I gather), TPU is Google's proprietary implementation of an NPU. That said, TPU stands for Tensor Processing Unit... so... it's mostly a difference in nomenclature, as I've heard them used interchangeably pretty often. Nvidia has Tensor cores, and other frameworks such as ONNX use tensors. It's a cluster mess, similar to how some companies use TOPS as "Tera Operations Per Second" and others use it as "Tensor Operations Per Second". Even Nvidia's own employees get confused since it's used interchangeably inside the company, so no one's sure what TOPS is referring to at any given moment.

Google org

Hi @soupslurpr , Running a model like Gemma-7b on a Google Pixel TPU (as seen in the Pixel 7 or Pixel 8) is not currently feasible due to the hardware and memory constraints of mobile devices. We recommend using cloud-based or server-side processing for large models like Gemma-7b, with the device acting as a frontend for input and output. Thank you.
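
For completeness, the server-side route suggested above could look like this from the Android side (a minimal sketch; the endpoint URL and token handling are placeholders, with the Hugging Face Inference API shown as one example):

```kotlin
import java.net.HttpURLConnection
import java.net.URL

// Query a server-hosted Gemma instead of running it on-device. The endpoint
// and token are placeholders (Hugging Face Inference API shown as one example).
// On Android, call this off the main thread.
fun queryRemoteGemma(prompt: String, apiToken: String): String {
    val url = URL("https://api-inference.huggingface.co/models/google/gemma-7b")
    val conn = url.openConnection() as HttpURLConnection
    conn.requestMethod = "POST"
    conn.doOutput = true
    conn.setRequestProperty("Authorization", "Bearer $apiToken")
    conn.setRequestProperty("Content-Type", "application/json")
    // Naive JSON escaping, for illustration only.
    val body = """{"inputs": "${prompt.replace("\"", "\\\"")}"}"""
    conn.outputStream.use { it.write(body.toByteArray()) }
    return conn.inputStream.bufferedReader().use { it.readText() }
}
```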
