---
title: README
emoji: π
colorFrom: green
colorTo: indigo
sdk: static
pinned: false
short_description: Empower AI inference
---
Kalray enables AI innovators to build novel AI applications, maximizing your compute processing with the MPPA Coolidge 2. Our compute acceleration cards offer an architecture that is highly complementary to GPUs, allowing a large number of different operations to be processed in parallel and asynchronously. Details can be found here:
- Processor white paper
- Computation cards
- ML & computer vision
- SDK description
On this page you will find several of the models supported by Kalray's SDK. A part of ACE (AccessCore Embedded), called Kalray Neural Network, is dedicated to optimizing inference on Kalray's processor (MPPA) according to the following scheme:
- Design and/or import your neural networks from ONNX or TensorFlow (PyTorch is supported through the ONNX bridge, as sketched below),
- Build an intermediate representation of the NN so it can be executed on the MPPA,
- Run the network and retrieve its predictions from the device.
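
As an illustration of the first step, the snippet below exports a PyTorch model to ONNX through the bridge mentioned above. It is a minimal sketch: the model, file name, and input shape are placeholder assumptions, and the resulting `.onnx` file is then handed to Kalray's SDK tooling for the build and run steps, which are not shown here.

```python
# Minimal sketch: export a PyTorch model to ONNX so it can be imported by Kalray's SDK.
# The model choice, file name, and input shape below are illustrative only.
import torch
import torchvision

# Any classification model works as an example entry point to the flow.
model = torchvision.models.resnet18(weights=None)
model.eval()

# Dummy input fixing the expected tensor shape (batch, channels, height, width).
dummy_input = torch.randn(1, 3, 224, 224)

# Export through the ONNX bridge; the .onnx file is the input to the SDK build step.
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=13,
)
```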
Find out on our GitHub page how to deploy and power your AI solutions on Kalray's processor:
Kalray π