# Nitro-E-onnx - ONNX Olive DirectML Optimized

## Original Model

https://huggingface.co/amd/Nitro-E

## C# Inference Demo

https://github.com/TensorStack-AI/TensorStack

```csharp
// Select the GPU (DirectML) execution provider.
var provider = Provider.GetProvider(DeviceType.GPU);

// Load the Nitro-E pipeline from the local model folder.
var pipeline = NitroPipeline.FromFolder("M:\\Models\\Nitro-E-onnx", ModelType.Turbo, provider);

// Start from the pipeline defaults and set the prompt.
var options = pipeline.DefaultOptions with
{
    Prompt = "cute cat"
};

// Generate the image and save it to disk.
var output = await pipeline.RunAsync(options);
await output.SaveAsync("Output.png");
```