
Cisco iNAM

Cisco iNAM (Intelligent Networking, Automation, and Management) is a nano-sized LLM for answering questions about Cisco data center products. It is fine-tuned from Microsoft Research's pretrained Phi-2 model.

Model Details

Model Description

The model is quantized to 4-bit so that inference can run on physical deployments of data center products. The initial launch is planned for Nexus Dashboard.
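
As a rough illustration of running the 4-bit quant locally, the sketch below loads a GGUF file with llama-cpp-python; the file name, context size, and thread count are assumptions for the example, not shipped defaults.

from llama_cpp import Llama

# Load the 4-bit GGUF quant for local CPU inference (file name is hypothetical).
llm = Llama(
    model_path="./iNAM-2.7B-v1.0-beta.Q4_K_M.gguf",  # assumed local path to the quantized model
    n_ctx=2048,       # Phi-2 context window
    n_threads=4,      # size to the appliance's CPU budget
    verbose=False,
)
# See "Prompt Format" below for how to prompt the loaded model.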

  • Developed by: Cisco
  • Funded by: Cisco
  • Model type: Transformer
  • Language(s) (NLP): English
  • License: Cisco Commercial

Model Sources

  • Repository: [More Information Needed]
  • Paper: [More Information Needed]
  • Demo: [More Information Needed]

Prompt Format

iNAM uses ChatML as the prompt format.

It's recommended to always prompt with a system instruction (use whatever system prompt you like):

<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
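
For illustration only, the sketch below assembles the ChatML turns by hand and generates a reply with llama-cpp-python; the file name, system prompt, and user question are placeholders, not an official usage example.

from llama_cpp import Llama

llm = Llama(model_path="./iNAM-2.7B-v1.0-beta.Q4_K_M.gguf", n_ctx=2048, verbose=False)

# Assemble the ChatML prompt: system turn, user turn, then an open assistant turn.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant for Cisco data center products.<|im_end|>\n"
    "<|im_start|>user\n"
    "What does Nexus Dashboard do?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

# Stop on the ChatML end-of-turn token so generation ends with the assistant's reply.
output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])

llama-cpp-python can also apply the template for you: pass chat_format="chatml" to Llama and call create_chat_completion with a list of role/content messages.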

Model Format

  • Format: GGUF
  • Model size: 2.78B parameters
  • Architecture: phi2
