Cisco iNAM
Cisco iNAM (Intelligent Networking, Automation, and Management) is a nano-sized LLM for answering questions about Cisco data center products. It is fine-tuned from the pretrained Phi-2 model from Microsoft Research.
Model Details
Model Description
The model is quantized to 4-bit so that inference can run on physical deployments of data center products. The initial launch is planned for Nexus Dashboard. A loading sketch follows the details below.
- Developed by: Cisco
- Funded by: Cisco
- Model type: Transformer
- Language(s) (NLP): English
- License: Cisco Commercial
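The snippet below is a minimal loading sketch, assuming the checkpoint is published in a Transformers-compatible format and quantized at load time with bitsandbytes; the repository id cisco/inam is a placeholder, not a confirmed location.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder repository id -- replace with the actual iNAM checkpoint location.
MODEL_ID = "cisco/inam"

# 4-bit quantization settings (NF4 weights, bfloat16 compute), mirroring the
# card's statement that the model is quantized to 4-bit for on-box inference.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=bnb_config,
    device_map="auto",
)
```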
Prompt Format
iNAM uses ChatML as the prompt format.
It is recommended to always include a system instruction (use whatever system prompt you like):
<|im_start|>system
You are a helpful assistant for Python which outputs in Markdown format.<|im_end|>
<|im_start|>user
Write a function to calculate the Fibonacci sequence<|im_end|>
<|im_start|>assistant
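As a sketch of how this maps to code, the example below uses the tokenizer's chat template to render the ChatML markup shown above, assuming the published tokenizer ships a ChatML template; model and tokenizer come from the loading sketch earlier, and the messages reuse the example prompt above.

```python
# Illustrative messages; assumes the tokenizer's chat_template produces the
# <|im_start|>/<|im_end|> ChatML markup shown above.
messages = [
    {"role": "system", "content": "You are a helpful assistant for Python which outputs in Markdown format."},
    {"role": "user", "content": "Write a function to calculate the Fibonacci sequence"},
]

# Render the ChatML prompt and append the assistant turn marker.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generate a reply and decode only the newly generated tokens.
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```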