---
license: apache-2.0
license_link: https://huggingface.co/MIAOAI/qa-retailpro/blob/main/LICENSE
language:
  - en
pipeline_tag: text-generation
tags:
  - chat
  - ecommerce
  - qna
library_name: transformers
---

# 🛒 qa-retailpro: Instruction-tuned LLM for E-commerce Customer Support

qa-retailpro is a domain-adapted, instruction-tuned language model for retail and e-commerce customer service.
Built on the Qwen2.5-7B backbone, it is optimized for natural conversations involving product queries, logistics, refunds, order tracking, returns, and general shopping support.


## 💡 Key Features

- **Retail-tuned instruction model**: Trained on common e-commerce Q&A tasks.
- **Context-aware and conversational**: Understands multi-turn shopping dialogues.
- **Multilingual**: Supports over 29 languages, including English, Chinese, French, and Spanish.
- **Structured output**: Generates FAQ entries, JSON, and table-friendly responses.
- **Long-context support**: Up to 128K tokens (with the YaRN extension).
- **Built on**: Qwen2.5-7B, fine-tuned by MIAOAI.
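
The multi-turn capability above uses the standard `transformers` chat-message format: each turn is a dict with a `role` and `content`, and the assistant's reply is appended to the history before the next user message. A minimal sketch of building such a history (the product name and assistant replies are hypothetical placeholders):

```python
# Build a multi-turn shopping dialogue in the standard chat-message format.
# The assistant replies here are hypothetical; in practice they come from
# model.generate() as shown in the Quickstart below.
def append_turn(history, role, content):
    """Append one turn to the conversation history and return it."""
    history.append({"role": role, "content": content})
    return history

history = [{"role": "system", "content": "You are a helpful retail assistant."}]
append_turn(history, "user", "Do you ship the X100 headphones to Canada?")
append_turn(history, "assistant", "Yes, the X100 ships to Canada in 5-7 business days.")
append_turn(history, "user", "And what would the return window be?")

# Passing the full history to tokenizer.apply_chat_template on every turn
# lets the model see the prior context when answering the follow-up question.
print(len(history))  # 4 turns, including the system message
```

Because the whole history is re-templated on each turn, the model resolves references like "the return window" against the earlier product mention.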

## 🚀 Quickstart

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MIAOAI/qa-retailpro"

# Load the model and tokenizer; device_map="auto" places weights on available devices.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "What is your return policy for electronics?"
messages = [
    {"role": "system", "content": "You are a helpful retail assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(**model_inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated reply is decoded.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
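
Since the model is advertised as structured-output capable, a common pattern is to prompt for JSON and parse the reply. A minimal sketch with a hypothetical model reply (real generations are not guaranteed to be valid JSON, so the parse should be guarded):

```python
import json

# Hypothetical reply from the model after prompting it to answer in JSON;
# in a real deployment this string comes from the decode step above.
raw_reply = '{"policy": "returns", "category": "electronics", "window_days": 30}'

try:
    answer = json.loads(raw_reply)
except json.JSONDecodeError:
    answer = None  # fall back to plain-text handling if the model drifts

print(answer["window_days"])  # 30
```

Adding an explicit schema description to the system prompt tends to make such JSON replies more reliable, but validation is still needed on every turn.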

πŸ›οΈ Example Use Cases

- 🤖 E-commerce chatbot agents
- 📦 Order and return tracking Q&A
- ❓ FAQ auto-generation
- 📊 Product detail and review summarization
- 🌐 Cross-border retail customer service

## 🧰 Long Context Configuration

To handle inputs longer than 32K tokens, add the following to the model's `config.json` to enable YaRN rope scaling:

```json
"rope_scaling": {
  "factor": 4.0,
  "original_max_position_embeddings": 32768,
  "type": "yarn"
}
```
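
The `config.json` edit above can also be scripted, which is convenient when the model is cached locally. A sketch that patches a local copy of the config (the path argument is a placeholder for wherever your copy lives):

```python
import json
from pathlib import Path

def enable_yarn(config_path, factor=4.0, original_max=32768):
    """Add the YaRN rope_scaling block shown above to a model config.json."""
    path = Path(config_path)
    config = json.loads(path.read_text())
    config["rope_scaling"] = {
        "factor": factor,
        "original_max_position_embeddings": original_max,
        "type": "yarn",
    }
    path.write_text(json.dumps(config, indent=2))
    return config
```

Run this before loading the model so `from_pretrained` picks up the scaled position embeddings.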

For more details, see the [YaRN paper](https://arxiv.org/abs/2309.00071).


## 📚 Citation

```bibtex
@misc{qa-retailpro,
  title  = {QA-RetailPro: Instruction-tuned Qwen2.5 model for E-commerce Assistants},
  author = {MIAOAI Team},
  year   = {2025},
  url    = {https://huggingface.co/MIAOAI/qa-retailpro}
}
```

## 📎 License

Released under the Apache 2.0 License. See the LICENSE file for full terms.


## 🤝 Contact

For business inquiries or collaborations, please reach out via Hugging Face Discussions.