eCeLLM-L

This repo contains the eCeLLM-L model from "eCeLLM: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data".

eCeLLM Models

Leveraging ECInstruct, we develop eCeLLM by instruction tuning general-purpose LLMs (base models). The eCeLLM-L model is instruction-tuned from the large base model Llama-2 13B-chat.
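
For reference, a minimal loading sketch using Hugging Face transformers, assuming the repo id NingLab/eCeLLM-L (taken from this page); the example prompt and instruction format are illustrative, not the exact template used in the paper.

```python
# Minimal sketch: load eCeLLM-L with transformers and generate a response.
# Assumptions: repo id "NingLab/eCeLLM-L"; the prompt below is illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NingLab/eCeLLM-L"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # checkpoint is stored in FP16
    device_map="auto",
)

# Hypothetical e-commerce instruction; replace with a task from ECInstruct.
prompt = "Classify the sentiment of this product review: 'Great battery life, fast shipping.'"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```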

Citation

@misc{peng2024ecellm,
      title={eCeLLM: Generalizing Large Language Models for E-commerce from Large-scale, High-quality Instruction Data}, 
      author={Bo Peng and Xinyi Ling and Ziru Chen and Huan Sun and Xia Ning},
      year={2024},
      eprint={2402.08831},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
Model format: Safetensors · Model size: 13B params · Tensor type: FP16

Dataset used to train NingLab/eCeLLM-L: ECInstruct