
This repository contains the LoRA adapter weights from fine-tuning the Llama 3 (8B) model on patent documents. The model is optimized for generating embeddings from patent text; these embeddings are useful for tasks such as classification, clustering, and retrieval. Training follows the second step of the LLM2Vec approach (unsupervised contrastive learning with SimCSE) to capture the language of patents, yielding high-quality representations for patent analysis.
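A minimal sketch of how such an adapter could be used to produce patent embeddings. The pooling helper below runs as-is; the commented loading steps are an assumption about typical `transformers`/`peft` usage, and the base-model ID shown is hypothetical (this card does not state it), so adjust it to the actual base model listed in the model tree.

```python
import torch

def mean_pool(last_hidden: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings, ignoring padding positions."""
    mask = attention_mask.unsqueeze(-1).type_as(last_hidden)  # (batch, seq, 1)
    summed = (last_hidden * mask).sum(dim=1)                  # (batch, dim)
    counts = mask.sum(dim=1).clamp(min=1e-9)                  # (batch, 1)
    return summed / counts

# Hypothetical loading sketch -- the base-model ID is an assumption, and
# downloading Llama 3 (8B) requires accepting its license on the Hub:
#
# from transformers import AutoModel, AutoTokenizer
# from peft import PeftModel
#
# base = AutoModel.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# model = PeftModel.from_pretrained(
#     base, "saroyehun/Llama3-8B-Instruct-mntp-unsup-simcse-patent"
# )
# tok = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
# batch = tok(["A method for wireless power transfer ..."], return_tensors="pt")
# with torch.no_grad():
#     out = model(**batch)
# emb = mean_pool(out.last_hidden_state, batch["attention_mask"])

# Self-contained check of the pooling helper on dummy tensors:
hidden = torch.ones(2, 3, 4)
mask = torch.tensor([[1, 1, 0], [1, 1, 1]])
emb = mean_pool(hidden, mask)
print(emb.shape)  # torch.Size([2, 4])
```

The resulting sentence-level vectors can then be fed to any downstream classifier, clustering algorithm, or nearest-neighbour retrieval index.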

Framework versions

  • PEFT 0.12.0

Model tree for saroyehun/Llama3-8B-Instruct-mntp-unsup-simcse-patent
