Tags: Text Generation · Transformers · Safetensors · French · fiscalité · génération-de-texte · français · 8-bit precision
Instructions for using Aktraiser/modele-test with libraries, inference providers, notebooks, and local apps. The sections below show how to get started.
- Libraries
- Transformers
How to use Aktraiser/modele-test with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Aktraiser/modele-test")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Aktraiser/modele-test", dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Aktraiser/modele-test with vLLM:
Install vLLM from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Aktraiser/modele-test"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aktraiser/modele-test",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/Aktraiser/modele-test
```
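When the vLLM server from the pip example above is running, its OpenAI-compatible API can also be queried from Python instead of curl. The snippet below is a minimal sketch; it assumes the default `http://localhost:8000` endpoint shown above and that the `openai` package is installed.

```python
# Query the local vLLM server through its OpenAI-compatible API.
# Assumes the server was started with: vllm serve "Aktraiser/modele-test"
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local vLLM endpoint from the example above
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

completion = client.completions.create(
    model="Aktraiser/modele-test",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```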
- SGLang
How to use Aktraiser/modele-test with SGLang:
Install SGLang from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Aktraiser/modele-test" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aktraiser/modele-test",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Aktraiser/modele-test" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Aktraiser/modele-test",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use Aktraiser/modele-test with Docker Model Runner:
```shell
docker model run hf.co/Aktraiser/modele-test
```
The following custom endpoint handler loads the model in 4-bit precision, wraps it in a TextGenerationPipeline, and exposes a simple `__call__` interface:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, TextGenerationPipeline
import torch


def load_model(model_id):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",
        torch_dtype=torch.float16,
        load_in_4bit=True,  # requires the bitsandbytes package
    )
    return model, tokenizer


class EndpointHandler:
    def __init__(self, path=""):
        self.model, self.tokenizer = load_model(path)
        self.pipeline = TextGenerationPipeline(
            model=self.model,
            tokenizer=self.tokenizer
        )

    def __call__(self, data):
        # Extract the input text
        if isinstance(data, dict):
            text = data.get("inputs", "")
        else:
            text = data

        # Default generation parameters
        generation_kwargs = {
            "max_new_tokens": 512,
            "temperature": 0.7,
            "top_p": 0.95,
            "repetition_penalty": 1.15,
            "do_sample": True,
            "pad_token_id": self.tokenizer.pad_token_id,
            "eos_token_id": self.tokenizer.eos_token_id,
        }

        # Override the defaults with request parameters, if provided
        if isinstance(data, dict) and "parameters" in data:
            generation_kwargs.update(data["parameters"])

        try:
            # Generate the response
            outputs = self.pipeline(
                text,
                **generation_kwargs
            )
            # Format the output as a list, as required by the API
            if isinstance(outputs, list):
                return [{"generated_text": output["generated_text"]} for output in outputs]
            return [{"generated_text": outputs["generated_text"]}]
        except Exception as e:
            return [{"error": str(e)}]
```