---
license: mit
language:
  - sa
  - en
tags:
  - sanskrit
  - paraphrase
  - diffusion
  - d3pm
  - pytorch
pipeline_tag: text2text-generation
---

# Sanskrit D3PM Encoder-Decoder Model

The model maps Roman/IAST Sanskrit input to Devanagari output using a custom D3PM (Discrete Denoising Diffusion Probabilistic Model) checkpoint. This package is configured for the `d3pm_encoder_decoder` checkpoint stored in `best_model.pt`. Hugging Face model repo: [bhsinghgrid/devflow2](https://huggingface.co/bhsinghgrid/devflow2)

## Files Included

- `best_model.pt` — trained checkpoint
- `model_settings.json` — packaged runtime metadata
- `config.py` — runtime config
- `inference.py` — model loading + generation loop
- `inference_api.py` — simple Python API (`predict`)
- `handler.py` — Hugging Face Endpoint handler
- `model/`, `diffusion/` — architecture modules
- `sanskrit_src_tokenizer.json`, `sanskrit_tgt_tokenizer.json` — source and target tokenizers

## Quick Local Test

```python
from inference_api import predict

print(predict("dharmo rakṣati rakṣitaḥ")["output"])
```

## Runtime Settings

For local/API usage, the runtime first reads `model_settings.json`, then applies optional environment-variable overrides:

- `HF_MODEL_TYPE` = `d3pm_cross_attention` or `d3pm_encoder_decoder`
- `HF_INCLUDE_NEG` = `true` or `false`
- `HF_NUM_STEPS` = diffusion step count for the packaged checkpoint

Packaged settings for this repo:

```bash
export HF_MODEL_TYPE=d3pm_encoder_decoder
export HF_INCLUDE_NEG=false
export HF_NUM_STEPS=4
```
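
For illustration, the precedence can be sketched as follows. This is a minimal sketch, not the packaged code: `load_runtime_settings` and the settings keys (`model_type`, `include_neg`, `num_steps`) are hypothetical names.

```python
import json
import os

def load_runtime_settings(path="model_settings.json"):
    """Read packaged defaults first, then apply optional env overrides."""
    with open(path) as f:
        settings = json.load(f)
    # Environment variables, when set, take precedence over the file.
    if "HF_MODEL_TYPE" in os.environ:
        settings["model_type"] = os.environ["HF_MODEL_TYPE"]
    if "HF_INCLUDE_NEG" in os.environ:
        settings["include_neg"] = os.environ["HF_INCLUDE_NEG"].lower() == "true"
    if "HF_NUM_STEPS" in os.environ:
        settings["num_steps"] = int(os.environ["HF_NUM_STEPS"])
    return settings
```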

## Use This Model In A Hugging Face Space

In your Space settings, set:

- `HF_CHECKPOINT_REPO=bhsinghgrid/devflow2`
- `HF_CHECKPOINT_FILE=best_model.pt`

If your Space reads model metadata automatically, no extra model-type variables are required. If it does not, also set:

```bash
HF_DEFAULT_MODEL_TYPE=d3pm_encoder_decoder
HF_DEFAULT_INCLUDE_NEG=false
HF_DEFAULT_NUM_STEPS=4
```
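
Under the hood, a Space would typically fetch the checkpoint with `huggingface_hub`. A minimal sketch, assuming the two variables above are set in the Space:

```python
import os
from huggingface_hub import hf_hub_download

# Resolve repo and file from the Space variables, with this repo as fallback.
checkpoint_path = hf_hub_download(
    repo_id=os.environ.get("HF_CHECKPOINT_REPO", "bhsinghgrid/devflow2"),
    filename=os.environ.get("HF_CHECKPOINT_FILE", "best_model.pt"),
)
print(checkpoint_path)  # local cache path to the downloaded checkpoint
```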

## Transformer-Style Usage (Custom Runtime)

This checkpoint is a custom D3PM architecture stored as a plain PyTorch `.pt` file, not a native transformers `AutoModel` format. Use it via the provided runtime:

```python
import torch

from config import CONFIG
from inference import load_model, run_inference, _decode_clean
from model.tokenizer import SanskritSourceTokenizer, SanskritTargetTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model, cfg = load_model("best_model.pt", CONFIG, device)

# Tokenizers must match the vocab size and sequence length used in training.
src_tok = SanskritSourceTokenizer(vocab_size=16000, max_len=cfg["model"]["max_seq_len"])
tgt_tok = SanskritTargetTokenizer(vocab_size=16000, max_len=cfg["model"]["max_seq_len"])

text = "dharmo rakṣati rakṣitaḥ"
ids = torch.tensor([src_tok.encode(text)], dtype=torch.long, device=device)  # batch of 1
out = run_inference(model, ids, cfg)
print(_decode_clean(tgt_tok, out[0].tolist()))
```

If you need full transformers compatibility (`AutoModel.from_pretrained`), export the weights to a Hugging Face Transformers model format first.
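
As a first step toward such an export, you can inspect the raw checkpoint with plain PyTorch. This is a generic sketch, not part of the packaged runtime; the checkpoint's actual contents may differ:

```python
import torch

# Inspect the checkpoint on CPU before attempting any export.
# Newer PyTorch defaults to weights_only=True, which can reject pickled
# config objects; pass weights_only=False only for checkpoints you trust.
ckpt = torch.load("best_model.pt", map_location="cpu", weights_only=False)
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))  # e.g. a state dict plus training metadata
```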

## Endpoint Payload

When deployed behind `handler.py`, the endpoint accepts a JSON payload of this shape:

```json
{
  "inputs": "yadā mano nivarteta viṣayebhyaḥ svabhāvataḥ",
  "parameters": {
    "temperature": 0.7,
    "top_k": 40,
    "repetition_penalty": 1.2,
    "diversity_penalty": 0.0,
    "num_steps": 4,
    "clean_output": true
  }
}
```
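
A deployed Inference Endpoint can then be called with a standard `requests` POST. `ENDPOINT_URL` and the `HF_TOKEN` environment variable below are placeholders you must supply:

```python
import os
import requests

ENDPOINT_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"  # placeholder

payload = {
    "inputs": "yadā mano nivarteta viṣayebhyaḥ svabhāvataḥ",
    "parameters": {"temperature": 0.7, "num_steps": 4, "clean_output": True},
}
response = requests.post(
    ENDPOINT_URL,
    headers={"Authorization": f"Bearer {os.environ['HF_TOKEN']}"},
    json=payload,
)
response.raise_for_status()
print(response.json())
```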

## Push This Folder To Model Hub

```bash
cd hf_model_repo_encoder_decoder
# Assumes the `hf` remote already points at the model repo and
# Git LFS is tracking *.pt (the checkpoint is too large for plain Git).
git add .
git commit -m "Add encoder-decoder T4 model package"
git push -u hf main
```