
Grammar explanation

#9
by Ejentos - opened

Are there any options to obtain a brief explanation for corrections?
E.g., the Grammarly plugin can provide one, so I wonder whether the current model can do the same.

Grammarly org

Sorry for the late reply. This model does not provide explanations.

@jbochi , thanks!
Could you please check the issue at https://huggingface.co/grammarly/coedit-xl/discussions/3 ?

I know that the training and validation data are probably from Kaggle, but I wonder how the fine-tuning process was carried out.
I want to redo it for Arabic and other languages!
Thank you in advance.
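CoEdIT is seq2seq instruction tuning: each training pair is a natural-language instruction prepended to the source sentence, with the edited sentence as the target. A minimal sketch of formatting pairs that way, before tokenization and training (the instruction phrasing follows the model card's English examples; for another language you would write your own instructions):

```python
# Sketch: preparing instruction-tuning pairs in a CoEdIT-style text2text
# format. The exact instruction strings are an assumption; see the CoEdIT
# repository for the ones used in the released datasets.

def to_text2text(instruction: str, source: str, target: str) -> dict:
    """Turn one (instruction, source, target) triple into the
    input/target pair a seq2seq trainer expects."""
    return {"input": f"{instruction}: {source}", "target": target}

train_pair = to_text2text(
    "Fix grammatical errors in this sentence",
    "She go to school every day.",
    "She goes to school every day.",
)
print(train_pair["input"])
# Fix grammatical errors in this sentence: She go to school every day.
```

Pairs in this shape can then be tokenized and fed to any seq2seq training loop (e.g. `Seq2SeqTrainer` over a T5 checkpoint).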

I'm utilizing the Grammarly/Coedit-Large model to correct sentence structure using the "Fix grammatical errors in this sentence" feature. However, its response time is notably slow. It takes approximately 16 to 17 seconds to rectify grammatical errors for the given text. Is there a way to optimize its performance to provide faster results?

Regarding the text provided:
"Winston is one of the most laid-back people I know. He is tall and slim with black hair, and he always wears a t-shirt and black jeans. His jeans have holes in them, and his baseball boots are also worn-out. He usually sits at the back of the class, and he often seems to be asleep. However, when the exam results are given out, he always gets an "A". I don't think he's as lazy as he appears to be."
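One serving-stack-independent way to cut latency on a passage like this is to split it into sentences and run them through `model.generate` as one padded batch, rather than generating over the whole paragraph as a single long sequence. A minimal sketch with a naive regex splitter (a production pipeline would use a proper sentence segmenter):

```python
import re

def split_sentences(text: str) -> list:
    """Naively split a passage on whitespace that follows
    sentence-ending punctuation."""
    return [s.strip() for s in re.split(r'(?<=[.!?"])\s+', text) if s.strip()]

paragraph = ("Winston is one of the most laid-back people I know. "
             "He usually sits at the back of the class.")
sentences = split_sentences(paragraph)

# Each sentence can then be prefixed with the instruction and passed to
# tokenizer(batch, padding=True, return_tensors="pt") and a single
# model.generate(...) call, instead of one long sequential decode.
```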

This will depend on your available resources and how you are running inference. I am using Text Generation Inference to rewrite texts; it responds very quickly without losing quality. I use a 3090 Ti 24GB.

https://github.com/huggingface/text-generation-inference
Here is an example from my production setup:
docker-compose.yml

version: "3.5"
services:
  tgi-coedit:
    image: ghcr.io/huggingface/text-generation-inference:1.1.0
    container_name: coedit-pirr
    entrypoint: text-generation-launcher
    restart: on-failure:5
    stdin_open: true
    tty: true
    env_file:
      - tgi_coedit.env
    shm_size: '8gb'
    ports:
      - 8188:80
    volumes:
      - type: bind
        source: ./models
        target: /llm_downloads
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [ gpu ]

networks:
  default:
    driver: bridge

tgi_coedit.env


MODEL_ID=grammarly/coedit-large
SHARDED=false
NUM_SHARD=1
MAX_CONCURRENT_REQUESTS=128
MAX_BEST_OF=1
MAX_STOP_SEQUENCES=4
MAX_WAITING_TOKENS=20

#MAX_INPUT_LENGTH=2048
#MAX_TOTAL_TOKENS=8192
#WAITING_SERVED_RATIO=1.2
#MAX_BATCH_TOTAL_TOKENS=16000
#MAX_BATCH_PREFILL_TOKENS=4096
HUGGINGFACE_HUB_CACHE=/llm_downloads 
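Once the container is up, corrections go through TGI's `/generate` endpoint. A minimal client sketch (the port 8188 comes from the compose file above; the instruction wrapper is specific to CoEdIT, the rest is TGI's standard API):

```python
import json
import urllib.request

def build_payload(text: str) -> dict:
    """Wrap a sentence in the CoEdIT grammar-fixing instruction."""
    return {
        "inputs": f"Fix grammatical errors in this sentence: {text}",
        "parameters": {"max_new_tokens": 256},
    }

def correct(text: str, host: str = "http://localhost:8188") -> str:
    """POST to TGI's /generate endpoint and return the corrected text.
    Requires the tgi-coedit container from the compose file to be running."""
    req = urllib.request.Request(
        f"{host}/generate",
        data=json.dumps(build_payload(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["generated_text"]
```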

I have also tested this model on a 3060 Ti 8GB and got excellent results; it is an incredible model.

If you have any questions, I can help you speed up your inference even further. I recommend using Cloudflare Tunnels.

I am fine-tuning it for Brazilian Portuguese. Once I finalize the dataset and the training script, I will publish them on my profile. I used Grammarly CoEdIT's own repository as a base: https://github.com/vipulraheja/coedit/
thanks @machineteacher

machineteacher changed discussion status to closed
