urchade posted an update Apr 9
**Some updates on GLiNER**

🆕 A new commercially permissible multilingual version is available: `urchade/gliner_multiv2.1`

πŸ› A subtle bug that causes performance degradation on some models has been corrected. Thanks to @yyDing1 for raising the issue.

```python
from gliner import GLiNER

# Initialize GLiNER with the new multilingual checkpoint
model = GLiNER.from_pretrained("urchade/gliner_multiv2.1")

text = "This is a text about Bill Gates and Microsoft."

# Labels for entity prediction
labels = ["person", "organization", "email"]

entities = model.predict_entities(text, labels, threshold=0.5)

for entity in entities:
    print(entity["text"], "=>", entity["label"])
```
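For context on what comes back: `predict_entities` returns a list of plain dicts, and in recent GLiNER releases each one also carries a confidence `score` alongside `text` and `label` (worth verifying on your installed version). A minimal sketch of post-filtering by score, using illustrative data rather than real model output:

```python
# Illustrative entities mimicking the shape of GLiNER's output
# (hypothetical values, not real model predictions)
entities = [
    {"text": "Bill Gates", "label": "person", "score": 0.97},
    {"text": "Microsoft", "label": "organization", "score": 0.91},
    {"text": "gates@example.com", "label": "email", "score": 0.42},
]

# Keep only high-confidence predictions, stricter than the 0.5
# threshold passed to predict_entities above
confident = [e for e in entities if e["score"] >= 0.9]
for e in confident:
    print(e["text"], "=>", e["label"])
```

Raising the cutoff like this trades recall for precision, which is often the right call when downstream steps can't tolerate false positives.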

Very exciting! I see you've already created a demo for it here: https://huggingface.co/spaces/urchade/gliner_multiv2.1

The model seems very impressive (the attached screenshot shows a Dutch example).


Great! Thanks for the impressive work.

Are there any plans to put this on Hugging Face:
"An Autoregressive Text-to-Graph Framework for Joint Entity and Relation Extraction"?


You should be able to fine-tune your own version: https://github.com/urchade/ATG/issues/3

I am also working on zero-shot end-to-end relation extraction, which is as efficient as GLiNER. Stay tuned 🙏

I have a domain-specific use case, so I need to fine-tune the model. Is it possible to fine-tune `urchade/gliner_multiv2.1`? If so, what should the structure of the training dataset look like? Are there any documents I can refer to?
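For anyone with the same question: the training scripts in the GLiNER GitHub repo consume span-annotated examples as JSON records with a token list and labeled token spans. The exact field names below (`tokenized_text`, `ner`) and the inclusive-index convention are taken from my reading of the repo's examples, so verify them against the official training data before relying on this sketch:

```python
import json

# One hypothetical training record in the span-annotated format
# used by the GLiNER repo's training scripts (verify field names
# and index conventions against the official examples)
example = {
    # the text, pre-split into tokens
    "tokenized_text": ["This", "is", "a", "text", "about",
                       "Bill", "Gates", "and", "Microsoft", "."],
    # entity spans as [start_token, end_token, label], indices inclusive
    "ner": [
        [5, 6, "person"],        # "Bill Gates"
        [8, 8, "organization"],  # "Microsoft"
    ],
}

# A training file is typically a JSON list of such records
dataset = [example]
print(json.dumps(dataset, indent=2))
```

The key point is that annotations are over token indices, not character offsets, so your tokenization at annotation time must match what you feed the trainer.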