PuruAI committed on
Commit 27e3ce2 · verified · 1 Parent(s): 68d3e93

Upload 5 files

Files changed (5)
  1. .env.example +2 -0
  2. README.md +37 -0
  3. app.py +50 -0
  4. requirements.txt +5 -0
  5. start.sh +7 -0
.env.example ADDED
@@ -0,0 +1,2 @@
+ # Optional environment variables for Medini Space
+ # HF_TOKEN=your_huggingface_token_here
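Note: the committed `app.py` loads only public models and never reads this variable. If a private or gated model were ever swapped in, the token would have to be consumed at startup; a minimal sketch, assuming the standard `HF_TOKEN` convention:

```python
# Hypothetical sketch: read the optional token at startup. The committed
# app.py never reads this variable; it is only needed for private models.
import os
from huggingface_hub import login

token = os.environ.get("HF_TOKEN")
if token:
    login(token=token)  # authenticate for private/gated model access only
```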
README.md ADDED
@@ -0,0 +1,37 @@
+ ---
+ title: Medini AI Space
+ emoji: 🤖
+ colorFrom: blue
+ colorTo: purple
+ sdk: gradio
+ sdk_version: "3.39.0"
+ app_file: app.py
+ pinned: false
+ ---
+
+ # Medini AI Space
+
+ ## Overview
+ Medini AI is a generative AI application built on Hugging Face Spaces. It uses a text generation model (`PuruAI/Medini_Intelligence`) and a sentence embedding model (`all-MiniLM-L6-v2`) to provide AI-generated responses to user prompts.
+
+ ## Features
+ - Interactive Gradio interface.
+ - Public embedding model; no authentication token required.
+ - Falls back to GPT-2 if the main model fails to load.
+ - Logging for debugging and monitoring.
+
+ ## How to Run
+ 1. Install all dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+ 2. Launch the app:
+ ```bash
+ python app.py
+ ```
+ 3. Open the URL printed by Gradio to interact with the AI.
+
+ ## Notes
+ - No Hugging Face token is required; both models are public.
+ - Model load failures are handled gracefully.
+ - Deploying on Hugging Face Spaces is recommended for easy hosting.
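As a programmatic alternative to the browser UI described in How to Run, a deployed Space can be queried with `gradio_client`. A hedged sketch: the Space ID below is an assumption (not stated in this commit), and `fn_index=0` relies on the click handler being the only registered event in `app.py`:

```python
# Hypothetical: call the deployed Space from Python. The Space ID is an
# assumption; replace it with the actual one once the Space is live.
from gradio_client import Client

client = Client("PuruAI/medini-ai-space")  # assumed Space ID
reply = client.predict("Tell me about Medini AI.", fn_index=0)
print(reply)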
app.py ADDED
@@ -0,0 +1,50 @@
+ import gradio as gr
+ from transformers import pipeline
+ from sentence_transformers import SentenceTransformer
+ import logging
+
+ # Logging setup
+ logging.basicConfig(level=logging.INFO)
+ logger = logging.getLogger(__name__)
+
+ # Model configuration
+ MODEL_ID = "PuruAI/Medini_Intelligence"
+ FALLBACK_MODEL = "gpt2"
+ EMB_MODEL = "sentence-transformers/all-MiniLM-L6-v2"
+
+ # Load embedding model
+ try:
+     logger.info(f"Loading embedding model: {EMB_MODEL}")
+     embedding_model = SentenceTransformer(EMB_MODEL)
+     logger.info("Embedding model loaded successfully.")
+ except Exception as e:
+     logger.error(f"Failed to load embedding model: {e}")
+     embedding_model = None
+
+ # Load main LLM
+ try:
+     logger.info(f"Loading main model: {MODEL_ID}")
+     generator = pipeline("text-generation", model=MODEL_ID)
+     logger.info("Main model loaded successfully.")
+ except Exception as e:
+     logger.warning(f"Failed to load {MODEL_ID}, falling back to {FALLBACK_MODEL}: {e}")
+     generator = pipeline("text-generation", model=FALLBACK_MODEL)
+
+ # Gradio interface
+ def generate_text(prompt):
+     try:
+         result = generator(prompt, max_length=200)
+         return result[0]["generated_text"]
+     except Exception as e:
+         logger.error(f"Text generation failed: {e}")
+         return "Error generating text."
+
+ with gr.Blocks() as demo:
+     gr.Markdown("## Medini AI Space")
+     user_input = gr.Textbox(label="Enter your prompt")
+     output = gr.Textbox(label="Generated Text")
+     submit = gr.Button("Generate")
+     submit.click(generate_text, inputs=user_input, outputs=output)
+
+ if __name__ == "__main__":
+     demo.launch()
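Worth noting: `embedding_model` is loaded in `app.py` but never referenced in `generate_text`. A minimal, standalone sketch of one way it could be used, assuming the intent is semantic similarity between prompt and response (the helper below is illustrative, not part of this commit):

```python
# Illustrative sketch, not part of this commit: use the otherwise-unused
# embedding model to score how closely a generated reply tracks the prompt.
from sentence_transformers import SentenceTransformer, util

embedding_model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

def relevance_score(prompt: str, response: str) -> float:
    # Encode both texts and return their cosine similarity.
    vecs = embedding_model.encode([prompt, response], convert_to_tensor=True)
    return float(util.cos_sim(vecs[0], vecs[1]))

print(relevance_score("What is Medini AI?", "Medini AI is a generative AI app."))
```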
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ gradio==3.39.0
+ transformers==4.42.2
+ sentence-transformers>=2.2.2,<3
+ torch>=2.0.0
+ huggingface_hub>=0.14.1
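A quick way to confirm the pins resolved as expected after installation; a small sketch using only the standard library:

```python
# Print the installed version of each dependency pinned in requirements.txt.
from importlib.metadata import version

for pkg in ("gradio", "transformers", "sentence-transformers", "torch", "huggingface_hub"):
    print(f"{pkg}=={version(pkg)}")
```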
start.sh ADDED
@@ -0,0 +1,7 @@
+ #!/bin/bash
+
+ # Force reinstall dependencies to ensure correct versions
+ pip install --upgrade --force-reinstall -r requirements.txt
+
+ # Launch the app
+ python app.py
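Usage note: if the script is run directly rather than invoked as `bash start.sh`, it needs execute permission first (`chmod +x start.sh`). On Hugging Face Spaces with `app_file: app.py` set, the Space runtime launches `app.py` itself and this script is not required.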