
# Hearth Chat and Lily LLM API Integration Guide

๐Ÿ”— ์—ฐ๋™ ๊ฐœ์š”

This guide explains how to connect the Lily LLM API deployed on Hugging Face Spaces with the Hearth Chat service hosted on Railway.

## 1. Verify the Hugging Face Spaces Deployment

### 1.1 Confirm the API Endpoint

๋ฐฐํฌ๋œ Lily LLM API URL:

https://YOUR_USERNAME-lily-llm-api.hf.space

### 1.2 Test the Main Endpoints

# ํ—ฌ์Šค ์ฒดํฌ
curl https://YOUR_USERNAME-lily-llm-api.hf.space/health

# ๋ชจ๋ธ ๋ชฉ๋ก ํ™•์ธ
curl https://YOUR_USERNAME-lily-llm-api.hf.space/models

# ํ…์ŠคํŠธ ์ƒ์„ฑ ํ…Œ์ŠคํŠธ
curl -X POST https://YOUR_USERNAME-lily-llm-api.hf.space/generate \
  -F "prompt=์•ˆ๋…•ํ•˜์„ธ์š”! ํ…Œ์ŠคํŠธ์ž…๋‹ˆ๋‹ค."

## 2. Update the Hearth Chat Settings

### 2.1 Update the AI Settings Modal

Add the Lily LLM settings in `hearth_chat_react/src/components/AISettingsModal.js`:

```jsx
// Lily LLM API URL settings
{settings.aiProvider === 'lily' && (
    <>
        <div className="setting-group">
            <label className="setting-label">Lily API URL:</label>
            <input
                type="url"
                value={settings.lilyApiUrl}
                onChange={(e) => handleInputChange('lilyApiUrl', e.target.value)}
                placeholder="https://your-username-lily-llm-api.hf.space"
            />
        </div>

        <div className="setting-group">
            <label className="setting-label">Lily Model:</label>
            <select
                value={settings.lilyModel}
                onChange={(e) => handleInputChange('lilyModel', e.target.value)}
            >
                <option value="kanana-1.5-v-3b-instruct">Kanana 1.5 v3B Instruct</option>
            </select>
        </div>

        {/* API connection status display */}
        <div className="model-info">
            <small style={{ color: '#4CAF50', fontWeight: 'bold' }}>
                ๐ŸŒ Hosted on Hugging Face Spaces
            </small>
            <small style={{ color: '#666', display: 'block', marginTop: '4px' }}>
                Multimodal AI model (text + image processing)
            </small>
        </div>
    </>
)}
```

### 2.2 Update the Connection Test Function

```js
case 'lily': {
    testUrl = `${settings.lilyApiUrl}/health`;
    testData = {
        method: 'GET',
        headers: {
            'Accept': 'application/json'
        }
    };

    // Follow-up generation test; `response` is the result of fetching
    // testUrl with testData (performed by the shared test code)
    if (response.ok) {
        const generateTestUrl = `${settings.lilyApiUrl}/generate`;
        // FormData takes no array argument; build the body with append()
        const formData = new FormData();
        formData.append('prompt', 'Connection test.');

        const generateResponse = await fetch(generateTestUrl, {
            method: 'POST',
            body: formData
        });

        if (generateResponse.ok) {
            const result = await generateResponse.json();
            console.log('Lily LLM generation test succeeded:', result);
        }
    }
    break;
}
```

## 3. Update the Backend Integration

### 3.1 Modify consumers.py

The Lily LLM API call in `hearth_chat_django/chat/consumers.py`, written as a method on the chat consumer since it reads `self.scope` and the user's saved settings:

```python
async def call_lily_api(self, user_message, user_emotion, image_urls=None, documents=None):
    """Call the Lily LLM API (Hugging Face Spaces)."""
    import aiohttp

    try:
        # Read the API URL from the user's settings
        user = getattr(self, 'scope', {}).get('user', None)
        ai_settings = None
        if user and hasattr(user, 'is_authenticated') and user.is_authenticated:
            ai_settings = await self.get_user_ai_settings(user)

        # API URL (default: Hugging Face Spaces)
        lily_api_url = ai_settings.get('lilyApiUrl', 'https://gbrabbit-lily-math-rag.hf.space') if ai_settings else 'https://gbrabbit-lily-math-rag.hf.space'
        lily_model = ai_settings.get('lilyModel', 'kanana-1.5-v-3b-instruct') if ai_settings else 'kanana-1.5-v-3b-instruct'  # not sent yet; kept for future use

        # API endpoint
        generate_url = f"{lily_api_url}/generate"

        # Build the emotion context for the prompt
        emotion_prompt = f"The user's current emotion is '{user_emotion}'." if user_emotion else ""

        # Prepare the request payload
        data = {
            'prompt': f"{emotion_prompt}\n\nUser message: {user_message}",
            'max_length': 200,
            'temperature': 0.7
        }

        files = {}

        # Download any attached images
        if image_urls:
            print(f"๐Ÿ–ผ๏ธ Processing images: {len(image_urls)}")

            async with aiohttp.ClientSession() as session:
                for i, image_url in enumerate(image_urls[:4]):  # at most 4 images
                    try:
                        async with session.get(image_url) as img_response:
                            if img_response.status == 200:
                                image_data = await img_response.read()
                                files[f'image{i+1}'] = ('image.jpg', image_data, 'image/jpeg')
                    except Exception as e:
                        print(f"โŒ Failed to load image {i+1}: {e}")

        # Call the API as multipart form data (matching the curl -F examples above),
        # with or without image parts
        timeout = aiohttp.ClientTimeout(total=120)  # 2-minute timeout

        async with aiohttp.ClientSession(timeout=timeout) as session:
            form_data = aiohttp.FormData()
            for key, value in data.items():
                form_data.add_field(key, str(value))
            for key, (filename, file_data, content_type) in files.items():
                form_data.add_field(key, file_data, filename=filename, content_type=content_type)

            async with session.post(generate_url, data=form_data) as response:
                if response.status == 200:
                    result = await response.json()
                    lily_response = result.get('generated_text', 'Sorry, a response could not be generated.')

                    return {
                        "response": lily_response,
                        "provider": "lily",
                        "ai_name": "Lily LLM",
                        "ai_type": "huggingface"
                    }
                else:
                    error_text = await response.text()
                    raise Exception(f"Lily API error: {response.status} - {error_text}")

    except Exception as e:
        print(f"โŒ Lily LLM API call failed: {e}")
        raise
```
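
For context, a hypothetical call site inside the consumer's `receive` handler might look like the following. `send_json` exists on Channels' `AsyncJsonWebsocketConsumer`; the surrounding variable names are assumptions, not code from this repo.

```python
# Hypothetical wiring inside the consumer (names other than call_lily_api are assumed)
result = await self.call_lily_api(message_text, user_emotion, image_urls=image_urls)
await self.send_json({
    "message": result["response"],
    "ai_name": result["ai_name"],
    "provider": result["provider"],
})
```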

### 3.2 Configure Environment Variables

Add the following environment variables in the Railway environment:

```bash
# Lily LLM API settings
LILY_LLM_API_URL=https://YOUR_USERNAME-lily-llm-api.hf.space
LILY_LLM_MODEL=kanana-1.5-v-3b-instruct
LILY_LLM_TIMEOUT=120
```
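
A minimal sketch of surfacing these variables on the Django side; the setting names simply mirror the variables above, and where they are consumed is up to your code:

```python
# settings.py (sketch): read the Lily LLM configuration from the environment
import os

LILY_LLM_API_URL = os.getenv("LILY_LLM_API_URL", "https://YOUR_USERNAME-lily-llm-api.hf.space")
LILY_LLM_MODEL = os.getenv("LILY_LLM_MODEL", "kanana-1.5-v-3b-instruct")
LILY_LLM_TIMEOUT = int(os.getenv("LILY_LLM_TIMEOUT", "120"))
```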

## 4. Testing and Verification

4.1 ์—ฐ๋™ ํ…Œ์ŠคํŠธ ์Šคํฌ๋ฆฝํŠธ

```python
# test_hearth_lily_integration.py

import requests

def test_hearth_chat_lily_integration():
    """Integration test between Hearth Chat and Lily LLM."""

    # Hearth Chat API endpoint (Railway)
    hearth_chat_url = "https://your-hearth-chat.railway.app"

    # 1. Log in and obtain a session (login call omitted here)
    session = requests.Session()

    # 2. Update the AI settings
    ai_settings = {
        "aiProvider": "lily",
        "lilyApiUrl": "https://YOUR_USERNAME-lily-llm-api.hf.space",
        "lilyModel": "kanana-1.5-v-3b-instruct",
        "aiEnabled": True
    }

    settings_response = session.patch(
        f"{hearth_chat_url}/api/chat/user/settings/",
        json=ai_settings
    )

    print(f"Settings update: {settings_response.status_code}")

    # 3. Chat test
    test_messages = [
        "Hello! This is a Lily LLM test.",
        "How is the weather today?",
        "Give me a simple math problem."
    ]

    for message in test_messages:
        print(f"\n๐Ÿ“ค Test message: {message}")

        # Send the message over WebSocket or the HTTP API
        # (adjust to the actual implementation; see the WebSocket sketch below)

        # Check the response
        print("โœ… Response received")

if __name__ == "__main__":
    test_hearth_chat_lily_integration()
```
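
Where the script says to send the message over WebSocket, a sketch using the `websockets` library might look like this. The `/ws/chat/` path and the `message` JSON field are assumptions; match them to the actual Channels routing and consumer protocol.

```python
import asyncio
import json

import websockets  # pip install websockets

async def send_chat_message(text):
    # Assumed WebSocket route; adjust to the real Channels routing
    uri = "wss://your-hearth-chat.railway.app/ws/chat/"
    async with websockets.connect(uri) as ws:
        await ws.send(json.dumps({"message": text}))  # assumed message format
        reply = await ws.recv()
        print("Reply:", reply)

asyncio.run(send_chat_message("Hello! This is a Lily LLM test."))
```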

### 4.2 Image Processing Test

```python
import requests

def test_image_processing():
    """Integration test for image processing."""

    # Upload a test image
    with open("test_image.jpg", "rb") as f:
        files = {"image": f}
        data = {"message": "What can you see in this image?"}

        response = requests.post(
            "https://your-hearth-chat.railway.app/api/chat/send-message/",
            files=files,
            data=data
        )

        print(f"Image processing test: {response.status_code}")
        print(f"Response: {response.json()}")
```

## 5. Monitoring and Logs

### 5.1 Monitoring Hugging Face Spaces Logs

- Check real-time logs in the Spaces dashboard
- Monitor API call frequency and response times

### 5.2 Monitoring Railway Logs

- Check the Hearth Chat logs in the Railway dashboard
- Monitor Lily LLM API call successes and failures
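
Dashboards aside, a tiny self-hosted probe can track availability and latency of both services. A minimal sketch; the URL list is an assumption to replace with your real endpoints:

```python
import time

import requests

# Assumed endpoints to watch; replace with your real URLs
TARGETS = [
    "https://YOUR_USERNAME-lily-llm-api.hf.space/health",
    "https://your-hearth-chat.railway.app",
]

def probe(url):
    """Return (status_code, latency_seconds) for one endpoint."""
    start = time.monotonic()
    response = requests.get(url, timeout=10)
    return response.status_code, time.monotonic() - start

for target in TARGETS:
    try:
        status, latency = probe(target)
        print(f"{target}: {status} in {latency:.2f}s")
    except requests.RequestException as e:
        print(f"{target}: DOWN ({e})")
```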

## 6. Performance Optimization

6.1 ์บ์‹ฑ ์ „๋žต

```python
# Response caching with Redis
import json

import redis

redis_client = redis.Redis(host='localhost', port=6379, db=0)

def cached_lily_response(prompt_hash, response):
    """Cache a response for one hour."""
    redis_client.setex(f"lily_cache:{prompt_hash}", 3600, json.dumps(response))

def get_cached_response(prompt_hash):
    """Look up a cached response."""
    cached = redis_client.get(f"lily_cache:{prompt_hash}")
    return json.loads(cached) if cached else None
```
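
The guide doesn't pin down how `prompt_hash` is derived; one reasonable choice is a SHA-256 of the prompt text. A sketch tying the two helpers together, where `call_api` is any callable you supply:

```python
import hashlib

def lily_with_cache(prompt, call_api):
    """Check the cache first, then fall back to the API and cache the result."""
    prompt_hash = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    cached = get_cached_response(prompt_hash)
    if cached is not None:
        return cached
    response = call_api(prompt)  # e.g. a wrapper around POST /generate
    cached_lily_response(prompt_hash, response)
    return response
```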

### 6.2 Load Balancing

```python
# Use multiple Hugging Face Spaces instances
import requests

LILY_API_ENDPOINTS = [
    "https://username1-lily-llm-api.hf.space",
    "https://username2-lily-llm-api.hf.space"
]

def get_available_endpoint():
    """Pick the first endpoint that passes a health check."""
    for endpoint in LILY_API_ENDPOINTS:
        try:
            response = requests.get(f"{endpoint}/health", timeout=5)
            if response.status_code == 200:
                return endpoint
        except requests.RequestException:
            continue
    return LILY_API_ENDPOINTS[0]  # fallback default
```
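
Probing every endpoint on each request can add several seconds of worst-case latency. A sketch that memoizes the healthy endpoint for a short TTL; the 60-second window is an arbitrary assumption:

```python
import time

_endpoint_cache = {"url": None, "checked_at": 0.0}

def get_endpoint_cached(ttl_seconds=60):
    """Reuse the last healthy endpoint until the TTL expires."""
    now = time.time()
    if _endpoint_cache["url"] and now - _endpoint_cache["checked_at"] < ttl_seconds:
        return _endpoint_cache["url"]
    _endpoint_cache["url"] = get_available_endpoint()
    _endpoint_cache["checked_at"] = now
    return _endpoint_cache["url"]
```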

## 7. Security Considerations

### 7.1 API Key Management

```python
# Keep sensitive values in environment variables
import os

LILY_API_KEY = os.getenv('LILY_API_KEY')        # if required
LILY_API_SECRET = os.getenv('LILY_API_SECRET')  # if required
```

### 7.2 Rate Limiting

# ์‚ฌ์šฉ์ž๋ณ„ ์š”์ฒญ ์ œํ•œ
from django.core.cache import cache

def check_rate_limit(user_id):
    """์‚ฌ์šฉ์ž๋ณ„ ์š”์ฒญ ์ œํ•œ ํ™•์ธ"""
    key = f"lily_api_rate_limit:{user_id}"
    current = cache.get(key, 0)
    
    if current >= 100:  # ์‹œ๊ฐ„๋‹น 100ํšŒ ์ œํ•œ
        return False
        
    cache.set(key, current + 1, 3600)  # 1์‹œ๊ฐ„
    return True
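
A hypothetical guard before the API call in the consumer; the surrounding names are assumptions:

```python
# Hypothetical check inside the chat consumer, before calling the Lily API
if not check_rate_limit(user.id):
    await self.send_json({"error": "Hourly Lily API limit reached. Please try again later."})
    return

result = await self.call_lily_api(message_text, user_emotion)
```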

๐ŸŽ‰ ์—ฐ๋™ ์™„๋ฃŒ

๋ชจ๋“  ์„ค์ •์ด ์™„๋ฃŒ๋˜๋ฉด:

  1. Hugging Face Spaces: hosts the Lily LLM API server
  2. Railway: hosts the Hearth Chat service
  3. Integration: the two services communicate smoothly

์‚ฌ์šฉ์ž๋Š” Hearth Chat ์ธํ„ฐํŽ˜์ด์Šค๋ฅผ ํ†ตํ•ด Hugging Face์—์„œ ํ˜ธ์ŠคํŒ…๋˜๋Š” ๊ฐ•๋ ฅํ•œ Lily LLM AI๋ฅผ ์‚ฌ์šฉํ•  ์ˆ˜ ์žˆ๊ฒŒ ๋ฉ๋‹ˆ๋‹ค! ๐Ÿš€