sample_text = """Retrieval Augmented Generation allows LLMs to use external data.
Chunking documents is essential for RAG systems.
It preserves semantic meaning during retrieval.
This specific code implements a rigorous chunking strategy.
It uses heuristic strategies for token estimation.
The end goal is high quality embeddings."""
service = DocumentChunkingService("config.yaml")
if service.client:
    result = service.process_document(sample_text)
    print("\n--- Final Output JSON ---")
    print(result)
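
The service drives everything from config.yaml, reproduced below. For orientation, here is a minimal, hypothetical sketch of the loading pattern; the real DocumentChunkingService is defined earlier and may differ. It assumes that api_key: "ENV" means the key is read from the OPENAI_API_KEY environment variable, which would also explain the `if service.client:` guard above.

# Illustrative sketch only; not the project's actual class.
import os

import yaml
from openai import OpenAI

class ConfigLoadingSketch:
    def __init__(self, config_path: str):
        with open(config_path, "r", encoding="utf-8") as f:
            self.config = yaml.safe_load(f)

        # Assumption: "ENV" means "read the key from the environment".
        key = self.config["openai"]["api_key"]
        if key == "ENV":
            key = os.environ.get("OPENAI_API_KEY")

        # No key, no client: callers can check `service.client` before running.
        self.client = OpenAI(api_key=key) if key else None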
# config.yaml (loaded by DocumentChunkingService)
openai:
  api_key: "ENV"
  model_name: "gpt-4o-mini"
  temperature: 0.0

tokenization:
  # MASTER SWITCH: Choose "heuristic" or "huggingface"
  # - "heuristic": Uses simple math (chars / chars_per_token). Fast, no dependencies.
  # - "huggingface": Uses a real tokenizer (e.g., gpt2). Precise, requires 'transformers' lib.
  method: "heuristic"

  # Settings for "heuristic" method
  heuristic:
    chars_per_token: 4

  # Settings for "huggingface" method
  huggingface:
    # "gpt2" is a standard proxy for general LLM token counting
    model_name: "gpt2"

limits:
  # Max tokens to send to OpenAI in one request (chunk context window)
  llm_context_window: 300
  # Overlap between context windows to prevent cutting sentences
  window_overlap: 50
  # The target max size for a final, atomic chunk
  target_chunk_size: 100

prompts:
  system_instructions: |
    You are a document chunking assistant. Your goal is to group lines of text into semantically coherent chunks.
    Strict Rules:
    1. Every line number provided in the input must appear exactly once in your output.
    2. Group line numbers that belong together conceptually.
    3. Return a JSON object with a single key 'groups' containing a list of lists of integers.
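
The tokenization block's master switch maps naturally onto a small helper. The following is a hedged sketch, not the project's actual code: estimate_tokens, _hf_tokenizer, and the lru_cache caching are illustrative names and choices, and the huggingface path assumes the transformers package is installed.

from functools import lru_cache

@lru_cache(maxsize=4)
def _hf_tokenizer(model_name: str):
    # Lazy import so the heuristic path stays dependency-free.
    from transformers import AutoTokenizer
    return AutoTokenizer.from_pretrained(model_name)

def estimate_tokens(text: str, tok_cfg: dict) -> int:
    """Count tokens per the config's `tokenization` section (illustrative)."""
    if tok_cfg["method"] == "huggingface":
        # Precise: run the text through a real tokenizer (e.g., gpt2).
        tokenizer = _hf_tokenizer(tok_cfg["huggingface"]["model_name"])
        return len(tokenizer.encode(text))
    # Heuristic: simple character math, fast and dependency-free.
    return max(1, len(text) // tok_cfg["heuristic"]["chars_per_token"])

With the default heuristic, an 11-character string like "Hello world" estimates to 11 // 4 = 2 tokens; the huggingface path would report the exact count from the gpt2 tokenizer instead.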
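
The limits section implies a sliding window over the document's lines: pack lines until llm_context_window tokens are used, then step back roughly window_overlap tokens so sentences at the boundary appear in both windows. A minimal sketch under those assumptions (iter_windows is a hypothetical name; it reuses the estimate_tokens sketch above):

def iter_windows(lines: list[str], cfg: dict):
    """Yield (start_index, window_lines) pairs (illustrative sketch)."""
    window_budget = cfg["limits"]["llm_context_window"]
    overlap = cfg["limits"]["window_overlap"]
    tok_cfg = cfg["tokenization"]

    start = 0
    while start < len(lines):
        used, end = 0, start
        # Greedily pack whole lines until the token budget is exhausted.
        while end < len(lines):
            cost = estimate_tokens(lines[end], tok_cfg)
            if used + cost > window_budget:
                break
            used += cost
            end += 1
        end = max(end, start + 1)  # always make progress, even on an oversized line
        yield start, lines[start:end]
        if end >= len(lines):
            break
        # Step back far enough to re-include ~`overlap` tokens of trailing context.
        back, back_tokens = end, 0
        while back > start + 1 and back_tokens < overlap:
            back -= 1
            back_tokens += estimate_tokens(lines[back], tok_cfg)
        start = back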
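
Strict Rule 1 in the system prompt is exactly the kind of constraint an LLM can silently violate, so the response is worth checking deterministically before the groups are trusted. A small sketch (validate_groups is a hypothetical name; whatever repair or retry strategy the real pipeline uses is not shown here):

import json
from collections import Counter

def validate_groups(response_text: str, expected: list[int]) -> list[list[int]]:
    """Parse the model's JSON and enforce Strict Rule 1 (illustrative)."""
    groups = json.loads(response_text)["groups"]
    seen = Counter(n for group in groups for n in group)
    want = Counter(expected)
    if seen != want:
        missing = list((want - seen).elements())
        extra = list((seen - want).elements())
        raise ValueError(f"Rule 1 violated: missing={missing}, extra or duplicated={extra}")
    return groups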