1. The system's architecture is designed to mitigate online toxicity by transforming text inputs into less provocative forms using Large Language Models (LLMs), which are pivotal in analysing and refining text.
4. Several workers, or LLM interfaces, are defined, each suited to a specific operational environment.
5. The HTTP server worker is optimised for development, facilitating dynamic updates without requiring server restarts; it can work offline, with or without a GPU, using the `llama-cpp-python` library, provided a model has been downloaded.
6. An in-memory worker is used by the serverless worker.
7. For on-demand, scalable processing, the system includes a RunPod API worker that leverages serverless GPU functions.
8. Additionally, the Mistral API worker offers a paid-service alternative for text processing tasks.
9. A set of environment variables is predefined to configure the LLM workers' functionality (a configuration-reading sketch appears after this list).
10. The `LLM_WORKER` environment variable sets the active LLM worker.
11. The `N_GPU_LAYERS` environment variable specifies how many model layers are offloaded to the GPU, defaulting to the maximum available; it applies when the LLM worker is run with a GPU.
12. `CONTEXT_SIZE` is an adjustable parameter that defines how much text the LLM can process at once.
13. The `LLM_MODEL_PATH` environment variable indicates the LLM model's storage location, which can be either local or sourced from the HuggingFace Hub.
14. The system enforces rate limiting to maintain service integrity and equitable resource distribution.
15. The `LAST_REQUEST_TIME` and `REQUEST_INTERVAL` global variables implement rate limiting for the Mistral API worker (see the rate-limiting sketch after this list).
16. The system's worker architecture is somewhat modular, enabling easy integration or replacement of components such as LLM workers.
18. The system is capable of streaming responses in some modes, allowing for real-time interaction with the LLM.
19. The `llm_streaming` function handles communication with the LLM via HTTP streaming when the server worker is active (a streaming sketch appears after this list).
20. The `llm_stream_sans_network` function provides an alternative for local LLM inference without network dependency (sketched after this list).
21. For serverless deployment, the `llm_stream_serverless` function interfaces with the RunPod API (sketched after this list).
22. The `llm_stream_mistral_api` function facilitates interaction with the Mistral API for text processing (sketched after this list).
23. The system includes a utility function, `replace_text`, for template-based text replacement operations.
24. A scoring function, `calculate_overall_score`, combines several metrics to evaluate the effectiveness of the text transformation (an illustrative scoring sketch appears after this list).
25. The `query_ai_prompt` function serves as a dispatcher, directing text processing requests to the chosen LLM worker (a dispatcher sketch appears after this list).
27. The `inference_binary_check` function within `app.py` ensures compatibility with the available hardware, particularly GPU presence.
28. The system provides a user interface through Gradio, enabling end-users to interact with the text transformation service.
29. The `chill_out` function in `app.py` is the entry point for processing user inputs through the Gradio interface (a minimal Gradio sketch appears after this list).
30. The `improvement_loop` function in `chill.py` controls the iterative process of text refinement using the LLM (a loop sketch appears after this list).
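
The sketches below illustrate items from the list; they are minimal, hedged reconstructions rather than the project's actual code. First, reading the worker configuration from the environment variables named above (the default values shown are assumptions):

```python
import os

# Which worker backend to use; the concrete set of accepted values is assumed here.
LLM_WORKER = os.environ.get("LLM_WORKER", "in_memory")

# -1 is llama-cpp-python's convention for "offload all layers to the GPU".
N_GPU_LAYERS = int(os.environ.get("N_GPU_LAYERS", "-1"))

# How much text (in tokens) the model can attend to at once.
CONTEXT_SIZE = int(os.environ.get("CONTEXT_SIZE", "2048"))

# Either a local file path or a HuggingFace Hub reference to the model.
LLM_MODEL_PATH = os.environ.get("LLM_MODEL_PATH", "")
```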
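A sketch of the global-variable rate limiting described for the Mistral worker, using `LAST_REQUEST_TIME` and `REQUEST_INTERVAL`; the one-second interval is an assumed value:

```python
import time

LAST_REQUEST_TIME = None   # time of the previous API call (monotonic clock)
REQUEST_INTERVAL = 1.0     # minimum seconds between calls (assumed value)

def wait_for_rate_limit():
    """Sleep just long enough to keep REQUEST_INTERVAL between consecutive requests."""
    global LAST_REQUEST_TIME
    now = time.monotonic()
    if LAST_REQUEST_TIME is not None:
        elapsed = now - LAST_REQUEST_TIME
        if elapsed < REQUEST_INTERVAL:
            time.sleep(REQUEST_INTERVAL - elapsed)
    LAST_REQUEST_TIME = time.monotonic()
```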
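In the spirit of `llm_streaming`, a sketch of consuming an HTTP streaming response from a local `llama-cpp-python` server; the URL, payload, and OpenAI-style SSE chunk format are assumptions about the server setup:

```python
import json
import requests

def llm_streaming(prompt: str, url: str = "http://localhost:8000/v1/chat/completions"):
    """Yield response fragments from a local LLM server via streamed HTTP."""
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    with requests.post(url, json=payload, stream=True, timeout=600) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            # Server-sent events arrive as lines of the form "data: {...}".
            if not line or not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0]["delta"].get("content", "")
            if delta:
                yield delta
```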
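A sketch of local, in-process inference along the lines of `llm_stream_sans_network`, using `llama-cpp-python` directly; the parameter values are illustrative:

```python
import os
from llama_cpp import Llama

# The model is loaded once; the environment variables are those listed above.
llm = Llama(
    model_path=os.environ["LLM_MODEL_PATH"],                  # local GGUF file
    n_ctx=int(os.environ.get("CONTEXT_SIZE", "2048")),
    n_gpu_layers=int(os.environ.get("N_GPU_LAYERS", "-1")),   # -1 = offload all layers
)

def llm_stream_sans_network(prompt: str):
    """Yield text fragments from the in-process model; no server or network needed."""
    for chunk in llm(prompt, max_tokens=512, stream=True):
        yield chunk["choices"][0]["text"]
```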
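A sketch of calling a RunPod serverless endpoint as `llm_stream_serverless` does; the environment variable names and the handler's input/output schema are assumptions:

```python
import os
import requests

RUNPOD_ENDPOINT_ID = os.environ["RUNPOD_ENDPOINT_ID"]   # assumed variable name
RUNPOD_API_KEY = os.environ["RUNPOD_API_KEY"]           # assumed variable name

def llm_stream_serverless(prompt: str) -> str:
    """Run one synchronous inference call against a RunPod serverless endpoint."""
    url = f"https://api.runpod.ai/v2/{RUNPOD_ENDPOINT_ID}/runsync"
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {RUNPOD_API_KEY}"},
        json={"input": {"prompt": prompt}},   # the handler's input schema is assumed
        timeout=600,
    )
    resp.raise_for_status()
    # Assumes a COMPLETED response with an "output" field; error handling omitted.
    return resp.json()["output"]
```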
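A sketch of streaming from the Mistral chat completions API in the manner of `llm_stream_mistral_api`, reusing the `wait_for_rate_limit` helper sketched above; the model name is illustrative:

```python
import json
import os
import requests

MISTRAL_API_URL = "https://api.mistral.ai/v1/chat/completions"
MISTRAL_API_KEY = os.environ["MISTRAL_API_KEY"]   # assumed variable name

def llm_stream_mistral_api(prompt: str, model: str = "mistral-small-latest"):
    """Yield response fragments from the Mistral chat completions API."""
    wait_for_rate_limit()   # see the rate-limiting sketch above
    headers = {"Authorization": f"Bearer {MISTRAL_API_KEY}"}
    payload = {
        "model": model,   # model name is illustrative
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,
    }
    with requests.post(MISTRAL_API_URL, headers=headers, json=payload,
                       stream=True, timeout=600) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue
            data = line[len(b"data: "):]
            if data == b"[DONE]":
                break
            chunk = json.loads(data)
            delta = chunk["choices"][0]["delta"].get("content", "")
            if delta:
                yield delta
```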
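The list does not say which metrics `calculate_overall_score` combines, so this sketch uses two purely hypothetical ones (faithfulness to the original text and a calmness score) with an illustrative weighting:

```python
def calculate_overall_score(faithfulness: float, calmness: float) -> float:
    """Combine two per-text metrics (both in [0, 1]) into one overall score.

    The metric names and the 60/40 weighting are illustrative; the real
    function may combine different signals in a different way.
    """
    return 0.6 * faithfulness + 0.4 * calmness

# Example: a rewrite that stays close to the original and is considerably calmer.
print(calculate_overall_score(0.9, 0.8))   # -> 0.86
```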
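A sketch of the `query_ai_prompt` dispatcher, routing to the worker functions sketched above; the worker identifiers are assumptions:

```python
import os

def query_ai_prompt(prompt: str):
    """Dispatch a prompt to whichever worker LLM_WORKER selects.

    The worker names below are assumptions; the real module may use
    different identifiers.
    """
    worker = os.environ.get("LLM_WORKER", "in_memory")
    if worker == "http_server":
        return llm_streaming(prompt)
    if worker == "in_memory":
        return llm_stream_sans_network(prompt)
    if worker == "runpod":
        return llm_stream_serverless(prompt)
    if worker == "mistral":
        return llm_stream_mistral_api(prompt)
    raise ValueError(f"Unknown LLM_WORKER: {worker!r}")
```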
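A minimal Gradio wiring in the spirit of `app.py`'s `chill_out` entry point; the interface labels and the stub body are placeholders, not the real implementation:

```python
import gradio as gr

def chill_out(text: str) -> str:
    """Stand-in for the real entry point in app.py, which routes `text`
    through the selected LLM worker and returns the calmer rewrite."""
    return text   # the real function performs the LLM-based transformation

demo = gr.Interface(
    fn=chill_out,
    inputs=gr.Textbox(lines=6, label="Original text"),
    outputs=gr.Textbox(label="Calmer rewrite"),
    title="Chill out",
)

if __name__ == "__main__":
    demo.launch()
```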
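Finally, a sketch of an iterative refinement loop like `improvement_loop` in `chill.py`; the callable-based structure, iteration count, and score threshold are assumptions:

```python
from typing import Callable

def improvement_loop(
    text: str,
    rewrite: Callable[[str], str],        # e.g. a wrapper around the chosen LLM worker
    score: Callable[[str, str], float],   # e.g. calculate_overall_score on extracted metrics
    max_iterations: int = 3,
    good_enough: float = 0.85,
) -> str:
    """Keep requesting calmer rewrites until one scores well enough or we give up.

    The stopping criteria and overall structure are illustrative; the real
    loop in chill.py may be organised differently.
    """
    best_text, best_score = text, 0.0
    for _ in range(max_iterations):
        candidate = rewrite(best_text)
        candidate_score = score(text, candidate)
        if candidate_score > best_score:
            best_text, best_score = candidate, candidate_score
        if best_score >= good_enough:
            break
    return best_text
```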