Columns: text (string, 5–631k characters) · id (string, 14–178 characters) · metadata (dict) · __index_level_0__ (int64, 0–647)
[ { "question": "Which of the following best describes a Large Language Model (LLM)?", "answer_a": "A model specializing in language recognition", "answer_b": "A massive neural network that understands and generates human language", "answer_c": "A model exclusively used for language data tasks like summarization or classification", "answer_d": "A rule-based chatbot used for conversations", "correct_answer": "B" } ]
agents-course/quiz/data/unit_1.json/0
{ "file_path": "agents-course/quiz/data/unit_1.json", "repo_id": "agents-course", "token_count": 154 }
0
# Build Your Own Pokémon Battle Agent Now that you’ve explored the potential and limitations of Agentic AI in games, it’s time to get hands-on. In this section, you’ll **build your very own AI Agent to battle in Pokémon-style turn-based combat**, using everything you’ve learned throughout the course. We’ll break the system into four key building blocks: - **Poke-env:** A Python library designed to train rule-based or reinforcement learning Pokémon bots. - **Pokémon Showdown:** An online battle simulator where your agent will fight. - **LLMAgentBase:** A custom Python class we’ve built to connect your LLM with the Poke-env battle environment. - **TemplateAgent:** A starter template you’ll complete to create your own unique battle agent. Let’s explore each of these components in more detail. ## 🧠 Poke-env ![Battle gif](https://github.com/hsahovic/poke-env/raw/master/rl-gif.gif) [Poke-env](https://github.com/hsahovic/poke-env) is a Python interface originally built for training reinforcement learning bots by [Haris Sahovic](https://huggingface.co/hsahovic), but we’ve repurposed it for Agentic AI. It allows your agent to interact with Pokémon Showdown through a simple API. It provides a `Player` class from which your Agent will inherit, covering everything needed to communicate with the graphical interface. **Documentation**: [poke-env.readthedocs.io](https://poke-env.readthedocs.io/en/stable/) **Repository**: [github.com/hsahovic/poke-env](https://github.com/hsahovic/poke-env) ## ⚔️ Pokémon Showdown [Pokémon Showdown](https://pokemonshowdown.com/) is an [open-source](https://github.com/smogon/Pokemon-Showdown) battle simulator where your agent will play live Pokémon battles. It provides a full interface to simulate and display battles in real time. In our challenge, your bot will act just like a human player, choosing moves turn by turn. We’ve deployed a server that all participants will use to battle. Let’s see who builds the best AI battle Agent! **Repository**: [github.com/smogon/Pokemon-Showdown](https://github.com/smogon/Pokemon-Showdown) **Website**: [pokemonshowdown.com](https://pokemonshowdown.com/) ## 🔌 LLMAgentBase `LLMAgentBase` is a Python class that extends the `Player` class from **Poke-env**. It serves as the bridge between your **LLM** and the **Pokémon battle simulator**, handling input/output formatting and maintaining battle context. This base agent provides a set of tools (defined in `STANDARD_TOOL_SCHEMA`) to interact with the environment, including: - `choose_move`: for selecting an attack during battle - `choose_switch`: for switching Pokémon The LLM should use these tools to make decisions during a match. ### 🧠 Core Logic - `choose_move(battle: Battle)`: This is the main method invoked each turn. It takes a `Battle` object and returns an action string based on the LLM’s output. ### 🔧 Key Internal Methods - `_format_battle_state(battle)`: Converts the current battle state into a string, making it suitable for sending to the LLM. - `_find_move_by_name(battle, move_name)`: Finds a move by name, used in LLM responses that call `choose_move`. - `_find_pokemon_by_name(battle, pokemon_name)`: Locates a specific Pokémon to switch into, based on the LLM’s switch command. - `_get_llm_decision(battle_state)`: This method is abstract in the base class. You’ll need to implement it in your own agent (see next section), where you define how to query the LLM and parse its response. 
Here’s an excerpt showing how that decision-making works: ```python STANDARD_TOOL_SCHEMA = { "choose_move": { ... }, "choose_switch": { ... }, } class LLMAgentBase(Player): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.standard_tools = STANDARD_TOOL_SCHEMA self.battle_history = [] def _format_battle_state(self, battle: Battle) -> str: active_pkmn = battle.active_pokemon active_pkmn_info = f"Your active Pokemon: {active_pkmn.species} " \ f"(Type: {'/'.join(map(str, active_pkmn.types))}) " \ f"HP: {active_pkmn.current_hp_fraction * 100:.1f}% " \ f"Status: {active_pkmn.status.name if active_pkmn.status else 'None'} " \ f"Boosts: {active_pkmn.boosts}" opponent_pkmn = battle.opponent_active_pokemon opp_info_str = "Unknown" if opponent_pkmn: opp_info_str = f"{opponent_pkmn.species} " \ f"(Type: {'/'.join(map(str, opponent_pkmn.types))}) " \ f"HP: {opponent_pkmn.current_hp_fraction * 100:.1f}% " \ f"Status: {opponent_pkmn.status.name if opponent_pkmn.status else 'None'} " \ f"Boosts: {opponent_pkmn.boosts}" opponent_pkmn_info = f"Opponent's active Pokemon: {opp_info_str}" available_moves_info = "Available moves:\n" if battle.available_moves: available_moves_info += "\n".join( [f"- {move.id} (Type: {move.type}, BP: {move.base_power}, Acc: {move.accuracy}, PP: {move.current_pp}/{move.max_pp}, Cat: {move.category.name})" for move in battle.available_moves] ) else: available_moves_info += "- None (Must switch or Struggle)" available_switches_info = "Available switches:\n" if battle.available_switches: available_switches_info += "\n".join( [f"- {pkmn.species} (HP: {pkmn.current_hp_fraction * 100:.1f}%, Status: {pkmn.status.name if pkmn.status else 'None'})" for pkmn in battle.available_switches] ) else: available_switches_info += "- None" state_str = f"{active_pkmn_info}\n" \ f"{opponent_pkmn_info}\n\n" \ f"{available_moves_info}\n\n" \ f"{available_switches_info}\n\n" \ f"Weather: {battle.weather}\n" \ f"Terrains: {battle.fields}\n" \ f"Your Side Conditions: {battle.side_conditions}\n" \ f"Opponent Side Conditions: {battle.opponent_side_conditions}" return state_str.strip() def _find_move_by_name(self, battle: Battle, move_name: str) -> Optional[Move]: normalized_name = normalize_name(move_name) # Prioritize exact ID match for move in battle.available_moves: if move.id == normalized_name: return move # Fallback: Check display name (less reliable) for move in battle.available_moves: if move.name.lower() == move_name.lower(): print(f"Warning: Matched move by display name '{move.name}' instead of ID '{move.id}'. 
Input was '{move_name}'.") return move return None def _find_pokemon_by_name(self, battle: Battle, pokemon_name: str) -> Optional[Pokemon]: normalized_name = normalize_name(pokemon_name) for pkmn in battle.available_switches: # Normalize the species name for comparison if normalize_name(pkmn.species) == normalized_name: return pkmn return None async def choose_move(self, battle: Battle) -> str: battle_state_str = self._format_battle_state(battle) decision_result = await self._get_llm_decision(battle_state_str) print(decision_result) decision = decision_result.get("decision") error_message = decision_result.get("error") action_taken = False fallback_reason = "" if decision: function_name = decision.get("name") args = decision.get("arguments", {}) if function_name == "choose_move": move_name = args.get("move_name") if move_name: chosen_move = self._find_move_by_name(battle, move_name) if chosen_move and chosen_move in battle.available_moves: action_taken = True chat_msg = f"AI Decision: Using move '{chosen_move.id}'." print(chat_msg) return self.create_order(chosen_move) else: fallback_reason = f"LLM chose unavailable/invalid move '{move_name}'." else: fallback_reason = "LLM 'choose_move' called without 'move_name'." elif function_name == "choose_switch": pokemon_name = args.get("pokemon_name") if pokemon_name: chosen_switch = self._find_pokemon_by_name(battle, pokemon_name) if chosen_switch and chosen_switch in battle.available_switches: action_taken = True chat_msg = f"AI Decision: Switching to '{chosen_switch.species}'." print(chat_msg) return self.create_order(chosen_switch) else: fallback_reason = f"LLM chose unavailable/invalid switch '{pokemon_name}'." else: fallback_reason = "LLM 'choose_switch' called without 'pokemon_name'." else: fallback_reason = f"LLM called unknown function '{function_name}'." if not action_taken: if not fallback_reason: if error_message: fallback_reason = f"API Error: {error_message}" elif decision is None: fallback_reason = "LLM did not provide a valid function call." else: fallback_reason = "Unknown error processing LLM decision." print(f"Warning: {fallback_reason} Choosing random action.") if battle.available_moves or battle.available_switches: return self.choose_random_move(battle) else: print("AI Fallback: No moves or switches available. Using Struggle/Default.") return self.choose_default_move(battle) async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]: raise NotImplementedError("Subclasses must implement _get_llm_decision") ``` **Full source code**: [agents.py](https://huggingface.co/spaces/Jofthomas/twitch_streaming/blob/main/agents.py) ## 🧪 TemplateAgent Now comes the fun part! With LLMAgentBase as your foundation, it’s time to implement your own agent, with your own strategy to climb the leaderboard. You’ll start from this template and build your own logic. We’ve also provided three [complete examples](https://huggingface.co/spaces/Jofthomas/twitch_streaming/blob/main/agents.py) using **OpenAI**, **Mistral**, and **Gemini** models to guide you. Here’s a simplified version of the template: ```python class TemplateAgent(LLMAgentBase): """Uses Template AI API for decisions.""" def __init__(self, api_key: str = None, model: str = "model-name", *args, **kwargs): super().__init__(*args, **kwargs) self.model = model self.template_client = TemplateModelProvider(api_key=...) 
        self.template_tools = list(self.standard_tools.values())

    async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]:
        """Sends state to the LLM and gets back the function call decision."""
        system_prompt = (
            "You are a ..."
        )
        user_prompt = f"..."

        try:
            response = await self.template_client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": user_prompt},
                ],
            )
            message = response.choices[0].message
            return {"decision": {"name": function_name, "arguments": arguments}}
        except Exception as e:
            print(f"Unexpected error during call: {e}")
            return {"error": f"Unexpected error: {e}"}
```

This code won't run out of the box; it's a blueprint for your custom logic, and a minimal sketch of one possible way to fill it in follows below.

With all the pieces ready, it's your turn to build a competitive agent. In the next section, we'll show how to deploy your agent to our server and battle others in real time.

Let the battle begin! 🔥
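Here is that sketch: a hedged example of what a concrete `_get_llm_decision` could look like with an OpenAI-compatible client that supports tool calling. The subclass name, client setup, prompt wording, and tool-call parsing are assumptions for illustration (including the assumption that the entries of `STANDARD_TOOL_SCHEMA` are already in the provider's tool format); adapt them to your provider, or start from the three complete examples linked above.

```python
import json
from typing import Any, Dict

from openai import AsyncOpenAI  # assumed provider SDK; swap in Mistral/Gemini clients as needed


class OpenAIStyleAgent(LLMAgentBase):  # hypothetical subclass, reusing LLMAgentBase from above
    def __init__(self, api_key: str = None, model: str = "gpt-4o-mini", *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model = model
        self.client = AsyncOpenAI(api_key=api_key)
        # Assumes STANDARD_TOOL_SCHEMA values are already in the provider's tool format.
        self.tools = list(self.standard_tools.values())

    async def _get_llm_decision(self, battle_state: str) -> Dict[str, Any]:
        system_prompt = "You are a Pokémon battle expert. Respond with exactly one tool call per turn."
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=[
                    {"role": "system", "content": system_prompt},
                    {"role": "user", "content": battle_state},
                ],
                tools=self.tools,
                tool_choice="auto",
            )
            tool_calls = response.choices[0].message.tool_calls
            if not tool_calls:
                return {"error": "LLM returned no tool call."}
            call = tool_calls[0]
            # The arguments arrive as a JSON string, e.g. '{"move_name": "thunderbolt"}'.
            return {
                "decision": {
                    "name": call.function.name,
                    "arguments": json.loads(call.function.arguments or "{}"),
                }
            }
        except Exception as e:
            return {"error": f"Unexpected error: {e}"}
```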
agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx/0
{ "file_path": "agents-course/units/en/bonus-unit3/building_your_pokemon_agent.mdx", "repo_id": "agents-course", "token_count": 5276 }
1
# Introduction to Agents <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/> Welcome to this first unit, where **you'll build a solid foundation in the fundamentals of AI Agents** including: - **Understanding Agents** - What is an Agent, and how does it work? - How do Agents make decisions using reasoning and planning? - **The Role of LLMs (Large Language Models) in Agents** - How LLMs serve as the “brain” behind an Agent. - How LLMs structure conversations via the Messages system. - **Tools and Actions** - How Agents use external tools to interact with the environment. - How to build and integrate tools for your Agent. - **The Agent Workflow:** - *Think* → *Act* → *Observe*. After exploring these topics, **you’ll build your first Agent** using `smolagents`! Your Agent, named Alfred, will handle a simple task and demonstrate how to apply these concepts in practice. You’ll even learn how to **publish your Agent on Hugging Face Spaces**, so you can share it with friends and colleagues. Finally, at the end of this Unit, you'll take a quiz. Pass it, and you'll **earn your first course certification**: the 🎓 Certificate of Fundamentals of Agents. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/> This Unit is your **essential starting point**, laying the groundwork for understanding Agents before you move on to more advanced topics. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/> It's a big unit, so **take your time** and don’t hesitate to come back to these sections from time to time. Ready? Let’s dive in! 🚀
agents-course/units/en/unit1/introduction.mdx/0
{ "file_path": "agents-course/units/en/unit1/introduction.mdx", "repo_id": "agents-course", "token_count": 530 }
2
# Test Your Understanding of LangGraph

Let's test your understanding of `LangGraph` with a quick quiz! This will help reinforce the key concepts we've covered so far.

This is an optional quiz and it's not graded.

### Q1: What is the primary purpose of LangGraph?

Which statement best describes what LangGraph is designed for?

<Question
choices={[
  {
    text: "A framework to build control flows for applications containing LLMs",
    explain: "LangGraph is specifically designed to help build and manage the control flow of applications that use LLMs.",
    correct: true
  },
  {
    text: "A library that provides interfaces to interact with different LLM models",
    explain: "This better describes LangChain's role, which provides standard interfaces for model interaction. LangGraph focuses on control flow.",
  },
  {
    text: "An Agent library for tool calling",
    explain: "While LangGraph works with agents, its main purpose is orchestration of control flow, not tool calling itself.",
  }
]}
/>

---

### Q2: In the context of the "Control vs Freedom" trade-off, where does LangGraph stand?

Which statement best characterizes LangGraph's approach to agent design?

<Question
choices={[
  {
    text: "LangGraph maximizes freedom, allowing LLMs to make all decisions independently",
    explain: "LangGraph actually focuses more on control than freedom, providing structure for LLM workflows.",
  },
  {
    text: "LangGraph provides strong control over execution flow while still leveraging LLM capabilities for decision making",
    explain: "LangGraph shines when you need control over your agent's execution, providing predictable behavior through structured workflows.",
    correct: true
  },
]}
/>

---

### Q3: What role does State play in LangGraph?

Choose the most accurate description of State in LangGraph.

<Question
choices={[
  {
    text: "State is the latest generation from the LLM",
    explain: "State is a user-defined class in LangGraph, not something the LLM generates. Its fields are user-defined; their values can be filled in by the LLM.",
  },
  {
    text: "State is only used to track errors during execution",
    explain: "State has a much broader purpose than just error tracking, although tracking errors in State can still be useful.",
  },
  {
    text: "State represents the information that flows through your agent application",
    explain: "State is central to LangGraph and contains all the information needed for decision-making between steps. You define the fields you need to compute, and nodes can update their values to drive branching decisions.",
    correct: true
  },
  {
    text: "State is only relevant when working with external APIs",
    explain: "State is fundamental to all LangGraph applications, not just those working with external APIs.",
  }
]}
/>

### Q4: What is a Conditional Edge in LangGraph?

Select the most accurate description.

<Question
choices={[
  {
    text: "An edge that determines which node to execute next based on evaluating a condition",
    explain: "Conditional edges allow your graph to make dynamic routing decisions based on the current state, creating branching logic in your workflow.",
    correct: true
  },
  {
    text: "An edge that is only followed when a specific condition occurs",
    explain: "Conditional edges route based on the output of a condition function evaluated over the state, not merely on an input event occurring.",
  },
  {
    text: "An edge that requires user confirmation before proceeding",
    explain: "Conditional edges are based on programmatic conditions, not user interaction requirements.",
  }
]}
/>

---

### Q5: How does LangGraph help address the hallucination problem in LLMs?

Choose the best answer.

<Question
choices={[
  {
    text: "LangGraph eliminates hallucinations entirely by limiting LLM responses",
    explain: "No framework can completely eliminate hallucinations from LLMs; LangGraph is no exception.",
  },
  {
    text: "LangGraph provides structured workflows that can validate and verify LLM outputs",
    explain: "By creating structured workflows with validation steps, verification nodes, and error handling paths, LangGraph helps reduce the impact of hallucinations.",
    correct: true
  },
  {
    text: "LangGraph has no effect on hallucinations",
    explain: "LangGraph's structured approach to workflows can significantly help mitigate hallucinations, albeit at some cost in speed.",
  }
]}
/>

Congratulations on completing the quiz! 🎉 If you missed any questions, consider reviewing the previous sections to strengthen your understanding.

Next, we'll explore more advanced features of LangGraph and see how to build more complex agent workflows. Before that, if State and conditional edges still feel abstract, the short sketch below shows both in a few lines of code.
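A minimal sketch of a State and a conditional edge (the node names, state fields, and routing condition are invented for illustration, not taken from the course):

```python
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END


# The State holds the information that flows between nodes (Q3).
class State(TypedDict):
    question: str
    draft: str
    is_valid: bool


def generate(state: State) -> dict:
    # A real graph would call an LLM here; we just fill in the fields.
    return {"draft": f"Answer to: {state['question']}", "is_valid": True}


def route(state: State) -> str:
    # A conditional edge evaluates the state to pick the next node (Q4).
    return "done" if state["is_valid"] else "retry"


builder = StateGraph(State)
builder.add_node("generate", generate)
builder.add_edge(START, "generate")
builder.add_conditional_edges("generate", route, {"done": END, "retry": "generate"})
graph = builder.compile()

print(graph.invoke({"question": "What is LangGraph?", "draft": "", "is_valid": False}))
```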
agents-course/units/en/unit2/langgraph/quiz1.mdx/0
{ "file_path": "agents-course/units/en/unit2/langgraph/quiz1.mdx", "repo_id": "agents-course", "token_count": 1169 }
3
<CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb"}, ]} askForHelpUrl="http://hf.co/join/discord" /> # Multi-Agent Systems Multi-agent systems enable **specialized agents to collaborate on complex tasks**, improving modularity, scalability, and robustness. Instead of relying on a single agent, tasks are distributed among agents with distinct capabilities. In **smolagents**, different agents can be combined to generate Python code, call external tools, perform web searches, and more. By orchestrating these agents, we can create powerful workflows. A typical setup might include: - A **Manager Agent** for task delegation - A **Code Interpreter Agent** for code execution - A **Web Search Agent** for information retrieval The diagram below illustrates a simple multi-agent architecture where a **Manager Agent** coordinates a **Code Interpreter Tool** and a **Web Search Agent**, which in turn utilizes tools like the `DuckDuckGoSearchTool` and `VisitWebpageTool` to gather relevant information. <img src="https://mermaid.ink/img/pako:eNp1kc1qhTAQRl9FUiQb8wIpdNO76eKubrmFks1oRg3VSYgjpYjv3lFL_2hnMWQOJwn5sqgmelRWleUSKLAtFs09jqhtoWuYUFfFAa6QA9QDTnpzamheuhxn8pt40-6l13UtS0ddhtQXj6dbR4XUGQg6zEYasTF393KjeSDGnDJKNxzj8I_7hLW5IOSmP9CH9hv_NL-d94d4DVNg84p1EnK4qlIj5hGClySWbadT-6OdsrL02MI8sFOOVkciw8zx8kaNspxnrJQE0fXKtjBMMs3JA-MpgOQwftIE9Bzj14w-cMznI_39E9Z3p0uFoA?type=png" style='background: white;'> ## Multi-Agent Systems in Action A multi-agent system consists of multiple specialized agents working together under the coordination of an **Orchestrator Agent**. This approach enables complex workflows by distributing tasks among agents with distinct roles. For example, a **Multi-Agent RAG system** can integrate: - A **Web Agent** for browsing the internet. - A **Retriever Agent** for fetching information from knowledge bases. - An **Image Generation Agent** for producing visuals. All of these agents operate under an orchestrator that manages task delegation and interaction. ## Solving a complex task with a multi-agent hierarchy <Tip> You can follow the code in <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit2/smolagents/multiagent_notebook.ipynb" target="_blank">this notebook</a> that you can run using Google Colab. </Tip> The reception is approaching! With your help, Alfred is now nearly finished with the preparations. But now there's a problem: the Batmobile has disappeared. Alfred needs to find a replacement, and find it quickly. Fortunately, a few biopics have been done on Bruce Wayne's life, so maybe Alfred could get a car left behind on one of the movie sets, and re-engineer it up to modern standards, which certainly would include a full self-driving option. But this could be anywhere in the filming locations around the world - which could be numerous. So Alfred wants your help. Could you build an agent able to solve this task? > 👉 Find all Batman filming locations in the world, calculate the time to transfer via boat to there, and represent them on a map, with a color varying by boat transfer time. Also represent some supercar factories with the same boat transfer time. Let's build this! 
This example needs some additional packages, so let's install them first: ```bash pip install 'smolagents[litellm]' plotly geopandas shapely kaleido -q ``` ### We first make a tool to get the cargo plane transfer time. ```python import math from typing import Optional, Tuple from smolagents import tool @tool def calculate_cargo_travel_time( origin_coords: Tuple[float, float], destination_coords: Tuple[float, float], cruising_speed_kmh: Optional[float] = 750.0, # Average speed for cargo planes ) -> float: """ Calculate the travel time for a cargo plane between two points on Earth using great-circle distance. Args: origin_coords: Tuple of (latitude, longitude) for the starting point destination_coords: Tuple of (latitude, longitude) for the destination cruising_speed_kmh: Optional cruising speed in km/h (defaults to 750 km/h for typical cargo planes) Returns: float: The estimated travel time in hours Example: >>> # Chicago (41.8781° N, 87.6298° W) to Sydney (33.8688° S, 151.2093° E) >>> result = calculate_cargo_travel_time((41.8781, -87.6298), (-33.8688, 151.2093)) """ def to_radians(degrees: float) -> float: return degrees * (math.pi / 180) # Extract coordinates lat1, lon1 = map(to_radians, origin_coords) lat2, lon2 = map(to_radians, destination_coords) # Earth's radius in kilometers EARTH_RADIUS_KM = 6371.0 # Calculate great-circle distance using the haversine formula dlon = lon2 - lon1 dlat = lat2 - lat1 a = ( math.sin(dlat / 2) ** 2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2 ) c = 2 * math.asin(math.sqrt(a)) distance = EARTH_RADIUS_KM * c # Add 10% to account for non-direct routes and air traffic controls actual_distance = distance * 1.1 # Calculate flight time # Add 1 hour for takeoff and landing procedures flight_time = (actual_distance / cruising_speed_kmh) + 1.0 # Format the results return round(flight_time, 2) print(calculate_cargo_travel_time((41.8781, -87.6298), (-33.8688, 151.2093))) ``` ### Setting up the agent For the model provider, we use Together AI, one of the new [inference providers on the Hub](https://huggingface.co/blog/inference-providers)! The GoogleSearchTool uses the [Serper API](https://serper.dev) to search the web, so this requires either having setup env variable `SERPAPI_API_KEY` and passing `provider="serpapi"` or having `SERPER_API_KEY` and passing `provider=serper`. If you don't have any Serp API provider setup, you can use `DuckDuckGoSearchTool` but beware that it has a rate limit. ```python import os from PIL import Image from smolagents import CodeAgent, GoogleSearchTool, InferenceClientModel, VisitWebpageTool model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct", provider="together") ``` We can start by creating a simple agent as a baseline to give us a simple report. ```python task = """Find all Batman filming locations in the world, calculate the time to transfer via cargo plane to here (we're in Gotham, 40.7128° N, 74.0060° W), and return them to me as a pandas dataframe. 
Also give me some supercar factories with the same cargo plane transfer time.""" ``` ```python agent = CodeAgent( model=model, tools=[GoogleSearchTool("serper"), VisitWebpageTool(), calculate_cargo_travel_time], additional_authorized_imports=["pandas"], max_steps=20, ) ``` ```python result = agent.run(task) ``` ```python result ``` In our case, it generates this output: ```python | | Location | Travel Time to Gotham (hours) | |--|------------------------------------------------------|------------------------------| | 0 | Necropolis Cemetery, Glasgow, Scotland, UK | 8.60 | | 1 | St. George's Hall, Liverpool, England, UK | 8.81 | | 2 | Two Temple Place, London, England, UK | 9.17 | | 3 | Wollaton Hall, Nottingham, England, UK | 9.00 | | 4 | Knebworth House, Knebworth, Hertfordshire, UK | 9.15 | | 5 | Acton Lane Power Station, Acton Lane, Acton, UK | 9.16 | | 6 | Queensboro Bridge, New York City, USA | 1.01 | | 7 | Wall Street, New York City, USA | 1.00 | | 8 | Mehrangarh Fort, Jodhpur, Rajasthan, India | 18.34 | | 9 | Turda Gorge, Turda, Romania | 11.89 | | 10 | Chicago, USA | 2.68 | | 11 | Hong Kong, China | 19.99 | | 12 | Cardington Studios, Northamptonshire, UK | 9.10 | | 13 | Warner Bros. Leavesden Studios, Hertfordshire, UK | 9.13 | | 14 | Westwood, Los Angeles, CA, USA | 6.79 | | 15 | Woking, UK (McLaren) | 9.13 | ``` We could already improve this a bit by throwing in some dedicated planning steps, and adding more prompting. Planning steps allow the agent to think ahead and plan its next steps, which can be useful for more complex tasks. ```python agent.planning_interval = 4 detailed_report = agent.run(f""" You're an expert analyst. You make comprehensive reports after visiting many websites. Don't hesitate to search for many queries at once in a for loop. For each data point that you find, visit the source url to confirm numbers. {task} """) print(detailed_report) ``` ```python detailed_report ``` In our case, it generates this output: ```python | | Location | Travel Time (hours) | |--|--------------------------------------------------|---------------------| | 0 | Bridge of Sighs, Glasgow Necropolis, Glasgow, UK | 8.6 | | 1 | Wishart Street, Glasgow, Scotland, UK | 8.6 | ``` Thanks to these quick changes, we obtained a much more concise report by simply providing our agent a detailed prompt, and giving it planning capabilities! The model's context window is quickly filling up. So **if we ask our agent to combine the results of detailed search with another, it will be slower and quickly ramp up tokens and costs**. ➡️ We need to improve the structure of our system. ### ✌️ Splitting the task between two agents Multi-agent structures allow to separate memories between different sub-tasks, with two great benefits: - Each agent is more focused on its core task, thus more performant - Separating memories reduces the count of input tokens at each step, thus reducing latency and cost. Let's create a team with a dedicated web search agent, managed by another agent. The manager agent should have plotting capabilities to write its final report: so let us give it access to additional imports, including `plotly`, and `geopandas` + `shapely` for spatial plotting. 
```python model = InferenceClientModel( "Qwen/Qwen2.5-Coder-32B-Instruct", provider="together", max_tokens=8096 ) web_agent = CodeAgent( model=model, tools=[ GoogleSearchTool(provider="serper"), VisitWebpageTool(), calculate_cargo_travel_time, ], name="web_agent", description="Browses the web to find information", verbosity_level=0, max_steps=10, ) ``` The manager agent will need to do some mental heavy lifting. So we give it the stronger model [DeepSeek-R1](https://huggingface.co/deepseek-ai/DeepSeek-R1), and add a `planning_interval` to the mix. ```python from smolagents.utils import encode_image_base64, make_image_url from smolagents import OpenAIServerModel def check_reasoning_and_plot(final_answer, agent_memory): multimodal_model = OpenAIServerModel("gpt-4o", max_tokens=8096) filepath = "saved_map.png" assert os.path.exists(filepath), "Make sure to save the plot under saved_map.png!" image = Image.open(filepath) prompt = ( f"Here is a user-given task and the agent steps: {agent_memory.get_succinct_steps()}. Now here is the plot that was made." "Please check that the reasoning process and plot are correct: do they correctly answer the given task?" "First list reasons why yes/no, then write your final decision: PASS in caps lock if it is satisfactory, FAIL if it is not." "Don't be harsh: if the plot mostly solves the task, it should pass." "To pass, a plot should be made using px.scatter_map and not any other method (scatter_map looks nicer)." ) messages = [ { "role": "user", "content": [ { "type": "text", "text": prompt, }, { "type": "image_url", "image_url": {"url": make_image_url(encode_image_base64(image))}, }, ], } ] output = multimodal_model(messages).content print("Feedback: ", output) if "FAIL" in output: raise Exception(output) return True manager_agent = CodeAgent( model=InferenceClientModel("deepseek-ai/DeepSeek-R1", provider="together", max_tokens=8096), tools=[calculate_cargo_travel_time], managed_agents=[web_agent], additional_authorized_imports=[ "geopandas", "plotly", "shapely", "json", "pandas", "numpy", ], planning_interval=5, verbosity_level=2, final_answer_checks=[check_reasoning_and_plot], max_steps=15, ) ``` Let us inspect what this team looks like: ```python manager_agent.visualize() ``` This will generate something like this, helping us understand the structure and relationship between agents and tools used: ```python CodeAgent | deepseek-ai/DeepSeek-R1 ├── ✅ Authorized imports: ['geopandas', 'plotly', 'shapely', 'json', 'pandas', 'numpy'] ├── 🛠️ Tools: │ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ │ ┃ Name ┃ Description ┃ Arguments ┃ │ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ │ calculate_cargo_travel_time │ Calculate the travel time for a cargo │ origin_coords (`array`): Tuple of │ │ │ │ plane between two points on Earth │ (latitude, longitude) for the │ │ │ │ using great-circle distance. │ starting point │ │ │ │ │ destination_coords (`array`): Tuple │ │ │ │ │ of (latitude, longitude) for the │ │ │ │ │ destination │ │ │ │ │ cruising_speed_kmh (`number`): │ │ │ │ │ Optional cruising speed in km/h │ │ │ │ │ (defaults to 750 km/h for typical │ │ │ │ │ cargo planes) │ │ │ final_answer │ Provides a final answer to the given │ answer (`any`): The final answer to │ │ │ │ problem. 
│ the problem │ │ └─────────────────────────────┴───────────────────────────────────────┴───────────────────────────────────────┘ └── 🤖 Managed agents: └── web_agent | CodeAgent | Qwen/Qwen2.5-Coder-32B-Instruct ├── ✅ Authorized imports: [] ├── 📝 Description: Browses the web to find information └── 🛠️ Tools: ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┃ Name ┃ Description ┃ Arguments ┃ ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩ │ web_search │ Performs a google web search for │ query (`string`): The search │ │ │ your query then returns a string │ query to perform. │ │ │ of the top search results. │ filter_year (`integer`): │ │ │ │ Optionally restrict results to a │ │ │ │ certain year │ │ visit_webpage │ Visits a webpage at the given url │ url (`string`): The url of the │ │ │ and reads its content as a │ webpage to visit. │ │ │ markdown string. Use this to │ │ │ │ browse webpages. │ │ │ calculate_cargo_travel_time │ Calculate the travel time for a │ origin_coords (`array`): Tuple of │ │ │ cargo plane between two points on │ (latitude, longitude) for the │ │ │ Earth using great-circle │ starting point │ │ │ distance. │ destination_coords (`array`): │ │ │ │ Tuple of (latitude, longitude) │ │ │ │ for the destination │ │ │ │ cruising_speed_kmh (`number`): │ │ │ │ Optional cruising speed in km/h │ │ │ │ (defaults to 750 km/h for typical │ │ │ │ cargo planes) │ │ final_answer │ Provides a final answer to the │ answer (`any`): The final answer │ │ │ given problem. │ to the problem │ └─────────────────────────────┴───────────────────────────────────┴───────────────────────────────────┘ ``` ```python manager_agent.run(""" Find all Batman filming locations in the world, calculate the time to transfer via cargo plane to here (we're in Gotham, 40.7128° N, 74.0060° W). Also give me some supercar factories with the same cargo plane transfer time. You need at least 6 points in total. Represent this as spatial map of the world, with the locations represented as scatter points with a color that depends on the travel time, and save it to saved_map.png! Here's an example of how to plot and return a map: import plotly.express as px df = px.data.carshare() fig = px.scatter_map(df, lat="centroid_lat", lon="centroid_lon", text="name", color="peak_hour", size=100, color_continuous_scale=px.colors.sequential.Magma, size_max=15, zoom=1) fig.show() fig.write_image("saved_image.png") final_answer(fig) Never try to process strings using code: when you have a string to read, just print it and you'll see it. """) ``` I don't know how that went in your run, but in mine, the manager agent skilfully divided tasks given to the web agent in `1. Search for Batman filming locations`, then `2. Find supercar factories`, before aggregating the lists and plotting the map. Let's see what the map looks like by inspecting it directly from the agent state: ```python manager_agent.python_executor.state["fig"] ``` This will output the map: ![Multiagent system example output map](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/output_map.png) ## Resources - [Multi-Agent Systems](https://huggingface.co/docs/smolagents/main/en/examples/multiagents) – Overview of multi-agent systems. - [What is Agentic RAG?](https://weaviate.io/blog/what-is-agentic-rag) – Introduction to Agentic RAG. 
- [Multi-Agent RAG System 🤖🤝🤖 Recipe](https://huggingface.co/learn/cookbook/multiagent_rag_system) – Step-by-step guide to building a multi-agent RAG system.
agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx/0
{ "file_path": "agents-course/units/en/unit2/smolagents/multi_agent_systems.mdx", "repo_id": "agents-course", "token_count": 9133 }
4
# Conclusion **Congratulations on finishing the Agents Course!** Through perseverance and dedication, you’ve built a solid foundation in the world of AI Agents. But finishing this course is **not the end of your journey**. It’s just the beginning: don’t hesitate to explore the next section where we share curated resources to help you continue learning, including advanced topics like **MCPs** and beyond. **Thank you** for being part of this course. **We hope you liked this course as much as we loved writing it**. And don’t forget: **Keep Learning, Stay Awesome 🤗**
agents-course/units/en/unit4/conclusion.mdx/0
{ "file_path": "agents-course/units/en/unit4/conclusion.mdx", "repo_id": "agents-course", "token_count": 142 }
5
# From LLMs to AI Agents

We learned in the [first unit](https://huggingface.co/learn/agents-course/unit1/introduction) of the course that AI Agents are able to plan and make decisions. And while LLMs have enabled more natural interactions with NPCs, Agentic AI goes a step further by allowing characters to make decisions, plan actions, and adapt to changing environments.

To illustrate the difference, think of a classic RPG NPC:

- With an LLM: the NPC might answer your questions in a more natural and varied way. It's great for dialogue, but the NPC remains static; it won't act unless you do something first.
- With Agentic AI: the NPC can decide to go get help, set a trap, or avoid you entirely, even if you're not interacting with it directly.

This small shift changes everything. We're moving from scripted responders to autonomous actors within the game world.

It means NPCs can now interact directly with their environment through goal-directed behavior, which ultimately leads to more dynamic and unpredictable gameplay.

Agentic AI empowers NPCs with:

- **Autonomy**: Making independent decisions based on the game state.
- **Adaptability**: Adjusting strategies in response to player actions.
- **Persistence**: Remembering past interactions to inform future behavior.

This transforms NPCs from reactive entities (reacting to your inputs) into proactive participants in the game world, opening the door to innovative gameplay.

## The big limitation of Agents: **they're slow** (for now)

However, let's not get too optimistic just yet. Despite its potential, Agentic AI currently faces challenges in real-time applications. The reasoning and planning processes can introduce latency, making it less suitable for fast-paced games like *Doom* or *Super Mario Bros.*

Take the example of [_Claude Plays Pokémon_](https://www.twitch.tv/claudeplayspokemon). If you consider the number of tokens needed to **think**, plus the tokens needed to **act**, it becomes clear that we would need entirely different decoding strategies to make real-time play feasible.

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/claude-plays-pokemon.png" alt="Claude plays Pokémon"/>

Most games need to run at around 30 FPS, which means a real-time AI agent would have to act 30 times per second — not currently feasible with today's agentic LLMs.

However, turn-based games like *Pokémon* are ideal candidates, since they give the AI enough time to deliberate and make strategic decisions.

That's why in the next section you'll build your own AI Agent to battle in Pokémon-style turn-based combat, and even challenge it yourself.

Let's get to work!
agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx/0
{ "file_path": "agents-course/units/es/bonus-unit3/from-llm-to-agents.mdx", "repo_id": "agents-course", "token_count": 1073 }
6
# Observe: Integrating Feedback to Reflect and Adapt

Observations are **how an Agent perceives the consequences of its actions**.

They provide crucial information that feeds the Agent's thought process and guides future actions.

They are **signals from the environment**—whether data from an API, error messages, or system logs—that guide the next cycle of thought.

In the observation phase, the agent:

- **Collects Feedback:** Receives data or confirmation that its action was successful (or not).
- **Appends Results:** Integrates the new information into its existing context, effectively updating its memory.
- **Adapts its Strategy:** Uses this updated context to refine subsequent thoughts and actions.

For example, if a weather API returns *"partly cloudy, 15°C, 60% humidity"*, this observation is appended to the agent's memory (at the end of the prompt). The Agent then uses it to decide whether additional information is needed or whether it is ready to provide a final answer.

This **iterative incorporation of feedback keeps the agent dynamically aligned with its goals**, constantly learning and adjusting based on real-world results.

These observations **can take many forms**, from reading text on web pages to monitoring a robot arm's position. They can be seen as Tool "logs" that provide textual feedback on the Action's execution.

| Observation Type | Example |
|------------------|---------------------------------------------------------------------------|
| System Feedback | Error messages, success notifications, status codes |
| Data Changes | Database updates, file system modifications, state changes |
| Environmental Data | Sensor readings, system metrics, resource usage |
| Response Analysis | API responses, query results, computation outputs |
| Time-Based Events | Deadlines reached, scheduled tasks completed |

## How Are Results Appended?

After performing an action, the framework follows these steps in order:

1. **Parse the action** to identify the function(s) to call and the argument(s) to use.
2. **Execute the action.**
3. **Append the result** as an **Observation**.

---

We've now learned the Agent's Thought-Action-Observation Cycle.

If some aspects still seem a bit fuzzy, don't worry—we'll revisit and deepen these concepts in future Units.

Now, it's time to put your knowledge into practice by coding your first Agent!
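Before that, here is a toy, self-contained sketch of the loop described above. The scripted "LLM" and the weather tool are stand-ins for illustration, not a real framework API; the point is only that each tool result is appended to the prompt as an Observation before the next thought.

```python
# Toy Thought-Action-Observation loop: every tool result is appended to the
# prompt as an Observation before the next LLM call.
def get_weather(location: str) -> str:
    return f"partly cloudy, 15°C, 60% humidity in {location}"

TOOLS = {"get_weather": get_weather}

def fake_llm(prompt: str) -> str:
    # A real agent would query an LLM here; we script two turns for the demo.
    if "Observation:" not in prompt:
        return 'Action: get_weather("Paris")'
    return "Final Answer: It is partly cloudy and 15°C in Paris."

def agent_loop(task: str, max_turns: int = 3) -> str:
    prompt = f"Task: {task}\n"
    for _ in range(max_turns):
        output = fake_llm(prompt)
        if output.startswith("Final Answer:"):
            return output
        tool_name = output.split("(")[0].replace("Action: ", "")  # 1. parse the action
        arg = output.split('"')[1]
        observation = TOOLS[tool_name](arg)                        # 2. execute the action
        prompt += f"{output}\nObservation: {observation}\n"        # 3. append the Observation
    return "No final answer produced."

print(agent_loop("What is the weather in Paris?"))
```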
agents-course/units/es/unit1/observations.mdx/0
{ "file_path": "agents-course/units/es/unit1/observations.mdx", "repo_id": "agents-course", "token_count": 1208 }
7
# Table of Contents

This LlamaIndex framework material is part of unit 2 of the course. You can access unit 2 on LlamaIndex on hf.co/learn <a href="https://hf.co/learn/agents-course/unit2/llama-index/introduction">here</a>

| Title | Description |
| --- | --- |
| [Introduction](introduction.mdx) | Introduction to LlamaIndex |
| [LlamaHub](llama-hub.mdx) | LlamaHub: a registry of integrations, agents, and tools |
| [Components](components.mdx) | Components: the building blocks of workflows |
| [Tools](tools.mdx) | Tools: how to build tools in LlamaIndex |
| [Quiz 1](quiz1.mdx) | Quiz 1 |
| [Agents](agents.mdx) | Agents: how to build agents in LlamaIndex |
| [Workflows](workflows.mdx) | Workflows: a sequence of steps and events, composed of components, that run in order |
| [Quiz 2](quiz2.mdx) | Quiz 2 |
| [Conclusion](conclusion.mdx) | Conclusion |
agents-course/units/es/unit2/llama-index/README.md/0
{ "file_path": "agents-course/units/es/unit2/llama-index/README.md", "repo_id": "agents-course", "token_count": 382 }
8
# Small Quiz (ungraded) [[quiz2]]

It's time to test your understanding of the *Code Agents*, *Tool Calling Agents*, and *Tools* sections. This quiz is optional and not graded.

---

### Q1: What is the key difference between creating a tool with the `@tool` decorator and creating a subclass of `Tool` in smolagents?

Which statement best describes the distinction between these two approaches to defining tools?

<Question
choices={[
  {
    text: "Using the <code>@tool</code> decorator is mandatory for retrieval-based tools, while subclasses of <code>Tool</code> are only for text-generation tasks",
    explain: "Both approaches can be used for any kind of tool, including retrieval-based or text-generation tools.",
  },
  {
    text: "The <code>@tool</code> decorator is recommended for simple function-based tools, while subclasses of <code>Tool</code> offer more flexibility for complex functionality or custom metadata",
    explain: "This is correct. The decorator approach is simpler, but subclassing allows more customized behavior.",
    correct: true
  },
  {
    text: "<code>@tool</code> can only be used in multi-agent systems, while creating a subclass of <code>Tool</code> is for single-agent scenarios",
    explain: "All agents (single or multiple) can use either approach to define tools; no such restriction exists.",
  },
  {
    text: "Decorating a function with <code>@tool</code> replaces the need for a docstring, while subclasses must not include docstrings",
    explain: "Both methods benefit from clear docstrings. The decorator doesn't replace them, and a subclass can also have docstrings.",
  }
]}
/>

---

### Q2: How does a CodeAgent handle multi-step tasks using the ReAct (Reason + Act) approach?

Which statement correctly describes how the CodeAgent executes a series of steps to solve a task?

<Question
choices={[
  {
    text: "It passes each step to a different agent in a multi-agent system, then combines the results",
    explain: "Although multi-agent systems can distribute tasks, the CodeAgent can handle multiple steps on its own using ReAct.",
  },
  {
    text: "It stores every action in JSON for easy parsing before executing them all at once",
    explain: "This behavior matches the ToolCallingAgent's JSON-based approach, not the CodeAgent.",
  },
  {
    text: "It cycles between writing internal thoughts, generating Python code, executing the code, and logging the results until it reaches a final answer",
    explain: "Correct. This describes the ReAct pattern the CodeAgent uses, including iterative reasoning and code execution.",
    correct: true
  },
  {
    text: "It relies on a vision module to validate the code output before moving on to the next step",
    explain: "Vision capabilities are supported in smolagents, but they are not a default requirement for the CodeAgent or the ReAct approach.",
  }
]}
/>

---

### Q3: Which of the following is a primary advantage of sharing a tool on the Hugging Face Hub?

Select the best reason why a developer might upload and share their custom tool.

<Question
choices={[
  {
    text: "It automatically integrates the tool with a MultiStepAgent for retrieval-augmented generation",
    explain: "Sharing a tool does not automatically set up retrieval or multi-step logic. It only makes the tool available.",
  },
  {
    text: "It allows others to discover, reuse, and integrate your tool in their smolagents without extra setup",
    explain: "Yes. Sharing on the Hub makes tools accessible for anyone (including yourself) to download and reuse quickly.",
    correct: true
  },
  {
    text: "It guarantees that only CodeAgents can invoke the tool while ToolCallingAgents cannot",
    explain: "Both CodeAgents and ToolCallingAgents can invoke shared tools. There is no restriction by agent type.",
  },
  {
    text: "It turns your tool into a fully vision-capable function for image processing",
    explain: "Sharing tools does not alter the tool's functionality or automatically add vision capabilities.",
  }
]}
/>

---

### Q4: ToolCallingAgent differs from CodeAgent in how it executes actions. Which statement is correct?

Choose the option that accurately describes how ToolCallingAgent works.

<Question
choices={[
  {
    text: "ToolCallingAgent is only compatible with a multi-agent system, while CodeAgent can run alone",
    explain: "Either agent can be used alone or as part of a multi-agent system.",
  },
  {
    text: "ToolCallingAgent delegates all reasoning to a separate retrieval agent, then returns a final answer",
    explain: "ToolCallingAgent still uses a main LLM for reasoning; it doesn't rely solely on retrieval agents.",
  },
  {
    text: "ToolCallingAgent generates JSON instructions that specify tool calls and arguments, which are then parsed and executed",
    explain: "This is correct. ToolCallingAgent uses the JSON approach to define tool calls.",
    correct: true
  },
  {
    text: "ToolCallingAgent is only meant for single-step tasks and automatically stops after calling one tool",
    explain: "ToolCallingAgent can perform multiple steps if needed, just like CodeAgent.",
  }
]}
/>

---

### Q5: What is included in the smolagents default toolbox, and why might you use it?

Which statement best captures the purpose and contents of the default toolbox in smolagents?

<Question
choices={[
  {
    text: "It provides a set of commonly used tools such as DuckDuckGo search, the PythonInterpreterTool, and a final-answer tool for quick prototyping",
    explain: "Correct. The default toolbox contains these ready-to-use tools for easy integration when building agents.",
    correct: true
  },
  {
    text: "It only supports vision-based tasks like image classification or OCR by default",
    explain: "Although smolagents can integrate vision-based features, the default toolbox is not exclusively vision-oriented.",
  },
  {
    text: "It is intended solely for multi-agent systems and is incompatible with a single CodeAgent",
    explain: "The default toolbox can be used by any agent type, in single- and multi-agent setups alike.",
  },
  {
    text: "It adds advanced retrieval-based functionality for answering questions at scale from a vector store",
    explain: "While you can build retrieval tools, the default toolbox does not automatically provide advanced RAG features.",
  }
]}
/>

---

Congratulations on completing this quiz! 🎉 If any question gave you trouble, revisit the *Code Agents*, *Tool Calling Agents*, or *Tools* sections to strengthen your understanding. If you did well, you're on track to building robust applications with smolagents!

If the difference between the `@tool` decorator and a `Tool` subclass from Q1 still feels fuzzy, the short sketch below shows both approaches side by side.
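A rough illustration based on the patterns from the Tools section; the catering example, price, and tool name are made up:

```python
from smolagents import Tool, tool


# Approach 1: the @tool decorator — the docstring and type hints become the tool's schema.
@tool
def catering_cost(guests: int) -> float:
    """Estimates the catering cost for a party.

    Args:
        guests: Number of guests attending.
    """
    return guests * 25.0


# Approach 2: subclassing Tool — more verbose, but leaves room for custom metadata and logic.
class CateringCostTool(Tool):
    name = "catering_cost"
    description = "Estimates the catering cost for a party."
    inputs = {"guests": {"type": "integer", "description": "Number of guests attending."}}
    output_type = "number"

    def forward(self, guests: int) -> float:
        return guests * 25.0


# Either object can then be passed to an agent, e.g. CodeAgent(tools=[catering_cost], model=...).
```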
agents-course/units/es/unit2/smolagents/quiz2.mdx/0
{ "file_path": "agents-course/units/es/unit2/smolagents/quiz2.mdx", "repo_id": "agents-course", "token_count": 2768 }
9
# Claim Your Certificate 🎓

If you scored **above 30%, congratulations! 👏 You're now eligible to claim your official certificate.**

Follow the steps below to receive it:

1. Visit the [certificate page](https://huggingface.co/spaces/agents-course/Unit4-Final-Certificate).
2. **Sign in** with your Hugging Face account using the button provided.
3. **Enter your full name**. This is the name that will appear on your certificate.
4. Click **"Get My Certificate"** to verify your score and download your certificate.

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit4/congrats.png" alt="Congratulations!" />

Once you have your certificate, feel free to:

- Add it to your **LinkedIn profile** 🧑‍💼
- Share it on **X**, **Bluesky**, etc. 🎉

**Don't forget to tag [@huggingface](https://huggingface.co/huggingface). We'll be super proud and would love to cheer you on! 🤗**
agents-course/units/es/unit4/get-your-certificate.mdx/0
{ "file_path": "agents-course/units/es/unit4/get-your-certificate.mdx", "repo_id": "agents-course", "token_count": 387 }
10
# Introduction

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/pokemon_thumbnail.png" alt="Bonus Unit 3 AI in Games"/>

🎶I wanna be the very best... 🎶

Welcome to this **bonus unit**, where you'll explore the exciting intersection of **agents and video games**! 🎮🤖

Imagine a game where non-player characters (NPCs) don't simply follow scripted lines, but instead hold dynamic conversations, adapt to your strategies, and evolve as the story unfolds. This is the power of combining **LLMs and agentic behavior in games**: it opens the door to **emergent storytelling and gameplay like never before**.

In this bonus unit, you will:

- Learn how to build an agent that can fight in **Pokémon-style turn-based battles**
- Play against it, or even challenge other agents online

We've already seen [some](https://www.anthropic.com/research/visible-extended-thinking) [examples](https://www.twitch.tv/gemini_plays_pokemon) from the AI community of playing Pokémon with LLMs. In this unit you'll learn how to replicate that with your own agent, using the ideas you've picked up throughout the course.

<img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit3/claude-plays-pokemon.png" alt="Claude plays Pokémon"/>

## Want to go further?

- 🎓 **Master LLMs in video games**: Dive deeper into game development with our full [Machine Learning for Games Course](https://hf.co/learn/ml-games-course).
- 📘 **Get the Playbook**: Discover insights, ideas, and practical tips in Thomas Simonini's [AI Playbook for Game Developers](https://thomassimonini.substack.com/), which explores the future of intelligent game design.

But before we build that, let's see how LLMs are already being used in games with **four real-world examples**.
agents-course/units/fr/bonus-unit3/introduction.mdx/0
{ "file_path": "agents-course/units/fr/bonus-unit3/introduction.mdx", "repo_id": "agents-course", "token_count": 757 }
11
# Quick Quiz 1 [[quiz1]]

---

### Q1: What is an Agent?

Which of the following best describes an agent in AI?

<Question
choices={[
  {
    text: "A system that only processes static text and never interacts with its environment.",
    explain: "An agent must be able to take an action and interact with its environment.",
  },
  {
    text: "A model able to reason, plan, and use tools to interact with its environment to achieve a specific goal.",
    explain: "This definition captures the essential characteristics of an agent.",
    correct: true
  },
  {
    text: "A chatbot that answers questions without any ability to perform actions.",
    explain: "Such a chatbot lacks the ability to act, which is what distinguishes it from an agent.",
  },
  {
    text: "A digital encyclopedia that provides information but cannot perform tasks.",
    explain: "An agent actively interacts with its environment rather than simply providing static information.",
  }
]}
/>

---

### Q2: What is the role of planning in an agent?

Why does an agent need to plan before acting?

<Question
choices={[
  {
    text: "To memorize previous interactions.",
    explain: "Planning is about determining future actions, not storing past interactions.",
  },
  {
    text: "To decide on the sequence of actions and select the appropriate tools needed to fulfill the user's request.",
    explain: "Planning helps the agent determine the best steps and tools to use to complete a task.",
    correct: true
  },
  {
    text: "To generate random actions without any purpose.",
    explain: "Planning ensures the agent's actions are intentional, not random.",
  },
  {
    text: "To translate text without additional reasoning.",
    explain: "Planning is about structuring actions, not merely converting text.",
  }
]}
/>

---

### Q3: How do tools enhance an agent's capabilities?

Why are tools essential for an agent?

<Question
choices={[
  {
    text: "Tools are redundant components that do not affect the agent's performance.",
    explain: "Tools extend an agent's capabilities by allowing it to perform actions beyond text generation.",
  },
  {
    text: "Tools give the agent the ability to perform actions a text-generation model cannot do natively, such as making coffee or generating images.",
    explain: "Tools allow agents to interact with the real world and complete tasks.",
    correct: true
  },
  {
    text: "Tools are only used to store memory.",
    explain: "Tools are primarily for performing actions, not just storing data.",
  },
  {
    text: "Tools limit the agent to text-only responses.",
    explain: "On the contrary, tools allow agents to go beyond text-based responses.",
  }
]}
/>

---

### Q4: What is the main difference between actions and tools?

What is the key distinction between actions and tools?

<Question
choices={[
  {
    text: "Actions are the steps the agent takes, while tools are external resources the agent can use to carry out those actions.",
    explain: "Actions represent higher-level objectives, while tools are specific functions the agent can invoke.",
    correct: true
  },
  {
    text: "Actions and tools are the same thing and can be used interchangeably.",
    explain: "No, actions are goals or tasks, while tools are specific utilities the agent uses to achieve them.",
  },
  {
    text: "Tools are general, while actions are reserved for physical interactions only.",
    explain: "Not necessarily. Actions can involve both digital and physical tasks.",
  },
  {
    text: "Actions require LLMs, while tools do not.",
    explain: "While LLMs help determine actions, actions themselves do not depend on LLMs.",
  }
]}
/>

---

### Q5: What role do Large Language Models (LLMs) play in agents?

How do LLMs contribute to an agent's functionality?

<Question
choices={[
  {
    text: "LLMs are used as static databases that store information without processing inputs.",
    explain: "LLMs actively process text inputs and generate responses, rather than just storing information.",
  },
  {
    text: "LLMs serve as the agent's reasoning 'brain', processing text inputs to understand instructions and plan actions.",
    explain: "LLMs allow the agent to interpret, plan, and decide on next steps.",
    correct: true
  },
  {
    text: "LLMs are only used for image processing, not text.",
    explain: "LLMs work primarily with text, although they can sometimes handle multimodal inputs.",
  },
  {
    text: "LLMs are not used.",
    explain: "LLMs are an essential component of modern agents.",
  }
]}
/>

---

### Q6: Which of the following examples best illustrates an agent?

Which real-world example best illustrates an agent in action?

<Question
choices={[
  {
    text: "A static FAQ page on a website.",
    explain: "A static FAQ page does not interact dynamically with users or perform any actions.",
  },
  {
    text: "A virtual assistant like Siri or Alexa that can understand voice commands, reason about them, and perform tasks like setting reminders or sending messages.",
    explain: "This example combines reasoning, planning, and interaction with the environment.",
    correct: true
  },
  {
    text: "A basic calculator that performs arithmetic operations.",
    explain: "A calculator follows fixed rules without reasoning or planning, so it is not an agent.",
  },
  {
    text: "A video game NPC that follows a set of pre-programmed responses.",
    explain: "Unless the NPC can reason, plan, and use tools, it does not function as an agent.",
  }
]}
/>

---

Congratulations on finishing this quiz 🥳! If anything slipped past you, take the time to re-read the chapter to reinforce your knowledge. If you passed, you're ready to dive deeper into the "brain of agents": LLMs.
agents-course/units/fr/unit1/quiz1.mdx/0
{ "file_path": "agents-course/units/fr/unit1/quiz1.mdx", "repo_id": "agents-course", "token_count": 2303 }
12
# Utiliser les agents dans LlamaIndex Vous vous souvenez d'Alfred, notre agent majordome serviable d'avant ? Eh bien, il va recevoir une mise à niveau ! Maintenant que nous comprenons les outils disponibles dans LlamaIndex, nous pouvons lui donner de nouvelles capacités pour mieux nous servir. Mais avant de continuer, rappelons-nous ce qui fait fonctionner un agent comme Alfred. Dans l'Unité 1, nous avons appris que : > Un agent est un système qui exploite un modèle d'IA pour interagir avec son environnement afin d'atteindre un objectif défini par l'utilisateur. Il combine le raisonnement, la planification et l'exécution d'actions (souvent via des outils externes) pour accomplir des tâches. LlamaIndex prend en charge **trois types principaux d'agents avec raisonnement** : ![Agents](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/agents.png) 1. `Function Calling Agents` : Ceux-ci fonctionnent avec des modèles qui peuvent appeler des fonctions spécifiques. 2. `ReAct Agents` : Ceux-ci peuvent fonctionner avec n'importe quel modèle qui fait du *chat* ou des *endpoints* de texte et traiter des tâches de raisonnement complexes. 3. `Advanced Custom Agents` : Ceux-ci utilisent des méthodes plus complexes pour traiter des tâches et *workflows* plus complexes. <Tip>Trouvez plus d'informations sur les agents avancés sur <a href="https://github.com/run-llama/llama_index/blob/main/llama-index-core/llama_index/core/agent/workflow/base_agent.py"><i>BaseWorkflowAgent</i></a>.</Tip> ## Initialiser les agents <Tip> Vous pouvez suivre le code dans <a href="https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/llama-index/agents.ipynb" target="_blank">ce <i>notebook</i></a> que vous pouvez exécuter avec Google Colab. </Tip> Pour créer un agent, nous commençons par lui fournir un **ensemble de fonctions/outils qui définissent ses capacités**. Regardons comment créer un agent avec quelques outils de base. Au moment de la rédaction, l'agent utilisera automatiquement l'API d'appel de fonctions (si disponible), ou une boucle d'agent ReAct standard. Les LLM prennant en charge une API outils/fonctions sont relativement nouveaux, mais ils fournissent un moyen puissant d'appeler des outils en évitant de devoir utiliser un *prompt* spécifique et permettant au LLM de créer des appels d'outils basés sur des schémas fournis. Les agents ReAct sont également bons pour les tâches de raisonnement complexes et peuvent fonctionner avec n'importe quel LLM qui a des capacités de chat ou de complétion de texte. Ils sont plus verbeux et montrent le raisonnement derrière certaines actions qu'ils prennent. ```python from llama_index.llms.huggingface_api import HuggingFaceInferenceAPI from llama_index.core.agent.workflow import AgentWorkflow from llama_index.core.tools import FunctionTool # define example de Tool -- type annotations, noms de fonctions, et docstrings, sont tous inclus dans les schémas analysés ! def multiply(a: int, b: int) -> int: """Multiplies two integers and returns the resulting integer""" return a * b # initialisation du llm llm = HuggingFaceInferenceAPI(model_name="Qwen/Qwen2.5-Coder-32B-Instruct") # initialisation de l'agent agent = AgentWorkflow.from_tools_or_functions( [FunctionTool.from_defaults(multiply)], llm=llm ) ``` **Les agents sont sans état par défaut**, ajouter la mémorisation des interactions passées est optionnel en utilisant un objet `Context`. 
Cela pourrait être utile si vous voulez utiliser un agent qui a besoin de se souvenir des interactions précédentes, comme un *chatbot* qui maintient le contexte à travers plusieurs messages ou un gestionnaire de tâches qui a besoin de suivre les progrès au fil du temps. ```python # sans état response = await agent.run("What is 2 times 2?") # se souvenir de l'état from llama_index.core.workflow import Context ctx = Context(agent) response = await agent.run("My name is Bob.", ctx=ctx) response = await agent.run("What was my name again?", ctx=ctx) ``` Vous remarquerez que les agents dans `LlamaIndex` sont asynchrones car ils utilisent l'opérateur `await` de Python. Si vous débuté avec le code asynchrone en Python, ou avez besoin d'un rappel, LlamaIndex dispose d'un [excellent guide sur le sujet](https://docs.llamaindex.ai/en/stable/getting_started/async_python/). Maintenant que nous avons les bases, jetons un coup d'œil à comment nous pouvons utiliser des outils plus complexes dans nos agents. ## Créer des agents de RAG avec des *QueryEngineTools* **Le RAG agentique est un moyen puissant d'utiliser des agents pour répondre à des questions sur vos données.** Nous pouvons passer divers outils à Alfred pour l'aider à répondre aux questions. Cependant, au lieu de répondre automatiquement à la question au-dessus des documents, Alfred peut décider d'utiliser n'importe quel autre outil ou flux pour répondre à la question. ![Agentic RAG](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/llama-index/agentic-rag.png) Il est facile d'**envelopper `QueryEngine` comme un outil** pour un agent. Ce faisant, nous devons **définir un nom et une description**. Le LLM utilisera ces informations pour utiliser correctement l'outil. Voyons comment charger un `QueryEngineTool` en utilisant le `QueryEngine` que nous avons créé dans la [section des *components*](components). ```python from llama_index.core.tools import QueryEngineTool query_engine = index.as_query_engine(llm=llm, similarity_top_k=3) # comme indiqué dans la section Composants de LlamaIndex query_engine_tool = QueryEngineTool.from_defaults( query_engine=query_engine, name="name", description="a specific description", return_direct=False, ) query_engine_agent = AgentWorkflow.from_tools_or_functions( [query_engine_tool], llm=llm, system_prompt="You are a helpful assistant that has access to a database containing persona descriptions." ) ``` ## Créer des systèmes multi-agents La classe `AgentWorkflow` prend également en charge directement les systèmes multi-agents. En donnant à chaque agent un nom et une description, le système maintient un seul orateur actif, chaque agent ayant la capacité de passer le relais à un autre agent. En rétrécissant la portée de chaque agent, nous pouvons aider à augmenter leur précision générale lors de la réponse aux messages des utilisateurs. **Les agents dans LlamaIndex peuvent également être directement utilisés comme outils** pour d'autres agents, pour des scénarios plus complexes et personnalisés. ```python from llama_index.core.agent.workflow import ( AgentWorkflow, FunctionAgent, ReActAgent, ) # Définir quelques outils def add(a: int, b: int) -> int: """Add two numbers.""" return a + b def subtract(a: int, b: int) -> int: """Subtract two numbers.""" return a - b # Créer les configurations de l'agent # NOTE : nous pouvons utiliser FunctionAgent ou ReActAgent ici. # FunctionAgent fonctionne pour les LLM avec une API d'appel de fonction. # ReActAgent fonctionne pour n'importe quel LLM. 
calculator_agent = ReActAgent( name="calculator", description="Performs basic arithmetic operations", system_prompt="You are a calculator assistant. Use your tools for any math operation.", tools=[add, subtract], llm=llm, ) query_agent = ReActAgent( name="info_lookup", description="Looks up information about XYZ", system_prompt="Use your tool to query a RAG system to answer information about XYZ", tools=[query_engine_tool], llm=llm ) # Créer et exécuter le workflow agent = AgentWorkflow( agents=[calculator_agent, query_agent], root_agent="calculator" ) # Exécuter le système response = await agent.run(user_msg="Can you add 5 and 3?") ``` <Tip>Vous n'avez pas encore assez appris ? Il y a beaucoup plus à découvrir sur les agents et les outils dans LlamaIndex dans l'<a href="https://docs.llamaindex.ai/en/stable/examples/agent/agent_workflow_basic/">Introduction de base à <i>AgentWorkflow</i></a> ou le <a href="https://docs.llamaindex.ai/en/stable/understanding/agent/">Guide d'apprentissage sur les agents</a>, où vous pouvez lire plus sur le <i>streaming</i>, la sérialisation de contexte, et l'humain dans la boucle !</Tip> Maintenant que nous comprenons les bases des agents et des outils dans LlamaIndex, voyons comment nous pouvons utiliser LlamaIndex pour **créer des *workflows* configurables et gérables !**
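Avant de passer à la suite, voici une dernière esquisse (facultative) qui combine ce que nous venons de voir : l'objet `Context` présenté plus haut peut aussi être utilisé avec le système multi-agents ci-dessus afin de conserver la mémoire entre plusieurs exécutions. Les requêtes utilisées ici sont de simples exemples.

```python
from llama_index.core.workflow import Context

# Esquisse : réutiliser un Context avec l'AgentWorkflow multi-agents défini ci-dessus
ctx = Context(agent)  # stocke l'état et l'historique entre les appels

# Première requête : l'agent "calculator" répond
response = await agent.run(user_msg="Can you add 5 and 3?", ctx=ctx)
print(response)

# Seconde requête : grâce au contexte, l'agent peut s'appuyer sur l'échange précédent
response = await agent.run(user_msg="Now subtract 2 from that result.", ctx=ctx)
print(response)
```

Cette esquisse ne fait que réutiliser les API déjà montrées dans cette page (`Context` et `run(..., ctx=ctx)`) ; reportez-vous à la documentation de LlamaIndex pour les détails de la gestion d'état.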
agents-course/units/fr/unit2/llama-index/agents.mdx/0
{ "file_path": "agents-course/units/fr/unit2/llama-index/agents.mdx", "repo_id": "agents-course", "token_count": 2938 }
13
<CourseFloatingBanner classNames="absolute z-10 right-0 top-0" notebooks={[ {label: "Google Colab", value: "https://colab.research.google.com/#fileId=https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/smolagents/retrieval_agents.ipynb"}, ]} askForHelpUrl="http://hf.co/join/discord" /> # Construction de systèmes de RAG agentiques <Tip> Vous pouvez suivre le code dans <a href="https://huggingface.co/agents-course/notebooks/blob/main/fr/unit2/smolagents/retrieval_agents.ipynb" target="_blank">ce <i>notebook</i></a> que vous pouvez exécuter avec Google Colab. </Tip> Les systèmes de RAG (*Retrieval Augmented Generation*) combinent les capacités de récupération de données et de modèles de génération pour fournir des réponses contextuelles. Par exemple, la requête d'un utilisateur est transmise à un moteur de recherche puis les résultats récupérés sont fournis au LLM avec la requête. Le modèle génère ensuite une réponse basée sur la requête et les informations récupérées. Le RAG agentique (*Agentic RAG*) étend les systèmes de RAG traditionnels en **combinant des agents autonomes avec une récupération dynamique des connaissances**. Alors que les systèmes de RAG traditionnels utilisent un LLM pour répondre aux requêtes basées sur des données récupérées, le RAG agentique **permet un contrôle intelligent des processus de récupération et de génération**, améliorant l'efficacité et la précision. Les systèmes de RAG traditionnels font face à des limitations clés, telles que **s'appuyer sur une seule étape de récupération** et se concentrer sur la similarité sémantique directe avec la requête de l'utilisateur, ce qui peut négliger des informations pertinentes. Le RAG agentique résout ces problèmes en permettant à l'agent de formuler de manière autonome des requêtes, de critiquer les résultats récupérés et de mener plusieurs étapes de récupération pour une sortie plus adaptée et complète. ## Récupération de base avec DuckDuckGo Construisons un agent simple qui peut rechercher sur le web en utilisant DuckDuckGo. Cet agent récupérera des informations et synthétisera des réponses pour répondre aux requêtes. Avec le RAG agentique, l'agent d'Alfred peut : * Rechercher des dernières tendances en matière de fêtes de super-héros * Affiner les résultats pour inclure des éléments luxueux * Synthétiser les informations en un plan complet Voici comment l'agent d'Alfred peut y parvenir : ```python from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel # Initialiser l'outil de recherche search_tool = DuckDuckGoSearchTool() # Initialiser le modèle model = InferenceClientModel() agent = CodeAgent( model=model, tools=[search_tool], ) # Exemple d'utilisation response = agent.run( "Search for luxury superhero-themed party ideas, including decorations, entertainment, and catering." ) print(response) ``` L'agent suit ce processus : 1. **Analyse la requête :** identifie les éléments clés de la requête - organisation de fêtes de luxe sur le thème des super-héros, en mettant l'accent sur la décoration, les divertissements et la restauration. 2. **Effectue la récupération :** exploite DuckDuckGo pour rechercher les informations les plus pertinentes et à jour, en s'assurant qu'elles correspondent aux préférences d'Alfred pour un événement luxueux. 3. **Synthétise l'information :** après avoir rassemblé les résultats, l'agent les traite en un plan cohérent et actionnable pour Alfred, couvrant tous les aspects de la fête. 4. 
**Stocke pour référence future :** stocke les informations récupérées pour un accès facile lors de la planification d'événements futurs, optimisant l'efficacité des tâches ultérieures. ## Outil de base de connaissances personnalisé Pour des tâches spécialisées, une base de connaissances personnalisée peut être inestimable. Créons un outil qui interroge une base de données vectorielle de documentation technique ou de connaissances spécialisées. En utilisant la recherche sémantique, l'agent peut trouver les informations les plus pertinentes pour les besoins d'Alfred. Une base de données vectorielle stocke des représentations numériques (*embeddings*) de texte ou d'autres données, créées par des modèles d'apprentissage automatique. Elle permet la recherche sémantique en identifiant des significations similaires dans un espace de haute dimension. Cette approche combine des connaissances prédéfinies avec une recherche sémantique pour fournir des solutions contextuelles pour la planification d'événements. Avec un accès à des connaissances spécialisées, Alfred peut perfectionner chaque détail de la fête. Dans cet exemple, nous allons créer un outil qui récupère des idées de planification de fête à partir d'une base de connaissances personnalisée. Nous utiliserons un modèle BM25 pour rechercher dans la base de connaissances et retourner les meilleurs résultats, et `RecursiveCharacterTextSplitter` pour diviser les documents en morceaux plus petits pour une recherche plus efficace. ```python from langchain.docstore.document import Document from langchain.text_splitter import RecursiveCharacterTextSplitter from smolagents import Tool from langchain_community.retrievers import BM25Retriever from smolagents import CodeAgent, InferenceClientModel class PartyPlanningRetrieverTool(Tool): name = "party_planning_retriever" description = "Utilise la recherche sémantique pour trouver des idées pertinentes pour l'organisation de la fête d'Alfred au Manoir Wayne sur le thème des super-héros." inputs = { "query": { "type": "string", "description": "La requête à effectuer. 
Celle-ci doit être liée à l'organisation de fêtes ou à des thèmes de super-héros.", } } output_type = "string" def __init__(self, docs, **kwargs): super().__init__(**kwargs) self.retriever = BM25Retriever.from_documents( docs, k=5 # Récupérer les 5 meilleurs documents ) def forward(self, query: str) -> str: assert isinstance(query, str), "Votre requête doit être une chaîne de caractères" docs = self.retriever.invoke( query, ) return "\nIdées récupérées :\n" + "".join( [ f"\n\n===== Idée {str(i)} =====\n" + doc.page_content for i, doc in enumerate(docs) ] ) # Simuler une base de connaissances sur la planification de la fête party_ideas = [ {"text": "Un bal masqué sur le thème des super-héros avec un décor luxueux, notamment des accents dorés et des rideaux de velours.", "source": "Idées de fête 1"}, {"text": "Engagez un DJ professionnel qui peut jouer de la musique sur le thème des super-héros comme Batman et Wonder Woman.", "source": "Idées de divertissement"}, {"text": "Pour la restauration, servez des plats portant le nom de super-héros, comme 'Le smoothie vert de Hulk' et 'Le steak de puissance d'Iron Man'", "source": "Idées de traiteur"}, {"text": "Décorez le lieu avec des logos de super-héros emblématiques et des projections de Gotham et d'autres villes de super-héros.", "source": "Idées de décoration"}, {"text": "Expériences interactives avec la VR où les invités peuvent participer à des simulations de super-héros ou à des jeux à thème.", "source": "Idées de divertissement"} ] source_docs = [ Document(page_content=doc["text"], metadata={"source": doc["source"]}) for doc in party_ideas ] # Découper les documents en morceaux plus petits pour une recherche plus efficace text_splitter = RecursiveCharacterTextSplitter( chunk_size=500, chunk_overlap=50, add_start_index=True, strip_whitespace=True, separators=["\n\n", "\n", ".", " ", ""], ) docs_processed = text_splitter.split_documents(source_docs) # Créer l'outil de récupération party_planning_retriever = PartyPlanningRetrieverTool(docs_processed) # Initialiser l'agent agent = CodeAgent(tools=[party_planning_retriever], model=InferenceClientModel()) # Exemple d'utilisation response = agent.run( "Trouver des idées pour une fête de luxe sur le thème des super-héros, y compris des options de divertissement, de restauration et de décoration." ) print(response) ``` Cet agent amélioré peut : 1. D'abord vérifier la documentation pour des informations pertinentes 2. Combiner les informations de la base de connaissances 3. Maintenir le contexte de conversation en mémoire ## Capacités de récupération améliorées Lors de la construction de systèmes de RAG agentiques, l'agent peut employer des stratégies sophistiquées comme : 1. **La reformulation de requête :** Au lieu d'utiliser la requête brute de l'utilisateur, l'agent peut élaborer des termes de recherche optimisés qui correspondent mieux aux documents cibles 2. **La décomposition de requête :** Au lieu d'utiliser directement la requête de l'utilisateur, si elle contient plusieurs éléments d'information à interroger, elle peut être décomposée en plusieurs requêtes 3. **L'expansion de requête :** Similaire à la reformulation de requête mais effectuée plusieurs fois pour formuler la requête de plusieurs façons et les interroger toutes 4. **Le reclassement :** Utiliser des [*Cross-Encoders*](https://huggingface.co/models?pipeline_tag=text-ranking&sort=trending) pour attribuer des scores de pertinence sémantique plus complets entre les documents récupérés et la requête 5. 
**La récupération multi-étapes :** L'agent peut effectuer plusieurs recherches, en utilisant les résultats initiaux pour informer les requêtes suivantes 6. **L'intégration de sources :** Les informations peuvent être combinées à partir de plusieurs sources comme la recherche web et la documentation locale 7. **La validation des résultats :** Le contenu récupéré peut être analysé pour sa pertinence et son exactitude avant d'être inclus dans les réponses Les systèmes de RAG agentiques efficaces nécessitent une considération attentive de plusieurs aspects clés. L'agent **devrait sélectionner entre les outils disponibles en fonction du type de requête et du contexte**. Les systèmes de mémoire aident à maintenir l'historique de conversation et évitent les récupérations répétitives. Avoir des stratégies de secours garantit que le système peut toujours fournir de la valeur même lorsque les méthodes de récupération principales échouent. De plus, l'implémentation d'étapes de validation aide à assurer l'exactitude et la pertinence des informations récupérées. ## Ressources - [Agentic RAG : boostez votre RAG avec la reformulation de requête et l'auto-requête ! 🚀](https://huggingface.co/learn/cookbook/agent_rag) - Recette pour développer un système de RAG agentique en utilisant `smolagents`.
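En complément des ressources ci-dessus, voici une esquisse minimale illustrant l'« intégration de sources » évoquée au point 6 : un seul agent dispose à la fois de l'outil de recherche web et du retriever personnalisé, et choisit librement l'outil le plus adapté à la requête. On suppose ici que `party_planning_retriever` a été créé comme dans l'exemple précédent ; la requête est un simple exemple.

```python
from smolagents import CodeAgent, DuckDuckGoSearchTool, InferenceClientModel

# Hypothèse : party_planning_retriever est l'outil défini plus haut
search_tool = DuckDuckGoSearchTool()
model = InferenceClientModel()

# Un seul agent combinant la recherche web et la base de connaissances locale
agent = CodeAgent(
    tools=[search_tool, party_planning_retriever],
    model=model,
)

response = agent.run(
    "Compare nos idées internes de fête de super-héros avec les tendances actuelles trouvées sur le web."
)
print(response)
```

L'agent peut alors enchaîner les deux outils au sein d'un même raisonnement, et retomber sur la recherche web si la base de connaissances ne suffit pas, ce qui rejoint l'idée de stratégie de secours mentionnée plus haut.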
agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx/0
{ "file_path": "agents-course/units/fr/unit2/smolagents/retrieval_agents.mdx", "repo_id": "agents-course", "token_count": 3755 }
14
# Introduction à l'unité finale [[introduction]] <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit4/thumbnail.jpg" alt="AI Agents Course thumbnail" width="100%"/> Bienvenue dans l'unité finale du cours ! 🎉 Jusqu'à présent, vous avez **acquis de solides connaissances sur les agents**, depuis la compréhension de leurs composants jusqu'à la création de vos propres agents. Avec ces connaissances, vous êtes maintenant prêt à en **construire de puissants** et à rester à jour avec les dernières avancées dans ce domaine en rapide évolution. Cette unité consiste entièrement à appliquer ce que vous avez appris. C'est votre **projet pratique final** et le compléter est votre ticket pour obtenir le **certificat du cours**. ## Quel est le défi ? Vous allez créer votre propre agent et **évaluer ses performances en utilisant un sous-ensemble du [*benchmark* GAIA](https://huggingface.co/spaces/gaia-benchmark/leaderboard)**. Pour réussir le cours, votre agent doit obtenir un score de **30 % ou plus** sur le *benchmark*. Atteignez cet objectif, et vous obtiendrez votre **Certificat de Réussite**, reconnaissant officiellement votre expertise. 🏅 De plus, voyez comment vous vous classez face à vos pairs ! Un **[Classement des Étudiants](https://huggingface.co/spaces/agents-course/Students_leaderboard)** dédié est disponible pour que vous puissiez soumettre vos scores et voir les progrès de la communauté. > **🚨 Attention : Unité Avancée et Pratique** > > Veuillez noter que cette unité adopte une approche plus pratique. La réussite dans cette section nécessitera **des connaissances en programmation plus avancées** et vous demandera de naviguer dans des tâches avec **moins de conseils explicites** que dans les parties précédentes du cours. Cela vous semble excitant ? Commençons ! 🚀
agents-course/units/fr/unit4/introduction.mdx/0
{ "file_path": "agents-course/units/fr/unit4/introduction.mdx", "repo_id": "agents-course", "token_count": 648 }
15
# 셀프 체크! (업데이트됨) [[quiz2]] 뭐라고요?! 또 퀴즈라고요? 우리도 알아요... 😅 하지만 걱정 마세요! 이 퀴즈는 **방금 배운 핵심 개념을 확실히 이해**하는 데 도움을 주기 위해 준비되었습니다. 이번 퀴즈에서는 대규모 언어 모델(LLM), 메시지 시스템, 도구(tool) 등 AI 에이전트를 이해하고 구축하는 데 필수적인 요소들을 다룹니다. ### Q1: AI 도구(tool)를 가장 잘 설명하는 것은 무엇인가요? [[q1-which-of-the-following-best-describes-an-ai-tool]] <Question choices={[ { text: "텍스트 응답만 생성하는 프로세스", explain: "", }, { text: " 에이전트가 특정 작업을 수행하고 외부 환경과 상호작용할 수 있도록 하는 실행 가능한 프로세스 또는 외부 API", explain: "도구는 에이전트가 특정 작업을 수행하고 외부 환경과 상호작용할 수 있도록 해주는 기능입니다.", correct: true }, { text: "에이전트의 대화를 저장하는 기능", explain: "", } ]} /> --- ### Q2: AI 에이전트는 환경에서 "행동(act)"하기 위해 도구를 어떻게 활용하나요? [[q2-how-do-ai-agents-use-tools-as-a-form-of-acting-in-an-environment]] <Question choices={[ { text: "사용자의 명령을 수동적으로 기다린다", explain: "", }, { text: "미리 프로그래밍된 응답만 사용한다", explain: "", }, { text: "LLM이 적절할 때 도구 호출 코드를 생성하도록 요청하고, 모델을 대신하여 도구를 실행한다", explain: "에이전트는 도구를 호출하고, 이를 통해 얻은 정보를 바탕으로 계획을 세우거나 재조정할 수 있습니다.", correct: true } ]} /> --- ### Q3: 대규모 언어 모델(LLM)이란? [[q3-what-is-a-large-language-model-llm]] <Question choices={[ { text: "사전 정의된 답변을 제공하는 단순한 챗봇", explain: "", }, { text: "방대한 텍스트 데이터로 학습된 딥러닝 모델로, 인간과 유사한 언어를 이해하고 생성할 수 있다", explain: "", correct: true }, { text: "엄격하게 사전 정의된 명령만 따르는 규칙 기반 AI", explain: "", } ]} /> --- ### Q4: LLM에서 특수 토큰(special tokens)의 역할을 가장 잘 설명하는 것은 무엇인가요? [[q4-which-of-the-following-best-describes-the-role-of-special-tokens-in-llms]] <Question choices={[ { text: "텍스트 생성 품질을 향상시키기 위해 모델의 어휘에 추가된 단어들이다", explain: "", }, { text: "문장 종료(EOS) 표시나 챗봇 모델에서 서로 다른 메시지 역할을 구분하는 기능을 한다", explain: "", correct: true }, { text: "응답의 다양성을 높이기 위해 무작위로 삽입되는 토큰이다", explain: "", } ]} /> --- ### Q5: AI 챗봇 모델은 사용자 메시지를 내부적으로 어떻게 처리하나요? [[q5-how-do-ai-chat-models-process-user-messages-internally]] <Question choices={[ { text: "사용자 메시지를 변형 없이 구조화된 명령으로 직접 해석한다", explain: "", }, { text: "시스템 메시지, 사용자 메시지, 어시스턴트 메시지를 구조화된 하나의 프롬프트로 변환하여 처리한다", explain: "", correct: true }, { text: "이전 대화를 기반으로 무작위로 응답을 생성한다", explain: "", } ]} /> --- 이해되셨나요? 좋습니다! 이제 **전체 에이전트의 흐름을 살펴보고, 직접 AI 에이전트를 만들어 봅시다!**
agents-course/units/ko/unit1/quiz2.mdx/0
{ "file_path": "agents-course/units/ko/unit1/quiz2.mdx", "repo_id": "agents-course", "token_count": 2638 }
16
# Что такое LLM? <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-check-1.jpg" alt="Unit 1 planning"/> В предыдущем разделе мы узнали, что каждый агент нуждается в ** AI Модели как в ядре**, и что LLM являются наиболее распространенным типом AI моделей использующихся для этой цели. Теперь мы узнаем, что такое LLM и как они наделяют агентов мощью. В этом разделе представлено краткое техническое объяснение использования LLM. Если вы хотите погрузиться глубже, вы можете ознакомиться с нашим <a href="https://huggingface.co/learn/nlp-course/chapter1/1" target="_blank">бесплатным курсом по Обработке Естественного Языка (Natural Language Processing).</a>. ## ## Что такое Большая Языковая Модель? Большая Языковая Модель (Large Language Model, LLM) - это тип AI модели, которая превосходно работает с **пониманием и генерированием человеческого языка**. Они обучаются на огромных объемах текстовых данных, что позволяет им изучать шаблоны, структуру и даже нюансы языка. Эти модели обычно состоят из многих миллионов параметров. Большинство LLM в настоящее время **построены на архитектуре Transformer** - архитектуре глубокого обучения, основанной на алгоритме «Внимания» («Attention» algorithm), который стал вызывать значительный интерес после выхода BERT от Google в 2018 году. <figure> <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/transformer.jpg" alt="Transformer"/> <figcaption>Оригинальная архитектура трансформера выглядела следующим образом: слева располагался кодер, справа - декодер. </figcaption> </figure> Существует 3 типа трансформеров: 1. **Энкодеры (кодеры)** Трансформер на основе кодировщика принимает на вход текст (или другие данные) и выдает плотное векторное представление (или эмбеддинг) этого текста. - **Пример**: BERT от Google - **Примеры использования**: классификация текста, семантический поиск, Распознавание Именованных Сущностей (Named Entity Recognition, NER) - **Типичный размер**: миллионы параметров 2. **Декодеры**. Трансформер на основе декодера фокусируется **на генерации новых токенов для завершения последовательности, по одному токену за раз**. - **Пример**: Llama из Meta - **Примеры использования**: Генерация текста, чат-боты, генерация кода - **Типичный размер**: Миллиарды (в американском понимании, т.е. 10^9) параметров 3. **Seq2Seq (энкодер-декодер)**. Трансформер преобразующие последовательности в последовательность (sequence-to-sequence) объединяет в себе энкодер и декодер. Сначала энкодер преобразует входную последовательность в контекстное представление, а затем декодер генерирует выходную последовательность. - **Пример**: T5, BART - **Примеры использования**: Перевод, обобщение, перефразирование. - **Типичный размер**: Миллионы параметров Хотя Большие Языковые Модели (Large Language Model) бывают разных форм, LLM обычно представляют собой модели на основе декодера с миллиардами параметров. Вот некоторые из наиболее известных LLM: | **Модель** | **Провайдер** | |-----------------------------------|-------------------------------------------| | **Deepseek-R1** | DeepSeek | | **GPT4** | OpenAI | | **Llama 3** | Meta (Facebook AI Research) | | **SmolLM2** | Hugging Face | | **Gemma** | Google | | **Mistral** | Mistral | Принцип, лежащий в основе LLM, прост, но очень эффективен: **его цель - предсказать следующий токен, учитывая последовательность предыдущих токенов**. "Токен" - это единица информации, с которой работает LLM. 
Вы можете воспринимать "токен" как "слово", но по соображениям эффективности LLM не используют целые слова. Например, если в английском языке насчитывается около 600 000 слов, то в LLM может быть около 32 000 токенов (как в случае с Llama 2). Токенизация часто работает по подсловам, которые можно комбинировать. Например, рассмотрим, как токены "interest" и "ing" могут быть объединены в слово "interesting", или "ed" может быть добавлено в слово "interested". Вы можете поэкспериментировать с различными токенами в интерактивной демонстрации ниже: <iframe src="https://agents-course-the-tokenizer-playground.static.hf.space" frameborder="0" width="850" height="450" ></iframe> Каждая LLM имеет несколько **специальных токенов**, специфичных для данной модели. LLM использует эти токены для открытия и закрытия структурированных компонентов своей генерации. Например, чтобы указать начало или конец последовательности, сообщения или ответа. Кроме того, инструкции для ввода (input prompts), которые мы передаем модели, также структурированы с помощью специальных токенов. Наиболее важным из них является токен **Конец последовательности** (EOS). Формы специальных токенов у разных провайдеров моделей весьма разнообразны. Таблица ниже иллюстрирует разнообразие специальных токенов. <table> <thead> <tr> <th><strong>Model</strong></th> <th><strong>Provider</strong></th> <th><strong>EOS Token</strong></th> <th><strong>Functionality</strong></th> </tr> </thead> <tbody> <tr> <td><strong>GPT4</strong></td> <td>OpenAI</td> <td><code>&lt;|endoftext|&gt;</code></td> <td>End of message text</td> </tr> <tr> <td><strong>Llama 3</strong></td> <td>Meta (Facebook AI Research)</td> <td><code>&lt;|eot_id|&gt;</code></td> <td>End of sequence</td> </tr> <tr> <td><strong>Deepseek-R1</strong></td> <td>DeepSeek</td> <td><code>&lt;|end_of_sentence|&gt;</code></td> <td>End of message text</td> </tr> <tr> <td><strong>SmolLM2</strong></td> <td>Hugging Face</td> <td><code>&lt;|im_end|&gt;</code></td> <td>End of instruction or message</td> </tr> <tr> <td><strong>Gemma</strong></td> <td>Google</td> <td><code>&lt;end_of_turn&gt;</code></td> <td>End of conversation turn</td> </tr> </tbody> </table> <Tip> Мы не ожидаем, что вы запомните эти специальные токены, но важно оценить их разнообразие и роль, которую они играют в генерации текста LLM. Если вы хотите узнать больше о специальных токенах, вы можете посмотреть конфигурацию модели в ее репозитории на Hugging Face Hub. Например, вы можете найти специальные токены модели SmolLM2 в ее <a href="https://huggingface.co/HuggingFaceTB/SmolLM2-135M-Instruct/blob/main/tokenizer_config.json">tokenizer_config.json</a>. </Tip> ## Понимание предсказания следующего токена. Считается, что LLM - это **авторегрессия**, то есть **выход одного прохода становится входом для следующего**. Этот цикл продолжается до тех пор, пока модель не предскажет, что следующим токеном будет токен EOS, на котором модель может остановиться. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AutoregressionSchema.gif" alt="Визуализация процесса авторегрессионного декодирования" width="60%"> Другими словами, LLM будет декодировать текст до тех пор, пока он не достигнет EOS. Но что происходит во время одного цикла декодирования? 
Хотя полное описание процесса может быть довольно техническим для целей изучения агентов, вот краткий обзор: - После того как входной текст **токинизирован**, модель вычисляет представление последовательности, которое содержит информацию о значении и положении каждого токена во входной последовательности. - Это представление поступает в модель, которая возвращает оценки, оценивающие вероятность для каждого токена из ее словаря быть следующим в последовательности. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/DecodingFinal.gif" alt="Визуализация процесса декодирования" width="60%"> Основываясь на этих оценках, у нас есть несколько стратегий выбора токенов для завершения предложения. - Самой простой стратегией декодирования будет всегда брать токен с максимальным количеством баллов. Вы можете самостоятельно взаимодействовать с процессом декодирования с помощью SmolLM2 в этом Пространстве (помните, что она декодирует до достижения токена **EOS**, которым является **<|im_end|>** для этой модели): <iframe src="https://agents-course-decoding-visualizer.hf.space" frameborder="0" width="850" height="450" ></iframe> - Но есть и более продвинутые стратегии декодирования. Например, *лучевой поиск (beam search)* исследует несколько последовательностей-кандидатов, чтобы найти ту, которая имеет максимальную общую оценку - даже если некоторые отдельные токены имеют более низкие оценки. <iframe src="https://agents-course-beam-search-visualizer.hf.space" frameborder="0" width="850" height="450" ></iframe> Если вы хотите узнать больше о декодировании, вы можете изучить [курс по NLP](https://huggingface.co/learn/nlp-course). ## Внимание - это все, что вам нужно Ключевым аспектом архитектуры трансформера является **Внимание (Attention)**. При предсказании следующего слова, не все слова в предложении одинаково важны; такие слова, как "France" и "capital" в предложении *"The capital of France is ..."*, несут наибольшую смысловую нагрузку. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/AttentionSceneFinal.gif" alt="Визуализация механизма Внимания" width="60%"> Этот процесс определения наиболее релевантных слов для предсказания следующего токена оказался невероятно эффективным. Хотя основной принцип работы LLM - предсказание следующего токена - остается неизменным со времен GPT-2, были достигнуты значительные успехи в масштабировании нейронных сетей и обеспечении работы механизма внимания для все более длинных последовательностей. Если вы взаимодействовали с LLM, вы, вероятно, знакомы с термином *длина контекста (context length)*, который обозначает максимальное количество токенов, которые может обработать LLM, и максимальную _продолжительность внимания (attention span)_, которой она обладает. ## Подсказки для LLM очень важны Учитывая, что единственная задача LLM - предсказать следующий токен, просматривая каждый входной токен, и выбрать "важные" токены, формулировка вашей входной последовательности очень важна. Входная последовательность, которую вы передаете LLM, называется _подсказкой (prompt)_. Тщательное проектирование подсказки облегчает **направление генерации LLM к желаемому результату**. ## Как обучаются LLM? Модели LLM обучаются на больших массивах данных текста, где они учатся предсказывать следующее слово в последовательности с помощью самообучения (self-supervised) или маскированного языкового моделирования (masked language modeling). 
В результате такого обучения без учителя модель изучает структуру языка и **основные закономерности в тексте, что позволяет модели обобщать ранее не встречавшиеся данные**. После такого начального _предварительного_ обучения LLM могут быть дообучены для выполнения конкретных задач методами обучения с учителем. Например, некоторые модели обучаются разговорным структурам или использованию инструментов, в то время как другие сосредоточены на классификации или генерации кода. ## Как я могу использовать LLM? У вас есть два основных варианта: 1. **Запустить локально** (если у вас достаточно аппаратных ресурсов). 2. **Использовать облако/API** (например, через Hugging Face Serverless Inference API). На протяжении всего курса мы будем использовать модели через API на Hugging Face Hub. Позже мы изучим, как запустить эти модели локально на вашем оборудовании. ## Как LLM используются в AI Агентах? LLM являются ключевым компонентом агентов искусственного интеллекта, **обеспечивая основу для понимания и генерации человеческого языка**. Они могут интерпретировать инструкции пользователя, поддерживать контекст в разговоре, определять план и решать, какие инструменты использовать. Мы рассмотрим эти шаги более подробно в данном Разделе, а пока вам нужно понять, что LLM - это **мозг агента**. --- Это был большой объем информации! Мы рассмотрели основы того, что такое LLM, как они функционируют и какова их роль в работе AI агентов. Если вы хотите еще глубже погрузиться в увлекательный мир языковых моделей и обработки естественного языка, не поленитесь ознакомиться с нашим <a href="https://huggingface.co/learn/nlp-course/chapter1/1" target="_blank">бесплатным курсом по NLP</a>. Теперь, когда мы поняли, как работают LLM, пришло время увидеть **как LLM структурируют свою генерацию в разговорном контексте**. Чтобы запустить <a href="https://huggingface.co/agents-course/notebooks/blob/main/unit1/dummy_agent_library.ipynb" target="_blank">этот блокнот</a>, **вам понадобится токен Hugging Face** который вы можете получить из <a href="https://hf.co/settings/tokens" target="_blank">https://hf.co/settings/tokens</a>. Более подробную информацию о том, как запустить блокноты Jupyter, изучите <a href="https://huggingface.co/docs/hub/notebooks">Блокноты Jupyter на Hugging Face Hub</a>. Вам также необходимо запросить доступ к <a href="https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct" target="_blank">модели Meta Llama</a>.
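В качестве иллюстрации варианта «использовать облако/API», упомянутого выше, ниже приведён минимальный набросок обращения к LLM через Hugging Face Inference API с помощью библиотеки `huggingface_hub`. Это лишь предположительный пример: имя модели и текст подсказки выбраны для демонстрации, а для запуска понадобится токен Hugging Face, о котором говорилось выше.

```python
# Минимальный набросок: запрос к LLM через Inference API (нужен токен Hugging Face;
# он берётся из переменной окружения HF_TOKEN или после `huggingface-cli login`)
from huggingface_hub import InferenceClient

client = InferenceClient(model="meta-llama/Llama-3.2-3B-Instruct")

# Модель продолжает подсказку, предсказывая токен за токеном,
# пока не сгенерирует токен EOS или не достигнет лимита max_new_tokens
output = client.text_generation(
    "The capital of France is",
    max_new_tokens=20,
)
print(output)
```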
agents-course/units/ru-RU/unit1/what-are-llms.mdx/0
{ "file_path": "agents-course/units/ru-RU/unit1/what-are-llms.mdx", "repo_id": "agents-course", "token_count": 11630 }
17
# Giới thiệu về Agent <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/thumbnail.jpg" alt="Thumbnail"/> Chào mừng bạn đến với chương đầu tiên, nơi **bạn sẽ xây dựng nền tảng vững chắc về nguyên lý cơ bản của AI agent** bao gồm: - **Hiểu về Agent** - Agent là gì và hoạt động thế nào? - Cách Agent đưa ra quyết định thông qua lập luận và lập kế hoạch? - **Vai trò của Mô hình ngôn ngữ lớn (LLM) trong Agent** - Cách LLM đóng vai trò "bộ não" của Agent. - Cách LLM tổ chức hội thoại qua hệ thống Messages. - **Công cụ và hành động** - Cách Agent sử dụng Công cụ (Tools) bên ngoài để tương tác với môi trường. - Cách xây dựng và tích hợp Tools cho Agent của bạn. - **Quy trình hoạt động của Agent:** - *Tư duy (Thought)* → *Hành động (Action)* → *Quan sát (Observation)*. Sau khi khám phá các chủ đề này, **bạn sẽ xây dựng Agent đầu tiên** bằng `smolagents`! Agent của bạn tên Alfred sẽ xử lý một nhiệm vụ đơn giản và minh họa cách áp dụng các khái niệm vào thực tế. Bạn thậm chí sẽ học cách **đăng Agent lên Hugging Face Spaces** để chia sẻ với bạn bè và đồng nghiệp. Cuối chương này, bạn sẽ làm một bài Kiểm tra nhanh. Hoàn thành thành công, bạn sẽ **nhận được chứng chỉ đầu tiên**: 🎓 Chứng chỉ Nguyên lý cơ bản về Agent. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/certificate-example.jpg" alt="Certificate Example"/> Đây là **điểm khởi đầu quan trọng**, đặt nền móng để hiểu về Agent trước khi chuyển sang các chủ đề nâng cao. <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-no-check.jpg" alt="Unit 1 planning"/> Đây là một chương lớn, vì vậy hãy **dành thời gian** và đừng ngại xem lại các phần này khi cần. Sẵn sàng chưa? Cùng bắt đầu thôi! 🚀
agents-course/units/vi/unit1/introduction.mdx/0
{ "file_path": "agents-course/units/vi/unit1/introduction.mdx", "repo_id": "agents-course", "token_count": 1317 }
18
# 简介 (Introduction) ![附加单元1缩略图](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/bonus-unit1/thumbnail.jpg) 欢迎来到第一个**附加单元**,在这里你将学习如何**为函数调用 (function calling) 微调大语言模型 (Large Language Model, LLM)**。 在大语言模型领域,函数调用正在迅速成为一项*必须掌握*的技术。 这个想法是,不同于我们在第1单元中仅依赖基于提示的方法,函数调用在训练阶段就训练你的模型**采取行动和解释观察结果**,使你的人工智能更加健壮。 > **我应该什么时候学习这个附加单元?** > > 这个部分是**可选的**,比第1单元更高级,所以不要犹豫,你可以现在就学习这个单元,或者在通过本课程提高了知识水平后再回来学习。 > > 但不用担心,这个附加单元设计时包含了你需要的所有信息,所以即使你还没有学习微调的内部工作原理,我们也会带你了解为函数调用微调模型的每个核心概念。 让你能够跟上这个附加单元的最佳方式是: 1. 了解如何使用 Transformers 微调大语言模型,如果你还不了解,[请查看这里](https://huggingface.co/learn/nlp-course/chapter3/1?fw=pt) 2. 了解如何使用 `SFTTrainer` 来微调我们的模型,要了解更多信息,[请查看这份文档](https://huggingface.co/learn/nlp-course/en/chapter11/1) --- ## 你将学到什么 1. **函数调用 (Function Calling)** 现代大语言模型如何有效地构建对话,使它们能够触发**工具 (Tools)**。 2. **LoRA(低秩适应,Low-Rank Adaptation)** 一种**轻量级且高效**的微调方法,减少计算和存储开销。LoRA 使大型模型的训练变得*更快、更便宜、更容易*部署。 3. **函数调用模型中的思考 → 行动 → 观察循环(Thought → Act → Observe Cycle)** 一种简单但强大的方法,用于构建模型如何决定何时(以及如何)调用函数、跟踪中间步骤以及解释来自外部工具或API的结果。 4. **新的特殊词元 (Special Tokens)** 我们将介绍**特殊标记**,帮助模型区分: - 内部"思维链"推理 - 外部函数调用 - 来自外部工具的响应 --- 在完成这个附加单元后,你将能够: - **理解**工具相关的 API 内部工作原理。 - 使用 LoRA 技术**微调**模型。 - **实现**和**修改**思考 → 行动 → 观察循环,以创建健壮和可维护的函数调用工作流。 - **设计和使用**特殊词元,无缝分离模型的内部推理和外部行动。 而且你将**拥有自己微调的模型来进行函数调用。** 🔥 让我们深入了解**函数调用**吧!
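下面先用一个最小示意预览一下:经过函数调用微调后,模型生成的一轮对话在文本层面大致是什么样子。注意,这里的特殊词元名称(`<think>`、`<tool_call>`、`<tool_response>`)以及工具名称都只是假设的示例,实际名称取决于你在微调时定义的聊天模板。

```python
# 仅为示意:展示"思考 → 行动 → 观察"循环在文本层面的大致形态(词元名称为假设)
conversation = (
    "用户:格林尼治时间现在几点?\n"
    "助手:<think>用户想知道当前时间,我需要调用时间查询工具。</think>\n"
    '<tool_call>{"name": "get_current_time", "arguments": {"timezone": "GMT"}}</tool_call>\n'
    '工具:<tool_response>{"time": "14:30 GMT"}</tool_response>\n'
    "助手:现在是格林尼治时间 14:30。"
)
print(conversation)
```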
agents-course/units/zh-CN/bonus-unit1/introduction.mdx/0
{ "file_path": "agents-course/units/zh-CN/bonus-unit1/introduction.mdx", "repo_id": "agents-course", "token_count": 1875 }
19
# 第一单元测验 (Unit 1 Quiz) <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub4DONE.jpg" alt="Unit 1 planning"/> 恭喜你完成第一单元的学习!让我们测试一下你对目前所学关键概念的理解。 通过测验后,请继续下一部分领取你的证书。 祝你好运! ## 测验 (Quiz) 这是一个交互式测验。测验托管在 Hugging Face Hub 的空间中。你将通过一系列选择题来测试你对本单元所学关键概念的理解。完成测验后,你将能够看到你的分数和正确答案的详细分析。 重要提示:**通过测验后不要忘记点击提交 (Submit),否则你的考试分数将不会被保存!** <iframe src="https://agents-course-unit-1-quiz.hf.space" frameborder="0" width="850" height="450" ></iframe> 你也可以在这里访问测验 👉 [点击这里](https://huggingface.co/spaces/agents-course/unit_1_quiz) ## 学习认证 恭喜通过测验!**您现在可以获取专属结业证书 🎓** 成功完成本单元测评后,系统将为您生成单元结业认证证书。该证书可下载分享,作为课程进度的官方成就证明。 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/whiteboard-unit1sub5DONE.jpg" alt="第一单元规划示意图"/> 获得证书后,您可将其添加至LinkedIn个人档案 🧑‍💼 或分享到X、Bluesky等社交平台。**如果标注@huggingface,我们将非常荣幸并为您送上祝贺**!🤗
agents-course/units/zh-CN/unit1/final-quiz.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit1/final-quiz.mdx", "repo_id": "agents-course", "token_count": 958 }
20
# 欢迎来到 `LangGraph` 的世界 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/LangGraph/LangGraph.png" alt="Unit 2.3 缩略图"/> 欢迎来到学习旅程的下一站!在本章节中,您将学习如何使用 [`LangGraph`](https://github.com/langchain-ai/langgraph) 框架来构建应用程序,该框架能帮助您组织和编排复杂的 LLM 工作流。 `LangGraph` 是一个通过提供对智能体流程的**控制**工具,帮助您构建**生产就绪**应用程序的框架。 ## 模块概览 在本单元中,您将探索: ### 1️⃣ [什么是 LangGraph?何时使用它?](./when_to_use_langgraph) ### 2️⃣ [LangGraph 的构建模块](./building_blocks) ### 3️⃣ [邮件分拣管家 Alfred](./first_graph) ### 4️⃣ [文档分析智能体 Alfred](./document_analysis_agent) ### 5️⃣ [随堂测验](./quizz1) <Tip warning={true}> 本节示例需要访问强大的 LLM/VLM 模型。我们使用 GPT-4o API 运行这些示例,因为该模型与 LangGraph 具有最佳兼容性。 </Tip> 通过本单元的学习,您将掌握构建健壮、有序且生产就绪的应用程序的能力! 需要说明的是,本节只是 LangGraph 的入门介绍,更多高级主题可以通过 LangChain 学院的免费课程学习:[LangGraph 入门指南](https://academy.langchain.com/courses/intro-to-langgraph) 让我们即刻启程! ## 扩展资源 - [LangGraph 智能体](https://langchain-ai.github.io/langgraph/) - LangGraph 智能体示例 - [LangChain 学院](https://academy.langchain.com/courses/intro-to-langgraph) - 来自 LangChain 的完整 LangGraph 课程
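在进入上面的各个小节之前,这里先给出一个最小的 LangGraph 示意,帮助你对"状态 + 节点 + 边"这一基本结构建立直觉。示例中的状态定义和节点逻辑都是假设的演示内容,并非课程中的正式示例。

```python
# 最小示意:一个只有单个节点的 LangGraph 工作流(节点逻辑为假设示例)
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    message: str

def greet(state: State) -> State:
    # 假设的节点:在消息前加上问候语
    return {"message": "你好," + state["message"]}

builder = StateGraph(State)
builder.add_node("greet", greet)
builder.add_edge(START, "greet")
builder.add_edge("greet", END)

graph = builder.compile()
print(graph.invoke({"message": "Alfred"}))
```

后续的各个小节会在这一基本结构之上展开。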
agents-course/units/zh-CN/unit2/langgraph/introduction.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit2/langgraph/introduction.mdx", "repo_id": "agents-course", "token_count": 983 }
21
# `smolagents` 简介 <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/thumbnail.jpg" alt="Unit 2.1 Thumbnail"/> 欢迎来到本模块,在这里你将学习**如何使用 [`smolagents`](https://github.com/huggingface/smolagents) 库构建有效的智能体**,该库提供了一个轻量级框架,用于创建功能强大的AI智能体。 `smolagents` 是 Hugging Face 的一个库;因此,我们非常感谢您通过**加星标**的方式支持 smolagents [`仓库`](https://github.com/huggingface/smolagents): <img src="https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit2/smolagents/star_smolagents.gif" alt="staring smolagents"/> ## 模块概览 本模块提供了使用 `smolagents` 构建智能体的关键概念和实用策略的全面概述。 面对众多可用的开源框架,了解使 `smolagents` 成为有用选择的组件和功能,或确定何时另一种解决方案可能更合适,这一点至关重要。 我们将探索关键的智能体类型,包括为软件开发任务设计的代码智能体(code agents),用于创建模块化、函数驱动工作流的工具调用智能体(tool calling agents),以及访问和综合信息的检索智能体(retrieval agents)。 此外,我们还将讨论多个智能体的编排,以及视觉能力和网络浏览的集成,这为动态和上下文感知应用开辟了新的可能性。 在本单元中,第一单元的智能体阿尔弗雷德(Alfred)回归了。这次,他使用 `smolagents` 框架进行内部运作。我们将一起探索这个框架背后的关键概念,同时阿尔弗雷德将处理各种任务。阿尔弗雷德正在韦恩庄园(Wayne Manor)组织一场派对,趁韦恩家族🦇外出时,他有很多事情要做。跟随我们一起展示他的旅程,看他如何使用 `smolagents` 处理这些任务! <Tip> 在本单元中,您将学习使用 `smolagents` 库构建AI智能体。您的智能体将能够搜索数据、执行代码并与网页交互。您还将学习如何结合多个智能体来创建更强大的系统。 </Tip> ![Alfred the agent](https://huggingface.co/datasets/agents-course/course-images/resolve/main/en/unit1/this-is-alfred.jpg) ## 内容 在这个关于 `smolagents` 的单元中,我们涵盖: ### 1️⃣ [为什么使用 smolagents](./why_use_smolagents) `smolagents` 是众多可用于应用程序开发的开源智能体框架之一。其他选择包括 `LlamaIndex` 和 `LangGraph`,这些在本课程的其他模块中也有涵盖。`smolagents` 提供了几个关键特性,可能使其非常适合特定用例,但在选择框架时,我们应该始终考虑所有选项。我们将探讨使用 `smolagents` 的优势和缺点,帮助您根据项目需求做出明智的决定。 ### 2️⃣ [代码智能体](./code_agents) `CodeAgents`(代码智能体)是 `smolagents` 中的主要智能体类型。这些智能体不是生成 JSON 或文本,而是生成 Python 代码来执行操作。本模块探讨它们的目的、功能以及工作原理,并提供实际例子来展示它们的能力。 ### 3️⃣ [工具调用智能体](./tool_calling_agents) `ToolCallingAgents`(工具调用智能体)是 `smolagents` 支持的第二种智能体类型。与生成 Python 代码的 `CodeAgents` 不同,这些智能体依赖于系统必须解析和解释以执行操作的 JSON/文本块。本模块涵盖它们的功能、与 `CodeAgents` 的主要区别,并提供示例说明其用法。 ### 4️⃣ [工具](./tools) 正如我们在第 1 单元中看到的,工具是大语言模型(LLM)可以在智能体系统中使用的函数,它们作为智能体行为的基本构建块。本模块涵盖如何创建工具、它们的结构,以及使用 `Tool` 类或 `@tool` 装饰器的不同实现方法。您还将了解默认工具箱、如何与社区共享工具,以及如何加载社区贡献的工具以在您的智能体中使用。 ### 5️⃣ [检索智能体](./retrieval_agents) 检索智能体(Retrieval agents)使模型能够访问知识库,从而可以从多个来源搜索、综合和检索信息。它们利用向量存储(vector stores)进行高效检索,并实现 **检索增强生成(Retrieval-Augmented Generation,RAG)** 模式。这些智能体特别适用于将网络搜索与自定义知识库集成,同时通过记忆系统维持对话上下文。本模块探讨实施策略,包括用于稳健信息检索的回退机制。 ### 6️⃣ [多智能体系统](./multi_agent_systems) 有效地编排多个智能体对于构建强大的多智能体系统至关重要。通过组合具有不同能力的智能体(例如,将网络搜索智能体与代码执行智能体结合),您可以创建更复杂的解决方案。本模块专注于设计、实施和管理多智能体系统,以最大限度地提高效率和可靠性。 ### 7️⃣ [视觉和浏览器智能体](./vision_agents) 视觉智能体(Vision agents)通过整合 **视觉-语言模型(Vision-Language Models,VLMs)** 扩展了传统智能体的能力,使其能够处理和解释视觉信息。本模块探讨如何设计和集成由 VLM 驱动的智能体,从而解锁诸如基于图像的推理、视觉数据分析和多模态交互等高级功能。我们还将使用视觉智能体构建一个浏览器智能体,能够浏览网络并从中提取信息。 ## 资源 - [smolagents 文档](https://huggingface.co/docs/smolagents) - smolagents 库的官方文档 - [构建有效的智能体](https://www.anthropic.com/research/building-effective-agents) - 关于智能体架构的研究论文 - [智能体指南](https://huggingface.co/docs/smolagents/tutorials/building_good_agents) - 构建可靠智能体的最佳实践 - [LangGraph 智能体](https://langchain-ai.github.io/langgraph/) - 智能体实现的其他示例 - [函数调用指南](https://platform.openai.com/docs/guides/function-calling) - 了解大语言模型中的函数调用 - [RAG 最佳实践](https://www.pinecone.io/learn/retrieval-augmented-generation/) - 实施有效 RAG 的指南
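在进入上述各小节之前,先看一个最小的 `smolagents` 示意,预览本单元会反复用到的模式:用 `@tool` 装饰器定义一个工具,并交给 `CodeAgent` 使用。示例中的工具逻辑和提示词都是假设的演示内容。

```python
# 最小示意:定义一个工具并让 CodeAgent 调用它(工具内容为假设示例)
from smolagents import CodeAgent, InferenceClientModel, tool

@tool
def suggest_menu(occasion: str) -> str:
    """根据场合给出菜单建议。

    Args:
        occasion: 派对的场合类型,例如 "casual" 或 "formal"。
    """
    if occasion == "formal":
        return "三道式正餐,配鸡尾酒。"
    return "自助餐与各式小食。"

agent = CodeAgent(tools=[suggest_menu], model=InferenceClientModel())
agent.run("为韦恩庄园的正式派对推荐一份菜单。")
```

`CodeAgent` 会生成 Python 代码来调用 `suggest_menu`,这正是它与生成 JSON 工具调用的 `ToolCallingAgent` 的主要区别,后面的小节会详细展开。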
agents-course/units/zh-CN/unit2/smolagents/introduction.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit2/smolagents/introduction.mdx", "repo_id": "agents-course", "token_count": 3994 }
22
# 那现在呢?我应该学习哪些主题? Agentic AI 是一个快速发展的领域,了解基础协议对于构建智能自主系统至关重要。 你应该熟悉的两个重要标准是: - **模型上下文协议 (MCP)** - **代理对代理协议 (A2A)** ## 🔌 模型上下文协议 (MCP) Anthropic 的 **模型上下文协议 (MCP)** 是一个开放标准,使 AI 模型能够安全无缝地**连接外部工具、数据源和应用程序**,从而使代理更加智能和自主。 可以将 MCP 想象为一个**通用适配器**,就像 USB-C 接口一样,使 AI 模型能够插入各种数字环境**而无需为每一个进行定制集成**。 MCP 正在迅速获得行业关注,并已开始被 OpenAI 和谷歌等大公司采用。 📚 了解更多: - [Anthropic 的官方公告和文档](https://www.anthropic.com/news/model-context-protocol) - [MCP - 维基百科](https://en.wikipedia.org/wiki/Model_Context_Protocol) - [MCP - 博客](https://huggingface.co/blog/Kseniase/mcp) ## 🤝 代理对代理 (A2A) 协议 谷歌开发了 **代理对代理 (A2A) 协议**,作为 Anthropic 的模型上下文协议 (MCP) 的补充。 虽然 MCP 连接代理与外部工具,**A2A 则连接代理之间**,为多智能体系统之间的协作铺平道路,使其能够协同工作以解决复杂问题。 📚 深入了解 A2A: - [谷歌的 A2A 公告](https://developers.googleblog.com/en/a2a-a-new-era-of-agent-interoperability/)
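作为上述阅读材料的动手补充,下面给出一个用 MCP 官方 Python SDK 搭建最小服务端的示意。这只是一个假设的演示:服务端名称和工具逻辑均为示例,具体 API 请以 SDK 文档为准。

```python
# 最小示意:用 MCP Python SDK(FastMCP)向代理暴露一个工具(内容为假设示例)
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """返回两个整数之和。"""
    return a + b

if __name__ == "__main__":
    mcp.run()  # 默认通过 stdio 与客户端(即代理宿主)通信
```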
agents-course/units/zh-CN/unit4/additional-readings.mdx/0
{ "file_path": "agents-course/units/zh-CN/unit4/additional-readings.mdx", "repo_id": "agents-course", "token_count": 962 }
23
# Porting a custom kernel
candle/candle-book/src/cuda/porting.md/0
{ "file_path": "candle/candle-book/src/cuda/porting.md", "repo_id": "candle", "token_count": 7 }
24
//! #A simplified example in Rust of training a neural network and then using it based on the Candle Framework by Hugging Face. //! Author: Evgeny Igumnov 2023 igumnovnsk@gmail.com //! This program implements a neural network to predict the winner of the second round of elections based on the results of the first round. //! //! ##Basic moments: //! //! A multilayer perceptron with two hidden layers is used. The first hidden layer has 4 neurons, the second has 2 neurons. //! The input is a vector of 2 numbers - the percentage of votes for the first and second candidates in the first stage. //! The output is the number 0 or 1, where 1 means that the first candidate will win in the second stage, 0 means that he will lose. //! For training, samples with real data on the results of the first and second stages of different elections are used. //! The model is trained by backpropagation using gradient descent and the cross-entropy loss function. //! Model parameters (weights of neurons) are initialized randomly, then optimized during training. //! After training, the model is tested on a deferred sample to evaluate the accuracy. //! If the accuracy on the test set is below 100%, the model is considered underfit and the learning process is repeated. //! Thus, this neural network learns to find hidden relationships between the results of the first and second rounds of voting in order to make predictions for new data. #[rustfmt::skip] mod tests { use candle::{DType, Result, Tensor, D, Device}; use candle_nn::{loss, ops, Linear, Module, VarBuilder, VarMap, Optimizer}; // ANCHOR: book_training_simplified1 const VOTE_DIM: usize = 2; const RESULTS: usize = 1; const EPOCHS: usize = 10; const LAYER1_OUT_SIZE: usize = 4; const LAYER2_OUT_SIZE: usize = 2; const LEARNING_RATE: f64 = 0.05; #[derive(Clone)] pub struct Dataset { pub train_votes: Tensor, pub train_results: Tensor, pub test_votes: Tensor, pub test_results: Tensor, } struct MultiLevelPerceptron { ln1: Linear, ln2: Linear, ln3: Linear, } impl MultiLevelPerceptron { fn new(vs: VarBuilder) -> Result<Self> { let ln1 = candle_nn::linear(VOTE_DIM, LAYER1_OUT_SIZE, vs.pp("ln1"))?; let ln2 = candle_nn::linear(LAYER1_OUT_SIZE, LAYER2_OUT_SIZE, vs.pp("ln2"))?; let ln3 = candle_nn::linear(LAYER2_OUT_SIZE, RESULTS + 1, vs.pp("ln3"))?; Ok(Self { ln1, ln2, ln3 }) } fn forward(&self, xs: &Tensor) -> Result<Tensor> { let xs = self.ln1.forward(xs)?; let xs = xs.relu()?; let xs = self.ln2.forward(&xs)?; let xs = xs.relu()?; self.ln3.forward(&xs) } } // ANCHOR_END: book_training_simplified1 // ANCHOR: book_training_simplified3 #[tokio::test] async fn simplified() -> anyhow::Result<()> { let dev = Device::cuda_if_available(0)?; let train_votes_vec: Vec<u32> = vec![ 15, 10, 10, 15, 5, 12, 30, 20, 16, 12, 13, 25, 6, 14, 31, 21, ]; let train_votes_tensor = Tensor::from_vec(train_votes_vec.clone(), (train_votes_vec.len() / VOTE_DIM, VOTE_DIM), &dev)?.to_dtype(DType::F32)?; let train_results_vec: Vec<u32> = vec![ 1, 0, 0, 1, 1, 0, 0, 1, ]; let train_results_tensor = Tensor::from_vec(train_results_vec, train_votes_vec.len() / VOTE_DIM, &dev)?; let test_votes_vec: Vec<u32> = vec![ 13, 9, 8, 14, 3, 10, ]; let test_votes_tensor = Tensor::from_vec(test_votes_vec.clone(), (test_votes_vec.len() / VOTE_DIM, VOTE_DIM), &dev)?.to_dtype(DType::F32)?; let test_results_vec: Vec<u32> = vec![ 1, 0, 0, ]; let test_results_tensor = Tensor::from_vec(test_results_vec.clone(), test_results_vec.len(), &dev)?; let m = Dataset { train_votes: train_votes_tensor, train_results: 
train_results_tensor, test_votes: test_votes_tensor, test_results: test_results_tensor, }; let trained_model: MultiLevelPerceptron; loop { println!("Trying to train neural network."); match train(m.clone(), &dev) { Ok(model) => { trained_model = model; break; }, Err(e) => { println!("Error: {}", e); continue; } } } let real_world_votes: Vec<u32> = vec![ 13, 22, ]; let tensor_test_votes = Tensor::from_vec(real_world_votes.clone(), (1, VOTE_DIM), &dev)?.to_dtype(DType::F32)?; let final_result = trained_model.forward(&tensor_test_votes)?; let result = final_result .argmax(D::Minus1)? .to_dtype(DType::F32)? .get(0).map(|x| x.to_scalar::<f32>())??; println!("real_life_votes: {:?}", real_world_votes); println!("neural_network_prediction_result: {:?}", result); Ok(()) } // ANCHOR_END: book_training_simplified3 // ANCHOR: book_training_simplified2 fn train(m: Dataset, dev: &Device) -> anyhow::Result<MultiLevelPerceptron> { let train_results = m.train_results.to_device(dev)?; let train_votes = m.train_votes.to_device(dev)?; let varmap = VarMap::new(); let vs = VarBuilder::from_varmap(&varmap, DType::F32, dev); let model = MultiLevelPerceptron::new(vs.clone())?; let mut sgd = candle_nn::SGD::new(varmap.all_vars(), LEARNING_RATE)?; let test_votes = m.test_votes.to_device(dev)?; let test_results = m.test_results.to_device(dev)?; let mut final_accuracy: f32 = 0.0; for epoch in 1..EPOCHS + 1 { let logits = model.forward(&train_votes)?; let log_sm = ops::log_softmax(&logits, D::Minus1)?; let loss = loss::nll(&log_sm, &train_results)?; sgd.backward_step(&loss)?; let test_logits = model.forward(&test_votes)?; let sum_ok = test_logits .argmax(D::Minus1)? .eq(&test_results)? .to_dtype(DType::F32)? .sum_all()? .to_scalar::<f32>()?; let test_accuracy = sum_ok / test_results.dims1()? as f32; final_accuracy = 100. * test_accuracy; println!("Epoch: {epoch:3} Train loss: {:8.5} Test accuracy: {:5.2}%", loss.to_scalar::<f32>()?, final_accuracy ); if final_accuracy == 100.0 { break; } } if final_accuracy < 100.0 { Err(anyhow::Error::msg("The model is not trained well enough.")) } else { Ok(model) } } // ANCHOR_END: book_training_simplified2 }
candle/candle-book/src/simplified.rs/0
{ "file_path": "candle/candle-book/src/simplified.rs", "repo_id": "candle", "token_count": 2903 }
25
use crate::benchmarks::{BenchDevice, BenchDeviceHandler}; use candle_core::{ quantized::{self, GgmlDType, QMatMul}, Device, Module, Tensor, }; use criterion::{black_box, criterion_group, Criterion, Throughput}; use std::time::Instant; fn run(matmul: &QMatMul, x: &Tensor) { matmul.forward(x).unwrap(); } fn run_bench(c: &mut Criterion, device: &Device, dtype: GgmlDType) { let b = 1; let m = 1; let n = 1024; let k = 1024; let lhs = (0..(m * k)) .map(|v| v as f32 / (m * k) as f32) .collect::<Vec<_>>(); let rhs = (0..(k * n)) .map(|v| v as f32 / (n * k) as f32) .collect::<Vec<_>>(); let lhs = Tensor::from_slice(&lhs, (m, k), device).unwrap(); let rhs = Tensor::from_slice(&rhs, (k, n), device).unwrap(); let qtensor = quantized::QTensor::quantize(&rhs.t().unwrap(), dtype).unwrap(); let matmul = quantized::QMatMul::from_qtensor(qtensor).unwrap(); let flops = b * m * n * k; let mut group = c.benchmark_group(device.bench_name(format!("qmatmul_{:?}", dtype))); group.sample_size(200); group.throughput(Throughput::Bytes(flops as u64)); group.bench_function("iter", move |b| { b.iter_custom(|iters| { let start = Instant::now(); for _i in 0..iters { run(black_box(&matmul), black_box(&lhs)); } device.sync().unwrap(); start.elapsed() }) }); group.finish(); } fn criterion_benchmark(c: &mut Criterion) { let handler = BenchDeviceHandler::new().unwrap(); for device in handler.devices { for dtype in [ GgmlDType::F32, GgmlDType::F16, GgmlDType::Q4_0, GgmlDType::Q4_1, GgmlDType::Q5_0, GgmlDType::Q5_1, GgmlDType::Q8_0, GgmlDType::Q2K, GgmlDType::Q3K, GgmlDType::Q4K, GgmlDType::Q5K, GgmlDType::Q6K, ] { run_bench(c, &device, dtype); } } } criterion_group!(benches, criterion_benchmark);
candle/candle-core/benches/benchmarks/qmatmul.rs/0
{ "file_path": "candle/candle-core/benches/benchmarks/qmatmul.rs", "repo_id": "candle", "token_count": 1085 }
26
pub trait VecOps: num_traits::NumAssign + Copy { fn min(self, rhs: Self) -> Self; fn max(self, rhs: Self) -> Self; /// Dot-product of two vectors. /// /// # Safety /// /// The length of `lhs` and `rhs` have to be at least `len`. `res` has to point to a valid /// element. #[inline(always)] unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) { *res = Self::zero(); for i in 0..len { *res += *lhs.add(i) * *rhs.add(i) } } /// Sum of all elements in a vector. /// /// # Safety /// /// The length of `xs` must be at least `len`. `res` has to point to a valid /// element. #[inline(always)] unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) { *res = Self::zero(); for i in 0..len { *res += *xs.add(i) } } /// Maximum element in a non-empty vector. /// /// # Safety /// /// The length of `xs` must be at least `len` and positive. `res` has to point to a valid /// element. #[inline(always)] unsafe fn vec_reduce_max(xs: *const Self, res: *mut Self, len: usize) { *res = *xs; for i in 1..len { *res = (*res).max(*xs.add(i)) } } /// Minimum element in a non-empty vector. /// /// # Safety /// /// The length of `xs` must be at least `len` and positive. `res` has to point to a valid /// element. #[inline(always)] unsafe fn vec_reduce_min(xs: *const Self, res: *mut Self, len: usize) { *res = *xs; for i in 1..len { *res = (*res).min(*xs.add(i)) } } } impl VecOps for f32 { #[inline(always)] fn min(self, other: Self) -> Self { Self::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { Self::max(self, other) } #[inline(always)] unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) { super::vec_dot_f32(lhs, rhs, res, len) } #[inline(always)] unsafe fn vec_reduce_sum(xs: *const Self, res: *mut Self, len: usize) { super::vec_sum(xs, res, len) } } impl VecOps for half::f16 { #[inline(always)] fn min(self, other: Self) -> Self { Self::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { Self::max(self, other) } #[inline(always)] unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) { let mut res_f32 = 0f32; super::vec_dot_f16(lhs, rhs, &mut res_f32, len); *res = half::f16::from_f32(res_f32); } } impl VecOps for f64 { #[inline(always)] fn min(self, other: Self) -> Self { Self::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { Self::max(self, other) } } impl VecOps for half::bf16 { #[inline(always)] fn min(self, other: Self) -> Self { Self::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { Self::max(self, other) } #[inline(always)] unsafe fn vec_dot(lhs: *const Self, rhs: *const Self, res: *mut Self, len: usize) { let mut res_f32 = 0f32; super::vec_dot_bf16(lhs, rhs, &mut res_f32, len); *res = half::bf16::from_f32(res_f32); } } impl VecOps for u8 { #[inline(always)] fn min(self, other: Self) -> Self { <Self as Ord>::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { <Self as Ord>::max(self, other) } } impl VecOps for u32 { #[inline(always)] fn min(self, other: Self) -> Self { <Self as Ord>::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { <Self as Ord>::max(self, other) } } impl VecOps for i64 { #[inline(always)] fn min(self, other: Self) -> Self { <Self as Ord>::min(self, other) } #[inline(always)] fn max(self, other: Self) -> Self { <Self as Ord>::max(self, other) } } #[inline(always)] pub fn par_for_each(n_threads: usize, func: impl Fn(usize) + Send + Sync) { if n_threads == 1 { func(0) } 
else { rayon::scope(|s| { for thread_idx in 0..n_threads { let func = &func; s.spawn(move |_| func(thread_idx)); } }) } } #[inline(always)] pub fn par_range(lo: usize, up: usize, n_threads: usize, func: impl Fn(usize) + Send + Sync) { if n_threads == 1 { for i in lo..up { func(i) } } else { rayon::scope(|s| { for thread_idx in 0..n_threads { let func = &func; s.spawn(move |_| { for i in (thread_idx..up).step_by(n_threads) { func(i) } }); } }) } }
candle/candle-core/src/cpu/kernels.rs/0
{ "file_path": "candle/candle-core/src/cpu/kernels.rs", "repo_id": "candle", "token_count": 2456 }
27
#![allow(dead_code)] use crate::op::{BinaryOpT, CmpOp, ReduceOp, UnaryOpT}; use crate::{CpuStorage, DType, Error, Layout, Result, Shape}; #[derive(Debug, Clone)] pub struct MetalDevice; #[derive(Debug)] pub struct MetalStorage; #[derive(thiserror::Error, Debug)] pub enum MetalError { #[error("{0}")] Message(String), } impl From<String> for MetalError { fn from(e: String) -> Self { MetalError::Message(e) } } macro_rules! fail { () => { unimplemented!("metal support has not been enabled, add `metal` feature to enable.") }; } impl crate::backend::BackendStorage for MetalStorage { type Device = MetalDevice; fn try_clone(&self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn dtype(&self) -> DType { fail!() } fn device(&self) -> &Self::Device { fail!() } fn const_set(&mut self, _: crate::scalar::Scalar, _: &Layout) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn to_cpu_storage(&self) -> Result<CpuStorage> { Err(Error::NotCompiledWithMetalSupport) } fn affine(&self, _: &Layout, _: f64, _: f64) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn powf(&self, _: &Layout, _: f64) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn elu(&self, _: &Layout, _: f64) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn reduce_op(&self, _: ReduceOp, _: &Layout, _: &[usize]) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn cmp(&self, _: CmpOp, _: &Self, _: &Layout, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn to_dtype(&self, _: &Layout, _: DType) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn unary_impl<B: UnaryOpT>(&self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn binary_impl<B: BinaryOpT>(&self, _: &Self, _: &Layout, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn where_cond(&self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn conv1d( &self, _: &Layout, _: &Self, _: &Layout, _: &crate::conv::ParamsConv1D, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn conv_transpose1d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConvTranspose1D, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn conv2d( &self, _: &Layout, _: &Self, _: &Layout, _: &crate::conv::ParamsConv2D, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn conv_transpose2d( &self, _l: &Layout, _kernel: &Self, _kernel_l: &Layout, _params: &crate::conv::ParamsConvTranspose2D, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn index_select(&self, _: &Self, _: &Layout, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn gather(&self, _: &Layout, _: &Self, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn scatter_set( &mut self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn scatter_add_set( &mut self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn index_add( &self, _: &Layout, _: &Self, _: &Layout, _: &Self, _: &Layout, _: usize, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn matmul( &self, _: &Self, _: (usize, usize, usize, usize), _: &Layout, _: &Layout, ) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn copy_strided_src(&self, _: &mut 
Self, _: usize, _: &Layout) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn copy2d( &self, _: &mut Self, _: usize, _: usize, _: usize, _: usize, _: usize, _: usize, ) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn avg_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn max_pool2d(&self, _: &Layout, _: (usize, usize), _: (usize, usize)) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn upsample_nearest1d(&self, _: &Layout, _: usize) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn upsample_nearest2d(&self, _: &Layout, _: usize, _: usize) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } } impl crate::backend::BackendDevice for MetalDevice { type Storage = MetalStorage; fn new(_: usize) -> Result<Self> { Err(Error::NotCompiledWithMetalSupport) } fn set_seed(&self, _: u64) -> Result<()> { Err(Error::NotCompiledWithMetalSupport) } fn location(&self) -> crate::DeviceLocation { fail!() } fn same_device(&self, _: &Self) -> bool { fail!() } fn zeros_impl(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } unsafe fn alloc_uninit(&self, _shape: &Shape, _dtype: DType) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn storage_from_slice<T: crate::WithDType>(&self, _: &[T]) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn storage_from_cpu_storage(&self, _: &CpuStorage) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn storage_from_cpu_storage_owned(&self, _: CpuStorage) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn rand_uniform(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn rand_normal(&self, _: &Shape, _: DType, _: f64, _: f64) -> Result<Self::Storage> { Err(Error::NotCompiledWithMetalSupport) } fn synchronize(&self) -> Result<()> { Ok(()) } }
candle/candle-core/src/dummy_metal_backend.rs/0
{ "file_path": "candle/candle-core/src/dummy_metal_backend.rs", "repo_id": "candle", "token_count": 3182 }
28
//! Support for the [GGUF file format](https://github.com/philpax/ggml/blob/gguf-spec/docs/gguf.md). //! use super::{GgmlDType, QTensor}; use crate::{Context, Device, Result}; use byteorder::{LittleEndian, ReadBytesExt, WriteBytesExt}; use std::collections::HashMap; pub const DEFAULT_ALIGNMENT: u64 = 32; #[derive(Debug, Clone, Copy, PartialEq, Eq)] enum Magic { Gguf, } impl TryFrom<u32> for Magic { type Error = crate::Error; fn try_from(value: u32) -> Result<Self> { let magic = match value { 0x46554747 | 0x47475546 => Self::Gguf, _ => crate::bail!("unknown magic 0x{value:08x}"), }; Ok(magic) } } #[derive(Debug, Clone, Copy, PartialEq, Eq)] pub enum VersionedMagic { GgufV1, GgufV2, GgufV3, } impl VersionedMagic { fn read<R: std::io::Read>(reader: &mut R) -> Result<Self> { let magic = reader.read_u32::<LittleEndian>()?; let magic = Magic::try_from(magic)?; let version = reader.read_u32::<LittleEndian>()?; let versioned_magic = match (magic, version) { (Magic::Gguf, 1) => Self::GgufV1, (Magic::Gguf, 2) => Self::GgufV2, (Magic::Gguf, 3) => Self::GgufV3, _ => crate::bail!("gguf: unsupported magic/version {magic:?}/{version}"), }; Ok(versioned_magic) } } #[derive(Debug)] pub struct TensorInfo { pub ggml_dtype: GgmlDType, pub shape: crate::Shape, pub offset: u64, } impl TensorInfo { pub fn read<R: std::io::Seek + std::io::Read>( &self, reader: &mut R, tensor_data_offset: u64, device: &Device, ) -> Result<QTensor> { let tensor_elems = self.shape.elem_count(); let block_size = self.ggml_dtype.block_size(); if tensor_elems % block_size != 0 { crate::bail!( "the number of elements {tensor_elems} is not divisible by the block size {block_size}" ) } let size_in_bytes = tensor_elems / block_size * self.ggml_dtype.type_size(); let mut raw_data = vec![0u8; size_in_bytes]; reader.seek(std::io::SeekFrom::Start(tensor_data_offset + self.offset))?; reader.read_exact(&mut raw_data)?; super::ggml_file::qtensor_from_ggml( self.ggml_dtype, &raw_data, self.shape.dims().to_vec(), device, ) } } #[derive(Debug)] pub struct Content { pub magic: VersionedMagic, pub metadata: HashMap<String, Value>, pub tensor_infos: HashMap<String, TensorInfo>, pub tensor_data_offset: u64, } fn read_string<R: std::io::Read>(reader: &mut R, magic: &VersionedMagic) -> Result<String> { let len = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let mut v = vec![0u8; len]; reader.read_exact(&mut v)?; // GGUF strings are supposed to be non-null terminated but in practice this happens. while let Some(0) = v.last() { v.pop(); } // GGUF strings are utf8 encoded but there are cases that don't seem to be valid. Ok(String::from_utf8_lossy(&v).into_owned()) } #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] pub enum ValueType { // The value is a 8-bit unsigned integer. U8, // The value is a 8-bit signed integer. I8, // The value is a 16-bit unsigned little-endian integer. U16, // The value is a 16-bit signed little-endian integer. I16, // The value is a 32-bit unsigned little-endian integer. U32, // The value is a 32-bit signed little-endian integer. I32, // The value is a 64-bit unsigned little-endian integer. U64, // The value is a 64-bit signed little-endian integer. I64, // The value is a 32-bit IEEE754 floating point number. F32, // The value is a 64-bit IEEE754 floating point number. F64, // The value is a boolean. // 1-byte value where 0 is false and 1 is true. 
// Anything else is invalid, and should be treated as either the model being invalid or the reader being buggy. Bool, // The value is a UTF-8 non-null-terminated string, with length prepended. String, // The value is an array of other values, with the length and type prepended. // Arrays can be nested, and the length of the array is the number of elements in the array, not the number of bytes. Array, } #[derive(Debug, Clone)] pub enum Value { U8(u8), I8(i8), U16(u16), I16(i16), U32(u32), I32(i32), U64(u64), I64(i64), F32(f32), F64(f64), Bool(bool), String(String), Array(Vec<Value>), } impl Value { pub fn value_type(&self) -> ValueType { match self { Self::U8(_) => ValueType::U8, Self::I8(_) => ValueType::I8, Self::U16(_) => ValueType::U16, Self::I16(_) => ValueType::I16, Self::U32(_) => ValueType::U32, Self::I32(_) => ValueType::I32, Self::U64(_) => ValueType::U64, Self::I64(_) => ValueType::I64, Self::F32(_) => ValueType::F32, Self::F64(_) => ValueType::F64, Self::Bool(_) => ValueType::Bool, Self::String(_) => ValueType::String, Self::Array(_) => ValueType::Array, } } pub fn to_u8(&self) -> Result<u8> { match self { Self::U8(v) => Ok(*v), v => crate::bail!("not a u8 {v:?}"), } } pub fn to_i8(&self) -> Result<i8> { match self { Self::I8(v) => Ok(*v), v => crate::bail!("not a i8 {v:?}"), } } pub fn to_u16(&self) -> Result<u16> { match self { Self::U16(v) => Ok(*v), v => crate::bail!("not a u16 {v:?}"), } } pub fn to_i16(&self) -> Result<i16> { match self { Self::I16(v) => Ok(*v), v => crate::bail!("not a i16 {v:?}"), } } pub fn to_u32(&self) -> Result<u32> { match self { Self::U32(v) => Ok(*v), v => crate::bail!("not a u32 {v:?}"), } } pub fn to_i32(&self) -> Result<i32> { match self { Self::I32(v) => Ok(*v), v => crate::bail!("not a i32 {v:?}"), } } /// This will also automatically upcast any integral types which will not truncate. 
pub fn to_u64(&self) -> Result<u64> { match self { Self::U64(v) => Ok(*v), // Autoupcast cases here Self::U8(v) => Ok(*v as u64), Self::U16(v) => Ok(*v as u64), Self::U32(v) => Ok(*v as u64), Self::Bool(v) => Ok(*v as u64), v => crate::bail!("not a u64 or upcastable to u64 {v:?}"), } } pub fn to_i64(&self) -> Result<i64> { match self { Self::I64(v) => Ok(*v), v => crate::bail!("not a i64 {v:?}"), } } pub fn to_f32(&self) -> Result<f32> { match self { Self::F32(v) => Ok(*v), v => crate::bail!("not a f32 {v:?}"), } } pub fn to_f64(&self) -> Result<f64> { match self { Self::F64(v) => Ok(*v), v => crate::bail!("not a f64 {v:?}"), } } pub fn to_bool(&self) -> Result<bool> { match self { Self::Bool(v) => Ok(*v), v => crate::bail!("not a bool {v:?}"), } } pub fn to_vec(&self) -> Result<&Vec<Value>> { match self { Self::Array(v) => Ok(v), v => crate::bail!("not a vec {v:?}"), } } pub fn to_string(&self) -> Result<&String> { match self { Self::String(v) => Ok(v), v => crate::bail!("not a string {v:?}"), } } fn read<R: std::io::Read>( reader: &mut R, value_type: ValueType, magic: &VersionedMagic, ) -> Result<Self> { let v = match value_type { ValueType::U8 => Self::U8(reader.read_u8()?), ValueType::I8 => Self::I8(reader.read_i8()?), ValueType::U16 => Self::U16(reader.read_u16::<LittleEndian>()?), ValueType::I16 => Self::I16(reader.read_i16::<LittleEndian>()?), ValueType::U32 => Self::U32(reader.read_u32::<LittleEndian>()?), ValueType::I32 => Self::I32(reader.read_i32::<LittleEndian>()?), ValueType::U64 => Self::U64(reader.read_u64::<LittleEndian>()?), ValueType::I64 => Self::I64(reader.read_i64::<LittleEndian>()?), ValueType::F32 => Self::F32(reader.read_f32::<LittleEndian>()?), ValueType::F64 => Self::F64(reader.read_f64::<LittleEndian>()?), ValueType::Bool => match reader.read_u8()? { 0 => Self::Bool(false), 1 => Self::Bool(true), b => crate::bail!("unexpected bool value {b}"), }, ValueType::String => Self::String(read_string(reader, magic)?), ValueType::Array => { let value_type = reader.read_u32::<LittleEndian>()?; let value_type = ValueType::from_u32(value_type)?; let len = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let mut vs = Vec::with_capacity(len); for _ in 0..len { vs.push(Value::read(reader, value_type, magic)?) } Self::Array(vs) } }; Ok(v) } fn write<W: std::io::Write>(&self, w: &mut W) -> Result<()> { match self { &Self::U8(v) => w.write_u8(v)?, &Self::I8(v) => w.write_i8(v)?, &Self::U16(v) => w.write_u16::<LittleEndian>(v)?, &Self::I16(v) => w.write_i16::<LittleEndian>(v)?, &Self::U32(v) => w.write_u32::<LittleEndian>(v)?, &Self::I32(v) => w.write_i32::<LittleEndian>(v)?, &Self::U64(v) => w.write_u64::<LittleEndian>(v)?, &Self::I64(v) => w.write_i64::<LittleEndian>(v)?, &Self::F32(v) => w.write_f32::<LittleEndian>(v)?, &Self::F64(v) => w.write_f64::<LittleEndian>(v)?, &Self::Bool(v) => w.write_u8(u8::from(v))?, Self::String(v) => write_string(w, v.as_str())?, Self::Array(v) => { // The `Value` type does not enforce that all the values in an Array have the same // type. let value_type = if v.is_empty() { // Doesn't matter, the array is empty. ValueType::U32 } else { let value_type: std::collections::HashSet<_> = v.iter().map(|elem| elem.value_type()).collect(); if value_type.len() != 1 { crate::bail!("multiple value-types in the same array {value_type:?}") } value_type.into_iter().next().context("empty value_type")? 
}; w.write_u32::<LittleEndian>(value_type.to_u32())?; w.write_u64::<LittleEndian>(v.len() as u64)?; for elem in v.iter() { elem.write(w)? } } } Ok(()) } } impl ValueType { fn from_u32(v: u32) -> Result<Self> { let v = match v { 0 => Self::U8, 1 => Self::I8, 2 => Self::U16, 3 => Self::I16, 4 => Self::U32, 5 => Self::I32, 6 => Self::F32, 7 => Self::Bool, 8 => Self::String, 9 => Self::Array, 10 => Self::U64, 11 => Self::I64, 12 => Self::F64, v => crate::bail!("unrecognized value-type {v:#08x}"), }; Ok(v) } fn to_u32(self) -> u32 { match self { Self::U8 => 0, Self::I8 => 1, Self::U16 => 2, Self::I16 => 3, Self::U32 => 4, Self::I32 => 5, Self::F32 => 6, Self::Bool => 7, Self::String => 8, Self::Array => 9, Self::U64 => 10, Self::I64 => 11, Self::F64 => 12, } } } impl Content { pub fn read<R: std::io::Seek + std::io::Read>(reader: &mut R) -> Result<Self> { let magic = VersionedMagic::read(reader)?; let tensor_count = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let metadata_kv_count = match magic { VersionedMagic::GgufV1 => reader.read_u32::<LittleEndian>()? as usize, VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { reader.read_u64::<LittleEndian>()? as usize } }; let mut metadata = HashMap::new(); for _idx in 0..metadata_kv_count { let key = read_string(reader, &magic)?; let value_type = reader.read_u32::<LittleEndian>()?; let value_type = ValueType::from_u32(value_type)?; let value = Value::read(reader, value_type, &magic)?; metadata.insert(key, value); } let mut tensor_infos = HashMap::new(); for _idx in 0..tensor_count { let tensor_name = read_string(reader, &magic)?; let n_dimensions = reader.read_u32::<LittleEndian>()?; let mut dimensions: Vec<usize> = match magic { VersionedMagic::GgufV1 => { let mut dimensions = vec![0; n_dimensions as usize]; reader.read_u32_into::<LittleEndian>(&mut dimensions)?; dimensions.into_iter().map(|c| c as usize).collect() } VersionedMagic::GgufV2 | VersionedMagic::GgufV3 => { let mut dimensions = vec![0; n_dimensions as usize]; reader.read_u64_into::<LittleEndian>(&mut dimensions)?; dimensions.into_iter().map(|c| c as usize).collect() } }; dimensions.reverse(); let ggml_dtype = reader.read_u32::<LittleEndian>()?; let ggml_dtype = GgmlDType::from_u32(ggml_dtype)?; let offset = reader.read_u64::<LittleEndian>()?; tensor_infos.insert( tensor_name, TensorInfo { shape: crate::Shape::from(dimensions), offset, ggml_dtype, }, ); } let position = reader.stream_position()?; let alignment = match metadata.get("general.alignment") { Some(Value::U8(v)) => *v as u64, Some(Value::U16(v)) => *v as u64, Some(Value::U32(v)) => *v as u64, Some(Value::I8(v)) if *v >= 0 => *v as u64, Some(Value::I16(v)) if *v >= 0 => *v as u64, Some(Value::I32(v)) if *v >= 0 => *v as u64, _ => DEFAULT_ALIGNMENT, }; let tensor_data_offset = position.div_ceil(alignment) * alignment; Ok(Self { magic, metadata, tensor_infos, tensor_data_offset, }) } pub fn tensor<R: std::io::Seek + std::io::Read>( &self, reader: &mut R, name: &str, device: &Device, ) -> Result<QTensor> { let tensor_info = match self.tensor_infos.get(name) { Some(tensor_info) => tensor_info, None => crate::bail!("cannot find tensor info for {name}"), }; tensor_info.read(reader, self.tensor_data_offset, device) } } fn write_string<W: std::io::Write>(w: &mut W, str: &str) -> Result<()> { let bytes = str.as_bytes(); w.write_u64::<LittleEndian>(bytes.len() as u64)?; w.write_all(bytes)?; Ok(()) } 
pub fn write<W: std::io::Seek + std::io::Write>( w: &mut W, metadata: &[(&str, &Value)], tensors: &[(&str, &QTensor)], ) -> Result<()> { w.write_u32::<LittleEndian>(0x46554747)?; w.write_u32::<LittleEndian>(2)?; // version 2. w.write_u64::<LittleEndian>(tensors.len() as u64)?; w.write_u64::<LittleEndian>(metadata.len() as u64)?; for (name, value) in metadata.iter() { write_string(w, name)?; w.write_u32::<LittleEndian>(value.value_type().to_u32())?; value.write(w)?; } let mut offset = 0usize; let mut offsets = Vec::with_capacity(tensors.len()); for (name, tensor) in tensors.iter() { write_string(w, name)?; let dims = tensor.shape().dims(); w.write_u32::<LittleEndian>(dims.len() as u32)?; for &dim in dims.iter().rev() { w.write_u64::<LittleEndian>(dim as u64)?; } w.write_u32::<LittleEndian>(tensor.dtype().to_u32())?; w.write_u64::<LittleEndian>(offset as u64)?; offsets.push(offset); let size_in_bytes = tensor.storage_size_in_bytes(); let padding = 31 - (31 + size_in_bytes) % 32; offset += size_in_bytes + padding; } let pos = w.stream_position()? as usize; let padding = 31 - (31 + pos) % 32; w.write_all(&vec![0u8; padding])?; let tensor_start_pos = w.stream_position()? as usize; for (offset, (_name, tensor)) in offsets.iter().zip(tensors.iter()) { let pos = w.stream_position()? as usize; if tensor_start_pos + offset != pos { crate::bail!( "internal error, unexpected current position {tensor_start_pos} {offset} {pos}" ) } let data = tensor.data()?; let size_in_bytes = data.len(); w.write_all(&data)?; let padding = 31 - (31 + size_in_bytes) % 32; w.write_all(&vec![0u8; padding])?; } Ok(()) }
candle/candle-core/src/quantized/gguf_file.rs/0
{ "file_path": "candle/candle-core/src/quantized/gguf_file.rs", "repo_id": "candle", "token_count": 9550 }
29
use crate::{Result, Tensor}; #[macro_export] macro_rules! test_device { // TODO: Switch to generating the two last arguments automatically once concat_idents is // stable. https://github.com/rust-lang/rust/issues/29599 ($fn_name: ident, $test_cpu: ident, $test_cuda: ident, $test_metal: ident) => { #[test] fn $test_cpu() -> Result<()> { $fn_name(&Device::Cpu) } #[cfg(feature = "cuda")] #[test] fn $test_cuda() -> Result<()> { $fn_name(&Device::new_cuda(0)?) } #[cfg(feature = "metal")] #[test] fn $test_metal() -> Result<()> { $fn_name(&Device::new_metal(0)?) } }; } pub fn assert_tensor_eq(t1: &Tensor, t2: &Tensor) -> Result<()> { assert_eq!(t1.shape(), t2.shape()); // Default U8 may not be large enough to hold the sum (`t.sum_all` defaults to the dtype of `t`) let eq_tensor = t1.eq(t2)?.to_dtype(crate::DType::U32)?; let all_equal = eq_tensor.sum_all()?; assert_eq!(all_equal.to_scalar::<u32>()?, eq_tensor.elem_count() as u32); Ok(()) } pub fn to_vec0_round(t: &Tensor, digits: i32) -> Result<f32> { let b = 10f32.powi(digits); let t = t.to_vec0::<f32>()?; Ok(f32::round(t * b) / b) } pub fn to_vec1_round(t: &Tensor, digits: i32) -> Result<Vec<f32>> { let b = 10f32.powi(digits); let t = t.to_vec1::<f32>()?; let t = t.iter().map(|t| f32::round(t * b) / b).collect(); Ok(t) } pub fn to_vec2_round(t: &Tensor, digits: i32) -> Result<Vec<Vec<f32>>> { let b = 10f32.powi(digits); let t = t.to_vec2::<f32>()?; let t = t .iter() .map(|t| t.iter().map(|t| f32::round(t * b) / b).collect()) .collect(); Ok(t) } pub fn to_vec3_round(t: &Tensor, digits: i32) -> Result<Vec<Vec<Vec<f32>>>> { let b = 10f32.powi(digits); let t = t.to_vec3::<f32>()?; let t = t .iter() .map(|t| { t.iter() .map(|t| t.iter().map(|t| f32::round(t * b) / b).collect()) .collect() }) .collect(); Ok(t) }
candle/candle-core/src/test_utils.rs/0
{ "file_path": "candle/candle-core/src/test_utils.rs", "repo_id": "candle", "token_count": 1110 }
30
use candle_core::{DType, Result, Tensor}; struct TmpFile(std::path::PathBuf); impl TmpFile { fn create(base: &str) -> TmpFile { let filename = std::env::temp_dir().join(format!( "candle-{}-{}-{:?}", base, std::process::id(), std::thread::current().id(), )); TmpFile(filename) } } impl std::convert::AsRef<std::path::Path> for TmpFile { fn as_ref(&self) -> &std::path::Path { self.0.as_path() } } impl Drop for TmpFile { fn drop(&mut self) { std::fs::remove_file(&self.0).unwrap() } } #[test] fn npy() -> Result<()> { let npy = Tensor::read_npy("tests/test.npy")?; assert_eq!( npy.to_dtype(DType::U8)?.to_vec1::<u8>()?, [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] ); Ok(()) } #[test] fn npz() -> Result<()> { let npz = Tensor::read_npz("tests/test.npz")?; assert_eq!(npz.len(), 2); assert_eq!(npz[0].0, "x"); assert_eq!(npz[1].0, "x_plus_one"); assert_eq!( npz[1].1.to_dtype(DType::U8)?.to_vec1::<u8>()?, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] ); Ok(()) } #[test] fn safetensors() -> Result<()> { use candle_core::safetensors::Load; let tmp_file = TmpFile::create("st"); let t = Tensor::arange(0f32, 24f32, &candle_core::Device::Cpu)?; t.save_safetensors("t", &tmp_file)?; // Load from file. let st = candle_core::safetensors::load(&tmp_file, &candle_core::Device::Cpu)?; let t2 = st.get("t").unwrap(); let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0f32); // Load from bytes. let bytes = std::fs::read(tmp_file)?; let st = candle_core::safetensors::SliceSafetensors::new(&bytes)?; let t2 = st.get("t").unwrap().load(&candle_core::Device::Cpu); let diff = (&t - t2)?.abs()?.sum_all()?.to_vec0::<f32>()?; assert_eq!(diff, 0f32); Ok(()) }
candle/candle-core/tests/serialization_tests.rs/0
{ "file_path": "candle/candle-core/tests/serialization_tests.rs", "repo_id": "candle", "token_count": 981 }
31
use candle::Tensor; pub struct Dataset { pub train_images: Tensor, pub train_labels: Tensor, pub test_images: Tensor, pub test_labels: Tensor, pub labels: usize, } pub mod cifar; pub mod fashion_mnist; pub mod mnist;
candle/candle-datasets/src/vision/mod.rs/0
{ "file_path": "candle/candle-datasets/src/vision/mod.rs", "repo_id": "candle", "token_count": 100 }
32
# candle-chinese-clip Contrastive Language-Image Pre-Training (CLIP) is an architecture trained on pairs of images with related texts. This one is trained on Chinese text instead of English. ## Running on cpu ```bash $ cargo run --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --cpu --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛" > Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg > > 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛 > 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片 > 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛 > 2025-03-25T19:22:01.325183Z INFO chinese_clip: > > Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg > > 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛 > 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片 > 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛 ``` ## Running on metal ```bash $ cargo run --features metal --example chinese_clip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --sequences "一场自行车比赛","两只猫的照片","一个机器人拿着蜡烛" > Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg > > 2025-03-25T19:22:01.325177Z INFO chinese_clip: Probability: 0.0000% Text: 一场自行车比赛 > 2025-03-25T19:22:01.325179Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片 > 2025-03-25T19:22:01.325181Z INFO chinese_clip: Probability: 100.0000% Text: 一个机器人拿着蜡烛 > 2025-03-25T19:22:01.325183Z INFO chinese_clip: > > Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg > > 2025-03-25T19:22:01.325184Z INFO chinese_clip: Probability: 100.0000% Text: 一场自行车比赛 > 2025-03-25T19:22:01.325186Z INFO chinese_clip: Probability: 0.0000% Text: 两只猫的照片 > 2025-03-25T19:22:01.325187Z INFO chinese_clip: Probability: 0.0000% Text: 一个机器人拿着蜡烛 ```
candle/candle-examples/examples/chinese_clip/README.md/0
{ "file_path": "candle/candle-examples/examples/chinese_clip/README.md", "repo_id": "candle", "token_count": 1129 }
33
pub const LAYERNORM_KERNELS: &str = include_str!(concat!(env!("OUT_DIR"), "/layernorm_kernels.ptx"));
candle/candle-examples/examples/custom-ops/cuda_kernels.rs/0
{ "file_path": "candle/candle-examples/examples/custom-ops/cuda_kernels.rs", "repo_id": "candle", "token_count": 44 }
34
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle_transformers::models::distilbert::{ Config, DistilBertForMaskedLM, DistilBertModel, DTYPE, }; use anyhow::{Context, Error as E, Result}; use candle::{Device, Tensor}; use candle_nn::VarBuilder; use clap::{Parser, ValueEnum}; use hf_hub::{api::sync::Api, Repo, RepoType}; use std::path::PathBuf; use tokenizers::Tokenizer; enum ModelType { Masked(Box<DistilBertForMaskedLM>), UnMasked(Box<DistilBertModel>), } impl ModelType { fn device(&self) -> &Device { match self { ModelType::Masked(model) => &model.bert.device, ModelType::UnMasked(model) => &model.device, } } fn forward(&self, input_ids: &Tensor, attention_mask: &Tensor) -> Result<Tensor> { match self { ModelType::Masked(model) => Ok(model.forward(input_ids, attention_mask)?), ModelType::UnMasked(model) => Ok(model.forward(input_ids, attention_mask)?), } } } #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Which { #[value(name = "distilbert")] DistilBert, #[value(name = "distilbertformaskedlm")] DistilbertForMaskedLM, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long, default_value = "distilbert")] model: Which, /// The model to use, check out available models: https://huggingface.co/models?library=sentence-transformers&sort=trending #[arg(long)] model_id: Option<String>, /// Revision or branch #[arg(long)] revision: Option<String>, /// When set, compute embeddings for this prompt. #[arg(long)] prompt: String, /// Use the pytorch weights rather than the safetensors ones #[arg(long)] use_pth: bool, /// The number of times to run the prompt. #[arg(long, default_value = "1")] n: usize, /// Number of top predictions to show for each mask #[arg(long, default_value = "5")] top_k: usize, } impl Args { fn build_model_and_tokenizer(&self) -> Result<(ModelType, Tokenizer)> { let device = candle_examples::device(self.cpu)?; let (model_id, revision) = self.resolve_model_and_revision(); let (config_path, tokenizer_path, weights_path) = self.download_model_files(&model_id, &revision)?; let config = std::fs::read_to_string(config_path)?; let config: Config = serde_json::from_str(&config)?; let tokenizer = Tokenizer::from_file(tokenizer_path).map_err(E::msg)?; let vb = self.load_variables(&weights_path, &device)?; let model = self.create_model(&config, vb)?; Ok((model, tokenizer)) } fn resolve_model_and_revision(&self) -> (String, String) { let default_model = "distilbert-base-uncased".to_string(); let default_revision = "main".to_string(); match (self.model_id.clone(), self.revision.clone()) { (Some(model_id), Some(revision)) => (model_id, revision), (Some(model_id), None) => (model_id, default_revision), (None, Some(revision)) => (default_model, revision), (None, None) => (default_model, default_revision), } } fn download_model_files( &self, model_id: &str, revision: &str, ) -> Result<(PathBuf, PathBuf, PathBuf)> { let repo = Repo::with_revision(model_id.to_string(), RepoType::Model, revision.to_string()); let api = Api::new()?; let api = api.repo(repo); let config = api.get("config.json")?; let tokenizer = api.get("tokenizer.json")?; let weights = if self.use_pth { api.get("pytorch_model.bin")? } else { api.get("model.safetensors")? 
}; Ok((config, tokenizer, weights)) } fn load_variables(&self, weights_path: &PathBuf, device: &Device) -> Result<VarBuilder<'_>> { if self.use_pth { Ok(VarBuilder::from_pth(weights_path, DTYPE, device)?) } else { Ok(unsafe { VarBuilder::from_mmaped_safetensors(&[weights_path], DTYPE, device)? }) } } fn create_model(&self, config: &Config, vb: VarBuilder) -> Result<ModelType> { match self.model { Which::DistilbertForMaskedLM => Ok(ModelType::Masked( DistilBertForMaskedLM::load(vb, config)?.into(), )), Which::DistilBert => Ok(ModelType::UnMasked( DistilBertModel::load(vb, config)?.into(), )), } } } fn main() -> Result<()> { let args = Args::parse(); let _guard = setup_tracing(&args); let (model, tokenizer) = args.build_model_and_tokenizer()?; let device = model.device(); let (token_ids, mask) = prepare_inputs(&args, &tokenizer, device)?; let output = model.forward(&token_ids, &mask)?; process_output(&model, &output, &token_ids, &tokenizer, &args)?; Ok(()) } fn setup_tracing(args: &Args) -> Option<impl Drop> { if args.tracing { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; println!("tracing..."); let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None } } fn prepare_inputs(args: &Args, tokenizer: &Tokenizer, device: &Device) -> Result<(Tensor, Tensor)> { let mut binding = tokenizer.clone(); let tokenizer_configured = binding .with_padding(None) .with_truncation(None) .map_err(E::msg)?; let tokens = tokenizer_configured .encode(args.prompt.clone(), true) .map_err(E::msg)? .get_ids() .to_vec(); let token_ids = Tensor::new(&tokens[..], device)?.unsqueeze(0)?; let mask = match args.model { Which::DistilbertForMaskedLM => attention_mask_maskedlm(tokenizer, &args.prompt, device)?, Which::DistilBert => attention_mask(tokens.len(), device)?, }; println!("token_ids: {:?}", token_ids.to_vec2::<u32>()?); Ok((token_ids, mask)) } fn process_output( model: &ModelType, output: &Tensor, token_ids: &Tensor, tokenizer: &Tokenizer, args: &Args, ) -> Result<()> { match model { ModelType::UnMasked(_) => { println!("embeddings"); println!("{output}"); } ModelType::Masked(_) => { process_masked_output(output, token_ids, tokenizer, args)?; } } Ok(()) } fn process_masked_output( output: &Tensor, token_ids: &Tensor, tokenizer: &Tokenizer, args: &Args, ) -> Result<()> { let input_ids_vec = token_ids.to_vec2::<u32>()?; let mask_token_id = tokenizer .token_to_id("[MASK]") .context("Mask token, \"[MASK]\", not found in tokenizer.")?; println!("\nInput: {}", args.prompt); for (token_idx, &token_id) in input_ids_vec[0].iter().enumerate() { if token_id == mask_token_id { println!("Predictions for [MASK] at position {token_idx}:"); let pos_logits = output.get(0)?.get(token_idx)?; let probs = candle_nn::ops::softmax(&pos_logits, 0)?; let (top_values, top_indices) = get_top_k(&probs, args.top_k)?; let values = top_values.to_vec1::<f32>()?; let indices = top_indices.to_vec1::<u32>()?; for (i, (&token_id, &prob)) in indices.iter().zip(values.iter()).enumerate() { let token = tokenizer.decode(&[token_id], false).map_err(E::msg)?; println!( " {}: {:15} (probability: {:.2}%)", i + 1, token, prob * 100.0 ); } } } Ok(()) } fn get_top_k(tensor: &Tensor, k: usize) -> Result<(Tensor, Tensor)> { let n = tensor.dims().iter().product::<usize>(); let k = std::cmp::min(k, n); let values = tensor.to_vec1::<f32>()?; let mut value_indices: Vec<(f32, usize)> = values .into_iter() .enumerate() .map(|(idx, val)| (val, idx)) 
.collect(); value_indices.sort_by(|a, b| b.0.partial_cmp(&a.0).unwrap_or(std::cmp::Ordering::Equal)); let top_k_values: Vec<f32> = value_indices.iter().take(k).map(|(val, _)| *val).collect(); let top_k_indices: Vec<u32> = value_indices .iter() .take(k) .map(|(_, idx)| *idx as u32) .collect(); let device = tensor.device(); let top_values = Tensor::from_vec(top_k_values, (k,), device)?; let top_indices = Tensor::from_vec(top_k_indices, (k,), device)?; Ok((top_values, top_indices)) } fn attention_mask(size: usize, device: &Device) -> Result<Tensor> { let mask: Vec<_> = (0..size) .flat_map(|i| (0..size).map(move |j| u8::from(j > i))) .collect(); Ok(Tensor::from_slice(&mask, (size, size), device)?) } fn attention_mask_maskedlm(tokenizer: &Tokenizer, input: &str, device: &Device) -> Result<Tensor> { let tokens = tokenizer.encode(input, true).map_err(E::msg)?; let seq_len = tokens.get_attention_mask().to_vec().len(); let mask_token_id = tokenizer .token_to_id("[MASK]") .context("Mask token, \"[MASK]\", not found in tokenizer.")?; let mut attention_mask_vec = Vec::with_capacity(seq_len * seq_len); let ids = tokens.get_ids(); for _ in 0..seq_len { for id in ids.iter() { let mask_value = if id == &mask_token_id { 1u8 } else { 0u8 }; attention_mask_vec.push(mask_value); } } let shape = (1, 1, seq_len, seq_len); let mask = Tensor::from_vec(attention_mask_vec, shape, device)?; Ok(mask) }
candle/candle-examples/examples/distilbert/main.rs/0
{ "file_path": "candle/candle-examples/examples/distilbert/main.rs", "repo_id": "candle", "token_count": 4559 }
35
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle_transformers::models::jina_bert::{BertModel, Config, PositionEmbeddingType}; use anyhow::Error as E; use candle::{DType, Module, Tensor}; use candle_nn::VarBuilder; use clap::Parser; #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, /// When set, compute embeddings for this prompt. #[arg(long)] prompt: Option<String>, /// The number of times to run the prompt. #[arg(long, default_value = "1")] n: usize, /// L2 normalization for embeddings. #[arg(long, default_value = "true")] normalize_embeddings: bool, #[arg(long)] tokenizer: Option<String>, #[arg(long)] model: Option<String>, #[arg(long)] model_file: Option<String>, } impl Args { fn build_model_and_tokenizer(&self) -> anyhow::Result<(BertModel, tokenizers::Tokenizer)> { use hf_hub::{api::sync::Api, Repo, RepoType}; let model_name = match self.model.as_ref() { Some(model) => model.to_string(), None => "jinaai/jina-embeddings-v2-base-en".to_string(), }; let model = match &self.model_file { Some(model_file) => std::path::PathBuf::from(model_file), None => Api::new()? .repo(Repo::new(model_name.to_string(), RepoType::Model)) .get("model.safetensors")?, }; let tokenizer = match &self.tokenizer { Some(file) => std::path::PathBuf::from(file), None => Api::new()? .repo(Repo::new(model_name.to_string(), RepoType::Model)) .get("tokenizer.json")?, }; let device = candle_examples::device(self.cpu)?; let tokenizer = tokenizers::Tokenizer::from_file(tokenizer).map_err(E::msg)?; let config = Config::new( tokenizer.get_vocab_size(true), 768, 12, 12, 3072, candle_nn::Activation::Gelu, 8192, 2, 0.02, 1e-12, 0, PositionEmbeddingType::Alibi, ); let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model], DType::F32, &device)? }; let model = BertModel::new(vb, &config)?; Ok((model, tokenizer)) } } fn main() -> anyhow::Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { println!("tracing..."); let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; let start = std::time::Instant::now(); let (model, mut tokenizer) = args.build_model_and_tokenizer()?; let device = &model.device; if let Some(prompt) = args.prompt { let tokenizer = tokenizer .with_padding(None) .with_truncation(None) .map_err(E::msg)?; let tokens = tokenizer .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let token_ids = Tensor::new(&tokens[..], device)?.unsqueeze(0)?; println!("Loaded and encoded {:?}", start.elapsed()); let start = std::time::Instant::now(); let embeddings = model.forward(&token_ids)?; let (_n_sentence, n_tokens, _hidden_size) = embeddings.dims3()?; let embeddings = (embeddings.sum(1)? / (n_tokens as f64))?; println!("pooled_embeddigns: {embeddings}"); let embeddings = if args.normalize_embeddings { normalize_l2(&embeddings)? 
} else { embeddings }; if args.normalize_embeddings { println!("normalized_embeddings: {embeddings}"); } println!("Took {:?}", start.elapsed()); } else { let sentences = [ "The cat sits outside", "A man is playing guitar", "I love pasta", "The new movie is awesome", "The cat plays in the garden", "A woman watches TV", "The new movie is so great", "Do you like pizza?", ]; let n_sentences = sentences.len(); if let Some(pp) = tokenizer.get_padding_mut() { pp.strategy = tokenizers::PaddingStrategy::BatchLongest } else { let pp = tokenizers::PaddingParams { strategy: tokenizers::PaddingStrategy::BatchLongest, ..Default::default() }; tokenizer.with_padding(Some(pp)); } let tokens = tokenizer .encode_batch(sentences.to_vec(), true) .map_err(E::msg)?; let token_ids = tokens .iter() .map(|tokens| { let tokens = tokens.get_ids().to_vec(); Tensor::new(tokens.as_slice(), device) }) .collect::<candle::Result<Vec<_>>>()?; let token_ids = Tensor::stack(&token_ids, 0)?; println!("running inference on batch {:?}", token_ids.shape()); let embeddings = model.forward(&token_ids)?; println!("generated embeddings {:?}", embeddings.shape()); // Apply some avg-pooling by taking the mean embedding value for all tokens (including padding) let (_n_sentence, n_tokens, _hidden_size) = embeddings.dims3()?; let embeddings = (embeddings.sum(1)? / (n_tokens as f64))?; let embeddings = if args.normalize_embeddings { normalize_l2(&embeddings)? } else { embeddings }; println!("pooled embeddings {:?}", embeddings.shape()); let mut similarities = vec![]; for i in 0..n_sentences { let e_i = embeddings.get(i)?; for j in (i + 1)..n_sentences { let e_j = embeddings.get(j)?; let sum_ij = (&e_i * &e_j)?.sum_all()?.to_scalar::<f32>()?; let sum_i2 = (&e_i * &e_i)?.sum_all()?.to_scalar::<f32>()?; let sum_j2 = (&e_j * &e_j)?.sum_all()?.to_scalar::<f32>()?; let cosine_similarity = sum_ij / (sum_i2 * sum_j2).sqrt(); similarities.push((cosine_similarity, i, j)) } } similarities.sort_by(|u, v| v.0.total_cmp(&u.0)); for &(score, i, j) in similarities[..5].iter() { println!("score: {score:.2} '{}' '{}'", sentences[i], sentences[j]) } } Ok(()) } pub fn normalize_l2(v: &Tensor) -> candle::Result<Tensor> { v.broadcast_div(&v.sqr()?.sum_keepdim(1)?.sqrt()?) }
candle/candle-examples/examples/jina-bert/main.rs/0
{ "file_path": "candle/candle-examples/examples/jina-bert/main.rs", "repo_id": "candle", "token_count": 3414 }
36
# candle-mobileclip MobileCLIP is a family of efficient CLIP-like models using FastViT-based image encoders. See [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/abs/2311.17049). ## Running an example on cpu ``` $ cargo run --example mobileclip --release -- --images "candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg","candle-examples/examples/yolo-v8/assets/bike.jpg" --cpu --sequences "a cycling race","a photo of two cats","a robot holding a candle" softmax_image_vec: [2.4819004e-5, 3.81081e-6, 0.9999714, 0.9999738, 2.382714e-5, 2.3317718e-6] Results for image: candle-examples/examples/stable-diffusion/assets/stable-diffusion-xl.jpg Probability: 0.0025% Text: a cycling race Probability: 0.0004% Text: a photo of two cats Probability: 99.9971% Text: a robot holding a candle Results for image: candle-examples/examples/yolo-v8/assets/bike.jpg Probability: 99.9974% Text: a cycling race Probability: 0.0024% Text: a photo of two cats Probability: 0.0002% Text: a robot holding a candle ```
candle/candle-examples/examples/mobileclip/README.md/0
{ "file_path": "candle/candle-examples/examples/mobileclip/README.md", "repo_id": "candle", "token_count": 379 }
37
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::{Parser, ValueEnum}; use candle_transformers::models::olmo::{Config, Model as OLMo}; use candle_transformers::models::olmo2::{Config as Config2, Model as OLMo2}; use candle::{DType, Device, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; enum Model { OLMo(OLMo), OLMo2(OLMo2), } struct TextGeneration { model: Model, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, false) .map_err(E::msg)? .get_ids() .to_vec(); for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let mut generated_tokens = 0usize; let eos_token = match self.tokenizer.get_token("<|endoftext|>") { Some(token) => token, None => anyhow::bail!("cannot find the <|endoftext|> token"), }; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; let logits = match &mut self.model { Model::OLMo(m) => m.forward(&input, start_pos)?, Model::OLMo2(m) => m.forward(&input, start_pos)?, }; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. { logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Clone, Copy, Debug, ValueEnum, PartialEq, Eq)] enum Which { #[value(name = "1b")] W1b, #[value(name = "7b")] W7b, #[value(name = "7b-twin-2t")] W7bTwin2T, #[value(name = "1.7-7b")] V1_7W7b, #[value(name = "2-1b")] V2W1b, } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long)] prompt: String, /// The temperature used to generate samples. 
#[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 1000)] sample_len: usize, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long, default_value = "1b")] model: Which, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. #[arg(long, default_value_t = 64)] repeat_last_n: usize, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match args.model_id { Some(model_id) => model_id, None => match args.model { Which::W1b => "allenai/OLMo-1B-hf".to_string(), Which::W7b => "allenai/OLMo-7B-hf".to_string(), Which::W7bTwin2T => "allenai/OLMo-7B-Twin-2T-hf".to_string(), Which::V1_7W7b => "allenai/OLMo-1.7-7B-hf".to_string(), Which::V2W1b => "allenai/OLMo-2-0425-1B-Instruct".to_string(), }, }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => match args.model { Which::W1b | Which::V2W1b => { vec![repo.get("model.safetensors")?] } _ => candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?, }, }; let config_filename = repo.get("config.json")?; println!("retrieved the files in {:?}", start.elapsed()); let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let device = candle_examples::device(args.cpu)?; let dtype = if device.is_cuda() { DType::BF16 } else { DType::F32 }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; let model = match args.model { Which::W1b | Which::W7b | Which::W7bTwin2T | Which::V1_7W7b => { let config: Config = serde_json::from_slice(&std::fs::read(config_filename)?)?; let model = OLMo::new(&config, vb)?; Model::OLMo(model) } Which::V2W1b => { let config: Config2 = serde_json::from_slice(&std::fs::read(config_filename)?)?; let model = OLMo2::new(&config, vb)?; Model::OLMo2(model) } }; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; Ok(()) }
candle/candle-examples/examples/olmo/main.rs/0
{ "file_path": "candle/candle-examples/examples/olmo/main.rs", "repo_id": "candle", "token_count": 4321 }
38
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::{Error as E, Result}; use clap::Parser; use candle_transformers::models::pixtral::{vision_model, Config, Model}; use candle::{DType, Device, Module, Tensor}; use candle_examples::token_output_stream::TokenOutputStream; use candle_nn::VarBuilder; use candle_transformers::generation::LogitsProcessor; use hf_hub::{api::sync::Api, Repo, RepoType}; use tokenizers::Tokenizer; struct TextGeneration { model: Model, image: Tensor, device: Device, tokenizer: TokenOutputStream, logits_processor: LogitsProcessor, repeat_penalty: f32, repeat_last_n: usize, } impl TextGeneration { #[allow(clippy::too_many_arguments)] fn new( model: Model, image: Tensor, tokenizer: Tokenizer, seed: u64, temp: Option<f64>, top_p: Option<f64>, repeat_penalty: f32, repeat_last_n: usize, device: &Device, ) -> Self { let logits_processor = LogitsProcessor::new(seed, temp, top_p); Self { model, image, tokenizer: TokenOutputStream::new(tokenizer), logits_processor, repeat_penalty, repeat_last_n, device: device.clone(), } } fn run(&mut self, prompt: &str, sample_len: usize) -> Result<()> { use std::io::Write; self.tokenizer.clear(); let mut tokens = self .tokenizer .tokenizer() .encode(prompt, true) .map_err(E::msg)? .get_ids() .to_vec(); let mut generated_tokens = 0usize; let get_token = |v| match self.tokenizer.get_token(v) { Some(token) => Ok(token), None => anyhow::bail!("cannot find the {v} token"), }; let bos_token = get_token("<s>")?; let eos_token = get_token("</s>")?; let inst_token = get_token("[INST]")?; let end_inst_token = get_token("[/INST]")?; let img_break = get_token("[IMG_BREAK]")?; let img_end = get_token("[IMG_END]")?; let start_gen = std::time::Instant::now(); for index in 0..sample_len { let logits = if index > 0 { let context_size = if index > 0 { 1 } else { tokens.len() }; let start_pos = tokens.len().saturating_sub(context_size); let ctxt = &tokens[start_pos..]; let input = Tensor::new(ctxt, &self.device)?.unsqueeze(0)?; self.model.lm_forward(&input)? } else { let (_b, _c, h, w) = self.image.dims4()?; let h = h / self.model.patch_size; let w = w / self.model.patch_size; let image_embeds = self.model.encode_image(&self.image)?; println!("generated image embeddings {image_embeds:?}"); let image_embeds = image_embeds.to_dtype(self.model.dtype)?; for &t in tokens.iter() { if let Some(t) = self.tokenizer.next_token(t)? { print!("{t}") } } std::io::stdout().flush()?; let break_embeds = { let input = Tensor::new(&[img_break], &self.device)?.unsqueeze(0)?; self.model.language_model.embed_tokens().forward(&input)? }; let start_embeds = { let mut in_tokens = vec![bos_token, inst_token]; in_tokens.extend_from_slice(tokens.as_slice()); let input = Tensor::new(in_tokens.as_slice(), &self.device)?.unsqueeze(0)?; self.model.language_model.embed_tokens().forward(&input)? }; let end_embeds = { let input = Tensor::new(&[img_end, end_inst_token], &self.device)?.unsqueeze(0)?; self.model.language_model.embed_tokens().forward(&input)? }; let mut input_embeds = vec![start_embeds]; for h_idx in 0..h { if h_idx > 0 { input_embeds.push(break_embeds.clone()) } let row = image_embeds.narrow(1, h_idx * w, w)?; input_embeds.push(row); } input_embeds.push(end_embeds); let input_embeds = Tensor::cat(&input_embeds, 1)?; self.model.lm_forward_embeds(&input_embeds)? }; let logits = logits.squeeze(0)?.squeeze(0)?.to_dtype(DType::F32)?; let logits = if self.repeat_penalty == 1. 
{ logits } else { let start_at = tokens.len().saturating_sub(self.repeat_last_n); candle_transformers::utils::apply_repeat_penalty( &logits, self.repeat_penalty, &tokens[start_at..], )? }; let next_token = self.logits_processor.sample(&logits)?; tokens.push(next_token); generated_tokens += 1; if next_token == eos_token { break; } if let Some(t) = self.tokenizer.next_token(next_token)? { print!("{t}"); std::io::stdout().flush()?; } } let dt = start_gen.elapsed(); if let Some(rest) = self.tokenizer.decode_rest().map_err(E::msg)? { print!("{rest}"); } std::io::stdout().flush()?; println!( "\n{generated_tokens} tokens generated ({:.2} token/s)", generated_tokens as f64 / dt.as_secs_f64(), ); Ok(()) } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Enable tracing (generates a trace-timestamp.json file). #[arg(long)] tracing: bool, #[arg(long, default_value = "Describe the image.\n")] prompt: String, /// The temperature used to generate samples. #[arg(long)] temperature: Option<f64>, /// Nucleus sampling probability cutoff. #[arg(long)] top_p: Option<f64>, /// The seed to use when generating random samples. #[arg(long, default_value_t = 299792458)] seed: u64, /// The length of the sample to generate (in tokens). #[arg(long, short = 'n', default_value_t = 10000)] sample_len: usize, #[arg(long)] model_id: Option<String>, #[arg(long, default_value = "main")] revision: String, #[arg(long)] tokenizer_file: Option<String>, #[arg(long)] config_file: Option<String>, #[arg(long)] weight_files: Option<String>, /// Penalty to be applied for repeating tokens, 1. means no penalty. #[arg(long, default_value_t = 1.1)] repeat_penalty: f32, /// The context size to consider for the repeat penalty. 
#[arg(long, default_value_t = 64)] repeat_last_n: usize, #[arg(long)] image: String, #[arg(long)] vision_only: bool, } fn main() -> Result<()> { use tracing_chrome::ChromeLayerBuilder; use tracing_subscriber::prelude::*; let args = Args::parse(); let _guard = if args.tracing { let (chrome_layer, guard) = ChromeLayerBuilder::new().build(); tracing_subscriber::registry().with(chrome_layer).init(); Some(guard) } else { None }; println!( "avx: {}, neon: {}, simd128: {}, f16c: {}", candle::utils::with_avx(), candle::utils::with_neon(), candle::utils::with_simd128(), candle::utils::with_f16c() ); println!( "temp: {:.2} repeat-penalty: {:.2} repeat-last-n: {}", args.temperature.unwrap_or(0.), args.repeat_penalty, args.repeat_last_n ); let start = std::time::Instant::now(); let api = Api::new()?; let model_id = match &args.model_id { Some(model_id) => model_id.to_string(), None => "mistral-community/pixtral-12b".to_string(), }; let repo = api.repo(Repo::with_revision( model_id, RepoType::Model, args.revision, )); let tokenizer_filename = match args.tokenizer_file { Some(file) => std::path::PathBuf::from(file), None => repo.get("tokenizer.json")?, }; let filenames = match args.weight_files { Some(files) => files .split(',') .map(std::path::PathBuf::from) .collect::<Vec<_>>(), None => candle_examples::hub_load_safetensors(&repo, "model.safetensors.index.json")?, }; println!("retrieved the files in {:?}", start.elapsed()); let device = candle_examples::device(args.cpu)?; let dtype = if device.supports_bf16() && !args.vision_only { DType::BF16 } else { DType::F32 }; let config: Config = match args.config_file { Some(config_file) => serde_json::from_slice(&std::fs::read(config_file)?)?, None => { let config_file = repo.get("config.json")?; serde_json::from_slice(&std::fs::read(config_file)?)? } }; let image = if args.image.ends_with(".safetensors") { match candle::safetensors::load(&args.image, &device)?.remove("img") { None => anyhow::bail!("no img tensor in {}", args.image), Some(v) => v, } } else { candle_examples::imagenet::load_image_with_std_mean( &args.image, 1024, &[0.48145466, 0.4578275, 0.40821073], &[0.26862954, 0.261_302_6, 0.275_777_1], )? }; let image = image.to_device(&device)?.unsqueeze(0)?; println!("loaded image with shape {image:?}"); let vb = unsafe { VarBuilder::from_mmaped_safetensors(&filenames, dtype, &device)? }; if args.vision_only { let start = std::time::Instant::now(); let model = vision_model::Model::new(&config.vision_config, vb.pp("vision_tower"))?; println!("loaded the model in {:?}", start.elapsed()); let embs = model.forward(&image)?; println!("EMBS\n{embs}"); } else { let tokenizer = Tokenizer::from_file(tokenizer_filename).map_err(E::msg)?; let start = std::time::Instant::now(); let model = Model::new(&config, vb)?; println!("loaded the model in {:?}", start.elapsed()); let mut pipeline = TextGeneration::new( model, image, tokenizer, args.seed, args.temperature, args.top_p, args.repeat_penalty, args.repeat_last_n, &device, ); pipeline.run(&args.prompt, args.sample_len)?; } Ok(()) }
candle/candle-examples/examples/pixtral/main.rs/0
{ "file_path": "candle/candle-examples/examples/pixtral/main.rs", "repo_id": "candle", "token_count": 5495 }
39
# candle-recurrent-gemma This example corresponds to the 2B base version of the RecurrentGemma model; see the [huggingface model card](https://huggingface.co/google/recurrentgemma-2b). ```bash cargo run --features cuda -r --example recurrent-gemma -- \ --prompt "Write me a poem about Machine Learning." ```
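If no CUDA GPU is available, the example can also be built without the `cuda` feature, in which case it should fall back to running on the CPU (markedly slower for a 2B model). This is a minimal sketch under that assumption, reusing only the flag shown above:

```bash
# Assumed CPU invocation: same flags as above, just without the cuda feature.
cargo run -r --example recurrent-gemma -- \
  --prompt "Write me a poem about Machine Learning."
```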
candle/candle-examples/examples/recurrent-gemma/README.md/0
{ "file_path": "candle/candle-examples/examples/recurrent-gemma/README.md", "repo_id": "candle", "token_count": 101 }
40
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle::{DType, IndexOp, D}; use candle_nn::{Module, VarBuilder}; use candle_transformers::models::resnet; use clap::{Parser, ValueEnum}; #[derive(Clone, Copy, Debug, ValueEnum)] enum Which { #[value(name = "18")] Resnet18, #[value(name = "34")] Resnet34, #[value(name = "50")] Resnet50, #[value(name = "101")] Resnet101, #[value(name = "152")] Resnet152, } #[derive(Parser)] struct Args { #[arg(long)] model: Option<String>, #[arg(long)] image: String, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// Variant of the model to use. #[arg(value_enum, long, default_value_t = Which::Resnet18)] which: Which, } pub fn main() -> anyhow::Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let image = candle_examples::imagenet::load_image224(args.image)?.to_device(&device)?; println!("loaded image {image:?}"); let model_file = match args.model { None => { let api = hf_hub::api::sync::Api::new()?; let api = api.model("lmz/candle-resnet".into()); let filename = match args.which { Which::Resnet18 => "resnet18.safetensors", Which::Resnet34 => "resnet34.safetensors", Which::Resnet50 => "resnet50.safetensors", Which::Resnet101 => "resnet101.safetensors", Which::Resnet152 => "resnet152.safetensors", }; api.get(filename)? } Some(model) => model.into(), }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model_file], DType::F32, &device)? }; let class_count = candle_examples::imagenet::CLASS_COUNT as usize; let model = match args.which { Which::Resnet18 => resnet::resnet18(class_count, vb)?, Which::Resnet34 => resnet::resnet34(class_count, vb)?, Which::Resnet50 => resnet::resnet50(class_count, vb)?, Which::Resnet101 => resnet::resnet101(class_count, vb)?, Which::Resnet152 => resnet::resnet152(class_count, vb)?, }; println!("model built"); let logits = model.forward(&image.unsqueeze(0)?)?; let prs = candle_nn::ops::softmax(&logits, D::Minus1)? .i(0)? .to_vec1::<f32>()?; let mut prs = prs.iter().enumerate().collect::<Vec<_>>(); prs.sort_by(|(_, p1), (_, p2)| p2.total_cmp(p1)); for &(category_idx, pr) in prs.iter().take(5) { println!( "{:24}: {:.2}%", candle_examples::imagenet::CLASSES[category_idx], 100. * pr ); } Ok(()) }
candle/candle-examples/examples/resnet/main.rs/0
{ "file_path": "candle/candle-examples/examples/resnet/main.rs", "repo_id": "candle", "token_count": 1288 }
41
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use anyhow::Result; use candle::{DType, IndexOp, Tensor}; use candle_nn::VarBuilder; use candle_transformers::models::snac::{Config, Model}; use clap::{Parser, ValueEnum}; use hf_hub::api::sync::Api; mod audio_io; #[derive(Clone, Debug, Copy, PartialEq, Eq, ValueEnum)] enum Action { AudioToAudio, AudioToCode, CodeToAudio, } #[derive(Clone, Debug, Copy, PartialEq, Eq, clap::ValueEnum)] enum Which { #[value(name = "24khz")] S24khz, #[value(name = "32khz")] S32khz, #[value(name = "44khz")] S44khz, } impl Which { fn sample_rate(&self) -> u32 { match self { Which::S24khz => 24000, Which::S32khz => 32000, Which::S44khz => 44000, } } fn config_repo(&self) -> &'static str { match self { Which::S24khz => "hubertsiuzdak/snac_24khz", Which::S32khz => "hubertsiuzdak/snac_32khz", Which::S44khz => "hubertsiuzdak/snac_44khz", } } fn model_file(&self) -> &'static str { match self { Which::S24khz => "snac_24khz.safetensors", Which::S32khz => "snac_32khz.safetensors", Which::S44khz => "snac_44khz.safetensors", } } } #[derive(Parser, Debug)] #[command(author, version, about, long_about = None)] struct Args { /// The action to be performed, specifies the format for the input and output data. action: Action, /// The input file, either an audio file or some snac tokens stored as safetensors. in_file: String, /// The output file, either a wave audio file or some snac tokens stored as safetensors. out_file: String, /// The model size to use. #[arg(long, default_value = "24khz")] which: Which, /// Run on CPU rather than on GPU. #[arg(long)] cpu: bool, /// The model weight file, in safetensor format. #[arg(long)] model: Option<String>, /// The config file, in safetensor format. #[arg(long)] config: Option<String>, } fn main() -> Result<()> { let args = Args::parse(); let device = candle_examples::device(args.cpu)?; let model_sample_rate = args.which.sample_rate(); let config = match args.config { Some(c) => std::path::PathBuf::from(c), None => Api::new()? .model(args.which.config_repo().to_string()) .get("config.json")?, }; let config: Config = serde_json::from_slice(&std::fs::read(config)?)?; let model = match args.model { Some(model) => std::path::PathBuf::from(model), None => Api::new()? .model("lmz/candle-snac".to_string()) .get(args.which.model_file())?, }; let vb = unsafe { VarBuilder::from_mmaped_safetensors(&[model], DType::F32, &device)? 
}; let model = Model::new(&config, vb)?; let codes = match args.action { Action::CodeToAudio => { let codes = candle::safetensors::load(args.in_file, &device)?; let num_codebooks = model.num_codebooks(); (0..num_codebooks) .map(|i| { codes .get(&format!("codes-{i}")) .expect("no codes in input file") .clone() }) .collect::<Vec<_>>() } Action::AudioToCode | Action::AudioToAudio => { let pcm = if args.in_file == "-" { println!(">>>> RECORDING AUDIO, PRESS ENTER ONCE DONE <<<<"); let (stream, input_audio) = audio_io::setup_input_stream()?; let mut pcms = vec![]; let stdin = std::thread::spawn(|| { let mut s = String::new(); std::io::stdin().read_line(&mut s) }); while !stdin.is_finished() { let input = input_audio.lock().unwrap().take_all(); if input.is_empty() { std::thread::sleep(std::time::Duration::from_millis(100)); continue; } pcms.push(input) } drop(stream); pcms.concat() } else { let (pcm, sample_rate) = audio_io::pcm_decode(args.in_file)?; if sample_rate != model_sample_rate { println!("WARNING: snac uses a {model_sample_rate} sample rate, input uses {sample_rate}, resampling..."); candle_examples::audio::resample(&pcm, sample_rate, model_sample_rate)? } else { pcm } }; let pcm_len = pcm.len(); let pcm = Tensor::from_vec(pcm, (1, 1, pcm_len), &device)?; println!("input pcm shape: {:?}", pcm.shape()); model.encode(&pcm)? } }; for codes in codes.iter() { println!("codes shape: {:?}", codes.shape()); } match args.action { Action::AudioToCode => { let mut tensors = std::collections::HashMap::new(); for (i, codes) in codes.iter().enumerate() { tensors.insert(format!("codes-{i}"), codes.clone()); } candle::safetensors::save(&tensors, "codes.safetensors")?; } Action::AudioToAudio | Action::CodeToAudio => { let codes = codes.iter().collect::<Vec<_>>(); let pcm = model.decode(&codes)?; println!("output pcm shape: {:?}", pcm.shape()); let pcm = pcm.i(0)?.i(0)?; let pcm = candle_examples::audio::normalize_loudness(&pcm, model_sample_rate, true)?; let pcm = pcm.to_vec1::<f32>()?; if args.out_file == "-" { let (stream, ad) = audio_io::setup_output_stream()?; { let mut ad = ad.lock().unwrap(); ad.push_samples(&pcm)?; } loop { let ad = ad.lock().unwrap(); if ad.is_empty() { break; } // That's very weird, calling thread::sleep here triggers the stream to stop // playing (the callback doesn't seem to be called anymore). // std::thread::sleep(std::time::Duration::from_millis(100)); } drop(stream) } else { let mut output = std::fs::File::create(&args.out_file)?; candle_examples::wav::write_pcm_as_wav(&mut output, &pcm, model_sample_rate)?; } } } Ok(()) }
candle/candle-examples/examples/snac/main.rs/0
{ "file_path": "candle/candle-examples/examples/snac/main.rs", "repo_id": "candle", "token_count": 3485 }
42
# candle-stella-en-v5: Implementation of [stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) embedding model As of 7th Oct 2024, *Stella_en_1.5B_v5* is one of the top-ranking models on `retrieval` and `reranking` tasks on the [MTEB](https://huggingface.co/spaces/mteb/leaderboard) leaderboard. [Model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on the HuggingFace Hub. ## Running the example Stella_en_1.5B_v5 is used to generate text embeddings for a prompt. The model weights are downloaded from the hub on the first run. ```bash $ cargo run --example stella-en-v5 --release -- --query "What are safetensors?" --which 1.5b > [[ 0.3905, -0.0130, 0.2072, ..., -0.1100, -0.0086, 0.6002]] > Tensor[[1, 1024], f32] ``` Stella_en_1.5B_v5 is trained by [MRL](https://arxiv.org/abs/2205.13147) enabling multiple embedding dimensions. The following reproduces the example in the [model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5) for a retrieval task (s2p). The sample queries and docs are hardcoded in the example. ```bash $ cargo run --example stella-en-v5 --release --features <metal | cuda> -- --which 1.5b > > Score: 0.8178786 > Query: What are some ways to reduce stress? > Answer: There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending > time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent > stress from building up. > > > Score: 0.7853528 > Query: What are the benefits of drinking green tea? > Answer: Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage > caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types > > of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties. > $ cargo run --example stella-en-v5 --release --features <metal | cuda> -- --which 400m > > Score: 0.8397539 > Query: What are some ways to reduce stress? > Answer: There are many effective ways to reduce stress. Some common techniques include deep breathing, meditation, and physical activity. Engaging in hobbies, spending > time in nature, and connecting with loved ones can also help alleviate stress. Additionally, setting boundaries, practicing self-care, and learning to say no can prevent > stress from building up. > > > > Score: 0.809545 > Query: What are the benefits of drinking green tea? > Answer: Green tea has been consumed for centuries and is known for its potential health benefits. It contains antioxidants that may help protect the body against damage > caused by free radicals. Regular consumption of green tea has been associated with improved heart health, enhanced cognitive function, and a reduced risk of certain types > of cancer. The polyphenols in green tea may also have anti-inflammatory and weight loss properties. > ``` ## Supported options: - `Stella_en_v5` has 2 model variants published - a 1.5B variant and 400M variant. This is enabled through the flag `--which`. E.g. `--which 400m` or `--which 1.5b`. - `Stella_en_v5` supports 256, 768, 1024, 2048, 4096, 6144 and 8192 embedding dimensions (though the model card mentions 512, I couldn't find weights for the same). 
In the example run this is supported with the `--embed-dim` option. E.g. `... --embed-dim 4096`. Defaults to `1024`. - As per the [model card](https://huggingface.co/dunzhang/stella_en_1.5B_v5), the model has been primarily trained on `s2s` (similarity) and `s2p` (retrieval) tasks. These require slightly different `query` preprocessing (a different prompt template for each). In this example this is enabled through the `--task` option.
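As a rough sketch, the options above can be combined in a single invocation. The values passed to `--task` are assumed here to be the task names from the model card (`s2s` / `s2p`); check the example's `--help` output for the exact spelling, and pick `metal` or `cuda` as appropriate for your machine:

```bash
# Assumed combined invocation; the flag values are illustrative, not verified.
cargo run --example stella-en-v5 --release --features metal -- \
  --which 400m --embed-dim 256 --task s2s --query "What are safetensors?"
```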
candle/candle-examples/examples/stella-en-v5/README.md/0
{ "file_path": "candle/candle-examples/examples/stella-en-v5/README.md", "repo_id": "candle", "token_count": 1149 }
43
# candle-yi Candle implementations of the Yi family of bilingual (English, Chinese) LLMs. ## Running an example ```bash $ cargo run --example yi -- --prompt "Here is a test sentence" > python > print("Hello World") > ```
candle/candle-examples/examples/yi/README.md/0
{ "file_path": "candle/candle-examples/examples/yi/README.md", "repo_id": "candle", "token_count": 73 }
44
// Copied from https://github.com/ruuda/bs1770/blob/master/src/lib.rs // BS1770 -- Loudness analysis library conforming to ITU-R BS.1770 // Copyright 2020 Ruud van Asseldonk // Licensed under the Apache License, Version 2.0 (the "License"); // you may not use this file except in compliance with the License. // A copy of the License has been included in the root of the repository. //! Loudness analysis conforming to [ITU-R BS.1770-4][bs17704]. //! //! This library offers the building blocks to perform BS.1770 loudness //! measurements, but you need to put the pieces together yourself. //! //! [bs17704]: https://www.itu.int/rec/R-REC-BS.1770-4-201510-I/en //! //! # Stereo integrated loudness example //! //! ```ignore //! # fn load_stereo_audio() -> [Vec<i16>; 2] { //! # [vec![0; 48_000], vec![0; 48_000]] //! # } //! # //! let sample_rate_hz = 44_100; //! let bits_per_sample = 16; //! let channel_samples: [Vec<i16>; 2] = load_stereo_audio(); //! //! // When converting integer samples to float, note that the maximum amplitude //! // is `1 << (bits_per_sample - 1)`, one bit is the sign bit. //! let normalizer = 1.0 / (1_u64 << (bits_per_sample - 1)) as f32; //! //! let channel_power: Vec<_> = channel_samples.iter().map(|samples| { //! let mut meter = bs1770::ChannelLoudnessMeter::new(sample_rate_hz); //! meter.push(samples.iter().map(|&s| s as f32 * normalizer)); //! meter.into_100ms_windows() //! }).collect(); //! //! let stereo_power = bs1770::reduce_stereo( //! channel_power[0].as_ref(), //! channel_power[1].as_ref(), //! ); //! //! let gated_power = bs1770::gated_mean( //! stereo_power.as_ref() //! ).unwrap_or(bs1770::Power(0.0)); //! println!("Integrated loudness: {:.1} LUFS", gated_power.loudness_lkfs()); //! ``` use std::f32; /// Coefficients for a 2nd-degree infinite impulse response filter. /// /// Coefficient a0 is implicitly 1.0. #[derive(Clone)] struct Filter { a1: f32, a2: f32, b0: f32, b1: f32, b2: f32, // The past two input and output samples. x1: f32, x2: f32, y1: f32, y2: f32, } impl Filter { /// Stage 1 of th BS.1770-4 pre-filter. pub fn high_shelf(sample_rate_hz: f32) -> Filter { // Coefficients taken from https://github.com/csteinmetz1/pyloudnorm/blob/ // 6baa64d59b7794bc812e124438692e7fd2e65c0c/pyloudnorm/meter.py#L135-L136. let gain_db = 3.999_843_8; let q = 0.707_175_25; let center_hz = 1_681.974_5; // Formula taken from https://github.com/csteinmetz1/pyloudnorm/blob/ // 6baa64d59b7794bc812e124438692e7fd2e65c0c/pyloudnorm/iirfilter.py#L134-L143. let k = (f32::consts::PI * center_hz / sample_rate_hz).tan(); let vh = 10.0_f32.powf(gain_db / 20.0); let vb = vh.powf(0.499_666_78); let a0 = 1.0 + k / q + k * k; Filter { b0: (vh + vb * k / q + k * k) / a0, b1: 2.0 * (k * k - vh) / a0, b2: (vh - vb * k / q + k * k) / a0, a1: 2.0 * (k * k - 1.0) / a0, a2: (1.0 - k / q + k * k) / a0, x1: 0.0, x2: 0.0, y1: 0.0, y2: 0.0, } } /// Stage 2 of th BS.1770-4 pre-filter. pub fn high_pass(sample_rate_hz: f32) -> Filter { // Coefficients taken from https://github.com/csteinmetz1/pyloudnorm/blob/ // 6baa64d59b7794bc812e124438692e7fd2e65c0c/pyloudnorm/meter.py#L135-L136. 
let q = 0.500_327_05; let center_hz = 38.135_47; // Formula taken from https://github.com/csteinmetz1/pyloudnorm/blob/ // 6baa64d59b7794bc812e124438692e7fd2e65c0c/pyloudnorm/iirfilter.py#L145-L151 let k = (f32::consts::PI * center_hz / sample_rate_hz).tan(); Filter { a1: 2.0 * (k * k - 1.0) / (1.0 + k / q + k * k), a2: (1.0 - k / q + k * k) / (1.0 + k / q + k * k), b0: 1.0, b1: -2.0, b2: 1.0, x1: 0.0, x2: 0.0, y1: 0.0, y2: 0.0, } } /// Feed the next input sample, get the next output sample. #[inline(always)] pub fn apply(&mut self, x0: f32) -> f32 { let y0 = 0.0 + self.b0 * x0 + self.b1 * self.x1 + self.b2 * self.x2 - self.a1 * self.y1 - self.a2 * self.y2; self.x2 = self.x1; self.x1 = x0; self.y2 = self.y1; self.y1 = y0; y0 } } /// Compensated sum, for summing many values of different orders of magnitude /// accurately. #[derive(Copy, Clone, PartialEq)] struct Sum { sum: f32, residue: f32, } impl Sum { #[inline(always)] fn zero() -> Sum { Sum { sum: 0.0, residue: 0.0, } } #[inline(always)] fn add(&mut self, x: f32) { let sum = self.sum + (self.residue + x); self.residue = (self.residue + x) - (sum - self.sum); self.sum = sum; } } /// The mean of the squares of the K-weighted samples in a window of time. /// /// K-weighted power is equivalent to K-weighted loudness, the only difference /// is one of scale: power is quadratic in sample amplitudes, whereas loudness /// units are logarithmic. `loudness_lkfs` and `from_lkfs` convert between power, /// and K-weighted Loudness Units relative to nominal Full Scale (LKFS). /// /// The term “LKFS” (Loudness Units, K-Weighted, relative to nominal Full Scale) /// is used in BS.1770-4 to emphasize K-weighting, but the term is otherwise /// interchangeable with the more widespread term “LUFS” (Loudness Units, /// relative to Full Scale). Loudness units are related to decibels in the /// following sense: boosting a signal that has a loudness of /// -<var>L<sub>K</sub></var> LUFS by <var>L<sub>K</sub></var> dB (by /// multiplying the amplitude by 10<sup><var>L<sub>K</sub></var>/20</sup>) will /// bring the loudness to 0 LUFS. /// /// K-weighting refers to a high-shelf and high-pass filter that model the /// effect that humans perceive a certain amount of power in low frequencies to /// be less loud than the same amount of power in higher frequencies. In this /// library the `Power` type is used exclusively to refer to power after applying K-weighting. /// /// The nominal “full scale” is the range [-1.0, 1.0]. Because the power is the /// mean square of the samples, if no input samples exceeded the full scale, the /// power will be in the range [0.0, 1.0]. However, the power delivered by /// multiple channels, which is a weighted sum over individual channel powers, /// can exceed this range, because the weighted sum is not normalized. #[derive(Copy, Clone, PartialEq, PartialOrd)] pub struct Power(pub f32); impl Power { /// Convert Loudness Units relative to Full Scale into a squared sample amplitude. /// /// This is the inverse of `loudness_lkfs`. pub fn from_lkfs(lkfs: f32) -> Power { // The inverse of the formula below. Power(10.0_f32.powf((lkfs + 0.691) * 0.1)) } /// Return the loudness of this window in Loudness Units, K-weighted, relative to Full Scale. /// /// This is the inverse of `from_lkfs`. pub fn loudness_lkfs(&self) -> f32 { // Equation 2 (p.5) of BS.1770-4. -0.691 + 10.0 * self.0.log10() } } /// A `T` value for non-overlapping windows of audio, 100ms in length. 
/// /// The `ChannelLoudnessMeter` applies K-weighting and then produces the power /// for non-overlapping windows of 100ms duration. /// /// These non-overlapping 100ms windows can later be combined into overlapping /// windows of 400ms, spaced 100ms apart, to compute instantaneous loudness or /// to perform a gated measurement, or they can be combined into even larger /// windows for a momentary loudness measurement. #[derive(Copy, Clone, Debug)] pub struct Windows100ms<T> { pub inner: T, } impl<T> Windows100ms<T> { /// Wrap a new empty vector. pub fn new() -> Windows100ms<Vec<T>> { Windows100ms { inner: Vec::new() } } /// Apply `as_ref` to the inner value. pub fn as_ref(&self) -> Windows100ms<&[Power]> where T: AsRef<[Power]>, { Windows100ms { inner: self.inner.as_ref(), } } /// Apply `as_mut` to the inner value. pub fn as_mut(&mut self) -> Windows100ms<&mut [Power]> where T: AsMut<[Power]>, { Windows100ms { inner: self.inner.as_mut(), } } #[allow(clippy::len_without_is_empty)] /// Apply `len` to the inner value. pub fn len(&self) -> usize where T: AsRef<[Power]>, { self.inner.as_ref().len() } } /// Measures K-weighted power of non-overlapping 100ms windows of a single channel of audio. /// /// # Output /// /// The output of the meter is an intermediate result in the form of power for /// 100ms non-overlapping windows. The windows need to be processed further to /// get one of the instantaneous, momentary, and integrated loudness /// measurements defined in BS.1770. /// /// The windows can also be inspected directly; the data is meaningful /// on its own (the K-weighted power delivered in that window of time), but it /// is not something that BS.1770 defines a term for. /// /// # Multichannel audio /// /// To perform a loudness measurement of multichannel audio, construct a /// `ChannelLoudnessMeter` per channel, and later combine the measured power /// with e.g. `reduce_stereo`. /// /// # Instantaneous loudness /// /// The instantaneous loudness is the power over a 400ms window, so you can /// average four 100ms windows. No special functionality is implemented to help /// with that at this time. ([Pull requests would be accepted.][contribute]) /// /// # Momentary loudness /// /// The momentary loudness is the power over a 3-second window, so you can /// average thirty 100ms windows. No special functionality is implemented to /// help with that at this time. ([Pull requests would be accepted.][contribute]) /// /// # Integrated loudness /// /// Use `gated_mean` to perform an integrated loudness measurement: /// /// ```ignore /// # use std::iter; /// # use bs1770::{ChannelLoudnessMeter, gated_mean}; /// # let sample_rate_hz = 44_100; /// # let samples_per_100ms = sample_rate_hz / 10; /// # let mut meter = ChannelLoudnessMeter::new(sample_rate_hz); /// # meter.push((0..44_100).map(|i| (i as f32 * 0.01).sin())); /// let integrated_loudness_lkfs = gated_mean(meter.as_100ms_windows()) /// .unwrap_or(bs1770::Power(0.0)) /// .loudness_lkfs(); /// ``` /// /// [contribute]: https://github.com/ruuda/bs1770/blob/master/CONTRIBUTING.md #[derive(Clone)] pub struct ChannelLoudnessMeter { /// The number of samples that fit in 100ms of audio. samples_per_100ms: u32, /// Stage 1 filter (head effects, high shelf). filter_stage1: Filter, /// Stage 2 filter (high-pass). filter_stage2: Filter, /// Sum of the squares over non-overlapping windows of 100ms. windows: Windows100ms<Vec<Power>>, /// The number of samples in the current unfinished window. 
count: u32, /// The sum of the squares of the samples in the current unfinished window. square_sum: Sum, } impl ChannelLoudnessMeter { /// Construct a new loudness meter for the given sample rate. pub fn new(sample_rate_hz: u32) -> ChannelLoudnessMeter { ChannelLoudnessMeter { samples_per_100ms: sample_rate_hz / 10, filter_stage1: Filter::high_shelf(sample_rate_hz as f32), filter_stage2: Filter::high_pass(sample_rate_hz as f32), windows: Windows100ms::new(), count: 0, square_sum: Sum::zero(), } } /// Feed input samples for loudness analysis. /// /// # Full scale /// /// Full scale for the input samples is the interval [-1.0, 1.0]. If your /// input consists of signed integer samples, you can convert as follows: /// /// ```ignore /// # let mut meter = bs1770::ChannelLoudnessMeter::new(44_100); /// # let bits_per_sample = 16_usize; /// # let samples = &[0_i16]; /// // Note that the maximum amplitude is `1 << (bits_per_sample - 1)`, /// // one bit is the sign bit. /// let normalizer = 1.0 / (1_u64 << (bits_per_sample - 1)) as f32; /// meter.push(samples.iter().map(|&s| s as f32 * normalizer)); /// ``` /// /// # Repeated calls /// /// You can call `push` multiple times to feed multiple batches of samples. /// This is equivalent to feeding a single chained iterator. The leftover of /// samples that did not fill a full 100ms window is not discarded: /// /// ```ignore /// # use std::iter; /// # use bs1770::ChannelLoudnessMeter; /// let sample_rate_hz = 44_100; /// let samples_per_100ms = sample_rate_hz / 10; /// let mut meter = ChannelLoudnessMeter::new(sample_rate_hz); /// /// meter.push(iter::repeat(0.0).take(samples_per_100ms as usize - 1)); /// assert_eq!(meter.as_100ms_windows().len(), 0); /// /// meter.push(iter::once(0.0)); /// assert_eq!(meter.as_100ms_windows().len(), 1); /// ``` pub fn push<I: Iterator<Item = f32>>(&mut self, samples: I) { let normalizer = 1.0 / self.samples_per_100ms as f32; // LLVM, if you could go ahead and inline those apply calls, and then // unroll and vectorize the loop, that'd be terrific. for x in samples { let y = self.filter_stage1.apply(x); let z = self.filter_stage2.apply(y); self.square_sum.add(z * z); self.count += 1; // TODO: Should this branch be marked cold? if self.count == self.samples_per_100ms { let mean_squares = Power(self.square_sum.sum * normalizer); self.windows.inner.push(mean_squares); // We intentionally do not reset the residue. That way, leftover // energy from this window is not lost, so for the file overall, // the sum remains more accurate. self.square_sum.sum = 0.0; self.count = 0; } } } /// Return a reference to the 100ms windows analyzed so far. pub fn as_100ms_windows(&self) -> Windows100ms<&[Power]> { self.windows.as_ref() } /// Return all 100ms windows analyzed so far. pub fn into_100ms_windows(self) -> Windows100ms<Vec<Power>> { self.windows } } /// Combine power for multiple channels by taking a weighted sum. /// /// Note that BS.1770-4 defines power for a multi-channel signal as a weighted /// sum over channels which is not normalized. This means that a stereo signal /// is inherently louder than a mono signal. For a mono signal played back on /// stereo speakers, you should therefore still apply `reduce_stereo`, passing /// in the same signal for both channels. pub fn reduce_stereo( left: Windows100ms<&[Power]>, right: Windows100ms<&[Power]>, ) -> Windows100ms<Vec<Power>> { assert_eq!( left.len(), right.len(), "Channels must have the same length." 
); let mut result = Vec::with_capacity(left.len()); for (l, r) in left.inner.iter().zip(right.inner) { result.push(Power(l.0 + r.0)); } Windows100ms { inner: result } } /// In-place version of `reduce_stereo` that stores the result in the former left channel. pub fn reduce_stereo_in_place(left: Windows100ms<&mut [Power]>, right: Windows100ms<&[Power]>) { assert_eq!( left.len(), right.len(), "Channels must have the same length." ); for (l, r) in left.inner.iter_mut().zip(right.inner) { l.0 += r.0; } } /// Perform gating and averaging for a BS.1770-4 integrated loudness measurement. /// /// The integrated loudness measurement is not just the average power over the /// entire signal. BS.1770-4 defines two stages of gating that exclude /// parts of the signal, to ensure that silent parts do not contribute to the /// loudness measurement. This function performs that gating, and returns the /// average power over the windows that were not excluded. /// /// The result of this function is the integrated loudness measurement. /// /// When no signal remains after applying the gate, this function returns /// `None`. In particular, this happens when all of the signal is softer than /// -70 LKFS, including a signal that consists of pure silence. pub fn gated_mean(windows_100ms: Windows100ms<&[Power]>) -> Option<Power> { let mut gating_blocks = Vec::with_capacity(windows_100ms.len()); // Stage 1: an absolute threshold of -70 LKFS. (Equation 6, p.6.) let absolute_threshold = Power::from_lkfs(-70.0); // Iterate over all 400ms windows. for window in windows_100ms.inner.windows(4) { // Note that the sum over channels has already been performed at this point. let gating_block_power = Power(0.25 * window.iter().map(|mean| mean.0).sum::<f32>()); if gating_block_power > absolute_threshold { gating_blocks.push(gating_block_power); } } if gating_blocks.is_empty() { return None; } // Compute the loudness after applying the absolute gate, in order to // determine the threshold for the relative gate. let mut sum_power = Sum::zero(); for &gating_block_power in &gating_blocks { sum_power.add(gating_block_power.0); } let absolute_gated_power = Power(sum_power.sum / (gating_blocks.len() as f32)); // Stage 2: Apply the relative gate. let relative_threshold = Power::from_lkfs(absolute_gated_power.loudness_lkfs() - 10.0); let mut sum_power = Sum::zero(); let mut n_blocks = 0_usize; for &gating_block_power in &gating_blocks { if gating_block_power > relative_threshold { sum_power.add(gating_block_power.0); n_blocks += 1; } } if n_blocks == 0 { return None; } let relative_gated_power = Power(sum_power.sum / n_blocks as f32); Some(relative_gated_power) }
candle/candle-examples/src/bs1770.rs/0
{ "file_path": "candle/candle-examples/src/bs1770.rs", "repo_id": "candle", "token_count": 7220 }
45
/****************************************************************************** * Copyright (c) 2023, Tri Dao. ******************************************************************************/ #pragma once // #include <c10/cuda/CUDAException.h> // For C10_CUDA_CHECK and C10_CUDA_KERNEL_LAUNCH_CHECK #include "error.h" #include "static_switch.h" #include "hardware_info.h" #include "flash.h" #include "flash_fwd_kernel.h" // Determine if the architecture supports FLASH and define a macro to handle parameter modifiers #if defined(__CUDA_ARCH__) && __CUDA_ARCH__ >= 800 #define ARCH_SUPPORTS_FLASH #define KERNEL_PARAM_MODIFIER __grid_constant__ #else #define KERNEL_PARAM_MODIFIER #endif // Define a macro for unsupported architecture handling to centralize the error message #define FLASH_UNSUPPORTED_ARCH printf("FATAL: FlashAttention requires building with sm version sm80-sm90, but was built for < 8.0!"); // Use a macro to clean up kernel definitions #define DEFINE_FLASH_FORWARD_KERNEL(kernelName, ...) \ template<typename Kernel_traits, __VA_ARGS__> \ __global__ void kernelName(KERNEL_PARAM_MODIFIER const Flash_fwd_params params) DEFINE_FLASH_FORWARD_KERNEL(flash_fwd_kernel, bool Is_dropout, bool Is_causal, bool Is_local, bool Has_alibi, bool Is_even_MN, bool Is_even_K, bool Is_softcap, bool Return_softmax) { #if defined(ARCH_SUPPORTS_FLASH) static_assert(!(Is_causal && Is_local)); // Enforce constraints flash::compute_attn<Kernel_traits, Is_dropout, Is_causal, Is_local, Has_alibi, Is_even_MN, Is_even_K, Is_softcap, Return_softmax>(params); #else FLASH_UNSUPPORTED_ARCH #endif } DEFINE_FLASH_FORWARD_KERNEL(flash_fwd_splitkv_kernel, bool Is_causal, bool Is_local, bool Has_alibi, bool Is_even_MN, bool Is_even_K, bool Is_softcap, bool Split, bool Append_KV) { #if defined(ARCH_SUPPORTS_FLASH) flash::compute_attn_splitkv<Kernel_traits, Is_causal, Is_local, Has_alibi, Is_even_MN, Is_even_K, Is_softcap, Split, Append_KV>(params); #else FLASH_UNSUPPORTED_ARCH #endif } DEFINE_FLASH_FORWARD_KERNEL(flash_fwd_splitkv_combine_kernel, int kBlockM, int Log_max_splits, bool Is_even_K) { static_assert(Log_max_splits >= 1); flash::combine_attn_seqk_parallel<Kernel_traits, kBlockM, Log_max_splits, Is_even_K>(params); } template<typename Kernel_traits, bool Is_dropout, bool Is_causal> void run_flash_fwd(Flash_fwd_params &params, cudaStream_t stream) { constexpr size_t smem_size = Kernel_traits::kSmemSize; // printf("smem_size = %d\n", smem_size); // Work-around for gcc 7. It doesn't like nested BOOL_SWITCH. // https://github.com/kokkos/kokkos-kernels/issues/349 // https://github.com/HazyResearch/flash-attention/issues/21 const int num_m_block = (params.seqlen_q + Kernel_traits::kBlockM - 1) / Kernel_traits::kBlockM; dim3 grid(num_m_block, params.b, params.h); const bool is_even_MN = params.cu_seqlens_q == nullptr && params.cu_seqlens_k == nullptr && params.seqlen_k % Kernel_traits::kBlockN == 0 && params.seqlen_q % Kernel_traits::kBlockM == 0; const bool is_even_K = params.d == Kernel_traits::kHeadDim; const bool return_softmax = params.p_ptr != nullptr; BOOL_SWITCH(is_even_MN, IsEvenMNConst, [&] { EVENK_SWITCH(is_even_K, IsEvenKConst, [&] { LOCAL_SWITCH((params.window_size_left >= 0 || params.window_size_right >= 0) && !Is_causal, Is_local, [&] { BOOL_SWITCH(return_softmax, ReturnSoftmaxConst, [&] { ALIBI_SWITCH(params.alibi_slopes_ptr != nullptr, Has_alibi, [&] { SOFTCAP_SWITCH(params.softcap > 0.0, Is_softcap, [&] { // Will only return softmax if dropout, to reduce compilation time. 
// If not IsEvenKConst, we also set IsEvenMNConst to false to reduce number of templates. // If return_softmax, set IsEvenMNConst to false to reduce number of templates // If head dim > 128, set IsEvenMNConst to false to reduce number of templates // If Is_local, set Is_causal to false auto kernel = &flash_fwd_kernel<Kernel_traits, Is_dropout && !Is_softcap, Is_causal, Is_local && !Is_causal, Has_alibi, IsEvenMNConst && IsEvenKConst && !Is_local && !ReturnSoftmaxConst && Kernel_traits::kHeadDim <= 128, IsEvenKConst, Is_softcap, ReturnSoftmaxConst && Is_dropout && !Is_softcap>; // auto kernel = &flash_fwd_kernel<Kernel_traits, false, Is_causal, false, false, true, true, false>; // printf("IsEvenMNConst = %d, IsEvenKConst = %d, Is_local = %d, Is_causal = %d, ReturnSoftmaxConst = %d, Is_dropout = %d\n", int(IsEvenMNConst), int(IsEvenKConst), int(Is_local), int(Is_causal), int(ReturnSoftmaxConst), int(Is_dropout)); // auto kernel = &flash_fwd_kernel<Kernel_traits, false, Is_causal, false, true, true, false>; if (smem_size >= 48 * 1024) { C10_CUDA_CHECK(cudaFuncSetAttribute( kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, smem_size)); } // int ctas_per_sm; // cudaError status_ = cudaOccupancyMaxActiveBlocksPerMultiprocessor( // &ctas_per_sm, kernel, Kernel_traits::kNThreads, smem_size); // printf("smem_size = %d, CTAs per SM = %d\n", int(smem_size), ctas_per_sm); kernel<<<grid, Kernel_traits::kNThreads, smem_size, stream>>>(params); C10_CUDA_KERNEL_LAUNCH_CHECK(); }); }); }); }); }); }); } template<typename Kernel_traits, bool Is_causal> void run_flash_splitkv_fwd(Flash_fwd_params &params, cudaStream_t stream) { static_assert(!Kernel_traits::Is_Q_in_regs, "SplitKV implementation does not support Is_Q_in_regs"); static_assert(!Kernel_traits::Share_Q_K_smem, "SplitKV implementation does not support Share_Q_K_smem"); constexpr size_t smem_size = Kernel_traits::kSmemSize; const int num_m_block = (params.seqlen_q + Kernel_traits::kBlockM - 1) / Kernel_traits::kBlockM; dim3 grid(num_m_block, params.num_splits > 1 ? params.num_splits : params.b, params.num_splits > 1 ? params.b * params.h : params.h); const bool is_even_MN = params.cu_seqlens_q == nullptr && params.cu_seqlens_k == nullptr && params.seqlen_k % Kernel_traits::kBlockN == 0 && params.seqlen_q % Kernel_traits::kBlockM == 0; const bool is_even_K = params.d == Kernel_traits::kHeadDim; BOOL_SWITCH(is_even_MN, IsEvenMNConst, [&] { EVENK_SWITCH(is_even_K, IsEvenKConst, [&] { LOCAL_SWITCH((params.window_size_left >= 0 || params.window_size_right >= 0) && !Is_causal, Is_local, [&] { BOOL_SWITCH(params.num_splits > 1, Split, [&] { BOOL_SWITCH(params.knew_ptr != nullptr, Append_KV, [&] { ALIBI_SWITCH(params.alibi_slopes_ptr != nullptr, Has_alibi, [&] { SOFTCAP_SWITCH(params.softcap > 0.0, Is_softcap, [&] { // If Append_KV, then we must have seqlen_offsets, which means cu_seqlens_k != nullptr. // If not IsEvenKConst, we also set IsEvenMNConst to false to reduce number of templates. 
// If Is_local, set Is_causal to false auto kernel = &flash_fwd_splitkv_kernel<Kernel_traits, Is_causal, Is_local && !Is_causal, Has_alibi, IsEvenMNConst && !Append_KV && IsEvenKConst && !Is_local && Kernel_traits::kHeadDim <= 128, IsEvenKConst, Is_softcap, Split, Append_KV>; // auto kernel = &flash_fwd_splitkv_kernel<Kernel_traits, Is_causal, false, true, Split, Append_KV>; // auto kernel = &flash_fwd_splitkv_kernel<Kernel_traits, Is_causal, false, IsEvenKConst>; if (smem_size >= 48 * 1024) { C10_CUDA_CHECK(cudaFuncSetAttribute( kernel, cudaFuncAttributeMaxDynamicSharedMemorySize, smem_size)); } kernel<<<grid, Kernel_traits::kNThreads, smem_size, stream>>>(params); C10_CUDA_KERNEL_LAUNCH_CHECK(); }); }); }); }); }); }); }); if (params.num_splits > 1) { // We want kBlockM to be as small as possible for more parallelism. // With 128 threads we can load 512 elements at a time, so if headdim is divisible by 128, kBlockM = 4. // If headdim is divisible by 64, then we set kBlockM = 8, etc. constexpr static int kBlockM = Kernel_traits::kHeadDim % 128 == 0 ? 4 : (Kernel_traits::kHeadDim % 64 == 0 ? 8 : 16); dim3 grid_combine((params.b * params.h * params.seqlen_q + kBlockM - 1) / kBlockM); EVENK_SWITCH(is_even_K, IsEvenKConst, [&] { if (params.num_splits <= 2) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 1, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 4) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 2, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 8) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 3, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 16) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 4, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 32) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 5, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 64) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 6, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } else if (params.num_splits <= 128) { flash_fwd_splitkv_combine_kernel<Kernel_traits, kBlockM, 7, IsEvenKConst><<<grid_combine, Kernel_traits::kNThreads, 0, stream>>>(params); } C10_CUDA_KERNEL_LAUNCH_CHECK(); }); } } template<typename T, int Headdim, bool Is_causal> void run_mha_fwd_splitkv_dispatch(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int kBlockM = 64; // Fixed for all head dimensions // TD [2023-08-28]: nvcc segfaults for headdim 96 with block size 64 x 256, // and for headdim 192 with block size 64 x 128. // Also for headdim 160 with block size 64 x 128 after the rotary addition. constexpr static int kBlockN = Headdim <= 64 ? 256 : (Headdim <= 128 ? 
128 : 64); run_flash_splitkv_fwd<Flash_fwd_kernel_traits<Headdim, kBlockM, kBlockN, 4, false, false, T>, Is_causal>(params, stream); } template<typename T, bool Is_causal> void run_mha_fwd_hdim32(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 32; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim64(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 64; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { if constexpr(!Is_dropout) { // Using 8 warps is 18% slower for seqlen=2k, 2 warps is 5% slower // Using block size (64 x 256) is 27% slower for seqlen=2k // Using block size (256 x 64) is 85% slower for seqlen=2k, because of register spilling run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, true, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, true, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } }); } inline bool cuda_is_sm8x() { // dprops = at::cuda::getCurrentDeviceProperties(); // return dprops->major == 8 && dprops->minor > 0; return false; } template<typename T, bool Is_causal> void run_mha_fwd_hdim96(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 96; auto [cc_major, cc_minor] = get_compute_capability(get_current_device()); bool is_sm8x = cc_major == 8 && cc_minor > 0; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { // For sm86 or sm89, 64 x 64 is the fastest for causal (because it's square), if (is_sm8x) { if constexpr(!Is_causal) { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, true, T>, Is_dropout, Is_causal>(params, stream); // These two are always slower // run_flash_fwd<Flash_fwd_kernel_traits<96, 128, 128, 4, true, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<96, 64, 128, 4, true, T>>(params, stream); }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim128(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 128; auto [cc_major, cc_minor] = get_compute_capability(get_current_device()); bool is_sm8x = cc_major == 8 && cc_minor > 0; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { if constexpr(!Is_dropout) { // For sm86 or sm89, 64 x 64 is the fastest 
for causal (because it's square), // and 128 x 32 (48 KB smem) is the fastest for non-causal since we get 2 CTAs per SM. if (is_sm8x) { if constexpr(!Is_causal) { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, true, true, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 128, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // Using 8 warps (128 x 128 and 256 x 64) is 28% slower for seqlen=2k // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); // 1st ones are good for H100, A100 // 2nd one is good for A6000 bc we get slightly better occupancy } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, true, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, true, true, T>, Is_dropout, Is_causal>(params, stream); } }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim160(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 160; auto [cc_major, cc_minor] = get_compute_capability(get_current_device()); bool is_sm8x = cc_major == 8 && cc_minor > 0; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { // For A100, H100, 128 x 32 is the fastest. // For sm86 or sm89, 64 x 64 is the fastest for causal (because it's square), // and 128 x 64 with 8 warps is the fastest for non-causal. 
if (is_sm8x) { if constexpr(!Is_causal) { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, false, true, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 128, 4, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 8, false, T>>(params, stream); }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim192(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 192; DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { if constexpr(!Is_dropout) { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 4, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 128, 4, false, T>>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 128, 8, false, T>>(params, stream); }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim224(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 224; int device; cudaGetDevice(&device); int max_smem_per_block; cudaError status_ = cudaDeviceGetAttribute( &max_smem_per_block, cudaDevAttrMaxSharedMemoryPerBlockOptin, device); if (status_ != cudaSuccess) { C10_CUDA_CHECK(status_); } // printf("max_smem_per_block = %d\n", max_smem_per_block); DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { if (max_smem_per_block >= 2 * Headdim * (128 + 2 * 64)) { // 112 KB run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // We can't do 128 x 32 with 8 warps because with headdim 224, kBlockKSmem = 32. // If we have N = 32, there are only 1024 elements to load at once, where each load // is 8 elements. This means we can only use 128 threads and not 256 threads. 
// run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); }); } template<typename T, bool Is_causal> void run_mha_fwd_hdim256(Flash_fwd_params &params, cudaStream_t stream) { constexpr static int Headdim = 256; int device; cudaGetDevice(&device); int max_smem_per_sm, max_smem_per_block; cudaError status_ = cudaDeviceGetAttribute( &max_smem_per_sm, cudaDevAttrMaxSharedMemoryPerMultiprocessor, device); status_ = cudaDeviceGetAttribute( &max_smem_per_block, cudaDevAttrMaxSharedMemoryPerBlockOptin, device); if (status_ != cudaSuccess) { C10_CUDA_CHECK(status_); } // printf("max_smem_per_sm = %d, max_smem_per_block = %d\n", max_smem_per_sm, max_smem_per_block); DROPOUT_SWITCH(params.p_dropout < 1.f, Is_dropout, [&] { // For A100, we want to run with 128 x 64 (128KB smem). // For H100 we want to run with 64 x 64 (96KB smem) since then we can get 2 CTAs per SM. if (max_smem_per_block >= 2 * Headdim * (128 + 2 * 64) && max_smem_per_sm < 4 * Headdim * (64 + 2 * 64)) { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 64, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); } else { run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 64, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); } // 64 KB // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 64, 32, 4, false, false, T>, Is_dropout, Is_causal>(params, stream); // 96 KB // run_flash_fwd<Flash_fwd_kernel_traits<Headdim, 128, 32, 8, false, false, T>, Is_dropout, Is_causal>(params, stream); }); }
candle/candle-flash-attn/kernels/flash_fwd_launch_template.h/0
{ "file_path": "candle/candle-flash-attn/kernels/flash_fwd_launch_template.h", "repo_id": "candle", "token_count": 10705 }
46
# candle-kernels This crate contains CUDA kernels used from candle. Some of these implementations come from the [dfdx crate](https://github.com/coreylowman/dfdx).
candle/candle-kernels/README.md/0
{ "file_path": "candle/candle-kernels/README.md", "repo_id": "candle", "token_count": 45 }
47
#include "cuda_utils.cuh" #include<stdint.h> #define WHERE_OP(TYPENAME, ID_TYPENAME, FN_NAME) \ extern "C" __global__ void FN_NAME( \ const size_t numel, \ const size_t num_dims, \ const size_t *info, \ const ID_TYPENAME *ids, \ const TYPENAME *t, \ const TYPENAME *f, \ TYPENAME *out \ ) { \ const size_t *dims = info; \ const size_t *strides = info + num_dims; \ const size_t *strides_t = info + 2*num_dims; \ const size_t *strides_f = info + 3*num_dims; \ if (is_contiguous(num_dims, dims, strides) \ && is_contiguous(num_dims, dims, strides_f) \ && is_contiguous(num_dims, dims, strides_t)) { \ for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; i < numel; i += blockDim.x * gridDim.x) { \ out[i] = ids[i] ? t[i] : f[i]; \ } \ } \ else { \ for (unsigned int i = blockIdx.x * blockDim.x + threadIdx.x; i < numel; i += blockDim.x * gridDim.x) { \ unsigned strided_i = get_strided_index(i, num_dims, dims, strides); \ unsigned strided_i_t = get_strided_index(i, num_dims, dims, strides_t); \ unsigned strided_i_f = get_strided_index(i, num_dims, dims, strides_f); \ out[i] = ids[strided_i] ? t[strided_i_t] : f[strided_i_f]; \ } \ } \ } \ #if __CUDA_ARCH__ >= 800 WHERE_OP(__nv_bfloat16, int64_t, where_i64_bf16) WHERE_OP(__nv_bfloat16, uint32_t, where_u32_bf16) WHERE_OP(__nv_bfloat16, uint8_t, where_u8_bf16) #endif #if __CUDA_ARCH__ >= 890 WHERE_OP(__nv_fp8_e4m3, int16_t, where_i16_fp8_e4m3) WHERE_OP(__nv_fp8_e4m3, int32_t, where_i32_fp8_e4m3) WHERE_OP(__nv_fp8_e4m3, int64_t, where_i64_fp8_e4m3) WHERE_OP(__nv_fp8_e4m3, uint32_t, where_u32_fp8_e4m3) WHERE_OP(__nv_fp8_e4m3, uint8_t, where_u8_fp8_e4m3) #endif #if __CUDA_ARCH__ >= 530 WHERE_OP(__half, int64_t, where_i64_f16) WHERE_OP(__half, uint32_t, where_u32_f16) WHERE_OP(__half, uint8_t, where_u8_f16) #endif WHERE_OP(float, int64_t, where_i64_f32) WHERE_OP(double, int64_t, where_i64_f64) WHERE_OP(uint8_t, int64_t, where_i64_u8) WHERE_OP(uint32_t, int64_t, where_i64_u32) WHERE_OP(int64_t, int64_t, where_i64_i64) WHERE_OP(float, uint32_t, where_u32_f32) WHERE_OP(double, uint32_t, where_u32_f64) WHERE_OP(uint8_t, uint32_t, where_u32_u8) WHERE_OP(uint32_t, uint32_t, where_u32_u32) WHERE_OP(int64_t, uint32_t, where_u32_i64) WHERE_OP(float, uint8_t, where_u8_f32) WHERE_OP(double, uint8_t, where_u8_f64) WHERE_OP(uint8_t, uint8_t, where_u8_u8) WHERE_OP(uint32_t, uint8_t, where_u8_u32) WHERE_OP(int64_t, uint8_t, where_u8_i64)
candle/candle-kernels/src/ternary.cu/0
{ "file_path": "candle/candle-kernels/src/ternary.cu", "repo_id": "candle", "token_count": 1345 }
48
#include <metal_stdlib> #include <metal_integer> #include <metal_atomic> using namespace metal; // Constants // 2^32 and 1/2^32. Useful for converting between float and uint. static constexpr constant ulong UNIF01_NORM32 = 4294967296; static constexpr constant float UNIF01_INV32 = 2.328306436538696289e-10; // 2 * pi static constexpr constant float TWO_PI = 2.0 * M_PI_F; static constexpr constant int3 S1 = {13, 19, 12}; static constexpr constant int3 S2 = {2, 25, 4}; static constexpr constant int3 S3 = {3, 11, 17}; // Used to prevent bad seeds. static constexpr constant uint64_t PHI[16] = { 0x9E3779B97F4A7C15, 0xF39CC0605CEDC834, 0x1082276BF3A27251, 0xF86C6A11D0C18E95, 0x2767F0B153D27B7F, 0x0347045B5BF1827F, 0x01886F0928403002, 0xC1D64BA40F335E36, 0xF06AD7AE9717877E, 0x85839D6EFFBD7DC6, 0x64D325D1C5371682, 0xCADD0CCCFDFFBBE1, 0x626E33B8D04B4331, 0xBBF73C790D94F79D, 0x471C4AB3ED3D82A5, 0xFEC507705E4AE6E5, }; // Combined Tausworthe and LCG Random Number Generator. // https://developer.nvidia.com/gpugems/gpugems3/part-vi-gpu-computing/chapter-37-efficient-random-number-generation-and-application // https://indico.cern.ch/event/93877/contributions/2118070/attachments/1104200/1575343/acat3_revised_final.pdf struct HybridTaus { float state; HybridTaus() thread = default; HybridTaus() threadgroup = default; HybridTaus() device = default; HybridTaus() constant = default; // Generate seeds for each thread. METAL_FUNC static uint4 seed_per_thread(const ulong4 seeds) { return uint4(ulong4(seeds) * ulong4(PHI[0], PHI[1], PHI[2], PHI[3]) * ulong4(1099087573UL)); } // Tausworthe generator. METAL_FUNC static uint taus(const uint z, const int3 s, const uint M) { uint b = (((z << s.x) ^ z) >> s.y); return (((z & M) << s.z) ^ b); } // LCG generator. METAL_FUNC static uint lcg(const uint z) { return (1664525 * z + 1013904223UL); } // Initialize the RNG state. METAL_FUNC static HybridTaus init(const ulong4 seeds) { uint4 seed = seed_per_thread(seeds); // Seed #1 uint z1 = taus(seed.x, S1, 4294967294UL); uint z2 = taus(seed.y, S2, 4294967288UL); uint z3 = taus(seed.z, S3, 4294967280UL); uint z4 = lcg(seed.x); // Seed #2 uint r1 = (z1^z2^z3^z4^seed.y); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); // Seed #3 r1 = (z1^z2^z3^z4^seed.z); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); // Seed #4 r1 = (z1^z2^z3^z4^seed.w); z1 = taus(r1, S1, 429496729UL); z2 = taus(r1, S2, 4294967288UL); z3 = taus(r1, S3, 429496280UL); z4 = lcg(r1); HybridTaus rng; rng.state = (z1^z2^z3^z4) * UNIF01_INV32; return rng; } METAL_FUNC float rand() { uint seed = this->state * UNIF01_NORM32; uint z1 = taus(seed, S1, 429496729UL); uint z2 = taus(seed, S2, 4294967288UL); uint z3 = taus(seed, S3, 429496280UL); uint z4 = lcg(seed); thread float result = this->state; this->state = (z1^z2^z3^z4) * UNIF01_INV32; return result; } }; template<typename T> METAL_FUNC void rand_uniform( constant size_t &size, constant float &min, constant float &max, device atomic_uint *seed, device T *out, uint tid [[thread_position_in_grid]] ) { if (tid >= size) { return; } // Evenly sized vectors need an offset when writing the mirror element. 
uint off = 1 - size % 2; float diff = abs(min - max); uint s = atomic_load_explicit(seed, memory_order_relaxed); HybridTaus rng = HybridTaus::init({ulong(s), tid, 1, 1}); out[tid] = static_cast<T>(rng.rand() * diff + min); if (tid == 0) { atomic_store_explicit(seed, uint(rng.rand() * UNIF01_NORM32), memory_order_relaxed); // Return early if tid == 0 && off == 0, otherwise we will write to out[size]. if (off == 0) return; } // Use symmetry to fill the other half of the array. out[size - off - tid] = static_cast<T>(rng.rand() * diff + min); } // Create Gaussian normal distribution using Box-Muller transform: // https://en.wikipedia.org/wiki/Box–Muller_transform template<typename T> METAL_FUNC void normal( constant size_t &size, constant float &mean, constant float &stddev, device atomic_uint *seed, device T *out, uint tid [[thread_position_in_grid]] ) { if (tid >= size) { return; } // Evenly sized vectors need an offset when writing the mirror element. uint off = 1 - size % 2; uint s = atomic_load_explicit(seed, memory_order_relaxed); HybridTaus rng = HybridTaus::init({ulong(s), tid, 1, 1}); float u1 = rng.rand(); float u2 = rng.rand(); float cosval; float sinval = sincos(TWO_PI * u2, cosval); float mag = stddev * sqrt(-2.0 * log(u1)); float z0 = mag * cosval + mean; float z1 = mag * sinval + mean; out[tid] = static_cast<T>(z0); if (tid == 0) { atomic_store_explicit(seed, uint(rng.rand() * UNIF01_NORM32), memory_order_relaxed); // Return early if tid == 0 && off == 0, otherwise we will write to out[size]. if (off == 0) return; } // Use symmetry to fill the other half of the array. out[size - off - tid] = static_cast<T>(z1); } #define UNIFORM_OP(NAME, T) \ kernel void rand_uniform_##NAME( \ constant size_t &size, \ constant float &min, \ constant float &max, \ device atomic_uint *seed, \ device T *out, \ uint tid [[thread_position_in_grid]] \ ) { \ rand_uniform<T>(size, min, max, seed, out, tid); \ } \ #define NORMAL_OP(NAME, T) \ kernel void rand_normal_##NAME( \ constant size_t &size, \ constant float &mean, \ constant float &stddev, \ device atomic_uint *seed, \ device T *out, \ uint tid [[thread_position_in_grid]] \ ) { \ normal<T>(size, mean, stddev, seed, out, tid); \ } \ #define RANDOM_OPS(NAME, T) \ UNIFORM_OP(NAME, T) \ NORMAL_OP(NAME, T) \ RANDOM_OPS(f32, float) RANDOM_OPS(f16, half) #if __METAL_VERSION__ >= 310 RANDOM_OPS(bf16, bfloat) #endif
candle/candle-metal-kernels/src/random.metal/0
{ "file_path": "candle/candle-metal-kernels/src/random.metal", "repo_id": "candle", "token_count": 3671 }
49
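The Metal kernels in random.metal above combine a three-component Tausworthe generator with an LCG and then use the Box-Muller transform to turn uniforms into normals. As a hedged illustration (this is not part of candle-metal-kernels, and it collapses the per-thread seeding into a single seed), here is a small CPU-side Rust sketch of those two steps, with the shift triples and the masks from the first seeding pass copied from the shader:

```rust
// Illustrative CPU sketch of the generators used in random.metal.
// Not part of candle-metal-kernels; constants mirror the shader's first seeding pass.

const UNIF01_INV32: f32 = 2.328_306_4e-10; // 1 / 2^32

// One Tausworthe component: shift/xor/mask step, as in HybridTaus::taus.
fn taus(z: u32, s: (u32, u32, u32), m: u32) -> u32 {
    let b = ((z << s.0) ^ z) >> s.1;
    ((z & m) << s.2) ^ b
}

// Linear congruential step, as in HybridTaus::lcg.
fn lcg(z: u32) -> u32 {
    1_664_525u32.wrapping_mul(z).wrapping_add(1_013_904_223)
}

// Combine the four streams into a uniform f32 in [0, 1).
fn hybrid_taus(seed: u32) -> f32 {
    let z1 = taus(seed, (13, 19, 12), 4294967294);
    let z2 = taus(seed, (2, 25, 4), 4294967288);
    let z3 = taus(seed, (3, 11, 17), 4294967280);
    let z4 = lcg(seed);
    (z1 ^ z2 ^ z3 ^ z4) as f32 * UNIF01_INV32
}

// Box-Muller: two uniforms become two independent standard normals,
// matching the `normal` kernel above (before scaling by stddev and adding mean).
fn box_muller(u1: f32, u2: f32) -> (f32, f32) {
    let mag = (-2.0 * u1.ln()).sqrt();
    let theta = 2.0 * std::f32::consts::PI * u2;
    (mag * theta.cos(), mag * theta.sin())
}

fn main() {
    let u1 = hybrid_taus(0xDEADBEEF);
    let u2 = hybrid_taus(0xCAFEBABE);
    // Guard against ln(0) for the sketch; the kernel relies on the RNG never returning 0 exactly.
    let (z0, z1) = box_muller(u1.max(f32::MIN_POSITIVE), u2);
    println!("uniforms: {u1} {u2}, normals: {z0} {z1}");
}
```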
mod benchmarks; use criterion::criterion_main; criterion_main!( benchmarks::softmax::benches, benchmarks::layer_norm::benches, benchmarks::conv::benches );
candle/candle-nn/benches/bench_main.rs/0
{ "file_path": "candle/candle-nn/benches/bench_main.rs", "repo_id": "candle", "token_count": 58 }
50
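The file above only registers three benchmark modules with criterion; the modules themselves live elsewhere in candle-nn/benches. For orientation, a hypothetical sketch of the shape such a module takes — the function body and names here are illustrative, not the actual benchmarks::softmax source:

```rust
// Hypothetical sketch of a benchmark module in the shape expected by
// `criterion_main!` above; the real benchmarks::softmax module is not shown
// in this record, so this body is illustrative only.
use candle::{Device, Tensor};
use criterion::{criterion_group, Criterion};

fn bench_softmax(c: &mut Criterion) {
    let device = Device::Cpu;
    let xs = Tensor::randn(0f32, 1f32, (32, 1024), &device).unwrap();
    c.bench_function("softmax_last_dim_cpu", |b| {
        b.iter(|| candle_nn::ops::softmax_last_dim(&xs).unwrap())
    });
}

criterion_group!(benches, bench_softmax);
```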
//! Layer Normalization. //! //! This layer applies Layer Normalization over a mini-batch of inputs as described in [`Layer //! Normalization`]. The input is expected to have three dimensions: a batch dimension, a length, //! and a hidden size, the normalization is applied over the last dimension. //! //! # Example //! //! ```rust //! use candle::{Tensor, Device::Cpu, test_utils::to_vec3_round}; //! use candle_nn::{LayerNorm, Module}; //! # fn main() -> candle::Result<()> { //! //! let w = Tensor::new(&[1f32, 1f32, 1f32], &Cpu)?; //! let b = Tensor::new(&[0f32, 0f32, 0f32], &Cpu)?; //! let layer = LayerNorm::new(w, b, 1e-5); //! //! let xs = Tensor::new( //! &[[[1f32, 2., 3.], [4., 5., 6.], [9., 8., 7.]]], //! &Cpu)?; //! let ys = layer.forward(&xs)?; //! assert_eq!( //! to_vec3_round(&ys, 4)?, //! &[[[-1.2247, 0.0, 1.2247], //! [-1.2247, 0.0, 1.2247], //! [ 1.2247, 0.0, -1.2247]]]); //! # Ok(()) } //! ``` //! //! [`Layer Normalization`]: https://arxiv.org/abs/1607.06450 use candle::{DType, Module, Result, Tensor, D}; #[derive(Debug, Clone, Copy, PartialEq)] pub struct LayerNormConfig { pub eps: f64, /// Whether to remove the mean or not, the default is true and when set to false, this turns /// this layer into RmsNorm. pub remove_mean: bool, pub affine: bool, } impl Default for LayerNormConfig { fn default() -> Self { Self { eps: 1e-5, remove_mean: true, affine: true, } } } impl From<f64> for LayerNormConfig { fn from(eps: f64) -> Self { Self { eps, remove_mean: true, affine: true, } } } // This layer norm version handles both weight and bias so removes the mean. #[derive(Clone, Debug)] pub struct LayerNorm { weight: Tensor, bias: Option<Tensor>, remove_mean: bool, eps: f64, } impl LayerNorm { pub fn new(weight: Tensor, bias: Tensor, eps: f64) -> Self { Self { weight, bias: Some(bias), remove_mean: true, eps, } } pub fn new_no_bias(weight: Tensor, eps: f64) -> Self { Self { weight, bias: None, remove_mean: true, eps, } } pub fn rms_norm(weight: Tensor, eps: f64) -> Self { Self { weight, bias: None, remove_mean: false, eps, } } pub fn weight(&self) -> &Tensor { &self.weight } pub fn bias(&self) -> Option<&Tensor> { self.bias.as_ref() } } impl Module for LayerNorm { fn forward(&self, x: &Tensor) -> Result<Tensor> { if x.is_contiguous() && self.remove_mean { if let Some(bias) = self.bias.as_ref() { return crate::ops::layer_norm(x, &self.weight, bias, self.eps as f32); } } let x_dtype = x.dtype(); let internal_dtype = match x_dtype { DType::F16 | DType::BF16 => DType::F32, d => d, }; let hidden_size = x.dim(D::Minus1)?; let x = x.to_dtype(internal_dtype)?; let x = if self.remove_mean { let mean_x = (x.sum_keepdim(D::Minus1)? / hidden_size as f64)?; x.broadcast_sub(&mean_x)? } else { x }; let norm_x = (x.sqr()?.sum_keepdim(D::Minus1)? / hidden_size as f64)?; let x_normed = x.broadcast_div(&(norm_x + self.eps)?.sqrt()?)?; let x = x_normed.to_dtype(x_dtype)?.broadcast_mul(&self.weight)?; match &self.bias { None => Ok(x), Some(bias) => x.broadcast_add(bias), } } } pub fn layer_norm<C: Into<LayerNormConfig>>( size: usize, config: C, vb: crate::VarBuilder, ) -> Result<LayerNorm> { let config = config.into(); let weight = vb.get_with_hints(size, "weight", crate::Init::Const(1.))?; let bias = if config.affine { Some(vb.get_with_hints(size, "bias", crate::Init::Const(0.))?) 
} else { None }; Ok(LayerNorm { weight, bias, remove_mean: config.remove_mean, eps: config.eps, }) } pub fn layer_norm_no_bias(size: usize, eps: f64, vb: crate::VarBuilder) -> Result<LayerNorm> { let config = LayerNormConfig { eps, remove_mean: true, affine: false, }; layer_norm(size, config, vb) } /// RmsNorm is a specialized version of the LayerNorm module. #[derive(Clone, Debug)] pub struct RmsNorm(LayerNorm); impl RmsNorm { pub fn new(weight: Tensor, eps: f64) -> Self { Self(LayerNorm::rms_norm(weight, eps)) } pub fn into_inner(self) -> LayerNorm { self.0 } /// Faster variant of the forward kernel, this can only be used on contiguous tensors though. pub fn forward_diff(&self, xs: &Tensor) -> Result<Tensor> { self.0.forward(xs) } } impl Module for RmsNorm { fn forward(&self, xs: &Tensor) -> Result<Tensor> { if xs.is_contiguous() { crate::ops::rms_norm(xs, &self.0.weight, self.0.eps as f32) } else { self.0.forward(xs) } } } pub fn rms_norm(size: usize, eps: f64, vb: crate::VarBuilder) -> Result<RmsNorm> { let config = LayerNormConfig { eps, remove_mean: false, affine: false, }; Ok(RmsNorm(layer_norm(size, config, vb)?)) }
candle/candle-nn/src/layer_norm.rs/0
{ "file_path": "candle/candle-nn/src/layer_norm.rs", "repo_id": "candle", "token_count": 2656 }
51
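layer_norm.rs also exposes `RmsNorm` as the mean-free variant of `LayerNorm`. A small sketch using only the constructors defined above, contrasting the two on the same input (printed values are indicative only):

```rust
// Minimal sketch contrasting LayerNorm and RmsNorm from the module above.
// Uses only constructors defined in layer_norm.rs.
use candle::{Device::Cpu, Module, Result, Tensor};
use candle_nn::{LayerNorm, RmsNorm};

fn main() -> Result<()> {
    let w = Tensor::new(&[1f32, 1., 1.], &Cpu)?;
    let b = Tensor::new(&[0f32, 0., 0.], &Cpu)?;
    let xs = Tensor::new(&[[[1f32, 2., 3.]]], &Cpu)?;

    // LayerNorm removes the mean, then rescales by the RMS of the centered values.
    let ln = LayerNorm::new(w.clone(), b, 1e-5);
    println!("layer_norm: {:?}", ln.forward(&xs)?.to_vec3::<f32>()?);

    // RmsNorm skips the mean removal and only divides by the RMS of the raw values.
    let rms = RmsNorm::new(w, 1e-5);
    println!("rms_norm:   {:?}", rms.forward(&xs)?.to_vec3::<f32>()?);
    Ok(())
}
```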
#[cfg(feature = "mkl")] extern crate intel_mkl_src; #[cfg(feature = "accelerate")] extern crate accelerate_src; use candle::test_utils::to_vec0_round; use candle::{Device, Result, Tensor}; /* Equivalent python code: import torch import torch.nn.functional as F input = torch.tensor([ [ 1.1050, 0.3013, -1.5394, -2.1528, -0.8634], [ 1.0730, -0.9419, -0.1670, -0.6582, 0.5061], [ 0.8318, 1.1154, -0.3610, 0.5351, 1.0830]]) target = torch.tensor([1, 0, 4]) print(F.nll_loss(F.log_softmax(input, dim=1), target)) print(F.cross_entropy(input, target)) */ #[test] fn nll_and_cross_entropy() -> Result<()> { let cpu = Device::Cpu; let input = Tensor::new( &[ [1.1050f32, 0.3013, -1.5394, -2.1528, -0.8634], [1.0730, -0.9419, -0.1670, -0.6582, 0.5061], [0.8318, 1.1154, -0.3610, 0.5351, 1.0830], ], &cpu, )?; let target = Tensor::new(&[1u32, 0, 4], &cpu)?; let log_softmax = candle_nn::ops::log_softmax(&input, 1)?; let loss = candle_nn::loss::nll(&log_softmax, &target)?; assert_eq!(to_vec0_round(&loss, 4)?, 1.1312); let loss = candle_nn::loss::cross_entropy(&input, &target)?; assert_eq!(to_vec0_round(&loss, 4)?, 1.1312); Ok(()) } /* Equivalent python code: import torch import torch.nn.functional as F inp = torch.Tensor([[ 2.3611, -0.8813, -0.5006, -0.2178], [ 0.0419, 0.0763, -1.0457, -1.6692], [-1.0494, 0.8111, 1.5723, 1.2315], [ 1.3081, 0.6641, 1.1802, -0.2547], [ 0.5292, 0.7636, 0.3692, -0.8318]]) target = torch.Tensor([[0., 1., 0., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.], [1., 0., 0., 0.], [0., 0., 1., 0.]]) print(F.binary_cross_entropy_with_logits(inp, target)) */ #[test] fn binary_cross_entropy_with_logit() -> Result<()> { let cpu = Device::Cpu; let inp = [ [2.3611f32, -0.8813, -0.5006, -0.2178], [0.0419, 0.0763, -1.0457, -1.6692], [-1.0494, 0.8111, 1.5723, 1.2315], [1.3081, 0.6641, 1.1802, -0.2547], [0.5292, 0.7636, 0.3692, -0.8318], ]; let target = [ [0.0f32, 1., 0., 0.], [0., 1., 0., 0.], [0., 0., 0., 1.], [1., 0., 0., 0.], [0., 0., 1., 0.], ]; let inp = Tensor::new(&inp, &cpu)?; let target = Tensor::new(&target, &cpu)?; let loss = candle_nn::loss::binary_cross_entropy_with_logit(&inp, &target)?; assert_eq!(to_vec0_round(&loss, 4)?, 0.8224); Ok(()) }
candle/candle-nn/tests/loss.rs/0
{ "file_path": "candle/candle-nn/tests/loss.rs", "repo_id": "candle", "token_count": 1344 }
52
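Both assertions in the first test rest on the standard identity relating cross-entropy to the negative log-likelihood of log-softmax outputs. For a batch of $N$ rows $x_i$ with target classes $y_i$ (mean reduction, matching the PyTorch defaults mimicked here):

$$
\mathrm{cross\_entropy}(x, y)
= \frac{1}{N}\sum_{i=1}^{N} -\log \frac{e^{x_{i, y_i}}}{\sum_{j} e^{x_{i, j}}}
= \mathrm{nll}\big(\log\mathrm{softmax}(x),\, y\big),
$$

which is why `nll(log_softmax(input), target)` and `cross_entropy(input, target)` both round to 1.1312 above.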
from .module import Module
from candle import Tensor

import candle


class Embedding(Module):
    r"""A simple lookup table that stores embeddings of a fixed dictionary and size.

    This module is often used to store word embeddings and retrieve them using indices. The input
    to the module is a list of indices, and the output is the corresponding word embeddings.

    Args:
        num_embeddings (int): size of the dictionary of embeddings
        embedding_dim (int): the size of each embedding vector

    Attributes:
        weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim)
            initialized from :math:`\mathcal{N}(0, 1)`

    Shape:
        - Input: :math:`(*)`, IntTensor or LongTensor of arbitrary shape containing the indices to extract
        - Output: :math:`(*, H)`, where `*` is the input shape and :math:`H=\text{embedding\_dim}`
    """

    def __init__(self, num_embeddings: int, embedding_dim: int, device=None) -> None:
        factory_kwargs = {"device": device}
        super().__init__()
        self.num_embeddings = num_embeddings
        self.embedding_dim = embedding_dim
        # Weights are drawn from a standard normal, matching the docstring above.
        self.weight = candle.randn((num_embeddings, embedding_dim), **factory_kwargs)

    def forward(self, indexes: Tensor) -> Tensor:
        final_dims = list(indexes.shape)
        final_dims.append(self.embedding_dim)
        # Flatten the indices, gather the matching rows, then restore the input shape.
        indexes = indexes.flatten_all()
        values = self.weight.index_select(indexes, 0)
        return values.reshape(final_dims)
candle/candle-pyo3/py_src/candle/nn/sparse.py/0
{ "file_path": "candle/candle-pyo3/py_src/candle/nn/sparse.py", "repo_id": "candle", "token_count": 590 }
53
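The Python `Embedding` above boils down to a random weight matrix plus `index_select` and a reshape. The same lookup written directly against candle's Rust tensor API, as a hedged sketch (this is not the candle_nn embedding module, just the underlying ops):

```rust
// Sketch of the lookup Embedding.forward above performs:
// flatten the indices, gather rows with index_select, then restore the shape.
use candle::{Device, Result, Tensor};

fn embedding_lookup(weight: &Tensor, indexes: &Tensor) -> Result<Tensor> {
    let (_num_embeddings, embedding_dim) = weight.dims2()?;
    let mut final_dims = indexes.dims().to_vec();
    final_dims.push(embedding_dim);
    let flat = indexes.flatten_all()?;
    weight.index_select(&flat, 0)?.reshape(final_dims)
}

fn main() -> Result<()> {
    let dev = Device::Cpu;
    let weight = Tensor::randn(0f32, 1f32, (10, 4), &dev)?; // 10 embeddings of size 4
    let idx = Tensor::new(&[[1u32, 3], [7, 0]], &dev)?; // arbitrary (2, 2) indices
    let out = embedding_lookup(&weight, &idx)?;
    assert_eq!(out.dims(), &[2, 2, 4]);
    Ok(())
}
```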
//! Implementation of BLIP text encoder/decoder. //! //! - 📝 [Paper](https://arxiv.org/abs/2201.12086). BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation" //! //! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-BLIP-Image-Captioning) //! - 💻 [GH Link](https://github.com/salesforce/BLIP) //! - 🤗 [HF Link](https://huggingface.co/Salesforce/blip-image-captioning-base) //! - 📝 [Paper](https://arxiv.org/abs/2201.12086) //! use super::with_tracing::{linear, Embedding, Linear}; use candle::{Module, Result, Tensor, D}; use candle_nn::{layer_norm, LayerNorm, VarBuilder}; use serde::Deserialize; #[derive(Debug, Clone, Deserialize)] pub struct Config { pub vocab_size: usize, pub hidden_size: usize, pub encoder_hidden_size: usize, pub intermediate_size: usize, pub projection_dim: usize, pub num_hidden_layers: usize, pub num_attention_heads: usize, pub max_position_embeddings: usize, pub hidden_act: candle_nn::Activation, pub layer_norm_eps: f64, pub is_decoder: bool, } #[derive(Debug, Clone)] struct TextEmbeddings { word_embeddings: Embedding, position_embeddings: Embedding, layer_norm: LayerNorm, position_ids: Tensor, } impl TextEmbeddings { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let word_embeddings = Embedding::new(cfg.vocab_size, cfg.hidden_size, vb.pp("word_embeddings"))?; let position_embeddings = Embedding::new( cfg.max_position_embeddings, cfg.hidden_size, vb.pp("position_embeddings"), )?; let layer_norm = layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; let position_ids = Tensor::arange(0, cfg.max_position_embeddings as u32, vb.device())?.unsqueeze(0)?; Ok(Self { word_embeddings, position_embeddings, layer_norm, position_ids, }) } fn forward(&self, xs: &Tensor, past_kv_len: usize) -> Result<Tensor> { let seq_len = xs.dim(1)?; let position_ids = self.position_ids.narrow(1, past_kv_len, seq_len)?; let embeddings = self.word_embeddings.forward(xs)?; let position_embeddings = self.position_embeddings.forward(&position_ids)?; (embeddings + position_embeddings)?.apply(&self.layer_norm) } } #[derive(Debug, Clone)] struct TextSelfAttention { query: Linear, key: Linear, value: Linear, attention_head_size: usize, num_attention_heads: usize, attention_scale: f64, kv_cache: Option<(Tensor, Tensor)>, } impl TextSelfAttention { fn new(cfg: &Config, is_cross_attention: bool, vb: VarBuilder) -> Result<Self> { let num_attention_heads = cfg.num_attention_heads; let attention_head_size = cfg.hidden_size / num_attention_heads; let all_head_size = cfg.num_attention_heads * attention_head_size; let query = linear(cfg.hidden_size, all_head_size, vb.pp("query"))?; let in_size = if is_cross_attention { cfg.encoder_hidden_size } else { cfg.hidden_size }; let key = linear(in_size, all_head_size, vb.pp("key"))?; let value = linear(in_size, all_head_size, vb.pp("value"))?; let attention_scale = 1f64 / (attention_head_size as f64).sqrt(); Ok(Self { query, key, value, attention_head_size, num_attention_heads, attention_scale, kv_cache: None, }) } fn transpose_for_scores(&self, xs: &Tensor) -> Result<Tensor> { let (b_size, seq_len, _) = xs.dims3()?; xs.reshape(( b_size, seq_len, self.num_attention_heads, self.attention_head_size, ))? .permute((0, 2, 1, 3)) } fn reset_kv_cache(&mut self) { self.kv_cache = None } fn forward( &mut self, xs: &Tensor, encoder_hidden_states: Option<&Tensor>, attention_mask: Option<&Tensor>, ) -> Result<Tensor> { let query = self .transpose_for_scores(&self.query.forward(xs)?)? 
.contiguous()?; let (key, value) = match encoder_hidden_states { None => { let key = self.transpose_for_scores(&self.key.forward(xs)?)?; let value = self.transpose_for_scores(&self.value.forward(xs)?)?; let (key, value) = match &self.kv_cache { None => (key, value), Some((prev_key, prev_value)) => { let key = Tensor::cat(&[prev_key, &key], 2)?; let value = Tensor::cat(&[prev_value, &value], 2)?; (key, value) } }; self.kv_cache = Some((key.clone(), value.clone())); (key, value) } Some(xs) => { let key = self.transpose_for_scores(&self.key.forward(xs)?)?; let value = self.transpose_for_scores(&self.value.forward(xs)?)?; // no kv-cache in this case, but the results could probably be memoized. (key, value) } }; let key = key.contiguous()?; let value = value.contiguous()?; let attention_scores = query.matmul(&key.t()?)?; let attention_scores = (attention_scores * self.attention_scale)?; let attention_scores = match attention_mask { Some(mask) => attention_scores.broadcast_add(mask)?, None => attention_scores, }; let attention_probs = candle_nn::ops::softmax_last_dim(&attention_scores)?; attention_probs .matmul(&value)? .permute((0, 2, 1, 3))? .flatten_from(D::Minus2) } } #[derive(Debug, Clone)] struct TextSelfOutput { dense: Linear, layer_norm: LayerNorm, } impl TextSelfOutput { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; let layer_norm = layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; Ok(Self { dense, layer_norm }) } fn forward(&self, xs: &Tensor, input_tensor: &Tensor) -> Result<Tensor> { (xs.apply(&self.dense) + input_tensor)?.apply(&self.layer_norm) } } #[derive(Debug, Clone)] struct TextAttention { self_: TextSelfAttention, output: TextSelfOutput, } impl TextAttention { fn new(cfg: &Config, is_cross_attention: bool, vb: VarBuilder) -> Result<Self> { let self_ = TextSelfAttention::new(cfg, is_cross_attention, vb.pp("self"))?; let output = TextSelfOutput::new(cfg, vb.pp("output"))?; Ok(Self { self_, output }) } fn reset_kv_cache(&mut self) { self.self_.reset_kv_cache() } fn forward( &mut self, xs: &Tensor, encoder_hidden_states: Option<&Tensor>, attention_mask: Option<&Tensor>, ) -> Result<Tensor> { let self_outputs = self .self_ .forward(xs, encoder_hidden_states, attention_mask)?; self.output.forward(&self_outputs, xs) } } #[derive(Debug, Clone)] struct TextIntermediate { dense: Linear, intermediate_act_fn: candle_nn::Activation, } impl TextIntermediate { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.intermediate_size, vb.pp("dense"))?; Ok(Self { dense, intermediate_act_fn: cfg.hidden_act, }) } } impl Module for TextIntermediate { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.dense)?.apply(&self.intermediate_act_fn) } } #[derive(Debug, Clone)] struct TextOutput { dense: Linear, layer_norm: LayerNorm, } impl TextOutput { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.intermediate_size, cfg.hidden_size, vb.pp("dense"))?; let layer_norm = layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; Ok(Self { dense, layer_norm }) } fn forward(&self, xs: &Tensor, input_tensor: &Tensor) -> Result<Tensor> { (xs.apply(&self.dense)? 
+ input_tensor)?.apply(&self.layer_norm) } } #[derive(Debug, Clone)] struct TextLayer { attention: TextAttention, cross_attention: Option<TextAttention>, intermediate: TextIntermediate, output: TextOutput, } impl TextLayer { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let attention = TextAttention::new(cfg, false, vb.pp("attention"))?; let cross_attention = if cfg.is_decoder { Some(TextAttention::new(cfg, true, vb.pp("crossattention"))?) } else { None }; let intermediate = TextIntermediate::new(cfg, vb.pp("intermediate"))?; let output = TextOutput::new(cfg, vb.pp("output"))?; Ok(Self { attention, cross_attention, intermediate, output, }) } fn reset_kv_cache(&mut self) { self.attention.reset_kv_cache(); if let Some(ca) = &mut self.cross_attention { ca.reset_kv_cache() } } fn forward( &mut self, xs: &Tensor, encoder_hidden_states: &Tensor, attention_mask: &Tensor, ) -> Result<Tensor> { let attention_output = self.attention.forward(xs, None, Some(attention_mask))?; let attention_output = match &mut self.cross_attention { Some(ca) => ca.forward(&attention_output, Some(encoder_hidden_states), None)?, None => candle::bail!("expected some cross-attn"), }; let intermediate_output = self.intermediate.forward(&attention_output)?; self.output.forward(&intermediate_output, &attention_output) } } #[derive(Debug, Clone)] struct TextEncoder { layers: Vec<TextLayer>, } impl TextEncoder { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb = vb.pp("layer"); let mut layers = Vec::with_capacity(cfg.num_hidden_layers); for i in 0..cfg.num_hidden_layers { let layer = TextLayer::new(cfg, vb.pp(i))?; layers.push(layer) } Ok(Self { layers }) } fn reset_kv_cache(&mut self) { self.layers.iter_mut().for_each(|l| l.reset_kv_cache()) } fn forward( &mut self, xs: &Tensor, encoder_hidden_states: &Tensor, attention_mask: &Tensor, ) -> Result<Tensor> { let mut xs = xs.clone(); for layer in self.layers.iter_mut() { xs = layer.forward(&xs, encoder_hidden_states, attention_mask)? } Ok(xs) } } #[derive(Debug, Clone)] pub struct TextPooler { dense: Linear, } impl TextPooler { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; Ok(Self { dense }) } } impl Module for TextPooler { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.narrow(D::Minus1, 0, 1)? .squeeze(D::Minus1)? .apply(&self.dense)? .tanh() } } #[derive(Debug, Clone)] struct TextPredictionHeadTransform { dense: Linear, transform_act_fn: candle_nn::Activation, layer_norm: LayerNorm, } impl TextPredictionHeadTransform { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; let layer_norm = layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; Ok(Self { dense, transform_act_fn: cfg.hidden_act, layer_norm, }) } } impl Module for TextPredictionHeadTransform { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.dense)? .apply(&self.transform_act_fn)? 
.apply(&self.layer_norm) } } #[derive(Debug, Clone)] struct TextLMPredictionHead { transform: TextPredictionHeadTransform, decoder: Linear, } impl TextLMPredictionHead { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let transform = TextPredictionHeadTransform::new(cfg, vb.pp("transform"))?; let weight = vb.get((cfg.vocab_size, cfg.hidden_size), "decoder.weight")?; let bias = vb.get(cfg.vocab_size, "bias")?; let decoder = Linear::from_weights(weight, Some(bias)); Ok(Self { transform, decoder }) } } impl Module for TextLMPredictionHead { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.transform)?.apply(&self.decoder) } } #[derive(Debug, Clone)] struct TextOnlyMLMHead { predictions: TextLMPredictionHead, } impl TextOnlyMLMHead { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let predictions = TextLMPredictionHead::new(cfg, vb.pp("predictions"))?; Ok(Self { predictions }) } } impl Module for TextOnlyMLMHead { fn forward(&self, xs: &Tensor) -> Result<Tensor> { self.predictions.forward(xs) } } #[derive(Debug, Clone)] struct TextModel { embeddings: TextEmbeddings, encoder: TextEncoder, past_kv_len: usize, // We do not need the pooler for caption generation } impl TextModel { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let embeddings = TextEmbeddings::new(cfg, vb.pp("embeddings"))?; let encoder = TextEncoder::new(cfg, vb.pp("encoder"))?; Ok(Self { embeddings, encoder, past_kv_len: 0, }) } fn forward( &mut self, input_ids: &Tensor, encoder_hidden_states: &Tensor, attention_mask: &Tensor, ) -> Result<Tensor> { let (_b_sz, seq_len) = input_ids.dims2()?; let embedding_output = self.embeddings.forward(input_ids, self.past_kv_len)?; let sequence_output = self.encoder .forward(&embedding_output, encoder_hidden_states, attention_mask)?; self.past_kv_len += seq_len; // We're interested in the sequence-output rather than the pooled-output. Ok(sequence_output) } fn reset_kv_cache(&mut self) { self.past_kv_len = 0; self.encoder.reset_kv_cache(); } } #[derive(Debug, Clone)] pub struct TextLMHeadModel { bert: TextModel, cls: TextOnlyMLMHead, } impl TextLMHeadModel { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let bert = TextModel::new(cfg, vb.pp("bert"))?; let cls = TextOnlyMLMHead::new(cfg, vb.pp("cls"))?; Ok(Self { bert, cls }) } pub fn forward( &mut self, input_ids: &Tensor, encoder_hidden_states: &Tensor, ) -> Result<Tensor> { let seq_len = input_ids.dim(1)?; let mask: Vec<_> = (0..seq_len) .flat_map(|i| (0..seq_len).map(move |j| if j > i { f32::NEG_INFINITY } else { 0f32 })) .collect(); let mask = Tensor::from_vec(mask, (seq_len, seq_len), input_ids.device())?; let sequence_output = self.bert.forward(input_ids, encoder_hidden_states, &mask)?; let prediction_scores = self.cls.forward(&sequence_output)?; // return_logits is false so we don't discard the last sequence element. Ok(prediction_scores) } pub fn reset_kv_cache(&mut self) { self.bert.reset_kv_cache() } }
candle/candle-transformers/src/models/blip_text.rs/0
{ "file_path": "candle/candle-transformers/src/models/blip_text.rs", "repo_id": "candle", "token_count": 7345 }
54
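`TextLMHeadModel::forward` above builds its causal attention mask inline on every call. Pulled out as a standalone sketch (the helper name is ours, the logic mirrors the file):

```rust
// Standalone sketch of the causal attention mask built inside
// TextLMHeadModel::forward above: position i may attend to positions j <= i,
// every j > i gets -inf so it vanishes after the softmax.
use candle::{Device, Result, Tensor};

fn causal_mask(seq_len: usize, device: &Device) -> Result<Tensor> {
    let mask: Vec<f32> = (0..seq_len)
        .flat_map(|i| (0..seq_len).map(move |j| if j > i { f32::NEG_INFINITY } else { 0. }))
        .collect();
    Tensor::from_vec(mask, (seq_len, seq_len), device)
}

fn main() -> Result<()> {
    let mask = causal_mask(3, &Device::Cpu)?;
    // [[0, -inf, -inf], [0, 0, -inf], [0, 0, 0]]
    println!("{:?}", mask.to_vec2::<f32>()?);
    Ok(())
}
```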
//! Implementation of the Depth Anything model from FAIR. //! //! See: //! - ["Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data"](https://github.com/LiheYoung/Depth-Anything) //! use std::sync::Arc; use candle::D::Minus1; use candle::{Module, Result, Tensor}; use candle_nn::ops::Identity; use candle_nn::{ batch_norm, conv2d, conv2d_no_bias, conv_transpose2d, linear, seq, Activation, BatchNorm, BatchNormConfig, Conv2d, Conv2dConfig, ConvTranspose2dConfig, Sequential, VarBuilder, }; use crate::models::dinov2::DinoVisionTransformer; pub struct DepthAnythingV2Config { out_channel_sizes: [usize; 4], in_channel_size: usize, // embed_dim in the Dino model num_features: usize, use_batch_norm: bool, use_class_token: bool, layer_ids_vits: Vec<usize>, input_image_size: usize, target_patch_size: usize, } impl DepthAnythingV2Config { #[allow(clippy::too_many_arguments)] pub fn new( out_channel_sizes: [usize; 4], in_channel_size: usize, num_features: usize, use_batch_norm: bool, use_class_token: bool, layer_ids_vits: Vec<usize>, input_image_size: usize, target_patch_size: usize, ) -> Self { Self { out_channel_sizes, in_channel_size, num_features, use_batch_norm, use_class_token, layer_ids_vits, input_image_size, target_patch_size, } } pub fn vit_small() -> Self { Self { out_channel_sizes: [48, 96, 192, 384], in_channel_size: 384, num_features: 64, use_batch_norm: false, use_class_token: false, layer_ids_vits: vec![2, 5, 8, 11], input_image_size: 518, target_patch_size: 518 / 14, } } pub fn vit_base() -> Self { Self { out_channel_sizes: [96, 192, 384, 768], in_channel_size: 768, num_features: 128, use_batch_norm: false, use_class_token: false, layer_ids_vits: vec![2, 5, 8, 11], input_image_size: 518, target_patch_size: 518 / 14, } } pub fn vit_large() -> Self { Self { out_channel_sizes: [256, 512, 1024, 1024], in_channel_size: 1024, num_features: 256, use_batch_norm: false, use_class_token: false, layer_ids_vits: vec![4, 11, 17, 23], input_image_size: 518, target_patch_size: 518 / 14, } } pub fn vit_giant() -> Self { Self { out_channel_sizes: [1536, 1536, 1536, 1536], in_channel_size: 1536, num_features: 384, use_batch_norm: false, use_class_token: false, layer_ids_vits: vec![9, 19, 29, 39], input_image_size: 518, target_patch_size: 518 / 14, } } } pub struct ResidualConvUnit { activation: Activation, conv1: Conv2d, conv2: Conv2d, batch_norm1: Option<BatchNorm>, batch_norm2: Option<BatchNorm>, } impl ResidualConvUnit { pub fn new( conf: &DepthAnythingV2Config, activation: Activation, vb: VarBuilder, ) -> Result<Self> { const KERNEL_SIZE: usize = 3; let conv_cfg = Conv2dConfig { padding: 1, stride: 1, dilation: 1, groups: 1, cudnn_fwd_algo: None, }; let conv1 = conv2d( conf.num_features, conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("conv1"), )?; let conv2 = conv2d( conf.num_features, conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("conv2"), )?; let (batch_norm1, batch_norm2) = match conf.use_batch_norm { true => { let batch_norm_cfg = BatchNormConfig { eps: 1e-05, remove_mean: false, affine: true, momentum: 0.1, }; ( Some(batch_norm(conf.num_features, batch_norm_cfg, vb.pp("bn1"))?), Some(batch_norm(conf.num_features, batch_norm_cfg, vb.pp("bn2"))?), ) } false => (None, None), }; Ok(Self { activation, conv1, conv2, batch_norm1, batch_norm2, }) } } impl Module for ResidualConvUnit { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let out = self.activation.forward(xs)?; let out = self.conv1.forward(&out)?; let out = if let Some(batch_norm1) = &self.batch_norm1 { 
batch_norm1.forward_train(&out)? } else { out }; let out = self.activation.forward(&out)?; let out = self.conv2.forward(&out)?; let out = if let Some(batch_norm2) = &self.batch_norm2 { batch_norm2.forward_train(&out)? } else { out }; out + xs } } pub struct FeatureFusionBlock { res_conv_unit1: ResidualConvUnit, res_conv_unit2: ResidualConvUnit, output_conv: Conv2d, target_patch_size: usize, } impl FeatureFusionBlock { pub fn new( conf: &DepthAnythingV2Config, target_patch_size: usize, activation: Activation, vb: VarBuilder, ) -> Result<Self> { const KERNEL_SIZE: usize = 1; let conv_cfg = Conv2dConfig { padding: 0, stride: 1, dilation: 1, groups: 1, cudnn_fwd_algo: None, }; let output_conv = conv2d( conf.num_features, conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("out_conv"), )?; let res_conv_unit1 = ResidualConvUnit::new(conf, activation, vb.pp("resConfUnit1"))?; let res_conv_unit2 = ResidualConvUnit::new(conf, activation, vb.pp("resConfUnit2"))?; Ok(Self { res_conv_unit1, res_conv_unit2, output_conv, target_patch_size, }) } } impl Module for FeatureFusionBlock { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let out = self.res_conv_unit2.forward(xs)?; let out = out.interpolate2d(self.target_patch_size, self.target_patch_size)?; self.output_conv.forward(&out) } } pub struct Scratch { layer1_rn: Conv2d, layer2_rn: Conv2d, layer3_rn: Conv2d, layer4_rn: Conv2d, refine_net1: FeatureFusionBlock, refine_net2: FeatureFusionBlock, refine_net3: FeatureFusionBlock, refine_net4: FeatureFusionBlock, output_conv1: Conv2d, output_conv2: Sequential, } impl Scratch { pub fn new(conf: &DepthAnythingV2Config, vb: VarBuilder) -> Result<Self> { const KERNEL_SIZE: usize = 3; let conv_cfg = Conv2dConfig { padding: 1, stride: 1, dilation: 1, groups: 1, cudnn_fwd_algo: None, }; let layer1_rn = conv2d_no_bias( conf.out_channel_sizes[0], conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("layer1_rn"), )?; let layer2_rn = conv2d_no_bias( conf.out_channel_sizes[1], conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("layer2_rn"), )?; let layer3_rn = conv2d_no_bias( conf.out_channel_sizes[2], conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("layer3_rn"), )?; let layer4_rn = conv2d_no_bias( conf.out_channel_sizes[3], conf.num_features, KERNEL_SIZE, conv_cfg, vb.pp("layer4_rn"), )?; let refine_net1 = FeatureFusionBlock::new( conf, conf.target_patch_size * 8, Activation::Relu, vb.pp("refinenet1"), )?; let refine_net2 = FeatureFusionBlock::new( conf, conf.target_patch_size * 4, Activation::Relu, vb.pp("refinenet2"), )?; let refine_net3 = FeatureFusionBlock::new( conf, conf.target_patch_size * 2, Activation::Relu, vb.pp("refinenet3"), )?; let refine_net4 = FeatureFusionBlock::new( conf, conf.target_patch_size, Activation::Relu, vb.pp("refinenet4"), )?; let conv_cfg = Conv2dConfig { padding: 1, stride: 1, dilation: 1, groups: 1, cudnn_fwd_algo: None, }; let output_conv1 = conv2d( conf.num_features, conf.num_features / 2, KERNEL_SIZE, conv_cfg, vb.pp("output_conv1"), )?; let output_conv2 = seq(); const HEAD_FEATURES_2: usize = 32; const OUT_CHANNELS_2: usize = 1; const KERNEL_SIZE_2: usize = 1; let output_conv2 = output_conv2.add(conv2d( conf.num_features / 2, HEAD_FEATURES_2, KERNEL_SIZE, conv_cfg, vb.pp("output_conv2").pp("0"), )?); let output_conv2 = output_conv2 .add(Activation::Relu) .add(conv2d( HEAD_FEATURES_2, OUT_CHANNELS_2, KERNEL_SIZE_2, conv_cfg, vb.pp("output_conv2").pp("2"), )?) 
.add(Activation::Relu); Ok(Self { layer1_rn, layer2_rn, layer3_rn, layer4_rn, refine_net1, refine_net2, refine_net3, refine_net4, output_conv1, output_conv2, }) } } const NUM_CHANNELS: usize = 4; pub struct DPTHead { projections: Vec<Conv2d>, resize_layers: Vec<Box<dyn Module>>, readout_projections: Vec<Sequential>, scratch: Scratch, use_class_token: bool, input_image_size: usize, target_patch_size: usize, } impl DPTHead { pub fn new(conf: &DepthAnythingV2Config, vb: VarBuilder) -> Result<Self> { let mut projections: Vec<Conv2d> = Vec::with_capacity(conf.out_channel_sizes.len()); for (conv_index, out_channel_size) in conf.out_channel_sizes.iter().enumerate() { projections.push(conv2d( conf.in_channel_size, *out_channel_size, 1, Default::default(), vb.pp("projects").pp(conv_index.to_string()), )?); } let resize_layers: Vec<Box<dyn Module>> = vec![ Box::new(conv_transpose2d( conf.out_channel_sizes[0], conf.out_channel_sizes[0], 4, ConvTranspose2dConfig { padding: 0, stride: 4, dilation: 1, output_padding: 0, }, vb.pp("resize_layers").pp("0"), )?), Box::new(conv_transpose2d( conf.out_channel_sizes[1], conf.out_channel_sizes[1], 2, ConvTranspose2dConfig { padding: 0, stride: 2, dilation: 1, output_padding: 0, }, vb.pp("resize_layers").pp("1"), )?), Box::new(Identity::new()), Box::new(conv2d( conf.out_channel_sizes[3], conf.out_channel_sizes[3], 3, Conv2dConfig { padding: 1, stride: 2, dilation: 1, groups: 1, cudnn_fwd_algo: None, }, vb.pp("resize_layers").pp("3"), )?), ]; let readout_projections = if conf.use_class_token { let rop = Vec::with_capacity(NUM_CHANNELS); for rop_index in 0..NUM_CHANNELS { seq() .add(linear( 2 * conf.in_channel_size, conf.in_channel_size, vb.pp("readout_projects").pp(rop_index.to_string()), )?) .add(Activation::Gelu); } rop } else { vec![] }; let scratch = Scratch::new(conf, vb.pp("scratch"))?; Ok(Self { projections, resize_layers, readout_projections, scratch, use_class_token: conf.use_class_token, input_image_size: conf.input_image_size, target_patch_size: conf.target_patch_size, }) } } impl Module for DPTHead { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let mut out: Vec<Tensor> = Vec::with_capacity(NUM_CHANNELS); for i in 0..NUM_CHANNELS { let x = if self.use_class_token { let x = xs.get(i)?.get(0)?; let class_token = xs.get(i)?.get(1)?; let readout = class_token.unsqueeze(1)?.expand(x.shape())?; let to_cat = [x, readout]; let cat = Tensor::cat(&to_cat, Minus1)?; self.readout_projections[i].forward(&cat)? } else { xs.get(i)? 
}; let x_dims = x.dims(); let x = x.permute((0, 2, 1))?.reshape(( x_dims[0], x_dims[x_dims.len() - 1], self.target_patch_size, self.target_patch_size, ))?; let x = self.projections[i].forward(&x)?; let x = self.resize_layers[i].forward(&x)?; out.push(x); } let layer_1_rn = self.scratch.layer1_rn.forward(&out[0])?; let layer_2_rn = self.scratch.layer2_rn.forward(&out[1])?; let layer_3_rn = self.scratch.layer3_rn.forward(&out[2])?; let layer_4_rn = self.scratch.layer4_rn.forward(&out[3])?; let path4 = self.scratch.refine_net4.forward(&layer_4_rn)?; let res3_out = self .scratch .refine_net3 .res_conv_unit1 .forward(&layer_3_rn)?; let res3_out = path4.add(&res3_out)?; let path3 = self.scratch.refine_net3.forward(&res3_out)?; let res2_out = self .scratch .refine_net2 .res_conv_unit1 .forward(&layer_2_rn)?; let res2_out = path3.add(&res2_out)?; let path2 = self.scratch.refine_net2.forward(&res2_out)?; let res1_out = self .scratch .refine_net1 .res_conv_unit1 .forward(&layer_1_rn)?; let res1_out = path2.add(&res1_out)?; let path1 = self.scratch.refine_net1.forward(&res1_out)?; let out = self.scratch.output_conv1.forward(&path1)?; let out = out.interpolate2d(self.input_image_size, self.input_image_size)?; self.scratch.output_conv2.forward(&out) } } pub struct DepthAnythingV2 { pretrained: Arc<DinoVisionTransformer>, depth_head: DPTHead, conf: DepthAnythingV2Config, } impl DepthAnythingV2 { pub fn new( pretrained: Arc<DinoVisionTransformer>, conf: DepthAnythingV2Config, vb: VarBuilder, ) -> Result<Self> { let depth_head = DPTHead::new(&conf, vb.pp("depth_head"))?; Ok(Self { pretrained, depth_head, conf, }) } } impl Module for DepthAnythingV2 { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let features = self.pretrained.get_intermediate_layers( xs, &self.conf.layer_ids_vits, false, false, true, )?; let depth = self.depth_head.forward(&features)?; depth.relu() } }
candle/candle-transformers/src/models/depth_anything_v2.rs/0
{ "file_path": "candle/candle-transformers/src/models/depth_anything_v2.rs", "repo_id": "candle", "token_count": 9268 }
55
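All of the preset configs above use 518×518 inputs and 14-pixel DINOv2 patches, so `target_patch_size` is 518 / 14 = 37, and the four `FeatureFusionBlock`s in `Scratch` run at 37, 74, 148 and 296 patches per side before `DPTHead` interpolates back to 518 pixels. A tiny sketch of that arithmetic:

```rust
// Spatial sizes implied by DepthAnythingV2Config::vit_small() above:
// the backbone sees 518x518 pixels in 14x14 patches, and each
// FeatureFusionBlock upsamples to a multiple of the base patch grid.
fn main() {
    let input_image_size = 518usize;
    let patch = 14usize;
    let target_patch_size = input_image_size / patch; // 37
    let fusion_sizes = [
        target_patch_size * 8, // refinenet1: 296
        target_patch_size * 4, // refinenet2: 148
        target_patch_size * 2, // refinenet3: 74
        target_patch_size,     // refinenet4: 37
    ];
    println!("patch grid: {target_patch_size}, fusion sizes: {fusion_sizes:?}");
    // The DPT head finally interpolates back to input_image_size (518) pixels.
}
```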
//! MetaVoice Studio ML Models //! //! See MetaVoice's TTS and voice cloning models: //! - [Github](https://github.com/metavoiceio/metavoice-src) //! - [Website](https://studio.metavoice.ai/) use candle::{DType, Device, Error as E, IndexOp, Module, Result, Tensor, D}; use candle_nn::{embedding, linear_b, rms_norm, Embedding, Linear, RmsNorm, VarBuilder}; // Equivalent to torch.repeat_interleave pub(crate) fn repeat_interleave(img: &Tensor, repeats: usize, dim: usize) -> Result<Tensor> { let img = img.unsqueeze(dim + 1)?; let mut dims = img.dims().to_vec(); dims[dim + 1] = repeats; img.broadcast_as(dims)?.flatten(dim, dim + 1) } pub mod speaker_encoder { use super::*; #[derive(Debug, Clone, serde::Deserialize)] pub struct Config { pub sampling_rate: usize, pub partial_n_frames: usize, pub model_hidden_size: usize, pub model_embedding_size: usize, pub model_num_layers: usize, pub mel_window_length: usize, pub mel_window_step: usize, pub mel_n_channels: usize, } impl Config { pub fn cfg() -> Self { Self { sampling_rate: 16_000, partial_n_frames: 160, model_hidden_size: 256, model_embedding_size: 256, model_num_layers: 3, mel_window_length: 25, mel_window_step: 10, mel_n_channels: 40, } } } pub struct Model { lstms: Vec<candle_nn::LSTM>, linear: Linear, cfg: Config, } type Slice = (usize, usize); impl Model { pub fn new(cfg: Config, vb: VarBuilder) -> Result<Self> { let mut lstms = Vec::with_capacity(cfg.model_num_layers); let vb_l = vb.pp("lstm"); for layer_idx in 0..cfg.model_num_layers { let c = candle_nn::LSTMConfig { layer_idx, ..Default::default() }; let lstm = candle_nn::lstm( cfg.mel_n_channels, cfg.model_hidden_size, c, vb_l.pp(layer_idx), )?; lstms.push(lstm) } let linear = linear_b( cfg.model_hidden_size, cfg.model_embedding_size, true, vb.pp("linear"), )?; Ok(Self { lstms, linear, cfg }) } fn compute_partial_slices( &self, n_samples: usize, rate: f64, min_coverage: f64, ) -> (Vec<Slice>, Vec<Slice>) { let c = &self.cfg; // Compute how many frames separate two partial utterances let samples_per_frame = c.sampling_rate * c.mel_window_step / 1000; let n_frames = n_samples / samples_per_frame + 1; let frame_step = (c.sampling_rate as f64 / rate / samples_per_frame as f64).round() as usize; let steps = (n_frames + frame_step).saturating_sub(c.partial_n_frames) + 1; // Compute the slices. let mut wav_slices = vec![]; let mut mel_slices = vec![]; for i in (0..steps).step_by(frame_step) { let mel_range = (i, i + c.partial_n_frames); let wav_range = ( i * samples_per_frame, (i + c.partial_n_frames) * samples_per_frame, ); mel_slices.push(mel_range); wav_slices.push(wav_range); } // Evaluate whether extra padding is warranted or not. 
let last_wav_range = match wav_slices.last() { None => return (wav_slices, mel_slices), Some(l) => *l, }; let coverage = (n_samples - last_wav_range.0) as f64 / (last_wav_range.1 - last_wav_range.0) as f64; if coverage > min_coverage && mel_slices.len() > 1 { mel_slices.pop(); wav_slices.pop(); } (wav_slices, mel_slices) } pub fn embed_utterance( &self, wav: &[f32], mel_filters: &[f32], rate: f64, min_c: f64, device: &Device, ) -> Result<Tensor> { let (wav_slices, mel_slices) = self.compute_partial_slices(wav.len(), rate, min_c); let max_wave_length = match wav_slices.last() { Some(v) => v.1, None => candle::bail!("empty wav slices"), }; let wav = if max_wave_length > wav.len() { let mut wav = wav.to_vec(); wav.resize(max_wave_length - wav.len(), 0.0); std::borrow::Cow::Owned(wav) } else { std::borrow::Cow::Borrowed(wav) }; let mel = crate::models::whisper::audio::log_mel_spectrogram_( wav.as_ref(), mel_filters, /* fft_size */ self.cfg.mel_window_length, /* fft_step */ self.cfg.mel_window_step, self.cfg.mel_n_channels, false, ); let mels = mel_slices .iter() .flat_map(|s| [mel[s.0], mel[s.1]]) .collect::<Vec<_>>(); let mels = Tensor::from_vec(mels, (mel_slices.len(), 2), device)?; let partial_embeds = self.forward(&mels)?; let raw_embed = partial_embeds.mean(0)?; let norm = raw_embed.sqr()?.sum_all()?.sqrt()?; raw_embed.broadcast_div(&norm) } } impl Module for Model { fn forward(&self, xs: &Tensor) -> Result<Tensor> { use candle_nn::RNN; // This is different from the Python transformers version as candle LSTM is batch first. let xs = xs.t()?; let mut xs = xs.clone(); for layer in self.lstms.iter() { let states = layer.seq(&xs)?; xs = layer.states_to_tensor(&states)?; } let xs = xs.t()?; let embeds_raw = xs.apply(&self.linear)?.relu()?; let norm = embeds_raw.sqr()?.sum_keepdim(1)?.sqrt()?; embeds_raw.broadcast_div(&norm) } } } type Rank = u32; pub mod tokenizers { use super::*; use std::collections::HashMap; pub struct BPE { pub re: fancy_regex::Regex, pub end_of_text: usize, pub offset: usize, pub ranks: HashMap<Vec<u8>, Rank>, span: tracing::Span, } impl BPE { pub fn from_json(json: &serde_json::Value, end_of_text: usize) -> Result<Self> { let json = match json.as_object() { None => candle::bail!("json value is not an object"), Some(json) => json, }; let re = match json.get("pat_str") { None => candle::bail!("json object has no pat_str field"), Some(pat_str) => match pat_str.as_str() { None => candle::bail!("pat_str field is not a string"), Some(pat_str) => fancy_regex::Regex::new(pat_str).map_err(E::wrap)?, }, }; let offset = match json.get("offset") { None => candle::bail!("json object has no offset field"), Some(offset) => match offset.as_u64() { None => candle::bail!("offset field is not a positive int"), Some(offset) => offset as usize, }, }; let mut ranks = HashMap::new(); for id in 0u8..=255 { ranks.insert(vec![id], id as u32); } let mergeable_ranks = match json.get("mergeable_ranks") { None => candle::bail!("json object has no mergeable_ranks field"), Some(mr) => match mr.as_object() { None => candle::bail!("mergeable_ranks is not an object"), Some(mr) => mr, }, }; for (key, value) in mergeable_ranks.iter() { let value = match value.as_u64() { None => candle::bail!("mergeable_ranks '{key}' is not a u64"), Some(value) => value as u32, }; if value < 256 { continue; } // No escaping for other keys. 
let key = key.as_bytes().to_vec(); ranks.insert(key, value); } Ok(Self { re, end_of_text, offset, ranks, span: tracing::span!(tracing::Level::TRACE, "bpe"), }) } // Taken from: // https://github.com/openai/tiktoken/blob/1b9faf2779855124f05174adf1383e53689ed94b/src/lib.rs#L16C1-L82C2 fn _byte_pair_merge(&self, piece: &[u8]) -> Vec<(usize, Rank)> { // This is a vector of (start, rank). // The rank is of the pair starting at position start. let mut parts = Vec::with_capacity(piece.len() + 1); // Note that we hash bytes when indexing into `ranks`, not token pairs. As long as we train BPE // the way we currently do, this is equivalent. An easy way to break this would be to decouple // merge priority from token index or to prevent specific token merges. let mut min_rank: (Rank, usize) = (Rank::MAX, usize::MAX); for i in 0..piece.len() - 1 { let rank = *self.ranks.get(&piece[i..i + 2]).unwrap_or(&Rank::MAX); if rank < min_rank.0 { min_rank = (rank, i); } parts.push((i, rank)); } parts.push((piece.len() - 1, Rank::MAX)); parts.push((piece.len(), Rank::MAX)); let get_rank = { #[inline(always)] |parts: &Vec<(usize, Rank)>, i: usize| { if (i + 3) < parts.len() { // Similar to `piece[i..i + 2]` above. The +3 is because we haven't yet deleted // parts[i + 1], see comment in the main loop. *self .ranks .get(&piece[parts[i].0..parts[i + 3].0]) .unwrap_or(&Rank::MAX) } else { Rank::MAX } } }; // If you have n parts and m merges, this does O(mn) work. // We could do something with a heap and do O(m log n) work. // n is often very small so considerations like cache-locality outweigh the algorithmic // complexity downsides of the `parts` vector. while min_rank.0 != Rank::MAX { let i = min_rank.1; // Update parts[i] and parts[i - 1] before removing parts[i + 1], since // `parts.remove(i + 1)` will thrash the cache. 
if i > 0 { parts[i - 1].1 = get_rank(&parts, i - 1); } parts[i].1 = get_rank(&parts, i); parts.remove(i + 1); min_rank = (Rank::MAX, usize::MAX); for (i, &(_, rank)) in parts[..parts.len() - 1].iter().enumerate() { if rank < min_rank.0 { min_rank = (rank, i); } } } parts } pub fn byte_pair_encode(&self, piece: &[u8]) -> Vec<Rank> { if piece.is_empty() { return Vec::new(); } if piece.len() == 1 { return vec![self.ranks[piece]]; } assert!(piece.len() > 1); self._byte_pair_merge(piece) .windows(2) .map(|part| self.ranks[&piece[part[0].0..part[1].0]]) .collect() } pub fn encode(&self, text: &str) -> Result<Vec<u32>> { let _enter = self.span.enter(); let mut bpe_tokens: Vec<u32> = Vec::new(); for word in self.re.find_iter(text) { let word = word.map_err(E::wrap)?; let word_tokens = self.byte_pair_encode(word.as_str().as_bytes()); for &token in word_tokens.iter() { bpe_tokens.push(token + self.offset as u32) } } bpe_tokens.push((self.end_of_text + self.offset) as u32); Ok(bpe_tokens) } } } pub mod gpt { use super::*; #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash)] pub enum NormType { LayerNorm, RMSNorm, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash)] pub enum AttnKernelType { Fa2, TorchAttn, Hand, } #[derive(Debug, Clone, Copy, Eq, PartialEq, Hash)] pub enum NonLinearityType { Gelu, Swiglu, } enum Norm { RMSNorm(candle_nn::RmsNorm), LayerNorm(candle_nn::LayerNorm), } // https://github.com/metavoiceio/metavoice-src/blob/11550bb4e8a1ad032cc1556cc924f7a4e767cbfa/fam/llm/model.py#L27 #[derive(Debug, Clone)] pub struct Config { pub block_size: usize, pub vocab_sizes: Vec<usize>, pub target_vocab_sizes: Vec<usize>, pub n_layer: usize, pub n_head: usize, pub n_embd: usize, pub bias: bool, pub causal: bool, pub spk_emb_on_text: bool, pub norm_type: NormType, pub rmsnorm_eps: f64, pub nonlinearity_type: NonLinearityType, pub swiglu_multiple_of: Option<usize>, pub attn_kernel_type: AttnKernelType, pub kv_cache_enabled: bool, } impl Config { pub fn cfg1b_v0_1() -> Self { Self { n_layer: 6, n_head: 6, n_embd: 384, block_size: 1024, bias: false, vocab_sizes: vec![1538, 1025], causal: false, target_vocab_sizes: vec![1025, 1025, 1025, 1025, 1025, 1025], swiglu_multiple_of: Some(256), norm_type: NormType::LayerNorm, kv_cache_enabled: false, attn_kernel_type: AttnKernelType::TorchAttn, spk_emb_on_text: true, nonlinearity_type: NonLinearityType::Gelu, rmsnorm_eps: 1e-5, } } } impl Norm { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { match cfg.norm_type { NormType::RMSNorm => { let rms_norm = candle_nn::rms_norm(cfg.n_embd, cfg.rmsnorm_eps, vb)?; Ok(Self::RMSNorm(rms_norm)) } NormType::LayerNorm => { let ln_cfg = candle_nn::LayerNormConfig { affine: cfg.bias, ..Default::default() }; let layer_norm = candle_nn::layer_norm(cfg.n_embd, ln_cfg, vb)?; Ok(Self::LayerNorm(layer_norm)) } } } } impl Module for Norm { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { Self::RMSNorm(m) => m.forward(xs), Self::LayerNorm(m) => m.forward(xs), } } } // https://github.com/metavoiceio/metavoice-src/blob/11550bb4e8a1ad032cc1556cc924f7a4e767cbfa/fam/llm/layers/attn.py#L18 struct SelfAttention { c_attn: Linear, c_proj: Linear, n_head: usize, span: tracing::Span, } impl SelfAttention { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { // The different attention variants are likely to be identical but still we only accept // TorchAttn for now. 
if cfg.attn_kernel_type != AttnKernelType::TorchAttn { candle::bail!("only TorchAttn is supported") } if cfg.kv_cache_enabled { candle::bail!("kv_cache_enabled=true is not supported") } let c_attn = linear_b(cfg.n_embd, cfg.n_embd * 3, cfg.bias, vb.pp("c_attn"))?; let c_proj = linear_b(cfg.n_embd, cfg.n_embd, cfg.bias, vb.pp("c_proj"))?; Ok(Self { c_attn, c_proj, n_head: cfg.n_head, span: tracing::span!(tracing::Level::TRACE, "self-attn"), }) } } impl Module for SelfAttention { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (b, t, c) = xs.dims3()?; let c_x = xs .apply(&self.c_attn)? .reshape((b, t, 3, self.n_head, c / self.n_head))?; let q = c_x.i((.., .., 0))?; let k = c_x.i((.., .., 1))?; let v = c_x.i((.., .., 2))?; let q = q.transpose(1, 2)?.contiguous()?; let k = k.transpose(1, 2)?.contiguous()?; let v = v.transpose(1, 2)?.contiguous()?; let att = (q.matmul(&k.t()?)? / (k.dim(D::Minus1)? as f64).sqrt())?; // TODO: causal mask let att = candle_nn::ops::softmax_last_dim(&att)?; let att = att.matmul(&v)?.transpose(1, 2)?; att.reshape((b, t, c))?.apply(&self.c_proj) } } // https://github.com/metavoiceio/metavoice-src/blob/11550bb4e8a1ad032cc1556cc924f7a4e767cbfa/fam/llm/layers/layers.py#L43 #[allow(clippy::upper_case_acronyms)] enum MLP { Gelu { c_fc: Linear, c_proj: Linear, span: tracing::Span, }, Swiglu { w1: Linear, w3: Linear, c_proj: Linear, span: tracing::Span, }, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_dim = 4 * cfg.n_embd; let slf = match cfg.nonlinearity_type { NonLinearityType::Gelu => { let c_fc = linear_b(cfg.n_embd, hidden_dim, cfg.bias, vb.pp("c_fc"))?; let c_proj = linear_b(hidden_dim, cfg.n_embd, cfg.bias, vb.pp("c_proj"))?; Self::Gelu { c_fc, c_proj, span: tracing::span!(tracing::Level::TRACE, "mlp-gelu"), } } NonLinearityType::Swiglu => { let hidden_dim = (2 * hidden_dim) / 3; let swiglu_multiple_of = match cfg.swiglu_multiple_of { None => candle::bail!("swiglu-multiple-of has to be set"), Some(smo) => smo, }; let hidden_dim = swiglu_multiple_of * (hidden_dim + swiglu_multiple_of - 1) / swiglu_multiple_of; let w1 = linear_b(cfg.n_embd, hidden_dim, cfg.bias, vb.pp("w1"))?; let w3 = linear_b(cfg.n_embd, hidden_dim, cfg.bias, vb.pp("w3"))?; let c_proj = linear_b(hidden_dim, cfg.n_embd, cfg.bias, vb.pp("c_proj"))?; Self::Swiglu { w1, w3, c_proj, span: tracing::span!(tracing::Level::TRACE, "mlp-swiglu"), } } }; Ok(slf) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { match self { Self::Gelu { c_fc, c_proj, span } => { let _enter = span.enter(); xs.apply(c_fc)?.gelu()?.apply(c_proj) } Self::Swiglu { w1, w3, c_proj, span, } => { let _enter = span.enter(); let w1 = xs.apply(w1)?; let w3 = xs.apply(w3)?; (w1.silu()? 
* w3)?.apply(c_proj) } } } } // https://github.com/metavoiceio/metavoice-src/blob/11550bb4e8a1ad032cc1556cc924f7a4e767cbfa/fam/llm/layers/combined.py#L7 struct Block { ln_1: Norm, ln_2: Norm, attn: SelfAttention, mlp: MLP, span: tracing::Span, } impl Block { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln_1 = Norm::new(cfg, vb.pp("ln_1"))?; let ln_2 = Norm::new(cfg, vb.pp("ln_2"))?; let attn = SelfAttention::new(cfg, vb.pp("attn"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; Ok(Block { ln_1, ln_2, attn, mlp, span: tracing::span!(tracing::Level::TRACE, "gpt-block"), }) } } impl Module for Block { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let xs = (xs + xs.apply(&self.ln_1)?.apply(&self.attn))?; let xs = (&xs + xs.apply(&self.ln_2)?.apply(&self.mlp))?; Ok(xs) } } // https://github.com/metavoiceio/metavoice-src/blob/11550bb4e8a1ad032cc1556cc924f7a4e767cbfa/fam/llm/model.py#L79 #[allow(clippy::upper_case_acronyms)] pub struct Model { wtes: Vec<candle_nn::Embedding>, wpe: candle_nn::Embedding, h: Vec<Block>, ln_f: Norm, lm_heads: Vec<Linear>, cfg: Config, dtype: DType, span: tracing::Span, } impl Model { pub fn new(cfg: Config, vb: VarBuilder) -> Result<Self> { let vb_t = vb.pp("transformer"); let ln_f = Norm::new(&cfg, vb_t.pp("ln_f"))?; let mut wtes = Vec::with_capacity(cfg.vocab_sizes.len()); let vb_w = vb_t.pp("wtes"); for (idx, vocab_size) in cfg.vocab_sizes.iter().enumerate() { let wte = candle_nn::embedding(*vocab_size, cfg.n_embd, vb_w.pp(idx))?; wtes.push(wte) } let wpe = candle_nn::embedding(cfg.block_size, cfg.n_embd, vb_t.pp("wpe"))?; let mut h = Vec::with_capacity(cfg.n_layer); let vb_h = vb_t.pp("h"); for idx in 0..cfg.n_layer { let block = Block::new(&cfg, vb_h.pp(idx))?; h.push(block) } let mut lm_heads = Vec::with_capacity(cfg.target_vocab_sizes.len()); let vb_l = vb.pp("lm_heads"); for (idx, vocab_size) in cfg.target_vocab_sizes.iter().enumerate() { let head = linear_b(cfg.n_embd, *vocab_size, false, vb_l.pp(idx))?; lm_heads.push(head) } Ok(Self { wtes, wpe, h, ln_f, lm_heads, cfg, dtype: vb.dtype(), span: tracing::span!(tracing::Level::TRACE, "gpt"), }) } pub fn config(&self) -> &Config { &self.cfg } pub fn forward(&self, idx: &Tensor) -> Result<Vec<Tensor>> { let _enter = self.span.enter(); let device = idx.device(); let (b, _num_hierarchies, t) = idx.dims3()?; let pos = Tensor::arange(0u32, t as u32, device)?; let pos_emb = pos.apply(&self.wpe)?; let mut tok_emb = Tensor::zeros((b, t, self.cfg.n_embd), self.dtype, device)?; for (wte_idx, wte) in self.wtes.iter().enumerate() { let emb = idx.i((.., wte_idx, ..))?.apply(wte)?; tok_emb = (tok_emb + emb)?; } // TODO: speaker embs. let spk_emb = 0f64; let mut xs = (pos_emb.broadcast_add(&tok_emb)? + spk_emb)?; for block in self.h.iter() { xs = xs.apply(block)? } let xs = xs.apply(&self.ln_f)?; let mut logits = Vec::with_capacity(self.lm_heads.len()); for lm_head in self.lm_heads.iter() { // non-causal mode only. 
let ys = xs.apply(lm_head)?; logits.push(ys) } Ok(logits) } } } pub mod transformer { use super::*; #[derive(Debug, Clone, serde::Deserialize)] pub struct Config { pub block_size: usize, pub vocab_size: usize, pub n_layer: usize, pub n_head: usize, pub dim: usize, pub speaker_emb_dim: usize, pub intermediate_size: Option<usize>, pub n_local_heads: Option<usize>, pub norm_eps: f64, } impl Config { pub fn cfg1b_v0_1() -> Self { Self { n_layer: 24, n_head: 16, dim: 2048, vocab_size: 2562, speaker_emb_dim: 256, block_size: 2048, intermediate_size: None, n_local_heads: None, norm_eps: 1e-5, } } pub(crate) fn n_local_heads(&self) -> usize { self.n_local_heads.unwrap_or(self.n_head) } pub(crate) fn head_dim(&self) -> usize { self.dim / self.n_head } pub(crate) fn intermediate_size(&self) -> usize { match self.intermediate_size { Some(intermediate_size) => intermediate_size, None => { let hidden_dim = self.dim * 4; let n_hidden = ((2 * hidden_dim) as f64 / 3.) as usize; n_hidden.div_ceil(256) * 256 } } } } #[derive(Debug, Clone)] struct FeedForward { w1: Linear, w2: Linear, w3: Linear, span: tracing::Span, } impl FeedForward { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let i_size = cfg.intermediate_size(); let w1 = linear_b(cfg.dim, i_size, false, vb.pp("swiglu.w1"))?; let w2 = linear_b(i_size, cfg.dim, false, vb.pp("w2"))?; let w3 = linear_b(cfg.dim, i_size, false, vb.pp("swiglu.w3"))?; Ok(Self { w1, w2, w3, span: tracing::span!(tracing::Level::TRACE, "feed-forward"), }) } } impl Module for FeedForward { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let swiglu = (candle_nn::ops::silu(&xs.apply(&self.w1)?)? * xs.apply(&self.w3))?; swiglu.apply(&self.w2) } } #[derive(Debug, Clone)] struct Attention { wqkv: Linear, wo: Linear, dim: usize, kv_size: usize, n_local_heads: usize, head_dim: usize, n_head: usize, kv_cache: Option<(Tensor, Tensor)>, span: tracing::Span, } impl Attention { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let n_local_heads = cfg.n_local_heads(); let head_dim = cfg.head_dim(); let total_head_dim = (cfg.n_head + 2 * n_local_heads) * head_dim; let wqkv = linear_b(cfg.dim, total_head_dim, false, vb.pp("wqkv"))?; let wo = linear_b(cfg.dim, cfg.dim, false, vb.pp("wo"))?; Ok(Self { wqkv, wo, dim: cfg.dim, kv_size: n_local_heads * head_dim, n_local_heads, head_dim, n_head: cfg.n_head, kv_cache: None, span: tracing::span!(tracing::Level::TRACE, "feed-forward"), }) } fn forward(&mut self, xs: &Tensor, _pos: usize, mask: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (b_sz, seqlen, _) = xs.dims3()?; let qkv = xs.apply(&self.wqkv)?; let q = qkv.narrow(D::Minus1, 0, self.dim)?; let k = qkv.narrow(D::Minus1, self.dim, self.kv_size)?; let v = qkv.narrow(D::Minus1, self.dim + self.kv_size, self.kv_size)?; let q = q .reshape((b_sz, seqlen, self.n_head, self.head_dim))? .transpose(1, 2)? .contiguous()?; let k = k .reshape((b_sz, seqlen, self.n_local_heads, self.head_dim))? .transpose(1, 2)?; let v = v .reshape((b_sz, seqlen, self.n_local_heads, self.head_dim))? 
.transpose(1, 2)?; let (k, v) = match &self.kv_cache { None => (k, v), Some((prev_k, prev_v)) => { let k = Tensor::cat(&[prev_k, &k], 2)?; let v = Tensor::cat(&[prev_v, &v], 2)?; (k, v) } }; self.kv_cache = Some((k.clone(), v.clone())); let k = repeat_interleave(&k, self.n_head / self.n_local_heads, 1)?; let v = repeat_interleave(&v, self.n_head / self.n_local_heads, 1)?; let scale = 1f64 / f64::sqrt(self.head_dim as f64); let attn_weights = (q.matmul(&k.transpose(2, 3)?)? * scale)?; let attn_weights = attn_weights.broadcast_add(mask)?; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?; let attn_output = attn_weights.matmul(&v)?; attn_output .transpose(1, 2)? .reshape((b_sz, seqlen, self.dim))? .apply(&self.wo) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Debug, Clone)] struct Block { attention: Attention, feed_forward: FeedForward, ffn_norm: RmsNorm, attention_norm: RmsNorm, span: tracing::Span, } impl Block { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let attention = Attention::new(cfg, vb.pp("attention"))?; let feed_forward = FeedForward::new(cfg, vb.pp("feed_forward"))?; let ffn_norm = rms_norm(cfg.dim, cfg.norm_eps, vb.pp("ffn_norm"))?; let attention_norm = rms_norm(cfg.dim, cfg.norm_eps, vb.pp("attention_norm"))?; Ok(Self { attention, feed_forward, ffn_norm, attention_norm, span: tracing::span!(tracing::Level::TRACE, "block"), }) } fn forward(&mut self, xs: &Tensor, pos: usize, mask: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let hs = xs.apply(&self.attention_norm)?; let hs = (xs + self.attention.forward(&hs, pos, mask))?; &hs + hs.apply(&self.ffn_norm)?.apply(&self.feed_forward) } fn clear_kv_cache(&mut self) { self.attention.clear_kv_cache() } } #[derive(Debug, Clone)] pub struct Model { tok_embeddings: Embedding, pos_embeddings: Embedding, speaker_cond_pos: Linear, layers: Vec<Block>, norm: RmsNorm, output: Linear, spk_cond_mask: Tensor, span: tracing::Span, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let tok_embeddings = embedding(cfg.vocab_size, cfg.dim, vb.pp("tok_embeddings"))?; let pos_embeddings = embedding(cfg.block_size, cfg.dim, vb.pp("pos_embeddings"))?; let speaker_cond_pos = linear_b( cfg.speaker_emb_dim, cfg.dim, false, vb.pp("speaker_cond_pos"), )?; let mut layers = Vec::with_capacity(cfg.n_layer); let vb_l = vb.pp("layers"); for layer_idx in 0..cfg.n_layer { let layer = Block::new(cfg, vb_l.pp(layer_idx))?; layers.push(layer) } let norm = rms_norm(cfg.dim, cfg.norm_eps, vb.pp("norm"))?; let output = linear_b(cfg.dim, cfg.vocab_size, false, vb.pp("output"))?; let dtype = vb.dtype(); let spk_cond_mask = Tensor::cat( &[ Tensor::ones((1, 1, cfg.dim), dtype, vb.device())?, Tensor::zeros((1, 1, cfg.dim), dtype, vb.device())?, ], 0, )?; Ok(Self { tok_embeddings, pos_embeddings, speaker_cond_pos, layers, norm, output, spk_cond_mask, span: tracing::span!(tracing::Level::TRACE, "transformer"), }) } pub fn clear_kv_cache(&mut self) { for layer in self.layers.iter_mut() { layer.clear_kv_cache() } } pub fn forward(&mut self, xs: &Tensor, spk_emb: &Tensor, pos: usize) -> Result<Tensor> { let _enter = self.span.enter(); let (_b_sz, seqlen) = xs.dims2()?; let mask: Vec<_> = (0..seqlen) .flat_map(|i| (0..seqlen).map(move |j| if i < j { f32::NEG_INFINITY } else { 0. 
})) .collect(); let mask = Tensor::from_slice(&mask, (1, 1, seqlen, seqlen), xs.device())?; let input_pos = Tensor::arange(pos as u32, (pos + seqlen) as u32, xs.device())?; let tok_embeddings = xs.apply(&self.tok_embeddings)?; let pos_embeddings = input_pos.apply(&self.pos_embeddings)?; let mut xs = tok_embeddings .broadcast_add(&pos_embeddings)? .broadcast_add( &spk_emb .apply(&self.speaker_cond_pos)? .broadcast_mul(&self.spk_cond_mask)?, )?; let mask = mask.to_dtype(xs.dtype())?; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, pos, &mask)? } xs.narrow(1, seqlen - 1, 1)? .apply(&self.norm)? .apply(&self.output) } } } pub mod adapters { // https://github.com/metavoiceio/metavoice-src/blob/9078234c496d76adbec06df789b6b04b1875f129/fam/llm/adapters/tilted_encodec.py pub struct TiltedEncodec { end_of_audio_token: u32, span: tracing::Span, } impl TiltedEncodec { pub fn new(end_of_audio_token: u32) -> Self { Self { end_of_audio_token, span: tracing::span!(tracing::Level::TRACE, "tilted-encodec"), } } pub fn decode(&self, tokens: &[Vec<u32>]) -> (Vec<u32>, Vec<Vec<u32>>) { let _enter = self.span.enter(); let mut text_ids = vec![]; let mut extracted_audio_ids = vec![]; let mut min_audio_ids_len = usize::MAX; for (book_id, tokens) in tokens.iter().enumerate() { let mut audio_ids = vec![]; for &t in tokens.iter() { #[allow(clippy::comparison_chain)] if t > self.end_of_audio_token { if book_id == 0 { text_ids.push(t) } } else if t < self.end_of_audio_token { audio_ids.push(t) } } min_audio_ids_len = usize::min(min_audio_ids_len, audio_ids.len()); extracted_audio_ids.push(audio_ids) } for audio_ids in extracted_audio_ids.iter_mut() { audio_ids.truncate(min_audio_ids_len) } (text_ids, extracted_audio_ids) } } // https://github.com/metavoiceio/metavoice-src/blob/9078234c496d76adbec06df789b6b04b1875f129/fam/llm/adapters/flattened_encodec.py#L4 pub struct FlattenedInterleavedEncodec2Codebook { end_of_audio_token: u32, span: tracing::Span, } impl FlattenedInterleavedEncodec2Codebook { pub fn new(end_of_audio_token: u32) -> Self { Self { end_of_audio_token, span: tracing::span!(tracing::Level::TRACE, "encodec2codebook"), } } pub fn decode(&self, tokens: &[u32]) -> (Vec<u32>, Vec<u32>, Vec<u32>) { let _enter = self.span.enter(); let mut text_ids = vec![]; let mut audio_ids1 = vec![]; let mut audio_ids2 = vec![]; for &t in tokens.iter() { #[allow(clippy::comparison_chain)] if t < self.end_of_audio_token { audio_ids1.push(t) } else if t < 2 * self.end_of_audio_token { audio_ids2.push(t - self.end_of_audio_token) } else { text_ids.push(t) } } (text_ids, audio_ids1, audio_ids2) } } }
candle/candle-transformers/src/models/metavoice.rs/0
{ "file_path": "candle/candle-transformers/src/models/metavoice.rs", "repo_id": "candle", "token_count": 21765 }
56
//! # MobileNet-v4 //! //! MobileNet-v4 inference implementation based on timm. //! //! ## Paper //! //! ["MobileNetV4 - Universal Models for the Mobile Ecosystem"](https://arxiv.org/abs/2404.10518) //! //! ## References //! //! - [PyTorch Implementation](https://github.com/huggingface/pytorch-image-models/blob/main/timm/models/mobilenetv3.py) use candle::{Result, Tensor, D}; use candle_nn::{ batch_norm, conv2d_no_bias, linear, ops::softmax, Activation, Conv2dConfig, Func, VarBuilder, }; #[derive(Clone, Debug)] enum BlockType { Convolutional { out_channels: usize, kernel: usize, stride: usize, }, UniversalBottleneck { out_channels: usize, start_kernel: usize, mid_kernel: usize, stride: usize, expand: usize, }, EdgeResidual { out_channels: usize, kernel: usize, stride: usize, expand: usize, }, Attention { out_channels: usize, heads: usize, kernel: usize, stride: usize, kv_dim: usize, kv_stride: usize, }, } #[derive(Clone, Debug)] pub struct Config { stem_dim: usize, activation: Activation, stages: [Vec<BlockType>; 5], } #[rustfmt::skip] impl Config { pub fn small() -> Self { Self { stem_dim: 32, activation: Activation::Relu, stages: [ vec![ BlockType::Convolutional { out_channels: 32, kernel: 3, stride: 2}, BlockType::Convolutional { out_channels: 32, kernel: 1, stride: 1}, ], vec![ BlockType::Convolutional { out_channels: 96, kernel: 3, stride: 2}, BlockType::Convolutional { out_channels: 64, kernel: 1, stride: 1}, ], vec![ BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 5, mid_kernel: 5, stride: 2, expand: 3}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 3, mid_kernel: 3, stride: 2, expand: 6}, BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 0, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 0, mid_kernel: 5, stride: 1, expand: 3}, BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 128, start_kernel: 0, mid_kernel: 3, stride: 1, expand: 4}, ], vec![ BlockType::Convolutional { out_channels: 960, kernel: 1, stride: 1}, ], ], } } pub fn medium() -> Self { Self { stem_dim: 32, activation: Activation::Relu, stages: [ vec![ BlockType::EdgeResidual { out_channels: 48, kernel: 3, stride: 2, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 80, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 80, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 2}, ], vec![ BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 6}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, 
BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 5, stride: 2, expand: 6}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 2}, ], vec![ BlockType::Convolutional { out_channels: 960, kernel: 1, stride: 1}, ], ], } } pub fn hybrid_medium() -> Self { Self { stem_dim: 32, activation: Activation::Relu, stages: [ vec![ BlockType::EdgeResidual { out_channels: 48, kernel: 3, stride: 2, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 80, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 80, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 2}, ], vec![ BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 6}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::Attention { out_channels: 160, heads: 4, kernel: 3, stride: 1, kv_stride:2, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 160, heads: 4, kernel: 3, stride: 1, kv_stride:2, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 160, heads: 4, kernel: 3, stride: 1, kv_stride:2, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 160, heads: 4, kernel: 3, stride: 1, kv_stride:2, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 160, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, 
mid_kernel: 5, stride: 2, expand: 6}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 2}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 0, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 256, heads: 4, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 256, heads: 4, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::Attention { out_channels: 256, heads: 4, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 256, heads: 4, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 256, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::Convolutional { out_channels: 960, kernel: 1, stride: 1}, ], ], } } pub fn large() -> Self { Self { stem_dim: 24, activation: Activation::Relu, stages: [ vec![ BlockType::EdgeResidual { out_channels: 48, kernel: 3, stride: 2, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, 
start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::Convolutional { out_channels: 960, kernel: 1, stride: 1}, ], ], } } pub fn hybrid_large() -> Self { Self { stem_dim: 24, activation: Activation::Gelu, stages: [ vec![ BlockType::EdgeResidual { out_channels: 48, kernel: 3, stride: 2, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 96, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 192, heads: 8, kernel: 3, stride: 1, kv_stride:2, kv_dim: 48}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 192, heads: 8, kernel: 3, stride: 1, kv_stride:2, kv_dim: 48}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 192, heads: 8, kernel: 3, stride: 1, kv_stride:2, kv_dim: 48}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::Attention { out_channels: 192, heads: 8, kernel: 3, stride: 1, kv_stride:2, kv_dim: 48}, BlockType::UniversalBottleneck { out_channels: 192, start_kernel: 3, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 2, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, 
BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 3, stride: 1, expand: 4}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 5, stride: 1, expand: 4}, BlockType::Attention { out_channels: 512, heads: 8, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 512, heads: 8, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 512, heads: 8, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, BlockType::Attention { out_channels: 512, heads: 8, kernel: 3, stride: 1, kv_stride:1, kv_dim: 64}, BlockType::UniversalBottleneck { out_channels: 512, start_kernel: 5, mid_kernel: 0, stride: 1, expand: 4}, ], vec![ BlockType::Convolutional { out_channels: 960, kernel: 1, stride: 1}, ], ], } } } fn depthwise_conv( channels: usize, kernel: usize, stride: usize, padding: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let conv2d_cfg = Conv2dConfig { stride, padding, groups: channels, ..Default::default() }; let bn = batch_norm(channels, 1e-5, vb.pp("bn"))?; let conv = conv2d_no_bias(channels, channels, kernel, conv2d_cfg, vb.pp("conv"))?; Ok(Func::new(move |xs| xs.apply(&conv)?.apply_t(&bn, false))) } fn pointwise_conv( in_channels: usize, out_channels: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let conv2d_cfg = Conv2dConfig { ..Default::default() }; let bn = batch_norm(out_channels, 1e-5, vb.pp("bn"))?; let conv = conv2d_no_bias(in_channels, out_channels, 1, conv2d_cfg, vb.pp("conv"))?; Ok(Func::new(move |xs| xs.apply(&conv)?.apply_t(&bn, false))) } //Universal block that uses two pointwise convolutions and all combinations of two depthwise convolutions. 
#[allow(clippy::too_many_arguments)] fn universal_inverted_bottleneck_block( cfg: &Config, in_channels: usize, out_channels: usize, expand: usize, start_kernel: usize, mid_kernel: usize, stride: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let act = cfg.activation; let skip_connection = (in_channels == out_channels) && (stride == 1); let dw_start_stride = if mid_kernel > 0 { 1 } else { stride }; let dw_start = depthwise_conv( in_channels, start_kernel, dw_start_stride, start_kernel / 2, vb.pp("dw_start"), ); let pw_exp = pointwise_conv(in_channels, in_channels * expand, vb.pp("pw_exp"))?; let dw_mid = depthwise_conv( in_channels * expand, mid_kernel, stride, mid_kernel / 2, vb.pp("dw_mid"), ); let pw_proj = pointwise_conv(in_channels * expand, out_channels, vb.pp("pw_proj"))?; let gamma = vb.get(out_channels, "layer_scale.gamma"); Ok(Func::new(move |xs| { let residual = xs.clone(); let mut xs = xs.clone(); if let Ok(f) = &dw_start { xs = xs.apply(f)?; } xs = xs.apply(&pw_exp)?.apply(&act)?; if let Ok(f) = &dw_mid { xs = xs.apply(f)?.apply(&act)?; } xs = xs.apply(&pw_proj)?; if let Ok(g) = &gamma { xs = xs.broadcast_mul(&g.reshape((1, (), 1, 1))?)?; }; if skip_connection { xs = (xs + residual)?; } Ok(xs) })) } // Convolutional block including norm and activation. fn conv_block( cfg: &Config, in_channels: usize, out_channels: usize, kernel: usize, stride: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let conv2d_cfg = Conv2dConfig { stride, padding: kernel / 2, ..Default::default() }; let act = cfg.activation; let bn = batch_norm(out_channels, 1e-5, vb.pp("bn1"))?; let conv = conv2d_no_bias(in_channels, out_channels, kernel, conv2d_cfg, vb.pp("conv"))?; Ok(Func::new(move |xs| { xs.apply(&conv)?.apply_t(&bn, false)?.apply(&act) })) } fn edge_residual_block( cfg: &Config, in_channels: usize, out_channels: usize, kernel: usize, stride: usize, expand: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let conv_exp_cfg = Conv2dConfig { stride, padding: kernel / 2, ..Default::default() }; let conv_pwl_cfg = Conv2dConfig { ..Default::default() }; let act = cfg.activation; let mid_channels = in_channels * expand; let conv_exp = conv2d_no_bias( in_channels, mid_channels, kernel, conv_exp_cfg, vb.pp("conv_exp"), )?; let bn1 = batch_norm(mid_channels, 1e-5, vb.pp("bn1"))?; let conv_pwl = conv2d_no_bias( mid_channels, out_channels, 1, conv_pwl_cfg, vb.pp("conv_pwl"), )?; let bn2 = batch_norm(out_channels, 1e-5, vb.pp("bn2"))?; Ok(Func::new(move |xs| { let xs = xs .apply(&conv_exp)? .apply_t(&bn1, false)? .apply(&act)? .apply(&conv_pwl)? .apply_t(&bn2, false)?; Ok(xs) })) } fn reshape_kv(t: &Tensor) -> Result<Tensor> { let d = t.dims4()?; let t = t .reshape((d.0, d.1, ()))? .transpose(1, 2)? .unsqueeze(1)? .contiguous()?; Ok(t) } fn reshape_query(t: &Tensor, heads: usize, kv_dim: usize) -> Result<Tensor> { let d = t.dims4()?; let t = t .reshape((d.0, heads, kv_dim, ()))? .transpose(D::Minus1, D::Minus2)? .contiguous()?; Ok(t) } fn reshape_output(t: &Tensor, heads: usize, h: usize, w: usize) -> Result<Tensor> { let d = t.dims4()?; let t = t.transpose(1, 2)?; let t = t .reshape((d.0, h, w, d.3 * heads))? .permute((0, 3, 1, 2))? 
.contiguous()?; Ok(t) } // Mobile multi-query attention #[allow(clippy::too_many_arguments)] fn mqa_block( in_channels: usize, out_channels: usize, heads: usize, kernel: usize, stride: usize, kv_dim: usize, kv_stride: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let down_conv2d_cfg = Conv2dConfig { stride: kv_stride, padding: kernel / 2, groups: in_channels, ..Default::default() }; let proj_conv2d_cfg = Conv2dConfig { stride, ..Default::default() }; let skip_connection = (in_channels == out_channels) && (stride == 1); let gamma = vb.get(out_channels, "layer_scale.gamma"); let norm = batch_norm(out_channels, 1e-5, vb.pp("norm"))?; let scale = (kv_dim as f64).powf(-0.5); let vb = vb.pp("attn"); let query_proj = conv2d_no_bias( out_channels, kv_dim * heads, 1, proj_conv2d_cfg, vb.pp("query.proj"), )?; let key_down_conv = conv2d_no_bias( in_channels, out_channels, kernel, down_conv2d_cfg, vb.pp("key.down_conv"), ); let key_norm = batch_norm(out_channels, 1e-5, vb.pp("key.norm")); let key_proj = conv2d_no_bias(out_channels, kv_dim, 1, proj_conv2d_cfg, vb.pp("key.proj"))?; let value_down_conv = conv2d_no_bias( in_channels, out_channels, kernel, down_conv2d_cfg, vb.pp("value.down_conv"), ); let value_norm = batch_norm(out_channels, 1e-5, vb.pp("value.norm")); let value_proj = conv2d_no_bias( out_channels, kv_dim, 1, proj_conv2d_cfg, vb.pp("value.proj"), )?; let output_proj = conv2d_no_bias( kv_dim * heads, out_channels, 1, proj_conv2d_cfg, vb.pp("output.proj"), )?; Ok(Func::new(move |xs| { let (_, _, h, w) = xs.dims4()?; let residual = xs.clone(); let xs = xs.apply_t(&norm, false)?; // Query let q = xs.apply(&query_proj)?; let q = reshape_query(&q, heads, kv_dim)?; let q = (q * scale)?; // Keys let mut k = xs.clone(); if let (Ok(kd), Ok(n)) = (&key_down_conv, &key_norm) { k = k.apply(kd)?.apply_t(n, false)?; } let k = k.apply(&key_proj)?; let k = reshape_kv(&k)?; // Value let mut v = xs.clone(); if let (Ok(vd), Ok(n)) = (&value_down_conv, &value_norm) { v = v.apply(vd)?; v = v.apply_t(n, false)?; } let v = v.apply(&value_proj)?; let v = reshape_kv(&v)?; let attn = q.broadcast_matmul(&(k.transpose(D::Minus2, D::Minus1)?))?; let attn = softmax(&attn, D::Minus1)?; let o = attn.broadcast_matmul(&v)?; let o = reshape_output(&o, heads, h, w)?; let mut xs = o.apply(&output_proj)?; // Layer scale if let Ok(g) = &gamma { xs = xs.broadcast_mul(&g.reshape((1, (), 1, 1))?)?; }; if skip_connection { xs = (xs + residual)?; } Ok(xs) })) } // Stem. fn mobilenetv4_stem(cfg: &Config, vb: VarBuilder) -> Result<Func<'static>> { let conv2d_cfg = Conv2dConfig { stride: 2, padding: 1, ..Default::default() }; let act = cfg.activation; let out_channels = cfg.stem_dim; let bn = batch_norm(out_channels, 1e-5, vb.pp("bn1"))?; let conv = conv2d_no_bias(3, out_channels, 3, conv2d_cfg, vb.pp("conv_stem"))?; Ok(Func::new(move |xs| { let xs = xs.apply(&conv)?.apply_t(&bn, false)?.apply(&act)?; Ok(xs) })) } // The blocks in all the 5 stages of the model. 
fn mobilenetv4_blocks(cfg: &Config, vb: VarBuilder) -> Result<Func<'static>> { let mut in_channels = cfg.stem_dim; let mut blocks = Vec::new(); for stage in 0..5 { let nblocks = cfg.stages[stage].len(); for block in 0..nblocks { match cfg.stages[stage][block] { BlockType::Convolutional { out_channels, kernel, stride, } => { blocks.push(conv_block( cfg, in_channels, out_channels, kernel, stride, vb.pp(format!("{stage}.{block}")), )?); in_channels = out_channels; } BlockType::EdgeResidual { out_channels, kernel, stride, expand, } => { blocks.push(edge_residual_block( cfg, in_channels, out_channels, kernel, stride, expand, vb.pp(format!("{stage}.{block}")), )?); in_channels = out_channels; } BlockType::UniversalBottleneck { out_channels, start_kernel, mid_kernel, stride, expand, } => { blocks.push(universal_inverted_bottleneck_block( cfg, in_channels, out_channels, expand, start_kernel, mid_kernel, stride, vb.pp(format!("{stage}.{block}")), )?); in_channels = out_channels; } BlockType::Attention { out_channels, heads, kernel, stride, kv_dim, kv_stride, } => { blocks.push(mqa_block( in_channels, out_channels, heads, kernel, stride, kv_dim, kv_stride, vb.pp(format!("{stage}.{block}")), )?); in_channels = out_channels; } } } } Ok(Func::new(move |xs| { let mut xs = xs.clone(); for block in blocks.iter() { xs = xs.apply(block)? } Ok(xs) })) } // Classification head. fn mobilenetv4_head( cfg: &Config, outputs: usize, nclasses: usize, vb: VarBuilder, ) -> Result<Func<'static>> { let conv2d_cfg = Conv2dConfig { ..Default::default() }; let act = cfg.activation; let conv = conv2d_no_bias(960, outputs, 1, conv2d_cfg, vb.pp("conv_head"))?; let norm = batch_norm(outputs, 1e-5, vb.pp("norm_head"))?; let cls = linear(outputs, nclasses, vb.pp("classifier"))?; Ok(Func::new(move |xs| { let mut xs = xs.clone(); xs = xs.apply(&conv)?; xs = xs.apply_t(&norm, false)?.apply(&act)?; xs = xs.flatten_from(1)?; xs = xs.apply(&cls)?; Ok(xs) })) } // Build a mobilenetv4 model for a given configuration. fn mobilenetv4_model( cfg: &Config, nclasses: Option<usize>, vb: VarBuilder, ) -> Result<Func<'static>> { let cls = match nclasses { None => None, Some(nclasses) => { let outputs = 1280; let head = mobilenetv4_head(cfg, outputs, nclasses, vb.clone())?; Some(head) } }; let stem = mobilenetv4_stem(cfg, vb.clone())?; let blocks = mobilenetv4_blocks(cfg, vb.pp("blocks"))?; Ok(Func::new(move |xs| { let xs = xs.apply(&stem)?.apply(&blocks)?; let xs = xs.mean_keepdim(D::Minus1)?.mean_keepdim(D::Minus2)?; match &cls { None => Ok(xs), Some(cls) => xs.apply(cls), } })) } pub fn mobilenetv4(cfg: &Config, nclasses: usize, vb: VarBuilder) -> Result<Func<'static>> { mobilenetv4_model(cfg, Some(nclasses), vb) } pub fn mobilenetv4_no_final_layer(cfg: &Config, vb: VarBuilder) -> Result<Func<'static>> { mobilenetv4_model(cfg, None, vb) }
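// A minimal usage sketch for the constructors above (not from the upstream file): it builds the
// small MobileNet-v4 variant from pretrained weights and classifies a single image. The
// safetensors path and the 1000-class head are hypothetical placeholders.
#[cfg(test)]
mod usage_sketch {
    use candle::{DType, Device, Result, Tensor};
    use candle_nn::VarBuilder;

    #[allow(dead_code)]
    fn classify_one_image() -> Result<Tensor> {
        let device = Device::Cpu;
        let vb = unsafe {
            VarBuilder::from_mmaped_safetensors(&["mobilenetv4_small.safetensors"], DType::F32, &device)?
        };
        let model = super::mobilenetv4(&super::Config::small(), 1000, vb)?;
        // A single 224x224 RGB image in NCHW layout; the model pools spatially and returns logits.
        let image = Tensor::zeros((1, 3, 224, 224), DType::F32, &device)?;
        image.apply(&model)
    }
}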
candle/candle-transformers/src/models/mobilenetv4.rs/0
{ "file_path": "candle/candle-transformers/src/models/mobilenetv4.rs", "repo_id": "candle", "token_count": 16908 }
57
//! Microsoft Phi model implementation //! //! The Phi series are decoder-only transformers designed for code and language tasks. //! //! Key characteristics: //! - Decoder-only transformer architecture //! - RoPE embeddings //! - Layer normalization //! - QK normalization //! //! - ⚡ [Interactive Wasm Example](https://huggingface.co/spaces/radames/Candle-phi1-phi2-wasm-demo) //! - 🤗 [HF Link](https://huggingface.co/microsoft/phi-2) //! use crate::models::with_tracing::{layer_norm, linear, Embedding, LayerNorm, Linear}; /// Phi model. /// https://huggingface.co/microsoft/phi-2 /// There is an alternative implementation of the phi model in mixformers.rs. /// This corresponds to the model update made with the following commit: /// https://huggingface.co/microsoft/phi-2/commit/cb2f4533604d8b67de604e7df03bfe6f3ca22869 use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::{Activation, VarBuilder}; use serde::Deserialize; // https://huggingface.co/microsoft/phi-2/blob/main/configuration_phi.py #[derive(Debug, Clone, PartialEq, Deserialize)] pub struct Config { pub(crate) vocab_size: usize, pub(crate) hidden_size: usize, pub(crate) intermediate_size: usize, pub(crate) num_hidden_layers: usize, pub(crate) num_attention_heads: usize, pub(crate) num_key_value_heads: Option<usize>, pub(crate) hidden_act: Activation, pub(crate) max_position_embeddings: usize, pub(crate) layer_norm_eps: f64, pub(crate) tie_word_embeddings: bool, pub(crate) rope_theta: f32, pub(crate) partial_rotary_factor: f64, pub(crate) qk_layernorm: bool, } impl Config { fn num_key_value_heads(&self) -> usize { self.num_key_value_heads.unwrap_or(self.num_attention_heads) } fn head_dim(&self) -> usize { self.hidden_size / self.num_attention_heads } } #[derive(Debug, Clone)] struct RotaryEmbedding { dim: usize, sin: Tensor, cos: Tensor, } impl RotaryEmbedding { fn new(cfg: &Config, dev: &Device) -> Result<Self> { let dim = (cfg.partial_rotary_factor * cfg.head_dim() as f64) as usize; let inv_freq: Vec<_> = (0..dim) .step_by(2) .map(|i| 1f32 / cfg.rope_theta.powf(i as f32 / dim as f32)) .collect(); let inv_freq_len = inv_freq.len(); let inv_freq = Tensor::from_vec(inv_freq, (1, inv_freq_len), dev)?; let t = Tensor::arange(0u32, cfg.max_position_embeddings as u32, dev)? .to_dtype(DType::F32)? .reshape((cfg.max_position_embeddings, 1))?; let freqs = t.matmul(&inv_freq)?; Ok(Self { dim, sin: freqs.sin()?, cos: freqs.cos()?, }) } fn apply_rotary_emb(&self, xs: &Tensor, seqlen_offset: usize) -> Result<Tensor> { let (_b_size, _num_heads, seq_len, _headdim) = xs.dims4()?; let xs_rot = xs.i((.., .., .., ..self.dim))?.contiguous()?; let xs_pass = xs.i((.., .., .., self.dim..))?; let c = self.cos.narrow(0, seqlen_offset, seq_len)?; let s = self.sin.narrow(0, seqlen_offset, seq_len)?; let xs_rot = candle_nn::rotary_emb::rope(&xs_rot, &c, &s)?; Tensor::cat(&[&xs_rot, &xs_pass], D::Minus1) } } #[derive(Debug, Clone)] #[allow(clippy::upper_case_acronyms)] struct MLP { fc1: Linear, fc2: Linear, act: Activation, } impl MLP { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let fc1 = linear(cfg.hidden_size, cfg.intermediate_size, vb.pp("fc1"))?; let fc2 = linear(cfg.intermediate_size, cfg.hidden_size, vb.pp("fc2"))?; Ok(Self { fc1, fc2, // This does not match the mixformers implementation where Gelu is used rather than // GeluNew. 
act: cfg.hidden_act, }) } } impl Module for MLP { fn forward(&self, xs: &Tensor) -> Result<Tensor> { xs.apply(&self.fc1)?.apply(&self.act)?.apply(&self.fc2) } } #[derive(Clone)] struct Attention { q_proj: Linear, k_proj: Linear, v_proj: Linear, dense: Linear, kv_cache: Option<(Tensor, Tensor)>, q_layernorm: Option<LayerNorm>, k_layernorm: Option<LayerNorm>, rotary_emb: RotaryEmbedding, softmax_scale: f64, num_heads: usize, num_kv_heads: usize, head_dim: usize, span: tracing::Span, } fn get_mask(size: usize, device: &Device) -> Result<Tensor> { let mask: Vec<_> = (0..size) .flat_map(|i| (0..size).map(move |j| u8::from(j > i))) .collect(); Tensor::from_slice(&mask, (size, size), device) } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: f32) -> Result<Tensor> { let shape = mask.shape(); let on_true = Tensor::new(on_true, on_false.device())?.broadcast_as(shape.dims())?; let m = mask.where_cond(&on_true, on_false)?; Ok(m) } impl Attention { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let num_heads = cfg.num_attention_heads; let num_kv_heads = cfg.num_key_value_heads(); let head_dim = cfg.head_dim(); let q_proj = linear(cfg.hidden_size, num_heads * head_dim, vb.pp("q_proj"))?; let k_proj = linear(cfg.hidden_size, num_kv_heads * head_dim, vb.pp("k_proj"))?; let v_proj = linear(cfg.hidden_size, num_kv_heads * head_dim, vb.pp("v_proj"))?; let dense = linear(num_heads * head_dim, cfg.hidden_size, vb.pp("dense"))?; // Alternative rope scalings are not supported. let rotary_emb = RotaryEmbedding::new(cfg, vb.device())?; let (q_layernorm, k_layernorm) = if cfg.qk_layernorm { let q_layernorm = layer_norm(head_dim, cfg.layer_norm_eps, vb.pp("q_layernorm"))?; let k_layernorm = layer_norm(head_dim, cfg.layer_norm_eps, vb.pp("k_layernorm"))?; (Some(q_layernorm), Some(k_layernorm)) } else { (None, None) }; let softmax_scale = 1f64 / (head_dim as f64).sqrt(); Ok(Self { q_proj, k_proj, v_proj, dense, kv_cache: None, q_layernorm, k_layernorm, rotary_emb, softmax_scale, num_heads, num_kv_heads, head_dim, span: tracing::span!(tracing::Level::TRACE, "attention"), }) } fn repeat_kv(&self, xs: Tensor) -> Result<Tensor> { crate::utils::repeat_kv(xs, self.num_heads / self.num_kv_heads) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let (b_size, seq_len, _n_embd) = xs.dims3()?; let query_states = self.q_proj.forward(xs)?; let key_states = self.k_proj.forward(xs)?; let value_states = self.v_proj.forward(xs)?; let query_states = match &self.q_layernorm { None => query_states, Some(ln) => query_states.apply(ln)?, }; let key_states = match &self.k_layernorm { None => key_states, Some(ln) => key_states.apply(ln)?, }; let query_states = query_states .reshape((b_size, seq_len, self.num_heads, self.head_dim))? .transpose(1, 2)?; let key_states = key_states .reshape((b_size, seq_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)?; let value_states = value_states .reshape((b_size, seq_len, self.num_kv_heads, self.head_dim))? .transpose(1, 2)?; // Rotary embeddings. let seqlen_offset = match &self.kv_cache { None => 0, Some((prev_k, _)) => prev_k.dim(2)?, }; let query_states = self .rotary_emb .apply_rotary_emb(&query_states, seqlen_offset)?; let key_states = self .rotary_emb .apply_rotary_emb(&key_states, seqlen_offset)?; // KV cache. 
let (key_states, value_states) = match &self.kv_cache { None => (key_states, value_states), Some((prev_k, prev_v)) => { let k = Tensor::cat(&[prev_k, &key_states], 2)?; let v = Tensor::cat(&[prev_v, &value_states], 2)?; (k, v) } }; self.kv_cache = Some((key_states.clone(), value_states.clone())); // Repeat kv. let key_states = self.repeat_kv(key_states)?.contiguous()?; let value_states = self.repeat_kv(value_states)?.contiguous()?; let attn_weights = (query_states .to_dtype(DType::F32)? .contiguous()? .matmul(&key_states.to_dtype(DType::F32)?.t()?)? * self.softmax_scale)?; let attn_weights = match mask { None => attn_weights, Some(mask) => masked_fill( &attn_weights, &mask.broadcast_left((b_size, self.num_heads))?, f32::NEG_INFINITY, )?, }; let attn_weights = candle_nn::ops::softmax_last_dim(&attn_weights)?.to_dtype(value_states.dtype())?; let attn_output = attn_weights.matmul(&value_states)?; let attn_output = attn_output .transpose(1, 2)? .reshape((b_size, seq_len, ()))?; attn_output.apply(&self.dense) } fn clear_kv_cache(&mut self) { self.kv_cache = None } } #[derive(Clone)] struct DecoderLayer { self_attn: Attention, mlp: MLP, input_layernorm: LayerNorm, span: tracing::Span, } impl DecoderLayer { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let self_attn = Attention::new(cfg, vb.pp("self_attn"))?; let mlp = MLP::new(cfg, vb.pp("mlp"))?; let input_layernorm = layer_norm( cfg.hidden_size, cfg.layer_norm_eps, vb.pp("input_layernorm"), )?; Ok(Self { self_attn, mlp, input_layernorm, span: tracing::span!(tracing::Level::TRACE, "block"), }) } fn forward(&mut self, xs: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> { let _enter = self.span.enter(); let residual = xs; let xs = xs.apply(&self.input_layernorm)?; let attn_outputs = self.self_attn.forward(&xs, mask)?; let feed_forward_hidden_states = self.mlp.forward(&xs)?; attn_outputs + feed_forward_hidden_states + residual } fn clear_kv_cache(&mut self) { self.self_attn.clear_kv_cache() } } #[derive(Clone)] pub struct Model { embed_tokens: Embedding, layers: Vec<DecoderLayer>, final_layernorm: LayerNorm, lm_head: Linear, span: tracing::Span, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("model"); let embed_tokens = Embedding::new(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embed_tokens"))?; let final_layernorm = layer_norm( cfg.hidden_size, cfg.layer_norm_eps, vb_m.pp("final_layernorm"), )?; let mut layers = Vec::with_capacity(cfg.num_hidden_layers); let vb_m = vb_m.pp("layers"); for layer_idx in 0..cfg.num_hidden_layers { let layer = DecoderLayer::new(cfg, vb_m.pp(layer_idx))?; layers.push(layer) } let lm_head = linear(cfg.hidden_size, cfg.vocab_size, vb.pp("lm_head"))?; Ok(Self { embed_tokens, layers, final_layernorm, lm_head, span: tracing::span!(tracing::Level::TRACE, "model"), }) } pub fn forward(&mut self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (_b_size, seq_len) = xs.dims2()?; let mut xs = xs.apply(&self.embed_tokens)?; let mask = if seq_len <= 1 { None } else { Some(get_mask(seq_len, xs.device())?) }; for layer in self.layers.iter_mut() { xs = layer.forward(&xs, mask.as_ref())?; } xs.apply(&self.final_layernorm)? .narrow(1, seq_len - 1, 1)? .apply(&self.lm_head)? .squeeze(1) } pub fn clear_kv_cache(&mut self) { self.layers.iter_mut().for_each(|b| b.clear_kv_cache()) } }
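// A minimal greedy-decoding sketch for the model above (not from the upstream file): it
// deserializes the config with serde, loads weights, and takes one greedy step. The
// `config.json` and `model.safetensors` file names are hypothetical placeholders.
#[cfg(test)]
mod usage_sketch {
    use candle::{DType, Device, IndexOp, Result, Tensor};
    use candle_nn::VarBuilder;

    #[allow(dead_code)]
    fn next_token(prompt_ids: &[u32]) -> Result<u32> {
        let device = Device::Cpu;
        let cfg: super::Config = serde_json::from_str(&std::fs::read_to_string("config.json")?)
            .map_err(candle::Error::wrap)?;
        let vb = unsafe {
            VarBuilder::from_mmaped_safetensors(&["model.safetensors"], DType::F32, &device)?
        };
        let mut model = super::Model::new(&cfg, vb)?;
        let input = Tensor::new(prompt_ids, &device)?.unsqueeze(0)?;
        // `forward` already narrows to the last position, so the logits have shape (1, vocab_size).
        let logits = model.forward(&input)?;
        logits.i(0)?.argmax(0)?.to_scalar::<u32>()
    }
}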
candle/candle-transformers/src/models/phi.rs/0
{ "file_path": "candle/candle-transformers/src/models/phi.rs", "repo_id": "candle", "token_count": 6213 }
58
//! Phi3 model implementation with quantization support. //! //! Phi3 is a language model intended for research purposes. //! This implementation provides quantization for reduced memory usage. //! //! Key characteristics: //! - Multi-head attention //! - RMSNorm for layer normalization //! - Rotary positional embeddings (RoPE) //! - Support for quantization //! //! References: //! - [Model Card](https://huggingface.co/microsoft/phi-3) //! use std::collections::HashMap; use candle::quantized::gguf_file; use candle::quantized::QTensor; use candle::{DType, Device, IndexOp, Module, Result, Tensor, D}; use candle_nn::{kv_cache::KvCache, Embedding, RmsNorm}; #[derive(Debug, Clone)] struct QLinear { inner: candle::quantized::QMatMul, span: tracing::Span, } impl QLinear { fn new<R: std::io::Read + std::io::Seek>( ct: &gguf_file::Content, r: &mut R, name: &str, device: &Device, ) -> Result<Self> { let span = tracing::span!(tracing::Level::TRACE, "qmatmul"); let w = ct.tensor(r, &format!("{name}.weight"), device)?; let inner = candle::quantized::QMatMul::from_qtensor(w)?; Ok(Self { inner, span }) } } impl Module for QLinear { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); self.inner.forward(xs) } } #[derive(Debug, Clone)] struct Mlp { ffn_up: QLinear, ffn_down: QLinear, i_size: usize, } impl Module for Mlp { fn forward(&self, xs: &Tensor) -> Result<Tensor> { let up_states = xs.apply(&self.ffn_up)?; let gate = up_states.narrow(D::Minus1, 0, self.i_size)?; let up_states = up_states.narrow(D::Minus1, self.i_size, self.i_size)?; let up_states = (up_states * gate.silu()?)?; up_states.apply(&self.ffn_down) } } fn rms_norm(w: QTensor, eps: f64) -> Result<RmsNorm> { let w = w.dequantize(&w.device())?; let rms = RmsNorm::new(w, eps); Ok(rms) } #[derive(Debug, Clone)] struct LayerWeights { attn_qkv: QLinear, attn_output: QLinear, attn_norm: RmsNorm, ffn_norm: RmsNorm, mlp: Mlp, n_head: usize, n_kv_head: usize, head_dim: usize, cos: Tensor, sin: Tensor, neg_inf: Tensor, kv_cache: KvCache, use_flash_attn: bool, span_attn: tracing::Span, span_rot: tracing::Span, } fn masked_fill(on_false: &Tensor, mask: &Tensor, on_true: &Tensor) -> Result<Tensor> { let shape = mask.shape(); let m = mask.where_cond(&on_true.broadcast_as(shape.dims())?, on_false)?; Ok(m) } impl LayerWeights { fn apply_rotary_emb(&self, xs: &Tensor, index_pos: usize) -> Result<Tensor> { let _enter = self.span_rot.enter(); let (_b_sz, _h, seq_len, _n_embd) = xs.dims4()?; let cos = self.cos.narrow(0, index_pos, seq_len)?; let sin = self.sin.narrow(0, index_pos, seq_len)?; candle_nn::rotary_emb::rope(&xs.contiguous()?, &cos, &sin) } fn forward_attn( &mut self, x: &Tensor, mask: Option<&Tensor>, index_pos: usize, ) -> Result<Tensor> { let _enter = self.span_attn.enter(); let (b_sz, seq_len, n_embd) = x.dims3()?; let qkv = self.attn_qkv.forward(x)?; let query_pos = self.n_head * self.head_dim; let q = qkv.narrow(D::Minus1, 0, query_pos)?; let k = qkv.narrow(D::Minus1, query_pos, self.n_kv_head * self.head_dim)?; let v = qkv.narrow( D::Minus1, query_pos + self.n_kv_head * self.head_dim, self.n_kv_head * self.head_dim, )?; let q = q .reshape((b_sz, seq_len, self.n_head, self.head_dim))? .transpose(1, 2)?; let k = k .reshape((b_sz, seq_len, self.n_kv_head, self.head_dim))? .transpose(1, 2)?; let v = v .reshape((b_sz, seq_len, self.n_kv_head, self.head_dim))? 
.transpose(1, 2)?; let q = self.apply_rotary_emb(&q, index_pos)?.contiguous()?; let k = self.apply_rotary_emb(&k, index_pos)?; let (k, v) = self.kv_cache.append(&k.contiguous()?, &v.contiguous()?)?; let k = crate::utils::repeat_kv(k, self.n_head / self.n_kv_head)?; let v = crate::utils::repeat_kv(v, self.n_head / self.n_kv_head)?; let y = if self.use_flash_attn { // flash-attn expects (b_sz, seq_len, nheads, head_dim) let q = q.to_dtype(DType::BF16)?.transpose(1, 2)?; let k = k.to_dtype(DType::BF16)?.transpose(1, 2)?; let v = v.to_dtype(DType::BF16)?.transpose(1, 2)?; let softmax_scale = 1f32 / (self.head_dim as f32).sqrt(); flash_attn(&q, &k, &v, softmax_scale, seq_len > 1)? .to_dtype(DType::F32)? .transpose(1, 2)? } else { let att = (q.matmul(&k.t()?)? / (self.head_dim as f64).sqrt())?; let att = match mask { None => att, Some(mask) => { let mask = mask.broadcast_as(att.shape())?; masked_fill(&att, &mask, &self.neg_inf)? } }; let att = candle_nn::ops::softmax_last_dim(&att)?; // Convert to contiguous as matmul doesn't support strided vs for now. att.matmul(&v)? }; let y = y.transpose(1, 2)?.reshape(&[b_sz, seq_len, n_embd])?; let y = self.attn_output.forward(&y)?; Ok(y) } } #[cfg(feature = "flash-attn")] fn flash_attn( q: &Tensor, k: &Tensor, v: &Tensor, softmax_scale: f32, causal: bool, ) -> Result<Tensor> { candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal) } #[cfg(not(feature = "flash-attn"))] fn flash_attn(_: &Tensor, _: &Tensor, _: &Tensor, _: f32, _: bool) -> Result<Tensor> { unimplemented!("compile with '--features flash-attn'") } #[derive(Debug, Clone)] pub struct ModelWeights { tok_embeddings: Embedding, layers: Vec<LayerWeights>, output_norm: RmsNorm, output: QLinear, masks: HashMap<usize, Tensor>, span: tracing::Span, span_output: tracing::Span, } fn precomput_freqs_cis( head_dim: usize, max_seq_len: usize, freq_base: f32, device: &Device, ) -> Result<(Tensor, Tensor)> { let theta: Vec<_> = (0..head_dim) .step_by(2) .map(|i| 1f32 / freq_base.powf(i as f32 / head_dim as f32)) .collect(); let theta = Tensor::new(theta.as_slice(), device)?; let idx_theta = Tensor::arange(0, max_seq_len as u32, device)? .to_dtype(DType::F32)? .reshape((max_seq_len, 1))? .matmul(&theta.reshape((1, theta.elem_count()))?)?; let cos = idx_theta.cos()?; let sin = idx_theta.sin()?; Ok((cos, sin)) } impl ModelWeights { pub fn from_gguf<R: std::io::Seek + std::io::Read>( use_flash_attn: bool, ct: gguf_file::Content, reader: &mut R, device: &Device, ) -> Result<Self> { let md_get = |s: &str| match ct.metadata.get(s) { None => candle::bail!("cannot find {s} in metadata"), Some(v) => Ok(v), }; // Parameter extraction from metadata. let head_count = md_get("phi3.attention.head_count")?.to_u32()? as usize; let head_count_kv = md_get("phi3.attention.head_count_kv")?.to_u32()? as usize; let block_count = md_get("phi3.block_count")?.to_u32()? as usize; let embedding_length = md_get("phi3.embedding_length")?.to_u32()? as usize; let max_seq_len = md_get("phi3.context_length")?.to_u32()? as usize; let head_dim = embedding_length / head_count; let i_size = md_get("phi3.feed_forward_length")?.to_u32()? as usize; let rope_dim = md_get("phi3.rope.dimension_count")?.to_u32()? as usize; let rms_eps = md_get("phi3.attention.layer_norm_rms_epsilon")?.to_f32()? 
as f64; let (cos, sin) = precomput_freqs_cis(rope_dim, max_seq_len, 10_000., device)?; let neg_inf = Tensor::new(f32::NEG_INFINITY, device)?; let tok_embeddings = ct.tensor(reader, "token_embd.weight", device)?; let tok_embeddings = tok_embeddings.dequantize(device)?; let output_norm = rms_norm(ct.tensor(reader, "output_norm.weight", device)?, rms_eps)?; let output = QLinear::new(&ct, reader, "output", device)?; let mut layers = Vec::with_capacity(block_count); for layer_idx in 0..block_count { let prefix = format!("blk.{layer_idx}"); let ffn_up = QLinear::new(&ct, reader, &format!("{prefix}.ffn_up"), device)?; let ffn_down = QLinear::new(&ct, reader, &format!("{prefix}.ffn_down"), device)?; let mlp = Mlp { ffn_up, ffn_down, i_size, }; let attn_norm = rms_norm( ct.tensor(reader, &format!("{prefix}.attn_norm.weight"), device)?, rms_eps, )?; let ffn_norm = rms_norm( ct.tensor(reader, &format!("{prefix}.ffn_norm.weight"), device)?, rms_eps, )?; let span_attn = tracing::span!(tracing::Level::TRACE, "attn"); let span_rot = tracing::span!(tracing::Level::TRACE, "attn-rot"); let kv_cache = KvCache::new(2, max_seq_len); layers.push(LayerWeights { attn_qkv: QLinear::new(&ct, reader, &format!("{prefix}.attn_qkv"), device)?, attn_output: QLinear::new(&ct, reader, &format!("{prefix}.attn_output"), device)?, attn_norm, ffn_norm, mlp, n_head: head_count, n_kv_head: head_count_kv, head_dim, cos: cos.clone(), sin: sin.clone(), neg_inf: neg_inf.clone(), kv_cache, use_flash_attn, span_attn, span_rot, }) } let span = tracing::span!(tracing::Level::TRACE, "model"); let span_output = tracing::span!(tracing::Level::TRACE, "output"); Ok(Self { tok_embeddings: Embedding::new(tok_embeddings, embedding_length), layers, output_norm, output, masks: HashMap::new(), span, span_output, }) } fn mask(&mut self, t: usize, device: &Device) -> Result<Tensor> { if let Some(mask) = self.masks.get(&t) { Ok(mask.clone()) } else { let mask: Vec<_> = (0..t) .flat_map(|i| (0..t).map(move |j| u8::from(j > i))) .collect(); let mask = Tensor::from_slice(&mask, (t, t), device)?; self.masks.insert(t, mask.clone()); Ok(mask) } } pub fn forward(&mut self, xs: &Tensor, index_pos: usize) -> Result<Tensor> { let (_b_sz, seq_len) = xs.dims2()?; let mask = if seq_len == 1 { None } else { Some(self.mask(seq_len, xs.device())?) }; let _enter = self.span.enter(); let mut xs = self.tok_embeddings.forward(xs)?; for layer in self.layers.iter_mut() { let residual = &xs; let ys = xs.apply(&layer.attn_norm)?; let ys = layer.forward_attn(&ys, mask.as_ref(), index_pos)?; let ys = (ys + residual)?; let residual = &ys; let ys = ys.apply(&layer.ffn_norm)?; let ys = layer.mlp.forward(&ys)?; xs = (ys + residual)? } let xs = xs.apply(&self.output_norm)?.i((.., seq_len - 1, ..))?; let _enter = self.span_output.enter(); self.output.forward(&xs) } }
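// A minimal usage sketch for the GGUF loader above (not from the upstream file): it reads a
// quantized checkpoint and computes the logits for a prompt. The file name is a hypothetical
// placeholder; `index_pos` is 0 for the first pass and grows with the KV cache afterwards.
#[cfg(test)]
mod usage_sketch {
    use candle::quantized::gguf_file;
    use candle::{Device, Result, Tensor};

    #[allow(dead_code)]
    fn prompt_logits(prompt_ids: &[u32]) -> Result<Tensor> {
        let device = Device::Cpu;
        let mut file = std::fs::File::open("phi-3-q4.gguf")?;
        let content = gguf_file::Content::read(&mut file)?;
        let mut model = super::ModelWeights::from_gguf(false, content, &mut file, &device)?;
        let input = Tensor::new(prompt_ids, &device)?.unsqueeze(0)?;
        model.forward(&input, 0)
    }
}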
candle/candle-transformers/src/models/quantized_phi3.rs/0
{ "file_path": "candle/candle-transformers/src/models/quantized_phi3.rs", "repo_id": "candle", "token_count": 6108 }
59
//! RWKV v6 model implementation. //! //! The [RWKV model](https://wiki.rwkv.com/) is a recurrent neural network model //! with performance on par with transformer architectures. Several variants are //! available, candle implements the v5 and v6 versions and can be used with //! Eagle 7B([blog post](https://blog.rwkv.com/p/eagle-7b-soaring-past-transformers)). //! //! Key characteristics: //! - Linear attention mechanism //! - Time-mixing for temporal dependencies //! - Group normalization //! - Feed forward gating //! - State recycling for efficient inference //! //! # Example //! //! ```bash //! cargo run --example rwkv --release -- \ //! --prompt "The smallest prime is " //! //! > avx: true, neon: false, simd128: false, f16c: true //! > temp: 0.00 repeat-penalty: 1.10 repeat-last-n: 64 //! > The smallest prime is ϕ(2) = 2. //! > The smallest composite is ϕ(3) = 3. //! > The smallest perfect number is ϕ(5) = 5. //! > The smallest perfect square is ϕ(4) = 4. //! > The smallest perfect cube is ϕ(6) = 6. //! ``` use super::with_tracing::{layer_norm, linear_no_bias as linear, LayerNorm, Linear}; use candle::{IndexOp, Result, Tensor}; use candle_nn::{embedding, Embedding, Module, VarBuilder}; pub use crate::models::rwkv_v5::{Config, State, Tokenizer}; #[derive(Debug, Clone)] struct SelfAttention { key: Linear, receptance: Linear, value: Linear, gate: Linear, output: Linear, ln_x: candle_nn::GroupNorm, time_mix_x: Tensor, time_mix_w: Tensor, time_mix_key: Tensor, time_mix_value: Tensor, time_mix_receptance: Tensor, time_decay: Tensor, time_faaaa: Tensor, time_mix_gate: Tensor, time_decay_w1: Tensor, time_decay_w2: Tensor, time_mix_w1: Tensor, time_mix_w2: Tensor, layer_id: usize, n_attn_heads: usize, } impl SelfAttention { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let hidden_size = cfg.hidden_size; let attn_hidden_size = cfg.attention_hidden_size; let key = linear(hidden_size, attn_hidden_size, vb.pp("key"))?; let receptance = linear(hidden_size, attn_hidden_size, vb.pp("receptance"))?; let value = linear(hidden_size, attn_hidden_size, vb.pp("value"))?; let gate = linear(hidden_size, attn_hidden_size, vb.pp("gate"))?; let output = linear(attn_hidden_size, hidden_size, vb.pp("output"))?; let ln_x = candle_nn::group_norm( hidden_size / cfg.head_size, hidden_size, 1e-5, vb.pp("ln_x"), )?; let time_mix_x = vb.get((1, 1, cfg.hidden_size), "time_mix_x")?; let time_mix_w = vb.get((1, 1, cfg.hidden_size), "time_mix_w")?; let time_mix_key = vb.get((1, 1, cfg.hidden_size), "time_mix_key")?; let time_mix_value = vb.get((1, 1, cfg.hidden_size), "time_mix_value")?; let time_mix_receptance = vb.get((1, 1, cfg.hidden_size), "time_mix_receptance")?; let n_attn_heads = cfg.hidden_size / cfg.head_size; let time_decay = vb.get((1, 1, cfg.hidden_size), "time_decay")?; let time_faaaa = vb.get((n_attn_heads, cfg.head_size), "time_faaaa")?; let time_mix_gate = vb.get((1, 1, cfg.hidden_size), "time_mix_gate")?; let time_decay_w1 = vb.get((cfg.hidden_size, n_attn_heads * 2), "time_decay_w1")?; let time_decay_w2 = vb.get((n_attn_heads * 2, cfg.hidden_size), "time_decay_w2")?; let time_mix_w1 = vb.get((cfg.hidden_size, n_attn_heads * 5), "time_mix_w1")?; let time_mix_w2 = vb.get((5, n_attn_heads, cfg.hidden_size), "time_mix_w2")?; Ok(Self { key, value, receptance, gate, output, ln_x, time_mix_x, time_mix_w, time_mix_key, time_mix_value, time_mix_receptance, time_decay, time_faaaa, time_mix_gate, time_decay_w1, time_decay_w2, time_mix_w1, time_mix_w2, layer_id, n_attn_heads, }) } pub fn 
forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let h = self.n_attn_heads; let (b, t, s) = xs.dims3()?; let s = s / h; let (receptance, key, value, gate, w) = { // extract key-value let shifted = state.per_layer[self.layer_id].extract_key_value.clone(); let shifted = if shifted.rank() == 2 { shifted.unsqueeze(1)? } else { shifted }; let sx = (&shifted - xs)?; let xxx = (xs + &sx * &self.time_mix_x)?; let xxx = xxx .broadcast_matmul(&self.time_mix_w1)? .tanh()? .reshape((b * t, 5, ()))? .transpose(0, 1)?; let xxx = xxx.matmul(&self.time_mix_w2)?.reshape((5, b, t, ()))?; let (mw, mk, mv, mr, mg) = (xxx.i(0)?, xxx.i(1)?, xxx.i(2)?, xxx.i(3)?, xxx.i(4)?); let xw = (xs + &sx * (&self.time_mix_w + &mw)?)?; let xk = (xs + &sx * (&self.time_mix_key + &mk)?)?; let xv = (xs + &sx * (&self.time_mix_value + &mv)?)?; let xr = (xs + &sx * (&self.time_mix_receptance + &mr)?)?; let xg = (xs + &sx * (&self.time_mix_gate + &mg)?)?; let w = (&self.time_decay + xw.broadcast_matmul(&self.time_decay_w1)? .tanh()? .broadcast_matmul(&self.time_decay_w2)?)? .reshape(((), 1, 1))? .reshape((self.n_attn_heads, (), 1))?; let key = self.key.forward(&xk)?; let value = self.value.forward(&xv)?; let receptance = self.receptance.forward(&xr)?; let gate = candle_nn::ops::silu(&self.gate.forward(&xg)?)?; state.per_layer[self.layer_id].extract_key_value = xs.i((.., t - 1))?; (receptance, key, value, gate, w) }; // linear attention let mut state_ = state.per_layer[self.layer_id].linear_attention.clone(); let key = key.reshape((b, t, h, s))?.permute((0, 2, 3, 1))?; let value = value.reshape((b, t, h, s))?.transpose(1, 2)?; let receptance = receptance.reshape((b, t, h, s))?.transpose(1, 2)?; let w = w.exp()?.neg()?.exp()?; let time_faaaa = self.time_faaaa .reshape(((), 1, 1))? .reshape((self.n_attn_heads, (), 1))?; let mut out: Vec<Tensor> = Vec::with_capacity(t); for t_ in 0..t { let rt = receptance.i((.., .., t_..t_ + 1))?.contiguous()?; let kt = key.i((.., .., .., t_..t_ + 1))?.contiguous()?; let vt = value.i((.., .., t_..t_ + 1))?.contiguous()?; let at = kt.matmul(&vt)?; let rhs = (time_faaaa.broadcast_mul(&at)? 
+ &state_)?; let out_ = rt.matmul(&rhs)?.squeeze(2)?; state_ = (&at + w.broadcast_mul(&state_))?; out.push(out_) } let out = Tensor::cat(&out, 1)?.reshape((b * t, h * s, 1))?; let out = out.apply(&self.ln_x)?.reshape((b, t, h * s))?; let out = (out * gate)?.apply(&self.output)?; state.per_layer[self.layer_id].linear_attention = state_; Ok(out) } } #[derive(Debug, Clone)] struct FeedForward { time_mix_key: Tensor, time_mix_receptance: Tensor, key: Linear, receptance: Linear, value: Linear, layer_id: usize, } impl FeedForward { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let int_size = cfg .intermediate_size .unwrap_or(((cfg.hidden_size as f64 * 3.5) as usize) / 32 * 32); let key = linear(cfg.hidden_size, int_size, vb.pp("key"))?; let receptance = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("receptance"))?; let value = linear(int_size, cfg.hidden_size, vb.pp("value"))?; let time_mix_key = vb.get((1, 1, cfg.hidden_size), "time_mix_key")?; let time_mix_receptance = vb.get((1, 1, cfg.hidden_size), "time_mix_receptance")?; Ok(Self { key, receptance, value, time_mix_key, time_mix_receptance, layer_id, }) } fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let shifted = state.per_layer[self.layer_id] .feed_forward .broadcast_sub(xs)?; let key = (xs + shifted.broadcast_mul(&self.time_mix_key)?)?; let receptance = (xs + shifted.broadcast_mul(&self.time_mix_receptance)?)?; let key = key.apply(&self.key)?.relu()?.sqr()?; let value = key.apply(&self.value)?; let receptance = candle_nn::ops::sigmoid(&receptance.apply(&self.receptance)?)?; state.per_layer[self.layer_id].feed_forward = xs.i((.., xs.dim(1)? - 1))?; let xs = (receptance * value)?; Ok(xs) } } #[derive(Debug, Clone)] struct Block { pre_ln: Option<LayerNorm>, ln1: LayerNorm, ln2: LayerNorm, attention: SelfAttention, feed_forward: FeedForward, } impl Block { fn new(layer_id: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let ln1 = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("ln1"))?; let ln2 = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("ln2"))?; let pre_ln = if layer_id == 0 { let ln = layer_norm(cfg.hidden_size, cfg.layer_norm_epsilon, vb.pp("pre_ln"))?; Some(ln) } else { None }; let attention = SelfAttention::new(layer_id, cfg, vb.pp("attention"))?; let feed_forward = FeedForward::new(layer_id, cfg, vb.pp("feed_forward"))?; Ok(Self { pre_ln, ln1, ln2, attention, feed_forward, }) } fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> { let xs = match self.pre_ln.as_ref() { None => xs.clone(), Some(pre_ln) => xs.apply(pre_ln)?, }; let attention = self.attention.forward(&xs.apply(&self.ln1)?, state)?; let xs = (xs + attention)?; let feed_forward = self.feed_forward.forward(&xs.apply(&self.ln2)?, state)?; let xs = (xs + feed_forward)?; Ok(xs) } } #[derive(Debug, Clone)] pub struct Model { embeddings: Embedding, blocks: Vec<Block>, ln_out: LayerNorm, head: Linear, rescale_every: usize, layers_are_rescaled: bool, } impl Model { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let vb_m = vb.pp("rwkv"); let embeddings = embedding(cfg.vocab_size, cfg.hidden_size, vb_m.pp("embeddings"))?; let mut blocks = Vec::with_capacity(cfg.num_hidden_layers); let vb_b = vb_m.pp("blocks"); for block_index in 0..cfg.num_hidden_layers { let block = Block::new(block_index, cfg, vb_b.pp(block_index))?; blocks.push(block) } let ln_out = layer_norm(cfg.hidden_size, 1e-5, vb_m.pp("ln_out"))?; let head = linear(cfg.hidden_size, cfg.vocab_size, vb.pp("head"))?; 
        Ok(Self {
            embeddings,
            blocks,
            ln_out,
            head,
            rescale_every: cfg.rescale_every,
            layers_are_rescaled: false, // This seems to only happen for the f16/bf16 dtypes.
        })
    }

    pub fn forward(&self, xs: &Tensor, state: &mut State) -> Result<Tensor> {
        let (_b_size, _seq_len) = xs.dims2()?;
        let mut xs = xs.apply(&self.embeddings)?;
        for (block_idx, block) in self.blocks.iter().enumerate() {
            xs = block.forward(&xs, state)?;
            if self.layers_are_rescaled && (block_idx + 1) % self.rescale_every == 0 {
                xs = (xs / 2.)?
            }
        }
        let xs = xs.apply(&self.ln_out)?.apply(&self.head)?;
        state.pos += 1;
        Ok(xs)
    }
}
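// A minimal usage sketch for the model above (not from the upstream file): it threads the
// recurrent state through a single-token step. The weight path is a hypothetical placeholder,
// and the `State::new(batch_size, &cfg, &device)` constructor signature is assumed from the
// re-exported rwkv_v5 `State`.
#[cfg(test)]
mod usage_sketch {
    use candle::{DType, Device, Result, Tensor};
    use candle_nn::VarBuilder;

    #[allow(dead_code)]
    fn single_step(cfg: &super::Config, token_id: u32) -> Result<Tensor> {
        let device = Device::Cpu;
        let vb = unsafe {
            VarBuilder::from_mmaped_safetensors(&["rwkv_v6.safetensors"], DType::F32, &device)?
        };
        let model = super::Model::new(cfg, vb)?;
        // The state carries the per-layer shift buffers and linear-attention matrices between calls.
        let mut state = super::State::new(1, cfg, &device)?;
        let xs = Tensor::new(&[token_id], &device)?.unsqueeze(0)?;
        model.forward(&xs, &mut state)
    }
}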
candle/candle-transformers/src/models/rwkv_v6.rs/0
{ "file_path": "candle/candle-transformers/src/models/rwkv_v6.rs", "repo_id": "candle", "token_count": 6204 }
60
//! Ancestral sampling with Euler method steps. //! //! Based on the original [`k-diffusion` implementation by Katherine Crowson]( https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72). //! use super::{ schedulers::{ betas_for_alpha_bar, BetaSchedule, PredictionType, Scheduler, SchedulerConfig, TimestepSpacing, }, utils::interp, }; use candle::{bail, Error, Result, Tensor}; /// The configuration for the EulerAncestral Discrete scheduler. #[derive(Debug, Clone, Copy)] pub struct EulerAncestralDiscreteSchedulerConfig { /// The value of beta at the beginning of training.n pub beta_start: f64, /// The value of beta at the end of training. pub beta_end: f64, /// How beta evolved during training. pub beta_schedule: BetaSchedule, /// Adjust the indexes of the inference schedule by this value. pub steps_offset: usize, /// prediction type of the scheduler function, one of `epsilon` (predicting /// the noise of the diffusion process), `sample` (directly predicting the noisy sample`) /// or `v_prediction` (see [section 2.4](https://imagen.research.google/video/paper.pdf)) pub prediction_type: PredictionType, /// number of diffusion steps used to train the model pub train_timesteps: usize, /// time step spacing for the diffusion process pub timestep_spacing: TimestepSpacing, } impl Default for EulerAncestralDiscreteSchedulerConfig { fn default() -> Self { Self { beta_start: 0.00085f64, beta_end: 0.012f64, beta_schedule: BetaSchedule::ScaledLinear, steps_offset: 1, prediction_type: PredictionType::Epsilon, train_timesteps: 1000, timestep_spacing: TimestepSpacing::Leading, } } } impl SchedulerConfig for EulerAncestralDiscreteSchedulerConfig { fn build(&self, inference_steps: usize) -> Result<Box<dyn Scheduler>> { Ok(Box::new(EulerAncestralDiscreteScheduler::new( inference_steps, *self, )?)) } } /// The EulerAncestral Discrete scheduler. #[derive(Debug, Clone)] pub struct EulerAncestralDiscreteScheduler { timesteps: Vec<usize>, sigmas: Vec<f64>, init_noise_sigma: f64, pub config: EulerAncestralDiscreteSchedulerConfig, } // clip_sample: False, set_alpha_to_one: False impl EulerAncestralDiscreteScheduler { /// Creates a new EulerAncestral Discrete scheduler given the number of steps to be /// used for inference as well as the number of steps that was used /// during training. pub fn new( inference_steps: usize, config: EulerAncestralDiscreteSchedulerConfig, ) -> Result<Self> { let step_ratio = config.train_timesteps / inference_steps; let timesteps: Vec<usize> = match config.timestep_spacing { TimestepSpacing::Leading => (0..(inference_steps)) .map(|s| s * step_ratio + config.steps_offset) .rev() .collect(), TimestepSpacing::Trailing => std::iter::successors(Some(config.train_timesteps), |n| { if *n > step_ratio { Some(n - step_ratio) } else { None } }) .map(|n| n - 1) .collect(), TimestepSpacing::Linspace => { super::utils::linspace(0.0, (config.train_timesteps - 1) as f64, inference_steps)? .to_vec1::<f64>()? .iter() .map(|&f| f as usize) .rev() .collect() } }; let betas = match config.beta_schedule { BetaSchedule::ScaledLinear => super::utils::linspace( config.beta_start.sqrt(), config.beta_end.sqrt(), config.train_timesteps, )? .sqr()?, BetaSchedule::Linear => { super::utils::linspace(config.beta_start, config.beta_end, config.train_timesteps)? 
} BetaSchedule::SquaredcosCapV2 => betas_for_alpha_bar(config.train_timesteps, 0.999)?, }; let betas = betas.to_vec1::<f64>()?; let mut alphas_cumprod = Vec::with_capacity(betas.len()); for &beta in betas.iter() { let alpha = 1.0 - beta; alphas_cumprod.push(alpha * *alphas_cumprod.last().unwrap_or(&1f64)) } let sigmas: Vec<f64> = alphas_cumprod .iter() .map(|&f| ((1. - f) / f).sqrt()) .collect(); let sigmas_xa: Vec<_> = (0..sigmas.len()).map(|i| i as f64).collect(); let mut sigmas_int = interp( &timesteps.iter().map(|&t| t as f64).collect::<Vec<_>>(), &sigmas_xa, &sigmas, ); sigmas_int.push(0.0); // standard deviation of the initial noise distribution // f64 does not implement Ord such that there is no `max`, so we need to use this workaround let init_noise_sigma = *sigmas_int .iter() .chain(std::iter::once(&0.0)) .reduce(|a, b| if a > b { a } else { b }) .expect("init_noise_sigma could not be reduced from sigmas - this should never happen"); Ok(Self { sigmas: sigmas_int, timesteps, init_noise_sigma, config, }) } } impl Scheduler for EulerAncestralDiscreteScheduler { fn timesteps(&self) -> &[usize] { self.timesteps.as_slice() } /// Ensures interchangeability with schedulers that need to scale the denoising model input /// depending on the current timestep. /// /// Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm fn scale_model_input(&self, sample: Tensor, timestep: usize) -> Result<Tensor> { let step_index = match self.timesteps.iter().position(|&t| t == timestep) { Some(i) => i, None => bail!("timestep out of this schedulers bounds: {timestep}"), }; let sigma = self .sigmas .get(step_index) .expect("step_index out of sigma bounds - this shouldn't happen"); sample / ((sigma.powi(2) + 1.).sqrt()) } /// Performs a backward step during inference. fn step(&mut self, model_output: &Tensor, timestep: usize, sample: &Tensor) -> Result<Tensor> { let step_index = self .timesteps .iter() .position(|&p| p == timestep) .ok_or_else(|| Error::Msg("timestep out of this schedulers bounds".to_string()))?; let sigma_from = &self.sigmas[step_index]; let sigma_to = &self.sigmas[step_index + 1]; // 1. compute predicted original sample (x_0) from sigma-scaled predicted noise let pred_original_sample = match self.config.prediction_type { PredictionType::Epsilon => (sample - (model_output * *sigma_from))?, PredictionType::VPrediction => { ((model_output * (-sigma_from / (sigma_from.powi(2) + 1.0).sqrt()))? + (sample / (sigma_from.powi(2) + 1.0))?)? } PredictionType::Sample => bail!("prediction_type not implemented yet: sample"), }; let sigma_up = (sigma_to.powi(2) * (sigma_from.powi(2) - sigma_to.powi(2)) / sigma_from.powi(2)) .sqrt(); let sigma_down = (sigma_to.powi(2) - sigma_up.powi(2)).sqrt(); // 2. convert to a ODE derivative let derivative = ((sample - pred_original_sample)? / *sigma_from)?; let dt = sigma_down - *sigma_from; let prev_sample = (sample + derivative * dt)?; let noise = prev_sample.randn_like(0.0, 1.0)?; prev_sample + noise * sigma_up } fn add_noise(&self, original: &Tensor, noise: Tensor, timestep: usize) -> Result<Tensor> { let step_index = self .timesteps .iter() .position(|&p| p == timestep) .ok_or_else(|| Error::Msg("timestep out of this schedulers bounds".to_string()))?; let sigma = self .sigmas .get(step_index) .expect("step_index out of sigma bounds - this shouldn't happen"); original + (noise * *sigma)? 
} fn init_noise_sigma(&self) -> f64 { match self.config.timestep_spacing { TimestepSpacing::Trailing | TimestepSpacing::Linspace => self.init_noise_sigma, TimestepSpacing::Leading => (self.init_noise_sigma.powi(2) + 1.0).sqrt(), } } }
candle/candle-transformers/src/models/stable_diffusion/euler_ancestral_discrete.rs/0
{ "file_path": "candle/candle-transformers/src/models/stable_diffusion/euler_ancestral_discrete.rs", "repo_id": "candle", "token_count": 4097 }
61
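The `step` method in the scheduler above moves from `sigma_from` to `sigma_to` by splitting the target noise level into a deterministic part (`sigma_down`, reached with a plain Euler step along the ODE derivative) and a stochastic part (`sigma_up`, re-injected as fresh Gaussian noise). Below is a minimal, self-contained sketch of that update on scalar values for the epsilon-prediction case; the function name is illustrative, and the noise sample is passed in rather than drawn from an RNG so the example has no dependencies.

```rust
/// One ancestral Euler step from `sigma_from` down to `sigma_to` (epsilon prediction).
/// `noise` stands in for a fresh standard-normal sample per element.
fn ancestral_euler_step(
    sample: f64,
    model_output: f64,
    sigma_from: f64,
    sigma_to: f64,
    noise: f64,
) -> f64 {
    // Predicted x_0 recovered from the sigma-scaled noise prediction.
    let pred_original = sample - model_output * sigma_from;
    // Split sigma_to into a stochastic part (sigma_up) and a deterministic part (sigma_down).
    let sigma_up =
        (sigma_to.powi(2) * (sigma_from.powi(2) - sigma_to.powi(2)) / sigma_from.powi(2)).sqrt();
    let sigma_down = (sigma_to.powi(2) - sigma_up.powi(2)).sqrt();
    // Euler step along the ODE derivative, then add noise back at level sigma_up.
    let derivative = (sample - pred_original) / sigma_from;
    let dt = sigma_down - sigma_from;
    sample + derivative * dt + noise * sigma_up
}

fn main() {
    // Toy numbers only: one step from sigma 14.6 down to sigma 10.0.
    let prev_sample = ancestral_euler_step(1.0, 0.3, 14.6, 10.0, 0.1);
    println!("{prev_sample}");
}
```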
use candle::{DType, Device, Error, Tensor}; use crate::models::whisper::audio::{log_mel_spectrogram_, Float}; pub fn pcm_to_mel<T: Float>(samples: &[T], filters: &[T]) -> Vec<T> { log_mel_spectrogram_( samples, filters, super::N_FFT, super::HOP_LENGTH, super::N_MELS, false, ) } /// Process audio using exact WhisperFeatureExtractor algorithm then apply VoxtralProcessor chunking pub fn extract_features(audio: &[f32], filters: &[f32], device: &Device) -> Result<Tensor, Error> { const N_MELS: usize = super::N_MELS; // Use the exact WhisperFeatureExtractor algorithm // Use the whisper implementation from the parent module let mel_vec = pcm_to_mel(audio, filters); // The whisper implementation returns Vec<f32> in shape (n_mel * n_len) // We need to reshape it to match the expected tensor format let n_mel = super::N_MELS; let n_len = mel_vec.len() / n_mel; // Create tensor with shape (n_mel, n_len) then add batch dimension let mel_tensor = Tensor::from_vec(mel_vec, (n_mel, n_len), device)?; let mel_tensor = mel_tensor.unsqueeze(0)?; // Add batch dimension -> (1, n_mel, n_len) // Convert tensor back to Vec<f32> for compatibility with existing code let mel = mel_tensor.flatten_all()?.to_vec1::<f32>()?; let mel_len = mel.len(); // Apply VoxtralProcessor chunking exactly like Python let total_frames = mel_len / N_MELS; let max_source_positions = 3000; // From VoxtralProcessor defaults // Python approach: reshape (feature_size, total_frames) -> (feature_size, -1, max_source_positions) // First, create mel tensor with shape (N_MELS, total_frames) let mel_tensor = Tensor::from_vec(mel, (N_MELS, total_frames), device) .map_err(|e| Error::Msg(format!("Failed to create mel tensor: {e}")))?; // Calculate number of chunks (equivalent to Python's -1 dimension in reshape) let num_chunks = total_frames.div_ceil(max_source_positions); // Pad the mel tensor to be divisible by max_source_positions let padded_frames = num_chunks * max_source_positions; let padding_needed = padded_frames - total_frames; let mel_padded = if padding_needed > 0 { let padding = Tensor::zeros((N_MELS, padding_needed), DType::F32, device)?; Tensor::cat(&[&mel_tensor, &padding], 1)? } else { mel_tensor }; // Reshape to (N_MELS, num_chunks, max_source_positions) let reshaped = mel_padded.reshape((N_MELS, num_chunks, max_source_positions))?; // Transpose to (num_chunks, N_MELS, max_source_positions) - matching Python's transpose(0,1) let audio_features = reshaped.transpose(0, 1)?; Ok(audio_features) }
candle/candle-transformers/src/models/voxtral/audio.rs/0
{ "file_path": "candle/candle-transformers/src/models/voxtral/audio.rs", "repo_id": "candle", "token_count": 1051 }
62
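Before the reshape and transpose, the Voxtral feature extractor above pads the mel frames so the time axis divides evenly into `max_source_positions`-sized chunks. Below is a small sketch of just that arithmetic (illustrative names, no candle tensors involved).

```rust
/// Number of chunks and zero-padding frames needed so `total_frames`
/// splits evenly into chunks of `max_source_positions`.
fn chunk_and_pad(total_frames: usize, max_source_positions: usize) -> (usize, usize) {
    // Round up, which is exactly what `div_ceil` computes in the extractor above.
    let num_chunks = total_frames.div_ceil(max_source_positions);
    let padding_needed = num_chunks * max_source_positions - total_frames;
    (num_chunks, padding_needed)
}

fn main() {
    // e.g. 7500 mel frames with 3000-frame chunks -> 3 chunks, 1500 padded frames.
    assert_eq!(chunk_and_pad(7500, 3000), (3, 1500));
    // An exact multiple needs no padding at all.
    assert_eq!(chunk_and_pad(6000, 3000), (2, 0));
    println!("ok");
}
```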
use crate::models::with_tracing::{linear, Linear}; use candle::{DType, Module, Result, Tensor}; use candle_nn::{ embedding, layer_norm, ops::softmax_last_dim, Activation, Embedding, LayerNorm, VarBuilder, }; #[derive(Debug, Clone, serde::Deserialize)] pub struct Config { pub hidden_size: usize, pub layer_norm_eps: f64, pub attention_probs_dropout_prob: f32, pub hidden_dropout_prob: f32, pub num_attention_heads: usize, pub position_embedding_type: String, pub intermediate_size: usize, pub hidden_act: Activation, pub num_hidden_layers: usize, pub vocab_size: usize, pub max_position_embeddings: usize, pub type_vocab_size: usize, pub pad_token_id: u32, } struct XLMRobertaEmbeddings { word_embeddings: Embedding, position_embeddings: Option<Embedding>, token_type_embeddings: Embedding, layer_norm: LayerNorm, padding_idx: u32, span: tracing::Span, } impl XLMRobertaEmbeddings { fn load(vb: VarBuilder, config: &Config) -> Result<Self> { let word_embeddings = embedding( config.vocab_size, config.hidden_size, vb.pp("word_embeddings"), )?; let position_embeddings = embedding( config.max_position_embeddings, config.hidden_size, vb.pp("position_embeddings"), )?; let token_type_embeddings = embedding( config.type_vocab_size, config.hidden_size, vb.pp("token_type_embeddings"), )?; let layer_norm = layer_norm( config.hidden_size, config.layer_norm_eps, vb.pp("LayerNorm"), )?; Ok(Self { word_embeddings, position_embeddings: Some(position_embeddings), token_type_embeddings, layer_norm, padding_idx: config.pad_token_id, span: tracing::span!(tracing::Level::TRACE, "embeddings"), }) } fn forward(&self, input_ids: &Tensor, token_type_ids: &Tensor) -> Result<Tensor> { let _enter = self.span.enter(); let (_bsize, _) = input_ids.dims2()?; let input_embeddings = self.word_embeddings.forward(input_ids)?; let token_type_embeddings = self.token_type_embeddings.forward(token_type_ids)?; let mut embeddings = (&input_embeddings + token_type_embeddings)?; if let Some(position_embeddings) = &self.position_embeddings { let mask = input_ids .ne(self.padding_idx)? .to_dtype(input_embeddings.dtype())?; let cumsum = mask.cumsum(1)?; let position_ids = (cumsum * mask)? .broadcast_add( &Tensor::try_from(self.padding_idx)? .to_dtype(input_embeddings.dtype())? .to_device(input_embeddings.device())?, )? 
.to_dtype(candle::DType::U32)?; embeddings = embeddings.broadcast_add(&position_embeddings.forward(&position_ids)?)?; } let embeddings = self.layer_norm.forward(&embeddings)?; Ok(embeddings) } } struct XLMRobertaSelfAttention { num_attention_heads: usize, attention_head_size: usize, all_head_size: usize, query: Linear, key: Linear, value: Linear, } impl XLMRobertaSelfAttention { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let attention_head_size = cfg.hidden_size / cfg.num_attention_heads; let all_head_size = cfg.num_attention_heads * attention_head_size; Ok(Self { num_attention_heads: cfg.num_attention_heads, attention_head_size, all_head_size, query: linear(cfg.hidden_size, all_head_size, vb.pp("query"))?, key: linear(cfg.hidden_size, all_head_size, vb.pp("key"))?, value: linear(cfg.hidden_size, all_head_size, vb.pp("value"))?, }) } fn transpose_for_scores(&self, x: &Tensor) -> Result<Tensor> { let mut new_x_shape = x.dims().to_vec(); new_x_shape[2] = self.num_attention_heads; new_x_shape.push(self.attention_head_size); let x = x.reshape(new_x_shape)?; x.permute((0, 2, 1, 3))?.contiguous() } fn forward( &self, hidden_states: &Tensor, encoder_hidden_states: Option<&Tensor>, attention_mask: &Tensor, past_key_value: Option<(&Tensor, &Tensor)>, encoder_attention_mask: Option<&Tensor>, ) -> Result<Tensor> { let mixed_query_layer = self.query.forward(hidden_states)?; let is_cross_attention = encoder_hidden_states.is_some(); let (key_layer, value_layer, attention_mask) = if is_cross_attention { if let Some((past_key, past_value)) = past_key_value { let key_layer = past_key.clone(); let value_layer = past_value.clone(); let attention_mask = encoder_attention_mask.unwrap().clone(); (key_layer, value_layer, Some(attention_mask)) } else { let key_layer = self.transpose_for_scores(&self.key.forward(encoder_hidden_states.unwrap())?)?; let value_layer = self .transpose_for_scores(&self.value.forward(encoder_hidden_states.unwrap())?)?; let attention_mask = encoder_attention_mask.unwrap(); (key_layer, value_layer, Some(attention_mask.clone())) } } else if let Some((past_key, past_value)) = past_key_value { let mut key_layer = self.transpose_for_scores(&self.key.forward(hidden_states)?)?; let mut value_layer = self.transpose_for_scores(&self.value.forward(hidden_states)?)?; key_layer = Tensor::cat(&[past_key.clone(), key_layer], 2)?; value_layer = Tensor::cat(&[past_value.clone(), value_layer], 2)?; (key_layer, value_layer, Some(attention_mask.clone())) } else { let key_layer = self.transpose_for_scores(&self.key.forward(hidden_states)?)?; let value_layer = self.transpose_for_scores(&self.value.forward(hidden_states)?)?; (key_layer, value_layer, Some(attention_mask.clone())) }; let query_layer = self.transpose_for_scores(&mixed_query_layer)?; let mut attention_scores = query_layer.matmul(&key_layer.transpose(2, 3)?)?; let scale = 1f64 / f64::sqrt(self.attention_head_size as f64); attention_scores = (attention_scores * scale)?; attention_scores = match attention_mask { None => attention_scores, Some(mask) => { attention_scores.broadcast_add(&mask.to_dtype(attention_scores.dtype())?)? } }; let attention_probs = softmax_last_dim(&attention_scores)?; let context_layer = attention_probs .matmul(&value_layer)? .permute((0, 2, 1, 3))? 
.contiguous()?; let mut new_context_layer_shape = context_layer.dims()[..context_layer.dims().len() - 2].to_vec(); new_context_layer_shape.push(self.all_head_size); let context_layer = context_layer.reshape(new_context_layer_shape)?; Ok(context_layer) } } struct XLMRobertaSelfOutput { dense: Linear, layernorm: LayerNorm, } impl XLMRobertaSelfOutput { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; let layernorm = candle_nn::layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; Ok(Self { dense, layernorm }) } fn forward(&self, hidden_states: &Tensor, input_tensor: &Tensor) -> Result<Tensor> { let hidden_states = self.dense.forward(hidden_states)?; let hidden_states = self.layernorm.forward(&(hidden_states + input_tensor)?)?; Ok(hidden_states) } } struct XLMRobertaAttention { output: XLMRobertaSelfOutput, self_attention: XLMRobertaSelfAttention, } impl XLMRobertaAttention { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let output = XLMRobertaSelfOutput::new(cfg, vb.pp("output"))?; let self_attention = XLMRobertaSelfAttention::new(cfg, vb.pp("self"))?; Ok(Self { output, self_attention, }) } fn forward( &self, hidden_states: &Tensor, attention_mask: &Tensor, encoder_hidden_states: Option<&Tensor>, encoder_attention_mask: Option<&Tensor>, past_key_value: Option<(&Tensor, &Tensor)>, ) -> Result<(Tensor, Tensor)> { let self_outputs = self.self_attention.forward( hidden_states, encoder_hidden_states, attention_mask, past_key_value, encoder_attention_mask, )?; let attention_output = self.output.forward(&self_outputs, hidden_states)?; Ok((attention_output, self_outputs)) } } struct XLMRobertaOutput { dense: Linear, layernorm: LayerNorm, } impl XLMRobertaOutput { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.intermediate_size, cfg.hidden_size, vb.pp("dense"))?; let layernorm = candle_nn::layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("LayerNorm"))?; Ok(Self { dense, layernorm }) } fn forward(&self, hidden_states: &Tensor, input_tensor: &Tensor) -> Result<Tensor> { let hidden_states = self.dense.forward(hidden_states)?; let hidden_states = self.layernorm.forward(&(hidden_states + input_tensor)?)?; Ok(hidden_states) } } struct XLMRobertaIntermediate { dense: Linear, intermediate_act_fn: Activation, } impl XLMRobertaIntermediate { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.intermediate_size, vb.pp("dense"))?; let intermediate_act_fn = cfg.hidden_act; Ok(Self { dense, intermediate_act_fn, }) } fn forward(&self, hidden_states: &Tensor) -> Result<Tensor> { let hidden_states = self.dense.forward(hidden_states)?; let hidden_states = self.intermediate_act_fn.forward(&hidden_states)?; Ok(hidden_states) } } struct XLMRobertaLayer { attention: XLMRobertaAttention, intermediate: XLMRobertaIntermediate, output: XLMRobertaOutput, } impl XLMRobertaLayer { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let attention = XLMRobertaAttention::new(cfg, vb.pp("attention"))?; let intermediate = XLMRobertaIntermediate::new(cfg, vb.pp("intermediate"))?; let output = XLMRobertaOutput::new(cfg, vb.pp("output"))?; Ok(Self { attention, intermediate, output, }) } fn forward( &self, hidden_states: &Tensor, attention_mask: &Tensor, encoder_hidden_states: Option<&Tensor>, encoder_attention_mask: Option<&Tensor>, past_key_value: Option<(&Tensor, &Tensor)>, ) -> Result<(Tensor, Tensor)> { let self_attention_outputs = 
self.attention.forward( hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, )?; let attention_output = self_attention_outputs.0; let outputs = self_attention_outputs.1; let intermediate_output = self.intermediate.forward(&attention_output)?; let layer_output = self .output .forward(&intermediate_output, &attention_output)?; Ok((layer_output, outputs)) } } struct XLMRobertaEncoder { layers: Vec<XLMRobertaLayer>, } impl XLMRobertaEncoder { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let layers = (0..cfg.num_hidden_layers) .map(|i| XLMRobertaLayer::new(cfg, vb.pp(format!("layer.{i}")))) .collect::<Result<Vec<_>>>()?; Ok(Self { layers }) } fn forward( &self, hidden_states: &Tensor, attention_mask: &Tensor, encoder_hidden_states: Option<&Tensor>, encoder_attention_mask: Option<&Tensor>, past_key_value: Option<(&Tensor, &Tensor)>, ) -> Result<Tensor> { let mut hidden_states = hidden_states.clone(); for layer_module in self.layers.iter() { let layer_outputs = layer_module.forward( &hidden_states, attention_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, )?; hidden_states = layer_outputs.0; } Ok(hidden_states) } } pub struct XLMRobertaModel { encoder: XLMRobertaEncoder, embeddings: XLMRobertaEmbeddings, } impl XLMRobertaModel { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let encoder = XLMRobertaEncoder::new(cfg, vb.pp("encoder"))?; let embeddings = XLMRobertaEmbeddings::load(vb.pp("embeddings"), cfg)?; Ok(Self { encoder, embeddings, }) } pub fn forward( &self, input_ids: &Tensor, attention_mask: &Tensor, token_type_ids: &Tensor, past_key_value: Option<(&Tensor, &Tensor)>, encoder_hidden_states: Option<&Tensor>, encoder_attention_mask: Option<&Tensor>, ) -> Result<Tensor> { let hidden_states = self.embeddings.forward(input_ids, token_type_ids)?; let attention_mask = prepare_4d_attention_mask(attention_mask, DType::F32, None)? 
.to_device(hidden_states.device())?; let hidden_states = self.encoder.forward( &hidden_states, &attention_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, )?; Ok(hidden_states) } } struct XLMRobertaLMHead { dense: Linear, layer_norm: LayerNorm, } impl XLMRobertaLMHead { fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; let layer_norm = candle_nn::layer_norm(cfg.hidden_size, cfg.layer_norm_eps, vb.pp("layer_norm"))?; Ok(Self { dense, layer_norm }) } fn forward(&self, hidden_states: &Tensor, shared_embeddings: &Tensor) -> Result<Tensor> { let hidden_states = self.dense.forward(hidden_states)?; let hidden_states = candle_nn::Activation::Gelu.forward(&hidden_states)?; let hidden_states = self.layer_norm.forward(&hidden_states)?; let hidden_states = hidden_states.broadcast_matmul(shared_embeddings)?; Ok(hidden_states) } } pub struct XLMRobertaForMaskedLM { roberta: XLMRobertaModel, lm_head: XLMRobertaLMHead, } impl XLMRobertaForMaskedLM { pub fn new(cfg: &Config, vb: VarBuilder) -> Result<Self> { let roberta = XLMRobertaModel::new(cfg, vb.pp("roberta"))?; let lm_head = XLMRobertaLMHead::new(cfg, vb.pp("lm_head"))?; Ok(Self { roberta, lm_head }) } pub fn forward( &self, input_ids: &Tensor, attention_mask: &Tensor, token_type_ids: &Tensor, past_key_value: Option<(&Tensor, &Tensor)>, encoder_hidden_states: Option<&Tensor>, encoder_attention_mask: Option<&Tensor>, ) -> Result<Tensor> { let hidden_states = self.roberta.forward( input_ids, attention_mask, token_type_ids, past_key_value, encoder_hidden_states, encoder_attention_mask, )?; let lm_logits = self.lm_head.forward( &hidden_states, &self .roberta .embeddings .word_embeddings .embeddings() .t()? .unsqueeze(0)?, )?; Ok(lm_logits) } } struct XLMRobertaClassificationHead { dense: Linear, out_proj: Linear, } impl XLMRobertaClassificationHead { fn new(num_labels: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let dense = linear(cfg.hidden_size, cfg.hidden_size, vb.pp("dense"))?; let out_proj = linear(cfg.hidden_size, num_labels, vb.pp("out_proj"))?; Ok(Self { dense, out_proj }) } fn forward(&self, hidden_states: &Tensor) -> Result<Tensor> { let cls_states = hidden_states.get_on_dim(1, 0)?.contiguous()?; let hidden_states = self.dense.forward(&cls_states)?; // The activation used in the classification head is tanh, as per the original // implementation. 
// https://github.com/huggingface/transformers/blob/6e3063422c4b1c014aa60c32b9254fd2902f0f28/src/transformers/models/xlm_roberta/modeling_xlm_roberta.py#L1454 let hidden_states = self.out_proj.forward(&hidden_states.tanh()?)?; Ok(hidden_states) } } pub struct XLMRobertaForSequenceClassification { roberta: XLMRobertaModel, classifier: XLMRobertaClassificationHead, } impl XLMRobertaForSequenceClassification { pub fn new(num_labels: usize, cfg: &Config, vb: VarBuilder) -> Result<Self> { let roberta = XLMRobertaModel::new(cfg, vb.pp("roberta"))?; let classifier = XLMRobertaClassificationHead::new(num_labels, cfg, vb.pp("classifier"))?; Ok(Self { roberta, classifier, }) } pub fn forward( &self, input_ids: &Tensor, attention_mask: &Tensor, token_type_ids: &Tensor, ) -> Result<Tensor> { let hidden_states = self.roberta .forward(input_ids, attention_mask, token_type_ids, None, None, None)?; self.classifier.forward(&hidden_states) } } fn prepare_4d_attention_mask( mask: &Tensor, dtype: DType, tgt_len: Option<usize>, ) -> Result<Tensor> { let bsz = mask.dim(0)?; let src_len = mask.dim(1)?; let tgt_len = tgt_len.unwrap_or(src_len); let expanded_mask = mask .unsqueeze(1)? .unsqueeze(2)? .expand((bsz, 1, tgt_len, src_len))? .to_dtype(dtype)?; let inverted_mask = (1.0 - expanded_mask)?; (inverted_mask * get_dtype_min_val(dtype))?.to_dtype(dtype) } fn get_dtype_min_val(dtype: DType) -> f64 { match dtype { DType::F32 => f32::MIN as f64, DType::F64 => f64::MIN, _ => panic!("Unsupported data type"), } }
candle/candle-transformers/src/models/xlm_roberta.rs/0
{ "file_path": "candle/candle-transformers/src/models/xlm_roberta.rs", "repo_id": "candle", "token_count": 8889 }
63
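One detail of the embeddings in the XLM-RoBERTa port above that is easy to miss: position ids are not a plain 0..seq_len range. They come from a cumulative sum over the non-padding mask, offset by `pad_token_id`, so every padding token shares the padding position and real tokens are numbered from `pad_token_id + 1` onward. Below is a scalar sketch of that trick; the function name and example values are illustrative only.

```rust
/// RoBERTa-style position ids: count only non-padding tokens and offset by the padding index.
fn position_ids(input_ids: &[u32], padding_idx: u32) -> Vec<u32> {
    let mut cumsum = 0u32;
    input_ids
        .iter()
        .map(|&id| {
            if id == padding_idx {
                // (cumsum * 0) + padding_idx: padding tokens are pinned to the padding position.
                padding_idx
            } else {
                cumsum += 1;
                cumsum + padding_idx
            }
        })
        .collect()
}

fn main() {
    // padding_idx = 1: two real tokens followed by two pads.
    assert_eq!(position_ids(&[5, 6, 1, 1], 1), vec![2, 3, 1, 1]);
    println!("ok");
}
```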
use candle_transformers::models::bert; use wasm_bindgen::prelude::*; pub use bert::{BertModel, Config, DTYPE}; pub use tokenizers::{PaddingParams, Tokenizer}; #[wasm_bindgen] extern "C" { // Use `js_namespace` here to bind `console.log(..)` instead of just // `log(..)` #[wasm_bindgen(js_namespace = console)] pub fn log(s: &str); } #[macro_export] macro_rules! console_log { // Note that this is using the `log` function imported above during // `bare_bones` ($($t:tt)*) => ($crate::log(&format_args!($($t)*).to_string())) }
candle/candle-wasm-examples/bert/src/lib.rs/0
{ "file_path": "candle/candle-wasm-examples/bert/src/lib.rs", "repo_id": "candle", "token_count": 226 }
64
use crate::console_log; use crate::worker::{ModelData, Worker, WorkerInput, WorkerOutput}; use std::str::FromStr; use wasm_bindgen::prelude::*; use wasm_bindgen_futures::JsFuture; use yew::{html, Component, Context, Html}; use yew_agent::{Bridge, Bridged}; async fn fetch_url(url: &str) -> Result<Vec<u8>, JsValue> { use web_sys::{Request, RequestCache, RequestInit, RequestMode, Response}; let window = web_sys::window().ok_or("window")?; let opts = RequestInit::new(); opts.set_method("GET"); opts.set_mode(RequestMode::Cors); opts.set_cache(RequestCache::NoCache); let request = Request::new_with_str_and_init(url, &opts)?; let resp_value = JsFuture::from(window.fetch_with_request(&request)).await?; // `resp_value` is a `Response` object. assert!(resp_value.is_instance_of::<Response>()); let resp: Response = resp_value.dyn_into()?; let data = JsFuture::from(resp.blob()?).await?; let blob = web_sys::Blob::from(data); let array_buffer = JsFuture::from(blob.array_buffer()).await?; let data = js_sys::Uint8Array::new(&array_buffer).to_vec(); Ok(data) } pub enum Msg { Refresh, Run, UpdateStatus(String), SetModel(ModelData), WorkerIn(WorkerInput), WorkerOut(Result<WorkerOutput, String>), } pub struct CurrentDecode { start_time: Option<f64>, } pub struct App { status: String, loaded: bool, temperature: std::rc::Rc<std::cell::RefCell<f64>>, top_p: std::rc::Rc<std::cell::RefCell<f64>>, prompt: std::rc::Rc<std::cell::RefCell<String>>, generated: String, n_tokens: usize, current_decode: Option<CurrentDecode>, worker: Box<dyn Bridge<Worker>>, } async fn model_data_load() -> Result<ModelData, JsValue> { let tokenizer = fetch_url("tokenizer.json").await?; let model = fetch_url("model.bin").await?; console_log!("{}", model.len()); Ok(ModelData { tokenizer, model }) } fn performance_now() -> Option<f64> { let window = web_sys::window()?; let performance = window.performance()?; Some(performance.now() / 1000.) 
} impl Component for App { type Message = Msg; type Properties = (); fn create(ctx: &Context<Self>) -> Self { let status = "loading weights".to_string(); let cb = { let link = ctx.link().clone(); move |e| link.send_message(Self::Message::WorkerOut(e)) }; let worker = Worker::bridge(std::rc::Rc::new(cb)); Self { status, n_tokens: 0, temperature: std::rc::Rc::new(std::cell::RefCell::new(0.)), top_p: std::rc::Rc::new(std::cell::RefCell::new(1.0)), prompt: std::rc::Rc::new(std::cell::RefCell::new("".to_string())), generated: String::new(), current_decode: None, worker, loaded: false, } } fn rendered(&mut self, ctx: &Context<Self>, first_render: bool) { if first_render { ctx.link().send_future(async { match model_data_load().await { Err(err) => { let status = format!("{err:?}"); Msg::UpdateStatus(status) } Ok(model_data) => Msg::SetModel(model_data), } }); } } fn update(&mut self, ctx: &Context<Self>, msg: Self::Message) -> bool { match msg { Msg::SetModel(md) => { self.status = "weights loaded successfully!".to_string(); self.loaded = true; console_log!("loaded weights"); self.worker.send(WorkerInput::ModelData(md)); true } Msg::Run => { if self.current_decode.is_some() { self.status = "already generating some sample at the moment".to_string() } else { let start_time = performance_now(); self.current_decode = Some(CurrentDecode { start_time }); self.status = "generating...".to_string(); self.n_tokens = 0; self.generated.clear(); let temp = *self.temperature.borrow(); let top_p = *self.top_p.borrow(); let prompt = self.prompt.borrow().clone(); console_log!("temp: {}, top_p: {}, prompt: {}", temp, top_p, prompt); ctx.link() .send_message(Msg::WorkerIn(WorkerInput::Run(temp, top_p, prompt))) } true } Msg::WorkerOut(output) => { match output { Ok(WorkerOutput::WeightsLoaded) => self.status = "weights loaded!".to_string(), Ok(WorkerOutput::GenerationDone(Err(err))) => { self.status = format!("error in worker process: {err}"); self.current_decode = None } Ok(WorkerOutput::GenerationDone(Ok(()))) => { let dt = self.current_decode.as_ref().and_then(|current_decode| { current_decode.start_time.and_then(|start_time| { performance_now().map(|stop_time| stop_time - start_time) }) }); self.status = match dt { None => "generation succeeded!".to_string(), Some(dt) => format!( "generation succeeded in {:.2}s ({:.1} ms/token)", dt, dt * 1000.0 / (self.n_tokens as f64) ), }; self.current_decode = None } Ok(WorkerOutput::Generated(token)) => { self.n_tokens += 1; self.generated.push_str(&token) } Err(err) => { self.status = format!("error in worker {err:?}"); } } true } Msg::WorkerIn(inp) => { self.worker.send(inp); true } Msg::UpdateStatus(status) => { self.status = status; true } Msg::Refresh => true, } } fn view(&self, ctx: &Context<Self>) -> Html { use yew::TargetCast; let temperature = self.temperature.clone(); let oninput_temperature = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); if let Ok(temp) = f64::from_str(&input.value()) { *temperature.borrow_mut() = temp } Msg::Refresh }); let top_p = self.top_p.clone(); let oninput_top_p = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); if let Ok(top_p_input) = f64::from_str(&input.value()) { *top_p.borrow_mut() = top_p_input } Msg::Refresh }); let prompt = self.prompt.clone(); let oninput_prompt = ctx.link().callback(move |e: yew::InputEvent| { let input: web_sys::HtmlInputElement = e.target_unchecked_into(); *prompt.borrow_mut() = 
input.value(); Msg::Refresh }); html! { <div style="margin: 2%;"> <div><p>{"Running "} <a href="https://github.com/karpathy/llama2.c" target="_blank">{"llama2.c"}</a> {" in the browser using rust/wasm with "} <a href="https://github.com/huggingface/candle" target="_blank">{"candle!"}</a> </p> <p>{"Once the weights have loaded, click on the run button to start generating content."} </p> </div> {"temperature \u{00a0} "} <input type="range" min="0." max="1.2" step="0.1" value={self.temperature.borrow().to_string()} oninput={oninput_temperature} id="temp"/> {format!(" \u{00a0} {}", self.temperature.borrow())} <br/ > {"top_p \u{00a0} "} <input type="range" min="0." max="1.0" step="0.05" value={self.top_p.borrow().to_string()} oninput={oninput_top_p} id="top_p"/> {format!(" \u{00a0} {}", self.top_p.borrow())} <br/ > {"prompt: "}<input type="text" value={self.prompt.borrow().to_string()} oninput={oninput_prompt} id="prompt"/> <br/ > { if self.loaded{ html!(<button class="button" onclick={ctx.link().callback(move |_| Msg::Run)}> { "run" }</button>) }else{ html! { <progress id="progress-bar" aria-label="Loading weights..."></progress> } } } <br/ > <h3> {&self.status} </h3> { if self.current_decode.is_some() { html! { <progress id="progress-bar" aria-label="generating…"></progress> } } else { html! {} } } <blockquote> <p> { self.generated.chars().map(|c| if c == '\r' || c == '\n' { html! { <br/> } } else { html! { {c} } }).collect::<Html>() } </p> </blockquote> </div> } } }
candle/candle-wasm-examples/llama2-c/src/app.rs/0
{ "file_path": "candle/candle-wasm-examples/llama2-c/src/app.rs", "repo_id": "candle", "token_count": 5448 }
65
// load the Candle T5 wasm module
let init, ModelEncoder;

async function fetchArrayBuffer(url) {
  const cacheName = "t5-candle-cache";
  const cache = await caches.open(cacheName);
  const cachedResponse = await cache.match(url);
  if (cachedResponse) {
    const data = await cachedResponse.arrayBuffer();
    return new Uint8Array(data);
  }
  const res = await fetch(url, { cache: "force-cache" });
  cache.put(url, res.clone());
  return new Uint8Array(await res.arrayBuffer());
}
class Encoder {
  static instance = {};

  static async getInstance(weightsURL, tokenizerURL, configURL, modelID) {
    if (modelID.includes("quantized")) {
      ({ default: init, ModelEncoder } = await import(
        "./build/m-quantized.js"
      ));
    } else {
      ({ default: init, ModelEncoder } = await import("./build/m.js"));
    }
    if (!this.instance[modelID]) {
      await init();

      self.postMessage({ status: "loading", message: "Loading Model" });
      const [weightsArrayU8, tokenizerArrayU8, configArrayU8] =
        await Promise.all([
          fetchArrayBuffer(weightsURL),
          fetchArrayBuffer(tokenizerURL),
          fetchArrayBuffer(configURL),
        ]);

      this.instance[modelID] = new ModelEncoder(
        weightsArrayU8,
        tokenizerArrayU8,
        configArrayU8
      );
    } else {
      self.postMessage({ status: "ready", message: "Model Already Loaded" });
    }
    return this.instance[modelID];
  }
}

self.addEventListener("message", async (event) => {
  const {
    weightsURL,
    tokenizerURL,
    configURL,
    modelID,
    sentences,
    normalize_embeddings,
  } = event.data;
  try {
    self.postMessage({ status: "ready", message: "Starting T5 Encoder" });
    const model = await Encoder.getInstance(
      weightsURL,
      tokenizerURL,
      configURL,
      modelID
    );
    self.postMessage({
      status: "encoding",
      message: "Encoding Sentences",
    });
    const output = model.decode({
      sentences: sentences,
      // default to true only when the flag is omitted (`|| true` would ignore an explicit false)
      normalize_embeddings: normalize_embeddings ?? true,
    });
    self.postMessage({
      status: "complete",
      message: "complete",
      output: output,
    });
  } catch (e) {
    self.postMessage({ error: e });
  }
});
candle/candle-wasm-examples/t5/T5ModelEncoderWorker.js/0
{ "file_path": "candle/candle-wasm-examples/t5/T5ModelEncoderWorker.js", "repo_id": "candle", "token_count": 873 }
66
use candle_wasm_example_whisper::worker::{Decoder as D, ModelData}; use wasm_bindgen::prelude::*; #[wasm_bindgen] pub struct Decoder { decoder: D, } #[wasm_bindgen] impl Decoder { #[wasm_bindgen(constructor)] #[allow(clippy::too_many_arguments)] pub fn new( weights: Vec<u8>, tokenizer: Vec<u8>, mel_filters: Vec<u8>, config: Vec<u8>, quantized: bool, is_multilingual: bool, timestamps: bool, task: Option<String>, language: Option<String>, ) -> Result<Decoder, JsError> { let decoder = D::load(ModelData { tokenizer, mel_filters, config, quantized, weights, is_multilingual, timestamps, task, language, }); match decoder { Ok(decoder) => Ok(Self { decoder }), Err(e) => Err(JsError::new(&e.to_string())), } } #[wasm_bindgen] pub fn decode(&mut self, wav_input: Vec<u8>) -> Result<String, JsError> { let segments = self .decoder .convert_and_run(&wav_input) .map_err(|e| JsError::new(&e.to_string()))?; let json = serde_json::to_string(&segments)?; Ok(json) } } fn main() {}
candle/candle-wasm-examples/whisper/src/bin/m.rs/0
{ "file_path": "candle/candle-wasm-examples/whisper/src/bin/m.rs", "repo_id": "candle", "token_count": 694 }
67
mod app; pub mod coco_classes; pub mod model; pub mod worker; pub use app::App; pub use worker::Worker;
candle/candle-wasm-examples/yolo/src/lib.rs/0
{ "file_path": "candle/candle-wasm-examples/yolo/src/lib.rs", "repo_id": "candle", "token_count": 37 }
68
module.exports = { root: true, parser: "@typescript-eslint/parser", extends: [ "eslint:recommended", "plugin:@typescript-eslint/recommended", "plugin:svelte/recommended", "prettier", ], plugins: ["@typescript-eslint"], ignorePatterns: ["*.cjs"], overrides: [ { files: ["*.svelte"], parser: "svelte-eslint-parser", parserOptions: { parser: "@typescript-eslint/parser", }, }, ], parserOptions: { sourceType: "module", ecmaVersion: 2020, extraFileExtensions: [".svelte"], }, rules: { "require-yield": "off", "@typescript-eslint/no-explicit-any": "error", "@typescript-eslint/no-non-null-assertion": "error", "@typescript-eslint/no-unused-vars": [ // prevent variables with a _ prefix from being marked as unused "error", { argsIgnorePattern: "^_", }, ], "object-shorthand": ["error", "always"], }, env: { browser: true, es2017: true, node: true, }, };
chat-ui/.eslintrc.cjs/0
{ "file_path": "chat-ui/.eslintrc.cjs", "repo_id": "chat-ui", "token_count": 420 }
69
{ "editor.formatOnSave": true, "editor.defaultFormatter": "esbenp.prettier-vscode", "editor.codeActionsOnSave": { "source.fixAll": "explicit" }, "eslint.validate": ["javascript", "svelte"], "[svelte]": { "editor.defaultFormatter": "esbenp.prettier-vscode" }, "[typescript]": { "editor.defaultFormatter": "esbenp.prettier-vscode" } }
chat-ui/.vscode/settings.json/0
{ "file_path": "chat-ui/.vscode/settings.json", "repo_id": "chat-ui", "token_count": 153 }
70
{{- if and .Values.serviceAccount.enabled .Values.serviceAccount.create }} apiVersion: v1 kind: ServiceAccount automountServiceAccountToken: {{ .Values.serviceAccount.automountServiceAccountToken }} metadata: name: "{{ .Values.serviceAccount.name | default (include "name" .) }}" namespace: {{ .Release.Namespace }} labels: {{ include "labels.standard" . | nindent 4 }} {{- with .Values.serviceAccount.annotations }} annotations: {{- toYaml . | nindent 4 }} {{- end }} {{- end }}
chat-ui/chart/templates/service-account.yaml/0
{ "file_path": "chat-ui/chart/templates/service-account.yaml", "repo_id": "chat-ui", "token_count": 154 }
71
# Llama.cpp | Feature | Available | | --------------------------- | --------- | | [Tools](../tools) | No | | [Multimodal](../multimodal) | No | Chat UI supports the llama.cpp API server directly without the need for an adapter. You can do this using the `llamacpp` endpoint type. If you want to run Chat UI with llama.cpp, you can do the following, using [microsoft/Phi-3-mini-4k-instruct-gguf](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) as an example model: ```bash # install llama.cpp brew install llama.cpp # start llama.cpp server llama-server --hf-repo microsoft/Phi-3-mini-4k-instruct-gguf --hf-file Phi-3-mini-4k-instruct-q4.gguf -c 4096 ``` _note: you can swap the `hf-repo` and `hf-file` with your fav GGUF on the [Hub](https://huggingface.co/models?library=gguf). For example: `--hf-repo TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF` for [this repo](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF) & `--hf-file tinyllama-1.1b-chat-v1.0.Q4_0.gguf` for [this file](https://huggingface.co/TheBloke/TinyLlama-1.1B-Chat-v1.0-GGUF/blob/main/tinyllama-1.1b-chat-v1.0.Q4_0.gguf)._ A local LLaMA.cpp HTTP Server will start on `http://localhost:8080` (to change the port or any other default options, please find [LLaMA.cpp HTTP Server readme](https://github.com/ggml-org/llama.cpp/tree/master/tools/server#readme)). Add the following to your `.env.local`: ```ini MODELS=`[ { "name": "Local microsoft/Phi-3-mini-4k-instruct-gguf", "tokenizer": "microsoft/Phi-3-mini-4k-instruct-gguf", "preprompt": "", "chatPromptTemplate": "<s>{{preprompt}}{{#each messages}}{{#ifUser}}<|user|>\n{{content}}<|end|>\n<|assistant|>\n{{/ifUser}}{{#ifAssistant}}{{content}}<|end|>\n{{/ifAssistant}}{{/each}}", "parameters": { "stop": ["<|end|>", "<|endoftext|>", "<|assistant|>"], "temperature": 0.7, "max_new_tokens": 1024, "truncate": 3071 }, "endpoints": [{ "type" : "llamacpp", "baseURL": "http://localhost:8080" }], }, ]` ``` <div class="flex justify-center"> <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/chat-ui/llamacpp-light.png" height="auto"/> <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/chat-ui/llamacpp-dark.png" height="auto"/> </div>
chat-ui/docs/source/configuration/models/providers/llamacpp.md/0
{ "file_path": "chat-ui/docs/source/configuration/models/providers/llamacpp.md", "repo_id": "chat-ui", "token_count": 1026 }
72
ENV_LOCAL_PATH=/app/.env.local if test -z "${DOTENV_LOCAL}" ; then if ! test -f "${ENV_LOCAL_PATH}" ; then echo "DOTENV_LOCAL was not found in the ENV variables and .env.local is not set using a bind volume. Make sure to set environment variables properly. " fi; else echo "DOTENV_LOCAL was found in the ENV variables. Creating .env.local file." cat <<< "$DOTENV_LOCAL" > ${ENV_LOCAL_PATH} fi; if [ "$INCLUDE_DB" = "true" ] ; then echo "Starting local MongoDB instance" nohup mongod & fi; export PUBLIC_VERSION=$(node -p "require('./package.json').version") dotenv -e /app/.env -c -- node /app/build/index.js -- --host 0.0.0.0 --port 3000
chat-ui/entrypoint.sh/0
{ "file_path": "chat-ui/entrypoint.sh", "repo_id": "chat-ui", "token_count": 266 }
73
import type { App } from "$api"; import { base } from "$app/paths"; import { treaty, type Treaty } from "@elysiajs/eden"; import { browser } from "$app/environment"; import superjson from "superjson"; import ObjectId from "bson-objectid"; superjson.registerCustom<ObjectId, string>( { isApplicable: (value): value is ObjectId => { if (typeof value !== "string" && ObjectId.isValid(value)) { const str = value.toString(); return /^[0-9a-fA-F]{24}$/.test(str); } return false; }, serialize: (value) => value.toString(), deserialize: (value) => new ObjectId(value), }, "ObjectId" ); export function useAPIClient({ fetch }: { fetch?: Treaty.Config["fetcher"] } = {}) { let url; if (!browser) { let port; if (process.argv.includes("--port")) { port = parseInt(process.argv[process.argv.indexOf("--port") + 1]); } else { const mode = process.argv.find((arg) => arg === "preview" || arg === "dev"); if (mode === "preview") { port = 4173; } else if (mode === "dev") { port = 5173; } else { port = 3000; } } // Always use localhost for server-side requests to avoid external HTTP calls during SSR url = `http://localhost:${port}${base}/api/v2`; } else { url = `${window.location.origin}${base}/api/v2`; } const app = treaty<App>(url, { fetcher: fetch }); return app; } export function handleResponse<T extends Record<number, unknown>>( response: Treaty.TreatyResponse<T> ): T[200] { if (response.error) { throw new Error(JSON.stringify(response.error)); } return superjson.parse( typeof response.data === "string" ? response.data : JSON.stringify(response.data) ) as T[200]; } // eslint-disable-next-line @typescript-eslint/no-explicit-any export type Success<T extends (...args: any) => any> = Awaited<ReturnType<T>> extends { data: infer D; error: unknown; } ? D : never;
chat-ui/src/lib/APIClient.ts/0
{ "file_path": "chat-ui/src/lib/APIClient.ts", "repo_id": "chat-ui", "token_count": 717 }
74
<script lang="ts"> import { createEventDispatcher, onDestroy, onMount } from "svelte"; import { cubicOut } from "svelte/easing"; import { fade, fly } from "svelte/transition"; import Portal from "./Portal.svelte"; import { browser } from "$app/environment"; import CarbonClose from "~icons/carbon/close"; interface Props { width?: string; closeButton?: boolean; children?: import("svelte").Snippet; } let { width = "max-w-sm", children, closeButton = false }: Props = $props(); let backdropEl: HTMLDivElement | undefined = $state(); let modalEl: HTMLDivElement | undefined = $state(); const dispatch = createEventDispatcher<{ close: void }>(); function handleKeydown(event: KeyboardEvent) { // close on ESC if (event.key === "Escape") { event.preventDefault(); dispatch("close"); } } function handleBackdropClick(event: MouseEvent) { if (window?.getSelection()?.toString()) { return; } if (event.target === backdropEl) { dispatch("close"); } } onMount(() => { document.getElementById("app")?.setAttribute("inert", "true"); modalEl?.focus(); }); onDestroy(() => { if (!browser) return; document.getElementById("app")?.removeAttribute("inert"); }); </script> <Portal> <div role="presentation" tabindex="-1" bind:this={backdropEl} onclick={(e) => { e.stopPropagation(); handleBackdropClick(e); }} transition:fade|local={{ easing: cubicOut, duration: 300 }} class="fixed inset-0 z-40 flex items-center justify-center bg-black/80 backdrop-blur-sm dark:bg-black/50" > <div role="dialog" tabindex="-1" bind:this={modalEl} onkeydown={handleKeydown} in:fly={{ y: 100 }} class={[ "relative mx-auto max-h-[95dvh] max-w-[90dvw] overflow-y-auto overflow-x-hidden rounded-2xl bg-white shadow-2xl outline-none", width, ]} > {#if closeButton} <button class="absolute right-4 top-4 z-50" onclick={() => dispatch("close")}> <CarbonClose class="size-6 text-gray-700" /> </button> {/if} {@render children?.()} </div> </div> </Portal>
chat-ui/src/lib/components/Modal.svelte/0
{ "file_path": "chat-ui/src/lib/components/Modal.svelte", "repo_id": "chat-ui", "token_count": 822 }
75
<script lang="ts"> import type { Model } from "$lib/types/Model"; import { getTokenizer } from "$lib/utils/getTokenizer"; import type { PreTrainedTokenizer } from "@huggingface/transformers"; import { untrack } from "svelte"; interface Props { classNames?: string; prompt?: string; modelTokenizer: Exclude<Model["tokenizer"], undefined>; truncate?: number | undefined; } let { classNames = "", prompt = "", modelTokenizer, truncate = undefined }: Props = $props(); let tokenizer: Promise<PreTrainedTokenizer> = $derived(getTokenizer(modelTokenizer)); let nTokens = $state(0); $effect(() => { prompt && untrack(() => { tokenizer.then((tokenizer) => { const { input_ids } = tokenizer(prompt); nTokens = input_ids.size; }); }); }); let exceedLimit = $derived(nTokens > (truncate || Infinity)); </script> <div class={classNames}> <p class="peer text-sm {exceedLimit ? 'text-red-500 opacity-100' : 'opacity-60 hover:opacity-90'}" > {nTokens}{truncate ? `/${truncate}` : ""} </p> <div class="invisible absolute -top-6 right-0 whitespace-nowrap rounded bg-black px-1 text-sm text-white peer-hover:visible" > Tokens usage </div> </div>
chat-ui/src/lib/components/TokensCounter.svelte/0
{ "file_path": "chat-ui/src/lib/components/TokensCounter.svelte", "repo_id": "chat-ui", "token_count": 449 }
76
<script lang="ts"> import { invalidateAll } from "$app/navigation"; import { page } from "$app/state"; import { base } from "$app/paths"; import type { Model } from "$lib/types/Model"; interface Props { models: Model[]; currentModel: Model; } let { models, currentModel }: Props = $props(); let selectedModelId = $state( models.map((m) => m.id).includes(currentModel.id) ? currentModel.id : models[0].id ); async function handleModelChange() { if (!page.params.id) return; try { const response = await fetch(`${base}/conversation/${page.params.id}`, { method: "PATCH", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ model: selectedModelId }), }); if (!response.ok) { throw new Error("Failed to update model"); } await invalidateAll(); } catch (error) { console.error(error); } } </script> <div class="mx-auto mt-0 flex w-fit flex-col items-center justify-center gap-2 rounded-lg border border-gray-200 bg-gray-500/20 p-4 dark:border-gray-800" > <span> This model is no longer available. Switch to a new one to continue this conversation: </span> <div class="flex items-center space-x-2"> <select bind:value={selectedModelId} class="rounded-md bg-gray-100 px-2 py-1 dark:bg-gray-900 max-sm:max-w-32" > {#each models as model} <option value={model.id}>{model.name}</option> {/each} </select> <button onclick={handleModelChange} disabled={selectedModelId === currentModel.id} class="rounded-md bg-gray-100 px-2 py-1 dark:bg-gray-900" > Accept </button> </div> </div>
chat-ui/src/lib/components/chat/ModelSwitch.svelte/0
{ "file_path": "chat-ui/src/lib/components/chat/ModelSwitch.svelte", "repo_id": "chat-ui", "token_count": 640 }
77
<script lang="ts"> import { usePublicConfig } from "$lib/utils/PublicConfig.svelte"; const publicConfig = usePublicConfig(); interface Props { classNames?: string; } let { classNames = "" }: Props = $props(); </script> {#if publicConfig.PUBLIC_APP_ASSETS === "chatui"} <svg height="30" width="30" viewBox="0 0 30 30" xmlns="http://www.w3.org/2000/svg" class={classNames} > <path d="M4.06151 14.1464C4.06151 11.8818 4.9611 9.71004 6.56237 8.10877C8.16364 6.5075 10.3354 5.60791 12.6 5.60791H16.5231C18.6254 5.60791 20.6416 6.44307 22.1282 7.92965C23.6148 9.41624 24.45 11.4325 24.45 13.5348C24.45 15.6372 23.6148 17.6534 22.1282 19.14C20.6416 20.6266 18.6254 21.4618 16.5231 21.4618H7.08459L4.63844 23.8387C4.59547 23.8942 4.53557 23.9343 4.4678 23.9527C4.40004 23.9712 4.32811 23.9671 4.2629 23.941C4.1977 23.9149 4.14276 23.8683 4.10643 23.8082C4.07009 23.7481 4.05432 23.6778 4.06151 23.6079V14.1464Z" class="fill-primary-500" /> </svg> {:else} <img class={classNames} alt="{publicConfig.PUBLIC_APP_NAME} logo" src="{publicConfig.assetPath}/logo.svg" /> {/if}
chat-ui/src/lib/components/icons/Logo.svelte/0
{ "file_path": "chat-ui/src/lib/components/icons/Logo.svelte", "repo_id": "chat-ui", "token_count": 538 }
78
import type { Migration } from "."; import { collections } from "$lib/server/database"; import { ObjectId } from "mongodb"; const resetTools: Migration = { _id: new ObjectId("000000000000000000000007"), name: "Reset tools to empty", up: async () => { const { settings } = collections; await settings.updateMany({}, { $set: { tools: [] } }); return true; }, runEveryTime: false, }; export default resetTools;
chat-ui/src/lib/migrations/routines/07-reset-tools-in-settings.ts/0
{ "file_path": "chat-ui/src/lib/migrations/routines/07-reset-tools-in-settings.ts", "repo_id": "chat-ui", "token_count": 133 }
79
import { Issuer, type BaseClient, type UserinfoResponse, type TokenSet, custom, } from "openid-client"; import { addHours, addWeeks } from "date-fns"; import { config } from "$lib/server/config"; import { sha256 } from "$lib/utils/sha256"; import { z } from "zod"; import { dev } from "$app/environment"; import type { Cookies } from "@sveltejs/kit"; import { collections } from "$lib/server/database"; import JSON5 from "json5"; import { logger } from "$lib/server/logger"; import { ObjectId } from "mongodb"; import type { Cookie } from "elysia"; import { adminTokenManager } from "./adminToken"; export interface OIDCSettings { redirectURI: string; } export interface OIDCUserInfo { token: TokenSet; userData: UserinfoResponse; } const stringWithDefault = (value: string) => z .string() .default(value) .transform((el) => (el ? el : value)); export const OIDConfig = z .object({ CLIENT_ID: stringWithDefault(config.OPENID_CLIENT_ID), CLIENT_SECRET: stringWithDefault(config.OPENID_CLIENT_SECRET), PROVIDER_URL: stringWithDefault(config.OPENID_PROVIDER_URL), SCOPES: stringWithDefault(config.OPENID_SCOPES), NAME_CLAIM: stringWithDefault(config.OPENID_NAME_CLAIM).refine( (el) => !["preferred_username", "email", "picture", "sub"].includes(el), { message: "nameClaim cannot be one of the restricted keys." } ), TOLERANCE: stringWithDefault(config.OPENID_TOLERANCE), RESOURCE: stringWithDefault(config.OPENID_RESOURCE), ID_TOKEN_SIGNED_RESPONSE_ALG: z.string().optional(), }) .parse(JSON5.parse(config.OPENID_CONFIG || "{}")); export const requiresUser = !!OIDConfig.CLIENT_ID && !!OIDConfig.CLIENT_SECRET; const sameSite = z .enum(["lax", "none", "strict"]) .default(dev || config.ALLOW_INSECURE_COOKIES === "true" ? "lax" : "none") .parse(config.COOKIE_SAMESITE === "" ? undefined : config.COOKIE_SAMESITE); const secure = z .boolean() .default(!(dev || config.ALLOW_INSECURE_COOKIES === "true")) .parse(config.COOKIE_SECURE === "" ? undefined : config.COOKIE_SECURE === "true"); export function refreshSessionCookie(cookies: Cookies, sessionId: string) { cookies.set(config.COOKIE_NAME, sessionId, { path: "/", // So that it works inside the space's iframe sameSite, secure, httpOnly: true, expires: addWeeks(new Date(), 2), }); } export async function findUser(sessionId: string) { const session = await collections.sessions.findOne({ sessionId }); if (!session) { return null; } return await collections.users.findOne({ _id: session.userId }); } export const authCondition = (locals: App.Locals) => { if (!locals.user && !locals.sessionId) { throw new Error("User or sessionId is required"); } return locals.user ? { userId: locals.user._id } : { sessionId: locals.sessionId, userId: { $exists: false } }; }; /** * Generates a CSRF token using the user sessionId. Note that we don't need a secret because sessionId is enough. 
*/ export async function generateCsrfToken(sessionId: string, redirectUrl: string): Promise<string> { const data = { expiration: addHours(new Date(), 1).getTime(), redirectUrl, }; return Buffer.from( JSON.stringify({ data, signature: await sha256(JSON.stringify(data) + "##" + sessionId), }) ).toString("base64"); } async function getOIDCClient(settings: OIDCSettings): Promise<BaseClient> { const issuer = await Issuer.discover(OIDConfig.PROVIDER_URL); const client_config: ConstructorParameters<typeof issuer.Client>[0] = { client_id: OIDConfig.CLIENT_ID, client_secret: OIDConfig.CLIENT_SECRET, redirect_uris: [settings.redirectURI], response_types: ["code"], [custom.clock_tolerance]: OIDConfig.TOLERANCE || undefined, id_token_signed_response_alg: OIDConfig.ID_TOKEN_SIGNED_RESPONSE_ALG || undefined, }; const alg_supported = issuer.metadata["id_token_signing_alg_values_supported"]; if (Array.isArray(alg_supported)) { client_config.id_token_signed_response_alg ??= alg_supported[0]; } return new issuer.Client(client_config); } export async function getOIDCAuthorizationUrl( settings: OIDCSettings, params: { sessionId: string } ): Promise<string> { const client = await getOIDCClient(settings); const csrfToken = await generateCsrfToken(params.sessionId, settings.redirectURI); return client.authorizationUrl({ scope: OIDConfig.SCOPES, state: csrfToken, resource: OIDConfig.RESOURCE || undefined, }); } export async function getOIDCUserData( settings: OIDCSettings, code: string, iss?: string ): Promise<OIDCUserInfo> { const client = await getOIDCClient(settings); const token = await client.callback(settings.redirectURI, { code, iss }); const userData = await client.userinfo(token); return { token, userData }; } export async function validateAndParseCsrfToken( token: string, sessionId: string ): Promise<{ /** This is the redirect url that was passed to the OIDC provider */ redirectUrl: string; } | null> { try { const { data, signature } = z .object({ data: z.object({ expiration: z.number().int(), redirectUrl: z.string().url(), }), signature: z.string().length(64), }) .parse(JSON.parse(token)); const reconstructSign = await sha256(JSON.stringify(data) + "##" + sessionId); if (data.expiration > Date.now() && signature === reconstructSign) { return { redirectUrl: data.redirectUrl }; } } catch (e) { logger.error(e); } return null; } type CookieRecord = | { type: "elysia"; value: Record<string, Cookie<string | undefined>> } | { type: "svelte"; value: Cookies }; type HeaderRecord = | { type: "elysia"; value: Record<string, string | undefined> } | { type: "svelte"; value: Headers }; export async function authenticateRequest( headers: HeaderRecord, cookie: CookieRecord, isApi?: boolean ): Promise<App.Locals & { secretSessionId: string }> { // once the entire API has been moved to elysia // we can move this function to authPlugin.ts // and get rid of the isApi && type: "svelte" options const token = cookie.type === "elysia" ? 
cookie.value[config.COOKIE_NAME].value : cookie.value.get(config.COOKIE_NAME); let email = null; if (config.TRUSTED_EMAIL_HEADER) { if (headers.type === "elysia") { email = headers.value[config.TRUSTED_EMAIL_HEADER]; } else { email = headers.value.get(config.TRUSTED_EMAIL_HEADER); } } let secretSessionId: string | null = null; let sessionId: string | null = null; if (email) { secretSessionId = sessionId = await sha256(email); return { user: { _id: new ObjectId(sessionId.slice(0, 24)), name: email, email, createdAt: new Date(), updatedAt: new Date(), hfUserId: email, avatarUrl: "", logoutDisabled: true, }, sessionId, secretSessionId, isAdmin: adminTokenManager.isAdmin(sessionId), }; } if (token) { secretSessionId = token; sessionId = await sha256(token); const user = await findUser(sessionId); return { user: user ?? undefined, sessionId, secretSessionId, isAdmin: user?.isAdmin || adminTokenManager.isAdmin(sessionId), }; } if (isApi) { const authorization = headers.type === "elysia" ? headers.value["Authorization"] : headers.value.get("Authorization"); if (authorization?.startsWith("Bearer ")) { const token = authorization.slice(7); const hash = await sha256(token); sessionId = secretSessionId = hash; const cacheHit = await collections.tokenCaches.findOne({ tokenHash: hash }); if (cacheHit) { const user = await collections.users.findOne({ hfUserId: cacheHit.userId }); if (!user) { throw new Error("User not found"); } return { user, sessionId, secretSessionId, isAdmin: user.isAdmin || adminTokenManager.isAdmin(sessionId), }; } const response = await fetch("https://huggingface.co/api/whoami-v2", { headers: { Authorization: `Bearer ${token}` }, }); if (!response.ok) { throw new Error("Unauthorized"); } const data = await response.json(); const user = await collections.users.findOne({ hfUserId: data.id }); if (!user) { throw new Error("User not found"); } await collections.tokenCaches.insertOne({ tokenHash: hash, userId: data.id, createdAt: new Date(), updatedAt: new Date(), }); return { user, sessionId, secretSessionId, isAdmin: user.isAdmin || adminTokenManager.isAdmin(sessionId), }; } } // Generate new session if none exists secretSessionId = crypto.randomUUID(); sessionId = await sha256(secretSessionId); if (await collections.sessions.findOne({ sessionId })) { throw new Error("Session ID collision"); } return { user: undefined, sessionId, secretSessionId, isAdmin: false }; }
chat-ui/src/lib/server/auth.ts/0
{ "file_path": "chat-ui/src/lib/server/auth.ts", "repo_id": "chat-ui", "token_count": 3197 }
80
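`generateCsrfToken` and `validateAndParseCsrfToken` above follow a common stateless pattern: the token carries its own expiration together with a digest of the payload joined to the session id, so validation only needs to recompute the digest and check the clock, with no server-side storage. Below is a sketch of that shape; it uses `DefaultHasher` from the standard library purely to stay dependency-free, which is not cryptographically secure, whereas the real code hashes the JSON payload with SHA-256 and base64-encodes the result. All names below are illustrative.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::time::{SystemTime, UNIX_EPOCH};

// Toy stand-in for SHA-256; illustration only, NOT suitable for real CSRF protection.
fn toy_digest(input: &str) -> u64 {
    let mut hasher = DefaultHasher::new();
    input.hash(&mut hasher);
    hasher.finish()
}

fn now_secs() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

/// Token = (expiration, payload, signature bound to the session id).
fn make_token(redirect_url: &str, session_id: &str, ttl_secs: u64) -> (u64, String, u64) {
    let expiration = now_secs() + ttl_secs;
    let payload = format!("{expiration}|{redirect_url}");
    // Same shape as signing `JSON.stringify(data) + "##" + sessionId` in the code above.
    let signature = toy_digest(&format!("{payload}##{session_id}"));
    (expiration, redirect_url.to_string(), signature)
}

/// Validation recomputes the signature and checks that the token has not expired.
fn validate(expiration: u64, redirect_url: &str, signature: u64, session_id: &str) -> bool {
    let payload = format!("{expiration}|{redirect_url}");
    expiration > now_secs() && toy_digest(&format!("{payload}##{session_id}")) == signature
}

fn main() {
    let (exp, url, sig) = make_token("https://example.com/callback", "session-123", 3600);
    assert!(validate(exp, &url, sig, "session-123"));
    // A different session id must not be able to replay the token.
    assert!(!validate(exp, &url, sig, "another-session"));
    println!("ok");
}
```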
import type { MessageFile } from "$lib/types/Message"; import { z } from "zod"; export interface FileProcessorOptions<TMimeType extends string = string> { supportedMimeTypes: TMimeType[]; maxSizeInMB: number; } export type ImageProcessor<TMimeType extends string = string> = (file: MessageFile) => Promise<{ file: Buffer; mime: TMimeType; }>; export const createDocumentProcessorOptionsValidator = <TMimeType extends string = string>( defaults: FileProcessorOptions<TMimeType> ) => { return z .object({ supportedMimeTypes: z .array( z.enum<string, [TMimeType, ...TMimeType[]]>([ defaults.supportedMimeTypes[0], ...defaults.supportedMimeTypes.slice(1), ]) ) .default(defaults.supportedMimeTypes), maxSizeInMB: z.number().positive().default(defaults.maxSizeInMB), }) .default(defaults); }; export type DocumentProcessor<TMimeType extends string = string> = (file: MessageFile) => { file: Buffer; mime: TMimeType; }; export type AsyncDocumentProcessor<TMimeType extends string = string> = ( file: MessageFile ) => Promise<{ file: Buffer; mime: TMimeType; }>; export function makeDocumentProcessor<TMimeType extends string = string>( options: FileProcessorOptions<TMimeType> ): AsyncDocumentProcessor<TMimeType> { return async (file) => { const { supportedMimeTypes, maxSizeInMB } = options; const { mime, value } = file; const buffer = Buffer.from(value, "base64"); const tooLargeInBytes = buffer.byteLength > maxSizeInMB * 1000 * 1000; if (tooLargeInBytes) { throw Error("Document is too large"); } const outputMime = validateMimeType(supportedMimeTypes, mime); return { file: buffer, mime: outputMime }; }; } const validateMimeType = <T extends readonly string[]>( supportedMimes: T, mime: string ): T[number] => { if (!supportedMimes.includes(mime)) { const supportedMimesStr = supportedMimes.join(", "); throw Error(`Mimetype "${mime}" not found in supported mimes: ${supportedMimesStr}`); } return mime; };
chat-ui/src/lib/server/endpoints/document.ts/0
{ "file_path": "chat-ui/src/lib/server/endpoints/document.ts", "repo_id": "chat-ui", "token_count": 706 }
81
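A usage sketch for the file-processor helpers above. The mime type, size limit, and the inline `MessageFile` value are illustrative assumptions, not configuration taken from chat-ui.

```ts
import {
	createDocumentProcessorOptionsValidator,
	makeDocumentProcessor,
} from "$lib/server/endpoints/document";
import type { MessageFile } from "$lib/types/Message";

// Illustrative defaults: accept PDFs up to 5 MB. Parsing `undefined` lets Zod
// fill in the defaults when no explicit options are configured.
const pdfOptions = createDocumentProcessorOptionsValidator({
	supportedMimeTypes: ["application/pdf"],
	maxSizeInMB: 5,
}).parse(undefined);

const processPdf = makeDocumentProcessor(pdfOptions);

// A hypothetical base64-encoded file attached to a user message.
const file: MessageFile = {
	type: "base64",
	name: "report.pdf",
	value: Buffer.from("%PDF-1.4 ...").toString("base64"),
	mime: "application/pdf",
};

// Throws if the decoded buffer exceeds maxSizeInMB or the mime type is unsupported.
const { file: buffer, mime } = await processPdf(file);
```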
import { error } from "@sveltejs/kit"; import { collections } from "$lib/server/database"; import type { Conversation } from "$lib/types/Conversation"; import type { SharedConversation } from "$lib/types/SharedConversation"; import type { MessageFile } from "$lib/types/Message"; export async function downloadFile( sha256: string, convId: Conversation["_id"] | SharedConversation["_id"] ): Promise<MessageFile & { type: "base64" }> { const fileId = collections.bucket.find({ filename: `${convId.toString()}-${sha256}` }); const file = await fileId.next(); if (!file) { error(404, "File not found"); } if (file.metadata?.conversation !== convId.toString()) { error(403, "You don't have access to this file."); } const mime = file.metadata?.mime; const name = file.filename; const fileStream = collections.bucket.openDownloadStream(file._id); const buffer = await new Promise<Buffer>((resolve, reject) => { const chunks: Uint8Array[] = []; fileStream.on("data", (chunk) => chunks.push(chunk)); fileStream.on("error", reject); fileStream.on("end", () => resolve(Buffer.concat(chunks))); }); return { type: "base64", name, value: buffer.toString("base64"), mime }; }
chat-ui/src/lib/server/files/downloadFile.ts/0
{ "file_path": "chat-ui/src/lib/server/files/downloadFile.ts", "repo_id": "chat-ui", "token_count": 397 }
82
import { collectDefaultMetrics, Registry, Counter, Summary } from "prom-client"; import express from "express"; import { logger } from "$lib/server/logger"; import { config } from "$lib/server/config"; import type { Model } from "$lib/types/Model"; import { onExit } from "./exitHandler"; import { promisify } from "util"; interface Metrics { model: { conversationsTotal: Counter<Model["id"]>; messagesTotal: Counter<Model["id"]>; tokenCountTotal: Counter<Model["id"]>; timePerOutputToken: Summary<Model["id"]>; timeToFirstToken: Summary<Model["id"]>; latency: Summary<Model["id"]>; votesPositive: Counter<Model["id"]>; votesNegative: Counter<Model["id"]>; }; webSearch: { requestCount: Counter; pageFetchCount: Counter; pageFetchCountError: Counter; pageFetchDuration: Summary; embeddingDuration: Summary; }; tool: { toolUseCount: Counter<string>; toolUseCountError: Counter<string>; toolUseDuration: Summary<string>; timeToChooseTools: Summary; }; } export class MetricsServer { private static instance: MetricsServer; private metrics: Metrics; private constructor() { const app = express(); const port = Number(config.METRICS_PORT || "5565"); if (isNaN(port) || port < 0 || port > 65535) { logger.warn(`Invalid value for METRICS_PORT: ${config.METRICS_PORT}`); } if (config.METRICS_ENABLED !== "false" && config.METRICS_ENABLED !== "true") { logger.warn(`Invalid value for METRICS_ENABLED: ${config.METRICS_ENABLED}`); } if (config.METRICS_ENABLED === "true") { const server = app.listen(port, () => { logger.info(`Metrics server listening on port ${port}`); }); const closeServer = promisify(server.close); onExit(async () => { logger.info("Disconnecting metrics server ..."); await closeServer(); logger.info("Server stopped ..."); }); } const register = new Registry(); collectDefaultMetrics({ register }); this.metrics = { model: { conversationsTotal: new Counter({ name: "model_conversations_total", help: "Total number of conversations", labelNames: ["model"], registers: [register], }), messagesTotal: new Counter({ name: "model_messages_total", help: "Total number of messages", labelNames: ["model"], registers: [register], }), tokenCountTotal: new Counter({ name: "model_token_count_total", help: "Total number of tokens", labelNames: ["model"], registers: [register], }), timePerOutputToken: new Summary({ name: "model_time_per_output_token_ms", help: "Time per output token in ms", labelNames: ["model"], registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), timeToFirstToken: new Summary({ name: "model_time_to_first_token_ms", help: "Time to first token", labelNames: ["model"], registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), latency: new Summary({ name: "model_latency_ms", help: "Total latency until end of answer", labelNames: ["model"], registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), votesPositive: new Counter({ name: "model_votes_positive", help: "Total number of positive votes on messages generated by the model", labelNames: ["model"], registers: [register], }), votesNegative: new Counter({ name: "model_votes_negative", help: "Total number of negative votes on messages generated by the model", labelNames: ["model"], registers: [register], }), }, webSearch: { requestCount: new Counter({ name: "web_search_request_count", help: "Total number of web search requests", registers: [register], }), pageFetchCount: new Counter({ name: "web_search_page_fetch_count", help: "Total number of web search page fetches", registers: [register], }), pageFetchCountError: new Counter({ name: 
"web_search_page_fetch_count_error", help: "Total number of web search page fetch errors", registers: [register], }), pageFetchDuration: new Summary({ name: "web_search_page_fetch_duration_ms", help: "Web search page fetch duration", registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), embeddingDuration: new Summary({ name: "web_search_embedding_duration_ms", help: "Web search embedding duration", registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), }, tool: { toolUseCount: new Counter({ name: "tool_use_count", help: "Total number of tool uses", labelNames: ["tool"], registers: [register], }), toolUseCountError: new Counter({ name: "tool_use_count_error", help: "Total number of tool use errors", labelNames: ["tool"], registers: [register], }), toolUseDuration: new Summary({ name: "tool_use_duration_ms", help: "Tool use duration", labelNames: ["tool"], registers: [register], maxAgeSeconds: 30 * 60, // longer duration since we use this to give feedback to the user ageBuckets: 5, }), timeToChooseTools: new Summary({ name: "time_to_choose_tools_ms", help: "Time to choose tools", labelNames: ["model"], registers: [register], maxAgeSeconds: 5 * 60, ageBuckets: 5, }), }, }; app.get("/metrics", (req, res) => { register.metrics().then((metrics) => { res.set("Content-Type", "text/plain"); res.send(metrics); }); }); } public static getInstance(): MetricsServer { if (!MetricsServer.instance) { MetricsServer.instance = new MetricsServer(); } return MetricsServer.instance; } public static getMetrics(): Metrics { return MetricsServer.getInstance().metrics; } }
chat-ui/src/lib/server/metrics.ts/0
{ "file_path": "chat-ui/src/lib/server/metrics.ts", "repo_id": "chat-ui", "token_count": 2366 }
83
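A sketch of how the singleton registry above is typically consumed from request-handling code. The model and tool label values are placeholders; the real call sites in chat-ui may differ.

```ts
import { MetricsServer } from "$lib/server/metrics";

// Lazily creates the singleton on first use, then records labelled observations.
const { model, tool } = MetricsServer.getMetrics();

model.conversationsTotal.inc({ model: "example-model-id" });
model.messagesTotal.inc({ model: "example-model-id" });
model.latency.observe({ model: "example-model-id" }, 1_250); // ms until the answer finished

tool.toolUseCount.inc({ tool: "websearch" });
tool.toolUseDuration.observe({ tool: "websearch" }, 840); // ms
```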
import { config } from "$lib/server/config"; import { Client } from "@gradio/client"; import { SignJWT } from "jose"; import JSON5 from "json5"; import { MessageToolUpdateType, MessageUpdateType, type MessageToolUpdate, } from "$lib/types/MessageUpdate"; import { logger } from "$lib/server/logger"; export async function* callSpace<TInput extends unknown[], TOutput extends unknown[]>( name: string, func: string, parameters: TInput, ipToken: string | undefined, uuid: string ): AsyncGenerator<MessageToolUpdate, TOutput, undefined> { class CustomClient extends Client { fetch(input: RequestInfo | URL, init?: RequestInit): Promise<Response> { init = init || {}; init.headers = { ...(init.headers || {}), ...(ipToken ? { "X-IP-Token": ipToken } : {}), }; return super.fetch(input, init); } } const client = await CustomClient.connect(name, { hf_token: ipToken // dont pass the hf token if we have an ip token ? undefined : ((config.HF_TOKEN ?? config.HF_ACCESS_TOKEN) as unknown as `hf_${string}`), events: ["status", "data"], }); const job = client.submit(func, parameters); let data; for await (const output of job) { if (output.type === "data") { data = output.data as TOutput; } if (output.type === "status") { if (output.stage === "error") { logger.error(output.message); throw new Error(output.message); } if (output.eta) { yield { type: MessageUpdateType.Tool, subtype: MessageToolUpdateType.ETA, eta: output.eta, uuid, }; } } } if (!data) { throw new Error("No data found in tool call"); } return data; } export async function getIpToken(ip: string, username?: string) { const ipTokenSecret = config.IP_TOKEN_SECRET; if (!ipTokenSecret) { return; } return await new SignJWT({ ip, user: username }) .setProtectedHeader({ alg: "HS256" }) .setIssuedAt() .setExpirationTime("1m") .sign(new TextEncoder().encode(ipTokenSecret)); } export { toolHasName } from "$lib/utils/tools"; export async function extractJson(text: string): Promise<unknown[]> { const calls: string[] = []; let codeBlocks = Array.from(text.matchAll(/```json\n(.*?)```/gs)) .map(([, block]) => block) // remove trailing comma .map((block) => block.trim().replace(/,$/, "")); // if there is no code block, try to find the first json object // by trimming the string and trying to parse with JSON5 if (codeBlocks.length === 0) { const start = [text.indexOf("["), text.indexOf("{")] .filter((i) => i !== -1) .reduce((a, b) => Math.max(a, b), -Infinity); const end = [text.lastIndexOf("]"), text.lastIndexOf("}")] .filter((i) => i !== -1) .reduce((a, b) => Math.min(a, b), Infinity); if (start === -Infinity || end === Infinity) { return [""]; } const json = text.substring(start, end + 1); codeBlocks = [json]; } // grab only the capture group from the regex match for (const block of codeBlocks) { // make it an array if it's not already let call = JSON5.parse(block); if (!Array.isArray(call)) { call = [call]; } calls.push(call); } return calls.flat(); }
chat-ui/src/lib/server/tools/utils.ts/0
{ "file_path": "chat-ui/src/lib/server/tools/utils.ts", "repo_id": "chat-ui", "token_count": 1175 }
84
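A small sketch of `extractJson` above applied to a model response that wraps a tool call in a fenced JSON block. The tool name and parameters are made up for illustration.

```ts
import { extractJson } from "$lib/server/tools/utils";

// Build a response containing a fenced JSON block without embedding literal
// triple backticks in this example's own source.
const fence = "`".repeat(3);
const llmOutput = [
	"Sure, here is the call:",
	`${fence}json`,
	'{ "name": "websearch", "parameters": { "query": "latest chat-ui release" } }',
	fence,
].join("\n");

const calls = await extractJson(llmOutput);
// -> [ { name: "websearch", parameters: { query: "latest chat-ui release" } } ]
```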
import type { WebSearchScrapedSource, WebSearchSource } from "$lib/types/WebSearch"; import type { MessageWebSearchUpdate } from "$lib/types/MessageUpdate"; import { withPage } from "./playwright"; import { spatialParser } from "./parser"; import { htmlToMarkdownTree } from "../markdown/tree"; import { timeout } from "$lib/utils/timeout"; import { makeGeneralUpdate } from "../update"; import { MetricsServer } from "$lib/server/metrics"; import { logger } from "$lib/server/logger"; export const scrape = (maxCharsPerElem: number) => async function* ( source: WebSearchSource ): AsyncGenerator<MessageWebSearchUpdate, WebSearchScrapedSource | undefined, undefined> { try { const startTime = Date.now(); MetricsServer.getMetrics().webSearch.pageFetchCount.inc(); const page = await scrapeUrl(source.link, maxCharsPerElem); MetricsServer.getMetrics().webSearch.pageFetchDuration.observe(Date.now() - startTime); yield makeGeneralUpdate({ message: "Browsing webpage", args: [source.link], }); return { ...source, page }; } catch (e) { MetricsServer.getMetrics().webSearch.pageFetchCountError.inc(); logger.error(e, `Error scraping webpage: ${source.link}`); } }; export async function scrapeUrl(url: string, maxCharsPerElem: number) { return withPage(url, async (page, res) => { if (!res) throw Error("Failed to load page"); if (!res.ok()) throw Error(`Failed to load page: ${res.status()}`); // Check if it's a non-html content type that we can handle directly // TODO: direct mappings to markdown can be added for markdown, csv and others const contentType = res.headers()["content-type"] ?? ""; if ( contentType.includes("text/plain") || contentType.includes("text/markdown") || contentType.includes("application/json") || contentType.includes("application/xml") || contentType.includes("text/csv") ) { const title = await page.title(); const content = await page.content(); return { title, markdownTree: htmlToMarkdownTree( title, [{ tagName: "p", attributes: {}, content: [content] }], maxCharsPerElem ), }; } const scrapedOutput = await timeout(page.evaluate(spatialParser), 2000) .then(({ elements, ...parsed }) => ({ ...parsed, markdownTree: htmlToMarkdownTree(parsed.title, elements, maxCharsPerElem), })) .catch((cause) => { throw Error("Parsing failed", { cause }); }); return scrapedOutput; }); }
chat-ui/src/lib/server/websearch/scrape/scrape.ts/0
{ "file_path": "chat-ui/src/lib/server/websearch/scrape/scrape.ts", "repo_id": "chat-ui", "token_count": 863 }
85
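A consumption sketch for the `scrape` generator above: the generator is driven by hand so that both the yielded progress updates and the final return value are visible. The shape of the source object is an assumption for illustration.

```ts
import { scrape } from "$lib/server/websearch/scrape/scrape";

const scrapeSource = scrape(1_000); // max characters kept per element
const generator = scrapeSource({ title: "Example", link: "https://example.com" });

let scraped;
for (;;) {
	const { value, done } = await generator.next();
	if (done) {
		scraped = value; // WebSearchScrapedSource | undefined (undefined on scrape failure)
		break;
	}
	console.log("web search update:", value); // MessageWebSearchUpdate
}
```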
import { writable } from "svelte/store"; export const isAborted = writable<boolean>(false);
chat-ui/src/lib/stores/isAborted.ts/0
{ "file_path": "chat-ui/src/lib/stores/isAborted.ts", "repo_id": "chat-ui", "token_count": 30 }
86
import type { WebSearchSource } from "$lib/types/WebSearch"; import type { ToolCall, ToolResult } from "$lib/types/Tool"; export type MessageUpdate = | MessageStatusUpdate | MessageTitleUpdate | MessageToolUpdate | MessageWebSearchUpdate | MessageStreamUpdate | MessageFileUpdate | MessageFinalAnswerUpdate | MessageReasoningUpdate; export enum MessageUpdateType { Status = "status", Title = "title", Tool = "tool", WebSearch = "webSearch", Stream = "stream", File = "file", FinalAnswer = "finalAnswer", Reasoning = "reasoning", } // Status export enum MessageUpdateStatus { Started = "started", Error = "error", Finished = "finished", KeepAlive = "keepAlive", } export interface MessageStatusUpdate { type: MessageUpdateType.Status; status: MessageUpdateStatus; message?: string; } // Web search export enum MessageWebSearchUpdateType { Update = "update", Error = "error", Sources = "sources", Finished = "finished", } export interface BaseMessageWebSearchUpdate<TSubType extends MessageWebSearchUpdateType> { type: MessageUpdateType.WebSearch; subtype: TSubType; } export interface MessageWebSearchErrorUpdate extends BaseMessageWebSearchUpdate<MessageWebSearchUpdateType.Error> { message: string; args?: string[]; } export interface MessageWebSearchGeneralUpdate extends BaseMessageWebSearchUpdate<MessageWebSearchUpdateType.Update> { message: string; args?: string[]; } export interface MessageWebSearchSourcesUpdate extends BaseMessageWebSearchUpdate<MessageWebSearchUpdateType.Sources> { message: string; sources: WebSearchSource[]; } export type MessageWebSearchFinishedUpdate = BaseMessageWebSearchUpdate<MessageWebSearchUpdateType.Finished>; export type MessageWebSearchUpdate = | MessageWebSearchErrorUpdate | MessageWebSearchGeneralUpdate | MessageWebSearchSourcesUpdate | MessageWebSearchFinishedUpdate; // Tool export enum MessageToolUpdateType { /** A request to call a tool alongside it's parameters */ Call = "call", /** The result of a tool call */ Result = "result", /** Error while running tool */ Error = "error", /** ETA update */ ETA = "eta", } interface MessageToolBaseUpdate<TSubType extends MessageToolUpdateType> { type: MessageUpdateType.Tool; subtype: TSubType; uuid: string; } export interface MessageToolCallUpdate extends MessageToolBaseUpdate<MessageToolUpdateType.Call> { call: ToolCall; } export interface MessageToolResultUpdate extends MessageToolBaseUpdate<MessageToolUpdateType.Result> { result: ToolResult; } export interface MessageToolErrorUpdate extends MessageToolBaseUpdate<MessageToolUpdateType.Error> { message: string; } export interface MessageToolETAUpdate extends MessageToolBaseUpdate<MessageToolUpdateType.ETA> { eta: number; } export type MessageToolUpdate = | MessageToolCallUpdate | MessageToolResultUpdate | MessageToolErrorUpdate | MessageToolETAUpdate; // Everything else export interface MessageTitleUpdate { type: MessageUpdateType.Title; title: string; } export interface MessageStreamUpdate { type: MessageUpdateType.Stream; token: string; } export enum MessageReasoningUpdateType { Stream = "stream", Status = "status", } export type MessageReasoningUpdate = MessageReasoningStreamUpdate | MessageReasoningStatusUpdate; export interface MessageReasoningStreamUpdate { type: MessageUpdateType.Reasoning; subtype: MessageReasoningUpdateType.Stream; token: string; } export interface MessageReasoningStatusUpdate { type: MessageUpdateType.Reasoning; subtype: MessageReasoningUpdateType.Status; status: string; } export interface MessageFileUpdate { type: MessageUpdateType.File; name: string; 
sha: string; mime: string; } export interface MessageFinalAnswerUpdate { type: MessageUpdateType.FinalAnswer; text: string; interrupted: boolean; webSources?: { uri: string; title: string }[]; }
chat-ui/src/lib/types/MessageUpdate.ts/0
{ "file_path": "chat-ui/src/lib/types/MessageUpdate.ts", "repo_id": "chat-ui", "token_count": 1093 }
87
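Because `MessageUpdate` above is a discriminated union, consumers usually narrow on `type` (and `subtype`) before touching variant-specific fields. A small sketch of that pattern, with a hypothetical `describe` helper:

```ts
import {
	MessageUpdateType,
	MessageToolUpdateType,
	type MessageUpdate,
	type MessageToolUpdate,
} from "$lib/types/MessageUpdate";

// Narrow the union before accessing subtype-specific fields.
function isToolUpdate(update: MessageUpdate): update is MessageToolUpdate {
	return update.type === MessageUpdateType.Tool;
}

function describe(update: MessageUpdate): string {
	if (isToolUpdate(update)) {
		return update.subtype === MessageToolUpdateType.Error
			? `tool call ${update.uuid} failed: ${update.message}`
			: `tool update (${update.subtype}) for call ${update.uuid}`;
	}
	if (update.type === MessageUpdateType.Stream) {
		return `token: ${update.token}`;
	}
	return `update of type ${update.type}`;
}
```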
import type { env as publicEnv } from "$env/dynamic/public"; import { page } from "$app/state"; import { base } from "$app/paths"; import type { Transporter } from "@sveltejs/kit"; import { getContext } from "svelte"; type PublicConfigKey = keyof typeof publicEnv; class PublicConfigManager { #configStore = $state<Record<PublicConfigKey, string>>({}); constructor(initialConfig?: Record<PublicConfigKey, string>) { this.init = this.init.bind(this); this.getPublicConfig = this.getPublicConfig.bind(this); if (initialConfig) { this.init(initialConfig); } } init(publicConfig: Record<PublicConfigKey, string>) { this.#configStore = publicConfig; } get(key: PublicConfigKey) { return this.#configStore[key]; } getPublicConfig() { return this.#configStore; } get isHuggingChat() { return this.#configStore.PUBLIC_APP_ASSETS === "huggingchat"; } get assetPath() { return ( (this.#configStore.PUBLIC_ORIGIN || page.url.origin) + base + "/" + this.#configStore.PUBLIC_APP_ASSETS ); } } type ConfigProxy = PublicConfigManager & { [K in PublicConfigKey]: string }; export function getConfigManager(initialConfig?: Record<PublicConfigKey, string>) { const publicConfigManager = new PublicConfigManager(initialConfig); const publicConfig: ConfigProxy = new Proxy(publicConfigManager, { get(target, prop) { if (prop in target) { return Reflect.get(target, prop); } if (typeof prop === "string") { return target.get(prop as PublicConfigKey); } return undefined; }, set(target, prop, value, receiver) { if (prop in target) { return Reflect.set(target, prop, value, receiver); } return false; }, }) as ConfigProxy; return publicConfig; } export const publicConfigTransporter: Transporter = { encode: (value) => value instanceof PublicConfigManager ? JSON.stringify(value.getPublicConfig()) : false, decode: (value) => getConfigManager(JSON.parse(value)), }; export const usePublicConfig = () => getContext<ConfigProxy>("publicConfig");
chat-ui/src/lib/utils/PublicConfig.svelte.ts/0
{ "file_path": "chat-ui/src/lib/utils/PublicConfig.svelte.ts", "repo_id": "chat-ui", "token_count": 691 }
88
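A sketch of how the proxied config above might be read from component code. `usePublicConfig` relies on `getContext`, so it has to be called during component initialisation, and `PUBLIC_APP_NAME` is an assumed key used purely for illustration.

```ts
// Inside a Svelte component's <script lang="ts"> block.
import { usePublicConfig } from "$lib/utils/PublicConfig.svelte";

const publicConfig = usePublicConfig();

// Property access falls through the Proxy to the underlying config store.
// PUBLIC_APP_NAME is a hypothetical key here.
const appName: string = publicConfig.PUBLIC_APP_NAME;
const isHuggingChat: boolean = publicConfig.isHuggingChat;
```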
type Gen<T, TReturn> = AsyncGenerator<T, TReturn, undefined>; type GenPromiseMap<T, TReturn> = Map< Gen<T, TReturn>, Promise<{ gen: Gen<T, TReturn> } & IteratorResult<T, TReturn>> >; /** Merges multiple async generators into a single async generator that yields values from all of them in parallel. */ export async function* mergeAsyncGenerators<T, TReturn>( generators: Gen<T, TReturn>[] ): Gen<T, TReturn[]> { const promises: GenPromiseMap<T, TReturn> = new Map(); const results: Map<Gen<T, TReturn>, TReturn> = new Map(); for (const gen of generators) { promises.set( gen, gen.next().then((result) => ({ gen, ...result })) ); } while (promises.size) { const { gen, value, done } = await Promise.race(promises.values()); if (done) { results.set(gen, value as TReturn); promises.delete(gen); } else { promises.set( gen, gen.next().then((result) => ({ gen, ...result })) ); yield value as T; } } const orderedResults = generators.map((gen) => results.get(gen) as TReturn); return orderedResults; }
chat-ui/src/lib/utils/mergeAsyncGenerators.ts/0
{ "file_path": "chat-ui/src/lib/utils/mergeAsyncGenerators.ts", "repo_id": "chat-ui", "token_count": 407 }
89
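A minimal usage sketch for `mergeAsyncGenerators` above, using two toy generators. Driving the merged generator manually with `next()` makes the return values visible, which a `for await` loop would discard.

```ts
import { mergeAsyncGenerators } from "$lib/utils/mergeAsyncGenerators";

async function* letters(): AsyncGenerator<string, number, undefined> {
	yield "a";
	yield "b";
	return 2; // number of items produced
}

async function* digits(): AsyncGenerator<string, number, undefined> {
	yield "1";
	return 1;
}

const merged = mergeAsyncGenerators([letters(), digits()]);

for (;;) {
	const { value, done } = await merged.next();
	if (done) {
		// Return values come back in the order the generators were passed in: [2, 1]
		console.log("returns:", value);
		break;
	}
	// Yields are interleaved as each generator produces a value; the exact
	// ordering depends on scheduling.
	console.log("yield:", value);
}
```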
import { collections } from "$lib/server/database"; import { ObjectId } from "mongodb"; import { describe, expect, it } from "vitest"; import { insertLegacyConversation, insertSideBranchesConversation } from "./treeHelpers.spec"; import { addChildren } from "./addChildren"; import type { Message } from "$lib/types/Message"; const newMessage: Omit<Message, "id"> = { content: "new message", from: "user", }; Object.freeze(newMessage); describe("addChildren", async () => { it("should let you append on legacy conversations", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const convLength = conv.messages.length; addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id); expect(conv.messages.length).toEqual(convLength + 1); }); it("should not let you create branches on legacy conversations", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(() => addChildren(conv, newMessage, conv.messages[0].id)).toThrow(); }); it("should not let you create a message that already exists", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const messageThatAlreadyExists: Message = { id: conv.messages[0].id, content: "new message", from: "user", }; expect(() => addChildren(conv, messageThatAlreadyExists, conv.messages[0].id)).toThrow(); }); it("should let you create branches on conversations with subtrees", async () => { const convId = await insertSideBranchesConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const nChildren = conv.messages[0].children?.length; if (!nChildren) throw new Error("No children found"); addChildren(conv, newMessage, conv.messages[0].id); expect(conv.messages[0].children?.length).toEqual(nChildren + 1); }); it("should let you create a new leaf", async () => { const convId = await insertSideBranchesConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); const parentId = conv.messages[conv.messages.length - 1].id; const nChildren = conv.messages[conv.messages.length - 1].children?.length; if (nChildren === undefined) throw new Error("No children found"); expect(nChildren).toEqual(0); addChildren(conv, newMessage, parentId); expect(conv.messages[conv.messages.length - 2].children?.length).toEqual(nChildren + 1); }); it("should let you append to an empty conversation without specifying a parentId", async () => { const conv = { _id: new ObjectId(), rootMessageId: undefined, messages: [] as Message[], }; addChildren(conv, newMessage); expect(conv.messages.length).toEqual(1); expect(conv.rootMessageId).toEqual(conv.messages[0].id); }); it("should throw if you don't specify a parentId in a conversation with messages", async () => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(() => addChildren(conv, newMessage)).toThrow(); }); it("should return the id of the new message", async 
() => { const convId = await insertLegacyConversation(); const conv = await collections.conversations.findOne({ _id: new ObjectId(convId) }); if (!conv) throw new Error("Conversation not found"); expect(addChildren(conv, newMessage, conv.messages[conv.messages.length - 1].id)).toEqual( conv.messages[conv.messages.length - 1].id ); }); });
chat-ui/src/lib/utils/tree/addChildren.spec.ts/0
{ "file_path": "chat-ui/src/lib/utils/tree/addChildren.spec.ts", "repo_id": "chat-ui", "token_count": 1301 }
90
import { UrlDependency } from "$lib/types/UrlDependency"; import type { ConvSidebar } from "$lib/types/ConvSidebar"; import { useAPIClient, handleResponse } from "$lib/APIClient"; import { getConfigManager } from "$lib/utils/PublicConfig.svelte"; export const load = async ({ depends, fetch }) => { depends(UrlDependency.ConversationList); const client = useAPIClient({ fetch }); const [ settings, models, assistants, oldModels, tools, communityToolCount, user, publicConfig, featureFlags, conversationsData, ] = await Promise.all([ client.user.settings.get().then(handleResponse), client.models.get().then(handleResponse), client.user.assistants.get().then(handleResponse), client.models.old.get().then(handleResponse), client.tools.active.get().then(handleResponse), client.tools.count.get().then(handleResponse), client.user.get().then(handleResponse), client["public-config"].get().then(handleResponse), client["feature-flags"].get().then(handleResponse), client.conversations.get({ query: { p: 0 } }).then(handleResponse), ]); const defaultModel = models[0]; const assistantActive = !models.map(({ id }) => id).includes(settings?.activeModel ?? ""); const { conversations: rawConversations, nConversations } = conversationsData; const conversations = rawConversations.map((conv) => { if (settings?.hideEmojiOnSidebar) { conv.title = conv.title.replace(/\p{Emoji}/gu, ""); } // remove invalid unicode and trim whitespaces conv.title = conv.title.replace(/\uFFFD/gu, "").trimStart(); return { id: conv._id.toString(), title: conv.title, model: conv.model ?? defaultModel, updatedAt: new Date(conv.updatedAt), ...(conv.assistantId ? { assistantId: conv.assistantId.toString(), avatarUrl: client .assistants({ id: conv.assistantId.toString() }) .get() .then(handleResponse) .then((assistant) => { if (!assistant || !assistant.avatar) { return undefined; } return `/settings/assistants/${conv.assistantId}/avatar.jpg?hash=${assistant.avatar}`; }) .catch(() => undefined), } : {}), } satisfies ConvSidebar; }); return { nConversations, conversations, assistant: assistantActive ? await client .assistants({ id: settings?.activeModel }) .get() .then(handleResponse) .catch(() => undefined) : undefined, assistants, models, oldModels, tools, communityToolCount, user, settings: { ...settings, ethicsModalAcceptedAt: settings.ethicsModalAcceptedAt ? new Date(settings.ethicsModalAcceptedAt) : null, }, publicConfig: getConfigManager(publicConfig), ...featureFlags, }; };
chat-ui/src/routes/+layout.ts/0
{ "file_path": "chat-ui/src/routes/+layout.ts", "repo_id": "chat-ui", "token_count": 1058 }
91
import { config } from "$lib/server/config"; import { collections } from "$lib/server/database.js"; import { toolFromConfigs } from "$lib/server/tools/index.js"; import { ReviewStatus } from "$lib/types/Review"; import type { CommunityToolDB } from "$lib/types/Tool.js"; import { ObjectId } from "mongodb"; import { editableToolSchema } from "$lib/server/tools/index.js"; import { generateSearchTokens } from "$lib/utils/searchTokens.js"; import { error } from "@sveltejs/kit"; import { requiresUser } from "$lib/server/auth"; export async function GET({ params }) { if (config.COMMUNITY_TOOLS !== "true") { return new Response("Community tools are not enabled", { status: 403 }); } const toolId = params.toolId; try { const configTool = toolFromConfigs.find((el) => el._id.toString() === toolId); if (configTool) { return Response.json({ _id: toolId, displayName: configTool.displayName, color: configTool.color, icon: configTool.icon, createdByName: undefined, }); } else { // try community tools const tool = await collections.tools .findOne<CommunityToolDB>({ _id: new ObjectId(toolId) }) .then((tool) => tool ? { _id: tool._id.toString(), displayName: tool.displayName, color: tool.color, icon: tool.icon, createdByName: tool.createdByName, review: tool.review, } : undefined ); if (!tool || tool.review !== ReviewStatus.APPROVED) { return new Response(`Tool "${toolId}" not found`, { status: 404 }); } return Response.json(tool); } } catch (e) { return new Response(`Tool "${toolId}" not found`, { status: 404 }); } } export async function PATCH({ request, params, locals }) { const tool = await collections.tools.findOne({ _id: new ObjectId(params.toolId), }); if (!tool) { error(404, "Tool not found"); } if (tool.createdById.toString() !== (locals.user?._id ?? locals.sessionId).toString()) { error(403, "You are not the creator of this tool"); } // can only create tools when logged in, IF login is setup if (!locals.user && requiresUser) { const errors = [{ field: "description", message: "Must be logged in. Unauthorized" }]; return new Response(JSON.stringify({ error: true, errors }), { status: 400 }); } const body = await request.json(); const parse = editableToolSchema.safeParse(body); if (!parse.success) { // Loop through the errors array and create a custom errors array const errors = parse.error.errors.map((error) => { return { field: error.path[0], message: error.message, }; }); return new Response(JSON.stringify({ error: true, errors }), { status: 400 }); } // modify the tool await collections.tools.updateOne( { _id: tool._id }, { $set: { ...parse.data, updatedAt: new Date(), searchTokens: generateSearchTokens(parse.data.displayName), }, } ); return new Response(JSON.stringify({ toolId: tool._id.toString() }), { status: 200 }); } export async function DELETE({ params, locals }) { const tool = await collections.tools.findOne({ _id: new ObjectId(params.toolId) }); if (!tool) { return new Response("Tool not found", { status: 404 }); } if ( tool.createdById.toString() !== (locals.user?._id ?? 
locals.sessionId).toString() && !locals.isAdmin ) { return new Response("You are not the creator of this tool", { status: 403 }); } await collections.tools.deleteOne({ _id: tool._id }); // Remove the tool from all users' settings await collections.settings.updateMany( { tools: { $in: [tool._id.toString()] }, }, { $pull: { tools: tool._id.toString() }, } ); // Remove the tool from all assistants await collections.assistants.updateMany( { tools: { $in: [tool._id.toString()] }, }, { $pull: { tools: tool._id.toString() }, } ); return new Response("Tool deleted", { status: 200 }); }
chat-ui/src/routes/api/tools/[toolId]/+server.ts/0
{ "file_path": "chat-ui/src/routes/api/tools/[toolId]/+server.ts", "repo_id": "chat-ui", "token_count": 1425 }
92
import { useAPIClient, handleResponse } from "$lib/APIClient"; import { UrlDependency } from "$lib/types/UrlDependency"; import { redirect } from "@sveltejs/kit"; export const load = async ({ params, depends, fetch }) => { depends(UrlDependency.Conversation); const client = useAPIClient({ fetch }); try { return await client.conversations({ id: params.id }).get().then(handleResponse); } catch { redirect(302, "/"); } };
chat-ui/src/routes/conversation/[id]/+page.ts/0
{ "file_path": "chat-ui/src/routes/conversation/[id]/+page.ts", "repo_id": "chat-ui", "token_count": 147 }
93
import ModelThumbnail from "./ModelThumbnail.svelte"; import { redirect, type RequestHandler } from "@sveltejs/kit"; import { Resvg } from "@resvg/resvg-js"; import satori from "satori"; import { html } from "satori-html"; import InterRegular from "$lib/server/fonts/Inter-Regular.ttf"; import InterBold from "$lib/server/fonts/Inter-Bold.ttf"; import { base } from "$app/paths"; import { models } from "$lib/server/models"; import { render } from "svelte/server"; export const GET: RequestHandler = (async ({ params }) => { const model = models.find(({ id }) => id === params.model); if (!model || model.unlisted) { redirect(302, `${base}/`); } const renderedComponent = render(ModelThumbnail, { props: { name: model.name, logoUrl: model.logoUrl, }, }); const reactLike = html("<style>" + renderedComponent.head + "</style>" + renderedComponent.body); const svg = await satori(reactLike, { width: 1200, height: 648, fonts: [ { name: "Inter", data: InterRegular as unknown as ArrayBuffer, weight: 500, }, { name: "Inter", data: InterBold as unknown as ArrayBuffer, weight: 700, }, ], }); const png = new Resvg(svg, { fitTo: { mode: "original" }, }) .render() .asPng(); return new Response(png, { headers: { "Content-Type": "image/png", }, }); }) satisfies RequestHandler;
chat-ui/src/routes/models/[...model]/thumbnail.png/+server.ts/0
{ "file_path": "chat-ui/src/routes/models/[...model]/thumbnail.png/+server.ts", "repo_id": "chat-ui", "token_count": 516 }
94
<script lang="ts"> import { base } from "$app/paths"; import { afterNavigate, goto } from "$app/navigation"; import { useSettingsStore } from "$lib/stores/settings"; import CarbonCheckmark from "~icons/carbon/checkmark"; import Modal from "$lib/components/Modal.svelte"; interface Props { children?: import("svelte").Snippet; } let { children }: Props = $props(); let previousPage: string = $state(base || "/"); afterNavigate(({ from }) => { if (from?.url && !from.url.pathname.includes("settings")) { previousPage = from.url.toString() || previousPage || base || "/"; } }); const settings = useSettingsStore(); </script> <Modal on:close={() => goto(previousPage)} width="h-[95dvh] w-[90dvw] pb-0 overflow-hidden rounded-2xl bg-white shadow-2xl outline-none sm:h-[95dvh] xl:w-[1200px] 2xl:h-[75dvh]" > {@render children?.()} {#if $settings.recentlySaved} <div class="absolute bottom-4 right-4 m-2 flex items-center gap-1.5 rounded-full border border-gray-300 bg-gray-200 px-3 py-1 text-black" > <CarbonCheckmark class="text-green-500" /> Saved </div> {/if} </Modal>
chat-ui/src/routes/settings/+layout.svelte/0
{ "file_path": "chat-ui/src/routes/settings/+layout.svelte", "repo_id": "chat-ui", "token_count": 433 }
95
{ "license": "Apache-2.0", "creators": [ { "affiliation": "Hugging Face", "name": "Quentin Lhoest" }, { "orcid": "0000-0003-1727-1045", "affiliation": "Hugging Face", "name": "Albert Villanova del Moral" }, { "affiliation": "Hugging Face", "name": "Patrick von Platen" }, { "affiliation": "Hugging Face", "name": "Thomas Wolf" }, { "affiliation": "Hugging Face", "name": "Mario Šaško" }, { "affiliation": "Hugging Face", "name": "Yacine Jernite" }, { "affiliation": "Hugging Face", "name": "Abhishek Thakur" }, { "affiliation": "Hugging Face", "name": "Lewis Tunstall" }, { "affiliation": "Hugging Face", "name": "Suraj Patil" }, { "affiliation": "Hugging Face", "name": "Mariama Drame" }, { "affiliation": "Hugging Face", "name": "Julien Chaumond" }, { "affiliation": "Hugging Face", "name": "Julien Plu" }, { "affiliation": "Hugging Face", "name": "Joe Davison" }, { "affiliation": "Hugging Face", "name": "Simon Brandeis" }, { "affiliation": "Hugging Face", "name": "Victor Sanh" }, { "affiliation": "Hugging Face", "name": "Teven Le Scao" }, { "affiliation": "Hugging Face", "name": "Kevin Canwen Xu" }, { "affiliation": "Hugging Face", "name": "Nicolas Patry" }, { "affiliation": "Hugging Face", "name": "Steven Liu" }, { "affiliation": "Hugging Face", "name": "Angelina McMillan-Major" }, { "affiliation": "Hugging Face", "name": "Philipp Schmid" }, { "affiliation": "Hugging Face", "name": "Sylvain Gugger" }, { "affiliation": "Hugging Face", "name": "Nathan Raw" }, { "affiliation": "Hugging Face", "name": "Sylvain Lesage" }, { "affiliation": "Hugging Face", "name": "Anton Lozhkov" }, { "affiliation": "Hugging Face", "name": "Matthew Carrigan" }, { "affiliation": "Hugging Face", "name": "Th\u00e9o Matussi\u00e8re" }, { "affiliation": "Hugging Face", "name": "Leandro von Werra" }, { "affiliation": "Hugging Face", "name": "Lysandre Debut" }, { "affiliation": "Hugging Face", "name": "Stas Bekman" }, { "affiliation": "Hugging Face", "name": "Cl\u00e9ment Delangue" } ] }
datasets/.zenodo.json/0
{ "file_path": "datasets/.zenodo.json", "repo_id": "datasets", "token_count": 1953 }
96
# Differences between Dataset and IterableDataset There are two types of dataset objects, a [`Dataset`] and an [`IterableDataset`]. Whichever type of dataset you choose to use or create depends on the size of the dataset. In general, an [`IterableDataset`] is ideal for big datasets (think hundreds of GBs!) due to its lazy behavior and speed advantages, while a [`Dataset`] is great for everything else. This page will compare the differences between a [`Dataset`] and an [`IterableDataset`] to help you pick the right dataset object for you. ## Downloading and streaming When you have a regular [`Dataset`], you can access it using `my_dataset[0]`. This provides random access to the rows. Such datasets are also called "map-style" datasets. For example you can download ImageNet-1k like this and access any row: ```python from datasets import load_dataset imagenet = load_dataset("timm/imagenet-1k-wds", split="train") # downloads the full dataset print(imagenet[0]) ``` But one caveat is that you must have the entire dataset stored on your disk or in memory, which blocks you from accessing datasets bigger than the disk. Because it can become inconvenient for big datasets, there exists another type of dataset, the [`IterableDataset`]. When you have an `IterableDataset`, you can access it using a `for` loop to load the data progressively as you iterate over the dataset. This way, only a small fraction of examples is loaded in memory, and you don't write anything on disk. For example, you can stream the ImageNet-1k dataset without downloading it on disk: ```python from datasets import load_dataset imagenet = load_dataset("timm/imagenet-1k-wds", split="train", streaming=True) # will start loading the data when iterated over for example in imagenet: print(example) break ``` Streaming can read online data without writing any file to disk. For example, you can stream datasets made out of multiple shards, each of which is hundreds of gigabytes like [C4](https://huggingface.co/datasets/c4) or [LAION-2B](https://huggingface.co/datasets/laion/laion2B-en). Learn more about how to stream a dataset in the [Dataset Streaming Guide](./stream). This is not the only difference though, because the "lazy" behavior of an `IterableDataset` is also present when it comes to dataset creation and processing. ## Creating map-style datasets and iterable datasets You can create a [`Dataset`] using lists or dictionaries, and the data is entirely converted to Arrow so you can easily access any row: ```python my_dataset = Dataset.from_dict({"col_1": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]}) print(my_dataset[0]) ``` To create an `IterableDataset` on the other hand, you must provide a "lazy" way to load the data. In Python, we generally use generator functions. These functions `yield` one example at a time, which means you can't access a row by slicing it like a regular `Dataset`: ```python def my_generator(n): for i in range(n): yield {"col_1": i} my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs={"n": 10}) for example in my_iterable_dataset: print(example) break ``` ## Loading local files entirely and progressively It is possible to convert local or remote data files to an Arrow [`Dataset`] using [`load_dataset`]: ```python data_files = {"train": ["path/to/data.csv"]} my_dataset = load_dataset("csv", data_files=data_files, split="train") print(my_dataset[0]) ``` However, this requires a conversion step from CSV to Arrow format, which takes time and disk space if your dataset is big. 
To save disk space and skip the conversion step, you can define an `IterableDataset` by streaming from the local files directly. This way, the data is read progressively from the local files as you iterate over the dataset: ```python data_files = {"train": ["path/to/data.csv"]} my_iterable_dataset = load_dataset("csv", data_files=data_files, split="train", streaming=True) for example in my_iterable_dataset: # this reads the CSV file progressively as you iterate over the dataset print(example) break ``` Many file formats are supported, like CSV, JSONL, and Parquet, as well as image and audio files. You can find more information in the corresponding guides for loading [tabular](./tabular_load), [text](./nlp_load), [vision](./image_load), and [audio](./audio_load) datasets. ## Eager data processing and lazy data processing When you process a [`Dataset`] object using [`Dataset.map`], the entire dataset is processed immediately and returned. This is similar to how `pandas` works, for example. ```python my_dataset = my_dataset.map(process_fn) # process_fn is applied on all the examples of the dataset print(my_dataset[0]) ``` On the other hand, due to the "lazy" nature of an `IterableDataset`, calling [`IterableDataset.map`] does not apply your `map` function over the full dataset. Instead, your `map` function is applied on-the-fly. Because of that, you can chain multiple processing steps and they will all run at once when you start iterating over the dataset: ```python my_iterable_dataset = my_iterable_dataset.map(process_fn_1) my_iterable_dataset = my_iterable_dataset.filter(filter_fn) my_iterable_dataset = my_iterable_dataset.map(process_fn_2) # process_fn_1, filter_fn and process_fn_2 are applied on-the-fly when iterating over the dataset for example in my_iterable_dataset: print(example) break ``` ## Exact and fast approximate shuffling When you shuffle a [`Dataset`] using [`Dataset.shuffle`], you apply an exact shuffling of the dataset. It works by taking a list of indices `[0, 1, 2, ... len(my_dataset) - 1]` and shuffling this list. Then, accessing `my_dataset[0]` returns the row pointed to by the first element of the shuffled indices mapping: ```python my_dataset = my_dataset.shuffle(seed=42) print(my_dataset[0]) ``` Since we don't have random access to the rows in the case of an `IterableDataset`, we can't use a shuffled list of indices and access a row at an arbitrary position. This prevents the use of exact shuffling. Instead, a fast approximate shuffling is used in [`IterableDataset.shuffle`]. It uses a shuffle buffer to sample random examples iteratively from the dataset. Since the dataset is still read iteratively, it provides excellent speed performance: ```python my_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100) for example in my_iterable_dataset: print(example) break ``` But using a shuffle buffer is not enough to provide a satisfactory shuffling for machine learning model training. 
So [`IterableDataset.shuffle`] also shuffles the dataset shards if your dataset is made of multiple files or sources: ```python # Stream from the internet my_iterable_dataset = load_dataset("deepmind/code_contests", split="train", streaming=True) my_iterable_dataset.num_shards # 39 # Stream from local files data_files = {"train": [f"path/to/data_{i}.csv" for i in range(1024)]} my_iterable_dataset = load_dataset("csv", data_files=data_files, split="train", streaming=True) my_iterable_dataset.num_shards # 1024 # From a generator function def my_generator(n, sources): for source in sources: for example_id_for_current_source in range(n): yield {"example_id": f"{source}_{example_id_for_current_source}"} gen_kwargs = {"n": 10, "sources": [f"path/to/data_{i}" for i in range(1024)]} my_iterable_dataset = IterableDataset.from_generator(my_generator, gen_kwargs=gen_kwargs) my_iterable_dataset.num_shards # 1024 ``` ## Speed differences Regular [`Dataset`] objects are based on Arrow, which provides fast random access to the rows. Thanks to memory mapping and the fact that Arrow is an in-memory format, reading data from disk doesn't do expensive system calls and deserialization. It provides even faster data loading when iterating using a `for` loop by iterating on contiguous Arrow record batches. However, as soon as your [`Dataset`] has an indices mapping (via [`Dataset.shuffle`] for example), the speed can become 10x slower. This is because there is an extra step to get the row index to read using the indices mapping, and most importantly, you aren't reading contiguous chunks of data anymore. To restore the speed, you'd need to rewrite the entire dataset on your disk again using [`Dataset.flatten_indices`], which removes the indices mapping. This may take a lot of time depending on the size of your dataset though: ```python my_dataset[0] # fast my_dataset = my_dataset.shuffle(seed=42) my_dataset[0] # up to 10x slower my_dataset = my_dataset.flatten_indices() # rewrite the shuffled dataset on disk as contiguous chunks of data my_dataset[0] # fast again ``` In this case, we recommend switching to an [`IterableDataset`] and leveraging its fast approximate shuffling method [`IterableDataset.shuffle`]. It only shuffles the shard order and adds a shuffle buffer to your dataset, which keeps the speed of your dataset optimal. You can also reshuffle the dataset easily: ```python for example in my_iterable_dataset: # fast pass shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=42, buffer_size=100) for example in shuffled_iterable_dataset: # as fast as before pass shuffled_iterable_dataset = my_iterable_dataset.shuffle(seed=1337, buffer_size=100) # reshuffling using another seed is instantaneous for example in shuffled_iterable_dataset: # still as fast as before pass ``` If you're using your dataset for multiple epochs, the effective seed used to shuffle the shard order in the shuffle buffer is `seed + epoch`. 
It makes it easy to reshuffle a dataset between epochs: ```python for epoch in range(n_epochs): my_iterable_dataset.set_epoch(epoch) for example in my_iterable_dataset: # fast + reshuffled at each epoch using `effective_seed = seed + epoch` pass ``` To restart the iteration of a map-style dataset, you can simply skip the first examples: ```python my_dataset = my_dataset.select(range(start_index, len(my_dataset))) ``` But if you use a `DataLoader` with a `Sampler`, you should instead save the state of your sampler (you might have written a custom sampler that allows resuming). On the other hand, iterable datasets don't provide random access to a specific example index to resume from. But you can use [`IterableDataset.state_dict`] and [`IterableDataset.load_state_dict`] to resume from a checkpoint instead, similarly to what you can do for models and optimizers: ```python >>> iterable_dataset = Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3) >>> # save in the middle of training >>> state_dict = iterable_dataset.state_dict() >>> # and resume later >>> iterable_dataset.load_state_dict(state_dict) ``` Under the hood, the iterable dataset keeps track of the current shard being read and the example index in the current shard, and it stores this info in the `state_dict`. To resume from a checkpoint, the dataset skips all the shards that were previously read to restart from the current shard. Then it reads the shard and skips examples until it reaches the exact example from the checkpoint. Therefore restarting a dataset is quite fast, since it will not re-read the shards that have already been iterated on. Still, resuming a dataset is generally not instantaneous since it has to restart reading from the beginning of the current shard and skip examples until it reaches the checkpoint location. This can be used with the `StatefulDataLoader` from `torchdata`, see [streaming with a PyTorch DataLoader](./use_with_pytorch#stream-data). ## Switch from map-style to iterable If you want to benefit from the "lazy" behavior of an [`IterableDataset`] or its speed advantages, you can switch your map-style [`Dataset`] to an [`IterableDataset`]: ```python my_iterable_dataset = my_dataset.to_iterable_dataset() ``` If you want to shuffle your dataset or [use it with a PyTorch DataLoader](./use_with_pytorch#stream-data), we recommend generating a sharded [`IterableDataset`]: ```python my_iterable_dataset = my_dataset.to_iterable_dataset(num_shards=1024) my_iterable_dataset.num_shards # 1024 ```
datasets/docs/source/about_mapstyle_vs_iterable.mdx/0
{ "file_path": "datasets/docs/source/about_mapstyle_vs_iterable.mdx", "repo_id": "datasets", "token_count": 3723 }
97
# Create an image dataset There are two methods for creating and sharing an image dataset. This guide will show you how to: * Create an image dataset from local files in python with [`Dataset.push_to_hub`]. This is an easy way that requires only a few steps in python. * Create an image dataset with `ImageFolder` and some metadata. This is a no-code solution for quickly creating an image dataset with several thousand images. <Tip> You can control access to your dataset by requiring users to share their contact information first. Check out the [Gated datasets](https://huggingface.co/docs/hub/datasets-gated) guide for more information about how to enable this feature on the Hub. </Tip> ## ImageFolder The `ImageFolder` is a dataset builder designed to quickly load an image dataset with several thousand images without requiring you to write any code. <Tip> 💡 Take a look at the [Split pattern hierarchy](repository_structure#split-pattern-hierarchy) to learn more about how `ImageFolder` creates dataset splits based on your dataset repository structure. </Tip> `ImageFolder` automatically infers the class labels of your dataset based on the directory name. Store your dataset in a directory structure like: ``` folder/train/dog/golden_retriever.png folder/train/dog/german_shepherd.png folder/train/dog/chihuahua.png folder/train/cat/maine_coon.png folder/train/cat/bengal.png folder/train/cat/birman.png ``` If the dataset follows the `ImageFolder` structure, then you can load it directly with [`load_dataset`]: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("path/to/folder") ``` This is equivalent to passing `imagefolder` manually in [`load_dataset`] and the directory in `data_dir`: ```py >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder") ``` You can also use `imagefolder` to load datasets involving multiple splits. To do so, your dataset directory should have the following structure: ``` folder/train/dog/golden_retriever.png folder/train/cat/maine_coon.png folder/test/dog/german_shepherd.png folder/test/cat/bengal.png ``` <Tip warning={true}> If all image files are contained in a single directory or if they are not on the same level of directory structure, `label` column won't be added automatically. If you need it, set `drop_labels=False` explicitly. </Tip> If there is additional information you'd like to include about your dataset, like text captions or bounding boxes, add it as a `metadata.csv` file in your folder. This lets you quickly create datasets for different computer vision tasks like text captioning or object detection. You can also use a JSONL file `metadata.jsonl` or a Parquet file `metadata.parquet`. 
``` folder/train/metadata.csv folder/train/0001.png folder/train/0002.png folder/train/0003.png ``` You can also zip your images, and in this case each zip should contain both the images and the metadata: ``` folder/train.zip folder/test.zip folder/validation.zip ``` Your `metadata.csv` file must have a `file_name` or `*_file_name` field which links image files with their metadata: ```csv file_name,additional_feature 0001.png,This is a first value of a text feature you added to your images 0002.png,This is a second value of a text feature you added to your images 0003.png,This is a third value of a text feature you added to your images ``` or using `metadata.jsonl`: ```jsonl {"file_name": "0001.png", "additional_feature": "This is a first value of a text feature you added to your images"} {"file_name": "0002.png", "additional_feature": "This is a second value of a text feature you added to your images"} {"file_name": "0003.png", "additional_feature": "This is a third value of a text feature you added to your images"} ``` Here the `file_name` must be the name of the image file next to the metadata file. More generally, it must be the relative path from the directory containing the metadata to the image file. It's possible to point to more than one image in each row in your dataset, for example if both your input and output are images: ```jsonl {"input_file_name": "0001.png", "output_file_name": "0001_output.png"} {"input_file_name": "0002.png", "output_file_name": "0002_output.png"} {"input_file_name": "0003.png", "output_file_name": "0003_output.png"} ``` You can also define lists of images. In that case you need to name the field `file_names` or `*_file_names`. Here is an example: ```jsonl {"frames_file_names": ["0001_t0.png", "0001_t1.png"], "label": "moving_up"} {"frames_file_names": ["0002_t0.png", "0002_t1.png"], "label": "moving_down"} {"frames_file_names": ["0003_t0.png", "0003_t1.png"], "label": "moving_right"} ``` ### Image captioning Image captioning datasets have text describing an image. An example `metadata.csv` may look like: ```csv file_name,text 0001.png,This is a golden retriever playing with a ball 0002.png,A german shepherd 0003.png,One chihuahua ``` Load the dataset with `ImageFolder`, and it will create a `text` column for the image captions: ```py >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder", split="train") >>> dataset[0]["text"] "This is a golden retriever playing with a ball" ``` ### Object detection Object detection datasets have bounding boxes and categories identifying objects in an image. An example `metadata.jsonl` may look like: ```jsonl {"file_name": "0001.png", "objects": {"bbox": [[302.0, 109.0, 73.0, 52.0]], "categories": [0]}} {"file_name": "0002.png", "objects": {"bbox": [[810.0, 100.0, 57.0, 28.0]], "categories": [1]}} {"file_name": "0003.png", "objects": {"bbox": [[160.0, 31.0, 248.0, 616.0], [741.0, 68.0, 202.0, 401.0]], "categories": [2, 2]}} ``` Load the dataset with `ImageFolder`, and it will create an `objects` column with the bounding boxes and the categories: ```py >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder", split="train") >>> dataset[0]["objects"] {"bbox": [[302.0, 109.0, 73.0, 52.0]], "categories": [0]} ``` ### Upload dataset to the Hub Once you've created a dataset, you can share it to the Hub with the [`~datasets.DatasetDict.push_to_hub`] method. 
Make sure you have the [huggingface_hub](https://huggingface.co/docs/huggingface_hub/index) library installed and you're logged in to your Hugging Face account (see the [Upload with Python tutorial](upload_dataset#upload-with-python) for more details). Upload your dataset with [`~datasets.DatasetDict.push_to_hub`]: ```py >>> from datasets import load_dataset >>> dataset = load_dataset("imagefolder", data_dir="/path/to/folder", split="train") >>> dataset.push_to_hub("stevhliu/my-image-captioning-dataset") ``` ## WebDataset The [WebDataset](https://github.com/webdataset/webdataset) format is based on TAR archives and is suitable for big image datasets. Indeed you can group your images in TAR archives (e.g. 1GB of images per TAR archive) and have thousands of TAR archives: ``` folder/train/00000.tar folder/train/00001.tar folder/train/00002.tar ... ``` In the archives, each example is made of files sharing the same prefix: ``` e39871fd9fd74f55.jpg e39871fd9fd74f55.json f18b91585c4d3f3e.jpg f18b91585c4d3f3e.json ede6e66b2fb59aab.jpg ede6e66b2fb59aab.json ed600d57fcee4f94.jpg ed600d57fcee4f94.json ... ``` You can provide your images' labels/captions/bounding boxes using JSON or text files, for example. Load your WebDataset and it will create one column per file suffix (here "jpg" and "json"): ```python >>> from datasets import load_dataset >>> dataset = load_dataset("webdataset", data_dir="/path/to/folder", split="train") >>> dataset[0]["json"] {"bbox": [[302.0, 109.0, 73.0, 52.0]], "categories": [0]} ``` It's also possible to have several images per example like this: ``` e39871fd9fd74f55.input.jpg e39871fd9fd74f55.output.jpg e39871fd9fd74f55.json f18b91585c4d3f3e.input.jpg f18b91585c4d3f3e.output.jpg f18b91585c4d3f3e.json ... ``` For more details on the WebDataset format and the Python library, please check the [WebDataset documentation](https://webdataset.github.io/webdataset).
datasets/docs/source/image_dataset.mdx/0
{ "file_path": "datasets/docs/source/image_dataset.mdx", "repo_id": "datasets", "token_count": 2592 }
98
# Utilities ## Configure logging 🤗 Datasets strives to be transparent and explicit about how it works, but this can be quite verbose at times. We have included a series of logging methods which allow you to easily adjust the level of verbosity of the entire library. Currently the default verbosity of the library is set to `WARNING`. To change the level of verbosity, use one of the direct setters. For instance, here is how to change the verbosity to the `INFO` level: ```py import datasets datasets.logging.set_verbosity_info() ``` You can also use the environment variable `DATASETS_VERBOSITY` to override the default verbosity, and set it to one of the following: `debug`, `info`, `warning`, `error`, `critical`: ```bash DATASETS_VERBOSITY=error ./myprogram.py ``` All the methods of this logging module are documented below. The main ones are: - [`logging.get_verbosity`] to get the current level of verbosity in the logger - [`logging.set_verbosity`] to set the verbosity to the level of your choice In order from the least to the most verbose (with their corresponding `int` values): 1. `logging.CRITICAL` or `logging.FATAL` (int value, 50): only report the most critical errors. 2. `logging.ERROR` (int value, 40): only report errors. 3. `logging.WARNING` or `logging.WARN` (int value, 30): only report errors and warnings. This is the default level used by the library. 4. `logging.INFO` (int value, 20): report errors, warnings, and basic information. 5. `logging.DEBUG` (int value, 10): report all information. [[autodoc]] datasets.logging.get_verbosity [[autodoc]] datasets.logging.set_verbosity [[autodoc]] datasets.logging.set_verbosity_info [[autodoc]] datasets.logging.set_verbosity_warning [[autodoc]] datasets.logging.set_verbosity_debug [[autodoc]] datasets.logging.set_verbosity_error [[autodoc]] datasets.logging.disable_propagation [[autodoc]] datasets.logging.enable_propagation ## Configure progress bars By default, `tqdm` progress bars will be displayed during dataset download and preprocessing. You can disable them globally by setting the `HF_DATASETS_DISABLE_PROGRESS_BARS` environment variable. You can also enable/disable them using [`~utils.enable_progress_bars`] and [`~utils.disable_progress_bars`]. If set, the environment variable has priority over the helpers. [[autodoc]] datasets.utils.enable_progress_bars [[autodoc]] datasets.utils.disable_progress_bars [[autodoc]] datasets.utils.are_progress_bars_disabled
datasets/docs/source/package_reference/utilities.mdx/0
{ "file_path": "datasets/docs/source/package_reference/utilities.mdx", "repo_id": "datasets", "token_count": 725 }
99