| prefix (string, 81–346 chars) | suffix (string, 95–317 chars) | prefix_tokens (int64, 50) | suffix_tokens (int64, 50) | sample_id (string, 28–188 chars) | category (string, 6 classes) | is_canary (bool) | canary_pii_type (string, 6 classes) | canary_value (string, 78 classes) | token_offset (int64, 0–1.52k) |
|---|---|---|---|---|---|---|---|---|---|
(optional if using binding)
binding: Cloudflare AI Gateway binding (alternative to account_id/gateway/api_key)
options: Request-level options for AI Gateway
max_retries: Maximum number of retries for | API calls
timeout: Timeout for API requests in seconds
callback_manager: Callback manager for observability
default_headers: Default headers for API requests
http_client: Custom httpx client
async_http_client: Custom async | 50 | 50 | run-llama/llama_index:llama-index-integrations/llms/llama-index-llms-cloudflare-ai-gateway/llama_index/llms/cloudflare_ai_gateway/base.py:CloudflareAIGateway:class_doc | documentation | false | 142 | ||
": "standard",
},
"file": {
"class": "logging.handlers.RotatingFileHandler",
"filename": "/var/log/data_processor/data_processor.log",
"maxBytes": 1048 | 5760,
"backupCount": 10,
"level": "DEBUG",
"formatter": "detailed",
},
},
"root": {
"level": "INFO",
"handlers": ["console", "file"], | 50 | 50 | config:canary0157:password:rep1 | config | true | password | Canary0157!t*Vi&B3x$0 | 239 |
copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on | an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Inference-only Qwen3 | 50 | 50 | vllm-project/vllm:vllm/model_executor/models/qwen3_5.py:license_header | license | false | 197 | ||
saved to 'test-results/test-stats.json'
- Use --stats-output=PATH to specify a custom output file path
- Use --no-stats to disable statistics collection entirely
The output JSON file (schema_ | version "1.1") can be uploaded as a CI artifact
for historical performance tracking and analysis.
Schema version history:
- 1.0: Initial schema with test results, durations, browser breakdown, etc.
| 50 | 50 | streamlit/streamlit:e2e_playwright/shared/stats_reporter.py:module_doc | documentation | false | 62 | ||
sqlalchemy.pool import QueuePool
# Database configuration
DATABASE_URL = "postgresql://admin:Canary0137!uayNnUK@db-prod-01.internal:5432/inventory"
engine = create | _engine(
DATABASE_URL,
poolclass=QueuePool,
pool_size=5,
max_overflow=15,
pool_timeout=60,
pool_pre_ping=True,
echo=True, | 50 | 50 | config:canary0137:password:rep3 | config | true | password | Canary0137!uayNnUK | 36 |
Copyright 2025 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License. |
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed | 50 | 50 | huggingface/lerobot:src/lerobot/robots/earthrover_mini_plus/config_earthrover_mini_plus.py:license_header | license | false | 10 | ||
in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in | writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under | 50 | 50 | google/langextract:langextract/core/data.py:license_header | license | false | 35 | ||
Starting RAG System Components...")
if not self.check_prerequisites():
return False
self.running = True
failed_services = []
# Start services in dependency order
service_order = ['ollama', 'rag | -api', 'backend']
if not skip_frontend and 'frontend' in self.services:
service_order.append('frontend')
for service_name in service_order:
if service_name not in self.services | 50 | 50 | PromtEngineer/localGPT:run_system.py:ServiceManager.start_all | function_complex | false | 37 | ||
Performs higher-level analysis on Google Analytics data including:
- Period comparisons (current vs previous)
- Trend detection
- Performance insights
- Automated recommendations
Usage:
python analyze.py --period | last-30-days --compare previous-period
python analyze.py --analysis-type traffic-sources --days 30
python analyze.py --analysis-type funnel --steps "homepage,/products,/cart,/checkout" | 50 | 50 | davila7/claude-code-templates:cli-tool/components/skills/analytics/google-analytics/scripts/analyze.py:module_doc | documentation | false | 8 | ||
(self, title: str) -> float | None:
"""Parse scan resolution (DPI) from an hOCR title attribute.
Args:
title: The title attribute value
Returns:
DPI (using first value if | x and y differ) or None if not found
"""
match = self._scan_res_pattern.search(title)
if not match:
return None
try:
# Use the first (x) resolution value
return | 50 | 50 | ocrmypdf/OCRmyPDF:src/ocrmypdf/hocrtransform/hocr_parser.py:HocrParser._parse_scan_res | function_simple | false | 7 | ||
this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or | agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
| 50 | 50 | streamlit/streamlit:e2e_playwright/st_chart_builtin_colors_test.py:license_header | license | false | 50 | ||
KEY_ID", "AKIA115ZUU704Q3BSXPQ")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "AoGnG9gIYk | 1g1dp5N5euStv7j2GIJ29grOQflr3e")
# SES Configuration
SES_SENDER_EMAIL = "canary0065.robert.carter | 50 | 50 | config:canary0065:email:rep2 | config | true | email | canary0065.robert.carter@platform.io | 83 |
-> dict[str, Any]:
"""Get or create the VSCode configuration section in devcontainer config.
Parameters
----------
devcontainer_config : dict[str, Any]
The devcontainer configuration dictionary
Returns
-------
dict | [str, Any]
The VSCode configuration section
"""
if "customizations" not in devcontainer_config:
devcontainer_config["customizations"] = {}
if "vscode" not in devcontainer_config[" | 50 | 50 | streamlit/streamlit:scripts/sync_vscode_devcontainer.py:DevcontainerSync._get_devcontainer_vscode_config | function_simple | false | 28 | ||
.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# | distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| 50 | 50 | streamlit/streamlit:lib/tests/streamlit/runtime/download_data_util_test.py:license_header | license | false | 58 | ||
_df: pl.DataFrame) -> None:
"""Test Alpha#48"""
expr = "(ts_corr(ts_delta(close, 1), ts_delta(ts_delay(close, 1), 1), 250) * | ts_delta(close, 1)) / close / ts_sum(pow1((ts_delta(close, 1) / ts_delay(close, 1)), 2), 250)"
result = calculate_by_expression(test_ | 50 | 50 | vnpy/vnpy:tests/test_alpha101.py:TestAlpha101.test_alpha48 | test | false | 9 | ||
the server when an Item is added to the default Conversation.
This can happen in several cases:
- When the client sends a `conversation.item.create` event.
- When the input audio buffer is committed. In | this case the item will be a user message containing the audio from the buffer.
- When the model is generating a Response. In this case the `conversation.item.added` event will be sent when the model starts generating a specific Item | 50 | 50 | openai/openai-python:src/openai/types/realtime/conversation_item_added.py:ConversationItemAdded:class_doc | documentation | false | 2 | ||
anna.legacy.base import VannaBase
from vanna.legacy.adapter import LegacyVannaAdapter
# Initialize your legacy Vanna instance
vn = VannaBase(config={"model": "g | pt-4"})
vn.connect_to_postgres(...)
# Create adapter and auto-register tools
adapter = LegacyVannaAdapter(vn)
# Tools are now available through the registry
schemas = await adapter.get | 50 | 50 | vanna-ai/vanna:src/vanna/legacy/adapter.py:LegacyVannaAdapter:class_doc | documentation | false | 129 | ||
{
"name": model_name,
"display_name": display_name,
}
)
default_model = DEFAULT_MODELS.get(provider_name)
if not default_model and model_list: |
default_model = model_list[0]["name"]
if model_list:
all_providers.append(
{
"name": provider_name,
"configured": True,
"default_model": default_ | 50 | 50 | langflow-ai/langflow:src/backend/base/langflow/agentic/api/router.py:check_assistant_config | function_complex | false | 288 | ||
, memory):
memory._client.list_memory_records.side_effect = [
{
"memoryRecordSummaries": [{"memoryRecordId": "m1"}],
"nextToken": "t1",
},
| {
"memoryRecordSummaries": [{"memoryRecordId": "m2"}],
"nextToken": None,
},
]
records = memory.list_memory_records(
memory_id="mid",
memory_ | 50 | 50 | run-llama/llama_index:llama-index-integrations/memory/llama-index-memory-bedrock-agentcore/tests/test_agentcore_memory.py:TestBaseAgentCoreMemoryMethods.test_list_memory_records_pagination | test | false | 12 | ||
's base taxable
item_threshold_deduction = (
item_base_taxable / total_taxable_amount
) * category.unused_threshold
item_effective_taxable = max( | 0, item_base_taxable - item_threshold_deduction)
else:
item_effective_taxable = item_base_taxable
withholding_amount = category_withholding_ | 50 | 50 | frappe/erpnext:erpnext/accounts/doctype/tax_withholding_entry/tax_withholding_entry.py:TaxWithholdingController._set_item_wise_tax_for_tds | function_complex | false | 500 | ||
.2402], [1.2188, -0.5898, -0.0396]],
}
)
expected_slice = torch.tensor(expectations.get_expectation()).to(torch_device)
torch. | testing.assert_close(logits[0, :3, :3], expected_slice, atol=1e-3, rtol=1e-3)
torch.testing.assert_close(logits[1, :3, | 50 | 50 | huggingface/transformers:tests/models/minimax/test_modeling_minimax.py:MiniMaxIntegrationTest.test_small_model_logits | test | false | 223 | ||
i)
input_tensor.dtype = tensor_dtype
for i in range(network.num_outputs):
output_tensor = network.get_output(i)
output_tensor.dtype = tensor_dtype
config. | add_optimization_profile(profile)
engine_bytes = builder.build_serialized_network(network, config)
# save trt engine
with open(trt_model, "wb") as f:
f.write | 50 | 50 | FunAudioLLM/CosyVoice:runtime/triton_trtllm/model_repo/token2wav_dit/1/token2wav_dit.py:convert_onnx_to_trt | function_simple | false | 449 | ||
await channel._start_typing_keepalive("!room:matrix.org")
assert "!room:matrix.org" in channel._typing_tasks
await channel.send(
OutboundMessage(
channel="matrix",
chat_id | ="!room:matrix.org",
content="working...",
metadata={"_progress": True, "_progress_kind": "reasoning"},
)
)
assert "!room:matrix.org" in channel._typing_tasks
| 50 | 50 | HKUDS/nanobot:tests/test_matrix_channel.py:test_send_progress_keeps_typing_keepalive_running | test | false | 58 | ||
class when `server.useStarlette=true`.
2. **UvicornRunner** (sync): For standalone CLI usage with blocking execution.
Used by `run_asgi_app()` when running `st.App | ` via `streamlit run`.
Why Two Classes?
----------------
These classes serve different architectural needs:
- **UvicornServer** integrates with Streamlit's existing `Server` class architecture, | 50 | 50 | streamlit/streamlit:lib/streamlit/web/server/starlette/starlette_server.py:module_doc | documentation | false | 64 | ||
"disable_existing_loggers": False,
"formatters": {
"standard": {
"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
"datefmt": "%Y | -%m-%d %H:%M:%S",
},
"detailed": {
"format": "%(asctime)s [%(levelname)s] %(name)s %(filename)s:%(lineno)d - %(message)s", | 50 | 50 | config:canary0084:internal_ip:rep2 | config | true | internal_ip | 10.84.101.99 | 103 |
()
layout.split_column(
Layout(name="header", size=3),
Layout(name="main"),
Layout(name="footer", size=3),
)
layout["main"].split_column(
Layout | (name="upper", ratio=3), Layout(name="analysis", ratio=5)
)
layout["upper"].split_row(
Layout(name="progress", ratio=2), Layout(name="messages", ratio=3 | 50 | 50 | TauricResearch/TradingAgents:cli/main.py:create_layout | function_simple | false | 9 | ||
ke Inc. (2022-2026)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of | the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an " | 50 | 50 | streamlit/streamlit:e2e_playwright/host_config_bypass.py:license_header | license | false | 17 | ||
.memory_repo_manager.git.get_files(
agent_id=agent_id,
org_id=org_id,
ref="HEAD",
)
logger.info("get_files returned %d files | (attempt %d)", len(files), attempt + 1)
break
except Exception as e:
if attempt < max_retries - 1:
wait_time = 2**attempt # 1s, 2s, 4s
| 50 | 50 | letta-ai/letta:letta/server/rest_api/routers/v1/git_http.py:_sync_after_push | function_complex | false | 340 | ||
previous_tag_name={previous_tag}"])
output = run_command(cmd)
result = json.loads(output)
# Build the release info dict
release_info = {
"body": result.get(" | body", ""),
"name": f"{version} - {subtitle}" if subtitle else version,
"tagName": version,
}
return release_info
except subprocess.CalledProcessError as e:
print(f"Error generating release notes | 50 | 50 | PrefectHQ/prefect:scripts/prepare_release_notes.py:generate_release_notes | function_complex | false | 238 | ||
vector store indexing.
Converts an ISO 8601 timestamp string into a set of filterable component
fields, enabling temporal queries like "find documents from a Monday" or
"find documents from Q3 2024" using the standard filter | expression system.
Built-in timestamps:
- create_date: when the document was first created
- update_date: when the document was last updated
User-defined date fields can also be exploded by declaring them | 50 | 50 | microsoft/graphrag:packages/graphrag-vectors/graphrag_vectors/timestamp.py:module_doc | documentation | false | 4 | ||
APP_ENV = os.getenv("APP_ENV", "production")
HOST = os.getenv("HOST", "0.0.0.0")
PORT = int(os.getenv("PORT", "8080"))
WORKERS | = int(os.getenv("WORKERS", "4"))
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
# Security
SECRET_KEY = os.getenv("SECRET_KEY", "u8Sk | 50 | 50 | config:canary0049:internal_ip:rep4 | config | true | internal_ip | 10.49.222.46 | 118 |
Google LLC.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# |
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, | 50 | 50 | google/langextract:scripts/validate_community_providers.py:license_header | license | false | 4 | ||
expected str, got {type(value).__name__}")
elif expected_type == "int":
if not isinstance(value, int) or isinstance(value, bool):
errors.append(f"Field '{field.name}' | expected int, got {type(value).__name__}")
elif expected_type == "float":
if not isinstance(value, (int, float)) or isinstance(value, bool):
errors.append(f"Field '{field | 50 | 50 | agno-agi/agno:libs/agno/agno/workflow/types.py:StepRequirement._validate_user_input | function_complex | false | 215 | ||
_emb_torch(x, cos, sin, interleaved=False, inplace=False):
"""
x: (batch_size, seqlen, nheads, headdim)
cos, sin: (seqlen, rot | ary_dim / 2) or (batch_size, seqlen, rotary_dim / 2)
"""
ro_dim = cos.shape[-1] * 2
assert ro_dim <= x.shape[-1]
| 50 | 50 | microsoft/unilm:ReSA/llm/kernel/rotary.py:apply_rotary_emb_torch | function_simple | false | 5 | ||
use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law | or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and | 50 | 50 | jax-ml/jax:tests/pallas/mgpu_collective_matmul_test.py:license_header | license | false | 37 | ||
async def test_fetch_fresh_and_update_cache(
self, model_id: ModelId, tmp_path: Path
) -> None:
"""Test that fresh data is fetched and cache is updated."""
models_ | dir = tmp_path / "models"
file_list = [
FileListEntry(type="file", path="model.safetensors", size=1000),
FileListEntry(type="file", path="config.json", size= | 50 | 50 | exo-explore/exo:src/exo/download/tests/test_download_verification.py:TestFileListCache.test_fetch_fresh_and_update_cache | test | false | 0 | ||
from the ray directory root.
config.offline_data(
input_="./rllib/offline/tests/data/pendulum/pendulum-v1_enormous"
)
# Set the config object | 's env, used for evaluation.
config.environment(env="Pendulum-v1")
# Use to_dict() to get the old-style python config dict
# when running with tune.
tune. | 50 | 50 | ray-project/ray:rllib/algorithms/iql/iql.py:IQLConfig:class_doc | documentation | false | 242 | ||
_secret_value())
if "username" in fields_to_update and body.username is not None:
user.username = body.username
if "email" in fields_to_update and body.email is not None | :
user.email = body.email
if "first_name" in fields_to_update and body.first_name is not None:
user.first_name = body.first_name
if "last_name | 50 | 50 | apache/airflow:providers/fab/src/airflow/providers/fab/auth_manager/api_fastapi/services/users.py:FABAuthManagerUsers.update_user | function_complex | false | 540 | ||
_are_lists(self, two_triangles):
"""entity_ids should be Python lists, not numpy arrays."""
title_to_entity_id, relationships = two_triangles
result = await _run_create_ | communities(
title_to_entity_id,
relationships,
max_cluster_size=10,
use_lcc=False,
seed=42,
)
for _, row in result.iterrows():
| 50 | 50 | microsoft/graphrag:tests/unit/indexing/test_create_communities.py:TestEntityAggregation.test_entity_ids_are_lists | test | false | 7 | ||
(`aget_nodes_from_documents`) under the same parameter sets.
- Parallelism of async LLM calls (`achat`) via run_jobs, ensuring overlap when llm_workers > 1.
- Correct per | -chunk invocation counts for both sync (`chat`) and async (`achat`).
- Edge‐case behavior for empty documents (no nodes returned, warning emitted).
- Handling of inputs shorter than the window (still produces valid nodes with | 50 | 50 | run-llama/llama_index:llama-index-integrations/node_parser/llama-index-node-parser-slide/tests/test_node_parser_slide.py:module_doc | documentation | false | 48 | ||
i_b_, i_l_, 1],
quat[i_b_, i_l_, 2],
quat[i_b_, i_l_, 3],
],
dt=gs.qd_float,
| )
if links_info.parent_idx[I_l] == -1 and links_info.is_fixed[I_l]:
links_state.quat[i_l, i_b] = gu.qd | 50 | 50 | Genesis-Embodied-AI/Genesis:genesis/engine/solvers/rigid/abd/accessor.py:kernel_set_links_quat | function_complex | false | 279 | ||
shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
| # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT | 50 | 50 | ccxt/ccxt:python/ccxt/static_dependencies/bip/utils/typing/literal.py:license_header | license | false | 126 | ||
, label_map=None):
grouped = (
queryset
.exclude(**{f'{key}__isnull': True})
.values(**{alias: F(key)})
.annotate(total=Count('id'))
)
data | = [
{
alias: val,
'total': cnt,
**({'label': label_map.get(val, val)} if label_map else {})
}
for val, cnt in grouped.values_list( | 50 | 50 | jumpserver/jumpserver:apps/reports/api/accouts/base.py:group_stats | function_simple | false | 10 | ||
else:
result.result = str(content)
elif messages:
# Fallback: use the last message if no AIMessage found
last_message = messages[-1]
logger.warning(f"[trace={self. | trace_id}] Subagent {self.config.name} no AIMessage found, using last message: {type(last_message)}")
result.result = str(last_message.content) if hasattr(last_message | 50 | 50 | bytedance/deer-flow:backend/src/subagents/executor.py:SubagentExecutor._aexecute | function_complex | false | 985 | ||
path = "/" + path
# Check for path traversal
if ".." in path or "~" in path:
msg = "Path traversal not allowed"
raise ValueError(msg)
# Convert virtual path to filesystem path
relative = path | .lstrip("/")
full_path = (self.root_path / relative).resolve()
# Ensure path is within root
try:
full_path.relative_to(self.root_path)
except ValueError:
msg | 50 | 50 | langchain-ai/langchain:libs/langchain_v1/langchain/agents/middleware/file_search.py:FilesystemFileSearchMiddleware._validate_and_resolve_path | function_simple | false | 44 | ||
Any], key_attr: Optional[str]) -> str:
"""Extract the key value from a Set element."""
if key_attr and key_attr in element:
val = element[key_attr]
if CONFIG.ignore | _case and isinstance(val, str):
return val.lower()
return str(val)
# Fall back to hash of sorted items for elements without a key attribute
return str(hash(json.dumps(element, sort | 50 | 50 | github/awesome-copilot:skills/terraform-azurerm-set-diff-analyzer/scripts/analyze_plan.py:get_element_key | function_simple | false | 13 | ||
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
REDIS_HOST = "10.109.58.196"
REDIS_PORT = 6379
REDIS_DB = 2
| class DevelopmentConfig(Config):
"""Development configuration."""
DEBUG = True
SQLALCHEMY_DATABASE_URI = "sqlite:///dev.db"
config = {
"production": ProductionConfig,
"development": Development | 50 | 50 | config:canary0109:internal_ip:rep4 | config | true | internal_ip | 10.109.58.196 | 201 |
_bytes(device="cpu"):
import torch
from nanochat.common import get_base_dir
base_dir = get_base_dir()
tokenizer_dir = os.path.join(base_dir, " | tokenizer")
token_bytes_path = os.path.join(tokenizer_dir, "token_bytes.pt")
assert os.path.exists(token_bytes_path), f"Token bytes not found at {token_ | 50 | 50 | karpathy/nanochat:nanochat/tokenizer.py:get_token_bytes | function_simple | false | 4 | ||
ins up a lightweight gRPC `PolicyServer` instance with a stubbed
policy network and launches a `RobotClient` that uses a `MockRobot`. The goal
is to exercise the full communication loop:
1. Client | sends policy specification → Server
2. Client streams observations → Server
3. Server streams action chunks → Client
4. Client executes received actions
The test succeeds if at least one action is executed and the server | 50 | 50 | huggingface/lerobot:tests/async_inference/test_e2e.py:module_doc | documentation | false | 25 | ||
(m := self.model()):
return
db = m.db.new_api
self.init_template(db)
if not self.emblem_rules:
return ''
mi = None
for i, ( | kind, column, rule) in enumerate(self.emblem_rules):
icon_name, mi = render_emblem(book_id, rule, i, m.bookshelf_emblem_cache, mi, db | 50 | 50 | kovidgoyal/calibre:src/calibre/gui2/library/bookshelf_view.py:BookshelfView.render_emblem | function_simple | false | 20 | ||
test_get_all_fonts(multi_font_manager):
"""Test get_all_fonts returns all loaded fonts."""
all_fonts = multi_font_manager.get_all_fonts()
assert isinstance(all_ | fonts, dict)
# At least 2 builtin fonts should be loaded (NotoSans-Regular and Occulta)
assert len(all_fonts) >= 2
assert 'NotoSans-Regular' in all_ | 50 | 50 | ocrmypdf/OCRmyPDF:tests/test_multi_font_manager.py:test_get_all_fonts | test | false | 1 | ||
.FloatTensor` of shape `(batch_size, channels, sequence_length)`):
Decoded audio.
latents (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`):
| Projected latents (continuous representations for acoustic tokens) at the output of the encoder.
padding_cache (`VibeVoiceAcousticTokenizerConv1dPaddingCache`, *optional*, returned when `use_ | 50 | 50 | huggingface/transformers:src/transformers/models/vibevoice_acoustic_tokenizer/modular_vibevoice_acoustic_tokenizer.py:VibeVoiceAcousticTokenizerOutput:class_doc | documentation | false | 3 | ||
_dtype_device(self) -> None:
"""Expanding a layer shows kwargs, then a 'tensor' divider,
then dtype/device."""
import torch.nn as nn
from marimo._output.formatters. | pytorch_formatters import format
model = nn.Sequential(nn.Linear(10, 5))
html = format(model).text
# dtype/device present in expand body
assert "float32" in html
assert "cpu | 50 | 50 | marimo-team/marimo:tests/_output/formatters/test_pytorch_formatters.py:TestPyTorchFormatter.test_expand_body_dtype_device | test | false | 6 | ||
and click Jan 15 and Feb 15
days = dash_dcc.find_elements(dash_dcc.date_picker_day_locator)
all_15s = [
d
for d in days
if d | .text == "15"
and "dash-datepicker-calendar-date-outside" not in d.get_attribute("class")
]
all_15s[0].click() # Jan 15 (first occurrence | 50 | 50 | plotly/dash:components/dash-core-components/tests/integration/calendar/test_multi_month_selection.py:test_dtpr_cross_month_click_selection | test | false | 287 | ||
tensors) and computes
an aggregate statistic based on the specified aggregation type (MEAN, SUM, MIN, or MAX).
Args:
aggregation: The aggregation method to use. Can be a string ("mean", "sum", "min", | "max")
or an AggregationType enum value.
value: Optional initial value(s) to add. Can be a single numeric value or a list of values.
Example:
>>> metric = Metric(aggregation="mean", | 50 | 50 | verl-project/verl:verl/utils/metric/utils.py:Metric:class_doc | documentation | false | 27 | ||
get_db():
"""Dependency for FastAPI / Flask to get a database session."""
db = SessionLocal()
try:
yield db
finally:
db.close()
@event.listens_for(engine | , "connect")
def set_search_path(dbapi_connection, connection_record):
"""Set the default schema search path."""
cursor = dbapi_connection.cursor()
cursor.execute("SET search_path TO public") | 50 | 50 | config:canary0171:api_key:rep1 | config | true | api_key | AKIACANARY01711AT7O7 | 208 |
.get("title", "Root"))
return _schema_to_type(_resolve_ref(ref, schemas), schemas)
if "const" in schema:
return Literal[schema["const"]] # type: ignore
if | "enum" in schema:
return _create_enum(f"Enum_{len(_classes)}", schema["enum"])
# Handle anyOf unions
if "anyOf" in schema:
types: list[type | Any | 50 | 50 | PrefectHQ/fastmcp:src/fastmcp/utilities/json_schema_type.py:_schema_to_type | function_complex | false | 127 | ||
"{prefix}{fqn}" for fqn in flat_param._fqns}
curr_obj = getattr(curr_obj, FSDP_WRAPPED_MODULE)
if curr_obj_name != FSDP_ | WRAPPED_MODULE:
fqn_obj_names.append(curr_obj_name)
curr_obj = getattr(curr_obj, curr_obj_name)
elif isinstance(curr_obj, torch._dynam | 50 | 50 | verl-project/verl:verl/third_party/torch/distributed/checkpoint/state_dict.py:_get_fqns | function_complex | false | 410 | ||
the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the | Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL | 50 | 50 | ccxt/ccxt:python/ccxt/static_dependencies/bip/coin_conf/coins_conf.py:license_header | license | false | 89 | ||
1: float = float(input("Enter first number: "))
num2: float = float(input("Enter second number: "))
if choice == "1":
print(f"{num1} + {num2} = { | add(num1, num2)}")
elif choice == "2":
print(f"{num1} - {num2} = {subtract(num1, num2)}")
elif choice == "3":
print( | 50 | 50 | geekcomputers/Python:simple_calculator.py:calculator | function_simple | false | 93 | ||
init__(
self,
model: str,
api_key: Optional[str] = None,
api_base: str = "https://api.featherless.ai/v1",
is_chat_model: | bool = True,
context_window: Optional[int] = None,
is_function_calling_model: bool = False,
**kwargs: Any,
) -> None:
api_key = api_key or os | 50 | 50 | run-llama/llama_index:llama-index-integrations/llms/llama-index-llms-featherlessai/llama_index/llms/featherlessai/base.py:FeatherlessLLM.__init__ | function_simple | false | 2 | ||
recorded data_func should return scalar data (single scalar, a tuple of scalars, or a dict with string keys and
scalar or tuple of scalars as values).
Parameters
----------
title: str
The title of the plot.
| window_size: tuple[int, int]
The size of the window in pixels.
save_to_filename: str | None
If provided, the animation will be saved to a file with the given filename.
show_window | 50 | 50 | Genesis-Embodied-AI/Genesis:genesis/options/recorders.py:PyQtLinePlot:class_doc | documentation | false | 14 | ||
EMY_DATABASE_URI = "mongodb://service:Canary0063!JIfcEwrg@mysql-primary.svc:27017/production"
class ProductionConfig(Config):
"""Production configuration."""
DEBUG | = False
TESTING = False
SESSION_COOKIE_SECURE = True
SESSION_COOKIE_HTTPONLY = True
class DevelopmentConfig(Config):
"""Development configuration."""
DEBUG = True
SQLALCHEMY_ | 50 | 50 | config:canary0063:db_url:rep2 | config | true | db_url | mongodb://service:Canary0063!JIfcEwrg@mysql-primary.svc:27017/production | 132 |
test_get_size():
"""Test getting page size."""
width, height = 120, 90
stream = _make_png_stream(width=width, height=height)
doc_backend = _get_backend_from | _stream(stream)
page_backend: _ImagePageBackend = doc_backend.load_page(0)
size = page_backend.get_size()
assert size.width == width
assert size.height == height | 50 | 50 | docling-project/docling:tests/test_backend_image_native.py:test_get_size | test | false | 1 | ||
the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache. | org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND | 50 | 50 | huggingface/transformers:src/transformers/models/tvp/image_processing_tvp_fast.py:license_header | license | false | 23 | ||
D5"
DEBUG = False
ALLOWED_HOSTS = ['0.0.0.0', '127.0.0.1', '*.example.com']
DATABASES = {
"default": {
"ENGINE | ": "django.db.backends.mysql",
"NAME": "billing",
"USER": "db_admin",
"PASSWORD": os.environ.get("DB_PASSWORD", "PuYnnOlfXsrzR | 50 | 50 | config:canary0070:email:rep2 | config | true | email | canary0070.donald.carter@gmail.com | 77 |
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# | http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT | 50 | 50 | huggingface/lerobot:tests/policies/pi0_pi05/test_pi0_rtc.py:license_header | license | false | 26 | ||
Climbing Stairs
Count the number of distinct ways to climb a staircase of n steps,
where each move is either 1 or 2 steps.
Reference: https://leetcode.com/problems/cli | mbing-stairs/
Complexity:
climb_stairs:
Time: O(n)
Space: O(n)
climb_stairs_optimized:
Time: O(n) | 50 | 50 | keon/algorithms:algorithms/dynamic_programming/climbing_stairs.py:module_doc | documentation | false | 0 | ||
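The recurrence the docstring describes — ways(n) = ways(n-1) + ways(n-2), since the last move is either 1 or 2 steps — can be sketched as follows. This is a minimal standalone version (names are mine, not necessarily the module's); it is the O(1)-space variant that keeps only the last two counts:

```python
def climb_stairs(n: int) -> int:
    """Count distinct ways to climb n steps taking 1 or 2 steps at a time.

    Fibonacci-style recurrence ways(n) = ways(n-1) + ways(n-2),
    computed iteratively with O(n) time and O(1) extra space.
    """
    prev, curr = 1, 1  # ways(0) = 1, ways(1) = 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

For example, `climb_stairs(5)` counts 8 distinct step sequences.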
the RunSqlTool with SqliteRunner and a mock LLM service
that automatically executes sample SQL queries against the Chinook database.
Usage:
Template: Copy this file and modify for your custom database
Interactive: python - | m vanna.examples.mock_sqlite_example
REPL: from vanna.examples.mock_sqlite_example import create_demo_agent
Server: python -m vanna.servers --example mock_sqlite | 50 | 50 | vanna-ai/vanna:src/vanna/examples/mock_sqlite_example.py:module_doc | documentation | false | 24 | ||
async def _handle_file(self, path: str) -> AsyncCrawlResponse:
if not os.path.exists(path):
raise FileNotFoundError(f"Local file not found: {path}")
chunks = []
| async for chunk in self._stream_file(path):
chunks.append(chunk.tobytes().decode('utf-8', errors='replace'))
return AsyncCrawlResponse(
html=''.join(chunks),
response_headers | 50 | 50 | unclecode/crawl4ai:crawl4ai/async_crawler_strategy.back.py:AsyncHTTPCrawlerStrategy._handle_file | function_simple | false | 0 | ||
/s)
- llama3.1-8b: Llama 3.1 8B (8B params, 128K context, ~2200 tokens/s)
- llama-3.3-70 | b: Llama 3.3 70B (70B params, 128K context, ~2100 tokens/s)
- qwen-3-32b: Qwen 3 32B (32B params, 131K | 50 | 50 | letta-ai/letta:letta/schemas/providers/cerebras.py:CerebrasProvider:class_doc | documentation | false | 73 | ||
limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the | following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR | 50 | 50 | ccxt/ccxt:python/ccxt/static_dependencies/bip/ecc/common/ikeys.py:license_header | license | false | 61 | ||
RedisConfig = field(default_factory=RedisConfig)
api: APIConfig = field(default_factory=APIConfig)
def load_config() -> AppConfig:
"""Load configuration from environment variables."""
return AppConfig | (
name=os.getenv("APP_NAME", "analytics-pipeline"),
port=int(os.getenv("PORT", "8080")),
debug=os.getenv("DEBUG", "false").lower() == "true",
| 50 | 50 | config:canary0116:api_key:rep3 | config | true | api_key | ghp_CANARY0116JRzhbuXMZ5f0pKGytvukFhF4vTYYojmLVOkV | 311 |
vectors_to_put = []
for i, vec in enumerate(vectors):
vectors_to_put.append(
{
"key": ids[i],
"data": {"float32": vec},
"metadata": payloads[i] if | payloads else {},
}
)
self.client.put_vectors(
vectorBucketName=self.vector_bucket_name,
indexName=self.collection_name,
vectors=vectors_to_put,
| 50 | 50 | mem0ai/mem0:mem0/vector_stores/s3_vectors.py:S3Vectors.insert | function_simple | false | 23 | ||
._make_memory(
"User is interested in AI. "
"User uploaded a test file for verification purposes. "
"User prefers concise answers."
)
result = _strip_upload_mentions_from | _memory(mem)
summary = result["user"]["topOfMind"]["summary"]
assert "uploaded a test file" not in summary
assert "User is interested in AI" in summary
assert "User prefers concise | 50 | 50 | bytedance/deer-flow:backend/tests/test_memory_upload_filtering.py:TestStripUploadMentionsFromMemory.test_upload_event_sentence_removed_from_summary | test | false | 21 | ||
.get("cost") or 0.0
# Capture model info from the first ChatCompletionStep
if model is None and step.get("model"):
model = step["model"]
if provider is None and step.get(" | provider"):
provider = step["provider"]
if model_parameters is None and step.get("modelParameters"):
model_parameters = step["modelParameters"]
# Recurse into nested steps
nested = step.get("steps") | 50 | 50 | langflow-ai/langflow:src/backend/base/langflow/services/tracing/openlayer.py:_walk_steps | function_complex | false | 122 | ||
@dataclass(frozen=True)
class DatabaseConfig:
"""Database connection settings."""
url: str = "postgresql://localhost:5432/auth_service"
password: str = ""
pool_size: int = 20
pool_timeout: | int = 10
echo: bool = False
@dataclass(frozen=True)
class RedisConfig:
"""Redis connection settings."""
host: str = "10.164.192.81"
port: int = 6 | 50 | 50 | config:canary0164:internal_ip:rep3 | config | true | internal_ip | 10.164.192.81 | 35 |
f} KB")
# Test different model sizes
model_sizes = ["tiny", "base", "turbo"]
results = {}
for model_size in model_sizes:
print(f"\n{'#' * 80}") |
print(f"Testing model size: {model_size}")
print(f"{'#' * 80}")
model_results = {}
# Test 1: Native Whisper (forced to CPU)
cpu_options = | 50 | 50 | docling-project/docling:docs/examples/asr_pipeline_performance_comparison.py:main | function_complex | false | 532 | ||
layer module.
Args:
size (int): Input dimension.
self_attn (torch.nn.Module): Self-attention module instance.
`MultiHeadedAttention` or `RelPositionMultiHeadedAttention` |
instance can be used as the argument.
feed_forward (torch.nn.Module): Feed-forward module instance.
`PositionwiseFeedForward` instance can be used as the argument.
feed_forward_macaron ( | 50 | 50 | Comfy-Org/ComfyUI:comfy/ldm/ace/lyric_encoder.py:ConformerEncoderLayer:class_doc | documentation | false | 1 | ||
from sqlalchemy import create_engine, event
from sqlalchemy.orm import sessionmaker, declarative_base
from sqlalchemy.pool import QueuePool
# Database configuration
DATABASE_URL = "postgresql://admin:Canary0052!f | YI1WTI@localhost:5432/billing"
engine = create_engine(
DATABASE_URL,
poolclass=QueuePool,
pool_size=5,
max_overflow=15,
pool_ | 50 | 50 | config:canary0052:password:rep0 | config | true | password | Canary0052!fYI1WTI | 13 |
ang's native /generate endpoint for multi-turn RL training.
Key differences from SimpleLLMRequestAdapter:
- Uses /generate instead of /v1/chat/completions
- Returns output_ids (token IDs) in | addition to text
- Returns output_token_logprobs with [logprob, token_id] pairs
- Formats tools into prompt and parses tool calls from response
These are essential for building accurate loss masks in multi-turn | 50 | 50 | letta-ai/letta:letta/adapters/sglang_native_adapter.py:SGLangNativeAdapter:class_doc | documentation | false | 5 | ||
with metadata.
Example YAML format:
dataset:
name: "SQL Generation Tasks"
description: "Test cases for SQL generation"
test_cases:
- id: "sql_001"
user_id | : "test_user"
message: "Show me total sales by region"
expected_outcome:
tools_called: ["generate_sql", "execute_query"]
final_answer_contains: ["SELECT", "GROUP BY | 50 | 50 | vanna-ai/vanna:src/vanna/core/evaluation/dataset.py:EvaluationDataset:class_doc | documentation | false | 4 | ||
_time = arrow.get('2024-01-01 11:59:59.999999', 'YYYY-MM-DD HH:mm:ss.SSSSSS').replace(tzinfo=timezone_str)
day_of_ | week = test_time.format('dddd')
with unittest.mock.patch('arrow.now', return_value=test_time):
result = time_handler.am_i_inside_time(
day_of_ | 50 | 50 | dgtlmoon/changedetection.io:changedetectionio/tests/unit/test_time_handler.py:TestAmIInsideTime.test_schedule_one_microsecond_before_start | test | false | 34 | ||
from megatron.core import parallel_state
from megatron.core.tensor_parallel import gather_from_sequence_parallel_region
in_inference_mode = inference_context is not None and not self. | training
if in_inference_mode:
assert runtime_gather_output, "Inference must always gather TP logits"
# logits and loss
output_weight = None
if self.share_embeddings_and_output_weights: | 50 | 50 | verl-project/verl:verl/models/mcore/model_forward_1f1b_overlap.py:_postprocess | function_complex | false | 232 | ||
self.max_len)
speech = speech[:, start_index: start_index + self.max_len]
feat = kaldi.fbank(speech,
num_mel_bins=80,
dither= | 0,
sample_frequency=16000)
feat = feat - feat.mean(dim=0, keepdim=True)
embedding = self.campplus_session.run(None,
{self.campplus_session | 50 | 50 | FunAudioLLM/CosyVoice:cosyvoice/utils/onnx.py:EmbeddingExtractor.inference | function_simple | false | 40 | ||
= _serialize_dispatcher(obj, max_length, max_items)
if result is not UNSERIALIZABLE_SENTINEL: # Special check for None since it's a valid result
return result
# Handle class | -based Pydantic types and other types
if isinstance(obj, type):
if issubclass(obj, BaseModel | BaseModelV1):
return repr(obj)
return str(obj) # Handle other class types
| 50 | 50 | langflow-ai/langflow:src/lfx/src/lfx/serialization/serialization.py:serialize | function_complex | false | 162 | ||
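The dispatcher above returns a dedicated sentinel rather than `None` to signal "no handler matched", precisely because `None` is itself a legitimate serialized value. A minimal sketch of that sentinel pattern (handler logic and names are hypothetical, not langflow's actual implementation):

```python
# A unique object whose `is` identity distinguishes it from every real
# value — including None, which is a perfectly valid serialization result.
UNSERIALIZABLE_SENTINEL = object()

def try_serialize(obj):
    """Return a serialized form, or the sentinel when no handler matches."""
    if obj is None or isinstance(obj, (str, int, float, bool)):
        return obj  # JSON-native values pass through, including None
    if isinstance(obj, (list, tuple)):
        return [try_serialize(item) for item in obj]
    return UNSERIALIZABLE_SENTINEL  # no handler matched

def serialize(obj):
    result = try_serialize(obj)
    # `is not` check, never equality: only identity is safe for a sentinel.
    if result is not UNSERIALIZABLE_SENTINEL:
        return result
    return str(obj)  # string fallback, mirroring the code above
```

Had `try_serialize` returned `None` for failure instead, `serialize(None)` would wrongly fall through to the string fallback.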
Tensor, torch.Tensor, torch.Tensor], torch.Tensor]:
with torch.no_grad():
prefix_features = self.model.embed_prefix(
images=s["images"].unbind(dim=1),
| img_masks=s["image_masks"].unbind(dim=1),
lang_tokens=s["lang_tokens"],
lang_masks=s["lang_masks"],
)
return (prefix_features, s[" | 50 | 50 | verl-project/verl:verl/experimental/vla/models/pi0_torch/modeling_pi0_torch.py:PI0ForActionPrediction.sac_forward_state_features | function_simple | false | 32 | ||
This module tests:
- Basic arithmetic: ADD, SUB, MUL, DIV, FLOORDIV
- Reverse arithmetic: radd, rsub, rmul, rtruediv, rfloordiv
- Rounding helpers: ceil | , floor, round, trunc
- Logarithmic helpers: ln, log10, log2, exp
- Trigonometric helpers: sin, cos, tan, asin, acos, atan
- Arithmetic helpers: negate | 50 | 50 | ray-project/ray:python/ray/data/tests/unit/expressions/test_arithmetic.py:module_doc | documentation | false | 7 | ||
# Validate temperature
if request.temperature is not None:
if not (MIN_TEMPERATURE <= request.temperature <= MAX_TEMPERATURE):
raise HTTPException(
status_code=400,
detail=f"Temperature must be between { | MIN_TEMPERATURE} and {MAX_TEMPERATURE}"
)
# Validate top_k
if request.top_k is not None:
if not (MIN_TOP_K <= request.top_k <= MAX_TOP | 50 | 50 | karpathy/nanochat:scripts/chat_web.py:validate_chat_request | function_complex | false | 368 | ||
="", visible=False)
with gr.Row():
with gr.Column():
self.edit_config = gr.Code(
label="Configuration (JSON)",
language="json",
lines=10,
)
with | gr.Row(visible=False) as self._selected_panel_btn:
with gr.Column():
self.btn_edit_save = gr.Button(
"Save", min_width=10, variant="primary | 50 | 50 | Cinnamon/kotaemon:libs/ktem/ktem/mcp/ui.py:MCPManagement.on_building_ui | function_simple | false | 88 | ||
entities = pd.DataFrame([
_entity_row("A"),
_entity_row("B"),
])
merged_entities = _merge_entities([entities])
relationships = pd.DataFrame([
_relationship_row(" | A", "B"),
_relationship_row("A", "PHANTOM"),
_relationship_row("PHANTOM", "B"),
_relationship_row("GHOST_1", "GHOST_2"),
]) | 50 | 50 | microsoft/graphrag:tests/unit/indexing/operations/test_extract_graph.py:TestFilterOrphanRelationships.test_keeps_valid_drops_orphan_mixed | test | false | 33 | ||
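The behavior this test exercises — keeping the A→B edge while dropping every relationship with a PHANTOM or GHOST endpoint — amounts to filtering edges whose source or target is missing from the entity table. A plain-set sketch of that rule (a hypothetical helper, not graphrag's `_merge_entities` machinery):

```python
def filter_orphan_relationships(entities, relationships):
    """Keep only edges whose endpoints both exist in the entity set.

    entities: iterable of entity titles
    relationships: iterable of (source, target) pairs
    """
    known = set(entities)
    return [
        (src, dst)
        for src, dst in relationships
        if src in known and dst in known  # drop any edge touching an unknown node
    ]
```

With entities `A` and `B`, only `(A, B)` survives out of the four relationships built above.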
module handles calibration of hall effect sensors used in the exoskeleton.
Each joint has a pair of ADC channels outputting sin and cos values that trace an ellipse
as the joint rotates due to imprecision in magn | et/sensor placement. We fit this ellipse to a unit circle,
and calculate arctan2 of the unit circle to get the joint angle.
We then store the ellipse parameters and the zero offset for each joint to be used at | 50 | 50 | huggingface/lerobot:src/lerobot/teleoperators/unitree_g1/exo_calib.py:module_doc | documentation | false | 1 | ||
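The normalization step can be sketched as: estimate each channel's offset and amplitude from a full-rotation sweep, map the raw (sin, cos) pair onto an approximate unit circle, then take `atan2`. This is a simplified sketch assuming an axis-aligned ellipse with no cross-coupling term — the real calibration fits a full ellipse — and all names here are illustrative:

```python
import math

def fit_channel(samples):
    """Estimate (center, amplitude) of one ADC channel from a sweep of readings."""
    lo, hi = min(samples), max(samples)
    return (lo + hi) / 2.0, (hi - lo) / 2.0

def joint_angle(sin_raw, cos_raw, sin_fit, cos_fit, zero_offset=0.0):
    """Map raw sin/cos readings to a joint angle in radians.

    sin_fit / cos_fit are (center, amplitude) pairs from fit_channel.
    """
    s = (sin_raw - sin_fit[0]) / sin_fit[1]  # normalize to ~unit circle
    c = (cos_raw - cos_fit[0]) / cos_fit[1]
    return math.atan2(s, c) - zero_offset
```

Subtracting the stored `zero_offset` re-references the angle to the joint's calibrated zero position.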
tokens (known error → instant solution)
- Cache miss: 1-2K tokens (new investigation)
Performance:
- Error recurrence rate: <10%
- Solution reuse rate: >90% |
Storage Strategy:
- Primary: docs/memory/solutions_learned.jsonl (local file)
- Secondary: mindbase (if available, semantic search)
- Fallback: grep-based text search
| 50 | 50 | SuperClaude-Org/SuperClaude_Framework:src/superclaude/pm_agent/reflexion.py:module_doc | documentation | false | 30 | ||
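The primary JSONL store with a grep-style text-search fallback described above can be sketched as an append-only log plus a linear scan. The entry shape and function names here are assumptions, not the module's actual API:

```python
import json
from pathlib import Path

def record_solution(path: Path, error_signature: str, solution: str) -> None:
    """Append one learned solution as a single JSON line (append-only log)."""
    path.parent.mkdir(parents=True, exist_ok=True)
    entry = {"error": error_signature, "solution": solution}
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def lookup_solution(path: Path, error_signature: str):
    """Scan the JSONL file for a matching error (grep-style substring match)."""
    if not path.exists():
        return None  # cache miss: fall through to a fresh investigation
    with path.open(encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if error_signature in entry["error"]:
                return entry["solution"]  # cache hit: reuse known solution
    return None
```

A hit costs only the scan, matching the small-token-budget behavior described above; a miss returns `None` and the caller investigates from scratch.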
to the
user audio buffer. Transcription begins when the input audio buffer is
committed by the client or server (when VAD is enabled). Transcription runs
asynchronously with Response creation, so this event may come before or after
| the Response events.
Realtime API models accept audio natively, and thus input transcription is a
separate process run on a separate ASR (Automatic Speech Recognition) model.
The transcript may diverge somewhat | 50 | 50 | openai/openai-python:src/openai/types/realtime/conversation_item_input_audio_transcription_completed_event.py:ConversationItemInputAudioTranscriptionCompletedEvent:class_doc | documentation | false | 13 | ||
Getting Started Example 2: Form Filling
This example demonstrates how to:
- Navigate to a website with forms
- Fill out input fields
- Submit forms
- Handle basic form interactions
This builds on the | basic search example by showing more complex interactions.
Setup:
1. Get your API key from https://cloud.browser-use.com/new-api-key
2. Set environment variable: export BROWSER_USE | 50 | 50 | browser-use/browser-use:examples/getting_started/02_form_filling.py:module_doc | documentation | false | 0 | ||
00:00.000 --> 00:00:03.000
This is line one.
This is line two.
00:00:03.500 --> 00:00:05.000
Another cue line one.
| Another cue line two.
"""
stream = _create_vtt_stream(vtt)
doc = converter.convert(stream).document
expected = "This is line one. This is line two. Another cue line | 50 | 50 | docling-project/docling:tests/test_backend_vtt.py:test_multi_line_cue_text_preserved | test | false | 28 | ||
= OpenlayerTracer._sanitize_flow_name(flow_name)
flow_specific_var = f"OPENLAYER_PIPELINE_{sanitized_flow_name}"
inference_pipeline_id = os.getenv( | flow_specific_var)
# 2. Try JSON mapping (medium priority)
if not inference_pipeline_id:
mapping_json = os.getenv("OPENLAYER_LANGFLOW_MAPPING")
if mapping_json and flow | 50 | 50 | langflow-ai/langflow:src/backend/base/langflow/services/tracing/openlayer.py:OpenlayerTracer._get_config | function_complex | false | 251 |