๐ŸŸ๏ธ Long Code Arena (CI builds repair)

This is the benchmark for the CI builds repair task, part of the 🏟️ Long Code Arena benchmark.

๐Ÿ› ๏ธ Task. Given the logs of a failed GitHub Actions workflow and the corresponding repository snapshot, repair the repository contents in order to make the workflow pass.

All the data is collected from repositories published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

To score your model on this dataset, you can use the CI builds repair benchmark. 📩 If you have any questions or requests concerning this dataset, please contact lca@jetbrains.com.

How-to

List all the available configs via datasets.get_dataset_config_names and choose an appropriate one.

Current configs: python

Load the data via load_dataset:

from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-ci-builds-repair", split="test")

Note that all the data we have is considered to be in the test split.
NOTE: If you encounter any errors while loading the dataset on Windows, update the datasets library (the dataset was tested with datasets==2.16.1).

Usage

For the dataset usage, please refer to our CI builds repair benchmark. Its workflow is as follows:

  1. Repairs the repository via the fix_repo_function function, which utilizes the repo state and the logs of the failures;
  2. Sends the datapoints to GitHub to run the workflows;
  3. Requests the results from GitHub;
  4. Analyzes the results and prints them;
  5. Clones the necessary repos to the user's local machine.

The user should run their model to repair the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow runs for all the datapoints.
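The only part the user implements is fix_repo_function. Its exact signature is defined by the benchmark; the sketch below assumes it receives a datapoint dict and a path to the checked-out repo (an assumption for illustration, not the benchmark's actual API):

```python
from pathlib import Path
from typing import Any, Dict

def fix_repo_function(datapoint: Dict[str, Any], repo_path: Path) -> None:
    # Every datapoint carries the names and raw logs of the failed steps.
    failed_steps = [entry["step_name"] for entry in datapoint["logs"]]
    print(f"repairing {repo_path} after failed steps: {failed_steps}")
    # A real implementation would feed the logs and the files under
    # repo_path to a model and write the proposed fixes back to disk.

# Toy datapoint with the same `logs` shape as in the benchmark:
toy = {"logs": [{"step_name": "check_code_quality/5_Check quality.txt",
                 "log": "ruff format --check failed"}]}
fix_repo_function(toy, Path("/tmp/repo"))
```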

Dataset Structure

This dataset contains logs of the failed GitHub Action workflows for some commits followed by the commit that passes the workflow successfully.

Note that, unlike other ๐ŸŸ๏ธ Long Code Arena datasets, this dataset does not contain repositories.

Datapoint Schema

Each example has the following fields:

Field Description
contributor Username of the contributor that committed changes
difficulty Difficulty of the problem (assessor-based; 1 means that the repair requires only code formatting)
diff Contents of the diff between the failed and the successful commits
head_branch Name of the original branch that the commit was pushed to
id Unique ID of the datapoint
language Main language of the repository
logs List of dicts with keys log (logs of the failed job, particular step) and step_name (name of the failed step of the job)
repo_name Name of the original repository (second part of the owner/name on GitHub)
repo_owner Owner of the original repository (first part of the owner/name on GitHub)
sha_fail SHA of the failed commit
sha_success SHA of the successful commit
workflow Contents of the workflow file
workflow_filename The name of the workflow file (without directories)
workflow_name The name of the workflow
workflow_path The full path to the workflow file
changed_files List of files changed in diff
commit_link URL of the commit corresponding to the failed job

Datapoint Example

{'contributor': 'Gallaecio',
 'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py/n<...>',
 'difficulty': 2,
 'head_branch': 'component-getters',
 'id': 18,
 'language': 'Python',
 'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
           'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
 'repo_name': 'scrapy',
 'repo_owner': 'scrapy',
 'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
 'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
 'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
 'workflow_filename': 'checks.yml',
 'workflow_name': 'Checks',
 'workflow_path': '.github/workflows/checks.yml',
 'changed_files': ["scrapy/crawler.py"],
 'commit_link': "https://github.com/scrapy/scrapy/tree/0f71221cf9875ed8ef3400e1008408e79b6691e6"}
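Since the dataset does not ship repository contents, a harness typically reconstructs the failing state from repo_owner, repo_name, and sha_fail. A minimal sketch using the example datapoint above (the clone-and-checkout command is our own convention, not part of the benchmark):

```python
# Fields taken from the example datapoint shown above.
dp = {
    "repo_owner": "scrapy",
    "repo_name": "scrapy",
    "sha_fail": "0f71221cf9875ed8ef3400e1008408e79b6691e6",
}

# Build the clone URL and a shell command that checks out the failing commit.
clone_url = f"https://github.com/{dp['repo_owner']}/{dp['repo_name']}.git"
cmd = (f"git clone {clone_url} repo && "
       f"git -C repo checkout {dp['sha_fail']}")
print(cmd)
```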

Citing

@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}

You can find the paper on arXiv (arXiv:2406.11612).
