organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
home-assistant | core | b9753a9f920f002312dc115534afdb422043007c | https://github.com/home-assistant/core/issues/83852 | integration: homekit
integration: braviatv | Sony Bravia TV Integration: Error setting up entry for homekit | ### The problem
Starting in version homeassistant=='2012.11.0' Sony Bravia TV integration can't work with Apple HomeKit ( HomeKit Integration) due some errors:
2022-12-12 16:07:46.673 WARNING (MainThread) [homeassistant.components.homekit.type_remotes] media_player.sony_xbr_49x835d: Reached maximum number of source... | null | https://github.com/home-assistant/core/pull/83890 | null | {'base_commit': 'b9753a9f920f002312dc115534afdb422043007c', 'files': [{'path': 'homeassistant/components/homekit/type_remotes.py', 'status': 'modified', 'Loc': {"('RemoteInputSelectAccessory', None, 78)": {'add': [145]}, '(None, None, None)': {'mod': [21]}, "('RemoteInputSelectAccessory', '__init__', 81)": {'mod': [99]... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/homekit/type_remotes.py"
],
"doc": [],
"test": [
"tests/components/homekit/test_type_media_players.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | 185f7beafc05fc355109fd417350591459650366 | https://github.com/home-assistant/core/issues/59106 | integration: octoprint | Error adding entities for domain sensor with platform octoprint when no tool0 | ### The problem
One of my octoprint instances is connect to a CNC which does not have a tool0 as there is no extruder. In the previous integration, you could just not set it to monitor this, however, in the the new UI there is no option to ignore components. As a result, I have an "Octoprint target tool0 temp" that is... | null | https://github.com/home-assistant/core/pull/59130 | null | {'base_commit': '185f7beafc05fc355109fd417350591459650366', 'files': [{'path': 'homeassistant/components/octoprint/sensor.py', 'status': 'modified', 'Loc': {"('OctoPrintTemperatureSensor', 'native_value', 206)": {'add': [219], 'mod': [214, 217, 218]}}}, {'path': 'tests/components/octoprint/test_sensor.py', 'status': 'm... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/octoprint/sensor.py"
],
"doc": [],
"test": [
"tests/components/octoprint/test_sensor.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9 | https://github.com/home-assistant/core/issues/45426 | integration: synology_dsm | Synology DSM CPU sensors report usage above 100% | ## The problem
CPU load for 15 and 5 minutes are reported above 100%

## Environment
Running version 2021.1.4 as Home Assistant OS VM running on the synology nas itself
## Problem-relevant `configura... | null | https://github.com/home-assistant/core/pull/45500 | null | {'base_commit': 'dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9', 'files': [{'path': 'homeassistant/components/synology_dsm/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38], 'mod': [97, 104, 111, 118, 125, 126, 132, 133, 139, 140]}}}, {'path': 'homeassistant/components/synology_dsm/sensor.py', 'sta... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/synology_dsm/const.py",
"homeassistant/components/synology_dsm/sensor.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | ed3ebdfea52b222560ee6cae21c84f1e73df4d9a | https://github.com/home-assistant/core/issues/97324 | integration: renault | Error setting up entry Renault for renault | ### The problem
Renault integration fails to start
### What version of Home Assistant Core has the issue?
core-2023.7.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
... | null | https://github.com/home-assistant/core/pull/97530 | null | {'base_commit': 'ed3ebdfea52b222560ee6cae21c84f1e73df4d9a', 'files': [{'path': 'homeassistant/components/renault/__init__.py', 'status': 'modified', 'Loc': {"(None, 'async_setup_entry', 15)": {'mod': [29]}}}, {'path': 'tests/components/renault/test_init.py', 'status': 'modified', 'Loc': {"(None, 'test_setup_entry_excep... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/renault/__init__.py"
],
"doc": [],
"test": [
"tests/components/renault/test_init.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | 0eae0cca2bf841f2c2cb87fc602bc8afa3557174 | https://github.com/home-assistant/core/issues/35196 | integration: metoffice | Met office componant does not provide future forecast data | <!-- READ THIS FIRST:
- If you need additional help with this template, please refer to https://www.home-assistant.io/help/reporting_issues/
- Make sure you are running the latest version of Home Assistant before reporting an issue: https://github.com/home-assistant/core/releases
- Do not report issues for int... | null | https://github.com/home-assistant/core/pull/50876 | null | {'base_commit': '0eae0cca2bf841f2c2cb87fc602bc8afa3557174', 'files': [{'path': 'homeassistant/components/metoffice/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 4, 16, 18], 'mod': [14, 15]}, "(None, 'async_setup_entry', 25)": {'add': [50], 'mod': [33, 34, 35, 38, 41, 42, 48, 49, 54, 55, ... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/metoffice/weather.py",
"homeassistant/components/metoffice/sensor.py",
"tests/fixtures/metoffice.json",
"homeassistant/components/metoffice/const.py",
"homeassistant/components/metoffice/data.py",
"homeassistant/components/metoffice/config_flow.py",
"h... | 1 |
zylon-ai | private-gpt | 5a695e9767e24778ffd725ab195bf72916e27ba5 | https://github.com/zylon-ai/private-gpt/issues/133 | Need help with ingest.py | Running into this error - python ingest.py
-Traceback (most recent call last):
File "C:\Users\krstr\OneDrive\Desktop\privategpt\privateGPT\privateGPT\ingest.py", line 11, in <module>
from constants import CHROMA_SETTINGS
File "C:\Users\krstr\OneDrive\Desktop\privategpt\privateGPT\privateGPT\constants.py"... | null | https://github.com/zylon-ai/private-gpt/pull/168 | null | {'base_commit': '5a695e9767e24778ffd725ab195bf72916e27ba5', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
zylon-ai | private-gpt | 57a829a8e8cf5c31410c256ae59e0eda9f129a41 | https://github.com/zylon-ai/private-gpt/issues/1258 | Add a list of supported file types to README and Docs | Maybe I'm blind, but I couldn't find a list of the file types supported by privateGPT.
One might add a list with the supported file types to the [README.md](https://github.com/imartinez/privateGPT/blob/main/README.md) and [PrivateGPT Docs](https://docs.privategpt.dev/).
Kinda related https://github.com/imartinez/... | null | https://github.com/zylon-ai/private-gpt/pull/1264 | null | {'base_commit': '57a829a8e8cf5c31410c256ae59e0eda9f129a41', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'fern/docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 6], 'mod': [8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"fern/docs.yml",
"fern/docs/pages/welcome.mdx",
"fern/docs/pages/quickstart.mdx",
"fern/docs/pages/sdks.mdx",
"fern/docs/pages/ingestion.mdx",
"fern/docs/pages/installation.mdx"
],
"test": [],
"config": [
"Makefile"
],
"asset": []
} | 1 | |
zylon-ai | private-gpt | 60e6bd25eb7e54a6d62ab0a9642c09170c1729e3 | https://github.com/zylon-ai/private-gpt/issues/448 | bug
primordial | ingest.py extracts only the first row from the CSV files | My suggestion for fixing the bug:
1. Modify the load_single_document function as follows:
def load_single_document(file_path: str) -> List[Document]:
ext = "." + file_path.rsplit(".", 1)[-1]
if ext in LOADER_MAPPING:
loader_class, loader_args = LOADER_MAPPING[ext]
loader = loader_class... | null | https://github.com/zylon-ai/private-gpt/pull/560 | null | {'base_commit': '60e6bd25eb7e54a6d62ab0a9642c09170c1729e3', 'files': [{'path': 'ingest.py', 'status': 'modified', 'Loc': {"(None, 'load_single_document', 84)": {'mod': [84, 89]}, "(None, 'load_documents', 94)": {'mod': [108, 109]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ingest.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | 86c2dcfe1b33ac467558487a1df408abee0d2321 | https://github.com/zylon-ai/private-gpt/issues/875 | bug | I got a Traceback error while running privateGPT on Ubuntu 22.04 | While running privateGPT.py, the error started after "gptj_model_load: model size = 3609.38 MB / num tensors = 285". The error reads as follows:
Traceback (most recent call last):
File "/home/dennis/privateGPT/privateGPT.py", line 83, in <module>
main()
File "/home/dennis/privateGPT/privateGPT.py", line 3... | null | https://github.com/zylon-ai/private-gpt/pull/881 | null | {'base_commit': '86c2dcfe1b33ac467558487a1df408abee0d2321', 'files': [{'path': 'privateGPT.py', 'status': 'modified', 'Loc': {"(None, 'main', 25)": {'mod': [36, 38]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"privateGPT.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | fdb45741e521d606b028984dbc2f6ac57755bb88 | https://github.com/zylon-ai/private-gpt/issues/15 | llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this | llama.cpp: loading model from ./models/ggml-model-q4_0.bin
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_mo... | null | https://github.com/zylon-ai/private-gpt/pull/224 | null | {'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 's... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ingest.py",
"privateGPT.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"example.env"
],
"asset": []
} | 1 | |
yt-dlp | yt-dlp | c999bac02c5a4f755b2a82488a975e91c988ffd8 | https://github.com/yt-dlp/yt-dlp/issues/9506 | site-bug | [TikTok] Failed to parse JSON/ No video formats found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt... | null | https://github.com/yt-dlp/yt-dlp/pull/9960 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/tiktok.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('TikTokBaseIE', None, 33)": {'add': [241]}, "('TikTokBaseIE', '_parse_aweme_video_app', 242)": {'add': [298], 'mod': [246, 247, 248, 249, 250,... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tiktok.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4 | https://github.com/yt-dlp/yt-dlp/issues/2350 | site-enhancement | [YouTube] [ChannelTab] extract subscriber count and channel views | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://github... | null | https://github.com/yt-dlp/yt-dlp/pull/2399 | null | {'base_commit': '135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1140]}}}, {'path': 'yt_dlp/extractor/common.py', 'status': 'modified', 'Loc': {"('InfoExtractor', None, 94)": {'add': [262]}}}, {'path': 'yt_dlp/extractor/youtube.py',... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/youtube.py",
"yt_dlp/extractor/common.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | c8a61a910096c77ce08dad5e1b2fbda5eb964156 | https://github.com/yt-dlp/yt-dlp/issues/9635 | site-bug | Vkplay Unsupported URL | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#re... | null | https://github.com/yt-dlp/yt-dlp/pull/9636 | null | {'base_commit': 'c8a61a910096c77ce08dad5e1b2fbda5eb964156', 'files': [{'path': 'yt_dlp/extractor/vk.py', 'status': 'modified', 'Loc': {"('VKPlayBaseIE', None, 709)": {'add': [709]}, "('VKPlayIE', None, 767)": {'add': [785], 'mod': [768, 779]}, "('VKPlayLiveIE', None, 804)": {'add': [824], 'mod': [805, 816]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/vk.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 93864403ea7c982be9a78af38835ac0747ed12d1 | https://github.com/yt-dlp/yt-dlp/issues/2043 | bug
external issue | [ceskatelevize.cz] Cannot download manifest - SSLV3_ALERT_HANDSHAKE_FAILURE | I'm sorry, but I think that the extractor is still broken. For instance:
```
$ yt-dlp --verbose "https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/"
[debug] Command-line config: ['--verbose', 'https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/']
[debug] En... | null | https://github.com/yt-dlp/yt-dlp/pull/1904 | null | {'base_commit': '93864403ea7c982be9a78af38835ac0747ed12d1', 'files': [{'path': 'yt_dlp/extractor/ceskatelevize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [15, 16]}, "('CeskaTelevizeIE', '_real_extract', 89)": {'mod': [102, 103, 104, 105, 106]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/ceskatelevize.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 195c22840c594c8f9229cb47ffec2a8984c53a0c | https://github.com/yt-dlp/yt-dlp/issues/2239 | bug | --no-continue is bugged and does nothing (--force-overwrites also) | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and ar... | null | https://github.com/yt-dlp/yt-dlp/pull/2901 | null | {'base_commit': '195c22840c594c8f9229cb47ffec2a8984c53a0c', 'files': [{'path': 'yt_dlp/downloader/fragment.py', 'status': 'modified', 'Loc': {"('FragmentFD', '_prepare_frag_download', 165)": {'mod': [181]}}}, {'path': 'yt_dlp/downloader/http.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18], 'mod': ... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils.py",
"yt_dlp/downloader/http.py",
"yt_dlp/downloader/fragment.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 1f6b90ed8db7006e2f2d539c41c8f3e59058dd00 | https://github.com/yt-dlp/yt-dlp/issues/4587 | good first issue
site-enhancement | 9gag.com - NineGagIE - InfoExtractor - add Uploader info to the returned metadata | ### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login de... | null | https://github.com/yt-dlp/yt-dlp/pull/4597 | null | {'base_commit': '1f6b90ed8db7006e2f2d539c41c8f3e59058dd00', 'files': [{'path': 'yt_dlp/extractor/ninegag.py', 'status': 'modified', 'Loc': {"('NineGagIE', None, 12)": {'add': [13, 23, 34], 'mod': [20, 25]}, "('NineGagIE', '_real_extract', 37)": {'add': [119], 'mod': [49, 101, 113, 117, 122, 123, 124]}, '(None, None, No... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/ninegag.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2b18a8c59018a863cfac5b959ee14e474a7a87bc | https://github.com/yt-dlp/yt-dlp/issues/417 | bug | [Broken] [YouTube] Can't get full Chat Replay when using cookies | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check lis... | null | https://github.com/yt-dlp/yt-dlp/pull/437 | null | {'base_commit': '2b18a8c59018a863cfac5b959ee14e474a7a87bc', 'files': [{'path': 'yt_dlp/downloader/youtube_live_chat.py', 'status': 'modified', 'Loc': {"('YoutubeLiveChatFD', 'real_download', 22)": {'add': [61, 144, 146], 'mod': [93, 94, 95, 96, 98, 157, 158, 159, 160, 161]}, "('YoutubeLiveChatFD', 'download_and_parse_f... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/downloader/youtube_live_chat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | f6c73aad5f1a67544bea137ebd9d1e22e0e56567 | https://github.com/yt-dlp/yt-dlp/issues/9512 | site-bug | [Globo] Unable to download JSON metadata: HTTP Error 404: Not Found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instruc... | null | https://github.com/yt-dlp/yt-dlp/pull/11795 | null | {'base_commit': 'f6c73aad5f1a67544bea137ebd9d1e22e0e56567', 'files': [{'path': 'yt_dlp/extractor/globo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 11, 14, 15], 'mod': [1, 2, 4, 8, 10]}, "('GloboIE', None, 18)": {'add': [20], 'mod': [19, 22, 28, 29, 41, 42, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/globo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 8c53322cda75394a8d551dde20b2529ee5ad6e89 | https://github.com/yt-dlp/yt-dlp/issues/5744 | site-enhancement
patch-available | [ok.ru] Download subtitle | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#up... | null | https://github.com/yt-dlp/yt-dlp/pull/5920 | null | {'base_commit': '8c53322cda75394a8d551dde20b2529ee5ad6e89', 'files': [{'path': 'yt_dlp/extractor/odnoklassniki.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, "('OdnoklassnikiIE', None, 21)": {'add': [155, 204]}, "('OdnoklassnikiIE', '_extract_desktop', 222)": {'add': [296, 307]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/odnoklassniki.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed | https://github.com/yt-dlp/yt-dlp/issues/840 | enhancement | UnicodeDecodeError when configuration saved as UTF-8 and OS default encoding is GBK | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check lis... | null | https://github.com/yt-dlp/yt-dlp/pull/4357 | null | {'base_commit': '5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1163]}}}, {'path': 'test/test_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [41, 1824]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', '... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils.py"
],
"doc": [
"README.md"
],
"test": [
"test/test_utils.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2fd226f6a76715e429709d7172183d48e07c7ab3 | https://github.com/yt-dlp/yt-dlp/issues/544 | bug | Program not running without `_sqlite3` module | ## Checklist
- [ ] I'm reporting a broken site support issue
- [x] I've verified that I'm running yt-dlp version **2021.07.21**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x]... | null | https://github.com/yt-dlp/yt-dlp/pull/554 | null | {'base_commit': '2fd226f6a76715e429709d7172183d48e07c7ab3', 'files': [{'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25], 'mod': [5]}, "(None, '_extract_firefox_cookies', 91)": {'add': [92]}, "(None, '_extract_chrome_cookies', 196)": {'add': [197]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | f14c2333481c63c24017a41ded7d8f36726504b7 | https://github.com/yt-dlp/yt-dlp/issues/3005 | site-bug | Can't extract from sportdeutschland.tv | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://gith... | null | https://github.com/yt-dlp/yt-dlp/pull/6041 | null | {'base_commit': 'f14c2333481c63c24017a41ded7d8f36726504b7', 'files': [{'path': 'yt_dlp/extractor/sportdeutschland.py', 'status': 'modified', 'Loc': {"('SportDeutschlandIE', '_real_extract', 42)": {'add': [95], 'mod': [44, 45, 47, 48, 49, 52, 53, 54, 56, 58, 59, 60, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/sportdeutschland.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4 | https://github.com/yt-dlp/yt-dlp/issues/9652 | DRM
site-bug
patch-available | on.orf.at not complete DRM detection | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
### Checklist
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update ... | null | https://github.com/yt-dlp/yt-dlp/pull/9677 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/orf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}, "('ORFONIE', None, 570)": {'add': [585], 'mod': [572, 588]}, "('ORFONIE', '_extract_video', 588)": {'add': [606, 611], 'mod': [591, 598, 601]}, "('... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/orf.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2314b4d89fc111ddfcb25937210f1f1c2390cc4a | https://github.com/yt-dlp/yt-dlp/issues/4776 | bug | `InfoExtractor._get_cookies` fails if values contain quotes | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.08.19** ([update instructions](https://github.com/yt-dlp... | null | https://github.com/yt-dlp/yt-dlp/pull/4780 | null | {'base_commit': '2314b4d89fc111ddfcb25937210f1f1c2390cc4a', 'files': [{'path': 'test/test_cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('TestCookies', 'test_pbkdf2_sha1', 137)": {'add': [139]}}}, {'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add':... | [] | [] | [] | {
"iss_type": "2有点犹豫,出错了但是该报错是用于验证某个问题。",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py",
"yt_dlp/extractor/common.py"
],
"doc": [],
"test": [
"test/test_cookies.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221 | https://github.com/yt-dlp/yt-dlp/issues/2840 | site-request | Site support request for: ixigua.com | ### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided U... | null | https://github.com/yt-dlp/yt-dlp/pull/3953 | null | {'base_commit': 'a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [722]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 700444c23ddb65f618c2abd942acdc0c58c650b1 | https://github.com/yt-dlp/yt-dlp/issues/3355 | bug
patch-available
regression | problem with double-dot segments (`/../`) after the hostname | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've che... | null | https://github.com/yt-dlp/yt-dlp/pull/7662 | null | {'base_commit': '25b6e8f94679b4458550702b46e61249b875a4fd', 'files': [{'path': 'test/test_networking.py', 'status': 'modified', 'Loc': {"('HTTPTestRequestHandler', 'do_GET', 142)": {'add': [175]}, "('TestHTTPRequestHandler', None, 316)": {'add': [357]}}}, {'path': 'test/test_utils.py', 'status': 'modified', 'Loc': {'(N... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils/_legacy.py",
"yt_dlp/networking/_urllib.py",
"yt_dlp/utils/_utils.py",
"yt_dlp/utils/networking.py",
"yt_dlp/cookies.py",
"yt_dlp/networking/common.py"
],
"doc": [],
"test": [
"test/test_utils.py",
"test/test_networking.py"
],
"config": [],
"asset"... | 1 |
yt-dlp | yt-dlp | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | https://github.com/yt-dlp/yt-dlp/issues/11641 | site-bug
patch-available | [TikTok] ERROR: Postprocessing: Conversion failed! when embedding thumbnail | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instruc... | null | https://github.com/yt-dlp/yt-dlp/pull/11645 | null | {'base_commit': '4b5eec0aaa7c02627f27a386591b735b90e681a8', 'files': [{'path': 'yt_dlp/extractor/tiktok.py', 'status': 'modified', 'Loc': {"('TikTokBaseIE', '_parse_aweme_video_app', 322)": {'mod': [416, 417, 418, 419, 420, 421, 422, 423, 470]}, "('TikTokBaseIE', '_parse_aweme_video_web', 567)": {'mod': [603, 604, 605,... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tiktok.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | a40b0070c2a00d3ed839897462171a82323aa875 | https://github.com/yt-dlp/yt-dlp/issues/9003 | site-enhancement | [linkedin] yt-dlp see no subtitles but they exist (webvtt) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instruc... | null | https://github.com/yt-dlp/yt-dlp/pull/9056 | null | {'base_commit': 'a40b0070c2a00d3ed839897462171a82323aa875', 'files': [{'path': 'yt_dlp/extractor/linkedin.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 14, 15], 'mod': [6, 7, 10, 13]}, "('LinkedInIE', '_real_extract', 98)": {'add': [112], 'mod': [102, 103, 104, 105, 107, 117, 118, 119, 121]}, "('... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/linkedin.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | b965087396ddb2d40dfe5bc12391ee000945129d | https://github.com/yt-dlp/yt-dlp/issues/110 | PR-needed | zsh completions are not installed | ## Checklist
- [ ] I'm reporting a broken site support issue
- [x] I've verified that I'm running yt-dlp version **2021.02.24**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I... | null | https://github.com/yt-dlp/yt-dlp/pull/114 | null | {'base_commit': 'b965087396ddb2d40dfe5bc12391ee000945129d', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 7, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 64, 105, 108, 110, 113, 115, 118, 126, 139, 140, 141]}}}, {'path': 'devscripts/bash-completion.py', 'status': 'modifie... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"devscripts/bash-completion.py",
"devscripts/fish-completion.py",
"setup.py",
"devscripts/zsh-completion.py"
],
"doc": [],
"test": [],
"config": [
"Makefile"
],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 4f08e586553755ab61f64a5ef9b14780d91559a7 | https://github.com/yt-dlp/yt-dlp/issues/4409 | site-bug | ERROR: 03354: An extractor error has occurred. | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and sam... | null | https://github.com/yt-dlp/yt-dlp/pull/4416 | null | {'base_commit': '4f08e586553755ab61f64a5ef9b14780d91559a7', 'files': [{'path': 'yt_dlp/extractor/tubitv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "('TubiTvShowIE', '_entries', 130)": {'add': [137]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tubitv.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | c459d45dd4d417fb80a52e1a04e607776a44baa4 | https://github.com/yt-dlp/yt-dlp/issues/6029 | site-bug
patch-available | Chilloutzone: Unable to extract video data | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or l... | null | https://github.com/yt-dlp/yt-dlp/pull/6445 | null | {'base_commit': 'c459d45dd4d417fb80a52e1a04e607776a44baa4', 'files': [{'path': 'yt_dlp/extractor/chilloutzone.py', 'status': 'modified', 'Loc': {"('ChilloutzoneIE', None, 12)": {'add': [21, 33], 'mod': [13, 15, 25, 32, 36, 37, 38, 40, 42, 43, 44, 45, 46]}, "('ChilloutzoneIE', '_real_extract', 50)": {'add': [54], 'mod':... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/chilloutzone.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 68be95bd0ca3f76aa63c9812935bd826b3a42e53 | https://github.com/yt-dlp/yt-dlp/issues/6551 | good first issue
site-bug
patch-available | [youku] HTML in error message | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://g... | null | https://github.com/yt-dlp/yt-dlp/pull/6690 | null | {'base_commit': '68be95bd0ca3f76aa63c9812935bd826b3a42e53', 'files': [{'path': 'yt_dlp/extractor/youku.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('YoukuIE', None, 16)": {'add': [83], 'mod': [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55,... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/youku.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 8e6e3651727b0b85764857fc6329fe5e0a3f00de | https://github.com/yt-dlp/yt-dlp/issues/7520 | enhancement | ValueError: could not find firefox container "XYZ" in containers.json | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://g... | null | https://github.com/yt-dlp/yt-dlp/pull/9016 | null | {'base_commit': '8e6e3651727b0b85764857fc6329fe5e0a3f00de', 'files': [{'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "(None, '_extract_firefox_cookies', 117)": {'mod': [125, 127, 129, 131]}, "(None, '_firefox_browser_dir', 185)": {'mod': [185, 187, 189, 190]}, "(None, '_... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | aebb4f4ba78ec7542416832e9dd5e47788cb12aa | https://github.com/yt-dlp/yt-dlp/issues/4649 | site-request | https://nos.nl/ | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.08.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or l... | null | https://github.com/yt-dlp/yt-dlp/pull/4822 | null | {'base_commit': 'aebb4f4ba78ec7542416832e9dd5e47788cb12aa', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1182]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2530b68d4476fe6cb4b25897b906cbb1774ca7c9 | https://github.com/yt-dlp/yt-dlp/issues/5209 | site-request | Genius.com support request | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#... | null | https://github.com/yt-dlp/yt-dlp/pull/5221 | null | {'base_commit': '2530b68d4476fe6cb4b25897b906cbb1774ca7c9', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [631]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | d1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7 | https://github.com/yt-dlp/yt-dlp/issues/982 | geo-blocked | [Broken] TF1.fr multi-language videos: no detection of other languages than French | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check lis... | null | https://github.com/yt-dlp/yt-dlp/pull/3739 | null | {'base_commit': 'd1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7', 'files': [{'path': 'yt_dlp/extractor/wat.py', 'status': 'modified', 'Loc': {"('WatIE', '_real_extract', 47)": {'mod': [57]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/wat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | aa4b0545120becc11a5992384ce52c943da8ead5 | https://github.com/yt-dlp/yt-dlp/issues/1945 | site-bug | SonyLIV Premium Content giving 406 ERROR | ### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with spe... | null | https://github.com/yt-dlp/yt-dlp/pull/1959 | null | {'base_commit': 'aa4b0545120becc11a5992384ce52c943da8ead5', 'files': [{'path': 'yt_dlp/extractor/sonyliv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('SonyLIVIE', '_call_api', 61)": {'add': [69], 'mod': [62, 63, 64, 68]}, "('SonyLIVIE', None, 16)": {'mod': [59]}, "('SonyLIVIE', '_real_initia... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2由于项目不完善导致的报错",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/sonyliv.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4 | https://github.com/yt-dlp/yt-dlp/issues/9640 | site-request | Support NTS.live | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://gith... | null | https://github.com/yt-dlp/yt-dlp/pull/9641 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1334]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | 3cd7d84b53724a97c1436f70b6da6975e3d93484 | https://github.com/comfyanonymous/ComfyUI/issues/5627 | Potential Bug | Boolean value of Tensor with more than one value is ambiguous | ### Expected Behavior
Generate image using Pulid with flux model
### Actual Behavior
Stops generation. Few hours earlier everything was fine
### Steps to Reproduce
[Pulid_workglow_v1.json](https://github.com/user-attachments/files/17777510/Pulid_workglow_v1.json)
### Debug Logs
```powershell
# ComfyUI Error Re... | null | https://github.com/comfyanonymous/ComfyUI/pull/27 | null | {'base_commit': '3cd7d84b53724a97c1436f70b6da6975e3d93484', 'files': [{'path': 'webshit/index.html', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [274, 275, 276, 277, 278, 279, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 299, 301, 302, 303, 304, 305, 307, 308, 311, 312, 313, 315, 316, 318, 319... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"webshit/index.html"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | f7695b5f9e007136da72bd3e79d601e2814a3890 | https://github.com/comfyanonymous/ComfyUI/issues/5890 | Feature | Support wildcard type "*" in ComfyUI core | ### Feature Idea
There are many custom nodes that currently hacking the string comparison to achieve wildcard type ("*"). This implementation is very hacky and hard to debug. We should properly support wildcard types in ComfyUI core.
### Existing Solutions
- https://github.com/pythongosssss/ComfyUI-Custom-Scripts/bl... | null | https://github.com/comfyanonymous/ComfyUI/pull/5900 | null | {'base_commit': 'f7695b5f9e007136da72bd3e79d601e2814a3890', 'files': [{'path': 'execution.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, "(None, 'validate_inputs', 531)": {'mod': [592, 593]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"execution.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | f81dbe26e2e363c28ad043db67b59c11bb33f446 | https://github.com/comfyanonymous/ComfyUI/issues/2671 | Feature Request: Support Differential Diffusion for inpainting. | This is a nice alternative to standard inpainting, it allows for the mask to be a gradient for control of strength on top of denoising.
https://github.com/exx8/differential-diffusion | null | https://github.com/comfyanonymous/ComfyUI/pull/2876 | null | {'base_commit': 'f81dbe26e2e363c28ad043db67b59c11bb33f446', 'files': [{'path': 'comfy/samplers.py', 'status': 'modified', 'Loc': {"('KSamplerX0Inpaint', 'forward', 277)": {'add': [278]}}}, {'path': 'nodes.py', 'status': 'modified', 'Loc': {"(None, 'init_custom_nodes', 1936)": {'add': [1963]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"nodes.py",
"comfy/samplers.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
ageitgey | face_recognition | fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4 | https://github.com/ageitgey/face_recognition/issues/242 | imread() got an unexpected keyword argument 'mode' | * face_recognition version: 1.0.0
* Python version: 2.7
* Operating System: mac EI Capitan 10.11.6
### Description
after install the face_recognition, I tried to run examples/facerec_from_webcam_faster.py, but it show error as following:
Traceback (most recent call last):
File "/Users/johnwang/workspace/Pyc... | null | https://github.com/ageitgey/face_recognition/pull/383 | null | {'base_commit': 'fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4', 'files': [{'path': 'docs/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25]}}}, {'path': 'face_recognition/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'face_recognition/api.py', 'status': '... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"face_recognition/api.py",
"face_recognition/__init__.py",
"docs/conf.py",
"setup.py",
"setup.cfg",
"face_recognition/cli.py"
],
"doc": [],
"test": [
"tests/test_face_recognition.py"
],
"config": [],
"asset": []
} | 1 | |
ageitgey | face_recognition | 8322e7c00b7da9cbde8216c01d42330f03c5dcb9 | https://github.com/ageitgey/face_recognition/issues/59 | PIL/Image.py - ValueError: height and width must be > 0 | * face_recognition version: latest
* Python version: import dlib works for Python 2 and 3
* Operating System: Ubuntu 16.04.2 LTS
### Description
known_people directory has three images of each of four different people
pic1.jpg has 10 unidentified people in it, 2 of which are in known_people
pic2.jpg has 4 uni... | null | https://github.com/ageitgey/face_recognition/pull/65 | null | {'base_commit': '8322e7c00b7da9cbde8216c01d42330f03c5dcb9', 'files': [{'path': 'face_recognition/cli.py', 'status': 'modified', 'Loc': {"(None, 'test_image', 32)": {'mod': [37]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"face_recognition/cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
PaddlePaddle | PaddleOCR | 2062b5097ce6800a6dc23fcc1648e128a27d6353 | https://github.com/PaddlePaddle/PaddleOCR/issues/10223 | good first issue
status/close | 🏅️飞桨套件快乐开源常规赛 | ## 活动说明
飞桨套件快乐开源常规赛活动旨在让众多开发者能参与到各大CV/NLP套件的建设工作中(也是我们原有Issue攻关活动的升级版本),包括不限于新增基础功能、论文复现、Issue回复等,任何有利于社区意见流动和问题解决的行为都热切希望大家的参与。让我们共同成长为成为飞桨CV/NLP套件的重要contributors。🎉🎉
在套件快乐开源常规赛活动中,我们会结合技术研讨和任务发布两种活动形式互相促进。任何愿意参与社区贡献(新增代码、Issue解答等),对增长在分割、OCR方向(后续我们会持续开放包括图像检测、部署、图像分类、3D、自然语言处理等方向)知识感兴趣的开发者都可以加入😊。在这个过程中,**让大家保持对... | null | https://github.com/PaddlePaddle/PaddleOCR/pull/3261 | null | {'base_commit': '2062b5097ce6800a6dc23fcc1648e128a27d6353', 'files': [{'path': 'PPOCRLabel/PPOCRLabel.py', 'status': 'modified', 'Loc': {"('MainWindow', '__init__', 95)": {'add': [400], 'mod': [568]}, "('MainWindow', None, 92)": {'add': [762]}}}, {'path': 'PPOCRLabel/libs/utils.py', 'status': 'modified', 'Loc': {"(None... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"PPOCRLabel/libs/utils.py",
"PPOCRLabel/PPOCRLabel.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"PPOCRLabel/resources/strings/strings-zh-CN.properties",
"PPOCRLabel/resources/strings/strings.properties"
]
} | 1 |
AntonOsika | gpt-engineer | 19446faaa12743f0a2f729a7beab0e561626f530 | https://github.com/AntonOsika/gpt-engineer/issues/841 | bug
triage | ValueError: Could not parse following text as code edit: | ## Expected Behavior
Improve the code
## Current Behavior
Error gets thrown
## Failure Information
Traceback (most recent call last):
File "/home/riccardo/.local/bin/gpt-engineer", line 8, in <module>
sys.exit(app())
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/cli... | null | https://github.com/AntonOsika/gpt-engineer/pull/1005 | null | {'base_commit': '19446faaa12743f0a2f729a7beab0e561626f530', 'files': [{'path': 'gpt_engineer/applications/cli/file_selector.py', 'status': 'modified', 'Loc': {"('FileSelector', 'get_current_files', 327)": {'add': [354]}}}, {'path': 'gpt_engineer/core/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)'... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"tests/caching_ai.py",
"gpt_engineer/applications/cli/file_selector.py",
"projects/example-improve/controller.py",
"gpt_engineer/core/files_dict.py",
"gpt_engineer/core/default/steps.py",
"gpt_engineer/core/chat_to_files.py"
],
"doc": [],
"test": [
"tests/core/test_chat_t... | 1 |
AntonOsika | gpt-engineer | a248d8104eeb9deffc8c3819b376bfdcf6f8df83 | https://github.com/AntonOsika/gpt-engineer/issues/205 | good first issue | Run pytest in pre-commit | - Add requirement to pyproject.toml
- Setup `.pre-commit-config.yaml` config
- test that everything is working with `pre-commit run` and in github actions | null | https://github.com/AntonOsika/gpt-engineer/pull/210 | null | {'base_commit': 'a248d8104eeb9deffc8c3819b376bfdcf6f8df83', 'files': [{'path': '.github/workflows/pre-commit.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5], 'mod': [12, 29, 30, 31, 32]}}}, {'... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".pre-commit-config.yaml",
".github/workflows/pre-commit.yaml",
"requirements.txt",
"pyproject.toml"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | b27461a871c972ef1c6f080b4608331bc7b01255 | https://github.com/AntonOsika/gpt-engineer/issues/476 | [Feature] Using a open-source LLM instead of Open AI | null | null | https://github.com/AntonOsika/gpt-engineer/pull/639 | null | {'base_commit': 'b27461a871c972ef1c6f080b4608331bc7b01255', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [83]}}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {"(None, 'create_chat_model', 342)": {'mod': [368, 370, 371, 372, 373, 374, 375, 376, 377, 383]}}}]... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/ai.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | bf206a5a1abeaa2b274a799e96933869e02d4c0a | https://github.com/AntonOsika/gpt-engineer/issues/898 | bug | Incompatibility with Python 3.8 and 3.9: TypeError in file_store.py | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
The proje... | null | https://github.com/AntonOsika/gpt-engineer/pull/909 | null | {'base_commit': 'bf206a5a1abeaa2b274a799e96933869e02d4c0a', 'files': [{'path': 'gpt_engineer/applications/cli/cli_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "('CliAgent', 'improve', 125)": {'mod': [126]}}}, {'path': 'gpt_engineer/applications/cli/learning.py', 'status': 'modified', 'Lo... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/default/file_store.py",
"gpt_engineer/applications/cli/learning.py",
"gpt_engineer/core/default/simple_agent.py",
"gpt_engineer/core/default/disk_execution_env.py",
"gpt_engineer/core/files_dict.py",
"gpt_engineer/core/base_execution_env.py",
"gpt_engineer/ap... | 1 |
AntonOsika | gpt-engineer | 7020fea81bef927fe4184e351be12aedf32e7545 | https://github.com/AntonOsika/gpt-engineer/issues/758 | bug
sweep | UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 436: character maps to <undefined> | ## Expected Behavior
gpt-engineer "path" -i command to work properly
## Current Behavior
Error after "Press enter to proceed with modifications."
### Steps to Reproduce
windows
python 3.9
### Failure Logs
Traceback (most recent call last):
File "C:\tools\Anaconda3\envs\gpteng\lib\runpy.py", ... | null | https://github.com/AntonOsika/gpt-engineer/pull/801 | null | {'base_commit': '7020fea81bef927fe4184e351be12aedf32e7545', 'files': [{'path': 'gpt_engineer/core/chat_to_files.py', 'status': 'modified', 'Loc': {"(None, 'get_code_strings', 140)": {'mod': [179]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/chat_to_files.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | ebfa59e4f462b1503d9706d3282a6b9751b3dcd7 | https://github.com/AntonOsika/gpt-engineer/issues/754 | bug | the code fails after giving additional information at the questions. | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
That I ... | null | https://github.com/AntonOsika/gpt-engineer/pull/769 | null | {'base_commit': 'ebfa59e4f462b1503d9706d3282a6b9751b3dcd7', 'files': [{'path': 'gpt_engineer/core/ai.py', 'status': 'modified', 'Loc': {"('AI', 'deserialize_messages', 329)": {'mod': [343]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/ai.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32 | https://github.com/AntonOsika/gpt-engineer/issues/786 | bug | gpt-engineer doesn't respect the COLLECT_LEARNINGS_OPT_OUT=true env variable | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
When ... | null | https://github.com/AntonOsika/gpt-engineer/pull/806 | null | {'base_commit': 'dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32', 'files': [{'path': 'gpt_engineer/cli/learning.py', 'status': 'modified', 'Loc': {"(None, 'check_consent', 149)": {'add': [161], 'mod': [149, 157, 165, 168]}, "(None, 'human_review_input', 96)": {'mod': [106]}, "(None, 'collect_consent', 172)": {'mod': [172, 17... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/cli/learning.py",
"gpt_engineer/cli/main.py"
],
"doc": [],
"test": [
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 2058edb3cfb8764cf642d73035af4bb6c783b7e5 | https://github.com/AntonOsika/gpt-engineer/issues/670 | enhancement
good first issue | Make improve flag less intrusive by moving over files like "all_output.txt" and "file_list" to the .gpteng folder | This is done by simply using the new DB in #665 and writing to it | null | https://github.com/AntonOsika/gpt-engineer/pull/720 | null | {'base_commit': '2058edb3cfb8764cf642d73035af4bb6c783b7e5', 'files': [{'path': 'gpt_engineer/db.py', 'status': 'modified', 'Loc': {"('DBs', None, 118)": {'add': [124]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 27)": {'add': [78], 'mod': [66, 68]}}}, {'path': 'gpt_engineer/steps.p... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/db.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/steps/test_archive.py",
"tests/test_db.py",
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | f84754d54ee311146c4f52b5e3ceb0fa8d0b731b | https://github.com/AntonOsika/gpt-engineer/issues/563 | It's only using python... | ## Expected Behavior
I've seen 3 or 4 issues here asking if gpt-engineer could use languages other than python. the answer was always something like "yes, of course, it's chatgpt writing the code, so you can use everything"
## Current Behavior
no matter what i do, it is always using python. even if i explicitl... | null | https://github.com/AntonOsika/gpt-engineer/pull/568 | null | {'base_commit': 'f84754d54ee311146c4f52b5e3ceb0fa8d0b731b', 'files': [{'path': 'gpt_engineer/preprompts/philosophy', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 4, 5, 6, 7]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"gpt_engineer/preprompts/philosophy"
]
} | 1 | |
AntonOsika | gpt-engineer | e55f84041c522b03ce09c958deb9822095b3e84e | https://github.com/AntonOsika/gpt-engineer/issues/943 | documentation | Instructions for running it with local models is lacking. | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Description
Instruction... | null | https://github.com/AntonOsika/gpt-engineer/pull/1082 | null | {'base_commit': '164730a5b933ec0ebc9003c72f60e58176ef0dc6', 'files': [{'path': 'docs/open_models.md', 'status': 'modified', 'Loc': {'(None, None, 17)': {'add': [17]}, '(None, None, 21)': {'add': [21]}, '(None, None, 4)': {'mod': [4]}, '(None, None, 9)': {'mod': [9]}, '(None, None, 12)': {'mod': [12]}, '(None, None, 14)... | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/applications/cli/main.py"
],
"doc": [
"docs/open_models.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85 | https://github.com/AntonOsika/gpt-engineer/issues/650 | enhancement
good first issue | Fix the "improve" prompt to make sure that it generates diffs, and parse and apply those diffs to the existing codebase | One way to do this is to write the prompt for gpt-engineer with `-i` flag to annotate each codeblock with one of:
1. `NEW CODE`
2. `REPLACING ONE FUNCTION`
If 1., the generated code can just be written to a new file (or appended to an existing file).
If it is replacing an existing function, we could make sure... | null | https://github.com/AntonOsika/gpt-engineer/pull/714 | null | {'base_commit': '29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [55, 56, 57, 58, 59, 62, 63, 64, 65]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 8e95858f3867faf1198c0631bd060172991bb523 | https://github.com/AntonOsika/gpt-engineer/issues/872 | enhancement
triage | Default launch command is too cumbersome | ## Policy and info
- good first issue
## Feature description
Currently, to use the tool `gpt-engineer` command has to be used. Although this can be resolved using an alias, would be nice to have a command such as `gpte` be available by default.
Can refer https://clig.dev/#naming for more details.
## Motivat... | null | https://github.com/AntonOsika/gpt-engineer/pull/889 | null | {'base_commit': '8e95858f3867faf1198c0631bd060172991bb523', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [74], 'mod': [64, 65, 70, 71]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | b60185ac6a02c1366324221eb143c9e37a64f1e6 | https://github.com/AntonOsika/gpt-engineer/issues/718 | Separate `core` and `cli` into separate modules (directories) and only allow cli to import from core | The idea is to separate the core logic and CLI UX specific things. To make it easier to take decisions on what makes sense from UX perspective, and how the core building blocks should work.
Would look something like:
```
gpt_engineer
├── core
│ ├── ai.py
│ ├── domain.py
│ ├── chat_to_files.py
│ ├── ... | null | https://github.com/AntonOsika/gpt-engineer/pull/766 | null | {'base_commit': 'fb35323551c3404283fdb04297f961a05a587caf', 'files': [{'path': 'evals/evals_existing_code.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14, 15]}}}, {'path': 'evals/evals_new_code.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14]}}}, {'path': 'gpt_engineer/api.py',... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"evals/evals_new_code.py",
"gpt_engineer/learning.py",
"gpt_engineer/db.py",
"evals/evals_existing_code.py",
"gpt_engineer/ai.py",
"gpt_engineer/chat_to_files.py",
"gpt_engineer/main.py",
"scripts/rerun_edited_message_logs.py",
"gpt_engineer/api.py",
"gpt_engineer/s... | 1 | |
AntonOsika | gpt-engineer | ba00896c5673990923abd0e99dba147938871512 | https://github.com/AntonOsika/gpt-engineer/issues/79 | Analysis - Give context of a project to GPT Engineer | GPT Engineer is amazing. But right now the purpose is for small projects, projects where you need little implementations or requirements.
But... What about to give a full context of a project? If ChatGPT can understand what methods and classes has some projects on GitHub or packages in npm, maybe he can have a fully... | null | https://github.com/AntonOsika/gpt-engineer/pull/465 | null | {'base_commit': 'ba00896c5673990923abd0e99dba147938871512', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'to_files', 37)": {'add': [42]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/chat_to_files.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 0596b07a39c2c99c46509c17660f5c8aef4b2114 | https://github.com/AntonOsika/gpt-engineer/issues/388 | good first issue | Remove "run_id" and "delete_existing" options: instead move old memory/workspace folder to "archive" by default | The first step in the main file would be to check for memory folder and workspace, if they exist create a new folder in "archive" e.g. with the name "currentdate_currenttime", and move everything there.
This would make main.py much nicer, and make it clearly defined that all files, apart from `archive` folder, in th... | null | https://github.com/AntonOsika/gpt-engineer/pull/409 | null | {'base_commit': '0596b07a39c2c99c46509c17660f5c8aef4b2114', 'files': [{'path': 'gpt_engineer/db.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('DBs', None, 44)": {'add': [49]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 19)": {'add': [53, 59], 'mod': [21, ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/db.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/test_db.py",
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | dc7a2bd0f546ea29929faa57b8e618c413c86bb2 | https://github.com/AntonOsika/gpt-engineer/issues/582 | triage | RuntimeError: ('Message exceeds %skb limit. (%s)', AFTER it asks me to run the code | I am running a quite complex prompt to create a python app with a PSQL DB backend. I already have the whole DB schema ready and pasted it into the prompt.
## Expected Behavior
the app is created according to my prompt.
## Current Behavior
Only a part of the files are created, then it asks me to run the code ... | null | https://github.com/AntonOsika/gpt-engineer/pull/632 | null | {'base_commit': 'dc7a2bd0f546ea29929faa57b8e618c413c86bb2', 'files': [{'path': 'gpt_engineer/collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'send_learning', 11)": {'mod': [31, 32, 33, 34, 35]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/collect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 66cd09c789bfcae57e144fcaea86050b97230f18 | https://github.com/AntonOsika/gpt-engineer/issues/150 | bug | AttributeError: 'tuple' object has no attribute 'expandtabs' | I'm getting the following error when running `python -m gpt_engineer.main`. I'm using python 3.11/
```
File "/opt/miniconda3/envs/gpt-eng/lib/python3.11/inspect.py", line 873, in cleandoc
lines = doc.expandtabs().split('\n')
^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'expa... | null | https://github.com/AntonOsika/gpt-engineer/pull/152 | null | {'base_commit': '66cd09c789bfcae57e144fcaea86050b97230f18', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'chat', 16)": {'mod': [21]}}}, {'path': 'identity/generate', 'status': 'modified', 'Loc': {}}, {'path': 'scripts/benchmark.py', 'status': 'modified', 'Loc': {"(None, 'main', 13)":... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/main.py",
"scripts/benchmark.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"identity/generate"
]
} | 1 |
AntonOsika | gpt-engineer | 6ccd05ab65dcd83d6057c6c068a3f5290ab09176 | https://github.com/AntonOsika/gpt-engineer/issues/49 | GPT4ALL support or open source models | OpenAI's model 3.5 breaks frequently and is low quality in general.
Falcon, Vicuna, Hermes and more should be supported as they're open source, free, and moving away from paid closed source is good practice and opens applications to huge user base who wants free access to these tools. | null | https://github.com/AntonOsika/gpt-engineer/pull/63 | null | {'base_commit': '6ccd05ab65dcd83d6057c6c068a3f5290ab09176', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [7]}, "('AI', 'next', 42)": {'add': [63], 'mod': [46, 48, 50, 51, 60, 61, 62]}, "('AI', No... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/chat_to_files.py",
"gpt_engineer/ai.py",
"gpt_engineer/main.py",
"scripts/rerun_edited_message_logs.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/test_ai.py"
],
"config": [
".gitignore",
"pyproject.toml",
"requirements.txt"
],
"... | 1 | |
AntonOsika | gpt-engineer | dc7a2bd0f546ea29929faa57b8e618c413c86bb2 | https://github.com/AntonOsika/gpt-engineer/issues/530 | Using gpt-engineer with Azure OpenAI |
Hi, I am trying to test gpt-engineer by using Azure OpenAI but I am getting authentication error. I have added all the additional details that are required for the Azure OpenAI like api_base url, model, etc. in the python file ai.py in the gpt_engineer folder. Am I missing out something can you please help me out wi... | null | https://github.com/AntonOsika/gpt-engineer/pull/640 | null | {'base_commit': 'dc7a2bd0f546ea29929faa57b8e618c413c86bb2', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [57]}}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {"('AI', '__init__', 40)": {'add': [54], 'mod': [52, 53]}, "(None, 'create_chat_model', 338)": {'ad... | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/ai.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py",
"gpt_engineer/learning.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 3e589bf1356024fb471a9d17738e4626f21a953b | https://github.com/AntonOsika/gpt-engineer/issues/1143 | enhancement
good first issue
triage | Add GPTE CLI argument to output system information | When running GPTE, it will be quite helpful to be able to quickly generate useful system information for use in debugging issues.
For example, this should be invoked as `gpte --sysinfo`.
This invocation should output system information in a standardized and useful way, so that users can readily copy and paste the... | null | https://github.com/AntonOsika/gpt-engineer/pull/1169 | null | {'base_commit': '3e589bf1356024fb471a9d17738e4626f21a953b', 'files': [{'path': 'gpt_engineer/applications/cli/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28, 30, 239]}, "(None, 'main', 250)": {'add': [331, 371, 382]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/applications/cli/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | e7e329211655d08e48d04ce828f929c9108050ad | https://github.com/AntonOsika/gpt-engineer/issues/14 | exporting the api key to the environment doesn't work for me | I can't get the export command to work, so an alternative solution like using an extern file or hardcoding the api in the code would be a nice solution. I personally created an external json config file and parsed the api key from that to the python script.
So a solution could be:
1) Make a json file named "conf... | null | https://github.com/AntonOsika/gpt-engineer/pull/22 | null | {'base_commit': 'e7e329211655d08e48d04ce828f929c9108050ad', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}, {'path': 'ai.py', 'status': 'modified', 'Loc': {'(None, None, None)'... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ai.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
".gitignore"
],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 164730a5b933ec0ebc9003c72f60e58176ef0dc6 | https://github.com/AntonOsika/gpt-engineer/issues/819 | enhancement | Automatic benchmarking of gpt-engineer with APPS | ## Feature description
gpt-engineer has an automatic evals suite in "evals/eval_new_code.py". However, only 2 test cases are given in evals/new_code_eval.yaml . An alternative to filling in more testcases manually, we should parse in prompts and tests from the (very large) APPS dataset (https://paperswithcode.com/data... | null | https://github.com/AntonOsika/gpt-engineer/pull/1051 | null | {'base_commit': '164730a5b933ec0ebc9003c72f60e58176ef0dc6', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [90]}}}, {'path': 'gpt_engineer/benchmark/benchmarks/load.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12, 19]}}}, {'path': 'gpt_engineer/benchmar... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/benchmark/types.py",
"gpt_engineer/benchmark/benchmarks/load.py",
"gpt_engineer/benchmark/run.py"
],
"doc": [],
"test": [],
"config": [
".gitignore",
"poetry.lock"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 1ad0892697e8468939a914f12bbf7378a1e045a2 | https://github.com/AntonOsika/gpt-engineer/issues/914 | enhancement | Automatic benchmarking of gpt-engineer with MBPP | ## Feature description
We have a way to easily add benchmarks:
https://www.loom.com/share/206805143fbb4302b5455a5329eaab17?sid=f689608f-8e49-44f7-b55f-4c81e9dc93e6
This issue is about looking into if [mbpp](https://huggingface.co/datasets/mbpp) is a good benchmark to add and then add a simple version of it. | null | https://github.com/AntonOsika/gpt-engineer/pull/1103 | null | {'base_commit': '1ad0892697e8468939a914f12bbf7378a1e045a2', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [93]}}}, {'path': 'gpt_engineer/benchmark/__main__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30]}, "(None, 'main', 54)": {'add': [89]}}}, {'pat... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/benchmark/benchmarks/load.py",
"gpt_engineer/benchmark/benchmarks/apps/load.py",
"gpt_engineer/benchmark/__main__.py"
],
"doc": [],
"test": [],
"config": [
".gitignore"
],
"asset": []
} | 1 |
lllyasviel | Fooocus | d16a54edd69f82158ae7ffe5669618db33a01ac7 | https://github.com/lllyasviel/Fooocus/issues/2863 | bug | [Bug]: app-1 | sh: 1: /content/entrypoint.sh: not found (docker compose) | ### Checklist
- [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [ ] The issue exists on a clean installation of Fooocus
- [ ] The issue exists in the current version of Fooocus
- [ ] The issue has not been reported before r... | null | https://github.com/lllyasviel/Fooocus/pull/2865 | null | {'base_commit': 'd16a54edd69f82158ae7ffe5669618db33a01ac7', 'files': [{'path': 'entrypoint.sh', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33], 'mod': [1]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"entrypoint.sh"
]
} | 1 |
lllyasviel | Fooocus | 179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7 | https://github.com/lllyasviel/Fooocus/issues/1247 | Canvas zoom for the inpainting canvas | Can we get a canvas zoom feature similar to what https://github.com/richrobber2/canvas-zoom provides for A1111?
Fooocus has by far the best inpainting/outpainting backend. It would be nice if the frontend was spruced up a bit too. | null | https://github.com/lllyasviel/Fooocus/pull/1428 | null | {'base_commit': '179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7', 'files': [{'path': 'css/style.css', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [96]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"css/style.css"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
odoo | odoo | 4213eebe2ebe6b0c81580176b263aeee9fa6a3fd | https://github.com/odoo/odoo/issues/304 | Bug 1089229: Wrong treatment of UoS among objects | **Impacted versions:**
6.1 and above
**Steps to reproduce:**
See https://bugs.launchpad.net/openobject-addons/+bug/1089229
**Current behavior:**
- If you change units of sale (uos) quantity in sales order, uom quantity is not recalculated, thus breaking the relation between uom and uos (uos_coeff).
- If you change th... | null | https://github.com/odoo/odoo/pull/7311 | null | {'base_commit': '4213eebe2ebe6b0c81580176b263aeee9fa6a3fd', 'files': [{'path': 'addons/sale_stock/sale_stock_view.xml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49, 50, 51, 52, 53]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"addons/sale_stock/sale_stock_view.xml"
]
} | 1 | |
binary-husky | gpt_academic | 2a003e8d494bdfb3132dd40dc8d7face7e52be49 | https://github.com/binary-husky/gpt_academic/issues/1697 | ToDo | [Feature]: 接入"gpt-4-turbo-2024-04-09"模型 | ### Class | 类型
程序主体
### Feature Request | 功能请求
能不能接入gpt-4-turbo-2024-04-09和gpt-4-0125-preview这两个模型。 | null | https://github.com/binary-husky/gpt_academic/pull/1698 | null | {'base_commit': '2a003e8d494bdfb3132dd40dc8d7face7e52be49', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [35]}}}, {'path': 'request_llms/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [202]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"request_llms/bridge_all.py",
"config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | 8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff | https://github.com/deepfakes/faceswap/issues/32 | ImportError: No module named pathlib | I have already installed pathlib in python3.6:Requirement already satisfied: pathlib in /usr/local/lib/python3.6/dist-packages
Command executed: python3 faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
Traceback (most recent call last):
File "faceswap.py", line 3, in <module>
from ... | null | https://github.com/deepfakes/faceswap/pull/33 | null | {'base_commit': '8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff', 'files': [{'path': 'USAGE.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [39, 41, 55]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"USAGE.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | 47b43191031d0901371d0be362fcccdf547cb4e5 | https://github.com/deepfakes/faceswap/issues/306 | enhancement | Is it possible to implement occlusion masks to original model? | I think GAN model's most interesting feature is occlusion masks. But original model is more stable than GAN and the output of GAN code here is not good. So my question is can we implement this occlusion mask feature to original model? Or is it exclusive to GAN? | null | https://github.com/deepfakes/faceswap/pull/576 | null | {'base_commit': '47b43191031d0901371d0be362fcccdf547cb4e5', 'files': [{'path': '.github/ISSUE_TEMPLATE.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': '.github/ISSUE_TEMPLATE/bug_report.md', 'status': 'removed', 'Loc': {}}, {'path': '.github/ISSUE_TEMPLATE/feature_request.md', 'status'... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gui/options.py",
"lib/gui/display_page.py",
"lib/alignments.py",
"lib/sysinfo.py",
"lib/config.py",
"lib/logger.py",
"lib/keypress.py",
"lib/utils.py",
"lib/model/normalization.py",
"lib/umeyama.py",
"lib/model/initializers.py",
"lib/aligner.py",
"l... | 1 |
deepfakes | faceswap | 3f04e8cd06e1816e6aa87f3826ebb919cfa983b2 | https://github.com/deepfakes/faceswap/issues/279 | Sharpening the face before applying it | Sharpen by multiplying every pixel by 2, and then subtracting the average value of the neighborhood from it.
I modified Convert_Masked.py and I find the face less blurry on closeups on hi-res pics, though it's a bit too sharp on normal/low res compared to the rest of the image.
YMMV.
```
def apply_new_face(s... | null | https://github.com/deepfakes/faceswap/pull/285 | null | {'base_commit': '3f04e8cd06e1816e6aa87f3826ebb919cfa983b2', 'files': [{'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {"('Convert', '__init__', 9)": {'add': [20]}, "('Convert', None, 8)": {'mod': [9]}, "('Convert', 'apply_new_face', 36)": {'mod': [42]}}}, {'path': 'scripts/convert.py', 'status': 'mod... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"plugins/Convert_Masked.py",
"scripts/convert.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | a561f5b78bf09e785686b500c4825641b0823791 | https://github.com/deepfakes/faceswap/issues/628 | Increase training_data generation speed | For some settings the training_data generation takes a long time especially "warp to landmarks" is pretty slow.
IMO using multiprocessing would speed stuff a lot.
But there is also some stuff that could be cached, like `get_closest_match` (used in warp to landmarks).
I did some quick and dirty profiling.
See http... | null | https://github.com/deepfakes/faceswap/pull/690 | null | {'base_commit': 'a561f5b78bf09e785686b500c4825641b0823791', 'files': [{'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {"('TrainingDataGenerator', '__init__', 23)": {'add': [34]}, "('TrainingDataGenerator', 'load_batches', 84)": {'add': [89]}, '(None, None, None)': {'mod': [7]}, "('TrainingDataGenerator', ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/training_data.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | b057b719ce5665590beb3ba1782721bc6257963a | https://github.com/deepfakes/faceswap/issues/1143 | bug | Disabling AMD and CUDA sets backed to "cpu" in config, but running faceswap -h still tries to load CUDA | Turning off all GPU related config items during setup does create config/.faceswap, which contains {"backend": "cpu"}.
However, running faceswap.py -h throws an exception and terminates the program:
Setting Faceswap backend to CPU
Traceback (most recent call last):
File "faceswap.py", line 6, in <module>
... | null | https://github.com/deepfakes/faceswap/pull/1216 | null | {'base_commit': 'b057b719ce5665590beb3ba1782721bc6257963a', 'files': [{'path': 'INSTALL.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22, 147], 'mod': [57]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22, 40]}}}, {'path': 'lib/gpu_stats/__init__.py', 'status... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gpu_stats/__init__.py",
"setup.py",
"lib/utils.py"
],
"doc": [
"README.md",
"INSTALL.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | b057b719ce5665590beb3ba1782721bc6257963a | https://github.com/deepfakes/faceswap/issues/1197 | Please Support for Apple M1 pro/max | As we know, Apple release New silicon named M1 pro/max.
It has powerful GPUs and CPUs.
Is there any chance to run FaceSwap on new Mac book pro?
| null | https://github.com/deepfakes/faceswap/pull/1216 | null | {'base_commit': 'b057b719ce5665590beb3ba1782721bc6257963a', 'files': [{'path': 'INSTALL.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22, 147], 'mod': [57]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22, 40]}}}, {'path': 'lib/gpu_stats/__init__.py', 'status... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gpu_stats/__init__.py",
"setup.py",
"lib/utils.py"
],
"doc": [
"README.md",
"INSTALL.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | 85c5e8b66c00b096c31f416cc4954d611c3fdb14 | https://github.com/deepfakes/faceswap/issues/39 | bug
good first issue
dev
performance | Don't reload models everytime `convert_one_image` is called | ## Expected behavior
Use the convert command to convert a directory. `convert_one_image` loads the model once.
## Actual behavior
Use the convert command to convert a directory. `convert_one_image` loads the model every time that it is called.
| null | https://github.com/deepfakes/faceswap/pull/52 | null | {'base_commit': '85c5e8b66c00b096c31f416cc4954d611c3fdb14', 'files': [{'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 8, 17, 18, 19, 20]}}}, {'path': 'lib/DetectedFace.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/aligner.py', 'status': 'modified', 'Loc': {"(None, 'get_alig... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/aligner.py",
"lib/model.py",
"lib/training_data.py",
"plugins/Convert_Adjust.py",
"plugins/Extract_Align.py",
"plugins/Extract_Crop.py",
"scripts/train.py",
"faceswap.py",
"plugins/PluginLoader.py",
"plugins/Convert_Masked.py",
"lib/DetectedFace.py",
"l... | 1 |
deepfakes | faceswap | 9438672b1cf80602fc93536670d9601d655377f5 | https://github.com/deepfakes/faceswap/issues/213 | code to integrate | check for duplicates in extract folder | Hello all,
I have been having trouble with cloud servers shutting down unexpectedly so I edited the original `extract.py` to not overwrite if the image has already been processed in a previous run.
Note that I am currently assuming an `idx` of `0` (i.e. single face was found in photo, usually denoting successful fa... | null | https://github.com/deepfakes/faceswap/pull/214 | null | {'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'process_arguments', 39)": {'add': [53], 'mod': [56]}, "('DirectoryProcessor', 'write_alignments', 80)": {'add': [84]}, "('DirectoryProcessor', 'get_faces_alignments', 105)"... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/cli.py",
"lib/utils.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | b4ce0b910cd7265d046923162c922be840fa60c8 | https://github.com/3b1b/manim/issues/1677 | bug | Questionable indexing of Tex | ### Describe the bug
When I made videos, I used many math equations with specific variables or sub-expressions colored, and noticed there were some bugs in manim dealing with indexing components in `Tex` mobjects. Recently I'm trying to refactor the code of `Tex` class and fix bugs concerning with coloring, so I dive ... | null | https://github.com/3b1b/manim/pull/1678 | null | {'base_commit': 'b4ce0b910cd7265d046923162c922be840fa60c8', 'files': [{'path': 'manimlib/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 05db6174e9d677fe26eb863592d88e5cf02cf8cb | https://github.com/3b1b/manim/issues/28 | Windows 10 - No module named . (period) | I've tried the python extract_scene.py -p example_scenes.py SquareToCircle example on cmd, and I get the above error.
*I've looked around and it seems that a few a people have had this problem, but I can't find any one who has a solution. .(period) is syntax for relative import, but I don't know how to fix from there... | null | https://github.com/3b1b/manim/pull/38 | null | {'base_commit': '05db6174e9d677fe26eb863592d88e5cf02cf8cb', 'files': [{'path': 'extract_scene.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 161]}, "(None, 'get_module', 154)": {'mod': [154, 156, 158]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"extract_scene.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 43d28a8595450d39f800f650c25a7570b228db44 | https://github.com/3b1b/manim/issues/627 | Text rendering problem | ### Steps to reproduce
```
from manimlib.imports import *
class Playground(Scene):
def construct(self):
text = TextMobject("print('Hello, world!')",
tex_to_color_map={'print': YELLOW})
self.play(FadeIn(text))
```
### The unexpected behavior that occurred
Notice... | null | https://github.com/3b1b/manim/pull/628 | null | {'base_commit': '43d28a8595450d39f800f650c25a7570b228db44', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'status': 'modified', 'Loc': {"('TextMobject', None, 241)": {'add': [244]}, "('TexMobject', 'break_up_tex_strings', 152)": {'mod': [160]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/mobject/svg/tex_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 3362f93964cae6f610a47d2da0e076b51a9eab42 | https://github.com/3b1b/manim/issues/1017 | Text(" ") don't move. Because of that Text("a b") shows wrong transform animation. | ```python
class test(Scene):
def construct(self):
text = Text(" ")
text.to_corner(DOWN+LEFT)
rect = SurroundingRectangle(text)
self.add(text,rect)
```
## Output
": {'mod': [90, 91, 92]}}}, {'path': 'manimlib/mobject/svg/text_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'a... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/mobject/svg/svg_mobject.py",
"manimlib/mobject/svg/text_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 994749ceadf9f87f2ebe40bbb795fbb2b696f377 | https://github.com/3b1b/manim/issues/39 | Python version problem? | While running the demo, ( python extract_scene.py -p example_scenes.py SquareToCirclepython extract_scene.py -p example_scenes.py SquareToCircle ) I get the following exception:
File "extract_scene.py", line 46
print str(err)
^
SyntaxError: invalid syntax
I believe it is somehow related to p... | null | https://github.com/3b1b/manim/pull/97 | null | {'base_commit': '994749ceadf9f87f2ebe40bbb795fbb2b696f377', 'files': [{'path': 'active_projects/WindingNumber.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'point_to_rev', 381)": {'add': [384], 'mod': [381, 386]}, "('TestDual', 'construct', 86)": {'mod': [88]}, "(None, 'split_interval',... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"eop/bayes.py",
"mobject/svg_mobject.py",
"mobject/tex_mobject.py",
"animation/playground.py",
"eop/bayes_footnote.py",
"mobject/__init__.py",
"active_projects/WindingNumber.py",
"animation/transform.py",
"eop/combinations.py",
"helpers.py",
"animation/__init__.... | 1 | |
All-Hands-AI | OpenHands | cf439fa89cf45a5462336a10c3dfee4ab4c0ace8 | https://github.com/All-Hands-AI/OpenHands/issues/7060 | bug
openhands | [Bug]: Obsolete attribute in a unit test file | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Describe the bug and reproduction steps
openhands-agent,
The file test_long_term_memory.py uses an attribute 'micro_agent_name' which is obsolete and has been removed from AgentConfig.
Please remove it from the tests too... | null | https://github.com/All-Hands-AI/OpenHands/pull/7061 | null | {'base_commit': 'cf439fa89cf45a5462336a10c3dfee4ab4c0ace8', 'files': [{'path': 'tests/unit/test_long_term_memory.py', 'status': 'modified', 'Loc': {"(None, 'mock_agent_config', 24)": {'mod': [26]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [
"tests/unit/test_long_term_memory.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 6e3b554317de7bc5d96ef81b4097287e05c0c4d0 | https://github.com/All-Hands-AI/OpenHands/issues/226 | enhancement
backend | Redesign docker sandbox | **What problem or use case are you trying to solve?**
We're using `exec_run` to run commands in the sandbox. This isn't stateful, and doesn't handle CLI interactions via stdin very well.
Things we struggle with today:
* We don't keep track of cd commands
* The agent can't interact with stdin (e.g. it runs apt-get... | null | https://github.com/All-Hands-AI/OpenHands/pull/847 | null | {'base_commit': '6e3b554317de7bc5d96ef81b4097287e05c0c4d0', 'files': [{'path': 'opendevin/sandbox/Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16, 17]}}}, {'path': 'opendevin/sandbox/Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'opendevin/sandbox/s... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"opendevin/sandbox/sandbox.py"
],
"doc": [],
"test": [],
"config": [
"pyproject.toml",
"opendevin/sandbox/Makefile",
"opendevin/sandbox/Dockerfile",
"poetry.lock"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 07f0d1ccb347d1c67a189d53c7147916d05cd528 | https://github.com/All-Hands-AI/OpenHands/issues/4783 | bug
fix-me | [Bug]: Tool call metadata should NOT be None when function calling is enabled | ### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
1. Manually run command in the client terminal (e.g., `pwd`)
2. Error is thrown
### OpenHands Installation
Docker command in README
### OpenHands Version
main
### Operating Sys... | null | https://github.com/All-Hands-AI/OpenHands/pull/4955 | null | {'base_commit': '07f0d1ccb347d1c67a189d53c7147916d05cd528', 'files': [{'path': 'openhands/agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {"('CodeActAgent', 'get_action_message', 112)": {'add': [186], 'mod': [151, 156]}, "('CodeActAgent', 'get_observation_message', 189)": {'mod': [222, 223, 224]}... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/agenthub/codeact_agent/codeact_agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 123968f887a5eb101b549472805e4b9e4ac7bce0 | https://github.com/All-Hands-AI/OpenHands/issues/1686 | bug
severity:low | [Bug]: Error creating controller | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
I followed the quickstart guide and was able to open the UI, but I keep getting "Err... | null | https://github.com/All-Hands-AI/OpenHands/pull/1788 | null | {'base_commit': '123968f887a5eb101b549472805e4b9e4ac7bce0', 'files': [{'path': 'containers/app/Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"containers/app/Dockerfile"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 32ee6a5a646454a9dc2dae43275313e2d6f77073 | https://github.com/All-Hands-AI/OpenHands/issues/6440 | bug | [Bug]: KeyError: 'ExposedPorts' | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Describe the bug and reproduction steps
```
23:07:30 - openhands:ERROR: session.py:128 - Error creating agent_session: 'ExposedPorts'
Traceback (most recent call last):
File "/workspaces/OpenHands/openhands/server/sessio... | null | https://github.com/All-Hands-AI/OpenHands/pull/6460 | null | {'base_commit': '32ee6a5a646454a9dc2dae43275313e2d6f77073', 'files': [{'path': 'openhands/core/config/sandbox_config.py', 'status': 'modified', 'Loc': {"('SandboxConfig', None, 6)": {'mod': [75]}}}, {'path': 'openhands/runtime/impl/docker/docker_runtime.py', 'status': 'modified', 'Loc': {"('DockerRuntime', '_attach_to_... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/impl/docker/docker_runtime.py",
"openhands/core/config/sandbox_config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | f9088766e826e208195345a7fcde4920a87df3dd | https://github.com/All-Hands-AI/OpenHands/issues/3527 | bug | [Bug]: openhands-ai Python package requires agenthub | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
When attempting to use the openhands-ai package from PyPi, I encounter an issue that `ag... | null | https://github.com/All-Hands-AI/OpenHands/pull/3548 | null | {'base_commit': 'f9088766e826e208195345a7fcde4920a87df3dd', 'files': [{'path': 'openhands/runtime/utils/runtime_build.py', 'status': 'modified', 'Loc': {"(None, '_create_project_source_dist', 34)": {'mod': [62]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 45], 'mod': [2... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/utils/runtime_build.py"
],
"doc": [],
"test": [
"tests/unit/test_runtime_build.py"
],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 356caf0960df558be438f8c3e357e808c0619238 | https://github.com/All-Hands-AI/OpenHands/issues/1514 | enhancement
severity:low | Micro-agent: typo checker | **What problem or use case are you trying to solve?**
Micro-agents are small agents that specialize in one field. You don't have to write code to define a new micro-agent! Take a look at existing micro-agents: https://github.com/OpenDevin/OpenDevin/tree/main/agenthub/micro
We could add a new micro-agent that scan... | null | https://github.com/All-Hands-AI/OpenHands/pull/1613 | null | {'base_commit': '356caf0960df558be438f8c3e357e808c0619238', 'files': [{'path': 'agenthub/micro/agent.py', 'status': 'modified', 'Loc': {"(None, 'parse_response', 16)": {'mod': [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"agenthub/micro/agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 97e938d5450728128ccbf896ecbc5963ac223012 | https://github.com/All-Hands-AI/OpenHands/issues/6382 | bug
awaiting release | [Bug]: The sandbox container is being recreated when rejoining an existing conversation (all changes are lost) | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Expected result
- When joining an existing conversation, OH must start the same container (already created for this conversation), instead of creating a new one from scratch.
- Each conversation must have their own exclusi... | null | https://github.com/All-Hands-AI/OpenHands/pull/6402 | null | {'base_commit': 'b468150f2abf0f4c8bcf05072f808dd8a086e9c6', 'files': [{'path': 'openhands/runtime/impl/docker/docker_runtime.py', 'status': 'modified', 'Loc': {"('DockerRuntime', '__init__', 57)": {'mod': [69]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/impl/docker/docker_runtime.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 5f61885e44cf1841fe9ec82befd38cf45b13869b | https://github.com/All-Hands-AI/OpenHands/issues/2866 | bug | [Bug]: azure open ai config | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
Based on the documentation I ran the azure open ai config, I managed to open the ui ... | null | https://github.com/All-Hands-AI/OpenHands/pull/2894 | null | {'base_commit': '5f61885e44cf1841fe9ec82befd38cf45b13869b', 'files': [{'path': 'docs/modules/usage/llms/azureLLMs.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17], 'mod': [15, 35, 36]}}}, {'path': 'docs/modules/usage/llms/localLLMs.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/modules/usage/llms/localLLMs.md",
"docs/modules/usage/llms/azureLLMs.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 3661893161826c2a36bacdb3b08d12c805134bee | https://github.com/All-Hands-AI/OpenHands/issues/4142 | documentation
enhancement
fix-me | Documentation: Create a "Usage Methods -> GUI Mode" page | **What problem or use case are you trying to solve?**
Currently we have pages about different usage methods, CLI and headless, and soon to by github actions (#4113).
However, we don't have a page describing GUI mode, other than the Getting Started page. We can start out by copying the information from the "Gettin... | null | https://github.com/All-Hands-AI/OpenHands/pull/4156 | null | {'base_commit': '3661893161826c2a36bacdb3b08d12c805134bee', 'files': [{'path': 'docs/sidebars.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [24]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"docs/sidebars.ts"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | bfa1de4a6b18d3b8493b94f6e54e360012957fdc | https://github.com/All-Hands-AI/OpenHands/issues/2714 | bug
good first issue | [Bug]: The long filename will stretch the workspace panel | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
The issue manifests as follows:
<img width="1513" alt="image" src="https://github.c... | null | https://github.com/All-Hands-AI/OpenHands/pull/2731 | null | {'base_commit': 'bfa1de4a6b18d3b8493b94f6e54e360012957fdc', 'files': [{'path': 'frontend/src/components/file-explorer/FileExplorer.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [219, 222]}}}, {'path': 'frontend/src/components/file-explorer/TreeNode.tsx', 'status': 'modified', 'Loc': {'(None, None, N... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"frontend/src/components/file-explorer/FileExplorer.tsx",
"frontend/src/components/file-explorer/TreeNode.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 93d2e4a338adcaa8acaa602adad14364abca821f | https://github.com/All-Hands-AI/OpenHands/issues/3903 | bug | [Bug]: LocalBox has been removed from 0.9.0 | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
Hey team,
We built our setup based on local sandbox in Openshift with restricted perm... | null | https://github.com/All-Hands-AI/OpenHands/pull/5284 | null | {'base_commit': '93d2e4a338adcaa8acaa602adad14364abca821f', 'files': [{'path': '.github/workflows/ghcr-build.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [236]}}}, {'path': 'openhands/runtime/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [111, 114]}}}, {'path': 'openhands/... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/__init__.py",
"openhands/runtime/plugins/jupyter/__init__.py",
"openhands/runtime/plugins/vscode/__init__.py",
"openhands/runtime/utils/bash.py",
"openhands/runtime/utils/runtime_init.py",
"tests/runtime/conftest.py",
"openhands/runtime/action_execution_serve... | 1 |
All-Hands-AI | OpenHands | 0403b460f10207075b7472f5127bfdd4ab1a66f8 | https://github.com/All-Hands-AI/OpenHands/issues/272 | enhancement | Add latest tag for docker image | **What problem or use case are you trying to solve?**
Proposed [here](https://github.com/OpenDevin/OpenDevin/pull/263#issuecomment-2023918115). Better to add `latest` tag for image. Then user do not need to pull image at specific version. We also do not need to always change the tags in [code](https://github.com/OpenD... | null | https://github.com/All-Hands-AI/OpenHands/pull/290 | null | {'base_commit': '2def49e79409108eacb4e797f7fdc2422cc5bd19', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 32)': {'mod': [32]}}}, {'path': 'evaluation/SWE-bench/scripts/run_docker_interactive.sh', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'opendevin/README.md... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"opendevin/sandbox/sandbox.py"
],
"doc": [
"README.md",
"opendevin/README.md",
"evaluation/SWE-bench/scripts/run_docker_interactive.sh"
],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 221a4e83f1e438950591d183b0a6e7c5e15de6be | https://github.com/All-Hands-AI/OpenHands/issues/2308 | enhancement | [Feature]: Confirmation Mode for Agent | **What problem or use case are you trying to solve?**
Context: https://opendevin.slack.com/archives/C06P5NCGSFP/p1717733829670139
If the agent is NOT operating inside a sandbox or if a user care a lot about not letting the agent mess around with their environment, we should better let user confirm action (command... | null | https://github.com/All-Hands-AI/OpenHands/pull/2774 | null | {'base_commit': '456690818c94a266935888f1e56e0afa2c4d5219', 'files': [{'path': 'frontend/package-lock.json', 'status': 'modified', 'Loc': {'(None, None, 20)': {'mod': [20]}, '(None, None, 8256)': {'mod': [8256, 8257, 8258]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, 19)': {'mod': [... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"frontend/src/components/modals/settings/SettingsForm.tsx",
"frontend/package-lock.json",
"frontend/src/components/chat/Chat.test.tsx",
"opendevin/core/config.py",
"frontend/package.json",
"frontend/src/components/AgentControlBar.tsx",
"opendevin/controller/state/state.py",
... | 1 |
scrapy | scrapy | c8f3d07e86dd41074971b5423fb932c2eda6db1e | https://github.com/scrapy/scrapy/issues/3370 | AttributeError from contract errback | When running a contract with a URL that returns non-200 response, I get the following:
```
2018-08-09 14:40:23 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.bureauxlocaux.com/annonce/a-louer-bureaux-a-louer-a-nantes--1289-358662> (referer: None)
Traceback (most recent call last):
File "/us... | null | https://github.com/scrapy/scrapy/pull/3371 | null | {'base_commit': 'c8f3d07e86dd41074971b5423fb932c2eda6db1e', 'files': [{'path': 'scrapy/contracts/__init__.py', 'status': 'modified', 'Loc': {"('ContractsManager', 'eb_wrapper', 85)": {'mod': [87]}}}, {'path': 'tests/test_contracts.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 4]}, "('ContractsMan... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/contracts/__init__.py"
],
"doc": [],
"test": [
"tests/test_contracts.py"
],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | b337c986ca1188f4b26d30c9ae4bb7ff457ed505 | https://github.com/scrapy/scrapy/issues/5811 | bug
good first issue | `BaseSettings.setdefault` does nothing | ### Description
Calling `setdefault` method of class `BaseSettings` does nothing.
### Steps to Reproduce
```python
from scrapy.settings import BaseSettings
settings = BaseSettings()
stored = settings.setdefault('key', 'value')
print(stored) # prints None
print(settings.copy_to_dict()) # prints emp... | null | https://github.com/scrapy/scrapy/pull/5821 | null | {'base_commit': 'b337c986ca1188f4b26d30c9ae4bb7ff457ed505', 'files': [{'path': 'scrapy/settings/__init__.py', 'status': 'modified', 'Loc': {"('BaseSettings', None, 56)": {'add': [295]}}}, {'path': 'tests/test_settings/__init__.py', 'status': 'modified', 'Loc': {"('BaseSettingsTest', None, 64)": {'add': [67]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/settings/__init__.py",
"tests/test_settings/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.