organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scikit-learn | scikit-learn | 559609fe98ec2145788133687e64a6e87766bc77 | https://github.com/scikit-learn/scikit-learn/issues/25525 | Bug
module:feature_extraction | Extend SequentialFeatureSelector example to demonstrate how to use negative tol | ### Describe the bug
I utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to "backward." The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the ... | null | https://github.com/scikit-learn/scikit-learn/pull/26205 | null | {'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/feature_selection/plot_select_from_model_diabetes.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
pallets | flask | cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4 | https://github.com/pallets/flask/issues/2264 | cli | Handle app factory in FLASK_APP | `FLASK_APP=myproject.app:create_app('dev')`
[
Gunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. The line should never be so ... | null | https://github.com/pallets/flask/pull/2326 | null | {'base_commit': 'cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 12]}, "(None, 'find_best_app', 32)": {'mod': [58, 62, 69, 71]}, "(None, 'call_factory', 82)": {'mod': [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, "(None, 'lo... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"flask/cli.py"
],
"doc": [],
"test": [
"tests/test_cli.py"
],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 737ca72b7bce6e377dd6876eacee63338fa8c30c | https://github.com/localstack/localstack/issues/894 | ERROR:localstack.services.generic_proxy: Error forwarding request: | Starting local dev environment. CTRL-C to quit.
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock Cl... | null | https://github.com/localstack/localstack/pull/1526 | null | {'base_commit': '737ca72b7bce6e377dd6876eacee63338fa8c30c', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'localstack/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'localstack/services/kinesis/kinesis_starter.py... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/config.py",
"localstack/services/kinesis/kinesis_starter.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | d2871b29754abd0f72cf42c299bb1c041519f7bc | https://github.com/huggingface/transformers/issues/30 | [Feature request] Add example of finetuning the pretrained models on custom corpus | null | https://github.com/huggingface/transformers/pull/25107 | null | {'base_commit': 'd2871b29754abd0f72cf42c299bb1c041519f7bc', 'files': [{'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [75, 108]}, "('PreTrainedModel', 'from_pretrained', 1959)": {'add': [2227]}, "(None, 'load_state_dict', 442)": {'mod': [461]}, "('PreTrainedMod... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/trainer.py",
"src/transformers/modeling_utils.py",
"src/transformers/training_args.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | ||
pandas-dev | pandas | 51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319 | https://github.com/pandas-dev/pandas/issues/11080 | Indexing
Performance | PERF: checking is_monotonic_increasing/decreasing before sorting on an index | We don't keep the sortedness state in an index per-se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a reg-index
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timei... | null | https://github.com/pandas-dev/pandas/pull/11294 | null | {'base_commit': '51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319', 'files': [{'path': 'asv_bench/benchmarks/frame_methods.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [932]}}}, {'path': 'doc/source/whatsnew/v0.17.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [54]}}}, {'path': 'pandas/... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/frame.py",
"asv_bench/benchmarks/frame_methods.py"
],
"doc": [
"doc/source/whatsnew/v0.17.1.txt"
],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | fdb45741e521d606b028984dbc2f6ac57755bb88 | https://github.com/zylon-ai/private-gpt/issues/10 | Suggestions for speeding up ingestion? | I presume I must be doing something wrong, as it is taking hours to ingest a 500kbyte text on an i9-12900 with 128GB. In fact it's not even done yet. Using models as recommended.
Help?
Thanks
Some output:
llama_print_timings: load time = 674.34 ms
llama_print_timings: sample time = 0.0... | null | https://github.com/zylon-ai/private-gpt/pull/224 | null | {'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 's... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"ingest.py",
"privateGPT.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"example.env"
],
"asset": []
} | 1 | |
huggingface | transformers | 9fef668338b15e508bac99598dd139546fece00b | https://github.com/huggingface/transformers/issues/9 | Crash at the end of training | Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:
I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8
Is this an issue you know about?
```
11/08/2... | null | https://github.com/huggingface/transformers/pull/16310 | null | {'base_commit': '9fef668338b15e508bac99598dd139546fece00b', 'files': [{'path': 'tests/big_bird/test_modeling_big_bird.py', 'status': 'modified', 'Loc': {"('BigBirdModelTester', '__init__', 47)": {'mod': [73]}, "('BigBirdModelTest', 'test_fast_integration', 561)": {'mod': [584]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"tests/big_bird/test_modeling_big_bird.py"
],
"config": [],
"asset": []
} | 1 | |
psf | requests | ccabcf1fca906bfa6b65a3189c1c41061e6c1042 | https://github.com/psf/requests/issues/3698 | AttributeError: 'NoneType' object has no attribute 'read' | Hello :)
After a recent upgrade for our [coala](https://github.com/coala/coala) project to `requests` 2.12.1 we encounter an exception in our test suites which seems to be caused by `requests`.
Build: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn
Relevant part:
```
===... | null | https://github.com/psf/requests/pull/3718 | null | {'base_commit': 'ccabcf1fca906bfa6b65a3189c1c41061e6c1042', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {"('Response', 'content', 763)": {'mod': [772]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1096]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/models.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | fc805074be7b3b507bc1699e537f9b691c6f91b9 | https://github.com/AntonOsika/gpt-engineer/issues/674 | bug
documentation | ModuleNotFoundError: No module named 'tkinter' | **Bug description**
When running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:
```
$ gpt-engineer --improve
Traceback (most recent call last):
File "/home/.../.local/bin/gpt-engineer", line 5, in <module>
from gpt_engineer.main import app
File "/home/.../.lo... | null | https://github.com/AntonOsika/gpt-engineer/pull/675 | null | {'base_commit': 'fc805074be7b3b507bc1699e537f9b691c6f91b9', 'files': [{'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/installation.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pallets | flask | 85dce2c836fe03aefc07b7f4e0aec575e170f1cd | https://github.com/pallets/flask/issues/593 | blueprints | Nestable blueprints | I'd like to be able to register "sub-blueprints" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the "parent" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A naíve implement... | null | https://github.com/pallets/flask/pull/3923 | null | {'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/flask/blueprints.py",
"src/flask/app.py"
],
"doc": [
"docs/blueprints.rst",
"CHANGES.rst"
],
"test": [
"tests/test_blueprints.py"
],
"config": [],
"asset": []
} | null |
AUTOMATIC1111 | stable-diffusion-webui | f92d61497a426a19818625c3ccdaae9beeb82b31 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263 | bug | [Bug]: KeyError: "do_not_save" when trying to save a prompt | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I try to save a prompt, it errors in the console saying
```
File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
s... | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276 | null | {'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/styles.py', 'status': 'modified', 'Loc': {"('StyleDatabase', '__init__', 95)": {'mod': [101, 102, 103, 104]}, "('StyleDatabase', None, 94)": {'mod': [158, 159, 160, 161]}, "('StyleDatabase', 'get_style_paths', 158)": {'mod': [175, 1... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"modules/styles.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | c3e9c1a7e8fdc949b8e638d79ab476507ff92f18 | https://github.com/home-assistant/core/issues/60067 | integration: environment_canada
by-code-owner | Environment Canada (EC) radar integration slowing Environment Canada servers | ### The problem
The `config_flow` change to the EC integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concer... | null | https://github.com/home-assistant/core/pull/60087 | null | {'base_commit': 'c3e9c1a7e8fdc949b8e638d79ab476507ff92f18', 'files': [{'path': 'homeassistant/components/environment_canada/camera.py', 'status': 'modified', 'Loc': {"('ECCamera', '__init__', 49)": {'add': [57]}}}, {'path': 'homeassistant/components/environment_canada/manifest.json', 'status': 'modified', 'Loc': {'(Non... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"homeassistant/components/environment_canada/camera.py",
"homeassistant/components/environment_canada/manifest.json"
],
"doc": [],
"test": [],
"config": [
"requirements_all.txt",
"requirements_test_all.txt"
],
"asset": []
} | 1 |
abi | screenshot-to-code | 939539611f0cad12056f7be78ef6b2128b90b779 | https://github.com/abi/screenshot-to-code/issues/336 | bug
p2 | Handle Nones in chunk.choices[0].delta | 
There is a successful request for the openai interface, but it seems that no code is generated.
backend-1 | ERROR: Exception in ASGI application
backend-1 | Traceback (most recent call last):
... | null | https://github.com/abi/screenshot-to-code/pull/341 | null | {'base_commit': '939539611f0cad12056f7be78ef6b2128b90b779', 'files': [{'path': 'backend/llm.py', 'status': 'modified', 'Loc': {"(None, 'stream_openai_response', 32)": {'mod': [62, 63, 64]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'frontend/src/Ap... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"backend/llm.py",
"frontend/src/App.tsx",
"frontend/package.json"
],
"doc": [],
"test": [],
"config": [
"frontend/yarn.lock"
],
"asset": []
} | 1 |
Significant-Gravitas | AutoGPT | bf895eb656dee9084273cd36395828bd06aa231d | https://github.com/Significant-Gravitas/AutoGPT/issues/6 | enhancement
good first issue
API costs | Make Auto-GPT aware of it's running cost | Auto-GPT is expensive to run due to GPT-4's API cost.
We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost.
This could also be displayed to the user to help them be more aware of exactly how much they are spending. | null | https://github.com/Significant-Gravitas/AutoGPT/pull/762 | null | {'base_commit': 'bf895eb656dee9084273cd36395828bd06aa231d', 'files': [{'path': 'autogpt/chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'chat_with_ai', 54)": {'add': [135]}}}, {'path': 'autogpt/config/ai_config.py', 'status': 'modified', 'Loc': {"('AIConfig', None, 21)": {'add': [28]... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"autogpt/chat.py",
"autogpt/prompts/prompt.py",
"autogpt/config/ai_config.py",
"autogpt/memory/base.py",
"autogpt/setup.py",
"autogpt/llm_utils.py"
],
"doc": [],
"test": [
"tests/unit/test_commands.py",
"tests/unit/test_setup.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e01ce744a981d8f19ae77ec695005e7000f4703 | https://github.com/yt-dlp/yt-dlp/issues/5855 | bug | Generic extractor can crash if Brotli is not available | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp... | null | null | https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703 | {'base_commit': '3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {"('GenericIE', None, 42)": {'add': [2156]}, "('GenericIE', '_real_extract', 2276)": {'mod': [2315]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/generic.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
CorentinJ | Real-Time-Voice-Cloning | ded7b37234e229d9bde0a9a506f7c65605803731 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543 | Lack of pre-compiled results in lost interest | so I know the first thing people are going to say is, this isn't an issue. However, it is. by not having a precompiled version to download over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it but then I saw that I had to track down eac... | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546 | null | {'base_commit': 'ded7b37234e229d9bde0a9a506f7c65605803731', 'files': [{'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"toolbox/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 96b5814de70ad2435b6db5f49b607b136921f701 | https://github.com/scikit-learn/scikit-learn/issues/26948 | Documentation | The copy button on install copies an extensive comman including env activation | ### Describe the issue linked to the documentation
https://scikit-learn.org/stable/install.html
Above link will lead you to the sklearn downlanding for link .
when you link copy link button it will copy
`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/ac... | null | https://github.com/scikit-learn/scikit-learn/pull/27052 | null | {'base_commit': '96b5814de70ad2435b6db5f49b607b136921f701', 'files': [{'path': 'doc/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]}}}, {'path': 'doc/themes/scik... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"doc/themes/scikit-learn-modern/static/css/theme.css"
],
"doc": [
"doc/install.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
keras-team | keras | 49b9682b3570211c7d8f619f8538c08fd5d8bdad | https://github.com/keras-team/keras/issues/10036 | [API DESIGN REVIEW] sample weight in ImageDataGenerator.flow | https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing
Makes it easy to use data augmentation when sample weights are available. | null | https://github.com/keras-team/keras/pull/10092 | null | {'base_commit': '49b9682b3570211c7d8f619f8538c08fd5d8bdad', 'files': [{'path': 'keras/preprocessing/image.py', 'status': 'modified', 'Loc': {"('ImageDataGenerator', 'flow', 715)": {'add': [734, 759], 'mod': [754]}, "('NumpyArrayIterator', None, 1188)": {'add': [1201]}, "('NumpyArrayIterator', '__init__', 1216)": {'add'... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/keras/preprocessing/image_test.py",
"keras/preprocessing/image.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | efb53aafdcaae058962c6189ddecb3dc62b02c31 | https://github.com/scrapy/scrapy/issues/6514 | enhancement | Migrate from setup.py to pyproject.toml | We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release. | null | https://github.com/scrapy/scrapy/pull/6547 | null | {'base_commit': 'efb53aafdcaae058962c6189ddecb3dc62b02c31', 'files': [{'path': '.bandit.yml', 'status': 'removed', 'Loc': {}}, {'path': '.bumpversion.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.coveragerc', 'status': 'removed', 'Loc': {}}, {'path': '.isort.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.pre-com... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/test_spiderloader/__init__.py",
".isort.cfg",
".coveragerc",
"setup.cfg",
"setup.py",
".bumpversion.cfg"
],
"doc": [],
"test": [
"tests/test_crawler.py"
],
"config": [
"pytest.ini",
".pre-commit-config.yaml",
"tox.ini",
"pylintrc",
".bandit.... | 1 |
fastapi | fastapi | c6e950dc9cacefd692dbd8987a3acd12a44b506f | https://github.com/fastapi/fastapi/issues/5859 | question
question-migrate | FastAPI==0.89.0 Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I alre... | null | https://github.com/fastapi/fastapi/pull/2246 | null | {'base_commit': 'c6e950dc9cacefd692dbd8987a3acd12a44b506f', 'files': [{'path': '.github/workflows/preview-docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
".github/workflows/preview-docs.yml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 3938f81c1b4a5ee81d5bfc6563c17a225f7e5068 | https://github.com/3b1b/manim/issues/1330 | Error after installing manim | I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:
`Traceback (most recent call last):
File "c:\users\jm\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\jm... | null | https://github.com/3b1b/manim/pull/1343 | null | {'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {"('Window', None, 10)": {'mod': [15]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/window.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
keras-team | keras | 84b283e6200bcb051ed976782fbb2b123bf9b8fc | https://github.com/keras-team/keras/issues/19793 | type:bug/performance | model.keras format much slower to load | Anyone experiencing unreasonably slow load times when loading a keras-format saved model? I have noticed this repeated when working in ipython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.
My understan... | null | https://github.com/keras-team/keras/pull/19852 | null | {'base_commit': '84b283e6200bcb051ed976782fbb2b123bf9b8fc', 'files': [{'path': 'keras/src/saving/saving_lib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 34]}, "(None, '_save_model_to_fileobj', 95)": {'mod': [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"keras/src/saving/saving_lib_test.py",
"keras/src/saving/saving_lib.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 4cdb266dac852859f695b0555cbe49e58343e69a | https://github.com/ansible/ansible/issues/3539 | bug | Bug in Conditional Include | Hi,
I know that when using conditionals on an include, 'All the tasks get evaluated, but the conditional is applied to each and every task'. However this breaks when some of that tasks register variables and other tasks in the group use those variable.
Example:
main.yml:
```
- include: extra.yml
when: do_extra i... | null | https://github.com/ansible/ansible/pull/20158 | null | {'base_commit': '4cdb266dac852859f695b0555cbe49e58343e69a', 'files': [{'path': 'lib/ansible/modules/windows/win_robocopy.ps1', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {'path': 'lib/ansible/modules/windows/win_robocopy.py', 'status': 'modif... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/windows/win_robocopy.ps1",
"lib/ansible/modules/windows/win_robocopy.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
psf | requests | f5dacf84468ab7e0631cc61a3f1431a32e3e143c | https://github.com/psf/requests/issues/2654 | Feature Request
Contributor Friendly | utils.get_netrc_auth silently fails when netrc exists but fails to parse | My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).
It turns out that `netrc.netrc()` doesn't like that:
```
>>> from netrc import netrc
>>> netrc()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.fra... | null | https://github.com/psf/requests/pull/2656 | null | {'base_commit': 'f5dacf84468ab7e0631cc61a3f1431a32e3e143c', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_netrc_auth', 70)": {'mod': [70, 108, 109]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
oobabooga | text-generation-webui | 0877741b0350d200be7f1e6cca2780a25ee29cd0 | https://github.com/oobabooga/text-generation-webui/issues/5851 | bug | Inference failing using ExLlamav2 version 0.0.18 | ### Describe the bug
Since ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.
### Is there an existing issue for this?
- [X] I have searched the existing issues
... | null | null | https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0 | {'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noa... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements_apple_silicon.txt",
"requirements_amd_noavx2.txt",
"requirements_apple_intel.txt",
"requirements_amd.txt",
"requirements.txt",
"requirements_noavx2.txt"
],
"asset": []
} | null |
zylon-ai | private-gpt | 89477ea9d3a83181b0222b732a81c71db9edf142 | https://github.com/zylon-ai/private-gpt/issues/2013 | bug | [BUG] Another permissions error when installing with docker-compose | ### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
This looks similar, but not the same as #1876
As for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind.
Background: I'm trying to run this on an Asus... | null | https://github.com/zylon-ai/private-gpt/pull/2059 | null | {'base_commit': '89477ea9d3a83181b0222b732a81c71db9edf142', 'files': [{'path': 'Dockerfile.llamacpp-cpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 23, 30]}}}, {'path': 'Dockerfile.ollama', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 13, 20]}}}, {'path': 'docker-compose.yaml', ... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docker-compose.yaml"
],
"test": [],
"config": [
"Dockerfile.ollama",
"Dockerfile.llamacpp-cpu"
],
"asset": []
} | 1 |
scikit-learn | scikit-learn | e04b8e70e60df88751af5cd667cafb66dc32b397 | https://github.com/scikit-learn/scikit-learn/issues/26590 | Bug | KNNImputer add_indicator fails to persist where missing data had been present in training | ### Describe the bug
Hello, I've encountered an issue where the KNNImputer fails to record the fields where there were missing data at the time when `.fit` is called, but not recognised if `.transform` is called on a dense matrix. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicato... | null | https://github.com/scikit-learn/scikit-learn/pull/26600 | null | {'base_commit': 'e04b8e70e60df88751af5cd667cafb66dc32b397', 'files': [{'path': 'doc/whats_new/v1.3.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'sklearn/impute/_knn.py', 'status': 'modified', 'Loc': {"('KNNImputer', 'transform', 242)": {'mod': [285]}}}, {'path': 'sklearn/impute/te... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/impute/_knn.py"
],
"doc": [
"doc/whats_new/v1.3.rst"
],
"test": [
"sklearn/impute/tests/test_common.py"
],
"config": [],
"asset": []
} | 1 |
nvbn | thefuck | 9660ec7813a0e77ec3411682b0084d07b540084e | https://github.com/nvbn/thefuck/issues/543 | Adding sudo works for `aura -Sy` but not `aura -Ay` | `fuck` is unable to add `sudo` to an `aura -Ay` command:
```
$ aura -Ay foobar-beta-git # from AUR
aura >>= You have to use `sudo` for that.
$ fuck
No fucks given
```
But works as expected for `aura -Sy`:
```
$ aura -Sy foobar # pacman alias
error: you cannot perform this operation unless you are root.
aura >>= Pl... | null | https://github.com/nvbn/thefuck/pull/557 | null | {'base_commit': '9660ec7813a0e77ec3411682b0084d07b540084e', 'files': [{'path': 'thefuck/rules/sudo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"thefuck/rules/sudo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 2707099b23a0a8580731553629566c1182d26f48 | https://github.com/scikit-learn/scikit-learn/issues/29294 | Moderate
help wanted | ConvergenceWarnings cannot be turned off | Hi, I'm unable to turn off convergence warnings from `GraphicalLassoCV`.
I've tried most of the solutions from, and none of them worked (see below for actual implementations):
https://stackoverflow.com/questions/879173/how-to-ignore-deprecation-warnings-in-python
https://stackoverflow.com/questions/32612180/elimin... | null | https://github.com/scikit-learn/scikit-learn/pull/30380 | null | {'base_commit': '2707099b23a0a8580731553629566c1182d26f48', 'files': [{'path': 'sklearn/utils/parallel.py', 'status': 'modified', 'Loc': {"('_FuncWrapper', 'with_config', 121)": {'add': [122]}, "(None, '_with_config', 24)": {'mod': [24, 26, 27]}, "('Parallel', '__call__', 54)": {'mod': [73, 74, 77]}, "('_FuncWrapper', ... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/utils/parallel.py"
],
"doc": [],
"test": [
"sklearn/utils/tests/test_parallel.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 7b2b1eff57e41364b4b427e36e766607e7eed3a0 | https://github.com/All-Hands-AI/OpenHands/issues/20 | Control Loop: long term planning and execution | The biggest, most complicated aspect of Devin is long-term planning and execution. I'd like to start a discussion about how this might work in OpenDevin.
There's some [recent prior work from Microsoft](https://arxiv.org/pdf/2403.08299.pdf) with some impressive results. I'll summarize here, with some commentary.
#... | null | https://github.com/All-Hands-AI/OpenHands/pull/3771 | null | {'base_commit': '7b2b1eff57e41364b4b427e36e766607e7eed3a0', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [230]}}}, {'path': 'containers/runtime/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 9]}}}, {'path': 'frontend/src/components/Agent... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"openhands/runtime/e2b/runtime.py",
"frontend/src/types/Message.tsx",
"frontend/src/types/ResponseType.tsx",
"frontend/src/store.ts",
"openhands/runtime/remote/runtime.py",
"openhands/runtime/runtime.py",
"frontend/src/services/session.ts",
"openhands/server/session/agent.p... | 1 | |
All-Hands-AI | OpenHands | 2242702cf94eab7275f2cb148859135018d9b280 | https://github.com/All-Hands-AI/OpenHands/issues/1251 | enhancement | Sandbox Capabilities Framework | **Summary**
We have an existing use case for a Jupyter-aware agent, which always runs in a sandbox where Jupyter is available. There are some other scenarios I can think of where an agent might want some guarantees about what it can do with the sandbox:
* We might want a "postgres migration writer", which needs acces... | null | https://github.com/All-Hands-AI/OpenHands/pull/1255 | null | {'base_commit': '2242702cf94eab7275f2cb148859135018d9b280', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [220]}}}, {'path': 'agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17]}, "('CodeActAgent', None, 66)": {'add': [7... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"opendevin/sandbox/docker/ssh_box.py",
"opendevin/schema/config.py",
"agenthub/codeact_agent/codeact_agent.py",
"opendevin/controller/action_manager.py",
"opendevin/sandbox/docker/local_box.py",
"opendevin/sandbox/e2b/sandbox.py",
"opendevin/sandbox/sandbox.py",
"opendevin/... | 1 |
deepfakes | faceswap | 0ea743029db0d47f09d33ef90f50ad84c20b085f | https://github.com/deepfakes/faceswap/issues/263 | Very slow extraction with scripts vs fakeapp 1.1 | 1080ti + OC'd 2600k using winpython 3.6.2 cuda 9.0 and tensorflow 1.6
**Training** utilizes ~50% of the GPU now (which is better than the ~25% utilized with FA 1.1) but extraction doesn't seem to utilize the GPU at all (getting around 1.33it/s) wheras with FA 1.1 I get around 17it/s - tried CNN and it dropped down t... | null | https://github.com/deepfakes/faceswap/pull/259 | null | {'base_commit': '0ea743029db0d47f09d33ef90f50ad84c20b085f', 'files': [{'path': 'lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py', 'status': 'modified', 'Loc': {"(None, 'initialize', 108)": {'add': [126], 'mod': [108, 117, 123, 124, 125]}, "(None, 'extract', 137)": {'mod': [137, 138, 150, 151, 152, 153, 154, 155, 1... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/faces_detect.py",
"lib/cli.py",
"lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
fastapi | fastapi | ef176c663195489b44030bfe1fb94a317762c8d5 | https://github.com/fastapi/fastapi/issues/3323 | feature
reviewed | Support PEP 593 `Annotated` for specifying dependencies and parameters | ### First check
* [x] I added a very descriptive title to this issue.
* [x] I used the GitHub search to find a similar issue and didn't find it.
* [x] I searched the FastAPI documentation, with the integrated search.
* [x] I already searched in Google "How to X in FastAPI" and didn't find any information.
* [x] ... | null | https://github.com/fastapi/fastapi/pull/4871 | null | {'base_commit': 'ef176c663195489b44030bfe1fb94a317762c8d5', 'files': [{'path': 'fastapi/dependencies/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [58], 'mod': [51]}, "(None, 'get_dependant', 282)": {'add': [336], 'mod': [301, 303, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"fastapi/dependencies/utils.py",
"fastapi/utils.py",
"fastapi/param_functions.py",
"tests/main.py",
"fastapi/params.py"
],
"doc": [],
"test": [
"tests/test_params_repr.py",
"tests/test_application.py",
"tests/test_path.py"
],
"config": [],
"asset": []
} | 1 |
python | cpython | e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0 | https://github.com/python/cpython/issues/92417 | docs | Many references to unsupported Python versions in the stdlib docs | **Documentation**
There are currently many places in the stdlib docs where there are needless comments about how to maintain compatibility with Python versions that are now end-of-life. Many of these can now be removed, to improve brevity and clarity in the documentation.
I plan to submit a number of PRs to fix t... | null | https://github.com/python/cpython/pull/92539 | null | {'base_commit': 'e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0', 'files': [{'path': 'Doc/library/unittest.mock-examples.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [663]}}}, {'path': 'Doc/library/unittest.mock.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2384]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": " ",
"info_type": ""
} | {
"code": [],
"doc": [
"Doc/library/unittest.mock-examples.rst",
"Doc/library/unittest.mock.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | 23d8761615d0417eef5f52cc796518e44d41ca2a | https://github.com/scikit-learn/scikit-learn/issues/19248 | Documentation
module:cluster | Birch should be called BIRCH | C.f. the original paper.
Zhang, T.; Ramakrishnan, R.; Livny, M. (1996). "BIRCH: an efficient data clustering method for very large databases". Proceedings of the 1996 ACM SIGMOD international conference on Management of data - SIGMOD '96. pp. 103–114. doi:10.1145/233269.233324 | null | https://github.com/scikit-learn/scikit-learn/pull/19368 | null | {'base_commit': '23d8761615d0417eef5f52cc796518e44d41ca2a', 'files': [{'path': 'doc/modules/clustering.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [106, 946, 965, 999, 1001, 1005]}}}, {'path': 'examples/cluster/plot_birch_vs_minibatchkmeans.py', 'status': 'modified', 'Loc': {'(None, None, None)': ... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/cluster/plot_birch_vs_minibatchkmeans.py",
"sklearn/cluster/_birch.py",
"examples/cluster/plot_cluster_comparison.py"
],
"doc": [
"doc/modules/clustering.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 65b807e4e95fe6da3e30f13e4271dc9dcfaa334e | https://github.com/localstack/localstack/issues/402 | type: bug | Dynamodbstreams Use Kinesis Shard Identifiers | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
Dynamodbstreams seem to be making use of Kinesis shard identifiers which are considered invalid by botocore request validators.
Error response from boto3 when attempting to `get_shard_iterator` f... | null | https://github.com/localstack/localstack/pull/403 | null | {'base_commit': '65b807e4e95fe6da3e30f13e4271dc9dcfaa334e', 'files': [{'path': 'localstack/services/dynamodbstreams/dynamodbstreams_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 119]}, "(None, 'post_request', 47)": {'add': [76], 'mod': [70, 78]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/dynamodbstreams/dynamodbstreams_api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pallets | flask | ee76129812419d473eb62434051e81d5855255b6 | https://github.com/pallets/flask/issues/602 | Misspelling in docs @ flask.Flask.handle_exception | `Default exception handling that kicks in when an exception occours that is not caught. In debug mode the exception will be re-raised immediately, otherwise it is logged and the handler for a 500 internal server error is used. If no such handler exists, a default 500 internal server error message is displayed.`
Occour... | null | https://github.com/pallets/flask/pull/603 | null | {'base_commit': 'ee76129812419d473eb62434051e81d5855255b6', 'files': [{'path': 'flask/app.py', 'status': 'modified', 'Loc': {"('Flask', 'handle_exception', 1266)": {'mod': [1268]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "1",
"loc_way": "pr",
  "loc_scope": "Not entirely sure about the issue category, because the developer is asking about a typo error",
"info_type": ""
} | {
"code": [
"flask/app.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
ansible | ansible | 79d00adc52a091d0ddd1d8a96b06adf2f67f161b | https://github.com/ansible/ansible/issues/36378 | cloud
aws
module
affects_2.4
support:certified
docs | Documentation Error for ec2_vpc_nacl rules | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ec2_vpc_nacl
##### ANSIBLE VERSION
```
ansible 2.4.3.0
config file = None
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/pyt... | null | https://github.com/ansible/ansible/pull/36380 | null | {'base_commit': '79d00adc52a091d0ddd1d8a96b06adf2f67f161b', 'files': [{'path': 'lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [87]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
geekan | MetaGPT | a32e238801d0a8f3c1bd97b98d038b40977a8cc6 | https://github.com/geekan/MetaGPT/issues/1174 | New provider: Amazon Bedrock (AWS) | **Feature description**
Please include support for Amazon Bedrock models. These models can be from Amazon, Anthropic, AI21, Cohere, Mistral, or Meta Llama 2.
**Your Feature**
1. Create a new LLM Provides under [metagpt/provider](https://github.com/geekan/MetaGPT/tree/db65554c4931d4a95e20331b770cf4f7e5202264/metag... | null | https://github.com/geekan/MetaGPT/pull/1231 | null | {'base_commit': 'a32e238801d0a8f3c1bd97b98d038b40977a8cc6', 'files': [{'path': 'config/puppeteer-config.json', 'status': 'modified', 'Loc': {}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {"('LLMType', None, 17)": {'add': [34]}, "('LLMConfig', None, 40)": {'add': [80], 'mod': [77]}}}, {'path... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"metagpt/utils/token_counter.py",
"metagpt/provider/__init__.py",
"metagpt/configs/llm_config.py",
"config/puppeteer-config.json",
"tests/metagpt/provider/mock_llm_config.py",
"tests/metagpt/provider/req_resp_const.py"
],
"doc": [],
"test": [],
"config": [
"requirements... | 1 | |
pandas-dev | pandas | 862cd05df4452592a99dd1a4fa10ce8cfb3766f7 | https://github.com/pandas-dev/pandas/issues/37494 | Enhancement
Groupby
ExtensionArray
NA - MaskedArrays
Closing Candidate | ENH: improve the resulting dtype for groupby operations on nullable dtypes | Follow-up on https://github.com/pandas-dev/pandas/pull/37433, and partly related to https://github.com/pandas-dev/pandas/issues/37493
Currently, after groupby operations we try to cast back to the original dtype when possible (at least in case of extension arrays). But this is not always correct, and also not done c... | null | https://github.com/pandas-dev/pandas/pull/38291 | null | {'base_commit': '862cd05df4452592a99dd1a4fa10ce8cfb3766f7', 'files': [{'path': 'pandas/core/dtypes/cast.py', 'status': 'modified', 'Loc': {"(None, 'maybe_cast_result_dtype', 342)": {'mod': [360, 362, 363, 364, 365]}}}, {'path': 'pandas/core/groupby/ops.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/dtypes/cast.py",
"pandas/core/groupby/ops.py"
],
"doc": [],
"test": [
"pandas/tests/groupby/aggregate/test_cython.py",
"pandas/tests/arrays/integer/test_arithmetic.py",
"pandas/tests/resample/test_datetime_index.py",
"pandas/tests/groupby/test_function.py"
],
... | 1 |
scikit-learn | scikit-learn | eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb | https://github.com/scikit-learn/scikit-learn/issues/16730 | Bug
Blocker
module:decomposition | BUG: MLE for PCA mis-estimates rank | After #16224 it looks like this code no longer produces the correct result:
```
import numpy as np
from sklearn.decomposition import PCA
n_samples, n_dim = 1000, 10
X = np.random.RandomState(0).randn(n_samples, n_dim)
X[:, -1] = np.mean(X[:, :-1], axis=-1) # true X dim is ndim - 1
pca_skl = PCA('mle', svd_solve... | null | https://github.com/scikit-learn/scikit-learn/pull/16841 | null | {'base_commit': 'eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb', 'files': [{'path': 'doc/whats_new/v0.23.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [142, 143, 144, 145]}}}, {'path': 'sklearn/decomposition/_pca.py', 'status': 'modified', 'Loc': {"(None, '_assess_dimension', 31)": {'mod': [31, 32, 39, 4... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/decomposition/_pca.py"
],
"doc": [
"doc/whats_new/v0.23.rst"
],
"test": [
"sklearn/decomposition/tests/test_pca.py"
],
"config": [],
"asset": []
} | 1 |
pallets | flask | 07c7d5730a2685ef2281cc635e289685e5c3d478 | https://github.com/pallets/flask/issues/2813 | Allow flexible routing with SERVER_NAME config | ### Expected Behavior
Deployed a flask application which is reachable over multiple domains and ports:
- external via load balancer: `client - Host: example.org -> LB -> flask app`
- internal via DNS service discovery without load balancer: `client - Host: instance-1231.example.org -> flask app`
If the client ... | null | https://github.com/pallets/flask/pull/5634 | null | {'base_commit': '07c7d5730a2685ef2281cc635e289685e5c3d478', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}}}, {'path': 'docs/config.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [270], 'mod': [263, 264, 266, 267]}}}, {'path': 'src/flask/app.py', '... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"src/flask/app.py"
],
"doc": [
"docs/config.rst",
"CHANGES.rst"
],
"test": [
"tests/test_blueprints.py",
"tests/test_basic.py"
],
"config": [],
"asset": []
} | 1 | |
ansible | ansible | 0ffacedb3e41ec49df3606c0df1a1f0688868c32 | https://github.com/ansible/ansible/issues/20199 | affects_2.2
module
bug | Failure while using htpasswd module | _From @apolatynski on December 4, 2016 15:42_
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
htpasswd
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path =... | null | https://github.com/ansible/ansible/pull/20202 | null | {'base_commit': '0ffacedb3e41ec49df3606c0df1a1f0688868c32', 'files': [{'path': 'lib/ansible/modules/web_infrastructure/htpasswd.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [106]}, "(None, 'present', 126)": {'mod': [140, 151]}, "(None, 'absent', 174)": {'mod': [178]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/web_infrastructure/htpasswd.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4 | https://github.com/yt-dlp/yt-dlp/issues/2237 | site-enhancement | [YouTube] Add the Channel Banner link to the info.json when downloading a channel's videos | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://g... | null | https://github.com/yt-dlp/yt-dlp/pull/2400 | null | {'base_commit': '135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4', 'files': [{'path': 'yt_dlp/extractor/youtube.py', 'status': 'modified', 'Loc': {"('YoutubeTabBaseInfoExtractor', '_extract_from_tabs', 3894)": {'mod': [3916, 3917, 3918, 3919, 3938]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/youtube.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | a8968bfa696d51f73769c54f2630a9530488236a | https://github.com/pandas-dev/pandas/issues/46804 | Docs | DOC: building page for nested methods doesn't work | The following
```
python make.py --single pandas.Series.str.rsplit
```
fails to produce the docs:
```
(pandas-dev) marcogorelli@OVMG025 doc % python make.py clean && python make.py --single pandas.Series.str.rsplit
Running Sphinx v4.4.0
loading translations [en]... done
making output directory... done
[autosu... | null | https://github.com/pandas-dev/pandas/pull/46806 | null | {'base_commit': 'a8968bfa696d51f73769c54f2630a9530488236a', 'files': [{'path': '.github/workflows/code-checks.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82]}}}, {'path': '.github/workflows/docbuild-and-upload.yml', 'status': 'modified', 'Loc': {}}, {'path': 'ci/code_checks.sh', 'status': 'modifi... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
".github/workflows/docbuild-and-upload.yml",
"doc/source/index.rst.template"
],
"test": [],
"config": [
".github/workflows/code-checks.yml"
],
"asset": [
"ci/code_checks.sh"
]
} | 1 |
pandas-dev | pandas | e88c39225ef545123860c679822f1b567fe65c27 | https://github.com/pandas-dev/pandas/issues/33428 | Docs
good first issue | DOC: Data links in Pandas API Reference are broken 404 | #### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.plotting.parallel_coordinates.html
...probably many examples in other sections
#### Documentation problem
Results in 404 not found error
df = pd.read_csv('https://raw.github.com/pandas-dev/pandas/master'
... | null | https://github.com/pandas-dev/pandas/pull/33099 | null | {'base_commit': 'e88c39225ef545123860c679822f1b567fe65c27', 'files': [{'path': 'pandas/plotting/_misc.py', 'status': 'modified', 'Loc': {"(None, 'parallel_coordinates', 311)": {'mod': [362]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/plotting/_misc.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903 | https://github.com/ultralytics/yolov5/issues/4970 | question | Cannot load the model | I get an error when I run this code torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/path/last.pt', force_reload=True)
It was working until yesterday and now I receive an error "raise ValueError("{!r} does not start with {!r}"
ValueError: 'C:\\Users\\aaa\\.cache\\torch\\hub\\ultralytics_yolov5_master' does... | null | https://github.com/ultralytics/yolov5/pull/4974 | null | {'base_commit': 'c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903', 'files': [{'path': 'models/tf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}}}, {'path': 'models/yolo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"models/tf.py",
"models/yolo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 674fb96b33c07c680844f674fcdf0767b6e3c2f9 | https://github.com/pandas-dev/pandas/issues/17200 | IO Data
IO JSON | read_json(lines=True) broken for s3 urls in Python 3 (v0.20.3) | #### Code Sample, a copy-pastable example if possible
Using Python
```python
import pandas as pd
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
```
The file is similar to:
```
{"url": "blah", "other": "blah"}
{"url": "blah", "other": "blah"}
{"url": "blah", "other": ... | null | https://github.com/pandas-dev/pandas/pull/17201 | null | {'base_commit': '674fb96b33c07c680844f674fcdf0767b6e3c2f9', 'files': [{'path': 'doc/source/whatsnew/v0.21.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [91]}}}, {'path': 'pandas/io/json/json.py', 'status': 'modified', 'Loc': {"('JsonReader', 'read', 456)": {'add': [460], 'mod': [462]}, '(None, Non... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/io/json/json.py"
],
"doc": [
"doc/source/whatsnew/v0.21.1.txt"
],
"test": [
"pandas/tests/io/parser/test_network.py",
"pandas/tests/io/json/test_pandas.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 1ddf398a81d23772fc9ac231a4e774af932f8360 | https://github.com/All-Hands-AI/OpenHands/issues/3031 | bug
enhancement
severity:medium
tracked | [Runtime] Mega-issue to track all issues related to bash Interactive terminal | This is a mega-issue tracker for the **Interactive terminal** issue peoples run into.
- [ ] https://github.com/OpenDevin/OpenDevin/issues/2754
- [ ] https://github.com/OpenDevin/OpenDevin/issues/3008
- [ ] https://github.com/OpenDevin/OpenDevin/issues/2799
- [ ] https://github.com/OpenDevin/OpenDevin/issues/892
... | null | https://github.com/All-Hands-AI/OpenHands/pull/4881 | null | {'base_commit': '1ddf398a81d23772fc9ac231a4e774af932f8360', 'files': [{'path': '.github/workflows/dummy-agent-test.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}, {'path': '.github/workflows/eval-runner.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [31]}}}, {'path': '.gith... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"openhands/agenthub/codeact_agent/codeact_agent.py",
"evaluation/benchmarks/agent_bench/run_infer.py",
"evaluation/benchmarks/bird/run_infer.py",
"frontend/src/services/observations.ts",
"evaluation/benchmarks/humanevalfix/run_infer.py",
"evaluation/benchmarks/scienceagentbench/run... | 1 |
All-Hands-AI | OpenHands | 23a7057be29ed7de44b5705d5bb4c4d0bbdea089 | https://github.com/All-Hands-AI/OpenHands/issues/813 | bug | error seed': 42 | Hi! I'm OpenDevin, an AI Software Engineer. What would you like to build with me today?
user avatar
design and run a website for a financial consultancy firm for me. Make it a detailed and comprehensive piece of work.
assistant avatar
Starting new task...
assistant avatar
Oops. Something went wrong: gemini does not support p... | null | https://github.com/All-Hands-AI/OpenHands/pull/830 | null | {'base_commit': '23a7057be29ed7de44b5705d5bb4c4d0bbdea089', 'files': [{'path': 'agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {"('CodeActAgent', 'step', 83)": {'mod': [126, 127]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"agenthub/codeact_agent/codeact_agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
keras-team | keras | 818c9fadd9cb1748f2b5545e8ef5f141526ec14e | https://github.com/keras-team/keras/issues/19281 | type:feature | Scatter update variable in TF optimizer | In TensorFlow there is a cool (fast) variable update operation - scatter_update (like "assign" for dense variables).
It would be cool if you override assign operation for such cases (i think it should looks like https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow/optimizer.py#L45 )
P.S.
Found ... | null | https://github.com/keras-team/keras/pull/19313 | null | {'base_commit': '818c9fadd9cb1748f2b5545e8ef5f141526ec14e', 'files': [{'path': 'keras/backend/tensorflow/optimizer.py', 'status': 'modified', 'Loc': {"('TFOptimizer', None, 8)": {'add': [44]}}}, {'path': 'keras/optimizers/optimizer_sparse_test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 99]}}}... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"keras/backend/tensorflow/optimizer.py",
"keras/optimizers/optimizer_sparse_test.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | d558bce8e9d5d4adfb0ab587be20b8a231dd1eea | https://github.com/pandas-dev/pandas/issues/39636 | Regression
Apply | BUG: ValueError on ".transform" method applied to an empty DataFrame | - [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
Output on version 1.1.5:
```python
... | null | https://github.com/pandas-dev/pandas/pull/39639 | null | {'base_commit': 'd558bce8e9d5d4adfb0ab587be20b8a231dd1eea', 'files': [{'path': 'doc/source/whatsnew/v1.2.2.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23]}}}, {'path': 'pandas/core/aggregation.py', 'status': 'modified', 'Loc': {"(None, 'transform', 404)": {'mod': [460]}}}, {'path': 'pandas/tests/... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"pandas/core/aggregation.py"
],
"doc": [
"doc/source/whatsnew/v1.2.2.rst"
],
"test": [
"pandas/tests/apply/test_frame_transform.py"
],
"config": [],
"asset": []
} | 1 |
fastapi | fastapi | 92c825be6a7362099400c9c3fe8b01ea13add3dc | https://github.com/fastapi/fastapi/issues/19 | question
answered
reviewed
question-migrate | accessing the request object | In starlette you can access request object in function decorated with the route decorator.
it seems very handy to be able to access middlewares etc,
is there a way in fastapi to do that using the provided get/post/options.... decorators?
same question for the ApiRouter.
```
@app.route("/notes", methods=["GET"])
... | null | https://github.com/fastapi/fastapi/pull/25 | null | {'base_commit': '92c825be6a7362099400c9c3fe8b01ea13add3dc', 'files': [{'path': 'docs/tutorial/extra-starlette.md', 'status': 'removed', 'Loc': {}}, {'path': 'mkdocs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [56], 'mod': [61]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"mkdocs.yml",
"docs/tutorial/extra-starlette.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 9572a2e00ddadb9fc7e2125c3e723b8a3b54be05 | https://github.com/pandas-dev/pandas/issues/33238 | CI/COMPAT: Linux py37_np_dev pipeline timeouts | #### Problem description
Linux py37_np_dev pipeline appears to timeout for everyone after 60 minutes.
There are a couple hundred thousand errors like this:
```
Exception ignored in: 'pandas.io.sas._sas.Parser.process_byte_array_with_data'
DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
Depr... | null | https://github.com/pandas-dev/pandas/pull/33241 | null | {'base_commit': '9572a2e00ddadb9fc7e2125c3e723b8a3b54be05', 'files': [{'path': 'pandas/_libs/writers.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [115]}}}, {'path': 'pandas/io/sas/sas.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [434]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/io/sas/sas.pyx",
"pandas/_libs/writers.pyx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 2086ff4065a43fa40d909f81e62623e265df5759 | https://github.com/scrapy/scrapy/issues/2390 | bug | Sitemap spider not robust against wrong sitemap URLs in robots.txt | [The "specs"](http://www.sitemaps.org/protocol.html#submit_robots) do say that the URL should be a "full URL":
> You can specify the location of the Sitemap using a robots.txt file. To do this, simply add the following line including the full URL to the sitemap:
> `Sitemap: http://www.example.com/sitemap.xml`
Bu... | null | https://github.com/scrapy/scrapy/pull/2395 | null | {'base_commit': '2086ff4065a43fa40d909f81e62623e265df5759', 'files': [{'path': 'scrapy/spiders/sitemap.py', 'status': 'modified', 'Loc': {"('SitemapSpider', '_parse_sitemap', 33)": {'mod': [35]}}}, {'path': 'scrapy/utils/sitemap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "(None, 'sitemap_url... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/spiders/sitemap.py",
"scrapy/utils/sitemap.py"
],
"doc": [],
"test": [
"tests/test_spider.py",
"tests/test_utils_sitemap.py"
],
"config": [],
"asset": []
} | 1 |
ageitgey | face_recognition | f21631401119e4af2e919dd662c3817b2c480c75 | https://github.com/ageitgey/face_recognition/issues/149 | Tolerance factor not working from cli | * face_recognition version:
* Python version: 3.5
* Operating System: Ubuntu 16
### Description
Hi! I tried to set the tolerance factor in the cli but it doesn't work....It says: "Error: no such option: --tolerance". I am using the preconfigured VM available on Medium Website.
### What I Did
```
face_... | null | https://github.com/ageitgey/face_recognition/pull/137 | null | {'base_commit': 'f21631401119e4af2e919dd662c3817b2c480c75', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [132]}}}, {'path': 'face_recognition/cli.py', 'status': 'modified', 'Loc': {"(None, 'test_image', 35)": {'mod': [35, 48]}, "(None, 'process_images_in_process_pool', 60)... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"face_recognition/cli.py",
"setup.py"
],
"doc": [
"README.md"
],
"test": [
"tests/test_face_recognition.py"
],
"config": [],
"asset": []
} | 1 | |
oobabooga | text-generation-webui | 9ab90d8b608170fe57d893c2150eda3bc11a8b06 | https://github.com/oobabooga/text-generation-webui/issues/2435 | bug | Failed to load embedding model: all-mpnet-base-v2 While Running Textgen in Colab Notebook | ### Describe the bug
I have used this command instead of using old Cuda in my ipynb
`!git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa`
Now, I ran the server using following code -
`!python server.py --extensions openai --model guanaco-7B-GPTQ --model_type LLaMa --api --public-api --share --wbits 4... | null | https://github.com/oobabooga/text-generation-webui/pull/2443 | null | {'base_commit': '9ab90d8b608170fe57d893c2150eda3bc11a8b06', 'files': [{'path': 'extensions/openai/script.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}, "('Handler', 'do_POST', 159)": {'mod': [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"extensions/openai/script.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
Textualize | rich | ef1b9b91ccff680b7f931d75fd92c3caa6fcd622 | https://github.com/Textualize/rich/issues/2083 | Needs triage | [BUG] typing: Progress in Group isn't happy | **Describe the bug**
Running mypy on the following code:
```python
from rich.console import Group
from rich.progress import Progress
outer_progress = Progress()
inner_progress = Progress()
live_group = Group(outer_progress, inner_progress)
```
Produces:
```console
$ mypy --strict tmp.py
tmp.py:6... | null | https://github.com/Textualize/rich/pull/2089 | null | {'base_commit': 'ef1b9b91ccff680b7f931d75fd92c3caa6fcd622', 'files': [{'path': 'rich/console.py', 'status': 'modified', 'Loc': {"('RichCast', None, 265)": {'mod': [268]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"rich/console.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
nvbn | thefuck | 3da26192cba7dbaa3109fc0454e658ec417aaf5f | https://github.com/nvbn/thefuck/issues/89 | feature request: replace history with corrected command. | It would be a nice feature to correct the command and the history.
I would also like an option to not add {fuck,thefuck} to the history.
| null | https://github.com/nvbn/thefuck/pull/384 | null | {'base_commit': '3da26192cba7dbaa3109fc0454e658ec417aaf5f', 'files': [{'path': 'thefuck/shells.py', 'status': 'modified', 'Loc': {"('Fish', 'app_alias', 128)": {'mod': [129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"thefuck/shells.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 61e722aa126207efcdbc1ddcd4453854ad44ea09 | https://github.com/scikit-learn/scikit-learn/issues/10251 | Extending Criterion | Unless I'm missing something, it's not completely trivial how one can use a custom `sklearn.tree._criterion.Criterion` for a decision tree. See my use case [here](https://stats.stackexchange.com/q/316954/98500).
Things I have tried include:
- Import the `ClassificationCriterion` in Python and subclass it. It seem... | null | https://github.com/scikit-learn/scikit-learn/pull/10325 | null | {'base_commit': '61e722aa126207efcdbc1ddcd4453854ad44ea09', 'files': [{'path': 'sklearn/tree/_criterion.pxd', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [67]}}}, {'path': 'sklearn/tree/_criterion.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [215, 216, 707]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/tree/_criterion.pxd",
"sklearn/tree/_criterion.pyx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 3d19272be75fe32edd4cf01cb2eeac2281305e42 | https://github.com/scikit-learn/scikit-learn/issues/27682 | good first issue
cython | MAINT Directly `cimport` interfaces from `std::algorithm` | Some Cython implementations use interfaces from the standard library of C++, namely `std::algorithm::move` and `std::algorithm::fill` from [`std::algorithm`](https://en.cppreference.com/w/cpp/algorithm/).
Before Cython 3, those interfaces had to be imported directly using the verbose syntax from Cython:
- https://... | null | https://github.com/scikit-learn/scikit-learn/pull/28489 | null | {'base_commit': '3d19272be75fe32edd4cf01cb2eeac2281305e42', 'files': [{'path': 'sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 28)': {'mod': [28, 29, 30, 31, 32, 33]}}}, {'path': 'sklearn/metrics/_pairwise_dista... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp",
"sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8 | https://github.com/fastapi/fastapi/issues/429 | bug
reviewed | OpenAPI: HTTP_422 response does not use custom media_type | **Describe the bug**
FastAPI automatically adds an HTTP_422 response to all paths in the OpenAPI specification that have parameters or request body. This response does not use the media_type of response_class if any custom defined. Furthermore, it overwrites any error object format with the default one.
**To Reprod... | null | https://github.com/fastapi/fastapi/pull/437 | null | {'base_commit': '033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8', 'files': [{'path': 'fastapi/openapi/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_openapi_path', 142)": {'add': [227], 'mod': [162, 163, 164, 165, 175, 176, 177, 178, 179, 191, 219, 220]}, "(None, 'get_openapi_operation_parameters', 72)": {'mod': [74, ... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"fastapi/openapi/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | d692a72bf3809df35d802041211fcd81d56b1dc6 | https://github.com/All-Hands-AI/OpenHands/issues/710 | enhancement
severity:low | Tune rate-limit backoff | **What problem or use case are you trying to solve?**
Due to the AnthropicException error, which indicates that the request limit has been reached, it is necessary to increase the interval between requests. This will prevent system overload and provide a stable service.
**Describe the UX of the solution you'd like*... | null | https://github.com/All-Hands-AI/OpenHands/pull/1120 | null | {'base_commit': 'd692a72bf3809df35d802041211fcd81d56b1dc6', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [179]}}}, {'path': 'agenthub/monologue_agent/utils/memory.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 7, 12]}}}, {'path': 'agenthub/monologue_a... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"agenthub/monologue_agent/utils/memory.py",
"opendevin/schema/config.py",
"opendevin/llm/llm.py",
"agenthub/monologue_agent/utils/monologue.py",
"opendevin/config.py",
"opendevin/controller/agent_controller.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"a... | 1 |
AntonOsika | gpt-engineer | d16396138e8a61f9bc2c3c36ae8c4d7420d23782 | https://github.com/AntonOsika/gpt-engineer/issues/663 | enhancement
sweep | Sweep: Bump the release version in pyproject.toml |
<details open>
<summary>Checklist</summary>
- [X] `pyproject.toml`
> • Locate the line where the version number is specified. It should be under the [project] section and the line should start with "version = ".
> • Determine the new version number according to the semantic versioning rules. If only minor changes o... | null | https://github.com/AntonOsika/gpt-engineer/pull/666 | null | {'base_commit': 'd16396138e8a61f9bc2c3c36ae8c4d7420d23782', 'files': [{'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
scrapy | scrapy | e748ca50ca3e83ac703e02538a27236fedd53a7d | https://github.com/scrapy/scrapy/issues/728 | bug | get_func_args maximum recursion | https://github.com/scrapy/scrapy/blob/master/scrapy/utils/python.py#L149
Today I was working on a project were I have to skip the first item of a list, and then join the rest. Instead of writing the typical slice I tried something much more good looking `Compose(itemgetter(slice(1, None)), Join())` but I found out thi... | null | https://github.com/scrapy/scrapy/pull/809 | null | {'base_commit': 'e748ca50ca3e83ac703e02538a27236fedd53a7d', 'files': [{'path': 'scrapy/tests/test_utils_python.py', 'status': 'modified', 'Loc': {"('UtilsPythonTestCase', 'test_get_func_args', 158)": {'add': [195]}}}, {'path': 'scrapy/utils/python.py', 'status': 'modified', 'Loc': {"(None, 'get_func_args', 134)": {'add... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"scrapy/utils/python.py"
],
"doc": [],
"test": [
"scrapy/tests/test_utils_python.py"
],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 626a0a01471accc32ded29ccca3ed93c4995fcd6 | https://github.com/huggingface/transformers/issues/9954 | TensorFlow
Tests
Good First Issue | [Good first issue] LXMERT TensorFlow Integration tests | The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include int... | null | https://github.com/huggingface/transformers/pull/12497 | null | {'base_commit': '626a0a01471accc32ded29ccca3ed93c4995fcd6', 'files': [{'path': 'tests/test_modeling_tf_lxmert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "('TFLxmertModelTest', 'test_saved_model_creation_extended', 710)": {'add': [770]}, "('TFLxmertModelTest', 'test_pt_tf_model_equivalence',... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"tests/test_modeling_tf_lxmert.py"
],
"config": [],
"asset": []
} | null |
pandas-dev | pandas | 710df2140555030e4d86e669d6df2deb852bcaf5 | https://github.com/pandas-dev/pandas/issues/24115 | Bug
Datetime
Algos | DTA/TDA/PA inplace methods should actually be inplace | At the moment we are using the implementations designed for Index subclasses, which return new objects. | null | https://github.com/pandas-dev/pandas/pull/30505 | null | {'base_commit': '710df2140555030e4d86e669d6df2deb852bcaf5', 'files': [{'path': 'doc/source/whatsnew/v1.0.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [719]}}}, {'path': 'pandas/core/arrays/datetimelike.py', 'status': 'modified', 'Loc': {"('DatetimeLikeArrayMixin', None, 316)": {'mod': [1314]}, "(... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"pandas/core/arrays/datetimelike.py"
],
"doc": [
"doc/source/whatsnew/v1.0.0.rst"
],
"test": [
"pandas/tests/arrays/test_datetimelike.py"
],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 0092ac9a2a20873c7c077cefc4d68397a6df2ada | https://github.com/3b1b/manim/issues/30 | TypeError while running a triangle.py scene | I got an error when I try to run some of the [old_projects/triangle_of_power/triangle.py](https://github.com/3b1b/manim/blob/master/old_projects/triangle_of_power/triangle.py) scene.
My command is:
```
python extract_scene.py -p old_projects/triangle_of_power/triangle.py DrawInsideTriangle
```
But after that I g... | null | https://github.com/3b1b/manim/pull/31 | null | {'base_commit': '0092ac9a2a20873c7c077cefc4d68397a6df2ada', 'files': [{'path': 'mobject/mobject.py', 'status': 'modified', 'Loc': {"('Mobject', 'shift', 121)": {'mod': [123]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"mobject/mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
geekan | MetaGPT | 5cae13fd0a9b6e5a6f3f39c798cf693675795d89 | https://github.com/geekan/MetaGPT/issues/733 | LLM may generate comments inside [CONTENT][/CONTENT] , which causes parsing the JSON to fail. | **Bug description**
```
parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)
```
**Bug solved method**
<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of y... | null | https://github.com/geekan/MetaGPT/pull/963 | null | {'base_commit': '5cae13fd0a9b6e5a6f3f39c798cf693675795d89', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.example.yaml"
],
"asset": []
} | 1 | |
huggingface | transformers | da1d0d404f05523d37b37207a4c1ff419cc1f47f | https://github.com/huggingface/transformers/issues/26809 | Feature request | Add Mistral Models to Flax | ### Feature request
I would like to implement the ~~Llama~~ Mistral model in flax
### Motivation
I've been trying to get familiar with jax and as such I started migrating the llama model, and I think I am at a point where both models are quite comparable in outcome
### Your contribution
Yes I could submi... | null | https://github.com/huggingface/transformers/pull/24587 | null | {'base_commit': 'da1d0d404f05523d37b37207a4c1ff419cc1f47f', 'files': [{'path': 'docs/source/en/index.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [97, 170, 171]}}}, {'path': 'docs/source/en/model_doc/llama.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52, 114]}}}, {'path': 'src/t... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"utils/check_docstrings.py",
"src/transformers/__init__.py",
"src/transformers/utils/dummy_flax_objects.py",
"src/transformers/modeling_flax_utils.py",
"src/transformers/models/mpt/modeling_mpt.py",
"src/transformers/models/bloom/modeling_bloom.py",
"src/transformers/models/fuy... | 1 |
python | cpython | 0aa58fa7a62cd0ee7ec27fa87122425aeff0467d | https://github.com/python/cpython/issues/91043 | build
3.11 | ./Programs/_freeze_module fails with MSAN: Uninitialized value was created by an allocation of 'stat.i' | BPO | [46887](https://bugs.python.org/issue46887)
--- | :---
Nosy | @vstinner
PRs | <li>python/cpython#31633</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```pyth... | null | https://github.com/python/cpython/pull/102510 | null | {'base_commit': '0aa58fa7a62cd0ee7ec27fa87122425aeff0467d', 'files': [{'path': 'Objects/longobject.c', 'status': 'modified', 'Loc': {'(None, None, 140)': {'add': [165]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"Objects/longobject.c"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 0b74c72e1c7fe320440fa97a3d256107ea329307 | https://github.com/pandas-dev/pandas/issues/6403 | Bug
IO Excel | ExcelFile parse of empty sheet fails with "IndexError: list index out of range" | Using pandas 0.13.1 on OS X Mavericks to parse a blank Excel spreadsheet causes "IndexError: list index out of range". Apparently the default header=0 in `_parse_excel` causes the execution of `_trim_excel_header(data[header])`. Perhaps when nrows==0 this should not be executed.
``` python
import pandas as pd
xl_file ... | null | https://github.com/pandas-dev/pandas/pull/10376 | null | {'base_commit': '0b74c72e1c7fe320440fa97a3d256107ea329307', 'files': [{'path': 'ci/requirements-3.4.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}}}, {'path': 'ci/requirements-3.4_SLOW.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}}}, {'path': 'doc/source/install.rst', 's... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/frame.py",
"vb_suite/packers.py",
"pandas/io/excel.py"
],
"doc": [
"doc/source/install.rst",
"doc/source/io.rst",
"doc/source/whatsnew/v0.17.0.txt"
],
"test": [
"pandas/io/tests/test_excel.py"
],
"config": [
"ci/requirements-3.4.txt",
"ci/requir... | 1 |
langflow-ai | langflow | 395c2d7372dffcf1d4f9577f623a2966183595d9 | https://github.com/langflow-ai/langflow/issues/2126 | bug | Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'. | Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'.
**To Reproduce**
Steps to reproduce the behavior:
click to export code, and turn on tweaks
**Screenshots**
<img width="1728" alt="Screenshot 2024-06-10 at 1 42 59 PM" ... | null | https://github.com/langflow-ai/langflow/pull/2130 | null | {'base_commit': '395c2d7372dffcf1d4f9577f623a2966183595d9', 'files': [{'path': 'src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14], 'mod': [37]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c | https://github.com/3b1b/manim/issues/401 | The video doesn't concatenate, I can only get partial videos | I have only the partial videos with the next error:
"[concat @ 000001ff22102900] Impossible to open '0.mp4'
media\videos\example_scenes\480p15\partial_movie_files\WriteStuff\partial_movie_file_list.txt: No such file or directory
File ready at media\videos\example_scenes\480p15\WriteStuff.mp4"
But I don't have t... | null | https://github.com/3b1b/manim/pull/402 | null | {'base_commit': '2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c', 'files': [{'path': 'manimlib/scene/scene.py', 'status': 'modified', 'Loc': {"('Scene', 'combine_movie_files', 758)": {'add': [782, 799], 'mod': [798]}}}, {'path': 'manimlib/utils/output_directory_getters.py', 'status': 'modified', 'Loc': {"(None, 'guarantee_ex... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/scene/scene.py",
"manimlib/utils/output_directory_getters.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
Textualize | rich | 2ee992b17ef5ff3c34f89545b0d57ad4690a64fc | https://github.com/Textualize/rich/issues/2422 | Needs triage | [BUG] Databricks is not identified as Jupyter | You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).
**Describe the bug**
Databricks is not considered as "Jupyter", therefore `JUPYTER_LINES` and `JUPYTER_COLUMNS` has no effect on the console log
... | null | https://github.com/Textualize/rich/pull/2424 | null | {'base_commit': '2ee992b17ef5ff3c34f89545b0d57ad4690a64fc', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'CONTRIBUTORS.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}}}, {'path': 'rich/console.py', 'status': 'modified', 'Loc': {"... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"rich/console.py"
],
"doc": [
"CONTRIBUTORS.md",
"CHANGELOG.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 65d7a9b9902ad85f27b17d759bd13b59c2afc474 | https://github.com/AntonOsika/gpt-engineer/issues/590 | Please update README.md | I recently tried using it by following the steps in the README.md file and it does not work, please update the file.
I keep getting this error when i try to export/set the API key
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the ... | null | https://github.com/AntonOsika/gpt-engineer/pull/592 | null | {'base_commit': '65d7a9b9902ad85f27b17d759bd13b59c2afc474', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'load_env_if_needed', 19)": {'add': [21]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
langflow-ai | langflow | 2b6f70fdb4f0238b2cf6afdb6473a764e090060f | https://github.com/langflow-ai/langflow/issues/226 | Cannot import name 'BaseLanguageModel' from 'langchain.schema' | **Describe the bug**
A clear and concise description of what the bug is.
**Browser and Version**
- N/A
- macOS 13.3.1 (22E261)
**To Reproduce**
Steps to reproduce the behavior:
1. Install miniconda with Python 3.10.10
2. Install langflow
3. Run langflow
4. See error:
ImportError: cannot import name 'Ba... | null | https://github.com/langflow-ai/langflow/pull/229 | null | {'base_commit': '2b6f70fdb4f0238b2cf6afdb6473a764e090060f', 'files': [{'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [706, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, ... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/backend/langflow/interface/agents/custom.py",
"src/backend/langflow/interface/utils.py",
"src/backend/langflow/template/nodes.py",
"src/backend/langflow/interface/tools/util.py",
"src/backend/langflow/interface/agents/prebuilt.py"
],
"doc": [],
"test": [],
"config": [
... | 1 | |
All-Hands-AI | OpenHands | a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6 | https://github.com/All-Hands-AI/OpenHands/issues/834 | bug | Old node modules need cleared out (Cannot read properties of null (reading 'edgesOut') | <!-- You MUST fill out this template. We will close issues that don't include enough information to reproduce -->
#### Describe the bug
trying to run make build on the latest code and it ends up in this error:
Cannot read properties of null (reading 'edgesOut')
#### Setup and configuration
**Current version**:... | null | https://github.com/All-Hands-AI/OpenHands/pull/867 | null | {'base_commit': 'a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6', 'files': [{'path': 'opendevin/logging.py', 'status': 'modified', 'Loc': {"(None, 'get_llm_prompt_file_handler', 118)": {'mod': [123]}, "(None, 'get_llm_response_file_handler', 128)": {'mod': [133]}, '(None, None, None)': {'mod': [139, 144]}}}]} | [] | [
"frontend/node_modules"
] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"opendevin/logging.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"frontend/node_modules"
]
} | null |
localstack | localstack | 2fe8440b619329891db150e45910e8aaad97b7ce | https://github.com/localstack/localstack/issues/4987 | type: bug
status: triage needed
aws:s3 | bug: The Content-MD5 you specified did not match what we received | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I started getting the following exception
```
com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received.
(Service: Amazon S3; Status Code: 400; Error Cod... | null | https://github.com/localstack/localstack/pull/5001 | null | {'base_commit': '2fe8440b619329891db150e45910e8aaad97b7ce', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 883], 'mod': [61, 62]}, "(None, 'check_content_md5', 884)": {'add': [884]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modifi... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/s3/s3_listener.py"
],
"doc": [],
"test": [
"tests/integration/test_s3.py"
],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 8c9d9b0475247f667a0f184f2fbc6d66b955749f | https://github.com/localstack/localstack/issues/11696 | type: bug
status: resolved/fixed
aws:apigateway | bug: API Gateway does not persist correctly when you restart the localstack docker container | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I have a working api gateway created with localstack. When I restart the container and try to query the same url, I get this message:
`{"message": "The API id '0e0cf92f' does not correspond to a deployed AP... | null | https://github.com/localstack/localstack/pull/11702 | null | {'base_commit': '8c9d9b0475247f667a0f184f2fbc6d66b955749f', 'files': [{'path': 'localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, "('ApiGatewayEndpoint', None, 34)": {'mod': [41]}, "('ApiGatewayEndpoint', '__init__', 41)": ... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py",
"localstack-core/localstack/services/apigateway/next_gen/provider.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | d865e5213515cef6344f16f4c77386be9ce8f223 | https://github.com/pandas-dev/pandas/issues/23814 | Performance
Categorical
good first issue | equality comparison with a scalar is slow for category (performance regression) | Are the following 2 ways to compare a series to a scalar equivalent (ignore missing values)? I have to write the hard way in order to take advantage of the category properties.
```python
x = pd.Series(list('abcd') * 1000000).astype('category')
%timeit x == 'a'
# 10 loops, best of 3: 25.2 ms per lo... | null | https://github.com/pandas-dev/pandas/pull/23888 | null | {'base_commit': 'd865e5213515cef6344f16f4c77386be9ce8f223', 'files': [{'path': 'asv_bench/benchmarks/categoricals.py', 'status': 'modified', 'Loc': {"('Constructor', 'setup', 33)": {'add': [48]}, '(None, None, None)': {'add': [70]}}}, {'path': 'doc/source/whatsnew/v0.24.0.rst', 'status': 'modified', 'Loc': {'(None, Non... | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/arrays/categorical.py",
"asv_bench/benchmarks/categoricals.py"
],
"doc": [
"doc/source/whatsnew/v0.24.0.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
geekan | MetaGPT | ef5304961edbc194148bc5fbdb4591d2f27c2cfc | https://github.com/geekan/MetaGPT/issues/795 | Human Engagement does not take effect | 
I tried running the source code about Human Engagement from the blog site. When execution reached
team.hire(
[
SimpleCoder(),
SimpleTester(),
SimpleReviewer(),
SimpleReviewer(is_human=True)
]
)
the SimpleRe... in it | null | https://github.com/geekan/MetaGPT/pull/717 | null | {'base_commit': 'ef5304961edbc194148bc5fbdb4591d2f27c2cfc', 'files': [{'path': 'metagpt/roles/role.py', 'status': 'modified', 'Loc': {"('Role', '__init__', 160)": {'add': [168]}}}, {'path': 'tests/metagpt/roles/test_role.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 14]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/roles/role.py"
],
"doc": [],
"test": [
"tests/metagpt/roles/test_role.py"
],
"config": [],
"asset": []
} | 1 | |
geekan | MetaGPT | f201b2f5f32c2d48eab6632bf103e9b3a92fc999 | https://github.com/geekan/MetaGPT/issues/1213 | RAG Faiss AssertionError | **Environment information**
<!-- Environment:System version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->
- LLM type and model name: ollama ,nomic-embed-text
- System version:win 11
- Python version:3.9
- MetaGPT version or branch:0.8
**Bug descrip... | null | https://github.com/geekan/MetaGPT/pull/1241 | null | {'base_commit': 'f201b2f5f32c2d48eab6632bf103e9b3a92fc999', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}}}, {'path': 'metagpt/configs/embedding_config.py', 'status': 'modified', 'Loc': {"('EmbeddingConfig', None, 16)": {'add': [22, 27, 34, 43]}}}, {... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/rag/schema.py",
"metagpt/configs/embedding_config.py"
],
"doc": [],
"test": [],
"config": [
"config/config2.example.yaml"
],
"asset": []
} | 1 | |
scrapy | scrapy | fe7043a648eac1e0ec0af772a21b283566ecd020 | https://github.com/scrapy/scrapy/issues/3903 | enhancement | Can I get remote server's ip address via response? | Can I get remote server's ip address via response?
For some reason. I'll need get remote site's ip address when parsing response. I looked the document but found nothing.
Any one know that?
Thanks! | null | https://github.com/scrapy/scrapy/pull/3940 | null | {'base_commit': 'fe7043a648eac1e0ec0af772a21b283566ecd020', 'files': [{'path': 'conftest.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'docs/topics/request-response.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [618, 707], 'mod': [39]}}}, {'path': 'scrapy/core/do... | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"scrapy/http/response/__init__.py",
"scrapy/core/downloader/handlers/http11.py",
"scrapy/core/downloader/__init__.py",
"tests/mockserver.py",
"conftest.py"
],
"doc": [
"docs/topics/request-response.rst"
],
"test": [
"tests/test_crawler.py",
"tests/test_crawl.py"
]... | 1 |
psf | requests | 7eaa5ee37f2ef0fb37dc6e9efbead726665810b4 | https://github.com/psf/requests/issues/3659 | URL proxy auth with empty passwords doesn't emit auth header. | I'm using a proxy that requires authentication to send request that receives 302 response with Location header. I would like python.requests to follow this redirect and make request via proxy with specified credentials. But it seems like this doesn't happen, if I provide credentials in HTTPProxyAuth they will work ok f... | null | https://github.com/psf/requests/pull/3660 | null | {'base_commit': '7eaa5ee37f2ef0fb37dc6e9efbead726665810b4', 'files': [{'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {"('HTTPAdapter', 'proxy_headers', 353)": {'mod': [369]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1474]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/adapters.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
pandas-dev | pandas | 923ac2bdee409e4fa8c47414b07f52e036bb21bc | https://github.com/pandas-dev/pandas/issues/25828 | Docs, good first issue | Use Substitution Decorator for CustomBusinessMonthEnd | This is a follow up to https://github.com/pandas-dev/pandas/pull/21093/files#r188805397 which wasn't working with Py27. Now that that is a thing of the past we should be able to use the more idiomatic Substitution approach to generating this docstring | null | https://github.com/pandas-dev/pandas/pull/25868 | null | {'base_commit': '923ac2bdee409e4fa8c47414b07f52e036bb21bc', 'files': [{'path': 'pandas/tseries/offsets.py', 'status': 'modified', 'Loc': {"('_CustomBusinessMonth', None, 972)": {'add': [979, 987, 988], 'mod': [974, 975, 981, 983, 985, 986]}, '(None, None, None)': {'add': [1054, 1061], 'mod': [18]}, "('CustomBusinessMon... | [] | [] | [] | {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["pandas/tseries/offsets.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
ansible | ansible | 59a240cd311f5cedbcd5e12421f1d3bd596d9070 | https://github.com/ansible/ansible/issues/71254 | easyfix, support:core, docs, affects_2.11 | Files contain broken references 404 | <!--- Verify first that your improvement is not already reported on GitHub --> <!--- Also test if the latest release and devel branch are affected too --> <!--- Complete *all* sections as described, this form is processed automatically --> ##### SUMMARY Files contain broken references (return 404): - [ ] docs/... | null | https://github.com/ansible/ansible/pull/71705 | null | {'base_commit': '59a240cd311f5cedbcd5e12421f1d3bd596d9070', 'files': [{'path': 'docs/docsite/rst/scenario_guides/guide_packet.rst', 'status': 'modified', 'Loc': {'(None, None, 126)': {'mod': [126]}}}]} | [] | [] | [] | {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": [], "doc": ["docs/docsite/rst/scenario_guides/guide_packet.rst"], "test": [], "config": [], "asset": []} | null |
ultralytics | yolov5 | f8464b4f66e627ed2778c9a27dbe4a8642482baf | https://github.com/ultralytics/yolov5/issues/2226 | bug | Yolov5 crashes with RTSP stream analysis | ## 🐛 Bug If I want to analyze an rtsp stream with Yolov5 in a docker container, regardless the latest or the v4.0 version, it crashes. ## To Reproduce (REQUIRED) Input: ``` docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server ffmpeg -i video.mp4 -s 640x480 -c:v libx264 -f rtsp ... | null | https://github.com/ultralytics/yolov5/pull/2231 | null | {'base_commit': 'f8464b4f66e627ed2778c9a27dbe4a8642482baf', 'files': [{'path': 'detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12, 13]}, "(None, 'detect', 18)": {'mod': [48, 121]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [97]}}}]} | [] | [] | [] | {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
ultralytics | yolov5 | 8fcdf3b60b2930a4273cab4e3df22b77680ff41d | https://github.com/ultralytics/yolov5/issues/6515 | bug | GPU Memory Leak on Loading Pre-Trained Checkpoint | ### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report. ### YOLOv5 Component Training ### Bug Training YOLO from a checkpoint (*.pt) consumes more GPU memory than training from a pre-trained weight (i.e. yolov5l).... | null | https://github.com/ultralytics/yolov5/pull/6516 | null | {'base_commit': '8fcdf3b60b2930a4273cab4e3df22b77680ff41d', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 65)": {'mod': [123]}}}]} | [] | [] | [] | {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
sherlock-project | sherlock | 2a9297f2444f912c354168c6c0df1c782edace0e | https://github.com/sherlock-project/sherlock/issues/1189 | bug | Sites Giving 404 error or no profile | <!-- ###################################################################### WARNING! IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE ###################################################################### --> ## Checklist <!-- Put x into all boxes (like this [x]) once you ha... | null | https://github.com/sherlock-project/sherlock/pull/1192 | null | {'base_commit': '2a9297f2444f912c354168c6c0df1c782edace0e', 'files': [{'path': 'removed_sites.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1255]}}}, {'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [68, 69, 70, 71, 72, 73, 74, 75, 387, 388, 389, 3... | [] | [] | [] | {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["sherlock/resources/data.json"], "doc": ["removed_sites.md", "sites.md"], "test": [], "config": [], "asset": []} | 1 |
home-assistant | core | 9e41a37284b8796bf3a190fe4bd2a4aee8616ec2 | https://github.com/home-assistant/core/issues/55095 | integration: honeywell | Rate limiting in Honeywell TCC | ### The problem Multiple Honeywell TCC users are reporting rate limit errors in #53981. Restarting HomeAssistant seems to temporarily clear it up ### What is version of Home Assistant Core has the issue? 2021.8.8 ### What was the last working version of Home Assistant Core? _No response_ ### What type... | null | https://github.com/home-assistant/core/pull/55304 | null | {'base_commit': '9e41a37284b8796bf3a190fe4bd2a4aee8616ec2', 'files': [{'path': 'homeassistant/components/honeywell/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [12]}, "(None, 'async_setup_entry', 16)": {'mod': [45]}, "('HoneywellData', None, 68)": {'mod': [105, 111]}, "('Honeywe... | [] | [] | [] | {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["homeassistant/components/honeywell/__init__.py", "homeassistant/components/honeywell/climate.py"], "doc": [], "test": ["tests/components/honeywell/test_init.py"], "config": [], "asset": []} | 1 |
deepfakes | faceswap | f542c58a48e87878028b7639a3c0296bdb351071 | https://github.com/deepfakes/faceswap/issues/3 | dev, advuser | Improve command line usage | Adding a command line args parsing with an help would be great ! Preferably with `argparse` | null | https://github.com/deepfakes/faceswap/pull/13 | null | {'base_commit': 'f542c58a48e87878028b7639a3c0296bdb351071', 'files': [{'path': 'extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [1, 4, 6, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 46, 47, 48, 50, 51,... | [] | [] | [] | {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"} | {"code": ["extract.py", "lib/utils.py", "lib/faces_detect.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
xai-org | grok-1 | e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8 | https://github.com/xai-org/grok-1/issues/14 | Grok implementation details | not an issue but would be nice if it was in the readme/model.py header: 314B parameters Mixture of 8 Experts 2 experts used per token 64 layers 48 attention heads for queries 8 attention heads for keys/values embeddings size: 6,144 rotary embeddings (RoPE) SentencePiece tokenizer; 131,072 tokens Supports acti... | null | https://github.com/xai-org/grok-1/pull/27 | null | {'base_commit': 'e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}}}]} | [] | [] | [] | {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []} | 1 |
pytorch | pytorch | a63524684d02131aef4f2e9d2cea7bfe210abc96 | https://github.com/pytorch/pytorch/issues/84408 | module: onnx, triaged, topic: bug fixes | Exporting the operator ::col2im to ONNX opset version 11 is not supported | ### 🐛 Describe the bug When I converted the model in “.pt” format to onnx format, I received an error that the operator col2im is not supported. ## code import torch from cvnets import get_model from options.opts import get_segmentation_eval_arguments def pt2onnx(): opts = ge... | null | null | https://github.com/pytorch/pytorch/commit/a63524684d02131aef4f2e9d2cea7bfe210abc96 | {'base_commit': 'a63524684d02131aef4f2e9d2cea7bfe210abc96', 'files': [{'path': 'test/onnx/test_pytorch_onnx_no_runtime.py', 'status': 'modified', 'Loc': {"('TestONNXExport', None, 79)": {'add': [1158]}}}, {'path': 'test/onnx/test_pytorch_onnx_onnxruntime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': ... | [] | [] | [] | {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""} | {"code": ["torch/onnx/_constants.py", "torch/onnx/__init__.py", "torch/csrc/jit/serialization/export.cpp"], "doc": [], "test": ["test/onnx/test_pytorch_onnx_onnxruntime.py", "test/onnx/test_pytorch_onnx_no_runtime.py"], "config": [], "asset": []} | null |
Z4nzu | hackingtool | 64a46031b9c22e2a0526d0216eef627a91da880d | https://github.com/Z4nzu/hackingtool/issues/384 | install error | Traceback (most recent call last): File "/usr/share/hackingtool/hackingtool.py", line 106, in <module> os.mkdir(archive) FileNotFoundError: [Errno 2] No such file or directory: '' and i was in root mode also but this showing what to do help | null | https://github.com/Z4nzu/hackingtool/pull/387 | null | {'base_commit': '64a46031b9c22e2a0526d0216eef627a91da880d', 'files': [{'path': 'hackingtool.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [105, 106]}}}, {'path': 'tools/others/socialmedia.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, "('Faceshell', 'run', 48)": {'mod': [51]}}... | [] | [] | [] | {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""} | {"code": ["hackingtool.py", "tools/others/socialmedia.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
ultralytics | yolov5 | b754525e99ca62424c484fe529b6142f6bab939e | https://github.com/ultralytics/yolov5/issues/5160 | bug, Stale | Docker Multi-GPU DDP training hang on `destroy_process_group()` with `wandb` option 3 | Hello, when I try to training using multi gpu based on docker file images. I got the below error. I use Ubuntu 18.04, python 3.8. <<<<<<<<<<<<<<<<<ERROR>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ``` root@5a70a5f2d489:/usr/src/app# python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data data.yaml -... | null | https://github.com/ultralytics/yolov5/pull/5163 | null | {'base_commit': 'b754525e99ca62424c484fe529b6142f6bab939e', 'files': [{'path': 'utils/loggers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 8, 17, 22]}}}, {'path': 'utils/loggers/wandb/wandb_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24, 25, 27, 28, 29, 30, 3... | [] | [] | [] | {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["utils/loggers/wandb/wandb_utils.py", "utils/loggers/__init__.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
AntonOsika | gpt-engineer | 4c77f62f806567644571b6b3f496f7b332b12327 | https://github.com/AntonOsika/gpt-engineer/issues/656 | Remove unnecessary configs such as: tdd, tdd_plus, clarify, respec | If we have time: benchmark them and store insights before deletion | null | https://github.com/AntonOsika/gpt-engineer/pull/737 | null | {'base_commit': '4c77f62f806567644571b6b3f496f7b332b12327', 'files': [{'path': 'gpt_engineer/preprompts/fix_code', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/spec', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/unit_tests', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engi... | [] | [] | [] | {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["gpt_engineer/steps.py"], "doc": [], "test": ["tests/test_collect.py"], "config": [], "asset": ["gpt_engineer/preprompts/unit_tests", "gpt_engineer/preprompts/fix_code", "gpt_engineer/preprompts/spec"]} | 1 |
OpenInterpreter | open-interpreter | d57ed889c27d5e95e39ea7db59fe518b5f18f942 | https://github.com/OpenInterpreter/open-interpreter/issues/209 | Bug | UnicodeDecodeError - help will be appriciate! | _Exception in thread Thread-1 (save_and_display_stream): Traceback (most recent call last): File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run ... | null | https://github.com/OpenInterpreter/open-interpreter/pull/742 | null | {'base_commit': 'd57ed889c27d5e95e39ea7db59fe518b5f18f942', 'files': [{'path': 'interpreter/code_interpreters/subprocess_code_interpreter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('SubprocessCodeInterpreter', 'start_process', 39)": {'add': [42, 50]}}}]} | [] | [] | [] | {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["interpreter/code_interpreters/subprocess_code_interpreter.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
ansible | ansible | d9e798b48f62fdc2b604a84c36eb83c985f87754 | https://github.com/ansible/ansible/issues/82683 | bug, has_pr, P3, affects_2.13, affects_2.16 | ansible fact_cache permissions changed after ansible-core update | ### Summary After update to ansible-core 2.13.2 or higher (It is still an issue with 2.16.3), the default permission of ansible fact cache files changed. ansible-core 2.13.1 is OK and uses 0644 on the fact files. 2.13.2 and higher uses 0600. I could not figure out how to change the behavior back. We need read... | null | https://github.com/ansible/ansible/pull/82761 | null | {'base_commit': 'd9e798b48f62fdc2b604a84c36eb83c985f87754', 'files': [{'path': 'lib/ansible/plugins/cache/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30]}, "('BaseFileCacheModule', 'set', 154)": {'add': [166]}}}]} | [] | [] | [] | {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["lib/ansible/plugins/cache/__init__.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | 98947d173e3f1667eba29c904f681047dea9de90 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6010 | bug-report | [Bug]: Extension Updates Overwrite with a git reset --hard | ### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? I can't rely on users config files not being overwritten. If I use `install.py` to rename them, `install.py` does not run until next cold boot. This causes the extension to not ... | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646 | null | {'base_commit': '98947d173e3f1667eba29c904f681047dea9de90', 'files': [{'path': 'modules/extensions.py', 'status': 'modified', 'Loc': {"('Extension', None, 17)": {'mod': [68]}, "('Extension', 'pull', 68)": {'mod': [70]}}}, {'path': 'modules/ui_extensions.py', 'status': 'modified', 'Loc': {"(None, 'apply_and_restart', 23... | [] | [] | [] | {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""} | {"code": ["modules/extensions.py", "modules/ui_extensions.py"], "doc": [], "test": [], "config": [], "asset": []} | 1 |