organization: string
repo_name: string
base_commit: string
iss_html_url: string
iss_label: string
title: string
body: string
code: null
pr_html_url: string
commit_html_url: string
file_loc: string
own_code_loc: list
ass_file_loc: list
other_rep_loc: list
analysis: dict
loctype: dict
iss_has_pr: int64
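The schema above can be sketched as a `TypedDict` for downstream loading code. This is a minimal sketch only: the field names and dtypes come directly from the listing, but the element types of the `list` fields and the exact nullability of each field are assumptions inferred from the sample records below.

```python
# Sketch of the record schema listed above. Field names follow the column
# listing exactly; nullability and inner element types are assumptions.
from typing import Optional, TypedDict

class IssueRecord(TypedDict):
    organization: str
    repo_name: str
    base_commit: str
    iss_html_url: str
    iss_label: Optional[str]        # absent/empty on some records
    title: str
    body: str
    code: None                       # null in every sample row
    pr_html_url: Optional[str]
    commit_html_url: Optional[str]
    file_loc: str                    # repr of {'base_commit': ..., 'files': [...]}
    own_code_loc: list               # e.g. [{"Loc": [12], "path": None}]
    ass_file_loc: list
    other_rep_loc: list              # cross-repo pointers, e.g. [{"pro": ..., "path": [...]}]
    analysis: dict                   # iss_type / iss_reason / loc_way / loc_scope / info_type
    loctype: dict                    # code / doc / test / config / asset buckets
    iss_has_pr: Optional[int]        # int64 in the schema; null in these rows

print(len(IssueRecord.__annotations__))  # 17
```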
deepfakes
faceswap
629c02a61e1ad5f769f8f7388a091d5ce9aa8160
https://github.com/deepfakes/faceswap/issues/1254
Can't Open GUI on Windows
**Describe the bug** Whenever I try to open the GUI of Faceswap, I get an error and it doesn't open. I am on Windows, and I have uninstalled and reinstalled multiple times, including redoing the conda environment. CLI functions work, but the main GUI does not open, either from the shortcut or a manual terminal run. I ...
null
null
null
{'base_commit': '629c02a61e1ad5f769f8f7388a091d5ce9aa8160', 'files': [{'path': 'requirements/_requirements_base.txt', 'Loc': {'(None, None, 15)': {'mod': [15]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements/_requirements_base.txt" ], "asset": [] }
null
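The `file_loc` values in these records are Python-literal dict strings whose `Loc` keys are themselves reprs of `(class, function, start_line)` tuples. Assuming that format holds throughout, both levels can be decoded with `ast.literal_eval`; the sketch below extracts the modified lines from the first record's `file_loc`.

```python
import ast

# A `file_loc` value copied verbatim from the first record above.
file_loc_repr = (
    "{'base_commit': '629c02a61e1ad5f769f8f7388a091d5ce9aa8160', "
    "'files': [{'path': 'requirements/_requirements_base.txt', "
    "'Loc': {'(None, None, 15)': {'mod': [15]}}, 'status': 'modified'}]}"
)
file_loc = ast.literal_eval(file_loc_repr)

def modified_lines(file_loc):
    """Collect (path, line) pairs for every modified line in a file_loc dict.

    Each Loc key is the repr of a (class, function, start_line) tuple, so it
    is parsed with ast.literal_eval as well. Records where Loc is empty or a
    bare list (both occur in this dataset) are skipped.
    """
    out = []
    for f in file_loc.get("files", []):
        loc = f.get("Loc") or {}
        if not isinstance(loc, dict):
            continue
        for key, edits in loc.items():
            _cls, _func, _start = ast.literal_eval(key)  # context, unused here
            out.extend((f["path"], line) for line in edits.get("mod", []))
    return out

print(modified_lines(file_loc))  # [('requirements/_requirements_base.txt', 15)]
```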
deepfakes
faceswap
9696b5606fd0963814fc0c3644565aa60face69d
https://github.com/deepfakes/faceswap/issues/462
Modify extractor to focus on mouth
I'd like to modify the extractor script to focus on the lower half of the face - specifically the mouth area. I'm experimenting with changing people's mouth movements, and I want to train a higher resolution "mouth only" network, so I can create new speech patterns that are re-composited onto the original footage. ...
null
null
null
{'base_commit': '9696b5606fd0963814fc0c3644565aa60face69d', 'files': [{'path': 'lib/aligner.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/aligner.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
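Unlike `file_loc`, the `analysis` and `loctype` fields are plain JSON strings, so they parse with `json.loads`. As a plausible downstream use (an assumption, not part of the dataset), the coded `iss_type` values can be tallied across records:

```python
import json
from collections import Counter

# Two `analysis` values copied verbatim from the records above.
analysis_rows = [
    '{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", '
    '"loc_scope": "0", "info_type": "Code" }',
    '{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", '
    '"loc_scope": "0", "info_type": "Code" }',
]

# Tally the issue-type codes; the meaning of each code is not defined in
# this dump, so they are counted as opaque labels.
counts = Counter(json.loads(row)["iss_type"] for row in analysis_rows)
print(dict(counts))  # {'2': 1, '3': 1}
```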
deepfakes
faceswap
9fb70f13552927bea1bf65fe35f4866f99171eaf
https://github.com/deepfakes/faceswap/issues/656
Not showing graph in gui
in log gui: `Exception in Tkinter callback Traceback (most recent call last): File "/usr/lib/python3.6/tkinter/__init__.py", line 1705, in __call__ return self.func(*args) File "/home/telecast/Documents/faceswap/lib/gui/command.py", line 461, in <lambda> command=lambda cmd=action: cmd(self.command)) ...
null
null
null
{'base_commit': '9fb70f13552927bea1bf65fe35f4866f99171eaf', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Config" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "Version" ] }
null
deepfakes
faceswap
e518206c8ef935ebc1b1ff64ae2901cc8ef05f94
https://github.com/deepfakes/faceswap/issues/57
Cannot install tensorflow-gpu requirement
Tried installing the requirements-gpu.txt and get this error: Collecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) Cache entry deserialization failed, entry ignored Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versio...
null
null
null
{'base_commit': 'e518206c8ef935ebc1b1ff64ae2901cc8ef05f94', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {'(None, None, 6)': {'mod': [6]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\nDependency declaration" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements-gpu.txt" ], "asset": [] }
null
deepfakes
faceswap
51f1993d93e0ffb581d44416f327f0cf731c34e8
https://github.com/deepfakes/faceswap/issues/209
doesn't work on 2GB GTX 960 even with LowMem model (what params could be reduced?)
LowMem is different from the common model with 2 lines: ENCODER_DIM = 512 # instead of 1024 #x = self.conv(1024)(x) - commented out. But it's still not enough to run under Ubuntu 16.04, cuda8, 1.7Gb of free video RAM. It fails with OOM on any batch size, even with bs=1 and bs=2. What about having s...
null
null
null
{'base_commit': '51f1993d93e0ffb581d44416f327f0cf731c34e8', 'files': [{'path': 'faceswap.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "faceswap.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
deepfakes
faceswap
a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd
https://github.com/deepfakes/faceswap/issues/1361
Bounding boxes coordinates
It has been 2 weeks I have been working on it but cannot find the solution. I want the bounding boxes on the original image, of the result that is produced by the "Extract" process of faceswap code. "Extract" writes the faces extracted from the input image(s). I just want the coordinates from which this face is e...
null
null
null
{'base_commit': 'a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd', 'files': [{'path': 'lib/align/detected_face.py', 'Loc': {"('DetectedFace', '__init__', 82)": {'mod': [84, 85, 86, 87]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "lib/align/detected_face.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
49582c35919097585699598ad0ca49fe3f2117b5
https://github.com/3b1b/manim/issues/659
Problem with FadeOutAndShift
t3 text is not going through FadeOutAndShift. Also tell me how I can FadeOutAndShift t1 and t3 together ```# python -m manim try3.py test1 -pm from manimlib.imports import * class test1(Scene): def construct(self): t1=TextMobject("Hi!") t2=TextMobject("My name is") t3=TextMobject("Girish") t1....
null
null
null
{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {"('Scene', 'play', 455)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manimlib/scene/scene.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
ce06e58505dff26cccd497a9bd43969f74ae0da9
https://github.com/3b1b/manim/issues/274
ImportError: No module named animation
I've installed manim on Win10. After run "python extract_scene.py -s example_scenes.py", the next error is shown in the python interactive interpretor: > Traceback (most recent call last): File "extract_scene.py", line 15, in <module> from scene.scene import Scene File "G:\python\manim\scene\scene.py...
null
null
null
{'base_commit': 'ce06e58505dff26cccd497a9bd43969f74ae0da9', 'files': [{'path': 'animation/transform.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "animation/transform.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
55ece141e898577ce44e71d718212a1ee816ed74
https://github.com/3b1b/manim/issues/658
How to add sound to video?
null
null
null
{'base_commit': '55ece141e898577ce44e71d718212a1ee816ed74', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {"('Scene', 'add_sound', 543)": {'mod': []}}, 'status': 'modified'}, {'path': 'old_projects/clacks/solution2/simple_scenes.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "old_projects/clacks/solution2/simple_scenes.py", "manimlib/scene/scene.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
97a0a707d759e0235450ea8c20f55a2529bd2973
https://github.com/3b1b/manim/issues/878
Swedish characters not working
Include at least: 1. Steps to reproduce the issue (e.g. the command you ran) 2. The unexpected behavior that occurred (e.g. error messages or screenshots) 3. The environment (e.g. operating system and version of manim) I am new to manim and want to include swedish characters in a text, but it gives an error m...
null
null
null
{}
[ { "Loc": [ 12 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
6880ebcbc2525b2f3c0731439bef7ff981b4b5b4
https://github.com/3b1b/manim/issues/924
Reconsidering TEX_USE_CTEX / using XeLaTeX
I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315). I have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX renderin...
null
null
null
{}
[]
[]
[ { "pro": "ManimCommunity" }, { "pro": "manim", "path": [ "manim/utils/tex_templates.py" ] } ]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manim/utils/tex_templates.py" ], "doc": [], "test": [], "config": [], "asset": [ "ManimCommunity" ] }
null
3b1b
manim
49582c35919097585699598ad0ca49fe3f2117b5
https://github.com/3b1b/manim/issues/660
ColorByCaracter help
I want to color only theta of ```{ e }^{ i\theta }``` I was going through ColorByCaracter in 3_text_like_arrays.py . But I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/servic...
null
null
null
{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'Loc': {"('TexMobject', None, 132)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manimlib/mobject/svg/tex_mobject.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
32abbb9371308e8dff7410de387fe78e64b6fe7a
https://github.com/3b1b/manim/issues/700
OSError: No file matching Suv.svg in image directory
I've tried putting the .SVG image into */media/designs/svg_images. But when I want to quote it in the .py file it still reports errors: ``` Traceback (most recent call last): File "/home/jason/Documents/manim/manimlib/extract_scene.py", line 155, in main scene = SceneClass(**scene_kwargs) File "/home/jas...
null
null
null
{'base_commit': '32abbb9371308e8dff7410de387fe78e64b6fe7a', 'files': [{'path': 'manimlib/mobject/svg/svg_mobject.py', 'Loc': {"('SVGMobject', 'ensure_valid_file', 49)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manimlib/mobject/svg/svg_mobject.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
b74e5ca254bccc1575b4c7b7de3c1cb2010aac75
https://github.com/3b1b/manim/issues/694
can't graph trigonometric function of secx, cscx, cotx, tanx,...
source code: class PlotFunctions(GraphScene): CONFIG = { "x_min" : -10, "x_max" : 10.3, "y_min" : -1.5, "y_max" : 1.5, "graph_origin" : ORIGIN , "function_color" : RED , "axes_color" : GREEN, "x_labeled_nums" :range(-10,12,2), } ...
null
null
null
{'base_commit': 'b74e5ca254bccc1575b4c7b7de3c1cb2010aac75', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {"('VGroup', None, 868)": {'mod': []}}, 'status': 'modified'}, {'Loc': [17], 'path': None}]}
[ { "Loc": [ 17 ], "path": null } ]
[]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code" }
{ "code": [ null, "manimlib/mobject/types/vectorized_mobject.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
fc153bb49a529e8cbb02dd1514f06387cbf0ee6e
https://github.com/3b1b/manim/issues/1206
Manim can't find my png file
I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as "shirt.png" in my manim folder. I then ran the following code: ``` from manimlib.imports import * class OutFit(Scene): def construct(se...
null
null
null
{'base_commit': 'fc153bb49a529e8cbb02dd1514f06387cbf0ee6e', 'files': [{'path': 'manimlib/animation/fading.py', 'Loc': {"('FadeIn', None, 34)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manimlib/animation/fading.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
3b1b
manim
64c960041b5b9dcb0aac50019268a3bdf69d9563
https://github.com/3b1b/manim/issues/608
What is VMobject exactly?
Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`? I am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is correct thing to do or how will it af...
null
null
null
{'base_commit': '64c960041b5b9dcb0aac50019268a3bdf69d9563', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "manimlib/mobject/types/vectorized_mobject.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
All-Hands-AI
OpenHands
a2779fe2f6c9ab29508676f21242b1c6b88e2f67
https://github.com/All-Hands-AI/OpenHands/issues/5229
documentation enhancement fix-me
[Documentation]: Micro-agents
**What problem or use case are you trying to solve?** Currently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented. To do so, we can: 1. read the implementation of codeact agent 2. read an example microagent in `openhands/agenthub/codeact_a...
null
null
null
{'base_commit': 'a2779fe2f6c9ab29508676f21242b1c6b88e2f67', 'files': [{'path': 'microagents/README.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "microagents/README.md" ], "test": [], "config": [], "asset": [] }
null
All-Hands-AI
OpenHands
08a2dfb01af1aec6743f5e4c23507d63980726c0
https://github.com/All-Hands-AI/OpenHands/issues/635
bug
Ollama support issue.
<!-- You MUST fill out this template. We will close issues that don't include enough information to reproduce --> #### Describe the bug When trying to configure OpenDevin to run with Ollama there are requests that are being sent to the ollama server like this: ![image](https://github.com/OpenDevin/OpenDevin/as...
null
null
null
{'base_commit': '08a2dfb01af1aec6743f5e4c23507d63980726c0', 'files': [{'path': 'opendevin/llm/LOCAL_LLM_GUIDE.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [ "opendevin/llm/LOCAL_LLM_GUIDE.md" ], "test": [], "config": [], "asset": [] }
null
scrapy
scrapy
d636e5baa8a077e2869bfe3b76525efec42392ec
https://github.com/scrapy/scrapy/issues/2276
can LinkExtractor extract scrapy.link with node info
the html is like below, i want to extract the link `/example/category/pg{page}/`, but the `scrapy.link` does not contains the node info(`currentPage` and `totalPage`), how can i extract the link with the node info ``` html <div class="page-box"> <div page-url="/example/category/pg{page}/" totalPage=...
null
null
null
{'base_commit': 'd636e5baa8a077e2869bfe3b76525efec42392ec', 'files': [{'path': 'scrapy/http/response/text.py', 'Loc': {"('TextResponse', 'css', 117)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "scrapy/http/response/text.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
scrapy
scrapy
892467cb8a40c54840284a08d0f98ab1b3af7bc4
https://github.com/scrapy/scrapy/issues/4565
AttributeError: module 'resource' has no attribute 'getrusage'
version : Scrapy 2.1.0 ``` 2020-05-11 20:05:28 [scrapy.core.engine] INFO: Spider opened 2020-05-11 20:05:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) 2020-05-11 20:05:28 [dy] INFO: Spider opened: dy 2020-05-11 20:05:28 [scrapy.utils.signal] ERROR: Error...
null
null
null
{'base_commit': '892467cb8a40c54840284a08d0f98ab1b3af7bc4', 'files': [{'path': 'scrapy/commands/settings.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "scrapy/commands/settings.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
commaai
openpilot
ce9559cc54433244cb01d4781302eb072a3fd519
https://github.com/commaai/openpilot/issues/30078
bug fingerprint car ford
2023 Ford Maverick Not Recognized
### Describe the bug Car Not Recognized Looks like all the values for firmware are the same as what is already in values.py ### Which car does this affect? Ford Maverick 2023 ### Provide a route where the issue occurs 66833387c2bbbca0|2023-09-27--21-13-05 ### openpilot version master-ci ### Additional info ...
null
null
null
{'base_commit': 'ce9559cc54433244cb01d4781302eb072a3fd519', 'files': []}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66
https://github.com/psf/requests/issues/775
Content marked as consumed in 0.13.6
Content is immediately marked as consumed in 0.13.6, causing calls to e.g. response.iter_content() to throw an error. Test code (tested with python 2.6): ``` import requests r = requests.get('http://docs.python-requests.org/') if r._content_consumed: print 'consumed' else: print 'not consumed' ``` In 0.13.5...
null
null
null
{'base_commit': '27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66', 'files': [{'path': 'requests/models.py', 'Loc': {"('Request', '__init__', 47)": {'mod': [62]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "requests/models.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
2de907ad778de270911acaffe93883f0e2729a4a
https://github.com/psf/requests/issues/4602
Chunk-encoded request doesn't recognize iter_content generator
Passing a generator created by iter_content() as request data raises "TypeError: sendall() argument 1 must be string or buffer, not generator". ## Expected Result The POST request successfully delives the content from the GET request. ## Actual Result A TypeError is raised: ``` Traceback (most recent call...
null
null
null
{}
[]
[]
[ { "pro": "requests" }, { "pro": "toolbelt", "path": [ "requests_toolbelt/streaming_iterator.py" ] } ]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [ "requests_toolbelt/streaming_iterator.py" ], "doc": [], "test": [], "config": [], "asset": [ "requests" ] }
null
psf
requests
f17ef753d2c1f4db0d7f5aec51261da1db20d611
https://github.com/psf/requests/issues/3031
Needs Info Question/Not a bug
[WinError 10048] Only one usage of each socket address ...
I notice that despite using requests.Session() - I still seem to be creating new connections/sockets which eventually exhaust (TIME_WAIT) and I get the following error: > [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',)) ``` s = requests.Session() data = ...
null
null
null
{}
[ { "Loc": [ 8 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
6f659a41794045292b836859f1281d33eeed8260
https://github.com/psf/requests/issues/3740
File download weirdness
I noticed this issue building conda recipes. Conda uses requests to download files from the internet. The file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz (link found here: https://dakota.sandia.gov/download.html) Downloading with cur...
null
null
null
{'base_commit': '6f659a41794045292b836859f1281d33eeed8260', 'files': [{'path': 'docs/user/quickstart.rst', 'Loc': {'(None, None, 166)': {'mod': [166]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "docs/user/quickstart.rst" ], "test": [], "config": [], "asset": [] }
null
psf
requests
62176a1ca7207db37273365b4691ed599203b828
https://github.com/psf/requests/issues/3849
Received response with content-encoding: gzip, but failed to decode it
```python import requests requests.get('http://gett.bike/') ``` This code raises the following exception: ```python ContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.', error('Error -3 while decompressing data: incorrect data check',)) ``` Arch linux x64 requests...
null
null
null
{'base_commit': '62176a1ca7207db37273365b4691ed599203b828', 'files': [{'path': 'src/requests/api.py', 'Loc': {"(None, 'request', 14)": {'mod': [24]}}, 'status': 'modified'}, {'Loc': [4], 'path': None}]}
[ { "Loc": [ 4 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null, "src/requests/api.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7
https://github.com/psf/requests/issues/3015
Ability to set timeout after response
For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. There are a few scenarios where this is useful but one of the main patterns/use cases is this: ``` import requests import socket # May or may not subclass threading.Thread class Getter(object): ...
null
null
null
{}
[ { "Loc": [ 20 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
psf
requests
1285f576ae0a848de27af10d917c19b60940d1fa
https://github.com/psf/requests/issues/3774
bad handshake error with ssl3
I have an inhouse IIS server with ssl3 but an expired certificate, so I used requests without certificate verification and it was working fine with requests 2.11.1. But after I upgrade requests to 2.12.0, there was an error occured. the code is: ... requests.get('https://10.192.8.89:8080/yps_report', verify=False...
null
null
null
{}
[ { "Loc": [ 41 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nThe user's code from one of the comments below needs to be placed in here", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
a6d4c3ff7cf43c24be6622102cee834fc5096496
https://github.com/ansible/ansible/issues/78759
module support:core bug affects_2.9
"Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>.
### Summary When trying to pass a variable called i.e. sysctl.values to loop, I will get the above error. ### Issue Type Bug Report ### Component Name debug (only used for debugging) ### Ansible Version ```console $ ansible --version ansible 2.9.27 config file = /home/rf/.ansible.cfg conf...
null
null
null
{}
[ { "Loc": [ 59 ], "path": null } ]
[]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
8af920c8924b2fd9a0e4192c3c7e6085b687bfdc
https://github.com/ansible/ansible/issues/82382
bug affects_2.16
Ansible core 2.16.1 broke AnsibleUnsafeBytes iteration
### Summary Upgrading form 2.16.0 to 2.16.1 (Ansible 9.0.1 to 9.1.0), iterating over AnsibleUnsafeBytes does not create a list of numbers anymore. ### Issue Type Bug Report ### Component Name core, unsafe_proxy ### Ansible Version ```console $ ansible --version ansible [core 2.16.1] config...
null
null
null
{'base_commit': '8af920c8924b2fd9a0e4192c3c7e6085b687bfdc', 'files': [{'path': 'Version', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Other" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "Version" ] }
null
ansible
ansible
bcf9cd1e2a01d8e111a28db157ebc255a5592dca
https://github.com/ansible/ansible/issues/20085
cloud affects_2.1 module docker bug
docker_container task fail on exit code
Unless i'm missing something i expect that if I were to do something like the following the task would fail? But it does not 😟 ```yaml tasks: docker_container: name: "exit-test" image: "ubuntu:latest" command: "bash -c 'exit 123'" ``` ##### ISSUE TYPE - Bug Report ####...
null
null
null
{}
[]
[]
[ { "org": "ansible", "pro": "ansible-modules-core", "path": [ "cloud/docker/docker_container.py" ] } ]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [ "cloud/docker/docker_container.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
d5324c11a0c389d2ede8375e2024cb37b9eb8ce5
https://github.com/ansible/ansible/issues/19352
affects_2.0 module support:core bug files
Template update convert \n to actual new line
##### ISSUE TYPE Bug Report ##### COMPONENT NAME template ##### ANSIBLE VERSION 2.0 and higher CONFIGURATION ``` [ssh_connection] control_path = %(directory)s/%%C ``` ##### OS / ENVIRONMENT Mac OS X 10.11.6 Centos 6.x, 7.x SUMMARY In the input .j2 file, we substitute a variable with an ...
null
null
null
{'base_commit': 'd5324c11a0c389d2ede8375e2024cb37b9eb8ce5', 'files': [{'path': 'lib/ansible/template/__init__.py', 'Loc': {}}, {'path': 't.yml', 'Loc': [60]}]}
[ { "path": "t.yml", "Loc": [ 60 ] } ]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code" }
{ "code": [ "lib/ansible/template/__init__.py" ], "doc": [], "test": [], "config": [ "t.yml" ], "asset": [] }
null
ansible
ansible
a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7
https://github.com/ansible/ansible/issues/73922
python3 module support:core bug affects_2.10
cron: Remove/delete an environment variable
### Summary With `env=yes`, `cron` add environment variable (with the `name` & `value`) parameters. I though that having `env` + `state=absent` would remove said variable, but that's not the case (the cron file is actually removed). As such there is no way to remove a variable and the more obvious way to attempt t...
null
null
null
{'base_commit': 'a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7', 'files': [{'path': 'lib/ansible/modules/cron.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [ "lib/ansible/modules/cron.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ansible
ansible
7490044bbe28029afa9e3099d86eae9fda5f88b7
https://github.com/ansible/ansible/issues/11351
affects_2.0 affects_2.3 c:executor/playbook_executor support:core feature P3
enable do/until with async tasks
##### ISSUE TYPE Feature Idea ##### COMPONENT NAME core ##### ANSIBLE VERSION 2.0 ##### CONFIGURATION ##### OS / ENVIRONMENT ##### SUMMARY When a task is marked as async, there is no way to loop until a condition is met. With poll:0 and async_status you can poll for async task to complete but you cannot repeat t...
null
null
null
{}
[ { "path": "/tmp/async-test.yml", "Loc": [ 33 ] } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Config" }
{ "code": [], "doc": [], "test": [], "config": [ "/tmp/async-test.yml" ], "asset": [] }
null
ansible
ansible
833970483100bfe89123a5718606234115921aec
https://github.com/ansible/ansible/issues/67993
cloud aws openstack module support:community affects_2.5 bug traceback system
Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB)
##### SUMMARY We are using Ansible 2.5 to deploy AWS resources in our environment. From March 02, 2019 our deployment is failing with the below error. ERROR: ===== TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] *** An exception occurred during task execution. To see the full traceba...
null
null
null
{}
[ { "Loc": [ 20 ], "path": null } ]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code" }
{ "code": [ null ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
6f718cee740e7cd423edd1136db78c5be49fa7c0
https://github.com/ultralytics/yolov5/issues/2467
question Stale
Problems with weights
## ❔Question Hello, I have just run trainy.py script with my data and faced a problem - you wrote that weights are saved in runs directory, but in my case I have not found them. Everything is fine with hyp.yaml and opt.yaml but folder "weights" is empty. Do you have any guesses about this issue? ## Additional co...
null
null
null
{'base_commit': '6f718cee740e7cd423edd1136db78c5be49fa7c0', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [470, 454]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2\nweights cannot be found", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
06831aa9e905e0fa703958f6b3f3db443cf477f3
https://github.com/ultralytics/yolov5/issues/9079
Does adjusting the number of classes of a pretrained model work?
### Search before asking - [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. * ### Question Hi everyone, I'm a bit confused about how to properly load a pretrained model ...
null
null
null
{'base_commit': '06831aa9e905e0fa703958f6b3f3db443cf477f3', 'files': [{'path': 'train.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
ee8988b8a2ed07af1b7c8807d39aad35369f0e28
https://github.com/ultralytics/yolov5/issues/8
Stale
training actually can not work
After trained on several epochs, I found the mAP is still very low. Does the training really works? ``` Epoch gpu_mem GIoU obj cls total targets img_size 14/299 6.4G 0.02273 0.002925 0.0003764 0.02603 11 640: 100%|████████████████████████████████████████████...
null
null
null
{'base_commit': 'ee8988b8a2ed07af1b7c8807d39aad35369f0e28', 'files': [{'path': 'models/yolov5s.yaml', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [ "models/yolov5s.yaml" ], "asset": [] }
null
ultralytics
yolov5
901243c7806be07b31073440cf721e73532a0734
https://github.com/ultralytics/yolov5/issues/894
question
training stuck when loading dataset
## ❔Question I follow the instructions to run coco128, ``` python train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights '', ``` the ouput is ``` Image sizes 640 train, 640 test Using 8 dataloader workers Starting training for 5 epochs... Epoch gpu...
null
null
null
{'base_commit': '901243c7806be07b31073440cf721e73532a0734', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [388]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
63060910a68bfde238872d629ab88e2e7bc736e8
https://github.com/ultralytics/yolov5/issues/3735
question Stale
Results interpretation
Hello, Another question to do with results interpretation. I am not very sure how to interpret the results.txt file that gets generated after training is over. Also, is there any way to extract the number of false positives, true positives, false negatives, as well as to see the total mean average accuracy and loss ...
null
null
null
{'base_commit': '63060910a68bfde238872d629ab88e2e7bc736e8', 'files': [{'path': 'README.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
dc54ed5763720ced4f6784552c47534af5413d45
https://github.com/ultralytics/yolov5/issues/6062
question Stale
How to add some private information into .pt file?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question yolov5 is a great algorithm, but I'm having some problems. Specifically, I want to add so...
null
null
null
{'base_commit': 'dc54ed5763720ced4f6784552c47534af5413d45', 'files': [{'path': 'train.py', 'Loc': {"(None, 'train', 58)": {'mod': [377]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
79af1144c270ac7169553d450b9170f9c60f92e4
https://github.com/ultralytics/yolov5/issues/4517
question Stale
what is moasic and what is its default and how to delete it
what is the meaning of moasic where I can find its default parameter how to stop moasic and stop augmentation in general I use only this line is it augment data by default or not? how to stop augmentation if exist ``` !python train.py --img 640 --batch 16 --epochs 400 --data /mydrive/data.yaml \ --weigh...
null
null
null
{'base_commit': '79af1144c270ac7169553d450b9170f9c60f92e4', 'files': [{'path': 'data/hyps/hyp.scratch.yaml', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "配置文件" }
{ "code": [], "doc": [], "test": [], "config": [ "data/hyps/hyp.scratch.yaml" ], "asset": [] }
null
ultralytics
yolov5
0d8a1842373e55f8f639adede0c3d378f1ffbea5
https://github.com/ultralytics/yolov5/issues/4717
bug
[onnx export.py error] Unsupported ONNX opset version
`ONNX: starting export with onnx 1.10.1...` `ONNX: export failure: Unsupported ONNX opset version: 13` I'm using yolov5-5.0, pytorch1.7.0+cu101 and python3.7.9. How to solve it?
null
null
null
{'base_commit': '0d8a1842373e55f8f639adede0c3d378f1ffbea5', 'files': [{'path': 'export.py', 'Loc': {"(None, 'parse_opt', 166)": {'mod': [179]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "export.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
886f1c03d839575afecb059accf74296fad395b6
https://github.com/ultralytics/yolov5/issues/2432
question
Experiments on GhostNet
## ❔Question I am just wondering about the performance when using GhostNet in experimental.py. Could you please share this experiment? ## Additional context
null
null
null
{'base_commit': '886f1c03d839575afecb059accf74296fad395b6', 'files': [{'path': 'Models/yolov5l.yaml', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "配置" }
{ "code": [], "doc": [], "test": [], "config": [ "Models/yolov5l.yaml" ], "asset": [] }
null
ultralytics
yolov5
2026d4c5eb4e3e48b5295106db85c844000d95d1
https://github.com/ultralytics/yolov5/issues/1498
question Stale
calculate fps on local system
## ❔Question I have been using the code to do detection from webcam. How can I know what is the speed of detection (fps) in my local system?
null
null
null
{'base_commit': '2026d4c5eb4e3e48b5295106db85c844000d95d1', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 61)': {'mod': [61]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
14797370646d25e226f0093a5982d5cd54ba729a
https://github.com/ultralytics/yolov5/issues/2797
question
large scale dataset use --cache-images flag
## ❔Question hello ~ , i have dataset with a million images about 450GB and i want to use --cache-images accelerate training(i have 128GB RAM),can i split the whole dataset into many sub dataset and training them one by one(like resume training) ? ## Additional context
null
null
null
{'base_commit': '14797370646d25e226f0093a5982d5cd54ba729a', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [466]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
f5335f22bbd6037124d60edb3c2d1934d7673e23
https://github.com/ultralytics/yolov5/issues/8907
question Stale
I am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I am making UI by QT for Yolov5 training. Where is making the result image (results.png) ...
null
null
null
{'base_commit': 'f5335f22bbd6037124d60edb3c2d1934d7673e23', 'files': [{'path': 'utils/plots.py', 'Loc': {"(None, 'plot_results', 418)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "utils/plots.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
0ab303b04499b6b912d8212a4fa10fe3fcb78efa
https://github.com/ultralytics/yolov5/issues/8708
question Stale
Significance of --half?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Can you please let me know the significance of --half during training process.... ### Ad...
null
null
null
{'base_commit': '0ab303b04499b6b912d8212a4fa10fe3fcb78efa', 'files': [{'path': 'val.py', 'Loc': {"(None, 'parse_opt', 330)": {'mod': [351]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "val.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
b74929c910f9cd99d2ece587e57bce1ae000d3ba
https://github.com/ultralytics/yolov5/issues/4252
question
Training speed and memory
I noticed your instructions about training, Run commands below to reproduce results on COCO dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest --batch-size your GPU allows (batch sizes shown for 16 GB devices). I ...
null
null
null
{'base_commit': 'b74929c910f9cd99d2ece587e57bce1ae000d3ba', 'files': [{'path': 'train.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
404749a33cc29d119f54b2ce35bf3b33a847a487
https://github.com/ultralytics/yolov5/issues/2186
question
Can we return objectness score and class score?
## ❔Question I am wondering if it is possible to return confidence scores for objectness and classification separately for each predicted box during inference? I might be conceptually off base here, but I am interested in understanding if the model is unsure if the box itself is correct or if the class it is assigning...
null
null
null
{'base_commit': '404749a33cc29d119f54b2ce35bf3b33a847a487', 'files': [{'path': 'detect.py', 'Loc': {"(None, 'detect', 18)": {'mod': [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113]}}, 'status': 'modified'}, {'path': 'utils/general.py', 'Loc': {"(None, 'non_max_suppression', 340)": {'mod': []}}, 'status': 'modifi...
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "utils/general.py", "detect.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
dabad5793a638cba1e5a2bbb878c9b87fe1a14a0
https://github.com/ultralytics/yolov5/issues/3942
enhancement Stale
For online cutting training and detection can be improve
## 🚀 Feature For big image training, usually people thinking about to cut the images, but yolov5 can only resize the image to small size. Such as VisDrone dataset, the smallest image can have 960*540 size, if resize to 640*640, size would be 640*360, but the target in dataset mostly are small object, resize the ima...
null
null
null
{'base_commit': 'dabad5793a638cba1e5a2bbb878c9b87fe1a14a0', 'files': [{'path': 'utils/augmentations.py', 'Loc': {"('Albumentations', '__init__', 16)": {'mod': [22]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "utils/augmentations.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
c8c5ef36c9a19c7843993ee8d51aebb685467eca
https://github.com/ultralytics/yolov5/issues/1238
question
img-weights
## ❔Question parser.add_argument('--img-weights', action='store_true', help='use weighted image selection for training') in order to make --iimg-weights work, what else I need to do? dataset = LoadImagesAndLabels(path, imgsz, batch_size, augment=augment, # augment images ...
null
null
null
{'base_commit': 'c8c5ef36c9a19c7843993ee8d51aebb685467eca', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [397]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "train.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31
https://github.com/ultralytics/yolov5/issues/7072
question
why can't I reproduce the mAP provided by README.md(v6.1)?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question I used the method recommended by README.md(v6.1) to reproduce the mAP, but I failed. 'p...
null
null
null
{'base_commit': '9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31', 'files': [{'path': 'data/scripts/get_coco.sh', 'Loc': {'(None, None, 13)': {'mod': [13]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "data/scripts/get_coco.sh" ] }
null
ultralytics
yolov5
079b36d72ba2ef298f7ae4dc283d8c7975eb02f6
https://github.com/ultralytics/yolov5/issues/6540
question
Is YOLOv5 able to detect a specific number of classes according to the project's need, like just 2 or 3 classes?
### Search before asking - [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. ### Question Hi, I'm using YOLOv5 in my project and I have a question. If I use "--classes " it could ...
null
null
null
{'base_commit': '079b36d72ba2ef298f7ae4dc283d8c7975eb02f6', 'files': [{'path': 'detect.py', 'Loc': {"(None, 'parse_opt', 216)": {'mod': [231]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "detect.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
e96c74b5a1c4a27934c5d8ad52cde778af248ed8
https://github.com/ultralytics/yolov5/issues/4357
question Stale
Average Precision for each class
## Is there any way to see the average precision for each class? I have run my model for 1,000 epochs and I have a bunch of metrics (which are AMAZING by the way, thanks so making it so easy to see them!) and I have mAP, but I was wondering if there was a way to see the AP for each class? Like a table or something. ...
null
null
null
{'base_commit': 'e96c74b5a1c4a27934c5d8ad52cde778af248ed8', 'files': [{'path': 'val.py', 'Loc': {"(None, 'parse_opt', 293)": {'mod': [305]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "val.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
ultralytics
yolov5
96e36a7c913e2433446ff410a4cf60041010a524
https://github.com/ultralytics/yolov5/issues/4152
question
Format of data for testing trained model
In what format do I need to feed the validation dataset to the val.py file? Should images and markup be in the same folder or in different ones? In what format should the coordinates of the bounding boxes be in - yolo or something else?
null
null
null
{'base_commit': '96e36a7c913e2433446ff410a4cf60041010a524', 'files': [{'path': 'README.md', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
eaf5ec4467795344e7d9601515b017fd8c46e44b
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/439
decoding error in preprocessing synthesizer
I get the following error while running `synthesizer_preprocess_audio.py`. ``` Arguments: datasets_root: /home/amin/voice_cloning/libri_100 out_dir: /home/amin/voice_cloning/libri_100/SV2TTS/synthesizer n_processes: None skip_existing: True hparams: Using data fr...
null
null
null
{'base_commit': 'eaf5ec4467795344e7d9601515b017fd8c46e44b', 'files': [{'path': 'synthesizer/preprocess.py', 'Loc': {"(None, 'preprocess_speaker', 54)": {'mod': [60]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/preprocess.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
5425557efe30863267f805851f918124191e0be0
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/629
Error in macOS when trying to launch the toolbox
Traceback (most recent call last): File "/Users/luke/Documents/Real-Time-Voice-Cloning-master/demo_toolbox.py", line 2, in <module> from toolbox import Toolbox File "/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/__init__.py", line 1, in <module> from toolbox.ui import UI File "/Users/l...
null
null
null
{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'encoder/inference.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "encoder/inference.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1156
missing SV2TTS/
Hey, I'm trying to finetune the pretrained model but it looks like I am missing the SV2TTS/ directory which contains train.txt, etc. I have a saved_models/ directory which has three *.pt files for the three components of this TTS architecture.
null
null
null
{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'synthesizer_preprocess_audio.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer_preprocess_audio.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
e32cf8f4ddb63d9a7603eeb31f1855b54926aee6
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549
Import Error
Hey, i am trying to run this code and everytime i run demo_toolbox.py there comes an error "failed to load qt binding" i tried reinstalling matplotlib and also tried installing PYQt5 . Need Help !!!
null
null
null
{'base_commit': 'e32cf8f4ddb63d9a7603eeb31f1855b54926aee6', 'files': [{'path': 'toolbox/ui.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "toolbox/ui.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/117
ModuleNotFoundError: No module named 'tensorflow.contrib.seq2seq'
When running demo_cli.py Python = 3.7.4 TensorFlow = 2.0 RC CUDA = 10.1 cuDNN = Installed for right CUDA version Windows = 10
null
null
null
{'base_commit': '8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
c5c2261c97afe6ec5db1ef389eba1257f6cce8a2
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/275
Speaker verification implementation
I need just the speaker verification part which is the implementation of [GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION](https://arxiv.org/pdf/1710.10467.pdf) paper, how I can proceed to get it please?
null
null
null
{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder/', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5\n询问功能实现所在地", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "encoder/" ] }
null
CorentinJ
Real-Time-Voice-Cloning
7432046efc23cabf176f9fdc8d2fd67020059478
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/855
Output audio spectrum - low frequences
Hi, Im am trying to train new model in polish language but after 476k steps output sound is very "robotic". I was trying to find why that's happened and noticed (based on my output and @blue-fish samples: https://blue-fish.github.io/experiments/RTVC-FT-1.html) that spectrum of this model don't include high frequences c...
null
null
null
{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [77]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/hparams.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1122
Requirements.txt failed to install with obscure issue with installing audioread
I ran into a few issues along the way that I was able to solve, namely errors like this: WARNING: Failed to write executable - trying to use .deleteme logic ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: 'C:\\Python310\\Scripts\\f2py.exe'...
null
null
null
{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n依赖声明" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
95adc699c1deb637f485e85a5107d40da0ad94fc
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717
I can't use Dataset/Speaker/Utterance
I can't use the upper section in the software. when loading it shows: Warning: you did not pass a root directory for datasets as argument. How can I fix this? Thank you
null
null
null
{'base_commit': '95adc699c1deb637f485e85a5107d40da0ad94fc', 'files': [{'path': 'demo_toolbox.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2\nwarning", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "demo_toolbox.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
039f7e5402e6d9da7fad5022dae038cdfb507b39
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/13
problem with utils.argutils in python 3.6
Hi under win 10 64 bits trying using python 3.6 it failed to import the print_args wiht the fact that he can't find the argutils. think i have a relative import error but can't solve it btw nice job on what i heard on the youtube demo if i mnaully try to import the utils from the root dir seems he load another uti...
null
null
null
{'base_commit': '039f7e5402e6d9da7fad5022dae038cdfb507b39', 'files': [{'path': 'synthesizer/__init__.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
7432046efc23cabf176f9fdc8d2fd67020059478
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884
Using a different speaker encoder
Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use?
null
null
null
{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {"('Toolbox', 'add_real_utterance', 182)": {'mod': [191]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "toolbox/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
a32962bb7b4827660646ac6dabf62309aea08a91
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488
preprocessing VoxCele2 is not working
While running encoder_preprocess on voxceleb2 dataset, I'm getting the following warning and nothing else happens. Not sure why? ``` raw: Preprocessing data for 5994 speakers. raw: 0%| | 0/5994 [00:00<?, ?speakers/s] /ho...
null
null
null
{'base_commit': 'a32962bb7b4827660646ac6dabf62309aea08a91', 'files': [{'path': 'encoder/preprocess.py', 'Loc': {"(None, 'preprocess_voxceleb2', 164)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "encoder/preprocess.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
0713f860a3dd41afb56e83cff84dbdf589d5e11a
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1065
vocoder_dataset.py ValueError
I am trying to use the Librispeech dataset to train the vocoder. And I got a ValueError while training. ```numpy.random._bounded_integers._rand_int32 ValueError: low >= high``` It occurs in line 61 of vocoder_dataset.py, ```mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]``` So I assume ...
null
null
null
{'base_commit': '0713f860a3dd41afb56e83cff84dbdf589d5e11a', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [88]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/hparams.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
5425557efe30863267f805851f918124191e0be0
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/651
Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
hello. Please help me, I do not know how to solve my problem problem. I run and completed without errors `python synthesizer_preprocess_audio.py <datasets_root>` `python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer` but after typing `python synthesizer_train.py my_run <datasets_root>/SV2T...
null
null
null
{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [243]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "synthesizer/hparams.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
77c0bd169d8158ed1cdb180cda73c24d3cacd778
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1263
Python 3.10.12 is not supported
When I ran python3.10 -m pip install numpy==1.20.3 on linux mint, I got an error while I was trying to install it. But it was totally fine when I used python3.8 ![12](https://github.com/CorentinJ/Real-Time-Voice-Cloning/assets/100217654/99071c68-bf38-4ffe-b789-9d292ed539a5)
null
null
null
{'base_commit': '77c0bd169d8158ed1cdb180cda73c24d3cacd778', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, None)': {'mod': [4]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n依赖声明" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
c5c2261c97afe6ec5db1ef389eba1257f6cce8a2
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/250
[Errno 2] No such file or directory: 'encoder/_sources.txt'
I have this problem, but I can't understand what does this file contain? There is not _sources.txt in this repo
null
null
null
{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder_preprocess.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "encoder_preprocess.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
5e400d474043044ba0e3e907a74b4baccb16ee7c
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/425
Tensorflow.contrib file missing what to do
null
null
null
{'base_commit': '5e400d474043044ba0e3e907a74b4baccb16ee7c', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 35)': {'mod': [35]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nand\n2\n这里是指导是doc\n问题原因是依赖的库的版本", "info_type": "Doc" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
9553eaa1748cf94814be322ec7b096d2d6bc7f28
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/419
Getting an exception when browsing for files
For some reason, importing mp3 files is not working. Anyone got an idea on why this might be the case.?
null
null
null
{'base_commit': '9553eaa1748cf94814be322ec7b096d2d6bc7f28', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 40)': {'mod': [40]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "README.md" ], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
c5c2261c97afe6ec5db1ef389eba1257f6cce8a2
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/221
A couple inquiries about the colab version
So I have a setup using a copy of the colaboratory version, but I want to be able to generate a few sentences at a time without having to generate per sentence. I understand that commas and periods don't work, but in the demonstration video it was mentioned that line breaks are a way to get around this for now... ho...
null
null
null
{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {"('Toolbox', 'synthesize', 158)": {'mod': [170, 171, 172, 173, 174, 175]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "toolbox/__init__.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
c5c2261c97afe6ec5db1ef389eba1257f6cce8a2
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/225
Not code-savy but want to experiment with code
I have Python Spyder downloaded, but I do not know much about coding, or how to get to the stage where I can add audio and synthesize it. What would you recommend?
null
null
null
{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
070a3c187f87136ebe92aa72766f8343772d414e
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/378
i cant install NVIDIA CUDA
I can't install NVIDIA CUDA even though I followed everything that [this guide](https://poorlydocumented.com/2019/11/installing-corentinjs-real-time-voice-cloning-project-on-windows-10-from-scratch/l) told me to do. I also have tried searching for this problem on the internet, but none of them solves my problem. I also...
null
null
null
{'base_commit': '070a3c187f87136ebe92aa72766f8343772d414e', 'files': [{'path': 'demo_cli.py', 'Loc': {'(None, None, None)': {'mod': [34]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "demo_cli.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
CorentinJ
Real-Time-Voice-Cloning
9553eaa1748cf94814be322ec7b096d2d6bc7f28
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420
New Audio Issue: Assertion Failed
This was working yesterday fine, and no big changes were made. However, today starting up the demo toolbox loaded: Assertion failed! Program: C:\Users\paul1\AppData\Local\Programs\Python\Python37\python.exe File: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061 Expression: FALSE I have tried reinstalling visual...
null
null
null
{}
[]
[]
[ { "pro": "sounddevice" } ]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "库" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "sounddevice" ] }
null
AUTOMATIC1111
stable-diffusion-webui
39827a3998afa3ea612e7cc9a475085765d4d509
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134
asking-for-help-with-local-system-issues
[Bug]: Non checkpoints found. Can't run without a checkpoint.
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? During the installation (windows), an error occurs : ``` venv "G:\Dev\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:...
null
null
null
{'base_commit': '39827a3998afa3ea612e7cc9a475085765d4d509', 'files': [{'path': 'modules/sd_models.py', 'Loc': {"(None, 'load_model', 230)": {'mod': []}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config" }
{ "code": [ "modules/sd_models.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
AUTOMATIC1111
stable-diffusion-webui
fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458
bug-report
[Bug]: ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'
### Is there an existing issue for this? - [X] I have searched the existing issues and checked the recent builds/commits ### What happened? Launching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue 2023-06-27 13:53:22.297173: I tensorflow/co...
null
null
null
{'base_commit': 'fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c', 'files': [{'path': 'extensions-builtin/LDSR/sd_hijack_ddpm_v1.py', 'Loc': {'(None, None, None)': {'mod': [17]}}, 'status': 'modified'}, {'path': 'modules/models/diffusion/ddpm_edit.py', 'Loc': {'(None, None, None)': {'mod': [22]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "modules/models/diffusion/ddpm_edit.py", "extensions-builtin/LDSR/sd_hijack_ddpm_v1.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
AUTOMATIC1111
stable-diffusion-webui
ef4c94e1cfe66299227aa95a28c2380d21cb1600
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3902
[Feature Request]:
Finer control of CFG Scale? now it goes by 0.5 steps. I'm trying to replicate work i did on other app which have CFG scale control by 0.1. i cannot get the same result, of course.
null
null
null
{}
[]
[ "ui-config.json" ]
[]
{ "iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config" }
{ "code": [ "ui-config.json" ], "doc": [], "test": [], "config": [], "asset": [] }
null
AUTOMATIC1111
stable-diffusion-webui
bf30673f5132c8f28357b31224c54331e788d3e7
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3301
bug-report
Expected all tensors to be on the same device
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select) how to pick the CUDA:0 ?
null
null
null
{'base_commit': 'bf30673f5132c8f28357b31224c54331e788d3e7', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 17)': {'mod': [17]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n依赖声明" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
AUTOMATIC1111
stable-diffusion-webui
39919c40dd18f5a14ae21403efea1b0f819756c7
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2190
bug-report
How to use .ckpt model on repo
Hello everyone! I was able to train a custom model using Dreambooth and I now have a custom ckpt trained on myself. Where do I put this file to be able to use it in this repo? I dropped in into models but not sure what to do next? Appreciate any help
null
null
null
{'base_commit': '39919c40dd18f5a14ae21403efea1b0f819756c7', 'files': [{'path': 'models/Stable-diffusion', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "models/Stable-diffusion" ] }
null
AUTOMATIC1111
stable-diffusion-webui
556c36b9607e3f4eacdddc85f8e7a78b29476ea7
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1614
enhancement
Feature request: GPU temperature control
**Is your feature request related to a problem? Please describe.** I don't like 85 degrees (Celsius) on my GPU, especially if it lasts more than 30 minutes or even 1 hour **Describe the solution you'd like** If temp on a GPU is more than {maxTemp} and it lasts {accumulateTempTime} it will pause processing for {coo...
null
null
null
{}
[]
[]
[ { "org": "w-e-w", "pro": "stable-diffusion-webui-GPU-temperature-protection" } ]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "stable-diffusion-webui-GPU-temperature-protection" ] }
null
python
cpython
c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba
https://github.com/python/cpython/issues/39472
docs
Wrong reference for specific minidom methods
BPO | [832251](https://bugs.python.org/issue832251) --- | :--- Nosy | @freddrake <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = 'https://github.com...
null
null
null
{'base_commit': 'c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba', 'files': [{'path': 'Doc/lib/xmldomminidom.tex', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "2\ndoc问题", "iss_reason": "2\ndoc错误,不是bug", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "Doc/lib/xmldomminidom.tex" ] }
null
python
cpython
5a65c2d43607a5033d7171445848cde21f07d81d
https://github.com/python/cpython/issues/32681
interpreter-core
.pyc writing/reading race condition (PR#189)
BPO | [210610](https://bugs.python.org/issue210610) --- | :--- Nosy | @gvanrossum <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup> <details><summary>Show more details</summary><p> GitHub fields: ```python assignee = 'https://github.co...
null
null
null
{'base_commit': '5a65c2d43607a5033d7171445848cde21f07d81d', 'files': [{'path': 'Doc/library/os.rst', 'Loc': {}}]}
[]
[ "fcntl.h" ]
[]
{ "iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [ "fcntl.h" ], "doc": [ "Doc/library/os.rst" ], "test": [], "config": [], "asset": [] }
null
python
cpython
adf03c3544084359d89e7a0bc2a5aa0561f1a0f2
https://github.com/python/cpython/issues/68620
stdlib release-blocker
Upgrade windows builds to use OpenSSL 1.0.2c
BPO | [24432](https://bugs.python.org/issue24432) --- | :--- Nosy | @pfmoore, @pitrou, @larryhastings, @giampaolo, @tiran, @tjguk, @benjaminp, @ned-deily, @alex, @bitdancer, @zware, @zooba, @dstufft <sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current st...
null
null
null
{'base_commit': 'adf03c3544084359d89e7a0bc2a5aa0561f1a0f2', 'files': [{'path': 'PCbuild/get_externals.bat', 'Loc': {'(None, None, 57)': {'mod': [57]}}, 'status': 'modified'}, {'path': 'PCbuild/python.props', 'Loc': {'(None, None, 37)': {'mod': [37]}}, 'status': 'modified'}, {'path': 'PCbuild/readme.txt', 'Loc': {'(None...
[]
[]
[]
{ "iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [ "PCbuild/readme.txt" ], "test": [], "config": [ "PCbuild/get_externals.bat", "PCbuild/python.props" ], "asset": [] }
null
python
cpython
5198a5c7aa77367765ae03542b561845094ca30d
https://github.com/python/cpython/issues/48435
type-bug stdlib topic-regex
re module treats raw strings as normal strings
BPO | [4185](https://bugs.python.org/issue4185) --- | :--- Nosy | @gvanrossum, @loewis, @akuchling, @birkenfeld, @ezio-melotti Files | <li>[raw-strings-with-re.txt](https://bugs.python.org/file11868/raw-strings-with-re.txt "Uploaded as text/plain at 2008-10-23.03:55:27 by @ezio-melotti"): Interactive Python session wit...
null
null
null
{'base_commit': '5198a5c7aa77367765ae03542b561845094ca30d', 'files': [{'path': 'Doc/library/re.rst', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "2\nor\n3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc" }
{ "code": [], "doc": [ "Doc/library/re.rst" ], "test": [], "config": [], "asset": [] }
null
THUDM
ChatGLM-6B
ab6bcb4968bef335175c0b01972657961b2b1250
https://github.com/THUDM/ChatGLM-6B/issues/565
[BUG/Help] <title> When fine-tuning with ptuning, an error is raised: RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Traceback (most recent call last): File "main.py", line 429, in <module> main() File "main.py", line 112, in main tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remo...
null
null
null
{}
[]
[ "ice_text.model" ]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "ice_text.model" ] }
null
THUDM
ChatGLM-6B
801b1bb57690f0a99943f0a80c839b9ee120f3a7
https://github.com/THUDM/ChatGLM-6B/issues/388
Why can't shared GPU memory be used? [Feature] <title>
### Is your feature request related to a problem? Please describe. Why can't shared GPU memory be used? The dedicated 6 GB is completely full, but the shared GPU memory is not used at all. ### Solutions emm ### Additional context _No response_
null
null
null
{}
[]
[]
[ { "org": "Jittor", "pro": "JittorLLMs" } ]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "JittorLLMs" ] }
null
THUDM
ChatGLM-6B
afe08a19ccadc8b238c218b245bb4c1c62598588
https://github.com/THUDM/ChatGLM-6B/issues/770
RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Running python cli_demo.py throws an error: root@4uot40mdrplpv-0:/yx/ChatGLM-6B# python mycli_demo.py Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contrib...
null
null
null
{}
[]
[ "ice_text.model" ]
[]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "ice_text.model" ] }
null
THUDM
ChatGLM-6B
d11eb5213e3c17225b47bb806a120dd45a80b126
https://github.com/THUDM/ChatGLM-6B/issues/63
How to fix error like this: torch.cuda.OutOfMemoryError: CUDA out of memory ?
OS: ubuntu 20.04 The error message said we need to change value of max_split_size_mb, but I search source code and cannot find any file contains max_split_size_mb, would you please provide some guidance about how to fix? ``` Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████...
null
null
null
{'base_commit': 'd11eb5213e3c17225b47bb806a120dd45a80b126', 'files': [{'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "cli_demo.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
THUDM
ChatGLM-6B
a9fc0184446fba7f4f27addf519fea0b371df83a
https://github.com/THUDM/ChatGLM-6B/issues/417
[Help] <title> Oracle Linux 7.9 运行int4模型出错,AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior /x/home/chatglm_env/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version! RequestsD...
null
null
null
{}
[]
[]
[ { "pro": "gcc" } ]
{ "iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "库" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "gcc" ] }
null
THUDM
ChatGLM-6B
0c6d1750ef6042338534c3c97002175fa1ae0499
https://github.com/THUDM/ChatGLM-6B/issues/10
question
Can I fine-tune with my own data?
null
null
null
null
{'base_commit': '0c6d1750ef6042338534c3c97002175fa1ae0499', 'files': [{'path': 'ptuning/', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "ptuning/" ] }
null
THUDM
ChatGLM-6B
c55ecd89a079b86620cc722f2e21a14e3718d0f3
https://github.com/THUDM/ChatGLM-6B/issues/39
6 GB GPU reports insufficient VRAM
GPU: 3060 Laptop 6GB. Error: RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Mem...
null
null
null
{'base_commit': 'c55ecd89a079b86620cc722f2e21a14e3718d0f3', 'files': [{'path': 'web_demo.py', 'Loc': {'(None, None, None)': {'mod': [5]}}, 'status': 'modified'}, {'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "web_demo.py", "cli_demo.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
THUDM
ChatGLM-6B
1d87dac585c8fafd708db16860b628928ec5a821
https://github.com/THUDM/ChatGLM-6B/issues/532
[BUG/Help] After updating to the new version these past few days, chat fine-tuning no longer seems to work
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior Chat fine-tuning still worked fine a few days ago; at that time the output files were a complete package, not an incremental fine-tuning package. After updating these past few days, I am still using the project's own train_chat.sh, with the int4 model. The output files are indeed smaller now, but they no longer run. Concretely, running the following code ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.f...
null
null
null
{'base_commit': '1d87dac585c8fafd708db16860b628928ec5a821', 'files': [{'path': 'ptuning/main.py', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [ "ptuning/main.py" ], "doc": [], "test": [], "config": [], "asset": [] }
null
THUDM
ChatGLM-6B
edb127326a2d5afd855484f12a38e0119151f826
https://github.com/THUDM/ChatGLM-6B/issues/723
On CentOS, how can two GPUs with 12 GB VRAM each be configured so both are used at the same time?
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior On CentOS, with two 12 GB GPUs, both training and the web demo always use only GPU 0. How can I configure them to be used simultaneously? ### Expected Behavior _No response_ ### Steps To Reproduce Centos7 12G nvida *2 ### Environment ```markdown - OS:Centos7 - Python:3.8 - Transformers:4.26....
null
null
null
{'base_commit': 'edb127326a2d5afd855484f12a38e0119151f826', 'files': [{'path': 'ptuning/train.sh', 'Loc': {'(None, None, 4)': {'mod': [4]}}, 'status': 'modified'}]}
[]
[]
[]
{ "iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nOther 脚本" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "ptuning/train.sh" ] }
null
THUDM
ChatGLM-6B
801b1bb57690f0a99943f0a80c839b9ee120f3a7
https://github.com/THUDM/ChatGLM-6B/issues/394
[BUG/Help] ValueError: 150000 is not in list
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior 0%| | 19/30000 [31:30<828:54:23, 99.53s/it] 0%| | 20/30000 [33:09<828:37:17, 99.50s/it] 0%| | 21/30000 [34:48<828:09:42, 99.45s/it]Traceback (most recent call last): File "/ro...
null
null
null
{}
[]
[ "ice_text.model", "modeling_chatglm.py" ]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n2", "info_type": "Code" }
{ "code": [ "modeling_chatglm.py" ], "doc": [], "test": [], "config": [], "asset": [ "ice_text.model" ] }
null
THUDM
ChatGLM-6B
1047e446e5387aa06c856c95800f67beab8b80d4
https://github.com/THUDM/ChatGLM-6B/issues/224
[BUG/Help] ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils'
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior >>> model = AutoModel.from_pretrained("THUDM/chatglm-6b-int4",trust_remote_code=True).float() Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious cod...
null
null
null
{'base_commit': '1047e446e5387aa06c856c95800f67beab8b80d4', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}
[]
[]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code" }
{ "code": [], "doc": [], "test": [], "config": [ "requirements.txt" ], "asset": [] }
null
THUDM
ChatGLM-6B
b65142b5e54e52b27c1c1269e1b4abd83efcce45
https://github.com/THUDM/ChatGLM-6B/issues/422
[BUG/Help] <title>KeyError: 'lm_head.weight'
### Is there an existing issue for this? - [X] I have searched the existing issues ### Current Behavior 报错:KeyError: 'lm_head.weight' ### Expected Behavior Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision. Expl...
null
null
null
{}
[]
[ "pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin" ]
[]
{ "iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Models/数据" }
{ "code": [], "doc": [], "test": [], "config": [], "asset": [ "pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin" ] }
null