organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
THUDM | ChatGLM-6B | 8633db1503fc3b0edc1d035f64aa35dce5d97969 | https://github.com/THUDM/ChatGLM-6B/issues/622 | [BUG/Help] During ptuning with PRE_SEQ_LEN=512, answers after training are still cut off at around a hundred characters; how should this be adjusted? | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
The training parameters are as follows:
PRE_SEQ_LEN=512
LR=2e-2
CUDA_VISIBLE_DEVICES=0 python3 main.py \
--do_train \
--train_file ./data/gwddc.json \
--validation_file ./data/gwddc_test.json \
--prompt_column instructio... | null | null | null | {'base_commit': '8633db1503fc3b0edc1d035f64aa35dce5d97969', 'files': [{'path': 'ptuning/README.md', 'Loc': {'(None, None, 180)': {'mod': [180]}}, 'status': 'modified'}, {'path': 'ptuning/arguments.py', 'Loc': {"('DataTrainingArguments', None, 65)": {'mod': [123]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [
"ptuning/arguments.py"
],
"doc": [
"ptuning/README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
THUDM | ChatGLM-6B | a14bc1d32452d92613551eb5d523e00950913710 | https://github.com/THUDM/ChatGLM-6B/issues/353 | enhancement | [Help] How to support multiple GPUs | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
For internal company use, we installed 2 GPUs but found that with the default configuration only 1 GPU is running. How should we use it so that multiple GPUs are used?
### Expected Behavior
_No response_
### Steps To Reproduce
None
### Environment
```markdown
OS: Ubuntu 20.04
Python: 3.8
Transformers: 4.26.1
PyTorch: 1.12
CUDA Suppor... | null | null | null | {'base_commit': 'a14bc1d32452d92613551eb5d523e00950913710', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3\nHow to support multiple GPUs",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
huggingface | transformers | 34f28b2a1342fd72c2e4d4e5613855bfb9f35d34 | https://github.com/huggingface/transformers/issues/1225 | wontfix | Bert output last hidden state | ## ❓ Questions & Help
Hi,
Suppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to max length of 64.
If we use a Bert pretrained model to get the last hidden states, the output would be of size [1, 64, 768].
Can we use just the first 24 as the hidden states of the utter... | null | null | null | {'base_commit': '34f28b2a1342fd72c2e4d4e5613855bfb9f35d34', 'files': [{'path': 'src/transformers/models/bert/modeling_bert.py', 'Loc': {"('BertSelfAttention', 'forward', 276)": {'mod': [279]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/models/bert/modeling_bert.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
huggingface | transformers | 82c7e879876822864b5ceaf2c99eb01159266bcd | https://github.com/huggingface/transformers/issues/27200 | dataset download error in speech recognition examples | ### System Info
- `transformers` version: 4.35.0.dev0
- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.18
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- ... | null | null | null | {'base_commit': '82c7e879876822864b5ceaf2c99eb01159266bcd', 'files': [{'path': 'examples/pytorch/speech-recognition/README.md', 'Loc': {'(None, None, 69)': {'mod': [69]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"examples/pytorch/speech-recognition/README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494 | https://github.com/huggingface/transformers/issues/12081 | GPT2 Flax "TypeError: JAX only supports number and bool dtypes, got dtype object in array" | On GPU
```
>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("gpt2-medium")
>>> model = FlaxAutoModelForCausalLM.from_pretrained("gpt2-medium")
>>> input_context = "The dog"
>>> # encode input context
>>> input_ids = tokenizer(input_context, re... | null | null | null | {'base_commit': '0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494', 'files': [{'path': 'src/transformers/models/gpt2/modeling_flax_gpt2.py', 'Loc': {"('FlaxGPT2LMHeadModule', None, 553)": {'mod': []}}, 'status': 'modified'}, {'path': 'src/transformers/models/gpt2/tokenization_gpt2_fast.py', 'Loc': {"('GPT2TokenizerFast', None,... | [
{
"Loc": [
6,
7
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null,
"src/transformers/models/gpt2/tokenization_gpt2_fast.py",
"src/transformers/models/gpt2/modeling_flax_gpt2.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 322037e842e5e89080918c824998c17722df6f19 | https://github.com/huggingface/transformers/issues/10079 | Unclear error "NotImplementedError:" while saving tokenizer. How to fix it? | Here is my tokenizer code and how I save it to a json file "/content/bert-datas7.json"
````
from tokenizers import normalizers
from tokenizers.normalizers import Lowercase, NFD, StripAccents
bert_tokenizer.pre_tokenizer = Whitespace()
from tokenizers.processors import TemplateProcessing
bert_tokenizer.pos... | null | null | null | {'base_commit': '322037e842e5e89080918c824998c17722df6f19', 'files': [{'path': 'src/transformers/tokenization_utils_fast.py', 'Loc': {"('PreTrainedTokenizerFast', '_save_pretrained', 505)": {'mod': [509]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/tokenization_utils_fast.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 77a257fc210a56f1fd0d75166ecd654cf58111f3 | https://github.com/huggingface/transformers/issues/8403 | [s2s finetune] huge increase in memory demands with --fp16 native amp | While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands.
e.g. I can run bs=12 w/o `--fp16`
```
cd examples/seq2seq
export BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \
--do_train -... | null | null | https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57 | {} | [] | [] | [
{
"org": "pytorch",
"pro": "pytorch",
"path": [
"{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {\"(None, 'cached_cast', 67)\": {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc'... | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"aten/src/ATen/autocast_mode.cpp"
],
"doc": [],
"test": [
"test/test_cuda.py"
],
"config": [],
"asset": [
"pytorch"
]
} | null | |
huggingface | transformers | 1a688709b34b10bd372e3e0860c8d39d170ebf53 | https://github.com/huggingface/transformers/issues/17201 | a memory leak in qqp prediction using bart | ### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed... | null | null | null | {'base_commit': '1a688709b34b10bd372e3e0860c8d39d170ebf53', 'files': [{'path': 'src/transformers/trainer.py', 'Loc': {"('Trainer', 'evaluation_loop', 2549)": {'mod': [2635]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2\nOr\n5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/trainer.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5 | https://github.com/huggingface/transformers/issues/28435 | Skip some weights for load_in_8bit and keep them as fp16/32? | ### Feature request
Hello,
I am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.
### Motivation
My motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 1... | null | null | null | {'base_commit': 'cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5', 'files': [{'path': 'src/transformers/modeling_utils.py', 'Loc': {"('PreTrainedModel', 'from_pretrained', 2528)": {'mod': [3524]}}, 'status': 'modified'}, {'path': 'src/transformers/utils/quantization_config.py', 'Loc': {"('BitsAndBytesConfig', None, 151)": {'m... | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/modeling_utils.py",
"src/transformers/utils/quantization_config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 705ca7f21b2b557e0cfd5d0853b297fa53489d20 | https://github.com/huggingface/transformers/issues/14938 | Question: Object of type EncoderDecoderConfig is not JSON serializable | Hi.
An error occurred when I used Trainer to train and save EncoderDecoderModel.
```python
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py", line 482, in <module>
run(model_args, data_args, training_args)
File "/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.p... | null | null | null | {} | [
{
"Loc": [
46,
47
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 45d21502f0b67eb8a5ad244d469dcc0dfb7517a7 | https://github.com/huggingface/transformers/issues/653 | Different Results from version 0.4.0 to version 0.5.0 | Hi, I found the results after training is different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. And I also test version 0.2.0 and 0.3.0, the results are the same to version 0.4.0, but from version 0.5.0 +, the results is different. I am wondering that have you trained ... | null | null | null | {'base_commit': '45d21502f0b67eb8a5ad244d469dcc0dfb7517a7', 'files': [{'path': 'pytorch_pretrained_bert/modeling.py', 'Loc': {"('BertPreTrainedModel', 'init_bert_weights', 515)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"pytorch_pretrained_bert/modeling.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885 | https://github.com/huggingface/transformers/issues/10202 | Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True | ## Environment info
- `transformers` version: 4.3.2
- Platform: macOS-11.2.1-x86_64-i386-64bit
- Python version: 3.9.1
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
See ... | null | null | null | {'base_commit': '1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885', 'files': [{'path': 'src/transformers/tokenization_utils_base.py', 'Loc': {"('SpecialTokensMixin', 'add_special_tokens', 900)": {'mod': []}}, 'status': 'modified'}, {'Loc': [33], 'path': None}]} | [
{
"Loc": [
33
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
    "loc_way": "Comment points out the problem in the user's code and gives the API that needs to be used\nProblem in the user's own code; another issue points to the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them...
"code": [
"src/transformers/tokenization_utils_base.py",
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 5bcbdff15922b1d0eeb035879630ca61c292122a | https://github.com/huggingface/transformers/issues/32661 | bug | RoBERTa config defaults are inconsistent with fairseq implementation | ### System Info
python 3.12, transformers 4.14, latest mac os
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give detail... | null | null | null | {'base_commit': '5bcbdff15922b1d0eeb035879630ca61c292122a', 'files': [{'path': 'src/transformers/models/roberta/configuration_roberta.py', 'Loc': {"('RobertaConfig', None, 29)": {'mod': [59]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/transformers/models/roberta/configuration_roberta.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
geekan | MetaGPT | f0df3144d68ed288f5ccce0c34d3939f8462ba98 | https://github.com/geekan/MetaGPT/issues/1345 | Not able to run any MetaGPT examples | Referred Issue #1322 , but not able to resolve the issue. I added azure based api endpoint and api key in config2.yaml
│ 105 │ │ typer.echo("Missing argument 'IDEA'. Run 'metagpt --help' for more information." │
│ 106 │ │ raise typer.Exit() ... | null | null | null | {'base_commit': 'f0df3144d68ed288f5ccce0c34d3939f8462ba98', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | e43aaec9322054f4dec92f44627533816588663b | https://github.com/geekan/MetaGPT/issues/576 | Does metagpt support vector data for building one's own knowledge base? | Does metagpt support vector data for building one's own knowledge base? | null | null | null | {'base_commit': 'e43aaec9322054f4dec92f44627533816588663b', 'files': [{'path': '/metagpt/document_store', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"/metagpt/document_store"
],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | be56351e000a0f08562820fb04f6fdbe34d9e655 | https://github.com/geekan/MetaGPT/issues/205 | Rate Limited error | openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. Contact us through our help center at help.openai.com if you continue to have issues.
Maybe a way to resume so all the runtime isn't just lost... | null | null | null | {'base_commit': 'be56351e000a0f08562820fb04f6fdbe34d9e655', 'files': [{'path': 'metagpt/provider/openai_api.py', 'Loc': {"('OpenAIGPTAPI', '_achat_completion_stream', 150)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"metagpt/provider/openai_api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | fd7feb57fac8d37509b1325cad502d2f65d59956 | https://github.com/geekan/MetaGPT/issues/1553 | inactive | ValueError: Creator not registered for key: LLMType.OLLAMA | **Bug description**
<!-- Clearly and directly describe the current bug -->
I using ***MetaGPT ver 0.8.1*** but when use RAG with method **SimpleEngine.from_docs** have error ***ValueError: Creator not registered for key: LLMType.OLLAMA***
<!-- **Bug solved method** -->
<!-- If you solved the bug, describe the i... | null | null | null | {} | [
{
"path": "config/config2.yaml",
"Loc": [
28
]
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null |
geekan | MetaGPT | df8d1124c68b62bb98c71b6071abf5efe6293dba | https://github.com/geekan/MetaGPT/issues/15 | How can I configure it to use the api on Azure? | Hello,
I see in the docs that an openAI key needs to be configured, but I noticed there are azure_api-related files under provider.
Is there somewhere I can configure it to use the service provided by azure?
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | dfa33fcdaade1e4f8019835bf065d372d76724ae | https://github.com/geekan/MetaGPT/issues/924 | GLM4 keeps reporting errors | 2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. exp: 1 validation error for PM_NODE_AN
Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Othe... | null | null | null | {'base_commit': 'dfa33fcdaade1e4f8019835bf065d372d76724ae', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config\nCode"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.yaml"
],
"asset": []
} | null | |
geekan | MetaGPT | 80a189ad4a1546f8c1a9dbe00c42725868c35e5e | https://github.com/geekan/MetaGPT/issues/135 | failed to launch chromium browser process errors | get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.
```
INFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..
Error: Failed to launch... | null | null | null | {'base_commit': '80a189ad4a1546f8c1a9dbe00c42725868c35e5e', 'files': [{'path': 'config/puppeteer-config.json', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [
"config/puppeteer-config.json"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | 8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d | https://github.com/geekan/MetaGPT/issues/1115 | The following error appears on every run | 
2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:
Traceback (most recent call last):
File "D:\andconda\envs\metagpt\lib\site-packages\tena... | null | null | null | {'base_commit': '8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d', 'files': [{'path': 'metagpt/strategy/planner.py', 'Loc': {"('Planner', 'update_plan', 68)": {'mod': [75]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"metagpt/strategy/planner.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | bdf9d224b5a05228897553a29214adc074fbc465 | https://github.com/geekan/MetaGPT/issues/754 | SubscriptionRunner | import asyncio
from metagpt.subscription import SubscriptionRunner
from metagpt.roles import Searcher
from metagpt.schema import Message
async def trigger():
while True:
yield Message("the latest news about OpenAI")
await asyncio.sleep(1)
async def callback(msg: Message):
print(ms... | null | null | null | {'base_commit': 'bdf9d224b5a05228897553a29214adc074fbc465', 'files': [{'path': 'metagpt/environment.py', 'Loc': {"('Environment', None, 27)": {'mod': []}}, 'status': 'modified'}, {'Loc': [21], 'path': None}]} | [
{
"Loc": [
21
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null,
"metagpt/environment.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
geekan | MetaGPT | f88fa9e2df09c28f867bda54ec24fa25b50be830 | https://github.com/geekan/MetaGPT/issues/178 | Specify Directory of pdf documents as Knowledge Base | Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?
Any help would be highly appreciated
Thanks much appreciated | null | null | null | {'base_commit': 'f88fa9e2df09c28f867bda54ec24fa25b50be830', 'files': [{'path': 'metagpt/document_store', 'Loc': {}}, {'path': 'tests/metagpt/document_store', 'Loc': {}}, {'path': 'examples/search_kb.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"examples/search_kb.py"
],
"doc": [
"metagpt/document_store",
"tests/metagpt/document_store"
],
"test": [],
"config": [],
"asset": []
} | null | |
langflow-ai | langflow | 7e756b9db56677636e6920c1e6628d13e980aec7 | https://github.com/langflow-ai/langflow/issues/6006 | bug | All custom components throw errors after update to latest version | ### Bug Description
```
[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405
<class 'pydantic._internal._model_construction.ModelMetaclass'>
```
### Reproduct... | null | null | null | {} | [
{
"Loc": [
40
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
langflow-ai | langflow | 19818db68b507332be71f30dd90d16bf4c7d6f83 | https://github.com/langflow-ai/langflow/issues/3718 | enhancement | Add pgVector in the building instructions for the PostgreSQL Docker image | ### Feature Request
Include the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.
### Motivation
I am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG id... | null | null | null | {'base_commit': '19818db68b507332be71f30dd90d16bf4c7d6f83', 'files': [{'path': 'docker_example/docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3\nor\n4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config\nCode"
} | {
"code": [],
"doc": [
"docker_example/docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null |
langflow-ai | langflow | 2ddd7735129b0f35fd617f2634d35a3690b06630 | https://github.com/langflow-ai/langflow/issues/4528 | bug | Can't access flow directly by link | ### Bug Description
When you try to access a flow using it's URL (ex. http://localhost:55650/flow/0b95342f-6ce4-43d0-9d60-c28bf66a3781), the page doesn't load and in the browser's console is shown the following message: ``Uncaught SyntaxError: Unexpected token '<' (at index-DK9323ab.js:1:1)``. I think that this proble... | null | null | null | {'base_commit': '2ddd7735129b0f35fd617f2634d35a3690b06630', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | ed53fcd3b042ecb5ed04c9c4562c459476bd6763 | https://github.com/langflow-ai/langflow/issues/3896 | bug | redis.exceptions.ResponseError: unknown command 'module' | ### Bug Description
redis.exceptions.ResponseError: unknown command 'module'
https://github.com/user-attachments/assets/32ea6046-d5f1-4d85-96b5-41d381776986
### Reproduction
Add a redis click run error, see the video
### Expected behavior
ResponseError: unknown command 'MODULE'
### Who can help?
_No respo... | null | null | null | {'base_commit': 'ed53fcd3b042ecb5ed04c9c4562c459476bd6763', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | 7d400903644230a8842ce189ca904ea9f8048b07 | https://github.com/langflow-ai/langflow/issues/1239 | bug | cannot import name 'DEFAULT_CONNECTION_STRING' in v0.6.3a5 |
```
% git branch
* (HEAD detached at v0.6.3a5)
dev
% cd docker_example
% docker compose up
[+] Running 1/0
✔ Container docker_example-langflow-1 Created 0.0s
Attaching to langflow-1
langflow-1 | Traceback (most recent call last):
... | null | null | null | {'base_commit': '7d400903644230a8842ce189ca904ea9f8048b07', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"Version"
]
} | null |
langflow-ai | langflow | 12a46b6936e23829d9956d4d5f1fa51faff76137 | https://github.com/langflow-ai/langflow/issues/965 | stale | Method for Dynamically Manipulating Parameters of a Custom Component | ```python
class DynamicConfigCustomComponent(CustomComponent):
def build_config(self, prev_selection=None):
config = {
"param1": {"display_name": "Parameter 1"},
"param2": {
"display_name": "Parameter 2",
"options": [1, 2, 3],
"... | null | null | null | {'base_commit': '12a46b6936e23829d9956d4d5f1fa51faff76137', 'files': [{'path': 'src/frontend/src/types/components/index.ts', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"src/frontend/src/types/components/index.ts"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Significant-Gravitas | AutoGPT | ad7cefa10c0647feee85114d58559fcf83ba6743 | https://github.com/Significant-Gravitas/AutoGPT/issues/1902 | setup | Error with 'python -m autogpt' command. Please set your OpenAI API key in .env or as an environment variable. You can get your key from https://beta.openai.com/account/api-keys | ### Duplicates
- [X] I have searched the existing issues
### Steps to reproduce 🕹
Installed the 'stable' version of the program
I run 'python -m autogpt' command and comes up with an error.
": {'mod': [593]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/routing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | c6aa28bea2f751a91078bd8d845133ff83f352bf | https://github.com/fastapi/fastapi/issues/5422 | question
question-migrate | Unidirectional websocket connections where only the server pushes data to the clients | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I alre... | null | null | null | {} | [
{
"Loc": [
23
],
"path": null
}
] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 55afb70b3717969565499f5dcaef54b1f0acc7da | https://github.com/fastapi/fastapi/issues/891 | question
answered
question-migrate | SQL related tables and corresponding nested pydantic models in async | Really impressed with FastAPI so far... I have search docs github, tickets and googled the issue described below.
### Description
How best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?
### Additional context
I ha... | null | null | null | {} | [
{
"Loc": [
31
],
"path": null
}
] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 1760da0efa55585c19835d81afa8ca386036c325 | https://github.com/fastapi/fastapi/issues/3882 | question
question-migrate | Doing work after the HTTP response has been sent | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I alre... | null | null | null | {'base_commit': '1760da0efa55585c19835d81afa8ca386036c325', 'files': [{'path': 'fastapi/background.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/background.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | a0e4d38bea74940de013e04a6d6f399d62f04280 | https://github.com/fastapi/fastapi/issues/1498 | question
reviewed
question-migrate | RedirectResponse from a POST request route to GET request route shows 405 Error code. | _Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**
_This is not necessarily a bug, rather a question._
### Things i tried:
I want to redirect response from 2nd route to 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not... | null | null | null | {'base_commit': 'a0e4d38bea74940de013e04a6d6f399d62f04280', 'files': [{'Loc': [58], 'path': None}]} | [
{
"Loc": [
58
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | b93f8a709ab3923d1268dbc845f41985c0302b33 | https://github.com/fastapi/fastapi/issues/4551 | question
question-migrate | Attribute not found while testing a Beanie Model inside fast api | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] ... | null | null | null | {'base_commit': 'b93f8a709ab3923d1268dbc845f41985c0302b33', 'files': [{'path': 'docs/en/docs/advanced/testing-events.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"docs/en/docs/advanced/testing-events.md"
],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 78b07cb809e97f400e196ff3d89862b9d5bd5dc2 | https://github.com/fastapi/fastapi/issues/4587 | question
question-migrate | Use the raw response in Reponse classes | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] ... | null | null | null | {'base_commit': '78b07cb809e97f400e196ff3d89862b9d5bd5dc2', 'files': [{'path': 'fastapi/routing.py', 'Loc': {"('APIRoute', None, 300)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"fastapi/routing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92 | https://github.com/oobabooga/text-generation-webui/issues/3341 | bug | state isn't clearly understood how to incorporate for script.py | ### Describe the bug
I see that output_modifier and a few other functions require state object, which is not defined in script.py nor are any of the existing plugins (that I looked at) use a state object.
As a result, I am unable to use the functions. I get a message about needing to pass state
### Is there an ex... | null | null | null | {'base_commit': 'ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92', 'files': []} | [] | [] | [
{
"org": "ChobPT",
"pro": "oobaboogas-webui-langchain_agen",
"path": [
"script.py"
]
}
] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"script.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 8962bb173e9bdc36eb9cf28fe9e1952b2976e781 | https://github.com/oobabooga/text-generation-webui/issues/5337 | bug | Generation slows at max context, even when truncated | ### Describe the bug
### Issue Summary
When generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_... | null | null | null | {'base_commit': '8962bb173e9bdc36eb9cf28fe9e1952b2976e781', 'files': [{'path': 'modules/ui_model_menu.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui_model_menu.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 564a8c507fffc8b25a056d8930035c63da71fc7b | https://github.com/oobabooga/text-generation-webui/issues/3042 | bug | ERROR:Task exception was never retrieved | ### Describe the bug
Right after installation i open the webui in the browser and i receive an error.
### Is there an existing issue for this?
- [x] I have searched the existing issues
### Reproduction
Right after installation i open the webui in the browser and i receive this error.
### Screenshot
_No response_... | null | null | null | {'base_commit': '564a8c507fffc8b25a056d8930035c63da71fc7b', 'files': [{'path': 'requirements.txt', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null |
oobabooga | text-generation-webui | 07510a24149cbd6fd33df0c4a440d60b9783a18e | https://github.com/oobabooga/text-generation-webui/issues/2171 | enhancement
stale | support for fastest-inference-4bit branch of GPTQ-for-LLaMa | **Description**
There is new branch of GPTQ-for-LLaMa - fastest-inference-4bit that combines triton and cuda and people say it's much faster. It would be nice if it was supported here. I tried to compile it myself but it doesnt work with this webui because there is no llama_inference_offload.py in the new branch.
... | null | null | null | {'base_commit': '07510a24149cbd6fd33df0c4a440d60b9783a18e', 'files': [{'path': 'modules/GPTQ_loader.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/GPTQ_loader.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 7ddf6147accfb5b95e7dbbd7f1822cf976054a2a | https://github.com/oobabooga/text-generation-webui/issues/446 | bug | Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ | ### Describe the bug
I get factual answers in ?? like this Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Common sense questions and answers
Question: Hi
Factual answer: ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇ ⁇
### ... | null | null | null | {'base_commit': '7ddf6147accfb5b95e7dbbd7f1822cf976054a2a', 'files': [{'path': 'download-model.py', 'Loc': {}}]} | [] | [] | [] | {
    "iss_type": "2 (strange result)",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"download-model.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 3609ea69e4c4461a4f998bd12cc559d5a016f328 | https://github.com/oobabooga/text-generation-webui/issues/5761 | bug | api broke: AttributeError: 'NoneType' object has no attribute 'replace' | ### Describe the bug
api calls result in
AttributeError: 'NoneType' object has no attribute 'replace'
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
install no requirements and llama-cpp-python by source then try to run curl
curl http://192.168.3.17:5000/v1/... | null | null | null | {'base_commit': '3609ea69e4c4461a4f998bd12cc559d5a016f328', 'files': [{'path': 'modules/chat.py', 'Loc': {"(None, 'replace_character_names', 637)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/chat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
oobabooga | text-generation-webui | 1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a | https://github.com/oobabooga/text-generation-webui/issues/5774 | bug | The checksum verification for miniconda_installer.exe has failed. | ### Describe the bug
The checksum verification for miniconda_installer.exe has failed.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
After I extracted the files, I clicked start_windows.bat.
### Screenshot
_No response_
### Logs
```shell
Downloading Minicon... | null | null | null | {'base_commit': '1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a', 'files': [{'path': 'start_windows.bat', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"start_windows.bat"
]
} | null |
oobabooga | text-generation-webui | c17624432726ab5743dfa21af807d559e4f4ff8c | https://github.com/oobabooga/text-generation-webui/issues/6209 | bug
stale | Oobabooga login not working through reverse proxy | ### Describe the bug
I have the latest text-generation-webui (just ran the update script) running on my home computer running Windows 11. I am running it on a LAN IP (192.168.1.102) and reverse-proxying it with Nginx so I can access it remotely over the Internet.
Some recent update to text-generation-webui appea... | null | null | null | {'base_commit': 'c17624432726ab5743dfa21af807d559e4f4ff8c', 'files': [{'path': 'requirements/full/requirements.txt', 'Loc': {'(None, None, 7)': {'mod': [7]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Doc (dependency declaration)"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements/full/requirements.txt"
],
"asset": []
} | null |
hacksider | Deep-Live-Cam | 69d863b44ab5c7dad6eea04b7e3563f491c714a4 | https://github.com/hacksider/Deep-Live-Cam/issues/376 | Unable to select camera device through UI | It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera.
I was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the ... | null | null | null | {'base_commit': '69d863b44ab5c7dad6eea04b7e3563f491c714a4', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 252)": {'mod': [259]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 080d6f5110d2e185e8ce4e10451ac96313079be2 | https://github.com/hacksider/Deep-Live-Cam/issues/315 | How to select the correct camera? | How to select the correct camera ?
Is there any method to improve the output resolution of the camera? | null | null | null | {'base_commit': '080d6f5110d2e185e8ce4e10451ac96313079be2', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 252)": {'mod': [259]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 5bc3ada6324a28a8d8556da1176b546f2d2140f8 | https://github.com/hacksider/Deep-Live-Cam/issues/922 | ERROR: Cannot install -r requirements.txt (line 13), tensorflow and typing-extensions>=4.8.0 because these package versions have conflicting dependencies. | The conflict is caused by:
The user requested typing-extensions>=4.8.0
torch 2.5.1+cu121 depends on typing-extensions>=4.8.0
tensorflow-intel 2.12.1 depends on typing-extensions<4.6.0 and >=3.6.6 | null | null | null | {'base_commit': '5bc3ada6324a28a8d8556da1176b546f2d2140f8', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 19)': {'mod': [19]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Doc (dependency declaration)"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 6b0cc749574d7307b2f7deedfa2a0dbb363329da | https://github.com/hacksider/Deep-Live-Cam/issues/243 | [experimental] doesn't show the camera I want.. | I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows "Camera 0", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib',
```
(venv) (base) PS E:\deep-live-cam> python list.py
[ WARN:0@10.769] global cap_msmf.c... | null | null | null | {'base_commit': '6b0cc749574d7307b2f7deedfa2a0dbb363329da', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'webcam_preview', 307)": {'mod': [322]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | 513e41395687921d589fc10bbaf2f72ed579c84a | https://github.com/hacksider/Deep-Live-Cam/issues/915 | Subject: Missing ui.py file in modules directory - preventing project execution | Hi,
I'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following:
* Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git`
* Cloning the repository using GitHub Desktop.
* D... | null | null | null | {'base_commit': '513e41395687921d589fc10bbaf2f72ed579c84a', 'files': [{'path': 'modules/ui.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "4",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | a49d3fc6e5a228a6ac92e25831c507996fdc0042 | https://github.com/hacksider/Deep-Live-Cam/issues/697 | [Solved] inswapper_128_fp16.onnx failed:Protobuf parsing failed | I have this error on macOS Apple Silicon.
`Exception in Tkinter callback
Traceback (most recent call last):
File "/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py", line 1921, in __call__
return self.func(*args)
File "/Users//PycharmProje... | null | null | null | {} | [] | [] | [
{
"org": "hacksider",
"pro": "deep-live-cam",
"path": [
"inswapper_128_fp16.onnx"
]
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
    "loc_scope": "2 + 0",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"inswapper_128_fp16.onnx"
]
} | null | |
hacksider | Deep-Live-Cam | d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33 | https://github.com/hacksider/Deep-Live-Cam/issues/94 | Can't find onnxruntime-silicon==1.13.1 | Hi,
Currently on MacOS (Silicon, M2 Max), it seems not possible to download (with pip at least) the 1.13.1 version of onnxruntime.
`ERROR: Could not find a version that satisfies the requirement onnxruntime-silicon==1.13.1 (from versions: 1.14.1, 1.15.0, 1.16.0, 1.16.3)
ERROR: No matching distribution found for ... | null | null | null | {'base_commit': 'd4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 16)': {'mod': [16]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Install requirements"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | null | |
hacksider | Deep-Live-Cam | eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa | https://github.com/hacksider/Deep-Live-Cam/issues/345 | Program crashes when processing with DirectML | I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML.
I already tried to reinstall onnxruntime-directml with no effect. Terminal:
(myenv) E:\Edesktop\deep-liv... | null | null | null | {'base_commit': 'eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa', 'files': [{'path': 'modules/ui.py', 'Loc': {"(None, 'create_root', 93)": {'mod': [139, 140, 141]}}, 'status': 'modified'}, {'path': 'modules/core.py', 'Loc': {"(None, 'parse_args', 47)": {'mod': [67, 71]}, '(None, None, None)': {'mod': [11]}}, 'status': 'modif... | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"modules/ui.py",
"modules/core.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
Textualize | rich | 7e1928efee53da1ac7d156912df04aef83eefea5 | https://github.com/Textualize/rich/issues/1247 | Needs triage | [REQUEST] Extra caching for `get_character_cell_size` | **How would you improve Rich?**
Add a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to cache one layer down for https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46
Size `4096` was plenty for what I describe below.
**What problem does it solved fo... | null | null | null | {'base_commit': '7e1928efee53da1ac7d156912df04aef83eefea5', 'files': [{'path': 'rich/cells.py', 'Loc': {"(None, 'get_character_cell_size', 28)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/cells.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Textualize | rich | 5c9161d0c48254fb579827249a9ee7d88f4589b7 | https://github.com/Textualize/rich/issues/1489 | Needs triage | [REQUEST] current item of a progress | when creating progress bars for logical items (that are then supported with additional progress pars,
i would consider it helpful if it was possible to add a name/render able for the current item, and to push those in updates
i`m not yet sure how this is best expressed/implemented | null | null | null | {'base_commit': '5c9161d0c48254fb579827249a9ee7d88f4589b7', 'files': [{'path': 'rich/progress.py', 'Loc': {"('Progress', 'update', 739)": {'mod': []}}, 'status': 'modified'}, {'path': 'rich/progress.py', 'Loc': {"('Task', None, 437)": {'mod': [466]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/progress.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Textualize | rich | 0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80 | https://github.com/Textualize/rich/issues/2457 | bug | [BUG] Console(no_color=True) does not work on Windows 10 | You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).
**Describe the bug**
The "no_color=True" Console parameter does not seem to do anything on Windows 10. I tested on both Cmder and native cmd.exe t... | null | null | null | {'base_commit': '0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80', 'files': [{'path': 'rich/console.py', 'Loc': {"('Console', None, 583)": {'mod': [612]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"rich/console.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ytdl-org | youtube-dl | 427cc215310804127b55744fcc3664ede38a4a0d | https://github.com/ytdl-org/youtube-dl/issues/21363 | question | How does youtube-dl detect advertisements? | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check lis... | null | null | null | {'base_commit': '427cc215310804127b55744fcc3664ede38a4a0d', 'files': [{'path': 'youtube_dl/downloader/hls.py', 'Loc': {"('HlsFD', 'is_ad_fragment_start', 78)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "5",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/downloader/hls.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ytdl-org | youtube-dl | 8b7340a45eb0e3aeaa996896ff8690b6c3a32af6 | https://github.com/ytdl-org/youtube-dl/issues/15955 | use youtube-dl with cookies file in code not from command line | ## Please follow the guide below
- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly
- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)
- Use the *Preview* tab to see what your issue will actually look like
... | null | null | null | {'base_commit': '8b7340a45eb0e3aeaa996896ff8690b6c3a32af6', 'files': [{'path': 'youtube_dl/YoutubeDL.py', 'Loc': {"('YoutubeDL', None, 113)": {'mod': [208]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/YoutubeDL.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ytdl-org | youtube-dl | 267d81962a0709f15f82f96b7aadbb5473a06992 | https://github.com/ytdl-org/youtube-dl/issues/16870 | [bilibili]how can i download video on page2? | ### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.
- [x] I've **verified**... | null | null | null | {'base_commit': '267d81962a0709f15f82f96b7aadbb5473a06992', 'files': [{'path': 'youtube_dl/extractor/bilibili.py', 'Loc': {"('BiliBiliIE', None, 25)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/extractor/bilibili.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ytdl-org | youtube-dl | eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71 | https://github.com/ytdl-org/youtube-dl/issues/16883 | [Feature request] Network retry, with configurability | I just ran some large youtube-dl scripts, and noticed a few videos were missing finally.
This was probably due to intermittent network downtimes, and apparently youtube-dl doesn't do any network retry at all (I may be wrong).
Thus, I suggest adding an option named for example `--network-retry`, related to `--sock... | null | null | null | {'base_commit': 'eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {"(None, 'parseOpts', 41)": {'mod': [458, 462]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/options.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
ytdl-org | youtube-dl | 5014bd67c22b421207b2650d4dc874b95b36dda1 | https://github.com/ytdl-org/youtube-dl/issues/30539 | question | limited download speed | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check lis... | null | null | null | {'base_commit': '5014bd67c22b421207b2650d4dc874b95b36dda1', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/extractor/youtube.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
ytdl-org | youtube-dl | e90d175436e61e207e0b0cae7f699494dcf15922 | https://github.com/ytdl-org/youtube-dl/issues/9104 | Chinese title was missing ! | ```
root@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'w0dMz8RBG7g']
[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968
[debug] youtube-dl version 2016.04.01
[debug] Python version... | null | null | null | {'base_commit': 'e90d175436e61e207e0b0cae7f699494dcf15922', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {"(None, 'parseOpts', 22)": {'mod': [447]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"youtube_dl/options.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 3794f1e20a56f3b7bcd23f82a006e266f2a57a05 | https://github.com/localstack/localstack/issues/2511 | type: usage | Cannot connect to DynamoDB from lambda | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
# Type of request: This is a ...
- [x] bug report
- [ ] feature request
# Detailed description
I'm using localstack for local development. I have a DynamoDB table named `readings` and I'd li... | null | null | null | {} | [
{
"Loc": [
19
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
localstack | localstack | 1c5f2e9650155a839cc842a9cd07faf3e76ed5d2 | https://github.com/localstack/localstack/issues/1078 | Connect to localhost:4568 [localhost/127.0.0.1] failed: Connection refused (Connection refused) | Hi there, I am having trouble connecting to Kinesis on localstack. Everything runs fine when I run it locally, the error happens inside of our Jenkins pipeline.
Here is the Dockerfile I am using:
```
FROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6
USER root
RUN apt-get update
RUN apt-get -y install curl
RUN cur... | null | null | null | {'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Config, Code"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 1c5f2e9650155a839cc842a9cd07faf3e76ed5d2 | https://github.com/localstack/localstack/issues/1095 | Healthcheck when running in docker | I'm running localstack with docker-compose as a dependency for a service that I'm developing. The problem is that my service calls localstack before it's fully initialized. The only solution I could find so far is a hard `sleep <seconds>` at start-up, but that only works on my specific system and produces unexpected re... | null | null | null | {'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
localstack | localstack | 5d11af78ae1d19560f696a9e1abb707bd115c390 | https://github.com/localstack/localstack/issues/4970 | type: bug
status: triage needed
area: configuration
aws:cloudformation
area: networking | Lambda invocation exception | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Creating and/or updating Lambda functions in docker does not work after updating LocalStack image to the latest version with the following error in LocalStack logs:
```
2021-11-20T03:33:32.357:DEBUG:locals... | null | null | null | {} | [
{
"Loc": [
96
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
localstack | localstack | c07094dbf52c947e77d952825eb4daabf409655d | https://github.com/localstack/localstack/issues/5516 | type: bug
status: triage needed
status: response required
aws:cognito | bug: JWT ID Token issued by cognito-idp can not be verified in v0.14.0 but can in 0.11.5 | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
JWT tokens issued by cognito can not be verified.
### Expected Behavior
JWT tokens issues by cognito should be verifiable.
### How are you starting LocalStack?
With the `localstack` script
###... | null | null | null | {} | [
{
"Loc": [
82
],
"path": null
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
OpenInterpreter | open-interpreter | dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad | https://github.com/OpenInterpreter/open-interpreter/issues/499 | Bug | raise Exception("`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.") | ### Describe the bug
Fresh install on ubuntu 22,
I'm using interpreter in terminal.
After sending a prompt, at some point on the answer the program crashes
```
> Traceback (most recent call last):
File "/home/fauxprophet/Documents/Ops/openai/bin/interpreter", line 8, in <module>
sys.exit(cli())
File... | null | null | null | {'base_commit': 'dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad', 'files': [{'path': 'interpreter/core/core.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"interpreter/core/core.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
OpenInterpreter | open-interpreter | 1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d | https://github.com/OpenInterpreter/open-interpreter/issues/15 | Error: cannot import name 'cli' from 'interpreter' | ```console
╰─$ uname -a
Linux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux
╰─$ pip --version 1 ↵
pip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)
╰─$ interpreter ... | null | null | null | {'base_commit': '1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d', 'files': [{'path': 'interpreter/interpreter.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"interpreter/interpreter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
OpenInterpreter | open-interpreter | 36ec07125efec86594c91e990f68e0ab214e7edf | https://github.com/OpenInterpreter/open-interpreter/issues/1548 | run interpreter --model ollama/qwen2.5:3b error | ### Bug Description
When executing the command `interpreter --model ollama/qwen2.5:3b`, an error occurs with the specific error message:
```
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
This error indicates that there is an unterminated string while trying to pa... | null | null | null | {'base_commit': '36ec07125efec86594c91e990f68e0ab214e7edf', 'files': [{'path': 'docs/usage/terminal/arguments.mdx', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
    "info_type": "Doc"
} | {
"code": [],
"doc": [
"docs/usage/terminal/arguments.mdx"
],
"test": [],
"config": [],
"asset": []
} | null | |
OpenInterpreter | open-interpreter | 8fb4668dc7451ac58ac57ba587ed77194469f739 | https://github.com/OpenInterpreter/open-interpreter/issues/1175 | Error when inporting interpreter | ### Describe the bug
I have the following error when I try to import interpreter:
```
Traceback (most recent call last):
File "/home/seba/workspace/AutoProgrammer/interpreter.py", line 1, in <module>
from interpreter import interpreter
File "/home/seba/workspace/AutoProgrammer/interpreter.py", line 1, in ... | null | null | null | {} | [
{
"path": "/home/seba/workspace/AutoProgrammer/interpreter.py"
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"/home/seba/workspace/AutoProgrammer/interpreter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 3bc25680529cdb6b5d407c8332e820aeb2e0b948 | https://github.com/abi/screenshot-to-code/issues/66 | WebSocket error code |
"Your demonstration website has the same error, please take a look." | null | null | null | {'base_commit': '3bc25680529cdb6b5d407c8332e820aeb2e0b948', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Config"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 2f88cf9b2568163954ecc7c20ef9879263bfc9ba | https://github.com/abi/screenshot-to-code/issues/476 | Error generating code. Please contact support. | I have already started the project both frontend and backend but when placing the image I get the following error "Error generating code. Please contact support." Could you help me with this problem?

| null | null | null | {} | [] | [
".env"
] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "1",
    "info_type": "Other (environment variable; a misreading of one doc loc)"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".env"
],
"asset": []
} | null | |
abi | screenshot-to-code | 4e30b207c1ee9ddad05a37c31a11ac5a182490b7 | https://github.com/abi/screenshot-to-code/issues/270 | Error configuring ANTHROPIC API KEY in.env file | I added "ANTHROPIC_API_KEY=s****" to the.env file
"No Anthropic API key found. Please add the environment variable ANTHROPIC_API_KEY to backend/.env"
| null | null | null | {'base_commit': '4e30b207c1ee9ddad05a37c31a11ac5a182490b7', 'files': [{'path': 'backend/config.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [
"backend/config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 226af5bf4183539c97c7bab825cb9324b8c570c0 | https://github.com/abi/screenshot-to-code/issues/136 | error generating code | Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a Github issue.
While hitting the URL and pasting the screenshot it shows the error below; am I doing it correctly?
<img width="940" alt="Screenshot 2023-11-30 212304" src="https://github.com/abi/screenshot-to-code/as... | null | null | null | {'base_commit': '226af5bf4183539c97c7bab825cb9324b8c570c0', 'files': [{'path': 'Troubleshooting.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [],
"doc": [
"Troubleshooting.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1 | https://github.com/abi/screenshot-to-code/issues/452 | build failed | **Describe the bug**
Docker container Exited for `screenshot-to-code-main-frontend-1`
**To Reproduce**
OS: Ubuntu 22.04.4 LTS
Docker Compose version v2.28.1
Build version: (commit id) b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1
**Screenshots of backend AND frontend terminal logs**
Nginx conf
```
loc... | null | null | null | {'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"frontend/tailwind.config.js"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 214163b0e02176333b5543740cf6262e5da99602 | https://github.com/abi/screenshot-to-code/issues/268 | model evaluation method | How to evaluate the performance of the model on generalized data, such as comparing the original screenshots with the generated results? Are there any indicators? | null | null | null | {'base_commit': '214163b0e02176333b5543740cf6262e5da99602', 'files': [{'path': 'blog/evaluating-claude.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"blog/evaluating-claude.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1 | https://github.com/abi/screenshot-to-code/issues/443 | ReferenceError: module is not defined | When running the frontend yarn dev command, I get the error below.
Steps to reproduce the behavior:
1. Go to frontend folder
2. execute: `yarn`
3. execute: `yarn dev`
Immediately after executing the yarn dev command, I get a message that says:
```
ERROR ... | null | null | null | {'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"frontend/tailwind.config.js"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b | https://github.com/abi/screenshot-to-code/issues/132 | Why Connection closed 1006 | 

... | null | null | null | {'base_commit': '1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b', 'files': [{'path': 'backend/main.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"backend/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 689783eabd552151fa511e44cba90c14f3ee4dcd | https://github.com/abi/screenshot-to-code/issues/83 | code error | Hi, I tried the [online version](https://picoapps.xyz/free-tools/screenshot-to-code) of your tool with my API key but I got an error from that following screenshot

which ... | null | null | null | {'base_commit': '689783eabd552151fa511e44cba90c14f3ee4dcd', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null | |
abi | screenshot-to-code | 7d6fde2deafa014dc1a90c3b1dcb2ed88680a2ff | https://github.com/abi/screenshot-to-code/issues/1 | Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte | Hello, thank you for your contribution, I am having the above problem, can you help me?
` File "<frozen codecs>", line 322, in decode
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte` | null | null | null | {} | [] | [
".env"
] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "1",
"info_type": "Other\n环境变量"
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".env"
],
"asset": []
} | null | |
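The `0xff` in position 0 from the record above is almost always a UTF-16 byte-order mark: the `.env` file was saved as "Unicode" (UTF-16) by the editor and then read back as UTF-8. A small sketch of the mechanism (the key name and value are placeholders):

```python
import codecs

# A .env saved as UTF-16 LE starts with the BOM b"\xff\xfe"; decoding those
# bytes as UTF-8 fails on the very first byte, exactly as in the traceback.
raw = codecs.BOM_UTF16_LE + "OPENAI_API_KEY=sk-placeholder\n".encode("utf-16-le")
assert raw[0] == 0xFF  # the offending first byte

error = None
try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    error = exc

# Decoding with the encoding the editor actually used succeeds; the fix is
# simply to re-save the file as plain UTF-8 (without a BOM).
assert error is not None
assert raw.decode("utf-16").startswith("OPENAI_API_KEY")
```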
abi | screenshot-to-code | fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c | https://github.com/abi/screenshot-to-code/issues/150 | Error generating code. Check the Developer Console AND the backend logs for details | My ChatGPT has access to GPT-VISION. and the web app loads well but when I upload an image. it returns this error 'Error generating code. Check the Developer Console AND the backend logs for details'
<img width="466" alt="error" src="https://github.com/abi/screenshot-to-code/assets/100529823/97c337b7-de54-45f9-8def-f9... | null | null | null | {'base_commit': 'fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c', 'files': [{'path': 'docker-compose.yml', 'Loc': {'(None, None, 20)': {'mod': [20]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [],
"doc": [
"docker-compose.yml"
],
"test": [],
"config": [],
"asset": []
} | null | |
pytorch | pytorch | 4622b3395276b37e10141fab43ffea33941ca0c2 | https://github.com/pytorch/pytorch/issues/2384 | How the grad is transferred between layer | consider a simple example here:
```python
import torch
from torch.autograd import Variable
input = Variable(torch.randn(20, 3, 28, 28), requires_grad=True)
m = torch.nn.Conv2d(3, 16, 5)
output = m(input)
loss = torch.sum(output)# define loss to perform backprop
m.zero_grad()
loss.backward()
print(type(i... | null | null | null | {'base_commit': '4622b3395276b37e10141fab43ffea33941ca0c2', 'files': [{'path': 'torch/autograd/variable.py', 'Loc': {"('Variable', 'retain_grad', 236)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"torch/autograd/variable.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
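The record above points at `Variable.retain_grad`, which answers the question: gradients flow through every layer during `backward()`, but `.grad` is only stored on leaf tensors unless an intermediate tensor opts in. A hedged sketch in current PyTorch (the legacy `Variable` wrapper is no longer needed; assumes `torch` is installed):

```python
import torch

# Leaf tensors with requires_grad=True accumulate .grad automatically;
# intermediate (non-leaf) results free their gradient during backward()
# unless retain_grad() is called on them beforehand.
x = torch.randn(20, 3, 28, 28, requires_grad=True)   # leaf
conv = torch.nn.Conv2d(3, 16, 5)
out = conv(x)                                        # non-leaf
out.retain_grad()                                    # keep out.grad around

loss = out.sum()
loss.backward()

# d(sum)/d(out) is 1 everywhere, and the gradient reached the leaf input.
assert x.grad is not None
assert torch.equal(out.grad, torch.ones_like(out))
```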
pytorch | pytorch | 2abcafcfd8beb4f6a22e08532d58f9f09c490f0f | https://github.com/pytorch/pytorch/issues/96983 | module: binaries
triaged
module: arm | PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support | ### 🐛 Describe the bug
PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, where as PyTorch 1.13.0 had support.
Solution:
the wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo.
example command for pytorch wheel builder script:
`./... | null | null | null | {'base_commit': '2abcafcfd8beb4f6a22e08532d58f9f09c490f0f', 'files': [{'path': '.ci/aarch64_linux/build_aarch64_wheel.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
".ci/aarch64_linux/build_aarch64_wheel.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
pytorch | pytorch | 2dff0b3e918530719f7667cb31541f036a25e3f2 | https://github.com/pytorch/pytorch/issues/48435 | AttributeError: module 'torch.cuda' has no attribute 'comm' | ## ❓ Questions and Help
I'm using torch 1.7.0, and get this kind of error
my torch is installed via
pip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html
my os is win10 | null | null | https://github.com/facebookresearch/InterHand2.6M/commit/874eb9f740ef54c275433d1bd27f8fb8f6a8f17d | {} | [] | [] | [
{
"org": "facebookresearch",
"pro": "InterHand2.6M",
"path": [
"{'base_commit': '874eb9f740ef54c275433d1bd27f8fb8f6a8f17d', 'files': [{'path': 'common/nets/module.py', 'status': 'modified', 'Loc': {\"('PoseNet', 'soft_argmax_1d', 41)\": {'mod': [43]}}}]}"
]
}
] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [
"common/nets/module.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"InterHand2.6M"
]
} | null | |
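The `torch.cuda.comm` AttributeError above is a submodule-import issue rather than a CUDA problem: in the torch builds of that era, `import torch` did not pull in `torch.cuda.comm` as a side effect, so attribute access failed until the submodule was imported explicitly (which is what the linked InterHand2.6M commit effectively relies on). A hedged sketch; assumes `torch` is installed, and no GPU is needed just to import:

```python
import torch
import torch.cuda.comm  # explicit submodule import; avoids the AttributeError

# After the explicit import the attribute is bound on torch.cuda and the
# collective helpers (broadcast/scatter/gather) are reachable; actually
# calling them still requires CUDA devices.
assert hasattr(torch.cuda, "comm")
assert callable(torch.cuda.comm.broadcast)
```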
xtekky | gpt4free | e8f6013d0349229fd8f7d298952cfe56fc4b8761 | https://github.com/xtekky/gpt4free/issues/2070 | bug
stale | Liaobots and You don't work | Liaobots and You do not work, they give the following errors:
```
Liaobots: ResponseStatusError: Response 500: Error
```
```
You: ResponseStatusError: Response 401: {"status_code":401,"request_id":"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048","error_type":"endpoint_not_authorized_for_sdk","error_mess... | null | null | null | {'base_commit': 'e8f6013d0349229fd8f7d298952cfe56fc4b8761', 'files': [{'path': 'g4f/Provider/Liaobots.py', 'Loc': {"('Liaobots', 'create_async_generator', 111)": {'mod': [149]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/Provider/Liaobots.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | fa2d608822540c9b73350bfa036e8822ade4e23f | https://github.com/xtekky/gpt4free/issues/2305 | stale | ValueError: Unknown model: dall-e-3 | ```
C:\Users\MAX\Desktop>pip install -U g4f[all]
Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: g4f[all] in c:\users\max\appdata\local\packages\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\localcache\local-packages\python312\site-packages (0.3.3.2)
... | null | null | null | {'base_commit': 'fa2d608822540c9b73350bfa036e8822ade4e23f', 'files': [{'path': 'g4f/models.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/models.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | 1ade1d959cbc9aea7cf653bbe5b6c414ba486c97 | https://github.com/xtekky/gpt4free/issues/1292 | bug
stale | RecursionError: maximum recursion depth exceeded while calling a Python object | Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10
**Bug description**
G4F API has these errors after 5-10 requests. I have to restart constantly. It is very uncomfortable. This problem did not exist in the previous version.
**Errors**
```
RecursionError: maximum recursion depth exceeded in comparison... | null | null | null | {'base_commit': '1ade1d959cbc9aea7cf653bbe5b6c414ba486c97', 'files': [{'path': 'g4f/cli.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | c159eebd494b1aef06340429b7b62cdfb84f783d | https://github.com/xtekky/gpt4free/issues/2556 | bug | Errors when generating images in the following models: | Hi!
errors when generating images in the following models:
Response 404: The page could not be found
sdxl, playground-v2.5, sd-3
dall-e-3: Missing "_U" cookie
midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate ve... | null | null | null | {'base_commit': 'c159eebd494b1aef06340429b7b62cdfb84f783d', 'files': [{'path': 'projects/windows/main.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"projects/windows/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | b7eee50930dbd782d7c068d1d29cd270b97bc741 | https://github.com/xtekky/gpt4free/issues/1710 | bug
stale | AttributeError: module 'g4f' has no attribute 'client' | **Bug description**
When trying to run the script from the Quickstart, I get this error.
Traceback (most recent call last):
File "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py", line 3, in <module>
engine = g4f.client.Client()
AttributeError: module 'g4f' has no attribute 'client'
**Environmen... | null | null | null | {'base_commit': 'b7eee50930dbd782d7c068d1d29cd270b97bc741', 'files': [{'path': 'g4f/client/__init__.py', 'Loc': {}}, {'path': 'C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py'}]} | [
{
"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"
}
] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3",
"info_type": "Code"
} | {
"code": [
"g4f/client/__init__.py"
],
"doc": [],
"test": [
"C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"
],
"config": [],
"asset": []
} | null |
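`AttributeError: module 'g4f' has no attribute 'client'` in the record above is the generic Python submodule rule: `import g4f` runs only the package `__init__`, and a submodule becomes an attribute of its parent package only once it is imported itself, so the robust spelling is an explicit `from g4f.client import Client` (assuming a g4f release that ships the client module). The mechanism, demonstrated with the stdlib `xml` package since `g4f` may not be installed:

```python
import importlib

import xml  # top-level package only; submodules are not loaded as a side effect

# An explicit submodule import (equivalent to "from g4f.client import Client")
# both loads the module and binds it as an attribute of the parent package.
dom = importlib.import_module("xml.dom")
assert dom is xml.dom  # the attribute now exists on the package
```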
xtekky | gpt4free | 2a54c36043b9d87b96c4b7699ce194f8523479b8 | https://github.com/xtekky/gpt4free/issues/552 | bug | Unable to fetch the response, Please try again. | 
| null | null | null | {'base_commit': '2a54c36043b9d87b96c4b7699ce194f8523479b8', 'files': [{'path': 'gpt4free/you/__init__.py', 'Loc': {"('Completion', 'create', 22)": {'mod': [41]}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"gpt4free/you/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
xtekky | gpt4free | c29487cdb522a2655ccff45bdfc33895ed4daf84 | https://github.com/xtekky/gpt4free/issues/2078 | bug | HuggingChat provider is not working - ResponseStatusError: Response 500 | ### Bug description
When I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:`
```
Using HuggingChat provider and CohereForAI/c4ai-command-r-plus model
INFO:werkzeug:192.168.80.1 - - [22/Jun/2024 ... | null | null | null | {'base_commit': 'c29487cdb522a2655ccff45bdfc33895ed4daf84', 'files': [{'path': 'g4f/Provider/HuggingChat.py', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"g4f/Provider/HuggingChat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
Z4nzu | hackingtool | c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e | https://github.com/Z4nzu/hackingtool/issues/68 | question | default username and password of social fish | hay man the tool works fine but what is the default username and password of social fish | null | null | null | {'base_commit': 'c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e', 'files': [{'path': 'README.md', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Doc"
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | null |
scikit-learn | scikit-learn | f7026b04f5e5909aa15848b25de2becd675871a9 | https://github.com/scikit-learn/scikit-learn/issues/2475 | Multinomial Naive Bayes: Scikit and Weka have different results | Hi All,
I used the sklearn.naive_bayes.MultinomialNB on a toy example.
Comparing the results with WEKA, I've noticed a quite different AUC.
Scikit (0.579) - Weka (0.664)
| null | null | null | {'base_commit': 'f7026b04f5e5909aa15848b25de2becd675871a9', 'files': [{'path': 'sklearn/cross_validation.py', 'Loc': {"(None, 'cross_val_score', 1075)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"sklearn/cross_validation.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
scikit-learn | scikit-learn | 0ab5c678bba02888b62b777b4c757e367b3458d5 | https://github.com/scikit-learn/scikit-learn/issues/8470 | How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical? | null | null | null | {'base_commit': '0ab5c678bba02888b62b777b4c757e367b3458d5', 'files': [{'path': 'sklearn/preprocessing/_encoders.py', 'Loc': {"('OneHotEncoder', None, 151)": {'mod': []}}, 'status': 'modified'}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"sklearn/preprocessing/_encoders.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | ||
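The answer implied by the record above (it points at `OneHotEncoder`) is that `GradientBoostingRegressor` treats every input column as numeric, so categorical columns must be encoded before `fit`. A minimal hedged sketch using `pandas.get_dummies` as the encoder (column names are invented for illustration; `OneHotEncoder` inside a `Pipeline` is the more production-ready route):

```python
import pandas as pd

# A mixed frame: one categorical column, one numeric column.
df = pd.DataFrame({
    "color": ["red", "blue", "red", "green"],  # categorical
    "size": [1.0, 2.0, 3.0, 4.0],              # numeric
})

# One-hot encode only the categorical column; numeric columns pass through.
X = pd.get_dummies(df, columns=["color"])

# X is now all-numeric and safe to hand to
# GradientBoostingRegressor().fit(X, y).
assert X.shape == (4, 4)  # "size" plus three dummy columns
assert "color_red" in X.columns
```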
pandas-dev | pandas | 184f2dba255f279697cb1d7567428b3e6403c2d0 | https://github.com/pandas-dev/pandas/issues/3209 | BUG: read_csv: dtype={'id' : np.str}: Datatype not understood | I have a CSV with several columns, the first of which is a field called `id` with entries of the type `0001`, `0002`, etc.
When loading this file, the following works:
``` python
pd.read_csv(my_path, dtype={'id' : np.int})
```
but the following doesn't:
``` python
pd.read_csv(my_path, dtype={'id' : np.str})
```
n... | null | null | null | {} | [
{
"Loc": [
12,
18
],
"path": null
}
] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "comment",
"loc_scope": "3\nand\n2",
"info_type": "Code"
} | {
"code": [
null
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
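For the record above, the modern resolution is that `np.str` was only an alias for the builtin `str` (deprecated in NumPy 1.20 and removed in 1.24), and the parser wants plain `str` to keep zero-padded IDs intact. A short sketch whose column layout mirrors the issue's description:

```python
import io

import pandas as pd

csv = io.StringIO("id,value\n0001,10\n0002,20\n")

# dtype={"id": str} keeps the leading zeros; letting the parser infer the
# type, or forcing int, would turn "0001" into 1.
df = pd.read_csv(csv, dtype={"id": str})

assert list(df["id"]) == ["0001", "0002"]
assert df["value"].dtype.kind == "i"  # untouched columns are still inferred
```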
meta-llama | llama | 53011c3d7946dadb8274a4c5c7586ab54edf792d | https://github.com/meta-llama/llama/issues/48 | How to run 13B model on 4*16G V100? | RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.78 GiB total capacity; 14.26 GiB already allocated; 121.19 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management ... | null | null | null | {} | [] | [] | [
{
"org": "fabawi",
"pro": "wrapyfi"
},
{
"org": "modular-ml",
"pro": "wrapyfi-examples_llama"
}
] | {
"iss_type": "1",
"iss_reason": "5",
"loc_way": "comment",
"loc_scope": "2",
"info_type": "Code"
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"wrapyfi",
"wrapyfi-examples_llama"
]
} | null |
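The OOM text in the record above suggests its own first mitigation, `max_split_size_mb`, while the linked wrapyfi-examples_llama repo addresses the real constraint: the 13B fp16 weights (~26 GB) exceed any single 16 GB V100 and must be sharded across the cards. A hedged shell sketch of the allocator hint only; the launch command in the comment is illustrative, not from the issue:

```shell
# Reduce CUDA allocator fragmentation, as the error message recommends
# (value in MiB; this does not add memory, it only changes block splitting).
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Hypothetical launch afterwards, e.g. with the MP=2 checkpoint layout:
#   torchrun --nproc_per_node 2 example.py --ckpt_dir llama/13B
echo "$PYTORCH_CUDA_ALLOC_CONF"
```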