Dataset schema: repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)

repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
google-research/bert | nlp | 1,169 | Access to TFRC | Hello all,
How can we get free TPU access via TFRC to pre-train a BERT language model for a specific language?
The blog post below says we have to sign up at https://services.google.com/fb/forms/tpusignup/ , but that appears to be a dead URL.
https://ai.googleblog.com/2017/05/introducing-tensorflow-research-cloud.html
Clicking "Apply now" at https://www.tensorflow.org/tfrc also does nothing. | open | 2020-11-08T18:17:26Z | 2020-11-08T18:17:26Z | https://github.com/google-research/bert/issues/1169 | [] | alighofrani95 | 0 |
roboflow/supervision | deep-learning | 1,337 | What models or data sets are used for object detection | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
The following picture is an example, but no documentation was found

### Additional
_No response_ | closed | 2024-07-10T03:50:18Z | 2024-07-10T07:03:01Z | https://github.com/roboflow/supervision/issues/1337 | [
"question"
] | dearMOMO | 1 |
Esri/arcgis-python-api | jupyter | 1,323 | pip install arcgis==2.0.1 fails on MacOS | **Describe the bug**
Cannot install _arcgis_ Python SDK v2.0.1 on MacOS using _pip install .._ which makes server installs using _requirements.txt_ difficult (_arcgis_ Python SDK v2.0.0 installs just fine).
**To Reproduce**
Steps:
```
pip install arcgis==2.0.1
```
Error:
```
pip install arcgis==2.0.1
ERROR: Ignored the following versions that require a different python version: 2.0.1 Requires-Python >=3.7, <3.10
ERROR: Could not find a version that satisfies the requirement arcgis==2.0.1 (from versions: 1.3.0, 1.3.0.post1, 1.3.0.post2, 1.4.0, 1.4.1, 1.4.2, 1.5.0, 1.5.1, 1.5.2, 1.5.2.post1, 1.5.3, 1.6.0, 1.6.1, 1.6.1.post1, 1.6.2, 1.6.2.post1, 1.7.0, 1.7.1, 1.8.0, 1.8.0.post1, 1.8.1, 1.8.2, 1.8.3, 1.8.3.post1, 1.8.3.post2, 1.8.4, 1.8.5.post1, 1.8.5.post2, 1.8.5.post3, 1.9.0, 1.9.1, 2.0.0)
ERROR: No matching distribution found for arcgis==2.0.1
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Platform (please complete the following information):**
- OS: MacOS
- Python API Version [e.g. `2.0.1`]
**Additional context**
`pip install arcgis==2.0.0` works just fine.
Related:
1. https://github.com/Esri/arcgis-python-api/issues/1299
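For context, the first ERROR line is the key: pip evaluated arcgis 2.0.1, saw its Requires-Python constraint of `>=3.7, <3.10`, and ignored that release because the interpreter in use (likely 3.10+) falls outside the range. A stdlib sketch of that gate (the helper name is mine, not pip's):

```python
import sys

def supports_requires_python(version=sys.version_info, lower=(3, 7), upper=(3, 10)):
    """Check a (major, minor) interpreter version against a
    ``>=lower, <upper`` Requires-Python style constraint."""
    v = (version[0], version[1])
    return lower <= v < upper

# arcgis 2.0.1 declares ``>=3.7, <3.10``; on Python 3.10+ pip therefore
# skips the 2.0.1 distribution entirely and reports it under
# "Ignored the following versions that require a different python version".
print(supports_requires_python((3, 9, 13)))  # installable
print(supports_requires_python((3, 10, 4)))  # pip ignores 2.0.1
```

So installing 2.0.1 should work from a Python 3.7-3.9 environment (e.g. a conda env pinned to 3.9), without any change to requirements.txt.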
| closed | 2022-08-08T23:06:01Z | 2024-05-15T13:05:19Z | https://github.com/Esri/arcgis-python-api/issues/1323 | [
"bug"
] | twystd | 17 |
google-research/bert | nlp | 650 | Using bert for Document Classification | How can i use BERT to fine tune for document classifications? Has anyone implemented it? Any example or lead would be really helpful. I want to use it for document which are way bigger than current max length(512 tokens).
| open | 2019-05-15T15:19:20Z | 2021-03-03T18:51:42Z | https://github.com/google-research/bert/issues/650 | [] | sandeeppilania | 6 |
ultralytics/ultralytics | pytorch | 19,658 | Questions about yolov11 output printing | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question

This is the latest YOLOv11 model I downloaded. The number of layers printed is 181, but the model I downloaded earlier printed 319 layers. Does this have any effect?

### Additional
_No response_ | closed | 2025-03-12T07:09:27Z | 2025-03-12T23:44:43Z | https://github.com/ultralytics/ultralytics/issues/19658 | [
"question",
"fixed",
"detect"
] | xukaixu | 3 |
google-research/bert | tensorflow | 1,272 | run_pretraining.py ignores undefined flags. | Hi, I'm trying to pre-train a BERT model.
I found that **`run_pretraining.py` ignores undefined flags**.
For example, if I run this:
```
python3 bert/run_pretraining.py \
--input_file=gs://input_dir/*.tfrecord \
--output_dir=gs://output_dir/ \
--do_train=True \
--do_eval=True \
--bert_config_file=./bert_config.json \
--train_batch_size=1024 \
--max_seq_length=128 \
--max_predictions_per_seq=20 \
--num_train_steps=1000000 \
--num_warmup_steps=10000 \
--learning_rate=5e-5 \
--save_checkpoints_steps=10000 \
--init_checkpoints=340000 \
--use_tpu=True \
--tpu_name=tpu2 \
--tpu_zone=us-central1-f \
--fake=undifined_flags
--gcp_project=my_project \
--num_tpu_cores=8
```
**I included an undefined arg, `--fake=undifined_flags`.**
I think it should throw an error, but it doesn't.
Training proceeds normally: the TPU connects and checkpoints are written as usual.
The fine-tuning results from those checkpoints are also not bad.
Why doesn't it throw an error? Why does it work?
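For context: whether unknown flags raise depends entirely on how argv is parsed. The stdlib `argparse` has a "known-only" mode that silently sets aside unrecognised arguments instead of erroring, and TensorFlow/abseil flag handling can behave similarly depending on version and configuration, so this is a plausible (though unverified here) mechanism:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--train_batch_size", type=int)

# parse_known_args() sets aside anything it does not recognise instead of
# raising: unknown flags end up in the second return value, silently.
args, unknown = parser.parse_known_args(
    ["--train_batch_size=1024", "--fake=undifined_flags"]
)
print(args.train_batch_size)  # 1024
print(unknown)                # ['--fake=undifined_flags']
```

If the script's entry point discards that "unknown" remainder, a typo in a real flag name (e.g. `--init_checkpoints` vs `--init_checkpoint`) is swallowed the same way, which is worth double-checking in your command.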
| open | 2021-10-27T03:18:12Z | 2021-10-27T03:18:42Z | https://github.com/google-research/bert/issues/1272 | [] | kyle-bong | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,053 | How to control /socket.io/ endpoint | Hello,
This is less of an issue and more of a question, but is it possible to control the `/socket.io/` endpoint?
I am using `Flask-SocketIO` for authenticated users only, using the `authenticated_only` function wrapper:
```python
def authenticated_only(f):
    @functools.wraps(f)
    def wrapped(*args, **kwargs):
        if not current_user.is_authenticated:
            disconnect()
        else:
            return f(*args, **kwargs)
    return wrapped
```
All `@socketio.on` handlers are making use of this wrapper, including `@socketio.on('connect')`.
If an unauthenticated user makes a direct `GET` request to my server to the `/socket.io/` endpoint, for example `/socket.io/?EIO=3&transport=polling&t=MqK2tJx` this request will time-out and eventually result in a 502 error (`socket.io` is behind `nginx`). But it seems to still be "stuck" in the async queue.
Is it possible to control the `/socket.io/` and immediately return a `403` for unauthenticated users?
Otherwise I eventually end up with errors as a core is "stuck" trying to manage this request `[DANGER] async queue is full !!!` | closed | 2019-09-09T06:55:49Z | 2019-09-09T10:40:32Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1053 | [] | dank69 | 2 |
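Flask-SocketIO supports rejecting a client at connect time: a `'connect'` handler that returns `False` makes the server refuse the handshake immediately (newer releases also allow raising `ConnectionRefusedError` with a payload) instead of leaving the polling request to hang behind nginx. A framework-free sketch of that dispatch convention (the names and status codes here are illustrative, not Flask-SocketIO internals):

```python
def connect_handler(user_authenticated):
    # Returning False from the connect handler signals "reject this client".
    if not user_authenticated:
        return False
    return True

def dispatch_connect(handler, user_authenticated):
    # Mimics the server side: a False return becomes an immediate
    # 403-style refusal rather than a request left to time out.
    ok = handler(user_authenticated)
    return 200 if ok is not False else 403

print(dispatch_connect(connect_handler, True))   # 200
print(dispatch_connect(connect_handler, False))  # 403
```

So moving the `current_user.is_authenticated` check into the `'connect'` handler and returning `False` there should answer the handshake right away, rather than calling `disconnect()` later in individual event handlers.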
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 453 | [BUG]: Cannot generate resume: Chromedriver version not discovered | ### Describe the bug
```
2024-09-30 11:27:11.553 | ERROR | src.aihawk_easy_applier:_create_and_upload_resume:470 - Failed to generate resume: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. The chromedriver version cannot be discovered
2024-09-30 11:27:11.556 | ERROR | src.aihawk_easy_applier:_create_and_upload_resume:472 - Traceback: Traceback (most recent call last):
  File "/Users/maauri/Projects/Auto_jobs/linkedIn_auto_jobs_applier_with_AI/src/aihawk_easy_applier.py", line 442, in _create_and_upload_resume
    resume_pdf_base64 = self.resume_generator_manager.pdf_base64(job_description_text=job.description)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/manager_facade.py", line 81, in pdf_base64
    pdf_base64 = HTML_to_PDF(temp_html_path)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/utils.py", line 25, in HTML_to_PDF
    driver = create_driver_selenium()
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/lib_resume_builder_AIHawk/utils.py", line 18, in create_driver_selenium
    return webdriver.Chrome(service=service, options=options)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/chrome/webdriver.py", line 82, in __init__
    service.path = DriverFinder.get_path(service, options)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/driver_finder.py", line 43, in get_path
    raise err
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/driver_finder.py", line 40, in get_path
    path = shutil.which(service.path) or SeleniumManager().driver_location(options)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/selenium_manager.py", line 91, in driver_location
    result = self.run(args)
  File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/selenium_manager.py", line 112, in run
    raise SeleniumManagerException(f"Selenium Manager failed for: {command}.\n{result}{stderr}")
selenium.common.exceptions.SeleniumManagerException: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. The chromedriver version cannot be discovered
2024-09-30 11:27:11.556 | ERROR | src.aihawk_easy_applier:fill_up:337 - Failed to find form elements: Message: Selenium Manager failed for: /Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/selenium/webdriver/common/macos/selenium-manager --browser chrome --output json --debug. The chromedriver version cannot be discovered
```
### Steps to reproduce
Run the program after cloning using python3 main.py
### Expected behavior
The resume should have been generated
### Actual behavior
Selects an already uploaded resume and proceeds
### Branch
main
### Branch name
_No response_
### Python version
3.12.2
### LLM Used
OpenAI
### Model used
GPT-4o-mini
### Additional context
_No response_ | closed | 2024-09-30T16:30:48Z | 2024-10-25T00:04:02Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/453 | [
"bug"
] | nitSubedi | 8 |
PaddlePaddle/ERNIE | nlp | 21 | neglect、、 | closed | 2019-03-16T06:36:33Z | 2019-03-16T06:40:19Z | https://github.com/PaddlePaddle/ERNIE/issues/21 | [] | JianboTang | 0 |
|
dask/dask | numpy | 10,848 | pandas 2.2: changed behaviour in `test_multi.py::test_concat5` | This test has started failing when pandas got upgraded to 2.2 last weekend:
```
FAILED dask/dataframe/tests/test_multi.py::test_concat5 - AssertionError: DataFrame are different
DataFrame shape mismatch
[left]: (14, 6)
[right]: (14, 5)
``` | closed | 2024-01-22T13:52:57Z | 2024-01-23T09:59:57Z | https://github.com/dask/dask/issues/10848 | [
"dataframe"
] | crusaderky | 1 |
JaidedAI/EasyOCR | pytorch | 557 | ddd | ddd | closed | 2021-10-01T02:34:34Z | 2021-10-01T02:37:25Z | https://github.com/JaidedAI/EasyOCR/issues/557 | [] | dayoorang | 0 |
yzhao062/pyod | data-science | 542 | issues about scikit-learn | 1- in requirements.txt you have "scikit_learn" i think should scikit-learn. should change underscore "_" to hiffen "-".
2 - pyod is a indepencence of pycaret, i'm tryng making support to scikit-learn 1.4, but pyod have a conflict, please make support to scikit-learn 1.4 and release a new version
thanks | closed | 2024-01-20T16:51:13Z | 2024-02-09T20:39:34Z | https://github.com/yzhao062/pyod/issues/542 | [] | celestinoxp | 3 |
pydata/bottleneck | numpy | 466 | [BUG]move_rank is much slower than pd.rolling(window).rank() when window is a large value(>1000) | move_rank is much slower than pd.rolling(window).rank() when window is a large value(>1000) | open | 2024-12-04T08:57:35Z | 2024-12-04T08:57:35Z | https://github.com/pydata/bottleneck/issues/466 | [
"bug"
] | sysy007uuu | 0 |
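A plausible explanation for the gap above, assuming nothing about bottleneck's internals: a moving rank recomputed from scratch per window costs O(n*w), while pandas' rolling rank maintains an ordered structure across window slides, so its per-step cost grows only logarithmically with w and the difference widens for large windows. The naive O(n*w) version looks like this (the [-1, 1] scaling is my approximation of `move_rank`'s convention, not taken from its source):

```python
def naive_move_rank(values, window):
    """O(n*w) moving rank: for each position, the rank of the current
    value within its trailing window, scaled to [-1, 1]."""
    out = []
    for i in range(len(values)):
        if i + 1 < window:
            out.append(None)  # incomplete window
            continue
        win = values[i + 1 - window:i + 1]
        current = win[-1]
        smaller = sum(v < current for v in win[:-1])
        equal = sum(v == current for v in win[:-1])
        # map (#smaller + 0.5 * #equal) over (window - 1) peers to [-1, 1]
        out.append(2.0 * (smaller + 0.5 * equal) / (window - 1) - 1.0)
    return out

print(naive_move_rank([1, 2, 3, 2, 5], 3))  # [None, None, 1.0, -0.5, 1.0]
```

Each output element re-scans the whole window, so doubling `window` doubles the work per element; an ordered-structure approach only pays an O(log w) insert/delete per slide.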
wandb/wandb | data-science | 9,110 | [Bug-App]: Invalid License json: cannot unmarshal string into Go struct field WeaveLimits.weaveLimits.weaveLimitBytes of type int64 | ### Describe the bug
I installed wandb locally and got a free license.

But when I click on update settings, I'm getting the error
```
Invalid License json: cannot unmarshal string into Go struct field WeaveLimits.weaveLimits.weaveLimitBytes of type int64
```

When I look at the payload, it's of type string:

Can you fix that in your license ? | open | 2024-12-18T23:22:33Z | 2025-01-15T06:02:53Z | https://github.com/wandb/wandb/issues/9110 | [
"ty:bug",
"a:app"
] | 6be709c0 | 3 |
SALib/SALib | numpy | 170 | question: how do I select the optimum number of resamples value to use? | Hi,
The results of the Sobol, Morris, Delta Moment-Independent, and Derivative-based Global Sensitivity Measure methods may depend on the number of resamples used. If so, is there a way to estimate the optimum value?
For example, I have this problem definition:
```python
problem = {
    'num_vars': 6,
    'names': ['Amplitude', 'Bandwidth', 'Envelope', 'Instantaneous Frequency', 'Sweetness', 'Thin Bed'],
    'bounds': [[min_amplitude, max_amplitude],
               [min_bandwidth, max_bandwidth],
               [min_envelope, max_envelope],
               [min_instantaneous_frequency, max_instantaneous_frequency],
               [min_sweetness, max_sweetness],
               [min_thin_bed, max_thin_bed]],
    'distributions': ['norm', 'norm', 'norm', 'norm', 'norm', 'norm']
}
```
The input and model data are in a 2D array of size 7344 x 7. The last column contains the model output.
What would be the optimum value for the number of resamples? Is there a formula I can use?
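As far as I know there is no closed-form optimum: `num_resamples` only controls the bootstrap used to attach confidence intervals to the indices (SALib's default is 100), while the index estimates themselves are driven by the sample size. The practical rule is to increase resamples until the reported confidence intervals stop changing. A stdlib illustration of that behaviour on made-up data (the values stand in for per-sample index estimates; nothing here is SALib code):

```python
import random
import statistics

random.seed(0)
data = [random.gauss(0.3, 0.05) for _ in range(500)]  # stand-in sensitivity estimates

def bootstrap_ci_width(data, num_resamples):
    """Half-width of a ~95% bootstrap CI on the mean. More resamples make
    this estimate more stable; they do not change the mean itself."""
    means = []
    for _ in range(num_resamples):
        resample = [random.choice(data) for _ in range(len(data))]
        means.append(statistics.fmean(resample))
    return 1.96 * statistics.stdev(means)

for n in (50, 100, 1000):
    print(n, round(bootstrap_ci_width(data, n), 4))
```

The printed half-widths fluctuate less as `n` grows but converge to the same value, which is why "large enough that the CI is stable" (often the default 100, or a few hundred) is the usual answer rather than a formula.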
Many thanks,
Ivan
| closed | 2017-11-02T17:08:21Z | 2017-11-02T22:36:41Z | https://github.com/SALib/SALib/issues/170 | [
"question"
] | ivan-marroquin | 3 |
Layout-Parser/layout-parser | computer-vision | 181 | google-api-core error with lp.TesseractAgent(languages='eng') | **Describe the bug**
code: `ocr_agent = lp.TesseractAgent(languages='eng')` <br>
Error: ContextualVersionConflict: (google-api-core 2.11.0 (/usr/local/lib/python3.10/dist-packages), Requirement.parse('google-api-core[grpc]<2.0.0dev,>=1.14.0'), {'google-cloud-vision'})
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version, see the [Layout Parser Releases](https://github.com/Layout-Parser/layout-parser/releases/)
**To Reproduce**
Steps to reproduce the behavior:
1. What command or script did you run?
```
ocr_agent = lp.TesseractAgent(languages='eng')
```
**Environment**
1. Platform: Google Colab
| open | 2023-05-22T06:15:55Z | 2023-05-22T06:15:55Z | https://github.com/Layout-Parser/layout-parser/issues/181 | [
"bug"
] | mlbrothers | 0 |
sqlalchemy/sqlalchemy | sqlalchemy | 10,650 | pymysql Connection.ping AttributeError | ### Describe the bug
SQLAlchemy with PyMySQL 0.9.3: pool pre-ping raises an exception.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.23 1.4.50
### DBAPI (i.e. the database driver)
pymysql
### Database Vendor and Major Version
MySQL 8
### Python Version
3.9
### Operating system
Linux
### To Reproduce
```python
from sqlalchemy import create_engine
from sqlalchemy import text

SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://xxxx:xxxx@xxxx:3306/mysql'
engine = create_engine(SQLALCHEMY_DATABASE_URI, pool_pre_ping=True, pool_size=1)

def test():
    with engine.connect() as conn:
        conn.execute(text("select user from mysql.user limit 1"))

for _ in range(2):
    test()
```
### Error
```
Traceback (most recent call last):
  File "/workspace/test-env/../greatrds/.vscode/test.py", line 13, in <module>
    test()
  File "/workspace/test-env/../greatrds/.vscode/test.py", line 9, in test
    with engine.connect() as conn:
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3268, in connect
    return self._connection_cls(self)
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 145, in __init__
    self._dbapi_connection = engine.raw_connection()
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 3292, in raw_connection
    return self.pool.connect()
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 452, in connect
    return _ConnectionFairy._checkout(self)
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1378, in _checkout
    del fairy
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 146, in __exit__
    raise exc_value.with_traceback(exc_tb)
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/pool/base.py", line 1306, in _checkout
    result = pool._dialect._do_ping_w_event(
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 709, in _do_ping_w_event
    return self.do_ping(dbapi_connection)
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 104, in do_ping
    if self._send_false_to_ping:
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 1146, in __get__
    obj.__dict__[self.__name__] = result = self.fget(obj)
  File "/workspace/test-env/venv/lib/python3.9/site-packages/sqlalchemy/dialects/mysql/pymysql.py", line 93, in _send_false_to_ping
    insp = langhelpers.get_callable_argspec(Connection.ping)
AttributeError: 'function' object has no attribute 'ping'
```
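The last frames show the failure mode: SQLAlchemy's pymysql dialect introspects `Connection.ping`, but with PyMySQL 0.9.3 the name it resolves to is a plain function, so the attribute lookup raises. Upgrading PyMySQL to a modern release (where `Connection` is the class carrying the expected `ping` method), or to a SQLAlchemy version that guards this lookup, is the practical fix. A stdlib sketch of the underlying pattern and a defensive guard (class and function names here are illustrative, not SQLAlchemy code):

```python
import inspect

class NewStyleConnection:
    def ping(self, reconnect=True):
        return "pong"

def legacy_connect(*args, **kwargs):  # some old drivers exposed a bare function here
    return "connection"

def ping_signature(obj):
    """Safely look up a .ping method, as code introspecting a DBAPI must."""
    if inspect.isclass(obj) and hasattr(obj, "ping"):
        return inspect.signature(obj.ping)
    return None  # plain function / missing attribute: nothing to inspect

print(ping_signature(NewStyleConnection))  # (self, reconnect=True)
print(ping_signature(legacy_connect))      # None
```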
### Additional context
Package Version
---------- -------
greenlet 3.0.1
pip 21.2.4
PyMySQL 0.9.3
setuptools 58.1.0
SQLAlchemy 1.4.50
| closed | 2023-11-17T16:10:05Z | 2023-11-24T19:48:30Z | https://github.com/sqlalchemy/sqlalchemy/issues/10650 | [
"bug",
"mysql",
"external driver issues"
] | luoye2018 | 20 |
huggingface/diffusers | pytorch | 11,041 | WAN2.1 apply_group_offloading **ERROR** result | ### Describe the bug
I am attempting to use the WAN 2.1 model from the diffusers library to complete an image-to-video task on an NVIDIA RTX 4090. To optimize memory usage, I chose the group offload method and intended to compare resource consumption across different configurations. However, during testing, I encountered two main issues:
1. When using the group_offload_leaf_stream method:
I received warnings that some layers were not executed during the forward pass:
```
It seems like some layers were not executed during the forward pass. This may lead to problems when applying lazy prefetching with automatic tracing and lead to device-mismatch related errors. Please make sure that all layers are executed during the forward pass. The following layers were not executed:
unexecuted_layers=['blocks.25.attn2.norm_added_q', 'blocks.10.attn2.norm_added_q', 'blocks.13.attn2.norm_added_q', 'blocks.11.attn2.norm_added_q', 'blocks.34.attn2.norm_added_q', 'blocks.0.attn2.norm_added_q', 'blocks.35.attn2.norm_added_q', 'blocks.33.attn2.norm_added_q', 'blocks.21.attn2.norm_added_q', 'blocks.20.attn2.norm_added_q', 'blocks.3.attn2.norm_added_q', 'blocks.7.attn2.norm_added_q', 'blocks.22.attn2.norm_added_q', 'blocks.14.attn2.norm_added_q', 'blocks.29.attn2.norm_added_q', 'blocks.9.attn2.norm_added_q', 'blocks.1.attn2.norm_added_q', 'blocks.37.attn2.norm_added_q', 'blocks.18.attn2.norm_added_q', 'blocks.30.attn2.norm_added_q', 'blocks.4.attn2.norm_added_q', 'blocks.32.attn2.norm_added_q', 'blocks.36.attn2.norm_added_q', 'blocks.26.attn2.norm_added_q', 'blocks.6.attn2.norm_added_q', 'blocks.38.attn2.norm_added_q', 'blocks.17.attn2.norm_added_q', 'blocks.12.attn2.norm_added_q', 'blocks.19.attn2.norm_added_q', 'blocks.16.attn2.norm_added_q', 'blocks.15.attn2.norm_added_q', 'blocks.28.attn2.norm_added_q', 'blocks.24.attn2.norm_added_q', 'blocks.31.attn2.norm_added_q', 'blocks.8.attn2.norm_added_q', 'blocks.5.attn2.norm_added_q', 'blocks.27.attn2.norm_added_q', 'blocks.2.attn2.norm_added_q', 'blocks.39.attn2.norm_added_q', 'blocks.23.attn2.norm_added_q']
```

This issue resulted in severe degradation of the generated output.
This is the image I selected:

I got an incorrect video:
https://github.com/user-attachments/assets/7a8b55a2-6a71-493a-b7ae-64566b321954
When I use the default pipe (i.e., without group_offload_leaf_stream), I get the correct result:
https://github.com/user-attachments/assets/9b54c2f2-fa93-422f-b3df-619ee96bb3c8
2. When using the group_offload_block_1_stream method:
I encountered a runtime error: "RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same". It appears that the VAE module was not correctly assigned to the GPU device.
```
Traceback (most recent call last):
  File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 171, in <module>
    main(args)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 143, in main
    run_inference()
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 1188, in wrapper
    val = prof(func)(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/memory_profiler.py", line 761, in f
    return func(*args, **kwds)
  File "/maindata/data/shared/public/haobang.geng/code/video-generate/i2v-baseline/wanx-all-profile.py", line 130, in run_inference
    output = pipe(
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 587, in __call__
    latents, condition = self.prepare_latents(
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/pipelines/wan/pipeline_wan_i2v.py", line 392, in prepare_latents
    latent_condition = retrieve_latents(self.vae.encode(video_condition), generator)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/utils/accelerate_utils.py", line 46, in wrapper
    return method(self, *args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 795, in encode
    h = self._encode(x)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 762, in _encode
    out = self.encoder(x[:, :, :1, :, :], feat_cache=self._enc_feat_map, feat_idx=self._enc_conv_idx)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 439, in forward
    x = self.conv_in(x, feat_cache[idx])
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/diffusers/models/autoencoders/autoencoder_kl_wan.py", line 78, in forward
    return super().forward(x)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 725, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/maindata/data/shared/public/haobang.geng/miniconda/envs/vdm/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 720, in _conv_forward
    return F.conv3d(
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```
Request for Help:
Are there recommended approaches to ensure all layers are properly executed, especially for the group_offload_leaf_stream method?
How can I resolve the device mismatch issue related to the VAE?
Any suggestions or guidance would be greatly appreciated!
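For issue 1, here is a framework-free toy (my own illustration, not diffusers internals) of why a defined-but-never-executed layer can break lazy prefetching with automatic tracing: the first forward pass records which layers run, later passes onload only what was recorded, so an untraced layer keeps its CPU weights and any code path that does touch it hits a device mismatch.

```python
class Layer:
    def __init__(self, name):
        self.name, self.device = name, "cpu"

    def forward(self, trace=None):
        if self.device != "cuda":
            raise RuntimeError(f"{self.name}: weights on {self.device}, input on cuda")
        if trace is not None:
            trace.append(self.name)
        return self.name

layers = {n: Layer(n) for n in ["attn1", "norm_added_q", "ffn"]}

# Pass 1 (tracing): everything is onloaded, but only the layers the model
# actually uses run; norm_added_q is registered yet never executed.
for l in layers.values():
    l.device = "cuda"
trace = []
for name in ["attn1", "ffn"]:
    layers[name].forward(trace)
for l in layers.values():
    l.device = "cpu"

# Pass 2 (prefetch from the recorded trace): only traced layers are onloaded.
for name in trace:
    layers[name].device = "cuda"

# Touching the untraced layer now fails with a device mismatch.
try:
    layers["norm_added_q"].forward()
except RuntimeError as e:
    print(e)  # norm_added_q: weights on cpu, input on cuda
```

If this matches what is happening, ensuring every registered module participates in the traced forward pass, or excluding the unused `norm_added_q` modules from offloading, would be the direction to investigate.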
### Reproduction
here is my code
```python
import argparse
import functools
import json
import os
import pathlib
import psutil
import time

import numpy as np
import torch
from diffusers import FluxPipeline
from diffusers.hooks import apply_group_offloading
from memory_profiler import profile
from diffusers import AutoencoderKLWan, WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image
from transformers import CLIPVisionModel
from diffusers import FlowMatchEulerDiscreteScheduler, UniPCMultistepScheduler, WanPipeline


def get_memory_usage():
    process = psutil.Process(os.getpid())
    mem_bytes = process.memory_info().rss
    return mem_bytes


@profile(precision=2)
def apply_offload(pipe: FluxPipeline, method: str) -> None:
    if method == "full_cuda":
        pipe.to("cuda")
    elif method == "model_offload":
        pipe.enable_model_cpu_offload()
    elif method == "sequential_offload":
        pipe.enable_sequential_cpu_offload()
    elif method == "group_offload_block_1":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="block_level",
            num_blocks_per_group=1,
            use_stream=False,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_leaf":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="leaf_level",
            use_stream=False,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_block_1_stream":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="block_level",
            num_blocks_per_group=1,
            use_stream=True,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))
    elif method == "group_offload_leaf_stream":
        offloader_fn = functools.partial(
            apply_group_offloading,
            onload_device=torch.device("cuda"),
            offload_device=torch.device("cpu"),
            offload_type="leaf_level",
            use_stream=True,
        )
        list(map(offloader_fn, [pipe.text_encoder, pipe.transformer, pipe.vae, pipe.image_encoder]))


@profile(precision=2)
def load_pipeline():
    model_id = "Wan2.1-I2V-14B-480P-Diffusers"
    image_encoder = CLIPVisionModel.from_pretrained(
        model_id, subfolder="image_encoder", torch_dtype=torch.float32
    )
    vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
    scheduler_b = UniPCMultistepScheduler(prediction_type="flow_prediction", use_flow_sigmas=True, flow_shift=3.0)
    pipe = WanImageToVideoPipeline.from_pretrained(
        model_id, vae=vae, image_encoder=image_encoder, torch_dtype=torch.bfloat16, scheduler=scheduler_b
    )
    return pipe


@torch.no_grad()
def main(args):
    os.makedirs(args.output_dir, exist_ok=True)
    os.makedirs(f"./results/check-wanmulti-framework/{args.method}/", exist_ok=True)
    pipe = load_pipeline()
    apply_offload(pipe, args.method)
    apply_offload_memory_usage = get_memory_usage()
    torch.cuda.reset_peak_memory_stats()
    cuda_model_memory = torch.cuda.max_memory_reserved()
    output_dir = pathlib.Path(args.output_dir)
    output_dir.mkdir(exist_ok=True, parents=True)

    run_inference_memory_usage_list = []

    def cpu_mem_callback():
        nonlocal run_inference_memory_usage_list
        run_inference_memory_usage_list.append(get_memory_usage())

    @profile(precision=2)
    def run_inference():
        image = load_image("./dataset/character-img/imgs3/1.jpeg")
        max_area = 480 * 832
        aspect_ratio = image.height / image.width
        mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
        height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
        width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
        prompt = (
            "A person smile."
        )
        negative_prompt = "Bright tones, overexposed, static, blurred details, subtitles, style, works, paintings, images, static, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, misshapen limbs, fused fingers, still picture, messy background, three legs, many people in the background, walking backwards"
        generator = torch.Generator("cuda").manual_seed(100)
        output = pipe(
            image=image,
            prompt=prompt,
            negative_prompt=negative_prompt,
            height=height,
            width=width,
            num_frames=81,
            guidance_scale=5.0,
            generator=generator,
        ).frames[0]
        export_to_video(output, f"./results/check-wanmulti-framework/{args.method}/wanx_diffusers.mp4", fps=16)

    t1 = time.time()
    run_inference()
    torch.cuda.synchronize()
    t2 = time.time()
    cuda_inference_memory = torch.cuda.max_memory_reserved()
    time_required = t2 - t1
    # run_inference_memory_usage = sum(run_inference_memory_usage_list) / len(run_inference_memory_usage_list)
    # print(f"Run inference memory usage list: {run_inference_memory_usage_list}")
    info = {
        "time": round(time_required, 2),
        "cuda_model_memory": round(cuda_model_memory / 1024**3, 2),
        "cuda_inference_memory": round(cuda_inference_memory / 1024**3, 2),
        "cpu_offload_memory": round(apply_offload_memory_usage / 1024**3, 2),
    }
    with open(output_dir / f"memory_usage_{args.method}.json", "w") as f:
        json.dump(info, f, indent=4)


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--method", type=str, default="full_cuda", choices=["full_cuda", "model_offload", "sequential_offload", "group_offload_block_1", "group_offload_leaf", "group_offload_block_1_stream", "group_offload_leaf_stream"])
    parser.add_argument("--output_dir", type=str, default="./results/offload_profiling")
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()
    main(args)
```
here is my environment
```
Package Version
--------------------------------- --------------------
absl-py 2.1.0
accelerate 1.4.0
addict 2.4.0
aiofiles 23.2.1
aiohappyeyeballs 2.4.3
aiohttp 3.10.10
aiosignal 1.3.1
airportsdata 20241001
albucore 0.0.17
albumentations 1.4.18
aliyun-python-sdk-core 2.16.0
aliyun-python-sdk-kms 2.16.5
altair 5.4.1
annotated-types 0.7.0
antlr4-python3-runtime 4.9.3
anyio 4.6.2.post1
astor 0.8.1
asttokens 2.4.1
astunparse 1.6.3
async-timeout 4.0.3
attrs 24.2.0
av 13.1.0
beautifulsoup4 4.12.3
blake3 1.0.4
blinker 1.9.0
boto3 1.35.60
botocore 1.35.60
braceexpand 0.1.7
certifi 2024.8.30
cffi 1.17.1
charset-normalizer 3.4.0
click 8.1.7
clip 0.2.0
cloudpickle 3.1.0
coloredlogs 15.0.1
comm 0.2.2
compressed-tensors 0.8.0
ConfigArgParse 1.7
contourpy 1.3.0
controlnet_aux 0.0.7
cpm-kernels 1.0.11
crcmod 1.7
cryptography 44.0.1
cupy-cuda12x 13.3.0
cycler 0.12.1
Cython 3.0.12
dash 2.18.2
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
dashscope 1.22.2
datasets 3.0.1
debugpy 1.8.10
decorator 4.4.2
decord 0.6.0
deepspeed 0.15.2
depyf 0.18.0
diffsynth 1.1.2
diffusers 0.33.0.dev0
dill 0.3.8
diskcache 5.6.3
distro 1.9.0
dnspython 2.7.0
docker-pycreds 0.4.0
easydict 1.13
einops 0.8.0
email_validator 2.2.0
eval_type_backport 0.2.0
exceptiongroup 1.2.2
executing 2.1.0
facexlib 0.3.0
fairscale 0.4.13
fastapi 0.115.2
fastjsonschema 2.20.0
fastrlock 0.8.3
ffmpy 0.4.0
filelock 3.16.1
filterpy 1.4.5
flash-attn 2.6.3
Flask 3.0.3
flatbuffers 24.3.25
fonttools 4.54.1
frozenlist 1.4.1
fsspec 2024.6.1
ftfy 6.3.0
func_timeout 4.3.5
future 1.0.0
fvcore 0.1.5.post20221221
gast 0.6.0
gguf 0.10.0
gitdb 4.0.11
GitPython 3.1.43
google-pasta 0.2.0
gradio 5.5.0
gradio_client 1.4.2
grpcio 1.66.2
h11 0.14.0
h5py 3.12.1
hjson 3.1.0
httpcore 1.0.6
httptools 0.6.4
httpx 0.27.2
huggingface-hub 0.29.1
humanfriendly 10.0
idna 3.10
imageio 2.36.0
imageio-ffmpeg 0.5.1
imgaug 0.4.0
importlib_metadata 8.5.0
iniconfig 2.0.0
interegular 0.3.3
iopath 0.1.10
ipykernel 6.29.5
ipython 8.29.0
ipywidgets 8.1.5
itsdangerous 2.2.0
jaxtyping 0.2.34
jedi 0.19.1
Jinja2 3.1.4
jiter 0.7.0
jmespath 0.10.0
joblib 1.4.2
jsonschema 4.23.0
jsonschema-specifications 2024.10.1
jupyter_client 8.6.3
jupyter_core 5.7.2
jupyterlab_widgets 3.0.13
keras 3.7.0
kiwisolver 1.4.7
lark 1.2.2
lazy_loader 0.4
libclang 18.1.1
libigl 2.5.1
linkify-it-py 2.0.3
llvmlite 0.43.0
lm-format-enforcer 0.10.9
lmdb 1.6.2
loguru 0.7.3
lvis 0.5.3
Markdown 3.7
markdown-it-py 2.2.0
MarkupSafe 2.1.5
matplotlib 3.9.2
matplotlib-inline 0.1.7
mdit-py-plugins 0.3.3
mdurl 0.1.2
memory-profiler 0.61.0
mistral_common 1.5.1
ml-dtypes 0.4.1
modelscope 1.23.2
moviepy 1.0.3
mpmath 1.3.0
msgpack 1.1.0
msgspec 0.18.6
multidict 6.1.0
multiprocess 0.70.16
namex 0.0.8
narwhals 1.10.0
natsort 8.4.0
nbformat 5.10.4
nest-asyncio 1.6.0
networkx 3.4.1
ninja 1.11.1.3
numba 0.60.0
numpy 1.26.4
nvdiffrast 0.3.3
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-cusparselt-cu12 0.6.2
nvidia-ml-py 12.560.30
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
omegaconf 2.3.0
onnxruntime 1.20.0
open3d 0.18.0
openai 1.54.4
openai-clip 1.0.1
opencv-python 4.10.0.84
opencv-python-headless 4.10.0.84
opt_einsum 3.4.0
optree 0.13.1
orjson 3.10.7
oss2 2.19.1
outlines 0.0.46
packaging 24.1
pandas 2.2.3
parso 0.8.4
partial-json-parser 0.2.1.1.post4
peft 0.13.2
pexpect 4.9.0
pillow 10.4.0
pip 24.2
platformdirs 4.3.6
plotly 5.24.1
pluggy 1.5.0
pooch 1.8.2
portalocker 2.10.1
proglog 0.1.10
prometheus_client 0.21.0
prometheus-fastapi-instrumentator 7.0.0
prompt_toolkit 3.0.48
propcache 0.2.0
protobuf 5.28.2
psutil 6.0.0
ptyprocess 0.7.0
pudb 2024.1.2
pure_eval 0.2.3
py-cpuinfo 9.0.0
pyairports 2.1.1
pyarrow 17.0.0
pybind11 2.13.6
pycocoevalcap 1.2
pycocotools 2.0.8
pycountry 24.6.1
pycparser 2.22
pycryptodome 3.21.0
pydantic 2.9.2
pydantic_core 2.23.4
pydub 0.25.1
Pygments 2.18.0
pyiqa 0.1.10
PyMatting 1.1.12
PyMCubes 0.1.6
pyparsing 3.2.0
pyquaternion 0.9.9
pytest 8.3.4
python-dateutil 2.9.0.post0
python-dotenv 1.0.1
python-multipart 0.0.12
pytorch3d 0.7.8
pytz 2024.2
PyYAML 6.0.2
pyzmq 26.2.0
qwen-vl-utils 0.0.10
ray 2.37.0
referencing 0.35.1
regex 2024.9.11
rembg 2.0.59
requests 2.32.3
requests-toolbelt 1.0.0
retrying 1.3.4
rich 13.9.2
rpds-py 0.20.0
ruff 0.6.9
s3transfer 0.10.3
safehttpx 0.1.1
safetensors 0.4.5
scikit-image 0.24.0
scikit-learn 1.5.2
scikit-video 1.1.11
scipy 1.14.1
semantic-version 2.10.0
sentencepiece 0.2.0
sentry-sdk 2.18.0
setproctitle 1.3.3
setuptools 75.2.0
shapely 2.0.7
shellingham 1.5.4
six 1.16.0
sk-video 1.1.10
smmap 5.0.1
sniffio 1.3.1
soupsieve 2.6
stack-data 0.6.3
starlette 0.40.0
SwissArmyTransformer 0.4.12
sympy 1.13.1
tabulate 0.9.0
tenacity 9.0.0
tensorboard 2.18.0
tensorboard-data-server 0.7.2
tensorboardX 2.6.2.2
tensorflow-io-gcs-filesystem 0.37.1
termcolor 2.5.0
thop 0.1.1.post2209072238
threadpoolctl 3.5.0
tifffile 2024.9.20
tiktoken 0.7.0
timm 1.0.11
tokenizers 0.20.3
tomesd 0.1.3
tomli 2.2.1
tomlkit 0.12.0
torch 2.6.0
torchaudio 2.6.0
torchdiffeq 0.2.4
torchsde 0.2.6
torchvision 0.21.0
tornado 6.4.2
tqdm 4.66.5
traitlets 5.14.3
trampoline 0.1.2
transformers 4.46.2
transformers-stream-generator 0.0.4
trimesh 4.5.2
triton 3.2.0
typeguard 2.13.3
typer 0.12.5
typing_extensions 4.12.2
tzdata 2024.2
uc-micro-py 1.0.3
urllib3 2.2.3
urwid 2.6.16
urwid_readline 0.15.1
uvicorn 0.32.0
uvloop 0.21.0
wandb 0.18.7
watchfiles 0.24.0
wcwidth 0.2.13
webdataset 0.2.100
websocket-client 1.8.0
websockets 12.0
Werkzeug 3.0.4
wheel 0.44.0
widgetsnbextension 4.0.13
wrapt 1.17.0
xatlas 0.0.9
xxhash 3.5.0
yacs 0.1.8
yapf 0.43.0
yarl 1.15.3
zipp 3.20.2
```
### Logs
```shell
```
### System Info
- 🤗 Diffusers version: 0.33.0.dev0
- Platform: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.35
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.1
- Transformers version: 4.46.2
- Accelerate version: 1.4.0
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A800-SXM4-80GB, 81251 MiB
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help?
@DN6 @a-r-r-o-w | closed | 2025-03-12T08:49:48Z | 2025-03-18T09:14:11Z | https://github.com/huggingface/diffusers/issues/11041 | [
"bug"
] | Passenger12138 | 6 |
JoeanAmier/TikTokDownloader | api | 293 | 已经下载过的视频,本地删除后,再次下载提示存在下载记录或文件已存在,跳过下载,怎么解决? | closed | 2024-09-09T02:22:09Z | 2024-12-29T06:05:20Z | https://github.com/JoeanAmier/TikTokDownloader/issues/293 | [] | arvinws | 1 |
|
collerek/ormar | pydantic | 993 | Lost connection to MySQL server during query | When my application throws an exception and I try to access an endpoint again, it returns the error "**(2013, 'Lost connection to MySQL server during query ([Errno 104] Connection reset by peer)')"**.
It is as if it were not connected to the database or as if it did not refresh. Using SQLAlchemist I edited the parameters "**pool_timeout=60, pool_recycle=280, pool_size=20, max_overflow=50**" when I call create_engine, but in ORMAR I don't know how to do it.
Any idea how to do it?
Thanks! | closed | 2023-01-21T23:36:13Z | 2023-02-03T03:05:07Z | https://github.com/collerek/ormar/issues/993 | [
"enhancement"
] | alexol91 | 1 |
jmcarpenter2/swifter | pandas | 32 | Progress Bar doesn't seem to be working | Hi, I just installed swifter and I'm running something like:
```
df.swifter.apply(lambda x: custom_function(x), axis=1)
```
But the progress bar doesn't show up (there is no console output at all). The documentation looks like `self._progress_bar` should be set to `True` by default, but I also tried:
```
df.swifter.progress_bar(enable=True).apply(lambda x: custom_function(x), axis=1)
```
In this case the progress bar still does not display. I was initially running this in a Jupyter notebook, but also moved it over into a regular .py script run from terminal, and in both scenarios there was no progress bar.
Any ideas why this could be happening? Thanks. | closed | 2018-11-21T21:29:37Z | 2019-03-05T06:28:11Z | https://github.com/jmcarpenter2/swifter/issues/32 | [] | basilvetas | 8 |
mitmproxy/pdoc | api | 92 | Is it possible to remove author's information at the bottom of the produced documentation? | Is it possible to remove author's information at the bottom of the produced documentation? I mean, that might depend on the license of `pdoc`, but I think it's not interesting to show in every page the author of `pdoc`, with all respect and gratitude for what he's made, which is amazing.
Note that I am not asking how to manually do it, but if it's possible according to pdoc's license or not.
| closed | 2016-02-03T19:56:33Z | 2018-06-03T02:44:57Z | https://github.com/mitmproxy/pdoc/issues/92 | [] | nbro | 2 |
mars-project/mars | pandas | 2,813 | How to disable console logs ? | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
There are a lot of logs in the console; how can I disable console log output?
Note that the issue tracker is NOT the place for general support. For
discussions about development, questions about usage, or any general questions,
we use our mailing list mars-dev@googlegroups.com.
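A generic first step for the question above is to raise the threshold of the library's logger via Python's standard logging module (a sketch; the logger name "mars" is an assumption):

```python
import logging

# Raise the threshold of the library's logger ("mars" is an assumed name).
logging.getLogger("mars").setLevel(logging.ERROR)

# A capturing handler demonstrates the effect:
records = []
handler = logging.Handler()
handler.emit = records.append          # collect records instead of printing
log = logging.getLogger("mars")
log.addHandler(handler)
log.propagate = False                  # keep messages out of the root handlers
log.info("noisy progress message")     # filtered by the ERROR threshold
log.error("real problem")              # still delivered
```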
| closed | 2022-03-11T14:49:53Z | 2022-03-15T03:00:33Z | https://github.com/mars-project/mars/issues/2813 | [
"reso: invalid"
] | tianlinzx | 1 |
ivy-llc/ivy | tensorflow | 28,604 | Fix Frontend Failing Test: paddle - tensor.torch.Tensor.fix | To-do List: https://github.com/unifyai/ivy/issues/27500 | closed | 2024-03-14T21:25:39Z | 2024-03-25T15:21:22Z | https://github.com/ivy-llc/ivy/issues/28604 | [
"Sub Task"
] | ZJay07 | 0 |
recommenders-team/recommenders | deep-learning | 1,725 | [BUG] Container usage from external registry 'docker.io' found cause a warning in the tests | ### Description
<!--- Describe your issue/bug/request in detail -->
We started to get a warning that can be seen [here](https://dev.azure.com/best-practices/recommenders/_build/results?buildId=60846&view=logs&j=475df697-7465-54db-fcd2-cb9bdea8ab03&t=b7db66b6-fa35-5c82-573e-eabcb50ded02)
```
tools/docker/Dockerfile - Container usage from external registry 'docker.io' found.
##[warning]Container security analysis found 1 violations. This repo has one or more docker files having references to images from external registries. Please review https://aka.ms/containers-security-guidance to remove the reference of container images from external registries. Please reach out via teams (https://aka.ms/cssc-teams) or email (cssc@microsoft.com) for any questions or clarifications.
```
This is weird because I don't see any reference to docker.io in the code
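A likely explanation (an assumption about the scanner's logic): Docker expands unqualified base-image names against the default registry, so `FROM ubuntu:22.04` implicitly means `docker.io/library/ubuntu:22.04`, and a Dockerfile can reference docker.io without ever spelling it out. A sketch of that expansion:

```python
# Docker resolves unqualified image names against the default registry
# (docker.io), and official images live under the "library/" namespace.
dockerfile = "FROM ubuntu:22.04\nRUN pip install recommenders\n"  # made-up content
froms = [line.split()[1] for line in dockerfile.splitlines()
         if line.startswith("FROM")]
qualified = ["docker.io/library/" + img if "/" not in img.split(":")[0] else img
             for img in froms]
```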
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
### Other Comments
| closed | 2022-05-27T08:33:00Z | 2022-05-30T12:52:09Z | https://github.com/recommenders-team/recommenders/issues/1725 | [
"bug"
] | miguelgfierro | 4 |
adbar/trafilatura | web-scraping | 53 | Bypass catchas/cookies/consent windows? | Are there Python libraries which allow to bypass diverse consent mechanisms put in place by news outlets for the readers to allow cookies? It would be too cumbersome to develop a headless browser exclusively for trafilatura.
A good example would be the newspaper zeit.de. Related to #18.
Potential solutions:
- headless browser with automatic clicking mechanism
- use [AMP](https://www.howtogeek.com/284166/what-is-google-amp-and-why-is-it-in-my-search-results/)-links
The output could be piped directly to trafilatura (in a terminal or via Python). | closed | 2021-01-12T18:00:15Z | 2023-01-04T16:41:00Z | https://github.com/adbar/trafilatura/issues/53 | [
"feedback"
] | adbar | 3 |
huggingface/transformers | deep-learning | 36,467 | Enhance the memory efficiency of loading large models (400B) to prevent out-of-memory errors when using tensor parallelism. | ### Feature request
Support sharded checkpoint files that match the process rank for PyTorch distributed model creation and tensor parallelism inference.
### Motivation
When I attempted to test the Llama 405B model with FP8 precision using tensor parallelism (TP = 4), the server, which has 1.5TB of RAM, experienced process termination due to all four processes consuming the entire memory. However, if each process could import the model using a shared weight file and create a model with PyTorch distributed tensor, it would only require 405GB of RAM.
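The scenario above can be sketched in miniature: given a sharded checkpoint index, each rank would select only its own files instead of materializing the whole state dict (file names and the rank-to-shard rule here are made up for illustration):

```python
# Sketch of the idea: each rank opens only the shard files assigned to it,
# instead of every rank loading the full 400B-parameter state dict.
weight_map = {
    "layers.0.weight": "model-shard-0.safetensors",
    "layers.1.weight": "model-shard-1.safetensors",
}
rank = 1
my_shards = sorted({f for f in weight_map.values()
                    if f.endswith(f"-{rank}.safetensors")})
```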
### Your contribution
I can help to create test cases for this feature. | open | 2025-02-27T22:56:56Z | 2025-03-08T15:34:46Z | https://github.com/huggingface/transformers/issues/36467 | [
"Feature request"
] | amd-xiaoyu12 | 5 |
thunlp/OpenPrompt | nlp | 70 | detail about prefix tuning | In prefix_tuning_template.py file
https://github.com/thunlp/OpenPrompt/blob/675545ce1f946aa186efda8e8640dbc29fd1159f/openprompt/prompts/prefix_tuning_template.py#L207
The code above pads the attention_mask for the extra prompt tokens.
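For context, the Hugging Face convention is that an attention mask uses 1 for positions the model should attend to and 0 for masked positions, so zero-padding the prefix positions would hide the prompt tokens from attention. A toy illustration with plain lists (hypothetical sizes, not the library's code):

```python
prefix_len, seq_len = 2, 3
base_mask = [1] * seq_len        # 1 = attend, 0 = ignore (Hugging Face convention)
prefix_mask = [1] * prefix_len   # prompt tokens presumably need attention too
mask = prefix_mask + base_mask
```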
Why is 'torch.zeros' used here? Should we use 'torch.ones' instead? | closed | 2021-12-04T15:27:34Z | 2021-12-25T07:18:58Z | https://github.com/thunlp/OpenPrompt/issues/70 | [] | yuto3o | 3 |
gradio-app/gradio | python | 10,691 | Demo for highlighting text in a pdf file for LLM RAG purposes | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
I wanted a way to simply show a PDF with highlighted text at the right page (for LLM RAG purposes).
**Describe the solution you'd like**
A demo to build such an app.
| closed | 2025-02-27T15:10:48Z | 2025-03-06T11:20:24Z | https://github.com/gradio-app/gradio/issues/10691 | [
"docs/website"
] | maxenceleguery | 0 |
PaddlePaddle/models | nlp | 5,115 | Optimizer file [/home/weidawang/.paddle/weights/BMN.pdopt] not exits | I got an error when getting started with BMN:
```
(base) weidawang@weidawang-TUF-Gaming-FX506LU-FX506LU:~/Repo/PaddlePaddle/models/PaddleCV/video$ bash run.sh predict BMN ./configs/bmn.yaml
predict BMN ./configs/bmn.yaml
DALI is not installed, you can improve performance if use DALI
[INFO: predict.py: 199]: Namespace(batch_size=1, config='./configs/bmn.yaml', filelist=None, infer_topk=20, log_interval=1, model_name='BMN', save_dir='data/predict_results', use_gpu=True, video_path='', weights=None)
[INFO: config_utils.py: 70]: ---------------- Infer Arguments ----------------
[INFO: config_utils.py: 72]: MODEL:
[INFO: config_utils.py: 74]: name:BMN
[INFO: config_utils.py: 74]: tscale:100
[INFO: config_utils.py: 74]: dscale:100
[INFO: config_utils.py: 74]: feat_dim:400
[INFO: config_utils.py: 74]: prop_boundary_ratio:0.5
[INFO: config_utils.py: 74]: num_sample:32
[INFO: config_utils.py: 74]: num_sample_perbin:3
[INFO: config_utils.py: 74]: anno_file:data/dataset/bmn/activitynet_1.3_annotations.json
[INFO: config_utils.py: 74]: feat_path:/media/weidawang/DATA/dataset/ActionLocalization/bmn_feat
[INFO: config_utils.py: 72]: TRAIN:
[INFO: config_utils.py: 74]: subset:train
[INFO: config_utils.py: 74]: epoch:9
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 74]: learning_rate:0.001
[INFO: config_utils.py: 74]: learning_rate_decay:0.1
[INFO: config_utils.py: 74]: lr_decay_iter:4200
[INFO: config_utils.py: 74]: l2_weight_decay:0.0001
[INFO: config_utils.py: 72]: VALID:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:16
[INFO: config_utils.py: 74]: num_threads:8
[INFO: config_utils.py: 74]: use_gpu:True
[INFO: config_utils.py: 74]: num_gpus:4
[INFO: config_utils.py: 72]: TEST:
[INFO: config_utils.py: 74]: subset:validation
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.001
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: output_path:data/output/EVAL/BMN_results
[INFO: config_utils.py: 74]: result_path:data/evaluate_results
[INFO: config_utils.py: 72]: INFER:
[INFO: config_utils.py: 74]: subset:test
[INFO: config_utils.py: 74]: batch_size:1
[INFO: config_utils.py: 74]: num_threads:1
[INFO: config_utils.py: 74]: snms_alpha:0.4
[INFO: config_utils.py: 74]: snms_t1:0.5
[INFO: config_utils.py: 74]: snms_t2:0.9
[INFO: config_utils.py: 74]: filelist:data/dataset/bmn/infer.list
[INFO: config_utils.py: 74]: output_path:data/output/INFER/BMN_results
[INFO: config_utils.py: 74]: result_path:data/predict_results
[INFO: config_utils.py: 75]: -------------------------------------------------
W1218 16:29:50.778240 31472 device_context.cc:338] Please NOTE: device: 0, CUDA Capability: 75, Driver API Version: 11.1, Runtime API Version: 10.2
W1218 16:29:50.779249 31472 device_context.cc:346] device: 0, cuDNN Version: 8.0.
test subset video numbers: 5
Traceback (most recent call last):
File "predict.py", line 201, in <module>
infer(args)
File "predict.py", line 132, in infer
fluid.default_main_program(), place)
File "/home/weidawang/Repo/PaddlePaddle/models/PaddleCV/video/models/model.py", line 158, in load_test_weights
fluid.load(prog, weights, executor=exe, var_list=params_list)
File "<decorator-gen-76>", line 2, in load
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/framework.py", line 215, in __impl__
return func(*args, **kwargs)
File "/home/weidawang/miniconda3/lib/python3.7/site-packages/paddle/fluid/io.py", line 1882, in load
"Optimizer file [{}] not exits".format(opt_file_name)
AssertionError: Optimizer file [/home/weidawang/.paddle/weights/BMN.pdopt] not exits
```
| closed | 2020-12-18T08:37:09Z | 2020-12-22T05:34:56Z | https://github.com/PaddlePaddle/models/issues/5115 | [] | wwdok | 11 |
gunthercox/ChatterBot | machine-learning | 2,037 | Is there a better way to handle big data? | I have recently downloaded a big data file and was processing it. I successfully extracted required data and stored it in a .db file. I then proceeded to convert data from the .db file to a Yaml file. But I keep on getting errors when trying to train the bot.
```
Training train0.yml: [## ] 9%
Failed to process ./trainfiles/train0.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train1.yml: [# ] 3%
Failed to process ./trainfiles/train1.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Failed to process ./trainfiles/train2.yml
unacceptable character #x007f: special characters are not allowed
in "./trainfiles/train2.yml", position 676364
<class 'yaml.reader.ReaderError'> Train.py 16
Failed to process ./trainfiles/train3.yml
unacceptable character #x007f: special characters are not allowed
in "./trainfiles/train3.yml", position 260191
<class 'yaml.reader.ReaderError'> Train.py 16
Training train4.yml: [## ] 8%
Failed to process ./trainfiles/train4.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train5.yml: [##### ] 24%
Failed to process ./trainfiles/train5.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train6.yml: [ ] 2%
Failed to process ./trainfiles/train6.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train7.yml: [# ] 6%
Failed to process ./trainfiles/train7.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train8.yml: [# ] 7%
Failed to process ./trainfiles/train8.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Training train9.yml: [# ] 5%
Failed to process ./trainfiles/train9.yml
'bool' object has no attribute 'strip'
<class 'AttributeError'> Train.py 16
Failed to process ./trainfiles/train10.yml
[Errno 2] No such file or directory: './trainfiles/train10.yml'
<class 'FileNotFoundError'> Train.py 16
```
The first set of errors occurred because some data started with a symbol; I avoided that by removing each such line. Now I am not really sure what to do with the errors
>'bool' object has no attribute 'strip'
and
>unacceptable character #x007f: special characters are not allowed
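Both messages point at values PyYAML cannot handle: booleans where strings are expected, and the DEL control character `#x7f`. A minimal pre-cleaning pass over each value before writing the YAML might look like this (a sketch, under the assumption that the conversation data should be plain strings):

```python
import re

# Control characters YAML 1.1 rejects, including DEL (\x7f).
CONTROL = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f\x7f]")

def sanitize(value):
    if not isinstance(value, str):
        value = str(value)   # e.g. the bool True becomes "True"
    return CONTROL.sub("", value)
```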
Any help on how to solve this or any advice to handle big data would be very much appreciated. | closed | 2020-08-29T13:04:15Z | 2021-08-07T17:23:03Z | https://github.com/gunthercox/ChatterBot/issues/2037 | [
"answered"
] | AtomynosAtom | 4 |
mage-ai/mage-ai | data-science | 5,062 | Add support for running a job in BQ | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
Mage's internal BigQuery class only allows load() and export() but not to run a job in BQ without moving data from or to BQ.
**Describe the solution you'd like**
Add job creation to BigQuery class
**Describe alternatives you've considered**
Right now, as a workaround, it is possible to start an external BQ job by using a regular bigquery client (not Mage's internal one) but that does not allow to use Mage's with_config().
**Additional context**
[Response from feature request in Slack channel](https://mageai.slack.com/archives/C03KW780J2F/p1715600352377749)
| open | 2024-05-13T17:46:25Z | 2024-10-09T18:57:54Z | https://github.com/mage-ai/mage-ai/issues/5062 | [
"feature",
"io"
] | ucom | 0 |
fastapi-users/fastapi-users | fastapi | 391 | Put user creation logic in a importable function | Following discussion in #375, I think it would be nice to have a user creation function with all the related logic instead of having it nested in register route. | closed | 2020-11-17T07:20:58Z | 2021-11-05T13:53:06Z | https://github.com/fastapi-users/fastapi-users/issues/391 | [
"enhancement"
] | frankie567 | 3 |
svc-develop-team/so-vits-svc | deep-learning | 344 | [Help]: Why are the results generated by the diffusion model far better than those of the sovits model? | ### Please tick the confirmation boxes below.
- [X] I have carefully read the [README.md](https://github.com/svc-develop-team/so-vits-svc/blob/4.0/README_zh_CN.md) and the [Quick solution in the wiki](https://github.com/svc-develop-team/so-vits-svc/wiki/Quick-solution).
- [X] I have troubleshot the problem with various search engines; the question I am asking is not a common one.
- [X] I am not using a one-click package/environment package provided by a third-party user.
### System platform and version
win 10
### GPU model
3060 12g
### Python version
3.10.6
### PyTorch version
2.0.1+cu118
### sovits branch
4.0 (default)
### Dataset source (used to judge dataset quality)
Recorded by myself; half of it is singing
### Step where the problem occurs, or command executed
Training a so-vits-svc 4.0 model
### Problem description
I am a beginner using the so-vits-svc 4.0 inference webUI. Why are the results generated by the diffusion model far better than those of the sovits model?
I usually use 30 to 45 minutes of data, with the default config and without modifying the batch size. After training the sovits model for about 200k steps, the inferred singing is always hoarse, or electrical noise suddenly appears.
However, the diffusion model was trained for only 30k steps and the inferred singing is already quite good; it is still not perfect, but much better than the sovits model.
After mixing the sovits model with the diffusion model, the result feels slightly worse than using the diffusion model alone, but better than using only the sovits model.
Why does this happen? Have I simply not trained enough?
### Logs
```python
N/A
```
### Screenshot the `so-vits-svc` and `logs/44k` folders and paste them here

### Additional notes
_No response_ | closed | 2023-07-25T16:01:24Z | 2023-08-01T09:10:40Z | https://github.com/svc-develop-team/so-vits-svc/issues/344 | [
"help wanted"
] | happyman2025 | 3 |
databricks/koalas | pandas | 1,724 | Koalas read_json cannot read json when Pandas can | ```
import requests
full_url_with_date = 'https://demo.matomo.org/index.php?module=API&method=Live.getLastVisitsDetails&format=json&period=day&filter_limit=99&date=2020-08-02&idSite=62'
matomo_pull = requests.get(full_url_with_date).content
import pandas as pd
pdf = pd.read_json(matomo_pull)
from databricks import koalas as ks
ks.read_json(matomo_pull)
```
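One likely cause (an assumption): Koalas delegates to Spark's JSON reader, which expects a *path*, while pandas happily parses raw bytes. A stdlib-only sketch of the temp-file workaround that would hand `ks.read_json` a real path:

```python
import json
import os
import tempfile

payload = b'[{"visitId": 1}, {"visitId": 2}]'  # stand-in for matomo_pull

# Spark-backed readers expect a file path, not raw bytes, so write one first.
with tempfile.NamedTemporaryFile("wb", suffix=".json", delete=False) as f:
    f.write(payload)
    path = f.name

# ks.read_json(path) would now receive a real file path; plain json shows
# the file round-trips correctly:
with open(path) as f:
    records = json.load(f)
os.remove(path)
```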
| closed | 2020-08-24T18:59:59Z | 2021-08-02T07:47:05Z | https://github.com/databricks/koalas/issues/1724 | [
"question"
] | ericbugin | 4 |
dgtlmoon/changedetection.io | web-scraping | 2,552 | [bug] One site doesnt work with price tracker | https://www.vitacost.com/method-stain-remover
`Exception: float() argument must be a string or a real number, not 'list'`
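The exception says `float()` received a list, which suggests the price extractor returned multiple candidate matches for this page (an assumption). A defensive conversion sketch (hypothetical helper, not changedetection.io's code):

```python
def to_price(value):
    # the extractor can apparently return a list of candidate matches
    # (an assumption based on the exception); take the first one
    if isinstance(value, list):
        value = value[0]
    return float(str(value).replace("$", "").replace(",", "").strip())
```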
| closed | 2024-08-07T07:45:12Z | 2024-10-11T09:43:37Z | https://github.com/dgtlmoon/changedetection.io/issues/2552 | [
"restock-price-monitoring"
] | dgtlmoon | 1 |
xonsh/xonsh | data-science | 5,648 | Edge case: callable alias with full capturing and redirect in the middle: `I/O operation on closed file` | Two ways to reproduce:
```xsh
$XONSH_SHOW_TRACEBACK = True
@aliases.register
def _e():
echo -n O
echo -n E 1>2
execx("echo -n O")
execx("echo -n E 1>2")
print("o")
print("O", file=o)
print("E", file=e)
for i in range(5):
p = !(e > /tmp/ttt)
$[e > /tmp/ttt2]
p.end()
print(p)
```
```xsh
$XONSH_SHOW_TRACEBACK = True
@aliases.register
def _e():
echo -n O
echo -n E 1>2
execx("echo -n O")
execx("echo -n E 1>2")
print("o")
print("O", file=o)
print("E", file=e)
for i in range(5):
print(!(e > /tmp/ttt), $[e > /tmp/ttt2])
```
Exception:
```xsh
Exonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "<stdin>", line 7, in <module>
File "/Users/pc/git/xonsh/xonsh/procs/pipelines.py", line 217, in __str__
self.end()
File "/Users/pc/git/xonsh/xonsh/procs/pipelines.py", line 478, in end
self._end(tee_output=tee_output)
File "/Users/pc/git/xonsh/xonsh/procs/pipelines.py", line 486, in _end
for _ in self.tee_stdout():
File "/Users/pc/git/xonsh/xonsh/procs/pipelines.py", line 388, in tee_stdout
for line in self.iterraw():
File "/Users/pc/git/xonsh/xonsh/procs/pipelines.py", line 260, in iterraw
stdout = NonBlockingFDReader(stdout.fileno(), timeout=timeout)
^^^^^^^^^^^^^^^
ValueError: I/O operation on closed file
```
(Found in https://github.com/xonsh/xonsh/issues/5631#issuecomment-2266321712)
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2024-08-07T10:03:49Z | 2024-08-09T11:52:28Z | https://github.com/xonsh/xonsh/issues/5648 | [
"threading",
"edge-case",
"aliases-callable"
] | anki-code | 0 |
tensorpack/tensorpack | tensorflow | 574 | Faster R-CNN example bug report | On https://github.com/ppwwyyxx/tensorpack/blob/master/examples/FasterRCNN/train.py#L105
`decoded_boxes = decode_bbox_target(rpn_box_logits, fm_anchors) # fHxfWxNAx4, floatbox`
the `decoded_boxes` are defined on the feature map, which is 1/16.0 the scale of the original image, because `fm_anchors` is defined on the feature map rather than at the original image size.
But in https://github.com/ppwwyyxx/tensorpack/blob/master/examples/FasterRCNN/train.py#L115
The `rcnn_sampled_boxes` also come from `decoded_boxes` and are not resized, so they should already be defined on the 1/16.0-scale feature map; you should not multiply by 1/16.0
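The disagreement is about which coordinate frame `rcnn_sampled_boxes` lives in: converting between feature-map and image coordinates is just a multiplication by the stride, and applying that factor to boxes already in the target frame mis-scales them. A toy illustration (made-up numbers):

```python
stride = 16.0  # feature stride assumed in the example
box_fm = [2.0, 3.0, 10.0, 12.0]          # a box in feature-map coordinates
box_img = [c * stride for c in box_fm]   # the same box in input-image coordinates
```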
| closed | 2018-01-03T09:48:00Z | 2018-05-30T20:59:30Z | https://github.com/tensorpack/tensorpack/issues/574 | [
"examples"
] | machanic | 13 |
arogozhnikov/einops | numpy | 207 | [BUG] torch2trt assert(permutation[0] == 0) # cannot move batch dim | **Describe the bug**
https://github.com/cszn/SCUNet/blob/main/models/network_scunet.py
```
File "C:\Program Files\Python310\lib\site-packages\einops\einops.py", line 487, in rearrange
return reduce(tensor, pattern, reduction='rearrange', **axes_lengths)
File "C:\Program Files\Python310\lib\site-packages\einops\einops.py", line 410, in reduce
return _apply_recipe(recipe, tensor, reduction_type=reduction)
File "C:\Program Files\Python310\lib\site-packages\einops\einops.py", line 236, in _apply_recipe
tensor = backend.transpose(tensor, axes_reordering)
File "C:\Program Files\Python310\lib\site-packages\einops\_backends.py", line 331, in transpose
return x.permute(axes)
File "C:\Program Files\Python310\lib\site-packages\torch2trt-0.4.0-py3.10.egg\torch2trt\torch2trt.py", line 307, in wrapper
converter["converter"](ctx)
File "C:\Program Files\Python310\lib\site-packages\torch2trt-0.4.0-py3.10.egg\torch2trt\converters\permute.py", line 17, in convert_permute
assert(permutation[0] == 0) # cannot move batch dim
```
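The failing assertion means torch2trt's permute converter refuses any permutation whose first axis is not the batch axis, so any `rearrange` pattern that moves the batch dimension will trip it. The guard amounts to this check (a plain-Python restatement of the assert in the traceback):

```python
def moves_batch_dim(perm):
    # mirrors torch2trt's convert_permute guard: axis 0 must stay first
    return perm[0] != 0

batch_safe = (0, 2, 3, 1)      # NCHW -> NHWC within the sample dims: convertible
batch_moving = (1, 0, 2, 3)    # swaps batch with channels: rejected
```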
**Reproduction steps**
```
if __name__ == '__main__':
from torch2trt import torch2trt
device = 'cuda'
net = SCUNet().eval().to(device)
x = torch.randn((1, 3, 128, 128)).to(device)
net = torch2trt(net, [x])
x = net(x)
print(x.shape)
```
**Expected behavior**
Maybe:
https://github.com/NVIDIA-AI-IOT/torch2trt/issues/742#issuecomment-1170672478
**Your platform**
```
Win10 x64
Package Version
----------------------- --------------------
absl-py 1.2.0
addict 2.4.0
basicsr 1.4.2
cachetools 5.2.0
certifi 2022.9.14
charset-normalizer 2.1.1
colorama 0.4.5
contourpy 1.0.5
cycler 0.11.0
einops 0.4.1
fairscale 0.4.9
focal-frequency-loss 0.3.0
fonttools 4.37.3
future 0.18.2
google-auth 2.11.1
google-auth-oauthlib 0.4.6
graphsurgeon 0.4.6
grpcio 1.49.1
idna 3.4
imageio 2.22.0
kiwisolver 1.4.4
lmdb 1.3.0
lpips 0.1.4
Markdown 3.4.1
MarkupSafe 2.1.1
matplotlib 3.6.0
networkx 2.8.6
numpy 1.23.3
oauthlib 3.2.1
onnx 1.12.0
onnx-graphsurgeon 0.3.12
opencv-python 4.6.0.66
packaging 21.3
Pillow 9.2.0
pip 22.2.2
protobuf 3.20.1
pyasn1 0.4.8
pyasn1-modules 0.2.8
pyparsing 3.0.9
python-dateutil 2.8.2
PyWavelets 1.4.1
PyYAML 6.0
requests 2.28.1
requests-oauthlib 1.3.1
rsa 4.9
scikit-image 0.19.3
scipy 1.9.1
setuptools 63.2.0
six 1.16.0
tb-nightly 2.11.0a20220922
tensorboard-data-server 0.6.1
tensorboard-plugin-wit 1.8.1
tensorrt 8.4.3.1
thop 0.1.1.post2209072238
tifffile 2022.8.12
timm 0.6.7
torch 1.12.1+cu116
torch2trt 0.4.0
torchaudio 0.12.1+cu116
torchinfo 1.7.0
torchvision 0.13.1+cu116
tqdm 4.64.1
typing_extensions 4.3.0
uff 0.6.9
urllib3 1.26.12
Werkzeug 2.2.2
wheel 0.37.1
yapf 0.32.0
```
| closed | 2022-09-24T06:38:02Z | 2022-09-27T04:56:27Z | https://github.com/arogozhnikov/einops/issues/207 | [
"bug"
] | Ken1256 | 1 |
sunscrapers/djoser | rest-api | 781 | Implement urls domain, protocol and site name for frontend | If you develop the backend and frontend as separate projects, you cannot add `ACTIVATION_URL` to the DJOSER settings like this
```python
DJOSER = {
"SEND_ACTIVATION_EMAIL": True,
"SEND_CONFIRMATION_EMAIL": True,
"USER_CREATE_PASSWORD_RETYPE": True,
# specially this
"ACTIVATION_URL": os.environ.get(
"DJANGO_ACTIVATION_URL", "auth/users/activate/{uid}/{token}"
),
}
```
because the backend implicitly checks this path against its own host (for example `localhost:8000`), while the frontend project may live in another repo and on another host
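A toy illustration of the link that needs to point at the frontend host rather than the backend (hostname, uid and token values are made up):

```python
ACTIVATION_URL = "auth/users/activate/{uid}/{token}"
FRONTEND_BASE = "https://frontend.example.com/"  # made-up frontend host
link = FRONTEND_BASE + ACTIVATION_URL.format(uid="Mg", token="abc-123")
```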
From documentation:

This PR #729 by @n-borges allows this functionality
Please check this out ❤️
| closed | 2023-11-14T10:53:22Z | 2024-03-31T10:54:57Z | https://github.com/sunscrapers/djoser/issues/781 | [] | FraCata00 | 3 |
aws/aws-sdk-pandas | pandas | 3,107 | awswrangler.data_api.redshift.read_sql_query support query parameters | **Is your idea related to a problem? Please describe.**
can't specify query parameters via awswrangler.data_api.redshift.read_sql_query
AWS docs describe [underlying support](https://docs.aws.amazon.com/redshift/latest/mgmt/data-api.html#data-api-calling-considerations-parameters) for this, but it seems it is not exposed via awswrangler.
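For reference, the Data API linked above takes named parameters as a list of name/value objects, referenced in the SQL as `:name` placeholders; a sketch of the shape awswrangler would need to accept and pass through (table and parameter names are made up):

```python
sql = "SELECT * FROM sales WHERE region = :region AND qty > :min_qty"
parameters = [
    {"name": "region", "value": "emea"},
    {"name": "min_qty", "value": "10"},   # Data API values are passed as strings
]
```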
**Describe the solution you'd like**
in the same way we can provide query parameters via `awswrangler.redshift.read_sql_query(sql, con, params='...')` we should be able to do the same via the awswrangler.data_api.redshift.read_sql_query
awswrangler==3.11.0
| closed | 2025-03-07T03:35:32Z | 2025-03-12T15:17:05Z | https://github.com/aws/aws-sdk-pandas/issues/3107 | [
"enhancement"
] | robmcd | 1 |
lgienapp/aquarel | matplotlib | 12 | Add legend styling options | Introduce a `set_legend` option to style the legend.
Maybe also legend-specific transforms, for example location? | closed | 2022-08-13T08:19:08Z | 2022-08-31T11:23:06Z | https://github.com/lgienapp/aquarel/issues/12 | [
"enhancement"
] | lgienapp | 0 |
explosion/spaCy | nlp | 13,679 | Pipeline for opentapioca not working | ### Discussed in https://github.com/explosion/spaCy/discussions/13678
<div type='discussions-op-text'>
<sup>Originally posted by **piaschwarz** October 25, 2024</sup>
I am using opentapioca for entity linking.
The code worked before but now calling the opentapioca endpoint throws an error (happens with _and_ without the endpoint url in the config variable).
Is it a problem on my side or is the endpoint not working?
**Versions:**
Python 3.10.6
spacy 3.3.1
spacy-transformers 1.1.7
spacyopentapioca 0.1.7
**My code**:
```
import spacy
nlp_spacy_trf = spacy.load('de_dep_news_trf')
dummy_text = "Christian Drosten arbeitet an der Charité in Berlin."
nlp_spacy_trf.add_pipe('opentapioca', config={"url": "https://opentapioca.wordlift.io/api/annotate?lc=de"})
doc = nlp_spacy_trf(dummy_text)
for span in doc.ents:
print((span.text, span.kb_id_, span.label_, span._.description, span._.score))
```
keeps throwing this **error**:
```
Traceback (most recent call last):
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/connection.py", line 414, in connect
self.sock = ssl_wrap_socket(
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 449, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/util/ssl_.py", line 493, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "/usr/lib/python3.10/ssl.py", line 513, in wrap_socket
return self.sslsocket_class._create(
File "/usr/lib/python3.10/ssl.py", line 1071, in _create
self.do_handshake()
File "/usr/lib/python3.10/ssl.py", line 1342, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:997)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/adapters.py", line 489, in send
resp = conn.urlopen(
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='opentapioca.wordlift.io', port=443): Max retries exceeded with url: /api/annotate?lc=de (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:997)')))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/c/Users/XXXXX/PycharmProjects/EntityLinking/experiments.py", line 10, in <module>
doc = nlp_spacy_trf(dummy_text)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/spacy/language.py", line 1025, in __call__
error_handler(name, proc, [doc], e)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/spacy/util.py", line 1630, in raise_error
raise e
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/spacy/language.py", line 1020, in __call__
doc = proc(doc, **component_cfg.get(name, {})) # type: ignore[call-arg]
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/spacyopentapioca/entity_linker.py", line 101, in __call__
r = self.make_request(doc)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/spacyopentapioca/entity_linker.py", line 93, in make_request
return requests.post(url=self.url,
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "/home/xxxxxxxx/.local/lib/python3.10/site-packages/requests/adapters.py", line 563, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='opentapioca.wordlift.io', port=443): Max retries exceeded with url: /api/annotate?lc=de (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:997)')))
```</div> | open | 2024-10-28T07:40:29Z | 2024-10-28T07:40:29Z | https://github.com/explosion/spaCy/issues/13679 | [] | piaschwarz | 0 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 121 | HaloAttention mask issue | ```python
# mask out padding (in the paper, they claim to not need masks, but what about padding?)
mask = torch.ones(1, 1, h, w, device = device)
mask = F.unfold(mask, kernel_size = block + (halo * 2), stride = block, padding = halo)
mask = repeat(mask, '() j i -> (b i h) () j', b = b, h = heads)
mask = mask.bool()
max_neg_value = -torch.finfo(sim.dtype).max
sim.masked_fill_(mask, max_neg_value)
```
The code seems to set all non-zero values to negative infinity, whereas, judging from the comment, the intent appears to be to set the padding positions to negative infinity. | open | 2024-12-19T10:21:21Z | 2024-12-19T10:21:21Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/121 | [] | L-wwww | 0 |
huggingface/datasets | numpy | 7,135 | Bug: Type Mismatch in Dataset Mapping | # Issue: Type Mismatch in Dataset Mapping
## Description
There is an issue with the `map` function in the `datasets` library where the mapped output does not reflect the expected type change. After applying a mapping function to convert an integer label to a string, the resulting type remains an integer instead of a string.
## Reproduction Code
Below is a Python script that demonstrates the problem:
```python
from datasets import Dataset
# Original data
data = {
'text': ['Hello', 'world', 'this', 'is', 'a', 'test'],
'label': [0, 1, 0, 1, 1, 0]
}
# Creating a Dataset object
dataset = Dataset.from_dict(data)
# Mapping function to convert label to string
def add_one(example):
example['label'] = str(example['label'])
return example
# Applying the mapping function
dataset = dataset.map(add_one)
# Iterating over the dataset to show results
for item in dataset:
print(item)
print(type(item['label']))
```
## Expected Output
After applying the mapping function, the expected output should have the `label` field as strings:
```plaintext
{'text': 'Hello', 'label': '0'}
<class 'str'>
{'text': 'world', 'label': '1'}
<class 'str'>
{'text': 'this', 'label': '0'}
<class 'str'>
{'text': 'is', 'label': '1'}
<class 'str'>
{'text': 'a', 'label': '1'}
<class 'str'>
{'text': 'test', 'label': '0'}
<class 'str'>
```
## Actual Output
The actual output still shows the `label` field values as integers:
```plaintext
{'text': 'Hello', 'label': 0}
<class 'int'>
{'text': 'world', 'label': 1}
<class 'int'>
{'text': 'this', 'label': 0}
<class 'int'>
{'text': 'is', 'label': 1}
<class 'int'>
{'text': 'a', 'label': 1}
<class 'int'>
{'text': 'test', 'label': 0}
<class 'int'>
```
## Why necessary
In the case of image processing, we often need to convert a PIL image to a tensor while keeping the same column name.
Thanks to every dev who reviews this issue. 🤗 | open | 2024-09-03T16:37:01Z | 2024-09-05T14:09:05Z | https://github.com/huggingface/datasets/issues/7135 | [] | marko1616 | 3 |
scikit-optimize/scikit-optimize | scikit-learn | 385 | Examples involving skopt.callbacks | Right now, we have no examples illustrating callbacks. It would be great to have a number of examples illustrating at least
1. Usage of different types of callbacks.
2. How to write a custom callback. | open | 2017-05-31T03:55:20Z | 2017-05-31T03:55:34Z | https://github.com/scikit-optimize/scikit-optimize/issues/385 | [
"Documentation",
"Moderate"
] | MechCoder | 0 |
OthersideAI/self-operating-computer | automation | 170 | [BUG] -m gemini-pro-vision asking for OPENAI_API_KEY | Found a bug? Please fill out the sections below. 👍
### Describe the bug
Ran `operate -m gemini-pro-vision`, entered my gemini API key from google AIstudio, but when I request something I always get
```
[Self-Operating Computer | gemini-pro-vision]
Hello, I can help you with anything. What would you like done?
[User]
turn on night mode
[Self-Operating Computer][Operate] That did not work. Trying another method
[Self-Operating Computer][Error] -> The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
### Steps to Reproduce
1. Install with pip
2. run `operate -m gemini-pro-vision` and enter API key
3. insert any prompt
4. also tried by setting up the GOOGLE_API_KEY env variable in a .env file as well as export GOOGLE_API_KEY=abc123
### Expected Behavior
The machine, self-operates
### Actual Behavior:
Error in the console
### Environment
- OS: macOS Sonoma 14.3
- Model Used (e.g., GPT-4v, Gemini Pro Vision): Gemini pro vision
- Framework Version (optional):
### Screenshots
<img width="857" alt="image" src="https://github.com/OthersideAI/self-operating-computer/assets/34170261/674aec9d-9f66-4d8b-b3fc-ce77fdb7c4a4">
### Additional context
Add any other context about the problem here. | open | 2024-02-27T02:29:01Z | 2025-02-28T14:09:21Z | https://github.com/OthersideAI/self-operating-computer/issues/170 | [
"bug"
] | FelipeLujan | 3 |
sunscrapers/djoser | rest-api | 626 | Incorrect validation from the e-mail when the user is not logged in. | ```
def validate(self, attrs):
user = self.context["request"].user or self.user
# why assert? There are ValidationError / fail everywhere
assert user is not None
```
When the user is not logged in, `AnonymousUser` is always passed to `validate_password(attrs["new_password"], user)`. If a validator checks the password against the user's previously used passwords, that validation will not be performed.
Suggested change:
```
def validate(self, attrs):
request_user = self.context["request"].user
user = self.user if request_user.is_anonymous is True else request_user
assert user is not None
``` | open | 2021-08-12T11:48:31Z | 2021-08-12T11:48:58Z | https://github.com/sunscrapers/djoser/issues/626 | [] | sylwesterwalczak | 0 |
Guovin/iptv-api | api | 535 | Which cloud drive do you put the generated txt file on? | I used your software to generate the txt file. Which platform do you host it on to provide live-stream sources for the TV at home? I used to host mine on 彩虹网盘, but watching live streams was quite laggy. | closed | 2024-11-08T09:59:13Z | 2024-11-08T12:30:09Z | https://github.com/Guovin/iptv-api/issues/535 | [
"question"
] | Hi-360 | 3 |
gradio-app/gradio | python | 10,061 | Lots of components on gradio.app component gallery are broken | ### Describe the bug
Visit https://www.gradio.app/custom-components/gallery
Click on logsview, popup, chatpdf, agentchatbot, etc...
Observe this:

It says "Your space is in error, check its status on hf.co"
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
See above.
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
N/A
```
### Severity
I can work around it | open | 2024-11-28T08:15:29Z | 2024-11-28T17:44:56Z | https://github.com/gradio-app/gradio/issues/10061 | [
"bug",
"docs/website"
] | umarbutler | 0 |
matplotlib/mplfinance | matplotlib | 513 | `vlines` (vertical lines, sharing x axis) on multiple panels | Hello Daniel, 👋
Thank you for your dedication in improving the MPF library ! 📈
I am looking for different options for the X_AXIS/GRID adjustments:
- [1] : Drawing vertical lines on all panels (when passing `vlines` through `**kwargs` to `mpf.plot()`, they are not drawn on all panels)
- [2] : Adjusting the `grid` for the `x_axis` with MPF. I have seen other examples where we have to use `axes`. Is there a more efficient way to do that with MPF?
If I understood correctly, what others did was return the `axes` and then call the `add_subplot` method from MPL, which would force me to start again from scratch using only MPL.
I guess I missed something, here is a part of my code:
```
df = 'pandas dataframe with datetime index and ohlcv and RSI values'
vls = pd.date_range(df.index.min(), df.index.max(), freq='D').tolist()
ap1 = [
mpf.make_addplot(df['RSI'], panel=1, color='royalblue')
]
kwargs = dict(type='candle', vlines=dict(vlines=vls, linewidths=0.5, colors=('r')))
mpf.plot(df, addplot=ap1, **kwargs)
```
(If you prefer 2 different posts i can separate each request..)
| open | 2022-03-23T12:17:56Z | 2024-01-13T10:07:59Z | https://github.com/matplotlib/mplfinance/issues/513 | [
"enhancement",
"question",
"hacktoberfest"
] | Tirbo06 | 17 |
sinaptik-ai/pandas-ai | pandas | 1,223 | Unable to save chart image, or setting not to save chart will throw error "No such file or directory" | ### System Info
OS version: win 10 22h2
Python version: 3.12.3
The current version of pandasai being used: 2.1.1
### 🐛 Describe the bug
Originally, this code was working properly.
But since I specified runtime parameters such as "open_charts" and "save_charts"<sup>*</sup>, I found that the parameters seemed to be ineffective.
It kept throwing errors afterwards; even after removing all the parameters and deleting the "charts" and "exports" folders (the folders get recreated), it still didn't work properly.
[car.csv](https://github.com/user-attachments/files/15781120/car.csv)
```python
import pandas as pd
from pandasai import SmartDataframe
from os.path import dirname, join
from os import environ
from datetime import datetime
api_key = environ.get('PANDASAI_API_KEY')
if api_key is None:
api_key = '' # todo
environ.setdefault('PANDASAI_API_KEY', api_key)
base_path = dirname(__file__)
files_path = join(base_path, "files")
csv_path = join(base_path, "car.csv")
data = pd.read_csv(csv_path)
ai_data = SmartDataframe(data)
print('ok')
respone = ai_data.chat('生成马力与油耗关系图?') # Generate a graph of the relationship between horsepower and fuel consumption?
print(respone)
```
```
Traceback (most recent call last):
File "D:\anaconda3\envs\zyzs-ai\Lib\site-packages\pandasai\pipelines\chat\generate_chat_pipeline.py", line 310, in run
).run(input)
^^^^^^^^^^
File "D:\anaconda3\envs\zyzs-ai\Lib\site-packages\pandasai\pipelines\pipeline.py", line 137, in run
raise e
File "D:\anaconda3\envs\zyzs-ai\Lib\site-packages\pandasai\pipelines\pipeline.py", line 101, in run
step_output = logic.execute(
^^^^^^^^^^^^^^
File "D:\anaconda3\envs\zyzs-ai\Lib\site-packages\pandasai\pipelines\chat\code_execution.py", line 133, in execute
{"content_type": "response", "value": ResponseSerializer.serialize(result)},
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\anaconda3\envs\zyzs-ai\Lib\site-packages\pandasai\responses\response_serializer.py", line 35, in serialize
with open(result["value"], "rb") as image_file:
^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'D:/Projects/zyzs-ai/exports/charts/temp_chart.png'
Unfortunately, I was not able to answer your question, because of the following error:
[Errno 2] No such file or directory: 'D:/Projects/zyzs-ai/exports/charts/temp_chart.png'
```
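As a possible direction for a fix, a defensive file read in the serializer (a hypothetical sketch using only the stdlib, not pandasai's actual `ResponseSerializer`) would surface a clearer error when the chart was never written:

```python
from pathlib import Path

def read_chart_bytes(path: str) -> bytes:
    """Read a rendered chart image, raising a descriptive error if the
    chart-generation step never wrote the file (hypothetical sketch)."""
    p = Path(path)
    if not p.is_file():
        raise FileNotFoundError(
            f"Chart was not generated at {str(p)!r}; check the "
            f"'save_charts' and 'save_charts_path' config values."
        )
    return p.read_bytes()
```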
<sup>*</sup>Code marked with an asterisk:
```python
ai_df = SmartDataframe(df, config={
'open_charts': True, # or False
'save_charts': True, # or False
'save_charts_path': join(project_path, 'exports', 'charts', datetime.now().strftime('%Y-%m-%d')),
'enable_cache': False
})
``` | closed | 2024-06-11T02:06:18Z | 2024-07-02T02:39:36Z | https://github.com/sinaptik-ai/pandas-ai/issues/1223 | [
"bug"
] | wuhuanyan | 3 |
cleanlab/cleanlab | data-science | 718 | Datalab tutorials: reduce size of plots to look nicer on docs.cleanlab.ai | helper functions to make plots should reduce their size so they look nicer than this

| open | 2023-05-12T23:07:41Z | 2024-12-25T19:50:57Z | https://github.com/cleanlab/cleanlab/issues/718 | [
"good first issue",
"help-wanted"
] | jwmueller | 2 |
wagtail/wagtail | django | 12,227 | Uncaptured error raised when invalid HTML is stored in RichTestField and processed by `handle_endtag` in edit interface (throws 500 error) | ### Issue Summary
When invalid HTML has been imported programmatically into a `RichTextField`, an unhandled exception is thrown on viewing the custom Wagtail Page model in the CMS edit interface when using the default widget for this field type.
ref: `wagtail/admin/rich_text/converters/html_to_contentstate.py`
### Steps to Reproduce
1. Start a new project with `wagtail start myproject`
2. Edit `models.py` to include new field as follows...
3. `example_field = RichTextField(blank=True)`
4. `FieldPanel('reviews'),`
5. import some invalid HTML into this field, sample invalid HTML below: `"<p><em>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</em>, 15 August 1897</p>\n<p><em>\n<p><em>Lorem ipsum, </em>12 August 1926</p>\n<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>\n</em></p>\n<p>429, September 1906, pp. 800-44 <em> </em></p>\n<p> </p>\n<p> </p>"`
6. View page in the CMS edit interface
7. 500 error will be thrown, sample stacktrace below.
```
Environment:
Request Method: GET
Request URL: http://localhost:9000/cms/pages/266/edit/
Django Version: 4.2.15
Python Version: 3.12.4
Installed Applications:
[...,
'wagtailmedia',
'wagtail.contrib.forms',
'wagtail.contrib.redirects',
'wagtail.embeds',
'wagtail.sites',
'wagtail.users',
'wagtail.snippets',
'wagtail.documents',
'wagtail.images',
'wagtail.search',
'wagtail.admin',
'wagtail',
'modelcluster',
'taggit',
'import_export',
'django.contrib.sitemaps',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.gis',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'django.contrib.sites']
Installed Middleware:
['django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
'wagtail.contrib.redirects.middleware.RedirectMiddleware']
Template error:
In template /opt/venv/lib/python3.12/site-packages/wagtail/admin/templates/wagtailadmin/panels/object_list.html, error at line 9
End of block reached without closing inline style elements
1 : {% load wagtailadmin_tags %}
2 :
3 : <div class="w-form-width" {% include "wagtailadmin/shared/attrs.html" with attrs=self.attrs %}>
4 : {% if self.help_text %}
5 : {% help_block status="info" %}{{ self.help_text }}{% endhelp_block %}
6 : {% endif %}
7 : {% for child, identifier in self.visible_children_with_identifiers %}
8 : {% panel id_prefix=self.prefix id=identifier classname=child.classes|join:' ' attrs=child.attrs heading=child.heading heading_size="label" icon=child.icon id_for_label=child.id_for_label is_required=child.is_required %}
9 : {% component child %}
10 : {% endpanel %}
11 : {% endfor %}
12 : </div>
13 :
Traceback (most recent call last):
File "/opt/venv/lib/python3.12/site-packages/django/core/handlers/exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/core/handlers/base.py", line 220, in _get_response
response = response.render()
^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/auth.py", line 171, in overridden_render
return render()
^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/response.py", line 114, in render
self.content = self.rendered_content
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/response.py", line 92, in rendered_content
return template.render(context, self._request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 63, in render
result = block.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 63, in render
result = block.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/loader_tags.py", line 63, in render
result = block.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1064, in render
output = self.filter_expression.resolve(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 715, in resolve
obj = self.var.resolve(context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 847, in resolve
value = self._resolve_lookup(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 914, in _resolve_lookup
current = current()
^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/panels/base.py", line 317, in render_form_content
return mark_safe(self.render_html() + self.render_missing_fields())
^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/laces/components.py", line 55, in render_html
return template.render(context_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/defaulttags.py", line 238, in render
nodelist.append(node.render_annotated(context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/defaulttags.py", line 321, in render
return nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1064, in render
output = self.filter_expression.resolve(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 715, in resolve
obj = self.var.resolve(context)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 847, in resolve
value = self._resolve_lookup(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 914, in _resolve_lookup
current = current()
^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/laces/components.py", line 55, in render_html
return template.render(context_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/backends/django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 167, in _render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/defaulttags.py", line 238, in render
nodelist.append(node.render_annotated(context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/templatetags/wagtailadmin_tags.py", line 1070, in render
children = self.nodelist.render(context) if self.nodelist else ""
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/template/base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/laces/templatetags/laces.py", line 81, in render
html = component.render_html(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/laces/components.py", line 50, in render_html
context_data = self.get_context_data(parent_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/panels/field_panel.py", line 273, in get_context_data
context.update(self.get_editable_context_data())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/panels/field_panel.py", line 315, in get_editable_context_data
rendered_field = self.bound_field.as_widget(attrs=widget_attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/forms/boundfield.py", line 107, in as_widget
return widget.render(
File "/opt/venv/lib/python3.12/site-packages/django/forms/widgets.py", line 280, in render
context = self.get_context(name, value, attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/rich_text/editors/draftail/__init__.py", line 72, in get_context
context = super().get_context(name, value, attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/forms/widgets.py", line 333, in get_context
context = super().get_context(name, value, attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/django/forms/widgets.py", line 272, in get_context
"value": self.format_value(value),
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/rich_text/editors/draftail/__init__.py", line 69, in format_value
return self.converter.from_database_format(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/rich_text/converters/contentstate.py", line 141, in from_database_format
self.html_to_contentstate_handler.feed(html)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/html/parser.py", line 111, in feed
self.goahead(0)
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/html/parser.py", line 173, in goahead
k = self.parse_endtag(i)
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/html/parser.py", line 414, in parse_endtag
self.handle_endtag(elem)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/rich_text/converters/html_to_contentstate.py", line 396, in handle_endtag
element_handler.handle_endtag(name, self.state, self.contentstate)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/venv/lib/python3.12/site-packages/wagtail/admin/rich_text/converters/html_to_contentstate.py", line 125, in handle_endtag
not state.current_inline_styles
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception Type: AssertionError at /cms/pages/266/edit/
Exception Value: End of block reached without closing inline style elements
```
Any other relevant information. For example, why do you consider this a bug and what did you expect to happen instead?
I would have expected that the error would be captured and reported back to the end user in the CMS edit interface near the field(s) that had the problem data.
Desirable features might include a report of the offending HTML so the field's contents can be amended manually, but I appreciate this is an edge use case. However, better error handling would be essential. Thanks.
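For anyone triaging this, the failure mode can be sketched with a plain `html.parser` subclass — a simplified stand-in for Wagtail's converter, not its actual code (the tag sets here are illustrative):

```python
from html.parser import HTMLParser

INLINE_TAGS = {"b", "i", "em", "strong"}
BLOCK_TAGS = {"p", "h1", "h2", "li"}

class StrictInlineStyleParser(HTMLParser):
    """Toy model of the contentstate conversion rule: every inline style
    must be closed before the enclosing block element ends."""

    def __init__(self):
        super().__init__()
        self.open_inline = []

    def handle_starttag(self, tag, attrs):
        if tag in INLINE_TAGS:
            self.open_inline.append(tag)

    def handle_endtag(self, tag):
        if tag in INLINE_TAGS and self.open_inline and self.open_inline[-1] == tag:
            self.open_inline.pop()
        elif tag in BLOCK_TAGS:
            # Mirrors the assertion reported in html_to_contentstate.py
            assert not self.open_inline, (
                "End of block reached without closing inline style elements"
            )

def is_convertible(html):
    parser = StrictInlineStyleParser()
    try:
        parser.feed(html)
        return True
    except AssertionError:
        return False

print(is_convertible("<p><b>ok</b></p>"))   # True
print(is_convertible("<p><b>broken</p>"))   # False
```

Legacy data with an unclosed `<b>` inside a `<p>` (second case) trips the same assertion, which suggests the fix is to catch this condition and surface it next to the offending field rather than let it bubble up as a 500.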
- I have confirmed that this issue can be reproduced as described on a fresh Wagtail project: (no)
### Technical details
- Django Version: 4.2.15
- Python Version: 3.12.4
- wagtail 6.2
- Browser version: Chrome 127
### Working on this
Requires Python & Wagtail/Django knowledge of exception handling, and may require some Wagtail admin template skills to ensure the error message is relayed correctly to the end user.
Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| closed | 2024-08-12T16:16:22Z | 2024-08-12T17:07:36Z | https://github.com/wagtail/wagtail/issues/12227 | [
"type:Bug",
"status:Unconfirmed"
] | asset-web | 2 |
mage-ai/mage-ai | data-science | 5,438 | Pipeline Level success/failure callbacks | **Is your feature request related to a problem? Please describe.**
There are times when I want to call a function when a pipeline fails, but having the callback be block-level and applied to every block can lead to multiple calls when concurrent blocks fail at the same time. Having a pipeline-level callback feature would prevent this issue.
**Describe the solution you'd like**
Similar to the block-level callbacks, there would be a pipeline-level callback where a failure/success function would be called depending on the status of the pipeline as a whole, as opposed to an individual block.
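A plain-Python sketch (illustrative only, not Mage's API) of the at-most-once behaviour such a pipeline-level failure hook would need when several blocks fail concurrently:

```python
import threading

class PipelineCallbacks:
    """Sketch: fire the pipeline-level on_failure hook at most once per run,
    even when several concurrent blocks fail at the same time."""

    def __init__(self, on_failure):
        self._on_failure = on_failure
        self._fired = False
        self._lock = threading.Lock()

    def block_failed(self, block_name, error):
        with self._lock:
            if self._fired:
                return
            self._fired = True
        self._on_failure(block_name, error)

calls = []
cb = PipelineCallbacks(lambda name, err: calls.append((name, err)))
threads = [
    threading.Thread(target=cb.block_failed, args=(f"block_{i}", "boom"))
    for i in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(calls))  # 1 — the hook ran exactly once
```

With block-level callbacks, each of the eight failing blocks would have invoked the function; a pipeline-level hook guarded like this fires once regardless.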
**Describe alternatives you've considered**
Having a sensor pipeline running adjacently to the main pipeline that detects success/failure has been a decent solution, but the check_status function doesn't have great handling of failure; it simply throws an error on failure rather than returning an output like True or False.
| open | 2024-09-23T14:03:15Z | 2024-10-09T18:47:53Z | https://github.com/mage-ai/mage-ai/issues/5438 | [
"feature"
] | Cgc2000 | 0 |
neuml/txtai | nlp | 9 | Switch from faiss-gpu to faiss-cpu | Given that GPU builds aren't being used and there are reported issues with macOS, switch to the faiss-cpu package | closed | 2020-08-17T14:39:01Z | 2021-05-13T14:58:42Z | https://github.com/neuml/txtai/issues/9 | [
"bug"
] | davidmezzetti | 0 |
AutoGPTQ/AutoGPTQ | nlp | 178 | Example for quant Gpt-neox-20b | Hey guys, I am using `GPTQ` to quantize the `GPT-NeoX-20B` model. Previously, when quantizing the Llama family models, I usually used `C4` as the calibration dataset. May I ask which dataset is suitable for `GPT-NeoX-20B`? | open | 2023-06-27T08:51:01Z | 2023-06-29T07:42:10Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/178 | [] | sgwhat | 1 |
jpadilla/django-rest-framework-jwt | django | 31 | Is JWT in Authorization header a standard? | Hi there,
Great work on django-rest-framework-jwt so far. It is a perfect fit with DRF.
I noticed that you are using JWT in your Authorization header, as in your example:
$ curl -H "Authorization: JWT <your_token>" http://localhost:8000/protected-url/
Is this the recommended way in the current JWT draft? Other server-side token frameworks often accept `Bearer` instead of `JWT`, which I think is very common. The reason I am asking is that satellizer (an OAuth2 token-based JavaScript library for AngularJS) does not work with django-rest-framework-jwt.
I suspect it is because of the `JWT` text in the Authorization header. Could you please have a look?
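For what it's worth, the header prefix looks configurable through the library's settings — a sketch below, assuming the `JWT_AUTH_HEADER_PREFIX` key exists in the version you're running, so please double-check:

```python
# settings.py — hypothetical sketch; verify the setting name against your
# installed django-rest-framework-jwt version.
JWT_AUTH = {
    "JWT_AUTH_HEADER_PREFIX": "Bearer",  # the library default is "JWT"
}
```

With that in place, the curl example would become `Authorization: Bearer <your_token>`, which should satisfy clients like satellizer that hard-code the `Bearer` scheme.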
| closed | 2014-08-25T03:19:53Z | 2014-08-30T11:55:23Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/31 | [] | pmcao | 6 |
microsoft/nni | machine-learning | 5,142 | Build from source on MacOS M1 failed | **Describe the issue**:
I followed the instructions from https://nni.readthedocs.io/zh/stable/notes/build_from_source.html and ran the following commands:
```bash
conda create --name my_env
conda activate my_env
git clone https://github.com/microsoft/nni.git
cd nni
pip install --upgrade setuptools pip wheel
pip install jupyterlab==3.0.9
export NNI_RELEASE=2.0
python setup.py build_ts
```
I got an error when running `python setup.py build_ts`, as shown below:
```
running build_ts
# Downloading node.js from https://nodejs.org/dist/v16.14.2/node-v16.14.2-darwin-arm64.tar.xz
# Extracting node.js
# Downloading yarn from https://github.com/yarnpkg/yarn/releases/download/v1.22.10/yarn-v1.22.10.tar.gz
# Extracting yarn
# Building NNI manager
# yarn (path: ts/nni_manager)
yarn install v1.22.10
[1/5] 🔍 Validating package.json...
[2/5] 🔍 Resolving packages...
[3/5] 🚚 Fetching packages...
[4/5] 🔗 Linking dependencies...
warning " > express-joi-validator@2.0.1" has incorrect peer dependency "joi@6.x.x".
warning " > @typescript-eslint/eslint-plugin@2.34.0" has incorrect peer dependency "@typescript-eslint/parser@^2.0.0".
warning " > @typescript-eslint/eslint-plugin@2.34.0" has incorrect peer dependency "eslint@^5.0.0 || ^6.0.0".
warning Workspaces can only be enabled in private projects.
[5/5] 🔨 Building fresh packages...
[1/3] ⢀ sqlite3
[2/3] ⢀ cpu-features
warning Error running install script for optional dependency: "/Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features: Command failed.
Exit code: 1
Command: node-gyp rebuild
Arguments:
Directory: /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features
Output:
gyp info it worked if it ends with ok
gyp info using node-gyp@9.1.0
gyp info using node@16.14.2 | darwin | arm64
gyp info find Python using Python version 3.10.4 found at \"/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/bin/python3\"
gyp info spawn /Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/bin/python3
gyp info spawn args [
gyp info spawn args '/Users/shuffleofficial/.config/yarn/global/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args 'binding.gyp',
gyp info spawn args '-f',
gyp info spawn args 'make',
gyp info spawn args '-I',
gyp info spawn args '/Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/build/config.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/shuffleofficial/.config/yarn/global/node_modules/node-gyp/addon.gypi',
gyp info spawn args '-I',
gyp info spawn args '/Users/shuffleofficial/Library/Caches/node-gyp/16.14.2/include/node/common.gypi',
gyp info spawn args '-Dlibrary=shared_library',
gyp info spawn args '-Dvisibility=default',
gyp info spawn args '-Dnode_root_dir=/Users/shuffleofficial/Library/Caches/node-gyp/16.14.2',
gyp info spawn args '-Dnode_gyp_dir=/Users/shuffleofficial/.config/yarn/global/node_modules/node-gyp',
gyp info spawn args '-Dnode_lib_file=/Users/shuffleofficial/Library/Caches/node-gyp/16.14.2/<(target_arch)/node.lib',
gyp info spawn args '-Dmodule_root_dir=/Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features',
gyp info spawn args '-Dnode_engine=v8',
gyp info spawn args '--depth=.',
gyp info spawn args '--no-parallel',
gyp info spawn args '--generator-output',
gyp info spawn args 'build',
gyp info spawn args '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
ACTION Configuring dependencies /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/build/Makefile
-- The C compiler identification is AppleClang 14.0.0.14000029
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Looking for dlfcn.h
-- Looking for dlfcn.h - found
-- Looking for getauxval
-- Looking for getauxval - not found
-- Configuring done
-- Generating done
-- Build files have been written to: /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/build
TOUCH Release/obj.target/config_deps.stamp
ACTION Building dependencies /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/build/libcpu_features.a
[ 11%] Building C object CMakeFiles/utils.dir/src/filesystem.c.o
[ 22%] Building C object CMakeFiles/utils.dir/src/stack_line_reader.c.o
[ 33%] Building C object CMakeFiles/utils.dir/src/string_view.c.o
[ 33%] Built target utils
[ 44%] Building C object CMakeFiles/unix_based_hardware_detection.dir/src/hwcaps.c.o
[ 55%] Building C object CMakeFiles/unix_based_hardware_detection.dir/src/unix_features_aggregator.c.o
[ 55%] Built target unix_based_hardware_detection
[ 66%] Building C object CMakeFiles/cpu_features.dir/src/cpuinfo_arm.c.o
In file included from /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/src/cpuinfo_arm.c:15:
/Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/include/cpuinfo_arm.h:118:2: error: \"Including cpuinfo_arm.h from a non-arm target.\"
#error \"Including cpuinfo_arm.h from a non-arm target.\"
^
1 error generated.
make[3]: *** [CMakeFiles/cpu_features.dir/src/cpuinfo_arm.c.o] Error 1
make[2]: *** [CMakeFiles/cpu_features.dir/all] Error 2
make[1]: *** [all] Error 2
make: *** [/Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features/deps/cpu_features/build/libcpu_features.a] Error 2
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack at ChildProcess.onExit (/Users/shuffleofficial/.config/yarn/global/node_modules/node-gyp/lib/build.js:201:23)
gyp ERR! stack at ChildProcess.emit (node:events:526:28)
gyp ERR! stack at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
gyp ERR! System Darwin 21.6.0
gyp ERR! command \"/Users/shuffleofficial/nni/nni_node/node\" \"/Users/shuffleofficial/.yarn/bin/node-gyp\" \"rebuild\"
gyp ERR! cwd /Users/shuffleofficial/nni/ts/nni_manager/node_modules/cpu-features
gyp ERR! node -v v16.14.2
gyp ERR! node-gyp -v v9.1.0
✨ Done in 39.75s.
# yarn build (path: ts/nni_manager)
yarn run v1.22.10
$ tsc
/bin/sh: tsc: command not found
error Command failed with exit code 127.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
Traceback (most recent call last):
File "/Users/shuffleofficial/nni/setup.py", line 300, in <module>
_setup()
File "/Users/shuffleofficial/nni/setup.py", line 93, in _setup
setuptools.setup(
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/__init__.py", line 87, in setup
return distutils.core.setup(**attrs)
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 185, in setup
return run_commands(dist)
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
dist.run_commands()
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 968, in run_commands
self.run_command(cmd)
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/dist.py", line 1217, in run_command
super().run_command(command)
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/site-packages/setuptools/_distutils/dist.py", line 987, in run_command
cmd_obj.run()
File "/Users/shuffleofficial/nni/setup.py", line 224, in run
setup_ts.build(release)
File "/Users/shuffleofficial/nni/setup_ts.py", line 53, in build
compile_ts(release)
File "/Users/shuffleofficial/nni/setup_ts.py", line 173, in compile_ts
_yarn('ts/nni_manager', 'build')
File "/Users/shuffleofficial/nni/setup_ts.py", line 272, in _yarn
subprocess.run([str(_yarn_path), *args], cwd=path, check=True, env=_yarn_env)
File "/Users/shuffleofficial/opt/anaconda3/envs/ds_wgan/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['/Users/shuffleofficial/nni/toolchain/yarn/bin/yarn', 'build']' returned non-zero exit status 127.
```
**Environment**:
- NNI version: 2.0
- Training service (local|remote|pai|aml|etc): local
- Client OS: MacOS 12.6
- Server OS (for remote mode only):
- Python version: Python 3.10.4 (main, Mar 31 2022, 03:37:37) [Clang 12.0.0 ] on darwin
- PyTorch/TensorFlow version: Not applicable
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
**How to reproduce it?**: On a macOS machine with an M1 chip, run the terminal commands above. | closed | 2022-09-27T09:12:47Z | 2023-02-27T02:47:10Z | https://github.com/microsoft/nni/issues/5142 | [
"waiting user confirm",
"support",
"macOS"
] | DDDOH | 8 |
tensorflow/tensor2tensor | deep-learning | 1,900 | Why there is no square root at area_temperature? | https://github.com/tensorflow/tensor2tensor/blob/5623deb79cfcd28f8f8c5463b58b5bd76a81fd0d/tensor2tensor/layers/area_attention.py#L415
In typical dot-product attention, the logit (the input matrix of the softmax) is supposed to be divided by the square root of the temperature, as in the equation below.

However, in this code the logit is just divided by the temperature, without a square root. Is this correct? If it is intentional, could you explain why the square root was omitted? | open | 2021-11-04T12:57:30Z | 2021-11-04T12:57:30Z | https://github.com/tensorflow/tensor2tensor/issues/1900 | [] | jiminbot20 | 0 |
polakowo/vectorbt | data-visualization | 155 | Is Vectorbt suitable for price-based aggregations? | Congratulations on an exciting project - a seriously impressive achievement in such a short timeframe!
I'd be grateful if you could help me assess its suitability for our trading style. I'm a developer but have never used Python; I'd learn it to use vectorbt, provided you can reassure me that I'm not going to hit any show-stopping issues with price-based aggregations.
Our trading style uses aggregations such as Renko, which as you know can form at any time. So bars in a portfolio test are not aligned by time as with conventional time-based candles.
This means that a portfolio dataframe would normally have just a single value for each datetime-indexed row, with all the other instruments set to NaN. I'd be importing the data from CSV files in OHLCV format.
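To make that shape concrete, here is a tiny sketch (plain Python, not vectorbt) of merging two Renko brick series onto the union of their timestamps — each row then carries a value for only one instrument:

```python
def align_on_union(series_a, series_b):
    """Merge two {timestamp: close} mappings onto the union of their
    timestamps; missing entries become None (NaN in a real dataframe)."""
    index = sorted(set(series_a) | set(series_b))
    return [(t, series_a.get(t), series_b.get(t)) for t in index]

# Renko bricks close at arbitrary times, so the two series rarely share one.
renko_eurusd = {"09:00:03": 1.1010, "09:00:07": 1.1020}
renko_gbpusd = {"09:00:05": 1.2710}

for row in align_on_union(renko_eurusd, renko_gbpusd):
    print(row)
# ('09:00:03', 1.101, None)
# ('09:00:05', None, 1.271)
# ('09:00:07', 1.102, None)
```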
Will this pose problems for analysis or visualisations? So many backtester projects seem to assume that data is aligned by time that it's proving challenging to find any serious tool for backtesting Renko.
Thanks in advance for your advice! | closed | 2021-05-30T21:05:40Z | 2024-03-16T09:26:59Z | https://github.com/polakowo/vectorbt/issues/155 | [] | gcaplan | 5 |
slackapi/bolt-python | fastapi | 280 | Django thread-local connection cleanup in multi threads | This issue illustrates a potential issue that may affect Django adapter users.
Django's ORM establishes a database connection per thread. For the threads managed by Django (which are supposed to be used for directly handling web requests), the Django ORM cleans up the established connections under the hood once the processing on the thread completes (the `request_finished` signal).
On the other hand, when Django ORM models are used in an unmanaged thread, Django does not handle the cleanup of the connections bound to the thread, although the framework automatically associates a new connection with the thread. This half-heartedly maintained resource may cause stale-DB-connection issues.
To learn more about this Django behavior, the following ticket is helpful:
https://code.djangoproject.com/ticket/9878
As Bolt for Python utilizes threads to let developers run the `ack()` method asynchronously (as long as `process_before_response=False`, which is the default), the framework needs special treatment for this possible issue with Django.
As the solution for this issue (and possible similar needs), we'll introduce a new completion callback to the `ListenerRunner` mechanism. The handler will be quite similar to the `ListenerErrorHandler` callback but the given callback function is executed in the `finally` clause of listener runner invocation:
https://github.com/slackapi/bolt-python/blob/v1.4.4/slack_bolt/listener/thread_runner.py#L99-L126
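A rough stand-in for that behaviour (illustrative names, not Bolt's actual internals): completion handlers run in a `finally` block, so thread-local cleanup happens whether the listener succeeds or raises:

```python
import threading

def run_listener(listener, completion_handlers):
    """Illustrative runner: completion handlers always execute in `finally`,
    whether the listener succeeds or raises."""
    def target():
        try:
            listener()
        except Exception:
            pass  # a real runner would route this to a listener error handler
        finally:
            for handler in completion_handlers:
                handler()
    thread = threading.Thread(target=target)
    thread.start()
    return thread

events = []

def close_thread_local_resources():
    # In the Django case this is where django.db.connections.close_all()
    # (or similar) would run to release the thread-bound connection.
    events.append("cleanup")

def failing_listener():
    events.append("listener")
    raise RuntimeError("boom")

run_listener(failing_listener, [close_thread_local_resources]).join()
print(events)  # ['listener', 'cleanup']
```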
Also, for this Django specific issue, we'll add a custom lazy listener runner to the Django adapter module.
I've already implemented the initial version of this. We can discuss its details in my upcoming pull request.
| closed | 2021-04-02T08:31:07Z | 2022-07-15T20:46:52Z | https://github.com/slackapi/bolt-python/issues/280 | [
"bug",
"area:async",
"area:adapter",
"area:sync"
] | seratch | 2 |
jmcnamara/XlsxWriter | pandas | 609 | Issue with write_url when internally referencing a sheet (with spaces) in the same book | Hi,
I am trying to use XlsxWriter to create a table of contents that has hyperlinks that are linked to the various sheets within the book. The sheet names have spaces in them, so I follow the syntax guidelines of single-quoting the sheet:
```
import xlsxwriter
workbook = xlsxwriter.Workbook('hello.xlsx')
worksheet = workbook.add_worksheet()
call = "internal:'Summary Figure 1'!A1"
worksheet.write_url(5, 0, call, url_format, string = string + str(end_year))
```
When I print the string 'call', I get "internal:'Summary Figure 1'!A1" out, but when I run the program and edit the hyperlink in the output xlsx file, it shows:
%22internal:'Summary%20Figure%201'!A1%22
which breaks the link.
I am using Python version 3.6 and XlsxWriter 1.0.4 and Excel version 2013.
I believe this same problem was fixed for external references (https://github.com/jmcnamara/XlsxWriter/issues/350), but not for internal ones.
Thanks
| closed | 2019-03-25T19:02:25Z | 2019-03-26T02:15:11Z | https://github.com/jmcnamara/XlsxWriter/issues/609 | [
"awaiting user feedback"
] | baym7 | 4 |
graphql-python/graphene-django | graphql | 1,043 | Don't know how to convert the Django field class 'djongo.models.fields.ArrayField' |
```python
from djongo.models import Model, CharField, ObjectIdField, IntegerField, ArrayField
from django.forms import ModelForm


class Contact(Model):
    name = CharField(max_length=50)
    phone_number = IntegerField()

    class Meta:
        abstract = True


class ContactForm(ModelForm):
    class Meta:
        model = Contact
        fields = ("name", "phone_number")


class Organization(Model):
    _id = ObjectIdField()
    name = CharField(max_length=50, unique=True)
    desk_number = IntegerField()
    registered_address = CharField(max_length=256)
    communication_address = CharField(max_length=256)
    email_address = CharField(max_length=150)
    contacts = ArrayField(model_container=Contact, model_form_class=ContactForm, null=False)
    type = CharField(max_length=256)
    pan_number = CharField(max_length=15)
    gst_number = CharField(max_length=25)
    member_count = IntegerField()
    account_number = IntegerField()
    bank_name = CharField(70)
    ifsc_code = CharField(10)
    registration_year = CharField(4)

    class Meta:
        """
        to set table name in database
        """
        db_table = "organization"
```
File "D:\miniconda\envs\deeplearning\lib\threading.py", line 932, in _bootstrap_inner
self.run()
File "D:\miniconda\envs\deeplearning\lib\threading.py", line 870, in run
self._target(*self._args, **self._kwargs)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\utils\autoreload.py", line 53, in wrapper
fn(*args, **kwargs)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\management\commands\runserver.py", line 117, in inner_run
self.check(display_num_errors=True)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\management\base.py", line 392, in check
all_issues = self._run_checks(
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\management\base.py", line 382, in _run_checks
return checks.run_checks(**kwargs)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\checks\registry.py", line 72, in run_checks
new_errors = check(app_configs=app_configs)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\checks\urls.py", line 13, in check_url_config
return check_resolver(resolver)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\core\checks\urls.py", line 23, in check_resolver
return check_method()
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\urls\resolvers.py", line 407, in check
for pattern in self.url_patterns:
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\urls\resolvers.py", line 588, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\utils\functional.py", line 48, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "D:\miniconda\envs\deeplearning\lib\site-packages\django\urls\resolvers.py", line 581, in urlconf_module
return import_module(self.urlconf_name)
File "D:\miniconda\envs\deeplearning\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "D:\Work\datascience\deeplearning\main\urls.py", line 26, in <module>
from .schema import schema
File "D:\Work\datascience\deeplearning\main\schema.py", line 14, in <module>
from api.organization.query import Query as QOrganization
File "D:\Work\datascience\deeplearning\api\organization\query.py", line 15, in <module>
class OrganizationType(DjangoObjectType):
File "D:\miniconda\envs\deeplearning\lib\site-packages\graphene\utils\subclass_with_meta.py", line 52, in __init_subclass__
super_class.__init_subclass_with_meta__(**options)
File "D:\miniconda\envs\deeplearning\lib\site-packages\graphene_django\types.py", line 227, in __init_subclass_with_meta__
construct_fields(model, registry, fields, exclude, convert_choices_to_enum),
File "D:\miniconda\envs\deeplearning\lib\site-packages\graphene_django\types.py", line 63, in construct_fields
converted = convert_django_field_with_choices(
File "D:\miniconda\envs\deeplearning\lib\site-packages\graphene_django\converter.py", line 112, in convert_django_field_with_choices
converted = convert_django_field(field, registry)
File "D:\miniconda\envs\deeplearning\lib\functools.py", line 875, in wrapper
return dispatch(args[0].__class__)(*args, **kw)
File "D:\miniconda\envs\deeplearning\lib\site-packages\graphene_django\converter.py", line 120, in convert_django_field
raise Exception(
Exception: Don't know how to convert the Django field organization.Organization.contacts (<class 'djongo.models.fields.ArrayField'>) | open | 2020-10-12T14:39:30Z | 2021-02-04T13:35:40Z | https://github.com/graphql-python/graphene-django/issues/1043 | [
"🐛bug",
"✨enhancement"
] | sfahad1414 | 2 |
strawberry-graphql/strawberry | fastapi | 3,809 | Automatic object type resolution does not trigger in reference resolvers | <!-- Provide a general summary of the bug in the title above. -->
## Describe the Bug
This is a bit of a tricky one to explain, so do let me know if any further clarity is needed.
TL;DR: Strawberry is unable to resolve federated types where the reference resolver does not explicitly return an object type, or where the type being federated defines inline field resolvers.
It's easier to explain with an MRE. This system has two services, `groups` and `users`, each pertaining to group and user queries respectively. The `Group` type is federated, and the `users` service has two queries. `Users` contains `group` as a field.
groups/app.py:
```py
from types import SimpleNamespace
from typing import Self

import strawberry

groups = {
    "1": SimpleNamespace(id="1", name="Hello", altname="Hey"),
    "2": SimpleNamespace(id="2", name="Strawberry"),
    "3": SimpleNamespace(id="3", name="World", altname="Earth"),
}


@strawberry.federation.type(keys=["id"])
class Group:
    id: strawberry.ID
    name: str
    altname: str = strawberry.field(
        resolver=lambda root: getattr(root, "altname", root.name),
    )

    @classmethod
    def resolve_reference(cls, id: str) -> Self:
        return groups.get(id)


schema = strawberry.federation.Schema(
    types=[Group],
    enable_federation_2=True,
)
```
users/app.py:
```py
from types import SimpleNamespace

import strawberry

users = {
    "1": SimpleNamespace(id="1", group_id="1"),
    "2": SimpleNamespace(id="2", group_id="2"),
    "3": SimpleNamespace(id="3", group_id="3"),
}


@strawberry.federation.type(keys=["id"])
class Group:
    id: strawberry.ID


@strawberry.type
class User:
    id: int
    group: Group = strawberry.field(
        resolver=lambda root: Group(id=root.group_id),
    )


@strawberry.type
class Query:
    @strawberry.field
    def users(self) -> list[User]:
        return list(users.values())

    @strawberry.field
    def user(self) -> User:
        return users.get("1")


schema = strawberry.federation.Schema(
    query=Query,
    enable_federation_2=True,
)
```
Posting the following query (`altname` is intentionally omitted for now):
```json
{"query": "query { users { id group { id name } } }"
```
returns the following error in the `groups` service:
```
GraphQL request:1:37
1 | query($representations: [_Any!]!) { _entities(representations: $representations) { ... on Group { name } } }
| ^
Traceback (most recent call last):
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 728, in complete_list_value
completed_item = self.complete_value(
item_type, field_nodes, info, item_path, item
)
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 646, in complete_value
return self.complete_abstract_value(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
cast(GraphQLAbstractType, return_type), field_nodes, info, path, result
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 798, in complete_abstract_value
runtime_type = resolve_type_fn(result, info, return_type)
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/strawberry/types/union.py", line 185, in _resolve_union_type
raise WrongReturnTypeForUnion(info.field_name, str(type(root)))
strawberry.exceptions.WrongReturnTypeForUnion: The type "<class 'types.SimpleNamespace'>" cannot be resolved for the field "_entities" , are you using a strawberry.field?
```
This error can be resolved by explicitly returning a `Group` object type, like so:
```py
@classmethod
def resolve_reference(cls, id: str) -> Self:
    group = groups.get(id)
    return Group(id=group.id, name=group.name)
```
However, when querying `altname` as well, which uses an inline resolver:
```json
{"query": "query { users { id group { id name altname } } }"
```
The `groups` service raises this error instead:
```
GraphQL request:1:104
1 | query($representations: [_Any!]!) { _entities(representations: $representations) { ... on Group { name altname } } }
| ^
Traceback (most recent call last):
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 542, in execute_field
completed = self.complete_value(
return_type, field_nodes, info, path, result
)
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 614, in complete_value
completed = self.complete_value(
cast(GraphQLNonNull, return_type).of_type,
...<3 lines>...
result,
)
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 641, in complete_value
return self.complete_leaf_value(cast(GraphQLLeafType, return_type), result)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/execution/execute.py", line 776, in complete_leaf_value
serialized_result = return_type.serialize(result)
File "/Users/ethanhenderson/Programs/Playground/strawberry_test/.venv/lib/python3.13/site-packages/graphql/type/scalars.py", line 177, in serialize_string
raise GraphQLError("String cannot represent value: " + inspect(output_value))
graphql.error.graphql_error.GraphQLError: String cannot represent value: <method>
```
`altname` can't be passed as an argument to the constructor as it has an inline resolver, and it can't be resolved when being federated from another type as Strawberry doesn't perform the necessary resolution. This makes it very difficult to use inline resolvers in federated types.
If there's another way of doing this I'm missing, do let me know.
## System Information
- Operating system: MacOS 15.3.2
- Strawberry version (if applicable): 0.262.3
## Additional Context
The MRE is based on the setup in the [Federation v2 Guide](https://strawberry.rocks/docs/guides/federation#apollo-federation-2-guide) to try and make it simpler.
| open | 2025-03-13T16:14:33Z | 2025-03-13T16:21:00Z | https://github.com/strawberry-graphql/strawberry/issues/3809 | [
"bug"
] | parafoxia | 0 |
adap/flower | tensorflow | 4,399 | Update embedded-devices example for the latest PyTorch and JetPack | ### Describe the type of feature and its functionality.
Following @jafermarq's suggestions (#4381, #4382), I am creating this new issue.
How would you feel about providing guidelines for the build script?
I’ve modified the build script (`build_jetson_flower_client.sh`) to use Flower (`flwr`) with JetPack 6.0 and successfully tested it. I believe it would be beneficial to provide guidelines for the build script. Utilizing the BASE_PYTORCH variable could simplify the process compared to reinstalling PyTorch from scratch.
Currently, the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/l4t-pytorch) offers base images supporting up to PyTorch v2.0.0. Additionally, with a bit more searching, we can find [the dustynv base image](https://github.com/dusty-nv/jetson-containers) for PyTorch v2.2.0, which is compatible with JetPack 6.0. I have also confirmed on [the NVIDIA forums](https://forums.developer.nvidia.com/t/jetson-container-for-jetpack-6/292662/6) that the R36.2 container image is compatible with R36.3.
Let me know if this approach aligns with your plans or if there's anything else I can assist with. I'm happy to help further, whether it's refining the build script guidelines or contributing in other ways!
### Describe step by step what files and adjustments you are planning to include.
**`build_jetson_flower_client.sh`**
Update the build script to accept the BASE_PYTORCH and BASE_TF variables as input arguments.
Ensure that the script defaults to a standard base image if no argument is provided.
**`Dockerfile`**
In my experience using the dusty-nv base image, I encountered an issue where `libsndfile1` was missing. To resolve this, I added the following command to the Dockerfile:
```
RUN apt-get update && \
apt-get install -y --no-install-recommends libsndfile1 && \
rm -rf /var/lib/apt/lists/*
```
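If it helps, here is a sketch of how the Dockerfile could consume a `BASE_IMAGE` build argument passed in from the build script (the variable name and the `--build-arg` plumbing are my assumptions, and no default tag is pinned since image names vary by JetPack release):

```
ARG BASE_IMAGE
FROM ${BASE_IMAGE}

# dusty-nv PyTorch images lack libsndfile1, which some audio deps expect
RUN apt-get update && \
    apt-get install -y --no-install-recommends libsndfile1 && \
    rm -rf /var/lib/apt/lists/*
```

The script would then build with, e.g., `docker build --build-arg BASE_IMAGE=$BASE_PYTORCH .`.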
**`README.md`**
Add a table that lists the available base images along with a brief description and reference links.
### Is there something else you want to add?
_No response_ | open | 2024-10-30T02:27:03Z | 2025-03-07T14:34:48Z | https://github.com/adap/flower/issues/4399 | [
"feature request",
"part: examples",
"state: revision needed"
] | wwjang | 6 |
StackStorm/st2 | automation | 5,136 | Json definition in core.remote action command | ## SUMMARY
I need to insert a workflow parameter inside a JSON definition for my action.
The first problem is that I can't even run the command with a JSON definition in it (without the parameter in the JSON).
```
input:
- parameter
tasks:
run:
action: core.remote
input:
cmd: command --json= '{"key1": "someValeu", "key": "secondValue" }'
```
This results in the error:
```
result:
errors:
- message: "mapping values are not allowed here
in "<unicode string>", line 16, column 78:
... command --json= '{ "key1": "someValeu", "key2": "secondValue" }
^"
```
The second problem (assuming I can resolve the first one): as far as I saw in the examples and tested, `<% ctx().parameter %>` cannot be used inside single quotes; it only works inside double quotes. But the JSON only works if I define it using single quotes. In the end I would like to have this workflow:
```
input:
- parameter
tasks:
run:
action: core.remote
input:
cmd: command --json= '{"key1": "someValeu", "key2": <% ctx().parameter %> }'
```
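The first error is standard YAML behaviour: an unquoted scalar containing `: ` starts a nested mapping, so the whole `cmd:` value needs quoting (or a block scalar, `cmd: |`). For the second problem, one way to sidestep the quoting layers entirely is to build the JSON string programmatically. A generic Python sketch (not StackStorm-specific; the helper name is mine):

```python
import json
import shlex

def build_cmd(parameter):
    """Builds the remote command with a safely quoted JSON argument."""
    payload = json.dumps({"key1": "someValue", "key2": parameter})
    return "command --json=" + shlex.quote(payload)

print(build_cmd("secondValue"))
# command --json='{"key1": "someValue", "key2": "secondValue"}'
```

`json.dumps` guarantees valid JSON quoting and `shlex.quote` guarantees shell quoting, so the parameter value can contain quotes or spaces without breaking either layer.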
### STACKSTORM VERSION
st2 3.4dev (c422f0029), on Python 3.6.9
##### OS, environment, install method
Running on Kubernetes on a k3os system. Deployed using stackstorm-ha.
## Expected Results
Is there a way to get around these problems?
| closed | 2021-02-02T14:12:18Z | 2021-02-08T13:08:41Z | https://github.com/StackStorm/st2/issues/5136 | [] | pavanfhw | 3 |
tatsu-lab/stanford_alpaca | deep-learning | 124 | I consider redoing this for other languages, is this possible for a private person? | I would love to recrate this experiment in German and maybe also Esperanto and it looks easy enough, I bascially just have to adapt [prompt.txt](https://github.com/tatsu-lab/stanford_alpaca/blob/main/prompt.txt) to another language and follow the rest of the instructions, right?
I am willing to invest arount 100€ into this, do you think it could be feasible? I don't care if the training is slow or if I need a few months for it. So do you see more ways to optimize the process? For example could I train it on Google Colab? | open | 2023-03-22T12:52:23Z | 2023-03-28T10:09:16Z | https://github.com/tatsu-lab/stanford_alpaca/issues/124 | [] | stefangrotz | 10 |
TencentARC/GFPGAN | deep-learning | 464 | How to load my own trained model | I trained on my own dataset with GFPGAN; how do I load the model I trained, and which checkpoint should I load?
| open | 2023-11-08T09:27:23Z | 2023-11-08T09:27:23Z | https://github.com/TencentARC/GFPGAN/issues/464 | [] | summerhuwenjun | 0 |
serengil/deepface | machine-learning | 713 | Does the dlib detector use GPU? | Hello
I have tried face recognition using (Facenet512 and dlib) on 300 images; it takes 8 min.
I have also tried installing TensorFlow-GPU, but the time is still 8 min.
If I use (Facenet512 and ssd), for example, it uses the GPU, but with (Facenet512 and dlib) it does not.
#######
```
python version 3.9.16
TensorFlow-GPU version 2.11.1
```
#######
Any suggestions for solving the problem?
"question"
] | Shamaseen-iotistic | 3 |
JaidedAI/EasyOCR | pytorch | 389 | Batch prediction | Hi! Is there any plan for batch inference or passing list of images/image path list in the `readtext` function?
I see that `recognize` takes a `batch_size` arg, but `img_cv_grey` seems to be a single image only. How do I infer on a batch of images?
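Until list input is supported, a thin wrapper over the single-image call works as a stopgap. A generic sketch (`read_fn` stands in for `reader.readtext`; the chunking only mirrors the `batch_size` idea):

```python
def readtext_batch(read_fn, image_paths, batch_size=4):
    """Applies a single-image OCR callable to a list of paths."""
    results = []
    for start in range(0, len(image_paths), batch_size):
        for path in image_paths[start:start + batch_size]:
            results.append(read_fn(path))
    return results
```

For reference, the current `recognize` signature: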
```python
def recognize(self, img_cv_grey, horizontal_list=None, free_list=None,\
decoder = 'greedy', beamWidth= 5, batch_size = 1,\
workers = 0, allowlist = None, blocklist = None, detail = 1,\
rotation_info = None,\
paragraph = False,\
contrast_ths = 0.1,adjust_contrast = 0.5, filter_ths = 0.003,\
reformat=True):
``` | closed | 2021-03-08T15:52:48Z | 2022-03-02T09:24:34Z | https://github.com/JaidedAI/EasyOCR/issues/389 | [] | aniketmaurya | 3 |
google-deepmind/sonnet | tensorflow | 225 | __init__() got an unexpected keyword argument 'w_init' | I got this error when I try to create an instance with initializers. This error comes from **sonnet.nets.MLP**. Any solutions? Thanks. | open | 2021-12-08T17:20:05Z | 2021-12-09T09:12:06Z | https://github.com/google-deepmind/sonnet/issues/225 | [] | gautica | 1 |
zappa/Zappa | django | 664 | [Migrated] zappa deploy fails with import error even though the module is present in the package | Originally from: https://github.com/Miserlou/Zappa/issues/1687 by [wholeinsoul](https://github.com/wholeinsoul)
## Context
I have a Django project that works locally and on Elastic Beanstalk. Now I'm trying to deploy it to AWS Lambda using Zappa, but I'm getting the following error even though the module urllib3 is present in the package zip.
```
No module named urllib3: ImportError
Traceback (most recent call last):
File "/var/task/handler.py", line 580, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 245, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 151, in __init__
wsgi_app_function = get_django_wsgi(self.settings.DJANGO_SETTINGS)
File "/var/task/zappa/ext/django_zappa.py", line 20, in get_django_wsgi
return get_wsgi_application()
File "/tmp/task/django/core/wsgi.py", line 14, in get_wsgi_application
django.setup()
File "/tmp/task/django/__init__.py", line 17, in setup
configure_logging(settings.LOGGING_CONFIG, settings.LOGGING)
File "/tmp/task/django/conf/__init__.py", line 48, in __getattr__
self._setup(name)
File "/tmp/task/django/conf/__init__.py", line 44, in _setup
self._wrapped = Settings(settings_module)
File "/tmp/task/django/conf/__init__.py", line 92, in __init__
mod = importlib.import_module(self.SETTINGS_MODULE)
File "/usr/lib64/python2.7/importlib/__init__.py", line 37, in import_module
__import__(name)
File "/var/task/api/settings/test.py", line 4, in <module>
File "/tmp/task/elasticsearch/__init__.py", line 17, in <module>
from .client import Elasticsearch
File "/tmp/task/elasticsearch/client/__init__.py", line 5, in <module>
from ..transport import Transport
File "/tmp/task/elasticsearch/transport.py", line 5, in <module>
from .connection import Urllib3HttpConnection
File "/tmp/task/elasticsearch/connection/__init__.py", line 3, in <module>
from .http_urllib3 import Urllib3HttpConnection
File "/tmp/task/elasticsearch/connection/http_urllib3.py", line 2, in <module>
import urllib3
```
My project directory structure is:

```
<project_name>/api                  --> this has all my django apps and code
<project_name>/manage.py
<project_name>/zappa_settings.json
```

## Environment

- zappa: 0.47.0
- docker image built using https://github.com/danielwhatmuff/zappa
- python 2.7

```
(venv)bash-4.2# pip freeze | grep url
urllib3==1.20
```

zappa_settings.json:
```
{
"dev": {
"django_settings": "api.settings.test",
"profile_name": null,
"project_name": "server",
"runtime": "python2.7",
"s3_bucket": "zappa-server-localhost",
"slim_handler": true,
"exclude": [
"*.mp4",
"*.ogv",
"*.webm",
"logs",
"*.log",
"media",
"static_dirs",
".git",
".elasticbeanstalk",
".ropeproject"
],
"environment_variables": {
//removed
}
},
}
```
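Independent of Zappa, the zip contents can be checked directly to confirm the module is really there. A generic stdlib sketch (the archive name in the comment is hypothetical):

```python
import zipfile

def package_contains(zip_path, module_name):
    """True if the archive holds the module as a package or a .py file."""
    prefix = module_name + "/"
    with zipfile.ZipFile(zip_path) as zf:
        return any(
            name == module_name + ".py" or name.startswith(prefix)
            for name in zf.namelist()
        )

# e.g. package_contains("handler_venv.zip", "urllib3")
```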
| closed | 2021-02-20T12:32:38Z | 2024-04-13T17:36:44Z | https://github.com/zappa/Zappa/issues/664 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
tflearn/tflearn | data-science | 974 | Fast way to load multiple models from files? | I'm building a system for classifying images into multiple categories, and I am approaching the problem using multiple networks (all my networks have the same structure and output `[0,1]` or `[1,0]` (Yes or No)).
There's (for the sake of the example) a network for dogs, another network for cats, the third network is for cars, and so and so...
**My first question**: Is this kind of approach the popular one, or is there a different (and hopefully better) approach for classifying a LOT of different subjects?
If I'm on the right path, I wondered if there a fast way to load the models from their files using TFLearn.
Currently, I have a function to define my model structure:
def define_network_model(category_id):
"""
Defines a network structure
"""`
# Defining image augmentation
# We want to create a stilted wider dataset
aug = ImageAugmentation()
aug.add_random_flip_leftright()
aug.add_random_blur(sigma_max=3.)
aug.add_random_rotation(max_angle=20.)
T.reset_default_graph()
# Input layer
net = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name="input", \
data_augmentation=aug)
# 2D Convolution layer
net = conv_2d(net, 32, 5, activation='relu')
net = max_pool_2d(net, 3)
.... More layers ....
# Output layer
net = fully_connected(net, 2, activation='softmax')
# Regression layer
net = regression(net, optimizer='adam', learning_rate=LEARNING_RATE, \
loss='categorical_crossentropy', name='targets')
# Creating a model
model = tflearn.DNN(net, tensorboard_dir="tensorboard", \
best_checkpoint_path=os.path.join(CHECKPOINTS_DIR, category_id), \
best_val_accuracy=0.7)
return model
And when I want to load the models, I have a categories object I'm iterating through and loading all the categories:
models = {}
for id, cat in categories.items():
m = define_network_model(id) # Creating the structure
m.load(os.path.join(MODELS_DIR, id)) # Loading the file
models[id] = m # Appending to the models list for later evaluation
Defining the network every iteration takes **a LOT** of time (>2sec for every iteration).
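One pattern that sidesteps the per-iteration definition cost is to build each model only when it is first needed. A framework-agnostic lazy-loading sketch (the builder and `load` calls stand in for the TFLearn ones):

```python
class LazyModel:
    """Defers graph construction and weight loading until first use."""
    def __init__(self, build_fn, weights_path):
        self.build_fn = build_fn        # e.g. define_network_model
        self.weights_path = weights_path
        self._model = None

    def get(self):
        if self._model is None:         # pay the definition cost once
            self._model = self.build_fn()
            self._model.load(self.weights_path)
        return self._model
```

Each category would hold a `LazyModel(lambda: define_network_model(cid), path)`; nothing is defined until that category is actually evaluated, and repeated evaluations reuse the built model.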
**My second question**: Am I missing something here? Can the structure be cloned from a model to another in a faster way, or am I just on the wrong path? | open | 2017-12-09T15:44:48Z | 2018-06-29T07:40:19Z | https://github.com/tflearn/tflearn/issues/974 | [] | yotam180 | 1 |
QuivrHQ/quivr | api | 3428 | Add Structured Parsing | From a `BaseModel` class, we return the parsed file contents in a structured form. | closed | 2024-10-25T08:23:53Z | 2025-01-29T08:03:52Z | https://github.com/QuivrHQ/quivr/issues/3428 | [
"enhancement"
] | chloedia | 2 |
huggingface/diffusers | deep-learning | 10144 | Why is the Mochi diffusers video output worse than the official Mochi code? | ### Describe the bug
The video quality is worse than that of the official Mochi code.
### Reproduction
Run the code with the official prompt.
### Logs
_No response_
### System Info
diffusers@main
### Who can help?
@a-r-r-o-w @yiyixuxu | closed | 2024-12-07T05:53:57Z | 2025-01-07T15:38:38Z | https://github.com/huggingface/diffusers/issues/10144 | [
"bug",
"stale"
] | foreverpiano | 10 |
deepfakes/faceswap | machine-learning | 1,035 | Using TensorFlow backend | I can't use my GPU.
============ System Information ============
encoding: cp936
git_branch: master
git_commits: 2f15597 requirements.txt - typofix
gpu_cuda: No global version found. Check Conda packages for Conda Cuda
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: GeForce RTX 2060
gpu_devices_active: GPU_0
gpu_driver: 436.30
gpu_vram: GPU_0: 6144MB
os_machine: AMD64
os_platform: Windows-10-10.0.18362-SP0
os_release: 10
py_command: C:\Users\Administrator\faceswap/faceswap.py gui
py_conda_version: conda 4.8.3
py_implementation: CPython
py_version: 3.7.7
py_virtual_env: True
sys_cores: 6
sys_processor: AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
sys_ram: Total: 16333MB, Available: 8629MB, Used: 7704MB, Free: 8629MB
=============== Pip Packages ===============
absl-py==0.9.0
astor==0.8.0
blinker==1.4
brotlipy==0.7.0
cachetools==4.1.0
certifi==2020.6.20
cffi==1.14.0
chardet==3.0.4
click==7.1.2
cloudpickle @ file:///tmp/build/80754af9/cloudpickle_1594141588948/work
cryptography==2.9.2
cycler==0.10.0
cytoolz==0.10.1
dask @ file:///tmp/build/80754af9/dask-core_1594156306305/work
decorator==4.4.2
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.2.2
google-auth @ file:///tmp/build/80754af9/google-auth_1594357566944/work
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.27.2
h5py==2.10.0
idna @ file:///tmp/build/80754af9/idna_1593446292537/work
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1594236160679/work
Keras==2.2.4
Keras-Applications @ file:///tmp/build/80754af9/keras-applications_1594366238411/work
Keras-Preprocessing==1.1.0
kiwisolver==1.2.0
Markdown==3.1.1
matplotlib @ file:///C:/ci/matplotlib-base_1592846084747/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
networkx @ file:///tmp/build/80754af9/networkx_1594377231366/work
numpy==1.18.5
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.3.0.36
opt-einsum==3.1.0
Pillow @ file:///C:/ci/pillow_1594298234712/work
protobuf==3.12.3
psutil==5.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.7
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
PyJWT==1.7.1
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1594392929924/work
pyparsing==2.4.7
pyreadline==2.1
PySocks @ file:///C:/ci/pysocks_1594394709107/work
python-dateutil==2.8.1
PyWavelets==1.1.1
pywin32==227
PyYAML==5.3.1
requests @ file:///tmp/build/80754af9/requests_1592841827918/work
requests-oauthlib==1.3.0
rsa==4.0
scikit-image==0.16.2
scikit-learn @ file:///C:/ci/scikit-learn_1592847564598/work
scipy @ file:///C:/ci/scipy_1592916958183/work
six==1.15.0
tensorboard==2.2.1
tensorboard-plugin-wit==1.6.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
toolz==0.10.0
toposort==1.5
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1593446365756/work
urllib3==1.25.9
Werkzeug==0.16.1
win-inet-pton==1.1.0
wincertstore==0.2
wrapt==1.12.1
============== Conda Packages ==============
# packages in environment at C:\Users\Administrator\MiniConda3\envs\faceswap:
#
# Name Version Build Channel
_tflow_select 2.2.0 eigen
absl-py 0.9.0 py37_0
astor 0.8.0 py37_0
blas 1.0 mkl
blinker 1.4 py37_0
brotlipy 0.7.0 py37he774522_1000
ca-certificates 2020.6.24 0
cachetools 4.1.0 py_1
certifi 2020.6.20 py37_0
cffi 1.14.0 py37h7a1dbc1_0
chardet 3.0.4 py37_1003
click 7.1.2 py_0
cloudpickle 1.5.0 py_0
cryptography 2.9.2 py37h7a1dbc1_0
cycler 0.10.0 py37_0
cytoolz 0.10.1 py37he774522_0
dask-core 2.20.0 py_0
decorator 4.4.2 py_0
fastcluster 1.1.26 py37h9b59f54_1 conda-forge
ffmpeg 4.3 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.2 hd328e21_0
gast 0.2.2 py37_0
git 2.23.0 h6bb4b03_0
google-auth 1.17.2 py_0
google-auth-oauthlib 0.4.1 py_2
google-pasta 0.2.0 py_0
grpcio 1.27.2 py37h351948d_0
h5py 2.10.0 py37h5e291fa_0
hdf5 1.10.4 h7ebc959_0
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 py_0
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.2 py_0 conda-forge
intel-openmp 2020.1 216
joblib 0.16.0 py_0
jpeg 9b hb83a4c4_2
keras 2.2.4 0
keras-applications 1.0.8 py_1
keras-base 2.2.4 py37_0
keras-preprocessing 1.1.0 py_1
kiwisolver 1.2.0 py37h74a9793_0
libpng 1.6.37 h2a8f88b_0
libprotobuf 3.12.3 h7bd577a_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.2 h62dcd97_0
markdown 3.1.1 py37_0
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py37h64f37c6_0
mkl 2020.1 216
mkl-service 2.3.0 py37hb782905_0
mkl_fft 1.1.0 py37h45dec08_0
mkl_random 1.1.1 py37h47e9c7a_0
networkx 2.4 py_1
numpy 1.18.5 py37h6530119_0
numpy-base 1.18.5 py37hc3f5095_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 py_0
olefile 0.46 py37_0
opencv-python 4.3.0.36 pypi_0 pypi
openssl 1.1.1g he774522_0
opt_einsum 3.1.0 py_0
pathlib 1.0.1 py37_2
pillow 7.2.0 py37hcc1f983_0
pip 20.1.1 py37_1
protobuf 3.12.3 py37h33f27b4_0
psutil 5.7.0 py37he774522_0
pyasn1 0.4.8 py_0
pyasn1-modules 0.2.7 py_0
pycparser 2.20 py_2
pyjwt 1.7.1 py37_0
pyopenssl 19.1.0 py_1
pyparsing 2.4.7 py_0
pyqt 5.9.2 py37h6538335_2
pyreadline 2.1 py37_1
pysocks 1.7.1 py37_1
python 3.7.7 h81c818b_4
python-dateutil 2.8.1 py_0
python_abi 3.7 1_cp37m conda-forge
pywavelets 1.1.1 py37he774522_0
pywin32 227 py37he774522_1
pyyaml 5.3.1 py37he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.24.0 py_0
requests-oauthlib 1.3.0 py_0
rsa 4.0 py_0
scikit-image 0.16.2 py37h47e9c7a_0
scikit-learn 0.23.1 py37h25d0782_0
scipy 1.5.0 py37h9439919_0
setuptools 49.2.0 py37_0
sip 4.19.8 py37h6538335_0
six 1.15.0 py_0
sqlite 3.32.3 h2a8f88b_0
tensorboard 2.2.1 pyh532a8cf_0
tensorboard-plugin-wit 1.6.0 py_0
tensorflow 1.15.0 eigen_py37h9f89a44_0
tensorflow-base 1.15.0 eigen_py37h07d2309_0
tensorflow-estimator 1.15.1 pyh2649769_0
termcolor 1.1.0 py37_1
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
toolz 0.10.0 py_0
toposort 1.5 py_3 conda-forge
tornado 6.0.4 py37he774522_1
tqdm 4.47.0 py_0
urllib3 1.25.9 py_0
vc 14.1 h0510ff6_4
vs2015_runtime 14.16.27012 hf0eaf9b_3
werkzeug 0.16.1 py_0
wheel 0.34.2 py37_0
win_inet_pton 1.1.0 py37_0
wincertstore 0.2 py37_0
wrapt 1.12.1 py37he774522_1
xz 5.2.5 h62dcd97_0
yaml 0.2.5 he774522_0
zlib 1.2.11 h62dcd97_4
zstd 1.4.5 ha9fde0e_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
coverage: 68.75
mask_type: none
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
icnr_init: False
conv_aware_init: False
reflect_padding: False
penalized_mask_loss: True
loss_function: mae
learning_rate: 5e-05
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
| closed | 2020-07-18T10:01:06Z | 2020-07-18T14:47:27Z | https://github.com/deepfakes/faceswap/issues/1035 | [] | danuonuo | 4 |
wkentaro/labelme | deep-learning | 984 | Change landmark color to digit ? | **Change landmark color to digit**
I have 17 points that need to be labeled, but they are not easy to distinguish by color; I want to change the colors to numbers. Is that possible?
| closed | 2022-02-09T06:41:42Z | 2022-02-09T09:14:17Z | https://github.com/wkentaro/labelme/issues/984 | [] | jnulzl | 1 |
iperov/DeepFaceLab | deep-learning | 913 | I cannot log in to the DFL Google Colab forum | I cannot log in to [mrdeepfakes.com](https://mrdeepfakes.com/forums/thread-sfw-guide-deepfacelab-google-colab-tutorial#) `DeepFaceLab with Google Colab - Tutorial`
I seem to log in successfully, but the page reloads and displays the same page as before. | open | 2020-09-29T06:54:03Z | 2023-06-08T21:28:07Z | https://github.com/iperov/DeepFaceLab/issues/913 | [] | keisan1231 | 1 |
litestar-org/litestar | pydantic | 3,649 | Bug: Extending middlewares or dependencies in the main application does not actually register them. | ### Description
When I try to register middlewares or dependencies or even error_handlers from outside the main application, by extending them after creation rather than passing them directly, they appear to be in scope but are not actually present (or at least not recognized by Swagger, which requires them as path parameters).
Interestingly, this issue does not occur with the Router, which is paradoxical.
So, what i did wrong?
### URL to code causing the issue
https://github.com/litestar-org/litestar/blob/main/litestar/app.py
### MCVE
```python
def init_app(settings: Settings, *routers: Router) -> Litestar:
log.info("Initialize Application")
app = Litestar(
[],
path="/api",
cors_config=CORSConfig(
allow_origins=settings.server.origins,
allow_methods=cast(list[Method | Literal["*"]], settings.server.methods),
allow_headers=settings.server.headers,
allow_credentials=True,
),
csrf_config=(
CSRFConfig(secret=settings.server.csrf_secret)
if settings.server.csrf_secret
else None
),
openapi_config=(
OpenAPIConfig(
title=settings.server.title,
version=settings.server.version,
servers=[Server(url=settings.server.domain)],
)
if settings.server.title
else None
),
debug=bool(settings.server.debug),
middleware=get_current_common_middlewares(),
exception_handlers=get_current_common_exception_handlers(),
dependencies=deepcopy(proxy_app.dependencies),
state=State(proxy_app.state),
lifespan=[release_resources],
)
for router in routers:
app.register(router)
setup_dependencies(proxy_app, settings)
return app
```
```py
def setup_dependencies(app: Litestar, settings: Settings) -> None:
log.info("Setup dependencies")
engine = create_sa_engine(
settings.db.url,
pool_size=settings.db.connection_pool_size,
max_overflow=settings.db.connection_max_overflow,
pool_pre_ping=settings.db.connection_pool_pre_ping,
)
app.state.engine = engine
conn_factory = create_sa_connection_factory(engine)
manager_factory = create_db_manager_factory(conn_factory)
hasher = get_argon2_hasher()
mediator = CommandMediator()
setup_command_mediator(
mediator=mediator,
manager=manager_factory,
hasher=hasher,
)
app.dependencies["mediator"] = Provide(
singleton(mediator), use_cache=True, sync_to_thread=False
)
```
And there how i fix it
```py
def init_app(settings: Settings, *routers: Router) -> Litestar:
log.info("Initialize Application")
# NOTE It's frustrating to discover that extending or overriding the object to register dependencies or middlewares
# does not actually register anything. What a mess!
proxy_app = Litestar()
setup_dependencies(proxy_app, settings)
app = Litestar(
[],
path="/api",
cors_config=CORSConfig(
allow_origins=settings.server.origins,
allow_methods=cast(list[Method | Literal["*"]], settings.server.methods),
allow_headers=settings.server.headers,
allow_credentials=True,
),
csrf_config=(
CSRFConfig(secret=settings.server.csrf_secret)
if settings.server.csrf_secret
else None
),
openapi_config=(
OpenAPIConfig(
title=settings.server.title,
version=settings.server.version,
servers=[Server(url=settings.server.domain)],
)
if settings.server.title
else None
),
debug=bool(settings.server.debug),
middleware=get_current_common_middlewares(),
exception_handlers=get_current_common_exception_handlers(),
dependencies=deepcopy(proxy_app.dependencies),
state=State(proxy_app.state),
lifespan=[release_resources],
)
del proxy_app
for router in routers:
app.register(router)
return app
```
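For context, the observed behaviour is consistent with per-route options being copied at registration time rather than read live from the app. A toy illustration of that pattern in plain Python (this is not Litestar's actual implementation):

```python
class Route:
    """Toy route that snapshots the app's dependencies at registration."""
    def __init__(self, app):
        self.deps = dict(app.deps)  # copied once, here

class App:
    def __init__(self):
        self.deps = {}
        self.routes = []
    def register(self, route_cls):
        self.routes.append(route_cls(self))

app = App()
app.register(Route)              # snapshot taken while deps is empty
app.deps["mediator"] = object()  # later mutation: invisible to the route
print(app.routes[0].deps)        # {}
```

If that holds, mutating `app.dependencies` only affects routes registered afterwards, which matches the proxy-app workaround above.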
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Litestar Version
2.9.1
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-07-27T03:08:41Z | 2025-03-20T15:54:50Z | https://github.com/litestar-org/litestar/issues/3649 | [
"Bug :bug:"
] | hpphpro | 2 |
feature-engine/feature_engine | scikit-learn | 100 | add tests for doc build in circleci | Add automatic test for doc build as part of circleci checks
| closed | 2020-08-11T04:56:50Z | 2020-08-13T17:57:01Z | https://github.com/feature-engine/feature_engine/issues/100 | [] | solegalli | 0 |
ipyflow/ipyflow | jupyter | 101 | transfer `call_scope` for assigned-to functions | E.g. if `foo` is a function and I do `bar = foo`, `bar` is currently missing the `call_scope`, because it wasn't created with `def bar(): ...` | closed | 2022-05-08T15:57:24Z | 2022-05-28T18:03:33Z | https://github.com/ipyflow/ipyflow/issues/101 | [] | smacke | 0 |
profusion/sgqlc | graphql | 213 | Enums should be nullable on input | Input fields deriving from Enum are not properly nullified. The resulting `str(mutation)` (prettified):
```graphql
mutation {
createUser(
user: {
firstName: "Jan"
lastName: "Kowalski"
middleName: null // <-- this is string field, nullified correctly
idType: None // <-- BUG HERE: no value given to an enum should result in "null" instead
idNumber: null
// … snip
}
) {
// … snip
}
}
```
Relevant code:
```python
class IDType(sgqlc.types.Enum):
__schema__ = admin_schema
__choices__ = ('IDCard', 'Passport')
class UserInput(sgqlc.types.Input):
__schema__ = admin_schema
__field_names__ = ('first_name', 'middle_name', 'last_name', 'id_type', 'id_number', ...)
first_name = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='firstName')
last_name = sgqlc.types.Field(sgqlc.types.non_null(String), graphql_name='lastName')
middle_name = sgqlc.types.Field(String, graphql_name='middleName')
id_type = sgqlc.types.Field(IDType, graphql_name='idType')
id_number = sgqlc.types.Field(String, graphql_name='idNumber')
# ... snip
```
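The failure mode is reproducible without sgqlc: naively stringifying a missing value yields Python's literal `None` rather than GraphQL's `null`. A minimal sketch of the suggested check (plain Python, not the actual sgqlc code):

```python
def enum_to_graphql_input(value):
    """Serializes an optional enum value for a GraphQL document."""
    if value is None:   # the missing check: None must become null
        return "null"
    return str(value)

print(enum_to_graphql_input(None))      # null
print(enum_to_graphql_input("IDCard"))  # IDCard
print(str(None))                        # None  <- the buggy output
```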
Actual constructed values are `None`. It seems that the problem is in `sgqlc.types.EnumMeta.__to_graphql_input__` which should check `if value is None`. This change fixed the problem for me | closed | 2022-08-19T05:57:33Z | 2022-08-19T15:15:38Z | https://github.com/profusion/sgqlc/issues/213 | [] | kgadek | 1 |
developmentseed/lonboard | data-visualization | 566 | Why "Input being reprojected to EPSG:4326 CRS"? | I'm plotting multiple layers of a Map and I get a bunch of warning saying:
`Input being reprojected to EPSG:4326 CRS` . Why is that warning showing up? I think a more explanatory message would be helpful.
| closed | 2024-07-09T00:00:36Z | 2024-09-24T19:48:47Z | https://github.com/developmentseed/lonboard/issues/566 | [] | ncclementi | 2 |
simple-login/app | flask | 1,603 | Confusing characters | For generated email aliases, it is difficult to differentiate between
- capital i
- small L
This particularly becomes a problem when you need to write an email address by hand on paper. | open | 2023-02-24T07:20:02Z | 2023-02-24T07:20:02Z | https://github.com/simple-login/app/issues/1603 | [] | bigfoot31 | 0 |
lexiforest/curl_cffi | web-scraping | 471 | BUG: free(): invalid pointer | **Describe the bug**
I create a Session instance and use streaming responses. But after a while I get memory related errors, for example free(): invalid pointer.
**Error message content**
```
free(): invalid pointer
Fatal Python error: Aborted
Current thread 0x0000007f8947e1c0 (most recent call first):
File "/home/login/Projects/find_error/.venv/lib/python3.9/site-packages/curl_cffi/curl.py", line 324 in reset
File "/home/login/Projects/find_error/.venv/lib/python3.9/site-packages/curl_cffi/requests/session.py", line 954 in cleanup
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 329 in _invoke_callbacks
File "/usr/lib/python3.9/concurrent/futures/_base.py", line 531 in set_result
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58 in run
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 77 in _worker
File "/usr/lib/python3.9/threading.py", line 892 in run
File "/usr/lib/python3.9/threading.py", line 954 in _bootstrap_inner
File "/usr/lib/python3.9/threading.py", line 912 in _bootstrap
Thread 0x0000007f89c7f1c0 (most recent call first):
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 75 in _worker
File "/usr/lib/python3.9/threading.py", line 892 in run
File "/usr/lib/python3.9/threading.py", line 954 in _bootstrap_inner
File "/usr/lib/python3.9/threading.py", line 912 in _bootstrap
Thread 0x0000007f8b11a040 (most recent call first):
File "/home/login/Projects/find_error/error.py", line 17 in main
File "/home/login/Projects/find_error/error.py", line 21 in <module>
```
**To Reproduce**
```py
import curl_cffi.requests
def read_stream(s, url):
response = s.request('GET', url, stream=True)
data = b''.join(chunk for chunk in response.iter_content())
response.close()
return data
def main():
s = curl_cffi.requests.Session()
url = 'http://localhost:8000/200k'
for _ in range(5000):
read_stream(s, url)
if __name__ == '__main__':
main()
```
**Versions**
- OS: linux arm64
- curl_cffi version 0.71
- `pip freeze` [freeze.txt](https://github.com/user-attachments/files/18353421/freeze.txt) | closed | 2025-01-08T22:16:52Z | 2025-03-08T14:33:52Z | https://github.com/lexiforest/curl_cffi/issues/471 | [
"bug",
"help wanted",
"confirmed"
] | lihachev9 | 12 |
widgetti/solara | flask | 330 | Change select values typing | I'm not sure if I should have opened this issue in solara's repo since it seems to use reacton/ipyvuetify, but I would like to suggest changing the [Select](https://solara.dev/api/select) `values` parameter from `List[T]` to `Union[List[T], Dict[T, Any]]`.
Allowing a dict will enable users to display a good label for the user while using good typing for the developer. ipywidgets also allows dict usage in their select. In the next example the user will see the options `'unlabeled'`, `'label 01'` and `'label 02'`, and when interacting with these options the backend receives `None` or an integer `1` or `2` to work with.
```python
import ipywidgets as widgets
select = widgets.Select(options={'unlabeled': None, 'label 01': 1, 'label 02': 2}, description='')
select.observe(lambda x: print(x['new']), names='value')
select
``` | open | 2023-10-19T16:47:05Z | 2024-01-26T14:53:13Z | https://github.com/widgetti/solara/issues/330 | [] | itepifanio | 2 |
MilesCranmer/PySR | scikit-learn | 296 | Taking Derivatives of Candidate Expression Inside Custom Loss Function | Hi Miles,
I'm working on a problem in which I would like to define a custom loss function that doesn't just use the predicted values of the candidate expression given the training data ( y_pred | X ), but instead evaluates the candidate expression and its derivatives at a number of new (necessarily unknown at the beginning) points while evaluating the loss function. To draw an analogy with neural networks, assume the NN loss function has its own copy of the entire network at each step of the optimization that it uses to make predictions and take derivatives at a number of inputs previously unseen in the training set. Do you have any thoughts on how easy or tricky it would be to make something like this work? If this sounds too vague/confusing, I'd be happy to connect with you over email to provide more details about the problem. Thanks! | closed | 2023-04-19T04:13:05Z | 2023-04-19T18:16:25Z | https://github.com/MilesCranmer/PySR/issues/296 | [] | gopal-iyer | 0 |
litestar-org/polyfactory | pydantic | 431 | Bug: Attribute with `_` prefixed alias cannot be set on a factory class | ### Description
When a model has an attribute that is not private but whose alias is prefixed with an underscore (e.g. because of a third-party input data format), it is not generated by the model's factory, nor can a default be set for it on the factory class.
### URL to code causing the issue
_No response_
### MCVE
```python
import uuid

from polyfactory import Use
from polyfactory.factories.pydantic_factory import ModelFactory
from pydantic import BaseModel, Field


class A(BaseModel):
    id_field: str = Field(..., alias="_id")


class AFactory(ModelFactory[A]):
    __model__ = A

    _id = Use(lambda: str(uuid.uuid4()))
    # id_field = Use(... doesn't work either


a = AFactory.build()
```
### Steps to reproduce
_No response_
### Screenshots
_No response_
### Logs
_No response_
### Release Version
2.11.0
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2023-10-30T03:39:25Z | 2025-03-20T15:53:10Z | https://github.com/litestar-org/polyfactory/issues/431 | [
"bug"
] | uhlikfil | 4 |
ultralytics/yolov5 | deep-learning | 12,458 | How can improve my YOLOv5x-Seg model's precision? | 
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi, I've been working on a YOLOv5 model for months. I started with YOLOv4, but then decided to use segmentation, so I tried YOLOv8 and YOLOv5 for my segmentation project. After testing, YOLOv5x-Seg gave the best results, so I'm currently working with it. My project is about detecting something on the sea surface from satellite images. I made a dataset of 1000 photos (taken from a satellite hub by myself, manually segment-labeled, with only one class). Because these are satellite images, my objects are small and scattered; I guess this is the problem, maybe they are hard to detect? I trained a YOLOv5 model; it was really bad at the beginning, but after changing some settings and hyperparameters it is now not bad. After that I tried many things but couldn't improve it further. Besides this, I'm getting a weird confusion matrix that shows FP is one and TN is zero. Can you give me any advice on how to improve to over 90%? Thanks.


### Additional
_No response_ | closed | 2023-12-02T14:39:18Z | 2024-10-20T19:33:13Z | https://github.com/ultralytics/yolov5/issues/12458 | [
"question"
] | uyuk2 | 4 |
tqdm/tqdm | jupyter | 1,102 | It is not correct to take the average of speed; tqdm needs to use two EMAs. | Average speed is not the average of speed measurements, it is the ratio of the average of units and the average of time. This fact means that tqdm.update calculates average speed incorrectly in many cases.
Average speed is `(a + b)/(c + d)` and does not equal `((a/c)+(b/d))/2` in the general case.
As an example, consider the following elementary school problem. Jack wants to travel from city A to city C through city B. The distance A->B is 10 miles and B->C is 100 miles. Jack walks 2mph A->B and then drives 50mph B->C. What is Jack's average speed? It is 15.7mph. tqdm would calculate 3.84mph, which is (very) incorrect.
The solution is to keep two running averages. One for distance and the other for time, and to take their ratio as the output.
Here is a sample program demonstrating current behavior:
```
from tqdm import tqdm
from time import sleep
t = tqdm(unit='miles', total=110, smoothing=0.5, mininterval=0, miniters=0, ncols=60, postfix='\n')
sleep(5)
t.update(10)
sleep(2)
t.update(100)
```
It outputs:
```
0%| | 0/110 [00:00<?, ?miles/s,
9%|█▍ | 10/110 [00:05<00:50, 2.00miles/s,
100%|███████████████| 110/110 [00:07<00:00, 3.84miles/s,
100%|███████████████| 110/110 [00:07<00:00, 15.70miles/s,
]
```
The second-to-last output is the incorrect average speed calculated in `update`, and the last output is the true average speed calculated over the entire run.
Correct implementation would track two running averages. Average distance would be `10*0.5+100*0.5==55` and average time would be `5*0.5+2*0.5==3.5`. Their ratio is `55/3.5` which gives the correct `15.7`.
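A minimal sketch of that dual-average scheme (the class name is hypothetical, not tqdm's actual implementation), seeding each average with its first sample and using a smoothing weight of 0.5 as in the example above:

```python
class DualEMA:
    """Track EMAs of distance and time separately; their ratio is the speed."""

    def __init__(self, smoothing=0.5):
        self.smoothing = smoothing
        self.avg_n = None   # EMA of units per update
        self.avg_t = None   # EMA of elapsed time per update

    def update(self, n, dt):
        if self.avg_n is None:          # seed with the first sample
            self.avg_n, self.avg_t = n, dt
        else:
            a = self.smoothing
            self.avg_n = a * n + (1 - a) * self.avg_n
            self.avg_t = a * dt + (1 - a) * self.avg_t
        return self.avg_n / self.avg_t  # average speed, units per unit time

ema = DualEMA(smoothing=0.5)
print(ema.update(10, 5))    # 2.0
print(ema.update(100, 2))   # ~15.71, i.e. 55 / 3.5
```

With the numbers from the sample program, the second update returns 55/3.5, matching the true average speed rather than the average of the two instantaneous rates.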
The only case where tqdm currently works correctly is when `n` is always the same (eg, `1`) and the average is updated after every update (mininterval/miniters has no effect). In that case, because you're averaging `time/n` (which is *not* called the rate, it is the period) the denominator is constant and the average of the period works out to `1/(average speed)`. | closed | 2020-12-24T07:32:58Z | 2020-12-25T11:09:08Z | https://github.com/tqdm/tqdm/issues/1102 | [
"p3-enhancement 🔥",
"to-merge ↰",
"c1-quick 🕐"
] | almson | 3 |
CorentinJ/Real-Time-Voice-Cloning | python | 525 | Fine-tuning for hindi | Hi @blue-fish , I am trying to fine-tune the model to clone voices of hindi speakers. I wanted to know the steps to follow for the same and also the amount of data I'd need for the model to work well.
Edit - I shall use google colab for fine-tuning | closed | 2020-09-11T06:50:07Z | 2024-05-13T13:01:09Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/525 | [] | hetpandya | 32 |
piskvorky/gensim | nlp | 3,081 | Memory leaks when using doc_topics in LdaSeqModel | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I trained a large LdaSeqModel with 53,000 documents, 30 topics, and 18 timeslices.
I saved the model to disk because it ran for 7 days.
When extracting topic probabilities for 53,000 documents, memory usage rises above 120GB.
However, only extracting probabilities for 1,000 documents works flawlessly.
#### Steps/code/corpus to reproduce
1. Train LdaSeqModel
2. Save LdaSeqModel
2. Load LdaSeqModel from disk
4. Extract document-topic probabilities with `doc_topics`
ldaseq.corpus_len = 53,101 in the below MWE.
```python
import pandas as pd

from gensim.models import LdaSeqModel
ldaseq = LdaSeqModel.load("/ldaseq_model")
prob = (ldaseq.doc_topics(x) for x in range(ldaseq.corpus_len))
df = pd.DataFrame(prob, columns=[f"topic_{i}" for i in range(30)])
```
#### Versions
macOS-10.16-x86_64-i386-64bit
Python 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:12:38)
[Clang 11.0.1 ]
Bits 64
NumPy 1.20.1
SciPy 1.6.1
gensim 3.8.3
FAST_VERSION 1 | open | 2021-03-18T15:29:10Z | 2021-03-18T17:46:06Z | https://github.com/piskvorky/gensim/issues/3081 | [] | nikchha | 0 |
sktime/sktime | data-science | 7,896 | [ENH] Design improvement on `config` parameter for FM forecasters | **Is your feature request related to a problem? Please describe.**
Currently the below list of forecasters, use the `config` parameter to pass in the model configs.
- TinyTimeMixerForecaster
```python
def __init__(
    self,
    model_path="ibm/TTM",
    revision="main",
    validation_split=0.2,
    config=None,  # parameter in question
    ...
):
```
- HFTransformersForecaster
```python
def __init__(
    self,
    model_path: str,
    fit_strategy="minimal",
    validation_split=0.2,
    config=None,  # parameter in question.
    ...
):
```
- ChronosForecaster
```python
def __init__(
    self,
    model_path: str,
    config: dict = None,  # parameter in question.
    ...
):
```
This seems to go against the `sktime`/`sklearn` design convention of passing parameters directly into the estimator during initialization. `get_params` also returns the config as a single dictionary, and `set_params` cannot be used to set individual config parameters without passing the entire config dict again, which seems inefficient.
**Describe the solution you'd like**
Solution 1: Passing all the config parameters directly.
If the "config" dictionary is required internally, it can be constructed from the parameters. Open to discussion on other possibilities to address this design issue.
Solution 2:
I believe a hybrid approach would be a good option. Commonly used parameters can be exposed directly, while the rest can be passed in as a separate `config` parameter. This is especially useful when using TSFMs with many config parameters. If a parameter passed via the proposed `config` parameter conflicts with a directly passed parameter, the directly passed one takes priority.
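A minimal sketch of that precedence rule (the helper name is hypothetical, not sktime's actual code): directly passed parameters are merged over the `config` dict, with `None` treated as "not passed".

```python
def resolve_config(explicit, config=None):
    """Merge explicit keyword params over a config dict; explicit values win.

    Params left as None are treated as "not passed" and fall back to config.
    """
    merged = dict(config or {})
    merged.update({k: v for k, v in explicit.items() if v is not None})
    return merged

# e.g. inside a hypothetical __init__(self, context_length=None, config=None):
resolved = resolve_config(
    {"context_length": 64, "dropout": None},
    config={"context_length": 32, "dropout": 0.1},
)
print(resolved)  # {'context_length': 64, 'dropout': 0.1}
```

The resulting dict could then be used to construct the internal HF-style config object, while each exposed parameter remains individually visible to `get_params` / `set_params`.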
Solution 3:
Overidding `get_params` and `set_params`. I am not sure about the feasibility of this.
| open | 2025-02-25T14:08:02Z | 2025-02-28T14:02:06Z | https://github.com/sktime/sktime/issues/7896 | [
"API design",
"module:forecasting",
"enhancement",
"module:base-framework"
] | PranavBhatP | 6 |
Gozargah/Marzban | api | 872 | date left and total usage column in mysql | Hello, and thanks for your work.
The absence of the Data_left and total usage columns in the Marzban database is noticeable.
I think having them would help calculate the admins' consumed traffic more accurately.
"Feature"
] | tohid3aeed | 1 |
chatopera/Synonyms | nlp | 106 | WARNING:absl:not exist in w2v model: 全脂奶粉 | # description
lack of these words???
## current
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 脱脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 包邮
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: mymy
WARNING:absl:not exist in w2v model: 全脂
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 营养早餐
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 醇香
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 包邮
WARNING:absl:not exist in w2v model: 脱脂奶粉
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 全脂
WARNING:absl:not exist in w2v model: 速溶
WARNING:absl:not exist in w2v model: 59
WARNING:absl:not exist in w2v model: 包邮
WARNING:absl:not exist in w2v model: 全脂奶粉
WARNING:absl:not exist in w2v model: 原装
WARNING:absl:not exist in w2v model: 一袋
WARNING:absl:not exist in w2v model: 包邮
WARNING:absl:not exist in w2v model: 全脂奶粉
## expected
# solution
# environment
centos7
* version:3.10.2
The commit hash (`git rev-parse HEAD`)
| closed | 2020-06-28T08:15:06Z | 2020-10-01T11:28:07Z | https://github.com/chatopera/Synonyms/issues/106 | [] | cancerwower | 2 |
firerpa/lamda | automation | 45 | After getting multiple elements with d(className="xxx"), how do I iterate over them? | After getting multiple elements with `d(className="xxx")`, how do I iterate over and operate on each of them? For example:
```python
element = d(className="xxx")
print(element.count())  # prints 15
for i in range(element.count()):
    element_i = element  # ???? what goes here?
    element_i.screenshot(quality=60).save(str(i) + ".png")
``` | closed | 2023-06-02T13:36:49Z | 2025-01-18T04:40:50Z | https://github.com/firerpa/lamda/issues/45 | [] | argszero | 3 |
plotly/dash | data-visualization | 3,111 | Background callback polling request sends all callback inputs, states, outputs each time | **Describe your context**
As far as I can tell this is a general issue I have reproduced it in Jupyterlab with Diskcache and also in a separately served app with Gunicorn and Celery.
```
dash 2.18.2
dash_ag_grid 31.2.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
**Describe the bug**
When a Dash background callback is executed, there is one initial request that sends all of the related inputs and states to the server, followed by a series of polling requests containing the cacheKey identifying the background callback until it has completed. Looking at the requests being sent, I think all of the callback data is included with every polling request, instead of being sent once with the first request and then identified only by the cacheKey on subsequent requests.
The impact of this behaviour is that if there is a lot of input data the polling is very slow (for example uploading with dcc.Upload and then running longer processing and saving in a background callback). Since the polling interval gets long due to all the re-uploading, progress and set_props updates can be missed entirely.
**Expected behavior**
It would be preferable to make the polling as fast as possible by dropping all of the callback inputs/states and just using the cacheKey to identify it. | open | 2024-12-27T11:19:47Z | 2025-01-03T17:16:02Z | https://github.com/plotly/dash/issues/3111 | [
"performance",
"P2"
] | andredfb | 1 |
mljar/mercury | jupyter | 235 | issue with notebook files with space in the name | closed | 2023-03-29T12:15:57Z | 2023-03-31T11:45:47Z | https://github.com/mljar/mercury/issues/235 | [] | pplonski | 0 |
|
sgl-project/sglang | pytorch | 4,085 | [Bug] sgl.Engine(**dataclasses.asdict(server_args)) return_logprob=True error | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
- [ ] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [ ] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [ ] 5. Please use English, otherwise it will be closed.
### Describe the bug
I use the SGLang Engine to run offline inference with qwen2-vl-2b, and I want to obtain the logprobs of the generated tokens, but when I run the code
```python
sampling_params = {"temperature": 0.8, "top_p": 0.95, "stop": ["<|endoftext|>", "</s>"], "max_new_tokens": 1}

vlm = sgl.Engine(**dataclasses.asdict(server_args))
outputs = await vlm.async_generate(prompts, sampling_params, return_logprob=[True]*len(prompts),
                                   image_data=image_datas)
```
and
```python
vlm = sgl.Engine(**dataclasses.asdict(server_args))
outputs = await vlm.async_generate(prompts, sampling_params, return_logprob=True,
                                   image_data=image_datas)
```
the errors are the same as
```
[2025-03-05 15:02:47 TP0] max_total_num_tokens=2303336, chunked_prefill_size=-1, max_prefill_tokens=16384, max_running_requests=4097, context_len=32768
[2025-03-05 15:02:52 TP0] Prefill batch. #new-seq: 1, #new-token: 282, #cached-token: 0, cache hit rate: 0.00%, token usage: 0.00, #running-req: 0, #queue-req: 0
[2025-03-05 15:02:52 TP0] Scheduler hit an exception: Traceback (most recent call last):
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 1827, in run_scheduler_process
    scheduler.event_loop_normal()
  File "/home/shuxin/.local/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 478, in event_loop_normal
    result = self.run_batch(batch)
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/managers/scheduler.py", line 1080, in run_batch
    logits_output, next_token_ids = self.tp_worker.forward_batch_generation(
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/managers/tp_worker.py", line 164, in forward_batch_generation
    logits_output = self.model_runner.forward(forward_batch)
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 796, in forward
    return self.forward_extend(forward_batch)
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/model_executor/model_runner.py", line 761, in forward_extend
    return self.model.forward(
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/models/qwen2_vl.py", line 571, in forward
    return self.logits_processor(
  File "/home/shuxin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/shuxin/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/shuxin/miniconda3/envs/qwen-vl/lib/python3.10/site-packages/sglang/srt/layers/logits_processor.py", line 164, in forward
    pruned_input_ids.append(input_ids[pt + start_len : pt + extend_len])
TypeError: 'NoneType' object is not subscriptable
```
[2025-03-05 15:02:52] Received sigquit from a child proces. It usually means the child failed.
Where is my setting incorrect?
### Reproduction
qwen2-vl-2b
### Environment
linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.4 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
sglang-0.4.3.post2 | closed | 2025-03-05T07:16:44Z | 2025-03-06T15:30:41Z | https://github.com/sgl-project/sglang/issues/4085 | [] | Young1993 | 2 |