repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (sequence) | user_login (string) | comments_count (int64) |
---|---|---|---|---|---|---|---|---|---|---|---|
Lightning-AI/pytorch-lightning | machine-learning | 20,558 | Error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory | ### Bug description
I’m using PyTorch Lightning DDP training with batch size = 16 and 8 (GPUs per node) * 2 (nodes) = 16 total GPUs. However, I got the following
error, which happens in the ModelCheckpoint callback. There seems to be a failure during synchronization between nodes when saving the model checkpoint. When I decreased the batch size to 4, the error disappeared. Can anyone help me?
```
- type: ModelCheckpoint
every_n_train_steps: 2000
save_top_k: 30
monitor: "step"
filename: "checkpoint_{epoch}-{step}"
```
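For reference, a plain-Python equivalent of the callback config above might look roughly like this (a sketch using only the arguments shown in the YAML; the surrounding Trainer setup is minimal and illustrative):
```python
from lightning.pytorch import Trainer
from lightning.pytorch.callbacks import ModelCheckpoint

# Mirror of the YAML callback entry above.
checkpoint_cb = ModelCheckpoint(
    every_n_train_steps=2000,
    save_top_k=30,
    monitor="step",
    filename="checkpoint_{epoch}-{step}",
)
trainer = Trainer(callbacks=[checkpoint_cb])
```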
Stack:
```
[rank2]: Traceback (most recent call last):
[rank2]: File "/workspace/weiyh2@xiaopeng.com/xpilot_vision/ai_foundation/projects/e2e_aeb/main.py", line 130, in <module>
[rank2]: main()
[rank2]: File "/workspace/weiyh2@xiaopeng.com/xpilot_vision/ai_foundation/projects/e2e_aeb/main.py", line 121, in main
[rank2]: runner.train(resume_from=ckpt_path)
[rank2]: File "/workspace/weiyh2@xiaopeng.com/xpilot_vision/ai_foundation/projects/e2e_aeb/flow/runner/xflow_runner.py", line 38, in train
[rank2]: self.trainer.fit(
[rank2]: File "/workspace/weiyh2@xiaopeng.com/xpilot_vision/ai_foundation/xflow/xflow/lightning/trainer/xflow_trainer.py", line 356, in fit
[rank2]: super().fit(
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/trainer.py", line 543, in fit
[rank2]: call._call_and_handle_interrupt(
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/call.py", line 43, in _call_and_handle_interrupt
[rank2]: return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/strategies/launchers/subprocess_script.py", line 105, in launch
[rank2]: return function(*args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/trainer.py", line 579, in _fit_impl
[rank2]: self._run(model, ckpt_path=ckpt_path)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/trainer.py", line 986, in _run
[rank2]: results = self._run_stage()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/trainer.py", line 1030, in _run_stage
[rank2]: self.fit_loop.run()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/loops/fit_loop.py", line 206, in run
[rank2]: self.on_advance_end()
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/loops/fit_loop.py", line 378, in on_advance_end
[rank2]: call._call_callback_hooks(trainer, "on_train_epoch_end", monitoring_callbacks=True)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/trainer/call.py", line 210, in _call_callback_hooks
[rank2]: fn(trainer, trainer.lightning_module, *args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 323, in on_train_epoch_end
[rank2]: self._save_topk_checkpoint(trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 383, in _save_topk_checkpoint
[rank2]: self._save_monitor_checkpoint(trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 703, in _save_monitor_checkpoint
[rank2]: self._update_best_and_save(current, trainer, monitor_candidates)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 732, in _update_best_and_save
[rank2]: filepath = self._get_metric_interpolated_filepath_name(monitor_candidates, trainer, del_filepath)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 661, in _get_metric_interpolated_filepath_name
[rank2]: while self.file_exists(filepath, trainer) and filepath != del_filepath:
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/callbacks/model_checkpoint.py", line 774, in file_exists
[rank2]: return trainer.strategy.broadcast(exists)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/lightning/pytorch/strategies/ddp.py", line 307, in broadcast
[rank2]: torch.distributed.broadcast_object_list(obj, src, group=_group.WORLD)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py", line 75, in wrapper
[rank2]: return func(*args, **kwargs)
[rank2]: File "/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py", line 2636, in broadcast_object_list
[rank2]: object_tensor = torch.empty( # type: ignore[call-overload]
[rank2]: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate more than 1EB memory.
```
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
```python
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.5.0):
#- PyTorch Version (e.g., 2.5):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | open | 2025-01-22T11:49:59Z | 2025-01-22T11:50:16Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20558 | [
"bug",
"needs triage",
"ver: 2.3.x"
] | Neronjust2017 | 0 |
tqdm/tqdm | jupyter | 1,071 | Notebook progress bar hides input() | Visual output bug:
This code hides the input() dialog; I cannot type anything, while print statements work fine:
```python
from tqdm.notebook import tqdm
print('Before tqdm')
tqdm_bar = tqdm(total=1)
print('After tqdm')
test = input('\n\nenter : \n')
tqdm_bar.update(1)
tqdm_bar.close()
```
This code works just fine:
```python
from tqdm import tqdm
print('Before tqdm')
tqdm_bar = tqdm(total=1)
print('After tqdm')
test = input('\n\nenter : \n')
tqdm_bar.update(1)
tqdm_bar.close()
```
that "tqdm.notebook" does something with input() dialog.
please fix this.... | open | 2020-11-08T08:33:35Z | 2020-11-08T10:29:32Z | https://github.com/tqdm/tqdm/issues/1071 | [] | everestokok | 0 |
ultralytics/ultralytics | deep-learning | 18,733 | Evaluating a model that does multiple tasks | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
If a model is used for different things: let's say I have a pose estimation model that I use to detect the bounding boxes of people working in a factory. Then I use the same model to detect whether those people are sitting or standing in their workspace; for this I only care about when they are around the desk, not while they are in other places in the factory.
Should I have different test sets for this model? The first test set I would use to evaluate the keypoints and bounding boxes of the person, regardless of the category, and the second test set I would use to evaluate how well the model classifies between the person-sitting and person-standing classes. The second test set should only have images from the places in the factory that have desks.
### Additional
_No response_ | closed | 2025-01-17T10:46:52Z | 2025-01-20T08:19:59Z | https://github.com/ultralytics/ultralytics/issues/18733 | [
"question",
"pose"
] | uran-lajci | 4 |
widgetti/solara | fastapi | 14 | Support for route in solara.ListItem() | Often you will need to point a list item at a Solara route. We could add a `path_or_route` param to `ListItem()` to support Vue routes (similar to `solara.Link()`). | open | 2022-09-01T10:41:58Z | 2022-09-01T10:42:21Z | https://github.com/widgetti/solara/issues/14 | [] | prionkor | 0 |
alirezamika/autoscraper | automation | 47 | Pagination | Hi, how can I handle pagination, for example if I want to fetch comments and reviews?
And is there a way to detect/handle consecutive pages other than by listing them, the way general scrapers have a click function to move to different pages or perform actions? | closed | 2021-01-25T10:18:26Z | 2021-07-28T13:39:11Z | https://github.com/alirezamika/autoscraper/issues/47 | [] | programmeddeath1 | 1 |
openapi-generators/openapi-python-client | rest-api | 1,091 | allOf fails if it references a type that also uses allOf with just single item | **Describe the bug**
Conditions:
- Schema A is a type with any definition.
- Schema B contains only an `allOf` with a single element referencing Schema A.
- Schema C contains an `allOf` that 1. references Schema B and 2. adds a property.
Expected behavior:
- Spec is valid. Schema B should be treated as exactly equivalent to Schema A (in other words, C becomes an extension of A with an extra property).
Observed behavior:
- Parsing fails. Error message is "Unable to process schema <path to schema C>".
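As a rough illustration of the three-schema layout described above (schema and property names here are invented; the actual failing spec is in the gist linked below):
```python
# Hypothetical components section mirroring the conditions above.
components = {
    "schemas": {
        # Schema A: any ordinary type definition.
        "SchemaA": {"type": "object", "properties": {"name": {"type": "string"}}},
        # Schema B: only an allOf with a single element referencing Schema A.
        "SchemaB": {"allOf": [{"$ref": "#/components/schemas/SchemaA"}]},
        # Schema C: an allOf that references Schema B and adds a property.
        "SchemaC": {
            "allOf": [
                {"$ref": "#/components/schemas/SchemaB"},
                {"type": "object", "properties": {"extra": {"type": "string"}}},
            ]
        },
    }
}
```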
**OpenAPI Spec File**
https://gist.github.com/eli-bl/8f5c7d1d872d9fda5379fa6370dab6a8
**Desktop (please complete the following information):**
- OS: macOS 14.5
- Python Version: 3.8.15
- openapi-python-client version 0.21.2
| closed | 2024-08-06T18:57:33Z | 2024-08-25T02:58:03Z | https://github.com/openapi-generators/openapi-python-client/issues/1091 | [] | eli-bl | 0 |
nolar/kopf | asyncio | 560 | Stop the handler execution after deleting the object inside the handler | ## Problem
When I delete a resource inside a handler, Kopf tries to patch the object and runs its consistency checks, but this is not needed because the object does not exist anymore. This consumes CPU and locks the interpreter.
## Proposal
Maybe provide an object that can be created inside the handler and called to break execution manually after the delete.
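A rough sketch of what such an escape hatch could look like from inside a handler (the resource kind, the Kubernetes client call, and especially the `StopExecution` name are placeholders invented to illustrate the proposal, not existing Kopf API):
```python
import kopf
import kubernetes

@kopf.on.update("example.com", "v1", "widgets")
def delete_and_stop(name, namespace, **_):
    # Delete the handled object through the Kubernetes API (client/config setup omitted).
    api = kubernetes.client.CustomObjectsApi()
    api.delete_namespaced_custom_object("example.com", "v1", namespace, "widgets", name)
    # Hypothetical: tell Kopf the object is gone, so it can skip the
    # follow-up patching and consistency checks instead of spinning on them.
    raise kopf.StopExecution()
```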
## Checklist
- [x] Many users can benefit from this feature, it is not a one-time case
- [x] The proposal is related to the K8s operator framework, not to the K8s client libraries
| open | 2020-10-05T19:11:06Z | 2020-12-08T04:31:47Z | https://github.com/nolar/kopf/issues/560 | [
"enhancement"
] | Nonname123 | 5 |
iperov/DeepFaceLab | deep-learning | 5,403 | TypeError: Can't parse 'center'. Sequence item with index 0 has a wrong type | ## Expected behavior
python main.py train --training-data-src-dir workspace/data_src/aligned --training-data-dst-dir workspace/data_dst/aligned --model-dir workspace/model --model AMP --no-preview
## Actual behavior
Traceback (most recent call last):
File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/DeepFaceLab/core/joblib/SubprocessGenerator.py", line 54, in process_func
gen_data = next (self.generator_func)
File "/DeepFaceLab/samplelib/SampleGeneratorFace.py", line 136, in batch_func
raise Exception ("Exception occured in sample %s. Error: %s" % (sample.filename, traceback.format_exc() ) )
Exception: Exception occured in sample /DeepFaceLab/workspace/data_dst/aligned/00001_0.jpg. Error: Traceback (most recent call last):
File "/DeepFaceLab/samplelib/SampleGeneratorFace.py", line 134, in batch_func
x, = SampleProcessor.process ([sample], self.sample_process_options, self.output_sample_types, self.debug, ct_sample=ct_sample)
File "/DeepFaceLab/samplelib/SampleProcessor.py", line 99, in process
rnd_state=warp_rnd_state)
File "/DeepFaceLab/core/imagelib/warp.py", line 145, in gen_warp_params
random_transform_mat = cv2.getRotationMatrix2D((w // 2, w // 2), rotation, scale)
TypeError: Can't parse 'center'. Sequence item with index 0 has a wrong type
## Steps to reproduce
One difference is that I'm using opencv-python-headless instead of opencv-python, as that caused another error.
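For reference, the failing call in the traceback is `cv2.getRotationMatrix2D((w // 2, w // 2), rotation, scale)`; newer opencv-python builds are stricter about the types inside the `center` tuple. A hedged illustration of the kind of explicit cast that typically avoids this parse error (example values are made up; this is not a verified DeepFaceLab patch):
```python
import cv2
import numpy as np

w, rotation, scale = np.float32(256), 10.0, 1.0
# Cast to built-in Python ints so the OpenCV binding can parse the center tuple.
center = (int(w) // 2, int(w) // 2)
random_transform_mat = cv2.getRotationMatrix2D(center, rotation, scale)
print(random_transform_mat.shape)  # (2, 3)
```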
## Other relevant information
- **Command line used (if not specified in steps to reproduce):** python main.py train --training-data-src-dir workspace/data_src/aligned --training-data-dst-dir workspace/data_dst/aligned --model-dir workspace/model --model AMP --no-preview
- **Operating system and version:** Linux
- **Python version:** 3.6.9 | closed | 2021-10-07T18:00:24Z | 2023-10-14T19:36:42Z | https://github.com/iperov/DeepFaceLab/issues/5403 | [] | a178235 | 6 |
robotframework/robotframework | automation | 4,717 | Support stack trace logging with locals | I'm quite fond of Python's logging framework's ability to be configured to your liking.
One of the things I like to do is to add a custom formatter, often to the root file/console logger, that formats the exceptions differently than the default. I do this primarily to enable local capture, as I find that can greatly reduce debugging time.
<details><summary>Example of such a stacktrace</summary>
<pre>
$ robot --loglevel TRACE:INFO --outputdir ./output --xunit junit.xml --debugfile debug.txt --exitonerror --nostatusrc -s Experiments test_data
==============================================================================
Test Data
==============================================================================
Test Data.Experiments
==============================================================================
[ ERROR ] Caught exception!
Traceback (most recent call last):
File "....../Utilities.py", line 56, in raise_an_exception
raise Exception("An exception.")
a_local = True
b_local = 420.69
log = <Logger opso.Utilities (NOTSET)>
Exception: An exception.
Logging test F
</pre>
</details>
Unfortunately, the way robotframework handles it's interactions with the logging framework makes it quite difficult to cleanly do such things, especially if I want it to appear as such in the robot log.
This is my workaround so far is to put this code in a utilities library that is imported as a listener or keyword library:
```python
class FormatterWithLocals(logging.Formatter):
def formatException(self, ei) -> str:
# noinspection PyBroadException
try:
tbe = traceback.TracebackException(*ei, capture_locals=True)
s = "".join(tbe.format())
except Exception:
logging.exception("Exception trying to format a stacktrace with captured locals, trying again without.")
tbe = traceback.TracebackException(*ei, capture_locals=False)
s = "".join(tbe.format())
return s.removesuffix("\n")
# fixme: Access to internals :(
logging._defaultFormatter = FormatterWithLocals()
```
This access to `_defaultFormatter` is required because in `robot/output/pyloggingconf.py` (via function `robot_handler_enabled`) the root logger gets a new handler appended to it without any way to configure it. This then ends up using the default formatter.
A downside of this workaround (aside from the obviously bad accessing of privates) is that it only applies after robot has imported the relevant library. A way to make this ASAP is to put this in a pre-run modifier.
A few other possible workarounds:
- Since the handler is first added in robot's main and subsequent calls to `robot_handler_enabled` are no-ops, you could wait until you're inside the run context and modify the handler by calling its `setFormatter` function. This does not require private access, but does run the risk of changes in robot's internal behaviour breaking the change without notice.
- Preempt `robot_handler_enabled` by already adding a `RobotHandler` to the root logger (perhaps via a logging config file) that is configured as you want it. This is perhaps the cleanest way to handle this, but it also requires that the `robot_handler_enabled` behaviour stays the same. I don't know if this behaves nicely with what robot expects, given that you'd be adding log messages before it would normally be the case. It also requires a hook somewhere before the normal call happens; the most obvious candidate is a pre-run modifier.
- Preempt robot's main entirely and set up logging before even invoking it, maybe even monkeypatching the `RobotHandler` or `robot_handler_enabled`. This is probably what we'll end up doing long term, as it has benefits for our use case not relevant to this discussion.
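For reference, the kind of standardized mechanism wished for in the conclusion below would presumably let the custom formatter be declared through `logging.config.dictConfig`, roughly like this (a sketch, not an existing Robot Framework feature; the module path of the formatter class is hypothetical):
```python
import logging.config

LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        # "()" lets dictConfig instantiate a custom formatter class.
        "with_locals": {"()": "my_listener_library.FormatterWithLocals"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "with_locals"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
}
logging.config.dictConfig(LOGGING)
```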
I don't like any of these solutions. I would prefer to see robot support a way to set up the logging configuration via a more standardized mechanism ([logging.config](https://docs.python.org/3/library/logging.config.html)). I'm filing this issue as a feature request, but I don't know exactly what the feature would look like. The logging in robot is already quite complex, so I wouldn't want to overload it even further. | open | 2023-04-03T09:22:39Z | 2025-01-07T16:28:52Z | https://github.com/robotframework/robotframework/issues/4717 | [] | dries007 | 4 |
DistrictDataLabs/yellowbrick | scikit-learn | 801 | ValueError: too many values to unpack (expected 2) using load_occupancy() | **Describe the bug**
A clear and concise description of what the bug is.
In the feature analysis step, the error occurs at the statement `X, y = load_occupancy()`:
"ValueError: too many values to unpack (expected 2)"
However, if only one variable is used on the left-hand side of the assignment, the error does not occur (e.g., `y = load_occupancy()`).
**To Reproduce**
```python
# Steps to reproduce the behavior (code snippet):
# Should include imports, dataset loading, and execution
# Add the traceback below
```
**Dataset**
Did you use a specific dataset to produce the bug? Where can we access it?
**Expected behavior**
A clear and concise description of what you expected to happen.
**Traceback**
```
If applicable, add the traceback from the exception.
```
```
ValueError                         Traceback (most recent call last)
<ipython-input-3-859602daf482> in <module>
      3
      4 # Load the classification data set
----> 5 X, y = load_occupancy()
      6 print(X)
      7
ValueError: too many values to unpack (expected 2)
```
**Desktop (please complete the following information):**
- OS: [e.g. macOS]
- Python Version [e.g. 2.7, 3.6, miniconda]
- Yellowbrick Version [e.g. 0.7]
Windows
Python 3.7
Yellowbrick most current one (Mar 29, 2019)
**Additional context**
Add any other context about the problem here.
| closed | 2019-03-31T13:38:29Z | 2019-04-15T00:40:12Z | https://github.com/DistrictDataLabs/yellowbrick/issues/801 | [
"gone-stale"
] | chiahsuy | 4 |
BeanieODM/beanie | asyncio | 151 | `Indexed` breaks pydantic models | ```python
>>> from beanie import Indexed
>>> from pydantic import BaseModel
>>> class Foo(BaseModel):
... foo: int
...
>>> Foo(foo=1)
Foo(foo=1)
>>> # `Foo` works properly
>>> IndexedFoo = Indexed(Foo)
>>> IndexedFoo(foo=1)
Foo()
>>> # `IndexedFoo` is broken
```
This is blocking me from creating an index on an embedded document by doing so:
```python
class Boo(Document):
foo: Indexed(Foo)
```
Since I don't know the internals of pydantic, I cannot figure out why the model is broken. I am now working around this by patching `Indexed` like this:
```diff
 class NewType(typ):
     _indexed = (index_type, kwargs)
-    def __new__(cls, *args, **kwargs):
-        return typ.__new__(typ, *args, **kwargs)
-
 NewType.__name__ = f"Indexed {typ.__name__}"
 return NewType
``` | closed | 2021-11-29T16:16:04Z | 2021-11-29T17:05:01Z | https://github.com/BeanieODM/beanie/issues/151 | [] | shniubobo | 2 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 96 | Problem when using SEA | 
As shown in the screenshot above, how should I solve the problem that occurs when using SEA?
| open | 2022-11-28T06:45:46Z | 2022-11-28T06:51:57Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/96 | [] | mmmmyolo | 1 |
voila-dashboards/voila | jupyter | 844 | Voila.template_paths is config=True but ignored by `collect_template_paths` | So I'd think that either it should be `config=False`, or any paths given in the config should be prepended to or somehow merged with the result of `collect_template_paths`?
As it stands it doesn't look like there is any way of meaningfully using `--Voila.template_paths`.
https://github.com/voila-dashboards/voila/blob/0f91e4076234c743f18a7eaa88ef5dd4ab1d048c/voila/app.py#L180
https://github.com/voila-dashboards/voila/blob/0f91e4076234c743f18a7eaa88ef5dd4ab1d048c/voila/app.py#L376 | open | 2021-02-25T11:58:12Z | 2021-02-25T11:58:12Z | https://github.com/voila-dashboards/voila/issues/844 | [] | kurtschelfthout | 0 |
AirtestProject/Airtest | automation | 276 | Why can't I zoom in on an image using pinch? | **bug**
- The script runs without errors; the image just never gets zoomed in.
**Steps to reproduce**
- Code: pinch('out')
**Expected behavior**
- pinch('out') should simulate the gesture that zooms in on an image, right?
**python**
- 3.6
**airtest**
1.0.25
**Device:**
- Device: [pixel]
- Android version: [Android 9]
**Environment**
- Windows 7 | closed | 2019-02-19T08:59:27Z | 2020-05-13T07:43:48Z | https://github.com/AirtestProject/Airtest/issues/276 | [] | LIMU2 | 3 |
tfranzel/drf-spectacular | rest-api | 718 | _get_sidecar_url should probably use django.templatetags.static.static instead of STATIC_URL | **Describe the bug**
```python
def _get_sidecar_url(package):
return f'{settings.STATIC_URL}drf_spectacular_sidecar/{package}'
```
This may fail when using `STATICFILES_STORAGE` with a non-default backend for which `STATIC_URL` is unused or is insufficient to describe the static file location. A more robust implementation would possibly be to use `django.templatetags.static`, i.e.:
```python
from django.templatetags.static import static
def _get_sidecar_url(package):
return static(f'drf_spectacular_sidecar/{package}')
```
**To Reproduce**
Install `django-storages` and configure `STATICFILES_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"` (etc.) in your project settings. Do not set STATIC_URL in your project settings. Verify that other static files are correctly loaded from S3, but SpectacularSwaggerView is attempting incorrectly to load the sidecar files from the relative path of `/static/` instead of from the S3 server.
**Expected behavior**
When `STATICFILES_STORAGE` is configured, use `staticfiles_storage.url()` instead of `STATIC_URL` to construct the static file paths for sidecar files (just as `django.templatetags.static.static()` does).
| closed | 2022-04-26T16:32:00Z | 2022-10-04T13:52:15Z | https://github.com/tfranzel/drf-spectacular/issues/718 | [
"enhancement",
"fix confirmation pending"
] | glennmatthews | 7 |
hankcs/HanLP | nlp | 1,144 | Looking for a recommendation of a HanLP for NodeJS version that is still actively maintained | <!--
The notes and version number are required; otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the following documents and found no answer in any of them:
  - [Front-page documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and found no answer there either.
* I understand that the open-source community is a free community gathered out of shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [ ] I type an x inside these brackets to confirm that the items above have been checked.
## Version
<!-- For release versions, state the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch -->
The current latest version is:
The version I am using is:
<!-- The items above are required; the rest is free-form -->
## My question
<!-- Please describe the problem in detail; the more detailed, the more likely it is to be solved -->
## Reproducing the problem
<!-- What did you do that caused the problem? For example, did you modify the code? Did you modify the dictionaries or models? -->
### Steps
1. First...
2. Then...
3. Next...
### Triggering code
```
public void testIssue1234() throws Exception
{
CustomDictionary.add("用户词语");
System.out.println(StandardTokenizer.segment("触发问题的句子"));
}
```
### Expected output
<!-- What correct result do you expect to be output? -->
```
Expected output
```
### Actual output
<!-- What did HanLP actually output? What effect did it produce? Where is it wrong? -->
```
Actual output
```
## Other information
<!-- Any information that might be useful, including screenshots, logs, configuration files, related issues, etc. -->
As in the title, looking for recommendations from the experts, thanks | closed | 2019-04-08T08:17:54Z | 2019-04-08T14:16:44Z | https://github.com/hankcs/HanLP/issues/1144 | [
"invalid"
] | bosen365 | 2 |
netbox-community/netbox | django | 17,738 | Migration of virtualization.0040_convert_disk_size fails with very large value | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
4.0.10
### Python Version
3.10
### Steps to Reproduce
1. Running Netbox 4.0.10
2. Set Virtual Machine Resources Disk (GB) to an absurdly large value. e.g. 3700000
3. Value is accepted and record saved
4. Check release notes and dependences for 4.1.x branch of Netbox
5. Pull master branch (currently 4.1.3)
6. Run upgrade.sh against current install
7. Upgrade runs as expected until the migration virtualization.0040_convert_disk_size, which fails with "integer out of range"
8. Revert VM and set Virtual Machine Resources Disk (GB) to a more normal value. e.g. 3700
9. Conversion runs as expected
### Expected Behavior
If a value is accepted in the GUI then the conversion should be able to handle the value during the migration of the table.
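For context, a quick arithmetic check shows why such a value breaks the migration, assuming 0040_convert_disk_size rewrites the stored GB figure as MB (the 4.1 unit change) into a 32-bit integer column:
```python
disk_gb = 3_700_000                       # value accepted by the GUI in step 2
disk_mb = disk_gb * 1000                  # what the conversion would store
PG_INTEGER_MAX = 2_147_483_647            # upper bound of a PostgreSQL integer column
print(disk_mb, disk_mb > PG_INTEGER_MAX)  # 3700000000 True -> "integer out of range"
```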
### Observed Behavior
```
Running migrations:
Applying core.0011_move_objectchange... OK
Applying extras.0117_move_objectchange... OK
Applying extras.0118_customfield_uniqueness... OK
Applying extras.0119_notifications... OK
Applying circuits.0044_circuit_groups... OK
Applying core.0012_job_object_type_optional... OK
Applying extras.0120_eventrule_event_types... OK
Applying extras.0121_customfield_related_object_filter... OK
Applying dcim.0188_racktype... OK
Applying dcim.0189_moduletype_rack_airflow... OK
Applying dcim.0190_nested_modules... OK
Applying dcim.0191_module_bay_rebuild... OK
Applying ipam.0070_vlangroup_vlan_id_ranges... OK
Applying virtualization.0039_virtualmachine_serial_number... OK
Applying virtualization.0040_convert_disk_size...Traceback (most recent call last):
File "/opt/netbox/venv/lib/python3.10/site-packages/django/db/backends/utils.py", line 105, in _execute
return self.cursor.execute(sql, params)
File "/opt/netbox/venv/lib/python3.10/site-packages/psycopg/cursor.py", line 97, in execute
raise ex.with_traceback(None)
psycopg.errors.NumericValueOutOfRange: integer out of range
``` | closed | 2024-10-11T16:39:27Z | 2025-01-14T03:01:19Z | https://github.com/netbox-community/netbox/issues/17738 | [
"type: bug",
"severity: low"
] | Kage1 | 1 |
xzkostyan/clickhouse-sqlalchemy | sqlalchemy | 306 | `create_all` for all the `MaterializedView` | Hi,
The [example](https://clickhouse-sqlalchemy.readthedocs.io/en/latest/features.html#materialized-views) in the documentation shows a way to create a materialised view, this however requires a specific call for each materialised view I create. Given that I have a large number of materialised views, how can I create them all in one call? (Like in sqlalchemy `Base.metadata.create_all` for example)
Thanks! | open | 2024-03-31T06:26:40Z | 2024-03-31T06:26:40Z | https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/306 | [] | yuvalshi0 | 0 |
pytorch/pytorch | machine-learning | 149,771 | How to remove the “internal api” notice? | ### 📚 The doc issue
What is the option that will remove this notice?
> This page describes an internal API which is not intended to be used outside of the PyTorch codebase and can be modified or removed without notice.
We would like to remove it for https://pytorch.org/docs/stable/onnx_dynamo.html and a few onnx pages.
@svekars
### Suggest a potential alternative/fix
_No response_
cc @svekars @sekyondaMeta @AlannaBurke | open | 2025-03-21T22:46:30Z | 2025-03-21T22:49:35Z | https://github.com/pytorch/pytorch/issues/149771 | [
"module: docs",
"triaged"
] | justinchuby | 0 |
plotly/dash | plotly | 2,515 | [BUG] tml : "In the callback for output(s):\n plot-update.id\nOutput 0 (plot-update.id) is already in use.\nTo resolve this, set `allow_duplicate=True` on\nduplicate outputs, or combine the outputs into\none callback function, distinguishing the trigger\nby using `dash.callback_context` if necessary." message : "Duplicate callback outputs" | When I run my app in a Docker container in production, I get this error in the browser console and the app is frozen:
sh_renderer.v2_9_3m1682621169.min.js:2 {message: 'Duplicate callback outputs', html: 'In the callback for output(s):\n plot-update.id\nOu…er\nby using `dash.callback_context` if necessary.'}
Qo @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
p @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
(anonymous) @ dash_renderer.v2_9_3m1682621169.min.js:2
to @ dash_renderer.v2_9_3m1682621169.min.js:2
ks @ dash_renderer.v2_9_3m1682621169.min.js:2
Bh @ react-dom@16.v2_9_3m1682621169.14.0.min.js:126
Dj @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
unstable_runWithPriority @ react@16.v2_9_3m1682621169.14.0.min.js:25
Da @ react-dom@16.v2_9_3m1682621169.14.0.min.js:60
xb @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
(anonymous) @ react-dom@16.v2_9_3m1682621169.14.0.min.js:162
U @ react@16.v2_9_3m1682621169.14.0.min.js:16
B.port1.onmessage @ react@16.v2_9_3m1682621169.14.0.min.js:24
However, when I run my app in development and staging, I do not get this error. Any idea why this would happen in some environments but not others?
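For reference, the error text itself points at the expected fix: marking the duplicated output explicitly. A minimal sketch of that pattern in Dash 2.9+ (the component id/property and layout here are placeholders, not taken from the app above):
```python
from dash import Dash, Input, Output, html

app = Dash(__name__)
app.layout = html.Div([html.Button("go", id="btn"), html.Div(id="plot-update")])

@app.callback(
    Output("plot-update", "children", allow_duplicate=True),  # duplicate output marked explicitly
    Input("btn", "n_clicks"),
    prevent_initial_call=True,  # required whenever allow_duplicate=True is used
)
def update(n_clicks):
    return f"clicked {n_clicks}"

if __name__ == "__main__":
    app.run_server(debug=True)
```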
Production env:
```
Package Version
------------------------- -----------
alembic 1.10.4
asttokens 2.2.1
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
blosc2 2.0.0
brotlipy 0.7.0
cachelib 0.9.0
cffi 1.15.1
click 8.1.3
cloudpickle 2.2.1
colorlover 0.3.0
comm 0.1.3
contourpy 1.0.7
cycler 0.11.0
Cython 0.29.34
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.3
dash-html-components 2.0.0
dash-table 5.0.0
dash-tabulator 0.4.2
dash-uploader 0.7.0a1
dask 2023.4.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.6
diskcache 5.6.1
dnspython 2.3.0
EditorConfig 0.12.3
email-validator 2.0.0.post2
entrypoints 0.4
et-xmlfile 1.1.0
executing 1.2.0
Flask 2.2.4
Flask-Caching 1.10.1
Flask-Login 0.6.2
Flask-Migrate 4.0.4
Flask-SQLAlchemy 3.0.3
Flask-WTF 1.1.1
fonttools 4.39.3
fsspec 2023.4.0
greenlet 2.0.2
h5py 3.8.0
hdf5plugin 4.1.1
idna 3.4
importlib-metadata 6.6.0
ipyfilechooser 0.6.0
ipykernel 6.22.0
ipython 8.12.0
ipywidgets 8.0.6
itsdangerous 2.1.2
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsbeautifier 1.14.7
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyterlab-widgets 3.0.7
kiwisolver 1.4.4
locket 1.0.0
lxml 4.9.2
Mako 1.2.4
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
molmass 2023.4.10
more-itertools 8.14.0
ms-mint 0.2.3
ms-mint-app 0.2.3.2
msgpack 1.0.5
multiprocess 0.70.14
nest-asyncio 1.5.6
numexpr 2.8.4
numpy 1.24.3
openpyxl 3.1.2
packaging 21.3
pandas 2.0.1
parso 0.8.3
partd 1.4.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
platformdirs 3.5.0
plotly 5.14.1
prompt-toolkit 3.0.38
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 11.0.0
pycparser 2.21
Pygments 2.15.1
pymzml 2.5.2
pyparsing 3.0.9
pyrsistent 0.19.3
pyteomics 4.6
python-dateutil 2.8.2
pytz 2023.3
PyYAML 6.0
pyzmq 25.0.2
regex 2023.3.23
scikit-learn 1.2.2
scipy 1.10.1
seaborn 0.12.2
setuptools 65.5.1
six 1.16.0
soupsieve 2.4.1
SQLAlchemy 2.0.11
stack-data 0.6.2
tables 3.8.0
tenacity 8.2.2
threadpoolctl 3.1.0
toolz 0.12.0
tornado 6.3.1
tqdm 4.65.0
traitlets 5.9.0
typing_extensions 4.5.0
tzdata 2023.3
urllib3 2.0.0
waitress 2.1.2
wcwidth 0.2.6
Werkzeug 2.2.3
wget 3.2
wheel 0.40.0
widgetsnbextension 4.0.7
WTForms 3.0.1
XlsxWriter 3.1.0
zipp 3.15.0
```
Staging env:
```
Package Version
------------------------- ------------------------
alembic 1.10.4
asttokens 2.2.1
attrs 23.1.0
backcall 0.2.0
beautifulsoup4 4.12.2
blosc2 2.0.0
brotlipy 0.7.0
cachelib 0.9.0
cffi 1.15.1
click 8.1.3
cloudpickle 2.2.1
colorlover 0.3.0
comm 0.1.3
contourpy 1.0.7
cycler 0.11.0
Cython 0.29.34
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-extensions 0.1.3
dash-html-components 2.0.0
dash-table 5.0.0
dash-tabulator 0.4.2
dash-uploader 0.7.0a1
dask 2023.4.0
debugpy 1.6.7
decorator 5.1.1
dill 0.3.6
diskcache 5.6.1
dnspython 2.3.0
EditorConfig 0.12.3
email-validator 2.0.0.post2
entrypoints 0.4
et-xmlfile 1.1.0
executing 1.2.0
Flask 2.2.4
Flask-Caching 1.10.1
Flask-Login 0.6.2
Flask-Migrate 4.0.4
Flask-SQLAlchemy 3.0.3
Flask-WTF 1.1.1
fonttools 4.39.3
fsspec 2023.4.0
greenlet 2.0.2
h5py 3.8.0
hdf5plugin 4.1.1
idna 3.4
importlib-metadata 6.6.0
ipyfilechooser 0.6.0
ipykernel 6.22.0
ipython 8.12.0
ipywidgets 8.0.6
itsdangerous 2.1.2
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
jsbeautifier 1.14.7
jsonschema 4.17.3
jupyter_client 8.2.0
jupyter_core 5.3.0
jupyterlab-widgets 3.0.7
kiwisolver 1.4.4
locket 1.0.0
lxml 4.9.2
Mako 1.2.4
MarkupSafe 2.1.2
matplotlib 3.7.1
matplotlib-inline 0.1.6
molmass 2023.4.10
more-itertools 8.14.0
ms-mint 0.2.3
ms-mint-app 0.2.3.1+0.gf86c0d7.dirty
msgpack 1.0.5
multiprocess 0.70.14
nest-asyncio 1.5.6
numexpr 2.8.4
numpy 1.24.3
openpyxl 3.1.2
packaging 21.3
pandas 2.0.1
parso 0.8.3
partd 1.4.0
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.1.2
platformdirs 3.5.0
plotly 5.14.1
prompt-toolkit 3.0.38
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 11.0.0
pycparser 2.21
Pygments 2.15.1
pymzml 2.5.2
pyparsing 3.0.9
pyrsistent 0.19.3
pyteomics 4.6
python-dateutil 2.8.2
pytz 2023.3
PyYAML 6.0
pyzmq 25.0.2
regex 2023.3.23
scikit-learn 1.2.2
scipy 1.10.1
seaborn 0.12.2
setuptools 67.7.2
six 1.16.0
soupsieve 2.4.1
SQLAlchemy 2.0.11
stack-data 0.6.2
tables 3.8.0
tenacity 8.2.2
threadpoolctl 3.1.0
toolz 0.12.0
tornado 6.3.1
tqdm 4.65.0
traitlets 5.9.0
typing_extensions 4.5.0
tzdata 2023.3
urllib3 2.0.0
waitress 2.1.2
wcwidth 0.2.6
Werkzeug 2.2.3
wget 3.2
wheel 0.40.0
widgetsnbextension 4.0.7
WTForms 3.0.1
XlsxWriter 3.1.0
zipp 3.15.0
```
| closed | 2023-04-27T19:15:32Z | 2024-05-23T10:11:03Z | https://github.com/plotly/dash/issues/2515 | [] | sorenwacker | 1 |
comfyanonymous/ComfyUI | pytorch | 6,609 | Stability Matrix comfyui multiple errors on clean install. | ### Your question
Hello, I have an issue with ComfyUI. I installed it clean via Stability Matrix; at first I had issues running updates since I always got an error, but now I am sure it's on the newest version. The version of Flux that I'm using is flux1-dev-bnb-nf4-v2.safetensors from https://huggingface.co/lllyasviel/flux1-dev-bnb-nf4/tree/main
### Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node ID:** 4
- **Node Type:** CheckpointLoaderSimple
- **Exception Type:** RuntimeError
- **Exception Message:** Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 0]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for guidance_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for guidance_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for txt_in.weight: copying a param with shape torch.Size([6291456, 1]) from checkpoint, the shape in current model is torch.Size([3072, 4096]).
size mismatch for double_blocks.0.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.0.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.0.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.0.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.0.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.0.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.0.txt_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.0.txt_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.0.txt_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.0.txt_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.1.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.1.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.1.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.1.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.1.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.1.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.1.txt_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.1.txt_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.1.txt_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.1.txt_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.2.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.2.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.2.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.2.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.2.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.2.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.2.txt_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.2.txt_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.2.txt_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.2.txt_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.3.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.3.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.3.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.3.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.3.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.3.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.3.txt_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.3.txt_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.3.txt_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.3.txt_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.4.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.4.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for final_layer.linear.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
size mismatch for final_layer.adaLN_modulation.1.weight: copying a param with shape torch.Size([9437184, 1]) from checkpoint, the shape in current model is torch.Size([6144, 3072]).
## Stack Trace
File "A:\Data\Packages\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "A:\Data\Packages\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "A:\Data\Packages\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "A:\Data\Packages\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "A:\Data\Packages\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "A:\Data\Packages\ComfyUI\comfy\sd.py", line 849, in load_checkpoint_guess_config
out = load_state_dict_guess_config(sd, output_vae, output_clip, output_clipvision, embedding_directory, output_model, model_options, te_model_options=te_model_options)
File "A:\Data\Packages\ComfyUI\comfy\sd.py", line 890, in load_state_dict_guess_config
model.load_model_weights(sd, diffusion_model_prefix)
File "A:\Data\Packages\ComfyUI\comfy\model_base.py", line 253, in load_model_weights
m, u = self.diffusion_model.load_state_dict(to_load, strict=False)
File "A:\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2584, in load_state_dict
raise RuntimeError(
## System Information
- **ComfyUI Version:** 0.3.12
- **Arguments:** A:\Data\Packages\ComfyUI\main.py --preview-method auto
- **OS:** nt
- **Python Version:** 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1+cu124
## Devices
- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 25769279488
- **VRAM Free:** 24438112256
- **Torch VRAM Total:** 0
- **Torch VRAM Free:** 0
## Logs
2025-01-26T17:44:10.371484 - Adding extra search path checkpoints A:\Data\Models\StableDiffusion
2025-01-26T17:44:10.371484 - Adding extra search path vae A:\Data\Models\VAE
2025-01-26T17:44:10.371484 - Adding extra search path loras A:\Data\Models\Lora
2025-01-26T17:44:10.371484 - Adding extra search path loras A:\Data\Models\LyCORIS
2025-01-26T17:44:10.371484 - Adding extra search path upscale_models A:\Data\Models\ESRGAN
2025-01-26T17:44:10.371484 - Adding extra search path upscale_models A:\Data\Models\RealESRGAN
2025-01-26T17:44:10.371484 - Adding extra search path upscale_models A:\Data\Models\SwinIR
2025-01-26T17:44:10.371484 - Adding extra search path embeddings A:\Data\Models\TextualInversion
2025-01-26T17:44:10.371484 - Adding extra search path hypernetworks A:\Data\Models\Hypernetwork
2025-01-26T17:44:10.371484 - Adding extra search path controlnet A:\Data\Models\ControlNet
2025-01-26T17:44:10.371484 - Adding extra search path controlnet A:\Data\Models\T2IAdapter
2025-01-26T17:44:10.371484 - Adding extra search path clip A:\Data\Models\CLIP
2025-01-26T17:44:10.371484 - Adding extra search path clip_vision A:\Data\Models\InvokeClipVision
2025-01-26T17:44:10.371484 - Adding extra search path diffusers A:\Data\Models\Diffusers
2025-01-26T17:44:10.371484 - Adding extra search path gligen A:\Data\Models\GLIGEN
2025-01-26T17:44:10.371484 - Adding extra search path vae_approx A:\Data\Models\ApproxVAE
2025-01-26T17:44:10.371484 - Adding extra search path ipadapter A:\Data\Models\IpAdapter
2025-01-26T17:44:10.371484 - Adding extra search path ipadapter A:\Data\Models\InvokeIpAdapters15
2025-01-26T17:44:10.371484 - Adding extra search path ipadapter A:\Data\Models\InvokeIpAdaptersXl
2025-01-26T17:44:10.371484 - Adding extra search path prompt_expansion A:\Data\Models\PromptExpansion
2025-01-26T17:44:10.371484 - Adding extra search path ultralytics A:\Data\Models\Ultralytics
2025-01-26T17:44:10.371484 - Adding extra search path ultralytics_bbox A:\Data\Models\Ultralytics\bbox
2025-01-26T17:44:10.371484 - Adding extra search path ultralytics_segm A:\Data\Models\Ultralytics\segm
2025-01-26T17:44:10.371484 - Adding extra search path sams A:\Data\Models\Sams
2025-01-26T17:44:10.371484 - Adding extra search path diffusion_models A:\Data\Models\unet
2025-01-26T17:44:11.696539 - Checkpoint files will always be loaded safely.
2025-01-26T17:44:11.817143 - Total VRAM 24576 MB, total RAM 65315 MB
2025-01-26T17:44:11.817143 - pytorch version: 2.5.1+cu124
2025-01-26T17:44:11.817143 - Set vram state to: NORMAL_VRAM
2025-01-26T17:44:11.817143 - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
2025-01-26T17:44:12.904498 - Using pytorch attention
2025-01-26T17:44:14.095681 - ComfyUI version: 0.3.12
2025-01-26T17:44:14.111631 - [Prompt Server] web root: A:\Data\Packages\ComfyUI\web
2025-01-26T17:44:14.397389 -
Import times for custom nodes:
2025-01-26T17:44:14.397389 - 0.0 seconds: A:\Data\Packages\ComfyUI\custom_nodes\websocket_image_save.py
2025-01-26T17:44:14.397389 -
2025-01-26T17:44:14.403338 - Starting server
2025-01-26T17:44:14.403338 - To see the GUI go to: http://127.0.0.1:8188
2025-01-26T17:44:47.144317 - got prompt
2025-01-26T17:44:47.283850 - model weight dtype torch.bfloat16, manual cast: None
2025-01-26T17:44:47.284847 - model_type FLUX
2025-01-26T17:44:47.313751 - !!! Exception during processing !!! Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 0]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for guidance_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for guidance_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for txt_in.weight: copying a param with shape torch.Size([6291456, 1]) from checkpoint, the shape in current model is torch.Size([3072, 4096]).
size mismatch for double_blocks.0.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.0.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.0.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.0.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from
size mismatch for single_blocks.37.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.37.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for final_layer.linear.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
size mismatch for final_layer.adaLN_modulation.1.weight: copying a param with shape torch.Size([9437184, 1]) from checkpoint, the shape in current model is torch.Size([6144, 3072]).
2025-01-26T17:44:47.317737 - Traceback (most recent call last):
File "A:\Data\Packages\ComfyUI\execution.py", line 327, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "A:\Data\Packages\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
File "A:\Data\Packages\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "A:\Data\Packages\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
File "A:\Data\Packages\ComfyUI\nodes.py", line 570, in load_checkpoint
out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
File "A:\Data\Packages\ComfyUI\comfy\sd.py", line 849, in load_checkpoint_guess_config
out = load_state_dict_guess_config(sd, output_vae, output_clip, output_clipvision, embedding_directory, output_model, model_options, te_model_options=te_model_options)
File "A:\Data\Packages\ComfyUI\comfy\sd.py", line 890, in load_state_dict_guess_config
model.load_model_weights(sd, diffusion_model_prefix)
File "A:\Data\Packages\ComfyUI\comfy\model_base.py", line 253, in load_model_weights
m, u = self.diffusion_model.load_state_dict(to_load, strict=False)
File "A:\Data\Packages\ComfyUI\venv\lib\site-packages\torch\nn\modules\module.py", line 2584, in load_state_dict
raise RuntimeError(
RuntimeError: Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 0]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from
size mismatch for single_blocks.2.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.2.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.3.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.3.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.3.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.4.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.4.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.4.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.5.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.5.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.5.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.6.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.6.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.6.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.7.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.7.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.7.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.8.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.8.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.8.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.9.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.9.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.9.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.10.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.10.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.10.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for final_layer.linear.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
size mismatch for final_layer.adaLN_modulation.1.weight: copying a param with shape torch.Size([9437184, 1]) from checkpoint, the shape in current model is torch.Size([6144, 3072]).
2025-01-26T17:44:47.320727 - Prompt executed in 0.17 seconds
2025-01-26T17:51:22.533245 - got prompt
2025-01-26T17:51:22.819288 - model weight dtype torch.bfloat16, manual cast: None
2025-01-26T17:51:22.819288 - model_type FLUX
2025-01-26T17:51:22.848192 - !!! Exception during processing !!! Error(s) in loading state_dict for Flux:
size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 0]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for guidance_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for guidance_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for txt_in.weight: copying a param with shape torch.Size([6291456, 1]) from checkpoint, the shape in current model is torch.Size([3072, 4096]).
size mismatch for double_blocks.0.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.0.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.0.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.0.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.0.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.0.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.0.txt_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.0.txt_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.0.txt_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.0.txt_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.1.img_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from checkpoint, the shape in current model is torch.Size([18432, 3072]).
size mismatch for double_blocks.1.img_attn.qkv.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for double_blocks.1.img_attn.proj.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for double_blocks.1.img_mlp.0.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([12288, 3072]).
size mismatch for double_blocks.1.img_mlp.2.weight: copying a param with shape torch.Size([18874368, 1]) from checkpoint, the shape in current model is torch.Size([3072, 12288]).
size mismatch for double_blocks.1.txt_mod.lin.weight: copying a param with shape torch.Size([28311552, 1]) from
size mismatch for single_blocks.36.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for single_blocks.37.linear1.weight: copying a param with shape torch.Size([33030144, 1]) from checkpoint, the shape in current model is torch.Size([21504, 3072]).
size mismatch for single_blocks.37.linear2.weight: copying a param with shape torch.Size([23592960, 1]) from checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.37.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) from checkpoint, the shape in current model is torch.Size([9216, 3072]).
size mismatch for final_layer.linear.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([64, 3072]).
size mismatch for final_layer.adaLN_modulation.1.weight: copying a param with shape torch.Size([9437184, 1]) from checkpoint, the shape in current model is torch.Size([6144, 3072]).
2025-01-26T17:51:22.857161 - Prompt executed in 0.32 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
{"last_node_id":10,"last_link_id":9,"nodes":[{"id":3,"type":"KSampler","pos":[863,186],"size":[315,262],"flags":{},"order":5,"mode":0,"inputs":[{"name":"model","type":"MODEL","link":1},{"name":"positive","type":"CONDITIONING","link":4},{"name":"negative","type":"CONDITIONING","link":6},{"name":"latent_image","type":"LATENT","link":2}],"outputs":[{"name":"LATENT","type":"LATENT","links":[7],"slot_index":0}],"properties":{"Node name for S&R":"KSampler"},"widgets_values":[962452822958925,"randomize",10,8,"euler","normal",1]},{"id":9,"type":"SaveImage","pos":[1451,189],"size":[210,58],"flags":{},"order":7,"mode":0,"inputs":[{"name":"images","type":"IMAGE","link":9}],"outputs":[],"properties":{},"widgets_values":["ComfyUI"]},{"id":5,"type":"EmptyLatentImage","pos":[521,673],"size":[315,106],"flags":{},"order":0,"mode":0,"inputs":[],"outputs":[{"name":"LATENT","type":"LATENT","links":[2],"slot_index":0}],"properties":{"Node name for S&R":"EmptyLatentImage"},"widgets_values":[512,512,1]},{"id":8,"type":"VAEDecode","pos":[1209,188],"size":[210,46],"flags":{"collapsed":false},"order":6,"mode":0,"inputs":[{"name":"samples","type":"LATENT","link":7},{"name":"vae","type":"VAE","link":8}],"outputs":[{"name":"IMAGE","type":"IMAGE","links":[9],"slot_index":0}],"properties":{"Node name for S&R":"VAEDecode"},"widgets_values":[]},{"id":10,"type":"LoadImage","pos":[424,239],"size":[315,314],"flags":{},"order":1,"mode":0,"inputs":[],"outputs":[{"name":"IMAGE","type":"IMAGE","links":null},{"name":"MASK","type":"MASK","links":null}],"properties":{"Node name for S&R":"LoadImage"},"widgets_values":["pasted/image.png","image"]},{"id":7,"type":"CLIPTextEncode","pos":[413,389],"size":[425.27801513671875,180.6060791015625],"flags":{},"order":4,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":5}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[6],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["text, watermark"]},{"id":6,"type":"CLIPTextEncode","pos":[415,186],"size":[422.84503173828125,164.31304931640625],"flags":{},"order":3,"mode":0,"inputs":[{"name":"clip","type":"CLIP","link":3}],"outputs":[{"name":"CONDITIONING","type":"CONDITIONING","links":[4],"slot_index":0}],"properties":{"Node name for S&R":"CLIPTextEncode"},"widgets_values":["horny succubus with a red skin"]},{"id":4,"type":"CheckpointLoaderSimple","pos":[51,317],"size":[315,98],"flags":{"collapsed":false},"order":2,"mode":0,"inputs":[],"outputs":[{"name":"MODEL","type":"MODEL","links":[1],"slot_index":0},{"name":"CLIP","type":"CLIP","links":[3,5],"slot_index":1},{"name":"VAE","type":"VAE","links":[8],"slot_index":2}],"properties":{"Node name for S&R":"CheckpointLoaderSimple"},"widgets_values":["flux1-dev-bnb-nf4-v2.safetensors"]}],"links":[[1,4,0,3,0,"MODEL"],[2,5,0,3,3,"LATENT"],[3,4,1,6,0,"CLIP"],[4,6,0,3,1,"CONDITIONING"],[5,4,1,7,0,"CLIP"],[6,7,0,3,2,"CONDITIONING"],[7,3,0,8,0,"LATENT"],[8,4,2,8,1,"VAE"],[9,8,0,9,0,"IMAGE"]],"groups":[],"config":{},"extra":{"ds":{"scale":1,"offset":[469,220]}},"version":0.4}
```
### Other
I deleted a lot of repeated lines from the logs, for example: "checkpoint, the shape in current model is torch.Size([3072, 15360]).
size mismatch for single_blocks.2.modulation.lin.weight: copying a param with shape torch.Size([14155776, 1]) " | open | 2025-01-26T16:55:37Z | 2025-01-26T16:55:37Z | https://github.com/comfyanonymous/ComfyUI/issues/6609 | [
"User Support"
] | AlmostHuman34 | 0 |
reloadware/reloadium | django | 84 | Not working, Python 3.10.8, Reloadium 0.9.2 plugin version, M2 chip | ## Describe the bug*
Trying to run using reloadium after the latest update results in the issue:
`It seems like your platform or Python version are not supported yet.
Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported.
Please submit a github issue if you believe Reloadium should be working on your system at
https://github.com/reloadware/reloadium
To see the exception run reloadium with environmental variable RW_DEBUG=True`
## To Reproduce
Steps to reproduce the behavior:
1. Start any code with reloadium using latest versions using Python 3.10.8 and M2 chip
2. Get the error
## Expected behavior
For it to work.
## Desktop or remote (please complete the following information):
- OS: Mac
- OS version: 12.6
- M1 chip: Kinda?, M2 chip
- Reloadium package version: 0.9.7
- PyCharm plugin version: 0.9.2
- Editor: PyCharm 2022.3.1 (build 223.8214.51)
- Python Version: 3.10.8
- Python Architecture: 64bit
- Run mode: Both
## Additional context
Worked before the latest 0.9.2 plugin update (or maybe before the latest PyCharm update).
Running with `os.environ["RW_DEBUG"] = "True"` does not present any extra output. Am I setting the environmental variable wrong? | closed | 2023-01-19T17:31:50Z | 2023-04-23T07:56:20Z | https://github.com/reloadware/reloadium/issues/84 | [] | stormnick | 13 |
vitalik/django-ninja | django | 1,324 | [BUG] PatchDict errors with inherited schemas | **Describe the bug**
I have a schema hierarchy such as:
```python
class ViewableContent(Schema):
    name: str
    description: str = None


class MySchema(ViewableContent):
    other: str  # If I don't add a new field the problem does not arise
```
Then add a router like the following:
```python
@router.patch('/{uuid}', response={200: MySchema})
@transaction.atomic
def my_update(request: HttpRequest, uuid: str, payload: PatchDict[MySchema]):
    ...
```
When I run my application the following error is raised:
```
File "/home/user/mnp/applications/neuroglass-research/backend/neuroglass_research/api/__init__.py", line 4, in <module>
from .studies import router as studies_router
File "/home/user/mnp/applications/neuroglass-research/backend/neuroglass_research/api/studies.py", line 69, in <module>
def update_study(request: HttpRequest, study_id: int, payload: PatchDict[UpdateStudyPayload]):
~~~~~~~~~^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/mnp/lib/python3.12/site-packages/ninja/patch_dict.py", line 45, in __getitem__
new_cls = create_patch_schema(schema_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/miniconda3/envs/mnp/lib/python3.12/site-packages/ninja/patch_dict.py", line 29, in create_patch_schema
t = schema_cls.__annotations__[f]
~~~~~~~~~~~~~~~~~~~~~~~~~~^^^
KeyError: 'name'
```
**Versions:**
- Python version: 3.12
- Django version: 5.1.2
- Django-Ninja version: 1.3.0
- Pydantic version: 2.9.2
| open | 2024-10-22T10:25:13Z | 2025-03-10T07:43:26Z | https://github.com/vitalik/django-ninja/issues/1324 | [] | filippomc | 7 |
modin-project/modin | data-science | 6,500 | FEAT: HDK: Add support for datetime64 to int64 cast | Currently, the following code defaults to pandas:
```python
import modin.pandas as pd

df = pd.DataFrame({"date": [1, 2, 3]}, dtype="datetime64[ns]")
print(df.astype("int64"))
``` | closed | 2023-08-23T15:11:32Z | 2023-08-24T13:59:31Z | https://github.com/modin-project/modin/issues/6500 | [
"new feature/request 💬",
"HDK"
] | AndreyPavlenko | 0 |
sammchardy/python-binance | api | 1,093 | APIError(code=-4006): Stop price less than zero | **Describe the bug**
I am currently developing a trading bot which listens to webhooks and places futures trades based on the webhook input. So far so good: I was already able to place the first trades. For the sake of simplicity, I started with basic market orders. As a next step, I want to include a stop loss with my order.
My solution currently throws me an error from the Binance API and I can't find the source of the issue:
"an exception occured - APIError(code=-4006): Stop price less than zero."
**To Reproduce**
For the sake of simplicity, I hardcoded the function input in the example below. In reality, I derive the stop price from the mark price, leverage, and buy/sell side, and store it as a float.
client.futures_create_order(symbol='BTCUSDT', side='BUY', type='STOP_MARKET', quantity=0.001, stop_price=59553.27495, isolated='TRUE')
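For context, a rough sketch of the kind of stop-price derivation described above — the helper name, the buffer factor, and the example numbers are assumptions for illustration, not the bot's actual code:

```python
def derive_stop_price(mark_price: float, leverage: int, side: str, buffer: float = 0.5) -> float:
    """Place the stop a fraction of the 1/leverage move away from the mark price."""
    distance = mark_price / leverage * buffer
    return mark_price - distance if side == "BUY" else mark_price + distance

# e.g. a BUY at 20x leverage with mark price 60000.0 -> stop at 58500.0
stop_price = derive_stop_price(mark_price=60000.0, leverage=20, side="BUY")
```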
**Expected behavior**
Code places market order with a stop loss
**Environment (please complete the following information):**
- Python version: 3.7.9
- Virtual Env: venv
- OS: Windows 10 Pro
- python-binance version
**Logs or Additional context**
| closed | 2021-11-25T23:39:36Z | 2022-05-24T11:14:28Z | https://github.com/sammchardy/python-binance/issues/1093 | [] | anam-cara | 4 |
PaddlePaddle/ERNIE | nlp | 561 | The link to the dygraph version of ERNIE-GEN is broken | The documentation mentions that
the dygraph (dynamic graph) version of ERNIE-GEN has more concise and flexible code, and that usage should follow ERNIE-GEN Dygraph.
However, the corresponding link does not lead to the ERNIE-GEN Dygraph code.
May I ask whether the ERNIE-GEN Dygraph source code is open? Could you provide a link? | closed | 2020-09-13T08:59:32Z | 2020-11-20T11:49:39Z | https://github.com/PaddlePaddle/ERNIE/issues/561 | [
"wontfix"
] | herb711 | 4 |
pyeventsourcing/eventsourcing | sqlalchemy | 149 | exception that might not be expected | in https://github.com/johnbywater/eventsourcing/blob/2cfeff4f6bb05078f3f5110e23cba81772061ec8/eventsourcing/utils/cipher/aes.py#L71
this will raise an exception if nonce is empty (`[]`), which is possible since the length of the ciphertext is not checked. | closed | 2018-05-29T15:28:16Z | 2018-05-31T22:58:49Z | https://github.com/pyeventsourcing/eventsourcing/issues/149 | [] | mimoo | 2 |
eriklindernoren/ML-From-Scratch | machine-learning | 90 | Adam mhat and vhat updates | https://github.com/eriklindernoren/ML-From-Scratch/blob/a2806c6732eee8d27762edd6d864e0c179d8e9e8/mlfromscratch/deep_learning/optimizers.py#L125
While updating m_hat and v_hat in Adam, shouldn't we also consider the update number (t) for the weight decay?
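For reference, a minimal sketch of the textbook bias-corrected Adam step (generic variable names, shown only for comparison; this is not the repository's implementation):

```python
def adam_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; t is the 1-based update count used in the bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)  # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)  # bias-corrected second moment
    w = w - lr * m_hat / (v_hat ** 0.5 + eps)
    return w, m, v
```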
m_hat = m/(1 - pow(beta,t)) | open | 2021-02-25T07:37:15Z | 2021-02-25T07:37:15Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/90 | [] | shivamsharma01 | 0 |
slackapi/bolt-python | fastapi | 968 | Getting "Received an unexpected response for handshake" error when using socketmode | I am running Socket Mode and getting the error below on a Unix machine. The same code works on Windows.
### Reproducible in:
```
WARNING:__main__:Received an unexpected response for handshake (status: 200, response: HTTP/1.1 200 OK
Cache-Control: no-cache
X-XSS-Protection: 1
Connection: close
Content-Type: text/html; charset=utf-8
Content-Length: 5923
Pragma: no-cache, session id: 6924c984-abe2-4326-b5b6-f930598b739c)
```
#### The `slack_bolt` version
slack-bolt==1.18.0
slack-sdk==3.23.0
#### Python runtime version
Python 3.9.17
| closed | 2023-10-11T01:40:34Z | 2023-11-27T00:11:19Z | https://github.com/slackapi/bolt-python/issues/968 | [
"question",
"auto-triage-stale"
] | aksgpt | 3 |
AirtestProject/Airtest | automation | 505 | Is it possible to change the default adb path? | I want to combine this with a GUI and package it, so that users only need to open the exe file to use it. Currently I see that every run calls the adb bundled with the IDE. Is it possible to use a specified adb instead? How can I do that? | open | 2019-08-23T03:01:40Z | 2023-08-24T05:55:43Z | https://github.com/AirtestProject/Airtest/issues/505 | [] | hujingpay | 7 |
aminalaee/sqladmin | fastapi | 89 | Errors with Postgres UUID primary key | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
The call `self.pk_column.type.python_type` fails when the pk is a Postgres UUID field with a `NotImplementedError`. Maybe this call can be replaced with something along the lines of
```
from sqlalchemy.dialects.postgresql import UUID
...
def coerce_to_pk_value(self, value):
    if isinstance(self.pk_column.type, UUID):
        return str(value)
    return self.pk_column.type.python_type(value)
...
stmt = select(self.model).where(self.pk_column == self.coerce_to_pk_value(value))
```
Alternatively, if it's not desirable to import specific dialects such as Postgres, maybe we could pass a custom value coercion function to each admin model?
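A rough sketch of what that alternative could look like — the class and attribute names below are hypothetical, not part of sqladmin's API:

```python
class BaseAdmin:
    pk_coerce = None  # subclasses may set a callable, e.g. `str` for UUID primary keys

    def coerce_pk_value(self, value):
        if self.pk_coerce is not None:
            return self.pk_coerce(value)
        return self.pk_column.type.python_type(value)  # current default behaviour


class UserAdmin(BaseAdmin):
    pk_coerce = str  # UUID values coming from the URL stay as strings
```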
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
Python 3.10
### Additional context
_No response_ | closed | 2022-03-15T15:51:27Z | 2022-03-17T10:12:38Z | https://github.com/aminalaee/sqladmin/issues/89 | [
"bug"
] | tr11 | 3 |
tqdm/tqdm | pandas | 614 | support global config to override defaults | 1. look for e.g. "$HOME/.tqdm.config" to override defaults
2. alternatively/additionally use os.environ.get("TQDM_CONFIG", '')
helps with e.g. #370, #612, #619, #950, #1061, #1318
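A minimal sketch of how options 1–2 above could combine — the JSON format for both sources is an assumption here, not a decided design:

```python
import json
import os

def load_tqdm_defaults():
    """Collect default tqdm kwargs from ~/.tqdm.config, then let $TQDM_CONFIG override."""
    defaults = {}
    cfg_path = os.path.expanduser("~/.tqdm.config")
    if os.path.isfile(cfg_path):
        with open(cfg_path) as fh:
            defaults.update(json.load(fh))
    env_cfg = os.environ.get("TQDM_CONFIG", "")
    if env_cfg:
        defaults.update(json.loads(env_cfg))
    return defaults
```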
The second (os.environ) option would be great to fix https://github.com/tqdm/tqdm/issues/370#issuecomment-421809342 (have `tmux` alter `TQDM_CONFIG` to have `dynamic_ncols`) | closed | 2018-09-17T09:29:59Z | 2023-08-12T19:03:19Z | https://github.com/tqdm/tqdm/issues/614 | [
"p3-enhancement 🔥",
"to-merge ↰",
"c3-small 🕒"
] | casperdcl | 14 |
modin-project/modin | pandas | 7,349 | modin with ray engine hang | My code hangs; can you advise what I am missing?
modin 0.31.0
modin-spreadsheet 0.1.2
ray 2.32.0
```
import argparse
import modin.pandas as pd
import os
os.environ["MODIN_ENGINE"] = "ray"
os.environ["RAY_memory_monitor_refresh_ms"] = "0"
parser = argparse.ArgumentParser()  # assumed: the parser definition was omitted in the original snippet
parser.add_argument("path")
args = parser.parse_args()
print("1")
df = pd.read_parquet(args.path) # hang at this line
print("2")
```
**output**
```
1
2024-07-22 15:04:45,809 INFO worker.py:1788 -- Started a local Ray instance.
2024-07-22 15:04:45,809 INFO worker.py:1788 -- Started a local Ray instance.
(raylet) [2024-07-22 15:06:45,793 E 1438 1438] (raylet) node_manager.cc:3064: 1 Workers (tasks / actors) killed due to memory pressure (OOM), 0 Workers crashed due to other reasons at node (ID: 82849b5e133875b55fbd974d5392b702c2705b5dc16c6c8ca24aaead, IP: x.x.x.x) over the last time period. To see more information about the Workers killed on this node, use `ray logs raylet.out -ip x.x.x.x`
(raylet)
(raylet) Refer to the documentation on how to address the out of memory issue: https://docs.ray.io/en/latest/ray-core/scheduling/ray-oom-prevention.html. Consider provisioning more memory on this node or reducing task parallelism by requesting more CPUs per task. To adjust the kill threshold, set the environment variable `RAY_memory_usage_threshold` when starting Ray. To disable worker killing, set the environment variable `RAY_memory_monitor_refresh_ms` to zero.
```
Extra info: I see memory usage is maxed out at 64 GB. This might be related and could have caused the hang. Is there any option I need to set for Modin?
| open | 2024-07-22T06:58:01Z | 2024-07-30T14:12:48Z | https://github.com/modin-project/modin/issues/7349 | [
"question ❓",
"Triage 🩹"
] | cometta | 5 |
PokemonGoF/PokemonGo-Bot | automation | 5,719 | ValueError: bad marshal data (unknown type code) | Hi,
I used this fantastic Pokemon Go Bot for a week and all was working very well, but today when I opened it, I found the following error :
Traceback (most recent call last):
File "pokecli.py", line 50, in <module>
from pokemongo_bot import PokemonGoBot, TreeConfigBuilder
File "/home/andrea/PokemonGo-Bot/pokemongo_bot/__init__.py", line 23, in <module>
from . import cell_workers
File "/home/andrea/PokemonGo-Bot/pokemongo_bot/cell_workers/__init__.py", line 29, in <module>
from .telegram_task import TelegramTask
File "/home/andrea/PokemonGo-Bot/pokemongo_bot/cell_workers/telegram_task.py", line 6, in <module>
from pokemongo_bot.event_handlers import TelegramHandler
File "/home/andrea/PokemonGo-Bot/pokemongo_bot/event_handlers/__init__.py", line 3, in <module>
from .socketio_handler import SocketIoHandler
File "/home/andrea/PokemonGo-Bot/pokemongo_bot/event_handlers/socketio_handler.py", line 5, in <module>
from socketIO_client import SocketIO
File "/home/andrea/PokemonGo-Bot/local/lib/python2.7/site-packages/socketIO_client/__init__.py", line 9, in <module>
from .parsers import (
ValueError: bad marshal data (unknown type code)
mar 27 set 2016, 22.34.20, CEST Pokebot Stopped.
Press any button or wait 20 seconds to continue.
I just changed my location in the auth.json file.
What can I do to fix the issue?
thank you
| closed | 2016-09-27T20:43:53Z | 2017-10-28T09:39:59Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5719 | [] | barna92 | 4 |
biolab/orange3 | numpy | 6,207 | Data Sampler: Fixed proportion with 0% crash | <!--
Thanks for taking the time to report a bug!
If you're raising an issue about an add-on (i.e., installed via Options > Add-ons), raise an issue in the relevant add-on's issue tracker instead. See: https://github.com/biolab?q=orange3
To fix the bug, we need to be able to reproduce it. Please answer the following questions to the best of your ability.
-->
**What's wrong?**
```
Traceback (most recent call last):
File "/Users/vesna/orange3/Orange/widgets/data/owdatasampler.py", line 228, in commit
self.updateindices()
File "/Users/vesna/orange3/Orange/widgets/data/owdatasampler.py", line 293, in updateindices
self.indices = self.sample(data_length, size, stratified=False)
File "/Users/vesna/orange3/Orange/widgets/data/owdatasampler.py", line 310, in sample
return sampler(self.data)
File "/Users/vesna/orange3/Orange/widgets/data/owdatasampler.py", line 427, in __call__
return SampleRandomN(n, self.stratified,
File "/Users/vesna/orange3/Orange/widgets/data/owdatasampler.py", line 416, in __call__
return next(iter(ind))
File "/Users/vesna/miniconda3/lib/python3.8/site-packages/sklearn/model_selection/_split.py", line 1622, in split
for train, test in self._iter_indices(X, y, groups):
File "/Users/vesna/miniconda3/lib/python3.8/site-packages/sklearn/model_selection/_split.py", line 1730, in _iter_indices
n_train, n_test = _validate_shuffle_split(
File "/Users/vesna/miniconda3/lib/python3.8/site-packages/sklearn/model_selection/_split.py", line 2071, in _validate_shuffle_split
raise ValueError(
ValueError: test_size=0 should be either positive and smaller than the number of samples 150 or a float in the (0, 1) range
```
**How can we reproduce the problem?**
File (iris) -> Data Sampler (Fixed proportion of data, set to 0%, click Sample Data)
**What's your environment?**
- Operating system: macos
- Orange version: master
- How you installed Orange: pip
| closed | 2022-11-18T09:20:07Z | 2022-12-01T13:03:32Z | https://github.com/biolab/orange3/issues/6207 | [
"bug"
] | VesnaT | 0 |
seleniumbase/SeleniumBase | web-scraping | 3,278 | Run CDP Mode in Github actions | I am trying to get a Python script using SeleniumBase CDP Mode to work using GitHub Actions. I know I am supposed to be using Xvfb, but I am not sure how to configure it for this application.
I get an error when it tries to run the Python code.
Please provide a simple working example of how to run in CDP Mode using GitHub Actions.
Here is my YAML file
```yaml
# This is a basic workflow to help you get started with Actions
name: Python Script

# Controls when the workflow will run
on:
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v4

      - name: Install dependencies
        run: |
          sudo apt install xvfb
          python -m pip install --upgrade pip wheel setuptools
          pip install seleniumbase
          pip install pyvirtualdisplay

      - name: Install Chrome
        run: |
          sudo apt install google-chrome-stable

      # Runs a single command using the runners shell
      - name: Run python script
        run: python .github/workflows/example.py
```
And here is my example.py
"""Example of using CDP Mode with WebDriver"""
from seleniumbase import SB
import pyvirtualdisplay
def main():
print("Hello World")
display = pyvirtualdisplay.Display()
display.start()
with SB(uc=True, test=True, locale_code="en",xvfb=True) as sb:
url = "https://www.priceline.com/"
sb.activate_cdp_mode(url)
print(sb.get_title())
if __name__ == '__main__':
main() | closed | 2024-11-21T06:02:13Z | 2024-11-21T13:06:23Z | https://github.com/seleniumbase/SeleniumBase/issues/3278 | [
"self-resolved",
"UC Mode / CDP Mode"
] | cohnhead66 | 1 |
dropbox/PyHive | sqlalchemy | 49 | can't read Hive NULLs to Decimal | There does not seem to be a check for None in HiveDecimal.process_result_value().
If there is a NULL in a Hive column expected as Decimal, it causes TypeError: Cannot convert None to Decimal
Shouldn't this return NaN instead?
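For illustration, a None-aware conversion could look roughly like this (a sketch, not PyHive's actual `HiveDecimal` code; whether to return `None` or `Decimal("NaN")` is exactly the open question):

```python
from decimal import Decimal

def process_result_value(value):
    if value is None:          # Hive NULL
        return Decimal("NaN")  # or `return None`, depending on the desired semantics
    return Decimal(value)
```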
| closed | 2016-05-26T14:28:14Z | 2017-05-13T08:21:40Z | https://github.com/dropbox/PyHive/issues/49 | [
"bug"
] | mschmill | 2 |
litestar-org/litestar | pydantic | 3,868 | Enhancement: avoid circular dependencies on litestar-htmx | ### Summary
Treat `htmx` as any other optional dependency.
### Basic Example
The 2.13.0 change introduced a hard dependency on `litestar-htmx` (which has a hard, but unpinned, dependency on `litestar`).
While `pip` can handle this, it is troublesome for downstreams (such as https://github.com/conda-forge/litestar-feedstock/pull/11) to bootstrap.
### Drawbacks and Impact
Users will need to use the existing `litestar[htmx]` rather than get `litestar-htmx` "for free".
### Unresolved questions
`litestar-htmx` has further packaging issues: https://github.com/litestar-org/litestar-htmx/issues/7 | closed | 2024-11-20T19:51:18Z | 2025-03-20T15:55:02Z | https://github.com/litestar-org/litestar/issues/3868 | [
"Enhancement"
] | bollwyvl | 8 |
dask/dask | scikit-learn | 11,006 | as of v2024.3.1, comparing a 1D dask.array.Array to a dask.dataframe.Series fails | **Describe the issue**:
Prior to `dask < 2024.3.0`, it was possible to directly combine a 1-dimensional Dask Array and a Dask Series with operators like `==`, `-`, and `+`, like this:
```python
(d_ser == d_arr).compute()
```
As of `v2024.3.1`, with the switch to `dask-expr`, that instead raises an exception like this:
```text
ValueError: For a 1d array, columns must be a scalar or single element list
```
**Minimal Complete Verifiable Example**:
Consider the following Python code.
```python
import dask.array as da
import dask.dataframe as dd
import pandas as pd
d_series = dd.from_pandas(pd.Series([1.0, 2.0, 3.0, 4.0]), npartitions=1)
d_array = d_series.to_dask_array()
(d_series == d_array).compute()
(d_series - d_array).compute()
(d_series + d_array).compute()
```
In a Python 3.11 environment with `dask==2024.2.1`, `numpy==1.26.4`, and `pandas==2.2.1`, that succeeds.
```python
(d_series == d_array).compute()
# 0 True
# 1 True
# 2 True
# 3 True
# dtype: bool
```
In a Python 3.11 environment with `dask==2024.3.1`, that latest `dask-expr` (1.0.3), and the same `numpy` and `pandas` versions, it fails.
```text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.11/site-packages/dask_expr/_collection.py", line 160, in _wrap_expr_op
other = from_dask_array(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/dask_expr/_collection.py", line 4730, in from_dask_array
df = from_dask_array(x, columns=columns, index=index, meta=meta)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/dask/dataframe/io/io.py", line 433, in from_dask_array
meta = _meta_from_array(x, columns, index, meta=meta)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/dask/dataframe/io/io.py", line 86, in _meta_from_array
raise ValueError(
ValueError: For a 1d array, columns must be a scalar or single element list
```
**Anything else we need to know?**:
I've read #10995. I don't believe this issue is already captured elsewhere here or at https://github.com/dask/dask-expr/issues.
I discovered this behavior change because it broke LightGBM's CI: https://github.com/microsoft/LightGBM/issues/6365. It was convenient there to be able to compare two 1-dimensional arrays to each other and not need to care whether one was a dask Array and another was a Dask Series, but if that's a bad practice or no longer supported here, it won't be a huge problem to change it. I don't know if other projects rely on that behavior though.
Note that `pandas` and `numpy` objects can be compared in these ways (as of `pandas==2.2.1` and `numpy==1.26.4`):
```python
import pandas as pd
import numpy as np
ser = pd.Series([1.0, 2.0, 3.0, 4.0])
arr = ser.values
ser == arr
arr == ser
```
**Environment**:
I set these up using the official `python:3.11` container images.
```shell
docker run --rm -it python:3.11 bash
```
Latest `dask`, `numpy`, `pandas`, and `dask-expr`:
```shell
docker run --rm -it python:3.11 bash
pip install \
'dask==2024.3.1' \
'distributed==2024.3.1' \
'dask-expr==1.0.3' \
'numpy==1.26.4' \
'pandas==2.2.1'
```
Slightly older `dask` and the newest `dask-expr` that supported it:
```shell
docker run --rm -it python:3.11 bash
pip install \
'dask==2024.2.1' \
'distributed==2024.2.1' \
'dask-expr==0.5.3' \
'numpy==1.26.4' \
'pandas==2.2.1'
```
Thanks for your time and consideration. | closed | 2024-03-17T03:47:01Z | 2024-03-18T13:54:21Z | https://github.com/dask/dask/issues/11006 | [
"bug",
"dask-expr"
] | jameslamb | 1 |
deepset-ai/haystack | nlp | 8,918 | Expanded Model Support in LLMEvaluator | It would be great to be able to natively use model providers other than OpenAI for the `LLMEvaluator` and related evaluators i.e. any that make use of LLMs.
Right now it's [fixed to the OpenAIGenerator](https://github.com/deepset-ai/haystack/blob/76753fd4c643158e76d0d3f443ff4fe98b0d2481/haystack/components/evaluators/llm_evaluator.py#L112).
**Describe the solution you'd like**
Not sure of design but maybe the generator could be passed in?
**Describe alternatives you've considered**
Just swapping out the generator can work for some of the models but is a bit hacky / not guaranteed to work. But it does mean that users, if they really want to, do have this option right now.
```python
import os
from typing import List
from haystack.components.evaluators import LLMEvaluator
from haystack_integrations.components.generators.amazon_bedrock import AmazonBedrockGenerator
claude_generator = AmazonBedrockGenerator(
model="anthropic.claude-3-5-sonnet-20241022-v2:0",
max_length=600,
)
# To prevent the Evaluator from complaining
os.environ['OPENAI_API_KEY'] = ''
llm_evaluator = LLMEvaluator(
instructions="Respond with valid json. Is this answer problematic for children?",
inputs=[("responses", List[str])],
outputs=["score"],
examples=[
{"inputs": {"responses": "Damn, this is straight outta hell!!!"}, "outputs": {"score": 1}},
{"inputs": {"responses": "Football is the most popular sport."}, "outputs": {"score": 0}},
],
)
# Swap out the generator
llm_evaluator.generator = claude_generator
responses = [
"Football is the most popular sport with around 4 billion followers worldwide",
"Python language was created by Guido van Rossum.",
]
results = llm_evaluator.run(responses=responses)
print(results)
# {'results': [{'score': 0}, {'score': 0}]}
```
| closed | 2025-02-25T09:27:19Z | 2025-03-21T08:53:03Z | https://github.com/deepset-ai/haystack/issues/8918 | [
"P2"
] | bglearning | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 904 | If np.isnan(grad_norm.cpu()) | AttributeError: 'float' object has no attribute 'cpu'
We're getting this issue before training. We commented it out (line 192 in synthesizer/train.py) and the training started happening. Would there be any consequences affecting our output for this? Thank you. | closed | 2021-11-24T08:52:38Z | 2021-12-28T12:33:21Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/904 | [] | Fzr2k | 0 |
eriklindernoren/ML-From-Scratch | machine-learning | 8 | Demo.py AttributeError: no attribute 'solvers' | ```
Traceback (most recent call last):
File "demo.py", line 23, in <module>
from support_vector_machine import SupportVectorMachine
File "/media/test/V2/lab/hack/ML-From-Scratch/supervised_learning/support_vector_machine.py", line 20, in <module>
cvxopt.solvers.options['show_progress'] = False
AttributeError: 'module' object has no attribute 'solvers'
``` | closed | 2017-03-02T12:11:28Z | 2017-03-02T18:28:36Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/8 | [] | indrajithi | 3 |
hack4impact/flask-base | flask | 216 | Why is SSL disabled by default in production? | Hi,
I noticed that [SSL is by default disabled](https://github.com/hack4impact/flask-base/blob/f77172ab47d200f3e8294a9a52e041b6b4c59425/config.py#L103) in the ProductionConfig class in config.py. The line of code is:
```python
SSL_DISABLE = (os.environ.get('SSL_DISABLE', 'True') == 'True')
```
In case `SSL_DISABLE` is not set, the default value will be `True`, which from my point of view means SSL (or TLS, for that matter) will be disabled.
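A small standalone sketch of the behaviour in question (the `'False'` fallback shown second is a hypothetical alternative, not the project's code):

```python
import os

os.environ.pop('SSL_DISABLE', None)  # simulate the variable being unset

# Current fallback: an unset variable evaluates to True, i.e. SSL is disabled.
print(os.environ.get('SSL_DISABLE', 'True') == 'True')   # True

# Hypothetical 'False' fallback: SSL stays enabled unless explicitly disabled.
print(os.environ.get('SSL_DISABLE', 'False') == 'True')  # False
```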
| closed | 2021-07-01T12:23:11Z | 2021-08-31T04:46:31Z | https://github.com/hack4impact/flask-base/issues/216 | [] | oerd | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 1,233 | Error: metadata-generation-failed - related to scikit-learn? - python 3.11 | Arch Linux, kernel 6.4.2-arch1-1, python 3.11.3 (GCC 13.1.1), pip 23.1.2
Thank you for any help!
I followed these steps:
```
python -m venv .env
pip install ffmpeg
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
```
Below is the log of `pip install -r requirements.txt`, with the relevant error:
```
Collecting inflect==5.3.0 (from -r requirements.txt (line 1))
Using cached inflect-5.3.0-py3-none-any.whl (32 kB)
Collecting librosa==0.8.1 (from -r requirements.txt (line 2))
Using cached librosa-0.8.1-py3-none-any.whl (203 kB)
Collecting matplotlib==3.5.1 (from -r requirements.txt (line 3))
Using cached matplotlib-3.5.1.tar.gz (35.3 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Installing backend dependencies: started
Installing backend dependencies: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting numpy==1.20.3 (from -r requirements.txt (line 4))
Using cached numpy-1.20.3.zip (7.8 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting Pillow==8.4.0 (from -r requirements.txt (line 5))
Using cached Pillow-8.4.0.tar.gz (49.4 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'done'
Collecting PyQt5==5.15.6 (from -r requirements.txt (line 6))
Using cached PyQt5-5.15.6-cp36-abi3-manylinux1_x86_64.whl (8.3 MB)
Collecting scikit-learn==1.0.2 (from -r requirements.txt (line 7))
Using cached scikit-learn-1.0.2.tar.gz (6.7 MB)
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [261 lines of output]
Partial import of sklearn during the build process.
setup.py:128: DeprecationWarning:
`numpy.distutils` is deprecated since NumPy 1.23.0, as a result
of the deprecation of `distutils` itself. It will be removed for
Python >= 3.12. For older Python versions it will remain present.
It is recommended to use `setuptools < 60.0` for those Python versions.
For more details, see:
https://numpy.org/devdocs/reference/distutils_status_migration.html
from numpy.distutils.command.build_ext import build_ext # noqa
INFO: C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC
INFO: compile options: '-c'
INFO: gcc: test_program.c
INFO: gcc objects/test_program.o -o test_program
INFO: C compiler: gcc -DNDEBUG -g -fwrapv -O3 -Wall -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -ffat-lto-objects -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -march=x86-64 -mtune=generic -O3 -pipe -fno-plt -fexceptions -Wp,-D_FORTIFY_SOURCE=2 -Wformat -Werror=format-security -fstack-clash-protection -fcf-protection -g -ffile-prefix-map=/build/python/src=/usr/src/debug/python -flto=auto -fPIC
INFO: compile options: '-c'
extra options: '-fopenmp'
INFO: gcc: test_program.c
INFO: gcc objects/test_program.o -o test_program -fopenmp
Compiling sklearn/__check_build/_check_build.pyx because it changed.
Compiling sklearn/preprocessing/_csr_polynomial_expansion.pyx because it changed.
Compiling sklearn/cluster/_dbscan_inner.pyx because it changed.
Compiling sklearn/cluster/_hierarchical_fast.pyx because it changed.
Compiling sklearn/cluster/_k_means_common.pyx because it changed.
Compiling sklearn/cluster/_k_means_lloyd.pyx because it changed.
Compiling sklearn/cluster/_k_means_elkan.pyx because it changed.
Compiling sklearn/cluster/_k_means_minibatch.pyx because it changed.
Compiling sklearn/datasets/_svmlight_format_fast.pyx because it changed.
Compiling sklearn/decomposition/_online_lda_fast.pyx because it changed.
Compiling sklearn/decomposition/_cdnmf_fast.pyx because it changed.
Compiling sklearn/ensemble/_gradient_boosting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/histogram.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/splitting.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_binning.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_loss.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/common.pyx because it changed.
Compiling sklearn/ensemble/_hist_gradient_boosting/utils.pyx because it changed.
Compiling sklearn/feature_extraction/_hashing_fast.pyx because it changed.
Compiling sklearn/manifold/_utils.pyx because it changed.
Compiling sklearn/manifold/_barnes_hut_tsne.pyx because it changed.
Compiling sklearn/metrics/cluster/_expected_mutual_info_fast.pyx because it changed.
Compiling sklearn/metrics/_pairwise_fast.pyx because it changed.
Compiling sklearn/metrics/_dist_metrics.pyx because it changed.
Compiling sklearn/neighbors/_ball_tree.pyx because it changed.
Compiling sklearn/neighbors/_kd_tree.pyx because it changed.
Compiling sklearn/neighbors/_partition_nodes.pyx because it changed.
Compiling sklearn/neighbors/_quad_tree.pyx because it changed.
Compiling sklearn/tree/_tree.pyx because it changed.
Compiling sklearn/tree/_splitter.pyx because it changed.
Compiling sklearn/tree/_criterion.pyx because it changed.
Compiling sklearn/tree/_utils.pyx because it changed.
Compiling sklearn/utils/sparsefuncs_fast.pyx because it changed.
Compiling sklearn/utils/_cython_blas.pyx because it changed.
Compiling sklearn/utils/arrayfuncs.pyx because it changed.
Compiling sklearn/utils/murmurhash.pyx because it changed.
Compiling sklearn/utils/_fast_dict.pyx because it changed.
Compiling sklearn/utils/_openmp_helpers.pyx because it changed.
Compiling sklearn/utils/_seq_dataset.pyx because it changed.
Compiling sklearn/utils/_weight_vector.pyx because it changed.
Compiling sklearn/utils/_random.pyx because it changed.
Compiling sklearn/utils/_logistic_sigmoid.pyx because it changed.
Compiling sklearn/utils/_readonly_array_wrapper.pyx because it changed.
Compiling sklearn/utils/_typedefs.pyx because it changed.
Compiling sklearn/svm/_newrand.pyx because it changed.
Compiling sklearn/svm/_libsvm.pyx because it changed.
Compiling sklearn/svm/_liblinear.pyx because it changed.
Compiling sklearn/svm/_libsvm_sparse.pyx because it changed.
Compiling sklearn/linear_model/_cd_fast.pyx because it changed.
Compiling sklearn/linear_model/_sgd_fast.pyx because it changed.
Compiling sklearn/linear_model/_sag_fast.pyx because it changed.
Compiling sklearn/_isotonic.pyx because it changed.
[ 1/55] Cythonizing sklearn/__check_build/_check_build.pyx
[ 2/55] Cythonizing sklearn/_isotonic.pyx
[ 3/55] Cythonizing sklearn/cluster/_dbscan_inner.pyx
[ 4/55] Cythonizing sklearn/cluster/_hierarchical_fast.pyx
[ 5/55] Cythonizing sklearn/cluster/_k_means_common.pyx
[ 6/55] Cythonizing sklearn/cluster/_k_means_elkan.pyx
[ 7/55] Cythonizing sklearn/cluster/_k_means_lloyd.pyx
[ 8/55] Cythonizing sklearn/cluster/_k_means_minibatch.pyx
[ 9/55] Cythonizing sklearn/datasets/_svmlight_format_fast.pyx
[10/55] Cythonizing sklearn/decomposition/_cdnmf_fast.pyx
[11/55] Cythonizing sklearn/decomposition/_online_lda_fast.pyx
[12/55] Cythonizing sklearn/ensemble/_gradient_boosting.pyx
[13/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_binning.pyx
[14/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_bitset.pyx
[15/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_gradient_boosting.pyx
[16/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_loss.pyx
[17/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/_predictor.pyx
[18/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/common.pyx
[19/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/histogram.pyx
[20/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/splitting.pyx
[21/55] Cythonizing sklearn/ensemble/_hist_gradient_boosting/utils.pyx
[22/55] Cythonizing sklearn/feature_extraction/_hashing_fast.pyx
[23/55] Cythonizing sklearn/linear_model/_cd_fast.pyx
[24/55] Cythonizing sklearn/linear_model/_sag_fast.pyx
[25/55] Cythonizing sklearn/linear_model/_sgd_fast.pyx
[26/55] Cythonizing sklearn/manifold/_barnes_hut_tsne.pyx
[27/55] Cythonizing sklearn/manifold/_utils.pyx
[28/55] Cythonizing sklearn/metrics/_dist_metrics.pyx
[29/55] Cythonizing sklearn/metrics/_pairwise_fast.pyx
[30/55] Cythonizing sklearn/metrics/cluster/_expected_mutual_info_fast.pyx
[31/55] Cythonizing sklearn/neighbors/_ball_tree.pyx
[32/55] Cythonizing sklearn/neighbors/_kd_tree.pyx
[33/55] Cythonizing sklearn/neighbors/_partition_nodes.pyx
[34/55] Cythonizing sklearn/neighbors/_quad_tree.pyx
[35/55] Cythonizing sklearn/preprocessing/_csr_polynomial_expansion.pyx
[36/55] Cythonizing sklearn/svm/_liblinear.pyx
[37/55] Cythonizing sklearn/svm/_libsvm.pyx
[38/55] Cythonizing sklearn/svm/_libsvm_sparse.pyx
[39/55] Cythonizing sklearn/svm/_newrand.pyx
[40/55] Cythonizing sklearn/tree/_criterion.pyx
[41/55] Cythonizing sklearn/tree/_splitter.pyx
[42/55] Cythonizing sklearn/tree/_tree.pyx
[43/55] Cythonizing sklearn/tree/_utils.pyx
[44/55] Cythonizing sklearn/utils/_cython_blas.pyx
[45/55] Cythonizing sklearn/utils/_fast_dict.pyx
[46/55] Cythonizing sklearn/utils/_logistic_sigmoid.pyx
[47/55] Cythonizing sklearn/utils/_openmp_helpers.pyx
[48/55] Cythonizing sklearn/utils/_random.pyx
[49/55] Cythonizing sklearn/utils/_readonly_array_wrapper.pyx
[50/55] Cythonizing sklearn/utils/_seq_dataset.pyx
[51/55] Cythonizing sklearn/utils/_typedefs.pyx
[52/55] Cythonizing sklearn/utils/_weight_vector.pyx
[53/55] Cythonizing sklearn/utils/arrayfuncs.pyx
[54/55] Cythonizing sklearn/utils/murmurhash.pyx
[55/55] Cythonizing sklearn/utils/sparsefuncs_fast.pyx
running dist_info
running build_src
INFO: build_src
INFO: building library "libsvm-skl" sources
INFO: building library "liblinear-skl" sources
INFO: building extension "sklearn.__check_build._check_build" sources
INFO: building extension "sklearn.preprocessing._csr_polynomial_expansion" sources
INFO: building extension "sklearn.cluster._dbscan_inner" sources
INFO: building extension "sklearn.cluster._hierarchical_fast" sources
INFO: building extension "sklearn.cluster._k_means_common" sources
INFO: building extension "sklearn.cluster._k_means_lloyd" sources
INFO: building extension "sklearn.cluster._k_means_elkan" sources
INFO: building extension "sklearn.cluster._k_means_minibatch" sources
INFO: building extension "sklearn.datasets._svmlight_format_fast" sources
INFO: building extension "sklearn.decomposition._online_lda_fast" sources
INFO: building extension "sklearn.decomposition._cdnmf_fast" sources
INFO: building extension "sklearn.ensemble._gradient_boosting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._gradient_boosting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.histogram" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.splitting" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._binning" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._predictor" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._loss" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting._bitset" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.common" sources
INFO: building extension "sklearn.ensemble._hist_gradient_boosting.utils" sources
INFO: building extension "sklearn.feature_extraction._hashing_fast" sources
INFO: building extension "sklearn.manifold._utils" sources
INFO: building extension "sklearn.manifold._barnes_hut_tsne" sources
INFO: building extension "sklearn.metrics.cluster._expected_mutual_info_fast" sources
INFO: building extension "sklearn.metrics._pairwise_fast" sources
INFO: building extension "sklearn.metrics._dist_metrics" sources
INFO: building extension "sklearn.neighbors._ball_tree" sources
INFO: building extension "sklearn.neighbors._kd_tree" sources
INFO: building extension "sklearn.neighbors._partition_nodes" sources
INFO: building extension "sklearn.neighbors._quad_tree" sources
INFO: building extension "sklearn.tree._tree" sources
INFO: building extension "sklearn.tree._splitter" sources
INFO: building extension "sklearn.tree._criterion" sources
INFO: building extension "sklearn.tree._utils" sources
INFO: building extension "sklearn.utils.sparsefuncs_fast" sources
INFO: building extension "sklearn.utils._cython_blas" sources
INFO: building extension "sklearn.utils.arrayfuncs" sources
INFO: building extension "sklearn.utils.murmurhash" sources
INFO: building extension "sklearn.utils._fast_dict" sources
INFO: building extension "sklearn.utils._openmp_helpers" sources
INFO: building extension "sklearn.utils._seq_dataset" sources
INFO: building extension "sklearn.utils._weight_vector" sources
INFO: building extension "sklearn.utils._random" sources
INFO: building extension "sklearn.utils._logistic_sigmoid" sources
INFO: building extension "sklearn.utils._readonly_array_wrapper" sources
INFO: building extension "sklearn.utils._typedefs" sources
INFO: building extension "sklearn.svm._newrand" sources
INFO: building extension "sklearn.svm._libsvm" sources
INFO: building extension "sklearn.svm._liblinear" sources
INFO: building extension "sklearn.svm._libsvm_sparse" sources
INFO: building extension "sklearn.linear_model._cd_fast" sources
INFO: building extension "sklearn.linear_model._sgd_fast" sources
INFO: building extension "sklearn.linear_model._sag_fast" sources
INFO: building extension "sklearn._isotonic" sources
INFO: building data_files sources
INFO: build_src: building npy-pkg config files
/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
Traceback (most recent call last):
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/Real-Time-Voice-Cloning/.env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 149, in prepare_metadata_for_build_wheel
return hook(metadata_directory, config_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 174, in prepare_metadata_for_build_wheel
self.run_setup()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 268, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 158, in run_setup
exec(compile(code, __file__, 'exec'), locals())
File "setup.py", line 319, in <module>
setup_package()
File "setup.py", line 315, in setup_package
setup(**metadata)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/core.py", line 169, in setup
return old_setup(**new_attr)
^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 148, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/core.py", line 163, in run_commands
dist.run_commands()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 967, in run_commands
self.run_command(cmd)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/dist_info.py", line 31, in run
egg_info.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/egg_info.py", line 24, in run
self.run_command("build_src")
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/dist.py", line 986, in run_command
cmd_obj.run()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 144, in run
self.build_sources()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 164, in build_sources
self.build_npy_pkg_config()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/build_src.py", line 235, in build_npy_pkg_config
install_cmd.finalize_options()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/numpy/distutils/command/install.py", line 21, in finalize_options
old_install.finalize_options(self)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/command/install.py", line 45, in finalize_options
orig.install.finalize_options(self)
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 325, in finalize_options
self.finalize_unix()
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 498, in finalize_unix
self.select_scheme("posix_prefix")
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 528, in select_scheme
return self._select_scheme(resolved)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/pip-build-env-6h00dobt/overlay/lib/python3.11/site-packages/setuptools/_distutils/command/install.py", line 537, in _select_scheme
setattr(self, attrname, scheme[key])
~~~~~~^^^^^
KeyError: 'headers'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
``` | open | 2023-07-10T23:04:33Z | 2023-07-14T13:36:52Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1233 | [] | jessienab | 1 |
lanpa/tensorboardX | numpy | 503 | Question about add_Graph function's argument "input_to_model" | First, thanks for the contribution of this great tensorboardX.
Second, I am trying to visualize the network structure graph of a text classification model (TextCNN) in NLP. The beginning of the network is the embedding layer. The question is: how do I define the argument "input_to_model" of "writer.add_graph()"? | closed | 2019-08-29T13:49:01Z | 2019-10-03T14:33:32Z | https://github.com/lanpa/tensorboardX/issues/503 | [] | zhanlaoban | 1 |
dnouri/nolearn | scikit-learn | 190 | Append git hash to version when installing from git | Given the infrequency of PyPI releases, many users install nolearn from git rather than from PyPI. Because of this, the version string does not fully capture the installed version (i.e. the git hash). I would suggest adding the commit hash to the version string when installing from git. This would be consistent with [what Theano does](https://github.com/Theano/Theano/blob/149d008d528277988867089cb607105e706910c4/setup.py#L134-L143).
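For illustration, a rough sketch of the idea, mirroring the Theano approach linked above (the base version number and file layout are placeholders, not nolearn's actual setup.py):
```python
# Sketch only: append the short git hash to the version when building from a checkout.
import os
import subprocess

BASE_VERSION = "0.7.0"  # placeholder base version

def full_version(base=BASE_VERSION):
    try:
        sha = subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"],
            cwd=os.path.dirname(os.path.abspath(__file__)),
        ).decode().strip()
        return "{}.dev0+{}".format(base, sha)
    except Exception:
        # Not a git checkout (e.g. building from an sdist), keep the plain version.
        return base
```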
| closed | 2016-01-13T17:15:35Z | 2016-01-14T21:04:44Z | https://github.com/dnouri/nolearn/issues/190 | [] | cancan101 | 1 |
fastapi-users/fastapi-users | fastapi | 952 | Inconsistent fields identifying an OAuth account | ## Describe the bug
My main problem is that OAuth accounts are not being updated from the callback route; the access tokens remain the same.
This seems to stem from the fact that the correct OAuth account is not identified in a consistent way across the code path.
## To Reproduce
Stepping through the code from the callback route:
1. [In the callback route a new (temporary) OAuth account is always created with a new generated id (UUID)](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/router/oauth.py#L117)
2. [The above OAuth account is passed on to the user manager](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/router/oauth.py#L126)
3. [The user manager finds an existing user, based on the fields oauth_name and account_id of the above oauth](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/manager.py#L190)
4. Begin the process to update the existing OAuth account. [OAuth account being identified by the field account_id](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/manager.py#L212). Not checking the field oauth_name as done in 3.
5. [Replace the list of oauth accounts on the user object with a mix of new and old existing OAuth accounts](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/manager.py#L216)
6. [Call the update method of user_db with the above user object as argument](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/manager.py#L217)
7. Using SQLAlchemyUserDatabase, [update based on the id field of the oauth account](https://github.com/fastapi-users/fastapi-users-db-sqlalchemy/blob/main/fastapi_users_db_sqlalchemy/__init__.py#L129). Not using the account_id as in 4, nor the oauth_name as in 3.
Pseudo code to illustrate some of the problems with the above:
```python
from types import SimpleNamespace

# The callback route always builds a fresh account with a newly generated UUID
new_oauth_account = SimpleNamespace(id="789", oauth_name="google", account_id="abc")

# Note: the user is looked up by the fields oauth_name AND account_id
existing_user = find_user(new_oauth_account.oauth_name, new_oauth_account.account_id)

# e.g. SimpleNamespace(id="123", oauth_name="github", account_id="abc")
old_oauth_account = existing_user.oauth_accounts[0]
oauth_db = {"123": old_oauth_account}

def update_oauth_db(account):
    # Note: the database row is identified by the field id
    if account.id in oauth_db:  # False: id "789" is a fresh UUID that never exists in the db, so nothing is updated
        oauth_db[account.id] = account

# Note: the existing account is matched by the field account_id only
if new_oauth_account.account_id == old_oauth_account.account_id:
    # "abc" == "abc" -> matches, even though oauth_name differs ("google" vs "github")
    update_oauth_db(new_oauth_account)
```
## Expected behavior
The identifier of a specific OAuth account should be a composite of the OAuth provider's name and the account id.
A simple solution would be to change [the code here](https://github.com/fastapi-users/fastapi-users/blob/master/fastapi_users/manager.py#L212) to:
```
...
if existing_oauth_account.account_id == oauth_account.account_id and existing_oauth_account.oauth_name == oauth_account.oauth_name:
oauth_account.id = existing_oauth_account.id
updated_oauth_accounts.append(oauth_account)
...
```
Other solutions could be to override the equality check of the OAuth account and/or remove the id field from OAuth accounts, but these would be breaking changes and require more work.
## Configuration
- FastAPI Users version : 9.3.0
- fastapi-users-db-sqlalchemy: 3.0.0
| closed | 2022-03-24T12:35:40Z | 2022-04-21T09:06:31Z | https://github.com/fastapi-users/fastapi-users/issues/952 | [
"bug"
] | ricfri | 2 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 549 | How to use ContrastiveLoss | Hi,
I have a Siamese NN architecture that holds a BERT transformer for each of the sibling sub-networks.
So I have sentence pairs, and I want to encode each sentence in order to get its embedding.
```python
def forward(set1, set2):
    embeddings1 = BERT(set1)  # here we have batch_size x 768 after avg pooling
    embeddings2 = BERT(set2)  # here we have batch_size x 768 after avg pooling
    return embeddings1, embeddings2
```
I am wondering how to pass those two embedding tensors to ContrastiveLoss. Should I concatenate them along dimension 0 in order to have (2 * batch_size) x 768? For example, if the batch size equals 16, then after concatenation I will have 32 x 768.
After that I suppose I must repeat the labels tensor as well:
`Labels.repeat(2)  # giving 32 labels`
I cannot understand how the loss between embeddings is calculated. Does it first compute the distance for every possible pair within the batch of 32, based on the labels?
Could you please provide me an example?
Thanks in advance
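For concreteness, a minimal sketch of the concatenation approach described above (assumptions: `model` is the Siamese module from the snippet, `pair_labels` holds one label per sentence pair, and the margins shown are just the library defaults; this is a sketch, not an official recipe):
```python
import torch
from pytorch_metric_learning import losses

loss_func = losses.ContrastiveLoss(pos_margin=0, neg_margin=1)

emb1, emb2 = model(set1, set2)                      # each: (batch_size, 768)
embeddings = torch.cat([emb1, emb2], dim=0)         # (2 * batch_size, 768)
labels = torch.cat([pair_labels, pair_labels], 0)   # (2 * batch_size,)

# The loss forms every positive/negative pair inside this batch of
# 2 * batch_size embeddings using the labels, then applies the margins.
loss = loss_func(embeddings, labels)
```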
| closed | 2022-11-10T18:13:38Z | 2024-03-06T04:03:10Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/549 | [
"question"
] | icsd13152 | 27 |
aiogram/aiogram | asyncio | 543 | Incorrect work of ReplyKeyboardMarkup | Using aiogram.types.reply_keyboard.ReplyKeyboardMarkup messages in groups are sent as aiogram.types.force_reply.ForceReply | closed | 2021-03-25T19:49:35Z | 2023-08-25T12:37:49Z | https://github.com/aiogram/aiogram/issues/543 | [
"waiting for reply",
"stale",
"needs triage"
] | ghost | 2 |
drivendataorg/cookiecutter-data-science | data-science | 88 | Rename cookiecutter project | Is it possible to properly rename a project created by cookiecutter? That is, besides renaming the directory, it should also properly edit file contents (e.g. README.md), replacing the old project name with the new name.
If there is not an automated solution, is the list below inclusive of all files that need to be manually edited? (A rough sketch of the manual approach is included after the list.)
* README.md
* docs/index.rst
* docs/conf.py
* LICENSE
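For what it's worth, a rough Python sketch of the manual approach; `old_project`/`new_project` are placeholders, and the candidate list is just the files above plus a couple of common extras (Makefile, setup.py) that may or may not exist in a given template:
```python
from pathlib import Path

old_name, new_name = "old_project", "new_project"
candidates = ["README.md", "docs/index.rst", "docs/conf.py", "LICENSE", "Makefile", "setup.py"]

# Rewrite file contents that mention the old project name.
for rel in candidates:
    path = Path(rel)
    if path.exists():
        path.write_text(path.read_text().replace(old_name, new_name))

# The source package directory usually needs renaming too (layout-dependent).
pkg = Path("src") / old_name
if pkg.exists():
    pkg.rename(pkg.with_name(new_name))
```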
| closed | 2017-10-11T21:53:48Z | 2017-10-16T23:34:51Z | https://github.com/drivendataorg/cookiecutter-data-science/issues/88 | [] | ManavalanG | 2 |
modin-project/modin | pandas | 6,843 | Allow pass-through to underlying pandas implementation | We occasionally have the need to access underlying pandas DataFrame internals, i.e. attributes/functions prefixed with `_`, such as follows:
```python3
import pandas
import modin.pandas as pd
df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})
dfp = pandas.DataFrame({"a": [1, 2], "b": [3, 4]})
df._info_axis_name
AttributeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 df._info_axis_name
File .../site-packages/modin/pandas/dataframe.py:2489, in DataFrame.__getattr__(self, key)
2487 if key not in _ATTRS_NO_LOOKUP and key in self.columns:
2488 return self[key]
-> 2489 raise err
File .../site-packages/modin/pandas/dataframe.py:2485, in DataFrame.__getattr__(self, key)
2467 """
2468 Return item identified by `key`.
2469
(...)
2482 try to get `key` from ``DataFrame`` fields.
2483 """
2484 try:
-> 2485 return object.__getattribute__(self, key)
2486 except AttributeError as err:
2487 if key not in _ATTRS_NO_LOOKUP and key in self.columns:
AttributeError: 'DataFrame' object has no attribute '_info_axis_name'
dfp._info_axis_name
> 'columns'
```
It would be great if Modin could also fall back to accessing internal attributes (perhaps toggleable with some configuration):
```python3
@disable_logging
def __getattr__(self, key):
# docstring truncated
try:
return object.__getattribute__(self, key)
except AttributeError as err:
if key not in _ATTRS_NO_LOOKUP and key in self.columns:
return self[key]
try: # If not a column name and not inherently supported by Modin, get raw pandas dataframe and get that attribute
return getattr(self._to_pandas(), key) # Or object.__getattribute(self._to_pandas(), key)?
except:
raise err
``` | closed | 2023-12-29T13:12:24Z | 2024-01-09T09:33:02Z | https://github.com/modin-project/modin/issues/6843 | [
"new feature/request 💬"
] | idantene | 6 |
huggingface/pytorch-image-models | pytorch | 1,712 | [BUG] Training does not start with --amp | <img width="611" alt="image" src="https://user-images.githubusercontent.com/11882938/223652139-d6ddae91-554a-40f1-a4c2-fa0d3bc78616.png">
<img width="1928" alt="image" src="https://user-images.githubusercontent.com/11882938/223652403-c9e6fd92-8ccc-492d-82b0-4b725cab49db.png">
env:
CUDA Version 10.2.89
Ubuntu 18.04
GPU: T4
drive: 450.102.04
torch 1.9.1+cu102
torchaudio 0.9.1
torchvision 0.10.1+cu102
python3 -u ./train.py /gpu2-data/zhenty/coin-paper-algorithm/dataset --num-classes 1050 --model efficientformer_l3 -b 64 --img-size 224 --sched step --epochs 160 --pretrained --lr .018 -j 16 --opt adamp --weight-decay 1e-5 --decay-epochs 2 --decay-rate .91 --warmup-lr 1e-5 --warmup-epochs 1 --remode pixel --reprob 0.15 --amp
Once the `--amp` parameter is used, the following error is returned:
`AttributeError: module 'torch' has no attribute 'autocast'
` | closed | 2023-03-08T07:51:54Z | 2023-03-11T23:27:18Z | https://github.com/huggingface/pytorch-image-models/issues/1712 | [
"bug"
] | hacktmz | 0 |
plotly/dash | data-visualization | 2,286 | ERROR: Python 3.11.0 Installation Error | After I updated my Python to 3.11.0, I'm no longer able to install Dash with pip.
First I got the following error:

Then I installed a newer version of C++ BuildTools and now I got another one;

Any suggestions?
Thanks in advance.
| closed | 2022-10-26T13:17:44Z | 2022-10-31T13:33:21Z | https://github.com/plotly/dash/issues/2286 | [] | dogukankaratas | 7 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 71 | How to run raw SELECT for a Base field? | Hello,
I have a class extending `Base` that has one field that's quite complex to retrieve - I want to run a complex SELECT (one that has a JOIN on another table where the ID is a string that has to be converted to int and is JOINed back to the original table using that ID). It seems like it's simply impossible to retrieve the value for this field using only `relationship` and keep it a single field (and not a structure with multiple { }).
Is it possible to run a raw SQL to retrieve a field value?
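For illustration only, a sketch of one possible workaround: expose the value as a plain graphene field and run the raw SQL inside its resolver. `MyModel`, the table/column names and the way the session is fetched from `info.context` are all assumptions about the setup, not graphene-sqlalchemy's prescribed API:
```python
import graphene
from sqlalchemy import text
from graphene_sqlalchemy import SQLAlchemyObjectType

class MyModelType(SQLAlchemyObjectType):
    class Meta:
        model = MyModel  # your mapped Base subclass

    complex_value = graphene.Int()

    def resolve_complex_value(self, info):
        session = info.context["session"]  # however the app exposes the session
        row = session.execute(
            text(
                "SELECT o.value FROM other_table o "
                "JOIN my_table m ON m.id = CAST(o.ref_id AS INTEGER) "
                "WHERE m.id = :id"
            ),
            {"id": self.id},
        ).first()
        return row[0] if row else None
```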
Thanks | open | 2017-08-24T17:09:23Z | 2017-08-24T18:55:50Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/71 | [] | rok-povsic | 1 |
mwaskom/seaborn | data-visualization | 3,443 | Feature request: Dual x/y Agg and errorbars in seaborn objects interface | Hi,
Is it possible to have Est, Range, Agg, etc. be used when both x and y are quantitative? Like, being able to plot a mean point and x and y errorbars for two continuous variables:
```python
g = (so.Plot()
     .add(so.Line(), data=mean_curves_l, x='x', y='y', color='trt')
     .add(so.Dot(), so.Agg(), data=df, x='Obs-Pci', y='rel_O40', color='PPT_treat')
     .add(so.Range(), so.Est(errorbar='se'), data=df, x='Obs-Pci', y='rel_O40')
     )
```
where both Obs-Pci and rel_O40 are continuous. | open | 2023-08-23T19:12:09Z | 2024-03-08T08:49:25Z | https://github.com/mwaskom/seaborn/issues/3443 | [
"wishlist"
] | Auerilas | 3 |
pyro-ppl/numpyro | numpy | 1,015 | Initialization issues when using mask/latent variables | ### Description
I am not sure where the problem is coming from exactly, either from discrete latent variables or using `handlers.mask`. A relatively small script that reproduces it is below: I try to infer in a mixture model, where the mixture ID is determined by a Bernoulli variable. I use `mask` to decide between sampling from either of the two distributions.
When I run MCMC inference with `init_strategy=numpyro.infer.init_to_uniform`, for both models I get `xyz.shape == (7, 100, 3)` (as it should be). When I change to `init_to_median`, for `model2` I get `xyz.shape == (7, 2, 100, 3)`. In the full version (before condensing it to make it self-contained) the behavior was even more esoteric, with the extra dimension appearing in `model2` when I specify any `init_strategy`, including `init_to_uniform`, which is the default!
### Reproduction
```python
import numpy as np
import jax
import jax.numpy as jnp
import numpyro as npyro
import numpyro.distributions as dist
def f(xyz):
''' an arbitrary function from R^3 -> R^2'''
return jnp.cos(xyz)[..., :2]
def g(xyz):
''' another arbitrary function from R^3 -> R^2'''
return jnp.sin(xyz)[..., :2]
def model1(e1, e2):
''' a single component baseline '''
n = e1.shape[0]
obs_δ = npyro.sample('obs_δ', dist.LogNormal(scale=1.))
obs_cov = jnp.eye(2) * obs_δ
with npyro.plate('n', n):
xyz = npyro.sample(
'xyz',
dist.MultivariateNormal(jnp.ones(3), jnp.eye(3)),
)
obs_1 = npyro.sample(
'obs_1',
dist.MultivariateNormal(
loc=f(xyz),
covariance_matrix=obs_cov,
),
obs=e1,
)
obs_2 = npyro.sample(
'obs_2',
dist.MultivariateNormal(
loc=g(xyz),
covariance_matrix=obs_cov,
),
obs=e2,
)
def model2(e1, e2):
''' a two component mixture '''
n = e1.shape[0]
obs_δ = npyro.sample('obs_δ', dist.LogNormal(scale=1.))
obs_cov = jnp.eye(2) * obs_δ
cmp_p = npyro.sample('cmp_p', dist.Beta(1, 1))
with npyro.plate('n', n):
is_cmp_1 = npyro.sample(
'is_cmp_1',
dist.Bernoulli(cmp_p),
).astype(bool)
with npyro.handlers.mask(mask=is_cmp_1):
xyz = npyro.sample(
'xyz',
dist.MultivariateNormal(jnp.ones(3), jnp.eye(3)),
)
obs_1 = npyro.sample(
'obs_1_1',
dist.MultivariateNormal(
loc=f(xyz),
covariance_matrix=obs_cov,
),
obs=e1,
)
obs_2 = npyro.sample(
'obs_1_2',
dist.MultivariateNormal(
loc=g(xyz),
covariance_matrix=obs_cov,
),
obs=e2,
)
with npyro.handlers.mask(mask=~is_cmp_1):
obs_1 = npyro.sample(
'obs_2_1',
dist.Uniform(
low=jnp.zeros(2),
high=jnp.ones(2),
).to_event(1),
obs=e1,
)
obs_2 = npyro.sample(
'obs_2_2',
dist.Uniform(
low=jnp.zeros(2),
high=jnp.ones(2),
).to_event(1),
obs=e2,
)
def run_mcmc(model, *evidence):
rng_key = jax.random.PRNGKey(42)
kernel = npyro.infer.NUTS(model, init_strategy=npyro.infer.init_to_median)
#kernel = npyro.infer.NUTS(model) # this would give the same output shapes for both models
mcmc = npyro.infer.MCMC(
kernel,
num_warmup=2,
num_samples=7,
num_chains=1,
progress_bar=True,
)
mcmc.run(rng_key, *evidence)
predictions = mcmc.get_samples()
print(20 * '-' + ' ' + model.__name__ + ' ' + 20 * '-')
for key, value in predictions.items():
print(key, value.shape)
evidence = (np.random.randn(100, 2), np.random.randn(100, 2))
run_mcmc(model1, *evidence)
run_mcmc(model2, *evidence)
```
### Setup
I am running numpyro 0.6.0 with jax 0.2.10. | closed | 2021-04-20T18:41:34Z | 2021-05-05T17:23:02Z | https://github.com/pyro-ppl/numpyro/issues/1015 | [
"bug"
] | jatentaki | 3 |
AntonOsika/gpt-engineer | python | 589 | "No API key provided" - although it is provided in the .env file | ## Expected Behavior
If the OpenAI API key is provided in the .env file, it should be recognized and used.
## Current Behavior
Runtime error message: openai.error.AuthenticationError: No API key provided.
### Steps to Reproduce
1. Set the key in the .env file
2. Run the app with gpt-engineer projects/my-new-project
### Solution
When I added the line `openai.api_key = os.getenv("OPENAI_API_KEY")` to the end of the function `load_env_if_needed()` in the file `main.py`, as well as `import openai` at the beginning of this file _(thanks, engerlina, for reminder)_, the issue was resolved. | closed | 2023-08-14T09:40:11Z | 2023-08-17T09:25:49Z | https://github.com/AntonOsika/gpt-engineer/issues/589 | [] | LeRobert | 9 |
graphql-python/gql | graphql | 175 | Add quick example of syntax transport "auth" parameter expects (e.g when using GitHub personal access token)? | Might you provide documentation of that parameter?
Here is what I see, now.
https://github.com/graphql-python/gql/blob/d3f79e720037e0bd1f9f28e7a7964c58ff60f584/gql/transport/requests.py#L50-L51
What might that expect for, say, using a GitHub personal access token (which I've already created)?
tuple(\<user\>, \<token\>) ?
tuple('Bearer', '\<user\>:\<token\>') ?
tuple('Authorization', \<user\>, \<token\>) ?
I guess this must be passed off to requests, and depends on HTTP standards and/or attributes the service expects/defines.
So "tuple" seems to imply there is a specific number and order of arguments, we must already know about. If it were a dict, I guess that could give more clues about what types of information are expected.
Maybe just provide a link to that standard or documentation in the requests module, or whatever?
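For anyone looking for a concrete illustration, a hedged sketch of the two usual options follows; the `auth`/`headers` parameter names should be checked against the installed gql version, and the bearer header follows GitHub's GraphQL documentation rather than anything gql-specific:
```python
from gql import gql, Client
from gql.transport.requests import RequestsHTTPTransport

# Option 1: `auth` is handed straight to requests, so a (user, token) tuple
# means HTTP Basic auth, exactly like requests.get(..., auth=(user, token)).
transport = RequestsHTTPTransport(
    url="https://api.github.com/graphql",
    auth=("<user>", "<personal-access-token>"),
)

# Option 2 (what GitHub's GraphQL docs describe): send the token in a header.
transport = RequestsHTTPTransport(
    url="https://api.github.com/graphql",
    headers={"Authorization": "bearer <personal-access-token>"},
)

client = Client(transport=transport, fetch_schema_from_transport=True)
print(client.execute(gql("query { viewer { login } }")))
```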
Thanks. | closed | 2020-11-28T17:00:25Z | 2020-12-11T12:20:03Z | https://github.com/graphql-python/gql/issues/175 | [
"type: question or discussion"
] | bdklahn | 1 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,028 | [Feature Request]: Update Intel Extension for PyTorch to v2.1.30 | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Fixes issues the current version has and increases performance.
### Proposed workflow
Just update from 2.0.10 to https://github.com/intel/intel-extension-for-pytorch/releases/tag/v2.1.30%2Bxpu
### Additional information
_No response_ | open | 2024-06-15T17:31:59Z | 2024-06-15T17:34:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16028 | [
"enhancement"
] | Pantonia4 | 0 |
aio-libs/aiomysql | sqlalchemy | 20 | Compare aiomysql.sa API with aiopg.sa and sync them | closed | 2015-06-26T19:42:42Z | 2015-11-26T22:29:55Z | https://github.com/aio-libs/aiomysql/issues/20 | [] | jettify | 0 |
|
Gerapy/Gerapy | django | 162 | 'python3 -m pip install gerapy ' failed on Mac OS 10.15.6 | ERROR: Could not find a version that satisfies the requirement incremental>=16.10.1 (from versions: none)
ERROR: No matching distribution found for incremental>=16.10.1 | closed | 2020-08-07T14:26:11Z | 2020-08-10T12:51:10Z | https://github.com/Gerapy/Gerapy/issues/162 | [] | SeekPoint | 3 |
robotframework/robotframework | automation | 5,019 | v7 fails to find the tests when sub-suites are used | Given the following directory layout
```
tests/
suite1/
suite11/
test1.robot
test2.robot
suite2/
test3.robot
```
When running the following command line:
```
robot ... --suite suite1 tests/
```
robot successfully finds and executes `test1` and `test2`, however, when using:
```
robot ... --suite suite1.suite11 tests/
```
will fail to find the tests and produce an output
```
Suite 'Tests' contains no tests or tasks matching tag... in suite 'suite1.suite11'.
```
The following commands, however, will successfully find and run the tests:
```
robot ... --suite tests.suite1.suite11 tests/
robot ... --suite *.suite1.suite11 tests/
```
This misbehaviour was introduced with `Robot Framework 7.0 (Python 3.11.2 on linux)`. With `Robot Framework 6.1.1 (Python 3.11.2 on linux)` the command above works fine. | closed | 2024-01-15T11:48:42Z | 2024-02-02T15:55:20Z | https://github.com/robotframework/robotframework/issues/5019 | [] | realtimeprojects | 1 |
A3M4/YouTube-Report | matplotlib | 24 | TypeError: 'encoding' is an invalid keyword argument for this function | Traceback (most recent call last):
File "report.py", line 8, in <module>
from parse import *
File "\~/YouTube-Report/parse.py", line 17, in <module>
class HTML:
File "\~/YouTube-Report/parse.py", line 19, in HTML
htmlWatch = open(watchHistory, 'r', encoding='utf-8').read()
TypeError: 'encoding' is an invalid keyword argument for this function | closed | 2019-12-22T19:55:01Z | 2019-12-22T21:06:22Z | https://github.com/A3M4/YouTube-Report/issues/24 | [] | aabacchus | 2 |
ultralytics/yolov5 | deep-learning | 12,844 | Raspi5 Yolov5 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
How do I activate the GPU of my Raspberry Pi 5 in YOLOv5? Each detection takes about 500 ms, which is only around 2 FPS; that is quite low compared to the chart I saw, which shows around 15 FPS. Can you help me improve the FPS? Even 10 FPS would do.
### Additional
_No response_ | closed | 2024-03-23T11:22:51Z | 2024-10-20T19:42:06Z | https://github.com/ultralytics/yolov5/issues/12844 | [
"question",
"Stale"
] | Jinairu | 3 |
albumentations-team/albumentations | deep-learning | 1,998 | [Performance] Benchmark scipy random vs np.random | It could be that scipy has a faster random number generator, which could be important for array generation such as GaussNoise.
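A rough micro-benchmark sketch of the comparison in question (the shape, repetition count and the normal distribution are arbitrary choices standing in for what GaussNoise generates):
```python
import timeit
import numpy as np
from scipy import stats

shape = (1024, 1024, 3)
rng = np.random.default_rng(0)

t_np = timeit.timeit(lambda: rng.normal(0.0, 1.0, size=shape), number=20)
t_scipy = timeit.timeit(lambda: stats.norm.rvs(0.0, 1.0, size=shape), number=20)
print(f"numpy Generator: {t_np:.3f}s   scipy.stats: {t_scipy:.3f}s")
```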
Need to benchmark and update if it is the case. | closed | 2024-10-18T19:06:25Z | 2024-10-20T00:52:45Z | https://github.com/albumentations-team/albumentations/issues/1998 | [] | ternaus | 1 |
AirtestProject/Airtest | automation | 461 | New version of Airtest errors when calling the underlying LogToHtml.report() to generate a test report | **(Important! Issue category)**
* Image recognition / device control related issue -> report generation issue
**Describe the bug**
Hi, I previously used a myRunner launcher found online that generates an aggregated test report with Airtest (the code is at https://blog.csdn.net/u010127154/article/details/83375659). The core of this launcher is to call run_script() from airtest.cli.runner to run the script, and then use LogToHtml.report() from airtest.report.report to generate the report.
This launcher worked fine with AirtestIDE V1.2.0, but after switching to AirtestIDE V1.2.1 or airtest-1.0.27, **it always raises an error after running the script and cannot generate the report**. The error message is as follows:
File "D:/自动化回归测试/AirTest/myRunner.py", line 87, in run_air
rpt.report("log_template.html", output_file=output_file)
File "C:\Users\user\AppData\Local\Programs\Python\Python36\lib\site-packages\airtest\report\report.py", line 340, in report
info = json.loads(get_script_info(script_path))
File "C:\Users\tanjiahao\AppData\Local\Programs\Python\Python36\lib\site-packages\airtest\cli\info.py", line 23, in get_script_info
with open(pyfilepath, encoding="utf-8") as pyfile:
FileNotFoundError: [Errno 2] No such file or directory: 'D:\\自动化回归测试\\AirTest\\win7x64\\main.air\\'
**The script file mentioned at the end does exist, and the path is completely correct.** So I don't understand why it cannot be found.
I looked at the airtest.report.report module, but I did not find any change to the LogToHtml class compared with the previous version. I hope you can help, and **ideally clarify whether the arguments passed to the report() method need to be changed**. Many thanks.
**Python version:** `python3.6.8`
**airtest version:** `1.0.27`
- OS: Windows7-amd64
**Other relevant environment information**
Works fine with AirtestIDE V1.2.0; the error occurs after switching to AirtestIDE V1.2.1 or airtest-1.0.27
| closed | 2019-07-19T01:40:17Z | 2019-10-16T09:08:21Z | https://github.com/AirtestProject/Airtest/issues/461 | [
"bug",
"enhancement",
"to be released"
] | niuniuprice | 10 |
NVlabs/neuralangelo | computer-vision | 60 | Mesh extraction failure: data loader keeps quitting unexpectedly | Hello,
I am running the code on an A10 GPU. The training process for Neuralangelo works fine and I am trying to run the mesh extraction. However, I keep getting this:
(FYI: I tried with resolutions 1028 and 2048 but still got the same error)
```
- Loading the model...
Done with loading the checkpoint.
Extracting surface at resolution 1035 1522 2048
0%| | 0/1728 [00:00<?, ?it/s]ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
ERROR: Unexpected bus error encountered in worker. This might be caused by insufficient shared memory (shm).
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1133, in _try_get_data
data = self._data_queue.get(timeout=timeout)
File "/usr/lib/python3.8/queue.py", line 179, in get
self.not_empty.wait(remaining)
File "/usr/lib/python3.8/threading.py", line 306, in wait
gotit = waiter.acquire(True, timeout)
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/_utils/signal_handling.py", line 66, in handler
_error_if_any_worker_fails()
RuntimeError: DataLoader worker (pid 2913) is killed by signal: Bus error. It is possible that dataloader's workers are out of shared memory. Please try to raise your shared memory limit.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "projects/neuralangelo/scripts/extract_mesh.py", line 106, in <module>
main()
File "projects/neuralangelo/scripts/extract_mesh.py", line 88, in main
mesh = extract_mesh(sdf_func=sdf_func, bounds=bounds,
File "/usr/local/lib/python3.8/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/neuralangelo/projects/neuralangelo/utils/mesh.py", line 31, in extract_mesh
for it, data in enumerate(data_loader):
File "/usr/local/lib/python3.8/dist-packages/tqdm/std.py", line 1182, in __iter__
for obj in iterable:
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 634, in __next__
data = self._next_data()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1329, in _next_data
idx, data = self._get_data()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1285, in _get_data
success, data = self._try_get_data()
File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataloader.py", line 1146, in _try_get_data
raise RuntimeError('DataLoader worker (pid(s) {}) exited unexpectedly'.format(pids_str)) from e
RuntimeError: DataLoader worker (pid(s) 2913) exited unexpectedly
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2596) of binary: /usr/bin/python
Traceback (most recent call last):
File "/usr/local/bin/torchrun", line 33, in <module>
sys.exit(load_entry_point('torch==2.1.0a0+fe05266', 'console_scripts', 'torchrun')())
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 794, in main
run(args)
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/run.py", line 785, in run
elastic_launch(
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.8/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
projects/neuralangelo/scripts/extract_mesh.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-22_17:34:17
host : 8c8b262fd3ca
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 2596)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
I looked at the OOM FAQ section but I don't know how to fix this kind of issue for the mesh extraction step. It also seems to be complaining about shared memory and not CUDA memory. I have a feeling this is related to the Docker container but I looked up how to resolve it and I am not sure which route to take. Let me know if there is a solution for this that is recommended! | closed | 2023-08-22T17:31:56Z | 2023-08-22T19:22:59Z | https://github.com/NVlabs/neuralangelo/issues/60 | [] | smandava98 | 1 |
PrefectHQ/prefect | data-science | 17,576 | Add a possibility to guarantee local execution of flows when directly calling Python functions is not an option | ### Describe the current behavior
As far as I've understood, at the moment flows can be served or deployed locally, which is fine. The problem is that my monorepo has a bunch of separated Python projects, each of which is a different Prefect flow. The only way to test these flows locally is to rely on Prefect and use `run_deployment`, because I cannot import one project into another (it would mess up dependencies and defeat the whole purpose of keeping these projects separated).
I don't mind this, but the problem is that when I run a local worker (either from serving a flow, or for a process work pool), this worker can be used by anyone. This means that if another person and I are testing the same flow locally and each run a worker for the same served flow, I cannot guarantee that my flow is run on my machine. This can be problematic if we are running different versions of the source code.
### Describe the proposed behavior
It would be nice to have some way to guarantee this behavior: if I run `run_deployment` for a flow that is deployed locally or served, I would like an option to make sure it only runs on a worker executing on my local machine. I am not sure if this is already possible, or if it is feasible to implement.
The only workaround I've found is to create a different process work pool, one for each person working on the project, which is a bit annoying and unwieldy.
### Example Use
_No response_
### Additional context
_No response_ | open | 2025-03-24T09:58:43Z | 2025-03-24T16:38:38Z | https://github.com/PrefectHQ/prefect/issues/17576 | [
"enhancement"
] | NicholasPini | 4 |
gradio-app/gradio | deep-learning | 10,789 | Update the Audio playback state from the backend | - [x] I have searched to see if a similar issue already exists.
**Is your feature request related to a problem? Please describe.**
It would be great to be able to update the playback state of the audio component (current timestamp, paused/playing, etc.) by returning a `gr.Audio`.
This could be useful in transcription demos when you want to move the audio to the timestamp a particular word was detected.
| open | 2025-03-12T00:12:14Z | 2025-03-13T23:59:22Z | https://github.com/gradio-app/gradio/issues/10789 | [
"🎵 Audio"
] | freddyaboulton | 5 |
huggingface/transformers | tensorflow | 35,937 | Llama3.2: Allow batch to have | ### Feature request
Currently, Llama 3.2 requires either no images in a batch or that each example has at least one image. Is there an easy workaround (apart from feeding dummy images, which is computationally expensive) to allow some examples to have images and other examples to have none?
Current error message:
`"If a batch of text is provided, there should be either no images or at least one image per sample"`
from https://github.com/huggingface/transformers/blob/3f860dba553d09a8eb96fded8d940c98a9a86854/src/transformers/models/mllama/processing_mllama.py#L303
### Motivation
Not all examples may have images, but some do.
### Your contribution
Could possibly submit a PR | open | 2025-01-28T13:56:56Z | 2025-01-28T14:05:34Z | https://github.com/huggingface/transformers/issues/35937 | [
"Feature request",
"VLM"
] | maximilianmordig | 1 |
opengeos/leafmap | jupyter | 581 | Add support for lonboard | <!-- Please search existing issues to avoid creating duplicates. -->
### Description
lonboard is a new Python package for visualizing large vector datasets. It uses pydeck as the mapping backend. Since leafmap already supports pydeck, it would be great to add support for lonboard as well.
https://developmentseed.org/blog/2023-10-23-lonboard | closed | 2023-10-24T23:41:26Z | 2023-11-06T11:46:21Z | https://github.com/opengeos/leafmap/issues/581 | [
"Feature Request"
] | giswqs | 0 |
modin-project/modin | data-science | 6,982 | Dataset used in examples jupyter notebook is failing when jupyter notebook is run | The dataset used in the [notebook](https://github.com/modin-project/modin/blob/master/examples/jupyter/Pandas_Taxi.ipynb ) is named yellow_tripdata_2015-01.csv, which is hosted in
https://modin-datasets.intel.com/testing/yellow_tripdata_2015-01.csv. This dataset is not the expected yellow_tripdata_2015-01.csv, so the Jupyter notebook fails as mentioned in https://github.com/modin-project/modin/issues/6964#issuecomment-1970641771.
Either the hosted dataset should be changed or the snippet should be updated | closed | 2024-02-29T14:58:44Z | 2024-03-04T09:40:38Z | https://github.com/modin-project/modin/issues/6982 | [] | arunjose696 | 1 |
allenai/allennlp | nlp | 5,225 | Access the [CLS] token when using a pretrained_transformer_mismatched_embedder | Is there a simple way to access the `[CLS]` Token after encoding a token-sequence using a `pretrained_transformer_mismatched_embedder`?
Using a `bert_pooler` `seq2vec` encoder leads to an additional forward pass through the transformer. My intuition is that this should be avoided.
In [pretrained_transformer_mismatched_embedder.py](https://github.com/allenai/allennlp/blob/main/allennlp/modules/token_embedders/pretrained_transformer_mismatched_embedder.py#L139)
`span_embeddings, span_mask = util.batched_span_select(embeddings.contiguous(), offsets)`
the `embedding` tensor contains the [CLS] vector at index 0. `batched_span_select` removes this information, since a `seq2seq` encoder does not include the [CLS] token. For RNN-based encoders we can use [get_final_encoder_states](https://github.com/allenai/allennlp/blob/main/allennlp/nn/util.py#L187). Is there a similar way to access the [CLS] information for transformer-based token embedders? | closed | 2021-05-26T09:53:43Z | 2021-06-04T11:21:09Z | https://github.com/allenai/allennlp/issues/5225 | [
"Contributions welcome",
"question"
] | MSLars | 5 |
mwaskom/seaborn | data-visualization | 3,176 | pandas plotting backend? | Plotly has a top-level `.plot` function which allows for a [pandas plotting backend](https://github.com/pandas-dev/pandas/blob/d95bf9a04f10590fff41e75de94c321a8743af72/pandas/plotting/_core.py#L1848-L1861) to exist:
https://github.com/plotly/plotly.py/blob/4363c51448cda178463277ff3c12becf35dbd3b8/packages/python/plotly/plotly/__init__.py
Like this, if people have `plotly` installed, they can do:
```
pd.set_option('plotting.backend', 'plotly')
```
and then `df.plot.line(x=x, y=y)` will defer to `plotly.express.line(data_frame=df, x=x, y=y)`:

It'd be nice to be able to do
```
pd.set_option('plotting.backend', 'seaborn')
```
and then have `df.plot.line(x=x, y=y)` defer to `seaborn.line(data=df, x=x, y=y)`
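For a sense of scale, a rough sketch of what the shim could look like, based on pandas' documented backend protocol (the backend module is resolved through the `pandas_plotting_backends` entry-point group and must expose a top-level `plot(data, kind, **kwargs)` function); the kind-to-function mapping below is only illustrative:
```python
import seaborn as sns

_KIND_MAP = {
    "line": sns.lineplot,
    "scatter": sns.scatterplot,
    "bar": sns.barplot,
    "hist": sns.histplot,
}

def plot(data, kind, x=None, y=None, **kwargs):
    """Entry point pandas calls for df.plot(kind=...) / df.plot.line(...)."""
    ax = _KIND_MAP[kind](data=data, x=x, y=y, **kwargs)
    return ax.figure
```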
Would you be open to these ~150 lines of code or so to allow `seaborn` to be set as a plotting backend in pandas? Check the link above to see what it looks like in `plotly`. I'd be happy to implement this, just checking if it'd be welcome | closed | 2022-12-05T08:43:50Z | 2022-12-06T21:32:08Z | https://github.com/mwaskom/seaborn/issues/3176 | [] | MarcoGorelli | 8 |
huggingface/datasets | pandas | 6,793 | Loading just one particular split is not possible for imagenet-1k | ### Describe the bug
I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits)
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work like that?
### Steps to reproduce the bug
1. Install the required libraries (python, datasets, huggingface_hub)
2. Login using huggingface cli
3. Run the code in the description
### Expected behavior
Just a single (validation) split should be downloaded.
### Environment info
python: 3.12.2
datasets: 2.18.0
huggingface_hub: 0.22.2 | open | 2024-04-08T14:39:14Z | 2024-09-12T16:24:48Z | https://github.com/huggingface/datasets/issues/6793 | [] | PaulPSta | 1 |
ray-project/ray | tensorflow | 51,321 | [core][autoscaler][v2] do not remove nodes for upcoming resource requests | ### What happened + What you expected to happen
The issues described in https://github.com/ray-project/ray/pull/51122 still happen with autoscaler v2.
### Versions / Dependencies
Ray 2.43.0
### Reproduction script
See https://github.com/ray-project/ray/pull/51122
### Issue Severity
None | open | 2025-03-12T23:16:06Z | 2025-03-21T01:39:41Z | https://github.com/ray-project/ray/issues/51321 | [
"bug",
"core-autoscaler"
] | rueian | 2 |
keras-team/keras | data-science | 20,322 | Loading weights into custom LSTM layer fails: Layer 'lstm_cell' expected 3 variables, but received 0 variables during loading. Expected: ['kernel', 'recurrent_kernel', 'bias'] | I'm using the official TF 2.17 container (**tensorflow/tensorflow:2.17.0-gpu-jupyter**) + **keras==3.5.0**.
The following code saves a model which contains a (dummy) custom LSTM layer, then inits a new copy of the model (with a vanilla LSTM) and tries to load the weights from the first model into the second.
Code:
```
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import keras
from keras import layers
# An extremely uninteresting custom layer
@keras.saving.register_keras_serializable()
class MyCustomLSTM(keras.layers.LSTM):
def __init__(self, units, **kwargs):
super().__init__(units, **kwargs)
def make_model(
use_custom_lstm=True,
):
inputs = layers.Input(shape=(None, 4), name="inputs")
if use_custom_lstm:
lstm = MyCustomLSTM
else:
lstm = layers.LSTM
outputs = lstm(
units=8,
return_sequences=True,
name="my_LSTM",
)(inputs)
model = keras.models.Model(inputs=inputs, outputs=outputs)
return model
weights_file = "this_is_a_test.weights.h5"
if os.path.exists(weights_file):
os.remove(weights_file)
model = make_model(use_custom_lstm=True)
model.compile()
model.save_weights(weights_file)
new_model = make_model(use_custom_lstm=False)
new_model.load_weights(weights_file)
```
Output:
```
Traceback (most recent call last):
File "scratch_1.py", line 45, in <module>
new_model.load_weights(weights_file)
File "venv/lib/python3.11/site-packages/keras/src/utils/traceback_utils.py", line 122, in error_handler
raise e.with_traceback(filtered_tb) from None
File "venv/lib/python3.11/site-packages/keras/src/saving/saving_lib.py", line 593, in _raise_loading_failure
raise ValueError(msg)
ValueError: A total of 1 objects could not be loaded. Example error message for object <LSTMCell name=lstm_cell, built=True>:
Layer 'lstm_cell' expected 3 variables, but received 0 variables during loading. Expected: ['kernel', 'recurrent_kernel', 'bias']
List of objects that could not be loaded:
[<LSTMCell name=lstm_cell, built=True>]
```
Considering that the custom layer in this case is doing absolutely nothing of interest, I assume this is a bug. If not, please let me know how one is meant to wrap a LSTM layer to avoid this issue.
Thanks! | open | 2024-10-04T13:11:38Z | 2024-10-07T15:33:19Z | https://github.com/keras-team/keras/issues/20322 | [
"type:Bug"
] | lbortolotti | 4 |
CatchTheTornado/text-extract-api | api | 50 | Docker installation | I want to try this, but im unable to follow the installation, a proper installation guide on docker installation would be very helpful | open | 2024-12-12T15:51:26Z | 2025-01-19T00:12:41Z | https://github.com/CatchTheTornado/text-extract-api/issues/50 | [] | drmetro09 | 6 |
python-gino/gino | sqlalchemy | 566 | How to set up quart and Gino | * GINO version: 0.8.3
* Python version: 3.7
* asyncpg version: 0.19.0
* aiocontextvars version: 0.2.2
* PostgreSQL version: 11
### Description
I want to use gino as an orm with quart, but i cant get it to connect together....
most of the time i get : Gino engine is not initialized. when i try to acess the db api when the server is running.
The database tables were created but when i try to insert , i get this. and nothing seem to work.
### What I Did
below is how i try to connect it
```python
from config import DevelopmentConfig
from quart import Quart
import asyncio
from gino import Gino
app = Quart(__name__)
app.config.from_object(DevelopmentConfig)
db = Gino()
from app.views import deep, neutral
app.register_blueprint(deep)
app.register_blueprint(neutral)
async def main():
await db.set_bind(DevelopmentConfig.DATABASE_URI)
await db.gino.create_all()
if __name__ == '__main__':
app.run()
asyncio.get_event_loop().run_until_complete(main())
```
This is the error
```
Running on http://127.0.0.1:5000 (CTRL + C to quit)
[2019-10-11 16:52:12,978] Running on 127.0.0.1:5000 over http (CTRL + C to quit)
[2019-10-11 16:52:20,575] ERROR in app: Exception on request GET /telegram/messages/fetch
Traceback (most recent call last):
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/quart/app.py", line 1524, in handle_request
return await self.full_dispatch_request(request_context)
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/quart/app.py", line 1546, in full_dispatch_request
result = await self.handle_user_exception(error)
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/quart/app.py", line 957, in handle_user_exception
raise error
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/quart/app.py", line 1544, in full_dispatch_request
result = await self.dispatch_request(request_context)
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/quart/app.py", line 1592, in dispatch_request
return await handler(**request_.view_args)
File "/home/lekan/Documents/workspace/telegram-flask/deepview-telegram/app/views.py", line 72, in fetch
msgs = await client.messages(limit=500)
File "/home/lekan/Documents/workspace/telegram-flask/deepview-telegram/app/request.py", line 50, in messages
msg = await DbMessage.get_messages(message_id=message.id)
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/gino/api.py", line 135, in first
return await self._query.bind.first(self._query, *multiparams,
File "/home/lekan/Documents/workspace/telegram-flask/lib/python3.7/site-packages/gino/api.py", line 501, in __getattribute__
raise self._exception
gino.exceptions.UninitializedError: Gino engine is not initialized.
```
I also tried to do this in my views:
```python
@app.before_serving
async def main():
await db.set_bind(DevelopmentConfig.DATABASE_URI)
await db.gino.create_all()
asyncio.get_event_loop().run_until_complete(main())
```
but I get the same error.
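For reference, a minimal consolidated sketch of the binding pattern being attempted (a sketch only: it reuses the `DevelopmentConfig` from above and lets `app.run()` own the event loop; it is not an official Gino + Quart recipe):
```python
from config import DevelopmentConfig
from quart import Quart
from gino import Gino

app = Quart(__name__)
app.config.from_object(DevelopmentConfig)
db = Gino()

@app.before_serving
async def bind_db():
    # Runs inside the event loop that serves requests, so the engine the
    # request handlers see is the one initialized here.
    await db.set_bind(DevelopmentConfig.DATABASE_URI)
    await db.gino.create_all()

@app.after_serving
async def unbind_db():
    await db.pop_bind().close()

if __name__ == "__main__":
    app.run()  # no extra run_until_complete() call needed
```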
I'd appreciate any help on how to connect and perform db operations successfully.
Thanks. | closed | 2019-10-11T15:57:19Z | 2019-10-19T06:09:14Z | https://github.com/python-gino/gino/issues/566 | [
"question"
] | horlahlekhon | 32 |
holoviz/panel | matplotlib | 7,014 | Panel tries to set a numeric param to an HTML string when data is updated in the background with a Panel app in a notebook | I ran into an unusual situation where I set up a simple Panel UI to run inside a Jupyter notebook, and then tried to update the data for the UI from a background thread. I observed that several updates would work properly, but at some point, Panel would try to update the value of a numeric param to be an HTML string (seemingly from a `pn.widgets.StaticText`) which would throw an exception and stop the UI from updating.
#### ALL software version info
Python: 3.11.9
OS: Mac OSX 14.5 on M3 Max
Browser: Chrome 126.0.6478.183 (Official Build) (arm64)
Selected python library versions:
```
bokeh==3.4.3
ipython==8.26.0
notebook==7.2.1
panel==1.4.4
param==2.1.1
pyviz_comms==3.0.2
```
#### Description of expected behavior and the observed behavior
It should be possible to update parameterized models from a background thread and have the UI update without issue.
Instead, when I trigger the background thread to update the model repeatedly, at some point Panel handles a message which is trying to set the model's value to be an HTML string, rather than the raw number.
It seems that this may come from the `pn.widgets.StaticText` widget - if I comment out and only display the position via `pn.widgets.FloatSlider`, I did not observe this problem in my repro case.
#### Complete, minimal, self-contained example code that reproduces the issue
See a minimal repro in repo here: https://github.com/dennisjlee/panel-thread-repro
#### Stack traceback and/or browser JavaScript console output
```
Traceback (most recent call last):
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/pyviz_comms/__init__.py", line 340, in _handle_msg
self._on_msg(msg)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/viewable.py", line 478, in _on_msg
doc.unhold()
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/bokeh/document/document.py", line 776, in unhold
self.callbacks.unhold()
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 432, in unhold
self.trigger_on_change(event)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 409, in trigger_on_change
invoke_with_curdoc(doc, event.callback_invoker)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/bokeh/document/callbacks.py", line 444, in invoke_with_curdoc
return f()
^^^
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/bokeh/util/callback_manager.py", line 185, in invoke
callback(attr, old, new)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/reactive.py", line 474, in _comm_change
state._handle_exception(e)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/io/state.py", line 458, in _handle_exception
raise exception
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/reactive.py", line 472, in _comm_change
self._schedule_change(doc, comm)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/reactive.py", line 454, in _schedule_change
self._change_event(doc)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/reactive.py", line 450, in _change_event
self._process_events(events)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/reactive.py", line 387, in _process_events
self.param.update(**self_params)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2319, in update
restore = dict(self_._update(arg, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2352, in _update
self_._batch_call_watchers()
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2546, in _batch_call_watchers
self_._execute_watcher(watcher, events)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2506, in _execute_watcher
watcher.fn(*args, **kwargs)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/panel/param.py", line 527, in link_widget
self.object.param.update(**{p_name: change.new})
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2319, in update
restore = dict(self_._update(arg, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 2345, in _update
setattr(self_or_cls, k, v)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 528, in _f
instance_param.__set__(obj, val)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 530, in _f
return f(self, obj, val)
^^^^^^^^^^^^^^^^^
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameters.py", line 543, in __set__
super().__set__(obj,val)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 530, in _f
return f(self, obj, val)
^^^^^^^^^^^^^^^^^
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameterized.py", line 1498, in __set__
self._validate(val)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameters.py", line 828, in _validate
self._validate_value(val, self.allow_None)
File "/Users/dj/code/panel-thread-repro/venv/lib/python3.11/site-packages/param/parameters.py", line 811, in _validate_value
raise ValueError(
ValueError: Number parameter 'PositionView.position' only takes numeric values, not <class 'str'>.
```
#### Screenshots or screencasts of the bug in action
https://github.com/user-attachments/assets/326fb233-be8c-490a-ab44-21a019102cca
- [ ] I may be interested in making a pull request to address this
| closed | 2024-07-25T19:18:31Z | 2024-07-29T11:30:14Z | https://github.com/holoviz/panel/issues/7014 | [] | dennisjlee | 0 |
pywinauto/pywinauto | automation | 419 | ctypes.ArgumentError @ click_input | > Windows 10.0.15063 x64
> Python 3.6.2
> pywinauto 0.6.3
Setup (functional):
```python
hwnd = my_kivy_app.get_hwnd()
app = pywinauto.Application()
app.connect(handle=hwnd)
window = app.window(handle=hwnd).wrapper_object()
```
Exception @:
```python
window.click_input(button='left', pressed='', coords=(100, 100), double=False, absolute=False)
```
```java
Traceback (most recent call last):
File "Test_Window_Commands.py", line 114, in <module>
app.run()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\app.py", line 828, in run
runTouchApp()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\base.py", line 504, in runTouchApp
EventLoop.window.mainloop()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\core\window\window_sdl2.py", line 663, in mainloop
self._mainloop()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\core\window\window_sdl2.py", line 405, in _mainloop
EventLoop.idle()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\base.py", line 339, in idle
Clock.tick()
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\kivy\clock.py", line 581, in tick
self._process_events()
File "kivy\_clock.pyx", line 367, in kivy._clock.CyClockBase._process_events (kivy\_clock.c:7700)
File "kivy\_clock.pyx", line 397, in kivy._clock.CyClockBase._process_events (kivy\_clock.c:7577)
File "kivy\_clock.pyx", line 395, in kivy._clock.CyClockBase._process_events (kivy\_clock.c:7498)
File "kivy\_clock.pyx", line 167, in kivy._clock.ClockEvent.tick (kivy\_clock.c:3483)
File "C:\OneDrive\_Projects\Python\dev\kivy\basic\_app.py", line 140, in _wrapped
function(*args, **kwargs)
File "Test_Window_Commands.py", line 104, in ctrl_shift_alt
app._window.click_input(button='left', pressed='', coords=(100, 100), double=False, absolute=False)
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\pywinauto\base_wrapper.py", line 667, in click_input
coords = self.client_to_screen(coords)
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\pywinauto\base_wrapper.py", line 333, in client_to_screen
rect = self.element_info.rectangle
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\pywinauto\win32_element_info.py", line 142, in rectangle
return handleprops.rectangle(self.handle)
File "C:\OneDrive\_Frameworks\Python\3\lib\site-packages\pywinauto\handleprops.py", line 200, in rectangle
win32functions.GetWindowRect(handle, ctypes.byref(rect))
ctypes.ArgumentError: argument 2: <class 'TypeError'>: expected LP_RECT instance instead of pointer to RECT
``` | closed | 2017-10-05T20:31:40Z | 2019-07-06T15:50:03Z | https://github.com/pywinauto/pywinauto/issues/419 | [
"bug"
] | Enteleform | 19 |
igorbenav/FastAPI-boilerplate | sqlalchemy | 125 | Duplicate JWT cookie descriptions | **Description**
In the section for JWT auth & cookies, both `Lax` and `Strict` descriptions in README are the same.
**Screenshots**
 | closed | 2024-03-14T08:25:19Z | 2024-04-14T22:57:46Z | https://github.com/igorbenav/FastAPI-boilerplate/issues/125 | [
"documentation",
"good first issue"
] | CHE1RON | 0 |
mars-project/mars | numpy | 3,216 | [BUG] Ray task backend no progress | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Ray task mode doesn't update progress until whole task finished:

**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version: 3.7.9
2. The version of Mars you use: master
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
```python
def test_groupby(n=10):
from datetime import datetime
start = datetime.now()
df = md.DataFrame(
mt.random.rand(n * 500, 4, chunk_size=500),
columns=list('abcd'))
# print(df.sum().execute())
result = df.groupby(['a']).apply(lambda pdf: pdf).execute()
duration = datetime.now() - start
return result, duration
mars.new_session(n_worker=10, n_cpu=10*2, backend="ray")
test_groupby(200)
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| closed | 2022-08-09T11:18:21Z | 2022-11-10T07:24:47Z | https://github.com/mars-project/mars/issues/3216 | [] | chaokunyang | 2 |
pyro-ppl/numpyro | numpy | 1,999 | Running svi with mutable states causes tracer leak | ### Bug Description
This is a sub-issue of https://github.com/pyro-ppl/numpyro/issues/1981. Running the following test raises error/xfail.
### Steps to Reproduce
```
JAX_CHECK_TRACER_LEAKS=1 pytest -vs test/infer/test_svi.py::test_mutable_state
```
### Expected Behavior
No error happens.
| closed | 2025-03-06T22:20:02Z | 2025-03-07T19:25:40Z | https://github.com/pyro-ppl/numpyro/issues/1999 | [
"bug"
] | fehiepsi | 0 |
xlwings/xlwings | automation | 1,786 | unable to set cell values | #### OS (e.g. Windows 10 or macOS Sierra)
Win10 Pro 19044.1348
#### Versions of xlwings, Excel and Python (e.g. 0.11.8, Office 365, Python 3.7)
xlwings 0.25.0
Excel 365
Python 3.7
#### Describe your issue (incl. Traceback!)
<Sheet [Relay.xlsm]sm_test>
sfasdf
pythoncom error: Python error invoking COM method.
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\win32com\server\policy.py", line 278, in _Invoke_
return self._invoke_(dispid, lcid, wFlags, args)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\win32com\server\policy.py", line 283, in _invoke_
return S_OK, -1, self._invokeex_(dispid, lcid, wFlags, args, None, None)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\win32com\server\policy.py", line 586, in _invokeex_
return func(*args)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\server.py", line 198, in CallUDF
res = call_udf(script, fname, args, this_workbook, FromVariant(caller))
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\udfs.py", line 519, in call_udf
ret = func(*args)
File "C:\Users\User\Documents\step_model\Relay.py", line 16, in init_sm
sht.range('B3').value = "foo"
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\main.py", line 2027, in value
conversion.write(data, self, self._options)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\conversion\__init__.py", line 48, in write
pipeline(ctx)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\conversion\framework.py", line 66, in __call__
stage(*args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\conversion\standard.py", line 74, in __call__
self._write_value(ctx.range, ctx.value, scalar)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\conversion\standard.py", line 62, in _write_value
rng.raw_value = value
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\main.py", line 1620, in raw_value
self.impl.raw_value = data
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\xlwings\_xlwindows.py", line 859, in raw_value
self.xl.Value = data
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\win32com\client\__init__.py", line 482, in __setattr__
self._oleobj_.Invoke(*(args + (value,) + defArgs))
pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2146827284), None)
```
#### Include a minimal code sample to reproduce the issue (and attach a sample workbook if required!)
```python
import xlwings as xw


@xw.func
def init_sm():
    sht = xw.Book.caller().sheets[0]
    print(sht)
    print(sht.range('B3').value)   # prints the value
    sht.range('B3').value = "foo"  # raises the exception
    return 1


if __name__ == "__main__":
    xw.serve()
```
| closed | 2021-12-14T20:26:38Z | 2021-12-15T08:15:54Z | https://github.com/xlwings/xlwings/issues/1786 | [] | ekwf | 1 |
Farama-Foundation/Gymnasium | api | 464 | [Question] Displaying contact forces and contact points in video recordings of a MuJoCo environment using RecordVideo | ### Question
Inside the MuJoCo simulation GUI (simulate.exe), I can click buttons to display contact forces and contact points. I would like to be able to display such information in the videos generated by `gymnasium.wrappers.RecordVideo`. Can this be done from the `MujocoEnv` class? Otherwise, can you please suggest a workaround or consider adding this option?
I couldn't find in MuJoCo's documentation a way to specify this in the XML.
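The closest thing to a workaround I can think of (untested, and it relies on renderer internals such as `mujoco_renderer`, `_get_viewer` and `vopt`, which may differ between Gymnasium versions) would be to flip MuJoCo's visualization flags on the off-screen viewer before recording. A minimal sketch:
```python
import gymnasium as gym
import mujoco
from gymnasium.wrappers import RecordVideo

env = RecordVideo(gym.make("Ant-v4", render_mode="rgb_array"), video_folder="videos")
env.reset(seed=0)

# Assumption: the off-screen viewer keeps its mujoco.MjvOption in `vopt`;
# `_get_viewer` is an internal method, so this may break across versions.
viewer = env.unwrapped.mujoco_renderer._get_viewer("rgb_array")
viewer.vopt.flags[mujoco.mjtVisFlag.mjVIS_CONTACTPOINT] = True
viewer.vopt.flags[mujoco.mjtVisFlag.mjVIS_CONTACTFORCE] = True

for _ in range(200):
    _, _, terminated, truncated, _ = env.step(env.action_space.sample())
    if terminated or truncated:
        env.reset()
env.close()
```
If there is a supported way to achieve this (through the environment constructor, the renderer, or the XML), that would of course be preferable.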
p.s. this is important for validating what the agent is doing in locomotion tasks, as it is often hard to see in the video if a body part is touching the floor. | closed | 2023-04-24T20:00:47Z | 2023-04-29T12:12:54Z | https://github.com/Farama-Foundation/Gymnasium/issues/464 | [
"question"
] | Omer1Yuval1 | 6 |
HIT-SCIR/ltp | nlp | 276 | Could you provide a tool or interface for merging an incremental model into the base model? | Scenario: I have a base model and an incremental corpus A, from which I trained incremental model A. Now I also have an incremental corpus B and want to continue training on top of the base model plus incremental model A. If incremental model A could be merged with the base model into one larger model, I could carry on from there; later I could merge B in as well and train C. | closed | 2017-12-22T07:20:40Z | 2020-06-25T11:20:39Z | https://github.com/HIT-SCIR/ltp/issues/276 | [] | BoatingZeng | 1 |
dagster-io/dagster | data-science | 27,748 | Cannot terminate completely queued job backfills since 1.9.11 | ### What's the issue?
After upgrading from 1.9.10 to 1.9.11, I can no longer cancel backfills whose runs have all been submitted.
I believe the issue is connected to the renaming of two buttons:
- "Cancel backfill submission" → "Cancel Backfill" ✅ triggers a call to `graphql?op=CancelBackfill`
- "Terminate unfinished runs" → "Cancel Backfill" ❌ does not trigger a call to `graphql?op=Terminate`
### What did you expect to happen?
_No response_
### How to reproduce?
- Install dagster==1.9.11
- Run `dagster dev`
- Set dagster.yaml
```
run_queue:
max_concurrent_runs: 1
dequeue_interval_seconds: 1
```
- Submit backfill with 100 partitions
- Wait for log:
```
dagster.daemon.BackfillDaemon - INFO - Backfill {backfill_id} has unfinished runs. Status will be updated when all runs are finished.
```
- Open developer tools -> network
- Click on `Cancel Backfill`
- No `graphql?op=Terminate` is triggered
### Dagster version
1.9.11
### Deployment type
Dagster Helm chart
### Deployment details
_No response_
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | open | 2025-02-11T13:09:53Z | 2025-03-03T14:29:50Z | https://github.com/dagster-io/dagster/issues/27748 | [
"type: bug",
"area: backfill",
"area: UI/UX"
] | HynekBlaha | 8 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 207 | Clicking "Start" makes the Docker container stop automatically | Version: latest
Installed via Docker.
Generation works with the bundled sample image (马保国), but when I switch to my own uploaded photo the container stops immediately.
Log: 2024-11-13 13:01:12.059647738 [E:onnxruntime:Default, env.cc:234 ThreadMain] pthread_setaffinity_np failed for thread: 18, index: 1, mask: {2, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set. | open | 2024-11-13T13:10:11Z | 2024-12-02T00:52:25Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/207 | [] | fjcy233 | 3 |
mljar/mljar-supervised | scikit-learn | 457 | 'float' object has no attribute 'lower' | ```py
**'float' object has no attribute 'lower'**
Traceback (most recent call last):
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\supervised\base_automl.py", line 1078, in _fit
trained = self.train_model(params)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\supervised\base_automl.py", line 365, in train_model
mf.train(results_path, model_subpath)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\supervised\model_framework.py", line 184, in train
X_train, y_train, sample_weight = self.preprocessings[
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\supervised\preprocessing\preprocessing.py", line 165, in fit_and_transform
t.fit(X_train, col)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\supervised\preprocessing\text_transformer.py", line 25, in fit
self._vectorizer.fit(x)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\sklearn\feature_extraction\text.py", line 1823, in fit
X = super().fit_transform(raw_documents)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\sklearn\feature_extraction\text.py", line 1202, in fit_transform
vocabulary, X = self._count_vocab(raw_documents,
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\sklearn\feature_extraction\text.py", line 1114, in _count_vocab
for feature in analyze(doc):
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\sklearn\feature_extraction\text.py", line 104, in _analyze
doc = preprocessor(doc)
File "C:\Users\lucas.ssousa\AppData\Roaming\Python\Python38\site-packages\sklearn\feature_extraction\text.py", line 69, in _preprocess
doc = doc.lower()
AttributeError: 'float' object has no attribute 'lower'
```
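A possible cause (not confirmed): the traceback fails inside the text vectorizer when `.lower()` is called on a float, which is exactly what happens when a text column contains NaN values. A minimal workaround sketch, where the file path and column names are placeholders, would be to cast the text columns to strings before fitting:
```python
import pandas as pd
from supervised.automl import AutoML

df = pd.read_csv("train.csv")  # placeholder path
text_cols = ["description"]    # placeholder text column names

# NaN inside an object/text column is a float, which breaks the text vectorizer.
df[text_cols] = df[text_cols].fillna("").astype(str)

automl = AutoML()
automl.fit(df.drop(columns=["target"]), df["target"])  # "target" is a placeholder
```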
Can anyone help me with this error? I have tried other kinds of models and my data worked fine, but when I try MLJAR AutoML it shows me this error. | closed | 2021-08-16T13:29:41Z | 2021-09-02T11:34:47Z | https://github.com/mljar/mljar-supervised/issues/457 | [] | Lssousadc | 2 |
unit8co/darts | data-science | 2,080 | [BUG] I can't run my darts model with multi-thread | **Context**
I have a single time series to predict with one model.
However, I have 10 covariates and I exhaustively test every possible combination of them, resulting in 2^10 = 1024 models for this case. Running 1024 models sequentially is quite slow, so we decided to run them in parallel with multiple threads (we had this code working in `skforecast`).
**Describe the bug**
When we run the multi-thread with `DARTS-RegressionModel`, it returns the following error for some combinations:
```bash
File ""/usr/local/lib/python3.9/threading.py"", line 973, in _bootstrap_inner
self.run()
File ""/usr/local/lib/python3.9/threading.py"", line 910, in run"
1697804257535," self._target(*self._args, **self._kwargs)
File ""/app/furukawa/scripts/model_optimization.py"", line 362, in lags_grid_search
results = model_params_grid_search(
File ""/app/furukawa/scripts/model_optimization.py"", line 587, in model_params_grid_search
forecaster = RegressionModel(
File ""/usr/local/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py"", line 106, in __call__
return super().__call__(**all_params)
File ""/usr/local/lib/python3.9/site-packages/darts/models/forecasting/regression_model.py"", line 185, in __init__
super().__init__(add_encoders=add_encoders)
File ""/usr/local/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py"", line 2061, in __init__
super().__init__(add_encoders=add_encoders)
File ""/usr/local/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py"", line 134, in __init__
self._model_params = self._extract_model_creation_params()
File ""/usr/local/lib/python3.9/site-packages/darts/models/forecasting/forecasting_model.py"", line 1781, in _extract_model_creation_params
del self.__class__._model_call"
```
The error occurs inside darts itself.
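Since the traceback ends in `_extract_model_creation_params()` deleting a class attribute, model construction itself looks non-thread-safe, while the actual fitting may be fine. A possible workaround, untested and only a sketch with made-up data, would be to serialize just the constructor behind a lock:
```python
import threading
from concurrent.futures import ThreadPoolExecutor

import numpy as np
import pandas as pd
from darts import TimeSeries
from darts.models import RegressionModel

series = TimeSeries.from_series(
    pd.Series(np.random.rand(200), index=pd.date_range("2020-01-01", periods=200))
)
construction_lock = threading.Lock()

def fit_one(lags):
    # Only the constructor is serialized; fit() still runs concurrently.
    with construction_lock:
        model = RegressionModel(lags=lags)
    model.fit(series)
    return model

with ThreadPoolExecutor(max_workers=8) as pool:
    models = list(pool.map(fit_one, range(1, 9)))
```
If construction is the only racy part, this keeps most of the work parallel; otherwise switching to processes instead of threads would sidestep the shared class attribute entirely.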
**To Reproduce**
We are working on it.
**Expected behavior**
I would like to be able to run this code with multiple threads. We were able to use this approach with skforecast and sklearn, and I want to know whether multi-threading is possible with darts.
**System (please complete the following information):**
- Python version: 3.9.12
- darts version: 0.26.0
@pamboukian | closed | 2023-11-21T15:03:46Z | 2024-01-21T15:21:21Z | https://github.com/unit8co/darts/issues/2080 | [
"bug",
"triage"
] | guilhermeparreira | 4 |
kornia/kornia | computer-vision | 2,260 | Add output_size Importance to Augmentation Base Classes in Docs | ## 📚 Documentation
The current documentation provides a general guideline on which class to use for implementing a custom augmentation.
An important missing piece is an explanation of the ``output_size`` entry in ``params``, which needs to be overwritten whenever an augmentation changes the output size.
Without any remark on the [guideline page](https://kornia.readthedocs.io/en/latest/augmentation.base.html), one has to dig into the function documentation to discover this.
"docs :books:"
] | sebimarkgraf | 1 |
miguelgrinberg/Flask-Migrate | flask | 95 | application structure ?? | Hi,
I am reading your post at http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-iv-database and I am wondering how I can use this extension with the application structure from that post. My application structure is:
```
~/LargeApp
|-- run.py           # Starting the application
|-- config.py
|__ /env             # Virtual Environment
|__ /app             # Our Application Module
    |-- __init__.py  # Actual app
    |-- models.py
    |-- views.py
```
I am passing additional arguments to app.run(), like host and port. When I try to run manage.run() with these arguments I get an error.
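For reference, a minimal `manage.py` sketch of the Flask-Script + Flask-Migrate pattern, assuming the `app` package exposes `app` and `db` objects (those names are an assumption about `app/__init__.py`):
```python
# manage.py, next to run.py and config.py
from flask_script import Manager, Server
from flask_migrate import Migrate, MigrateCommand

from app import app, db  # assumed to be created in app/__init__.py

migrate = Migrate(app, db)
manager = Manager(app)
manager.add_command('db', MigrateCommand)
# Host/port go to the runserver command instead of app.run():
manager.add_command('runserver', Server(host='0.0.0.0', port=8080))

if __name__ == '__main__':
    manager.run()
```
With that in place, `python manage.py db migrate` / `python manage.py db upgrade` handles migrations and `python manage.py runserver` starts the server with the chosen host and port.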
| closed | 2016-01-06T22:00:51Z | 2019-01-13T22:20:41Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/95 | [
"question",
"auto-closed"
] | anselal | 3 |
Anjok07/ultimatevocalremovergui | pytorch | 917 | Please help! Error with Ensemble processing | Hi there!
I'm very new to this. I followed the recommendations in the thread and tried to process, but ran into this error.
Any help is appreciated!
Thanks,
Dolan | open | 2023-10-20T21:12:42Z | 2023-10-22T19:37:52Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/917 | [] | Taxmonkey | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 997 | I get that error while trying to install. What to do? | C:\Users\PC\Desktop\Real-Time-Voice-Cloning-master>pip install -r requirements.txt
Collecting inflect==5.3.0
Using cached inflect-5.3.0-py3-none-any.whl (32 kB)
Collecting librosa==0.8.1
Using cached librosa-0.8.1-py3-none-any.whl (203 kB)
Collecting matplotlib==3.5.1
Using cached matplotlib-3.5.1-cp37-cp37m-win_amd64.whl (7.2 MB)
Requirement already satisfied: numpy==1.20.3 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 4)) (1.20.3)
Requirement already satisfied: Pillow==8.4.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 5)) (8.4.0)
Collecting PyQt5==5.15.6
Using cached PyQt5-5.15.6-cp36-abi3-win_amd64.whl (6.7 MB)
Requirement already satisfied: scikit-learn==1.0.2 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 7)) (1.0.2)
Requirement already satisfied: scipy==1.7.3 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 8)) (1.7.3)
Collecting sounddevice==0.4.3
Using cached sounddevice-0.4.3-py3-none-win_amd64.whl (195 kB)
Requirement already satisfied: SoundFile==0.10.3.post1 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 10)) (0.10.3.post1)
Requirement already satisfied: tqdm==4.62.3 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 11)) (4.62.3)
Collecting umap-learn==0.5.2
Using cached umap_learn-0.5.2-py3-none-any.whl
Collecting Unidecode==1.3.2
Using cached Unidecode-1.3.2-py3-none-any.whl (235 kB)
Requirement already satisfied: urllib3==1.26.7 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from -r requirements.txt (line 14)) (1.26.7)
Collecting visdom==0.1.8.9
Using cached visdom-0.1.8.9-py3-none-any.whl
Collecting webrtcvad==2.0.10
Using cached webrtcvad-2.0.10.tar.gz (66 kB)
Preparing metadata (setup.py) ... done
Requirement already satisfied: packaging>=20.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (21.3)
Requirement already satisfied: joblib>=0.14 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (1.1.0)
Requirement already satisfied: pooch>=1.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (1.6.0)
Requirement already satisfied: numba>=0.43.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (0.55.1)
Requirement already satisfied: audioread>=2.0.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (2.1.9)
Requirement already satisfied: resampy>=0.2.2 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (0.2.2)
Requirement already satisfied: decorator>=3.0.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from librosa==0.8.1->-r requirements.txt (line 2)) (5.1.1)
Requirement already satisfied: python-dateutil>=2.7 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from matplotlib==3.5.1->-r requirements.txt (line 3)) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from matplotlib==3.5.1->-r requirements.txt (line 3)) (1.3.2)
Requirement already satisfied: cycler>=0.10 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from matplotlib==3.5.1->-r requirements.txt (line 3)) (0.11.0)
Requirement already satisfied: fonttools>=4.22.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from matplotlib==3.5.1->-r requirements.txt (line 3)) (4.29.0)
Requirement already satisfied: pyparsing>=2.2.1 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from matplotlib==3.5.1->-r requirements.txt (line 3)) (3.0.7)
Requirement already satisfied: PyQt5-sip<13,>=12.8 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from PyQt5==5.15.6->-r requirements.txt (line 6)) (12.9.0)
Requirement already satisfied: PyQt5-Qt5>=5.15.2 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from PyQt5==5.15.6->-r requirements.txt (line 6)) (5.15.2)
Requirement already satisfied: threadpoolctl>=2.0.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from scikit-learn==1.0.2->-r requirements.txt (line 7)) (3.0.0)
Requirement already satisfied: CFFI>=1.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from sounddevice==0.4.3->-r requirements.txt (line 9)) (1.15.0)
Requirement already satisfied: colorama in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from tqdm==4.62.3->-r requirements.txt (line 11)) (0.4.4)
Requirement already satisfied: pynndescent>=0.5 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from umap-learn==0.5.2->-r requirements.txt (line 12)) (0.5.6)
Requirement already satisfied: six in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (1.16.0)
Requirement already satisfied: tornado in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (6.1)
Requirement already satisfied: pyzmq in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (22.3.0)
Requirement already satisfied: torchfile in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (0.1.0)
Requirement already satisfied: jsonpatch in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (1.32)
Requirement already satisfied: websocket-client in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (1.2.3)
Requirement already satisfied: requests in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from visdom==0.1.8.9->-r requirements.txt (line 15)) (2.27.1)
Requirement already satisfied: pycparser in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from CFFI>=1.0->sounddevice==0.4.3->-r requirements.txt (line 9)) (2.21)
Requirement already satisfied: llvmlite<0.39,>=0.38.0rc1 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from numba>=0.43.0->librosa==0.8.1->-r requirements.txt (line 2)) (0.38.0)
Requirement already satisfied: setuptools in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from numba>=0.43.0->librosa==0.8.1->-r requirements.txt (line 2)) (47.1.0)
Requirement already satisfied: appdirs>=1.3.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from pooch>=1.0->librosa==0.8.1->-r requirements.txt (line 2)) (1.4.4)
Requirement already satisfied: idna<4,>=2.5 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from requests->visdom==0.1.8.9->-r requirements.txt (line 15)) (3.3)
Requirement already satisfied: charset-normalizer~=2.0.0 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from requests->visdom==0.1.8.9->-r requirements.txt (line 15)) (2.0.10)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from requests->visdom==0.1.8.9->-r requirements.txt (line 15)) (2021.10.8)
Requirement already satisfied: jsonpointer>=1.9 in c:\users\pc\appdata\local\programs\python\python37\lib\site-packages (from jsonpatch->visdom==0.1.8.9->-r requirements.txt (line 15)) (2.2)
Building wheels for collected packages: webrtcvad
Building wheel for webrtcvad (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\PC\AppData\Local\Programs\Python\Python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\PC\AppData\Local\Temp\pip-wheel-d_g4yjkd'
cwd: C:\Users\PC\AppData\Local\Temp\pip-install-i5z7dkyd\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\
Complete output (9 lines):
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying webrtcvad.py -> build\lib.win-amd64-3.7
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Failed building wheel for webrtcvad
Running setup.py clean for webrtcvad
Failed to build webrtcvad
Installing collected packages: webrtcvad, visdom, Unidecode, umap-learn, sounddevice, PyQt5, matplotlib, librosa, inflect
Running setup.py install for webrtcvad ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\PC\AppData\Local\Programs\Python\Python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\PC\AppData\Local\Temp\pip-record-epy40fs2\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\PC\AppData\Local\Programs\Python\Python37\Include\webrtcvad'
cwd: C:\Users\PC\AppData\Local\Temp\pip-install-i5z7dkyd\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\
Complete output (9 lines):
running install
running build
running build_py
creating build
creating build\lib.win-amd64-3.7
copying webrtcvad.py -> build\lib.win-amd64-3.7
running build_ext
building '_webrtcvad' extension
error: Microsoft Visual C++ 14.0 is required. Get it with "Build Tools for Visual Studio": https://visualstudio.microsoft.com/downloads/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\PC\AppData\Local\Programs\Python\Python37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"'; __file__='"'"'C:\\Users\\PC\\AppData\\Local\\Temp\\pip-install-i5z7dkyd\\webrtcvad_f90e9d7aafb5416faa2eb06f9cfa4074\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\PC\AppData\Local\Temp\pip-record-epy40fs2\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\PC\AppData\Local\Programs\Python\Python37\Include\webrtcvad' Check the logs for full command output. | closed | 2022-01-28T22:02:58Z | 2022-01-29T12:01:14Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/997 | [] | pofdzm | 1 |
521xueweihan/HelloGitHub | python | 2,867 | Self-recommended project: Awesome-Iwb, the most complete list of teaching-optimization software for interactive teaching panels / digital whiteboards | ## Recommended project
- Project URL: https://github.com/Awesome-Iwb/Awesome-Iwb
- Category: Markdown
- Project title: Awesome-Iwb - the most complete software collection for Seewo (希沃) and other interactive teaching panels
- Project description: A curated collection of practical Windows software for interactive teaching panels, teaching touch screens and infrared whiteboards from brands such as Seewo (希沃), Hitevision (鸿合), BOE (京东方) and Dongfang Zhongyuan (东方中原), written for the people who look after classroom teaching equipment. It should be the most complete collection of its kind on GitHub. Many of the developers are students: the software bundled with their classroom panels was not very usable, so they built better teaching aids themselves, and this project collects those tools so that more people can discover this niche area.
> I am a student myself; I put this list together simply to help everyone find suitable software more quickly.
- Highlights:
  1. Every project comes with a description, and the list is updated frequently.
  2. It should be the most complete collection of its kind on GitHub.
- Planned updates:
  1. Further improve each project's description, add a continuously updated review section, and give every project a banner in a unified style.
  2. Launch a web version.
| open | 2024-12-15T18:00:18Z | 2025-01-26T04:03:41Z | https://github.com/521xueweihan/HelloGitHub/issues/2867 | [] | aloisp28 | 4 |
databricks/koalas | pandas | 1,516 | subset parameter in DataFrame.replace | ```python
>>> kdf.replace('Mjolnir', 'Stormbuster', subset=('weapon',))
name weapon
0.342778 Ironman Mark-45
0.087444 Captain America Shield
0.179212 Thor Stormbuster
0.522174 Hulk Smash
>>> pdf.replace('Mjolnir', 'Stormbuster', subset=('weapon',))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: replace() got an unexpected keyword argument 'subset'
```
@HyukjinKwon
When I tried adding the above test to the test case, I found that pandas does not support the `subset` parameter. So when I looked into the old version, pandas didn't support the `subset` parameter from the beginning. I found that the current `replace` parameter matches the spark `replace`. So, what are your thoughts on deleting a `subset` from Koalas for pandas?
ref>
[pandas 0.9.0 DataFrame.replace](https://pandas.pydata.org/pandas-docs/version/0.9.0/generated/pandas.DataFrame.replace.html?highlight=replace#pandas.DataFrame.replace)
[pandas 1.0.1 DataFrame.replace](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.replace.html)
[pyspark 2.1.3 DataFrame.replace](https://spark.apache.org/docs/2.1.3/api/python/pyspark.sql.html#pyspark.sql.DataFrame.replace)
_Originally posted by @beobest2 in https://github.com/_render_node/MDI0OlB1bGxSZXF1ZXN0UmV2aWV3Q29tbWVudDQyNzAxNjQ0Mw==/comments/review_comment_ | closed | 2020-05-19T04:09:27Z | 2020-06-11T19:23:47Z | https://github.com/databricks/koalas/issues/1516 | [
"enhancement"
] | beobest2 | 4 |
pallets-eco/flask-wtf | flask | 376 | Can the expiration time of csrf_token be extended | closed | 2019-09-19T02:22:30Z | 2021-05-26T00:55:00Z | https://github.com/pallets-eco/flask-wtf/issues/376 | [] | LIMr1209 | 2 |
|
gradio-app/gradio | deep-learning | 10,413 | Incorrect documentation for gradio.Chatbot | ### Describe the bug
Python syntax error in https://www.gradio.app/docs/gradio/chatbot
For example `SyntaxError: ':' expected after dictionary key` in [Message format](https://www.gradio.app/docs/gradio/chatbot#message-format):
```py3
import gradio as gr
history = [
{"role": "assistant", content="I am happy to provide you that report and plot."}
{"role": "assistant", content=gr.Plot(value=make_plot_from_file('quaterly_sales.txt'))}
]
with gr.Blocks() as demo:
gr.Chatbot(history)
demo.launch()
```
The syntax error occurs at `content="..."` on line 4 of the snippet.
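For reference, the entries presumably need string keys with `:` instead of `=`, plus a comma between the two dictionaries. Keeping the docs' own `make_plot_from_file` placeholder, the corrected history would look like:
```python
history = [
    {"role": "assistant", "content": "I am happy to provide you that report and plot."},
    {"role": "assistant", "content": gr.Plot(value=make_plot_from_file("quaterly_sales.txt"))},
]
```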
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Copy example code in https://www.gradio.app/docs/gradio/chatbot
```python
import gradio as gr
history = [
{"role": "assistant", content="I am happy to provide you that report and plot."}
{"role": "assistant", content=gr.Plot(value=make_plot_from_file('quaterly_sales.txt'))}
]
with gr.Blocks() as demo:
gr.Chatbot(history)
demo.launch()
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
gradio==5.12.0
also
gradio==5.13.0
```
### Severity
I can work around it | closed | 2025-01-23T07:04:57Z | 2025-01-23T22:01:19Z | https://github.com/gradio-app/gradio/issues/10413 | [
"bug",
"docs/website"
] | NewJerseyStyle | 2 |
huggingface/datasets | computer-vision | 6,842 | Datasets with files with colon : in filenames cannot be used on Windows | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons ":" in filenames. Such characters should be converted into alternative strings.
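One possible shape for such a conversion, purely illustrative and not an existing `datasets` API, would be to substitute the characters Windows forbids before writing extracted files:
```python
import ntpath
import re

_WINDOWS_FORBIDDEN = re.compile(r'[<>:"|?*]')

def safe_windows_name(path: str) -> str:
    # Keep the drive colon (e.g. "C:") and replace forbidden characters elsewhere.
    drive, rest = ntpath.splitdrive(path)
    return drive + _WINDOWS_FORBIDDEN.sub("_", rest)

print(safe_windows_name(r"C:\data\audio:chunk1.flac"))  # C:\data\audio_chunk1.flac
```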
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCommons/peoples_speech
### Expected behavior
Does not crash during extraction
### Environment info
Windows 11, NTFS filesystem, Python 3.12
| open | 2024-04-26T00:14:16Z | 2024-04-26T00:14:16Z | https://github.com/huggingface/datasets/issues/6842 | [] | jacobjennings | 0 |