repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (sequence) | user_login (string) | comments_count (int64)
---|---|---|---|---|---|---|---|---|---|---|---|
wagtail/wagtail | django | 12,588 | Add Wagtail prefix to non-taggit settings | ### Is your proposal related to a problem?
All of our settings are prefixed with `WAGTAIL_` except for two Wagtail specific tagging settings.
* [`TAG_SPACES_ALLOWED`](https://docs.wagtail.org/en/stable/reference/settings.html#tag-spaces-allowed)
* [`TAG_LIMIT`](https://docs.wagtail.org/en/stable/reference/settings.html#tag-limit)
### Describe the solution you'd like
Add an alias for these two non-Taggit settings so they have a namespaced `WAGTAIL_` prefix, while still supporting the current names but triggering a deprecation warning.
* `TAG_SPACES_ALLOWED` -> `WAGTAIL_TAGS_SPACES_ALLOWED`
* `TAG_LIMIT` -> `WAGTAIL_TAGS_LIMIT`
See how a similar rename was done a while ago https://github.com/wagtail/wagtail/pull/11525 (including warnings `RemovedInWagtail70Warning`).
This way we have Wagtail prefixed (namespaced) settings once the next major release bump happens.
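A minimal sketch of how the alias lookup could behave (the helper name, import path, and fallback default are assumptions; the warning class is the one named above):
```python
import warnings

from django.conf import settings
from wagtail.utils.deprecation import RemovedInWagtail70Warning  # import path assumed

def get_tag_spaces_allowed() -> bool:
    """Read the new setting, falling back to the legacy name with a deprecation warning."""
    if hasattr(settings, "WAGTAIL_TAGS_SPACES_ALLOWED"):
        return settings.WAGTAIL_TAGS_SPACES_ALLOWED
    if hasattr(settings, "TAG_SPACES_ALLOWED"):
        warnings.warn(
            "TAG_SPACES_ALLOWED is deprecated, use WAGTAIL_TAGS_SPACES_ALLOWED instead",
            RemovedInWagtail70Warning,
        )
        return settings.TAG_SPACES_ALLOWED
    return True  # assumed documented default for TAG_SPACES_ALLOWED
```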
### Additional context
PR #12564 introduced a hard limit of 10 on the number of tags returned, to avoid fetching every possible item, which was causing performance issues for large databases - see https://github.com/wagtail/wagtail/issues/12415 .
There may be a future desire to make this hard limit configurable, but for now that is out of scope for this issue.
### Working on this
* PR WIP already https://github.com/wagtail/wagtail/pull/12639
* Anyone can contribute to this. View our [contributing guidelines](https://docs.wagtail.org/en/latest/contributing/index.html), add a comment to the issue once you’re ready to start.
| closed | 2024-11-16T05:38:42Z | 2025-02-04T10:49:52Z | https://github.com/wagtail/wagtail/issues/12588 | [
"type:Enhancement",
"component:Tagging"
] | aayushman-singh | 11 |
ansible/ansible | python | 84,062 | yum module should have option to list updates without "--show-duplicates" argument | ### Summary
I am trying to get the list of packages that have updates available, and it should only list the latest available version of each package.
```
- name: Use the Yum module to get a list of installed packages
yum:
list: updates
```
Because this runs `yum list --show-duplicates <package>` in the backend, the command ends up listing the same packages with multiple available versions.
There should be a parameter on the yum and dnf modules to run `yum list` without `--show-duplicates`.
### Issue Type
Feature Idea
### Component Name
yum and dnf
### Additional Information
We could use this feature as follow:
```
- name: Use the Yum module to get a list of installed packages
yum:
list: updates
show-duplicates: false
```
This would help by listing only the latest available version of each package that can be updated.
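In the meantime, one way to approximate this on the controller (a sketch; the exact result keys can differ between yum and dnf versions) is to register the output of `list: updates` and deduplicate it by name, for example with `{{ updates.results | map(attribute='name') | unique | list }}`.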
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-10-07T14:44:49Z | 2024-10-22T13:00:08Z | https://github.com/ansible/ansible/issues/84062 | [
"module",
"feature"
] | mehulgogri-hpe | 2 |
predict-idlab/plotly-resampler | data-visualization | 329 | [BUG] Support for plotly 6? (TypeError) | **Describe the bug** :crayon:
Plotly 6.0.0 RC0 is now available from pypi as a prerelease version but causes some issues.
**Reproducing the bug** :mag:
```python
from plotly_resampler import FigureResampler, FigureWidgetResampler
import plotly.graph_objects as go
fig = FigureWidgetResampler(go.Figure())
```
Creates an exception:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 1
----> 1 fig = FigureWidgetResampler(go.Figure())
File [~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\plotly_resampler\figure_resampler\figurewidget_resampler.py:96](http://localhost:8888/lab/tree/~/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0/LocalCache/local-packages/Python312/site-packages/plotly_resampler/figure_resampler/figurewidget_resampler.py#line=95), in FigureWidgetResampler.__init__(self, figure, convert_existing_traces, default_n_shown_samples, default_downsampler, default_gap_handler, resampled_trace_prefix_suffix, show_mean_aggregation_size, convert_traces_kwargs, verbose)
92 elif isinstance(figure, (dict, list)):
93 # A single trace dict or a list of traces
94 f.add_traces(figure)
---> 96 super().__init__(
97 f,
98 convert_existing_traces,
99 default_n_shown_samples,
100 default_downsampler,
101 default_gap_handler,
102 resampled_trace_prefix_suffix,
103 show_mean_aggregation_size,
104 convert_traces_kwargs,
105 verbose,
106 )
108 if isinstance(figure, AbstractFigureAggregator):
109 # Copy the `_hf_data` if the previous figure was an AbstractFigureAggregator
110 # And adjust the default max_n_samples and
111 self._hf_data.update(
112 self._copy_hf_data(figure._hf_data, adjust_default_values=True)
113 )
File [~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\plotly_resampler\figure_resampler\figure_resampler_interface.py:150](http://localhost:8888/lab/tree/~/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0/LocalCache/local-packages/Python312/site-packages/plotly_resampler/figure_resampler/figure_resampler_interface.py#line=149), in AbstractFigureAggregator.__init__(self, figure, convert_existing_traces, default_n_shown_samples, default_downsampler, default_gap_handler, resampled_trace_prefix_suffix, show_mean_aggregation_size, convert_traces_kwargs, verbose)
148 # make sure that the UIDs of these traces do not get adjusted
149 self._data_validator.set_uid = False
--> 150 self.add_traces(figure.data, **convert_traces_kwargs)
151 else:
152 super().__init__(figure)
File [~\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\plotly_resampler\figure_resampler\figure_resampler_interface.py:1172](http://localhost:8888/lab/tree/~/AppData/Local/Packages/PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0/LocalCache/local-packages/Python312/site-packages/plotly_resampler/figure_resampler/figure_resampler_interface.py#line=1171), in AbstractFigureAggregator.add_traces(self, data, max_n_samples, downsamplers, gap_handlers, limit_to_views, **traces_kwargs)
1169 assert trace is not None
1170 data[i] = trace
-> 1172 return super(self._figure_class, self).add_traces(data, **traces_kwargs)
TypeError: super(type, obj): obj must be an instance or subtype of type
```
**Expected behavior** :wrench:
> Please give a clear and concise description of what you expected to happen.
**Screenshots** :camera_flash:
> If applicable, add screenshots to help explain your problem.
**Environment information**: (please complete the following information)
- OS: Windows 11
- Python environment:
- Python version: 3.12.7
- plotly-resampler environment: e.g.: Jupyter(lab), Dash web app (which browser)
- plotly-resampler version: 0.10.0
**Additional context**
Add any other context about the problem here.
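A possible stopgap until plotly-resampler supports the 6.x series, assuming the incompatibility is specific to Plotly 6, is to keep the prerelease out of the environment, e.g. `pip install "plotly<6"`.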
| closed | 2024-12-16T19:15:24Z | 2025-02-19T09:32:11Z | https://github.com/predict-idlab/plotly-resampler/issues/329 | [
"bug"
] | auxym | 1 |
biolab/orange3 | data-visualization | 6,228 | Change Refresh in MDS widget to Delay | **What's your use case?**
We used the Refresh option with MDS for educational purposes to show that MDS performs optimization and is an embedding method. The points moved in every step of optimization, and one could see the convergence to the final solution. Not that this is too enlightening or required, but it was just an extra feature that could help when introducing MDS.

Since computers got faster, and for small data sets such as the zoo dataset, setting the refresh even to Every does not help. The optimization converges instantly, and the movement of points can not be tracked.
**What's your proposed solution?**
Replace the Refresh with Delay, with options 0.1s, 0.2s, 0.5s, 1s, which defines that max delay between two successive iterations.
**Are there any alternative solutions?**
Remove the refresh option altogether.
| closed | 2022-11-29T13:52:56Z | 2023-01-20T17:01:26Z | https://github.com/biolab/orange3/issues/6228 | [
"wish",
"meal"
] | BlazZupan | 1 |
deepfakes/faceswap | machine-learning | 408 | Sort.py issue: ModuleNotFoundError: No module named 'lib.cli' |
## Expected behavior
Launch the sort.py tool and do some sorting on folders
## Actual behavior
sort.py doesn't launch, instead I get an error where the "lib.cli" module is not found:
I type this: python .\sort.py and get this:
Traceback (most recent call last):
File ".\sort.py", line 11, in <module>
from lib.cli import FullPaths
ModuleNotFoundError: No module named 'lib.cli'
## Steps to reproduce
launch python .\sort.py in the tools folder and this will happen.
What package needs to be installed for this module to load? (So that I can try reinstalling the package and see if that fixes it.)
All the requirements seem to be met, no red flag that I can see otherwise, and the train and convert scripts work great otherwise.
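One possible cause (an assumption based on the repository layout, not a confirmed diagnosis): `lib` lives in the faceswap root, so launching `sort.py` from inside the `tools` folder leaves the root off the module search path. Pointing `PYTHONPATH` at the faceswap root before launching, e.g. `set PYTHONPATH=C:\path\to\faceswap` (path hypothetical), would be one way to check this.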
## Other relevant information
- **Operating system and version:** Windows 10 - 64 bit
- **Python version:** 3.6.4
- **Faceswap version:** latest commit 6f004e4b24b2c7abb25c94e8466c1c3614576025
- **Faceswap method:** tried on both CPU and GPU
| closed | 2018-05-24T15:01:57Z | 2018-05-24T19:40:33Z | https://github.com/deepfakes/faceswap/issues/408 | [] | deepfaceswap12345 | 4 |
modelscope/modelscope | nlp | 901 | Error when installing the audio extra | pip install modelscope[audio] -f https://modelscope.oss-cn-beijing.aliyuncs.com/releases/repo.html
The error:
File "<string>", line 38, in finalize_options
AttributeError: 'dict' object has no attribute '__NUMPY_SETUP__'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output. | closed | 2024-07-04T06:08:29Z | 2024-08-11T01:58:33Z | https://github.com/modelscope/modelscope/issues/901 | [
"Stale"
] | maydayxx | 3 |
encode/httpx | asyncio | 3,411 | Zstandard data is incomplete on HEAD requests | Tried to do a HEAD request with a server answering with `zstd` encoding by default but since the content is empty `ZStandardDecoder` is raising `DecodingError("Zstandard data is incomplete")` when calling `flush()`.
Tried to fix the problem by adding a check if `ret` is empty:
```python
def flush(self) -> bytes:
ret = self.decompressor.flush() # note: this is a no-op
# If content is empty, no need to wait for EOF
if not self.decompressor.eof and len(ret) > 0:
raise DecodingError("Zstandard data is incomplete") # pragma: no cover
return bytes(ret)
```
It is working for me but I don't know if there is other implications with this modification.
URLs to test: https://password-hashing.net/ or https://3proxy.org/
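A minimal reproduction sketch based on the report (it assumes the `zstandard` package is installed so httpx negotiates zstd, and uses one of the URLs above):
```python
import httpx

# Per the report, the server answers HEAD with `content-encoding: zstd`
# but an empty body, so the decoder's flush() raises
# DecodingError("Zstandard data is incomplete") on 1.3.0.
httpx.head("https://password-hashing.net/")
```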
---
- [X] Initially raised as discussion https://github.com/encode/httpx/discussions/3408 | closed | 2024-11-22T09:32:39Z | 2024-11-22T11:42:53Z | https://github.com/encode/httpx/issues/3411 | [] | AlexMili | 0 |
dunossauro/fastapi-do-zero | sqlalchemy | 100 | Suggestion about lesson 2 | Duno, do you think it would be worthwhile to add a hyperlink to this site, [iana](https://www.iana.org/assignments/http-status-codes/http-status-codes.xhtml), when you were commenting on status codes in lesson 2?

That site specifies all the possible status code outputs; I don't know whether it would be interesting for people's curiosity. | closed | 2024-02-26T12:22:05Z | 2024-02-28T20:39:35Z | https://github.com/dunossauro/fastapi-do-zero/issues/100 | [] | azmovi | 1 |
ray-project/ray | machine-learning | 50,961 | [Feedback] Feedback for ray + uv | Hello everyone! As of [Ray 2.43.0](https://github.com/ray-project/ray/releases/tag/ray-2.43.0), we have launched a new integration with `uv run` that we are super excited to share with you all. This will serve as the main Github issue to track any issues or feedback that ya'll might have while using this.
Please share any success stories, configs, or just cool discoveries that you might have while running uv + Ray! We are excited to hear from you.
To read more about uv + Ray, check out [our new blog post here](https://www.anyscale.com/blog/uv-ray-pain-free-python-dependencies-in-clusters). | open | 2025-02-27T21:33:22Z | 2025-03-21T17:54:50Z | https://github.com/ray-project/ray/issues/50961 | [] | cszhu | 17 |
vitalik/django-ninja | rest-api | 870 | [BUG] Enum choices | v 1.0b2
<img width="1527" alt="SCR-20231003-jssq" src="https://github.com/vitalik/django-ninja/assets/95222/7298a46d-7274-4a4e-befa-0a34874695b4">
<img width="1154" alt="SCR-20231003-jsyd" src="https://github.com/vitalik/django-ninja/assets/95222/3d6f5927-24e1-4fbf-99e4-ee00c339e607">
| closed | 2023-10-03T07:39:34Z | 2023-10-18T15:15:40Z | https://github.com/vitalik/django-ninja/issues/870 | [
"bug",
"v1"
] | vitalik | 0 |
agronholm/anyio | asyncio | 608 | On CPython, `TLSListener.handle_handshake_error` on asyncio logs `"NoneType: None"` instead of the error | ### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
5f208ee84b2321e773ccc4e8ed2fbb7ca921c5a9
### Python version
CPython 3.11.5
### What happened?
on CPython (but not PyPy), `TLSListener.handle_handshake_error` on asyncio (but not Trio) logs `"NoneType: None"` instead of the error.
i believe this is due to https://github.com/python/cpython/issues/108668.
### How can we reproduce the bug?
40fd35537b7e113f3ca062518f99b4daf1a857a2 | closed | 2023-08-30T06:21:49Z | 2023-08-30T08:53:14Z | https://github.com/agronholm/anyio/issues/608 | [
"bug"
] | gschaffner | 0 |
onnx/onnx | machine-learning | 6,432 | export T5 model to onnx | # Bug Report
Hello,
I am using the https://huggingface.co/Ahmad/parsT5-base model.
I want to export it to ONNX using `python -m transformers.onnx --model="Ahmad/parsT5-base" onnx/`,
but I get this error:
```
/usr/local/lib/python3.10/dist-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: '/usr/local/lib/python3.10/dist-packages/torchvision/image.so: undefined symbol: _ZN3c104cuda9SetDeviceEi'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
Framework not requested. Using torch to export to ONNX.
/usr/local/lib/python3.10/dist-packages/torch/_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
Using framework PyTorch: 2.0.0+cu117
Overriding 1 configuration item(s)
- use_cache -> False
/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py:1092: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if causal_mask.shape[1] < attention_mask.shape[1]:
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
```
could you please help me? | closed | 2024-10-08T21:08:39Z | 2024-10-08T21:51:41Z | https://github.com/onnx/onnx/issues/6432 | [] | arefekh | 1 |
Miserlou/Zappa | django | 2,155 | message about python-dateutil<2.7.0 appearing | the message:
(python-dateutil 2.8.1 (c:\users\\*****\site-packages), Requirement.parse('python-dateutil<2.7.0'), {'zappa'})
appears in several places
I use python-dateutil==2.8.1 and everything works fine and I use a lot of zappa functionality.
can the requirement be upgraded to newer version of python-dateutil?
tnx | open | 2020-08-19T19:42:49Z | 2020-11-26T14:45:42Z | https://github.com/Miserlou/Zappa/issues/2155 | [] | akobig | 2 |
JoeanAmier/TikTokDownloader | api | 121 | UnicodeEncodeError: 'latin-1' codec can't encode character | 请输入作品链接: 3.82 Qkc:/ b@a.nd 10/27 闪电姐,为了你我愿意关注河野华 # 少女前线2追放 https://v.douyin.com/i8pWN5MU/ 复制此链接,打开Dou音搜索,直接观看视频!
Traceback (most recent call last):
File "main.py", line 343, in <module>
File "main.py", line 321, in run
File "main.py", line 217, in main_menu
File "main.py", line 286, in compatible
File "main.py", line 65, in inner
File "main.py", line 226, in complete
File "src\main_complete.py", line 883, in run
File "src\main_complete.py", line 406, in works_interactive
File "src\main_complete.py", line 422, in input_links_acquisition
File "src\DataAcquirer.py", line 51, in inner
File "src\DataAcquirer.py", line 500, in run
File "src\DataAcquirer.py", line 111, in send_request
File "requests\api.py", line 59, in request
File "requests\sessions.py", line 589, in request
File "requests\sessions.py", line 703, in send
File "requests\adapters.py", line 486, in send
File "urllib3\connectionpool.py", line 791, in urlopen
File "urllib3\connectionpool.py", line 497, in _make_request
File "urllib3\connection.py", line 394, in request
File "urllib3\connection.py", line 308, in putheader
File "http\client.py", line 1292, in putheader
UnicodeEncodeError: 'latin-1' codec can't encode character '\u2026' in position 48: ordinal not in range(256)
[15720] Failed to execute script 'main' due to unhandled exception!
Both Douyin videos and comments hit this encoding error. | closed | 2023-12-29T03:08:44Z | 2023-12-29T06:49:33Z | https://github.com/JoeanAmier/TikTokDownloader/issues/121 | [] | kongye-su | 1 |
CTFd/CTFd | flask | 1,814 | Markdown quotes don't render right | I don't know why but for some reason things like
```
> quote
```
don't seem to render right when they should. | closed | 2021-02-27T15:58:05Z | 2021-03-16T20:31:55Z | https://github.com/CTFd/CTFd/issues/1814 | [
"completed"
] | ColdHeat | 0 |
521xueweihan/HelloGitHub | python | 2,389 | Project recommendation: PDF.js | ## Recommended project
<!-- This is the entry point for recommending projects to the HelloGitHub monthly; self-recommendations and recommendations of open-source projects are welcome. The only requirement: please introduce the project following the prompts below. -->
<!-- Click "Preview" above to view your submission immediately -->
<!-- Only open-source projects on GitHub are accepted; please fill in the GitHub project URL -->
- Project URL: https://github.com/mozilla/pdf.js
<!-- Please choose from: C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Rust, Swift, Other, Books, Machine Learning -->
- Category: JS
<!-- Please describe what it does in about 20 characters, like an article title that is clear at a glance -->
- Project title: PDF.js, a PDF reader implemented in JavaScript.
<!-- What is this project, what can it be used for, what are its features or which pain points does it solve, what scenarios is it suited to, and what can beginners learn from it? Length 32-256 characters -->
- Project description: [PDF.js](https://mozilla.github.io/pdf.js/) is a Portable Document Format (PDF) viewer built with HTML5.
<!-- What makes it stand out? What features does it have compared with similar projects? -->
- Highlights: PDF.js is community-driven and backed by Mozilla.
- Example code: (optional)
- Screenshots: (optional) gif/png/jpg
- Follow-up update plan: community-driven
| open | 2022-10-02T06:00:05Z | 2022-10-25T00:38:42Z | https://github.com/521xueweihan/HelloGitHub/issues/2389 | [
"JavaScript 项目"
] | XYZscratcher | 0 |
ivy-llc/ivy | tensorflow | 27,935 | Fix Ivy Failing Test: paddle - elementwise.add | closed | 2024-01-17T06:06:57Z | 2024-01-18T09:52:07Z | https://github.com/ivy-llc/ivy/issues/27935 | [
"Sub Task"
] | MuhammadNizamani | 0 |
|
deezer/spleeter | tensorflow | 673 | [Bug] Spleeter only processes the first minute of audio | - [x] I didn't find a similar issue already open.
- [x] I read the documentation (README AND Wiki)
- [x] I have installed FFMpeg
- [x] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
When trying to separate the music using any of the tracks (2/4/5 stems) the spleeter processes the command normally, but the final audio is only 1 minute long.
## Step to reproduce
1. Installed using `pip`
2. Run `python3 -m spleeter separate -p spleeter:2stems -o output audio.mp3`
3. Receive "succesfully" message
4. The final audio is just 1 minute
## Output
```bash
INFO:spleeter:File output/audio/vocals.wav written succesfully
INFO:spleeter:File output/audio/accompaniment.wav written succesfully
```
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Linux (Kubuntu 20.04) |
| Installation type | pip |
| RAM available | 8GB |
| Hardware spec | CPU (I5 3ª Generation) |
## Additional context
It started happening after the September 3rd update.
| closed | 2021-10-30T16:18:56Z | 2022-07-31T18:40:19Z | https://github.com/deezer/spleeter/issues/673 | [
"bug",
"invalid"
] | LoboMetalurgico | 5 |
dsdanielpark/Bard-API | api | 54 | cannot import | When trying to import the package, I got the following error
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with LibreSSL 2.8.3. See: https://github.com/urllib3/urllib3/issues/2168
Please advise which urllib3 version to use. | closed | 2023-06-05T16:35:38Z | 2023-06-07T12:14:48Z | https://github.com/dsdanielpark/Bard-API/issues/54 | [] | todo | 2 |
koxudaxi/datamodel-code-generator | pydantic | 1,456 | Generates model containing name confusion with imported module | **Describe the bug**
Code is generated that resembles:
```python
from datetime import date
class ActivityBase(BaseModel):
date: Optional[date] = Field(
None, description="The date the Activity was performed (as a ISO-8601 date)"
)
```
`date: Optional[date]` is a recursive reference that crashes under Pydantic 2.
**To Reproduce**
Example schema:
```json
{
"openapi": "3.0.0",
"info": {
"title": "API Documentation",
"contact": {
"name": "API Support",
"email": "api@api.com"
},
"description": "API documentation",
"version": "v4"
},
"components": {
"schemas": {
"Activity_base": {
"type": "object",
"properties": {
"date": {
"type": "string",
"format": "date",
"description": "The date the Activity was performed (as a ISO-8601 date)"
}
}
}
}
}
}
```
Used commandline:
```
$ datamodel-codegen --use-standard-collections --use-schema-description --use-double-quotes --input-file-type openapi --target-python-version 3.10 --encoding utf8 --input openapi.json --output models.py
```
Importing `models.py` with Pydantic 2.1.1 will then crash with a lengthy stack trace ending in:
```
RecursionError: maximum recursion depth exceeded
```
**Expected behavior**
The generated code could use namespaced imports to prevent conflicting with common names like 'date'. For example,
```python
import datetime
class ActivityBase(BaseModel):
date: Optional[datetime.date] = Field(
None, description="The date the Activity was performed (as a ISO-8601 date)"
)
```
explicitly identifies the intended reference (and does not crash).
**Version:**
- OS: Windows 11
- Python version: 3.11.0
- datamodel-code-generator version: 0.21.1
| closed | 2023-07-27T01:49:37Z | 2025-02-27T17:59:17Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1456 | [
"bug",
"help wanted"
] | syntaxaire | 5 |
psf/black | python | 4,205 | `black` ignores file in current folder if current folder is `git` workspace and (parent of) current folder is a symbolic link | **Describe the bug**
`black` incorrectly ignores a file in the current folder
**To Reproduce**
```
(.venv) C:\>dir
12/21/2023 10:19 AM <JUNCTION> Code [C:\ws]
...
02/02/2024 08:33 AM <DIR> ws
(.venv) C:\>cd Code
(.venv) C:\Code>mkdir Bug
(.venv) C:\Code>cd Bug
(.venv) C:\Code\Bug>git init
Initialized empty Git repository in C:/ws/Bug/.git/
(.venv) C:\Code\Bug>echo 1 > bug.py
(.venv) C:\Code\Bug>black --verbose bug.py
Identified `C:\ws\Bug` as project root containing a .git directory.
bug.py ignored: is a symbolic link that points outside C:\ws\Bug
No Python files are present to be formatted. Nothing to do 😴
```
**Expected behavior**
`black` should not ignore `bug.py`
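A possible workaround while this is open (an assumption based on the ignore message above, not a confirmed fix): run black from the resolved project root so the file path and the detected root agree, e.g. `cd C:\ws\Bug` followed by `black bug.py`.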
**Environment**
black, 24.1.2.dev11+g632f44bd68 (compiled: no)
Python (CPython) 3.12.1
Windows 10 22H2 | closed | 2024-02-02T07:43:13Z | 2024-02-15T07:19:47Z | https://github.com/psf/black/issues/4205 | [
"T: bug",
"C: file collection"
] | bersbersbers | 5 |
widgetti/solara | flask | 498 | What's the best way to control a component lifecycle? | I have more experience developing UIs with Vue, and when using Solara I miss being able to control the lifecycle of a component. Sometimes I want to trigger a component callback or some other action once on init/on mounted, for example, and I'm not completely sure how to do this with Solara components.
For these cases, is the best approach to use a [vue component](https://solara.dev/examples/general/vue_component), or are there other workarounds?
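A sketch of one hook-based option (assuming `solara.use_effect` covers the "run once on mount" case; not necessarily the recommended pattern):
```python
import solara

@solara.component
def Page():
    def on_mount():
        print("mounted")  # runs after the first render
        return lambda: print("unmounted")  # cleanup runs on unmount

    # An empty dependency list means the effect runs only once.
    solara.use_effect(on_mount, [])
    solara.Text("Hello")
```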
| closed | 2024-02-09T12:17:54Z | 2024-02-12T23:38:59Z | https://github.com/widgetti/solara/issues/498 | [] | itepifanio | 2 |
deepset-ai/haystack | machine-learning | 8,352 | Deprecate `Pipeline` init argument `debug_path` as it's unused | `debug_path` is completely unused and a remnant of a debugging approach inherited from Haystack 1.
We should mark it for deprecation and remove it. | closed | 2024-09-11T09:16:10Z | 2024-09-13T08:03:30Z | https://github.com/deepset-ai/haystack/issues/8352 | [
"breaking change",
"P2"
] | silvanocerza | 0 |
wger-project/wger | django | 1,792 | Invalid token error when registering account | ## Steps to Reproduce
<!-- Please include as many steps to reproduce so that we can replicate the problem. -->
1. ... I clicked register new account
2. ... I filled in all fields
3. ... clicked 'submit' or whatever it is
4. Error screen
**Expected results:** <!-- what did you expect to see? --> next step in verification process
**Actual results:** <!-- what did you see? --> error
Token not found. See attachment
<details>
<summary>Logs</summary>
<!--
Any logs you think would be useful (if you have a local instance)
-->
```bash
```
</details>
| closed | 2024-10-13T21:47:22Z | 2024-11-09T16:33:23Z | https://github.com/wger-project/wger/issues/1792 | [] | Afnyc252 | 2 |
jupyterlab/jupyter-ai | jupyter | 950 | FIM support for inline code completion | Just to ensure that FIM is on the radar at jupyter-ai I leave this comment here.
FIM ([Fill-in-the-Middle](https://medium.com/@SymeCloud/what-is-fim-and-why-does-it-matter-in-llm-based-ai-53f33385585b), [Fill-in-the-Middle](https://codeium.com/blog/why-code-completion-needs-fill-in-the-middle)) is to my knowledge a widely used approach in prompt-generation for inline code completion. Models are trained to understand a short list of keywords which describe where the "hole" in a text/code needs to be filled.
Google for example describes these keywords (for Gemma) here: https://ai.google.dev/gemma/docs/formatting#formatting-fim and shows an example here: https://ai.google.dev/gemma/docs/formatting#fim-example
The AI code assistant `twinny` uses the following FIM prompt generator for code completion with the `StarCoder` model:
[`getPrompt(..)`](https://github.com/twinnydotdev/twinny/blob/v3.14.0/src/extension/providers/completion.ts#L446) -> `getFimPrompt(..)` -> `getFimTemplateChosen(..)` -> `getFimPromptTemplateOther(..)` ->
[`return <fim_prefix>${fileContext}\n${heading}${prefix}<fim_suffix>${suffix}<fim_middle>`](https://github.com/twinnydotdev/twinny/blob/v3.14.0/src/extension/fim-templates.ts#L61) | open | 2024-08-12T16:54:27Z | 2024-08-14T11:03:06Z | https://github.com/jupyterlab/jupyter-ai/issues/950 | [
"enhancement"
] | jhgoebbert | 4 |
lepture/authlib | flask | 629 | Quoting (URL-encoding) Base authentication username / password is incorrect | **Describe the bug**
Version `1.3.0` introduces a change in the encoding of the Basic Authentication header (commit d2d1f494e625b7ee9c64f70165bd6d5faf28fe21). In the comment to the commit you're mentioning this RFC section: https://datatracker.ietf.org/doc/html/rfc6749#section-2.3.1, but I think that rather this one is applicable:
https://datatracker.ietf.org/doc/html/rfc2617#section-2
I've never seen basic auth username and password being url encoded before. The servers (e.g. cloud foundry UAA) seem to reject such requests. It also doesn't make sense to url-encode it if it's then base64-encoded
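For reference, a sketch of the header construction described by RFC 2617 (no URL-encoding step, just base64 over `username:password`):
```python
import base64

def basic_auth_header(username: str, password: str) -> str:
    # RFC 2617: credentials are "userid:password", base64-encoded as-is.
    credentials = f"{username}:{password}".encode("utf-8")
    return "Basic " + base64.b64encode(credentials).decode("ascii")
```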
**Error Stacks**
```
--
```
**To Reproduce**
A minimal example to reproduce the behavior:
happens always in version 1.3.0 if basic authentication is used to get a token.
**Expected behavior**
Basic authentication username and password shouldn't be URL encoded before being base64 encoded.
**Environment:**
- OS: Linux / Mac OSX
- Python Version: 3.11
- Authlib Version: 1.3.0
**Additional context**
--
| closed | 2024-02-21T21:34:50Z | 2024-08-26T08:14:40Z | https://github.com/lepture/authlib/issues/629 | [
"bug"
] | igielski | 7 |
allenai/allennlp | pytorch | 5,376 | When will 'python code to configure experiments' be able to load a saved model? | closed | 2021-08-24T12:27:34Z | 2021-09-07T16:09:43Z | https://github.com/allenai/allennlp/issues/5376 | [
"question",
"stale"
] | dugudongfangshuo | 2 |
|
chatanywhere/GPT_API_free | api | 21 | wrong api key | When I run the code in the [demo](https://github.com/chatanywhere/GPT_API_free/blob/main/demo.py), this error pops up: (False, 'OpenAI API 异常: HTTP code 401 from API (wrong api key\n)'). My settings are:
```python
openai.api_key = "sk-0PfcSdT723UR44igwVxvEWvLoZJgi0FJyZWy0WCCATp5ka2a"
openai.api_base = "https://api.chatanywhere.com.cn/v1"
```
Has anyone else encountered this problem? | closed | 2023-05-26T03:44:39Z | 2023-06-01T04:24:43Z | https://github.com/chatanywhere/GPT_API_free/issues/21 | [] | nickyi1990 | 1 |
hankcs/HanLP | nlp | 740 | Question about the size of the number in resize(65536 * 32) | <!--
The notes and version number are required, otherwise there will be no reply. If you want a quick reply, please fill in the template carefully. Thank you for your cooperation.
-->
## Notes
Please confirm the following:
* I have carefully read the following documentation and did not find an answer:
- [Homepage documentation](https://github.com/hankcs/HanLP)
- [wiki](https://github.com/hankcs/HanLP/wiki)
- [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues), and did not find an answer either.
* I understand that the open-source community is a free community brought together by shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in these brackets to confirm the items above.
## Version
<!-- For release versions, note the jar file name without the extension; for the GitHub repository version, note whether it is the master or portable branch -->
The current latest version is: 1.5.2
The version I am using is: 1.3.4
<!-- The items above are required; below you may write freely -->
## My question
While reading the HanLP source code, I saw `resize(65536 * 32);` in `src/main/java/com/hankcs/hanlp/collection/trie/DoubleArrayTrie.java`. The purpose of this code is to set the initial array size for building the double array. I would like to ask why it is set so large. 65536 is based on the maximum value of char, but for what reason was 32 chosen as the multiplier?
| closed | 2018-01-10T11:31:17Z | 2018-01-22T03:57:47Z | https://github.com/hankcs/HanLP/issues/740 | [
"question"
] | jimmy-walker | 5 |
davidsandberg/facenet | computer-vision | 1,049 | Rotated image learning method | I would like to classify kinds of flowers with FaceNet.
When a 180-degree-rotated image comes in at test time,
how can I improve my accuracy?
Should the dataset be rotated during training?
Or should I compare the feature vector against the 180-degree-rotated flower picture? | open | 2019-07-09T05:08:24Z | 2019-07-09T05:08:24Z | https://github.com/davidsandberg/facenet/issues/1049 | [] | ljhll0942 | 0 |
netbox-community/netbox | django | 18,213 | ASN Range search does not work | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.7
### Python Version
3.10
### Steps to Reproduce
1. Go to IPAM -> ASN Ranges
2. Add new range, give it a name, do not add description, save
3. Try to search range by name either using using quick search field or by using IPAM -> ASN Ranges -> Search TAB
Searching using global search works, since this was fixed as part of this issue https://github.com/netbox-community/netbox/issues/17537
### Expected Behavior
Searching ASN range by name works for both:
- IPAM -> ASN Ranges -> Search TAB
- IPAM -> ASN Ranges -> quick search field
### Observed Behavior
Searching ASN range by name does not works for:
- IPAM -> ASN Ranges -> Search TAB
- IPAM -> ASN Ranges -> quick search field | closed | 2024-12-12T04:09:33Z | 2025-03-13T03:10:02Z | https://github.com/netbox-community/netbox/issues/18213 | [
"type: bug",
"status: accepted",
"severity: low"
] | dmulyalin | 0 |
graphistry/pygraphistry | pandas | 234 | [FEA] RDF/sparql support - neptune, ... | **Is your feature request related to a problem? Please describe.**
Neptune and other DBs support rdf/sparql, it'd be great to enable one-liners fo them!
**Describe the solution you'd like**
Integration with a store-rich lib like https://github.com/RDFLib/rdflib , esp. if Arrow-friendly, and explicit plugins for DB connectors
| open | 2021-05-05T18:32:07Z | 2023-05-22T20:55:29Z | https://github.com/graphistry/pygraphistry/issues/234 | [
"enhancement",
"help wanted"
] | lmeyerov | 3 |
521xueweihan/HelloGitHub | python | 2,142 | [Open-source self-recommendation] music-dl - music search and downloader | ## Project recommendation
- Project URL: [https://github.com/guanguans/music-dl](https://github.com/guanguans/music-dl)
- Category: PHP
- Follow-up update plan:
- Project description:
- A command-line music search and download tool.
- Supports QQ Music, NetEase Cloud Music, and Kugou Music.
- Reason for recommendation: supports music search and download across multiple platforms (QQ Music, NetEase Cloud Music, Kugou Music).
- Example code: none
- Screenshots:

| closed | 2022-03-26T11:24:21Z | 2022-04-26T07:27:10Z | https://github.com/521xueweihan/HelloGitHub/issues/2142 | [
"PHP 项目"
] | guanguans | 1 |
deepfakes/faceswap | machine-learning | 543 | Install went fine, extraction fine, training no errors but no feedback either. | (i answered all of the items in the post outline form below, but i just condensed to paragraph)
Hey, thanks for the awesome program (cool gui as well) I came from openswap, and this gui is great cant wait to delve into the alignments file operations and maintenance options. (even though ill probably use the console, its nice having a gui to quickly visualize and save configs)
Anyway, so I had an issue with splicing first, I ended up just using ffmpeg on a separate machine and porting the stills over. Extraction went just fine, lots of faces found etc. After removing the false positives, I went to train, however it never showed any progress after a half an hour. I switched to 25 save interval, still nothing. I am using original trainer, batch 64. I am working with a k80, and usually that can breeze through the first 100 iterations in less than a couple minutes. So I think something is wrong. I'm also seeing very low cpu usage in task manager, and usually it would be almost maxed out when training in the past.
I followed the windows installation guide to a T, have all the proper drivers/cuda/cudnn/py3.5 versions, all recognized etc. ATX yes etc. The extraction went fine, so not sure why training is not showing anything. I used the verbose option, but cant really see any output anywhere at all.
Any ideas? thanks.
| closed | 2018-12-09T05:34:49Z | 2019-01-09T20:01:16Z | https://github.com/deepfakes/faceswap/issues/543 | [] | IEWbgfnYDwHRoRRSKtkdyMDUzgdwuBYgDKtDJWd | 2 |
Neoteroi/BlackSheep | asyncio | 486 | Mounting applications | ```python
import uvicorn
from blacksheep import Application
parent = Application()
@parent.router.get("/")
def parent_home():
return "Hello, from the parent app"
child = Application()
@child.router.get("/")
def child_home():
return "Hello, from the child app"
# Note: when mounting another BlackSheep application,
# make sure to handle the start and stop events of the mounted app
parent.mount_registry.auto_events = True
parent.mount("/sub", child)
if __name__ == "__main__":
uvicorn.run("multiapp:app", host="0.0.0.0", port=5556)
```
I got the code from [blacksheep](https://www.neoteroi.dev/blacksheep/mounting/)
When I run it, it throws an error,
Traceback (most recent call last):
File "/home/leib/code/py/web/xy/multiapp.py", line 12, in <module>
child = Application()
File "/home/leib/.pyenv/versions/3.10.13/lib/python3.10/site-packages/blacksheep/server/application.py", line 213, in __init__
validate_router(self)
File "/home/leib/.pyenv/versions/3.10.13/lib/python3.10/site-packages/blacksheep/server/routing.py", line 963, in validate_router
raise SharedRouterError()
blacksheep.server.routing.SharedRouterError: Invalid routers configuration: the same router is used in more than one Application object. When working with multiple applications, ensure that each application is configured to use different routers. For more information, refer to: https://www.neoteroi.dev/blacksheep/routing/ | closed | 2024-02-28T11:56:24Z | 2024-03-11T07:47:13Z | https://github.com/Neoteroi/BlackSheep/issues/486 | [] | lbsx | 4 |
xinntao/Real-ESRGAN | pytorch | 334 | Output video is a different duration than input video. | Tried with 2 different videos from 2 different sources. Tried 23.98 as well as 23.976 from mediainfo. The video seems to run fine, but the audio is way out of sync. The output is 1 minute shorter than the 26-minute input video. | open | 2022-05-21T15:37:23Z | 2022-08-19T02:08:29Z | https://github.com/xinntao/Real-ESRGAN/issues/334 | [] | fallon33 | 2 |
polarsource/polar | fastapi | 5,179 | Type 'Polar' is missing the following properties from type 'Polar': advertisements, #private | ### Description
I'm working with Nuxt and have installed better-auth and the necessary polar.sh libraries. When I configure my auth instance like this:
```ts
import { betterAuth } from 'better-auth'
import { prismaAdapter } from 'better-auth/adapters/prisma'
import { PrismaClient } from '@prisma/client'
import { Polar } from '@polar-sh/sdk'
import { polar } from '@polar-sh/better-auth'
const client = new Polar({
accessToken: process.env.POLAR_ACCESS_TOKEN,
server: 'production'
})
export const auth = betterAuth({
baseURL: process.env.BETTER_AUTH_URL!,
database: prismaAdapter(prisma, { provider: 'postgresql' }),
emailAndPassword: { enabled: true },
socialProviders: {
github: {
clientId: process.env.GITHUB_CLIENT_ID!,
clientSecret: process.env.GITHUB_CLIENT_SECRET!
}
},
user: {
deleteUser: { enabled: true }
},
account: {
accountLinking: { enabled: true, allowDifferentEmails: true }
},
plugins: [
polar({
client,
^^^^^^
createCustomerOnSignUp: true,
enableCustomerPortal: true,
checkout: {
enabled: true,
...
}
})
]
})
```
### Current Behavior
I get these errors:
```
Type 'Polar' is missing the following properties from type 'Polar': advertisements, #privatets(2739)
index.d.ts(43, 5): The expected type comes from property 'client' which is declared here on type 'PolarOptions'
```
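A common cause of this kind of "'Polar' is not assignable to 'Polar'" error is two different copies of `@polar-sh/sdk` in the dependency tree (one installed directly, one pulled in by `@polar-sh/better-auth`); checking with `npm ls @polar-sh/sdk` and aligning the versions is a reasonable first diagnostic step (an assumption, not a confirmed fix).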
### Expected Behavior
No errors.
### Screenshots

### Environment:
- Operating System: Windows
- Browser: Firefox
--- | closed | 2025-03-06T07:11:38Z | 2025-03-06T10:58:14Z | https://github.com/polarsource/polar/issues/5179 | [
"bug"
] | nethriis | 8 |
sktime/sktime | scikit-learn | 7,146 | [ENH] Improving verbosity on tests on `check_estimator` | This issue is being opened to track possible improvements or changes that can be made to help users debug tests when implementing estimators.
As `check_estimator` is the primary tool for verifying that estimators are implemented properly, improving logging/printing of which tests are being executed, memory management, and other details would be helpful for the contributor | open | 2024-09-21T16:16:47Z | 2024-09-22T07:04:17Z | https://github.com/sktime/sktime/issues/7146 | [
"enhancement",
"module:plotting&utilities"
] | julian-fong | 2 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 564 | [BUG]: Company specific answer messages saved in answers.json | ### Describe the bug
Messages tailored to a specific company that may include that company's name get saved in answers.json and reused with different companies they shouldn't be used with.
### Steps to reproduce
_No response_
### Expected behavior
Messages catering to a specific company should not be saved in Answers.json.
### Actual behavior
_No response_
### Branch
None
### Branch name
_No response_
### Python version
_No response_
### LLM Used
_No response_
### Model used
_No response_
### Additional context
_No response_ | closed | 2024-10-19T01:54:50Z | 2024-10-22T23:38:49Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/564 | [
"bug"
] | sloganking | 1 |
open-mmlab/mmdetection | pytorch | 11,686 | ./tools/dist_test.sh evaluation value is all zero | **I am performing object detection using Cascade R-CNN. I have already trained the model, but when I try to evaluate it using test data, the result is consistently zero.**
My code:
`!./tools/dist_test.sh configs/cascade_rcnn/cascade-rcnn_x101-64x4d_fpn_1x_coco.py checkpoints/cascade_rcnn_x101_64x4d_fpn_1x_coco_20200515_075702-43ce6a30.pth 1 `
My configs:
```
config_resep_cascade_rcnn = """
# Inherit and overwrite part of the config based on this config
_base_ = './cascade-rcnn_r50_fpn_1x_coco.py'
data_root = 'data/RESEP-2-P-S-10/' # dataset root
train_batch_size_per_gpu = 4
train_num_workers = 2
max_epochs = 50
stage2_num_epochs = 1
base_lr = 0.00008
metainfo = {
'classes': ('PRESCRIPTIO', 'SIGNATURA', ),
'palette': [
(190, 77, 37),
(37, 150, 190)
]
}
# dataloader
train_dataloader = dict(
batch_size=train_batch_size_per_gpu,
num_workers=train_num_workers,
dataset=dict(
data_root=data_root,
metainfo=metainfo,
data_prefix=dict(img='train/'),
ann_file='train/_annotations.coco.json'))
val_dataloader = dict(
dataset=dict(
data_root=data_root,
metainfo=metainfo,
data_prefix=dict(img='valid/'),
ann_file='valid/_annotations.coco.json'))
test_dataloader = dict(
dataset=dict(
data_root=data_root,
metainfo=metainfo,
data_prefix=dict(img='test/'),
ann_file='test/_annotations.coco.json'))
val_evaluator = dict(ann_file=data_root + 'valid/_annotations.coco.json')
test_evaluator = dict(ann_file=data_root + 'test/_annotations.coco.json')
#######################################
classes = ('prescriptio', 'signatura', )
n_classes = len(classes)
model = dict(
type='CascadeRCNN',
backbone=dict(
type='ResNeXt',
depth=101,
groups=64,
base_width=4,
num_stages=4,
out_indices=(0, 1, 2, 3),
frozen_stages=1,
norm_cfg=dict(type='BN', requires_grad=True),
style='pytorch',
init_cfg=dict(
type='Pretrained', checkpoint='open-mmlab://resnext101_64x4d')),
roi_head=dict(
bbox_head=[
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=n_classes,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.1, 0.1, 0.2, 0.2]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=n_classes,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.05, 0.05, 0.1, 0.1]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0,
loss_weight=1.0)),
dict(
type='Shared2FCBBoxHead',
in_channels=256,
fc_out_channels=1024,
roi_feat_size=7,
num_classes=n_classes,
bbox_coder=dict(
type='DeltaXYWHBBoxCoder',
target_means=[0.0, 0.0, 0.0, 0.0],
target_stds=[0.033, 0.033, 0.067, 0.067]),
reg_class_agnostic=True,
loss_cls=dict(
type='CrossEntropyLoss',
use_sigmoid=False,
loss_weight=1.0),
loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0))
])
)
#######################################
# learning rate
param_scheduler = [
dict(
type='LinearLR',
start_factor=1.0e-5,
by_epoch=False,
begin=0,
end=10),
dict(
# use cosine lr from 10 to 20 epoch
type='CosineAnnealingLR',
eta_min=base_lr * 0.05,
begin=max_epochs // 2,
end=max_epochs,
T_max=max_epochs // 2,
by_epoch=True,
convert_to_iter_based=True),
]
train_pipeline_stage2 = [
dict(type='LoadImageFromFile', backend_args=None),
dict(type='LoadAnnotations', with_bbox=True),
dict(type='Resize', scale=(640, 800), keep_ratio=False),
dict(type='PackDetInputs')
]
# optimizer
optim_wrapper = dict(
_delete_=True,
type='OptimWrapper',
optimizer=dict(type='AdamW', lr=base_lr, weight_decay=0.05),
paramwise_cfg=dict(
norm_decay_mult=0, bias_decay_mult=0, bypass_duplicate=True))
default_hooks = dict(
checkpoint=dict(
interval=5,
max_keep_ckpts=2, # only keep latest 2 checkpoints
save_best='auto'
),
logger=dict(type='LoggerHook', interval=5))
custom_hooks = [
dict(
type='PipelineSwitchHook',
switch_epoch=max_epochs - stage2_num_epochs,
switch_pipeline=train_pipeline_stage2)
]
# load COCO pre-trained weight
load_from = './checkpoints/cascade_rcnn_x101_64x4d_fpn_1x_coco_20200515_075702-43ce6a30.pth'
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=max_epochs, val_interval=1)
visualizer = dict(vis_backends=[dict(type='LocalVisBackend'),dict(type='TensorboardVisBackend')])
"""
with open('./configs/cascade_rcnn/cascade-rcnn_x101-64x4d_fpn_1x_coco.py', 'w') as f:
f.write(config_resep_cascade_rcnn)
```
Output:
Accumulating evaluation results...
DONE (t=0.29s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=300 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.011
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.017
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.002
05/05 10:00:09 - mmengine - INFO - bbox_mAP_copypaste: 0.000 0.000 0.000 0.000 0.000 0.000
05/05 10:00:09 - mmengine - INFO - Results has been saved to results.pkl.
05/05 10:00:09 - mmengine - INFO - Epoch(test) [68/68] coco/bbox_mAP: 0.0000 coco/bbox_mAP_50: 0.0000 coco/bbox_mAP_75: 0.0000 coco/bbox_mAP_s: 0.0000 coco/bbox_mAP_m: 0.0000 coco/bbox_mAP_l: 0.0000 data_time: 0.0036 time: 0.0511
Any suggestions would be really helpful. | open | 2024-05-07T04:32:20Z | 2024-07-19T13:30:36Z | https://github.com/open-mmlab/mmdetection/issues/11686 | [] | fathur-rs | 4 |
yuka-friends/Windrecorder | streamlit | 111 | feat: support using AI to summarize and query database contents | The effect looks like this:

Try it out:
https://app.shokichan.com/c/tg/bookshelf_in_storageroom?anchor=49c40b9c-51ef-4d6c-bcc6-dcf453a16574
With AI, maybe it would be even more powerful?
Someone has probably already implemented this kind of feature (~we could just copy it~).
If there are no plans for this, I could give it a try, but it might end up unfinished.
tensorflow/tensor2tensor | machine-learning | 1,829 | What is the relation of tensor2tensor version and tensorflow version? | How to know the corresponding tensorflow version for each tensor2tensor version?
Thank you! | closed | 2020-07-09T08:13:05Z | 2020-07-09T08:15:41Z | https://github.com/tensorflow/tensor2tensor/issues/1829 | [] | guotong1988 | 1 |
OFA-Sys/Chinese-CLIP | nlp | 293 | The weight file obtained after training the RN50 pretrained model on flickr30k is unusually large | As in the title: the official clip_cn_rn50.pt is about 300 MB, but the weight file I get after training is about 900 MB. Is this because both the encode_image and encode_text modules are included when saving?
| open | 2024-04-12T09:12:15Z | 2024-04-23T01:51:42Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/293 | [] | TC120 | 2 |
miguelgrinberg/Flask-SocketIO | flask | 1,395 | flask-socket.io keeps running background task | Question: flask-socket.io keeps a background task ( socketio.start_background_task ) running even after the client has left or disconnected.
Is there some way to immediately close a background task after someone has left?
| closed | 2020-10-17T16:56:17Z | 2020-10-23T21:25:45Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1395 | [
"question"
] | yusuf8ahmed | 12 |
JaidedAI/EasyOCR | deep-learning | 1,009 | Error(s) in loading state_dict for DataParallel: | ```
Missing key(s) in state_dict: "module.FeatureExtraction.ConvNet.0.weight", "module.FeatureExtraction.ConvNet.0.bias", "module.FeatureExtraction.ConvNet.3.weight", "module.FeatureExtraction.ConvNet.3.bias", "module.FeatureExtraction.ConvNet.6.weight", "module.FeatureExtraction.ConvNet.6.bias", "module.FeatureExtraction.ConvNet.8.weight", "module.FeatureExtraction.ConvNet.8.bias", "module.FeatureExtraction.ConvNet.11.weight", "module.FeatureExtraction.ConvNet.12.weight", "module.FeatureExtraction.ConvNet.12.bias", "module.FeatureExtraction.ConvNet.12.running_mean", "module.FeatureExtraction.ConvNet.12.running_var", "module.FeatureExtraction.ConvNet.14.weight", "module.FeatureExtraction.ConvNet.15.weight", "module.FeatureExtraction.ConvNet.15.bias", "module.FeatureExtraction.ConvNet.15.running_mean", "module.FeatureExtraction.ConvNet.15.running_var", "module.FeatureExtraction.ConvNet.18.weight", "module.FeatureExtraction.ConvNet.18.bias", "module.SequenceModeling.0.rnn.weight_ih_l0", "module.SequenceModeling.0.rnn.weight_hh_l0", "module.SequenceModeling.0.rnn.bias_ih_l0", "module.SequenceModeling.0.rnn.bias_hh_l0", "module.SequenceModeling.0.rnn.weight_ih_l0_reverse", "module.SequenceModeling.0.rnn.weight_hh_l0_reverse", "module.SequenceModeling.0.rnn.bias_ih_l0_reverse", "module.SequenceModeling.0.rnn.bias_hh_l0_reverse", "module.SequenceModeling.0.linear.weight", "module.SequenceModeling.0.linear.bias", "module.SequenceModeling.1.rnn.weight_ih_l0", "module.SequenceModeling.1.rnn.weight_hh_l0", "module.SequenceModeling.1.rnn.bias_ih_l0", "module.SequenceModeling.1.rnn.bias_hh_l0", "module.SequenceModeling.1.rnn.weight_ih_l0_reverse", "module.SequenceModeling.1.rnn.weight_hh_l0_reverse", "module.SequenceModeling.1.rnn.bias_ih_l0_reverse", "module.SequenceModeling.1.rnn.bias_hh_l0_reverse", "module.SequenceModeling.1.linear.weight", "module.SequenceModeling.1.linear.bias", "module.Prediction.weight", "module.Prediction.bias".
Unexpected key(s) in state_dict: "FeatureExtraction.ConvNet.0.weight", "FeatureExtraction.ConvNet.0.bias", "FeatureExtraction.ConvNet.3.weight", "FeatureExtraction.ConvNet.3.bias", "FeatureExtraction.ConvNet.6.weight", "FeatureExtraction.ConvNet.6.bias", "FeatureExtraction.ConvNet.8.weight", "FeatureExtraction.ConvNet.8.bias", "FeatureExtraction.ConvNet.11.weight", "FeatureExtraction.ConvNet.12.weight", "FeatureExtraction.ConvNet.12.bias", "FeatureExtraction.ConvNet.12.running_mean", "FeatureExtraction.ConvNet.12.running_var", "FeatureExtraction.ConvNet.12.num_batches_tracked", "FeatureExtraction.ConvNet.14.weight", "FeatureExtraction.ConvNet.15.weight", "FeatureExtraction.ConvNet.15.bias", "FeatureExtraction.ConvNet.15.running_mean", "FeatureExtraction.ConvNet.15.running_var", "FeatureExtraction.ConvNet.15.num_batches_tracked", "FeatureExtraction.ConvNet.18.weight", "FeatureExtraction.ConvNet.18.bias", "SequenceModeling.0.rnn.weight_ih_l0", "SequenceModeling.0.rnn.weight_hh_l0", "SequenceModeling.0.rnn.bias_ih_l0", "SequenceModeling.0.rnn.bias_hh_l0", "SequenceModeling.0.rnn.weight_ih_l0_reverse", "SequenceModeling.0.rnn.weight_hh_l0_reverse", "SequenceModeling.0.rnn.bias_ih_l0_reverse", "SequenceModeling.0.rnn.bias_hh_l0_reverse", "SequenceModeling.0.linear.weight", "SequenceModeling.0.linear.bias", "SequenceModeling.1.rnn.weight_ih_l0", "SequenceModeling.1.rnn.weight_hh_l0", "SequenceModeling.1.rnn.bias_ih_l0", "SequenceModeling.1.rnn.bias_hh_l0", "SequenceModeling.1.rnn.weight_ih_l0_reverse", "SequenceModeling.1.rnn.weight_hh_l0_reverse", "SequenceModeling.1.rnn.bias_ih_l0_reverse", "SequenceModeling.1.rnn.bias_hh_l0_reverse", "SequenceModeling.1.linear.weight", "SequenceModeling.1.linear.bias", "Prediction.weight", "Prediction.bias".
```
I am unable to load the checkpoints after training:
```
model = torch.nn.DataParallel(model).to(device)
model.load_state_dict(torch.load(model_path, map_location=device))
```
I tried changing the code above to the following, which is then able to recognize text, but it fails to recognize whitespace:
```
model.load_state_dict(torch.load(model_path, map_location=device))
model = torch.nn.DataParallel(model).to(device)
``` | open | 2023-05-08T10:32:37Z | 2023-05-08T10:36:29Z | https://github.com/JaidedAI/EasyOCR/issues/1009 | [] | NeoDhirendra | 0 |
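A common alternative (a sketch, not EasyOCR-specific advice) is to keep the `DataParallel` wrapper and remap the checkpoint keys so they carry the `module.` prefix it expects:
```python
import torch

state = torch.load(model_path, map_location=device)
# Add the "module." prefix that DataParallel-wrapped models expect.
state = {f"module.{k}": v for k, v in state.items()}

model = torch.nn.DataParallel(model).to(device)
model.load_state_dict(state)
```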
oegedijk/explainerdashboard | plotly | 143 | Unable to deactivate the additivity_check | Versions : shap 0.39, python 3.6.8, explainerdashboard 0.3.6.2
Hello! I'm trying to build a standard dashboard, something like:
```python
X_test = my_X_data()
y_test = my_y_data()
model = my_xgboost_model()
model["XGB"].best_estimator_.get_booster().feature_names = features_name
explainer = RegressionExplainer(Pipeline([('imputer', model["imputer"]), ("XGB", model["XGB"].best_estimator_)]),
X_test, y_test, precision="float32", shap="tree")
dashboard = ExplainerDashboard(explainer, shap_interaction=False, index_dropdown=False)
```
I get the following error:
```python
Traceback (most recent call last):
File "C:\Users\***\Bureau\repo_prog\forecast\src\entity\mlflow_output\recording.py", line 156, in generate_explainer_dashboard
dashboard = ExplainerDashboard(explainer, shap_interaction=False, index_dropdown=False)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboards.py", line 587, in __init__
fluid=fluid))
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboards.py", line 90, in __init__
self.tabs = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboards.py", line 90, in <listcomp>
self.tabs = [instantiate_component(tab, explainer, name=str(i+1), **kwargs) for i, tab in enumerate(tabs)]
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboard_methods.py", line 688, in instantiate_component
component = component(explainer, name=name, **kwargs)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboard_components\composites.py", line 251, in __init__
logs=logs, **kwargs)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\dashboard_components\regression_components.py", line 938, in __init__
self.col = self.explainer.columns_ranked_by_shap()[0]
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 48, in inner
return func(self, *args, **kwargs)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 1043, in columns_ranked_by_shap
return self.mean_abs_shap_df(pos_label).Feature.tolist()
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 48, in inner
return func(self, *args, **kwargs)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 1025, in mean_abs_shap_df
self._mean_abs_shap_df = (self.get_shap_values_df(pos_label)[self.merged_cols].abs().mean()
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 48, in inner
return func(self, *args, **kwargs)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\explainerdashboard\explainers.py", line 959, in get_shap_values_df
self._shap_values_df = pd.DataFrame(self.shap_explainer.shap_values(self.X),
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\shap\explainers\_tree.py", line 376, in shap_values
self.assert_additivity(out, model_output_vals)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\shap\explainers\_tree.py", line 539, in assert_additivity
check_sum(self.expected_value + phi.sum(-1), model_output)
File "C:\Users\***\AppData\Local\pypoetry\Cache\virtualenvs\forecast-T9U7-MYL-py3.6\lib\site-packages\shap\explainers\_tree.py", line 533, in check_sum
raise Exception(err_msg)
Exception: Additivity check failed in TreeExplainer! Please ensure the data matrix you passed to the explainer is the same shape that the model was trained on. If your data shape is correct then please report this on GitHub. Consider retrying with the feature_perturbation='interventional' option. This check failed because for one of the samples the sum of the SHAP values was -105.507812, while the model output was -105.539818. If this difference is acceptable you can set check_additivity=False to disable this check.
```
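For reference, the additivity check can be reproduced (and disabled) stand-alone against shap itself, e.g. (a minimal sketch; `model` and `X` are placeholders of mine for the fitted tree model and the training frame, not explainerdashboard names):
```python
import shap

# stand-alone version of the call that fails inside explainerdashboard,
# with the additivity check switched off explicitly
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X, check_additivity=False)
```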
Since the difference is acceptable to me, I set `check_additivity=False`, but I get the same error. Diving deeper into the code, I think the argument is not propagated down to the `shap_values` method. | closed | 2021-09-06T08:31:36Z | 2021-12-23T15:28:34Z | https://github.com/oegedijk/explainerdashboard/issues/143 | [] | Simon-Free | 2 |
microsoft/unilm | nlp | 1,654 | [Differential Transformer] How is it visualized? | Hello, @YTianZHU . I read the Differential Transformer paper and found it very interesting.
Thank you so much for your work.
I was wondering how you visualized the attention scores in Figure 1:

Since there are many attention scores in transformer-based models (e.g. per layer, per head, etc.), which layer's and which head's scores did you use?
Or did you use an averaged score?
Also, what was the dataset used for the visualization experiment?
Thank you.
| closed | 2024-11-17T11:40:01Z | 2024-12-02T08:55:41Z | https://github.com/microsoft/unilm/issues/1654 | [] | yjoonjang | 3 |
lexiforest/curl_cffi | web-scraping | 295 | [BUG] Duplicate GET parameters are dropped when the `params` kwarg is used | Using curl_cffi 0.6.2
Easier to explain with an example.
```
>>> from curl_cffi import requests
>>> requests.get("http://localhost:8000/?a=1&a=2", params=dict(b=3))
<Response [200]>
```
On the web server we receive `GET /?a=2&b=3` (INCORRECT, missing first "a" parameter)
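As a temporary workaround I'm pre-merging the extra parameters into the URL myself so the duplicates survive (just a sketch, not a fix):
```
from urllib.parse import urlencode

url = "http://localhost:8000/?a=1&a=2"
merged = url + "&" + urlencode({"b": 3})   # -> http://localhost:8000/?a=1&a=2&b=3
requests.get(merged)                       # curl_cffi's requests, imported above
```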
If using normal requests 2.31.0:
```
>>> import requests
>>> requests.get("http://localhost:8000/?a=1&a=2", params=dict(b=3))
<Response [200]>
```
On the web server we receive `GET /?a=1&a=2&b=3` (CORRECT) | closed | 2024-04-14T13:52:21Z | 2024-04-15T05:29:58Z | https://github.com/lexiforest/curl_cffi/issues/295 | [
"bug"
] | emv33 | 1 |
jwkvam/bowtie | plotly | 33 | make commands preprocess arguments | for example for the drop down component function `do_options` can provide a nicer interface to the user. | closed | 2016-11-08T20:47:14Z | 2016-12-02T19:39:54Z | https://github.com/jwkvam/bowtie/issues/33 | [] | jwkvam | 1 |
pydantic/logfire | fastapi | 476 | Is there a way to log images so they are displayed when looking at logged data? | ### Question
I have a base64 encoded image I would love to pass into my logs and have it displayed, perhaps as a thumbnail that can be clicked on to show the full image, when looking at my logs.
I'd imagine logging images using something like this:
```
span.set_attribute("input_image", {base64_encoded_image})
# - or use the OpenAI format -
span.set_attribute("input_image", f"data:image/jpeg;base64,{base64_encoded_image}")
```
... and then they would be displayed appropriately in the Logfire web app.
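For context, the rough flow I have in mind is something like this (just a sketch; the span and attribute names are made up by me):
```python
import base64

import logfire

with logfire.span("process_image") as span:
    with open("photo.jpg", "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    # attach the encoded image to the span, hoping the UI can render it
    span.set_attribute("input_image", f"data:image/jpeg;base64,{b64}")
```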
Is this possible?
Thanks much! | open | 2024-10-04T17:21:22Z | 2024-10-04T19:36:29Z | https://github.com/pydantic/logfire/issues/476 | [
"Question"
] | ohmeow | 1 |
sinaptik-ai/pandas-ai | data-science | 1,358 | Ollama models are not working with LocalLLM | ### System Info
python=3.11.7
### 🐛 Describe the bug
from pandasai import Agent  # needed for the Agent constructed below
from pandasai.llm.local_llm import LocalLLM
llm = LocalLLM(api_base="http://localhost:11434/v1",
model="llama3")
db = Agent(scm_vc, config={"llm": llm})
This does not work with the Ollama model served through LocalLLM. | closed | 2024-09-05T09:41:54Z | 2025-03-14T17:06:51Z | https://github.com/sinaptik-ai/pandas-ai/issues/1358 | [
"bug"
] | anilmadishetty2498 | 5 |
suitenumerique/docs | django | 238 | Add edit roles "anyone with the link" | ## Feature Request
**Is your feature request related to a problem or unsupported use case? Please describe.**
I can only add roles to people by inviting them one by one.
Sometimes my doc is not that sensitive.
**Describe the solution you'd like**
I'd like to enable edit for people who have the link.
**Describe alternatives you've considered**
Invite people one by one
**Discovery, Documentation, Adoption, Migration Strategy**
If you can, explain how users will be able to use this and possibly write out a version of the docs (if applicable).
Maybe a screenshot or design?
If my doc is not public, I need to sign in to see it.
If my doc is public and edit is available to anyone with the link, then anyone can edit it.
**Do you want to work on it through a Pull Request?**
| closed | 2024-09-09T07:12:03Z | 2024-10-23T09:20:34Z | https://github.com/suitenumerique/docs/issues/238 | [
"feature"
] | virgile-dev | 2 |
pydantic/pydantic | pydantic | 10,679 | Better document Pydantic dataclasses configuration | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
I reviewed the documentation, and Pydantic currently offers two methods for passing the Dataclass configuration:
- Apply the config to the dataclass decorator as a dictionary.
- Pass a `ConfigDict` to the dataclass decorator.
In both cases, the configuration must be applied via the decorator. I propose allowing the `model_config` to be defined directly within the dataclass, similar to how it's done with `BaseModel`. If there's a specific reason against this approach, an exception should be raised when the `model_config` field is used incorrectly. While the code below runs, it doesn't apply the Dataclass configuration properly.
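For comparison, the decorator-based form that does work today looks like this (a sketch of the second option above; the class name is mine):
```Python
from pydantic import ConfigDict
from pydantic.dataclasses import dataclass

@dataclass(config=ConfigDict(extra="forbid"))
class TestDCConfigured:
    a: int
    b: int
```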
### Example Code
```Python
from pydantic import ConfigDict
from pydantic.dataclasses import dataclass
@dataclass
class TestDC:
model_config = ConfigDict(extra="forbid")
a: int
b: int
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.10.0a1
pydantic-core version: 2.24.2
pydantic-core build: profile=release pgo=false
install path: /Users/rajatrajdeep/Desktop/projects/pydantic/pydantic
python version: 3.12.2 (v3.12.2:6abddd9f6a, Feb 6 2024, 17:02:06) [Clang 13.0.0 (clang-1300.0.29.30)]
platform: macOS-14.5-arm64-arm-64bit
related packages: pydantic-extra-types-2.10.0 pydantic-settings-2.5.2 mypy-1.12.0 typing_extensions-4.12.2
commit: 06a04b59
```
| closed | 2024-10-21T19:46:46Z | 2025-01-03T16:36:42Z | https://github.com/pydantic/pydantic/issues/10679 | [
"bug V2"
] | RajatRajdeep | 3 |
deezer/spleeter | tensorflow | 394 | [Bug] Various Errors using Spleeter Separate with custom model. |
## Description
I was originally having another [issue](https://github.com/deezer/spleeter/issues/390), but with help from @romi1502 that was resolved nicely.
However, as soon as that issue was fixed, I immediately got thrown several other issues.
## Steps to reproduce
I managed to run the separate command with my custom model; it thought about it for a long time and PowerShell was eating up my CPU, so it was doing something.
I used the following spleeter input command: `spleeter separate -i 'F:\BluRay 5.1\Done\S01E01 - Ambush\fullmix.wav' -o output -p F:\SpleetTest\Configs\filmModel.json`
## Output
After about 30-40 seconds I got the error:
```
Traceback (most recent call last):
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\joe93\AppData\Local\Programs\Python\Python37\Scripts\spleeter.exe\__main__.py", line 9, in <module>
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 54, in entrypoint
main(sys.argv)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\__main__.py", line 46, in main
entrypoint(arguments, params)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\commands\separate.py", line 45, in entrypoint
synchronous=False
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 228, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 195, in separate
return self._separate_librosa(waveform, audio_descriptor)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\spleeter\separator.py", line 181, in _separate_librosa
outputs = sess.run(outputs, feed_dict=self._get_input_provider().get_feed_dict(features, stft, audio_id))
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 956, in run
run_metadata_ptr)
File "c:\users\joe93\appdata\local\programs\python\python37\lib\site-packages\tensorflow_core\python\client\session.py", line 1156, in _run
(np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (25836, 2, 6) for Tensor 'mix_stft:0', which has shape '(?, 2049, 2)'
```
I have very little experience with TensorFlow, so I don't know where to begin. From a few searches, this looks like an issue with how the shape of the input tensor (the sound file) is defined.
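One extra data point: the source is a 5.1 WAV (6 channels), and the `(25836, 2, 6)` in the error makes me suspect the extra channels are the problem, so I plan to try folding it down to stereo first. A quick sketch of what I mean (the file names are mine):
```python
import soundfile as sf

# hypothetical pre-processing: fold the 5.1 mix down to stereo before separation
data, sr = sf.read("fullmix.wav")   # shape: (frames, 6) for a 5.1 WAV
stereo = data[:, :2]                # naive downmix: keep front left/right only
sf.write("fullmix_stereo.wav", stereo, sr)
```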
## Environment
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 (fully updated) |
| Installation type | pip |
| RAM available | 16GB |
| Hardware spec | RTX2080 / Ryzen R2600 |
## Additional context
| open | 2020-05-23T22:01:47Z | 2024-01-04T00:30:54Z | https://github.com/deezer/spleeter/issues/394 | [
"bug",
"invalid"
] | JavaShipped | 16 |
deepset-ai/haystack | pytorch | 9,017 | Add run_async for `AzureOpenAITextEmbedder` | We should be able to reuse the implementation when it is made for `OpenAITextEmbedder` | open | 2025-03-11T11:07:32Z | 2025-03-21T08:59:04Z | https://github.com/deepset-ai/haystack/issues/9017 | [
"Contributions wanted!",
"P2"
] | sjrl | 0 |
stitchfix/hamilton | numpy | 84 | Show pyspark dataframe support | **Is your feature request related to a problem? Please describe.**
A common question we get is: does Hamilton support Spark dataframes? The answer is yes, but it's not ideal at the moment, and we don't have a vanilla example to point to.
It's not ideal because joins are a bit of a pain -- you need to know the index to join on. In the pandas world, we got away with
this because everything had an index associated with it. In spark, you need to provide it, and know when to provide it.
**Describe the solution you'd like**
(1) Provide a vanilla pyspark example.
(2) Provide a pattern to show how to handle multiple spark data sources. Perhaps implement a graph adapter to do so.
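To make (1) concrete, a dataframe-level example could look roughly like this (a sketch only; the function and column names are illustrative, not from an existing example):
```python
import pyspark.sql.functions as F
from pyspark.sql import DataFrame

def filtered_events(raw_events: DataFrame) -> DataFrame:
    return raw_events.filter(F.col("value") > 0)

def enriched_events(filtered_events: DataFrame, users: DataFrame) -> DataFrame:
    # unlike pandas, the join key has to be explicit -- there is no shared index
    return filtered_events.join(users, on="user_id", how="left")
```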
**Describe alternatives you've considered**
N/A
| closed | 2022-03-10T07:53:38Z | 2023-02-26T17:05:23Z | https://github.com/stitchfix/hamilton/issues/84 | [
"documentation",
"enhancement",
"pyspark"
] | skrawcz | 1 |
IvanIsCoding/ResuLLMe | streamlit | 5 | Add template to Overleaf | To help people manually edit their resumes | open | 2023-04-12T23:48:21Z | 2023-04-12T23:48:21Z | https://github.com/IvanIsCoding/ResuLLMe/issues/5 | [
"good first issue"
] | IvanIsCoding | 0 |
tflearn/tflearn | data-science | 319 | Verify image input channels for 'image_preloader' | The code is here:
```
# -*- coding: utf-8 -*-
from __future__ import division, print_function, absolute_import
'''
Coding Just for Fun
Created by burness on 16/8/30.
'''
import tflearn
from tflearn.layers.estimator import regression
from tflearn.data_utils import build_image_dataset_from_dir
import os
from config import *
def vgg16(placeholderX=None):
x = tflearn.input_data(shape=[None, 224, 224, 3], name='input',
placeholder=placeholderX)
x = tflearn.conv_2d(x, 64, 3, activation='relu', scope='conv1_1')
x = tflearn.conv_2d(x, 64, 3, activation='relu', scope='conv1_2')
x = tflearn.max_pool_2d(x, 2, strides=2, name='maxpool1')
x = tflearn.conv_2d(x, 128, 3, activation='relu', scope='conv2_1')
x = tflearn.conv_2d(x, 128, 3, activation='relu', scope='conv2_2')
x = tflearn.max_pool_2d(x, 2, strides=2, name='maxpool2')
x = tflearn.conv_2d(x, 256, 3, activation='relu', scope='conv3_1')
x = tflearn.conv_2d(x, 256, 3, activation='relu', scope='conv3_2')
x = tflearn.conv_2d(x, 256, 3, activation='relu', scope='conv3_3')
x = tflearn.max_pool_2d(x, 2, strides=2, name='maxpool3')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv4_1')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv4_2')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv4_3')
x = tflearn.max_pool_2d(x, 2, strides=2, name='maxpool4')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv5_1')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv5_2')
x = tflearn.conv_2d(x, 512, 3, activation='relu', scope='conv5_3')
x = tflearn.max_pool_2d(x, 2, strides=2, name='maxpool5')
x = tflearn.fully_connected(x, 4096, activation='relu', scope='fc6')
x = tflearn.dropout(x, 0.5, name='dropout1')
x = tflearn.fully_connected(x, 4096, activation='relu', scope='fc7')
x = tflearn.dropout(x, 0.5, name='dropout2')
x = tflearn.fully_connected(x, 12, activation='softmax', scope='fc8',restore=False)
return x
data_dir = data_path
from tflearn.data_utils import image_preloader
X,Y = image_preloader('files_list', image_shape = (224,224),mode='file',categorical_labels=True,normalize=True)
# data_dir = data_path
# dataset_file = os.path.join(data_dir, 'DL_bot.pkl')
# shuffle = False
# one_hot = True
# resize_pics = (224,224)
# X, Y = build_image_dataset_from_dir(data_dir,
# dataset_file=dataset_file,
# resize=resize_pics,
# filetypes=['.jpg', '.jpeg'],
# convert_gray=False,
# shuffle_data=shuffle,
# categorical_Y=one_hot)
num_classes = 12
softmax = vgg16()
regression = regression(softmax, optimizer='adam',
loss='categorical_crossentropy',
learning_rate=0.001,restore=False)
model = tflearn.DNN(regression, checkpoint_path='finetuning-cv',
max_checkpoints=3, tensorboard_verbose=2, tensorboard_dir="./logs")
model_file = os.path.join(model_path, "vgg16.tflearn")
model.load(model_file,weights_only=True)
# Start finetuning
model.fit(X, Y, n_epoch=10, validation_set=0.1, shuffle=True,
show_metric=True, batch_size=64, snapshot_epoch=False, snapshot_step=200, run_id='finetuning')
model.save('animal-classifier')
model.predict()
```
The data-input method in the comments runs fine, but when I change to image_preloader I get an error: `ValueError: could not broadcast input array from shape (224,224,3) into shape (224,224)`.
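My guess is that some of my image files decode as single-channel (grayscale) while the rest are RGB; a quick check/fix I plan to try before loading (a hypothetical helper of mine, not part of tflearn):
```python
from PIL import Image

def ensure_rgb(path):
    # force every image to 3 channels before preloading
    img = Image.open(path)
    if img.mode != 'RGB':
        img.convert('RGB').save(path)
```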
Please check it, Thanks!!!
| closed | 2016-08-31T14:52:22Z | 2016-09-03T18:18:22Z | https://github.com/tflearn/tflearn/issues/319 | [
"contributions welcome"
] | burness | 4 |
sanic-org/sanic | asyncio | 2,534 | Inconsistent access logs | There is inconsistent behavior when starting from CLI and from script in how access logs perform. This is related to `--access-log` always returning a boolean and not a `None` value.
By default, access logs should only be displayed when running in DEBUG unless otherwise specified.
Ref: https://github.com/sanic-org/sanic/pull/2237#issuecomment-926205582 | closed | 2022-08-22T10:34:46Z | 2022-09-18T14:17:25Z | https://github.com/sanic-org/sanic/issues/2534 | [] | ahopkins | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,362 | App crashes whenever inference is at 90% | On Mac OS Montery, M1 Pro, 2021 Macbook Pro.
App crashes whenever inference is at 90%, regardless of the model or audio source | open | 2024-05-21T22:46:32Z | 2024-12-20T04:08:20Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1362 | [] | jothtaylor | 2 |
akfamily/akshare | data-science | 5,823 | AKShare interface issue report | The stock_zh_a_hist_min_em interface fails to fetch data:
stock_zh_a_hist_min_em_df = ak.stock_zh_a_hist_min_em(symbol="000001", start_date="2024-03-07 09:30:00",
end_date="2024-03-07 15:00:00", period="1", adjust="")
It returns an error immediately: KeyError: '000001'
| closed | 2025-03-10T06:37:08Z | 2025-03-10T11:42:11Z | https://github.com/akfamily/akshare/issues/5823 | [
"bug"
] | ninthheaven3 | 6 |
tflearn/tflearn | tensorflow | 926 | predicting binary sums with rnns | I would like some help with setting up the recurrent neural network for predicting binary sums. I have this...
```
import tflearn
import numpy as np
import random
def get_binary(number):
b = bin(number)
b = b[2:] # remove 0b
# i reverse it so i can keep passing numbers into the rnn
b = b[::-1]
return b
def get_random_question():
num1 = random.randint(1, 1000)
num2 = random.randint(1, 1000)
num1bin = get_binary(num1)
num2bin = get_binary(num2)
answer = num1 + num2
return num1bin, num2bin, answer
```
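To be concrete, this is roughly the data layout I think I need: one bit-pair per timestep, LSB first, with both numbers zero-padded to a common length (just a sketch, and the max_len choice is my guess based on the 1..1000 ranges above):
```
def make_example(max_len=11):
    # one bit-pair per timestep, LSB first, zero-padded to max_len
    a, b, answer = get_random_question()
    ans = get_binary(answer)
    x = [[int(a[i]) if i < len(a) else 0,
          int(b[i]) if i < len(b) else 0] for i in range(max_len)]
    y = [[int(ans[i]) if i < len(ans) else 0] for i in range(max_len)]
    return x, y

X, Y = zip(*[make_example() for _ in range(10000)])
X, Y = np.array(X), np.array(Y)   # shapes: (10000, 11, 2) and (10000, 11, 1)
```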
Can you help me turn this into a working model? I want the RNN to advance one timestep per binary digit, and ideally I'd like it to handle numbers of arbitrary length. Any thoughts? | open | 2017-10-07T12:26:23Z | 2017-10-07T12:26:23Z | https://github.com/tflearn/tflearn/issues/926 | [] | ericbot | 0 |
browser-use/browser-use | python | 708 | 'BrowserContext' object has no attribute 'get_state' | ### Bug Description
I am using playwright and browser-use for my project. For the login part I am using playwright and then for the other actions I am using the browser use agent
### Reproduction Steps
1. Install browser-use
2. Paste the following code
3. Run it
### Code Sample
```python
from langchain_openai import AzureChatOpenAI
from playwright.async_api import async_playwright
from browser_use import Agent
import asyncio
from dotenv import load_dotenv
import os
from pydantic import SecretStr
load_dotenv()
api_key = os.getenv("OPENAI_API_KEY")
llm = AzureChatOpenAI(
model="gpt-4o-mini",
api_version='version',
azure_endpoint="endpoint",
api_key=SecretStr(api_key),
)
async def main():
async with async_playwright() as p:
browser = await p.chromium.launch(headless=False)
context = await browser.new_context()
page = await context.new_page()
await page.goto(os.getenv("BASE_URL"))
# login part using playwright
agent = Agent(
task="Click the 'Procedures' dropdown",
llm=llm,
browser_context=context,
)
result = await agent.run()
print(result)
if __name__ == "__main__":
asyncio.run(main())
```
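If I'm reading the error right, `Agent` wants browser-use's own browser/context wrapper rather than Playwright's `BrowserContext`. The variant I'm planning to try next looks roughly like this (I'm guessing at the API from the README, so treat it as a sketch):
```python
from browser_use import Agent, Browser

browser = Browser()  # let browser-use manage its own Playwright instance

agent = Agent(
    task="Click the 'Procedures' dropdown",
    llm=llm,
    browser=browser,
)
```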
### Version
Version: 0.1.37
### LLM Model
GPT-4o
### Operating System
Windows 11
### Relevant Log Output
```shell
INFO [agent] 🚀 Starting task: Click the 'Procedures' dropdown
INFO [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 1/3 times:
'BrowserContext' object has no attribute 'get_state'
INFO [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 2/3 times:
'BrowserContext' object has no attribute 'get_state'
INFO [agent] 📍 Step 1
ERROR [agent] ❌ Result failed 3/3 times:
'BrowserContext' object has no attribute 'get_state'
ERROR [agent] ❌ Stopping due to 3 consecutive failures
WARNING [agent] No history to create GIF from
``` | closed | 2025-02-13T10:38:02Z | 2025-02-15T15:23:47Z | https://github.com/browser-use/browser-use/issues/708 | [
"bug"
] | Dinesh-mn-20 | 1 |
minivision-ai/photo2cartoon | computer-vision | 49 | About the discriminators | Hello, your work is excellent and has given me a lot of inspiration. May I ask what the intention is behind using two discriminators, at layer=5 and layer=7 respectively? Also, is there any rule of thumb for setting these parameters? | closed | 2020-11-04T02:56:43Z | 2020-11-19T07:38:21Z | https://github.com/minivision-ai/photo2cartoon/issues/49 | [] | itomorrower08 | 3 |
proplot-dev/proplot | data-visualization | 279 | Assigning projections with projection dict. in subplots method | ### Description
Drawing a custom subplot layout (one axis with a Cartopy projection, the others Cartesian) breaks with newer versions of proplot (> 0.6).
### Steps to reproduce
Notebook to show where it breaks: https://colab.research.google.com/drive/1jlXYQTflOHVE-QxkEho7PFxfl8OyLZ13?usp=sharing
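For anyone who can't open the notebook, the kind of call I mean is roughly this (a simplified sketch, not the exact notebook code):
```python
import proplot as pplt

# one Cartopy axis, the other plain Cartesian
fig, axs = pplt.subplots(ncols=2, proj={1: 'cyl'})
```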
**Expected behavior**: The first example in the notebook
**Actual behavior**: breaks, does not plot
### Equivalent steps in matplotlib
There is no way to do this in matplotlib; that's why I am using your package.
### Proplot version
Versions < 0.6 work; versions > 0.6 break.
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)` here.
(I put these in the notebook.)
| closed | 2021-09-10T17:03:20Z | 2021-09-10T21:06:06Z | https://github.com/proplot-dev/proplot/issues/279 | [
"bug"
] | dopplerchase | 1 |
errbotio/errbot | automation | 1,047 | Groupchat error message when trying to send a message to a Hipchat room | ### I am...
* [x] Reporting a bug(?)
* [x] Requesting help writing plugins(?)
### I am running...
* Errbot version: 5.0.1
* OS version: Ubuntu 16.04.2
* Python version: 3.6.1
* Using a virtual environment: yes
### Issue description
We've developed a plugin to remind our remote team about a weekly company event. The bot sends the message to a common room. We're using Hipchat, so we use an XMPP JID to identify the room. We recently upgraded to Errbot 5.0.1 from 2.x, so we started using `self.build_identifier` to convert the JID string to an `Identifier` object. However, every time the plugin activates to send the message, we get this traceback:
```
2017-06-29 11:29:58,341 ERROR sleekxmpp.xmlstream.xmlstream Error processing stream handler: MUCError
Traceback (most recent call last):
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/xmlstream.py", line 1684, in _event_runner
handler.run(args[0])
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/handler/callback.py", line 76, in run
self._pointer(payload)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/plugins/xep_0045.py", line 187, in handle_groupchat_error_message
self.xmpp.event("muc::%s::message_error" % msg['from'].bare, msg)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 701, in __getitem__
return getattr(self, get_method)(**kwargs)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 1511, in get_from
return JID(self._get_attr('from'))
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 463, in __init__
parsed_jid = _parse_jid(jid)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 152, in _parse_jid
domain = _validate_domain(domain)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 233, in _validate_domain
raise InvalidJID('Domain contains illegal characters')
sleekxmpp.jid.InvalidJID: Domain contains illegal characters
Exception in thread event_thread_0:
Traceback (most recent call last):
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/xmlstream.py", line 1684, in _event_runner
handler.run(args[0])
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/handler/callback.py", line 76, in run
self._pointer(payload)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/plugins/xep_0045.py", line 187, in handle_groupchat_error_message
self.xmpp.event("muc::%s::message_error" % msg['from'].bare, msg)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 701, in __getitem__
return getattr(self, get_method)(**kwargs)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 1511, in get_from
return JID(self._get_attr('from'))
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 463, in __init__
parsed_jid = _parse_jid(jid)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 152, in _parse_jid
domain = _validate_domain(domain)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 233, in _validate_domain
raise InvalidJID('Domain contains illegal characters')
sleekxmpp.jid.InvalidJID: Domain contains illegal characters
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/pyenv/versions/3.6.1/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/local/pyenv/versions/3.6.1/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/xmlstream.py", line 1688, in _event_runner
orig.exception(e)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/stanza/rootstanza.py", line 78, in exception
self.reply()
File "/path/to/lib/python3.6/site-packages/sleekxmpp/stanza/message.py", line 139, in reply
StanzaBase.reply(self, clear)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 1560, in reply
self['to'] = self['from']
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 701, in __getitem__
return getattr(self, get_method)(**kwargs)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/xmlstream/stanzabase.py", line 1511, in get_from
return JID(self._get_attr('from'))
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 463, in __init__
parsed_jid = _parse_jid(jid)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 152, in _parse_jid
domain = _validate_domain(domain)
File "/path/to/lib/python3.6/site-packages/sleekxmpp/jid.py", line 233, in _validate_domain
raise InvalidJID('Domain contains illegal characters')
sleekxmpp.jid.InvalidJID: Domain contains illegal characters
```
After this, the bot disconnects/is disconnected from Hipchat, and the process has to be stopped and restarted in order to reconnect.
### Steps to reproduce
Here is the plugin implementation (with the JID scrambled a bit):
```python
# -*- coding: utf-8 -*-
import logging
from datetime import timedelta
from errbot import BotPlugin
import arrow
class ChapelTime(BotPlugin):
"""Notify when it's time for chapel.
"""
def check_for_chapel(self):
now = arrow.now().to('US/Central')
already_notified = self.last_notified and (now - self.last_notified) < timedelta(days=1)
is_thursday = now.isoweekday() == 4
if is_thursday and not already_notified:
before_chapel_start = now.replace(hour=11, minute=29)
chapel_start = now.replace(hour=11, minute=35)
if before_chapel_start <= now <= chapel_start:
logging.info("Notifying at %s", now)
self.notify_for_chapel_time()
self.last_notified = now
def notify_for_chapel_time(self):
room = self.build_identifier('71078_developer@conf.hipchat.com') # Web Development
self.send(room, text="http://i.imgur.com/9oFlFfG.gif")
self.send(room, text="Stop! It's chapel time! http://...")
def activate(self):
super(ChapelTime, self).activate()
self.last_notified = None
self.start_poller(60, self.check_for_chapel)
```
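One variation I've been meaning to test (I'm not sure it's the intended API for rooms) is resolving the room with `query_room` instead of `build_identifier`:
```python
    def notify_for_chapel_time(self):
        # hypothetical variant: treat the target explicitly as a MUC room
        room = self.query_room('71078_developer@conf.hipchat.com')
        self.send(room, text="Stop! It's chapel time!")
```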
### Additional info
The documentation for `build_identifier` just says that the structure of the string is backend-dependent, but there's no mention in the Hipchat backend documentation about what an identifier should look like. The room JID is valid, and is used elsewhere, e.g. for specifying rooms to join in `CHATROOM_PRESENCE`, and was used by `send()` in 2.x (w/o `build_identifier()`) to send a message to a room, so I'm not sure what I'm missing. | closed | 2017-06-29T17:12:20Z | 2019-01-05T16:32:32Z | https://github.com/errbotio/errbot/issues/1047 | [] | eykd | 2 |
exaloop/codon | numpy | 371 | How do I install Codon Jupyter plugin | ```
> codon jupyter
Jupyter support not included. Please install Codon Jupyter plugin.
```
I didn't even know a plugin system was available. | closed | 2023-04-30T19:56:46Z | 2024-11-10T06:06:51Z | https://github.com/exaloop/codon/issues/371 | [] | NoelJacob | 9 |
stanford-oval/storm | nlp | 263 | [BUG] | Co-Storm at Stanford appears to face several challenges that impact its performance and user experience. Here’s a summary of the key issues:
1. Performance Challenges
• Slowness and Bugs: Users have reported slow response times and bugs due to Co-Storm’s ambitious design, which attempts to synthesize organized, long-form content from vast data sets. These factors are exacerbated by high utilization and request bursts during peak times【4】【6】.
• System Overload: The complexity of processing data for generating detailed outlines and citations can lead to delays, especially under heavy traffic【1】【6】.
2. Content Generation Issues
• Accuracy and Citations: The model sometimes generates confident but inaccurate outputs, often missing proper citations, which raises concerns for academic reliability【1】【3】.
• Plagiarism Concerns: Unintended replication of source material without attribution has been flagged as a significant issue, potentially undermining the credibility of the content【1】【6】.
3. Usability and Depth
• Limited Detail: While Co-Storm is effective for creating high-level outlines, it struggles with providing the depth and specificity needed for more complex academic topics【1】【6】.
• Inconsistent Quality: Users report variability in the quality of generated content, with some outputs requiring substantial refinement【6】.
Key Contributing Factors:
• Ambitious Design: Balancing thoroughness, speed, and accuracy is challenging for any system aiming to synthesize complex, long-form academic content【4】.
• Infrastructure Limitations: High system utilization and the need to manage large-scale data processing can lead to bottlenecks【1】【2】.
These factors collectively indicate that while Co-Storm has promising potential as an academic tool, it currently struggles to consistently meet user expectations. Improvements in processing efficiency, citation accuracy, and handling of peak usage could help address these concerns.
Alright, so it sounds like you've experienced the slowness, plagiarism, and citation issues yourself. I can definitely see how that would be frustrating! Plagiarism and incorrect citations can have serious consequences, especially in academic settings.
Co-Storm at Stanford has potential for improvement, particularly regarding its slowness, citation accuracy, and plagiarism issues. Addressing these challenges may be feasible through:
1. System Optimization: Enhancing processing algorithms and infrastructure could help manage high traffic more effectively, reducing response times.
2. Improved Citation Mechanisms: Implementing robust citation protocols could mitigate plagiarism risks and ensure academic reliability.
3. User Feedback Integration: Actively incorporating user feedback can guide refinements in content generation and usability.
While some issues may stem from the system’s ambitious design, targeted improvements could enhance Co-Storm’s effectiveness as an academic tool. | closed | 2024-12-01T14:33:14Z | 2024-12-01T17:51:00Z | https://github.com/stanford-oval/storm/issues/263 | [] | amraz-k | 1 |
aleju/imgaug | deep-learning | 839 | ImportError: cannot import name 'QhullError' from 'scipy.spatial' (/opt/conda/lib/python3.10/site-packages/scipy/spatial/__init__.py) | # I got this error on Kaggle
## This is my code:
```
import os
import random
import fnmatch
import datetime
import pickle
import numpy as np
np.set_printoptions(formatter={'float_kind':lambda x: "%.4f" % x})
import pandas as pd
pd.set_option('display.width', 300)
pd.set_option('display.float_format', '{:,.4f}'.format)
pd.set_option('display.max_colwidth', 200)
import tensorflow as tf
import keras
from keras.models import Sequential # V2 is tensorflow.keras.xxxx, V1 is keras.xxx
from keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense, GlobalMaxPooling2D
from keras.optimizers import Adam
from keras.models import load_model
print( f'tf.__version__: {tf.__version__}' )
print( f'keras.__version__: {keras.__version__}' )
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split
import cv2
import imgaug.augmenters as img_aug
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
from PIL import Image
import tensorflow as tf
devices = tf.config.list_physical_devices('GPU')
```
## My error:
```
/opt/conda/lib/python3.10/site-packages/scipy/__init__.py:146: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.5
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion}"
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:98: UserWarning: unable to load libtensorflow_io_plugins.so: unable to open file: libtensorflow_io_plugins.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io_plugins.so: undefined symbol: _ZN3tsl6StatusC1EN10tensorflow5error4CodeESt17basic_string_viewIcSt11char_traitsIcEENS_14SourceLocationE']
warnings.warn(f"unable to load libtensorflow_io_plugins.so: {e}")
/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/__init__.py:104: UserWarning: file system plugins are not loaded: unable to open file: libtensorflow_io.so, from paths: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so']
caused by: ['/opt/conda/lib/python3.10/site-packages/tensorflow_io/python/ops/libtensorflow_io.so: undefined symbol: _ZTVN10tensorflow13GcsFileSystemE']
warnings.warn(f"file system plugins are not loaded: {e}")
tf.__version__: 2.12.0
keras.__version__: 2.12.0
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[2], line 34
31 # imaging
32 import cv2
---> 34 import imgaug.augmenters as img_aug
36 import matplotlib.pyplot as plt
37 import matplotlib.image as mpimg
File /opt/conda/lib/python3.10/site-packages/imgaug/__init__.py:9
4 # this contains some deprecated classes/functions pointing to the new
5 # classes/functions, hence always place the other imports below this so that
6 # the deprecated stuff gets overwritten as much as possible
7 from imgaug.imgaug import * # pylint: disable=redefined-builtin
----> 9 import imgaug.augmentables as augmentables
10 from imgaug.augmentables import *
11 import imgaug.augmenters as augmenters
File /opt/conda/lib/python3.10/site-packages/imgaug/augmentables/__init__.py:8
6 from imgaug.augmentables.lines import *
7 from imgaug.augmentables.heatmaps import *
----> 8 from imgaug.augmentables.segmaps import *
9 from imgaug.augmentables.batches import *
File /opt/conda/lib/python3.10/site-packages/imgaug/augmentables/segmaps.py:12
9 import six.moves as sm
11 from .. import imgaug as ia
---> 12 from ..augmenters import blend as blendlib
13 from .base import IAugmentable
16 @ia.deprecated(alt_func="SegmentationMapsOnImage",
17 comment="(Note the plural 'Maps' instead of old 'Map'.)")
18 def SegmentationMapOnImage(*args, **kwargs):
File /opt/conda/lib/python3.10/site-packages/imgaug/augmenters/__init__.py:21
19 import imgaug.augmenters.pillike # use via: iaa.pillike.*
20 from imgaug.augmenters.pooling import *
---> 21 from imgaug.augmenters.segmentation import *
22 from imgaug.augmenters.size import *
23 from imgaug.augmenters.weather import *
File /opt/conda/lib/python3.10/site-packages/imgaug/augmenters/segmentation.py:21
17 import numpy as np
18 # use skimage.segmentation instead `from skimage import segmentation` here,
19 # because otherwise unittest seems to mix up imgaug.augmenters.segmentation
20 # with skimage.segmentation for whatever reason
---> 21 import skimage.segmentation
22 import skimage.measure
23 import six
File /opt/conda/lib/python3.10/site-packages/skimage/segmentation/__init__.py:7
5 from .slic_superpixels import slic
6 from ._quickshift import quickshift
----> 7 from .boundaries import find_boundaries, mark_boundaries
8 from ._clear_border import clear_border
9 from ._join import join_segmentations, relabel_sequential
File /opt/conda/lib/python3.10/site-packages/skimage/segmentation/boundaries.py:5
2 from scipy import ndimage as ndi
4 from .._shared.utils import _supported_float_type
----> 5 from ..morphology import dilation, erosion, square
6 from ..util import img_as_float, view_as_windows
7 from ..color import gray2rgb
File /opt/conda/lib/python3.10/site-packages/skimage/morphology/__init__.py:12
10 from ..measure._label import label
11 from ._skeletonize import medial_axis, skeletonize, skeletonize_3d, thin
---> 12 from .convex_hull import convex_hull_image, convex_hull_object
13 from .grayreconstruct import reconstruction
14 from .misc import remove_small_holes, remove_small_objects
File /opt/conda/lib/python3.10/site-packages/skimage/morphology/convex_hull.py:4
2 from itertools import product
3 import numpy as np
----> 4 from scipy.spatial import ConvexHull, QhullError
5 from ..measure.pnpoly import grid_points_in_poly
6 from ._convex_hull import possible_hull
ImportError: cannot import name 'QhullError' from 'scipy.spatial' (/opt/conda/lib/python3.10/site-packages/scipy/spatial/__init__.py)
``` | closed | 2023-07-19T13:42:35Z | 2023-09-06T10:06:30Z | https://github.com/aleju/imgaug/issues/839 | [] | khengyun | 3 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 197 | Implement new models/architectures: | I want to implement a new architecture. | closed | 2023-07-11T16:08:49Z | 2024-03-02T08:57:59Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/197 | [] | Adesoji1 | 0 |
onnx/onnx | deep-learning | 5,886 | exporting pytorch to onnx fails, instead of an error, prints infinite number calculations | # Bug Report
When exporting a model to onnx using this code:
torch.onnx.export(self.track_wrapper,
(im_patches, self.train_feat, self.target_labels, self.train_ltrb),
"new_full_implicit_batch1t.onnx",
verbose=False,
export_params=True,
do_constant_folding=True,
opset_version=17,
input_names=['im_patches','train_feat','target_labels','train_ltrb'],
output_names=['scores_raw','bbox_preds'])
I get output like the following, except it keeps printing for what seems like an infinite amount of time:
(243,512,.,.) =
0.001 *
-8.2217
(244,512,.,.) =
0.01 *
2.7684
(245,512,.,.) =
0.01 *
7.3573
(246,512,.,.) =
0.01 *
-2.2212
(247,512,.,.) =
0.001 *
8.3807
(248,512,.,.) =
0.01 *
-7.1563
(249,512,.,.) =
-0.1012
(250,512,.,.) =
0.01 *
2.5295
(251,512,.,.) =
0.001 *
-8.2534
(252,512,.,.) =
0.001 *
3.6239
(253,512,.,.) =
0.01 *
-1.9034
(254,512,.,.) =
0.001 *
-2.5435
(255,512,.,.) =
0.001 *
2.9510
(256,512,.,.) =
0.01 *
-1.2922
[ torch.cuda.FloatTensor{256,512,1,1} ]
Perhaps somebody knows why this is happening. I tried moving all the tensors and weights to CUDA, then to CPU, but I'm still getting the same result...
Like I said, this output seems to go on forever; the snippet above is just the ending.
The PyTorch model by itself works perfectly fine.
If any more info is required please ask
Thank you
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Windows 10
- ONNX version (*e.g. 1.13*): onnx with pytorch 2.1.2+cu121
- Python version: 3.10.13
| closed | 2024-01-31T22:42:51Z | 2024-02-01T17:31:27Z | https://github.com/onnx/onnx/issues/5886 | [
"bug"
] | ninono12345 | 1 |
tortoise/tortoise-orm | asyncio | 1,290 | MinValueValidator does not work on DecimalField | **Describe the bug**
MinValueValidator and MaxValueValidator are affected by this and reject decimal fields.
**To Reproduce**
```py
class Model:
# The following will immediately be rejected
example1 = fields.DecimalField(max_digits=12, decimal_places=3, validators=(MinValueValidator(Decimal("0.00")),))
# The following will be rejected when the value is compared in the db
example2 = fields.DecimalField(max_digits=12, decimal_places=3, validators=(MinValueValidator(0),))
```
**Expected behavior**
It should intuitively work.
**Additional context**
https://github.com/tortoise/tortoise-orm/blob/48ea2dfbff3abc0ffc1b8ed85653f6cdf08bd57c/tortoise/validators.py#L63-L78
The two isinstance() calls could be augmented with a `Decimal` and I don't immediately see anything wrong with it.
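Concretely, the change I have in mind is roughly this (I'm paraphrasing the linked validators from memory, so the exact shape and message may differ):
```py
from decimal import Decimal

from tortoise.exceptions import ValidationError

class MinValueValidator:
    """Sketch of the augmented validator: Decimal added to the isinstance check."""

    def __init__(self, min_value):
        self.min_value = min_value

    def __call__(self, value):
        if not isinstance(value, (int, float, Decimal)):
            raise ValidationError("Value must be a numeric value and is required")
        if value < self.min_value:
            raise ValidationError(f"Value should be greater or equal to {self.min_value}")
```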
However, it seems to almost explicitly exclude Decimal, so I'm not sure. Maybe @MojtabaArezoomand knows more as this was done in 48ea2dfbff3abc0ffc1b8ed85653f6cdf08bd57c. | closed | 2022-11-16T13:45:00Z | 2022-11-17T12:45:14Z | https://github.com/tortoise/tortoise-orm/issues/1290 | [] | jleclanche | 3 |
erdewit/ib_insync | asyncio | 306 | Market Data Disconnect with 0.9.62 on Ubuntu Linux 18.04 | I've recently developed and deployed a stock trading app based on this wonderful framework. (Thanks Ewald!!) After a bit of testing locally on my Mac I deployed to the Google Cloud on Ubuntu 18.04 Linux. I set up a Gnome Desktop and TWS and installed all the supporting libraries in my server in including the latest patch of ib_insync (0.9.62). The IB app seemed to connect fine, the market data connections initialized, but after a couple of trades my app would disconnect. It didn't error out but I would simply see the following message in the logs:
2020-10-20 09:31:09,012 - ib_insync.wrapper - INFO - Warning 2103, reqId -1: Market data farm connection is broken:usfarm
Then the TWS app would basically freeze. I ran the app a few times in the debugging process. Sometimes after this message I would see a message in the logs that the connection had been reestablished but all processing with the TWS and the API would stop.
I tried several things to resolve the issue (changing versions of TWS and the JDK's) but the resolution occurred when I realized I had the previous version of ib_insync on my dev environment vs. the latest on Ubuntu. Once I rolled this back my app seems to be fully connected again and doesn't drop once trading starts.
Let me know if you need more info to duplicate or if there is logging I can enable to provide more info.
| closed | 2020-10-23T14:54:55Z | 2020-10-25T13:02:36Z | https://github.com/erdewit/ib_insync/issues/306 | [] | wardster-ai | 1 |
unionai-oss/pandera | pandas | 1,455 | When used as the bound of a TypeVar, a DataFrameModel is not recognized by MyPy as a valid type | **Describe the bug**
When used as the bound of a TypeVar, a DataFrameModel is not recognized by MyPy as a valid type. This may be more of a MyPy issue.
- [x] I have checked that this issue has not already been reported.
- [x] I have confirmed this bug exists on the latest version of pandera.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandera.
#### Code Sample, a copy-pastable example
```python
"""
Minimum code to reproduce the issue.
TypeVar of Pandera DataFrameModel not interpreted as a valid type.
"""
from typing import TypeVar, Type
import pandas as pd # type:ignore
from pandera import DataFrameModel
from pandera.typing import DataFrame, Series
T = TypeVar("T", bound=DataFrameModel)
def reproduce_issue(model: Type[T], df: pd.DataFrame) -> DataFrame[T]:
"""
MyPy will give an error here that model is not a valid type.
MyPy gives the message "variable 'model' is not valid as a type."
"""
return DataFrame[model](df)
test_df = pd.DataFrame(
{
"a": [1, 2, 3],
"b": [4, 5, 6],
}
)
class Model(DataFrameModel):
"""DataFrameModel example to reproduce issue."""
a: Series[int]
b: Series[int]
reproduce_issue(Model, test_df)
```
#### Expected behavior
MyPy should recognize that the variable, which is typed via a TypeVar object, is a valid type for a pandera.typing.DataFrame object.
#### Desktop (please complete the following information):
- OS: Windows 11
- Browser: Chrome
- Version: pandera 0.18.0, MyPy 1.8.0, Python 3.11.6
#### Screenshots

#### Additional context
This may well be a MyPy issue, rather than a Pandera issue. It could also be more of a feature request than a bug. Sorry for any inconvenience.
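For completeness, the workaround I'm using for now (reusing the names from the example above) is to silence the construction and cast the result; not a fix, just to unblock:
```python
from typing import cast

def workaround(model: Type[T], df: pd.DataFrame) -> DataFrame[T]:
    validated = DataFrame[model](df)  # type: ignore  # MyPy: "not valid as a type"
    return cast(DataFrame[T], validated)
```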
| open | 2024-01-07T03:49:30Z | 2024-01-07T03:49:30Z | https://github.com/unionai-oss/pandera/issues/1455 | [
"bug"
] | madkopp | 0 |
tfranzel/drf-spectacular | rest-api | 448 | Integration with drf-rw-serializers | **Describe the bug**
Hello, first of all, thanks for your work. This library is being very useful for one of the projects I'm working on.
In one of the projects I work on, we make extensive use of `drf-rw-serializers`.
This means that `serializer_class` is not defined in the Views, and instead, `write_serializer_class` and `read_serializer_class` are defined.
All view classes provided by `drf-rw-serializers` inherit from `drf-rw-serializers` `GenericAPIView` class https://github.com/vintasoftware/drf-rw-serializers/blob/master/drf_rw_serializers/generics.py#L8-L71
I'm trying to get the documentation generated correctly, but I haven't been able to succeed yet.
The generated schema does not include any information provided by serializers:
```yaml
/api/mymodel/:
post:
operationId: mymodel_create
description: ''
tags:
- core
security:
- cookieAuth: []
- basicAuth: []
responses:
'201':
description: No response body
/api/mymodel/{id}/:
get:
operationId: mymodel_retrieve
description: ''
parameters:
- in: path
name: id
schema:
type: integer
required: true
tags:
- core
security:
- cookieAuth: []
- basicAuth: []
responses:
'200':
description: No response body
```
Below is the minimum code to reproduce the problem.
**To Reproduce**
```python
from rest_framework.permissions import IsAdminUser
from rest_framework import parsers
from drf_rw_serializers.viewsets import ModelViewSet as ModelRWViewSet
from drf_spectacular.utils import extend_schema, extend_schema_view
from .models import MyModel
from .serializers import MyModelUploadSerializer, MyModelSerializer
@extend_schema_view(
post=extend_schema(
description="POST method description here",
request=MyModelUploadSerializer,
responses={
201: MyModelSerializer
}
),
get=extend_schema(
description="GET method description here",
responses={
200: MyModelSerializer
}
)
)
class MyAPIView(ModelRWViewSet):
queryset = MyModel.objects.order_by('-created').all()
write_serializer_class = MyModelUploadSerializer
read_serializer_class = MyModelSerializer
permission_classes = [IsAdminUser]
parser_classes = [parsers.MultiPartParser]
```
urls.py
```python
from django.urls import path
from .api_views import MyAPIView
urlpatterns = [
path('api/mymodel/', view=MyAPIView.as_view({'post': 'create'}),
name='mymodel-upload'
),
path('api/mymodel/<int:pk>/', view=MyAPIView.as_view({'get': 'retrieve'}),
name='mymodel-detail'
),
]
```
**Expected behavior**
I would hope to get some hint on how to implement a generic way to make `drf-spectacular` understand the structure of a view that inherits from drf-rw-serializers' `GenericAPIView` class and, from there, be able to use `write_serializer_class` and `read_serializer_class` instead of just `serializer_class` to generate the OpenAPI schema.
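The direction I'm experimenting with (no idea yet whether it's idiomatic) is to give the view a hypothetical `get_serializer_class()` shim so the schema introspection has something standard to call, roughly:
```python
class MyAPIView(ModelRWViewSet):
    queryset = MyModel.objects.order_by('-created').all()
    write_serializer_class = MyModelUploadSerializer
    read_serializer_class = MyModelSerializer

    def get_serializer_class(self):
        # pick the serializer by HTTP method so the default
        # drf-spectacular introspection finds something to call
        if self.request and self.request.method in ("POST", "PUT", "PATCH"):
            return self.write_serializer_class
        return self.read_serializer_class
```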
| closed | 2021-07-01T22:50:37Z | 2021-10-17T12:30:27Z | https://github.com/tfranzel/drf-spectacular/issues/448 | [] | luzfcb | 4 |
allenai/allennlp | pytorch | 5,440 | Coreference resolution on Litbank Coref dataset | How can we use the allennlp coref training module on [litbank dataset](https://github.com/dbamman/litbank/tree/master/coref/conll) with conll files? | closed | 2021-10-20T05:09:24Z | 2021-11-03T16:09:40Z | https://github.com/allenai/allennlp/issues/5440 | [
"question",
"stale"
] | aakashb95 | 3 |
FujiwaraChoki/MoneyPrinter | automation | 209 | [BUG] | **Describe the bug**
main.py exits with the error: ModuleNotFoundError: No module named 'g4f'
**To Reproduce**
1. CD into Backend
2. Run Python main.py
**Expected behavior**
The script would run normally
**Desktop**
- OS: Windows
- Python Version 3.12.2
**Additional context**
Full logs:
C:\Users\user\Documents\YTShorts\MoneyPrinter\Backend>python main.py
Traceback (most recent call last):
File "C:\Users\user\Documents\YTShorts\MoneyPrinter\Backend\main.py", line 3, in <module>
from gpt import *
File "C:\Users\user\Documents\YTShorts\MoneyPrinter\Backend\gpt.py", line 3, in <module>
import g4f
ModuleNotFoundError: No module named 'g4f'
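Is this simply a missing dependency on my side (i.e. would `pip install g4f` be enough), or is something else going on?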
| closed | 2024-02-12T15:50:11Z | 2024-02-12T18:03:09Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/209 | [] | takuruk1 | 5 |
lonePatient/awesome-pretrained-chinese-nlp-models | nlp | 2 | 有类似CLIP这样文字-图片的中文模型吗 | closed | 2021-08-13T22:03:16Z | 2022-11-08T06:35:00Z | https://github.com/lonePatient/awesome-pretrained-chinese-nlp-models/issues/2 | [] | troilus-canva | 6 |
|
tensorflow/tensor2tensor | deep-learning | 990 | tensorflow_model_server not responding correctly | ### Description
I have trained the translate_encs_wmt32k_rev problem and am now trying to use tensorflow_model_server to serve it. I have installed it locally and I am also using the Docker image, but both give the same results. I am trying to use the gRPC and REST APIs, but neither of them works.
I start the server this way with docker
```
nvidia-docker run -p 8500:8500 -p 8501:8501 -v $TRAIN_DIR/export/Servo:/models/my_model -e MODEL_NAME=my_model -t tensorflow/serving:latest-gpu
```
and this way without it
```
./tensorflow_model_server --port=8080 --model_name=my_model --model_base_path=$TRAIN_DIR/export/Servo
```
when I run t2t-query I get this
```
honza@honza-xps ~/W/U/E/t/encs> t2t-query-server --server=localhost:8500 --servable_name=my_model --problem=$PROBLEM --data_dir=$DATA_DIR --inputs_once="ahoj"
Traceback (most recent call last):
File "/usr/bin/t2t-query-server", line 16, in <module>
tf.app.run()
File "/usr/lib/python3.7/site-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "/usr/bin/t2t-query-server", line 12, in main
query.main(argv)
File "/usr/lib/python3.7/site-packages/tensor2tensor/serving/query.py", line 88, in main
outputs = serving_utils.predict([inputs], problem, request_fn)
File "/usr/lib/python3.7/site-packages/tensor2tensor/serving/serving_utils.py", line 114, in predict
predictions = request_fn(examples)
File "/usr/lib/python3.7/site-packages/tensor2tensor/serving/serving_utils.py", line 71, in _make_grpc_request
response = stub.Predict(request, timeout_secs)
File "/usr/lib/python3.7/site-packages/grpc/_channel.py", line 514, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/lib/python3.7/site-packages/grpc/_channel.py", line 448, in _end_unary_response_blocking
raise _Rendezvous(state, None, None, deadline)
grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero
[[Node: transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/add, transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/concat)]]
[[Node: transformer/while/body/parallel_0/body/decoder/layer_4/ffn/conv1/Tensordot/Shape/_1127 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_5346_...rdot/Shape", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopConstantFolding/transformer/while/body/parallel_0/body/decoder/laye...TRUNCATED"
debug_error_string = "{"created":"@1534190158.516842292","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1095,"grpc_message":"Reshape cannot infer the missing input size for an empty tensor unless all specified input sizes are non-zero\n\t [[Node: transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/Reshape = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/add, transformer/body/parallel_0/body/encoder/layer_0/self_attention/multihead_attention/dot_product_attention/concat)]]\n\t [[Node: transformer/while/body/parallel_0/body/decoder/layer_4/ffn/conv1/Tensordot/Shape/_1127 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_5346_...rdot/Shape", tensor_type=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](^_cloopConstantFolding/transformer/while/body/parallel_0/body/decoder/laye...TRUNCATED","grpc_status":3}"
>
```
and when I use curl to access the REST API I get this
```
honza@honza-xps ~> curl -d '{"instances": [{"input": "Ahoj"}]}' -X POST http://localhost:8501/v1/models/my_model:predict
{ "error": "Could not parse example input, value: \'Ahoj\'\n\t [[Node: ParseSingleExample/ParseSingleExample = ParseSingleExample[Tdense=[DT_INT64], dense_keys=[\"batch_prediction_key\"], dense_shapes=[[1]], num_sparse=2, sparse_keys=[\"inputs\", \"targets\"], sparse_types=[DT_INT64, DT_INT64]](arg0, ParseSingleExample/Reshape)]]\n\t [[Node: DatasetToSingleElement = DatasetToSingleElement[output_shapes=[[?,1], [?,?,1,1], [?,?,1,1]], output_types=[DT_INT32, DT_INT32, DT_INT32], _device=\"/job:localhost/replica:0/task:0/device:CPU:0\"](MapDataset_4)]]\n\t [[Node: transformer/while/body/parallel_0/body/decoder/layer_0/self_attention/multihead_attention/output_transform/Tensordot/GatherV2/_807 = _Recv[client_terminated=false, recv_device=\"/job:localhost/replica:0/task:0/device:GPU:0\", send_device=\"/job:localhost/replica:0/task:0/device:CPU:0\", send_device_incarnation=1, tensor_name=\"edge_3723_...t/GatherV2\", tensor_type=DT_INT32, _device=\"/job:localhost/replica:0/task:0/device:GPU:0\"](^_clooptransformer/while/symbol_modality_32888_512/parallel_0/symbol_modality_32888_512/shared/Squeeze/_410)]]" }⏎
```
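From the `ParseSingleExample` node in that message, my guess is that the REST endpoint wants a serialized `tf.Example` (with the subword ids from the problem's vocabulary) rather than raw text, something along these lines (a sketch; `encoded_ids` is a placeholder for ids produced by the problem's text encoder):
```python
import base64

import requests
import tensorflow as tf

encoded_ids = [123, 456, 1]  # placeholder: subword ids from the problem's vocab, EOS-terminated

example = tf.train.Example(features=tf.train.Features(feature={
    "inputs": tf.train.Feature(int64_list=tf.train.Int64List(value=encoded_ids)),
}))
payload = {"instances": [{"b64": base64.b64encode(example.SerializeToString()).decode()}]}
requests.post("http://localhost:8501/v1/models/my_model:predict", json=payload)
```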
Also, when I look at the server logs, nothing shows up when I use t2t-query, while with curl an error appears. This is the whole log, and the last line is the one that appears after each curl execution.
```
2018-08-13 20:04:59.124197: I tensorflow_serving/model_servers/main.cc:157] Building single TensorFlow model file config: model_name: my_model model_base_path: /models/my_model
2018-08-13 20:04:59.124561: I tensorflow_serving/model_servers/server_core.cc:462] Adding/updating models.
2018-08-13 20:04:59.124602: I tensorflow_serving/model_servers/server_core.cc:517] (Re-)adding model: my_model
2018-08-13 20:04:59.225195: I tensorflow_serving/core/basic_manager.cc:739] Successfully reserved resources to load servable {name: my_model version: 1534129049}
2018-08-13 20:04:59.225268: I tensorflow_serving/core/loader_harness.cc:66] Approving load for servable version {name: my_model version: 1534129049}
2018-08-13 20:04:59.225310: I tensorflow_serving/core/loader_harness.cc:74] Loading servable version {name: my_model version: 1534129049}
2018-08-13 20:04:59.225367: I external/org_tensorflow/tensorflow/contrib/session_bundle/bundle_shim.cc:360] Attempting to load native SavedModelBundle in bundle-shim from: /models/my_model/1534129049
2018-08-13 20:04:59.225407: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /models/my_model/1534129049
2018-08-13 20:04:59.701749: I external/org_tensorflow/tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2018-08-13 20:04:59.766195: I external/org_tensorflow/tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2018-08-13 20:05:00.144263: I external/org_tensorflow/tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:897] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2018-08-13 20:05:00.145509: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1405] Found device 0 with properties:
name: GeForce GTX 1050 major: 6 minor: 1 memoryClockRate(GHz): 1.493
pciBusID: 0000:01:00.0
totalMemory: 3.95GiB freeMemory: 3.90GiB
2018-08-13 20:05:00.145555: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1484] Adding visible gpu devices: 0
2018-08-13 20:05:00.422697: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:965] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-08-13 20:05:00.422727: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0
2018-08-13 20:05:00.422751: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:984] 0: N
2018-08-13 20:05:00.423054: I external/org_tensorflow/tensorflow/core/common_runtime/gpu/gpu_device.cc:1097] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 3629 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1050, pci bus id: 0000:01:00.0, compute capability: 6.1)
2018-08-13 20:05:00.609687: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:113] Restoring SavedModel bundle.
2018-08-13 20:05:00.847616: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:148] Running LegacyInitOp on SavedModel bundle.
2018-08-13 20:05:00.939754: I external/org_tensorflow/tensorflow/cc/saved_model/loader.cc:233] SavedModel load for tags { serve }; Status: success. Took 1714341 microseconds.
2018-08-13 20:05:00.939811: I tensorflow_serving/servables/tensorflow/saved_model_warmup.cc:83] No warmup data file found at /models/my_model/1534129049/assets.extra/tf_serving_warmup_requests
2018-08-13 20:05:00.939935: I tensorflow_serving/core/loader_harness.cc:86] Successfully loaded servable version {name: my_model version: 1534129049}
2018-08-13 20:05:00.941520: I tensorflow_serving/model_servers/main.cc:327] Running ModelServer at 0.0.0.0:8500 ...
[warn] getaddrinfo: address family for nodename not supported
2018-08-13 20:05:00.942502: I tensorflow_serving/model_servers/main.cc:337] Exporting HTTP/REST API at:localhost:8501 ...
[evhttp_server.cc : 235] RAW: Entering the event loop ...
2018-08-13 20:05:20.740251: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:07.921231: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:10.157668: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:13.500175: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:13.776296: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:14.115015: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
2018-08-13 20:07:14.350318: W external/org_tensorflow/tensorflow/core/framework/op_kernel.cc:1275] OP_REQUIRES failed at example_parsing_ops.cc:240 : Invalid argument: Could not parse example input, value: 'Ahoj'
```
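In case it is useful: my reading of the repeated `Could not parse example input` warnings is that the exported model expects serialized `tf.Example` protos rather than raw strings. A rough sketch of a request built that way (I'm assuming the feature is named `inputs`, which may be wrong for this export) would be:
```python
import base64

import requests
import tensorflow as tf

# Build a serialized tf.Example around the raw query string.
example = tf.train.Example(features=tf.train.Features(feature={
    "inputs": tf.train.Feature(bytes_list=tf.train.BytesList(value=[b"Ahoj"])),
}))
payload = {
    "instances": [
        # Binary strings must be sent base64-encoded via the {"b64": ...} wrapper.
        {"b64": base64.b64encode(example.SerializeToString()).decode("utf-8")}
    ]
}
resp = requests.post("http://localhost:8501/v1/models/my_model:predict", json=payload)
print(resp.status_code, resp.text)
```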
### Environment information
```
OS: Arch linux
$ pip freeze | grep tensor
tensor2tensor==1.7.0
tensorboard==1.9.0
tensorflow==1.9.0
tensorflow-serving-api==1.10.0
$ python -V
Python 3.7.0
``` | closed | 2018-08-13T20:09:52Z | 2018-08-13T21:28:15Z | https://github.com/tensorflow/tensor2tensor/issues/990 | [] | kockahonza | 1 |
keras-team/keras | tensorflow | 20,698 | The default setting for aggregation='mean' in Variable and Optimizer is incorrect. | The default policy of using aggregation='mean' is incorrect and should be set to 'none'. In distributed contexts, backends handle gradient reductions and variable updates, making the mean aggregation unnecessary. Using aggregation='mean' disrupts optimizer moment estimates. For reference, in keras==2.15.0, the default policy was effectively equivalent to aggregation='none'. | closed | 2024-12-28T16:34:24Z | 2024-12-31T04:22:19Z | https://github.com/keras-team/keras/issues/20698 | [] | aschk45 | 0 |
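A small sketch of the setting discussed in the Keras report above — Keras 3's `keras.Variable` exposes the `aggregation` argument; the shapes and values here are only illustrative:
```python
import numpy as np
import keras

# `aggregation` is a hint for distribution backends; the report argues the
# default ("mean") should instead be "none" so backends keep full control
# of reductions and variable updates.
v_default = keras.Variable(np.zeros((4,), dtype="float32"))
v_none = keras.Variable(np.zeros((4,), dtype="float32"), aggregation="none")
```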
serengil/deepface | machine-learning | 926 | Can a base64 image be parsed by deepface | Hi,
Rather than specifying an img_path to a jpg image, is it possible to send a base64 encoded image to Deepface for analysis instead? Any help would be much appreciated. | closed | 2023-12-20T16:10:10Z | 2023-12-20T16:11:41Z | https://github.com/serengil/deepface/issues/926 | [
"question"
] | medichops1 | 1 |
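To make the question above concrete, this is roughly the call being asked about — whether `img_path` accepts a base64 data URI like this is exactly the open question, so treat it as a sketch rather than confirmed API behaviour:
```python
import base64

from deepface import DeepFace

# Path to any local test image; only used to produce a base64 string.
with open("face.jpg", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

# Hypothetical usage: pass a data URI instead of a file path.
result = DeepFace.analyze(img_path=f"data:image/jpeg;base64,{encoded}")
print(result)
```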
PaddlePaddle/models | nlp | 4,906 | Running two models that share the same pretrained backbone in different threads raises an error during prediction | I trained two image classification networks (a 6-class one and a 1000-class one), both using ResNet200_vd as the backbone, and I run the two networks in separate threads.
Each works fine when run on its own, but running the two threads at the same time raises an error:


The two models use exactly the same infer.py code; only the class_dim and pretraind_model arguments differ. The error is raised at this line of code:

It looks like the instructions of the two threads interfere with each other. How should I handle this? | open | 2020-10-14T09:50:58Z | 2024-02-26T05:10:00Z | https://github.com/PaddlePaddle/models/issues/4906 | [] | Derek-Kun | 5 |
microsoft/RD-Agent | automation | 418 | JSONDecodeError during Code Execution (before executing agent-written code) | ## 🐛 Bug Description
Facing the following error for every code execution stage when the `general_model` is used to implement a paper:
```
--------------Execution feedback:---------------
Execution error: Extra data: line 2 column 1 (char 77)
Traceback: Traceback (most recent call last):
File "/mnt/d/TechStuff/RD-Agent/rdagent/components/coder/model_coder/model.py", line 97, in execute
qtde.prepare()
File "/mnt/d/TechStuff/RD-Agent/rdagent/utils/env.py", line 374, in prepare
super().prepare()
File "/mnt/d/TechStuff/RD-Agent/rdagent/utils/env.py", line 211, in prepare
status_dict = json.loads(part)
File "/home/tezansahu/anaconda3/envs/rdagent/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/home/tezansahu/anaconda3/envs/rdagent/lib/python3.10/json/decoder.py", line 340, in decode
raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 77)
```
All other model responses, such as writing the description/formulation as well as the critics' feedback on the code, are returned correctly.
Command used to run: `rdagent general_model --report_file_path=rdagent/scenarios/general_model/1412.6550v4_FitNet.pdf`
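For context on the error itself: `Extra data` from `json.loads` usually means the string being parsed contains more than one JSON document. A generic sketch of decoding such a stream one document at a time (just to illustrate the failure mode — this is not RD-Agent's actual code):
```python
import json

def parse_json_stream(text: str) -> list:
    # "Extra data" means `text` holds several JSON documents back to back;
    # decode them one at a time instead of with a single json.loads call.
    decoder = json.JSONDecoder()
    docs, idx = [], 0
    while idx < len(text):
        doc, end = decoder.raw_decode(text, idx)
        docs.append(doc)
        idx = end
        while idx < len(text) and text[idx].isspace():
            idx += 1
    return docs

print(parse_json_stream('{"a": 1}\n{"b": 2}'))  # -> [{'a': 1}, {'b': 2}]
```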
## To Reproduce
Steps to reproduce the behavior:
1. Installed rdagent using instructions for devs: https://rdagent.readthedocs.io/en/stable/development.html
2. Created `.env` file with the following settings:
```
USE_AZURE=True
USE_AZURE_TOKEN_PROVIDER=False
MAX_RETRY=10
RETRY_WAIT_SECONDS=20
OPENAI_API_KEY=<removed>
CHAT_MODEL=gpt-4o
CHAT_MAX_TOKENS=3000
CHAT_TEMPERATURE=0.7
CHAT_AZURE_API_BASE=<removed>
CHAT_AZURE_API_VERSION=2024-06-01
EMBEDDING_MODEL=text-embedding-ada-002
EMBEDDING_AZURE_API_BASE=<removed>
EMBEDDING_AZURE_API_VERSION=2024-06-01
```
3. Downloaded the FitNet paper & placed it in `rdagent/scenarios/general_model/` folder
4. Ran the command `rdagent general_model --report_file_path=rdagent/scenarios/general_model/1412.6550v4_FitNet.pdf`
## Expected Behavior
Code execution to happen successfully & actual code errors to be found (not setup errors)
## Screenshot

## Environment
**Note**: Users can run `rdagent collect_info` to get system information and paste it directly here.
- Name of current operating system: Windows (using WSL 2 - Ubuntu 22.04LTS)
- Processor architecture: AMD64
- System, version, and hardware information: Microsoft Windows 11 Home Single Language (OS Version: 10.0.22631 N/A Build 22631) | Intel64 Family 6 Model 154 Stepping 3 GenuineIntel ~2700 Mhz processor
- Version number of the system: 10.0.22631 N/A Build 22631
- Python version: 3.10.15
- Container ID:
- Container Name:
- Container Status:
- Image ID used by the container:
- Image tag used by the container:
- Container port mapping:
- Container Label:
- Startup Commands: `rdagent general_model --report_file_path=rdagent/scenarios/general_model/1412.6550v4_FitNet.pdf`
- RD-Agent version: 0.2.1
- Package version:
| closed | 2024-10-09T17:08:00Z | 2024-11-25T09:14:14Z | https://github.com/microsoft/RD-Agent/issues/418 | [
"bug"
] | tezansahu | 1 |
seleniumbase/SeleniumBase | web-scraping | 3,003 | If specifying an invalid `by`, fail fast, rather than after trying to find an element | ### If specifying an invalid `by`, fail fast, rather than after trying to find an element
----
These are the valid `by` options: "css selector", "link text", "partial link text", "name", "xpath", "id", "tag name", and "class name".
By default, SeleniumBase autodetects between "css selector" and "xpath" if the `by` is not specified.
If people try to specify an invalid `by`, it should raise an exception right away. (Currently, it tries waiting for the element up to the `timeout`, and that would eventually fail.) | closed | 2024-08-06T16:02:31Z | 2024-08-07T03:43:14Z | https://github.com/seleniumbase/SeleniumBase/issues/3003 | [
"enhancement"
] | mdmintz | 1 |
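One possible shape of the fail-fast check proposed in the SeleniumBase issue above (the set of strings comes from the list in the issue; the function name is only for illustration):
```python
VALID_BY_VALUES = {
    "css selector", "link text", "partial link text", "name",
    "xpath", "id", "tag name", "class name",
}

def validate_by(by: str) -> None:
    # Raise immediately instead of waiting out the element timeout.
    if by not in VALID_BY_VALUES:
        raise ValueError(
            f"Invalid 'by' value: {by!r}. Valid options: {sorted(VALID_BY_VALUES)}"
        )

validate_by("css selector")  # OK
# validate_by("cssselector")  # would raise ValueError right away
```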
Kanaries/pygwalker | plotly | 18 | [Feat] Integrate VegaFusion to move transforms out of the browser | Hi 👋 ,
Congrats on the pygwalker release. I'm the maintainer of [VegaFusion](https://vegafusion.io/), which is an open source project that provides server-side scaling for Vega visualizations by automatically extracting Vega transforms and evaluating them on the server. This makes it possible to scale many Vega/Vega-Lite visualizations to millions of rows as long as they include some form of aggregation.
I haven't looked at the architecture of pygwalker, but it might be fairly straightforward to integrate VegaFusion and enable pygwalker to support larger data sets. Let me know if you're interested in talking through details! | open | 2023-02-21T16:17:11Z | 2023-03-01T08:25:21Z | https://github.com/Kanaries/pygwalker/issues/18 | [] | jonmmease | 0 |
littlecodersh/ItChat | api | 659 | process_login_info error | login.py:147 core.s.get(core.loginInfo['url'], headers=headers, allow_redirects=False)
The request is missing parameters such as `fun`, so the returned result is empty and the XML parsing fails with an error.

```
Traceback (most recent call last):
File "C:/Users/Administrator/PycharmProjects/Lucky/Bot/main.py", line 7, in <module>
run()
File "C:\Users\Administrator\PycharmProjects\Lucky\Bot\core\core.py", line 171, in run
itchat.auto_login(hotReload=True)
File "D:\Python\3.5\lib\site-packages\itchat\components\register.py", line 30, in auto_login
loginCallback=loginCallback, exitCallback=exitCallback)
File "D:\Python\3.5\lib\site-packages\itchat\components\login.py", line 48, in login
status = self.check_login()
File "D:\Python\3.5\lib\site-packages\itchat\components\login.py", line 131, in check_login
process_login_info(self, r.text)
File "D:\Python\3.5\lib\site-packages\itchat\components\login.py", line 164, in process_login_info
for node in xml.dom.minidom.parseString(r.text).documentElement.childNodes:
File "D:\Python\3.5\lib\xml\dom\minidom.py", line 1968, in parseString
return expatbuilder.parseString(string)
File "D:\Python\3.5\lib\xml\dom\expatbuilder.py", line 925, in parseString
return builder.parseString(string)
File "D:\Python\3.5\lib\xml\dom\expatbuilder.py", line 223, in parseString
parser.Parse(string, True)
xml.parsers.expat.ExpatError: no element found: line 1, column 0
```
| closed | 2018-05-14T02:50:07Z | 2018-05-14T03:11:44Z | https://github.com/littlecodersh/ItChat/issues/659 | [] | Qiu800820 | 1 |
deepinsight/insightface | pytorch | 2,694 | Are trained PyTorch models available? | Question 1: For these two model capabilities, do you have trained models available in PyTorch format?
Question 2: Can [InsightFace_Pytorch](https://github.com/TreB1eN/InsightFace_Pytorch), which is referenced on the official site, fully replace these two models? | open | 2024-11-22T08:32:03Z | 2024-11-22T08:32:03Z | https://github.com/deepinsight/insightface/issues/2694 | [] | gongjl | 0 |
psf/requests | python | 6,542 | Cookies with `Secure` are not sent for `localhost` via unencrypted `http` | See the title, `Secure` cookies are not sent in requests to `localhost` if those requests are not encrypted. There should probably be an exception for `localhost` (as there is in browsers, as well) to aid in developing servers.
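To make the proposed exception concrete, the rule I have in mind mirrors what browsers do; expressed as standalone pseudologic (this is not the actual `http.cookiejar` API):
```python
def secure_cookie_ok(cookie_secure: bool, request_scheme: str, request_host: str) -> bool:
    """Should a cookie marked `Secure` be sent on this request?"""
    if not cookie_secure:
        return True
    if request_scheme == "https":
        return True
    # Proposed exception: treat loopback hosts as trustworthy, like browsers do.
    return request_host in ("localhost", "127.0.0.1", "::1")

assert secure_cookie_ok(True, "http", "localhost")
assert not secure_cookie_ok(True, "http", "example.com")
```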
## Expected Result
The cookies should be sent with the request.
## Actual Result
They aren't.
## Reproduction Steps
```python
import requests
session = requests.Session()
session.get("http://localhost:8080/login")
session.get("http://localhost:8080/only-with-login").raise_for_status()
```
and
```python
from flask import Flask, request, make_response

app = Flask(__name__)


@app.route("/login")
def login():
    resp = make_response("Logged in")
    resp.set_cookie("token", "token", secure=True)
    return resp


@app.route("/only-with-login")
def only_with_login():
    if request.cookies.get("token") != "token":
        return "Not logged in", 403
    return "Logged in"
```
Running the flask app with `flask --app server run --port 8080` and then running the requests with `python3 do_requests.py` throws an exception in `do_requests`, while in a browser the page does show "Logged in" after visiting `http://localhost:8080/login`.
## System Information
$ python -m requests.help
```json
{
"chardet": {
"version": "5.2.0"
},
"charset_normalizer": {
"version": "3.2.0"
},
"cryptography": {
"version": "41.0.3"
},
"idna": {
"version": "3.4"
},
"implementation": {
"name": "CPython",
"version": "3.11.5"
},
"platform": {
"release": "6.5.4-arch2-1",
"system": "Linux"
},
"pyOpenSSL": {
"openssl_version": "30100020",
"version": "23.2.0"
},
"requests": {
"version": "2.28.2"
},
"system_ssl": {
"version": "30100030"
},
"urllib3": {
"version": "1.26.15"
},
"using_charset_normalizer": false,
"using_pyopenssl": true
}
```
| closed | 2023-09-30T19:50:23Z | 2024-09-09T15:54:34Z | https://github.com/psf/requests/issues/6542 | [] | 42triangles | 2 |
plotly/dash | plotly | 3,141 | component typing id as dict and datetime for Date pickers in Dash 3.0rc1 | Hi, I tried to update to dash 3.0rc1, and what I noticed is that some typehints are too strict. What I found:
- `id`s are typed as `typing.Optional[str]`, omitting dict ids
- dates in date pickers are typed as `typing.Optional[str]`, but we pass `datetime.date` objects without problems; I'm not sure whether `date` working is intended or just a coincidence (see the sketch below)
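A minimal sketch of the two usages referred to above (real `dash`/`dcc` components; the pattern-matching id shape is just an example):
```python
from datetime import date

from dash import Dash, dcc, html

app = Dash(__name__)
app.layout = html.Div(
    [
        # dict (pattern-matching) id, which the Optional[str] hint rejects
        dcc.Input(id={"type": "filter-input", "index": 0}, value=""),
        # datetime.date passed where the hint says Optional[str]
        dcc.DatePickerSingle(id="start-date", date=date(2025, 1, 1)),
    ]
)
```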
There are possibly others, this is what I found in our codebase. | open | 2025-01-29T15:46:02Z | 2025-03-18T13:41:54Z | https://github.com/plotly/dash/issues/3141 | [
"P1",
"dash-3.0"
] | tlauli | 5 |
aimhubio/aim | tensorflow | 2,550 | AimLogger closes its run after training ends, causing any subsequent logging after testing to hang in an infinite loop | https://github.com/aimhubio/aim/blob/0dbcf41834cb6fe928aa93b547d98f1ba58874a3/aim/sdk/adapters/pytorch_lightning.py#L140-L143
Typically in Lightning loggers, the `finalize()` callback does not explicitly close or "finish" any runs, as that is left to the user to explicitly do. However, in Aim's Lightning logger, its `finalize()` callback explicitly closes the run associated with the logger. This has the unintended side-effect where if you were to run `trainer.fit()` then `trainer.test()` then `logger.log("test")` (where `import logging; logger = logging.getLogger(__name__)`), the program will experience an infinite loop within the call to `log("test")`.
More explanation can be found [in this PR](https://github.com/ashleve/lightning-hydra-template/pull/534#issuecomment-1445184624) as well as showing how to reproduce it. | open | 2023-02-26T03:12:25Z | 2023-03-06T16:27:29Z | https://github.com/aimhubio/aim/issues/2550 | [
"type / bug",
"area / integrations"
] | tesfaldet | 13 |
microsoft/hummingbird | scikit-learn | 33 | Fix the doc style based on google code style | [link](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) | closed | 2020-04-26T06:15:41Z | 2020-05-01T00:35:25Z | https://github.com/microsoft/hummingbird/issues/33 | [] | interesaaat | 1 |
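For reference, the Google docstring style linked in the issue above looks roughly like this (an illustrative function, not actual Hummingbird code):
```python
def convert(model, backend="pytorch"):
    """Convert a trained model into the requested backend representation.

    Args:
        model: A fitted scikit-learn-style model to convert.
        backend (str): Name of the target backend. Defaults to "pytorch".

    Returns:
        The converted model object for the chosen backend.

    Raises:
        ValueError: If the backend is not supported.
    """
    if backend != "pytorch":
        raise ValueError(f"Unsupported backend: {backend}")
    return model
```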
flairNLP/flair | pytorch | 2,938 | All predictions are <unk> | I'm running the code from https://medium.com/thecyphy/training-custom-ner-model-using-flair-df1f9ea9c762 for NER on a custom dataset, and I find that no matter how I change the learning rate, every prediction is unk, and the f1-score is 0.0 on every epoch. I'm thinking there must be something wrong with the formatting of my dataset. Here is what my train set would look like, where I replace my actual labels with Text1 to keep my data anonymous.
```
Text1 B-Brand
Text1 O
Text1 B-MPN
Text1 B-Type
Text1 B-Model
Text1 B-Color
Text1 B-Fabric Type
Text1 B-No Tag
Text1 B-Brand
Text1 B-Color
Text1 B-Pattern
Text1 B-Fabric Type
Text1 B-Model
Text1 O
Text1 B-Type
Text1 B-No Tag
Text1 B-Type
Text1 O
Text1 B-No Tag
Text1 B-Type
```
And here is the result of loss.tsv trained starting with learning_rate=.001 (I've tried larger and smaller learning_rates already)
```
EPOCH TIMESTAMP BAD_EPOCHS LEARNING_RATE TRAIN_LOSS DEV_LOSS DEV_PRECISION DEV_RECALL DEV_F1 DEV_ACCURACY
1 21:55:24 0 0.0010 3.6659961510777896 3.160431146621704 0.0 0.0 0.0 0.0
2 21:55:30 0 0.0010 2.658900432190474 2.093571424484253 0.0 0.0 0.0 0.0
3 21:55:36 0 0.0010 1.5765421452425217 0.9758513569831848 0.0 0.0 0.0 0.0
4 21:55:42 0 0.0010 0.5964466308130153 0.21864087879657745 0.0 0.0 0.0 0.0
5 21:55:48 0 0.0010 0.12082597720927506 0.027130696922540665 0.0 0.0 0.0 0.0
6 21:55:55 0 0.0010 0.015038865753739897 0.0025882211048156023 0.0 0.0 0.0 0.0
7 21:56:02 0 0.0010 0.001861507955604636 0.000609234906733036 0.0 0.0 0.0 0.0
8 21:56:09 0 0.0010 0.0007104066469299261 0.0003203396627213806 0.0 0.0 0.0 0.0
9 21:56:16 0 0.0010 0.0004282736406687817 0.0002125622413586825 0.0 0.0 0.0 0.0
10 21:56:23 0 0.0010 0.0003175982157330431 0.00015547996736131608 0.0 0.0 0.0 0.0
11 21:56:30 0 0.0010 0.00023519093161660838 0.00012211497232783586 0.0 0.0 0.0 0.0
12 21:56:37 0 0.0010 0.00018551815456892758 0.00010058629413833842 0.0 0.0 0.0 0.0
13 21:56:42 0 0.0010 0.00016401175303360117 8.437278302153572e-05 0.0 0.0 0.0 0.0
14 21:56:48 0 0.0010 0.00013860434806521084 7.258114055730402e-05 0.0 0.0 0.0 0.0
15 21:56:54 0 0.0010 0.00012990906794919298 6.315676000667736e-05 0.0 0.0 0.0 0.0
16 21:57:00 0 0.0010 0.00010746981776682954 5.596564369625412e-05 0.0 0.0 0.0 0.0
17 21:57:07 0 0.0010 9.767208015885881e-05 5.0248483603354543e-05 0.0 0.0 0.0 0.0
18 21:57:13 0 0.0010 9.089903361855359e-05 4.502263982431032e-05 0.0 0.0 0.0 0.0
19 21:57:20 0 0.0010 8.164969794247736e-05 4.14940805057995e-05 0.0 0.0 0.0 0.0
20 21:57:27 0 0.0010 7.59508407533057e-05 3.7652862374670804e-05 0.0 0.0 0.0 0.0
```
Notably, the loss significantly decreases, but the f1 score remains the same. If it helps at all, here is also the code I use to run the trainer:
```python
# define columns
from flair.data import Corpus
from flair.datasets import ColumnCorpus
from flair.embeddings import TokenEmbeddings

columns = {0: 'text', 1: 'ner'}

# directory where the data resides
data_folder = './dataset2/'

# initializing the corpus
corpus = ColumnCorpus(
    data_folder,
    columns,
    train_file='train1.txt',
    test_file='test1.txt',
    dev_file='dev1.txt')

# tag to predict
tag_type = 'ner'

# make tag dictionary from the corpus
label_dictionary = corpus.make_label_dictionary(tag_type)

from flair.embeddings import WordEmbeddings, StackedEmbeddings
from typing import List

embedding_types: List[TokenEmbeddings] = [
    WordEmbeddings('glove'),
]

embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
print(embeddings)

from flair.models import SequenceTagger

tagger = SequenceTagger(hidden_size=256,
                        embeddings=embeddings,
                        tag_dictionary=label_dictionary,
                        tag_type=tag_type,
                        use_crf=True)

from flair.trainers import ModelTrainer

trainer = ModelTrainer(tagger, corpus)
print(trainer)

trainer.train('resources/taggers/example-ner',
              learning_rate=.001,
              mini_batch_size=32,
              max_epochs=20)
```
Please let me know if there's anything that might stand out as to why the model is just not learning. Thanks | closed | 2022-09-11T03:02:28Z | 2023-03-30T10:40:49Z | https://github.com/flairNLP/flair/issues/2938 | [
"question",
"wontfix"
] | ModeEric | 4 |
psf/black | python | 4,151 | INTERNAL ERROR: Black produced invalid code: f-string: expecting '}' | **Describe the bug**
Internal crash
**To Reproduce**
Run the code down below
For example, take this code:
```python
File "src/black/__init__.py", line 1476, in assert_equivalent
File "src/black/parsing.py", line 140, in parse_ast
email = "test@example.com"
query_params = UsersRequestBuilder.UsersRequestBuilderGetQueryParameters(
filter=f"mail eq '{email}' or startswith(userPrincipalName, '{email.split("
@ ")[0]}')",
count=True, # required
)
```
And run it with these arguments:
```sh
$ black .
```
The resulting error is:
> error: cannot format whatever.py: INTERNAL ERROR: Black produced invalid code: f-string: expecting '}' (<unknown>, line 3). Please report a bug on https://github.com/psf/black/issues. This invalid output might be helpful: /var/folders/lg/s80890fd1ts6s1w69c8lg03m0000gn/T/blk_jssqz16j.log
**Expected behavior**
It should format correctly
**Environment**
- Black's version: 23.12.1
- OS and Python version: macOS 14.2.1, Python (CPython) 3.12.1
| closed | 2024-01-12T09:09:07Z | 2024-01-12T15:52:47Z | https://github.com/psf/black/issues/4151 | [
"T: bug"
] | max-wittig | 7 |
autogluon/autogluon | computer-vision | 4,553 | [Tabular] Add Support for Loading Excel Files | We might want to add support for excel format here: https://github.com/autogluon/autogluon/blob/a2ad006bf12f9cde018d17c17eade192e6c69859/common/src/autogluon/common/loaders/load_pd.py#L20
For more details, we may discuss offline. @Innixma @AnirudhDagar
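A minimal sketch of the kind of dispatch this could add (names are placeholders, not the actual `load_pd` API; the real loader handles more cases, e.g. remote paths and compression):
```python
import pandas as pd

def load_table(path: str) -> pd.DataFrame:
    # Hypothetical extension: branch on file extension and fall back to CSV.
    if path.lower().endswith((".xlsx", ".xls")):
        return pd.read_excel(path)
    return pd.read_csv(path)
```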
| open | 2024-10-17T20:29:18Z | 2024-11-11T15:59:42Z | https://github.com/autogluon/autogluon/issues/4553 | [
"enhancement",
"module: tabular",
"module: common"
] | FANGAreNotGnu | 3 |
reloadware/reloadium | django | 15 | Issues with exceptions | **Describe the bug**
Reloadium does not handle correctly methods that raise exceptions.
**To Reproduce**
Steps to reproduce the behavior:
1. Create a module with the following content
```python
def bar():
    raise Exception('Some exception')


def foo():
    try:
        bar()
    except Exception as e:
        pass


foo()
pass
```
2. Place a breakpoint at the end of the module.
3. Debug the code using reloadium
**Expected behavior**
Application stops at the set breakpoint
**Actual behavior**
The message "**An exception occurred during reloading current frame. Fix your changes and save to reload**" appears. Reloadium waits for the user to fix the `bar()` method.
**Screenshots**

**Desktop (please complete the following information):**
- OS: Windows
- OS version: 10
- Reloadium package version: 0.8.7
- PyCharm plugin version: 0.8.1
- Editor: PyCharm
- Run mode: Debug
**Additional context**
No problems will appear if you catch the exception in the method where the exception occurs. The following snippet will work:
```python
def bar():
    try:
        raise Exception('Some exception')
    except Exception:
        pass
```
Previous versions of Reloadium handled such situations without any problems. | closed | 2022-05-23T04:40:48Z | 2022-05-25T16:55:17Z | https://github.com/reloadware/reloadium/issues/15 | [] | BusHero | 1 |
biolab/orange3 | numpy | 6,712 | Radar Chart | Extending the parallel coordinates visualization to include radar charts could be useful in some instances. I have a data set where the relationship between variables is roughly a ring (like a ring species), and a radar chart would be a more natural fit for this than parallel coordinates (although repeating the first variable as the last would partially resolve this).
In other cases, where the angle of the axes could be adjusted, radar charts may allow one to convey more information than parallel coordinates. | open | 2024-01-23T18:36:07Z | 2024-01-26T08:12:10Z | https://github.com/biolab/orange3/issues/6712 | [] | belg4mit | 0 |
ray-project/ray | machine-learning | 50,883 | [Serve] Ray Serve APIs for users to define when the Ray Serve applications are ready to serve requests | ### Description
It'd be useful for the Ray Serve API to allow users to configure settings such as custom timeouts for when applications are ready to serve requests.
### Use case
This would be useful for scenarios such as: https://github.com/ray-project/enhancements/pull/58#discussion_r1968439611, where a large number of non-declaratively created applications which frequently update may make it difficult for the controller to find a state where all Serve apps are in a "Ready" state. | open | 2025-02-25T03:39:23Z | 2025-02-25T17:29:47Z | https://github.com/ray-project/ray/issues/50883 | [
"enhancement",
"triage",
"serve"
] | ryanaoleary | 0 |
modelscope/modelscope | nlp | 739 | IndexError: index 2 is out of bounds for axis 0 with size 2 | In Windows10, when I trained the DCT-Net model, I got the following error:
```
2024-01-24 17:06:39.367662: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_100.dll
2024-01-24 17:06:52,521 - modelscope - INFO - Iter: 0, d_loss: 0.5868567228317261, g_loss: 0.24602027237415314
0%| | 0/300000 [00:17<?, ?it/s]
Traceback (most recent call last):
  File "G:/Project/Face/DCT-Net/train_localtoon.py", line 37, in <module>
    main(args)
  File "G:/Project/Face/DCT-Net/train_localtoon.py", line 25, in main
    trainer.train()
  File "D:\InstallPath\Develop\Anaconda3\2020.07\envs\dctnet\lib\site-packages\modelscope\trainers\cv\cartoon_translation_trainer.py", line 218, in train
    str('%8d' % max_steps) + '_face_result.jpg', 4)
  File "D:\InstallPath\Develop\Anaconda3\2020.07\envs\dctnet\lib\site-packages\modelscope\models\cv\cartoon\utils.py", line 142, in write_batch_image
    image[k] = (image[k] + 1) * 127.5
IndexError: index 2 is out of bounds for axis 0 with size 2
``` | closed | 2024-01-25T10:04:54Z | 2024-05-29T01:51:12Z | https://github.com/modelscope/modelscope/issues/739 | [
"Stale"
] | WestbrookZero | 3 |
huggingface/peft | pytorch | 1,650 | Backward compatibility on saved config. | ### Feature request
Models trained with a newer peft version should be loadable with older versions when possible.
### Motivation
We found that a model trained with peft 0.10 cannot be loaded with older peft versions because of the unknown entry `layer_replication` in the adapter config JSON.
This entry is never used by us, so it presumably just carries the default value.
Default values like this should not be exported; a sketch of what I mean is below.
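A sketch of the kind of filtering meant here — PEFT configs are dataclasses, so default-valued fields could be skipped at save time (illustrative only, not the actual PEFT serialization code):
```python
import dataclasses

def config_to_dict_without_defaults(config) -> dict:
    # Keep only fields whose value differs from the declared default, so
    # configs written by newer versions stay readable by older ones.
    out = {}
    for f in dataclasses.fields(config):
        value = getattr(config, f.name)
        if f.default is not dataclasses.MISSING and value == f.default:
            continue
        out[f.name] = value
    return out
```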
### Your contribution
N/A | closed | 2024-04-13T23:20:11Z | 2024-06-17T15:03:38Z | https://github.com/huggingface/peft/issues/1650 | [] | xkszltl | 5 |
iperov/DeepFaceLab | deep-learning | 5,688 | 3) cut video (drop video on me) | usage: main.py videoed cut-video [-h] --input-file INPUT_FILE
[--from-time FROM_TIME] [--to-time TO_TIME]
[--audio-track-id AUDIO_TRACK_ID]
[--bitrate BITRATE]
main.py videoed cut-video: error: argument --input-file: expected one argument
Press any key to continue . . . | open | 2023-06-18T20:34:10Z | 2023-06-18T20:34:10Z | https://github.com/iperov/DeepFaceLab/issues/5688 | [] | nirajansolta | 0 |