repo_name (string, lengths 9–75) | topic (string, 30 classes) | issue_number (int64, 1–203k) | title (string, lengths 1–976) | body (string, lengths 0–254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, lengths 38–105) | labels (sequence, lengths 0–9) | user_login (string, lengths 1–39) | comments_count (int64, 0–452) |
---|---|---|---|---|---|---|---|---|---|---|---|
python-restx/flask-restx | api | 549 | SwaggerUIBundle is not defined | I am using `flask-restx==1.1.0`
My Python is `3.8.10`
Sometimes I see this issue in my Swagger dashboard:
`GET https://{host}/api/swaggerui/swagger-ui-standalone-preset.js net::ERR_ABORTED 404 (NOT FOUND)
{host}/:71 GET https://{host}/api/swaggerui/swagger-ui-bundle.js net::ERR_ABORTED 404 (NOT FOUND)
{host}/:7 GET https://{host}/api/swaggerui/swagger-ui.css net::ERR_ABORTED 404 (NOT FOUND)
(index):75 Uncaught ReferenceError: SwaggerUIBundle is not defined
at window.onload ((index):75:40)`
My dashboard is not loading because of this issue.
Can someone please help me with this? I have not been able to find much about this issue on the internet. | open | 2023-06-29T10:29:34Z | 2023-07-07T03:26:03Z | https://github.com/python-restx/flask-restx/issues/549 | [
"bug"
] | viveksahu56722 | 5 |
horovod/horovod | pytorch | 3,240 | One process working on two GPUs? | **Environment:**
1. Framework: PyTorch
2. Framework version: I do not know
3. Horovod version: 0.23.0
4. MPI version: 4.0.0
5. CUDA version: 11.2
6. NCCL version: 2.8.4 + CUDA 11.1
7. Python version: 3.8
8. Spark / PySpark version: no
9. Ray version: no
10. OS and version: Ubuntu 18.04
11. GCC version: I do not know
12. CMake version: I do not know
**Checklist:**
1. Did you search issues to find if somebody asked this question before? Yes but no answer
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)? yes, but no answer
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)? no
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
I use the following Dockerfile to make a docker image:
https://drive.google.com/file/d/1aZAGyqCyBbB7hgR1uHn-KPX98ymLjBMx/view?usp=sharing
And then I run the horovod example: pytorch_mnist.py
But I got the following picture:
<img width="483" alt="b25845e196c2411c2a4b7350da28749" src="https://user-images.githubusercontent.com/30434881/138592927-2f8b2abe-5fe4-4bfd-9746-59c553c3a5f5.png">
It seems that the same PIDs (such as 801, 802, and 803) are running on two GPUs.
But the training process still completes.
What should I do?
Thank you in advance. | open | 2021-10-24T11:56:55Z | 2021-10-24T12:01:13Z | https://github.com/horovod/horovod/issues/3240 | [
"bug"
] | xml94 | 0 |
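A hedged note on the report above: the standard Horovod + PyTorch setup pins each worker to the GPU matching its local rank; without that pinning, a rank's process can also open a CUDA context on other devices, which looks exactly like one PID appearing on two GPUs. A minimal sketch of the documented pattern (offered as a thing to check, not a confirmed diagnosis of this report):

```python
# Minimal Horovod + PyTorch device pinning, as in the official examples.
import horovod.torch as hvd
import torch

hvd.init()
# Pin this process to a single GPU based on its local rank, so each
# worker only ever creates a CUDA context on its own device.
torch.cuda.set_device(hvd.local_rank())
```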
Yorko/mlcourse.ai | seaborn | 697 | fix Plotly visualizations in JupyterBook | Topics 2 and 9, part 2. [Plotly & JupyterBook](https://jupyterbook.org/interactive/interactive.html#plotly), `iplot` is not working | closed | 2021-12-28T02:16:08Z | 2022-08-27T20:25:07Z | https://github.com/Yorko/mlcourse.ai/issues/697 | [
"jupyter-book"
] | Yorko | 0 |
supabase/supabase-py | flask | 576 | Change the functions method inside of supabase-py to a property | Currently the supabase-py library exposes `functions` as a method, but it should follow all the other services in the library and use a property instead.
To currently invoke a function your code looks like:
```python
supabase.functions().invoke()
```
This change will make this code look like:
```python
supabase.functions.invoke()
``` | closed | 2023-10-02T16:20:08Z | 2023-10-04T10:33:06Z | https://github.com/supabase/supabase-py/issues/576 | [] | silentworks | 2 |
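A hypothetical sketch of the change requested above; the class and attribute names below are simplified stand-ins, not the actual supabase-py source:

```python
# Illustrative sketch only: FunctionsClient and SupabaseClient here are
# simplified stand-ins for the real supabase-py classes.
class FunctionsClient:
    def __init__(self, url: str, headers: dict):
        self.url = url
        self.headers = headers

    def invoke(self, name: str) -> None:
        # Placeholder: the real client would POST to f"{self.url}/{name}".
        print(f"invoking {name} at {self.url}")


class SupabaseClient:
    def __init__(self, url: str, key: str):
        self._url = url
        self._key = key

    @property
    def functions(self) -> FunctionsClient:
        # Exposing the functions client as a property enables
        # `client.functions.invoke(...)`, matching the other services.
        return FunctionsClient(f"{self._url}/functions/v1", {"apikey": self._key})


# Usage: SupabaseClient("https://example.supabase.co", "key").functions.invoke("hello")
```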
streamlit/streamlit | data-visualization | 10,055 | Preserve exact spacing in `st.text` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
st.text behavior seems to have changed between 1.37 and the latest version (1.41): it no longer respects pre-formatted spacing.
### Reproducible Code Example
```Python
import streamlit as st
def display_preformatted_text():
    preformatted_text = """
Interface: GigabitEthernet1/0/1
MAC Address: 00:1A:2B:3C:4D:5E
IPv4 Address: 192.168.1.10
IPv6 Address: fe80::1a2b:3c4d:5e6f
User-Name: user1
User-Role: admin
Status: active
Domain: example.com
Current Policy: default
Vlan: 10
Device-name: device1
"""
    st.text(preformatted_text)
    st.write(preformatted_text)
    st.code(preformatted_text, language=None)
display_preformatted_text()
```
### Steps To Reproduce
Should be demonstrated in example code (self-evident)
### Expected Behavior
Adding an option to preserve spacing like previous behavior
### Current Behavior
Screenshot of before (1.37) -> (1.41):

While I am aware I can get similar results with st.code, it adds a grey background that I can't seem to remove. An option to change the grey background on st.code to the standard black might be sufficient too.

Ideally, I would like the option to preserve spacing (old behavior) or not preserve spacing (current behavior).
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.41
- Python version: 3.12
- Operating System: win
- Browser: edge
### Additional Information
_No response_ | closed | 2024-12-20T03:10:56Z | 2025-02-12T16:51:47Z | https://github.com/streamlit/streamlit/issues/10055 | [
"type:enhancement",
"feature:st.text"
] | netnem | 6 |
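A hedged workaround sketch for the issue above: render the string in an HTML `<pre>` block via `st.markdown`, which preserves whitespace without `st.code`'s grey chrome. This assumes `unsafe_allow_html` is acceptable in your app:

```python
import html
import streamlit as st

def text_preserve_spacing(s: str) -> None:
    # <pre> keeps the exact whitespace; html.escape avoids accidental markup.
    st.markdown(f"<pre>{html.escape(s)}</pre>", unsafe_allow_html=True)

text_preserve_spacing("Interface:     GigabitEthernet1/0/1")
```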
ageitgey/face_recognition | python | 641 | Has anybody tried it on Windows? | * face_recognition version:
* Python version:
* Operating System:
### Description
Describe what you were trying to get done.
Tell us what happened, what went wrong, and what you expected to happen.
IMPORTANT: If your issue is related to a specific picture, include it so others can reproduce the issue.
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
| closed | 2018-10-07T12:15:14Z | 2019-07-18T14:31:31Z | https://github.com/ageitgey/face_recognition/issues/641 | [] | safaad | 4 |
Avaiga/taipy | data-visualization | 1,504 | [BUG] f-string syntax not working in lambda expression | ### What went wrong?
Using `{}` in a lambda expression will result in the visual element not showing up.
### Expected Behavior
We should find a way to make it work. If it is impossible, we should have some kind of warning and documentation.
### Steps to Reproduce Issue
```python
from taipy.gui import Gui
import taipy.gui.builder as tgb
value = 10
with tgb.Page() as page:
    # Not working
    tgb.text(value=lambda value: f"1: Value {value}")
    # Not working
    tgb.text(value=lambda value: "2: Value {value}")
    # Works
    tgb.text(value=lambda value: "3: Value " + str(value))
    # Works
    tgb.text("4: Value {value}")
Gui(page=page).run(title="Frontend Demo")
```

### Browsers
Chrome
### OS
Windows
### Version of Taipy
Develop - 7/11/24
### Additional Context
_No response_
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-07-11T08:01:27Z | 2024-07-11T11:49:21Z | https://github.com/Avaiga/taipy/issues/1504 | [
"๐ฐ GUI",
"๐ฅMalfunction",
"๐ง Priority: High"
] | FlorianJacta | 3 |
piccolo-orm/piccolo | fastapi | 690 | uniform behaviour on joining null values | Let's say the artist is unknown (= null), and we have a query like this:
```python
song = await Song.select(Song.artist.all_columns()).first()
```
When using SQLiteEngine, `song['artist']['id']` is `None`, but with PostgresEngine it seems that `song['artist']` is `None`. Can we make it so that SQLiteEngine behaves like PostgresEngine in this regard? | closed | 2022-11-28T23:14:17Z | 2022-11-29T15:07:07Z | https://github.com/piccolo-orm/piccolo/issues/690 | [] | powellnorma | 3 |
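Until the engines behave uniformly, a minimal client-side normalization sketch (assuming nested output as described in the issue; this is a workaround, not the engine fix):

```python
# Collapse a SQLite-style nested row whose joined columns are all None
# into the Postgres-style `None` for the whole related object.
def normalize_joined_row(row: dict, fk_field: str) -> dict:
    related = row.get(fk_field)
    if isinstance(related, dict) and all(v is None for v in related.values()):
        row[fk_field] = None
    return row

# Usage sketch:
# song = await Song.select(Song.artist.all_columns()).first()
# song = normalize_joined_row(song, "artist")
```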
koaning/scikit-lego | scikit-learn | 37 | missing documentation: Estimator Transformer | The `EstimatorTransformer` is complicated enough to add an .rst document for. Might be nice to check if we can automatically test this as well. | closed | 2019-03-20T06:02:49Z | 2019-06-20T20:59:10Z | https://github.com/koaning/scikit-lego/issues/37 | [
"good first issue"
] | koaning | 0 |
healthchecks/healthchecks | django | 231 | Feature Request: Turn off/separate "Up" emails on an integration | We are planning to configure emails to be sent to our ticketing system but we only want to create tickets for when checks are down. At the moment, a ticket would also be created when the check comes back up.
It would be great if the "up" email could be turned off or sent to a different email address (so that the "up" email could go to the team rather than the ticketing system). | closed | 2019-03-18T15:27:44Z | 2019-04-10T14:54:32Z | https://github.com/healthchecks/healthchecks/issues/231 | [] | dalee-bis | 1 |
scrapy/scrapy | web-scraping | 5,855 | test_batch_path_differ sometimes fails | See https://github.com/scrapy/scrapy/pull/5847#issuecomment-1471778039. | closed | 2023-03-23T12:55:24Z | 2023-04-19T06:33:34Z | https://github.com/scrapy/scrapy/issues/5855 | [
"good first issue",
"CI"
] | Gallaecio | 2 |
openapi-generators/openapi-python-client | fastapi | 928 | Nullable array models generate failing code | **Describe the bug**
When an array is marked as nullable (in OpenAPI 3.0 or 3.1) the generated code fails type checking with the message:
```
error: Incompatible types in assignment (expression has type "tuple[None, bytes, str]", variable has type "list[float] | Unset | None") [assignment]
```
From the end-to-end test suite, making `some_array` nullable (part of `Body_upload_file_tests_upload_post`) results in this change:
```diff
@@ -165,10 +172,17 @@ class BodyUploadFileTestsUploadPost:
else (None, str(self.some_number).encode(), "text/plain")
)
- some_array: Union[Unset, Tuple[None, bytes, str]] = UNSET
- if not isinstance(self.some_array, Unset):
- _temp_some_array = self.some_array
- some_array = (None, json.dumps(_temp_some_array).encode(), "application/json")
+ some_array: Union[List[float], None, Unset]
+ if isinstance(self.some_array, Unset):
+ some_array = UNSET
+ elif isinstance(self.some_array, list):
+ some_array = UNSET
+ if not isinstance(self.some_array, Unset):
+ _temp_some_array = self.some_array
+ some_array = (None, json.dumps(_temp_some_array).encode(), "application/json")
+
+ else:
+ some_array = self.some_array
some_optional_object: Union[Unset, Tuple[None, bytes, str]] = UNSET
```
**OpenAPI Spec File**
The following patch applied the end-to-end test suite reproduces the problem:
```diff
diff --git a/end_to_end_tests/baseline_openapi_3.0.json b/end_to_end_tests/baseline_openapi_3.0.json
index d21d1d5..25adeaa 100644
--- a/end_to_end_tests/baseline_openapi_3.0.json
+++ b/end_to_end_tests/baseline_openapi_3.0.json
@@ -1778,6 +1778,7 @@
},
"some_array": {
"title": "Some Array",
+ "nullable": true,
"type": "array",
"items": {
"type": "number"
diff --git a/end_to_end_tests/baseline_openapi_3.1.yaml b/end_to_end_tests/baseline_openapi_3.1.yaml
index 03270af..4e33e68 100644
--- a/end_to_end_tests/baseline_openapi_3.1.yaml
+++ b/end_to_end_tests/baseline_openapi_3.1.yaml
@@ -1794,7 +1794,7 @@ info:
},
"some_array": {
"title": "Some Array",
- "type": "array",
+ "type": [ "array", "null" ],
"items": {
"type": "number"
}
```
**Desktop (please complete the following information):**
- openapi-python-client version 0.17.0
| closed | 2024-01-03T15:15:57Z | 2024-01-04T00:29:42Z | https://github.com/openapi-generators/openapi-python-client/issues/928 | [] | kgutwin | 1 |
falconry/falcon | api | 1,907 | Make JSONHandler customization docs clearer | As pointed out by @Stargateur in https://github.com/falconry/falcon/issues/1906#issuecomment-817374057, our [`JSONHandler`](https://falcon.readthedocs.io/en/stable/api/media.html#falcon.media.JSONHandler) customization docs could be made clearer by separately illustrating different (albeit closely related) concepts:
* Use a custom JSON library (such as the exemplified `rapidjson`). Customize parameters.
* Use the stdlib's `json` module, just provide custom serialization or deserialization parameters. Also link to the ["Prettifying JSON Responses" recipe](https://falcon.readthedocs.io/en/stable/user/recipes/pretty-json.html), which illustrates customization of `dumps` parameters.
* Add a sentence or two about replacing the default JSON handlers, not just toss in a code snippet as it is at the time of writing this. Also link to [Replacing the Default Handlers](https://falcon.readthedocs.io/en/stable/api/media.html#custom-media-handlers) from that explanation. | closed | 2021-04-11T21:28:07Z | 2021-06-26T13:52:57Z | https://github.com/falconry/falcon/issues/1907 | [
"documentation",
"good first issue"
] | vytas7 | 2 |
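For reference when writing those docs, a minimal example of customizing `dumps` parameters on the stdlib-backed `JSONHandler` and replacing the default handlers (assuming Falcon 3.x, where the application class is `falcon.App`; on 2.x it is `falcon.API`):

```python
import json
from functools import partial

import falcon
from falcon import media

# Keep the stdlib json module, just change serialization parameters.
json_handler = media.JSONHandler(
    dumps=partial(json.dumps, indent=4, sort_keys=True),
    loads=json.loads,
)
extra_handlers = {"application/json": json_handler}

app = falcon.App()
app.req_options.media_handlers.update(extra_handlers)
app.resp_options.media_handlers.update(extra_handlers)
```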
ITCoders/Human-detection-and-Tracking | numpy | 30 | The node is neither a map nor an empty collection in function 'cvGetFileNodeByName' | Hello everybody,
When I run main.py, I get the following error :
```
Traceback (most recent call last):
File "/home/mounir/PycharmProjects/Human-detection-and-Tracking-master/main.py", line 137, in <module>
recognizer.read("model.yaml")
cv2.error: OpenCV(4.0.0-pre) /home/mounir/opencv/modules/core/src/persistence_c.cpp:757: error: (-2:Unspecified error) The node is neither a map nor an empty collection in function 'cvGetFileNodeByName'
```
I have the latest OpenCV version installed
## Details
* **Exact error or Issue details**
* **OpenCV Version**: 4.0.0-pre (I know you told us to opt for 3.1.1)
* **Python Version** : 3.6
* **Operating System** : Ubuntu 18.04
* **Changes done, if any, in the original code**: Yes, some changes have been made:
`recognizer = cv2.face.LBPHFaceRecognizer_create()`
instead of
`recognizer = cv2.face.createLBPHFaceRecognizer()`
and
`recognizer.read("model.yaml")`
instead of
`recognizer.load("model.yaml")`
Thanks for your help ! | closed | 2018-06-12T14:18:19Z | 2018-06-13T05:17:47Z | https://github.com/ITCoders/Human-detection-and-Tracking/issues/30 | [] | MounirB | 1 |
tflearn/tflearn | data-science | 220 | One-hot output string labels | Hi,
I'm just wondering I have the following output in my network:
```
network = fully_connected(network, len(mod), activation='softmax',name="out")
```
So there are 11 output neurons (len(mod) == 11). I'm wondering if it is possible
to associate strings with those neurons, so that the strings are saved when I freeze
the graph.
I'm struggling to find if this is possible.
For instance I've tried the following:
```
s = tf.Variable("test string",name="outputs")
tf.add_to_collection(tf.GraphKeys.VARIABLES, s)
ops = tf.initialize_all_variables()
sess.run([ops,s])
```
But it doesn't appear to be saved in the graph when I use my save-graph routine after the sess.run.
Any pointers would be great
cheers
Chris
| closed | 2016-07-22T13:46:24Z | 2016-07-24T15:07:52Z | https://github.com/tflearn/tflearn/issues/220 | [] | chrisruk | 2 |
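A hedged sketch of one way to do this: unlike a `tf.Variable`, a `tf.constant` is embedded directly in the GraphDef, so it should survive freezing without checkpoint conversion. The tensor name is illustrative, and this uses the TF 0.x/1.x-style API matching the question:

```python
import tensorflow as tf

# A constant (not a Variable) lives in the GraphDef itself, so it is kept
# when the graph is frozen and exported.
labels = tf.constant(["mod_0", "mod_1", "mod_2"], name="output_labels")

# After re-importing the frozen graph, recover the strings by name:
# graph.get_tensor_by_name("output_labels:0") and evaluate it in a session.
```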
babysor/MockingBird | deep-learning | 42 | Using the latest pretrained model from Baidu Netdisk, the spectrogram is abnormal and there are only two kinds of sounds | 
| closed | 2021-08-23T10:24:09Z | 2021-08-23T10:50:19Z | https://github.com/babysor/MockingBird/issues/42 | [] | gebumc | 2 |
zihangdai/xlnet | nlp | 221 | Experiment attention on attention on XLnet | *In this paper, we propose the Attention on Attention (AoA) module, an extension to conventional attention mechanisms, to address the irrelevant attention issue. Furthermore, we propose AoANet for image captioning by applying AoA to both the encoder and decoder. Extensive experiments conducted on the MS COCO dataset demonstrate the superiority and general applicability of our proposed AoA module and AoANet. More remarkably, we achieve a new state-of-the-art performance for image captioning.*
From https://paperswithcode.com/paper/attention-on-attention-for-image-captioning
This seems like a generalist innovation to try! | open | 2019-08-21T19:35:14Z | 2019-08-25T12:56:18Z | https://github.com/zihangdai/xlnet/issues/221 | [] | LifeIsStrange | 2 |
google-research/bert | nlp | 744 | Two fields for sentence classification | Hi,
I need to do classification using several fields (at least two). What is the best way to represent such data after the tokenizer?
Variants:
1) `['CLS']<field1 tokens> [SEP] <field2 tokens>`? What segment_ids should be used in that case?
2) `['CLS']<field1 tokens> [my_unique_sequence] <field2 tokens>`? At the moment it seems to work better than classification using only field1 or field2, but I am still not sure I did it in the best way. What is the best way to select such a `[my_unique_sequence]`? Should it be only letters, or can it have punctuation marks? I'm using segment_ids with 0 only in this case. Is that right?
Thanks in advance
| open | 2019-07-03T13:49:22Z | 2019-07-03T13:49:22Z | https://github.com/google-research/bert/issues/744 | [] | AlexanderKUA | 0 |
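A minimal sketch of variant 1 above, following the standard BERT sentence-pair convention (segment id 0 covers `[CLS]` + field1 + the first `[SEP]`; segment id 1 covers field2 + the final `[SEP]`); the tokens are illustrative:

```python
field1_tokens = ["annual", "report"]            # illustrative
field2_tokens = ["revenue", "grew", "10", "%"]  # illustrative

tokens = ["[CLS]"] + field1_tokens + ["[SEP]"] + field2_tokens + ["[SEP]"]
segment_ids = [0] * (len(field1_tokens) + 2) + [1] * (len(field2_tokens) + 1)

assert len(tokens) == len(segment_ids)
```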
pytest-dev/pytest-cov | pytest | 180 | Omitting folders for coverage from setup.cfg | I'm running PyTest and Coverage on my project, and I'm trying to implement this plugin. The issue I'm having is that I haven't been able to make it run while omitting some folders, and I haven't been able to make it run as automated as I want.
Perhaps what I need is help more than to report an issue, but up to now I haven't found a way to make it pick up the `omit=` line; perhaps it is related to additional configuration.
My libraries:
```
Python 3.5.2
coverage==4.0.3
pytest==3.2.3
pytest-cov==2.5.1
pytest-django==2.9.1
pytest-flake8==0.9.1
pytest-sugar==0.7.1
django-coverage-plugin==1.3
```
My `setup.cfg` file (Partially, the last part is for PyLint and is really long):
```
[tool:pytest]
DJANGO_SETTINGS_MODULE=config.settings.local
python_files=tests.py test_*.py *_tests.py
addopts=--cov=accountant --cov-config setup.cfg
[coverage:run]
source=accountant/*
omit=*/migrations/*,*/tests/*
plugins=django_coverage_plugin
[flake8]
max-line-length=80
exclude=.tox,.git,*/migrations/*,*/static/CACHE/*,docs,node_modules,build,dist,*.egg-info
statistics=True
``` | closed | 2017-11-03T14:04:18Z | 2018-10-30T01:16:50Z | https://github.com/pytest-dev/pytest-cov/issues/180 | [] | sebastian-code | 7 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,160 | GIT from Anaconda | Hello. I am using Windows 11 and the newest build of Anaconda. I tried to install this from there, but it won't work; it is not available in the repos there. Has it been taken down? Is there any other way to get this working? | open | 2023-02-05T13:40:12Z | 2023-02-05T13:40:12Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1160 | [] | Supermatt01 | 0 |
yezyilomo/django-restql | graphql | 85 | Toggle automatic application on EagerLoadingMixin | We have a few cases where we might not want to apply the `EagerLoadingMixin` `get_queryset` by default. For example, we might have our own prefetching/joining we want to do when someone doesn't send in a `query`. Currently, the mixin will return a query result even if a user did not send one in and will then apply the prefetching and joining specified on the view.
I'm imagining this as a boolean field on the mixin, such as `auto_apply_eager_loading` or some equivalent, and that field would be checked in the overridden `get_queryset` before attempting to apply it. | closed | 2019-11-25T17:53:14Z | 2019-12-02T19:18:18Z | https://github.com/yezyilomo/django-restql/issues/85 | [] | ashleyredzko | 4 |
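A hypothetical sketch of the proposed flag; names mirror the issue text but are illustrative, not the actual django-restql source:

```python
class EagerLoadingMixin:
    # Proposed toggle: when False, the mixin leaves the queryset untouched
    # so the view's own prefetching/joining applies instead.
    auto_apply_eager_loading = True

    def apply_eager_loading(self, queryset):
        # Stand-in for the real mixin logic that applies the configured
        # select_related/prefetch_related based on the parsed query.
        return queryset

    def get_queryset(self):
        queryset = super().get_queryset()
        if self.auto_apply_eager_loading:
            queryset = self.apply_eager_loading(queryset)
        return queryset
```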
jupyterlab/jupyter-ai | jupyter | 1,208 | Jupyter_ai for Azure OpenAI throws 'InternalServerError' for all chat responses |
## Description
Jupyter AI throws an `InternalServerError` for chat responses when using the Azure OpenAI provider.
It works for the `/generate` command, but chat responds with the error below for all questions.
This is with the latest version of jupyter_ai and its dependencies.
Any help or insights on this issue would be greatly appreciated.
`Traceback (most recent call last):
File "/opt/conda/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 226, in on_message
await self.process_message(message)
File "/opt/conda/lib/python3.11/site-packages/jupyter_ai/chat_handlers/default.py", line 72, in process_message
await self.stream_reply(inputs, message)
File "/opt/conda/lib/python3.11/site-packages/jupyter_ai/chat_handlers/base.py", line 564, in stream_reply
async for chunk in chunk_generator:
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5535, in astream
async for item in self.bound.astream(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5535, in astream
async for item in self.bound.astream(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3430, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3413, in atransform
async for chunk in self._atransform_stream_with_config(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2301, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3383, in _atransform
async for output in final_pipeline:
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5571, in atransform
async for item in self.bound.atransform(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4941, in atransform
async for output in self._atransform_stream_with_config(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2301, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4922, in _atransform
async for chunk in output.astream(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5535, in astream
async for item in self.bound.astream(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3430, in astream
async for chunk in self.atransform(input_aiter(), config, **kwargs):
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3413, in atransform
async for chunk in self._atransform_stream_with_config(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2301, in _atransform_stream_with_config
chunk: Output = await asyncio.create_task( # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3383, in _atransform
async for output in final_pipeline:
File "/opt/conda/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 84, in atransform
async for chunk in self._atransform_stream_with_config(
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2259, in _atransform_stream_with_config
final_input: Optional[Input] = await py_anext(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 76, in anext_impl
return await __anext__(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/utils/aiter.py", line 125, in tee_peer
item = await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1471, in atransform
async for output in self.astream(final, config, **kwargs):
File "/opt/conda/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 494, in astream
raise e
File "/opt/conda/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 472, in astream
async for chunk in self._astream(
File "/opt/conda/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 881, in _astream
response = await self.async_client.create(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 1720, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1849, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1543, in request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1629, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1676, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1629, in _request
return await self._retry_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1676, in _retry_request
return await self._request(
^^^^^^^^^^^^^^^^^^^^
File "/opt/conda/lib/python3.11/site-packages/openai/_base_client.py", line 1644, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Internal Server Error`
## Reproduce
Start with the base Docker image for JupyterLab 4.1.8.
Install the jupyter_ai package in the Dockerfile.
```
# Use the JupyterLab 4.1.8 minimal notebook image as the base image
FROM quay.io/jupyter/minimal-notebook:x86_64-lab-4.1.8 AS base
# Install Jupyter AI with all its dependencies
pip install -U "jupyter-ai[all]"
```
Build the Docker Image and run in a container
Verify Jupyter AI Chat in the local host
<img width="890" alt="Image" src="https://github.com/user-attachments/assets/c22c64fd-2fa0-4b8f-90c7-ce3c68ea1ac7" />
## Expected behavior
The chat should work and provide the correct answer.
## Context
Hello,
We are upgrading from JupyterLab 3.6.7, along with other packages, including Jupyter AI. Jupyter AI works fine with the current setup (JupyterLab 3.6.7).
However, after upgrading the packages and Jupyter AI, I am encountering an Internal Server Error for all chat-based queries. Interestingly, some commands, such as /generate a notebook about how to add 5 numbers in Python, work fine and successfully generate the notebook.
- Browser and version: Chrome
- JupyterLab version: 4.1.8
other package versions
```
jupyter_ai 2.29.0
jupyter_ai_magics 2.29.0
jupyter_client 8.6.1
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_packaging 0.12.3
jupyter_server 2.14.0
jupyter_server_terminals 0.5.3
jupyter-telemetry 0.1.0
jupyterhub 4.1.5
jupyterlab 4.1.8
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.1
jupyterlab_widgets 3.0.13
langchain 0.3.14
langchain-anthropic 0.3.3
langchain-aws 0.2.11
langchain-cohere 0.3.4
langchain-community 0.3.14
langchain-core 0.3.30
langchain-experimental 0.3.4
langchain-google-genai 2.0.8
langchain-mistralai 0.2.4
langchain-nvidia-ai-endpoints 0.3.7
langchain-ollama 0.2.2
langchain-openai 0.3.0
langchain-text-splitters 0.3.5
langsmith 0.2.11
libmambapy 1.5.8
```
<details><summary>Troubleshoot Output</summary>
<pre>
jupyter troubleshoot
pip list:
Package Version Editable project location
----------------------------- --------------- -----------------------------------
ai21 3.0.1
ai21-tokenizer 0.12.0
aiohappyeyeballs 2.4.4
aiohttp 3.11.11
aiolimiter 1.2.1
aiosignal 1.3.2
aiosqlite 0.20.0
alembic 1.13.1
annotated-types 0.7.0
anthropic 0.43.1
anyio 4.8.0
archspec 0.2.3
argon2-cffi 23.1.0
argon2-cffi-bindings 21.2.0
arrow 1.3.0
arxiv 2.1.3
asttokens 2.4.1
async-generator 1.10
async-lru 2.0.4
attrs 23.2.0
Babel 2.14.0
bce-python-sdk 0.9.25
beautifulsoup4 4.12.3
bleach 6.1.0
blinker 1.8.1
boltons 24.0.0
boto3 1.36.2
botocore 1.36.2
Brotli 1.1.0
cached-property 1.5.2
cachetools 5.5.0
certifi 2024.2.2
certipy 0.1.3
cffi 1.16.0
charset-normalizer 3.3.2
click 8.1.8
cloudpickle 3.1.1
cohere 5.13.8
colorama 0.4.6
comm 0.2.2
conda 24.4.0
conda-libmamba-solver 24.1.0
conda-package-handling 2.2.0
conda_package_streaming 0.9.0
cryptography 42.0.6
dask 2025.1.0
dataclasses-json 0.6.7
debugpy 1.8.1
decorator 5.1.1
deepmerge 2.0
defusedxml 0.7.1
deprecation 2.1.0
dill 0.3.9
diskcache 5.6.3
distributed 2025.1.0
distro 1.9.0
entrypoints 0.4
eval_type_backport 0.2.2
exceptiongroup 1.2.0
executing 2.0.1
faiss-cpu 1.9.0.post1
fastavro 1.10.0
fastjsonschema 2.19.1
feedparser 6.0.11
filelock 3.16.1
filetype 1.2.0
fqdn 1.5.1
frozenlist 1.5.0
fsspec 2024.12.0
future 1.0.0
google-ai-generativelanguage 0.6.10
google-api-core 2.24.0
google-api-python-client 2.159.0
google-auth 2.37.0
google-auth-httplib2 0.2.0
google-generativeai 0.8.3
googleapis-common-protos 1.66.0
gpt4all 2.8.2
greenlet 3.0.3
grpcio 1.69.0
grpcio-status 1.69.0
h11 0.14.0
h2 4.1.0
hpack 4.0.0
httpcore 1.0.5
httplib2 0.22.0
httpx 0.27.0
httpx-sse 0.4.0
huggingface-hub 0.27.1
hyperframe 6.0.1
idna 3.7
importlib_metadata 7.1.0
importlib_resources 6.4.0
ipykernel 6.29.3
ipython 8.22.2
ipython-genutils 0.2.0
ipywidgets 8.1.5
isoduration 20.11.0
jedi 0.19.1
Jinja2 3.1.3
jiter 0.8.2
jmespath 1.0.1
json5 0.9.25
jsonpatch 1.33
jsonpath-ng 1.7.0
jsonpointer 2.4
jsonschema 4.22.0
jsonschema-specifications 2023.12.1
jupyter_ai 2.29.0
jupyter_ai_magics 2.29.0
jupyter_client 8.6.1
jupyter_core 5.7.2
jupyter-events 0.10.0
jupyter-lsp 2.2.5
jupyter_packaging 0.12.3
jupyter_server 2.14.0
jupyter_server_terminals 0.5.3
jupyter-telemetry 0.1.0
jupyterhub 4.1.5
jupyterlab 4.1.8
jupyterlab_pygments 0.3.0
jupyterlab_server 2.27.1
jupyterlab_widgets 3.0.13
langchain 0.3.14
langchain-anthropic 0.3.3
langchain-aws 0.2.11
langchain-cohere 0.3.4
langchain-community 0.3.14
langchain-core 0.3.30
langchain-experimental 0.3.4
langchain-google-genai 2.0.8
langchain-mistralai 0.2.4
langchain-nvidia-ai-endpoints 0.3.7
langchain-ollama 0.2.2
langchain-openai 0.3.0
langchain-text-splitters 0.3.5
langsmith 0.2.11
libmambapy 1.5.8
locket 1.0.0
Mako 1.3.3
mamba 1.5.8
markdown-it-py 3.0.0
MarkupSafe 2.1.5
marshmallow 3.25.1
matplotlib-inline 0.1.7
mdurl 0.1.2
menuinst 2.0.2
mistune 3.0.2
msgpack 1.1.0
multidict 6.1.0
multiprocess 0.70.17
mypy-extensions 1.0.0
nbclassic 1.0.0
nbclient 0.10.0
nbconvert 7.16.4
nbformat 5.10.4
nest_asyncio 1.6.0
notebook 7.1.3
notebook_shim 0.2.4
numpy 1.26.4
oauthlib 3.2.2
ollama 0.4.6
openai 1.59.8
orjson 3.10.14
overrides 7.7.0
packaging 24.0
pamela 1.1.0
pandas 2.2.3
pandocfilters 1.5.0
parameterized 0.9.0
parso 0.8.4
partd 1.4.2
pexpect 4.9.0
pickleshare 0.7.5
pillow 10.4.0
pip 24.0
pkgutil_resolve_name 1.3.10
platformdirs 4.2.1
pluggy 1.5.0
ply 3.11
prometheus_client 0.20.0
prompt-toolkit 3.0.42
propcache 0.2.1
proto-plus 1.25.0
protobuf 5.29.3
psutil 5.9.8
ptyprocess 0.7.0
pure-eval 0.2.2
pyarrow 19.0.0
pyasn1 0.6.1
pyasn1_modules 0.4.1
pycosat 0.6.6
pycparser 2.22
pycryptodome 3.21.0
pycurl 7.45.3
pydantic 2.10.5
pydantic_core 2.27.2
pydantic-settings 2.7.1
Pygments 2.18.0
PyJWT 2.8.0
pyOpenSSL 24.0.0
pyparsing 3.2.1
pypdf 5.1.0
PySocks 1.7.1
python-dateutil 2.9.0
python-dotenv 1.0.1
python-json-logger 2.0.7
pytz 2024.1
PyYAML 6.0.1
pyzmq 26.0.2
qianfan 0.4.12.2
referencing 0.35.1
regex 2024.11.6
requests 2.32.3
requests-toolbelt 1.0.0
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
rich 13.9.4
rpds-py 0.18.0
rsa 4.9
ruamel.yaml 0.18.6
ruamel.yaml.clib 0.2.8
s3transfer 0.11.1
Send2Trash 1.8.3
sentencepiece 0.2.0
setuptools 69.5.1
sgmllib3k 1.0.0
shellingham 1.5.4
six 1.16.0
sniffio 1.3.1
sortedcontainers 2.4.0
soupsieve 2.5
SQLAlchemy 2.0.30
stack-data 0.6.2
tabulate 0.9.0
tblib 3.0.0
tenacity 8.5.0
terminado 0.18.1
tiktoken 0.8.0
tinycss2 1.3.0
together 1.3.11
tokenizers 0.21.0
tomli 2.0.1
tomlkit 0.13.2
toolz 1.0.0
tornado 6.4
tqdm 4.66.4
traitlets 5.14.3
truststore 0.8.0
typer 0.15.1
types-python-dateutil 2.9.0.20240316
types-requests 2.32.0.20241016
typing_extensions 4.12.2
typing-inspect 0.9.0
typing-utils 0.1.0
tzdata 2024.2
uri-template 1.3.0
uritemplate 4.1.1
urllib3 2.2.1
wcwidth 0.2.13
webcolors 1.13
webencodings 0.5.1
websocket-client 1.8.0
wheel 0.43.0
widgetsnbextension 4.0.13
yarl 1.18.3
zict 3.0.0
zipp 3.17.0
zstandard 0.19.0
conda env:
name: base
channels:
- conda-forge
dependencies:
- _libgcc_mutex=0.1=conda_forge
- _openmp_mutex=4.5=2_gnu
- alembic=1.13.1=pyhd8ed1ab_1
- archspec=0.2.3=pyhd8ed1ab_0
- argon2-cffi=23.1.0=pyhd8ed1ab_0
- argon2-cffi-bindings=21.2.0=py311h459d7ec_4
- arrow=1.3.0=pyhd8ed1ab_0
- asttokens=2.4.1=pyhd8ed1ab_0
- async-lru=2.0.4=pyhd8ed1ab_0
- async_generator=1.10=py_0
- attrs=23.2.0=pyh71513ae_0
- babel=2.14.0=pyhd8ed1ab_0
- beautifulsoup4=4.12.3=pyha770c72_0
- bleach=6.1.0=pyhd8ed1ab_0
- blinker=1.8.1=pyhd8ed1ab_0
- boltons=24.0.0=pyhd8ed1ab_0
- brotli-python=1.1.0=py311hb755f60_1
- bzip2=1.0.8=hd590300_5
- c-ares=1.28.1=hd590300_0
- ca-certificates=2024.2.2=hbcca054_0
- cached-property=1.5.2=hd8ed1ab_1
- cached_property=1.5.2=pyha770c72_1
- certifi=2024.2.2=pyhd8ed1ab_0
- certipy=0.1.3=py_0
- cffi=1.16.0=py311hb3a22ac_0
- charset-normalizer=3.3.2=pyhd8ed1ab_0
- colorama=0.4.6=pyhd8ed1ab_0
- comm=0.2.2=pyhd8ed1ab_0
- conda=24.4.0=py311h38be061_0
- conda-libmamba-solver=24.1.0=pyhd8ed1ab_0
- conda-package-handling=2.2.0=pyh38be061_0
- conda-package-streaming=0.9.0=pyhd8ed1ab_0
- configurable-http-proxy=4.6.1=h92b4e83_0
- cryptography=42.0.6=py311h4a61cc7_0
- debugpy=1.8.1=py311hb755f60_0
- decorator=5.1.1=pyhd8ed1ab_0
- defusedxml=0.7.1=pyhd8ed1ab_0
- distro=1.9.0=pyhd8ed1ab_0
- entrypoints=0.4=pyhd8ed1ab_0
- exceptiongroup=1.2.0=pyhd8ed1ab_2
- executing=2.0.1=pyhd8ed1ab_0
- fmt=10.2.1=h00ab1b0_0
- fqdn=1.5.1=pyhd8ed1ab_0
- greenlet=3.0.3=py311hb755f60_0
- h11=0.14.0=pyhd8ed1ab_0
- h2=4.1.0=pyhd8ed1ab_0
- hpack=4.0.0=pyh9f0ad1d_0
- httpcore=1.0.5=pyhd8ed1ab_0
- httpx=0.27.0=pyhd8ed1ab_0
- hyperframe=6.0.1=pyhd8ed1ab_0
- icu=73.2=h59595ed_0
- idna=3.7=pyhd8ed1ab_0
- importlib-metadata=7.1.0=pyha770c72_0
- importlib_metadata=7.1.0=hd8ed1ab_0
- importlib_resources=6.4.0=pyhd8ed1ab_0
- ipykernel=6.29.3=pyhd33586a_0
- ipython=8.22.2=pyh707e725_0
- ipython_genutils=0.2.0=py_1
- isoduration=20.11.0=pyhd8ed1ab_0
- jedi=0.19.1=pyhd8ed1ab_0
- jinja2=3.1.3=pyhd8ed1ab_0
- json5=0.9.25=pyhd8ed1ab_0
- jsonpatch=1.33=pyhd8ed1ab_0
- jsonpointer=2.4=py311h38be061_3
- jsonschema=4.22.0=pyhd8ed1ab_0
- jsonschema-specifications=2023.12.1=pyhd8ed1ab_0
- jsonschema-with-format-nongpl=4.22.0=pyhd8ed1ab_0
- jupyter-lsp=2.2.5=pyhd8ed1ab_0
- jupyter_client=8.6.1=pyhd8ed1ab_0
- jupyter_core=5.7.2=py311h38be061_0
- jupyter_events=0.10.0=pyhd8ed1ab_0
- jupyter_server=2.14.0=pyhd8ed1ab_0
- jupyter_server_terminals=0.5.3=pyhd8ed1ab_0
- jupyter_telemetry=0.1.0=pyhd8ed1ab_1
- jupyterhub=4.1.5=pyh31011fe_0
- jupyterhub-base=4.1.5=pyh31011fe_0
- jupyterlab=4.1.8=pyhd8ed1ab_0
- jupyterlab_pygments=0.3.0=pyhd8ed1ab_1
- jupyterlab_server=2.27.1=pyhd8ed1ab_0
- keyutils=1.6.1=h166bdaf_0
- krb5=1.21.2=h659d440_0
- ld_impl_linux-64=2.40=h55db66e_0
- libarchive=3.7.2=h2aa1ff5_1
- libcurl=8.7.1=hca28451_0
- libedit=3.1.20191231=he28a2e2_2
- libev=4.33=hd590300_2
- libexpat=2.6.2=h59595ed_0
- libffi=3.4.2=h7f98852_5
- libgcc-ng=13.2.0=h77fa898_6
- libgomp=13.2.0=h77fa898_6
- libiconv=1.17=hd590300_2
- libmamba=1.5.8=had39da4_0
- libmambapy=1.5.8=py311hf2555c7_0
- libnghttp2=1.58.0=h47da74e_1
- libnsl=2.0.1=hd590300_0
- libsodium=1.0.18=h36c2ea0_1
- libsolv=0.7.29=ha6fb4c9_0
- libsqlite=3.45.3=h2797004_0
- libssh2=1.11.0=h0841786_0
- libstdcxx-ng=13.2.0=hc0a3c3a_6
- libuuid=2.38.1=h0b41bf4_0
- libuv=1.48.0=hd590300_0
- libxcrypt=4.4.36=hd590300_1
- libxml2=2.12.6=h232c23b_2
- libzlib=1.2.13=hd590300_5
- lz4-c=1.9.4=hcb278e6_0
- lzo=2.10=hd590300_1001
- mako=1.3.3=pyhd8ed1ab_0
- mamba=1.5.8=py311h3072747_0
- markupsafe=2.1.5=py311h459d7ec_0
- matplotlib-inline=0.1.7=pyhd8ed1ab_0
- menuinst=2.0.2=py311h38be061_0
- mistune=3.0.2=pyhd8ed1ab_0
- nbclassic=1.0.0=pyhb4ecaf3_1
- nbclient=0.10.0=pyhd8ed1ab_0
- nbconvert=7.16.4=hd8ed1ab_0
- nbconvert-core=7.16.4=pyhd8ed1ab_0
- nbconvert-pandoc=7.16.4=hd8ed1ab_0
- nbformat=5.10.4=pyhd8ed1ab_0
- ncurses=6.4.20240210=h59595ed_0
- nest-asyncio=1.6.0=pyhd8ed1ab_0
- nodejs=20.12.2=hb753e55_0
- notebook=7.1.3=pyhd8ed1ab_0
- notebook-shim=0.2.4=pyhd8ed1ab_0
- oauthlib=3.2.2=pyhd8ed1ab_0
- openssl=3.3.0=hd590300_0
- overrides=7.7.0=pyhd8ed1ab_0
- packaging=24.0=pyhd8ed1ab_0
- pamela=1.1.0=pyh1a96a4e_0
- pandoc=3.1.13=ha770c72_0
- pandocfilters=1.5.0=pyhd8ed1ab_0
- parso=0.8.4=pyhd8ed1ab_0
- pexpect=4.9.0=pyhd8ed1ab_0
- pickleshare=0.7.5=py_1003
- pip=24.0=pyhd8ed1ab_0
- pkgutil-resolve-name=1.3.10=pyhd8ed1ab_1
- platformdirs=4.2.1=pyhd8ed1ab_0
- pluggy=1.5.0=pyhd8ed1ab_0
- prometheus_client=0.20.0=pyhd8ed1ab_0
- prompt-toolkit=3.0.42=pyha770c72_0
- psutil=5.9.8=py311h459d7ec_0
- ptyprocess=0.7.0=pyhd3deb0d_0
- pure_eval=0.2.2=pyhd8ed1ab_0
- pybind11-abi=4=hd8ed1ab_3
- pycosat=0.6.6=py311h459d7ec_0
- pycparser=2.22=pyhd8ed1ab_0
- pycurl=7.45.3=py311h3393d6f_1
- pygments=2.18.0=pyhd8ed1ab_0
- pyjwt=2.8.0=pyhd8ed1ab_1
- pyopenssl=24.0.0=pyhd8ed1ab_0
- pysocks=1.7.1=pyha2e5f31_6
- python=3.11.9=hb806964_0_cpython
- python-dateutil=2.9.0=pyhd8ed1ab_0
- python-fastjsonschema=2.19.1=pyhd8ed1ab_0
- python-json-logger=2.0.7=pyhd8ed1ab_0
- python_abi=3.11=4_cp311
- pytz=2024.1=pyhd8ed1ab_0
- pyyaml=6.0.1=py311h459d7ec_1
- pyzmq=26.0.2=py311h08a0b41_0
- readline=8.2=h8228510_1
- referencing=0.35.1=pyhd8ed1ab_0
- reproc=14.2.4.post0=hd590300_1
- reproc-cpp=14.2.4.post0=h59595ed_1
- rfc3339-validator=0.1.4=pyhd8ed1ab_0
- rfc3986-validator=0.1.1=pyh9f0ad1d_0
- rpds-py=0.18.0=py311h46250e7_0
- ruamel.yaml=0.18.6=py311h459d7ec_0
- ruamel.yaml.clib=0.2.8=py311h459d7ec_0
- send2trash=1.8.3=pyh0d859eb_0
- setuptools=69.5.1=pyhd8ed1ab_0
- six=1.16.0=pyh6c4a22f_0
- sniffio=1.3.1=pyhd8ed1ab_0
- soupsieve=2.5=pyhd8ed1ab_1
- sqlalchemy=2.0.30=py311h331c9d8_0
- stack_data=0.6.2=pyhd8ed1ab_0
- terminado=0.18.1=pyh0d859eb_0
- tinycss2=1.3.0=pyhd8ed1ab_0
- tk=8.6.13=noxft_h4845f30_101
- tomli=2.0.1=pyhd8ed1ab_0
- tornado=6.4=py311h459d7ec_0
- tqdm=4.66.4=pyhd8ed1ab_0
- traitlets=5.14.3=pyhd8ed1ab_0
- truststore=0.8.0=pyhd8ed1ab_0
- types-python-dateutil=2.9.0.20240316=pyhd8ed1ab_0
- typing_utils=0.1.0=pyhd8ed1ab_0
- uri-template=1.3.0=pyhd8ed1ab_0
- urllib3=2.2.1=pyhd8ed1ab_0
- wcwidth=0.2.13=pyhd8ed1ab_0
- webcolors=1.13=pyhd8ed1ab_0
- webencodings=0.5.1=pyhd8ed1ab_2
- websocket-client=1.8.0=pyhd8ed1ab_0
- wheel=0.43.0=pyhd8ed1ab_1
- xz=5.2.6=h166bdaf_0
- yaml=0.2.5=h7f98852_2
- yaml-cpp=0.8.0=h59595ed_0
- zeromq=4.3.5=h75354e8_3
- zipp=3.17.0=pyhd8ed1ab_0
- zlib=1.2.13=hd590300_5
- zstandard=0.19.0=py311hd4cff14_0
- zstd=1.5.6=ha6fb4c9_0
- pip:
- ai21==3.0.1
- ai21-tokenizer==0.12.0
- aiohappyeyeballs==2.4.4
- aiohttp==3.11.11
- aiolimiter==1.2.1
- aiosignal==1.3.2
- aiosqlite==0.20.0
- al-server-extension==0.1.0
- annotated-types==0.7.0
- anthropic==0.43.1
- anyio==4.8.0
- arxiv==2.1.3
- bce-python-sdk==0.9.25
- boto3==1.36.2
- botocore==1.36.2
- cachetools==5.5.0
- click==8.1.8
- cloudpickle==3.1.1
- cohere==5.13.8
- dask==2025.1.0
- dataclasses-json==0.6.7
- deepmerge==2.0
- deprecation==2.1.0
- dill==0.3.9
- diskcache==5.6.3
- distributed==2025.1.0
- eval-type-backport==0.2.2
- faiss-cpu==1.9.0.post1
- fastavro==1.10.0
- feedparser==6.0.11
- filelock==3.16.1
- filetype==1.2.0
- frozenlist==1.5.0
- fsspec==2024.12.0
- future==1.0.0
- google-ai-generativelanguage==0.6.10
- google-api-core==2.24.0
- google-api-python-client==2.159.0
- google-auth==2.37.0
- google-auth-httplib2==0.2.0
- google-generativeai==0.8.3
- googleapis-common-protos==1.66.0
- gpt4all==2.8.2
- grpcio==1.69.0
- grpcio-status==1.69.0
- httplib2==0.22.0
- httpx-sse==0.4.0
- huggingface-hub==0.27.1
- ipywidgets==8.1.5
- jiter==0.8.2
- jmespath==1.0.1
- jsonpath-ng==1.7.0
- jupyter-ai==2.29.0
- jupyter-ai-magics==2.29.0
- jupyter-packaging==0.12.3
- jupyterlab-widgets==3.0.13
- langchain==0.3.14
- langchain-anthropic==0.3.3
- langchain-aws==0.2.11
- langchain-cohere==0.3.4
- langchain-community==0.3.14
- langchain-core==0.3.30
- langchain-experimental==0.3.4
- langchain-google-genai==2.0.8
- langchain-mistralai==0.2.4
- langchain-nvidia-ai-endpoints==0.3.7
- langchain-ollama==0.2.2
- langchain-openai==0.3.0
- langchain-text-splitters==0.3.5
- langsmith==0.2.11
- locket==1.0.0
- markdown-it-py==3.0.0
- marshmallow==3.25.1
- mdurl==0.1.2
- msgpack==1.1.0
- multidict==6.1.0
- multiprocess==0.70.17
- mypy-extensions==1.0.0
- numpy==1.26.4
- ollama==0.4.6
- openai==1.59.8
- orjson==3.10.14
- pandas==2.2.3
- parameterized==0.9.0
- partd==1.4.2
- pillow==10.4.0
- ply==3.11
- propcache==0.2.1
- proto-plus==1.25.0
- protobuf==5.29.3
- pyarrow==19.0.0
- pyasn1==0.6.1
- pyasn1-modules==0.4.1
- pycryptodome==3.21.0
- pydantic==2.10.5
- pydantic-core==2.27.2
- pydantic-settings==2.7.1
- pyparsing==3.2.1
- pypdf==5.1.0
- python-dotenv==1.0.1
- qianfan==0.4.12.2
- regex==2024.11.6
- requests==2.32.3
- requests-toolbelt==1.0.0
- rich==13.9.4
- rsa==4.9
- s3transfer==0.11.1
- sentencepiece==0.2.0
- sgmllib3k==1.0.0
- shellingham==1.5.4
- sortedcontainers==2.4.0
- tabulate==0.9.0
- tblib==3.0.0
- tenacity==8.5.0
- tiktoken==0.8.0
- together==1.3.11
- tokenizers==0.21.0
- tomlkit==0.13.2
- toolz==1.0.0
- typer==0.15.1
- types-requests==2.32.0.20241016
- typing-extensions==4.12.2
- typing-inspect==0.9.0
- tzdata==2024.2
- uritemplate==4.1.1
- widgetsnbextension==4.0.13
- yarl==1.18.3
- zict==3.0.0
prefix: /opt/conda
</pre>
</details>
<details><summary>Command Line Output</summary>
<pre>
I 2025-01-17 21:29:33.641 ServerApp] al_server_extension | extension was successfully linked.
[I 2025-01-17 21:29:33.648 ServerApp] jupyter_ai | extension was successfully linked.
[I 2025-01-17 21:29:33.648 ServerApp] jupyter_lsp | extension was successfully linked.
[I 2025-01-17 21:29:33.652 ServerApp] jupyter_server_terminals | extension was successfully linked.
[I 2025-01-17 21:29:33.657 ServerApp] jupyterlab | extension was successfully linked.
[I 2025-01-17 21:29:33.662 ServerApp] nbclassic | extension was successfully linked.
[I 2025-01-17 21:29:33.667 ServerApp] notebook | extension was successfully linked.
[I 2025-01-17 21:29:33.675 ServerApp] notebook_shim | extension was successfully linked.
/opt/conda/lib/python3.11/site-packages/traitlets/traitlets.py:1241: UserWarning: Overriding existing pre_save_hook (custom_pre_save_hook) with a new one (custom_pre_save_hook).
return self.func(*args, **kwargs)
/opt/conda/lib/python3.11/site-packages/traitlets/traitlets.py:1241: UserWarning: Overriding existing post_save_hook (custom_post_save_hook) with a new one (custom_post_save_hook).
return self.func(*args, **kwargs)
[I 2025-01-17 21:29:33.698 ServerApp] notebook_shim | extension was successfully loaded.
[I 2025-01-17 21:29:33.699 ServerApp] Registered common endpoints server extension
[I 2025-01-17 21:29:33.699 ServerApp] al_server_extension | extension was successfully loaded.
[I 2025-01-17 21:29:33.699 AiExtension] Configured provider allowlist: ['azure-chat-openai']
[I 2025-01-17 21:29:33.699 AiExtension] Configured provider blocklist: None
[I 2025-01-17 21:29:33.699 AiExtension] Configured model allowlist: None
[I 2025-01-17 21:29:33.699 AiExtension] Configured model blocklist: None
[I 2025-01-17 21:29:33.700 AiExtension] Configured model parameters: {'azure-chat-openai:XXXXXXXXXX': {'azure_endpoint': 'https://XXXXXXXXXXXXXXXX/openai-proxy', 'openai_api_version': '2023-07-01-preview'}}
[I 2025-01-17 21:29:33.707 AiExtension] Skipping blocked provider `ai21`.
[I 2025-01-17 21:29:33.829 AiExtension] Skipping blocked provider `bedrock`.
[I 2025-01-17 21:29:33.829 AiExtension] Skipping blocked provider `bedrock-chat`.
[I 2025-01-17 21:29:33.829 AiExtension] Skipping blocked provider `bedrock-custom`.
[I 2025-01-17 21:29:33.938 AiExtension] Skipping blocked provider `anthropic-chat`.
[I 2025-01-17 21:29:34.119 AiExtension] Registered model provider `azure-chat-openai`.
[I 2025-01-17 21:29:34.919 AiExtension] Skipping blocked provider `cohere`.
[I 2025-01-17 21:29:35.149 AiExtension] Skipping blocked provider `gemini`.
[I 2025-01-17 21:29:35.149 AiExtension] Skipping blocked provider `gpt4all`.
[I 2025-01-17 21:29:35.149 AiExtension] Skipping blocked provider `huggingface_hub`.
[I 2025-01-17 21:29:35.160 AiExtension] Skipping blocked provider `mistralai`.
[I 2025-01-17 21:29:35.178 AiExtension] Skipping blocked provider `nvidia-chat`.
[I 2025-01-17 21:29:35.244 AiExtension] Skipping blocked provider `ollama`.
[I 2025-01-17 21:29:35.244 AiExtension] Skipping blocked provider `openai`.
[I 2025-01-17 21:29:35.244 AiExtension] Skipping blocked provider `openai-chat`.
[I 2025-01-17 21:29:35.257 AiExtension] Skipping blocked provider `openrouter`.
[I 2025-01-17 21:29:35.257 AiExtension] Skipping blocked provider `qianfan`.
[I 2025-01-17 21:29:35.257 AiExtension] Skipping blocked provider `sagemaker-endpoint`.
[I 2025-01-17 21:29:35.257 AiExtension] Skipping blocked provider `togetherai`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `azure`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `bedrock`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `cohere`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `gpt4all`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `huggingface_hub`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `mistralai`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `ollama`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `openai`.
[I 2025-01-17 21:29:35.265 AiExtension] Skipping blocked provider `qianfan`.
[I 2025-01-17 21:29:35.271 AiExtension] Registered providers.
[I 2025-01-17 21:29:35.271 AiExtension] Registered jupyter_ai server extension
[I 2025-01-17 21:29:35.286 AiExtension] Registered context provider `file`.
[I 2025-01-17 21:29:35.287 AiExtension] Initialized Jupyter AI server extension in 1588 ms.
[I 2025-01-17 21:29:35.288 ServerApp] jupyter_ai | extension was successfully loaded.
[I 2025-01-17 21:29:35.289 ServerApp] jupyter_lsp | extension was successfully loaded.
[I 2025-01-17 21:29:35.290 ServerApp] jupyter_server_terminals | extension was successfully loaded.
[I 2025-01-17 21:29:35.291 LabApp] JupyterLab extension loaded from /opt/conda/lib/python3.11/site-packages/jupyterlab
[I 2025-01-17 21:29:35.291 LabApp] JupyterLab application directory is /opt/conda/share/jupyter/lab
[I 2025-01-17 21:29:35.291 LabApp] Extension Manager is 'pypi'.
[I 2025-01-17 21:29:35.298 ServerApp] jupyterlab | extension was successfully loaded.
_ _ _ _
| | | |_ __ __| |__ _| |_ ___
| |_| | '_ \/ _` / _` | _/ -_)
\___/| .__/\__,_\__,_|\__\___|
|_|
Read the migration plan to Notebook 7 to learn about the new features and the actions to take if you are using extensions.
https://jupyter-notebook.readthedocs.io/en/latest/migrate_to_notebook7.html
Please note that updating to Notebook 7 might break some of your extensions.
[I 2025-01-17 21:29:35.300 ServerApp] nbclassic | extension was successfully loaded.
[I 2025-01-17 21:29:35.301 ServerApp] notebook | extension was successfully loaded.
[C 2025-01-17 21:29:35.302 ServerApp] Running as root is not recommended. Use --allow-root to bypass.</pre>
</details>
| open | 2025-01-17T21:43:03Z | 2025-01-31T19:29:11Z | https://github.com/jupyterlab/jupyter-ai/issues/1208 | [
"bug"
] | eazuman | 9 |
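A hedged way to narrow down where the 500 originates (Jupyter AI vs. the Azure endpoint itself): the traceback above shows chat going through the streaming path while `/generate` works, so calling the same `langchain-openai` client directly with streaming may isolate the failure. The endpoint, deployment name, and API version below are placeholders taken from the AiExtension log:

```python
# Assumes AZURE_OPENAI_API_KEY is set in the environment.
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_endpoint="https://<your-host>/openai-proxy",  # placeholder
    openai_api_version="2023-07-01-preview",
    azure_deployment="<your-deployment>",               # placeholder
)

# If this fails with the same 500, the problem is the proxy/endpoint's
# handling of streaming requests rather than Jupyter AI itself.
for chunk in llm.stream("Hello"):
    print(chunk.content, end="", flush=True)
```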
miguelgrinberg/Flask-SocketIO | flask | 1,078 | Embedded Server not listening for ws:// or wss:// prefix | When running the [embedded server](https://flask-socketio.readthedocs.io/en/latest/#embedded-server):
```
socketio.run(app, host='0.0.0.0', port=5005)
```
Using a the [socket.io](https://github.com/socketio/socket.io) client forcing websockets over polling:
```
let namespace = '/' + m_websocket_info['listener'] + '/' + m_user_location_id;
if (location.port === "5000") {
    namespace = location.protocol + '//' + location.hostname + ':5005' + namespace;
}
socket = io(namespace, { transports: ['websocket'] });
```
Despite `location.protocol` being `http:` the browser console shows the following error:
```
index.js:83 WebSocket connection to 'ws://localhost:5005/socket.io/?EIO=3&transport=websocket' failed: Unknown reason
```
When the traffic is routed through nginx there are no errors:
```
# https://uwsgi-docs.readthedocs.io/en/latest/Nginx.html
upstream web_upstream{
    server server-web:5000;
}
upstream socket_upstream{
    server server-socket:5005;
}
server {
    listen 80 default_server;
    # error_log /var/log/nginx/error.log debug;
    server_name _;

    location / {
        try_files $uri @web;
    }
    location /socket.io {
        try_files $uri @socket;
    }
    location @web {
        # https://nginx.org/en/docs/http/ngx_http_uwsgi_module.html
        # https://serverfault.com/a/800729
        proxy_pass http://web_upstream;
    }
    location @socket {
        # http://nginx.org/en/docs/http/websocket.html
        # https://nginx.org/en/docs/http/ngx_http_proxy_module.html
        # https://flask-socketio.readthedocs.io/en/latest/#using-nginx-as-a-websocket-reverse-proxy
        proxy_pass http://socket_upstream;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_buffering off;
        proxy_read_timeout 120s;
    }
}
```
Routing through nginx is used in production; however, connecting indirectly to the container breaks debugging using the [vscode chrome debugger](https://github.com/Microsoft/vscode-chrome-debug). | closed | 2019-10-09T18:03:01Z | 2019-10-10T16:36:24Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1078 | [
"question"
] | jonfen | 2 |
nltk/nltk | nlp | 2,809 | Trying to get in touch regarding a security issue | Hey there!
I'd like to report a security issue but cannot find contact instructions on your repository.
If not a hassle, might you kindly add a `SECURITY.md` file with an email, or another contact method? GitHub [recommends](https://docs.github.com/en/code-security/getting-started/adding-a-security-policy-to-your-repository) this best practice to ensure security issues are responsibly disclosed, and it would serve as a simple instruction for security researchers in the future.
Thank you for your consideration, and I look forward to hearing from you!
(cc @huntr-helper) | closed | 2021-09-19T19:26:09Z | 2021-09-25T14:46:45Z | https://github.com/nltk/nltk/issues/2809 | [] | JamieSlome | 2 |
donnemartin/system-design-primer | python | 285 | Point this course at resume? | Hi! Thanks for maintaining this list of important work on system design. I wonder how one can list this course on a resume/CV? If it goes in the Education section it needs credentials; if in the Projects section it needs a measurable outcome. | closed | 2019-05-29T09:55:42Z | 2020-07-04T16:26:30Z | https://github.com/donnemartin/system-design-primer/issues/285 | [
"question"
] | artkpv | 2 |
tflearn/tflearn | data-science | 713 | How to get features from a specific layer if I use merge layer? | I am new to TFLearn. I want to get features from a specific layer, and my network uses a merge layer. But I get an error:
```
Assign requires shapes of both tensors to match. lhs shape= [256] rhs shape= [10]
```
This is my code:
```python
import tflearn

def single_net(test=False):
    # Building Residual Network
    net = tflearn.input_data(shape=[None, 28, 28, 1])
    net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)
    # Residual blocks
    net = tflearn.residual_bottleneck(net, 3, 16, 64)
    net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
    net = tflearn.residual_bottleneck(net, 2, 32, 128)
    net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)
    net = tflearn.residual_bottleneck(net, 2, 64, 256)
    net = tflearn.batch_normalization(net)
    net = tflearn.activation(net, 'relu')
    net = tflearn.global_avg_pool(net)
    net = tflearn.fully_connected(net, 256, activation='tanh')
    if test:
        return net
    net = tflearn.fully_connected(net, 10, activation='softmax')
    net = tflearn.regression(net, optimizer='momentum',
                             loss='categorical_crossentropy',
                             learning_rate=0.1)
    return net

def fc_fc_net():
    net1 = single_net()
    net2 = single_net()
    net = tflearn.merge([net1, net2], mode='concat')
    return net

def fc_fc_net_1():
    net1 = single_net(True)
    net2 = single_net(True)
    net = tflearn.merge([net1, net2], mode='concat')
    return net
```
When I train the network I use the fc_fc_net() function, and when I predict I use the fc_fc_net_1() function. But I get this error:
```
Assign requires shapes of both tensors to match. lhs shape= [256] rhs shape= [10]
```
| closed | 2017-04-14T13:03:22Z | 2017-04-19T02:33:21Z | https://github.com/tflearn/tflearn/issues/713 | [] | FoxerLee | 0 |
HumanSignal/labelImg | deep-learning | 229 | Unable to run under PyQt4 environment | This edition has some new features which are much friendlier for users.
But it is unable to run in a PyQt4 environment; what is more, the Windows app is unable to run, as it is built with PyQt4.
This problem is caused by importing QtCore from PyQt5 directly in the file resources.py without checking the PyQt version.
It is easy to solve by just checking the PyQt version in resources.py.
- **OS**: Ubuntu 16.04
- **PyQt version**: PyQt4 | closed | 2018-01-27T03:18:40Z | 2018-05-22T06:12:08Z | https://github.com/HumanSignal/labelImg/issues/229 | [] | TommeyChang | 3 |
bendichter/brokenaxes | matplotlib | 5 | using with subplots | Would it be possible to include an example of how to use it with subplots? | closed | 2017-08-03T09:28:14Z | 2017-08-09T16:54:13Z | https://github.com/bendichter/brokenaxes/issues/5 | [] | themiyan | 2 |
hankcs/HanLP | nlp | 1,159 | There is a bug in the CollectionUtility.sortMapByValue method | The bug exists in v1.72 and is still present in the latest master.
In com.hankcs.hanlp.classification.utilities.CollectionUtility,
the method public static <K, V extends Comparable<V>> Map<K, V> sortMapByValue(Map<K, V> input, final boolean desc) has a bug:
ArrayList<Map.Entry<K, V>> entryList = new ArrayList<Map.Entry<K, V>>(input.size());
here input.size() should be changed to input.entrySet() | closed | 2019-04-24T05:59:53Z | 2019-04-27T22:08:32Z | https://github.com/hankcs/HanLP/issues/1159 | [
"bug"
] | wyuz1028 | 1 |
lgienapp/aquarel | data-visualization | 13 | KeyError: 'xtick.labelcolor' | matplotlib==3.3.4. Please require a minimum version of the matplotlib module. | closed | 2022-08-17T13:02:18Z | 2022-08-23T12:44:25Z | https://github.com/lgienapp/aquarel/issues/13 | [
"bug"
] | pangahn | 1 |
polarsource/polar | fastapi | 4,608 | AssertionError | Sentry Issue: [SERVER-21G](https://polar-sh.sentry.io/issues/6120513945/?referrer=github_integration)
```
SSLError: [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1000)
(15 additional frame(s) were not displayed)
...
File "polar/webhook/tasks.py", line 124, in _webhook_event_send
response = await client.post(
AssertionError:
File "polar/worker.py", line 304, in wrapper
r = await f(*args, **kwargs)
File "polar/webhook/tasks.py", line 40, in webhook_event_send
return await _webhook_event_send(
File "polar/webhook/tasks.py", line 150, in _webhook_event_send
assert delivery.succeeded is not None
``` | closed | 2024-12-09T08:02:41Z | 2024-12-09T08:07:33Z | https://github.com/polarsource/polar/issues/4608 | [] | sentry-io[bot] | 0 |
xonsh/xonsh | data-science | 5,166 | f-string with special syntax are not supported yet in py3.12: Unsupported fstring syntax | Python 3.12 has implemented [PEP 701](https://peps.python.org/pep-0701/) affecting [string literals](https://xon.sh/tutorial.html#advanced-string-literals):
```xsh
# xonsh + python 3.12
f"{$HOME}"
# Unsupported fstring syntax
```
We need to handle cases where $ENV variables are inside f-strings. Check the comment and related test for more info https://github.com/xonsh/xonsh/pull/5156#discussion_r1247003673
### Workaround
```xsh
print(p"$HOME") # p-string for path
# /Users/me
print(f"{__xonsh__.env['HOME']}")
# /Users/me
env = __xonsh__.env
print(f"{env['HOME']}")
# /Users/me
```
## For community
โฌ๏ธ **Please click the ๐ reaction instead of leaving a `+1` or ๐ comment**
| open | 2023-06-30T04:11:04Z | 2024-10-31T09:24:54Z | https://github.com/xonsh/xonsh/issues/5166 | [
"parser",
"py312"
] | jnoortheen | 11 |
dgtlmoon/changedetection.io | web-scraping | 1,991 | TrueNas Scale - Visual Filter Selector resolution/cropped result (missing env vars) | **Describe the bug**
The Visual Filter Selector displays a cropped version of websites, making it almost unusable. (The red selections are not aligned with the screenshot; everything is shifted.)
**Version**
v0.45.7.3
**To Reproduce**
Steps to reproduce the behavior:
1. The issue occurs with 100% of the added links. As a test link, use: https://www.20minutes.fr/locales/
**Expected behavior**
The Visual Filter Selector should display the entire page, not a cropped version.
**Screenshots**
Visual selector cropped result :

Expected result :

**Desktop (please complete the following information):**
- Hosted on TrueNas Scale
- Latest update of ChangeDetection and Browserless
- Fetch Method : Playwright Chromium/Javascript via 'ws://localip:port/&stealth=1&--disable-web-security=true'
**Additional context**
I should mention that it has never worked properly; this is not a new behavior, and I have been experiencing this issue since I first installed ChangeDetection a few months ago. I am unsure whether it is related to Browserless or to ChangeDetection. Feel free to ask if I can provide more useful information to troubleshoot this issue. Thanks! | closed | 2023-11-20T09:17:27Z | 2023-11-20T17:32:43Z | https://github.com/dgtlmoon/changedetection.io/issues/1991 | [
"triage"
] | FhenrisGit | 11 |
deezer/spleeter | tensorflow | 604 | [Feature] Yosemite Compatibility? | ## Description
I run OSX 10.10.5 on my mid 2012 MacBook Pro, 2.3 GHz Core i7 with eight virtual cores, 16GB RAM, and two internal 1TB SSDs. My DAW of choice is Pro Tools, and version 10.3.10 is what I have a license for and what all of my plug-ins are licensed for. I refuse to upgrade when my hardware and software are working perfectly, but I'm having issues finding things that are compatible with anything below 10.11. I really need something like this and was hoping there was a pre-compiled GUI version available that will work with Yosemite. If not, I would be extremely grateful if somebody with more coding experience could message me somehow and help me do it. I have some experience with the command line in Darwin and Linux, but usually just following instructions and memorizing certain tasks. I have not built something from source code completely by myself, except for a couple of times many years ago at the beginning of the OSX86 project, using OSX 10.4.3.
This is my first time posting on here, as usually I just read, so please go easy on me if I made a mistake and posted something in the wrong place. I'm still learning my way around.
## Additional information
<!-- Add any additional description -->
| closed | 2021-04-03T07:51:26Z | 2021-04-03T19:26:24Z | https://github.com/deezer/spleeter/issues/604 | [
"enhancement",
"feature"
] | louisCyphre666 | 1 |
kennethreitz/responder | graphql | 470 | Uvicorn version too old ? | Hi,
I noticed that the `uvicorn` version in use is old.
https://github.com/taoufik07/responder/blob/6ff47adbb51ea4c822c75220023740c64b60e860/setup.py#L26
Is there a specific reason to pin this version?
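For illustration, loosening the pin could look something like this sketch of the requirement (the exact range is hypothetical):
```python
# in setup.py
install_requires = [
    "uvicorn>=0.11,<1.0",  # a range instead of a single pinned release
]
```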
Regards, | closed | 2022-01-14T13:54:58Z | 2024-03-31T00:57:34Z | https://github.com/kennethreitz/responder/issues/470 | [] | waghanza | 0 |
dask/dask | pandas | 11,825 | P2PShuffle RuntimeError P2P {id} failed during transfer phase when groupby apply to_bag |
**Describe the issue**:
groupby.apply().to_bag() causes a RuntimeError when using P2PShuffle (the default) with a distributed dask client.
Expected:
- when computing the bag, there is no error
Actual:
- when computing the bag, an AssertionError is thrown on `assert isinstance(barrier_task_spec, P2PBarrierTask)`:
```
File "/home/vscode/.local/lib/python3.12/site-packages/distributed/shuffle/_scheduler_plugin.py", line 196, in _retrieve_spec
assert isinstance(barrier_task_spec, P2PBarrierTask)
^^^^^^^^^^^^^^^^^
AssertionError
...
File "/home/vscode/.local/lib/python3.12/site-packages/distributed/shuffle/_core.py", line 531, in handle_transfer_errors
raise RuntimeError(f"P2P {id} failed during transfer phase") from e
```
To unblock, either:
- add a persist() before the to_bag(), or
- switch to the tasks shuffle method.
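Spelled out against the example below, the two workarounds look like this (sketch):
```python
import dask

# (a) force computation before converting to a bag:
results_bag = grouped_ddf.apply(worker, meta=("result", "object")).persist().to_bag()

# (b) or switch the dataframe shuffle implementation away from P2P:
dask.config.set({"dataframe.shuffle.method": "tasks"})
```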
**Minimal Complete Verifiable Example**:
```python
import dask.config
import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client
def main():
# ----------------------------------------------------------------
# Set up data
# ------------------------------------------------------------
simple_data = pd.DataFrame(
{
"foo": [
"1",
"1",
"2",
"2",
],
"bar": ["1", "2", "3", "4"],
}
)
# ----------------------------------------------------------------
# Set up Dask
# ----------------------------------------------------------------
dask.config.set(scheduler="distributed")
# dask.config.set({"dataframe.shuffle.method": "tasks"}) # adding this line will also fix the exception
client = Client()
simple_ddf = dd.from_pandas(simple_data)
# ----------------------------------------------------------------
# groupby() then apply() then to_bag()
# ----------------------------------------------------------------
grouped_ddf = simple_ddf.groupby("foo")
def worker(grouped_df):
print(f"Name: {grouped_df.name}, content:\n {grouped_df}")
return object()
results_bag = (
grouped_ddf.apply(worker, meta=("result", "object"))
# Add persist() to avoid the P2P Shuffle problem (or else use tasks shuffle)
# .persist()
.to_bag()
)
print("Results:", results_bag.compute())
client.close()
if __name__ == "__main__":
main()
```
**Anything else we need to know?**:
**Script Expected behaviour:**
- results_bag should compute
**Actual Behaviour:**
Throws AssertionError on: assert isinstance(barrier_task_spec, P2PBarrierTask)
**How to avoid the undesired behaviour:**
(1) insert persist() before the to_bag()
OR
(2) change the dask dataframe shuffle method to tasks: dask.config.set({"dataframe.shuffle.method": "tasks"})
**Environment**:
- Dask version:
dask: Version: 2025.1.0
distributed: Version: 2025.1.0
- Python version: Python 3.12.7
- Operating System: Debian GNU/Linux 11 (bullseye)
- Install method (conda, pip, source): pip
| open | 2025-03-12T03:20:19Z | 2025-03-12T03:37:21Z | https://github.com/dask/dask/issues/11825 | [
"needs triage"
] | IsisChameleon | 1 |
comfyanonymous/ComfyUI | pytorch | 7,299 | Can't find torch version, can't install natten | ### Expected Behavior
[notice] A new release of pip available: 22.2.1 -> 25.0.1
[notice] To update, run: python.exe -m pip install --upgrade pip
Installing natten for PMRF...
Searching for CUDA and Torch versions for installing atten needed by PMRF...
************************************
Error: Can't find torch version, can't install natten
PMRF will not work until natten is installed, see https://github.com/SHI-Labs/NATTEN for help in installing natten.
************************************
Prestartup times for custom nodes:
0.0 seconds: E:\workplace\ComfyUI\custom_nodes\ComfyUI-Easy-Use
0.0 seconds: E:\workplace\ComfyUI\custom_nodes\rgthree-comfy
3.2 seconds: E:\workplace\ComfyUI\custom_nodes\ComfyUI-Manager
20.9 seconds: E:\workplace\ComfyUI\custom_nodes\ComfyUI-PMRF
Checkpoint files will always be loaded safely.
Total VRAM 24576 MB, total RAM 65277 MB
pytorch version: 2.6.0+cu126
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync
Using pytorch attention
ComfyUI version: 0.3.26
ComfyUI frontend version: 1.12.14
### Actual Behavior
Can't find torch version, can't install natten
### Steps to Reproduce
Can't find torch version, can't install natten
### Debug Logs
```powershell
Can't find torch version, can't install natten
```
### Other
_No response_ | closed | 2025-03-18T13:53:07Z | 2025-03-18T15:03:35Z | https://github.com/comfyanonymous/ComfyUI/issues/7299 | [
"User Support"
] | Song367 | 1 |
sktime/pytorch-forecasting | pandas | 992 | Dataloader in TFT tutorial goes beyond last time point of dataset and sets missing values to zero | - PyTorch-Forecasting version: 0.10.1
- PyTorch version: 1.11.0
- Python version: 3.9.12
- Operating System: Ubuntu 20.04.4 LTS
Dear developer team,
I noticed that some time series in the tensors produced by the training dataloader in the [tutorial on demand forecasting with TFT](https://pytorch-forecasting.readthedocs.io/en/stable/tutorials/stallion.html) have only zeros after a certain time point. Looking at the first time index of each mini-batch time series with `x_to_index(x)` and the corresponding time series, I found out that it only happens when the first time index of the given time series is too close to the maximum time index of the dataset.
For instance in the demand forecasting tutorial
```
torch.manual_seed(2)
nix, niy = next(iter(train_dataloader))
training.x_to_index(nix)[0:5]
```
produces
```
time_idx agency sku
0 16 Agency_50 SKU_05
1 15 Agency_46 SKU_05
2 48 Agency_38 SKU_02
3 32 Agency_05 SKU_26
4 12 Agency_47 SKU_17
```
The third time series starts with the time index 48, which is already too much, as the maximum time index is 59 and having
```
max_prediction_length = 6
max_encoder_length = 24
```
there are no values for the tail of the encoder sequence.
This is confirmed by the third time series of that mini-batch
`nix["encoder_cont"][2, :, :5]` takes only the first 5 variables and produces
```
tensor([[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.5067, 0.4525, 0.1667, 1.1313, 0.6161],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],
[0.0000, 0.0000, 0.0000, 0.0000, 0.0000]])
```
Thus, the dataloader goes too far in time, taking time series that lie fewer than `encoder_length` + `decoder_length` samples from the last sample of the training dataset. The same happens with categorical variables and when creating the validation dataloader with `predict = False` in `TimeSeriesDataSet`. Basically, I always observe such behavior except when setting `predict = True` in `TimeSeriesDataSet`.
I guess it shouldn't work that way as these are just incorrect time series making the model learn incorrect dependencies regardless of whether it's training or validation. Please correct me if I'm wrong.
### Code to reproduce the problem
```
import os
import warnings
warnings.filterwarnings("ignore") # avoid printing out absolute paths
os.chdir("../../..")
import copy
from pathlib import Path
import warnings
import numpy as np
import pandas as pd
import pytorch_lightning as pl
from pytorch_lightning.callbacks import EarlyStopping, LearningRateMonitor
from pytorch_lightning.loggers import TensorBoardLogger
import torch
from pytorch_forecasting import Baseline, TemporalFusionTransformer, TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer
from pytorch_forecasting.metrics import SMAPE, PoissonLoss, QuantileLoss
from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters
from pytorch_forecasting.data.examples import get_stallion_data
data = get_stallion_data()
# add time index
data["time_idx"] = data["date"].dt.year * 12 + data["date"].dt.month
data["time_idx"] -= data["time_idx"].min()
# add additional features
data["month"] = data.date.dt.month.astype(str).astype("category") # categories have be strings
data["log_volume"] = np.log(data.volume + 1e-8)
data["avg_volume_by_sku"] = data.groupby(["time_idx", "sku"], observed=True).volume.transform("mean")
data["avg_volume_by_agency"] = data.groupby(["time_idx", "agency"], observed=True).volume.transform("mean")
# we want to encode special days as one variable and thus need to first reverse one-hot encoding
special_days = [
"easter_day",
"good_friday",
"new_year",
"christmas",
"labor_day",
"independence_day",
"revolution_day_memorial",
"regional_games",
"fifa_u_17_world_cup",
"football_gold_cup",
"beer_capital",
"music_fest",
]
data[special_days] = data[special_days].apply(lambda x: x.map({0: "-", 1: x.name})).astype("category")
max_prediction_length = 6
max_encoder_length = 24
training_cutoff = data["time_idx"].max() - max_prediction_length
training = TimeSeriesDataSet(
data[lambda x: x.time_idx <= training_cutoff],
time_idx="time_idx",
target="volume",
group_ids=["agency", "sku"],
min_encoder_length=max_encoder_length // 2, # keep encoder length long (as it is in the validation set)
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=["agency", "sku"],
static_reals=["avg_population_2017", "avg_yearly_household_income_2017"],
time_varying_known_categoricals=["special_days", "month"],
variable_groups={"special_days": special_days}, # group of categorical variables can be treated as one variable
time_varying_known_reals=["time_idx", "price_regular", "discount_in_percent"],
time_varying_unknown_categoricals=[],
time_varying_unknown_reals=[
"volume",
"log_volume",
"industry_volume",
"soda_volume",
"avg_max_temp",
"avg_volume_by_agency",
"avg_volume_by_sku",
],
target_normalizer=GroupNormalizer(
groups=["agency", "sku"], transformation="softplus"
), # use softplus and normalize by group
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
)
# create validation set (predict=True) which means to predict the last max_prediction_length points in time
# for each series
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)
# create dataloaders for model
batch_size = 128 # set this between 32 to 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size * 10, num_workers=0)
torch.manual_seed(2)
nix, niy = next(iter(train_dataloader))
print(training.x_to_index(nix)[0:5])
print(nix["encoder_cont"][2, :, :5])
```
### Colab Notebook:
https://colab.research.google.com/drive/1ce45UFTurrds5t5fLpFdSYuKnHVGPzfR?usp=sharing
| closed | 2022-05-20T19:19:42Z | 2022-06-15T11:43:54Z | https://github.com/sktime/pytorch-forecasting/issues/992 | [] | hd1894 | 2 |
amidaware/tacticalrmm | django | 1917 | I updated the RMM to v0.19.1 and can no longer open the agents. | I updated the SSL certificate and also the RMM version to v0.19.1, but now I can't connect to the machines; the connect button is gone. Can anyone help me?
| closed | 2024-07-19T18:13:11Z | 2024-10-18T00:16:52Z | https://github.com/amidaware/tacticalrmm/issues/1917 | [] | Cleberson-Brandao | 12 |
JoeanAmier/XHS-Downloader | api | 123 | explore_data has no column named 动图地址 | I have already deleted the entire directory, everything except the Downloads folder, but running the latest (2.1) version still shows the following error:
OperationalError: table explore_data has no column named ๅจๅพๅฐๅ
ๆฏไธๆฏๆง็databaseๆไปถไผๅญๅจไบไปไนๅ
ถไป็ๅฐๆน๏ผ | closed | 2024-07-25T06:23:06Z | 2024-07-25T15:08:57Z | https://github.com/JoeanAmier/XHS-Downloader/issues/123 | [] | jyu041 | 2 |
jpadilla/django-rest-framework-jwt | django | 195 | Need to receive signals when a token is created | Hi,
It would be great to get a signal when a user successfully obtains a token with their username/password.
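A hypothetical sketch of what this could look like; the signal name and hook point are illustrative, not the actual django-rest-framework-jwt API:
```python
import django.dispatch

# a custom signal the package could expose
token_obtained = django.dispatch.Signal()

# and fire it from the token view after a successful login, e.g.:
# token_obtained.send(sender=ObtainJSONWebToken, user=user, token=token)
```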
| closed | 2016-01-26T16:47:35Z | 2020-01-09T20:16:34Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/195 | [] | stunaz | 3 |
mwaskom/seaborn | data-visualization | 3,570 | Boxplot Y-axis Labels Incorrectly Scaled When Font Size Is Altered | **Description:**
When updating the y-axis tick labels' font size using Matplotlib and Seaborn, the y-axis labels (i.e. numbers at the y-axis) appear to be incorrectly scaled, showing smaller numerical values than the actual data points if the font size is decreased compared to default.
**Steps to Reproduce:**
1. Create a boxplot using Seaborn's sns.boxplot with a specific set of data.
2. Overlay individual data points using Seaborn's sns.swarmplot.
3. Set the y-axis tick labels with a custom font size using ax.set_yticklabels() and a font dictionary.
**Expected Behavior:**
The numerical values of the y-axis tick labels should accurately reflect the data's scale and not be altered by changes in font size. The numerical values should simply appear in the new font style.
**Actual Behavior:**
The numerical values of the y-axis tick labels are scaled down, not matching the actual data points, giving the impression that the median is lower than it should be.
**Additional Context:**
This behavior was observed when attempting to set the font size of the y-axis tick labels for consistency. The issue seems to occur when the ax.set_yticklabels() method is used with a font size defined in the font dictionary. It's unclear whether the problem lies in the font size scaling or the actual rendering of the labels on the y-axis.
```python
# Sample code to reproduce the issue:
import seaborn as sns
import matplotlib.pyplot as plt

# Example data and plotting code that causes the issue
# ...
ax.set_xticklabels(ax.get_xticklabels(), fontdict={'size': 5})
# Suspected problematic line
ax.set_yticklabels(ax.get_yticklabels(), fontdict={'size': 5})
plt.show()
```
(A tick_params-based alternative is sketched just after this record.) | closed | 2023-11-23T15:36:56Z | 2023-12-02T14:03:44Z | https://github.com/mwaskom/seaborn/issues/3570 | [
"mod:categorical",
"needs-reprex"
] | richtertill | 2 |
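Regarding the seaborn report above: calling `ax.set_yticklabels()` with a fixed list freezes the tick text, so the labels stop tracking the data. The usual way to change tick label size without replacing the labels is `tick_params`, sketched here:
```python
# updates the font size of the auto-generated tick labels in place
ax.tick_params(axis="both", labelsize=5)
```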
roboflow/supervision | deep-learning | 1,647 | Remove Images for which there are no annotations | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Hi,
Any idea how to remove the images for which there are no annotations, or do we need to do it manually in the COCO-format JSON?
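For reference, a minimal pure-Python sketch of pruning unannotated images from a COCO file (file names are illustrative):
```python
import json

with open("annotations.json") as f:
    coco = json.load(f)

# keep only images referenced by at least one annotation
annotated_ids = {ann["image_id"] for ann in coco["annotations"]}
coco["images"] = [img for img in coco["images"] if img["id"] in annotated_ids]

with open("annotations_filtered.json", "w") as f:
    json.dump(coco, f)
```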
### Use case
Cleaning COCO
### Additional
N/a
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2024-11-02T13:36:04Z | 2024-11-04T09:19:01Z | https://github.com/roboflow/supervision/issues/1647 | [
"enhancement"
] | shanalikhan | 1 |
pytest-dev/pytest-xdist | pytest | 742 | 2.5.0: pytest is failing | I'm trying to package your module as an rpm package, so I'm using the typical PEP517-based build, install and test cycle used when building packages from a non-root account.
- `python3 -sBm build -w`
- install .whl file in </install/prefix>
- "pytest with PYTHONPATH pointing to sitearch and sitelib inside </install/prefix>
- during testing I'm disabling few pytest extensions which in other tickets have been identified as causing some issues
Here is pytest output:
```console
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-xdist-2.5.0-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-pytest-xdist-2.5.0-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ /usr/bin/pytest -ra -p no:randomly -p no:benchmark -p no:django -p no:twisted --deselect testing/acceptance_test.py::test_issue_594_random_parametrize --deselect testing/test_newhooks.py::TestHooks::test_node_collection_finished
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1
rootdir: /home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0, configfile: tox.ini, testpaths: testing
plugins: xdist-2.5.0, mock-3.6.1, cov-2.12.1, anyio-3.3.4, flaky-3.7.0, console-scripts-1.2.0, asyncio-0.16.0, freezegun-0.4.2, flake8-1.0.7, rerunfailures-9.1.1, yagot-0.5.0, forked-1.4.0, ordering-0.6, Faker-10.0.0, pyfakefs-4.5.3, datadir-1.3.1, regressions-2.2.0, timeout-2.0.2, perf-0.10.1, trio-0.7.0, requests-mock-1.9.3, hypothesis-6.31.5, easy-server-0.8.0
collected 167 items / 2 deselected / 165 selected
testing/acceptance_test.py ..........F......x.......xx.......................x......................................... [ 55%]
testing/test_dsession.py ........x...x [ 63%]
testing/test_looponfail.py ...........x... [ 72%]
testing/test_newhooks.py F. [ 73%]
testing/test_plugin.py .............. [ 82%]
testing/test_remote.py x...Fx...... [ 89%]
testing/test_workermanage.py ........x.......s [100%]
================================================================================= FAILURES =================================================================================
_______________________________________________________________ TestDistribution.test_dist_tests_with_crash ________________________________________________________________
self = <acceptance_test.TestDistribution object at 0x7fcf8f6e0070>, pytester = <Pytester PosixPath('/tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0')>
@pytest.mark.xfail("sys.platform.startswith('java')", run=False)
def test_dist_tests_with_crash(self, pytester: pytest.Pytester) -> None:
if not hasattr(os, "kill"):
pytest.skip("no os.kill")
p1 = pytester.makepyfile(
"""
import pytest
def test_fail0():
assert 0
def test_fail1():
raise ValueError()
def test_ok():
pass
def test_skip():
pytest.skip("hello")
def test_crash():
import time
import os
time.sleep(0.5)
os.kill(os.getpid(), 15)
"""
)
result = pytester.runpytest(p1, "-v", "-d", "-n1")
> result.stdout.fnmatch_lines(
[
"*Python*",
"*PASS**test_ok*",
"*node*down*",
"*3 failed, 1 passed, 1 skipped*",
]
)
E Failed: nomatch: '*Python*'
E and: '============================= test session starts =============================='
E fnmatch: '*Python*'
E with: 'platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1 -- /usr/bin/python3'
E nomatch: '*PASS**test_ok*'
E and: 'cachedir: .pytest_cache'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'Using --randomly-seed=244357277'
E and: "hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0/.hypothesis/examples')"
E and: 'rootdir: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0'
E and: 'plugins: xdist-2.5.0, mock-3.6.1, cov-2.12.1, anyio-3.3.4, flaky-3.7.0, console-scripts-1.2.0, asyncio-0.16.0, freezegun-0.4.2, flake8-1.0.7, rerunfailures-9.1.1, yagot-0.5.0, forked-1.4.0, ordering-0.6, Faker-10.0.0, benchmark-3.4.1, pyfakefs-4.5.3, datadir-1.3.1, regressions-2.2.0, timeout-2.0.2, randomly-3.10.3, perf-0.10.1, trio-0.7.0, requests-mock-1.9.3, hypothesis-6.31.5, easy-server-0.8.0'
E and: 'gw0 I'
E and: ''
E and: '[gw0] linux Python 3.8.12 cwd: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0'
E and: ''
E and: '[gw0] Python 3.8.12 (default, Dec 17 2021, 08:35:49) -- [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]'
E and: 'gw0 [5]'
E and: ''
E and: 'scheduling tests via LoadScheduling'
E and: ''
E and: 'test_dist_tests_with_crash.py::test_fail0 '
E and: '[gw0] [ 20%] FAILED test_dist_tests_with_crash.py::test_fail0 '
E and: 'test_dist_tests_with_crash.py::test_fail1 '
E and: '[gw0] [ 40%] FAILED test_dist_tests_with_crash.py::test_fail1 '
E and: 'test_dist_tests_with_crash.py::test_crash '
E and: '[gw0] node down: Not properly terminated'
E and: '[gw0] [ 60%] FAILED test_dist_tests_with_crash.py::test_crash '
E and: ''
E and: 'replacing crashed worker gw0'
E and: ''
E and: '[gw1] linux Python 3.8.12 cwd: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0'
E and: ''
E and: '[gw1] Python 3.8.12 (default, Dec 17 2021, 08:35:49) -- [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]'
E and: ''
E and: 'test_dist_tests_with_crash.py::test_skip '
E and: '[gw1] [ 80%] SKIPPED test_dist_tests_with_crash.py::test_skip '
E and: 'test_dist_tests_with_crash.py::test_ok '
E fnmatch: '*PASS**test_ok*'
E with: '[gw1] [100%] PASSED test_dist_tests_with_crash.py::test_ok '
E nomatch: '*node*down*'
E and: ''
E and: '=================================== FAILURES ==================================='
E and: '__________________________________ test_fail0 __________________________________'
E and: '[gw0] linux -- Python 3.8.12 /usr/bin/python3'
E and: ''
E and: ' def test_fail0():'
E and: '> assert 0'
E and: 'E assert 0'
E and: ''
E and: 'test_dist_tests_with_crash.py:3: AssertionError'
E and: '__________________________________ test_fail1 __________________________________'
E and: '[gw0] linux -- Python 3.8.12 /usr/bin/python3'
E and: ''
E and: ' def test_fail1():'
E and: '> raise ValueError()'
E and: 'E ValueError'
E and: ''
E and: 'test_dist_tests_with_crash.py:5: ValueError'
E and: '________________________ test_dist_tests_with_crash.py _________________________'
E and: '[gw0] linux -- Python 3.8.12 /usr/bin/python3'
E and: "worker 'gw0' crashed while running 'test_dist_tests_with_crash.py::test_crash'"
E and: '=========================== short test summary info ============================'
E and: 'FAILED test_dist_tests_with_crash.py::test_fail0 - assert 0'
E and: 'FAILED test_dist_tests_with_crash.py::test_fail1 - ValueError'
E and: 'FAILED test_dist_tests_with_crash.py::test_crash'
E and: '==================== 3 failed, 1 passed, 1 skipped in 6.52s ===================='
E remains unmatched: '*node*down*'
/home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0/testing/acceptance_test.py:181: Failed
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
Using --randomly-seed=244357277
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0/.hypothesis/examples')
rootdir: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0
plugins: xdist-2.5.0, mock-3.6.1, cov-2.12.1, anyio-3.3.4, flaky-3.7.0, console-scripts-1.2.0, asyncio-0.16.0, freezegun-0.4.2, flake8-1.0.7, rerunfailures-9.1.1, yagot-0.5.0, forked-1.4.0, ordering-0.6, Faker-10.0.0, benchmark-3.4.1, pyfakefs-4.5.3, datadir-1.3.1, regressions-2.2.0, timeout-2.0.2, randomly-3.10.3, perf-0.10.1, trio-0.7.0, requests-mock-1.9.3, hypothesis-6.31.5, easy-server-0.8.0
gw0 I
[gw0] linux Python 3.8.12 cwd: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0
[gw0] Python 3.8.12 (default, Dec 17 2021, 08:35:49) -- [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
gw0 [5]
scheduling tests via LoadScheduling
test_dist_tests_with_crash.py::test_fail0
[gw0] [ 20%] FAILED test_dist_tests_with_crash.py::test_fail0
test_dist_tests_with_crash.py::test_fail1
[gw0] [ 40%] FAILED test_dist_tests_with_crash.py::test_fail1
test_dist_tests_with_crash.py::test_crash
[gw0] node down: Not properly terminated
[gw0] [ 60%] FAILED test_dist_tests_with_crash.py::test_crash
replacing crashed worker gw0
[gw1] linux Python 3.8.12 cwd: /tmp/pytest-of-tkloczko/pytest-10/test_dist_tests_with_crash0
[gw1] Python 3.8.12 (default, Dec 17 2021, 08:35:49) -- [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)]
test_dist_tests_with_crash.py::test_skip
[gw1] [ 80%] SKIPPED test_dist_tests_with_crash.py::test_skip
test_dist_tests_with_crash.py::test_ok
[gw1] [100%] PASSED test_dist_tests_with_crash.py::test_ok
=================================== FAILURES ===================================
__________________________________ test_fail0 __________________________________
[gw0] linux -- Python 3.8.12 /usr/bin/python3
def test_fail0():
> assert 0
E assert 0
test_dist_tests_with_crash.py:3: AssertionError
__________________________________ test_fail1 __________________________________
[gw0] linux -- Python 3.8.12 /usr/bin/python3
def test_fail1():
> raise ValueError()
E ValueError
test_dist_tests_with_crash.py:5: ValueError
________________________ test_dist_tests_with_crash.py _________________________
[gw0] linux -- Python 3.8.12 /usr/bin/python3
worker 'gw0' crashed while running 'test_dist_tests_with_crash.py::test_crash'
=========================== short test summary info ============================
FAILED test_dist_tests_with_crash.py::test_fail0 - assert 0
FAILED test_dist_tests_with_crash.py::test_fail1 - ValueError
FAILED test_dist_tests_with_crash.py::test_crash
==================== 3 failed, 1 passed, 1 skipped in 6.52s ====================
_____________________________________________________________________ TestHooks.test_runtest_logreport _____________________________________________________________________
self = <test_newhooks.TestHooks object at 0x7fcf4875c0d0>, pytester = <Pytester PosixPath('/tmp/pytest-of-tkloczko/pytest-10/test_runtest_logreport0')>
def test_runtest_logreport(self, pytester: pytest.Pytester) -> None:
"""Test that log reports from pytest_runtest_logreport when running
with xdist contain "node", "nodeid", "worker_id", and "testrun_uid" attributes. (#8)
"""
pytester.makeconftest(
"""
def pytest_runtest_logreport(report):
if hasattr(report, 'node'):
if report.when == "call":
workerid = report.node.workerinput['workerid']
testrunuid = report.node.workerinput['testrunuid']
if workerid != report.worker_id:
print("HOOK: Worker id mismatch: %s %s"
% (workerid, report.worker_id))
elif testrunuid != report.testrun_uid:
print("HOOK: Testrun uid mismatch: %s %s"
% (testrunuid, report.testrun_uid))
else:
print("HOOK: %s %s %s"
% (report.nodeid, report.worker_id, report.testrun_uid))
"""
)
res = pytester.runpytest("-n1", "-s")
> res.stdout.fnmatch_lines(
[
"*HOOK: test_runtest_logreport.py::test_a gw0 *",
"*HOOK: test_runtest_logreport.py::test_b gw0 *",
"*HOOK: test_runtest_logreport.py::test_c gw0 *",
"*3 passed*",
]
)
E Failed: nomatch: '*HOOK: test_runtest_logreport.py::test_a gw0 *'
E and: '============================= test session starts =============================='
E and: 'platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1'
E and: 'benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)'
E and: 'Using --randomly-seed=2699028449'
E and: 'rootdir: /tmp/pytest-of-tkloczko/pytest-10/test_runtest_logreport0'
E and: 'plugins: xdist-2.5.0, mock-3.6.1, cov-2.12.1, anyio-3.3.4, flaky-3.7.0, console-scripts-1.2.0, asyncio-0.16.0, freezegun-0.4.2, flake8-1.0.7, rerunfailures-9.1.1, yagot-0.5.0, forked-1.4.0, ordering-0.6, Faker-10.0.0, benchmark-3.4.1, pyfakefs-4.5.3, datadir-1.3.1, regressions-2.2.0, timeout-2.0.2, randomly-3.10.3, perf-0.10.1, trio-0.7.0, requests-mock-1.9.3, hypothesis-6.31.5, easy-server-0.8.0'
E and: 'gw0 I'
E and: 'gw0 [3]'
E and: ''
E and: '.HOOK: test_runtest_logreport.py::test_b gw0 0d69e931b31a4f4a8e735485eb0afc7d'
E and: '.HOOK: test_runtest_logreport.py::test_c gw0 0d69e931b31a4f4a8e735485eb0afc7d'
E fnmatch: '*HOOK: test_runtest_logreport.py::test_a gw0 *'
E with: '.HOOK: test_runtest_logreport.py::test_a gw0 0d69e931b31a4f4a8e735485eb0afc7d'
E nomatch: '*HOOK: test_runtest_logreport.py::test_b gw0 *'
E and: ''
E and: '============================== 3 passed in 3.10s ==============================='
E remains unmatched: '*HOOK: test_runtest_logreport.py::test_b gw0 *'
/home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0/testing/test_newhooks.py:39: Failed
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
============================= test session starts ==============================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.11.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
Using --randomly-seed=2699028449
rootdir: /tmp/pytest-of-tkloczko/pytest-10/test_runtest_logreport0
plugins: xdist-2.5.0, mock-3.6.1, cov-2.12.1, anyio-3.3.4, flaky-3.7.0, console-scripts-1.2.0, asyncio-0.16.0, freezegun-0.4.2, flake8-1.0.7, rerunfailures-9.1.1, yagot-0.5.0, forked-1.4.0, ordering-0.6, Faker-10.0.0, benchmark-3.4.1, pyfakefs-4.5.3, datadir-1.3.1, regressions-2.2.0, timeout-2.0.2, randomly-3.10.3, perf-0.10.1, trio-0.7.0, requests-mock-1.9.3, hypothesis-6.31.5, easy-server-0.8.0
gw0 I
gw0 [3]
.HOOK: test_runtest_logreport.py::test_b gw0 0d69e931b31a4f4a8e735485eb0afc7d
.HOOK: test_runtest_logreport.py::test_c gw0 0d69e931b31a4f4a8e735485eb0afc7d
.HOOK: test_runtest_logreport.py::test_a gw0 0d69e931b31a4f4a8e735485eb0afc7d
============================== 3 passed in 3.10s ===============================
__________________________________________________________________ TestWorkerInteractor.test_runtests_all __________________________________________________________________
self = <test_remote.TestWorkerInteractor object at 0x7fcf44835520>, worker = <test_remote.WorkerSetup object at 0x7fcf44877340>
unserialize_report = <function TestWorkerInteractor.unserialize_report.<locals>.unserialize at 0x7fcf43dadf70>
def test_runtests_all(self, worker: WorkerSetup, unserialize_report) -> None:
worker.pytester.makepyfile(
"""
def test_func(): pass
def test_func2(): pass
"""
)
worker.setup()
ev = worker.popevent()
assert ev.name == "workerready"
ev = worker.popevent()
assert ev.name == "collectionstart"
assert not ev.kwargs
ev = worker.popevent("collectionfinish")
ids = ev.kwargs["ids"]
assert len(ids) == 2
worker.sendcommand("runtests_all")
worker.sendcommand("shutdown")
for func in "::test_func", "::test_func2":
for i in range(3): # setup/call/teardown
ev = worker.popevent("testreport")
assert ev.name == "testreport"
rep = unserialize_report(ev.kwargs["data"])
> assert rep.nodeid.endswith(func)
E AssertionError: assert False
E + where False = <built-in method endswith of str object at 0x7fcf5125d390>('::test_func')
E + where <built-in method endswith of str object at 0x7fcf5125d390> = 'test_runtests_all.py::test_func2'.endswith
E + where 'test_runtests_all.py::test_func2' = <TestReport 'test_runtests_all.py::test_func2' when='setup' outcome='passed'>.nodeid
/home/tkloczko/rpmbuild/BUILD/pytest-xdist-2.5.0/testing/test_remote.py:182: AssertionError
--------------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------------
skipping <EventCall logstart(**{'nodeid': 'test_runtests_all.py::test_func2', 'location': ('test_runtests_all.py', 1, 'test_func2')})>
============================================================================= warnings summary =============================================================================
testing/acceptance_test.py: 74 warnings
testing/test_dsession.py: 1 warning
testing/test_newhooks.py: 2 warnings
testing/test_remote.py: 4 warnings
/usr/lib/python3.8/site-packages/pytest_benchmark/logger.py:46: PytestBenchmarkWarning: Benchmarks are automatically disabled because xdist plugin is active.Benchmarks cannot be performed reliably in a parallelized environment.
warner(PytestBenchmarkWarning(text))
testing/acceptance_test.py: 80 warnings
testing/test_dsession.py: 10 warnings
testing/test_looponfail.py: 8 warnings
testing/test_newhooks.py: 2 warnings
testing/test_plugin.py: 16 warnings
testing/test_remote.py: 10 warnings
testing/test_workermanage.py: 13 warnings
/usr/lib/python3.8/site-packages/pytest_randomly/__init__.py:50: UserWarning: The NumPy module was reloaded (imported a second time). This can in some cases result in small but subtle issues and is discouraged.
from numpy import random as np_random
-- Docs: https://docs.pytest.org/en/stable/warnings.html
========================================================================= short test summary info ==========================================================================
SKIPPED [1] ../../../../../usr/lib/python3.8/site-packages/_pytest/config/__init__.py:1473: no 'gspecs' option found
XFAIL testing/acceptance_test.py::TestDistEach::test_simple_diffoutput
reason: [NOTRUN] other python versions might not have pytest installed
XFAIL testing/acceptance_test.py::test_terminate_on_hangingnode
XFAIL testing/acceptance_test.py::test_session_hooks
reason: [NOTRUN] works if run outside test suite
XFAIL testing/acceptance_test.py::TestNodeFailure::test_each_multiple
#20: xdist race condition on node restart
XFAIL testing/test_dsession.py::TestDistReporter::test_rsync_printing
XFAIL testing/test_dsession.py::test_pytest_issue419
duplicate test ids not supported yet
XFAIL testing/test_looponfail.py::TestLooponFailing::test_looponfail_removed_test
broken by pytest 3.1+
XFAIL testing/test_remote.py::test_remoteinitconfig
#59
XFAIL testing/test_remote.py::TestWorkerInteractor::test_happy_run_events_converted
reason: implement a simple test for event production
XFAIL testing/test_workermanage.py::TestNodeManager::test_rsync_roots_no_roots
reason: [NOTRUN]
FAILED testing/acceptance_test.py::TestDistribution::test_dist_tests_with_crash - Failed: nomatch: '*Python*'
FAILED testing/test_newhooks.py::TestHooks::test_runtest_logreport - Failed: nomatch: '*HOOK: test_runtest_logreport.py::test_a gw0 *'
FAILED testing/test_remote.py::TestWorkerInteractor::test_runtests_all - AssertionError: assert False
======================================= 3 failed, 151 passed, 1 skipped, 2 deselected, 10 xfailed, 220 warnings in 502.36s (0:08:22) =======================================
``` | closed | 2021-12-18T17:36:28Z | 2022-07-23T13:18:16Z | https://github.com/pytest-dev/pytest-xdist/issues/742 | [] | kloczek | 6 |
ading2210/poe-api | graphql | 89 | bug? | ```
INFO:root:Downloading next_data...
Traceback (most recent call last):
File "G:\AI\poe\poe\poe-api-main\poe-api-main\examples\send_message.py", line 10, in <module>
client = poe.Client(token)
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 129, in __init__
self.setup_connection()
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 134, in setup_connection
self.next_data = self.get_next_data(overwrite_vars=True)
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 173, in get_next_data
r = request_with_retries(self.session.get, self.home_url)
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\poe.py", line 45, in request_with_retries
r = method(*args, **kwargs)
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\tls_client\sessions.py", line 422, in get
return self.execute_request(method="GET", url=url, **kwargs)
File "C:\Users\lin85\AppData\Local\Programs\Python\Python310\lib\site-packages\tls_client\sessions.py", line 405, in execute_request
raise TLSClientExeption(response_object["body"])
tls_client.exceptions.TLSClientExeption: failed to do request: Get "https://poe.com": dial tcp: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
``` | open | 2023-05-31T07:42:32Z | 2023-07-15T18:49:03Z | https://github.com/ading2210/poe-api/issues/89 | [
"bug"
] | 40740 | 6 |
huggingface/datasets | machine-learning | 7,399 | Synchronize parameters for various datasets | ### Describe the bug
[IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.Dataset.map).
There might be other parameters missing - I haven't checked.
### Steps to reproduce the bug
```python
from datasets import Dataset, IterableDataset, IterableDatasetDict

ds = IterableDatasetDict({"train": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3),
                          "validate": Dataset.from_dict({"a": range(6)}).to_iterable_dataset(num_shards=3)})
for d in ds["train"]:
    print(d)

ds = ds.map(lambda x: {k: v+1 for k, v in x.items()}, desc="increment")
for d in ds["train"]:
    print(d)
```
### Expected behavior
The description parameter should be available for all datasets (or none).
### Environment info
- `datasets` version: 3.2.0
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.11.11
- `huggingface_hub` version: 0.28.1
- PyArrow version: 17.0.0
- Pandas version: 2.2.2
- `fsspec` version: 2024.9.0 | open | 2025-02-14T09:15:11Z | 2025-02-19T11:50:29Z | https://github.com/huggingface/datasets/issues/7399 | [] | grofte | 2 |
aeon-toolkit/aeon | scikit-learn | 2,110 | [ENH] Add PyODAdapter-implementation for CBLOF | ### Describe the feature or idea you want to propose
The [`PyODAdapter`](https://github.com/aeon-toolkit/aeon/blob/main/aeon/anomaly_detection/_pyodadapter.py) in aeon allows us to use any outlier detector from [PyOD](https://github.com/yzhao062/pyod), originally proposed for relational data, for time series anomaly detection (TSAD) as well. Not all detectors are equally well suited for TSAD, however. We want to represent the frequently used and competitive outlier detection techniques directly within the `anomaly_detection` module of aeon.
**Implement the [CBLOF method](https://github.com/yzhao062/pyod/blob/master/pyod/models/cblof.py#L25)** using the `PyODAdapter`.
### Describe your proposed solution
- Create a new file in `aeon.anomaly_detection` for the method
- Create a new estimator class with `PyODAdapter` as the parent
- Expose the algorithm's hyperparameters as constructor arguments, create the PyOD model and pass it to the super-constructor
- Document your class
- Add tests for certain edge cases if necessary
---
Example for IsolationForest:
```python
class IsolationForest(PyODAdapter):
    """documentation ..."""

    def __init__(self, n_estimators: int = 100, max_samples: int | str = "auto", ..., window_size: int, stride: int):
        model = IForest(n_estimators, max_samples, ...)
        super().__init__(model, window_size, stride)

    @classmethod
    def get_test_params(cls, parameter_set="default"):
        """..."""
        return {"n_estimators": 10, ...}
```
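A hypothetical CBLOF skeleton following the same pattern (constructor arguments abbreviated; the real PyOD `CBLOF` exposes more options):
```python
from pyod.models.cblof import CBLOF as _CBLOF

class CBLOF(PyODAdapter):
    """Cluster-Based Local Outlier Factor adapted for sliding windows."""

    def __init__(self, n_clusters: int = 8, alpha: float = 0.9, beta: int = 5,
                 window_size: int = 10, stride: int = 1):
        model = _CBLOF(n_clusters=n_clusters, alpha=alpha, beta=beta)
        super().__init__(model, window_size, stride)
```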
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
_No response_ | closed | 2024-09-27T13:27:08Z | 2024-10-28T19:17:41Z | https://github.com/aeon-toolkit/aeon/issues/2110 | [
"enhancement",
"interfacing algorithms",
"anomaly detection"
] | SebastianSchmidl | 3 |
public-apis/public-apis | api | 3,486 | AA KENYA QUIZ WEBSITE | Thanks for looking to open an issue for this project.
If you are opening an issue to suggest adding a new entry, please consider opening a pull request instead!
| closed | 2023-04-03T06:20:50Z | 2023-05-26T18:46:31Z | https://github.com/public-apis/public-apis/issues/3486 | [] | markchweya | 0 |
horovod/horovod | pytorch | 3,268 | Unable to load most recent checkpoint for Pytorch and Pytorch lightning Estimator | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.8.1
3. Horovod version: 0.23.0
4. MPI version:
5. CUDA version:
6. NCCL version:
7. Python version: 3.8
8. Spark / PySpark version: 3.1.2
9. Ray version:
10. OS and version:
11. GCC version:
12. CMake version:
**Bug report:**
In the case of the PyTorch Lightning estimator, the _read_checkpoint() API does not return the latest checkpoint stored in the run path.
Reason: the PyTorch Lightning estimator calls store.get_checkpoints(), which looks for a folder named 'checkpoint' in the run path, while there is no folder named checkpoint; instead there is a temp folder generated via tempfile.TemporaryDirectory().
In the case of the PyTorch estimator, the checkpoint stored in the run path is not overwritten if multiple iterations are done using the same run path, which leads to the _load_checkpoint() API returning a stale checkpoint.
| closed | 2021-11-10T11:34:06Z | 2021-11-23T07:12:35Z | https://github.com/horovod/horovod/issues/3268 | [
"bug"
] | kamalsharma2 | 3 |
PokemonGoF/PokemonGo-Bot | automation | 5,653 | Edit Type codes for renaming Pokemon | ### Short Description
Option to edit the type codes of Pokemon according to personal preference
### Possible solution
Something like this in the NicknamePokemon Task:
```json
{
    "type": "NicknamePokemon",
    "config": {
        "enabled": true,
        "nickname_above_iv": 0.8,
        "nickname_above_cp": 1500,
        "nickname_template": "{iv_pct}-{attack_code}",
        "nickname_wait_min": 3,
        "nickname_wait_max": 5,
        "type_codes": {
            "Bug": "Bu",
            "Dark": "Da",
            "Dragon": "Dr",
            "Electric": "Ele",
            "Fairy": "Fy",
            "Fighting": "Fg",
            "Fire": "Fi",
            "Flying": "Fl",
            "Ghost": "Gh",
            "Grass": "Gr",
            "Ground": "Go",
            "Ice": "I",
            "Normal": "No",
            "Poison": "Po",
            "Psychic": "Py",
            "Rock": "Ro",
            "Steel": "St",
            "Water": "Wa"
        }
    }
},
```
### How it would help others
It would be easier to remember your own preferences than to keep checking the list specified in the documentation.
Additional request: please let me know the file (and line number, if possible) that I can edit to manually change the codes for now.
| open | 2016-09-24T11:34:22Z | 2016-09-27T19:09:04Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5653 | [
"Feature Request"
] | abhinavagrawal1995 | 6 |
gee-community/geemap | streamlit | 460 | style_callback for Map.add_geojson()? | ### Description
It would be great for https://geemap.org/geemap/#geemap.geemap.Map.add_geojson to support a `style_callback` parameter like https://ipyleaflet.readthedocs.io/en/latest/api_reference/geo_json.html does. Otherwise I don't see any way to define "dynamic styling". | closed | 2021-05-06T21:18:11Z | 2021-05-07T04:21:58Z | https://github.com/gee-community/geemap/issues/460 | [
"Feature Request"
] | deeplook | 1 |
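For reference on the geemap request above, a sketch of the ipyleaflet pattern being pointed at (`geojson_dict` and the `value` property are illustrative):
```python
import ipyleaflet

def style_callback(feature):
    # style each feature from its own properties
    value = feature["properties"].get("value", 0)
    return {"fillColor": "red" if value > 10 else "blue"}

layer = ipyleaflet.GeoJSON(data=geojson_dict, style_callback=style_callback)
```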
koxudaxi/fastapi-code-generator | pydantic | 269 | $ref parameter prevents code generation | I am trying to use $ref parameters in my openapi yaml; however, as soon as I insert a $ref entry, no code is generated (the output folder is created, but it contains no files) and no error is thrown.
I took the code from #24, so my minimum working example is:
openapi.yaml
```yaml
openapi: "3.0.0"
paths:
  /foo:
    parameters:
      - $ref: "#/components/parameters/MyParam"
components:
  parameters:
    MyParam:
      name: foo
      schema:
        type: string
```
But the same happens with my actual openapi definition. As soon as I add a $ref parameter, no code is generated.
command:
`fastapi-codegen -i openapi.yaml -o app`
I am using fastapi-code-generator version 0.3.5
Can someone help? Thanks! | open | 2022-08-11T10:25:08Z | 2022-08-13T19:43:10Z | https://github.com/koxudaxi/fastapi-code-generator/issues/269 | [] | aktentasche | 1 |
microsoft/qlib | deep-learning | 1686 | The long return seems wrong when computing factor long-side returns | As the title says: the calc_long_short_return function in qlib.contrib.eva.alpha.py returns two results, (r_long - r_short) / 2 and r_avg; judging from the calculation, these should be the long-short return and the equal-weighted return of all stocks, respectively.
ไฝๅจqlib.workflow.record_temp.pyไธญSigAnaRecord._generate()ๅจ่ฐ็จcalc_long_short_return()ๅฝๆฐๆถๅฐ่ฟๅ็ปๆๅๅซๅฝๅไธบlong_short_rๅlong_avg_r๏ผๅนถๅจไธๆน่พๅบๆถ็ดๆฅๅฐlong_avg_rไฝไธบๅคๅคด็ปๅๆถ็็ใ
่ฟไผๅฏผ่ดๆ็ป็ๅฐ็ๅคๅคดๆถ็็ๅ
ถๅฎๆฏๆๆๆ ทๆฌ่ก็ญๆ็ๅนณๅๆถ็็ใ
ๆ็ๆฏ่ชๅทฑๅช้ๆฒก็่งฃๅฐไฝ๏ผๆฑ็ญ็ใ
qlib.workflow.record_temp.py SigAnaRecord._generate()ไธญ็็ๆฎต๏ผ

qlib.contrib.eva.alpha.pyไธญ็calc_long_short_returnไธญ็็ๆฎต๏ผ

| open | 2023-10-27T05:11:49Z | 2024-08-07T02:40:59Z | https://github.com/microsoft/qlib/issues/1686 | [
"bug"
] | wangxk15 | 1 |
AutoGPTQ/AutoGPTQ | nlp | 588 | [question] | If I want to obtain a quantized chat model for a vertical domain, is it better to build the calibration set from the c4 dataset or from a vertical-domain dataset? And how do I build a quantization dataset myself: should both the prompt's input and the output be included? | closed | 2024-03-13T02:03:36Z | 2024-03-13T02:04:02Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/588 | [
"bug"
] | bihui9968 | 0 |
airtai/faststream | asyncio | 1,765 | Bug: incorrect parsing of path parameters with nested routers | **Describe the bug**
When using nested routers with path parameters, the values are parsed incorrectly. Specifically, when passing a valid enum value in the subject, only the last character of the path parameter is taken.
**How to reproduce**
```python
from enum import StrEnum
from typing import Annotated, Any
from faststream import FastStream, Path
from faststream.nats import NatsBroker, NatsRouter
class MyEnum(StrEnum):
    FIRST = "first"
    SECOND = "second"
    THIRD = "third"
broker = NatsBroker()
root_router = NatsRouter(prefix="root_router.")
nested_router = NatsRouter()
@nested_router.subscriber("{my_enum}.nested_router")
async def do_nothing(message: Any, my_enum: Annotated[MyEnum, Path()]): ...
root_router.include_router(nested_router)
broker.include_router(nested_router)
app = FastStream(broker)
@app.after_startup
async def run():
    await broker.publish("", f"root_router.{MyEnum.THIRD}.nested_router")
```
**Expected behavior**
The my_enum path parameter should be correctly parsed, matching the full enum value (e.g., โthirdโ).
**Observed behavior**
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for do_nothing
my_enum
Input should be 'first', 'second' or 'third' [type=enum, input_value='d', input_type=str]
For further information visit https://errors.pydantic.dev/2.8/v/enum
```
| closed | 2024-09-05T16:16:19Z | 2024-09-13T19:22:58Z | https://github.com/airtai/faststream/issues/1765 | [
"bug",
"good first issue",
"NATS"
] | ulbwa | 0 |
huggingface/datasets | tensorflow | 7,287 | Support for identifier-based automated split construction | ### Feature request
As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure))
It would seem to be pretty useful to also allow splits to be based on identifiers of individual examples
This could be configured like
{"split_name": {"column_name": [column values in split]}}
(This in turn requires unique 'index' columns, which could be explicitly supported or just assumed to be defined appropriately by the user).
I guess a potential downside would be that shards would end up spanning different splits - is this something that can be handled somehow? Would this only affect streaming from hub?
### Motivation
The main motivation would be that all data files could be stored in a single directory, and multiple sets of splits could be generated from the same data. This is often useful for large datasets with multiple distinct sets of splits.
This could all be configured via the README.md yaml configs
### Your contribution
May be able to contribute if it seems like a good idea | open | 2024-11-10T07:45:19Z | 2024-11-19T14:37:02Z | https://github.com/huggingface/datasets/issues/7287 | [
"enhancement"
] | alex-hh | 3 |
FactoryBoy/factory_boy | sqlalchemy | 170 | Multi-db no longer supported. | In `DjangoModelFactory`, the factory will eventually make a call to `_setup_next_sequence` regardless of whether we are `build`ing or `create`ing. This means the strategy of building an instance, then setting its destination via `save(using='other_db')`, is no longer valid.
When using `factory.BUILD_STRATEGY`, it should not be hitting the database at all.
I can mitigate this issue by overriding the `_setup_next_sequence()` method.
```
@classmethod
def _setup_next_sequence(cls):
    return 1
```
| closed | 2014-10-06T21:27:40Z | 2015-03-27T01:30:41Z | https://github.com/FactoryBoy/factory_boy/issues/170 | [] | ashchristopher | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 297 | too many issues and impossible to install on windows :( | there are too many issues and nothing works :( | closed | 2020-03-10T17:46:32Z | 2020-07-04T22:19:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/297 | [] | daciansolgen3 | 8 |
torchbox/wagtail-grapple | graphql | 311 | Critical Typo in registering custom Rendition model | https://github.com/torchbox/wagtail-grapple/blob/2e7cb3e23f81c3c65e1fddc811aeaed99cd7743c/grapple/actions.py#L135-L142
Line 140 should call **register_image_rendition_model()** not register_image_model()
Fixing this typo will **break** the server due to "python-BaseException KeyError: '\_\_module\_\_'", so this issue actually references two issues.
And a note for contributors: I love and appreciate what you have done here in this project 😍❤ | open | 2023-02-08T10:33:31Z | 2023-02-09T12:40:11Z | https://github.com/torchbox/wagtail-grapple/issues/311 | [] | engAmirEng | 3 |
ray-project/ray | machine-learning | 50,946 | Release test long_running_many_ppo.aws failed | Release test **long_running_many_ppo.aws** failed. See https://buildkite.com/ray-project/release/builds/34295#01954657-d47e-48b5-9d21-322726f53c62 for more details.
Managed by OSS Test Policy | closed | 2025-02-27T08:16:09Z | 2025-02-28T06:08:37Z | https://github.com/ray-project/ray/issues/50946 | [
"bug",
"P0",
"triage",
"release-test",
"unstable-release-test",
"ray-test-bot",
"stability",
"ml"
] | can-anyscale | 1 |
facebookresearch/fairseq | pytorch | 5,248 | mms data preparation doesn't work with latest nightly torchaudio build | I used this tutorial: https://github.com/facebookresearch/fairseq/tree/main/examples/mms/data_prep
to set up a forced-alignment system with MMS. However, when I reinstalled my conda environment today (10th of July 2023), it didn't run anymore, as the torchaudio nightly build is incompatible with the code in the example. The nightly build requires an extra dimension for the emissions variable (probably a batch dimension).
Reverting torchaudio to the 27th of June 2023 nightly (using `pip install --pre torchaudio==2.1.0.dev20230627+cu118 --index-url https://download.pytorch.org/whl/nightly/cu118` instead of `pip install --pre torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118`) fixed the issue for me. But it would be nice if the tutorial and code could be updated to work with the new batch dimension in the nightly build.
```
def get_alignments(
audio_waveform,
tokens,
model,
dictionary,
use_star,
sample_rate
):
# Generate emissions
# emissions, stride = generate_emissions(model, audio_file)
emissions, stride = generate_emissions_waveform(model, audio_waveform, sample_rate)
T, N = emissions.size()
if use_star:
emissions = torch.cat([emissions, torch.zeros(T, 1).to(DEVICE)], dim=1)
# Force Alignment
if tokens:
token_indices = [dictionary[c] for c in " ".join(tokens).split(" ") if c in dictionary]
else:
print(f"Empty transcript!!!!! for audio file {audio_waveform}")
token_indices = []
blank = dictionary["<blank>"]
targets = torch.tensor(token_indices, dtype=torch.int32).to(DEVICE)
input_lengths = torch.tensor(emissions.shape[0])
target_lengths = torch.tensor(targets.shape[0])
path, scores = F.forced_align(
emissions, targets, input_lengths, target_lengths, blank=blank
)
path = path.to("cpu").tolist()
segments, scores_segments = merge_repeats_scores(path, scores, {v: k for k, v in dictionary.items()})
return segments, stride, scores_segments
``` | closed | 2023-07-10T09:57:45Z | 2023-09-07T18:26:16Z | https://github.com/facebookresearch/fairseq/issues/5248 | [
"bug",
"needs triage"
] | GBurg | 3 |
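A hedged sketch of adapting the `forced_align` call from the fairseq/MMS report above to the newer nightly API, assuming (per the report) that it now expects a leading batch dimension:
```python
# (T, C) emissions and (L,) targets become batches of size 1
path, scores = F.forced_align(
    emissions.unsqueeze(0),
    targets.unsqueeze(0),
    input_lengths.unsqueeze(0),
    target_lengths.unsqueeze(0),
    blank=blank,
)
path = path.squeeze(0)
```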
localstack/localstack | python | 11517 | bug: ApiGateway ChunkedEncodingError while receiving response from spring boot rest controller | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
Using a Step Function to asynchronously call an API Gateway, the step fails with the following error:
Exception=FailureEventException, Error=ApiGateway.ChunkedEncodingError.
The issue seems to be related to the response received from the Spring Boot REST API:
> WARN --- [_and_notify)] urllib3.response : Received response with both Content-Length and Transfer-Encoding set. This is expressly forbidden by RFC 7230 sec 3.3.2. Ignoring Content-Length and attempting to process response as Transfer-Encoding: chunked.
Note that this actually works in AWS.
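For anyone debugging the same symptom, a quick way to confirm the conflicting headers on the backend response (host, port, and path are placeholders for the Spring Boot service):

```bash
curl -sv http://localhost:8080/compute-data-calc -o /dev/null 2>&1 \
  | grep -iE 'content-length|transfer-encoding'
```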
Here are the full logs:
> 2024-09-14T09:56:39.166 ERROR --- [d-417 (eval)] l.s.s.a.c.eval_component : Exception=FailureEventException, Error=ApiGateway.ChunkedEncodingError, Details={"taskFailedEventDetails": {"error": "ApiGateway.ChunkedEncodingError", "cause": "('Connection broken: InvalidChunkLength(got length b\\'{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx}\\', 0 bytes read)', InvalidChunkLength(got length b'{xxxxxxxxxxxxxxxxxxxxxxxxxxxxx}', 0 bytes read))", "resource": "invoke.waitForTaskToken", "resourceType": "apigateway"}} at '(StateTaskServiceApiGateway| {'comment': None, 'input_path': (InputPath| {'input_path_src': '$'}, 'output_path': (OutputPath| {'output_path': '$'}, 'state_entered_event_type': <HistoryEventType.TaskStateEntered: 'TaskStateEntered'>, 'state_exited_event_type': <HistoryEventType.TaskStateExited: 'TaskStateExited'>, 'result_path': (ResultPath| {'result_path_src': '$'}, 'result_selector': None, 'retry': None, 'catch': None, 'timeout': (TimeoutSeconds| {'timeout_seconds': 99999999, 'is_default': None}, 'heartbeat': None, 'parameters': (Parameters| {'payload_tmpl': (PayloadTmpl| {'payload_bindings': [(PayloadBindingValue| {'field': 'ApiEndpoint', 'value': (PayloadValueStr| {'val': 'http://localhost:4566/restapis/local-api-gateway'}}, (PayloadBindingValue| {'field': 'Headers', 'value': (PayloadTmpl| {'payload_bindings': [(PayloadBindingValue| {'field': 'Accept', 'value': (PayloadArr| {'payload_values': [(PayloadValueStr| {'val': 'application/json'}]}}, (PayloadBindingValue| {'field': 'Content-Type', 'value': (PayloadArr| {'payload_values': [(PayloadValueStr| {'val': 'application/json'}]}}, (PayloadBindingPathContextObj| {'field': 'TaskToken', 'path_context_obj': '$.Task.Token'}, (PayloadBindingPathContextObj| {'field': 'workflowName', 'path_context_obj': '$.StateMachine.Name'}, (PayloadBindingPathContextObj| {'field': 'executionName', 'path_context_obj': '$.Execution.Name'}, (PayloadBindingPathContextObj| {'field': 'stepName', 'path_context_obj': '$.State.Name'}]}}, (PayloadBindingValue| {'field': 'Method', 'value': (PayloadValueStr| {'val': 'POST'}}, (PayloadBindingValue| {'field': 'Stage', 'value': (PayloadValueStr| {'val': 'test'}}, (PayloadBindingValue| {'field': 'Path', 'value': (PayloadValueStr| {'val': 'compute-data-calc'}}, (PayloadBindingValue| {'field': 'RequestBody', 'value': (PayloadTmpl| {'payload_bindings': [(PayloadBindingValue| {'field': 'activityName', 'value': (PayloadValueStr| {'val': 'calculSoldes'}}, (PayloadBindingValue| {'field': 'xxxxxxx', 'value': (PayloadValueStr| {'val': 'xxxxxxx'}}, (PayloadBindingValue| {'field': 'xxxxxxx', 'value': (PayloadTmpl| {'payload_bindings': [(PayloadBindingPath| {'field': 'xxxxxxx', 'path': 'xxxxxxx'}, (PayloadBindingValue| {'field': 'other', 'value': (PayloadTmpl| {'payload_bindings': [(PayloadBindingValue| {'field': 'xxxxxxx', 'value': (PayloadValueStr| {'val': 'xxxxxxx'}}]}}]}}]}}, (PayloadBindingValue| {'field': 'AuthType', 'value': (PayloadValueStr| {'val': 'IAM_ROLE'}}]}}, '_supported_integration_patterns': {'waitForTaskToken'}, 'name': 'Compute', 'state_type': <StateType.Task: 16>, 'continue_with': <localstack.services.stepfunctions.asl.component.state.state_continue_with.ContinueWithNext object at 0x7f58036557d0>, 'resource': (ServiceResource| {'_region': '', '_account': '', 'resource_arn': 'arn:aws:states:::apigateway:invoke.waitForTaskToken', 'partition': 'aws', 'service_name': 'apigateway', 'api_name': 'apigateway', 'api_action': 'invoke', 'condition': 'waitForTaskToken'}}'
In the above logs, I replaced the real JSON response data with "xxxxxxxxxxxxxxxxxxxxxxxxxxxxx".
I tested the same code in AWS and it's working correctly.
Is there any fix or workaround?
The Step Function definition used:
> "Compute": {
> "Type": "Task",
> "Resource": "arn:aws:states:::apigateway:invoke.waitForTaskToken",
> "Parameters": {
> "ApiEndpoint": "http://localhost:4566/restapis/local-api-gateway",
> "Headers": {
> "Accept": [
> "application/json"
> ],
> "Content-Type": [
> "application/json"
> ],
> "TaskToken.$": "$$.Task.Token",
> "workflowName.$": "$$.StateMachine.Name",
> "executionName.$": "$$.Execution.Name",
> "stepName.$": "$$.State.Name"
> },
> "Method": "POST",
> "Stage": "test",
> "Path": "...",
> "RequestBody": {
> ....
> },
> "AuthType": "IAM_ROLE"
> },
> "End": true
> }
### Expected Behavior
The Step Function should succeed without any error when the API Gateway receives the response from the Spring Boot REST API.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### Docker compose :
```
services:
localstack:
container_name: localstack
image: localstack/localstack:3.7.2
ports:
- '4566:4566' # LocalStack Gateway
- '4510-4559:4510-4559' # external services port range
environment:
- AWS_DEFAULT_REGION=eu-west-3
- AWS_ACCESS_KEY_ID=<HIDDEN>
- AWS_SECRET_ACCESS_KEY=<HIDDEN>
- DEBUG=${DEBUG-1}
- DISABLE_CORS_CHECKS=1
- DOCKER_HOST=unix:///var/run/docker.sock
- LS_LOG=WARN # Localstack DEBUG Level
- SERVICES=s3,sqs,apigateway,stepfunctions,events,cloudformation
volumes:
- localstack:/var/lib/localstack
- '/var/run/docker.sock:/var/run/docker.sock'
- ./localstack/start-localstack.sh:/etc/localstack/init/ready.d/start-localstack.sh
- ./localstack/statemachines:/tmp/statemachines
volumes:
localstack:
```
#### Start local stack :
docker compose up -d
#### Start Step Function
Start execution from the local stack UI
### Environment
```markdown
- OS: WSL2 Debian
- LocalStack version: 3.7.2
```
### Anything else?
_No response_ | closed | 2024-09-14T10:44:30Z | 2025-02-25T19:02:30Z | https://github.com/localstack/localstack/issues/11517 | [
"type: bug",
"status: response required",
"aws:apigateway",
"aws:stepfunctions",
"status: resolved/stale"
] | mbench777 | 3 |
pytest-dev/pytest-qt | pytest | 400 | Allow PyQt5 versions < 5.11 | Thank you for your amazing work. I'm currently using a modified version of `pytest-qt` which allows usage with `PyQt 5.9.2`
So far I haven't had any issues. Where does the restriction for `5.11` come from? Is it arbitrary because you didn't test any lower version number with your package or is there an actual known problem with lower version numbers?
If it is arbitrary, I would like to help getting 5.9 "greenlighted". | closed | 2021-12-09T14:00:12Z | 2021-12-09T20:02:05Z | https://github.com/pytest-dev/pytest-qt/issues/400 | [] | cafhach | 6 |
deepfakes/faceswap | deep-learning | 1,381 | just installed new graphic card and it stopped working | it worked on my last graphic card "rtx 2027" and i just bought this one and exchanged it "ASUS TUF Gaming Radeonโข RX 7900 XT OC Edition 20GB GDDR6"
and it stopped working .
it returns this error :
> C:\Users\Nassar>"C:\Users\Nassar\Miniconda3\scripts\activate.bat" && conda activate "faceswap" && python "C:\Users\Nassar\faceswap/faceswap.py" gui
> Setting Faceswap backend to NVIDIA
> Traceback (most recent call last):
> File "C:\Users\Nassar\faceswap\lib\gpu_stats\nvidia.py", line 47, in _initialize
> pynvml.nvmlInit()
> File "C:\Users\Nassar\MiniConda3\envs\faceswap\lib\site-packages\pynvml.py", line 1945, in nvmlInit
> nvmlInitWithFlags(0)
> File "C:\Users\Nassar\MiniConda3\envs\faceswap\lib\site-packages\pynvml.py", line 1935, in nvmlInitWithFlags
> _nvmlCheckReturn(ret)
> File "C:\Users\Nassar\MiniConda3\envs\faceswap\lib\site-packages\pynvml.py", line 897, in _nvmlCheckReturn
> raise NVMLError(ret)
> pynvml.NVMLError_NoPermission: Insufficient Permissions
>
> The above exception was the direct cause of the following exception:
>
> Traceback (most recent call last):
> File "C:\Users\Nassar\faceswap\faceswap.py", line 12, in <module>
> from lib.cli import args as cli_args # pylint:disable=wrong-import-position
> File "C:\Users\Nassar\faceswap\lib\cli\args.py", line 23, in <module>
> _GPUS = GPUStats().cli_devices
> File "C:\Users\Nassar\faceswap\lib\gpu_stats\_base.py", line 95, in __init__
> self._initialize()
> File "C:\Users\Nassar\faceswap\lib\gpu_stats\nvidia.py", line 55, in _initialize
> raise FaceswapError(msg) from err
> lib.utils.FaceswapError: There was an error reading from the Nvidia Machine Learning Library. The most likely cause is incorrectly installed drivers. If this is the case, Please remove and reinstall your Nvidia drivers before reporting. Original Error: Insufficient Permissions
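One likely cause: the install is still configured for the NVIDIA backend while the new card is AMD. If faceswap stores its backend choice in `config/.faceswap` (an assumption; the path and accepted values may differ by version), switching to DirectML would look like:

```bash
# assumption: faceswap reads its backend from config/.faceswap
echo '{"backend": "directml"}' > config/.faceswap
```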
Note: I am using Windows 10. | closed | 2024-04-05T08:32:14Z | 2024-04-05T11:19:25Z | https://github.com/deepfakes/faceswap/issues/1381 | [] | eassa | 1 |
amisadmin/fastapi-amis-admin | fastapi | 149 | Setup id at runtime? | I'm exploring the event system in amis, but it requires an id, so I do:
```python
class TriggerAdminPage(admin.ModelAdmin):
.
.
.
async def get_form_item(
self, request: Request, modelfield: ModelField, action: CrudEnum
) -> Union[FormItem, SchemaNode, None]:
item = await super().get_form_item(request, modelfield, action)
        if item.name == Trigger.event.key:  # noqa
            item.id = item.name  # just use the field name as the id
        return item
```
But why not just assign the id at runtime automatically? Are there any reasons not to?
cupy/cupy | numpy | 8,987 | GPU-Accelerated Numerical Solvers | ### Description
I'm currently developing a numerical solver package for a specific class of PDEs. My initial approach used SciPy's ODE solvers, but runtime has become a bottleneck for 2D/3D problems with fine discretizations. Since I have access to many GPUs, I'm very interested in leveraging GPU acceleration.
I came across a couple of related discussions ([#7452](https://github.com/cupy/cupy/issues/7452), [#7019](https://github.com/cupy/cupy/issues/7019)) but haven't seen a definitive path forward. Specifically, I'm wondering:
- Is a solver interface similar to `scipy.integrate.solve_ivp` feasible in CuPy?
- If so, is this approach recommended for building a GPU-based PDE solver?
- Are there any existing examples or best practices you could point me toward to get started?
Any guidance or suggestions would be greatly appreciated.
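For concreteness, here is a minimal fixed-step RK4 sketch over CuPy arrays; it is a stand-in for the kind of interface I have in mind, not an existing CuPy API:

```python
import cupy as cp

def rk4(f, y0, t0, t1, n_steps):
    """Fixed-step RK4; f(t, y) returns dy/dt as a CuPy array."""
    y, t = y0, t0
    h = (t1 - t0) / n_steps
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

# exponential decay test: y' = -y, y(0) = 1, so y(1) should be ~exp(-1)
y1 = rk4(lambda t, y: -y, cp.ones(1_000_000), 0.0, 1.0, 100)
```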
### Additional Information
_No response_ | open | 2025-02-25T14:54:32Z | 2025-02-25T14:56:18Z | https://github.com/cupy/cupy/issues/8987 | [
"cat:feature"
] | Hrrsmjd | 0 |
flasgger/flasgger | rest-api | 535 | How to pass header with marshmallow schema | I am using a marshmallow schema to generate documentation, but I am not able to understand how to pass a header along with the schema.
Please help. | open | 2022-05-26T09:39:35Z | 2022-05-26T09:39:35Z | https://github.com/flasgger/flasgger/issues/535 | [] | kamrapooja | 0 |
TheKevJames/coveralls-python | pytest | 572 | Implement retries | **Is your feature request related to a problem? Please describe.**
Frequently enough to be frustrating, running `coveralls` fails in GitHub Actions due to an HTTP error. Retrying the Action run resolves this, but this can be painful for very long-running workflows.
**Describe the solution you'd like**
When API calls fail with transient HTTP error (e.g., bad gateway), they should be able to be retried (perhaps optionally, based on a CLI parameter) ideally with some kind of backoff.
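A sketch of what this could look like if the CLI posts via `requests` (an assumption about its internals; the `Retry` parameters come from urllib3):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# retry transient gateway errors with exponential backoff (1s, 2s, 4s, ...)
retry = Retry(
    total=5,
    backoff_factor=1.0,
    status_forcelist=[502, 503, 504],
    allowed_methods=frozenset(["POST"]),  # needs urllib3 >= 1.26
)
session.mount("https://", HTTPAdapter(max_retries=retry))
# session.post("https://coveralls.io/api/v1/jobs", ...) would then be retried
```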
**Describe alternatives you've considered**
- implement retrying in the Action itself. Though, I'd rather not implement it in bash.
 - GitHub Actions doesn't implement any retry functionality itself like other CI providers, but retries might work in providers like GitLab.
- To minimize the impact of the problem, I could use the artifact upload/download actions to move coverage reports from jobs to one place where they are transmitted. That way, when it fails, I just need to re-download the artifact and try the upload again, instead of running all the tests again. But it would be much better if the CLI could implement it.
**Additional context**
For example: [this action run](https://github.com/spyoungtech/ahk/actions/runs/12780747803/job/35627541065) (which takes 9+ minutes to run!) ends with an error:
```
Traceback (most recent call last):
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\coveralls\cli.py", line 98, in main
result = coverallz.wear()
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\coveralls\api.py", line 275, in wear
return self.submit_report(json_string)
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\site-packages\coveralls\api.py", line 301, in submit_report
raise CoverallsException(
coveralls.exception.CoverallsException: Could not submit coverage: 504 Server Error: Gateway Time-out for url: https://coveralls.io/api/v1/jobs
``` | open | 2025-02-01T02:31:39Z | 2025-02-01T02:35:26Z | https://github.com/TheKevJames/coveralls-python/issues/572 | [
"feature",
"in-review"
] | spyoungtech | 0 |
ray-project/ray | deep-learning | 50,710 | [Serve] Serve no longer retries deployments after 3 failures | ### What happened + What you expected to happen
1. Previously, a Ray Serve deployment that hit a retryable error would back off and retry until successful. With the latest release, it transitions the deployment into DEPLOY_FAILED after 3 tries.
2. A cluster with a large number of Ray Serve deployments has a high probability of one of them hitting 3 retryable errors (model download failures, nodes restarting, etc.). If I'm deploying a single cluster I can retry manually, but it's a pain when I'm trying to automate multiple deployments.
(The actual case where I saw this first was with Nodes restarting while waiting on a large model file download.)
This change seems to have been a deliberate decision in https://github.com/ray-project/ray/pull/49224. The "3" is hardcoded in https://github.com/ray-project/ray/blob/094fde63cdce99bfe7ddca30d5a04c0759c86ffd/python/ray/serve/_private/deployment_state.py#L1394.
I liked the old behavior, but I'd settle for having the hardcoded constant be configurable.
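For example, the cap could read from an environment variable instead (the variable name here is hypothetical):

```python
import os

# hypothetical knob: fall back to today's hardcoded value of 3
MAX_DEPLOY_FAILURES = int(os.environ.get("RAY_SERVE_MAX_DEPLOY_RETRIES", "3"))
```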
### Versions / Dependencies
Ray 2.42.1
### Reproduction script
See https://github.com/ray-project/ray/pull/49224
### Issue Severity
Medium: It is a significant difficulty but I can work around it. | closed | 2025-02-19T03:09:46Z | 2025-03-24T16:21:01Z | https://github.com/ray-project/ray/issues/50710 | [
"bug",
"P1",
"serve"
] | chmeyers | 3 |
Lightning-AI/pytorch-lightning | machine-learning | 19,828 | TensorBoardLogger has the wrong epoch numbers much more than the fact | ### Bug description
I used the following code to log the metrics, but I found that the epoch count recorded by the TensorBoard logger is much larger than it should be:
```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

trainer = pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)
```
In the Trainer configuration I set `max_epochs=10000`, but in the logger I got epochs beyond 650k:


### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
def training_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = torch.sqrt(self.loss_fn(y_hat,y))
self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self.forward(x)
loss = torch.sqrt(self.loss_fn(y_hat,y))
self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
return loss
trainer = pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)  # use any path you like
```
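A note on what I suspect is happening (an assumption, not verified): with the default `on_step=True` in `training_step`, Lightning logs every batch, and the TensorBoard x-axis is the global step count rather than the epoch index, so 10,000 epochs at roughly 65 batches each lands near 650k. Logging once per epoch inside `training_step` would look like:

```python
# log once per epoch so the x-axis advances by epochs, not batches
self.log("train_loss", loss, prog_bar=True, on_step=False, on_epoch=True)
```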
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1.3
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.1.2
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): pip
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-30T17:13:10Z | 2024-05-19T06:46:33Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19828 | [
"bug",
"needs triage",
"ver: 2.1.x"
] | AlbireoBai | 2 |
Evil0ctal/Douyin_TikTok_Download_API | api | 374 | [BUG] How is the msToken generated? | After startup:
```
● Douyin_TikTok_Download_API.service - Douyin_TikTok_Download_API deamon
Loaded: loaded (/etc/systemd/system/Douyin_TikTok_Download_API.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2024-04-27 10:39:50 UTC; 8s ago
Main PID: 5067 (python3)
Tasks: 1 (limit: 23204)
Memory: 46.4M
CGroup: /system.slice/Douyin_TikTok_Download_API.service
           └─5067 /usr/local/bin/python3 start.py
Apr 27 10:39:50 CentOS systemd[1]: Started Douyin_TikTok_Download_API deamon.
Apr 27 10:39:59 CentOS python3[5067]: ERROR    msToken API error: EOF occurred in violation of protocol (_ssl.c:1000)
Apr 27 10:39:59 CentOS python3[5067]: INFO     Generating a fake msToken instead
```
How is the msToken generated? Is it something I should modify in crawlers/douyin/web/config.yaml?
| closed | 2024-04-27T10:52:27Z | 2024-04-29T21:30:36Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/374 | [
"BUG"
] | markvlenvision | 4 |
FujiwaraChoki/MoneyPrinter | automation | 160 | [BUG] Invalid data found when processing input, songs | **Describe the bug**
I get the error of Invalid data found when processing input
**To Reproduce**
Steps to reproduce the behavior:
1) i have uploaded a zip file to filebin
2) inserted the link in the frontend
3) run
4) after video saved in temp/output.mp4 it gives the error
**Expected behavior**
FInish the run
**Screenshots**



**Additional context**
The zip file contains only a file.mp3 | closed | 2024-02-10T18:27:29Z | 2024-02-10T19:01:43Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/160 | [] | neker97 | 1 |
flaskbb/flaskbb | flask | 595 | FileNotFoundError: python3.8/site-packages/portal/migrations | When I executed `make install`, it failed with:
```
FileNotFoundError: [Errno 2] No such file or directory: '/Users/me/anaconda3/lib/python3.8/site-packages/portal/migrations'
make: *** [install] Error 1
``` | closed | 2021-07-21T06:00:59Z | 2021-08-20T14:31:46Z | https://github.com/flaskbb/flaskbb/issues/595 | [] | mikolaje | 3 |
google-deepmind/sonnet | tensorflow | 29 | Mac install fails | I am trying to install Sonnet on Mac, but I get the following error:
```
sonnet/sonnet/python/BUILD:131:1 C++ compilation of rule '@protobuf//:protobuf' failed: cc_wrapper.sh failed: error executing command
(exec env - \
PATH=/Library/Frameworks/Python.framework/Versions/3.6/bin:/Users/swarsh/torch/install/bin:/Library/Frameworks/Python.framework/Versions/2.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/X11/bin \
TMPDIR=/var/folders/q9/1zzwnrpx5f31kw21mwzdqxjh0000gn/T/ \
external/local_config_cc/cc_wrapper.sh -U_FORTIFY_SOURCE -fstack-protector -Wall -Wthread-safety -Wself-assign -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 '-D_FORTIFY_SOURCE=1' -DNDEBUG -ffunction-sections -fdata-sections -g0 '-std=c++0x' -MD -MF bazel-out/host/bin/external/protobuf/_objs/protobuf/external/protobuf/src/google/protobuf/struct.pb.d '-frandom-seed=bazel-out/host/bin/external/protobuf/_objs/protobuf/external/protobuf/src/google/protobuf/struct.pb.o' -iquote external/protobuf -iquote bazel-out/host/genfiles/external/protobuf -iquote external/bazel_tools -iquote bazel-out/host/genfiles/external/bazel_tools -isystem external/protobuf/src -isystem bazel-out/host/genfiles/external/protobuf/src -isystem external/bazel_tools/tools/cpp/gcc3 -DHAVE_PTHREAD -Wall -Wwrite-strings -Woverloaded-virtual -Wno-sign-compare -Wno-unused-function -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c external/protobuf/src/google/protobuf/struct.pb.cc -o bazel-out/host/bin/external/protobuf/_objs/protobuf/external/protobuf/src/google/protobuf/struct.pb.o)
external/local_config_cc/cc_wrapper.sh: line 56: -U_FORTIFY_SOURCE: command not found
```
jupyter-book/jupyter-book | jupyter | 1,544 | Add example for the seealso directive | There is no reference to the useful `{seealso}` directive on [jupyterbook.org](https://jupyterbook.org). See [jupyterbook.org/search.html?q=seealso](https://jupyterbook.org/search.html?q=seealso). | open | 2021-11-17T10:05:32Z | 2021-11-17T10:06:03Z | https://github.com/jupyter-book/jupyter-book/issues/1544 | [] | NikosAlexandris | 0 |
plotly/dash-bio | dash | 705 | Problem with the color_list functionality of dash.clustergram | Bug reported on community Forum in [this post](https://community.plotly.com/t/clustergrams-color-list-not-working/65525):
It appears that the color_list functionality of dash.clustergram is not working. The color dictionary is supposed to update the cluster trace colors, however, while the color_list dictionary can be defined, it is not used and only default colors are displayed.
This is also the case for the Plotly gallery example ([Clustergram | Dash for Python Documentation | Plotly](https://dash.plotly.com/dash-bio/clustergram)); here it is as a minimal example:
```
import pandas as pd
from dash import dcc
import dash_bio as dashbio
df = pd.read_csv('https://git.io/clustergram_brain_cancer.csv').set_index('ID_REF')
columns = list(df.columns.values)
rows = list(df.index)
clustergram = dashbio.Clustergram(
data=df.loc[rows].values,
row_labels=rows,
column_labels=columns,
color_threshold={
'row': 250,
'col': 700
},
height=800,
width=700,
color_list={
'row': ['#636EFA', '#00CC96', '#19D3F3'],
'col': ['#AB63FA', '#EF553B'],
'bg': '#506784'
},
line_width=2
)
dcc.Graph(figure=clustergram)
clustergram
```
Plotly staff member, Emilie Burton, looked into it and confirmed this is a bug as well.
| open | 2022-08-01T13:52:43Z | 2022-08-01T13:52:43Z | https://github.com/plotly/dash-bio/issues/705 | [] | Coding-with-Adam | 0 |
jupyter/nbgrader | jupyter | 929 | Slow _filter_existing_notebooks impacts each submission | On a deployment with ~600 students, _filter_existing_notebooks takes about 30s. This hits us when manual grading. A single submission is loaded (/formgrader/submissions/:submission_id) and that invokes api.get_notebook_submission_indices which calls the filter which walks the filesystem and filters out non-existing files. So when the grader presses Next, the next submission is loaded which causes another 30s filesystem walk.
### Operating system
### `nbgrader --version`
0.6.0.dev
### `jupyterhub --version` (if used with JupyterHub)
0.8.1
### `jupyter notebook --version`
5.4.0
### Expected behavior
Indexes are cached or are built asynchronously?
### Actual behavior
Indexes are gathered for each submission load.
### Steps to reproduce the behavior
Manual grading in big course, visit assignment. Then visit first submission. | closed | 2018-02-16T21:20:36Z | 2018-05-03T21:36:07Z | https://github.com/jupyter/nbgrader/issues/929 | [
"bug"
] | ryanlovett | 6 |
noirbizarre/flask-restplus | flask | 36 | Add @api.response decorator | Add an @api.response decorator shortcut.
Example:
``` python
@api.route('/somewhere/')
class MyResource(Resource):
@api.response(403, 'Not Authorized')
@api.response(somemodel, headers={}, default=True)
def get(self, id):
return {}
```
| closed | 2015-03-25T15:02:33Z | 2015-04-02T15:38:39Z | https://github.com/noirbizarre/flask-restplus/issues/36 | [
"enhancement"
] | noirbizarre | 0 |
howie6879/owllook | asyncio | 18 | docker Internal Server Error | After building and running with Docker, http://127.0.0.1:8001/ returns an Internal Server Error. What could be the cause? I'm new to Python. | closed | 2018-02-10T04:04:20Z | 2018-06-10T11:10:54Z | https://github.com/howie6879/owllook/issues/18 | [] | vinwang | 5 |
MagicStack/asyncpg | asyncio | 1,017 | Why asyncpg connection pool works slower than just connections? | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.26.0
* **PostgreSQL version**: latest
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: db up in docker container
* **Python version**: 3.10
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: Yes
<!-- Enter your issue details below this comment. -->
Hi! I have two Postgres clients and both of them use **asyncpg**. The first creates a new connection for each request; the second uses a pool of connections.
First:
```python
class PostgresConnection(object):
def __init__(self, conn) -> None:
self.conn: asyncpg.Connection = conn
@classmethod
async def get_connection(cls) -> asyncpg.Connection:
conn = await asyncpg.connect(
user='',
password='',
            database='',
host='',
port=,
)
return cls(conn)
async def execute(self, query: str) -> None:
return await self.conn.execute(query)
async def fetch_all(self, query: str) -> list[asyncpg.Record | None]:
return await self.conn.fetch(query)
```
Second:
```python
class PostgresConnection:
def __init__(
self,
DSN: str = DSN,
):
self.DSN = DSN
self.con = None
self._cursor = None
self._connection_pool = None
async def create_pool(self) -> None:
self._connection_pool = await asyncpg.create_pool(
dsn=self.DSN,
)
async def get_pool(self) -> Pool:
if not self._connection_pool:
await self.create_pool()
return self._connection_pool
async def fetch_all(self, query: str) -> list[dict | None]:
pool = await self.get_pool()
async with pool.acquire() as conn:
return [dict(row) for row in await conn.fetch(query)]
```
And I made a simple test: run `SELECT * FROM "user"` 100 times.
**first: 5 sec**
```python
conn = await PostgresClient.get_connection()
for i in range(1_00):
await conn.fetch_all('SELECT * FROM "user"')
```
**second: 10 sec**
```python
pg_conn = PostgresConnection()
for i in range(1_00):
await pg_conn.fetch_all('SELECT * FROM "user"')
```
In one of the tests I used a max pool size of 100 and checked how many connections were used: it was 16/17 (I checked the IDs of the connections the pool returns). But I think that's not important in my case and I just did something wrong.
Why does the connection pool work 2x slower? Is the problem on the db side?
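For what it's worth, a fairer comparison would acquire the connection once and skip the per-row `dict()` conversion, since both add overhead to the pooled client:

```python
pg_conn = PostgresConnection()
pool = await pg_conn.get_pool()
async with pool.acquire() as conn:
    for _ in range(100):
        await conn.fetch('SELECT * FROM "user"')
```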
| closed | 2023-03-21T11:16:17Z | 2023-03-23T14:01:41Z | https://github.com/MagicStack/asyncpg/issues/1017 | [] | Maksim-Burtsev | 1 |
man-group/arctic | pandas | 245 | Retrieve data stored into Arctic using Julia | Hello,
I stored some tick data using Python / Arctic.
I wonder if / how I could retrieve the data using [Julia](http://julialang.org/).
Any help would be great.
Kind regards
| closed | 2016-09-28T19:26:53Z | 2019-01-04T10:11:30Z | https://github.com/man-group/arctic/issues/245 | [
"wontfix"
] | femtotrader | 5 |
elliotgao2/gain | asyncio | 1 | Handle errors when the aiohttp response goes wrong. | Handle errors when the aiohttp response goes wrong. | closed | 2017-06-02T09:55:55Z | 2017-06-05T01:41:30Z | https://github.com/elliotgao2/gain/issues/1 | [] | elliotgao2 | 1 |
pytest-dev/pytest-mock | pytest | 420 | [3.13.0] New logged calls in MagicMock mock_calls attribute | Hi!
Some of my tests just failed, and while debugging I was looking through `mock.mock_calls`. I have not seen this change documented in the [changelogs](https://pytest-mock.readthedocs.io/en/latest/changelog.html).


Was this an intended change?
FYI I have found a workaround for the project's tests, just wanted to notify the repo here directly.
Thanks!
Xavier | closed | 2024-03-21T19:56:29Z | 2024-03-21T22:19:01Z | https://github.com/pytest-dev/pytest-mock/issues/420 | [] | ofx53 | 7 |
jumpserver/jumpserver | django | 14,656 | [Question] how to connect to a https website asset and how to setup it correctly in jumpserver? | ### Product Version
4.4.1
### Product Edition
- [X] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [X] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Everything installed and working well for SSH
### 🤔 Question Description
It did not work for websites. Every time an HTTP asset is created, it is not possible to connect to the website GUI:
No available accounts
Connect method
No available connect method
Additionally, the website asset only offers port 80 (http) and not port 443 (https).
### Expected Behavior
I need to connect to HTTPS websites.
### Additional Information
I did not find any solution or instructions. Even when translating the existing docs with Google, there is no explanation of how to connect to an HTTPS website asset. | open | 2024-12-14T17:02:14Z | 2025-03-03T09:45:41Z | https://github.com/jumpserver/jumpserver/issues/14656 | [
"⏳ Pending feedback",
"🤔 Question"
] | blocksberghexhex | 6 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,139 | [Bug]: calling `%PYTHON%` in webui.bat stops the execution of the rest of the script. | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
The first line where `%PYTHON%` is called stops the execution of the rest of the script.
This happens with any usage of `%PYTHON%`.

You can test this by putting `echo` before and after the `%PYTHON%` call

You can test this further by running `PYTHON --version` and doing the same test

### Steps to reproduce the problem
1. Install Python 3.10.6 via pyenv-win.
2. Get the latest version of webui, latest is commit feee37d on master branch.
3. Set version 3.10.6 as the local version (`pyenv local 3.10.6`).
4. Try to launch webui
5. (optional) run the tests I did above.
### What should have happened?
WebUI should have launched as expected.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
not available
### Console logs
```Shell
none
```
### Additional information
Python 3.10.6 is installed via pyenv-win. | open | 2024-07-03T13:43:39Z | 2024-07-12T23:09:41Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16139 | [
"bug-report"
] | RalkeyOfficial | 1 |
tensorlayer/TensorLayer | tensorflow | 670 | Failed: TensorLayer (c5b6ceea) | *Sent by Read the Docs (readthedocs@readthedocs.org). Created by [fire](https://fire.fundersclub.com/).*
TensorLayer build #7285697
Build Failed for TensorLayer (latest)
You can find out more about this failure here:
[TensorLayer build #7285697](https://readthedocs.org/projects/tensorlayer/builds/7285697/) - failed
If you have questions, a good place to start is the FAQ:
<https://docs.readthedocs.io/en/latest/faq.html>
You can unsubscribe from these emails in your [Notification Settings](https://readthedocs.org/dashboard/tensorlayer/notifications/)
Keep documenting,
Read the Docs
<https://readthedocs.org>
| closed | 2018-06-02T22:37:38Z | 2018-06-03T11:31:59Z | https://github.com/tensorlayer/TensorLayer/issues/670 | [] | fire-bot | 2 |
seleniumbase/SeleniumBase | web-scraping | 2,794 | Can not bypass detection when using a VPN connection (NordVPN) | Reopening Issue 2793, as I am not sure the testing instructions were clear.
I experience this issue ONLY when connected to a VPN (NordVPN). This is the modified code, as per the comment:
```python
url = 'https://rateyourmusic.com/artist/pink-floyd/'
with SB(uc=True) as sb:
sb.driver.uc_open_with_reconnect(url, 8)
if sb.is_element_visible('iframe[src*="challenge"]'):
sb.driver.uc_switch_to_frame('iframe[src*="challenge"]')
confirm_input = sb.driver.find_element(By.CSS_SELECTOR, 'input')
confirm_input.uc_click()
sb.sleep(2)
```
What happens is that the verification box expects the user action (this does not happen on a direct connection).
After clicking the checkbox input (manually or as instructed by the driver), the verification proceeds for a few seconds (green spinning wheel).
This then fails and returns to the initial state.
Could we please make sure this is tested with a VPN subscription, or is there anyone who can do this?
P.S. I am able to pass the verification process after clicking the checkbox with selenium-driverless.
Many thanks! | closed | 2024-05-21T17:59:22Z | 2024-05-21T19:19:46Z | https://github.com/seleniumbase/SeleniumBase/issues/2794 | [
"duplicate",
"UC Mode / CDP Mode"
] | bjornkarlsson | 1 |
bloomberg/pytest-memray | pytest | 119 | pytest-memray breaks anyio | Hola @pablogsal,
I am facing the same issue: the async tests are skipped if we pass the `--memray` argument to pytest.
## Steps to reproduce the issue:
Use the following test file: `test_async.py`
```python
import pytest
@pytest.fixture
def anyio_backend():
return 'asyncio'
@pytest.mark.anyio
async def test_async():
assert True
```
Install required dependencies:
```shell
python -m pip install pytest anyio pytest-memray
```
The test runs as expected if `--memray` is not passed:
```shell
python -m pytest -vv -x test_async.py
```
Output:
```
plugins: memray-1.6.0, anyio-4.0.0
collected 1 item
test_async.py::test_async PASSED
```
However, the test is skipped if we pass `--memray`:
```shell
python -m pytest --memray -vv -x test_async.py
```
Output:
```
plugins: memray-1.6.0, anyio-4.0.0
collected 1 item
test_async.py::test_async SKIPPED (async def function and no async plugin installed (see warnings)) [100%]
============================================================================================ warnings summary =============================================================================================
test_async.py::test_async
<MY_PROJECT_PATH>/.venv/lib/python3.9/site-packages/_pytest/python.py:151: PytestUnhandledCoroutineWarning: async def functions are not natively supported and have been skipped.
You need to install a suitable plugin for your async framework, for example:
- anyio
- pytest-asyncio
- pytest-tornasync
- pytest-trio
- pytest-twisted
warnings.warn(PytestUnhandledCoroutineWarning(msg.format(nodeid)))
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
```
_Originally posted by @albertvillanova in https://github.com/bloomberg/pytest-memray/discussions/101#discussioncomment-9738673_ | closed | 2024-06-12T00:06:36Z | 2024-08-05T06:25:51Z | https://github.com/bloomberg/pytest-memray/issues/119 | [] | godlygeek | 6 |
kubeflow/katib | scikit-learn | 1,636 | Grid Algorithm fails for int parameters | /kind bug
Grid fails with `Chocolate db is exhausted, increase Search Space or decrease maxTrialCount!` error when running this example:
```yaml
apiVersion: "kubeflow.org/v1beta1"
kind: Experiment
metadata:
namespace: kubeflow-user-example-com
name: grid-example
spec:
objective:
type: maximize
goal: 0.99
objectiveMetricName: Validation-accuracy
additionalMetricNames:
- Train-accuracy
algorithm:
algorithmName: grid
parallelTrialCount: 6
maxTrialCount: 12
maxFailedTrialCount: 3
parameters:
- name: num-layers
parameterType: int
feasibleSpace:
min: "3"
max: "6"
- name: optimizer
parameterType: categorical
feasibleSpace:
list:
- sgd
- adam
- ftrl
trialTemplate:
primaryContainerName: training-container
trialParameters:
- name: numberLayers
description: Number of training model layers
reference: num-layers
- name: optimizer
description: Training model optimizer (sdg, adam or ftrl)
reference: optimizer
trialSpec:
apiVersion: batch/v1
kind: Job
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: training-container
image: docker.io/kubeflowkatib/mxnet-mnist:v1beta1-45c5727
command:
- "python3"
- "/opt/mxnet-mnist/mnist.py"
- "--batch-size=64"
- "--num-layers=${trialParameters.numberLayers}"
- "--optimizer=${trialParameters.optimizer}"
- "--num-epochs=1"
restartPolicy: Never
```
We should verify how [quantized_uniform](https://github.com/kubeflow/katib/blob/master/pkg/suggestion/v1beta1/chocolate/base_service.py#L62-L67) distribution works in Chocolate.
For some reason, not all parameters are generated.
cc @johnugeorge
| closed | 2021-08-24T15:37:05Z | 2021-11-12T02:22:53Z | https://github.com/kubeflow/katib/issues/1636 | [
"kind/bug"
] | andreyvelich | 1 |
aminalaee/sqladmin | sqlalchemy | 711 | Use relative URLs instead of absolute URLs | ### Checklist
- [ ] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
@aminalaee, I configured the package with the application correctly. Over HTTP it works fine, but in production (behind HTTPS) it shows the error below:
Mixed Content: The page at '' was loaded over HTTPS, but requested an insecure stylesheet ''. This request has been blocked; the content must be served over HTTPS.
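A common workaround when TLS is terminated at a reverse proxy is to make the ASGI server trust the forwarded scheme, so generated absolute URLs use https. Assuming uvicorn, that would be:

```bash
uvicorn app:app --proxy-headers --forwarded-allow-ips="*"
```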
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
Production
### Additional context
_No response_ | closed | 2024-02-12T10:58:36Z | 2024-02-20T20:44:53Z | https://github.com/aminalaee/sqladmin/issues/711 | [] | tariqjamal057 | 3 |
fastapi/sqlmodel | pydantic | 431 | Decorator that sets all `SQLModel` fields to `Optional` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options ๐
### Example Code
```python
import sqlmodel
class HeroBase(sqlmodel.SQLModel):
name: str = sqlmodel.Field(index=True)
secret_name: str
age: Optional[int] = sqlmodel.Field(default=None, index=True)
team_id: Optional[int] = sqlmodel.Field(
default=None, foreign_key="team.id"
)
class HeroUpdate(sqlmodel.SQLModel):
name: Optional[str] = None
secret_name: Optional[str] = None
age: Optional[int] = None
team_id: Optional[int] = None
```
### Description
It feels bad to define every field as `Optional` manually. (It is also error-prone.)
### Wanted Solution
It would be better to have some kind of decorator, or something similar, that allows doing this at runtime.
### Wanted Code
```python
import sqlmodel
class HeroBase(sqlmodel.SQLModel):
name: str = sqlmodel.Field(index=True)
secret_name: str
age: Optional[int] = sqlmodel.Field(default=None, index=True)
team_id: Optional[int] = sqlmodel.Field(
default=None, foreign_key="team.id"
)
@sqlmodel.all_fields_to_optional
class HeroUpdate(HeroBase):
pass
```
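A rough sketch of such a decorator for pydantic v1 (which SQLModel 0.0.8 builds on); note it mutates runtime field metadata only and leaves static type hints untouched:

```python
from typing import Type

import sqlmodel


def all_fields_to_optional(model: Type[sqlmodel.SQLModel]) -> Type[sqlmodel.SQLModel]:
    # sketch: relies on pydantic v1 ModelField internals
    for field in model.__fields__.values():
        field.required = False
        field.allow_none = True
        field.default = None
    return model
```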
### Alternatives
_No response_
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
Python 3.10.6
### Additional Context
_No response_ | closed | 2022-09-01T19:27:45Z | 2022-11-28T13:21:52Z | https://github.com/fastapi/sqlmodel/issues/431 | [
"feature"
] | Tomperez98 | 5 |
benbusby/whoogle-search | flask | 1,102 | [Request] Please remove my instance from the instance list - search.rubberverse.xyz | Hello! I come with somewhat sad news: I'm no longer hosting the Whoogle instance.
That said, please remove search.rubberverse.xyz from the public instance list. Thank you, and good luck with your project!
| closed | 2023-12-01T17:19:52Z | 2023-12-05T22:24:25Z | https://github.com/benbusby/whoogle-search/issues/1102 | [] | MrRubberDucky | 0 |
huggingface/datasets | machine-learning | 6,948 | to_tf_dataset: Visible devices cannot be modified after being initialized | ### Describe the bug
When trying to use `to_tf_dataset` with a custom `collate_fn` and parallelism, I am met with the following error, repeated once per worker in ``num_workers``:
```
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _bootstrap
self.run()
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/datasets/utils/tf_utils.py", line 438, in worker_loop
tf.config.set_visible_devices([], "GPU") # Make sure workers don't try to allocate GPU memory
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/framework/config.py", line 566, in set_visible_devices
context.context().set_visible_devices(devices, device_type)
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/tensorflow/python/eager/context.py", line 1737, in set_visible_devices
raise RuntimeError(
RuntimeError: Visible devices cannot be modified after being initialized
```
### Steps to reproduce the bug
1. Download a dataset using HuggingFace load_dataset
2. Define a function that transforms the data in some way to be used in the collate_fn argument
3. Provide a ``batch_size`` and ``num_workers`` value in the ``to_tf_dataset`` function
4. Either retrieve directly or use tfds benchmark to test the dataset
``` python
from datasets import load_dataset
import tensorflow_datasets as tfds
from keras_cv.layers import Resizing

def data_loader(examples):
    # instantiate the Resizing layer, then apply it to the batch of images
    x = Resizing(256, 256, crop_to_aspect_ratio=True)(examples[0]['image'])
    return {'image': x}  # key name assumed; the original referenced an undefined X[0]

ds = load_dataset("logasja/FDF", split="test")
ds = ds.to_tf_dataset(collate_fn=data_loader, batch_size=16, num_workers=2)
tfds.benchmark(ds)
```
### Expected behavior
Use multiple processes to apply transformations from the collate_fn to the tf dataset on the CPU.
### Environment info
- `datasets` version: 2.19.1
- Platform: Linux-6.5.0-1023-oracle-x86_64-with-glibc2.35
- Python version: 3.11.8
- `huggingface_hub` version: 0.22.2
- PyArrow version: 15.0.2
- Pandas version: 2.2.1
- `fsspec` version: 2024.2.0 | open | 2024-06-03T18:10:57Z | 2024-06-03T18:10:57Z | https://github.com/huggingface/datasets/issues/6948 | [] | logasja | 0 |
xinntao/Real-ESRGAN | pytorch | 669 | Segfaults when executed out of directory | Weirdest bug I've ever seen, have to assume there's some funky shit going on in the executables..
Run the command from the directory: ./realesrgan-ncnn-vulkan -i ~/Downloads/MON263.png -o ~/Downloads/MON263x2.png -s 2
Runs absolutely fine, scales image in under 2 seconds.
Run it outside of the directory: ./realesrgan-ncnn-vulkan-20220424-macos/realesrgan-ncnn-vulkan -i ~/Downloads/MON263.png -o ~/Downloads/MON263x2.png -s 2
zsh: segmentation fault ./realesrgan-ncnn-vulkan-20220424-macos/realesrgan-ncnn-vulkan -i -o -s 2
Segfaults.
WTF is going on. | open | 2023-08-06T10:44:51Z | 2023-08-06T10:44:51Z | https://github.com/xinntao/Real-ESRGAN/issues/669 | [] | kirkbushell | 0 |
D4Vinci/Scrapling | web-scraping | 54 | Pass in args to `async_fetch` and `fetch` | ### Have you searched if there an existing issue for this?
- [x] I have searched the existing issues
### Python version (python --version)
3.12
### Scrapling version (scrapling.__version__)
0.2.94
### Dependencies version (pip3 freeze)
```
aiohappyeyeballs==2.4.6
aiohttp==3.11.12
aiosignal==1.3.2
annotated-types==0.7.0
anyio==4.8.0
attrs==25.1.0
browserforge==1.2.3
camoufox==0.4.11
certifi==2024.12.14
charset-normalizer==3.4.1
click==8.1.8
cssselect==1.2.0
distro==1.9.0
fastapi==0.115.6
filelock==3.17.0
freezegun==1.5.1
frozenlist==1.5.0
greenlet==3.1.1
h11==0.14.0
httpcore==1.0.7
httpx==0.28.1
idna==3.10
iniconfig==2.0.0
jiter==0.8.2
joblib==1.4.2
jsonpatch==1.33
jsonpointer==3.0.0
langchain==0.3.19
langchain-core==0.3.37
langchain-openai==0.3.6
langchain-text-splitters==0.3.6
langserve==0.3.1
langsmith==0.3.10
language-tags==1.2.0
lxml==5.3.1
multidict==6.1.0
mypy==1.14.1
mypy-extensions==1.0.0
nltk==3.9.1
numpy==2.2.3
openai==1.64.0
orjson==3.10.15
packaging==24.2
platformdirs==4.3.6
playwright==1.50.0
pluggy==1.5.0
propcache==0.3.0
pydantic==2.10.5
pydantic-partial==0.7.0
pydantic_core==2.27.2
pyee==12.0.0
PyJWT==2.10.1
PySocks==1.7.1
pytest==8.3.4
pytest-asyncio==0.25.2
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-multipart==0.0.20
pytz==2025.1
PyYAML==6.0.2
rebrowser_playwright==1.49.1
regex==2024.11.6
requests==2.32.3
requests-file==2.1.0
requests-toolbelt==1.0.0
scrapling==0.2.94
screeninfo==0.8.1
six==1.17.0
sniffio==1.3.1
SQLAlchemy==2.0.38
sqlalchemy-stubs==0.4
sse-starlette==1.8.2
starlette==0.41.3
stripe==11.6.0
tenacity==9.0.0
tiktoken==0.9.0
tldextract==5.1.3
tqdm==4.67.1
typing_extensions==4.12.2
ua-parser==1.0.1
ua-parser-builtins==0.18.0.post1
urllib3==2.3.0
uvicorn==0.34.0
w3lib==2.3.1
yarl==1.18.3
zstandard==0.23.0
```
### What's your operating system?
Windows 11
### Are you using a separate virtual environment?
Yes
### Expected behavior
I need to be able to pass a list of `args` into the `async_fetch` and `fetch` methods of the `StealthyFetcher`.
Even being able to pass in a list of `args` for the call to `with AsyncCamoufox() as browser` would be nice, so you could pass
`with AsyncCamoufox(...rest of arguments, args=args)`
Ex:
```
async def async_fetch(
self,
url: str,
**kwargs # Catch any additional keyword arguments
) -> Response:
"""Opens up the browser and does your request based on your chosen options.
:param url: Target URL.
:param kwargs: Additional keyword arguments for flexible configuration.
:return: A `Response` object that is the same as `Adaptor` object except it has these added attributes: `status`, `reason`, `cookies`, `headers`, and `request_headers`.
"""
# Default value for 'addons' if not provided in kwargs
addons = [] if self.disable_ads else [DefaultAddons.UBO]
# Store the final response
final_response = None
async def handle_response(finished_response):
nonlocal final_response
if finished_response.request.resource_type == "document" and finished_response.request.is_navigation_request():
final_response = finished_response
camoufox_options = {
'geoip': self.geoip,
'proxy': self.proxy,
'enable_cache': True,
'addons': self.addons,
'exclude_addons': addons,
'headless': self.headless,
'humanize': self.humanize,
'i_know_what_im_doing': True, # To turn warnings off with user configurations
'allow_webgl': self.allow_webgl,
'block_webrtc': self.block_webrtc,
'block_images': self.block_images,
'os': None if self.os_randomize else get_os_name(),
}
camoufox_options.update(kwargs)
async with AsyncCamoufox(**camoufox_options) as browser:
```
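With that change, forwarding browser options would look like this (the `args` key is an assumption about what `AsyncCamoufox` accepts, not a documented parameter):

```python
fetcher = StealthyFetcher()
# `args` here is hypothetical; any extra kwargs get merged into camoufox_options
page = await fetcher.async_fetch("https://example.com", args=["--mute-audio"])
```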
### Actual behavior
Can not pass in extra args.
### Steps To Reproduce
_No response_ | open | 2025-03-24T00:10:23Z | 2025-03-24T15:15:09Z | https://github.com/D4Vinci/Scrapling/issues/54 | [
"enhancement"
] | jaypyles | 2 |
skforecast/skforecast | scikit-learn | 137 | Bayesian Optimization | I am trying to tune the model using scikit-optimize, but a bunch of errors come up. I think it would be a good idea to implement Bayesian search in this library too.
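For reference, the scikit-optimize pattern I am trying to apply looks like this on a toy objective (the objective stands in for fitting a forecaster and returning a validation error):

```python
from skopt import gp_minimize
from skopt.space import Integer, Real

# toy stand-in for "fit forecaster with (lags, alpha), return validation error"
def objective(params):
    lags, alpha = params
    return (lags - 12) ** 2 + (alpha - 0.1) ** 2

result = gp_minimize(
    objective,
    [Integer(1, 24), Real(1e-4, 1.0, prior="log-uniform")],
    n_calls=20,
    random_state=0,
)
print(result.x, result.fun)
```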
| closed | 2022-04-07T09:26:52Z | 2022-09-24T09:25:25Z | https://github.com/skforecast/skforecast/issues/137 | [
"question"
] | CalenDario13 | 11 |
sqlalchemy/alembic | sqlalchemy | 1,315 | Alembic doesn't detect adding unique constraints | **Describe the bug**
Alembic doesn't detect adding unique constraints. Note: I'm not using the default schema.
**Expected behavior**
if I add `unique=True` for single-columns or `UniqueConstraint("col1", "col2")` into `__table_args__` it should generate the unique constraints into migration-file
**To Reproduce**
I'm using tiangolo's SQLModel, and in my database model I wanted to add unique constraints: one is a kind of business key and the other is for mapping tables. Neither is recognized by Alembic.
The one for the business-key is in a base-table (because all non-mapping-tables inherit from this class) and looks like this:
```python
class BusinessKeyModel(PydanticBase):
businessKey: Optional[str] = Field(
alias="businessKey",
max_length=255,
description=DescriptionConstants.BUSINESS_KEY,
nullable=True,
unique=True # <-- added this before generating new migration
)
class BaseTableModel(SQLModel, BusinessKeyModel):
...
class User(GUIDModel, BaseTableModel):
guid: Optional[UUID] = Field(
...,
primary_key=True,
description=DescriptionConstants.GUID,
sa_column=Column(
"guid",
UNIQUEIDENTIFIER,
nullable=False,
primary_key=True,
server_default=text("newsequentialid()"),
),
)
```
So when I now add `unique=True` to `BusinessKeyModel.businessKey` and try to generate a new migration with Alembic (with autogenerate), it doesn't detect the changes.
The same goes for my mapping tables; after I added a `UniqueConstraint` to my `__table_args__`, I think it should detect the changes:
```python
class UserRoleMappingBase(BaseMappingModel, GUIDModel):
userId: UUID
roleId: UUID
class UserRoleMapping(UserRoleMappingBase, table=True):
__table_args__ = (
UniqueConstraint("userId", "roleId"), # <-- added this before generating new migration
{"schema": "dbx_v2"}
)
```
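Given the non-default schema, one thing worth checking (an assumption on my side, not a confirmed diagnosis): autogenerate only inspects objects outside the default schema when `include_schemas=True` is set in `env.py`:

```python
# env.py sketch: let autogenerate see objects outside the default schema
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    include_schemas=True,
)
```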
**Versions.**
- OS: Mac Ventura 13...
- Python: 3.9
- Alembic: 1.10.2
- SQLAlchemy: 1.4.41
- Database: SQL Server
- DBAPI:
**Have a nice day!**
Have a nice day too :) | closed | 2023-09-22T08:04:23Z | 2023-09-22T08:32:29Z | https://github.com/sqlalchemy/alembic/issues/1315 | [
"Microsoft SQL Server"
] | matthiasburger | 1 |
deepset-ai/haystack | machine-learning | 8,093 | docs: clean up docstrings of AnswerBuilder | closed | 2024-07-26T12:35:58Z | 2024-07-30T09:06:41Z | https://github.com/deepset-ai/haystack/issues/8093 | [] | dfokina | 0 |
|
ivy-llc/ivy | numpy | 28,187 | Fix Ivy Failing Test: paddle - creation.ones_like | closed | 2024-02-05T13:18:51Z | 2024-02-10T12:25:57Z | https://github.com/ivy-llc/ivy/issues/28187 | [
"Sub Task"
] | MuhammadNizamani | 1 |
|
kennethreitz/responder | flask | 28 | GraphiQL integration | I've noticed some TODO-s mentioning GraphiQL in the code. Does it make sense to integrate it at this point, or is it too soon?
I would be up for taking a crack at it, if @kennethreitz gives me a thumbs up. | closed | 2018-10-13T13:43:54Z | 2018-10-17T09:35:02Z | https://github.com/kennethreitz/responder/issues/28 | [
"feature"
] | artemgordinskiy | 4 |