repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
onnx/onnx | machine-learning | 6,700 | Crashes when executing model quantization on Deeplabv3 | # Bug Report
### Is the issue related to model conversion?
No
### Describe the bug
I want to run INT8-precision inference for the Deeplabv3 model on the CPU. I first quantized the model, but during execution it threw the error NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for ConvInteger(10) node with name '/resnet/conv1/Conv_quant'. The model file is attached (https://drive.google.com/file/d/10QhV6_lqoD4nnGx3a4HJV-zA7bQJ86E5/view?usp=drive_link).
### System information
- OS Platform and Distribution: Linux Ubuntu 20.04
- ONNX version: 1.17.0
- Python version: 3.9.19
### Reproduction instructions
import onnxruntime as ort
from onnxruntime.quantization import quantize_dynamic, QuantType
import numpy as np
session = ort.InferenceSession("./deeplabv3.onnx",
                               providers=["CPUExecutionProvider"],
                               graph_optimization_level=ort.GraphOptimizationLevel.ORT_ENABLE_EXTENDED)
quantized_model = quantize_dynamic("./deeplabv3.onnx", "./deeplabv3_quantized.onnx", weight_type=QuantType.QInt8)
input_data = np.random.rand(2, 3, 513, 513).astype(np.float32)
input_data_int8 = input_data.astype(np.int8)
inputs = {session.get_inputs()[0].name: input_data}
outputs = session.run(["infer_output"], inputs)
session_int8 = ort.InferenceSession("../deeplabv3_quantized.onnx", providers=["CPUExecutionProvider"])
outputs_int8 = session_int8.run(["infer_output"], quantized_model)
### Expected behavior
Execute normally.
| closed | 2025-02-13T02:54:53Z | 2025-02-13T07:02:21Z | https://github.com/onnx/onnx/issues/6700 | [
"topic: runtime"
] | EzioQR | 1 |
mljar/mercury | data-visualization | 115 | History of executed notebooks | Get history of last executed notebooks | closed | 2022-06-30T12:51:30Z | 2022-07-01T10:54:30Z | https://github.com/mljar/mercury/issues/115 | [
"enhancement"
] | pplonski | 1 |
elliotgao2/toapi | flask | 85 | Route order problem. | At present, we define route as follows:
```python
route = {'/movies/?page=1': '/html/gndy/dyzz/',
'/movies/?page=:page': '/html/gndy/dyzz/index_:page.html',
'/movies/': '/html/gndy/dyzz/'}
```
The problem is the ordering.
We should use a tuple or an OrderedDict to preserve it.
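As a sketch of the suggestion (my own illustration, not toapi's actual API), an ordered pairing keeps match priority explicit: the first matching pattern wins, so more specific patterns stay ahead of the catch-all.

```python
from collections import OrderedDict

# Routes as an ordered sequence of (pattern, target) pairs: insertion
# order is the match order, unlike a plain pre-3.7 dict.
routes = (
    ('/movies/?page=1', '/html/gndy/dyzz/'),
    ('/movies/?page=:page', '/html/gndy/dyzz/index_:page.html'),
    ('/movies/', '/html/gndy/dyzz/'),
)

# An OrderedDict preserves the same ordering while keeping dict-style lookup.
route_map = OrderedDict(routes)

# Iteration order matches definition order.
patterns = [pattern for pattern, _ in route_map.items()]
```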
amisadmin/fastapi-amis-admin | fastapi | 115 | Version dependency conflict when initializing a project with the faa command line | fastapi-user-auth 0.5.0 requires fastapi-amis-admin<0.6.0,>=0.5.0, but you have fastapi-amis-admin 0.6.1 which is incompatible. | closed | 2023-07-25T12:41:50Z | 2023-09-17T08:48:29Z | https://github.com/amisadmin/fastapi-amis-admin/issues/115 | [] | taishen | 3 |
jazzband/django-oauth-toolkit | django | 1,027 | How to use jwt tokens in django-oauth-toolkit without adding to the database? | I have been working on separating the authorization server and the resource server using django-oauth-toolkit. I also wrote my own JWT token generators:
```
from rest_framework_simplejwt.tokens import RefreshToken

def my_acces_token_generator(request, refresh_token=False):
    refresh = RefreshToken.for_user(request.user)
    return str(refresh.access_token)

def my_refresh_token_generator(request, refresh_token=False):
    refresh = RefreshToken.for_user(request.user)
    return str(refresh)
```
And now I noticed that my JWT tokens are added to the database created by django-oauth-toolkit, but the whole point of JWTs is not to store them in a database. Can someone tell me how to do this? I have searched everywhere and cannot find anything.
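For reference, django-oauth-toolkit exposes settings for plugging in custom token generators; a sketch of wiring in the generators above (the `myapp.tokens` module path is a placeholder of mine, and the setting names should be checked against the toolkit's docs):

```python
# settings.py (sketch): point django-oauth-toolkit at the custom generators.
# "myapp.tokens" is a hypothetical module path for the functions above.
OAUTH2_PROVIDER = {
    "ACCESS_TOKEN_GENERATOR": "myapp.tokens.my_acces_token_generator",
    "REFRESH_TOKEN_GENERATOR": "myapp.tokens.my_refresh_token_generator",
}
```

Note that these settings only control how the token strings are generated; the toolkit still persists issued tokens in its AccessToken/RefreshToken tables, which is exactly the behavior the question is asking to avoid.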

| closed | 2021-11-11T22:56:39Z | 2022-01-02T13:34:09Z | https://github.com/jazzband/django-oauth-toolkit/issues/1027 | [
"question"
] | Suldenkov | 1 |
pytest-dev/pytest-xdist | pytest | 577 | AttributeError: 'WorkerController' object has no attribute 'slaveinput' | I recently upgraded from pytest-xdist 1.34.0 to 2.0.0 and found that tests always fail with the new version. I pinned the previous version as a workaround.
I've tested on Ubuntu 18.04 and the [CircleCI docker image for Python 3.7.6](https://github.com/CircleCI-Public/circleci-dockerfiles/blob/master/python/images/3.7.6/Dockerfile).
If I remove --cov=project_folder (project_folder was redacted), the tests run successfully.
```
pytest --numprocesses=auto --cov=project_folder test itest
============================= test session starts ==============================
platform linux -- Python 3.7.6, pytest-6.0.1, py-1.9.0, pluggy-0.13.1
rootdir: /root/repo
plugins: xdist-2.0.0, forked-1.3.0, cov-2.10.0
gw0 C / gw1 I / gw2 I / gw3 I / gw4 I / gw5 I / gw6 I / gw7 I / gw8 I / gw9 I / gw10 I / gw11 I / gw12 I / gw13 I / gw14 I / gw15 I / gw16 I / gw17 IINTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/_pytest/main.py", line 238, in wrap_session
INTERNALERROR> config.hook.pytest_sessionstart(session=session)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/xdist/dsession.py", line 78, in pytest_sessionstart
INTERNALERROR> nodes = self.nodemanager.setup_nodes(putevent=self.queue.put)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/xdist/workermanage.py", line 65, in setup_nodes
INTERNALERROR> return [self.setup_node(spec, putevent) for spec in self.specs]
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/xdist/workermanage.py", line 65, in <listcomp>
INTERNALERROR> return [self.setup_node(spec, putevent) for spec in self.specs]
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/xdist/workermanage.py", line 73, in setup_node
INTERNALERROR> node.setup()
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/xdist/workermanage.py", line 260, in setup
INTERNALERROR> self.config.hook.pytest_configure_node(node=self)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/manager.py", line 87, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pytest_cov/plugin.py", line 239, in pytest_configure_node
INTERNALERROR> self.cov_controller.configure_node(node)
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pytest_cov/engine.py", line 274, in configure_node
INTERNALERROR> workerinput(node).update({
INTERNALERROR> File "/root/repo/venv/lib/python3.7/site-packages/pytest_cov/compat.py", line 42, in fn
INTERNALERROR> return getattr(obj, attr, *args)
INTERNALERROR> AttributeError: 'WorkerController' object has no attribute 'slaveinput'
``` | closed | 2020-08-14T15:23:46Z | 2020-08-14T15:56:26Z | https://github.com/pytest-dev/pytest-xdist/issues/577 | [] | bruceduhamel | 5 |
streamlit/streamlit | deep-learning | 10,406 | New st.query_params doesn't persist state in URL after page refresh while deprecated st.experimental_set_query_params does | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
The new `st.query_params` API does not maintain state in URL parameters after page refresh, while the deprecated `st.experimental_get/set_query_params` methods work correctly.
This is blocking migration from experimental to stable API for applications that rely on URL parameters for state management.
### Reproducible Code Example
```Python
###########################
# login_experimental_api.py
###########################
import streamlit as st

st.set_page_config(page_title="Login State Test - Experimental API")

# Initialize login state
if "is_logged" not in st.session_state:
    query_params = st.experimental_get_query_params()
    st.session_state.is_logged = query_params.get("logged_in", ["False"])[0] == "True"

# Display current state
st.write("Current login state:", st.session_state.is_logged)
st.write("Current query parameters:", st.experimental_get_query_params())

if st.button("Login"):
    st.session_state.is_logged = True
    st.experimental_set_query_params(logged_in="True")
    st.rerun()

st.markdown("---")
st.markdown("""
### How to test:
1. Click "Login" - you'll see the state change to logged in
2. Note the URL parameters change
3. Refresh the page - the state persists
4. Check URL parameters - they are maintained

This demonstrates that the deprecated `st.experimental_set_query_params`
maintains state in URL parameters after page refresh.
""")


##################
# login_new_api.py
##################
import streamlit as st

st.set_page_config(page_title="Login State Test - New API")

# Initialize login state
if "is_logged" not in st.session_state:
    st.session_state.is_logged = st.query_params.get("logged_in", ["False"])[0] == "True"

# Display current state
st.write("Current login state:", st.session_state.is_logged)
st.write("Current query parameters:", dict(st.query_params))

if st.button("Login"):
    st.session_state.is_logged = True
    st.query_params.logged_in = "True"
    st.rerun()

st.markdown("---")
st.markdown("""
### How to test:
1. Click "Login" - you'll see the state change to logged in
2. Note the URL parameters change
3. Refresh the page - the state is lost
4. Check URL parameters - they are also lost

This demonstrates that the new recommended `st.query_params` API
doesn't maintain state in URL parameters after page refresh.
""")
```
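A side note (my own observation, not part of the report): the experimental API returned each query-param value as a list of strings, while the new `st.query_params` returns plain strings, so the `[0]` indexing carried over from the old pattern reads only the first character of the value. A plain-Python illustration of the pitfall:

```python
# Old API shape: values are lists of strings.
old_params = {"logged_in": ["True"]}
old_value = old_params.get("logged_in", ["False"])[0]  # "True"

# New API shape: values are plain strings.
new_params = {"logged_in": "True"}
new_value = new_params.get("logged_in", ["False"])[0]  # "T" (first character!)

# Matching default type for the new shape avoids the indexing entirely.
correct_new_value = new_params.get("logged_in", "False")  # "True"
```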
### Steps To Reproduce
1. Run the attached minimal example files
2. Click "Login" - state will persist after refresh in `login_experimental_api.py`
3. Click "Login" - state will be lost after refresh in `login_new_api.py`
### Expected Behavior
`streamlit run login_new_api.py` should keep the session.
### Current Behavior
Using the old API, the Streamlit page shows yellow alerts:
```
Please replace st.experimental_get_query_params with st.query_params.
st.experimental_get_query_params will be removed after 2024-04-11.
Refer to our [docs page](https://docs.streamlit.io/develop/api-reference/caching-and-state/st.query_params) for more information.
```
Meanwhile, the terminal prints deprecation warnings with the same content:
```
$ streamlit run login_experimental_api.py
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://10.0.100.20:8501
For better performance, install the Watchdog module:
$ xcode-select --install
$ pip install watchdog
2025-02-15 01:25:14.734 Please replace `st.experimental_get_query_params` with `st.query_params`.
`st.experimental_get_query_params` will be removed after 2024-04-11.
Refer to our [docs page](https://docs.streamlit.io/develop/api-reference/caching-and-state/st.query_params) for more information.
2025-02-15 01:25:14.734 Please replace `st.experimental_get_query_params` with `st.query_params`.
`st.experimental_get_query_params` will be removed after 2024-04-11.
Refer to our [docs page](https://docs.streamlit.io/develop/api-reference/caching-and-state/st.query_params) for more information.
```
### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: `Streamlit, version 1.39.0`
- Python version: `Python 3.12.9`
- Operating System: `macOS 15.2`
- Browser: Brave Browser, Safari, Google Chrome
```
❯ streamlit version
Streamlit, version 1.39.0
❯ python --version
Python 3.12.9
❯ sw_vers
ProductName: macOS
ProductVersion: 15.2
BuildVersion: 24C101
```
### Additional Information
_No response_ | closed | 2025-02-15T00:36:02Z | 2025-02-25T23:06:36Z | https://github.com/streamlit/streamlit/issues/10406 | [
"type:bug",
"status:awaiting-user-response",
"feature:query-params"
] | icmtf | 4 |
marimo-team/marimo | data-visualization | 3,696 | Unable to build from PyPi source file due to missing `build_hook.py` | ### Describe the bug
Building with [source](https://pypi.org/project/marimo/#marimo-0.11.0.tar.gz) distributed by PyPi will result in an error in the `0.11.0` release:
<details>
```
❯ python -m build --wheel --no-isolation
* Getting build dependencies for wheel...
Traceback (most recent call last):
File "/usr/lib/python3.13/site-packages/pyproject_hooks/_in_process/_in_process.py", line 389, in <module>
main()
~~~~^^
File "/usr/lib/python3.13/site-packages/pyproject_hooks/_in_process/_in_process.py", line 373, in main
json_out["return_val"] = hook(**hook_input["kwargs"])
~~~~^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/site-packages/pyproject_hooks/_in_process/_in_process.py", line 143, in get_requires_for_build_wheel
return hook(config_settings)
File "/usr/lib/python3.13/site-packages/hatchling/build.py", line 44, in get_requires_for_build_wheel
return builder.config.dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/functools.py", line 1039, in __get__
val = self.func(instance)
File "/usr/lib/python3.13/site-packages/hatchling/builders/config.py", line 577, in dependencies
for dependency in self.dynamic_dependencies:
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.13/functools.py", line 1039, in __get__
val = self.func(instance)
File "/usr/lib/python3.13/site-packages/hatchling/builders/config.py", line 593, in dynamic_dependencies
build_hook = build_hook_cls(
self.root, config, self, self.builder.metadata, '', self.builder.PLUGIN_NAME, self.builder.app
)
File "/usr/lib/python3.13/site-packages/hatchling/builders/hooks/custom.py", line 33, in __new__
raise OSError(message)
OSError: Build script does not exist: build_hook.py
ERROR Backend subprocess exited when trying to invoke get_requires_for_build_wheel
```
</details>
This is caused by the change in [commit 52906ba](https://github.com/marimo-team/marimo/commit/52906baa3fae4907711af82ad69a8866408e94b5#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711) to `pyproject.toml`:
```
...
[tool.hatch.build.hooks.custom]
path = "build_hook.py"
...
```
It can be fixed by placing the `build_hook.py` from the GitHub repo into the extracted source folder.
### Environment
<details>
```
{
"marimo": "0.11.0",
"OS": "Linux",
"OS Version": "6.12.10-zen1-1-zen",
"Processor": "",
"Python Version": "3.13.1",
"Binaries": {
"Browser": "--",
"Node": "v23.7.0"
},
"Dependencies": {
"click": "8.1.7",
"docutils": "0.21.2",
"itsdangerous": "2.1.2",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.24.2",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.1",
"pyyaml": "6.0.2",
"ruff": "missing",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "4.12.2",
"uvicorn": "0.34.0",
"websockets": "12.0"
},
"Optional Dependencies": {},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
_No response_ | closed | 2025-02-05T05:47:55Z | 2025-02-05T13:34:00Z | https://github.com/marimo-team/marimo/issues/3696 | [
"bug"
] | ResRipper | 0 |
aiortc/aiortc | asyncio | 912 | Create infinite stream from image | Hello! Can someone tell me how I can make an endless stream from one picture? The picture may change over time.
Here is my code:
```
def loop_image():
    img = cv2.imread('./images/1.jpg')
    img = cv2.resize(img, (640, 480))
    new_frame = VideoFrame.from_ndarray(img, format="bgr24")
    return new_frame

class VideoStreamTrack(VideoStreamTrack):
    def __init__(self):
        super().__init__()  # don't forget this!
        self.counter = 0
        self.frames = []
        for i in range(3):
            print(f"append image {i}")
            self.frames.append(loop_image())

    async def recv(self):
        print("append image")
        pts, time_base = await self.next_timestamp()
        frame = self.frames[self.counter % 30]
        frame.pts = pts
        frame.time_base = time_base
        self.counter += 1
        self.frames.append(loop_image())
        return frame
``` | closed | 2023-08-02T16:24:10Z | 2023-10-26T07:39:59Z | https://github.com/aiortc/aiortc/issues/912 | [] | Clarxxon | 0 |
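As a pure-Python sketch of the frame-cycling logic in question (aiortc and OpenCV omitted so it stands alone): taking the index modulo the current number of frames avoids the hard-coded `% 30`, which can raise `IndexError` while fewer than 30 frames exist.

```python
class FrameCycler:
    """Cycles endlessly over a (possibly growing) list of frames."""

    def __init__(self, frames):
        self.frames = list(frames)
        self.counter = 0

    def next_frame(self):
        # Modulo by the *current* length, not a fixed 30, so the index
        # is always valid no matter how many frames have been loaded.
        frame = self.frames[self.counter % len(self.frames)]
        self.counter += 1
        return frame

cycler = FrameCycler(["frame0", "frame1", "frame2"])
first_pass = [cycler.next_frame() for _ in range(4)]  # wraps back to frame0
```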
microsoft/unilm | nlp | 770 | layoutxlm to onnx question | I used layoutxlm to train on my data for a downstream task. When I convert the trained model to ONNX using the Hugging Face layoutlmv2-to-onnx code, the problem below occurs; can you give me some advice? It seems that concatenating two different types causes this problem, but I did not modify any code, just ran the XFUN token-classification example, which confuses me a lot. I hope you can help us.

| open | 2022-06-24T14:17:38Z | 2022-06-24T14:21:57Z | https://github.com/microsoft/unilm/issues/770 | [] | githublsk | 0 |
JaidedAI/EasyOCR | python | 416 | Arguments and vertical Japanese text | I have used EasyOCR successfully a few times now, but I'm running into issues. Firstly, a guide recommended using a "paragraph" argument. Putting "--paragraph=1" into my script caused an "Unrecognized arguments" error, and that error persisted with every variation I tried. Typing "easyocr -h" tells me that the options I can use are language, input file, detail of output, and whether I want to use the GPU. These options seem very bare-bones, in contrast to the guides.
Also, Japanese vertical text does not work. It only grabs the uppermost character of each column and puts it on its own line in the output (which is very annoying even when it does manage to grab the text correctly). Can that be fixed? I tried downloading the bigger japanese.pth and replacing the one in the install directory, but the program deleted it. Is it not compatible with the latest git download? Did I get the wrong version? Is there even a way to check version? The standard "--version" did absolutely nothing.
One last thing, how do I make it stop pestering me about my two different GPUs?
To clarify, I'm using Windows 10's CMD to run EasyOCR. | closed | 2021-04-12T11:40:35Z | 2022-03-02T09:24:57Z | https://github.com/JaidedAI/EasyOCR/issues/416 | [] | Troceleng | 6 |
koxudaxi/datamodel-code-generator | fastapi | 1,592 | Comprehensive Documentation of Supported Input Formats | ## Goal
To provide clear and comprehensive documentation detailing the support status for various input formats like JsonSchema and OpenAPI, including different drafts like Draft 7 and Draft 2019-09, to help users understand the extent to which they can utilize datamodel-code-generator with different specifications.
## Background
The current documentation lacks detailed information on the support status of various input formats. Users may struggle to understand which aspects of JsonSchema or OpenAPI are supported, especially across different drafts. This lack of clarity can prevent users from utilizing datamodel-code-generator to its full extent.
## Suggested Documentation Structure:
### Supported Input Formats:
List of all supported input formats with a brief description.
#### JsonSchema:
Support status for Draft 7, Draft 2019-09, etc.
Any known limitations or workarounds.
#### OpenAPI:
Support status for versions 3.0, 3.1, etc.
Any known limitations or workarounds.
## Related Issues
https://github.com/koxudaxi/datamodel-code-generator/issues/1578 | open | 2023-10-04T16:39:43Z | 2024-07-24T18:13:06Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1592 | [
"documentation"
] | koxudaxi | 1 |
ageitgey/face_recognition | python | 737 | receiving error with face_encoding of face_recognition library after cuda setup | * face_recognition version: 1.2.3
* Python version: 3.6.5
* Operating System: windows 10
### What I Did
I have just set up CUDA with dlib, and after that I receive this error when computing face encodings with the face_recognition library:
```
face_encodings = face_recognition.face_encodings(frame, face_locations, num_jitters=1)
File "C:\Users\abhinav.jhanwar\AppData\Local\Continuum\anaconda3\lib\site-packages\face_recognition\api.py", line 210, in face_encodings
:return: A list of 128-dimensional face encodings (one for each face in the image)
File "C:\Users\abhinav.jhanwar\AppData\Local\Continuum\anaconda3\lib\site-packages\face_recognition\api.py", line 210, in <listcomp>
:return: A list of 128-dimensional face encodings (one for each face in the image)
RuntimeError: Error while calling cudnnCreate(&handles[new_device_id]) in file C:\dlib-master\dlib-master\dlib\cuda\cudnn_dlibapi.cpp:104. code: 1, reason: CUDA Runtime API initialization failed.
```
| open | 2019-02-05T10:05:06Z | 2020-08-24T12:12:33Z | https://github.com/ageitgey/face_recognition/issues/737 | [] | AbhinavJhanwar | 3 |
robotframework/robotframework | automation | 4,404 | Document that failing test setup stops execution even if continue-on-failure mode is active | When support for controlling the continue-on-failure mode was added (#2285), we didn't think how tests setups should behave when the mode is activated. Currently if you have a test like
```robotframework
*** Test Cases ***
Example
[Tags] robot:continue-on-failure
[Setup] Setup Keyword
Keyword In Body
Another Keyword
```
and the `Setup Keyword` fails, keywords in the test body aren't run. That can be considered inconsistent because continue-on-failure mode is explicitly enabled, but I actually like this behavior. You can have some action in setup so that execution ends if it fails, but keywords in the body that are validating that everything went as expected would all be run if the setup succeeds. This would also add some new functionality to setups that are otherwise not really different from normal keywords.
What do others (especially @oboehmer) think about this? Should we stop execution after a failing setup also when continue-on-failure mode is on? If yes, we just need to add some tests for the current behavior and mention this in the User Guide. If not, then we need to change the code as well.
A problem with changing the code is that the change would be backwards incompatible. It's questionable whether we could do it without a deprecation period in RF 5.1, and a deprecation adds more work.
| closed | 2022-07-14T10:01:43Z | 2022-07-15T07:26:50Z | https://github.com/robotframework/robotframework/issues/4404 | [
"enhancement",
"priority: medium",
"alpha 1",
"acknowledge"
] | pekkaklarck | 2 |
d2l-ai/d2l-en | tensorflow | 2,605 | d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16, num_layers=2) | ---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
Cell In[21], line 4
      1 encoder = d2l.Seq2SeqEncoder(vocab_size=10, embed_size=8, num_hiddens=16,
      2                              num_layers=2)
      3 encoder.eval()
----> 4 decoder = Seq2SeqAttentionDecoder(vocab_size=10, embed_size=8, num_hiddens=16,
      5                                   num_layers=2)
      6 decoder.eval()
      7 X = torch.zeros((4, 7), dtype=torch.long)  # (batch_size, num_steps)
Cell In[20], line 5
      2 def __init__(self, vocab_size, embed_size, num_hiddens, num_layers,
      3              dropout=0, **kwargs):
      4     super(Seq2SeqAttentionDecoder, self).__init__(**kwargs)
----> 5     self.attention = d2l.AdditiveAttention(
      6         num_hiddens, num_hiddens, num_hiddens, dropout)
      7     self.embedding = nn.Embedding(vocab_size, embed_size)
      8     self.rnn = nn.GRU(
      9         embed_size + num_hiddens, num_hiddens, num_layers,
     10         dropout=dropout)
TypeError: __init__() takes 3 positional arguments but 5 were given | open | 2024-06-17T07:45:10Z | 2024-06-17T08:29:55Z | https://github.com/d2l-ai/d2l-en/issues/2605 | [] | achaosss | 2 |
LibreTranslate/LibreTranslate | api | 659 | Feature Request: SRT (Subtitle) Support | (I can't do this as a pull request, I'm not a Python programmer.)
I would like to request that the translate file mode support SRT (Subtitle) files.
The format of these is pretty simple: they are text files with an incrementing number, followed on the next line by a time range in the form `HH:MM:SS,mmm --> HH:MM:SS,mmm` (milliseconds), and finally the text, which **may** include some HTML markup, primarily `<b>` or `<i>`. Sometimes there is a `<br>` (or `<br/>`). A blank line separates text 'frames'.
Mostly LibreTranslate needs to _ignore_ the number and the time ranges entirely, and for the text, treat it like HTML.
There may already be Python code somewhere for handling SRT files.
### Example:
```
1
00:00:02,830 --> 00:00:05,560
<i>The miniature giant space hamster
descends from the starry sky.</i>
2
00:00:05,700 --> 00:00:06,360
Boo!
```
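The format described above is simple enough that a small parser fits in a few lines; a sketch (my own illustration, not LibreTranslate code) that splits frames and keeps the index and time range apart from the translatable text:

```python
import re

TIME_RANGE = re.compile(r"^\d{2}:\d{2}:\d{2},\d{3} --> \d{2}:\d{2}:\d{2},\d{3}$")

def parse_srt(srt_text):
    """Split an SRT file into (index, time_range, text) frames."""
    frames = []
    for block in srt_text.strip().split("\n\n"):
        lines = block.strip().splitlines()
        if len(lines) < 3 or not TIME_RANGE.match(lines[1].strip()):
            continue  # skip malformed blocks rather than failing
        # Only the text lines should ever reach the translator; the
        # index and time range must pass through untouched.
        frames.append((lines[0].strip(), lines[1].strip(), "\n".join(lines[2:])))
    return frames

sample = """1
00:00:02,830 --> 00:00:05,560
<i>The miniature giant space hamster
descends from the starry sky.</i>

2
00:00:05,700 --> 00:00:06,360
Boo!"""
frames = parse_srt(sample)
```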
(If someone decides to work on this and needs sample SRTs, please contact me.)
### Use Case:
Translating subtitle text is an obvious use of LibreTranslate. For example, _Subtitle Edit_, an application to aid in editing or creating subtitle files, even has a "Google Translate" button with in it for machine translating lines of dialogue (it just opens a web browser).
### Notes:
At present, LibreTranslate handles SRT (as plain text) in kind of a mixed bag: sometimes it trims or strips out the time range, and sometimes it changes the frame number, though it is incredibly inconsistent about both.
| open | 2024-08-09T22:50:53Z | 2024-08-11T09:05:21Z | https://github.com/LibreTranslate/LibreTranslate/issues/659 | [
"enhancement"
] | LeeThompson | 1 |
jschneier/django-storages | django | 735 | Google cloud storage not retrying file downloads | We are randomly getting this sort of error when trying to download a file:
```
File "./utils/tools/fingerprint.py", line 11, in generate_fingerprint
img_file.seek(0)
File "/usr/local/lib/python3.7/site-packages/django/core/files/utils.py", line 20, in <lambda>
seek = property(lambda self: self.file.seek)
File "/usr/local/lib/python3.7/site-packages/django/db/models/fields/files.py", line 43, in _get_file
self._file = self.storage.open(self.name, 'rb')
File "/usr/local/lib/python3.7/site-packages/django/core/files/storage.py", line 33, in open
return self._open(name, mode)
File "/usr/local/lib/python3.7/site-packages/storages/backends/gcloud.py", line 164, in _open
file_object = GoogleCloudFile(name, mode, self)
File "/usr/local/lib/python3.7/site-packages/storages/backends/gcloud.py", line 32, in __init__
self.blob = storage.bucket.get_blob(name)
File "/usr/local/lib/python3.7/site-packages/google/cloud/storage/bucket.py", line 380, in get_blob
_target_object=blob,
File "/usr/local/lib/python3.7/site-packages/google/cloud/_http.py", line 290, in api_request
headers=headers, target_object=_target_object)
File "/usr/local/lib/python3.7/site-packages/google/cloud/_http.py", line 183, in _make_request
return self._do_request(method, url, headers, data, target_object)
File "/usr/local/lib/python3.7/site-packages/google/cloud/_http.py", line 212, in _do_request
url=url, method=method, headers=headers, data=data)
File "/usr/local/lib/python3.7/site-packages/google/auth/transport/requests.py", line 208, in request
method, url, data=data, headers=request_headers, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 533, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 646, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 498, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer'))
```
It looks like django-storages gcloud adapter does not retry the attempt if it fails, and handling it manually everywhere would lead to a lot of boilerplate code (I was considering using this: googleapis/google-cloud-python#5879)
Any ways to fix this? | closed | 2019-08-14T13:36:54Z | 2023-08-27T15:31:48Z | https://github.com/jschneier/django-storages/issues/735 | [
"google"
] | and3rson | 1 |
home-assistant/core | asyncio | 140,551 | "Salt nearly empty" is off, despite dish washer and Home Connect app reporting otherwise | ### The problem
My Siemens dishwasher and the Home Connect app say that salt is low, but that isn't reflected in HA: "Salt nearly empty" is off and has never been on. I don't think this sensor existed before 2025.3, but it now doesn't seem functional.
Dishwasher model is SN85TX00CE.
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
has never worked
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
homeconnect
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/home_connect/
### Diagnostics information
[home-assistant_home_connect_2025-03-13T20-38-34.565Z.txt](https://github.com/user-attachments/files/19236918/home-assistant_home_connect_2025-03-13T20-38-34.565Z.txt)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-13T21:23:46Z | 2025-03-24T09:23:48Z | https://github.com/home-assistant/core/issues/140551 | [
"integration: home_connect"
] | richardvinger | 6 |
huggingface/pytorch-image-models | pytorch | 1,391 | [FEATURE] Add MobileOne | It's Apple's new model for mobiles https://github.com/apple/ml-mobileone | closed | 2022-08-02T09:23:36Z | 2022-09-04T04:49:38Z | https://github.com/huggingface/pytorch-image-models/issues/1391 | [
"enhancement"
] | MohamedAliRashad | 1 |
matplotlib/matplotlib | data-visualization | 29,396 | [Bug]: Style flag errors trying to save figures as PDF with font Inter | ### Bug summary
I have installed the font Inter with `brew install font-inter` and successfully imported it into matplotlib such that the figure from the plot below displays correctly with the Inter font as specified; however, when it comes to saving, I get the error below in "actual outcome".
### Code for reproduction
```Python
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
matplotlib.use("QtAgg")
# Ensure 'Inter' is available
available_fonts = [f.name for f in fm.fontManager.ttflist]
if 'Inter' in available_fonts:
plt.rcParams['font.family'] = 'Inter'
else:
print("Inter font is not available. Please ensure it is installed.")
# generate a test plot and save it
fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])
ax.set_title("Example Plot with Inter Font")
plt.show()
fig.savefig("example_plot.pdf", format='pdf')
```
### Actual outcome
```
Traceback (most recent call last):
File "/Users/atom/hemanpro/HeMan/misc_tools/addfont.py", line 18, in <module>
fig.savefig("example_plot.pdf", format='pdf')
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/figure.py", line 3490, in savefig
self.canvas.print_figure(fname, **kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backends/backend_qtagg.py", line 75, in print_figure
super().print_figure(*args, **kwargs)
~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backend_bases.py", line 2184, in print_figure
result = print_method(
filename,
...<3 lines>...
bbox_inches_restore=_bbox_inches_restore,
**kwargs)
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backend_bases.py", line 2040, in <lambda>
print_method = functools.wraps(meth)(lambda *args, **kwargs: meth(
~~~~^
*args, **{k: v for k, v in kwargs.items() if k not in skip}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backends/backend_pdf.py", line 2789, in print_pdf
file.finalize()
~~~~~~~~~~~~~^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backends/backend_pdf.py", line 827, in finalize
self.writeFonts()
~~~~~~~~~~~~~~~^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backends/backend_pdf.py", line 973, in writeFonts
fonts[Fx] = self.embedTTF(filename, chars)
~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^
File "/Users/atom/hemanpro/HeMan/.venv/lib/python3.13/site-packages/matplotlib/backends/backend_pdf.py", line 1416, in embedTTF
sf = font.style_flags
^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/enum.py", line 726, in __call__
return cls.__new__(cls, value)
~~~~~~~~~~~^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/enum.py", line 1207, in __new__
raise exc
File "/opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/enum.py", line 1184, in __new__
result = cls._missing_(value)
File "/opt/homebrew/Cellar/python@3.13/3.13.1/Frameworks/Python.framework/Versions/3.13/lib/python3.13/enum.py", line 1480, in _missing_
raise ValueError(
...<2 lines>...
))
ValueError: <flag 'StyleFlags'> invalid value 589824
given 0b0 10010000000000000000
allowed 0b0 00000000000000000011
.venvFAIL
```
### Expected outcome
The figure is saved as a PDF with the Inter font.
### Additional information
This only occurs on my macOS installation of Python 3.13.1 with matplotlib 3.10.0.
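For reference, the failure can be reproduced in isolation: matplotlib 3.10.0's `StyleFlags` enum only defines the ITALIC and BOLD bits, while this Inter build reports extra FreeType style bits (589824 = 0b10010000000000000000). Masking to the known bits avoids the `ValueError`; this is an illustrative sketch of that idea, not matplotlib's actual code or the final upstream fix:

```python
from enum import Flag

class StyleFlags(Flag):  # mirrors the two bits matplotlib 3.10.0 accepts
    ITALIC = 1
    BOLD = 2

raw = 589824  # value FreeType reports for this Inter build
known = StyleFlags.ITALIC.value | StyleFlags.BOLD.value
sf = StyleFlags(raw & known)  # masking keeps construction from raising
```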
### Operating system
macOS Sequoia 15.2
### Matplotlib Version
3.10.0
### Matplotlib Backend
QtAgg, cairo, macosx
### Python version
3.13.1
### Jupyter version
_No response_
### Installation
pip | closed | 2025-01-03T03:38:40Z | 2025-01-08T09:18:59Z | https://github.com/matplotlib/matplotlib/issues/29396 | [
"status: confirmed bug",
"Release critical",
"topic: text/fonts"
] | ohshitgorillas | 3 |
biolab/orange3 | numpy | 6,273 | Pythagorean Forest: report does not work | **What's wrong?**
Clicking the report button produces an error. The problem is that the widget declares `graph_name = "scene"`, although it doesn't have a "scene"; it uses a left-to-right wrapping list view instead.
**How can we reproduce the problem?**
Give it some data and click the report button.
**What's your environment?**
Irrelevant. | closed | 2022-12-25T21:09:19Z | 2023-01-04T20:42:59Z | https://github.com/biolab/orange3/issues/6273 | [
"bug"
] | janezd | 0 |
reloadware/reloadium | pandas | 62 | [Feature Request] Enable changing breakpoints during debugging | **Describe the bug**
Reloadium is not sensitive to runtime changes to breakpoints.
**To Reproduce**
Steps to reproduce the behavior:
1. Set breakpoints
2. Run debug with reloadium
3. Add/remove breakpoints
4. Reloadium insensitive to breakpoint changes
**Expected behavior**
Reloadium should respond to runtime breakpoint changes
**Desktop (please complete the following information):**
- OS: macOS
- OS version: 12.5
- Reloadium package version: 0.9.4
- PyCharm plugin version: None
- Editor: PyCharm CE
- Run mode: Debug
**Additional context**
If every execution frame is hooked by reloadium, we can decide whether to reload or not with latest breakpoints. | closed | 2022-11-07T22:18:49Z | 2022-11-07T23:22:13Z | https://github.com/reloadware/reloadium/issues/62 | [] | James4Ever0 | 0 |
QingdaoU/OnlineJudge | django | 289 | Runtime error | I tried to test the system with a basic operation like subtraction and the maximum memory set to 8 MB, but the result is a runtime error. When I set the memory to 256 MB, the submission is accepted. | closed | 2020-03-20T11:35:56Z | 2020-03-20T14:03:00Z | https://github.com/QingdaoU/OnlineJudge/issues/289 | [] | tiodh | 10
ray-project/ray | pytorch | 51,506 | CI test windows://python/ray/tests:test_multi_tenancy is consistently_failing | CI test **windows://python/ray/tests:test_multi_tenancy** is consistently_failing. Recent failures:
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aaf1-9737-4a02-a7f8-1d7087c16fb1
- https://buildkite.com/ray-project/postmerge/builds/8965#0195aa03-5c4f-4156-97c5-9793049512c1
DataCaseName-windows://python/ray/tests:test_multi_tenancy-END
Managed by OSS Test Policy | closed | 2025-03-19T00:07:58Z | 2025-03-19T21:53:33Z | https://github.com/ray-project/ray/issues/51506 | [
"bug",
"triage",
"core",
"flaky-tracker",
"ray-test-bot",
"ci-test",
"weekly-release-blocker",
"stability"
] | can-anyscale | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 2,115 | Could this support android chrome or chromium? | open | 2025-01-06T14:05:35Z | 2025-01-06T14:05:35Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/2115 | [] | Gemini-NX | 0 |
pywinauto/pywinauto | automation | 793 | Doubt about HwndWrapper and | First of all I need to say that this is not an issue, but I need your support because after several hours of trying to get it running I still have problems.
To clarify I will use the following image.

At the moment I am able to minimize and maximize the Win2 (sub_window) using the following code
```python
app = Application().connect(path=SNDBX_PATH + "/example.exe")
atlfc = app.window(title_re='.*Main.*')
atlf = atlfc[u'Win2']
simtabctrlview = atlfc.A
simtabctrlview.Click()
```
My goal is to click the different elements in Win2, i.e. (A, B, C), to capture the different images.
Using apps like swapy I got the code, but it is not working; I received `"Could not find 'A' in 'dict_keys([''])"`
When I use the children option I can see something like this:
```python
print(atlf.children())
```

I also tried to use the handle
```python
atlf.child_window(handle = 0x002314AC).click()
```
But it only works if I have manually clicked the element beforehand, so this is not a solution
@vasily-v-ryabov, could you or your team please give me an idea of what I can do?
Thanks in advance | open | 2019-08-13T22:03:56Z | 2019-09-03T06:32:36Z | https://github.com/pywinauto/pywinauto/issues/793 | [
"question"
] | FrankRan23 | 6 |
youfou/wxpy | api | 425 | WeChat personal-account API: check out wkteam.gitbook.io; since 2017, WeChat accounts that have never logged into the web client can no longer use wxpy~ | Recommend everyone use wkteam.gitbook.io | closed | 2019-11-25T02:55:48Z | 2019-12-04T02:36:56Z | https://github.com/youfou/wxpy/issues/425 | [] | 2048537793 | 0
pytest-dev/pytest-flask | pytest | 108 | Builds failing after Werkzeug 1.0 bump | Your dependency on Werkzeug is breaking our build. Once Werkzeug bumped their version to 1.0, Pytest-flask is breaking in circleci.
Here's some info from Werkzeug's issues:
https://github.com/pallets/werkzeug/issues/1714
And here's our error traceback:
```
Traceback (most recent call last):
File "/usr/local/bin/pytest", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 73, in main
config = _prepareconfig(args, plugins)
File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 224, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/usr/local/lib/python3.7/site-packages/pluggy/hooks.py", line 286, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 93, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 87, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/usr/local/lib/python3.7/site-packages/_pytest/helpconfig.py", line 89, in pytest_cmdline_parse
config = outcome.get_result()
File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 80, in get_result
raise ex[1].with_traceback(ex[2])
File "/usr/local/lib/python3.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 794, in pytest_cmdline_parse
self.parse(args)
File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 1000, in parse
self._preparse(args, addopts=addopts)
File "/usr/local/lib/python3.7/site-packages/_pytest/config/__init__.py", line 948, in _preparse
self.pluginmanager.load_setuptools_entrypoints("pytest11")
File "/usr/local/lib/python3.7/site-packages/pluggy/manager.py", line 299, in load_setuptools_entrypoints
plugin = ep.load()
File "/usr/local/lib/python3.7/site-packages/importlib_metadata/__init__.py", line 94, in load
module = import_module(match.group('module'))
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "/usr/local/lib/python3.7/site-packages/_pytest/assertion/rewrite.py", line 143, in exec_module
exec(co, module.__dict__)
File "/usr/local/lib/python3.7/site-packages/pytest_flask/plugin.py", line 14, in <module>
from werkzeug import cached_property
ImportError: cannot import name 'cached_property' from 'werkzeug' (/usr/local/lib/python3.7/site-packages/werkzeug/__init__.py)
``` | closed | 2020-02-06T23:14:09Z | 2020-02-06T23:56:05Z | https://github.com/pytest-dev/pytest-flask/issues/108 | [] | zoombear | 2 |
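Until pytest-flask imported `cached_property` from `werkzeug.utils`, the common stopgap (assuming the project pins dependencies in a requirements file) was to hold Werkzeug below 1.0:

```
# requirements.txt (temporary pin; remove once pytest-flask is updated)
Werkzeug<1.0
```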
hankcs/HanLP | nlp | 1,334 | import com.hankcs.hanlp.corpus.MSR; | The 1.7.5 code no longer defines the related Java files; compiling the demo and book examples gives related errors. Has anyone run into the same problem? | closed | 2019-11-30T10:52:50Z | 2020-01-01T10:48:11Z | https://github.com/hankcs/HanLP/issues/1334 | [
"ignored"
] | jiangangduan | 2 |
ageitgey/face_recognition | machine-learning | 814 | No Issue; Idea of increasing the speed on a Raspberry Pi | I was trying to increase the speed of a simple face detection script on the Raspberry Pi. Since the multitasking example uses OpenCV, it is not easy to set up (it requires compiling OpenCV on the Raspberry Pi).
I therefore created an alternative semi-multitasking approach; here is my code:
```
from multiprocessing import Process
import face_recognition
import picamera
import numpy as np
def detect(output):
    face_locations = face_recognition.face_locations(output)
    faces = len(face_locations)
    if faces != 0:
        print("Found {} faces in image.".format(faces))
    else:
        print("No Face found")

if __name__ == '__main__':
    camera = picamera.PiCamera()
    camera.resolution = (320, 240)
    output = np.empty((240, 320, 3), dtype=np.uint8)
    while True:
        camera.capture(output, format="rgb")
        # Note: a new process is spawned per frame and never joined, so
        # workers can pile up if detection is slower than capture.
        p = Process(target=detect, args=(output,))
        p.start()
```
It runs at approximately twice the speed of the same code running in a single process (ca. 2 fps on the Pi 3+).
Since I think this code could be optimized even more, I did not yet want to open a pull request before hearing your feedback. | open | 2019-05-05T12:12:44Z | 2019-07-15T22:11:12Z | https://github.com/ageitgey/face_recognition/issues/814 | [] | NathansLab | 1 |
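One bounded variant of the approach in the report above is a small worker pool, so detections overlap capture without spawning one process per frame; the pool size and the stand-in `detect` body below are illustrative, and the picamera capture loop is omitted:

```python
from multiprocessing import Pool

def detect(frame):
    # Stand-in for face_recognition.face_locations(frame): returns a number.
    return sum(frame) % 3

def run(frames, workers=2):
    """Process frames concurrently with a fixed number of worker processes."""
    with Pool(processes=workers) as pool:
        # apply_async keeps the submitting loop non-blocking, while the pool
        # caps concurrency at `workers` instead of one process per frame.
        pending = [pool.apply_async(detect, (f,)) for f in frames]
        return [p.get() for p in pending]
```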
graphistry/pygraphistry | pandas | 218 | [ENH] Error propagation in files mode | ```python
df = pd.DataFrame({'s': ['a', 'b', 'c'], 'd': ['b', 'c', 'a']})
graphistry.edges(df, 'WRONG', 'd').plot(as_files=True, render=False)
```
Will not hint at the binding error, while `as_files=False` will. Both should; unclear whether PyGraphistry is not inspecting validity on the viz-create response, or validity is not being set. | open | 2021-02-15T00:38:52Z | 2021-02-15T00:44:21Z | https://github.com/graphistry/pygraphistry/issues/218 | [
"enhancement",
"good-first-issue"
] | lmeyerov | 0 |
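Until the server-side validity check is propagated in files mode, a client-side guard can catch the bad binding before upload; `check_bindings` below is a sketch, not a PyGraphistry API:

```python
import pandas as pd

def check_bindings(df, source, destination):
    """Fail fast when edge-binding columns are missing from the dataframe."""
    missing = [c for c in (source, destination) if c not in df.columns]
    if missing:
        raise ValueError(f"edge binding column(s) not in dataframe: {missing}")

df = pd.DataFrame({'s': ['a', 'b', 'c'], 'd': ['b', 'c', 'a']})
check_bindings(df, 's', 'd')       # passes silently
# check_bindings(df, 'WRONG', 'd') would raise ValueError before any upload
```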
jupyterhub/repo2docker | jupyter | 948 | Support for Python dependencies in a form of a wheel file located in repository | <!-- Thank you for contributing. These HTML comments will not render in the issue, but you can delete them once you've read them if you prefer! -->
### Bug description
Straightforward installation of a private Python wheel kept in the repository during docker build does not work, see [topic](https://discourse.jupyter.org/t/local-wheel-file-in-requirements-txt/5768).
As @manics figured out, it happens because `requirements.txt` is copied explicitly and used during the image build before the rest of the repository contents arrive.
#### Expected behaviour
local `*.whl` files from repo root should also be copied along with scripts describing dependencies.
#### Actual behaviour
Wheel files are not copied and therefore pip fails to install python requirement specified as a `*.whl` file because it is not found.
### How to reproduce
<!-- Use this section to describe the steps that a user would take to experience this bug. -->
1. Go to mybinder.com
2. Use https://github.com/dvoskov/DARTS-workshop repository
3. Specify commit `512f3dd06d61fbdacfbb63b2c8fca88238f4e8d2`
4. Hit launch
5. Scroll down
6. See error:
> WARNING: Requirement 'darts-0.1.0-cp37-cp37m-linux_x86_64.whl' looks like a filename, but the file does not exist
> ...
> Processing ./darts-0.1.0-cp37-cp37m-linux_x86_64.whl
> ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory: '/home/jovyan/darts-0.1.0-cp37-cp37m-linux_x86_64.whl'
### Your personal set up
<!-- Tell us a little about the system you're using. You can see the guidelines for setting up and reporting this information at https://repo2docker.readthedocs.io/en/latest/contributing/contributing.html#setting-up-for-local-development. -->
No personal setup needed
| closed | 2020-08-28T17:36:51Z | 2020-08-31T12:13:52Z | https://github.com/jupyterhub/repo2docker/issues/948 | [] | mkhait | 4 |
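A workaround consistent with repo2docker's build order at the time: move the wheel install out of `requirements.txt` and into a `postBuild` script, which runs after the full repository contents are copied into the image (the wheel filename is taken from the report above):

```sh
#!/bin/bash
# postBuild: runs after the repo contents are in the image, so local
# wheel files exist by the time pip sees them.
set -euo pipefail
pip install ./darts-0.1.0-cp37-cp37m-linux_x86_64.whl
```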
NVlabs/neuralangelo | computer-vision | 13 | ValueError when extracting mesh | Hi, when I was extracting mesh from the checkpoint, I got a value error where the terminal output is as follows:
```
(neuralangelo) vrlab@k8s-master-38:~/wangph1/workspace/neuralangelo$ CUDA_VISIBLE_DEVICES=2 torchrun --nproc_per_node=${GPUS} projects/neuralangelo/scripts/extract_mesh.py \
> --logdir=logs/${GROUP}/${NAME} \
> --config=${CONFIG} \
> --checkpoint=${CHECKPOINT} \
> --output_file=${OUTPUT_MESH} \
> --resolution=${RESOLUTION} \
> --block_res=${BLOCK_RES}
Running mesh extraction with 1 GPUs.
Make folder logs/example_group/example_name
Setup trainer.
Using random seed 0
model parameter count: 366,706,268
Initialize model weights using type: none, gain: None
Using random seed 0
Allow TensorFloat32 operations on supported devices
Loading checkpoint (local): logs/example_group/example_name/epoch_02857_iteration_000040000_checkpoint.pt
- Loading the model...
Done with loading the checkpoint.
Extracting surface at resolution 2048 2048 2048
vertices: 0
faces: 0
Traceback (most recent call last):
File "projects/neuralangelo/scripts/extract_mesh.py", line 95, in <module>
main()
File "projects/neuralangelo/scripts/extract_mesh.py", line 90, in main
mesh.vertices = mesh.vertices * meta["sphere_radius"] + np.array(meta["sphere_center"])
ValueError: operands could not be broadcast together with shapes (0,) (3,)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 502080) of binary: /home/vrlab/anaconda3/envs/neuralangelo/bin/python
Traceback (most recent call last):
File "/home/vrlab/anaconda3/envs/neuralangelo/bin/torchrun", line 8, in <module>
sys.exit(main())
File "/home/vrlab/anaconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
return f(*args, **kwargs)
File "/home/vrlab/anaconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 762, in main
run(args)
File "/home/vrlab/anaconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/run.py", line 753, in run
elastic_launch(
File "/home/vrlab/anaconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 132, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/vrlab/anaconda3/envs/neuralangelo/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
projects/neuralangelo/scripts/extract_mesh.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2023-08-14_19:58:51
host : k8s-master-38
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 502080)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
The server GPU used for extracting is a single RTX3090 with 24GB VRAM.
I wonder if this is a bug or a mistake on my part. Thanks. | closed | 2023-08-14T12:04:39Z | 2023-08-16T03:15:20Z | https://github.com/NVlabs/neuralangelo/issues/13 | [] | AuthorityWang | 8
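The final traceback line in the report above is just the symptom: marching cubes produced an empty vertex array, and shape `(0,)` cannot broadcast against the `(3,)` sphere center. A defensive sketch of the rescaling step (function and argument names are illustrative, not the script's):

```python
import numpy as np

def rescale_vertices(vertices, sphere_center, sphere_radius):
    verts = np.asarray(vertices, dtype=float).reshape(-1, 3)  # empty -> (0, 3)
    if verts.size == 0:
        # No surface was extracted: a checkpoint/resolution problem,
        # better reported directly than as a broadcasting error.
        raise RuntimeError("extracted mesh has 0 vertices; check checkpoint and resolution")
    return verts * sphere_radius + np.asarray(sphere_center, dtype=float)
```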
raphaelvallat/pingouin | pandas | 362 | Pairwise tests with mixed design fail with new pandas 2.0 | Hey @raphaelvallat,
I noticed that the example on pairwise tests with a mixed design (number 4 here: https://pingouin-stats.org/build/html/generated/pingouin.pairwise_tests.html#pingouin.pairwise_tests) fails after updating to pandas 2.0 with
```
TypeError: Could not convert AugustJanuaryJune to numeric
``` | closed | 2023-06-01T08:44:35Z | 2023-06-04T12:33:18Z | https://github.com/raphaelvallat/pingouin/issues/362 | [] | jajcayn | 2 |
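This is the pandas 2.0 behaviour change where `groupby(...).mean()` no longer silently drops non-numeric columns and raises instead; a minimal reproduction outside pingouin (column names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    "group": ["a", "a", "b"],
    "month": ["August", "January", "June"],  # non-numeric column
    "score": [1.0, 2.0, 3.0],
})

# pandas < 2.0 dropped "month" with a warning; pandas 2.0 raises
# "TypeError: Could not convert ... to numeric" on df.groupby("group").mean().
means = df.groupby("group")[["score"]].mean()  # select numeric columns explicitly
```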
miguelgrinberg/microblog | flask | 298 | Mysql2::Error: Data too long for column 'output' at row 1 | While sending mail from Postal, I continuously get this error. How can I fix this?
An internal error occurred while sending this message. This message will be retried automatically. If this persists, contact support for assistance.
Mysql2::Error: Data too long for column 'output' at row 1
| closed | 2021-08-25T21:31:25Z | 2023-12-03T10:57:14Z | https://github.com/miguelgrinberg/microblog/issues/298 | [
"question"
] | judan31 | 2 |
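The error in the report above means a message's output exceeded the column's TEXT size limit. Assuming direct MySQL access to the Postal databases, the usual fix is widening the column; the table name below is a placeholder, so locate the real one first:

```sql
-- Find which table owns the too-small column:
SELECT table_schema, table_name
FROM information_schema.columns
WHERE column_name = 'output';

-- Then widen it (substitute the schema/table found above):
ALTER TABLE your_table MODIFY output MEDIUMTEXT;
```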
localstack/localstack | python | 11,512 | bug: duplicate EC2 security group rules | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Some security group rules added via `ec2:AuthorizeSecurityGroupIngress` are duplicated.
### Expected Behavior
Security group rules should not be duplicated.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack
```yml
# compose.yml
services:
aws:
image: localstack/localstack:3.7
volumes:
- /var/run/docker.sock:/var/run/docker.sock
ports:
- "4566:4566"
environment:
LS_LOG: trace
SERVICES: "ec2"
```
#### Client commands
Create a security group
```sh
group_id=$(
awslocal ec2 create-security-group \
--group-name="example" \
--description="Just an example" \
| jq -r .GroupId
)
```
Add some rules
```sh
awslocal ec2 authorize-security-group-ingress \
--group-id="${group_id}" \
--ip-permissions='[
{
"FromPort": 1,
"ToPort": 2,
"IpProtocol": "tcp",
"IpRanges": [{
"CidrIp": "127.0.0.1/32",
"Description": "first rule"
}]
},
{
"FromPort": 1,
"ToPort": 2,
"IpProtocol": "tcp",
"IpRanges": [{
"CidrIp": "127.0.0.2/32",
"Description": "second rule"
}]
}
]'
```
<details>
<summary><em>Output</em></summary>
```json
{
"Return": true,
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-19b625c802efc975a",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.2/32",
"Description": "second rule",
"Tags": []
}
]
}
```
</details>
Make sure the rules have been added correctly
```sh
awslocal ec2 describe-security-group-rules \
--filters="Name=group-id,Values=${group_id}"
```
<details>
<summary><em>Output</em></summary>
```json
{
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-c397ed63f861a0362",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.1/32",
"Description": "first rule",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-19b625c802efc975a",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.2/32",
"Description": "second rule",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-6773f91e11c3ff172",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": true,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "0.0.0.0/0",
"Tags": []
}
]
}
```
</details>
Invoke `ec2:DescribeSecurityGroups` operation
```sh
awslocal ec2 describe-security-groups
```
<details>
<summary><em>Output</em></summary>
```json
{
"SecurityGroups": [
{
"Description": "default VPC security group",
"GroupName": "default",
"IpPermissions": [],
"OwnerId": "000000000000",
"GroupId": "sg-9897979fa57feca88",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"Tags": [],
"VpcId": "vpc-226bb034"
},
{
"Description": "Just an example",
"GroupName": "example",
"IpPermissions": [
{
"FromPort": 1,
"IpProtocol": "tcp",
"IpRanges": [
{
"CidrIp": "127.0.0.1/32",
"Description": "first rule"
},
{
"CidrIp": "127.0.0.2/32",
"Description": "second rule"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"ToPort": 2,
"UserIdGroupPairs": []
}
],
"OwnerId": "000000000000",
"GroupId": "sg-95594140376c802f5",
"IpPermissionsEgress": [
{
"IpProtocol": "-1",
"IpRanges": [
{
"CidrIp": "0.0.0.0/0"
}
],
"Ipv6Ranges": [],
"PrefixListIds": [],
"UserIdGroupPairs": []
}
],
"Tags": [],
"VpcId": "vpc-226bb034"
}
]
}
```
</details>
Check the rules again, `second rule` is now duplicated
```sh
awslocal ec2 describe-security-group-rules \
--filters="Name=group-id,Values=${group_id}"
```
<details>
<summary><em>Output</em></summary>
```json
{
"SecurityGroupRules": [
{
"SecurityGroupRuleId": "sgr-c397ed63f861a0362",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.1/32",
"Description": "first rule",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-c397ed63f861a0362",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.2/32",
"Description": "second rule",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-19b625c802efc975a",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": false,
"IpProtocol": "tcp",
"FromPort": 1,
"ToPort": 2,
"CidrIpv4": "127.0.0.2/32",
"Description": "second rule",
"Tags": []
},
{
"SecurityGroupRuleId": "sgr-6773f91e11c3ff172",
"GroupId": "sg-95594140376c802f5",
"GroupOwnerId": "000000000000",
"IsEgress": true,
"IpProtocol": "-1",
"FromPort": -1,
"ToPort": -1,
"CidrIpv4": "0.0.0.0/0",
"Tags": []
}
]
}
```
</details>
### Environment
```markdown
- OS: Ubuntu 22.04.3 LTS
- LocalStack:
LocalStack version: 3.7.2
LocalStack Docker image sha: sha256:811d4cd67e6cc833cd5849ddaac454abd90c0d0fc00d402f8f82ee47926c5e10
LocalStack build date: 2024-09-06
LocalStack build git hash: a607fed91
```
### Anything else?
I observed that immediately after adding the rules everything looks normal. Yet, after invoking `ec2:DescribeSecurityGroups`, the duplicates appear.
Hence, it appears to me that this bug is related to some side effect of `ec2:DescribeSecurityGroups` operation trying to group rules when, for example, "from" and "to" ports are the same.
Also note how the output of `authorize-security-group-ingress` is incorrect (it shows one rule instead of two).
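A toy model of the suspected side effect (all names are illustrative, not LocalStack internals): if the describe call groups permissions by port/protocol and extends the first matching rule's IP-range list in place, the stored rules end up aliased, and the first rule's id later shows up once per merged CIDR, matching the duplicated entries above:

```python
# Toy model of a grouping step that mutates stored rules; not LocalStack code.
rules = [
    {"id": "sgr-1", "from_port": 1, "to_port": 2, "ranges": [{"cidr": "127.0.0.1/32"}]},
    {"id": "sgr-2", "from_port": 1, "to_port": 2, "ranges": [{"cidr": "127.0.0.2/32"}]},
]

grouped = {}
for rule in rules:
    key = (rule["from_port"], rule["to_port"])
    merged = grouped.setdefault(key, rule)  # first rule becomes the group entry...
    if merged is not rule:
        merged["ranges"].extend(rule["ranges"])  # ...and is mutated in place

# The stored rule objects are now inconsistent: sgr-1 "owns" both CIDRs while
# sgr-2 still lists the second one, so a later listing shows three entries.
```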
#### Debug logs
```log
DEBUG --- [ MainThread] l.utils.docker_utils : Using SdkDockerClient. LEGACY_DOCKER_CLIENT: False, SDK installed: True
WARN --- [ MainThread] l.services.internal : Enabling diagnose endpoint, please be aware that this can expose sensitive information via your network.
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.runtime.components.aws = <class 'localstack.aws.components.AwsComponents'>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.runtime.components:aws
LocalStack version: 3.7.2
LocalStack build date: 2024-09-06
LocalStack build git hash: a607fed91
DEBUG --- [ MainThread] localstack.utils.run : Executing command: rm -rf "/tmp/localstack"
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._patch_botocore_endpoint_in_memory = <function _patch_botocore_endpoint_in_memory at 0xffffb1d3f060>)
DEBUG --- [ MainThread] plux.runtime.manager : plugin localstack.hooks.on_infra_start:_patch_botocore_endpoint_in_memory is disabled, reason: Load condition for plugin was false
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._patch_botocore_json_parser = <function _patch_botocore_json_parser at 0xffffb1d3eca0>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:_patch_botocore_json_parser
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._patch_cbor2 = <function _patch_cbor2 at 0xffffb1d3ede0>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:_patch_cbor2
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._publish_config_as_analytics_event = <function _publish_config_as_analytics_event at 0xffffb1131760>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:_publish_config_as_analytics_event
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._publish_container_info = <function _publish_container_info at 0xffffb1131b20>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:_publish_container_info
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start._run_init_scripts_on_start = <function _run_init_scripts_on_start at 0xffffb12ed6c0>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:_run_init_scripts_on_start
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.apply_aws_runtime_patches = <function apply_aws_runtime_patches at 0xffffb1131e40>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:apply_aws_runtime_patches
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.apply_runtime_patches = <function apply_runtime_patches at 0xffffb1132200>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:apply_runtime_patches
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.conditionally_enable_debugger = <function conditionally_enable_debugger at 0xffffb1132660>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:conditionally_enable_debugger
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.delete_cached_certificate = <function delete_cached_certificate at 0xffffb1132c00>)
DEBUG --- [ MainThread] plux.runtime.manager : plugin localstack.hooks.on_infra_start:delete_cached_certificate is disabled, reason: Load condition for plugin was false
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.deprecation_warnings = <function deprecation_warnings at 0xffffb1132a20>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:deprecation_warnings
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.register_cloudformation_deploy_ui = <function register_cloudformation_deploy_ui at 0xffffb1132e80>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:register_cloudformation_deploy_ui
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.register_custom_endpoints = <function register_custom_endpoints at 0xffffb1009440>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:register_custom_endpoints
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.register_partition_adjusting_proxy_listener = <function register_partition_adjusting_proxy_listener at 0xffffb11328e0>)
DEBUG --- [ MainThread] plux.runtime.manager : plugin localstack.hooks.on_infra_start:register_partition_adjusting_proxy_listener is disabled, reason: Load condition for plugin was false
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.setup_dns_configuration_on_host = <function setup_dns_configuration_on_host at 0xffffb10098a0>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:setup_dns_configuration_on_host
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.start_dns_server = <function start_dns_server at 0xffffb1009760>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:start_dns_server
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_start.validate_configuration = <function validate_configuration at 0xffffb1009300>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.hooks.on_infra_start:validate_configuration
DEBUG --- [ MainThread] localstack.dns.server : Determined fallback dns: 127.0.0.11
DEBUG --- [ MainThread] localstack.dns.server : Starting DNS servers (tcp/udp port 53 on 0.0.0.0)...
DEBUG --- [ MainThread] localstack.dns.server : Adding host .*localhost.localstack.cloud pointing to LocalStack
DEBUG --- [ MainThread] localstack.dns.server : Adding host .*localhost.localstack.cloud with record DynamicRecord(record_type=<RecordType.A: 1>, record_id=None)
DEBUG --- [ MainThread] localstack.dns.server : Adding host .*localhost.localstack.cloud with record DynamicRecord(record_type=<RecordType.AAAA: 2>, record_id=None)
DEBUG --- [-functhread1] localstack.dns.server : DNS Server started
DEBUG --- [ MainThread] localstack.dns.server : DNS server startup finished.
DEBUG --- [ MainThread] localstack.runtime.init : Init scripts discovered: {BOOT: [], START: [], READY: [], SHUTDOWN: []}
DEBUG --- [ MainThread] localstack.plugins : Checking for the usage of deprecated community features and configs...
DEBUG --- [ MainThread] localstack.dns.server : Overwriting container DNS server to point to localhost
DEBUG --- [ MainThread] localstack.utils.ssl : Attempting to download local SSL certificate file
DEBUG --- [ MainThread] localstack.utils.ssl : SSL certificate downloaded successfully
DEBUG --- [ MainThread] plux.runtime.manager : instantiating plugin PluginSpec(localstack.runtime.server.twisted = <class 'localstack.runtime.server.plugins.TwistedRuntimeServerPlugin'>)
DEBUG --- [ MainThread] plux.runtime.manager : loading plugin localstack.runtime.server:twisted
DEBUG --- [ady_monitor)] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_ready._run_init_scripts_on_ready = <function _run_init_scripts_on_ready at 0xffffb12ed800>)
DEBUG --- [ady_monitor)] plux.runtime.manager : loading plugin localstack.hooks.on_infra_ready:_run_init_scripts_on_ready
DEBUG --- [ady_monitor)] plux.runtime.manager : instantiating plugin PluginSpec(localstack.hooks.on_infra_ready.register_virtual_host_routes = <function register_virtual_host_routes at 0xffffa90ccae0>)
DEBUG --- [ady_monitor)] plux.runtime.manager : plugin localstack.hooks.on_infra_ready:register_virtual_host_routes is disabled, reason: Load condition for plugin was false
Ready.
DEBUG --- [et.reactor-0] l.a.p.service_router : building service catalog index cache file /var/lib/localstack/cache/service-catalog-3_7_2-1_35_10.pickle
DEBUG --- [et.reactor-0] rolo.gateway.wsgi : POST localhost:4566/
DEBUG --- [et.reactor-0] plux.runtime.manager : instantiating plugin PluginSpec(localstack.aws.provider.ec2:default = <function ec2 at 0xffff839a4a40>)
DEBUG --- [et.reactor-0] plux.runtime.manager : loading plugin localstack.aws.provider:ec2:default
INFO --- [et.reactor-0] localstack.utils.bootstrap : Execution of "_load_service_plugin" took 521.60ms
INFO --- [et.reactor-0] localstack.utils.bootstrap : Execution of "require" took 521.75ms
DEBUG --- [et.reactor-0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type.
INFO --- [et.reactor-0] localstack.request.aws : AWS ec2.CreateSecurityGroup => 200; 000000000000/us-east-1; CreateSecurityGroupRequest({'Description': 'Just an example', 'GroupName': 'example'}, headers={'Host': 'localhost:4566', 'Accept-Encoding': 'identity', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.34.13 md/Botocore#1.35.13 ua/2.0 os/linux#5.15.0-119-generic md/arch#aarch64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#legacy botocore/1.35.13', 'X-Amz-Date': '20240913T094221Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test/20240913/us-east-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=bc973a4d197c670fb9298a029d1b721f17a2223cb3064c631fc9ea087743be4e', 'amz-sdk-invocation-id': '8bfa0030-bf83-499b-a2ae-ae0b52254bf1', 'amz-sdk-request': 'attempt=1', 'Content-Length': '96', 'x-moto-account-id': '000000000000'}); CreateSecurityGroupResult({'GroupId': 'sg-95594140376c802f5', 'Tags': []}, headers={'Content-Type': 'text/xml', 'Content-Length': '254'})
DEBUG --- [et.reactor-0] rolo.gateway.wsgi : POST localhost:4566/
DEBUG --- [et.reactor-0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type.
INFO --- [et.reactor-0] localstack.request.aws : AWS ec2.AuthorizeSecurityGroupIngress => 200; 000000000000/us-east-1; AuthorizeSecurityGroupIngressRequest({'GroupId': 'sg-95594140376c802f5', 'IpPermissions': [{'FromPort': 1, 'IpProtocol': 'tcp', 'IpRanges': [{'CidrIp': '127.0.0.1/32', 'Description': 'first rule'}], 'ToPort': 2}, {'FromPort': 1, 'IpProtocol': 'tcp', 'IpRanges': [{'CidrIp': '127.0.0.2/32', 'Description': 'second rule'}], 'ToPort': 2}]}, headers={'Host': 'localhost:4566', 'Accept-Encoding': 'identity', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.34.13 md/Botocore#1.35.13 ua/2.0 os/linux#5.15.0-119-generic md/arch#aarch64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#legacy botocore/1.35.13', 'X-Amz-Date': '20240913T094230Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test/20240913/us-east-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=cec6c131d8ab0ca53547f824333c1b6300fcbce2c5e7b43745be51e55762d67f', 'amz-sdk-invocation-id': 'ef053b1b-2375-4313-82b6-e9b98ab45b59', 'amz-sdk-request': 'attempt=1', 'Content-Length': '449', 'x-moto-account-id': '000000000000'}); AuthorizeSecurityGroupIngressResult({'Return': True, 'SecurityGroupRules': [{'SecurityGroupRuleId': 'sgr-19b625c802efc975a', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.2/32', 'Description': 'second rule', 'Tags': []}]}, headers={'Content-Type': 'text/xml', 'Content-Length': '623'})
DEBUG --- [et.reactor-0] rolo.gateway.wsgi : POST localhost:4566/
DEBUG --- [et.reactor-0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type.
INFO --- [et.reactor-0] localstack.request.aws : AWS ec2.DescribeSecurityGroupRules => 200; 000000000000/us-east-1; DescribeSecurityGroupRulesRequest({'Filters': [{'Name': 'group-id', 'Values': ['sg-95594140376c802f5']}]}, headers={'Host': 'localhost:4566', 'Accept-Encoding': 'identity', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.34.13 md/Botocore#1.35.13 ua/2.0 os/linux#5.15.0-119-generic md/arch#aarch64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#legacy botocore/1.35.13', 'X-Amz-Date': '20240913T094301Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test/20240913/us-east-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=7f06f64bed179ce37f8d06da2eed9dd5d382bd0ccae82af02dddd65b5d22a746', 'amz-sdk-invocation-id': '12d61346-e7ee-4d77-88be-ba4abf31a16c', 'amz-sdk-request': 'attempt=1', 'Content-Length': '113', 'x-moto-account-id': '000000000000'}); DescribeSecurityGroupRulesResult({'SecurityGroupRules': [{'SecurityGroupRuleId': 'sgr-c397ed63f861a0362', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.1/32', 'Description': 'first rule', 'Tags': []}, {'SecurityGroupRuleId': 'sgr-19b625c802efc975a', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.2/32', 'Description': 'second rule', 'Tags': []}, {'SecurityGroupRuleId': 'sgr-6773f91e11c3ff172', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': True, 'IpProtocol': '-1', 'FromPort': -1, 'ToPort': -1, 'CidrIpv4': '0.0.0.0/0', 'Tags': []}]}, headers={'Content-Type': 'text/xml', 'Content-Length': '1218'})
DEBUG --- [et.reactor-0] rolo.gateway.wsgi : POST localhost:4566/
DEBUG --- [et.reactor-0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type.
INFO --- [et.reactor-0] localstack.request.aws : AWS ec2.DescribeSecurityGroups => 200; 000000000000/us-east-1; DescribeSecurityGroupsRequest({}, headers={'Host': 'localhost:4566', 'Accept-Encoding': 'identity', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.34.13 md/Botocore#1.35.13 ua/2.0 os/linux#5.15.0-119-generic md/arch#aarch64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#legacy botocore/1.35.13', 'X-Amz-Date': '20240913T094315Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test/20240913/us-east-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=5548d7a38f841a4172461452f955fb05b5fad0ade327d9622262e3e0656d7fd1', 'amz-sdk-invocation-id': 'e33fd5e9-7f7e-40a6-88bf-41e53a53d0a7', 'amz-sdk-request': 'attempt=1', 'Content-Length': '48', 'x-moto-account-id': '000000000000'}); DescribeSecurityGroupsResult({'SecurityGroups': [{'Description': 'default VPC security group', 'GroupName': 'default', 'IpPermissions': [], 'OwnerId': '000000000000', 'GroupId': 'sg-9897979fa57feca88', 'IpPermissionsEgress': [{'IpProtocol': '-1', 'IpRanges': [{'CidrIp': '0.0.0.0/0'}], 'Ipv6Ranges': [], 'PrefixListIds': [], 'UserIdGroupPairs': []}], 'Tags': [], 'VpcId': 'vpc-226bb034'}, {'Description': 'Just an example', 'GroupName': 'example', 'IpPermissions': [{'FromPort': 1, 'IpProtocol': 'tcp', 'IpRanges': [{'CidrIp': '127.0.0.1/32', 'Description': 'first rule'}, {'CidrIp': '127.0.0.2/32', 'Description': 'second rule'}], 'Ipv6Ranges': [], 'PrefixListIds': [], 'ToPort': 2, 'UserIdGroupPairs': []}], 'OwnerId': '000000000000', 'GroupId': 'sg-95594140376c802f5', 'IpPermissionsEgress': [{'IpProtocol': '-1', 'IpRanges': [{'CidrIp': '0.0.0.0/0'}], 'Ipv6Ranges': [], 'PrefixListIds': [], 'UserIdGroupPairs': []}], 'Tags': [], 'VpcId': 'vpc-226bb034'}]}, headers={'Content-Type': 'text/xml', 'Content-Length': '1383'})
DEBUG --- [et.reactor-0] rolo.gateway.wsgi : POST localhost:4566/
DEBUG --- [et.reactor-0] l.aws.protocol.serializer : No accept header given. Using request's Content-Type (application/x-www-form-urlencoded; charset=utf-8) as preferred response Content-Type.
INFO --- [et.reactor-0] localstack.request.aws : AWS ec2.DescribeSecurityGroupRules => 200; 000000000000/us-east-1; DescribeSecurityGroupRulesRequest({'Filters': [{'Name': 'group-id', 'Values': ['sg-95594140376c802f5']}]}, headers={'Host': 'localhost:4566', 'Accept-Encoding': 'identity', 'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8', 'User-Agent': 'aws-cli/1.34.13 md/Botocore#1.35.13 ua/2.0 os/linux#5.15.0-119-generic md/arch#aarch64 lang/python#3.11.9 md/pyimpl#CPython cfg/retry-mode#legacy botocore/1.35.13', 'X-Amz-Date': '20240913T094335Z', 'Authorization': 'AWS4-HMAC-SHA256 Credential=test/20240913/us-east-1/ec2/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=eb57f248aefba9ad37cf9b45bd62979313432999052df7b9475a784ce7ca422c', 'amz-sdk-invocation-id': '22da0bb2-a600-48ca-b07c-d8a4b0171829', 'amz-sdk-request': 'attempt=1', 'Content-Length': '113', 'x-moto-account-id': '000000000000'}); DescribeSecurityGroupRulesResult({'SecurityGroupRules': [{'SecurityGroupRuleId': 'sgr-c397ed63f861a0362', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.1/32', 'Description': 'first rule', 'Tags': []}, {'SecurityGroupRuleId': 'sgr-c397ed63f861a0362', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.2/32', 'Description': 'second rule', 'Tags': []}, {'SecurityGroupRuleId': 'sgr-19b625c802efc975a', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': False, 'IpProtocol': 'tcp', 'FromPort': 1, 'ToPort': 2, 'CidrIpv4': '127.0.0.2/32', 'Description': 'second rule', 'Tags': []}, {'SecurityGroupRuleId': 'sgr-6773f91e11c3ff172', 'GroupId': 'sg-95594140376c802f5', 'GroupOwnerId': '000000000000', 'IsEgress': True, 'IpProtocol': '-1', 'FromPort': -1, 'ToPort': -1, 'CidrIpv4': '0.0.0.0/0', 'Tags': []}]}, 
headers={'Content-Type': 'text/xml', 'Content-Length': '1550'})
``` | open | 2024-09-13T09:51:45Z | 2025-02-13T09:03:13Z | https://github.com/localstack/localstack/issues/11512 | [
"type: bug",
"aws:ec2",
"status: backlog"
] | mcieno | 2 |
allenai/allennlp | nlp | 5,597 | Pass a custom pair of strings to `PretrainedTransformerTokenizer._reverse_engineer_special_tokens`. | Hello,
`PretrainedTransformerTokenizer._reverse_engineer_special_tokens` is called inside `PretrainedTransformerTokenizer.__init__` ([here](https://github.com/allenai/allennlp/blob/5f5f8c30fe7a9530e011787b6ff10549dc5dac37/allennlp/data/tokenizers/pretrained_transformer_tokenizer.py#L78)) . If I understand correctly, the method does some sanity checks by comparing a pair of token ids that should be different from each other.
The current implementation compares `("a", "b")` or `("1", "2")`, but neither pair works for some Japanese pre-trained tokenizers. For example, the vocabulary of the [NICT Japanese BERT](https://alaginrc.nict.go.jp/nict-bert/index.html) contains none of `{"a", "b", "1", "2"}` because it only accepts full-width characters.
So I'd like to pass a custom pair to `PretrainedTransformerTokenizer._reverse_engineer_special_tokens` by adding a parameter to `PretrainedTransformerTokenizer.__init__`. I'm ready to make a PR, but I opened this issue to make sure that it's a good approach.
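Concretely, the shape of what I have in mind (illustrative names only, not the real allennlp signature): the probe pair becomes a constructor argument, with the current behavior as the default.

```python
# Sketch only: placeholder names, not allennlp's actual API.
class TokenizerSketch:
    def __init__(self, probe_pair=("a", "b")):
        first, second = probe_pair
        # the sanity check only needs two strings that map to different ids
        assert first != second, "probe tokens must differ"
        self.probe_pair = probe_pair
        # ... the real class would then call something like
        # self._reverse_engineer_special_tokens(first, second, ...)

# a full-width pair would work for vocabularies like NICT Japanese BERT:
jp_tokenizer = TokenizerSketch(probe_pair=("１", "２"))
```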
Thanks,
| closed | 2022-03-13T00:35:47Z | 2022-04-08T01:25:37Z | https://github.com/allenai/allennlp/issues/5597 | [
"Contributions welcome",
"Feature request"
] | tarohi24 | 3 |
DistrictDataLabs/yellowbrick | matplotlib | 577 | Change "alpha" parameter to "opacity" | In working on issue #558, it's becoming clear that there is potential for naming confusion and collision around our use of `alpha` to mean the opacity/translucency (e.g. in the matplotlib sense), since in scikit-learn, `alpha` is often used to reference other things, such as the regularization hyperparameter, e.g.
```
oz = ResidualsPlot(Lasso(alpha=0.1), alpha=0.7)
```
As such, we should probably change our `alpha` to something like `opacity`. This update will impact a lot of the codebase and docs, and therefore has the potential to be disruptive, so I propose waiting until things are a bit quieter.
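When we do make the change, a small shim along these lines (names and the default value are placeholders) could keep `alpha` working with a deprecation warning during the transition:

```python
import warnings

def resolve_opacity(opacity=None, alpha=None, default=0.75):
    """Prefer the new `opacity` kwarg; accept legacy `alpha` with a warning."""
    if alpha is not None:
        warnings.warn("'alpha' is deprecated; use 'opacity'", DeprecationWarning)
        if opacity is None:
            opacity = alpha
    return default if opacity is None else opacity
```

That way `ResidualsPlot(Lasso(alpha=0.1), opacity=0.7)` reads unambiguously, while old call sites keep working for a release or two.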
| closed | 2018-08-21T21:10:37Z | 2020-06-12T04:57:37Z | https://github.com/DistrictDataLabs/yellowbrick/issues/577 | [
"type: technical debt",
"level: intermediate"
] | rebeccabilbro | 1 |
mwaskom/seaborn | data-science | 3,258 | Legends include extraneous entries for categorical variables | (I understand that legends are a can of worms, and if the answer to this particular question is, we thought about it and it's a mess, that's ok, but thought I'd bring it up just in case.)
When you pass in a column with a restricted set of values to something like `hue`, only the categories used in the view of the data make it into the legend:
```
from seaborn import scatterplot
import numpy as np
import pandas as pd
tmp = pd.DataFrame({
'x': np.tile(np.arange(10), 3),
'y': np.repeat(np.arange(10), 3),
'catg': np.random.choice(['a', 'b', 'c'], size=30)
})
scatterplot(data=tmp.loc[tmp.catg=='a', :], x='x', y='y', hue='catg')
```

that is to say, 'b' and 'c' aren't in the legend.
However, if you cast to categorical type, then you get extra legend entries:
```
tmp = tmp.astype({'catg': 'category'})
scatterplot(data=tmp.loc[tmp.catg=='a', :], x='x', y='y', hue='catg')
```

I imagine this is because even when you subset the original df, the list of categories stays with the column, and seaborn takes that list as the set of values to be used in the legend.
This would be convenient / efficient in the case where there are huge numbers of categories and you don't want to check which ones are used in the data. However, my guess is that that's an edge case, and the more common case is that there are 5-10 categories, and the user wants to plot 0-a few of them. I would expect default behavior to only show the ones used, and maybe have a kwarg for showing them all.
The only way around this I see, given seaborn's current behavior, is to cast the column to a string just before plotting, or make a copy of the df, neither of which is particularly desirable. Let me know if I'm missing something obvious. Thanks!
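For reference, here is the string-cast workaround spelled out in pure pandas (the unused categories linger on the subset until the cast, which is what leaks into the legend):

```python
import numpy as np
import pandas as pd

tmp = pd.DataFrame({
    'x': np.tile(np.arange(10), 3),
    'y': np.repeat(np.arange(10), 3),
    'catg': np.repeat(['a', 'b', 'c'], 10),
}).astype({'catg': 'category'})

sub = tmp.loc[tmp.catg == 'a', :]
# the filtered frame still carries all three categories
leftover = list(sub['catg'].cat.categories)

# casting to str right before plotting keeps only the observed values
plot_data = sub.assign(catg=sub['catg'].astype(str))
```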
| closed | 2023-02-13T20:46:23Z | 2023-02-13T23:25:14Z | https://github.com/mwaskom/seaborn/issues/3258 | [] | jonahpearl | 3 |
saulpw/visidata | pandas | 2,182 | More sensible and legible autocomplete order | Following [a thought in a related issue](https://github.com/saulpw/visidata/issues/2179#issuecomment-1862956197), when I'm at the expression prompt and I hit tab to autocomplete the column name, it presents them in an odd order that I can't identify. It's not left to right. It'd be nice if it started with current column then it could move: leftward, rightward, or back to 0 then rightward. | open | 2023-12-20T15:02:32Z | 2024-03-12T10:54:55Z | https://github.com/saulpw/visidata/issues/2182 | [
"wishlist"
] | reagle | 2 |
python-restx/flask-restx | flask | 588 | should RESTX_ERROR_404_HELP be disabled by default? | **Ask a question**
background:
I came across https://github.com/python-restx/flask-restx/issues/550 and followed it to https://github.com/flask-restful/flask-restful/issues/780; I see similar behavior in both libraries. I use restx.
At least, it seems to me that the author of flask-restful believes the option should never have existed.
I believe that RESTX_ERROR_404_HELP should at least be disabled by default because:
* it causes confusion. I spent some time figuring out what was responsible for the extra error message.
* there could be security concerns: it could help attackers enumerate the routes.
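Schematically, the proposal is just flipping the fallback in the config lookup (illustrative code, not the actual flask-restx internals):

```python
def error_404_help_enabled(app_config: dict) -> bool:
    """Current behavior falls back to True; the proposal is to fall back to False."""
    return bool(app_config.get("RESTX_ERROR_404_HELP", False))  # proposed default
```

Until then, apps can opt out explicitly with `app.config["RESTX_ERROR_404_HELP"] = False`.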
| open | 2024-01-03T13:36:04Z | 2024-01-15T12:28:50Z | https://github.com/python-restx/flask-restx/issues/588 | [
"question"
] | frankli0324 | 1 |
slackapi/bolt-python | fastapi | 471 | issues in making HTML body and redirects in custom success handlers | I am trying to replicate the success handler after the OAuth flow in my program so I can add some custom logic there and still be able to make it look like the default.
I took the example from [here](https://slack.dev/bolt-python/concepts#authenticating-oauth) in the customizing section that follows.
I can see that the default success handler is being created by the `CallbackResponseBuilder` class, in `_build_callback_success_response`
I tried to put a breakpoint to inspect the body there, but when i put the breakpoint in that file, it never reaches it, so no luck there.
I tried to copy the HTML from the webpage like so, but it still doesn't work; it just displays the text as-is:
```
def slack_oauth_success(args: SuccessArgs) -> BoltResponse:
    assert args.request is not None
    # do stuff here
    response = BoltResponse(
        status=200,  # you can redirect users too
        body="<h2>Thank you!</h2><p>Redirecting to the Slack App... click <a href='slack://app?team=asdf&id=asdf'>here</a>. If you use the browser version of Slack, click <a href='https://app.slack.com/client/asdf' target='_blank'>this link</a> instead.</p>"
    )
    return response
```
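What seems to be missing above is a `Content-Type` header: a plain-text default would explain the HTML showing as-is. A framework-agnostic sketch of the two response shapes; these dicts just mirror the kwargs you would hand to `BoltResponse`, and the team/app IDs are placeholders:

```python
def success_page(team_id: str, app_id: str) -> dict:
    """An HTML body only renders if the response says it is text/html."""
    deep_link = f"slack://app?team={team_id}&id={app_id}"
    return {
        "status": 200,
        "headers": {"Content-Type": "text/html; charset=utf-8"},
        "body": (
            "<html><body><h2>Thank you!</h2>"
            f"<p>Redirecting to the Slack App... click <a href='{deep_link}'>here</a>.</p>"
            "</body></html>"
        ),
    }

def redirect_to(url: str) -> dict:
    """The redirect variant: status 302 plus a Location header, empty body."""
    return {"status": 302, "headers": {"Location": url}, "body": ""}
```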
I would really like to know how to set up the header and body so that the web page is displayed nicely and the redirect opens the link in the desktop application, given that the comment mentions a redirect is possible | closed | 2021-09-21T16:50:50Z | 2021-09-26T14:28:32Z | https://github.com/slackapi/bolt-python/issues/471 | [
"question"
] | akshatd | 5 |
httpie/cli | python | 844 | Stack trace when trying to issue POST request with list | Any POST request containing an array/list causes a stack trace and exit for me on Ubuntu 19.10
Ex:
```
http POST localhost/test_endpoint answer:='["Red","Yellow"]'
Traceback (most recent call last):
File "/usr/bin/http", line 11, in <module>
load_entry_point('httpie==0.9.8', 'console_scripts', 'http')()
File "/usr/lib/python3/dist-packages/httpie/__main__.py", line 11, in main
sys.exit(main())
File "/usr/lib/python3/dist-packages/httpie/core.py", line 210, in main
parsed_args = parser.parse_args(args=args, env=env)
File "/usr/lib/python3/dist-packages/httpie/input.py", line 151, in parse_args
self._parse_items()
File "/usr/lib/python3/dist-packages/httpie/input.py", line 355, in _parse_items
data_class=ParamsDict if self.args.form else OrderedDict
File "/usr/lib/python3/dist-packages/httpie/input.py", line 745, in parse_items
data_class(data),
File "/usr/lib/python3/dist-packages/httpie/input.py", line 635, in __setitem__
assert not isinstance(value, list)
AssertionError
```
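For context, `answer:='["Red","Yellow"]'` is httpie's raw-JSON item syntax; the intended request body can be reproduced with the standard library while the crash stands:

```python
import json

# httpie parses the := value with json.loads and embeds it in the JSON body
value = json.loads('["Red","Yellow"]')
body = json.dumps({"answer": value})
# this is the payload `http POST localhost/test_endpoint answer:=...` should send
```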
I'm not sure exactly what's going on, but judging from similar errors in this repo and the context of the functions being called, I'd venture a guess that the package is trying to write to something outside my home directory, which it does not have access to.
If this is correct, and whatever it's trying to write to is intended to be ephemeral, may I suggest using $TMPDIR for this purpose on POSIX-compliant systems. | closed | 2020-01-28T15:46:44Z | 2020-04-13T16:06:46Z | https://github.com/httpie/cli/issues/844 | [] | bredmor | 1 |
autokey/autokey | automation | 964 | Bump Python versions in workflow files + replace asyncore | ### AutoKey is a Xorg application and will not function in a Wayland session. Do you use Xorg (X11) or Wayland?
Xorg
### Has this issue already been reported?
- [X] I have searched through the existing issues.
### Is this a question rather than an issue?
- [X] This is not a question.
### What type of issue is this?
Enhancement
### Choose one or more terms that describe this issue:
- [ ] autokey triggers
- [ ] autokey-gtk
- [ ] autokey-qt
- [ ] beta
- [ ] bug
- [ ] critical
- [X] development
- [ ] documentation
- [ ] enhancement
- [ ] installation/configuration
- [ ] phrase expansion
- [ ] scripting
- [ ] technical debt
- [ ] user interface
### Other terms that describe this issue if not provided above:
update
### Which Linux distribution did you use?
_No response_
### Which AutoKey GUI did you use?
None
### Which AutoKey version did you use?
_No response_
### How did you install AutoKey?
_No response_
### Can you briefly describe the issue?
Update the Python versions in the [build.yml](https://github.com/autokey/autokey/blob/master/.github/workflows/build.yml) file, and the [python-test.yml](https://github.com/autokey/autokey/blob/master/.github/workflows/python-test.yml) file.
### Can the issue be reproduced?
None
### What are the steps to reproduce the issue?
_No response_
### What should have happened?
_No response_
### What actually happened?
_No response_
### Do you have screenshots?
_No response_
### Can you provide the output of the AutoKey command?
_No response_
### Anything else?
_No response_
| closed | 2024-11-08T15:59:38Z | 2024-12-01T04:47:32Z | https://github.com/autokey/autokey/issues/964 | [
"help-wanted",
"development",
"environment"
] | Elliria | 19 |
TheKevJames/coveralls-python | pytest | 124 | source_files is empty | I'm working on makinacorpus/django-safedelete#56 where I'm facing some troubles moving coveralls to tox. It doesn't report any files and when running `- coveralls debug` instead of `- coveralls` in tox, `source_files` is empty:
```json
{
"config_file": ".coveragerc",
"source_files": [],
"service_name": "coveralls-python",
"git": {
"branch": "bug/can_hard_delete_fk",
"remotes": [
{
"name": "origin",
"url": "git@github.com:AndreasBackx/django-safedelete.git"
},
{
"name": "upstream",
"url": "git@github.com:makinacorpus/django-safedelete.git"
}
],
"head": {
"author_name": "Andreas Backx",
"id": "9861655e43d23c34a0bef6145ddf67ab2f9208f6",
"author_email": "myemail",
"committer_email": "myemail",
"committer_name": "Andreas Backx",
"message": "Moved coveralls to tox and ignore its exit code."
}
}
}
```
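A quick sanity check I've been running under tox, since an empty `source_files` often just means the `.coverage` data file isn't in the directory coveralls runs from (paths here are assumptions):

```python
import os

def check_coverage_data(path=".coverage"):
    """Report whether the coverage data file coveralls reads actually exists."""
    if not os.path.exists(path):
        return f"missing {path} - run coverage/pytest-cov in the same dir first"
    return f"found {path} ({os.path.getsize(path)} bytes)"
```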
I can't seem to figure out what is wrong and would appreciate it if you could point me in the right direction. | closed | 2016-12-22T18:28:47Z | 2018-09-03T19:37:14Z | https://github.com/TheKevJames/coveralls-python/issues/124 | [] | AndreasBackx | 5 |
ymcui/Chinese-BERT-wwm | tensorflow | 164 | Is there a TensorFlow version available for download? | ```
Exception has occurred: OSError
Can't load weights for 'hfl/chinese-bert-wwm-ext'. Make sure that:
- 'hfl/chinese-bert-wwm-ext' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'hfl/chinese-bert-wwm-ext' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
File ".../transformers/loader.py", line 115, in genModel
transformer_model = TFBertModel.from_pretrained(transformer, output_hidden_states=True)
``` | closed | 2020-12-23T07:29:47Z | 2020-12-31T09:13:09Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/164 | [
"stale"
] | callzhang | 2 |
yinkaisheng/Python-UIAutomation-for-Windows | automation | 185 | Are calls like win32api.PostMessage and win32api.sendMessage supported? |
When implementing an application, some features use win32 API methods such as PostMessage and SendMessage; will these be supported? | open | 2021-12-15T15:06:06Z | 2022-02-28T06:41:34Z | https://github.com/yinkaisheng/Python-UIAutomation-for-Windows/issues/185 | [
] | MeditatorV | 1 |
matterport/Mask_RCNN | tensorflow | 2,103 | How can I get the mask image 28x28 in Mask RCNN? | In Mask RCNN, the instance segmentation model generates a mask for each detected object. The masks are soft masks (with float pixel values) of size 28x28 during training; the predicted masks are then rescaled to the bounding box dimensions, and we can overlay them on the original image to visualize the final output.
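To make that rescaling step concrete, here is a minimal pure-numpy sketch of what the "unmold" resize does to a small soft mask (illustrative only; the real implementation uses proper image resizing):

```python
import numpy as np

def unmold_like(soft_mask, bbox, image_shape, threshold=0.5):
    """Scale a small float mask up to its bounding box and binarize it."""
    y1, x1, y2, x2 = bbox
    h, w = y2 - y1, x2 - x1
    # nearest-neighbour upsampling is enough to illustrate the idea
    ys = np.arange(h) * soft_mask.shape[0] // h
    xs = np.arange(w) * soft_mask.shape[1] // w
    resized = soft_mask[np.ix_(ys, xs)]
    full = np.zeros(image_shape[:2], dtype=bool)
    full[y1:y2, x1:x2] = resized >= threshold
    return full

small = np.array([[0.9, 0.1], [0.2, 0.8]])   # stand-in for a 28x28 soft mask
full_mask = unmold_like(small, (0, 0, 4, 4), (6, 6))
```

In matterport's code, the raw 28x28 soft masks should be the per-ROI `mrcnn_mask` output of the Keras model, which `unmold_detections` consumes; grabbing them before that call is likely the cleanest way to get the pre-rescale masks.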
Please, how can I obtain the 28x28 mask before it is rescaled?
I have the following code:
```python
def apply_mask(image, mask, color, alpha=1):
    """apply mask to image"""
    for n, c in enumerate(color):
        image[:, :, n] = np.where(
            mask == 1,
            image[:, :, n] * (1 - alpha) + alpha * c,
            image[:, :, n]
        )
    return image

for i in range(n_instances):
    if not np.any(boxes[i]):
        continue
    if ids[i] == 1:
        color = [255, 255, 255]
        mask = masks[:, :, i]
        image_background_substracted = apply_mask(background, mask, color)
        return image_background_substracted
    else:
        return background
``` | open | 2020-04-11T11:11:38Z | 2020-05-12T07:52:29Z | https://github.com/matterport/Mask_RCNN/issues/2103 | [] | TCHERAKAmine | 1 |
nltk/nltk | nlp | 3,107 | NLTK's `stopwords` requires the stopwords to be first downloaded via the NLTK Data installer. This is a one-time setup, after which you will be able to freely use `from nltk.corpus import stopwords`. | NLTK's `stopwords` requires the stopwords to be first downloaded via the NLTK Data installer. This is a one-time setup, after which you will be able to freely use `from nltk.corpus import stopwords`.
To download the stopwords, open the Python interpreter with `python` in your terminal of choice, and type:
```python
>>> import nltk
>>> nltk.download("stopwords")
```
Afterwards, you're good to go!
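If you want scripts to handle the one-time setup automatically, a small guard works too. A sketch of the try/download/retry pattern, with the loader and downloader passed in so the shape is clear:

```python
def ensure_resource(load, download):
    """Try loading an NLTK corpus; run the one-time download on LookupError."""
    try:
        return load()
    except LookupError:
        download()   # e.g. nltk.download("stopwords")
        return load()
```

With nltk itself this would be called as `ensure_resource(lambda: stopwords.words("english"), lambda: nltk.download("stopwords"))`.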
_Originally posted by @tomaarsen in https://github.com/nltk/nltk/issues/3063#issuecomment-1280529826_
| open | 2023-01-18T18:00:27Z | 2024-05-24T12:45:02Z | https://github.com/nltk/nltk/issues/3107 | [] | Killpit | 9 |
graphdeco-inria/gaussian-splatting | computer-vision | 336 | a question about stereo information in PC | The input is a point cloud generated by COLMAP SfM.
So there is very strong stereo information in the point cloud used as input, right? So ...
Other algorithms (e.g. NeRF ...) use only the poses from COLMAP, ...
If we appended the point cloud as part of the inputs to NeRF-style methods (by sampling, or other ways), then ...
| closed | 2023-10-18T14:03:12Z | 2023-10-21T14:45:34Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/336 | [] | yuedajiong | 5 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 362 | [BUG] Local setup reports a network error; not sure how to fix it |
Traceback (most recent call last):
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/start.py", line 37, in <module>
from app.main import app
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/app/main.py", line 39, in <module>
from app.api.router import router as api_router
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/app/api/router.py", line 2, in <module>
from app.api.endpoints import (
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/app/api/endpoints/tiktok_web.py", line 7, in <module>
from crawlers.tiktok.web.web_crawler import TikTokWebCrawler # 导入TikTokWebCrawler类
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/crawlers/tiktok/web/web_crawler.py", line 55, in <module>
from crawlers.tiktok.web.models import (
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/crawlers/tiktok/web/models.py", line 10, in <module>
class BaseRequestModel(BaseModel):
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/crawlers/tiktok/web/models.py", line 43, in BaseRequestModel
msToken: str = TokenManager.gen_real_msToken()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/mac/pythonsrc/pythonProject/Douyin_TikTok_Download_API/crawlers/tiktok/web/utils.py", line 86, in gen_real_msToken
raise APIConnectionError("请求端点失败，请检查当前网络环境。 链接：{0}，代理：{1}，异常类型：{2}，异常详细信息：{3}"
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
crawlers.utils.api_exceptions.APIConnectionError: 请求端点失败，请检查当前网络环境。 链接：https://mssdk-sg.tiktok.com/web/common?msToken=1Ab-7YxR9lUHSem0PraI_XzdKmpHb6j50L8AaXLAd2aWTdoJCYLfX_67rVQFE4UwwHVHmyG_NfIipqrlLT3kCXps-5PYlNAqtdwEg7TrDyTAfCKyBrOLmhMUjB55oW8SPZ4_EkNxNFUdV7MquA==，代理：{'http://': None, 'https://': None}，异常类型：TokenManager，异常详细信息：timed out
程序出现异常，请检查错误信息。 | closed | 2024-04-23T09:47:36Z | 2024-04-23T11:31:43Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/362 | [
"BUG"
] | xyhubl | 4 |
aminalaee/sqladmin | asyncio | 491 | URLType field has no converter defined | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
When a URLType field is added to a model, "edit" on the record causes an exception:
from sqlalchemy_fields.types import URLType
class MyModel(Base):
image_url = Column(URLType(255))
...
sqladmin.exceptions.NoConverterFound: Could not find field converter for column image_url (<class 'sqlalchemy_fields.types.url.URLType'>).
### Steps to reproduce the bug
1. Create a model with a URLType field
2. Add a sqladmin ModelView for that model
3. Display the list of objects
4. Select the "edit" icon for the object
### Expected behavior
I would expect to see the default "Edit" view with the URL field editable
### Actual behavior
sqladmin.exceptions.NoConverterFound thrown
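For what it's worth, a possible workaround sketch: this assumes sqladmin exposes a `form_overrides` option and that WTForms' `URLField` is available; neither is verified against the version used above.

```python
# Hedged configuration sketch, not a verified fix: bypass the automatic
# converter for the URLType column by naming a WTForms field explicitly.
# `MyModel` is the model from the snippet above.
from wtforms import URLField

from sqladmin import ModelView


class MyModelAdmin(ModelView, model=MyModel):
    # Map the problematic column straight to a form field so no
    # converter lookup happens for it:
    form_overrides = {"image_url": URLField}
```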
### Debugging material
Traceback (most recent call last):
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/uvicorn/protocols/http/h11_impl.py", line 428, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/applications.py", line 276, in __call__
await super().__call__(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 443, in handle
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/applications.py", line 122, in __call__
await self.middleware_stack(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
raise exc
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
await self.app(scope, receive, sender)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 718, in __call__
await route.handle(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 276, in handle
await self.app(scope, receive, send)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/starlette/routing.py", line 66, in app
response = await func(request)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/authentication.py", line 60, in wrapper_decorator
return await func(*args, **kwargs)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/application.py", line 480, in edit
Form = await model_view.scaffold_form()
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/models.py", line 1021, in scaffold_form
return await get_model_form(
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 586, in get_model_form
field = await converter.convert(
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 312, in convert
converter = self.get_converter(prop=prop)
File "/Users/dhait/PycharmProjects/divmax/venv/lib/python3.9/site-packages/sqladmin/forms.py", line 266, in get_converter
raise NoConverterFound( # pragma: nocover
sqladmin.exceptions.NoConverterFound: Could not find field converter for column image_url (<class 'sqlalchemy_fields.types.url.URLType'>).
### Environment
- MacOS / Python 3.9
### Additional context
_No response_ | closed | 2023-05-11T18:21:30Z | 2023-05-11T19:58:31Z | https://github.com/aminalaee/sqladmin/issues/491 | [] | dhait | 2 |
mirumee/ariadne | api | 816 | Update Starlette dependency to 0.19 | Starlette 0.19 has been released, we should bump Ariadne's version to it before releasing 0.15 | closed | 2022-03-15T16:14:14Z | 2022-04-12T14:11:56Z | https://github.com/mirumee/ariadne/issues/816 | [
"dependencies"
] | rafalp | 0 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,304 | [Feature Request]: "Destructor" for the script | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
In my custom script I allocate some CUDA memory, and I want to free it when I switch from my custom script back to script None (or after switching and pressing Generate). I didn't find an API to implement this, and had to do some tricky work to free the memory.
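To make the request concrete, here is a sketch of what such a hook could look like. The `on_deselect` name and the stand-in `Script` base class are invented here for illustration; this is not an existing webui API.

```python
# Hypothetical sketch only: AUTOMATIC1111's Script class has no such hook
# today. "on_deselect" and the base class below are invented names.
class Script:  # stand-in for modules.scripts.Script
    pass


class MyCudaScript(Script):
    def __init__(self):
        self.cuda_buffers = []  # tensors this script allocated on the GPU

    def on_deselect(self):
        """Proposed 'destructor': called when the user switches away from this script."""
        self.cuda_buffers.clear()  # drop references so the allocator can reuse the blocks
        # In a real script, torch.cuda.empty_cache() could then hand the
        # cached memory back to the driver.
```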
### Proposed workflow
1. Go to Script
2. Press My custom Script
3. Generate some images and get high CUDA mem usage
4. Switch to Script None, and now CUDA mem decreases (or after pressing Generate)
### Additional information
_No response_ | open | 2024-03-18T09:21:24Z | 2024-03-18T09:21:24Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15304 | [
"enhancement"
] | marigoold | 0 |
plotly/dash-table | dash | 583 | virtualization not working with editing? | Promoting this from the community forum. I haven't tried reproducing, but it looks like a good report: https://community.plot.ly/t/dash-data-table-virtualization-completely-broken-with-an-editable-table/28565 | closed | 2019-09-12T17:56:02Z | 2019-09-13T18:45:51Z | https://github.com/plotly/dash-table/issues/583 | [
"dash-type-bug",
"regression",
"size: 1"
] | chriddyp | 0 |
tflearn/tflearn | tensorflow | 711 | How to get features from a specific layer | This is my code:
```python
def single_net():
# Building Residual Network
net = tflearn.input_data(shape=[None, 28, 28, 1])
net = tflearn.conv_2d(net, 64, 3, activation='relu', bias=False)
# Residual blocks
net = tflearn.residual_bottleneck(net, 3, 16, 64)
net = tflearn.residual_bottleneck(net, 1, 32, 128, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 32, 128)
net = tflearn.residual_bottleneck(net, 1, 64, 256, downsample=True)
net = tflearn.residual_bottleneck(net, 2, 64, 256)
net = tflearn.batch_normalization(net)
net = tflearn.activation(net, 'relu')
net = tflearn.global_avg_pool(net)
# Regression
net = tflearn.fully_connected(net, 256, activation='tanh')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='momentum',
loss='categorical_crossentropy',
learning_rate=0.1)
return net
```
Now I want to get the output features from the first fully_connected layer, but I don't know how to do it... Is there any way I can do it? | closed | 2017-04-13T08:59:22Z | 2017-04-14T13:06:24Z | https://github.com/tflearn/tflearn/issues/711 | [] | FoxerLee | 4
OpenVisualCloud/CDN-Transcode-Sample | dash | 179 | Video not playing; getting error 504 Gateway Time-out | open | 2021-01-20T13:36:11Z | 2021-01-20T13:40:17Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/179 | [] | chathurangamt | 0
|
pallets/flask | flask | 5,070 | pip install flask-sqlalchemy Fails | <!--
On newest version of python (3.11). Using Pycharm. Venv environment.
Trying to attach Sqlalchemy.
pip install flask-wtf Works fine
pip install flask-sqlalchemy Fails.
Fails to build greenlet. Wheel does not run successfully.
Using the same steps in python (3.8), all works fine.
Is it not compatible with version 3.11
Any help is appreciated.
Thanks,
Digdug
-->
<!--
Describe how to replicate the bug.
Include a minimal reproducible example that demonstrates the bug.
Include the full traceback if there was an exception.
-->
<!--
Describe the expected behavior that should have happened but didn't.
-->
Environment:
- Python version:
- Flask version:
| closed | 2023-04-17T15:36:45Z | 2023-05-02T00:05:34Z | https://github.com/pallets/flask/issues/5070 | [] | DigDug10 | 3 |
vastsa/FileCodeBox | fastapi | 211 | Compressed file upload stuck at 0% for a long time
| The compressed file stays stuck at 0% for a long time; please advise on a solution. I tried using the IP directly as well, but that didn't work either.
| closed | 2024-10-23T11:54:26Z | 2025-02-08T15:12:43Z | https://github.com/vastsa/FileCodeBox/issues/211 | [] | ChineseLiusir | 8 |
flaskbb/flaskbb | flask | 529 | Database migration fails at `7c3fcf8a3335 -> af3f5579c84d, Add cascades` under postgres | ```
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 8ad96e49dc6, init
INFO [alembic.runtime.migration] Running upgrade 8ad96e49dc6 -> 514ca0a3282c, Private Messages
INFO [alembic.runtime.migration] Running upgrade 514ca0a3282c -> 127be3fb000, Added m2m forumgroups table
INFO [alembic.runtime.migration] Running upgrade 127be3fb000 -> 221d918aa9f0, Add user authentication infos
INFO [alembic.runtime.migration] Running upgrade 221d918aa9f0 -> d9530a529b3f, add timezone awareness for datetime objects
INFO [alembic.runtime.migration] Running upgrade d9530a529b3f -> d87cea4e995d, remove timezone info from birthday field
INFO [alembic.runtime.migration] Running upgrade d87cea4e995d -> 933bd7d807c4, Add more non nullables
INFO [alembic.runtime.migration] Running upgrade 933bd7d807c4 -> 881dd22cab94, Add date_modified to conversations
INFO [alembic.runtime.migration] Running upgrade 881dd22cab94 -> d0ffadc3ea48, Add hidden columns
INFO [alembic.runtime.migration] Running upgrade d0ffadc3ea48 -> 7c3fcf8a3335, Add plugin tables
INFO [alembic.runtime.migration] Running upgrade 7c3fcf8a3335 -> af3f5579c84d, Add cascades
Traceback (most recent call last):
File "/opt/venv-flaskbb/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1230, in _execute_context
cursor, statement, parameters, context
File "/opt/venv-flaskbb/lib/python3.7/site-packages/sqlalchemy/engine/default.py", line 536, in do_execute
cursor.execute(statement, parameters)
psycopg2.errors.UndefinedObject: constraint "conversations_to_user_id_fkey" of relation "conversations" does not exist
```
Fresh postgres install | closed | 2019-07-14T06:14:03Z | 2019-10-17T19:12:06Z | https://github.com/flaskbb/flaskbb/issues/529 | [
"info needed"
] | traverseda | 2 |
flairNLP/flair | nlp | 3,347 | [Bug]: RuntimeError: Error(s) in loading state_dict for XLMRobertaModel: Unexpected key(s) in state_dict: "embeddings.position_ids". | ### Describe the bug
I'm trying to use this model. I even have an example from the site that doesn't work. Unknown error that I do not know how to solve. Please help or fix it
### To Reproduce
```python
from flair.data import Sentence
from flair.models import SequenceTagger
# load tagger
tagger = SequenceTagger.load("flair/ner-dutch-large")
# make example sentence
sentence = Sentence("George Washington ging naar Washington")
# predict NER tags
tagger.predict(sentence)
# print sentence
print(sentence)
# print predicted NER spans
print('The following NER tags are found:')
# iterate over entities and print
for entity in sentence.get_spans('ner'):
print(entity)
```
### Expected behavior
Span [1,2]: "George Washington" [− Labels: PER (1.0)]
Span [5]: "Washington" [− Labels: LOC (1.0)]
### Logs and Stack traces
```stacktrace
Traceback (most recent call last):
File "/home/maksim/Development/pythonprojects/gpt_training.py", line 5, in <module>
tagger = SequenceTagger.load("flair/ner-dutch-large")
File "/home/maksim/.local/lib/python3.10/site-packages/flair/models/sequence_tagger_model.py", line 1035, in load
return cast("SequenceTagger", super().load(model_path=model_path))
File "/home/maksim/.local/lib/python3.10/site-packages/flair/nn/model.py", line 559, in load
return cast("Classifier", super().load(model_path=model_path))
File "/home/maksim/.local/lib/python3.10/site-packages/flair/nn/model.py", line 191, in load
state = load_torch_state(model_file)
File "/home/maksim/.local/lib/python3.10/site-packages/flair/file_utils.py", line 359, in load_torch_state
return torch.load(f, map_location="cpu")
File "/home/maksim/.local/lib/python3.10/site-packages/torch/serialization.py", line 1014, in load
return _load(opened_zipfile,
File "/home/maksim/.local/lib/python3.10/site-packages/torch/serialization.py", line 1422, in _load
result = unpickler.load()
File "/home/maksim/.local/lib/python3.10/site-packages/flair/embeddings/transformer.py", line 1169, in __setstate__
self.model.load_state_dict(model_state_dict)
File "/home/maksim/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for XLMRobertaModel:
Unexpected key(s) in state_dict: "embeddings.position_ids".
```
### Screenshots
_No response_
### Additional Context
_No response_
### Environment
#### Versions:
##### Flair
0.12.2
##### Pytorch
2.1.0+cu121
##### Transformers
4.34.1
#### GPU
False | closed | 2023-10-22T09:47:55Z | 2025-03-11T05:11:38Z | https://github.com/flairNLP/flair/issues/3347 | [
"bug",
"Awaiting Response"
] | brestok-1 | 2 |
yt-dlp/yt-dlp | python | 11,905 | Add cookie extraction support for Floorp | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a feature unrelated to a specific site
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Floorp, which is a derivative of Firefox, currently has no cookie-extraction support. I believe support for it could be added, since it is essentially the same as Firefox.
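Since Floorp keeps Firefox's profile format, the extraction itself should be straightforward; a minimal sketch follows (the profile locations below are my guesses, not verified paths).

```python
import sqlite3
from pathlib import Path

# Hedged sketch: Floorp uses Firefox's profile layout, so extraction reduces
# to "find the profile, read cookies.sqlite". These roots are assumptions.
FLOORP_PROFILE_ROOTS = [
    Path.home() / ".floorp",                          # Linux (assumed)
    Path.home() / "AppData/Roaming/Floorp/Profiles",  # Windows (assumed)
]


def read_firefox_cookies(db_path):
    """Return (host, name, value) rows from a Firefox-format cookies.sqlite."""
    con = sqlite3.connect(str(db_path))
    try:
        return con.execute("SELECT host, name, value FROM moz_cookies").fetchall()
    finally:
        con.close()
```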
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'floorp', 'https://www.youtube.com/watch?v=-LhZ5R_ROns']
Usage: yt-dlp.exe [OPTIONS] URL [URL...]
yt-dlp.exe: error: unsupported browser specified for cookies: "floorp". Supported browsers are: brave, chrome, chromium, edge, firefox, opera, safari, vivaldi, whale
```
| closed | 2024-12-25T14:35:27Z | 2024-12-25T20:23:06Z | https://github.com/yt-dlp/yt-dlp/issues/11905 | [
"duplicate",
"question"
] | 2husecondary | 2 |
profusion/sgqlc | graphql | 8 | aiohttp endpoint | Provide an endpoint using https://aiohttp.readthedocs.io/en/stable/ | open | 2018-01-17T18:00:44Z | 2021-07-23T18:33:33Z | https://github.com/profusion/sgqlc/issues/8 | [
"help wanted"
] | barbieri | 3 |
getsentry/sentry | python | 87,066 | Sandbox kicking me out | Is the sandbox demo kicking me out of sentry and back to https://sentry.io/welcome/ whenever i switch focus away from the page?
When it first happened I didnโt realize it was a focus thing. When i went to report the bug with a video then i started to understand what happened..
1. We add Exit Sandbox button to the header
2. We make sure that you never get redirected to sandbox
3. We remove the Exit Sandbox button | closed | 2025-03-14T08:37:53Z | 2025-03-24T13:16:52Z | https://github.com/getsentry/sentry/issues/87066 | [] | matejminar | 0 |
vi3k6i5/flashtext | nlp | 136 | Get tags that come after a linebreak | Hi guys,
first of all, thanks a lot for your great library, it is a huge help on my current project.
I have one question though: I need to find the indices of certain tags (':F50:') in a string that I get from an XML file.
These tags come after a linebreak, which in xml is represented by '
'.
However, some of the tags are followed by a '/' whereas others are not. When I add ':F50:' to the list of keywords, the keyword processor is able to find the tags that are followed by the '/', but not the other ones. Only if I add ':F50' to the keyword list are the ones without a '/' found. My concern is that with ':F50' in the keyword list, the keyword processor finds more tags than I desire. Is there an explanation for that behavior? If yes, can I somehow work around it? Would it make sense to replace the XML-formatted linebreak with a different value?
Thanks a lot in advance for any help provided! | open | 2022-09-29T07:50:24Z | 2022-09-29T07:50:24Z | https://github.com/vi3k6i5/flashtext/issues/136 | [] | Moxelbert | 0 |
Evil0ctal/Douyin_TikTok_Download_API | api | 446 | [Feature request] Could a cookie-maintenance feature be added to the web admin backend of the Docker deployment? | This would make maintenance much more convenient; modifying the configuration inside Docker is rather troublesome. | open | 2024-07-12T07:12:04Z | 2025-02-24T02:08:29Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/446 | [
] | athlon128 | 3 |
mljar/mercury | data-visualization | 259 | running mercury from google colab | How can I run mercury from Google Colab (without opening a local server)? Is it possible? | closed | 2023-04-26T17:45:31Z | 2023-04-26T18:49:16Z | https://github.com/mljar/mercury/issues/259 | [] | naomiunk | 1
2noise/ChatTTS | python | 18 | Can it only run on the GPU? Is the CPU unable to run it? | Can it only run on the GPU? Is the CPU unable to run it? | closed | 2024-05-28T10:51:46Z | 2024-07-23T04:01:42Z | https://github.com/2noise/ChatTTS/issues/18 | [
"stale"
] | guoqingcun | 7 |
huggingface/transformers | tensorflow | 36,150 | SDPA `is_causal=False` has no effect due to `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` | ### System Info
- `transformers` version: 4.48.3
- Platform: Linux-5.15.0-130-generic-x86_64-with-glibc2.35
- Python version: 3.9.21
- Huggingface_hub version: 0.28.1
- Safetensors version: 0.5.2
- Accelerate version: 1.3.0
- Accelerate config: not found
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA H100 80GB HBM3
### Who can help?
@ArthurZucker @Cyrilvallez
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Observe `is_causal=False` has no effect when using `attn_implementation="sdpa"` with an `attention_mask` with at least one `False` element:
```python
import torch
import transformers
device = torch.device("cuda:0")
input_ids = torch.tensor(
[
[
128000, 128006, 9125, 128007, 271, 34, 7747, 553, 279,
2768, 1495, 439, 1694, 5552, 311, 5557, 11, 17452,
11, 10034, 11, 477, 11759, 13, 128009, 128006, 882,
128007, 271, 791, 502, 77355, 3280, 690, 10536, 1022,
449, 264, 72097, 2489, 1990, 35812, 323, 64921, 13,
128009, 128006, 78191, 128007, 271, 42079, 128009, 128004, 128004,
128004, 128004
]
],
device=device,
)
attention_mask = torch.tensor(
[
[
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, True, True, True, True, True, True, True, True,
True, True, False, False, False, False
]
],
device=device,
)
with device:
model = transformers.AutoModelForCausalLM.from_pretrained(
"/models/meta-llama/Llama-3.2-1B-Instruct", # https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct
attn_implementation="sdpa",
torch_dtype=torch.bfloat16,
)
causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
noncausal_logits = model(input_ids, attention_mask=attention_mask, is_causal=False).logits
torch.testing.assert_close(causal_logits, noncausal_logits) # shouldn't be true, otherwise what is_causal controlling?
```
Observe that mocking `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` with an implementation that just replicates the `attention_mask` also has no effect when using `is_causal=True`:
```python
from unittest import mock
def _prepare_4d_causal_attention_mask_with_cache_position(
attention_mask: torch.Tensor,
sequence_length: int,
target_length: int,
dtype: torch.dtype,
device: torch.device,
cache_position: torch.Tensor,
batch_size: int,
**kwargs,
):
min_dtype = torch.tensor(torch.finfo(dtype).min, dtype=dtype, device=attention_mask.device)
return ~attention_mask.view(batch_size, 1, 1, sequence_length).expand(batch_size, 1, sequence_length, sequence_length) * min_dtype
with mock.patch.object(model.model, "_prepare_4d_causal_attention_mask_with_cache_position", _prepare_4d_causal_attention_mask_with_cache_position):
sdpa_causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
hf_causal_logits = model(input_ids, attention_mask=attention_mask, is_causal=True).logits
torch.testing.assert_close(sdpa_causal_logits, hf_causal_logits) # shouldn't be true, otherwise what is _prepare_4d_causal_attention_mask_with_cache_position doing?
```
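For contrast, PyTorch's own `scaled_dot_product_attention` does produce different outputs for the two settings; a minimal check, independent of transformers:

```python
import torch
import torch.nn.functional as F

# Plain SDPA distinguishes is_causal=True from is_causal=False on its own.
q = torch.randn(1, 2, 5, 4)
k = torch.randn(1, 2, 5, 4)
v = torch.randn(1, 2, 5, 4)
causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)
full = F.scaled_dot_product_attention(q, k, v, is_causal=False)
same = torch.allclose(causal, full)
```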
### Expected behavior
1. At the very least, `LlamaModel._prepare_4d_causal_attention_mask_with_cache_position` should respect `is_causal=False`. Right now, it always returns a causal mask when using SDPA with sequence_length > 1 and an attention_mask with at least one False element.
2. It is not really clear to me why we aren't purely relying on SDPA's own `is_causal` parameter. My 2nd example demonstrates that the current implementation of `LlamaModel. _prepare_4d_causal_attention_mask_with_cache_position` definitely isn't always necessary... so when is it necessary? Or what parts are necessary? Looking at the equivalent implementation that PyTorch describes for [`scaled_dot_product_attention`](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html), it seems like we are replicating a bit of their handling of `attn_mask`. Also notably there are 4 separate CUDA allocations happening in the current implementation (`torch.full`, `torch.triu`, `torch.arange`, `Tensor.clone`) compared to my proposed 1. | open | 2025-02-12T11:21:43Z | 2025-03-17T18:05:28Z | https://github.com/huggingface/transformers/issues/36150 | [
"bug"
] | ringohoffman | 7 |
microsoft/MMdnn | tensorflow | 735 | "Size of out_backprop doesn't match computed" when converting Deconvolution layer from caffe to tf | I met a problem when converting model with deconvolution layer from caffe to tf. Here is a minimum reproducible example model:
```prototxt
layer {
name: "input"
type: "Input"
top: "data"
input_param {
shape {
dim: 1
dim: 3
dim: 16
dim: 16
}
}
}
layer {
name: "deconv"
type: "Deconvolution"
bottom: "data"
top: "deconv"
param {
lr_mult: 1
decay_mult: 1
}
convolution_param {
num_output: 16
bias_term: false
pad: 1
kernel_size: 4
stride: 2
weight_filler {
type: "msra"
}
}
}
```
Running script (with any dummy caffemodel file):
```
mmconvert -sf caffe -in ./deploy.prototxt --inputWeight ./deploy.caffemodel -df tensorflow --outputModel ./output_model_mmdnn --dump_tag SERVING
freeze_graph --input_saved_model_dir ./output_model_mmdnn/ --output_node_names conv2d_transpose --output_graph=frozen_graph.pb
```
The conversion process runs successfully, but the program breaks when I try to run a forward pass with the frozen TF model. Error message:
```
tensorflow.python.framework.errors_impl.InvalidArgumentError: Conv2DCustomBackpropInput: Size of out_backprop doesn't match computed: actual = 18, computed = 15 spatial_dim: 1 input: 32 filter: 4 output: 18 stride: 2 dilation: 1
[[node prefix/conv2d_transpose (defined at a.py:15) = Conv2DBackpropInput[T=DT_FLOAT, data_format="NHWC", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 2, 2, 1], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](prefix/conv2d_transpose/output_shape, prefix/deconv_weight, prefix/Pad)]]
```
Test script I use:
```python
import tensorflow as tf
import numpy as np
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def, name='')
return graph
graph = load_graph('./frozen_graph.pb')
x = np.zeros((1,16,16,3))
data = graph.get_tensor_by_name('input:0')
output = graph.get_tensor_by_name('conv2d_transpose:0')
with tf.Session(graph=graph) as sess:
y_out = sess.run(output, feed_dict={data: x})
```
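As a sanity check on where such a mismatch can originate, the standard output-size formulas of the two frameworks differ by `2 * pad` for this layer (my own arithmetic, not MMdnn code; the converter's exact `Pad` handling may differ):

```python
# Output-size formulas for the deconvolution layer above (hedged arithmetic).
in_size, k, s, p = 16, 4, 2, 1

# Caffe Deconvolution: out = (in - 1) * stride + kernel - 2 * pad
caffe_out = (in_size - 1) * s + k - 2 * p

# TF conv2d_transpose with padding="VALID" (what the frozen graph uses,
# judging by the "prefix/Pad" node in the error): out = (in - 1) * stride + kernel
tf_valid_out = (in_size - 1) * s + k
```

With `pad=1` this leaves a 2-pixel discrepancy per spatial dimension for the converter's padding logic to absorb.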
Platform (like ubuntu 16.04/win10): Ubuntu 18.04
Python version: 3.7.4
Source framework with version (like Tensorflow 1.4.1 with GPU): caffe
Destination framework with version (like CNTK 2.3 with GPU): tensorflow (1.12/1.14/2.0.0rc1)
| open | 2019-09-13T23:13:11Z | 2019-09-13T23:13:11Z | https://github.com/microsoft/MMdnn/issues/735 | [] | woinck | 0 |
feature-engine/feature_engine | scikit-learn | 453 | Future warning: Passing a set as indexer - SmartCorrelatedSelection | **Describe the bug**
Hi, I'm getting a FutureWarning message when running the SmartCorrelatedSelection both on my data and following one of the examples from the tutorial:
```
/redacted_path/lib/python3.9/site-packages/feature_engine/selection/smart_correlation_selection.py:302: FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version. Use a list instead.
f = X[feature_group].std().sort_values(ascending=False).index[0]
/redacted_path/lib/python3.9/site-packages/feature_engine/selection/smart_correlation_selection.py:302: FutureWarning: Passing a set as an indexer is deprecated and will raise in a future version. Use a list instead.
f = X[feature_group].std().sort_values(ascending=False).index[0]
```
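The warning comes from pandas itself: indexing a `DataFrame` with a `set` is deprecated. A toy illustration of the deprecated pattern and its list-based replacement (my own data, not feature_engine code):

```python
import pandas as pd

df = pd.DataFrame({"x": [1.0, 2.0], "y": [3.0, 7.0]})
feature_group = {"x", "y"}  # correlated-feature groups are stored as sets

# df[feature_group] is the pattern that triggers the FutureWarning
# (and raises on newer pandas); passing a list is the stable spelling:
std = df[list(feature_group)].std().sort_values(ascending=False)
retained = std.index[0]  # the feature with the largest std is kept
```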
**To Reproduce**
I have followed the code example posted [here](https://feature-engine.readthedocs.io/en/1.3.x/user_guide/selection/SmartCorrelatedSelection.html)
**Screenshots**

**Desktop:**
- OS: Centos 7
- Browser: Chrome
- Feature_engine Version: 1.3.0
Thanks
| closed | 2022-05-11T17:36:08Z | 2022-06-14T08:57:15Z | https://github.com/feature-engine/feature_engine/issues/453 | [] | giemmecci | 4 |
streamlit/streamlit | machine-learning | 10,769 | streamlit run defaults to streamlit_app.py | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
What if we made
```
streamlit run
```
Default to
```
streamlit run streamlit_app.py
```
Just a time saver :)
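A sketch of the fallback logic this would need (the names here are mine, not Streamlit's actual CLI internals):

```python
# Hypothetical sketch of the proposed default (not Streamlit's CLI code).
from pathlib import Path

DEFAULT_APP = "streamlit_app.py"


def resolve_target(args):
    """Pick the script for `streamlit run`: an explicit argument wins, else the default."""
    if args:
        return args[0]
    if Path(DEFAULT_APP).exists():
        return DEFAULT_APP
    raise SystemExit(f"usage: streamlit run <script> (no {DEFAULT_APP} found)")
```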
### Why?
_No response_
### How?
_No response_
### Additional Context
_No response_ | open | 2025-03-13T15:13:53Z | 2025-03-13T15:34:37Z | https://github.com/streamlit/streamlit/issues/10769 | [
"type:enhancement",
"feature:cli"
] | sfc-gh-amiribel | 1 |
scikit-optimize/scikit-optimize | scikit-learn | 996 | Making the library easier to extend | At the current state of the library, it's quite hard to extend the algorithm with extensions of the Acquisition functions or the optimal intercept. Having to place code in the core optimizer class may go badly for the mid-long term.
I was considering if those elements could be managed as extendable objects to incorporate in the Optimiser class in a modular fashion.
I tried to create an extension myself and I struggled to avoid adding dependencies to the core files of the library. | open | 2021-02-09T10:52:35Z | 2021-02-09T10:52:35Z | https://github.com/scikit-optimize/scikit-optimize/issues/996 | [] | Raziel90 | 0 |
gee-community/geemap | streamlit | 846 | Add colorbar to timelapse | It would be useful to add a colorbar to a timelapse, e.g., temperature, NDVI.
```python
import geemap
import geemap.colormaps as cm
geemap.save_colorbar(vis_params={'min': 20, "max":40, 'palette': cm.get_palette("coolwarm")}, tick_size=12, label="Surface temperature")
```

```python
geemap.save_colorbar(vis_params={'min': -1, "max":1, 'palette': cm.palettes.ndvi}, tick_size=12, label="NDVI")
```

| closed | 2021-12-31T18:08:21Z | 2022-01-01T04:15:47Z | https://github.com/gee-community/geemap/issues/846 | [
"Feature Request"
] | giswqs | 2 |
assafelovic/gpt-researcher | automation | 381 | langchain version | I want to know which version of langchain you use? | closed | 2024-03-08T07:56:57Z | 2024-03-11T13:37:38Z | https://github.com/assafelovic/gpt-researcher/issues/381 | [] | wyzhhhh | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 820 | dataset problem | I have downloaded the dataset provided in your repo, but after opening the toolbox it gives the warning: "you do not have any of the recognized datasets in 'path'".
| closed | 2021-08-14T16:54:12Z | 2021-08-25T08:47:42Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/820 | [] | sahil-dhuri | 1 |
pydata/pandas-datareader | pandas | 676 | ImportError: parent 'pandas_datareader' not in sys.modules | I upgraded pandas-datareader 0.7.0 to 0.7.4 because 0.7.0 was failing with a StringIO error. The upgraded version is also not working; the error is below. Please help fix this:
In [16]: import pandas_datareader.data as pdr
[autoreload of pandas_datareader._version failed: Traceback (most recent call last):
File "C:\Users\cool\Anaconda3\lib\site-packages\IPython\extensions\autoreload.py", line 245, in check
superreload(m, reload, self.old_objects)
File "C:\Users\cool\Anaconda3\lib\site-packages\IPython\extensions\autoreload.py", line 434, in superreload
module = reload(module)
File "C:\Users\cool\Anaconda3\lib\imp.py", line 314, in reload
return importlib.reload(module)
File "C:\Users\cool\Anaconda3\lib\importlib\__init__.py", line 160, in reload
name=parent_name) from None
ImportError: parent 'pandas_datareader' not in sys.modules
] | closed | 2019-09-14T17:23:42Z | 2019-09-17T06:57:53Z | https://github.com/pydata/pandas-datareader/issues/676 | [] | ranjitmishra | 2 |
jmcarpenter2/swifter | pandas | 62 | Integration with rpy2 | Hey there,
I'm aiming to run an rpy2 function using swifter (size of my data is 500 rows by 100k columns) (eg. https://github.com/dask/distributed/issues/2119). The python function calls rpy2 and operates on each row of the dataframe, outputting a numerical matrix (numpy) the same size as the row. I ran swifter apply, and tracked the usage of workers. It seems that only one worker is being fully utilized.
Later, when I specified a dask Client to attempt to circumvent the utilization issue, there was a segmentation fault.
What can I do to parallelize quickly without failure? | closed | 2019-07-08T15:04:52Z | 2019-07-29T23:13:21Z | https://github.com/jmcarpenter2/swifter/issues/62 | [] | jlevy44 | 4 |
aimhubio/aim | tensorflow | 3,172 | Securing Aim Remote Tracking server using SSL key and certificate | ## Securing Aim Remote Tracking server using SSL key and certificate
Hi, first of all I appreciate all the work you've put into making Aim!
I am having some trouble securing the connection to the Aim Remote Tracking (RT) Server, and was wondering if you could help me out.
I recently setup a virtual machine on Azure, which is running both the Aim RT Server and the Aim UI. To do this, I have used a `docker-compose.yml`, which brings up both the server and the UI. This is working properly, I can log runs from another machine and see them appear in the UI, great.
However, now I want to secure the connection to the remote tracking server using SSL, as described [here](https://aimstack.readthedocs.io/en/latest/using/remote_tracking.html#ssl-support). I've created a self-signed key and certificate file using openssl, as described [here](https://github.com/aimhubio/aim/issues/2302#issuecomment-1303704009).
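For completeness, a self-signed pair like that can be generated non-interactively with something along these lines (the CN is a placeholder and should match the server's hostname):

```shell
openssl req -x509 -newkey rsa:4096 -days 365 -nodes \
  -keyout server.key -out server.crt \
  -subj "/CN=my-aim-server.example.com"
```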
Whenever I bring up the server using this command, everything seems to be in working order; I do not get any errors, etc.:
```bash
aim server --repo ~/mycontainer/aim/ --ssl-keyfile ~/secrets/server.key --ssl-certfile ~/secrets/server.crt --host 0.0.0.0 --dev --port 53800
```
But then when I try to log a run from another machine, I get the following error on the client:
```bash
azureuser@ml-ci-jvranken-prd:~/cloudfiles/code/Users/jvranken/aim-tracking-server$ python aim_test.py
Failed to connect to Aim Server. Have you forgot to run `aim server` command?
Traceback (most recent call last):
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
http.client.RemoteDisconnected: Remote end closed connection without response
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 799, in urlopen
retries = retries.increment(
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/util/retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/packages/six.py", line 769, in reraise
raise value.with_traceback(tb)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/urllib3/connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 1375, in getresponse
response.begin()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 318, in begin
version, status, reason = self._read_status()
File "/anaconda/envs/verhuiskans/lib/python3.10/http/client.py", line 287, in _read_status
raise RemoteDisconnected("Remote end closed connection without"
urllib3.exceptions.ProtocolError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/transport/utils.py", line 14, in wrapper
return func(*args, **kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/transport/client.py", line 138, in connect
response = requests.get(endpoint, headers=self.request_headers)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/requests/adapters.py", line 682, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/mnt/batch/tasks/shared/LS_root/mounts/clusters/ml-ci-jvranken-prd/code/Users/jvranken/aim-tracking-server/aim_test.py", line 7, in <module>
run = Run(
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 70, in wrapper
_SafeModeConfig.exception_callback(e, func)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 47, in reraise_exception
raise e
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/exception_resistant.py", line 68, in wrapper
return func(*args, **kwargs)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/run.py", line 859, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, experiment=experiment, force_resume=force_resume)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/run.py", line 272, in __init__
super().__init__(run_hash, repo=repo, read_only=read_only, force_resume=force_resume)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/base_run.py", line 34, in __init__
self.repo = get_repo(repo)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/repo_utils.py", line 26, in get_repo
repo = Repo.from_path(repo)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/repo.py", line 210, in from_path
repo = Repo(path, read_only=read_only, init=init)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/sdk/repo.py", line 121, in __init__
self._client = Client(remote_path)
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/transport/client.py", line 50, in __init__
self.connect()
File "/anaconda/envs/verhuiskans/lib/python3.10/site-packages/aim/ext/transport/utils.py", line 18, in wrapper
raise RuntimeError(error_message)
RuntimeError: Failed to connect to Aim Server. Have you forgot to run `aim server` command?
```
Do you have any clue as to why this is not working? Here are the `docker-compose.yaml` and the Python file I'm using:
```docker-compose.yaml
services:
  ui:
    image: aimstack/aim:3.20.1
    container_name: aim_ui
    restart: unless-stopped
    command: up --host 0.0.0.0 --port 43800 --dev
    ports:
      - 80:43800
    volumes:
      - ~/mycontainer/aim:/opt/aim
    networks:
      - aim
  server:
    image: aimstack/aim:3.20.1
    container_name: aim_server
    restart: unless-stopped
    command: server --host 0.0.0.0 --dev --ssl-keyfile /opt/secrets/server.key --ssl-certfile /opt/secrets/server.crt
    ports:
      - 53800:53800
    volumes:
      - ~/mycontainer/aim:/opt/aim
      - ~/secrets:/opt/secrets
    networks:
      - aim

networks:
  aim:
    driver: bridge
```
```aim-test.py
from aim import Run

# AIM_REPO = '/home/azureuser/mycontainer/aim'
AIM_REPO = 'aim://REDACTED:53800'
AIM_EXPERIMENT = 'SSL-server'

run = Run(
    repo=AIM_REPO,
    experiment=AIM_EXPERIMENT
)

hparams_dict = {
    'learning_rate': 0.001,
    'batch_size': 32,
}
run['hparams'] = hparams_dict

# log metric
for i in range(30):
    if i % 5 == 0:
        i = i * 0.347
    run.track(float(i), name='numbers')
```
| open | 2024-06-19T13:56:35Z | 2024-10-02T17:37:10Z | https://github.com/aimhubio/aim/issues/3172 | [
"type / enhancement",
"phase / shipped",
"area / SDK-storage"
] | JeroenVranken | 9 |
exaloop/codon | numpy | 555 | Interop with Python breaks CTRL+C | Running the following code in Codon and interrupting it with <kbd>CTRL + C</kbd> works as expected:
```py
import time
print("ready")
while True:
time.sleep(0.1)
```
```bash
$ codon run test.py
ready
^C
$
```
However, adding _any_ type of Python interop will disable breaking and will need killing the process to end it:
```py
import time
from python import PIL
print("ready")
while True:
time.sleep(0.1)
```
```bash
$ codon run test.py
ready
^C^C^C^C^C^C^C^CKilled: 9
$
```
Also works with a `@python` decorator:
```py
import time
@python
def foo():
pass
print("ready")
while True:
time.sleep(0.1)
```
```bash
$ codon run test.py
ready
^C^C^C^C^C^C^C^CKilled: 9
$
```
| closed | 2024-04-29T06:23:07Z | 2024-11-10T19:19:49Z | https://github.com/exaloop/codon/issues/555 | [] | Tenchi2xh | 2 |
pyro-ppl/numpyro | numpy | 1,071 | failure when using conda install | Hello, I was trying to install numpyro using conda and got the following error message. Could you take a look at it? The system is Windows + Anaconda.
```
(kaggle) C:\>conda install -c conda-forge numpyro
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.-
failed

UnsatisfiableError: The following specifications were found to be incompatible with each other:

Output in format: Requested package -> Available versions
```
| closed | 2021-06-22T20:37:20Z | 2021-07-11T02:16:35Z | https://github.com/pyro-ppl/numpyro/issues/1071 | [
"usability"
] | wenouyang | 2 |
NullArray/AutoSploit | automation | 544 | Unhandled Exception (f90e3a50d) | Autosploit version: `3.0`
OS information: `Linux-4.18.0-kali2-amd64-x86_64-with-Kali-kali-rolling-kali-rolling`
Running context: `autosploit.py`
Error message: `global name 'Except' is not defined`
Error traceback:
```
Traceback (most recent call):
File "/home/g5had0w/Downloads/AutoSploit-master/autosploit/main.py", line 113, in main
loaded_exploits = load_exploits(EXPLOIT_FILES_PATH)
File "/home/g5had0w/Downloads/AutoSploit-master/lib/jsonize.py", line 61, in load_exploits
except Except:
NameError: global name 'Except' is not defined
```
Metasploit launched: `False`
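Root cause, for anyone triaging: `Except` is not a Python name, so the `except Except:` line in `lib/jsonize.py` itself raises a `NameError` as soon as the handler is reached. A minimal sketch of the corrected shape (the real function body is not reproduced here):

```python
def load_exploits(path):
    try:
        with open(path) as fh:
            return fh.read().splitlines()
    except Exception:  # was `except Except:`, an undefined name, which masks the real error
        return []

print(load_exploits("definitely-missing-file.txt"))  # → []
```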
| closed | 2019-03-06T16:33:56Z | 2019-04-02T20:25:34Z | https://github.com/NullArray/AutoSploit/issues/544 | [] | AutosploitReporter | 0 |
nolar/kopf | asyncio | 886 | kopf cr status update | ### Keywords
status update
### Problem
Does the framework provide a way to update a CR's status?
My operator creates a pod and the custom resource needs to keep track/be in sync with the pod status.
Thanks. | open | 2022-01-04T13:32:15Z | 2022-02-11T00:03:37Z | https://github.com/nolar/kopf/issues/886 | [
"question"
] | odra | 8 |
chatanywhere/GPT_API_free | api | 171 | demo.py does not run | demo.py does not run; could you update it? Many thanks!
| closed | 2024-01-09T02:25:30Z | 2024-01-12T11:18:12Z | https://github.com/chatanywhere/GPT_API_free/issues/171 | [] | zhaozhh | 1 |
ray-project/ray | data-science | 51,218 | [Core] get_user_temp_dir() Doesn't Honor the User Specified Temp Dir | ### What happened + What you expected to happen
There are a couple of places in the code that use `ray._private.utils.get_user_temp_dir()` to get the temporary directory for Ray. However, if the user specifies `--temp-dir` during `ray start`, `get_user_temp_dir()` won't honor the custom temp directory.
The function or the usage of the function needs to be fixed to honor the user specified temp directory.
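To illustrate with the standard library only, here is a rough sketch of what the helper does (from reading the code; exact details may differ by platform), which makes clear why the flag is ignored:

```python
import os
import tempfile

def get_user_temp_dir_sketch():
    # rough sketch of ray._private.utils.get_user_temp_dir's logic:
    # an environment variable wins, otherwise fall back to the system default;
    # the --temp-dir value passed to `ray start` is never consulted
    for var in ("RAY_TMPDIR", "TMPDIR"):
        value = os.environ.get(var)
        if value:
            return value
    return tempfile.gettempdir()

print(get_user_temp_dir_sketch())  # e.g. /tmp on Linux, regardless of --temp-dir
```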
### Versions / Dependencies
Nightly
### Reproduction script
1. Start ray using: `ray start --head --temp-dir=/tmp/my-temp-dir`
2. Run the following:
```
import ray
ray._private.utils.get_user_temp_dir() # it will return '/tmp' instead of the temp directory specified in ray start
```
### Issue Severity
Low: It annoys or frustrates me. | open | 2025-03-10T17:45:56Z | 2025-03-21T23:10:02Z | https://github.com/ray-project/ray/issues/51218 | [
"bug",
"P1",
"core"
] | MengjinYan | 0 |
ijl/orjson | numpy | 436 | orjson 3.9.9 fails assertions with Python 3.12.0 | When running the test suite with Python 3.12 against orjson built with assertions enabled, it crashes.
```pytb
$ pip install -q pytest
$ pip install -q . -Cbuild-args=--profile=dev
$ export RUST_BACKTRACE=1
$ python -m pytest -s
========================================================= test session starts =========================================================
platform linux -- Python 3.12.0, pytest-7.4.2, pluggy-1.3.0
rootdir: /tmp/orjson-3.9.9
collected 1192 items
test/test_api.py .thread '<unnamed>' panicked at src/lib.rs:211:9:
assertion failed: ffi!(Py_REFCNT(args)) == 2
stack backtrace:
0: rust_begin_unwind
1: core::panicking::panic_fmt
2: core::panicking::panic
3: orjson::raise_loads_exception
at ./src/lib.rs:211:9
4: loads
at ./src/lib.rs:294:21
5: <unknown>
6: _PyEval_EvalFrameDefault
7: <unknown>
8: _PyEval_EvalFrameDefault
9: _PyObject_FastCallDictTstate
10: _PyObject_Call_Prepend
11: <unknown>
12: _PyObject_MakeTpCall
13: _PyEval_EvalFrameDefault
14: _PyObject_FastCallDictTstate
15: _PyObject_Call_Prepend
16: <unknown>
17: _PyObject_Call
18: _PyEval_EvalFrameDefault
19: _PyObject_FastCallDictTstate
20: _PyObject_Call_Prepend
21: <unknown>
22: _PyObject_MakeTpCall
23: _PyEval_EvalFrameDefault
24: _PyObject_FastCallDictTstate
25: _PyObject_Call_Prepend
26: <unknown>
27: _PyObject_MakeTpCall
28: _PyEval_EvalFrameDefault
29: _PyObject_FastCallDictTstate
30: _PyObject_Call_Prepend
31: <unknown>
32: _PyObject_MakeTpCall
33: _PyEval_EvalFrameDefault
34: PyEval_EvalCode
35: <unknown>
36: <unknown>
37: PyObject_Vectorcall
38: _PyEval_EvalFrameDefault
39: <unknown>
40: Py_RunMain
41: Py_BytesMain
42: __libc_start_call_main
at /tmp/portage/sys-libs/glibc-2.38-r6/work/glibc-2.38/csu/../sysdeps/nptl/libc_start_call_main.h:58:16
43: __libc_start_main_impl
at /tmp/portage/sys-libs/glibc-2.38-r6/work/glibc-2.38/csu/../csu/libc-start.c:360:3
44: _start
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
fatal runtime error: failed to initiate panic, error 5
Fatal Python error: Aborted
Current thread 0x00007f282edb5740 (most recent call first):
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/python_api.py", line 952 in raises
File "/tmp/orjson-3.9.9/test/test_api.py", line 30 in test_loads_trailing_invalid
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/python.py", line 194 in pytest_pyfunc_call
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/python.py", line 1792 in runtest
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 169 in pytest_runtest_call
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 262 in <lambda>
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 341 in from_call
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 261 in call_runtest_hook
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 222 in call_and_report
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 133 in runtestprotocol
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/runner.py", line 114 in pytest_runtest_protocol
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/main.py", line 350 in pytest_runtestloop
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/main.py", line 325 in _main
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/main.py", line 271 in wrap_session
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/main.py", line 318 in pytest_cmdline_main
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_callers.py", line 77 in _multicall
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_manager.py", line 115 in _hookexec
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pluggy/_hooks.py", line 493 in __call__
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/config/__init__.py", line 169 in main
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/_pytest/config/__init__.py", line 192 in console_main
File "/tmp/orjson-3.9.9/.venv/lib/python3.12/site-packages/pytest/__main__.py", line 5 in <module>
File "<frozen runpy>", line 88 in _run_code
File "<frozen runpy>", line 198 in _run_module_as_main
Aborted (core dumped)
``` | closed | 2023-10-24T17:09:51Z | 2023-10-26T15:55:05Z | https://github.com/ijl/orjson/issues/436 | [] | mgorny | 2 |
pydantic/logfire | fastapi | 216 | Show message from logfire platform | ### Description
In typical usage, the SDK should make a request to the logfire API at process start:
* the request should include: python version, SDK version, platform
* if the request times out or returns a 5XX error, the SDK should show a warning
* based on the response body, the SDK should show and arbitrary message, perhaps we have `message` and `level` so warnings can be shown in a different color, or even in a more prominent box
In particular we should use this notify the user if they're using an outdated version of the SDK, but the idea is to make the SDK as "dumb" as possible in case we want to change the logic in future.
This request should not be made if:
* `send_to_logfire=False`
* `base_url` is anything other than logfire | open | 2024-05-27T15:38:32Z | 2024-05-28T01:03:48Z | https://github.com/pydantic/logfire/issues/216 | [
"Feature Request"
] | samuelcolvin | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,792 | Cloudflare Form Turnstile Failure | I'm using the Driver class to automate a login process on a website, and even though I'm using **UC Mode**, it does not seem to work properly.
I've also tried other classes; the result is the same. When the span inside the iframe is clicked, it shows **Failure**. I've also tried clicking manually, with the same result.
Here is the code
```python
from seleniumbase import Driver

driver = Driver(uc=True, browser='Chrome')

# Reading the credentials
with open('creds.txt', 'r') as f:
    lines = f.read().splitlines()
    mail = lines[0].strip()
    password = lines[1].strip()

try:
    driver.maximize_window()
    driver.uc_open_with_reconnect("https://visa.vfsglobal.com/ind/en/pol/login", reconnect_time=20)
    driver.find_element("#onetrust-accept-btn-handler").click()

    # Typing the credentials
    driver.type("#mat-input-0", mail)
    driver.type("#mat-input-1", password)
    driver.sleep(5)
    driver.highlight_click(".mat-button-wrapper")
    driver.sleep(5)
    driver.switch_to_frame("iframe")
    driver.uc_click("span")
except Exception as e:  # the exception handler was cut off in the original snippet
    print(e)
```
Here is the screenshot

| closed | 2024-05-21T07:05:32Z | 2024-05-21T17:34:54Z | https://github.com/seleniumbase/SeleniumBase/issues/2792 | [
"invalid usage",
"UC Mode / CDP Mode"
] | asibhossen897 | 6 |
onnx/onnx | pytorch | 6,374 | CI not compatible with ubuntu-24.04; Change the runner from ubuntu-latest to ubuntu-22.04? | # Ask a Question
While testing Python 3.13, I realized that our current pipeline does not work on ubuntu-24.04.
Currently we are using ubuntu-latest; once 24.04 becomes "latest", our build will be among the first to fail.
A quick search didn't turn up a roadmap.
An example run can be found here:
https://github.com/onnx/onnx/actions/runs/10902364296/job/30254115460?pr=6373
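Pinning the runner instead of tracking `ubuntu-latest` would be a one-line change per workflow file, along these lines (job name illustrative):

```yaml
jobs:
  build:
    # was: runs-on: ubuntu-latest
    runs-on: ubuntu-22.04
```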
Should we change the runner from ubuntu-latest to ubuntu-22.04? | closed | 2024-09-17T11:53:15Z | 2024-11-01T14:57:44Z | https://github.com/onnx/onnx/issues/6374 | [
"question"
] | andife | 4 |
strawberry-graphql/strawberry | asyncio | 3,603 | Schema codegen GraphQLSyntaxError | I tried generating schema.graphql, but it did not work.
## Describe the Bug
I tried to generate schema.graphql.
cli: strawberry codegen --schema services.gateway.src.graphql.schema -o . -p python schema.graphql
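(Note for anyone triaging: the traceback below shows `q = PosixPath('schema.graphql')` being read and its contents parsed as a GraphQL *operations* document, so an empty file yields exactly this `Unexpected <EOF>` error. A minimal non-empty input would be an operation like the one below; `__typename` is valid against any schema. If the goal is to *produce* the schema SDL rather than generate client code from queries, I believe the `export-schema` subcommand, not `codegen`, is the right tool.)

```graphql
query MinimalOperation {
  __typename
}
```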
## System Information
- MacOS:
- strawberry-graphql==0.237.3
## Additional Context
```
The codegen is experimental. Please submit any bug at https://github.com/strawberry-graphql/strawberry
Generating code for schema.graphql using PythonPlugin plugin(s)
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ strawberry/cli/commands/codegen.py:139 in codegen โ
โ โ
โ 136 โ โ code_generator = QueryCodegen( โ
โ 137 โ โ โ schema_symbol, plugins=plugins, console_plugin=console_plugin โ
โ 138 โ โ ) โ
โ โฑ 139 โ โ code_generator.run(q.read_text()) โ
โ 140 โ โ
โ 141 โ console_plugin.after_all_finished() โ
โ 142 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ app_dir = '.' โ โ
โ โ cli_plugin = None โ โ
โ โ code_generator = <strawberry.codegen.query_codegen.QueryCodegen object at 0x112d87950> โ โ
โ โ console_plugin = <strawberry.codegen.query_codegen.ConsolePlugin object at 0x103f34f50> โ โ
โ โ console_plugin_type = <class 'strawberry.codegen.query_codegen.ConsolePlugin'> โ โ
โ โ output_dir = PosixPath('/Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backeโฆ โ โ
โ โ plugins = [ โ โ
โ โ โ <strawberry.codegen.plugins.python.PythonPlugin object at โ โ
โ โ 0x1121ea570> โ โ
โ โ ] โ โ
โ โ q = PosixPath('schema.graphql') โ โ
โ โ query = [PosixPath('schema.graphql')] โ โ
โ โ schema = 'services.gateway.src.graphql.schema' โ โ
โ โ schema_symbol = <strawberry.schema.schema.Schema object at 0x112c7bd70> โ โ
โ โ selected_plugins = ['python'] โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ strawberry/codegen/query_codegen.py:314 in run โ
โ โ
โ 311 โ def run(self, query: str) -> CodegenResult: โ
โ 312 โ โ self.plugin_manager.on_start() โ
โ 313 โ โ โ
โ โฑ 314 โ โ ast = parse(query) โ
โ 315 โ โ โ
โ 316 โ โ operations = self._get_operations(ast) โ
โ 317 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ query = '' โ โ
โ โ self = <strawberry.codegen.query_codegen.QueryCodegen object at 0x112d87950> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ graphql/language/parser.py:113 in parse โ
โ โ
โ 110 โ โ max_tokens=max_tokens, โ
โ 111 โ โ allow_legacy_fragment_variables=allow_legacy_fragment_variables, โ
โ 112 โ ) โ
โ โฑ 113 โ return parser.parse_document() โ
โ 114 โ
โ 115 โ
โ 116 def parse_value( โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ allow_legacy_fragment_variables = False โ โ
โ โ max_tokens = None โ โ
โ โ no_location = False โ โ
โ โ parser = <graphql.language.parser.Parser object at 0x112da1220> โ โ
โ โ source = '' โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ graphql/language/parser.py:241 in parse_document โ
โ โ
โ 238 โ โ """Document: Definition+""" โ
โ 239 โ โ start = self._lexer.token โ
โ 240 โ โ return DocumentNode( โ
โ โฑ 241 โ โ โ definitions=self.many(TokenKind.SOF, self.parse_definition, TokenKind.EOF), โ
โ 242 โ โ โ loc=self.loc(start), โ
โ 243 โ โ ) โ
โ 244 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ self = <graphql.language.parser.Parser object at 0x112da1220> โ โ
โ โ start = <Token <SOF> 0:0> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ graphql/language/parser.py:1149 in many โ
โ โ
โ 1146 โ โ token. โ
โ 1147 โ โ """ โ
โ 1148 โ โ self.expect_token(open_kind) โ
โ โฑ 1149 โ โ nodes = [parse_fn()] โ
โ 1150 โ โ append = nodes.append โ
โ 1151 โ โ expect_optional_token = partial(self.expect_optional_token, close_kind) โ
โ 1152 โ โ while not expect_optional_token(): โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ close_kind = <TokenKind.EOF: '<EOF>'> โ โ
โ โ open_kind = <TokenKind.SOF: '<SOF>'> โ โ
โ โ parse_fn = <bound method Parser.parse_definition of <graphql.language.parser.Parser object โ โ
โ โ at 0x112da1220>> โ โ
โ โ self = <graphql.language.parser.Parser object at 0x112da1220> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โ โ
โ /Users/its/Documents/Dev/code/smart-lisa-bot/chatbot-backend/.venv/lib/python3.12/site-packages/ โ
โ graphql/language/parser.py:302 in parse_definition โ
โ โ
โ 299 โ โ โ if method_name: โ
โ 300 โ โ โ โ return getattr(self, f"parse_{method_name}")() โ
โ 301 โ โ โ
โ โฑ 302 โ โ raise self.unexpected(keyword_token) โ
โ 303 โ โ
โ 304 โ # Implement the parsing rules in the Operations section. โ
โ 305 โ
โ โ
โ โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ locals โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ โ
โ โ has_description = False โ โ
โ โ keyword_token = <Token <EOF> 1:1> โ โ
โ โ self = <graphql.language.parser.Parser object at 0x112da1220> โ โ
โ โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
GraphQLSyntaxError: Syntax Error: Unexpected <EOF>.
GraphQL request:1:1
1 |
| ^
```
<img width="558" alt="image" src="https://github.com/user-attachments/assets/9a7bd077-5a5b-4074-bc2f-8b1e32e34812">
schema.py:
```
import strawberry
from .queries import Query
from .mutations import Mutation
schema = strawberry.Schema(query=Query, mutation=Mutation)
``` | closed | 2024-08-24T06:33:39Z | 2024-09-12T19:20:36Z | https://github.com/strawberry-graphql/strawberry/issues/3603 | [
"bug"
] | itsklimov | 4 |
roboflow/supervision | machine-learning | 1,777 | Deprecation warning for `sv.MeanAveragePrecision` and other outdated metrics | ### Search before asking
- [x] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
##### What's the problem?
I found two ways of computing the mAP metric: `sv.MeanAveragePrecision` and `sv.metrics.MeanAveragePrecision`. However, I see [here](https://supervision.roboflow.com/develop/detection/metrics/) that `sv.MeanAveragePrecision` and a few other metrics will be deprecated.
##### Why is it important?
Both `sv.MeanAveragePrecision` and `sv.metrics.MeanAveragePrecision` give different results as shown in the following example.
```py
import os

import numpy as np
import supervision as sv
from ultralytics import YOLO

# Download dataset
if not os.path.exists("/tmp/rf_animals"):
    !wget https://universe.roboflow.com/ds/1LLwpXz2td?key=8JnJML5YF6 -O /tmp/rf_animals.zip
    !unzip /tmp/rf_animals.zip -d /tmp/rf_animals

# Load dataset
dataset = sv.DetectionDataset.from_yolo("/tmp/rf_animals/train/images", "/tmp/rf_animals/train/labels", "/tmp/rf_animals/data.yaml")

# Inference
model = YOLO("yolov8s")
targets, detections = [], []
for image_path, image, target in dataset:
    targets.append(target)
    prediction = model(image, verbose=False)[0]
    detection = sv.Detections.from_ultralytics(prediction)
    detection = detection[np.isin(detection['class_name'], dataset.classes)]
    detection.class_id = np.array([dataset.classes.index(class_name) for class_name in detection['class_name']])
    detections.append(detection)

# Method #1
mAP = sv.metrics.MeanAveragePrecision().update(detections, targets).compute()
print(f"mAP50: {mAP.map50:.4f}")

# Method #2
mAP = sv.MeanAveragePrecision.from_detections(detections, targets)
print(f"mAP50: {mAP.map50:.4f}")
```
Output
```
mAP50: 0.1553
mAP50: 0.2100
```
As per the docstrings, Method 1 computes the average precision given the recall and precision curves, following https://github.com/rafaelpadilla/Object-Detection-Metrics, and Method 2 uses the 101-point interpolation (COCO) method. People may end up using a mixture of these methods and derive wrong conclusions.
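(Background on why the numbers can legitimately differ: the two AP definitions sample precision at different points. COCO-style AP is, roughly,)

```math
\mathrm{AP}_{\text{COCO}} = \frac{1}{101} \sum_{r \in \{0,\ 0.01,\ \ldots,\ 1\}} p_{\text{interp}}(r),
\qquad
p_{\text{interp}}(r) = \max_{\tilde{r} \ge r} p(\tilde{r})
```

whereas the all-point method integrates the precision/recall curve directly, so a gap of this size between implementations is expected rather than a bug in either one.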
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [x] Yes I'd like to help by submitting a PR! | open | 2025-01-19T08:15:46Z | 2025-01-19T18:05:54Z | https://github.com/roboflow/supervision/issues/1777 | [
"enhancement"
] | patel-zeel | 0 |
miguelgrinberg/flasky | flask | 249 | Add Category to the Blog Post in the Form | Hello, I would like to add Categories to the Posts. For that I have added the following many-to-many relationship to the Post model:
```python
# association table defined first so Post can reference it below
has_category = db.Table('has_category',
    db.Column('category_id', db.Integer, db.ForeignKey('category.id')),
    db.Column('post_id', db.Integer, db.ForeignKey('post.id')))


class Post(db.Model):
    # ...
    categories = db.relationship('Category',
                                 secondary=has_category,
                                 backref=db.backref('posts', lazy='dynamic'),
                                 lazy='dynamic')


class Category(db.Model):
    __tablename__ = 'category'
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String(64), unique=True, index=True)
```
Then I changed the main/forms.py file and added a SelectField to the PostForm:
```python
class PostForm(FlaskForm):
    body = PageDownField("What's on your mind?", validators=[Required()])
    category = SelectField('Category', coerce=int)
    submit = SubmitField('Submit')

    def __init__(self, user, *args, **kwargs):
        super(PostForm, self).__init__(*args, **kwargs)
        # note: order_by(Category.name), not order_by(category.name)
        self.category.choices = [(category.id, category.name)
                                 for category in Category.query.order_by(Category.name).all()]
```
The Categories are then displayed in the Select field as they should be.
Now I would like to give the user the option to add new Categories while creating a blog post. I have created a new view and a new HTML file to move the post creation out of index.html. The template looks like the following:
```python
{% block page_content %}
<div class="page-header">
<h1>Add a new Post</h1>
</div>
<div class="col-md-4">
<form action="#" method="post">
{{ form.csrf_token }}
{{ form.body.label }} {{ form.body }}
{{ form.category.label }} {{ form.category}}
<button>Add</button>
<input type="submit" value="Go">
</form>
</div>
{% endblock %}
```
Clicking the Add button should prompt the user for the name of the new category, add it to the database, and update the SelectField to display the newly added category.
But how can I achieve this? Is there a solution without JavaScript?
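One JavaScript-free approach is to give the form two named submit buttons and branch in the view on which one was pressed: the Add button inserts the category and re-renders the page, so the rebuilt SelectField shows the new entry. A framework-agnostic sketch of that dispatch logic (field names are illustrative, not from the book):

```python
def handle_post_form(form_data, categories):
    """Decide what a no-JS, two-submit-button form submission should do.

    form_data: dict of posted fields; 'add' is present when the Add
    button was pressed, 'submit' when the main button was pressed.
    """
    if "add" in form_data:
        name = form_data.get("new_category", "").strip()
        if name and name not in categories:
            categories.append(name)   # in Flask this would be a db.session.add/commit
        return "re-render"            # redisplay the form so the new choice appears
    if "submit" in form_data:
        return "create-post"
    return "re-render"

cats = ["python", "flask"]
print(handle_post_form({"add": "", "new_category": "sqlalchemy"}, cats))
print(cats)
```

In a Flask view the two branches would map to inserting a Category row and redirecting back, versus creating the Post; the template only needs `<button name="add">` and `<input type="submit" name="submit">` in the same form.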
I hope this is the right place to ask for help. | closed | 2017-03-06T08:19:47Z | 2017-03-17T19:06:31Z | https://github.com/miguelgrinberg/flasky/issues/249 | [
"question"
] | KevDi | 3 |
quantmind/pulsar | asyncio | 247 | add bench setup.py command | Use the benchmark test plugin
| closed | 2016-09-29T13:12:06Z | 2016-10-04T18:01:38Z | https://github.com/quantmind/pulsar/issues/247 | [
"test",
"benchmark"
] | lsbardel | 1 |
Lightning-AI/pytorch-lightning | machine-learning | 19,907 | Docs or tutorials are needed for handling multiple loaders returned from the test_dataloader() method | ### 📚 Documentation
I know that LightningDataModule supports returning multiple dataloaders, such as a list or dict of dataloaders. But how do I handle the test in LightningModule?
We often need to print information about the computation for each test loader, save the figures drawn for each dataloader, and run some calculation over the whole dataset of each test dataloader. But how do we do these things in LightningModule's test hooks? The developers only say that Lightning automatically handles the combination of multiple dataloaders, but how can Lightning compute per dataloader and record information for each dataloader?
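When multiple test dataloaders are returned, Lightning passes a `dataloader_idx` argument to `test_step`, which is enough to keep per-loader results apart. The bookkeeping itself is plain Python; a framework-free sketch (loader names and metrics are made up):

```python
from collections import defaultdict

class PerLoaderResults:
    """Accumulate one metric stream per test dataloader, as one might do
    inside LightningModule.test_step(batch, batch_idx, dataloader_idx)."""
    def __init__(self, names):
        self.names = names
        self.losses = defaultdict(list)

    def record(self, dataloader_idx, loss):
        self.losses[dataloader_idx].append(loss)

    def summarize(self):
        # e.g. called from on_test_epoch_end: one mean per dataloader
        return {self.names[i]: sum(v) / len(v) for i, v in self.losses.items()}

res = PerLoaderResults(["clean_set", "noisy_set"])
for idx, loss in [(0, 0.2), (0, 0.4), (1, 0.9)]:  # simulated test_step calls
    res.record(idx, loss)
print(res.summarize())   # separate averages, one per test set
```

Inside a real LightningModule the same pattern works: key every accumulator (or figure path, or log name) on `dataloader_idx`, then report per-loader in the epoch-end hook.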
cc @borda | open | 2024-05-25T11:01:27Z | 2024-05-25T11:01:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19907 | [
"docs",
"needs triage"
] | onbigion13 | 0 |
svc-develop-team/so-vits-svc | deep-learning | 33 | Sharing a direct-download link trick for /hubert/checkpoint_best_legacy_500.pt on box.com | https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr
Replace /s/ in the URL with /shared/static
So it becomes https://ibm.ent.box.com/shared/static/z1wgl1stco8ffooyatzdwsqn2psd9lrr
!curl -L https://ibm.ent.box.com/shared/static/z1wgl1stco8ffooyatzdwsqn2psd9lrr --output hubert/checkpoint_best_legacy_500.pt
!wget -O hubert/checkpoint_best_legacy_500.pt https://ibm.ent.box.com/shared/static/z1wgl1stco8ffooyatzdwsqn2psd9lrr
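The path rewrite itself is a plain string substitution (note that the author's example also switches the host from `ibm.box.com` to `ibm.ent.box.com`); a small sketch:

```python
def box_direct_url(share_url: str) -> str:
    """Swap box.com's /s/ path segment for /shared/static/ to get a
    direct-download URL (host adjustments, if any, are left to the caller)."""
    return share_url.replace("/s/", "/shared/static/", 1)

print(box_direct_url("https://ibm.box.com/s/z1wgl1stco8ffooyatzdwsqn2psd9lrr"))
```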

Tutorial source: https://stackoverflow.com/questions/46239248/how-to-download-a-file-from-box-using-wget | closed | 2023-03-16T12:30:56Z | 2023-03-16T14:56:13Z | https://github.com/svc-develop-team/so-vits-svc/issues/33 | [] | upright2003 | 0 |
voxel51/fiftyone | data-science | 5,211 | [FR] Allow to NOT render semantic segmentation labels | Given a dataset with semantic segmentation labels, can I configure the app to NOT render them?
I realize that I can toggle their visibility on and off, but it seems to me that they get loaded independently of that, thus slowing down rendering.
### Willingness to contribute
The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature?
- [ ] Yes. I can contribute this feature independently
- [X] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community
- [ ] No. I cannot contribute this feature at this time
| open | 2024-12-04T14:19:18Z | 2024-12-04T14:48:39Z | https://github.com/voxel51/fiftyone/issues/5211 | [
"feature"
] | cgebbe | 1 |
vitalik/django-ninja | django | 1,172 | Not required fields in FilterSchema | Please describe what you are trying to achieve:
In FilterSchema, is it possible to use the `exclude_none` parameter?
Please include code examples (like models code, schemas code, view function) to help understand the issue
```python
# assuming the usual django-ninja imports and a Router instance named `route`:
from typing import List
from ninja import Field, FilterSchema, Query, Router

route = Router()

class Filters(FilterSchema):
    limit: int = 100
    offset: int = None
    query: str = None
    category__in: List[str] = Field(None, alias="categories")

@route.get("/filter")
def events(request, filters: Query[Filters]):
    print(filters.filter)
    return {"filters": filters.dict()}
```
The print output is:
> <FilterSchema.filter of Filters(limit=100, offset=None, query=None, category__in=None)>
While `filters.dict()` doesn't include the None values, `filters.filter` still has them.
My question is: is it possible to use `exclude_none` on the request body?
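Whatever django-ninja does internally, the `exclude_none` semantics on a plain dict amount to a one-line filter; a framework-free sketch (the helper name is made up):

```python
def exclude_none(filters: dict) -> dict:
    """Drop keys whose value is None, mirroring pydantic's
    .dict(exclude_none=True) behaviour on a plain dict."""
    return {k: v for k, v in filters.items() if v is not None}

params = {"limit": 100, "offset": None, "query": None, "category__in": None}
cleaned = exclude_none(params)
print(cleaned)   # only the fields that were actually supplied remain
# which could then be forwarded, e.g. Model.objects.filter(**cleaned)
```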
| open | 2024-05-20T08:16:49Z | 2024-09-27T06:27:31Z | https://github.com/vitalik/django-ninja/issues/1172 | [] | horizon365 | 1 |
s3rius/FastAPI-template | fastapi | 48 | Add GraphQL support. | The Strawberry community has merged a pull request that adds FastAPI support.
The idea is to add a GraphQL API option.
About strawberry:
Current version: 0.84.4
pypi url: https://pypi.org/project/strawberry-graphql/
Homepage: https://strawberry.rocks/
Required python: >=3.7,<4.0 | closed | 2021-10-23T16:08:57Z | 2022-04-19T07:19:14Z | https://github.com/s3rius/FastAPI-template/issues/48 | [] | s3rius | 1 |
Anjok07/ultimatevocalremovergui | pytorch | 1,678 | Chinese text in the UVR 5.6 interface on Rocky Linux 9 is displayed as Unicode escapes, e.g. \u05t7\46y8\ | Hi,
Thank you very much for developing such an excellent open-source program, Ultimate Vocal Remover 5.6. I have been bothered by a problem and hope to get your help; I would be very grateful:
I created Python virtual environments with Anaconda on Rocky Linux 9.5 and installed UVR 5.6 under Python 3.9, 3.10, and 3.11. In each case I can successfully launch the GUI, but Chinese text in the GUI is shown as Unicode escape codes.
Running python -m tkinter in the Anaconda environment works, but Chinese in the tkinter window also shows up as Unicode escapes, such as \u4065\u5634. During the pip installation, I replaced the unsupported "sklearn" dependency in the setup.py of Dora-0.0.3 with scikit-learn. The rest follows your Linux installation instructions.

How can I make the Ultimate Vocal Remover 5.6 graphical interface display Chinese correctly? | open | 2024-12-22T13:40:47Z | 2024-12-22T13:41:52Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1678 | [] | mrcedar-git | 0 |
redis/redis-om-python | pydantic | 101 | Implement a configurable key namespace separator | Right not `:` is used for the key namespace separator. While this is the right answer most of the time, there are several places in the code marked with a TODO to make this configurable, which is a good idea but fairly low priority. | open | 2022-01-21T16:37:10Z | 2022-01-21T16:37:10Z | https://github.com/redis/redis-om-python/issues/101 | [
"enhancement"
] | simonprickett | 0 |
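Making the separator configurable mostly means threading one setting through key construction; a minimal sketch of the idea (class and method names are hypothetical, not redis-om's actual internals):

```python
class KeySettings:
    """Per-model key settings with an overridable namespace separator."""
    def __init__(self, prefix: str, separator: str = ":"):
        self.prefix = prefix
        self.separator = separator

    def make_key(self, *parts: str) -> str:
        # join the model prefix and key parts with the configured separator
        return self.separator.join((self.prefix, *parts))

default = KeySettings("customer")
custom = KeySettings("customer", separator="|")
print(default.make_key("0563", "orders"))   # customer:0563:orders
print(custom.make_key("0563", "orders"))    # customer|0563|orders
```

Keeping the default at `:` preserves existing keys, while any model that needs a different convention overrides only the one setting.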