repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
waditu/tushare | pandas | 1,600 | pro.stk_holdernumber returns abnormal data | Using the shareholder count interface:
df = pro.stk_holdernumber(ts_code='300199.SZ', start_date='20160101', end_date='20181231')
When fetching the shareholder count for 000505, the first three holder_nums values in the returned dataframe are NaN.
tushare id: shi7631470@163.com | open | 2021-11-03T23:58:53Z | 2021-11-04T00:05:58Z | https://github.com/waditu/tushare/issues/1600 | [] | shi-hao | 0 |
tensorpack/tensorpack | tensorflow | 632 | How to train Mask RCNN on own dataset? | I learned about your Mask RCNN when I read matterport's implementation. Your performance is very impressive. I would like to use it to train on my custom dataset, which looks like this: one raw image and `N` label images, where each label image stores a segmented image without a bounding box. I have used matterport's example to train and run it successfully. However, I am new to your code. Could you tell me how I can train my dataset with your code? Thanks so much
| closed | 2018-02-06T14:59:06Z | 2020-01-09T21:37:21Z | https://github.com/tensorpack/tensorpack/issues/632 | [
"examples"
] | John1231983 | 23 |
pytorch/pytorch | machine-learning | 149,075 | xpu: target torch::xpurt not found linking with libtorch installed from XPU wheels | Consider that PyTorch XPU is installed on a newly configured system with:
```
# pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/xpu
# pip3 list | grep torch
torch 2.7.0.dev20250312+xpu
```
Further, consider the use case where someone works on a C++ library/executable and wants to link with libtorch:
```
# touch sample.cpp
# cat CMakeLists.txt
cmake_minimum_required(VERSION 3.18)
project(sample)
find_package(Torch REQUIRED)
add_library(sample SHARED sample.cpp)
target_link_libraries(sample PUBLIC ${TORCH_LIBRARIES})
```
Trying to configure the above cmake script will lead to the following error:
```
$ cmake -DTorch_DIR=$(python3 -c "import torch; print(torch.utils.cmake_prefix_path)")/Torch .
CMake Warning at /home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:22 (message):
static library kineto_LIBRARY-NOTFOUND not found.
Call Stack (most recent call first):
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:125 (append_torchlib_if_found)
CMakeLists.txt:3 (find_package)
-- Configuring done (0.0s)
CMake Error at /home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Caffe2/Caffe2Targets.cmake:61 (set_target_properties):
The link interface of target "c10_xpu" contains:
torch::xpurt
but the target was not found. Possible reasons include:
* There is a typo in the target name.
* A find_package call is missing for an IMPORTED target.
* An ALIAS target is missing.
Call Stack (most recent call first):
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Caffe2/Caffe2Config.cmake:114 (include)
/home/dvrogozh/pytorch.xpu/lib/python3.12/site-packages/torch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:3 (find_package)
```
The reason for the failure is that in the above scenario the oneAPI environment was not installed and sourced. As a result, [FindSYCLToolkit.cmake](https://github.com/pytorch/pytorch/blob/891ba2ec8a3e2e71137fab4a8e91940a19c8272b/cmake/Modules/FindSYCLToolkit.cmake) fails to find SYCL (due to the way it's configured). Note also that after installing PyTorch XPU, a SYCL environment is actually available under the pip installation:
```
$ find ~/pytorch.xpu/ -iname sycl
/home/dvrogozh/pytorch.xpu/include/sycl
/home/dvrogozh/pytorch.xpu/include/sycl/CL/sycl
$ find ~/pytorch.xpu/ -iname sycl*.hpp
/home/dvrogozh/pytorch.xpu/include/syclcompat/syclcompat.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/sycl_span.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/detail/sycl_mem_obj_allocator.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/ext/intel/esimd/detail/sycl_util.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/sycl.hpp
/home/dvrogozh/pytorch.xpu/include/sycl/CL/sycl.hpp
/home/dvrogozh/pytorch.xpu/include/syclcompat.hpp
$ find ~/pytorch.xpu/ -iname libsycl*.so
/home/dvrogozh/pytorch.xpu/lib/libsycl.so
/home/dvrogozh/pytorch.xpu/lib/libsycl_ur_trace_collector.so
/home/dvrogozh/pytorch.xpu/lib/libsycl-preview.so
```
Thus, for use cases where the DPC++ compiler is not needed, it should technically be possible to use the XPU environment installed from wheels for the build. **Do we need to fix FindSYCLToolkit.cmake to make such builds possible?**
CC: @gujinghui @EikanWang @fengyuan14 @guangyey @jgong5
cc @jbschlosser @gujinghui @EikanWang @fengyuan14 @guangyey | open | 2025-03-12T20:51:27Z | 2025-03-18T01:53:30Z | https://github.com/pytorch/pytorch/issues/149075 | [
"module: cpp",
"triaged",
"module: xpu"
] | dvrogozh | 4 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 726 | AM-softmax loss | Why doesn't the library have an AM-softmax loss? | open | 2024-10-27T09:32:09Z | 2024-10-28T12:38:29Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/726 | [
"new algorithm request"
] | sofpya | 1 |
openapi-generators/openapi-python-client | rest-api | 559 | Don't prepend title based property names, use Titles for full model type names | **Is your feature request related to a problem? Please describe.**
Ideally an openapi description would get us nice, not-too-long type names without a lot of futzing with overriding type names via class_overrides.
If you add a title/subtitle to an object's body, it seems to get appended to the path-based name, making names very long and often redundant,
and you have to fix it via class_overrides (or, if you control the openapi description, by moving everything into components, which don't cause long names).
An example to clarify:
```yml
paths:
/bob:
get:
operationId: get_bob_method
requestBody:
content:
application/json:
schema:
title: Get Bob Body
type: object
properties:
ArrayProp:
type: array
items:
title: Bob Body Item
type: object
properties:
MyProp:
type: string
responses:
200:
description: "Response"
content:
application/json:
schema:
title: The Robert Report
type: object
properties:
ArrayProp:
type: array
items:
title: Robert Response Item
type: object
properties:
MyPropR:
type: string
```
results in `GetBobMethodGetBobBody` / `GetBobMethodGetBobBodyBobBodyItem` for the json_body and
`GetBobMethodTheRobertReport` / `GetBobMethodTheRobertReportRobertResponseItem` for the response type instead of `GetBobBody`, `BobBodyItem`, and `TheRobertReport`/`RobertResponseItem`
Trying out the alternative openapi-generator-cli it returns `GetBobBody`, `BobBodyItem`, and `TheRobertReport`/`RobertResponseItem`.
While this is not mandated anywhere, I think it's much more sensible.
Usually with a title you will have named the full type/subtype and not need to prepend it with the method name/outer type.
**Describe the solution you'd like**
We should keep appending to a name when using default property names/items, but start from scratch every time we hit a title.
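As a hedged illustration of that rule, a naming helper could look roughly like the sketch below (the function and its arguments are hypothetical, not the generator's actual code):
```python
# Sketch only: accumulate a path-derived prefix for untitled schemas,
# but reset the accumulated prefix whenever a schema provides a title.
def build_class_name(parent_prefix: str, schema_title, fallback_segment: str) -> str:
    if schema_title:
        # A title fully names the type, so the outer prefix is dropped.
        return schema_title.title().replace(" ", "")
    return parent_prefix + fallback_segment.title().replace(" ", "")
```
With that rule, `build_class_name("GetBobMethod", "Get Bob Body", "Body")` would yield `GetBobBody` instead of `GetBobMethodGetBobBody`.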
Currently the only way to fix things is to rename via class_overrides or, if you have control of the openapi description, to move everything into components (since types from components only use the component name and not the full path name) | closed | 2021-12-27T12:10:39Z | 2022-12-17T23:35:58Z | https://github.com/openapi-generators/openapi-python-client/issues/559 | [
"✨ enhancement"
] | rtaycher | 3 |
graphql-python/graphene-sqlalchemy | sqlalchemy | 206 | Formatting datetime-like columns | It'd be nice to be able to specify a `strftime` string which would automatically be used when resolving datetime-like columns so the user gets a friendly value. | closed | 2019-04-15T20:44:42Z | 2023-08-15T00:35:51Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/206 | [
"enhancement"
] | thejcannon | 5 |
miguelgrinberg/Flask-Migrate | flask | 190 | Idiom to create empty db if one does not exist | Is there an idiom I can use to create an empty db with all the tables but no data in them?
Here is my code; it only creates an empty sqlite file (0 bytes) and does not populate it with tables (User and Post, following your example):
```python
# imports added for completeness; the Config location is assumed to follow the tutorial layout
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_migrate import Migrate
from config import Config

application = Flask(__name__)
application.config.from_object(Config)
db = SQLAlchemy(application)
migrate = Migrate(application, db)
db.create_all()
db.session.commit()
```
 | closed | 2018-03-04T12:27:01Z | 2019-01-13T22:20:33Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/190 | [
"question",
"auto-closed"
] | eeshans | 4 |
Python3WebSpider/ProxyPool | flask | 154 | Error reported; cannot get correct results when running | ![image](https://user-images.githubusercontent.com/102640627/162178778-a2c0c77b-110b-441c-b1a0-4e8e5d1eee05.png)



| open | 2022-04-07T11:52:58Z | 2022-04-09T06:15:08Z | https://github.com/Python3WebSpider/ProxyPool/issues/154 | [
"bug"
] | Sy191130 | 1 |
horovod/horovod | machine-learning | 3,057 | Unable to extract storage_options from URL | The integration of fsspec in Horovod was a great step.
Fsspec supports extracting storage_options from the input url. In Horovod, the url format is restricted, i.e. the option of extracting the storage options from the url is blocked.
It would be of great help if we could have a mechanism to extract the storage_options from the input url, similar to fsspec.
For the above to work, we need to make changes to the _get_fs_and_protocol(self) method in the store file. The modified def will look like:
```
def _get_fs_and_protocol(self):
    # split_protocol / update_storage_options are fsspec helpers (assumed to come from
    # fsspec.core / fsspec.utils); storage_options is assumed to be passed into the store.
    protocol, path = split_protocol(self.prefix_path)
    cls = fsspec.get_filesystem_class(protocol)
    options = cls._get_kwargs_from_urls(self._dataset_url)
    update_storage_options(options, storage_options)
    fs = cls(**options)
    return fs, protocol
``` | closed | 2021-07-28T07:43:42Z | 2021-09-02T16:18:59Z | https://github.com/horovod/horovod/issues/3057 | [
"enhancement"
] | manjuransari-zz | 0 |
miguelgrinberg/flasky | flask | 349 | Redundant code in models.py | Hi, Miguel!
It seems that [this line](https://github.com/miguelgrinberg/flasky/blob/master/app/models.py#L113) in `models.User.add_self_follows` is redundant:
```py
class User():
# ...
@staticmethod
def add_self_follows():
for user in User.query.all():
if not user.is_following(user): # <------- this line
user.follow(user)
db.session.add(user)
db.session.commit()
```
Since it already checks in `User.follow()`:
```py
class User():
# ...
def follow(self, user):
if not self.is_following(user): # <----------
f = Follow(follower=self, followed=user)
db.session.add(f)
``` | closed | 2018-04-21T03:54:14Z | 2018-04-21T07:32:48Z | https://github.com/miguelgrinberg/flasky/issues/349 | [
"bug"
] | greyli | 2 |
SALib/SALib | numpy | 355 | Document release process | 1. Write down a list of steps to take when preparing for a new release
2. Create an Issue Template which can be used to keep track of these steps for each release (e.g. a new issue using the template is opened when preparing a release, and closed upon release). | closed | 2020-09-14T21:58:47Z | 2022-09-06T12:40:30Z | https://github.com/SALib/SALib/issues/355 | [
"enhancement",
"documentation"
] | willu47 | 17 |
Lightning-AI/pytorch-lightning | data-science | 20,217 | Questions about loading a pre-trained model using Lightning CLI for continued training | ### Bug description
Hi, I tried to load a pre-trained model using the Lightning CLI for continued training, which works well in the single-GPU case. However, in the multi-GPU case I hit a bug in the optimization process:
RuntimeError: !tensors.empty() INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1712608853085/work/torch/csrc/distributed/c10d/reducer.cpp":2090, please report a bug to PyTorch.
This happens on both rank 0 and rank 1 GPUs. How do I properly load models for multi-GPU training with the help of the CLI? Thanks.
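As a hedged sketch of one common approach (assuming a recent Lightning 2.x where the CLI exposes `Trainer.fit`'s `ckpt_path` argument; file names and paths below are placeholders):
```python
# Minimal CLI entry point; continuing from a checkpoint is done via --ckpt_path.
from lightning.pytorch.cli import LightningCLI

def cli_main():
    # Typically invoked as, for example:
    #   python main.py fit --config config.yaml --ckpt_path pretrained.ckpt \
    #       --trainer.devices 2 --trainer.strategy ddp
    # --ckpt_path maps to Trainer.fit(ckpt_path=...), which restores model,
    # optimizer and scheduler state before training resumes.
    LightningCLI()

if __name__ == "__main__":
    cli_main()
```
Whether this avoids the reducer assertion above would still need to be confirmed.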
### What version are you seeing the problem on?
master
### How to reproduce the bug
```python
Please see the questions above.
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
Nope | open | 2024-08-20T06:49:12Z | 2024-08-20T06:49:26Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20217 | [
"bug",
"needs triage",
"ver: 2.4.x"
] | HelloWorldLTY | 0 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 132 | Multi-Headed Attention colab error link to Transformer | https://nn.labml.ai/transformers/mha.html
This link doesn't go to MHA code, but goes to Transformer code. | closed | 2022-07-15T19:25:46Z | 2022-08-27T09:07:35Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/132 | [
"documentation"
] | Longer430 | 1 |
graphql-python/gql | graphql | 296 | Re-using client after schema fetching failed | **Describe the bug**
The Client has the option to fetch the GraphQL schema from the transport. This is done when the session is initialized and the program enters the context manager, i.e. when doing the following:
```python
client = Client(transport=AIOHTTPTransport(url), fetch_schema_from_transport=True)
with client as session:
# do something with the session
```
However, it can happen that fetching the schema fails. This throws an exception, but since the context manager was not completely entered yet, the transport is not closed as it would be when leaving the context manager. When trying to open a new session, this fails with a `TransportAlreadyConnected` error.
This only happens when re-using the client object, e.g.,
```python
client = Client(transport=AIOHTTPTransport(url), fetch_schema_from_transport=True)
with client as session:
# session is not established because the server is not available
...
with client as session:
# session can now be established
```
Maybe this is not intended? Should we always create a new client object after an error?
I think this can be easily fixed by catching the error in the `__aenter__` method and closing the transport. I will add a PR.
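As a hedged illustration of that idea (the class and method names below are simplified placeholders, not gql's actual code):
```python
# Sketch of the pattern: if anything fails while entering the async context
# (e.g. the introspection query used to fetch the schema), close the transport
# before re-raising so the same client object can be reused later.
class SafeAsyncClient:
    def __init__(self, transport):
        self.transport = transport

    async def __aenter__(self):
        await self.transport.connect()
        try:
            await self._fetch_schema()    # may raise
            return self
        except Exception:
            await self.transport.close()  # undo the partial setup
            raise

    async def __aexit__(self, exc_type, exc, tb):
        await self.transport.close()

    async def _fetch_schema(self):
        ...  # placeholder for the schema introspection call
```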
**To Reproduce**
Steps to reproduce the behavior:
1. Open a client session to a server that is not yet available
2. Wait until the server is available
3. Try again to open a client session to the server
**Expected behavior**
The following should be possible without errors:
```python
client = Client(transport=AIOHTTPTransport(url), fetch_schema_from_transport=True)
try:
with client as session:
# session is not established because the server is not available
except Exception:
pass
sleep(30)
with client as session:
# session can now be established
```
**System info (please complete the following information):**
- OS: Linux
- Python version: 3.10
- gql version: 3.0.0rc0
- graphql-core version: 3.1.7
| closed | 2022-02-18T10:13:36Z | 2022-02-22T07:13:27Z | https://github.com/graphql-python/gql/issues/296 | [
"type: bug"
] | joricht | 2 |
igorbenav/fastcrud | sqlalchemy | 155 | Rename "data" to "items" in multi response | **Is your feature request related to a problem? Please describe.**
The multi-response field `data` is generic and doesn't describe the returned data very well.
**Describe the solution you'd like**
I would like to use a more descriptive term, e.g. `items`, that indicates that a _list_ is being returned.
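As a purely illustrative sketch of the desired shape (class and field names here are assumptions, not fastcrud's actual API; assumes Pydantic v2 generics):
```python
from typing import Generic, TypeVar
from pydantic import BaseModel

T = TypeVar("T")

class PaginatedListResponse(BaseModel, Generic[T]):
    items: list[T]        # renamed from "data" to signal that a list is returned
    total_count: int
    has_more: bool = False
```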
| open | 2024-09-03T08:20:06Z | 2025-02-24T11:12:44Z | https://github.com/igorbenav/fastcrud/issues/155 | [
"enhancement"
] | feluelle | 3 |
timkpaine/lantern | plotly | 14 | Remove offline default |  | closed | 2017-10-02T17:31:36Z | 2018-02-05T21:28:59Z | https://github.com/timkpaine/lantern/issues/14 | [
"feature",
"plotly/cufflinks"
] | timkpaine | 0 |
ultralytics/ultralytics | deep-learning | 19,352 | YOLO12-pose, cls, seg checkpoints not downloadable even after upgrading to the latest stable version from pip | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Predict
### Bug
Hello @glenn-jocher ,
I go to know by referring docs that YOLO12 for other tasks are also ported in ultralytics. However , seems there is a bug in downloading the model.
I'm trying to inference YOLO12 for other tasks, but I'm facing challenges in initializing the model.

Let me know, about the fix. Thanks in advance.

### Environment
```
Ultralytics 8.3.78 🚀 Python-3.11.11 torch-2.5.1+cu124 CUDA:0 (Tesla T4, 15095MiB)
Setup complete ✅ (2 CPUs, 12.7 GB RAM, 33.2/112.6 GB disk)
OS Linux-6.1.85+-x86_64-with-glibc2.35
Environment Colab
Python 3.11.11
Install pip
RAM 12.67 GB
Disk 33.2/112.6 GB
CPU Intel Xeon 2.20GHz
CPU count 2
GPU Tesla T4, 15095MiB
GPU count 1
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.10.0>=3.3.0
opencv-python ✅ 4.11.0.86>=4.6.0
pillow ✅ 11.1.0>=7.1.2
pyyaml ✅ 6.0.2>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.13.1>=1.4.1
torch ✅ 2.5.1+cu124>=1.8.0
torch ✅ 2.5.1+cu124!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1+cu124>=0.9.0
tqdm ✅ 4.67.1>=4.64.0
psutil ✅ 5.9.5
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.14>=2.0.0
```
### Minimal Reproducible Example
```python
from ultralytics import YOLO  # import added for completeness
model = YOLO("yolo12n-pose.pt")
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-02-21T06:45:23Z | 2025-02-24T07:05:37Z | https://github.com/ultralytics/ultralytics/issues/19352 | [
"segment",
"pose"
] | bhomik749 | 9 |
pyqtgraph/pyqtgraph | numpy | 2,931 | Can't add menu item to non-removable ROI | ### Short description
I'd like to be able to add a context menu item to an ROI object; in my specific case, so that I can add a change-colour action. I expected to be able to add a new item to the existing menu, so that I get the nice benefits of the hierarchical menus. But the ROI menu is tied to it being `removable`.
### Code to reproduce
So I would like this to give me a menu with "Test", but not "Remove ROI"
```python
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtWidgets
app = pg.mkQApp("ImageView Example")
win = pg.GraphicsView()
view = pg.ViewBox()
win.setCentralItem(view)
img = pg.ImageItem(np.zeros((100,100)))
view.addItem(img)
win.show()
roi = pg.ROI((10,10), (50,50), removable=False)
view.addItem(roi)
menu = roi.getMenu()
menu.addAction("Test")
if __name__ == '__main__':
pg.exec()
```
If I set `removable=True` I get

otherwise I get no context menu for the ROI
### Expected behavior
Context menu with new item
### Real behavior
No context menu
### Tested environment(s)
* PyQtGraph version: Current master '0.13.4.dev0'
* Qt Python binding: PyQt5 5.15.10 Qt 5.15.2
* Python version: 3.12.1
* NumPy version: 1.26.4
* Operating system: Fedora 39
* Installation method: running from git, also with conda
### Additional context
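A possible workaround sketch, reusing `pg` and `view` from the example above (it relies on Qt's `QMenu` API and on the menu entry being labelled "Remove ROI", so treat it as illustrative rather than an official approach):
```python
# Construct the ROI as removable just so getMenu() builds a menu,
# then strip the "Remove ROI" action before adding custom entries.
roi = pg.ROI((10, 10), (50, 50), removable=True)
view.addItem(roi)

menu = roi.getMenu()
for action in menu.actions():
    if action.text() == "Remove ROI":
        menu.removeAction(action)

menu.addAction("Test")
```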
| closed | 2024-02-07T20:14:59Z | 2024-02-17T14:14:17Z | https://github.com/pyqtgraph/pyqtgraph/issues/2931 | [] | samtygier | 0 |
flasgger/flasgger | api | 103 | Flasgger doesn't support basic auth | Following is my spec:
```
"""
This is the user status API
Call this api passing a linkedin user email and get back their status
---
swagger: "2.0"
tags:
- User Status API
securityDefinitions:
basicAuth:
type: basic
parameters:
- name: user_email
in: path
type: string
required: true
description: The user's email
security:
- basicAuth: []
responses:
500:
description: Exception occurred during processing request
400:
description: No status found for user
200:
description: User's status
schema:
id: user_email
properties:
user:
type: string
description: The User's email
default: xxxx@abc.com
user_status:
type: string
description: The user'status
default: completed
"""
```
Everything works fine except the basic auth

| closed | 2017-05-18T16:28:26Z | 2020-03-11T20:08:33Z | https://github.com/flasgger/flasgger/issues/103 | [
"help wanted",
"hacktoberfest"
] | Anshul21 | 3 |
InstaPy/InstaPy | automation | 6,119 | Fix login issue when running by accepting cookie and changing login URL | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
Login and start running quickstart program
## Current Behavior
Cannot log in due to the cookie banner and updated URL
## Possible Solution (optional)
Change the URL and call the accept cookie function after navigation https://github.com/timgrossmann/InstaPy/pull/6118
## InstaPy configuration
https://github.com/timgrossmann/InstaPy/pull/6118
| closed | 2021-03-14T03:34:34Z | 2021-07-21T04:18:58Z | https://github.com/InstaPy/InstaPy/issues/6119 | [
"wontfix"
] | MoududAbu | 1 |
JaidedAI/EasyOCR | deep-learning | 832 | deform_conv_cuda not found in deform_conv |  | closed | 2022-08-25T13:04:08Z | 2024-07-04T08:42:02Z | https://github.com/JaidedAI/EasyOCR/issues/832 | [] | Mahmuod1 | 9 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 8 | training data | any instructions on training on custom data? | closed | 2020-09-20T07:32:14Z | 2021-02-04T05:01:33Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/8 | [] | lucasjinreal | 3 |
hbldh/bleak | asyncio | 1,484 | Cannot identify how to fetch an updated device name after connection | * bleak version: 0.21.1
* Python version: 3.10.12
* Operating System: Ubuntu 22.04.3 Linux 5.15.0-91-generic
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.64
### Description
I am working with a device that advertises under one device name, but upon successful connection and protocol negotiation it changes its device name to another name.
The mobile app I'm trying to emulate appears to make a second request to the Generic Access Service (0x1800) for Device Name (0x2a00). Unfortunately bleak does not appear to expose that service so I cannot make my own requests.
Per closed issue #250 I believe it was said that this data was wrapped up in the BLEDevice, but I am not sure how to get the updated device name once I have connected.
My debug logs show that DBus picks up the property change, but I do not know how to fetch the updated property.
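A hedged sketch of the intent is to re-read the GAP Device Name characteristic (0x2A00) after connecting; `read_gatt_char` is real bleak API, but as noted above the backend may not expose the Generic Access service, so this may simply raise:
```python
# Sketch: attempt to read the standard Device Name characteristic (0x2A00)
# once connected; expect an error if the backend hides the GAP service.
DEVICE_NAME_CHAR = "00002a00-0000-1000-8000-00805f9b34fb"

async def read_device_name(client):
    raw = await client.read_gatt_char(DEVICE_NAME_CHAR)
    return raw.decode("utf-8", errors="replace")
```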
### What I Did
I am emulating the android application by performing a bluetooth device scan and looking for advertised devices by either a known address or product name.
Once connected, I listen for device updates on its RX characteristic, and perform a connection negotiation by sending a vendor specific command over its TX characteristic.
However the device name never gets updated even though bluez reports the correct change.
At timestamp `2023-12-25 15:18:28` the `device.name` output is `HPA250B`, which is the advertised device name.
At timestamp `2023-12-25 15:18:28,936` in the logs, BlueZ reports the property change to the new name `HEPA 250 JB`.
But I cannot figure out how to reference this updated name in bleak.
```python3
import argparse
import asyncio
import logging
from uuid import UUID
from bleak import BleakClient, BleakScanner
from bleak.backends.characteristic import BleakGATTCharacteristic
logger = logging.getLogger(__name__)
address = "f4:5e:ab:bb:a8:94"
RX_UUID = UUID('{0000ffe4-0000-1000-8000-00805f9b34fb}')
TX_UUID = UUID('{0000ffe9-0000-1000-8000-00805f9b34fb}')
def notification_handler(characteristic: BleakGATTCharacteristic, data: bytearray):
display_data("\n\n\nNotify from device: %s", data)
async def main(args: argparse.Namespace):
logger.info("\n\n\nstarting find...")
device = await BleakScanner.find_device_by_address(address)
logger.info("\n\n\nDevice: %s", device)
logger.info("\n\n\nDevice Details: %s", device.details)
if device == None:
logger.error("Could not find a matching device. Aborting...")
return
logger.info("\n\n\nconnecting to device...")
async with BleakClient(device) as client:
logger.info("\n\n\nConnected to address %s!", client.address)
await client.start_notify(RX_UUID, notification_handler)
mac_cmd = b'MAC+\xf4\x5e\xab\xbb\xa8\x94'
logger.info("\n\n\nSending MAC Command: %s", mac_cmd)
await client.write_gatt_char(TX_UUID, mac_cmd)
await asyncio.sleep(10)
logger.info("\n\n\nDevice: %s", device)
logger.info("\n\n\nDevice Details: %s", device.details)
def display_data(msg: str, data: bytearray):
logger.info(msg, ' '.join(list(map(lambda x: format(x,'02x'), data))))
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-d",
"--debug",
action="store_true",
help="sets the log level to debug"
)
args = parser.parse_args()
log_level = logging.DEBUG if args.debug else logging.INFO
logging.basicConfig(
level = log_level,
format="%(asctime)-15s %(name)-8s %(levelname)s: %(message)s",
)
asyncio.run(main(args))
```
```cmd
% python3 scanner.py -d
2023-12-25 15:18:28,205 asyncio DEBUG: Using selector: EpollSelector
2023-12-25 15:18:28,206 __main__ INFO:
starting find...
2023-12-25 15:18:28,225 bleak.backends.bluezdbus.manager DEBUG: initial properties: {'/org/bluez': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.AgentManager1': {}, 'org.bluez.ProfileManager1': {}}, '/org/bluez/hci0': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.Adapter1': {'Address': '9C:B6:D0:C1:FD:1C', 'AddressType': 'public', 'Name': 'computer', 'Alias': 'computer', 'Class': 8126732, 'Powered': True, 'Discoverable': False, 'DiscoverableTimeout': 60, 'Pairable': True, 'PairableTimeout': 0, 'Discovering': False, 'UUIDs': ['00001133-0000-1000-8000-00805f9b34fb', '0000110e-0000-1000-8000-00805f9b34fb', '00001105-0000-1000-8000-00805f9b34fb', '00001132-0000-1000-8000-00805f9b34fb', '00001200-0000-1000-8000-00805f9b34fb', '00001104-0000-1000-8000-00805f9b34fb', '00005005-0000-1000-8000-0002ee000001', '00001108-0000-1000-8000-00805f9b34fb', '0000110c-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000112f-0000-1000-8000-00805f9b34fb', '0000180a-0000-1000-8000-00805f9b34fb', '0000110b-0000-1000-8000-00805f9b34fb', '00001800-0000-1000-8000-00805f9b34fb', '0000111f-0000-1000-8000-00805f9b34fb', '0000110a-0000-1000-8000-00805f9b34fb', '00001106-0000-1000-8000-00805f9b34fb'], 'Modalias': 'usb:v1D6Bp0246d0540', 'Roles': ['central', 'peripheral']}, 'org.freedesktop.DBus.Properties': {}, 'org.bluez.GattManager1': {}, 'org.bluez.Media1': {}, 'org.bluez.NetworkServer1': {}, 'org.bluez.LEAdvertisingManager1': {'ActiveInstances': 0, 'SupportedInstances': 5, 'SupportedIncludes': ['appearance', 'local-name']}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.Device1': {'Address': 'F4:5E:AB:BB:A8:94', 'AddressType': 'public', 'Name': 'HEPA 250 JB', 'Alias': 'HEPA 250 JB', 'Paired': False, 'Trusted': True, 'Blocked': False, 'LegacyPairing': False, 'Connected': False, 'UUIDs': ['00001800-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000180a-0000-1000-8000-00805f9b34fb', '0000fff0-0000-1000-8000-00805f9b34fb'], 'Modalias': 'bluetooth:v000Dp0000d0110', 'Adapter': '/org/bluez/hci0', 'ServicesResolved': False}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattService1': {'UUID': '0000fff0-0000-1000-8000-00805f9b34fb', 'Device': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94', 'Primary': True, 'Includes': []}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0031': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '0000fff5-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023', 'Value': bytearray(b'\x05\x00\x00\x00\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0031/desc0033': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002901-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0031', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '0000ffe4-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023', 'Value': bytearray(b'\xa5\x00\x00\x00 \x00\x00\x91@\x86@\x00\x00\t\x00\x17;\x00\x00'), 'Notifying': False, 'Flags': ['read', 
'notify'], 'NotifyAcquired': False}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d/desc0030': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002901-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d/desc002f': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002902-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002a': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '0000fff3-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023', 'Value': bytearray(b''), 'Flags': ['write']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002a/desc002c': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002901-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002a', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0027': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '0000fff2-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023', 'Value': bytearray(b'\x02'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0027/desc0029': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002901-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0027', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0024': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '0000ffe9-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023', 'Value': bytearray(b'\xa5\x00\x00-\x06Q\x00F\x00\x00\x00\t\x00\x17;\x00\x00d\x00\x00'), 'Flags': ['read', 'write']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0024/desc0026': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002901-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0024', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattService1': {'UUID': '0000180a-0000-1000-8000-00805f9b34fb', 'Device': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94', 'Primary': True, 'Includes': []}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0021': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a50-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'\x01\r\x00\x00\x00\x10\x01'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, 
'/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char001f': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a2a-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'\xfe\x00experimental'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char001d': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a29-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Manufacturer Name\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char001b': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a28-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Software Revision\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0019': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a27-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Hardware Revision\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0017': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a26-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Firmware Revision\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0015': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a25-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Serial Number\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0013': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a24-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'Model Number\x00'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010/char0011': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a23-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0010', 'Value': bytearray(b'\x94\xa8\xbb\x00\x00\xab^\xf4'), 'Flags': ['read']}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service000c': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattService1': {'UUID': '00001801-0000-1000-8000-00805f9b34fb', 'Device': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94', 'Primary': True, 'Includes': []}, 'org.freedesktop.DBus.Properties': {}}, '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service000c/char000d': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattCharacteristic1': {'UUID': '00002a05-0000-1000-8000-00805f9b34fb', 'Service': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service000c', 'Value': bytearray(b''), 'Notifying': False, 'Flags': ['indicate']}, 'org.freedesktop.DBus.Properties': {}}, 
'/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service000c/char000d/desc000f': {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.GattDescriptor1': {'UUID': '00002902-0000-1000-8000-00805f9b34fb', 'Characteristic': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service000c/char000d', 'Value': bytearray(b'')}, 'org.freedesktop.DBus.Properties': {}}}
2023-12-25 15:18:28,232 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0): ['org.bluez.Adapter1', {'Discovering': <dbus_fast.signature.Variant ('b', True)>}, []]
2023-12-25 15:18:28,353 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.ObjectManager.InterfacesAdded (/): ['/org/bluez/hci0/dev_53_56_14_AA_5E_24', {'org.freedesktop.DBus.Introspectable': {}, 'org.bluez.Device1': {'Address': <dbus_fast.signature.Variant ('s', 53:56:14:AA:5E:24)>, 'AddressType': <dbus_fast.signature.Variant ('s', random)>, 'Alias': <dbus_fast.signature.Variant ('s', 53-56-14-AA-5E-24)>, 'Paired': <dbus_fast.signature.Variant ('b', False)>, 'Trusted': <dbus_fast.signature.Variant ('b', False)>, 'Blocked': <dbus_fast.signature.Variant ('b', False)>, 'LegacyPairing': <dbus_fast.signature.Variant ('b', False)>, 'RSSI': <dbus_fast.signature.Variant ('n', -69)>, 'Connected': <dbus_fast.signature.Variant ('b', False)>, 'UUIDs': <dbus_fast.signature.Variant ('as', ['0000fe9f-0000-1000-8000-00805f9b34fb'])>, 'Adapter': <dbus_fast.signature.Variant ('o', /org/bluez/hci0)>, 'ServiceData': <dbus_fast.signature.Variant ('a{sv}', {'0000fe9f-0000-1000-8000-00805f9b34fb': <dbus_fast.signature.Variant ('ay', bytearray(b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'))>})>, 'ServicesResolved': <dbus_fast.signature.Variant ('b', False)>}, 'org.freedesktop.DBus.Properties': {}}]
2023-12-25 15:18:28,359 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'RSSI': <dbus_fast.signature.Variant ('n', -48)>, 'TxPower': <dbus_fast.signature.Variant ('n', 0)>, 'Name': <dbus_fast.signature.Variant ('s', HPA250B)>, 'Alias': <dbus_fast.signature.Variant ('s', HPA250B)>}, []]
2023-12-25 15:18:28,363 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_53_56_14_AA_5E_24): ['org.bluez.Device1', {}, ['RSSI']]
2023-12-25 15:18:28,363 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.ObjectManager.InterfacesRemoved (/): ['/org/bluez/hci0/dev_53_56_14_AA_5E_24', ['org.freedesktop.DBus.Properties', 'org.freedesktop.DBus.Introspectable', 'org.bluez.Device1']]
2023-12-25 15:18:28,363 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {}, ['TxPower', 'RSSI']]
2023-12-25 15:18:28,364 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0): ['org.bluez.Adapter1', {'Discovering': <dbus_fast.signature.Variant ('b', False)>}, []]
2023-12-25 15:18:28,364 __main__ INFO:
Device: F4:5E:AB:BB:A8:94: HPA250B
2023-12-25 15:18:28,364 __main__ INFO:
Device Details: {'path': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94', 'props': {'Address': 'F4:5E:AB:BB:A8:94', 'AddressType': 'public', 'Name': 'HPA250B', 'Alias': 'HPA250B', 'Paired': False, 'Trusted': True, 'Blocked': False, 'LegacyPairing': False, 'Connected': False, 'UUIDs': ['00001800-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000180a-0000-1000-8000-00805f9b34fb', '0000fff0-0000-1000-8000-00805f9b34fb'], 'Modalias': 'bluetooth:v000Dp0000d0110', 'Adapter': '/org/bluez/hci0', 'ServicesResolved': False, 'RSSI': -48, 'TxPower': 0}}
2023-12-25 15:18:28,364 __main__ INFO:
connecting to device...
2023-12-25 15:18:28,365 bleak.backends.bluezdbus.client DEBUG: Connecting to device @ F4:5E:AB:BB:A8:94
2023-12-25 15:18:28,368 bleak.backends.bluezdbus.client DEBUG: Connecting to BlueZ path /org/bluez/hci0/dev_F4_5E_AB_BB_A8_94
2023-12-25 15:18:28,670 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', True)>}, []]
2023-12-25 15:18:28,936 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'Name': <dbus_fast.signature.Variant ('s', HEPA 250 JB)>, 'Alias': <dbus_fast.signature.Variant ('s', HEPA 250 JB)>}, []]
2023-12-25 15:18:29,393 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', True)>}, []]
2023-12-25 15:18:29,394 __main__ INFO:
Connected to address F4:5E:AB:BB:A8:94!
2023-12-25 15:18:29,560 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d): ['org.bluez.GattCharacteristic1', {'Notifying': <dbus_fast.signature.Variant ('b', True)>}, []]
2023-12-25 15:18:29,561 __main__ INFO:
Sending MAC Command: b'MAC+\xf4^\xab\xbb\xa8\x94'
2023-12-25 15:18:29,649 bleak.backends.bluezdbus.client DEBUG: Write Characteristic 0000ffe9-0000-1000-8000-00805f9b34fb | /org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char0024: b'MAC+\xf4^\xab\xbb\xa8\x94'
2023-12-25 15:18:31,180 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94/service0023/char002d): ['org.bluez.GattCharacteristic1', {'Value': <dbus_fast.signature.Variant ('ay', bytearray(b'\xa5\x00\x00\x00 \x00\x00\x91@\x86@\x00\x00\t\x00\x17;\x00\x00'))>}, []]
2023-12-25 15:18:31,181 __main__ INFO:
Notify from device: a5 00 00 00 20 00 00 91 40 86 40 00 00 09 00 17 3b 00 00
2023-12-25 15:18:39,658 __main__ INFO:
Device: F4:5E:AB:BB:A8:94: HPA250B
2023-12-25 15:18:39,658 __main__ INFO:
Device Details: {'path': '/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94', 'props': {'Address': 'F4:5E:AB:BB:A8:94', 'AddressType': 'public', 'Name': 'HPA250B', 'Alias': 'HPA250B', 'Paired': False, 'Trusted': True, 'Blocked': False, 'LegacyPairing': False, 'Connected': False, 'UUIDs': ['00001800-0000-1000-8000-00805f9b34fb', '00001801-0000-1000-8000-00805f9b34fb', '0000180a-0000-1000-8000-00805f9b34fb', '0000fff0-0000-1000-8000-00805f9b34fb'], 'Modalias': 'bluetooth:v000Dp0000d0110', 'Adapter': '/org/bluez/hci0', 'ServicesResolved': False, 'RSSI': -48, 'TxPower': 0}}
2023-12-25 15:18:39,659 bleak.backends.bluezdbus.client DEBUG: Disconnecting ({/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94})
2023-12-25 15:18:42,392 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'ServicesResolved': <dbus_fast.signature.Variant ('b', False)>}, []]
2023-12-25 15:18:42,393 bleak.backends.bluezdbus.manager DEBUG: received D-Bus signal: org.freedesktop.DBus.Properties.PropertiesChanged (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94): ['org.bluez.Device1', {'Connected': <dbus_fast.signature.Variant ('b', False)>}, []]
2023-12-25 15:18:42,393 bleak.backends.bluezdbus.client DEBUG: Device disconnected (/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94)
2023-12-25 15:18:42,393 bleak.backends.bluezdbus.client DEBUG: _cleanup_all(/org/bluez/hci0/dev_F4_5E_AB_BB_A8_94)
```
### Logs
See above program output for debug logs.
In particular timestamp `2023-12-25 15:18:28` for the advertised device name and timestamp `2023-12-25 15:18:28,936` for the bluez property update debug log.
| closed | 2023-12-25T20:41:56Z | 2023-12-29T04:41:15Z | https://github.com/hbldh/bleak/issues/1484 | [] | coderjoe | 4 |
jacobgil/pytorch-grad-cam | computer-vision | 221 | SSL Grad-cam | Hello @jacobgil
Thanks a lot for these awesome repositories :)
Is it possible to access the ViT backbone of a pretrained DINO network and then explain it with grad-cam methods?
Probably by accessing specific layers, etc. Kindly help with how exactly it could be done?
Best Regards,
@jaiswati
| closed | 2022-03-31T18:25:58Z | 2022-04-01T14:01:00Z | https://github.com/jacobgil/pytorch-grad-cam/issues/221 | [] | jaiswati | 1 |
graphql-python/graphene-django | django | 881 | OrderedDjangoFilterConnectionField: "connection_resolver() missing 1 required positional argument: 'info'" | Testing filters using the OrderedDjangoFilterConnectionField solution in this issue; what should resolve_all_images be in this example? I'm getting `connection_resolver() missing 1 required positional argument: 'info'` in the GraphQL Explorer in its current form.
https://stackoverflow.com/questions/57478464/django-graphene-relay-order-by-orderingfilter
```
class OrderedDjangoFilterConnectionField(DjangoFilterConnectionField):
"""
Adapted from https://github.com/graphql-python/graphene/issues/251
Substituting:
`claims = DjangoFilterConnectionField(ClaimsGraphQLType)`
with:
```
claims = OrderedDjangoFilterConnectionField(ClaimsGraphQLType,
orderBy=graphene.List(of_type=graphene.String))
```
"""
@classmethod
def connection_resolver(cls, resolver, connection, default_manager, max_limit,
enforce_first_or_last, filterset_class, filtering_args,
root, info,
**args):
print("info", info)
print("root", root)
print("args", args)
filter_kwargs = {k: v for k, v in args.items() if k in filtering_args}
qs = filterset_class(
data=filter_kwargs,
queryset=default_manager.get_queryset()
# request=info.context
).qs
order = args.get('orderBy', None)
if order:
qs = qs.order_by(*order)
# if order:
# if type(order) is str:
# snake_order = to_snake_case(order)
# else:
# snake_order = [to_snake_case(o) for o in order]
# qs = qs.order_by(*snake_order)
return super(DjangoFilterConnectionField, cls).connection_resolver(
resolver,
connection,
qs,
max_limit,
enforce_first_or_last,
root,
info,
**args
)
class ExtendedConnection(graphene.Connection):
class Meta:
abstract = True
total_count = graphene.Int()
def resolve_total_count(root, info, **kwargs):
return root.length
class ImageType(DjangoObjectType):
class Meta:
model = Image
exclude_fields = ('id',)
filter_fields = []
interfaces = (graphene.relay.Node,)
connection_class = ExtendedConnection
class Query(graphene.ObjectType):
all_Videos = OrderedDjangoFilterConnectionField(VideoType,
orderBy=graphene.List(of_type=graphene.String))
def resolve_all_Images(self, info, **args):
qs = Images.objects.all()
order = args.get('orderBy', None)
if order:
qs = qs.order_by(*order)
return qs
return Images.objects.none()
```
| open | 2020-02-21T12:40:18Z | 2020-08-27T00:45:57Z | https://github.com/graphql-python/graphene-django/issues/881 | [
"wontfix"
] | SillyScribe95 | 5 |
google/trax | numpy | 1,467 | [Question] [Request] Tutorial for implementing TensorBoard | Is there a tutorial or similar out there that describes how to implement TensorBoard with Trax? Similar to what exists for Tensorboard with tf or pytorch.
I have noticed there are some files distributed across the trax GitHub account ([callbacks](https://github.com/google/trax/blob/5b08d66a4e69cccbab5868697b207a8b71caa890/trax/supervised/callbacks.py#L58), [history](https://github.com/google/trax/blob/5b08d66a4e69cccbab5868697b207a8b71caa890/trax/supervised/history.py), [jaxboard](https://github.com/google/trax/blob/5b08d66a4e69cccbab5868697b207a8b71caa890/trax/jaxboard.py)) but haven't found any beginner-friendly docs.
huggingface/datasets | nlp | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback. | open | 2025-01-16T05:47:20Z | 2025-01-16T05:47:20Z | https://github.com/huggingface/datasets/issues/7372 | [] | gaohongkui | 0 |
flasgger/flasgger | flask | 360 | Flasgger execution is a success but not showing the output XML body | So I am trying to deploy an IRIS model through flasgger, but when I hit the execute button under the API DOC GUI, I do not see any output where I am expecting an XML output, although I can see the HTTP/200 OK successful response in my IPython notebook. So I am trying to understand what I am doing wrong here. Can somebody please assist me with this problem? I have been stuck for almost a month now. Any suggestion would be deeply appreciated!
<img width="783" alt="flasgger_APIDOCS" src="https://user-images.githubusercontent.com/42491841/73424366-a3a47780-4354-11ea-8e8f-85da7da55d0c.PNG">
| open | 2020-01-30T06:05:47Z | 2020-05-12T13:37:21Z | https://github.com/flasgger/flasgger/issues/360 | [] | akudnaver | 2 |
pytest-dev/pytest-xdist | pytest | 922 | Pytest xdist library is crashing | I am trying to execute my tests in parallel using the xdist library, but after 10-15 tests the library crashes and execution stops.
Below is the error I got:
```
`dkStaging.php","userkeywords": "atn:vc_w:300_h:250","allowAVAudioSessionAccess": "1"}-1]
INTERNALERROR> def worker_internal_error(self, node, formatted_error):
INTERNALERROR> """
INTERNALERROR> pytest_internalerror() was called on the worker.
INTERNALERROR>
INTERNALERROR> pytest_internalerror() arguments are an excinfo and an excrepr, which can't
INTERNALERROR> be serialized, so we go with a poor man's solution of raising an exception
INTERNALERROR> here ourselves using the formatted message.
INTERNALERROR> """
INTERNALERROR> self._active_nodes.remove(node)
INTERNALERROR> try:
INTERNALERROR> > assert False, formatted_error
INTERNALERROR> E AssertionError: Traceback (most recent call last):
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR> E session.exitstatus = doit(config, session) or 0
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/main.py", line 323, in _main
INTERNALERROR> E config.hook.pytest_runtestloop(session=session)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> E return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> E return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> E self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> E return outcome.get_result()
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> E raise ex[1].with_traceback(ex[2])
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> E res = hook_impl.function(*args)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/remote.py", line 112, in pytest_runtestloop
INTERNALERROR> E self.run_one_test(torun)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/remote.py", line 131, in run_one_test
INTERNALERROR> E self.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> E return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> E return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> E self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> E return outcome.get_result()
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> E raise ex[1].with_traceback(ex[2])
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> E res = hook_impl.function(*args)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/runner.py", line 109, in pytest_runtest_protocol
INTERNALERROR> E runtestprotocol(item, nextitem=nextitem)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/runner.py", line 120, in runtestprotocol
INTERNALERROR> E rep = call_and_report(item, "setup", log)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/runner.py", line 219, in call_and_report
INTERNALERROR> E hook.pytest_runtest_logreport(report=report)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> E return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> E return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> E self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> E return outcome.get_result()
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> E raise ex[1].with_traceback(ex[2])
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> E res = hook_impl.function(*args)
INTERNALERROR> E File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/remote.py", line 183, in pytest_runtest_logreport
INTERNALERROR> E assert self.session.items[self.item_index].nodeid == report.nodeid
INTERNALERROR> E AssertionError
INTERNALERROR> E assert False
INTERNALERROR>
INTERNALERROR> venv/lib/python3.9/site-packages/xdist/dsession.py:190: AssertionError
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/_pytest/main.py", line 323, in _main
INTERNALERROR> config.hook.pytest_runtestloop(session=session)
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/hooks.py", line 286, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 93, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/manager.py", line 84, in <lambda>
INTERNALERROR> self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 80, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/dsession.py", line 115, in pytest_runtestloop
INTERNALERROR> self.loop_once()
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/dsession.py", line 138, in loop_once
INTERNALERROR> call(**kwargs)
INTERNALERROR> File "/Users/mayur/GitRepo/sdk-automation/venv/lib/python3.9/site-packages/xdist/dsession.py", line 177, in worker_workerfinished
INTERNALERROR> assert not crashitem, (crashitem, node)
INTERNALERROR> AssertionError: ('TestScripts/MRAID/test_audio_volume_change_event.py::TestAudioVolumeChange::test_allowAVAudioSessionAccess[banner-{"...php/sdkStaging.php","userkeywords": "atn:vc_w:300_h:250","allowAVAudioSessionAccess": "1"}-1]', <WorkerController gw1>)
INTERNALERROR> assert not 'TestScripts/MRAID/test_audio_volume_change_event.py::TestAudioVolumeChange::test_allowAVAudioSessionAccess[banner-{"a...ms.pubmatic.com:8443/sdk/php/sdkStaging.php","userkeywords": "atn:vc_w:300_h:250","allowAVAudioSessionAccess": "1"}-1]'` | closed | 2023-06-23T09:48:58Z | 2024-06-26T11:05:05Z | https://github.com/pytest-dev/pytest-xdist/issues/922 | [] | Mayur5712 | 2 |
piskvorky/gensim | data-science | 2,920 | Gensim's word2vec has a loss of 0 from epoch 1? | I am using the Word2vec module of the Gensim library to train a word embedding; the dataset is 400k sentences with 100k unique words (it's not English).
I'm using this code to monitor and calculate the loss:
```
import gensim
from gensim.models.callbacks import CallbackAny2Vec

class MonitorCallback(CallbackAny2Vec):
def __init__(self, test_words):
self._test_words = test_words
def on_epoch_end(self, model):
print("Model loss:", model.get_latest_training_loss()) # print loss
for word in self._test_words: # show wv logic changes
print(model.wv.most_similar(word))
monitor = MonitorCallback(["MyWord"]) # monitor with demo words
w2v_model = gensim.models.word2vec.Word2Vec(size=W2V_SIZE, window=W2V_WINDOW, min_count=W2V_MIN_COUNT , callbacks=[monitor])
w2v_model.build_vocab(tokenized_corpus)
words = w2v_model.wv.vocab.keys()
vocab_size = len(words)
print("Vocab size", vocab_size)
print("[*] Training...")
w2v_model.train(tokenized_corpus, total_examples=len(tokenized_corpus), epochs=W2V_EPOCH)
```
The problem is that from epoch 1 the loss is 0 and the vectors of the monitored words don't change at all!
[*] Training...
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
Model loss: 0.0
So what is the problem here? Is this normal? The tokenized corpus is a list of lists, something like tokenized_corpus[0] = ["word1", "word2", ...]
I googled and it seems like some older versions of gensim had a problem with calculating the loss function, but those reports are from almost a year ago, so it seems like it should be fixed by now?
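For reference, here is a minimal sketch of what I understand the constructor would need for loss tracking; the `compute_loss=True` flag is my assumption about what might be missing, not something I have confirmed:
```
# Minimal sketch, assuming loss tracking has to be switched on explicitly.
w2v_model = gensim.models.word2vec.Word2Vec(
    size=W2V_SIZE,
    window=W2V_WINDOW,
    min_count=W2V_MIN_COUNT,
    compute_loss=True,   # assumption: without this, get_latest_training_loss() stays at 0.0
    callbacks=[monitor],
)
```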
I tried the code provided in the answer to this question as well, but the loss is still 0:
https://stackoverflow.com/questions/52038651/loss-does-not-decrease-during-training-word2vec-gensim | closed | 2020-08-20T17:07:34Z | 2020-08-20T17:37:48Z | https://github.com/piskvorky/gensim/issues/2920 | [] | LusKrew | 1 |
Textualize/rich | python | 2,395 | [BUG] Inconsistency across spinners | Some spinners, notably the ones built using emojis, include a trailing space. An extra space is inserted between the spinner and the spinner's text. I would argue that this is one space too much.
This difference is easy to see when running `python -m rich.spinner`. | closed | 2022-07-14T18:30:29Z | 2022-07-14T18:53:07Z | https://github.com/Textualize/rich/issues/2395 | [
"Needs triage"
] | brechtm | 6 |
igorbenav/fastcrud | pydantic | 27 | Preventing Duplicate Table Names in SQL Queries Using SQLAlchemy | Thank you for the recent update! We appreciate the enhancements and improvements made. It's not critical, but I think it's worth discussing.
**Describe the bug or question**
The problem is an error caused by duplicate table names when writing SQL queries using SQLAlchemy. It occurs when the query is ambiguous because the same table is joined multiple times without explicitly specifying aliases; as a result, the query cannot be processed correctly and execution fails.
**To Reproduce**
```python
booking_join_config = [
JoinConfig(
model=models.UserModel,
join_on=models.Booking.owner_id == models.UserModel.id,
join_prefix="owner_",
schema_to_select=schemas.UserBase,
join_type="inner",
),
JoinConfig(
model=models.UserModel,
join_on=models.Booking.user_id == models.UserModel.id,
join_prefix="user_",
schema_to_select=schemas.UserBase,
join_type="inner",
),
]
```
**Description**
Output: table name "users" specified more than once
```SQL
FROM bookings
JOIN users ON bookings.owner_id = users.id
JOIN users ON bookings.user_id = users.id
```
**Additional context**
The expected query would disambiguate the repeated table with aliases, for example:
```SQL
FROM bookings
JOIN users AS owner_users ON bookings.owner_id = owner_users.id
JOIN users AS user_users ON bookings.user_id = user_users.id
```
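For reference, this is how I would express that aliasing in plain SQLAlchemy; this is not FastCRUD's API, and whether `JoinConfig` can express an alias like this is exactly what I am asking about:
```python
from sqlalchemy import select
from sqlalchemy.orm import aliased

# Sketch only: alias the users table once per join so the names no longer collide.
OwnerUser = aliased(models.UserModel, name="owner_users")
BookedUser = aliased(models.UserModel, name="user_users")

stmt = (
    select(models.Booking, OwnerUser, BookedUser)
    .join(OwnerUser, models.Booking.owner_id == OwnerUser.id)
    .join(BookedUser, models.Booking.user_id == BookedUser.id)
)
```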
| closed | 2024-03-15T10:37:53Z | 2024-03-19T04:47:03Z | https://github.com/igorbenav/fastcrud/issues/27 | [
"bug"
] | neatek | 3 |
MagicStack/asyncpg | asyncio | 205 | What happens if an inactive connection that is stored in the pool gets closed by db? | Hello!
I was tracing 'connection closed' errors in my application (my guess is that there are network issues) and found out that if there is an inactive connection stored in the pool and this connection gets closed externally, pool.acquire would return a closed connection.
I propose handling this case [here](https://github.com/MagicStack/asyncpg/blob/master/asyncpg/pool.py#L146): `if self._con is None or self._con.is_closed(): ...`. | closed | 2017-10-06T11:03:09Z | 2017-10-10T16:43:21Z | https://github.com/MagicStack/asyncpg/issues/205 | [] | AmatanHead | 1 |
dgtlmoon/changedetection.io | web-scraping | 1,661 | Documentation and fix for error "Re-stock detection requires Chrome or compatible webdriver/playwright fetcher to work" | The re-stock option causes an error notification and log entries. There is currently no documentation to be found about this error, and it is clear it requires Chrome or a compatible webdriver/Playwright fetcher to solve. However, having no documentation about it on the changedetection.io GitHub makes it difficult to find out what, where, and how to solve this.
From the logs:
```
ERROR:changedetectionio:Exception reached processing watch UUID: xxx - Re-stock detection requires Chrome or compatible webdriver/playwright fetcher to work
```
**Version**
Using docker image: `dgtlmoon/changedetection.io:0.43.1`
**To Reproduce**
Steps to reproduce the behavior:
1. Add page with re-stock detection
2. Run page checker.
3. Get error.
**Expected behavior**
I expect there to be documented instructions somewhere that help you solve this issue, or a more descriptive error message that tells me where I can find this information (a link to the docs, for example).
Better yet, no error at all: automatically attempt to use the WebDriver config instead of asking you to manually change to WebDriver under Request options / Fetching options.
**Desired outcome from this issue**:
Knowing how to solve this. What drivers to install, how and where to install them. This way someone can search for this error and find a solution. Better yet, add docs for it. | open | 2023-06-30T14:59:12Z | 2023-07-05T16:10:03Z | https://github.com/dgtlmoon/changedetection.io/issues/1661 | [
"enhancement"
] | ivanskodje | 2 |
axnsan12/drf-yasg | django | 40 | Schema default value not used | Trying to set a default value for a Schema inside of a Parameter. It does not seem to be getting picked up in Swagger when I use the "Try it out" functionality.
My schema looks like this:
```
@swagger_auto_schema(
operation_id="Get a mission plan",
responses={
200: 'test'},
manual_parameters=[
openapi.Parameter(
name='Organization',
in_=openapi.IN_HEADER,
description="Organization name",
required=True,
schema=openapi.Schema(
type=openapi.TYPE_STRING,
default="Organization 1",
)
)
]
)
```
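For comparison, a sketch of setting the default directly on the parameter rather than on a nested schema; whether drf-yasg accepts and forwards `default` for a header parameter this way is an assumption on my part:
```python
openapi.Parameter(
    name='Organization',
    in_=openapi.IN_HEADER,
    description="Organization name",
    required=True,
    type=openapi.TYPE_STRING,
    default="Organization 1",  # assumption: emitted as the Swagger 2.0 'default' field
)
```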
Perhaps providing a more in depth example in testproj, that uses more of the OpenApi parameters, would be helpful. From what I read [here](https://swagger.io/docs/specification/adding-examples/) it seems the example field should be used rather than default, but it does not seem to be an available field in the Schema class? | closed | 2018-01-11T10:13:49Z | 2018-01-12T02:37:29Z | https://github.com/axnsan12/drf-yasg/issues/40 | [] | arkadyark | 4 |
Morizeyao/GPT2-Chinese | nlp | 98 | How to improve the diversity of generated text | ### Question:
1. I used the prose pre-trained model and fine-tuned it on my own data (20,000 samples) for 5 epochs; training finished with loss: 0.08, and the predictions completely fit my training data. I would like to improve the diversity of the predicted output (for example, vocabulary or sentence structures outside my training data). How should I improve this? Is my training overfitting? (A generic sampling sketch related to this is below.)
2. With loss = 0.08, what is the loss function used during fine-tuning?
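Regarding question 1, my understanding is that diversity is usually controlled at generation time through sampling (temperature / top-k / top-p) rather than through the training loss. A generic sketch of what I mean (function name and defaults are illustrative, not taken from GPT2-Chinese):
```python
import torch

def sample_next_token(logits, temperature=1.0, top_k=0):
    # Higher temperature flattens the distribution; top_k keeps only the k best logits.
    logits = logits / max(temperature, 1e-8)
    if top_k > 0:
        kth_best = torch.topk(logits, top_k).values[..., -1, None]
        logits = logits.masked_fill(logits < kth_best, float("-inf"))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```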
### Looking forward to some guidance, best wishes! | closed | 2019-11-12T02:57:11Z | 2022-11-14T05:18:36Z | https://github.com/Morizeyao/GPT2-Chinese/issues/98 | [] | dkicenan | 5 |
huggingface/transformers | machine-learning | 36,187 | Recent Qwen2VL merge request (#35837) break compatibility with DeepSpeed | The recent merge request (#35837) works with accelerate but breaks with DeepSpeed (w/ and w/o deepspeed config)
- distributed_type: MULTI_GPU (work)
- distributed_type: DEEPSPEED (no longer works)
To be more precise the issue lies in this section: https://github.com/huggingface/transformers/blob/main/src/transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py#L200
```
emb = torch.cat((rotary_pos_emb, rotary_pos_emb), dim=-1)
cos = emb.cos().float()
sin = emb.sin().float()
else:
cos, sin = position_embeddings
q, k = apply_rotary_pos_emb_flashatt(q.unsqueeze(0), k.unsqueeze(0), cos, sin)
```
With `cos, sin = position_embeddings`, these are not cast to float and are subject to various dtypes depending on the DeepSpeed and mixed_precision config.
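A sketch of the symmetric cast I would expect on that branch (this is only my assumption of a fix, not necessarily how upstream resolved it):
```python
# Sketch: mirror the .float() cast of the other branch on the cached-embeddings path.
cos, sin = position_embeddings
cos, sin = cos.float(), sin.float()
q, k = apply_rotary_pos_emb_flashatt(q.unsqueeze(0), k.unsqueeze(0), cos, sin)
```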
This accelerate config works:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: MULTI_GPU
downcast_bf16: 'no'
enable_cpu_affinity: #false
main_training_function: main
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
mixed_precision: bf16
```
This accelerate config no longer works:
```
compute_environment: LOCAL_MACHINE
debug: false
distributed_type: DEEPSPEED
deepspeed_config:
zero_stage: 3
downcast_bf16: 'no'
enable_cpu_affinity: false
main_training_function: main
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
```
| closed | 2025-02-14T00:25:37Z | 2025-02-18T19:30:12Z | https://github.com/huggingface/transformers/issues/36187 | [] | ArdalanM | 3 |
gradio-app/gradio | deep-learning | 9,963 | Transparency Settings Not Applying Consistently in Dark Mode | ### Describe the bug
Transparency settings for background elements in dark mode are not applied consistently across all component blocks in Gradio. Specific settings, such as `block_background_fill_dark` and `checkbox_background_color_dark`, fail to apply transparency in dark mode. **This issue does not occur in light mode, where transparency settings apply uniformly across all blocks.**
## Steps to Reproduce
1. Define a custom theme with transparency settings applied to dark mode, including `checkbox_background_color_dark`, as shown in the example code below.
2. Apply the theme to a Gradio interface with various components (textbox, checkbox, image, etc.).
3. Launch the interface and enforce dark mode by navigating to `http://127.0.0.1:7860/?__theme=dark`.
4. Observe the transparency inconsistencies across different blocks and the lack of transparency in the checkbox.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from gradio.themes import colors, sizes, Font, GoogleFont
class ForestOceanTheme(gr.themes.Ocean):
def __init__(
self,
*,
primary_hue: colors.Color | str = colors.Color(
c50="#E6F7E6", c100="#CFF0CF", c200="#A8E0A8", c300="#82D182",
c400="#5BC25B", c500="#34B134", c600="#299229", c700="#1E731E",
c800="#145514", c900="#0A370A", c950="#042704"
),
secondary_hue: colors.Color | str = colors.Color(
c50="#E6F7E6", c100="#A8E0A8", c200="#82D182", c300="#5BC25B",
c400="#34B134", c500="#299229", c600="#1E731E", c700="#145514",
c800="#0A370A", c900="#042704", c950="#001800"
),
neutral_hue: colors.Color | str = colors.zinc,
spacing_size: sizes.Size | str = sizes.spacing_md,
radius_size: sizes.Size | str = sizes.radius_xxl,
text_size: sizes.Size | str = sizes.text_md,
font: Font | str | list[Font | str] = (
GoogleFont("IBM Plex Sans"),
"ui-sans-serif",
"system-ui",
"sans-serif",
),
font_mono: Font | str | list[Font | str] = (
GoogleFont("Inter"),
"ui-monospace",
"Consolas",
"monospace",
),
):
super().__init__(
primary_hue=primary_hue,
secondary_hue=secondary_hue,
neutral_hue=neutral_hue,
spacing_size=spacing_size,
radius_size=radius_size,
text_size=text_size,
font=font,
font_mono=font_mono,
)
# Name the theme for identification
self.name = "forest_ocean_homogeneous_green"
# Set parameters for a subtle green gradient in light mode
super().set(
# More homogeneous background in light mode with subtle green
background_fill_primary="radial-gradient(circle at center, #E0F8E0 10%, #CFEFCF 40%, #D5EDD5 100%)",
# Component box styles with higher contrast and transparency in light mode
background_fill_secondary="rgba(255, 255, 255, 0.95)", # Slightly more opaque for better readability
block_border_color="#888888", # Darker gray for box border
block_border_width="1px",
block_radius="15px", # Rounded corners for a softer look
block_shadow="0 4px 10px rgba(0, 0, 0, 0.15)", # Enhanced shadow for depth
# High contrast for main text and labels
body_text_color="#1A1A1A", # Very dark gray for primary text
body_text_color_subdued="#333333", # Darker gray for subdued text
# Label (title) text for components
block_title_text_color="#000000", # Black for labels to improve contrast
block_title_text_color_dark="#FFFFFF", # White for labels in dark mode
# Input fields
input_background_fill="#FFFFFF", # Pure white for inputs
input_border_color="#555555", # Even darker gray border around input fields
input_border_width="1px",
# Primary button styling for light mode
button_primary_background_fill="linear-gradient(120deg, *primary_300 0%, *primary_400 50%, *primary_500 100%)",
button_primary_text_color="*neutral_50",
button_primary_background_fill_hover="linear-gradient(120deg, *primary_400 0%, *primary_500 60%, *primary_600 100%)",
# Dark mode settings with improved transparency and no green hue
background_fill_primary_dark="radial-gradient(circle at center, #020924 10%, #01071A 50%, #000615 100%)",
background_fill_secondary_dark="rgba(30, 30, 30, 0.2)", # Semi-transparent background for components
block_background_fill_dark="rgba(45, 45, 45, 0.2)", # Darker, more uniform transparent background
panel_background_fill_dark="rgba(45, 45, 55, 0.2)", # Additional transparency for panel-like elements
block_border_color_dark="#666666", # Darker gray border to ensure contrast
block_shadow_dark="0 4px 10px rgba(255, 255, 255, 0.1)", # Softer shadow for dark mode
checkbox_background_color_dark="rgba(30, 30, 30, 0.85)",
# Text and label settings for dark mode
body_text_color_dark="#E0E0E0", # Light gray for body text in dark mode
body_text_color_subdued_dark="#B0B0B0", # Subdued gray for secondary text in dark mode
# Primary button styling for dark mode
button_primary_background_fill_dark="linear-gradient(120deg, *secondary_600 0%, *primary_500 60%, *primary_600 100%)",
button_primary_background_fill_hover_dark="linear-gradient(120deg, *secondary_500 0%, *primary_500 60%, *primary_500 100%)",
button_primary_text_color_dark="*neutral_50",
)
# Define a dummy function to test component interactions
def process_text(text, number, mood, translate):
translation = "Translated text..." if translate else "Translation not selected."
return f"You entered: {text}\nSlider value: {number}\nMood: {mood}\n{translation}"
def process_image(image, brightness):
return image # In a real use case, apply brightness adjustment here
def play_audio_file(audio_file):
return audio_file # Simply returns the audio file for playback
with gr.Blocks(theme=ForestOceanTheme()) as demo:
gr.Markdown("## Comprehensive Web UI Test with Transparency in Dark Mode")
# Text Processing Section
gr.Markdown("### Text Processing")
with gr.Row():
text_input = gr.Textbox(label="Enter some text", placeholder="Type here...")
number_slider = gr.Slider(label="Select a number", minimum=0, maximum=100, value=50)
mood_dropdown = gr.Dropdown(
label="Select your mood",
choices=["Happy", "Sad", "Excited", "Anxious"],
value="Happy"
)
translate_checkbox = gr.Checkbox(label="Translate to another language", value=False)
process_button = gr.Button("Process Text")
output_text = gr.Textbox(label="Output", placeholder="Processed output will appear here")
process_button.click(
fn=process_text,
inputs=[text_input, number_slider, mood_dropdown, translate_checkbox],
outputs=output_text
)
# Image Upload Section
gr.Markdown("### Image Upload")
with gr.Row():
image_input = gr.Image(label="Upload an image", type="pil")
brightness_slider = gr.Slider(label="Adjust Brightness", minimum=0.5, maximum=1.5, value=1.0)
process_image_button = gr.Button("Process Image")
image_output = gr.Image(label="Processed Image")
process_image_button.click(
fn=process_image,
inputs=[image_input, brightness_slider],
outputs=image_output
)
# Audio Upload Section
gr.Markdown("### Audio Upload")
audio_input = gr.Audio(label="Upload an audio file", type="filepath")
play_audio = gr.Button("Play Audio")
audio_output = gr.Audio(label="Playback")
play_audio.click(
fn=play_audio_file,
inputs=audio_input,
outputs=audio_output
)
demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
Gradio Version: 5.5.0
Gradio Client Version: 1.4.2
Operating System: Ubuntu 22.04
Python Version: 3.12.7
Browsers Tested: Chrome, Firefox
```
### Severity
I can work around it | open | 2024-11-15T08:34:31Z | 2024-11-15T08:34:31Z | https://github.com/gradio-app/gradio/issues/9963 | [
"bug"
] | JSchmie | 0 |
onnx/onnx | pytorch | 6,311 | Want to substitute Expand operator with some other operator in ONNX due to compatibility issues with hardware | I have an expand operator in the middle of a model that takes 2 inputs with the following shapes:
Output from sigmoid: 1,256,1,1 (tensor to be expanded)
Shape operator: 1,256,80,80 (output shape expected)
I can't use the Expand, Tile, Constant and Slice operators due to some external issues. I need to substitute this Expand operator with some other operator(s) to get equivalent functionality.
I tried to substitute them with Reshape + Concat as these are well supported for my hardware but I can't exactly figure out how I can modify the model structure using Onnx.helper.
I wanted to reshape it first to 1,256, then concat along the last dimension 80 times to get 1,256,80, and then do the same again in the second-to-last dimension to get 1,256,80,80.
But I don't understand how to modify the node. If the node is like:
```
Input: "701"
input: "900"
output: "703"
axis: -1
name: "custom_added_Concat1"
op_type: "Concat"
```
I tried changing input "900" to ["701"] * 80 but it didn't work, and since I can't create a Constant node, I don't want to add 80 Concat nodes for a single dimension.
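For what it's worth, this is the kind of node construction I was attempting with onnx.helper; the tensor names are the ones quoted above, everything else is an assumption on my part:
```python
from onnx import helper

# Hypothetical sketch: build a fresh Concat node instead of editing fields in place.
concat_node = helper.make_node(
    "Concat",
    inputs=["701"] * 80,   # concatenate the same (reshaped) tensor 80 times
    outputs=["703"],
    name="custom_added_Concat1",
    axis=-1,
)
# then, roughly: graph.node.remove(old_node); graph.node.append(concat_node)
```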
I am very new to Onnx and protobuf and I tried all this from the documentation available but I couldn't find any reference to this. | closed | 2024-08-22T04:15:29Z | 2024-09-06T14:59:24Z | https://github.com/onnx/onnx/issues/6311 | [
"question"
] | AkshatDogra | 1 |
openapi-generators/openapi-python-client | rest-api | 669 | Could not hook httpx "event hooks" for debugging | **Describe the bug**
I like to do deep debugging with the clients generated using openapi-python-client. I would like to have the request and response objects for every API call logged/dumped for analysis.
From the httpx documentation, this is possible using Event Hooks (https://www.python-httpx.org/advanced/).
But I was not able to successfully hook the events.
**To Reproduce**
Steps to reproduce the behavior:
```python3
def log_httpx_request(request: httpx.Request) -> None:
print ("*** ", type(request))
print(f"+++Request event hook: {request.method} {request.url} - Waiting for response")
def log_httpx_response(response: httpx.Response) -> None:
print ("***", type(response))
request = response.request
print(f"+++Response event hook: {request.method} {request.url} - Status {response.status_code}")
...
client = Client(
base_url="https://codebeamer.xxx.com/cb",
headers={"Authorization": "Basic " + base64.b64encode(basic_auth_data).decode("ascii")},
timeout=10.0,
)
client.event_hooks={'request': [log_httpx_request], 'response': [log_httpx_response]}
version_data = get_server_version.sync(client=client)
...
```
**Expected behavior**
HTTPX event hooks should work, but the event hooks are not called at all.
It looks like event hooks only work with an httpx client object, but the generated code directly uses the "httpx.request" helper function.
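For reference, this is the plain-httpx pattern where the hooks do fire; it assumes the hooks live on an httpx.Client instance, and the endpoint path is hypothetical. How to make the generated module-level calls go through such a client is exactly what I am missing:
```python
import httpx

hooks = {"request": [log_httpx_request], "response": [log_httpx_response]}
with httpx.Client(base_url="https://codebeamer.xxx.com/cb",
                  event_hooks=hooks, timeout=10.0) as http_client:
    response = http_client.get("/api/v3/version")  # hypothetical endpoint; hooks fire here
```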
Please provide a way to do a full debugging / event hooks.
**OpenAPI Spec File**
NA
**Desktop (please complete the following information):**
- OS: Ubuntu 22.04.1 LTS
- Python Version: Python 3.10.4
- openapi-python-client version 0.11.5
| closed | 2022-09-12T04:30:49Z | 2022-09-12T14:16:13Z | https://github.com/openapi-generators/openapi-python-client/issues/669 | [
"🐞bug"
] | bakkiaraj | 3 |
matplotlib/matplotlib | matplotlib | 29,291 | [Bug]: Calling a python script which imports matplotlib in C++ project shows ImportError | ### Bug summary
I am calling a Python script in a CUDA C++ project, which imports matplotlib. I made sure that matplotlib is installed correctly and that the Python path is configured in C++. When I run the Python script alone, everything works fine. But when I call the Python script from the C++ program, I encounter the following problem:
ImportError: /home/wbt/anaconda3/envs/piq/lib/python3.12/lib-dynload/pyexpat.cpython-312-x86_64-linux-gnu.so: undefined symbol: XML_SetReparseDeferralEnabled
The other third-party libraries I imported did not encounter any issues when called.
### Code for reproduction
```Python
test.cu:
#include <Python.h>
#include <vector>
#include <iostream>
#include <cstdio>

// Forward declaration so main() compiles; the definition follows below.
void pltDrawFigure(const std::vector<double> & xx, const std::vector<double> & yy);

int main(){
Py_Initialize();
std::vector<double> yy = {3, 5, 1, 4, 2};
std::vector<double> xx = {1,2,3,4,5};
pltDrawFigure(xx, yy);
Py_Finalize();
}
void pltDrawFigure(const std::vector<double> & xx, const std::vector<double> & yy){
PyRun_SimpleString("import sys");
PyRun_SimpleString("sys.path.append('../python/')");
PyObject * pModule = PyImport_ImportModule("eva");
if (!pModule) {
printf("import python failed1!!\n");
return;
}
PyObject * pFunction = PyObject_GetAttrString(pModule, "drawFigure");
if (!pFunction) {
printf("get python function failed!!!\n");
return;
}
auto size = xx.size();
PyObject* pylist_x = PyList_New(size);
PyObject* pylist_y = PyList_New(size);
for (size_t i = 0; i < size; ++i) {
PyObject *pyitemx = PyLong_FromLong(xx[i]);
PyObject *pyitemy = PyLong_FromLong(yy[i]);
if (PyList_SetItem(pylist_x, i, pyitemx) != 0) {
PyErr_Print();
std::cerr << "Failed to set item in PyList" << std::endl;
Py_DECREF(pyitemx);
Py_DECREF(pylist_x);
return;
}
if (PyList_SetItem(pylist_y, i, pyitemy) != 0) {
PyErr_Print();
std::cerr << "Failed to set item in PyList" << std::endl;
Py_DECREF(pyitemy);
Py_DECREF(pylist_y);
return;
}
}
PyObject *pArgs = PyTuple_New(2);
PyTuple_SetItem(pArgs, 0, pylist_x);
PyTuple_SetItem(pArgs, 1, pylist_y);
PyObject_CallObject(pFunction, pArgs);
Py_DECREF(pFunction);
Py_DECREF(pModule);
}
CmakeLists.txt:
cmake_minimum_required(VERSION 3.23)
project(LightSim_Eva_Zhu)
set(CMAKE_CUDA_STANDARD 17)
find_package(CUDA REQUIRED)
include_directories("/home/wbt/openlibs/eigen-3.4.0/include/eigen3")
find_package(OpenCV REQUIRED PATHS /home/wbt/openlibs/opencv-4.5.5)
include_directories(${OpenCV_INCLUDE_DIRS})
#method1
#include_directories(/home/wbt/anaconda3/envs/piq/include/python3.12)
#LINK_DIRECTORIES(/home/wbt/anaconda3/envs/piq/lib)
#LINK_LIBRARIES(python3.12)
#method2
set(Python3_EXECUTABLE "/home/wbt/anaconda3/envs/piq/bin/python")
set(Python3_INCLUDE_DIR "/home/wbt/anaconda3/envs/piq/include/python3.12")
set(Python3_LIBRARY "/home/wbt/anaconda3/envs/piq/lib/libpython3.12.so")
INCLUDE_DIRECTORIES(${Python3_INCLUDE_DIR})
cuda_add_executable(LightSim_Eva_Zhu test.cu #otherscripts)
target_link_libraries( LightSim_Eva_Zhu ${OpenCV_LIBS} ${Python3_LIBRARY})
eva.py:
import matplotlib.pyplot as plt
def drawFigure(x, y):
print(x)
print(y)
plt.plot(x, y, marker='o')
plt.title("f-loss")
plt.xlabel("f")
plt.ylabel("loss")
plt.grid(True)
plt.savefig('../Output Pictures/fig.png')
```
### Actual outcome
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/pyplot.py", line 55, in <module>
import matplotlib.colorbar
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/colorbar.py", line 19, in <module>
from matplotlib import _api, cbook, collections, cm, colors, contour, ticker
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/contour.py", line 15, in <module>
from matplotlib.backend_bases import MouseButton
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/backend_bases.py", line 49, in <module>
from matplotlib import (
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/text.py", line 16, in <module>
from .font_manager import FontProperties
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/site-packages/matplotlib/font_manager.py", line 41, in <module>
import plistlib
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/plistlib.py", line 70, in <module>
from xml.parsers.expat import ParserCreate
File "/home/wbt/anaconda3/envs/piq/lib/python3.12/xml/parsers/expat.py", line 4, in <module>
from pyexpat import *
ImportError: /home/wbt/anaconda3/envs/piq/lib/python3.12/lib-dynload/pyexpat.cpython-312-x86_64-linux-gnu.so: undefined symbol: XML_SetReparseDeferralEnabled
[1, 2, 3, 4, 5]
[3, 5, 1, 4, 2]
Process finished with exit code 0
### Expected outcome
draws a figure with x/y
### Additional information
- If I test eva.py in a Python virtual environment, eva.py executes correctly. But if I call eva.py from the C++ project, it shows this import error.
- If I do not import matplotlib in this script, everything is normal. I can use any other libraries in this virtual environment, so it doesn't seem to be an issue with my Python path settings in CMakeLists.txt.
### Operating system
Ubuntu 20.04.5 LTS
### Matplotlib Version
3.9.3
### Matplotlib Backend
_No response_
### Python version
3.12
### Jupyter version
_No response_
### Installation
pip | closed | 2024-12-12T07:28:56Z | 2024-12-13T01:24:32Z | https://github.com/matplotlib/matplotlib/issues/29291 | [] | hongyifei | 1 |
google/seq2seq | tensorflow | 318 | Process getting finished by exit code 1 | The shell script "wmt16en_de.sh" has some error but couldn't resolve the problem.
Error :--------------------------------------------------------------------------------------------------------
/usr/bin/env bash /home/nil/beam/seq2seq/bin/data/wmt16_en_de.sh
Writing to /home/nil/nmt_data/wmt16_de_en. To change this, set the OUTPUT_DIR environment variable.
Downloading Europarl v7. This may take a while...
Process finished with exit code 1
Code:---------------------------------------------------------------------------------------------------------
`#! /usr/bin/env bash
set -e
BASE_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../.." && pwd )"
OUTPUT_DIR=${OUTPUT_DIR:-$HOME/nmt_data/wmt16_de_en}
echo "Writing to ${OUTPUT_DIR}. To change this, set the OUTPUT_DIR environment variable."
OUTPUT_DIR_DATA="${OUTPUT_DIR}/data"
mkdir -p $OUTPUT_DIR_DATA
echo "Downloading Europarl v7. This may take a while..."
wget -nc -nv -O ${OUTPUT_DIR_DATA}/europarl-v7-de-en.tgz \
http://www.statmt.org/europarl/v7/de-en.tgz
echo "Downloading Common Crawl corpus. This may take a while..."
wget -nc -nv -O ${OUTPUT_DIR_DATA}/common-crawl.tgz \
http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
echo "Downloading News Commentary v11. This may take a while..."
wget -nc -nv -O ${OUTPUT_DIR_DATA}/nc-v11.tgz \
http://data.statmt.org/wmt16/translation-task/training-parallel-nc-v11.tgz
echo "Downloading dev/test sets"
wget -nc -nv -O ${OUTPUT_DIR_DATA}/dev.tgz \
http://data.statmt.org/wmt16/translation-task/dev.tgz
wget -nc -nv -O ${OUTPUT_DIR_DATA}/test.tgz \
http://data.statmt.org/wmt16/translation-task/test.tgz
echo "Extracting all files..."
mkdir -p "${OUTPUT_DIR_DATA}/europarl-v7-de-en"
tar -xvzf "${OUTPUT_DIR_DATA}/europarl-v7-de-en.tgz" -C "${OUTPUT_DIR_DATA}/europarl-v7-de-en"
mkdir -p "${OUTPUT_DIR_DATA}/common-crawl"
tar -xvzf "${OUTPUT_DIR_DATA}/common-crawl.tgz" -C "${OUTPUT_DIR_DATA}/common-crawl"
mkdir -p "${OUTPUT_DIR_DATA}/nc-v11"
tar -xvzf "${OUTPUT_DIR_DATA}/nc-v11.tgz" -C "${OUTPUT_DIR_DATA}/nc-v11"
mkdir -p "${OUTPUT_DIR_DATA}/dev"
tar -xvzf "${OUTPUT_DIR_DATA}/dev.tgz" -C "${OUTPUT_DIR_DATA}/dev"
mkdir -p "${OUTPUT_DIR_DATA}/test"
tar -xvzf "${OUTPUT_DIR_DATA}/test.tgz" -C "${OUTPUT_DIR_DATA}/test"
cat "${OUTPUT_DIR_DATA}/europarl-v7-de-en/europarl-v7.de-en.en" \
"${OUTPUT_DIR_DATA}/common-crawl/commoncrawl.de-en.en" \
"${OUTPUT_DIR_DATA}/nc-v11/training-parallel-nc-v11/news-commentary-v11.de-en.en" \
> "${OUTPUT_DIR}/train.en"
wc -l "${OUTPUT_DIR}/train.en"
cat "${OUTPUT_DIR_DATA}/europarl-v7-de-en/europarl-v7.de-en.de" \
"${OUTPUT_DIR_DATA}/common-crawl/commoncrawl.de-en.de" \
"${OUTPUT_DIR_DATA}/nc-v11/training-parallel-nc-v11/news-commentary-v11.de-en.de" \
> "${OUTPUT_DIR}/train.de"
wc -l "${OUTPUT_DIR}/train.de"
if [ ! -d "${OUTPUT_DIR}/mosesdecoder" ]; then
echo "Cloning moses for data processing"
git clone https://github.com/moses-smt/mosesdecoder.git "${OUTPUT_DIR}/mosesdecoder"
fi
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2014-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2014.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2014-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2014.en
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2015-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2015.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/dev/dev/newstest2015-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/dev/dev/newstest2015.en
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/test/test/newstest2016-deen-src.de.sgm \
> ${OUTPUT_DIR_DATA}/test/test/newstest2016.de
${OUTPUT_DIR}/mosesdecoder/scripts/ems/support/input-from-sgm.perl \
< ${OUTPUT_DIR_DATA}/test/test/newstest2016-deen-ref.en.sgm \
> ${OUTPUT_DIR_DATA}/test/test/newstest2016.en
cp ${OUTPUT_DIR_DATA}/dev/dev/newstest20*.de ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/dev/dev/newstest20*.en ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/test/test/newstest20*.de ${OUTPUT_DIR}
cp ${OUTPUT_DIR_DATA}/test/test/newstest20*.en ${OUTPUT_DIR}
for f in ${OUTPUT_DIR}/*.de; do
echo "Tokenizing $f..."
${OUTPUT_DIR}/mosesdecoder/scripts/tokenizer/tokenizer.perl -q -l de -threads 8 < $f > ${f%.*}.tok.de
done
for f in ${OUTPUT_DIR}/*.en; do
echo "Tokenizing $f..."
${OUTPUT_DIR}/mosesdecoder/scripts/tokenizer/tokenizer.perl -q -l en -threads 8 < $f > ${f%.*}.tok.en
done
for f in ${OUTPUT_DIR}/*.en; do
fbase=${f%.*}
echo "Cleaning ${fbase}..."
${OUTPUT_DIR}/mosesdecoder/scripts/training/clean-corpus-n.perl $fbase de en "${fbase}.clean" 1 80
done
${BASE_DIR}/bin/tools/generate_vocab.py --delimiter "" \
< ${OUTPUT_DIR}/train.tok.clean.en \
> ${OUTPUT_DIR}/vocab.tok.char.en
${BASE_DIR}/bin/tools/generate_vocab.py --delimiter "" \
< ${OUTPUT_DIR}/train.tok.clean.de \
> ${OUTPUT_DIR}/vocab.tok.char.de
${BASE_DIR}/bin/tools/generate_vocab.py --delimiter "" \
< ${OUTPUT_DIR}/train.clean.en \
> ${OUTPUT_DIR}/vocab.char.en
${BASE_DIR}/bin/tools/generate_vocab.py --delimiter "" \
< ${OUTPUT_DIR}/train.clean.de \
> ${OUTPUT_DIR}/vocab.char.de
$BASE_DIR/bin/tools/generate_vocab.py \
--max_vocab_size 50000 \
< ${OUTPUT_DIR}/train.tok.clean.en \
> ${OUTPUT_DIR}/vocab.50k.en \
$BASE_DIR/bin/tools/generate_vocab.py \
--max_vocab_size 50000 \
< ${OUTPUT_DIR}/train.tok.clean.de \
> ${OUTPUT_DIR}/vocab.50k.de \
if [ ! -d "${OUTPUT_DIR}/subword-nmt" ]; then
git clone https://github.com/rsennrich/subword-nmt.git "${OUTPUT_DIR}/subword-nmt"
fi
for merge_ops in 32000; do
echo "Learning BPE with merge_ops=${merge_ops}. This may take a while..."
cat "${OUTPUT_DIR}/train.tok.clean.de" "${OUTPUT_DIR}/train.tok.clean.en" | \
${OUTPUT_DIR}/subword-nmt/learn_bpe.py -s $merge_ops > "${OUTPUT_DIR}/bpe.${merge_ops}"
echo "Apply BPE with merge_ops=${merge_ops} to tokenized files..."
for lang in en de; do
for f in ${OUTPUT_DIR}/*.tok.${lang} ${OUTPUT_DIR}/*.tok.clean.${lang}; do
outfile="${f%.*}.bpe.${merge_ops}.${lang}"
${OUTPUT_DIR}/subword-nmt/apply_bpe.py -c "${OUTPUT_DIR}/bpe.${merge_ops}" < $f > "${outfile}"
echo ${outfile}
done
done
cat "${OUTPUT_DIR}/train.tok.clean.bpe.${merge_ops}.en" "${OUTPUT_DIR}/train.tok.clean.bpe.${merge_ops}.de" | \
${OUTPUT_DIR}/subword-nmt/get_vocab.py | cut -f1 -d ' ' > "${OUTPUT_DIR}/vocab.bpe.${merge_ops}"
done
echo "All done."` | open | 2018-02-28T10:34:01Z | 2018-02-28T10:34:01Z | https://github.com/google/seq2seq/issues/318 | [] | Sammyreus | 0 |
zappa/Zappa | django | 1,164 | Remove `six` from zappa dependencies | ## Context
Six is currently a dependency of zappa, but zappa no longer supports python 2.x, making the use of `six` unnecessary.
## Expected Behavior
No change in behavior, reduced zappa package size.
> `six` still appears to be a sub-dependency of boto3
## Actual Behavior
six included.
## Possible Fix
remove `six` by migrating relevant code to python 3 specific code.
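For illustration, these are the usual Python 3 equivalents of the six helpers; the examples are generic and not tied to specific Zappa code:
```python
# Typical six -> Python 3 replacements (illustrative only).
text = "hello"
isinstance(text, str)              # was: isinstance(text, six.string_types)
items = {"a": 1}.items()           # was: six.iteritems({"a": 1})
from urllib.parse import urlparse  # was: from six.moves.urllib.parse import urlparse
print(urlparse("https://example.com").scheme)
```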
| closed | 2022-08-12T02:51:58Z | 2022-12-01T10:02:47Z | https://github.com/zappa/Zappa/issues/1164 | [
"next-release-candidate"
] | monkut | 1 |
SciTools/cartopy | matplotlib | 1,634 | 'regrid_shape' produces incorrect wind barbs near the pole | I'm generating numerical model output plots (e.g., GFS, ERA-5) for polar projections using the Nearside Perspective projection. When plotting wind barbs, the barb magnitudes are correct if plotted normally, such as:
`
ax.barbs(x, y, u, v, transform=ccrs.PlateCarree())
`
Once I add the `regrid_shape` argument, the barbs are inaccurate near the North Pole, with the magnitude decreasing towards zero within a few degrees latitude of the pole. I'm not entirely sure if this is user error or a bug in `regrid_shape`, but I am struggling to find any error on my end so I suspect it may be a bug.
I created a sample code & data to reproduce this, by assuming a constant global u=50 m/s and v=0 m/s, meaning the wind magnitude should be 50 m/s throughout the grid. Using `regrid_shape`, most of the barbs correctly show a 50 m/s flag, but right near the pole the barbs gradually decrease towards 0 m/s. If `regrid_shape` is omitted from `ax.barbs()`, then all barbs (including those near the pole) show a 50 m/s flag.
This is produced using Cartopy 0.17.0, in multiple versions of python (anywhere from 3.6.7 to 3.8.0) and on both Windows & Linux operating systems. Any help to identify if this is a user error or a bug, and if the latter then if it can be fixed, would be greatly appreciated.
```python
import numpy as np
import matplotlib.pyplot as plt
from cartopy import crs as ccrs
import cartopy.feature as cfeature
#Retrieve instance of NearsidePerspective projection
proj = ccrs.NearsidePerspective(
central_longitude = -100.0,
central_latitude = 90.0,
satellite_height = 4785831,
)
#Create figure
fig = plt.figure(figsize=(14,9))
ax = fig.add_axes([0.05,0.03,0.89,0.90],projection=proj)
#Plot geography boundaries
countries = ax.add_feature(cfeature.BORDERS.with_scale('50m'),linewidths=1.0,linestyle='solid')
coastlines = ax.add_feature(cfeature.COASTLINE.with_scale('50m'),linewidths=1.0,linestyle='solid')
continent_mask = ax.add_feature(cfeature.LAND.with_scale('50m'),facecolor='#eeeeee',edgecolor='face')
#Add sample data
lon = np.arange(-180,180,4) #lat/lon grid every 4 degrees
lat = np.arange(35,90,4) #lat/lon grid every 4 degrees
lons,lats = np.meshgrid(lon,lat)
u = np.zeros((lons.shape)) + 50.0 #set u-wind to 50 m/s throughout the grid
v = np.zeros((lons.shape)) + 0.0 #set v-wind to 0 m/s throughout the grid
#Plot barbs
ax.barbs(lons,lats,u,v,regrid_shape=(40,30),transform=ccrs.PlateCarree(),linewidth=0.5,length=6)
#Show plot and close
plt.show()
plt.close()
```
| open | 2020-08-15T20:55:42Z | 2020-08-21T18:38:24Z | https://github.com/SciTools/cartopy/issues/1634 | [] | tomerburg | 2 |
ploomber/ploomber | jupyter | 906 | Notifying community members on Slack when an issue is closed | Sometimes users request features or discover bugs that already have an existing issue. So far, we just paste the URL and tell them to keep an eye on it, but it is unrealistic to expect them to check the issue every now and then. One alternative is to ask them to subscribe to the issue updates but this doesn't always happen.
A better approach would be to automatically notify everyone on a given Slack thread that an issue has been closed. I saw someone on YC OSS post that they developed something internally and said they can share the code, I think we should request it.
Notifying when we close an issue is useful but limiting since users will need to install from git to get the fix; a better approach would be to notify them when it's closed *and* released (or maybe in both cases?)
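A minimal sketch of the notification step itself, assuming a Slack incoming webhook; the URL, the payload fields, and how the issue-closed event reaches this function are all placeholders:
```python
import requests

SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

def notify_issue_closed(issue_url: str, issue_title: str) -> None:
    text = f"Issue closed: <{issue_url}|{issue_title}> - the fix ships in the next release."
    requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=10)
```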
| closed | 2022-07-08T19:59:58Z | 2022-09-02T22:56:08Z | https://github.com/ploomber/ploomber/issues/906 | [] | edublancas | 0 |
comfyanonymous/ComfyUI | pytorch | 6,291 | couldn't connect to 'https://huggingface.co' | ### Your question
When running ComfyUI, I get the error below.
We couldn't connect to 'https://huggingface.co' to load this model, couldn't find it in the cached files and it looks like /content/ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints/controlnet is not the path to a directory containing a config.json file.
Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/diffusers/installation#offline-mode'.
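For context, a sketch of how such a checkpoint directory is usually loaded with diffusers; it assumes the InstantID node goes through `from_pretrained`, and the path is the one from the error message, which would need to contain both the weights and a config.json:
```python
from diffusers import ControlNetModel

# Assumption: offline/local loading only works if config.json is present in this directory.
controlnet = ControlNetModel.from_pretrained(
    "/content/ComfyUI/custom_nodes/ComfyUI-InstantID/checkpoints/controlnet",
    local_files_only=True,
)
```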
### Logs
_No response_
### Other
_No response_ | closed | 2024-12-31T08:38:05Z | 2024-12-31T23:32:36Z | https://github.com/comfyanonymous/ComfyUI/issues/6291 | [
"User Support",
"Custom Nodes Bug"
] | Ellsichan | 2 |
pyro-ppl/numpyro | numpy | 1,603 | Sample and evaluate independent parameter sets at once | First of all, lots of thanks for the great library! Its fun to use and as a beginner I've had a great intro to the topic 🙌
I am currently using Numpyro to solve an inverse problem to estimate parameters of a coupled system of ODEs, using `diffrax` for the integration part. Since the latter can `vmap` across multiple sets of parameters, I was wondering if it is possible to subsample parameters and evaluate them in parallel.
So far I am able to obtain a matrix of parameters and ingest it into my simulation function:
```python
with numpyro.plate("parameters", 10):
theta = sample_from_distributions()
_, states = sim_func(y0s, theta, times)
# Shapes -> theta: (10, n_params) / states: (10, time, species)
```
My question is, if I am now specifying a `numpyro.plate` to model my observations across all parameter sets, will Numpyro treat these as independent across all of them? Thought about something like this:
```python
with numpyro.plate("parameter_sets", 10):
numpyro.sample("y", dist.TruncatedNormal(states, sigma, low=0.0), obs=data)
```
Curious if this is possible. Would help me a lot!
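For comparison, here is a sketch with explicit plate dimensions; the sizes, `dim` values and names are assumptions based on the shapes above (states: (10, time, species)):
```python
# Sketch: declare every batch dimension of the observations as independent.
n_sets, n_time, n_species = states.shape
with numpyro.plate("parameter_sets", n_sets, dim=-3):
    with numpyro.plate("time", n_time, dim=-2):
        with numpyro.plate("species", n_species, dim=-1):
            numpyro.sample("y", dist.TruncatedNormal(states, sigma, low=0.0), obs=data)
```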
| closed | 2023-06-09T11:08:14Z | 2023-06-09T16:03:07Z | https://github.com/pyro-ppl/numpyro/issues/1603 | [] | JR-1991 | 1 |
onnx/onnx | deep-learning | 6,710 | type-coverage-of-popular-python-packages-and-github-badge | Hello,
maybe that's of interest for us:
https://discuss.python.org/t/type-coverage-of-popular-python-packages-and-github-badge/63401
https://html-preview.github.io/?url=https://github.com/lolpack/type_coverage_py/blob/main/index.html

 | open | 2025-02-16T09:17:00Z | 2025-03-09T12:19:19Z | https://github.com/onnx/onnx/issues/6710 | [
"contributions welcome"
] | andife | 2 |
sqlalchemy/alembic | sqlalchemy | 660 | downgrade from specific head when multiple heads are present? | Say I have three revisions, two are heads, the other is a branchpoint:
```
B
/
A
\
C
```
Is there a way to rollback only `B` or only `C`?
If I do `alembic downgrade B-1` (or `C-1`), both B and C are rolled back (which _does_ make sense...).
As an aside, `alembic downgrade -1` rolls back one or the other, but I can't figure out what determines which gets rolled back -- when I try it, then do `upgrade heads`, then try it again, sometimes it will be different the second time. | closed | 2020-02-19T01:40:08Z | 2021-04-25T15:59:20Z | https://github.com/sqlalchemy/alembic/issues/660 | [] | eeshugerman | 3 |
babysor/MockingBird | deep-learning | 462 | Running reports an error about missing pyworld; after installing Visual Studio, `pip install pyworld` still fails with the error below. Please help. | Collecting pyworld
Using cached pyworld-0.3.0.tar.gz (212 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... done
Requirement already satisfied: numpy in d:\anaconda\data\lib\site-packages (from pyworld) (1.19.3)
Requirement already satisfied: cython>=0.24.0 in d:\anaconda\data\lib\site-packages (from pyworld) (0.29.24)
Building wheels for collected packages: pyworld
Building wheel for pyworld (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for pyworld (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [22 lines of output]
C:\Users\73465\AppData\Local\Temp\pip-build-env-_ap7ysdb\overlay\Lib\site-packages\setuptools\dist.py:739: UserWarning: Usage of dash-separated 'description-file' will not be supported in future versions. Please use the underscore name 'description_file' instead
warnings.warn(
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.9
creating build\lib.win-amd64-3.9\pyworld
copying pyworld\__init__.py -> build\lib.win-amd64-3.9\pyworld
running build_ext
skipping 'pyworld\pyworld.cpp' Cython extension (up-to-date)
building 'pyworld.pyworld' extension
creating build\temp.win-amd64-3.9
creating build\temp.win-amd64-3.9\Release
creating build\temp.win-amd64-3.9\Release\lib
creating build\temp.win-amd64-3.9\Release\lib\World
creating build\temp.win-amd64-3.9\Release\lib\World\src
creating build\temp.win-amd64-3.9\Release\pyworld
D:\visualstudio\visualstudio\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe /c /nologo /O2 /W3 /GL /DNDEBUG /MD -Ilib\World\src -IC:\Users\73465\AppData\Local\Temp\pip-build-env-_ap7ysdb\overlay\Lib\site-packages\numpy\core\include -ID:\Anaconda\DATA\include -ID:\Anaconda\DATA\Include -ID:\visualstudio\visualstudio\VC\Tools\MSVC\14.29.30133\include /EHsc /Tplib\World\src\cheaptrick.cpp /Fobuild\temp.win-amd64-3.9\Release\lib\World\src\cheaptrick.obj
cheaptrick.cpp
lib\World\src\cheaptrick.cpp(10): fatal error C1083: Cannot open include file: "math.h": No such file or directory
error: command 'D:\\visualstudio\\visualstudio\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyworld
Failed to build pyworld
ERROR: Could not build wheels for pyworld, which is required to install pyproject.toml-based projects
| closed | 2022-03-18T07:02:41Z | 2024-07-31T20:35:35Z | https://github.com/babysor/MockingBird/issues/462 | [] | JOKERSLION | 4 |
yt-dlp/yt-dlp | python | 12,443 | how to fix this subtitle error | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
hi, every time i try to download i get this error about subtitles
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[generic] 174560?b=1&token=348d6a2444c39a6ad64b69b799c9b961&expires=1745434877&h=1: Downloading webpage
[generic] 174560?b=1&token=348d6a2444c39a6ad64b69b799c9b961&expires=1745434877&h=1: Downloading m3u8 information
[generic] 174560?b=1&token=348d6a2444c39a6ad64b69b799c9b961&expires=1745434877&h=1: Checking m3u8 live status
[info] 174560?b=1&token=348d6a2444c39a6ad64b69b799c9b961&expires=1745434877&h=1: Downloading subtitles: ita
[info] 174560?b=1&token=348d6a2444c39a6ad64b69b799c9b961&expires=1745434877&h=1: Downloading 1 format(s): 4500+audio-Italian
[info] Writing video subtitles to: S1E2 Primo ballo.ita.unknown_video
[download] Destination: S1E2 Primo ballo.ita.unknown_video
[download] 100% of 293.00B in 00:00:06 at 45.48B/s
[info] There are no video thumbnails to download
[SubtitlesConvertor] Converting subtitles
ERROR: Preprocessing: file:S1E2 Primo ballo.ita.unknown_video: Invalid data found when processing input
``` | closed | 2025-02-22T20:26:43Z | 2025-02-22T22:39:29Z | https://github.com/yt-dlp/yt-dlp/issues/12443 | [
"incomplete"
] | Richard642355 | 4 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,572 | [Feature Request]: Better Lora view | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
Would it be possible to display folders in the view, as in my example images below?
The current situation is as follows

Instead of this view, you will see the main directory of Lora with the folder

And if you then go to the Pony folder, for example, you can also see the subfolders there.

As I own about 4154 Lora and have everything neatly organised in folders, it would be nice to display them in the same way
### Proposed workflow
1. Click on Lora View, you will see the main folder in the Lora directory
2. Click on one of the main folders to display the folders below it
3. and if the subfolders are also divided into subfolders, these are also displayed when you click on one of them.
### Additional information
_No response_ | closed | 2024-10-21T10:30:23Z | 2024-10-22T14:27:57Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16572 | [
"enhancement"
] | ChangeMeNot | 2 |
pydantic/pydantic-settings | pydantic | 378 | Settings fail to load if different sources use different aliases | Contrived example is below:
```
import os
from pydantic import Field, AliasChoices
from pydantic_settings import BaseSettings
class MySettings(BaseSettings):
original_name: str = Field(validation_alias=AliasChoices("DIFFERENT-NAME", "another"))
def test():
os.environ["DIFFERENT-NAME"] = "abc"
try:
settings = MySettings(another="ghi")
except Exception as e:
print(e)
else:
print(settings)
if __name__ == '__main__':
test()
```
This fails with error:
```
1 validation error for MySettings
another
Extra inputs are not permitted [type=extra_forbidden, input_value='ghi', input_type=str]
For further information visit https://errors.pydantic.dev/2.8/v/extra_forbidden
```
It would succeed if there were no environment variable (resulting in `original_name=ghi`) or no constructor value (resulting in `original_name=abc`). The issue is that the sources, while ordered by priority, do not take into account that aliases may differ between sources, so multiple values end up mapping to the same field. Pydantic correctly rejects this. Loading values from sources should take into account that there can be only one value per field.
Realistic scenario is when you are loading settings from environment variables and from remote configuration source (such as AWS Parameter Store) and naming convention is different for those (underscore vs dashes for variable naming). | closed | 2024-09-03T08:15:59Z | 2024-09-07T17:08:11Z | https://github.com/pydantic/pydantic-settings/issues/378 | [
"feature request",
"unconfirmed"
] | nejcskofic | 4 |
globaleaks/globaleaks-whistleblowing-software | sqlalchemy | 3,035 | "Request support" feature | This ticket is to track the implementation of a "Request support" feature to be implemented to offer users the possibility to request support to administrators of the platform.
Such a feature could be useful for users in many situations, for example when they are unable to access the site due to a lost password or a lost account recovery key.
The feature should offer users the possibility to type a message to be notified to administrators of the system. | closed | 2021-08-25T09:31:05Z | 2021-09-20T21:40:33Z | https://github.com/globaleaks/globaleaks-whistleblowing-software/issues/3035 | [
"C: Client",
"C: Backend",
"T: Feature"
] | evilaliv3 | 2 |
Miserlou/Zappa | django | 1,245 | No Module Named 'task' with Mezzanine | ## Context
When running my application in a local virtual environment, everything works fine. However, when I attempt to deploy the project, I get this traceback in the tail:
```
Traceback (most recent call last):
File "/var/task/django/core/handlers/base.py", line 131, in get_response
response = middleware_method(request, response)
File "/var/task/django/middleware/locale.py", line 36, in process_response
i18n_patterns_used, prefixed_default_language = is_language_prefix_patterns_used(urlconf)
File "/var/task/django/conf/urls/i18n.py", line 29, in is_language_prefix_patterns_used
for url_pattern in get_resolver(urlconf).url_patterns:
File "/var/task/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/var/task/django/urls/resolvers.py", line 313, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/var/task/django/utils/functional.py", line 35, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/var/task/django/urls/resolvers.py", line 306, in urlconf_module
return import_module(self.urlconf_name)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 936, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'task'
```
I'm sure this has something to do with Mezzanine's Middleware, but I don't know what's wrong.
I am using python 3.6.
## Expected Behavior
I should be able to visit my homepage.
## Actual Behavior
Debug page.
## Steps to Reproduce
1. checkout my project https://github.com/jjorissen52/golf-site/ (make sure project root is golf_site)
2. fill in secrets.conf and possibly modify database settings to fit your setup
3. create virtual environment, run pip install -r requirements.txt, zappa init, deploy
4. attempt to visit homepage
## My Environment (golf_site_z)
* Zappa version used: zappa==0.45.1
* Operating System and Python version: Ubunutu 17.04, Python 3.6
* The output of `pip freeze`:
```
argcomplete==1.9.2
base58==0.2.4
beautifulsoup4==4.6.0
bleach==2.1.1
boto==2.48.0
boto3==1.4.7
botocore==1.7.46
certifi==2017.7.27.1
chardet==3.0.4
click==6.7
Django==1.10.8
django-appconf==1.0.2
django-compressor==2.2
django-contrib-comments==1.8.0
django-storages-redux==1.3.3
djangorestframework==3.7.3
docutils==0.14
durationpy==0.5
filebrowser-safe==0.4.7
future==0.16.0
grappelli-safe==0.4.7
gunicorn==19.7.1
hjson==3.0.1
html5lib==1.0b10
idna==2.6
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
Mezzanine==4.2.3
oauthlib==2.0.6
olefile==0.44
Pillow==4.3.0
placebo==0.8.1
psycopg2==2.7.3.2
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2017.3
PyYAML==3.12
rcssmin==1.0.6
requests==2.18.4
requests-oauthlib==0.8.0
rjsmin==1.0.12
s3transfer==0.1.11
six==1.11.0
toml==0.9.3
tqdm==4.19.1
troposphere==2.0.2
tzlocal==1.4
Unidecode==0.4.21
urllib3==1.22
webencodings==0.5.1
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Link to project: https://github.com/jjorissen52/golf-site/
* `zappa_settings.json`
```
{
"dev": {
"aws_region": "us-east-1",
"django_settings": "golf_site.settings",
"profile_name": "default",
"project_name": "golf_site",
"runtime": "python3.6",
"s3_bucket": "zappa-0sjf0bvd6",
"exclude": ["__py_cache__", "__pycache__", "*.pyc", "uploads"]
}
}
```
* `settings.py`:
```
from __future__ import absolute_import, unicode_literals
import os, configparser, socket
PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
PROJECT_DIRNAME = PROJECT_ROOT.split(os.sep)[-1]
config = configparser.ConfigParser()
config.read(os.path.join(PROJECT_ROOT, 'secrets.conf'))
USE_SOUTH = True
SECRET_KEY = config.get('django', 'secret_key')
########################
# MAIN DJANGO SETTINGS #
########################
# People who get code error notifications.
# In the format (('Full Name', 'email@example.com'),
# ('Full Name', 'anotheremail@example.com'))
ADMINS = (
(config.get('golf_site', 'admin_name'), config.get('golf_site', 'admin_email')),
)
MANAGERS = ADMINS
ALLOWED_HOSTS = ["*"]
USE_TZ = True
LANGUAGE_CODE = "en"
# Supported languages
_ = lambda s: s
LANGUAGES = (
('en', _('English')),
)
DEBUG = True
SESSION_EXPIRE_AT_BROWSER_CLOSE = True
SITE_ID = 1
USE_I18N = False
INTERNAL_IPS = ("127.0.0.1",)
AUTHENTICATION_BACKENDS = ("mezzanine.core.auth_backends.MezzanineBackend",)
FILE_UPLOAD_PERMISSIONS = 0o644
CACHE_MIDDLEWARE_KEY_PREFIX = PROJECT_DIRNAME
ROOT_URLCONF = "%s.urls" % PROJECT_DIRNAME
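# NOTE (assumption): on AWS Lambda this file lives under /var/task/<package>/, so the double
# dirname() used for PROJECT_ROOT above resolves to "/var/task" and PROJECT_DIRNAME to "task",
# which turns ROOT_URLCONF into "task.urls" and would match the "No module named 'task'" traceback.
# One possible (untested) workaround would be to hardcode it, e.g.:
# ROOT_URLCONF = "golf_site.urls"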
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [
# insert your TEMPLATE_DIRS here
os.path.join(PROJECT_ROOT, "templates"),
os.path.join(PROJECT_ROOT, "events/../events/templates"),
],
'OPTIONS': {
'context_processors': [
# Insert your TEMPLATE_CONTEXT_PROCESSORS here or use this
# list if you haven't customized them:
'django.contrib.auth.context_processors.auth',
'django.template.context_processors.debug',
# 'django.template.context_processors.i18n',
'django.template.context_processors.media',
'django.template.context_processors.request',
'django.template.context_processors.static',
'django.template.context_processors.tz',
'django.contrib.messages.context_processors.messages',
'mezzanine.pages.context_processors.page',
"mezzanine.conf.context_processors.settings",
"mezzanine.pages.context_processors.page",
],
'loaders': [
# insert your TEMPLATE_LOADERS here
"django.template.loaders.filesystem.Loader",
"django.template.loaders.app_directories.Loader",
],
'builtins': [
'mezzanine.template.loader_tags',
]
},
},
]
################
# APPLICATIONS #
################
INSTALLED_APPS = (
# "flat",
# "moderna",
# "nova",
"solid",
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.redirects",
"django.contrib.sessions",
"django.contrib.sites",
"django.contrib.sitemaps",
"django.contrib.staticfiles",
"mezzanine.boot",
"mezzanine.conf",
"mezzanine.core",
"mezzanine.generic",
"mezzanine.blog",
"mezzanine.forms",
"mezzanine.pages",
"mezzanine.galleries",
'events',
'rest_framework',
'storages',
"compressor",
# "mezzanine.twitter",
#"mezzanine.accounts",
#"mezzanine.mobile",
)
# List of middleware classes to use. Order is important; in the request phase,
# these middleware classes will be applied in the order given, and in the
# response phase the middleware will be applied in reverse order.
MIDDLEWARE_CLASSES = (
"mezzanine.core.middleware.UpdateCacheMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.locale.LocaleMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"mezzanine.core.request.CurrentRequestMiddleware",
"mezzanine.core.middleware.RedirectFallbackMiddleware",
"mezzanine.core.middleware.TemplateForDeviceMiddleware",
"mezzanine.core.middleware.TemplateForHostMiddleware",
"mezzanine.core.middleware.AdminLoginInterfaceSelectorMiddleware",
"mezzanine.core.middleware.SitePermissionMiddleware",
# Uncomment the following if using any of the SSL settings:
# "mezzanine.core.middleware.SSLRedirectMiddleware",
"mezzanine.pages.middleware.PageMiddleware",
"mezzanine.core.middleware.FetchFromCacheMiddleware",
)
# Store these package names here as they may change in the future since
# at the moment we are using custom forks of them.
PACKAGE_NAME_FILEBROWSER = "filebrowser_safe"
PACKAGE_NAME_GRAPPELLI = "grappelli_safe"
#########################
# OPTIONAL APPLICATIONS #
#########################
# These will be added to ``INSTALLED_APPS``, only if available.
OPTIONAL_APPS = (
"debug_toolbar",
"django_extensions",
"compressor",
PACKAGE_NAME_FILEBROWSER,
PACKAGE_NAME_GRAPPELLI,
)
try:
HOSTNAME = socket.gethostname()
except:
HOSTNAME = 'localhost'
print(os.path.join(PROJECT_ROOT, "golf.db"))
if config.get('local', 'host_name') in HOSTNAME:
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": os.path.join(PROJECT_ROOT, "golf.db"),
}
}
STATIC_URL = '/static/'
else:
############# DATABASE DEFINITIONS ################
SCHEMA = config.get('golf_site', 'db_schema')
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'OPTIONS': {
'options': f'-c search_path={SCHEMA}'
},
'NAME': config.get('lambda', 'db_name'),
'USER': config.get('lambda', 'db_user'),
'PASSWORD': config.get('lambda', 'db_password'),
'HOST': config.get('lambda', 'db_host'),
'PORT': '5432',
}
}
############### FILE STORAGE CONFIG ###################
STATICFILES_DIRS = [os.path.join(PROJECT_ROOT, '../static/'), os.path.join(PROJECT_ROOT, 'solid/../solid/static/')]
STATIC_ROOT = os.path.join(PROJECT_ROOT, 'static_root/')
COMPRESS_ROOT = STATIC_ROOT
AWS_LOCATION = 'content'
# DEFAULT_FILE_STORAGE = 'storages.backends.s3boto3.S3Boto3Storage'
AWS_S3_HOST = "s3.us-east-1.amazonaws.com"
AWS_STORAGE_BUCKET_NAME = config.get('golf_site', 'storage_bucket_name')
AWS_S3_CUSTOM_DOMAIN = '%s.s3.amazonaws.com' % AWS_STORAGE_BUCKET_NAME
STATICFILES_LOCATION = 'static'
STATIC_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, STATICFILES_LOCATION)
ADMIN_MEDIA_PREFIX = STATIC_URL + 'grappelli/'
MEDIAFILES_LOCATION = 'media'
MEDIA_URL = "https://%s/%s/" % (AWS_S3_CUSTOM_DOMAIN, MEDIAFILES_LOCATION)
STATICFILES_STORAGE = 'custom_storages.StaticStorage'
DEFAULT_FILE_STORAGE = 'custom_storages.MediaStorage'
#####################################################
####################
# DYNAMIC SETTINGS #
####################
# set_dynamic_settings() will rewrite globals based on what has been
# defined so far, in order to provide some better defaults where
# applicable. We also allow this settings module to be imported
# without Mezzanine installed, as the case may be when using the
# fabfile, where setting the dynamic settings below isn't strictly
# required.
try:
from mezzanine.utils.conf import set_dynamic_settings
except ImportError:
pass
else:
set_dynamic_settings(globals())
```
| closed | 2017-11-17T01:52:34Z | 2017-12-01T13:00:52Z | https://github.com/Miserlou/Zappa/issues/1245 | [] | jjorissen52 | 1 |
Josh-XT/AGiXT | automation | 1,225 | Github extension FR get all repos | ### Feature/Improvement Description
Please add "get all repos" to the GitHub extension so I can replace a component inside my task app.
### Proposed Solution
Add "get repos": it will list all GitHub repos in the `name : repo url` format.
### Acknowledgements
- [X] I have searched the existing issues to make sure this feature has not been requested yet.
- [X] I have provided enough information for everyone to understand why this feature request is needed in AGiXT. | closed | 2024-07-17T02:40:57Z | 2024-07-18T01:16:33Z | https://github.com/Josh-XT/AGiXT/issues/1225 | [
"needs triage"
] | birdup000 | 1 |
babysor/MockingBird | pytorch | 434 | Error reported during training | > {| Epoch: 1/1 (400/52340) | Loss: 0.4498 | 0.55 steps/s | Step: 27k | }Traceback (most recent call last):
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer_train.py", line 37, in <module>
train(**vars(args))
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer\train.py", line 255, in train
eval_model(attention=np_now(attention[sample_idx][:, :attention_len]),
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer\train.py", line 287, in eval_model
wav = audio.inv_mel_spectrogram(mel_prediction.T, hparams)
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer\audio.py", line 91, in inv_mel_spectrogram
S = _mel_to_linear(_db_to_amp(D + hparams.ref_level_db), hparams) # Convert back to linear
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer\audio.py", line 165, in _mel_to_linear
_inv_mel_basis = np.linalg.pinv(_build_mel_basis(hparams))
File "D:\Users\Jerry\Documents\Jerry\MockingBird-main\synthesizer\audio.py", line 169, in _build_mel_basis
assert hparams.fmax <= hparams.sample_rate // 2
AssertionError
I modified mandarin.json but then changed it back to the original, so I'm not sure whether that is the problem.
~~~json
{"sample_rate": 14000, "n_fft": 800, "num_mels": 80, "hop_size": 200, "win_size": 800, "fmin": 55, "min_level_db": -100, "ref_level_db": 20, "max_abs_value": 4.0, "preemphasis": 0.97, "preemphasize": true, "tts_embed_dims": 512, "tts_encoder_dims": 256, "tts_decoder_dims": 128, "tts_postnet_dims": 512, "tts_encoder_K": 5, "tts_lstm_dims": 1024, "tts_postnet_K": 5, "tts_num_highways": 4, "tts_dropout": 0.5, "tts_cleaner_names": ["basic_cleaners"], "tts_stop_threshold": -3.4, "tts_schedule": [[2, 0.001, 10000, 14], [2, 0.0005, 15000, 14], [2, 0.0002, 20000, 14], [2, 0.0001, 30000, 14], [2, 5e-05, 40000, 14], [2, 1e-05, 60000, 14], [2, 5e-06, 140000, 14], [2, 3e-06, 320000, 14], [2, 1e-06, 640000, 14]], "tts_clip_grad_norm": 1.0, "tts_eval_interval": 500, "tts_eval_num_samples": 1, "tts_finetune_layers": [], "max_mel_frames": 900, "rescale": true, "rescaling_max": 0.9, "synthesis_batch_size": 16, "signal_normalization": true, "power": 1.5, "griffin_lim_iters": 60, "fmax": 7600, "allow_clipping_in_normalization": true, "clip_mels_length": true, "use_lws": false, "symmetric_mels": true, "trim_silence": true, "speaker_embedding_size": 256, "silence_min_duration_split": 0.4, "utterance_min_duration": 1.6, "use_gst": true, "use_ser_for_gst": true}
~~~ | closed | 2022-03-07T13:02:35Z | 2022-03-08T00:15:36Z | https://github.com/babysor/MockingBird/issues/434 | [] | JerryZRF | 1 |
amdegroot/ssd.pytorch | computer-vision | 250 | Running eval.py without GPU | I want to run eval.py on CPU only, with no GPU (CUDA).
So I changed the `--cuda` argument's default value from True to False.
But it doesn't work.
And I got this error. See below:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location='cpu' to map your storages to the CPU.
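For reference, a minimal sketch of the change the error message itself suggests: load the checkpoint with `map_location='cpu'`. The `build_ssd` constructor and the checkpoint path below are assumptions about how eval.py loads the weights, not quotes from the repo:
```python
import torch
from ssd import build_ssd  # assumed import; this repo builds the network this way

net = build_ssd('test', 300, 21)  # assumed signature: phase, size, num_classes
# map_location='cpu' remaps CUDA storages so the checkpoint loads on a CPU-only machine
state_dict = torch.load('path/to/trained_model.pth', map_location='cpu')
net.load_state_dict(state_dict)
net.eval()
```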
Please give me a solution. | closed | 2018-10-18T23:09:49Z | 2019-08-18T14:20:35Z | https://github.com/amdegroot/ssd.pytorch/issues/250 | [] | MakeToast | 3 |
autogluon/autogluon | data-science | 4993 | New version AbstractTrainer | Is it possible to install the version with the new AbstractTrainer? Are there no bugs left in it, specifically regarding predictions? When it was first being developed, there were errors where the TimeSeries module produced very bad predictions; have those problems now been solved and improved? | open | 2025-03-21T11:06:57Z | 2025-03-21T11:06:57Z | https://github.com/autogluon/autogluon/issues/4993 | [] | PitiChka | 0 |
lukas-blecher/LaTeX-OCR | pytorch | 231 | take a screenshot but the latexocr app crashes immediately | When I type `snip` in latexocr after taking the screenshot, the app crashes and closes immediately. I tested the pix2tex mode and it works. What error did it encounter? | open | 2023-01-16T16:54:28Z | 2023-01-21T16:08:55Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/231 | [] | wsp666 | 1 |
hankcs/HanLP | nlp | 1,864 | ================================ERROR LOG BEGINS================================ | 2023-12-17 19:59:57.877358: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Failed to load https://file.hankcs.com/hanlp/mtl/close_tok_pos_ner_srl_dep_sdp_con_electra_small_20210111_124159.zip
If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues
When reporting an issue, make sure to paste the FULL ERROR LOG below.
================================ERROR LOG BEGINS================================
OS: Windows-10-10.0.22621-SP0
Python: 3.11.5
PyTorch: 2.1.0
HanLP: 2.1.0-beta.54
Traceback (most recent call last):
File "D:\AI\myAI\测试.py", line 6, in <module>
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_SMALL_ZH) # 中文
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\__init__.py", line 43, in load
return load_from_meta_file(save_dir, 'meta.json', verbose=verbose, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\utils\component_util.py", line 186, in load_from_meta_file
raise e from None
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\utils\component_util.py", line 99, in load_from_meta_file
obj: Component = object_from_classpath(cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp_common\reflection.py", line 27, in object_from_classpath
classpath = str_to_type(classpath)
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp_common\reflection.py", line 44, in str_to_type
cls = getattr(importlib.import_module(module_name), class_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1204, in _gcd_import
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\components\mtl\multi_task_learning.py", line 27, in <module>
from hanlp.components.mtl.tasks import Task
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\components\mtl\tasks\__init__.py", line 23, in <module>
from hanlp.transform.transformer_tokenizer import TransformerSequenceTokenizer
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\transform\transformer_tokenizer.py", line 9, in <module>
from hanlp.layers.transformers.pt_imports import PreTrainedTokenizer, PretrainedConfig, AutoTokenizer_
File "D:\Anaconda\envs\tts\Lib\site-packages\hanlp\layers\transformers\pt_imports.py", line 11, in <module>
from transformers import BertTokenizer, BertConfig, PretrainedConfig, AutoConfig, AutoTokenizer, PreTrainedTokenizer, \
File "<frozen importlib._bootstrap>", line 1229, in _handle_fromlist
File "D:\Anaconda\envs\tts\Lib\site-packages\transformers\utils\import_utils.py", line 1175, in __getattr__
value = getattr(module, name)
^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\transformers\utils\import_utils.py", line 1174, in __getattr__
module = self._get_module(self._class_to_module[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Anaconda\envs\tts\Lib\site-packages\transformers\utils\import_utils.py", line 1186, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):
Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
cannot import name 'formatargspec' from 'inspect' (D:\Anaconda\envs\tts\Lib\inspect.py)
=================================ERROR LOG ENDS=================================
* [x] I've completed this form and searched the web for solutions.
 | closed | 2023-12-17T12:05:59Z | 2023-12-22T02:24:38Z | https://github.com/hankcs/HanLP/issues/1864 | [
"invalid"
] | zsxbcc | 1 |
databricks/koalas | pandas | 1,826 | ModuleNotFoundError: No module named 'databricks' when using apply_batch or apply_transform | Hi,
I'm using Spark in client mode and I've gotten Koalas working, but the `apply_batch` method seems to indicate that koalas is missing from the executor nodes. Is it really the case that koalas must be explicitly installed on the worker nodes? Or is it another issue / something simple I'm missing? Spark version: 2.4.3, Koalas version: 1.2.0.
Example:
```
kdf = ks.DataFrame({'a': range(0, 20000), 'i': range(0, 20000)}).set_index('i')
# --> works
kdf.head(10)
# --> also works
def test_apply(df):
return df
kdf.koalas.apply_batch(test_apply)
# --> fails, see error below
```
Error:
```
...
File "/var/lib/mesos/slaves/ad6bc800-ab3b-486e-bfa2-cf24ca7aebae-S1/frameworks/7461c35c-4cf7-47a5-ae69-3ba9362cee61-71216/executors/1/runs/71cf1309-75b9-4ac2-b14e-3abc04506810/spark-2.4.3-bin-datalake-hadoop-2.9.2-1/python/lib/pyspark.zip/pyspark/serializers.py", line 580, in loads
return pickle.loads(obj, encoding=encoding)
ModuleNotFoundError: No module named 'databricks'
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:452)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:172)
at org.apache.spark.sql.execution.python.ArrowPythonRunner$$anon$1.read(ArrowPythonRunner.scala:122)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:836)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
``` | closed | 2020-10-06T12:28:06Z | 2020-11-13T05:01:54Z | https://github.com/databricks/koalas/issues/1826 | [
"question"
] | maxpagels | 1 |
seleniumbase/SeleniumBase | pytest | 2215 | How to set options with the `driver` format inside the code? (Eg. --undetected) | Can you tell me how to set options such as --headed and --undetected inside code? I tried but couldn't solve this. | closed | 2023-10-28T10:07:31Z | 2023-10-29T06:55:02Z | https://github.com/seleniumbase/SeleniumBase/issues/2215 | [
"question",
"UC Mode / CDP Mode"
] | Takundanashe | 2 |
assafelovic/gpt-researcher | automation | 204 | duckduckgo 3.9.1 cannot be resolved | Just tried installing locally and it failed when trying to find duckduckgo_search v3.9.1
```
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
...
...
ERROR: Could not find a version that satisfies the requirement duckduckgo_search==3.9.1 (from versions: 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9.5, 1.0, 1.1, 1.2, 1.3, 1.3.5, 1.4, 1.5, 1.5.1, 1.5.2, 1.6, 1.6.2, 1.7.1, 1.8, 1.8.1, 1.8.2, 2.0.2, 2.1.3, 2.2.0, 2.2.2, 2.3.0, 2.3.1, 2.4.0, 2.5.0, 2.6.0, 2.6.1, 2.7.0, 2.8.0, 2.8.1, 2.8.3, 2.8.4, 2.8.5, 2.8.6, 2.9.0, 2.9.1, 2.9.2, 2.9.3, 2.9.4, 2.9.5, 3.0.2, 3.1.1, 3.2.0, 3.3.0, 3.4.1, 3.5.0, 3.6.0, 3.7.0, 3.7.1, 3.8.0, 3.8.1, 3.8.2, 3.8.3, 3.8.4, 3.8.5, 3.8.6, 3.9.3)
ERROR: No matching distribution found for duckduckgo_search==3.9.1
```
It looks almost as if it was removed from the index, judging by PyPI: https://pypi.org/project/duckduckgo-search/#history
Maybe replacing the current constraint with something like `duckduckgo_search~=3.9` (https://peps.python.org/pep-0440/#compatible-release) in [requirements.txt](https://github.com/assafelovic/gpt-researcher/blob/master/requirements.txt) would be a safer option to avoid such issues (given patch versions should not include breaking changes)?
| closed | 2023-10-09T18:16:26Z | 2023-10-13T13:01:38Z | https://github.com/assafelovic/gpt-researcher/issues/204 | [] | ivarprudnikov | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1199 | Load trained model to other script | Can I load a trained model in another script and call a predict function (to test a single image)? | closed | 2020-11-25T16:51:56Z | 2022-09-06T20:55:23Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1199 | [] | darrenleeleelee1 | 3 |
jupyterhub/repo2docker | jupyter | 423 | Make a logo for repo2docker? | Sometimes I'd like to have a visual way to describe repo2docker. Right now, It's hard to do this because there's no "visual" component to this project. Are folks generally +1 on having a logo of some kind for this project? | closed | 2018-10-03T15:54:27Z | 2018-11-12T22:17:54Z | https://github.com/jupyterhub/repo2docker/issues/423 | [
"enhancement",
"help wanted",
"needs: discussion"
] | choldgraf | 40 |
JaidedAI/EasyOCR | machine-learning | 513 | How to use the 'trainer' folder to train the model? | Are there any steps describing how to use the 'trainer' folder to train a model?
Thanks in advance. | closed | 2021-08-10T07:57:15Z | 2021-09-04T15:54:51Z | https://github.com/JaidedAI/EasyOCR/issues/513 | [] | fengjinhanzhenshuai | 0 |
pydantic/logfire | fastapi | 614 | Support opentelemetry-instrumentation-llamaindex | ### Description
Hi,
There is a package to auto-instrument LlamaIndex, `opentelemetry-instrumentation-llamaindex`; I wonder if it can be added to logfire. I tried to use it directly with `LlamaIndexInstrumentor().instrument()`, but got a `maximum recursion depth exceeded` error when adding it.
Best, | closed | 2024-11-19T21:53:11Z | 2025-01-02T13:50:09Z | https://github.com/pydantic/logfire/issues/614 | [
"Feature Request"
] | yanqianglu | 3 |
pyeve/eve | flask | 947 | Serializer for boolean | If I use a field where the type is "integer", I can also post a string containing an integer and it will be converted by the serializer.
Is there a reason this does not happen for booleans?
It is missing in this list:
https://github.com/nicolaiarocci/eve/blob/develop/eve/io/mongo/mongo.py#L77 | closed | 2016-12-10T19:11:30Z | 2016-12-16T09:38:40Z | https://github.com/pyeve/eve/issues/947 | [
"enhancement"
] | cburchert | 1 |
deeppavlov/DeepPavlov | nlp | 1542 | Is there any way to achieve multi-lingual intent classification | Want to contribute to DeepPavlov? Please read the [contributing guideline](http://docs.deeppavlov.ai/en/master/devguides/contribution_guide.html) first.
**What problem are we trying to solve?**:
```
I am new to this repo. I am trying to solve multi-lingual intent classification for one of my chatbots. Is there any way to achieve this?
```
| closed | 2022-03-29T11:34:43Z | 2022-04-07T12:43:08Z | https://github.com/deeppavlov/DeepPavlov/issues/1542 | [
"enhancement"
] | SAIVENKATARAJU | 3 |
hankcs/HanLP | nlp | 1,601 | install hanlp bug,demo run error |
**Describe the bug**
If the problem still persists, please submit an issue to https://github.com/hankcs/HanLP/issues
When reporting an issue, make sure to paste the FULL ERROR LOG above.
dony222:test dony$ python3 testHanLP.py
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Failed to load https://file.hankcs.com/hanlp/mtl/close_tok_pos_ner_srl_dep_sdp_con_electra_small_zh_20201222_130611.zip. See traceback below:
================================ERROR LOG BEGINS================================
Traceback (most recent call last):
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp/utils/component_util.py", line 74, in load_from_meta_file
obj: Component = object_from_classpath(cls)
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp_common/reflection.py", line 27, in object_from_classpath
classpath = str_to_type(classpath)
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp_common/reflection.py", line 44, in str_to_type
cls = getattr(importlib.import_module(module_name), class_name)
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp/components/mtl/multi_task_learning.py", line 23, in <module>
from hanlp.components.mtl.tasks import Task
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp/components/mtl/tasks/__init__.py", line 22, in <module>
from hanlp.transform.transformer_tokenizer import TransformerSequenceTokenizer
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp/transform/transformer_tokenizer.py", line 9, in <module>
from hanlp.layers.transformers.pt_imports import PreTrainedTokenizer, PretrainedConfig, AutoTokenizer
File "/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/hanlp/layers/transformers/pt_imports.py", line 10, in <module>
from transformers import BertTokenizer, BertConfig, PretrainedConfig, \
ImportError: cannot import name 'BertTokenizerFast' from 'transformers' (/usr/local/Cellar/python/3.7.2_2/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/transformers/__init__.py)
=================================ERROR LOG ENDS=================================
**Code to reproduce the issue**
Provide a reproducible test case that is the bare minimum necessary to generate the problem.
```
import hanlp
HanLP = hanlp.load(hanlp.pretrained.mtl.CLOSE_TOK_POS_NER_SRL_DEP_SDP_CON_ELECTRA_SMALL_ZH)
HanLP(['2021年HanLPv2.1为生产环境带来次世代最先进的多语种NLP技术。', '阿婆主来到北京立方庭参观自然语义科技公司。']).pretty_print()
```
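For context, the traceback above ends in `ImportError: cannot import name 'BertTokenizerFast' from 'transformers'`, which usually indicates an outdated transformers installation. A quick check one could run first (a generic sketch, not from the report; the exact minimum transformers version HanLP 2.1 requires is not stated here):
```python
import transformers
print(transformers.__version__)
# On sufficiently recent releases this import succeeds:
from transformers import BertTokenizerFast  # noqa: F401  (raises ImportError on old versions)
```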
**Describe the current behavior**
A clear and concise description of what happened.
**Expected behavior**
A clear and concise description of what you expected to happen.
**System information**
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04):
- Python version:3.7
- HanLP version:2.1.0a5
**Other info / logs**
Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached.
* [x] I've completed this form and searched the web for solutions.
| closed | 2021-01-06T01:49:16Z | 2021-01-06T02:10:53Z | https://github.com/hankcs/HanLP/issues/1601 | [
"bug"
] | songsh | 2 |
gradio-app/gradio | python | 10472 | Gradio chatbot image/media are too big but MultimodalTextbox buttons are too small | - [x] I have searched to see if a similar issue already exists.
**Reproduction**
I have been working on a multimodal chatbot as described in [Gradio Demo: chatbot_multimodal](https://github.com/gradio-app/gradio/blob/main/demo/chatbot_multimodal/run.ipynb)
**Is your feature request related to a problem? Please describe.**
**Problem 1**: The Gradio chatbot image/media display size is huge. This is not needed; it looks cumbersome and leaves little room to display text.
As seen in the example, the image and audio displayed in the chatbot bubbles are way bigger than needed. They don't need to be shown at full size most of the time, and you don't get much space left for questions and answers in the chat area. They could be shrunk to a media-type icon most of the time, as seen in most other chat apps, and they should be clickable to show the full view (images currently appear clickable, but clicking has no effect) or to play the sound (audio already supports this). Note that similar size problems have been reported by other users, but there has not been a good solution.
**Problem 2**: The MultimodalTextbox and the attachment/mic/send buttons at the bottom are small. They should be customizable in size, but I can't find a way to do so.
**Describe the solution you'd like**
We shall have options in gr.chatbot() and gr.MultimodalTextbox() functions to change/modify the size of images/media or button sizes.
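To make the request concrete, a purely hypothetical sketch of what such options might look like (none of these parameters exist in Gradio today):
```python
# Hypothetical parameters illustrating the feature request; not current Gradio API.
chatbot = gr.Chatbot(media_display="thumbnail", media_max_height=120)
textbox = gr.MultimodalTextbox(button_size="large", textbox_height=80)
```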
**Additional context**
 | open | 2025-01-30T22:27:01Z | 2025-02-03T08:05:50Z | https://github.com/gradio-app/gradio/issues/10472 | [
"💬 Chatbot"
] | bigmw | 1 |
iperov/DeepFaceLab | deep-learning | 918 | DeepFaceLab | open | 2020-10-05T23:03:05Z | 2023-06-08T21:26:50Z | https://github.com/iperov/DeepFaceLab/issues/918 | [] | AL-DU-SHOW | 1 |
|
pydata/pandas-datareader | pandas | 694 | RLS: 0.8.0 Release Tracker | A list of issues remaining before a 0.8.0 release:
- [x] Finalize what's new
- [x] Default starting dates #607
| closed | 2019-09-18T08:38:01Z | 2019-09-22T21:08:12Z | https://github.com/pydata/pandas-datareader/issues/694 | [] | bashtage | 14 |
ets-labs/python-dependency-injector | flask | 478 | six library update to 1.16.0 | Any plans on updating six to 1.16.0? Currently version is limited to <=1.15.0 | closed | 2021-07-29T15:56:41Z | 2021-07-29T22:14:41Z | https://github.com/ets-labs/python-dependency-injector/issues/478 | [
"enhancement"
] | ilsurih | 2 |
marcomusy/vedo | numpy | 286 | Drag helper to rotate a plane and display info on click of a volume slice | I would like to display a helper similar to the ones used for slicing (the sphere that you can drag to slice an object) but instead I would like this to drive the orientation of a plane. Is there something in vedo to do this or should I implement event listeners with mouse down to activate drag, mouse move to update while dragging and mouse up to cancel it? | closed | 2021-01-06T10:22:06Z | 2021-05-05T09:41:24Z | https://github.com/marcomusy/vedo/issues/286 | [] | nantille | 6 |
MagicStack/asyncpg | asyncio | 227 | How could I know which column caused `asyncpg.exceptions.UniqueViolationError`? | I want to know directly which column caused the `UniqueViolationError`, rather than parsing `e.message` or `e.detail`.
I need it because I want to build my own message for my API consumers.
Currently I just expose `UniqueViolationError.detail`, which I think is unsafe.
```python
try:
ins_res = await app.db.fetchrow(user_tbl.insert().values(**user.to_native('create')))
except UniqueViolationError as e:
return json({'message': e.detail}, status=409)
```
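A sketch of the kind of mapping one could build instead (an assumption, not from the original post): asyncpg exposes PostgreSQL's diagnostic fields on the exception, such as `constraint_name`, which can be translated into a user-facing message without leaking `e.detail` verbatim. The constraint name below is hypothetical:
```python
try:
    ins_res = await app.db.fetchrow(user_tbl.insert().values(**user.to_native('create')))
except UniqueViolationError as e:
    # e.constraint_name identifies which unique constraint was violated
    friendly = {
        'user_email_key': 'This email address is already registered.',  # hypothetical constraint name
    }.get(e.constraint_name, 'A unique constraint was violated.')
    return json({'message': friendly}, status=409)
```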
* **asyncpg version**: 0.12.0
* **PostgreSQL version**: PostgreSQL 10.0 on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18) 6.3.0 20170516, 64-bit
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: No; Yes
* **Python version**: 3.6.2
* **Platform**: Ubuntu 16.04.3/Linux 4.4.0-97-generic
* **Do you use pgbouncer?**: I dont know
* **Did you install asyncpg with pip?**: not exactly, I use pipenv
* **If you built asyncpg locally, which version of Cython did you use?**: No
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: Not tested
| closed | 2017-11-14T12:12:38Z | 2017-11-14T14:54:36Z | https://github.com/MagicStack/asyncpg/issues/227 | [
"question"
] | wonderbeyond | 3 |
piccolo-orm/piccolo | fastapi | 315 | ASGI template improvements for FastAPI | When doing some demos of Piccolo I've realised there's some weirdness with the FastAPI ASGI template.
Variables for Piccolo table instances are sometimes prefixed with an underscore, to distinguish them from Pydantic model instances. Instead, the Pydantic model instances should have the `_model` postfix.
For example:
```python
@app.put('/tasks/{task_id}/', response_model=TaskModelOut)
async def update_task(task_id: int, task: TaskModelIn):
_task = await Task.objects().where(Task.id == task_id).first().run()
...
```
Could be improved to this:
```python
@app.put('/tasks/{task_id}/', response_model=TaskModelOut)
async def update_task(task_id: int, task_model: TaskModelIn):
task = await Task.objects().where(Task.id == task_id).first().run()
...
```
Also, some of the endpoints are returning Pydantic instances, but this isn't required - if a dictionary is returned, then FastAPI automatically converts it into a Pydantic model instance. This makes the code slightly cleaner.
And finally, when converting a Piccolo table instance into a dict, then `__dict__` is being used, rather than using the new `to_dict` method.
```python
# Current:
task.__dict__
# Better:
task.to_dict()
``` | closed | 2021-10-29T18:25:35Z | 2021-10-29T18:36:39Z | https://github.com/piccolo-orm/piccolo/issues/315 | [
"enhancement"
] | dantownsend | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1192 | Does Flask-SocketIO work with paramiko in a multithreading env | My last hurdle:
in my backend code I need to use paramiko in a threaded environment to run remote commands.
It looks like paramiko doesn't support gevent async. Does that mean I can't use Flask-SocketIO for real-time updates? | closed | 2020-02-24T16:05:27Z | 2020-06-30T22:52:01Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1192 | [
"question"
] | Set4now | 1 |
suitenumerique/docs | django | 267 | Change invitation email text | ## Bug Report
**Problematic behavior**
The email you receive reads like a welcome email.

**Expected behavior/code**
Subject : Document partagé avec vous : "{document.title}"
Content :
```
<h1>{user.email} vous a partagé un document</h1>
<p>{user.email} vous a donné accès au document "{document.title}" en tant que : {document.role}.</p>
<a href="{doc.link}">{document.title}</a>
<a class="btn" href="{doc.link}">Ouvrir le document</btn>
<separatorline>
Docs, votre nouvel outil incontournable pour organiser, partager et gérer vos documents en équipe
``` | closed | 2024-09-17T18:15:50Z | 2024-09-26T07:58:12Z | https://github.com/suitenumerique/docs/issues/267 | [
"backend",
"refacto"
] | virgile-dev | 4 |
mljar/mljar-supervised | scikit-learn | 181 | The invalid filename in EDA if the feature contains a forbidden character | X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)
automl = AutoML()
automl.fit(X_train, y_train)
After running the code, it raises an error like this:
AutoML directory: AutoML_9
The task is regression with evaluation metric rmse
AutoML will use algorithms: ['Baseline', 'Linear', 'Decision Tree', 'Random Forest', 'Xgboost', 'Neural Network']
AutoML will ensemble availabe models
2020-09-10 18:59:15,591 supervised.preprocessing.eda ERROR There was an issue when running EDA. [Errno 22] Invalid argument: 'AutoML_9\\EDA\\Spd*LSBW.png'
AutoML steps: ['simple_algorithms', 'default_algorithms', 'ensemble']
* Step simple_algorithms will try to check up to 3 models
1_Baseline rmse 2.013368 trained in 0.11 seconds
2_DecisionTree rmse 0.686167 trained in 9.43 seconds
---------------------------------------------------------------------------
_RemoteTraceback Traceback (most recent call last)
_RemoteTraceback:
'''
Traceback (most recent call last):
File "D:\Anaconda3\envs\mljar\lib\site-packages\joblib\externals\loky\process_executor.py", line 391, in _process_worker
call_item = call_queue.get(block=True, timeout=timeout)
File "D:\Anaconda3\envs\mljar\lib\multiprocessing\queues.py", line 113, in get
return _ForkingPickler.loads(res)
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\__init__.py", line 3, in
from supervised.automl import AutoML
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\automl.py", line 3, in
from supervised.base_automl import BaseAutoML
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\base_automl.py", line 17, in
from supervised.algorithms.registry import AlgorithmsRegistry
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\algorithms\registry.py", line 63, in
import supervised.algorithms.random_forest
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\algorithms\random_forest.py", line 8, in
from supervised.algorithms.algorithm import BaseAlgorithm
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\algorithms\algorithm.py", line 3, in
from supervised.utils.importance import PermutationImportance
File "C:\Users\ZW\AppData\Roaming\Python\Python36\site-packages\supervised\utils\importance.py", line 7, in
import matplotlib.pyplot as plt
File "D:\Anaconda3\envs\mljar\lib\site-packages\matplotlib\pyplot.py", line 43, in
from matplotlib.figure import Figure, figaspect
File "", line 971, in _find_and_load
File "", line 955, in _find_and_load_unlocked
File "", line 665, in _load_unlocked
File "", line 674, in exec_module
File "", line 764, in get_code
File "", line 833, in get_data
MemoryError
'''
The above exception was the direct cause of the following exception:
BrokenProcessPool Traceback (most recent call last)
in
5 # explain_level=0
6 )
----> 7 automl.fit(X_train, y_train)
~\AppData\Roaming\Python\Python36\site-packages\supervised\automl.py in fit(self, X, y)
276 self : AutoML object
277 """
--> 278 return self._fit(X, y)
279
280 def predict(self, X):
~\AppData\Roaming\Python\Python36\site-packages\supervised\base_automl.py in _fit(self, X, y)
668
669 except Exception as e:
--> 670 raise e
671 finally:
672 if self._X_path is not None:
~\AppData\Roaming\Python\Python36\site-packages\supervised\base_automl.py in _fit(self, X, y)
655 trained = self.ensemble_step(is_stacked=params["is_stacked"])
656 else:
--> 657 trained = self.train_model(params)
658
659 params["status"] = "trained" if trained else "skipped"
~\AppData\Roaming\Python\Python36\site-packages\supervised\base_automl.py in train_model(self, params)
227 f"Train model #{len(self._models)+1} / Model name: {params['name']}"
228 )
--> 229 mf.train(model_path)
230
231 # save the model
~\AppData\Roaming\Python\Python36\site-packages\supervised\model_framework.py in train(self, model_path)
176 metric_name=self.get_metric_name(),
177 ml_task=self._ml_task,
--> 178 explain_level=self._explain_level,
179 )
180
~\AppData\Roaming\Python\Python36\site-packages\supervised\algorithms\linear.py in interpret(self, X_train, y_train, X_validation, y_validation, model_file_path, learner_name, target_name, class_names, metric_name, ml_task, explain_level)
137 metric_name,
138 ml_task,
--> 139 explain_level,
140 )
141 if explain_level == 0:
~\AppData\Roaming\Python\Python36\site-packages\supervised\algorithms\algorithm.py in interpret(self, X_train, y_train, X_validation, y_validation, model_file_path, learner_name, target_name, class_names, metric_name, ml_task, explain_level)
77 learner_name,
78 metric_name,
---> 79 ml_task,
80 )
81 if explain_level > 1:
~\AppData\Roaming\Python\Python36\site-packages\supervised\utils\importance.py in compute_and_plot(model, X_validation, y_validation, model_file_path, learner_name, metric_name, ml_task)
58 n_jobs=-1, # all cores
59 random_state=12,
---> 60 n_repeats=5, # default
61 )
62
D:\Anaconda3\envs\mljar\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
70 FutureWarning)
71 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72 return f(**kwargs)
73 return inner_f
74
D:\Anaconda3\envs\mljar\lib\site-packages\sklearn\inspection\_permutation_importance.py in permutation_importance(estimator, X, y, scoring, n_repeats, n_jobs, random_state)
135 scores = Parallel(n_jobs=n_jobs)(delayed(_calculate_permutation_scores)(
136 estimator, X, y, col_idx, random_seed, n_repeats, scorer
--> 137 ) for col_idx in range(X.shape[1]))
138
139 importances = baseline_score - np.array(scores)
D:\Anaconda3\envs\mljar\lib\site-packages\joblib\parallel.py in __call__(self, iterable)
1015
1016 with self._backend.retrieval_context():
-> 1017 self.retrieve()
1018 # Make sure that we get a last message telling us we are done
1019 elapsed_time = time.time() - self._start_time
D:\Anaconda3\envs\mljar\lib\site-packages\joblib\parallel.py in retrieve(self)
907 try:
908 if getattr(self._backend, 'supports_timeout', False):
--> 909 self._output.extend(job.get(timeout=self.timeout))
910 else:
911 self._output.extend(job.get())
D:\Anaconda3\envs\mljar\lib\site-packages\joblib\_parallel_backends.py in wrap_future_result(future, timeout)
560 AsyncResults.get from multiprocessing."""
561 try:
--> 562 return future.result(timeout=timeout)
563 except LokyTimeoutError:
564 raise TimeoutError()
D:\Anaconda3\envs\mljar\lib\concurrent\futures\_base.py in result(self, timeout)
430 raise CancelledError()
431 elif self._state == FINISHED:
--> 432 return self.__get_result()
433 else:
434 raise TimeoutError()
D:\Anaconda3\envs\mljar\lib\concurrent\futures\_base.py in __get_result(self)
382 def __get_result(self):
383 if self._exception:
--> 384 raise self._exception
385 else:
386 return self._result
BrokenProcessPool: A task has failed to un-serialize. Please ensure that the arguments of the function are all picklable.
| closed | 2020-09-10T11:02:24Z | 2020-09-15T13:43:43Z | https://github.com/mljar/mljar-supervised/issues/181 | [
"bug",
"help wanted"
] | peter-WeiZhang | 7 |
saleor/saleor | graphql | 17,167 | Guidance on Implementing Multi-Tenancy in Saleor: tenant_manager Not Found Issue | Hello Saleor Community,
I am working on a **non-commercial project** where I aim to implement a **multi-tenant solution** on top of Saleor. My application requires the capability to serve multiple clients, each with their own isolated database, but I want to achieve this without disrupting Saleor’s core functionality.
We are working on implementing a multi-tenancy system in our application built on Saleor. The idea is to create a tenant_manager app that will handle the dynamic assignment of databases based on incoming requests. The flow starts with a middleware that intercepts each request and identifies the tenant (e.g., based on a unique username, domain, or tenant-specific header). If the tenant does not already exist, it will trigger a function to create a new tenant and its corresponding database. For user registration, the code will automatically register each user in the tenant's database and also record tenant details in a master database to maintain a global reference. When a user logs in, the middleware will look up their tenant information, dynamically connect to the appropriate database, and forward the request to ensure seamless multi-tenant behavior. Additionally, for tasks like user creation and login, we plan to integrate this tenant-specific logic into the Saleor GraphQL API to ensure that every operation is routed to the correct tenant database. This architecture will isolate data for each tenant while leveraging Saleor's robust core functionalities. We would like to know if this approach aligns with best practices for multi-tenancy or if there are better ways to achieve this goal.
### **My Goal**
To implement a multi-tenancy architecture where:
1. Each tenant has its own database.
2. Middleware dynamically determines the tenant database based on the request (e.g., a tenant ID or domain).
3. A `TenantManager` module will handle tenant-specific configurations, routing, and middleware.
4. The solution will use Saleor’s existing GraphQL APIs and authentication while maintaining tenant isolation.
---
### **My Plan So Far**
#### **1. Creating `tenant_manager`**
- Created a Django app named `tenant_manager` and added it to `INSTALLED_APPS` in `settings.py`:
```python
INSTALLED_APPS = [
...
"saleor.tenant_manager",
]
```
#### **2. Tenant Models**
Defined `Tenant` and `TenantUser` models to manage tenants and their users:
```python
from django.db import models
from django.contrib.auth import get_user_model
User = get_user_model()
class Tenant(models.Model):
tenant_id = models.CharField(max_length=255, unique=True)
database_name = models.CharField(max_length=255, unique=True)
created_at = models.DateTimeField(auto_now_add=True)
class TenantUser(models.Model):
user = models.OneToOneField(User, on_delete=models.CASCADE, related_name="tenant_user")
tenant = models.ForeignKey(Tenant, on_delete=models.CASCADE, related_name="users")
created_by = models.ForeignKey(User, related_name="tenants_created", on_delete=models.CASCADE)
```
#### **3. Middleware**
Added middleware to dynamically switch databases based on tenant context:
```python
from django.db import connections  # used below to switch the default DB


class TenantMiddleware:
def __init__(self, get_response):
self.get_response = get_response
def __call__(self, request):
tenant_name = request.headers.get("X-Tenant-Name")
if tenant_name:
from .models import Tenant
try:
tenant = Tenant.objects.get(tenant_id=tenant_name)
connections["default"].settings_dict["NAME"] = tenant.database_name
request.tenant = tenant
except Tenant.DoesNotExist:
raise ValueError("Invalid tenant")
response = self.get_response(request)
return response
```
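A note on the middleware above: mutating `connections["default"].settings_dict` per request can be fragile (a connection that is already open for that thread keeps pointing at the old database), so a commonly seen alternative, sketched below under assumed naming conventions, is to register a separate connection alias per tenant and route through a thread-local:
```python
# A minimal sketch (assumptions, not the original plan): register one connection
# alias per tenant at runtime and remember the active alias in a thread-local,
# instead of rewriting the "default" alias on every request.
import threading

from django.conf import settings
from django.db import connections

_local = threading.local()


def activate_tenant(tenant):
    alias = f"tenant_{tenant.tenant_id}"  # assumed alias naming convention
    if alias not in connections.databases:
        db = dict(settings.DATABASES["default"])  # reuse engine/user/host settings
        db["NAME"] = tenant.database_name
        connections.databases[alias] = db
    _local.tenant_alias = alias


def current_tenant_alias():
    return getattr(_local, "tenant_alias", None)
```
The middleware would then call `activate_tenant(tenant)` instead of rewriting `settings_dict`, the router's `db_for_read`/`db_for_write` could return `current_tenant_alias()` for tenant-scoped models, and individual queries can also opt in explicitly with `.using(current_tenant_alias())`.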
#### **4. Database Router**
Planned to implement a `db_router` to manage tenant-specific migrations:
```python
class TenantRouter:
def db_for_read(self, model, **hints):
if hasattr(model._meta, "app_label") and model._meta.app_label == "tenant_manager":
return "tenant_master"
return None
def db_for_write(self, model, **hints):
if hasattr(model._meta, "app_label") and model._meta.app_label == "tenant_manager":
return "tenant_master"
return None
```
---
### **The Issue**
When I try to run `python manage.py makemigrations tenant_manager`, I get this error:
```
ModuleNotFoundError: No module named 'tenant_manager'
```
I’ve verified the following:
1. The `tenant_manager` folder exists under the `saleor` directory.
2. It contains an `apps.py` file with the following configuration:
```python
class TenantManagerConfig(AppConfig):
name = "saleor.tenant_manager"
```
3. `tenant_manager` is added to `INSTALLED_APPS` and the correct folder structure is maintained.
---
### **Questions**
1. Is this the right approach to implement multi-tenancy in Saleor without disrupting its core functionality?
2. What could be causing the `tenant_manager` module to not be recognized?
3. Are there alternative approaches or best practices for implementing multi-tenancy in Saleor?
I’m humbly seeking guidance on how to proceed with this. Any feedback, suggestions, or directions would be greatly appreciated. Thank you in advance for helping me with this learning project! | closed | 2024-12-12T07:14:24Z | 2025-03-13T10:56:21Z | https://github.com/saleor/saleor/issues/17167 | [] | egastech | 2 |
pallets/flask | python | 4,625 | mypy errors when static_folder is given a Pathlib | This is a continuation of #4150. I'm still getting the same issues using Flask 2.1.2. Not sure why this is still happening.
Thank you! | closed | 2022-06-07T21:28:59Z | 2022-06-23T00:05:41Z | https://github.com/pallets/flask/issues/4625 | [] | gitpushdashf | 2 |
keras-team/keras | data-science | 20,518 | Need to explicitly specify `x` and `y` instead of using a generator when the model has two inputs. | tf.__version__: '2.17.0'
tf.keras.__version__: '3.5.0'
For an image captioning model, one input is the vector representation of the images and the other is the caption sequence for the decoder.
```python
def batch_generator(batch_size, tokens_train, transfer_values_train, is_random_caption=False):
# ...
x_data = \
{
'transfer_values_input': transfer_values, #transfer_values
'decoder_input': decoder_input_data # decoder_input_data
}
y_data = \
{
'decoder_output': decoder_output_data
}
yield (x_data, y_data)
```
Then training on this does not work; it mistakes the "decoder_input" or "decoder_output" for the "transfer_values_input" of the model, even with the right parameter names in the model.
```python
decoder_model.fit(x=flick_generator,
steps_per_epoch=flick_steps_per_epoch,
epochs=20,
callbacks=flick_callbacks)
```
Only explicitly specifying `x` and `y` will work.
```python
from tqdm import tqdm
for epoch in tqdm(range(2)):
for step in range(flick_steps_per_epoch):
x_data, y_data = next(flick_generator)
decoder_model.fit(
x=x_data,
y=y_data,
batch_size=len(x_data[0]),
verbose=0,
)
print(f"Epoch {epoch+1} completed.")
``` | closed | 2024-11-19T20:20:59Z | 2024-12-25T02:01:11Z | https://github.com/keras-team/keras/issues/20518 | [
"stat:awaiting response from contributor",
"stale"
] | danica-zhu | 3 |
geex-arts/django-jet | django | 108 | 404 when attempting to reach jet dashboard | Possible duplicate of #62, but there was no mention of how the issue was resolved there.
Django 1.10
Django-jet 0.9.1
Python 3.5
I followed installation steps exactly and my files all match what is expected. However, if I try to reach /jet or /jet/dashboard:
`The current URL, jet/dashboard/, didn't match any of these.`
even though the URL patterns are displaying
> ^jet/ ^add_bookmark/$ [name='add_bookmark']
> ^jet/ ^remove_bookmark/$ [name='remove_bookmark']
> ^jet/ ^toggle_application_pin/$ [name='toggle_application_pin']
> ^jet/ ^model_lookup/$ [name='model_lookup']
> ^jet/ ^jsi18n/$ [name='jsi18n']
> ^jet/dashboard/ ^module/(?P<pk>\d+)/$ [name='update_module']
> ^jet/dashboard/ ^update_dashboard_modules/$ [name='update_dashboard_modules']
> ^jet/dashboard/ ^add_user_dashboard_module/$ [name='add_user_dashboard_module']
> ^jet/dashboard/ ^update_dashboard_module_collapse/$ [name='update_dashboard_module_collapse']
> ^jet/dashboard/ ^remove_dashboard_module/$ [name='remove_dashboard_module']
> ^jet/dashboard/ ^load_dashboard_module/(?P<pk>\d+)/$ [name='load_dashboard_module']
> ^jet/dashboard/ ^reset_dashboard/$ [name='reset_dashboard']
> ^jet/dashboard/ ^jsi18n/$ [name='jsi18n']
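None of the quoted patterns defines an index route for `^jet/$` or `^jet/dashboard/$` themselves, which by itself would explain the 404 when those URLs are visited directly. For comparison, a sketch of the urls.py wiring described in the django-jet docs (assumed, not taken from this project):
```python
from django.conf.urls import url, include
from django.contrib import admin

urlpatterns = [
    url(r'^jet/', include('jet.urls', 'jet')),                                # django-jet core URLs
    url(r'^jet/dashboard/', include('jet.dashboard.urls', 'jet-dashboard')),  # dashboard URLs
    url(r'^admin/', admin.site.urls),                                         # dashboard shows up inside the admin
]
```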
| closed | 2016-08-24T18:28:04Z | 2018-08-01T16:19:12Z | https://github.com/geex-arts/django-jet/issues/108 | [] | kevin-miles | 5 |
jina-ai/serve | fastapi | 5,857 | Revisit Jina's Client profiling method | As described by #5856,
The profiling method of the Jina Client may have some errors and be under-documented. | closed | 2023-05-10T04:36:32Z | 2023-09-08T00:16:11Z | https://github.com/jina-ai/serve/issues/5857 | [
"Stale"
] | JoanFM | 2 |
encode/apistar | api | 346 | Authentication and permission with HTMLRenderer() | I am facing a problem using authentication/permission with a route function annotated with HTMLRenderer(). The problem occurs when the user doesn't have permission or is not allowed to access the route. In this situation, a response payload is automatically produced that should be rendered as JSON:
```json
{
"message": "Forbidden"
}
```
But if the function has HTMLRenderer, even if it also has JSONRenderer, apistar tries to use only the HTMLRenderer, and since the `data` parameter in this case is a dict (not str or bytes) it results in an AssertionError:
```python
@annotate(renderers=[JSONRenderer(), HTMLRenderer()])
```
https://github.com/encode/apistar/blob/7d7dc3a11983915882687ca8607b9eca2ae5ff57/apistar/renderers.py#L37-L38
```bash
127.0.0.1 - - [28/Oct/2017 12:57:49] "GET /oauth/authorize HTTP/1.1" 500 -
Traceback (most recent call last):
File "project/authorization-server/.env/lib/python3.6/site-packages/apistar/frameworks/wsgi.py", line 134, in __call__
response = self.http_injector.run_all(funcs, state=state)
File "project/authorization-server/.env/lib/python3.6/site-packages/apistar/components/dependency.py", line 141, in run_all
ret = step.func(**kwargs)
File "project/authorization-server/.env/lib/python3.6/site-packages/apistar/hooks.py", line 68, in render_response
content = injector.run(renderer.render, {'response_data': data})
File "project/authorization-server/.env/lib/python3.6/site-packages/apistar/components/dependency.py", line 336, in run
ret = step.func(**kwargs)
File "project/authorization-server/.env/lib/python3.6/site-packages/apistar/renderers.py", line 38, in render
assert isinstance(data, (str, bytes))
AssertionError
``` | closed | 2017-10-28T16:30:45Z | 2018-09-25T14:44:51Z | https://github.com/encode/apistar/issues/346 | [] | 7robertodantas | 1 |
davidsandberg/facenet | tensorflow | 568 | save the model with tf.serving | We trained the model with saver.save, which produced four files in the model dir. Has anyone saved the model in the tf.serving format? Can you tell me how to convert between the two formats? Thanks very much. | closed | 2017-12-05T03:45:13Z | 2018-04-04T20:27:15Z | https://github.com/davidsandberg/facenet/issues/568 | [] | bingjilin | 5 |
jumpserver/jumpserver | django | 14850 | [Bug] Using JumpServerClient on a Windows 10 PC, cannot connect to a Windows device via RDP | ### Product Version
v3.0.1
### Product Edition
- [x] Community Edition
- [ ] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [x] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Windows 10 PC
### 🐛 Bug Description
Using JumpServerClient, I cannot connect to Windows resources. Clicking connect shows that the connection succeeded, but nothing happens afterwards.

### Recurrence Steps
Using JumpServerClient, I cannot connect to Windows resources. Clicking connect shows that the connection succeeded, but nothing happens afterwards.
### Expected Behavior
Using JumpServerClient, I cannot connect to Windows resources. Clicking connect shows that the connection succeeded, but nothing happens afterwards.
### Additional Information
_No response_
### Attempted Solutions
_No response_ | open | 2025-02-03T12:20:03Z | 2025-02-11T07:37:29Z | https://github.com/jumpserver/jumpserver/issues/14850 | [
"🐛 Bug",
"⏳ Pending feedback"
] | wlinuxgit | 2 |
tqdm/tqdm | jupyter | 1,000 | position argument implementation | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [x] visual output bug
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [ ] new feature request
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
4.47.0 3.6.9 (default, Apr 18 2020, 01:56:04)
[GCC 8.4.0] linux
```
I have been having trouble with specifying the position argument for tqdm for a block of 8 bars. Consider the following code:
```python
from time import sleep
from tqdm import trange, tqdm
L = list(range(9))
def progresser(n):
interval = 0.001 / (n + 2)
total = 1000
text = "#{}, est. {:<04.2}s".format(n, interval * total)
for _ in trange(total, desc=text, position=n):
sleep(interval)
if __name__ == '__main__':
list(map(progresser, L))
input(">")
```
This is the output:

As you can see the progress bars are not outputted correctly. I believe the issue is here https://github.com/tqdm/tqdm/blob/15c5c512fffca8fd98080cec322e8b6c197c0377/tqdm/std.py#L1293-L1294
The problem is that when tqdm closes, the output is always in position 0 regardless of the value of `pos`. Another issue is that a new line `\n` is outputted. This is problematic because it means that the position for the next `tqdm` is no longer correct since the cursor is no longer at the beginning of the block.
Clearly this example is simplistic since the position argument is unnecessary. However it illustrates an unavoidable problem when using threading or multiprocessing.
One way to fix this would be to change `std.py:1293` to `self.display(pos=pos)` and to
have a tqdm command to update the output at the end of a tqdm block so that an appropriate number of new lines is outputted.
| open | 2020-07-06T15:23:37Z | 2023-12-06T04:55:46Z | https://github.com/tqdm/tqdm/issues/1000 | [] | haji-ali | 19 |
plotly/dash | dash | 2,866 | [BUG] No output callbacks with clientside_callback | If a clientside callback has no output, it generates an `ID not found in layout` error:
```
clientside_callback(
    """
    function(_) {
        window.location.reload();
    }
    """,
    Input("reset-button", "n_clicks"),
    prevent_initial_call=True,
)
```
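Until this is addressed, one possible workaround (a sketch; the dummy output targeting the triggering component and the `no_update` return are assumptions, not part of the report above) is to give the callback a throwaway output so the layout check passes:
```python
from dash import Input, Output, clientside_callback

# Hedged sketch: reuse the triggering component as a dummy output and return
# no_update so the property is never actually modified.
clientside_callback(
    """
    function(_) {
        window.location.reload();
        return window.dash_clientside.no_update;
    }
    """,
    Output("reset-button", "n_clicks"),
    Input("reset-button", "n_clicks"),
    prevent_initial_call=True,
)
```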
| closed | 2024-05-23T15:43:16Z | 2024-06-16T02:03:30Z | https://github.com/plotly/dash/issues/2866 | [
"bug"
] | T4rk1n | 1 |
pyro-ppl/numpyro | numpy | 1,323 | Bayesian Hierarchical Stacking Example | Hello,
The Bayesian Hierarchical Stacking example is not functioning; I tried it both in the linked Google Colab and locally in JupyterLab.
Docs:
https://num.pyro.ai/en/latest/tutorials/bayesian_hierarchical_stacking.html
Link:
https://github.com/pyro-ppl/numpyro/tree/master/notebooks/source/bayesian_hierarchical_stacking.ipynb
Error:
```
AssertionError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/numpyro/handlers.py in postprocess_message(self, msg)
157 msg["type"] in ("sample", "deterministic") and msg["name"] in self.trace
158 ), "all sites must have unique names but got `{}` duplicated".format(
--> 159 msg["name"]
160 )
161 self.trace[msg["name"]] = msg.copy()
AssertionError: all sites must have unique names but got `w_test` duplicated
```
The issue stems from the test portion of the stacking() function
```
if test:
    # test set stacking weights (in unconstrained space)
    f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
    # test set stacking weights (constrained to sum to 1)
    w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
    numpyro.deterministic("w_test", w_test)
```
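A minimal sketch of a possible fix (reusing the variables from the snippet above and assuming the intent is simply to record `w_test` once): dropping the second `numpyro.deterministic` call avoids registering the same site name twice.
```python
# Hedged sketch: register the "w_test" site a single time; a second
# numpyro.deterministic call with the same name creates the duplicate site
# that trips the assertion shown above.
if test:
    # test set stacking weights (in unconstrained space)
    f_test = jnp.hstack([X_test @ beta.T, jnp.zeros((N_test, 1))])
    # test set stacking weights (constrained to sum to 1)
    w_test = numpyro.deterministic("w_test", jax.nn.softmax(f_test, axis=1))
```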
| closed | 2022-02-04T21:21:16Z | 2022-02-09T02:56:31Z | https://github.com/pyro-ppl/numpyro/issues/1323 | [
"bug"
] | Vinnie-Palazeti | 7 |
waditu/tushare | pandas | 1428 | fund_nav (fund NAV) parameters are too limited | 1. Suggest following the approach of the index interfaces and adding start_date and end_date parameters, so that data can be fetched for a date range (based on the NAV date).
2. For the output, suggest adding pct_change as well.
Tushare ID : 391630 | open | 2020-09-10T05:17:03Z | 2020-09-10T05:17:03Z | https://github.com/waditu/tushare/issues/1428 | [] | cj9208 | 0 |
yt-dlp/yt-dlp | python | 12,413 | cache path hard disk drive changed from E to F, [WinError 3] appears | ### Checklist
- [x] I'm asking a question and **not** reporting a bug or requesting a feature
- [x] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar questions **including closed ones**. DO NOT post duplicates
### Please make sure the question is worded well enough to be understood
I downloaded yt-dlp.exe to the D drive, and it runs OK.
A cache folder was generated on my portable hard disk at "E:\Program\XDG_CACHE\yt-dlp".
Today I did not have the portable disk attached; when I ran it again, it could still download the webm files, but the prompt showed errors like the ones below.
How can I solve this problem?
Thanks.
```
D:\QMDownload\yt>yt-dlp.exe https://www.youtube.com/watch?v=q_XwoBSSvTE&list=WL&
index=16&pp=gAQBiAQB
[youtube] Extracting URL: https://www.youtube.com/watch?v=q_XwoBSSvTE
[youtube] q_XwoBSSvTE: Downloading webpage
[youtube] q_XwoBSSvTE: Downloading tv client config
[youtube] q_XwoBSSvTE: Downloading player e7567ecf
[youtube] q_XwoBSSvTE: Downloading tv player API JSON
[youtube] q_XwoBSSvTE: Downloading ios player API JSON
WARNING: Writing cache to 'E:\\Program\\XDG_CACHE\\yt-dlp\\youtube-sigfuncs\\js_
e7567ecf_108.json' failed: Traceback (most recent call last):
File "yt_dlp\cache.py", line 41, in store
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
[Previous line repeated 1 more time]
File "os.py", line 225, in makedirs
FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'E:\\'
WARNING: Writing cache to 'E:\\Program\\XDG_CACHE\\yt-dlp\\youtube-nsig\\e7567ec
f.json' failed: Traceback (most recent call last):
File "yt_dlp\cache.py", line 41, in store
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
[Previous line repeated 1 more time]
File "os.py", line 225, in makedirs
FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'E:\\'
WARNING: Writing cache to 'E:\\Program\\XDG_CACHE\\yt-dlp\\youtube-nsig\\e7567ec
f.json' failed: Traceback (most recent call last):
File "yt_dlp\cache.py", line 41, in store
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
File "os.py", line 215, in makedirs
[Previous line repeated 1 more time]
File "os.py", line 225, in makedirs
FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'E:\\'
[youtube] q_XwoBSSvTE: Downloading m3u8 information
[info] q_XwoBSSvTE: Downloading 1 format(s): 136+251
[download] Destination: 我们的时光+彩虹下面live版 GAO YIFEI | ZHAO LEI ( Time o
f our Lives ) - Tarararara Full Viral Chinese Song [q_XwoBSSvTE].f136.mp4
[download] 100% of 36.01MiB in 00:00:06 at 5.21MiB/s
[download] Destination: 我们的时光+彩虹下面live版 GAO YIFEI | ZHAO LEI ( Time o
f our Lives ) - Tarararara Full Viral Chinese Song [q_XwoBSSvTE].f251.webm
[download] 100% of 3.92MiB in 00:00:01 at 2.62MiB/s
[Merger] Merging formats into "我们的时光+彩虹下面live版 GAO YIFEI | ZHAO LEI (
Time of our Lives ) - Tarararara Full Viral Chinese Song [q_XwoBSSvTE].mkv"
Deleting original file 我们的时光+彩虹下面live版 GAO YIFEI | ZHAO LEI ( Time of
our Lives ) - Tarararara Full Viral Chinese Song [q_XwoBSSvTE].f136.mp4 (pass -
k to keep)
Deleting original file 我们的时光+彩虹下面live版 GAO YIFEI | ZHAO LEI ( Time of
our Lives ) - Tarararara Full Viral Chinese Song [q_XwoBSSvTE].f251.webm (pass
-k to keep)
'list' 不是内部或外部命令,也不是可运行的程序
或批处理文件。
'index' 不是内部或外部命令,也不是可运行的程序
或批处理文件。
'pp' 不是内部或外部命令,也不是可运行的程序
或批处理文件。
```
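One way around this (a hedged sketch, not an official fix): point the cache at a directory that still exists, or disable it; the equivalent command-line switches are `--cache-dir` and `--no-cache-dir`. Note also that the trailing "'list' / 'index' / 'pp' is not recognized" lines appear because the unquoted `&` characters in the URL are treated as command separators by Windows cmd, so the URL should be wrapped in quotes.
```python
# Hedged sketch using yt-dlp's Python API; the cache path below is an example,
# not something taken from the original report.
from yt_dlp import YoutubeDL

opts = {
    "cachedir": r"D:\QMDownload\yt\cache",  # any directory that actually exists
    # "cachedir": False,                    # or disable the filesystem cache entirely
}

url = "https://www.youtube.com/watch?v=q_XwoBSSvTE"
with YoutubeDL(opts) as ydl:
    ydl.download([url])
```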
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
``` | closed | 2025-02-19T12:52:53Z | 2025-02-20T15:52:34Z | https://github.com/yt-dlp/yt-dlp/issues/12413 | [
"question"
] | rmd13 | 2 |
django-oscar/django-oscar | django | 3,486 | Improve documentation for app forking command, and show examples of forking nested apps | The documentation for how to use `oscar_fork_app` could do with some improvement, specifically to explain that:
- The first argument to the command is an *app label* rather than a module path or anything else.
- The second argument is for the *top level target directory* into which all apps should be forked.
- Oscar will create a subdirectory inside the target directory for the forked app (and further subdirectories for nested apps). | open | 2020-08-28T11:24:07Z | 2020-09-19T03:11:31Z | https://github.com/django-oscar/django-oscar/issues/3486 | [
"✎ Docs"
] | solarissmoke | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 811 | Add EfficientNetV2 encoders | Is there any plan to include the EfficientNetV2 networks (such as those available with torchvision>=0.13) among the encoders?
| closed | 2023-09-19T11:34:33Z | 2023-11-30T01:50:29Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/811 | [
"Stale"
] | valeriopaolicelli | 3 |
LAION-AI/Open-Assistant | machine-learning | 2,952 | How do we run in production mode | I use the Open Assistant source, but only in debugging mode so far. What should we change if we want to deploy it in production mode?
Thank you | closed | 2023-04-28T07:47:46Z | 2023-04-29T10:16:06Z | https://github.com/LAION-AI/Open-Assistant/issues/2952 | [] | ntson2002 | 1 |
MaartenGr/BERTopic | nlp | 1,504 | bug with custom_labels in _topics_over_time.py | Hi,
it seems there is a bug in _topics_over_time.py when using custom_labels. In line 67 you loop over topics, but I think it should be selected_topics. My local fix for this line is:
`topic_names = [[[str(topic), None]] + topic_model.topic_aspects_[custom_labels][topic] for topic in selected_topics]`
Furthermore, line 70 also breaks down, as you are iterating over all topics and topic_names may not include all of them. I have changed it to:
`topic_names = {key: topic_names[index] for index, key in enumerate(selected_topics)}`
| open | 2023-09-05T14:54:59Z | 2023-09-08T10:27:31Z | https://github.com/MaartenGr/BERTopic/issues/1504 | [] | rcprati | 3 |
apache/airflow | machine-learning | 47,488 | OpenLineage can silently lose Snowflake query_ids and can't support multiple query_ids | ### Apache Airflow Provider(s)
openlineage
### Versions of Apache Airflow Providers
latest
### Apache Airflow version
2.X
### Operating System
macos
### Deployment
Virtualenv installation
### Deployment details
_No response_
### What happened
When using `SqlExecuteQueryOperator` with Snowflake and running a query with multiple statements in it, OpenLineage will only include the first `query_id` in `ExternalQueryRunFacet`.
This is problematic, as users don't have full control over how the statements are executed (when the query consists of multiple statements and `split_statements=False`, the operator throws an error `snowflake.connector.errors.ProgrammingError: 000008 (0A000): 01bad84f-0000-4392-0000-3d95000110ce: Actual statement count 3 did not match the desired statement count 1.`). The only way for users to retrieve all query_ids in OL events is to set `split_statements=False` and make sure each task runs a single statement, which is rarely the case.
In BQ, similar problem is solved by ["parent_query_job"](https://github.com/apache/airflow/blob/ab3a1869c57def3ee74a925709cece4c7e07b891/providers/google/src/airflow/providers/google/cloud/openlineage/mixins.py#L109) executing each statement within a "child_query_job" with a link to the parent job, so that it's easy to access all ids later on. I couldn't find a similar mechanism in Snowflake.
### What you think should happen instead
Ideally, from within a single task (SqlExecuteQueryOperator) we would emit a separate OL event for each statement run, containing a parentRunFacet pointing to the Airflow task. This may, however, take some time to implement properly, and may (or may not) need some adjustments from the consumers.
As a partial solution, we could extend `ExternalQueryRunFacet` with a new property that accepts multiple `externalQueryIds`. This requires some discussion with the OL community about how it fits into the spec.
Another small note: right now we are already sending the entire SQL query (with all the statements) in `SQLJobFacet`, regardless of whether they execute as separate "queries" or not, so it would probably need adjustment as well.
### How to reproduce
Run a sample query like:
```
USE WAREHOUSE COMPUTE_WH;
CREATE OR REPLACE TABLE test.public.result AS SELECT * FROM snowflake_sample_data.tpch_sf1.customer;
```
You can see in Snowflake that this resulted in two queries being run, with two separate query_ids, yet only the first one is included in the OpenLineage event.
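For reference, a minimal sketch (using snowflake-connector-python directly, outside Airflow; connection parameters are placeholders) of how each statement's query id can be collected when the script is split and executed statement by statement:
```python
# Hedged sketch: execute_string() runs each statement on its own cursor, and
# every cursor exposes its Snowflake query id via .sfqid.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",  # placeholder credentials
    user="my_user",
    password="my_password",
)
cursors = conn.execute_string(
    "USE WAREHOUSE COMPUTE_WH;\n"
    "CREATE OR REPLACE TABLE test.public.result AS "
    "SELECT * FROM snowflake_sample_data.tpch_sf1.customer;"
)
query_ids = [cur.sfqid for cur in cursors]  # one id per executed statement
print(query_ids)
```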
### Anything else
_No response_
### Are you willing to submit PR?
- [x] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-07T09:43:27Z | 2025-03-07T09:51:04Z | https://github.com/apache/airflow/issues/47488 | [
"kind:bug",
"area:providers",
"needs-triage",
"provider:openlineage"
] | kacpermuda | 0 |
mirumee/ariadne-codegen | graphql | 155 | Mixin Strategy Produces Invalid Queries | ### Summary
The Mixins feature is a great idea for extending the functionality of generated classes; however, the mixin directive is not removed from the final generated query, resulting in errors returned from remote APIs.
### Example
Let's say that we simply want to validate that a mixin has been applied by returning a message:
```python
class RepositoryMixin:
    def test_mixin(self):
        return "This is a mixin for the Repository object!"
```
We then add it to the config TOML and include it in our GraphQL query:
```toml
files_to_include = [
"./codegen/github_enterprise/mixins/repository_mixin.py",
]
```
```gql
query get_repository($repo_owner: String!, $repo_name: String!) {
  repository(name: $repo_name, owner: $repo_owner)
    @mixin(from: ".repository_mixin", import: "RepositoryMixin") {
    nameWithOwner
    description
  }
}
```
Looking at the generated Pydantic `BaseModel` we can see that the mixin was applied successfully:
```python
class GetRepositoryRepository(BaseModel, RepositoryMixin):
    name_with_owner: str = Field(alias="nameWithOwner")
    description: Optional[str]
```
However, the final generated query in the final generated GraphQL Client still contains the mixin directive used previously to generate our Pydantic model that inherits the mixin:
<img width="801" alt="codegen-bad-generated-query" src="https://github.com/mirumee/ariadne-codegen/assets/31744764/da4ae62c-4e13-4b09-88f1-7c5b40487bce">
<br />
<br />
Executing the `GithubEnterpriseClient.get_repository` method with this query results in `GraphQLClientHttpError: HTTP status code: 500`. Removing the mixin from the generated query fixes the error and returns the requested data, which contains a Repository on which you can successfully call the `Repository.test_mixin` method and get the expected output: `This is a mixin for the Repository object.`
### Fix
- Is the `@mixin` directive intended to be used exclusively at generation-time to inject the proper class into the inheritance chain of a given Pydantic model?
- If yes, then can this issue be fixed by simply programming the codegen business logic to remove the mixin from the final generated query in the GraphQL client? (A possible approach is sketched after this list.)
- If no, what is the use-case of preserving the mixin in the final generated query and how can one use it without remote API errors?
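If the answer to the first question is yes, a hedged sketch of how the directive could be stripped from the final query before it is embedded in the client (using graphql-core's visitor API; the helper below is illustrative and not part of ariadne-codegen):
```python
# Hedged sketch: drop codegen-only @mixin directives from a query string.
from graphql import REMOVE, Visitor, parse, print_ast, visit


class _StripMixinDirective(Visitor):
    def enter_directive(self, node, *_):
        if node.name.value == "mixin":
            return REMOVE  # remove the directive node from the AST


def strip_mixin_directives(query: str) -> str:
    return print_ast(visit(parse(query), _StripMixinDirective()))
```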
| closed | 2023-05-22T17:57:18Z | 2023-05-30T06:59:20Z | https://github.com/mirumee/ariadne-codegen/issues/155 | [] | heyaphra | 1 |