repo_name (string) | topic (string, 30 classes) | issue_number (int64) | title (string) | body (string) | state (string, 2 classes) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64)
---|---|---|---|---|---|---|---|---|---|---|---
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 512 |
Downloaded videos have only audio, no picture
|
Some videos downloaded locally via the video links obtained from the API have only audio and no picture. How can this be resolved?
|
closed
|
2024-11-28T04:40:49Z
|
2024-11-28T05:10:09Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/512
|
[
"BUG"
] |
aslanTT
| 4 |
newpanjing/simpleui
|
django
| 172 |
pro: suggest adding page editing
|



|
closed
|
2019-10-24T11:31:02Z
|
2019-12-18T03:14:21Z
|
https://github.com/newpanjing/simpleui/issues/172
|
[
"enhancement"
] |
hublive
| 2 |
aio-libs/aiomysql
|
asyncio
| 477 |
LAST_INSERT_ID() returns 0 when sharing a cursor.execute(...) call with INSERT
|
Windows 10 Home 1903 (18362.720)
Python 3.8.2 x64, aiomysql 0.0.20
MariaDB 10.4
Using `Pool` with `auto_commit=True`
Querying `SELECT LAST_INSERT_ID();` within the same `await cursor.execute(...)` call as the `INSERT...;` query the `LAST_INSERT_ID()` should be referencing causes the next `await cursor.fetchone()` call to yield an empty list (this is on a "clean" cursor).
A current workaround involves splitting the `SELECT LAST_INSERT_ID();` and `INSERT...;` queries into separate `await cursor.execute(...)` calls. This yields expected behaviour of `await cursor.fetchone()`.
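A minimal sketch of that workaround (the table and column names here are illustrative, not from the original report):
```python
import aiomysql

async def insert_and_fetch_id(pool: aiomysql.Pool) -> int:
    async with pool.acquire() as conn:
        async with conn.cursor() as cur:
            # Two separate execute() calls instead of one combined statement:
            await cur.execute("INSERT INTO items (name) VALUES (%s);", ("example",))
            await cur.execute("SELECT LAST_INSERT_ID();")
            (last_id,) = await cur.fetchone()
            return last_id
```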
I have not tested whether this behaviour is inherited from PyMySQL, rather than aiomysql.
Considering most documentation on SQL best practice indicates it is ideal to keep the `SELECT LAST_INSERT_ID();` close to whatever `AUTO_INCREMENT` it is referencing, requiring two `await cursor.execute(...)` calls may be considered a non-ideal case. Please do let me know if I have configured my environment in a fashion that causes this behaviour.
Many thanks
fwf
|
closed
|
2020-03-31T14:05:07Z
|
2020-04-20T14:24:09Z
|
https://github.com/aio-libs/aiomysql/issues/477
|
[] |
fw-cc
| 2 |
LibrePhotos/librephotos
|
django
| 571 |
Missing photos counter wrong
|
## ๐ Bug Report
### What Operating system and version is LibrePhotos running on:
unknown
### What architecture is LibrePhotos running on:
x64
### How is LibrePhotos installed:
Docker
### Description of issue:
[u/nagarrido_96](https://old.reddit.com/user/nagarrido_96) on [reddit reports](https://old.reddit.com/r/librephotos/comments/w240mi/missing_photos_counter_wrong/): I have a test instance for librephotos, and recently I deleted all photos but one (directly from the source folder). When I click "remove missing photos" the missing photos get deleted, but the counter for total photos and missing photos does not reset. Has this happened to anyone?
### How can we reproduce it:
- Delete images from folder
- Run "Delete Missing" job
- Counter does not change
|
closed
|
2022-07-27T19:05:10Z
|
2023-04-13T08:27:32Z
|
https://github.com/LibrePhotos/librephotos/issues/571
|
[
"bug"
] |
derneuere
| 2 |
onnx/onnx
|
machine-learning
| 5,885 |
[Feature request] Provide a means to convert to numpy array without byteswapping
|
### System information
ONNX 1.15
### What is the problem that this feature solves?
Issue onnx/tensorflow-onnx#1902 in tf2onnx occurs on big endian systems, and it is my observation that attributes which end up converting to integers are incorrectly byteswapped because the original data resided within a tensor. If `numpy_helper.to_array()` could be updated to optionally not perform byteswapping, then that could help solve this issue.
### Alternatives considered
As an alternative, additional logic could be added in tf2onnx to perform byteswapping on the data again, but this seems excessive.
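A rough sketch of that alternative (my own illustration of undoing the extra swap with plain numpy, not an existing tf2onnx helper):
```python
from onnx import numpy_helper

def to_array_raw_order(tensor):
    # Illustrative workaround: byteswap the array returned by to_array()
    # again, flipping the dtype's byte-order flag to match, so the bytes
    # end up in their original (pre-swap) order on a big endian host.
    arr = numpy_helper.to_array(tensor)
    return arr.byteswap().view(arr.dtype.newbyteorder())
```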
### Describe the feature
I believe this feature is necessary to improve support for big endian systems.
### Will this influence the current api (Y/N)?
_No response_
### Feature Area
converters
### Are you willing to contribute it (Y/N)
Yes
### Notes
_No response_
|
closed
|
2024-01-31T20:58:56Z
|
2024-02-02T16:52:35Z
|
https://github.com/onnx/onnx/issues/5885
|
[
"topic: enhancement"
] |
tehbone
| 4 |
proplot-dev/proplot
|
data-visualization
| 204 |
Use XDG standard directories instead of hidden files in the home directory
|
Would it be possible to change (or at least offer as an option) the default locations of `~/.proplotrc` and `~/.proplot/fonts` and so on to the corresponding XDG standard paths? A description may be found here:
https://wiki.archlinux.org/index.php/XDG_Base_Directory
The standard has been gaining steam in the past five or so years; the idea is that apps don't have to litter the home directory with dot files, which makes backing up configuration files easier (not to mention, it keeps the home directory tidy!).
The TLDR is that:
- The file `~/.proplotrc` should be moved to `~/.config/proplot/proplotrc`
- The folders `~/.proplot/*` should be moved to `~/.config/proplot/*`
(Actually, `~/.config` is more generally the value of the environment variable `XDG_CONFIG_HOME`.)
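For reference, a minimal sketch of the lookup logic the standard implies (my illustration, not proplot code):
```python
import os
from pathlib import Path

def xdg_config_dir(app: str = "proplot") -> Path:
    # Per the XDG Base Directory spec, fall back to ~/.config when
    # XDG_CONFIG_HOME is unset or empty.
    base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
    return Path(base) / app
```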
Happy to submit a PR if there is interest.
|
closed
|
2020-07-09T18:34:59Z
|
2021-08-18T18:57:47Z
|
https://github.com/proplot-dev/proplot/issues/204
|
[
"good first issue",
"feature"
] |
legendre6891
| 5 |
ultralytics/ultralytics
|
machine-learning
| 19,741 |
Export TorchScript with dynamic batch, NMS and FP16
|
### Search before asking
- [x] I have searched the Ultralytics [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar feature requests.
### Description
I want to export this model for Triton Inference Server with the PyTorch backend.
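Roughly the kind of call being requested (a sketch against the Ultralytics export API; whether all of these flags are honored together for the TorchScript format is exactly what this request is about):
```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # any detection checkpoint
# Desired: TorchScript export with dynamic batch, fused NMS, and FP16.
model.export(format="torchscript", dynamic=True, nms=True, half=True)
```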
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR!
|
closed
|
2025-03-17T07:28:27Z
|
2025-03-18T14:48:14Z
|
https://github.com/ultralytics/ultralytics/issues/19741
|
[
"enhancement",
"exports"
] |
631068264
| 6 |
django-oscar/django-oscar
|
django
| 4,052 |
Extra ) in apps.py?
|
### Issue Summary
Is there an extra parenthesis [here](https://github.com/django-oscar/django-oscar/blob/master/src/oscar/apps/catalogue/reviews/apps.py#L26)?
Should
```
path('<int:pk>)/vote/', login_required(self.vote_view.as_view()), name='reviews-vote'),
```
be
```
path('<int:pk>/vote/', login_required(self.vote_view.as_view()), name='reviews-vote'),
```
?
|
closed
|
2023-02-16T14:20:54Z
|
2023-06-23T13:09:18Z
|
https://github.com/django-oscar/django-oscar/issues/4052
|
[] |
symwell
| 3 |
wkentaro/labelme
|
computer-vision
| 1,428 |
Suggest customizing the model directory
|
Suggest making the model directory customizable.
Models are downloaded to the C drive by default, which takes up C drive space.
--------------------------------------------
It would also be best to display a list of the models and their download addresses, so users can conveniently download them in bulk.
```python
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='github.com', port=443): Max retries exceeded with url: /labelmeai/efficient-sam/releases/download/onnx-models-20231225/efficient_sam_vits_decoder.onnx (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x000001C46F7A9490>, 'Connection to github.com timed out. (connect timeout=None)'))
```
Users often cannot download the models.
----------------------------------------------
That way, users could download all the models themselves and put them in the custom directory.
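As a stopgap, a user could fetch a model manually into a directory of their choosing; a small sketch (the URL comes from the error above; the destination path is illustrative):
```python
import requests

url = ("https://github.com/labelmeai/efficient-sam/releases/download/"
       "onnx-models-20231225/efficient_sam_vits_decoder.onnx")
dest = "models/efficient_sam_vits_decoder.onnx"  # hypothetical custom directory

with requests.get(url, stream=True, timeout=60) as resp:
    resp.raise_for_status()
    with open(dest, "wb") as f:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```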
|
open
|
2024-04-15T03:15:43Z
|
2024-04-15T03:15:43Z
|
https://github.com/wkentaro/labelme/issues/1428
|
[] |
monkeycc
| 0 |
home-assistant/core
|
python
| 140,585 |
No registered handler for event
|
### The problem
I have a Reolink E1 Zoom camera and I can see the following in the HA log:
```txt
2025-03-12 10:54:27.698 WARNING (MainThread) [homeassistant.components.onvif] Cam1: No registered handler for event from c4:3c:b0:07:40:80: {
    'SubscriptionReference': None,
    'Topic': {
        '_value_1': 'tns1:RuleEngine/MyRuleDetector/Package',
        'Dialect': 'http://www.onvif.org/ver10/tev/topicExpression/ConcreteSet',
        '_attr_1': {}
    },
    'ProducerReference': None,
    'Message': {
        '_value_1': {
            'Source': {
                'SimpleItem': [
                    {'Name': 'Source', 'Value': '000'}
                ],
                'ElementItem': [],
                'Extension': None,
                '_attr_1': None
            },
            'Key': None,
            'Data': {
                'SimpleItem': [
                    {'Name': 'State', 'Value': 'false'}
                ],
                'ElementItem': [],
                'Extension': None,
                '_attr_1': None
            },
            'Extension': None,
            'UtcTime': datetime.datetime(2025, 3, 12, 9, 54, 27, tzinfo=datetime.timezone.utc),
            'PropertyOperation': 'Initialized',
            '_attr_1': {}
        }
    }
}
```
### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Onvif
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
(same warning as quoted in the problem description above)
```
### Additional information
_No response_
|
closed
|
2025-03-14T10:08:31Z
|
2025-03-15T19:03:41Z
|
https://github.com/home-assistant/core/issues/140585
|
[
"integration: onvif"
] |
gonzalo1059
| 9 |
coqui-ai/TTS
|
deep-learning
| 3,847 |
[Feature request] Speed up voice cloning from the same speaker
|
**๐ Feature Description**
Thanks to the project.
We had a try and the result is pretty good.
However, there is one major issue, the voice cloning speed is slow, especially inference by CPU.
We might need to generate the voice several times from the same speaker, could we speed up the process?
**Solution**
Here is how we use the code:
```python
# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
# Text to speech to a file
tts.tts_to_file(text=words, language="zh-cn", file_path=out, speaker_wav="data/audio/sun1.wav")
```
Since we may need to clone the same speaker and generate voices several times, is it possible to speed up the process?
Could we export some intermediate results or a fine-tuned model, and reuse or reload it next time?
Ideally, voice generation would then be about as fast as running the model alone.
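One possible direction, sketched against the XTTS low-level API (the method names below are my assumption from the XTTS docs, not a confirmed recipe from this project): compute the speaker conditioning once, then reuse it for every generation.
```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

# Assumed local checkpoint layout; paths are illustrative.
config = XttsConfig()
config.load_json("xtts_v2/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="xtts_v2/")

# Expensive step, done once per speaker:
gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
    audio_path=["data/audio/sun1.wav"]
)

# Cheap step, repeated for each text:
for words in ["first sentence", "second sentence"]:
    out = model.inference(words, "zh-cn", gpt_cond_latent, speaker_embedding)
```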
**Alternative Solutions**
**Additional context**
|
closed
|
2024-08-03T02:10:33Z
|
2024-08-09T12:17:44Z
|
https://github.com/coqui-ai/TTS/issues/3847
|
[
"feature request"
] |
henghamao
| 5 |
vitalik/django-ninja
|
django
| 938 |
[BUG] @model_validator(mode="before") issue
|
**Describe the bug**
When using something like this, `values` is a `DjangoGetter` instance which is somewhat unexpected to me. Prior to 1.0 and the pydantic update you would get a plain dictionary.
```python
class SomeSchema(Schema):
    somevar: int

    @model_validator(mode="before")
    @classmethod
    def foo(cls, values):
        values.get("something")
```
**Versions (please complete the following information):**
- Python version: 3.10
- Django version: 4.1
- Django-Ninja version: 1.0.1
- Pydantic version: 2.5.1
|
open
|
2023-11-20T23:04:02Z
|
2023-12-21T05:46:20Z
|
https://github.com/vitalik/django-ninja/issues/938
|
[] |
shughes-uk
| 2 |
postmanlabs/httpbin
|
api
| 209 |
Create a 'hidden-digest-auth'
|
Add a new service "hidden-digest-auth", similar to "hidden-basic-auth", returning 404 instead of 401. This is especially necessary because digest authentication responding with 401 in AJAX applications causes the browser to always prompt the user for credentials on the first attempt.
|
closed
|
2015-02-02T14:22:53Z
|
2018-04-26T17:51:05Z
|
https://github.com/postmanlabs/httpbin/issues/209
|
[] |
reinert
| 0 |
miguelgrinberg/Flask-SocketIO
|
flask
| 1,332 |
How to block all scanners requests except for /socket.io/
|
What's the best practice to cut absolutely everything except my own requests?
|
closed
|
2020-07-21T22:43:27Z
|
2021-04-27T12:31:33Z
|
https://github.com/miguelgrinberg/Flask-SocketIO/issues/1332
|
[
"question"
] |
nakamigo
| 3 |
open-mmlab/mmdetection
|
pytorch
| 11,651 |
I want to train a Fast R-CNN model but got a bug: TypeError: __init__() got an unexpected keyword argument 'proposal_file'
|
```
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
Traceback (most recent call last):
  File "tools/train.py", line 121, in <module>
    main()
  File "tools/train.py", line 117, in main
    runner.train()
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1728, in train
    self._train_loop = self.build_train_loop(
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1520, in build_train_loop
    loop = LOOPS.build(
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/loops.py", line 44, in __init__
    super().__init__(runner, dataloader)
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/base_loop.py", line 26, in __init__
    self.dataloader = runner.build_dataloader(
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1370, in build_dataloader
    dataset = DATASETS.build(dataset_cfg)
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/registry.py", line 570, in build
    return self.build_func(cfg, *args, **kwargs, registry=self)
  File "/home/password123456/anaconda3/envs/fast-rcnn/lib/python3.8/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
    obj = obj_cls(**args)  # type: ignore
TypeError: __init__() got an unexpected keyword argument 'proposal_file'
```
|
open
|
2024-04-20T14:36:30Z
|
2024-04-20T14:36:46Z
|
https://github.com/open-mmlab/mmdetection/issues/11651
|
[] |
Letme11
| 0 |
DistrictDataLabs/yellowbrick
|
matplotlib
| 1,231 |
Enable plt.close() to clear memory
|
I've been using the Visualizer to extract the `elbow_value_` and `elbow_score_` attributes for multiple batches of training data. While looping through each batch, the corresponding figure is automatically rendered and plotted, which consumes a lot of memory. I suggest enabling the option to call plt.close(), or even to skip plt.show(), to improve performance.
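A small sketch of the kind of loop in question (the batch data and model choice here are illustrative):
```python
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from yellowbrick.cluster import KElbowVisualizer

# Illustrative batches of training data:
batches = [make_blobs(n_samples=500, centers=4, random_state=i)[0] for i in range(3)]

for X in batches:
    viz = KElbowVisualizer(KMeans(), k=(2, 10))
    viz.fit(X)
    elbow_value, elbow_score = viz.elbow_value_, viz.elbow_score_
    plt.close(viz.ax.figure)  # free the rendered figure instead of keeping it
```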
|
closed
|
2022-03-16T15:06:29Z
|
2022-07-12T13:58:44Z
|
https://github.com/DistrictDataLabs/yellowbrick/issues/1231
|
[
"type: question"
] |
dilettante8
| 1 |
blacklanternsecurity/bbot
|
automation
| 1,856 |
Split out individual events in the event base.py to their own files
|
Things are getting real crowded in there.
|
open
|
2024-10-16T18:53:09Z
|
2024-10-16T18:53:09Z
|
https://github.com/blacklanternsecurity/bbot/issues/1856
|
[
"enhancement",
"low priority"
] |
liquidsec
| 0 |
taverntesting/tavern
|
pytest
| 598 |
Reusing 'parametrize' values?
|
Hi, is there any way to reuse `parametrize` values?
For example, I have the following test configuration:
```yaml
---
test_name: Successfully returns the auto registration history

includes:
  - !include stage.async-result.yaml

marks:
  - parametrize:
      key: vin
      vals:
        - > A lot of values here <

stages:
  - name: Trigger the task
    request:
      url: "http://localhost:8000/auto/history/{vin}"
      method: GET
    response:
      status_code: 202
      save:
        $ext:
          function: utils:url_from_refresh_header

  - type: ref
    id: async_result

---
test_name: Successfully returns auto accidents

includes:
  - !include stage.async-result.yaml

marks:
  - parametrize:
      key: vin
      vals:
        - > A lot of values here <

stages:
  - name: Trigger the task
    request:
      url: "http://localhost:8000/auto/accidents/{vin}"
      method: GET
    response:
      status_code: 202
      save:
        $ext:
          function: utils:url_from_refresh_header

  - type: ref
    id: async_result
```
How can I define the parametrize values in one place and reuse them?
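If I understand the docs correctly, Tavern lets YAML anchors defined in an earlier document be referenced in later documents of the same file, so something like the following might work (a hedged sketch; the `&vin_vals`/`*vin_vals` names are mine):
```yaml
---
test_name: Successfully returns the auto registration history

marks:
  - parametrize:
      key: vin
      vals: &vin_vals
        - > A lot of values here <

# ... stages as before ...

---
test_name: Successfully returns auto accidents

marks:
  - parametrize:
      key: vin
      vals: *vin_vals

# ... stages as before ...
```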
|
closed
|
2020-09-14T09:50:12Z
|
2020-09-27T13:59:45Z
|
https://github.com/taverntesting/tavern/issues/598
|
[] |
MaximumQuiet
| 3 |
tiangolo/uwsgi-nginx-flask-docker
|
flask
| 123 |
uWSGI using the wrong Python version
|
Hello,
I'm trying to update my app to Python 3.7 with new `tiangolo/uwsgi-nginx-flask:python3.7-alpine3.8` image. When launching the image with Docker compose, it seems like uwsgi is using Python 3.6.6 instead.
```
Attaching to fakeimg
fakeimg | Checking for script in /app/prestart.sh
fakeimg | There is no script /app/prestart.sh
fakeimg | /usr/lib/python2.7/site-packages/supervisor/options.py:461: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
fakeimg | 'Supervisord is running as root and it is searching '
fakeimg | 2019-02-19 12:48:32,061 CRIT Supervisor is running as root. Privileges were not dropped because no user is specified in the config file. If you intend to run as root, you can set user=root in the config file to avoid this message.
fakeimg | 2019-02-19 12:48:32,061 INFO Included extra file "/etc/supervisor.d/supervisord.ini" during parsing
fakeimg | 2019-02-19 12:48:32,076 INFO RPC interface 'supervisor' initialized
fakeimg | 2019-02-19 12:48:32,076 CRIT Server 'unix_http_server' running without any HTTP authentication checking
fakeimg | 2019-02-19 12:48:32,081 INFO supervisord started with pid 1
fakeimg | 2019-02-19 12:48:33,083 INFO spawned: 'nginx' with pid 10
fakeimg | 2019-02-19 12:48:33,086 INFO spawned: 'uwsgi' with pid 11
fakeimg | [uWSGI] getting INI configuration from /app/uwsgi.ini
fakeimg | [uWSGI] getting INI configuration from /etc/uwsgi/uwsgi.ini
fakeimg |
fakeimg | ;uWSGI instance configuration
fakeimg | [uwsgi]
fakeimg | cheaper = 2
fakeimg | processes = 16
fakeimg | ini = /app/uwsgi.ini
fakeimg | module = main
fakeimg | callable = app
fakeimg | wsgi-disable-file-wrapper = true
fakeimg | ini = /etc/uwsgi/uwsgi.ini
fakeimg | socket = /tmp/uwsgi.sock
fakeimg | chown-socket = nginx:nginx
fakeimg | chmod-socket = 664
fakeimg | hook-master-start = unix_signal:15 gracefully_kill_them_all
fakeimg | need-app = true
fakeimg | die-on-term = true
fakeimg | plugin = python3
fakeimg | show-config = true
fakeimg | ;end of configuration
fakeimg |
fakeimg | *** Starting uWSGI 2.0.17 (64bit) on [Tue Feb 19 12:48:33 2019] ***
fakeimg | compiled with version: 6.4.0 on 01 May 2018 17:28:25
fakeimg | os: Linux-4.9.125-linuxkit #1 SMP Fri Sep 7 08:20:28 UTC 2018
fakeimg | nodename: 786a77edca64
fakeimg | machine: x86_64
fakeimg | clock source: unix
fakeimg | pcre jit disabled
fakeimg | detected number of CPU cores: 4
fakeimg | current working directory: /app
fakeimg | detected binary path: /usr/sbin/uwsgi
fakeimg | your memory page size is 4096 bytes
fakeimg | detected max file descriptor number: 1048576
fakeimg | lock engine: pthread robust mutexes
fakeimg | thunder lock: disabled (you can enable it with --thunder-lock)
fakeimg | uwsgi socket 0 bound to UNIX address /tmp/uwsgi.sock fd 3
fakeimg | uWSGI running as root, you can use --uid/--gid/--chroot options
fakeimg | *** WARNING: you are running uWSGI as root !!! (use the --uid flag) ***
fakeimg | Python version: 3.6.6 (default, Aug 24 2018, 05:04:18) [GCC 6.4.0]
fakeimg | *** Python threads support is disabled. You can enable it with --enable-threads ***
fakeimg | Python main interpreter initialized at 0x56004504cfa0
```
This is making my application crash, because every package has been installed for Python 3.7.2.
The project is [here](https://github.com/Rydgel/Fake-images-please) if you need to view the Dockerfile.
|
closed
|
2019-02-19T12:57:10Z
|
2019-02-19T16:24:45Z
|
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/123
|
[] |
Rydgel
| 4 |
wemake-services/django-test-migrations
|
pytest
| 56 |
Dependabot can't resolve your Python dependency files
|
Dependabot can't resolve your Python dependency files.
As a result, Dependabot couldn't update your dependencies.
The error Dependabot encountered was:
```
Creating virtualenv django-test-migrations-8iGcF5E9-py3.8 in /home/dependabot/.cache/pypoetry/virtualenvs
Updating dependencies
Resolving dependencies...
[PackageNotFound]
Package pytest-pythonpath (0.7.3) not found.
```
If you think the above is an error on Dependabot's side please don't hesitate to get in touch - we'll do whatever we can to fix it.
[View the update logs](https://app.dependabot.com/accounts/wemake-services/update-logs/27601734).
|
closed
|
2020-03-26T08:34:50Z
|
2020-03-26T08:34:56Z
|
https://github.com/wemake-services/django-test-migrations/issues/56
|
[] |
dependabot-preview[bot]
| 0 |
Asabeneh/30-Days-Of-Python
|
matplotlib
| 595 |
30 Days of Python
|
|
closed
|
2024-08-21T11:15:49Z
|
2024-08-21T11:18:49Z
|
https://github.com/Asabeneh/30-Days-Of-Python/issues/595
|
[] |
niravkakariya
| 0 |
strawberry-graphql/strawberry
|
django
| 3,413 |
Surprising FIFO behaviour of lifecycle hooks
|
## Describe the (maybe) Bug
I'm surprised that various lifecycle hooks (`on_operation`, `on_parse`, `on_validate`, `on_execute`) are completed in a FIFO fashion, rather than LIFO.
I would expect that if we're wrapping an operation with `on_operation` with 2 extensions, the following will happen (LIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* Second extension completes (after `yield` part)
* First extension completes (after `yield` part)
However, the order I'm _actually_ seeing is the following (FIFO):
* First extension starts (before `yield` part)
* Second extension starts (before `yield` part)
* First extension completes (after `yield` part)
* Second extension completes (after `yield` part)
I'm concerned about it because extensions can mutate state, so it would be good for them to behave like context managers. [Example of state mutation.](https://strawberry.rocks/docs/guides/custom-extensions#execution-context)
In fact, I do find it surprising that this is how things work out. Notably, overriding `resolve` doesn't have the same problem, but it also happens in a slightly different way.
## Repro details
Here're some toy extensions I built to investigate things:
```python
class MyCustomExtension(SchemaExtension):
    id = "?"

    @override
    async def on_validate(self) -> AsyncGenerator[None, None]:
        print(f"GraphQL validation start ({self.id})")
        yield
        print(f"GraphQL validation end ({self.id})")

    @override
    def on_parse(self) -> Generator[None, None, None]:
        print(f"GraphQL parsing start ({self.id})")
        yield
        print(f"GraphQL parsing end ({self.id})")

    @override
    def on_execute(self) -> Generator[None, None, None]:
        print(f"GraphQL execution start ({self.id})")
        yield
        print(f"GraphQL execution end ({self.id})")

    @override
    def on_operation(self) -> Generator[None, None, None]:
        print(f"GraphQL operation start ({self.id})")
        yield
        print(f"GraphQL operation end ({self.id})")

    @override
    async def resolve(
        self,
        _next: Callable[..., object],
        root: object,
        info: GraphQLResolveInfo,
        *args,
        **kwargs,
    ) -> AwaitableOrValue[object]:
        random_id = randint(0, 1000)
        print(f"GraphQL resolver {random_id} start ({self.id})")
        result = await await_maybe(_next(root, info, *args, **kwargs))
        print(f"GraphQL resolver {random_id} end ({self.id})")
        return result


class MyCustomExtensionA(MyCustomExtension):
    id = "A"


class MyCustomExtensionB(MyCustomExtension):
    id = "B"
```
I'm testing it by running a simple query against a GraphQL Schema:
```python
@strawberry.type
class Me:
    id: str


@strawberry.type
class Query:
    @strawberry.field
    @staticmethod
    async def me() -> Me:
        return Me(id="foo")


schema = MySchema(
    query=Query,
    extensions=[MyCustomExtensionA, MyCustomExtensionB],
)
```
When running a simple GraphQL query against this schema:
```graphql
query {
me { id }
}
```
I see the following lines being printed:
```
GraphQL operation start (A)
GraphQL operation start (B)
GraphQL parsing start (A)
GraphQL parsing start (B)
GraphQL parsing end (A)
GraphQL parsing end (B)
GraphQL validation start (A)
GraphQL validation start (B)
GraphQL validation end (A)
GraphQL validation end (B)
GraphQL execution start (A)
GraphQL execution start (B)
GraphQL resolver 598 start (B)
GraphQL resolver 975 start (A)
GraphQL resolver 975 end (A)
GraphQL resolver 598 end (B)
GraphQL resolver 196 start (B)
GraphQL resolver 638 start (A)
GraphQL resolver 638 end (A)
GraphQL resolver 196 end (B)
GraphQL execution end (A)
GraphQL execution end (B)
GraphQL operation end (A)
GraphQL operation end (B)
```
## System Information
- Strawberry version (if applicable): `0.220.0`
|
closed
|
2024-03-19T15:27:12Z
|
2025-03-20T15:56:37Z
|
https://github.com/strawberry-graphql/strawberry/issues/3413
|
[
"bug"
] |
kkom
| 2 |
Esri/arcgis-python-api
|
jupyter
| 1,322 |
Conda install -c esri arcgis not working !
|
**Describe the bug**
I can't install the ArcGIS API with the Anaconda distribution. In a new environment, I enter the following at the Anaconda Prompt:
conda install -c esri arcgis
My virtual env is called ArcGIS_API and I have not installed any other packages.
error:
```
UnsatisfiableError:
Note that strict channel priority may have removed packages required for satisfiability.
```
**Screenshots**
**Expected behavior**
I expected the ArcGIS API to install.
**Platform (please complete the following information):**
- OS: Windows 11
|
closed
|
2022-08-06T15:15:05Z
|
2022-08-26T17:55:24Z
|
https://github.com/Esri/arcgis-python-api/issues/1322
|
[
"bug"
] |
PavlosDem99
| 7 |
mwaskom/seaborn
|
data-science
| 3,130 |
Legend position gets overwritten / can not be set
|
When plotting with the new `seaborn.objects` Plot method with a legend included (like `.add(Line(), legend=True)`), the position of the legend is hardcoded to specific locations (https://github.com/mwaskom/seaborn/blob/master/seaborn/_core/plot.py#L1612-L1624). The location differs depending on whether pyplot is used: with pyplot, the legend is pulled on top of the right side, over the graph's content; without it (e.g. in a Jupyter notebook, or on `.save()`), the legend is placed to the right, outside of the plot area.
If users want the legend in a different position (likely especially in the pyplot case, where the legend may easily be drawn over important data points), they have no way to move it. The old (non-`seaborn.objects`) method [`move_legend()`](https://seaborn.pydata.org/generated/seaborn.move_legend.html) does not work for `Plot` objects. Neither can the `seaborn.objects` `.theme()` approach be used with matplotlib rc params (which would be [`legend.loc`](https://matplotlib.org/stable/tutorials/introductory/customizing.html#:~:text=legend.loc) in conjunction with `bbox_to_anchor` of the [matplotlib legend](https://matplotlib.org/stable/api/legend_api.html#matplotlib.legend.Legend)): the hardcoded `loc` overwrites it, and there is no way at all for users to set `bbox_to_anchor` for the legend.
The PR https://github.com/mwaskom/seaborn/pull/3128 solves this issue by keeping the current default behavior when no additional parameters are given, but respecting the matplotlib rc params entry for `legend.loc` if it is set, like `.theme({"legend.loc": "upper left"})`.
Additionally, it adds the possibility to set the `bbox_to_anchor` of the legend, like `.theme({"legend_loc__bbox_to_anchor": (0.1, 0.95)})` (though the way this is implemented might not be ideal).
This gives users full control over the legend position.
|
closed
|
2022-11-08T10:00:51Z
|
2022-11-11T23:52:38Z
|
https://github.com/mwaskom/seaborn/issues/3130
|
[] |
jalsti
| 8 |
yzhao062/pyod
|
data-science
| 179 |
XGBOD model file is too large
|
Hello!
I spent an hour training an XGBOD model on my data set, and it worked well on the test set, but after saving the model with pickle I found that the file size was 1.2 GB!
Is there a way to reduce the size of the model file?
(The shape of my data set is (30412, 86).)
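One generic option (a hedged sketch of a common approach, not a PyOD-specific API): compress the serialized model, e.g. with joblib.
```python
import joblib
import numpy as np
from pyod.models.xgbod import XGBOD

# Tiny illustrative data; the real data set is (30412, 86).
X = np.random.rand(200, 10)
y = np.r_[np.zeros(190), np.ones(10)]

clf = XGBOD().fit(X, y)
joblib.dump(clf, "xgbod_model.joblib", compress=3)  # compressed on disk
clf = joblib.load("xgbod_model.joblib")
```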
|
open
|
2020-04-10T01:34:49Z
|
2021-06-11T23:56:32Z
|
https://github.com/yzhao062/pyod/issues/179
|
[
"enhancement"
] |
pjgao
| 3 |
KevinMusgrave/pytorch-metric-learning
|
computer-vision
| 10 |
Multi-gpu training
|
Hi, I am stuck on how multi-GPU training would work for loss functions with more than one negative, particularly [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss).
In [SimCLR](https://arxiv.org/abs/2002.05709), the number of _negatives_ per some _positive_ pair is taken to be `2 * (N - 1)` (all examples in a minibatch of size `N` that don't belong to that positive pair), and they find ([as other works before them](https://github.com/KevinMusgrave/pytorch-metric-learning)) that the bigger the batch size, the larger the number of negatives, and the better the learned representations.
`DataParallel` and `DistributedDataParallel` divide up a mini-batch, send each partition of examples to a GPU, compute gradients, and then average these gradients before backpropagating. But this means that each GPU is computing a loss with `N/n_gpus` examples and, therefore, `2 * (N/n_gpus - 1)` negatives per positive pair.
My question is: how might I use a loss function from this library, such as [NTXentLoss](https://kevinmusgrave.github.io/pytorch-metric-learning/losses/#ntxentloss), with `DataParallel` or `DistributedDataParallel` that avoids this "issue"? I.e. that allows me to use multiple GPUs while maintaining `2 * (N - 1)` negatives per positive pair.
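For what it's worth, recent versions of this library ship a distributed wrapper that gathers embeddings across processes before the loss is computed, which would restore the full `2 * (N - 1)` negatives; a hedged sketch (check `pytorch_metric_learning.utils.distributed` for the exact API in your version):
```python
import torch
from pytorch_metric_learning import losses
from pytorch_metric_learning.utils import distributed as pml_dist

# Inside each DistributedDataParallel worker:
loss_fn = losses.NTXentLoss(temperature=0.07)
loss_fn = pml_dist.DistributedLossWrapper(loss_fn)

embeddings = torch.randn(32, 128)    # this worker's embeddings
labels = torch.arange(16).repeat(2)  # two views per instance
loss = loss_fn(embeddings, labels)   # negatives gathered across GPUs
```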
|
closed
|
2020-02-21T00:31:38Z
|
2022-04-19T15:28:41Z
|
https://github.com/KevinMusgrave/pytorch-metric-learning/issues/10
|
[
"Frequently Asked Questions"
] |
JohnGiorgi
| 12 |
Textualize/rich
|
python
| 2,775 |
[BUG] Trailing newlines when using progress bar in notebook
|
- [X] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [X] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md).
**Describe the bug**
First of all, thanks a lot for this library!
Displaying a progress bar in a jupyter notebook leads to several trailing newlines being added, seemingly when the progress bar completes. The issue doesn't occur when running the same code in a terminal. Might be related to #2711.

<details>
<summary> Click to see source code </summary>
```python
from time import sleep

from rich.progress import Progress

with Progress() as progress:
    task = progress.add_task("Working", total=10, metrics="")
    for batch in range(10):
        sleep(0.1)
        progress.update(task, advance=1, refresh=True)

print("There's a lot of space above me")
```
</details>
**Platform**
<details>
<summary>Click to expand</summary>
```python
<class 'rich.console.Console'>: A high level console interface.
<console width=115 ColorSystem.TRUECOLOR>

    color_system = 'truecolor'
        encoding = 'utf-8'
            file = <ipykernel.iostream.OutStream object at 0x7f799c8dc9d0>
          height = 100
   is_alt_screen = False
is_dumb_terminal = False
  is_interactive = False
      is_jupyter = True
     is_terminal = False
  legacy_windows = False
        no_color = False
         options = ConsoleOptions(
                       size=ConsoleDimensions(width=115, height=100),
                       legacy_windows=False,
                       min_width=1,
                       max_width=115,
                       is_terminal=False,
                       encoding='utf-8',
                       max_height=100,
                       justify=None,
                       overflow=None,
                       no_wrap=False,
                       highlight=None,
                       markup=None,
                       height=None
                   )
           quiet = False
          record = False
        safe_box = True
            size = ConsoleDimensions(width=115, height=100)
       soft_wrap = False
          stderr = False
           style = None
        tab_size = 8
           width = 115

<class 'rich._windows.WindowsConsoleFeatures'>: Windows features available.
WindowsConsoleFeatures(vt=False, truecolor=False)

    truecolor = False
           vt = False

Environment Variables:
{
    'TERM': 'xterm-color',
    'COLORTERM': 'truecolor',
    'CLICOLOR': '1',
    'NO_COLOR': None,
    'TERM_PROGRAM': 'vscode',
    'COLUMNS': None,
    'LINES': None,
    'JUPYTER_COLUMNS': None,
    'JUPYTER_LINES': None,
    'JPY_PARENT_PID': '18491',
    'VSCODE_VERBOSE_LOGGING': None
}
platform="Linux"
```
</details>
|
closed
|
2023-01-25T12:31:20Z
|
2023-01-25T15:10:21Z
|
https://github.com/Textualize/rich/issues/2775
|
[
"Needs triage"
] |
Seon82
| 3 |
dask/dask
|
pandas
| 11,746 |
KeyError when indexing into Series after calling `to_series` on Scalar
|
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
In [1]: import dask.dataframe as dd
In [2]: import pandas as pd
In [3]: data = {"a": [1, 3, 2]}
In [4]: df = dd.from_pandas(pd.DataFrame(data), npartitions=2)
In [5]: df['a'].sum().to_series().fillna(0)[0].compute()
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/pandas/core/indexes/base.py:3805, in Index.get_loc(self, key)
3804 try:
-> 3805 return self._engine.get_loc(casted_key)
3806 except KeyError as err:
File index.pyx:167, in pandas._libs.index.IndexEngine.get_loc()
File index.pyx:196, in pandas._libs.index.IndexEngine.get_loc()
File pandas/_libs/hashtable_class_helper.pxi:2606, in pandas._libs.hashtable.Int64HashTable.get_item()
File pandas/_libs/hashtable_class_helper.pxi:2630, in pandas._libs.hashtable.Int64HashTable.get_item()
KeyError: 0
The above exception was the direct cause of the following exception:
KeyError Traceback (most recent call last)
Cell In[5], line 1
----> 1 df['a'].sum().to_series().fillna(0)[0].compute()
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/dataframe/dask_expr/_collection.py:489, in FrameBase.compute(self, fuse, concatenate, **kwargs)
487 out = out.repartition(npartitions=1)
488 out = out.optimize(fuse=fuse)
--> 489 return DaskMethodsMixin.compute(out, **kwargs)
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/base.py:374, in DaskMethodsMixin.compute(self, **kwargs)
350 def compute(self, **kwargs):
351 """Compute this dask collection
352
353 This turns a lazy Dask collection into its in-memory equivalent.
(...)
372 dask.compute
373 """
--> 374 (result,) = compute(self, traverse=False, **kwargs)
375 return result
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/dask/base.py:662, in compute(traverse, optimize_graph, scheduler, get, *args, **kwargs)
659 postcomputes.append(x.__dask_postcompute__())
661 with shorten_traceback():
--> 662 results = schedule(dsk, keys, **kwargs)
664 return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
File ~/polars-api-compat-dev/.venv/lib/python3.12/site-packages/pandas/core/indexes/base.py:3812, in Index.get_loc(self, key)
3807 if isinstance(casted_key, slice) or (
3808 isinstance(casted_key, abc.Iterable)
3809 and any(isinstance(x, slice) for x in casted_key)
3810 ):
3811 raise InvalidIndexError(key)
-> 3812 raise KeyError(key) from err
3813 except TypeError:
3814 # If we have a listlike key, _check_indexing_error will raise
3815 # InvalidIndexError. Otherwise we fall through and re-raise
3816 # the TypeError.
3817 self._check_indexing_error(key)
KeyError: 0
```
If I `compute` before the `[0]`, then I get:
```
In [6]: df['a'].sum().to_series().fillna(0).compute()
Out[6]:
0 6
dtype: int64
```
so I'd have expected the `[0]` to work
**Anything else we need to know?**:
Spotted in Narwhals.
**Environment**:
- Dask version: 2025.1.0
- Python version: 3.12
- Operating System: linux
- Install method (conda, pip, source): pip
|
closed
|
2025-02-14T09:26:09Z
|
2025-02-18T15:24:17Z
|
https://github.com/dask/dask/issues/11746
|
[
"dataframe",
"bug"
] |
MarcoGorelli
| 4 |
modelscope/data-juicer
|
data-visualization
| 126 |
[Bug]: When using some of the filter operators, a `datasets.builder.DatasetGenerationError: An error occurred while generating the dataset` error occurs; I'd like to know the cause, thanks.
|
### Before Reporting
- [X] I have pulled the latest code of main branch to run again and the bug still existed.
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template)
### Search before reporting
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs.
### OS
ubuntu
### Installation Method
source
### Data-Juicer Version
v0.1.2
### Python Version
3.10.11
### Describe the bug


### To Reproduce
I only edited the analyser.yaml file (the input data is a folder containing json files as well as txt and .sh files).
### Configs
_No response_
### Logs
_No response_
### Screenshots
_No response_
### Additional
_No response_
|
closed
|
2023-12-08T08:11:18Z
|
2023-12-26T08:06:27Z
|
https://github.com/modelscope/data-juicer/issues/126
|
[
"bug"
] |
hitszxs
| 7 |
Evil0ctal/Douyin_TikTok_Download_API
|
api
| 49 |
Douyin audio: No BGM found
|
Hi, I tried several Douyin links and the audio never seems to be retrieved; it shows `No BGM found`. But the audio in those videos is not original; it is third-party audio.
|
closed
|
2022-07-01T09:25:26Z
|
2022-11-09T21:04:15Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/49
|
[
"invalid",
"Fixed"
] |
joyzhang2022
| 10 |
CorentinJ/Real-Time-Voice-Cloning
|
python
| 1,043 |
quality is insanely horrid
|
Even when using long recordings, uncompressed .wav files, and short text inputs, the output is just hissing and weird glitchiness.
|
closed
|
2022-03-23T08:33:31Z
|
2022-03-27T15:17:55Z
|
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1043
|
[] |
400lbhacker
| 1 |
aiortc/aiortc
|
asyncio
| 124 |
Can't build the thing, requires outdated Visual C++
|
C:\MyProjects\___MECHANIKOS\GPUCloudDeepLearningResearch>pip install aiohttp aiortc opencv-python
Requirement already satisfied: aiohttp in c:\python36-32\lib\site-packages (3.5.4)
Collecting aiortc
Using cached https://files.pythonhosted.org/packages/ba/ae/35360b00e9f03103bebb553c37220f721f766f90490b4203912cfadf0be2/aiortc-0.9.18.tar.gz
Collecting opencv-python
Using cached https://files.pythonhosted.org/packages/49/4b/ad55a2e2c309fb698e1283e687129e0892c7864de9a4424c4ff01ba0a3bb/opencv_python-4.0.0.21-cp36-cp36m-win32.whl
Requirement already satisfied: chardet<4.0,>=2.0 in c:\python36-32\lib\site-packages (from aiohttp) (3.0.4)
Requirement already satisfied: typing-extensions>=3.6.5; python_version < "3.7" in c:\python36-32\lib\site-packages (from aiohttp) (3.7.2)
Requirement already satisfied: idna-ssl>=1.0; python_version < "3.7" in c:\python36-32\lib\site-packages (from aiohttp) (1.1.0)
Requirement already satisfied: yarl<2.0,>=1.0 in c:\python36-32\lib\site-packages (from aiohttp) (1.3.0)
Requirement already satisfied: async-timeout<4.0,>=3.0 in c:\python36-32\lib\site-packages (from aiohttp) (3.0.1)
Requirement already satisfied: multidict<5.0,>=4.0 in c:\python36-32\lib\site-packages (from aiohttp) (4.5.2)
Requirement already satisfied: attrs>=17.3.0 in c:\python36-32\lib\site-packages (from aiohttp) (18.2.0)
Requirement already satisfied: aioice<0.7.0,>=0.6.12 in c:\python36-32\lib\site-packages (from aiortc) (0.6.12)
Collecting av<7.0.0,>=6.1.0 (from aiortc)
Using cached https://files.pythonhosted.org/packages/15/80/edc9e110b2896ebe16863051e68bd4786efeda71ce94b81a048d146062cc/av-6.1.0.tar.gz
Requirement already satisfied: cffi>=1.0.0 in c:\users\fruitfulapproach\appdata\roaming\python\python36\site-packages (from aiortc) (1.11.5)
Collecting crc32c (from aiortc)
Collecting cryptography>=2.2 (from aiortc)
Using cached https://files.pythonhosted.org/packages/af/d7/9e6442de1aa61d3268e5abd7fb73b130cfc2e42439a7db42248653844593/cryptography-2.4.2-cp36-cp36m-win32.whl
Collecting pyee (from aiortc)
Using cached https://files.pythonhosted.org/packages/8e/06/10c18578e2d8b9cf9902f424f86d433c647ca55e82293100f53e6c0afab4/pyee-5.0.0-py2.py3-none-any.whl
Collecting pylibsrtp>=0.5.6 (from aiortc)
Using cached https://files.pythonhosted.org/packages/77/21/0a65a6ea02879fd4af7f6e137cb4fb14a64f72f8557112408462fc43124f/pylibsrtp-0.6.0-cp36-cp36m-win32.whl
Collecting pyopenssl (from aiortc)
Using cached https://files.pythonhosted.org/packages/96/af/9d29e6bd40823061aea2e0574ccb2fcf72bfd6130ce53d32773ec375458c/pyOpenSSL-18.0.0-py2.py3-none-any.whl
Requirement already satisfied: numpy>=1.11.3 in c:\python36-32\lib\site-packages (from opencv-python) (1.14.5)
Requirement already satisfied: idna>=2.0 in c:\python36-32\lib\site-packages (from idna-ssl>=1.0; python_version < "3.7"->aiohttp) (2.6)
Requirement already satisfied: netifaces in c:\python36-32\lib\site-packages (from aioice<0.7.0,>=0.6.12->aiortc) (0.10.9)
Requirement already satisfied: pycparser in c:\users\fruitfulapproach\appdata\roaming\python\python36\site-packages (from cffi>=1.0.0->aiortc) (2.19)
Collecting asn1crypto>=0.21.0 (from cryptography>=2.2->aiortc)
Using cached https://files.pythonhosted.org/packages/ea/cd/35485615f45f30a510576f1a56d1e0a7ad7bd8ab5ed7cdc600ef7cd06222/asn1crypto-0.24.0-py2.py3-none-any.whl
Requirement already satisfied: six>=1.4.1 in c:\python36-32\lib\site-packages (from cryptography>=2.2->aiortc) (1.11.0)
Building wheels for collected packages: aiortc, av
Running setup.py bdist_wheel for aiortc ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\aiortc\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\FRUITF~1\AppData\Local\Temp\pip-wheel-k1arcvoo --python-tag cp36:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\aiortc
copying aiortc\clock.py -> build\lib.win32-3.6\aiortc
copying aiortc\events.py -> build\lib.win32-3.6\aiortc
copying aiortc\exceptions.py -> build\lib.win32-3.6\aiortc
copying aiortc\jitterbuffer.py -> build\lib.win32-3.6\aiortc
copying aiortc\mediastreams.py -> build\lib.win32-3.6\aiortc
copying aiortc\rate.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcconfiguration.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcdatachannel.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcdtlstransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcicetransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcpeerconnection.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpparameters.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpreceiver.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtpsender.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcrtptransceiver.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcsctptransport.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtcsessiondescription.py -> build\lib.win32-3.6\aiortc
copying aiortc\rtp.py -> build\lib.win32-3.6\aiortc
copying aiortc\sdp.py -> build\lib.win32-3.6\aiortc
copying aiortc\stats.py -> build\lib.win32-3.6\aiortc
copying aiortc\utils.py -> build\lib.win32-3.6\aiortc
copying aiortc\__init__.py -> build\lib.win32-3.6\aiortc
creating build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\g711.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\h264.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\opus.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\vpx.py -> build\lib.win32-3.6\aiortc\codecs
copying aiortc\codecs\__init__.py -> build\lib.win32-3.6\aiortc\codecs
creating build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\media.py -> build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\signaling.py -> build\lib.win32-3.6\aiortc\contrib
copying aiortc\contrib\__init__.py -> build\lib.win32-3.6\aiortc\contrib
running build_ext
generating cffi module 'build\\temp.win32-3.6\\Release\\aiortc.codecs._vpx.c'
creating build\temp.win32-3.6
creating build\temp.win32-3.6\Release
generating cffi module 'build\\temp.win32-3.6\\Release\\aiortc.codecs._opus.c'
building 'aiortc.codecs._opus' extension
creating build\temp.win32-3.6\Release\build
creating build\temp.win32-3.6\Release\build\temp.win32-3.6
creating build\temp.win32-3.6\Release\build\temp.win32-3.6\Release
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Ic:\python36-32\include -Ic:\python36-32\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcbuild\temp.win32-3.6\Release\aiortc.codecs._opus.c /Fobuild\temp.win32-3.6\Release\build\temp.win32-3.6\Release\aiortc.codecs._opus.obj
aiortc.codecs._opus.c
build\temp.win32-3.6\Release\aiortc.codecs._opus.c(493): fatal error C1083: Cannot open include file: 'opus/opus.h': No such file or directory
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\cl.exe' failed with exit status 2
----------------------------------------
Failed building wheel for aiortc
Running setup.py clean for aiortc
Running setup.py bdist_wheel for av ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d C:\Users\FRUITF~1\AppData\Local\Temp\pip-wheel-yn9ebcj9 --python-tag cp36:
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\av
copying av\datasets.py -> build\lib.win32-3.6\av
copying av\deprecation.py -> build\lib.win32-3.6\av
copying av\__init__.py -> build\lib.win32-3.6\av
copying av\__main__.py -> build\lib.win32-3.6\av
creating build\lib.win32-3.6\scratchpad
copying scratchpad\audio.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\audio_player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\average.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode_threads.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\dump_format.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode_frames.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\experimental.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\frame_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\glproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\graph.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\merge-filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\qtproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\remux.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\resource_use.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\save_subtitles.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\second_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\seekmany.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\show_frames_opencv.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\__init__.py -> build\lib.win32-3.6\scratchpad
creating build\lib.win32-3.6\av\audio
copying av\audio\__init__.py -> build\lib.win32-3.6\av\audio
creating build\lib.win32-3.6\av\codec
copying av\codec\__init__.py -> build\lib.win32-3.6\av\codec
creating build\lib.win32-3.6\av\container
copying av\container\__init__.py -> build\lib.win32-3.6\av\container
creating build\lib.win32-3.6\av\data
copying av\data\__init__.py -> build\lib.win32-3.6\av\data
creating build\lib.win32-3.6\av\filter
copying av\filter\__init__.py -> build\lib.win32-3.6\av\filter
creating build\lib.win32-3.6\av\subtitles
copying av\subtitles\__init__.py -> build\lib.win32-3.6\av\subtitles
creating build\lib.win32-3.6\av\video
copying av\video\__init__.py -> build\lib.win32-3.6\av\video
running build_ext
running config
writing build\temp.win32-3.6\Release\include\pyav\config.h
running cythonize
building 'av.buffer' extension
creating build\temp.win32-3.6\Release\src
creating build\temp.win32-3.6\Release\src\av
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Iinclude -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include -Ic:\python36-32\include -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcsrc\av\buffer.c /Fobuild\temp.win32-3.6\Release\src\av\buffer.obj
buffer.c
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x86" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\lib\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x86" avformat.lib avcodec.lib swresample.lib swscale.lib avdevice.lib avfilter.lib avutil.lib /EXPORT:PyInit_buffer build\temp.win32-3.6\Release\src\av\buffer.obj /OUT:build\lib.win32-3.6\av\buffer.cp36-win32.pyd /IMPLIB:build\temp.win32-3.6\Release\src\av\buffer.cp36-win32.lib /OPT:NOREF
LINK : fatal error LNK1181: cannot open input file 'avformat.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\link.exe' failed with exit status 1181
----------------------------------------
Failed building wheel for av
Running setup.py clean for av
Failed to build aiortc av
Installing collected packages: av, crc32c, asn1crypto, cryptography, pyee, pylibsrtp, pyopenssl, aiortc, opencv-python
Running setup.py install for av ... error
Complete output from command c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\FRUITF~1\AppData\Local\Temp\pip-record-foc3l784\install-record.txt --single-version-externally-managed --compile:
running install
running build
running build_py
creating build
creating build\lib.win32-3.6
creating build\lib.win32-3.6\av
copying av\datasets.py -> build\lib.win32-3.6\av
copying av\deprecation.py -> build\lib.win32-3.6\av
copying av\__init__.py -> build\lib.win32-3.6\av
copying av\__main__.py -> build\lib.win32-3.6\av
creating build\lib.win32-3.6\scratchpad
copying scratchpad\audio.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\audio_player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\average.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\cctx_encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\decode_threads.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\dump_format.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\encode_frames.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\experimental.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\frame_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\glproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\graph.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\merge-filmstrip.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\player.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\qtproxy.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\remux.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\resource_use.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\save_subtitles.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\second_seek_example.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\seekmany.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\show_frames_opencv.py -> build\lib.win32-3.6\scratchpad
copying scratchpad\__init__.py -> build\lib.win32-3.6\scratchpad
creating build\lib.win32-3.6\av\audio
copying av\audio\__init__.py -> build\lib.win32-3.6\av\audio
creating build\lib.win32-3.6\av\codec
copying av\codec\__init__.py -> build\lib.win32-3.6\av\codec
creating build\lib.win32-3.6\av\container
copying av\container\__init__.py -> build\lib.win32-3.6\av\container
creating build\lib.win32-3.6\av\data
copying av\data\__init__.py -> build\lib.win32-3.6\av\data
creating build\lib.win32-3.6\av\filter
copying av\filter\__init__.py -> build\lib.win32-3.6\av\filter
creating build\lib.win32-3.6\av\subtitles
copying av\subtitles\__init__.py -> build\lib.win32-3.6\av\subtitles
creating build\lib.win32-3.6\av\video
copying av\video\__init__.py -> build\lib.win32-3.6\av\video
running build_ext
running config
writing build\temp.win32-3.6\Release\include\pyav\config.h
running cythonize
building 'av.buffer' extension
creating build\temp.win32-3.6\Release\src
creating build\temp.win32-3.6\Release\src\av
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\cl.exe /c /nologo /Ox /W3 /GL /DNDEBUG /MD -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Dinline=__inline -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include -Iinclude -Ic:\python36-32\include -Ic:\python36-32\include -Ibuild\temp.win32-3.6\Release\include "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.17763.0\cppwinrt" /Tcsrc\av\buffer.c /Fobuild\temp.win32-3.6\Release\src\av\buffer.obj
buffer.c
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\bin\HostX86\x86\link.exe /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\python36-32\PCbuild\win32 /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\libs /LIBPATH:c:\python36-32\PCbuild\win32 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\ATLMFC\lib\x86" "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC\14.16.27023\lib\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\lib\um\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\ucrt\x86" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.17763.0\um\x86" avformat.lib avfilter.lib avutil.lib swresample.lib avcodec.lib swscale.lib avdevice.lib /EXPORT:PyInit_buffer build\temp.win32-3.6\Release\src\av\buffer.obj /OUT:build\lib.win32-3.6\av\buffer.cp36-win32.pyd /IMPLIB:build\temp.win32-3.6\Release\src\av\buffer.cp36-win32.lib /OPT:NOREF
LINK : fatal error LNK1181: cannot open input file 'avformat.lib'
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Community\\VC\\Tools\\MSVC\\14.16.27023\\bin\\HostX86\\x86\\link.exe' failed with exit status 1181
----------------------------------------
Command "c:\python36-32\python.exe -u -c "import setuptools, tokenize;__file__='C:\\Users\\FRUITF~1\\AppData\\Local\\Temp\\pip-install-5l34ichq\\av\\setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record C:\Users\FRUITF~1\AppData\Local\Temp\pip-record-foc3l784\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\FRUITF~1\AppData\Local\Temp\pip-install-5l34ichq\av\
|
closed
|
2019-01-16T23:26:45Z
|
2019-05-27T13:37:34Z
|
https://github.com/aiortc/aiortc/issues/124
|
[
"invalid"
] |
enjoysmath
| 2 |
pallets-eco/flask-sqlalchemy
|
flask
| 801 |
Query sends an INSERT request with back_populates
|
I am not sure whether this is a bug, but it seems strange that running a query would send an INSERT request.
Here is a repository to replicate the bug. https://github.com/Noezor/example_flask_sqlalchemy_bug/
### Expected Behavior
For the models :
```python
from config import db

class Parent(db.Model):
    __tablename__ = "parent"
    id = db.Column(db.Integer(), primary_key = True)
    name = db.Column(db.String, unique = True)
    children = db.relationship("Child", back_populates="parent")

class Child(db.Model):
    __tablename__ = "child"
    id = db.Column(db.Integer(), primary_key = True)
    name = db.Column(db.String(32), unique = True)
    parent_id = db.Column(db.Integer, db.ForeignKey("parent.id"))
    parent = db.relationship("Parent", back_populates="children")
```
And now the testscript.
```python
from config import db
from model import Child, Parent

parent = Parent(name='John')
if not Parent.query.filter(Parent.name == parent.name).one_or_none():
    db.session.add(parent)
    db.session.commit()
else:
    parent = Parent.query.filter(Parent.name == parent.name).one_or_none()

child1 = Child(name="Toto", parent=parent)
if not Child.query.filter(Child.name == "Toto").one_or_none():
    db.session.add(child1)
    db.session.commit()
else:
    child1 = Child.query.filter(Child.name == "Toto").one_or_none()

print("success")
```
At first launch, the program should work fine. At second launch, once the database is populated, there should not be a problem either, as the query will detect that the database already contains the added elements.
### Actual Behavior
At first launch, everything works fine. At second launch, however, the line `if not Child.query.filter(Child.name == "Toto").one_or_none():` sends an INSERT request.
```pytb
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,552 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,553 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2020-02-01 15:23:28,554 INFO sqlalchemy.engine.base.Engine ()
2020-02-01 15:23:28,555 INFO sqlalchemy.engine.base.Engine BEGIN (implicit)
2020-02-01 15:23:28,556 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,557 INFO sqlalchemy.engine.base.Engine ('John',)
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine SELECT parent.id AS parent_id, parent.name AS parent_name
FROM parent
WHERE parent.name = ?
2020-02-01 15:23:28,560 INFO sqlalchemy.engine.base.Engine ('John',)
**2020-02-01 15:23:28,567 INFO sqlalchemy.engine.base.Engine INSERT INTO child (name, parent_id) VALUES (?, ?)**
2020-02-01 15:23:28,568 INFO sqlalchemy.engine.base.Engine ('Toto', 1)
2020-02-01 15:23:28,569 INFO sqlalchemy.engine.base.Engine ROLLBACK
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlite3.IntegrityError: UNIQUE constraint failed: child.name
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/ptvsd_launcher.py", line 43, in <module>
main(ptvsdArgs)
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 432, in main
run()
File "/home/pionn/.vscode-insiders/extensions/ms-python.python-2020.1.58038/pythonFiles/lib/python/old_ptvsd/ptvsd/__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
**File "/home/pionn/minimum_bug_sqlalchemy/test.py", line 13, in <module>
if not Child.query.filter(Child.name == "Toto").one_or_none() :**
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2784, in one_or_none
ret = list(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/query.py", line 2854, in __iter__
self.session._autoflush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1407, in _autoflush
util.raise_from_cause(e)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 1397, in _autoflush
self.flush()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2171, in flush
self._flush(objects)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2291, in _flush
transaction.rollback(_capture_exception=True)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/langhelpers.py", line 66, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 187, in reraise
raise value
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/session.py", line 2255, in _flush
flush_context.execute()
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 389, in execute
rec.execute(self)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/unitofwork.py", line 548, in execute
uow
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 181, in save_obj
mapper, table, insert)
File "/usr/lib/python3/dist-packages/sqlalchemy/orm/persistence.py", line 835, in _emit_insert_statements
execute(statement, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 945, in execute
return meth(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 263, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1053, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1189, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1402, in _handle_dbapi_exception
exc_info
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1182, in _execute_context
context)
File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 470, in do_execute
cursor.execute(statement, parameters)
sqlalchemy.exc.IntegrityError: (raised as a result of Query-invoked autoflush; consider using a session.no_autoflush block if this flush is occurring prematurely) (sqlite3.IntegrityError) UNIQUE constraint failed: child.name [SQL: 'INSERT INTO child (name, parent_id) VALUES (?, ?)'] [parameters: ('Toto', 1)]
```
I believe it happens through back_populates, as the "bug" disappears if it is removed. Same if I don't specify a parent for the child.
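For what it's worth, the traceback itself hints at a workaround; a minimal sketch, assuming the same `db.session` and models as above:
```python
# Query inside a no_autoflush block so that the pending child1 is not
# flushed (and INSERTed) before the SELECT runs.
with db.session.no_autoflush:
    existing = Child.query.filter(Child.name == "Toto").one_or_none()
```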
### Environment
* Operating system: Ubuntu 18.14
* Python version: 3.6.3
* Flask-SQLAlchemy version: 2.4.1
* SQLAlchemy version: 1.3.12
|
closed
|
2020-02-01T14:49:13Z
|
2020-12-05T20:21:37Z
|
https://github.com/pallets-eco/flask-sqlalchemy/issues/801
|
[] |
Noezor
| 1 |
pydata/bottleneck
|
numpy
| 201 |
Leaks memory when input is not a numpy array
|
If you run the following program you see that `nansum` leaks all the memory it is given when passed a Pandas object. If it is passed the ndarray underlying the Pandas object instead, there is no leak:
```
import gc
import os

import bottleneck
import numpy as np
import pandas as pd
import psutil

def f():
    x = np.zeros(10*1024*1024, dtype='f4')
    # Leaks 40MB/iteration
    bottleneck.nansum(pd.Series(x))
    # No leak:
    #bottleneck.nansum(x)

process = psutil.Process(os.getpid())

def _get_usage():
    gc.collect()
    # note: .private is a Windows-specific field of memory_info()
    return process.memory_info().private / (1024*1024)

last_usage = _get_usage()
print(last_usage)
for _ in range(10):
    f()
    usage = _get_usage()
    print(usage - last_usage)
    last_usage = usage
```
This affects not just `nansum`, but apparently all the reduction functions (with or without `axis` specified), and at least some other functions like `move_max`.
I'm not completely sure why this happens, but maybe it's because `PyArray_FROM_O` is allocating a new array in this case, and the ref count of that is not being decremented by anyone? https://github.com/kwgoodman/bottleneck/blob/master/bottleneck/src/reduce_template.c#L1237
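Until that is tracked down, a workaround sketch consistent with the observation above:
```python
# Handing bottleneck the underlying ndarray avoids the temporary array
# (and hence the leak) created when a pandas object is converted.
bottleneck.nansum(pd.Series(x).values)
```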
I'm using Bottleneck 1.2.1 with Pandas 0.23.1. `sys.version` is `3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 18:41:36) [MSC v.1900 64 bit (AMD64)]`.
|
closed
|
2019-01-02T17:16:42Z
|
2022-11-16T13:08:44Z
|
https://github.com/pydata/bottleneck/issues/201
|
[] |
batterseapower
| 15 |
TencentARC/GFPGAN
|
pytorch
| 245 |
error: (-215:Assertion failed) !buf.empty() in function 'imdecode_'
|
First I encountered the following error:

Then I cast "quality" to int as suggested in [https://github.com/TencentARC/GFPGAN/issues/93](url).
But I got another error, as follows:

|
closed
|
2022-08-29T14:45:47Z
|
2022-09-16T12:14:21Z
|
https://github.com/TencentARC/GFPGAN/issues/245
|
[] |
dongfengxijian
| 1 |
proplot-dev/proplot
|
data-visualization
| 239 |
Disable `DiscreteNorm` by default for `imshow` plots?
|
<!-- Thanks for helping us make proplot a better package! If this is a bug report, please use the template provided below. If this is a feature request, you can delete the template text (just try to be descriptive with your request). -->
### Description
In all the examples available for this package and in my tests, the colorbar's maximum number of bins is fixed at 10. There doesn't seem to be any way to change this.
### Steps to reproduce
A "[Minimal, Complete and Verifiable Example](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports)" will make it much easier for maintainers to help you.
```python
fig, ax = proplot.subplots()
pc = plt.imshow(p.image, extent=[file.bounds[0], file.bounds[1], file.bounds[2], file.bounds[3]], cmap=my_cmap, vmin=0.5, vmax=1.5)
plt.colorbar(pc, ax=ax)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_xlabel(r"$x/R$")
ax.set_ylabel(r"$y/R$")
plt.show()
```
**Actual behavior**: [What actually happened]

### Equivalent steps in matplotlib
Please make sure this bug is related to a specific proplot feature. If you're not sure, try to replicate it with the [native matplotlib API](https://matplotlib.org/3.1.1/api/index.html). Matplotlib bugs belong on the [matplotlib github page](https://github.com/matplotlib/matplotlib).
```python
fig, ax = plt.subplots()
pc = plt.imshow(p.image, extent=[file.bounds[0], file.bounds[1], file.bounds[2], file.bounds[3]], cmap=my_cmap, vmin=0.5, vmax=1.5)
plt.colorbar(pc, ax=ax)
ax.set_xlim([0, 1])
ax.set_ylim([0, 1])
ax.set_xlabel(r"$x/R$")
ax.set_ylabel(r"$y/R$")
plt.show()
```

|
closed
|
2020-12-16T13:52:14Z
|
2021-08-20T23:03:56Z
|
https://github.com/proplot-dev/proplot/issues/239
|
[
"duplicate",
"enhancement",
"feature"
] |
GonzaloSaez
| 9 |
graphdeco-inria/gaussian-splatting
|
computer-vision
| 594 |
Render on certain camera position
|
Hi, thank you for your work. I want to do real-to-sim on my project. How could I render images given a certain camera position (like render.py does but I want to modify the camera position) without using the viewer you provided?
|
open
|
2024-01-04T19:28:23Z
|
2024-09-19T15:40:05Z
|
https://github.com/graphdeco-inria/gaussian-splatting/issues/594
|
[] |
lyuyiqi
| 1 |
fbdesignpro/sweetviz
|
pandas
| 111 |
Fail to allocate bitmap - Process finished with exit code -2147483645
|
I am processing 4 datasets with different numbers of variables, each with over 4 million rows. When processing the one with 52 variables, I get this error and the Python console stops.
|
closed
|
2022-02-25T12:53:22Z
|
2023-10-04T16:36:34Z
|
https://github.com/fbdesignpro/sweetviz/issues/111
|
[
"can't repro issue"
] |
jonathanecm
| 1 |
tflearn/tflearn
|
tensorflow
| 996 |
ImportError: cannot import name 'add_arg_scope'
|
I use tensorflow 1.5.0rc with the 17flowers example:
`Traceback (most recent call last):
File "E:/ml/flower/flower.py", line 1, in <module>
import tflearn
File "D:\anconda\lib\site-packages\tflearn\__init__.py", line 4, in <module>
from . import config
File "D:\anconda\lib\site-packages\tflearn\config.py", line 5, in <module>
from .variables import variable
File "D:\anconda\lib\site-packages\tflearn\variables.py", line 7, in <module>
from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ImportError: cannot import name 'add_arg_scope'`
In tensorflow 1.5.0rc, `contrib` -> `framework` -> `python` does not contain `add_arg_scope`.
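A compatibility-shim sketch, assuming the symbol merely moved one module deeper in this release:
```python
try:
    from tensorflow.contrib.framework.python.ops import add_arg_scope
except ImportError:
    # TF 1.5.0rc keeps it in the arg_scope submodule
    from tensorflow.contrib.framework.python.ops.arg_scope import add_arg_scope
```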
|
open
|
2018-01-12T03:17:10Z
|
2018-01-13T06:07:38Z
|
https://github.com/tflearn/tflearn/issues/996
|
[] |
nightli110
| 2 |
robotframework/robotframework
|
automation
| 4,845 |
'Set Library Search Order' not working when same keyword is used in both a Robot resources and a Python library
|
Hello,
I'm using RF 6.1
Here my test:
TATA.py
```
from robot.api.deco import keyword
from robot.api import logger  # needed for the logger.info call below

class TATA:
    @keyword("Test")
    def Test(self):
        logger.info('Tata')
```
TOTO.robot
```
*** Settings ***
Documentation     Resource for manipulating the VirtualBox VM
Library           BuiltIn

*** Keywords ***
Test
    [Documentation]    Open SSH connection To Distant Host (not applicable for VBOX)
    Log    Toto    console=True
```
bac_a_sable.robot:
```
*** Settings ***
Library     ${PROJECT}/libs/TATA.py
Resource    ${PROJECT}/resources/TOTO.robot

*** Test Cases ***
Mytest
    Set Library Search Order    TOTO    TATA
    Test
    Set Library Search Order    TATA    TOTO
    Test
```

Current result: "Toto" is logged in both cases.
Expected result: "Toto" logged by the first `Test` call and "Tata" logged by the second.
This issue seems to be located in robotframework/src/robot/running/namespace.py, lines 173-175:
```
def _get_implicit_runner(self, name):
    # Resource files are always consulted before libraries here,
    # regardless of the configured library search order.
    return (self._get_runner_from_resource_files(name) or
            self._get_runner_from_libraries(name))
```
|
open
|
2023-08-24T14:23:40Z
|
2023-08-24T21:07:27Z
|
https://github.com/robotframework/robotframework/issues/4845
|
[
"enhancement",
"priority: medium"
] |
MaximeAbout
| 1 |
QingdaoU/OnlineJudge
|
django
| 216 |
The DEMO cannot be opened
|
```
dig v2.qduoj.com @9.9.9.9 +tcp
; <<>> DiG 9.10.3-P4-Raspbian <<>> v2.qduoj.com @9.9.9.9 +tcp
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 14719
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;v2.qduoj.com. IN A
;; AUTHORITY SECTION:
qduoj.com. 600 IN SOA dns9.hichina.com. hostmaster.hichina.com. 2015091611 3600 1200 3600 360
;; Query time: 918 msec
;; SERVER: 9.9.9.9#53(9.9.9.9)
;; WHEN: Sat Jan 12 22:44:52 CST 2019
;; MSG SIZE rcvd: 101
```
|
closed
|
2019-01-12T14:46:14Z
|
2019-01-12T15:14:46Z
|
https://github.com/QingdaoU/OnlineJudge/issues/216
|
[] |
Zhang-Siyang
| 1 |
neuml/txtai
|
nlp
| 257 |
Issue importing from txtai.pipeline
|
Hello there, I am trying to follow the example [here](https://neuml.github.io/txtai/pipeline/train/hfonnx/) in a Google Colab setup. I keep getting an error when I try to do any import such as
`from txtai.pipeline import xyz`
In this case:
`from txtai.pipeline import HFOnnx, Labels`
Here's the end of the traceback I get:
```
[<ipython-input-6-429aa8896e23>](https://localhost:8080/#) in <module>()
----> 1 from txtai.pipeline import HFOnnx, Labels
2
3 # Model path
4 path = "distilbert-base-uncased-finetuned-sst-2-english"
5
[/usr/local/lib/python3.7/dist-packages/txtai/pipeline/__init__.py](https://localhost:8080/#) in <module>()
8 from .factory import PipelineFactory
9 from .hfmodel import HFModel
---> 10 from .hfpipeline import HFPipeline
11 from .image import *
12 from .nop import Nop
[/usr/local/lib/python3.7/dist-packages/txtai/pipeline/hfpipeline.py](https://localhost:8080/#) in <module>()
3 """
4
----> 5 from transformers import pipeline
6
7 from ..models import Models
/usr/lib/python3.7/importlib/_bootstrap.py in _handle_fromlist(module, fromlist, import_, recursive)
[/usr/local/lib/python3.7/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in __getattr__(self, name)
845 value = self._get_module(name)
846 elif name in self._class_to_module.keys():
--> 847 module = self._get_module(self._class_to_module[name])
848 value = getattr(module, name)
849 else:
[/usr/local/lib/python3.7/dist-packages/transformers/utils/import_utils.py](https://localhost:8080/#) in _get_module(self, module_name)
859 raise RuntimeError(
860 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its traceback):\n{e}"
--> 861 ) from e
862
863 def __reduce__(self):
RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback):
module 'PIL.Image' has no attribute 'Resampling'
```
I'm not sure what I am doing wrong, but I tried installing following the example in the [notebook](https://colab.research.google.com/github/neuml/txtai/blob/master/examples/18_Export_and_run_models_with_ONNX.ipynb) which was:
`pip install datasets git+https://github.com/neuml/txtai#egg=txtai[pipeline]`
I also tried the following:
`pip install datasets txtai[all]`
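For reference, `PIL.Image.Resampling` only exists from Pillow 9.1.0 onward, so checking the installed Pillow usually explains this error:
```python
import PIL
print(PIL.__version__)            # Image.Resampling needs Pillow >= 9.1.0
from PIL.Image import Resampling  # fails on older Pillow versions
```
Upgrading Pillow in the Colab runtime (and restarting it) should therefore unblock the import.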
|
closed
|
2022-04-06T23:02:20Z
|
2022-04-11T00:18:22Z
|
https://github.com/neuml/txtai/issues/257
|
[] |
HAKSOAT
| 2 |
RobertCraigie/prisma-client-py
|
pydantic
| 704 |
connect() fails if /etc/os-release is not available
|
## Bug description
App fails if `/etc/os-release` is missing.
## How to reproduce
```
sudo mv /etc/os-release /etc/os-release.old
#start app
```
## Environment & setup
<!-- In which environment does the problem occur -->
- OS: Cloud Linux OS (4.18.0-372.19.1.lve.el8.x86_64)
- Database: SQLite]
- Python version: 3.0
- Prisma version:
```
INFO: Waiting for application startup.
cat: /etc/os-release: No such file or directory
ERROR: Traceback (most recent call last):
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/starlette/routing.py", line 671, in lifespan
async with self.lifespan_context(app):
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/starlette/routing.py", line 566, in __aenter__
await self._router.startup()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/starlette/routing.py", line 648, in startup
await handler()
File "/home/wodorec1/wodore/tests/prisma/main.py", line 13, in startup
await prisma.connect()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/client.py", line 252, in connect
await self.__engine.connect(
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/engine/query.py", line 128, in connect
self.file = file = self._ensure_file()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/engine/query.py", line 116, in _ensure_file
return utils.ensure(BINARY_PATHS.query_engine)
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/engine/utils.py", line 72, in ensure
name = query_engine_name()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/engine/utils.py", line 36, in query_engine_name
return f'prisma-query-engine-{platform.check_for_extension(platform.binary_platform())}'
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/binaries/platform.py", line 56, in binary_platform
distro = linux_distro()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/binaries/platform.py", line 23, in linux_distro
distro_id, distro_id_like = _get_linux_distro_details()
File "/home/wodorec1/virtualenv/wodore/tests/prisma/3.9/lib/python3.9/site-packages/prisma/binaries/platform.py", line 37, in _get_linux_distro_details
process = subprocess.run(
File "/opt/alt/python39/lib64/python3.9/subprocess.py", line 528, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['cat', '/etc/os-release']' returned non-zero exit status 1.
```
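A hedged sketch of a more tolerant detection step than shelling out to `cat /etc/os-release` (the function name is illustrative, not prisma's actual code):
```python
from pathlib import Path

def read_os_release() -> str:
    # Return the file contents, or an empty string so that callers can
    # fall back to a generic linux binary platform.
    p = Path("/etc/os-release")
    return p.read_text() if p.exists() else ""
```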
|
open
|
2023-02-18T16:44:57Z
|
2023-02-18T21:31:41Z
|
https://github.com/RobertCraigie/prisma-client-py/issues/704
|
[
"bug/2-confirmed",
"kind/bug",
"level/intermediate",
"priority/medium",
"topic: binaries"
] |
TBxy
| 0 |
agronholm/anyio
|
asyncio
| 781 |
create_unix_listener doesn't accept abstract namespace sockets
|
### Things to check first
- [X] I have searched the existing issues and didn't find my bug already reported there
- [X] I have checked that my bug is still present in the latest release
### AnyIO version
4.4.0
### Python version
3.10
### What happened?
On anyio version 3.7 at least it's possible to pass a path starting with a null byte to `anyio.create_unix_listener` to create an abstract unix domain socket that isn't backed by a real file and is cleaned up automatically on program close.
In version 4.4.0 (and probably earlier versions too) this gives a `ValueError: embedded null byte` when the code tries to call `os.stat` on the path.
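For comparison, the stdlib accepts abstract-namespace addresses without touching the filesystem, which is the behavior the listener used to inherit (a minimal sketch):
```python
import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.bind('\0abstract-path')  # no file is created, so no os.stat() is possible
```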
### How can we reproduce the bug?
```
import anyio
anyio.run(anyio.create_unix_listener, '\0abstract-path')
```
I haven't tested on the trio backend or other versions, but may do so later.
Assuming this is something that should be fixed, I may try creating a PR for it as well.
|
closed
|
2024-09-04T09:28:40Z
|
2024-09-05T15:21:30Z
|
https://github.com/agronholm/anyio/issues/781
|
[
"bug"
] |
tapetersen
| 0 |
tqdm/tqdm
|
jupyter
| 963 |
What is ... (more hidden) ...?
|
Hi!
I want to ask what is this for?
https://github.com/tqdm/tqdm/blob/89ee14464b963ea9ed8dafe64a2dad8586cf9a29/tqdm/std.py#L1467
I've searched the issues and online and haven't found anything related to this.
I first encountered this when I was running an experiment in tmux, and I thought it was tmux that was suppressing output. I tried to search online but couldn't find anything related to it; it turns out the phrase `... (more hidden) ...` comes directly from `tqdm`'s source code.
It became problematic for me because after running my Python script for a while (with multiple panes, each script repeating multiple times), most panes start showing this message and stop updating (i.e. the tmux pane goes blank, no progress bar), even though the script may (or may not) still be running. Once I terminate it (with Ctrl-C), the tmux pane may flush out multiple progress bars that were present (but never shown on the terminal's stdout), along with the traceback caused by Ctrl-C.
I am not sure what this line is trying to do, and my problem might just be the tmux pane's buffer filling up. I just want to understand what the message `... (more hidden) ...` is for.
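A hedged repro sketch: create more live bars than the terminal has rows, and tqdm summarizes the ones that don't fit as `... (more hidden) ...`.
```python
from tqdm import tqdm

# 100 simultaneously active bars, one per screen row via position=...;
# the rows past the terminal height collapse into "... (more hidden) ...".
bars = [tqdm(total=100, desc=f"bar {i}", position=i) for i in range(100)]
for b in bars:
    b.update(1)
```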
Thanks! :)
|
closed
|
2020-05-05T16:16:47Z
|
2020-05-07T11:56:18Z
|
https://github.com/tqdm/tqdm/issues/963
|
[
"invalid โ",
"question/docs โฝ",
"synchronisation โถ"
] |
soraxas
| 3 |
youfou/wxpy
|
api
| 86 |
Can the Bot parameter cache_path be customized?
|
:param cache_path:
* Sets the cache path for the current session and enables caching; `None` (the default) disables caching.
* With caching enabled, logging in again within a short period avoids re-scanning the QR code; when the cache expires, login is requested again.
* When set to `True`, the default cache path 'wxpy.pkl' is used.

if cache_path is True:
    cache_path = 'wxpy.pkl'

Can cache_path be customized, instead of the single default value wxpy.pkl?
Then logging in with different accounts would not require deleting the .pkl file first.
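For what it's worth, the docstring above suggests any path string should be accepted; a sketch of the intended usage:
```python
from wxpy import Bot

# One cache file per account, so logins don't clobber each other.
bot_a = Bot(cache_path='account_a.pkl')
```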
|
closed
|
2017-06-16T08:27:24Z
|
2017-09-02T04:47:46Z
|
https://github.com/youfou/wxpy/issues/86
|
[] |
chenbotao828
| 1 |
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 914 |
CPU training time
|
It is taking a lot of training time on CPU. Is it possible to reduce the CPU training time? How long does it take to train using a CPU?
|
closed
|
2020-02-07T04:55:41Z
|
2020-02-08T12:33:48Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/914
|
[] |
manvirvirk
| 3 |
sherlock-project/sherlock
|
python
| 2,009 |
Add secrethitler.io
|
<!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a feature request
- [x] I've checked for similar feature requests including closed ones
## Description
<!--
Provide a detailed description of the feature you would like Sherlock to have
-->
I propose the addition of the site SecretHitler.io, an online platform for the Secret Hitler social deduction game. I'm not sure how their web platform works but it is [open source](https://github.com/cozuya/secret-hitler) (not sure how up to date the repo is).
Here is a valid profile: https://secrethitler.io/game/#/profile/lemmie
|
open
|
2024-02-19T17:38:20Z
|
2024-04-26T01:26:26Z
|
https://github.com/sherlock-project/sherlock/issues/2009
|
[
"enhancement"
] |
laggycomputer
| 1 |
biolab/orange3
|
scikit-learn
| 6,964 |
Dynamic titles for diagrams
|
<!--
Thanks for taking the time to submit a feature request!
For the best chance at our team considering your request, please answer the following questions to the best of your ability.
-->
**What's your use case?**
<!-- In other words, what's your pain point? -->
<!-- Is your request related to a problem, or perhaps a frustration? -->
<!-- Tell us the story that led you to write this request. -->
I have been using time series and the Time Slice widget to create animations of various diagrams. This is very nice, however, there does not currently seem to be any way of displaying the current time from the Time Slice in the diagram, so one ends up trying to arrange both the open Time Slice window and the, say, Bar plot window so that one can keep an eye on the time in one and the data in the other.
It would be nice if one could add a dynamic label to charts. Setting it to the current time is probably only one of the many uses of such a feature.
**What's your proposed solution?**
<!-- Be specific, clear, and concise. -->
A text area that can be dragged to a suitable place in diagrams and which accepts and displays the data in a given feature.
**Are there any alternative solutions?**
One could perhaps have something more specific for time series. Perhaps the Time Slice widget could display its current time on the widget face; this would require less space and be more legible than having the widget open.
|
closed
|
2024-12-20T15:11:53Z
|
2025-01-10T16:54:18Z
|
https://github.com/biolab/orange3/issues/6964
|
[] |
kaimikael
| 1 |
agronholm/anyio
|
asyncio
| 369 |
3.3.1: pytest hangs in `tests/test_compat.py::TestMaybeAsync::test_cancel_scope[trio]` unit
|
`trio` 41.0.
```console
+ /usr/bin/pytest -ra -p no:itsdangerous -p no:randomly -v
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.12, pytest-6.2.5, py-1.10.0, pluggy-0.13.1 -- /usr/bin/python3
cachedir: .pytest_cache
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/home/tkloczko/rpmbuild/BUILD/anyio-3.3.1/.hypothesis/examples')
rootdir: /home/tkloczko/rpmbuild/BUILD/anyio-3.3.1, configfile: pyproject.toml, testpaths: tests
plugins: anyio-3.3.1, forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, aspectlib-1.5.2, toolbox-0.5, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, flaky-3.7.0, benchmark-3.4.1, xdist-2.3.0, pylama-7.7.1, datadir-1.3.1, regressions-2.2.0, cases-3.6.3, xprocess-0.18.1, black-0.3.12, asyncio-0.15.1, subtests-0.5.0, isort-2.0.0, hypothesis-6.14.6, mock-3.6.1, profiling-1.7.0, Faker-8.12.1, nose2pytest-1.0.8, pyfakefs-4.5.1, tornado-0.8.1, twisted-1.13.3, aiohttp-0.3.0
collected 1245 items
tests/test_compat.py::TestMaybeAsync::test_cancel_scope[asyncio] PASSED [ 0%]
tests/test_compat.py::TestMaybeAsync::test_cancel_scope[asyncio+uvloop] PASSED [ 0%]
tests/test_compat.py::TestMaybeAsync::test_cancel_scope[trio]
```
.. and that is all.
`ps auxwf` shows only
```console
tkloczko 2198904 1.4 0.0 6576524 115972 pts/7 S+ 10:35 0:10 \_ /usr/bin/python3 /usr/bin/pytest -ra -p no:itsdangerous -p no:randomly -v
```
```console
[tkloczko@barrel SPECS]$ strace -p 2200436
strace: Process 2200436 attached
futex(0x55a89bdec820, FUTEX_WAIT_BITSET_PRIVATE|FUTEX_CLOCK_REALTIME, 0, NULL, FUTEX_BITSET_MATCH_ANY
```
|
open
|
2021-09-10T09:51:09Z
|
2021-11-28T20:54:55Z
|
https://github.com/agronholm/anyio/issues/369
|
[] |
kloczek
| 49 |
mitmproxy/mitmproxy
|
python
| 6,716 |
The mitmproxy program failed to start because the default port 8080 was occupied.
|
#### Problem Description
Because the default port 8080 is occupied, the mitmproxy program fails to start, and no reason for the failure is printed.
#### Steps to reproduce the behavior:
1. Listen on port 8080 using the nc command in a terminal window.
2. Start the mitmproxy program in another terminal window.
3. The mitmproxy program fails to start, no reason for the failure is printed, and the normal terminal configuration is not restored.

#### System Information
Mitmproxy: 10.2.3 binary
Python: 3.12.2
OpenSSL: OpenSSL 3.2.1 30 Jan 2024
Platform: macOS-14.2.1-arm64-arm-64bit
|
closed
|
2024-03-07T07:06:54Z
|
2024-03-07T20:41:28Z
|
https://github.com/mitmproxy/mitmproxy/issues/6716
|
[
"kind/bug",
"area/console",
"prio/high"
] |
optor666
| 2 |
iperov/DeepFaceLab
|
deep-learning
| 642 |
Does DFL 2.0 support CPU?
|
If not, what is the latest version that supports CPU training?
|
closed
|
2020-02-28T15:13:07Z
|
2020-03-28T05:42:56Z
|
https://github.com/iperov/DeepFaceLab/issues/642
|
[] |
Wesley-Xi
| 1 |
PokeAPI/pokeapi
|
graphql
| 460 |
Sword & Shield Pokemon
|
When will the Sword & Shield Pokemon be added? And how can I help speed up the process? :)
|
closed
|
2019-12-11T20:46:58Z
|
2021-01-08T09:17:35Z
|
https://github.com/PokeAPI/pokeapi/issues/460
|
[
"veekun"
] |
karsvaniersel
| 20 |
mars-project/mars
|
scikit-learn
| 2,711 |
[BUG] raydataset.to_ray_dataset has type error
|
<!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
Got a runtime error when using raydataset.to_ray_dataset.
**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version
3.7.7 from ray 1.9 docker
2. The version of Mars you use
0.8.1
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
2022-02-13 22:41:24,618 INFO services.py:1340 -- View the Ray dashboard at http://127.0.0.1:8265
Web service started at http://0.0.0.0:42881
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.0/100 [00:00<00:00, 1044.45it/s]
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.0/100 [00:00<00:00, 347.11it/s]
a 508.224206
b 493.014249
c 501.428825
d 474.742166
dtype: float64
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.0/100 [00:05<00:00, 17.29it/s]
a b c d
count 1000.000000 1000.000000 1000.000000 1000.000000
mean 0.508224 0.493014 0.501429 0.474742
std 0.279655 0.286950 0.293181 0.288133
min 0.000215 0.000065 0.000778 0.001233
25% 0.271333 0.238045 0.249944 0.224812
50% 0.516350 0.498089 0.503308 0.459224
75% 0.747174 0.730087 0.750066 0.716232
max 0.999077 0.999674 0.999869 0.999647
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.0/100 [00:00<00:00, 1155.09it/s]
100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 100.0/100 [00:00<00:00, 712.39it/s]
Traceback (most recent call last):
File "distributed_mars.py", line 19, in <module>
ds = to_ray_dataset(df, num_shards=4)
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/dataframe/contrib/raydataset/dataset.py", line 51, in to_ray_dataset
return real_ray_dataset.from_pandas(chunk_refs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/data/read_api.py", line 557, in from_pandas
return from_pandas_refs([ray.put(df) for df in dfs])
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/data/read_api.py", line 557, in <listcomp>
return from_pandas_refs([ray.put(df) for df in dfs])
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 1776, in put
value, owner_address=serialize_owner_address)
File "/home/ray/anaconda3/lib/python3.7/site-packages/ray/worker.py", line 283, in put_object
"Calling 'put' on an ray.ObjectRef is not allowed "
TypeError: Calling 'put' on an ray.ObjectRef is not allowed (similarly, returning an ray.ObjectRef from a remote function is not allowed). If you really want to do this, you can wrap the ray.ObjectRef in a list and call 'put' on it (or return it).
Exception ignored in: <function _TileableSession.__init__.<locals>.cb at 0x7f3f98904ef0>
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/entity/executable.py", line 52, in cb
fut = _decref_pool.submit(decref)
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 163, in submit
raise RuntimeError('cannot schedule new futures after shutdown')
RuntimeError: cannot schedule new futures after shutdown
Exception ignored in: <function _TileableSession.__init__.<locals>.cb at 0x7f3f989045f0>
Traceback (most recent call last):
File "/home/ray/anaconda3/lib/python3.7/site-packages/mars/core/entity/executable.py", line 52, in cb
File "/home/ray/anaconda3/lib/python3.7/concurrent/futures/thread.py", line 163, in submit
RuntimeError: cannot schedule new futures after shutdown
5. Minimized code to reproduce the error.
```python
import ray
ray.init()

import mars
import mars.tensor as mt
import mars.dataframe as md
from mars.dataframe.contrib.raydataset import to_ray_dataset

session = mars.new_ray_session(worker_num=2, worker_mem=1 * 1024 ** 3)
mt.random.RandomState(0).rand(1000, 5).sum().execute()
df = md.DataFrame(
    mt.random.rand(1000, 4, chunk_size=500),
    columns=list('abcd'))
df.extra_params.raw_chunk_size = 500
print(df.sum().execute())
print(df.describe().execute())

# Convert mars dataframe to ray dataset
df.execute()
ds = to_ray_dataset(df, num_shards=4)
print(ds.schema(), ds.count())
ds.filter(lambda row: row["a"] > 0.5).show(5)

# Convert ray dataset to mars dataframe
df2 = md.read_ray_dataset(ds)
print(df2.head(5).execute())
```
**Expected behavior**
A `TypeError` raised by `to_ray_dataset`:
" TypeError: Calling 'put' on an ray.ObjectRef is not allowed (similarly, returning an ray.ObjectRef from a remote function is not allowed). If you really want to do this, you can wrap the ray.ObjectRef in a list and call 'put' on it (or return it)."
**Additional context**
Add any other context about the problem here.
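Judging from the traceback, the chunk references are already `ObjectRef`s, so a hedged sketch of the apparent fix in `dataset.py` would be to hand them to `from_pandas_refs` instead of `from_pandas` (which calls `ray.put` on each element):
```python
import ray.data

def to_ray_dataset_from_refs(chunk_refs):
    # chunk_refs: list of ray.ObjectRef, each pointing to a pandas DataFrame
    return ray.data.from_pandas_refs(chunk_refs)
```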
|
open
|
2022-02-14T06:43:47Z
|
2022-03-30T09:04:29Z
|
https://github.com/mars-project/mars/issues/2711
|
[
"question",
"mod: ray integration"
] |
jyizheng
| 12 |
seanharr11/etlalchemy
|
sqlalchemy
| 29 |
How to run
|
Hi,
Your tool looks really great, but I'm having a hard time getting it up and running. I don't have a lot of experience with Python, but I do have programming experience. I did the install and all went fine. From that point on I couldn't figure out what the next command should be. When I read the "Basic Usage" steps, it's not clear to me which Python script to run. Sorry if my question sounds silly, but after two hours I felt I just should ask.
Thanks,
Aart
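For anyone else stuck here: there is no script inside the repo to run; you write a small one yourself. A sketch based on the README's basic usage (the connection strings are placeholders):
```python
from etlalchemy import ETLAlchemySource, ETLAlchemyTarget

source = ETLAlchemySource("mysql+pymysql://user:password@host/source_db")
target = ETLAlchemyTarget("sqlite:///migrated.db", drop_database=True)
target.addSource(source)
target.migrate()
```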
|
open
|
2017-10-13T06:53:35Z
|
2017-10-13T11:09:28Z
|
https://github.com/seanharr11/etlalchemy/issues/29
|
[] |
aartnico
| 1 |
autogluon/autogluon
|
data-science
| 4,601 |
[BUG] TimeSeriesPredictor.load very slow on first run
|
**Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
When you load a pre-trained model for the first time, it takes a very long time to load, almost as long as if you were training it all over again. This makes it almost unusable in time-sensitive environments. This does not happen on subsequent runs.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
When you run TimeSeriesPredictor.load(path=model_path) for the first time, it is very slow. On each subsequent run, it is very fast.
Someone mentioned in another thread that it appeared temporary directories were being created the first time a model is loaded and this might be the cause. I have not confirmed this.
**To Reproduce**
<!-- A minimal script to reproduce the issue. Links to Colab notebooks or similar tools are encouraged.
If the code is too long, feel free to put it in a public gist and link it in the issue: https://gist.github.com.
In short, we are going to copy-paste your code to run it and we expect to get the same result as you. -->
```python
try:
    # Format the model path
    formatted_date = pd.to_datetime(model_date).strftime("%Y%m%d")
    model_path = os.path.join('models', f"{set_name}_{months_history}m_{formatted_date}")
    logger.info(f"Loading predictor from: {model_path}")
    if not os.path.exists(model_path):
        raise FileNotFoundError(f"No predictor found at: {model_path}")
    predictor = TimeSeriesPredictor.load(path=model_path)
    logger.info(f"Successfully loaded predictor for {set_name}")
    return predictor
except Exception as e:
    logger.error(f"Error loading predictor: {str(e)}")
    raise
```
**Screenshots / Logs**
<!-- If applicable, add screenshots or logs to help explain your problem. -->
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
INSTALLED VERSIONS
------------------
date : 2024-10-30
time : 18:17:54.872890
python : 3.10.12.final.0
OS : Linux
OS-release : 5.4.0-196-generic
Version : #216-Ubuntu SMP Thu Aug 29 13:26:53 UTC 2024
machine : x86_64
processor : x86_64
num_cores : 48
cpu_ram_mb : 192039.9921875
cuda version : 12.535.183.01
num_gpus : 1
gpu_ram_mb : [14923]
avail_disk_size_mb : 202650
accelerate : 0.21.0
autogluon : 1.1.1
autogluon.common : 1.1.1
autogluon.core : 1.1.1
autogluon.features : 1.1.1
autogluon.multimodal : 1.1.1
autogluon.tabular : 1.1.1
autogluon.timeseries : 1.1.1
boto3 : 1.35.38
catboost : 1.2.7
defusedxml : 0.7.1
evaluate : 0.4.3
fastai : 2.7.17
gluonts : 0.15.1
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.4
joblib : 1.4.2
jsonschema : 4.21.1
lightgbm : 4.3.0
lightning : 2.3.3
matplotlib : 3.9.2
mlforecast : 0.10.0
networkx : 3.4
nlpaug : 1.1.11
nltk : 3.9.1
nptyping : 2.4.1
numpy : 1.26.4
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
optimum : 1.17.1
optimum-intel : None
orjson : 3.10.7
pandas : 2.2.3
pdf2image : 1.17.0
Pillow : 10.4.0
psutil : 5.9.8
pytesseract : 0.3.10
pytorch-lightning : 2.3.3
pytorch-metric-learning: 2.3.0
ray : 2.10.0
requests : 2.32.3
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : None
scipy : 1.12.0
seqeval : 1.2.2
setuptools : 65.5.0
skl2onnx : None
statsforecast : 1.4.0
tabpfn : None
tensorboard : 2.17.1
text-unidecode : 1.3
timm : 0.9.16
torch : 2.3.1
torchmetrics : 1.2.1
torchvision : 0.18.1
tqdm : 4.66.5
transformers : 4.40.2
utilsforecast : 0.0.10
vowpalwabbit : None
xgboost : 2.0.3
```python
# Replace this code with the output of the following:
from autogluon.core.utils import show_versions
show_versions()
```
</details>
|
open
|
2024-10-30T18:23:19Z
|
2024-12-05T13:17:42Z
|
https://github.com/autogluon/autogluon/issues/4601
|
[
"bug: unconfirmed",
"module: timeseries"
] |
trenchantai
| 1 |
d2l-ai/d2l-en
|
pytorch
| 2,583 |
Can't build the book on MacOS when trying to add MLX implementation
|
I'm trying to add Apple's MLX code implementation, but when I am trying to build the html book I run into this when running `d2lbook build html`:
```
Traceback (most recent call last):
File "/Users/zhongkaining/Library/Python/3.9/bin/d2lbook", line 8, in <module>
sys.exit(main())
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/main.py", line 25, in main
commands[args.command[0]]()
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 43, in build
getattr(builder, cmd)()
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 55, in warp
func(self)
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 342, in html
self.rst()
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 55, in warp
func(self)
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 316, in rst
self.eval()
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 55, in warp
func(self)
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 160, in eval
_process_and_eval_notebook(scheduler, src, tgt, run_cells,
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/build.py", line 515, in _process_and_eval_notebook
scheduler.add(1, num_gpus, target=_job,
File "/Users/zhongkaining/Library/Python/3.9/lib/python/site-packages/d2lbook/resource.py", line 102, in add
assert num_cpus <= self._num_cpus and num_gpus <= self._num_gpus, \
AssertionError: Not enough resources (CPU 2, GPU 0 ) to run the task (CPU 1, GPU 1)
```
Looks like building this book requires a GPU, but a MacBook has no Nvidia GPU. If I build on another OS, I don't think MLX will be available there, since it is designed specifically for macOS.
I think this problem blocks people who attempt to contribute MLX code to this book.
|
open
|
2024-01-18T05:01:29Z
|
2024-01-18T05:01:29Z
|
https://github.com/d2l-ai/d2l-en/issues/2583
|
[] |
PRESIDENT810
| 0 |
replicate/cog
|
tensorflow
| 1,360 |
Support new python union syntax
|
In cog 0.9:
This works:
```
text: Union[str, List[str]] = Input(description="Text string to embed"),
```
this doesn't:
```
text: str | list[str] = Input(description="Text string to embed"),
```
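At runtime the two spellings compare equal on Python 3.10+, so normalizing PEP 604 unions back to `typing.Union` looks feasible (a sketch):
```python
import typing

print((str | list[str]) == typing.Union[str, list[str]])  # True on 3.10+
```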
|
closed
|
2023-11-03T14:00:11Z
|
2023-11-28T14:13:23Z
|
https://github.com/replicate/cog/issues/1360
|
[] |
yorickvP
| 0 |
shaikhsajid1111/facebook_page_scraper
|
web-scraping
| 73 |
Incorrect Reactions where reaction count is > 1000
|
Hi.
Thanks for the great tool!
It seems that the scraper returns an incorrect reaction count when there are over 1000 reactions: it returns only the number of thousands, i.e. if there are 2,300 likes, the scraper will show that as 2 likes.
For example, on this post [https://www.facebook.com/nike/posts/825022672186115:825022672186115?__cft__[0]=AZV_npxkdJEb-EzVhBwQmranxQ0MgoUvfxUnxonkKU6Q3E8gIgK119aSXi3uJlgbcCWEcpmbaLT3grLRCyrmUQt2iYdcexBols-vihnMCiIhZM1wyb81Rl8nWai2OxUsIeqbKDRBosPUxotrtEet9YLCNuHL1HuV-atad0YUeRYnD8cbP1wNcyAVmD0V3gV-ML33jX_78kixfDIRq1xr05mo&__tn__=%2CO%2CP-R](https://www.facebook.com/nike/posts/825022672186115:825022672186115?__cft__[0]=AZV_npxkdJEb-EzVhBwQmranxQ0MgoUvfxUnxonkKU6Q3E8gIgK119aSXi3uJlgbcCWEcpmbaLT3grLRCyrmUQt2iYdcexBols-vihnMCiIhZM1wyb81Rl8nWai2OxUsIeqbKDRBosPUxotrtEet9YLCNuHL1HuV-atad0YUeRYnD8cbP1wNcyAVmD0V3gV-ML33jX_78kixfDIRq1xr05mo&__tn__=%2CO%2CP-R)
Here the scraper returns 27 likes (instead of 27,354) and 6 loves (instead of 6,697).
I have tested this on other posts and the issue is consistent.
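The likely cause is that the abbreviated counts ("27K", "6.7K") are parsed with a plain int conversion. A hedged sketch of a tolerant parser:
```python
def parse_count(text: str) -> int:
    """Parse counts like '27,354', '2.3K' or '1.1M' into integers."""
    text = text.strip().upper().replace(",", "")
    multiplier = {"K": 1_000, "M": 1_000_000}.get(text[-1], 1)
    if multiplier != 1:
        text = text[:-1]
    return int(float(text) * multiplier)

assert parse_count("2.3K") == 2300
assert parse_count("27,354") == 27354
```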
|
open
|
2023-06-14T13:44:39Z
|
2023-06-17T13:09:56Z
|
https://github.com/shaikhsajid1111/facebook_page_scraper/issues/73
|
[
"bug"
] |
FiveTwenty2
| 1 |
nonebot/nonebot2
|
fastapi
| 3,176 |
Plugin: ๅๆ ้ฟๅผฅ้ไฝ (Namo Amitabha)
|
### PyPI project name
ๅๆ ้ฟๅผฅ้ไฝ (Namo Amitabha)
### Plugin import (package) name
nonebot_plugin_amitabha
### Tags
[{"label":"ๅฟตไฝ","color":"#facf7e"},{"label":"้ฟๅผฅ้ไฝ","color":"#ffb69c"}]
### Plugin configuration
```dotenv
send_interval=5
```
### Plugin test
- [ ] If the plugin test needs to be re-run, please tick the checkbox on the left
|
closed
|
2024-12-09T15:06:10Z
|
2024-12-10T04:43:18Z
|
https://github.com/nonebot/nonebot2/issues/3176
|
[
"Plugin",
"Publish"
] |
Kaguya233qwq
| 1 |
sktime/pytorch-forecasting
|
pandas
| 998 |
Dtype issue in `TorchNormalizer(method="identity")`
|
- PyTorch-Forecasting version: 0.10.1
- PyTorch version: 1.10.2+cu111
- Python version: 3.8.10
- Operating System: Ubuntu
I recently experienced a `dtype` issue when using `TorchNormalizer` and managed to identify the source of the bug.
When `TorchNormalizer(method="identity", center=False)` is fitted on a pandas DataFrame/Series, the attributes `self.center_` and `self.scale_` are set to `np.zeros(y_center.shape[:-1])` and `np.ones(y_scale.shape[:-1])` respectively (lines 467 and 468 of `pytorch_forecasting.data.encoders.py`).
The default `dtype` for a numpy array is `np.float64`, so both `self.center_` and `self.scale_` have this dtype independently of the `dtype` of `y_center` and `y_scale`. On the other hand, the default dtype of torch (unless specified otherwise using `torch.set_default_dtype`) is `float32` (a.k.a. `Float`, as opposed to `Double`, which is equivalent to `float64`).
This may cause problems at inference time (e.g., with DeepAR): the predicted values have type `float64` while the targets in the dataloader have type `float32` (due to scaling). More specifically, at some point during inference (method `predict`), the model calls the method `self.decode_autoregressive`, which iteratively fills a torch tensor using the method `self.decode_one` (line 292 of `pytorch_forecasting.models.deepar.py`). The source tensor has type float32 while the other tensor has type float64.
My suggestion would be to modify the code of `TorchNormalizer`. One possibility would be to set the dtype of `self.center_` and `self.scale_` to be the same as that of `y_center` and `y_scale` respectively. The problem is that it may be a bit odd if the dtype of `y_scale` is `int32`, for instance (we expect the scale to be a real value rather than an integer); this happens when using NegativeBinomialLoss. Alternatively, we could set the dtype of `self.center_` and `self.scale_` to `str(torch.get_default_dtype())[6:]` (this is the current fix I made in my code, but I'm not sure it is the best idea).
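To make the second option concrete, a sketch (with `shape` standing in for `y_center.shape[:-1]`):
```python
import numpy as np
import torch

shape = (3,)  # stand-in for y_center.shape[:-1]
np_dtype = str(torch.get_default_dtype())[6:]  # "torch.float32" -> "float32"
center_ = np.zeros(shape, dtype=np_dtype)
scale_ = np.ones(shape, dtype=np_dtype)
print(center_.dtype, scale_.dtype)  # float32 float32, unless the default was changed
```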
|
open
|
2022-05-23T15:51:43Z
|
2022-05-23T15:51:43Z
|
https://github.com/sktime/pytorch-forecasting/issues/998
|
[] |
RonanFR
| 0 |
thtrieu/darkflow
|
tensorflow
| 880 |
I can't use yolo.weights
|
This is my training code:
flow --model cfg/dudqls.cfg --load bin/yolov2.weights --labels labels1.txt --train --dataset /mnt/train2017/ --annotation /mnt/cocoxml/ --backup yolov2 --lr 1e-4 --gpu 0.7
I only changed the cfg file:
classes = 7, filters = 60
But an error came out:
Parsing ./cfg/yolov2.cfg
Parsing cfg/dudqls.cfg
Loading bin/yolov2.weights ...
Traceback (most recent call last):
File "/home/dudqls/.local/bin/flow", line 6, in <module>
cliHandler(sys.argv)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/cli.py", line 26, in cliHandler
tfnet = TFNet(FLAGS)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/net/build.py", line 58, in __init__
darknet = Darknet(FLAGS)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/dark/darknet.py", line 27, in __init__
self.load_weights()
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/dark/darknet.py", line 82, in load_weights
wgts_loader = loader.create_loader(*args)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/utils/loader.py", line 105, in create_loader
return load_type(path, cfg)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/utils/loader.py", line 19, in __init__
self.load(*args)
File "/home/dudqls/.local/lib/python3.5/site-packages/darkflow/utils/loader.py", line 77, in load
walker.offset, walker.size)
AssertionError: expect 202437760 bytes, found 203934260
How can I use yolo.weights if I train 7 classes?
Can anyone help me?
|
open
|
2018-08-22T05:59:07Z
|
2018-10-27T22:53:11Z
|
https://github.com/thtrieu/darkflow/issues/880
|
[] |
dudqls1994
| 4 |
benbusby/whoogle-search
|
flask
| 881 |
[QUESTION] How to 'Set WHOOGLE_DOTENV=1' on local deployment
|
Hey,
Sorry if this is a simple question, but I've been looking e v e r y w h e r e and I just can't seem to find out how to 'Set WHOOGLE_DOTENV=1' so I can use `whoogle.env` custom configuration. What do you mean by 'set'? How do you 'set' something? Should I add something to the `whoogle.service` file? Please clarify this for me :sweat_smile:
Thank you <3
Ps. I want to set a configuration to solve a problem reported on #442
|
closed
|
2022-11-11T01:31:51Z
|
2022-11-22T22:54:34Z
|
https://github.com/benbusby/whoogle-search/issues/881
|
[
"question"
] |
diogogmatos
| 1 |
NullArray/AutoSploit
|
automation
| 717 |
Divided by zero exception37
|
Error: Attempted to divide by zero.37
|
closed
|
2019-04-19T15:59:32Z
|
2019-04-19T16:38:37Z
|
https://github.com/NullArray/AutoSploit/issues/717
|
[] |
AutosploitReporter
| 0 |
AirtestProject/Airtest
|
automation
| 456 |
The parent window's controller is UnityLandscapeRightOnlyViewController, but the elements inside are native iOS controls; can they be retrieved?
|
(Please fill in the template below as completely as possible; it helps us locate and solve the problem quickly. Thanks for your cooperation. Otherwise the issue will be closed directly.)
**(Important: issue classification)**
* AirtestIDE usage problems in the test development environment -> https://github.com/AirtestProject/AirtestIDE/issues
* Widget recognition, UI tree structure, poco library errors -> https://github.com/AirtestProject/Poco/issues
* Image recognition and device control problems -> follow the steps below
**Describe the bug**
(Concisely and clearly summarize the problem you ran into, or paste the traceback.)
```
(Paste the traceback or other error information here)
```
**Screenshots**
(Paste screenshots of the problem, if any)
(For image- and device-related problems produced in AirtestIDE, please paste the relevant error output from the AirtestIDE console window)
**Steps to reproduce**
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
(What you expected to get or see)
**python version:** `python3.5`
**airtest version:** `1.0.69`
> The airtest version can be found with `pip freeze`
**Device:**
- Model: [e.g. google pixel 2]
- System: [e.g. Android 8.1]
- (other information)
**Other relevant environment information**
(Other runtime environments, e.g., abnormal on Linux Ubuntu 16.04 but normal on Windows.)
|
open
|
2019-07-18T03:21:31Z
|
2019-07-18T03:21:31Z
|
https://github.com/AirtestProject/Airtest/issues/456
|
[] |
sucl266
| 0 |
sqlalchemy/sqlalchemy
|
sqlalchemy
| 11,247 |
Using a hybrid_property within instantiation triggers getter function
|
### Describe the bug
If you use hybrid_property to define getters and setters for a table and you try to instantiate a new object by using the setter, sqlalchemy triggers the getter function where `self` is not a Python instance but the class definition.
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.29
### DBAPI (i.e. the database driver)
psycopg
### Database Vendor and Major Version
PostgreSQL 15.1
### Python Version
3.10
### Operating system
OSX
### To Reproduce
```python
from sqlalchemy.orm import DeclarativeBase
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import Mapped, mapped_column
class Base(DeclarativeBase):
pass
class IntegerTable(Base):
__tablename__ = "integer_table"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
_integer: Mapped[int]
@hybrid_property
def integer(self) -> int:
assert self._integer == 1
return self._integer
@integer.setter
def integer(self, value: int) -> None:
self._integer = value
obj = IntegerTable(integer=1)
print(obj.integer)
```
### Error
```
AssertionError Traceback (most recent call last)
Cell In[1], [line 26](vscode-notebook-cell:?execution_count=1&line=26)
[21](vscode-notebook-cell:?execution_count=1&line=21) @integer.setter
[22](vscode-notebook-cell:?execution_count=1&line=22) def integer(self, value: int) -> None:
[23](vscode-notebook-cell:?execution_count=1&line=23) self._integer = value
---> [26](vscode-notebook-cell:?execution_count=1&line=26) obj = IntegerTable(integer=1)
[27](vscode-notebook-cell:?execution_count=1&line=27) print(obj.integer)
File <string>:4, in __init__(self, **kwargs)
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:564](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:564), in InstanceState._initialize_instance(*mixed, **kwargs)
[562](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:562) manager.original_init(*mixed[1:], **kwargs)
[563](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:563) except:
--> [564](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:564) with util.safe_reraise():
[565](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:565) manager.dispatch.init_failure(self, args, kwargs)
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:146](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:146), in safe_reraise.__exit__(self, type_, value, traceback)
[144](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:144) assert exc_value is not None
[145](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:145) self._exc_info = None # remove potential circular references
--> [146](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:146) raise exc_value.with_traceback(exc_tb)
[147](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:147) else:
[148](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/util/langhelpers.py:148) self._exc_info = None # remove potential circular references
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:562](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:562), in InstanceState._initialize_instance(*mixed, **kwargs)
[559](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:559) manager.dispatch.init(self, args, kwargs)
[561](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:561) try:
--> [562](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:562) manager.original_init(*mixed[1:], **kwargs)
[563](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:563) except:
[564](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/state.py:564) with util.safe_reraise():
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2139](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2139), in _declarative_constructor(self, **kwargs)
[2137](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2137) for k in kwargs:
[2138](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2138) print(dir(cls_))
-> [2139](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2139) if not hasattr(cls_, k):
[2140](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2140) raise TypeError(
[2141](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2141) "%r is an invalid keyword argument for %s" % (k, cls_.__name__)
[2142](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2142) )
[2143](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/orm/decl_base.py:2143) setattr(self, k, kwargs[k])
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1118](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1118), in hybrid_property.__get__(self, instance, owner)
[1116](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1116) return self
[1117](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1117) elif instance is None:
-> [1118](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1118) return self._expr_comparator(owner)
[1119](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1119) else:
[1120](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1120) return self.fget(instance)
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1426](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1426), in hybrid_property._get_comparator.<locals>.expr_comparator(owner)
[1417](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1417) else:
[1418](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1418) name = attributes._UNKNOWN_ATTR_KEY # type: ignore[assignment]
[1420](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1420) return cast(
[1421](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1421) "_HybridClassLevelAccessor[_T]",
[1422](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1422) proxy_attr(
[1423](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1423) owner,
[1424](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1424) name,
[1425](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1425) self,
-> [1426](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1426) comparator(owner),
[1427](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1427) doc=comparator.__doc__ or self.__doc__,
[1428](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1428) ),
[1429](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1429) )
File [~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1395](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1395), in hybrid_property._get_expr.<locals>._expr(cls)
[1394](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1394) def _expr(cls: Any) -> ExprComparator[_T]:
-> [1395](https://file+.vscode-resource.vscode-cdn.net/Users/jichao/Documents/~/Documents/.venv/lib/python3.10/site-packages/sqlalchemy/ext/hybrid.py:1395) return ExprComparator(cls, expr(cls), self)
Cell In[1], [line 18](vscode-notebook-cell:?execution_count=1&line=18)
[16](vscode-notebook-cell:?execution_count=1&line=16) @hybrid_property
[17](vscode-notebook-cell:?execution_count=1&line=17) def integer(self) -> int:
---> [18](vscode-notebook-cell:?execution_count=1&line=18) assert self._integer == 1
[19](vscode-notebook-cell:?execution_count=1&line=19) return self._integer
```
### Additional context
_No response_
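One workaround sketch, assuming SQLAlchemy 2.0's `inplace` hybrid modifiers: give the hybrid an explicit class-level expression, so class access never falls through to the Python getter (and its assert) during the constructor's `hasattr` check.
```python
from sqlalchemy.ext.hybrid import hybrid_property
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class IntegerTable(Base):
    __tablename__ = "integer_table"

    id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
    _integer: Mapped[int]

    @hybrid_property
    def integer(self) -> int:
        # Now only ever called with an instance; class-level access
        # uses the expression below instead of this getter.
        return self._integer

    @integer.inplace.setter
    def _integer_setter(self, value: int) -> None:
        self._integer = value

    @integer.inplace.expression
    @classmethod
    def _integer_expression(cls):
        return cls._integer

obj = IntegerTable(integer=1)
print(obj.integer)  # 1
```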
|
closed
|
2024-04-09T08:35:36Z
|
2024-04-09T19:47:37Z
|
https://github.com/sqlalchemy/sqlalchemy/issues/11247
|
[
"documentation",
"orm",
"expected behavior"
] |
JichaoS
| 4 |
pydata/xarray
|
pandas
| 9,860 |
test_dask_da_groupby_quantile not passing
|
### What happened?
The test fails because the expected `ValueError` is not raised:
```
________________________ test_dask_da_groupby_quantile _________________________
[gw2] linux -- Python 3.13.0 /usr/bin/python3
@requires_dask
def test_dask_da_groupby_quantile() -> None:
# Only works when the grouped reduction can run blockwise
# Scalar quantile
expected = xr.DataArray(
data=[2, 5], coords={"x": [1, 2], "quantile": 0.5}, dims="x"
)
array = xr.DataArray(
data=[1, 2, 3, 4, 5, 6], coords={"x": [1, 1, 1, 2, 2, 2]}, dims="x"
)
> with pytest.raises(ValueError):
E Failed: DID NOT RAISE <class 'ValueError'>
/builddir/build/BUILD/python-xarray-2024.11.0-build/BUILDROOT/usr/lib/python3.13/site-packages/xarray/tests/test_groupby.py:295: Failed
```
### What did you expect to happen?
Test pass.
### Minimal Complete Verifiable Example
```Python
pytest 'tests/test_groupby.py::test_dask_da_groupby_quantile'
```
### MVCE confirmation
- [X] Minimal example โ the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example โ the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example โ the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue โ a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment โ the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
_No response_
### Anything else we need to know?
_No response_
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.13.0 (main, Oct 8 2024, 00:00:00) [GCC 14.2.1 20240912 (Red Hat 14.2.1-4)]
python-bits: 64
OS: Linux
OS-release: 6.11.5-200.fc40.x86_64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: C.UTF-8
LOCALE: ('C', 'UTF-8')
libhdf5: 1.14.5
libnetcdf: 4.9.2
xarray: 2024.11.0
pandas: 2.2.1
numpy: 1.26.4
scipy: 1.14.1
netCDF4: 1.7.2
pydap: None
h5netcdf: None
h5py: None
zarr: 2.18.3
cftime: 1.6.4
nc_time_axis: None
iris: None
bottleneck: 1.3.7
dask: 2024.11.2
distributed: None
matplotlib: 3.9.1
cartopy: None
seaborn: 0.13.2
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: 0.24.4
sparse: None
flox: None
numpy_groupies: None
setuptools: 74.1.3
pip: 24.3.1
conda: None
pytest: 8.3.3
mypy: None
IPython: None
sphinx: 7.3.7
</details>
|
closed
|
2024-12-06T06:26:18Z
|
2025-01-14T13:21:02Z
|
https://github.com/pydata/xarray/issues/9860
|
[
"bug"
] |
QuLogic
| 6 |
nteract/papermill
|
jupyter
| 174 |
Improve reading notebooks from url that need accept json header
|
Our notebooks are stored in an [nbgallery](https://github.com/nbgallery/nbgallery) instance. A generic `requests.get(notebook_url)` returns HTML. Papermill's `execute.execute_notebook(notebook_url)` / `iorw.HttpHandler.read` do not offer a way to set an `{'Accept': 'application/json'}` header in the request.
When we make a request on our own using accept json headers, we can write that notebook to file (`tempfile` or otherwise) and use Papermill as is. It would be convenient to also be able to pass in a file-like object (`io.StringIO(resp.text)`), json (`resp.text`), and/or dictionary (`resp.json()`) of the downloaded notebook file rather than needing to write to file.
Thanks.
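A sketch of the current workaround described above, with a hypothetical nbgallery URL: the notebook JSON is fetched with an explicit Accept header and written to a temporary file that papermill can read.
```python
import tempfile

import papermill as pm
import requests

# Hypothetical URL; the point is the explicit Accept header.
resp = requests.get(
    "https://nbgallery.example/notebooks/42",
    headers={"Accept": "application/json"},
)
resp.raise_for_status()

# Write the notebook JSON to a temp file, then hand the path to papermill.
with tempfile.NamedTemporaryFile(mode="w", suffix=".ipynb", delete=False) as f:
    f.write(resp.text)
    input_path = f.name

pm.execute_notebook(input_path, "output.ipynb", parameters={"alpha": 0.1})
```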
|
closed
|
2018-08-26T01:16:33Z
|
2018-08-26T20:26:10Z
|
https://github.com/nteract/papermill/issues/174
|
[
"enhancement"
] |
kafonek
| 2 |
yunjey/pytorch-tutorial
|
deep-learning
| 171 |
[image caption] Training finished, but the model cannot be saved
|

Training finished, but the models folder is empty; the trained model was not saved. How can this be resolved?
|
open
|
2019-03-30T02:06:42Z
|
2019-04-19T09:24:48Z
|
https://github.com/yunjey/pytorch-tutorial/issues/171
|
[] |
CherishineNi
| 4 |
gunthercox/ChatterBot
|
machine-learning
| 1,954 |
How to build Interactive Chatbot with Chatterbot
|
Hi team,
I am developing a chatbot for our DevOps team, which will be hosted on our internal infrastructure, i.e. on the intranet.
Our requirement is a chatbot with fixed responses, i.e. the user has to select from the given options only, like the Domino's app.
Please help me build it, and please share any example of ChatterBot + Flask with this kind of use case.
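A minimal Flask + ChatterBot sketch (endpoint name and menu entries are made up for illustration); for a strictly fixed, menu-driven flow, a plain mapping of options to canned replies may serve better than ChatterBot's similarity matching:
```python
from chatterbot import ChatBot
from flask import Flask, jsonify, request

app = Flask(__name__)
# read_only=True stops the bot from learning from user input at runtime.
bot = ChatBot("DevOpsBot", read_only=True)

# Fixed options, Domino's-style: the client shows these as buttons.
MENU = {
    "1": "Deploy status: use /deploy-status <service>",
    "2": "On-call schedule: https://wiki.example/oncall",
}

@app.route("/chat", methods=["POST"])
def chat():
    text = request.json.get("text", "")
    if text in MENU:  # fixed-option branch
        return jsonify(reply=MENU[text])
    return jsonify(reply=str(bot.get_response(text)))  # free-text fallback

if __name__ == "__main__":
    app.run()
```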
|
closed
|
2020-04-21T12:41:40Z
|
2020-08-29T19:34:15Z
|
https://github.com/gunthercox/ChatterBot/issues/1954
|
[] |
ankush7489
| 1 |
tqdm/tqdm
|
pandas
| 691 |
MPI compatibility
|
- [x] I have visited the [source website], and in particular read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
```
('4.31.1', '2.7.8 (default_cci, Sep 6 2016, 11:26:30) \n[GCC 4.4.7 20120313 (Red Hat 4.4.7-16)]', 'linux2')
```
I noticed that `mpi4py` complains if I import `tqdm` after `mpi4py`. This happens with OpenMPI 2.0.0, 2.1.6 and 3.0.3 but not with 4.0.0. OpenMPI is not happy with `fork()` system calls. However, `strace` does not indicate `tqdm` is calling `fork()` or `system()`.
Test code:
```python
from mpi4py import MPI
print 'MPI imported'
print MPI.COMM_WORLD.Get_rank()
from tqdm import tqdm
print 'tqdm imported'
```
```
$ mpirun -n 2 python ~/scripts/mpitest.py
MPI imported
0
MPI imported
1
--------------------------------------------------------------------------
A process has executed an operation involving a call to the
"fork()" system call to create a child process. Open MPI is currently
operating in a condition that could result in memory corruption or
other system errors; your job may hang, crash, or produce silent
data corruption. The use of fork() (or system() or other calls that
create child processes) is strongly discouraged.
The process that invoked fork was:
Local host: [[3456,1],0] (PID 54613)
If you are *absolutely sure* that your application will successfully
and correctly survive a call to fork(), you may disable this warning
by setting the mpi_warn_on_fork MCA parameter to 0.
--------------------------------------------------------------------------
tqdm imported
tqdm imported
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
|
closed
|
2019-03-04T14:11:13Z
|
2020-10-08T14:06:40Z
|
https://github.com/tqdm/tqdm/issues/691
|
[
"p0-bug-critical โข",
"to-fix โ",
"synchronisation โถ"
] |
biochem-fan
| 6 |
deezer/spleeter
|
deep-learning
| 887 |
I created web UI. You can have it
|
<!-- Please respect the title [Discussion] tag. -->
Hi. I have been working with this great library for months, and I have finally made it work and created a web GUI using Python and Flask.
It's ready, so you can upload it to your host or server and make money with it.
The price is 800$. If anyone wants it, this is my Telegram ID:
@hsn_78_h
I haven't published it anywhere or given it to anyone else.
I'm selling to one person only.
|
closed
|
2023-12-24T13:58:46Z
|
2023-12-26T16:20:33Z
|
https://github.com/deezer/spleeter/issues/887
|
[
"question"
] |
hassan8971
| 0 |
igorbenav/fastcrud
|
sqlalchemy
| 66 |
Join documentation is out of date
|
**Describe the bug or question**
As far as I can tell, the `join` [documentation](https://igorbenav.github.io/fastcrud/advanced/joins/) is out of date
For example, it shows:
```python
user_tier = await user_crud.get_joined(
db=db,
model=Tier,
join_on=User.tier_id == Tier.id,
join_type="left",
    join_prefix="tier_",
id=1
)
```
While the code suggests it should be `join_model` instead of `model`:
https://github.com/igorbenav/fastcrud/blob/5e45aee16db9b541d98886472618ec967a3c527b/fastcrud/crud/fast_crud.py#L691
|
closed
|
2024-04-26T17:44:10Z
|
2024-05-04T19:35:39Z
|
https://github.com/igorbenav/fastcrud/issues/66
|
[
"documentation",
"good first issue"
] |
waza-ari
| 0 |
collerek/ormar
|
fastapi
| 1,393 |
Read/Write database selection
|
**Is your feature request related to a problem? Please describe.**
I need to implement read and write replicas of my database in my app. I know I can pass the db to the model query directly using `use_db`.
**Describe the solution you'd like**
It would be great if, when creating my OrmarConfig, I could define a read-only database that gets used for all read operations, while the default database is used for writes; a rough sketch of such an API follows.
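For illustration only; `read_database` is a proposed option sketched from this request, not something ormar currently supports:
```python
import databases
import ormar
import sqlalchemy

# Hypothetical API: a second, read-only connection alongside the default one.
base_ormar_config = ormar.OrmarConfig(
    metadata=sqlalchemy.MetaData(),
    database=databases.Database("postgresql://writer-host/db"),  # writes
    # read_database=databases.Database("postgresql://reader-host/db"),  # reads (proposed)
)
```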
**Describe alternatives you've considered**
Using the `use_db` function works, but it seems a bit messy. This is a config option that a lot of other frameworks provide, so it would be good to consider for a future release.
**Additional context**
Not that I can think of.
|
open
|
2024-08-22T13:17:10Z
|
2024-08-22T13:17:10Z
|
https://github.com/collerek/ormar/issues/1393
|
[
"enhancement"
] |
devon2018
| 0 |
dmlc/gluon-nlp
|
numpy
| 714 |
Port BioBERT to GluonNLP
|
https://github.com/dmis-lab/biobert
|
closed
|
2019-05-20T16:31:24Z
|
2019-06-10T21:25:12Z
|
https://github.com/dmlc/gluon-nlp/issues/714
|
[
"enhancement"
] |
eric-haibin-lin
| 1 |
LibreTranslate/LibreTranslate
|
api
| 236 |
argostranslategui not detected as installed?
|
if i run `./LibreTranslate/env/bin/argos-translate-gui`
it tells me to install `argostranslategui` but I already have it installed.
```
$find -iname 'argostranslategui'
./.local/lib/python3.8/site-packages/argostranslategui
```
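Virtual environments do not see packages installed in the user site (`~/.local/...`) by default, which would explain the mismatch; a likely fix is installing the package into the same environment the script runs from:
```sh
./LibreTranslate/env/bin/pip install argostranslategui
```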
|
closed
|
2022-04-02T13:41:12Z
|
2022-04-03T15:53:50Z
|
https://github.com/LibreTranslate/LibreTranslate/issues/236
|
[] |
Tarcaxoxide
| 1 |
errbotio/errbot
|
automation
| 1,412 |
How to make TestBot use text backend? (6.1)
|
In order to let us help you better, please fill out the following fields as best you can:
### I am...
* [ ] Reporting a bug
* [ ] Suggesting a new feature
* [ ] Requesting help with running my bot
* [X] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 6.1.2
* OS version: Windows
* Python version: 3.6.8
* Using a virtual environment: yes
### Issue description
I use the `testbot` fixture to run my tests. Since upgrade `err-bot` from `5.2` to `6.1.2`, I see this error when executing the tests in my environment:
```
Traceback (most recent call last):
File ".env36\Lib\site-packages\errbot\backends\graphic.py", line 18, in <module>
from PySide import QtCore, QtGui, QtWebKit
ModuleNotFoundError: No module named 'PySide'
2020-01-30 09:33:24,857 CRITICAL errbot.plugins.graphic To install graphic support use:
pip install errbot[graphic]
```
It seems the default for backends has changed to "graphic". If I install `errbot-5.2` then my tests pass as expected.
My question is: how do I change the backend of `testbot` to `text`?
Also, shouldn't the backend for `testbot` be `text` by default?
|
closed
|
2020-01-30T12:42:31Z
|
2024-01-05T17:27:36Z
|
https://github.com/errbotio/errbot/issues/1412
|
[
"type: support/question",
"#tests"
] |
nicoddemus
| 5 |
xzkostyan/clickhouse-sqlalchemy
|
sqlalchemy
| 112 |
can
|
closed
|
2020-11-29T07:34:26Z
|
2020-11-29T07:34:42Z
|
https://github.com/xzkostyan/clickhouse-sqlalchemy/issues/112
|
[] |
david-hfx
| 0 |
|
pyg-team/pytorch_geometric
|
pytorch
| 9,328 |
GraphSAGE sampling operation
|
### ๐ Describe the documentation issue
Hello, I recently tried adding some nodes to a graph to observe changes in the original nodes, for example adding an edge from A to B, where A is the inserted node. I want to see the changes in B, but I found that when using GraphSAGE, the output of node B does not change. I also couldn't find the controls for the sampling operation in GraphSAGE. Could you explain which part controls the node sampling operation in GraphSAGE?
### Suggest a potential alternative/fix
_No response_
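For what it's worth, the `GraphSAGE`/`SAGEConv` modules themselves do no sampling; in PyG, neighbor sampling lives in the data loader. A minimal sketch with `NeighborLoader`, assuming a PyG install with a sampling backend such as pyg-lib or torch-sparse:
```python
import torch
from torch_geometric.data import Data
from torch_geometric.loader import NeighborLoader

# Toy graph; num_neighbors is where GraphSAGE-style sampling is controlled:
# one entry per hop, -1 meaning "take all neighbors".
data = Data(
    x=torch.randn(6, 16),
    edge_index=torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]]),
)

loader = NeighborLoader(
    data,
    num_neighbors=[10, 10],          # sample up to 10 neighbors per hop
    batch_size=2,
    input_nodes=torch.tensor([0, 1]),  # seed nodes
)

for batch in loader:
    print(batch)  # a sampled subgraph around the seed nodes
```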
|
open
|
2024-05-18T09:09:45Z
|
2024-05-27T07:04:30Z
|
https://github.com/pyg-team/pytorch_geometric/issues/9328
|
[
"documentation"
] |
1234238
| 1 |
junyanz/pytorch-CycleGAN-and-pix2pix
|
computer-vision
| 1,034 |
For help: do you have a DenseNetGenerator model?
|
Asking for the code of a DenseNetGenerator.
|
open
|
2020-05-20T02:30:29Z
|
2020-05-20T02:30:29Z
|
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1034
|
[] |
HarryAB
| 0 |
Evil0ctal/Douyin_TikTok_Download_API
|
web-scraping
| 6 |
400 and slow loading speed when accessing homepage
|
```
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on all addresses.
WARNING: This is a development server. Do not use it in a production deployment.
* Running on http://192.168.124.17:5000/ (Press CTRL+C to quit)
127.0.0.1 - - [11/Mar/2022 19:50:11] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [11/Mar/2022 19:50:12] code 400, message Bad request version ('localhost\x00\x17\x00\x00รฟ\x01\x00\x01\x00\x00')
127.0.0.1 - - [11/Mar/2022 19:50:12] " รผยช4รร
ยท0cร<A2ยค:_oยข%%รพ-iA Caรฏ!ยฝX ยธยธ#.RfKยฐรJ&2รยจรฟ-รsรญ> รรร+ร/ร,ร0รยฉรยจรร **
localhost รฟ " HTTPStatus.BAD_REQUEST -
127.0.0.1 - - [11/Mar/2022 19:50:12] "GET /?app=index HTTP/1.1" 200 -
127.0.0.1 - - [11/Mar/2022 19:50:12] code 400, message Bad request version ('localhost\x00\x17\x00\x00รฟ\x01\x00\x01\x00\x00')
127.0.0.1 - - [11/Mar/2022 19:50:12] " รผรท
ยฅร3รรA5!ยชรTC&YรฑรซรนIรรรยน 'รรฏยจ; ยงร.E=@PP"7zKQยฃTรยฅยฅ?ร รFร ร+ร/ร,ร0รยฉรยจรร jj
localhost รฟ " HTTPStatus.BAD_REQUEST -
127.0.0.1 - - [11/Mar/2022 19:50:12] code 400, message Bad request version ('localhost\x00\x17\x00\x00รฟ\x01\x00\x01\x00\x00')
cfยฝยจ<"รยบ
รยฆ รถi8ยญ\\<ร
=
รS ร+ร/ร,ร0รยฉรยจรร ยชยช
localhost รฟ " HTTPStatus.BAD_REQUEST -
127.0.0.1 - - [11/Mar/2022 19:50:12] code 400, message Bad request version ('localhost\x00\x17\x00\x00รฟ\x01\x00\x01\x00\x00')
127.0.0.1 - - [11/Mar/2022 19:50:12] " รผ|ยฆb5lN]?ยผ9_รRยคยณOรรชov= /ยฆFYร
รรรฝ รฝรฌรฅ}รยผรขAร รร~c^ยฑ@ยฆ ร+ร/ร,ร0รยฉรยจรร ZZ
localhost รฟ " HTTPStatus.BAD_REQUEST -
en <class 'str'>
```
|
closed
|
2022-03-11T11:51:52Z
|
2022-03-13T09:09:08Z
|
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/6
|
[] |
wanghaisheng
| 5 |
ultrafunkamsterdam/undetected-chromedriver
|
automation
| 975 |
Does anyone have a full example of running undetected-chromedriver in AWS Lambda?
|
open
|
2023-01-08T01:17:25Z
|
2023-08-24T01:10:08Z
|
https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/975
|
[] |
emanodame
| 1 |
|
jumpserver/jumpserver
|
django
| 14,428 |
[Can the ticket approval view add a field showing the organization?]
|
### Product version
v3.10.13
### Version type
- [ ] Community
- [X] Enterprise
- [ ] Enterprise trial
### Installation method
- [ ] Online install (one-click command)
- [ ] Offline package install
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source install
### โญ๏ธ Feature description
When applying for asset permissions, can the application info shown during ticket approval include which organization the ticket comes from, i.e. the organization the asset belongs to?
<img width="269" alt="image" src="https://github.com/user-attachments/assets/d9b08453-725a-4c2d-ae5d-04e39d3a18f2">
### Proposed solution
When applying for asset permissions, ticket approval could show which organization the ticket comes from, i.e. the organization the asset belongs to.
### Additional info
_No response_
|
closed
|
2024-11-11T10:18:48Z
|
2025-01-03T03:40:15Z
|
https://github.com/jumpserver/jumpserver/issues/14428
|
[
"โญ๏ธ Feature Request",
"๐ฆ z~release:v4.4.0"
] |
yuanxixue
| 2 |
jupyter/nbgrader
|
jupyter
| 1,365 |
Syntax error in nbgrader_config.py
|
Ran into an issue following the docs for using the assignment list with multiple courses
My error coming up is
Sep 10 07:03:34 bash: File "/etc/jupyter/nbgrader_config.py", line 2
Sep 10 07:03:34 bash: from nbgrader.auth JupyterHubAuthPlugin
Sep 10 07:03:34 bash: ^
Sep 10 07:03:34 bash: SyntaxError: invalid syntax
Sep 10 07:03:34 bash: [NbGrader | ERROR] Exception while loading config file /etc/jupyter/nbgrader_config.py
I took the config from Can I use the โAssignment Listโ extension with multiple classes?
from nbgrader.auth JupyterHubAuthPlugin
c = get_config()
c.Exchange.path_includes_course = True
c.Authenticator.plugin_class = JupyterHubAuthPlugin
I'm stuck here because my multi-class setup is not working correctly and all users can see all assignments.
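The traceback is a plain Python syntax error: the first line of the config is missing the `import` keyword. A corrected version of the same snippet (with `get_config()` provided by the config loader at runtime, as usual for Jupyter config files):
```python
from nbgrader.auth import JupyterHubAuthPlugin  # "import" was missing above

c = get_config()
c.Exchange.path_includes_course = True
c.Authenticator.plugin_class = JupyterHubAuthPlugin
```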
|
open
|
2020-09-10T11:09:18Z
|
2020-09-10T11:09:18Z
|
https://github.com/jupyter/nbgrader/issues/1365
|
[] |
whositwhatnow
| 0 |
yzhao062/pyod
|
data-science
| 114 |
Time Complexity Optimisation - SOD
|
Attempt to improve performance of SOD with respect to execution time.
|
closed
|
2019-06-18T14:48:54Z
|
2020-01-14T12:16:00Z
|
https://github.com/yzhao062/pyod/issues/114
|
[] |
John-Almardeny
| 0 |
statsmodels/statsmodels
|
data-science
| 8,666 |
Cannot choose variables which become dummy variables when setting them in pandas
|
I have a dataset with a few categories that are coded as the presence or absence of an intervention. I am fitting a simple linear regression using OLS. Multiple interventions may be present at the same time, so I am adding an effect for each intervention. However, the dummy variables are not being encoded the way I want, which makes the effects hard to interpret.
data = {
"cat1": [0,0,0,1,1,1,1],
"cat2": [0,0,0,0,0,1,1],
"cat3": [1,1,0,0,1,1,1],
"cat4": [0,1,0,1,1,1,1],
}
#load data into a DataFrame object:
dftest = pd.DataFrame(data)
# variable to store the label mapping into
label_map = {}
for fact in dftest.columns :
dftest[fact], label_map[fact] = pd.factorize(dftest[fact])
print(label_map[fact] )
Produces the following dummy coding output...
Int64Index([0, 1], dtype='int64')
Int64Index([0, 1], dtype='int64')
Int64Index([1, 0], dtype='int64')
Int64Index([0, 1], dtype='int64')
**Question 1)**
How do I ensure that the 0 in the original mapping is always the dummy?
**Question 2)**
If I code an interaction effect between in statsmodels will that dummy carry over to the interaction? I have another categorical with many levels column which in the model has an interaction with each of the other effects, I don't care what is the dummy in that category but I do care that it's interaction effects use the 0 from the other categories as their dummy i.e. their reference.
**Question 3)**
If it's not possible to fix can I just reverse the effect sizes of the resulting model where appropriate? So if 1 is the dummy for that effect then I just multiply the effect sizes by -1?
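A sketch addressing question 1, reusing the data above: fixing the category order explicitly pins 0 as the reference level, instead of relying on `pd.factorize`'s first-appearance ordering (which is why cat3 came out reversed).
```python
import pandas as pd

data = {
    "cat1": [0, 0, 0, 1, 1, 1, 1],
    "cat2": [0, 0, 0, 0, 0, 1, 1],
    "cat3": [1, 1, 0, 0, 1, 1, 1],
    "cat4": [0, 1, 0, 1, 1, 1, 1],
}
dftest = pd.DataFrame(data)

# Explicit category order: 0 is always first, hence always the reference.
for fact in dftest.columns:
    dftest[fact] = pd.Categorical(dftest[fact], categories=[0, 1])
    print(fact, list(dftest[fact].cat.categories))  # [0, 1] for every column
```
For question 2, when using a patsy formula the coding can also be pinned per term via `C(cat1, Treatment(reference=0))`, and interactions built from that term inherit the same reference. For question 3, yes: for a binary 0/1 dummy in a linear model, flipping the coding only flips the sign of its coefficient (the intercept absorbs the shift).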
|
closed
|
2023-02-12T01:08:56Z
|
2023-10-27T09:57:41Z
|
https://github.com/statsmodels/statsmodels/issues/8666
|
[] |
AngCamp
| 0 |
zwczou/weixin-python
|
flask
| 87 |
WeChat Pay with trade_type APP reports an invalid merchant payment order ID error
|
When paying via WeChat with trade_type APP, an error is reported that the merchant payment order ID is invalid.
From what I found online, it seems to be an issue of prepay_id vs. prepayid.
|
open
|
2021-06-13T06:50:43Z
|
2021-06-13T06:50:43Z
|
https://github.com/zwczou/weixin-python/issues/87
|
[] |
Sulayman
| 0 |
mckinsey/vizro
|
data-visualization
| 86 |
Absence of openpyxl library causes console errors
|
### Description
When trying to export xlsx file using actions, console error occurs
```
Traceback (most recent call last):
File "/Users/o/PycharmProjects/vizro/vizro-core/src/vizro/models/_action/_action.py", line 80, in callback_wrapper
return_value = self.function(**inputs) or {}
File "/Users/o/PycharmProjects/vizro/vizro-core/src/vizro/models/types.py", line 59, in __call__
return self.__function(**bound_arguments.arguments)
File "/Users/o/PycharmProjects/vizro/vizro-core/src/vizro/actions/export_data_action.py", line 61, in export_data
callback_outputs[f"download-dataframe_{target_id}"] = dcc.send_data_frame(
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/dash/dcc/express.py", line 91, in send_data_frame
return _data_frame_senders[name](writer, filename, type, **kwargs)
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/dash/dcc/express.py", line 32, in send_bytes
content = src if isinstance(src, bytes) else _io_to_str(io.BytesIO(), src, **kwargs)
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/dash/dcc/express.py", line 58, in _io_to_str
writer(data_io, **kwargs)
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/pandas/core/generic.py", line 2252, in to_excel
formatter.write(
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/pandas/io/formats/excel.py", line 934, in write
writer = ExcelWriter( # type: ignore[abstract]
File "/Users/o/anaconda3/envs/w3af/lib/python3.8/site-packages/pandas/io/excel/_openpyxl.py", line 56, in __init__
from openpyxl.workbook import Workbook
ModuleNotFoundError: No module named 'openpyxl'
```
### Expected behavior
_No response_
### vizro version
0.1.3
### Python version
3.8
### OS
Mac OS
### How to Reproduce
1. Install vizro package into clean env
2. Add export xlsx action to the dashboard
3. Run export action
### Output
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md).
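pandas delegates `.to_excel()` to the `openpyxl` engine, which is not a hard vizro dependency; installing it into the same environment should make the xlsx export work:
```sh
pip install openpyxl
```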
|
closed
|
2023-10-02T10:26:07Z
|
2023-10-06T15:55:08Z
|
https://github.com/mckinsey/vizro/issues/86
|
[
"Bug Report :bug:"
] |
l0uden
| 1 |
QuivrHQ/quivr
|
api
| 3,427 |
Add Relevant Parameters to URL endpoint
|
closed
|
2024-10-25T08:18:31Z
|
2024-10-29T08:49:25Z
|
https://github.com/QuivrHQ/quivr/issues/3427
|
[] |
chloedia
| 1 |
|
darrenburns/posting
|
automation
| 229 |
Ctrl+D and Ctrl+U in read-only TextArea isn't integrated with "visual select" mode
|
open
|
2025-03-14T23:20:56Z
|
2025-03-14T23:21:04Z
|
https://github.com/darrenburns/posting/issues/229
|
[
"bug"
] |
darrenburns
| 0 |
|
kaliiiiiiiiii/Selenium-Driverless
|
web-scraping
| 241 |
[BUG] different iframes share same window
|
Reproducable Code Snippet (Edited from [Playwright Tests](https://github.com/microsoft/playwright-python/blob/47c1bc15122796da70a160b07f1e08a71f9032e2/tests/async/test_frames.py#L92-L108)):
```py
from selenium_driverless import webdriver
from selenium_driverless.types.by import By
import asyncio
EMPTY_PAGE = "https://google.com/blank.html"
async def attach_frame(driver: webdriver.Chrome, frame_id: str, url: str):
script = f"window.{frame_id} = document.createElement('iframe'); window.{frame_id}.src = '{url}'; window.{frame_id}.id = '{frame_id}'; document.body.appendChild(window.{frame_id});await new Promise(x => window.{frame_id}.onload = x);"
await driver.eval_async(script)
frame = await driver.execute_script(f"return window.{frame_id}")
return frame
async def main():
options = webdriver.ChromeOptions()
async with webdriver.Chrome(options=options) as driver:
await driver.get(EMPTY_PAGE)
await attach_frame(driver, "frame0", EMPTY_PAGE)
await attach_frame(driver, "frame1", EMPTY_PAGE)
iframes = await driver.find_elements(By.TAG_NAME, "iframe")
assert len(iframes) == 2
[frame1, frame2] = iframes
await asyncio.gather(
frame1.execute_script("window.a = 1"), frame2.execute_script("window.a = 2")
)
[a1, a2] = await asyncio.gather(
frame1.execute_script("return window.a"), frame2.execute_script("return window.a")
)
print(a1, a2) # 2 2 (Should be: 1 2)
assert a1 == 1
assert a2 == 2
asyncio.run(main())
```
|
closed
|
2024-06-12T10:59:49Z
|
2024-06-12T13:29:50Z
|
https://github.com/kaliiiiiiiiii/Selenium-Driverless/issues/241
|
[
"invalid"
] |
Vinyzu
| 1 |
serengil/deepface
|
machine-learning
| 1,118 |
The link to the postman project is wrong
|
The link should point to `https://github.com/serengil/deepface/tree/master/deepface/api/postman` instead of `https://github.com/serengil/deepface/tree/master/api/postman`
|
closed
|
2024-03-17T18:13:57Z
|
2024-03-17T18:21:31Z
|
https://github.com/serengil/deepface/issues/1118
|
[
"documentation"
] |
rezakarbasi
| 2 |
noirbizarre/flask-restplus
|
flask
| 557 |
swagger.json not loading on Production-500 status code
|
closed
|
2018-11-27T06:00:49Z
|
2018-11-28T02:11:08Z
|
https://github.com/noirbizarre/flask-restplus/issues/557
|
[] |
rootVIII
| 0 |
|
davidteather/TikTok-Api
|
api
| 1,162 |
[BUG] - Failed to run even hashtag_example i added ms_token
|
I'm unable to run even the examples with an added ms_token. I'm not sure if I have done everything right, but I can't quite tell whether I also had to add a cookie, or how to get a cookie other than the ms_token.
I think I filled in the `ms_token, None` line correctly.
I modified the example:
```py
from TikTokApi import TikTokApi
import asyncio
import os
ms_token = os.environ.get("sySiUDTlgtCc9mZpdOuYuP1stjvSP8mgMSG3jQEDXBK6X86EuOqzLjjFAs8KMWVrg1G_8C7uTxm4We4_oDyjZidveAfd7lb5-DFhGFkaqsi8tRFnV7dYiIaPsxr0SFt6v-d6RSNE3JsNFqaYS8w=", None) # set your own ms_token
async def get_hashtag_videos():
async with TikTokApi() as api:
await api.create_sessions(ms_tokens=[ms_token], num_sessions=1, sleep_after=3)
tag = api.hashtag(name="coldplayathens")
async for video in tag.videos(count=150):
print(video)
print(video.as_dict)
if __name__ == "__main__":
asyncio.run(get_hashtag_videos())
```
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/vaggosval/Desktop/toktik/TikTok-Api/examples/hashtag_example.py", line 18, in <module>
asyncio.run(get_hashtag_videos())
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/local/Cellar/python@3.10/3.10.14/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/vaggosval/Desktop/toktik/TikTok-Api/examples/hashtag_example.py", line 12, in get_hashtag_videos
async for video in tag.videos(count=150):
File "/Users/vaggosval/Desktop/toktik/TikTok-Api/TikTokApi/api/hashtag.py", line 118, in videos
resp = await self.parent.make_request(
File "/Users/vaggosval/Desktop/toktik/TikTok-Api/TikTokApi/tiktok.py", line 441, in make_request
raise EmptyResponseException(result, "TikTok returned an empty response")
TikTokApi.exceptions.EmptyResponseException: None -> TikTok returned an empty response
```
**Desktop (please complete the following information):**
- OS: [e.g. macOS 11.7.10]
- TikTokApi Version [e.g. 6.4.0] - if out of date upgrade before posting an issue
**Additional context**
Add any other context about the problem here.
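One likely problem in the snippet above, separate from the empty-response error: `os.environ.get()` takes the variable *name*, but here the token value itself was passed as the name (so the lookup always falls back to `None`). A corrected sketch:
```python
import os

# Export the token first, e.g. in the shell:
#   export ms_token="sySiUD...SFt6v-d6RSNE3JsNFqaYS8w="
ms_token = os.environ.get("ms_token")  # None if the variable is unset
```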
|
open
|
2024-06-17T14:13:14Z
|
2024-07-14T18:33:56Z
|
https://github.com/davidteather/TikTok-Api/issues/1162
|
[
"bug"
] |
vagvalas
| 5 |
comfyanonymous/ComfyUI
|
pytorch
| 6,734 |
ComfyUI-Frame-Interpolation and ROCm Compatibility Issue
|
### Expected Behavior
Install and use the ComfyUI-Frame-Interpolation node. It completes without error, but fails in the back end (depending on the version being installed).
### Actual Behavior
Installation fails with a "file missing" or "ROCm version unsupported" error.
### Steps to Reproduce
Install ComfyUI-Frame-Interpolation on the docker comfyui:v2-rocm-6.0-runtime-22.04-v0.2.7 version with AMD ROCm.
### Debug Logs
```powershell
See below.
```
### Other
To start, this may not be an issue with ComfyUI, but as all available versions fail, and it does not appear to be reported by NVidia users, I am starting here. Feel free to send me over to the Cupy team. I attempted to join the Matrix discussion, but it is locked behind an invite requirement.
While trying to use ComfyUI-Frame-Interpolation with my Radeon 7900 XTX, I encountered a ```No module named 'cupy'``` error. After looking through the log, I found this issue:
https://github.com/cupy/cupy/pull/8319
It sounds like there is a compatibility issue that was resolved, but no matter what version of this module I install, it fails.
I am using the docker comfyui:v2-rocm-6.0-runtime-22.04-v0.2.7 version.
I tried installing multiple version of the module, removing and reinstalling inbetween each. Here are the errors that I got for each.
Attempting to install 1.0.1, 1.0.5 or nightly version give a file missing error:
```
Download: git clone 'https://github.com/Fannovel16/ComfyUI-Frame-Interpolation'
100%|โโโโโโโโโโ| 615.0/615.0 [08:07<00:00, 1.26it/s]
Install: install script
## ComfyUI-Manager: EXECUTE => ['/opt/environments/python/comfyui/bin/python',
'-m', 'pip', 'install.py']
[!] ERROR: unknown command "install.py" - maybe you meant "install"
install script failed: https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
[ComfyUI-Manager] Installation failed:
Failed to execute install script: https://github.com/Fannovel16/ComfyUI-Frame-Interpolation
[ComfyUI-Manager] Queued works are completed.
{'install': 1}
After restarting ComfyUI, please refresh the browser.
```
Installing the latest release version 1.0.6 I get a ROCm version unsupported error:
```
[ComfyUI-Manager] Starting dependency installation/(de)activation for the extension
Downloading https://storage.googleapis.com/comfy-registry/fannovel16/comfyui-frame-interpolation/1.0.6/node.tar.gz to /workspace/ComfyUI/custom_nodes/CNR_temp_00bb29e8-6f00-4620-ba37-ebc094083891.zip
100%|โโโโโโโโโโ| 16800732/16800732 [00:03<00:00, 4920016.73it/s]
Extracted zip file to /workspace/ComfyUI/custom_nodes/comfyui-frame-interpolation
'/workspace/ComfyUI/custom_nodes/comfyui-frame-interpolation' is moved to '/workspace/ComfyUI/custom_nodes/comfyui-frame-interpolation'
Install: install script for '/workspace/ComfyUI/custom_nodes/comfyui-frame-interpolation'
Installing torch...
Installing numpy...
Installing einops...
Installing opencv-contrib-python...
Collecting opencv-contrib-python
Downloading opencv_contrib_python-4.11.0.86-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (69.1 MB)
โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ 69.1/69.1 MB 482.0 kB/s eta 0:00:00
Installing collected packages: opencv-contrib-python
Successfully installed opencv-contrib-python-4.11.0.86
Installing kornia...
Installing scipy...
Installing Pillow...
Installing torchvision...
Installing tqdm...
Checking cupy...
Uninstall cupy if existed...
[!] WARNING: Skipping cupy-wheel as it is not installed.
[!] WARNING: Skipping cupy-cuda102 as it is not installed.
[!] WARNING: Skipping cupy-cuda110 as it is not installed.
[!] WARNING: Skipping cupy-cuda111 as it is not installed.
[!] WARNING: Skipping cupy-cuda11x as it is not installed.
[!] WARNING: Skipping cupy-cuda12x as it is not installed.
Installing cupy...
Collecting cupy-wheel
Downloading cupy-wheel-12.3.0.tar.gz (2.9 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
[!] error: subprocess-exited-with-error
[!]
[!] ร python setup.py egg_info did not run successfully.
[!] โ exit code: 1
[!] โฐโ> [30 lines of output]
[!] [cupy-wheel] Trying to detect CUDA version from libraries: ['libnvrtc.so.12', 'libnvrtc.so.11.2', 'libnvrtc.so.11.1', 'libnvrtc.so.11.0', 'libnvrtc.so.10.2']
[!] [cupy-wheel] Looking for library: libnvrtc.so.12
[!] [cupy-wheel] Failed to open libnvrtc.so.12: libnvrtc.so.12: cannot open shared object file: No such file or directory
[!] [cupy-wheel] Looking for library: libnvrtc.so.11.2
[!] [cupy-wheel] Failed to open libnvrtc.so.11.2: libnvrtc.so.11.2: cannot open shared object file: No such file or directory
[!] [cupy-wheel] Looking for library: libnvrtc.so.11.1
[!] [cupy-wheel] Failed to open libnvrtc.so.11.1: libnvrtc.so.11.1: cannot open shared object file: No such file or directory
[!] [cupy-wheel] Looking for library: libnvrtc.so.11.0
[!] [cupy-wheel] Failed to open libnvrtc.so.11.0: libnvrtc.so.11.0: cannot open shared object file: No such file or directory
[!] [cupy-wheel] Looking for library: libnvrtc.so.10.2
[!] [cupy-wheel] Failed to open libnvrtc.so.10.2: libnvrtc.so.10.2: cannot open shared object file: No such file or directory
[!] [cupy-wheel] No more candidate library to find
[!] [cupy-wheel] Looking for library: libamdhip64.so
[!] [cupy-wheel] Detected version: 60032830
[!] Traceback (most recent call last):
[!] File "<string>", line 2, in <module>
[!] File "<pip-setuptools-caller>", line 34, in <module>
[!] File "/tmp/pip-install-m0j600uu/cupy-wheel_9391c5ef41c84043a2364574b9982ed4/setup.py", line 285, in <module>
[!] main()
[!] File "/tmp/pip-install-m0j600uu/cupy-wheel_9391c5ef41c84043a2364574b9982ed4/setup.py", line 269, in main
[!] package = infer_best_package()
[!] File "/tmp/pip-install-m0j600uu/cupy-wheel_9391c5ef41c84043a2364574b9982ed4/setup.py", line 257, in infer_best_package
[!] return _rocm_version_to_package(version)
[!] File "/tmp/pip-install-m0j600uu/cupy-wheel_9391c5ef41c84043a2364574b9982ed4/setup.py", line 220, in _rocm_version_to_package
[!] raise AutoDetectionFailed(
[!] __main__.AutoDetectionFailed:
[!] ============================================================
[!] Your ROCm version (60032830) is unsupported.
[!] ============================================================
[!]
[!] [end of output]
[!]
[!] note: This error originates from a subprocess, and is likely not a problem with pip.
[!] error: metadata-generation-failed
[!]
[!] ร Encountered error while generating package metadata.
[!] โฐโ> See above for output.
[!]
[!] note: This is an issue with the package mentioned above, not pip.
[!] hint: See above for details.
[ComfyUI-Manager] Startup script completed.
```
I am not sure if I just need to wait until this is resolved, if these versions should be removed from ComfyUI (for AMD users), or if this is a bug in either ComfyUI or Cupy.
|
closed
|
2025-02-07T05:15:32Z
|
2025-02-08T22:51:03Z
|
https://github.com/comfyanonymous/ComfyUI/issues/6734
|
[
"Potential Bug"
] |
BloodBlight
| 8 |
horovod/horovod
|
pytorch
| 3,138 |
deadlock and scaling efficiency problem with horovod
|
**Environment:**
1. Framework: (TensorFlow)
2. Horovod version:0.21.0
3. MPI
Here is the problem: on one server Horovod can run 48 processes, but across two servers it can only run 76 processes. When the process count exceeds 76, training blocks, as in the bug report below, and CPU efficiency across two servers drops to 70%.
**Bug report:**
W /tmp/pip-install-fzowsl52/horovod_cf7b4c2a81094c6ea91742be27043f3c/horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
Missing ranks:
5: [tower_0/HorovodAllreduce_tower_0_gradients_AddN_12_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_13_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_14_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_15_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_16_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_17_0 ...]
10: [tower_0/HorovodAllreduce_tower_0_gradients_AddN_12_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_13_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_14_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_15_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_16_0, tower_0/HorovodAllreduce_tower_0_gradients_AddN_17_0 ...]
.......
|
open
|
2021-08-28T13:57:24Z
|
2023-03-29T13:19:55Z
|
https://github.com/horovod/horovod/issues/3138
|
[
"bug"
] |
huwenyue
| 3 |
amerkurev/scrapper
|
web-scraping
| 23 |
When running service for several days, service starts to fail
|
Hello,
When running the service for several days, the service starts to fail and the ping endpoint still returns success all the time.
Have you experienced such an issue? Maybe the ping endpoint has to be improved. Plus maybe there are some minimal requirements for docker resources?
Thank you!
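As a stopgap while the root cause is unknown, a container-level health check against `/ping` can at least flag the wedge. A sketch, assuming the default port 3000 and `curl` available inside the image; note that plain Docker marks the container unhealthy but does not restart it by itself (that still needs an external supervisor or an orchestrator):
```sh
docker run -d --restart unless-stopped -p 3000:3000 \
  --health-cmd 'curl -fsS http://localhost:3000/ping || exit 1' \
  --health-interval 30s --health-timeout 5s --health-retries 3 \
  amerkurev/scrapper:latest
```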
|
open
|
2025-01-02T16:56:32Z
|
2025-02-25T09:28:33Z
|
https://github.com/amerkurev/scrapper/issues/23
|
[] |
mkazinauskas
| 2 |
ydataai/ydata-profiling
|
pandas
| 1,071 |
Getting DispatchError on Missing diagram bar
|
Hi,
I am getting DispatchError when trying to run below code:
```
from pandas_profiling import ProfileReport
profile = ProfileReport(efeature_db)
profile.to_file("report/profile_report.html")
```
Following Error:
```
TypeError Traceback (most recent call last)
File ~\Miniconda3\envs\py38\lib\site-packages\multimethod\__init__.py:312, in multimethod.__call__(self, *args, **kwargs)
311 try:
--> 312 return func(*args, **kwargs)
313 except TypeError as ex:
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\pandas\missing_pandas.py:20, in pandas_missing_bar(config, df)
18 @missing_bar.register
19 def pandas_missing_bar(config: Settings, df: pd.DataFrame) -> str:
---> 20 return plot_missing_bar(config, df)
File ~\Miniconda3\envs\py38\lib\contextlib.py:75, in ContextDecorator.__call__.<locals>.inner(*args, **kwds)
74 with self._recreate_cm():
---> 75 return func(*args, **kwds)
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\visualisation\missing.py:70, in plot_missing_bar(config, data)
61 """Generate missing values bar plot.
62
63 Args:
(...)
68 The resulting missing values bar plot encoded as a string.
69 """
---> 70 missingno.bar(
71 data,
72 figsize=(10, 5),
73 fontsize=get_font_size(data),
74 color=hex_to_rgb(config.html.style.primary_color),
75 labels=config.plot.missing.force_labels,
76 )
77 for ax0 in plt.gcf().get_axes():
File ~\Miniconda3\envs\py38\lib\site-packages\missingno\missingno.py:246, in bar(df, figsize, fontsize, labels, label_rotation, log, color, filter, n, p, sort, ax, orientation)
245 if orientation == 'bottom':
--> 246 (nullity_counts / len(df)).plot.bar(**plot_args)
247 else:
File ~\Miniconda3\envs\py38\lib\site-packages\pandas\plotting\_core.py:1131, in PlotAccessor.bar(self, x, y, **kwargs)
1122 """
1123 Vertical bar plot.
1124
(...)
1129 other axis represents a measured value.
1130 """
-> 1131 return self(kind="bar", x=x, y=y, **kwargs)
File ~\Miniconda3\envs\py38\lib\site-packages\pandas\plotting\_core.py:902, in PlotAccessor.__call__(self, *args, **kwargs)
901 if plot_backend.__name__ != "pandas.plotting._matplotlib":
--> 902 return plot_backend.plot(self._parent, x=x, y=y, kind=kind, **kwargs)
904 if kind not in self._all_kinds:
File ~\Miniconda3\envs\py38\lib\site-packages\plotly\__init__.py:107, in plot(data_frame, kind, **kwargs)
106 if kind == "bar":
--> 107 return bar(data_frame, **kwargs)
108 if kind == "barh":
TypeError: bar() got an unexpected keyword argument 'figsize'
The above exception was the direct cause of the following exception:
DispatchError Traceback (most recent call last)
Input In [12], in <cell line: 1>()
----> 1 profile.to_file("report/profile_report.html")
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:277, in ProfileReport.to_file(self, output_file, silent)
274 self.config.html.assets_prefix = str(output_file.stem) + "_assets"
275 create_html_assets(self.config, output_file)
--> 277 data = self.to_html()
279 if output_file.suffix != ".html":
280 suffix = output_file.suffix
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:388, in ProfileReport.to_html(self)
380 def to_html(self) -> str:
381 """Generate and return complete template as lengthy string
382 for using with frameworks.
383
(...)
386
387 """
--> 388 return self.html
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:205, in ProfileReport.html(self)
202 @property
203 def html(self) -> str:
204 if self._html is None:
--> 205 self._html = self._render_html()
206 return self._html
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:307, in ProfileReport._render_html(self)
304 def _render_html(self) -> str:
305 from pandas_profiling.report.presentation.flavours import HTMLReport
--> 307 report = self.report
309 with tqdm(
310 total=1, desc="Render HTML", disable=not self.config.progress_bar
311 ) as pbar:
312 html = HTMLReport(copy.deepcopy(report)).render(
313 nav=self.config.html.navbar_show,
314 offline=self.config.html.use_local_assets,
(...)
322 version=self.description_set["package"]["pandas_profiling_version"],
323 )
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:199, in ProfileReport.report(self)
196 @property
197 def report(self) -> Root:
198 if self._report is None:
--> 199 self._report = get_report_structure(self.config, self.description_set)
200 return self._report
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\profile_report.py:181, in ProfileReport.description_set(self)
178 @property
179 def description_set(self) -> Dict[str, Any]:
180 if self._description_set is None:
--> 181 self._description_set = describe_df(
182 self.config,
183 self.df,
184 self.summarizer,
185 self.typeset,
186 self._sample,
187 )
188 return self._description_set
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\describe.py:127, in describe(config, df, summarizer, typeset, sample)
125 missing_map = get_missing_active(config, table_stats)
126 pbar.total += len(missing_map)
--> 127 missing = {
128 name: progress(get_missing_diagram, pbar, f"Missing diagram {name}")(
129 config, df, settings
130 )
131 for name, settings in missing_map.items()
132 }
133 missing = {name: value for name, value in missing.items() if value is not None}
135 # Sample
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\describe.py:128, in <dictcomp>(.0)
125 missing_map = get_missing_active(config, table_stats)
126 pbar.total += len(missing_map)
127 missing = {
--> 128 name: progress(get_missing_diagram, pbar, f"Missing diagram {name}")(
129 config, df, settings
130 )
131 for name, settings in missing_map.items()
132 }
133 missing = {name: value for name, value in missing.items() if value is not None}
135 # Sample
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\utils\progress_bar.py:11, in progress.<locals>.inner(*args, **kwargs)
8 @wraps(fn)
9 def inner(*args, **kwargs) -> Any:
10 bar.set_postfix_str(message)
---> 11 ret = fn(*args, **kwargs)
12 bar.update()
13 return ret
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\missing.py:123, in get_missing_diagram(config, df, settings)
120 if len(df) == 0:
121 return None
--> 123 result = handle_missing(settings["name"], settings["function"])(config, df)
124 if result is None:
125 return None
File ~\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\missing.py:99, in handle_missing.<locals>.inner(*args, **kwargs)
89 warnings.warn(
90 f"""There was an attempt to generate the {missing_name} missing values diagrams, but this failed.
91 To hide this warning, disable the calculation
(...)
95 (include the error message: '{error}')"""
96 )
98 try:
---> 99 return fn(*args, *kwargs)
100 except ValueError as e:
101 warn_missing(name, str(e))
File ~\Miniconda3\envs\py38\lib\site-packages\multimethod\__init__.py:314, in multimethod.__call__(self, *args, **kwargs)
312 return func(*args, **kwargs)
313 except TypeError as ex:
--> 314 raise DispatchError(f"Function {func.__code__}") from ex
DispatchError: Function <code object pandas_missing_bar at 0x000001C30AEB6030, file "C:\Users\username\Miniconda3\envs\py38\lib\site-packages\pandas_profiling\model\pandas\missing_pandas.py", line 18>
```
Version Info:
pandas 1.4.4
pandas-profiling 3.3.0
matplotlib 3.5.3
|
closed
|
2022-09-27T12:56:52Z
|
2022-10-03T10:28:15Z
|
https://github.com/ydataai/ydata-profiling/issues/1071
|
[
"needs-triage"
] |
ashutosh486
| 6 |
Kav-K/GPTDiscord
|
asyncio
| 314 |
Add environment variable to disable ShareGPT
|
**Is your feature request related to a problem? Please describe.**
In its current implementation content is always shared with a 3rd party service (shareGPT).
**Describe the solution you'd like**
The ability to define a variable in .env to enable or disable sending any content to shareGPT
**Describe alternatives you've considered**
None.
**Additional context**
Sending all content to shareGPT adds potential risk of information disclosure to a third party, and in its current state there is no simple solution for the bot host to determine whether or not they are ok with that risk.
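A minimal sketch of the requested gate; the flag name is made up here, not an existing GPTDiscord setting, and it defaults to today's behavior (sharing enabled):
```python
import os

# Hypothetical .env flag; anything other than 1/true/yes disables sharing.
SHARE_GPT_ENABLED = os.getenv("SHARE_GPT_ENABLED", "true").lower() in ("1", "true", "yes")

async def share_conversation(conversation):
    if not SHARE_GPT_ENABLED:
        return None  # never contact the third-party service
    # ... existing ShareGPT upload logic would run here ...
```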
|
closed
|
2023-05-10T05:22:19Z
|
2023-10-21T05:46:23Z
|
https://github.com/Kav-K/GPTDiscord/issues/314
|
[] |
jeffe
| 1 |
d2l-ai/d2l-en
|
tensorflow
| 1,728 |
chapter 16 (recommender systems) needs translation to pytorch
|
chapter 16 (recommender systems) needs translation to pytorch
|
open
|
2021-04-19T23:59:57Z
|
2022-01-19T04:09:59Z
|
https://github.com/d2l-ai/d2l-en/issues/1728
|
[] |
murphyk
| 5 |