repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---
geopandas/geopandas | pandas | 3,138 | ENH: support multiple geometry columns in read_postgis | #### Is your feature request related to a problem?
I'm loading the results of a PostGIS query with multiple geometry columns, and I wish geopandas would convert all of them. The problem is that currently only one of the columns is converted to a real geometry type, and I have to convert the other geometries myself using `shapely.wkb`.
#### Describe the solution you'd like
`geopandas.read_postgis` should have an argument named `geom_cols: list[str]`. When this argument is set, all these columns are parsed into a geometry type.
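A rough sketch of the proposed usage (the `geom_cols` argument is the proposal here and does not exist yet; the query and connection string are placeholders):
```python
import geopandas as gpd
from sqlalchemy import create_engine

engine = create_engine("postgresql://user:pass@localhost/db")  # placeholder DSN

# `geom_cols` is the proposed (not yet existing) argument:
gdf = gpd.read_postgis(
    "SELECT id, geom, centroid FROM parcels",  # hypothetical query
    engine,
    geom_cols=["geom", "centroid"],
)
```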
#### API breaking implications
This will add an additional optional parameter. Behaviour wouldn't change if the parameter is not included, so it would be backwards compatible if I'm not mistaken.
#### Describe alternatives you've considered
Current solution is to use `shapely` to do the conversion after loading the data. | open | 2024-01-12T15:38:26Z | 2024-05-28T10:11:14Z | https://github.com/geopandas/geopandas/issues/3138 | [
"enhancement",
"postgis"
] | Gijs-Koot | 6 |
jupyter/nbgrader | jupyter | 1,464 | Fails to install with miniconda on Windows | ### Operating system
Windows 10
### `nbgrader --version`
n/a since the installation failed
### `jupyterhub --version` (if used with JupyterHub)
```
(base) C:\Users\Stefnotch>jupyterhub --version
'jupyterhub' is not recognized as an internal or external command,
operable program or batch file.
```
### `jupyter notebook --version`
```
(base) C:\Users\Stefnotch>jupyter notebook --version
6.4.0
```
### Expected behavior
I expected nbgrader to get installed
### Actual behavior
```
(base) C:\Users\Stefnotch>conda install -c conda-forge nbgrader
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: /
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
Examining nbgrader: 80%|███████████████████████████████████████████████████▏ | 4/5 [00:00<00:00, 14.02it/s]/failed /
UnsatisfiableError: The following specifications were found
to be incompatible with the existing python installation in your environment:
Specifications:
- nbgrader -> python[version='>=2.7,<2.8.0a0|>=3.6,<3.7.0a0|>=3.7,<3.8.0a0|>=3.8,<3.9.0a0']
Your python: python=3.9
If python is on the left-most side of the chain, that's the version you've asked for.
When python appears to the right, that indicates that the thing on the left is somehow
not available for the python version you are constrained to. Note that conda will not
change your python version to a different minor version unless you explicitly specify
that.
```
### Steps to reproduce the behavior
1. Start up Windows 10
2. Install miniconda
3. `conda install -c conda-forge jupyterlab`
4. `conda install jupyter`
5. `conda install -c conda-forge nbgrader`
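For context, the conflict above says this nbgrader release supported only Python <= 3.8, while the base environment runs 3.9; a typical conda workaround (an untested sketch) is a dedicated environment pinned to a supported interpreter:
```
conda create -n nbgrader-env python=3.8
conda activate nbgrader-env
conda install -c conda-forge nbgrader
```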
| closed | 2021-07-13T11:59:58Z | 2021-07-13T12:26:10Z | https://github.com/jupyter/nbgrader/issues/1464 | [] | stefnotch | 1 |
lepture/authlib | django | 361 | Authlib 0.15.x crashing with httpx 0.18.2 | After upgrading to HTTPX 0.18.2, the starlette oauth module (provided by Authlib) started to crash.
When I call the OAuth `authorize_redirect` method, I get the following error: `Invalid "auth" argument: <httpx._config.UnsetType object at 0x7fb5ac920310>`
In HTTPX 0.18.1, everything works like a charm.
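For reference, a minimal Starlette setup that reaches the failing call (the provider name, credentials, and route name are placeholders):
```python
from authlib.integrations.starlette_client import OAuth

oauth = OAuth()
oauth.register(
    name="google",  # placeholder provider
    server_metadata_url="https://accounts.google.com/.well-known/openid-configuration",
    client_id="...",
    client_secret="...",
)

async def login(request):
    redirect_uri = request.url_for("auth_callback")  # hypothetical route name
    # Fails under httpx 0.18.2 with: Invalid "auth" argument
    return await oauth.google.authorize_redirect(request, redirect_uri)
```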
| closed | 2021-06-24T16:16:05Z | 2021-06-25T00:04:56Z | https://github.com/lepture/authlib/issues/361 | [
"bug"
] | alessandroralc | 2 |
cleanlab/cleanlab | data-science | 359 | CI: Performance Benchmarking during CI | We want our CI to assess the performance of the code to avoid regressions.
These are benchmarks that should run via GitHub Actions, but also be easy to enable/disable via a simple boolean flag (a sketch of one such check follows the list below).
* Runtime Benchmarks: Some of these benchmarks should time the execution of certain cleanlab function calls, and report an error if the execution time in this version of the code has significantly increased from the previous version. Ideas for these function calls include copying some of the more substantial unit tests, or taking some from: github.com/cleanlab/examples/
* Performance/Accuracy Benchmarks: Other benchmarks should measure the accuracy of cleanlab's label error detection capabilities and CleanLearning's supervised learning accuracy. These would report an error if the accuracy of certain function calls in this newly updated version of the code has significantly decreased from the previous version. Ideas for sources of data/performance numbers for these benchmarks include:
* Copying some of the more substantial unit tests
* [Examples](https://github.com/cleanlab/examples/) notebooks, e.g. in particular:
1. https://github.com/cleanlab/examples/blob/master/classifier_comparison.ipynb
* Tutorial Colab notebooks
* Benchmark repos such as:
1. https://github.com/cleanlab/label-error-detection-benchmarks
2. https://github.com/cleanlab/ood-detection-benchmarks
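As an illustration of the runtime benchmarks described above, a minimal pytest-style sketch (the dataset size and time budget are arbitrary placeholders; a real check would compare against a stored baseline from the previous release):
```python
# Hypothetical runtime-regression check; thresholds are illustrative only.
import time

import numpy as np
from cleanlab.filter import find_label_issues


def test_find_label_issues_runtime_budget():
    rng = np.random.default_rng(0)
    n, k = 50_000, 3
    labels = rng.integers(0, k, size=n)
    pred_probs = rng.dirichlet(np.ones(k), size=n)  # valid probability rows

    start = time.perf_counter()
    find_label_issues(labels=labels, pred_probs=pred_probs)
    elapsed = time.perf_counter() - start

    # Arbitrary budget for illustration; real CI would compare to a stored baseline.
    assert elapsed < 60.0
```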
| open | 2022-08-23T23:42:15Z | 2022-12-17T05:00:23Z | https://github.com/cleanlab/cleanlab/issues/359 | [] | jwmueller | 0 |
tfranzel/drf-spectacular | rest-api | 1,286 | Not allowed to use word "Request" as a part of serializer name for some reason | **Describe the bug**
I was extending my API and creating new serializers. After adding another one and regenerating the schema I encountered this error message:
```
Warning [FilledFormAnswerViewSet > FilledFormAnswerSerializer > FormQuestionSerializer]: Encountered 2 components with identical names "FormQuestionRequest" and different classes <class 'forms.serializers.FormQuestionSerializer'> and <class 'forms.serializers.FormQuestionRequestSerializer'>. This will very likely result in an incorrect schema. Try renaming one.
```
This error appears after I type-hint a SerializerMethodField.
Class structure is as follows:
```
class FormQuestionRequestMappingSerializer(serializers.Serializer):
    label = serializers.CharField()
    value = serializers.CharField()

    class Meta:
        fields = ("label", "value")


class FormQuestionRequestSerializer(serializers.Serializer):
    url = serializers.CharField()
    mapping = serializers.DictField(
        child=FormQuestionRequestMappingSerializer(),
        allow_empty=False,
    )
    params = serializers.DictField()

    class Meta:
        fields = ("url", "mapping", "params")


class FormQuestionSerializer(serializers.ModelSerializer):
    choices = FormQuestionChoiceSerializer(many=True, read_only=True)
    request_params = serializers.SerializerMethodField()

    @extend_schema_field(FormQuestionRequestSerializer)
    def get_request_params(self, obj):
        if obj.data_type != QuestionDataType.REQUEST:
            return None
        return dict(
            url=reverse(obj.request_field_route) if obj.request_field_route else None,
            mapping={
                "value": obj.request_field_mapping.get("value", "id"),
                "label": obj.request_field_mapping.get("label", "title"),
            },
            request_params=obj.request_field_params,
        )

    class Meta:
        model = FormQuestion
        fields = (
            "id",
            "title",
            "data_type",
            "request_params",
        )
```
And as soon as I rename `FormQuestionRequestSerializer` so that it does NOT contain the exact word "Request", the error is gone.
This serializer class name is unique in the project - so I am sure it does not clash with an existing one.
I also have a model class `FormQuestion` in my app, and maybe that somehow causes the name conflict, but I don't quite see how that could happen.
DRF-spectacular settings are as follows:
```
SPECTACULAR_SETTINGS = {
"TITLE": "API Project",
"DESCRIPTION": "API for my project",
"VERSION": "1.0.0",
"COMPONENT_SPLIT_REQUEST": True,
"POSTPROCESSING_HOOKS": [],
}
```
**Expected behavior**
Being able to name serializer classes without causing critical errors.
| open | 2024-09-04T12:54:10Z | 2024-09-04T12:54:10Z | https://github.com/tfranzel/drf-spectacular/issues/1286 | [] | KristobalJunta | 0 |
jupyterhub/repo2docker | jupyter | 1,195 | Supporting JupyterHub singleuser 3.0.0 | I had a look at what's required for this to bump `jupyterhub-singleuser` to `3.0.0` (https://github.com/jupyterhub/binderhub/pull/1544#issuecomment-1278971454).
JupyterHub 3.0.0 requires Python 3.7+
## Python 3.7, 3.8, 3.9, 3.10 environments
This is easy, just update
https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/environment.yml#L11
and run [`freeze.py`](https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/freeze.py) to update Python 3.7, 3.8, 3.9 and 3.10 environments.
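Roughly, that is a one-line bump on the linked line (shown as a sketch; the exact package name and pin format follow the file's conventions):
```yaml
# repo2docker/buildpacks/conda/environment.yml (sketch of the bump)
- jupyterhub-singleuser==3.0.0
```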
## Python 2.7 environment
Also easy, since the kernel environment is decoupled from the notebook/JupyterLab environment.
## Python 3.5 environment
Not automatically updated:
https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/environment.py-3.5.yml#L1-L2
Stuck on a very old version, too old for 3.0.0:
https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/requirements.py-3.5.pip#L8
## Python 3.6 environment
Not automatically updated:
https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/environment.py-3.6.yml#L1-L2
Stuck on an old version, too old for 3.0.0:
https://github.com/jupyterhub/repo2docker/blob/d7be04efb590a2a1227614ccaf9b504ad9333470/repo2docker/buildpacks/conda/environment.py-3.6.lock#L131
## Options
- Separate Python 3.5 and 3.6 environments from the notebook/JupyterLab environment, similar to Python 2.7
- Separate all Python environments from the notebook/JupyterLab environment. If we're going to migrate 3.5 and 3.6 environments to the decoupled setup, it probably makes sense to do it for all Python versions to make things easier in the future, and also solves some potential package conflicts: https://github.com/jupyterhub/repo2docker/issues/741
- Separate all environments for all languages from the notebook/JupyterLab environment. Architecturally I think this makes sense, but is more work: https://github.com/jupyterhub/repo2docker/issues/868 | closed | 2022-10-14T22:11:48Z | 2023-02-15T18:28:33Z | https://github.com/jupyterhub/repo2docker/issues/1195 | [
"needs: discussion"
] | manics | 0 |
yt-dlp/yt-dlp | python | 11,849 | Can't download video (and subtitles) when many comments | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
yt-dlp downloads the video page, parses it, extracts links (subtitles, video stream), downloads comments, and then downloads the content using the extracted links.
The problem is that downloading comments can take so long that the extracted links expire, producing a 404 error.
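For illustration, one way to bound the comment-fetching time is yt-dlp's documented extractor argument for capping comment counts (the numbers below are arbitrary):
```
yt-dlp --write-comments --extractor-args "youtube:max_comments=1000,all,all,10" "https://www.youtube.com/watch?v=uD4izuDMUQA"
```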
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--get-comments', '--write-subs', 'https://www.youtube.com/watch?v=uD4izuDMUQA']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8 (No ANSI), error utf-8 (No ANSI), screen utf-8 (No ANSI)
[debug] yt-dlp version stable@2024.12.13 from yt-dlp/yt-dlp [542166962] (zip)
[debug] Python 3.9.4 (CPython x86_64 64bit) - macOS-10.13.6-x86_64-i386-64bit (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1, phantomjs 2.1.1
[debug] Optional libraries: certifi-2021.10.08, mutagen-1.45.1, requests-2.27.1, sqlite3-3.34.0, urllib3-1.26.9
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.12.13 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.12.13 from yt-dlp/yt-dlp)
[youtube] Extracting URL: https://www.youtube.com/watch?v=uD4izuDMUQA
[youtube] uD4izuDMUQA: Downloading webpage
[youtube] uD4izuDMUQA: Downloading ios player API JSON
[youtube] uD4izuDMUQA: Downloading mweb player API JSON
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig 1S4AiOPPlr8bwtmrL => XM5rNJw4_pWm0Q
[debug] Loading youtube-nsig.03dbdfab from cache
[debug] [youtube] Decrypted nsig smffvg_Fcc0uOm2Al => N-Y9lkp46e7kvg
[youtube] uD4izuDMUQA: Downloading m3u8 information
[info] uD4izuDMUQA: Downloading subtitles: en
[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec, channels, acodec, lang, proto
[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec, channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id
[youtube] Downloading comment section API JSON
[youtube] Downloading ~330599 comments
[youtube] Sorting comments by newest first
[youtube] Downloading comment API JSON page 1 (0/~330599)
[youtube] Downloading comment API JSON reply thread 1 (1/~330599)
[youtube] Downloading comment replies API JSON page 1 (11/~330599)
[youtube] Downloading comment replies API JSON page 2 (61/~330599)
[youtube] Downloading comment replies API JSON page 3 (111/~330599)
[youtube] Downloading comment replies API JSON page 4 (161/~330599)
[youtube] Downloading comment replies API JSON page 5 (211/~330599)
[youtube] Downloading comment replies API JSON page 6 (261/~330599)
[youtube] Downloading comment replies API JSON page 7 (311/~330599)
[youtube] Downloading comment replies API JSON page 8 (361/~330599)
[youtube] Downloading comment replies API JSON page 9 (411/~330599)
[youtube] Downloading comment replies API JSON page 10 (461/~330599)
[youtube] Downloading comment replies API JSON page 11 (511/~330599)
[youtube] Downloading comment replies API JSON page 12 (561/~330599)
[youtube] Downloading comment API JSON reply thread 2 (589/~330599)
[youtube] Downloading comment API JSON reply thread 3 (591/~330599)
[youtube] Downloading comment API JSON page 2 (600/~330599)
[youtube] Downloading comment API JSON page 3 (620/~330599)
[youtube] Downloading comment API JSON reply thread 1 (633/~330599)
[youtube] Downloading comment API JSON page 4 (641/~330599)
[youtube] Downloading comment API JSON reply thread 1 (642/~330599)
[youtube] Downloading comment API JSON page 5 (662/~330599)
[youtube] Downloading comment API JSON reply thread 1 (672/~330599)
[youtube] Downloading comment API JSON page 6 (683/~330599)
[youtube] Downloading comment API JSON reply thread 1 (688/~330599)
[youtube] Downloading comment API JSON reply thread 2 (700/~330599)
[youtube] Downloading comment API JSON reply thread 3 (707/~330599)
[youtube] Downloading comment API JSON page 7 (713/~330599)
[youtube] Downloading comment API JSON reply thread 1 (724/~330599)
[youtube] Downloading comment API JSON reply thread 2 (726/~330599)
[youtube] Downloading comment API JSON page 8 (737/~330599)
[youtube] Downloading comment API JSON reply thread 1 (747/~330599)
[youtube] Downloading comment API JSON page 9 (758/~330599)
[youtube] Downloading comment API JSON reply thread 1 (766/~330599)
[youtube] Downloading comment API JSON reply thread 2 (769/~330599)
[youtube] Downloading comment API JSON reply thread 3 (778/~330599)
[youtube] Downloading comment API JSON page 10 (781/~330599)
[youtube] Downloading comment API JSON reply thread 1 (788/~330599)
[youtube] Downloading comment API JSON page 11 (802/~330599)
[youtube] Downloading comment API JSON reply thread 1 (811/~330599)
[youtube] Downloading comment API JSON page 12 (823/~330599)
[youtube] Downloading comment API JSON page 13 (843/~330599)
[youtube] Downloading comment API JSON reply thread 1 (853/~330599)
[youtube] Downloading comment API JSON page 14 (865/~330599)
[youtube] Downloading comment API JSON reply thread 1 (870/~330599)
[youtube] Downloading comment API JSON reply thread 2 (872/~330599)
[youtube] Downloading comment API JSON reply thread 3 (882/~330599)
[youtube] Downloading comment API JSON reply thread 4 (884/~330599)
[youtube] Downloading comment API JSON page 15 (890/~330599)
[youtube] Downloading comment API JSON reply thread 1 (891/~330599)
[youtube] Downloading comment API JSON reply thread 2 (911/~330599)
[youtube] Downloading comment API JSON page 16 (913/~330599)
[youtube] Downloading comment API JSON reply thread 1 (929/~330599)
[youtube] Downloading comment API JSON page 17 (934/~330599)
[youtube] Downloading comment API JSON reply thread 1 (945/~330599)
[youtube] Downloading comment API JSON page 18 (955/~330599)
[youtube] Downloading comment API JSON reply thread 1 (961/~330599)
[youtube] Downloading comment API JSON reply thread 2 (971/~330599)
[youtube] Downloading comment API JSON page 19 (979/~330599)
[youtube] Downloading comment API JSON reply thread 1 (987/~330599)
[youtube] Downloading comment API JSON reply thread 2 (997/~330599)
[youtube] Downloading comment API JSON reply thread 3 (1003/~330599)
...
[youtube] Downloading comment API JSON page 10700 (328997/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329007/~330599)
[youtube] Downloading comment API JSON page 10701 (329018/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329028/~330599)
[youtube] Downloading comment API JSON page 10702 (329042/~330599)
[youtube] Downloading comment API JSON page 10703 (329062/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329078/~330599)
[youtube] Downloading comment API JSON page 10704 (329083/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329085/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329095/~330599)
[youtube] Downloading comment API JSON page 10705 (329112/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329124/~330599)
[youtube] Downloading comment API JSON page 10706 (329133/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329150/~330599)
[youtube] Downloading comment API JSON page 10707 (329157/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329168/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329179/~330599)
[youtube] Downloading comment API JSON page 10708 (329180/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329196/~330599)
[youtube] Downloading comment replies API JSON page 1 (329206/~330599)
[youtube] Downloading comment API JSON page 10709 (329212/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329214/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329231/~330599)
[youtube] Downloading comment API JSON page 10710 (329240/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329246/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329252/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329258/~330599)
[youtube] Downloading comment API JSON page 10711 (329263/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329267/~330599)
[youtube] Downloading comment API JSON page 10712 (329284/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329288/~330599)
[youtube] Downloading comment API JSON page 10713 (329306/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329309/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329320/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329334/~330599)
[youtube] Downloading comment API JSON page 10714 (329338/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329350/~330599)
[youtube] Downloading comment replies API JSON page 1 (329360/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329369/~330599)
[youtube] Downloading comment API JSON page 10715 (329371/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329386/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329392/~330599)
[youtube] Downloading comment API JSON page 10716 (329393/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329402/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329413/~330599)
[youtube] Downloading comment API JSON page 10717 (329419/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329427/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329432/~330599)
[youtube] Downloading comment API JSON page 10718 (329441/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329461/~330599)
[youtube] Downloading comment API JSON page 10719 (329462/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329468/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329472/~330599)
[youtube] Downloading comment API JSON page 10720 (329484/~330599)
[youtube] Downloading comment API JSON page 10721 (329504/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329524/~330599)
[youtube] Downloading comment API JSON page 10722 (329526/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329527/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329536/~330599)
[youtube] Downloading comment API JSON page 10723 (329555/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329565/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329567/~330599)
[youtube] Downloading comment API JSON page 10724 (329578/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329591/~330599)
[youtube] Downloading comment API JSON page 10725 (329599/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329600/~330599)
[youtube] Downloading comment API JSON page 10726 (329620/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329628/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329649/~330599)
[youtube] Downloading comment API JSON page 10727 (329651/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329653/~330599)
[youtube] Downloading comment replies API JSON page 1 (329663/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329673/~330599)
[youtube] Downloading comment API JSON page 10728 (329687/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329694/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329698/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329702/~330599)
[youtube] Downloading comment API JSON page 10729 (329719/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329737/~330599)
[youtube] Downloading comment replies API JSON page 1 (329747/~330599)
[youtube] Downloading comment API JSON page 10730 (329776/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329787/~330599)
[youtube] Downloading comment API JSON page 10731 (329803/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329812/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329817/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329823/~330599)
[youtube] Downloading comment API JSON page 10732 (329828/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329837/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329843/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329848/~330599)
[youtube] Downloading comment replies API JSON page 1 (329858/~330599)
[youtube] Downloading comment API JSON page 10733 (329863/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329879/~330599)
[youtube] Downloading comment replies API JSON page 1 (329889/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329912/~330599)
[youtube] Downloading comment API JSON page 10734 (329914/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329922/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329926/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329933/~330599)
[youtube] Downloading comment API JSON page 10735 (329945/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329949/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329955/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329963/~330599)
[youtube] Downloading comment API JSON reply thread 4 (329967/~330599)
[youtube] Downloading comment API JSON reply thread 5 (329970/~330599)
[youtube] Downloading comment API JSON page 10736 (329975/~330599)
[youtube] Downloading comment API JSON reply thread 1 (329976/~330599)
[youtube] Downloading comment API JSON reply thread 2 (329979/~330599)
[youtube] Downloading comment replies API JSON page 1 (329989/~330599)
[youtube] Downloading comment API JSON reply thread 3 (329993/~330599)
[youtube] Downloading comment API JSON reply thread 4 (329999/~330599)
[youtube] Downloading comment replies API JSON page 1 (330009/~330599)
[youtube] Downloading comment replies API JSON page 2 (330059/~330599)
[youtube] Downloading comment API JSON reply thread 5 (330067/~330599)
[youtube] Downloading comment replies API JSON page 1 (330077/~330599)
[youtube] Downloading comment replies API JSON page 2 (330127/~330599)
[youtube] Downloading comment replies API JSON page 3 (330177/~330599)
[youtube] Downloading comment replies API JSON page 4 (330227/~330599)
[youtube] Downloading comment replies API JSON page 5 (330277/~330599)
[youtube] Downloading comment replies API JSON page 6 (330327/~330599)
[youtube] Downloading comment replies API JSON page 7 (330377/~330599)
[youtube] Downloading comment replies API JSON page 8 (330427/~330599)
[youtube] Downloading comment replies API JSON page 9 (330477/~330599)
[youtube] Downloading comment replies API JSON page 10 (330527/~330599)
[youtube] Downloading comment API JSON reply thread 6 (330538/~330599)
[youtube] Downloading comment API JSON reply thread 7 (330540/~330599)
[youtube] Downloading comment replies API JSON page 1 (330550/~330599)
[youtube] Downloading comment API JSON reply thread 8 (330557/~330599)
[youtube] Downloading comment API JSON page 10737 (330558/~330599)
[youtube] Downloading comment API JSON reply thread 1 (330564/~330599)
[youtube] Downloading comment API JSON reply thread 2 (330568/~330599)
[youtube] Downloading comment API JSON reply thread 3 (330574/~330599)
[youtube] Extracted 330580 comments
[debug] Default format spec: bestvideo*+bestaudio/best
[info] uD4izuDMUQA: Downloading 1 format(s): 401+251
[info] Writing video subtitles to: TIMELAPSE OF THE FUTURE: A Journey to the End of Time (4K) [uD4izuDMUQA].en.vtt
[debug] Invoking http downloader on "https://www.youtube.com/api/timedtext?v=uD4izuDMUQA&ei=QQxiZ_uZN-_UxN8P0ufXoQc&caps=asr&opi=112496729&exp=xbt&xoaf=5&hl=en&ip=0.0.0.0&ipbits=0&expire=1734504113&sparams=ip%2Cipbits%2Cexpire%2Cv%2Cei%2Ccaps%2Copi%2Cexp%2Cxoaf&signature=5C32FB21273E3176945C8CDF31245E71038E40BD.19F6AEC3D4E5045E0BAFC6CB0B74DAACB6B8DBAC&key=yt8&lang=en&fmt=vtt"
ERROR: Unable to download video subtitles for 'en': HTTP Error 404: Not Found
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 398, in _send
res = opener.open(urllib_req, timeout=self._calculate_timeout(request))
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
response = meth(req, response)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
response = self.parent.error(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
return self._call_chain(*args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 404: Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4351, in _write_subtitles
self.dl(sub_filename, sub_copy, subtitle=True)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3199, in dl
return fd.download(name, new_info, subtitle)
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/common.py", line 464, in download
ret = self.real_download(filename, info_dict)
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/http.py", line 367, in real_download
establish_connection()
File "/usr/local/bin/yt-dlp/yt_dlp/downloader/http.py", line 118, in establish_connection
ctx.data = self.ydl.urlopen(request)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4162, in urlopen
return self._request_director.send(req)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 117, in send
response = handler.send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_helper.py", line 208, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/common.py", line 340, in send
return self._send(request)
File "/usr/local/bin/yt-dlp/yt_dlp/networking/_urllib.py", line 403, in _send
raise HTTPError(UrllibResponseAdapter(e.fp), redirect_loop='redirect error' in str(e)) from e
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1624, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1780, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 1839, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3011, in process_video_result
self.process_info(new_info)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 177, in wrapper
return func(self, *args, **kwargs)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3267, in process_info
sub_files = self._write_subtitles(info_dict, temp_filename)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 4359, in _write_subtitles
raise DownloadError(msg)
yt_dlp.utils.DownloadError: Unable to download video subtitles for 'en': HTTP Error 404: Not Found
```
| open | 2024-12-18T15:38:47Z | 2024-12-21T06:12:37Z | https://github.com/yt-dlp/yt-dlp/issues/11849 | [
"question"
] | defder-su | 11 |
sigmavirus24/github3.py | rest-api | 555 | Not installing on OS X El Capitan with Python 2.7 | EDIT: The problem below was actually triggered by a binary compatible but different Python(Stackless).
There is currently a problem on OS X.
Actually, this is a cryptography issue.
But since I think this package is much more popular,
it is probably adequate to report the error here.
The cure to this problem is this:
```
brew install openssl
env ARCHFLAGS="-arch x86_64" \
LDFLAGS="-L/usr/local/opt/openssl/lib" \
CFLAGS="-I/usr/local/opt/openssl/include" \
pip install cryptography
```
| closed | 2016-01-28T00:14:17Z | 2020-08-11T15:31:46Z | https://github.com/sigmavirus24/github3.py/issues/555 | [
"Needs documentation"
] | ctismer | 7 |
JohnSnowLabs/nlu | streamlit | 100 | Where to obtain "license_keys"? | All the colab examples I wanted to run [e.g. this one](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/Certification_Trainings/Healthcare/3.Clinical_Entity_Resolvers.ipynb) are asking for a license file via `files.upload()`.
How do I obtain one?
Thanks. | closed | 2022-02-17T21:48:46Z | 2022-02-28T16:17:51Z | https://github.com/JohnSnowLabs/nlu/issues/100 | [
"question"
] | CatChenal | 2 |
widgetti/solara | jupyter | 70 | [Feature Request] Create a solara component for a scrollable text area | Often times you want a scrollable text area for markdown, text, sql, code, anything really. You should be able to do something like
```
with sl.Scrollable(max_height="500px"):
    sl.Markdown("## Hi")
    sl.Markdown(<some_long_text>)
```
or something similar. Currently, I need to do this, which would be nearly impossible for a new user to figure out:
```
import reacton.ipyvuetify as v
import solara as sl

with sl.Card():
    card_md = v.Container(
        style_=(
            "height: 700px; overflow-y: scroll;"
        )
    )
    card_md.add_children([sl.Markdown(my_content)])
``` | open | 2023-04-15T16:39:54Z | 2024-12-27T15:05:52Z | https://github.com/widgetti/solara/issues/70 | [
"enhancement",
"good first issue"
] | Ben-Epstein | 0 |
pyppeteer/pyppeteer | automation | 149 | [Question] Would you like to have a recorder for pyppeteer? | Hi,
I'm Thach - Smart Monkey's author.
I have added a functionality to Smart Money for exporting a project to Puppeteer code (jest puppeteer). Here is the video for demo: https://www.youtube.com/watch?v=eBzK85MOo-A
In the most of cases, you don't need to write any line of code for creating a test suite.
I have one question: would you like to have similar functionality for exporting project to pyppeteer code? If yes, please react :+1: this post! In case there are enough votes (1000), I will add the function!
Sincerely,
Thach
| open | 2020-07-09T13:29:23Z | 2022-08-12T07:01:57Z | https://github.com/pyppeteer/pyppeteer/issues/149 | [
"enhancement",
"discussion"
] | nglthach | 4 |
kizniche/Mycodo | automation | 650 | DHT22 unable to activate with daemon | ## Mycodo Issue Report:
- Specific Mycodo Version: 7.4.3
#### Problem Description
- Trying to add and activate a DHT22 unit.
- Tested the unit separately and it works, but it seems to run into an error with the Mycodo daemon.
### Errors
Error: Could not activate Input controller with ID 7829a1e3-f8bf-4430-a1e4-04ff0ed4c403: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
DAEMON LOG
2019-04-21 13:57:00,148 - mycodo.daemon - WARNING - Input controller with ID 7829a1e3-f8bf-4430-a1e4-04ff0ed4c403 not found
2019-04-21 13:57:23,466 - mycodo.daemon - ERROR - Could not activate Input controller with ID 495f207f-ed95-4f2c-b9d6-3635ea7c4f0f: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
Traceback (most recent call last):
File "/var/mycodo-root/mycodo/mycodo_daemon.py", line 556, in controller_activate
ready, cont_id)
File "/var/mycodo-root/mycodo/controller_input.py", line 173, in __init__
self.measure_input = input_loaded.InputModule(self.input_dev)
File "/home/pi/Mycodo/mycodo/inputs/dht22.py", line 117, in __init__
self.gpio = int(input_dev.gpio_location)
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
from this line:
https://github.com/kizniche/Mycodo/blob/b220ed6f04b9718653e6b8a77ca7f9a4a2e70811/mycodo/mycodo_daemon.py#L564-L569
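For illustration, a guard like this would presumably avoid the crash when no GPIO pin is configured (a sketch only, not the project's actual fix; I'm assuming `self.logger` exists on the input class):
```python
if input_dev.gpio_location is None:
    self.logger.error("No GPIO pin configured for this DHT22 input; cannot activate")
    return
self.gpio = int(input_dev.gpio_location)
```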
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Add DHT to Mycodo using Setup > Data > Input dropdown > DHT22
2. Activate DHT using the Activate button
3. An error is thrown saying:
> "Error: Could not activate Input controller with ID 495f207f-ed95-4f2c-b9d6-3635ea7c4f0f: int() argument must be a string, a bytes-like object or a number, not 'NoneType' | closed | 2019-04-21T12:41:36Z | 2019-04-30T20:25:42Z | https://github.com/kizniche/Mycodo/issues/650 | [] | jlb-qi | 3 |
nltk/nltk | nlp | 2,478 | ValueError when unpickling ParentedTree with Python 3.7 or higher | This works in Python 3.6, but when unpickling a ParentedTree with Python 3.7 or Python 3.8, the following ValueError occurs:
`ValueError: Can not insert a subtree that already has a parent.`
Following example script produces the Error:
```python
import pickle
from nltk.tree import ParentedTree
tree = ParentedTree.fromstring('(S (NN x) (NP x) (NN x))')
pickled = pickle.dumps(tree)
tree_2 = pickle.loads(pickled)
print(tree)
print(tree_2)
```
Output of Python 3.6 (working):
```
(S (NN x) (NP x) (NN x))
(S (NN x) (NP x) (NN x))
```
Output in Python 3.7 / 3.8:
```
Traceback (most recent call last):
File "nltk_pickle_test.py", line 7, in <module>
tree_2 = pickle.loads(pickled)
File "pickletest/venv3.8/lib/python3.8/site-packages/nltk/tree.py", line 1192, in extend
self._setparent(child, len(self))
File "pickletest/venv3.8/lib/python3.8/site-packages/nltk/tree.py", line 1358, in _setparent
raise ValueError('Can not insert a subtree that already ' 'has a parent.')
ValueError: Can not insert a subtree that already has a parent.
```
The error also occurs when saving and loading via a file. Tested with nltk 3.4.5. | closed | 2019-12-14T18:09:56Z | 2021-11-12T13:31:54Z | https://github.com/nltk/nltk/issues/2478 | [] | movabo | 3 |
miguelgrinberg/python-socketio | asyncio | 207 | permessage-deflate compression support | Official Socket.IO [since 1.4.0](https://socket.io/blog/socket-io-1-4-0/) supports compression via [permessage-deflate (rfc7692)](https://tools.ietf.org/html/rfc7692).
What would be needed to support that here? Or is it better to support that perhaps with a middleware or something? | closed | 2018-10-19T08:53:27Z | 2018-10-31T16:44:03Z | https://github.com/miguelgrinberg/python-socketio/issues/207 | [
"enhancement"
] | gordol | 7 |
reloadware/reloadium | flask | 95 | PyCharm with virtualenv and reloadium plugin does not work - No module named reloadium.corium | PyCharm Pro 2022.3.1 with virtualenv and Python 3.10 after icon click "Debug 'dl' with Reloadium" debug console output:
/home/user/py310env/bin/python -m reloadium pydev_proxy /home/user/pycharm/plugins/python/helpers/pydev/pydevd.py --multiprocess --save-signatures --qt-support=auto --client 127.0.0.1 --port 41999 --file /home/user/XXX/yyy/dl.py
It seems like your platform or Python version are not supported yet.
Windows, Linux, macOS and Python 64 bit >= 3.7 (>= 3.9 for M1) <= 3.10 are currently supported.
Please submit a github issue if you believe Reloadium should be working on your system at
https://github.com/reloadware/reloadium
To see the exception run reloadium with environmental variable RW_DEBUG=True
Traceback (most recent call last):
File "/usr/lib/python3.10/runpy.py", line 187, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/usr/lib/python3.10/runpy.py", line 146, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/usr/lib/python3.10/runpy.py", line 110, in _get_module_details
__import__(pkg_name)
File "/home/user/.reloadium/package/3.7/reloadium/__init__.py", line 4, in <module>
pre_import_check()
File "/home/user/.reloadium/package/3.7/reloadium/__utils__.py", line 21, in pre_import_check
import reloadium.corium
**ModuleNotFoundError: No module named 'reloadium.corium'**
Process finished with exit code 1
| closed | 2023-01-30T11:39:30Z | 2023-06-17T02:37:48Z | https://github.com/reloadware/reloadium/issues/95 | [] | saphireee | 6 |
google-research/bert | nlp | 542 | How do you train a custom corpus with BERT? | I am using a domain-specific dataset for text classification.
But most of my data points are mapped to the [UNK] token by BERT.
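One workaround I've seen suggested (a sketch only; the vocab path and domain terms are placeholders) is to repurpose BERT's reserved `[unusedN]` vocabulary slots so the WordPiece tokenizer stops emitting [UNK]:
```python
# Sketch: replace [unusedN] entries in vocab.txt with domain-specific tokens
vocab_path = "uncased_L-12_H-768_A-12/vocab.txt"  # placeholder path
domain_terms = ["myocarditis", "troponin"]        # example domain tokens

with open(vocab_path) as f:
    vocab = f.read().splitlines()

replaced = 0
for i, tok in enumerate(vocab):
    if tok.startswith("[unused") and replaced < len(domain_terms):
        vocab[i] = domain_terms[replaced]
        replaced += 1

with open(vocab_path, "w") as f:
    f.write("\n".join(vocab))
```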
Can I please get help on how to keep my custom corpus tokens? | open | 2019-04-03T07:11:47Z | 2019-07-19T08:51:05Z | https://github.com/google-research/bert/issues/542 | [] | shadylpstan | 1 |
sczhou/CodeFormer | pytorch | 56 | !python no longer working in colab | !python is not working (for me) in Colab any longer.
Fix: Use `%run` instead of `!python` in ALL instances. This fixes the issue.
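For example, in a Colab cell (the script name and flags are taken from the CodeFormer README; adjust to your setup):
```
# before (now failing):
!python inference_codeformer.py -w 0.7 --input_path inputs/whole_imgs
# after (works):
%run inference_codeformer.py -w 0.7 --input_path inputs/whole_imgs
```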
Also, if .jpg is not working, be sure you are not using a version saved from the clipboard in Windows, which stores it as JPG (all caps); make sure the extension is all lowercase. | closed | 2022-10-22T04:30:15Z | 2022-11-20T07:57:52Z | https://github.com/sczhou/CodeFormer/issues/56 | [] | Rob-Milliken | 0 |
fugue-project/fugue | pandas | 232 | [BUG] transform does not work for Fugue DataFrames | **Describe the bug**
The problem is here https://github.com/fugue-project/fugue/blob/ff004917b48b31796f064cc390e1465efcd6cfd4/fugue/interfaceless.py#L67
**Expected behavior**
It should be `result`; otherwise it will just return the original dataframe.
| closed | 2021-07-22T23:33:01Z | 2021-07-23T01:23:27Z | https://github.com/fugue-project/fugue/issues/232 | [
"bug",
"high priority",
"core feature"
] | goodwanghan | 0 |
autogluon/autogluon | computer-vision | 4,794 | Functionality to resume from checkpoint in `TimeSeriesPredictor` | ## Description
`MultiModalPredictor.load` allows one to specify `resume=True`, which lets you resume training from a checkpoint. Is there any plan to include similar functionality for `TimeSeriesPredictor.load` and the associated training procedures? Or perhaps a way to hack it. I have a large selection of large time series datasets and I'm interested in training a big global model to predict them.
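For reference, the multimodal pattern referred to above looks roughly like this (the save path and the continuation call are illustrative):
```python
from autogluon.multimodal import MultiModalPredictor

# Resume an interrupted fit from the last saved checkpoint
predictor = MultiModalPredictor.load("path/to/save_dir", resume=True)
predictor.fit(train_data)  # hypothetical: continue training on the same data
```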
## References
- multi modal [docs](https://auto.gluon.ai/stable/api/autogluon.multimodal.MultiModalPredictor.load.html)
- time series [docs](https://auto.gluon.ai/stable/api/autogluon.timeseries.TimeSeriesPredictor.load.html)
| closed | 2025-01-14T12:27:26Z | 2025-01-22T15:44:42Z | https://github.com/autogluon/autogluon/issues/4794 | [
"enhancement",
"module: timeseries"
] | chrissype | 3 |
google-deepmind/sonnet | tensorflow | 110 | snt.Conv2DTranspose and tf.layers.conv2d_transpose yield different results | The sonnet documentation states the about the Conv2DTranspose:
> This acts as a light wrapper around the TensorFlow op `tf.nn.conv2d_transpose`
> abstracting away variable creation and sharing.
```tf.layers.conv2d_transpose(inputs, filters=64, kernel_size=1, strides=5, padding='valid')```
--> yields shape `(?, 5, 5, 64)`
`snt.Conv2DTranspose(64, 1, 5, padding='VALID')(inputs)`
--> yields shape `(?, 1, 1, 64)`
for `inputs` shape `(?, 1, 1, 218)`
Changing the semantics of the parameters from `input_channels` (TensorFlow) to `output_channel` (Sonnet) does not make this a 'light' wrapper in my view, so either the wording or the code should be adjusted in Sonnet.
| closed | 2018-12-05T06:13:25Z | 2020-04-17T09:19:01Z | https://github.com/google-deepmind/sonnet/issues/110 | [] | ferreirafabio | 2 |
holoviz/panel | matplotlib | 7,347 | Opening xarray dataset suppresses panel error messages | <!--
Thanks for contacting us! Please read and follow these instructions carefully, then you can delete this introductory text. Note that the issue tracker is NOT the place for usage questions and technical assistance; post those at [Discourse](https://discourse.holoviz.org) instead. Issues without the required information below may be closed immediately.
-->
#### ALL software version info
<details>
<summary>Software Version Info</summary>
```plaintext
bokeh 3.5.2 pyhd8ed1ab_0 conda-forge
panel 1.5.0 pyhd8ed1ab_0 conda-forge
rioxarray 0.17.0 pyhd8ed1ab_0 conda-forge
xarray 2024.9.0 pyhd8ed1ab_0 conda-forge
```
</details>
#### Description of expected behavior and the observed behavior
I run the code below as a panel app
```
panel serve example.py --show
```
When ``x`` is selected, the app results in a blank screen (the tab title shows OK) and there is no stack trace printed to the terminal. The stack trace is printed when the ``ds = xr.open_dataset(file)`` line is commented out.
I tracked the issue to xarray's plugin ``rioxarray.xarray_plugin:RasterioBackend``. When I comment out line in ``xarray.backends.plugins.py``, function ``build_engines`` that loads that plugin:
```
def build_engines(entrypoints: EntryPoints) -> dict[str, BackendEntrypoint]:
    backend_entrypoints: dict[str, type[BackendEntrypoint]] = {}
    for backend_name, (module_name, backend) in BACKEND_ENTRYPOINTS.items():
        if module_name is None or module_available(module_name):
            backend_entrypoints[backend_name] = backend
    entrypoints_unique = remove_duplicates(entrypoints)
    # external_backend_entrypoints = backends_dict_from_pkg(entrypoints_unique)
    # backend_entrypoints.update(external_backend_entrypoints)
    backend_entrypoints = sort_backends(backend_entrypoints)
    set_missing_parameters(backend_entrypoints)
    return {name: backend() for name, backend in backend_entrypoints.items()}
```
the example works as expected, i.e. it fails with the traceback printed.
I don't know whether this is a problem with Panel or rioxarray.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
# example.py
import panel as pn
import xarray as xr


def update(value):
    file = '/tmp/foo.nc'
    ds = xr.open_dataset(file)
    return pn.pane.Str(int(value))


selector = pn.widgets.Select(name="options", options=["x", 2], value="x")
bf = pn.bind(update, selector)
panel_bf = pn.panel(bf)
pn.Row(selector, pn.panel(bf)).servable()
```
- [ ] I may be interested in making a pull request to address this
| closed | 2024-09-30T06:23:57Z | 2024-10-21T13:37:03Z | https://github.com/holoviz/panel/issues/7347 | [] | yt87 | 10 |
ivy-llc/ivy | tensorflow | 28,294 | Fix Ivy Failing Test: torch - shape.shape__rmul__ | closed | 2024-02-15T14:22:01Z | 2024-02-21T06:42:18Z | https://github.com/ivy-llc/ivy/issues/28294 | [
"Sub Task"
] | fnhirwa | 0 |
|
apragacz/django-rest-registration | rest-api | 280 | Django signal user_logged_in is not always fired when login via login endpoint is performed successfully | ### Checklist
* [x] I read [Contribution Guidelines](https://github.com/apragacz/django-rest-registration/blob/master/CONTRIBUTING.md#issues)
* [x] I searched existing issues before opening this one
* [x] This is not a major security issue
* [x] I reproduced the bug with the newest version
#### Optional checklist
* [ ] I attached a PR with a test case reproducing the problem
### Describe the bug
This is a spin-off from issue #276.
For session-based logins, we're using `auth.login()`:
https://github.com/apragacz/django-rest-registration/blob/e6421ac9541b683dcb86e8136df39d59c0d5f3b4/rest_registration/api/views/login.py#L103
which in turn fires `user_logged_in` signal:
https://github.com/django/django/blob/5e98959d9242c57a55c65847758781f82d386fa4/django/contrib/auth/__init__.py#L152
which will run connected `update_last_login` handler:
https://github.com/django/django/blob/9c19aff7c7561e3a82978a272ecdaad40dda5c00/django/contrib/auth/apps.py#L28
However, for token-based logins, we're not using `auth.login()` for obvious reasons, but we don't fire the `user_logged_in` signal either, even though it should be fired in that case.
The consequence is that the `last_login` field is not updated.
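For illustration, the token-based path could emit the signal explicitly; a minimal sketch (the exact placement inside the login view is an assumption):
```python
from django.contrib.auth.signals import user_logged_in

# After a successful token-based authentication:
user_logged_in.send(sender=user.__class__, request=request, user=user)
```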
### Expected behavior
`user_logged_in` is being fired exactly once when successful login happens using the `LoginView` / `login()` API view.
### Actual behavior
`user_logged_in` is being fired only when session-based login happens.
### Steps to reproduce
Steps to reproduce the behavior:
Provide a token-only login based configuration
### Diagnostic info
n/a
### Additional context
n/a
| closed | 2024-01-25T22:45:19Z | 2024-02-08T00:06:54Z | https://github.com/apragacz/django-rest-registration/issues/280 | [
"type:bug"
] | apragacz | 0 |
python-restx/flask-restx | api | 285 | Namespace error handlers broken when propagate_exceptions=True | ### Details
When an `errorhandler` is registered on a namespace, and `PROPAGATE_EXCEPTIONS` is set to `True` in the Flask app, then the namespace handler will not catch the exceptions. It looks like this is due to the `handle_error` function not checking the error handlers that exist in any child classes.
### **Code**
`api.py:653`
```python
if (
    not isinstance(e, HTTPException)
    and current_app.propagate_exceptions
    and not isinstance(e, tuple(self.error_handlers.keys()))
):
```
Should check for potential error handlers in the class and child classes:
```python
if (
    not isinstance(e, HTTPException)
    and current_app.propagate_exceptions
    and not isinstance(e, tuple(self._own_and_child_error_handlers.keys()))
):
```
### **Repro Steps** (if applicable)
1. Set `propagate_exceptions=True` in the app
2. Create a namespace, and register it to the API
3. Add a `@namespace.errorhandler` function
4. Raise error in a route, which won't get caught by namespace's error handler
### **Expected Behavior**
Error handler defined on a namespace should still catch exceptions when `propagate_exceptions` is `True`. | closed | 2021-02-20T21:35:45Z | 2022-03-01T16:38:24Z | https://github.com/python-restx/flask-restx/issues/285 | [
"bug"
] | mjreiss | 0 |
pyqtgraph/pyqtgraph | numpy | 3,044 | Segfault when iterating on a transform-mapped Vector in a thread | <!-- In the following, please describe your issue in detail! -->
<!-- If some sections do not apply, just remove them. -->
### Short description
Take a Vector, map it with a transform, put it in a _threading_ Thread, iterate, and BOOM.
### Code to reproduce
```python
import pyqtgraph as pg
import threading
def do_it():
    xform = pg.SRTTransform3D()
    v = xform.map(pg.Vector((0, 0, 0)))
    tuple(v)
pg.mkQApp()
threading.Thread(target=do_it).start()
```
### Expected behavior
<!-- What should happen? -->
### Real behavior
<!-- What happens? -->
`SIGSEGV`
### Tested environment(s)
* PyQtGraph version: 0.13.8dev0
* Qt Python binding: PyQt5 5.15.10 Qt 5.15.2
* Python version: 3.11
* NumPy version: 1.26.3
* Operating system: linux
* Installation method: git
### Additional context
Workaround: use a `QThread`, and this isn't an issue. | closed | 2024-06-04T17:46:51Z | 2024-06-07T18:16:58Z | https://github.com/pyqtgraph/pyqtgraph/issues/3044 | [] | outofculture | 8 |
autokey/autokey | automation | 395 | word wrap : the text box should be set to word wrap by default | ## Classification:UI/Usability
## Reproducibility: Always
## Version
AutoKey version: 0.95.7
Used GUI (Gtk, Qt, or both): Gtk
Linux Distribution: Linux Mint Cinnamon 19.3
## Summary
There is a big box, but sentences appear as one long text string. Word wrap would make it easier to edit.
## Steps to Reproduce (if applicable)
- Type a long sentence into the text box.
## Expected Results
- I should be able to see everything that I have typed.
## Actual Results
- Instead, I cannot see the rest of the sentence until I hit ENTER. :(
| open | 2020-03-31T04:53:36Z | 2022-01-28T08:13:22Z | https://github.com/autokey/autokey/issues/395 | [
"enhancement"
] | elmarr | 3 |
lanpa/tensorboardX | numpy | 339 | Tensorboard crashes when a scalar is added to a histogram | `tensorboardX==1.6.0`
See title; e.g., TensorBoard will report:
```
ValueError: can only convert an array of size 1 to a Python scalar
E0117 23:32:28.772649 Thread-1 _internal.py:88] Error on request:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 270, in run_wsgi
execute(self.server.app)
File "/usr/local/lib/python3.7/site-packages/werkzeug/serving.py", line 258, in execute
application_iter = app(environ, start_response)
File "/usr/local/lib/python3.7/site-packages/tensorboard/backend/application.py", line 307, in __call__
return self.data_applications[clean_path](environ, start_response)
File "/usr/local/lib/python3.7/site-packages/werkzeug/wrappers.py", line 308, in application
resp = f(*args[:-2] + (request,))
File "/usr/local/lib/python3.7/site-packages/tensorboard/plugins/scalar/scalars_plugin.py", line 199, in scalars_route
(body, mime_type) = self.scalars_impl(tag, run, experiment, output_format)
File "/usr/local/lib/python3.7/site-packages/tensorboard/plugins/scalar/scalars_plugin.py", line 161, in scalars_impl
for tensor_event in tensor_events]
File "/usr/local/lib/python3.7/site-packages/tensorboard/plugins/scalar/scalars_plugin.py", line 161, in <listcomp>
for tensor_event in tensor_events]
ValueError: can only convert an array of size 1 to a Python scalar
``` | open | 2019-01-18T07:34:11Z | 2019-03-16T03:51:43Z | https://github.com/lanpa/tensorboardX/issues/339 | [
"wait for response"
] | TimZaman | 1 |
smarie/python-pytest-cases | pytest | 13 | Update the tests so that they run pytest (move to "meta-testing") | Basically we should copy what is done in [pytest-harvest](https://smarie.github.io/python-pytest-harvest/). Note that the continuous integration configuration should also be updated so as to run the correct matrix of pytest version x python version | closed | 2018-11-09T12:49:26Z | 2018-12-21T09:25:15Z | https://github.com/smarie/python-pytest-cases/issues/13 | [
"enhancement"
] | smarie | 1 |
ets-labs/python-dependency-injector | asyncio | 650 | Pass dict to provider | Hi, I have a dict in my Configuration that I want to pass to a provider, but I can't figure out how to do it.
```
container = Container()
container.wire(packages=[__name__])
container.configuration.from_dict({
    "provider1": {
        "config": {
            "key1": "value1",
            "key2": "value2"
        }
    }
})
```
```
from dependency_injector import containers, providers


class Container(containers.DeclarativeContainer):
    configuration = providers.Configuration()

    provider1 = providers.Singleton(
        Provider1,
        config=configuration.provider1.config  # I want to pass this as a dict
    )
``` | closed | 2022-12-19T14:15:14Z | 2022-12-19T14:38:39Z | https://github.com/ets-labs/python-dependency-injector/issues/650 | [] | santalvarez | 1 |
chatanywhere/GPT_API_free | api | 336 | TypeError: Client.__init__() got an unexpected keyword argument 'proxies' | closed | 2024-12-05T21:13:01Z | 2024-12-09T17:29:21Z | https://github.com/chatanywhere/GPT_API_free/issues/336 | [] | MarkLo127 | 3 |
|
sanic-org/sanic | asyncio | 2,984 | KEEP_ALIVE doesn't work from the 21.3 sanic version | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Describe the bug
When you set KEEP_ALIVE to False in the Sanic config, it doesn't work: you can still see "Connection": "keep-alive" in all responses.
I tried version 20.12.7 and Sanic works correctly, but in version 21.3.0 it doesn't work.
### Code snippet
```python
import asyncio

import uvloop
from sanic import Sanic
from sanic.response import text

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())

app = Sanic('appname')


@app.get('/status')
async def status_handler(request):
    return text("", status=200)


def main():
    app.config.KEEP_ALIVE = False
    app.run(host='0.0.0.0', port=8000)


if __name__ == '__main__':
    main()
```
### Expected Behavior
I expect the "Connection" header of the response to be "close" when KEEP_ALIVE=false is set in the config.
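A quick way to observe the header (a sketch; assumes `requests` is installed and the snippet above is running):

```python
import requests

# Inspect the Connection header returned by the /status handler above.
resp = requests.get("http://127.0.0.1:8000/status")
print(resp.headers.get("Connection"))  # prints "keep-alive" on >=21.3.0 even with KEEP_ALIVE=False
```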
### How do you run Sanic?
As a script (`app.run` or `Sanic.serve`)
### Operating System
Linux
### Sanic Version
>=21.3.0
### Additional context
_No response_ | open | 2024-07-10T12:04:11Z | 2024-10-04T13:09:13Z | https://github.com/sanic-org/sanic/issues/2984 | [
"bug"
] | ivancallefon | 1 |
modoboa/modoboa | django | 2,785 | A user can change their password without specifying the current one | It should not be allowed.
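For illustration, the kind of guard I would expect on the password form (a hypothetical Django sketch, not Modoboa's actual code):

```python
from django import forms


class PasswordChangeForm(forms.Form):
    # Hypothetical form: require the current password before accepting a new one.
    current_password = forms.CharField(widget=forms.PasswordInput)
    new_password = forms.CharField(widget=forms.PasswordInput)

    def __init__(self, user, *args, **kwargs):
        self.user = user
        super().__init__(*args, **kwargs)

    def clean_current_password(self):
        value = self.cleaned_data["current_password"]
        if not self.user.check_password(value):
            raise forms.ValidationError("Current password is incorrect")
        return value
```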
# Impacted versions
* Modoboa: 2.0.4
| closed | 2023-02-16T15:17:28Z | 2023-02-16T15:44:34Z | https://github.com/modoboa/modoboa/issues/2785 | [
"bug",
"security"
] | tonioo | 0 |
streamlit/streamlit | deep-learning | 10,361 | Support .env files for configuration | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Configuration options provided through the environment are ignored by streamlit and defaults are used instead.
This directly contradicts the intended behaviour:
- As described in the [configuration options](https://docs.streamlit.io/develop/concepts/configuration/options)
- As described per option when running `streamlit run --help`
### Reproducible Code Example
`.env`:
```
# ? Streamlit
STREAMLIT_GLOBAL_DEVELOPMENT_MODE=False
## ? Server variables
STREAMLIT_SERVER_HOST='localhost'
STREAMLIT_SERVER_PORT=80
STREAMLIT_SERVER_MAX_UPLOAD_SIZE=1000
STREAMLIT_SERVER_ENABLE_STATIC_SERVING=True
## ? Browser variables
STREAMLIT_BROWSER_SERVER_ADDRESS='localhost'
STREAMLIT_BROWSER_SERVER_PORT=80
STREAMLIT_BROWSER_GATHER_USAGE_STATS=False
## ? Theme variables
STREAMLIT_THEME_BASE="dark"
STREAMLIT_THEME_PRIMARY_COLOR="#5c49e1"
STREAMLIT_THEME_BACKGROUND_COLOR="#0E1117"
STREAMLIT_THEME_SECONDARY_BACKGROUND_COLOR="#373e75"
STREAMLIT_THEME_TEXT_COLOR="#FAFAFA"
```

`main.py`:
```python
import os

import streamlit as st
from dotenv import load_dotenv

load_dotenv()

options = [
    "global.developmentMode",
    "server.address",
    "server.port",
    "server.maxUploadSize",
    "server.enableStaticServing",
    "browser.serverAddress",
    "browser.serverPort",
    "browser.gatherUsageStats",
    "theme.base",
    "theme.primaryColor",
    "theme.backgroundColor",
    "theme.secondaryBackgroundColor",
    "theme.textColor",
]

st.header("Variables matching `STREAMLIT_*` in ENV")
env_matches = []
for entry in os.environ:
    if "STREAMLIT" in entry:
        env_matches.append(f"{entry}={os.getenv(entry)}")
st.write(env_matches)

st.header("Used variables, as read by `st.get_option()`")
option_matches = []
for option in options:
    option_matches.append(f"{option}={st.get_option(option)}")
st.write(option_matches)
```
### Steps To Reproduce
With the folder structure:
```
/error_to_reproduce
/main.py
/.env
```
1. Create folder and step into it: `mkdir error_to_reproduce && cd error_to_reproduce`
2. Copy contents under `.env` to `error_to_reproduce/.env` and `main.py` to `error_to_reproduce/main.py`
3. Create a `virtualenv` and activate it: `python -m venv venv && source venv/bin/activate`
4. Install dependencies: `pip3 install streamlit python-dotenv`
5. Execute application `streamlit run main.py`
### Expected Behavior
The expected behaviour would be for streamlit to use the supplied environment variables:
- As described in the [configuration options](https://docs.streamlit.io/develop/concepts/configuration/options)
- As described per option when running `streamlit run --help`
- See environment examples in image:

### Current Behavior
The current behaviour is that configuration options set in the environment are ignored in favor of the defaults. See the example using the reproduction steps provided below.

### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: `1.41.1`
- Python DotEnv version: `1.0.1`
- Python version: `3.10.12`
- Operating System: Linux
- Distribution `Pop!_OS 22.04 LTS x86_64`
- Kernel: `6.9.3-76060903-generic`
- Browser: Firefox `134.0.2`
### Additional Information
_No response_ | open | 2025-02-07T14:04:36Z | 2025-02-12T19:33:15Z | https://github.com/streamlit/streamlit/issues/10361 | [
"type:enhancement",
"feature:config"
] | str00bs | 13 |
Avaiga/taipy | automation | 1,995 | [🐛 BUG] Scenario Selector: "Delete" button is active/inactive when it shouldn't | ### What went wrong? 🤔
"Delete" button in scenario selector (when editing the scenario) is active when it should be inactive and vice-versa.

### Expected Behavior
They should be active (clickable) when the scenario can be deleted.
### Steps to Reproduce Issue
Run this code, create a scenario and edit the scenario in the scenario selector. You should see the "Delete" button.
```python
from taipy import Config, Frequency
import taipy as tp
import pandas as pd
import datetime as dt
data = pd.read_csv(
"https://raw.githubusercontent.com/Avaiga/taipy-getting-started-core/develop/src/daily-min-temperatures.csv"
)
# Normal function used by Taipy
def predict(
historical_temperature: pd.DataFrame, date_to_forecast: dt.datetime
) -> float:
print(f"Running baseline...")
historical_temperature["Date"] = pd.to_datetime(historical_temperature["Date"])
historical_same_day = historical_temperature.loc[
(historical_temperature["Date"].dt.day == date_to_forecast.day)
& (historical_temperature["Date"].dt.month == date_to_forecast.month)
]
return historical_same_day["Temp"].mean()
# Configuration of Data Nodes
historical_temperature_cfg = Config.configure_data_node("historical_temperature")
date_to_forecast_cfg = Config.configure_data_node("date_to_forecast")
predictions_cfg = Config.configure_data_node("predictions", storage_type="json")
# Configuration of the task (distinct name so the predictions data node above is not shadowed)
predict_task_cfg = Config.configure_task(
"predict",
predict,
[historical_temperature_cfg, date_to_forecast_cfg],
predictions_cfg,
)
# Configuration of scenario
scenario_cfg = Config.configure_scenario(
id="my_scenario", task_configs=[predictions_cfg], frequency=Frequency.MONTHLY
)
if __name__ == "__main__":
# Run of the Core
tp.Core().run()
# Creation of the scenario and execution
scenario = tp.create_scenario(scenario_cfg)
scenario.historical_temperature.write(data)
scenario.date_to_forecast.write(dt.datetime.now())
tp.submit(scenario)
scenario_md = """
<|{scenario}|scenario_selector|>
<|{scenario}|scenario|>
"""
tp.Gui(scenario_md).run()
```
### Screenshots

### Version of Taipy
4.0.0
### Acceptance Criteria
- [ ] Ensure new code is unit tested, and check code coverage is at least 90%.
- [ ] Create related issue in taipy-doc for documentation and Release Notes.
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | closed | 2024-10-09T15:22:03Z | 2024-11-07T09:23:49Z | https://github.com/Avaiga/taipy/issues/1995 | [
"Core",
"🖰 GUI",
"💥Malfunction",
"🟨 Priority: Medium",
"🔒 Staff only"
] | FlorianJacta | 2 |
reloadware/reloadium | flask | 3 | [M1 Chip, macOS] It seems like your platform or Python version are not supported yet. | I get this:
(venv) xxx/venv/bin/python -m reloadium pydev_proxy /Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/pydevd.py --multiprocess --save-signatures --qt-support=auto --client 127.0.0.1 --port 55863 --file xx/__init__.py --infer_schema
It seems like your platform or Python version are not supported yet.
Please submit a github issue to let us know at https://github.com/reloadware/reloadium
I'm trying to debug a module. | closed | 2022-04-23T00:26:15Z | 2022-09-05T15:39:11Z | https://github.com/reloadware/reloadium/issues/3 | [] | siilats | 18 |
2noise/ChatTTS | python | 805 | Is the Transformer Engine version of the LLaMA model not implemented yet? Are there plans to develop it? | open | 2024-10-28T11:33:17Z | 2024-10-30T13:36:09Z | https://github.com/2noise/ChatTTS/issues/805 | [
"documentation",
"wontfix",
"performance"
] | sundoon | 1 |
|
pytest-dev/pytest-qt | pytest | 292 | Segmentation Fault - wait_signal.wait & _WaitWidgetContextManager.__exit__ | Hi,
With Python 3.8 I am getting a segmentation fault in some wait functions on macOS and Windows. I could not reproduce it on Linux.
With Python 3.7 and 2.7 everything works fine for all OS flavors.
## Versions:
OS: macOS 10.14.6
Qt: 5.12.5 (conda-forge)
PyQt: 5.12.3 (conda-forge)
pytest-qt: master
pytest: 5.4.1 (conda-forge)
## Stacktrace - wait_signal.wait
For this particular test I am using `qtbot.waitSignal`.
```
Fatal Python error: Segmentation fault
Current thread 0x00000001098155c0 (most recent call first):
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pytestqt/wait_signal.py", line 51 in wait
File "/Users/slepicka/sandbox/git-slaclab/pydm-git/pydm/tests/widgets/test_rules.py", line 97 in test_rules_full
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/python.py", line 184 in pytest_pyfunc_call
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/python.py", line 1479 in runtest
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 135 in pytest_runtest_call
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 217 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 244 in from_call
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 216 in call_runtest_hook
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 186 in call_and_report
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 100 in runtestprotocol
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/runner.py", line 85 in pytest_runtest_protocol
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 272 in pytest_runtestloop
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 247 in _main
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 191 in wrap_session
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/main.py", line 240 in pytest_cmdline_main
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/Users/slepicka/mc/envs/py38/lib/python3.8/site-packages/_pytest/config/__init__.py", line 124 in main
File "run_tests.py", line 21 in <module>
Segmentation fault: 11
```
## Stacktrace - _WaitWidgetContextManager.__exit__
In this case my test was using `qtbot.waitExposed`.
```
Current thread 0x000000010f0e35c0 (most recent call first):
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pytestqt/qtbot.py", line 723 in __exit__
File "/Users/runner/runners/2.165.2/work/1/s/pydm/tests/widgets/test_drawing.py", line 508 in test_pydmdrawingline_draw_item
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/python.py", line 184 in pytest_pyfunc_call
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/python.py", line 1479 in runtest
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 135 in pytest_runtest_call
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 217 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 244 in from_call
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 216 in call_runtest_hook
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 186 in call_and_report
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 100 in runtestprotocol
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/runner.py", line 85 in pytest_runtest_protocol
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/main.py", line 272 in pytest_runtestloop
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/main.py", line 247 in _main
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/main.py", line 191 in wrap_session
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/main.py", line 240 in pytest_cmdline_main
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/callers.py", line 187 in _multicall
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 84 in <lambda>
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/manager.py", line 93 in _hookexec
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/pluggy/hooks.py", line 286 in __call__
File "/usr/local/miniconda/envs/test-environment-3.8/lib/python3.8/site-packages/_pytest/config/__init__.py", line 124 in main
File "run_tests.py", line 21 in <module>
/Users/runner/runners/2.165.2/work/_temp/7e5f89e4-e30b-455e-83d2-e2695b1f105e.sh: line 2: 955 Segmentation fault: 11
``` | closed | 2020-04-01T16:50:56Z | 2020-05-08T22:17:20Z | https://github.com/pytest-dev/pytest-qt/issues/292 | [] | hhslepicka | 6 |
fohrloop/dash-uploader | dash | 76 | Multi-file upload: All files are uploaded to the same folder (subfolders are not created) | As the title says, when using the multi-file upload functionality, all the uploaded files end up in the same folder. The uploader does not create subfolders.
- Affected dash-uploader version: 0.7.0dev 279ee87
### Todo
- [ ] Check if this can be fixed on Chrome
- [ ] Check if this can be fixed on other browsers
- [ ] If not, check if an alert could be shown to users with unsupported browsers
| open | 2022-02-27T17:33:58Z | 2022-04-25T12:36:18Z | https://github.com/fohrloop/dash-uploader/issues/76 | [] | fohrloop | 2 |
youfou/wxpy | api | 315 | An error is raised when listening to group messages | Exception in thread _listen:
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
    self.run()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 864, in run
    self._target(*self._args, **self._kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/bot.py", line 504, in _listen
    msg = Message(self.core.msgList.get(timeout=0.5), self)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/messages/message.py", line 49, in __init__
    setattr(self, 'reply' + method, getattr(self.chat, 'send' + method))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/messages/message.py", line 319, in chat
    return self.sender
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/messages/message.py", line 329, in sender
    return self._get_chat_by_user_name(self.raw.get('FromUserName'))
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/messages/message.py", line 379, in _get_chat_by_user_name
    _chat = match_in_chats(self.bot.groups())
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/utils/misc.py", line 96, in wrapped
    ret = Groups(ret)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/groups.py", line 35, in __init__
    if group.bot.self in group:
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/group.py", line 42, in __contains__
    for member in self.members:
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/group.py", line 36, in members
    raw_member_list() or raw_member_list(True)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/group.py", line 30, in raw_member_list
    self.update_group()
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/group.py", line 112, in update_group
    super(Group, self).__init__(do(), self.bot)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/utils/misc.py", line 67, in wrapped
    ret = func(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/wxpy/api/chats/group.py", line 110, in do
    return self.bot.core.update_chatroom(self.user_name, members_details)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/itchat/components/contact.py", line 74, in update_chatroom
    update_local_chatrooms(self, chatroomList)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/itchat/storage.py", line 11, in _contact_change
    return fn(core, *args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/itchat/components/contact.py", line 154, in update_local_chatrooms
    'UserName', oldChatroom['ChatRoomOwner']).get('Uin', 0)
AttributeError: 'NoneType' object has no attribute 'get' | open | 2018-07-06T13:23:41Z | 2019-05-20T07:01:56Z | https://github.com/youfou/wxpy/issues/315 | [] | yanweitao | 3 |
localstack/localstack | python | 12,350 | feature request: Aws eks add aws-ebs-csi-driver | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Feature description
We need support for this EKS add-on, since Prometheus will not work without it: per the documentation, from Kubernetes 1.23 onward Prometheus needs the aws-ebs-csi-driver installed in EKS. A sketch of the desired call is below.
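```python
import boto3

# Sketch of the call we'd like LocalStack to support; the endpoint is the
# default LocalStack edge port and the cluster name is a placeholder.
eks = boto3.client("eks", endpoint_url="http://localhost:4566", region_name="us-east-1")
eks.create_addon(clusterName="my-cluster", addonName="aws-ebs-csi-driver")
```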
### 🧑💻 Implementation
_No response_
### Anything else?
_No response_ | open | 2025-03-06T15:22:34Z | 2025-03-13T06:57:05Z | https://github.com/localstack/localstack/issues/12350 | [
"type: feature",
"aws:eks",
"status: backlog"
] | ashelar4 | 1 |
FactoryBoy/factory_boy | sqlalchemy | 500 | Mutually Exclusive Traits | #### The problem
Traits are an awesome feature, but sometimes I build a factory where there are several traits that should never be used with each other. While I understand that it is up to the user to make sure people don't do something dumb with their factories and traits, and that documentation should be present to prevent people from using them incorrectly, a functional solution would also be great.
#### Proposed solution
Allow setting traits to False inside of each other. I believe this currently kind of exists, but if you set two different traits to False inside each other, executing the factory throws an error indicating that there is a loop in the trait calls. I suggest updating the logic of that check to detect when a trait is negating another one instead of activating it, and to allow that through, overriding whatever was passed on the actual factory call.
#### Extra notes
The traits are executed in alphabetical order as normal, and the value of a trait passed as a kwarg on the factory call is overridden by the value from the other trait. For example, the following factory sets the user's type to 'A' every time with the call made below, because the admin trait is evaluated first and prevents the evaluation of the user trait.
Sample factory
```
import random

import factory


class UserFactory(ModelFactory):  # ModelFactory: the project-specific base factory
    type = factory.LazyAttribute(lambda obj: random.choice(['A', 'U']))

    class Params:
        admin = factory.Trait(
            type='A',
            user=False,
        )
        user = factory.Trait(
            type='U',
            admin=False,
        )
```
Sample call
```
user = UserFactory(user=True, admin=True)
assert user.type == 'A'
``` | open | 2018-08-09T21:31:42Z | 2018-08-20T13:13:36Z | https://github.com/FactoryBoy/factory_boy/issues/500 | [
"Feature",
"DesignDecision"
] | stephenross | 2 |
apify/crawlee-python | web-scraping | 822 | tiered proxy documentation or coding error involving 'None' | Please scroll down to the tiered proxy section on [this](https://crawlee.dev/python/docs/guides/proxy-management) page.
Both tabbed code examples include the following:
```
# No proxy tier. (Not needed, but optional in case you do not want to use any proxy on lowest tier.)
[None],
```
However, the Pydantic validator rejects the None value in the list.
Something needs to be fixed. I’m not sure whether the sample code in the documentation is incorrect or if the validation code has an issue.
Here is a related [closed issue](https://github.com/apify/crawlee-python/issues/687).
Sample code:
```
import asyncio
from crawlee.proxy_configuration import ProxyConfiguration
async def config_proxy() -> ProxyConfiguration:
# Create and return a ProxyConfiguration object.
proxy_configuration = ProxyConfiguration(
tiered_proxy_urls=[
# No proxy tier. (Not needed, but optional in case you do not want to use any proxy on lowest tier.)
[None],
# lower tier, cheaper, preferred as long as they work
['http://example.com:8080', 'https://example.com:8080'],
# higher tier, more expensive, used as a fallback
['http://domain.com:8080', 'https://domain.com:8080'],
]
)
return proxy_configuration
asyncio.run(config_proxy())
```
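As a stopgap, dropping the `[None]` tier entirely passes validation (my workaround, not necessarily the intended fix):

```python
from crawlee.proxy_configuration import ProxyConfiguration

# Workaround sketch: omit the no-proxy tier so every entry is a real URL.
proxy_configuration = ProxyConfiguration(
    tiered_proxy_urls=[
        ['http://example.com:8080', 'https://example.com:8080'],
        ['http://domain.com:8080', 'https://domain.com:8080'],
    ]
)
```

The terminal output below is from running the original sample above as-is.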
Terminal output:
```
/Users/matecsaj/PycharmProjects/wat-crawlee/venv/bin/python /Users/matecsaj/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_1.py
Traceback (most recent call last):
File "/Users/matecsaj/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_1.py", line 19, in <module>
asyncio.run(config_proxy())
~~~~~~~~~~~^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 194, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/asyncio/base_events.py", line 720, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "/Users/matecsaj/Library/Application Support/JetBrains/PyCharm2024.3/scratches/scratch_1.py", line 6, in config_proxy
proxy_configuration = ProxyConfiguration(
tiered_proxy_urls=[
...<6 lines>...
]
)
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/crawlee/proxy_configuration.py", line 103, in __init__
[[URL(url) for url in tier if self._url_validator.validate_python(url)] for tier in tiered_proxy_urls]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^
File "/Users/matecsaj/PycharmProjects/wat-crawlee/venv/lib/python3.13/site-packages/pydantic/type_adapter.py", line 412, in validate_python
return self.validator.validate_python(
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
object,
^^^^^^^
...<3 lines>...
allow_partial=experimental_allow_partial,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
pydantic_core._pydantic_core.ValidationError: 1 validation error for function-wrap[wrap_val()]
URL input should be a string or URL [type=url_type, input_value=None, input_type=NoneType]
For further information visit https://errors.pydantic.dev/2.10/v/url_type
Process finished with exit code 1
``` | closed | 2024-12-16T01:36:29Z | 2024-12-16T15:24:27Z | https://github.com/apify/crawlee-python/issues/822 | [
"t-tooling"
] | matecsaj | 1 |
matterport/Mask_RCNN | tensorflow | 2,739 | Training process gets stuck at the start and fails to launch | Hi there,
I am using
`Ubuntu 18.04`
`Tensorflow 1.15`
`Cuda 10.0`
`cuDNN 7.6.5`
The `tf.test.is_gpu_available()` returns True.
The whole program just gets stuck when I try to train the model no matter how small the model is.
Sometimes, after waiting for minutes, I get the exception
`InternalError: (0) Internal: Blas GEMM launch failed`
I know this error occurs when there are multiple sessions running at the same time, but I have exactly one session.
I have been tortured by this for days; can someone give me some advice? | open | 2021-12-11T01:43:45Z | 2021-12-11T07:35:05Z | https://github.com/matterport/Mask_RCNN/issues/2739 | [] | MulongXie | 1 |
FujiwaraChoki/MoneyPrinter | automation | 114 | Getting CORS Error | When I run the code, there is a CORS error, shown below:
Access to fetch at 'http://127.0.0.1:8080/api/cancel' from origin 'http://0.0.0.0:3000' has been blocked by CORS policy: The request client is not a secure context and the resource is in more-private address space `local`.
| closed | 2024-02-09T07:56:49Z | 2024-02-09T08:03:00Z | https://github.com/FujiwaraChoki/MoneyPrinter/issues/114 | [] | bishal-dd | 0 |
twelvedata/twelvedata-python | matplotlib | 21 | [Bug] SyntaxError: invalid syntax after installing the library | I'm getting SyntaxError: invalid syntax after installing the library
Traceback (most recent call last):
File "script.py", line 1, in <module>
import twelvedata
File "/Users/work/Library/Python/2.7/lib/python/site-packages/twelvedata/__init__.py", line 3, in <module>
from .client import TDClient
File "/Users/work/Library/Python/2.7/lib/python/site-packages/twelvedata/client.py", line 2, in <module>
from .endpoints import (
File "/Users/work/Library/Python/2.7/lib/python/site-packages/twelvedata/endpoints.py", line 113
def get_symbol(symbol) -> (str, bool):
^
SyntaxError: invalid syntax
**To Reproduce**
Steps to reproduce the behavior:
1- pip install twelvedata
2- in script.py: from twelvedata import TDClient
3- run python script.py
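My guess from the `Library/Python/2.7` path in the traceback: pip installed the package into Python 2, which cannot parse the `->` annotation at `endpoints.py`, line 113. A quick check (run with the same interpreter as step 3):

```python
import sys

print(sys.version)  # if this starts with 2.x, retry with: python3 script.py
```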
| closed | 2020-11-03T06:16:20Z | 2020-11-11T21:21:18Z | https://github.com/twelvedata/twelvedata-python/issues/21 | [] | MyNameIsAlaa | 2 |
tqdm/tqdm | pandas | 632 | Print in different Line in Zeppelin | - [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
>>> 4.24.0 3.5.2 (default, Nov 23 2017, 16:37:01)
>>> [GCC 5.4.0 20160609] linux
```
The issue is that the library does not work in Zeppelin (>= 0.7). Indeed, it just writes each progress bar on a different line.
I should have mentioned that we can set the progress bar manually in Zeppelin 0.8.0, like [this](https://github.com/apache/zeppelin/pull/2454).
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2018-10-24T11:51:17Z | 2018-11-22T14:55:32Z | https://github.com/tqdm/tqdm/issues/632 | [
"p4-enhancement-future 🧨",
"submodule-notebook 📓"
] | dimoibiehg | 4 |
deeppavlov/DeepPavlov | tensorflow | 1,362 | Classifier Classes do not have unified interfaces | `.classes`, e.g. of BertClassifier
BertClassifier and KerasClassifier do not share the list of public fields (e.g., the `.classes` field of a classifier). | closed | 2020-12-22T14:40:00Z | 2023-07-06T12:10:41Z | https://github.com/deeppavlov/DeepPavlov/issues/1362 | [
"bug"
] | oserikov | 0 |
opengeos/leafmap | jupyter | 559 | Add support for PMTiles | The source code is adapted from [folium-pmtiles](https://github.com/jtmiclat/folium-pmtiles). For now, it only supports folium. It would be great to support ipyleaflet as well.
```python
import folium
from folium.elements import JSCSSMixin
from folium.map import Layer
from jinja2 import Template
class PMTilesMapLibreLayer(JSCSSMixin, Layer):
"""Based of
https://github.com/python-visualization/folium/blob/56d3665fdc9e7280eae1df1262450e53ec4f5a60/folium/plugins/vectorgrid_protobuf.py
"""
_template = Template(
"""
{% macro script(this, kwargs) -%}
let protocol = new pmtiles.Protocol();
maplibregl.addProtocol("pmtiles", protocol.tile);
{{ this._parent.get_name() }}.createPane('overlay');
{{ this._parent.get_name() }}.getPane('overlay').style.zIndex = 650;
{{ this._parent.get_name() }}.getPane('overlay').style.pointerEvents = 'none';
var {{ this.get_name() }} = L.maplibreGL({
pane: 'overlay',
style: {{ this.style|tojson}}
}).addTo({{ this._parent.get_name() }});
{%- endmacro %}
"""
)
default_css = [
("maplibre_css", "https://unpkg.com/maplibre-gl@2.4.0/dist/maplibre-gl.css")
]
default_js = [
("pmtiles", "https://unpkg.com/pmtiles@2.7.1/dist/index.js"),
("maplibre-lib", "https://unpkg.com/maplibre-gl@2.2.1/dist/maplibre-gl.js"),
(
"maplibre-leaflet",
"https://unpkg.com/@maplibre/maplibre-gl-leaflet@0.0.19/leaflet-maplibre-gl.js",
),
]
def __init__(self, url, layer_name=None, style=None, **kwargs):
self.layer_name = layer_name if layer_name else "PMTilesVector"
super().__init__(name=self.layer_name, **kwargs)
self.url = url
self._name = "PMTilesVector"
if style is not None:
self.style = style
else:
self.style = {}
m = folium.Map(location=[43.7798, 11.24148], zoom_start=13)
pmtiles_url = "https://open.gishub.org/data/pmtiles/protomaps_firenze.pmtiles"
pmtiles_layer = PMTilesMapLibreLayer(
"folium_layer_name",
overlay=True,
style={
"version": 8,
"sources": {
"example_source": {
"type": "vector",
"url": "pmtiles://" + pmtiles_url,
"attribution": '<a href="https://protomaps.com">Protomaps</a> © <a href="https://openstreetmap.org/copyright">OpenStreetMap</a>',
}
},
"layers": [
{
"id": "buildings",
"source": "example_source",
"source-layer": "landuse",
"type": "fill",
"paint": {"fill-color": "steelblue"},
},
{
"id": "roads",
"source": "example_source",
"source-layer": "roads",
"type": "line",
"paint": {"line-color": "black"},
},
],
},
)
m.add_child(pmtiles_layer)
folium.LayerControl().add_to(m)
m
```

| closed | 2023-09-20T16:14:39Z | 2023-09-20T19:22:27Z | https://github.com/opengeos/leafmap/issues/559 | [
"Feature Request"
] | giswqs | 2 |
microsoft/nni | data-science | 4,766 | I met an unknown error when trying the MNIST NAS example. | **Describe the issue**:
[2022-04-13 19:34:12] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: cau4bj2g
[2022-04-13 19:34:12] INFO (nni.experiment/MainThread) Connecting IPC pipe...
[2022-04-13 19:34:13] INFO (nni.experiment/MainThread) Starting web server...
[2022-04-13 19:34:15] INFO (nni.experiment/MainThread) Setting up...
[2022-04-13 19:34:17] INFO (nni.runtime.msg_dispatcher_base/Thread-3) Dispatcher started
[2022-04-13 19:34:17] INFO (nni.retiarii.experiment.pytorch/MainThread) Web UI URLs: http://169.254.240.252:8081 http://169.254.175.234:8081 http://169.254.108.7:8081 http://169.254.237.19:8081 http://192.168.195.1:8081 http://192.168.75.1:8081 http://192.168.160.7:8081 http://169.254.27.48:8081 http://127.0.0.1:8081
[2022-04-13 19:34:17] INFO (nni.retiarii.experiment.pytorch/MainThread) Start strategy...
[2022-04-13 19:34:21] INFO (root/MainThread) Successfully update searchSpace.
[2022-04-13 19:34:21] INFO (nni.retiarii.strategy.bruteforce/MainThread) Random search running in fixed size mode. Dedup: on.
Exception in thread Thread-3:
Traceback (most recent call last):
File "D:\anaconda_home\envs\nni_test\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "D:\anaconda_home\envs\nni_test\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "D:\anaconda_home\envs\nni_test\lib\site-packages\nni\runtime\msg_dispatcher_base.py", line 51, in run
command, data = receive()
File "D:\anaconda_home\envs\nni_test\lib\site-packages\nni\runtime\protocol.py", line 61, in receive
header = _in_file.read(16)
OSError: [Errno 22] Invalid argument
**Environment**:
- NNI version: 2.6.1
- Training service (local|remote|pai|aml|etc): local
- Client OS: Windows
- Server OS (for remote mode only):
- Python version: 3.6
- PyTorch/TensorFlow version: 1.10
- Is conda/virtualenv/venv used?: yes
- Is running in Docker?:
| Package | Version |
| --- | --- |
| absl-py | 1.0.0 |
| bzip2 | 1.0.8 |
| ca-certificates | 2021.10.8 |
| cachetools | 4.2.4 |
| certifi | 2021.10.8 |
| charset-normalizer | 2.0.12 |
| cloudpickle | 2.0.0 |
| colorama | 0.4.4 |
| contextlib2 | 21.6.0 |
| cudatoolkit | 9.2.148 |
| dill | 0.3.4 |
| filelock | 3.3.2 |
| google-auth | 2.6.2 |
| google-auth-oauthlib | 0.4.6 |
| grpcio | 1.45.0 |
| idna | 3.3 |
| importlib-resources | 5.4.0 |
| json-tricks | 3.15.5 |
| libffi | 3.4.2 |
| libzlib | 1.2.11 |
| markdown | 3.3.6 |
| networkx | 2.5.1 |
| nni | 2.6.1 |
| numpy | 1.19.3 |
| oauthlib | 3.2.0 |
| openssl | 3.0.2 |
| pandas | 1.1.5 |
| pillow | 8.4.0 |
| pip | 21.3.1 |
| protobuf | 3.19.4 |
| pyasn1 | 0.4.8 |
| pyasn1-modules | 0.2.8 |
| python | 3.6.15 |
| python-dateutil | 2.8.2 |
| pytz | 2022.1 |
| requests | 2.27.1 |
| requests-oauthlib | 1.3.1 |
| rsa | 4.8 |
| schema | 0.7.5 |
| scipy | 1.5.4 |
| setuptools | 59.6.0 |
| six | 1.16.0 |
| sqlite | 3.38.2 |
| tensorboard | 2.8.0 |
| tensorboard-data-server | 0.6.1 |
| tensorboard-plugin-wit | 1.8.1 |
| thop | 0.0.31-2005241907 |
| tk | 8.6.12 |
| torch | 1.10.0+cpu |
| torchaudio | 0.10.0+cu113 |
| torchvision | 0.11.0+cpu |
| tqdm | 4.63.0 |
| typeguard | 2.13.3 |
| typing-extensions | 4.1.1 |
| ucrt | 10.0.20348.0 |
| urllib3 | 1.26.9 |
| vc | 14.2 |
| vs2015_runtime | 14.29.30037 |
| wcwidth | 0.2.5 |
| werkzeug | 2.0.3 |
| wheel | 0.37.1 |
| xz | 5.2.5 |
| zipp | 3.6.0 |
**Log message**:
- nnimanager.log:
[2022-04-13 20:17:33] INFO (NNIManager) Trial job x70fi status changed from WAITING to RUNNING
[2022-04-13 20:17:41] INFO (NNIManager) Trial job x70fi status changed from RUNNING to FAILED
- dispatcher.log:
[2022-04-13 20:16:58] INFO (nni.experiment/MainThread) Creating experiment, Experiment ID: bntk1sfo
[2022-04-13 20:16:58] INFO (nni.experiment/MainThread) Connecting IPC pipe...
[2022-04-13 20:17:00] INFO (nni.experiment/MainThread) Starting web server...
[2022-04-13 20:17:02] INFO (nni.experiment/MainThread) Setting up...
[2022-04-13 20:17:04] INFO (nni.runtime.msg_dispatcher_base/Thread-3) Dispatcher started
[2022-04-13 20:17:04] INFO (nni.retiarii.experiment.pytorch/MainThread) Web UI URLs: http://169.254.240.252:8081 http://169.254.175.234:8081 http://169.254.108.7:8081 http://169.254.237.19:8081 http://192.168.195.1:8081 http://192.168.75.1:8081 http://192.168.160.7:8081 http://169.254.27.48:8081 http://127.0.0.1:8081
[2022-04-13 20:17:04] INFO (nni.retiarii.experiment.pytorch/MainThread) Start strategy...
[2022-04-13 20:17:08] INFO (root/MainThread) Successfully update searchSpace.
[2022-04-13 20:17:08] INFO (nni.retiarii.strategy.bruteforce/MainThread) Random search running in fixed size mode. Dedup: on.
- nnictl stdout and stderr:
stdout:
ERROR: C:\Users\suntao\nni-experiments\bntk1sfo\log\nnictl_stdout.log does not exist!
stderr:
ERROR: C:\Users\suntao\nni-experiments\bntk1sfo\log\nnictl_stderr.log does not exist!
| closed | 2022-04-13T12:26:26Z | 2022-09-09T03:12:35Z | https://github.com/microsoft/nni/issues/4766 | [
"Training Service"
] | sun-tao | 2 |
xinntao/Real-ESRGAN | pytorch | 745 | Fine-tuning RealESRGAN from a RealESRNet base: mode collapse as the iteration count increases? | Hello,
While training the GAN model, as the number of iterations grows, the generated super-resolution images show fairly severe color shifts. Are the GAN training parameters just the defaults in the repo's yml file? Did you ever run into a similar problem during training?
**Results**
Low resolution:

finetune 10000 iterations:

finetune 100000 iterations:

finetune 260000 iterations:

| open | 2024-02-03T07:58:44Z | 2025-02-21T07:48:29Z | https://github.com/xinntao/Real-ESRGAN/issues/745 | [] | ANYMS-A | 4 |
allenai/allennlp | pytorch | 4,963 | Include evaluation metrics of officially-supported models in model cards | I think it'd be useful for the model cards of officially-supported models to report the evaluation metrics / model performance. I know I can check for myself with `allennlp evaluate`, but it's nice to have this sort of information on hand (e.g., in this case I want to see whether a model is at roughly the published performance). | closed | 2021-02-06T02:30:17Z | 2021-03-02T01:04:57Z | https://github.com/allenai/allennlp/issues/4963 | [
"Feature request"
] | nelson-liu | 3 |
matterport/Mask_RCNN | tensorflow | 2,730 | After I trained the model for text recognition, it confuses the characters 6 and 9. | After I trained the model for text recognition, it confuses the characters 6 and 9.
Could anyone tell me the reason? | closed | 2021-12-03T04:09:04Z | 2021-12-05T17:18:44Z | https://github.com/matterport/Mask_RCNN/issues/2730 | [] | leekwunfung817 | 3 |
marimo-team/marimo | data-science | 3,722 | TypeScript / JavaScript Support | ### Description
Hello,
I recently discovered marimo and am impressed with its reactive programming model and improved notebook experience. However, our team heavily relies on TypeScript alongside Python in our development workflow. Currently, we use Jupyter Notebooks with the [tslab TypeScript kernel](https://github.com/yunabe/tslab), which has been essential for our development process.
After reviewing the documentation, I couldn't find any information about support for other programming languages, particularly TypeScript. I understand this might be architecturally challenging given marimo's current design, but I believe TypeScript support would be valuable for several reasons.
I'd appreciate any insights into whether this kind of language support is possible within marimo's architecture, and if so, whether it's being considered for future development.
Would love to hear your thoughts on this potential feature. Happy to provide more details about our use case if helpful.
### Suggested solution
### Support JavaScript / TypeScript
## Business Impact & Use Cases:
1. Supports teams transitioning from Jupyter+tslab who want to leverage marimo's improved features.
2. Allows for prototyping and testing TypeScript code in the same interactive environment as Python.
3. Facilitates documentation and knowledge sharing in mixed Python/TypeScript codebases.
### Alternative
_No response_
### Additional context
Technical Context:
- Current solution: Jupyter Notebooks + tslab TypeScript kernel
- Desired: Similar TypeScript execution capabilities within marimo's reactive environment | closed | 2025-02-08T00:17:37Z | 2025-02-08T01:41:34Z | https://github.com/marimo-team/marimo/issues/3722 | [
"enhancement"
] | keremnalbant | 1 |
vllm-project/vllm | pytorch | 14,669 | [Bug]: ROCm fails to build due to a compilation error in `moe_wna16.cu` | ### Your current environment
<details>
<summary>The output of `python collect_env.py`</summary>
```text
INFO 03-12 09:10:06 [__init__.py:256] Automatically detected platform rocm.
Collecting environment information...
PyTorch version: 2.7.0a0+git6c0e746
Is debug build: False
CUDA used to build PyTorch: N/A
ROCM used to build PyTorch: 6.3.42133-1b9c17779
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: 18.0.0git (https://github.com/RadeonOpenCompute/llvm-project roc-6.3.1 24491 1e0fda770a2079fbd71e4b70974d74f62fd3af10)
CMake version: version 3.31.6
Libc version: glibc-2.35
Python version: 3.12.9 (main, Feb 5 2025, 08:49:00) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-5.15.0-116-generic-x86_64-with-glibc2.35
GPU models and configuration: AMD Instinct MI300X (gfx942:sramecc+:xnack-)
Nvidia driver version: Could not collect
cuDNN version: Could not collect
HIP runtime version: 6.3.42133
MIOpen runtime version: 3.3.0
Is XNNPACK available: True
PYTORCH_ROCM_ARCH=gfx942
LD_LIBRARY_PATH=/usr/local/lib/python3.12/dist-packages/cv2/../../lib64:/opt/rocm/lib:/usr/local/lib:
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1
CUDA_MODULE_LOADING=LAZY
```
</details>
### 🐛 Describe the bug
When compiling the latest vLLM commit on ROCm, it gives the following error.

The error comes from the compilation of CUDA-only kernels, introduced in https://github.com/vllm-project/vllm/commit/90e88ab75632745c137647bf710d63997529fb89.
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-12T09:12:25Z | 2025-03-12T21:01:48Z | https://github.com/vllm-project/vllm/issues/14669 | [
"bug"
] | tjtanaa | 1 |
Colin-b/pytest_httpx | pytest | 13 | Document migration from aioresponses | For those using aioresponses to mock aiohttp | closed | 2020-03-25T12:50:14Z | 2020-08-13T12:20:34Z | https://github.com/Colin-b/pytest_httpx/issues/13 | [
"documentation"
] | Colin-b | 1 |
albumentations-team/albumentations | machine-learning | 1,631 | [Tech debt] Improve interface for RandomSunFlare | Right now in the transform we have separate parameters for `angle_lower`, `angle_upper`, `num_flare_circles_lower`, `num_flare_circles_upper`
Better would be to have:
- `num_flare_circles_range = [num_flare_circles_lower, num_flare_circles_upper]`
- `angle_range = [angle_lower, angle_upper]`
=>
We can update the transform to use the new signature and keep the old parameters working but marked as deprecated.
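A sketch of such a shim (parameter names are from this issue; the defaults and the rest of the real `__init__` signature are placeholders):

```python
import warnings

def __init__(self, angle_range=(0, 1), num_flare_circles_range=(6, 10),
             angle_lower=None, angle_upper=None,
             num_flare_circles_lower=None, num_flare_circles_upper=None):
    # Old-style arguments still work, but emit a deprecation warning and are
    # folded into the new range parameters.
    if angle_lower is not None or angle_upper is not None:
        warnings.warn("angle_lower/angle_upper are deprecated; use angle_range",
                      DeprecationWarning)
        angle_range = (
            angle_lower if angle_lower is not None else angle_range[0],
            angle_upper if angle_upper is not None else angle_range[1],
        )
    if num_flare_circles_lower is not None or num_flare_circles_upper is not None:
        warnings.warn("num_flare_circles_lower/num_flare_circles_upper are deprecated; "
                      "use num_flare_circles_range", DeprecationWarning)
        num_flare_circles_range = (
            num_flare_circles_lower if num_flare_circles_lower is not None else num_flare_circles_range[0],
            num_flare_circles_upper if num_flare_circles_upper is not None else num_flare_circles_range[1],
        )
    self.angle_range = angle_range
    self.num_flare_circles_range = num_flare_circles_range
```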
----
PR could be similar to https://github.com/albumentations-team/albumentations/pull/1704 | closed | 2024-04-05T18:39:49Z | 2024-06-22T02:47:54Z | https://github.com/albumentations-team/albumentations/issues/1631 | [
"good first issue",
"Tech debt"
] | ternaus | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 1,152 | AI separation models don't support audio files with surround sound. | Hey UVR devs,
I know you probably don't make the separation models yourselves, but you should know that the models don't support audio files with surround sound. What you do with this information is up to you, I ain't telling you what to do (but a warning [or fix :o] would be nice I guess). Thank you for the program, it's absolutely fantastic.
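In the meantime, downmixing to stereo before separation works around it (a sketch with placeholder file names; assumes ffmpeg is on PATH):

```python
import subprocess

# Downmix a surround-sound file to stereo so the separation models accept it.
subprocess.run(["ffmpeg", "-i", "input_5.1.flac", "-ac", "2", "stereo.wav"], check=True)
```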
Crash report in the picture below
 | open | 2024-02-06T00:04:14Z | 2024-02-06T00:04:14Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1152 | [] | riyvk | 0 |
deepspeedai/DeepSpeed | deep-learning | 7,028 | [BUG] `import deepspeed` crashes on `deepspeed==0.16.3` with `triton==3.2.0` on CPU machine | **Describe the bug**
- [deepspeed uses the @triton.autotuner decorator](https://github.com/deepspeedai/DeepSpeed/blob/22d7fdc0f444571131d77ab13be858b5118770ef/deepspeed/ops/transformer/inference/triton/triton_matmul_kernel.py#L51), which leads to the autotuner being initialized when `import deepspeed` happens
- in triton 3.2.0, they add [logic to the autotuner](https://github.com/triton-lang/triton/blob/7685e96ae428e1de5750d4c171d78b9bfdf1e73b/python/triton/runtime/autotuner.py#L126) that leads to [a check for torch.cuda.is_available()](https://github.com/triton-lang/triton/blob/7685e96ae428e1de5750d4c171d78b9bfdf1e73b/python/triton/runtime/driver.py#L5) in the autotuner constructor
Before this change in triton, it was safe to import deepspeed on a CPU machine.
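A minimal sketch isolating the trigger, based on my reading of the traceback below (the kernel itself is a placeholder):

```python
import triton

# On a CPU-only machine with triton==3.2.0, merely applying the decorator at
# import time constructs the Autotuner, which calls
# driver.active.get_benchmarker() and raises "0 active drivers".
@triton.autotune(configs=[triton.Config({}, num_warps=4)], key=[])
@triton.jit
def _noop_kernel(x_ptr):
    pass
```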
**To Reproduce**
Running `import deepspeed` on a CPU machine leads to the following error message:
```
>>> import deepspeed
[2025-02-12 18:28:06,516] [WARNING] [real_accelerator.py:181:get_accelerator] Setting accelerator to CPU. If you have GPU or other accelerator, we were unable to detect it.
[2025-02-12 18:28:06,530] [INFO] [real_accelerator.py:222:get_accelerator] Setting ds_accelerator to cpu (auto detect)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/__init__.py", line 25, in <module>
from . import ops
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/__init__.py", line 11, in <module>
from . import transformer
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/__init__.py", line 7, in <module>
from .inference.config import DeepSpeedInferenceConfig
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/__init__.py", line 7, in <module>
from ....model_implementations.transformers.ds_transformer import DeepSpeedTransformerInference
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/model_implementations/__init__.py", line 6, in <module>
from .transformers.ds_transformer import DeepSpeedTransformerInference
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/model_implementations/transformers/ds_transformer.py", line 18, in <module>
from deepspeed.ops.transformer.inference.triton.mlp import TritonMLP
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/triton/__init__.py", line 10, in <module>
from .ops import *
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/triton/ops.py", line 6, in <module>
import deepspeed.ops.transformer.inference.triton.matmul_ext as matmul_ext
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/triton/matmul_ext.py", line 10, in <module>
import deepspeed.ops.transformer.inference.triton.triton_matmul_kernel as triton_matmul_kernel
File "/home/ray/anaconda3/lib/python3.9/site-packages/deepspeed/ops/transformer/inference/triton/triton_matmul_kernel.py", line 120, in <module>
def _fp_matmul(
File "/home/ray/anaconda3/lib/python3.9/site-packages/triton/runtime/autotuner.py", line 368, in decorator
return Autotuner(fn, fn.arg_names, configs, key, reset_to_zero, restore_value, pre_hook=pre_hook,
File "/home/ray/anaconda3/lib/python3.9/site-packages/triton/runtime/autotuner.py", line 130, in __init__
self.do_bench = driver.active.get_benchmarker()
File "/home/ray/anaconda3/lib/python3.9/site-packages/triton/runtime/driver.py", line 23, in __getattr__
self._initialize_obj()
File "/home/ray/anaconda3/lib/python3.9/site-packages/triton/runtime/driver.py", line 20, in _initialize_obj
self._obj = self._init_fn()
File "/home/ray/anaconda3/lib/python3.9/site-packages/triton/runtime/driver.py", line 8, in _create_driver
raise RuntimeError(f"{len(actives)} active drivers ({actives}). There should only be one.")
RuntimeError: 0 active drivers ([]). There should only be one.
```
**Expected behavior**
`import deepspeed` on a CPU machine should not crash.
| open | 2025-02-13T02:42:03Z | 2025-02-14T02:04:16Z | https://github.com/deepspeedai/DeepSpeed/issues/7028 | [
"bug",
"training"
] | hongpeng-guo | 3 |
deezer/spleeter | tensorflow | 526 | [Discussion] Is it possible to use GPU with Spleeter Python API without conda? | Hello all,
I'm using Spleeter Python API to separate audio signals in many stems. Is it possible to use GPU for separations without conda or docker image?
Thanks | closed | 2020-12-03T16:52:43Z | 2020-12-03T17:20:35Z | https://github.com/deezer/spleeter/issues/526 | [
"question"
] | Tiago622 | 1 |
pytorch/pytorch | numpy | 149,732 | stride asserts should name the operator involved | ```
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
File "/packages/aps.ads.icvr/icvr_launcher#link-tree/torch/_inductor/utils.py", line 2348, in run
return model(new_inputs)
File "/tmp/torchinductor_nobody/27/c2765fyur2v7aek4rc762oibztfzekpdgupovpfnad463vcqmrtj.py", line 205, in call
assert_size_stride(primals_1, (768, 192), (192, 1))
AssertionError: expected size 1024==768, stride 192==192 at dim=0
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
Pitch: we can probably pass the name of the operator to `assert_size_stride`.
I have debugged 3 of these in the last week.
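A rough illustration of the pitch (purely a sketch; the real `assert_size_stride` and its generated call sites live elsewhere, and the actual plumbing would differ):

```python
def assert_size_stride(tensor, size, stride, op_name=None):
    # Thread an optional op name into the failure message so users can see
    # which operator produced the mismatched tensor.
    if tuple(tensor.size()) != tuple(size) or tuple(tensor.stride()) != tuple(stride):
        suffix = f" (produced by: {op_name})" if op_name else ""
        raise AssertionError(
            f"expected size {size}, stride {stride}; got size "
            f"{tuple(tensor.size())}, stride {tuple(tensor.stride())}{suffix}"
        )
```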
cc @ezyang @gchanan @kadeng @msaroufim @chauhang @penguinwu | open | 2025-03-21T14:56:41Z | 2025-03-24T17:12:57Z | https://github.com/pytorch/pytorch/issues/149732 | [
"high priority",
"triage review",
"oncall: pt2"
] | zou3519 | 1 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,458 | Want to understand the loss | Hi, Thank you for the awesome work.
I have read the issues about blurred outputs like: [1](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1388#issuecomment-1062187740) and [2](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/656#issuecomment-510935442).
May I have your explanation of why increasing the weights of the identity loss and cycle-consistency loss can improve the problem?
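For context, these are the losses I mean (the standard CycleGAN formulation):

```latex
\mathcal{L}(G, F, D_X, D_Y)
  = \mathcal{L}_{\mathrm{GAN}}(G, D_Y) + \mathcal{L}_{\mathrm{GAN}}(F, D_X)
  + \lambda_{\mathrm{cyc}} \, \mathcal{L}_{\mathrm{cyc}}(G, F)
  + \lambda_{\mathrm{idt}} \, \mathcal{L}_{\mathrm{idt}}(G, F)

\mathcal{L}_{\mathrm{cyc}}(G, F)
  = \mathbb{E}_{x}\left[ \lVert F(G(x)) - x \rVert_1 \right]
  + \mathbb{E}_{y}\left[ \lVert G(F(y)) - y \rVert_1 \right]

\mathcal{L}_{\mathrm{idt}}(G, F)
  = \mathbb{E}_{y}\left[ \lVert G(y) - y \rVert_1 \right]
  + \mathbb{E}_{x}\left[ \lVert F(x) - x \rVert_1 \right]
```

Increasing the cycle and identity weights is what those issues suggest, and I want to understand why that helps.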
Thank you! | open | 2022-07-13T13:49:19Z | 2022-07-13T13:49:19Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1458 | [] | yicheng6o6 | 0 |
ultralytics/ultralytics | computer-vision | 19,294 | Cannot access segment model in mobile hub | Hi
When I try to use my segment model I get the message that currently only detection models are supported.
Ok, but how does this fit with the remark
> @AstroCIEL Segment models also automatically are Detect models, they output both bounding boxes and segment masks.
_Originally posted by @glenn-jocher in [#14648](https://github.com/ultralytics/ultralytics/issues/14648#issuecomment-2247479874)_
Thanks for any clarification | open | 2025-02-18T09:51:38Z | 2025-02-20T23:51:27Z | https://github.com/ultralytics/ultralytics/issues/19294 | [
"question",
"HUB",
"segment"
] | metagic | 4 |
iperov/DeepFaceLab | machine-learning | 5,502 | Can't get RTX 3090 to work with deepfacelab. | ## Expected behavior
I have top-of-the-line specs with an RTX 3090 and a Ryzen 5800X CPU. I expect it to run, but it comes up with this error: Error: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_38 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_1141]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_38 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node Pad_38:
LeakyRelu_28 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)
Input Source operations connected to node Pad_38:
LeakyRelu_28 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)
Original stack trace for 'Pad_38':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 410, in on_initialize
gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 225, in forward
x = self.upscale2(x)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 71, in forward
x = self.conv1(x)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
x = tf.pad (x, padding, mode='CONSTANT')
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
result = gen_array_ops.pad(tensor, paddings, name=name)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
"Pad", input=input, paddings=paddings, name=name)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
Traceback (most recent call last):
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Pad_38}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_1141]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node Pad_38}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 129, in trainerThread
iter, iter_time = model.train_one_iter()
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 474, in train_one_iter
losses = self.onTrainOneIter()
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 774, in onTrainOneIter
src_loss, dst_loss = self.src_dst_train (warped_src, target_src, target_srcm, target_srcm_em, warped_dst, target_dst, target_dstm, target_dstm_em)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 584, in src_dst_train
self.target_dstm_em:target_dstm_em,
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message) # pylint: disable=no-value-for-parameter
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_38 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
[[concat_4/concat/_1141]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
(1) Resource exhausted: OOM when allocating tensor with shape[5376,34,34] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node Pad_38 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py:87) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node Pad_38:
LeakyRelu_28 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)
Input Source operations connected to node Pad_38:
LeakyRelu_28 (defined at C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py:29)
Original stack trace for 'Pad_38':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\ModelBase.py", line 193, in __init__
self.on_initialize()
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 410, in on_initialize
gpu_pred_dst_dst, gpu_pred_dst_dstm = self.decoder_dst(gpu_dst_code)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 225, in forward
x = self.upscale2(x)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\models\ModelBase.py", line 117, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\archis\DeepFakeArchi.py", line 71, in forward
x = self.conv1(x)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\LayerBase.py", line 14, in __call__
return self.forward(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\DeepFaceLab\core\leras\layers\Conv2D.py", line 87, in forward
x = tf.pad (x, padding, mode='CONSTANT')
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 206, in wrapper
return target(*args, **kwargs)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3528, in pad
result = gen_array_ops.pad(tensor, paddings, name=name)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 6487, in pad
"Pad", input=input, paddings=paddings, name=name)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _create_op_internal
op_def=op_def)
File "C:\Users\Redux\Downloads\Reface\DeepFaceLab_NVIDIA_RTX3000_series\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 2045, in __init__
self._traceback = tf_stack.extract_stack_for_node(self._c_op)
```
## Actual behavior
I go through all the settings in SAEHD without changing anything, and it produces the long error output above that I can't understand,
or it just says "bad allocation" or "out of memory".
## Steps to reproduce
I cannot recall exactly what I changed, but I don't think it would be detrimental, because I then used the CPU instead of the GPU and it worked fine. However, I don't think I'm getting the fastest results with the CPU. The trainer settings were:
```
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: df-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_mouth_prio: True ==
== uniform_yaw: True ==
== blur_out_mask: True ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: False ==
== random_hsv_power: 0.0 ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: True ==
== pretrain: True ==
== autobackup_hour: 1 ==
== write_preview_history: True ==
== target_iter: 0 ==
== random_src_flip: False ==
== random_dst_flip: False ==
== batch_size: 21 ==
== gan_power: 0.0 ==
== gan_patch_size: 16 ==
== gan_dims: 16 ==
== ==
==------------------ Running On ------------------==
== ==
== Device index: 0 ==
== Name: NVIDIA GeForce RTX 3090 ==
== VRAM: 21.17GB ==
==
```
## Other relevant information
- **Command line used (if not specified in steps to reproduce)**: main.py ...
- **Operating system and version:** Windows, macOS, Linux
- **Python version:** 3.5, 3.6.4, ... (if you are not using prebuilt windows binary) | open | 2022-03-28T16:32:52Z | 2023-06-08T23:20:22Z | https://github.com/iperov/DeepFaceLab/issues/5502 | [] | diegito55 | 5 |
charlesq34/pointnet | tensorflow | 64 | About implementation and transformations | Hi,
First of all great work! But I have some questions:
1) Your implementation uses Conv2D, and in the first layer it has a kernel size of (1, 3), so it is basically taking linear combinations of the coordinates (assume no activation here). So using Conv1D on the points, with the x, y, z coordinates as channels, should be equivalent, right? Is there a reason why you prefer Conv2D?
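A minimal sketch of the equivalence asked about in 1), written in PyTorch purely for illustration (the repo itself uses TensorFlow): a Conv2D with a (1, 3) kernel over an N×3 point array and a Conv1D with kernel size 1 that treats x, y, z as input channels realize the same per-point shared linear map.
```python
import torch
import torch.nn as nn

torch.manual_seed(0)
pts = torch.randn(1, 32, 3)                    # (batch, num_points, xyz)

conv2d = nn.Conv2d(1, 64, kernel_size=(1, 3))  # PointNet-style first layer
conv1d = nn.Conv1d(3, 64, kernel_size=1)       # xyz as input channels

with torch.no_grad():                          # copy weights: identical map
    conv1d.weight.copy_(conv2d.weight.squeeze(2).permute(0, 2, 1))
    conv1d.bias.copy_(conv2d.bias)

out2d = conv2d(pts.unsqueeze(1)).squeeze(-1)   # (1, 64, 32)
out1d = conv1d(pts.permute(0, 2, 1))           # (1, 64, 32)
assert torch.allclose(out2d, out1d, atol=1e-6)
```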
2) I did not understand the underlying theory for using the transformation nets. I get the fact that the network should not be affected by the transformations, but we are not learning the transformations separately. Can you point me to a document so I can understand what is going on there? | closed | 2017-12-08T14:17:47Z | 2018-01-09T03:47:30Z | https://github.com/charlesq34/pointnet/issues/64 | [] | ceteke | 1
PokeAPI/pokeapi | api | 327 | How can I get all Alola pokémon including alolan forms? | I'm working on a command-line Pokémon Sun and Moon game and I'm trying to figure out how to get the Alola id and name of all the Pokémon in Alola. The entry for the Alolan Raichu has the id 10100 and the name is "raichu-alola", rather than 26 and "raichu". (It would be very confusing for two entries to have the same name.)
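A hedged sketch of how this looks through the v2 API (field names as documented; the regional-dex slug is an assumption, so list `/api/v2/pokedex/` to confirm the right one):
```python
import requests

BASE = "https://pokeapi.co/api/v2"

# Regional dex entries carry the region-local entry number per species.
dex = requests.get(f"{BASE}/pokedex/original-alola").json()

for entry in dex["pokemon_entries"][:5]:
    species = requests.get(entry["pokemon_species"]["url"]).json()
    # Each species lists its varieties, e.g. "raichu" and "raichu-alola".
    forms = [v["pokemon"]["name"] for v in species["varieties"]]
    print(entry["entry_number"], species["name"], forms)
```
Here `entry_number` would be the Alola id, and `varieties` is where regional forms like "raichu-alola" appear alongside the base form.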
How can I get the alola id and the species name of all the pokemon? | closed | 2018-03-09T19:53:24Z | 2020-08-19T10:22:55Z | https://github.com/PokeAPI/pokeapi/issues/327 | [] | thechief389 | 9 |
Zeyi-Lin/HivisionIDPhotos | machine-learning | 173 | Please add 6-inch photo paper and mixed 1-inch/2-inch layouts, thank you! | Please add 6-inch photo paper and mixed 1-inch/2-inch layouts, thank you! | open | 2024-09-27T07:50:54Z | 2024-09-27T07:50:54Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/173 | [] | gcl52 | 0
sqlalchemy/sqlalchemy | sqlalchemy | 12,432 | handle cast of postgresql empty ARRAY using specified type | ### Describe the use case
[PostgreSQL requires an explicit cast when constructing an empty `ARRAY`](https://www.postgresql.org/docs/17/sql-expressions.html#SQL-SYNTAX-ARRAY-CONSTRUCTORS).
In SQLAlchemy, when using `postgresql.array([], type_=CHAR)` (with the `type_` argument specified) to build an `ARRAY` literal, it would be nice if the cast could be handled automatically instead of requiring users to write `cast(postgresql.array([]), ARRAY(CHAR))`.
### Databases / Backends / Drivers targeted
postgresql
### Example Use
Current behaviour:
```python
>>> import sqlalchemy
>>> from sqlalchemy.dialects import postgresql
>>> str(postgresql.array([], type_=sqlalchemy.CHAR).compile())
'ARRAY[]'
```
which then fails to execute in the backend.
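(For reference, the explicit-cast workaround mentioned above looks roughly like this; the compiled output is reproduced from memory, so treat it as approximate:)
```python
>>> from sqlalchemy import cast
>>> from sqlalchemy.dialects.postgresql import ARRAY, array
>>> str(cast(array([]), ARRAY(sqlalchemy.CHAR)).compile(dialect=postgresql.dialect()))
'CAST(ARRAY[] AS CHAR[])'
```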
Proposal:
```python
>>> str(postgresql.array([], type_=sqlalchemy.CHAR).compile())
'ARRAY[]::CHAR[]'
```
### Additional context
I've been hacking on this in https://github.com/dlax/sqlalchemy/commit/1dc325cc8687ee6f1d22b13dd327df4628510b0c, so if this is okay, I can open a PR. | closed | 2025-03-14T14:10:17Z | 2025-03-20T01:42:11Z | https://github.com/sqlalchemy/sqlalchemy/issues/12432 | [
"postgresql",
"sql",
"PRs (with tests!) welcome",
"use case"
] | dlax | 7 |
TencentARC/GFPGAN | deep-learning | 429 | Why does my restoration only fix the face? | The restoration results are excellent, but I noticed that on the official site, the skin on areas like the neck and hands also gets restored. When I run the project locally, I can't achieve that effect. Is there something I haven't set up correctly? | open | 2023-08-14T08:54:56Z | 2024-03-12T06:52:48Z | https://github.com/TencentARC/GFPGAN/issues/429 | [] | w269219808 | 2
netbox-community/netbox | django | 17,941 | Also changelog non-request-based changes | ### NetBox version
v4.1.6
### Feature type
Change to existing functionality
### Triage priority
I volunteer to perform this work (if approved)
### Proposed functionality
Currently, the `handle_changed_object` function bails out if the change does not originate from a request.
It would be desirable to log these changes as well.
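A rough sketch of the shape this could take (hypothetical; `current_request` and `to_objectchange()` are from my reading of `extras/signals.py` and the change-logging mixin, and may not match the current code exactly):
```python
import uuid

def handle_changed_object(sender, instance, **kwargs):
    request = current_request.get()
    if request is None:
        # Proposed: instead of bailing out, still record the change,
        # attributed to no user and a synthetic request ID, so that
        # management commands and other non-request paths are logged too.
        objectchange = instance.to_objectchange(kwargs.get("action"))
        objectchange.user = None
        objectchange.request_id = uuid.uuid4()
        objectchange.save()
        return
    ...  # existing request-based handling continues unchanged
```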
### Use case
We have added custom management commands that alter data, but the changes never appear in the change log.
### Database changes
### External dependencies
| closed | 2024-11-06T12:50:52Z | 2025-03-06T03:09:05Z | https://github.com/netbox-community/netbox/issues/17941 | [
"type: feature"
] | mulmat | 1 |
littlecodersh/ItChat | api | 879 | 🙂 Can't log in | 1 | closed | 2019-12-04T02:32:54Z | 2023-11-16T12:51:28Z | https://github.com/littlecodersh/ItChat/issues/879 | [] | 2048537793 | 4
paperless-ngx/paperless-ngx | django | 9,045 | [BUG] Error occurred while consuming big pdf document SubprocessOutputError tesseract get_deskew | ### Description
After successfully importing loads of PDFs, I am now trying to upload a big PDF of 170 MB with 114 pages.
After a few minutes of processing, an error appears while running tesseract.
I already tried setting `'tesseract_timeout': 1800`, which didn't help.
I also tried to find `/tmp/ocrmypdf.io.2fpa9ots/000022_rasterize.png` to rerun the failing command myself, but wasn't able to find it in any of the Paperless Docker containers.
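For reference, the consumer forwards these options straight into `ocrmypdf.ocr(...)` (see the "Calling OCRmyPDF with args" log line below), and the traceback points at the deskew step rather than the OCR timeout. So a hedged experiment would be to call the same API directly outside Paperless, and/or to disable deskewing (`PAPERLESS_OCR_DESKEW=false`, assuming the standard setting name):
```python
# Minimal repro sketch outside Paperless, mirroring the kwargs from the
# log line below; assumes ocrmypdf is installed and the PDF is local.
import ocrmypdf

ocrmypdf.ocr(
    "20250208_155934.PDF",
    "archive.pdf",
    language="deu+eng",
    output_type="pdfa",
    skip_text=True,
    clean=True,
    deskew=True,          # the step that raises SubprocessOutputError here
    rotate_pages=True,
    sidecar="sidecar.txt",
)
```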
### Steps to reproduce
Drag & Drop the PDF file into the web page and wait for the processing to run for a few minutes.
### Webserver logs
```bash
[2025-02-08 17:09:52,920] [INFO] [paperless.consumer] Consuming 20250208_155934.PDF
[2025-02-08 17:09:52,965] [DEBUG] [paperless.consumer] Detected mime type: application/pdf
[2025-02-08 17:09:52,978] [DEBUG] [paperless.consumer] Parser: RasterisedDocumentParser
[2025-02-08 17:09:52,980] [DEBUG] [paperless.consumer] Parsing 20250208_155934.PDF...
[2025-02-08 17:09:53,027] [INFO] [paperless.parsing.tesseract] pdftotext exited 0
[2025-02-08 17:09:53,132] [DEBUG] [paperless.parsing.tesseract] Calling OCRmyPDF with args: {'input_file': PosixPath('/tmp/paperless/paperless-ngx18j4guwj/20250208_155934.PDF'), 'output_file': PosixPath('/tmp/paperless/paperless-cmazb6rc/archive.pdf'), 'use_threads': True, 'jobs': '4', 'language': 'deu+eng', 'output_type': 'pdfa', 'progress_bar': False, 'color_conversion_strategy': 'RGB', 'skip_text': True, 'clean': True, 'deskew': True, 'rotate_pages': True, 'rotate_pages_threshold': 12.0, 'sidecar': PosixPath('/tmp/paperless/paperless-cmazb6rc/sidecar.txt'), 'tesseract_timeout': 1800}
[2025-02-08 17:09:53,519] [INFO] [ocrmypdf._pipelines.ocr] Start processing 4 pages concurrently
[2025-02-08 17:09:54,801] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:09:54,801] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:09:54,801] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:09:54,802] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:09:54,936] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:09:54,936] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:09:54,936] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:09:54,936] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:09:55,425] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 4.51 - no change
[2025-02-08 17:09:55,444] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 15.13 - rotation appears correct
[2025-02-08 17:10:20,990] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:10:20,990] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:10:20,990] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:10:20,990] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:10:21,361] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 11.87 - no change
[2025-02-08 17:10:36,136] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 11.58 - no change
[2025-02-08 17:10:41,760] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 11.87 - no change
[2025-02-08 17:11:06,505] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 10.01 - no change
[2025-02-08 17:11:39,599] [WARNING] [ocrmypdf._exec.tesseract] [tesseract] lots of diacritics - possibly poor OCR
[2025-02-08 17:11:40,989] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 1.61 - no change
[2025-02-08 17:11:50,335] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 11.64 - no change
[2025-02-08 17:11:52,248] [WARNING] [ocrmypdf._exec.tesseract] [tesseract] lots of diacritics - possibly poor OCR
[2025-02-08 17:11:53,194] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 6.17 - no change
[2025-02-08 17:11:54,100] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 10.52 - no change
[2025-02-08 17:12:09,975] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:12:09,976] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:12:09,976] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:12:09,976] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:12:30,485] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 10.14 - no change
[2025-02-08 17:12:31,139] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Image too small to scale!! (2x36 vs min width of 3)
[2025-02-08 17:12:31,139] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Line cannot be recognized!!
[2025-02-08 17:12:31,139] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Image too small to scale!! (2x36 vs min width of 3)
[2025-02-08 17:12:31,139] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Line cannot be recognized!!
[2025-02-08 17:12:32,418] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:12:32,418] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:12:32,419] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:12:32,419] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:12:36,605] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 10.93 - no change
[2025-02-08 17:12:39,684] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 6.72 - no change
[2025-02-08 17:13:00,726] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 7.65 - no change
[2025-02-08 17:13:14,728] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:13:14,728] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:13:14,728] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:13:14,728] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:13:18,729] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 11.09 - no change
[2025-02-08 17:13:19,787] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:13:19,787] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Too few characters. Skipping this page
[2025-02-08 17:13:19,787] [ERROR] [ocrmypdf._exec.tesseract] [tesseract] Error during processing.
[2025-02-08 17:13:19,788] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 0.00 - no change
[2025-02-08 17:13:23,342] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Empty page!!
[2025-02-08 17:13:29,250] [INFO] [ocrmypdf._pipeline] page is facing ⇧, confidence 8.29 - no change
[2025-02-08 17:13:29,592] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Empty page!!
[2025-02-08 17:13:45,015] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Empty page!!
[2025-02-08 17:13:45,015] [INFO] [ocrmypdf._exec.tesseract] [tesseract] Empty page!!
[2025-02-08 17:14:04,896] [WARNING] [ocrmypdf._exec.tesseract] [tesseract] lots of diacritics - possibly poor OCR
[2025-02-08 17:14:13,230] [DEBUG] [paperless.parsing.tesseract] Deleting directory /tmp/paperless/paperless-cmazb6rc
[2025-02-08 17:14:13,242] [ERROR] [paperless.consumer] Error occurred while consuming document 20250208_155934.PDF: SubprocessOutputError: . See logs for more information.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_exec/tesseract.py", line 201, in get_deskew
p = run(args_tesseract, stdout=PIPE, stderr=STDOUT, timeout=timeout, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/subprocess/__init__.py", line 62, in run
proc = subprocess_run(args, env=env, check=check, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['tesseract', '-l', 'deu+eng', '--psm', '2', '/tmp/ocrmypdf.io.2fpa9ots/000022_rasterize.png', 'stdout']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 382, in parse
ocrmypdf.ocr(**args)
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/api.py", line 380, in ocr
return run_pipeline(options=options, plugin_manager=plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 214, in run_pipeline
return _run_pipeline(options, plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 181, in _run_pipeline
optimize_messages = exec_concurrent(context, executor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 117, in exec_concurrent
executor(
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_concurrent.py", line 78, in __call__
self._execute(
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/builtin_plugins/concurrency.py", line 144, in _execute
result = future.result()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 78, in _exec_page_sync
ocr_image_out, pdf_page_from_image_out, orientation_correction = process_page(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 417, in process_page
ocr_image, preprocess_out = make_intermediate_images(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 370, in make_intermediate_images
preprocess_out = preprocess(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 340, in preprocess
image = preprocess_deskew(image, page_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipeline.py", line 595, in preprocess_deskew
deskew_angle_degrees = ocr_engine.get_deskew(input_file, page_context.options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/builtin_plugins/tesseract_ocr.py", line 259, in get_deskew
return tesseract.get_deskew(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_exec/tesseract.py", line 212, in get_deskew
raise SubprocessOutputError() from e
ocrmypdf.exceptions.SubprocessOutputError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 477, in run
document_parser.parse(self.working_copy, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 405, in parse
raise ParseError(
documents.parsers.ParseError: SubprocessOutputError: . See logs for more information.
[2025-02-08 17:14:13,247] [ERROR] [paperless.tasks] ConsumeTaskPlugin failed: 20250208_155934.PDF: Error occurred while consuming document 20250208_155934.PDF: SubprocessOutputError: . See logs for more information.
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_exec/tesseract.py", line 201, in get_deskew
p = run(args_tesseract, stdout=PIPE, stderr=STDOUT, timeout=timeout, check=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/subprocess/__init__.py", line 62, in run
proc = subprocess_run(args, env=env, check=check, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/subprocess.py", line 571, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['tesseract', '-l', 'deu+eng', '--psm', '2', '/tmp/ocrmypdf.io.2fpa9ots/000022_rasterize.png', 'stdout']' returned non-zero exit status 1.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 382, in parse
ocrmypdf.ocr(**args)
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/api.py", line 380, in ocr
return run_pipeline(options=options, plugin_manager=plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 214, in run_pipeline
return _run_pipeline(options, plugin_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 181, in _run_pipeline
optimize_messages = exec_concurrent(context, executor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 117, in exec_concurrent
executor(
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_concurrent.py", line 78, in __call__
self._execute(
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/builtin_plugins/concurrency.py", line 144, in _execute
result = future.result()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/local/lib/python3.12/concurrent/futures/thread.py", line 59, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/ocr.py", line 78, in _exec_page_sync
ocr_image_out, pdf_page_from_image_out, orientation_correction = process_page(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 417, in process_page
ocr_image, preprocess_out = make_intermediate_images(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 370, in make_intermediate_images
preprocess_out = preprocess(
^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipelines/_common.py", line 340, in preprocess
image = preprocess_deskew(image, page_context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_pipeline.py", line 595, in preprocess_deskew
deskew_angle_degrees = ocr_engine.get_deskew(input_file, page_context.options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/builtin_plugins/tesseract_ocr.py", line 259, in get_deskew
return tesseract.get_deskew(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/ocrmypdf/_exec/tesseract.py", line 212, in get_deskew
raise SubprocessOutputError() from e
ocrmypdf.exceptions.SubprocessOutputError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/asgiref/sync.py", line 327, in main_wrap
raise exc_info[1]
File "/usr/src/paperless/src/documents/consumer.py", line 477, in run
document_parser.parse(self.working_copy, mime_type, self.filename)
File "/usr/src/paperless/src/paperless_tesseract/parsers.py", line 405, in parse
raise ParseError(
documents.parsers.ParseError: SubprocessOutputError: . See logs for more information.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/src/paperless/src/documents/tasks.py", line 154, in consume_file
msg = plugin.run()
^^^^^^^^^^^^
File "/usr/src/paperless/src/documents/consumer.py", line 509, in run
self._fail(
File "/usr/src/paperless/src/documents/consumer.py", line 151, in _fail
raise ConsumerError(f"{self.filename}: {log_message or message}") from exception
documents.consumer.ConsumerError: 20250208_155934.PDF: Error occurred while consuming document 20250208_155934.PDF: SubprocessOutputError: . See logs for more information.
```
### Browser logs
```bash
```
### Paperless-ngx version
2.14.5
### Host OS
Ubuntu 22.04.5
### Installation method
Docker - official image
### System status
```json
```
### Browser
_No response_
### Configuration changes
```
PAPERLESS_TIME_ZONE=Europe/Berlin
PAPERLESS_OCR_LANGUAGE=deu+eng
PAPERLESS_SECRET_KEY=XXX
PAPERLESS_CONSUMER_RECURSIVE=true
PAPERLESS_FILENAME_DATE_ORDER=DMY
PAPERLESS_TASK_WORKERS=4
PAPERLESS_THREADS_PER_WORKER=4
PAPERLESS_OCR_USER_ARGS: '{"tesseract_timeout": 1800}'
```
### Please confirm the following
- [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools.
- [x] I have already searched for relevant existing issues and discussions before opening this report.
- [x] I have updated the title field above with a concise description. | closed | 2025-02-08T16:38:01Z | 2025-03-16T03:15:32Z | https://github.com/paperless-ngx/paperless-ngx/issues/9045 | [
"not a bug"
] | gooney47 | 5 |
PaddlePaddle/models | computer-vision | 4,980 | Metric learning hangs during test | Using the official metric learning example:
when training reaches about iteration 26,000, it hangs during evaluation on the test set, and no error is reported.

When I force-stopped it, I found it was stuck here:

| open | 2020-12-05T04:19:20Z | 2024-02-26T05:09:44Z | https://github.com/PaddlePaddle/models/issues/4980 | [] | wengooooo | 6 |
modAL-python/modAL | scikit-learn | 177 | Suggestion on how to improve acquisition.UCB for active GP example | closed | 2023-08-22T07:19:28Z | 2023-08-22T07:46:44Z | https://github.com/modAL-python/modAL/issues/177 | [] | avivajpeyi | 1 |
|
ets-labs/python-dependency-injector | asyncio | 630 | [QUESTION] How to decorate a provider? | dependency-injector is great!
Using the Decorator pattern, `Foo` decorates `Bar`.
Goal is that `foo_factory` will return `Bar` instances which use `Foo` instances.
How can this be achieved?
Really appreciate any help ...
Example (read the comments):
```python
from dependency_injector import providers


# This is just a service.
class Foo:
    def __init__(self, dep1, dep2, dep3) -> None:
        self.dep1 = dep1
        self.dep2 = dep2  # fixed: originally assigned to self.dep1 by mistake
        self.dep3 = dep3  # fixed: originally assigned to self.dep1 by mistake

    def do_something(self):
        print('foo ')


# This is a service which decorates Foo (decorator pattern).
class Bar:
    def __init__(self, foo: Foo) -> None:
        self.foo = foo

    def do_something(self):
        self.foo.do_something()
        print('bar ')


# Foo has a factory to pass in dependencies.
foo_factory = providers.Factory(Foo, dep1=1, dep2=2, dep3=3)

# Case_1 : Pass foo_factory in order to instantiate as late as possible.
bar_factory = providers.Factory(Bar, foo=foo_factory)
# Case_2 : Pass foo_factory instance.
# bar_factory = providers.Factory(Bar, foo=foo_factory())

# override the factory
foo_factory.override(bar_factory)

# Triggers recursion error in Case_1
bar_1 = foo_factory()
bar_2 = foo_factory()

# Goal_1: foo bar
bar_1.do_something()

# Goal_2: Have different foo objects. Achieved only with Case_1.
assert bar_1.foo != bar_2.foo
```
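A hedged workaround for the recursion error shown below: keep a separate, undecorated `Foo` factory, so the override never refers back to the provider it overrides.
```python
# Sketch, not a definitive pattern: override with a Bar factory that points
# at a *separate* plain Foo factory, so foo_factory() -> Bar(Foo(...)).
plain_foo_factory = providers.Factory(Foo, dep1=1, dep2=2, dep3=3)
foo_factory.override(providers.Factory(Bar, foo=plain_foo_factory))

bar_1 = foo_factory()   # a Bar wrapping a fresh Foo
bar_2 = foo_factory()
bar_1.do_something()    # prints "foo " then "bar "
assert bar_1.foo is not bar_2.foo   # late instantiation is preserved
```
With this shape, both goals still hold: `do_something()` prints "foo bar", and each call builds a distinct `Foo`.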
Error sample:
```
File "src/dependency_injector/providers.pxd", line 445, in dependency_injector.providers.__provide_keyword_args
File "src/dependency_injector/providers.pxd", line 365, in dependency_injector.providers.__get_value
File "src/dependency_injector/providers.pyx", line 223, in dependency_injector.providers.Provider.__call__
File "src/dependency_injector/providers.pyx", line 225, in dependency_injector.providers.Provider.__call__
File "src/dependency_injector/providers.pyx", line 2689, in dependency_injector.providers.Factory._provide
File "src/dependency_injector/providers.pxd", line 650, in dependency_injector.providers.__factory_call
File "src/dependency_injector/providers.pxd", line 577, in dependency_injector.providers.__call
File "src/dependency_injector/providers.pxd", line 445, in dependency_injector.providers.__provide_keyword_args
File "src/dependency_injector/providers.pxd", line 365, in dependency_injector.providers.__get_value
File "src/dependency_injector/providers.pyx", line 223, in dependency_injector.providers.Provider.__call__
RecursionError: maximum recursion depth exceeded while calling a Python object
``` | closed | 2022-10-18T18:05:04Z | 2022-10-20T10:43:29Z | https://github.com/ets-labs/python-dependency-injector/issues/630 | [] | vlad-ghita | 2 |
Tinche/aiofiles | asyncio | 200 | Abandon Python 3.8 | Hello.
Are you OK with bumping the minimum supported Python version from 3.8 to 3.9?
References:
- https://endoflife.date/python
- https://devguide.python.org/versions/ | closed | 2025-02-01T17:47:20Z | 2025-02-05T20:29:41Z | https://github.com/Tinche/aiofiles/issues/200 | [] | stankudrow | 3 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 840 | General question about embedding size | Hey,
I am interested in training a single-language model that also has less accent variation. I would like to train a good-quality model on about 1000 speakers, then fine-tune on a single speaker (like #437), with 5 minutes or even hours of audio, to finally get a good single-speaker model. Now my question is: does the model benefit from an embedding size (or only the hidden size in the encoder) of 768, like sberryman used in #126, even though training time and VRAM usage increase heavily? Or is that only interesting for multi-language/multi-accent models, so I would definitely be wasting my time with it, or even get worse results?
I also use 48000 as the sample_rate, since most of my samples (from Common Voice) are in 48 kHz. Maybe this has an impact?
Thanks in advance :)
| closed | 2021-09-06T18:57:14Z | 2021-09-13T21:58:01Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/840 | [] | Bebaam | 2 |
graphdeco-inria/gaussian-splatting | computer-vision | 605 | Can not install diff-gaussian-rasterization | I have already installed CUDA 11.8, but when I executed `python setup.py install`, I ran into the following problem:
RuntimeError:
The detected CUDA version (10.1) mismatches the version that was used to compile
PyTorch (11.8). Please make sure to use the same CUDA versions.
(guassian_splatting) work@work:~/data_ssd/3D/gaussian-splatting-main/submodules/diff-gaussian-rasterization$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:33:58_PDT_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
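For what it's worth, PyTorch's extension builder resolves the CUDA toolkit from the `CUDA_HOME`/`CUDA_PATH` environment variables rather than from whichever `nvcc` is first on `PATH`, so the "detected" 10.1 presumably comes from an older toolkit still referenced there. A hedged check (then point `CUDA_HOME` at the 11.8 install, e.g. `/usr/local/cuda-11.8`, and rebuild):
```python
# Hedged check: print the toolkit torch will actually compile against.
import os
import torch.utils.cpp_extension as cpp_ext

print(os.environ.get("CUDA_HOME"), os.environ.get("CUDA_PATH"))
print(cpp_ext.CUDA_HOME)  # if this points at a 10.1 toolkit, repoint it
```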
| open | 2024-01-10T09:02:19Z | 2024-02-09T23:03:24Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/605 | [] | David-655 | 1 |
sqlalchemy/alembic | sqlalchemy | 658 | Accumulate autogenerated migrations over multiple calls to `run_migrations()` | I am creating a multi-tenant application, with several tenant-specific schemas as well as shared public tables. For this I made a setup that works quite well. I'm now only struggling with the auto-generation of migrations.
I have simplified my use case to this minimal example. In `run_migrations_online`, I call `context.run_migrations()` several times, each time with a different `target_metadata` that only covers the tables linked to one schema:
```python
def run_migrations_online():
    connectable = create_engine(os.environ["DATABASE_URL"])

    with connectable.connect() as connection:
        schema = Schema.PUBLIC
        print()
        print("-" * 80)
        print(f"Migrating schema {schema.schema_name}\n")
        context.configure(
            connection=connection,
            target_metadata=schema.base.metadata,
            include_schemas=True,
        )
        with context.begin_transaction():
            context.run_migrations()

        schema = Schema.TENANT
        print()
        print("-" * 80)
        print(f"Migrating schema {schema.schema_name}\n")
        context.configure(
            connection=connection,
            target_metadata=schema.base.metadata,
            include_schemas=True,
        )
        with context.begin_transaction():
            context.run_migrations()
```
When running `alembic revision --autogenerate -m "create tables"`, we can see that Alembic correctly detects the modifications (one table added) for each call:
```
--------------------------------------------------------------------------------
Migrating schema public
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'exchange_rate'
--------------------------------------------------------------------------------
Migrating schema tenant
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'tenant.inventory'
```
But the problem is that when it generates the migration script, it keeps only the modifications detected during the last call to `context.run_migrations()`, namely the `tenant.inventory` table in this case:
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('inventory',
        sa.Column('id', sa.Integer(), nullable=False),
        sa.PrimaryKeyConstraint('id'),
        schema='tenant'
    )
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('inventory', schema='tenant')
    # ### end Alembic commands ###
```
Now if I invert the order in which TENANT and PUBLIC are processed, both changes are correctly discovered, but in this case the migration script only keeps track of the `public.exchange_rate` table:
```
--------------------------------------------------------------------------------
Migrating schema tenant
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'tenant.inventory'
--------------------------------------------------------------------------------
Migrating schema public
INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
INFO [alembic.runtime.migration] Will assume transactional DDL.
INFO [alembic.autogenerate.compare] Detected added table 'exchange_rate'
```
```python
def upgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.create_table('exchange_rate',
        sa.Column('date', sa.DateTime(), nullable=False),
        sa.PrimaryKeyConstraint('date'),
        schema='public'
    )
    # ### end Alembic commands ###


def downgrade():
    # ### commands auto generated by Alembic - please adjust! ###
    op.drop_table('exchange_rate', schema='public')
    # ### end Alembic commands ###
```
In other words, the autogenerate feature generates the script only for the last run of `context.run_migrations()`.
---
**Question**:
Is it somehow possible to accumulate the changes over multiple calls to `context.run_migrations()` when using `autogenerate` feature?
(«No» is a perfectly valid answer, at least I would know I should stop trying :smile:)
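For reference, one hedged alternative (assuming a reasonably recent Alembic, where `target_metadata` may be a sequence of `MetaData` objects) is to let a single autogenerate pass see all schemas at once, keeping the per-tenant reconfiguration only for the online upgrade path:
```python
# Sketch: hand autogenerate all metadata in one run_migrations() call.
context.configure(
    connection=connection,
    target_metadata=[Schema.PUBLIC.base.metadata, Schema.TENANT.base.metadata],
    include_schemas=True,
)
with context.begin_transaction():
    context.run_migrations()
```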
_Some notes_:
- I have to keep this structure where I reconfigure the context several times, because when I have multiple tenants I actually want one `version_table` per tenant, and this is set using `context.configure()`.
- Online upgrade (with `alembic upgrade head`) does not have this problem because the changes are applied directly, there is no need to accumulate them. | closed | 2020-02-16T10:37:53Z | 2020-02-16T16:32:59Z | https://github.com/sqlalchemy/alembic/issues/658 | [
"question"
] | StreakyCobra | 4 |
miguelgrinberg/microblog | flask | 82 | How to change the structure of the app | Hi @miguelgrinberg thanks for your great e-book about the app.
Now I want to restructure the app.
Currently, when I access the app, I have to sign in first before I can see any posts.
I want to change this:
when users first visit the site, the posts should be visible on the Home or Explore page even if they haven't signed in. Only after they click on a post or a user should signing up be required.
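To make the goal concrete, here is a hedged sketch of the usual Flask pattern (route and model names follow the book's app, but treat them as assumptions): remove `@login_required` from the views that should be public, and keep it on the ones that should prompt for login or sign-up.
```python
from flask import render_template
from flask_login import login_required

from app import app
from app.models import Post, User


@app.route('/')
@app.route('/index')
def index():                    # no @login_required: the page is public now
    posts = Post.query.order_by(Post.timestamp.desc()).all()
    return render_template('index.html', title='Home', posts=posts)


@app.route('/user/<username>')
@login_required                 # still protected: anonymous users get sent
def user(username):             # to the login page by Flask-Login
    u = User.query.filter_by(username=username).first_or_404()
    return render_template('user.html', user=u, posts=u.posts)
```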
Do you have any advice on how to achieve this, or which parts should be changed? Thanks! | closed | 2018-02-06T04:05:54Z | 2019-01-13T22:21:05Z | https://github.com/miguelgrinberg/microblog/issues/82 | [
"question",
"auto-closed"
] | tianke0711 | 38 |
Kludex/mangum | asyncio | 193 | Mangum 0.12 regression: Unable to determine handler from trigger event | A lambda invoke call against Mangum 0.11 works fine, but after upgrading to version 0.12 the same call fails with:
[ERROR] TypeError: Unable to determine handler from trigger event
Traceback (most recent call last):
File "/var/task/mangum/adapter.py", line 86, in __call__
handler = AbstractHandler.from_trigger(
File "/var/task/mangum/handlers/abstract_handler.py", line 113, in from_trigger
raise TypeError("Unable to determine handler from trigger event")
START RequestId: 7d86054b-94df-4bba-b5e0-60106f2d9fa9 Version: $LATEST
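For reproduction, presumably any direct `lambda invoke` payload that is not shaped like one of the recognized trigger events (API Gateway, ALB, etc.) now raises this. A hedged minimal repro:
```python
# Sketch of the failing path: Mangum 0.12 inspects the event's keys to pick
# a handler, and a bare payload (e.g. `aws lambda invoke --payload '{}'`)
# matches none of the known trigger shapes.
from mangum import Mangum

async def app(scope, receive, send):   # trivial ASGI app, stands in for mine
    ...

handler = Mangum(app)
handler({}, None)   # -> TypeError: Unable to determine handler from trigger event
```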
| closed | 2021-07-19T14:40:22Z | 2021-07-29T17:40:12Z | https://github.com/Kludex/mangum/issues/193 | [
"bug"
] | IlyaSukhanov | 5 |
ansible/awx | automation | 15,591 | add optional text to approval steps | ### Please confirm the following
- [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html).
- [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates.
- [X] I understand that AWX is open source software provided for free and that I might not receive a timely response.
### Feature type
Enhancement to Existing Feature
### Feature Summary
It would be great if the user could give a reason for a rejection/approval in the approval steps of a workflow. At its simplest, this would be an optional text area where the user can enter text that would then populate an artifact within the workflow, to be used in further steps.
### Select the relevant components
- [X] UI
- [ ] API
- [ ] Docs
- [ ] Collection
- [ ] CLI
- [ ] Other
### Steps to reproduce
..
### Current results
..
### Suggested feature result
..
### Additional information
_No response_ | open | 2024-10-21T20:50:58Z | 2024-10-21T20:51:18Z | https://github.com/ansible/awx/issues/15591 | [
"type:enhancement",
"component:ui",
"needs_triage",
"community"
] | dberardo-com | 0 |
strawberry-graphql/strawberry-django | graphql | 36 | Import error in example from README.md | When trying to create a type similar to the example in README.md, an error occurs:

This is the example in README.md:

The error can be solved by changing `import strawberry` to `import strawberry.django`, so I think README.md should be changed:

| closed | 2021-06-02T17:35:19Z | 2021-06-16T05:00:24Z | https://github.com/strawberry-graphql/strawberry-django/issues/36 | [] | neolight1010 | 5 |
davidteather/TikTok-Api | api | 543 | [BUG] - search_for_music and search_for_users methods does not return results | **Describe the bug**
search_for_music and search_for_users methods do not return results, always empty arrays
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
```
TikTokApi.get_instance().search_for_music('say so', 5)
TikTokApi.get_instance().search_for_users('baka prase', 5)
```
**Expected behavior**
To get 5 songs or users, depending on the method being used.
**Error Trace (if any)**
Put the error trace below if there's any error thrown.
```
No error trace
```
**Desktop (please complete the following information):**
- OS: [macOS Mojave]
- TikTokApi Version [3.9.4]
| closed | 2021-03-30T09:45:02Z | 2021-04-02T23:21:02Z | https://github.com/davidteather/TikTok-Api/issues/543 | [
"bug"
] | nikolamajmunovic | 1 |
arogozhnikov/einops | numpy | 27 | Add layers for tf and tf.keras | Continuing discussion started in pull-request #25 .
So far: `tf.keras` and `keras` are different things now; they work on different inputs and have different recommendations for creating custom layers.
This version seems to work for me with TensorFlow.
```python
import tensorflow as tf
from einops.layers.keras import RearrangeMixin, ReduceMixin, UnknownSize
class Rearrange(RearrangeMixin, tf.keras.layers.Layer):
    def call(self, inputs):
        return self._apply_recipe(inputs)


class Reduce(ReduceMixin, tf.keras.layers.Layer):
    def call(self, inputs):
        return self._apply_recipe(inputs)
```
Example for eager execution
```python
tf.enable_eager_execution()
x = tf.zeros([4, 5], dtype='float32')
Rearrange('i j -> j i')(x).shape
Reduce('i j -> j', 'max')(x).shape
```
And example without eager execution
```python
import numpy
x = tf.placeholder('float32')
x.set_shape([None, None])
with tf.Session().as_default():
    y = Rearrange('i j -> j i')(x).eval({x: numpy.zeros([5, 6], dtype='float32')})
    y = Reduce('i j -> j', 'max')(x).eval({x: numpy.zeros([5, 6], dtype='float32')})
```
At least this seems to comply with the TF guide:
https://www.tensorflow.org/tutorials/eager/custom_layers
My env:
```python
python 3.6 (should not affect)
In [2]: tensorflow.__version__
Out[2]: '1.10.0'
In [4]: keras.__version__ (should not affect)
Out[4]: '2.2.4'
```
| closed | 2018-12-08T18:54:07Z | 2021-01-09T21:21:27Z | https://github.com/arogozhnikov/einops/issues/27 | [] | arogozhnikov | 3 |
apache/airflow | python | 47,872 | DAG Processor crashing Asset.ref & Asset.ref | ### Apache Airflow version
3.0.0
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
The DAG processor crashes with the `triggering_asset_events` DAG below.
**LOGS**
```
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 10, in <module>
sys.exit(main())
File "/opt/airflow/airflow/__main__.py", line 58, in main
args.func(args)
File "/opt/airflow/airflow/cli/cli_config.py", line 49, in command
return func(*args, **kwargs)
File "/opt/airflow/airflow/utils/cli.py", line 111, in wrapper
return f(*args, **kwargs)
File "/opt/airflow/airflow/utils/providers_configuration_loader.py", line 55, in wrapped_function
return func(*args, **kwargs)
File "/opt/airflow/airflow/cli/commands/local_commands/dag_processor_command.py", line 54, in dag_processor
run_command_with_daemon_option(
File "/opt/airflow/airflow/cli/commands/local_commands/daemon_utils.py", line 86, in run_command_with_daemon_option
callback()
File "/opt/airflow/airflow/cli/commands/local_commands/dag_processor_command.py", line 57, in <lambda>
callback=lambda: run_job(job=job_runner.job, execute_callable=job_runner._execute),
File "/opt/airflow/airflow/utils/session.py", line 101, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/airflow/airflow/jobs/job.py", line 342, in run_job
return execute_job(job, execute_callable=execute_callable)
File "/opt/airflow/airflow/jobs/job.py", line 371, in execute_job
ret = execute_callable()
File "/opt/airflow/airflow/jobs/dag_processor_job_runner.py", line 61, in _execute
self.processor.run()
File "/opt/airflow/airflow/dag_processing/manager.py", line 252, in run
return self._run_parsing_loop()
File "/opt/airflow/airflow/dag_processing/manager.py", line 341, in _run_parsing_loop
self._collect_results()
File "/opt/airflow/airflow/utils/session.py", line 101, in wrapper
return func(*args, session=session, **kwargs)
File "/opt/airflow/airflow/dag_processing/manager.py", line 778, in _collect_results
self._file_stats[file] = process_parse_results(
File "/opt/airflow/airflow/dag_processing/manager.py", line 1099, in process_parse_results
update_dag_parsing_results_in_db(
File "/opt/airflow/airflow/dag_processing/collection.py", line 326, in update_dag_parsing_results_in_db
for attempt in run_with_db_retries(logger=log):
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 443, in __iter__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 398, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/opt/airflow/airflow/dag_processing/collection.py", line 336, in update_dag_parsing_results_in_db
DAG.bulk_write_to_db(bundle_name, bundle_version, dags, session=session)
File "/opt/airflow/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow/models/dag.py", line 1888, in bulk_write_to_db
asset_op.add_dag_asset_name_uri_references(session=session)
File "/opt/airflow/airflow/dag_processing/collection.py", line 685, in add_dag_asset_name_uri_references
self._add_dag_asset_references(
File "/opt/airflow/airflow/dag_processing/collection.py", line 680, in _add_dag_asset_references
session.execute(delete(model).where(tuple_(model.dag_id, getattr(model, attr)).in_(old_refs)))
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/orm/session.py", line 1717, in execute
result = conn._execute_20(statement, params or {}, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1710, in _execute_20
return meth(self, args_10style, kwargs_10style, execution_options)
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
ret = self._execute_context(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1816, in _execute_context
self._handle_dbapi_exception(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
util.raise_(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1810, in _execute_context
context = constructor(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 1037, in _init_compiled
expanded_state = compiled._process_parameters_for_postcompile(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/compiler.py", line 1257, in _process_parameters_for_postcompile
new_processors.update(
File "/usr/local/lib/python3.9/site-packages/sqlalchemy/sql/compiler.py", line 1265, in <genexpr>
and processors[name][j - 1] is not None
sqlalchemy.exc.StatementError: (builtins.IndexError) tuple index out of range
[SQL: DELETE FROM dag_schedule_asset_name_reference WHERE (dag_schedule_asset_name_reference.dag_id, dag_schedule_asset_name_reference.name) IN (__[POSTCOMPILE_param_1])]
[parameters: [{}]]
root@4b44e5c6544e:/opt/airflow#
```
### What you think should happen instead?
_No response_
### How to reproduce
1. Use below DAG
```
from __future__ import annotations

from airflow.decorators import dag, task
from airflow.sdk.definitions.asset import Asset
from airflow.sdk.definitions.asset.decorators import asset


@asset(uri="s3://bucket/asset1_producer", schedule=None)
def producer1():
    pass


@asset(uri="s3://bucket/asset2_producer", schedule=None)
def producer2():
    pass


@dag(
    schedule=Asset.ref(name="asset1_producer") & Asset.ref(name="asset2_producer"),
    catchup=False,
    tags=["asset"],
)
def consumer():
    @task()
    def process_nothing(triggering_asset_events):
        for a, events in triggering_asset_events.items():
            print(a.name, events)


consumer()
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Official Apache Airflow Helm Chart
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| closed | 2025-03-17T17:05:06Z | 2025-03-19T08:51:40Z | https://github.com/apache/airflow/issues/47872 | [
"kind:bug",
"priority:critical",
"area:core",
"affected_version:3.0.0beta"
] | vatsrahul1001 | 3 |
timkpaine/lantern | plotly | 195 | cut release | closed | 2019-08-16T14:28:19Z | 2019-08-19T21:10:13Z | https://github.com/timkpaine/lantern/issues/195 | [
"ready",
"feature"
] | timkpaine | 0 |
|
BayesWitnesses/m2cgen | scikit-learn | 88 | Reduce RAM and ROM footprint | I'm using `m2cgen` to convert some classifier to C. It works great and results are consistent, thanks for the library!
1. I have the problem that the compiled binaries are too large to fit on my embedded device. I checked, and the binaries are around double the size of those created with e.g. [`sklearn_porter`](https://github.com/nok/sklearn-porter). However, `m2cgen` is the only library that can convert my Python classifiers to C without introducing errors into the classification.
2. Even if I reduce the size of the classifier, I run into the problem that the RAM of the device is exceeded (think of something in the kB range).
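For context on point 1: m2cgen unrolls each tree into nested `if`/`else` branches, so the size of the generated C (and of the compiled binary) scales roughly with the total node count. A sketch of shrinking the model before export (the estimator settings are purely illustrative):
```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import m2cgen as m2c

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Fewer and shallower trees mean far fewer generated branches,
# and therefore a smaller binary.
clf = RandomForestClassifier(n_estimators=10, max_depth=6, random_state=0)
clf.fit(X, y)

c_code = m2c.export_to_c(clf)
print(len(c_code), "characters of generated C")
```
On the compiler side, size-oriented flags such as `-Os` and `-ffunction-sections -fdata-sections` with `-Wl,--gc-sections` at link time may also shave off some bytes.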
Do you have any idea how the footprint of the C code could be reduced? | open | 2019-05-13T13:41:26Z | 2019-07-31T14:15:04Z | https://github.com/BayesWitnesses/m2cgen/issues/88 | [] | skjerns | 9 |
noirbizarre/flask-restplus | api | 560 | Can't install development dependencies with documentation instructions | According to the [contribution guide](https://flask-restplus.readthedocs.io/en/latest/contributing.html), to install the development dependencies we should run `pip install -e .[dev]`. When I execute the command I get the following warning:
> flask-restplus 0.12.2.dev0 does not provide the extra 'dev'
Some packages are installed, but not everything listed in requirements/develop.pip. Here is the result of pip freeze:
```bash
$ pip freeze
aniso8601==4.0.1
Click==7.0
Flask==1.0.2
-e git+git@github.com:hygorxaraujo/flask-restplus.git@a8f35823fe40b2c7385632a2ad6b35b26467402c#egg=flask_restplus
itsdangerous==1.1.0
Jinja2==2.10
jsonschema==2.6.0
MarkupSafe==1.1.0
pytz==2018.7
six==1.11.0
Werkzeug==0.14.1
```
Steps to reproduce:
1. Fork flask-restplus repository in GitHub
2. Clone forked repository
3. Create new branch `git checkout -b new-branch`
4. Run `pip install -e .[dev]`
Am I missing something or is the documentation outdated?
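For reference, `pip install -e .[dev]` can only work if the project's packaging actually declares a `dev` extra. A hypothetical `setup.py` sketch of that declaration; the package names are placeholders, not flask-restplus's real dev dependencies:
```python
from setuptools import setup, find_packages

setup(
    name='flask-restplus',
    packages=find_packages(),
    extras_require={
        # Would need to mirror requirements/develop.pip for `.[dev]` to work.
        'dev': ['pytest', 'tox', 'blinker'],
    },
)
```
In the meantime, `pip install -r requirements/develop.pip` should install the listed development dependencies directly.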
Python: 3.6.7
Flask-Restplus: 0.12.2.dev | closed | 2018-11-28T17:05:10Z | 2019-04-19T13:07:00Z | https://github.com/noirbizarre/flask-restplus/issues/560 | [] | hygorxaraujo | 3 |
LAION-AI/Open-Assistant | machine-learning | 3,223 | Cannot open other tabs with custom preset | Cannot open other tabs with custom preset
https://www.youtube.com/watch?v=jTUiHbFnbP8 | open | 2023-05-24T13:03:30Z | 2024-05-25T13:47:31Z | https://github.com/LAION-AI/Open-Assistant/issues/3223 | [
"bug",
"website"
] | echo0x22 | 2 |
MaartenGr/BERTopic | nlp | 1,696 | topic_model.transform(docs)[0][i] is sometimes different from topic_model.transform(docs[i])[0][0] | Hello
I read https://maartengr.github.io/BERTopic/api/bertopic.html#bertopic._bertopic.BERTopic.transform and understood from the documents parameter (described as "A single document or a list of documents to predict on") that I could submit a list of documents or a single document and still receive the same result when predicting with a fitted model.
I found that this is not true. Am I overlooking something?
Below is a minimal working example.
```
from bertopic import BERTopic
from sklearn.datasets import fetch_20newsgroups

docs = fetch_20newsgroups(subset='all')['data'][:200]
topic_model = BERTopic().fit(docs)
topics, _ = topic_model.transform(docs)

import numpy as np
topics = np.array(topics)

# calling the model with a single document several times
import tqdm
topics_single = []
for doc in tqdm.tqdm(docs):
    topic, _ = topic_model.transform([doc])
    topics_single.append(topic[0])
topics_single = np.array(topics_single)

mask_identical = topics_single == topics
percentage_equal = 100 * np.sum(mask_identical) / len(mask_identical)
print(f"{percentage_equal=}%")  # returns for example about 60%, but varying

# loop till finding a different entry
for i in range(len(docs)):
    print(
        i,
        topic_model.transform(docs[i])[0][0],
        topic_model.transform([docs[i]])[0][0],
        topics[i],
        topic_model.transform(docs)[0][i],
    )
    if topic_model.transform(docs[i])[0][0] != topic_model.transform(docs)[0][i]:
        print(f"Different outcome at iteration {i}")
        break
```
The repeated execution with the same documents seems fine:
```
topics2, _ = topic_model.transform(docs)
percentage_equal_executed_with_multiple_docs = 100 * np.sum(np.array(topics2) == topics) / len(topics)
print(f"{percentage_equal_executed_with_multiple_docs=}%")  # this gives 100%
```
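One hypothesis worth testing against the example above: the document embeddings themselves are deterministic, but the dimensionality-reduction step (UMAP's `transform`) can depend on the composition of the batch it receives. A sketch that isolates this by precomputing embeddings once and reusing them (it reuses `docs` and `topic_model` from the snippet above and assumes the default English `sentence-transformers` backbone; swap in whatever your pipeline uses):
```python
import numpy as np
from sentence_transformers import SentenceTransformer

emb_model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = emb_model.encode(docs, show_progress_bar=False)

# Batch prediction with fixed, precomputed embeddings.
topics_batch, _ = topic_model.transform(docs, embeddings=embeddings)

# Per-document prediction with the same fixed embeddings.
topics_one = [
    topic_model.transform([doc], embeddings=emb[None, :])[0][0]
    for doc, emb in zip(docs, embeddings)
]
print(np.mean(np.array(topics_one) == np.array(topics_batch)))
```
If agreement is still well below 100% here, the batch effect lives in the UMAP/HDBSCAN stages rather than in the embedding model.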

Thank you in advance!
PS:
The python version is 3.10.12
The list of installed packages is
```
absl-py==1.0.0
accelerate==0.23.0
adagio==0.2.4
aiohttp==3.8.6
aiosignal==1.3.1
ansi2html==1.9.1
antlr4-python3-runtime==4.11.1
anyio==3.5.0
appdirs==1.4.4
arch==6.2.0
argon2-cffi==21.3.0
argon2-cffi-bindings==21.2.0
astor==0.8.1
asttokens==2.0.5
astunparse==1.6.3
async-timeout==4.0.3
attrs==22.1.0
audioread==3.0.1
azure-core==1.29.1
azure-cosmos==4.3.1
azure-storage-blob==12.19.0
azure-storage-file-datalake==12.14.0
backcall==0.2.0
bcrypt==3.2.0
beautifulsoup4==4.11.1
bertopic==0.16.0
black==22.6.0
bleach==4.1.0
blinker==1.4
blis==0.7.11
boto3==1.24.28
botocore==1.27.96
cachetools==5.3.2
catalogue==2.0.10
category-encoders==2.6.2
certifi==2022.12.7
cffi==1.15.1
chardet==4.0.0
charset-normalizer==2.0.4
click==8.1.7
cloudpathlib==0.16.0
cloudpickle==2.0.0
cmake==3.27.7
cmdstanpy==1.2.0
comm==0.1.2
confection==0.1.3
configparser==5.2.0
contourpy==1.0.5
cryptography==39.0.1
cycler==0.11.0
cymem==2.0.8
Cython==0.29.32
dacite==1.8.1
dash==2.14.2
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-table==5.0.0
dask==2023.12.0
databricks-automl-runtime==0.2.20
databricks-cli==0.18.0
databricks-feature-engineering==0.1.2
databricks-feature-store==0.16.1
databricks-sdk==0.1.6
dataclasses-json==0.6.2
datasets==2.14.5
dbl-tempo==0.1.26
dbus-python==1.2.18
debugpy==1.6.7
decorator==5.1.1
deepspeed==0.11.1
defusedxml==0.7.1
dill==0.3.6
diskcache==5.6.3
distlib==0.3.7
distributed==2023.12.0
distro==1.7.0
distro-info==1.1+ubuntu0.1
docstring-to-markdown==0.11
dtw-python==1.3.0
einops==0.7.0
entrypoints==0.4
evaluate==0.4.1
executing==0.8.3
facets-overview==1.1.1
fastjsonschema==2.19.0
fasttext==0.9.2
filelock==3.9.0
filterpy==1.4.5
flash-attn==2.3.2
Flask==2.2.5
flatbuffers==23.5.26
fonttools==4.25.0
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.6.0
fugue==0.8.7
fugue-sql-antlr==0.2.0
future==0.18.3
gast==0.4.0
gitdb==4.0.11
GitPython==3.1.27
gluonts==0.14.3
google-api-core==2.14.0
google-auth==2.21.0
google-auth-oauthlib==1.0.0
google-cloud-core==2.3.3
google-cloud-storage==2.11.0
google-crc32c==1.5.0
google-pasta==0.2.0
google-resumable-media==2.6.0
googleapis-common-protos==1.61.0
greenlet==2.0.1
grpcio==1.48.2
grpcio-status==1.48.1
gunicorn==20.1.0
gviz-api==1.10.0
h5py==3.7.0
hdbscan==0.8.33
hjson==3.1.0
hmmlearn==0.3.0
holidays==0.35
horovod==0.28.1
htmlmin==0.1.12
httplib2==0.20.2
huggingface-hub==0.16.4
idna==3.4
ImageHash==4.3.1
imbalanced-learn==0.11.0
importlib-metadata==7.0.0
importlib-resources==6.1.1
ipykernel==6.25.0
ipython==8.14.0
ipython-genutils==0.2.0
ipywidgets==7.7.2
isodate==0.6.1
itsdangerous==2.0.1
jedi==0.18.1
jeepney==0.7.1
Jinja2==3.1.2
jmespath==0.10.0
joblib==1.2.0
joblibspark==0.5.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.17.3
jupyter-client==7.3.4
jupyter-server==1.23.4
jupyter_core==5.2.0
jupyterlab-pygments==0.1.2
jupyterlab-widgets==1.0.0
keras==2.14.0
keras-self-attention==0.51.0
keyring==23.5.0
kiwisolver==1.4.4
kotsu==0.3.3
langchain==0.0.314
langcodes==3.3.0
langsmith==0.0.64
launchpadlib==1.10.16
lazr.restfulclient==0.14.4
lazr.uri==1.0.6
lazy_loader==0.3
libclang==15.0.6.1
librosa==0.10.1
lightgbm==4.1.0
lit==17.0.5
llvmlite==0.39.1
locket==1.0.0
lxml==4.9.1
Mako==1.2.0
Markdown==3.4.1
MarkupSafe==2.1.1
marshmallow==3.20.1
matplotlib==3.7.0
matplotlib-inline==0.1.6
mccabe==0.7.0
mistune==0.8.4
ml-dtypes==0.2.0
mlflow-skinny==2.8.0
mne==1.6.0
more-itertools==8.10.0
mpmath==1.2.1
msgpack==1.0.7
multidict==6.0.4
multimethod==1.10
multiprocess==0.70.14
murmurhash==1.0.10
mypy-extensions==0.4.3
nbclassic==0.5.2
nbclient==0.5.13
nbconvert==6.5.4
nbformat==5.7.0
nest-asyncio==1.5.6
networkx==2.8.4
ninja==1.11.1.1
nltk==3.7
nodeenv==1.8.0
notebook==6.5.2
notebook_shim==0.2.2
numba==0.56.4
numpy==1.23.5
oauthlib==3.2.0
openai==0.28.1
opt-einsum==3.3.0
packaging==22.0
pandas==1.5.3
pandocfilters==1.5.0
paramiko==2.9.2
parso==0.8.3
partd==1.4.1
pathspec==0.10.3
pathy==0.10.3
patsy==0.5.3
petastorm==0.12.1
pexpect==4.8.0
phik==0.12.3
pickleshare==0.7.5
Pillow==9.4.0
platformdirs==2.5.2
plotly==5.9.0
pluggy==1.0.0
pmdarima==2.0.3
polars==0.19.19
pooch==1.8.0
preshed==3.0.9
prompt-toolkit==3.0.36
prophet==1.1.5
protobuf==4.24.0
psutil==5.9.0
psycopg2==2.9.3
ptyprocess==0.7.0
pure-eval==0.2.2
py-cpuinfo==9.0.0
pyaml==23.9.7
pyarrow==8.0.0
pyarrow-hotfix==0.5
pyasn1==0.4.8
pyasn1-modules==0.2.8
pybind11==2.11.1
pycatch22==0.4.2
pycparser==2.21
pydantic==1.10.6
pyflakes==3.1.0
Pygments==2.11.2
PyGObject==3.42.1
PyJWT==2.3.0
pykalman-bardo==0.9.7
PyNaCl==1.5.0
pynndescent==0.5.11
pyod==1.1.2
pyodbc==4.0.32
pyparsing==3.0.9
pyright==1.1.294
pyrsistent==0.18.0
pytesseract==0.3.10
python-apt==2.4.0+ubuntu2
python-dateutil==2.8.2
python-editor==1.0.4
python-lsp-jsonrpc==1.1.1
python-lsp-server==1.8.0
pytoolconfig==1.2.5
pytz==2022.7
PyWavelets==1.4.1
PyYAML==6.0
pyzmq==23.2.0
qpd==0.4.4
regex==2022.7.9
requests==2.28.1
requests-oauthlib==1.3.1
responses==0.18.0
retrying==1.3.4
rope==1.7.0
rsa==4.9
s3transfer==0.6.2
safetensors==0.4.0
scikit-base==0.6.1
scikit-learn==1.1.1
scikit-optimize==0.9.0
scikit-posthocs==0.8.0
scipy==1.10.0
seaborn==0.12.2
seasonal==0.3.1
SecretStorage==3.3.1
Send2Trash==1.8.0
sentence-transformers==2.2.2
sentencepiece==0.1.99
shap==0.43.0
simplejson==3.17.6
six==1.16.0
skpro==2.1.1
sktime==0.24.1
slicer==0.0.7
smart-open==5.2.1
smmap==5.0.0
sniffio==1.2.0
sortedcontainers==2.4.0
soundfile==0.12.1
soupsieve==2.3.2.post1
soxr==0.3.7
spacy==3.7.1
spacy-legacy==3.0.12
spacy-loggers==1.0.5
spark-tensorflow-distributor==1.0.0
SQLAlchemy==1.4.39
sqlglot==20.2.0
sqlparse==0.4.2
srsly==2.4.8
ssh-import-id==5.11
stack-data==0.2.0
stanio==0.3.0
statsforecast==1.6.0
statsmodels==0.13.5
stumpy==1.12.0
sympy==1.11.1
tabulate==0.8.10
tangled-up-in-unicode==0.2.0
tbats==1.1.3
tblib==3.0.0
tenacity==8.1.0
tensorboard==2.14.0
tensorboard-data-server==0.7.2
tensorboard-plugin-profile==2.14.0
tensorflow==2.14.0
tensorflow-estimator==2.14.0
tensorflow-io-gcs-filesystem==0.34.0
termcolor==2.3.0
terminado==0.17.1
thinc==8.2.1
threadpoolctl==2.2.0
tiktoken==0.5.1
tinycss2==1.2.1
tokenize-rt==4.2.1
tokenizers==0.14.0
tomli==2.0.1
toolz==0.12.0
torch==2.0.1+cu118
torchvision==0.15.2+cu118
tornado==6.1
tqdm==4.64.1
traitlets==5.7.1
transformers==4.34.0
triad==0.9.3
triton==2.0.0
tsfresh==0.20.1
tslearn==0.5.3.2
typeguard==2.13.3
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.4.0
ujson==5.4.0
umap-learn==0.5.5
unattended-upgrades==0.1
urllib3==1.26.14
virtualenv==20.16.7
visions==0.7.5
wadllib==1.3.6
wasabi==1.1.2
wcwidth==0.2.5
weasel==0.3.4
webencodings==0.5.1
websocket-client==0.58.0
Werkzeug==2.2.2
whatthepatch==1.0.2
widgetsnbextension==3.6.1
wordcloud==1.9.2
wrapt==1.14.1
xarray==2023.12.0
xgboost==1.7.6
xxhash==3.4.1
yapf==0.33.0
yarl==1.9.2
ydata-profiling==4.2.0
zict==3.0.0
zipp==3.11.0
```
| open | 2023-12-14T22:15:35Z | 2024-01-27T01:19:11Z | https://github.com/MaartenGr/BERTopic/issues/1696 | [] | jonaslandsgesell | 4 |
pallets/flask | flask | 5,366 | GPU performance issues of flask framework | I load an ultralytics YOLO model outside the framework, read the image from the client in a request-handling function, and run inference. I found that inference is about 10 times slower than normal.
I don't have this problem when I use FastAPI.
```python
from flask import Flask, request
from ultralytics import YOLO
import cv2
import numpy
import base64

model = YOLO("/workspace/yolov8s.pt")


def b64_cv(frame_b64):
    return cv2.imdecode(numpy.frombuffer(base64.b64decode(frame_b64), numpy.uint8), cv2.IMREAD_COLOR)


app = Flask(__name__)


@app.route('/frame', methods=['POST'])
def read_item():
    data = request.json
    frame = data.get('frame', None)
    results = model(source=b64_cv(frame))
    return {}


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
```
```python
import requests
import cv2
import numpy
import base64


def cv_b64(frame_cv):
    return str(base64.b64encode(cv2.imencode(".jpg", frame_cv)[1]))[2:-1]


while True:
    stream = cv2.VideoCapture("/workspace/road_30_1920x1080.mp4")
    while stream.isOpened():
        ret, frame = stream.read()
        if not ret: break
        data = {"frame": cv_b64(frame)}
        response = requests.post(url="http://127.0.0.1:8000/frame", json=data)
```
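One variable that may be worth isolating is threading: Flask's development server handles each request in its own thread by default, and PyTorch inference launched from short-lived request threads can interact badly with Torch's own thread pool. A hypothetical experiment against the server above (these are settings to try, not a confirmed fix; `app` is the Flask application from the snippet above):
```python
import torch

# Pin Torch's intra-op threads explicitly (an assumption to test).
torch.set_num_threads(4)

if __name__ == '__main__':
    # threaded=False serves requests on a single thread; if inference
    # speed recovers, the slowdown is tied to per-request threading.
    app.run(host='0.0.0.0', port=8000, threaded=False)
```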
Environment:
- Python version: 3.10.12
- Flask version: 3.0.0
| closed | 2023-12-22T00:22:59Z | 2023-12-22T01:55:38Z | https://github.com/pallets/flask/issues/5366 | [] | hky3535 | 0 |
2noise/ChatTTS | python | 256 | In batch inference, changing the order of the input texts changes the synthesis results | While experimenting with batch synthesis, I found that the relative position of the entries in `texts` affects the audio synthesized for each text, not only in the MD5 of the resulting audio files but also in how the audio subjectively sounds.
I have double-checked and verified that I successfully fixed the global random seed: `torch.manual_seed(args.seed)`
Here is my concrete test scenario:
I want to synthesize the two sentences "你好" and "我是Chat T T S" separately. Since ChatTTS itself supports batch inference (you only need to pass a list as the `text` argument of `chat.infer`), I tried putting both sentences into one list and passing it to `chat.infer`:
```
text = ["我是CHAT T T S[uv_break]"] + ["你好[uv_break]"]
wavs = chat.infer(text, ...)
```
Running this multiple times, the MD5 of each generated audio file is identical on every run (which confirms that the global random seed is indeed fixed).
However, I noticed by chance that after swapping the order of the two sentences in the list, the synthesized audio fluctuates noticeably in how it sounds; see the result of the third run in the screenshot. It is not just that the audio file MD5 changes; worse, the perceived audio changes significantly.
Attached is a screenshot of my runs:
<img width="1262" alt="image" src="https://github.com/2noise/ChatTTS/assets/22251425/891f8e54-c281-4de5-9dd1-936793b0a4f5">
- I searched the existing issues for related content but did not find a similar report.
- I tried debugging in ChatTTS/model/gpt.py and found that all the batch-related code behaves as expected (the batch dimension seems to stay independent throughout; it is never flattened together with the T dimension).
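A minimal harness for checking per-sentence determinism, reseeding immediately before every single-text call so that batch composition becomes the only remaining variable (a sketch; `chat` is the model object from the snippet above, and the seed value is arbitrary):
```python
import torch

def infer_single(chat, text, seed):
    """Reseed right before each call so every single-text inference
    starts from the same RNG state, regardless of what ran before."""
    torch.manual_seed(seed)
    return chat.infer([text])

wav_a = infer_single(chat, "我是CHAT T T S[uv_break]", seed=42)
wav_b = infer_single(chat, "你好[uv_break]", seed=42)
```
If single-text outputs are stable under this harness while batch outputs still depend on ordering, the divergence most likely comes from autoregressive sampling consuming the RNG in a different order per batch (and/or from padding effects), not from the seed failing to be fixed.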
Has anyone else run into this? Has the cause been tracked down? | closed | 2024-06-04T17:03:20Z | 2024-06-05T14:00:59Z | https://github.com/2noise/ChatTTS/issues/256 | [] | viewlei | 1 |
pydata/bottleneck | numpy | 127 | Should we port bottleneck to C? | I'm trying to port nansum to C (without using Cython) to get a feel for how bottleneck would look written in pure C.
What I have so far (in the c_rewrite branch) is a nansum with reduced features that compiles but does not work at all. I have not yet tried to deal with reference counting because I don't yet know how. Any help or comments appreciated.
Here's a demo on how to compile:
```
/bottleneck/bottleneck/template (c_rewrite)$ python setup.py build_ext --inplace
running build_ext
building 'nansum' extension
<snip>
In [1]: from nansum import nansum
In [2]: a=np.random.rand(4)
In [3]: nansum(a)
---------------------------------------------------------------------------
SystemError Traceback (most recent call last)
<ipython-input-3-1d96a990bfd9> in <module>()
----> 1 nansum(a)
SystemError: error return without exception set
```
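Once the module compiles and imports cleanly, a quick parity check against NumPy (including the NaN path) is a handy validation; a sketch:
```python
import numpy as np
from nansum import nansum  # the C extension being developed above

a = np.random.rand(1000)
a[::7] = np.nan  # exercise the NaN-skipping path

assert np.isclose(nansum(a), np.nansum(a))
print("nansum matches np.nansum")
```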
| closed | 2016-06-15T15:15:44Z | 2016-08-01T18:22:48Z | https://github.com/pydata/bottleneck/issues/127 | [] | kwgoodman | 30 |
yzhao062/pyod | scikit-learn | 350 | `predict_proba` documented with wrong output shape | According to the docs, `predict_proba` returns
> numpy array of shape (n_samples,)
However, it actually returns `(n_samples, n_classes)`.
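A quick check confirming the actual shape, assuming any detector (e.g. `KNN`):
```python
import numpy as np
from pyod.models.knn import KNN

X = np.random.rand(100, 5)
clf = KNN().fit(X)

proba = clf.predict_proba(X)
print(proba.shape)  # (100, 2): columns are [P(inlier), P(outlier)]
```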
https://github.com/yzhao062/pyod/blob/c8d07f723c588aee40fccad6d0258c814384a057/pyod/models/base.py#L197-L203 | closed | 2021-10-25T03:35:14Z | 2021-10-27T02:26:03Z | https://github.com/yzhao062/pyod/issues/350 | [] | Dobatymo | 1 |
aidlearning/AidLearning-FrameWork | jupyter | 106 | How to install go, goland, clion, clang, etc. Thank you | closed | 2020-06-02T01:09:40Z | 2020-07-28T16:03:43Z | https://github.com/aidlearning/AidLearning-FrameWork/issues/106 | [] | ddkwork | 3 |
|
ageitgey/face_recognition | machine-learning | 631 | CMake Error at C:/dlib-19.6/dlib/cmake_utils/add_python_module:116 (message): Boost python library not found. Call Stack (most recent call first): CMakeLists.txt:9 (include) -- Configuring incomplete, errors occurred! See also "C:/dlib-19.6/tools/python/build/CMakeFiles/CMakeOutput.log". error: cmake configuration failed! | * face_recognition version:
* Python version: 3.5
* Operating System: Windows 10
### Description
Trying to install the face_recognition library.
### What I Did
```
C:\dlib-19.6>python setup.py install
running install
running bdist_egg
running build
Detected Python architecture: 64bit
Detected platform: win32
Removing build directory C:\dlib-19.6\./tools/python/build
Configuring cmake ...
-- Building for: Visual Studio 15 2017
-- The C compiler identification is MSVC 19.15.26730.0
-- The CXX compiler identification is MSVC 19.15.26730.0
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x86/cl.exe
-- Check for working C compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x86/cl.exe -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x86/cl.exe
-- Check for working CXX compiler: C:/Program Files (x86)/Microsoft Visual Studio/2017/Community/VC/Tools/MSVC/14.15.26726/bin/Hostx86/x86/cl.exe -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Warning at C:/Program Files/CMake/share/cmake-3.12/Modules/FindBoost.cmake:1727 (message):
No header defined for python-py34; skipping header check
Call Stack (most recent call first):
C:/dlib-19.6/dlib/cmake_utils/add_python_module:61 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
CMake Warning at C:/Program Files/CMake/share/cmake-3.12/Modules/FindBoost.cmake:1727 (message):
No header defined for python-py35; skipping header check
Call Stack (most recent call first):
C:/dlib-19.6/dlib/cmake_utils/add_python_module:63 (FIND_PACKAGE)
CMakeLists.txt:9 (include)
-- Could NOT find Boost
-- Could NOT find Boost
-- Could NOT find Boost
-- Found PythonLibs: C:/Users/DELL/Anaconda3/libs/python36.lib (found suitable version "3.6.5", minimum required is "3.4")
-- *****************************************************************************************************
-- We couldn't find the right version of boost python. If you installed boost and you are still getting this error then you might have installed a version of boost that was compiled with a different version of visual studio than the one you are using. So you have to make sure that the version of visual studio is the same version that was used to compile the copy of boost you are using.
-- Set the BOOST_ROOT and BOOST_LIBRARYDIR environment variables before running cmake.
-- E.g. Something like this:
-- set BOOST_ROOT=C:\local\boost_1_57_0
-- set BOOST_LIBRARYDIR=C:\local\boost_1_57_0\stage\lib
--
-- You will also likely need to compile boost yourself rather than using one of the precompiled
-- windows binaries. Do this by going to the folder tools\build\ within boost and running
-- bootstrap.bat. Then run the command:
-- b2 install
-- And then add the output bin folder to your PATH. Usually this is the C:\boost-build-engine\bin
-- folder. Finally, go to the boost root and run a command like this:
-- b2 -a --with-python address-model=64 toolset=msvc runtime-link=static
-- When it completes, set BOOST_LIBRARYDIR equal to wherever b2 put the compiled libraries.
-- Note that you will need to set the address-model based on if you want a 32 or 64bit python library.
--
-- Next, when you invoke cmake to compile dlib you may have to use cmake's -G option to set the
-- 64 vs. 32bit mode of visual studio. Also, if you want a Python3 library you will need to
-- add -DPYTHON3=1. You do this with a statement like:
-- cmake -G "Visual Studio 12 2013 Win64" -DPYTHON3=1 ..\..\tools\python
-- Rather than:
-- cmake ..\..\tools\python
-- Which will build a 32bit Python2 module by default on most systems.
--
-- *****************************************************************************************************
CMake Error at C:/dlib-19.6/dlib/cmake_utils/add_python_module:116 (message):
Boost python library not found.
Call Stack (most recent call first):
CMakeLists.txt:9 (include)
-- Configuring incomplete, errors occurred!
See also "C:/dlib-19.6/tools/python/build/CMakeFiles/CMakeOutput.log".
error: cmake configuration failed!
```
| open | 2018-09-26T10:51:37Z | 2019-03-06T21:51:50Z | https://github.com/ageitgey/face_recognition/issues/631 | [] | saurabhbidwai | 1 |
JaidedAI/EasyOCR | pytorch | 514 | Incompatible options: paragraph=True, output_format='dict' | Running the example:
```python
import easyocr
reader = easyocr.Reader(['ch_sim','en']) # need to run only once to load model into memory
result = reader.readtext('chinese.jpg', paragraph=True, output_format='dict')
```
gives the error:
```python
.../easyocr/easyocr.py in <listcomp>(.0)
365 return [item[1] for item in result]
366 elif output_format == 'dict':
--> 367 return [ {'boxes':item[0],'text':item[1],'confident':item[2]} for item in result]
368 else:
369 return result
IndexError: list index out of range
```
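The dict branch above unconditionally indexes `item[2]`, and with `paragraph=True` there is no third element. A hypothetical tolerant version of that list comprehension (a sketch, not the project's actual fix):
```python
def to_dict_output(result):
    """Hypothetical tolerant version of the dict branch in easyocr.py:
    only include 'confident' when the item actually has a third element."""
    return [
        {'boxes': item[0], 'text': item[1],
         **({'confident': item[2]} if len(item) > 2 else {})}
        for item in result
    ]
```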
This happens because the `paragraph=True` option produces a `result` that is a list of 2-element lists, while `output_format='dict'` expects 3-element lists (the "confident" value is missing). | closed | 2021-08-10T16:23:04Z | 2023-03-23T09:46:13Z | https://github.com/JaidedAI/EasyOCR/issues/514 | [] | AndreyPikunov | 2 |
ipython/ipython | data-science | 14,102 | Can IPython benefit from PEP 703 (making the GIL optional)? | Hi!
I'm Gabriel de Marmiesse and I'm helping Sam Gross to find out if maintainers of selected Python packages would benefit from PEP703.
If you don't know what PEP 703 is: it's about making the global interpreter lock (GIL) optional.
Long story short, there would be a ~10% performance impact on single-threaded programs, but it would allow multithreaded programs to take full advantage of multiple cores.
[You can play with it here](https://github.com/colesbury/nogil).
If you have the time, we would like to know whether the maintainers of IPython would benefit from PEP 703 or not. There are multiple situations IPython could be in.
1) IPython already uses threads, and PEP 703 will likely make some features faster by allowing parallelization of CPU computation without any change.
2) IPython doesn't use threads, but PEP 703 would allow you to rewrite some features with multithreading, thus making them faster.
3) IPython uses multithreading via a low-level language (Rust, C, C++...) or multiprocessing, and PEP 703 would allow a rewrite and lower the maintenance burden.
4) There is no computation that can be parallelized.
5) IPython wouldn't use multithreading itself, but it's very likely that users will call this package's functions and classes from multiple threads because the workload can often be parallelized.
The Python language could never use multiple cores with multithreading, so it's hard to imagine what the future of the language would look like if we were to enable it. Feel free to tell us if you see new potential use cases with this PEP. Thanks for your time!
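For concreteness, the kind of CPU-bound multithreaded workload PEP 703 targets: under the current GIL these threads effectively serialize, while on the nogil build they can run on separate cores.
```python
import threading

def busy(n: int) -> int:
    # Pure-Python CPU work: it holds the GIL on current CPython,
    # so extra threads add no speedup today.
    total = 0
    for i in range(n):
        total += i * i
    return total

threads = [threading.Thread(target=busy, args=(5_000_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```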
If you want to know more about PEP 703:
* [The PEP](https://peps.python.org/pep-0703/)
* [Python 3.9 implementation with install instructions](https://github.com/colesbury/nogil)
* [Python 3.12 implementation](https://github.com/colesbury/nogil-3.12)
* [Discussion about PEP 703](https://discuss.python.org/t/pep-703-making-the-global-interpreter-lock-optional-3-12-updates/26503/141)
* [Discussion about the multicore future of Python](https://discuss.python.org/t/a-fast-free-threading-python/27903) | open | 2023-06-22T14:08:15Z | 2023-06-22T14:08:15Z | https://github.com/ipython/ipython/issues/14102 | [] | gabrieldemarmiesse | 0 |
indico/indico | sqlalchemy | 6,395 | Cannot move session to another day | **Is your feature request related to a problem? Please describe.**
In https://indico.cern.ch/event/1381446/ I had a session scheduled on Wednesday and wanted to move the whole session to Friday. Strangely I could edit the start and end **times** of the session but not the day.

**Describe the solution you'd like**
I would like a calendar next to the starting times.
**Describe alternatives you've considered**
Instead I created a session on Friday and moved the talks to this new session one-by-one. That works for 5 talks, but can be painful when there are 20.
| closed | 2024-06-17T07:50:03Z | 2025-02-26T14:15:20Z | https://github.com/indico/indico/issues/6395 | [
"enhancement"
] | pkoppenb | 1 |
explosion/spaCy | data-science | 13,628 | ValueError while importing spacy module related to thinc (?) |
This is the description of the error: "ValueError: numpy.dtype size changed, may indicate binary incompatibility. Expected 96 from C header, got 88 from PyObject"

## Your Environment
* Operating System: Windows
* Python Version Used: 3.9.18
* spaCy Version Used: 3.7.6
* Environment Information: Enterprise codespace
I encounter the error below when importing the spacy module. It only happens sometimes:

| closed | 2024-09-19T18:12:57Z | 2024-10-31T00:02:56Z | https://github.com/explosion/spaCy/issues/13628 | [] | alexcorral | 2 |