repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/peft | pytorch | 1,935 | [Call for contributions] help us improve LoKr, LoHa, and other LyCORIS | Originally reported by @bghira in https://github.com/huggingface/peft/issues/1931.
Our LoKr, LoHA, and other LyCORIS modules are outdated and could benefit from your help quite a bit. The following is a list of things that need modifications and fixing:
- [ ] fixed rank dropout implementation
- [ ] fixed maths (not multiplying against the vector, but only the scalar)
- [ ] full matrix tuning
- [ ] 1x1 convolutions
- [ ] quantised LoHa/LoKr
- [ ] weight-decomposed LoHa/LoKr
So, if you are interested, feel free to take one of these up at a time and open PRs. Of course, we will be with you throughout the PRs, learning from them and providing guidance as needed.
Please mention this issue when opening PRs and tag @BenjaminBossan and myself. | open | 2024-07-18T04:42:18Z | 2025-02-06T14:24:43Z | https://github.com/huggingface/peft/issues/1935 | [
"wip",
"contributions-welcome",
"good-second-pr"
] | sayakpaul | 42 |
davidsandberg/facenet | tensorflow | 276 | Factors that influence distance between face embeddings | Hi,
I am using the facenet project to calculate the distance between two faces in images, in order to evaluate whether the two persons depicted are the same or not, following the compare.py file.
Normally it is claimed that if the distance <= 1 the faces depicted are the same person.
However, after making a lot of tests, the distance threshold can differ for different cases.
I was wondering what can influence this difference in the distance threshold? The reason is that I want to come up with a number for the distance threshold to use as a metric that produces correct results in good percent.
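For reference, a minimal sketch of the comparison I am doing (assuming the embeddings come from compare.py; the function name and threshold default are my own):
```python
import numpy as np

def same_person(emb1, emb2, threshold=1.0):
    # Euclidean distance between the two face embeddings (128-d in facenet)
    distance = np.linalg.norm(np.asarray(emb1) - np.asarray(emb2))
    return distance, distance <= threshold
```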
Is there something I can improve in the code, or are there criteria to keep in mind when selecting the images to be compared? For example, the lightness of the image, or whether the face is turned slightly to the side, etc. What prerequisites should an image meet in order to give it as input to the comparison?
Thanks in advance!
| closed | 2017-05-15T13:03:35Z | 2017-06-03T09:38:31Z | https://github.com/davidsandberg/facenet/issues/276 | [] | Dagalaki | 1 |
axnsan12/drf-yasg | django | 472 | Swagger throws Internal Server Error when using reverse relationships in serializer | **error:**
File "/usr/local/lib/python3.6/dist-packages/drf_yasg/inspectors/field.py", line 85, in field_to_swagger_object
raise SwaggerGenerationError("cannot instantiate nested serializer as " + swagger_object_type.__name__)
drf_yasg.errors.SwaggerGenerationError: cannot instantiate nested serializer as Parameter
```
# models
from django.db import models

class ModelA(BaseModel):
    # ...some fields...
    class Meta:
        db_table = 'model_a'

class ModelB(BaseModel):
    model_a_id = models.ForeignKey(ModelA, on_delete=models.CASCADE, related_name='model_b_info')
    # ...some fields...
    class Meta:
        db_table = 'model_b'

# serializers
from rest_framework import serializers

class ModelBReadSerializer(serializers.ModelSerializer):
    some_id = someSerializer(many=True, read_only=True)  # someSerializer defined elsewhere

    class Meta:
        model = ModelB
        fields = "__all__"

class ModelASerializer(serializers.ModelSerializer):
    model_b_info = ModelBReadSerializer(many=True, read_only=True)
    # ...more fields...

    class Meta:
        model = ModelA
        fields = "__all__"
```
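A hedged note: the error seems to fire when drf-yasg tries to flatten a serializer with nested fields into query/form parameters, so one possible workaround might be to declare the request schema explicitly with `swagger_auto_schema` so the nested serializer is used as a body, not a parameter. A sketch under that assumption (the view class is hypothetical):
```python
from drf_yasg.utils import swagger_auto_schema
from rest_framework import viewsets

class ModelAViewSet(viewsets.ModelViewSet):  # hypothetical view
    serializer_class = ModelASerializer

    @swagger_auto_schema(request_body=ModelASerializer)
    def create(self, request, *args, **kwargs):
        return super().create(request, *args, **kwargs)
```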
| open | 2019-10-15T07:20:58Z | 2025-03-07T12:16:29Z | https://github.com/axnsan12/drf-yasg/issues/472 | [
"triage"
] | JaveedSpritle | 2 |
neuml/txtai | nlp | 589 | Update torch version in Dockerfile | Currently, the CPU build of Docker still uses torch 1.x. Update to the latest version. | closed | 2023-10-27T11:17:22Z | 2023-10-27T19:19:13Z | https://github.com/neuml/txtai/issues/589 | [] | davidmezzetti | 0 |
Ehco1996/django-sspanel | django | 511 | Support traffic multiplier for relay nodes | | closed | 2021-04-28T14:06:50Z | 2021-12-28T00:41:00Z | https://github.com/Ehco1996/django-sspanel/issues/511 | [
"enhancement"
] | Ehco1996 | 4 |
aiortc/aioquic | asyncio | 379 | WebTransport server script | Is there a webtransport server script?
Or can the [http3_server.py](https://github.com/aiortc/aioquic/blob/main/examples/http3_server.py) script receive the streams (unidirectional & bidirectional) and datagrams?
If yes, does it also work with a WebTransport JavaScript client? | closed | 2023-06-09T15:03:47Z | 2023-07-04T06:39:00Z | https://github.com/aiortc/aioquic/issues/379 | [] | pj8912 | 3 |
oegedijk/explainerdashboard | dash | 30 | Request: Plotting parameters in WhatifComponent | As far as I can see, WhatifComponent includes ShapContributionsGraphComponent, but does not include plot styling parameters such as "sort" and "orientation". I think it makes sense to add these :-) | closed | 2020-11-26T16:34:06Z | 2020-12-01T15:41:28Z | https://github.com/oegedijk/explainerdashboard/issues/30 | [] | hkoppen | 5 |
dgtlmoon/changedetection.io | web-scraping | 2,376 | [feature] Add possibility to save the backup per command | **Version and OS**
Synology DSM 7.2.1 with docker
**Is your feature request related to a problem? Please describe.**
It would be great to have an automatic export function for the links, so that this doesn't have to be done by hand.
Therefore it would be good to have an export command which can be triggered through e.g. the Synology task manager to run the export, e.g., every Friday at 01:00.
**Describe the solution you'd like**
For example for paperless-ngx there is a command like that
bash -c "docker exec Paperless-ngx document_exporter ../export -z"
Something similar for Changedetection would be nice.
**Describe the use-case and give concrete real-world examples**
To have a fully automated system, an automatic backup solution would also be nice, so that it is not required to do the backup every time I change something in the list.
"enhancement"
] | update-freak | 3 |
ageitgey/face_recognition | machine-learning | 742 | Convert image to geometric parameters in python | I am trying to use the tutorial from the Kaggle site https://www.kaggle.com/c/facial-keypoints-detection#getting-started-with-r
They perform face recognition using the R language, but the analysis is not important. It doesn't matter!!!
It is important that they work with transformed parameters (this data is ready for analysis)!
I'm interested in the process of preparing (converting) the data from the image to the data frame.
Here, they provided two datasets (test and train). These datasets have the pictures decomposed into geometric parameters, like here:
data.frame': 7049 obs. of 30 variables:
$ left_eye_center_x : num 66 64.3 65.1 65.2 66.7 ...
$ left_eye_center_y : num 39 35 34.9 37.3 39.6 ...
$ right_eye_center_x : num 30.2 29.9 30.9 32 32.2 ...
$ right_eye_center_y : num 36.4 33.4 34.9 37.3 38 ...
$ left_eye_inner_corner_x : num 59.6 58.9 59.4 60 58.6 ...
$ left_eye_inner_corner_y : num 39.6 35.3 36.3 39.1 39.6 ...
$ left_eye_outer_corner_x : num 73.1 70.7 71 72.3 72.5 ...
$ left_eye_outer_corner_y : num 40 36.2 36.3 38.4 39.9 ...
$ right_eye_inner_corner_x : num 36.4 36 37.7 37.6 37 ...
$ right_eye_inner_corner_y : num 37.4 34.4 36.3 38.8 39.1 ...
$ right_eye_outer_corner_x : num 23.5 24.5 25 25.3 22.5 ...
$ right_eye_outer_corner_y : num 37.4 33.1 36.6 38 38.3 ...
$ left_eyebrow_inner_end_x : num 57 54 55.7 56.4 57.2 ...
$ left_eyebrow_inner_end_y : num 29 28.3 27.6 30.9 30.7 ...
$ left_eyebrow_outer_end_x : num 80.2 78.6 78.9 77.9 77.8 ...
$ left_eyebrow_outer_end_y : num 32.2 30.4 32.7 31.7 31.7 ...
$ right_eyebrow_inner_end_x: num 40.2 42.7 42.2 41.7 38 ...
$ right_eyebrow_inner_end_y: num 29 26.1 28.1 31 30.9 ...
$ right_eyebrow_outer_end_x: num 16.4 16.9 16.8 20.5 15.9 ...
$ right_eyebrow_outer_end_y: num 29.6 27.1 32.1 29.9 30.7 ...
$ nose_tip_x : num 44.4 48.2 47.6 51.9 43.3 ...
$ nose_tip_y : num 57.1 55.7 53.5 54.2 64.9 ...
$ mouth_left_corner_x : num 61.2 56.4 60.8 65.6 60.7 ...
$ mouth_left_corner_y : num 80 76.4 73 72.7 77.5 ...
$ mouth_right_corner_x : num 28.6 35.1 33.7 37.2 31.2 ...
$ mouth_right_corner_y : num 77.4 76 72.7 74.2 77 ...
$ mouth_center_top_lip_x : num 43.3 46.7 47.3 50.3 45 ...
$ mouth_center_top_lip_y : num 72.9 70.3 70.2 70.1 73.7 ...
$ mouth_center_bottom_lip_x: num 43.1 45.5 47.3 51.6 44.2 ...
$ mouth_center_bottom_lip_y: num 84.5 85.5 78.7 78.3 86.9 ...
So as you can see, there are 30 variables for each picture.
Suppose I have some pictures here "C:\Users\Admin\Downloads\mypicture"
How can I decompose them into these variables using Python?
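For what it's worth, a minimal sketch of what I am after (this uses face_recognition.face_landmarks, which returns named facial landmark points; the file name and the flattening into columns are my own assumptions):
```python
import face_recognition

# hypothetical file name inside the folder above
image = face_recognition.load_image_file(r"C:\Users\Admin\Downloads\mypicture\face.jpg")
landmarks = face_recognition.face_landmarks(image)[0]  # dict: feature name -> [(x, y), ...]

# Flatten into one row of x/y columns, similar to the Kaggle data frame
row = {}
for feature, points in landmarks.items():
    for i, (x, y) in enumerate(points):
        row[f"{feature}_{i}_x"] = x
        row[f"{feature}_{i}_y"] = y
print(row)
```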
Thank you
| open | 2019-02-11T15:06:38Z | 2019-02-11T17:10:47Z | https://github.com/ageitgey/face_recognition/issues/742 | [] | jasperDD | 1 |
tensorpack/tensorpack | tensorflow | 1,217 | OOM error when using the latest code. | If you're asking about an unexpected problem which you do not know the root cause of,
use this template. __PLEASE DO NOT DELETE THIS TEMPLATE, FILL IT__:
If you already know the root cause to your problem,
feel free to delete everything in this template.
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
I use train.py with my own data, and I commented out mp.set_start_method('spawn'), because with it I get "RuntimeError: context has already been set".
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
OOM in system memory (not GPU), and I observed that memory usage grows all the time.
### 4. Your environment:
-------------------- -----------------------------------------------------------------
sys.platform linux
Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609]
Tensorpack v0.9.4-83-g891dc48-dirty
Numpy 1.15.2
TensorFlow 1.12.0/v1.12.0-0-ga6d8ffae09
TF Compiler Version 4.8.5
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.410.104
CUDA /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart.so.9.0.176
CUDNN /usr/lib/x86_64-linux-gnu/libcudnn.so.7.2.1
NCCL /usr/lib/x86_64-linux-gnu/libnccl.so.2.2.13
CUDA_VISIBLE_DEVICES None
GPU 0,1,2,3,4,5,6,7 Tesla V100-SXM2-32GB
Free RAM 58.28/250.86 GB
CPU Count 64
cv2 4.1.0
msgpack 0.6.1
python-prctl False
-------------------- -----------------------------------------------------------------
You may often want to provide extra information related to your issue, but
at the minimum please try to provide the above information __accurately__ to save effort in the investigation.
| closed | 2019-05-28T02:55:48Z | 2019-06-18T14:50:44Z | https://github.com/tensorpack/tensorpack/issues/1217 | [
"examples"
] | realwill | 14 |
plotly/dash-core-components | dash | 321 | Autorange values stored in dcc.Graph.figure['layout']['xaxis' 'yaxis']['range'] no longer available after version v0.30.1 | I was using the initial autorange values to "hold" my plotly.graph_objs.Layout plot axes and keep them from recalculating when dynamically adding or removing plotly.graph_objs.scatter.Marker traces. Three questions:
1. Is there some place else I can read Layout axes range values after they have been calculated by the autorange function?
2. Is there a better way to "hold" a plotly.graph_objs.Layout to the initial autorange values?
3. Can the plotly.graph_objs.Layout axes autorange function be invoked directly? | closed | 2018-10-05T18:49:06Z | 2021-08-23T16:36:34Z | https://github.com/plotly/dash-core-components/issues/321 | [] | crosenth | 4 |
mage-ai/mage-ai | data-science | 5,556 | Support cafile in NATS streaming source | https://github.com/mage-ai/mage-ai/blob/d0eaef32a40a771b9ac8485e14ed08da83ca2dc6/mage_ai/data_preparation/templates/data_loaders/streaming/nats.yaml#L20
Issue with NATS template:
```python
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/retry.py", line 38, in retry_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/streaming_pipeline_executor.py", line 116, in __execute_with_retry
self.__execute_in_python(
File "/usr/local/lib/python3.10/site-packages/mage_ai/data_preparation/executors/streaming_pipeline_executor.py", line 164, in __execute_in_python
source = SourceFactory.get_source(
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/source_factory.py", line 36, in get_source
return NATSSource(config, **kwargs)
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/nats_js.py", line 62, in __init__
super().__init__(config)
File "/usr/local/lib/python3.10/site-packages/mage_ai/streaming/sources/base.py", line 23, in __init__
self.config = self.config_class.load(config=config)
File "/usr/local/lib/python3.10/site-packages/mage_ai/shared/config.py", line 48, in load
return self(**config)
TypeError: NATSConfig.__init__() got an unexpected keyword argument 'cafile
```
resolved by this addition:
```yaml
ssl_config:
  cafile: "/tmp/.certs/ca.pem"
``` | closed | 2024-11-12T14:43:19Z | 2024-11-24T06:54:16Z | https://github.com/mage-ai/mage-ai/issues/5556 | [
"enhancement"
] | trover97 | 0 |
JaidedAI/EasyOCR | deep-learning | 893 | Text Properties Support | Hi Team,
Is it possible to extract text properties like font-size, font-width, font-style, font-color? | open | 2022-11-28T04:57:29Z | 2022-11-28T04:57:29Z | https://github.com/JaidedAI/EasyOCR/issues/893 | [] | prithivi1 | 0 |
plotly/dash | jupyter | 2,722 | [Feature Request] Render table can add Grand Total | Thanks so much for your interest in Dash!
Before posting an issue here, please check the Dash [community forum](https://community.plotly.com/c/dash) to see if the topic has already been discussed. The community forum is also great for implementation questions. When in doubt, please feel free to just post the issue here :)
**Is your feature request related to a problem? Please describe.**
Hey, i have ideal in visualized table, how to add grand total. It's a footer such as the column header. But it's calculate by column number in table.
**Describe the solution you'd like**
I encourage encountered when make a table very long and have pagination. So, how about add html or add a row grand total below table with pagination.
**Describe alternatives you've considered**
It's helpful for me.

| closed | 2024-01-08T16:05:47Z | 2024-01-08T20:49:07Z | https://github.com/plotly/dash/issues/2722 | [] | trantrinhquocviet | 1 |
MagicStack/asyncpg | asyncio | 428 | asyncpg.exceptions.DataError: invalid input for query argument $1: <Record chat_id=-1001218255164> (an integer is required (got type asyncpg.Record)) | <!--
Thank you for reporting an issue/feature request.
If this is a feature request, please disregard this template. If this is
a bug report, please answer to the questions below.
It will be much easier for us to fix the issue if a test case that reproduces
the problem is provided, with clear instructions on how to run it.
Thank you!
-->
* **asyncpg version**: 0.18.3
* **PostgreSQL version**: 4.3
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: No
* **Python version**: 3.7.1
* **Platform**: Linux Ubuntu
* **Do you use pgbouncer?**: No
* **Did you install asyncpg with pip?**: No (i installed it in pycharm)
* **If you built asyncpg locally, which version of Cython did you use?**: I don't use Cython
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: I don't know
<!-- Enter your issue details below this comment. -->
I have a column with bigint type, and the error happens when I try to insert a record into it.
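A hypothetical reproduction (table and column names are my assumptions; the point is that the whole asyncpg.Record gets passed instead of the integer field):
```python
import asyncpg

async def repro(conn: asyncpg.Connection):
    row = await conn.fetchrow("SELECT chat_id FROM chats LIMIT 1")  # row is an asyncpg.Record
    # Bug: passing the whole Record where an int (bigint) is expected
    await conn.execute("INSERT INTO messages (chat_id) VALUES ($1)", row)
```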
It gives this error message: ``` asyncpg.exceptions.DataError: invalid input for query argument $1: <Record chat_id=-1001218255164> (an integer is required (got type asyncpg.Record)) ``` | closed | 2019-04-03T14:28:56Z | 2019-04-04T16:02:20Z | https://github.com/MagicStack/asyncpg/issues/428 | [] | iaiiai | 7 |
explosion/spaCy | machine-learning | 13,726 | Advance NLP with spaCy load issue | In the the online course https://course.spacy.io/
import spacy failing to load in exercises (see Section 2 Getting Started)
| open | 2025-01-07T14:43:26Z | 2025-01-07T14:43:26Z | https://github.com/explosion/spaCy/issues/13726 | [] | natandatasci | 0 |
litl/backoff | asyncio | 196 | question about max_time | It appears that `max_time` sets a time limit for the time spent retrying, that does not include the initial try. If that is correct, if all I have is, for example, the total time budget for an API call, then I have to subtract the maximum time the initial attempt take from `max_time` to make sure that the total time spent = `(initial attempt + all retries) < time_budget`. Is that the intent of the code?
By way of context, we assumed that `max_time` could be set to our `time_budget` for the total time spent calling an API, but this assumption seemed to be incorrect. | open | 2023-03-02T01:58:24Z | 2023-03-02T01:58:24Z | https://github.com/litl/backoff/issues/196 | [] | jrobbins-LiveData | 0 |
hack4impact/flask-base | sqlalchemy | 181 | config.env database_url not working | I run a PostgreSQL container in Docker,
and added this in config.env:
`DATABASE_URL=postgresql://postgres:docker@localhost:5432/postgres`
but every time I run:
`python manage.py setup_dev`
it creates data-dev.sqlite.
PS: the DB name and password are correct; I can connect with this command:
`psql -h localhost -U postgres -d postgres`
| closed | 2019-01-28T07:54:36Z | 2019-03-08T06:38:20Z | https://github.com/hack4impact/flask-base/issues/181 | [] | tangyiming | 3 |
onnx/onnxmltools | scikit-learn | 100 | New Core ML Spec Released | Some features are added so this tool needs to be updated accordingly. See [their spec](https://apple.github.io/coremltools/coremlspecification/index.html) for details. | closed | 2018-06-26T18:43:12Z | 2018-11-29T01:32:06Z | https://github.com/onnx/onnxmltools/issues/100 | [
"bug"
] | wschin | 0 |
robotframework/robotframework | automation | 5,067 | Timezone in log.html same as the browser and not the one from the device under test | Hello,
The logs generated in log.html don't specify a timezone, and when opening the logs the times are shown in the browser's timezone, not in the timezone of the device under test.
I saw this was commented in another thread in google groups, but I couldn't see more recent info:
https://groups.google.com/g/robotframework-users/c/Wtg8EYwNVJ8
Could you please check? Is this expected?
Many thanks | open | 2024-02-29T21:45:40Z | 2024-03-12T17:20:24Z | https://github.com/robotframework/robotframework/issues/5067 | [] | alefelipeoliveira | 2 |
donnemartin/data-science-ipython-notebooks | pandas | 9 | Add notebook info for Boto, the official AWS SDK for Python. | | closed | 2015-07-17T12:11:48Z | 2016-05-18T02:07:57Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/9 | [
"feature-request"
] | donnemartin | 0 |
replicate/cog | tensorflow | 1,965 | Cog push error “Failed to get last layer digest for cog base image” | I have pushed the model successfully once,
but it failed after that. The error message is like the following:
```
Building Docker image from environment in cog.yaml as r8.im/ultimatech-cn/instant-id-basic...
⚠ Stripping patch version from Python version 3.10.6 to 3.10
⚠ Stripping patch version from Python version 3.10.6 to 3.10
[+] Building 4.0s (15/15) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 833B 0.0s
=> resolve image config for docker-image://docker.io/docker/dockerfile:1.4 0.0s
=> CACHED docker-image://docker.io/docker/dockerfile:1.4 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 787B 0.0s
=> [internal] load metadata for r8.im/cog-base:cuda12.1-python3.10 2.5s
=> [stage-0 1/8] FROM r8.im/cog-base:cuda12.1-python3.10@sha256:ab0faae83dd6f205e62ff3ef44cd91f47d86 0.0s
=> [internal] load build context 0.2s
=> => transferring context: 375.87kB 0.2s
=> CACHED [stage-0 2/8] RUN --mount=type=cache,target=/var/cache/apt,sharing=locked apt-get update - 0.0s
=> CACHED [stage-0 3/8] COPY .cog/tmp/build20240923193006.8539553181594056/requirements.txt /tmp/req 0.0s
=> CACHED [stage-0 4/8] RUN pip install --no-cache-dir -r /tmp/requirements.txt 0.0s
=> CACHED [stage-0 5/8] RUN curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releas 0.0s
=> CACHED [stage-0 6/8] RUN pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visua 0.0s
=> CACHED [stage-0 7/8] WORKDIR /src 0.0s
=> [stage-0 8/8] COPY . /src 0.7s
=> exporting to image 0.4s
=> => exporting layers 0.4s
=> => preparing layers for inline cache 0.0s
=> => writing image sha256:92c95bb5fc6a03683d9dedcf18df08d8470b01722612ed7ebca28a4113e55a1d 0.0s
=> => naming to r8.im/ultimatech-cn/instant-id-basic 0.0s
Validating model schema...
Adding labels to image...
ⅹ Failed to get last layer digest for cog base image: Get "https://us-docker.pkg.dev/artifacts-downloads/namespaces/replicate-production/repositories/replicate-us/downloads/ALpRbLCfrWK_tufbyhxnC-........aMiUdTBCrLMaN5VJ_XssKaOIv8uNyVD9Ll24PrjxChAjFFYjjHTLJR0Gp_DA2s28ysxmEc1aszSaDLBMMH78O85KU7i4-xH16ohNh1D_LwJc72qzz8c3X2uU5ub0rjsnnZffm8MldtMAFjjJfirVfg_S39SlFxvybtlC76VZWn1Ka-QKX2wOrQ70jtqCzEHL1mh_GCg58TEvyJn08BJeLOO5TvOplu1QDDGqTq0baGtfkvcWVZNUnbiv6MgMg94wgCb_hZv2K7YybKljzwmscv3AbdSQwOICR763scZ2frRSNKicS6egBc9KK-MafGbSu-qSyTrJ": dial tcp 74.125.20.82:443: i/o timeout
```
Any solution for this? | open | 2024-09-23T11:35:34Z | 2025-02-17T06:59:30Z | https://github.com/replicate/cog/issues/1965 | [] | ultimatech-cn | 2 |
pytest-dev/pytest-html | pytest | 90 | Title of checkbox disappears when unchecked |


Regression was introduced in 972058bb. Reverting the changes to main.js restored the previous behaviour.
| closed | 2016-11-24T12:26:18Z | 2016-11-25T09:31:04Z | https://github.com/pytest-dev/pytest-html/issues/90 | [
"bug",
"help wanted"
] | vashirov | 2 |
zappa/Zappa | flask | 864 | [Migrated] zappa.io is down | Originally from: https://github.com/Miserlou/Zappa/issues/2114 by [dekoza](https://github.com/dekoza)
I just want to let you know that the site **zappa.io** and all its subdomains are down. The domain name does not resolve. | closed | 2021-02-20T13:03:01Z | 2022-07-16T05:35:36Z | https://github.com/zappa/Zappa/issues/864 | [] | jneves | 1 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 488 | The password for the "COCO dataset introduction" PPT is wrong | COCO dataset introduction. Link: https://pan.baidu.com/s/1HfCvjt-8o9j5a916IYNVjw[](https://github.com/WZMIAOMIAO/deep-learning-for-imageprocessing/tree/master/course_ppt#%E5%9B%BE%E5%83%8F%E5%88%86%E5%89%B2%E7%BD%91%E7%BB%9C%E7%9B%B8%E5%85%B3) Password: 6rec
Related to image segmentation networks | closed | 2022-03-11T12:25:02Z | 2022-03-11T12:53:06Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/488 | [] | vansin | 1 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,132 | Dummy Camera Rendering | I am trying to run render.py using only cameras.json from a trained GS model, without the original dataset. I created a dummy camera and applied all parameters, including R and T, as well as camera-related settings from cameras.json. I also confirmed that the T values are identical in the SIBR viewer, but the rendered result is almost invisible with render.py. It seems like there might be an issue with the rotation property. Shouldn't R|T be directly applied as is? | closed | 2025-01-06T04:29:55Z | 2025-01-06T04:32:58Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1132 | [] | Lee-JaeWon | 0 |
sunscrapers/djoser | rest-api | 14 | DRF3 User Activation | I get an unauthorized error when clicking the email activation link in the sent email. Do you know if this should work out-of-the-box?
| closed | 2015-02-16T06:49:27Z | 2015-02-20T15:09:31Z | https://github.com/sunscrapers/djoser/issues/14 | [] | parietal-io | 2 |
sinaptik-ai/pandas-ai | data-science | 1,339 | Pandas v2 adoption | ### 🚀 The feature
I have complex projects and pandas-ai is holding me back because of the pandas 1.5 requirement. Other major packages seem to have made the migration. I'm wondering what's holding it back here.
Thanks.
### Motivation, pitch
NA
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2024-08-28T09:12:12Z | 2024-12-04T16:08:57Z | https://github.com/sinaptik-ai/pandas-ai/issues/1339 | [
"enhancement"
] | thisismygitrepo | 1 |
pydata/bottleneck | numpy | 439 | [BUG]would it be possible to have Python-3.12 wheels ? | **Describe the bug**
Wish for a Windows Python 3.12 wheel.
| closed | 2024-01-06T13:06:36Z | 2024-02-25T00:18:09Z | https://github.com/pydata/bottleneck/issues/439 | [
"bug"
] | stonebig | 2 |
ageitgey/face_recognition | machine-learning | 1,193 | On import error: unable to open shape_predictor_68_face_landmarks.dat | * face_recognition version: 1.3.0-py2.py3-none-any
* Python version: 3.8.5
* Operating System: Windows 7 Ultimate SP1 x64 (version 6.1, build 7601)
### Description
I wanted to install Python for a single user, but Python didn't allow this, so I had to give it elevated permissions. opencv installed without elevated rights works fine on this PC (installed only for one user). On another PC, face_recognition also works correctly. But here, for some reason, it refuses: it reports that it can't access a file that EXISTS in the user's folder, which means that it is available not only for reading, but also for writing.

### What I Did
*install python 3.8.5*
```
pip install opencv-python
pip install dlib
pip install face_recognition
```
Everything was installed successfully, into the user's folder, because I run PowerShell without elevated rights. Python itself was installed with elevated rights because it refused to be installed without them.
### P.S.
If you install face_recognition with elevated rights (for all users), it works. However, it should work without them. | open | 2020-07-27T20:18:57Z | 2024-07-23T08:48:38Z | https://github.com/ageitgey/face_recognition/issues/1193 | [] | averov90 | 0 |
facebookresearch/fairseq | pytorch | 5,096 | Error restarting training and inability to enable checkpoint activations after compiling Fairseq model using torch.compile() | ## ❓ Questions and Help
#### I compiled a fairseq model using torch.compile() and observed a significant training speedup, but ran into a few issues:
1) An error occurs when restarting training from checkpoint_last.
2) Unable to enable --checkpoint-activations while using torch.compile.
#### Code
```
if cfg.distributed_training.ddp_backend == "fully_sharded":
    with fsdp_enable_wrap(cfg.distributed_training):
        model = fsdp_wrap(task.build_model(cfg.model))
else:
    model = task.build_model(cfg.model)
model = torch.compile(model)
```
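A guess on issue 1, in case it helps triage: torch.compile wraps the module in an OptimizedModule, so fairseq's wrapper may no longer reach the fairseq model's own `load_state_dict` (which accepts `model_cfg`). A hedged sketch of unwrapping before loading (the checkpoint key layout is an assumption):
```python
# Hedged sketch: unwrap torch.compile's OptimizedModule so fairseq's own
# load_state_dict(..., model_cfg=...) is the method that actually runs.
base = getattr(model, "_orig_mod", model)  # OptimizedModule stores the original module here
base.load_state_dict(checkpoint["model"], model_cfg=cfg.model)
```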
#### I compiled the model using torch.compile(), enabled checkpoint activations using --checkpoint-activations, and observed the error below.
KeyError: packed_non_tensor_outputs
from user code:
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/modules/checkpoint_activations.py", line 67, in <graph break in _checkpointed_forward>
packed_non_tensor_outputs = parent_ctx_dict["packed_non_tensor_outputs"]
Also when I tried to restart training from the last checkpoint, the below error occurred.
2023-05-02 18:08:17 | INFO | fairseq.trainer | Preparing to load checkpoint checkpoint/checkpoint_last.pt
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/bin/fairseq-train", line 8, in <module>
sys.exit(cli_main())
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq_cli/train.py", line 632, in cli_main
distributed_utils.call_main(cfg, main)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/utils.py", line 344, in call_main
torch.multiprocessing.spawn(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 197, in start_processes
while not context.join():
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 160, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 6 terminated with the following error:
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/trainer.py", line 567, in load_checkpoint
self.model.load_state_dict(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/module_proxy_wrapper.py", line 53, in load_state_dict
return self.module.module.load_state_dict(*args, **kwargs)
TypeError: Module.load_state_dict() got an unexpected keyword argument 'model_cfg'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
fn(i, *args)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/distributed/utils.py", line 328, in distributed_main
main(cfg, **kwargs)
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq_cli/train.py", line 170, in main
extra_state, epoch_itr = checkpoint_utils.load_checkpoint(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/checkpoint_utils.py", line 248, in load_checkpoint
extra_state = trainer.load_checkpoint(
File "/home/santha-11585/miniconda3/envs/ddp/lib/python3.10/site-packages/fairseq/trainer.py", line 579, in load_checkpoint
raise Exception(
Exception: Cannot load model parameters from checkpoint checkpoint/checkpoint_last.pt; please ensure that the architectures match.
#### What's your environment?
- fairseq Version (e.g., 1.0 or main): 0.12.2
- PyTorch Version (e.g., 1.0) : 2.0.0+cu117
- OS (e.g., Linux): ubuntu
- How you installed fairseq (`pip`, source): yes
- Build command you used (if compiling from source):
- Python version:3.10
- CUDA/cuDNN version: 11.7
- GPU models and configuration: A6000
- Any other relevant information:
| open | 2023-05-02T12:26:57Z | 2023-05-15T09:00:48Z | https://github.com/facebookresearch/fairseq/issues/5096 | [
"question",
"needs triage"
] | santha96 | 5 |
seleniumbase/SeleniumBase | pytest | 2,255 | Looks like Cloudflare found out about SeleniumBase UC Mode | The makers of the **Turnstile** have found out about **SeleniumBase UC Mode**:
<img width="480" alt="Screenshot 2023-11-08 at 5 47 30 PM" src="https://github.com/seleniumbase/SeleniumBase/assets/6788579/08fa67af-262e-48e4-8699-33e04c15ab54">
**To quote Dr. Emmett Brown from Back to the Future:**
> **"They found me. I don't how, but they found me."**

I guess that means they watched the **SeleniumBase UC Mode** video: https://www.youtube.com/watch?v=5dMFI3e85ig
--------
In other news, I'm working on more updates and demo pages for running tests.
Once the next release is shipped, I'll start going through the notification queue. | closed | 2023-11-08T23:43:16Z | 2023-11-15T02:40:06Z | https://github.com/seleniumbase/SeleniumBase/issues/2255 | [
"News / Announcements",
"UC Mode / CDP Mode",
"Fun"
] | mdmintz | 10 |
vi3k6i5/flashtext | nlp | 20 | Replace keyword not working properly | ```python
from flashtext import KeywordProcessor

kp = KeywordProcessor()
kp.add_keyword('+', 'plus')
kp.replace_keywords('c++')
``` | closed | 2017-11-16T06:32:51Z | 2017-11-16T18:53:38Z | https://github.com/vi3k6i5/flashtext/issues/20 | [
"invalid"
] | thakur-nandan | 1 |
mirumee/ariadne-codegen | graphql | 117 | Field/Type descriptions from schema could be added to the generated code. | This will presumably result in a similar discussion to this. https://github.com/jhnnsrs/turms/pull/54
I've implemented this as docstrings on a local branch using similar logic to that used in turms. | open | 2023-03-31T17:06:59Z | 2023-07-21T16:15:40Z | https://github.com/mirumee/ariadne-codegen/issues/117 | [] | strue36 | 3 |
huggingface/transformers | tensorflow | 36,343 | Failed to import transformers.models.auto.modeling_auto because numpy.core.multiarray failed to import | ### System Info
I am currently trying to import `peft`. But I got the following error:
```
Traceback (most recent call last):
File "/home/paperspace/cache/ml/experiments/hhem2/hhem2/finetune/new_train.py", line 18, in <module>
import peft
File "/home/paperspace/.local/lib/python3.10/site-packages/peft/__init__.py", line 22, in <module>
from .auto import (
File "/home/paperspace/.local/lib/python3.10/site-packages/peft/auto.py", line 21, in <module>
from transformers import (
File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
File "/home/paperspace/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1852, in __getattr__
value = getattr(module, name)
File "/home/paperspace/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1851, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/paperspace/.local/lib/python3.10/site-packages/transformers/utils/import_utils.py", line 1865, in _get_module
raise RuntimeError(
RuntimeError: Failed to import transformers.models.auto.modeling_auto because of the following error (look up to see its traceback):
Failed to import transformers.generation.utils because of the following error (look up to see its traceback):
numpy.core.multiarray failed to import
```
I have already upgraded `transformers` and `numpy` to the latest version.
```
numpy 2.2.3
transformers 4.49.0
peft 0.14.0
```
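A note on a likely cause, hedged: the "numpy.core.multiarray failed to import" message usually means some compiled dependency was built against NumPy 1.x while NumPy 2.x is installed. A possible workaround, under that assumption, is pinning NumPy below 2:
```
pip install "numpy<2"
```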
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
run the following command on ur Linux terminal:
```
python3 -c "import peft"
```
### Expected behavior
Shouldn't see any error. Silence. | closed | 2025-02-22T04:49:16Z | 2025-02-28T16:06:25Z | https://github.com/huggingface/transformers/issues/36343 | [
"bug"
] | forrestbao | 3 |
encode/databases | asyncio | 575 | Allow passing string values for ssl query string in PostgreSQL URL | When using the PostgreSQL backend, you can pass the `ssl` key in the query string, with the values `true` or `false`. These are converted to boolean and passed as arguments to [asyncpg.create_pool](https://magicstack.github.io/asyncpg/current/api/index.html#asyncpg.pool.create_pool)
The asyncpg library accepts other values than only `True` and `False`, which can be used to choose how certificate validation is done (or not done).
~~For the record, when setting `ssl=true`, the ssl mode used is `prefer`, which will fallback to plain if SSL connection fails, so it is not a secure default.~~ (Edit: This is not true, certificate is checked with `ssl=true`, the documentation is not clear on that topic).
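For illustration, a connection URL using one of asyncpg's string SSL modes might look like this (a sketch; the mode names come from asyncpg/libpq, and whether they pass through here unchanged is exactly what this issue is about):
```
DATABASE_URL=postgresql://user:pass@db.example.com:5432/app?ssl=verify-full
```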
I'm going to send a PR that permits sending string values, but it will not change the default settings. | closed | 2023-11-28T15:21:08Z | 2023-11-28T18:26:28Z | https://github.com/encode/databases/issues/575 | [] | Exagone313 | 1 |
fastapi/sqlmodel | pydantic | 115 | PyCharm: No autocomplete when creating new instances | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from datetime import datetime
from typing import Optional

from sqlmodel import Field, SQLModel


class MyModel(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    dev_email: str = Field(index=True)
    account_id: str = Field(index=True)
    other_id: Optional[str] = None
    failed: Optional[bool] = False
    created: datetime = Field(sa_column_kwargs={'default': datetime.utcnow})
    updated: datetime = Field(sa_column_kwargs={'default': datetime.utcnow, 'onupdate': datetime.utcnow})
```
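For clarity, this is the kind of instantiation where I expected attribute completion (the values are placeholders of mine):
```python
m = MyModel(dev_email="dev@example.com", account_id="acct-1")  # no completion offered for these kwargs
```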
### Description
- Create Model
- Create new instance of model
- I expected to see autocompletion for model attributes as shown in the features section of docs
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.4
### Python Version
3.8
### Additional Context
No autocompletion in PyCharm when creating new instance. (only when fetching an instance from db)

| open | 2021-09-28T22:45:30Z | 2025-02-25T17:59:06Z | https://github.com/fastapi/sqlmodel/issues/115 | [
"question"
] | adlmtl | 15 |
jazzband/django-oauth-toolkit | django | 955 | Is documentation correct about OIDC_ISS_ENDPOINT? | The [OIDC_ISS_ENDPOINT](https://django-oauth-toolkit.readthedocs.io/en/latest/settings.html#oidc-iss-endpoint) documentation states that discovery is at `OIDC_ISS_ENDPOINT + /.well-known/openid-configuration/`. That would indicate that one should include the mount point of the `oauth2_provider.urls` right?
But `ConnectDiscoveryInfoView` uses `reverse` to get the path of the `oauth2_provider:authorize`, `oauth2_provider:token`, `oauth2_provider:user-info`, `oauth2_provider:jwks-info` which results in the doubling of the mount point info.
So if `oauth2_provider.urls` is mounted at `/some-initial-path/o`, all the endpoints except `issuer` included in the response have the mount point information doubled. So if the `OIDC_ISS_ENDPOINT` is `http://localhost:8001/some-initial-path/o`, the issuer will be `http://localhost:8001/some-initial-path/o` but `authorization_endpoint` will be `http://localhost:8001/some-initial-path/o/some-initial-path/o/authorize/`. The same pattern holds for `token_endpoint`, `userinfo_endpoint`, and `jwks_uri`.
Looking at the tests, there seems to be a little ambivalence about the topic. See below
https://github.com/jazzband/django-oauth-toolkit/blob/9d2aac2480b2a1875eb52612661992f73606bade/tests/test_oidc_views.py#L15
`test_get_connect_discovery_info` expects a url without a path and `test_get_connect_discovery_info_without_issuer_url` expects a url with a path `/o` (the default `oauth2` path?).
Anyways - I'm confused. Can anyone clarify if `OIDC_ISS_ENDPOINT` should just be the root url of the Django app or if it should include the mount point of the `oauth2_provider.urls`?
EDIT:
It looks as if the OIDC specification mentions [this](https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderConfig). The correct pattern should follow the `django_oauth_toolkit` documentation. So `OIDC_ISS_ENDPOINT ` + `/.well-known/openid-configuration` should resolve.
If this is true, then the test `test_get_connect_discovery_info` should expect `http://localhost/o` instead of `http://localhost` as `issuer` - I think.
EDIT2:
If `OIDC_ISS_ENDPOINT ` is defined, couldn't it be located somewhere else (another domain) than where `ConnectDiscoveryInfoView` is located? If yes, isn't it then a mistake to base the location of `authorization_endpoint`, `token_endpoint`, `userinfo_endpoint`, and `jwks_uri` on the use of `reverse` for the url patterns on the same host where `ConnectDiscoveryInfoView` is located.
Why not just hardcode the endpoints to `OIDC_ISS_ENDPOINT` + `{/authorize/, /token/, /userinfo/, o/.well-known/jwks.json}`?
Or urlparse `OIDC_ISS_ENDPOINT` and use the scheme + netloc + reverse of all the endpoints to fill the output of `ConnectDiscoveryInfoView`. | closed | 2021-04-02T19:16:44Z | 2021-04-12T10:08:41Z | https://github.com/jazzband/django-oauth-toolkit/issues/955 | [] | dollarklavs | 0 |
ploomber/ploomber | jupyter | 457 | Azure ML integration | We need to create a similar integration + tutorial to this one we have with AWS Batch: https://soopervisor.readthedocs.io/en/latest/tutorials/aws-batch.html
The story is about:
Integrating the client with Azure ML batch functionality.
Creating a tutorial similar to the one above.
If possible create a short video to guide through the tutorial | closed | 2021-12-29T18:34:11Z | 2022-04-16T13:59:10Z | https://github.com/ploomber/ploomber/issues/457 | [] | idomic | 1 |
OpenInterpreter/open-interpreter | python | 795 | Open Files | ### Describe the bug
I tried to open a screenshot with this command and Open Interpreter Crashed:
`> open /Users/maxpetrusenko/Desktop/photo_2023-11-23_17-58-50.jpg please
Python Version: 3.10.12
Pip Version: 23.2.1
Open-interpreter Version: cmd:A, pkg: 0.1.15
OS Version and Architecture: macOS-14.1-arm64-arm-64bit
CPU Info: arm
RAM Info: 16.00 GB, used: 6.80, free: 0.19
Interpreter Info
Vision: False
Model: openai/gpt-4
Function calling: False
Context window: 3000
Max tokens: 1000
Auto run: False
API base: http://localhost:1234/v1
Local: True
Curl output: [Errno 2] No such file or directory: 'curl http://localhost:1234/v1'
Traceback (most recent call last):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/respond.py", line 49, in respond
for chunk in interpreter._llm(messages_for_llm):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/llm/convert_to_coding_llm.py", line 65, in coding_llm
for chunk in text_llm(messages):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/llm/setup_text_llm.py", line 32, in base_llm
messages = tt.trim(
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tokentrim/tokentrim.py", line 189, in trim
shorten_message_to_fit_limit(message, tokens_remaining, model)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tokentrim/tokentrim.py", line 95, in shorten_message_to_fit_limit
new_length = int(len(encoding.encode(content)) * ratio)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/tiktoken/core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
TypeError: expected string or buffer
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/maxpetrusenko/miniforge3/bin/interpreter", line 8, in <module>
sys.exit(cli())
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 24, in cli
cli(self)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/cli/cli.py", line 268, in cli
interpreter.chat()
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 86, in chat
for _ in self._streaming_chat(message=message, display=display):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 106, in _streaming_chat
yield from terminal_interface(self, message)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py", line 115, in terminal_interface
for chunk in interpreter.chat(message, display=False, stream=True):
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 127, in _streaming_chat
yield from self._respond()
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/core.py", line 162, in _respond
yield from respond(self)
File "/Users/maxpetrusenko/miniforge3/lib/python3.10/site-packages/interpreter/core/respond.py", line 97, in respond
raise Exception(
Exception: expected string or buffer
Please make sure LM Studio's local server is running by following the steps above.
If LM Studio's local server is running, please try a language model with a different architecture.`
### Reproduce
1. run interpreter --local
2. start server (LM Studio, tried with Mistral Instruct v0.1 GGUF)
3. open "/path/to/screenshot" please
### Expected behavior
no crash
### Screenshots
_No response_
### Open Interpreter version
0.1.15
### Python version
3.11
### Operating System name and version
mac m2
### Additional context
_No response_ | closed | 2023-11-27T20:21:25Z | 2024-02-12T17:47:19Z | https://github.com/OpenInterpreter/open-interpreter/issues/795 | [
"Bug"
] | maxpetrusenko | 9 |
Layout-Parser/layout-parser | computer-vision | 194 | show_element_id=True breaks draw_box because FreeTypeFont has no getsize attribute | **Describe the bug**
I was trying to set show_element_id=True in draw_box but got an AttributeError: 'FreeTypeFont' object has no attribute 'getsize'.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version, see the [Layout Parser Releases](https://github.com/Layout-Parser/layout-parser/releases/)
**To Reproduce**
Steps to reproduce the behavior:
1. What command or script did you run?
```
lp.draw_box(pdf_images[4], text_blocks, box_width=3, show_element_id=True) # Use the default font provided by the library
```
**Environment**
1. Used on windows with jupyter lab on conda
2. Using layoutparser version 0.3.4
3. All other libraries has been installed
**Error traceback**
AttributeError Traceback (most recent call last)
Cell In[17], line 1
----> 1 lp.draw_box(pdf_images[4], text_blocks,
2 box_width=3, show_element_id=True) # Use the default font provided by the library
File ~\miniconda3\Lib\site-packages\layoutparser\visualization.py:194, in image_loader.<locals>.wrap(canvas, layout, *args, **kwargs)
192 elif isinstance(canvas, np.ndarray):
193 canvas = Image.fromarray(canvas)
--> 194 out = func(canvas, layout, *args, **kwargs)
195 return out
File ~\miniconda3\Lib\site-packages\layoutparser\visualization.py:392, in draw_box(canvas, layout, box_width, box_alpha, box_color, color_map, show_element_id, show_element_type, id_font_size, id_font_path, id_text_color, id_text_background_color, id_text_background_alpha)
389 text = str(ele.type) if not text else text + ": " + str(ele.type)
391 start_x, start_y = ele.coordinates[:2]
--> 392 text_w, text_h = font_obj.getsize(text)
394 text_box_object = Rectangle(
395 start_x, start_y, start_x + text_w, start_y + text_h
396 )
397 # Add a small background for the text
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
**Screenshots**
<img width="595" alt="image" src="https://github.com/Layout-Parser/layout-parser/assets/56075784/a0c616e8-ad07-4b2d-bbbf-d15f4e30a222">
| open | 2023-08-23T04:52:10Z | 2024-03-08T22:56:18Z | https://github.com/Layout-Parser/layout-parser/issues/194 | [
"bug"
] | Extrosoph | 2 |
amdegroot/ssd.pytorch | computer-vision | 279 | How can I get the structure graph of the network? Any tool? | | open | 2018-12-28T08:34:45Z | 2018-12-28T08:34:45Z | https://github.com/amdegroot/ssd.pytorch/issues/279 | [] | hahanigehaha233 | 0 |
serengil/deepface | machine-learning | 715 | Why does allocating GPU memory fail? | I wrote a function:
```python
import os

from deepface import DeepFace


def updatepkl(business):
    file_list = os.listdir("FR/dataset/" + business)
    for file in file_list:
        file_path = 'FR/dataset/' + business + '/' + file
        if os.path.isdir(file_path):
            for i in os.listdir(file_path):
                DeepFace.find(img_path=file_path + '/' + i, db_path="FR/dataset/" + business + '/',
                              model_name='ArcFace', detector_backend='dlib',
                              distance_metric="euclidean_l2", enforce_detection=False)  # update pkl file
```
When I execute this method, the GPU's memory is occupied greatly (14G, total:16G).
When I call it again, the following error occurs:
```
tensorflow.python.framework.errors_impl.ResourceExhaustedError: {{function_node __wrapped__AddV2_device_/job:localhost/replica:0/task:0/device:GPU:0}} failed to allocate memory [Op:AddV2]
```
Please explain why this situation occurs.
thanks
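A mitigation I plan to try, hedged: by default TensorFlow reserves GPU memory up front and doesn't release it between calls, so enabling memory growth might avoid the second-call failure. Sketch, assuming the TF 2.x API:
```python
import tensorflow as tf

for gpu in tf.config.list_physical_devices('GPU'):
    # Allocate GPU memory on demand instead of reserving it all up front
    tf.config.experimental.set_memory_growth(gpu, True)
```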
| closed | 2023-04-11T06:59:01Z | 2023-04-11T07:09:59Z | https://github.com/serengil/deepface/issues/715 | [
"question"
] | fanandli | 2 |
LibrePhotos/librephotos | django | 918 | Filter by Video/Photos on main screen | **Describe the enhancement you'd like**
A clear and concise description of what you want to happen.
Add the ability to display only videos/photos (filtering).
**Describe why this will benefit the LibrePhotos**
A clear and concise explanation on why this will make LibrePhotos better.
Better organisation.
**Additional context**
Add any other context or screenshots about the enhancement request here.

| closed | 2023-07-12T10:20:51Z | 2023-10-24T08:14:38Z | https://github.com/LibrePhotos/librephotos/issues/918 | [
"enhancement"
] | hardwareadictos | 1 |
koaning/scikit-lego | scikit-learn | 228 | List of features in readme is out of date [DOCS] | | closed | 2019-10-24T11:56:10Z | 2019-10-31T19:10:35Z | https://github.com/koaning/scikit-lego/issues/228 | [
"documentation"
] | MBrouns | 1 |
streamlit/streamlit | streamlit | 9,944 | `y_min` and `y_max` not correctly honored in `*ChartColumn`. Input Values not clamped, if value range outside y limits defined in `column_config` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
The scale in `LineChartColumn` and `BarChartColumn` changes, if the values provided are outside of the defined `y_min` and `y_max`.
### Reproducible Code Example
[](https://issues.streamlitapp.com/?issue=gh-9944)
```Python
import pandas as pd
import streamlit as st

data_df = pd.DataFrame(
    {
        "sales": [
            [0, 50, 100],
            [0, 50, 200],
        ],
    }
)

st.data_editor(
    data_df,
    column_config={
        "sales": st.column_config.BarChartColumn(
            "Sales (last 6 months)",
            help="The sales volume in the last 6 months",
            y_min=0,
            y_max=100,
        ),
    },
    hide_index=True,
)
```
### Steps To Reproduce
_No response_
### Expected Behavior
It is my understanding that the two input lists should generate the exact same charts, with `y_min=0` and `y_max=100`: i.e. the 200 value will just get clamped to 100.
### Current Behavior

The y limits change depending on the max and min value of the input list.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.40.01
- Python version: 3.12.7
- Operating System: Ubuntu 20.04
- Browser: Firefox
### Additional Information
_No response_ | open | 2024-11-28T14:54:44Z | 2024-11-29T22:41:29Z | https://github.com/streamlit/streamlit/issues/9944 | [
"type:enhancement",
"feature:st.column_config"
] | jonasViehweger | 2 |
vllm-project/vllm | pytorch | 14,536 | [Usage]: Does vllm support inflight batch? | ### Your current environment
### How would you like to use vllm
Does vllm support inflight batch?
trtllm supports it but I can't find any information on vllm documentation
Could some kind person explain it?
Thank you so much in advance
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | closed | 2025-03-10T03:45:47Z | 2025-03-10T03:52:12Z | https://github.com/vllm-project/vllm/issues/14536 | [
"usage"
] | SoundProvider | 1 |
autogluon/autogluon | scikit-learn | 4,162 | [tabular] Add logging of inference throughput of best model at end of fit | [From user](https://www.kaggle.com/competitions/playground-series-s4e5/discussion/499495#2789917): "It wasn't really clear that predict was going to be going for a long time"
I think we can make this a bit better by mentioning at the end of training the estimated inference throughput of the selected best model, which the user can refer to when gauging how long it will take to do inference on X rows. We have the number already calculated, we just haven't put it as part of the user-visible logging yet.
| closed | 2024-05-02T22:32:18Z | 2024-05-16T23:53:30Z | https://github.com/autogluon/autogluon/issues/4162 | [
"API & Doc",
"module: tabular"
] | Innixma | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 2,086 | ChromeDriver is up to date but it still says the chromedriver version is 114 | I have downloaded the latest chromedriver, but when I run the script it still says:
`selenium.common.exceptions.WebDriverException: Message: unknown error: cannot connect to chrome at 127.0.0.1:59122
from session not created: This version of ChromeDriver only supports Chrome version 114
Current browser version is 131.0.6778.86`
```
import time

import undetected_chromedriver as uc
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities


def get_driver():
    print(f'Opening webdriver...')
    path = "chromedriver.exe"
    options = uc.ChromeOptions()
    options.add_argument("--start-maximized")
    options.binary_location = path  # note: this points at chromedriver.exe, not the Chrome binary (possibly the issue?)
    options.headless = False
    caps = DesiredCapabilities.CHROME
    caps["acceptInsecureCerts"] = True
    caps['goog:loggingPrefs'] = {'performance': 'ALL'}
    options.set_capability(
        "goog:loggingPrefs", {"performance": "ALL"}
    )
    # try:
    driver = uc.Chrome(executable_path=path, options=options, desired_capabilities=caps)
    print('Webdriver Opened.')
    time.sleep(2)
    return driver
``` | closed | 2024-11-20T02:45:11Z | 2024-11-26T00:48:13Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/2086 | [] | majidabbasi788 | 3 |
onnx/onnx | tensorflow | 5,943 | [Feature request] Adding output_dtype attribute to QuantizeLinear | ### System information
Main top-of-tree.
### What is the problem that this feature solves?
QuantizeLinear supports output types UINT8, INT8, UINT16, INT16, UINT4, INT4, FLOAT8*.
In order to specify any type other than the default UINT8, the user should provide a zero-point tensor. The output dtype is derived from the zero-point tensor. This leads to defining the zero-point tensor just to signal the output datatype.
Using the zero-point solely to specify the output data type poses several problems:
1. Increased Model Size: The need to include a zero-point tensor, especially when dealing with block quantization, can lead to unnecessary inflation of the model size. This is because additional data_size/block_size zeros must be included, which do not contribute to the model's functionality but occupy storage and memory resources.
2. Computational Overhead: For backends processing the QuantizeLinear operation, the presence of large zero-point tensors (filled with zeros) requires either checking the values of the zero-point tensor are all zeros, or performing the addition operation.
3. Difficulty in Generating Non-standard Data Types: When exporting models from frameworks such as PyTorch, generating tensors for non-standard data types (e.g., FLOAT8) to serve as zero points is a challenge, limiting the accessibility of model quantization.
### Alternatives considered
_No response_
### Describe the feature
Add an optional output_dtype attribute to QuantizeLinear.
The output_dtype attribute will allow users to directly specify the desired output data type for the QuantizeLinear operation without the need to provide a zero_point tensor.
Supported data types will include UINT8, INT8, UINT16, INT16, UINT4, INT4, and FLOAT8, aligning with the current supported output types.
In case output_dtype is not supplied and zero_point is supplied - data type will be derived from zero_point.
In case neither output_dtype or zero_point are supplied, the default data type will be UINT8.
In case output_dtype and zero_point show conflicting data types - the model is invalid.
### Will this influence the current api (Y/N)?
Yes
Adding an attribute to QuantizeLinear
### Feature Area
Operators
### Are you willing to contribute it (Y/N)
Yes
### Notes
@xadupre | closed | 2024-02-18T09:31:19Z | 2024-04-08T06:23:49Z | https://github.com/onnx/onnx/issues/5943 | [
"topic: enhancement"
] | galagam | 2 |
pallets/flask | python | 5,356 | jsonify does not support integer keys | The snippets below are self-descriptive and reflect the problem mentioned in the title.
Expected behavior: jsonify builds a response irrespective of the key/value data types (at least for basic types like int and str)
Actual behavior: keys of type `int` break `jsonify`
Personal suggestion: just typecast to str, but issue a warning (sketched below)
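A sketch of that suggestion (mine, not Flask's current behavior): coerce the keys and warn, roughly like:
```python
import warnings

def coerce_keys(d):
    # Suggested behavior: stringify non-str keys instead of raising, with a warning
    if any(not isinstance(k, str) for k in d):
        warnings.warn("jsonify: non-string dict keys were coerced to str")
    return {str(k): v for k, v in d.items()}
```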
Minimal code to reproduce the issue:
```
from flask import Flask, jsonify
import json

d = {32: "aa", "something": "else"}
print(json.dumps(d))  # works # <-------

app = Flask('app')
# app.config['JSON_SORT_KEYS'] = False  # <-- makes no difference
with app.app_context():
    print(jsonify(d))  # b0rks # <-------
```
Error log:
```
TypeError Traceback (most recent call last)
<ipython-input-12-d8fbf48063d9> in <module>
1 with app.app_context():
----> 2 jsonify(d)
3
~/.local/lib/python3.10/site-packages/flask/json/__init__.py in jsonify(*args, **kwargs)
168 .. versionadded:: 0.2
169 """
--> 170 return current_app.json.response(*args, **kwargs)
~/.local/lib/python3.10/site-packages/flask/json/provider.py in response(self, *args, **kwargs)
213
214 return self._app.response_class(
--> 215 f"{self.dumps(obj, **dump_args)}\n", mimetype=self.mimetype
216 )
~/.local/lib/python3.10/site-packages/flask/json/provider.py in dumps(self, obj, **kwargs)
178 kwargs.setdefault("ensure_ascii", self.ensure_ascii)
179 kwargs.setdefault("sort_keys", self.sort_keys)
--> 180 return json.dumps(obj, **kwargs)
181
182 def loads(self, s: str | bytes, **kwargs: t.Any) -> t.Any:
/usr/lib/python3.10/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
236 check_circular=check_circular, allow_nan=allow_nan, indent=indent,
237 separators=separators, default=default, sort_keys=sort_keys,
--> 238 **kw).encode(obj)
239
240
/usr/lib/python3.10/json/encoder.py in encode(self, o)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/usr/lib/python3.10/json/encoder.py in iterencode(self, o, _one_shot)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
TypeError: '<' not supported between instances of 'str' and 'int'
```
- Python version: Python 3.10.12
- Flask version: Flask 2.3.3
- Werkzeug 2.3.7
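A possible workaround until this is resolved (assumption: Flask 2.2+'s JSON provider API, where key sorting can be turned off so mixed int/str keys are never compared):

```python
from flask import Flask, jsonify

app = Flask("app")
app.json.sort_keys = False  # the default provider sorts keys, which triggers the TypeError

with app.app_context():
    print(jsonify({32: "aa", "something": "else"}))  # works; the int key is serialized as "32"
```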
| closed | 2023-12-06T13:59:05Z | 2023-12-21T00:05:58Z | https://github.com/pallets/flask/issues/5356 | [] | theiosif | 1 |
huggingface/diffusers | deep-learning | 10,162 | Move UFOGen Pipeline and Scheduler to Research Projects | **Is your feature request related to a problem? Please describe.**
After reviewing #6133 and a bit of searching has led me to the implementation of the UFOGen paper by the co-authors. By reading the repo ReadME and it seems there is no model checkpoints provided by the authors, only the training code. Does it make sense to move the pipeline to example/research projects?
Repo link: https://github.com/xuyanwu/SIDDMs-UFOGen
Please let me know.
cc: @yiyixuxu , @sayakpaul | closed | 2024-12-09T17:08:39Z | 2025-01-14T19:44:41Z | https://github.com/huggingface/diffusers/issues/10162 | [
"stale"
] | ParagEkbote | 2 |
amidaware/tacticalrmm | django | 1,163 | Add 2 more special Automation Policies | Like Default workstation and default server.
Have a "Post Installation" Automation Policy
Have a "Pre-Uninstall" Automation Policy
Then you can have onboarding, and offboarding scripts. I'm sure people will want Pre-Uninstall per Client/Site/Agent... | open | 2022-06-02T15:50:29Z | 2023-12-28T12:01:56Z | https://github.com/amidaware/tacticalrmm/issues/1163 | [
"enhancement"
] | silversword411 | 1 |
plotly/dash-table | plotly | 755 | Maximum & minimum syntax in filter_query for conditional formatting | closed | 2020-04-20T18:19:54Z | 2020-04-27T17:48:15Z | https://github.com/plotly/dash-table/issues/755 | [] | chriddyp | 1 |
jowilf/starlette-admin | sqlalchemy | 542 | Bug: changelog docs page refers to 2023 for releases made during 2024 | **Describe the bug**
Was just getting acquainted with this project and noticed that latest release is shown as being from 2023-02-04 on the changelog page:
https://github.com/jowilf/starlette-admin/blob/7465db977d748baa43c3f39a20a307c3636bd7be/docs/changelog/index.md?plain=1#L19
All releases made during early 2024 have this typo in the changelog. | closed | 2024-05-03T20:13:06Z | 2024-05-28T04:23:39Z | https://github.com/jowilf/starlette-admin/issues/542 | [
"bug"
] | ricardogsilva | 0 |
plotly/dash-bio | dash | 449 | Clustergram row labels change heatmap values/colors | Passing a list of labels to `row_labels` seems to change the values that are plotted in the heatmap rather than just adding a textual label on the side.
```python
import dash_bio as db
import plotly.express as px
iris = px.data.iris()
db.Clustergram(iris.select_dtypes('number').to_numpy())
```

In the plot above there is one value per row in each color. Below, rows are grouped together so that there are only three values/colors per column.
```python
db.Clustergram(iris.select_dtypes('number').to_numpy(), row_labels=iris['species'].to_list())
```

The font size of the labels also doesn't adjust to the size of the plot; I have to change the height to 2000 before I can read what they say:

```
-----
dash 1.6.1
dash_bio 0.4.4
dash_core_components 1.5.1
dash_html_components 1.0.2
numpy 1.17.3
pandas 0.25.3
plotly 4.3.0
-----
IPython 7.9.0
jupyter_client 5.3.3
jupyter_core 4.6.1
notebook 6.0.1
-----
Python 3.8.0 | packaged by conda-forge | (default, Nov 22 2019, 19:11:38) [GCC 7.3.0]
``` | closed | 2019-11-24T07:15:52Z | 2020-03-16T19:11:48Z | https://github.com/plotly/dash-bio/issues/449 | [
"dash-type-bug"
] | joelostblom | 3 |
nerfstudio-project/nerfstudio | computer-vision | 2,660 | Gaussian splatting: assertion error (coeffs.shape[-2] == num_sh_bases(degree)) | **Describe the bug**
Running `ns-train gaussian-splatting` crashes with an assertion failure.
**To Reproduce**
I built a docker image using the gaussian-splatting branch.
1. `git clone https://github.com/nerfstudio-project/nerfstudio.git -b gaussian-splatting --recurse-submodules`
2. `docker build --build-arg CUDA_VERSION=11.8.0 --build-arg CUDA_ARCHITECTURES=86 --build-arg OS_VERSION=22.04 --tag nerfstudio-gs --file Dockerfile .`
3. `docker run --gpus all --privileged --network host --rm -it -v /home/user/workspace/:/workspace -v /mnt/data:/data --shm-size=32G --name nerfstudio-gs nerfstudio-gs`
4. Inside container: `ns-train gaussian-splatting --data data/posters_v3/`
**Expected behavior**
The training should not fail.
**Screenshots**
Logs here
```
user@uscnsl-exxact-server:/workspace/nerfstudio/nerfstudio_ws$ ns-train gaussian-splatting --data data/posters_v3/
[22:49:38] Using --data alias for --data.pipeline.datamanager.data train.py:230
──────────────────────────────────────────────────────── Config ────────────────────────────────────────────────────────
TrainerConfig(
_target=<class 'nerfstudio.engine.trainer.Trainer'>,
output_dir=PosixPath('outputs'),
method_name='gaussian-splatting',
experiment_name=None,
project_name='nerfstudio-project',
timestamp='2023-12-08_224938',
machine=MachineConfig(seed=42, num_devices=1, num_machines=1, machine_rank=0, dist_url='auto', device_type='cuda'),
logging=LoggingConfig(
relative_log_dir=PosixPath('.'),
steps_per_log=10,
max_buffer_size=20,
local_writer=LocalWriterConfig(
_target=<class 'nerfstudio.utils.writer.LocalWriter'>,
enable=True,
stats_to_track=(
<EventName.ITER_TRAIN_TIME: 'Train Iter (time)'>,
<EventName.TRAIN_RAYS_PER_SEC: 'Train Rays / Sec'>,
<EventName.CURR_TEST_PSNR: 'Test PSNR'>,
<EventName.VIS_RAYS_PER_SEC: 'Vis Rays / Sec'>,
<EventName.TEST_RAYS_PER_SEC: 'Test Rays / Sec'>,
<EventName.ETA: 'ETA (time)'>,
<EventName.GAUSSIAN_NUM: 'Number of Gaussians'>
),
max_log_size=10
),
profiler='basic'
),
viewer=ViewerConfig(
relative_log_filename='viewer_log_filename.txt',
websocket_port=None,
websocket_port_default=7007,
websocket_host='0.0.0.0',
num_rays_per_chunk=32768,
max_num_display_images=512,
quit_on_train_completion=False,
image_format='jpeg',
jpeg_quality=70,
make_share_url=False
),
pipeline=VanillaPipelineConfig(
_target=<class 'nerfstudio.pipelines.base_pipeline.VanillaPipeline'>,
datamanager=FullImageDatamanagerConfig(
_target=<class 'nerfstudio.data.datamanagers.full_images_datamanager.FullImageDatamanager'>,
data=PosixPath('data/posters_v3'),
masks_on_gpu=False,
images_on_gpu=False,
dataparser=ColmapDataParserConfig(
_target=<class 'nerfstudio.data.dataparsers.colmap_dataparser.ColmapDataParser'>,
data=PosixPath('.'),
scale_factor=1.0,
downscale_factor=None,
scene_scale=1.0,
orientation_method='up',
center_method='poses',
auto_scale_poses=True,
train_split_fraction=0.9,
depth_unit_scale_factor=0.001,
images_path=PosixPath('images'),
masks_path=None,
depths_path=None,
colmap_path=PosixPath('colmap/sparse/0'),
load_3D_points=True,
max_2D_matches_per_3D_point=-1
),
camera_res_scale_factor=1.0,
eval_num_images_to_sample_from=-1,
eval_num_times_to_repeat_images=-1,
eval_image_indices=(0,),
cache_images='cpu'
),
model=GaussianSplattingModelConfig(
_target=<class 'nerfstudio.models.gaussian_splatting.GaussianSplattingModel'>,
enable_collider=True,
collider_params={'near_plane': 2.0, 'far_plane': 6.0},
loss_coefficients={'rgb_loss_coarse': 1.0, 'rgb_loss_fine': 1.0},
eval_num_rays_per_chunk=4096,
prompt=None,
warmup_length=500,
refine_every=100,
resolution_schedule=250,
num_downscales=2,
cull_alpha_thresh=0.1,
cull_scale_thresh=0.5,
reset_alpha_every=30,
densify_grad_thresh=0.0002,
densify_size_thresh=0.01,
n_split_samples=2,
sh_degree_interval=1000,
cull_screen_size=0.15,
split_screen_size=0.05,
stop_screen_size_at=4000,
random_init=False,
extra_points=0,
ssim_lambda=0.2,
stop_split_at=15000,
sh_degree=4,
camera_optimizer=CameraOptimizerConfig(
_target=<class 'nerfstudio.cameras.camera_optimizers.CameraOptimizer'>,
mode='off',
trans_l2_penalty=0.0001,
rot_l2_penalty=0.0001,
optimizer=None,
scheduler=None
)
)
),
optimizers={
'xyz': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.00016,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=1.6e-06,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'color': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.0005,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.0001,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'opacity': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.05,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'scaling': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.005,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=0.001,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
},
'rotation': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.001,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': None
},
'camera_opt': {
'optimizer': AdamOptimizerConfig(
_target=<class 'torch.optim.adam.Adam'>,
lr=0.001,
eps=1e-15,
max_norm=None,
weight_decay=0
),
'scheduler': ExponentialDecaySchedulerConfig(
_target=<class 'nerfstudio.engine.schedulers.ExponentialDecayScheduler'>,
lr_pre_warmup=1e-08,
lr_final=5e-05,
warmup_steps=0,
max_steps=30000,
ramp='cosine'
)
}
},
vis='viewer_beta',
data=PosixPath('data/posters_v3'),
prompt=None,
relative_model_dir=PosixPath('nerfstudio_models'),
load_scheduler=True,
steps_per_save=2000,
steps_per_eval_batch=100,
steps_per_eval_image=100,
steps_per_eval_all_images=100000,
max_num_iterations=30000,
mixed_precision=False,
use_grad_scaler=False,
save_only_latest_checkpoint=True,
load_dir=None,
load_step=None,
load_config=None,
load_checkpoint=None,
log_gradients=False,
gradient_accumulation_steps={'camera_opt': 100, 'color': 10, 'shs': 10}
)
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Saving config to: outputs/posters_v3/gaussian-splatting/2023-12-08_224938/config.yml experiment_config.py:141
Saving checkpoints to: trainer.py:135
outputs/posters_v3/gaussian-splatting/2023-12-08_224938/nerfstudio_models
Using image downscale factor of 2 colmap_dataparser.py:471
[22:49:40] Caching / undistorting train images full_images_datamanager.py:128
[22:49:45] Caching / undistorting eval images full_images_datamanager.py:199
╭─────────────── viser ───────────────╮
│ ╷ │
│ HTTP │ http://0.0.0.0:7007 │
│ Websocket │ ws://0.0.0.0:7007 │
│ ╵ │
╰─────────────────────────────────────╯
[NOTE] Not running eval iterations since only viewer is enabled.
Use --vis {wandb, tensorboard, viewer+wandb, viewer+tensorboard} to run with eval.
No Nerfstudio checkpoint to load, so training from scratch.
Disabled comet/tensorboard/wandb event writers
/home/user/.local/lib/python3.10/site-packages/torchvision/transforms/functional.py:1603: UserWarning: The default value of the antialias parameter of all the resizing transforms (Resize(), RandomResizedCrop(), etc.) will change from None to True in v0.17, in order to be consistent across the PIL and Tensor backends. To suppress this warning, directly pass antialias=True (recommended, future default), antialias=None (current default, which means False for Tensors and True for PIL), or antialias=False (only works on Tensors - PIL will still use antialiasing). This also applies if you are using the inference transforms from the models weights: update the call to weights.transforms(antialias=True).
warnings.warn(
Printing profiling stats, from longest to shortest duration in seconds
Trainer.train_iteration: 0.9369
VanillaPipeline.get_train_loss_dict: 0.9365
Traceback (most recent call last):
File "/home/user/.local/bin/ns-train", line 8, in <module>
sys.exit(entrypoint())
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 262, in entrypoint
main(
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 247, in main
launch(
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 189, in launch
main_func(local_rank=0, world_size=world_size, config=config)
File "/home/user/nerfstudio/nerfstudio/scripts/train.py", line 100, in train_loop
trainer.train()
File "/home/user/nerfstudio/nerfstudio/engine/trainer.py", line 253, in train
loss, loss_dict, metrics_dict = self.train_iteration(step)
File "/home/user/nerfstudio/nerfstudio/utils/profiler.py", line 127, in inner
out = func(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/engine/trainer.py", line 471, in train_iteration
_, loss_dict, metrics_dict = self.pipeline.get_train_loss_dict(step=step)
File "/home/user/nerfstudio/nerfstudio/utils/profiler.py", line 127, in inner
out = func(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/pipelines/base_pipeline.py", line 306, in get_train_loss_dict
model_outputs = self._model(ray_bundle) # train distributed data parallel model if world_size > 1
File "/home/user/.local/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "/home/user/nerfstudio/nerfstudio/models/base_model.py", line 143, in forward
return self.get_outputs(ray_bundle)
File "/home/user/nerfstudio/nerfstudio/models/gaussian_splatting.py", line 588, in get_outputs
rgbs = SphericalHarmonics.apply(n, viewdirs, colors_crop)
File "/home/user/.local/lib/python3.10/site-packages/torch/autograd/function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/home/user/.local/lib/python3.10/site-packages/gsplat/sh.py", line 39, in forward
assert coeffs.shape[-2] == num_sh_bases(degree)
AssertionError
```
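For reference, a small sketch of the check that fails, assuming the usual (degree + 1)² spherical-harmonics basis count:

```python
def num_sh_bases(degree: int) -> int:
    # number of spherical-harmonics bases up to `degree`
    return (degree + 1) ** 2

# With sh_degree=4 in the config above, the rasterizer expects colors with
# coeffs.shape[-2] == num_sh_bases(4) == 25; the assertion fires when the
# color tensor was built for a different degree.
assert num_sh_bases(4) == 25
```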
Any help would be much appreciated. | closed | 2023-12-08T22:56:06Z | 2023-12-13T22:55:23Z | https://github.com/nerfstudio-project/nerfstudio/issues/2660 | [] | oscarpang | 4 |
jrieke/traingenerator | scikit-learn | 17 | Add mlflow tracking | First of all thanks for the project, it's an interesting way to take a stab at reducing the amount of boilerplate needed even for fairly simple models. Secondly, it would be interesting to implement experiment/run tracking using [MLflow][1].
I have a working example on the `Image classification_PyTorch/` template and am happy to submit a PR if you consider this of any interest.
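For reference, a minimal sketch of the tracking code the template could emit (the `train_one_epoch`/`evaluate` helpers and the parameter values are hypothetical placeholders):

```python
import mlflow

mlflow.set_experiment("image-classification")
with mlflow.start_run():
    mlflow.log_params({"lr": 1e-3, "epochs": 5})
    for epoch in range(5):
        loss = train_one_epoch(model)              # hypothetical helper
        mlflow.log_metric("train_loss", loss, step=epoch)
    mlflow.log_metric("val_acc", evaluate(model))  # hypothetical helper
```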
[1]: https://www.mlflow.org/docs/latest/index.html | open | 2021-02-03T08:48:02Z | 2021-02-03T20:36:10Z | https://github.com/jrieke/traingenerator/issues/17 | [
"existing template"
] | andodet | 1 |
scikit-learn/scikit-learn | machine-learning | 30,904 | PowerTransformer overflow warnings | ### Describe the bug
I'm running into overflow warnings using PowerTransformer in some not-very-extreme scenarios. I've been able to find at least one boundary of the problem: a vector of `[[1]] * 354 + [[0]] * 1` works fine, while `[[1]] * 355 + [[0]] * 1` warns ("overflow encountered in multiply"). An additional warning starts appearing at `[[1]] * 359 + [[0]] * 1` ("overflow encountered in reduce").
Admittedly, I haven't looked into the underlying math of Yeo-Johnson, so an overflow might make sense in that light. (If that's the case, though, perhaps this is an opportunity for a clearer warning?)
### Steps/Code to Reproduce
```python
import sys
from sklearn.preprocessing import PowerTransformer
for n in range(350, 360):
print(f"[[1]] * {n}, [[0]] * 1", file=sys.stderr)
_ = PowerTransformer().fit_transform([[1]] * n + [[0]] * 1)
print(file=sys.stderr)
```
### Expected Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
[[1]] * 356, [[0]] * 1
[[1]] * 357, [[0]] * 1
[[1]] * 358, [[0]] * 1
[[1]] * 359, [[0]] * 1
```
### Actual Results
```
[[1]] * 350, [[0]] * 1
[[1]] * 351, [[0]] * 1
[[1]] * 352, [[0]] * 1
[[1]] * 353, [[0]] * 1
[[1]] * 354, [[0]] * 1
[[1]] * 355, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 356, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 357, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 358, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
[[1]] * 359, [[0]] * 1
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:194: RuntimeWarning: overflow encountered in multiply
x = um.multiply(x, x, out=x)
/Users/*****/lib/python3.11/site-packages/numpy/_core/_methods.py:205: RuntimeWarning: overflow encountered in reduce
ret = umr_sum(x, axis, dtype, out, keepdims=keepdims, where=where)
```
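For what it's worth, a quick way to check whether a large fitted lambda is behind the overflow (an assumption on my part, not a confirmed diagnosis):

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer

X = np.asarray([[1]] * 355 + [[0]] * 1, dtype=float)
pt = PowerTransformer().fit(X)
print(pt.lambdas_)  # a large |lambda| would make (x + 1) ** lambda overflow
```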
### Versions
```shell
System:
python: 3.11.9 (main, May 16 2024, 15:17:37) [Clang 14.0.3 (clang-1403.0.22.14.1)]
executable: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/bin/python
machine: macOS-15.2-arm64-arm-64bit
Python dependencies:
sklearn: 1.6.1
pip: 24.0
setuptools: 65.5.0
numpy: 2.2.3
scipy: 1.15.2
Cython: None
pandas: None
matplotlib: None
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libomp
filepath: /Users/*****/.pyenv/versions/3.11.9/envs/disposable/lib/python3.11/site-packages/sklearn/.dylibs/libomp.dylib
version: None
``` | open | 2025-02-26T01:45:05Z | 2025-02-28T11:39:21Z | https://github.com/scikit-learn/scikit-learn/issues/30904 | [] | rcgale | 1 |
minimaxir/textgenrnn | tensorflow | 249 | Facing issue while running this code: 'utf-8' codec can't decode byte 0xa4 in position 14: invalid start byte | 'utf-8' codec can't decode byte 0xa4 in position 14: invalid start byte
The full code is attached in the Word file, along with the CSV file.
[mcqs.csv](https://github.com/minimaxir/textgenrnn/files/7950821/mcqs.csv)
[quiz.docx](https://github.com/minimaxir/textgenrnn/files/7950823/quiz.docx)
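A possible workaround, assuming the CSV is actually Windows-1252/Latin-1 encoded (byte 0xa4 is valid in those encodings): re-encode the file to UTF-8 before passing it to textgenrnn:

```python
# hypothetical filenames matching the attachment
with open("mcqs.csv", encoding="cp1252") as src:
    text = src.read()
with open("mcqs_utf8.csv", "w", encoding="utf-8") as dst:
    dst.write(text)
```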
| open | 2022-01-27T13:36:45Z | 2022-01-27T13:36:45Z | https://github.com/minimaxir/textgenrnn/issues/249 | [] | sardar-abdullah-698 | 0 |
RomelTorres/alpha_vantage | pandas | 50 | TimeSeries not working | This code does not work:
```py
ts = TimeSeries(key=ALPHAVANTAGE_API_KEY, output_format='pandas')
data, meta_data = ts.get_intraday(symbol='MSFT',interval='1min', outputsize='full')
```
It raises the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-49784970f022> in <module>()
3
4 ts = TimeSeries(key=ALPHAVANTAGE_API_KEY, output_format='pandas')
----> 5 data, meta_data = ts.get_intraday(symbol='MSFT',interval='1min', outputsize='full')
6
7 data['close'].plot()
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _format_wrapper(self, *args, **kwargs)
171 def _format_wrapper(self, *args, **kwargs):
172 call_response, data_key, meta_data_key = func(
--> 173 self, *args, **kwargs)
174 if 'json' in self.output_format.lower() or 'pandas' \
175 in self.output_format.lower():
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _call_wrapper(self, *args, **kwargs)
156 else:
157 url = '{}&apikey={}'.format(url, self.key)
--> 158 return self._handle_api_call(url), data_key, meta_data_key
159 return _call_wrapper
160
D:\Programs\Anaconda3\lib\site-packages\alpha_vantage\alphavantage.py in _retry_wrapper(self, *args, **kwargs)
75 except ValueError as err:
76 error_message = str(err)
---> 77 raise ValueError(str(error_message))
78 return _retry_wrapper
79
ValueError: Invalid API call. Please retry or visit the documentation (https://www.alphavantage.co/documentation/) for TIME_SERIES_INTRADAY.
```
I am using Python 3.6.4 on Jupyter Notebook. | closed | 2018-03-05T12:11:53Z | 2018-03-07T12:24:13Z | https://github.com/RomelTorres/alpha_vantage/issues/50 | [] | Fylipp | 3 |
keras-team/keras | data-science | 20,128 | Deserializing Error when loading models from '.keras' files in Keras 3, issue with dense layers | I am using Google Colab with the Tensorflow v2.17 and Keras v 3.4.1 libraries.
I need to save and load my models, but I haven't been able to make the '.keras' file format load correctly.
Here is the line for saving the model:
```model.save(os.path.join(model_path, 'model_' + model_name + '.keras'))```
Here is the line for loading the model:
```model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)```
This is my error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-9-882590e77519>](https://localhost:8080/#) in <cell line: 10>()
8
9 # Load the model
---> 10 model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)
11
12
3 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
454 warnings.warn(msg)
455 else:
--> 456 raise ValueError(msg)
457
458
ValueError: A total of 2 objects could not be loaded. Example error message for object <Dense name=z_mean, built=True>:
Layer 'z_mean' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
This is the model that I trained:
```
latent_dim = 32
# Encoder
encoder_input = Input(shape=(height, width, channels), name='encoder_input')
x = Conv2D(64, (3, 3), activation='relu', padding='same')(encoder_input)
# Flatten layer
shape_before_flattening = K.int_shape(x)[1:]
x = Flatten()(x)
z_mean = Dense(latent_dim, name='z_mean')(x)
z_log_var = Dense(latent_dim, name='z_log_var')(x)
# Reparameterization trick
@keras.saving.register_keras_serializable()
def sampling(args):
z_mean, z_log_var = args
epsilon = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim), mean=0., stddev=1.0)
return z_mean + K.exp(z_log_var / 2) * epsilon
z = Lambda(sampling, output_shape=(latent_dim,), name='z')([z_mean, z_log_var])
# Decoder
decoder_input = Input(K.int_shape(z)[1:])
x = Dense(np.prod(shape_before_flattening))(decoder_input)
x = Reshape(shape_before_flattening)(x)
decoder_output = Conv2D(channels, (3, 3), activation='sigmoid', padding='same')(x)
@register_keras_serializable('CustomLayer')
class CustomLayer(keras.layers.Layer):
def __init__(self, beta=1.0, **kwargs):
self.is_placeholder = True
super(CustomLayer, self).__init__(**kwargs)
self.beta = beta
self.recon_loss_metric = tf.keras.metrics.Mean(name='recon_loss')
self.kl_loss_metric = tf.keras.metrics.Mean(name='kl_loss')
def vae_loss(self, x, z_decoded, z_mean, z_log_var):
recon_loss = keras.losses.binary_crossentropy(K.flatten(x), K.flatten(z_decoded))
kl_loss = -0.5 * K.mean(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
return recon_loss, self.beta * kl_loss
def call(self, inputs):
x = inputs[0]
z_decoded = inputs[1]
z_mean = inputs[2]
z_log_var = inputs[3]
recon_loss, kl_loss = self.vae_loss(x, z_decoded, z_mean, z_log_var)
self.add_loss(K.mean(recon_loss + kl_loss))
self.recon_loss_metric.update_state(recon_loss)
self.kl_loss_metric.update_state(kl_loss)
return x
def compute_output_shape(self, input_shape):
return input_shape[0]
def get_metrics(self):
return {'recon_loss': self.recon_loss_metric.result().numpy(),
'kl_loss': self.kl_loss_metric.result().numpy()}
# Models
encoder = Model(encoder_input, [z_mean, z_log_var, z], name='encoder')
decoder = Model(decoder_input, decoder_output, name='decoder')
vae_output = decoder(encoder(encoder_input)[2])
y = CustomLayer()([encoder_input, vae_output, z_mean, z_log_var])
model = Model(encoder_input, y, name='vae')
```
This model was just used for testing the bug. I have used `tf.keras` as an alternative for loading the model, but received the same error. Interestingly, when I run the code for the first time, the following warning is included in the error output; when the same code is run again, the line is no longer included:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:576: UserWarning: Skipping variable loading for optimizer 'adam', because it has 30 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
```
I have tested the code on the latest Keras v3.5 and have gotten similar results:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:713: UserWarning: Skipping variable loading for optimizer 'adam', because it has 30 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-9-00610835a4a5>](https://localhost:8080/#) in <cell line: 10>()
8
9 # Load the model
---> 10 model = keras.models.load_model(os.path.join(model_path, 'model_' + model_name + '.keras'), custom_objects=custom_objects)
11
12
3 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
591 warnings.warn(msg)
592 else:
--> 593 raise ValueError(msg)
594
595
ValueError: A total of 2 objects could not be loaded. Example error message for object <Dense name=z_mean, built=True>:
Layer 'z_mean' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
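A possible way to check whether the `z_mean`/`z_log_var` kernels were written at all (assumption: a Keras 3 `.keras` file is a zip archive containing `model.weights.h5`; the filename below is a placeholder):

```python
import io
import zipfile
import h5py

with zipfile.ZipFile("model_vae.keras") as zf:  # hypothetical filename
    with zf.open("model.weights.h5") as fh:
        with h5py.File(io.BytesIO(fh.read()), "r") as h5:
            h5.visit(print)  # lists saved weight paths, e.g. .../z_mean/...
```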
I have tested the bug again by saving and loading the model into separate weights and json files:
```
# saving
with open(os.path.join(model_path, 'model_' + model_name + '.json'), 'w') as json_file:
json_file.write(model.to_json())
model.save_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
# loading
with open(os.path.join(model_path, 'model_' + model_name + '.json'), 'r') as json_file:
model_json = json_file.read()
model = model_from_json(model_json, custom_objects=custom_objects)
model.load_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
```
The error is at least slightly different:
```
/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py:713: UserWarning: Skipping variable loading for optimizer 'adam', because it has 34 variables whereas the saved optimizer has 22 variables.
saveable.load_own_variables(weights_store.get(inner_path))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-14-52bd158e3e0f>](https://localhost:8080/#) in <cell line: 11>()
9 model_json = json_file.read()
10 model = model_from_json(model_json, custom_objects=custom_objects)
---> 11 model.load_weights(os.path.join(model_path, 'model_' + model_name + '.weights.h5'))
12
13 # Load the encoder architecture and weights
1 frames
[/usr/local/lib/python3.10/dist-packages/keras/src/saving/saving_lib.py](https://localhost:8080/#) in _raise_loading_failure(error_msgs, warn_only)
591 warnings.warn(msg)
592 else:
--> 593 raise ValueError(msg)
594
595
ValueError: A total of 3 objects could not be loaded. Example error message for object <Conv2D name=conv2d, built=True>:
Layer 'conv2d' expected 2 variables, but received 0 variables during loading. Expected: ['kernel', 'bias']
List of objects that could not be loaded:
[<Conv2D name=conv2d, built=True>, <Dense name=z_mean, built=True>, <Dense name=z_log_var, built=True>]
```
Ultimately it would be a lot better to find out that I've been doing something wrong and I can fix this problem myself. I've been hung up on this for a while, and I have a thesis to write. | closed | 2024-08-15T23:29:07Z | 2025-02-20T02:02:38Z | https://github.com/keras-team/keras/issues/20128 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | ErrolDaRocha | 7 |
developmentseed/lonboard | data-visualization | 756 | ArcLayer example fails | ## Context
What results were you expecting? <br/>
ArcLayer example fails due to lack of geometry data in `pyarrow` table when running the `U.S. County-to-County Migration` python example.
## Resulting behaviour, error message or logs
The example works perfectly all the way up to the creation of the `ArcLayer` object layer for the map generation.
Log:
```python
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
Cell In[13], line 6
1 # value = np.array([arc["value"] for arc in arcs])
2 # get_source_position = np.array([arc["source"] for arc in arcs])
3 # get_target_position = np.array([arc["target"] for arc in arcs])
4 # table = pa.table({"value": value})
----> 6 arc_layer = ArcLayer(
7 table=table,
8 get_source_position=get_source_position,
9 get_target_position=get_target_position,
10 get_source_color=SOURCE_COLOR,
11 get_target_color=TARGET_COLOR,
12 get_width=1,
13 opacity=0.4,
14 pickable=False,
15 extensions=[brushing_extension],
16 brushing_radius=brushing_radius,
17 )
File [~/python3.10/site-packages/lonboard/_layer.py:359](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_layer.py#line=358), in BaseArrowLayer.__init__(self, table, _rows_per_chunk, **kwargs)
355 self._rows_per_chunk = rows_per_chunk
357 table_o3 = table_o3.rechunk(max_chunksize=rows_per_chunk)
--> 359 super().__init__(table=table_o3, **kwargs)
File [~/python3.10/site-packages/lonboard/_layer.py:95](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_layer.py#line=94), in BaseLayer.__init__(self, extensions, **kwargs)
88 def __init__(self, *, extensions: Sequence[BaseExtension] = (), **kwargs):
89 # We allow layer extensions to dynamically inject properties onto the layer
90 # widgets where the layer is defined. We wish to allow extensions and their
91 # properties to be passed in the layer constructor. _However_, if
93 extension_kwargs = remove_extension_kwargs(extensions, kwargs)
---> 95 super().__init__(extensions=extensions, **kwargs)
97 # Dynamically set layer traits from extensions after calling __init__
98 self._add_extension_traits(extensions)
File [~/python3.10/site-packages/lonboard/_base.py:25](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/_base.py#line=24), in BaseWidget.__init__(self, **kwargs)
22 if provided_trait_name not in layer_trait_names:
23 raise TypeError(msg.format(provided_trait_name=provided_trait_name))
---> 25 super().__init__(**kwargs)
File [~/python3.10/site-packages/ipywidgets/widgets/widget.py:503](http://localhost:8888/lab/tree/~/python3.10/site-packages/ipywidgets/widgets/widget.py#line=502), in Widget.__init__(self, **kwargs)
501 """Public constructor"""
502 self._model_id = kwargs.pop('model_id', None)
--> 503 super().__init__(**kwargs)
505 Widget._call_widget_constructed(self)
506 self.open()
File [~/python3.10/site-packages/traitlets/traitlets.py:1355](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=1354), in HasTraits.__init__(self, *args, **kwargs)
1353 for key, value in kwargs.items():
1354 if self.has_trait(key):
-> 1355 setattr(self, key, value)
1356 changes[key] = Bunch(
1357 name=key,
1358 old=None,
(...)
1361 type="change",
1362 )
1363 else:
1364 # passthrough args that don't set traits to super
File [~/python3.10/site-packages/traitlets/traitlets.py:716](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=715), in TraitType.__set__(self, obj, value)
714 if self.read_only:
715 raise TraitError('The "%s" trait is read-only.' % self.name)
--> 716 self.set(obj, value)
File [~/python3.10/site-packages/traitlets/traitlets.py:690](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=689), in TraitType.set(self, obj, value)
689 def set(self, obj: HasTraits, value: S) -> None:
--> 690 new_value = self._validate(obj, value)
691 assert self.name is not None
692 try:
File [~/python3.10/site-packages/traitlets/traitlets.py:722](http://localhost:8888/lab/tree/~/python3.10/site-packages/traitlets/traitlets.py#line=721), in TraitType._validate(self, obj, value)
720 return value
721 if hasattr(self, "validate"):
--> 722 value = self.validate(obj, value)
723 if obj._cross_validation_lock is False:
724 value = self._cross_validate(obj, value)
File [~/python3.10/site-packages/lonboard/traits.py:204](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/traits.py#line=203), in ArrowTableTrait.validate(self, obj, value)
201 geom_col_idx = get_geometry_column_index(value.schema)
203 if geom_col_idx is None:
--> 204 return self.error(obj, value, info="geometry column in table")
206 # No restriction on the allowed geometry types in this table
207 if allowed_geometry_types:
File [~/python3.10/site-packages/lonboard/traits.py:153](http://localhost:8888/lab/tree/~/python3.10/site-packages/lonboard/traits.py#line=152), in FixedErrorTraitType.error(self, obj, value, error, info)
145 else:
146 e = "The '{}' trait expected {}, not {}.".format(
147 self.name,
148 # CHANGED:
(...)
151 describe("the", value),
152 )
--> 153 raise TraitError(e)
TraitError: The 'table' trait of an ArcLayer instance expected geometry column in table, not the Table arro3.core.Table
-----------
value: Int64
```
## Environment
- OS: mac os 14.5, python venv Python 3.10.14
- Browser:Chrome
- Lonboard Version: 0.10.4
- geoarrow-c==0.1.2
- geoarrow-pandas==0.1.1
- geoarrow-pyarrow==0.1.2
- geoarrow-rust==0.1.0
- geoarrow-rust-compute==0.3.0
- geoarrow-rust-core==0.4.0b3
- geoarrow-rust-io==0.3.0
- pyarrow==19.0.0
## Steps to reproduce the bug
Describe the actions that led you to encounter the bug. Example:
1. Run the `U.S. County-to-County Migration` jupyter notebook example ( https://github.com/developmentseed/lonboard/blob/main/examples/migration.ipynb )
Thank you for all the great work and effort put into this library and the geo* libraries too!
Have a good day. | open | 2025-02-18T14:44:47Z | 2025-02-21T02:51:24Z | https://github.com/developmentseed/lonboard/issues/756 | [
"bug"
] | hadesto | 1 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 814 | Is it possible to run this project on an Intel GPU with OpenCL ? | Will a scratch implementation of this project (highly optimized for intel gpus) be any faster than that of Cuda implementation.
Or will it be any faster than intel "cpu-only" implementation when rewritten to work with plaidml. | closed | 2021-08-10T06:07:44Z | 2021-08-25T08:51:34Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/814 | [] | Akash7789 | 1 |
liangliangyy/DjangoBlog | django | 320 | 后台上传图片,地址错误 | 后台上传图片,地址是media,urls设置在非调试模式下面不处理图片。nginx没配置,这算bug了吧 | closed | 2019-09-08T08:38:25Z | 2019-09-27T06:13:33Z | https://github.com/liangliangyy/DjangoBlog/issues/320 | [] | johnson329 | 4 |
deepfakes/faceswap | deep-learning | 1,206 | Convert invokes FFmpeg with redundant & conflicting arguments | **Crash reports MUST be included when reporting bugs.**
**Describe the bug**
FaceSwap convert invokes FFmpeg on the writer side with 2 sets of conflicting output codec options. The first set is generated by write_frames in imageio-ffmpeg, the second by output_params in convert's ffmpeg module.
/mnt/data/homedir/miniconda3/envs/faceswap/bin/ffmpeg -y -f rawvideo -vcodec rawvideo -s 3840x2160 -pix_fmt rgb24 -r 29.97 -i - -an **-vcodec libx264 -pix_fmt yuv420p -crf 25** -v error -vf scale=3840:2160 **-c:v libx264 -crf 23 -preset medium** /mnt/data/workspace/18/output.mp4
https://github.com/deepfakes/faceswap/blob/183aee37e93708c0ae73845face5b4469319ebd3/plugins/convert/writer/ffmpeg.py#L95
**To Reproduce**
Steps to reproduce the behavior:
1. Run a convert
2. Inspect ffmpeg arguments with `ps aux | grep ffmpeg`
**Expected behavior**
FFmpeg invocation should not have redundant/conflicting arguments.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: CentOS 8
- Python Version 3.6.8
- Conda Version [e.g. 4.5.12]
- Commit ID 09c7d8aca3c608d1afad941ea78e9fd9b64d9219
| closed | 2022-01-22T06:49:00Z | 2022-05-16T00:25:05Z | https://github.com/deepfakes/faceswap/issues/1206 | [] | HG4554 | 1 |
ultralytics/yolov5 | machine-learning | 13,498 | freetype font and pillow | It seems the `getsize` method has been removed in Pillow versions above 9.5, so either update the method call or pin the library to 9.5.
Here is the log I get for now.
```
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 305, in plot_images
annotator.box_label(box, label, color=color)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 91, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
Exception in thread Thread-6 (plot_images):
Traceback (most recent call last):
File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/usr/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 305, in plot_images
annotator.box_label(box, label, color=color)
File "/home/vahidajalluian/yolov5-7.0/utils/plots.py", line 91, in box_label
w, h = self.font.getsize(label) # text width, height
AttributeError: 'FreeTypeFont' object has no attribute 'getsize'
```
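A possible patch for `utils/plots.py` line 91 (assumption: Pillow >= 9.2, where `getbbox` replaces the removed `getsize`); alternatively, pinning `pillow<10` avoids the change entirely:

```python
# replaces: w, h = self.font.getsize(label)
left, top, right, bottom = self.font.getbbox(label)
w, h = right - left, bottom - top  # text width, height
```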
| open | 2025-01-23T02:40:46Z | 2025-01-23T19:35:22Z | https://github.com/ultralytics/yolov5/issues/13498 | [
"bug",
"dependencies"
] | vahidajalluian | 2 |
autogluon/autogluon | scikit-learn | 3,960 | Installation in Kaggle gives error | !pip install autogluon
This gives an error in a Kaggle notebook. How can I resolve it?
File /opt/conda/lib/python3.10/site-packages/sklearn/feature_selection/_base.py:14
11 from scipy.sparse import csc_matrix, issparse
13 from ..base import TransformerMixin
---> 14 from ..utils import (
15 _is_pandas_df,
16 _safe_indexing,
17 check_array,
18 safe_sqr,
19 )
20 from ..utils._set_output import _get_output_config
21 from ..utils._tags import _safe_tags
ImportError: cannot import name '_is_pandas_df' from 'sklearn.utils' (/opt/conda/lib/python3.10/site-packages/sklearn/utils/__init__.py) | closed | 2024-03-04T04:28:30Z | 2025-01-03T11:15:04Z | https://github.com/autogluon/autogluon/issues/3960 | [
"bug: unconfirmed",
"Needs Triage"
] | sumantabasak | 11 |
nvbn/thefuck | python | 601 | Error unknown option | Hello, I have this error every time I use fuck `history: Unknown option '--exact'`
How to reproduce: use the fuck
The Fuck 3.14 using Python 2.7.10
fish, version 2.3.1
mac OS Sierra 10.12.2 | closed | 2017-02-02T15:14:02Z | 2021-08-08T20:21:18Z | https://github.com/nvbn/thefuck/issues/601 | [
"fish"
] | shiro-42 | 3 |
gradio-app/gradio | python | 10,662 | ERROR: Exception in ASGI application | I am installing Kohya on Kaggle following a guide.
It gives the error below, and I can't make any sense of it.
It looks gradio-related, since the entire code is about gradio.
Both the gradio live share and a local run on Kaggle produce the error, so I doubt it is related to gradio live share.
```
gradio==5.4.0
gradio_client==1.4.2
fastapi==0.115.8
uvicorn==0.34.0
starlette==0.45.3
anyio==3.7.1
python 3.10.12
```
```
* Running on local URL: http://127.0.0.1:7860/
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/applications.py", line 112, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.10/dist-packages/gradio/route_utils.py", line 790, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.10/dist-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.10/dist-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.10/dist-packages/fastapi/routing.py", line 214, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
File "/usr/local/lib/python3.10/dist-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.10/dist-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.10/dist-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 549, in main
gradio_api_info = api_info(request)
File "/usr/local/lib/python3.10/dist-packages/gradio/routes.py", line 579, in api_info
api_info = utils.safe_deepcopy(app.get_blocks().get_api_info())
File "/usr/local/lib/python3.10/dist-packages/gradio/blocks.py", line 2982, in get_api_info
python_type = client_utils.json_schema_to_python_type(info)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 911, in json_schema_to_python_type
type_ = _json_schema_to_python_type(schema, schema.get("$defs"))
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 965, in _json_schema_to_python_type
des = [
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 966, in <listcomp>
f"{n}: {_json_schema_to_python_type(v, defs)}{get_desc(v)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 973, in _json_schema_to_python_type
f"str, {_json_schema_to_python_type(schema['additionalProperties'], defs)}"
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 919, in _json_schema_to_python_type
type_ = get_type(schema)
File "/usr/local/lib/python3.10/dist-packages/gradio_client/utils.py", line 880, in get_type
if "const" in schema:
TypeError: argument of type 'bool' is not iterable
ERROR: Exception in ASGI application
```
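One possible stop-gap, run before launching the app (an assumption based on the traceback, not an official fix — upgrading gradio and gradio_client together may also resolve it):

```python
import gradio_client.utils as client_utils

_orig = client_utils._json_schema_to_python_type

def _patched(schema, defs=None):
    # "additionalProperties": true/false arrives as a bool and crashes get_type
    if isinstance(schema, bool):
        return "Any"
    return _orig(schema, defs)

client_utils._json_schema_to_python_type = _patched
```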

| closed | 2025-02-23T23:23:29Z | 2025-03-05T16:59:36Z | https://github.com/gradio-app/gradio/issues/10662 | [
"bug",
"pending clarification"
] | FurkanGozukara | 5 |
AutoGPTQ/AutoGPTQ | nlp | 752 | [BUG] Right Environment for custom Qwen2-VL quantization using AutoGPTQ | Hi,
For the last couple of weeks I have been struggling to quantize my custom Qwen2-VL model using GPTQ.
There is a lot of confusion regarding the correct versions of CUDA, PyTorch, AutoGPTQ, transformers, and tokenizers required to successfully quantize the model.
If anyone can help me out with this, that would be great.
For now my environment is:
CUDA : 12.1
Python : 3.12
Pytorch : 2.4
auto_gptq : 0.5.0 (also tried 0.6.0 and 0.7.0 but not working)
transformers : 4.46.3
tokenizers : 0.20.3
My quantization code :
```
from transformers import AutoTokenizer, TextGenerationPipeline
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
import logging
import ast
logging.basicConfig(
format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S"
)
pretrained_model_dir = "/home/bhavya/Desktop/bhavya/llm/LLaMA-Factory/models/qwen2_vl_lora_sft"
quantized_model_dir = "/home/bhavya/Desktop/bhavya/llm/LLaMA-Factory/models/qwen2_vl_7b_4bit_gptq"
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
# opening the file in read mode
my_file = open("/home/bhavya/Desktop/bhavya/llm/quantize_code/dataset_caliber.txt", "r")
# reading the file
data = my_file.read()
data_into_list = data.split("\n")
datasetlist = data_into_list[:-1]
# printing the data
print(len(datasetlist))
print(type(datasetlist[0]))
dataset = []
for x in datasetlist:
print('x')
print(x)
x1 = ast.literal_eval(x)
dataset.append(x1)
print('dataset')
print(dataset[0])
print(type(dataset[0]))
quantize_config = BaseQuantizeConfig(
bits=4, # quantize model to 4-bit
group_size=128, # it is recommended to set the value to 128
desc_act=False, # set to False can significantly speed up inference but the perplexity may slightly bad
)
# load un-quantized model, by default, the model will always be loaded into CPU memory
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
# quantize model, the examples should be list of dict whose keys can only be "input_ids" and "attention_mask"
model.quantize(dataset)
# save quantized model
# model.save_quantized(quantized_model_dir)
# save quantized model using safetensors
model.save_quantized(quantized_model_dir, use_safetensors=True)
# push quantized model to Hugging Face Hub.
# to use use_auth_token=True, Login first via huggingface-cli login.
# or pass explcit token with: use_auth_token="hf_xxxxxxx"
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, commit_message=commit_message, use_auth_token=True)
# alternatively you can save and push at the same time
# (uncomment the following three lines to enable this feature)
# repo_id = f"YourUserName/{quantized_model_dir}"
# commit_message = f"AutoGPTQ model for {pretrained_model_dir}: {quantize_config.bits}bits, gr{quantize_config.group_size}, desc_act={quantize_config.desc_act}"
# model.push_to_hub(repo_id, save_dir=quantized_model_dir, use_safetensors=True, commit_message=commit_message, use_auth_token=True)
# load quantized model to the first GPU
# model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
# download quantized model from Hugging Face Hub and load to the first GPU
# model = AutoGPTQForCausalLM.from_quantized(repo_id, device="cuda:0", use_safetensors=True, use_triton=False)
# inference with model.generate
# print(tokenizer.decode(model.generate(**tokenizer("auto_gptq is", return_tensors="pt").to(model.device))[0]))
# or you can also use pipeline
# pipeline = TextGenerationPipeline(model=model, tokenizer=tokenizer)
# print(pipeline("auto-gptq is")[0]["generated_text"])
```
Error:
```
Traceback (most recent call last):
File "/home/bhavya/Desktop/bhavya/llm/quantize_code/qwen2_quantize_gptq.py", line 45, in <module>
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
File "/home/bhavya/anaconda3/envs/autogptq-env/lib/python3.10/site-packages/auto_gptq/modeling/auto.py", line 75, in from_pretrained
model_type = check_and_get_model_type(pretrained_model_name_or_path, trust_remote_code)
File "/home/bhavya/anaconda3/envs/autogptq-env/lib/python3.10/site-packages/auto_gptq/modeling/_utils.py", line 305, in check_and_get_model_type
raise TypeError(f"{config.model_type} isn't supported yet.")
TypeError: qwen2_vl isn't supported yet.
```
Please help me out. | closed | 2024-12-05T08:26:54Z | 2024-12-13T09:06:34Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/752 | [
"bug"
] | bhavyajoshi-mahindra | 4 |
iterative/dvc | data-science | 9,651 | pull: produces empty directory | # Bug Report
## Description
I have DVC set up with S3 remotes, probably misconfigured. When I do `dvc pull` it creates empty directories for the data and claims to have succeeded even though the `file.dvc` file lists many files taking up much space. Subsequent use of `dvc pull` claims everything is up to date.
### Reproduce
1. Clone the git repository of a project successfully using dvc
2. `dvc add remote s3://something-i-probably-dont-have-access-to`
3. `dvc pull`
4. Confirm that no data has been obtained and no error message has been emitted.
### Expected
Either data is downloaded from the remote or an error message is emitted.
The size, number of files, and md5sum in the `.dvc` file match what is present after a `dvc pull`. Mismatches lead to error messages with commands like `dvc pull` and `dvc update`.
### Environment information
**Output of `dvc doctor`:**
```console
$ dvc doctor
DVC version: 3.1.0 (pip)
------------------------
Platform: Python 3.8.10 on Linux-5.4.0-147-generic-x86_64-with-glibc2.29
Subprojects:
dvc_data = 2.0.2
dvc_objects = 0.23.0
dvc_render = 0.5.3
dvc_task = 0.3.0
scmrepo = 1.0.4
Supports:
http (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
https (aiohttp = 3.8.4, aiohttp-retry = 2.8.3),
s3 (s3fs = 2023.6.0, boto3 = 1.26.76)
Config:
Global: /home/anne/.config/dvc
System: /etc/xdg/dvc
Cache types: <https://error.dvc.org/no-dvc-cache>
Caches: local
Remotes: s3, s3
Workspace directory: ext4 on /dev/mapper/ubuntu--vg-ubuntu--lv
Repo: dvc, git
Repo.site_cache_dir: /var/tmp/dvc/repo/af052bc392ee89f0efbc7a8ac0aa350b
```
**Additional Information (if any):**
```console
$ dvc pull --verbose
2023-06-22 13:59:55,569 DEBUG: v3.1.0 (pip), CPython 3.8.10 on Linux-5.4.0-147-generic-x86_64-with-glibc2.29
2023-06-22 13:59:55,569 DEBUG: command: /home/anne/.cache/pypoetry/virtualenvs/explore-dvc-lhhzaVKj-py3.8/bin/dvc pull --verbose
Everything is up to date.
2023-06-22 13:59:55,803 DEBUG: Analytics is enabled.
2023-06-22 13:59:55,827 DEBUG: Trying to spawn '['daemon', '-q', 'analytics', '/tmp/tmpqg__r3dm']'
2023-06-22 13:59:55,828 DEBUG: Spawned '['daemon', '-q', 'analytics', '/tmp/tmpqg__r3dm']'
``` | closed | 2023-06-22T14:00:28Z | 2023-08-04T13:05:45Z | https://github.com/iterative/dvc/issues/9651 | [
"bug",
"research",
"regression",
"A: data-sync"
] | td-anne | 21 |
tensorlayer/TensorLayer | tensorflow | 541 | 🚀🚀Real-time Face Recognition in TensorLayer | ## A discussion for:
- Face recognition algorithm
- Face recognition history
- Face recognition implementation using TensorLayer and TensorFlow.
**Feel free to add more papers and discuss here or in the [Slack channel](https://join.slack.com/t/tensorlayer/shared_invite/enQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc).**
### Background
SphereFace: Face recognition (FR) can be categorized as face identification and face verification. **Identification** classifies a face to a specific identity, while **verification** determines whether a pair of faces belongs to the same identity.
For the **closed-set protocol**, all testing identities are predefined in the training set. Therefore, closed-set FR can be well addressed as a classification problem.
For the **open-set protocol**, the testing identities are usually not in the training set, so we need to map faces to a discriminative feature space. Then face identification can be viewed as performing face verification between the probe face and every identity in the gallery (given some faces of the identities). **<--- industry usually uses this one.**
### Paper History
- [Dimensionality reduction by learning an invariant mapping. In CVPR, 2006.]()
- triplet loss
- [Deep learning face representation from predicting 10,000 classes. In CVPR, 2014.]()
- softmax loss, treats open-set FR as a multi-class classification problem
- open-set
- [Deepface: Closing the gap to human-level performance in face verifica- tion. In CVPR, 2014.]()
- softmax loss, treats open-set FR as a multi-class classification problem
- open-set
- [Deep learning face representation by joint identification-verification. In NIPS, 2014]()
- softmax loss + contrastive loss (Euclidean margin based loss)
- greatly boosting the performance.
- [Facenet: A unified embedding for face recognition and clustering. In CVPR, 2015]()
- triplet loss
- [**code** davidsandberg](https://github.com/davidsandberg/facenet)
- learn a unified face embedding, 200 million face images, current state-of-the-art FR accuracy
- [A discriminative feature learning approach for deep face recognition. In ECCV, 2016]()
- softmax loss + centre loss (Euclidean margin based loss)
SphereFace: One could notice that state-of-the-art FR methods usually adopt ideas (e.g. contrastive loss, triplet loss) from metric learning, showing open-set FR could be well addressed by discriminative metric learning.
- [Sparsifying neural network connections for face recognition. In CVPR, 2016]()
- softmax loss + contrastive loss (Euclidean margin based loss)
- [Targeting ultimate accuracy: Face recognition via deep embedding. arXiv preprint:1506.07310, 2015.]()
- ? loss
- [Large-margin softmax loss for convolutional neural networks. In ICML, 2016. 2,]()
- L-Softmax loss, also **implicitly** involves the concept of angles like SphereFace. Differently, SphereFace A-Softmax loss is developed to **explicitly** learn discriminative face embedding.
- it shows great improvement on closed-set classification problems.
SphereFace: Center loss only explicitly encourages intra-class compactness. Neither contrastive loss nor triplet loss can constrain each individual sample, and thus both require a carefully designed pair/triplet mining procedure, which is time-consuming and performance-sensitive.
- [SphereFace: Deep Hypersphere Embedding for Face Recognition. In CVPR, 2017]()
- angular softmax (A-Softmax) loss
- open-set
- haijun : 100x100 still works fine
- [**code** wy1iu](https://github.com/wy1iu/sphereface)
- We extract the deep features (SphereFace) from the output of the FC1 layer. For all experiments, the final representation of a testing face is obtained by **concatenating its original face features and its horizontally flipped features**. The score (metric) is computed by the **cosine distance** of two features.
- [CosFace: Large Margin Cosine Loss for Deep Face Recognition In ArXiv, 2018]()
- [ArcFace/InsightFace: Additive Angular Margin Loss for Deep Face Recognition. ArXiv, 2018](https://arxiv.org/abs/1801.07698)
- jiankang : follows sphereface and cosface
- [**code** insightface](https://github.com/deepinsight/insightface) (ArcFace) from Imperial College and DeepInsight using MXNET
- [**code** InsightFace_TF](https://github.com/auroua/InsightFace_TF) (ArcFace) using **TensorLayer**
- [MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices. ArXiv, 2018](https://arxiv.org/abs/1804.07573)
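
To make the margin-based losses above concrete, here is a minimal TensorFlow sketch of the additive angular margin (ArcFace) loss. This is an illustration only, not code from the linked repos; `s=64.0` and `m=0.5` follow the paper's suggested defaults:

```python
import tensorflow as tf

def arcface_loss(embeddings, weights, labels, num_classes, s=64.0, m=0.5):
    """Additive angular margin (ArcFace) loss, sketched for illustration."""
    # L2-normalize features and class weights so their dot product is cos(theta)
    embeddings = tf.nn.l2_normalize(embeddings, axis=1)
    weights = tf.nn.l2_normalize(weights, axis=0)
    cos_t = tf.matmul(embeddings, weights)  # (batch, num_classes)
    theta = tf.acos(tf.clip_by_value(cos_t, -1.0 + 1e-7, 1.0 - 1e-7))
    # add the angular margin m only to the target-class angle
    one_hot = tf.one_hot(labels, depth=num_classes)
    logits = s * tf.cos(theta + m * one_hot)
    return tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=one_hot, logits=logits))
```

The score (metric) at test time is then simply the cosine distance between two embeddings, as noted in the SphereFace bullet above.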
### Implementation Hints
- https://github.com/auroua/InsightFace_TF/blob/master/losses/face_losses.py
- http://tensorlayer.readthedocs.io/en/latest/modules/cost.html#cosine-similarity
- https://github.com/sirius-ai/MobileFaceNet_TF | closed | 2018-05-04T01:09:37Z | 2021-01-06T01:44:30Z | https://github.com/tensorlayer/TensorLayer/issues/541 | [
"discussion",
"feature_request"
] | zsdonghao | 2 |
mars-project/mars | numpy | 3,352 | A more universal function to replace subgraph in OptimizationRule | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
Currently in OptimizationRule, there're only collapsable_predecessors adding/removing and node replacing functions to handle graph mutation. However, some new rules may need to replace a piece of subgraph by adding/removing nodes and edges. Thus we need a more universal function to do this work.
| open | 2023-06-27T09:22:31Z | 2023-06-27T09:22:31Z | https://github.com/mars-project/mars/issues/3352 | [] | ericpai | 0 |
sczhou/CodeFormer | pytorch | 182 | How to generate HQ dataset? | thanks for this great job!
The resolution of FFHQ original dataset is 1024x1024, but in your paper the resolution of HQ data is 512x512. So how to generate the 512x512 HQ dataset? like resize_bilinear or resize_bicubic?
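
For reference, a minimal downsampling sketch (an editorial assumption: bicubic is a common choice in face-restoration pipelines, but the authors may use something else; the directory names are placeholders):

```python
from pathlib import Path

from PIL import Image

src_dir, dst_dir = Path("ffhq_1024"), Path("ffhq_512")
dst_dir.mkdir(exist_ok=True)

for path in src_dir.glob("*.png"):
    # bicubic resize from 1024x1024 down to 512x512
    Image.open(path).resize((512, 512), resample=Image.BICUBIC).save(dst_dir / path.name)
```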
| open | 2023-03-17T02:21:41Z | 2023-12-07T06:25:09Z | https://github.com/sczhou/CodeFormer/issues/182 | [] | YilanWang | 2 |
gradio-app/gradio | data-science | 10,398 | gradio.State is always null in JavaScript callbacks | ### Describe the bug
I'm trying to extend the `gr.Gallery` component with custom logic: the idea is to scroll the gallery to the position selected by the user.
I wrote a custom JS handler that does the job, and it works if I provide a `gr.Number` component as the input. However, I cannot do the same with `gr.State`: the debugger shows that the passed value is always `null`.
**Expected behavior**: it should be possible to pass either `gr.Number` or `gr.State` as an input to a JS handler.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
extra_js_scripts = (
r"""
<script>
const scrollToIndex = (index) => {
const thumbnails = document.querySelectorAll('.thumbnail-item');
if (index !== null && index >= 0 && index < thumbnails.length) {
thumbnails[index].scrollIntoView({ behavior: 'smooth', block: 'nearest', inline: 'center' });
} else {
console.error('Index out of bounds');
}
};
</script>
"""
)
with gr.Blocks(head=extra_js_scripts) as demo:
current_index = gr.State(0)
gallery = gr.Gallery(columns=1,
object_fit='contain',
allow_preview=False,
show_download_button=False,
show_fullscreen_button=False)
index_input = gr.Number(label="Image Index", value=0)
scroll_button = gr.Button("Scroll to Image")
index_input.change(fn=lambda x: x, inputs=index_input, outputs=current_index)
# This does not work
scroll_button.click(fn=None, js="(index) => scrollToIndex(index)", inputs=current_index)
# This works
# scroll_button.click(fn=None, js="(index) => scrollToIndex(index)", inputs=index_input)
demo.load(lambda: ["https://placebear.com/200/200" for _ in range(6)], outputs=gallery)
demo.launch()
```
### Screenshot
<img width="1673" alt="Image" src="https://github.com/user-attachments/assets/f1c212c1-2b81-4de6-b2a3-6e8d02173d5a" />
### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.12.0
gradio_client version: 1.5.4
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.4 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 1.26.4
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.5
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.2.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
I can work around it | closed | 2025-01-21T09:27:21Z | 2025-01-23T13:48:46Z | https://github.com/gradio-app/gradio/issues/10398 | [
"bug"
] | qbit- | 3 |
graphql-python/graphene | graphql | 1,025 | Interfaces lead to circular imports when splitting type definitions across files | I'm not sure if this issue has been raised before. Did a quick search through issues and didn't find anything. But I was wondering if anyone had run into this issue before and there's an existing Python solution I'm not thinking about or if this is an existing issue.
Basically, when defining a type that implements an interface, we specify the interface class that is being implemented by this new type in the Meta interface option.
```python
class Character(graphene.Interface):
    id = graphene.ID(required=True)
    name = graphene.String(required=True)
    friends = graphene.List(lambda: Character)

class Human(graphene.ObjectType):
    class Meta:
        interfaces = (Character, )  # <----- here

    starships = graphene.List(Starship)
    home_planet = graphene.String()
```
But if we would like to split the class definitions across two files:
```python
# schema.py
import graphene
from human import Human
from droid import Droid

class Character(graphene.Interface):
    id = graphene.ID(required=True)
    name = graphene.String(required=True)
    friends = graphene.List(lambda: Character)

    def resolve_type(cls, instance, info):
        if instance.type == 'DROID':
            return Droid
        return Human  # <----- here we need to import Human

....

schema = graphene.Schema(query=Query, mutation=Mutation, types=[Human])
```
and
```python
# human.py
class Human(graphene.ObjectType):
    class Meta:
        interfaces = (Character, )  # <--- would require us to import Character

    starships = graphene.List(Starship)
    home_planet = graphene.String()
```
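
A common Python-level workaround for such cycles (an editorial sketch, not part of the original question) is to defer one side of the import to call time, so `human.py` can import `Character` at module load while `schema.py` only imports `Human` and `Droid` inside the resolver:

```python
# schema.py (sketch)
import graphene

class Character(graphene.Interface):
    id = graphene.ID(required=True)
    name = graphene.String(required=True)
    friends = graphene.List(lambda: Character)

    @classmethod
    def resolve_type(cls, instance, info):
        # Imported at call time, so there is no cycle at module load.
        from human import Human
        from droid import Droid
        return Droid if instance.type == 'DROID' else Human
```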
then how would the `Human` class be able to import the `Character` class it needs to specify in the interfaces it implements without introducing circular imports? | closed | 2019-07-02T23:19:31Z | 2019-07-16T21:23:44Z | https://github.com/graphql-python/graphene/issues/1025 | [] | klairetan | 7 |
zihangdai/xlnet | nlp | 160 | XLNET Base for Malay and Indonesian languages (not an issue) | Hi! This is not an issue, I just want to say XLNET is really great and I successfully pretrained XLNET from scratch for Malay and Indonesian languages. You can read comparison and download pretrained from here, https://github.com/huseinzol05/Malaya/tree/master/xlnet
I am planning to release XLNET Large for these languages! | closed | 2019-07-14T10:31:20Z | 2019-07-28T09:46:51Z | https://github.com/zihangdai/xlnet/issues/160 | [] | huseinzol05 | 23 |
huggingface/diffusers | deep-learning | 10,395 | [Quantization] enable multi-backend `bitsandbytes` | Similar to https://github.com/huggingface/transformers/pull/31098/ | open | 2024-12-27T11:24:04Z | 2025-02-20T19:15:18Z | https://github.com/huggingface/diffusers/issues/10395 | [
"wip",
"contributions-welcome",
"quantization",
"bitsandbytes"
] | sayakpaul | 6 |
jofpin/trape | flask | 112 | When I run trape I get this error | ```
Loading trape...
[x] ERROR: cannot import name base_manager
```
How can I fix this?
Anjok07/ultimatevocalremovergui | pytorch | 1,204 | Linux, GTX 3060 - MDXNet does not use GPU | MDX-Net is not using my GPU, despite UVR recognising my GPU and having "GPU Conversion" checked. Whenever I try to use MDX-Net, processing is extremely slow and CPU usage skyrockets.
Here is my hardware:
```
OS: Pop!_OS 22.04 LTS x86_64
Host: B660M DS3H AX DDR4
Kernel: 6.6.6-76060606-generic
Shell: bash 5.1.16
Resolution: 1920x1080
DE: GNOME 42.5
WM: Mutter
WM Theme: Pop
Theme: Pop-dark [GTK2/3]
Icons: Pop [GTK2/3]
Terminal: gnome-terminal
CPU: 13th Gen Intel i5-13500 (20) @ 4
GPU: Intel AlderLake-S GT1
GPU: NVIDIA GeForce RTX 3060 Lite Has
``` | open | 2024-02-24T13:49:06Z | 2024-03-13T13:42:59Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1204 | [] | eldomtom2 | 5 |
dask/dask | numpy | 11,579 | regression in graph construction time for `Array.ravel()` | **Minimal Complete Verifiable Example**:
```python
import dask.array
import numpy as np
shape=(28, 30, 8, 1, 21, 3)
dtype=np.int64
chunksize=(1, 1, 1, 1, 1, 3)
array = dask.array.from_array(np.arange(np.prod(shape)).reshape(shape), chunks=chunksize)
%timeit array.ravel()
```
This takes about 60 ms on 2024.6.0 and about 275 ms on 2024.12.0.
Snakeviz blames `_task_spec.py`
<img width="776" alt="image" src="https://github.com/user-attachments/assets/3b231308-dbcc-4314-addc-487715c28826">
**Environment**:
- Dask version: 2024.12.0
- Python version: 3.12
- Operating System: macos
- Install method (conda, pip, source): pip
| closed | 2024-12-04T04:31:22Z | 2024-12-04T15:17:24Z | https://github.com/dask/dask/issues/11579 | [
"needs triage"
] | dcherian | 2 |
dask/dask | pandas | 11,585 | dask.array buggy with pandas multiindex.values / object dtype arrays | <!-- Please include a self-contained copy-pastable example that generates the issue if possible.
Please be concise with code posted. See guidelines below on how to provide a good bug report:
- Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports
- Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve
Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly.
-->
**Describe the issue**:
**Minimal Complete Verifiable Example**:
```python
import dask.array
import pandas as pd
import numpy as np
idx = pd.MultiIndex.from_product([list("abc"), [0, 1]])
dask.array.from_array(idx.values, chunks=-1)[0].compute()
```
Interestingly
```python
array = np.array([('a', 0), ('a', 1), ('b', 0), ('b', 1), ('c', 0), ('c', 1)], dtype=object)
dask.array.from_array(array, chunks=-1)[0].compute()
```
succeeds :/ even though that should be identical to `idx.values`
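
As an editorial aside (not from the original report): the two inputs are not actually identical, which may explain the difference. `np.array` turns a list of tuples into a 2-D object array, while `MultiIndex.values` is a 1-D object array whose elements are tuples:

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product([list("abc"), [0, 1]])
print(idx.values.shape)  # (6,)  - a 1-D object array of tuples

manual = np.array(
    [('a', 0), ('a', 1), ('b', 0), ('b', 1), ('c', 0), ('c', 1)], dtype=object
)
print(manual.shape)      # (6, 2) - the tuples were unpacked into a 2-D array
```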
**Anything else we need to know?**:
**Environment**:
- Dask version: 2024.12.0
- Python version:
- Operating System:
- Install method (conda, pip, source):
| open | 2024-12-05T16:55:49Z | 2025-02-24T02:01:25Z | https://github.com/dask/dask/issues/11585 | [
"array",
"needs attention"
] | dcherian | 9 |
tatsu-lab/stanford_alpaca | deep-learning | 193 | Are special tokens wrong? | In the vocab of llama, eos_token is "\</s\>", bos_token is "\<s\>", unk_token is "\<unk\>", and the corresponding token ids are 0, 1, 2.
So I think in train.py, [line 214-221](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L214) should be removed.
And are [DEFAULT_BOS_TOKEN](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L30) and [DEFAULT_UNK_TOKEN](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L31) wrong?
And for [line 151](https://github.com/tatsu-lab/stanford_alpaca/blob/main/train.py#L151), should we add a space between example['output'] and tokenizer.eos_token? | open | 2023-04-08T15:23:07Z | 2023-04-08T15:26:26Z | https://github.com/tatsu-lab/stanford_alpaca/issues/193 | [] | gauss-clb | 0 |
tflearn/tflearn | data-science | 762 | UrlRetrieve does not accept key argument context | So I was running the [example](https://github.com/tflearn/tflearn/blob/master/examples/nlp/lstm_generator_cityname.py) using Python 3.5.2 on Anaconda 4.2.0 and I happen to receive this particular error `TypeError: urlretrieve() got an unexpected keyword argument 'context'`.
One Stack Overflow [post](http://stackoverflow.com/questions/28575070/urllib-not-taking-context-as-a-parameter) suggested that the problem would be resolved in Python 3.4; however, it has yet to be resolved.
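
A workaround sketch (editorial, not from the original post): fetch the file with `urllib.request.urlopen`, which accepts `context` on Python 3.4.3+, and write it to disk manually; `url` and `filename` are placeholders:

```python
import ssl
import urllib.request

ctx = ssl.create_default_context()
# urlopen accepts an SSL context even where urlretrieve does not
with urllib.request.urlopen(url, context=ctx) as resp, open(filename, "wb") as f:
    f.write(resp.read())
```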
Has anyone managed to resolve this issue? Thanks. | open | 2017-05-18T01:44:43Z | 2017-05-18T13:19:38Z | https://github.com/tflearn/tflearn/issues/762 | [] | motiur | 1 |
huggingface/datasets | numpy | 6,854 | Wrong example of usage when config name is missing for community script-datasets | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name is missing.
Please pick one among the available configs: ['af_za', 'am_et', 'ar_eg', 'as_in', 'ast_es', 'az_az', 'be_by', 'bg_bg', 'bn_in', 'bs_ba', 'ca_es', 'ceb_ph', 'ckb_iq', 'cmn_hans_cn', 'cs_cz', 'cy_gb', 'da_dk', 'de_de', 'el_gr', 'en_us', 'es_419', 'et_ee', 'fa_ir', 'ff_sn', 'fi_fi', 'fil_ph', 'fr_fr', 'ga_ie', 'gl_es', 'gu_in', 'ha_ng', 'he_il', 'hi_in', 'hr_hr', 'hu_hu', 'hy_am', 'id_id', 'ig_ng', 'is_is', 'it_it', 'ja_jp', 'jv_id', 'ka_ge', 'kam_ke', 'kea_cv', 'kk_kz', 'km_kh', 'kn_in', 'ko_kr', 'ky_kg', 'lb_lu', 'lg_ug', 'ln_cd', 'lo_la', 'lt_lt', 'luo_ke', 'lv_lv', 'mi_nz', 'mk_mk', 'ml_in', 'mn_mn', 'mr_in', 'ms_my', 'mt_mt', 'my_mm', 'nb_no', 'ne_np', 'nl_nl', 'nso_za', 'ny_mw', 'oc_fr', 'om_et', 'or_in', 'pa_in', 'pl_pl', 'ps_af', 'pt_br', 'ro_ro', 'ru_ru', 'sd_in', 'sk_sk', 'sl_si', 'sn_zw', 'so_so', 'sr_rs', 'sv_se', 'sw_ke', 'ta_in', 'te_in', 'tg_tj', 'th_th', 'tr_tr', 'uk_ua', 'umb_ao', 'ur_pk', 'uz_uz', 'vi_vn', 'wo_sn', 'xh_za', 'yo_ng', 'yue_hant_hk', 'zu_za', 'all']
Example of usage:
`load_dataset('fleurs', 'af_za')`
```
Note the example of usage in the error message suggests loading "fleurs" instead of "google/fleurs". | closed | 2024-05-02T06:59:39Z | 2024-05-03T15:51:59Z | https://github.com/huggingface/datasets/issues/6854 | [
"bug"
] | albertvillanova | 0 |
harry0703/MoneyPrinterTurbo | automation | 199 | Cannot find gpt-4-turbo-preview: NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo-preview` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}} | NotFoundError: Error code: 404 - {'error': {'message': 'The model `gpt-4-turbo-preview` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Traceback:
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "E:\aivideo\MoneyPrinterTurbo\webui\Main.py", line 378, in <module>
result = tm.start(task_id=task_id, params=params)
File "E:\aivideo\MoneyPrinterTurbo\app\services\task.py", line 42, in start
video_script = llm.generate_script(video_subject=video_subject, language=params.video_language,
File "E:\aivideo\MoneyPrinterTurbo\app\services\llm.py", line 167, in generate_script
response = _generate_response(prompt=prompt)
File "E:\aivideo\MoneyPrinterTurbo\app\services\llm.py", line 130, in _generate_response
response = client.chat.completions.create(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
return func(*args, **kwargs)
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\resources\chat\completions.py", line 667, in create
return self._post(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 1208, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 897, in request
return self._request(
File "D:\Users\miniconda3\envs\MoneyPrinterTurbo\lib\site-packages\openai\_base_client.py", line 988, in _request
raise self._make_status_error_from_response(err.response) from None | closed | 2024-04-08T13:47:14Z | 2024-04-08T13:51:24Z | https://github.com/harry0703/MoneyPrinterTurbo/issues/199 | [] | vensend | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,372 | This version of ChromeDriver only supports Chrome version 114 Current browser version is 103.0.5060.53 | I tried below and it still fails
My code:
import undetected_chromedriver as uc
driver = uc.Chrome(version_main=103)
driver.get("https://example.com") | open | 2023-06-29T15:36:20Z | 2023-08-16T06:18:28Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1372 | [] | Chens11111010001 | 2 |
pandas-dev/pandas | data-science | 60,561 | BUG: set_index with pyarrow timestamp type does not produce DatetimeIndex | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [X] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import io
import pandas as pd
buf = io.StringIO("date,value\n2024-01-01 00:00:00,1\n2024-02-01 00:00:00,2")
df = pd.read_csv(buf, parse_dates=["date"])
df.set_index("date").loc["2024-01"] # works
buf = io.StringIO("date,value\n2024-01-01 00:00:00,1\n2024-02-01 00:00:00,2")
df = pd.read_csv(buf, parse_dates=["date"], dtype_backend="pyarrow", engine="pyarrow")
df.set_index("date").loc["2024-01"] # KeyError
```
### Issue Description
The pyarrow timestamp type gets put into a generic `Index` when assigned via `set_index`, so the datetime-specific `.loc` overloads do not work correctly.
### Expected Behavior
The pyarrow timestamp type should be wrapped by a DatetimeIndex
### Installed Versions
3.0.0.dev0+1696.gfae3e8034f | open | 2024-12-13T20:03:30Z | 2024-12-16T13:01:56Z | https://github.com/pandas-dev/pandas/issues/60561 | [
"Bug",
"Datetime",
"Index",
"Arrow"
] | WillAyd | 2 |
StackStorm/st2 | automation | 6,290 | release instructions location | Where are the current release instructions?
I am building a release locally so if we have instructions somewhere that would be beneficial. Also I could work on the latest release. | closed | 2024-12-16T15:24:25Z | 2025-02-08T09:52:50Z | https://github.com/StackStorm/st2/issues/6290 | [] | guzzijones | 1 |
kornia/kornia | computer-vision | 2,995 | Failed to export onnx model when I use kornia.geometry.transform.imgwarp.warp_perspective in the model forward function | ### Discussed in https://github.com/kornia/kornia/discussions/2992
<div type='discussions-op-text'>
<sup>Originally posted by **knavezl** August 23, 2024</sup>
my export model code is:
```python
input_img = torch.randn(1, 4, 3, 864, 1536).cuda()
num_cameras = 4
num_classes = 3
resolution = [360, 4, 360]
Y, Z, X = resolution
encoder_name = 'res50'

model_params = torch.load(pt_path)
state_dict = {}
for key in model_params["state_dict"].keys():
    state_dict_key = key.replace('model.', '')
    state_dict[state_dict_key] = model_params["state_dict"][key]

model = MVDet(Y, Z, X, encoder_type=encoder_name, num_cameras=num_cameras, num_classes=num_classes)
model.load_state_dict(state_dict, strict=True)
model.cuda()

torch.onnx.export(model, input_img, onnx_path, verbose=False, opset_version=13)
```
The error is:
File "/home/user/BEV/TrackTacular/WorldTrack/models/mvdet.py", line 172, in forward
feat_mems_ = warp_perspective(feat_cams_, proj_mats, (self.Y, self.X), align_corners=False)
File "/home/user/anaconda3/envs/pytorch2.1/lib/python3.9/site-packages/kornia/geometry/transform/imgwarp.py", line 126, in warp_perspective
grid = transform_points(src_norm_trans_dst_norm[:, None, None], grid)
File "/home/user/anaconda3/envs/pytorch2.1/lib/python3.9/site-packages/kornia/geometry/linalg.py", line 191, in transform_points
trans_01 = torch.repeat_interleave(trans_01, repeats=points_1.shape[0] // trans_01.shape[0], dim=0)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper_CUDA__index_select)
Have you tried to transform the model and encountered the same problem? Do you have any solutions?
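
A workaround sketch (editorial, untested on this model): trace everything on a single device, e.g. move the model and example input to CPU before export, since the error suggests a CPU tensor (the sampling grid built inside `warp_perspective`) meeting CUDA tensors during tracing:

```python
# Sketch: keep model and example input on the same (CPU) device for export.
model_cpu = model.cpu().eval()
example = input_img.cpu()
torch.onnx.export(model_cpu, example, onnx_path, verbose=False, opset_version=13)
```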
</div> | closed | 2024-08-26T09:29:33Z | 2024-08-30T04:52:38Z | https://github.com/kornia/kornia/issues/2995 | [
"bug :bug:"
] | edgarriba | 3 |
clovaai/donut | computer-vision | 4 | Local custom dataset & Potential typo in test.py | Hi, thanks for this interesting work!
I tried to use this model on a local custom dataset and followed the dataset structure as specified but it failed to load correctly. I ended up having to hard code some data loading code to make it work. It would be greatly appreciated if you guys can provide a demo or example of local dataset. Thanks!
PS: I think there may be a typo in the test.py: the '--pretrained_path' should probably be '--pretrained_model_name_or_path' ? | closed | 2022-07-25T11:53:00Z | 2022-07-29T08:58:10Z | https://github.com/clovaai/donut/issues/4 | [] | xingjianz | 1 |
dynaconf/dynaconf | fastapi | 1,081 | [RFC] Use profile link of a contributor in the CHANGELOG | ## Problem
The new release script introduced in #1078 generates a changelog without the link to the author github profile:
```
dependabot alert 21 about Django (on tests) (#1067). By Bruno Rocha.
```
## Proposal
I want it to be rendered as:
```
dependabot alert 21 about Django (on tests) (#1067). By [Bruno Rocha](https://github.com/rochacbruno).
```
## Additional Context
The reason the github link is not fetched is because `git-changelog` only uses information available in git (name and email) in the commit range to be released.
To fetch the github link, we could hit the github API:
1. Fetch a [list of users](https://docs.github.com/en/rest/users/users?apiVersion=2022-11-28#list-users) from latest release to the current one
2. Map `user-email: url`
3. Link git `user.email` with `github url` using that map
We may have to open a PR in `git-changelog` to support this. Possible approaches to present to the project:
* add the GitHub API fetch inside git-changelog
* make it possible to provide a map (`user-email: url`), which should be available internally the template.
* This map could be a simple toml file
* We should generate this file externally using the GH API.
Possibly, we could integrate that solution with fixing the [`update_contributors`](https://github.com/dynaconf/dynaconf/blob/master/.github/workflows/update_contributors.yml) workflow, which is broken.
| open | 2024-03-22T17:42:13Z | 2024-07-08T18:37:54Z | https://github.com/dynaconf/dynaconf/issues/1081 | [
"Not a Bug",
"RFC",
"good first issue"
] | pedro-psb | 0 |
d2l-ai/d2l-en | deep-learning | 2,022 | Where is internal state used in train_2d | In `train_2d` defined in [gd](https://github.com/d2l-ai/d2l-en/blob/master/chapter_optimization/gd.md), it says `s1` and `s2` are internal state variables that will be used later, but where exactly are those variable used are not clear, even after reading the following sections.
Perhaps it is better to put a reference in the comments, since it is confusing whether the internal states will be used in the same section later or in the same chapter later.
```
def train_2d(trainer, steps=20, f_grad=None): #@save
"""Optimize a 2D objective function with a customized trainer."""
# `s1` and `s2` are internal state variables that will be used later
x1, x2, s1, s2 = -5, -2, 0, 0
results = [(x1, x2)]
for i in range(steps):
if f_grad:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2, f_grad)
else:
x1, x2, s1, s2 = trainer(x1, x2, s1, s2)
results.append((x1, x2))
print(f'epoch {i + 1}, x1: {float(x1):f}, x2: {float(x2):f}')
return results
``` | closed | 2022-01-25T10:51:39Z | 2022-01-26T22:45:50Z | https://github.com/d2l-ai/d2l-en/issues/2022 | [] | shanmo | 0 |
lepture/authlib | flask | 148 | RFE: integration with FastAPI/Starlette | [FastAPI ](https://github.com/tiangolo/fastapi)is rapidly gaining popularity as an API framework. It would be great if there was an integration client for FastAPI like there is for Flask etc.
FastAPI doesn't have a plugin system like Flask, but Starlette supports middlewares, and FastAPI supports dependency injection, so I think it should be possible.
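
As a sketch of the dependency-injection shape such an integration could take (all names below are hypothetical and purely illustrative, not an existing Authlib API):

```python
from fastapi import Depends, FastAPI

app = FastAPI()

def get_oauth_client():
    # Hypothetical dependency: would build an Authlib client from app
    # settings, analogous to how the Flask integration is configured.
    ...

@app.get("/login")
async def login(client=Depends(get_oauth_client)):
    # Hypothetical: redirect the user to the provider's authorization URL.
    return await client.authorize_redirect("https://example.com/auth/callback")
```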
| closed | 2019-09-14T23:32:58Z | 2019-10-05T13:05:45Z | https://github.com/lepture/authlib/issues/148 | [
"client"
] | jonathanunderwood | 7 |
koaning/scikit-lego | scikit-learn | 617 | [FEATURE] VarianceThresholdClassifier | You can use quantile regression tricks to make predictions about quantiles.
But what if, at prediction time, you'd like to predict `P(y >= value)`?
To answer that question you'd need more than just a quantile; you'd need a distribution prediction instead.
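
One concrete shape this could take (a sketch under an assumed Gaussian predictive distribution; any estimator that predicts a per-sample mean and standard deviation would do):

```python
from scipy.stats import norm

def prob_geq(mu, sigma, value):
    # P(y >= value) under a Normal(mu, sigma) predictive distribution;
    # sf is the survival function, i.e. 1 - CDF.
    return norm.sf(value, loc=mu, scale=sigma)

print(prob_geq(mu=10.0, sigma=2.0, value=12.0))  # ~0.159
```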
So maybe there's an opportunity for a component here. | closed | 2024-02-09T10:45:55Z | 2024-02-20T21:15:54Z | https://github.com/koaning/scikit-lego/issues/617 | [
"enhancement"
] | koaning | 1 |
ethanopp/fitly | dash | 13 | peloton_auto_bookmark_metric is an invalid keyword argument for athlete | Hi! Having some trouble getting started. I've pulled the latest image from dockerhub, but the container is crashing:
```console
~/$ docker run -e MODULE_NAME=src.fitly.app -e VARIABLE_NAME=server -p 8050:80 -v /home/me/fitly:/app/config ethanopp/fitly:latest
Checking for script in /app/prestart.sh
Running script /app/prestart.sh
Running inside /app/prestart.sh, you could add migrations to this file, e.g.:
#! /usr/bin/env bash
# Let the DB start
sleep 10;
# Run migrations
alembic upgrade head
{"loglevel": "info", "workers": 8, "bind": "0.0.0.0:80", "workers_per_core": 2.0, "host": "0.0.0.0", "port": "80"}
Traceback (most recent call last):
File "/usr/local/bin/gunicorn", line 8, in <module>
sys.exit(run())
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 58, in run
WSGIApplication("%(prog)s [OPTIONS] [APP_MODULE]").run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 228, in run
super().run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 72, in run
Arbiter(self).run()
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 58, in __init__
self.setup(app)
File "/usr/local/lib/python3.7/site-packages/gunicorn/arbiter.py", line 118, in setup
self.app.wsgi()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 49, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.7/site-packages/gunicorn/app/wsgiapp.py", line 39, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.7/site-packages/gunicorn/util.py", line 358, in import_app
mod = importlib.import_module(module)
File "/usr/local/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/app/src/fitly/app.py", line 13, in <module>
db_startup(app)
File "/app/src/fitly/__init__.py", line 86, in db_startup
peloton_auto_bookmark_metric='readiness'
File "<string>", line 4, in __init__
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 433, in _initialize_instance
manager.dispatch.init_failure(self, args, kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
with_traceback=exc_tb,
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
raise exception
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/orm/state.py", line 430, in _initialize_instance
return manager.original_init(*mixed[1:], **kwargs)
File "/usr/local/lib/python3.7/site-packages/sqlalchemy/ext/declarative/base.py", line 840, in _declarative_constructor
"%r is an invalid keyword argument for %s" % (k, cls_.__name__)
TypeError: 'peloton_auto_bookmark_metric' is an invalid keyword argument for athlete
```
I was previously getting some configuration errors, but I worked through those, and I'm now stuck at this error. Happy to provide any additional info. Thanks! | closed | 2021-01-04T04:45:28Z | 2021-01-05T03:55:59Z | https://github.com/ethanopp/fitly/issues/13 | [] | spawn-github | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 731 | ValueError: Dimension has to be a list or tuple | ```
Traceback (most recent call last):
  File "<ipython-input-91-3ab6d73131bd>", line 1, in <module>
    runfile('/Users/sameepshah/Desktop/Data/Practice/skoptHyperParm.py', wdir='/Users/sameepshah/Desktop/Data/Practice')
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/spyder/utils/site/sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/Users/sameepshah/Desktop/Data/Practice/skoptHyperParm.py", line 315, in <module>
    x0=default_parameters)
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/optimizer/gp.py", line 214, in gp_minimize
    space = normalize_dimensions(dimensions)
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/utils.py", line 472, in normalize_dimensions
    space = Space(dimensions)
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 570, in __init__
    self.dimensions = [check_dimension(dim) for dim in dimensions]
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 570, in <listcomp>
    self.dimensions = [check_dimension(dim) for dim in dimensions]
  File "/Users/sameepshah/anaconda3/lib/python3.6/site-packages/skopt/space/space.py", line 70, in check_dimension
    raise ValueError("Dimension has to be a list or tuple.")
ValueError: Dimension has to be a list or tuple.
```
Hi guys, I was trying to run your posted hyperparameter tuning example and I am getting this error; help would be much appreciated.
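
For reference, an editorial sketch of what `gp_minimize` expects (`objective` and `default_parameters` are assumed placeholders): every search-space entry must be a `skopt` `Dimension` or a `(low, high)` list/tuple, and the error above typically means one entry is a bare value instead:

```python
from skopt import gp_minimize
from skopt.space import Categorical, Integer, Real

dimensions = [
    Real(1e-6, 1e-2, prior="log-uniform", name="learning_rate"),
    Integer(1, 5, name="num_dense_layers"),
    Categorical(["relu", "sigmoid"], name="activation"),
]

result = gp_minimize(objective, dimensions, x0=default_parameters, n_calls=20)
```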
Thanks | closed | 2018-11-08T07:12:36Z | 2020-08-19T07:39:39Z | https://github.com/scikit-optimize/scikit-optimize/issues/731 | [] | Liquidten | 6 |
huggingface/transformers | deep-learning | 36,659 | Qwen2 MoE manual `head_dim` | ### Feature request
https://github.com/huggingface/transformers/blob/81aa9b2e07b359cd3555c118010fd9f26c601e54/src/transformers/models/qwen2_moe/modeling_qwen2_moe.py#L317
For qwen2 moe, `head_dim` is now forced to be `hidden_size // num_heads`.
### Motivation
manual `head_dim` setting support in llama, mistal, mixtral modeling
### Your contribution
PR | open | 2025-03-12T07:43:54Z | 2025-03-12T12:28:12Z | https://github.com/huggingface/transformers/issues/36659 | [
"Feature request"
] | yunju63 | 1 |
sktime/pytorch-forecasting | pandas | 1,496 | Why does NHiTS need the target variable specified in the time_varying_unknown_reals attribute? | I was wondering why do I need to specify the target variable twice, when building a `TimeSeriesDataset` for a **NHiTS**? Once for the attribute 'target' and once for 'time_varying_unknown_reals'. If I don't specify it for the second attribute, I get a `ValueError: [target_name] is not in list.` | closed | 2024-01-22T23:17:49Z | 2024-09-22T19:06:51Z | https://github.com/sktime/pytorch-forecasting/issues/1496 | [] | TeodorChiaburu | 2 |