repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
psf/requests | python | 6,530 | Automatically Close Requests Session After Specified Timeout, Even with Retries | **Description:**
Currently, when using the requests library in Python to create a session with a timeout and there are retries implemented within the request, the session remains open even after the specified timeout has elapsed. This behavior can lead to resource leakage and unexpected behavior when managing sessions with timeouts.
**Proposal:**
Add functionality to automatically close a requests session after a specified timeout, even if there are retries in the request itself. This enhancement would improve the library's usability and ensure that sessions are properly managed, even in cases involving retries.
**Use Case:**
As a developer, I often need to make HTTP requests with retries in case of transient errors. However, I also need to ensure that the session is closed after a certain timeout to prevent resource leaks. This feature would enable me to have both retry functionality and automatic session closure without having to implement custom solutions.
**Suggested Implementation:**
One way to implement this feature is to introduce an optional session_timeout parameter when creating a session or making requests. This parameter would specify the maximum duration the session can remain open, and if the timeout is reached, the session would be closed automatically, regardless of ongoing retries.
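A rough sketch of how this might look from the caller's side — the `session_timeout` argument is hypothetical (it does not exist in requests today), and the timer-based workaround below only approximates the behavior with the current API; the URL and timeout values are illustrative:
```python
import threading
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Hypothetical proposed API:
# session = requests.Session(session_timeout=30)

# Approximation with the current API: a session with retries, plus an external
# timer that force-closes the session once the overall deadline elapses,
# regardless of ongoing retries.
session = requests.Session()
session.mount("https://", HTTPAdapter(max_retries=Retry(total=5, backoff_factor=1)))

timer = threading.Timer(30.0, session.close)  # 30 s overall budget
timer.start()
try:
    response = session.get("https://example.com", timeout=5)  # per-attempt timeout
finally:
    timer.cancel()
    session.close()
```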
| closed | 2023-09-14T10:34:35Z | 2024-09-15T00:04:22Z | https://github.com/psf/requests/issues/6530 | [] | shoval-c | 1 |
raphaelvallat/pingouin | pandas | 262 | Support for Bayesian Credible Intervals | Hello, is there an implementation for Bayesian credible intervals as there is for frequentist confidence intervals?
https://en.wikipedia.org/wiki/Credible_interval
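For context, a minimal sketch (plain NumPy, not the Pingouin API) of reading an equal-tailed 95% credible interval off posterior samples — the samples here are simulated purely for illustration:
```python
import numpy as np

# Posterior samples would normally come from an MCMC sampler; simulated here.
rng = np.random.default_rng(0)
posterior_samples = rng.normal(loc=0.3, scale=0.1, size=10_000)

lower, upper = np.percentile(posterior_samples, [2.5, 97.5])
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```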
Helpful Literature:
1. https://jakevdp.github.io/blog/2014/06/12/frequentism-and-bayesianism-3-confidence-credibility/ | closed | 2022-04-15T10:22:07Z | 2023-11-13T19:30:39Z | https://github.com/raphaelvallat/pingouin/issues/262 | [
"feature request :construction:"
] | filipmarkoski | 2 |
AutoViML/AutoViz | scikit-learn | 61 | No plot visible in local Jupyter | ```python
from autoviz.AutoViz_Class import AutoViz_Class
%matplotlib
AV = AutoViz_Class()
df = AV.AutoViz(filename='', dfte=train, depVar='Species', verbose=1)
```
Using matplotlib backend: Qt5Agg
Shape of your Data Set loaded: (150, 5)
############## C L A S S I F Y I N G V A R I A B L E S ####################
Classifying variables in data set...
Number of Numeric Columns = 4
Number of Integer-Categorical Columns = 0
Number of String-Categorical Columns = 0
Number of Factor-Categorical Columns = 0
Number of String-Boolean Columns = 0
Number of Numeric-Boolean Columns = 0
Number of Discrete String Columns = 0
Number of NLP String Columns = 0
Number of Date Time Columns = 0
Number of ID Columns = 0
Number of Columns to Delete = 0
4 Predictors classified...
No variables removed since no ID or low-information variables found in data set
################ Multi_Classification VISUALIZATION Started #####################
Data Set Shape: 150 rows, 5 cols
Data Set columns info:
* SepalLengthCm: 0 nulls, 35 unique vals, most common: {5.0: 10, 5.1: 9}
* SepalWidthCm: 0 nulls, 23 unique vals, most common: {3.0: 26, 2.8: 14}
* PetalLengthCm: 0 nulls, 43 unique vals, most common: {1.5: 14, 1.4: 12}
* PetalWidthCm: 0 nulls, 22 unique vals, most common: {0.2: 28, 1.3: 13}
* Species: 0 nulls, 3 unique vals, most common: {'Iris-setosa': 50, 'Iris-versicolor': 50}
--------------------------------------------------------------------
Columns to delete:
' []'
Boolean variables %s
' []'
Categorical variables %s
' []'
Continuous variables %s
" ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm']"
Discrete string variables %s
' []'
Date and time variables %s
' []'
ID variables %s
' []'
Target variable %s
' Species'
Total Number of Scatter Plots = 10
No categorical or boolean vars in data set. Hence no pivot plots...
No categorical or numeric vars in data set. Hence no bar charts.
Time to run AutoViz = 2 seconds
###################### AUTO VISUALIZATION Completed ########################
but no plot is shown.
In Kaggle it was working fine:
https://www.kaggle.com/gauravduttakiit/multi-classification-problem-iris/notebook | closed | 2022-02-03T15:42:44Z | 2022-02-15T13:12:26Z | https://github.com/AutoViML/AutoViz/issues/61 | [] | GDGauravDutta | 2 |
modoboa/modoboa | django | 2,510 | Missing requirement: asgiref | See https://github.com/modoboa/modoboa/discussions/2509
| closed | 2022-05-09T13:17:40Z | 2022-05-10T14:45:51Z | https://github.com/modoboa/modoboa/issues/2510 | [
"bug"
] | tonioo | 0 |
paulpierre/RasaGPT | fastapi | 31 | PDF integration | Thank you for the awesome repo. Is it also possible to add a pdf and use the chatbot to answer questions from that PDF? | open | 2023-06-13T10:28:12Z | 2023-06-13T10:28:12Z | https://github.com/paulpierre/RasaGPT/issues/31 | [] | kargarisaac | 0 |
Anjok07/ultimatevocalremovergui | pytorch | 711 | Error | Last Error Received:
Process: Ensemble Mode
If this error persists, please contact the developers with the error details.
Raw Error Details:
ZeroDivisionError: "float division by zero"
Traceback Error: "
File "UVR.py", line 4716, in process_start
File "separate.py", line 672, in seperate
File "separate.py", line 803, in inference_vr
File "separate.py", line 767, in _execute
"
Error Time Stamp [2023-08-01 18:42:08]
Full Application Settings:
vr_model: Choose Model
aggression_setting: 10
window_size: 512
batch_size: 4 | open | 2023-08-01T16:42:28Z | 2023-08-01T16:42:28Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/711 | [] | TheDencker | 0 |
mlfoundations/open_clip | computer-vision | 740 | Fine tuning on a pretrained model can lead to poor performance | When I fine-tuned ViT-B-16 (laion2b_s34b_b88k) on the LAION-400M dataset, I found that its zero-shot performance on ImageNet decreased by more than 20% with training. And the more you train, the more performance decreases. | closed | 2023-11-15T13:58:55Z | 2023-11-16T15:09:11Z | https://github.com/mlfoundations/open_clip/issues/740 | [] | lyc200522 | 6 |
scikit-optimize/scikit-optimize | scikit-learn | 716 | Integrated acquisition function | In the following paper https://arxiv.org/pdf/1206.2944.pdf, the authors introduced the integrated acquisition function. Would people be interested in adding this feature to scikit-optimize? | open | 2018-08-29T02:11:07Z | 2020-03-06T12:41:29Z | https://github.com/scikit-optimize/scikit-optimize/issues/716 | [
"Enhancement"
] | aritrasen | 1 |
holoviz/panel | matplotlib | 7,113 | 'katex' extension was not loaded | I was trying to improve the LaTeX reference guide using the Panel `main` branch.
When serving the example below I see
```bash
WARNING:param.main: pn.extension was initialized but 'katex' extension was not loaded. In order for the required resources to be initialized ensure the extension is loaded with the following argument(s):
```
This is confusing for users and should not be shown.
----
```python
import sympy as sp
import panel as pn
pn.extension("mathjax")
# Define a symbol and a symbolic expression using SymPy
x = sp.symbols('x')
expression = sp.integrate(sp.sin(x)**2, x)
# Create a LaTeX pane to display the expression
latex_pane = pn.pane.LaTeX(expression, styles={'font-size': '20px'})
# Serve the panel
pn.Column(
"# A sympy expression rendered in Panel: ", latex_pane
).servable()
``` | closed | 2024-08-09T18:20:41Z | 2024-08-27T09:05:43Z | https://github.com/holoviz/panel/issues/7113 | [] | MarcSkovMadsen | 0 |
2noise/ChatTTS | python | 190 | How can I use my own voice? | How can I use my own voice (timbre) for TTS? Does this library provide a way to train on my own recorded voice samples?
Also, many thanks to the author.
| closed | 2024-06-02T03:34:28Z | 2024-07-20T08:34:05Z | https://github.com/2noise/ChatTTS/issues/190 | [
"stale"
] | alamise | 5 |
Anjok07/ultimatevocalremovergui | pytorch | 1,093 | [SOLVED] Error when using MDX23C-8KFFT-InstVoc_HQ model | UVR version: latest (for Mac Silicon)
OS: Sonoma latest revision
Input file: WAV stereo (about 25 minutes long)
```
Last Error Received:
Process: MDX-Net
If this error persists, please contact the developers with the error details.
Raw Error Details:
NotImplementedError: "convolution_overrideable not implemented. You are likely triggering this with tensor backend other than CPU/CUDA/MKLDNN, if this is intended, please use TORCH_LIBRARY_IMPL to override this function "
Traceback Error: "
File "UVR.py", line 6584, in process_start
File "separate.py", line 623, in seperate
File "separate.py", line 742, in demix
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib_v5/tfc_tdf_v3.py", line 232, in forward
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "lib_v5/tfc_tdf_v3.py", line 139, in forward
File "torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/conv.py", line 460, in forward
return self._conv_forward(input, self.weight, self.bias)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "torch/nn/modules/conv.py", line 456, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
"
Error Time Stamp [2024-01-09 11:58:52]
Full Application Settings:
vr_model: UVR-DeEcho-DeReverb
aggression_setting: 5
window_size: 512
mdx_segment_size: 8192
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: Choose Model
segment: Default
overlap: 0.25
overlap_mdx: 0.99
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: MDX23C-InstVoc HQ
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: Default
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: False
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: 32-bit Float
cuda_set: Default
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: Vocals
``` | open | 2024-01-09T09:01:01Z | 2024-01-10T08:33:16Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1093 | [] | COOLak | 1 |
plotly/dash-cytoscape | plotly | 85 | [Feature request]: Integration with maps | Hi Cytoscape team. First of all: Great work!
It would also be great if there was an easy way to overlay cytoscape elements on a map, e.g. like the integration of Mapbox with Plotly Scatter plots.
Or is there already a documented or undocumented way to do this? | open | 2020-04-29T10:23:54Z | 2021-05-19T10:38:45Z | https://github.com/plotly/dash-cytoscape/issues/85 | [
"suggestion"
] | optimulate | 3 |
jina-ai/serve | machine-learning | 5,362 | Batch indexing documents on a CPU within a docker container fails | **Describe the bug**
The test case below, where a Flow is initiated and bulk documents (10,000+) are indexed, spits out the error below after briefly running on CPU machines.
**To reproduce**
**Build image:**
docker build -t jina-app .
**Run image:**
docker run -t -p 8080:8080 jina-app
**Dockerfile:**
```dockerfile
FROM jinaai/jina:3.9-standard
ADD . /app
ENV JINA_LOG_LEVEL=DEBUG
WORKDIR /app
ENTRYPOINT ["python", "/app/server.py"]
EXPOSE 8080
```
**server.py:**
```python
from docarray import DocumentArray, Document
from jina import Flow, Executor, requests
import re
f = (
Flow(port_expose=8080)
.add(
name="encoder",
uses="jinahub://TransformerTorchEncoder",
# replicas=8,
uses_with={
"traversal_paths": "@r, c, cc, ccc",
'pretrained_model_name_or_path': 'sentence-transformers/all-MiniLM-L6-v2'
},
install_requirements=True,
)
)
da = []
for x in range(10000):
da.append(Document(id=str(x + 1), text="Hello World i am a text", chunks=[Document(text="Hello World i am a large text. "*100) for i in range(10)]))
with f:
d = f.index(inputs=DocumentArray(da), show_progress=True, return_responses = True)
```
**ERROR LOG:**
```
DEBUG encoder/rep-0@33 start listening on 0.0.0.0:60841
DEBUG encoder/rep-0@ 1 ready and listening [11/07/22 19:28:28]
──────────────────────────────────────────── Flow is ready to serve! ────────────────────────────────────────────
╭────────────── Endpoint ───────────────╮
│   Protocol                     GRPC   │
│   Local                0.0.0.0:8080   │
│   Private              x.x.x.x:8080   │
│   Public              x.x.x.x.:8080   │
╰────────────────────────────────────────╯
DEBUG Flow@ 1 2 Deployments (i.e. 2 Pods) are running in this Flow [11/07/22 19:28:28]
Working... 0:00:00 0% ETA: -:--:-- DEBUG encoder/rep-0@33 got an endpoint discovery request [11/07/22 19:28:29]
DEBUG encoder/rep-0@33 recv DataRequest at /index with id: 20e78aa694094f58a3300ad3c935b798
Working... 0:01:43 0% ETA: -:--:-- DEBUG encoder/rep-0@33 recv DataRequest at /index with id: 42a6446d695d418d9f075815f51f394f [11/07/22 19:30:12]
Working... 0:01:50 0% ETA: -:--:-- INFO: 127.0.0.1:39024 - "POST /post HTTP/1.1" 200 OK
Working... 0:03:31 1% ETA: -:--:-- DEBUG encoder/rep-0@33 recv DataRequest at /index with id: 7aadb347dc2f4931b9b5bcea5c85fa90 [11/07/22 19:31:59]
Working... 0:03:39 1% ETA: -:--:-- INFO: 127.0.0.1:39026 - "POST /post HTTP/1.1" 200 OK
Working... 0:05:00 2% ETA: -:--:--
DEBUG gateway/rep-0@ 1 waiting for ready or shutdown signal from runtime [11/07/22 19:33:29]
DEBUG gateway/rep-0@ 1 terminate
DEBUG gateway/rep-0@ 1 terminating the runtime process
DEBUG gateway/rep-0@ 1 runtime process properly terminated
INFO: Shutting down
INFO: 127.0.0.1:39182 - "POST /post HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/uvicorn/protocols/http/httptools_impl.py", line 405, in run_asgi
self.scope, self.receive, self.send
File "/usr/local/lib/python3.7/site-packages/uvicorn/middleware/proxy_headers.py", line 78, in __call__
return await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/fastapi/applications.py", line 270, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/applications.py", line 124, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 184, in __call__
raise exc
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/errors.py", line 162, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/exceptions.py", line 75, in __call__
raise exc
File "/usr/local/lib/python3.7/site-packages/starlette/middleware/exceptions.py", line 64, in __call__
await self.app(scope, receive, sender)
File "/usr/local/lib/python3.7/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
raise e
File "/usr/local/lib/python3.7/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 680, in __call__
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 275, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.7/site-packages/starlette/routing.py", line 65, in app
response = await func(request)
File "/usr/local/lib/python3.7/site-packages/fastapi/routing.py", line 232, in app
dependant=dependant, values=values, is_coroutine=is_coroutine
File "/usr/local/lib/python3.7/site-packages/fastapi/routing.py", line 160, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/lib/python3.7/site-packages/jina/serve/runtimes/gateway/http/app.py", line 199, in post
request_generator(**req_generator_input)
File "/usr/local/lib/python3.7/site-packages/jina/serve/runtimes/gateway/http/app.py", line 380, in _get_singleton_result
async for k in streamer.stream(request_iterator=request_iterator):
File "/usr/local/lib/python3.7/site-packages/jina/serve/stream/__init__.py", line 73, in stream
async for response in async_iter:
File "/usr/local/lib/python3.7/site-packages/jina/serve/stream/__init__.py", line 202, in _stream_requests
response = self._result_handler(future.result())
concurrent.futures._base.CancelledError
INFO: 127.0.0.1:39180 - "POST /post HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
```
---
**Docker images used:**
jinaai/jina:3.9-standard
jinaai/jina:3.10.1-py39-standard
| closed | 2022-11-07T16:36:45Z | 2022-11-09T00:37:30Z | https://github.com/jina-ai/serve/issues/5362 | [] | dpanpat | 11 |
gradio-app/gradio | deep-learning | 10,742 | [BUG] when uploading image for inpainting | ### Describe the bug
When I click on run, I get an error:
the size of the file is less than 1 KB
### Have you searched existing issues? ๐
- [x] I have searched and found no existing issues
### Reproduction
clone repo:
```
https://huggingface.co/spaces/Himanshu806/fluxInpaint-testing
```
and set up .env,
then run the app.py file.
It works in the Docker image, but when I run it using a Python venv, I get the error.
### Screenshot
<img width="1680" alt="Image" src="https://github.com/user-attachments/assets/fcda4724-c290-4fa7-aed0-88e22a27e80d" />
### Logs
```shell
logging set to debug:
* Running on public URL: https://a5814071f4d9704665.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from the terminal in the working directory to deploy to Hugging Face Spaces (https://huggingface.co/spaces)
2025-03-06 11:13:40 - DEBUG - Starting new HTTPS connection (1): huggingface.co:443
2025-03-06 11:13:40 - DEBUG - https://huggingface.co:443 "HEAD /api/telemetry/gradio/launched HTTP/1.1" 200 0
2025-03-06 11:14:01 - DEBUG - Calling on_part_begin with no data
2025-03-06 11:14:01 - DEBUG - Calling on_header_field with data[42:61]
2025-03-06 11:14:01 - DEBUG - Calling on_header_value with data[63:89]
2025-03-06 11:14:01 - DEBUG - Calling on_header_end with no data
2025-03-06 11:14:01 - DEBUG - Calling on_headers_finished with no data
2025-03-06 11:14:01 - DEBUG - Calling on_part_data with data[93:94]
2025-03-06 11:14:01 - DEBUG - Calling on_part_end with no data
2025-03-06 11:14:01 - DEBUG - Calling on_part_begin with no data
2025-03-06 11:14:01 - DEBUG - Calling on_header_field with data[138:157]
2025-03-06 11:14:01 - DEBUG - Calling on_header_value with data[159:185]
2025-03-06 11:14:01 - DEBUG - Calling on_header_end with no data
2025-03-06 11:14:01 - DEBUG - Calling on_headers_finished with no data
2025-03-06 11:14:01 - DEBUG - Calling on_part_data with data[189:190]
2025-03-06 11:14:01 - DEBUG - Calling on_part_end with no data
2025-03-06 11:14:01 - DEBUG - Calling on_end with no data
2025-03-06 11:14:33 - DEBUG - Calling on_part_begin with no data
2025-03-06 11:14:33 - DEBUG - Calling on_header_field with data[42:61]
2025-03-06 11:14:33 - DEBUG - Calling on_header_value with data[63:110]
2025-03-06 11:14:33 - DEBUG - Calling on_header_end with no data
2025-03-06 11:14:33 - DEBUG - Calling on_header_field with data[112:124]
2025-03-06 11:14:33 - DEBUG - Calling on_header_value with data[126:150]
2025-03-06 11:14:33 - DEBUG - Calling on_header_end with no data
2025-03-06 11:14:33 - DEBUG - Calling on_headers_finished with no data
2025-03-06 11:14:33 - DEBUG - Calling on_part_data with data[154:44530]
2025-03-06 11:14:33 - DEBUG - Calling on_part_end with no data
2025-03-06 11:14:33 - DEBUG - Calling on_end with no data
versions:
(venv) ubuntu@ip-172-31-24-219:~/fluxinpaint$ pip3 show gradio
Name: gradio
Version: 5.14.0
Summary: Python library for easily interacting with trained machine learning models
Home-page: https://github.com/gradio-app/gradio
Author:
Author-email: Abubakar Abid <gradio-team@huggingface.co>, Ali Abid <gradio-team@huggingface.co>, Ali Abdalla <gradio-team@huggingface.co>, Dawood Khan <gradio-team@huggingface.co>, Ahsen Khaliq <gradio-team@huggingface.co>, Pete Allen <gradio-team@huggingface.co>, รmer Faruk รzdemir <gradio-team@huggingface.co>, Freddy A Boulton <gradio-team@huggingface.co>, Hannah Blair <gradio-team@huggingface.co>
License:
Location: /home/ubuntu/fluxinpaint/venv/lib/python3.12/site-packages
Requires: aiofiles, anyio, fastapi, ffmpy, gradio-client, httpx, huggingface-hub, jinja2, markupsafe, numpy, orjson, packaging, pandas, pillow, pydantic, pydub, python-multipart, pyyaml, ruff, safehttpx, semantic-version, starlette, tomlkit, typer, typing-extensions, uvicorn
Required-by:
(venv) ubuntu@ip-172-31-24-219:~/fluxinpaint$ python3 --version
Python 3.12.6
(venv) ubuntu@ip-172-31-24-219:~/fluxinpaint$
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.14.0
gradio_client version: 1.7.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.0 is not installed.
httpx: 0.28.1
huggingface-hub: 0.29.2
jinja2: 3.1.6
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.2
```
### Severity
Blocking usage of gradio | closed | 2025-03-06T11:16:49Z | 2025-03-07T14:12:39Z | https://github.com/gradio-app/gradio/issues/10742 | [
"bug",
"needs repro"
] | Himasnhu-AT | 2 |
alirezamika/autoscraper | automation | 41 | Won't find special characters | When trying to find anything that contains a `.` in it I get no results.
```
url = 'https://pastebin.com/APSMFRLL'
# We can add one or multiple candidates here.
# You can also put urls here to retrieve urls.
wanted_list = ["."]
scraper = AutoScraper()
result = scraper.build(url, wanted_list)
print(result)
```
I would've expected to get:
```
one.two
three.four
five.six
seven.eight
```
Maybe I'm not doing something correctly perhaps. | closed | 2020-11-26T04:43:00Z | 2020-12-06T09:28:05Z | https://github.com/alirezamika/autoscraper/issues/41 | [] | BeenHijacked | 1 |
NullArray/AutoSploit | automation | 1,278 | Unhandled Exception (7db0a1a1d) | Autosploit version: `2.2.3`
OS information: `Linux-4.14.94+-armv8l-with-libc`
Running context: `/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit.py`
Error message: `[Errno 17] File exists: '/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit_out/2020-07-02_11h55m44s/'`
Error traceback:
```
Traceback (most recent call):
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit/main.py", line 123, in main
terminal.terminal_main_display(loaded_exploits)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 331, in terminal_main_display
self.custom_host_list(loaded_mods)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 277, in custom_host_list
self.exploit_gathered_hosts(mods, hosts=provided_host_file)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/term/terminal.py", line 266, in exploit_gathered_hosts
exploiter.start_exploit()
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/lib/exploitation/exploiter.py", line 102, in start_exploit
makedirs(current_host_path)
File "/data/data/com.thecrackertechnology.andrax/ANDRAX/python2/lib/python2.7/os.py", line 157, in makedirs
mkdir(name, mode)
OSError: [Errno 17] File exists: '/data/data/com.thecrackertechnology.andrax/ANDRAX/AutoSploit/autosploit_out/2020-07-02_11h55m44s/'
```
Metasploit launched: `True`
| open | 2020-07-02T08:56:35Z | 2020-07-02T08:56:35Z | https://github.com/NullArray/AutoSploit/issues/1278 | [] | AutosploitReporter | 0 |
reloadware/reloadium | django | 17 | No files are watched. | **Describe the bug**
When I try to debug the Python file, I get an error and cannot hot reload the file. It tells me that this file is not watched. The details can be found in the picture.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**

**Desktop (please complete the following information):**
- OS: [Windows]
- OS version: [22]
- Reloadium package version: [0.8.8]
- PyCharm plugin version: [ 0.8.2 ]
- Editor: [PyCharm]
- Run mode: [Debug]
**Additional context**
Add any other context about the problem here. | closed | 2022-05-26T23:41:03Z | 2022-06-16T10:20:49Z | https://github.com/reloadware/reloadium/issues/17 | [] | Easilifer | 4 |
taverntesting/tavern | pytest | 44 | No handlers could be found for logger "tavern.core" | When executing `(env) @math:tavern $ tavern-ci minimal_test.tavern.yaml`, an error is printed to the console
and the test doesn't run.
Contents from `minimal_test.tavern.yaml` is the 'Simple test' from: https://taverntesting.github.io/examples

I instaled tavern with: `pip install tavern`
The error doesn't happen when using Python 3.6. | closed | 2018-03-02T13:03:41Z | 2018-03-05T11:21:08Z | https://github.com/taverntesting/tavern/issues/44 | [] | edvm | 1 |
JaidedAI/EasyOCR | deep-learning | 697 | Integrate TorchMetrics for faster computation | I noticed `edit_distance` from the NLTK package is being used for model evaluation, which could be migrated to `TorchMetrics` for faster computation.
By using [TM metrics](https://torchmetrics.readthedocs.io/en/stable/) you can rely on widely tested correctness (tested against the gold-standard scikit-learn in multiple OS environments and all PyTorch versions above v1.4), and you can later use the nn.Module interface to leverage `update` and `compute`.
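For illustration, a minimal sketch of what the metric call could look like — assuming `CharErrorRate` (essentially a normalized edit distance) is an acceptable substitute; the strings are just toy examples:
```python
from torchmetrics.text import CharErrorRate

cer = CharErrorRate()
preds = ["hello world"]
target = ["hello word"]
# Returns the fraction of character-level edits needed to match the target.
print(cer(preds, target))
```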
Feel free to get in touch and let us know what you think about this, and if you need any help with the PR. Thanks! | open | 2022-04-01T09:35:59Z | 2022-04-01T09:35:59Z | https://github.com/JaidedAI/EasyOCR/issues/697 | [] | aniketmaurya | 0 |
nolar/kopf | asyncio | 1,087 | Handle large resource spec being annotated to `last-handled-configuration` | ### Keywords
annotation, spec, last-handled-configuration, finalizers
### Problem
When dealing with a resource that has a large `spec`, the operator is throwing an error
```
APIClientError('Pod \"<REDACTED>\" is invalid: metadata.annotations: Too long: must have at most 262144 bytes'
```
This is due to the fact that it is trying to store the spec in the `last-handled-configuration` annotation.
It is especially problematic during deletion as it blocks the deletion using the finalizer, which never gets removed. So, how does one go about configuring the operator to handle such a scenario?
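One direction that might help — a sketch based on Kopf's persistence settings, not a confirmed fix, and the field name is an arbitrary choice — is to move the diff-base out of the size-limited annotations and into the status:
```python
import kopf

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # Store the last-handled essence in the resource status instead of
    # metadata annotations, which are capped at 256 KiB.
    settings.persistence.diffbase_storage = kopf.StatusDiffBaseStorage(
        field='status.kopf.diff-base',
    )
```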
Note:
As per the doc [Troubleshooting](https://kopf.readthedocs.io/en/stable/troubleshooting/)
Restarting the operator doesn't solve the problem. Only running this command unblocks it.
```bash
kubectl patch kopfexample kopf-example-1 -p '{"metadata": {"finalizers": []}}' --type merge
``` | open | 2024-01-09T20:44:13Z | 2024-06-24T09:49:47Z | https://github.com/nolar/kopf/issues/1087 | [
"question"
] | Bharat23 | 3 |
Kinto/kinto | api | 2,866 | Kinto does not start when auth policy is not available | I created a docker-compose file with Keycloak as the Identity Provider and Kinto. As Keycloak takes some time to start, Kinto does not start and exits with this message:
<class 'json.decoder.JSONDecodeError'>: Expecting value: line 1 column 1 (char 0)
I'm pretty sure this is because the issuer does not yet provide a valid JSON file. In previous versions of docker-compose it was possible to make the Kinto container depend on Keycloak, but this is no longer recommended.
"There's been a move away from specifying container dependencies in compose. They're only valid at startup time and don't work when dependent containers are restarted at run time. Instead, each container should include mechanism to retry to reconnect to dependent services when the connection is dropped. Many libraries to connect to databases or REST API services have configurable built-in retries. I'd look into that. It is needed for production code anyway."
What is the best way to handle this ? | open | 2021-09-21T08:38:27Z | 2024-07-23T20:04:48Z | https://github.com/Kinto/kinto/issues/2866 | [
"bug",
"stale"
] | SimonKlausLudwig | 3 |
flairNLP/flair | nlp | 3,000 | Training | Hi,
I want to use tag_format = 'B' in the flair sequence tagger instead of using "BIOES". How can I do this? | closed | 2022-11-26T10:38:32Z | 2023-05-21T15:37:15Z | https://github.com/flairNLP/flair/issues/3000 | [
"question",
"wontfix"
] | shrimonmuke0202 | 2 |
huggingface/datasets | computer-vision | 6,829 | Load and save from/to disk no longer accept pathlib.Path | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to_fs` expects a `str` and cannot deal with `Path`. `get_fs_token_paths` converts to `str` so it is not a problem
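For illustration, a minimal sketch of the distinction (not `datasets` code; the path is just an example): `url_to_fs` wants a plain string, so a `Path` has to be cast first:
```python
from pathlib import Path
from fsspec.core import url_to_fs

# Sketch: cast the Path to str before handing it to url_to_fs.
fs, resolved = url_to_fs(str(Path("/tmp/my_dataset")))
```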
This change was introduced in:
- #6704 | open | 2024-04-23T09:44:45Z | 2024-04-23T09:44:46Z | https://github.com/huggingface/datasets/issues/6829 | [
"bug"
] | albertvillanova | 0 |
psf/requests | python | 6,915 | HTTPDigestAuth add optional Realm parameter to the authentication | I noticed that urllib allows specifying the realm for HTTP digest authentication, but the requests library's HTTPDigestAuth doesn't allow it. Is it possible to add it to the standard requests library?
There are many devices that specify the realm where it is not the url domain name.
It is possible to do it with the urllib library that requests uses underneath.
Example (a Hikvision network camera):
==========================
#0000: GET /ISAPI/notification/alertStream HTTP/1.1
#002e: Host: 192.168.10.64
#0043: Authorization: Digest username="admin",realm="iDS-TCM403-BI",non
#0083: ce="4e45497a4d4459774e45553659544d334d6d55344d7a6b3d",uri="/ISAP
#00c3: I/notification/alertStream",cnonce="c9bc93c864ef47d8e6879678ad85
#0103: 511c",nc=00000001,algorithm=MD5,response="ffeaecca50dc4b23a7bc68
#0143: 6c0200c339",qop="auth"
#015b: User-Agent: curl/8.10.1
#0174: Accept: */*
#0181: Connection: keep-alive
Where it would be a good idea to implement it:
====================================
from requests.auth import HTTPDigestAuth
At the auth.py file
It now only accepts two parameters:
https://requests.readthedocs.io/en/latest/user/authentication/
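The call site for the proposal might look something like this — the `realm` keyword argument is purely hypothetical and does not exist in requests today; credentials and URL are taken from the example above:
```python
import requests
from requests.auth import HTTPDigestAuth

# Hypothetical third, optional argument carrying the device's realm.
auth = HTTPDigestAuth("oneusername", "onepassword", realm="iDS-TCM403-BI")
response = requests.get(
    "http://192.168.10.64/ISAPI/notification/alertStream",
    auth=auth,
)
```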
Use example with urllib without the requests library:
=======================================
```python
import urllib.request

# Example values:
Username = "oneusername"
Password = "onepassword"
Realm = "iDS-TCM403-BI"
URL = "http://192.168.10.64/ISAPI/notification/alertStream"  # target from the capture above

password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(Realm, "http://192.168.10.64/", Username, Password)
handler = urllib.request.HTTPDigestAuthHandler(password_mgr)
opener = urllib.request.build_opener(handler)
urllib.request.install_opener(opener)
response = urllib.request.urlopen(URL)
```
========================================= | closed | 2025-03-18T15:04:14Z | 2025-03-18T15:04:27Z | https://github.com/psf/requests/issues/6915 | [
"Feature Request",
"actions/autoclose-feat"
] | ideechaniz | 1 |
deepfakes/faceswap | machine-learning | 1,059 | crash when launching faceswap gui on windows | **Describe the bug**
When launching the GUI using the command "python faceswap.py gui", the GUI pops up and then closes by itself almost immediately.
**To Reproduce**
run python faceswap.py gui command
**Expected behavior**
GUI shows up normally.
**Desktop (please complete the following information):**
- OS: [e.g. iOS] Windows 10
- Python Version [e.g. 3.5, 3.6] 3.8
- Conda Version [e.g. 4.5.12] 4.8.4
- Commit ID [e.g. e83819f] e6d62b8
-
**Additional context**
Add any other context about the problem here.
**Crash Report**
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: extract)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'options_panel_width')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 30)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'console_panel_height')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 20)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'icon_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 14)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: default)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 9)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'autosave_last_session')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: prompt)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'timeout')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 120)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'auto_load_model_stats')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: True)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'fullscreen')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: False)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'tab')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: extract)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'options_panel_width')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 30)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'console_panel_height')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 20)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'icon_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 14)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: default)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 9)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'autosave_last_session')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: prompt)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'timeout')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 120)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'auto_load_model_stats')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: True)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'fullscreen')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: False)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'tab')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: extract)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'options_panel_width')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 30)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'console_panel_height')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 20)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'icon_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 14)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: default)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 9)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'autosave_last_session')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: prompt)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'timeout')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 120)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'auto_load_model_stats')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: True)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'fullscreen')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: False)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'tab')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: extract)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'options_panel_width')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 30)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'console_panel_height')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 20)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'icon_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 14)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: default)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'font_size')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 9)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'autosave_last_session')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'str'>, value: prompt)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'timeout')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'int'>, value: 120)
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Getting config item: (section: 'global', option: 'auto_load_model_stats')
09/04/2020 11:35:18 MainProcess MainThread config get DEBUG Returning item: (type: <class 'bool'>, value: True)
09/04/2020 11:35:18 MainProcess MainThread utils set_geometry DEBUG Geometry: 1602x854
09/04/2020 11:35:18 MainProcess MainThread wrapper __init__ DEBUG Initializing ProcessWrapper
09/04/2020 11:35:18 MainProcess MainThread wrapper set_callbacks DEBUG Setting tk variable traces
09/04/2020 11:35:18 MainProcess MainThread wrapper __init__ DEBUG Initializing FaceswapControl
09/04/2020 11:35:18 MainProcess MainThread wrapper __init__ DEBUG Initialized FaceswapControl
09/04/2020 11:35:18 MainProcess MainThread wrapper __init__ DEBUG Initialized ProcessWrapper
09/04/2020 11:35:18 MainProcess MainThread utils delete_preview DEBUG Deleting previews
09/04/2020 11:35:18 MainProcess MainThread utils _clear_image_cache DEBUG Clearing image cache
09/04/2020 11:35:18 MainProcess MainThread utils __init__ DEBUG Initializing: PreviewTrigger
09/04/2020 11:35:18 MainProcess MainThread utils __init__ DEBUG Initialized: PreviewTrigger (trigger_file: E:\Code\python\deepfakes_new\lib\gui\.cache\.preview_trigger)
09/04/2020 11:35:18 MainProcess MainThread gui build_gui DEBUG Building GUI
09/04/2020 11:35:18 MainProcess MainThread menu __init__ DEBUG Initializing MainMenuBar
09/04/2020 11:35:18 MainProcess MainThread menu __init__ DEBUG Initializing FileMenu
09/04/2020 11:35:18 MainProcess MainThread menu build DEBUG Building File menu
09/04/2020 11:35:18 MainProcess MainThread menu build DEBUG Built File menu
09/04/2020 11:35:18 MainProcess MainThread menu __init__ DEBUG Initialized FileMenu
09/04/2020 11:35:18 MainProcess MainThread menu __init__ DEBUG Initializing SettingsMenu
09/04/2020 11:35:18 MainProcess MainThread menu scan_for_plugin_configs DEBUG Scanning path: 'e:\Code\python\deepfakes_new\plugins'
09/04/2020 11:35:18 MainProcess MainThread config __init__ DEBUG Initializing: Config
09/04/2020 11:35:18 MainProcess MainThread config get_config_file DEBUG Config File location: 'e:\Code\python\deepfakes_new\config\convert.ini'
09/04/2020 11:35:18 MainProcess MainThread _config set_defaults DEBUG Setting defaults
09/04/2020 11:35:18 MainProcess MainThread _config load_module DEBUG Adding defaults: (filename: color_transfer_defaults.py, module_path: Code.python.deepfakes_new.plugins.convert.color, plugin_type: color
09/04/2020 11:35:18 MainProcess MainThread _config load_module DEBUG Importing defaults module: Code.python.deepfakes_new.plugins.convert.color.color_transfer_defaults
Traceback (most recent call last):
File "e:\Code\python\deepfakes_new\lib\cli\launcher.py", line 155, in execute_script
process = script(arguments)
File "e:\Code\python\deepfakes_new\scripts\gui.py", line 193, in __init__
self.root = FaceswapGui(arguments.debug)
File "e:\Code\python\deepfakes_new\scripts\gui.py", line 35, in __init__
self.build_gui()
File "e:\Code\python\deepfakes_new\scripts\gui.py", line 66, in build_gui
self.configure(menu=MainMenuBar(self))
File "e:\Code\python\deepfakes_new\lib\gui\menu.py", line 44, in __init__
self.settings_menu = SettingsMenu(self)
File "e:\Code\python\deepfakes_new\lib\gui\menu.py", line 59, in __init__
self.configs = self.scan_for_plugin_configs()
File "e:\Code\python\deepfakes_new\lib\gui\menu.py", line 73, in scan_for_plugin_configs
config = self.load_config(plugin_type)
File "e:\Code\python\deepfakes_new\lib\gui\menu.py", line 90, in load_config
config = module.Config(None)
File "e:\Code\python\deepfakes_new\lib\config.py", line 27, in __init__
self.set_defaults()
File "e:\Code\python\deepfakes_new\plugins\convert\_config.py", line 31, in set_defaults
self.load_module(filename, import_path, plugin_type)
File "e:\Code\python\deepfakes_new\plugins\convert\_config.py", line 40, in load_module
mod = import_module("{}.{}".format(module_path, module))
File "C:\Users\abcdef\Anaconda3\envs\deepfakes\lib\importlib\__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'Code'
============ System Information ============
encoding: cp1252
git_branch: master
git_commits: e6d62b8 Bugfixes: - dfl_h128 Legacy Weights update - lib.faces_detect - Logging fix. 35b6cd1 Merge branch 'staging'. 074f305 Update losses unit tests. 1e86299 Update INSTALL.md. 24c45f9 Update INSTALL.md
gpu_cuda: 10.1
gpu_cudnn: No global version found. Check Conda packages for Conda cuDNN
gpu_devices: GPU_0: GeForce RTX 2070
gpu_devices_active: GPU_0
gpu_driver: 452.06
gpu_vram: GPU_0: 8192MB
os_machine: AMD64
os_platform: Windows-10-10.0.19041-SP0
os_release: 10
py_command: faceswap.py gui
py_conda_version: conda 4.8.4
py_implementation: CPython
py_version: 3.8.5
py_virtual_env: True
sys_cores: 12
sys_processor: Intel64 Family 6 Model 158 Stepping 10, GenuineIntel
sys_ram: Total: 65379MB, Available: 58612MB, Used: 6766MB, Free: 58612MB
=============== Pip Packages ===============
absl-py==0.10.0
astunparse==1.6.3
cachetools==4.1.1
certifi==2020.6.20
chardet==3.0.4
cycler==0.10.0
fastcluster==1.1.26
ffmpy==0.2.3
gast==0.3.3
google-auth==1.21.1
google-auth-oauthlib==0.4.1
google-pasta==0.2.0
grpcio==1.31.0
h5py==2.10.0
idna==2.10
imageio @ file:///tmp/build/80754af9/imageio_1594161405741/work
imageio-ffmpeg @ file:///home/conda/feedstock_root/build_artifacts/imageio-ffmpeg_1589202782679/work
joblib @ file:///tmp/build/80754af9/joblib_1594236160679/work
Keras-Preprocessing==1.1.2
kiwisolver==1.2.0
Markdown==3.2.2
matplotlib @ file:///C:/ci/matplotlib-base_1592837548929/work
mkl-fft==1.1.0
mkl-random==1.1.1
mkl-service==2.3.0
numpy @ file:///C:/ci/numpy_and_numpy_base_1596215850360/work
nvidia-ml-py3 @ git+https://github.com/deepfakes/nvidia-ml-py3.git@6fc29ac84b32bad877f078cb4a777c1548a00bf6
oauthlib==3.1.0
olefile==0.46
opencv-python==4.4.0.42
opt-einsum==3.3.0
pathlib==1.0.1
Pillow @ file:///C:/ci/pillow_1594298230227/work
protobuf==3.13.0
psutil @ file:///C:/ci/psutil_1598370330503/work
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==2.4.7
python-dateutil==2.8.1
pywin32==227
requests==2.24.0
requests-oauthlib==1.3.0
rsa==4.6
scikit-learn @ file:///C:/ci/scikit-learn_1598377018496/work
scipy==1.4.1
sip==4.19.13
six==1.15.0
tensorboard==2.2.2
tensorboard-plugin-wit==1.7.0
tensorflow-gpu==2.2.0
tensorflow-gpu-estimator==2.2.0
termcolor==1.1.0
threadpoolctl @ file:///tmp/tmp9twdgx9k/threadpoolctl-2.1.0-py3-none-any.whl
tornado==6.0.4
tqdm @ file:///tmp/build/80754af9/tqdm_1596810128862/work
urllib3==1.25.10
Werkzeug==1.0.1
wincertstore==0.2
wrapt==1.12.1
============== Conda Packages ==============
# packages in environment at C:\Users\abcdef\Anaconda3\envs\deepfakes:
#
# Name Version Build Channel
absl-py 0.10.0 pypi_0 pypi
astunparse 1.6.3 pypi_0 pypi
blas 1.0 mkl
ca-certificates 2020.7.22 0
cachetools 4.1.1 pypi_0 pypi
certifi 2020.6.20 py38_0
chardet 3.0.4 pypi_0 pypi
cudatoolkit 10.1.243 h74a9793_0
cudnn 7.6.5 cuda10.1_0
cycler 0.10.0 py38_0
fastcluster 1.1.26 py38hbe40bda_1 conda-forge
ffmpeg 4.3.1 ha925a31_0 conda-forge
ffmpy 0.2.3 pypi_0 pypi
freetype 2.10.2 hd328e21_0
gast 0.3.3 pypi_0 pypi
google-auth 1.21.1 pypi_0 pypi
google-auth-oauthlib 0.4.1 pypi_0 pypi
google-pasta 0.2.0 pypi_0 pypi
grpcio 1.31.0 pypi_0 pypi
h5py 2.10.0 pypi_0 pypi
icc_rt 2019.0.0 h0cc432a_1
icu 58.2 ha925a31_3
idna 2.10 pypi_0 pypi
imageio 2.9.0 py_0
imageio-ffmpeg 0.4.2 py_0 conda-forge
intel-openmp 2020.2 254
joblib 0.16.0 py_0
jpeg 9b hb83a4c4_2
keras-preprocessing 1.1.2 pypi_0 pypi
kiwisolver 1.2.0 py38h74a9793_0
libpng 1.6.37 h2a8f88b_0
libtiff 4.1.0 h56a325e_1
lz4-c 1.9.2 h62dcd97_1
markdown 3.2.2 pypi_0 pypi
matplotlib 3.2.2 0
matplotlib-base 3.2.2 py38h64f37c6_0
mkl 2020.2 256
mkl-service 2.3.0 py38hb782905_0
mkl_fft 1.1.0 py38h45dec08_0
mkl_random 1.1.1 py38h47e9c7a_0
numpy 1.19.1 py38h5510c5b_0
numpy-base 1.19.1 py38ha3acd2a_0
nvidia-ml-py3 7.352.1 pypi_0 pypi
oauthlib 3.1.0 pypi_0 pypi
olefile 0.46 py_0
opencv-python 4.4.0.42 pypi_0 pypi
openssl 1.1.1g he774522_1
opt-einsum 3.3.0 pypi_0 pypi
pathlib 1.0.1 py_1
pillow 7.2.0 py38hcc1f983_0
pip 20.2.2 py38_0 anaconda
protobuf 3.13.0 pypi_0 pypi
psutil 5.7.2 py38he774522_0
pyasn1 0.4.8 pypi_0 pypi
pyasn1-modules 0.2.8 pypi_0 pypi
pyparsing 2.4.7 py_0
pyqt 5.9.2 py38ha925a31_4
python 3.8.5 he1778fa_0 anaconda
python-dateutil 2.8.1 py_0
python_abi 3.8 1_cp38 conda-forge
pywin32 227 py38he774522_1
qt 5.9.7 vc14h73c81de_0
requests 2.24.0 pypi_0 pypi
requests-oauthlib 1.3.0 pypi_0 pypi
rsa 4.6 pypi_0 pypi
scikit-learn 0.23.2 py38h47e9c7a_0
scipy 1.4.1 pypi_0 pypi
setuptools 49.6.0 py38_0 anaconda
sip 4.19.13 py38ha925a31_0
six 1.15.0 py_0
sqlite 3.33.0 h2a8f88b_0 anaconda
tensorboard 2.2.2 pypi_0 pypi
tensorboard-plugin-wit 1.7.0 pypi_0 pypi
tensorflow-gpu 2.2.0 pypi_0 pypi
tensorflow-gpu-estimator 2.2.0 pypi_0 pypi
termcolor 1.1.0 pypi_0 pypi
threadpoolctl 2.1.0 pyh5ca1d4c_0
tk 8.6.10 he774522_0
tornado 6.0.4 py38he774522_1
tqdm 4.48.2 py_0
urllib3 1.25.10 pypi_0 pypi
vc 14.1 h0510ff6_4 anaconda
vs2015_runtime 14.16.27012 hf0eaf9b_3 anaconda
werkzeug 1.0.1 pypi_0 pypi
wheel 0.35.1 py_0 anaconda
wincertstore 0.2 py38_0 anaconda
wrapt 1.12.1 pypi_0 pypi
xz 5.2.5 h62dcd97_0
zlib 1.2.11 vc14h1cdd9ab_1 [vc14] anaconda
zstd 1.4.5 h04227a9_0
================= Configs ==================
--------- .faceswap ---------
backend: nvidia
--------- convert.ini ---------
[color.color_transfer]
clip: True
preserve_paper: True
[color.manual_balance]
colorspace: HSV
balance_1: 0.0
balance_2: 0.0
balance_3: 0.0
contrast: 0.0
brightness: 0.0
[color.match_hist]
threshold: 99.0
[mask.box_blend]
type: gaussian
distance: 11.0
radius: 5.0
passes: 1
[mask.mask_blend]
type: normalized
kernel_size: 3
passes: 4
threshold: 4
erosion: 0.0
[scaling.sharpen]
method: unsharp_mask
amount: 150
radius: 0.3
threshold: 5.0
[writer.ffmpeg]
container: mp4
codec: libx264
crf: 23
preset: medium
tune: none
profile: auto
level: auto
skip_mux: False
[writer.gif]
fps: 25
loop: 0
palettesize: 256
subrectangles: False
[writer.opencv]
format: png
draw_transparent: False
jpg_quality: 75
png_compress_level: 3
[writer.pillow]
format: png
draw_transparent: False
optimize: False
gif_interlace: True
jpg_quality: 75
png_compress_level: 3
tif_compression: tiff_deflate
--------- extract.ini ---------
[global]
allow_growth: False
[align.fan]
batch-size: 12
[detect.cv2_dnn]
confidence: 50
[detect.mtcnn]
minsize: 20
threshold_1: 0.6
threshold_2: 0.7
threshold_3: 0.7
scalefactor: 0.709
batch-size: 8
[detect.s3fd]
confidence: 70
batch-size: 4
[mask.unet_dfl]
batch-size: 8
[mask.vgg_clear]
batch-size: 6
[mask.vgg_obstructed]
batch-size: 2
--------- gui.ini ---------
[global]
fullscreen: False
tab: extract
options_panel_width: 30
console_panel_height: 20
icon_size: 14
font: default
font_size: 9
autosave_last_session: prompt
timeout: 120
auto_load_model_stats: True
--------- train.ini ---------
[global]
coverage: 68.75
icnr_init: False
conv_aware_init: False
optimizer: adam
learning_rate: 5e-05
reflect_padding: False
allow_growth: False
mixed_precision: False
convert_batchsize: 16
[global.loss]
loss_function: ssim
mask_loss_function: mse
l2_reg_term: 100
eye_multiplier: 12
mouth_multiplier: 8
penalized_mask_loss: True
mask_type: extended
mask_blur_kernel: 3
mask_threshold: 4
learn_mask: False
[model.dfl_h128]
lowmem: False
[model.dfl_sae]
input_size: 128
clipnorm: True
architecture: df
autoencoder_dims: 0
encoder_dims: 42
decoder_dims: 21
multiscale_decoder: False
[model.dlight]
features: best
details: good
output_size: 256
[model.original]
lowmem: False
[model.realface]
input_size: 64
output_size: 128
dense_nodes: 1536
complexity_encoder: 128
complexity_decoder: 512
[model.unbalanced]
input_size: 128
lowmem: False
clipnorm: True
nodes: 1024
complexity_encoder: 128
complexity_decoder_a: 384
complexity_decoder_b: 512
[model.villain]
lowmem: False
[trainer.original]
preview_images: 14
zoom_amount: 5
rotation_range: 10
shift_range: 5
flip_chance: 50
color_lightness: 30
color_ab: 8
color_clahe_chance: 50
color_clahe_max_size: 4
| closed | 2020-09-04T16:45:19Z | 2020-09-05T18:51:23Z | https://github.com/deepfakes/faceswap/issues/1059 | [] | walkingmu | 1 |
vitalik/django-ninja | pydantic | 946 | How can I convert dictionary instances into python objects in ninja Schema? (Never experienced that issue before major release) | Before the recent update, ProfileSchema looked like:
```
class ProfileSchema(Schema):
current_profile_pic: str = None
current_profile_cover_pic: str = None
class Config:
allow_population_by_field_name = True
@staticmethod
def resolve_current_profile_pic(obj) -> str:
return obj.current_profile_pic.x1080
```
Whether I passed a dictionary or a Python object to it, both used to work. I just checked my deployed project and it is still working fine.
After the major update:
```
class ProfileSchema(Schema):
model_config = ConfigDict(populate_by_name=True)
current_profile_pic: str = None
current_profile_cover_pic: str = None
@staticmethod
def resolve_current_profile_pic(obj) -> str:
try:
return obj.current_profile_pic.x1080
except Exception as exception:
print(exception)
```
For Python objects this works, but if I pass a dictionary I get `'dict' object has no attribute 'current_profile_pic'`.
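The obvious workaround I can sketch is a resolver that tolerates both dicts and objects (purely illustrative; `x1080` just mirrors the image attribute from my model above):

```python
from ninja import Schema


class ProfileSchema(Schema):
    current_profile_pic: str = None

    @staticmethod
    def resolve_current_profile_pic(obj) -> str:
        # Work with both model instances and plain dicts (illustrative workaround only).
        pic = obj.get("current_profile_pic") if isinstance(obj, dict) else obj.current_profile_pic
        # `x1080` mirrors the image-size attribute used in my model above.
        return pic.x1080 if hasattr(pic, "x1080") else pic
```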
How is it possible to handle this? | closed | 2023-11-21T23:20:44Z | 2023-11-24T13:47:03Z | https://github.com/vitalik/django-ninja/issues/946 | [] | ssandr1kka | 2 |
ets-labs/python-dependency-injector | flask | 642 | Async dependency in __init__ | Hello,
I'm at a dead end with the following case.
I need to inject an async dependency in a constructor `__init__`, and I am not able to do it. I have read all the documentation (several times) and I don't know how to achieve it.
I would appreciate your help.
I'm using the latest version `4.40.0`
The key point is that I don't want to use `init` and `shutdown`: https://python-dependency-injector.ets-labs.org/providers/resource.html#asynchronous-initializers
I'm looking for a way to make the following statement work directly:
`obj = ClassWithAnAsyncDependency()`
An example:
```python
import asyncio
from dependency_injector import providers
from dependency_injector.containers import DeclarativeContainer
from dependency_injector.wiring import Provide, inject, Closing
async def _create_async_dependency_using_resource_provider():
print("Before")
yield "foo"
print("After")
@inject
class ClassWithAnAsyncDependencyUsingResourceProvider:
def __init__(self, a_async_dependency: str = Closing[Provide["async_dependency_using_resource_provider"]]):
self.a_async_dependency = a_async_dependency
class Container(DeclarativeContainer):
async_dependency_using_resource_provider = providers.Resource(_create_async_dependency_using_resource_provider)
async def main():
# TODO Here it's the problem
obj = ClassWithAnAsyncDependencyUsingResourceProvider()
# <Task pending name='Task-2' coro=<<async_generator_asend without __name__>()> cb=[Resource._async_init_callback(shutdowner=<built-in met...002B2A3299A40>)()]>
# Before
print(obj.a_async_dependency)
if __name__ == '__main__':
container = Container()
container.wire(modules=["__main__"])
asyncio.run(main())
```
I have seen that if I use a function instead of a class, the `Resource` provider works as expected, but I cannot replace a class with a function in my code base.
```python
import asyncio
from dependency_injector import providers
from dependency_injector.containers import DeclarativeContainer
from dependency_injector.wiring import Provide, inject, Closing
async def _create_async_dependency_using_resource_provider():
print("Before")
yield "foo"
print("After")
@inject
async def fn_with_an_async_dependency_using_resource_provider(
a_async_dependency: str = Closing[Provide["async_dependency_using_resource_provider"]]):
print(a_async_dependency)
class Container(DeclarativeContainer):
async_dependency_using_resource_provider = providers.Resource(_create_async_dependency_using_resource_provider)
async def main():
await fn_with_an_async_dependency_using_resource_provider()
# Before
# foo
# After
if __name__ == '__main__':
container = Container()
container.wire(modules=["__main__"])
asyncio.run(main())
```
Regards. | closed | 2022-12-10T17:31:15Z | 2022-12-10T19:25:01Z | https://github.com/ets-labs/python-dependency-injector/issues/642 | [] | panicoenlaxbox | 1 |
dynaconf/dynaconf | flask | 972 | [RFC] Make dynaconf CLI configurable for file to look at and validator to apply | **Is your feature request related to a problem? Please describe.**
I have several configuration files in a project used for different purposes. I would like to have separate text-based validator schemas for these. Right now it seems as though there is exactly one configuration file that is supported (`config.toml`) with exactly one validator schema (`dynaconf_validators.toml`). It seems ironic to me that `dynaconf` itself is not configurable as to which configuration file to validate and with which validation/schema file.
**Describe the solution you'd like**
```
dynaconf -c path/to/my_special_config.toml validate --schema etc/config_schemas/my_special_validator_schema.toml
```
**Describe alternatives you've considered**
* Patching the code
* Writing a wrapper script that will copy the validator and all config files into a temporary working directory with files renamed to the expected (apparently hard-coded) filenames, and executing `dynaconf validate` there.
**Additional context**
None. But overall: Dynaconf is great, and I'm happily using it *except* for this one point which is an annoyance. | open | 2023-07-31T17:24:11Z | 2023-08-21T19:47:52Z | https://github.com/dynaconf/dynaconf/issues/972 | [
"Not a Bug",
"RFC"
] | ianstokesrees-gamma | 0 |
mljar/mljar-supervised | scikit-learn | 177 | Change docstring style to google-style | Right now the docstrings are in numpy-style. Please change to google-style (https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) because I would like to integrate the docs with mkdocs-material, and the only reasonable package to do this in an automated way (mkdocstrings) works only with google-style docstrings. | closed | 2020-09-09T16:31:39Z | 2020-09-15T13:43:43Z | https://github.com/mljar/mljar-supervised/issues/177 | [
"help wanted",
"good first issue",
"docs"
] | pplonski | 1 |
sunscrapers/djoser | rest-api | 316 | Disable browsable api | Hi, the list of uri endpoints (eg: accessible at mydomain.com/auth/) is very useful when debugging, but not in production, where I would like to hide them.
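For context, a generic DRF-level idea (assuming it also applies to the djoser routes, which I have not verified) would be to drop the browsable renderer in production settings, roughly:

```python
# settings.py (production) - plain Django REST Framework setting, not djoser-specific
REST_FRAMEWORK = {
    "DEFAULT_RENDERER_CLASSES": (
        "rest_framework.renderers.JSONRenderer",
        # BrowsableAPIRenderer intentionally left out, so endpoints only answer with JSON
    ),
}
```

But I'm not sure this is the intended approach for djoser.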
So...is there a way to disable browsable api? | closed | 2018-10-16T08:45:06Z | 2018-11-12T11:35:31Z | https://github.com/sunscrapers/djoser/issues/316 | [] | lorisgir | 2 |
thp/urlwatch | automation | 320 | GMail: Send email with OAuth2 | Provide OAuth2 support for email report for better security. Users won't have to check "Allow less secure apps" in their Google account.
The simplest way to do this is probably via [GMail API](https://developers.google.com/gmail/api/guides). I already have some rough code that can send email this way. The downside is that other mail services aren't covered.
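Roughly, the Gmail API route looks like the sketch below (not my exact code; the recipient address is a placeholder and `creds` is assumed to be an already-authorized OAuth2 credentials object obtained via google-auth):

```python
import base64
from email.mime.text import MIMEText

from googleapiclient.discovery import build


def send_report(creds, subject, body_text):
    # creds: an authorized OAuth2 credentials object (assumed to come from a google-auth flow)
    service = build("gmail", "v1", credentials=creds)
    msg = MIMEText(body_text)
    msg["to"] = "me@example.com"  # placeholder recipient
    msg["subject"] = subject
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    service.users().messages().send(userId="me", body={"raw": raw}).execute()
```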
A more general way would be to use SMTP, but this seems more difficult. | open | 2018-11-12T03:22:54Z | 2020-07-30T18:25:59Z | https://github.com/thp/urlwatch/issues/320 | [
"enhancement"
] | cfbao | 0 |
numpy/numpy | numpy | 28,155 | polygrid2d and similars not computing cartesian product | ### Describe the issue:
I'm writing code that requires 2D polynomial interpolation. I realized while working that, contrary to the documentation, functions like polygrid2d or leggrid2d do NOT compute the cartesian product of x and y before evaluation, giving the same output as polyval2d and legval2d. I have not checked other subclasses of numpy.polynomial but I suppose they might be affected too. The code example takes two vectors of 3 elements and outputs a vector of 3 elements, while according to the documentation it should output a vector of 3*3=9 elements. (For completeness, the coefficients describe the function x**2-y**2 and the output is [0. 0. 0.] while I expected [ 0. 0. 0. 0. 0. 0. 0. 0. 0.])
### Reproduce the code example:
```python
import numpy as np
x = np.linspace(-1, 1, 3)
y = np.linspace(-1, 1, 3)
c = [ 0., 0., -1., 0., 0., -0., 1., -0., 0.]
print(np.polynomial.polynomial.polygrid2d(x, y, c))
```
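For comparison, the manual fallback mentioned under "Context" below (building the cartesian product by hand and calling polyval2d) would look roughly like this; reshaping the coefficients into a (3, 3) array is an assumption about the intended layout:

```python
import numpy as np

x = np.linspace(-1, 1, 3)
y = np.linspace(-1, 1, 3)
c2d = np.reshape([0., 0., -1., 0., 0., -0., 1., -0., 0.], (3, 3))  # assumed 2-D layout

# Cartesian product of the sample points, evaluated pointwise with polyval2d
xx, yy = np.meshgrid(x, y, indexing="ij")
print(np.polynomial.polynomial.polyval2d(xx.ravel(), yy.ravel(), c2d))  # 9 values
```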
### Error message:
```shell
```
### Python and NumPy Versions:
Python 3.10.12, checked with numpy 2.0.2 and numpy 2.2.1
### Runtime Environment:
_No response_
### Context for the issue:
Consider this as a nuisance, as falling back to manual computation of cartesian product and evaluation with polyval2d and similars is an option. Yet, beginners might be confused by these issues. | closed | 2025-01-15T10:27:58Z | 2025-01-15T15:40:20Z | https://github.com/numpy/numpy/issues/28155 | [
"00 - Bug"
] | theBelpo | 0 |
streamlit/streamlit | deep-learning | 10,744 | Add an optimized search input widget | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
Add a specialized text input widget optimized for search use cases. Potential features:
- Search icon on the left side
- Clear icon on the right side
- Support for autocompletion
- Allows file upload (similar to `st.chat_input`)
### Why?
_No response_
### How?
Option 1: `st.search_input` widget
Option 2: `st.text_input(…, type="search")`
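Purely as an illustration of what either option could look like in user code (both calls are hypothetical today; `placeholder` is just borrowed from the existing `st.text_input`):

```python
import streamlit as st

# Option 1 (hypothetical): a dedicated widget
query = st.search_input("Search", placeholder="Type to search")

# Option 2 (hypothetical): a new type on the existing widget
# query = st.text_input("Search", type="search")

if query:
    st.write(f"Showing results for: {query}")
```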
### Additional Context
Inspirations:




- This Streamlit forum post has 45k views: https://discuss.streamlit.io/t/creating-a-nicely-formatted-search-field/1804
- [Component gallery page](https://component.gallery/components/search-input/) | open | 2025-03-12T15:14:57Z | 2025-03-14T15:55:04Z | https://github.com/streamlit/streamlit/issues/10744 | [
"type:enhancement",
"area:widgets",
"feature:st.text_input"
] | lukasmasuch | 1 |
FactoryBoy/factory_boy | sqlalchemy | 321 | Update "fake-factory" dependency to point at "faker" | Per joke2k/faker#331, the Faker project is switching pypi package names from "fake-factory" to "faker". Between now and Dec 15, they plan to push the same code to both names, and on Dec 15 they plan to deprecate the fake-factory name.
| closed | 2016-09-16T18:36:01Z | 2016-09-16T22:51:37Z | https://github.com/FactoryBoy/factory_boy/issues/321 | [] | folz | 1 |
swisskyrepo/GraphQLmap | graphql | 59 | unable to run in windows | after running setup.py, it creates a file in bin named ``graphqlmap`` that cannot be executed | open | 2024-07-18T10:21:47Z | 2025-02-07T06:55:19Z | https://github.com/swisskyrepo/GraphQLmap/issues/59 | [] | AlizerUncaged | 1 |
gradio-app/gradio | data-visualization | 9,937 | Column names don't wrap with gradio==5.5.0 | ### Describe the bug
When I make a DataFrame with newer versions of Gradio, there is no way to wrap column names into multiple lines.
If I set `wrap=True`, nothing happens with the column names, nor when I set column widths.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
import pandas as pd
df = pd.DataFrame(
{
"Very long column name, something I would want to wrap into multiple lines": [
"value1",
"value2",
"value3",
"value4",
],
"Another long column name, something I would want to wrap into multiple lines": [
"value1",
"value2",
"value3",
"value4",
],
"A third long column name, something I would want to wrap into multiple lines": [
"value1",
"value2",
"value3",
"value4",
],
}
)
with gr.Blocks() as demo:
gr.DataFrame(df)
# If I set wrap=True, then the columns fit on screen but the names don't get wrapped
# If I set column_widths=["50px", "50px", "50px"], nothing happens, not even when wrap=True
demo.launch()
```
### Screenshot

### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.4.0
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.1
jinja2: 3.1.3
markupsafe: 2.1.5
numpy: 1.25.2
orjson: 3.10.7
packaging: 24.0
pandas: 2.2.2
pillow: 10.3.0
pydantic: 2.7.0
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.1
ruff: 0.3.7
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.12.5
typing-extensions: 4.11.0
urllib3: 2.2.1
uvicorn: 0.30.6
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.2.0
httpx: 0.27.2
huggingface-hub: 0.26.1
packaging: 24.0
typing-extensions: 4.11.0
websockets: 12.0
```
### Severity
Blocking usage of gradio | closed | 2024-11-11T13:51:16Z | 2024-11-15T21:08:39Z | https://github.com/gradio-app/gradio/issues/9937 | [
"bug",
"๐พ Dataframe"
] | x-tabdeveloping | 1 |
sinaptik-ai/pandas-ai | data-science | 1,464 | I see there is a context.memory attribute in the local_llm.py file, so what if I add memory? | I see there is a context.memory attribute in the local_llm.py file, so what if I add memory? | closed | 2024-12-10T03:19:23Z | 2024-12-13T10:17:43Z | https://github.com/sinaptik-ai/pandas-ai/issues/1464 | [] | lwdnxu | 5 |
SkalskiP/courses | nlp | 36 | Order of Traversal for Beginners | First of all thank you so much for creating this repo, I would like to suggest that you create a small roadmap that people can follow so that they know a rough order to view them in. | closed | 2023-05-31T06:04:46Z | 2023-05-31T15:21:40Z | https://github.com/SkalskiP/courses/issues/36 | [] | ordaz00 | 2 |
mitmproxy/pdoc | api | 507 | Folding large constants? | First of all, thanks for `pdoc`! I'm a very happy user, both on my hobby projects and professionally.
#### Problem Description
I'm working to port a handful of repositories away from `pdoc3`, for well-trodden reasons. Most of the differences between the two are minor, but one that I'm currently running into is how `pdoc` renders large constants: it renders the entire constant, rather than either truncating or folding it.
For example, this render:

...becomes this in `pdoc`:

(it goes on for quite a bit after that :slightly_smiling_face:)
#### Proposal
It'd be nice to have either a default or a configurable option for truncating or folding large constants! I'm not sure if there's a great heuristic for this, so offering folding (i.e., requiring the user to click a `<details>` element to unfurl) might be slightly better for UX.
#### Alternatives
I assume I could do this with some template hackery on my own, but I figured this would be a reasonable thing to raise upstream!
#### Additional context
No other context. If this is functionality that you'd be interested in, I'd be willing to make an attempt at implementing it. Thanks again!
| closed | 2023-02-17T04:18:04Z | 2023-02-19T18:00:45Z | https://github.com/mitmproxy/pdoc/issues/507 | [
"enhancement"
] | woodruffw | 2 |
zappa/Zappa | django | 516 | [Migrated] Zip file - Windows paths | Originally from: https://github.com/Miserlou/Zappa/issues/1358 by [pgpgpg](https://github.com/pgpgpg)
<!--- Provide a general summary of the issue in the Title above -->
## Context
When deploying a Django app (over 50 MB) from a Windows 10 machine, the tarball retains the Windows directory separators '\\\\'. When deployed to Lambda, this causes the error "No module named 'django.core.wsgi': ModuleNotFoundError".
## Expected Behavior
1. tarball should keep Unix directory separators
## Actual Behavior
1. tarball retains Windows directory separators
## Possible Fix
In core.py, line 683 can be replaced with:
`tarinfo = tarfile.TarInfo(posixpath.join(root.replace(temp_project_path, '').lstrip(os.sep).replace('\\', '/'), filename))`
Which fixed it for me but is quite hacky and probably not that robust.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. `zappa deploy dev` on a Windows 10 machine with an app over 50 MB
## Your Environment
* Zappa version used: 0.45.1
* Operating System and Python version: Windows 10, Python 3.6
* Your `zappa_settings.py`:
```
{
"dev": {
"aws_region": "us-east-2",
"django_settings": "<redacted>",
"profile_name": "default",
"project_name": "<redacted>",
"runtime": "python3.6",
"s3_bucket": "<redacted>",
"exclude": ["*.env", "*.jpg", "*.png", "media*", "archive*", "node_*", ],
"slim_handler": true,
"timeout_seconds": 300,
}
}
```
| closed | 2021-02-20T09:43:48Z | 2024-07-13T08:17:55Z | https://github.com/zappa/Zappa/issues/516 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
benbusby/whoogle-search | flask | 226 | feature request: output http error code if searching has been ratelimited. | Basically, since I run a public instance, I'd love to be able to check if the instance is rate-limited by checking the HTTP status using curl or something like that. It's not very important, but it'd be nice. | closed | 2021-03-18T04:32:02Z | 2021-03-21T01:59:43Z | https://github.com/benbusby/whoogle-search/issues/226 | [
"enhancement"
] | ghost | 4 |
iterative/dvc | data-science | 10,104 | Parallel File Download Support in DVCFileSystem | I track my YOLOv5 dataset in git repository with DVC remote being Azure Blob container.
The dataset contains many small text files and the structure is the following:
```bash
.
โโโ images
โย ย โโโ train
โย ย โย ย โโโ 4691.jpg
โย ย โย ย โโโ 4692.jpg
โย ย โย ย โโโ ...
โย ย โย ย โโโ 5028.jpg
โย ย โโโ val
โย ย โโโ 4753.jpg
โย ย โโโ 4808.jpg
โย ย โโโ ...
โย ย โโโ 5014.jpg
โโโ labels
โย ย โโโ train
โย ย โย ย โโโ 4691.txt
โย ย โย ย โโโ 4692.txt
โย ย โย ย โโโ ...
โย ย โย ย โโโ 5028.txt
โย ย โโโ val
โย ย โโโ 4753.txt
โย ย โโโ 4808.txt
โย ย โโโ ...
โย ย โโโ 5014.txt
โโโ settings.yaml
7 directories, 357 files
```
In my python training pipeline I would like to recursively download the tree of files to a local machine at the beginning of the training.
I tried to use the `DVCFileSystem.get` from `dvc.api` as [in the documentation](https://dvc.org/doc/api-reference/dvcfilesystem#downloading-a-file-or-a-directory), but the files seem to be downloaded sequentially, which is slow when the dataset contains a lot of small files.
When I fetch the files manually with multiple parallel workers, the speedup on my machine with this demo dataset is approximately 4x.
Here's the code I used for the benchmark.
```python
from dvc.api import DVCFileSystem
import time
import os
import shutil
from concurrent.futures import ThreadPoolExecutor, as_completed
from fsspec import AbstractFileSystem
import tqdm
download_dir = "test_dvc_speed"
our_impl_dir = download_dir + "/our_impl"
original_impl_dir = download_dir + "/original_impl"
shutil.rmtree(download_dir, ignore_errors=True)
os.makedirs(download_dir, exist_ok=True)
repo_url = f"<github-repo_url>"
remote_path = "<path-to-remote-dir>"
def download_paralel(fs: AbstractFileSystem, dest: str, rpath: str, num_workers):
dest = str(dest)
os.makedirs(dest)
if fs.isfile(rpath):
# download single file
fs.get_file(rpath, dest)
return
# download directory recursively in parallel
remote_paths = fs.glob(os.path.join(rpath, "**"))
files = [f for f in remote_paths if fs.isfile(f)]
dirs = [d for d in remote_paths if fs.isdir(d)]
# make dirs
for d in dirs:
os.makedirs(os.path.join(dest, os.path.relpath(d, rpath)), exist_ok=True) # type: ignore
# download files in parallel
print(f"Downloading {len(files)} files to {dest}")
with ThreadPoolExecutor(max_workers=num_workers) as executor:
futures = []
for rfile in files:
file_dest = os.path.join(dest, os.path.relpath(rfile, rpath)) # type: ignore
futures.append(executor.submit(fs.get_file, rfile, file_dest))
for future in tqdm.tqdm(as_completed(futures), total=len(futures)):
_ = future.result()
pass
t = time.time()
fs = DVCFileSystem(repo_url)
download_paralel(fs, our_impl_dir, remote_path, num_workers=100)
print("Our impl:", time.time() - t)
t = time.time()
fs = DVCFileSystem(repo_url)
fs.get(rpath=remote_path, lpath=original_impl_dir, recursive=True)
print("Original impl:", time.time() - t)
###
# Our impl: 4.03s
# Original impl: 15.33s
```
I've been told by @tibor-mach, that `dvc get` uses `DVCFileSystem` behind the scenes and that `dvc get` fetches the remote files in parallel.
Is this a bug or is there another way to fetch the directory tree using `dvc` python API? | closed | 2023-11-23T09:33:46Z | 2024-09-03T08:11:40Z | https://github.com/iterative/dvc/issues/10104 | [] | jakubhejhal | 4 |
matplotlib/matplotlib | data-visualization | 28,833 | [MNT]: Data size consistency checks in _CollectionsWithSizes | ### Summary
Extracted from https://github.com/matplotlib/matplotlib/issues/12021#issuecomment-530086713. This is a tracking issue so that we can close #12021 but the idea is not lost. It does not need immediate action (and may even be hard to act upon).
There is no built-in size check for the data in _CollectionWithSizes subclasses. For example, for `PathCollection`, one can have 10 paths, 4 sizes and 2 edgecolors.
```
import matplotlib.pyplot as plt
from matplotlib.collections import PathCollection
from matplotlib.path import Path
paths = [Path([(0, 0), (0.5, i), (1, 0)]) for i in range(10)]
# 10 paths, 4 sizes, 2 edgecolors:
s = [20, 40, 60, 80]  # sizes (values chosen just to make the snippet runnable)
pc = PathCollection(paths, sizes=s, facecolor='none', edgecolors=['r', 'g'])
ax = plt.gca()
ax.add_collection(pc)
ax.set(xlim=(0, 3), ylim=(0, 20))
```

The behavior is largely undocumented (though some plotting functions mention cycling over properties like colors). AFAICS: the paths effectively define the number of elements; sizes, facecolor, etc. are cycled through to match the paths (if there are more sizes than paths, the additional sizes are simply unused; if there are fewer sizes, the sizes are cycled).
Central question: Is this behavior desired? On the one hand, it can be convenient. On the other hand it can be confusing and lead to unnoticed errors.
Note: I suspect that changing the behavior is difficult. (i) It would need deprecation, which is cumbersome but possible. (ii) The *thing* (e.g. paths) and the properties (sizes, facecolors) are currently decoupled; they are brought together at draw-time. If we do size checks, they likely can also only happen at draw-time. We have the individual `set_*` methods, and size checks in there would prevent any later change of the number of elements: `set_paths(paths); set_sizes(sizes)` would mutually exclude changing the number of elements. Note that this is similar to #26410, but I think we cannot get away with a collective `set_XYUVC`-style solution here.
### Proposed fix
_No response_ | open | 2024-09-18T09:04:25Z | 2024-09-18T09:15:26Z | https://github.com/matplotlib/matplotlib/issues/28833 | [
"Difficulty: Hard",
"Maintenance"
] | timhoffm | 0 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 1,075 | [BUG]: Chrome browser opens up, displays the resume for 2 seconds, & closes. Nothing happens after that | ### Describe the bug
After running the command `python main.py` in the terminal, the Chrome browser opens briefly, displays the resume for a second or two, and then closes immediately. No further action or output happens after the browser closes.
### Steps to reproduce
### Observed Behavior:
- Chrome browser opens.
- Displays a brief set of data.
- Closes immediately without further action.
- No further output in the terminal.
### STEPS TO REPRODUCE:
### python main.py
## Logs
### OpenAI API Response:
The OpenAI API responds successfully with a "200 OK" status. Here's the relevant log excerpt from the terminal:
<img width="1077" alt="Image" src="https://github.com/user-attachments/assets/df4b45f1-b394-41df-b360-e4409ec19209" />
### Browser Automation Logs:
The browser session was initiated, but it was immediately closed after performing the following:
<img width="1303" alt="Image" src="https://github.com/user-attachments/assets/529d2e1a-06ca-4cfa-9d2f-f7ce93255ae3" />
DEBUG:openai._base_client:request_id: req_24e671307cd94db203874672e0da4bc7
DEBUG:urllib3.connectionpool:http://localhost:49216 "POST /session/2213c4d143e20a8c7af7f635a1a29ae0/url HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:http://localhost:49216 "POST /session/2213c4d143e20a8c7af7f635a1a29ae0/goog/cdp/execute HTTP/1.1" 200 0
DEBUG:urllib3.connectionpool:http://localhost:49216 "DELETE /session/2213c4d143e20a8c7af7f635a1a29ae0 HTTP/1.1" 200 0
DEBUG:httpcore.connection:close.started
DEBUG:httpcore.connection:close.complete
### Expected behavior
The browser should open up, show the resume, and apply to jobs.
### Actual behavior
The browser closes instantly
### Branch
None
### Branch name
main
### Python version
3.12.7
### LLM Used
CHATGPT
### Model used
GPT-4o-mini
### Additional context
_No response_ | open | 2025-01-24T19:09:20Z | 2025-03-15T23:11:46Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/1075 | [
"bug"
] | ShrutikaSingh | 2 |
laughingman7743/PyAthena | sqlalchemy | 61 | larger than memory queries? | I'm hoping to use Athena with Dask, performing queries which return 10-30 GB and then training some distributed ML algorithms. Any suggestions for concurrent/distributed io for such a task? I've been quite happy with the pandas cursor for smaller local use, following the examples in the pyAthena documentation, but I still have no idea what I am actually doing-- does the pandas cursor do concurrent io, or is it limited to one core?
I apologize in advance if this question belongs on some other forum-- let me know and I'll gladly move the conversation there. Thanks! | open | 2019-01-17T17:32:18Z | 2022-10-06T20:45:31Z | https://github.com/laughingman7743/PyAthena/issues/61 | [] | mckeown12 | 8 |
iperov/DeepFaceLab | machine-learning | 5,728 | traceback(most recent call last) | 
How do I solve this?
Running DeepFaceLive.
Traceback (most recent call last):
File "_internal\DeepFaceLive\main.py", line 95, in <module>
main()
File "_internal\DeepFaceLive\main.py", line 88, in main
args.func(args)
File "_internal\DeepFaceLive\main.py", line 30, in run_DeepFaceLive
from apps.DeepFaceLive.DeepFaceLiveApp import DeepFaceLiveApp
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\apps\DeepFaceLive\DeepFaceLiveApp.py", line 11, in <module>
from . import backend
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\apps\DeepFaceLive\backend\__init__.py", line 1, in <module>
from .BackendBase import (BackendConnection, BackendConnectionData, BackendDB,
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\apps\DeepFaceLive\backend\BackendBase.py", line 7, in <module>
from xlib import time as lib_time
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\xlib\time\__init__.py", line 1, in <module>
from .time_ import timeit, measure, FPSCounter, AverageMeasurer
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\xlib\time\time_.py", line 11, in <module>
if not kernel32.QueryPerformanceFrequency(_perf_freq):
File "C:\Users\HP Victus\Documents\met\DeepFaceLive_DirectX12\_internal\DeepFaceLive\xlib\api\win32\wintypes\wintypes.py", line 37, in wrapper
raise RuntimeError(f'Unable to load {dll_name} library.')
RuntimeError: Unable to load kernel32 library. | open | 2023-09-24T07:54:11Z | 2023-11-04T06:03:37Z | https://github.com/iperov/DeepFaceLab/issues/5728 | [] | Ujah0 | 1 |
horovod/horovod | deep-learning | 3,606 | torch.utils.ffi was removed but is still used in horovod | **Environment:**
1. Framework: PyTorch
2. Framework version: 1.12.0
3. Horovod version: 0.25.0
4. MPI version: OpenMPI 4.1.4
5. CUDA version: N/A
6. NCCL version: N/A
7. Python version: 3.9.13
8. Spark / PySpark version: N/A
9. Ray version: N/A
10. OS and version: macOS 12.4
11. GCC version: Apple Clang 13.1.6
12. CMake version: 3.23.1
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
In `torch/mpi_lib/__init__.py` and `torch/mpi_lib_impl/__init__.py`, horovod imports `torch.utils.ffi`. But this module was deprecated in PyTorch 1.0 (https://github.com/pytorch/pytorch/pull/12122) almost 5 years ago and finally removed completely in PyTorch 1.12. | closed | 2022-07-16T07:08:51Z | 2022-08-11T07:09:57Z | https://github.com/horovod/horovod/issues/3606 | [
"bug"
] | adamjstewart | 2 |
explosion/spaCy | machine-learning | 12,270 | Entity ruler doesn't catch multi-token entities | Hello!
First off: Thanks for the great package!
However, I encountered some unexpected behavior while implementing my custom entity ruler.
The entity ruler is made to recognize Dutch first names and last names ("achternaam").
However, the code below doesn't catch last names consisting of multiple tokens, while this does work in the regular NER component.
Moreover, the code below does work with single-token last names.
What I already checked:
1) There are no other entities overwriting the names ruler.
2) When adding the NER pipeline in the last part, it sees "Andre van walderveen" correctly as a single person.
3) When changing the name to Lepelaar, it tags correctly.
```
import spacy as sp

# Load base model
nlp = sp.load("nl_core_news_lg", exclude=["tok2vec", "morphologizer", "tagger", "parser", "lemmatizer", "attribute_ruler"])
# Set split on "-"
infixes = nlp.Defaults.infixes + [r'-'] + [r'\('] + [r'\)']
infix_regex = sp.util.compile_infix_regex(infixes)
nlp.tokenizer.infix_finditer = infix_regex.finditer
# Get available pipelines
nlp.pipe_names
config_names = {
"validate": False,
"overwrite_ents": True,
"ent_id_sep": "||",
}
ruler_names = nlp.add_pipe(
"entity_ruler", "names_ruler", before="ner", config=config_names
)
with nlp.select_pipes(enable="ner"):
ruler_names.add_patterns(
[
{"label": "ACHTERNAAM", "pattern": [{"LOWER": "Lepelaar"}]},
{"label": "ACHTERNAAM", "pattern": [{"LOWER": "van Walderveen"}]}
]
)
# OR
with nlp.select_pipes(enable="ner"):
ruler_names.add_patterns(
[
{"label": "ACHTERNAAM", "pattern": [{"LOWER": "Lepelaar"}]},
{"label": "ACHTERNAAM", "pattern": [{"LOWER": "van"}, {"LOWER": "Walderveen"}]}
]
)
with nlp.select_pipes(enable=["sentencizer", "names_ruler"]):
doc1 = nlp("Ik ben Andre van Walderveen")
for ents in doc1.ents:
print(ents, ents.label_)
``` | closed | 2023-02-10T13:42:07Z | 2023-02-10T14:27:45Z | https://github.com/explosion/spaCy/issues/12270 | [
"feat / spanruler"
] | SjoerdBraaksma | 3 |
remsky/Kokoro-FastAPI | fastapi | 161 | Pause insertion request | Would it be possible to add a feature to support the addition of pauses when encountering e.g. [PAUSE 1.0] , which could introduce in this case a 1 second pause before continuing with the next section of text? | open | 2025-02-11T20:25:14Z | 2025-03-14T06:25:07Z | https://github.com/remsky/Kokoro-FastAPI/issues/161 | [] | riqbelcher | 6 |
seleniumbase/SeleniumBase | pytest | 3,005 | Upgrade the default `geckodriver` version to `0.35.0` | ## Upgrade the default `geckodriver` version to `0.35.0`
There's a new `geckodriver`: https://github.com/mozilla/geckodriver/releases/tag/v0.35.0
* `sbase get geckodriver` should now download that version by default (if tests pass)
| closed | 2024-08-07T02:23:47Z | 2024-08-07T03:42:58Z | https://github.com/seleniumbase/SeleniumBase/issues/3005 | [
"enhancement",
"requirements"
] | mdmintz | 1 |
ExpDev07/coronavirus-tracker-api | fastapi | 45 | I'm using your API | Hi, Thanks for the API
I'm using your API to build a simple android app.
https://github.com/itsamirrezah/COVID-19 | closed | 2020-03-15T02:14:52Z | 2020-04-19T17:57:02Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/45 | [
"user-created"
] | itsamirrezah | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 797 | run_pt.sh output couldn't be merged with base model | ### Check before submitting issues
- [X] Make sure to pull the latest code, as some issues and bugs have been fixed.
- [X] Due to frequent dependency updates, please ensure you have followed the steps in our [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/FAQ) AND searched for similar issues and did not find a similar problem or solution
- [X] Third-party plugin issues - e.g., [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), we recommend checking the corresponding project for solutions
- [X] Model validity check - Be sure to check the model's [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md). If the model is incorrect, we cannot guarantee its performance
### Type of Issue
Model conversion and merging
### Base Model
LLaMA-7B
### Operating System
Linux
### Describe your issue in detail
The following configs are used to start run_pt.sh
```
lr=2e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=/data/llama/decapoda-research_7b-hf-model
chinese_tokenizer_path=/data/llama/merged_tokenizer_hf-25k
hugginface_cache_dir=/data/huggingface_cache
dataset_dir=/data/datasets/tokenizer_text_dir
data_cache=/data/llama/Chinese-LLaMA-Alpaca/scripts/training/temp_data_cache_dir
per_device_train_batch_size=32
per_device_eval_batch_size=16
gradient_accumulation_steps=16
output_dir=/data/llama/Chinese-LLaMA-Alpaca/scripts/training/pt_output_dir
deepspeed_config_file=ds_zero2_no_offload.json
export NCCL_DEBUG=INFO
export NCCL_SOCKET_IFNAME=team0
torchrun --nnodes 1 --nproc_per_node 2 run_clm_pt_with_peft.py \
--deepspeed ${deepspeed_config_file} \
--model_name_or_path ${pretrained_model} \
--tokenizer_name_or_path ${chinese_tokenizer_path} \
--dataset_dir ${dataset_dir} \
--data_cache_dir ${data_cache} \
--validation_split_percentage 0.001 \
--per_device_train_batch_size ${per_device_train_batch_size} \
--per_device_eval_batch_size ${per_device_eval_batch_size} \
--gradient_accumulation_steps ${gradient_accumulation_steps} \
--do_train \
--seed $RANDOM \
--fp16 \
--num_train_epochs 1 \
--max_train_samples 600 \
--max_eval_samples 100 \
--lr_scheduler_type cosine \
--learning_rate ${lr} \
--warmup_ratio 0.05 \
--weight_decay 0.01 \
--logging_strategy steps \
--logging_steps 5 \
--save_strategy steps \
--modules_to_save ${modules_to_save} \
--gradient_checkpointing \
--save_total_limit 3 \
--save_steps 5 \
--preprocessing_num_workers 32 \
--block_size 1024 \
--output_dir ${output_dir} \
--overwrite_output_dir \
--ddp_timeout 30000 \
--logging_first_step True \
--lora_rank ${lora_rank} \
--lora_alpha ${lora_alpha} \
--trainable ${lora_trainable} \
--lora_dropout ${lora_dropout} \
--torch_dtype float16 \
--ddp_find_unused_parameters False
```
The following code is used to merge with base model:
```
python merge_llama_with_chinese_lora.py \
--base_model /data/llama/decapoda-research_7b-hf-model \
--lora_model /data/llama/Chinese-LLaMA-Alpaca/scripts/training/pt_output_dir/pt_lora_model \
--output_type huggingface \
--output_dir /data/llama/Chinese-LLaMA-Alpaca/scripts/merge_dir/llama-merge-hf-test
```
### Dependencies (must be provided for code-related issues)
```
Package Version Editable project location
----------------------------- ----------- -----------------------------------
accelerate 0.21.0
aiohttp 3.8.4
aiosignal 1.3.1
anyio 3.7.1
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
asttokens 2.2.1
async-lru 2.0.4
async-timeout 4.0.2
attrs 23.1.0
Babel 2.12.1
backcall 0.2.0
backports.functools-lru-cache 1.6.5
beautifulsoup4 4.12.2
bitsandbytes 0.41.0
bleach 6.0.0
boltons 23.0.0
brotlipy 0.7.0
certifi 2023.7.22
cffi 1.15.1
charset-normalizer 2.0.4
cmake 3.26.4
comm 0.1.3
conda 23.7.2
conda-content-trust 0.1.3
conda-package-handling 2.0.2
conda_package_streaming 0.7.0
cryptography 39.0.1
datasets 2.14.1
debugpy 1.6.7
decorator 5.1.1
deepspeed 0.10.0
defusedxml 0.7.1
dill 0.3.6
entrypoints 0.4
exceptiongroup 1.1.2
executing 1.2.0
fastjsonschema 2.18.0
filelock 3.12.2
flit_core 3.9.0
frozenlist 1.3.3
fsspec 2023.6.0
hjson 3.1.0
huggingface-hub 0.15.1
idna 3.4
importlib-metadata 6.8.0
importlib-resources 6.0.0
iniconfig 2.0.0
ipykernel 6.23.2
ipython 8.14.0
ipython-genutils 0.2.0
ipywidgets 8.0.7
jedi 0.18.2
Jinja2 3.1.2
joblib 1.2.0
json5 0.9.14
jsonpatch 1.32
jsonpointer 2.1
jsonschema 4.17.3
jupyter_client 8.3.0
jupyter_core 5.3.1
jupyter-events 0.6.3
jupyter-lsp 2.2.0
jupyter_server 2.7.0
jupyter_server_terminals 0.4.4
jupyterlab 4.0.3
jupyterlab-pygments 0.2.2
jupyterlab_server 2.24.0
jupyterlab-widgets 3.0.7
lit 16.0.6
MarkupSafe 2.1.3
matplotlib-inline 0.1.6
mistune 3.0.0
mpmath 1.3.0
multidict 6.0.4
multiprocess 0.70.14
nbclient 0.8.0
nbconvert 7.7.3
nbformat 5.9.1
nest-asyncio 1.5.6
networkx 3.1
ninja 1.11.1
notebook_shim 0.2.3
numpy 1.25.0
nvidia-cublas-cu11 11.10.3.66
nvidia-cuda-cupti-cu11 11.7.101
nvidia-cuda-nvrtc-cu11 11.7.99
nvidia-cuda-runtime-cu11 11.7.99
nvidia-cudnn-cu11 8.5.0.96
nvidia-cufft-cu11 10.9.0.58
nvidia-curand-cu11 10.2.10.91
nvidia-cusolver-cu11 11.4.0.1
nvidia-cusparse-cu11 11.7.4.91
nvidia-nccl-cu11 2.14.3
nvidia-nvtx-cu11 11.7.91
overrides 7.3.1
packaging 23.0
pandas 2.0.2
pandocfilters 1.5.0
parso 0.8.3
peft 0.3.0.dev0 /data/llama/peft_13e53fc
pexpect 4.8.0
pickleshare 0.7.5
Pillow 9.5.0
pip 23.0.1
pkgutil_resolve_name 1.3.10
platformdirs 3.9.1
pluggy 1.0.0
prometheus-client 0.17.1
prompt-toolkit 3.0.39
protobuf 4.23.4
psutil 5.9.5
ptyprocess 0.7.0
pure-eval 0.2.2
py-cpuinfo 9.0.0
pyarrow 12.0.1
pycosat 0.6.4
pycparser 2.21
pydantic 1.10.9
Pygments 2.15.1
pyOpenSSL 23.0.0
pyrsistent 0.18.0
PySocks 1.7.1
pytest 7.4.0
python-dateutil 2.8.2
python-json-logger 2.0.7
pytz 2023.3
PyYAML 6.0
pyzmq 25.1.0
regex 2023.6.3
requests 2.28.1
rfc3339-validator 0.1.4
rfc3986-validator 0.1.1
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.6
safetensors 0.3.1
scikit-learn 1.3.0
scipy 1.11.1
Send2Trash 1.8.2
sentencepiece 0.1.99
setuptools 65.6.3
six 1.16.0
sniffio 1.3.0
soupsieve 2.3.2.post1
stack-data 0.6.2
sympy 1.12
terminado 0.17.1
threadpoolctl 3.1.0
tinycss2 1.2.1
tokenizers 0.13.3
tomli 2.0.1
toolz 0.12.0
torch 2.0.1
torchaudio 2.0.2
torchvision 0.15.2
tornado 6.3.2
tqdm 4.65.0
traitlets 5.9.0
transformers 4.30.0
triton 2.0.0
typing_extensions 4.7.1
typing-utils 0.1.0
tzdata 2023.3
urllib3 1.26.15
wcwidth 0.2.6
webencodings 0.5.1
websocket-client 1.6.1
wheel 0.38.4
widgetsnbextension 4.0.7
xxhash 3.2.0
yarl 1.9.2
zipp 3.16.2
zstandard 0.19.0
```
### Execution logs or screenshots
```
[2023-07-29 16:05:48,295] [INFO] [real_accelerator.py:133:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Base model: /data/llama/decapoda-research_7b-hf-model
LoRA model(s) ['/data/llama/Chinese-LLaMA-Alpaca/scripts/training/pt_output_dir/pt_lora_model']:
Loading checkpoint shards: 100%|โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ| 3/3 [00:08<00:00, 2.68s/it]
Peft version: 0.3.0.dev0
Loading LoRA for 7B model
Loading LoRA /data/llama/Chinese-LLaMA-Alpaca/scripts/training/pt_output_dir/pt_lora_model...
base_model vocab size: 32000
tokenizer vocab size: 53246
Extended vocabulary size to 53246
Loading LoRA weights
merging base_model.model.model.embed_tokens.weight
merging base_model.model.lm_head.weight
merging base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.0.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.0.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.1.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.1.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.1.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.1.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.1.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.1.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.1.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.2.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.2.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.2.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.2.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.2.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.2.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.2.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.3.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.3.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.3.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.3.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.3.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.3.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.3.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.4.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.4.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.4.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.4.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.4.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.4.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.4.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.5.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.5.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.5.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.5.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.5.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.5.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.5.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.6.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.6.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.6.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.6.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.6.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.6.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.6.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.7.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.7.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.7.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.7.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.7.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.7.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.7.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.8.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.8.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.8.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.8.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.8.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.8.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.8.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.9.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.9.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.9.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.9.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.9.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.9.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.9.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.10.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.10.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.10.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.10.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.10.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.10.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.10.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.11.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.11.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.11.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.11.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.11.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.11.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.11.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.12.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.12.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.12.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.12.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.12.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.12.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.12.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.13.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.13.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.13.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.13.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.13.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.13.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.13.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.14.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.14.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.14.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.14.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.14.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.14.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.14.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.15.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.15.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.15.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.15.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.15.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.15.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.15.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.16.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.16.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.16.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.16.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.16.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.16.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.16.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.17.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.17.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.17.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.17.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.17.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.17.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.17.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.18.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.18.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.18.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.18.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.18.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.18.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.18.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.19.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.19.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.19.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.19.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.19.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.19.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.19.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.20.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.20.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.20.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.20.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.20.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.20.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.20.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.21.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.21.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.21.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.21.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.21.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.21.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.21.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.22.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.22.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.22.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.22.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.22.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.22.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.22.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.23.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.23.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.23.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.23.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.23.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.23.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.23.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.24.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.24.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.24.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.24.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.24.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.24.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.24.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.25.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.25.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.25.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.25.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.25.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.25.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.25.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.26.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.26.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.26.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.26.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.26.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.26.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.26.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.27.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.27.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.27.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.27.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.27.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.27.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.27.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.28.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.28.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.28.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.28.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.28.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.28.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.28.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.29.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.29.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.29.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.29.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.29.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.29.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.29.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.30.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.30.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.30.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.30.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.30.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.30.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.30.mlp.up_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.q_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.k_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.v_proj.lora_A.weight
merging base_model.model.model.layers.31.self_attn.o_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.gate_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.down_proj.lora_A.weight
merging base_model.model.model.layers.31.mlp.up_proj.lora_A.weight
Traceback (most recent call last):
File "/data/llama/Chinese-LLaMA-Alpaca/scripts/merge_llama_with_chinese_lora.py", line 327, in <module>
assert not torch.allclose(first_weight_old, first_weight)
AssertionError
```
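(For reference, the merge I am attempting is conceptually equivalent to the following sketch using the `peft` API — the model paths here are placeholders, not the ones from my run:)
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model and attach the pretrained LoRA adapter (paths are placeholders).
base = AutoModelForCausalLM.from_pretrained("path/to/base-llama", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "path/to/chinese-lora-adapter")

# Fold the LoRA deltas into the base weights and drop the adapter wrappers.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```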
Is it possible to merge pretrained lora weights with base model? | closed | 2023-07-29T13:22:00Z | 2023-11-22T11:13:50Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/797 | [
"stale"
] | yusufcakmakk | 8 |
babysor/MockingBird | deep-learning | 859 | StopIteration: Caught StopIteration in replica 0 on device 0 | **Summary [one-sentence description of the problem]**
Training the synthesizer with the 0.0.1 code, following the training instructions, fails with this error (the machine has multiple GPUs).
**Env & To Reproduce [environment and reproduction]**
Environment, code version, and model used:
Environment: Ubuntu 20.04
Code version: 0.0.1
python: 3.9.6
torch.__version__: 2.0.0+cu117
**Screenshots [if any]**
<img width="1595" alt="image" src="https://user-images.githubusercontent.com/29347833/227445578-c2a94c04-7b4a-4efc-ab41-d6809095bd05.png">
| open | 2023-03-24T06:45:32Z | 2023-03-24T06:45:32Z | https://github.com/babysor/MockingBird/issues/859 | [] | LZC6244 | 0 |
flairNLP/fundus | web-scraping | 193 | [Bug] A SZ Article crashes fundus during ld construction | The following code snippet crashes fundus while the ld is constructed:
```python
from fundus.publishers.de import SZParser
from fundus.scraping.scraper import Scraper
from fundus.scraping.source import StaticSource
test_source = StaticSource([
"https://www.sueddeutsche.de/projekte/artikel/politik/bremen-bremerhaven-wahl-protokolle-e142075/"])
scraper = Scraper(test_source, parser=SZParser())
for article in scraper.scrape(error_handling='raise'):
print(article)
```
The traceback is below:
```console
Traceback (most recent call last):
File "/home/aaron/Code/Python/Fundus/create_test.py", line 14, in <module>
for article in scraper.scrape(error_handling='raise'):
File "/home/aaron/Code/Python/Fundus/src/fundus/scraping/scraper.py", line 89, in scrape
raise err
File "/home/aaron/Code/Python/Fundus/src/fundus/scraping/scraper.py", line 82, in scrape
extraction = self.parser.parse(article_source.html, error_handling)
File "/home/aaron/Code/Python/Fundus/src/fundus/parser/base_parser.py", line 191, in parse
self._base_setup(html)
File "/home/aaron/Code/Python/Fundus/src/fundus/parser/base_parser.py", line 187, in _base_setup
self.precomputed = Precomputed(html, doc, get_meta_content(doc), LinkedDataMapping(collapsed_lds))
File "/home/aaron/Code/Python/Fundus/src/fundus/parser/data.py", line 47, in __init__
self.add_ld(ld)
File "/home/aaron/Code/Python/Fundus/src/fundus/parser/data.py", line 58, in add_ld
raise ValueError(f"Found no type for LD")
ValueError: Found no type for LD
Process finished with exit code 1
```
| closed | 2023-05-10T15:35:09Z | 2023-05-10T18:37:50Z | https://github.com/flairNLP/fundus/issues/193 | [] | Weyaaron | 2 |
ultralytics/ultralytics | machine-learning | 19,595 | Updating from Yolov11 to Yolov12 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
How can I move from using YOLOv11 to YOLOv12? what are the full steps?
### Additional
_No response_ | open | 2025-03-09T08:06:01Z | 2025-03-11T21:28:41Z | https://github.com/ultralytics/ultralytics/issues/19595 | [
"question"
] | anaduaa | 4 |
svc-develop-team/so-vits-svc | deep-learning | 353 | [Bug]: Package conflict with numpy | ### OS version
Windows 11 22H2
### GPU
NVIDIA MX130
### Python version
Python 3.10.6
### PyTorch version
Torch 1.13.1
### Branch of sovits
4.0(Default)
### Dataset source (Used to judge the dataset quality)
UVR-processed
### Where the problem occurs or what command you executed
python inference_main.py -m "logs/44k/G_16000.pth" -c "logs/44k/config.json" -cm "/logs/44k/kmeans_10000.pt" -cr 0.5 -n "2.wav" -t 0 -s "Ado20mdataset" -f0p rmvpe
### Situation description
First of all, there does not seem to be an option for the 4.1 Stable branch of sovits in the section above — I am using 4.1 Stable, not 4.0.
Anyway, I tried to use a cluster infer ratio, but I always got Error 1 (see the Log section). It said the cluster model was missing, so I ran pip install cluster_model.
After installing cluster_model, I ran the same command again and this time got Error 2, which is SystemError: initialization of _internal failed without raising an exception.
I checked Stack Overflow, and it said this is a numpy issue. So I swapped the numpy 1.25.1 pulled in by cluster_model for a version below 1.24, because compatibility issues showed up. After that change, Error 1 came back.
It also works without any issues when the -cr 0.5 option is omitted, so I think the problem lies there.
### Log
```python
https://pastebin.com/W5kFS1vx
^ Error 1
https://pastebin.com/u3rtvewA
^ Error 2
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
espnet 202304 requires protobuf<=3.20.1, but you have protobuf 3.20.3 which is incompatible.
numba 0.56.4 requires numpy<1.24,>=1.18, but you have numpy 1.25.1 which is incompatible.
torchcrepe 0.0.20 requires librosa==0.9.1, but you have librosa 0.9.2 which is incompatible.
^ When installing numpy-1.25.1
```
### Supplementary description
_No response_ | open | 2023-07-29T11:20:06Z | 2023-07-29T11:32:06Z | https://github.com/svc-develop-team/so-vits-svc/issues/353 | [
"bug?"
] | Sewlell | 0 |
iperov/DeepFaceLab | deep-learning | 5,580 | Does landmark get recalculate during the merger process? | I think I have found a way to change the landmark metadata in each jpg file to match the src face, but given my skill level it would probably take a week of full-time coding to achieve, and I am afraid the edited landmarks would simply be ignored during the merger process.
In merger.py I see LandmarksProcessor.get_transform_mat and LandmarksProcessor.transform_points. I have no idea how each of them works, but it sounds like the landmark positions get recalculated again.
Neoteroi/BlackSheep | asyncio | 81 | Support for background tasks | Not really an issue, more of a feature request I guess.
I'm wondering if you have thought about adding support for running background tasks in Blacksheep, similar to something like FastApi (Starlette) has.
At the moment I'm using an `AsyncIOEventEmitter` (pyee) to run things in the background, but the problem is obviously that it's not integrated with the DI container in any way.
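For illustration, the kind of fire-and-forget behaviour I have in mind looks roughly like this (a plain-asyncio sketch, not an existing BlackSheep API; the handler and task names are made up):
```python
import asyncio

async def send_welcome_email(user_id: int) -> None:
    # Placeholder for the actual background work (email, cleanup, etc.).
    await asyncio.sleep(1)

async def register_user(user_id: int) -> dict:
    # Respond to the client immediately and let the task complete afterwards.
    asyncio.create_task(send_welcome_email(user_id))
    return {"status": "registered"}
```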
What are your thoughts on this? | closed | 2021-02-05T09:40:59Z | 2021-05-03T14:36:10Z | https://github.com/Neoteroi/BlackSheep/issues/81 | [
"question",
"document"
] | skivis | 4 |
pydantic/pydantic-settings | pydantic | 520 | configparser.MissingSectionHeaderError: File contains no section headers. | I encountered a very strange problem. I just used the following code to import pydantic_settings.
```python
from pydantic_settings import BaseSettings
print("hello world")
```
But the following error was displayed:
```
root@ubuntu:~/projects/SightlessGuide2# python test.py
Traceback (most recent call last):
File "test.py", line 1, in <module>
from pydantic_settings import BaseSettings
File "/usr/local/lib/python3.8/dist-packages/pydantic_settings/__init__.py", line 1, in <module>
from .main import BaseSettings, CliApp, SettingsConfigDict
File "/usr/local/lib/python3.8/dist-packages/pydantic_settings/main.py", line 14, in <module>
from .sources import (
File "/usr/local/lib/python3.8/dist-packages/pydantic_settings/sources.py", line 40, in <module>
from pydantic import AliasChoices, AliasPath, BaseModel, Json, RootModel, TypeAdapter
File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist
File "/usr/local/lib/python3.8/dist-packages/pydantic/__init__.py", line 421, in __getattr__
module = import_module(module_name, package=package)
File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.8/dist-packages/pydantic/root_model.py", line 35, in <module>
class RootModel(BaseModel, typing.Generic[RootModelRootType], metaclass=_RootModelMetaclass):
File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 224, in __new__
complete_model_class(
File "/usr/local/lib/python3.8/dist-packages/pydantic/_internal/_model_construction.py", line 620, in complete_model_class
cls.__pydantic_validator__ = create_schema_validator(
File "/usr/local/lib/python3.8/dist-packages/pydantic/plugin/_schema_validator.py", line 38, in create_schema_validator
plugins = get_plugins()
File "/usr/local/lib/python3.8/dist-packages/pydantic/plugin/_loader.py", line 39, in get_plugins
for entry_point in dist.entry_points:
File "/usr/lib/python3.8/importlib/metadata.py", line 240, in entry_points
return EntryPoint._from_text(self.read_text('entry_points.txt'))
File "/usr/lib/python3.8/importlib/metadata.py", line 100, in _from_text
config.read_string(text)
File "/usr/lib/python3.8/configparser.py", line 723, in read_string
self.read_file(sfile, source)
File "/usr/lib/python3.8/configparser.py", line 718, in read_file
self._read(f, source)
File "/usr/lib/python3.8/configparser.py", line 1082, in _read
raise MissingSectionHeaderError(fpname, lineno, line)
configparser.MissingSectionHeaderError: File contains no section headers.
```
I wanted to work up from the bottom of the traceback to find out which file `f` was pointing at and what it contained, but when I added a print statement and reran it, the code worked correctly.

| closed | 2025-01-11T02:39:09Z | 2025-01-14T09:38:54Z | https://github.com/pydantic/pydantic-settings/issues/520 | [
"unconfirmed"
] | yejue | 5 |
netbox-community/netbox | django | 17,980 | FrontPort count gets wrong when doing bulk create | ### Deployment Type
Self-hosted
### Triage priority
N/A
### NetBox Version
v4.1.4
### Python Version
3.12
### Steps to Reproduce
1. Create a DeviceType with FrontPorts
2. Create a Device using the DeviceType
3. Delete all FrontPorts, there are now zero FrontPorts
4. Add the add_device_type_components.py script as custom script, https://github.com/netbox-community/customizations/blob/master/scripts/add_device_type_components.py
5. Import DeviceType component via custom script.
6. Check the device: there is no tab for FrontPorts because "front_port_count" is 0
7. If you edit the URL to reach the FrontPorts by appending /front-ports/, you will see them
8. If you now delete them again, the FrontPort count becomes a negative number
### Expected Behavior
I expect front_port_count to match the number of front ports on the device
### Observed Behavior
Below is output from API for device
After running script
```
{
< snip snip >
"created": "2024-11-11T13:49:51.948244Z",
"last_updated": "2024-11-11T13:49:51.948265Z",
"console_port_count": 0,
"console_server_port_count": 0,
"power_port_count": 0,
"power_outlet_count": 0,
"interface_count": 0,
"front_port_count": 0,
"rear_port_count": 1,
"device_bay_count": 0,
"module_bay_count": 0,
"inventory_item_count": 0
}
```
After deleting the front ports again
```
{
< snip snip >
"created": "2024-11-11T13:49:51.948244Z",
"last_updated": "2024-11-11T13:49:51.948265Z",
"console_port_count": 0,
"console_server_port_count": 0,
"power_port_count": 0,
"power_outlet_count": 0,
"interface_count": 0,
"front_port_count": -16,
"rear_port_count": 1,
"device_bay_count": 0,
"module_bay_count": 0,
"inventory_item_count": 0
}
```

| closed | 2024-11-11T14:07:29Z | 2025-02-11T03:04:25Z | https://github.com/netbox-community/netbox/issues/17980 | [] | blipnet | 1 |
autogluon/autogluon | scikit-learn | 4,924 | [timeseries] Is it possible to get metrics per group/per series id? | ### Discussed in https://github.com/autogluon/autogluon/discussions/4920
<div type='discussions-op-text'>
<sup>Originally posted by **lucharo** February 22, 2025</sup>
As the question says, I'd like to get metrics per series id</div> | open | 2025-02-24T09:24:17Z | 2025-02-24T09:24:18Z | https://github.com/autogluon/autogluon/issues/4924 | [
"enhancement",
"module: timeseries"
] | shchur | 0 |
slackapi/bolt-python | fastapi | 598 | How to use the python bolt decorator function inside a class | Hi,
I am using slack_bolt for development. I found that Slack commands and events can be handled with decorators like `@app.event("event_name")` or `@app.command("/command_name")`. I want to organize the handlers as class methods, but the decorators do not seem to work on class methods. I have also tried calling the decorator as a plain function:
```
class A:
    # ... other methods omitted ...
    def calc(self):
        pass
```
app.action("/command_name")(A().calc), but my endpoint is still not being hit. Is there a way I can achieve this? I appreciate any help here.
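For reference, the pattern I am aiming for looks roughly like this (a sketch that assumes Bolt's keyword-argument injection, i.e. the method has to accept parameters such as `ack` and `say`; all names here are made up):
```python
from slack_bolt import App

app = App(token="xoxb-...", signing_secret="...")

class CommandHandlers:
    def calc(self, ack, body, say):
        ack()                       # acknowledge the command request
        say("calc was triggered")   # reply in the channel

handlers = CommandHandlers()

# Register the bound method instead of using the decorator syntax.
app.command("/command_name")(handlers.calc)
```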
| closed | 2022-02-22T13:21:16Z | 2022-03-03T22:47:13Z | https://github.com/slackapi/bolt-python/issues/598 | [
"question"
] | mohan-raheja | 3 |
tflearn/tflearn | tensorflow | 751 | MemoryError when running convnet_cifar10.py | I'm getting a MemoryError when running the convnet_cifar10.py built-in example. It seems like the error is generated by tflearn.data_utils.shuffle(). Is anyone else experiencing this issue when running this example? | closed | 2017-05-12T13:53:50Z | 2017-05-17T12:39:39Z | https://github.com/tflearn/tflearn/issues/751 | [] | octy40 | 2 |
AutoGPTQ/AutoGPTQ | nlp | 162 | Add support for the Baichuan model | As the title says. | closed | 2023-06-18T09:01:50Z | 2023-06-18T23:26:31Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/162 | [] | Minami-su | 2 |
ResidentMario/geoplot | matplotlib | 209 | what does clip in kdeplot take? | Hi,
I'm trying to do a kdeplot of points in the ocean. Naturally the heatmap extends onto land. I want to clip this away using e.g. the shapes from cartopy.feature.LAND or coastline or similar. Whenever I pass in a shape (multipolygon) I get an error about mismatching number of values (there is not the same amount of polygons as rows I am trying to plot). From the boroughs example I see that this is not necessary, but how are they associated. And how can I do this with one giant shape (the ocean)?
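For context, what I am attempting looks roughly like this (a sketch; the file names are made up and I am assuming `clip` accepts the geometry of a GeoDataFrame holding the single ocean polygon):
```python
import geopandas as gpd
import geoplot as gplt

points = gpd.read_file("points_in_ocean.geojson")  # hypothetical point layer
ocean = gpd.read_file("ocean_polygon.geojson")     # hypothetical single large polygon

ax = gplt.kdeplot(points, clip=ocean.geometry, shade=True, cmap="Blues")
gplt.polyplot(ocean, ax=ax, zorder=1)
```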
Thanks for this package! | closed | 2020-03-24T10:44:33Z | 2020-03-30T00:34:50Z | https://github.com/ResidentMario/geoplot/issues/209 | [] | gauteh | 1 |
microsoft/MMdnn | tensorflow | 822 | Bug report | https://github.com/microsoft/MMdnn/blob/d34caa49d657bba391460decb9575ba228e47721/mmdnn/conversion/caffe/graph.py#L302
With `v = np.array(2.3)` — a 0-d array, so `v.shape` is the empty tuple — this line raises: `TypeError: reduce() of empty sequence with no initial value` | open | 2020-04-14T11:52:24Z | 2020-04-15T01:30:42Z | https://github.com/microsoft/MMdnn/issues/822 | [] | iamhankai | 1 |
vanna-ai/vanna | data-visualization | 312 | SQL datetime format returns as integer value in Vanna flask app | When returning query results, the Vanna app converts datetime fields into UNIX time format. How can I return the results in a readable format such as "dd-MM-yyyy HH:mm:ss"? I have tried adding documentation like
`vn.train(documentation="Use dd-MM-yyyy HH:mm:ss format for sql field with type datetime2(0)")`
but I couldn't see any change. See the attached image:

| closed | 2024-03-25T05:50:32Z | 2024-04-01T21:29:01Z | https://github.com/vanna-ai/vanna/issues/312 | [
"bug"
] | akhilvk | 2 |
OpenVisualCloud/CDN-Transcode-Sample | dash | 102 | [Feature][Q1'20] Write native Kubernetes yaml files to remove the usage of Kompose and docker-compose | closed | 2019-11-26T01:58:41Z | 2020-06-10T05:03:04Z | https://github.com/OpenVisualCloud/CDN-Transcode-Sample/issues/102 | [
"enhancement"
] | wenquan-mao | 1 |
python-gino/gino | asyncio | 141 | Handle SQLAlchemy prefetch | * GINO version: 0.5, 0.6
* Python version: 3.6
### Description
Some clause element may cause prefetch in SQLAlchemy, for example:
```
User.insert().values(nickname=random_name)
```
(without `returning`) | closed | 2018-02-18T09:17:09Z | 2018-03-19T08:59:02Z | https://github.com/python-gino/gino/issues/141 | [
"bug",
"help wanted"
] | fantix | 0 |
ipython/ipython | jupyter | 14,082 | Produce nightly wheels | We're working on [CI recommendations](https://scientific-python.org/specs/spec-0005/) for scientific Python projects. One of these recommendations is to test against the development versions of dependencies. In order to do this efficiently, widely-used projects like ipython should build and upload nightly wheels to a [somewhat central location](https://anaconda.org/scientific-python-nightly-wheels). | closed | 2023-05-22T18:08:08Z | 2023-05-22T23:57:34Z | https://github.com/ipython/ipython/issues/14082 | [] | bsipocz | 1 |
paperless-ngx/paperless-ngx | machine-learning | 7,470 | [BUG] User cannot login after upgrade | ### Description
User cannot login after Upgrade from 2.11.0 to 2.11.4
However, Admin can login.
### Steps to reproduce
Load Login page.
login as regular user.
### Webserver logs
```bash
{"headers":{"normalizedNames":{},"lazyUpdate":null},"status":403,"statusText":"OK","url":"https://docs.my.domain/api/ui_settings/","ok":false,"name":"HttpErrorResponse","message":"Http failure response for https://docs.my.domain/api/ui_settings/: 403 OK","error":{"detail":"Sie sind nicht berechtigt diese Aktion durchzufรผhren."}}
```
### Browser logs
_No response_
### Paperless-ngx version
2.11.4
### Host OS
Ubuntu 22.04
### Installation method
Bare metal
### System status
```json
{
"pngx_version": "2.11.4",
"server_os": "Linux-6.8.12-1-pve-x86_64-with-glibc2.31",
"install_type": "bare-metal",
"storage": {
"total": 21474836480,
"available": 21346254848
},
"database": {
"type": "postgresql",
"url": "paperless",
"status": "OK",
"error": null,
"migration_status": {
"latest_migration": "paperless_mail.0001_initial_squashed_0009_mailrule_assign_tags",
"unapplied_migrations": []
}
},
"tasks": {
"redis_url": "redis://localhost:6379",
"redis_status": "OK",
"redis_error": null,
"celery_status": "OK",
"index_status": "OK",
"index_last_modified": "2024-07-04T00:00:01.886143+02:00",
"index_error": null,
"classifier_status": "WARNING",
"classifier_last_trained": null,
"classifier_error": "Classifier file does not exist (yet). Re-training may be pending."
}
}
```
### Browser
Edge
### Configuration changes
none
### Please confirm the following
- [X] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation.
- [X] I have already searched for relevant existing issues and discussions before opening this report.
- [X] I have updated the title field above with a concise description. | closed | 2024-08-14T19:27:30Z | 2024-09-14T03:04:25Z | https://github.com/paperless-ngx/paperless-ngx/issues/7470 | [
"not a bug"
] | manuelkamp | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 1,185 | The proper way of getting C2W matrix from self.world_view_transform? | Hi,
thanks for your amazing code work. We can access the W2C matrix (= [self.world_view_transform](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/scene/cameras.py#L86)) here.
However, the function [getWorld2View2](https://github.com/graphdeco-inria/gaussian-splatting/blob/main/utils/graphics_utils.py#L38) is a bit tricky, and I'm confused about the proper way to recover the C2W matrix from ```self.world_view_transform```.
Do I need to 1. transpose the 0th and 1st dimensions of ```self.world_view_transform``` and 2. apply the matrix inverse (i.e. just follow the exact reverse steps)?
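Concretely, the computation I have in mind is the following (a sketch; it assumes `world_view_transform` stores the transposed W2C matrix produced by `getWorld2View2(...).transpose(0, 1)`):
```python
import torch

def camera_to_world(world_view_transform: torch.Tensor) -> torch.Tensor:
    # world_view_transform: the 4x4 tensor stored on the Camera object (assumed transposed W2C).
    w2c = world_view_transform.transpose(0, 1)  # undo the stored transpose -> plain W2C
    return torch.linalg.inv(w2c)                # C2W = inverse of W2C
```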
Thanks,
Junyeong Ahn | open | 2025-03-10T07:27:49Z | 2025-03-11T17:22:33Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/1185 | [] | justin4ai | 3 |
serengil/deepface | deep-learning | 532 | load_image is not same from url or from file | ```python
# in colab
from deepface.commons import functions
import numpy as np
url = "https://upload.wikimedia.org/wikipedia/commons/thumb/4/45/MontreGousset001.jpg/440px-MontreGousset001.jpg"
file = "440px-MontreGousset001.jpg"
!wget {url} -O {file} -q
img_from_url = functions.load_image(url)
img_from_file = functions.load_image(file)
print(np.abs(img_from_url-img_from_file).mean())
# 98.4184350027692
```
https://github.com/serengil/deepface/blob/ba9f56755abdc9d1d80777d7680c4267d5d5e0e9/deepface/commons/functions.py#L85-L92
The reason is that the URL path converts the image to RGB,
while the file path uses cv2.imread, which returns the channels in BGR order by default.
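For example, converting the file-path result to RGB makes the two outputs match (a sketch, reusing `file` and `img_from_url` from the snippet above):
```python
import cv2
import numpy as np

img_bgr = cv2.imread(file)                          # OpenCV loads images in BGR order
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)  # reorder channels to RGB
print(np.abs(img_from_url - img_rgb).mean())        # expected ~0 if only the channel order differs
```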
Please align those two methods | closed | 2022-08-10T19:20:50Z | 2023-03-01T14:22:21Z | https://github.com/serengil/deepface/issues/532 | [
"dependencies"
] | mistake0316 | 2 |
errbotio/errbot | automation | 994 | Enable a hook on unknown_command | ### I am...
* [ ] Reporting a bug
* [x] Suggesting a new feature
* [ ] Requesting help with running my bot
* [ ] Requesting help writing plugins
* [ ] Here about something else
### I am running...
* Errbot version: 4.3.7
* OS version: Fedora 25
* Python version: 3.5
* Using a virtual environment: yes
### Issue description
I would like an option to replace the unknown_command feature via a hook system; this would enable a command to handle the unknown-command situation.
**Rationale**: Check Additional Info.
The idea is to let one or more commands hook into unknown_command and handle it in the desired way, keeping the current default behavior as a last step.
The basic idea is to:
- Get into the unknown_command.
- Dispatch the signal to the registered commands.
- The commands do their magic as needed.
- Each registered command can respond with one of 2 options:
  - A tuple or dict with:
    - A "message" string that "adds" to the final unknown_command response.
    - A "mask" boolean that indicates whether the default message gets masked or not.
  - None, which means the default behavior will be applied.
Additional explanation is in the Additional info section below; a rough sketch of the dispatch flow follows.
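Something along these lines (a pseudocode-style sketch; none of these names are existing errbot APIs):
```python
# Hypothetical sketch of the proposed flow (names are made up, not errbot APIs).
def dispatch_unknown_command(hooks, msg, cmd, args, default_reply):
    messages = []
    mask_default = False
    for hook in hooks:                    # note: no guaranteed order of execution
        result = hook(msg, cmd, args)     # each registered command's hook
        if result is None:                # None -> the hook defers to the default behavior
            continue
        messages.append(result.get("message", ""))
        mask_default = mask_default or result.get("mask", False)
    if not mask_default:                  # default message only if nobody masked it
        messages.append(default_reply)
    return "\n".join(m for m in messages if m)
```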
### Steps to reproduce
- None
### Additional info
#### Rationale:
- Statistics: Let me know how many times the unknown command happens.
- Abuse Control: Detect people that coming to "play" with us.
- Default behavior change: Modify the current friendly behavior for a custom one.
#### Additional explanation:
- With only one plugin registered, how it works is straightforward.
- With two or more plugins, things become a little more complex:
1. Every command produces its response in whatever way it needs to; the base unknown_command handler receives all the outputs and builds a list of the responses.
2. Each command response is checked, and if any of them "masks" the default output, the default output gets masked.
3. There are also a couple of things to account for:
- There is no ensured order of execution, so each registered command can't rely on this.
- The command can't rely on the default behavior as it can be masked by another command.
I might start working on some of this, but **I first want to have some discussion about the implementation, so shoot me your ideas!** | closed | 2017-04-17T22:36:27Z | 2021-08-15T05:41:03Z | https://github.com/errbotio/errbot/issues/994 | [
"type: feature"
] | qlixed | 1 |
tqdm/tqdm | jupyter | 727 | external_write_mode doesn't clear instances on python 2 | The following example script output is quite messed up on Python 2.7:
```
from tqdm import tqdm, trange
from time import sleep
for i in trange(10, desc='outer'):
for j in trange(10, desc='inner'):
# Print using tqdm class method .write()
sleep(0.1)
if not (j % 3):
tqdm.write("Done task %i.%i" % (i,j))
```
The issue seems to be the code that locates other instances sharing the same underlying fp -- on Python 2, this gets wrapped by SimpleTextIOWrapper, so then the comparisons of the fps with each other and with stdout never succeed, and the 'clear' code doesn't execute.
| closed | 2019-05-07T05:12:48Z | 2020-03-28T21:57:43Z | https://github.com/tqdm/tqdm/issues/727 | [
"to-fix โ",
"p2-bug-warning โ "
] | toddlipcon | 0 |
pallets/flask | flask | 5,353 | Cannot import Markup from flask (v3.0.0) | As [previously reported](https://github.com/pallets/flask/issues/5084).
### A
```python
import flask
print(flask.Markup('hi'))
```
```text
Traceback (most recent call last):
File "/tmp/fl/mu1.py", line 3, in <module>
print(flask.Markup('hi'))
File "/tmp/fl/.venv/lib/python3.10/site-packages/flask/__init__.py", line 60, in __getattr__
raise AttributeError(name)
AttributeError: Markup
```
### B
```python
from flask import Markup
print(Markup('hi'))
```
```
Traceback (most recent call last):
File "/tmp/fl/mu2.py", line 1, in <module>
from flask import Markup
ImportError: cannot import name 'Markup' from 'flask' (/tmp/fl/.venv/lib/python3.10/site-packages/flask/__init__.py)
```
Environment:
- Python version: 3.10.12
- Flask version: 3.0.0
- (and flask-misaka-1.0.0 but I don't think that's relevant)
| closed | 2023-12-04T18:01:45Z | 2023-12-19T00:06:09Z | https://github.com/pallets/flask/issues/5353 | [] | unsliced | 1 |
vanna-ai/vanna | data-visualization | 315 | Support for choosing number of follow-up questions to be generated | **Is your feature request related to a problem? Please describe.**
Right now, it does not seem like Vanna lets you choose how many follow-up questions the LLM generates. As per the code:
https://github.com/vanna-ai/vanna/blob/c8201f5d2c3fc1aecb95d47e4c7c6703baf39a7c/src/vanna/base/base.py#L170
It would also make sense to have a default number of follow-up questions to be generated, say 3.
**Describe the solution you'd like**
An API to choose how many follow-up questions to suggest (could make completion faster if fewer suggestions are of interest).
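As a sketch of the shape I have in mind (the `n_questions` parameter is only a suggestion, not an existing argument):
```python
def build_followup_prompt(question: str, sql: str, n_questions: int = 3) -> str:
    # Illustrative only: cap how many follow-ups the LLM is asked to produce.
    return (
        f"The user asked: {question}\n"
        f"The SQL that answered it was: {sql}\n"
        f"Suggest at most {n_questions} relevant follow-up questions, one per line."
    )
```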
I can make a PR to address this feature request.
| closed | 2024-03-25T15:26:44Z | 2024-04-05T12:30:49Z | https://github.com/vanna-ai/vanna/issues/315 | [] | andreped | 0 |
pyg-team/pytorch_geometric | pytorch | 8,856 | [Bug] HeteroData.to_homogeneous() doesn't handle the train/val/test mask correctly | ### ๐ Describe the bug
When converting a heterogeneous graph to a homogeneous graph using HeteroData.to_homogeneous(), the train/val/test mask is not masking the correct node. The following is an example to reproduce it:
```Python
from torch_geometric.datasets import OGB_MAG
dataset = OGB_MAG()
data = dataset[0]
homo = data.to_homogeneous()
print(data['paper'].y[data['paper'].train_mask]) # This is the correct masked training set
print(homo.y[homo.train_mask]) # You will see -1 in the output label, which means irrelevant node got masked. OGB_MAG has -1 label for those nodes not in the train/val/test set.
print(data['paper'].y[data['paper'].train_mask].sum() == homo.y[homo.train_mask].sum()) # And the label sum does not equal
```
The above code snippet generates the following output:
```
tensor([246, 131, 189, ..., 266, 289, 1]) # This is the correct masked training set
tensor([246, 131, 189, ..., -1, -1, -1]) # You will see -1 in the output label, which means irrelevant node got masked. OGB_MAG has -1 label for those nodes not in the train/val/test set.
tensor(False) # And the label sum does not equal
```
### Versions
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Red Hat Enterprise Linux release 8.3 (Ootpa) (ppc64le)
GCC version: (GCC) 8.3.1 20191121 (Red Hat 8.3.1-5)
Clang version: Could not collect
CMake version: version 3.11.4
Libc version: glibc-2.28
Python version: 3.9.18 (main, Sep 11 2023, 14:34:07) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-240.el8.ppc64le-ppc64le-with-glibc2.28
Is CUDA available: True
CUDA runtime version: 11.4.152
CUDA_MODULE_LOADING set to:
GPU models and configuration:
GPU 0: Tesla V100-SXM2-32GB
GPU 1: Tesla V100-SXM2-32GB
GPU 2: Tesla V100-SXM2-32GB
GPU 3: Tesla V100-SXM2-32GB
Nvidia driver version: 535.86.10
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: False
CPU:
Architecture: ppc64le
Byte Order: Little Endian
CPU(s): 160
On-line CPU(s) list: 0,1,4,5,8,9,12,13,16,17,20,21,24,25,28,29,32,33,36,37,40,41,44,45,48,49,52,53,56,57,60,61,64,65,68,69,72,73,76,77,80,81,84,85,88,89,92,93,96,97,100,101,104,105,108,109,112,113,116,117,120,121,124,125,128,129,132,133,136,137,140,141,144,145,148,149,152,153,156,157
Off-line CPU(s) list: 2,3,6,7,10,11,14,15,18,19,22,23,26,27,30,31,34,35,38,39,42,43,46,47,50,51,54,55,58,59,62,63,66,67,70,71,74,75,78,79,82,83,86,87,90,91,94,95,98,99,102,103,106,107,110,111,114,115,118,119,122,123,126,127,130,131,134,135,138,139,142,143,146,147,150,151,154,155,158,159
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 6
Model: 2.1 (pvr 004e 1201)
Model name: POWER9, altivec supported
CPU max MHz: 3800.0000
CPU min MHz: 2300.0000
L1d cache: 32K
L1i cache: 32K
L2 cache: 512K
L3 cache: 10240K
NUMA node0 CPU(s): 0,1,4,5,8,9,12,13,16,17,20,21,24,25,28,29,32,33,36,37,40,41,44,45,48,49,52,53,56,57,60,61,64,65,68,69,72,73,76,77
NUMA node8 CPU(s): 80,81,84,85,88,89,92,93,96,97,100,101,104,105,108,109,112,113,116,117,120,121,124,125,128,129,132,133,136,137,140,141,144,145,148,149,152,153,156,157
NUMA node252 CPU(s):
NUMA node253 CPU(s):
NUMA node254 CPU(s):
NUMA node255 CPU(s):
Versions of relevant libraries:
[pip3] botorch==0.8.5
[pip3] gpytorch==1.10
[pip3] numpy==1.23.5
[pip3] numpydoc==1.6.0
[pip3] performer-pytorch==1.1.4
[pip3] pytorch-lightning==2.1.3
[pip3] torch==1.12.1
[pip3] torch-cluster==1.6.3
[pip3] torch-geometric==2.3.0
[pip3] torch-scatter==2.1.2
[pip3] torch-sparse==0.6.18
[pip3] torchmetrics==0.11.4
[pip3] torchtext==0.13.1a0+35066f2
[pip3] torchvision==0.13.1
[conda] _pytorch_select 2.0 cuda_2 https://ftp.osuosl.org/pub/open-ce/current
[conda] botorch 0.8.5 0 pytorch
[conda] cudatoolkit 11.4.4 h5d40d8d_10 https://opence.mit.edu
[conda] cudatoolkit-dev 11.4.4 h13ae079_2 https://opence.mit.edu
[conda] gpytorch 1.10 0 gpytorch
[conda] linear_operator 0.4.0 0 gpytorch
[conda] numpy 1.23.5 py39h181cc9a_0
[conda] numpy-base 1.23.5 py39h1bde650_0
[conda] numpydoc 1.6.0 pypi_0 pypi
[conda] performer-pytorch 1.1.4 pypi_0 pypi
[conda] pytorch 1.12.1 cuda11.4_py39_1 https://opence.mit.edu
[conda] pytorch-base 1.12.1 cuda11.4_py39_pb3.19_2 https://opence.mit.edu
[conda] pytorch-lightning 2.1.3 pypi_0 pypi
[conda] pytorch_geometric 2.4.0 pyhd8ed1ab_0 conda-forge
[conda] torch-cluster 1.6.3 pypi_0 pypi
[conda] torch-geometric 2.3.0 pypi_0 pypi
[conda] torch-scatter 2.1.2 pypi_0 pypi
[conda] torch-sparse 0.6.18 pypi_0 pypi
[conda] torchmetrics 1.1.2 pypi_0 pypi
[conda] torchtext-base 0.13.1 cuda11.4_py39_1 https://opence.mit.edu
[conda] torchvision-base 0.13.1 cuda11.4_py39_1 https://opence.mit.edu | closed | 2024-02-02T21:02:41Z | 2024-02-03T20:25:37Z | https://github.com/pyg-team/pytorch_geometric/issues/8856 | [
"bug"
] | junhongmit | 1 |
gradio-app/gradio | deep-learning | 10,043 | gr.Request for ChatInterface | ### Describe the bug
I can't seem to understand how to get gr.Request working with ChatInterface. I keep getting the error message `Exception: PrivateGptUi._chat() missing 1 required keyword-only argument: 'request'`. Please help.
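For reference, this is my understanding of how `gr.Request` injection is supposed to work, reduced to a minimal sketch (not my actual app):
```python
import gradio as gr

def chat(message, history, request: gr.Request):
    # Gradio is expected to fill in the parameter annotated with gr.Request.
    user = request.username if request else "anonymous"
    return f"{user}: you said {message}"

demo = gr.ChatInterface(chat)
demo.launch()
```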
### Have you searched existing issues? ๐
- [X] I have searched and found no existing issues
### Reproduction
"""This file should be imported if and only if you want to run the UI locally."""
import base64
import logging
import time
from collections.abc import Iterable
from enum import Enum
from pathlib import Path
from typing import Any
import gradio as gr # type: ignore
from fastapi import FastAPI
from gradio.themes.utils.colors import slate # type: ignore
from injector import inject, singleton
from llama_index.core.llms import ChatMessage, ChatResponse, MessageRole
from llama_index.core.types import TokenGen
from pydantic import BaseModel
from private_gpt.constants import PROJECT_ROOT_PATH
from private_gpt.di import global_injector
from private_gpt.open_ai.extensions.context_filter import ContextFilter
from private_gpt.server.chat.chat_service import ChatService, CompletionGen
from private_gpt.server.chunks.chunks_service import Chunk, ChunksService
from private_gpt.server.ingest.ingest_service import IngestService
from private_gpt.server.recipes.summarize.summarize_service import SummarizeService
from private_gpt.settings.settings import settings
from private_gpt.ui.images import logo_svg
logger = logging.getLogger(__name__)
THIS_DIRECTORY_RELATIVE = Path(__file__).parent.relative_to(PROJECT_ROOT_PATH)
# Should be "private_gpt/ui/avatar-bot.ico"
AVATAR_BOT = THIS_DIRECTORY_RELATIVE / "avatar-bot.ico"
UI_TAB_TITLE = "My Private GPT"
SOURCES_SEPARATOR = "<hr>Sources: \n"
class Modes(str, Enum):
RAG_MODE = "RAG"
SEARCH_MODE = "Search"
BASIC_CHAT_MODE = "Basic"
SUMMARIZE_MODE = "Summarize"
MODES: list[Modes] = [
Modes.RAG_MODE,
Modes.SEARCH_MODE,
Modes.BASIC_CHAT_MODE,
Modes.SUMMARIZE_MODE,
]
class Source(BaseModel):
file: str
page: str
text: str
class Config:
frozen = True
@staticmethod
def curate_sources(sources: list[Chunk]) -> list["Source"]:
curated_sources = []
for chunk in sources:
doc_metadata = chunk.document.doc_metadata
file_name = doc_metadata.get("file_name", "-") if doc_metadata else "-"
page_label = doc_metadata.get("page_label", "-") if doc_metadata else "-"
source = Source(file=file_name, page=page_label, text=chunk.text)
curated_sources.append(source)
curated_sources = list(
dict.fromkeys(curated_sources).keys()
) # Unique sources only
return curated_sources
@singleton
class PrivateGptUi:
@inject
def __init__(
self,
ingest_service: IngestService,
chat_service: ChatService,
chunks_service: ChunksService,
summarizeService: SummarizeService,
) -> None:
self._ingest_service = ingest_service
self._chat_service = chat_service
self._chunks_service = chunks_service
self._summarize_service = summarizeService
# Cache the UI blocks
self._ui_block = None
self._selected_filename = None
# Initialize system prompt based on default mode
default_mode_map = {mode.value: mode for mode in Modes}
self._default_mode = default_mode_map.get(
settings().ui.default_mode, Modes.RAG_MODE
)
self._system_prompt = self._get_default_system_prompt(self._default_mode)
def _chat(
self, message: str, history: list[list[str]], mode: Modes, *_: Any
) -> Any:
def yield_deltas(completion_gen: CompletionGen) -> Iterable[str]:
full_response: str = ""
stream = completion_gen.response
for delta in stream:
if isinstance(delta, str):
full_response += str(delta)
elif isinstance(delta, ChatResponse):
full_response += delta.delta or ""
yield full_response
time.sleep(0.02)
if completion_gen.sources:
full_response += SOURCES_SEPARATOR
cur_sources = Source.curate_sources(completion_gen.sources)
sources_text = "\n\n\n"
used_files = set()
for index, source in enumerate(cur_sources, start=1):
if f"{source.file}-{source.page}" not in used_files:
sources_text = (
sources_text
+ f"{index}. {source.file} (page {source.page}) \n\n"
)
used_files.add(f"{source.file}-{source.page}")
sources_text += "<hr>\n\n"
full_response += sources_text
yield full_response
def yield_tokens(token_gen: TokenGen) -> Iterable[str]:
full_response: str = ""
for token in token_gen:
full_response += str(token)
yield full_response
def build_history() -> list[ChatMessage]:
history_messages: list[ChatMessage] = []
for interaction in history:
history_messages.append(
ChatMessage(content=interaction[0], role=MessageRole.USER)
)
if len(interaction) > 1 and interaction[1] is not None:
history_messages.append(
ChatMessage(
# Remove from history content the Sources information
content=interaction[1].split(SOURCES_SEPARATOR)[0],
role=MessageRole.ASSISTANT,
)
)
# max 20 messages to try to avoid context overflow
return history_messages[:20]
new_message = ChatMessage(content=message, role=MessageRole.USER)
all_messages = [*build_history(), new_message]
# If a system prompt is set, add it as a system message
if self._system_prompt:
all_messages.insert(
0,
ChatMessage(
content=self._system_prompt,
role=MessageRole.SYSTEM,
),
)
match mode:
case Modes.RAG_MODE:
# Use only the selected file for the query
context_filter = None
if self._selected_filename is not None:
docs_ids = []
for ingested_document in self._ingest_service.list_ingested():
if (
ingested_document.doc_metadata["file_name"]
== self._selected_filename
):
docs_ids.append(ingested_document.doc_id)
context_filter = ContextFilter(docs_ids=docs_ids)
query_stream = self._chat_service.stream_chat(
messages=all_messages,
use_context=True,
context_filter=context_filter,
)
yield from yield_deltas(query_stream)
case Modes.BASIC_CHAT_MODE:
llm_stream = self._chat_service.stream_chat(
messages=all_messages,
use_context=False,
)
yield from yield_deltas(llm_stream)
case Modes.SEARCH_MODE:
response = self._chunks_service.retrieve_relevant(
text=message, limit=4, prev_next_chunks=0
)
sources = Source.curate_sources(response)
yield "\n\n\n".join(
f"{index}. **{source.file} "
f"(page {source.page})**\n "
f"{source.text}"
for index, source in enumerate(sources, start=1)
)
case Modes.SUMMARIZE_MODE:
# Summarize the given message, optionally using selected files
context_filter = None
if self._selected_filename:
docs_ids = []
for ingested_document in self._ingest_service.list_ingested():
if (
ingested_document.doc_metadata["file_name"]
== self._selected_filename
):
docs_ids.append(ingested_document.doc_id)
context_filter = ContextFilter(docs_ids=docs_ids)
summary_stream = self._summarize_service.stream_summarize(
use_context=True,
context_filter=context_filter,
instructions=message,
)
yield from yield_tokens(summary_stream)
# On initialization and on mode change, this function set the system prompt
# to the default prompt based on the mode (and user settings).
@staticmethod
def _get_default_system_prompt(mode: Modes) -> str:
p = ""
match mode:
# For query chat mode, obtain default system prompt from settings
case Modes.RAG_MODE:
p = settings().ui.default_query_system_prompt
# For chat mode, obtain default system prompt from settings
case Modes.BASIC_CHAT_MODE:
p = settings().ui.default_chat_system_prompt
# For summarization mode, obtain default system prompt from settings
case Modes.SUMMARIZE_MODE:
p = settings().ui.default_summarization_system_prompt
# For any other mode, clear the system prompt
case _:
p = ""
return p
@staticmethod
def _get_default_mode_explanation(mode: Modes) -> str:
match mode:
case Modes.RAG_MODE:
return "Get contextualized answers from selected files."
case Modes.SEARCH_MODE:
return "Find relevant chunks of text in selected files."
case Modes.BASIC_CHAT_MODE:
return "Chat with the LLM using its training data. Files are ignored."
case Modes.SUMMARIZE_MODE:
return "Generate a summary of the selected files. Prompt to customize the result."
case _:
return ""
def _set_system_prompt(self, system_prompt_input: str) -> None:
logger.info(f"Setting system prompt to: {system_prompt_input}")
self._system_prompt = system_prompt_input
def _set_explanatation_mode(self, explanation_mode: str) -> None:
self._explanation_mode = explanation_mode
def _set_current_mode(self, mode: Modes) -> Any:
self.mode = mode
self._set_system_prompt(self._get_default_system_prompt(mode))
self._set_explanatation_mode(self._get_default_mode_explanation(mode))
interactive = self._system_prompt is not None
return [
gr.update(placeholder=self._system_prompt, interactive=interactive),
gr.update(value=self._explanation_mode),
]
def _list_ingested_files(self) -> list[list[str]]:
files = set()
for ingested_document in self._ingest_service.list_ingested():
if ingested_document.doc_metadata is None:
# Skipping documents without metadata
continue
file_name = ingested_document.doc_metadata.get(
"file_name", "[FILE NAME MISSING]"
)
files.add(file_name)
return [[row] for row in files]
def _upload_file(self, files: list[str]) -> None:
logger.debug("Loading count=%s files", len(files))
paths = [Path(file) for file in files]
# remove all existing Documents with name identical to a new file upload:
file_names = [path.name for path in paths]
doc_ids_to_delete = []
for ingested_document in self._ingest_service.list_ingested():
if (
ingested_document.doc_metadata
and ingested_document.doc_metadata["file_name"] in file_names
):
doc_ids_to_delete.append(ingested_document.doc_id)
if len(doc_ids_to_delete) > 0:
logger.info(
"Uploading file(s) which were already ingested: %s document(s) will be replaced.",
len(doc_ids_to_delete),
)
for doc_id in doc_ids_to_delete:
self._ingest_service.delete(doc_id)
self._ingest_service.bulk_ingest([(str(path.name), path) for path in paths])
def _delete_all_files(self) -> Any:
ingested_files = self._ingest_service.list_ingested()
logger.debug("Deleting count=%s files", len(ingested_files))
for ingested_document in ingested_files:
self._ingest_service.delete(ingested_document.doc_id)
return [
gr.List(self._list_ingested_files()),
gr.components.Button(interactive=False),
gr.components.Button(interactive=False),
gr.components.Textbox("All files"),
]
def _delete_selected_file(self) -> Any:
logger.debug("Deleting selected %s", self._selected_filename)
# Note: keep looping for pdf's (each page became a Document)
for ingested_document in self._ingest_service.list_ingested():
if (
ingested_document.doc_metadata
and ingested_document.doc_metadata["file_name"]
== self._selected_filename
):
self._ingest_service.delete(ingested_document.doc_id)
return [
gr.List(self._list_ingested_files()),
gr.components.Button(interactive=False),
gr.components.Button(interactive=False),
gr.components.Textbox("All files"),
]
def _deselect_selected_file(self) -> Any:
self._selected_filename = None
return [
gr.components.Button(interactive=False),
gr.components.Button(interactive=False),
gr.components.Textbox("All files"),
]
def _selected_a_file(self, select_data: gr.SelectData) -> Any:
self._selected_filename = select_data.value
return [
gr.components.Button(interactive=True),
gr.components.Button(interactive=True),
gr.components.Textbox(self._selected_filename),
]
def _build_ui_blocks(self) -> gr.Blocks:
logger.debug("Creating the UI blocks")
with gr.Blocks(
title=UI_TAB_TITLE,
theme=gr.themes.Soft(primary_hue=slate),
css=".logo { "
"display:flex;"
"background-color: #C7BAFF;"
"height: 80px;"
"border-radius: 8px;"
"align-content: center;"
"justify-content: center;"
"align-items: center;"
"}"
".logo img { height: 25% }"
".contain { display: flex !important; flex-direction: column !important; }"
"#component-0, #component-3, #component-10, #component-8 { height: 100% !important; }"
"#chatbot { flex-grow: 1 !important; overflow: auto !important;}"
"#col { height: calc(100vh - 112px - 16px) !important; }"
"hr { margin-top: 1em; margin-bottom: 1em; border: 0; border-top: 1px solid #FFF; }"
".avatar-image { background-color: antiquewhite; border-radius: 2px; }"
".footer { text-align: center; margin-top: 20px; font-size: 14px; display: flex; align-items: center; justify-content: center; }"
".footer-zylon-link { display:flex; margin-left: 5px; text-decoration: auto; color: var(--body-text-color); }"
".footer-zylon-link:hover { color: #C7BAFF; }"
".footer-zylon-ico { height: 20px; margin-left: 5px; background-color: antiquewhite; border-radius: 2px; }",
) as blocks:
with gr.Row():
gr.HTML(f"<div class='logo'/><img src={logo_svg} alt=PrivateGPT></div")
with gr.Row(equal_height=False):
with gr.Column(scale=3):
default_mode = self._default_mode
mode = gr.Radio(
[mode.value for mode in MODES],
label="Mode",
value=default_mode,
)
explanation_mode = gr.Textbox(
placeholder=self._get_default_mode_explanation(default_mode),
show_label=False,
max_lines=3,
interactive=False,
)
upload_button = gr.components.UploadButton(
"Upload File(s)",
type="filepath",
file_count="multiple",
size="sm",
)
ingested_dataset = gr.List(
self._list_ingested_files,
headers=["File name"],
label="Ingested Files",
height=235,
interactive=False,
render=False, # Rendered under the button
)
upload_button.upload(
self._upload_file,
inputs=upload_button,
outputs=ingested_dataset,
)
ingested_dataset.change(
self._list_ingested_files,
outputs=ingested_dataset,
)
ingested_dataset.render()
deselect_file_button = gr.components.Button(
"De-select selected file", size="sm", interactive=False
)
selected_text = gr.components.Textbox(
"All files", label="Selected for Query or Deletion", max_lines=1
)
delete_file_button = gr.components.Button(
"๐๏ธ Delete selected file",
size="sm",
visible=settings().ui.delete_file_button_enabled,
interactive=False,
)
delete_files_button = gr.components.Button(
"โ ๏ธ Delete ALL files",
size="sm",
visible=settings().ui.delete_all_files_button_enabled,
)
deselect_file_button.click(
self._deselect_selected_file,
outputs=[
delete_file_button,
deselect_file_button,
selected_text,
],
)
ingested_dataset.select(
fn=self._selected_a_file,
outputs=[
delete_file_button,
deselect_file_button,
selected_text,
],
)
delete_file_button.click(
self._delete_selected_file,
outputs=[
ingested_dataset,
delete_file_button,
deselect_file_button,
selected_text,
],
)
delete_files_button.click(
self._delete_all_files,
outputs=[
ingested_dataset,
delete_file_button,
deselect_file_button,
selected_text,
],
)
system_prompt_input = gr.Textbox(
placeholder=self._system_prompt,
label="System Prompt",
lines=2,
interactive=True,
render=False,
)
# When mode changes, set default system prompt, and other stuffs
mode.change(
self._set_current_mode,
inputs=mode,
outputs=[system_prompt_input, explanation_mode],
)
# On blur, set system prompt to use in queries
system_prompt_input.blur(
self._set_system_prompt,
inputs=system_prompt_input,
)
def get_model_label() -> str | None:
"""Get model label from llm mode setting YAML.
Raises:
ValueError: If an invalid 'llm_mode' is encountered.
Returns:
str: The corresponding model label.
"""
# Get model label from llm mode setting YAML
# Labels: local, openai, openailike, sagemaker, mock, ollama
config_settings = settings()
if config_settings is None:
raise ValueError("Settings are not configured.")
# Get llm_mode from settings
llm_mode = config_settings.llm.mode
# Mapping of 'llm_mode' to corresponding model labels
model_mapping = {
"llamacpp": config_settings.llamacpp.llm_hf_model_file,
"openai": config_settings.openai.model,
"openailike": config_settings.openai.model,
"azopenai": config_settings.azopenai.llm_model,
"sagemaker": config_settings.sagemaker.llm_endpoint_name,
"mock": llm_mode,
"ollama": config_settings.ollama.llm_model,
"gemini": config_settings.gemini.model,
}
if llm_mode not in model_mapping:
print(f"Invalid 'llm mode': {llm_mode}")
return None
return model_mapping[llm_mode]
with gr.Column(scale=7, elem_id="col"):
# Determine the model label based on the value of PGPT_PROFILES
model_label = get_model_label()
if model_label is not None:
label_text = (
f"LLM: {settings().llm.mode} | Model: {model_label}"
)
else:
label_text = f"LLM: {settings().llm.mode}"
_ = gr.ChatInterface(
self._chat,
chatbot=gr.Chatbot(
label=label_text,
show_copy_button=True,
elem_id="chatbot",
render=False,
avatar_images=(
None,
AVATAR_BOT,
),
),
additional_inputs=[mode, upload_button, system_prompt_input],
)
with gr.Row():
avatar_byte = AVATAR_BOT.read_bytes()
f_base64 = f"data:image/png;base64,{base64.b64encode(avatar_byte).decode('utf-8')}"
gr.HTML(
f"<div class='footer'><a class='footer-zylon-link' href='https://zylon.ai/'>Maintained by Zylon <img class='footer-zylon-ico' src='{f_base64}' alt=Zylon></a></div>"
)
return blocks
def get_ui_blocks(self) -> gr.Blocks:
if self._ui_block is None:
self._ui_block = self._build_ui_blocks()
return self._ui_block
def mount_in_app(self, app: FastAPI, path: str) -> None:
blocks = self.get_ui_blocks()
blocks.queue()
logger.info("Mounting the gradio UI, at path=%s", path)
gr.mount_gradio_app(app, blocks, path=path, favicon_path=AVATAR_BOT)
if __name__ == "__main__":
ui = global_injector.get(PrivateGptUi)
_blocks = ui.get_ui_blocks()
_blocks.queue()
_blocks.launch(debug=False, show_api=False)
### Screenshot
_No response_
### Logs
_No response_
### System Info
```shell
Correct
```
### Severity
I can work around it | closed | 2024-11-26T17:43:12Z | 2024-11-27T17:47:49Z | https://github.com/gradio-app/gradio/issues/10043 | [
"bug"
] | rmuhawieh | 1 |
wkentaro/labelme | computer-vision | 808 | [BUG] | **Describe the bug**
When I try to convert labelme annotations into the COCO format, I am unable to do it successfully. I have 196 JSON files and I am trying to convert all of them, but the labelme2coco.py script processes only one JSON file and then the process terminates.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**

**Desktop (please complete the following information):**
- OS: [e.g. Ubuntu 18.04]
- Labelme Version [e.g. 4.2.9]
**Additional context**
Add any other context about the problem here.
| closed | 2020-12-02T03:53:17Z | 2022-09-26T14:39:40Z | https://github.com/wkentaro/labelme/issues/808 | [
"issue::bug"
] | yashwant07 | 0 |
mars-project/mars | numpy | 3,288 | [BUG] xgboost 1.7: ImportError: cannot import name 'rabit' from 'xgboost' |
**Describe the bug**
A clear and concise description of what the bug is.
```python
ctx = {'d115db169882b5c06d671c887024b11f_0': (array([[0.83104541, 0.80554386, 0.3985519 , ..., 0.72182508, 0.77294997,
...f43806cf2fd20aef4b6e34fc0468be5d_0': [b'DMLC_NUM_WORKER=1', b'DMLC_TRACKER_URI=127.0.0.1', b'DMLC_TRACKER_PORT=32911']}
op = XGBTrain <key=d2d29ee95452257a278991ec8bb47865>
@classmethod
def execute(cls, ctx, op: "XGBTrain"):
if op.merge:
return super().execute(ctx, op)
> from xgboost import train, rabit
E ImportError: cannot import name 'rabit' from 'xgboost' (/home/vsts/miniconda/envs/test/lib/python3.9/site-packages/xgboost/__init__.py)
mars/learn/contrib/xgboost/train.py:167: ImportError
```
**To Reproduce**
To help us reproducing this bug, please provide information below:
1. Your Python version
2. The version of Mars you use
3. Versions of crucial packages, such as numpy, scipy and pandas
4. Full stack of the error.
5. Minimized code to reproduce the error.
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| open | 2022-11-02T03:41:04Z | 2022-11-02T03:41:04Z | https://github.com/mars-project/mars/issues/3288 | [
"type: bug"
] | fyrestone | 0 |
piskvorky/gensim | nlp | 2,995 | Make the link to the Gensim 3.8.3 documentation dynamic | Clicking on the "Gensim 3.8.3 documentation" link in the header of the Gensim 4.0.0 documentation redirects the user to [the index page of the Gensim 3.8.3 documentation](https://radimrehurek.com/gensim_3.8.3/) rather than to the corresponding page, which is an unpleasant user experience. For example, try getting from [the 4.0.0 documentation of `gensim.models.phrases`](https://radimrehurek.com/gensim/models/phrases.html) to its 3.8.3 documentation.
A quick fix would be to set an ID attribute on the link and add inline JavaScript code that would update the HREF attribute of the link according to the current value of `location.pathname`.
https://github.com/RaRe-Technologies/gensim/blob/60a8f7f5599e241779b82f97d375a5046cb730c9/docs/src/sphinx_rtd_theme/notification.html#L2 | closed | 2020-11-04T00:03:47Z | 2020-11-16T22:51:09Z | https://github.com/piskvorky/gensim/issues/2995 | [
"bug",
"documentation"
] | Witiko | 0 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 57 | Blur vs Motion-Blur on Text Image Restoration. | Hello. First of all, thanks for sharing this amazing project. You guys have done a really good job.
Currently, I'm going through your research paper. As you stated in the paper, the **local branch** targets the unstructured defects, such as noises and blurriness. This inspired me to try it over an ongoing problem about OCR ([here](https://stackoverflow.com/questions/64808986/scene-text-image-super-resolution-for-ocr)). Just to make a quick test, I've tried it on two cases, **gaussian blur** and **motion blur** text (with minimum acute). I've found that the model does a great job of restoring **gaussian blurry** text but not much on **motion blur**. Please see the results below, (`left-side`: normal image; `right-side`: restored image)

I believe the model has a lot of potential to restore more heavily degraded raw samples. However, I would like to know: at training time, were various blurring effects included in the training data set? Or, to generalize the model to various blur effects, do we need to fine-tune it? | closed | 2020-11-25T13:16:25Z | 2023-12-31T11:54:25Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/57 | [] | innat | 2 |
kennethreitz/records | sqlalchemy | 117 | query using params fail | 
| closed | 2017-12-01T10:02:58Z | 2018-04-28T22:20:49Z | https://github.com/kennethreitz/records/issues/117 | [
"bug"
] | EtheriousNatsu | 1 |
tflearn/tflearn | data-science | 636 | Can't load checkpoint files with tensorflow version 1.0.0. model.load() | model.load(model_file='/home/murl/faces_expressions/Checkpoints/-191.data-00000-of-00001')
does not work with the tf version 1.0.0.
I get the error: Data loss: not an sstable (bad magic number): perhaps your file is in a different file format and you need to use a different restore operator?
Anyone got a solution for that? | closed | 2017-02-27T17:23:05Z | 2017-03-14T21:36:15Z | https://github.com/tflearn/tflearn/issues/636 | [] | emurina | 3 |
babysor/MockingBird | deep-learning | 205 | Characters outside the built-in set cannot be removed, and the log keeps looping | ![图片](https://user-images.githubusercontent.com/93945183/140925172-73dc3a0d-ea5b-4f9c-a4a3-e37b02ba42c6.png)

| closed | 2021-11-09T11:18:17Z | 2021-11-09T13:08:54Z | https://github.com/babysor/MockingBird/issues/205 | [
"bug"
] | Zhangyide114514 | 0 |
HumanSignal/labelImg | deep-learning | 73 | "width" and "height" fields in "size" dict is set to zero instead of real img dims | I was trying to fix up some bad VOC2012 annotations and this came up | closed | 2017-03-30T08:38:14Z | 2017-04-21T14:21:35Z | https://github.com/HumanSignal/labelImg/issues/73 | [] | silcowitz | 2 |
reloadware/reloadium | django | 116 | vscode support | ## Describe the bug*
extension for vscode | closed | 2023-02-28T08:47:03Z | 2023-02-28T12:48:32Z | https://github.com/reloadware/reloadium/issues/116 | [] | diego351 | 1 |
marimo-team/marimo | data-visualization | 3,783 | Inconsistent data type of `table.value` | ### Describe the bug
I noticed that the data type of `table.value` changes when I sort the table. This only happens when the data is provided as a dictionary (`Dict[str, ListOrTuple[JSONType]]`), i.e. not when the data is provided as a list of dictionaries (`ListOrTuple[Dict[str, JSONType]]`).
https://github.com/user-attachments/assets/1b011b61-fac4-492d-92a6-c3e478c2f126
### Environment
<details>
```
{
"marimo": "0.11.2",
"OS": "Darwin",
"OS Version": "22.6.0",
"Processor": "i386",
"Python Version": "3.13.1",
"Binaries": {
"Browser": "133.0.6943.54",
"Node": "v21.6.1"
},
"Dependencies": {
"click": "8.1.8",
"docutils": "0.21.2",
"itsdangerous": "2.2.0",
"jedi": "0.19.2",
"markdown": "3.7",
"narwhals": "1.26.0",
"packaging": "24.2",
"psutil": "6.1.1",
"pygments": "2.19.1",
"pymdown-extensions": "10.14.3",
"pyyaml": "6.0.2",
"ruff": "0.9.6",
"starlette": "0.45.3",
"tomlkit": "0.13.2",
"typing-extensions": "missing",
"uvicorn": "0.34.0",
"websockets": "14.2"
},
"Optional Dependencies": {},
"Experimental Flags": {}
}
```
</details>
### Code to reproduce
```python
# /// script
# requires-python = ">=3.12"
# dependencies = [
# "marimo==0.11.2",
# "polars==1.22.0",
# "ruff==0.9.6",
# ]
# ///
import marimo
__generated_with = "0.11.2"
app = marimo.App(width="medium")
@app.cell
def _():
import marimo as mo
return (mo,)
@app.cell
def _():
import polars as pl
return (pl,)
@app.cell
def _(pl):
test_df = pl.DataFrame(
{
"num": [0, 1],
}
)
return (test_df,)
@app.cell
def _(mo, test_df):
table = mo.ui.table(test_df.to_dict(as_series=False))
table
return (table,)
@app.cell
def _(table):
table.value
return
``` | closed | 2025-02-13T15:23:15Z | 2025-02-13T17:29:42Z | https://github.com/marimo-team/marimo/issues/3783 | [
"bug"
] | tare | 1 |
man-group/arctic | pandas | 101 | Future Development? - Java API | Hi there,
I have been trialling this out and it looks like a great framework for storage and retrieval. Are there any plans for a Java API, or are you looking for contributors to help? I'm testing this out as data storage for the zipline/quantopian backtester and also for a JVM-based project, and was wondering what stage the Java API is at, if any.
| closed | 2016-02-01T10:05:49Z | 2022-08-03T22:09:01Z | https://github.com/man-group/arctic/issues/101 | [] | michaeljohnbennett | 17 |
django-import-export/django-import-export | django | 1,738 | Resource is not being created with resource kwargs when showing export fields on admin UI | **Describe the bug**
I have a `UserAdmin` class that extends `ExportMixin` and overrides the `get_list_display` method to change the displayed fields according to the requesting user. I want to reflect this in the exported data, so that a user can only export the fields that are displayed to them in the admin UI.
To do this, I override the `get_export_resource_kwargs` method to pass the list of fields displayed in the admin to my `UserResource` class.
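A rough sketch of that setup (class names are from this report, but the constructor plumbing and per-user logic are illustrative only, not the exact django-import-export API):
```python
# Illustrative sketch only; exact signatures depend on the installed version.
from django.contrib import admin
from import_export.admin import ExportMixin
from import_export.resources import ModelResource

from .models import User  # hypothetical app model


class UserResource(ModelResource):
    class Meta:
        model = User

    def __init__(self, display_fields=None, **kwargs):
        super().__init__(**kwargs)
        if display_fields is not None:
            # Keep only the fields the requesting user is allowed to export.
            self.fields = {
                name: field
                for name, field in self.fields.items()
                if name in display_fields
            }


class UserAdmin(ExportMixin, admin.ModelAdmin):
    resource_classes = [UserResource]

    def get_list_display(self, request):
        # Simplified per-user display logic for illustration.
        return ("username", "email") if request.user.is_superuser else ("username",)

    def get_export_resource_kwargs(self, request, *args, **kwargs):
        return {"display_fields": self.get_list_display(request)}
```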
All of this worked like a charm up until version 3.3.3. However, in version 3.3.4, these fields are displayed in the admin UI alongside the export form. The problem is that when the list of fields to show in the admin UI is built, the resource classes are instantiated without calling `get_export_resource_kwargs`. This makes it impossible for me to know which fields the requesting user is allowed to export.
**Versions:**
- Django Import Export >= 3.3.4
- Python 3.11.7
- Django 4.2.9
**Expected behavior**
The following code of the `export_action` method from the `ExportMixin` class in `admin.py`:
```python
context["fields_list"] = [
(
res.get_display_name(),
[
field.column_name
for field in res(model=self.model).get_user_visible_fields()
],
)
for res in self.get_export_resource_classes()
]
```
Should be like this:
```python
context["fields_list"] = [
(
res.get_display_name(),
[
field.column_name
for field in res(model=self.model, **self.get_export_resource_kwargs(request)).get_user_visible_fields()
],
)
for res in self.get_export_resource_classes()
]
```
| closed | 2024-01-15T15:51:11Z | 2024-01-17T16:55:42Z | https://github.com/django-import-export/django-import-export/issues/1738 | [
"bug"
] | pedrordgs | 1 |
fugue-project/fugue | pandas | 485 | [BUG] Make Fugue compatible with Ray 2.5.0 | Ray 2.5.0 introduced breaking changes again! The `dataset_format` function is now deprecated, but in lower versions we rely on it to determine whether the dataframe is empty and to check whether it is an Arrow or pandas dataset. In addition, they have started to introduce their own `Schema` class and changed the return values of a few functions.
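One way to contain the churn is to isolate the deprecated call behind a small version guard; this is a sketch only (not Fugue's actual fix), with the post-2.5 branch left as a placeholder:
```python
# Illustrative version guard; the Ray >= 2.5 branch still needs the new Schema API.
import ray
from packaging.version import Version


def get_dataset_format(ds):
    """Return 'arrow', 'pandas', or None for an empty ray.data.Dataset."""
    if ds.count() == 0:  # emptiness check that works on old and new Ray alike
        return None
    if Version(ray.__version__) < Version("2.5.0"):
        return ds.dataset_format()  # deprecated and later removed
    raise NotImplementedError("Ray >= 2.5 path: adapt to the new Schema class")
```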
This breaks most of Fugue's functionality on Ray, so we have to make Fugue compatible with Ray versions from roughly 2.1.0 up to the latest. | closed | 2023-06-14T05:56:55Z | 2023-06-14T06:00:23Z | https://github.com/fugue-project/fugue/issues/485 | [
"compatibility"
] | goodwanghan | 0 |
horovod/horovod | deep-learning | 4,110 | [+[!๐
๐๐๐ ๐๐๐๐๐๐!]+]Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now | 20 seconds ago
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter Telegram
..
..
[๐ ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐ข==โบโบ ๐ถ๐ ๐ณ๐ข๐ง ๐ญ๐ฎ๐ถ](https://usnews-daily.com/free-watch/)
..
..
[๐ด ๐ข๐ซ๐จ๐ข๐ช ๐ง๐ค๐ฑ๐ค ๐==โบโบ ๐ฃ๐๐๐๐
๐๐บ๐ฝ ๐ญ๐๐](https://usnews-daily.com/free-watch/?t)
..
..
<a href="https://usnews-daily.com/free-watch/?y" rel="nofollow" data-target="animated-image.originalLink"><img src="https://i.imgur.com/vN3eWE7.png"></a>
..
..
[-wATCH-]โ Sophie Rain Spiderman Video Original Video Link Sophie Rain Spiderman Video Viral On Social Media X Trending Now
[-wATCH-]โ Sophie Rain Spiderman สแดแดแดแดแด
Video แด ษชสแดส On Social Media หฃ แตสทโฑแตแตแตสณ
Sophie Rain Spiderman Original Video video took the internet by storm and amazed viewers on various social media platforms. Sophie Rain Spiderman, a young and talented digital creator, recently became famous thanks to this interesting video.
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter
Sophie Rain Spiderman Original Video video oficial twitter
L๐aked Video Sophie Rain Spiderman Original Video Viral Video L๐aked on X Twitter..
 | closed | 2024-11-17T17:23:26Z | 2024-11-20T12:23:49Z | https://github.com/horovod/horovod/issues/4110 | [] | ghost | 1 |
AirtestProject/Airtest | automation | 912 | [LICENSE] Add name of copyright owner and date | Could you please add the copyright owner's name and date information to the license file in the package? Currently there is only a placeholder in this spot:
https://github.com/AirtestProject/Airtest/blob/master/LICENSE#L189 | open | 2021-06-01T10:50:09Z | 2021-06-01T10:50:09Z | https://github.com/AirtestProject/Airtest/issues/912 | [] | flaminestone | 0 |
keras-team/autokeras | tensorflow | 907 | Adopt the image augmentation from preprocessing layers | closed | 2020-01-19T20:39:12Z | 2020-04-02T22:46:38Z | https://github.com/keras-team/autokeras/issues/907 | [
"feature request",
"pinned"
] | haifeng-jin | 0 |
|
davidteather/TikTok-Api | api | 642 | [BUG] - Incorrect signature generation | **Describe the bug**
I am working on a C# TikTok API for personal use, based on your API. Everything goes fine until the signature is used in practice. Before that, it is generated using a script from your API and Selenium, and everything seems OK, but when I try to use the signature in practice, i.e. extract data from an endpoint, I'm receiving an empty response from the server with code 200: no error, but also no text. Can you help me somehow? Is it some security measure against bots? Should I do something more?
Edit: I decided to check whether it works if I use Playwright, so I rewrote the whole browser class, but the result was the same; I still get code 200 and an empty response from the server.
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
Below is my implementation of your code (of course, earlier in the code I inject the JS with acrawler):
```
public (string, string, string) sign_url(string api_url, string verifyFp = "verify_khr3jabg_V7ucdslq_Vrw9_4KPb_AJ1b_Ks706M8zIJTq", string custom_did = null)
{
if (api_url == string.Empty) { DataParse.logData("Sign_url required a api_url parameter"); return (string.Empty, string.Empty, string.Empty); }
if (verifyFp == string.Empty || custom_did == string.Empty) { DataParse.logData("verifyFp & custom_did are required parameters"); return (string.Empty, string.Empty, string.Empty); }
string url = $"{api_url}&verifyFp={verifyFp}&did={custom_did}";
string signature = ((IJavaScriptExecutor)__driver).ExecuteScript("return window.byted_acrawler.sign({url: '" + url + "'});").ToString();
return (verifyFp, custom_did, signature);
}
```
Edit cont.
Rewrote the sign_url function, now using Playwright (still the same result):
```
public async Task<(string, string, string)> sign_url(string api_url, string verifyFp = "verify_khr3jabg_V7ucdslq_Vrw9_4KPb_AJ1b_Ks706M8zIJTq", string custom_did = null)
{
// Check if parameters are valid
if (api_url == string.Empty) { Console.WriteLine("Sign_url required a api_url parameter"); return (string.Empty, string.Empty, string.Empty); }
if (verifyFp == string.Empty || custom_did == string.Empty) { Console.WriteLine("verifyFp & custom_did are required parameters"); return (string.Empty, string.Empty, string.Empty); }
// Create new page
IBrowserContext context;
IPage page;
(page, context) = this.create_new_page().Result;
// Prepare api url
string url = $"{api_url}&verifyFp={verifyFp}&did={custom_did}";
// Inject js into page
await page.SetContentAsync($"<script>{get_acrawler()}</script>");
var signature = page.EvaluateAsync("() => {var token = window.byted_acrawler.sign({url: '" + url + "'});return token;}").Result.Value.ToString();
await context.CloseAsync();
if (!page.IsClosed) await page.CloseAsync();
// Return data
return (verifyFp, custom_did, signature);
}
```
Original code:
```
def sign_url(self, **kwargs):
url = kwargs.get("url", None)
if url is None:
raise Exception("sign_url required a url parameter")
if kwargs.get("gen_new_verifyFp", False):
verifyFp = self.gen_verifyFp()
else:
verifyFp = kwargs.get(
"custom_verifyFp",
"verify_khgp4f49_V12d4mRX_MdCO_4Wzt_Ar0k_z4RCQC9pUDpX",
)
if kwargs.get("custom_did") is not None:
did = kwargs.get("custom_did", None)
elif self.did is None:
did = str(random.randint(10000, 999999999))
else:
did = self.did
return (
verifyFp,
did,
self.browser.execute_script(
'''
var url = "'''
+ url
+ "&verifyFp="
+ verifyFp
+ """&did="""
+ did
+ """"
var token = window.byted_acrawler.sign({url: url});
return token;
"""
),
)
```
**Expected behavior**
I expected to receive signatures that, after concatenating with the API link, would give me the correct response from the server so I could get my data, but I'm receiving empty responses from the server with code 200.
That's how link with generated signature looks like: https://m.tiktok.com/api/music/item_list/?aid=1988&app_name=tiktok_web&device_platform=web&referer=&root_referer=&user_agent=Mozilla%252F5.0%2B%28iPhone%253B%2BCPU%2BiPhone%2BOS%2B12_2%2Blike%2BMac%2BOS%2BX%29%2BAppleWebKit%252F605.1.15%2B%28KHTML%2C%2Blike%2BGecko%29%2BVersion%252F13.0%2BMobile%252F15E148%2BSafari%252F604.1&cookie_enabled=true&screen_width=794&screen_height=1022&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&ac=4g&timezone_name=&appId=1233&appType=m&isAndroid=False&isMobile=False&isIOS=False&OS=windows&secUid=&musicID=6900010125709413125&count=30&cursor=0&shareUid=&language=en&verifyFp=verify_khr3jabg_V7ucdslq_Vrw9_4KPb_AJ1b_Ks706M8zIJTq&did=8631211774948926517&_signature=_02B4Z6wo00f01VSvdQwAAIBAu7v1doQFFoFUr1GAADXjba
This is what the response from the server looks like:
[screenshot of the empty response]
**Desktop (please complete the following information):**
- OS: Windows 10
- TikTokApi==3.9.8
**Additional context** | closed | 2021-07-18T10:59:43Z | 2021-07-19T19:15:27Z | https://github.com/davidteather/TikTok-Api/issues/642 | [
"bug"
] | oskardve | 1 |
microsoft/MMdnn | tensorflow | 912 | Caffe to keras model AttributeError: type object 'h5py.h5.H5PYConfig' has no attribute '__reduce_cython__' | Platform : Windows 10
Python version: 3.7.9
Source framework with version: Caffe 1.x
Destination framework with version: keras (tensorflow 1.15)
Pre-trained model path: OpenPose Body 25 https://github.com/CMU-Perceptual-Computing-Lab/openpose
Running scripts:
I have installed this repo through PyPI using
pip install mmdnn
I am trying to convert the model using the following command:
`mmconvert --srcFramework caffe --inputWeight pose_iter_584000.caffemodel --inputNetwork pose_deploy.prototxt --dstFramework keras --outputModel base.h5`
However I get the following error:
```
File "c:\users\alecd\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\alecd\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\alecd\anaconda3\Scripts\mmconvert.exe\__main__.py", line 7, in <module>
File "c:\users\alecd\anaconda3\lib\site-packages\mmdnn\conversion\_script\convert.py", line 108, in _main
ret = IRToCode._convert(code_args)
File "c:\users\alecd\anaconda3\lib\site-packages\mmdnn\conversion\_script\IRToCode.py", line 17, in _convert
from mmdnn.conversion.keras.keras2_emitter import Keras2Emitter
File "c:\users\alecd\anaconda3\lib\site-packages\mmdnn\conversion\keras\keras2_emitter.py", line 14, in <module>
from mmdnn.conversion.keras.extra_layers import Scale
File "c:\users\alecd\anaconda3\lib\site-packages\mmdnn\conversion\keras\extra_layers.py", line 8, in <module>
from keras.engine import Layer, InputSpec
File "C:\Users\alecd\AppData\Roaming\Python\Python38\site-packages\keras\__init__.py", line 3, in <module>
from . import utils
File "C:\Users\alecd\AppData\Roaming\Python\Python38\site-packages\keras\utils\__init__.py", line 6, in <module>
from . import conv_utils
File "C:\Users\alecd\AppData\Roaming\Python\Python38\site-packages\keras\utils\conv_utils.py", line 9, in <module>
from .. import backend as K
File "C:\Users\alecd\AppData\Roaming\Python\Python38\site-packages\keras\backend\__init__.py", line 89, in <module>
from .tensorflow_backend import *
File "C:\Users\alecd\AppData\Roaming\Python\Python38\site-packages\keras\backend\tensorflow_backend.py", line 5, in <module>
import tensorflow as tf
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
from tensorflow.python.tools import module_util as _module_util
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 84, in <module>
from tensorflow.python import keras
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\__init__.py", line 27, in <module>
from tensorflow.python.keras import models
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\models.py", line 24, in <module>
from tensorflow.python.keras import metrics as metrics_module
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\metrics.py", line 37, in <module>
from tensorflow.python.keras.engine import base_layer
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 59, in <module>
from tensorflow.python.keras.saving.saved_model import layer_serialization
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\saving\saved_model\layer_serialization.py", line 24, in <module>
from tensorflow.python.keras.saving.saved_model import save_impl
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\saving\saved_model\save_impl.py", line 34, in <module>
from tensorflow.python.keras.saving import saving_utils
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\saving\saving_utils.py", line 31, in <module>
from tensorflow.python.keras.utils.io_utils import ask_to_proceed_with_overwrite
File "c:\users\alecd\anaconda3\lib\site-packages\tensorflow\python\keras\utils\io_utils.py", line 31, in <module>
import h5py
File "c:\users\alecd\anaconda3\lib\site-packages\h5py\__init__.py", line 34, in <module>
from . import version
File "c:\users\alecd\anaconda3\lib\site-packages\h5py\version.py", line 17, in <module>
from . import h5 as _h5
File "h5py\h5.pyx", line 41, in init h5py.h5
AttributeError: type object 'h5py.h5.H5PYConfig' has no attribute '__reduce_cython__'
```
I have h5py 2.8 in my environment.
I can remove this error by installing keras 2.3.1 directly in my environment but then it throws the following error:
`AttributeError: module 'tensorflow' has no attribute 'placeholder'` which is a TF v2.x issue, and can only be bypassed to my knowledge by import tensorflow.compat.v1 and disabling v2 behavior. However, I would rather not have to dig through the source if there is another obvious fix.
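For reference, the shim referred to above is the standard TensorFlow 2.x compatibility toggle (it would have to be applied inside the converter code, which is exactly what I'd rather avoid):
```python
# Standard TF1-compatibility shim mentioned above (not an MMdnn-specific API).
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```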
I can successfully get the intermediate representation, and I have also converted the model to ONNX, but when I check the ONNX model using `onnx.checker.check_model` I get the following bad node:
```
ValidationError: Node (prelu4_2) has input size 1 not in range [min=2, max=2].
==> Context: Bad node spec:
conv4_2prelu4_2prelu4_2"PRelu*
slope
```
 | open | 2021-01-06T03:14:03Z | 2021-01-20T16:14:26Z | https://github.com/microsoft/MMdnn/issues/912 | [] | alecda573 | 2 |
lexiforest/curl_cffi | web-scraping | 107 | Can dynamic fingerprints be supported? | closed | 2023-08-21T08:03:13Z | 2023-08-21T08:14:41Z | https://github.com/lexiforest/curl_cffi/issues/107 | [] | huangpd | 2 |
|
Lightning-AI/pytorch-lightning | deep-learning | 19,526 | Model stuck after saving a checkpoint when using the FSDPStrategy | ### Bug description
I'm training a GPT model using Fabric. Below is the setup for Fabric.
It works well when running without saving a checkpoint. However, if I save a checkpoint using either `torch.save` with `fabric.barrier()` or `fabric.save()`, the training gets stuck.
I saw that `torch.distributed.barrier()` has a [similar issue](https://github.com/pytorch/pytorch/issues/54059). I don't have similar utilities in my code; not sure if there is an equivalent usage in `Fabric`.
### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
strategy = FSDPStrategy(
auto_wrap_policy={Block},
activation_checkpointing_policy={Block},
state_dict_type="full",
limit_all_gathers=True,
cpu_offload=False,
)
self.fabric = L.Fabric(accelerator=device, devices=n_devices, strategy=strategy, precision=precision)
```
Saving model with
```python
state = {"model": model}
full_save_path = os.path.abspath(get_path(base_dir, base_name, '.pt'))
fabric.save(full_save_path, state)
```
### Error messages and logs
```
# Error messages and logs here please
```
No errors; the run just gets stuck.
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): lightning, mainly using Fabric
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1.3
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0): 2.1.2+cu118
#- Python version (e.g., 3.9): 3.10.13
#- OS (e.g., Linux): Ubuntu
#- CUDA/cuDNN version: 11.8
#- GPU models and configuration: A100 40Gx2
#- How you installed Lightning(`conda`, `pip`, source): pip
#- Running environment of LightningApp (e.g. local, cloud): local
```
</details>
### More info
I think it relates to the communication between the systems.
cc @awaelchli @carmocca | closed | 2024-02-24T20:07:41Z | 2024-07-27T12:44:27Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19526 | [
"bug",
"strategy: fsdp",
"ver: 2.1.x",
"repro needed"
] | charlesxu90 | 3 |
luispedro/mahotas | numpy | 65 | Reconstruct surface from Zernike moments | It would be neat to have a function that reconstructs a surface based on the Zernike moments generated by mahotas.features.zernike_moments. In some cases one might want to compute the Euclidean distance between the original and the reconstructed image, thus justifying why this is useful.
Apparently someone has implemented such a thing (I have not tested):
https://github.com/tvwerkhoven/PyCourse/blob/master/python102/examples/py102-example2-zernike.py
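Until such a reconstruction routine exists, one can at least compare images through their moment vectors with the current API; a minimal sketch (illustrative only, not the requested reconstruction):
```python
# Compare two images by the Euclidean distance between their Zernike moment vectors.
import numpy as np
import mahotas

def zernike_distance(img_a, img_b, radius=64, degree=8):
    za = mahotas.features.zernike_moments(img_a, radius, degree=degree)
    zb = mahotas.features.zernike_moments(img_b, radius, degree=degree)
    return float(np.linalg.norm(za - zb))
```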
| open | 2015-10-17T20:30:13Z | 2016-01-18T15:54:27Z | https://github.com/luispedro/mahotas/issues/65 | [
"enhancement"
] | fredericoschardong | 4 |