repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
httpie/cli | python | 935 | Refactor client.py | @kbanc and I were working on #932 and found the `client.py` file a bit hard to go through, so we thought we could refactor it to look a bit like `sessions.py`: there would be a `Client` class, and all the logic for preparing and sending requests would be done from this file.
The idea is pretty rough but @jakubroztocil if you have some thoughts we would really appreciate them! | closed | 2020-06-18T20:35:25Z | 2021-12-28T12:04:41Z | https://github.com/httpie/cli/issues/935 | [] | gmelodie | 3 |
vitalik/django-ninja | pydantic | 981 | Model validation | Add validation logic from models to pydantic
- max_length for CharField
- SmallIntegerField
- DecimalField decimal_places
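A rough sketch of the requested mapping (illustrative only, not django-ninja's actual implementation; the field names and bounds are assumptions):
```python
# Sketch of how Django field options could map to Pydantic constraints:
#   CharField(max_length=100)                    -> Field(max_length=100)
#   SmallIntegerField()                          -> conint(ge=-32768, le=32767)
#   DecimalField(max_digits=8, decimal_places=2) -> condecimal(max_digits=8, decimal_places=2)
from pydantic import BaseModel, Field, condecimal, conint

class ProductSchema(BaseModel):
    name: str = Field(max_length=100)
    stock: conint(ge=-32768, le=32767)
    price: condecimal(max_digits=8, decimal_places=2)
```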
| open | 2023-12-05T11:01:13Z | 2024-01-01T13:32:49Z | https://github.com/vitalik/django-ninja/issues/981 | [] | vitalik | 0 |
Avaiga/taipy | automation | 2,014 | [🐛 BUG] taipy 4.0 scenario multiple reruns | ### What went wrong? 🤔
With Taipy 4.0, scenarios are rerun multiple times and the whole process is now 10x slower.
### Steps to Reproduce Issue
```python
import taipy as tp
from taipy.gui import Gui
from src.config.config import *
from taipy import Config
from src.pages.root import *
from src.constants import *

if __name__ == '__main__':
    # Load configuration
    Config.configure_job_executions(mode="standalone", max_nb_of_workers=1)
    Config.load('src/config/config.toml')
    scenario_cfg = Config.scenarios['nielsen_top_40']

    pages = {
        "/": root,
        "Databases": databases,
        "Data-Visualization-New": data_visualization_new,
        "Visualization-Hierarchy": visualization_hierarchy,
        "Emerging-Brands": emerging_brands,
        "Projection-Trends": projection_trends,
    }

    tp.Orchestrator().run()

    if tp.get_scenarios() == []:
        print('create_scenario')
        scenario = tp.create_scenario(scenario_cfg)
        print(scenario)
        tp.submit(scenario)
    else:
        print('get_scenario')
        scenario = tp.get_scenarios()[0]

    top40_country_list = TOP_40_MARKET_LS

    # Read datasets
    raw_dataset = scenario.raw_dataset.read()

    stylekit = {
        "color_primary": "#F40009",
        "highlight-row": "#F40009",
    }

    gui = Gui(pages=pages)
    gui.run(gui, title="NSP", dark_mode=False, port=8494, stylekit=stylekit)
```
### Code of Conduct
- [X] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2024-10-10T16:04:47Z | 2025-03-11T09:06:33Z | https://github.com/Avaiga/taipy/issues/2014 | [
"🖰 GUI",
"💥Malfunction",
"🟧 Priority: High",
"🔒 Staff only",
"🥶Waiting for contributor"
] | sumanth-tccc | 7 |
gradio-app/gradio | data-science | 10,318 | Image component omit comma on the file name | ### Describe the bug
When the Image component is set to type='filepath' and an image is dragged onto the component, the temporary file name omits the comma from the original file name. This may be an issue when the file name carries information, for example coordinates.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr

def show_image(image_path):
    print(image_path)

# Gradio interface
with gr.Blocks() as demo:
    img = gr.Image(type="filepath", sources=["upload"], interactive=True)
    btn = gr.Button("显示图片")  # "Show image"
    btn.click(show_image, inputs=img, outputs=None)

demo.launch()
```
### Screenshot


### Logs
_No response_
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.11.0
gradio_client version: 1.5.3
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.6
ffmpy: 0.5.0
gradio-client==1.5.3 is not installed.
httpx: 0.28.1
huggingface-hub: 0.27.1
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.1
orjson: 3.10.14
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.4
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.8.6
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.41.3
tomlkit: 0.13.2
typer: 0.15.1
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.12.0
httpx: 0.28.1
huggingface-hub: 0.27.1
packaging: 24.2
typing-extensions: 4.12.2
websockets: 14.1
```
### Severity
I can work around it | open | 2025-01-09T02:08:16Z | 2025-03-10T13:47:02Z | https://github.com/gradio-app/gradio/issues/10318 | [
"bug"
] | CNgg | 1 |
AirtestProject/Airtest | automation | 273 | When you enter text(text("Hello"))), the text is entered into the app player as an empty space. |
**Describe the bug**
When you enter text(text("Hello"))), the text is entered into the app player as an empty space.
app player: nox (https://kr.bignox.com/)
**Screenshots**
(https://user-images.githubusercontent.com/46311868/52838035-9044d000-3134-11e9-9afe-a60babdf675b.png)
**python version:** `python3.5`
**airtest version:** `1.0.22`
**Smartphone (please complete the following [e.g. google pixel 2]):**
- Device: Nox app player
- OS: [e.g. Android 8.1]
- more information if have
| closed | 2019-02-15T06:15:59Z | 2019-02-21T04:25:09Z | https://github.com/AirtestProject/Airtest/issues/273 | [] | JJunM | 2 |
modelscope/data-juicer | streamlit | 519 | Undefined symbol when running video_captioning_from_summarizer_mapper | An undefined-symbol error for fused_layer_norm_cuda.cpython is raised when loading the T5 model in video_captioning_from_summarizer_mapper:
ImportError: /usr/local/lib/python3.10/dist-packages/fused_layer_norm_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZN2at4_ops19empty_memory_format4callEN3c108ArrayRefINS2_6SymIntEEENS2_8optionalINS2_10ScalarTypeEEENS6_INS2_6LayoutEEENS6_INS2_6DeviceEEENS6_IbEENS6_INS2_12MemoryFormatEEE
**I resolved the issue by just uninstalling apex** according to [this](https://github.com/huggingface/diffusers/issues/8624). | closed | 2024-12-24T03:33:39Z | 2024-12-24T03:36:25Z | https://github.com/modelscope/data-juicer/issues/519 | [] | BeachWang | 0 |
deepset-ai/haystack | nlp | 8,450 | Extracting global information about a Document | One idea that came to mind: the LLMMetadataExtractor doesn't yet quite solve extracting global information about a Document (at least not elegantly).
For example, let's say I want to extract the title from a really long PDF (e.g. more than 200 pages) and store this title in all future chunks I create from this PDF file. With the way this component works I would need to do something like:
File -> PDFConverter -> LLMMetadataExtractor -> DocumentSplitter -> DocumentWriter
but since we have such a long PDF, there is a good chance that it will surpass the context window of the LLM, and in reality all I need to do is send the first few pages of the file to find the title, so this version also feels inefficient latency- and cost-wise.
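To illustrate the "first few pages only" idea outside of Haystack's own API (the helper name, the page count, and the use of pypdf are all assumptions, not the component's actual behavior):
```python
# Sketch only: copy the first pages of a PDF into an in-memory file so a
# title-extraction prompt stays far below the LLM's context window.
from io import BytesIO
from pypdf import PdfReader, PdfWriter

def first_pages(pdf_path, n=3):
    reader = PdfReader(pdf_path)
    writer = PdfWriter()
    for page in reader.pages[:n]:
        writer.add_page(page)
    buf = BytesIO()
    writer.write(buf)
    buf.seek(0)
    return buf
```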
So I was wondering if we could brainstorm a way to handle this type of use case as well?
Maybe simply to allow a user to specify a page range as well at init time so we don't need to send the whole Document to the LLM? | closed | 2024-10-11T08:15:01Z | 2024-11-23T13:22:24Z | https://github.com/deepset-ai/haystack/issues/8450 | [] | davidsbatista | 0 |
jmcnamara/XlsxWriter | pandas | 792 | Possible bug: Inserted Images are warped when using worksheet.set_column(...,cell_format=...) |
Hi,
I am using XlsxWriter to insert row and column formatting automatically (and it does) but when using the `set_column(...)` feature a side effect occurs in which inserted images are warped.
I am using Python version 3.8.2 and XlsxWriter 1.3.6 and Excel version 2016 (16.0.5083.1000) MSO (16.0.5095.1000) 32-bit.
Here is some code that demonstrates the problem:
```python
import xlsxwriter as excel
import datetime as dt
from io import BytesIO
from PIL import Image, ImageTk, ImageFont, ImageDraw
import base64
b64_str = b'iVBORw0KGgoAAAANSUhEUgAAAQkAAADCCAIAAADVUqDwAAAHe0lEQVR4nO3dT0sbaQDH8acmDq4rFXquh6RVegh5C4JapQSEeMi9UEhBvKyHXDx68ZDdgxQaELznEEGQUrWBvoXgoShNXkEFQyphmsgeui2u+UWTmSSTyXw/9GRxHh3zNc/8e3z07elzA6DFmNdfADCkaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQaAPQwn3Z6Nbx43Skt9ts5pJX22e93SZwD943AI02AI02AI02AI02AK0v56mUSj2xfF0a1GiAa7xvABptABptANrAjjeGVXxtcnVl/GUkFP19Ib/SLJvmyfv6YaHR7wOkwY0es1IJ69ViKGpujfVruPKnmw9Hdv6s0cMBH+Ttnu/Ao29Pn/d8o+qekcEdi7eOLu83CacyE+tvrOi92yoXaxtpu5sv29vRxdfTyUC3Rqy/273OO7w3Z9i+d5cCOaeKr01+vnic7eAVE12YOjqeSsX8OHo4lZnqcKBbI05kD558zllxZ2M+xNs936XgtRHPTB/tTHT+cjERK3swlfLZ6Nb+cXdV3Nan16W3e757wWojnMpNH70Jdf+JVjZn+Wb0mLV/MbXk8j7oiJU9mN7qWR7e7nlnAtFG6NmcMcakco+zCw5+PMYYYxam9td8Mbq1fzC15HCYO0Jpt7+2vd3zLgWiDWOMSeWmswutH26Wi/XNZHVm9vLnv0SydlrRW1h6O+l4Fj6o0cNbx+3DqNi5TDWRvPw93MxsNZGp5YrN9hu0ssfOv+ufvN3zLgSjjejb6dbfW+ViLTF7NZ++vn3usnRmv16+TOypl0tkfNXRHGNgo8czf+pHyip2LlOdWa5tFxql/501apQK9nb6ama2utmukMjEbsb5mX5v97w7AWkjcufH0zzNVOfbnyIs7XzPid9hoZcJJ6+SAY0em9yVE/pKPbFc2y7cf+2ikU9fJTJ2Wf1f9M2E45mVt3venWC08X/2ZvLq9UOvle33dutHo1H3+6tfo6c21Cmgbi4rlQq1+YwY1xhr3cVbxy3e7vluDazGyMTRxYTjzz7NXL4u9OTrsDeTtY6ubRXs0x3r7tw9Eo4bNxek+jZ6bHJdzelzf3V5vbVQ21x50np4EF204jsur1V7u+cdCNb7Rsc/HmOMufna+uYeGXsxlKPHE+Otbxrlve8OVp/I79bFzMrtjN/bPe9MgNpo5rr48RhjGudtTpsM3+jh1cXWIw373Y6j+6POrt8VWz/qZsbv7Z53LDBtnGa8XMKnv6PHrJetp6eKdt7p9vIfeznj93bPuxCQNir1f3pzuDKUo8+NtU6oTtXru1MF+7T1g5Gwk4sM3u55VwJ/j/o98ulLx796Bzl6fLZ1QtX8eu5m8JuvFXP3rpPI2AtjBnM07O2e/4Xnxf3vRVS0ce5qGtM4rxhzd54WmosZ48vZkTMBmVONsvBc68FG5eaLu41+Kd9zI0lA0Aag0YbvjT0T7xv9eKb0v5tqA4M2AI02fE9eRXZ0vvUBLs99+Q5tABptjCTXdx/14byw79CG78m7j0Jzrm4N7Mt5Yb+hDf9T1yLcnVOSN2j15dzXMKMN/ytdiOt0SysuludQN2iVyzfON+hLtDECzm/EExcLluMHWVOiq+bJ0UBXBB0CtDECzuwTccjh9EFW+Qhh5cdhsA7EDW2MhsbhJzGtcrQGQnjrb/HcefmTt0vTeoI2RkLp6IdaIqTrRQHbrOLj9BFCf6ON0aAfZDVmYarzhZ/jGb0sZ3mvPgRPUwwebYwKvQZChws/x6ytduvVVuobQXzTML5Zg6fNX3LALWfXG3vj+vUdsbIHT9Yr9sn7+uH57aUNw/G1sdWVP9JtF6vtfhWf0cEzsSOktHO1GRWrS/0UjVjpHSvdzQZ9uwxCTzCnGi35dFWtmenEaabao/XyfIo2Rk1je7n9ws+dauaSlw8tzjnyaGME3bfw84PKxXpiNshTqd9oY0SVCrX52ermXheFlIv1zeTlfDqwB9939OXvxGLIxMKpxMSrxVD07qL/zXLFlCs/Pny080GfQbWiDUBjTgVotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFotAFo/wIpKtrkgIqs5wAAAABJRU5ErkJggg=='
exampleData = [b64_str,b64_str,b64_str,b64_str,b64_str,b64_str]
def decodeAndConvert_base64String(b64_str, memory_mode=False):
    image = Image.open(BytesIO(base64.b64decode(b64_str)))
    # if memory_mode == False:
    #     return ImageTk.PhotoImage(image)
    if memory_mode == True:
        buffered = BytesIO()
        image.save(buffered, format='PNG')
        return buffered

def export(l):
    timestamp = f'{dt.datetime.now().strftime("%x")}_{dt.datetime.now().strftime("%X")}'
    timestamp = timestamp.replace(":", "-").replace("/", "-")
    workbook = excel.Workbook(timestamp + '.xlsx')
    border_format = workbook.add_format({'pattern': 14, 'bg_color': 'white', 'font_size': 14})  # for making borders; 'shrink': True
    worksheet = workbook.add_worksheet()
    x = 0
    y = 0
    for i in l:
        # worksheet.insert_image(f'{alphabet[index]}{counter}', filename, {'image_data': i})
        worksheet.insert_image(x, y, 'dummy.png', {'image_data': decodeAndConvert_base64String(i, True)})
        x += 28
        worksheet.set_row(x - 2, 14, border_format)
        if x >= 81:
            y += 15
            worksheet.set_column(y - 2, y - 2, None, cell_format=border_format)
            x = 0
    workbook.close()
export(exampleData)
```
Specifically: with `set_column(...)` output is:

But without `set_column(...)` [i.e.]:
```
for i in l:
    # worksheet.insert_image(f'{alphabet[index]}{counter}', filename, {'image_data': i})
    worksheet.insert_image(x, y, 'dummy.png', {'image_data': decodeAndConvert_base64String(i, True)})
    x += 28
    worksheet.set_row(x - 2, 14, border_format)
    if x >= 81:
        y += 15
        # worksheet.set_column(y - 2, y - 2, None, cell_format=border_format)
        x = 0
```
The images are not warped:

Workaround(s): [unimplemented] perhaps calculating how many columns will be needed, inserting those columns, and _then_ inserting images could work, OR, upon inserting an image, have another column format ready that is "clear" (so that the function switches between two different column formats as needed).
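A minimal, untested sketch of the first workaround idea (format all of the columns up front, then insert images afterwards). The output filename and the number of pre-formatted columns are assumptions; `decodeAndConvert_base64String` and `exampleData` come from the snippet above:
```python
import xlsxwriter as excel

workbook = excel.Workbook('preformatted_columns.xlsx')   # assumed filename
worksheet = workbook.add_worksheet()
border_format = workbook.add_format({'pattern': 14, 'bg_color': 'white', 'font_size': 14})

# Apply the column format to the whole image area *before* any images exist,
# so no later set_column() call runs while images are anchored to those cells.
worksheet.set_column(0, 60, None, cell_format=border_format)  # 60 columns is an assumption

x = y = 0
for i in exampleData:
    worksheet.insert_image(x, y, 'dummy.png', {'image_data': decodeAndConvert_base64String(i, True)})
    x += 28
    worksheet.set_row(x - 2, 14, border_format)
    if x >= 81:
        y += 15
        x = 0
workbook.close()
```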
Conclusion: need to know if this is a quirk of excel _or_ xlsxwriter. | closed | 2021-03-03T13:21:05Z | 2021-03-03T13:37:49Z | https://github.com/jmcnamara/XlsxWriter/issues/792 | [] | IntegralWorks | 1 |
home-assistant/core | python | 140,899 | SmartThings Integration - can't control stage level for scent diffuser device | ### The problem
I can't control the stage level for a scent diffuser device.
You can see the device details from my.smartthings.com below.
The stage level values are Stage1, Stage2, and Stage3.

### What version of Home Assistant Core has the issue?
core-2025.3.2
### What was the last working version of Home Assistant Core?
core-2025.3.2
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
SmartThings
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/smartthings
### Diagnostics information
[smartthings-01JP75A6PAGJC47NSRN6K89P12-침실욕실 디퓨저-6e269a66f3af10348bbb9728896d5822.json](https://github.com/user-attachments/files/19330501/smartthings-01JP75A6PAGJC47NSRN6K89P12-.-6e269a66f3af10348bbb9728896d5822.json)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-19T00:42:09Z | 2025-03-19T15:21:34Z | https://github.com/home-assistant/core/issues/140899 | [
"integration: smartthings"
] | limitless00net | 2 |
miguelgrinberg/Flask-Migrate | flask | 255 | flask-migrate sqlalchemy.exc.NoReferencedColumnError: | Hello,
I have been getting this error when I try to migrate my schema with any sort of changes. Here is a section of my models.py
```
class Dataset(db.Model):
    DatasetID = db.Column(db.Integer, primary_key=True)
    SampleID = db.Column(db.String(50), db.ForeignKey('sample.SampleID', onupdate="cascade", ondelete="restrict"), nullable=False)
    UploadDate = db.Column(db.Date, nullable=False)
    UploadID = db.Column(db.Integer, db.ForeignKey('uploaders.UploadID', onupdate="cascade", ondelete="restrict"), nullable=False)
    UploadStatus = db.Column(db.String(45), nullable=False)
    HPFPath = db.Column(db.String(500))
    DatasetType = db.Column(db.String(45), nullable=False)
    SolvedStatus = db.Column(db.String(30), nullable=False)
    InputFile = db.Column(db.Text)
    RunID = db.Column(db.String(45))
    Notes = db.Column(db.Text)
    analyses = db.relationship('Analysis', backref='dataset', lazy='dynamic')
    data2Cohorts = db.relationship('Dataset2Cohort', backref='dataset', lazy='dynamic')

class Dataset2Cohort(db.Model):
    __tablename__ = 'dataset2Cohort'
    DatasetID = db.Column(db.Integer, db.ForeignKey('dataset.DatasetID', onupdate="cascade", ondelete="cascade"), nullable=False, primary_key=True)
    CohortID = db.Column(db.Integer, db.ForeignKey('cohort.CohortID', onupdate="cascade", ondelete="restrict"), nullable=False, primary_key=True)

class Analysis(db.Model):
    AnalysisID = db.Column(db.String(100), primary_key=True)
    DatasetID = db.Column(db.Integer, db.ForeignKey('dataset.DatasetID', onupdate="cascade", ondelete="cascade"), nullable=False)
    PipelineVersion = db.Column(db.String(30))
    ResultsDirectory = db.Column(db.Text)
    ResultsBAM = db.Column(db.Text)
    AssignedTo = db.Column(db.String(100), nullable=True)
    analysisStatuses = db.relationship('AnalysisStatus', backref='analysis', lazy='dynamic')
```
My initial migration succeeded without any errors from flask-migrate (using above schema with some additional tables), but when I try to make any additional changes to the schema (such as removing a table or adding a new column etc), I keep getting this error.
```sqlalchemy.exc.NoReferencedColumnError: Could not initialize target column for ForeignKey 'dataset.datasetid' on table 'analysis': table 'dataset' has no column named 'datasetid'```
Dataset model clearly has the column DatasetID which was referenced as a foreign key from the Analysis model. Is there a reason why flask-migrate is throwing this error??
Thank you so much,
Teja. | closed | 2019-02-19T19:32:42Z | 2019-02-19T19:40:15Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/255 | [] | nvteja | 1 |
HumanSignal/labelImg | deep-learning | 384 | No module resources | Hi
I try to execute labelImg.py but I get the following error:
File "labelImg.py", line 29, in <module>
import resources
ImportError: No module named resources.
Regards
| closed | 2018-10-27T18:56:29Z | 2018-11-11T05:21:39Z | https://github.com/HumanSignal/labelImg/issues/384 | [] | jessiffmm | 2 |
tensorlayer/TensorLayer | tensorflow | 935 | DropoutLayer raising error | ### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
When I use the dropout layer, the code raises an error like:
>File "/home/dir/anaconda3/lib/python3.6/site-packages/tensorlayer/decorators/deprecated_alias.py", line 24, in wrapper
return f(*args, **kwargs)
>File "/home/dir/anaconda3/lib/python3.6/site-packages/tensorlayer/layers/dropout.py", line 102, in __init__
LayersConfig.set_keep[name] = tf.placeholder(LayersConfig.tf_dtype)
>InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'methNet_1/Placeholder_2' with dtype float
### Reproducible Code
```python
with tf.variable_scope('methNet', reuse=reuse) as vs:
    features = DenseLayer(features, 64, act=None, name='hidden')
    features = tl.layers.BatchNormLayer(features, beta_init=w_init, gamma_init=w_init, is_train=is_train, name='bn_same3')
    hidden = PReluLayer(features, channel_shared=True, name='prelu1')
    hidden = DropoutLayer(hidden, 0.7, name='drop2')
```
| closed | 2019-02-14T14:13:15Z | 2019-02-14T14:23:08Z | https://github.com/tensorlayer/TensorLayer/issues/935 | [] | online-translation | 0 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 133 | Error when running the 13B model: expected [5120 x 49953], got [5120 x 49952] | I have successfully merged and run chinese-llama-lora-7b, but when testing the 13B model with the same steps, the final step of running the model reported the following error:
```
main: seed = 1681268819
llama.cpp: loading model from ./models_cn/13B/ggml-model-q4_0.bin
llama_model_load_internal: format = ggjt v1 (latest)
llama_model_load_internal: n_vocab = 49953
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: f16 = 2
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 73.73 KB
llama_model_load_internal: mem required = 9917.04 MB (+ 1608.00 MB per state)
error loading model: llama.cpp: tensor 'output.weight' has wrong shape; expected [5120 x 49953], got [5120 x 49952]
llama_init_from_file: failed to load model
main: error: failed to load model './models_cn/13B/ggml-model-q4_0.bin'
```
The command I ran is:
`./main -m ./models_cn/13B/ggml-model-q4_0.bin --color -ins -t 8 --temp 0.2 -n 256 --repeat_penalty 1.3 -i -r "用户:" -p "对话"`
This says that the shape of 'output.weight' is wrong, so I looked at the preceding quantize step and found that at step 1 the shape is [5120 x 49953], but by the last step output.weight has become [5120 x 49952]. The value 49953 is configured in config.json; I noticed that the 7B directory contains a config.json, but the 13B directory does not have this file.
The quantize output is as follows:
```
llama.cpp: loading model from ./models_cn/13B/ggml-model-f16.bin
llama.cpp: saving model to ./models_cn/13B/ggml-model-q4_0.bin
[1/363] tok_embeddings.weight - [5120 x 49953], type = f16, quantizing .. size = 487.82 MB -> 152.44 MB | hist: 0.000 0.022 0.019 0.033 0.053 0.078 0.104 0.125 0.133 0.125 0.104 0.078 0.053 0.033 0.019 0.022
[2/363] layers.0.attention.wq.weight - [5120 x 5120], type = f16, quantizing .. 50.00 MB -> 15.62 MB | hist: 0.000 0.021 0.016 0.027 0.045 0.071 0.104 0.138 0.157 0.138 0.103 0.071 0.045 0.027 0.016 0.021
[3/363] layers.0.attention.wk.weight - [5120 x 5120], type = f16, quantizing .. size = 50.00 MB -> 15.62 MB | hist: 0.000 0.021 0.016 0.027 0.045 0.071 0.103 0.138 0.157 0.138 0.103 0.071 0.045 0.027 0.016 0.021
(intermediate steps omitted)
[361/363] layers.39.ffn_norm.weight - [5120], type = f32, size = 0.020 MB
[362/363] norm.weight - [5120], type = f32, size = 0.020 MB
[363/363] output.weight - [5120 x 49952], type = f16, quantizing .. size = 487.81 MB -> 152.44 MB | hist: 0.000 0.022 0.019 0.033 0.052 0.077 0.104 0.126 0.134 0.126 0.104 0.077 0.052 0.033 0.019 0.022
llama_model_quantize_internal: model size = 25177.22 MB
llama_model_quantize_internal: quant size = 7868.97 MB
llama_model_quantize_internal: hist: 0.000 0.022 0.019 0.033 0.053 0.078 0.104 0.125 0.133 0.125 0.104 0.078 0.053 0.033 0.019 0.022
main: quantize time = 140065.95 ms
main: total time = 140065.95 ms
```
Where might this problem be coming from? | closed | 2023-04-12T03:29:26Z | 2023-04-22T07:49:53Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/133 | [] | Leo4zhou | 10 |
httpie/cli | python | 643 | [suggestion] Friendly Content-Length representation | I'm not sure how this would work in terms of making clear what was actually sent by the server and what has been marked up by httpie, but it would be nice to have a "friendly" representation of the content length in base two units so output along the lines of `http --headers ...` could be more easily understood at a glance. | closed | 2017-12-24T23:12:06Z | 2017-12-28T15:42:03Z | https://github.com/httpie/cli/issues/643 | [] | mqudsi | 1 |
HumanSignal/labelImg | deep-learning | 612 | [Feature request] | Hi tzutalin!
Thanks for the amazing tool! - I am trying to detect workstation seats on an office floor plan. That means I have sometimes 200-300 label boxes on one image. It would be amazing if there was a way to select and copy paste multiple boxes!
Thank you! | open | 2020-06-28T19:00:29Z | 2020-06-28T19:00:29Z | https://github.com/HumanSignal/labelImg/issues/612 | [] | philippds | 0 |
gevent/gevent | asyncio | 1,514 | RecursionError: maximum recursion depth exceeded while calling a Python object | * gevent version: but I added 1.2.x to my package dependency
>>> import gevent; print(gevent.__version__)
1.1.2
>>>
* Python version: 3.6.10
* Operating System: Linux
When I try to create a client in Python 3.x it fails with the following error, but it works fine with no issue in Python 2.7:
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/lib/python3.6/site-packages/botocore/httpsession.py", line 180, in __init__
self._manager = PoolManager(**self._get_pool_manager_kwargs())
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/lib/python3.6/site-packages/botocore/httpsession.py", line 188, in _get_pool_manager_kwargs
'ssl_context': self._get_ssl_context(),
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/lib/python3.6/site-packages/botocore/httpsession.py", line 197, in _get_ssl_context
return create_urllib3_context()
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/lib/python3.6/site-packages/botocore/httpsession.py", line 72, in create_urllib3_context
context.options |= options
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/python3.6/lib/python3.6/ssl.py", line 465, in options
super(SSLContext, SSLContext).options.__set__(self, value)
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/python3.6/lib/python3.6/ssl.py", line 465, in options
super(SSLContext, SSLContext).options.__set__(self, value)
File "/home/kamidivk/workplace/ASDKanbanTools/env/ASDKanbanTools-1.0/test-runtime/python3.6/lib/python3.6/ssl.py", line 465, in options
super(SSLContext, SSLContext).options.__set__(self, value)
[Previous line repeated 320 more times]
RecursionError: maximum recursion depth exceeded while calling a Python object
```
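For context, gevent's documentation recommends applying the monkey patches as early as possible, before `ssl` (or anything that imports it, such as botocore/boto3) is imported; a sketch of that ordering:
```python
# Patch first, import everything else afterwards (per gevent's guidance);
# patching after ssl has already been imported is a known trigger for this
# kind of RecursionError in SSLContext.options.
from gevent import monkey
monkey.patch_all()

import boto3  # botocore/ssl are only imported after patching

client = boto3.client("s3")
```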
| closed | 2020-01-16T23:35:54Z | 2020-01-18T17:14:38Z | https://github.com/gevent/gevent/issues/1514 | [
"Type: Question"
] | vkkamidi | 1 |
slackapi/python-slack-sdk | asyncio | 1,297 | Building user_auth_blocks with slack_sdk.models class objects for chat.unfurl API call fails | In [`WebClient.chat_unfurl`](https://github.com/slackapi/python-slack-sdk/blob/5f800c15a8172bc40c20b29d8473bf28bffbd666/slack_sdk/web/client.py#L2118), the `user_auth_blocks` parameter is typed as `Optional[Union[str, Sequence[Union[Dict, Block]]]] = None`. However, it doesn't actually accept `Sequence[Block]`. It seems that [`internal_utils._parse_web_class_objects()`](https://github.com/slackapi/python-slack-sdk/blob/3610592dce82d210418098949cc66bd9c261cdfe/slack_sdk/web/internal_utils.py#L187) is called to parse `Attachment`/`Block`/`Metadata` objects into `dict`s, but only for hardcoded classes (why not recursively on `JsonObject`?) passed to 3 specific parameters.
In other cases, parameters are typed `dict`, and you can't tell if you can use a `Block` object inside it, without diving into the source code and realizing that there's only a handful of cases where the SDK calls `to_dict()` for you. For example, it isn't possible to use `Block` objects in `WebClient.chat_unfurl`'s `unfurls` parameter, since it expects a `dict` in the form of `{"url": {"blocks": [dict1, dict2, ...]}, ...}` and does not accept `{"url": {"blocks": [Block1, Block2, ...]}, ...}`. Same thing for `SocketModeResponse`'s response payload.
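For reference, the manual conversion the SDK doesn't do for you looks roughly like this (the URL is illustrative; `DividerBlock` and `to_dict()` are the same ones used in the reproduction below):
```python
# Manual workaround sketch: call .to_dict() yourself before nesting Block
# objects inside the unfurls payload (the URL here is illustrative).
from slack_sdk.models.blocks.blocks import DividerBlock

blocks = [DividerBlock(), DividerBlock()]
unfurls = {
    "https://example.com/some/link": {
        "blocks": [block.to_dict() for block in blocks],
    },
}
# client.chat_unfurl(channel="CHANNEL", ts="TS", unfurls=unfurls)
```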
### Reproducible in:
#### The Slack SDK version
```
slack-sdk==3.19.3
```
#### Python runtime version
```
Python 3.10.8
```
#### OS info
```
ProductName: macOS
ProductVersion: 12.6.1
BuildVersion: 21G217
Darwin Kernel Version 21.6.0: Thu Sep 29 20:13:56 PDT 2022; root:xnu-8020.240.7~1/RELEASE_ARM64_T6000
```
#### Steps to reproduce:
```python
from slack_sdk.models.blocks.blocks import DividerBlock
from slack_sdk.web.client import WebClient
# works
WebClient("SLACK_BOT_TOKEN").chat_unfurl(channel="CHANNEL", ts="TS", unfurls={}, user_auth_blocks=[{"type": "divider"}, {"type": "divider"}])
# TypeError: Object of type DividerBlock is not JSON serializable
WebClient("SLACK_BOT_TOKEN").chat_unfurl(channel="CHANNEL", ts="TS", unfurls={}, user_auth_blocks=[DividerBlock(), DividerBlock()])
# works
blocks = [DividerBlock(), DividerBlock()]
WebClient("SLACK_BOT_TOKEN").chat_unfurl(channel="CHANNEL", ts="TS", unfurls={}, user_auth_blocks=[block.to_dict() for block in blocks])
```
### Actual result:
```python
>>> WebClient("SLACK_BOT_TOKEN").chat_unfurl(channel="CHANNEL", ts="TS", unfurls={}, user_auth_blocks=[DividerBlock(), DividerBlock()])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/jsu/fuck/lib/python3.10/site-packages/slack_sdk/web/client.py", line 2150, in chat_unfurl
return self.api_call("chat.unfurl", json=kwargs)
File "/Users/jsu/fuck/lib/python3.10/site-packages/slack_sdk/web/base_client.py", line 156, in api_call
return self._sync_send(api_url=api_url, req_args=req_args)
File "/Users/jsu/fuck/lib/python3.10/site-packages/slack_sdk/web/base_client.py", line 187, in _sync_send
return self._urllib_api_call(
File "/Users/jsu/fuck/lib/python3.10/site-packages/slack_sdk/web/base_client.py", line 294, in _urllib_api_call
response = self._perform_urllib_http_request(url=url, args=request_args)
File "/Users/jsu/fuck/lib/python3.10/site-packages/slack_sdk/web/base_client.py", line 339, in _perform_urllib_http_request
body = json.dumps(args["json"])
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py", line 231, in dumps
return _default_encoder.encode(obj)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py", line 199, in encode
chunks = self.iterencode(o, _one_shot=True)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py", line 257, in iterencode
return _iterencode(o, 0)
File "/opt/homebrew/Cellar/python@3.10/3.10.8/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py", line 179, in default
raise TypeError(f'Object of type {o.__class__.__name__} '
TypeError: Object of type DividerBlock is not JSON serializable
``` | closed | 2022-11-12T01:37:03Z | 2022-11-16T00:56:30Z | https://github.com/slackapi/python-slack-sdk/issues/1297 | [
"bug",
"web-client",
"Version: 3x"
] | injust | 1 |
davidsandberg/facenet | tensorflow | 465 | Training is taking a lot of time | I'm trying to train with a lot of images.
Training takes a lot of time.
Following is my output:
```
INFO:__main__:Processing iteration 15000 batch of size: 50
INFO:__main__:Created 2234450 embeddings
INFO:__main__:Training Classifier
```
Is it really training? Because I have been seeing this for the past 6 hours. | closed | 2017-09-22T01:57:44Z | 2017-10-21T10:23:59Z | https://github.com/davidsandberg/facenet/issues/465 | [] | Zumbalamambo | 1 |
DistrictDataLabs/yellowbrick | scikit-learn | 906 | User cannot set color for FrequencyVisualizer | **Describe the bug**
The `FrequencyVisualizer` [takes a `color` parameter](https://github.com/DistrictDataLabs/yellowbrick/blob/develop/yellowbrick/text/freqdist.py#L129), but it doesn't actually know what to do with it!
**To Reproduce**
```python
from yellowbrick.text import FreqDistVisualizer
from yellowbrick.datasets import load_hobbies
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import LabelEncoder

# Load the text data
corpus = load_hobbies()
y = LabelEncoder().fit_transform(corpus.target)

vectorizer = CountVectorizer(stop_words='english')
docs = vectorizer.fit_transform(corpus.data)
features = vectorizer.get_feature_names()

visualizer = FreqDistVisualizer(
    features=features, orient='v', size=(600, 300), colors=["yellow"]
)
visualizer.fit(docs)
visualizer.poof()
```
**Dataset**
Just used the hobbies corpus from Yellowbrick.
**Expected behavior**
Yellow bars!
**Actual behavior**

**Desktop (please complete the following information):**
- OS: macOS
- Python Version: 3.7
- Yellowbrick Version: develop | closed | 2019-07-02T22:22:48Z | 2019-07-19T20:56:36Z | https://github.com/DistrictDataLabs/yellowbrick/issues/906 | [
"type: bug"
] | rebeccabilbro | 1 |
fastapi/sqlmodel | pydantic | 140 | How do I use UUIDs as the Primary key (Postgres DB) | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
import uuid
...
class Currency(CurrencyBase, table=True):
    __table_args__ = {"schema": COUNTRY_SCHEMA}

    uuid: uuid.UUID = Field(
        default_factory=uuid.uuid4,
        primary_key=True,
        index=True,
        nullable=False,
    )
    countries: List["Country"] = Relationship(
        back_populates="currencies", link_model=CountryCurrencyLink
    )
```
### Description
I am trying to create a UUID as the primary key for the above class/table.
The only examples I have been able to find look like the above, but when I run my FastAPI server I get the following error:
```
Exception has occurred: AttributeError (note: full exception trace is shown but execution is paused at: <module>)
'FieldInfo' object has no attribute 'UUID'
File "/home/chris/Development/fastapi/world-service/app/models/country.py", line 124, in Currency
uuid: uuid.UUID = Field(
File "/home/chris/Development/fastapi/world-service/app/models/country.py", line 121, in <module> (Current frame)
class Currency(CurrencyBase, table=True):
File "/home/chris/Development/fastapi/world-service/app/routers/countries.py", line 14, in <module>
from app.models.country import (
File "/home/chris/Development/fastapi/world-service/app/main.py", line 6, in <module>
from .routers import countries
File "<string>", line 1, in <module>
```
I would really appreciate it if someone could point me in the right direction.
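For what it's worth, a pattern seen in the SQLModel/FastAPI community is to import the module under an alias so that the field named `uuid` cannot shadow it inside the class body. This is a sketch only, assuming that shadowing is what produces the `'FieldInfo' object has no attribute 'UUID'` message (not confirmed):
```python
# Sketch: alias the stdlib uuid module so the field name `uuid` can't shadow it.
# CurrencyBase, COUNTRY_SCHEMA and CountryCurrencyLink are from the snippet above.
import uuid as uuid_pkg
from typing import List
from sqlmodel import Field, Relationship

class Currency(CurrencyBase, table=True):
    __table_args__ = {"schema": COUNTRY_SCHEMA}

    uuid: uuid_pkg.UUID = Field(
        default_factory=uuid_pkg.uuid4,
        primary_key=True,
        index=True,
        nullable=False,
    )
    countries: List["Country"] = Relationship(
        back_populates="currencies", link_model=CountryCurrencyLink
    )
```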
### Operating System
Linux
### Operating System Details
Linux Mint 20.2 Cinnamon
### SQLModel Version
0.0.4
### Python Version
3.8.10
### Additional Context
_No response_ | closed | 2021-10-19T23:32:53Z | 2021-12-10T14:29:16Z | https://github.com/fastapi/sqlmodel/issues/140 | [
"question"
] | chris-haste | 8 |
sktime/sktime | data-science | 7,521 | [DOC] Section Navigation Improvement | #### Describe the issue linked to the documentation
The Section Navigation could be improved with more subheadings, especially around the Notebook Examples.

#### Suggest a potential alternative/fix
I think the documentation would be improved if it had a designated section for each learning task:
Classification, Regression, Clustering, Forecasting, Detection
And add in other sections like Data Types, Data Transformations, Benchmarks, Hyperparameter Tuning.
Working example notebooks, or notebooks explaining the project.
I think it would be valuable to have smaller notebooks on more specific topics instead of very large expansive ones. | open | 2024-12-13T20:24:34Z | 2025-01-17T19:56:36Z | https://github.com/sktime/sktime/issues/7521 | [
"documentation"
] | RobotPsychologist | 4 |
thunlp/OpenPrompt | nlp | 322 | pcl_loss | Prototypical Verbalizer for Prompt-based Few-shot Tuning | Prototypical Contrastive Learning | After reading your paper [Prototypical Verbalizer for Prompt-based Few-shot Tuning](https://aclanthology.org/2022.acl-long.483.pdf), which is a very good paper, I was curious how the code implements it, so I tried to locate the program's entry point. After searching for quite a while, it seems to be the project's `experiments/cli.py` [[file](https://github.com/thunlp/OpenPrompt/blob/f6fb080ef755c37c01b7959e7560d007049510e8/experiments/cli.py#L42)]. That file reads a configuration file, and the configuration for prototypical learning is `experiments/classification_proto_verbalizer.yaml`.
Eventually I found the computation of [pcl_loss]() in the file openprompt/prompts/prototypical_verbalizer.py. Seen together with the paper, it consists of two losses. I went through the code with my own understanding and ran into some confusion. Below is the pcl_loss computation:
```python
def pcl_loss(self, v_ins):
    # instance-prototype loss
    sim_mat = torch.exp(self.sim(v_ins, self.proto))
    num = sim_mat.shape[1]
    loss = 0.
    for i in range(num):
        pos_score = torch.diag(sim_mat[:,i,:])
        neg_score = (sim_mat[:,i,:].sum(1) - pos_score)
        loss += - torch.log(pos_score / (pos_score + neg_score)).sum()
    loss = loss / (num * self.num_classes * self.num_classes)

    # instance-instance loss
    loss_ins = 0.
    for i in range(v_ins.shape[0]):
        sim_instance = torch.exp(self.sim(v_ins, v_ins[i]))
        pos_ins = sim_instance[i]
        neg_ins = (sim_instance.sum(0) - pos_ins).sum(0)
        loss_ins += - torch.log(pos_ins / (pos_ins + neg_ins)).sum()
    loss_ins = loss_ins / (num * self.num_classes * num * self.num_classes)

    loss = loss + loss_ins
    return loss
```
I have some confusion about the computation of the instance-prototype loss; I added some notes to the code:
```python
# instance-prototype loss
sim_mat = torch.exp(self.sim(v_ins, self.proto))  # shape should be (num_classes, batch_size, num_classes)
num = sim_mat.shape[1]  # get the batch_size
loss = 0.
for i in range(num):
    pos_score = torch.diag(sim_mat[:,i,:])
    neg_score = (sim_mat[:,i,:].sum(1) - pos_score)
    loss += - torch.log(pos_score / (pos_score + neg_score)).sum()
loss = loss / (num * self.num_classes * self.num_classes)
```
**Confusion**
1. With this implementation, doesn't the loss simply become `loss += - torch.log(pos_score / (sim_mat[:,i,:].sum(1))).sum()`?
2. `pos_score = torch.diag(sim_mat[:,i,:])` gives each sample's score on every class. But from the formula, $\mathbf{v}_i^n$ means the representation $\mathbf{v}_i$ belongs to class $n$, so shouldn't it be enough to consider only the class that sample $i$ belongs to?
3. Where is the exp from the paper reflected in the code?
| open | 2024-11-26T12:25:04Z | 2024-11-26T12:25:04Z | https://github.com/thunlp/OpenPrompt/issues/322 | [] | Charon-HN | 0 |
miguelgrinberg/python-socketio | asyncio | 373 | Sanic worker with python-socketio crashes when the client does a hot-reload | Hi,
I’m using python-socketio with Sanic, and having a client (front-end) running ReactJS (using [react-boilerplate](https://github.com/react-boilerplate/react-boilerplate)).
The Sanic app sometimes crashes when the client performs a hot-reload (when the ReactJS process auto-restarts in `dev` environment to update the newest changes). Here is the error logs retrieved from journalctl, the service’s name is Ora socketio:
```
Oct 31 16:04:39 ora-backend systemd[1]: Stopping Running Ora socketio...
Oct 31 16:04:39 ora-backend python3.7[15005]: [2019-10-31 16:04:39 +0000] [15005] [INFO] Stopping worker [15005]
Oct 31 16:04:39 ora-backend python3.7[15005]: [2019-10-31 16:04:39 +0000] [15005] [ERROR] Experienced exception while trying to serve
Oct 31 16:04:39 ora-backend python3.7[15005]: Traceback (most recent call last):
Oct 31 16:04:39 ora-backend python3.7[15005]: File "/root/venv/lib/python3.7/site-packages/sanic/app.py", line 1135, in run
Oct 31 16:04:39 ora-backend python3.7[15005]: serve(**server_settings)
Oct 31 16:04:39 ora-backend python3.7[15005]: File "/root/venv/lib/python3.7/site-packages/sanic/server.py", line 825, in serve
Oct 31 16:04:39 ora-backend python3.7[15005]: loop.run_until_complete(asyncio.sleep(0.1))
Oct 31 16:04:39 ora-backend python3.7[15005]: File "uvloop/loop.pyx", line 1415, in uvloop.loop.Loop.run_until_complete
Oct 31 16:04:39 ora-backend python3.7[15005]: RuntimeError: Event loop stopped before Future completed.
Oct 31 16:04:39 ora-backend python3.7[15005]: Traceback (most recent call last):
Oct 31 16:04:39 ora-backend python3.7[15005]: File "app_socketio.py", line 6, in <module>
Oct 31 16:04:39 ora-backend python3.7[15005]: app.run(**SOCKETIO_RUN_CONFIG)
Oct 31 16:04:39 ora-backend python3.7[15005]: File "/root/venv/lib/python3.7/site-packages/sanic/app.py", line 1135, in run
Oct 31 16:04:39 ora-backend python3.7[15005]: serve(**server_settings)
Oct 31 16:04:39 ora-backend python3.7[15005]: File "/root/venv/lib/python3.7/site-packages/sanic/server.py", line 825, in serve
Oct 31 16:04:39 ora-backend python3.7[15005]: loop.run_until_complete(asyncio.sleep(0.1))
Oct 31 16:04:39 ora-backend python3.7[15005]: File "uvloop/loop.pyx", line 1415, in uvloop.loop.Loop.run_until_complete
Oct 31 16:04:39 ora-backend python3.7[15005]: RuntimeError: Event loop stopped before Future completed.
Oct 31 16:04:39 ora-backend systemd[1]: ora_socketio.service: Main process exited, code=exited, status=1/FAILURE
Oct 31 16:04:39 ora-backend systemd[1]: ora_socketio.service: Failed with result 'exit-code'.
```
Do you know what the causes are and how to fix this?
I asked this same question on the forum of `Sanic` and they think this seems to be an issue with `python-socketio`: https://community.sanicframework.org/t/sanic-worker-with-python-socketio-crashes-when-the-client-does-a-hot-reload/436 | closed | 2019-11-01T17:39:40Z | 2019-11-03T15:04:11Z | https://github.com/miguelgrinberg/python-socketio/issues/373 | [
"question"
] | snguyenthanh | 2 |
airtai/faststream | asyncio | 1,228 | Feature: Features for the RPC mode | **Two cases of RPC mode for the producer.**
1) If None is returned from the consume function, I would like the publish function to return None instead of an empty byte string.
2) I would like to be able to pass a pydantic model to publish with rpc=True so that the RPC response is automatically validated.
```python
# ------------- SERVICE 1 -------------
from typing import ClassVar
from uuid import UUID, uuid4

from pydantic import BaseModel
from faststream.rabbit import RabbitRouter, RabbitBroker, RabbitQueue

broker = RabbitBroker()
service_1 = RabbitRouter()
broker.include_router(service_1)

class DTO(BaseModel):
    user_id: UUID | None
    queue: ClassVar[RabbitQueue] = RabbitQueue('v1.get', auto_delete=True)

class UserDTO(BaseModel):
    id: UUID
    name: str

@service_1.subscriber(
    queue=DTO.queue
)
async def handler(body: DTO) -> UserDTO | None:
    if body.user_id is None:
        return None
    return UserDTO(id=body.user_id, name='name')

# ------------- SERVICE 2 -------------
from faststream.rabbit import RabbitBroker, RabbitQueue

class DTO(BaseModel):
    user_id: UUID | None
    queue: ClassVar[RabbitQueue] = RabbitQueue('v1.get', auto_delete=True)

class UserDTO(BaseModel):
    id: UUID
    name: str

broker = RabbitBroker()

async def send():
    # 1 CASE
    message = DTO(user_id=None)
    result = await broker.publish(message, message.queue, rpc=True)
    # result => b''
    # but I want to:
    # result => None

    # 2 CASE
    message = DTO(user_id=uuid4())
    result = await broker.publish(message, message.queue, rpc=True)
    # result => {'id': *some UUID*, 'name': 'name'}
    # but I want to:
    # result = await broker.publish(message, message.queue, rpc=True, response_model=UserDTO)
    # result => UserDTO(id=*some UUID*, name='name')
``` | open | 2024-02-15T10:40:52Z | 2024-08-21T19:15:53Z | https://github.com/airtai/faststream/issues/1228 | [
"enhancement",
"Core"
] | maxim-f1 | 2 |
fastapi/sqlmodel | pydantic | 243 | SQLAlchemy Error on Many To Many Relationship | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import List, Optional

from sqlalchemy import UniqueConstraint
from sqlmodel import Relationship, SQLModel, Field

class UserPostLink:
    user_id: Optional[int] = Field(
        default=None, foreign_key="users.id", primary_key=True
    )
    post_id: Optional[int] = Field(
        default=None, foreign_key="posts.id", primary_key=True
    )

class Users(SQLModel, table=True):
    __table_args__ = (UniqueConstraint("email"),)

    id: Optional[int] = Field(default=None, primary_key=True)
    email: str = Field(index=True)
    name: str
    password: str

    posts: List["Posts"] = Relationship(back_populates="users", link_model=UserPostLink)

class Posts(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    type: str
    post_id: str

    users: List["Users"] = Relationship(back_populates="posts", link_model=UserPostLink)
```
### Description
When I create these models, I get the following error.
```
Traceback (most recent call last):
File "/usr/local/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/Cellar/python@3.10/3.10.2/Frameworks/Python.framework/Versions/3.10/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/Users/udasitharani/dev/code/later/server/main.py", line 15, in <module>
from . import constants, models
File "/Users/udasitharani/dev/code/later/server/models.py", line 14, in <module>
class Users(SQLModel, table=True):
File "/Users/udasitharani/dev/code/later/server/venv/lib/python3.10/site-packages/sqlmodel/main.py", line 356, in __init__
ins = inspect(rel_info.link_model)
File "/Users/udasitharani/dev/code/later/server/venv/lib/python3.10/site-packages/sqlalchemy/inspection.py", line 71, in inspect
raise exc.NoInspectionAvailable(
sqlalchemy.exc.NoInspectionAvailable: No inspection system is available for object of type <class 'type'>
```
Now my understanding is that this has to do with the `link_model` argument passed to `Relationship`, since when I remove that argument my app runs fine.
On searching Google, I found that a similar issue has been faced by people before with other packages when switching to 1.3.15, but SQLModel requires 1.4.17 or above.
My current version of SQLAlchemy is 1.4.31.
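Not necessarily the cause here, but for comparison, the link model in SQLModel's many-to-many tutorial is itself a table model, which is the kind of object SQLAlchemy's `inspect()` can handle; a sketch against the models above:
```python
# Sketch: the link model declared as a table model, as in SQLModel's
# many-to-many tutorial (field definitions copied from the snippet above).
class UserPostLink(SQLModel, table=True):
    user_id: Optional[int] = Field(
        default=None, foreign_key="users.id", primary_key=True
    )
    post_id: Optional[int] = Field(
        default=None, foreign_key="posts.id", primary_key=True
    )
```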
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.2
### Additional Context
_No response_ | closed | 2022-02-13T12:24:27Z | 2022-02-14T15:36:19Z | https://github.com/fastapi/sqlmodel/issues/243 | [
"question"
] | niftytyro | 3 |
fbdesignpro/sweetviz | pandas | 119 | Use html correlation heatmap (Associations) instead of picture. | If we have more than 100 features, no labels are legible in the current correlation map.
<img width="680" alt="image" src="https://user-images.githubusercontent.com/8296331/169457422-aeaa68c9-b444-45a8-b8ea-2ce02d675a94.png">
But if we create the heatmap with seaborn or just pandas, the user can zoom the HTML page to see the labels clearly.
<img width="719" alt="image" src="https://user-images.githubusercontent.com/8296331/169457533-8211e1b3-0233-4af4-b6c0-1f2e92b6a09a.png">
<img width="667" alt="image" src="https://user-images.githubusercontent.com/8296331/169457678-c6ef3d2b-0ed6-43f5-8250-2b9897c4f067.png">
Furthermore, using HTML+JS can provide hover information on heatmap cells.
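As a rough illustration with plain pandas (stand-in random data; a recent pandas with `Styler.to_html` is assumed):
```python
# Sketch: render the correlation matrix as an HTML table with a color gradient,
# which stays zoomable and keeps the labels readable.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(200, 5), columns=list("abcde"))  # stand-in data
corr = df.corr()
html = corr.style.background_gradient(cmap="RdBu_r", vmin=-1, vmax=1).to_html()
with open("associations.html", "w") as f:
    f.write(html)
```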
| open | 2022-05-20T05:35:43Z | 2023-10-04T13:01:11Z | https://github.com/fbdesignpro/sweetviz/issues/119 | [
"feature request"
] | PaleNeutron | 0 |
mage-ai/mage-ai | data-science | 5,559 | [BUG] Memory leak in mage app | ### Mage version
v0.9.74
### Describe the bug
We upgraded our Mage instance to v0.9.74 two weeks ago, and since then we have been seeing "weird" issues. The first problem is that there seems to be a memory leak in this version that I don't know if you are aware of. This screenshot shows the memory over 10 days and how, in that time, usage has roughly tripled.

In the second screenshot you can see how after a reboot the memory problem is solved and returns to stable levels.

Another problem we have seen is that the pod takes a long time to start: the pod itself is running in about 10 seconds, but the app takes ~8 min to be up and running. During all this time we see the installation of many packages that I do not know where they are defined or whether we need them. For example, the log I am sending installs many versions of the Sentry SDK; why so many?

Also, I guess as a result of all this, pipeline durations have skyrocketed, taking up to 3 times longer than before.
We are also seeing a lot of errors when we try to re-deploy the pod, most of them due to the package mashumaro-3.13.1. I will add just a bunch of them here, but this is causing several issues on our side because we don't know when the container will be up and running. Some examples:
**Example 1:**
```
Attempting uninstall: mashumaro
Found existing installation: mashumaro 3.13.1
Uninstalling mashumaro-3.13.1:
Uninstalling mashumaro-3.13.1:
ERROR: Could not install packages due to an OSError: [('/usr/local/lib/python3.10/site-packages/mashumaro-3.13.1.dist-info/METADATA', '/usr/local/lib/python3.10/site-packages/~-shumaro-3.13.1.dist-info/METADATA', "[Errno 2] No such file or directory: '/usr/local/lib/python3.10/site-packages/mashumaro-3.13.1.dist-info/METADATA'")]
```
**Example 2:**
```
Attempting uninstall: mashumaro
Found existing installation: mashumaro 3.13.1
Found existing installation: mashumaro 3.13.1
Uninstalling mashumaro-3.13.1:
Uninstalling mashumaro-3.13.1:
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'INSTALLER'
```
**Example 3:**
```
Attempting uninstall: mashumaro
Found existing installation: mashumaro 3.13.1
Found existing installation: mashumaro 3.13.1
Uninstalling mashumaro-3.13.1:
Uninstalling mashumaro-3.13.1:
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'RECORD'
```
When we see those errors the container keeps restarting; sometimes it needs 5 restarts to come up, sometimes 2, sometimes 1...
We have also seen this error with other packages, for example:
```
Attempting uninstall: urllib3
WARNING: No metadata found in /usr/local/lib/python3.10/site-packages
Found existing installation: urllib3 1.26.20
Can't uninstall 'urllib3'. No files were found to uninstall.
ERROR: Could not install packages due to an OSError: [Errno 39] Directory not empty: '/usr/local/lib/python3.10/site-packages/urllib3/'
```
or
```
Attempting uninstall: protobuf
Found existing installation: protobuf 4.21.12
Uninstalling protobuf-4.21.12:
ERROR: Could not install packages due to an OSError: [Errno 2] No such file or directory: 'source_context_pb2.py'
```
We also checked, and the files and directories are there:
```
root@mageai-0:/usr/local/lib/python3.10/site-packages# cd /usr/local/lib/python3.10/site-packages/mashumaro-3.13.1.dist-info/
root@mageai-0:/usr/local/lib/python3.10/site-packages/mashumaro-3.13.1.dist-info# ls -l
total 144
-rw-r--r-- 1 root root 4 Sep 24 21:08 INSTALLER
-rw-r--r-- 1 root root 10763 Sep 24 21:08 LICENSE
-rw-r--r-- 1 root root 114258 Sep 24 21:08 METADATA
-rw-r--r-- 1 root root 6321 Sep 24 21:08 RECORD
-rw-r--r-- 1 root root 92 Sep 24 21:08 WHEEL
-rw-r--r-- 1 root root 10 Sep 24 21:08 top_level.txt
```
### To reproduce
_No response_
### Expected behavior
_No response_
### Screenshots
_No response_
### Operating system
v1.30.2-gke.1587003
### Additional context
_No response_ | open | 2024-11-13T11:13:22Z | 2025-02-25T09:17:12Z | https://github.com/mage-ai/mage-ai/issues/5559 | [
"bug"
] | edulodgify | 5 |
Morizeyao/GPT2-Chinese | nlp | 251 | How can I continue training a downloaded model? Does anyone know? Please advise. | As per the title. | open | 2022-08-12T09:40:20Z | 2023-02-16T01:47:59Z | https://github.com/Morizeyao/GPT2-Chinese/issues/251 | [] | b95595 | 2 |
apache/airflow | data-science | 48,184 | Asset is not created when called inside Metadata | ### Apache Airflow version
main (development)
### If "Other Airflow 2 version" selected, which one?
_No response_
### What happened?
**API server:**
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/usr/local/lib/python3.9/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.9/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
File "/usr/local/lib/python3.9/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
File "/usr/local/lib/python3.9/site-packages/cadwyn/structure/versions.py", line 475, in decorator
response = await self._convert_endpoint_response_to_version(
File "/usr/local/lib/python3.9/site-packages/cadwyn/structure/versions.py", line 521, in _convert_endpoint_response_to_version
response_or_response_body: Union[FastapiResponse, object] = await run_in_threadpool(
File "/usr/local/lib/python3.9/site-packages/starlette/concurrency.py", line 37, in run_in_threadpool
return await anyio.to_thread.run_sync(func)
File "/usr/local/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2470, in run_sync_in_worker_thread
return await future
File "/usr/local/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 967, in run
result = context.run(func, *args)
File "/usr/local/lib/python3.9/site-packages/cadwyn/schema_generation.py", line 504, in __call__
return self._original_callable(*args, **kwargs)
File "/opt/airflow/airflow-core/src/airflow/api_fastapi/execution_api/routes/task_instances.py", line 311, in ti_update_state
TI.register_asset_changes_in_db(
File "/opt/airflow/airflow-core/src/airflow/utils/session.py", line 98, in wrapper
return func(*args, **kwargs)
File "/opt/airflow/airflow-core/src/airflow/models/taskinstance.py", line 2826, in register_asset_changes_in_db
raise AirflowInactiveAssetAddedToAssetAliasException(bad_alias_asset_keys)
airflow.exceptions.AirflowInactiveAssetAddedToAssetAliasException: The following assets accessed by an AssetAlias are inactive: Asset(name='b', uri='b')
```
**Celery worker:**
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 475, in __call__
do = self.iter(retry_state=retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 376, in iter
result = action(retry_state)
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 418, in exc_check
raise retry_exc.reraise()
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 185, in reraise
raise self.last_attempt.result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/local/lib/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/usr/local/lib/python3.9/site-packages/tenacity/__init__.py", line 478, in __call__
result = fn(*args, **kwargs)
File "/opt/airflow/task-sdk/src/airflow/sdk/api/client.py", line 539, in request
return super().request(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 827, in request
return self.send(request, auth=auth, follow_redirects=follow_redirects)
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 999, in _send_handling_redirects
raise exc
File "/usr/local/lib/python3.9/site-packages/httpx/_client.py", line 982, in _send_handling_redirects
hook(response)
File "/opt/airflow/task-sdk/src/airflow/sdk/api/client.py", line 108, in raise_on_4xx_5xx
return get_json_error(response) or response.raise_for_status()
File "/opt/airflow/task-sdk/src/airflow/sdk/api/client.py", line 104, in get_json_error
raise err
httpx.HTTPError: Server returned error
```

### What you think should happen instead?
_No response_
### How to reproduce
Use the below Dag:
```python
from airflow.decorators import dag, task
from airflow.datasets import Dataset, DatasetAlias
from airflow.sdk.definitions.asset.metadata import Metadata
from pendulum import datetime

my_alias_name = "alias-dataset-1"

@dag(
    dag_display_name="example_dataset_alias_mapped",
    start_date=datetime(2024, 8, 1),
    schedule=None,
    catchup=False,
    tags=["datasets"],
)
def dataset_alias_dynamic_test():
    @task
    def upstream_task():
        return ["a", "b"]

    @task(outlets=[DatasetAlias(my_alias_name)])
    def use_metadata(name):
        print("NAME: ", name)
        yield Metadata(
            Dataset(name),
            alias=DatasetAlias(my_alias_name),
            extra={}  # extra is NOT optional
        )

    use_metadata.expand(name=upstream_task())

dataset_alias_dynamic_test()
```
### Operating System
Linux
### Versions of Apache Airflow Providers
_No response_
### Deployment
Other
### Deployment details
_No response_
### Anything else?
_No response_
### Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
### Code of Conduct
- [x] I agree to follow this project's [Code of Conduct](https://github.com/apache/airflow/blob/main/CODE_OF_CONDUCT.md)
| open | 2025-03-24T09:39:35Z | 2025-03-24T13:02:48Z | https://github.com/apache/airflow/issues/48184 | [
"kind:bug",
"priority:high",
"area:core",
"area:datasets",
"affected_version:3.0.0beta"
] | atul-astronomer | 0 |
vaexio/vaex | data-science | 2,276 | [General Question / Potential Bug Report] Groupby operation on Large Dataset | Hi Vaex team! Firstly I want to say a big thank you for actively maintaining this project, it appears very promising!
I'm new to Vaex and I've been adopting Vaex for processing and computing statistics on larger datasets. As such I'm not sure whether my usecase is indeed a bug or whether there's something in the docs that I've missed.
## Usecase
I have a large dataset that I partitioned into 200 parquet files, where each parquet file contains 1M rows and 6 columns. Each file has an identical schema, and the majority of the columns consist of either UUID strings (e.g. uuid.uuid4()) or ordinary strings. For any of these columns there may be duplicated UUID / ordinary string entries. I read in all these files at once using vaex.open().
I'm interested in computing a groupby operation across the entire 200M rows for one of these columns and running a simple count aggregation to generate some statistics. However, my program consistently gets killed when trying to run these operations on one of these columns.
For such a use case, would there be a different serialization format (e.g. an Arrow file) that might work better for processing this data? Or is my issue more likely related to needing more RAM?
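For concreteness, the operation being described is roughly the following sketch (the file glob and column name are placeholders, not my real data):
```python
import vaex

# Open all 200 parquet partitions lazily as a single DataFrame
df = vaex.open("data/part-*.parquet")

# Group by one of the string/UUID columns and count rows per group
counts = df.groupby(by="some_uuid_column", agg=vaex.agg.count())
print(counts)
```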
## Testing Environment Details
Vaex library used = 4.14
Operating System = Ubuntu 18.04
Specs of machine = 8 core CPU + 16 GiB RAM.
| closed | 2022-11-18T21:48:44Z | 2022-12-02T18:43:09Z | https://github.com/vaexio/vaex/issues/2276 | [] | armandgurgu23 | 7 |
jupyter/nbgrader | jupyter | 1,433 | Re: customising nbgrader release_assignment for large classes | Hi,
**My use-case**: I teach engineering mechanics in a large class with about 300-odd first year undergraduate students. The entire module is being run with jupyterhub+nbgrader (4 assignments that make up 100% of the grade).
**My problem**: In theory this works great, but checking for cheating/plagiarism is laborious and time-consuming.
**Desired solution**: In the future, my thought is to have a set of problems in an assessment but distribute only a subset of them in a random fashion; this way, only some problems are shared between students and plagiarism checks can be easier to do. For smaller classes, I could give unique problems.
**Question**: How would one go about automating this behaviour with nbgrader_release? Perhaps, someone has already tried this and would be kind enough to guide me on how to implement it? | open | 2021-04-14T19:14:58Z | 2023-08-09T21:37:32Z | https://github.com/jupyter/nbgrader/issues/1433 | [
"enhancement"
] | angadhn | 4 |
explosion/spaCy | machine-learning | 11,989 | Square brackets are still not tokenized properly |
## How to reproduce the behaviour
I found https://github.com/explosion/spaCy/issues/1144, which says it's fixed, but the same problem persists.
```python
>>> import spacy
>>> nlp = spacy.blank("en")
>>> print(' '.join([w.text for w in nlp('descent.[citation needed]fault')]))
descent.[citation needed]fault
```
I expected `[` and `]` to be tokenized independently.
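For what it's worth, a user-side workaround sketch (an assumption on my part: add the brackets as extra infix patterns; not an official fix):
```python
import spacy
from spacy.util import compile_infix_regex

nlp = spacy.blank("en")
# Treat "[" and "]" as infixes so they are split off inside tokens
infixes = list(nlp.Defaults.infixes) + [r"\[", r"\]"]
nlp.tokenizer.infix_finditer = compile_infix_regex(infixes).finditer
print([t.text for t in nlp("descent.[citation needed]fault")])
```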
## Your Environment
- **spaCy version:** 3.3.1
- **Platform:** macOS-12.6-arm64-arm-64bit
- **Python version:** 3.10.8
- **Pipelines:** en_core_web_trf (3.3.0)
| closed | 2022-12-18T23:39:21Z | 2022-12-19T17:20:40Z | https://github.com/explosion/spaCy/issues/11989 | [
"lang / en",
"feat / tokenizer"
] | neongreen | 2 |
aminalaee/sqladmin | asyncio | 401 | Using exclude lists breaks detail view ordering | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
When getting rid of the exclude columns the `set` difference is used, which breaks the original ordering
https://github.com/aminalaee/sqladmin/blob/2fb1d26b6ae473f00607424bef58a28182fe0d13/sqladmin/models.py#L914-L915
To fix this I would instead use the following:
```
exclude_columns = {self.get_model_attr(attr) for attr in exclude}
attrs = [attr for attr in self._attrs if attr not in exclude_columns]
```
### Steps to reproduce the bug
Use non-empty `ModelView.column_details_exclude_list`
### Expected behavior
Model attributes are displayed in the order they were defined.
### Actual behavior
Model attributes are displayed in random order.
### Debugging material
_No response_
### Environment
Linux / 3.8 / 0.8
### Additional context
_No response_ | closed | 2022-12-24T20:16:25Z | 2023-01-05T07:59:21Z | https://github.com/aminalaee/sqladmin/issues/401 | [] | TarasKuzyo | 1 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 625 | AttributeError: module 'numba' has no attribute 'jit' | I couldn't run it; I'm getting the following error:
AttributeError: module 'numba' has no attribute 'jit' | closed | 2021-01-10T23:21:33Z | 2021-01-14T11:21:58Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/625 | [] | atillayurtseven | 18 |
keras-team/keras | data-science | 20,375 | Conv2D with Torch Creates Wrong Shape | While trying to move from Keras 2 to Keras 3 with PyTorch, I ran into a problem in my model where a Conv2D layer outputs a shape wider in one dimension and shorter in another. It behaves appropriately when using the Tensorflow backend.
A reproducing gist is [here](https://gist.github.com/i418c/73c6e18f18eaac3fac9c9a591efca7cf). | closed | 2024-10-18T00:47:47Z | 2024-11-22T02:05:36Z | https://github.com/keras-team/keras/issues/20375 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug",
"backend:torch"
] | i418c | 4 |
dynaconf/dynaconf | flask | 883 | [CI] Skip tests if changes are made only to docs/ folder | When a PR includes only changes to the docs/ folder, the whole test CI must be skipped; the only workflow required would be the Netlify preview build.
example https://github.com/dynaconf/dynaconf/pull/882
To solve this we need to explore on how to skip workflows conditionally and use git diff to list all the changed files | closed | 2023-03-09T18:35:03Z | 2023-03-12T01:43:26Z | https://github.com/dynaconf/dynaconf/issues/883 | [
"Not a Bug",
"RFC"
] | rochacbruno | 1 |
modin-project/modin | data-science | 7,367 | df.max()/min() on 1 column df leads to "could not broadcast input array from shape (6,) into shape (5,)" what from parquet loaded with ray | I am still trying to reliably save and load from parquet, but running into new problems.
It seems most of my problems are Windows-related, as on Linux the experience is a lot less painful.
While attempting to get manageable .parquet chunks, I used a partition column.
But modin.read_parquet does not support partition columns and defaults to pandas, which explodes my RAM.
ray.from_parquet works though, and with .to_modin() I get a Modin dataframe again that looks fine.
But when I do
`df['z'].max()`
I get
```
could not broadcast input array from shape (6,) into shape (5,)
```
Those 2 ints are not always the same though. They depend on the file I load, and I think on the set partitions.
But I can't seem to figure out what they mean.
I tried repartitioning but that did not help.
Any hint what's going on here?
To make my code run on windows as well, it would be great if I could use this workaround. On Linux it seams the modin load and save to parquet methods work a lot better | closed | 2024-08-12T16:11:22Z | 2024-09-19T11:28:18Z | https://github.com/modin-project/modin/issues/7367 | [
"question ❓",
"Triage 🩹"
] | Liquidmasl | 1 |
modin-project/modin | pandas | 7,468 | FEAT: Add progress bar for engine switch | **Is your feature request related to a problem? Please describe.**
When `DataFrame.set_backend` is called, a progress bar should display switch progress. | closed | 2025-03-18T21:36:13Z | 2025-03-20T22:02:24Z | https://github.com/modin-project/modin/issues/7468 | [
"new feature/request 💬",
"Interfaces and abstractions"
] | sfc-gh-joshi | 0 |
dgtlmoon/changedetection.io | web-scraping | 2,098 | Browser Steps - 'Response' object is not subscriptable | **Describe the bug**
A clear and concise description of what the bug is.
When using Browser Steps, I get the error `'Response' object is not subscriptable` from the browser.
ChangeDetection logs show:
```
Starting connection with playwright
Starting connection with playwright - done
24.1.30.104,172.22.0.1 - - [10/Jan/2024 18:56:04] "GET /browser-steps/browsersteps_start_session?uuid=4f9ddf86-9bd5-4d86-98a0-d98754761aec HTTP/1.1" 200 222 0.328140
Exception when calling step operation goto_url 'Response' object is not subscriptable
24.1.30.104,172.22.0.1 - - [10/Jan/2024 18:56:04] "POST /browser-steps/browsersteps_update?uuid=4f9ddf86-9bd5-4d86-98a0-d98754761aec&browsersteps_session_id=0371132f-97db-49fe-b6e0-cb251c59f7fe HTTP/1.1" 401 219 0.005911
```
Playwright docker does not show an error.
**Version**
v0.45.12
**To Reproduce**
Steps to reproduce the behavior:
1. Go to Edit
2. Click on BrowserSteps -> Click here to Start
4. See error
**Expected behavior**
Browser Steps will connect and appear.
**Screenshots**

**Desktop (please complete the following information):**
- OS: macOS Sonoma
- Browser: Edge
- Version: Latest?
**Smartphone (please complete the following information):**
- Device: N/A
- OS: N/A
- Browser: N/A
- Version: N/A
**Additional context**
Docker compose file for setup is:
```
version: "3.8"
services:
  changedetection:
    image: ghcr.io/dgtlmoon/changedetection.io:latest
    container_name: changedetection
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Americas/Example
      - BASE_URL=monitor.example.com
      - PLAYWRIGHT_DRIVER_URL="ws://playwright-chrome:3000/?stealth=1&--disable-web-security=true"
    volumes:
      - ./:/datastore
    ports:
      - 8083:5000
    restart: always
    depends_on:
      playwright-chrome:
        condition: service_started
  playwright-chrome:
    hostname: playwright-chrome
    container_name: playwright-chrome
    image: browserless/chrome
    restart: always
    ports:
      - 3000:3000
    environment:
      - SCREEN_WIDTH=1920
      - SCREEN_HEIGHT=1024
      - SCREEN_DEPTH=16
      - ENABLE_DEBUGGER=false
      - PREBOOT_CHROME=true
      - CONNECTION_TIMEOUT=300000
      - MAX_CONCURRENT_SESSIONS=10
      - CHROME_REFRESH_TIME=600000
      - DEFAULT_BLOCK_ADS=true
      - DEFAULT_STEALTH=true
      - DEFAULT_IGNORE_HTTPS_ERRORS=true
```
I get the error below from the browser console as well:

I've reviewed the issues https://github.com/dgtlmoon/changedetection.io/issues/1627 and https://github.com/dgtlmoon/changedetection.io/issues/1823 but they do not appear to give adequate solutions to the problem. I'm having the issue with both the full URL as well as the localhost URL. I do not have any authentication set up (with the full URL behind CF Access/Tunnels).
Thank you! | closed | 2024-01-10T19:02:18Z | 2024-01-10T22:37:54Z | https://github.com/dgtlmoon/changedetection.io/issues/2098 | [
"triage"
] | mrnoisytiger | 0 |
pytest-dev/pytest-cov | pytest | 497 | mock pymysql in function but effect in other module | # Summary
When I mock a pymysql connection error using `@pytest.fixture(autouse=True)`,
it works when running pytest on that single .py file,
but other .py files fail when run with pytest-cov.
In the end, I found that the mock_conn_error method does not only take effect in one test class; sometimes it also affects other modules.
Maybe this is a bug.
## mock_conn_error method is this
```python
@pytest.fixture(autouse=True)
def mock_conn_error():
    mock_conn = mock.MagicMock()
    mock_conn.side_effect = Exception
    pymysql.connect = mock_conn
```
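For comparison, a version of this fixture that undoes the patch after each test (a sketch using pytest's built-in `monkeypatch`) might look like this:
```python
import pytest
import pymysql
from unittest import mock


@pytest.fixture(autouse=True)
def mock_conn_error(monkeypatch):
    mock_conn = mock.MagicMock()
    mock_conn.side_effect = Exception
    # monkeypatch restores the original pymysql.connect at teardown
    monkeypatch.setattr(pymysql, "connect", mock_conn)
    yield
```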
## Versions
Python 3.8.0
pytest 6.2.4
pytest-cov 2.12.1
## Code for temp solution
Mock the pymysql connection error in every test method, like this:
```python
def test_somemethod():
    pass
    # mock
    mock_conn = mock.MagicMock()
    mock_conn.side_effect = Exception
    conn_bk = pymysql.connect
    pymysql.connect = mock_conn
    pass
    # release the mock
    pymysql.connect = conn_bk
```
| closed | 2021-09-27T02:26:44Z | 2021-10-26T07:33:06Z | https://github.com/pytest-dev/pytest-cov/issues/497 | [] | vekee | 2 |
PablocFonseca/streamlit-aggrid | streamlit | 19 | Format numbers with comma separating thousands | I'd like to display large numbers with comma separating thousands.
At first I assumed I could get this done with the Custom styling injecting JsCode on components front end.
I have this for example:

and I'd like to display like this to improve readability

Here is what I tried, but it's not producing any results:
```
cellstyle_jscode = JsCode("""
valueFormatter(params) {
return params.toString().split( /(?=(?:\d{3})+(?:\.|$))/g ).join( "," ),
};
""")
gb2.configure_column("Value", cellStyle=cellstyle_jscode)
gridOptions = gb2.build()
```
Any idea on how I could get this to work?
[I looked there to get the number formating in JavaScript](https://stackoverflow.com/questions/2254185/regular-expression-for-formatting-numbers-in-javascript#comment59393511_6179409)
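A sketch of what I'm aiming for, in case it helps (assumptions: `valueFormatter` is its own column property rather than something returned from `cellStyle`, and the grid is rendered with `allow_unsafe_jscode=True`):
```python
from st_aggrid import JsCode

value_formatter = JsCode("""
function(params) {
    return params.value == null ? '' : params.value.toLocaleString('en-US');
}
""")
gb2.configure_column("Value", valueFormatter=value_formatter)
gridOptions = gb2.build()
```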
| closed | 2021-04-16T21:50:39Z | 2021-04-17T18:02:58Z | https://github.com/PablocFonseca/streamlit-aggrid/issues/19 | [] | EcosystemEnergy | 1 |
dunossauro/fastapi-do-zero | pydantic | 202 | Support for the Opendyslexic font | Some changes for this were made in [this commit](https://github.com/dunossauro/fastapi-do-zero/commit/afbc68586c523e33a0f3a7884bb6079697f83dbc)
I went back on it in [this commit](https://github.com/dunossauro/fastapi-do-zero/commit/4868db090324bab45b6f04dd5e0e0df2bc671b07) after seeing the generated PDF (which, as I had mistakenly not noticed, has all the slides completely misconfigured).
This will stay open until I fix Marp to support the font the right way! | closed | 2024-07-16T19:36:38Z | 2024-07-30T19:33:36Z | https://github.com/dunossauro/fastapi-do-zero/issues/202 | [] | dunossauro | 0 |
microsoft/nni | data-science | 5,026 | frameworkcontroller reuseMode | **Describe the issue**: How to set 'reuseMode=false'? It seems that when reuseMode is true, the code in reusable is executed, and when reuseMode is false, the code in kubernetes is executed.

If the code in reusable is executed, the error below occurs:

**config.yml**:
experimentName: example_mnist_pytorch
trialConcurrency: 1
maxExecDuration: 1h
maxTrialNum: 2
debug: false
reuse: false
#nniManagerIp:
#choice: local, remote, pai, kubeflow
trainingServicePlatform: frameworkcontroller
searchSpacePath: search_space.json
#choice: true, false
useAnnotation: false
tuner:
  #choice: TPE, Random, Anneal, Evolution, BatchTuner, MetisTuner, GPTuner
  builtinTunerName: TPE
  classArgs:
    #choice: maximize, minimize
    optimize_mode: maximize
trial:
  codeDir: .
  taskRoles:
    - name: worker
      taskNum: 1
      command: python3 model.py
      gpuNum: 0
      cpuNum: 1
      memoryMB: 8192
      image: frameworkcontroller/nni:v1.0
      frameworkAttemptCompletionPolicy:
        minFailedTaskCount: 1
        minSucceededTaskCount: 1
frameworkcontrollerConfig:
  storage: nfs
  serviceAccountName: frameworkcontroller
  nfs:
    # Your NFS server IP, like 10.10.10.10
    server: <my nfs server ip>
    # Your NFS server export path, like /var/nfs/nni
    path: /nfs/data
**Environment**:
- NNI version: 2.8
- Training service (local|remote|pai|aml|etc): k8s frameworkcontroller
| open | 2022-07-27T07:35:42Z | 2022-08-01T02:43:38Z | https://github.com/microsoft/nni/issues/5026 | [
"question",
"user raised",
"Framework Support"
] | N-Kingsley | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,837 | Add cookies to nodriver from httpx or requests | The documentation specifies how to use nodriver cookies in other libraries like requests. Could someone explain how to do the reverse process? That is, we obtain cookies from httpx or requests and load them into the nodriver browser. I have looked at the classes in core/browser.py and have not been able to load cookies in any way (set_all, load...). | closed | 2024-04-21T08:51:46Z | 2024-04-23T18:51:57Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1837 | [] | phaeton1199 | 2 |
docarray/docarray | pydantic | 1,301 | Fix broken Hubble DocStore tests | The Document -> Doc refactor broke the DocStore hubble tests: https://github.com/docarray/docarray/pull/1293/commits/70fde35b4800d3b024df7824588d5591c8ec025a
We should look into getting these back online. | closed | 2023-03-29T07:07:22Z | 2023-05-04T13:44:39Z | https://github.com/docarray/docarray/issues/1301 | [] | Jackmin801 | 3 |
NVIDIA/pix2pixHD | computer-vision | 305 | Is there a way i can use the loss log text file to get a tensorboard after training? I dont want to retrain the model for the tensorboard. | Is there a way I can use the loss log text file to get a TensorBoard after training? I don't want to retrain the model just for the TensorBoard.
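A rough sketch of one way this could work (an assumption, not an existing script: it relies on the default `loss_log.txt` format, i.e. lines like `(epoch: 1, iters: 400, time: 0.123) G_GAN: 0.5 ...`, and the paths are placeholders):
```python
import re
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/from_loss_log")
step = 0
with open("checkpoints/experiment_name/loss_log.txt") as f:
    for line in f:
        header = re.match(r"\(epoch: (\d+), iters: (\d+).*?\)(.*)", line)
        if not header:
            continue
        # the tail looks like " G_GAN: 0.523 G_GAN_Feat: 1.201 D_real: 0.412 ..."
        for name, value in re.findall(r"([A-Za-z_]+): ([\d.]+)", header.group(3)):
            writer.add_scalar(name, float(value), step)
        step += 1
writer.close()
```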
_Originally posted by @maximusron in https://github.com/NVIDIA/pix2pixHD/issues/205#issuecomment-993815519_ | open | 2022-06-14T15:05:40Z | 2022-06-14T15:05:40Z | https://github.com/NVIDIA/pix2pixHD/issues/305 | [] | Ghaleb-alnakhlani | 0 |
kizniche/Mycodo | automation | 1,413 | Custom input module for the Pimeroni Automation HAT with TDS sensor | I have made a module that can interface with the Pimeroni Automation HAT's ADC#4 input. This module reads the input voltage and converts it into an EC measurement in mS/cm. The module allows the user to input a custom conversion factor in the input GUI; this way it can be used with any generic EC/TDS sensor. As far as I know, most sensors produce a voltage that varies linearly with the Total Dissolved Solids, so a simple conversion factor can be used. I used just a generic off-brand sensor to test the script and it works surprisingly well in regard to accuracy. I should say, however, that I do not know how to code that well, so I have had some help from AI.
```python
# -*- coding: utf-8 -*-
import copy
import time
import automationhat
from mycodo.inputs.base_input import AbstractInput


# A simple constraints function to ensure a positive value.
def constraints_pass_positive_value(mod_input, value):
    errors = []
    all_passed = True
    if value <= 0:
        all_passed = False
        errors.append("Must be a positive value")
    return all_passed, errors, mod_input


# Measurements dictionary; measurement index 0 holds our EC reading.
measurements_dict = {
    0: {
        'measurement': 'EC',
        'unit': '_S_cm'
    }
}

# Input information required by Mycodo.
INPUT_INFORMATION = {
    'input_name_unique': 'AutomationHAT_EC',
    'input_manufacturer': 'Pimoroni',
    'input_name': 'Automation HAT EC Input',
    'input_library': 'automationhat',
    'measurements_name': 'Electrical Conductivity',
    'measurements_dict': measurements_dict,
    'url_manufacturer': 'https://shop.pimoroni.com',
    'url_datasheet': 'https://docs.pimoroni.com/automationhat/',
    'url_product_purchase': 'https://shop.pimoroni.com/products/automation-hat',
    'dependencies_module': [
        ('pip-pypi', 'automationhat', 'automationhat')
    ],
    'interfaces': ['I2C'],
    'i2c_location': [],  # Not needed because the library handles it internally.
    'i2c_address_editable': False,
    'options_enabled': ['period', 'pre_output'],
    'options_disabled': ['interface', 'i2c_location'],
    'custom_options': [
        {
            'id': 'conversion_factor',
            'type': 'float',
            'default_value': 200.0,
            'constraints_pass': constraints_pass_positive_value,
            'name': 'Conversion Factor',
            'phrase': 'The conversion factor in µS/cm per Volt for converting the analog voltage reading.'
        }
    ]
}


class InputModule(AbstractInput):
    """
    Input module for reading Electrical Conductivity (EC) via the Pimoroni Automation HAT.

    The EC is calculated from the voltage reading on ADC channel 4 (0-3.3V channel)
    using a conversion factor (µS/cm per Volt) that is set via a custom option.
    """

    def __init__(self, input_dev, testing=False):
        super().__init__(input_dev, testing=testing, name=__name__)

        # Initialize the custom option variable to None
        self.conversion_factor = None

        # Setup custom options using the web interface settings.
        self.setup_custom_options(INPUT_INFORMATION['custom_options'], input_dev)

        # Retrieve the conversion factor from the custom options.
        self.conversion_factor = self.get_custom_option('conversion_factor')
        self.logger.info(f"Using conversion factor: {self.conversion_factor} µS/cm per Volt")

    def get_measurement(self):
        """
        Read the voltage from the Automation HAT's ADC and convert it to an EC value.

        Returns:
            dict: A dictionary of measurements keyed by their index.
        """
        self.return_dict = copy.deepcopy(measurements_dict)

        try:
            # For the 0-3.3V channel on the broken-out pins, use index 3.
            voltage = automationhat.analog.four.read()
            if voltage is None:
                self.logger.error("No voltage reading available from Automation HAT ADC channel 4.")
                return self.return_dict

            # Calculate EC in µS/cm from the voltage.
            ec_value = voltage * self.conversion_factor

            # Set the measurement value for index 0.
            self.value_set(0, ec_value)
            return self.return_dict

        except Exception as msg:
            self.logger.exception("Input read failure: {}".format(msg))
``` | open | 2025-03-23T10:07:09Z | 2025-03-23T10:07:09Z | https://github.com/kizniche/Mycodo/issues/1413 | [] | HanFarJR | 0 |
junyanz/iGAN | computer-vision | 23 | Versions of Theano and Cudnn that are supported | I am working in Windows. Could you tell the version of Theano and cudnn that the code will work on. ? | closed | 2018-04-24T09:40:52Z | 2018-05-02T05:39:20Z | https://github.com/junyanz/iGAN/issues/23 | [] | srutisiva | 1 |
pallets-eco/flask-sqlalchemy | flask | 482 | Make it easy to parse a Pagination object to dict | Being able to easely parse a Pagination object to dict would make it much more "ajax" friendly. | closed | 2017-03-04T02:20:39Z | 2022-09-18T18:10:06Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/482 | [
"pagination"
] | italomaia | 9 |
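For the Pagination request above, a minimal user-side helper might look like this sketch (not part of Flask-SQLAlchemy's API; `item_to_dict` is an assumed per-model serializer):
```python
def pagination_to_dict(pagination, item_to_dict):
    return {
        "items": [item_to_dict(item) for item in pagination.items],
        "page": pagination.page,
        "per_page": pagination.per_page,
        "total": pagination.total,
        "pages": pagination.pages,
        "has_next": pagination.has_next,
        "has_prev": pagination.has_prev,
    }
```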
jazzband/django-oauth-toolkit | django | 923 | follow-up to #906 | @dag18e @n2ygk I think we added some things here that are not really needed:
1) AFAIK .filter() can't raise DoesNotExist
2) L412 casts the Queryset to a list and then slices it; I do not think this is needed and it slows things down a bit
I feel these things should be addressed in a follow-up PR
_Originally posted by @MattBlack85 in https://github.com/jazzband/django-oauth-toolkit/pull/906#discussion_r572176021_ | closed | 2021-02-08T16:44:04Z | 2021-03-12T09:20:14Z | https://github.com/jazzband/django-oauth-toolkit/issues/923 | [
"enhancement"
] | n2ygk | 6 |
coqui-ai/TTS | pytorch | 3,019 | [Bug] TypeError: 'NoneType' object is not subscriptable | ### Describe the bug
Traceback (most recent call last):
File "G:\TTS\script.py", line 15, in <module>
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
TypeError: 'NoneType' object is not subscriptable
### To Reproduce
I took an example from the readme
```python
import torch
from TTS.api import TTS
# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"
# List available 🐸TTS models and choose the first one
model_name = TTS().list_models()[0]
# Init TTS
tts = TTS(model_name).to(device)
# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
```
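For comparison, a sketch of the XTTS-style call (an assumption: this model expects a reference clip via `speaker_wav` instead of a built-in `speakers` list; the wav path is a placeholder):
```python
tts.tts_to_file(
    text="Hello world!",
    speaker_wav="path/to/reference.wav",  # placeholder reference clip
    language="en",
    file_path="output.wav",
)
```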
### Expected behavior
_No response_
### Logs
```shell
"Ya zapustil venv"
Python 3.10.6
Microsoft Windows [Version 10.0.22631.2338]
(c) Корпорация Майкрософт (Microsoft Corporation). Все права защищены.
(venv) G:\TTS>python script.py
G:\TTS\venv\lib\site-packages\torchaudio\compliance\kaldi.py:22: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf . Check the section C-API incompatibility at the Troubleshooting ImportError section at https://numpy.org/devdocs/user/troubleshooting-importerror.html#c-api-incompatibility for indications on how to solve this problem . (Triggered internally at ..\torch\csrc\utils\tensor_numpy.cpp:84.)
EPSILON = torch.tensor(torch.finfo(torch.float).eps)
No API token found for 🐸Coqui Studio voices - https://coqui.ai
Visit 🔗https://app.coqui.ai/account to get one.
Set it as an environment variable `export COQUI_STUDIO_TOKEN=<token>`
> tts_models/multilingual/multi-dataset/xtts_v1 is already downloaded.
> Using model: xtts
Traceback (most recent call last):
File "G:\TTS\script.py", line 15, in <module>
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
TypeError: 'NoneType' object is not subscriptable
(venv) G:\TTS>
```
### Environment
```shell
{
"CUDA": {
"GPU": [
"NVIDIA GeForce RTX 3060"
],
"available": true,
"version": "11.7"
},
"Packages": {
"PyTorch_debug": false,
"PyTorch_version": "2.0.1+cu117",
"TTS": "0.17.6",
"numpy": "1.22.0"
},
"System": {
"OS": "Windows",
"architecture": [
"64bit",
"WindowsPE"
],
"processor": "Intel64 Family 6 Model 167 Stepping 1, GenuineIntel",
"python": "3.10.6",
"version": "10.0.22631"
}
}
```
### Additional context
_No response_ | closed | 2023-09-30T18:03:02Z | 2023-10-16T10:04:04Z | https://github.com/coqui-ai/TTS/issues/3019 | [
"bug"
] | bropines | 6 |
chatanywhere/GPT_API_free | api | 299 | your secretKey is not configure |
<img width="229" alt="Snipaste_2024-09-25_13-32-13" src="https://github.com/user-attachments/assets/9059269c-0f42-4d69-b178-ceffa4cd4cc8">
**Describe the bug 描述bug**
A clear and concise description of what the bug is.
**To Reproduce 复现方法**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Screenshots 截图**
If applicable, add screenshots to help explain your problem.
**Tools or Programming Language 使用的工具或编程语言**
Describe in detail the GPT tool or programming language you used to encounter the problem
**Additional context 其他内容**
Add any other context about the problem here.
| open | 2024-09-25T05:32:54Z | 2024-09-26T14:40:22Z | https://github.com/chatanywhere/GPT_API_free/issues/299 | [] | Luciou-gy | 2 |
dask/dask | numpy | 11,281 | Ensure that array-shuffle with range-like full indexer is a no-op | Assume
```
arr = da.from_array(...)
arr.shuffle(indexer)
```
where the flattened indexer is basically
```
list(range(0, len(arr)))
```
where the sub-lists also match the chunk structure
This should be a no-op. This is unlikely to be triggered in Dask, but xarray adds a shuffle as well and ``arr.shuffle().shuffle()`` should only trigger one shuffle there.
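A sketch of the check this would need (an assumption about the shape of the fix, not existing Dask internals):
```python
import numpy as np


def is_identity_indexer(indexer, chunks):
    """True if the indexer just re-emits 0..n-1 along the existing chunk boundaries."""
    flat = np.concatenate([np.asarray(part) for part in indexer])
    if not np.array_equal(flat, np.arange(flat.size)):
        return False
    return tuple(len(part) for part in indexer) == tuple(chunks)
```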
See https://github.com/pydata/xarray/pull/9320#discussion_r1707501882 for context | closed | 2024-08-07T19:22:55Z | 2024-08-13T10:51:34Z | https://github.com/dask/dask/issues/11281 | [
"array"
] | phofl | 0 |
holoviz/panel | matplotlib | 7,390 | bokeh.core.serialization.SerializationError: can't serialize | I'm on panel==1.5.2 and playing with Mermaid.js. Mermaid allows providing an initial configuration. For example a `theme`.
Instead of providing a `theme` parameter on the `MermaidDiagram` I would like to provide a `config` parameter exposing the various configuration options. But when I do I get a `SerializationError`.
```python
import panel as pn
import param
from panel.custom import JSComponent
pn.extension()
class MermaidConfig(param.Parameter):
    theme = param.Selector(
        default="default", objects=["default", "forest", "dark", "neutral"]
    )


class MermaidDiagram(JSComponent):
    """A MermaidDiagram Pane

    Example:

    diagram = MermaidDiagram(
        object=(
            '''
            graph LR
            A --- B
            B-->C[fa:fa-ban forbidden]
            B-->D(fa:fa-spinner);
            '''
        )
    )
    """

    object = param.String(default="")

    configuration = param.ClassSelector(class_=MermaidConfig, default=MermaidConfig())

    _esm = """
    import mermaid from 'mermaid';

    mermaid.initialize({ startOnLoad: false, look: "handDrawn" });

    async function getSVG(object) {
        if (object) {
            const { svg } = await mermaid.render('graphDiv', object);
            return svg;
        } else {
            return null;
        }
    }

    export async function render({ model, el }) {
        async function update() {
            console.log("update");
            el.innerHTML = await getSVG(model.object);
        }
        await update();
        model.on('object', update);
    }
    """

    _importmap = {
        "imports": {
            "mermaid": "https://cdn.jsdelivr.net/npm/mermaid@11/dist/mermaid.esm.min.mjs"
        }
    }


diagram = MermaidDiagram(
    object="""
    graph LR
    A --- B
    B-->C[fa:fa-ban forbidden]
    B-->D(fa:fa-spinner);
    """,
    styles={"border": "1px solid black"},
    width=500,
)
diagram.servable()
```
```bash
File "/opt/conda/lib/python3.11/site-packages/bokeh/core/serialization.py", line 475, in error
raise SerializationError(message)
bokeh.core.serialization.SerializationError: can't serialize <class 'bokeh_app_d749b42a2bfc468687139f97520c250e.MermaidConfig'>
```
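One workaround sketch (an assumption: pass the config as a plain `param.Dict`, which Bokeh can serialize, instead of a custom `Parameter` subclass):
```python
import param
from panel.custom import JSComponent


class MermaidDiagram(JSComponent):
    object = param.String(default="")
    # plain dict instead of a custom Parameter subclass
    configuration = param.Dict(default={"theme": "default"})
```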
If this is not a bug (:-)) then please document how to implement this correctly. Thanks.
For more context see https://discourse.holoviz.org/t/how-to-use-mermaid-with-panel/8324/2. | open | 2024-10-11T16:37:12Z | 2024-10-11T17:10:22Z | https://github.com/holoviz/panel/issues/7390 | [
"TRIAGE"
] | MarcSkovMadsen | 0 |
dynaconf/dynaconf | fastapi | 733 | [RFC] Allow multiple envvar prefixes. was: Switch env with additional env variables | I am wondering whether there is some simple method to switch the env label using additional environment variables.
I am writing an SDK for TencentCloud. I want to use the environment variables available in the function-computing runtime. An environment variable, `"TENCENTCLOUD_RUNTIME"="SCF"`, can be used to identify the environment. I hope to use such variables as additional default values, or to replace some existing defaults, for example "TENCENTCLOUD_SECRETID" in place of the SDK's "QCLOUDSDK_SECRET_ID".
I cannot figure out how to use a custom loader, or which parameters to pass to control the settings class.
In addition, the docs have no detailed description of "@str format" and "@jlina" with env. I cannot find proper ways to deal with more complicated conditions, for example, "TENCENTCLOUD_SECRETID" only exists when an access role is set. | open | 2022-04-01T08:37:02Z | 2022-09-21T15:33:19Z | https://github.com/dynaconf/dynaconf/issues/733 | [
"question",
"RFC"
] | Guo-Zhang | 2 |
slackapi/bolt-python | fastapi | 470 | `slack_bolt` socket mode connection eventually fails at `connect()` | I am running a slack application in socket mode. While initiating the slack app websocket connection using `connect()`, I am handling an error event on the app by passing the error handler to `SocketModeHandler`. I now see the socket connections are inconsistent and eventually fail with below log messages.
### Reproducible in:
#### The `slack_bolt` version
```
slack-bolt==1.7.0
slack-sdk==3.8.0
```
#### Python runtime version
```
Python 3.6.8
```
#### OS info
```
$ uname -a
Linux dev-10-178-5-101 3.10.0-1160.15.2.el7.x86_64 #1 SMP Thu Jan 21 16:15:07 EST 2021 x86_64 x86_64 x86_64 GNU/Linux
```
#### Steps to reproduce:
1. I am using below code to initialise slack app in socket-mode.
```
def reinit_handle(self, webSocketApp, error):
log.info("re-init slack bot with new socket connection")
try:
webSocketApp.close()
except:
log.error("unable to close existing socket connections")
finally:
self.slack_init()
def slack_init(self):
log.info("slack socket server init")
host,port = env["PROXY"].split(":")
try:
while True and (self.sapp is not None):
app_handle = SocketModeHandler(self.sapp, env["SLACK_APP_TOKEN"], http_proxy_host=host, http_proxy_port=port, proxy_type='http')
app_handle.client.on_error_listeners.append(self.reinit_handle)
app_handle.connect()
Event().wait(timeout=500)
except Exception as e:
log.error("unable to init slack app: ",e)
o.slack_init()
```
### Actual result:
The connection seems to be working fine, with the logs below:
```
2021-09-16 23:14:20,895:INFO:ombot.routes:slack_init:slack socket server init
2021-09-16 23:14:20,969:INFO:slack_bolt.App:_monitor_current_session:The session seems to be already closed. Going to reconnect...
2021-09-16 23:14:20,969:INFO:slack_bolt.App:connect:A new session has been established
2021-09-16 23:14:21,396:INFO:slack_bolt.App:_run_current_session:Starting to receive messages from a new connection
```
However, soon after a few minutes I get the log messages below:
```
2021-09-17 01:46:42,806:INFO:slack_bolt.App:_monitor_current_session:The session seems to be already closed. Going to reconnect...
2021-09-17 01:46:47,166:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketBadStatusException, message: Handshake status 408 Request Timeout)
2021-09-17 01:46:47,166:INFO:ombot.routes:reinit_handle:re-init slack bot with new socket connection
2021-09-17 01:46:47,166:INFO:ombot.routes:slack_init:slack socket server init
```
As you can see above, my slack error handler `reinit_handle` is kicked in to restart the slack app with new socket connection. This seems to work fine as well. However after few hours the `reinit_handle` when trying to recover with a new connection, it fails with below log messages:
```
2021-09-17 03:50:35,311:WARNING:websocket:warning:send_ping routine terminated: socket is already closed.
2021-09-17 03:48:45,917:INFO:slack_bolt.App:_monitor_current_session:The session seems to be already closed. Going to reconnect...
2021-09-17 03:50:44,631:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketConnectionClosedException, message: Connection to remote host was lost.)
2021-09-17 03:50:52,212:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketConnectionClosedException, message: Connection to remote host was lost.)
2021-09-17 03:50:52,216:WARNING:websocket:warning:send_ping routine terminated: [Errno 9] Bad file descriptor
2021-09-17 03:52:12,110:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketConnectionClosedException, message: Connection to remote host was lost.)
2021-09-17 03:52:19,008:WARNING:websocket:warning:send_ping routine terminated: socket is already closed.
2021-09-17 03:52:30,218:WARNING:websocket:warning:send_ping routine terminated: [Errno 32] Broken pipe
2021-09-17 03:52:30,913:ERROR:slack_bolt.App:on_error:on_error invoked (error: BrokenPipeError, message: [Errno 32] Broken pipe)
2021-09-17 03:52:32,037:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketConnectionClosedException, message: Connection to remote host was lost.)
2021-09-17 03:55:20,419:ERROR:slack_bolt.App:on_error:on_error invoked (error: BrokenPipeError, message: [Errno 32] Broken pipe)
2021-09-17 03:55:21,291:ERROR:slack_bolt.App:on_error:on_error invoked (error: WebSocketConnectionClosedException, message: Connection to remote host was lost.)
2021-09-17 03:55:22,611:WARNING:websocket:warning:send_ping routine terminated: [SSL: BAD_LENGTH] bad length (_ssl.c:2483)
```
post to this, when `reinit_handle` continuously tries to recover from the above errors it gets into even bad state with below logs:
```
2021-09-17 04:05:44,106:INFO:ombot.routes:reinit_handle:re-init slack bot with new socket connection
2021-09-17 04:04:22,604:WARNING:websocket:warning:send_ping routine terminated: [Errno 32] Broken pipe
2021-09-17 04:04:23,414:ERROR:slack_bolt.App:on_error:on_error invoked (error: BrokenPipeError, message: [Errno 32] Broken pipe)
2021-09-17 04:04:23,419:ERROR:slack_bolt.App:on_error:on_error invoked (error: BrokenPipeError, message: [Errno 32] Broken pipe)
2021-09-17 04:04:24,009:WARNING:websocket:warning:send_ping routine terminated: [Errno 32] Broken pipe
2021-09-17 04:04:25,111:ERROR:slack_bolt.App:_perform_urllib_http_request:Failed to send a request to Slack API server: <urlopen error EOF occurred in violation of protocol (_ssl.c:1129)>
--- Logging error ---
Traceback (most recent call last):
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 1346, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 1257, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 1303, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 1252, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 1012, in _send_output
self.send(msg)
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 952, in send
self.connect()
File "/bb/libexec/ombot/python/lib/python39.zip/http/client.py", line 1426, in connect
self.sock = self._context.wrap_socket(self.sock,
File "/bb/libexec/ombot/python/lib/python39.zip/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/bb/libexec/ombot/python/lib/python39.zip/ssl.py", line 1040, in _create
self.do_handshake()
File "/bb/libexec/ombot/python/lib/python39.zip/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:1129)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/bb/libexec/ombot/dependencies/site-packages/ombot/routes.py", line 92, in slack_init
app_handle.connect()
File "/bb/libexec/ombot/dependencies/site-packages/slack_bolt/adapter/socket_mode/base_handler.py", line 31, in connect
self.client.connect()
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/socket_mode/websocket_client/__init__.py", line 188, in connect
self.wss_uri = self.issue_new_wss_url()
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/socket_mode/client.py", line 48, in issue_new_wss_url
response = self.web_client.apps_connections_open(app_token=self.app_token)
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/client.py", line 930, in apps_connections_open
return self.api_call("apps.connections.open", http_verb="POST", params=kwargs)
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/base_client.py", line 135, in api_call
return self._sync_send(api_url=api_url, req_args=req_args)
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/base_client.py", line 171, in _sync_send
return self._urllib_api_call(
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/base_client.py", line 284, in _urllib_api_call
response = self._perform_urllib_http_request(url=url, args=request_args)
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/base_client.py", line 437, in _perform_urllib_http_request
raise err
File "/bb/libexec/ombot/dependencies/site-packages/slack_sdk/web/base_client.py", line 409, in _perform_urllib_http_request
resp = opener.open(req, timeout=self.timeout) # skipcq: BAN-B310
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 1389, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/bb/libexec/ombot/python/lib/python39.zip/urllib/request.py", line 1349, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error EOF occurred in violation of protocol (_ssl.c:1129)>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/bb/libexec/ombot/python/lib/python39.zip/logging/__init__.py", line 1083, in emit
msg = self.format(record)
File "/bb/libexec/ombot/python/lib/python39.zip/logging/__init__.py", line 927, in format
return fmt.format(record)
File "/bb/libexec/ombot/python/lib/python39.zip/logging/__init__.py", line 663, in format
record.message = record.getMessage()
File "/bb/libexec/ombot/python/lib/python39.zip/logging/__init__.py", line 367, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
Call stack:
File "/bb/libexec/ombot/python/lib/python39.zip/runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/bb/libexec/ombot/python/lib/python39.zip/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/bb/libexec/ombot/dependencies/site-packages/ombot/__main__.py", line 3, in <module>
Factory().get_application()
File "/bb/libexec/ombot/dependencies/site-packages/ombot/routes.py", line 110, in get_application
self.slack_init()
File "/bb/libexec/ombot/dependencies/site-packages/ombot/routes.py", line 95, in slack_init
log.error("unable to init slack app: ",e)
Message: 'unable to init slack app: '
Arguments: (URLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1129)')),)
```
I am not sure if this is an issue with my implementation or an issue with `slack_bolt` or `slack_sdk`. can someone help me understand more about what's going on here ?
Attaching full length log file to the issue:
[logs-from-app-in-slackbot--slackbot-beta--apprel-7f7b9c7458-gvb7b.txt](https://github.com/slackapi/bolt-python/files/7198734/logs-from-app-in-slackbot--slackbot-beta--apprel-7f7b9c7458-gvb7b.txt)
## Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/bolt-python/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2021-09-20T21:08:08Z | 2021-09-23T22:29:24Z | https://github.com/slackapi/bolt-python/issues/470 | [
"question",
"area:adapter",
"need info"
] | 0bprashanthc | 6 |
huggingface/datasets | computer-vision | 7,006 | CI is broken after ruff-0.5.0: E721 | After ruff-0.5.0 release (https://github.com/astral-sh/ruff/releases/tag/0.5.0), our CI is broken due to E721 rule.
See: https://github.com/huggingface/datasets/actions/runs/9707641618/job/26793170961?pr=6983
> src/datasets/features/features.py:844:12: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks | closed | 2024-06-28T05:03:28Z | 2024-06-28T05:25:18Z | https://github.com/huggingface/datasets/issues/7006 | [
"maintenance"
] | albertvillanova | 0 |
microsoft/MMdnn | tensorflow | 522 | Pytorch Emitter has not supported operator [PRelu] | Platform (like ubuntu 16.04/win10):
centos
Python version:
2.7
Source framework with version (like Tensorflow 1.4.1 with GPU):
caffe
Destination framework with version (like CNTK 2.3 with GPU):
pytorch0.4
I convert caffe model to IR successfully, error happens when I tried to convert IR to pytorch code:
Pytorch Emitter has not supported operator [PRelu]
How should I deal with PRelu for IR to pytorch? | open | 2018-12-11T13:21:08Z | 2019-05-14T00:49:17Z | https://github.com/microsoft/MMdnn/issues/522 | [
"enhancement"
] | inkzk | 4 |
miguelgrinberg/python-socketio | asyncio | 84 | transports was set up 'websocket',show 'error during WebSocket handshake: Unexpected response code: 400' | i use sanic 0.41
python3.5 | closed | 2017-03-15T05:02:04Z | 2017-03-23T02:21:56Z | https://github.com/miguelgrinberg/python-socketio/issues/84 | [
"invalid"
] | larryclean | 10 |
wandb/wandb | data-science | 8,919 | [Q]: Train large model with LoRA and PP using One Single Sweep agent | ### Ask your question
We have a scenario to train a large model sharded across 8 GPUs in the same node using a sweep config with LoRA and PP,
so is it possible to use only one sweep agent to train the entire model? If this is doable, how can we achieve it using a sweep?
Note: we tried with sweep agent and accelerate, it does only parallelize the agent.
| closed | 2024-11-19T23:20:50Z | 2024-11-26T10:08:55Z | https://github.com/wandb/wandb/issues/8919 | [
"ty:question",
"c:sweeps"
] | rajeshitshoulders | 3 |
iperov/DeepFaceLab | machine-learning | 613 | when i try to merge SAEHD, my computer is off. | When I try to merge SAEHD, change a merge option,
and press Shift + /, my computer shuts down.
how can i fix ?? | closed | 2020-02-08T15:48:50Z | 2020-02-10T00:31:27Z | https://github.com/iperov/DeepFaceLab/issues/613 | [] | lhw5348 | 2 |
voxel51/fiftyone | data-science | 5,583 | [BUG] Label mapping error when using Grounding-DINO for object detection | ### Describe the problem
I was using the `Grounding-DINO` model for an object detection task. The model works well when only one input class is provided. However, when I specify multiple input classes, the labels in the outputs seem to be mapped incorrectly.
### Code to reproduce issue
```python
import fiftyone.zoo as foz
classes = ["cat", "face"]
model = foz.load_zoo_model(
"zero-shot-detection-transformer-torch",
"IDEA-Research/grounding-dino-tiny",
classes=classes,
)
dataset.apply_model(model, label_field="grounding_dino")
```
# commands and/or screenshots here
Here is the label output:

### System information
- **OS Platform and Distribution** (e.g., Linux Ubuntu 22.04): ubuntu 22.04
- **Python version** (`python --version`): 3.11.11
- **FiftyOne version** (`fiftyone --version`): FiftyOne v1.3.2, Voxel51, Inc.
- **FiftyOne installed from** (pip or source): pip
### Other info/logs
In FiftyOne, the function used to predict is `image_process.post_process_object_detection` in the file `transformers.py`, which differs from the function in the Hugging Face example, `post_process_grounded_object_detection()`, where the integer label is already mapped to text labels.
### Willingness to contribute
The FiftyOne Community encourages bug fix contributions. Would you or another
member of your organization be willing to contribute a fix for this bug to the
FiftyOne codebase?
- [ ] Yes. I can contribute a fix for this bug independently
- [x] Yes. I would be willing to contribute a fix for this bug with guidance
from the FiftyOne community
- [ ] No. I cannot contribute a bug fix at this time
| open | 2025-03-14T16:32:00Z | 2025-03-14T16:32:07Z | https://github.com/voxel51/fiftyone/issues/5583 | [
"bug"
] | yumixx | 0 |
MagicStack/asyncpg | asyncio | 646 | timestamp string literals not allowed in parameterized queries | Let's say I have a table like this:
CREATE TABLE public.instrument (
listing timestamptz NULL
)
And this query:
await conn.execute(
"INSERT INTO public.instrument (listing) VALUES ($1)",
"2020-09-11T08:00:00.000Z"
)
I get this error:
asyncpg.exceptions.DataError: invalid input for query argument $1: '2020-09-11T08:00:00.000Z' (expected a datetime.date or datetime.datetime instance, got 'str')
I am surprised that this is not allowed?! The [postgres documentation](https://www.postgresql.org/docs/current/datatype-datetime.html) says:
> 8.5.1. Date/Time Input
>
> Date and time input is accepted in almost any reasonable format, including ISO 8601, SQL-compatible, traditional POSTGRES, and others.
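The obvious workaround sketch (an assumption: parse the string in Python and pass a timezone-aware `datetime`, which is what asyncpg expects for `timestamptz`):
```python
from datetime import datetime

listing = datetime.fromisoformat("2020-09-11T08:00:00.000+00:00")  # a "Z" suffix needs Python 3.11+
await conn.execute(
    "INSERT INTO public.instrument (listing) VALUES ($1)",
    listing,
)
```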
| closed | 2020-10-29T16:50:33Z | 2020-10-29T17:27:27Z | https://github.com/MagicStack/asyncpg/issues/646 | [] | andersea | 5 |
nuvic/flask_for_startups | pytest | 2 | Typo in validator attribute | The `required` attribute is spelled wrong.
https://github.com/nuvic/flask_for_startups/blob/01b9d4a53055b4294645ed79bbc24c319eb75322/app/utils/validators.py#L27 | closed | 2022-05-22T04:34:31Z | 2022-05-22T15:28:05Z | https://github.com/nuvic/flask_for_startups/issues/2 | [] | niccolomineo | 1 |
microsoft/nni | machine-learning | 5,743 | # get parameters from tuner RECEIVED_PARAMS = nni.get_next_parameter() | **Describe the issue**:
# get parameters from tuner
RECEIVED_PARAMS = nni.get_next_parameter()
return {},is nan;
How to obtain the best parameters from the training results
example:https://github.com/microsoft/nni/tree/master/examples/trials/sklearn/classification
Thanks!
**Environment**:
- NNI version:
- Training service (local|remote|pai|aml|etc):
- Client OS:
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?:
- Is running in Docker?:
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
<!--
Where can you find the log files:
LOG: https://github.com/microsoft/nni/blob/master/docs/en_US/Tutorial/HowToDebug.md#experiment-root-director
STDOUT/STDERR: https://nni.readthedocs.io/en/stable/reference/nnictl.html#nnictl-log-stdout
-->
**How to reproduce it?**: | open | 2024-02-12T17:00:32Z | 2024-02-12T17:00:32Z | https://github.com/microsoft/nni/issues/5743 | [] | whao159107 | 0 |
ageitgey/face_recognition | machine-learning | 1,336 | How can I improve the recognition speed? |
### Description
I tried the demo of
Recognize faces in live video using your webcam - Simple / Slower Version (Requires OpenCV to be installed)
and
Recognize faces in live video using your webcam - Faster Version (Requires OpenCV to be installed)
Both of which worked well and able to recognize the faces.
But the fps seems very low.
I wonder how can I improve the recognition speed? By using GPUs or what.
Thx so much for help!
| open | 2021-06-30T03:06:28Z | 2021-07-27T13:36:54Z | https://github.com/ageitgey/face_recognition/issues/1336 | [] | JackJWWong | 2 |
deepspeedai/DeepSpeed | deep-learning | 6,714 | [BUG] pipeline parallelism+fp16+moe isn't working | **Describe the bug**
My model uses DeepSpeed `PipelineModule(num_stages=4)` to split into 4 parts, and my `deepspeed.moe.layer.MoE` layer is only placed in pipeline stage 1. When my model runs `train_batch`, the program gets stuck; the specific issue occurs in the FP16_Optimizer step.
**Here is our deepspeed config**
```
{
"train_batch_size": 4,
"train_micro_batch_size_per_gpu" 1,
"fp16": {
"enabled": true,
"auto_cast": true
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": 0.001,
"betas": [
0.9,
0.95
],
"weight_decay": 0.05
}
},
"zero_optimization": {
"stage": 0
}
}
```
**Source code with issues**
My pipeline_parallel_world_size is 4, so the code enters the following branch, but my MoE layer is only placed in pipeline stage 1, so the all_reduce makes the program get stuck. If I delete this code, it runs successfully.
https://github.com/microsoft/DeepSpeed/blob/10ba3dde84d00742f3635c48db09d6eccf0ec8bb/deepspeed/runtime/utils.py#L892-L893
I don't know why all_reduce needs to be done here, it doesn't seem meaningful
| open | 2024-11-05T12:39:37Z | 2024-11-27T19:56:11Z | https://github.com/deepspeedai/DeepSpeed/issues/6714 | [] | NeferpitouS3 | 5 |
deepspeedai/DeepSpeed | deep-learning | 5,787 | [REQUEST]How to set Ulysses in deepspeed config json? | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2024-07-22T09:14:21Z | 2024-10-30T08:05:30Z | https://github.com/deepspeedai/DeepSpeed/issues/5787 | [
"enhancement"
] | xs1997zju | 2 |
0b01001001/spectree | pydantic | 158 | How to use spectry with aiohttp | closed | 2021-07-12T09:36:33Z | 2021-07-12T09:41:27Z | https://github.com/0b01001001/spectree/issues/158 | [] | RysnikM | 1 |
|
graphdeco-inria/gaussian-splatting | computer-vision | 582 | What is the meaning of `add_densification_stats `? | https://github.com/graphdeco-inria/gaussian-splatting/blob/2eee0e26d2d5fd00ec462df47752223952f6bf4e/scene/gaussian_model.py#L405-L407
A little confused about the `xyz_gradient_accum` and `viewspace_point_tensor` variable. What's the meaning of `xyz_gradient_accum`? And I think `xyz` is in world space, but viewspace_point is in 2D screen space, so what's the relation of the two? | closed | 2023-12-29T06:39:00Z | 2024-09-10T06:37:20Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/582 | [] | elenacliu | 3 |
PokemonGoF/PokemonGo-Bot | automation | 5,923 | Unrecognized MoveSrt | The bot tells me to create an issue to report this bug:
```
2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Bug Bite, Swift] for #165 Ledyba, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Feint Attack, Drill Peck] for #198 Murkrow, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Peck, Drill Peck] for #177 Natu, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Peck, Psyshock] for #177 Natu, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Feint Attack, Aerial Ace] for #163 Hoothoot, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Peck, Drill Peck] for #198 Murkrow, please update info in pokemon.json and create issue/PR
[2017-02-19 05:16:56] [ Pokemon] [ERROR] Unexpected moveset [Feint Attack, Aerial Ace] for #163 Hoothoot, please update info in pokemon.json and create issue/PR
```
It seems that some movements must be added to pokemon.json list | closed | 2017-02-19T10:20:00Z | 2017-02-19T10:21:35Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5923 | [] | barreeeiroo | 1 |
ccxt/ccxt | api | 25,300 | ccxt.pro loadMarkets() request error in electron | ### Operating System
macOS \ Electron
### Programming Languages
JavaScript
### CCXT Version
4.4.60
### Description
_No response_
### Code
```js
const { pro } = require('ccxt');
const { okx } = pro;
async function initExchange() {
const exchange = new okx({
httpProxy: 'http://127.0.0.1:7890',
wsProxy: 'http://127.0.0.1:7890',
});
// 沙盒模式
exchange.setSandboxMode(true);
try {
const markets = await exchange.loadMarkets();
console.log('markets', markets);
} catch (error) {
console.error('loadMarkets 出错:', error);
}
}
initExchange();
```
The code can run normally in node, but it will throw an error in the electron main process.
```ts
import { app } from 'electron';
import { pro, Exchange } from 'ccxt';
const { okx } = pro;
app.on('ready', async () => {
await initExchange();
});
```
ERROR:
```
[1] loadMarkets 出错: RequestTimeout: okx GET https://www.okx.com/api/v5/public/instruments?instType=SPOT request timed out (10000 ms)
[1] at okx.fetch (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:60923:23)
[1] at async okx.fetch2 (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:64452:24)
[1] at async okx.request (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:64474:16)
[1] at async okx.fetchMarketsByType (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:295295:26)
[1] at async Promise.all (index 0)
[1] at async okx.fetchMarkets (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:295122:20)
[1] at async okx.loadMarketsHelper (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:61006:25)
[1] at async initExchange (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:42899:25)
[1] at async App.<anonymous> (/Users/yogo/project/maxmeng.top/cryptobot/.erb/dll/main.bundle.dev.js:42883:5)
``` | closed | 2025-02-18T03:05:57Z | 2025-02-21T08:16:45Z | https://github.com/ccxt/ccxt/issues/25300 | [] | maxmeng93 | 6 |
labmlai/annotated_deep_learning_paper_implementations | pytorch | 178 | Unable to run In-paint images script |
I'm unable to run the "in_paint.py" script. It is throwing the following error: Could anyone please help me with it?
`RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 6 but got size 5 for tensor number 1 in the list.` | closed | 2023-04-19T08:13:50Z | 2023-04-25T12:03:36Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/178 | [] | Vikramank | 1 |
cvat-ai/cvat | tensorflow | 8,461 | cvat secondary development | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
May I ask if cvat can be used for local secondary development under Windows 10? If so, what tools do I need to download, such as vscode, pyhton3.8, postgreSQL, etc. Do I need to download docker?
### Describe the solution you'd like
May I ask if cvat can be used for local secondary development under Windows 10? If so, what tools do I need to download, such as vscode, pyhton3.8, postgreSQL, etc. Do I need to download docker?
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | closed | 2024-09-20T15:57:48Z | 2024-09-23T06:37:33Z | https://github.com/cvat-ai/cvat/issues/8461 | [
"enhancement"
] | zhanghaihangandliuli | 1 |
mouredev/Hello-Python | fastapi | 470 | 2025 latest Tenglong company account registration game website | The app's download process is very simple: users just download and install it from the official website to register and start using the platform's features. The download requires no complicated steps, and the app is compatible with many devices, so every player can get started quickly. Specializing in: live baccarat ♠️ 🐯 niu-niu 🐮
24-hour 🕐 phone/PC sync, unlimited deposits and withdrawals 💰, agents wanted
Deposits and withdrawals: USDT, RMB, Myanmar kyat,
Large amounts no problem; only clean, safe funds; in it for long-term business,
Macau AG gaming company registration official site 【376838.com】
Copy into a browser, open it, register, download the APP and log in
Macau AG games 【sports games
Telegram 🤜(@lc15688)
To open an account, contact online customer service on WeChat: zdn200 or WeChat: xiaolu460570
 | closed | 2025-03-06T05:35:52Z | 2025-03-10T13:47:05Z | https://github.com/mouredev/Hello-Python/issues/470 | [] | khyl55 | 0 |
fastapi/sqlmodel | pydantic | 232 | How to select specific columns from database table? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import Optional
from sqlmodel import Field, Session, SQLModel, create_engine, select
class Hero(SQLModel, table=True):
id: Optional[int] = Field(default=None, primary_key=True)
name: str
secret_name: str
age: Optional[int] = None
sqlite_file_name = "database.db"
sqlite_url = f"sqlite:///{sqlite_file_name}"
engine = create_engine(sqlite_url, echo=True)
def create_db_and_tables():
SQLModel.metadata.create_all(engine)
def create_heroes():
hero_1 = Hero(name="Deadpond", secret_name="Dive Wilson")
hero_2 = Hero(name="Spider-Boy", secret_name="Pedro Parqueador")
hero_3 = Hero(name="Rusty-Man", secret_name="Tommy Sharp", age=48)
with Session(engine) as session:
session.add(hero_1)
session.add(hero_2)
session.add(hero_3)
session.commit()
def select_heroes():
with Session(engine) as session:
session.exec(select(Hero)).all()
def main():
create_db_and_tables()
create_heroes()
select_heroes()
if __name__ == "__main__":
main()
```
### Description
In the [docs](https://sqlmodel.tiangolo.com/tutorial/select/#select-fewer-columns) it is mentioned that it is possible to select only specific columns from the table. However, the example is only given in `SQL`, not in `SQLModel`.
So how can I implement this query in SQLModel?
```
SELECT id, name
FROM hero
```
Finally, I also don't want to hardcode the columns of interest; the column name(s) will be stored in a variable.
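For reference, this is roughly the kind of statement I am after (a minimal sketch of my current attempt, assuming SQLModel's `select` passes column expressions straight through to SQLAlchemy — I am not sure this is the intended usage):
```python
from sqlmodel import Session, select

def select_hero_columns(column_names=("id", "name")):
    # Build the column list dynamically instead of hardcoding Hero.id, Hero.name
    columns = [getattr(Hero, name) for name in column_names]
    with Session(engine) as session:
        # session.exec(...) should yield row tuples such as (1, "Deadpond")
        return session.exec(select(*columns)).all()
```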
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.8.1
### Additional Context
_No response_ | closed | 2022-02-01T10:13:47Z | 2024-11-24T14:29:41Z | https://github.com/fastapi/sqlmodel/issues/232 | [
"question"
] | christianholland | 8 |
horovod/horovod | pytorch | 3,384 | Mixed Precision Training Error, Missing ranks in validation. | **Environment:**
Horovod v0.21.0:
Available Frameworks:
[X] TensorFlow
[ ] PyTorch
[ ] MXNet
Available Controllers:
[X] MPI
[ ] Gloo
Available Tensor Operations:
[X] NCCL
[ ] DDL
[ ] CCL
[X] MPI
[ ] Gloo
TF 2.4 Successfully opened dynamic library libcudart.so.11.0. 2.4.0
**Bug report:**
During the training process, if I turn on mixed precision in TF with this code:
```python
policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16", loss_scale="dynamic")
tf.keras.mixed_precision.experimental.set_policy(policy)
```
then it keeps returning results like the following (I am using 2 machines with 2 GPUs each and a batch size of 32). Also, it does not throw this error immediately; it normally appears only after about 10 epochs.
```python
[2022-01-26 05:54:49.745777: W /tmp/pip-install-ypgzgfmp/horovod_f2972a35cff84b9fb159b78871eab48a/horovod/common/stall_inspector.cc:105] One or more tensors were submitted to be reduced, gathered or broadcasted by subset of ranks and are waiting for remainder of ranks for more than 60 seconds. This may indicate that different ranks are trying to submit different tensors or that only subset of ranks is submitting tensors, which will cause deadlock.
Missing ranks:
0: [cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond/then/_2867/DistributedAdam_Allreduce/cond/HorovodAllreduce_DistributedAdam_Allreduce_cond_Cast_1_0, cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond_1/then/_2875/DistributedAdam_Allreduce/cond_1/HorovodAllreduce_DistributedAdam_Allreduce_cond_1_Cast_1_0, cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond_10/then/_2947/DistributedAdam_Allreduce/cond_10/HorovodAllreduce_DistributedAdam_Allreduce_cond_10_Cast_1_0, cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond_100/then/_3667/DistributedAdam_Allreduce/cond_100/HorovodAllreduce_DistributedAdam_Allreduce_cond_100_Cast_1_0, cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond_101/then/_3675/DistributedAdam_Allreduce/cond_101/HorovodAllreduce_DistributedAdam_Allreduce_cond_101_Cast_1_0, cond_1/then/_400/cond_1/Adam/PartitionedCall/DistributedAdam_Allreduce/cond_102/then/_3683/DistributedAdam_Allreduce/cond_102/HorovodAllreduce_DistributedAdam_Allreduce_cond_102_Cast_1_0 ...]
1: [HorovodAllreduce]
2: [HorovodAllreduce]
3: [HorovodAllreduce]
```
But if I turn off mixed precision in TF, training proceeds as expected.
Here is my command-line code:
```shell
#!/bin/bash
mpirun -np 4 -H ${host A}:2, ${host B}:2 \
--allow-run-as-root -bind-to none -map-by slot -mca \
plm_rsh_args "-p 20385" -x NCCL_SHM_DISABLE=1 -x NCCL_IB_DISABLE=1 \
-x NCCL_DEBUG=INFO -x LD_LIBRARY_PATH -x PATH \
-x NCCL_SOCKET_IFNAME=enp72s0,enp3s0,enp73s0 -mca pml ob1 -mca btl ^openib \
-mca btl_tcp_if_include 10.0.0.0/16 ./train_test.sh
``` | open | 2022-01-26T05:58:41Z | 2022-08-29T17:30:54Z | https://github.com/horovod/horovod/issues/3384 | [
"bug"
] | QW-Is-Here | 1 |
freqtrade/freqtrade | python | 10,707 | The set leverage keeps changing intermittently. | <!--
Have you searched for similar issues before posting it?
If you have discovered a bug in the bot, please [search the issue tracker](https://github.com/freqtrade/freqtrade/issues?q=is%3Aissue).
If it hasn't been reported, please create a new issue.
Please do not use bug reports to request new features.
-->
## Describe your environment
* Operating system: window 10
* Python Version: Python 3.12.6
* CCXT version: ccxt==4.4.5
* Freqtrade Version: freqtrade docker-2024.9-dev-3bbc6cba
## Describe the problem:
```python
lev = 10

def leverage(self, pair: str, current_time: datetime, current_rate: float,
             proposed_leverage: float, max_leverage: float, entry_tag: Optional[str],
             side: str, **kwargs) -> float:
    return self.lev

def confirm_trade_exit(self, pair: str, trade: Trade, order_type: str, amount: float,
                       rate: float, time_in_force: str, exit_reason: str,
                       current_time: datetime, **kwargs) -> bool:
    spantime = current_time - trade.open_date_utc
    IsShort = "short" if trade.is_short is True else "long"
    cur_profit = trade.calc_profit_ratio(rate)
    logger.info(f"{self.version()}:exit-> {trade.pair} {current_time.strftime('%Y-%m-%d %H:%M')}:({spantime.total_seconds() /(60.0*60):0.1f}h) ID={trade.id} cur={rate:0.4f} open ={trade.open_rate:0.4f} lev={trade.leverage:0.1f} {exit_reason} {IsShort} profit={ cur_profit*100:0.2f}% ")
    return True
```
During backtesting, after setting leverage and entering a position, when checking trade.leverage in confirm_trade_exit, the leverage sometimes appears different from what was initially set.
The leverage should be set to 10 (lev=10), but sometimes it appears differently when checked in confirm_trade_exit.
The result of trade.calc_profit_ratio(rate) is being calculated using the modified leverage instead of the initially set leverage.
### Relevant code exceptions or logs
2024-09-25 20:48:43,819 - skzV9 - INFO - skzV9T1_0925_1:exit-> ROSE/USDT:USDT 2024-05-29 08:00:(8.0h) ID=821 cur=0.0941 open =0.0921 long atr_roi lev=4.0 profit=8.11%
2024-09-25 20:48:43,860 - skzV9 - INFO - skzV9T1_0925_1:exit-> LINK/USDT:USDT 2024-05-29 08:03:(8.1h) ID=822 cur=18.2230 open =18.5460 long atr_sl lev=10.0 profit=-18.66%
2024-09-25 20:48:44,536 - skzV9 - INFO - skzV9T1_0925_1:exit-> AR/USDT:USDT 2024-05-29 10:44:(10.7h) ID=823 cur=39.1820 open =38.1300 short atr_sl lev=5.0 profit=-14.21%
2024-09-25 20:48:44,561 - skzV9 - INFO - skzV9T1_0925_1:exit-> STORJ/USDT:USDT 2024-05-29 08:02:(8.0h) ID=824 cur=0.5617 open =0.5717 long atr_sl lev=4.0 profit=-7.47% | closed | 2024-09-25T12:29:52Z | 2024-09-25T16:14:13Z | https://github.com/freqtrade/freqtrade/issues/10707 | [
"Question",
"Non-spot"
] | isonex | 2 |
vitalik/django-ninja | pydantic | 927 | [BUG] foreign key with or without `_id` | **Describe the bug**
The serialized name of the same foreign key field changes (with or without `_id`) when another API is added.
**Versions (please complete the following information):**
- Python version: 3.11.1
- Django version: 4.2.5
- Django-Ninja version: 0.22.2
- Pydantic version: 1.10.12
The minimal setup is:
```python
class TestModel(models.Model):
department = models.ForeignKey(Department, on_delete=models.CASCADE)
class TestModelSchema(ModelSchema):
class Config:
model = models.TestModel
model_fields = "__all__"
@router.get("/test_model", response=List[TestModelSchema])
def get_test_model(request):
return models.TestModel.objects.all()
```
The schema in the docs page says the expected result is something like:
```
[
{
"id": 0,
"department": 0
}
]
```
But when I add another API:
```py
@router.put("/test_model/{test_model_id}", response=TestModelSchema)
def update_test_model(request, test_model_id: int, payload: TestModelSchema):
obj = get_object_or_404(models.TestModel, pk=test_model_id)
for attr, value in payload.dict().items():
setattr(obj, attr, value)
obj.save()
return obj
```
the `department` field changes to `department_id` in the docs for both APIs, while the real GET API result is still `department`, which is really confusing.
| open | 2023-11-17T02:35:03Z | 2025-01-10T05:34:55Z | https://github.com/vitalik/django-ninja/issues/927 | [] | sigma-plus | 4 |
marshmallow-code/apispec | rest-api | 344 | Error in documentation about contributing | Hello,
On the page https://apispec.readthedocs.io/en/stable/contributing.html, the documentation indicates the following command:
$ pip install -r dev-requirements.txt
But there is no `dev-requirements.txt` file anymore.
| closed | 2018-11-13T13:17:23Z | 2018-11-13T20:49:35Z | https://github.com/marshmallow-code/apispec/issues/344 | [] | buxx | 2 |
CPJKU/madmom | numpy | 210 | TypeError when loading 24bit .wav files | `scipy` < 0.16 does not raise a `ValueError` for .wav files with 24-bit depth, thus `numpy` fails to mmap the data and raises a `TypeError`.
Possible solutions:
- require `scipy` > 0.16
- catch the `TypeError` in line [457 of madmom.audio.signal](https://github.com/CPJKU/madmom/blob/master/madmom/audio/signal.py#L457) and raise a `ValueError` instead.
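A rough sketch of what the second option could look like (the function and variable names here are assumptions, not the actual code around line 457):
```python
from scipy.io import wavfile

def load_wave_file(filename):
    """Load a .wav file, rejecting formats scipy/numpy cannot mmap (e.g. 24 bit)."""
    try:
        sample_rate, signal = wavfile.read(filename, mmap=True)
    except TypeError:
        # old scipy (< 0.16) lets 24 bit files through and numpy's mmap then fails
        # with a TypeError; re-raise it as the expected ValueError instead
        raise ValueError('unsupported bit depth in .wav file: %s' % filename)
    return signal, sample_rate
```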
| closed | 2016-08-31T08:00:08Z | 2017-03-02T08:10:59Z | https://github.com/CPJKU/madmom/issues/210 | [] | superbock | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 96 | FP16 support | Hello,
If the model's parameters and input tensor are of type torch.float16, the following error occurs:
```
/site-packages/pytorch_grad_cam/base_cam.py in forward(self, input_tensor, target_category, eigen_smooth)
81 result = []
82 for img in cam:
---> 83 img = cv2.resize(img, input_tensor.shape[-2:][::-1])
84 img = img - np.min(img)
85 img = img / np.max(img)
TypeError: Expected Ptr<cv::UMat> for argument 'src'
```
This is because cv2 cannot work with NumPy arrays of type float16. It would be easy to fix by casting the output weights to float32 when converting to NumPy.
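A minimal, self-contained sketch of the cast I have in mind (the shapes and names are made up, not taken from `base_cam.py`):
```python
import cv2
import numpy as np

cam = np.random.rand(1, 7, 7).astype(np.float16)   # stand-in for the float16 CAM output
target_size = (224, 224)

for img in cam:
    img = np.float32(img)                 # cast up: cv2.resize rejects float16 input
    img = cv2.resize(img, target_size)
    img = img - np.min(img)
    img = img / np.max(img)
```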
| closed | 2021-06-01T06:51:44Z | 2021-07-09T14:18:49Z | https://github.com/jacobgil/pytorch-grad-cam/issues/96 | [] | Animatory | 0 |
open-mmlab/mmdetection | pytorch | 11,512 | How to get the current training epoch in a custom loss function, as this parameter is needed in the loss calculation | open | 2024-03-04T10:46:41Z | 2024-03-04T10:46:57Z | https://github.com/open-mmlab/mmdetection/issues/11512 | [] | jstacer1 | 0 |
|
google-deepmind/sonnet | tensorflow | 272 | Laplace smoothing for EMA codebook update | Hi,
I understand that to calculate the normalized weights for the embeddings we divide by the Laplace smoothed cluster sizes as seen in the code [here](https://github.com/google-deepmind/sonnet/blob/v2/sonnet/src/nets/vqvae.py).
However, for the embeddings whose cluster sizes are zero, the Laplace smoothing replaces it with a very small value (some function of epsilon). When these updated cluster sizes are used to normalize (by dividing the running ema_dw with the updated cluster size) and update the embeddings, the corresponding embeddings with zero cluster sizes are updated to a very high value. These updated embeddings then have an ever lower probability of being chosen in the future.
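To make the concern concrete, here is a rough numerical sketch of the update I am describing (the smoothing expression is paraphrased from `vqvae.py` and the numbers are made up, so treat it as an illustration rather than the exact code):
```python
import numpy as np

epsilon = 1e-5
num_embeddings = 512
n = 1024.0                  # total (EMA) count over all codes

ema_cluster_size = 1e-4     # a code that has not been selected for a long time
ema_dw = 0.02               # its EMA of summed encoder outputs has not fully decayed yet

# Laplace-smoothed cluster size, roughly as in the EMA update
smoothed = (ema_cluster_size + epsilon) / (n + num_embeddings * epsilon) * n
print(smoothed)             # ~1.1e-4: still a tiny denominator

# The embedding is then set to ema_dw / smoothed
print(ema_dw / smoothed)    # ~180: a very large value for a "dead" code
```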
Is my understanding of this issue correct, or am I missing something? If it is indeed correct, is there a way to mitigate this and get a higher perplexity score?
Thanks! | open | 2023-11-15T06:11:19Z | 2023-11-15T06:11:19Z | https://github.com/google-deepmind/sonnet/issues/272 | [] | VaishnaviSPatil | 0 |
MagicStack/asyncpg | asyncio | 333 | greenplum->copy_records_to_table support needed | * **asyncpg version**:0.17.0
* **Greenplum version**:5.7
* **Python version**:3.7
* **Platform**:Windows 10/windows 10 WSL
* **Do you use pgbouncer?**:no
* **Did you install asyncpg with pip?**:yes
* **If you built asyncpg locally, which version of Cython did you use?**:n/a
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**:n/a
<!-- Enter your issue details below this comment. -->
I tried to run copy_records_to_table against Greenplum 5.7, which reports the following error:
File "D:\Anaconda3\envs\DataLightFire37\lib\site-packages\asyncpg\connection.py", line 848, in _copy_in_records
copy_stmt, None, None, records, intro_stmt, timeout)
File "asyncpg\protocol\protocol.pyx", line 504, in copy_in
asyncpg.exceptions.PostgresSyntaxError: syntax error at or near "("
Running the same code against PostgreSQL 9.2 passes.
Also, execute and executemany work fine with Greenplum. | open | 2018-07-30T13:40:31Z | 2024-01-13T07:56:33Z | https://github.com/MagicStack/asyncpg/issues/333 | [] | HuangKaibo2017 | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 503 | help needed urgent please | I used the **feature extractor** from torchreid with the '**mudeep**' model; it gave a 4096-dimensional tensor for each of my 2 images. I want to compare those two tensors, so I used compute_distance_matrix from **distance.py**, but it gave me an error saying a '**tensor of dimension 2 is needed**'. As I am new to tensors, **how should I proceed with comparing two images**?
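For context, this is roughly what I am trying (a sketch only — the constructor arguments, file names, and the metric choice are assumptions on my part):
```python
import torch
from torchreid.utils import FeatureExtractor
from torchreid import metrics

extractor = FeatureExtractor(model_name='mudeep', device='cpu')   # weights path omitted here

features = extractor(['image1.jpg', 'image2.jpg'])   # expected shape: (2, 4096)

# compute_distance_matrix wants 2-D inputs, so keep (or restore) the batch dimension
f1 = features[0].unsqueeze(0)                         # (1, 4096)
f2 = features[1].unsqueeze(0)                         # (1, 4096)
dist = metrics.compute_distance_matrix(f1, f2, metric='euclidean')
print(dist)                                           # a (1, 1) matrix; smaller = more similar
```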
please help please ,
thank you,
sorry for bad english | open | 2022-05-10T17:28:58Z | 2022-05-10T17:28:58Z | https://github.com/KaiyangZhou/deep-person-reid/issues/503 | [] | luciferstar66 | 0 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,848 | spam | ### Technical discussion
``
| closed | 2025-02-17T03:50:01Z | 2025-02-17T12:44:45Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16848 | [] | RobinYang11 | 1 |
gunthercox/ChatterBot | machine-learning | 1,458 | ImportError: No module named storage | Hello Team,
I am getting the issue below.
**Traceback (most recent call last):
File "test1.py", line 1, in <module>
from chatterbot import ChatBot
File "/opt/PythonLibs/ChatterBot/chatterbot/__init__.py", line 4, in <module>
from .chatterbot import ChatBot
File "/opt/PythonLibs/ChatterBot/chatterbot/chatterbot.py", line 2, in <module>
from chatterbot.storage import StorageAdapter
ImportError: No module named storage**
| closed | 2018-10-16T08:14:51Z | 2020-01-18T18:16:32Z | https://github.com/gunthercox/ChatterBot/issues/1458 | [
"answered"
] | dilip90 | 16 |
dgtlmoon/changedetection.io | web-scraping | 1,944 | [feature] Add a new processor plugin for "whois" information | Now that #1941 makes everything cleaner, we can go back to solving pluggable 'processors'; a new one could be a "whois" plugin:
https://github.com/richardpenman/whois
This would just take the domain name from the URL and pass it to the whois processor; these should probably live as separate PyPI packages.
`changedetectionio-whois` or something could be nice
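A rough sketch of what the core of such a processor could do (the package usage is assumed from the linked repo, and the snippet is not tied to changedetection.io's actual processor interface):
```python
from urllib.parse import urlparse
import whois  # the library linked above, published on PyPI as python-whois

def fetch_whois_text(url: str) -> str:
    """Take the watch URL, extract the hostname and return whois data as diffable text."""
    hostname = urlparse(url).hostname or url
    record = whois.whois(hostname)
    # render as sorted key/value lines so the diff engine sees stable, comparable text
    return "\n".join(f"{key}: {record.get(key)}" for key in sorted(record.keys()))

print(fetch_whois_text("https://changedetection.io/some/page"))
```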
| open | 2023-11-07T18:04:12Z | 2024-07-10T11:24:17Z | https://github.com/dgtlmoon/changedetection.io/issues/1944 | [
"enhancement",
"plugin-candidate"
] | dgtlmoon | 1 |
fastapi-users/fastapi-users | asyncio | 510 | Problem get user | module 'fastapi_users' has no attribute 'current_user'
user: User = Depends(fastapi_users.current_user())
fastapi-users 5.1.0 | closed | 2021-02-17T18:55:52Z | 2021-02-19T20:02:32Z | https://github.com/fastapi-users/fastapi-users/issues/510 | [] | ScrimForever | 5 |
huggingface/datasets | tensorflow | 7,033 | `from_generator` does not allow to specify the split name | ### Describe the bug
I'm building train, dev, and test using `from_generator`; however, in all three cases, the logger prints `Generating train split:`
It's not possible to change the split name since it seems to be hardcoded: https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/generator/generator.py
### Steps to reproduce the bug
```
In [1]: from datasets import Dataset
In [2]: def gen():
...: yield {"pokemon": "bulbasaur", "type": "grass"}
...:
In [3]: ds = Dataset.from_generator(gen)
Generating train split: 1 examples [00:00, 133.89 examples/s]
```
### Expected behavior
It should be possible to specify any split name
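For example, something along these lines (a hypothetical signature — `from_generator` does not accept this today, which is exactly the request):
```python
from datasets import Dataset

def gen():
    yield {"pokemon": "bulbasaur", "type": "grass"}

# desired: let the caller name the split instead of the hardcoded "train"
ds = Dataset.from_generator(gen, split="validation")
```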
### Environment info
- `datasets` version: 2.19.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- `huggingface_hub` version: 0.23.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.3
- `fsspec` version: 2023.10.0 | closed | 2024-07-09T07:47:58Z | 2024-07-26T12:56:16Z | https://github.com/huggingface/datasets/issues/7033 | [] | pminervini | 2 |
lundberg/respx | pytest | 277 | Incompatibility with httpx 0.28.0 | When upgrading httpx from 0.27.2 to 0.28.0, the mocks that were previously correctly detected as mocked, stop being considered as so and raise an `AllMockedAssertionError`. | closed | 2024-11-29T07:30:54Z | 2024-12-19T22:16:49Z | https://github.com/lundberg/respx/issues/277 | [] | PriOliveira | 4 |
pyro-ppl/numpyro | numpy | 1,522 | Example: Predator-Prey Model gives weird results when using num_chains > 2 | Hello, I just tried the example, [Example: Predator-Prey Model](https://num.pyro.ai/en/latest/examples/ode.html), with `adapt_step_size=False` and `num_chains=2`.
Also, I added the lines to enable multiple draws on the CPU.
```
numpyro.set_host_device_count(8)
print(jax.local_device_count())
```
When I plot the trace of sigma, weird patterns are shown as follows
```
sigma = mcmc.get_samples()["sigma"]
plt.plot(sigma[:, 0])
plt.plot(sigma[:, 1])
plt.show()
```

It seems the second draw wasn't performed. Any suggestions would be helpful.
Thank you. | closed | 2023-01-06T08:03:58Z | 2023-01-07T04:24:27Z | https://github.com/pyro-ppl/numpyro/issues/1522 | [
"question"
] | Sangwon91 | 2 |
Lightning-AI/pytorch-lightning | data-science | 19,907 | I think it's absolutely necessary to add docs or tutorials for handling the case where we return multiple loaders from the test_dataloaders() method | ### 📚 Documentation
I know that LightningDataModule supports returning multiple dataloaders, either as a list or as a dict of dataloaders. But how do I handle the test in LightningModule?
We always need to print information about the computation for each test loader, save the figures drawn for each dataloader, and do some calculation over the whole dataset of each test dataloader. But how do we do these things in LightningModule's test hooks? The docs only say that Lightning automatically handles the combination of multiple dataloaders, but how can we compute per dataloader and record information for each dataloader?
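For context, this is the kind of pattern I am looking for — a sketch based on the `dataloader_idx` argument that `test_step` receives when multiple loaders are returned (I am not sure whether this is the recommended way):
```python
import torch
from collections import defaultdict
import lightning.pytorch as pl

class MyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)
        self.per_loader_outputs = defaultdict(list)

    def test_step(self, batch, batch_idx, dataloader_idx=0):
        x, y = batch
        loss = torch.nn.functional.mse_loss(self.layer(x), y)
        # keep results separated per test dataloader
        self.per_loader_outputs[dataloader_idx].append(loss.detach())
        self.log(f"test_loss/dataloader_{dataloader_idx}", loss)

    def on_test_end(self):
        # whole-dataset statistics and figures can be produced per loader here
        for idx, losses in self.per_loader_outputs.items():
            print(f"dataloader {idx}: mean loss {torch.stack(losses).mean():.4f}")
```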
cc @borda | open | 2024-05-25T11:01:27Z | 2024-05-25T11:01:48Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19907 | [
"docs",
"needs triage"
] | onbigion13 | 0 |
aio-libs/aiomysql | asyncio | 311 | import errror | 

I got an import error when running aiomysql, but it seems I already have PyMySQL installed.
| closed | 2018-07-06T03:09:42Z | 2018-07-06T20:20:09Z | https://github.com/aio-libs/aiomysql/issues/311 | [] | hanke0 | 3 |
fastapi/sqlmodel | sqlalchemy | 507 | Relationship from Model Data and inheritance | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from typing import List, Optional
from fastapi import Body, Depends, Response
from sqlmodel import Field, Session, SQLModel

class ResponsesDTO(SQLModel):
    code: int
    response: str

class SurveyDTO(SQLModel):
    responses: List[ResponsesDTO] = []
    # other fields..

class SurveyTable(SurveyDTO, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # how to manage relationship from DTO?

class ResponsesTable(ResponsesDTO, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    # how to manage relationship from DTO?

# In FastAPI (app, endpoint_paths and get_session are defined elsewhere in my project):
@app.post(endpoint_paths.SURVEY)
def post_survey(session: Session = Depends(get_session),
                survey: SurveyDTO = Body(..., embed=True)) -> Response:
    ...  # save the survey
```
### Description
I am trying to create a one-to-many relationship through inheritance, from a model class to a table class.
I don't understand how to create the relationship with the List[ResponsesDTO] in the table without duplicating code.
Maybe I am missing something?
Thank you for your help :)
### Operating System
Windows
### Operating System Details
_No response_
### SQLModel Version
0.08
### Python Version
3.8
### Additional Context
Seems related to #18 and #469 | open | 2022-11-23T11:20:44Z | 2022-11-24T10:47:39Z | https://github.com/fastapi/sqlmodel/issues/507 | [
"question"
] | Franksign | 3 |
babysor/MockingBird | deep-learning | 670 | Started training on the CPU; how do I switch to GPU training now? I have tried all kinds of methods and none work | Using model: Tacotron
Using device: cpu | closed | 2022-07-21T15:34:15Z | 2022-07-27T09:57:28Z | https://github.com/babysor/MockingBird/issues/670 | [] | b95595 | 1 |
zalandoresearch/fashion-mnist | computer-vision | 186 | Benchmark: MLP 4x25 hidden layers, 88% in 100 epochs | https://github.com/jklw10/dnns-from-scratch-in-zig/tree/benchmark-submission
Simply create the folder ``data/mnist-fashion/`` and extract the datasets into it.
Run with ``zig build run -Doptimize=ReleaseFast``
Zig version ``0.14.0-dev.1860+2e2927735``
(Might run with 0.13.0)
The same setup is able to get to 97.2% on the MNIST digits set (different configuration, likely overfit; 98% with 4x100 neuron hidden layers).
What's wacky about it:
Weights are forcibly normalized (and adjusted): ``(grads[i]-avg(grads)) / (max(grads)-min(grads)) * (2-(2/inputSize))``
Gradients are forcibly normalized (and adjusted): ``norm(weight) * (1+(2/inputSize))``
Gradients are biased to move the weight towards that weight's EMA: ``grads[i] / abs(ema[i]) + abs(grads[i] - EMA[i] - Weight[i])``
The forward pass uses ``sign(weight) * sqrt(weight*ema)`` in place of the weight.
Some of this is slightly off, please read https://github.com/jklw10/dnns-from-scratch-in-zig/blob/benchmark-submission/src/layerGrok.zig#L259 to see the full context. Hopefully it's human readable enough.
This score probably isn't the maximum I can gain, just the fastest to test in an afternoon. Should I update here or just make a new issue in case I gain a higher score? (4x100 hidden neurons achieved 89%) | open | 2024-10-30T17:36:37Z | 2024-10-31T16:52:55Z | https://github.com/zalandoresearch/fashion-mnist/issues/186 | [] | jklw10 | 0 |
tfranzel/drf-spectacular | rest-api | 924 | Using custom pagination with APIView | **Describe the bug**
Within an APIView, when returning a manually generated paginated response, there is no clear way to merge the pagination schema with the child serializer.
**To Reproduce**
```python
class StdPagination(PageNumberPagination):
def get_paginated_response(self, data):
return Response({
'page_no': self.page.number,
'num_pages': self.page.paginator.num_pages,
'page_size': self.page.paginator.per_page,
'count': self.page.paginator.count,
'next': self.get_next_link(),
'previous': self.get_previous_link(),
'results': data
})
class ParrainageView(views.APIView, StdPagination):
permission_classes = [IsAuthenticated]
@extend_schema(
responses=ParrainageSerializer(many=True),
# The error is probably here but I have no idea what to actually pass here
)
def get(self, request):
params = request.GET
qs = UserReferral.objects.all()
results = self.paginate_queryset(qs, request, view=self)
srl = ParrainageSerializer(results, many=True)
return self.get_paginated_response(srl.data)
```
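The workaround I am considering is to hand-write an envelope serializer that mirrors `get_paginated_response` and pass that to `extend_schema` — a sketch only, since I do not know whether this is the intended approach:
```python
from rest_framework import serializers

class PaginatedParrainageSerializer(serializers.Serializer):
    page_no = serializers.IntegerField()
    num_pages = serializers.IntegerField()
    page_size = serializers.IntegerField()
    count = serializers.IntegerField()
    next = serializers.URLField(allow_null=True)
    previous = serializers.URLField(allow_null=True)
    results = ParrainageSerializer(many=True)

# then, on the view:
# @extend_schema(responses=PaginatedParrainageSerializer)
```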
**Actual behaviour**
Returns
```json
[]
```
**Expected behavior**
Returns
```json
{
"page_no": 1,
"num_pages": 1,
"page_size": 20,
"count": 0,
"next": null,
"previous": null,
"results": []
}
```
| closed | 2023-01-24T16:52:44Z | 2023-01-25T10:54:58Z | https://github.com/tfranzel/drf-spectacular/issues/924 | [] | VaZark | 4 |
mlflow/mlflow | machine-learning | 15,073 | [DOC] Quickstart tutorial on how to use mlflow with uv | ### Willingness to contribute
Yes. I would be willing to contribute a document fix with guidance from the MLflow community.
### URL(s) with the issue
1. It will likely be a brand new tutorial page in [Getting Started with MLflow](https://mlflow.org/docs/latest/getting-started/)
2. It's possible that some wording shows up in [Managing Dependencies in MLflow Models](https://mlflow.org/docs/latest/model/dependencies/)
### Description of proposal (what needs changing)
A follow-up to https://github.com/mlflow/mlflow/pull/13824 (mainly [this](https://github.com/mlflow/mlflow/pull/13824#issuecomment-2686350102) and [this](https://github.com/mlflow/mlflow/pull/13824#issuecomment-2518948627)).
I see no uv in [Managing Dependencies in MLflow Models](https://mlflow.org/docs/latest/model/dependencies) so I thought I'd take on it and learn MLflow while writing the doc about uv (I'm kinda familiar with already). | open | 2025-03-23T12:10:12Z | 2025-03-24T04:51:39Z | https://github.com/mlflow/mlflow/issues/15073 | [
"area/docs"
] | jaceklaskowski | 1 |