Dataset columns (name, dtype, observed range):

repo_name - string, length 9-75
topic - string, 30 classes
issue_number - int64, 1-203k
title - string, length 1-976
body - string, length 0-254k
state - string, 2 classes
created_at - string, length 20
updated_at - string, length 20
url - string, length 38-105
labels - sequence, length 0-9
user_login - string, length 1-39
comments_count - int64, 0-452
microsoft/unilm
nlp
1,494
[KOSMOS-G] How to prepare Laion dataset?
Thanks for your work. I can't find instructions for preparing the LAION dataset when training the aligner.
open
2024-04-02T03:52:32Z
2024-04-02T03:52:47Z
https://github.com/microsoft/unilm/issues/1494
[]
quanh1990
0
strawberry-graphql/strawberry
graphql
3,513
Functionality for filtering events in Subscriptions
It would be nice if an equivalent of the `withFilter`-function as described in Apollo Server (https://www.apollographql.com/docs/apollo-server/data/subscriptions#filtering-events) is added to Strawberry.
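Until something like `withFilter` exists, filtering is typically done by hand inside the subscription's async generator. A minimal sketch, assuming a hypothetical `event_stream()` source and field name (this is the current workaround, not a proposed API):

```python
# Sketch of manual event filtering inside a Strawberry subscription resolver.
# event_stream() and the field/argument names are hypothetical.
import asyncio
from typing import AsyncGenerator, Tuple

import strawberry


async def event_stream() -> AsyncGenerator[Tuple[int, str], None]:
    # Stand-in for a real pub/sub source (Redis, a broker, etc.).
    for post_id, text in [(1, "hello"), (2, "world"), (1, "again")]:
        await asyncio.sleep(0)
        yield post_id, text


@strawberry.type
class Subscription:
    @strawberry.subscription
    async def comment_added(self, post_id: int) -> AsyncGenerator[str, None]:
        async for event_post_id, text in event_stream():
            if event_post_id == post_id:  # the withFilter-style predicate
                yield text
```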
closed
2024-05-24T13:49:03Z
2025-03-20T15:56:44Z
https://github.com/strawberry-graphql/strawberry/issues/3513
[ "enhancement" ]
HenriKorver
2
PaddlePaddle/PaddleHub
nlp
1,475
Question about evaluation metrics for BERT text classification
Welcome to make suggestions for PaddleHub - thank you very much for your contribution! When leaving your suggestion, please also provide the following information:
- What new feature do you want to add?
- In what scenario is the feature needed?
- Without this feature, can PaddleHub currently meet the need indirectly?
- Which parts of PaddleHub would likely need to change to add the feature?
- If possible, briefly describe your proposed solution.

While using bert-wwm from the model library for a multi-class text classification project, I found that the reported evaluation metric is accuracy (acc), but I need macro-F1 as my evaluation metric. Would PaddleHub consider adding multi-class metrics such as F1 score? Or, if I need to write a custom evaluation method myself, which file should I modify - the model, or somewhere else?
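For reference, the requested metric is straightforward to compute outside the framework; a minimal, framework-agnostic sketch with scikit-learn (not PaddleHub code):

```python
# Framework-agnostic sketch of the metric the reporter asks for: macro-F1
# over multi-class predictions, computed with scikit-learn.
from sklearn.metrics import f1_score

y_true = [0, 2, 1, 2, 0, 1]
y_pred = [0, 1, 1, 2, 0, 2]

macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro-F1: {macro_f1:.4f}")
```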
open
2021-06-09T10:32:55Z
2021-06-16T08:21:45Z
https://github.com/PaddlePaddle/PaddleHub/issues/1475
[ "nlp" ]
mengjing12332
2
CorentinJ/Real-Time-Voice-Cloning
deep-learning
1,045
improve inference time
Does anyone have a clue on how to speed up inference? I know other vocoders have been tried, but they were not satisfactory... right?
open
2022-03-29T15:38:37Z
2022-03-29T15:38:37Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1045
[]
ireneb612
0
pytest-dev/pytest-flask
pytest
78
"ImportError: cannot import name 'json'" when debugging with pydevd
Hi! I'm receiving this error when attempting to debug with a pydevd -debugger. ``` Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1664, in <module> main() File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1658, in main globals = debugger.run(setup['file'], None, None, is_module) File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\pydevd.py", line 1068, in run pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "C:\Program Files\JetBrains\PyCharm 2018.1.3\helpers\pycharm\_jb_pytest_runner.py", line 31, in <module> pytest.main(args, plugins_to_load) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 54, in main config = _prepareconfig(args, plugins) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 167, in _prepareconfig pluginmanager=pluginmanager, args=args File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 617, in __call__ return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 222, in _hookexec return self._inner_hookexec(hook, methods, kwargs) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 216, in <lambda> firstresult=hook.spec_opts.get('firstresult'), File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 196, in _multicall gen.send(outcome) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\helpconfig.py", line 89, in pytest_cmdline_parse config = outcome.get_result() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 76, in get_result raise ex[1].with_traceback(ex[2]) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\callers.py", line 180, in _multicall res = hook_impl.function(*args) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 981, in pytest_cmdline_parse self.parse(args) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 1146, in parse self._preparse(args, addopts=addopts) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\config.py", line 1098, in _preparse self.pluginmanager.load_setuptools_entrypoints("pytest11") File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pluggy\__init__.py", line 397, in load_setuptools_entrypoints plugin = ep.load() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pkg_resources\__init__.py", line 2318, in load return self.resolve() File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pkg_resources\__init__.py", line 2324, in resolve module = __import__(self.module_name, fromlist=['__name__'], level=0) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\_pytest\assertion\rewrite.py", line 216, in load_module py.builtin.exec_(co, mod.__dict__) File "D:\Projects\Python\FlaskSQLAlchemyMasterClass\lib\site-packages\pytest_flask\plugin.py", line 11, in <module> from flask import json ImportError: cannot import name 'json' ```
closed
2018-07-30T20:11:05Z
2020-10-29T03:51:21Z
https://github.com/pytest-dev/pytest-flask/issues/78
[ "needs more info" ]
4lph4-Ph4un
1
matplotlib/matplotlib
data-science
29,766
[MNT]: ci: ubuntu-20.04 GitHub Actions runner will soon be unmaintained
### Summary The `ubuntu-20.04` GitHub Actions runner image, currently [used in the `tests.yml` workflow](https://github.com/matplotlib/matplotlib/blob/3edda656dc211497de93b8c5d642f0f29c96a33a/.github/workflows/tests.yml#L60) will soon be unsupported, as notified at: https://github.com/actions/runner-images/issues/11101 ### Proposed fix Ubuntu 20.04 itself is a long-term-support release, however it is also nearing the end of that support cycle, and will no longer be supported by Canonical from May 2025: https://ubuntu.com/20-04 I'd suggest removing the `ubuntu-20.04` jobs from the `tests.yml` workflows at the end of this month (March 2025).
open
2025-03-18T09:22:13Z
2025-03-18T18:00:24Z
https://github.com/matplotlib/matplotlib/issues/29766
[ "Maintenance" ]
jayaddison
5
mwaskom/seaborn
matplotlib
3,053
Needed: mechanism for deriving property values from other properties
We need a way to specify that a property's values should be (a function of) another property. This is most relevant for assigning the outputs of statistical operations to properties. It's become an acute need with the introduction of the `Text` mark (#3051). It's impossible to annotate statistical results (e.g. to put bar counts above the bars). It's also impossible to assign x/y to the text annotation when using `Plot.pair`, even without a statistical transform. I've kicked around a few ideas for this. One would be to make this part of the stat config, e.g. through a method call like
```
Plot(x).add(so.Text(), so.Hist().assign(text="count"))
```
But that does not solve the `Plot.pair` problem. Another option would be some special syntax within the variable assignment itself, akin to ggplot's `after_stat`. `Plot.add` accepts multiple transforms, so this would need to be "after all transforms"; I think it would be too complicated to specify that a variable should be assigned in the middle of the transform pipe. Will develop this idea further.
open
2022-10-03T01:10:57Z
2023-12-31T21:41:41Z
https://github.com/mwaskom/seaborn/issues/3053
[ "objects-plot", "feature" ]
mwaskom
1
fastapi/sqlmodel
sqlalchemy
347
Cannot get nested objects when converting to dict
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python import sqlalchemy as sa from sqlalchemy import func from sqlalchemy import orm from sqlmodel import Field, Relationship, SQLModel class Address(SQLModel, table=True): __tablename__ = 'addresses' id: Optional[int] = Field(default=None, primary_key=True, nullable=False) listing: 'Listing' = Relationship(back_populates='address') location: 'Location' = Relationship(back_populates='addresses') class ListingImage(SQLModel, table=True): __tablename__ = 'listing_images' id: Optional[int] = Field(default=None, primary_key=True, nullable=False) listing: 'Listing' = Relationship(back_populates='images') file: str = Field(max_length=255, nullable=False) listing_id: int = Field(foreign_key='listings.id', nullable=False) class Listing(SQLModel, table=True): __tablename__ = 'listings' created: Optional[datetime] = Field(sa_column_kwargs={'server_default': func.now()}, index=True) title: str = Field(max_length=100, nullable=False) price: int = Field(nullable=False) address_id: int = Field(foreign_key='addresses.id', nullable=True) id: Optional[int] = Field(default=None, primary_key=True, nullable=False) images: List['ListingImage'] = Relationship(back_populates='listing', sa_relationship_kwargs={'cascade': 'all, delete'}) address: 'Address' = Relationship(back_populates='listing', sa_relationship_kwargs={'cascade': 'all, delete', 'uselist': False}) __mapper_args__ = {"eager_defaults": True} async def get_multi( self, db: AsyncSession, *, skip: int = 0, limit: int = 100, order: Any = None, **filters ) -> List[Listing]: stmt = sa.select(Listing).options(orm.selectinload(Listing.images), orm.selectinload(Listing.address)).filter_by(**filters).order_by(order).offset(skip).limit(limit) result = await db.scalars(stmt) return result.all() ##### in some async function #### data = await get_multi(....) data[0].address shows Address(...) object normally data[0].dict() { "id":.. "created":... "title":... "price":... "address_id":... } no "images" or "address" nested object after converting to dict ``` ### Description When trying to convert an object using .dict() or .json() the resulting object does not include the nested fields like Address or Images in my example. I am using SQLAlchemy AsyncSession and doing eager loading of the objects when quering the DB. The nested object shows normally when trying to access it but it doesn't show up in resulting dict and json object and by contrast not being sent in FastAPI response. I can confirm this is not the case in Pydantic and models from Pydantic work fine when converted to dict. ### Operating System Windows ### Operating System Details _No response_ ### SQLModel Version 0.0.6 ### Python Version 3.8.8 ### Additional Context _No response_
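One commonly suggested workaround for this class of question is to declare separate read models that list the relationship fields explicitly, because `.dict()` on a table model only serializes its column attributes. A minimal sketch, with hypothetical model names that are not part of the issue:

```python
# Sketch only: read models that explicitly declare the nested relationships.
# Model names are hypothetical; field lists are trimmed for brevity.
from typing import List, Optional
from sqlmodel import SQLModel


class AddressRead(SQLModel):
    id: Optional[int] = None


class ListingImageRead(SQLModel):
    id: Optional[int] = None
    file: str


class ListingReadWithRelations(SQLModel):
    id: Optional[int] = None
    title: str
    price: int
    images: List[ListingImageRead] = []
    address: Optional[AddressRead] = None


# listing = (await get_multi(...))[0]
# ListingReadWithRelations.from_orm(listing).dict() should then include
# "images" and "address" (assuming the relationships were eagerly loaded).
```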
closed
2022-05-21T15:31:58Z
2022-11-12T10:53:36Z
https://github.com/fastapi/sqlmodel/issues/347
[ "question" ]
farahats9
8
pallets/flask
flask
5,013
On Debug Mode my function is called twice
```py import shutil import os from flask import Flask, render_template, request from datetime import datetime from os.path import isfile, join from sqlite3 import connect class Server(Flask): def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self.db_path = join("back-end", "database" ,"news.db") #Routes @self.route("/") @self.route("/home") def home(): return render_template("home.html") @self.route("/about") def about(): return render_template("about.html") @self.route("/table") def table(): return render_template("table.html") @self.route("/sabah") def sabah(): return render_template("sabah.html") @self.route("/news") def news(): pass @self.route("/science") def science(): pass #Functionality @self.template_global("getsemester") def gettime()->str: """This function gets current time and determines the semester to show.""" now = datetime.now() if isfile("static/cedvel/cedvel.pdf"): if datetime(now.year, 9, 15) <= now < datetime(now.year+1, 2, 16): return f"{now.year} PAYIZ SEMESTERİ üçün nəzərdə tutulub!" elif datetime(now.year, 2, 15) <= now < datetime(now.year, 6, 15): return f"{now.year} YAZ SEMESTERİ üçün nəzərdə tutulub!" else: return "Hal hazırda yay tetili ilə əlaqadər cedvəl hazır deyil." else: return "Hal hazırda cədvəl hazır deyil" @self.template_global("insertnews") def insertnews(header:str, main:str, date=datetime.now().strftime("%Y-%m-%d"))->str: """this function inserts news to the server""" print("FUNCTION CALLED!!!!") conn = connect(self.db_path) curr = conn.cursor() curr.execute(""" INSERT INTO news (header, main, date) VALUES (?, ?, ?) """, (header, main, date)) conn.commit() #getting latest news id id = curr.lastrowid print(id) conn.close() news_path = join("static", "media", "news", f"NEWS-{id}") os.makedirs(news_path) # image = request.files["image"] # # image.save(news_path, f"NEWS:{header}-{id}") return "XƏBƏR BAŞARIYLA YÜKLƏNDİ!" @self.template_global("deletenews") def deletenews(id: int)->str: """This function deletes news from server""" conn = connect(self.db_path) c = conn.cursor() c.execute("DELETE FROM news WHERE rowid=?", (id,)) conn.commit() conn.close() news_path = join("static", "media", "news") #deleting images of deleted news for dirpath, dirnames, filenames in os.walk(news_path): if f"NEWS-{id}" in dirnames: dir_to_delete = os.path.join(dirpath, f"NEWS-{id}") shutil.rmtree(dir_to_delete) return "XƏBƏRLƏR BAŞARIYLA SİLİNDİ" @self.template_global("getlatestnews") def getlatestnews()->list[str, str, str]: """this function returns latest 4 news from db""" conn = connect(self.db_path) c = conn.cursor() c.execute("SELECT * FROM news ORDER BY rowid DESC LIMIT 4") conn.commit() print(c.fetchall()) conn.close() @self.template_global("getnews") def getnews()->str: pass insertnews("BASLIQ1", "XEBER1") if __name__ == "__main__": server = Server( import_name=__name__, template_folder="../front-end/template", static_folder="../static") server.run() ``` <!-- it works right when i turn off DEBUG mode --> <!-- it should call my insertnews() function once --> Environment: - Python version: 3.11.0 - Flask version: 2.2.2
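The usual explanation for this behaviour is that debug mode enables the Werkzeug reloader, which starts a child process that re-imports the module, so module-level calls such as `insertnews("BASLIQ1", "XEBER1")` run once in the parent and once in the child. A minimal sketch of two common mitigations (standalone app, not the `Server` class above):

```python
# Sketch of two common mitigations; standalone app, not the Server class above.
import os

from flask import Flask

app = Flask(__name__)

# Option 1: run one-off setup only in the reloader's child process, which
# Werkzeug marks by setting WERKZEUG_RUN_MAIN=true.
if os.environ.get("WERKZEUG_RUN_MAIN") == "true":
    print("one-off setup runs exactly once here")

if __name__ == "__main__":
    # Option 2: keep debug mode but disable the reloader entirely.
    app.run(debug=True, use_reloader=False)
```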
closed
2023-03-04T16:03:37Z
2023-03-19T00:06:10Z
https://github.com/pallets/flask/issues/5013
[]
eso8086
3
ScrapeGraphAI/Scrapegraph-ai
machine-learning
16
add save to csv function
closed
2024-02-13T10:11:49Z
2024-02-13T11:37:02Z
https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/16
[]
VinciGit00
0
sinaptik-ai/pandas-ai
data-visualization
865
SmartDatalake persistently fails when asked to plot
### System Info
pandasai version: 1.5.13

### 🐛 Describe the bug
When asking for different plots, the function keeps returning the matplotlib.pyplot module (`plt`), which is an unexpected return type. I persistently see this pattern across different queries:
```
2024-01-10 16:54:08 [INFO] Code running:
....
plt.show()
result = {'type': 'plot', 'value': plt}
```
2024-01-10 16:54:08 [ERROR] Pipeline failed on step 4: expected str, bytes or os.PathLike object, not module
closed
2024-01-10T14:59:41Z
2024-07-04T16:05:41Z
https://github.com/sinaptik-ai/pandas-ai/issues/865
[]
RoyKulik
10
aio-libs/aiopg
sqlalchemy
140
Using cursors with SQLAlchemy
First of all, thank you for the great job publishing this package. I would like to know how to properly use a cursor through the SQLAlchemy abstraction in `aiopg`. It looks like the `sa` subpackage uses a cursor internally. Would you mind sharing an example of how to use it?
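A minimal sketch of the usage pattern from the aiopg documentation of that era: acquire a connection from the SQLAlchemy-layer engine and iterate over the result of `conn.execute()`; the cursor is managed internally. The table definition and connection parameters below are placeholders:

```python
# Sketch of aiopg.sa usage: the SQLAlchemy-core layer manages the cursor and
# you iterate over the result. Table and DSN values are placeholders.
import asyncio

import sqlalchemy as sa
from aiopg.sa import create_engine

metadata = sa.MetaData()
users = sa.Table(
    "users", metadata,
    sa.Column("id", sa.Integer, primary_key=True),
    sa.Column("name", sa.String(255)),
)


async def main():
    async with create_engine(user="user", database="db",
                             host="127.0.0.1", password="pw") as engine:
        async with engine.acquire() as conn:
            async for row in conn.execute(users.select()):
                print(row.id, row.name)


asyncio.get_event_loop().run_until_complete(main())
```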
closed
2016-09-07T11:46:48Z
2016-09-07T13:09:28Z
https://github.com/aio-libs/aiopg/issues/140
[]
tomas-fp
6
sammchardy/python-binance
api
1,151
TypeError: get_orderbook_tickers() got an unexpected keyword argument 'symbol'
**Describe the bug** TypeError: get_orderbook_tickers() got an unexpected keyword argument 'symbol' **To Reproduce** `client.get_orderbook_tickers(symbol='BTCUSDT')` **Expected behavior** using symbol parameter should be able to get the orderbook ticker only for that symbol **Environment (please complete the following information):** - Python version: 3.8.10 - Virtual Env: Jupyter Notebook - OS: Ubuntu 20:04.4 LTS - python-binance version: v1.0.15 **Logs or Additional context** -
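`get_orderbook_tickers()` wraps the bulk book-ticker endpoint and takes no arguments; the single-symbol variant is, assuming it is present in the installed version, the singular `get_orderbook_ticker()`. A hedged sketch of both call shapes:

```python
# Sketch of the two call patterns, assuming the singular-form helper
# get_orderbook_ticker() is available in your python-binance version
# (it wraps GET /api/v3/ticker/bookTicker with a symbol parameter).
from binance.client import Client

client = Client(api_key="...", api_secret="...")

# Bulk endpoint: all symbols, no keyword arguments.
all_tickers = client.get_orderbook_tickers()

# Single symbol: note the singular method name.
btc_ticker = client.get_orderbook_ticker(symbol="BTCUSDT")
print(btc_ticker)
```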
open
2022-02-27T03:30:48Z
2022-03-04T02:51:57Z
https://github.com/sammchardy/python-binance/issues/1151
[]
brianivander
1
hankcs/HanLP
nlp
796
A way to optimize the efficiency of CRF segmentation
<!-- The checklist and version number are required; issues without them will not be answered. To get a quick reply, please fill in the template carefully. Thank you. -->
## Checklist
Please confirm the following:
* I have carefully read the documents below and did not find an answer:
  - [homepage documentation](https://github.com/hankcs/HanLP)
  - [wiki](https://github.com/hankcs/HanLP/wiki)
  - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ)
* I have already searched for my question with [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer.
* I understand that this open-source community is a volunteer community built on shared interest and assumes no responsibility or obligation. I will speak politely and thank everyone who helps me.
* [x] I have entered an x in the brackets to confirm the items above.

## Version
<!-- For release builds, give the jar file name without the extension; for the GitHub repository, state whether it is the master or portable branch -->
The current latest version is: 1.6.3
The version I am using is: 1.6.3
<!-- The items above are required; the rest is free-form -->

## My question
The pipelines of [CRF segmentation](https://github.com/hankcs/HanLP#6-crf%E5%88%86%E8%AF%8D) and [perceptron segmentation](https://github.com/hankcs/HanLP/wiki/%E7%BB%93%E6%9E%84%E5%8C%96%E6%84%9F%E7%9F%A5%E6%9C%BA%E6%A0%87%E6%B3%A8%E6%A1%86%E6%9E%B6) are very similar (both are: extract features -> look up probabilities/weights -> accumulate -> Viterbi), yet the benchmark results on the wiki differ greatly, even though HanLP's earlier CRF model uses fewer feature templates than the perceptron's current seven. So I looked at how HanLP builds its CRF model and found a problem: the features generated by CRF++ all start with "U[0-9]+:", while the model indexes feature probabilities with a BinTrie. This means the BinTrie's accelerated first level contains only the single character "U", so every feature lookup falls back to binary search - no wonder it is slow.

## Proposed solution
What needs to be solved is how to get the Chinese characters indexed at the first trie level without hurting efficiency. I think we could consider decomposing and recombining the feature templates and feature keys, or simply reversing the key strings.
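A tiny illustration of the key-reversal idea (HanLP itself is Java; this Python snippet only demonstrates the transformation, not the BinTrie code):

```python
# CRF++ features all start with "U<nn>:", so a prefix trie degenerates: its
# first level holds only the character "U". Reversing the key puts the
# varying characters first, letting the first trie level branch widely.
features = ["U00:我", "U01:们", "U05:的模型"]

reversed_keys = [f[::-1] for f in features]
print(reversed_keys)  # e.g. '我:00U' - the first character now discriminates
```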
closed
2018-04-18T15:39:41Z
2020-01-01T10:50:27Z
https://github.com/hankcs/HanLP/issues/796
[ "ignored" ]
TylunasLi
3
neuml/txtai
nlp
352
Add config option for list of stopwords to ignore with topic generation
Add an option to ignore words when generating topic names. This list is in addition to standard tokenizer stop words.
closed
2022-10-04T10:25:32Z
2022-10-04T13:40:35Z
https://github.com/neuml/txtai/issues/352
[]
davidmezzetti
0
gradio-app/gradio
python
9,889
How to replace gr.ImageEditor uploaded user image and still keep image editing features
### Describe the bug **gradio==5.4.0** I want to auto resize / crop user uploaded image and replace processed image inside in gr.ImageEditor Here the code I use So after image edited the image editing features disappears Here what I mean ``` imgs = gr.ImageEditor(sources='upload', type="pil", label='Human. Mask with pen or use auto-masking', interactive=True) def process_editor_image(image_dict, enable_processing): if not enable_processing or image_dict is None: return image_dict if isinstance(image_dict, dict) and 'background' in image_dict: image_dict['background'] = process_image_to_768x1024(image_dict['background']) return image_dict def process_single_image(image, enable_processing): if not enable_processing or image is None: return image return process_image_to_768x1024(image) # Add image processing event handlers imgs.upload( fn=process_editor_image, inputs=[imgs, auto_process], outputs=imgs, ) def process_image_to_768x1024(img): if not isinstance(img, Image.Image): return img # Create a new white background image target_width, target_height = 768, 1024 new_img = Image.new('RGB', (target_width, target_height), 'white') # Calculate aspect ratios aspect_ratio = img.width / img.height target_aspect = target_width / target_height if aspect_ratio > target_aspect: # Image is wider than target new_width = target_width new_height = int(target_width / aspect_ratio) resize_img = img.resize((new_width, new_height), Image.Resampling.LANCZOS) paste_y = (target_height - new_height) // 2 new_img.paste(resize_img, (0, paste_y)) else: # Image is taller than target new_height = target_height new_width = int(target_height * aspect_ratio) resize_img = img.resize((new_width, new_height), Image.Resampling.LANCZOS) paste_x = (target_width - new_width) // 2 new_img.paste(resize_img, (paste_x, 0)) return new_img def process_uploaded_image(img, enable_processing): if not enable_processing or img is None: return img if isinstance(img, dict): # For ImageEditor if img.get('background'): img['background'] = process_image_to_768x1024(img['background']) return img return process_image_to_768x1024(img) ``` ![image](https://github.com/user-attachments/assets/a1f0685a-fe4c-44c9-879c-a17aafbce492) ![image](https://github.com/user-attachments/assets/288f16a7-0d6a-4641-81f6-350722c2aeda)
closed
2024-11-01T11:43:51Z
2025-01-22T18:35:04Z
https://github.com/gradio-app/gradio/issues/9889
[ "bug", "🖼️ ImageEditor" ]
FurkanGozukara
1
ansible/awx
automation
15,125
RFE: Implement Maximum Execution Limit for Scheduled Successful Jobs
### Please confirm the following - [X] I agree to follow this project's [code of conduct](https://docs.ansible.com/ansible/latest/community/code_of_conduct.html). - [X] I have checked the [current issues](https://github.com/ansible/awx/issues) for duplicates. - [X] I understand that AWX is open source software provided for free and that I might not receive a timely response. ### Feature type New Feature ### Feature Summary **Context:** The current version of AWX allows users to schedule job executions, but it does not offer a way to automatically disable these schedules after a certain number of successful executions. This enhancement proposes adding a feature to limit the maximum number of executions for a schedule. For example, a user could set a schedule to run a job three times every day, but after a total of nine successful executions, the schedule should automatically disable itself. This feature would be particularly useful in managing resources and ensuring that tasks do not run indefinitely. Consider a scenario where schedules are dynamically generated to perform specific checks a few times a day over several days. After the desired number of checks, it would be beneficial for the schedule to deactivate automatically. Schedules in AWX function similarly to a distributed cron job. By implementing this feature, it would be akin to having a distributed version of the "at" command, enhancing the flexibility and control over task executions in AWX. **Use Case:** This feature would be beneficial in scenarios where a task is required to run only a limited number of times, such as: - Temporary projects or jobs that are only relevant for a certain period or a specific number of executions. - Compliance or policy requirements that mandate certain tasks not exceed a specified number of runs. - Testing environments where jobs are needed for a finite number of runs to validate behavior under controlled repetitions. **Impact:** - Positive: Enhances control over job execution, prevents resource wastage, and improves manageability. - Negative: Slight increase in the complexity of the scheduling interface and additional validation required to manage the execution count. ### Select the relevant components - [X] UI - [X] API - [X] Docs - [X] Collection - [X] CLI - [ ] Other ### Steps to reproduce RFE ### Current results RFE ### Sugested feature result RFE ### Additional information _No response_
closed
2024-04-22T12:24:55Z
2024-07-27T13:29:54Z
https://github.com/ansible/awx/issues/15125
[ "type:enhancement", "component:api", "component:ui", "component:docs", "component:awx_collection", "community" ]
jangel97
1
hyperspy/hyperspy
data-visualization
3,236
Is there a way to control the aspect ratio of the navigation images?
#### Describe the functionality you would like to see. EELSSpectrum.plot() doesn't seem to accept an argument like plt.imshow's `extent`. Is there a way to control the aspect ratio of the navigation images?
open
2023-10-08T02:24:39Z
2023-10-14T20:25:38Z
https://github.com/hyperspy/hyperspy/issues/3236
[]
HanHsuanWu
7
tiangolo/uwsgi-nginx-flask-docker
flask
178
Failed connections when running docker build to install new requirements.txt
Hi there! I followed your instructions on this video, very easy to follow and instructive, thank you! https://www.youtube.com/watch?v=DPBspKl2epk&t=849s However, I want to add new package dependancies to the docker image, so I first generated a new requirements.txt and then I modified the Dockerfile to uncomment: COPY requirements.txt / RUN pip install --no-cache-dir -U pip RUN pip install --no-cache-dir -U -r /requirements.txt However, I get these errors when I run the dockerfile: ``` (env) PS C:\Work\docker-azure-demo\flask-webapp-quickstart> docker build --rm -f .\Dockerfile -t apraapi.azurecr.io/flask-webapp-quickstart:latest . Sending build context to Docker daemon 808.1MB Step 1/7 : FROM tiangolo/uwsgi-nginx-flask:python3.6-alpine3.7 ---> cdec3b0d8f20 Step 2/7 : ENV LISTEN_PORT=8000 ---> Using cache ---> c5c66cc273b6 Step 3/7 : EXPOSE 8000 ---> Using cache ---> 15a504c395e6 Step 4/7 : COPY requirements.txt / ---> Using cache ---> 9ffd23cb8771 Step 5/7 : RUN pip install --no-cache-dir -U pip ---> Using cache ---> ca7dbaeb4b0e Step 6/7 : RUN pip install --no-cache-dir -U -r /requirements.txt ---> Running in ad5830d85dcb Collecting astroid==2.4.1 (from -r /requirements.txt (line 1)) Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790869e8>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/ Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc550>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/ Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc588>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/ Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dc240>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/ Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7ff3790dcda0>: Failed to establish a new connection: [Errno -3] Try again',)': /simple/astroid/ Could not find a version that satisfies the requirement astroid==2.4.1 (from -r /requirements.txt (line 1)) (from versions: ) No matching distribution found for astroid==2.4.1 (from -r /requirements.txt (line 1)) The command '/bin/sh -c pip install --no-cache-dir -U -r /requirements.txt' returned a non-zero code: 1 (env) PS C:\Work\docker-azure-demo\flask-webapp-quickstart> ``` Do you know what's happening? Alternatively, I do a pip install from my venv and all seems to work fine. Thanks!
closed
2020-05-11T01:16:27Z
2020-06-06T12:38:25Z
https://github.com/tiangolo/uwsgi-nginx-flask-docker/issues/178
[]
ghost
2
Evil0ctal/Douyin_TikTok_Download_API
fastapi
404
[BUG] Getting Error: response status is 400 when trying to run GET /api/hybrid/video_data
***Platform where the error occurred?*** Such as: TikTok ***The endpoint where the error occurred?*** Such as: API-V4/ WebApp ***Submitted input value?*** Such as: [video link ](https://www.tiktok.com/@nandaarsyinta/video/7298346423521742085) ***Have you tried again?*** Such as: Yes, the error still exists. ***Have you checked the readme or interface documentation for this project?*** Such as: Yes, and it is very sure that the problem is caused by the program. This is the error I am getting { "detail": { "code": 400, "message": "An error occurred.", "support": "Please contact us on Github: https://github.com/Evil0ctal/Douyin_TikTok_Download_API", "time": "2024-05-11 06:35:09", "router": "/api/hybrid/video_data", "params": { "url": "https://www.tiktok.com/@nandaarsyinta/video/7298346423521742085", "minimal": "false" } } }
closed
2024-05-22T01:35:47Z
2024-06-14T08:23:45Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/404
[ "BUG", "enhancement" ]
ketanmalempati
1
sanic-org/sanic
asyncio
2,847
sanic pytest
### Is there an existing issue for this? - [X] I have searched the existing issues ### Describe the bug When I made a test code with pytest, First test_client.get() function was passed. but next test_client.get() function has a trouble There is some error like this. ``` ERROR sanic.error:startup.py:960 Experienced exception while trying to serve Traceback (most recent call last): File "/py39//lib/python3.9/site-packages/sanic/mixins/startup.py", line 958, in serve_single worker_serve(monitor_publisher=None, **kwargs) File "/py39//lib/python3.9/site-packages/sanic/worker/serve.py", line 143, in worker_serve raise e File "/py39//lib/python3.9/site-packages/sanic/worker/serve.py", line 117, in worker_serve return _serve_http_1( File "/py39//lib/python3.9/site-packages/sanic/server/runners.py", line 222, in _serve_http_1 loop.run_until_complete(app._startup()) File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete File "/py39//lib/python3.9/site-packages/sanic/app.py", line 1729, in _startup raise ServerError(message) sanic.exceptions.ServerError: Duplicate route names detected: App.wrapped_handler. You should rename one or more of them explicitly by using the `name` param, or changing the implicit name derived from the class and function name. For more details, please see https://sanic.dev/en/guide/release-notes/v23.3.html#duplicated-route-names-are-no-longer-allowed INFO sanic.root:startup.py:965 Server Stopped ``` ### sanic versions ``` sanic==23.6.0 sanic-compress==0.1.1 sanic-ext==23.6.0 sanic-jinja2==2022.11.11 sanic-routing==23.6.0 sanic-testing==23.6.0 ``` ### Code snippet ```python import pytest from was import webapp from utils.logger import logger @pytest.fixture def app(): app = webapp.app return app def test_app_root(app): _, response = app.test_client.get("/") logger.info(response.status) assert response.status == 200 _, response = app.test_client.get("/") logger.info(response.status) # assert request.method.lower() == "get" assert response.status == 200 ``` ### Expected Behavior _No response_ ### How do you run Sanic? Sanic CLI ### Operating System MacOS ### Sanic Version Sanic 23.6.0; Routing 23.6.0 ### Additional context _No response_
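A minimal sketch of the shape the error message itself points at: explicit `name=` parameters on routes, plus a fresh app per test so nothing is registered twice. The routes and names below are placeholders, not the reporter's `Server` subclass:

```python
# Per-test app factory with explicitly named routes; placeholder routes only.
import pytest
from sanic import Sanic
from sanic.response import text


@pytest.fixture
def app():
    app = Sanic("test_app")

    @app.get("/", name="home")
    async def home(request):
        return text("ok")

    return app


def test_app_root(app):
    _, response = app.test_client.get("/")
    assert response.status == 200
    _, response = app.test_client.get("/")
    assert response.status == 200
```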
closed
2023-11-02T05:53:10Z
2023-11-02T06:37:58Z
https://github.com/sanic-org/sanic/issues/2847
[ "bug" ]
kobikun
1
SciTools/cartopy
matplotlib
1,660
Bug with 0.18.0: Matplotlib clabels become NoneType object for figs with projection=ccrs.PlateCarree()
Calling to ax.clabel plots contour labels as desired, yet the clabel object itself is somehow 'None' for matplotlib figures initilaized with projection=ccrs.PlateCarree(). This becomes an issue because I then wipe my contour and contour label objects clean after each iteration of a long loop, as in [this answer](https://stackoverflow.com/questions/47049296/matplotlib-how-to-remove-contours-clabel), but a TypeError is thrown because of course it can't iterate over a NoneType object. By rolling back to cartopy 0.17.0 and seeing clabel producing an object as intended, I can confirm that this is a bug that appears with version 0.18.0. Below is an example script that illustrates that contour labels become NoneType objects when projection=ccrs.PlateCarree() is invoked with cartopy 0.18.0. I'm running with Anaconda on a Win64 PC. #### Code to reproduce ``` import matplotlib as mpl mpl.use('Agg') import matplotlib.pyplot as plt import numpy as np from time import time from datetime import datetime, timedelta from siphon.catalog import TDSCatalog import cartopy.crs as ccrs # Recreate the gridded data from the matplotlib contour example delta = 0.025 x = np.arange(-3.0, 3.0, delta) y = np.arange(-2.0, 2.0, delta) X, Y = np.meshgrid(x, y) Z1 = np.exp(-X**2 - Y**2) Z2 = np.exp(-(X - 1)**2 - (Y - 1)**2) Z = (Z1 - Z2) * 2 # Contour it and label it to show that labels work and can be removed as desired fig, ax = plt.subplots() cN = ax.contour(X, Y, Z) lbl = ax.clabel(cN) #plt.show() print("\n\nContour label for a basic 2D plot is: ") print(lbl) # Now remove those labels # Will work as intended for lbl for label in lbl: label.remove() # Now try a dataset that needs to be geographically referenced # Use siphon to get a weather model dataset # This dataset link will expire on approximately March 9, 2021 model_url = "https://www.ncei.noaa.gov/thredds/catalog/model-rap130/202009/20200909/catalog.xml?dataset=rap130/202009/20200909/rap_130_20200909_1800_000.grb2" vtime = datetime.strptime('2020090918','%Y%m%d%H') # Get the data model = TDSCatalog(model_url) ds = model.datasets[0] ncss = ds.subset() query = ncss.query() query.accept('netcdf') query.time(vtime) # Set to the analysis hour only query.add_lonlat() query.variables('Geopotential_height_isobaric') data = ncss.get_data(query) # Get the lats and lons and a data field from the file lats = data.variables['lat'][:,:] lons = data.variables['lon'][:,:] hght = data.variables['Geopotential_height_isobaric'][0,24,:,:] # 700 hPa is 24th element # Contour that weather data grid # This requires cartopy, which seems to be the problem # Redefine the figure, because this time we need to georeference it fig = plt.figure(5, figsize=(1600/96,1600/96)) ax = fig.add_subplot(111, projection=ccrs.PlateCarree()) cN2 = ax.contour(lons, lats, hght) lbl2 = ax.clabel(cN2) #plt.show() print("\n\nContour label for weather data plot is: ") print(lbl2) # Removing labels will not work for lbl2 because it can't iterate over a NoneType object # if using cartopy 0.18.0 for label in lbl2: label.remove() ``` #### Traceback ``` Traceback (most recent call last): File "clabel_bug.py", line 67, in <module> for label in lbl2: TypeError: 'NoneType' object is not iterable ```
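A small caller-side guard while the regression stands: treat a `None` return from `clabel` as an empty label list before iterating. Shown with plain matplotlib so the snippet is self-contained; the same guard applies to the GeoAxes case:

```python
# Defensive iteration over clabel's return value, which may be None under
# cartopy 0.18.0 with a GeoAxes.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-3, 3, 100)
X, Y = np.meshgrid(x, x)
Z = np.exp(-X**2 - Y**2)

fig, ax = plt.subplots()
cn = ax.contour(X, Y, Z)
labels = ax.clabel(cn)

for label in (labels or []):  # tolerate a None return value
    label.remove()
```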
closed
2020-09-16T16:29:00Z
2020-09-16T18:59:38Z
https://github.com/SciTools/cartopy/issues/1660
[]
ThunderRoad75
1
unionai-oss/pandera
pandas
863
MultiIndex dropped by reset_index with default argument
**Describe the bug** A clear and concise description of what the bug is. - [X] I have checked that this issue has not already been reported. - [X] I have confirmed this bug exists on the latest version of pandera. - [ ] (optional) I have confirmed this bug exists on the master branch of pandera. **Note**: Please read [this guide](https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) detailing how to provide the necessary information for us to reproduce your bug. #### Code Sample, a copy-pastable example ```python import pandera as pa multi_index = pa.DataFrameSchema( columns={"test_col": pa.Column(int)}, index=pa.MultiIndex([pa.Index(int, name="index_1"), pa.Index(int, name="index_2")]), ) single_index = pa.DataFrameSchema( columns={"test_col": pa.Column(int)}, index=pa.Index(int, name="index_1") ) print(multi_index) print("-----") print(single_index) print("-----") print("-----") print(multi_index.reset_index()) print("-----") print(single_index.reset_index()) # By contrast, this will work as expected: print("-----") print(multi_index.reset_index(["index_1", "index_2"])) ``` #### Expected behavior The indices to become columns. #### Actual behavior The MultiIndex is completely dropped without being added to the columns. #### Desktop (please complete the following information): - OS: Win10 - Python 3.9.12 #### Screenshots ![image](https://user-images.githubusercontent.com/10162554/170385676-a33be121-e867-4814-8440-fc2afbfe8f06.png) ![image](https://user-images.githubusercontent.com/10162554/170386243-318b6215-9501-43bf-af1f-932f8a1263ee.png) #### Additional context Found in pandera-0.9.0 Exists as recently as pandera-0.11.0
closed
2022-05-25T23:34:46Z
2025-01-17T19:54:29Z
https://github.com/unionai-oss/pandera/issues/863
[ "bug" ]
mheguy
3
litestar-org/litestar
asyncio
3,787
Bug: "Lockfile hash doesn't match pyproject.toml, packages may be outdated" warning in pdm
### Description When running `pdm install` on `litestar` repo you get: ``` Run pdm install -G:all WARNING: Lockfile is generated on an older version of PDM WARNING: Lockfile hash doesn't match pyproject.toml, packages may be outdated Updating the lock file... ``` Link: https://github.com/litestar-org/litestar/actions/runs/11290808586/job/31403420888?pr=3784#step:5:13 I don't think that this is correct. ### URL to code causing the issue _No response_ ### MCVE _No response_ ### Steps to reproduce ```bash 1. Run `pdm install` on clean repo with no `venv` ``` ### Screenshots _No response_ ### Logs _No response_ ### Litestar Version `main` ### Platform - [X] Linux - [X] Mac - [X] Windows - [ ] Other (Please specify in the description above)
closed
2024-10-14T06:45:08Z
2025-03-20T15:54:58Z
https://github.com/litestar-org/litestar/issues/3787
[ "Bug :bug:", "Dependencies", "Package" ]
sobolevn
2
Lightning-AI/pytorch-lightning
machine-learning
20,423
Weird error while training a model with tabular data!!!! Some problem related self.log_dict
### Bug description The code can be accessed at https://www.kaggle.com/code/vigneshwar472/notebook5a03168e34 I am working on multiclass classification task and want to train a nueral network with pytorch lightning on 2x T4 GPUs on kaggle notebook. Everything seems to work fine but I encounter this error when I fitted the trainer. Training Step of lightning module ``` def training_step(self, batch, batch_idx): x, y = batch logits = self(x) loss = F.cross_entropy(logits, y) preds = F.softmax(logits, dim=1) preds.to(y) self.log_dict({ "train_Loss": loss, "train_Accuracy": self.accuracy(preds, y), "train_Precision": self.precision(preds, y), "train_Recall": self.recall(preds, y), "train_F1-Score": self.f1(preds, y), "train_F3-Score": self.f_beta(preds, y), "train_AUROC": self.auroc(preds, y), }, on_step=True, on_epoch=True, prog_bar=True, sync_dist=True) return loss ``` Initializing trainer ``` trainer = L.Trainer(max_epochs=5, devices=2, strategy='ddp_notebook', num_sanity_val_steps=0, profiler='simple', default_root_dir="/kaggle/working", callbacks=[DeviceStatsMonitor(), StochasticWeightAveraging(swa_lrs=1e-2), #EarlyStopping(monitor='train_Loss', min_delta=0.001, patience=100, verbose=False, mode='min'), ], enable_progress_bar=True, enable_model_summary=True, ) ``` trainer.fit(model, data_mod) => data_mod is LightningDataModule ``` W1116 14:03:37.546000 140135548491584 torch/multiprocessing/spawn.py:146] Terminating process 131 via signal SIGTERM INFO: [rank: 0] Received SIGTERM: 15 --------------------------------------------------------------------------- ProcessRaisedException Traceback (most recent call last) Cell In[14], line 1 ----> 1 trainer.fit(model, data_mod) File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py:538, in Trainer.fit(self, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path) 536 self.state.status = TrainerStatus.RUNNING 537 self.training = True --> 538 call._call_and_handle_interrupt( 539 self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path 540 ) File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py:46, in _call_and_handle_interrupt(trainer, trainer_fn, *args, **kwargs) 44 try: 45 if trainer.strategy.launcher is not None: ---> 46 return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs) 47 return trainer_fn(*args, **kwargs) 49 except _TunerExitException: File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/multiprocessing.py:144, in _MultiProcessingLauncher.launch(self, function, trainer, *args, **kwargs) 136 process_context = mp.start_processes( 137 self._wrapping_function, 138 args=process_args, (...) 
141 join=False, # we will join ourselves to get the process references 142 ) 143 self.procs = process_context.processes --> 144 while not process_context.join(): 145 pass 147 worker_output = return_queue.get() File /opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py:189, in ProcessContext.join(self, timeout) 187 msg = "\n\n-- Process %d terminated with the following error:\n" % error_index 188 msg += original_trace --> 189 raise ProcessRaisedException(msg, error_index, failed_process.pid) ProcessRaisedException: -- Process 1 terminated with the following error: Traceback (most recent call last): File "/opt/conda/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 76, in _wrap fn(i, *args) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/launchers/multiprocessing.py", line 173, in _wrapping_function results = function(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 574, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 981, in _run results = self._run_stage() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/trainer.py", line 1025, in _run_stage self.fit_loop.run() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 205, in run self.advance() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/fit_loop.py", line 363, in advance self.epoch_loop.run(self._data_fetcher) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 140, in run self.advance(data_fetcher) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/training_epoch_loop.py", line 250, in advance batch_output = self.automatic_optimization.run(trainer.optimizers[0], batch_idx, kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 190, in run self._optimizer_step(batch_idx, closure) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 268, in _optimizer_step call._call_lightning_module_hook( File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 167, in _call_lightning_module_hook output = fn(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 1306, in optimizer_step optimizer.step(closure=optimizer_closure) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/optimizer.py", line 153, in step step_output = self._strategy.optimizer_step(self._optimizer, closure, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/ddp.py", line 270, in optimizer_step optimizer_output = super().optimizer_step(optimizer, closure, model, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 238, in optimizer_step return self.precision_plugin.optimizer_step(optimizer, model=model, closure=closure, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 122, in optimizer_step return optimizer.step(closure=closure, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 484, in wrapper out = func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/optim/optimizer.py", line 89, in _use_grad ret = func(self, *args, **kwargs) File 
"/opt/conda/lib/python3.10/site-packages/torch/optim/adamw.py", line 204, in step loss = closure() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/plugins/precision/precision.py", line 108, in _wrap_closure closure_result = closure() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 144, in __call__ self._result = self.closure(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context return func(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 129, in closure step_output = self._step_fn() File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/loops/optimization/automatic.py", line 317, in _training_step training_step_output = call._call_strategy_hook(trainer, "training_step", *kwargs.values()) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/call.py", line 319, in _call_strategy_hook output = fn(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 389, in training_step return self._forward_redirection(self.model, self.lightning_module, "training_step", *args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 640, in __call__ wrapper_output = wrapper_module(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1636, in forward else self._run_ddp_forward(*inputs, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 1454, in _run_ddp_forward return self.module(*inputs, **kwargs) # type: ignore[index] File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl return forward_call(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/strategies/strategy.py", line 633, in wrapped_forward out = method(*_args, **_kwargs) File "/opt/conda/lib/python3.10/site-packages/torch/_dynamo/eval_frame.py", line 433, in _fn return fn(*args, **kwargs) File "/tmp/ipykernel_30/3650372019.py", line 74, in training_step self.log('train_loss', loss, on_step=True, on_epoch=True, prog_bar=True, sync_dist=True) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 437, in log apply_to_collection(value, dict, self.__check_not_nested, name) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 438, in torch_dynamo_resume_in_log_at_437 apply_to_collection( File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 484, in torch_dynamo_resume_in_log_at_438 results.reset(metrics=False, fx=self._current_fx_name) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 508, in torch_dynamo_resume_in_log_at_484 and is_param_in_hook_signature(self.training_step, "dataloader_iter", explicit=True) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/core/module.py", line 525, in 
torch_dynamo_resume_in_log_at_508 results.log( File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 403, in log metric = _ResultMetric(meta, isinstance(value, Tensor)) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 404, in torch_dynamo_resume_in_log_at_403 self[key] = metric File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 411, in torch_dynamo_resume_in_log_at_404 self[key].to(value.device) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 414, in torch_dynamo_resume_in_log_at_411 self.update_metrics(key, value, batch_size) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 419, in update_metrics result_metric.forward(value, batch_size) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 270, in forward self.update(value, batch_size) File "/opt/conda/lib/python3.10/site-packages/torchmetrics/metric.py", line 483, in wrapped_func update(*args, **kwargs) File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 225, in update self._forward_cache = self.meta.sync(value.clone()) # `clone` because `sync` is in-place File "/opt/conda/lib/python3.10/site-packages/lightning/pytorch/trainer/connectors/logger_connector/result.py", line 144, in sync assert self._sync is not None AssertionError ``` # Please Help me resolve this error. I am very confused what to do ### What version are you seeing the problem on? v2.4 ### How to reproduce the bug ```python Check out the kaggle notebook [](https://www.kaggle.com/code/vigneshwar472/notebook5a03168e34) ``` ### Error messages and logs ``` # Error messages and logs here please ``` ### Environment <details> <summary>Current environment</summary> ``` #- PyTorch Lightning Version (e.g., 2.4.0): #- PyTorch Version (e.g., 2.4): #- Python version (e.g., 3.12): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): ``` </details> ### More info _No response_
open
2024-11-16T14:21:26Z
2024-11-18T23:58:56Z
https://github.com/Lightning-AI/pytorch-lightning/issues/20423
[ "bug", "ver: 2.4.x" ]
KeesariVigneshwarReddy
1
nerfstudio-project/nerfstudio
computer-vision
3,039
How to export cropped data
When I crop the data I want in the viewer, the export still contains all of the data. How can I export only the cropped data? Thank you.
open
2024-04-03T07:25:18Z
2024-04-19T07:40:00Z
https://github.com/nerfstudio-project/nerfstudio/issues/3039
[]
cici19850
2
zappa/Zappa
django
1,351
Scheduled function truncated at 63 characters and fails to invoke
if I have a scheduled function with a name longer than 63 characters, then the name will be truncated in the CloudWatch event name/ARN: ``` { "production": { ... "events": [{ "function": "my_module.my_submodule.my_really_long_and_descriptive_function_name", "expressions": ["rate(1 day)"] }], ... } } ``` Event rule: `arn:aws:events:eu-west-2:000000000000:rule/-my_module.my_submodule.my_really_long_and_descriptive_function_` This results in the following exception when the event is handled by the lambda: ``` AttributeError: module 'my_module.my_submodule' has no attribute 'my_really_long_and_descriptive_function_' ``` ## Context <!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug --> <!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.8/3.9/3.10/3.11/3.12 --> It looks like the `whole_function` value is parsed out of the event ARN here: https://github.com/zappa/Zappa/blob/39f75e76d28c1a08d4de6501e6f794fe988cbc98/zappa/handler.py#L410 Since the ARNs are limited in length, the long module path gets truncated to 63 characters (possibly because of the leading `-` making 64 total). It looks like the full module and function path remains non-truncated in the description of the event rule. ## Expected Behavior It should invoke the non-truncated function, or should refuse to deploy with handler functions that are too long. ## Actual Behavior It throws an exception and the scheduled task never executes. ## Possible Fix Either: 1. Have the handler read the non-truncated `whole_function` value from the event description. This might require an extra AWS API call that existing deployments may or may not have permission to perform. 2. During deployment, a mapping of truncated names to full names could be created and embedded in the deployed app bundle, then referenced when handling events. 3. Raise an error (early) during deployment if a handler function name is too long and would result in truncation. It would be better to explicitly fail during deployment than to have guaranteed failures later on that might go unnoticed. ## Steps to Reproduce 1. Create a scheduled function whose fully qualified handler is longer than 63 characters. 2. Deploy. 3. Observe the error logs for the `AttributeError` above. ## Your Environment <!--- Include as many relevant details about the environment you experienced the bug in --> * Zappa version used: 0.56.1 * Operating System and Python version: Amazon Linux (lambda), Python 3.9
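A minimal sketch of "Possible Fix" option 3 - refuse to deploy when a scheduled function's dotted path would be truncated. The 63-character limit and the helper below are illustrative, not Zappa internals:

```python
# Illustrative pre-deploy check; not Zappa code.
MAX_RULE_FUNCTION_CHARS = 63  # assumed limit, per the truncation observed above


def check_event_functions(events):
    for event in events:
        whole_function = event["function"]
        if len(whole_function) > MAX_RULE_FUNCTION_CHARS:
            raise ValueError(
                f"Scheduled function '{whole_function}' is {len(whole_function)} "
                f"characters long and would be truncated in the event rule ARN."
            )


try:
    check_event_functions([
        {"function": "my_module.my_submodule.my_really_long_and_descriptive_function_name",
         "expressions": ["rate(1 day)"]},
    ])
except ValueError as exc:
    print(exc)  # fail early at deploy time instead of silently at runtime
```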
closed
2024-09-18T11:18:37Z
2024-09-30T07:31:03Z
https://github.com/zappa/Zappa/issues/1351
[ "duplicate" ]
eviltwin
2
lanpa/tensorboardX
numpy
222
UnboundLocalError: local variable 'start' referenced before assignment
Dear author: I just found a logic bug in the histogram code. I think an exception should be raised here.
```python
def make_histogram(values, bins):
    """Convert values into a histogram proto using logic from histogram.cc."""
    values = values.reshape(-1)
    counts, limits = np.histogram(values, bins=bins)
    limits = limits[1:]

    # void Histogram::EncodeToProto in histogram.cc
    for i, c in enumerate(counts):
        if c > 0:
            start = max(0, i - 1)
            break
    for i, c in enumerate(reversed(counts)):
        if c > 0:
            end = -(i)
            break
    counts = counts[start:end]
    limits = limits[start:end]
    sum_sq = values.dot(values)
    return HistogramProto(min=values.min(),
                          max=values.max(),
                          num=len(values),
                          sum=values.sum(),
                          sum_squares=sum_sq,
                          bucket_limit=limits,
                          bucket=counts)
```
If all the elements in `counts` are 0, there will be an error like this:
```
File "/home/shuxiaobo/TR-experiments/cli/train.py", line 62, in train
    writer.add_histogram(name + '/grad', param.grad.clone().cpu().data.numpy(), j)
File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/writer.py", line 395, in add_histogram
    self.file_writer.add_summary(histogram(tag, values, bins), global_step, walltime)
File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/summary.py", line 142, in histogram
    hist = make_histogram(values.astype(float), bins)
File "/home/shuxiaobo/python3/lib/python3.6/site-packages/tensorboardX/summary.py", line 162, in make_histogram
    counts = counts[start:end]
UnboundLocalError: local variable 'start' referenced before assignment
```
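A minimal sketch of the guard the report suggests - initialise `start`/`end` and fail with a clear message when every bucket count is zero - shown as standalone code rather than a patch to tensorboardX:

```python
# Illustrative only, not the tensorboardX source: make the empty-histogram
# case explicit instead of leaving start/end unassigned.
def find_nonzero_range(counts):
    start, end = None, None
    for i, c in enumerate(counts):
        if c > 0:
            start = max(0, i - 1)
            break
    for i, c in enumerate(reversed(counts)):
        if c > 0:
            end = len(counts) - i
            break
    if start is None or end is None:
        raise ValueError("all histogram bucket counts are zero; "
                         "cannot encode an empty histogram proto")
    return start, end


print(find_nonzero_range([0, 0, 3, 0]))  # (1, 3)
# find_nonzero_range([0, 0, 0, 0]) raises ValueError instead of UnboundLocalError
```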
closed
2018-09-06T03:28:00Z
2019-03-21T16:50:56Z
https://github.com/lanpa/tensorboardX/issues/222
[]
shuxiaobo
7
numpy/numpy
numpy
27,972
DOC: Unclear Documentation for debugging
### Issue with current documentation: The instructions for setting up `spin` and a debugger are unclear and tedious to follow. URL: https://numpy.org/devdocs/dev/development_environment.html I have followed the instructions there and, after multiple attempts, the spin build system does not seem to work for me. Maybe the documentation is missing something? ### Idea or request for content: _No response_
closed
2024-12-11T13:21:42Z
2024-12-18T10:37:02Z
https://github.com/numpy/numpy/issues/27972
[ "04 - Documentation", "33 - Question" ]
sai-srinivasan-v
8
kizniche/Mycodo
automation
1,324
Graph (Synchronous) [Highstock] widget not displaying past data beyond 24 h
### Versions: - Mycodo Version: 8.15.8 - Raspberry Pi Version: 400 Rev 1.1 - Raspbian OS Version: Bullseye ### Reproducibility 1. Add new Graph (Synchronous) [Highstock] Widget Configuration and any parameters to track. 2. Save graph, confirm it's rendering data correctly, and wait for more than 24 hours. 3. Clicking on the "Full" button in the top nav only shows data for 1 day (Fig. 1). Also, note not being able to click on prior days in the calendar drop-down (Fig. 2). ### Expected behavior I would expect there to be more data available beyond 24 h since I've had the graph successfully render data every day for 5 days now. ### Screenshots Fig. 1 ![image](https://github.com/kizniche/Mycodo/assets/69597/5ab4893c-e227-442a-90be-12dd067b54bd) Fig. 2 ![image](https://github.com/kizniche/Mycodo/assets/69597/ee5e70ae-bc42-44dd-8d34-5142229baa85) ### Additional context Probably missing something obvious in the software's config. Free Space (Input 06a2dd8c) Input (RPiFreeSpace), 15.0 second interval Measurement | Timestamp 2809.61 MB (Disk) CH0 | 2023/7/26 0:09:23
closed
2023-07-26T07:17:03Z
2023-07-26T12:17:00Z
https://github.com/kizniche/Mycodo/issues/1324
[]
andresn
1
chainer/chainer
numpy
7,717
Training fail when using create_mnbn_model() and Sequential()
When using the following model with `create_mnbn_model()` in the MNIST example, I got an error.

Model:
```
class BNMLP(chainer.Sequential):

    def __init__(self, n_units, n_out):
        super().__init__(
            # the size of the inputs to each layer will be inferred
            L.Linear(784, n_units),  # n_in -> n_units
            L.BatchNormalization(n_units),
            L.Linear(n_units, n_units),  # n_units -> n_units
            L.BatchNormalization(n_units),
            L.Linear(n_units, n_out),  # n_units -> n_out
        )
```
How the mnbn model is created:
```
model = chainermn.links.create_mnbn_model(
    L.Classifier(BNMLP(args.unit, 10)), comm)
```
closed
2019-07-06T05:50:55Z
2019-08-01T06:56:59Z
https://github.com/chainer/chainer/issues/7717
[ "cat:bug", "ChainerMN" ]
shu65
1
mitmproxy/mitmproxy
python
6,902
DHCP failure in Local Redirect mode (Windows)
#### Problem Description I've been experimenting with local redirect mode on a Windows 11 AWS machine, and noticed the machine loses all connectivity after mitmdump has been running for a while (could take up to an hour). In order to debug this issue I've set both proxy_debug=true & termlog_verbosity=debug, and noticed at the end of the log when the machine loses all network capabilities, there are several DHCP broadcasts (UDP ports 67 & 68) that looks like this: `*:68 -> udp -> 255.255.255.255:67` After observing this, I'm thinking WinDivert might be having some issues with re-injecting broadcast packets. I also found the following issue that could be relevant: https://github.com/basil00/Divert/issues/320 I recompiled the windows-redirector binary with a modified filter (could be a nice feature flag to customize the WinDivert filter with an argument), and it seemed that the problem ceased when the filter (on WinDivert) was set to exclude broadcasts. I can open a PR for this if you think the fix is appropriate but it's pretty straight forward, I changed these 2 filters: https://github.com/mitmproxy/mitmproxy_rs/blob/c30c9d8ffc41a453670a27909b2cb0d97abbbb81/mitmproxy-windows/redirector/src/main2.rs#L112 https://github.com/mitmproxy/mitmproxy_rs/blob/c30c9d8ffc41a453670a27909b2cb0d97abbbb81/mitmproxy-windows/redirector/src/main2.rs#L117 From "tcp || udp" to "remoteAddr != 255.255.255.255 && (tcp || udp)" #### Steps to reproduce the behavior: 1. Start up a Windows EC2 instance on AWS 2. Let mitmdump run for a while in local redirect mode 3. Wait for machine to lose connection (eg. RDP session will disconnect and no further connections will be possible) #### System Information EC2 instance on AWS Mitmproxy: 10.3.0 binary Python: 3.12.3 OpenSSL: OpenSSL 3.2.1 30 Jan 2024 Platform: Windows-11-10.0.22631-SP0
open
2024-06-06T15:15:45Z
2024-11-07T02:21:08Z
https://github.com/mitmproxy/mitmproxy/issues/6902
[ "kind/bug", "os/windows", "area/rust" ]
Osherz5
8
litl/backoff
asyncio
22
Add support for asyncio (Python>=3.4)
It would be great if backoff were available for use with asyncio's coroutines. This requires:

1. Handling coroutines in the `on_predicate` and `on_exception` decorators.
2. Handling the case when `on_success`/`on_backoff`/`on_giveup` are coroutines.
3. Using `asyncio.sleep()` instead of `time.sleep()`.
4. Conditionally installing/importing the required deps on Python < 3.4; tests; CI update.

Obviously the sync and async versions can't be trivially combined. This can be solved in one of the following ways (a sketch of option 1 follows below):

1. Check in `on_predicate`/`on_exception` whether the wrapped function is a coroutine and switch between sync and async implementations. Notice that in general `time.sleep` can't be used with asyncio (only in a separate thread) due to the nature of async code. This means that using both implementations, sync and async, in a single program will be very rare. Also, I don't see an easy way of sharing code between the sync/async versions. At the very least, the tests will be completely duplicated.
2. Reimplement `backoff` using async primitives in a separate library. Unfortunately this leads to code duplication.

As a starting point I forked `backoff` and reimplemented it with async primitives: https://github.com/rutsky/aiobackoff It passes all tests and now I'm trying to integrate it with my project.

Please share your ideas and intentions about implementing asyncio support in the backoff library; I would like to share efforts as much as possible. If there is no intention of adding asyncio support to `backoff`, I can publish the `aiobackoff` fork.
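To illustrate option 1, a rough standalone sketch of switching on whether the wrapped function is a coroutine (this is not backoff's actual implementation, just the shape of the idea):

```python
import asyncio
import functools
import time

def retry_on_exception(exc_type, max_tries=3):
    """Pick a sync or async wrapper depending on the decorated function."""
    def decorate(func):
        if asyncio.iscoroutinefunction(func):
            @functools.wraps(func)
            async def async_wrapper(*args, **kwargs):
                for attempt in range(max_tries):
                    try:
                        return await func(*args, **kwargs)
                    except exc_type:
                        if attempt == max_tries - 1:
                            raise
                        await asyncio.sleep(2 ** attempt)
            return async_wrapper

        @functools.wraps(func)
        def sync_wrapper(*args, **kwargs):
            for attempt in range(max_tries):
                try:
                    return func(*args, **kwargs)
                except exc_type:
                    if attempt == max_tries - 1:
                        raise
                    time.sleep(2 ** attempt)
        return sync_wrapper
    return decorate
```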
closed
2017-01-20T19:14:31Z
2017-02-06T20:29:11Z
https://github.com/litl/backoff/issues/22
[]
rutsky
17
ahmedfgad/GeneticAlgorithmPython
numpy
283
delay_after_gen warning
Hello, I am using PyGAD version: 3.3.1 on Windows with python 3.10 within jupyter notebook. When I run my GA, I am getting the following user warning. This is not something I am setting. It seems to emanate from the internal pygad code. How can I avoid having this warning displayed? Thank you ``` C:\Users\wassimj\AppData\Local\Programs\Python\Python310\lib\site-packages\pygad\pygad.py:1139: UserWarning: The 'delay_after_gen' parameter is deprecated starting from PyGAD 3.3.0. To delay or pause the evolution after each generation, assign a callback function/method to the 'on_generation' parameter to adds some time delay. ```
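In case it helps others until this is cleaned up, a small sketch of the two workarounds I can think of (the message substring used in the filter is copied from the warning above and may need adjusting for other versions):

```python
import time
import warnings

# Option 1: do any per-generation pause in on_generation instead of delay_after_gen,
# then pass on_generation=on_generation when constructing pygad.GA(...).
def on_generation(ga_instance):
    time.sleep(1.0)  # optional delay between generations

# Option 2: suppress just this deprecation warning.
warnings.filterwarnings(
    "ignore",
    message="The 'delay_after_gen' parameter is deprecated",
    category=UserWarning,
)
```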
open
2024-03-21T14:23:31Z
2025-01-07T19:16:43Z
https://github.com/ahmedfgad/GeneticAlgorithmPython/issues/283
[ "enhancement" ]
wassimj
4
plotly/dash
flask
2,519
[BUG] `dash.get_relative_path()` docstring out of date
Docstrings for `dash.get_relative_path()` and `dash.strip_relative_path()` still refer to the `app` way of accessing those functions, which creates inconsistency in the docs: ![Screen Shot 2023-05-02 at 2 44 32 PM](https://user-images.githubusercontent.com/4672118/235759684-d386ad8c-cee1-48a4-ba6c-9b54fb442440.png) https://dash.plotly.com/reference#dash.get_relative_path
closed
2023-05-02T18:57:09Z
2023-05-15T20:29:16Z
https://github.com/plotly/dash/issues/2519
[]
emilykl
0
MagicStack/asyncpg
asyncio
453
Prepared statements being recreated on every call of fetch
<!-- Thank you for reporting an issue/feature request. If this is a feature request, please disregard this template. If this is a bug report, please answer to the questions below. It will be much easier for us to fix the issue if a test case that reproduces the problem is provided, with clear instructions on how to run it. Thank you! --> * **asyncpg version**: 0.18.3 * **PostgreSQL version**: 9.4 * **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce the issue with a local PostgreSQL install?**: I use the official Docker image with custom schema and test data. * **Python version**: 3.7 * **Platform**: MacOS * **Do you use pgbouncer?**: No * **Did you install asyncpg with pip?**: Yes * **If you built asyncpg locally, which version of Cython did you use?**: - * **Can the issue be reproduced under both asyncio and [uvloop](https://github.com/magicstack/uvloop)?**: Yes <!-- Enter your issue details below this comment. --> Hello, I am facing a peculiar problem with the way prepared statements are handled. I use the following architecture: aiohttp application, which initializes a pool of 1 to 20 db connections on init. Data is periodically refreshed from the DB (once in a few minutes for most tables). I have a special class which handles the loading of data from DB and caches them to memory and to Redis (since multiple containers of the same app are running and I would like to minimize fetches from DB). This class is instantiated by a factory method which creates (besides other arguments) a `load` coroutine, which gets query passed into it by the factory. The queries have no parameters and are static through out the runtime. `load` functions works by getting a connection from the pool, and calling `connection.fetch` on the given query. As per my understanding, the query should then be turned into a prepared statement, cached into a builtin LRU cache, and reused in later calls. However, it seems that each call to `load` (which is periodic) gets a new LRU cache for some reason, creating the prepared statements anew. But when I run `connection.fetch` on `SELECT * FROM pg_prepared_statements` I see that the number of prepared statements held by the connection increases in each call of `fetch`. Indeed, adding some prints to `connection.py` I found out that the statements get recreated and put into the cache on each call, since the cache is empty. I thought that perhaps it is because the connections I get from the pool differ, but since `pg_prepared_statements` is local to a session (a connection?) I think this is not the case. Indeed, limiting the size of the pool to `max_size=1` did not solve this issue. This causes my Postgres to slowly drain more and more memory until the connections are reset. Disabling the LRU cache with `statement_cache_size=0` avoids this, but I believe that this behaviour is not intended. I tried to make a minimal reproducer but haven't yet succeeded.
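A simplified, self-contained sketch of the pattern described above (the DSN, the query and the pool sizes are illustrative, not my real code); this is roughly what I am trying to reduce the problem to:

```python
import asyncio
import asyncpg

QUERY = "SELECT 1"  # stands in for one of the static, parameterless queries

async def load(pool):
    async with pool.acquire() as conn:
        await conn.fetch(QUERY)
        # pg_prepared_statements is local to the session/connection
        return await conn.fetch("SELECT name FROM pg_prepared_statements")

async def main():
    pool = await asyncpg.create_pool(
        "postgresql://user:password@localhost/db", min_size=1, max_size=1)
    for i in range(3):
        stmts = await load(pool)
        print(f"call {i}: {len(stmts)} prepared statements")  # I'd expect this to stay constant
    await pool.close()

asyncio.run(main())
```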
closed
2019-06-06T17:13:37Z
2019-06-07T16:24:44Z
https://github.com/MagicStack/asyncpg/issues/453
[]
mklokocka
3
hatchet-dev/hatchet
fastapi
666
feat: Deduplicated enqueue
I'm wondering if hatchet has any built in support for any sort of deduplicated enqueue, where a task/step/workflow could be enqueued in an idempotent way i.e. deduplicated based on its parameters. I realize that there are some tricky details here, but this would be super nice.
closed
2024-06-27T22:26:43Z
2024-07-26T16:47:47Z
https://github.com/hatchet-dev/hatchet/issues/666
[ "enhancement" ]
colonelpanic8
10
OpenBB-finance/OpenBB
python
6,778
Starry-eyed Supporter (150 Points)
### What side quest or challenge are you solving? Get 5 people to star our repository ### Points 150 points ### Description _No response_ ### Provide proof that you've completed the task ![IMG-20241015-WA0002](https://github.com/user-attachments/assets/f6bbd793-136c-4baf-8b22-b3ca97d1b218) ![IMG-20241015-WA0003](https://github.com/user-attachments/assets/dde9eb69-c189-43d8-966c-009078e08612) ![IMG-20241014-WA0024](https://github.com/user-attachments/assets/32815a78-bc31-4200-8ac5-6f794efd4851) ![IMG-20241015-WA0009](https://github.com/user-attachments/assets/17981916-bbfe-4697-b891-5f103ac03cf7) ![IMG-20241015-WA0010(1)](https://github.com/user-attachments/assets/129de0b7-7d29-4d5a-b287-707431fb0cc9)
closed
2024-10-15T07:32:56Z
2024-10-22T14:44:16Z
https://github.com/OpenBB-finance/OpenBB/issues/6778
[]
antilpiyush89
10
AUTOMATIC1111/stable-diffusion-webui
deep-learning
15,419
[Bug]: Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
### Checklist - [X] The issue exists after disabling all extensions - [ ] The issue exists on a clean installation of webui - [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui - [ ] The issue exists in the current version of the webui - [ ] The issue has not been reported before recently - [ ] The issue has been reported before but has not been fixed yet ### What happened? Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### Steps to reproduce the problem Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### What should have happened? Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### What browsers do you use to access the UI ? Google Chrome ### Sysinfo Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ### Console logs ```Shell Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. ``` ### Additional information Error: CUDA error: the launch timed out and was terminated CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. For debugging consider passing CUDA_LAUNCH_BLOCKING=1. Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
open
2024-03-31T16:41:52Z
2024-04-26T09:04:03Z
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15419
[ "bug-report" ]
maxdjx
2
graphql-python/graphene-django
graphql
971
DjangoObjectType duplicate models breaks Relay node resolution
I have exactly the same issue as #107. The proposed solution no longer works How to do this now in the current state ?
open
2020-05-24T22:06:06Z
2022-01-25T14:48:05Z
https://github.com/graphql-python/graphene-django/issues/971
[ "🐛bug" ]
boolangery
8
microsoft/MMdnn
tensorflow
547
tensorflow->caffe resnet_v2_152 conversion error
Platform :ubuntu16.04 Python version:2.7 Source framework with version ):Tensorflow 1.12.0with GPU Destination framework with version :caffe1.0.0 Pre-trained model path (webpath or webdisk path): Running scripts: mmconvert -sf tensorflow -in imagenet_resnet_v2_152.ckpt.meta -iw imagenet_resnet_v2_152.ckpt --dstNodeName MMdnn_Output -df caffe -om tf_resnet Traceback (most recent call last): File "/usr/local/bin/mmconvert", line 11, in <module> load_entry_point('mmdnn==0.2.3', 'console_scripts', 'mmconvert')() File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convert.py", line 102, in _main ret = convertToIR._convert(ir_args) File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 62, in _convert from mmdnn.conversion.tensorflow.tensorflow_parser import TensorflowParser File "/usr/local/lib/python2.7/dist-packages/mmdnn/conversion/tensorflow/tensorflow_parser.py", line 15, in <module> from tensorflow.tools.graph_transforms import TransformGraph ImportError: No module named graph_transforms
closed
2019-01-07T04:13:53Z
2019-01-09T03:26:24Z
https://github.com/microsoft/MMdnn/issues/547
[ "invalid" ]
xubin19939
2
deepspeedai/DeepSpeed
machine-learning
5,770
Multi-node multi-GPUs training is slower than single-node multi-GPUs training[BUG]
**Describe the bug** I used ZeRO stage 3 of DeepSpeed for fine-tuning the Qwen2-1.5b model and observed that the **training speed of 2 nodes with 4 A10 GPUs in total is about twice as slow as that of a single node with 2 A10 GPUs**. Here are some details.

The training speed of 2 nodes with 4 A10 GPUs in total:
<img width="1521" alt="image (1)" src="https://github.com/user-attachments/assets/7b569170-4cdd-4851-8224-1ba404c83213">
<img width="671" alt="image" src="https://github.com/user-attachments/assets/edb69c6f-a2be-4c5a-acb0-7071fbd44ef8">

The training speed is about 8.68 s/iter, and the forward and backward latencies are 1.6 s and 2.51 s.

However, the training speed of a single node with 2 A10 GPUs in total:
![image (2)](https://github.com/user-attachments/assets/2aeea642-7158-4763-986b-7a3ecb77004c)
<img width="660" alt="image (3)" src="https://github.com/user-attachments/assets/463e50e5-8b67-45e5-9049-4a17557e8ef5">

The training speed is about 2.46 s/iter, and the forward and backward latencies are 357 ms and 673 ms.

The above results show that 2-node 4-GPU training is much slower than single-node training in the **forward and backward passes**. I thought it was a network bandwidth problem, but my calculation suggests it isn't, as follows: the average receiving and sending bandwidths during training were 8.74 Gbit/s and 9.28 Gbit/s, respectively. Model weights size: 1.5\*(10^8)\*16bit, gradient size: 1.5\*(10^8)\*16bit, so the communication cost is: 3\*1.5\*(10^8)\*16bit/2/(8.74\*(10^9))=0.41s. So I want to know what's wrong with these results? I'd like to ask for people's help. Thanks!
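For reference, a quick back-of-the-envelope check of the communication estimate above (the numbers mirror the figures in my description; the factor of 3 and the division by 2 follow the same assumptions as the formula):

```python
params = 1.5e8           # mirrors the 1.5*(10^8) figure used in the estimate above
bits_per_param = 16
bandwidth_bps = 8.74e9   # average receive bandwidth in bits/s

traffic_bits = 3 * params * bits_per_param
seconds = traffic_bits / 2 / bandwidth_bps
print(round(seconds, 2))  # -> 0.41
```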
closed
2024-07-15T06:51:08Z
2024-07-16T13:53:41Z
https://github.com/deepspeedai/DeepSpeed/issues/5770
[ "bug", "training" ]
yangkang2318
2
matplotlib/matplotlib
data-visualization
29,076
[Bug]: Calling start() multiple times on a macos timer doesn't stop the previous timer
### Bug summary When calling `timer.start(); timer.start()` on an already running timer, the previous timer should be stopped before starting a new one. On the macosx backend this causes two timers to be running under the hood. ### Code for reproduction ```Python import time import matplotlib.pyplot as plt timer = plt.figure().canvas.new_timer(interval=1000) timer.add_callback(lambda: print(f"{time.ctime()}")) timer.start() timer.start() plt.pause(2) ``` ### Actual outcome 4 prints, 2 per second ``` Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:28 2024 Tue Nov 5 09:07:28 2024 ``` ### Expected outcome 2 prints, 1 per second ``` Tue Nov 5 09:07:27 2024 Tue Nov 5 09:07:28 2024 ``` ### Additional information _No response_ ### Operating system macos ### Matplotlib Version main ### Matplotlib Backend macosx ### Python version _No response_ ### Jupyter version _No response_ ### Installation None
open
2024-11-05T16:11:01Z
2024-11-13T21:23:24Z
https://github.com/matplotlib/matplotlib/issues/29076
[ "GUI: MacOSX" ]
greglucas
0
django-oscar/django-oscar
django
3,768
Basket may be reused if an error happens after an order was placed
### Issue Summary If an error happens in `OrderPlacementMixin.handle_successful_order` (e.g. a mail can't be sent) then `PaymentDetailViews.submit` will subsequently thaw the basket (https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/checkout/views.py#L643). The basket will then remain open and thus re-used but the [default implementation](https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/order/utils.py#L34) of `OrderNumberGenerator.order_number` will prevent any order from being created from the same basket as that would result in [duplicate order numbers](https://github.com/django-oscar/django-oscar/blob/d076d04593acf2c6ff9423e94148bb491cad8bd9/src/oscar/apps/order/utils.py#L59). ### Steps to Reproduce 1. Create a basket 1. Make sure that you have a an order_placed email that would be sent 1. Ensure that sending an email will fail, e.g. by using the smtp backend with a server that doesn't exist 1. Submit the basket, observe an order object being created 1. Ensure that sending an email will succeed 1. Try submitting the basket again, subsequent submits fail because `There is already an order with number ...` 1. The user is now stuck with a basket that can't ever be submitted.
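One possible mitigation, as a sketch rather than a proposed fix for Oscar itself: an order number generator that is not derived from the basket id, so a re-submitted basket cannot collide. This assumes the order app is forked as per Oscar's customisation docs; the uuid scheme is just illustrative.

```python
import uuid

from oscar.apps.order.utils import OrderNumberGenerator as CoreOrderNumberGenerator


class OrderNumberGenerator(CoreOrderNumberGenerator):
    def order_number(self, basket):
        # Independent of basket.id, so retrying after a failed
        # handle_successful_order cannot produce a duplicate number.
        return uuid.uuid4().hex[:12].upper()
```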
open
2021-09-09T09:20:58Z
2024-07-05T07:08:10Z
https://github.com/django-oscar/django-oscar/issues/3768
[]
RaphaelKimmig
8
jofpin/trape
flask
108
[x] ERROR: [Error 32] El proceso no tiene acceso al archivo porque está siendo utilizado por otro proceso: 'trape.config'
After entering the ngrok token and the API key, it was supposed to let me continue, but it only throws the message - Congratulations! Successful configuration, now enjoy Trape! [x] ERROR: [Error 32] El proceso no tiene acceso al archivo porque está siendo utilizado por otro proceso: 'trape.config' (the process cannot access the file because it is being used by another process). Has this happened to anyone else, and could you tell me how you solved it? Thanks.
open
2018-11-30T05:51:50Z
2019-08-14T15:33:14Z
https://github.com/jofpin/trape/issues/108
[]
MynorFlorian
1
tiangolo/uvicorn-gunicorn-fastapi-docker
fastapi
48
Gunicorn instrumentation with statsd_host
I would like to configure`--statsd-host` for [Gunicorn instrumentation](https://docs.gunicorn.org/en/stable/instrumentation.html) in `tiangolo/uvicorn-gunicorn-fastapi:python3.7-alpine3.8`. So far, I have not had success e.g. with `echo "statsd_host = 'localhost:9125'" >> /gunicorn_conf.py` in `/apps/prestart.sh`. Is there a better way to try this and is is possible at all?
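In case it helps, here is the shape of a standalone Gunicorn config I would try; whether the image actually picks it up (for example via a `GUNICORN_CONF` environment variable) is an assumption on my part, and the path and prefix are illustrative:

```python
# /app/custom_gunicorn_conf.py
import multiprocessing

bind = "0.0.0.0:80"
workers = multiprocessing.cpu_count() * 2 + 1
worker_class = "uvicorn.workers.UvicornWorker"

# Gunicorn's built-in statsd instrumentation
statsd_host = "localhost:9125"
statsd_prefix = "myservice"
```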
closed
2020-05-30T09:29:05Z
2020-06-19T00:29:13Z
https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/48
[ "answered" ]
christian-2
4
feature-engine/feature_engine
scikit-learn
237
add feature scaling transformers
Whenever I want to scale some numerical features, I'm always importing transformers from sklearn.preprocessing. We know that the sklearn transformers don't take the variable names we want to scale (like feature_engine does) and they don't return a data frame. It becomes somewhat frustrating when using a Pipeline to preprocess the data, as the next transformer may need a data frame to transform the variables. It would be very useful to users building preprocessing Pipelines if we could add some feature scaling transformers like MinMaxScaler, StandardScaler, RobustScaler, etc.

As a simple alternative solution, we can create a custom transformer as given below:

```python
from sklearn.preprocessing import StandardScaler
from sklearn.base import BaseEstimator


class CustomScaler(BaseEstimator):
    def __init__(self, variables):
        self.variables = variables
        self.scaler = StandardScaler()

    def fit(self, X, y=None):
        self.scaler.fit(X[self.variables])
        return self

    def transform(self, X):
        # scale only the selected variables, keeping the DataFrame intact
        X[self.variables] = self.scaler.transform(X[self.variables])
        return X
```

**Additional context** Feature scaling is also an important feature engineering step for linear models. We can easily handle scalable variables in a preprocessing pipeline.
closed
2021-03-06T13:07:58Z
2021-04-12T16:16:49Z
https://github.com/feature-engine/feature_engine/issues/237
[]
ashok49473
3
vimalloc/flask-jwt-extended
flask
10
jwt decorator
Hi, I found an issue with `jwt_required` decorator, I don't understand why works when I used like: ``` @custom_api.route('/resellers/<token>/registrations', methods=['GET']) @jwt_required def get_resellers(token): ... ``` but NOT when: I'm using https://flask-restless.readthedocs.io/en/stable/ where I can use methods as preprocessor ``` @classmethod @jwt_required def get_many_preprocessor(cls, search_params=None, **kw): print "Here not work" ``` This worked me with `flask-jwt`, what could be?
closed
2016-10-16T15:32:27Z
2016-10-17T18:12:59Z
https://github.com/vimalloc/flask-jwt-extended/issues/10
[]
laranicolas
15
ludwig-ai/ludwig
data-science
4,035
Refactor BERTTokenizer
The [BERTTokenizer](https://github.com/ludwig-ai/ludwig/blob/00c51e0a286c3fa399a07a550e48d0f3deadc57d/ludwig/utils/tokenizers.py#L1109) is using torchtext. We want to remove torchtext as a dependency so this Tokenizer has to be refactored not using it.
open
2024-10-21T20:13:02Z
2025-01-08T12:20:12Z
https://github.com/ludwig-ai/ludwig/issues/4035
[ "help wanted", "dependency" ]
mhabedank
1
pyqtgraph/pyqtgraph
numpy
2,409
FR: Type Hints/Stubs
<!-- In the following, please describe your issue in detail! --> <!-- If some sections do not apply, just remove them. --> ### Short description <!-- This should summarize the issue. --> Feature request: Could you please add type hints to the source (and/or in stub files) and add a `py.typed` marker file per [PEP 561](https://peps.python.org/pep-0561/) ### Code to reproduce <!-- Please provide a minimal working example that reproduces the issue in the code block below. Ideally, this should be a full example someone else could run without additional setup. --> N/A ### Tested environment(s) N/A ### Additional context This is a feature request.
open
2022-09-07T20:14:23Z
2024-05-30T23:16:38Z
https://github.com/pyqtgraph/pyqtgraph/issues/2409
[]
adam-grant-hendry
44
kizniche/Mycodo
automation
814
Out of Disk Space
I have had Mycodo running for a while without any issues. I haven't been keeping an eye on it. Today I connected to my Raspberry Pi where Mycodo is installed and found that I am completely out of disk space. I am using Rasbian Lite and can't imagine what besides Mycodo would have filled up all the disk space. I have tried deleting some old Mycodo backups but even after doing so, the RPi reports that no space is available (when I use "df -h"). What could be causing this issue?
closed
2020-08-13T20:30:22Z
2020-08-14T05:11:22Z
https://github.com/kizniche/Mycodo/issues/814
[]
DarthPleurotus
18
Neoteroi/BlackSheep
asyncio
83
Add testing section to docs
It would be great to see the testing section in the documentation, especially would like to see service testing (di)
closed
2021-02-21T22:58:59Z
2021-02-25T11:39:00Z
https://github.com/Neoteroi/BlackSheep/issues/83
[]
Bloodielie
1
huggingface/datasets
machine-learning
7,001
Datasetbuilder Local Download FileNotFoundError
### Describe the bug

So I was trying to download a dataset and save it as parquet, following the [tutorial](https://huggingface.co/docs/datasets/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage) of Huggingface. However, during execution I hit a FileNotFoundError. I debugged the code and it seems there is a bug there: first it creates a .incomplete folder and, before moving its contents, the following code deletes the directory [Code](https://github.com/huggingface/datasets/blob/98fdc9e78e6d057ca66e58a37f49d6618aab8130/src/datasets/builder.py#L984), hence as a result I get:

```
FileNotFoundError: [Errno 2] No such file or directory: '~/data/Parquet/.incomplete '
```

### Steps to reproduce the bug

```python
from datasets import load_dataset_builder
from pathlib import Path

parquet_dir = "~/data/Parquet/"
Path(parquet_dir).mkdir(parents=True, exist_ok=True)

builder = load_dataset_builder(
    "rotten_tomatoes",
)
builder.download_and_prepare(parquet_dir, file_format="parquet")
```

### Expected behavior

Downloads the files and saves them as parquet.

### Environment info

Ubuntu, Python 3.10

```
datasets 2.19.1
```
open
2024-06-25T15:02:34Z
2024-06-25T15:21:19Z
https://github.com/huggingface/datasets/issues/7001
[]
purefall
1
xuebinqin/U-2-Net
computer-vision
367
Does it support person segmentation now?
The results of DeepLabV3 are not very good on distant images.
open
2023-11-10T07:15:56Z
2023-11-15T03:47:52Z
https://github.com/xuebinqin/U-2-Net/issues/367
[]
magic3584
2
automl/auto-sklearn
scikit-learn
996
Parallel processing with n_jobs > 1 failing on Azure Linux VM Ubuntu 20.04
Hello, Working with autosklearn has been a journey. Running small experiments for 1 hour on my local machine (Running WSL2 and the other with Majaro Linux) ends without problems. I have been using the argument `n_jobs=-1` to run it on all cores. Since I want to let the script running for 10 hour or so I want to be able to run this on a remote server that I can just leave on. After installing all the packages and setting an extra environment for autosklearn running the same script without `n_jobs` everithing runs fine. The erro that I am getting is the following ``` File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/context.py", line 283, in _Popen return Popen(process_obj) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 42, in _launch prep_data = spawn.get_preparation_data(process_obj._name) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/spawn.py", line 154, in get_preparation_data _check_not_importing_main() File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/multiprocessing/spawn.py", line 134, in _check_not_importing_main raise RuntimeError(''' RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase. This probably means that you are not using fork to start your child processes and you have forgotten to use the proper idiom in the main module: if __name__ == '__main__': freeze_support() ... The "freeze_support()" line can be omitted if the program is not going to be frozen to produce an executable. ``` the exceution freezes. the script do not continue after the error. 
I have to kill it and then this appears: ``` self.start(timeout=timeout) self.start(timeout=timeout) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/client.py", line 949, in start File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/client.py", line 949, in start sync(self.loop, self._start, **kwargs) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync e.wait(10) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait sync(self.loop, self._start, **kwargs) sync(self.loop, self._start, **kwargs) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/site-packages/distributed/utils.py", line 337, in sync signaled = self._cond.wait(timeout) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait e.wait(10) e.wait(10) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 558, in wait gotit = waiter.acquire(True, timeout) KeyboardInterrupt signaled = self._cond.wait(timeout) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait signaled = self._cond.wait(timeout) File "/home/elias/anaconda3/envs/autosklearn/lib/python3.8/threading.py", line 306, in wait gotit = waiter.acquire(True, timeout) gotit = waiter.acquire(True, timeout) KeyboardInterrupt ``` I am not able to debug this by myself. I am not an expert and do not know anything about dask or parallel computing. I would be happy if someone could help. Thanks in advance
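For reference, a minimal sketch of the `if __name__ == '__main__'` guard the traceback is asking for (the dataset, time budget and the rest are just illustrative):

```python
# run_autosklearn.py
import autosklearn.classification
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

def main():
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    automl = autosklearn.classification.AutoSklearnClassifier(
        time_left_for_this_task=60,
        n_jobs=-1,
    )
    automl.fit(X_train, y_train)
    print(automl.score(X_test, y_test))

if __name__ == "__main__":
    # Required because the spawn start method re-imports this module in child processes.
    main()
```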
closed
2020-11-05T09:41:05Z
2020-11-10T08:42:27Z
https://github.com/automl/auto-sklearn/issues/996
[]
ealvarezj
3
deeppavlov/DeepPavlov
nlp
1,220
GoBot breaks when calling Tensorflow
The GoBot example from tutrorial says at runtime: 2020-05-12 08:01:47.577 INFO in 'deeppavlov.core.data.simple_vocab'['simple_vocab'] at line 101: [saving vocabulary to /home/sgladkoff/Documents/MyWork/assistant_bot/word.dict] WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:37: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead. WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:222: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead. WARNING:tensorflow:From /home/sgladkoff/Documents/MyWork/env/lib/python3.7/site-packages/deeppavlov/core/models/tf_model.py:222: The name tf.train.Optimizer is deprecated. Please use tf.compat.v1.train.Optimizer instead. It looks like code in deeppavlov/core/models/tf_model.py does not correspond to current state of Tensorflow... Or am I doing smth wrong?
closed
2020-05-17T19:29:39Z
2020-05-17T19:32:56Z
https://github.com/deeppavlov/DeepPavlov/issues/1220
[]
sgladkoff
1
DistrictDataLabs/yellowbrick
matplotlib
701
Matplotlib version (>=3.0.0) backends don't support Yellowbrick
**Describe the issue** A clear and concise description of what the issue is. <!-- If you have a question, note that you can email us via our listserve: https://groups.google.com/forum/#!forum/yellowbrick --> <!-- This line alerts the Yellowbrick maintainers, feel free to use this @ address to alert us directly in follow up comments --> @DistrictDataLabs/team-oz-maintainers The error says YellowBrick 0.9 has requirement matplotlib <3.0 and >=1.5.1.So,the version of matplotlib without updating works fine.I will recommend users not to update matplotlib as its version 3.0.2's backends don't support yellowbrick.
closed
2019-01-29T01:20:11Z
2019-02-04T18:10:35Z
https://github.com/DistrictDataLabs/yellowbrick/issues/701
[ "type: question" ]
dnabanita7
2
liangliangyy/DjangoBlog
django
179
Facebook login
For Facebook login, I filled in the homepage URL as the callback address. After logging in I get the code, but it seems the login did not actually succeed and I cannot access other pages. I don't quite understand this part; could you explain it?
closed
2018-10-29T08:25:51Z
2018-11-03T11:35:21Z
https://github.com/liangliangyy/DjangoBlog/issues/179
[]
emunshe
1
litestar-org/litestar
pydantic
3,978
Docs: DTO tutorial should mention how to exclude from collection of nested union models
### Summary I was going through the documentation on [this](https://docs.litestar.dev/2/tutorials/dto-tutorial/03-nested-collection-exclude.html) it mentions how to exclude the fields from a collection of nested model. However, it needs an update where it mentions how to exclude fields if someone uses a collection of nested union models. For e.g. ```py # Assuming correct imports are in place @dataclass class Address: street: str city: str country: str @dataclass class Person: name: str age: int email: str address: Address children: list['Person' | None] # The DTO will change like: class ReadDTO(DataclassDTO[Person]): config = DTOConfig(exclude={"email", "address.street", "children.0.0.email", "children.0.0.address"}) # We need to provide two zeroes instead of one. ``` Now there is a line in the document which states `Given a generic type, with an arbitrary number of type parameters (e.g., GenericType[Type0, Type1, ..., TypeN]), we use the index of the type parameter to indicate which type the exclusion should refer to. For example, a.0.b, excludes the b field from the first type parameter of a, a.1.b excludes the b field from the second type parameter of a, and so on.` However, it is not very clear and an example in the document might help. ### Working without union (as per example) <img width="1526" alt="Image" src="https://github.com/user-attachments/assets/a271c46d-af8a-4c60-8a23-bebbfce53665" /> ### Union introduced <img width="1438" alt="Image" src="https://github.com/user-attachments/assets/36233b8d-839f-40b9-9217-fd7b9fcc1298" /> ### Union introduced (add extra zero to exclude) <img width="1447" alt="Image" src="https://github.com/user-attachments/assets/0f9b517c-8044-4367-819a-6a66dfb63082" /> > [!IMPORTANT] > Order of union matters too and therefore key must be changed based on the order.
open
2025-01-28T09:30:43Z
2025-01-28T09:30:43Z
https://github.com/litestar-org/litestar/issues/3978
[ "Documentation :books:" ]
HorusTheSonOfOsiris
0
miguelgrinberg/flasky
flask
373
TypeError: <flask_script.commands.Shell object at 0x0000000004D13C88>: 'dict' object is not callable
In windows 7 with Pycharm, For Chapter 5.10 "**Integration with the Python Shell**", register make_context: ‘def make_shell_context(): return dict(app=app,db=db,User=User,Role=Role) manager.add_command("shell", Shell(make_context=make_shell_context()))‘ when i run command "python app/welcome.py shell" in cmd or terminal in Pycharm, this error occurs: (venv) C:\Users\biont.liu\PycharmProjects\flasky>python app/welcome.py shell Traceback (most recent call last): File "app/welcome.py", line 106, in <module> manager.run() File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\__init__.py", line 417, in run result = self.handle(argv[0], argv[1:]) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\__init__.py", line 386, in handle res = handle(*args, **config) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 216, in __call__ return self.run(*args, **kwargs) File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 304, in run context = self.get_context() File "C:\Users\biont.liu\PycharmProjects\flasky\venv\lib\site-packages\flask_script\commands.py", line 293, in get_context return self.make_context() TypeError: <flask_script.commands.Shell object at 0x0000000004D13C88>: 'dict' object is not callable **when i return list/strings in make_shell_context(), it prompts "'list'/'str' object is not callable.”** **Further step:** 1. For command "python app/welcome.py db init": Creating directory C:\Users\biont.liu\PycharmProjects\flasky\migrations ... done Creating directory C:\Users\biont.liu\PycharmProjects\flasky\migrations\versions ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\alembic.ini ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\env.py ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\README ... done Generating C:\Users\biont.liu\PycharmProjects\flasky\migrations\script.py.mako ... done Please edit configuration/connection/logging settings in 'C:\\Users\\biont.liu\\PycharmProjects\\flasky\\migrations\\alembic.ini' before proceeding. 2. For command "python app/welcome.py db migrate -m "initial migration": INFO [alembic.runtime.migration] Context impl SQLiteImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL. INFO [alembic.env] No changes in schema detected. 3. For command "python app/welcome.py db upgrade": INFO [alembic.runtime.migration] Context impl SQLiteImpl. INFO [alembic.runtime.migration] Will assume non-transactional DDL.
closed
2018-08-03T03:45:35Z
2018-10-14T22:16:48Z
https://github.com/miguelgrinberg/flasky/issues/373
[ "question" ]
zi-l
3
axnsan12/drf-yasg
django
266
Add ability to document OPTIONS methods
Hey there, I'm trying to document a simple ModelViewSet on which I have overriden the OPTIONS method, but I can't figure out why the method does not appear in swagger. Can ```python class ViewpointViewSet(viewsets.ModelViewSet): serializer_class = ViewpointSerializer @swagger_auto_schema(operation_description="OPTIONS /viewpoints/") def options(self, request, *args, **kwargs): if self.request.user.is_anonymous: raise PermissionDenied viewpoint_labels = ViewpointLabelSerializer( Viewpoint.objects.all(), many=True ).data return Response({ 'viewpoints': viewpoint_labels, }) ```
open
2018-12-06T15:33:51Z
2025-03-07T12:27:20Z
https://github.com/axnsan12/drf-yasg/issues/266
[ "triage" ]
SebCorbin
6
nltk/nltk
nlp
3,193
Add support for a `sort` argument in WordNet methods
WordNet object methods support a series of methods, such as `hypernyms`, `antonyms`, etc., that under the hood use a private method called `_related`, that supports an argument called `sort` which by default is `True`. This argument sorts the output objects by name. For example, see: https://github.com/nltk/nltk/blob/e2d368e00ef806121aaa39f6e5f90d9f8243631b/nltk/corpus/reader/wordnet.py#L134-L135 However, in some cases, we don't need the output to be sorted, and we may be performing these operations multiple times and on long lists, which incurs in considerable penalties because of multiple needless sorting going on under the hood. Thus, I believe it'd be important that such methods supported a `sort` argument (as `_related` does), whose default value is `True` (to avoid breaking backward compatibility).
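To make the proposal concrete, a small sketch of what I mean, using the existing private method (requires the WordNet corpus to be downloaded; "@" should be the hypernym pointer symbol, but treat the details as an assumption, not the proposed final API):

```python
from nltk.corpus import wordnet as wn

def hypernyms_unsorted(synset):
    # Same result set as synset.hypernyms(), but exposing _related's existing sort flag.
    return synset._related("@", sort=False)

print(hypernyms_unsorted(wn.synset("dog.n.01")))
```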
closed
2023-10-10T18:35:15Z
2024-09-04T06:25:34Z
https://github.com/nltk/nltk/issues/3193
[ "enhancement" ]
bryant1410
22
home-assistant/core
python
140,903
Smart things integration gives "reached max subscriptions limit" even after it has been "fixed" on the new version
### The problem I know this have been an issue before, but it still happens at version 2025.3.3 even tho I waited 10 hours as people told me. It actually gives it no matter what I do with the device, turning on and off, turning volume up or down and etc. IF another issue exists for that(considering 2025.3.3) please redirect me there. ### What version of Home Assistant Core has the issue? core-2025.3.3 ### What was the last working version of Home Assistant Core? _No response_ ### What type of installation are you running? Home Assistant OS ### Integration causing the issue Smart Things ### Link to integration documentation on our website _No response_ ### Diagnostics information _No response_ ### Example YAML snippet ```yaml ``` ### Anything in the logs that might be useful for us? ```txt ``` ### Additional information _No response_
closed
2025-03-19T00:54:12Z
2025-03-22T10:17:06Z
https://github.com/home-assistant/core/issues/140903
[ "integration: smartthings" ]
C0dezin
13
iterative/dvc
data-science
10,391
dvc exp run (or dvc repro) in monorepo: inefficient crawling
# Bug Report <!-- ## Issue name Issue names must follow the pattern `command: description` where the command is the dvc command that you are trying to run. The description should describe the consequence of the bug. Example: `repro: doesn't detect input changes` --> ## Description In a monorepo scenario with a `.dvc` directory at the root of te monorepo and multiple subdirectory projects (each with their own `dvc.yaml` file), `dvc repro` seems to be checking the entire monorepo even when explicitly given a `dvc.yaml` file from a subdirectory (and even when run from that subdirectory). I am not sure why it does that but with a particularly large monorepo this can slow things down considerably. For example, with the example repo below when set to 1000 projects this increases the time to run simple experiments from about 2 seconds to about 24 seconds (1000 projects is a lot but they are very simple and their directory structure is also). Even if the other directories don't have a `dvc.yaml` file in them at all, `dvc repro` is still trying to collect stages from there (whereas I would expect it not to even look outside of the PWD). With `dvc exp run` the pattern is the same, only a bit more is going on there since the command does more than just `dvc repro` <!-- A clear and concise description of what the bug is. --> ### Reproduce There is a testing repo [here](https://github.com/iterative/monorepo-dvc-repro) with instructions on how to test this and reproduce the issue in the [README](https://github.com/iterative/monorepo-dvc-repro/blob/main/README.md). <!-- Step list of how to reproduce the bug --> <!-- Example: 1. dvc init 2. Copy dataset.zip to the directory 3. dvc add dataset.zip 4. dvc run -d dataset.zip -o model ./train.sh 5. modify dataset.zip 6. dvc repro --> ### Expected I would be expecting `dvc repro` to only scan the PWD of the `dvc.yaml` file (and its subdirectories) and not go through the entire directory tree. The same for `dvc exp run`. <!-- A clear and concise description of what you expect to happen. --> **Additional Information (if any):** Here are some logs that I generated with verbose runs of `dvc repro` and `dvc exp`. The first two are outputs when this is run from a single project in a monorepo with 5 projects in total (all of them with their own `dvc.yaml`). The last one is run in a monorepo with 2 projects, one of which does not contain any `dvc.yaml` file at all [dvc_repro.log](https://github.com/iterative/dvc/files/15040741/dvc_repro.log) [dvc_exp_run.log](https://github.com/iterative/dvc/files/15040743/dvc_exp_run.log) [dvc_exp_run_projects_wo_dvc.log](https://github.com/iterative/dvc/files/15040742/dvc_exp_run_projects_wo_dvc.log) <!-- Please check https://github.com/iterative/dvc/wiki/Debugging-DVC on ways to gather more information regarding the issue. If applicable, please also provide a `--verbose` output of the command, eg: `dvc add --verbose`. If the issue is regarding the performance, please attach the profiling information and the benchmark comparisons. -->
closed
2024-04-19T13:05:13Z
2024-04-19T13:38:40Z
https://github.com/iterative/dvc/issues/10391
[]
tibor-mach
1
vvbbnn00/WARP-Clash-API
flask
168
Account information is inaccurate; do I need to do anything?
The traffic always shows only 40 GB; I'm not sure whether this is a display error or an error in the traffic refreshing. ![image](https://github.com/vvbbnn00/WARP-Clash-API/assets/20707383/50f6f66f-e75e-42de-9b8e-367f09bdefb5) Also, do I need to update the Docker deployment? I deployed the service about a month ago. And are the current keys now shared everywhere, refreshed once and then thrown away?
closed
2024-04-05T20:12:09Z
2024-05-02T09:30:20Z
https://github.com/vvbbnn00/WARP-Clash-API/issues/168
[]
lhuanyun
1
desec-io/desec-stack
rest-api
569
api: support records that expire automatically?
open
2021-07-23T11:24:48Z
2021-07-23T12:00:57Z
https://github.com/desec-io/desec-stack/issues/569
[ "enhancement", "api" ]
peterthomassen
1
benbusby/whoogle-search
flask
117
[QUESTION] docker build on ARM
Hi, didn't want to file a bug report for this, because I'm sure it is not a bug per se, but rather an issue relating to architecture. I wanted to try whoogle on a Raspberry Pi, running Raspbian (an ARM version of Debian). I tried starting a Docker container, but that did not work because the code is compiled for x64, not ARM. So then I tried a docker build (using the command as instructed on your webpage). That fails to build though. It fails trying to make _cffi_backend.o, because it cannot find the header file ffi.h. I see that the -I for gcc is both /usr/include/ffi and /usr/include/libffi. I have libffi-dev installed, but both of those paths do not exist on my system. My system does have ffi.h, it's in /usr/include/arm-linux-gnueabihf/ So two questions: - Do you plan to include armv71 builds? - What do I need to change to make the docker build look in the right place? Any hints are appreciated! Thanks!
closed
2020-08-17T17:37:30Z
2020-09-04T20:32:30Z
https://github.com/benbusby/whoogle-search/issues/117
[ "question" ]
itenmeer
2
deepspeedai/DeepSpeed
pytorch
6,569
[BUG] The learning rate scheduler is being ignored in the first optimization step.
**Describe the bug** It appears that no matter how the learning rate scheduler is configured, during the first training step a learning rate of `1e-3` is always used. My understanding is that this happens because `lr_scheduler.step()` is always called after `optimizer.step()` and there is no initialization for the initial step learning rate. I'm currently using `optimizer.set_lr(0)` explicitly to work around this issue. It can be quite detrimental to full fine-tuning of LLMs (as opposed to just LoRA/PEFT training) as it throws the model off completely in the first training step. Is this intentional? If so, should this be fixed inside DeepSpeed or are we supposed to just use `optimizer.set_lr(0)` like I'm currently doing. In either case, I think this should be documented somewhere as it is somewhat surprising behavior. **To Reproduce** It's pretty easy to reproduce with any configuration you already have. I was using this when debugging and finding this out: ``` "optimizer": { "type": "AdamW", "params": { "betas": [0.9, 0.98], "eps": 1e-6, "weight_decay": 3e-6, }, }, "scheduler": { "type": "WarmupDecayLR", "params": { "warmup_min_lr": 0, "warmup_max_lr": 1e-4, "warmup_num_steps": 100, "total_num_steps": 1000, }, } ```
closed
2024-09-25T00:49:01Z
2024-10-14T20:59:38Z
https://github.com/deepspeedai/DeepSpeed/issues/6569
[ "bug", "training" ]
eaplatanios
1
smarie/python-pytest-cases
pytest
44
Error when using session- or module- scoped fixtures and fixture unions.
While trying to solve this : https://stackoverflow.com/questions/46909275/parametrizing-tests-depending-of-also-parametrized-values-in-pytest I found out that my proposal with union fixtures was not working correctly.
closed
2019-06-14T16:30:58Z
2019-06-14T16:34:05Z
https://github.com/smarie/python-pytest-cases/issues/44
[ "bug" ]
smarie
0
iMerica/dj-rest-auth
rest-api
254
Add ability to override the SocialLoginSerializer
In my application, I override the RegisterSerializer to do some further validation (I require users to have an invite before signing up). I'd like to do the same thing with social logins, but currently there is no way to override the SocialLoginSerializer in my application settings the way I can with the RegisterSerializer.
closed
2021-05-08T22:35:37Z
2021-06-20T16:00:05Z
https://github.com/iMerica/dj-rest-auth/issues/254
[]
cmraible
2
miLibris/flask-rest-jsonapi
sqlalchemy
152
Serialized relationship id should be string
According to [JSON:API](https://jsonapi.org/format/#document-resource-object-identification), the `id` field should always be a string, but this isn't the case when fetching relationships through `SqlalchemyDataLayer`. I suspect the problem originates here: https://github.com/miLibris/flask-rest-jsonapi/blob/b4cb5576b75ffaf6463abb8952edc8839b03463f/flask_rest_jsonapi/data_layers/alchemy.py#L256 @akira-dev I would be happy to contribute with a pull request if you consider this correct.
open
2019-05-08T10:44:23Z
2019-05-08T12:04:48Z
https://github.com/miLibris/flask-rest-jsonapi/issues/152
[]
vladmunteanu
2
d2l-ai/d2l-en
deep-learning
2,218
Bahdanau attention (attention mechanisms) tensorflow notebook build fails in Turkish version.
Currently, PR-63 (Fixing more typos) builds fail only for tensorflow notebook. I could not figure out the reason. @AnirudhDagar , I would be glad if you could take a look.
closed
2022-07-25T20:59:30Z
2022-08-01T05:08:26Z
https://github.com/d2l-ai/d2l-en/issues/2218
[]
semercim
1
nl8590687/ASRT_SpeechRecognition
tensorflow
146
A small suggestion about path handling
First of all, thanks a lot for sharing this work! I also have a small suggestion: when building paths, you could use the os.path.join() function, so there is no need to check which operating system is running and the code looks cleaner.
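For example (the path pieces are just illustrative):

```python
import os

# Works on both Windows and Linux without checking which OS is running.
wav_list_path = os.path.join("dataset", "train", "wav_list.txt")
print(wav_list_path)
```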
closed
2019-10-08T09:12:46Z
2021-11-22T14:08:17Z
https://github.com/nl8590687/ASRT_SpeechRecognition/issues/146
[]
Janfliegt
2
scikit-tda/kepler-mapper
data-visualization
37
Add support for new kinds of nerves: n-simplices and min-intersection.
Now that @michiexile abstracted a nerve class, we can include support for other nerves. Most obvious are a min-intersection nerve and an n-simplices nerve. Though I believe we will also need a custom nerve for multi-mapper and we could experiment with a multi-nerve. - [x] min-intersection - [ ] n-simplices ## min-intersection It would be nice to set a minimum number of points each cluster has to intersect on to be considered connected. ``` edges = nerve({"a": [1,2,3], "b": [2,3,4], "c": [4,5,6]}, min_intersection=2) -> ["a", "b"] in edges and ["b", "c"] not in edges ``` ## n-simplices It would be nice to have a general n-simplex nerve that constructs simplices of order n or less. Before building this, is there an established format for simplexes? Are there any libraries that we could use? Most promising simplicial complex libraries found in the wild: - [pycomplex](https://github.com/EelcoHoogendoorn/pycomplex) - [simplicial](https://github.com/simoninireland/simplicial) I'd prefer not to reinvent the wheel but I think a strong python simplicial complex library could be useful to the community.
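A minimal sketch of the min-intersection behaviour described above, independent of the new nerve class API (names and signature are illustrative):

```python
from itertools import combinations

def min_intersection_edges(nodes, min_intersection=1):
    """Connect two clusters only if they share at least min_intersection points."""
    edges = []
    for (name_a, members_a), (name_b, members_b) in combinations(nodes.items(), 2):
        if len(set(members_a) & set(members_b)) >= min_intersection:
            edges.append([name_a, name_b])
    return edges

# Matches the example above:
print(min_intersection_edges({"a": [1, 2, 3], "b": [2, 3, 4], "c": [4, 5, 6]},
                             min_intersection=2))
# -> [['a', 'b']]
```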
closed
2017-11-26T19:19:23Z
2018-04-29T21:16:50Z
https://github.com/scikit-tda/kepler-mapper/issues/37
[]
sauln
4
PaddlePaddle/ERNIE
nlp
92
Can the downloaded Baidu pre-trained model be used as a checkpoint to continue training?
The PyTorch version of BERT provides a language-model fine-tuning method that fine-tunes the pre-trained model at the language-model level, which can also be understood as continuing pre-training on top of the pre-trained model. For ERNIE, I have only found a way to fine-tune the downloaded model directly, and I would like to know whether there is a similar language-model fine-tuning method.
closed
2019-04-12T10:02:49Z
2019-04-15T02:15:18Z
https://github.com/PaddlePaddle/ERNIE/issues/92
[]
jeremyjiao
1
deepinsight/insightface
pytorch
1,896
Unable to get http://storage.insightface.ai/files/models/buffalo_l.zip
I think http://storage.insightface.ai is down, are there any alternative links for the model from which I can download the model and set it in `/root/.insightface/models/` manually? Code: ``` from insightface.app import FaceAnalysis app = FaceAnalysis() ``` Output: ``` download_path: /root/.insightface/models/buffalo_l Downloading /root/.insightface/models/buffalo_l.zip from http://storage.insightface.ai/files/models/buffalo_l.zip... --------------------------------------------------------------------------- TimeoutError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/urllib3/connection.py in _new_conn(self) 158 conn = connection.create_connection( --> 159 (self._dns_host, self.port), self.timeout, **extra_kw) 160 23 frames TimeoutError: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: NewConnectionError Traceback (most recent call last) NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out During handling of the above exception, another exception occurred: MaxRetryError Traceback (most recent call last) MaxRetryError: HTTPConnectionPool(host='storage.insightface.ai', port=80): Max retries exceeded with url: /files/models/buffalo_l.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out')) During handling of the above exception, another exception occurred: ConnectionError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/requests/adapters.py in send(self, request, stream, timeout, verify, cert, proxies) 514 raise SSLError(e, request=request) 515 --> 516 raise ConnectionError(e, request=request) 517 518 except ClosedPoolError as e: ConnectionError: HTTPConnectionPool(host='storage.insightface.ai', port=80): Max retries exceeded with url: /files/models/buffalo_l.zip (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fc7b71ce910>: Failed to establish a new connection: [Errno 110] Connection timed out')) ```
open
2022-01-27T11:19:15Z
2024-11-14T09:17:38Z
https://github.com/deepinsight/insightface/issues/1896
[]
sanidhyaagrawal
16
davidsandberg/facenet
tensorflow
927
Is it safe to say the selection of thresholds will not affect training process at all?
Thanks so much for this great framework. After reading the code, I feel that the selection of threshold only appears during evaluate(), which is after the training epoch. Can I say the choice of threshold does not have any impact on the training?
open
2018-12-05T18:33:12Z
2019-12-30T08:20:43Z
https://github.com/davidsandberg/facenet/issues/927
[]
qwangku
2
CorentinJ/Real-Time-Voice-Cloning
deep-learning
1,106
Generated output audio is only 1 sec.
Unfortunately, I could not find a solution in closed issues, so I hope for help. The final audio is only 1 second. What could cause this to happen?
open
2022-09-04T13:49:03Z
2022-09-04T13:49:03Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1106
[]
netrunner-exe
0
biolab/orange3
scikit-learn
6,820
On MacOS ARM some tests fail because some functions return different results
From https://github.com/biolab/orange3/actions/runs/9317726517/job/25648607002 ``` ====================================================================== FAIL: test_max_features_reg (Orange.tests.test_random_forest.RandomForestTest.test_max_features_reg) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/runner/work/orange3/orange3/.tox/orange-released/lib/python3.11/site-packages/Orange/tests/test_random_forest.py", line 134, in test_max_features_reg self.assertGreater(diff, 1.2) AssertionError: 0.030000000000001137 not greater than 1.2 ====================================================================== FAIL: test_info (Orange.widgets.evaluate.tests.test_owpermutationplot.TestOWPermutationPlot.test_info) ---------------------------------------------------------------------- Traceback (most recent call last): File "/Users/runner/work/orange3/orange3/.tox/orange-released/lib/python3.11/site-packages/Orange/widgets/evaluate/tests/test_owpermutationplot.py", line 94, in test_info self.assertIn("0.5021", self.widget._info.text()) AssertionError: '0.5021' not found in '\n<table width=100% align="center" style="font-size:11px">\n <tr style="background:#fefefe">\n <th style="background:transparent;padding: 2px 4px"></th>\n <th style="background:transparent;padding: 2px 4px">Corr = 0</th>\n <th style="background:transparent;padding: 2px 4px">Corr = 100</th>\n </tr>\n <tr style="background:#fefefe">\n <th style="padding: 2px 4px" align=right>Train</th>\n <td style="padding: 2px 4px" align=right>0.9980</td>\n <td style="padding: 2px 4px" align=right>0.9996</td>\n </tr>\n <tr style="background:#fefefe">\n <th style="padding: 2px 4px" align=right>CV</th>\n <td style="padding: 2px 4px" align=right>0.4978</td>\n <td style="padding: 2px 4px" align=right>0.8951</td>\n </tr>\n</table>\n ' ```
closed
2024-06-07T07:58:53Z
2024-06-28T08:08:54Z
https://github.com/biolab/orange3/issues/6820
[]
markotoplak
1
pandas-dev/pandas
pandas
61,058
DOC: Pivot() example call incorrectly used and would give "error: duplicate index"
### Pandas version checks

- [x] I have checked that the issue still exists on the latest versions of the docs on `main` [here](https://pandas.pydata.org/docs/dev/)

### Location of the documentation

doc/source/user_guide/reshaping.rst

### Documentation problem

The table given as an example for pivot() is wrong and can't be used. It would return "error: duplicate index", as there are duplicate values in the column given for the "index" parameter.

<img width="772" alt="Image" src="https://github.com/user-attachments/assets/d7198715-b74e-4c35-88f8-5a01ee8eefe4" />

### Suggested fix for documentation

The "foo" column must contain unique values.
open
2025-03-05T04:39:53Z
2025-03-05T21:21:16Z
https://github.com/pandas-dev/pandas/issues/61058
[ "Reshaping", "Error Reporting", "Needs Info" ]
mheskett
3
piskvorky/gensim
nlp
3,601
Need a wheel for python 3.13
When I am trying to install gensim with python 3.13, scipy is having trouble installing because it is trying to compile a new wheel but is unable to.
open
2025-02-10T23:17:11Z
2025-03-17T21:28:59Z
https://github.com/piskvorky/gensim/issues/3601
[]
mistahanish
7
plotly/dash-cytoscape
plotly
165
cose-bilkent does not work
A few issues with https://github.com/plotly/dash-cytoscape/blob/master/demos/usage-cose-bilkent-layout.py. It throws an error about the conflict between serving locally and external scripts: You have set your config to `serve_locally=True` but A local version of https://cdn.rawgit.com/cytoscape/cytoscape.js-cose-bilkent/d810281d/cytoscape-cose-bilkent.js is not available. If you added this file with `app.scripts.append_script` or `app.css.append_css`, use `external_scripts` or `external_stylesheets` instead. See https://dash.plotly.com/external-resources. It also seems that 'close-bilkent' was built into the code somewhere instead of 'cose-bilkent', because an error pops up saying it is looking for 'close-bilkent': Invalid argument `layout.name` passed into Cytoscape with ID "cytoscape". Expected one of ["random","preset","circle","concentric","grid","breadthfirst","cose","close-bilkent","cola","euler","spread","dagre","klay"]. The demo works if you replace the external script loading with cyto.load_extra_layouts() and then replace all instances of 'close-bilkent' in the local cyto files with 'cose-bilkent'.
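For reference, the workaround described in this report might look roughly like the minimal sketch below. It assumes `dash_cytoscape.load_extra_layouts()` behaves as documented for the package; the app structure and element list are made up for illustration.

```python
import dash
import dash_cytoscape as cyto

# Load the bundled extra layouts (including cose-bilkent) instead of
# pulling the plugin script from an external CDN.
cyto.load_extra_layouts()

app = dash.Dash(__name__)
app.layout = cyto.Cytoscape(
    id="cytoscape",
    elements=[
        {"data": {"id": "a", "label": "A"}},
        {"data": {"id": "b", "label": "B"}},
        {"data": {"source": "a", "target": "b"}},
    ],
    layout={"name": "cose-bilkent"},  # "cose-bilkent", not "close-bilkent"
    style={"width": "100%", "height": "400px"},
)

if __name__ == "__main__":
    app.run_server(debug=True)
```

Using `load_extra_layouts()` keeps everything served locally, so the `serve_locally` vs. external-scripts conflict from the demo does not arise.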
open
2022-02-16T17:22:49Z
2022-11-21T23:52:22Z
https://github.com/plotly/dash-cytoscape/issues/165
[]
hroyd
2
twopirllc/pandas-ta
pandas
443
Zig Zag Indicator
Hi, pandas_ta is a fantastic project, thanks for sharing it. Maybe it is a good idea to implement the Zig-Zag indicator. It is an indicator that is not based on periods but on % of change. This way it is easy to find the highest and lowest pivot points in the past, and it also makes it easy to see whether the trend is up or down. The indicator is explained in the link below: https://www.investopedia.com/terms/z/zig_zag_indicator.asp
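A minimal sketch of the percent-reversal logic this request describes — not part of pandas_ta; the function name, signature, and the 5% default are assumptions made purely for illustration:

```python
import pandas as pd

def zigzag_pivots(close: pd.Series, pct: float = 5.0) -> pd.Series:
    """Return confirmed swing highs/lows of `close` (hypothetical helper).

    A pivot is confirmed once price reverses by at least `pct` percent
    from the most recent extreme, so the indicator depends on the size
    of the move rather than on a fixed lookback period.
    """
    pivots = {}
    extreme_idx, extreme_val = close.index[0], float(close.iloc[0])
    direction = 0  # 0 = undecided, 1 = rising leg, -1 = falling leg

    for idx, price in close.items():
        change_pct = (price - extreme_val) / extreme_val * 100.0
        if direction >= 0 and change_pct <= -pct:
            pivots[extreme_idx] = extreme_val      # previous extreme was a swing high
            direction, extreme_idx, extreme_val = -1, idx, price
        elif direction <= 0 and change_pct >= pct:
            pivots[extreme_idx] = extreme_val      # previous extreme was a swing low
            direction, extreme_idx, extreme_val = 1, idx, price
        elif (direction >= 0 and price > extreme_val) or (direction < 0 and price < extreme_val):
            extreme_idx, extreme_val = idx, price  # extend the current leg

    return pd.Series(pivots, name=f"zigzag_{pct}pct")
```

For example, `zigzag_pivots(df["close"], pct=5.0)` would return a sparse Series indexed by the bars where a 5% reversal was confirmed; the extreme of the still-open final leg is intentionally left unconfirmed, as is usual for zig-zag implementations.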
closed
2021-11-29T13:47:44Z
2024-12-16T17:14:52Z
https://github.com/twopirllc/pandas-ta/issues/443
[ "enhancement", "help wanted", "good first issue" ]
AureliusMarcusHu
5
blacklanternsecurity/bbot
automation
2,098
Move /test folder
Our `/test` folder has reached 24MB, and we need to move it outside the main `bbot` folder so it's not packaged by PyPi.
open
2024-12-18T20:50:35Z
2025-02-28T15:02:44Z
https://github.com/blacklanternsecurity/bbot/issues/2098
[ "bug" ]
TheTechromancer
0
SALib/SALib
numpy
425
Two warning tests fail
Running `pytest` on 1.3.13 I get the following two failed tests: ``` _____________________________________________________________________________________________________________________ test_odd_num_levels_raises_warning ______________________________________________________________________________________________________________________ setup_param_file_with_groups = '/tmp/tmphtdhaumg' def test_odd_num_levels_raises_warning(setup_param_file_with_groups): parameter_file = setup_param_file_with_groups problem = read_param_file(parameter_file) with warnings.catch_warnings(record=True) as w: # Cause all warnings to always be triggered. warnings.simplefilter("always") # Trigger a warning. sample(problem, 10, num_levels=3) # Verify some things > assert len(w) == 1 E assert 2 == 1 E +2 E -1 tests/sample/morris/test_morris.py:55: AssertionError _______________________________________________________________________________________________________________________ test_even_num_levels_no_warning _______________________________________________________________________________________________________________________ setup_param_file_with_groups = '/tmp/tmpiur667kp' def test_even_num_levels_no_warning(setup_param_file_with_groups): parameter_file = setup_param_file_with_groups problem = read_param_file(parameter_file) with warnings.catch_warnings(record=True) as w: # Cause all warnings to always be triggered. warnings.simplefilter("always") # Trigger a warning. sample(problem, 10, num_levels=4) # Verify some things > assert len(w) == 0 E assert 1 == 0 E +1 E -0 tests/sample/morris/test_morris.py:70: AssertionError ```
closed
2021-06-09T14:28:40Z
2021-06-27T06:09:01Z
https://github.com/SALib/SALib/issues/425
[]
schmitts
6
Textualize/rich
python
3,110
[BUG] Opening and ending tag mismatch: meta line 3 and head
- [x] I've checked [docs](https://rich.readthedocs.io/en/latest/introduction.html) and [closed issues](https://github.com/Textualize/rich/issues?q=is%3Aissue+is%3Aclosed) for possible solutions. - [x] I can't find my issue in the [FAQ](https://github.com/Textualize/rich/blob/master/FAQ.md). **Describe the bug** Edit this with a clear and concise description of what the bug. ```py ansiText = "..." width = 80 console = Console(width=width, record=True, file=StringIO()) richText = text.Text.from_ansi(ansiText) console.print(richText) console.height = len(richText.wrap(console, width=width)) console.save_html(fileName) with sync_playwright() as p: install(p.chromium) browser = p.chromium.launch() page = browser.new_page() page.set_viewport_size({"width": console.width * 3, "height": console.height * 3}) page.goto(f"file:///{fileName}") page.screenshot(path=outFile, omit_background=True) browser.close() ``` Looks to be that the meta tag is not closed <meta charset="UTF-8"> ![image](https://github.com/Textualize/rich/assets/41634689/ff491241-b8bf-47ab-94bf-cebc075e27f5) Full implementation at https://github.com/FHPythonUtils/AnsiToImg/blob/master/ansitoimg/render.py **Platform** <details> <summary>Click to expand</summary> ``` ╭───────────────────────── <class 'rich.console.Console'> ─────────────────────────╮ │ A high level console interface. │ │ │ │ ╭──────────────────────────────────────────────────────────────────────────────╮ │ │ │ <console width=122 ColorSystem.TRUECOLOR> │ │ │ ╰──────────────────────────────────────────────────────────────────────────────╯ │ │ │ │ color_system = 'truecolor' │ │ encoding = 'utf-8' │ │ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │ │ height = 9 │ │ is_alt_screen = False │ │ is_dumb_terminal = False │ │ is_interactive = True │ │ is_jupyter = False │ │ is_terminal = True │ │ legacy_windows = False │ │ no_color = False │ │ options = ConsoleOptions( │ │ size=ConsoleDimensions(width=122, height=9), │ │ legacy_windows=False, │ │ min_width=1, │ │ max_width=122, │ │ is_terminal=True, │ │ encoding='utf-8', │ │ max_height=9, │ │ justify=None, │ │ overflow=None, │ │ no_wrap=False, │ │ highlight=None, │ │ markup=None, │ │ height=None │ │ ) │ │ quiet = False │ │ record = False │ │ safe_box = True │ │ size = ConsoleDimensions(width=122, height=9) │ │ soft_wrap = False │ │ stderr = False │ │ style = None │ │ tab_size = 8 │ │ width = 122 │ ╰──────────────────────────────────────────────────────────────────────────────────╯ ╭── <class 'rich._windows.WindowsConsoleFeatures'> ───╮ │ Windows features available. │ │ │ │ ╭─────────────────────────────────────────────────╮ │ │ │ WindowsConsoleFeatures(vt=True, truecolor=True) │ │ │ ╰─────────────────────────────────────────────────╯ │ │ │ │ truecolor = True │ │ vt = True │ ╰─────────────────────────────────────────────────────╯ ╭────── Environment Variables ───────╮ │ { │ │ 'TERM': None, │ │ 'COLORTERM': 'truecolor', │ │ 'CLICOLOR': None, │ │ 'NO_COLOR': None, │ │ 'TERM_PROGRAM': 'vscode', │ │ 'COLUMNS': None, │ │ 'JUPYTER_COLUMNS': None, │ │ 'JUPYTER_LINES': None, │ │ 'JPY_PARENT_PID': None, │ │ 'VSCODE_VERBOSE_LOGGING': None │ │ } │ ╰────────────────────────────────────╯ platform="Windows" rich==12.6.0 ``` </details>
closed
2023-08-28T12:59:40Z
2023-08-28T13:45:51Z
https://github.com/Textualize/rich/issues/3110
[ "Needs triage" ]
FredHappyface
4
psf/requests
python
5,929
latest version of requests is not backwards compatible
Summary. ## Expected Result requests(URL, headers, payload) returns payload What you expected. ## Actual Result Error, because new version of requests does not accept 3 parameters, only 2. It causes all python programs that use requests with 3 parameters to crash and need to be reprogrammed. What happened instead. ## Reproduction Steps ```python import requests r = requests.get(URL, headers=headers, params=payload) ``` ## System Information $ python -m requests.help ``` { "chardet": { "version": "3.0.4" }, "cryptography": { "version": "" }, "idna": { "version": "2.8" }, "implementation": { "name": "CPython", "version": "3.8.10" }, "platform": { "release": "5.11.0-27-generic", "system": "Linux" }, "pyOpenSSL": { "openssl_version": "", "version": null }, "requests": { "version": "2.22.0" }, "system_ssl": { "version": "1010106f" }, "urllib3": { "version": "1.25.8" }, "using_pyopenssl": false } ``` This command is only available on Requests v2.16.4 and greater. Otherwise, please provide some basic information about your system (Python version, operating system, &c).
closed
2021-09-05T17:07:36Z
2021-12-04T19:00:24Z
https://github.com/psf/requests/issues/5929
[]
jbrepogmailcom
3
ploomber/ploomber
jupyter
654
Specify template location for ploomber scaffold
It would be nice to specify the template location, whether a URL or a local directory, when calling ploomber scaffold. ploomber scaffold --from-template https://github.com/user/my-template ploomber scaffold --from-template ~/custom_template It'd also be nice to be able to set the default template for a user / all users (when using a system install of Python), like: ploomber scaffold --set-template https://github.com/user/my-template
open
2022-03-17T16:15:13Z
2023-12-14T06:05:05Z
https://github.com/ploomber/ploomber/issues/654
[]
hornste
6
CorentinJ/Real-Time-Voice-Cloning
tensorflow
1,279
AI voice cloning
open
2024-01-03T14:28:08Z
2024-01-03T14:28:08Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1279
[]
mohendra
0
hankcs/HanLP
nlp
1,098
Could you provide the corpus format for training named entities with the HMM and Viterbi?
<!-- The notes and version number are required, otherwise no reply. If you want a prompt reply, please fill in the template carefully. Thank you for your cooperation. --> ## Notes Please confirm the following: * I have carefully read the documents below and found no answer: - [Home page docs](https://github.com/hankcs/HanLP) - [wiki](https://github.com/hankcs/HanLP/wiki) - [FAQ](https://github.com/hankcs/HanLP/wiki/FAQ) * I have searched for my question via [Google](https://www.google.com/#newwindow=1&q=HanLP) and the [issue search](https://github.com/hankcs/HanLP/issues) and found no answer either. * I understand that the open-source community is a voluntary community of enthusiasts and bears no responsibility or obligation. I will speak politely and thank everyone who helps me. * [x] I put an x in these brackets to confirm the items above. ## Version <!-- For release versions, give the jar file name without the extension; for the GitHub repository version, state whether it is the master or portable branch --> The current latest version is: 1.7.1 The version I am using is: 1.7.1 <!-- The items above are required; below you may write freely --> ## My question <!-- Please describe the problem in detail; the more detail, the more likely it will be solved --> Hi hankcs, with your help I am now familiar with the perceptron workflow, so I would like to try training new entities with the HMM and Viterbi. I followed the wiki page https://github.com/hankcs/HanLP/wiki/%E8%A7%92%E8%89%B2%E6%A0%87%E6%B3%A8%E5%91%BD%E5%90%8D%E5%AE%9E%E4%BD%93 and wrote my own version, but the corpus files referenced in TestNRDctionaryMaker.java ("data/dictionary/2014_dictionary.txt" and D:\JavaProjects\CorpusToolBox\data\2014\) could not be found anywhere. Could you provide the format required for that corpus so that I can prepare training data for the new entities? I downloaded the latest data archive from the releases and did not find the corresponding files, and the page linked in issue https://github.com/hankcs/HanLP/issues/311 can no longer be found. Thank you for your time!
closed
2019-02-21T01:57:22Z
2019-03-06T08:31:42Z
https://github.com/hankcs/HanLP/issues/1098
[ "question" ]
HitomeRyuu
5
RobertCraigie/prisma-client-py
asyncio
95
Add support for setting httpx client options
## Problem <!-- A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] --> While running the `postgresql` integration tests on a slow connection I ran into a lot of `httpx.ReadTimeout` errors. ## Suggested solution <!-- A clear and concise description of what you want to happen. --> We should either increase the timeout option or allow users to set it themselves, maybe something like: ```py client = Client( http={ 'timeout': 10, }, ) ```
closed
2021-11-05T15:41:16Z
2021-12-29T11:04:31Z
https://github.com/RobertCraigie/prisma-client-py/issues/95
[ "kind/feature" ]
RobertCraigie
0
influxdata/influxdb-client-python
jupyter
24
Add possibility to write dictionary-style object
It will be useful for backward compatibility: - [v1 example](https://github.com/influxdata/influxdb-python#examples)
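A hedged sketch of what the requested dictionary-style write could look like, reusing the point shape from the v1 client's examples. The bucket, org, and token values are placeholders, and `record=` accepting a plain dict is precisely the feature this issue asks for, so treat this as the desired usage rather than a guaranteed API.

```python
from influxdb_client import InfluxDBClient

# v1-style dictionary point, as in the influxdb-python examples
point = {
    "measurement": "h2o_feet",
    "tags": {"location": "coyote_creek"},
    "fields": {"water_level": 1.0},
    "time": "2019-10-14T08:09:19Z",
}

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api()
write_api.write(bucket="my-bucket", org="my-org", record=point)
```

Keeping the same dict keys (`measurement`, `tags`, `fields`, `time`) would let users port v1 code with minimal changes.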
closed
2019-10-14T08:09:19Z
2019-10-15T09:22:59Z
https://github.com/influxdata/influxdb-client-python/issues/24
[ "enhancement" ]
bednar
0
syrupy-project/syrupy
pytest
17
assert_match does not fail assertion
**Describe the bug** `snapshot.assert_match(ANY)` always passed **To Reproduce** Steps to reproduce the behavior: 1. Replace with `snapshot.assert_match(None)` https://github.com/tophat/syrupy/blob/a5c46b15942e4d75c9470b24f3e2bfc9fa661087/tests/test_snapshots.py#L14 2. Run command `inv test` 3. See that test does not fail **Expected behavior** Should fail with the same error as `assert None == snapshot`
closed
2019-12-05T03:57:16Z
2019-12-29T17:22:28Z
https://github.com/syrupy-project/syrupy/issues/17
[]
iamogbz
0
streamlit/streamlit
streamlit
9,951
Tabs don't respond when using nested cache functions
### Checklist - [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues. - [X] I added a very descriptive title to this issue. - [X] I have provided sufficient information below to help reproduce this issue. ### Summary [![Open in Streamlit Cloud](https://static.streamlit.io/badges/streamlit_badge_black_white.svg)](https://issues.streamlitapp.com/?issue=gh-9951) I detected a critical problem with tabs when updating streamlit to a version newer than 1.35.0 (1.36 and up all have this problem). I found the issue on mobile, but it also reproduces on PC. In my app I have the following scenario: - Multiple tabs - Several of them call functions that are cached - And those functions also call (sometimes several times) nested cached functions. In version 1.35 everything works fine on mobile and PC, but when I tried to update to a newer version, I found that switching between tabs doesn't work (they become extremely unresponsive and the app seems to crash). This is weird because my understanding was that changing tabs didn't trigger any runs/calculations. If you erase all the @st.cache_data from the reproducible code example, all the code works just fine. So the problem seems to be that streamlit is doing something with the cached data when I try to switch tabs. ### Reproducible Code Example ```Python import streamlit as st st.header(body = "Testing problem switching tabs") @st.cache_data(ttl=None) def cached_func_level4(): return "test" @st.cache_data(ttl=None) def cached_func_level3(): return cached_func_level4() @st.cache_data(ttl=None) def cached_func_level2(): return cached_func_level3() @st.cache_data(ttl=None) def cached_func_level1(): return cached_func_level2() @st.cache_data(ttl=None) def cached_func_level0(): # If you iterate more times than 2000, the tab problem is even bigger for _ in range(2000): x = cached_func_level1() return x # In this testing tabs I only print a value and execute the # "root" cached function, which calls other cached funcs admin_tabs = st.tabs(["test1", "test2"]) with admin_tabs[0]: st.write("Hello") val = cached_func_level0() with admin_tabs[1]: st.write("World!") val = cached_func_level0() ``` ### Steps To Reproduce Just run streamlit and when the page renders try to switch between the tabs. ### Expected Behavior The expected behavior would be to be able to switch tabs without delay. ### Current Behavior Now the tabs crash when you try to switch between them, and the app either does not respond or responds extremely slowly. ### Is this a regression? - [X] Yes, this used to work in a previous version. ### Debug info - Streamlit version: 1.36 and up - Python version: 3.11.5 - Operating System: Windows and iOS - Browser: Testing in both Safari and Chrome ### Additional Information _No response_
closed
2024-12-01T16:47:01Z
2024-12-06T21:41:32Z
https://github.com/streamlit/streamlit/issues/9951
[ "type:bug", "feature:cache", "priority:P3" ]
ricardorfe
7
huggingface/datasets
tensorflow
6,808
Make convert_to_parquet CLI command create script branch
As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168 > When providing support, we sometimes suggest that users store their script in a script branch. What do you think of this alternative to deleting the files?
closed
2024-04-15T06:46:07Z
2024-04-17T08:38:19Z
https://github.com/huggingface/datasets/issues/6808
[ "enhancement" ]
albertvillanova
0
huggingface/datasets
numpy
6,726
Profiling for HF Filesystem shows there are easy performance gains to be made
### Describe the bug # Let's make it faster First, an evidence... ![image](https://github.com/huggingface/datasets/assets/159512661/a703a82c-43a0-426c-9d99-24c563d70965) Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106 seconds long. See? It's pretty slow. What is resolve pattern doing? ``` resolve_pattern called with **/train/** and hf://datasets/cerebras/SlimPajama-627B@2d0accdd58c5d5511943ca1f5ff0e3eb5e293543 resolve_pattern took 20.815081119537354 seconds ``` Makes sense. How to improve it? ## Bigger project, biggest payoff Databricks (and consequently, spark) store a compressed manifest file of the files contained in the remote filesystem. Then, you download one tiny file, decompress it, and all the operations are local instead of this shenanigans. It seems pretty straightforward to make dataset uploads compute a manifest and upload it alongside their data. This would make resolution time so fast that nobody would ever think about it again. It also means you either need to have the uploader compute it _every time_, or have a hook that computes it. ## Smaller project, immediate payoff: Be diligent in avoiding deepcopy Revise the _ls_tree method to avoid deepcopy: ``` def _ls_tree( self, path: str, recursive: bool = False, refresh: bool = False, revision: Optional[str] = None, expand_info: bool = True, ): ..... omitted ..... for path_info in tree: if isinstance(path_info, RepoFile): cache_path_info = { "name": root_path + "/" + path_info.path, "size": path_info.size, "type": "file", "blob_id": path_info.blob_id, "lfs": path_info.lfs, "last_commit": path_info.last_commit, "security": path_info.security, } else: cache_path_info = { "name": root_path + "/" + path_info.path, "size": 0, "type": "directory", "tree_id": path_info.tree_id, "last_commit": path_info.last_commit, } parent_path = self._parent(cache_path_info["name"]) self.dircache.setdefault(parent_path, []).append(cache_path_info) out.append(cache_path_info) return copy.deepcopy(out) # copy to not let users modify the dircache ``` Observe this deepcopy at the end. It is making a copy of a very simple data structure. We do not need to copy. We can simply generate the data structure twice instead. It will be much faster. ``` def _ls_tree( self, path: str, recursive: bool = False, refresh: bool = False, revision: Optional[str] = None, expand_info: bool = True, ): ..... omitted ..... def make_cache_path_info(path_info): if isinstance(path_info, RepoFile): return { "name": root_path + "/" + path_info.path, "size": path_info.size, "type": "file", "blob_id": path_info.blob_id, "lfs": path_info.lfs, "last_commit": path_info.last_commit, "security": path_info.security, } else: return { "name": root_path + "/" + path_info.path, "size": 0, "type": "directory", "tree_id": path_info.tree_id, "last_commit": path_info.last_commit, } for path_info in tree: cache_path_info = make_cache_path_info(path_info) out_cache_path_info = make_cache_path_info(path_info) # copy to not let users modify the dircache parent_path = self._parent(cache_path_info["name"]) self.dircache.setdefault(parent_path, []).append(cache_path_info) out.append(out_cache_path_info) return out ``` Note there is no longer a deepcopy in this method. We have replaced it with generating the output twice. This is substantially faster. For me, the entire resolution went from 1100s to 360s. 
## Medium project, medium payoff After the above change, we have this profile: ![image](https://github.com/huggingface/datasets/assets/159512661/db7b83da-2dfc-4c2e-abab-0ede9477876c) Figure 2: x-axis is 355 seconds. Note that globbing and _ls_tree deep copy is gone. No surprise there. It's much faster now, but we still spend ~187seconds in get_fs_token_paths. Well get_fs_token_paths is part of fsspec. We don't need to fix that because we can trust their developers to write high performance code. Probably the caller has misconfigured something. Let's take a look at the storage_options being provided to the filesystem that is constructed during this call. Ah yes, streaming_download_manager::_prepare_single_hop_path_and_storage_options. We know streaming download manager is not compatible with async right now, but we really need this specific part of the code to be async. We're spending so much time checking isDir on the remote filesystem, it's a huge waste. We can make the call easily 20-30x faster by using async, removing this performance bottleneck almost entirely (and reducing the total time of this part of the code to <30s. There is no reason to block async isDir calls for streaming. I'm not going to mess w/ this one myself; I didn't write the streaming impl, and I don't know how it works, but I know the isDir check can be async. ### Steps to reproduce the bug ``` with cProfile.Profile() as pr: pr.enable() # Begin Data if not os.path.exists(data_cache_dir): os.makedirs(data_cache_dir, exist_ok=True) training_dataset = load_dataset(training_dataset_name, split=training_split, cache_dir=data_cache_dir, streaming=True).take(training_slice) eval_dataset = load_dataset(eval_dataset_name, split=eval_split, cache_dir=data_cache_dir, streaming=True).take(eval_slice) # End Data pr.disable() pr.create_stats() if not os.path.exists(profiling_path): os.makedirs(profiling_path, exist_ok=True) pr.dump_stats(os.path.join(profiling_path, "cprofile.prof")) ``` run this code for "cerebras/SlimPajama-627B" and whatever other params ### Expected behavior Something better. ### Environment info - `datasets` version: 2.18.0 - Platform: Linux-5.15.146.1-microsoft-standard-WSL2-x86_64-with-glibc2.35 - Python version: 3.10.13 - `huggingface_hub` version: 0.21.3 - PyArrow version: 15.0.0 - Pandas version: 2.2.1 - `fsspec` version: 2024.2.0
open
2024-03-09T07:08:45Z
2024-03-09T07:11:08Z
https://github.com/huggingface/datasets/issues/6726
[]
awgr
2
dynaconf/dynaconf
django
1,075
[CI] Update codecov configuration file
I've closed #990 but forgot to create an issue about the codecov configuration issue. Apparently, we should have a `coverage.yml` file for the Codecov app/github-action, but I'm not sure how this will go with local coverage reports, which seem to use `.coveragerc`. This requires a little investigation. The goal here is to have: - a single config file (local reports and CI) - an up-to-date configuration file (as recommended in codecov docs): - as a consequence, we can more easily customize the codecov config (if we decide we need to)
open
2024-03-06T12:42:47Z
2024-03-06T12:42:48Z
https://github.com/dynaconf/dynaconf/issues/1075
[ "enhancement", "CI" ]
pedro-psb
0
plotly/dash-cytoscape
plotly
93
Clarify usage of yarn vs npm in contributing.md
Right now, it's not clear in `contributing.md` whether yarn or npm should be used. Since `yarn` was chosen to be the package manager, we should more clearly indicate that in contributing.
closed
2020-07-03T16:30:13Z
2020-07-08T23:05:37Z
https://github.com/plotly/dash-cytoscape/issues/93
[]
xhluca
1
aimhubio/aim
data-visualization
2,405
Integrate the ITKWidgets
## 🚀 Feature <!-- A clear and concise description of the feature proposal --> Integrate the itkwidgets to view 3d images ### Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too --> 3d Visualization for medical images besides the slicer ### Pitch <!-- A clear and concise description of what you want to happen. --> ### Alternatives <!-- A clear and concise description of any alternative solutions or features you've considered, if any. --> ### Additional context <!-- Add any other context or screenshots about the feature request here. -->
open
2022-12-06T10:34:20Z
2022-12-06T10:43:54Z
https://github.com/aimhubio/aim/issues/2405
[ "type / enhancement", "area / Web-UI", "area / SDK-storage" ]
luxunxiansheng
1
plotly/dash
data-science
2,354
How to access an iframe from an external source without an uncaught DOMException?
I have another website I own and I want to embed HTML from that website as iframes. I want to access some properties of the actual element (such as scroll height) to adjust the iframe in Dash. But I get `Uncaught DOMException: Blocked a frame with origin "http://localhost:8050" from accessing a cross-origin frame.`, which is to be expected. In Flask there's a way to whitelist other sites; is there a way to do this in Dash? Thank you!
closed
2022-12-06T04:29:40Z
2024-07-24T17:00:26Z
https://github.com/plotly/dash/issues/2354
[]
matthewyangcs
2