repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
deepset-ai/haystack | nlp | 9,104 | DOCX Converter does not resolve link information | **Is your feature request related to a problem? Please describe.**
DOCX Converter does not resolve the link information of named links. Link information is dropped and unavailable for chunking.
**Describe the solution you'd like**
As a user, I would like to specify the link format (similar to `table_format`), e.g. markdown: `[name-of-link](link-target)`.
**Additional context**
https://github.com/deepset-ai/haystack/blob/6db8f0a40d85f08c4f29a0dc951b79e55cbbf606/haystack/components/converters/docx.py#L205 | open | 2025-03-24T18:35:35Z | 2025-03-24T18:35:35Z | https://github.com/deepset-ai/haystack/issues/9104 | [] | deep-rloebbert | 0 |
mitmproxy/mitmproxy | python | 6,235 | Certificate has expired | #### Problem Description
I renewed my certificates because I encountered this message, but even after generating a new one, I still get it.
<img width="508" alt="image" src="https://github.com/mitmproxy/mitmproxy/assets/3736805/7a9c8e42-2a14-4cf0-a344-649bd523fab9">
When I try to use mitm:
`[11:28:54.204][127.0.0.1:49414] Server TLS handshake failed. Certificate verify failed: certificate has expired`
#### System Information
Mitmproxy: 9.0.1
Python: 3.9.1
OpenSSL: OpenSSL 3.0.7 1 Nov 2022
Platform: macOS-13.4.1-arm64-arm-64bit
| closed | 2023-07-08T10:36:14Z | 2023-07-08T10:50:13Z | https://github.com/mitmproxy/mitmproxy/issues/6235 | [
"kind/triage"
] | AndreMPCosta | 1 |
openapi-generators/openapi-python-client | rest-api | 675 | VSCode pylance raised `reportPrivateImportUsage` error. | **Describe the bug**
In VS Code, the current templates cause Pylance to raise the `reportPrivateImportUsage` error.
**To Reproduce**
try.py
```py
from the_client import AuthenticatedClient
from the_client.api.the_tag import get_stuff
from the_client.models import Stuff
```
pylance message:
```
"AuthenticatedClient" is not exported from module "the_client"
Import from "the_client.client" instead
```
```
"Stuff" is not exported from module "the_client.models"
Import from "the_client.models.stuff" instead`
```
**Expected behavior**
The class should be imported without any error message.
This can be fixed by adding an `__all__` declaration to the generated `__init__.py` modules.
ref: https://github.com/microsoft/pyright/blob/main/docs/typed-libraries.md#library-interface
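A minimal sketch of the fix (hypothetical: the module and class names come from my example above, and the exact re-export list depends on the generated client):
```py
# the_client/__init__.py — re-export the public API and declare it in
# __all__ so Pyright/Pylance treats it as part of the library interface.
from .client import AuthenticatedClient, Client

__all__ = ("AuthenticatedClient", "Client")
```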
**OpenAPI Spec File**
Not related to OpenAPI spec.
**Desktop (please complete the following information):**
- OS: win10
- Python Version: 3.10
- openapi-python-client version 0.11.6
| closed | 2022-09-26T18:42:32Z | 2022-09-26T23:22:43Z | https://github.com/openapi-generators/openapi-python-client/issues/675 | [
"🐞bug"
] | EltonChou | 1 |
chaoss/augur | data-visualization | 2,160 | Installation Extremely Difficult for Mac Users | **Description:**
Hello, I am working on installing and building augur from my mac and am running into numerous issues. I have been trying to follow the documentation as closely as possible, but haven't been able to install correctly.
At first I tried installing with VirtualBox, which I downloaded and set up exactly as the documentation described. However, I could not get the machine to boot: I got the error "The virtual machine failed to boot" (screenshot below). I am confident that I followed the documentation to a tee, and I have retried the installation three times with the same result.

After that failed, I tried to clone and install from the command line. I was able to clone the repo successfully and create a virtual environment. However, when I ran `make install` I got the error `Failed to build pandas psutil h5py greenlet`. I do not understand why: no error message was ever printed during installation, and I can install those packages just fine outside the virtual environment. Everything looked like it was running fine until the error message at the end. Any ideas on how to debug either of these two issues?
**Software versions:**
- Augur: v0.43.10
- OS: macOS (M1 chip) | closed | 2023-01-30T00:34:44Z | 2023-07-18T11:23:01Z | https://github.com/chaoss/augur/issues/2160 | [] | NerdyBaseballDude | 1 |
art049/odmantic | pydantic | 420 | Datetime fields not updating, and not updating consistently | Hi all,
I'm not sure whether this is a problem with my Pydantic model configuration, but I found that some datetime fields are not updating correctly or consistently (checked in MongoDB Compass).
# Bug
2 Problems:
1. When using Pydantic v2.5 with a `@model_validator` or `@field_validator` decorator on the model to handle the datetime update, the MongoDB engine does not update `datetime` fields on `session.save()`. I experimented with the same approach on fields of other data types, e.g. `str` and `UUID`, and both update consistently in the database.
2. Using `<collection>.model_update()` as recommended in the docs does update `datetime` fields correctly. However, the update frequency is inconsistent.
**Temporary fix**
To work around this for the time being, I used `<collection>.model_copy()` to make a quick copy before `session.save()`. This updates the document's datetime field at a consistent rate.
**Things to note:**
I'm saving/updating documents in MongoDB at rates of 1 doc per second and 3 docs per second.
What I noticed after digging around: inside `_BaseODMModel`, the datetime fields are not captured in `__fields_modified__`, and `is_type_mutable` returns `False`. Not sure if that helps.
### Current Behavior
```python
# Schema
from odmantic import Model, Field, bson
from odmantic.bson import BSON_TYPES_ENCODERS
from odmantic.config import ODMConfigDict
from pydantic import field_validator, UUID4, model_validator, BaseModel
from bson.binary import Binary
from datetime import datetime
from typing import Any
from uuid import uuid4
from typing_extensions import Self


class BookCollection(Model):
    id: UUID4 = Field(primary_field=True, default_factory=uuid4)
    book_id: UUID4
    borrower: str
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)

    model_config = ODMConfigDict(
        collection="BOOK_COLLECTION",
    )

    @model_validator(mode="before")  # type: ignore
    def update_field_updated_at(cls, input: dict) -> dict:
        input["updated_at"] = datetime.utcnow()
        return input

    # > This one works (the one above does not). I also tried @field_validator:
    # > only book_id updates, not updated_at (datetime).
    # @model_validator(mode="before")
    # def validate_field_book_id(cls, input: dict) -> dict:
    #     input["book_id"] = uuid4()
    #     return input
```
```python
@asynccontextmanager
async def get_mongodb() -> AsyncGenerator[MongoDB, None]:
    try:
        mongodb = MongoDB()
        # 01: start session
        await mongodb.start_session()
        yield mongodb
    finally:
        # 02: gracefully end session
        await mongodb.end_session()


async def func_update_id(new_id: str) -> None:
    async with get_mongodb() as mongodb:
        result = await mongodb.session.find_one(
            BookCollection,
            BookCollection.book_id == UUID("69e4ea5b-1751-4579-abb1-60c31e939c0f"),
        )
        if result is None:
            logger.error("Book doc not found!")
            return None
        else:
            # Method 1
            # Bug 1 > datetime does not save into MongoDB, but the returned model shows it updated.
            result.book_id = new_id
            data = await mongodb.session.save(result)

            # Method 2
            # Bug 2 > datetime saves into MongoDB, but not consistently.
            result.model_update({"book_id": new_id})
            data = await mongodb.session.save(result)

            # Quick fix
            result.book_id = new_id
            data = await mongodb.session.save(result.model_copy())
```

### Expected behavior
`datetime` fields should update in the MongoDB engine at a consistent rate when set via Pydantic decorators.
### Environment
- ODMantic version: "^1.0.0"
- MongoDB version: [6.00](mongo:6.0) Docker
- Pydantic: ^2.5
- Mongodb Compass: 1.35 | open | 2024-02-24T08:33:13Z | 2024-02-26T01:58:40Z | https://github.com/art049/odmantic/issues/420 | [
"bug"
] | TyroneTang | 0 |
keras-team/keras | machine-learning | 20,750 | Unrecognized keyword arguments passed to GRU: {'time_major': False} using TensorFlow 2.18 and Python 3.12 |
Hi everyone,
I encountered an issue when trying to load a Keras model saved as an `.h5` file. Here's the setup:
- **Python version**: 3.12
- **TensorFlow version**: 2.18
- **Code**:
```python
from tensorflow.keras.models import load_model
# Trying to load a pre-saved model
model = load_model('./resources/pd_voice_gru.h5')
```
- **Error message**:
```
ValueError: Unrecognized keyword arguments passed to GRU: {'time_major': False}
```
**What I've tried**:
1. Checked the TensorFlow and Keras versions compatibility.
2. Attempted to manually modify the `.h5` file (unsuccessfully).
3. Tried to re-save the model in a different format but don't have the original training script.
**Questions**:
1. Why is `time_major` causing this error in TensorFlow 2.18?
2. Is there a way to ignore or bypass this parameter during model loading?
3. If the issue is due to version incompatibility, which TensorFlow version should I use?
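For reference, a workaround I'm considering (untested sketch, my own assumption: subclass the layer so it swallows the removed Keras 2 argument, then pass it via `custom_objects`):
```python
from tensorflow.keras.layers import GRU
from tensorflow.keras.models import load_model

class CompatGRU(GRU):
    """GRU that ignores the legacy `time_major` kwarg removed in newer Keras."""
    def __init__(self, *args, **kwargs):
        kwargs.pop("time_major", None)  # drop the argument Keras no longer accepts
        super().__init__(*args, **kwargs)

# Tell the loader to build GRU layers with the compatible subclass.
model = load_model('./resources/pd_voice_gru.h5', custom_objects={"GRU": CompatGRU})
```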
Any help or suggestions would be greatly appreciated!
Thank you! | closed | 2025-01-11T08:31:25Z | 2025-02-10T02:02:08Z | https://github.com/keras-team/keras/issues/20750 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | HkThinker | 4 |
robotframework/robotframework | automation | 4,437 | The language(s) used to create the AST should be set as an attribute in the generated `File`. | I think this is interesting in general and needed if the language is ever going to be set in the `File` itself (because afterwards additional processing is needed to deal with bdd prefixes and this would need to be done based on information from the file and not from a global setting). | closed | 2022-08-16T17:59:54Z | 2022-08-19T12:13:58Z | https://github.com/robotframework/robotframework/issues/4437 | [] | fabioz | 1 |
2noise/ChatTTS | python | 756 | passing `past_key_values` as a tuple | We detected that you are passing `past_key_values` as a tuple and this is deprecated and will be removed in v4.43. Please use an appropriate `Cache` class (https://huggingface.co/docs/transformers/v4.41.3/en/internal/generation_utils#transformers.Cache) | open | 2024-09-16T06:55:11Z | 2024-09-17T12:58:01Z | https://github.com/2noise/ChatTTS/issues/756 | [
"enhancement",
"help wanted",
"algorithm"
] | xjlv1113 | 1 |
python-gino/gino | sqlalchemy | 191 | CRUDModel.get() does not work for string keys | * GINO version: 0.6.2
* Python version: 3.6.0
* Operating System: Ubuntu 17.10
### Description
`await SomeModel.get('this key is a string')` fails because the parameter is a string: GINO treats it as a sequence of composite-key values and tries to iterate it.
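A minimal repro sketch (hypothetical model; assumes a `str` primary key and that the `get` call runs inside a coroutine):
```python
from gino import Gino

db = Gino()

class SomeModel(db.Model):
    __tablename__ = "some_model"
    key = db.Column(db.String(), primary_key=True)

# GINO iterates the string character by character as if it were a
# tuple of composite primary-key values, so this call fails:
obj = await SomeModel.get("this key is a string")
```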
| closed | 2018-04-04T16:28:02Z | 2018-04-06T11:32:37Z | https://github.com/python-gino/gino/issues/191 | [
"bug"
] | aglassdarkly | 1 |
521xueweihan/HelloGitHub | python | 2,234 | Recommendation: VXE-table, the best Vue table library, and more than just tables | ## Project Recommendation
- Project URL: https://github.com/x-extends/vxe-table
- Category: JS
- Project title: The best table and form components in the Vue ecosystem
- Project description: Virtual-scrolling tables that can render huge datasets at unlimited length (including tree mode and row expansion); tree tables with lazy loading for big data; configuration-driven tables and forms; and a global renderer mechanism that wraps arbitrary components for declarative use inside tables and forms.
- Highlights: Aside from not providing general-purpose UI components, it can fulfill almost every requirement you can think of; its tables are leagues ahead of Element's and antdv's.
- Sample code (optional):
-- DOM:

```html
<vxe-grid v-bind="gridSettings" class="mg-top-md" @cell-dblClick="editRow"></vxe-grid>
```

-- JS:

```js
computed: {
  gridSettings () {
    return {
      ref: 'mainGrid',
      data: this.shiftData,
      height: 600,
      border: 'full',
      align: 'center',
      columns: [
        { field: 'shiftName', title: '班次名称', width: 180 },
        { field: 'dept', title: '班次部门', minWidth: 180 },
        { field: 'isFixed', title: '考勤分类', width: 100, formatter: ({ cellValue }) => cellValue ? '固定班次' : '其他班次' },
        {
          field: 'startAt', title: '上班时间', width: 100,
          formatter: ({ cellValue, row }) => this.getFixedTime(cellValue, row.flexMinutes)
        },
        {
          field: 'endAt', title: '下班时间', width: 100,
          formatter: ({ cellValue, row }) => this.getFixedTime(cellValue, row.flexMinutes)
        },
        { field: 'breakFrom', title: '间休开始时间', width: 120, formatter: ({ cellValue }) => this.formatTime(cellValue) },
        { field: 'breakTo', title: '间休结束时间', width: 120, formatter: ({ cellValue }) => this.formatTime(cellValue) },
        {
          field: 'weekendType',
          title: '休息日',
          width: 260,
          formatter: ({ cellValue }) => cellValue ? this.weekendTypeOptions.find(f => f.value === cellValue).label : '周末双休'
        },
        {
          field: 'operation',
          title: '操作',
          width: 220,
          fixed: 'right',
          cellRender: {
            name: '$buttons',
            children: [
              {
                props: { content: '分配部门', status: 'primary', icon: 'fa fa-wrench' },
                events: { click: this.startDistribution }
              },
              {
                props: { content: '删除', status: 'danger', icon: 'fa fa-trash' },
                events: { click: this.deleteShift }
              }
            ]
          }
        }
      ]
    }
  },
}
```
- Screenshots (optional): gif/png/jpg
- Future update plans:
| closed | 2022-06-01T06:27:44Z | 2022-07-28T01:31:03Z | https://github.com/521xueweihan/HelloGitHub/issues/2234 | [
"已发布",
"JavaScript 项目"
] | adoin | 1 |
lux-org/lux | pandas | 362 | [BUG] Matplotlib code missing computed data for BarChart, LineChart and ScatterChart | **Describe the bug**
Without `self.code += f"df = pd.DataFrame({str(self.data.to_dict())})\n"`, the exported BarChart, LineChart, and ScatterChart code throws an error whenever the chart contains computed data.
**To Reproduce**
```
df = pd.read_csv('https://github.com/lux-org/lux-datasets/blob/master/data/hpi.csv?raw=true')
df
```
```
vis = df.recommendation["Occurrence"][0]
vis
print (vis.to_code("matplotlib"))
```
**Expected behavior**
Should render single BarChart
**Screenshots**
<img width="827" alt="Screen Shot 2021-04-15 at 11 22 25 AM" src="https://user-images.githubusercontent.com/11529801/114919503-2b04cc00-9ddd-11eb-90b1-1db3e59caa68.png">
Expected:
<img width="856" alt="Screen Shot 2021-04-15 at 11 22 51 AM" src="https://user-images.githubusercontent.com/11529801/114919507-2d672600-9ddd-11eb-8206-10801c9eb055.png">
| open | 2021-04-15T18:25:33Z | 2021-04-15T20:47:32Z | https://github.com/lux-org/lux/issues/362 | [
"bug"
] | caitlynachen | 0 |
docarray/docarray | fastapi | 1,824 | Integrate new Vector search library from Spotify | ### Initial Checks
- [X] I have searched Google & GitHub for similar requests and couldn't find anything
- [X] I have read and followed [the docs](https://docs.docarray.org) and still think this feature is missing
### Description
Spotify has released Voyager (https://github.com/spotify/voyager), a new vector search library.
I propose adding a VoyagerIndex, which I expect to be very similar to HNSWIndex.
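A usage sketch of what I have in mind (hypothetical: `VoyagerIndex` does not exist yet; I'm assuming an API mirroring `HnswDocumentIndex`):
```python
from docarray import BaseDoc
from docarray.typing import NdArray
# from docarray.index import VoyagerIndex  # proposed, does not exist yet

class MyDoc(BaseDoc):
    embedding: NdArray[128]

# index = VoyagerIndex[MyDoc](work_dir="./voyager_index")
# index.index(docs)
# matches, scores = index.find(query_doc, search_field="embedding", limit=10)
```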
### Affected Components
- [X] [Vector Database / Index](https://docs.docarray.org/user_guide/storing/docindex/)
- [ ] [Representing](https://docs.docarray.org/user_guide/representing/first_step)
- [ ] [Sending](https://docs.docarray.org/user_guide/sending/first_step/)
- [ ] [storing](https://docs.docarray.org/user_guide/storing/first_step/)
- [ ] [multi modal data type](https://docs.docarray.org/data_types/first_steps/) | open | 2023-10-21T06:57:44Z | 2023-12-12T08:23:27Z | https://github.com/docarray/docarray/issues/1824 | [] | JoanFM | 3 |
pydantic/pydantic-ai | pydantic | 915 | Evals | (Or Evils as I'm coming to think of them)
We want to build an open-source, sane way to score the performance of LLM calls that is:
* local first - so you don't need to use a service
* flexible enough to work with whatever best practice emerges — ideally usable for any code that is stochastic enough to require scoring beyond passed/failed (that means LLM SDKs directly or even other agent frameworks)
* usable both for "offline evals" (unit-test style checks on performance) and "online evals" measuring performance in production or equivalent (presumably using an observability platform like Pydantic Logfire)
* usable with Pydantic Logfire when and where that actually helps
I believe @dmontagu has a plan. | open | 2025-02-12T20:43:19Z | 2025-02-12T21:06:33Z | https://github.com/pydantic/pydantic-ai/issues/915 | [
"Feature request"
] | samuelcolvin | 0 |
AutoGPTQ/AutoGPTQ | nlp | 20 | Install fails due to missing nvcc | Hi,
Thanks for this package, it seems very promising!
I've followed the instructions to install from source, but I get an error. Here are what I believe are the relevant lines:
```
copying auto_gptq/nn_modules/triton_utils/__init__.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
copying auto_gptq/nn_modules/triton_utils/custom_autotune.py -> build/lib.linux-x86_64-cpython-310/auto_gptq/nn_modules/triton_utils
running build_ext
/opt/miniconda3/lib/python3.10/site-packages/torch/utils/cpp_extension.py:476: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
error: [Errno 2] No such file or directory: '/usr/local/cuda/bin/nvcc'
```
It looks like I don't have nvcc installed, but apparently nvcc [isn't necessary](https://discuss.pytorch.org/t/pytorch-finds-cuda-despite-nvcc-not-found/166754) for PyTorch to run fine. My `nvidia-smi` command works fine.
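For context, my understanding (an assumption on my part) is that the extension build goes through `torch.utils.cpp_extension`, which resolves the toolkit location via `CUDA_HOME`:
```python
# Quick check of where the build looks for nvcc.
import os
from torch.utils.cpp_extension import CUDA_HOME

print(CUDA_HOME)  # e.g. /usr/local/cuda; None means no toolkit was found
if CUDA_HOME:
    print(os.path.exists(os.path.join(CUDA_HOME, "bin", "nvcc")))
```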
Is there some way to remove the dependency on nvcc?
Thanks,
Dave | closed | 2023-04-26T18:22:51Z | 2023-04-26T19:47:21Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/20 | [] | dwadden | 2 |
deepset-ai/haystack | nlp | 8,624 | Pyright type check breaks with version 2.8.0 | **Describe the bug**
The instantiation of a class decorated with the `@component` decorator fails with a type check error. Also, the type checker does not recognize the parameters.
This seems related to the deprecation of the `is_greedy` parameter, as it is the only change between this release and the previous one (2.7.0).
**Error message**
`Argument missing for parameter "cls"`
**Expected behavior**
No type-check error.
**To Reproduce**
Type-check any pipeline instantiated with the library using Pyright
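For example, a minimal sketch like this (hypothetical component) triggers the error under Pyright:
```python
from haystack import component

@component
class Echo:
    @component.output_types(text=str)
    def run(self, text: str):
        return {"text": text}

echo = Echo()  # Pyright with haystack-ai 2.8.0: Argument missing for parameter "cls"
```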
**FAQ Check**
- [x] Have you had a look at [our new FAQ page](https://docs.haystack.deepset.ai/docs/faq)?
**System:**
- OS: linux
- GPU/CPU: cpu
- Haystack version (commit or version number): 2.8.0
- DocumentStore: N/A
- Reader: N/A
- Retriever: N/A
| closed | 2024-12-11T11:23:06Z | 2025-01-27T13:52:25Z | https://github.com/deepset-ai/haystack/issues/8624 | [
"P2"
] | Calcifer777 | 2 |
QuivrHQ/quivr | api | 3,456 | Enable text-to-database retrieval | We should enable the connection of our RAG system with our customers' internal databases.
This requires developing capabilities for text-to-sql, text-to-cypher, etc…
* text-to-sql:
* [https://medium.com/pinterest-engineering/how-we-built-text-to-sql-at-pinterest-30bad30dabff](https://medium.com/pinterest-engineering/how-we-built-text-to-sql-at-pinterest-30bad30dabff)
* [https://vanna.ai/docs/](https://vanna.ai/docs/)
* benchmark: [https://bird-bench.github.io/](https://bird-bench.github.io/) | closed | 2024-11-04T16:37:18Z | 2025-02-07T20:06:30Z | https://github.com/QuivrHQ/quivr/issues/3456 | [
"Stale",
"area: backend",
"rag: retrieval"
] | jacopo-chevallard | 2 |
jacobgil/pytorch-grad-cam | computer-vision | 483 | Memory leakage during multiple loss calculations | Thank you for publishing this package; it is very useful for the community. However, when I perform multiple loss calculations, memory keeps growing and is never released, so memory usage increases continuously during training. Even when I create a new CAM object in each loop iteration and delete it at the end of the iteration, memory still accumulates. Have you thought about why this happens, and how can I avoid it?
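A simplified sketch of the loop I'm describing (placeholder model and layer names; the real code computes a CAM-based term inside the loss):
```python
from pytorch_grad_cam import GradCAM

for step, batch in enumerate(loader):
    # Recreate the CAM object every iteration, as in my training code.
    cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
    grayscale_cam = cam(input_tensor=batch)  # used in the loss computation
    # ... compute the loss from grayscale_cam, backprop, optimizer step ...
    del cam  # memory is still not released after this
```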

| open | 2024-02-27T04:08:26Z | 2025-03-14T12:00:44Z | https://github.com/jacobgil/pytorch-grad-cam/issues/483 | [] | IzumiKDl | 2 |
airtai/faststream | asyncio | 1,676 | Feature: Add ability to specify on_assign, on_revoke, and on_lost callbacks for a Confluent subscriber | **Is your feature request related to a problem? Please describe.**
Yes, I want to know when my confluent Consumer gets topic partitions assigned and removed.
Currently, I reach through FastStream into `confluent_kafka.Consumer.assignment()` every time my k8s liveness probe runs, but it's noisy and, most notably, not triggered *right* when the rebalance happens.
At times I may even want to do something with the information beyond logging: clear some cached state, cancel some running threads/processes, etc.
**Describe the solution you'd like**
I want to specify at the subscriber registration level the callbacks that I want called, and for FastStream to pass them into the
confluent_kafka.Consumer.subscribe() call inside AsyncConfluentConsumer.
**Feature code example**
```python
from faststream import FastStream
...

broker = KafkaBroker(...)

@broker.subscriber(
    "my-topic",
    on_assign=lambda consumer, partitions: ...,
    on_revoke=lambda consumer, partitions: ...,
)
def my_handler(body: str):
    print(body)
```
**Describe alternatives you've considered**
I monkey patch AsyncConfluentConsumer at import time in the FastStream library.
```python
import faststream.confluent.broker.broker
from faststream.confluent.broker.broker import AsyncConfluentConsumer
from observing.observing import logger


class PatchedAsyncConfluentConsumer(AsyncConfluentConsumer):
    """A patched version of the AsyncConfluentConsumer class."""

    def __init__(self, *topics, **kwargs):
        super().__init__(*topics, **kwargs)
        self.topic_partitions = set()

    def on_revoke(self, consumer, partitions):
        """Tracks and logs partitions when they are revoked."""
        self.topic_partitions -= set(partitions)
        logger.info(
            "Consumer rebalance event: partitions revoked.",
            topic_partitions=dict(
                n_revoked=len(partitions),
                revoked=[
                    dict(topic=tp.topic, partition=tp.partition) for tp in partitions
                ],
                n_current=len(self.topic_partitions),
                current=[
                    dict(topic=tp.topic, partition=tp.partition)
                    for tp in self.topic_partitions
                ],
            ),
            memberid=self.consumer.memberid(),
            topics=self.topics,
            config=dict(
                group_id=self.config.get("group.id"),
                group_instance_id=self.config.get("group.instance.id"),
            ),
        )

    def on_assign(self, consumer, partitions):
        """Tracks and logs partitions when they are assigned."""
        self.topic_partitions |= set(partitions)
        logger.info(
            "Consumer rebalance event: partitions assigned.",
            topic_partitions=dict(
                n_assigned=len(partitions),
                assigned=[
                    dict(topic=tp.topic, partition=tp.partition) for tp in partitions
                ],
                n_current=len(self.topic_partitions),
                current=[
                    dict(topic=tp.topic, partition=tp.partition)
                    for tp in self.topic_partitions
                ],
            ),
            memberid=self.consumer.memberid(),
            topics=self.topics,
            config=dict(
                group_id=self.config.get("group.id"),
                group_instance_id=self.config.get("group.instance.id"),
            ),
        )

    async def start(self) -> None:
        """Starts the Kafka consumer and subscribes to the specified topics."""
        self.consumer.subscribe(
            self.topics, on_revoke=self.on_revoke, on_assign=self.on_assign
        )


def patch_async_confluent_consumer():
    logger.info("Patching AsyncConfluentConsumer.")
    faststream.confluent.broker.broker.AsyncConfluentConsumer = (
        PatchedAsyncConfluentConsumer
    )
```
Obviously, this is ideal for no one.
**Additional context**
| open | 2024-08-12T20:14:54Z | 2024-08-21T18:49:45Z | https://github.com/airtai/faststream/issues/1676 | [
"enhancement",
"good first issue",
"Confluent"
] | andreaimprovised | 0 |
tqdm/tqdm | jupyter | 1,539 | Feature proposal: JSON output to arbitrary FD to allow programmatic consumers | This ticket is spawned by a recent Stack Overflow question, [grep a live progress bar from a running program](https://stackoverflow.com/questions/77699411/linux-grep-a-live-progress-bar-from-a-running-program). The party asking this question was looking for a way to run a piece of software -- which, as it turns out, uses tqdm for its progress bar -- and monitor its progress programmatically, triggering an event when that progress exceeded a given threshold.
This is not an unreasonable request, and I've occasionally had the need for similar things in my career. Generally, solving such problems ends up going one of a few routes:
- Patching the software being run to add hooks that trigger when progress meets desired milestones (unfortunate because invasive).
- Parsing the output stream being generated (unfortunate because fragile -- prone to break when the progress bar format is updated, the curses or similar library being used to generate events changes, the terminal type or character set in use differs from that which the folks building the parser anticipated, etc).
- Leveraging support for writing progress information in a well-defined format to a well-defined destination.
This is a feature request to make the 3rd of those available by default across all applications using TQDM.
One reasonable interface accessible to non-Python consumers would be environment variables, such as:
```bash
TQDM_FD=3 TQDM_FORMAT=json appname...
```
...to instruct progress data to be written to file descriptor 3 (aka the file-like object returned by `os.fdopen(3, 'a')`) in JSONL form, such that whatever software is spawning `appname` can feed contents through a JSON parser. Some examples from the README might in this form look like:
```json
{"units": "bytes", "total_units": 369098752, "curr_units": 160432128, "pct": 44, "units_per_second": 11534336, "state": "Processing", "time_spent": "00:14", "time_remaining": "00:18"}
{"units": "bytes", "total_units": 369098752, "curr_units": 155189248, "pct": 42, "units_per_second": 11429478, "state": "Compressed", "time_spent": "00:14", "time_remaining": "00:19"}
```
...allowing out-of-process tools written in non-Python languages to perform arbitrary operations based on that status.
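A consumer could then be a few lines in any language; here is a hypothetical Python sketch (assuming the `TQDM_FD`/`TQDM_FORMAT` interface proposed above existed, with `appname` as a placeholder program):
```python
import json
import os
import subprocess

read_fd, write_fd = os.pipe()
proc = subprocess.Popen(
    ["appname"],  # placeholder for the tqdm-using program
    env={**os.environ, "TQDM_FD": str(write_fd), "TQDM_FORMAT": "json"},
    pass_fds=(write_fd,),  # let the child inherit the pipe's write end
)
os.close(write_fd)  # parent keeps only the read end

with os.fdopen(read_fd) as progress:
    for line in progress:
        event = json.loads(line)
        if event.get("pct", 0) >= 50:
            print("progress passed 50%")  # trigger whatever is needed here
            break

proc.wait()
```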
---
- [X] I have marked all applicable categories:
+ [ ] documentation request (i.e. "X is missing from the documentation." If instead I want to ask "how to use X?" I understand [StackOverflow#tqdm] is more appropriate)
+ [X] new feature request
- [ ] I have visited the [source website], and in particular
read the [known issues]
- [X] I have searched through the [issue tracker] for duplicates
- [ ] I have mentioned version numbers, operating system and
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
[StackOverflow#tqdm]: https://stackoverflow.com/questions/tagged/tqdm
| open | 2023-12-22T14:23:55Z | 2023-12-22T14:23:55Z | https://github.com/tqdm/tqdm/issues/1539 | [] | charles-dyfis-net | 0 |
lepture/authlib | flask | 423 | Backport fix to support Django 4 | Authlib 0.15.5 does not work with Django 4 due to the removal of `HttpRequest.get_raw_uri`. This is already fixed in master via https://github.com/lepture/authlib/commit/b3847d89dcd4db3a10c9b828de4698498a90d28c. Please backport the fix.
**To Reproduce**
With the latest release 0.15.5, run `tox -e py38-django` and see tests failing.
**Expected behavior**
All tests pass.
**Environment:**
- OS: Fedora
- Python Version: 3.8
- Authlib Version: 0.15.5 | closed | 2022-01-28T15:14:41Z | 2022-11-01T12:57:08Z | https://github.com/lepture/authlib/issues/423 | [
"bug"
] | V02460 | 1 |
flairNLP/flair | pytorch | 3,151 | [Question]: Custom evaluation metric for SequenceTagger | ### Question
How can I create a custom evaluation metric for the SequenceTagger? I would like to use F0.5 instead of the F1-score.
I can't find any option in the trainer for alternative scoring during training.
"question",
"wontfix"
] | larsbun | 3 |
pytorch/pytorch | numpy | 149,635 | avoid guarding on max() unnecessarily | Here's a repro. Theoretically, the code below should not require a recompile: we are conditionally padding, producing an output tensor of shape `max(input_size, 16)`. Instead, we specialize on the pad value and produce separate graphs for the `size_16` and `size_greater_than_16` cases.
```
import torch

@torch.compile(backend="eager")
def f(x):
    padded_size = max(x.shape[0], 16)
    padded_tensor = torch.ones(padded_size, *x.shape[1:])
    return padded_tensor + x.sum()

x = torch.arange(15)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)

x = torch.arange(17)
torch._dynamo.mark_dynamic(x, 0)
out = f(x)
```
cc @chauhang @penguinwu @ezyang @bobrenjc93 @zou3519 | open | 2025-03-20T17:08:26Z | 2025-03-24T09:52:13Z | https://github.com/pytorch/pytorch/issues/149635 | [
"triaged",
"oncall: pt2",
"module: dynamic shapes",
"vllm-compile"
] | bdhirsh | 5 |
biolab/orange3 | numpy | 6,514 | Installation problem "Cannot continue" |
```
Creating an new conda env in "C:\Program Files\Orange"
Output folder: C:\Users\OGBOLU~1\AppData\Local\Temp\nsxC898.tmp\Orange-installer-data\conda-pkgs
Extract: _py-xgboost-mutex-2.0-cpu_0.tar.bz2
Extract: anyio-3.6.2-pyhd8ed1ab_0.tar.bz2
Extract: anyqt-0.2.0-pyh6c4a22f_0.tar.bz2
Extract: asttokens-2.2.1-pyhd8ed1ab_0.conda
Extract: backcall-0.2.0-pyh9f0ad1d_0.tar.bz2
Extract: backports-1.0-pyhd8ed1ab_3.conda
Extract: backports.functools_lru_cache-1.6.4-pyhd8ed1ab_0.tar.bz2
Extract: baycomp-1.0.2-py_1.tar.bz2
Extract: blas-2.116-openblas.tar.bz2
Extract: blas-devel-3.9.0-16_win64_openblas.tar.bz2
Extract: bottleneck-1.3.7-py39hc266a54_0.conda
Extract: brotli-1.0.9-hcfcfb64_8.tar.bz2
Extract: brotli-bin-1.0.9-hcfcfb64_8.tar.bz2
Extract: brotlipy-0.7.0-py39ha55989b_1005.tar.bz2
Extract: bzip2-1.0.8-h8ffe710_4.tar.bz2
Extract: ca-certificates-2022.12.7-h5b45459_0.conda
Extract: cachecontrol-0.12.11-pyhd8ed1ab_1.conda
Extract: cairo-1.16.0-hdecc03f_1015.conda
Extract: catboost-1.2-py39hcbf5309_4.conda
Extract: certifi-2022.12.7-pyhd8ed1ab_0.conda
Extract: cffi-1.15.1-py39h68f70e3_3.conda
Extract: chardet-5.1.0-py39hcbf5309_0.conda
Extract: charset-normalizer-3.1.0-pyhd8ed1ab_0.conda
Extract: colorama-0.4.6-pyhd8ed1ab_0.tar.bz2
Extract: commonmark-0.9.1-py_0.tar.bz2
Extract: conda-spec.txt
Extract: contourpy-1.0.7-py39h1f6ef14_0.conda
Extract: cryptography-40.0.2-py39hb6bd5e6_0.conda
Extract: cycler-0.11.0-pyhd8ed1ab_0.tar.bz2
Extract: debugpy-1.6.7-py39h99910a6_0.conda
Extract: decorator-5.1.1-pyhd8ed1ab_0.tar.bz2
Extract: dictdiffer-0.9.0-pyhd8ed1ab_0.tar.bz2
Extract: docutils-0.19-py39hcbf5309_1.tar.bz2
Extract: et_xmlfile-1.1.0-pyhd8ed1ab_0.conda
Extract: executing-1.2.0-pyhd8ed1ab_0.tar.bz2
Extract: expat-2.5.0-h63175ca_1.conda
Extract: font-ttf-dejavu-sans-mono-2.37-hab24e00_0.tar.bz2
Extract: font-ttf-inconsolata-3.000-h77eed37_0.tar.bz2
Extract: font-ttf-source-code-pro-2.038-h77eed37_0.tar.bz2
Extract: font-ttf-ubuntu-0.83-hab24e00_0.tar.bz2
Extract: fontconfig-2.14.2-hbde0cde_0.conda
Extract: fonts-conda-ecosystem-1-0.tar.bz2
Extract: fonts-conda-forge-1-0.tar.bz2
Extract: fonttools-4.39.3-py39ha55989b_0.conda
Extract: freetype-2.12.1-h546665d_1.conda
Extract: fribidi-1.0.10-h8d14728_0.tar.bz2
Extract: future-0.18.3-pyhd8ed1ab_0.conda
Extract: getopt-win32-0.1-h8ffe710_0.tar.bz2
Extract: gettext-0.21.1-h5728263_0.tar.bz2
Extract: glib-2.76.2-h12be248_0.conda
Extract: glib-tools-2.76.2-h12be248_0.conda
Extract: graphite2-1.3.13-1000.tar.bz2
Extract: graphviz-8.0.5-h51cb2cd_0.conda
Extract: gst-plugins-base-1.22.0-h001b923_2.conda
Extract: gstreamer-1.22.0-h6b5321d_2.conda
Extract: gts-0.7.6-h7c369d9_2.tar.bz2
Extract: h11-0.14.0-pyhd8ed1ab_0.tar.bz2
Extract: h2-4.1.0-py39hcbf5309_0.tar.bz2
Extract: harfbuzz-7.2.0-h196d34a_0.conda
Extract: hpack-4.0.0-pyh9f0ad1d_0.tar.bz2
Extract: httpcore-0.17.0-pyhd8ed1ab_0.conda
Extract: httpx-0.24.0-pyhd8ed1ab_1.conda
Extract: hyperframe-6.0.1-pyhd8ed1ab_0.tar.bz2
Extract: icu-72.1-h63175ca_0.conda
Extract: idna-3.4-pyhd8ed1ab_0.tar.bz2
Extract: importlib-metadata-6.6.0-pyha770c72_0.conda
Extract: importlib-resources-5.12.0-pyhd8ed1ab_0.conda
Extract: importlib_metadata-6.6.0-hd8ed1ab_0.conda
Extract: importlib_resources-5.12.0-pyhd8ed1ab_0.conda
Extract: install.bat
Extract: ipykernel-6.14.0-py39h832f523_0.tar.bz2
Extract: ipython-8.4.0-py39hcbf5309_0.tar.bz2
Extract: ipython_genutils-0.2.0-py_1.tar.bz2
Extract: jaraco.classes-3.2.3-pyhd8ed1ab_0.tar.bz2
Extract: jedi-0.18.2-pyhd8ed1ab_0.conda
Extract: joblib-1.2.0-pyhd8ed1ab_0.tar.bz2
Extract: jupyter_client-8.2.0-pyhd8ed1ab_0.conda
Extract: jupyter_core-5.3.0-py39hcbf5309_0.conda
Extract: keyring-23.13.1-py39hcbf5309_0.conda
Extract: keyrings.alt-4.2.0-pyhd8ed1ab_0.conda
Extract: kiwisolver-1.4.4-py39h1f6ef14_1.tar.bz2
Extract: krb5-1.20.1-heb0366b_0.conda
Extract: lcms2-2.15-h3e3b177_1.conda
Extract: lerc-4.0.0-h63175ca_0.tar.bz2
Extract: libblas-3.9.0-16_win64_openblas.tar.bz2
Extract: libbrotlicommon-1.0.9-hcfcfb64_8.tar.bz2
Extract: libbrotlidec-1.0.9-hcfcfb64_8.tar.bz2
Extract: libbrotlienc-1.0.9-hcfcfb64_8.tar.bz2
Extract: libcblas-3.9.0-16_win64_openblas.tar.bz2
Extract: libclang-16.0.3-default_h8b4101f_0.conda
Extract: libclang13-16.0.3-default_h45d3cf4_0.conda
Extract: libdeflate-1.18-hcfcfb64_0.conda
Extract: libexpat-2.5.0-h63175ca_1.conda
Extract: libffi-3.4.2-h8ffe710_5.tar.bz2
Extract: libflang-5.0.0-h6538335_20180525.tar.bz2
Extract: libgd-2.3.3-h2ed9e1d_6.conda
Extract: libglib-2.76.2-he8f3873_0.conda
Extract: libiconv-1.17-h8ffe710_0.tar.bz2
Extract: libjpeg-turbo-2.1.5.1-hcfcfb64_0.conda
Extract: liblapack-3.9.0-16_win64_openblas.tar.bz2
Extract: liblapacke-3.9.0-16_win64_openblas.tar.bz2
Extract: libogg-1.3.4-h8ffe710_1.tar.bz2
Extract: libopenblas-0.3.21-pthreads_h02691f0_0.tar.bz2
Extract: libpng-1.6.39-h19919ed_0.conda
Extract: libsodium-1.0.18-h8d14728_1.tar.bz2
Extract: libsqlite-3.40.0-hcfcfb64_1.conda
Extract: libtiff-4.5.0-h6c8260b_6.conda
Extract: libvorbis-1.3.7-h0e60522_0.tar.bz2
Extract: libwebp-1.3.0-hcfcfb64_0.conda
Extract: libwebp-base-1.3.0-hcfcfb64_0.conda
Extract: libxcb-1.13-hcd874cb_1004.tar.bz2
Extract: libxgboost-1.7.4-cpu_h20390bd_0.conda
Extract: libxml2-2.10.4-hc3477c8_0.conda
Extract: libzlib-1.2.13-hcfcfb64_4.tar.bz2
Extract: llvm-meta-5.0.0-0.tar.bz2
Extract: lockfile-0.12.2-py_1.tar.bz2
Extract: m2w64-gcc-libgfortran-5.3.0-6.tar.bz2
Extract: m2w64-gcc-libs-5.3.0-7.tar.bz2
Extract: m2w64-gcc-libs-core-5.3.0-7.tar.bz2
Extract: m2w64-gmp-6.1.0-2.tar.bz2
Extract: m2w64-libwinpthread-git-5.0.0.4634.697f757-2.tar.bz2
Extract: matplotlib-base-3.7.1-py39haf65ace_0.conda
Extract: matplotlib-inline-0.1.6-pyhd8ed1ab_0.tar.bz2
Extract: more-itertools-9.1.0-pyhd8ed1ab_0.conda
Extract: msgpack-python-1.0.5-py39h1f6ef14_0.conda
Extract: msys2-conda-epoch-20160418-1.tar.bz2
Extract: munkres-1.1.4-pyh9f0ad1d_0.tar.bz2
Extract: nest-asyncio-1.5.6-pyhd8ed1ab_0.tar.bz2
Extract: networkx-3.1-pyhd8ed1ab_0.conda
Extract: numpy-1.24.3-py39h816b6a6_0.conda
Extract: openblas-0.3.21-pthreads_ha35c500_0.tar.bz2
Extract: openjpeg-2.5.0-ha2aaf27_2.conda
Extract: openmp-5.0.0-vc14_1.tar.bz2
Extract: openpyxl-3.1.2-py39ha55989b_0.conda
Extract: openssl-3.1.0-hcfcfb64_3.conda
Extract: opentsne-0.7.1-py39hbd792c9_0.conda
Extract: orange-canvas-core-0.1.31-pyhd8ed1ab_0.conda
Extract: orange-widget-base-4.21.0-pyhd8ed1ab_0.conda
Extract: orange3-3.35.0-py39h1679cfb_0.conda
Extract: packaging-23.1-pyhd8ed1ab_0.conda
Extract: pandas-1.5.3-py39h2ba5b7c_1.conda
Extract: pango-1.50.14-hd64ce24_1.conda
Extract: parso-0.8.3-pyhd8ed1ab_0.tar.bz2
Extract: pcre2-10.40-h17e33f8_0.tar.bz2
Extract: pickleshare-0.7.5-py_1003.tar.bz2
Extract: pillow-9.5.0-py39haa1d754_0.conda
Extract: pip-23.1.2-pyhd8ed1ab_0.conda
Extract: pixman-0.40.0-h8ffe710_0.tar.bz2
Extract: platformdirs-3.5.0-pyhd8ed1ab_0.conda
Extract: plotly-5.14.1-pyhd8ed1ab_0.conda
Extract: ply-3.11-py_1.tar.bz2
Extract: pooch-1.7.0-pyha770c72_3.conda
Extract: prompt-toolkit-3.0.38-pyha770c72_0.conda
Extract: psutil-5.9.5-py39ha55989b_0.conda
Extract: pthread-stubs-0.4-hcd874cb_1001.tar.bz2
Extract: pure_eval-0.2.2-pyhd8ed1ab_0.tar.bz2
Extract: py-xgboost-1.7.4-cpu_py39ha538f94_0.conda
Extract: pycparser-2.21-pyhd8ed1ab_0.tar.bz2
Extract: pygments-2.15.1-pyhd8ed1ab_0.conda
Extract: pyopenssl-23.1.1-pyhd8ed1ab_0.conda
Extract: pyparsing-3.0.9-pyhd8ed1ab_0.tar.bz2
Extract: pyqt-5.15.7-py39hb77abff_3.conda
Extract: pyqt5-sip-12.11.0-py39h99910a6_3.conda
Extract: pyqtgraph-0.13.3-pyhd8ed1ab_0.conda
Extract: pyqtwebengine-5.15.7-py39h2f4a3f1_3.conda
Extract: pysocks-1.7.1-py39hcbf5309_5.tar.bz2
Extract: python-3.9.12-hcf16a7b_1_cpython.tar.bz2
Extract: python-dateutil-2.8.2-pyhd8ed1ab_0.tar.bz2
Extract: python-graphviz-0.20.1-pyh22cad53_0.tar.bz2
Extract: python-louvain-0.16-pyhd8ed1ab_0.conda
Extract: python_abi-3.9-3_cp39.conda
Extract: pytz-2023.3-pyhd8ed1ab_0.conda
Extract: pywin32-304-py39h99910a6_2.tar.bz2
Extract: pywin32-ctypes-0.2.0-py39hcbf5309_1006.tar.bz2
Extract: pyyaml-6.0-py39ha55989b_5.tar.bz2
Extract: pyzmq-25.0.2-py39hea35a22_0.conda
Extract: qasync-0.24.0-pyhd8ed1ab_0.conda
Extract: qt-main-5.15.8-h7f2b912_9.conda
Extract: qt-webengine-5.15.8-h5b1ea0b_0.tar.bz2
Extract: qtconsole-5.4.3-pyhd8ed1ab_0.conda
Extract: qtconsole-base-5.4.3-pyha770c72_0.conda
Extract: qtpy-2.3.1-pyhd8ed1ab_0.conda
Extract: requests-2.29.0-pyhd8ed1ab_0.conda
Extract: scikit-learn-1.1.3-py39h6fe01c0_1.tar.bz2
Extract: scipy-1.10.1-py39hde5eda1_1.conda
Extract: serverfiles-0.3.0-py_0.tar.bz2
Extract: setuptools-67.7.2-pyhd8ed1ab_0.conda
Extract: sip-6.7.9-py39h99910a6_0.conda
Extract: sitecustomize.py
Extract: six-1.16.0-pyh6c4a22f_0.tar.bz2
Extract: sniffio-1.3.0-pyhd8ed1ab_0.tar.bz2
Extract: sqlite-3.40.0-hcfcfb64_1.conda
Extract: stack_data-0.6.2-pyhd8ed1ab_0.conda
Extract: tenacity-8.2.2-pyhd8ed1ab_0.conda
Extract: threadpoolctl-3.1.0-pyh8a188c0_0.tar.bz2
Extract: tk-8.6.12-h8ffe710_0.tar.bz2
Extract: toml-0.10.2-pyhd8ed1ab_0.tar.bz2
Extract: tomli-2.0.1-pyhd8ed1ab_0.tar.bz2
Extract: tornado-6.3-py39ha55989b_0.conda
Extract: traitlets-5.9.0-pyhd8ed1ab_0.conda
Extract: typing-extensions-4.5.0-hd8ed1ab_0.conda
Extract: typing_extensions-4.5.0-pyha770c72_0.conda
Extract: tzdata-2023c-h71feb2d_0.conda
Extract: ucrt-10.0.22621.0-h57928b3_0.tar.bz2
Extract: unicodedata2-15.0.0-py39ha55989b_0.tar.bz2
Extract: urllib3-1.26.15-pyhd8ed1ab_0.conda
Extract: vc-14.3-hb25d44b_16.conda
Extract: vc14_runtime-14.34.31931-h5081d32_16.conda
Extract: vs2015_runtime-14.34.31931-hed1258a_16.conda
Extract: wcwidth-0.2.6-pyhd8ed1ab_0.conda
Extract: wheel-0.40.0-pyhd8ed1ab_0.conda
Extract: win_inet_pton-1.1.0-py39hcbf5309_5.tar.bz2
Extract: xgboost-1.7.4-cpu_py39ha538f94_0.conda
Extract: xlrd-2.0.1-pyhd8ed1ab_3.tar.bz2
Extract: xlsxwriter-3.1.0-pyhd8ed1ab_0.conda
Extract: xorg-kbproto-1.0.7-hcd874cb_1002.tar.bz2
Extract: xorg-libice-1.0.10-hcd874cb_0.tar.bz2
Extract: xorg-libsm-1.2.3-hcd874cb_1000.tar.bz2
Extract: xorg-libx11-1.8.4-hcd874cb_0.conda
Extract: xorg-libxau-1.0.9-hcd874cb_0.tar.bz2
Extract: xorg-libxdmcp-1.1.3-hcd874cb_0.tar.bz2
Extract: xorg-libxext-1.3.4-hcd874cb_2.conda
Extract: xorg-libxpm-3.5.13-hcd874cb_0.tar.bz2
Extract: xorg-libxt-1.2.1-hcd874cb_2.tar.bz2
Extract: xorg-xextproto-7.3.0-hcd874cb_1003.conda
Extract: xorg-xproto-7.0.31-hcd874cb_1007.tar.bz2
Extract: xz-5.2.6-h8d14728_0.tar.bz2
Extract: yaml-0.2.5-h8ffe710_2.tar.bz2
Extract: zeromq-4.3.4-h0e60522_1.tar.bz2
Extract: zipp-3.15.0-pyhd8ed1ab_0.conda
Extract: zlib-1.2.13-hcfcfb64_4.tar.bz2
Extract: zstd-1.5.2-h12be248_6.conda
Output folder: C:\Users\OGBOLU~1\AppData\Local\Temp\nsxC898.tmp\Orange-installer-data\conda-pkgs
Installing packages (this might take a while)
Executing: cmd.exe /c install.bat "C:\Program Files\Orange" "C:\Users\Ogbolu Ify\miniconda3\condabin\conda.bat"
Creating a conda env in "C:\Program Files\Orange"
failed to create process.
Appending conda-forge channel
The system cannot find the path specified.
The system cannot find the path specified.
The system cannot find the path specified.
failed to create process.
The system cannot find the path specified.
The system cannot find the path specified.
The system cannot find the path specified.
The system cannot find the path specified.
The system cannot find the path specified.
0 file(s) copied.
"conda" command exited with 1. Cannot continue.
```
| closed | 2023-07-19T10:20:29Z | 2023-08-18T09:27:39Z | https://github.com/biolab/orange3/issues/6514 | [
"bug report"
] | IfyOgbolu | 3 |
plotly/plotly.py | plotly | 4,111 | FigureWidget modifies the position of Title | Dear Plotly,
I have encountered an issue where, when using FigureWidget with subplots while keeping `shared_yaxes=True`, the title of the second subplot is moved to the left side.
This is a make_subplot object

And this is what occurs when the Figure object is pass to FigureWidget

Is there any solution to keep `shared_yaxes=True` and get the title of the second subplot on the right side (the YAxis `side` option was used to generate the first figure)?
keras-team/keras | pytorch | 20,490 | ModelCheckpoint loses .h5 save support, breaking retrocompatibility | **Title:** ModelCheckpoint Callback Fails to Save Models in .h5 Format in TensorFlow 2.17.0+
**Description:**
I'm experiencing an issue with TensorFlow's `tf.keras.callbacks.ModelCheckpoint` across different TensorFlow versions on different platforms.
**Background:**
* **Platform 1:** Windows with TensorFlow 2.10.0 (GPU-enabled).
* **Platform 2:** Docker container on Linux using TensorFlow 2.3.0 (nvcr.io/nvidia/tensorflow:20.09-tf2-py3).
With versions up to TensorFlow 2.15.0, I was able to save models in `.h5` format using `tf.keras.callbacks.ModelCheckpoint` with the `save_weights_only=False` parameter. This allowed for easy cross-platform loading of saved models.
**Problem:** Since TensorFlow 2.17.0, `tf.keras.callbacks.ModelCheckpoint` appears unable to save models in `.h5` format, breaking backward compatibility. Models can only be saved in the `.keras` format, which versions prior to 2.17.0 cannot load, creating a compatibility issue for users maintaining models across different TensorFlow versions.
**Steps to Reproduce:**
1. Use TensorFlow 2.17.0 or later.
2. Try saving a model with `tf.keras.callbacks.ModelCheckpoint` using `save_weights_only=False` and specifying `.h5` as the file format.
3. Load the model in a previous version, such as TensorFlow 2.10.0 or earlier.
**Expected Behavior:** The model should be saved in `.h5` format without error, maintaining backward compatibility with earlier versions.
**Actual Behavior:** The model cannot be saved in `.h5` format, only in `.keras` format, making it incompatible with TensorFlow versions prior to 2.17.0.
**Question:** Is there a workaround to save models in `.h5` format in TensorFlow 2.17.0+? Or, is there a plan to restore `.h5` support in future updates for backward compatibility?
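One workaround I'm experimenting with (untested sketch; it assumes `model.save()` still accepts a legacy `.h5` path in Keras 3, which may only emit a deprecation warning):
```python
import tensorflow as tf

class H5Checkpoint(tf.keras.callbacks.Callback):
    """Hypothetical ModelCheckpoint stand-in that saves legacy HDF5 files."""

    def __init__(self, filepath: str):
        super().__init__()
        self.filepath = filepath

    def on_epoch_end(self, epoch, logs=None):
        # model.save() with an .h5 path should use the legacy HDF5 writer.
        self.model.save(self.filepath.format(epoch=epoch + 1))

checkpoint = H5Checkpoint("model_epoch_{epoch:02d}.h5")
```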
**Environment:**
* TensorFlow version: 2.17.0+
* Operating systems: Windows, Linux (Docker)
**Thank you for your help and for maintaining this project!** | closed | 2024-11-13T09:56:49Z | 2024-11-28T17:41:41Z | https://github.com/keras-team/keras/issues/20490 | [
"type:Bug"
] | TeoCavi | 3 |
JaidedAI/EasyOCR | machine-learning | 570 | Detected short number | Hello. I have image:

but EasyOCR does not detect it.
I tried `detect` with different parameters, but the method returns an empty result.
Then I tried to write my own simple detector:
```
def recognition_number(self, img: np.ndarray) -> str:
    image = self._filtration(img)
    thresh = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 3, 1)
    cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    ox = [i[0][0] for i in cnts[0]]
    oy = [i[0][1] for i in cnts[0]]
    data = self.reader.recognize(image, horizontal_list=[[min(ox), max(ox), min(oy), max(oy)]], free_list=[], allowlist="1234567890.,")
    return "".join([el[1] for el in data])
```
This solved the problem, and the number is now recognized.
Is it possible to do something about EasyOCR's `detect` so it handles short numbers like this? | closed | 2021-10-18T12:39:08Z | 2021-10-19T09:02:56Z | https://github.com/JaidedAI/EasyOCR/issues/570 | [] | sas9mba | 2 |
d2l-ai/d2l-en | deep-learning | 2,473 | GNNs theory and practice | Hi,
This is an excellent book for learning and practicing deep learning. I have seen attention, GANs, NLP, etc., but GNNs are not covered in the book. Will they be included?
Best. | open | 2023-04-27T21:44:22Z | 2023-05-15T12:47:23Z | https://github.com/d2l-ai/d2l-en/issues/2473 | [
"feature request"
] | Cram3r95 | 1 |
mljar/mljar-supervised | scikit-learn | 488 | elementwise comparison fails on predict functions | ```
predictions_compete = automl_compete.predict_all(X_test)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/var/folders/kv/_qlrk0nj7zld91vzy1fj3hkr0000gn/T/ipykernel_48603/1194736169.py in <module>
----> 1 predictions_compete = automl_compete.predict_all(X_test)
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/automl.py in predict_all(self, X)
394
395 """
--> 396 return self._predict_all(X)
397
398 def score(self, X, y=None, sample_weight=None):
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/base_automl.py in _predict_all(self, X)
1380 def _predict_all(self, X):
1381 # Make and return predictions
-> 1382 return self._base_predict(X)
1383
1384 def _score(self, X, y=None, sample_weight=None):
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/base_automl.py in _base_predict(self, X, model)
1313 if model._is_stacked:
1314 self._perform_model_stacking()
-> 1315 X_stacked = self.get_stacked_data(X, mode="predict")
1316
1317 if model.get_type() == "Ensemble":
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/base_automl.py in get_stacked_data(self, X, mode)
418 oof = m.get_out_of_folds()
419 else:
--> 420 oof = m.predict(X)
421 if self._ml_task == BINARY_CLASSIFICATION:
422 cols = [f for f in oof.columns if "prediction" in f]
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/ensemble.py in predict(self, X, X_stacked)
306 y_predicted_from_model = model.predict(X_stacked)
307 else:
--> 308 y_predicted_from_model = model.predict(X)
309
310 prediction_cols = []
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/model_framework.py in predict(self, X)
426 for ind, learner in enumerate(self.learners):
427 # preprocessing goes here
--> 428 X_data, _, _ = self.preprocessings[ind].transform(X.copy(), None)
429 y_p = learner.predict(X_data)
430 y_p = self.preprocessings[ind].inverse_scale_target(y_p)
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/preprocessing/preprocessing.py in transform(self, X_validation, y_validation, sample_weight_validation)
401 for convert in self._categorical:
402 if X_validation is not None and convert is not None:
--> 403 X_validation = convert.transform(X_validation)
404
405 for dtt in self._datetime_transforms:
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/preprocessing/preprocessing_categorical.py in transform(self, X)
77 lbl = LabelEncoder()
78 lbl.from_json(lbl_params)
---> 79 X.loc[:, column] = lbl.transform(X.loc[:, column])
80
81 return X
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/supervised/preprocessing/label_encoder.py in transform(self, x)
31 def transform(self, x):
32 try:
---> 33 return self.lbl.transform(x) # list(x.values))
34 except ValueError as ve:
35 # rescue
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/sklearn/preprocessing/_label.py in transform(self, y)
136 return np.array([])
137
--> 138 return _encode(y, uniques=self.classes_)
139
140 def inverse_transform(self, y):
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/sklearn/utils/_encode.py in _encode(values, uniques, check_unknown)
185 else:
186 if check_unknown:
--> 187 diff = _check_unknown(values, uniques)
188 if diff:
189 raise ValueError(f"y contains previously unseen labels: {str(diff)}")
~/code_projects/chs_kaggle/.venv/lib/python3.9/site-packages/sklearn/utils/_encode.py in _check_unknown(values, known_values, return_mask)
259
260 # check for nans in the known_values
--> 261 if np.isnan(known_values).any():
262 diff_is_nan = np.isnan(diff)
263 if diff_is_nan.any():
TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
``` | closed | 2021-11-24T09:08:07Z | 2021-11-24T11:46:13Z | https://github.com/mljar/mljar-supervised/issues/488 | [
"bug"
] | cibic89 | 5 |
lepture/authlib | flask | 720 | error_description should not include invalid characters for oidc certification | [RFC6749](https://datatracker.ietf.org/doc/html/rfc6749#section-5.2) states that the `error_description` field MUST NOT include characters outside the set %x09-0A (Tab and LF) / %x0D (CR) / %x20-21 / %x23-5B / %x5D-7E.
For example, the token endpoint response returns this error, which makes the OpenID Connect certification fail:
https://github.com/lepture/authlib/blob/4eafdc21891e78361f478479efe109ff0fb2f661/authlib/jose/errors.py#L85
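For reference, a sanitizer sketch of the kind of fix I mean (my own assumption, not an existing Authlib helper):
```python
def sanitize_error_description(description: str) -> str:
    """Keep only characters RFC 6749 allows in error_description."""
    def allowed(ch: str) -> bool:
        # Tab/LF/CR plus visible ASCII, excluding '"' (%x22) and '\' (%x5C).
        return ch in "\t\n\r" or (0x20 <= ord(ch) <= 0x7E and ch not in '"\\')
    return "".join(ch for ch in description if allowed(ch))
```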
The quotes should be removed from `error_description`. | open | 2025-03-14T15:22:38Z | 2025-03-15T01:58:29Z | https://github.com/lepture/authlib/issues/720 | [
"bug"
] | funelie | 0 |
tortoise/tortoise-orm | asyncio | 1,342 | Model printing requires the output of the primary key value | **Is your feature request related to a problem? Please describe.**
When I print a model, I only get the model's name.
The `__str__` method should have the same logic as the `__repr__` method.
In the past, every time I printed a model, I had to print an extra pk field.
**Describe alternatives you've considered**
Override the `Model.__str__` method, for example:
```python
class Model(metaclass=ModelMeta):
    """
    Base class for all Tortoise ORM Models.
    """

    ...

    def __str__(self) -> str:
        if self.pk:
            return f"<{self.__class__.__name__}: {self.pk}>"
        return f"<{self.__class__.__name__}>"
```
| open | 2023-02-16T05:52:38Z | 2023-03-01T03:14:17Z | https://github.com/tortoise/tortoise-orm/issues/1342 | [
"enhancement"
] | Chise1 | 6 |
ultralytics/ultralytics | computer-vision | 19,710 | Using `single_cls` while loading the data the first time causes warnings and errors | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi,
I have a dataset with two classes: 0 and 1.
In my data.yaml file I have the following class definition:

    # class names
    names:
      0: 'cat'
      1: 'duck'
Now I'm testing with the `single_cls` flag set to true, and I get a ton of errors complaining about the labels exceeding the maximum class label. Since I set `single_cls=True`, shouldn't it change all my labels to zero?
### Additional
_No response_ | closed | 2025-03-15T04:54:32Z | 2025-03-21T14:06:16Z | https://github.com/ultralytics/ultralytics/issues/19710 | [
"bug",
"question",
"fixed",
"detect"
] | Nau3D | 11 |
widgetti/solara | jupyter | 640 | Feature Request: Programmatically focus on a component | It would be nice to make available the functionality to focus on a component (say a text input component) after interacting with another component (i.e. a button).
An example of this would be something like:
- user enters some text in an input text field
- user presses some button
- the same (or another) text field is automatically in focus -> no need to use the mouse to select that component in order to enter text.
I saw that this is possible via the `.focus()` method in `ipyvuetify`, but it is not exposed in solara. There is a kwarg that makes a given input text field receive focus when the app starts, but that focus is lost once the user interacts with other components (buttons, sliders, etc.). | closed | 2024-05-09T12:43:22Z | 2024-08-25T15:28:00Z | https://github.com/widgetti/solara/issues/640 | [] | JovanVeljanoski | 8 |
hankcs/HanLP | nlp | 1,937 | The Chinese-to-pinyin feature fails on more complex heteronyms | **Describe the bug**
Using the Maven dependency:

```xml
<dependency>
    <groupId>com.hankcs</groupId>
    <artifactId>hanlp</artifactId>
    <version>portable-1.8.5</version>
</dependency>
```

```java
public static void main(String[] args) {
    String text = "厦门行走";
    // Use HanLP for pinyin conversion; HanLP should handle heteronyms
    String pinyin = HanLP.convertToPinyinString(text, "", true);
    System.out.println(pinyin); // xiàmén xíngzǒu
    System.out.println(HanLP.convertToPinyinFirstCharString(text, "", true));
    System.out.println(HanLP.convertToPinyinString("人要是行,干一行行一行,一行行行行行。人要是不行,干一行不行一行,一行不行行行不行。", " ", false));
}
```

The output is wrong; the heteronym 行 is read as "xing" throughout:

```
ren yao shi xing , gan yi xing xing yi xing , yi xing xing xing xing xing 。 ren yao shi bu xing , gan yi xing bu xing yi xing , yi xing bu xing xing xing bu xing 。
```
* [x] I've completed this form and searched the web for solutions.
| closed | 2024-12-25T02:44:25Z | 2024-12-25T04:06:40Z | https://github.com/hankcs/HanLP/issues/1937 | [
"invalid"
] | jsl1992 | 1 |
jofpin/trape | flask | 164 | I don't know why this is happening, any solution? | C:\Users\Usuario\Desktop\trape>python trape.py -h
Traceback (most recent call last):
File "trape.py", line 23, in <module>
from core.utils import utils #
File "C:\Users\Usuario\Desktop\trape\core\utils.py", line 23, in <module>
import httplib
ModuleNotFoundError: No module named 'httplib' | open | 2019-06-03T05:13:06Z | 2019-06-03T20:21:11Z | https://github.com/jofpin/trape/issues/164 | [] | origmz | 2 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 853 | I am trying to run SmartScraperGraph() using Ollama with the llama3.2 model, but I get the warning "Token indices sequence length is longer than the specified maximum sequence length for this model (7678 > 1024)" and the whole website is not scraped. | ```python
import json
from scrapegraphai.graphs import SmartScraperGraph
from ollama import Client

ollama_client = Client(host='http://localhost:11434')

# Define the configuration for the scraping pipeline
graph_config = {
    "llm": {
        "model": "ollama/llama3.2",
        "temperature": 0.0,
        "format": "json",
        "model_tokens": 4096,
        "base_url": "http://localhost:11434",
    },
    "embeddings": {
        "model": "nomic-embed-text",
    },
}

# Create the SmartScraperGraph instance
smart_scraper_graph = SmartScraperGraph(
    prompt="Extract me all the news from the website along with headlines",
    source="https://www.bbc.com/",
    config=graph_config
)

# Run the pipeline
result = smart_scraper_graph.run()
print(json.dumps(result, indent=4))
```
**Output:**

```
>> from langchain_community.callbacks.manager import get_openai_callback
You can use the langchain cli to **automatically** upgrade many imports. Please see documentation here <https://python.langchain.com/docs/versions/v0_2/>
from langchain.callbacks import get_openai_callback
Token indices sequence length is longer than the specified maximum sequence length for this model (7678 > 1024). Running this sequence through the model will result in indexing errors
{
    "headlines": [
        "Life is not easy - Haaland penalty miss sums up Man City crisis",
        "How a 1990s Swan Lake changed dance forever"
    ],
    "articles": [
        {
            "title": "BBC News",
            "url": "https://www.bbc.com/news/world-europe-63711133"
        },
        {
            "title": "Matthew Bourne on his male Swan Lake - the show that shook up the dance world",
            "url": "https://www.bbc.com/culture/article/20241126-matthew-bourne-on-his-male-swan-lake-the-show-that-shook-up-the-dance-world-forever"
        }
    ]
}
```
Even after specifying that model tokens = 4096 it is not effecting its maximum sequence length(1024). How can i increase it ? How can i chunk the website into size of its max_sequence_length so that i can scrape the whole website.
PS: Also having the option to further crawl the links and scrape subsequent websites would be great. Thanks
Ubuntu 22.04 LTS
GPU : RTX 4070 12GB VRAM
RAM : 16GB DDR5
Ollama/Llama3.2:3B model | closed | 2024-12-27T03:55:42Z | 2025-01-06T19:19:40Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/853 | [] | GODCREATOR333 | 3 |
miguelgrinberg/microblog | flask | 97 | Can't find flask_migrate for the import | `from flask_migrate import Migrate`
Gives me E0401: Unable to import 'flask-migrate'. But pip installed and upgraded flask-migrate just fine. | closed | 2018-04-14T15:14:53Z | 2019-04-07T10:04:56Z | https://github.com/miguelgrinberg/microblog/issues/97 | [
"question"
] | tnovak123 | 23 |
ageitgey/face_recognition | machine-learning | 665 | Face detection is very laggy in a freshly rebuilt Ubuntu environment on the same machine | * face_recognition version:1.2.3
* Python version:Python3.6.6
* Operating System:Ubuntu18
### Description
> Because of a graphics driver problem I reinstalled Ubuntu. When I use the face_recognition package for face detection and recognition again, it is extremely laggy, which did not happen before the reinstall. A single CPU core sits above 90% while the other CPUs stay below 5%. Could it be that dlib is not using GPU acceleration? When I use model='cnn', I find that the GPU is not used and detection gets even more laggy. How should I solve this problem?
### The Code is:
```python
import cv2
import face_recognition as fr
import numpy as np
from PIL import Image, ImageDraw, ImageFont


def thread(id, label_list, known_faces):
    # for id in index_th:
    video_catch = cv2.VideoCapture(id)
    # video_catch.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    # video_catch.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    # video_catch.set(cv2.CAP_PROP_FPS, 30.0)
    face_locations = []
    face_encodings = []
    face_names = []
    frame_number = 0
    print('%d isOpened!' % id, video_catch.isOpened())
    if not video_catch.isOpened():
        return
    while True:
        ret, frame = video_catch.read()
        # print('%s' % id, ret)
        frame_number += 1
        if not ret:
            break
        small_frame = cv2.resize(frame, (0, 0), fx=0.25, fy=0.25)
        rgb_frame = small_frame[:, :, ::-1]
        face_locations = fr.face_locations(rgb_frame)  # ,model="cnn"
        face_encodings = fr.face_encodings(rgb_frame, face_locations)
        face_names = []
        for face_encoding in face_encodings:
            match = fr.compare_faces(known_faces, face_encoding, tolerance=0.45)
            name = "???"
            for i in range(len(label_list)):
                if match[i]:
                    name = label_list[i]
            face_names.append(name)
        for (top, right, bottom, left), name in zip(face_locations, face_names):
            if not name:
                continue
            top *= 4
            right *= 4
            bottom *= 4
            left *= 4
            pil_im = Image.fromarray(frame)
            draw = ImageDraw.Draw(pil_im)
            font = ImageFont.truetype('/home/jonty/Documents/Project_2018/database/STHeiti_Medium.ttc', 24,
                                      encoding='utf-8')
            draw.text((left + 6, bottom - 25), name, (0, 0, 255), font=font)
            frame = np.array(pil_im)
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)
        cv2.imshow('Video_%s' % id, frame)
        c = cv2.waitKey(5)
        if c & 0xFF == ord("q"):
            break
    video_catch.release()
    cv2.destroyWindow('Video_%s' % id)
```
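One quick check for the all-CPU behaviour described above; this is a sketch that assumes a dlib build recent enough to expose the `dlib.cuda` module:

```python
# Sketch: verify whether the installed dlib was compiled with CUDA.
# If DLIB_USE_CUDA is False, face_recognition (and model="cnn" in
# particular) will run entirely on the CPU.
import dlib

print(dlib.DLIB_USE_CUDA)           # True only for CUDA-enabled builds
print(dlib.cuda.get_num_devices())  # visible CUDA devices (CUDA builds only)
```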
| closed | 2018-11-04T04:20:22Z | 2019-08-20T03:26:53Z | https://github.com/ageitgey/face_recognition/issues/665 | [] | JalexDooo | 3 |
d2l-ai/d2l-en | deep-learning | 1,807 | Tokenization in 8.2.2 | I think that in the tokenize function, if it's tokenizing words, it should add the space character to the tokens too. Otherwise, the predict function will assume '<unk>' for spaces and the predictions don't have spaces between them (which can be solved by changing this line in the predict function: `return ''.join([vocab.idx_to_token[i] + ' ' for i in outputs])`)
I think `tokenize` should change like this:
`[line.split() for line in lines] + [[' ']]`
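For context, a sketch of the full proposed change (the surrounding function body is paraphrased from the book's code, not copied verbatim):

```python
# Sketch of the proposed change: appending [[' ']] makes the space
# character part of the corpus, so the vocabulary assigns it a real
# index instead of mapping it to '<unk>' at prediction time.
def tokenize(lines, token='word'):
    """Split text lines into word or character tokens."""
    if token == 'word':
        return [line.split() for line in lines] + [[' ']]
    elif token == 'char':
        return [list(line) for line in lines]
    raise ValueError('Unknown token type: ' + token)
```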
If I'm right, I can make a PR for both the tokenize and predict functions (although for predict I might have to change the function's inputs as well, to recognize whether it's a char-level or word-level RNN). | closed | 2021-06-22T07:53:02Z | 2022-12-16T00:24:13Z | https://github.com/d2l-ai/d2l-en/issues/1807 | [] | armingh2000 | 2 |
tableau/server-client-python | rest-api | 1,167 | [2022.3] parent_id lost when creating project | **Describe the bug**
`parent_id` is `None` in the `ProjectItem` object returned by `projects.create()` even if it was specified when creating the new instance.
**Versions**
- Tableau Server 2022.3.1
- Python 3.10.9
- TSC library 0.23.4
**To Reproduce**
```
>>> new_project = TSC.ProjectItem('New Project', parent_id='c7fc0e49-8d67-4211-9623-dd9237bf3cda')
>>> new_project.id
>>> new_project.parent_id
'c7fc0e49-8d67-4211-9623-dd9237bf3cda'
>>> new_project = server.projects.create(new_project)
INFO:tableau.endpoint.projects:Created new project (ID: e0e88607-abe5-4548-963e-b5f5054c5cbe)
>>> new_project.id
'e0e88607-abe5-4548-963e-b5f5054c5cbe'
>>> new_project.parent_id
>>>
```
**Results**
What are the results or error messages received?
`new_project.parent_id` should be preserved, but it is reset to `None`
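Until the server echoes the field back, one client-side workaround sketch (reusing `server` and the UUID from the repro above) is to restore the value that was sent:

```python
# Sketch: the created item comes back with parent_id == None,
# so keep the value we requested and put it back on the returned item.
requested_parent_id = 'c7fc0e49-8d67-4211-9623-dd9237bf3cda'
new_project = TSC.ProjectItem('New Project', parent_id=requested_parent_id)
new_project = server.projects.create(new_project)
if new_project.parent_id is None:
    new_project.parent_id = requested_parent_id
```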
| closed | 2023-01-05T16:06:46Z | 2024-09-20T08:22:27Z | https://github.com/tableau/server-client-python/issues/1167 | [
"bug",
"Server-Side Enhancement",
"fixed"
] | nosnilmot | 5 |
sigmavirus24/github3.py | rest-api | 665 | how to check if no resource was found? | How do I check whether github3 did not find a resource?
for example:
```Python
import github3
g = github3.GitHub(token='mytoken')
i1 = g.issue('rmamba', 'sss3', 1) #exists, returns Issue
i2 = g.issue('rmamba', 'sss3', 1000) #does not exists, return NullObject
```
how do I do this:
```
if i is None:
    handle none
```
I can do `if i2 is i2.Empty`. But with i1 it does not work, because it returns a valid issue and i1.Empty does not exist. I tried with NullObject but it always says it's false.
```
i1 is NullObject #false
i1 is NullObject('Issue') #false
```
so I had to do this for now:
```
if '%s' % i2 == '':
    handle None
```
because it works for both NullObjects and valid results returned | closed | 2017-01-04T12:22:52Z | 2017-01-21T17:10:22Z | https://github.com/sigmavirus24/github3.py/issues/665 | [] | rmamba | 1 |
HumanSignal/labelImg | deep-learning | 603 | [Feature request] Landmarks labelling in addition to rects. | Request to add marking of only points instead of rect boxes.
This will be useful for marking landmarks like facial and pose landmarks.
| open | 2020-06-12T18:33:19Z | 2020-06-12T18:33:19Z | https://github.com/HumanSignal/labelImg/issues/603 | [] | poornapragnarao | 0 |
numba/numba | numpy | 9,986 | `np.left_shift` behaves differently with and without `njit` | <!--
Thanks for opening an issue! To help the Numba team handle your information
efficiently, please first ensure that there is no other issue present that
already describes the issue you have
(search at https://github.com/numba/numba/issues?&q=is%3Aissue).
-->
## Reporting a bug
<!--
Before submitting a bug report please ensure that you can check off these boxes:
-->
- [x] I have tried using the latest released version of Numba (most recent is
visible in the release notes
(https://numba.readthedocs.io/en/stable/release-notes-overview.html).
- [x] I have included a self contained code sample to reproduce the problem.
i.e. it's possible to run as 'python bug.py'.
<!--
Please include details of the bug here, including, if applicable, what you
expected to happen!
-->
`np.left_shift` behaves differently with and without `njit`:
```python
from numba import njit
import numpy as np

@njit
def left_shift_njit():
    return np.left_shift(np.uint8(255), 1)

print(np.left_shift(np.uint8(255), 1))
print(left_shift_njit())
```
Output:
```bash
254
510
```
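A workaround sketch (the explicit cast back to `uint8` is my addition; it reproduces NumPy's fixed-width wraparound inside the jitted function):

```python
# Sketch: numba promotes the uint8 operand to a wider integer before
# shifting; casting the result back to uint8 restores NumPy's semantics.
from numba import njit
import numpy as np

@njit
def left_shift_u8(x, n):
    return np.uint8(np.left_shift(x, n))  # truncate back to 8 bits

print(left_shift_u8(np.uint8(255), 1))  # 254, matching plain NumPy
```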
Version information:
```bash
python: 3.10.16
numpy: 2.1.3
numba: 0.61.0
```
| open | 2025-03-16T08:58:22Z | 2025-03-17T17:30:21Z | https://github.com/numba/numba/issues/9986 | [
"bug - numerically incorrect",
"NumPy 2.0"
] | apiqwe | 1 |
tqdm/tqdm | pandas | 941 | Shorter description when console is not wide enough | Related to: https://github.com/tqdm/tqdm/issues/630
When my console width is too small compared to the description (e.g., set via `tqdm.set_description()`), I often get a multiline output, adding many new lines to stdout. I would rather have a shorter description displayed instead. For example:
```
one very long description: ██████████| 10000/10000 [00:13<00:00, 737.46it/s]
short one: █████████████████| 10000/10000 [00:13<00:00, 737.46it/s]
```
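In the meantime, a user-side sketch of the requested truncation (the fixed character budget reserved for the bar itself is an arbitrary assumption):

```python
# Sketch: shorten the description up front so the whole bar fits on one line.
import shutil
from tqdm import tqdm

def short_desc(desc, reserved=60):
    cols = shutil.get_terminal_size().columns
    return desc[: max(cols - reserved, 0)]

for _ in tqdm(range(10000), desc=short_desc("one very long description")):
    pass
```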
It could be an optional argument to `tqdm.__init__()` and/or to `tqdm.set_description()`. If you think it is of interest, I can make a PR. | closed | 2020-04-18T23:30:50Z | 2020-04-21T07:37:17Z | https://github.com/tqdm/tqdm/issues/941 | [
"question/docs ‽"
] | rronan | 1 |
voila-dashboards/voila | jupyter | 894 | Echart's panel extension does not load in voila | Hi there,
I am trying to render a notebook with Panel's `echarts` extension.
The code:
```python
import panel as pn
pn.extension('echarts')

gauge = {
    'tooltip': {
        'formatter': '{a} <br/>{b} : {c}%'
    },
    'series': [
        {
            'name': 'Gauge',
            'type': 'gauge',
            'detail': {'formatter': '{value}%'},
            'data': [{'value': 50, 'name': 'Value'}]
        }
    ]
};

gauge_pane = pn.pane.ECharts(gauge, width=400, height=400)
slider = pn.widgets.IntSlider(start=0, end=100, width=200)
slider.jscallback(args={'gauge': gauge_pane}, value="""
gauge.data.series[0].data[0].value = cb_obj.value
gauge.properties.data.change.emit()
""")
pn.Column(slider, gauge_pane)
```
On JupyterLab this works fine, but on Voilà it fails.
When looking at the web console I see that Voilà tries to load `echart.js` locally instead of using the CDN (while `echarts-gl` is always loaded via CDN).
echarts loading in jupyter:

echarts loading in voilà:

## Question
How can I force Voilà to load `echarts` from the CDN (something like `pn.extension('echarts', fromCDN=True)`), or even better, serve echart.js statically?
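(One thing worth trying, as a sketch: Panel's `inline` config controls whether JS/CSS resources are bundled locally or loaded from CDN; whether it behaves this way on the 0.x Panel line used here is an assumption on my part.)

```python
# Sketch: ask Panel to load its JS resources from CDN instead of
# inlining/serving them locally.
import panel as pn
pn.extension('echarts', inline=False)
```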
## Configuration
Jupyter and Voilà run in separate Docker containers behind a proxy; note that other Panel widgets work fine.
+ Voila
+ version 0.2.7 (with more recent versions the Panel interaction does not work, but I have not identified why yet)
+ Command: `/usr/local/bin/python /usr/local/bin/voila --no-browser --enable_nbextensions=True --strip_sources=False --port 8866 --server_url=/voila/ --base_url=/voila/ /opt/dashboards`
+ `jupyter nbextension list`
```
Known nbextensions:
config dir: /usr/local/etc/jupyter/nbconfig
notebook section
jupyter-matplotlib/extension enabled
- Validating: OK
jupyter_bokeh/extension enabled
- Validating: OK
nbdime/index enabled
- Validating: OK
voila/extension enabled
- Validating: OK
jupyter-js-widgets/extension enabled
- Validating: OK
```
+ `jupyter labextension list`
```
JupyterLab v2.3.1
Known labextensions:
app dir: /usr/local/share/jupyter/lab
@bokeh/jupyter_bokeh v2.0.4 enabled OK
@jupyter-widgets/jupyterlab-manager v2.0.0 enabled OK
@pyviz/jupyterlab_pyviz v1.0.4 enabled OK
nbdime-jupyterlab v2.1.0 enabled OK
```
+ Jupyter
+ version 2.3.1
+ `jupyter nbextension list`
```
Known nbextensions:
config dir: /usr/local/etc/jupyter/nbconfig
notebook section
jupyter-matplotlib/extension enabled
- Validating: OK
jupyter_bokeh/extension enabled
- Validating: OK
nbdime/index enabled
- Validating: OK
jupyter-js-widgets/extension enabled
- Validating: OK
```
+ `jupyter labextension list`
```
JupyterLab v2.3.1
Known labextensions:
app dir: /usr/local/share/jupyter/lab
@bokeh/jupyter_bokeh v2.0.4 enabled OK
@jupyter-widgets/jupyterlab-manager v2.0.0 enabled OK
@pyviz/jupyterlab_pyviz v1.0.4 enabled OK
nbdime-jupyterlab v2.1.0 enabled OK
```
| closed | 2021-05-25T16:19:53Z | 2023-08-03T13:47:49Z | https://github.com/voila-dashboards/voila/issues/894 | [] | dbeniamine | 2 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 55 | Questions concerning Relationship definition | Hi @jonra1993 ,
I have questions on the Relationship usage, class [Hero](https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/blob/9cc2566e5e5a38611e2d8d4e0f9d3bdc473db7bc/backend/app/app/models/hero_model.py#L14):
Why not add a created_by_id to the HeroBase instead of Hero? Why is it Optional, or can it be required, since we always have a user that created a hero?
Where can I find information on which `sa_relationship_kwargs` I should use, and should there be another one to define what happens when records are deleted?
And also: why is there no `back_populates` on the created_by relationship?
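For reference, a sketch of what `back_populates` plus `sa_relationship_kwargs` typically look like on a SQLModel relationship (illustrative names only, not the repo's actual models):

```python
# Illustrative sketch, not the repo's code: back_populates links both
# sides, and loading behaviour goes in sa_relationship_kwargs.
from typing import List, Optional
from sqlmodel import Field, Relationship, SQLModel

class User(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    heroes: List["Hero"] = Relationship(back_populates="created_by")

class Hero(SQLModel, table=True):
    id: Optional[int] = Field(default=None, primary_key=True)
    created_by_id: int = Field(foreign_key="user.id")
    created_by: User = Relationship(
        back_populates="heroes",
        sa_relationship_kwargs={"lazy": "selectin"},
    )
```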
Do we really need the primaryjoin argument? | closed | 2023-03-21T08:48:19Z | 2023-03-22T09:25:42Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/55 | [] | vi-klaas | 2 |
open-mmlab/mmdetection | pytorch | 11,850 | Use get_flops.py problem | I use this command:
```
python tools/analysis_tools/get_flops.py configs/glip/glip_atss_swin-t_a_fpn_dyhead_16xb2_ms-2x_funtune_coco.py
```
I want to get the GFLOPs and parameter count, but I get the following error.
mmcv == 2.1.0
mmdet == 3.3.0
mmengine == 0.10.4
```
loading annotations into memory...
Done (t=0.41s)
creating index...
index created!
Traceback (most recent call last):
File "tools/analysis_tools/get_flops.py", line 140, in <module>
main()
File "tools/analysis_tools/get_flops.py", line 120, in main
result = inference(args, logger)
File "tools/analysis_tools/get_flops.py", line 98, in inference
outputs = get_model_complexity_info(
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmengine/analysis/print_helper.py", line 748, in get_model_complexity_info
flops = flop_handler.total()
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmengine/analysis/jit_analysis.py", line 268, in total
stats = self._analyze()
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmengine/analysis/jit_analysis.py", line 570, in _analyze
graph = _get_scoped_trace_graph(self._model, self._inputs,
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmengine/analysis/jit_analysis.py", line 194, in _get_scoped_trace_graph
graph, _ = _get_trace_graph(module, inputs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/jit/_trace.py", line 1310, in _get_trace_graph
outs = ONNXTracedModule(
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/jit/_trace.py", line 138, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/jit/_trace.py", line 129, in wrapper
outs.append(self.inner(*trace_inputs))
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1582, in _call_impl
result = forward_call(*args, **kwargs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1522, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 96, in forward
return self._forward(inputs, data_samples)
File "/home/dsic/.virtualenvs/mmdet/lib/python3.8/site-packages/mmdet/models/detectors/single_stage.py", line 133, in _forward
results = self.bbox_head.forward(x)
TypeError: forward() missing 1 required positional argument: 'language_feats'
```
| open | 2024-07-12T09:49:19Z | 2024-07-12T09:49:35Z | https://github.com/open-mmlab/mmdetection/issues/11850 | [] | xianhong1101 | 0 |
Zeyi-Lin/HivisionIDPhotos | fastapi | 115 | Windows install tutorial! Covers both the WebUI and ComfyUI! | An excellent open-source project: super convenient and fast, and headed in exactly the right direction. Looking forward to it getting even better!
I made local installation tutorial videos for the WebUI and ComfyUI, for anyone who needs them!
1. WebUI installation tutorial:
YouTube: https://youtu.be/VDw2w1dKycE?si=0RBHliHebLwsszs1
Bilibili: https://www.bilibili.com/video/BV1oQpueBEbT/?share_source=copy_web&vd_source=c38dcdb72a68f2a4e0b3c0f4f9a5a03c
2. ComfyUI installation tutorial:
YouTube: https://youtu.be/dNE3tXHtuhc?si=27ZHve4vJjCooRgq
Bilibili: https://www.bilibili.com/video/BV1sA4BezECw/?share_source=copy_web&vd_source=c38dcdb72a68f2a4e0b3c0f4f9a5a03c | open | 2024-09-12T16:01:38Z | 2024-09-13T00:56:48Z | https://github.com/Zeyi-Lin/HivisionIDPhotos/issues/115 | [] | walkingwithGod2017 | 1 |
horovod/horovod | deep-learning | 3,298 | Error when running example script `pytorch_spark_mnist.py` | Hello, I was trying to run the [example file](https://github.com/horovod/horovod/blob/master/examples/spark/pytorch/pytorch_spark_mnist.py) `pytorch_spark_mnist.py` on a Spark cluster from a cloud provider, but got the following error. The script runs fine inside the `horovod-cpu` image locally. Could someone point me to what could be the cause? Thank you.
```
writing dataframes
train_data_path=file:///opt/spark/work-dir/intermediate_train_data.0
val_data_path=file:///opt/spark/work-dir/intermediate_val_data.0
train_partitions=18
Traceback (most recent call last):
File "/tmp/spark-f2a912ef-52c7-443e-adad-873d31030415/pytorch_spark_mnist.py", line 124, in <module>
torch_model = torch_estimator.fit(train_df).setOutputCols(['label_prob'])
File "/opt/dataflow/python/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 35, in fit
return super(HorovodEstimator, self).fit(df, params)
File "/opt/spark/python/lib/pyspark.zip/pyspark/ml/base.py", line 129, in fit
File "/opt/dataflow/python/lib/python3.6/site-packages/horovod/spark/common/estimator.py", line 77, in _fit
verbose=self.getVerbose()) as dataset_idx:
File "/usr/lib64/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/opt/dataflow/python/lib/python3.6/site-packages/horovod/spark/common/util.py", line 737, in prepare_data
num_partitions, num_processes, verbose)
File "/opt/dataflow/python/lib/python3.6/site-packages/horovod/spark/common/util.py", line 648, in _get_or_create_dataset
saved_file_list = _get_spark_df_saved_file_list(train_data_path)
File "/opt/dataflow/python/lib/python3.6/site-packages/horovod/spark/common/util.py", line 594, in _get_spark_df_saved_file_list
return list(spark_session.read.parquet(saved_path)._jdf.inputFiles())
File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 353, in parquet
File "/opt/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1305, in __call__
File "/opt/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 134, in deco
File "<string>", line 3, in raise_from
pyspark.sql.utils.AnalysisException: Unable to infer schema for Parquet. It must be specified manually.;
``` | closed | 2021-12-01T19:00:51Z | 2022-09-14T03:19:50Z | https://github.com/horovod/horovod/issues/3298 | [
"wontfix"
] | jizezhang | 2 |
JoeanAmier/TikTokDownloader | api | 292 | Can the length limit on the post-description part of file names be configured when launching the packaged executable? | With the desc field of the name_format parameter, an overly long post description can push the file name past some operating systems' length limits, so the file cannot be saved properly. This setting currently lives in the advanced configuration, which can only be changed when running the project from source. | open | 2024-09-06T03:02:06Z | 2025-03-04T16:46:32Z | https://github.com/JoeanAmier/TikTokDownloader/issues/292 | [
"功能优化(enhancement)"
] | Baibai-zs | 1 |
google-research/bert | tensorflow | 533 | How to generate the vocab.txt file from a corpus if I want to do pre-training from scratch | open | 2019-04-01T08:04:54Z | 2020-02-04T12:25:39Z | https://github.com/google-research/bert/issues/533 | [] | SeekPoint | 5 |
reloadware/reloadium | flask | 85 | "SyntaxError: 'return' with value in async generator" when using async context managers | ## Describe the bug*
Reloadium crashes when using async context managers.
## To Reproduce
1. Run this code in debug mode:
```py
import asyncio
from contextlib import asynccontextmanager


@asynccontextmanager
async def async_context_manager():
    print("entered async context manager")
    test_value = 1
    yield test_value
    print("exiting async context manager")


async def async_main():
    print("getting ready to enter async context manager")
    async with async_context_manager() as manager:
        print(manager)
    print("exited async context manager")


if __name__ == "__main__":
    asyncio.run(async_main())
```
## Expected behavior
No syntax error.
## Screenshots
<img width="831" alt="image" src="https://user-images.githubusercontent.com/15524072/213927505-e6146194-02fe-403d-9d6b-ae88bdd780c7.png">
## Desktop or remote (please complete the following information)
- OS: macOS
- OS version: macOS Monterey 12.4
- M1 chip: no
- Reloadium package version: 0.9.7
- PyCharm plugin version: 0.9.2
- Editor: PyCharm
- Python Version: Python 3.10.0
- Python Architecture: 64bit, 3.10.0 (v3.10.0:b494f5935c, Oct 4 2021, 14:59:20) [Clang 12.0.5 (clang-1205.0.22.11)]
- Run mode: Debug
## Additional context
If I remove the `asynccontextmanager` decorator, it does not crash, however the code does not work the way I'd like. This seems to be an error with reloadium because running without it, my code works as expected:
```shell
❯ python main.py
getting ready to enter async context manager
entered async context manager
1
exiting async context manager
exited async context manager
```
I would help debug reloadium and find a root cause but it seems corium is obfuscated. | closed | 2023-01-22T16:53:23Z | 2023-01-25T09:19:22Z | https://github.com/reloadware/reloadium/issues/85 | [] | LilSpazJoekp | 1 |
thtrieu/darkflow | tensorflow | 956 | AssertionError after training custom model | I trained a model using the --train argument provided.
After training, in the ckpt folder I got the following files:
1. `.index` file
2. `.meta` file
3. `.profile` file
4. `.data-00000-of-00001` file
I think the 4th file is the saved weights, so I loaded it using the `--load` argument along with the CFG I used for training the model.
I gave the following command: `flow --model cfg/yolov2-tiny-voc-1c.cfg --load bin/yolov2-tiny-voc-1c-4.weights`
This gave me an error:
```
AssertionError: expect 63082056 bytes, found 189197224
```
What is the fix? I am loading the correct CFG.
I even re-trained the model using the same CFG to check if it was the correct one and the training ran without any issues. | closed | 2018-12-20T16:49:53Z | 2019-12-31T10:27:35Z | https://github.com/thtrieu/darkflow/issues/956 | [] | tanmay-bhatnagar | 3 |
django-import-export/django-import-export | django | 1,437 | Skip row is not consistent when used inside and outside the admin | **Describe the bug**
Changes on the dataset are detected if using the admin to import a dataset.
Programmatic import shows no differences.
**To Reproduce**
CharField with null value in the database
`skip_unchanged = True` in the ModelResource
**Versions (please complete the following information):**
- Django Import Export: 2.8.0 and 3.0b3
- Python 3.8
- Django 3.2
**Expected behavior**
Same import result in both cases.
**Additional context**
Code from 3.0b3 as an example:
```python
def skip_row():
    ...
    else:
        # debug values for "name" column
        if field.column_name == 'name':
            print('instance: "%s"' % field.get_value(instance), 'original: "%s"' % field.get_value(original))
        if field.get_value(instance) != field.get_value(original):
            return False
```
Admin shows: instance "" original "None", so reports them as different
Management command shows: instance "None" original "None", no changes are reported
I think the Admin is wrong, or it could be a configuration issue :)
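A sketch of one possible normalization workaround (the `clean` override approach is my assumption, not the project's recommended fix):

```python
# Sketch: normalize empty strings to None during import so that admin
# imports and programmatic imports compare unchanged rows the same way.
from import_export import fields

class NullableCharField(fields.Field):
    def clean(self, data, **kwargs):
        value = super().clean(data, **kwargs)
        return value or None  # treat "" and None as the same thing
```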
| closed | 2022-05-17T15:04:52Z | 2024-03-13T10:17:24Z | https://github.com/django-import-export/django-import-export/issues/1437 | [
"bug"
] | manelclos | 14 |
mars-project/mars | numpy | 3,171 | [BUG] Ray DAG mode access Mars WEB Dashboard error | <!--
Thank you for your contribution!
Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue.
-->
**Describe the bug**
When accessing task status for Ray DAG mode in the Mars dashboard, I got an incorrect task status. The following task is finished, so the graph should be green instead of blank:


**To Reproduce**
To help us reproduce this bug, please provide the information below:
1. Your Python version: 3.8
2. The version of Mars you use: https://github.com/mars-project/mars/pull/3165
3. Versions of crucial packages, such as numpy, scipy and pandas: pandas 1.4.2, numpy 1.19.5, scipy 1.8.1
4. Full stack of the error.
```
(RayMainPool pid=54669) 2022-06-27 16:06:15,283 ERROR web.py:2239 -- 500 GET /api/session/SFJWHJesbcMFjANMcqTskZ6R/task/iQvlLWn2zC9z63HSWWbw5maQ/tileable_detail (127.0.0.1) 5.22ms
^C(RayMainPool pid=54669) 2022-06-27 16:06:16,283 ERROR core.py:82 -- TypeError when handling request with TaskWebAPIHandler.get_tileable_details
(RayMainPool pid=54669) Traceback (most recent call last):
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/web/core.py", line 70, in wrapped
(RayMainPool pid=54669) res = await self._create_or_get_url_future(
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/api/web.py", line 132, in get_tileable_details
(RayMainPool pid=54669) res = await oscar_api.get_tileable_details(task_id)
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/api/oscar.py", line 77, in get_tileable_details
(RayMainPool pid=54669) return await self._task_manager_ref.get_tileable_details(task_id)
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 263, in __pyx_actor_method_wrapper
(RayMainPool pid=54669) async with lock:
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 266, in mars.oscar.core.__pyx_actor_method_wrapper
(RayMainPool pid=54669) result = await result
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/supervisor/manager.py", line 206, in get_tileable_details
(RayMainPool pid=54669) return await processor_ref.get_tileable_details()
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/oscar/backends/context.py", line 196, in send
(RayMainPool pid=54669) return self._process_result_message(result)
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/oscar/backends/context.py", line 76, in _process_result_message
(RayMainPool pid=54669) raise message.as_instanceof_cause()
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/oscar/backends/pool.py", line 586, in send
(RayMainPool pid=54669) result = await self._run_coro(message.message_id, coro)
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/oscar/backends/pool.py", line 343, in _run_coro
(RayMainPool pid=54669) return await coro
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/oscar/api.py", line 120, in __on_receive__
(RayMainPool pid=54669) return await super().__on_receive__(message)
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 523, in __on_receive__
(RayMainPool pid=54669) raise ex
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 516, in mars.oscar.core._BaseActor.__on_receive__
(RayMainPool pid=54669) return await self._handle_actor_result(result)
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 401, in _handle_actor_result
(RayMainPool pid=54669) task_result = await coros[0]
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 444, in mars.oscar.core._BaseActor._run_actor_async_generator
(RayMainPool pid=54669) async with self._lock:
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 445, in mars.oscar.core._BaseActor._run_actor_async_generator
(RayMainPool pid=54669) with debug_async_timeout('actor_lock_timeout',
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 450, in mars.oscar.core._BaseActor._run_actor_async_generator
(RayMainPool pid=54669) res = await gen.athrow(*res)
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/supervisor/task.py", line 159, in get_tileable_details
(RayMainPool pid=54669) tileable_to_details = yield asyncio.to_thread(self._get_tileable_infos)
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 455, in mars.oscar.core._BaseActor._run_actor_async_generator
(RayMainPool pid=54669) res = await self._handle_actor_result(res)
(RayMainPool pid=54669) File "mars/oscar/core.pyx", line 375, in _handle_actor_result
(RayMainPool pid=54669) result = await result
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/lib/aio/_threads.py", line 36, in to_thread
(RayMainPool pid=54669) return await loop.run_in_executor(None, func_call)
(RayMainPool pid=54669) File "/Users/chaokunyang/anaconda3/envs/mars3.8/lib/python3.8/concurrent/futures/thread.py", line 57, in run
(RayMainPool pid=54669) result = self.fn(*self.args, **self.kwargs)
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/supervisor/task.py", line 100, in _get_tileable_infos
(RayMainPool pid=54669) subtask_id_to_results = self._get_all_subtask_results()
(RayMainPool pid=54669) File "/Users/chaokunyang/Desktop/chaokun/python/mars/mars/services/task/supervisor/task.py", line 64, in _get_all_subtask_results
(RayMainPool pid=54669) for stage in processor.stage_processors:
(RayMainPool pid=54669) TypeError: [address=ray://ray-cluster-1656317089/0/0, pid=54669] 'NoneType' object is not iterable
(RayMainPool pid=54669) 2022-06-27 16:06:16,289 ERROR web.py:2239 -- 500 GET /api/session/SFJWHJesbcMFjANMcqTskZ6R/task/iQvlLWn2zC9z63HSWWbw5maQ/tileable_detail (127.0.0.1) 8.59ms
```
5. Minimized code to reproduce the error.
`pytest -v -s mars/deploy/oscar/tests/test_ray_dag_oscar.py::test_iterative_tiling`:
```
@require_ray
@pytest.mark.asyncio
async def test_iterative_tiling(ray_start_regular_shared2, create_cluster):
    await test_local.test_iterative_tiling(create_cluster)
    time.sleep(100000)
```
**Expected behavior**
A clear and concise description of what you expected to happen.
**Additional context**
Add any other context about the problem here.
| open | 2022-06-27T08:09:18Z | 2022-06-27T08:13:36Z | https://github.com/mars-project/mars/issues/3171 | [
"type: bug",
"mod: web",
"mod: ray integration"
] | chaokunyang | 0 |
BeanieODM/beanie | asyncio | 312 | Caching on Multi Process applications | The cache is currently limited on single process applications.
Example: You fetch and cache document x on process 1 and fetch it also on process 2.
If you modify the document on process 1 and fetch it afterwards on process 2 it will probably use the old cached version.
Possible solutions:
1. #### Allow clearing cache:
You could create actions which initiate a cache clear for all other processes trough ipc for example.
So if you do any action on Document x on process 1, all cached entries related to Document x's ID will be cleared on process
2. #### Cache Drivers:
Implement (async) cache drivers which allow using 3rd party cached like Redis or allow custom cache implementations. | closed | 2022-07-27T07:41:58Z | 2023-05-06T14:35:51Z | https://github.com/BeanieODM/beanie/issues/312 | [
"Stale"
] | Luc1412 | 5 |
plotly/dash | data-science | 2,974 | `prevent_initial_call="initial_duplicate"` not working as expected | Hello!
```
dash==2.17.1
dash-core-components==2.0.0
dash-html-components==2.0.0
dash-mantine-components==0.14.4
```
**Describe the bug**
I have callbacks I wish to start on load (prevent_initial_call=False) and do not care about the order of results eg. loading data and then sending dmc notification.
These are either from the same page or different pages targeting the same output. Having the ability to send from different pages helps the consistency app as the user sees notifications persist between pages.
Setting `prevent_initial_call="initial_duplicate"` and `allow_duplicate=True` results in effectively `prevent_initial_Call=True`. If only set to on some of the duplicate results in the following error, which says to do the thing I am already doing.
`dash.exceptions.DuplicateCallback: allow_duplicate requires prevent_initial_call to be True. The order of the call is not guaranteed to be the same on every page load. To enable duplicate callback with initial call, set prevent_initial_call='initial_duplicate' or globally in the config prevent_initial_callbacks='initial_duplicate'`
I also tried setting this at the global level but results in:
`
TypeError: Dash() got an unexpected keyword argument 'prevent_initial_callback'
`
Perhaps I am misunderstanding the purpose but I just want to run the callbacks with full awareness that the order is not guaranteed (desirable in this case).
The only documentation I could find is [https://community.plotly.com/t/dash-2-9-2-released-partial-property-updates-with-patch-duplicate-outputs-dcc-geolocation-scatter-group-attributes-and-more/72114#allowing-duplicate-callback-outputs-7](https://community.plotly.com/t/dash-2-9-2-released-partial-property-updates-with-patch-duplicate-outputs-dcc-geolocation-scatter-group-attributes-and-more/72114#allowing-duplicate-callback-outputs-7)
I've attached an MRE here [https://github.com/Lxstr/dash-multi-page-app-demos/tree/prevent-initial-duplicate](https://github.com/Lxstr/dash-multi-page-app-demos/tree/prevent-initial-duplicate)
Thanks! | open | 2024-09-03T11:29:29Z | 2024-09-03T18:53:36Z | https://github.com/plotly/dash/issues/2974 | [
"bug",
"P2"
] | Lxstr | 0 |
akfamily/akshare | data-science | 5617 | AKShare API problem report | ak.stock_board_concept_hist_em and ak.stock_board_industry_hist_em fail |
Python version: 3.11
AKShare version: release-v1.15.93
Name of the API and the corresponding call:
Example code used:
```python
import akshare as ak
stock_board_concept_hist_em_df = ak.stock_board_concept_hist_em(symbol="车联网", period="daily", start_date="20220101", end_date="20221128", adjust="qfq")
print(stock_board_concept_hist_em_df)
```
Screenshot or description of the error:
```
Traceback (most recent call last):
File "/Users/allenqiang/xuangu/util/test.py", line 3, in <module>
stock_board_concept_hist_em_df = ak.stock_board_concept_hist_em(symbol="车联网", period="daily", start_date="20220101", end_date="20221128", adjust="qfq")
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/allenqiang/xuangu/.venv/lib/python3.13/site-packages/akshare/stock/stock_board_concept_em.py", line 126, in stock_board_concept_hist_em
stock_board_code = stock_board_concept_em_map[
~~~~~~~~~~~~~~~~~~~~~~~~~~~
stock_board_concept_em_map["板块名称"] == symbol
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
]["板块代码"].values[0]
~~~~~~~~~~~~~~~~~~~~^^^
IndexError: index 0 is out of bounds for axis 0 with size 0
```
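A quick sanity-check sketch (listing the concept-board names first, so a renamed or removed board does not trigger the bare IndexError):

```python
# Sketch: look up available concept boards before requesting history,
# since an unknown/renamed board name currently raises IndexError.
import akshare as ak

boards = ak.stock_board_concept_name_em()
print(boards[boards["板块名称"].str.contains("车联网")])
```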
Expected correct results:
No error; normal values are returned. | closed | 2025-02-16T03:48:20Z | 2025-02-17T07:30:36Z | https://github.com/akfamily/akshare/issues/5617 | [
"bug"
] | allenqiangwei | 0 |
supabase/supabase-py | flask | 12 | Update record failed AttributeError: 'RequestBuilder' object has no attribute 'eq' | # Bug report
Tried installing locally; it looks like there is some issue with postgrest-py.
When I tried updating a record, it throws this error:
```
AttributeError: 'RequestBuilder' object has no attribute 'eq'
```
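For reference, a sketch of the call order that avoids this error (the filter methods only become available after `.update()`/`.select()`; table and column names are illustrative):

```python
# Sketch: chain the verb first, then the filter; calling .eq() directly
# on table() hits the bare RequestBuilder, which has no filter methods.
supabase.table("todos").update({"done": True}).eq("id", 1).execute()
```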
| closed | 2021-04-01T21:54:46Z | 2021-10-30T20:35:04Z | https://github.com/supabase/supabase-py/issues/12 | [
"bug"
] | pvbhanuteja | 3 |
statsmodels/statsmodels | data-science | 8576 | SUMM: finite number of parameters of interest, large number of confounders | Consider that we are only interested in a few parameters in a (linear or non-gaussian) regression model, but have a large number of possible extra explanatory variables (confounders/controls).
What's the implication for cov_params and inference for the parameters of interest if we use regularization or penalization on the extra confounders or controls?
Both GAM and sure independence screening allow for unpenalized exog (parameters of interest).
I ran into this a few times, but never looked at the theory.
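For orientation, a toy sketch of the partialling-out idea the papers below build on (illustrative only, using scikit-learn rather than any proposed statsmodels API):

```python
# Toy sketch of partialling out high-dimensional controls with lasso
# (the double-selection / partialling-out idea, heavily simplified).
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 50
W = rng.normal(size=(n, p))            # many potential confounders
d = W[:, 0] + rng.normal(size=n)       # variable of interest, confounded by W
y = 0.5 * d + W[:, 0] + rng.normal(size=n)

d_res = d - LassoCV(cv=5).fit(W, d).predict(W)   # residualize d on controls
y_res = y - LassoCV(cv=5).fit(W, y).predict(W)   # residualize y on controls
beta = LinearRegression().fit(d_res[:, None], y_res).coef_[0]
print(beta)  # close to the true 0.5
```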
recent article with references
Galbraith, John W., and Victoria Zinde-Walsh. “Simple and Reliable Estimators of Coefficients of Interest in a Model with High-Dimensional Confounding Effects.” Journal of Econometrics 218, no. 2 (October 1, 2020): 609–32. https://doi.org/10.1016/j.jeconom.2020.04.031.
uses PCA on controls
an older article I saw some time ago
Belloni, Alexandre, Victor Chernozhukov, and Christian Hansen. "Inference on treatment effects after selection among high-dimensional controls." The Review of Economic Studies 81, no. 2 (2014): 608-650.
this also looks related
Cattaneo, Matias D., Michael Jansson, and Whitney K. Newey. “Inference in Linear Regression Models with Many Covariates and Heteroscedasticity.” Journal of the American Statistical Association 113, no. 523 (July 3, 2018): 1350–61. https://doi.org/10.1080/01621459.2017.1328360.
without regularization of confounders
we need some explicit penalized models #7336 #4590 #7128 ... | open | 2022-12-14T17:05:51Z | 2023-02-16T17:18:53Z | https://github.com/statsmodels/statsmodels/issues/8576 | [
"type-enh",
"comp-regression",
"topic-penalization"
] | josef-pkt | 1 |
3b1b/manim | python | 1,960 | The program cannot find the tex log file | ### Describe the bug
This problem was encountered when running the GraphExample code from the official documentation (https://3b1b.github.io/manim/getting_started/example_scenes.html). The bin folder path of LaTeX has been added to the system environment variables, and MiKTeX has completed all updates and installed all packages.
**Code**:
```python
sin_label = axes.get_graph_label(sin_graph, "\\sin(x)")
relu_label = axes.get_graph_label(relu_graph, Text("ReLU"))
step_label = axes.get_graph_label(step_graph, Text("Step"), x=4)
```
**Wrong display or Error traceback**:
```
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Scripts\manimgl.exe\__main__.py", line 7, in <module>
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\__main__.py", line 25, in main
scene.run()
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\scene\scene.py", line 91, in run
self.construct()
File "C:\Users\Pla\Desktop\python_test_file\Manim\Manimgl\axes.py", line 36, in construct
sin_label = axes.get_graph_label(sin_graph, "\\sin(x)")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\coordinate_systems.py", line 239, in get_graph_label
label = Tex(label)
^^^^^^^^^^
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\tex_mobject.py", line 188, in __init__
super().__init__(full_string, **kwargs)
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\tex_mobject.py", line 45, in __init__
super().__init__(**kwargs)
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\svg_mobject.py", line 65, in __init__
self.init_svg_mobject()
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\svg_mobject.py", line 76, in init_svg_mobject
self.generate_mobject()
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\svg_mobject.py", line 91, in generate_mobject
file_path = self.get_file_path()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\mobject\svg\tex_mobject.py", line 66, in get_file_path
file_path = tex_to_svg_file(full_tex)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\utils\tex_file_writing.py", line 52, in tex_to_svg_file
tex_to_svg(tex_file_content, svg_file)
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\utils\tex_file_writing.py", line 60, in tex_to_svg
svg_file = dvi_to_svg(tex_to_dvi(tex_file))
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Pla\AppData\Local\Programs\Python\Python311\Lib\site-packages\manimlib\utils\tex_file_writing.py", line 91, in tex_to_dvi
with open(log_file, "r", encoding="utf-8") as file:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: 'C:\\Users\\PLA\\AppData\\Local\\Temp\\Tex\\42c962cc458aefe6.log'
```
| open | 2023-01-11T07:25:02Z | 2024-03-26T02:57:56Z | https://github.com/3b1b/manim/issues/1960 | [
"bug"
] | PlagueDoctors | 2 |
comfyanonymous/ComfyUI | pytorch | 6,304 | /api/users cannot access | ### Expected Behavior
I hope to get the correct webpage.
### Actual Behavior



### Steps to Reproduce
i run the code python main.py --listen 0.0.0.0, then the webpage cannot access
### Debug Logs
```powershell


```
### Other
_No response_ | closed | 2025-01-01T10:35:55Z | 2025-01-02T02:25:08Z | https://github.com/comfyanonymous/ComfyUI/issues/6304 | [
"Potential Bug"
] | xiaolinpeter | 3 |
jumpserver/jumpserver | django | 14,368 | [Bug] Uploading a folder named normalize.css fails with "cannot create folder": "mkdir /tmp/normalize.css: not a directory" | ### Product Version
3.10.13
### Product Edition
- [ ] Community Edition
- [X] Enterprise Edition
- [ ] Enterprise Trial Edition
### Installation Method
- [ ] Online Installation (One-click command installation)
- [ ] Offline Package Installation
- [ ] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source Code
### Environment Information
Enterprise Edition v3.10.13
### 🐛 Bug Description
When the folder name is normalize.css, uploading the folder fails with the error "cannot create folder": "mkdir /tmp/normalize.css: not a directory".
### Recurrence Steps
1. Create a new folder named normalize.css and put some arbitrary files in it.
2. From the bastion host's web UI, open a Linux asset using the SFTP protocol.
3. Select the normalize.css folder for upload; the upload fails with an error.
<img width="875" alt="image" src="https://github.com/user-attachments/assets/7b9902c3-abf8-4c3c-ae66-7679807b6db4">
### Expected Behavior
_No response_
### Additional Information
_No response_
### Attempted Solutions
_No response_ | closed | 2024-10-28T02:08:51Z | 2024-11-28T08:35:58Z | https://github.com/jumpserver/jumpserver/issues/14368 | [
"🐛 Bug",
"⏰ Pending Reproduction"
] | liufei20151013 | 5 |
aiogram/aiogram | asyncio | 852 | how to send messages periodically? | Hi, thanks for developing aiogram.
Suppose I want to build a bot that sends a message periodically (every hour / every day / every week). I don't see an example in your documentation or via a Google search; I hope there is a solution.
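A common pattern is a background asyncio task next to the polling loop; this is a sketch where `CHAT_ID` and the token are placeholders:

```python
# Sketch: periodic sender as a background asyncio task.
import asyncio
from aiogram import Bot

CHAT_ID = 123456789        # placeholder
bot = Bot(token="TOKEN")   # placeholder

async def periodic(interval_seconds: int):
    while True:
        await bot.send_message(CHAT_ID, "scheduled message")
        await asyncio.sleep(interval_seconds)

# e.g. schedule before starting polling:
# asyncio.create_task(periodic(3600))  # hourly
```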
| closed | 2022-02-27T16:09:10Z | 2022-03-21T16:57:10Z | https://github.com/aiogram/aiogram/issues/852 | [
"question issue",
"moved to discussion"
] | sajjadsabzkar | 2 |
Urinx/WeixinBot | api | 72 | How do I @mention a specific person in a group when sending a text message? | How should the parameters be passed?
| open | 2016-07-31T15:10:28Z | 2016-10-21T01:23:55Z | https://github.com/Urinx/WeixinBot/issues/72 | [] | kenny1002 | 1 |
flaskbb/flaskbb | flask | 451 | Why can't my FlaskBB app.py run in PyCharm? I have virtualenv installed | ```
C:\Users\campu\Tryvenv\Scripts\python.exe C:/MyNuts/CodingWarehouse/cmcc-flaskbb/flaskbb-master/flaskbb/app.py
Traceback (most recent call last):
File "C:\Anaconda3\lib\site-packages\werkzeug\http.py", line 23, in <module>
from email.utils import parsedate_tz
File "C:\MyNuts\CodingWarehouse\cmcc-flaskbb\flaskbb-master\flaskbb\email.py", line 12, in <module>
from flask import render_template
ImportError: cannot import name 'render_template'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:/MyNuts/CodingWarehouse/cmcc-flaskbb/flaskbb-master/flaskbb/app.py", line 19, in <module>
from flask import Flask, request
File "C:\Anaconda3\lib\site-packages\flask\__init__.py", line 17, in <module>
from werkzeug.exceptions import abort
File "C:\Anaconda3\lib\site-packages\werkzeug\__init__.py", line 151, in <module>
__import__('werkzeug.exceptions')
File "C:\Anaconda3\lib\site-packages\werkzeug\exceptions.py", line 71, in <module>
from werkzeug.wrappers import Response
File "C:\Anaconda3\lib\site-packages\werkzeug\wrappers.py", line 27, in <module>
from werkzeug.http import HTTP_STATUS_CODES, \
File "C:\Anaconda3\lib\site-packages\werkzeug\http.py", line 25, in <module>
from email.Utils import parsedate_tz
File "C:\MyNuts\CodingWarehouse\cmcc-flaskbb\flaskbb-master\flaskbb\email.py", line 12, in <module>
from flask import render_template
ImportError: cannot import name 'render_template'
Process finished with exit code 1
```
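The traceback itself points at the likely cause: werkzeug's `from email.utils import parsedate_tz` is being resolved to the project's own `flaskbb/email.py`, i.e. the stdlib `email` package gets shadowed when `app.py` is run from inside the package directory. A quick check (sketch):

```python
# Sketch: confirm which module "email" resolves to from the run directory.
import email
print(email.__file__)  # a stdlib path is expected; flaskbb/email.py means shadowing
```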
| closed | 2018-04-24T05:39:28Z | 2018-04-26T07:16:00Z | https://github.com/flaskbb/flaskbb/issues/451 | [] | campusstella | 1 |
inventree/InvenTree | django | 8,804 | [FR] Custom Site Icon & favicon option in web panel. | ### Please verify that this feature request has NOT been suggested before.
- [x] I checked and didn't find a similar feature request
### Problem statement
I would really love to be able to change the icon of InvenTree to something of my own, which would also mean changing the favicon. You can change the "Company name", yet you can't change, say, the "Company logo"? I am kind of shocked this is not already a feature. Anywho, I hope this is something that could be implemented!
### Suggested solution
An option in System Settings under the Server tab to upload a "Company Logo"?
### Describe alternatives you've considered
I have tried searching for all "favicon.ico" files in my InvenTree DigitalOcean droplet and I tried to replace three or four of them, but that didn't work. Not only that, I see that the main logo is an SVG from what looks to be an external source.
### Examples of other systems
_No response_
### Do you want to develop this?
- [ ] I want to develop this.
- [x] If it's not crazy expensive, I'd be willing to fund this feature implementation. | open | 2024-12-31T02:57:25Z | 2025-03-20T14:47:56Z | https://github.com/inventree/InvenTree/issues/8804 | [
"enhancement"
] | chadleylyell | 2 |
developmentseed/lonboard | jupyter | 566 | Why "Input being reprojected to EPSG:4326 CRS"? | I'm plotting multiple layers of a Map and I get a bunch of warnings saying:
`Input being reprojected to EPSG:4326 CRS`. Why is that warning showing up? I think a more explanatory message would be helpful.
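(For context: the warning fires when layer data arrives in a different CRS and lonboard reprojects it for you; reprojecting up front silences it. A sketch, assuming the data lives in a GeoPandas GeoDataFrame:)

```python
# Sketch: reproject to EPSG:4326 before building the layer, so lonboard
# does not have to reproject (and warn) itself.
gdf = gdf.to_crs(epsg=4326)
```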
| closed | 2024-07-09T00:00:36Z | 2024-09-24T19:48:47Z | https://github.com/developmentseed/lonboard/issues/566 | [] | ncclementi | 2 |
albumentations-team/albumentations | deep-learning | 1,914 | CoarseDropout removes keypoints even when `remove_invisible=False` | ## Describe the bug
CoarseDropout removes keypoints even when `remove_invisible=False`.
If any hole produced by CoarseDropout contains a keypoint's coordinates, that keypoint is removed from the results.
### To Reproduce
Set up
```python
albumentations.Compose(
    [albumentations.CoarseDropout(...),
     ...],
    keypoint_params=albumentations.KeypointParams(format='xy', remove_invisible=False, angle_in_degrees=True)
)
```
and perform the transforms.
### Environment
- albumentations ==1.4.14
### Expected behavior
The keypoints should be kept even when the holes contain them.
### Screenshots


### Additional context
Same issue on #1838 #1469
To keep the keypoints regardless of `remove_invisible=False`, modify `apply_to_keypoints()` in `coarse_dropout.py` as below, from:
`return [keypoint for keypoint in keypoints if not any(keypoint_in_hole(keypoint, hole) for hole in holes)]`
to
`return [keypoint for keypoint in keypoints]`
| closed | 2024-09-05T08:19:11Z | 2024-09-26T23:43:29Z | https://github.com/albumentations-team/albumentations/issues/1914 | [
"bug"
] | ZombaSY | 2 |
noirbizarre/flask-restplus | flask | 415 | embedded swagger into project | Great appreciate for your work, I install in my home computer with pip, it can install smoothly, but in company, I can not access pypi, and company owned jfrog service does contain this module, I install flask_restplus in the follow step:
1. read the change log, got the swagger version: 2.2.6,
2. download specific swagger version, uncompress and put it into flask_restplus folder, rename it to static,
3. run 'python setup.py develop', it installed completely.
if flask_restplus rely on specific swagger version, why not just put that specific swagger version into your package?? | closed | 2018-04-03T04:22:08Z | 2018-04-04T14:34:10Z | https://github.com/noirbizarre/flask-restplus/issues/415 | [] | hanleilei | 0 |
strawberry-graphql/strawberry | asyncio | 2,914 | First-Class support for `@stream`/`@defer` in strawberry | ## First-Class support for `@stream`/`@defer` in strawberry
This issue is going to collect all necessary steps for an awesome stream and defer devX in Strawberry 🍓
First steps collected today together in discovery session with @patrick91 @bellini666
#### ToDos for an initial support:
- [ ] Add support for `[async/sync]` generators in return types
- [ ] Make sure `GraphQLDeferDirective`, `GraphQLStreamDirective`, are in the GQL-Core schema
Flag in Schema(query=Query, config={enable_stream_defer: False}) - default true
- [ ] Add incremental delivery support to all the views
- [ ] FastAPI integration -> Maybe in the Async Base View?
- [ ] Explore Sync View Integration
### long term goals
_incomplete list of problems / design improvement potential of the current raw implementation_
#### Problem: streaming / n+1 -> first-level dataloaders are no longer working as every instance is resolved 1:1
#### Possible solutions
- dig deeper into https://github.com/robrichard/defer-stream-wg/discussions/40
- custom query execution plan engine
- Add @streamable directive to schema fields automatically if field is streamable, including custom validation rule
Some playground code: https://gist.github.com/erikwrede/993e1fc174ee75b11c491210e4a9136b | open | 2023-07-02T21:32:42Z | 2025-03-20T15:56:16Z | https://github.com/strawberry-graphql/strawberry/issues/2914 | [] | erikwrede | 9 |
serengil/deepface | machine-learning | 727 | ValueError: Error when checking input: expected conv2d_49_input to have 4 dimensions, but got array with shape (1, 48, 48) | ```
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
DeepFace.stream(db_path = "C:/Users/Admin/Desktop/database")
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\deepface\DeepFace.py", line 744, in stream
frame_threshold=frame_threshold,
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\deepface\commons\realtime.py", line 180, in analysis
silent=True,
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\deepface\DeepFace.py", line 336, in analyze
emotion_predictions = models["emotion"].predict(img_gray, verbose=0)[0, :]
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 1149, in predict
x, _, _ = self._standardize_user_data(x)
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training.py", line 751, in _standardize_user_data
exception_prefix='input')
File "C:\Users\Admin\AppData\Local\Programs\Python\Python37\lib\site-packages\keras\engine\training_utils.py", line 128, in standardize_input_data
'with shape ' + str(data_shape))
ValueError: Error when checking input: expected conv2d_49_input to have 4 dimensions, but got array with shape (1, 48, 48)
``` | closed | 2023-04-20T14:45:25Z | 2023-04-23T12:23:33Z | https://github.com/serengil/deepface/issues/727 | [
"bug"
] | Darrshan-Sankar | 2 |
alirezamika/autoscraper | automation | 7 | add URL with save & load function | Add the URL to the save & load functions so that we don't need to build or remember the URL. | closed | 2020-09-07T05:52:50Z | 2020-09-08T09:29:57Z | https://github.com/alirezamika/autoscraper/issues/7 | [] | imadarsh1001 | 1 |
pyeve/eve | flask | 1,321 | How to serve eve at different port? | The default Flask port is 5000; I want to serve at 5001. Where do I set this?
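A sketch of the usual approach (the Eve app object subclasses Flask, so `run()` takes the same arguments):

```python
# Sketch: Eve is a Flask subclass, so the port is set the Flask way.
from eve import Eve

app = Eve()
app.run(host="0.0.0.0", port=5001)
```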
Thanks! | closed | 2019-10-23T02:19:06Z | 2022-10-03T15:53:50Z | https://github.com/pyeve/eve/issues/1321 | [] | huzech | 1 |
jupyter-incubator/sparkmagic | jupyter | 804 | [QST] How to automatically load sparkmagic.magics when opening a new IPython kernel tab | Is there any way to avoid typing `%load_ext sparkmagic.magics` every time for a new notebook?
I tried `jupyter serverextension enable --py sparkmagic` and I can see the added extension in `$USER/.jupyter/jupyter_notebook_config.json`. But when I open a new notebook, it still requires the `load_ext` action.
I also tried adding the field `c.NotebookApp.nbserver_extensions = {"sparkmagic.magics": True}` to `jupyter_notebook_config.py`, but this results in an error:
```
W 2023-01-31 18:10:18.354 ServerApp] sparkmagic.magics | extension failed loading with message: 'NoneType' object is not callable
[E 2023-01-31 18:10:18.354 ServerApp] sparkmagic.magics | stack trace
Traceback (most recent call last):
File "/home/allxu/miniconda3/lib/python3.8/site-packages/jupyter_server/extension/manager.py", line 362, in load_extension
extension.load_all_points(self.serverapp)
File "/home/allxu/miniconda3/lib/python3.8/site-packages/jupyter_server/extension/manager.py", line 234, in load_all_points
return [self.load_point(point_name, serverapp) for point_name in self.extension_points]
File "/home/allxu/miniconda3/lib/python3.8/site-packages/jupyter_server/extension/manager.py", line 234, in <listcomp>
return [self.load_point(point_name, serverapp) for point_name in self.extension_points]
File "/home/allxu/miniconda3/lib/python3.8/site-packages/jupyter_server/extension/manager.py", line 225, in load_point
return point.load(serverapp)
File "/home/allxu/miniconda3/lib/python3.8/site-packages/jupyter_server/extension/manager.py", line 147, in load
return loader(serverapp)
TypeError: 'NoneType' object is not callable
``` | closed | 2023-01-31T10:18:42Z | 2023-09-13T15:32:23Z | https://github.com/jupyter-incubator/sparkmagic/issues/804 | [] | wjxiz1992 | 1 |
521xueweihan/HelloGitHub | python | 2,132 | A tool for generating Git changelogs, with a GitLab CI/CD workflow that generates and pushes Releases and a GitHub Actions workflow for pushing Releases | ## Project recommendation
- Project URL: [Git-Release-Workflow](https://github.com/shencangsheng/Git-Release-Workflow)
- Category: please choose one (Other)
- Planned future updates for the project:
- Project description:
  - Required: suitable for generating customized Releases with CI/CD on private GitLab instances inside a company.
  - Optional: a quick way for beginners to learn GitLab CI/CD and GitHub Actions.
- Description length (excluding example code): A tool for generating Git changelogs, with a GitLab CI/CD workflow that generates and pushes Releases and a GitHub Actions workflow for pushing Releases
- Reason for recommendation: clear, easy-to-understand scripts and examples for quickly learning Git-related workflows.
- Example code:
```yml
release:
  rules:
    - if: '$CI_COMMIT_TAG != null && $CI_PIPELINE_SOURCE == "push"'
      when: on_success
  stage: release-note
  image: shencangsheng/gitlab-pipeline-release:latest
  script:
    # Note: this differs by GitLab version
    # <= 13.x use post-gitlab-release-13x
    # >= 14.x use post-gitlab-release-14x
    - post-gitlab-release-14x
```
- Screenshot:

| closed | 2022-03-21T05:10:24Z | 2022-03-23T10:50:07Z | https://github.com/521xueweihan/HelloGitHub/issues/2132 | [] | shencangsheng | 1 |
horovod/horovod | tensorflow | 3,315 | recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all' failed | **Environment:**
1. Framework (TensorFlow, Keras, PyTorch, MXNet): PyTorch
2. Framework version: 1.5.1
3. Horovod version: 0.23.0
4. MPI version:
5. CUDA version: 10.2
6. NCCL version: 2
7. Python version: 3.7
8. Spark / PySpark version:
9. Ray version:
10. OS and version:
11. GCC version: 7.5.0
12. CMake version: 0.23.0
**Checklist:**
1. Did you search issues to find if somebody asked this question before?
2. If your question is about hang, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/running.rst)?
3. If your question is about docker, did you read [this doc](https://github.com/horovod/horovod/blob/master/docs/docker.rst)?
4. Did you check if you question is answered in the [troubleshooting guide](https://github.com/horovod/horovod/blob/master/docs/troubleshooting.rst)?
**Bug report:**
Please describe erroneous behavior you're observing and steps to reproduce it.
running build_ext
-- Could not find CCache. Consider installing CCache to speed up compilation.
-- The CXX compiler identification is GNU 7.3.0
-- Check for working CXX compiler: /home/xcc/anaconda3/envs/lanegcn/bin/x86_64-conda_cos6-linux-gnu-c++
-- Check for working CXX compiler: /home/xcc/anaconda3/envs/lanegcn/bin/x86_64-conda_cos6-linux-gnu-c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Build architecture flags: -mf16c -mavx -mfma
-- Using command /home/xcc/anaconda3/envs/lanegcn/bin/python
-- Found CUDA: /usr/local/cuda-10.2 (found version "10.2")
-- Linking against static NCCL library
-- Found NCCL: /usr/include
-- Determining NCCL version from the header file: /usr/include/nccl.h
-- NCCL_MAJOR_VERSION: 2
-- Found NCCL (include: /usr/include, library: /usr/lib/x86_64-linux-gnu/libnccl_static.a)
-- Found NVTX: /usr/local/cuda-10.2/include
-- Found NVTX (include: /usr/local/cuda-10.2/include, library: dl)
-- Found Pytorch: 1.5.1 (found suitable version "1.5.1", minimum required is "1.2.0")
-- HVD_NVCC_COMPILE_FLAGS = --std=c++11 -O3 -Xcompiler -fPIC -gencode arch=compute_30,code=sm_30 -gencode arch=compute_32,code=sm_32 -gencode arch=compute_35,code=sm_35 -gencode arch=compute_37,code=sm_37 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_53,code=sm_53 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_62,code=sm_62 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_72,code=sm_72 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_75,code=compute_75
-- Configuring done
CMake Warning at horovod/torch/CMakeLists.txt:81 (add_library):
Cannot generate a safe runtime search path for target pytorch because files
in some directories may conflict with libraries in implicit directories:
runtime library [libcudart.so.10.2] in /home/xcc/anaconda3/envs/lanegcn/lib may be hidden by files in:
/usr/local/cuda-10.2/lib64
Some of these libraries may not be found correctly.
In file included from /usr/local/cuda-10.2/include/driver_types.h:77:0,
from /usr/local/cuda-10.2/include/builtin_types.h:59,
from /usr/local/cuda-10.2/include/cuda_runtime.h:91,
from <command-line>:0:
/usr/include/limits.h:26:10: fatal error: bits/libc-header-start.h: No such file or directory
#include <bits/libc-header-start.h>
^~~~~~~~~~~~~~~~~~~~~~~~~~
compilation terminated.
CMake Error at compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o.RelWithDebInfo.cmake:219 (message):
Error generating
/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo/horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir//./compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o
horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/build.make:70: recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o' failed
make[2]: *** [horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/compatible_horovod_cuda_kernels_generated_cuda_kernels.cu.o] Error 1
make[2]: Leaving directory '/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo'
CMakeFiles/Makefile2:215: recipe for target 'horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all' failed
make[1]: *** [horovod/common/ops/cuda/CMakeFiles/compatible_horovod_cuda_kernels.dir/all] Error 2
make[1]: Leaving directory '/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/build/temp.linux-x86_64-3.7/RelWithDebInfo'
Makefile:83: recipe for target 'all' failed
make: *** [all] Error 2
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py", line 211, in <module>
'horovodrun = horovod.runner.launch:run_commandline'
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/__init__.py", line 153, in setup
return distutils.core.setup(**attrs)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/core.py", line 148, in setup
dist.run_commands()
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 966, in run_commands
self.run_command(cmd)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/command/install.py", line 61, in run
return orig.install.run(self)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/install.py", line 545, in run
self.run_command('build')
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/build.py", line 135, in run
self.run_command(cmd_name)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/cmd.py", line 313, in run_command
self.distribution.run_command(command)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/dist.py", line 985, in run_command
cmd_obj.run()
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/site-packages/setuptools/command/build_ext.py", line 79, in run
_build_ext.run(self)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/distutils/command/build_ext.py", line 340, in run
self.build_extensions()
File "/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py", line 101, in build_extensions
cwd=cmake_build_dir)
File "/home/xcc/anaconda3/envs/lanegcn/lib/python3.7/subprocess.py", line 363, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', '--build', '.', '--config', 'RelWithDebInfo', '--', 'VERBOSE=1']' returned non-zero exit status 2.
----------------------------------------
ERROR: Command errored out with exit status 1: /home/xcc/anaconda3/envs/lanegcn/bin/python -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py'"'"'; __file__='"'"'/tmp/pip-install-skjf0ukf/horovod_598e328bc68b407fb152042944d44da9/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-h8bnj0_b/install-record.txt --single-version-externally-managed --compile --install-headers /home/xcc/anaconda3/envs/lanegcn/include/python3.7m/horovod Check the logs for full command output.
steps to reproduce it:
HOROVOD_NCCL_INCLUDE=/usr/include HOROVOD_NCCL_LIB=/usr/lib/x86_64-linux-gnu HOROVOD_CUDA_HOME=/usr/local/cuda-10.2 HOROVOD_CUDA_INCLUDE=/usr/local/cuda-10.2/include HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITHOUT_MPI=1 HOROVOD_WITHOUT_TENSORFLOW=1 HOROVOD_WITHOUT_MXNET=1 pip install horovod
BTW,the cmake version is 3.10.2,and i do "pip install mpi4py"(in conda env)
| open | 2021-12-13T13:25:36Z | 2021-12-13T14:06:00Z | https://github.com/horovod/horovod/issues/3315 | [
"bug"
] | coco-99-coco | 0 |
litl/backoff | asyncio | 180 | How do I use this? | I have an API I'm trying to hit with the code below. How do I integrate this code with backoff? I don't see any examples and I'm confused about how to apply it to my code.
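A minimal sketch of the usual `backoff` pattern for a call like this (the retry policy, helper name, and placeholder `address` below are assumptions, not taken from the snippet):
```python
import backoff
import requests

HEADERS = {"User-Agent": "a1projects/1.0"}

# Retry with exponential backoff on any requests-level failure, up to 5 tries.
@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=5)
def fetch_rewards_sum(address: str, query: str) -> dict:
    url = f"https://api.helium.io/v1/hotspots/{address}/rewards/sum?{query}"
    resp = requests.get(url, headers=HEADERS, timeout=10)
    resp.raise_for_status()  # 4xx/5xx become HTTPError, which backoff retries
    return resp.json()

daily = fetch_rewards_sum("your-hotspot-address", "min_time=-1%20day&bucket=day")
print(f"{daily['data'][0]['total']:.2f}")
```
The snippet in question: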
```python
import time
import requests as r

# `address` is assumed to be defined earlier in the original program
daily = "min_time=-1%20day&bucket=day"
weekly = "min_time=-1%20week&bucket=week"
monthly = "min_time=-4%20week&bucket=week"
headers = {'User-Agent': 'a1projects/1.0'}
url = "https://api.helium.io/v1/hotspots/" + address + "/rewards/sum?" + daily
response = r.get(url, headers=headers)  # .json()
if str(response) == "<Response [200]>":
    time.sleep(.6)
    response = response.json()
    data = response['data'][0]
    daily_rewards = data['total']
    daily_rewards = f"{daily_rewards:.2f}"
```
 | open | 2022-11-03T14:17:55Z | 2022-11-03T14:17:55Z | https://github.com/litl/backoff/issues/180 | [] | wprojects | 0 |
litestar-org/litestar | asyncio | 3,786 | Enhancement: Be able to enforce `min_length`, `max_length`, etc. for `SecretString` | ### Summary
The `SecretString` feature is really nice, and other frameworks do similar things nowadays. However, unlike those other frameworks, Litestar doesn't appear to have the ability to set constraints on this type.
I'd much prefer to use `SecretString` over a basic `str` for things like passwords, but there are some cases where I'm legally obligated to enforce a minimum length, so this is important.
### Basic Example
Using Pydantic as an example, I can do this and it all works just fine:
```python
password: SecretStr = Field(min_length=12, max_length=64)
```
But when I try to achieve the same thing with Litestar:
```python
password: Annotated[SecretString, Meta(min_length=12, max_length=64)]
```
I get an error:
```
TypeError: Can only set `min_length` on a str, bytes, or collection type -
type `typing.Annotated[litestar.datastructures.secret_values.SecretString, msgspec.Meta(min_length=12)]` is invalid
```
My current workaround feels extremely hacky:
```python
from msgspec import Struct
from litestar.datastructures.secret_values import SecretString
from litestar.exceptions import HTTPException


class Signup(Struct):
username: str
password: SecretString
def __post_init__(self):
value = self.password.get_secret()
if len(value) < 12:
raise HTTPException(status_code=400, detail='Passwords must be at least 12 characters.')
if len(value) > 64:
raise HTTPException(status_code=400, detail='Passwords must not be more than 64 characters.')
del value
```
Ideally I'd be able to encapsulate all of this validation logic into a single field/type definition that I could then reuse multiple places.
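For now the closest I can get is factoring the checks into a reusable helper (a sketch of my own — `check_secret_length` is not Litestar API):
```python
from msgspec import Struct
from litestar.datastructures.secret_values import SecretString
from litestar.exceptions import HTTPException

def check_secret_length(secret: SecretString, min_length: int, max_length: int) -> None:
    # Keeps the plain value scoped to this function instead of each model.
    length = len(secret.get_secret())
    if not (min_length <= length <= max_length):
        raise HTTPException(
            status_code=400,
            detail=f"Value must be between {min_length} and {max_length} characters.",
        )

class Signup(Struct):
    username: str
    password: SecretString

    def __post_init__(self) -> None:
        check_secret_length(self.password, 12, 64)
```
but the validation still lives in `__post_init__` rather than in the type annotation.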
### Drawbacks and Impact
It seems to me this feature could only be a good thing.
### Unresolved questions
Is there already a better way to do this than my current workaround? | open | 2024-10-13T03:07:09Z | 2025-03-20T15:54:58Z | https://github.com/litestar-org/litestar/issues/3786 | [
"Enhancement"
] | bdoms | 1 |
horovod/horovod | deep-learning | 3,247 | CUDA OOM when using horovod.torch for multi-machine training | Excuse me! I recently ran into a strange problem. The original code uses PyTorch DDP: with `single_gpu_batch_size=4` fixed, both the 1-machine / 8-GPU run (total batch size 32) and the 2-machine / 16-GPU run (total batch size 64) work normally. After porting to Horovod, again with `single_gpu_batch_size=4`, the 1-machine / 8-GPU run (total batch size 32) still works, but the 2-machine / 16-GPU run (total batch size 64) hits CUDA OOM:
<img width="1164" alt="Screenshot 2021-10-28 8.12.20 PM" src="https://user-images.githubusercontent.com/25579435/139252997-8c8cd62a-cc2e-4be3-9ae6-d3ac7e5aeed5.png">
What could be causing this problem? Looking forward to your reply!
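For context, the documented horovod.torch setup looks roughly like the sketch below (the model, learning rate, and optimizer are stand-ins, not my real code); pinning each process to a GPU by *local* rank is one of the usual multi-machine differences worth double-checking:
```python
import horovod.torch as hvd
import torch

hvd.init()
# Pin each process to one GPU by local rank (not global rank); without this,
# several ranks on a node can end up allocating on the same default device.
torch.cuda.set_device(hvd.local_rank())

model = torch.nn.Linear(10, 10).cuda()  # stand-in for the real model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer and broadcast initial state from rank 0.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)
```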
### Environment:
Horovod: 0.19.2 or 0.22.1
PyTorch: 1.7.1
NCCL: 2.7.8
CUDA: 11.0 | closed | 2021-10-28T12:14:21Z | 2022-01-12T00:09:28Z | https://github.com/horovod/horovod/issues/3247 | [
"question",
"wontfix"
] | wuyujiji | 4 |
cosmic-byte/flask-restplus-boilerplate | sqlalchemy | 6 | Using PostgreSQL results in failing tests | I'm using Postgres, and the tests fail.
Ideas on how to fix?
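One lead, judging from the traceback below: the `Roles` column on the `Users` table uses the generic `JSON` type, and the tests run against SQLite, whose type compiler here has no `visit_JSON`. A sketch of two possible fixes — the model below is reconstructed from the traceback, not copied from the repo:
```python
from sqlalchemy import Column, Integer, Text
from sqlalchemy.types import JSON
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "Users"
    id = Column(Integer, primary_key=True)
    # Option 1: upgrade SQLAlchemy to >= 1.3, which can render JSON on SQLite.
    # Option 2: keep JSON on PostgreSQL but store plain TEXT when the SQLite
    # test database is in use:
    Roles = Column(JSON().with_variant(Text(), "sqlite"))
```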
```
/home/sm/PycharmProjects/REST-API/venv/bin/python /home/sm/PycharmProjects/REST-API/manage.py test
test_non_registered_user_login (test_auth.TestAuthBlueprint)
Test for login of non-registered user ... 2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine SELECT CAST('test plain returns' AS VARCHAR(60)) AS anon_1
2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine SELECT CAST('test unicode returns' AS VARCHAR(60)) AS anon_1
2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,551 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,552 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,553 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,553 INFO sqlalchemy.engine.base.Engine ()
ERROR
test_registered_user_login (test_auth.TestAuthBlueprint)
Test for login of registered-user login ... 2019-02-10 09:07:51,556 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,556 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,557 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,559 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,559 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,559 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,559 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,560 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,561 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,562 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,563 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,563 INFO sqlalchemy.engine.base.Engine ()
ERROR
test_registered_with_already_registered_user (test_auth.TestAuthBlueprint)
Test registration with already registered email ... ERROR
test_registration (test_auth.TestAuthBlueprint)
Test for user registration ... ERROR
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine ()
test_valid_blacklisted_token_logout (test_auth.TestAuthBlueprint)
Test for logout after a valid token gets blacklisted ... 2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,564 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,565 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,565 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,565 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,565 INFO sqlalchemy.engine.base.Engine ()
ERROR
2019-02-10 09:07:51,566 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("blacklist_tokens")
2019-02-10 09:07:51,566 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,566 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("api_users")
2019-02-10 09:07:51,566 INFO sqlalchemy.engine.base.Engine ()
test_valid_logout (test_auth.TestAuthBlueprint)
Test for logout before token expires ... 2019-02-10 09:07:51,566 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_channels")
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("discord_messages")
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users_Profile")
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine ()
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine PRAGMA table_info("Users")
2019-02-10 09:07:51,567 INFO sqlalchemy.engine.base.Engine ()
ERROR
test_app_is_development (test_config.TestDevelopmentConfig) ... ok
test_app_is_production (test_config.TestProductionConfig) ... ok
test_app_is_testing (test_config.TestTestingConfig) ... ok
test_decode_auth_token (test_user_medol.TestUserModel) ... ERROR
test_encode_auth_token (test_user_medol.TestUserModel) ... ERROR
======================================================================
ERROR: test_non_registered_user_login (test_auth.TestAuthBlueprint)
Test for login of non-registered user
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 86, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_JSON'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261350438> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/app/test/base.py", line 14, in setUp
db.create_all()
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 963, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 955, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4200, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2011, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1599, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 779, in visit_metadata
_is_metadata_operation=True,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 824, in visit_table
include_foreign_key_constraints,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 980, in execute
return meth(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1035, in _execute_ddl
else None,
File "<string>", line 1, in <lambda>
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 448, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 29, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 310, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2802, in visit_create_table
% (table.description, column.name, ce.args[0])
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 276, in reraise
raise value.with_traceback(tb)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.CompileError: (in table 'Users', column 'Roles'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261350438> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
======================================================================
ERROR: test_registered_user_login (test_auth.TestAuthBlueprint)
Test for login of registered-user login
----------------------------------------------------------------------
(traceback identical to test_non_registered_user_login above, ending in the same CompileError on table 'Users', column 'Roles')
======================================================================
ERROR: test_registered_with_already_registered_user (test_auth.TestAuthBlueprint)
Test registration with already registered email
----------------------------------------------------------------------
(traceback identical to test_non_registered_user_login above, ending in the same CompileError on table 'Users', column 'Roles')
======================================================================
ERROR: test_registration (test_auth.TestAuthBlueprint)
Test for user registration
----------------------------------------------------------------------
(traceback identical to test_non_registered_user_login above, ending in the same CompileError on table 'Users', column 'Roles')
======================================================================
ERROR: test_valid_blacklisted_token_logout (test_auth.TestAuthBlueprint)
Test for logout after a valid token gets blacklisted
----------------------------------------------------------------------
(traceback identical to test_non_registered_user_login above, ending in the same CompileError on table 'Users', column 'Roles')
======================================================================
ERROR: test_valid_logout (test_auth.TestAuthBlueprint)
Test for logout before token expires
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 86, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_JSON'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261350438> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/app/test/base.py", line 14, in setUp
db.create_all()
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 963, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 955, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4200, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2011, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1599, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 779, in visit_metadata
_is_metadata_operation=True,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 824, in visit_table
include_foreign_key_constraints,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 980, in execute
return meth(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1035, in _execute_ddl
else None,
File "<string>", line 1, in <lambda>
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 448, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 29, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 310, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2802, in visit_create_table
% (table.description, column.name, ce.args[0])
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 276, in reraise
raise value.with_traceback(tb)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.CompileError: (in table 'Users', column 'Roles'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261350438> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
======================================================================
ERROR: test_decode_auth_token (test_user_medol.TestUserModel)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 86, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_JSON'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261340a90> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/app/test/base.py", line 14, in setUp
db.create_all()
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 963, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 955, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4200, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2011, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1599, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 779, in visit_metadata
_is_metadata_operation=True,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 824, in visit_table
include_foreign_key_constraints,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 980, in execute
return meth(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1035, in _execute_ddl
else None,
File "<string>", line 1, in <lambda>
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 448, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 29, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 310, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2802, in visit_create_table
% (table.description, column.name, ce.args[0])
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 276, in reraise
raise value.with_traceback(tb)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.CompileError: (in table 'Users', column 'Roles'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261340a90> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
======================================================================
ERROR: test_encode_auth_token (test_user_medol.TestUserModel)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 86, in _compiler_dispatch
meth = getter(visitor)
AttributeError: 'SQLiteTypeCompiler' object has no attribute 'visit_JSON'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.UnsupportedCompilationError: Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261340a90> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/sm/PycharmProjects/REST-API/app/test/base.py", line 14, in setUp
db.create_all()
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 963, in create_all
self._execute_for_all_tables(app, bind, 'create_all')
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/flask_sqlalchemy/__init__.py", line 955, in _execute_for_all_tables
op(bind=self.get_engine(app, bind), **extra)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/schema.py", line 4200, in create_all
ddl.SchemaGenerator, self, checkfirst=checkfirst, tables=tables
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 2011, in _run_visitor
conn._run_visitor(visitorcallable, element, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1599, in _run_visitor
visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 779, in visit_metadata
_is_metadata_operation=True,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 130, in traverse_single
return meth(obj, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 824, in visit_table
include_foreign_key_constraints,
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 980, in execute
return meth(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 72, in _execute_on_connection
return connection._execute_ddl(self, multiparams, params)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/engine/base.py", line 1035, in _execute_ddl
else None,
File "<string>", line 1, in <lambda>
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/elements.py", line 448, in compile
return self._compiler(dialect, bind=bind, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/ddl.py", line 29, in _compiler
return dialect.ddl_compiler(dialect, self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 310, in __init__
self.string = self.process(self.statement, **compile_kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2802, in visit_create_table
% (table.description, column.name, ce.args[0])
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 296, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/util/compat.py", line 276, in reraise
raise value.with_traceback(tb)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2790, in visit_create_table
create_column, first_pk=column.primary_key and not first_pk
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 341, in process
return obj._compiler_dispatch(self, **kwargs)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 90, in _compiler_dispatch
return meth(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 2822, in visit_create_column
text = self.get_column_specification(column, first_pk=first_pk)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/dialects/sqlite/base.py", line 906, in get_column_specification
column.type, type_expression=column
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/compiler.py", line 391, in process
return type_._compiler_dispatch(self, **kw)
File "/home/sm/PycharmProjects/REST-API/venv/lib/python3.7/site-packages/sqlalchemy/sql/visitors.py", line 88, in _compiler_dispatch
raise exc.UnsupportedCompilationError(visitor, cls)
sqlalchemy.exc.CompileError: (in table 'Users', column 'Roles'): Compiler <sqlalchemy.dialects.sqlite.base.SQLiteTypeCompiler object at 0x7f6261340a90> can't render element of type <class 'sqlalchemy.sql.sqltypes.JSON'>
----------------------------------------------------------------------
Ran 11 tests in 0.029s
FAILED (errors=8)
Process finished with exit code 1
``` | open | 2019-02-10T07:10:00Z | 2019-02-10T10:56:34Z | https://github.com/cosmic-byte/flask-restplus-boilerplate/issues/6 | [] | samip5 | 1 |
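The traceback shows that this SQLAlchemy release's SQLite dialect has no `visit_JSON`, so `db.create_all()` cannot compile the `JSON` column on the `Users` table. Upgrading SQLAlchemy (SQLite gained native JSON support around the 1.3 series) is the direct fix; failing that, a minimal sketch of a text-backed stand-in type — the class name and column usage are illustrative, not from the issue:

```python
import json
from sqlalchemy.types import TypeDecorator, Text

class JSONEncodedText(TypeDecorator):
    """Text-backed stand-in for sqlalchemy.JSON on dialects without
    native JSON support; values are (de)serialized with json."""
    impl = Text

    def process_bind_param(self, value, dialect):
        return None if value is None else json.dumps(value)

    def process_result_value(self, value, dialect):
        return None if value is None else json.loads(value)

# Illustrative usage in the model (column name taken from the traceback):
# Roles = db.Column(JSONEncodedText, nullable=True)
```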
tfranzel/drf-spectacular | rest-api | 921 | Cannot use `list[Serializer]` to model root JSON element in `extend_schema` | I would like to decorate an `@extend_schema` this way:
```py
@extend_schema(responses=list[MySerializer])
```
To publish that my endpoint returns a `list` at its root, and within that, each element is modeled by `MySerializer`.
But this raises an error
```
Warning [my_route]: could not resolve "list[MySerializer]"
```
I can't seem to model this with a normal `Serializer`, as mentioned in this discussion:
- https://github.com/encode/django-rest-framework/discussions/8851
Is there any other way to properly model the root element of the JSON response as a list when it doesn't have an attribute name? | closed | 2023-01-19T20:44:06Z | 2023-01-20T01:18:08Z | https://github.com/tfranzel/drf-spectacular/issues/921 | [] | johnthagen | 8 |
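A common way to express a list-at-root response in drf-spectacular is to pass a serializer *instance* with `many=True` instead of `list[...]`; a minimal sketch (the serializer body is a placeholder):

```python
from drf_spectacular.utils import extend_schema
from rest_framework import serializers
from rest_framework.views import APIView

class MySerializer(serializers.Serializer):  # placeholder fields
    name = serializers.CharField()

class MyView(APIView):
    # many=True documents the response root as an array of MySerializer items.
    @extend_schema(responses=MySerializer(many=True))
    def get(self, request):
        ...
```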
ymcui/Chinese-LLaMA-Alpaca | nlp | 477 | Running the gradio demo, the chat content contains formatting tags | *Please describe the problem you encountered as specifically as possible, **including the run command where necessary**. This will help us locate the problem faster.*
When running the gradio demo, the chat content contains formatting tags, as follows:
\<p\>hello\<\/p\>
\<p\>Hello! How can I assist you today?\<\/p\>
Every displayed question and answer is wrapped in \<p\> and \<\/p\> tags. How can I remove them?
### Screenshots or logs

### Required checklist (for the first three items, keep only the one you are asking about)
- [x] **Base model**: Alpaca-Plus
- [x] **Operating system**: Linux
- [x] **Issue category**: Display issue / Other
| closed | 2023-05-31T08:52:31Z | 2023-06-17T22:02:10Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/477 | [
"stale"
] | Peter-L-FANG | 4 |
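The `<p>…</p>` wrappers typically come from the demo converting each message to HTML (e.g., via a markdown renderer) before handing it to the Gradio chatbot; a minimal illustrative fix — not taken from the repo — is to strip the tags before display:

```python
import re

def strip_paragraph_tags(text: str) -> str:
    """Remove wrapping <p>/</p> tags from a chat message (illustrative)."""
    return re.sub(r"</?p>", "", text)

print(strip_paragraph_tags("<p>Hello! How can I assist you today?</p>"))
# -> Hello! How can I assist you today?
```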
flasgger/flasgger | api | 548 | Flasgger showing blank page | The page is blank

The code is as follows.
from flasgger import Swagger
swagger_config = {
"headers": [
],
"specs": [
{
"endpoint": 'apispec_1',
"route": '/apispec_1.json',
"rule_filter": lambda rule: True, # all in
"model_filter": lambda tag: True, # all in
}
],
"static_url_path": "/flasgger_static",
# "static_folder": "static", # must be set by user
"swagger_ui": True,
"specs_route": "/apidocs/"
}
swagger = Swagger(app,config=swagger_config,template=template1)
| open | 2022-09-27T08:58:47Z | 2023-08-07T19:35:20Z | https://github.com/flasgger/flasgger/issues/548 | [] | Anand240396 | 6 |
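For comparison, a minimal known-good flasgger setup (independent of this report) that serves the UI at the default `/apidocs/`; if this renders while the custom config above stays blank, the `template1`/config combination is the likely culprit:

```python
from flask import Flask
from flasgger import Swagger

app = Flask(__name__)
swagger = Swagger(app)  # defaults: spec at /apispec_1.json, UI at /apidocs/

@app.route("/hello")
def hello():
    """A sample endpoint.
    ---
    responses:
      200:
        description: a greeting
    """
    return "hello"

if __name__ == "__main__":
    app.run(debug=True)
```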
microsoft/nni | data-science | 5,613 | assert len(graph.nodes) == len(graph_check.nodes) | **Describe the bug**:
I compressed my model with L1NormPruner and then tried to speed it up, but an error occurred. How can I solve this problem?
This is error:

I checked it, but I don't know how to solve this problem:

This is the code for the pruning part of my project:
```python
device = torch.device("cpu")
inputs = torch.randn((1, 3, 768, 768))
model_path = 'weights/yolov3_cqxdq_total_300_300.pt'
pruner_model_path = 'weights/yolov3_cqxdq_pruner_weights.pth'
config_list = [{'sparsity': 0.6, 'op_types': ['Conv2d']}]
model = attempt_load(model_path, map_location=device)  # load FP32 model

from nni.compression.pytorch.pruning import L1NormPruner
pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
for name, mask in masks.items():
    print(name, ' sparsity : ', '{:.2}'.format(mask['weight'].sum() / mask['weight'].numel()))
pruner._unwrap_model()

from nni.compression.pytorch.speedup.v2 import ModelSpeedup
m_speedup = ModelSpeedup(model, inputs, masks, device, batch_size=2)
m_speedup.speedup_model()
```
This is the structure and forward of my model:
[https://github.com/ultralytics/yolov3](https://github.com/ultralytics/yolov3)
Because of the newer PyTorch version, I made the modifications described here: _originally posted by @EdwardAndersonMcDermott in https://github.com/ultralytics/yolov5/issues/6948#issuecomment-1075528897_

And I deleted the control-flow:

**Environment**:
NNI version: v3.0rc1
Training service (local|remote|pai|aml|etc): local
Python version: 3.8.5
PyTorch version: 1.11.0
Cpu or cuda version: cpu
| closed | 2023-06-19T01:35:59Z | 2023-07-05T01:47:23Z | https://github.com/microsoft/nni/issues/5613 | [] | HuYue233 | 3 |
mwaskom/seaborn | data-science | 3,371 | feature request: plotly backend | great package but no interactivity. Please allow plotly backend like in pandas.plot | closed | 2023-05-23T18:27:35Z | 2023-05-23T22:30:22Z | https://github.com/mwaskom/seaborn/issues/3371 | [] | chanansh | 0 |
strawberry-graphql/strawberry | graphql | 3512 | Allow creating interfaces without fields. | Allow interfaces without fields.
## Feature Request Type
- [ ] Core functionality
- [x] Alteration (enhancement/optimization) of existing feature(s)
- [x] New behavior
## Description
I want to create an interface with a "get" (or similar) method that fetches a SQLAlchemy object and casts it to a strawberry object.
```python
from typing import Any, ClassVar, Self

import strawberry
from sqlalchemy.ext.asyncio import AsyncSession

# Base and ModelDoesNotExistError come from the application's own code.

@strawberry.interface
class TypeBase:
    model: ClassVar[Base]

    @classmethod
    async def get(cls, session: AsyncSession, id_: int) -> Self | None:
        try:
            obj: Base = await cls.model.get(
                session,
                id_,
            )
        except ModelDoesNotExistError:
            return None
        values: dict[str, Any]
        # Some magic with values
        return cls(**values)
```
But in this case I get `❌ Type TypeBase must define one or more fields.`, which doesn't make sense IMO, since interfaces don't usually require fields.
| closed | 2024-05-24T12:33:23Z | 2025-03-20T15:56:44Z | https://github.com/strawberry-graphql/strawberry/issues/3512 | [] | koldakov | 2 |
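The check mirrors the GraphQL spec, which requires an interface type to declare at least one field. If the shared `get` helper is Python-side only and nothing needs to appear in the schema, one workaround sketch is a plain mixin instead of `@strawberry.interface` (types and names are illustrative):

```python
from typing import ClassVar, Self

import strawberry
from sqlalchemy.ext.asyncio import AsyncSession

class GetMixin:  # plain Python base class, invisible to the GraphQL schema
    model: ClassVar["Base"]  # "Base" is the app's own declarative base

    @classmethod
    async def get(cls, session: AsyncSession, id_: int) -> Self | None:
        ...  # same lookup logic as in the snippet above

@strawberry.type
class User(GetMixin):
    id: int
    name: str
```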
kymatio/kymatio | numpy | 639 | Documentation for 3D scattering output | I am hoping to use the 3D scattering function, but I am somewhat confused by its output.
I saw the documentation (https://www.kymat.io/userguide.html#id14) for output size, but when I ran the HarmonicScattering3D function with J=7, L=4, P=1 (I assume this is the length of the integral_powers argument?) on a (2, 128, 128, 128) batch, the output had shape (2, 36, 5, 3), which I believe is not consistent with what is listed in the documentation.
Could you clarify what the dimensions of the output of the 3D scattering function correspond to (or point me to documentation that specifies this)? Also, what is C meant to represent in that section of the documentation?
System info:
Darwin-19.6.0-x86_64-i386-64bit
Python 3.7.7 (default, Mar 26 2020, 10:32:53)
[Clang 4.0.1 (tags/RELEASE_401/final)]
NumPy 1.18.1
SciPy 1.4.1
PyTorch 1.4.0
Kymatio 0.2.0 | closed | 2021-01-02T20:39:54Z | 2022-06-20T20:18:04Z | https://github.com/kymatio/kymatio/issues/639 | [] | mburhanpurkar | 3 |
FactoryBoy/factory_boy | django | 193 | debug print (django.py, line 197) | print("Returning file with filename=%r, contents=%r" % (filename, content))
| closed | 2015-03-27T12:47:15Z | 2015-03-27T12:52:31Z | https://github.com/FactoryBoy/factory_boy/issues/193 | [] | kwist-sgr | 1 |
miguelgrinberg/Flask-Migrate | flask | 82 | Question: Keeping Empty Database in Version Control | Sorry - if this is a _silly_ question, I'm new in general to flask, web apps, and database migrations.
I've got a flask app I'm working on that uses sqlite for the 'development' database. I'm using flask-migrate for the migrations.
The application is version controlled with git - including the database. What I'd like to do is keep the database in version control, but in a 'default' state - so that when a coworker checks it out they don't get my test data ... and vice-versa.
I'm unsure how to make this work. Any suggestions?
| closed | 2015-09-13T16:29:48Z | 2015-09-14T14:33:15Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/82 | [
"question"
] | nfarrar | 3 |
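The usual pattern is to keep the binary database out of git entirely and treat the committed migration scripts as the "default state"; a sketch (the exact command varies by Flask-Migrate version, e.g. `python manage.py db upgrade` with Flask-Script):

```
# .gitignore — keep the SQLite file itself out of version control
app.db

# Each developer recreates an empty, schema-current database from the
# committed migrations directory:
#   flask db upgrade        (or: python manage.py db upgrade)
```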
falconry/falcon | api | 1548 | Resources should be able to support multiple hooks. | We've come across an issue where only a single falcon hook is applied at a time, overriding previous hooks. It'd be great if resource endpoints supported multiple falcon hooks.
We also noticed that hooks override each other silently. It'd be great if this were a noisier event (for developer convenience). | closed | 2019-05-29T22:07:42Z | 2019-11-03T21:33:03Z | https://github.com/falconry/falcon/issues/1548 | [
"bug",
"needs-information"
] | alekhrycaiko | 4 |
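For what it's worth, stacking multiple `@falcon.before` decorators on a responder is the documented way to attach several hooks in falcon; whether that covers the overriding behavior reported here depends on how the hooks were being applied. A sketch (hook bodies are placeholders, and ordering semantics should be checked against the falcon version in use):

```python
import falcon

def check_auth(req, resp, resource, params):
    ...  # placeholder

def log_request(req, resp, resource, params):
    ...  # placeholder

class ThingResource:
    @falcon.before(check_auth)   # both hooks run before on_get
    @falcon.before(log_request)
    def on_get(self, req, resp):
        resp.media = {"ok": True}
```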
ARM-DOE/pyart | data-visualization | 1,268 | AttributeError: <''> object has no attribute '_autoscaleXon' | I have received this error with multiple objects while trying to plot reflectivity from an S3 bucket object. I think it originates from the radar variable or from plot_ppi_map itself.
1) `AttributeError: 'GeoAxes' object has no attribute '_autoscaleXon'`
2) `AttributeError: 'GeoAxesSubplot' object has no attribute '_autoscaleXon'`
I tried to make sure it's not on my end, but I'm not 100% sure.
Here's one of the complete errors for reference:
```
AttributeError Traceback (most recent call last)
<ipython-input-38-9e299f8058e6> in <module>
20 display = pyart.graph.RadarMapDisplay(my_radar)
21 fig = plt.figure(figsize = [10,8])
---> 22 display.plot_ppi_map('reflectivity', sweep = 0, resolution = 'c',
23 vmin = -8, vmax = 64, mask_outside = False,
24 cmap = pyart.graph.cm.NWSRef,
C:\Users\Nitin_Kumar\anaconda3\lib\site-packages\pyart\graph\radarmapdisplay.py in plot_ppi_map(self, field, sweep, mask_tuple, vmin, vmax, cmap, norm, mask_outside, title, title_flag, colorbar_flag, colorbar_label, ax, fig, lat_lines, lon_lines, projection, min_lon, max_lon, min_lat, max_lat, width, height, lon_0, lat_0, resolution, shapefile, shapefile_kwargs, edges, gatefilter, filter_transitions, embellish, raster, ticks, ticklabs, alpha, edgecolors, **kwargs)
298 if norm is not None: # if norm is set do not override with vmin/vmax
299 vmin = vmax = None
--> 300 pm = ax.pcolormesh(x * 1000., y * 1000., data, alpha=alpha,
301 vmin=vmin, vmax=vmax, cmap=cmap,
302 edgecolors=edgecolors, norm=norm,
C:\Users\Nitin_Kumar\anaconda3\lib\site-packages\cartopy\mpl\geoaxes.py in wrapper(self, *args, **kwargs)
308
309 kwargs['transform'] = transform
--> 310 return func(self, *args, **kwargs)
311 return wrapper
312
C:\Users\Nitin_Kumar\anaconda3\lib\site-packages\cartopy\mpl\geoaxes.py in pcolormesh(self, *args, **kwargs)
1559
1560 """
-> 1561 result = self._pcolormesh_patched(*args, **kwargs)
1562 self.autoscale_view()
1563 return result
C:\Users\Nitin_Kumar\anaconda3\lib\site-packages\cartopy\mpl\geoaxes.py in _pcolormesh_patched(self, *args, **kwargs)
1657 self.update_datalim(corners)
1658 self.add_collection(collection)
-> 1659 self.autoscale_view()
1660
1661 ########################
C:\Users\Nitin_Kumar\anaconda3\lib\site-packages\cartopy\mpl\geoaxes.py in autoscale_view(self, tight, scalex, scaley)
855 scalex=scalex, scaley=scaley)
856 # Limit the resulting bounds to valid area.
--> 857 if scalex and self._autoscaleXon:
858 bounds = self.get_xbound()
859 self.set_xbound(max(bounds[0], self.projection.x_limits[0]),
AttributeError: 'GeoAxesSubplot' object has no attribute '_autoscaleXon'
```
| closed | 2022-09-22T14:25:11Z | 2022-09-22T14:50:55Z | https://github.com/ARM-DOE/pyart/issues/1268 | [] | nkay28 | 2 |
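This traceback matches a known incompatibility: matplotlib 3.6 removed the private `Axes._autoscaleXon` attribute that older cartopy releases still reference from `autoscale_view`. A quick check sketch — the version cutoffs are from memory and worth verifying:

```python
import cartopy
import matplotlib

print("matplotlib", matplotlib.__version__, "| cartopy", cartopy.__version__)
# If matplotlib >= 3.6 is paired with cartopy < 0.21, the
# '_autoscaleXon' AttributeError above is expected; upgrading cartopy
# (or pinning matplotlib below 3.6) resolves it.
```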
jina-ai/serve | deep-learning | 6,103 | TransformerTorchEncoder Install Cannot Find torch Version | **Describe the bug**
I am attempting to leverage a TransformerTorchEncoder in my flow to index documents. When I run flow.index(docs, show_progress=True) with this encoder, it fails when attempting to install with the message:
CalledProcessError: Command '['/home/ec2-user/anaconda3/envs/python3/bin/python', '-m', 'pip', 'install', '--compile', '--default-timeout=1000', 'torch==1.9.0+cpu', 'transformers>=4.12.0', '-f', 'https://download.pytorch.org/whl/torch_stable.html']' returned non-zero exit status 1.
This is the detailed info I can find in the error:
Looking in links: https://download.pytorch.org/whl/torch_stable.html
ERROR: Could not find a version that satisfies the requirement torch==1.9.0+cpu (from versions: 1.11.0, 1.11.0+cpu, 1.11.0+cu102, 1.11.0+cu113, 1.11.0+cu115, 1.11.0+rocm4.3.1, 1.11.0+rocm4.5.2, 1.12.0, 1.12.0+cpu, 1.12.0+cu102, 1.12.0+cu113, 1.12.0+cu116, 1.12.0+rocm5.0, 1.12.0+rocm5.1.1, 1.12.1, 1.12.1+cpu, 1.12.1+cu102, 1.12.1+cu113, 1.12.1+cu116, 1.12.1+rocm5.0, 1.12.1+rocm5.1.1, 1.13.0, 1.13.0+cpu, 1.13.0+cu116, 1.13.0+cu117, 1.13.0+cu117.with.pypi.cudnn, 1.13.0+rocm5.1.1, 1.13.0+rocm5.2, 1.13.1, 1.13.1+cpu, 1.13.1+cu116, 1.13.1+cu117, 1.13.1+cu117.with.pypi.cudnn, 1.13.1+rocm5.1.1, 1.13.1+rocm5.2, 2.0.0, 2.0.0+cpu, 2.0.0+cpu.cxx11.abi, 2.0.0+cu117, 2.0.0+cu117.with.pypi.cudnn, 2.0.0+cu118, 2.0.0+rocm5.3, 2.0.0+rocm5.4.2, 2.0.1, 2.0.1+cpu, 2.0.1+cpu.cxx11.abi, 2.0.1+cu117, 2.0.1+cu117.with.pypi.cudnn, 2.0.1+cu118, 2.0.1+rocm5.3, 2.0.1+rocm5.4.2, 2.1.0, 2.1.0+cpu, 2.1.0+cpu.cxx11.abi, 2.1.0+cu118, 2.1.0+cu121, 2.1.0+cu121.with.pypi.cudnn, 2.1.0+rocm5.5, 2.1.0+rocm5.6)
ERROR: No matching distribution found for torch==1.9.0+cpu
Is there any way to update this version or ignore the version so that this can install correctly?
---
**Environment**
- jina 3.22.4
- docarray 0.21.0
- jcloud 0.3
- jina-hubble-sdk 0.39.0
- jina-proto 0.1.27
- protobuf 4.24.3
- proto-backend upb
- grpcio 1.47.5
- pyyaml 6.0
- python 3.10.12
- platform Linux
- platform-release 5.10.192-183.736.amzn2.x86_64
- platform-version #1 SMP Wed Sep 6 21:15:41 UTC 2023
- architecture x86_64
- processor x86_64
- uid 3083049066853
- session-id 20a184b4-7982-11ee-9c84-02cdd40b6165
- uptime 2023-11-02T13:17:09.019222
- ci-vendor (unset)
- internal False
* JINA_DEFAULT_HOST (unset)
* JINA_DEFAULT_TIMEOUT_CTRL (unset)
* JINA_DEPLOYMENT_NAME (unset)
* JINA_DISABLE_UVLOOP (unset)
* JINA_EARLY_STOP (unset)
* JINA_FULL_CLI (unset)
* JINA_GATEWAY_IMAGE (unset)
* JINA_GRPC_RECV_BYTES 4120095
* JINA_GRPC_SEND_BYTES 3172175
* JINA_HUB_NO_IMAGE_REBUILD (unset)
* JINA_LOG_CONFIG (unset)
* JINA_LOG_LEVEL (unset)
* JINA_LOG_NO_COLOR (unset)
* JINA_MP_START_METHOD (unset)
* JINA_OPTOUT_TELEMETRY (unset)
* JINA_RANDOM_PORT_MAX (unset)
* JINA_RANDOM_PORT_MIN (unset)
* JINA_LOCKS_ROOT (unset)
* JINA_K8S_ACCESS_MODES (unset)
* JINA_K8S_STORAGE_CLASS_NAME (unset)
* JINA_K8S_STORAGE_CAPACITY (unset)
* JINA_STREAMER_ARGS (unset)
| closed | 2023-11-02T13:17:48Z | 2024-02-16T00:17:18Z | https://github.com/jina-ai/serve/issues/6103 | [
"Stale"
] | dkintgen | 6 |
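One workaround sketch, assuming the executor comes from Jina Hub: pre-install a current torch/transformers pair yourself and skip the executor's stale pin list when adding it (the `install_requirements` flag exists in recent jina versions — treat its availability as an assumption to verify):

```python
from jina import Flow

# Assumes: `pip install torch transformers` was run beforehand with
# versions that actually exist for this Python/platform.
f = Flow().add(
    uses='jinahub://TransformerTorchEncoder',
    install_requirements=False,  # don't re-run the pinned torch==1.9.0+cpu install
)
```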
deepspeedai/DeepSpeed | deep-learning | 6,685 | [REQUEST] The all-reduce overlap in ZeRO 1 and ZeRO 2 | Hello, when I use DeepSpeed to train a GPT-based model with ZeRO-1 or ZeRO-2, I find that there is no overlap at all between all-reduce and backward computation. I'm not sure whether I've set the config correctly; my config is below:
```json
{
    "train_batch_size": 8,
    "steps_per_print": 1,
    "gradient_accumulation_steps": 1,
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 0.00015
        }
    },
    "tensorboard": {
        "enabled": true,
        "output_path": "output/ds_logs/",
        "job_name": "train_internlm"
    },
    "bf16": {
        "enabled": true
    },
    "zero_optimization": {
        "stage": 1,
        "overlap_comm": true,
        "reduce_bucket_size": 536870912,
        "contiguous_gradients": false
    }
}
```
Then I profiled the kernels; the profiling results are below:

As the figure above shows, the all-reduce (in purple) is not overlapped with the backward pass. I found that the all-reduce happens in the function [allreduce_and_copy](https://github.com/microsoft/DeepSpeed/blob/master/deepspeed/runtime/zero/stage_1_and_2.py#L1520), which performs a copy operation at the end of the function [here](https://github.com/microsoft/DeepSpeed/blob/9b7fc5452471392b0f58844219fcfdd14a9cdc77/deepspeed/runtime/zero/stage_1_and_2.py#L1545).

As we can see, it seems these copy operations block the subsequent backward computation. Can anybody help me with this?
Also, when I set contiguous_gradients to true, the all-reduce does overlap with the backward pass:

And there is no copy operation. So I have some questions:
1. Can the copy operation in allreduce_and_copy affect the subsequent backward computations, even though they are on different streams?
2. It seems that contiguous gradients avoid the copy, but the avg samples/sec doesn't increase.
3. If I want the most efficient training performance with ZeRO-1, how should I set the zero_optimization config?
Looking forward to any replies.
| closed | 2024-10-29T08:07:46Z | 2024-11-01T20:54:27Z | https://github.com/deepspeedai/DeepSpeed/issues/6685 | [
"enhancement"
] | yingtongxiong | 3 |
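Based on the report's own observations (not an official recommendation), the configuration under which the all-reduce did overlap with backward was contiguous gradients enabled, so a config sketch to try would be:

```json
"zero_optimization": {
    "stage": 1,
    "overlap_comm": true,
    "contiguous_gradients": true,
    "reduce_bucket_size": 536870912
}
```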
pytorch/pytorch | numpy | 149,691 | LazyLinear broken by new init logic | ### 🐛 Describe the bug
The [following PR](https://github.com/pytorch/pytorch/pull/147599) broke the init of lazy linear:
```
python -c """
from torch import nn
import torch
l = nn.LazyLinear(4)
print(l(torch.randn(3)))
print(l(torch.randn(3)))
"""
```
prints
```
tensor([0., 0., 0., 0.], grad_fn=<ViewBackward0>)
```
This is because `reset_parameters` is now effectively a no-op: the call to `reset_parameters()` here:
https://github.com/pytorch/pytorch/blob/362b40939dd6faeebf0569beac563afa51e81dcd/torch/nn/modules/linear.py#L291
triggers this block
https://github.com/pytorch/pytorch/blob/362b40939dd6faeebf0569beac563afa51e81dcd/torch/nn/modules/linear.py#L281-L283
but `in_features` is now always 0 at that point (previously, the value was set to a non-zero value before that call).
## Solution
I will submit a PR soon
### Versions
nightly
cc @ezyang @gchanan @zou3519 @kadeng @msaroufim @albanD @mruberry @jbschlosser @walterddr @mikaylagawarecki | open | 2025-03-21T02:16:42Z | 2025-03-21T15:42:09Z | https://github.com/pytorch/pytorch/issues/149691 | [
"high priority",
"module: nn",
"triaged",
"module: regression",
"module: lazy"
] | vmoens | 1 |
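Until the fix lands, one possible user-side workaround sketch: once the first forward pass has materialized the parameters, `in_features` is non-zero, so a manual `reset_parameters()` call performs the usual initialization:

```python
import torch
from torch import nn

l = nn.LazyLinear(4)
l(torch.randn(3))         # materializes weight/bias (zeros on affected builds)
l.reset_parameters()      # in_features > 0 now, so init actually runs
print(l(torch.randn(3)))  # non-zero output
```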
kennethreitz/responder | flask | 547 | Feature: Bring back CLI interface | ## About
Apparently, the CLI interface got removed with 7c19eca78a10744b. It might want to be restored from those PRs ff.:
- GH-72
- GH-104
## References
- GH-551
_Originally posted by @amotl in https://github.com/kennethreitz/responder/pull/546#discussion_r1817405607_
| closed | 2024-10-25T21:56:43Z | 2025-01-20T23:36:50Z | https://github.com/kennethreitz/responder/issues/547 | [
"feature",
"help wanted"
] | amotl | 3 |
onnx/onnx | deep-learning | 6,220 | onnx.reference: Cast to float8e4m3fnuz treats +/-inf wrong | # Bug Report
### Describe the bug
Casting +/-inf from float to float8e4m3fnuz / float8e5m2fnuz with `onnx.reference`
yields +/- FLT_MAX where it should be NaN according to the table in https://onnx.ai/onnx/operators/onnx__Cast.html
### System information
- OS Platform and Distribution (*e.g. Linux Ubuntu 20.04*): Ubuntu 22.04
- ONNX version (*e.g. 1.13*): 1.16.1
- Python version: 3.10
### Reproduction instructions
Running
```
import onnx
import onnx.reference
import numpy as np
model = """\
<
ir_version: 10,
opset_import: ["" : 21]
>
test_cast_FLOAT_to_FLOAT8E4M3FNUZ (float[3,5] input) => (float8e4m3fnuz[3,5] output) {
output = Cast <to: int = 18> (input)
}
"""
m = onnx.parser.parse_model(model)
inp = np.array([-np.inf, np.inf], dtype=np.float32)
out = onnx.reference.ReferenceEvaluator(m).run(None, {"input": inp})[0]
print("inputs", inp)
print("as uint8", out)
print("as float", onnx.numpy_helper.float8e4m3_to_float32(out, fn=True, uz=True))
```
prints
```
inputs [-inf inf]
as uint8 [255 127]
as float [-240. 240.]
```
### Expected behavior
I expect it to print
```
as float [nan nan]
```
### Notes
This also affects the reference outputs of the backend tests, e.g.
`test_cast_FLOAT_to_FLOAT8E4M3FNU/test_data_set_0/output_0.pb` | open | 2024-07-10T07:02:36Z | 2024-07-11T10:07:04Z | https://github.com/onnx/onnx/issues/6220 | [
"question"
] | mgehre-amd | 2 |
CorentinJ/Real-Time-Voice-Cloning | deep-learning | 841 | Cannot solve for 437 | Hello,
I am still having an issue following #437. I keep getting an assertion error for the file path where the utterances are kept (code/error below).
I created 4 wav files and 4 text files. (see screenshot below)

I'm not sure what else I need to do to get these files for the model to train on.
```
(deepvoice) C:\Users\Family\Desktop\Real-Time-Voice-Cloning-master>python synthesizer_preprocess_audio.py synthesizer/saved_models/logs-singlespeaker/datasets_root --datasets_name LibriTTS --no_alignments
Arguments:
datasets_root: synthesizer\saved_models\logs-singlespeaker\datasets_root
out_dir: synthesizer\saved_models\logs-singlespeaker\datasets_root\SV2TTS\synthesizer
n_processes: None
skip_existing: False
hparams:
no_alignments: True
datasets_name: LibriTTS
subfolders: train-clean-100, train-clean-360
Using data from:
synthesizer\saved_models\logs-singlespeaker\datasets_root\LibriTTS\train-clean-100
synthesizer\saved_models\logs-singlespeaker\datasets_root\LibriTTS\train-clean-360
LibriTTS: 0%| | 0/1 [00:07<?, ?speakers/s]
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "C:\Users\Family\Anaconda3\envs\deepvoice\lib\multiprocessing\pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "C:\Users\Family\Desktop\Real-Time-Voice-Cloning-master\synthesizer\preprocess.py", line 76, in preprocess_speaker
assert text_fpath.exists()
AssertionError
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "synthesizer_preprocess_audio.py", line 59, in <module>
preprocess_dataset(**vars(args))
File "C:\Users\Family\Desktop\Real-Time-Voice-Cloning-master\synthesizer\preprocess.py", line 35, in preprocess_dataset
for speaker_metadata in tqdm(job, datasets_name, len(speaker_dirs), unit="speakers"):
File "C:\Users\Family\Anaconda3\envs\deepvoice\lib\site-packages\tqdm\std.py", line 1185, in __iter__
for obj in iterable:
File "C:\Users\Family\Anaconda3\envs\deepvoice\lib\multiprocessing\pool.py", line 748, in next
raise value
AssertionError
(deepvoice) C:\Users\Family\Desktop\Real-Time-Voice-Cloning-master>
```
| closed | 2021-09-07T19:43:39Z | 2021-09-14T20:48:55Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/841 | [] | schae43 | 3 |
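The failing line is `assert text_fpath.exists()`, i.e. the preprocessor expects a transcript file next to each wav. A quick debugging sketch to list which utterances lack one — the directory path and the same-stem `.txt` convention are assumptions to check against `synthesizer/preprocess.py`:

```python
from pathlib import Path

speaker_dir = Path("datasets_root/LibriTTS/train-clean-100/speaker1/book1")  # hypothetical
for wav in speaker_dir.rglob("*.wav"):
    txt = wav.with_suffix(".txt")
    if not txt.exists():
        print("missing transcript for", wav.name)
```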
SALib/SALib | numpy | 400 | Booleans in bounds parameter generate continuous values when using saltelli.sample | I am using a model with inputs of both scalars and booleans.
saltelli.sample generates continuous values for the last 4 parameters, which are booleans.
In my case, those variables cannot be scalars between 0 and 1, which is what is produced.
Maybe this is how it is intended to work but I was a bit surprised so I am posting it.
See code below.
```python
from SALib.sample import saltelli  # needed import (omitted in the original snippet)

# `columns` below is the author's list of 30 parameter names (not shown).
bounds = [[0.0, 0.048],
[0.0, 0.522],
[0.07, 2.5],
[0.5263157894736842, 60.0],
[0.5263157894736842, 60.0],
[0.5263157894736842, 50.0],
[0.5882352941176471, 50.0],
[0.5263157894736842, 52.0],
[0.6060606060606061, 60.0],
[0.5263157894736842, 50.0],
[0.5263157894736842, 66.66666666666667],
[0.5882352941176471, 50.0],
[0.5882352941176471, 52.0],
[0.5882352941176471, 50.0],
[0.5263157894736842, 66.66666666666667],
[0.5882352941176471, 50.0],
[0.5882352941176471, 52.0],
[0.5882352941176471, 50.0],
[0.0, 45.0],
[0.0, 50.0],
[0.0, 50.0],
[0.0, 50.0],
[0.0, 50.0],
[0.0, 50.0],
[0.0, 50.0],
[0.0, 50.0],
[False, True],
[False, True],
[False, True],
[False, True]]
problem = {
'num_vars': 30,
'names': columns,
'bounds': bounds
}
# Generate samples
param_values = saltelli.sample(problem, 1000)
``` | closed | 2021-02-16T09:35:26Z | 2021-02-21T13:09:07Z | https://github.com/SALib/SALib/issues/400 | [] | wilhelmssonjens | 2 |
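Saltelli sampling treats every factor as continuous over its bounds, so boolean factors come back as floats in [0, 1]; the usual workaround (illustrative post-processing, not SALib API) is to round those columns before evaluating the model:

```python
import numpy as np

bool_cols = [26, 27, 28, 29]  # indices of the four boolean factors above
param_values[:, bool_cols] = np.round(param_values[:, bool_cols])
# each boolean column is now exactly 0.0 or 1.0; cast with astype(bool) if needed
```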
ansible/ansible | python | 84,216 | Maybe a Bug in ipv6 pattern | ### Summary
ipv6 pattern defined in ansible/parsing/utils/addresses.py may have a problem:
```shell
>>> import re
>>> pattern = "^abc|xyz$" # wish to match a complete line
>>> r = re.compile(pattern)
>>> r.match("abc123") # expected to fail, but it matches a prefix
<re.Match object; span=(0, 3), match='abc'>
>>> pattern = "^(abc|xyz)$"
>>> r = re.compile(pattern)
>>> r.match("abc123")
>>>
```
look at ipv6 pattern:
```python
'ipv6': re.compile(
r'''^
(?:{0}:){{7}}{0}| # uncompressed: 1:2:3:4:5:6:7:8
(?:{0}:){{1,6}}:| # compressed variants, which are all
(?:{0}:)(?::{0}){{1,6}}| # a::b for various lengths of a,b
(?:{0}:){{2}}(?::{0}){{1,5}}|
(?:{0}:){{3}}(?::{0}){{1,4}}|
(?:{0}:){{4}}(?::{0}){{1,3}}|
(?:{0}:){{5}}(?::{0}){{1,2}}|
(?:{0}:){{6}}(?::{0})| # ...all with 2 <= a+b <= 7
:(?::{0}){{1,6}}| # ::ffff(:ffff...)
{0}?::| # ffff::, ::
# ipv4-in-ipv6 variants
(?:0:){{6}}(?:{0}\.){{3}}{0}|
::(?:ffff:)?(?:{0}\.){{3}}{0}|
(?:0:){{5}}ffff:(?:{0}\.){{3}}{0}
$
'''.format(ipv6_component), re.X | re.I
),
```
If you want to match a complete line with an alternation, the start and end anchors are not applied to each alternative, which is why the ipv6 pattern can match an invalid ipv6 address:
```shell
>>> from ansible.parsing.utils.addresses import parse_address
>>> from ansible.parsing.utils.addresses import patterns
>>> patterns["ipv6"].match("240E:0982:990A:0002::::::::::::::::::1") # not a valid ipv6 address
<re.Match object; span=(0, 21), match='240E:0982:990A:0002::'>
>>> patterns["hostname"].match("240E:0982:990A:0002::::::::::::::::::1") # hostname pattern match failed
>>> parse_address("240E:0982:990A:0002::::::::::::::::::1")
('240E:0982:990A:0002::::::::::::::::::1', None)
```
It only matches the front part, not the complete line. So what is the ipv6 pattern's intended meaning: match a complete line, or just the front part?
### Issue Type
Bug Report
### Component Name
addresses.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.17.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/python3.12/lib/python3.12/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/python3.12/bin/ansible
python version = 3.12.4 (main, Jul 17 2024, 16:22:20) [GCC 10.2.1 20200825 (Alibaba 10.2.1-3.8 2.32)] (/opt/python3.12/bin/python3.12)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
all system
### Steps to Reproduce
```shell
>>> from ansible.parsing.utils.addresses import parse_address
>>> from ansible.parsing.utils.addresses import patterns
>>> patterns["ipv6"].match("240E:0982:990A:0002::::::::::::::::::1") # not a valid ipv6 address
<re.Match object; span=(0, 21), match='240E:0982:990A:0002::'>
>>> patterns["hostname"].match("240E:0982:990A:0002::::::::::::::::::1") # hostname pattern match failed
>>> parse_address("240E:0982:990A:0002::::::::::::::::::1")
('240E:0982:990A:0002::::::::::::::::::1', None)
```
### Expected Results
```shell
>>> from ansible.parsing.utils.addresses import parse_address
>>> from ansible.parsing.utils.addresses import patterns
>>> patterns["ipv6"].match("240E:0982:990A:0002::::::::::::::::::1") # not a valid ipv6 address
>>>
```
### Actual Results
```console
see the summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-11-01T08:57:32Z | 2024-11-19T14:00:03Z | https://github.com/ansible/ansible/issues/84216 | [
"bug",
"has_pr",
"affects_2.17"
] | spyinx | 2 |
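The fix the summary itself demonstrates: wrap the whole alternation in a (preferably non-capturing) group so that `^` and `$` bind every alternative, e.g.:

```python
import re

# Non-capturing group makes ^...$ apply to every alternative.
pattern = re.compile(r"^(?:abc|xyz)$")
assert pattern.match("abc123") is None   # prefix no longer matches
assert pattern.match("abc") is not None
```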
httpie/cli | rest-api | 959 | Pretty Print HTML Output (with newlines and indentation)? | When printing `Content-type: application/json` httpie formats the json with newlines and indentation making it very easy to read.
```
❯ http --body 'https://api2.nicehash.com/main/api/v2/legacy?method=stats.global.current'
{
    "method": "stats.global.current",
    "result": {
        "error": "Method not supported"
    }
}
```
However, when returning text/html, the output is syntax highlighted but not reformatted with newlines and indentation. For short HTML pages, reformatting would make the results much more readable.
```
❯ http --body google.com
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
```
Syntax highlighting is fantastic, but the output would be much easier to read like this:
```
<HTML>
<HEAD>
<meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>301 Moved</TITLE>
</HEAD>
<BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY>
</HTML>
``` | closed | 2020-08-07T00:26:54Z | 2024-02-19T15:00:07Z | https://github.com/httpie/cli/issues/959 | [] | ryanerwin | 5 |