repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count
---|---|---|---|---|---|---|---|---|---|---|---
litestar-org/litestar | asyncio | 3,398 | Bug: Parsing tagged unions | ### Description
When using tagged unions, while the correct OpenAPI schema is generated, decoding the request fails. It also doesn't work with the MsgpackDTO. It seems that the internal litestar.typing.FieldDefinition type doesn't support tagged unions.
A question though: are tagged unions meant to be supported in litestar? I would be happy to contribute a fix/implementation if they are meant to be there.
### URL to code causing the issue
_No response_
### MCVE
```python
from typing import Annotated, Literal
from litestar import Controller, Litestar, post
from litestar.contrib.pydantic import PydanticDTO
from pydantic import BaseModel, Field
class DataA(BaseModel):
name: str
type: Annotated[Literal["A"], Field("A")]
class DataB(BaseModel):
name: str
type: Annotated[Literal["B"], Field("B")]
class DataContainer(BaseModel):
value: Annotated[DataA | DataB, Field(discriminator="type")]
class TaggedUnionExample(Controller):
dto = PydanticDTO[DataContainer]
@post("/data/")
def post_data(self, data: DataContainer) -> str:
return data.value.name
app = Litestar(route_handlers=[TaggedUnionExample])
```
### Steps to reproduce
```bash
1. `LITESTAR_APP=example:app litestar run --debug`
2. `echo '{"value": {"type": "B", "name": "test"}}' | http post http://localhost:8000/data/`
3. See error
```
### Screenshots
```bash
""
```
### Logs
```bash
Traceback (most recent call last):
File ".../.venv/lib/python3.12/site-packages/litestar/middleware/exceptions/middleware.py", line 219, in __call__
await self.app(scope, receive, send)
File ".../.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 82, in handle
response = await self._get_response_for_request(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 134, in _get_response_for_request
return await self._call_handler_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 154, in _call_handler_function
response_data, cleanup_group = await self._get_response_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/routes/http.py", line 178, in _get_response_data
data = await kwargs["data"]
^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/_kwargs/extractors.py", line 490, in dto_extractor
return data_dto(connection).decode_bytes(body)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/contrib/pydantic/pydantic_dto_factory.py", line 100, in decode_bytes
return super().decode_bytes(value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/dto/base_dto.py", line 97, in decode_bytes
return backend.populate_data_from_raw(value, self.asgi_connection)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/dto/_codegen_backend.py", line 144, in populate_data_from_raw
return self._transfer_to_model_type(self.parse_raw(raw, asgi_connection))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/dto/_backend.py", line 237, in parse_raw
result = decode_json(value=raw, target_type=self.annotation, type_decoders=type_decoders)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../.venv/lib/python3.12/site-packages/litestar/serialization/msgspec_hooks.py", line 183, in decode_json
return msgspec.json.decode(
^^^^^^^^^^^^^^^^^^^^
TypeError: If a type union contains multiple Struct types, all Struct types must be tagged (via `tag` or `tag_field` kwarg) - type `typing.Annotated[typing.Union[litestar.dto._backend.PostDataDataContainer_0DataARequestBody, litestar.dto._backend.PostDataDataContainer_1DataBRequestBody], msgspec.Meta(examples=[])]` is not supported
```
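The `TypeError` at the bottom of the traceback comes from msgspec: the union of DTO-generated structs is untagged, which suggests the regenerated structs lose the discriminator tag, so msgspec cannot build its tag-to-type dispatch map. A stdlib-only sketch of that kind of dispatch (an illustration only, not litestar's or msgspec's actual code):

```python
import json
from dataclasses import dataclass

@dataclass
class DataA:
    name: str

@dataclass
class DataB:
    name: str

# Tagged-union decoding needs a discriminator-field -> type map.
TAG_FIELD = "type"
TAG_TO_TYPE = {"A": DataA, "B": DataB}

def decode_value(raw: bytes):
    payload = json.loads(raw)["value"]
    cls = TAG_TO_TYPE[payload.pop(TAG_FIELD)]  # dispatch on the discriminator
    return cls(**payload)

obj = decode_value(b'{"value": {"type": "B", "name": "test"}}')
assert isinstance(obj, DataB) and obj.name == "test"
```

If the tag field is dropped before the union reaches the decoder, this lookup has nothing to dispatch on — which matches the "all Struct types must be tagged" error above.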
### Litestar Version
2.8.2
### Platform
- [X] Linux
- [ ] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | open | 2024-04-17T12:56:09Z | 2025-03-20T15:54:36Z | https://github.com/litestar-org/litestar/issues/3398 | [
"Bug :bug:",
"DTOs",
"OpenAPI"
] | ahrzb | 1 |
serengil/deepface | machine-learning | 1,271 | where is the anti_spoofing | ### Description
I saw the anti-spoofing parameters in the documentation, but there is no such code in the actual codebase.
### Additional Info
_No response_ | closed | 2024-07-05T07:06:00Z | 2024-07-05T08:28:10Z | https://github.com/serengil/deepface/issues/1271 | [
"enhancement",
"invalid"
] | Json0926 | 1 |
mwaskom/seaborn | pandas | 2,914 | displot instantly close upon display when using seaborn 0.11.2 with matplotlib 3.5.2 | I encountered a case of the figure window closing instantly when trying to display a displot.
To reproduce, try the following with seaborn version 0.11.2 and matplotlib version 3.5.2
```python
import seaborn as sns
import matplotlib.pyplot as plt
sns.displot([0,0,1,1,2,3,3,1,0])
plt.show()
```
In seaborn version 0.11.2, a figure window will flash on the screen and instantly disappear, whereas in seaborn version 0.11.1, a histogram bar plot is displayed in a new window that persists until I close it.
After some digging, I suspect that this is due to the handling of the `backend` entry of `matplotlib.rcParams`. `rcParams["backend"]` by default is initialized with a sentinel object `matplotlib.rcsetup._auto_backend_sentinel`. The first time someone tries to access `rcParams["backend"]`, matplotlib will call `matplotlib.pyplot.switch_backend` to swap in the actual backend object. The function `matplotlib.pyplot.switch_backend`, when called, will first close all currently open figures.
In seaborn version 0.11.2, `displot` will try to create a `FacetGrid` object in which to place the plot. In the constructor of `FacetGrid`, [at line 408](https://github.com/mwaskom/seaborn/blob/v0.11.2/seaborn/axisgrid.py#L408), it creates a Figure object within a context manager. As a result, the `rcParams["backend"]` gets reverted to the sentinel object upon exit from the with block. The created Figure would get automatically closed the next time the code tries to access `rcParams["backend"]`.
| closed | 2022-07-19T16:04:52Z | 2022-07-31T21:14:41Z | https://github.com/mwaskom/seaborn/issues/2914 | [
"upstream"
] | kailizcatman | 10 |
Lightning-AI/LitServe | api | 348 | Support for async requests or webhooks | <!--
⚠️ BEFORE SUBMITTING, READ:
We're excited for your request! However, here are things we are not interested in:
- Decorators.
- Doing the same thing in multiple ways.
- Adding more layers of abstraction... tree-depth should be 1 at most.
- Features that over-engineer or complicate the code internals.
- Linters, and crud that complicates projects.
-->
----
## 🚀 Feature
Is there currently a way to run inference and get the result by polling or a webhook callback?
### Motivation
If an inference takes a long time to run, say 30 minutes, you don't want the connection to drop, because you would have to rerun the inference, especially on a shaky connection.
### Pitch
A webhook callback or polling method would let us run inference as a background task keyed by a UUID (or an identifier the user specifies), so that the caller can come back later for the result. It would also help integrations, since the inference would no longer be a blocking request.
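A minimal in-memory sketch of the polling pattern described above (the names `submit`, `complete`, and `poll` are hypothetical, not LitServe's API; a real server would persist the job store and run workers):

```python
import uuid

# In-memory job store; one entry per submitted inference request.
jobs = {}

def submit(payload):
    """Register an inference job and return an id the client can poll."""
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "pending", "input": payload, "result": None}
    return job_id

def complete(job_id, result):
    """Called by the background worker when inference finishes
    (this is also where a webhook callback would be fired)."""
    jobs[job_id].update(status="done", result=result)

def poll(job_id):
    """Client-side polling endpoint."""
    return jobs[job_id]["status"], jobs[job_id]["result"]

job = submit({"prompt": "hello"})
assert poll(job) == ("pending", None)
complete(job, "inference output")
assert poll(job) == ("done", "inference output")
```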
### Alternatives
[Cog](https://cog.run/http/#webhooks) from replicate implements a webhook
### Additional context
| closed | 2024-10-29T15:55:56Z | 2024-11-11T19:50:24Z | https://github.com/Lightning-AI/LitServe/issues/348 | [
"enhancement"
] | brian316 | 3 |
allenai/allennlp | data-science | 5,307 | Open IE separates auxiliary and participle into separate tuples | <!--
Please fill this template entirely and do not erase any of it.
We reserve the right to close without a response bug reports which are incomplete.
If you have a question rather than a bug, please ask on [Stack Overflow](https://stackoverflow.com/questions/tagged/allennlp) rather than posting an issue here.
-->
## Description
<details>
<summary><b>OpenIE separates auxiliary and participle into separate tuples. This affects the online demo and my local installation of AllenNLP.</b></summary>
<p>
In the sentence "He is running to the store", my understanding is that there is only one predicate and that the output of OpenIE should be a single tuple:
[Arg0: He] [V: is running] [Arg1: to the store]
Instead, I get two tuples where the first is simply the auxiliary:
[V: is]
[Arg0: He] [V: running] [Arg1: to the store]
You can see the examples linked below from the online demo to see that it affects other sentences/auxiliaries too. I also noticed the SRL demo returns two predicates for the same sentence.
</p>
</details>
## Related issues or possible duplicates
- None
## Environment
Online demo, though I initially noticed this on my local installation of AllenNLP.
## Steps to reproduce
https://demo.allennlp.org/open-information-extraction/s/he-has-moved-out-appartment/L1E3P1E9G4
https://demo.allennlp.org/open-information-extraction/s/he-is-running-to-store/E5E4Z1H6D9
| closed | 2021-07-09T15:58:44Z | 2021-07-23T16:09:38Z | https://github.com/allenai/allennlp/issues/5307 | [
"bug",
"stale"
] | artidoro | 3 |
pydantic/logfire | fastapi | 293 | So Logfire logs are actually traces? | ### Question
As per title:
https://github.com/pydantic/logfire/blob/770780d4e0f5610bd8cbd596029d0cb1e23f25ae/logfire/_internal/main.py#L654-L670
Asking because I have been exploring OpenTelemetry recently and was disappointed to find that [the client APIs for logging in Python are still in development](https://opentelemetry.io/docs/languages/python/#status-and-releases) and seemingly unstable:

| closed | 2024-06-30T20:28:15Z | 2024-07-02T09:45:24Z | https://github.com/pydantic/logfire/issues/293 | [
"Question"
] | astrojuanlu | 4 |
RobertCraigie/prisma-client-py | asyncio | 214 | DateTime comparisons behaving unexpectedly | <!--
Thanks for helping us improve Prisma Client Python! 🙏 Please follow the sections in the template and provide as much information as possible about your problem, e.g. by enabling additional logging output.
See https://prisma-client-py.readthedocs.io/en/stable/reference/logging/ for how to enable additional logging output.
-->
## Bug description
A `find_many` query filtering on a `datetime` field returns unexpected results. `gt`/`gte`/`lt`/`lte` should be usable to specify ranges of availability (either in isolation or in combination), but they diverge from what I was expecting.
## How to reproduce
1. Clone https://github.com/iiian/prisma_python_datefailure_demo
2. `prisma generate && prisma db push` to produce cli and db files the demo model:
```prisma
model Demo {
id Int @id
created_at DateTime @default(now())
}
```
3. open db, `sqlite3 database.db`
4. `INSERT INTO Demo (id) VALUES (1);` to create a record with the present moment in the `created_at` field
5. `SELECT * FROM Demo;` to confirm its existence
```
sqlite> SELECT * FROM Demo;
1|2022-01-08 08:01:58
```
6. run `python3 demo.py`, and observe test results:
### TESTS
```python
today = datetime.utcnow()
# 1. does a record exist who was created before the present moment? (Expecting true)
print(await client.demo.find_many(where={
'created_at': {
'lt': today
}
}))
# 2. " " " before or at present (Expecting true)
print(await client.demo.find_many(where={
'created_at': {
'lte': today
}
}))
# 3. does a record exist who was created after the present moment? (Expecting false)
print(await client.demo.find_many(where={
'created_at': {
'gt': today
}
}))
# 4. " " " after or at present moment (Expecting false)
print(await client.demo.find_many(where={
'created_at': {
'gte': today
}
}))
# 5. " " " after this morning, and before this time tomorrow (Expecting true)
print(await client.demo.find_many(where={
# (I think syntax is correct? doesn't complain)
'created_at': {
'gt': datetime(year=today.year, month=today.month, day=today.day),
'lt': today + relativedelta(day=1),
}
}))
# 6. " " " after this time tomorrow, and before this morning (impossible, so false)
print(await client.demo.find_many(where={
'created_at': {
'gt': today + relativedelta(day=1),
'lt': datetime(year=today.year, month=today.month, day=today.day)
}
}))
# 7. after this morning, before tomorrow with and syntax (I think this is correct?, Expecting true)
print(await client.demo.find_many(where={
'AND': [
{
'created_at': {
'lt': today + relativedelta(day=1),
}
},
{
'created_at': {
'gt': datetime(year=today.year, month=today.month, day=today.day)
}
}
]
}))
# 8. same as sixth test case, should be impossible, just with and syntax
print(await client.demo.find_many(where={
'AND': [
{
'created_at': {
'gt': today + relativedelta(day=1),
}
},
{
'created_at': {
'lt': datetime(year=today.year, month=today.month, day=today.day)
}
}
]
}))
```
### TEST RESULTS
```bash
# Test 1
[]
# Test 2
[]
# Test 3
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 4
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 5
[]
# Test 6
[]
# Test 7
[]
# Test 8
[]
```
Prisma Client Python is therefore currently claiming that a record I created before running the demo was **created after** I ran the demo. It also seems that including `lt(e)` in the query may immediately yield an empty result; I haven't tried specifying a time in the future.
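One plausible contributing factor (an assumption on my part, not confirmed from the Prisma internals) is that the client returns timezone-aware datetimes while `datetime.utcnow()` is naive, and naive/aware values do not compare cleanly:

```python
from datetime import datetime, timezone

stored = datetime(2022, 1, 8, 8, 1, 58, tzinfo=timezone.utc)  # as returned by the client
naive_now = datetime.utcnow()  # naive, like `today` in the tests above

# Python itself refuses to compare naive and aware datetimes directly...
raised = False
try:
    _ = stored < naive_now
except TypeError:
    raised = True
assert raised

# ...so any layer that serializes the two differently before comparing can
# silently invert results. Comparing like with like behaves as expected:
aware_now = datetime.now(timezone.utc)
assert stored < aware_now
```

If this is the cause, passing `datetime.now(timezone.utc)` instead of `datetime.utcnow()` in the test queries would be worth trying.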
## Expected behavior
### Test Result Expectations
```bash
# Test 1
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 2
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 3
[]
# Test 4
[]
# Test 5
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 6
[]
# Test 7
[Demo(id=1, created_at=datetime.datetime(2022, 1, 8, 8, 1, 58, tzinfo=datetime.timezone.utc))]
# Test 8
[]
```
## Prisma information
```prisma
// database
datasource db {
provider = "sqlite"
url = "file:database.db"
}
// generator
generator client {
provider = "prisma-client-py"
}
model Demo {
id Int @id
created_at DateTime @default(now())
}
```
## Environment & setup
- OS: Tested on Windows 10 and macOS Big Sur
- Database: SQLite3
- Python version: 3.10 on Windows, 3.9 on macOS
- Prisma version: 3.7.0, 0.4.3
| closed | 2022-01-08T08:46:26Z | 2022-03-01T16:18:07Z | https://github.com/RobertCraigie/prisma-client-py/issues/214 | [
"bug/2-confirmed",
"kind/bug",
"topic: external",
"priority/low",
"level/unknown"
] | iiian | 11 |
cvat-ai/cvat | pytorch | 8,592 | Wrong skeleton structure issue when trying to load a svg skeleton file | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
Hi folks,
I created a skeleton in a project using CVAT self hosted version and then exported it into a svg file.
Then I created a new project and tried to upload my previous svg file as a new skeleton but I got "Wrong skeleton structure" error.
Do you have any idea how to fix this issue ?
Thanks a lot

### Expected Behavior
The upload function should load my svg file as a new skeleton
### Possible Solution
_No response_
### Context
_No response_
### Environment
CVAT installed on Windows laptop
Ubuntu and Docker Desktop | closed | 2024-10-24T12:29:07Z | 2024-12-13T10:28:14Z | https://github.com/cvat-ai/cvat/issues/8592 | [
"bug"
] | manu13008 | 2 |
jina-ai/serve | machine-learning | 5,659 | Where will jina write logs to when it triggers `continue_on_error`? | We use `post` to send request to server with parameter `continue_on_error`.
`if set, a Request that causes an error will be logged only without blocking the further requests.`
So where will it be logged, and what will be written to the logs? | closed | 2023-02-06T12:17:29Z | 2023-02-07T01:04:06Z | https://github.com/jina-ai/serve/issues/5659 | [] | wqh17101 | 4 |
LibreTranslate/LibreTranslate | api | 579 | Documentation in readme for allowed parameter values | It would be great to include the allowed parameter values in the GitHub read-me. When setting parameters for a production server I had to guess or search many of them in the Discourse forum for that kind of information.
Example of questions that this documentation would avoid:
- `--api-keys` is a flag. On the command line it's obvious, but when using `LT_API_KEYS`, is defining it enough? Should I put `true`?
- Is `LT_REQUIRE_API_KEY_ORIGIN` a toggle, with the origin automatically detected? Or a string? In which case, can I put a wildcard there such as `*.mydomain.com`?
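For illustration, here is a common convention many Python servers follow for boolean environment variables — an assumption only, LibreTranslate's actual parsing may differ, which is exactly why documenting it would help:

```python
import os

TRUTHY = {"1", "true", "yes", "on"}

def env_flag(name: str, default: bool = False) -> bool:
    """Interpret an environment variable as a boolean flag (common convention)."""
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in TRUTHY

os.environ["LT_API_KEYS"] = "true"
assert env_flag("LT_API_KEYS") is True
os.environ["LT_API_KEYS"] = "0"
assert env_flag("LT_API_KEYS") is False
```

Under this convention, merely defining the variable with an empty value would *not* enable the flag — one of the ambiguities the README could resolve.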
| open | 2024-01-24T17:28:22Z | 2024-01-25T02:57:57Z | https://github.com/LibreTranslate/LibreTranslate/issues/579 | [
"enhancement"
] | jmlord | 1 |
modelscope/data-juicer | data-visualization | 233 | Video content compliance and privacy protection operators (image, text, audio) | closed | 2024-03-08T02:35:21Z | 2024-03-14T02:47:51Z | https://github.com/modelscope/data-juicer/issues/233 | [] | yxdyc | 0 |
|
yezyilomo/django-restql | graphql | 322 | Optional GenericRel support | Hi, the new version (0.16), uses `GenericRel` and `ContentType` imported in `mixins.py` lines 4 and 5, however this breaks all projects that do not use `contenttype` modules, especially for those who use django as microservices... removing all django stuff (auth, contenttype, messages and etc)... | closed | 2024-12-30T14:56:50Z | 2025-01-12T21:21:09Z | https://github.com/yezyilomo/django-restql/issues/322 | [] | shinneider-musa | 2 |
waditu/tushare | pandas | 1,626 | Basic data for delisted stocks is missing parameters such as industry and region, which may affect backtesting | When calling the stock_basic API to query delisted stocks, data such as industry and region is missing, which may affect backtesting.

User ID: 213079 | open | 2022-02-06T13:26:20Z | 2022-02-06T13:28:57Z | https://github.com/waditu/tushare/issues/1626 | [] | cheesebear | 0 |
ranaroussi/yfinance | pandas | 1,957 | 404 Client Error (only on certain tickers - which are still active/not delisted) | ### Describe bug
I have a program that cycles through a list of all the S&P500 companies to check the 'upgrades_downgrades' attribute of each one (with a for loop). It gets the tickers from a well maintained Wikipedia page (scraping with BeautifulSoup4). All the tickers are up to date (I have personally checked), are able to be found on the Yahoo Finance website and are not delisted. All the tickers are in the correct format, as I apply that after they are scraped. While most of the tickers work once the program starts iterating through them, there are a few that have been raising this 404 Client Error since maybe about a month or two ago. Again, the tickers are perfectly fine and the implementation is correct (this is evident in the fact that most of the 503 tickers get a normal response during the iteration). I will provide the full 404 Client Error I receive on some of the symbols in the debug log section.
### Simple code that reproduces your problem
The simplest way to reproduce the problem is by putting a couple of the tickers I know are "bad" into a list with a few good ones and making the same type of call on them:
```python
import datetime
import yfinance as yf
error_date = datetime.date(2024, 6, 6)
list_with_some_bad_tickers = ['COF', 'CRL', 'COO', 'FANG', 'LIN', 'LULU']
tickers_data = yf.Tickers(' '.join(list_with_some_bad_tickers))
for symbol in list_with_some_bad_tickers:
try:
ticker = tickers_data.tickers[symbol]
actions = ticker.upgrades_downgrades
actions_on_error_date = actions[actions.index.date == error_date]
# Do not print empty DataFrames (for tickers without analyst activity)
if not actions_on_error_date.empty:
# Filter the DataFrame based on multiple conditions
filtered_actions = actions_on_error_date[(actions_on_error_date['Action'].isin(['up', 'init'])) &
(actions_on_error_date['ToGrade'].isin(['Buy', 'Outperform', 'Overweight']))]
if not filtered_actions.empty:
# Print the filtered DataFrame
print(f"Analyst action(s) for {symbol} on {error_date}:")
print(filtered_actions)
print()
except Exception as e:
print(f"General error fetching data for {symbol}: {e}")
print()
```
### Debug log
404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/COO?modules=upgradeDowngradeHistory&corsDomain=finance.yahoo.com&formatted=false&symbol=COO&crumb=jroCz7O300G
404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/LIN?modules=upgradeDowngradeHistory&corsDomain=finance.yahoo.com&formatted=false&symbol=LIN&crumb=jroCz7O300G
*Note*: When running the code, the 'General error fetching data for {symbol}: 'RangeIndex' object has no attribute 'date'' that is also raised in these two instances is avoidable, and I don't think it has to do with the 404 Client Error. I did not want to make my code snippet unnecessarily long, but I do avoid that error by using pd.to_datetime() in each iteration. It has no bearing on the 404 Client Error, for which I have tried everything I know how to do and still can't catch or avoid it!
My apologies for not being more familiar with the logging/debugging process. I've been stuck on this bug for a while today and don't have the bandwidth to figure out the proper way to document this debug log right now.
### Bad data proof
I believe this would be considered the 'bad data':
404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/COO?modules=upgradeDowngradeHistory&corsDomain=finance.yahoo.com&formatted=false&symbol=COO&crumb=jroCz7O300G
404 Client Error: Not Found for url: https://query2.finance.yahoo.com/v10/finance/quoteSummary/LIN?modules=upgradeDowngradeHistory&corsDomain=finance.yahoo.com&formatted=false&symbol=LIN&crumb=jroCz7O300G
### `yfinance` version
yfinance 0.2.40
### Python version
3.10.12
### Operating system
LinuxLite (Ubuntu) | open | 2024-06-07T02:42:40Z | 2024-06-28T20:40:37Z | https://github.com/ranaroussi/yfinance/issues/1957 | [] | csirick2020 | 2 |
scikit-optimize/scikit-optimize | scikit-learn | 1,143 | gp_minimize: ranges can be a different datatype than initial guess | I am using skopt version 0.9.0 and numpy 1.23.5 (due to an error I got that is specified in #1138). I found, after downgrading numpy from 1.24.1, that I had made an error in specifying the ranges: they were ints instead of floats. However, my initial x guess was a float. I made a minimal example below:
```python
from skopt import gp_minimize
def find_sqrt(x):
"""find square root of value y given a guess x"""
y = 12.5
x = x[0]
y_guess = x**2
return abs(y_guess - y)
x0 = [1.0]
y0 = None
res1 = gp_minimize(
find_sqrt,
[
(0.0, 12.0),
],
n_calls=12,
x0=x0,
y0=y0,
)
print("guesses with proper float ranges: ", res1.x_iters)
res2 = gp_minimize(
find_sqrt,
[
(0, 12),
],
n_calls=12,
x0=x0,
y0=y0,
)
print("guesses with improper int ranges: ", res2.x_iters)
```
which outputs:
```
guesses with proper float ranges: [[1.0], [8.114281197171252], [9.021416917982528], [4.955995220826303], [6.649848371121621], [11.582469882107137], [3.953831466277097], [2.1390536274803877], [2.6277353173592157], [5.884289789665528], [3.918451377145078], [3.5048487566802144]]
guesses with improper int ranges: [[1.0], [6], [9], [5], [0], [11], [1], [1], [7], [3], [4], [4]]
```
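The pattern in the two outputs is consistent with the space type being inferred from the dtype of the bounds: `(0, 12)` is treated as an Integer dimension and `(0.0, 12.0)` as a Real one, regardless of the dtype of `x0`. A stdlib sketch of that kind of inference (illustrative only, not skopt's actual code):

```python
import random

random.seed(0)

def sample(bounds):
    lo, hi = bounds
    # Mimics how an optimizer might infer the space type from the bound dtypes
    # (a hypothetical illustration, not skopt's real dimension inference).
    if isinstance(lo, int) and isinstance(hi, int):
        return random.randint(lo, hi)   # Integer space -> int samples
    return random.uniform(lo, hi)       # Real space -> float samples

assert isinstance(sample((0, 12)), int)      # int bounds: only ints proposed
assert isinstance(sample((0.0, 12.0)), float)  # float bounds: floats proposed
```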
If this is expected behavior and I missed something in the docs then I am fine with closing the issue, but I would think that there would be an error thrown if the first and second guesses are different datatypes for the same variable. | open | 2023-02-05T22:43:39Z | 2023-02-05T22:43:55Z | https://github.com/scikit-optimize/scikit-optimize/issues/1143 | [] | ChrisBNEU | 0 |
modelscope/data-juicer | streamlit | 369 | [Bug]: Memory leak in video OP | ### Before Reporting 报告之前
- [X] I have pulled the latest code of main branch to run again and the bug still existed. 我已经拉取了主分支上最新的代码,重新运行之后,问题仍不能解决。
- [X] I have read the [README](https://github.com/alibaba/data-juicer/blob/main/README.md) carefully and no error occurred during the installation process. (Otherwise, we recommend that you can ask a question using the Question template) 我已经仔细阅读了 [README](https://github.com/alibaba/data-juicer/blob/main/README_ZH.md) 上的操作指引,并且在安装过程中没有错误发生。(否则,我们建议您使用Question模板向我们进行提问)
### Search before reporting 先搜索,再报告
- [X] I have searched the Data-Juicer [issues](https://github.com/alibaba/data-juicer/issues) and found no similar bugs. 我已经在 [issue列表](https://github.com/alibaba/data-juicer/issues) 中搜索但是没有发现类似的bug报告。
### OS 系统
Ubuntu
### Installation Method 安装方式
pip
### Data-Juicer Version Data-Juicer版本
_No response_
### Python Version Python版本
3.9
### Describe the bug 描述这个bug
1. Video container should be closed after calling `load_data_with_context` in every video OP.
2. After the stream is decoded, the stream should also be closed. Please refer to this [PyAV issue](https://github.com/PyAV-Org/PyAV/issues/1117).
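The cleanup pattern the two points describe can be sketched with `contextlib.closing`; `FakeContainer` below is a stand-in for the object returned by `av.open(...)`, so the real fix would apply the same pattern to the actual PyAV container and streams:

```python
from contextlib import closing

class FakeContainer:
    """Stand-in for the container returned by av.open(...)."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

container = FakeContainer()
with closing(container):
    pass  # decode frames / streams here
assert container.closed  # closed even if decoding raises
```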
### To Reproduce 如何复现
Run the `video_tagging_from_frames_mapper` OP for large dataset.
### Configs 配置信息
_No response_
### Logs 报错日志
_No response_
### Screenshots 截图
_No response_
### Additional 额外信息
_No response_ | closed | 2024-07-29T04:01:43Z | 2024-08-01T09:45:29Z | https://github.com/modelscope/data-juicer/issues/369 | [
"bug"
] | BeachWang | 0 |
numba/numba | numpy | 9,890 | In certain situations, `setdefault` of a Dict returns an incorrect value. | The `example` function in the following code is expected to return `DictType[int64,array(float64, 1d, A)]<iv=None>({1: [1000. 0.]})`, but instead, it returns `DictType[int64,array(float64, 1d, A)]<iv=None>({1: [0. 0.]})`. The issue is resolved when the line `b = {i: 1 for i in n}` is removed.
The version of `numba` being used is 0.60.0.
```python
import numpy as np
import numba
from numba.core import types
@numba.njit
def example():
a = numba.typed.Dict.empty(key_type=types.int64, value_type=types.float64[:])
b = numba.typed.Dict.empty(key_type=types.int64, value_type=types.int64)
n = np.array([1,2,3])
s = a.setdefault(1, np.array([0., 0.]))
s[0] += 1000.
s[1] = 0.
b = {i: 1 for i in n}
return a
``` | open | 2025-01-08T08:45:41Z | 2025-01-14T15:56:52Z | https://github.com/numba/numba/issues/9890 | [
"bug - incorrect behavior"
] | jedy | 2 |
jmcnamara/XlsxWriter | pandas | 418 | write doesn't call write_array_formula | Hi,
I am using XlsxWriter to write array formulas. The [documentation](http://xlsxwriter.readthedocs.io/worksheet.html?#write_array_formula) says that write() should call write_array_formula() when an array formula returns a single value, but write_array_formula() is only called from write_formula().
I am using Python version 3.6 and XlsxWriter 0.9.6 and Excel version 2010 and 2016.
Here is some code that demonstrates the problem:
```python
import xlsxwriter
# Create a new workbook and add a worksheet
workbook = xlsxwriter.Workbook('array_formula.xlsx')
worksheet = workbook.add_worksheet()
# Write some test data.
worksheet.write('B1', 500)
worksheet.write('B2', 10)
worksheet.write('C1', 300)
worksheet.write('C2', 15)
# Write an array formula that returns a single value
worksheet.write_formula('A1', '{=SUM(B1:C1*B2:C2)}')
worksheet.write('A2', '{=SUM(B1:C1*B2:C2)}')
workbook.close()
```
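A sketch of the check that `write()` would need in order to route brace-wrapped strings to `write_array_formula()` — a hypothetical helper for illustration, not XlsxWriter's actual implementation:

```python
def is_array_formula(token) -> bool:
    # An array formula string is written as '{=...}'; anything else falls
    # through to the plain string/number handling in write().
    return (
        isinstance(token, str)
        and token.startswith("{=")
        and token.endswith("}")
    )

assert is_array_formula("{=SUM(B1:C1*B2:C2)}")   # should go to write_array_formula()
assert not is_array_formula("=SUM(B1:B2)")        # ordinary formula
```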
A1 is an array formula while A2 is a string. | closed | 2017-02-22T19:40:02Z | 2018-09-22T19:02:50Z | https://github.com/jmcnamara/XlsxWriter/issues/418 | [
"bug",
"short term"
] | centreboard | 2 |
httpie/cli | api | 581 | Many tests fail | Ubuntu 16.10
tag: 0.9.8
Building & testing with:
```bash
echo --------
echo Cleaning
echo --------
cd git-httpie
sudo -u actionmystique -H git-reset-clean-pull-checkout.sh $branch $tag
echo -----------------------
echo Installing Dependencies
echo -----------------------
pip install -r requirements-dev.txt -U
echo --------
echo Building
echo --------
sudo -u actionmystique -H python setup.py build
echo ---------
echo Installing
echo ---------
python setup.py install
echo --------
echo Testing
echo --------
python setup.py test
```
leads to:
```
--------
Testing
--------
running test
running egg_info
writing requirements to httpie.egg-info/requires.txt
writing httpie.egg-info/PKG-INFO
writing top-level names to httpie.egg-info/top_level.txt
writing dependency_links to httpie.egg-info/dependency_links.txt
writing entry points to httpie.egg-info/entry_points.txt
reading manifest file 'httpie.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
writing manifest file 'httpie.egg-info/SOURCES.txt'
running build_ext
============================================================================ test session starts ============================================================================
platform linux2 -- Python 2.7.12+, pytest-3.0.7, py-1.4.33, pluggy-0.4.0 -- /usr/bin/python
cachedir: .cache
rootdir: /home/actionmystique/src/HTTPie/git-httpie, inifile: pytest.ini
plugins: xdist-1.15.0, timeout-1.2.0, httpbin-0.2.3, cov-2.4.0, catchlog-1.2.2
collected 236 items
httpie/utils.py::httpie.utils.humanize_bytes PASSED
tests/test_auth.py::test_basic_auth[http] PASSED
tests/test_auth.py::test_digest_auth[http---auth-type] PASSED
tests/test_auth.py::test_digest_auth[http--A] PASSED
tests/test_auth.py::test_credentials_in_url[http] PASSED
tests/test_auth.py::test_credentials_in_url_auth_flag_has_priority[http] PASSED
tests/test_downloads.py::TestDownloads::test_actual_download[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_with_Content_Length[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_no_Content_Length[http] PASSED
tests/test_downloads.py::TestDownloads::test_download_interrupted[http] PASSED
tests/test_httpie.py::test_GET[http] PASSED
tests/test_httpie.py::test_DELETE[http] PASSED
tests/test_httpie.py::test_PUT[http] PASSED
tests/test_httpie.py::test_POST_JSON_data[http] PASSED
tests/test_httpie.py::test_POST_form[http] PASSED
tests/test_httpie.py::test_POST_form_multiple_values[http] PASSED
tests/test_httpie.py::test_POST_stdin[http] PASSED
tests/test_httpie.py::test_headers[http] PASSED
tests/test_httpie.py::test_headers_unset[http] PASSED
tests/test_httpie.py::test_unset_host_header[http] SKIPPED
tests/test_httpie.py::test_headers_empty_value[http] PASSED
tests/test_httpie.py::test_json_input_preserve_order[http] PASSED
tests/test_auth.py::test_basic_auth[https] PASSED
tests/test_auth.py::test_digest_auth[https---auth-type] PASSED
tests/test_auth.py::test_digest_auth[https--A] PASSED
tests/test_auth.py::test_credentials_in_url[https] PASSED
tests/test_auth.py::test_credentials_in_url_auth_flag_has_priority[https] PASSED
tests/test_downloads.py::TestDownloads::test_actual_download[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_with_Content_Length[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_no_Content_Length[https] PASSED
tests/test_downloads.py::TestDownloads::test_download_interrupted[https] PASSED
tests/test_httpie.py::test_GET[https] PASSED
tests/test_httpie.py::test_DELETE[https] PASSED
tests/test_httpie.py::test_PUT[https] PASSED
tests/test_httpie.py::test_POST_JSON_data[https] PASSED
tests/test_httpie.py::test_POST_form[https] PASSED
tests/test_httpie.py::test_POST_form_multiple_values[https] PASSED
tests/test_httpie.py::test_POST_stdin[https] PASSED
tests/test_httpie.py::test_headers[https] PASSED
tests/test_httpie.py::test_headers_unset[https] PASSED
tests/test_httpie.py::test_unset_host_header[https] SKIPPED
tests/test_httpie.py::test_headers_empty_value[https] PASSED
tests/test_httpie.py::test_json_input_preserve_order[https] PASSED
tests/test_auth.py::test_password_prompt PASSED
tests/test_auth.py::test_only_username_in_url[username@example.org] PASSED
tests/test_auth.py::test_only_username_in_url[username:@example.org] PASSED
tests/test_auth.py::test_missing_auth PASSED
tests/test_auth_plugins.py::test_auth_plugin_parse_auth_false PASSED
tests/test_auth_plugins.py::test_auth_plugin_require_auth_false PASSED
tests/test_auth_plugins.py::test_auth_plugin_require_auth_false_and_auth_provided PASSED
tests/test_auth_plugins.py::test_auth_plugin_prompt_password_false PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_stdin PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_file_path PASSED
tests/test_binary.py::TestBinaryRequestData::test_binary_file_form PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_suppresses_when_terminal PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_suppresses_when_not_terminal_but_pretty PASSED
tests/test_binary.py::TestBinaryResponseData::test_binary_included_and_correct_when_suitable PASSED
tests/test_cli.py::TestItemParsing::test_invalid_items PASSED
tests/test_cli.py::TestItemParsing::test_escape_separator PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path=c:\windows-path-=-c:\windows] PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path=c:\windows\-path-=-c:\windows\] PASSED
tests/test_cli.py::TestItemParsing::test_backslash_before_non_special_character_does_not_escape[path\==c:\windows-path=-=-c:\windows] PASSED
tests/test_cli.py::TestItemParsing::test_escape_longsep PASSED
tests/test_cli.py::TestItemParsing::test_valid_items PASSED
tests/test_cli.py::TestItemParsing::test_multiple_file_fields_with_same_field_name PASSED
tests/test_cli.py::TestItemParsing::test_multiple_text_fields_with_same_field_name PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_in_url PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_items PASSED
tests/test_cli.py::TestQuerystring::test_query_string_params_in_url_and_items_with_duplicates PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_slash PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_path PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port_and_slash PASSED
tests/test_cli.py::TestLocalhostShorthand::test_expand_localhost_shorthand_with_port_and_path PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_shorthand_ipv6_as_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_longer_ipv6_as_shorthand PASSED
tests/test_cli.py::TestLocalhostShorthand::test_dont_expand_full_ipv6_as_shorthand PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_and_valid PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_not_set PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_data_field PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_header_field PASSED
tests/test_cli.py::TestArgumentParser::test_guess_when_method_set_but_invalid_and_item_exists PASSED
tests/test_cli.py::TestNoOptions::test_valid_no_options PASSED
tests/test_cli.py::TestNoOptions::test_invalid_no_options PASSED
tests/test_cli.py::TestIgnoreStdin::test_ignore_stdin PASSED
tests/test_cli.py::TestIgnoreStdin::test_ignore_stdin_cannot_prompt_password PASSED
tests/test_cli.py::TestSchemes::test_invalid_custom_scheme PASSED
tests/test_cli.py::TestSchemes::test_invalid_scheme_via_via_default_scheme PASSED
tests/test_cli.py::TestSchemes::test_default_scheme PASSED
tests/test_config.py::test_default_options PASSED
tests/test_config.py::test_default_options_overwrite PASSED
tests/test_config.py::test_migrate_implicit_content_type PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_GET PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_GET_with_headers PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_json PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_form PASSED
tests/test_defaults.py::TestImplicitHTTPMethod::test_implicit_POST_stdin PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_no_data_no_auto_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_no_data_no_auto_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_with_data_auto_JSON_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_with_data_auto_JSON_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_explicit_JSON_auto_JSON_accept PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_GET_explicit_JSON_explicit_headers PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_form_auto_Content_Type PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_POST_form_Content_Type_override PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_print_only_body_when_stdout_redirected_by_default PASSED
tests/test_defaults.py::TestAutoContentTypeAndAcceptHeaders::test_print_overridable_when_stdout_redirected PASSED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/README.rst] FAILED
tests/test_docs.py::test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst] FAILED
tests/test_downloads.py::TestDownloadUtils::test_Content_Range_parsing PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=hello-WORLD_123.txt-hello-WORLD_123.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=".hello-WORLD_123.txt"-hello-WORLD_123.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename="white space.txt"-white space.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename="\"quotes\".txt"-"quotes".txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=/etc/hosts-hosts] PASSED
tests/test_downloads.py::TestDownloadUtils::test_Content_Disposition_parsing[attachment; filename=-None] PASSED
tests/test_downloads.py::TestDownloadUtils::test_filename_from_url PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-0-foo.bar] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-1-foo.bar-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.bar-10-foo.bar-10] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-0-AAAAAAAAAA] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-1-AAAAAAAA-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA-10-AAAAAAA-10] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA.txt-0-AAAAAA.txt] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[AAAAAAAAAAAAAAAAAAAA.txt-1-AAAA.txt-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-0-foo.AAAAAA] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-1-foo.AAAA-1] PASSED
tests/test_downloads.py::TestDownloadUtils::test_unique_filename[foo.AAAAAAAAAAAAAAAAAAAA-10-foo.AAA-10] PASSED
tests/test_errors.py::test_error PASSED
tests/test_errors.py::test_error_traceback PASSED
tests/test_errors.py::test_timeout PASSED
tests/test_exit_status.py::test_keyboard_interrupt_during_arg_parsing_exit_status PASSED
tests/test_exit_status.py::test_keyboard_interrupt_in_program_exit_status PASSED
tests/test_exit_status.py::test_ok_response_exits_0 PASSED
tests/test_exit_status.py::test_error_response_exits_0_without_check_status PASSED
tests/test_exit_status.py::test_timeout_exit_status PASSED
tests/test_exit_status.py::test_3xx_check_status_exits_3_and_stderr_when_stdout_redirected PASSED
tests/test_exit_status.py::test_3xx_check_status_redirects_allowed_exits_0 PASSED
tests/test_exit_status.py::test_4xx_check_status_exits_4 PASSED
tests/test_exit_status.py::test_5xx_check_status_exits_5 PASSED
tests/test_httpie.py::test_debug PASSED
tests/test_httpie.py::test_help PASSED
tests/test_httpie.py::test_version PASSED
tests/test_httpie.py::test_headers_empty_value_with_value_gives_error PASSED
tests/test_output.py::test_output_option[True] PASSED
tests/test_output.py::test_output_option[False] PASSED
tests/test_output.py::TestVerboseFlag::test_verbose PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_form PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_json PASSED
tests/test_output.py::TestVerboseFlag::test_verbose_implies_all PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json+foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/foo+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/json-foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/x-json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json+bar-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/bar+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/json-foo-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[foo/x-json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[application/vnd.comverge.grid+hal+json-False-None-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[text/plain-True-{}-JSON] PASSED
tests/test_output.py::TestColors::test_get_lexer[text/plain-True-foo-Text only] PASSED
tests/test_output.py::TestColors::test_get_lexer_not_found PASSED
tests/test_output.py::TestPrettyOptions::test_pretty_enabled_by_default PASSED
tests/test_output.py::TestPrettyOptions::test_pretty_enabled_by_default_unless_stdout_redirected PASSED
tests/test_output.py::TestPrettyOptions::test_force_pretty PASSED
tests/test_output.py::TestPrettyOptions::test_force_ugly PASSED
tests/test_output.py::TestPrettyOptions::test_subtype_based_pygments_lexer_match PASSED
tests/test_output.py::TestPrettyOptions::test_colors_option PASSED
tests/test_output.py::TestPrettyOptions::test_format_option PASSED
tests/test_output.py::TestLineEndings::test_CRLF_headers_only PASSED
tests/test_output.py::TestLineEndings::test_CRLF_ugly_response PASSED
tests/test_output.py::TestLineEndings::test_CRLF_formatted_response PASSED
tests/test_output.py::TestLineEndings::test_CRLF_ugly_request PASSED
tests/test_output.py::TestLineEndings::test_CRLF_formatted_request PASSED
tests/test_redirects.py::test_follow_all_redirects_shown PASSED
tests/test_redirects.py::test_follow_without_all_redirects_hidden[--follow] PASSED
tests/test_redirects.py::test_follow_without_all_redirects_hidden[-F] PASSED
tests/test_redirects.py::test_follow_all_output_options_used_for_redirects PASSED
tests/test_redirects.py::test_follow_redirect_output_options PASSED
tests/test_redirects.py::test_max_redirects PASSED
tests/test_regressions.py::test_Host_header_overwrite PASSED
tests/test_regressions.py::test_output_devnull PASSED
tests/test_sessions.py::TestSessionFlow::test_session_created_and_reused PASSED
tests/test_sessions.py::TestSessionFlow::test_session_update PASSED
tests/test_sessions.py::TestSessionFlow::test_session_read_only PASSED
tests/test_sessions.py::TestSession::test_session_ignored_header_prefixes PASSED
tests/test_sessions.py::TestSession::test_session_by_path PASSED
tests/test_sessions.py::TestSession::test_session_unicode PASSED
tests/test_sessions.py::TestSession::test_session_default_header_value_overwritten PASSED
tests/test_sessions.py::TestSession::test_download_in_session PASSED
tests/test_ssl.py::test_ssl_version[ssl2.3] PASSED
tests/test_ssl.py::test_ssl_version[tls1] PASSED
tests/test_ssl.py::test_ssl_version[tls1.1] PASSED
tests/test_ssl.py::test_ssl_version[tls1.2] PASSED
tests/test_ssl.py::TestClientCert::test_cert_and_key PASSED
tests/test_ssl.py::TestClientCert::test_cert_pem PASSED
tests/test_ssl.py::TestClientCert::test_cert_file_not_found PASSED
tests/test_ssl.py::TestClientCert::test_cert_file_invalid FAILED
tests/test_ssl.py::TestClientCert::test_cert_ok_but_missing_key FAILED
tests/test_ssl.py::TestServerCert::test_verify_no_OK PASSED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_path PASSED
tests/test_ssl.py::TestServerCert::test_self_signed_server_cert_by_default_raises_ssl_error PASSED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_invalid_path FAILED
tests/test_ssl.py::TestServerCert::test_verify_custom_ca_bundle_invalid_bundle FAILED
(pytest-httpbin server hit an exception serving request: EOF occurred in violation of protocol (_ssl.c:590); attempting to ignore so the rest of the tests can run)
tests/test_stream.py::test_pretty_redirected_stream PASSED
tests/test_stream.py::test_encoded_stream PASSED
tests/test_stream.py::test_redirected_stream PASSED
tests/test_unicode.py::test_unicode_headers PASSED
tests/test_unicode.py::test_unicode_headers_verbose PASSED
tests/test_unicode.py::test_unicode_form_item PASSED
tests/test_unicode.py::test_unicode_form_item_verbose PASSED
tests/test_unicode.py::test_unicode_json_item PASSED
tests/test_unicode.py::test_unicode_json_item_verbose PASSED
tests/test_unicode.py::test_unicode_raw_json_item PASSED
tests/test_unicode.py::test_unicode_raw_json_item_verbose PASSED
tests/test_unicode.py::test_unicode_url_query_arg_item PASSED
tests/test_unicode.py::test_unicode_url_query_arg_item_verbose PASSED
tests/test_unicode.py::test_unicode_url PASSED
tests/test_unicode.py::test_unicode_basic_auth PASSED
tests/test_unicode.py::test_unicode_digest_auth PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_non_existent_file_raises_parse_error PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_ok PASSED
tests/test_uploads.py::TestMultipartFormDataFileUpload::test_upload_multiple_fields_with_the_same_name PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_with_explicit_content_type PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_no_field_name_allowed PASSED
tests/test_uploads.py::TestRequestBodyFromFilePath::test_request_body_from_file_by_path_no_data_items_allowed PASSED
tests/test_windows.py::TestWindowsOnly::test_windows_colorized_output SKIPPED
tests/test_windows.py::TestFakeWindows::test_output_file_pretty_not_allowed_on_windows PASSED
tests/utils.py::utils.http PASSED
================================================================================= FAILURES ==================================================================================
_______________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst] ________________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/AUTHORS.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c45daf8d0>.returncode
tests/test_docs.py:39: AssertionError
_____________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst] _____________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/CONTRIBUTING.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44de4950>.returncode
tests/test_docs.py:39: AssertionError
______________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst] _______________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/CHANGELOG.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44e22c90>.returncode
tests/test_docs.py:39: AssertionError
________________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/README.rst] ________________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/README.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44de4e10>.returncode
tests/test_docs.py:39: AssertionError
_____________________________________________ test_rst_file_syntax[/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst] _____________________________________________
filename = '/home/actionmystique/src/HTTPie/git-httpie/tests/README.rst'
@pytest.mark.skipif(not has_docutils(), reason='docutils not installed')
@pytest.mark.parametrize('filename', filenames)
def test_rst_file_syntax(filename):
p = subprocess.Popen(
['rst2pseudoxml.py', '--report=1', '--exit-status=1', filename],
stderr=subprocess.PIPE,
stdout=subprocess.PIPE
)
err = p.communicate()[1]
> assert p.returncode == 0, err.decode('utf8')
E AssertionError: Traceback (most recent call last):
E File "/usr/local/bin/rst2pseudoxml.py", line 17, in <module>
E from docutils.core import publish_cmdline, default_description
E File "/usr/local/lib/python2.7/dist-packages/docutils/__init__.py", line 68, in <module>
E class ApplicationError(StandardError):
E NameError: name 'StandardError' is not defined
E
E assert 1 == 0
E + where 1 = <subprocess.Popen object at 0x7f6c44e1cfd0>.returncode
tests/test_docs.py:39: AssertionError
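
Note on the five `test_rst_file_syntax` failures above: they are all the same crash. `rst2pseudoxml.py` dies on import with `NameError: name 'StandardError' is not defined` before it ever parses the `.rst` file, so the problem is the docutils installation, not the documents. The test's `has_docutils()` guard only probes the interpreter running pytest, while the subprocess resolves `rst2pseudoxml.py` through its own shebang, which can land on a different, broken docutils. A minimal sketch (not part of the suite, just an illustration) of a guard that probes the actual tool instead:

```python
import subprocess


def rst_tool_works():
    """Probe the rst2pseudoxml.py executable itself.

    Unlike importing docutils into the test interpreter, this exercises
    the same shebang/interpreter that the doc-syntax tests invoke via
    subprocess, so a broken install is detected up front.
    """
    try:
        p = subprocess.Popen(
            ['rst2pseudoxml.py', '--version'],
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
    except OSError:  # tool not on PATH at all
        return False
    p.communicate()
    return p.returncode == 0
```

With something like this as the `skipif` condition, a broken docutils install would skip the five doc-syntax tests instead of failing them.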
___________________________________________________________________ TestClientCert.test_cert_file_invalid ___________________________________________________________________
self = <test_ssl.TestClientCert instance at 0x7f6c3d403998>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_cert_file_invalid(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
> '--cert', __file__)
tests/test_ssl.py:62:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:322: in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:416: in load_cert_chain
self._ctx.use_certificate_file(certfile)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:740: in use_certificate_file
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_certificate_file', 'PEM lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
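
Note on the SSL failures (`test_cert_file_invalid`, `test_cert_ok_but_missing_key`, `test_verify_custom_ca_bundle_invalid_path`): they share one pattern. The tests expect requests' `SSLError`, but with the pyOpenSSL contrib module injected into urllib3, the raw `OpenSSL.SSL.Error` propagates unwrapped; it is not a subclass of `SSLError`, so `pytest.raises(SSLError)` never matches and the exception escapes, failing the test. Until the wrapping is fixed, a tolerant assertion can accept either type. A self-contained sketch (the two exception classes are hypothetical stand-ins for `requests.exceptions.SSLError` and `OpenSSL.SSL.Error`):

```python
# Hypothetical stand-ins for requests.exceptions.SSLError and
# OpenSSL.SSL.Error -- with pyOpenSSL injected, the latter is raised
# and is NOT a subclass of the former, which is what breaks the tests.
class RequestsSSLError(Exception):
    pass


class PyOpenSSLError(Exception):
    pass


# Accept either type until urllib3/requests wrap the pyOpenSSL error.
SSL_ERRORS = (RequestsSSLError, PyOpenSSLError)


def make_request():
    # Simulates the behaviour in the log: the raw pyOpenSSL error escapes.
    raise PyOpenSSLError([('PEM routines', 'PEM_read_bio', 'no start line')])


def raises_ssl_error(fn):
    """Return True iff fn raises one of the accepted SSL error types."""
    try:
        fn()
    except SSL_ERRORS:
        return True
    return False
```

In the suite itself the equivalent one-line change would be `pytest.raises((SSLError, OpenSSL.SSL.Error))`, since `pytest.raises` accepts a tuple of exception types.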
________________________________________________________________ TestClientCert.test_cert_ok_but_missing_key ________________________________________________________________
self = <test_ssl.TestClientCert instance at 0x7f6c3d2b9b00>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_cert_ok_but_missing_key(self, httpbin_secure):
with pytest.raises(SSLError):
http(httpbin_secure + '/get',
> '--cert', CLIENT_CERT)
tests/test_ssl.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:322: in ssl_wrap_socket
context.load_cert_chain(certfile, keyfile)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:419: in load_cert_chain
self._ctx.use_privatekey_file(keyfile or certfile)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:798: in use_privatekey_file
self._raise_passphrase_exception()
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:777: in _raise_passphrase_exception
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('PEM routines', 'PEM_read_bio', 'no start line'), ('SSL routines', 'SSL_CTX_use_PrivateKey_file', 'PEM lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
_________________________________________________________ TestServerCert.test_verify_custom_ca_bundle_invalid_path __________________________________________________________
self = <test_ssl.TestServerCert instance at 0x7f6c45eab638>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_verify_custom_ca_bundle_invalid_path(self, httpbin_secure):
with pytest.raises(SSLError):
> http(httpbin_secure.url + '/get', '--verify', '/__not_found__')
tests/test_ssl.py:89:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:308: in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:411: in load_verify_locations
self._ctx.load_verify_locations(cafile, capath)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:669: in load_verify_locations
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'BIO_new_file', 'no such file'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: [('system library', 'fopen', 'No such file or directory'), ('BIO routines', 'BIO_new_file', 'no such file'), ('x509 certificate routines', 'X509_load_cert_crl_file', 'system lib')]
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
________________________________________________________ TestServerCert.test_verify_custom_ca_bundle_invalid_bundle _________________________________________________________
self = <test_ssl.TestServerCert instance at 0x7f6c3d2e27e8>, httpbin_secure = <SecureServer(<class 'pytest_httpbin.serve.SecureServer'>, started 140102906935040)>
def test_verify_custom_ca_bundle_invalid_bundle(self, httpbin_secure):
with pytest.raises(SSLError):
> http(httpbin_secure.url + '/get', '--verify', __file__)
tests/test_ssl.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/utils.py:207: in http
exit_status = main(args=args, **kwargs)
httpie/core.py:227: in main
log_error=log_error,
httpie/core.py:99: in program
final_response = get_response(args, config_dir=env.config.directory)
httpie/client.py:70: in get_response
response = requests_session.request(**kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
/usr/local/lib/python2.7/dist-packages/requests/sessions.py:609: in send
r = adapter.send(request, **kwargs)
/usr/local/lib/python2.7/dist-packages/requests/adapters.py:423: in send
timeout=timeout
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:600: in urlopen
chunked=chunked)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:345: in _make_request
self._validate_conn(conn)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connectionpool.py:844: in _validate_conn
conn.connect()
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/connection.py:326: in connect
ssl_context=context)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:308: in ssl_wrap_socket
context.load_verify_locations(ca_certs, ca_cert_dir)
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/contrib/pyopenssl.py:411: in load_verify_locations
self._ctx.load_verify_locations(cafile, capath)
/usr/local/lib/python2.7/dist-packages/OpenSSL/SSL.py:669: in load_verify_locations
_raise_current_error()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
exception_type = <class 'OpenSSL.SSL.Error'>
def exception_from_error_queue(exception_type):
"""
Convert an OpenSSL library failure into a Python exception.
When a call to the native OpenSSL library fails, this is usually signalled
by the return value, and an error code is stored in an error queue
associated with the current thread. The err library provides functions to
obtain these error codes and textual error messages.
"""
errors = []
while True:
error = lib.ERR_get_error()
if error == 0:
break
errors.append((
text(lib.ERR_lib_error_string(error)),
text(lib.ERR_func_error_string(error)),
text(lib.ERR_reason_error_string(error))))
> raise exception_type(errors)
E Error: []
/usr/local/lib/python2.7/dist-packages/OpenSSL/_util.py:54: Error
--------------------------------------------------------------------------- Captured stderr call ----------------------------------------------------------------------------
http: error: Error: []
----------------------------------------------------------------------------- Captured log call -----------------------------------------------------------------------------
connectionpool.py 818 DEBUG Starting new HTTPS connection (1): 127.0.0.1
========================================================================== pytest-warning summary ===========================================================================
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_auth.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_binary.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_cli.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_config.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_defaults.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_downloads.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_exit_status.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_httpie.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_output.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_sessions.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_stream.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_uploads.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
WC1 /home/actionmystique/src/HTTPie/git-httpie/tests/test_windows.py cannot collect test class 'TestEnvironment' because it has a __init__ constructor
=================================================== 9 failed, 224 passed, 3 skipped, 13 pytest-warnings in 14.51 seconds ====================================================
``` | closed | 2017-05-01T18:14:35Z | 2019-09-03T15:23:17Z | https://github.com/httpie/cli/issues/581 | [] | jean-christophe-manciot | 6 |
waditu/tushare | pandas | 1,761 | Function will soon be deprecated | D:\Program Files (x86)\Python\Lib\site-packages\tushare\pro\data_pro.py:130: FutureWarning: Series.fillna with 'method' is deprecated and will raise in a future version. Use obj.ffill() or obj.bfill() instead.
data['adj_factor'] = data['adj_factor'].fillna(method='bfill') | open | 2025-01-15T13:32:09Z | 2025-01-15T13:32:09Z | https://github.com/waditu/tushare/issues/1761 | [] | shavingha | 0 |
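The deprecation warning above corresponds to a one-line change in `data_pro.py`; a sketch of the replacement the message suggests, assuming `data` is a pandas DataFrame with an `adj_factor` column:

```python
import pandas as pd

data = pd.DataFrame({"adj_factor": [None, None, 1.5, None, 2.0]})
# Old, deprecated form: data['adj_factor'].fillna(method='bfill')
data["adj_factor"] = data["adj_factor"].bfill()
print(data["adj_factor"].tolist())  # -> [1.5, 1.5, 1.5, 2.0, 2.0]
```

`bfill()` (and `ffill()` for the forward case) is a drop-in replacement for `fillna(method=...)` and avoids the FutureWarning.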
tqdm/tqdm | pandas | 1,270 | Cannot append elements to shell array while using tqdm | - [ ] I have marked all applicable categories:
+ [ ] exception-raising bug
+ [ ] visual output bug
+ [x] Command-line unintentional behaviour
- [x] I have visited the [source website], and in particular
read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
environment, where applicable:
```bash
python -c "import tqdm, sys; print(tqdm.__version__, sys.version, sys.platform)"
4.60.0 3.8.0 (default, Oct 6 2020, 11:07:52)
[GCC 10.2.0] linux
```
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
I have encountered an issue where elements are not appended to shell arrays when using the tqdm bar from the terminal. The following example produces an empty array
```
$ test_arr=()
total=10
for i in $(seq $total)
do
echo "$i"
test_arr+=("$i")
done | tqdm --total $total --null
echo "Size of array: ${#test_arr[@]}"
echo "with elements: ${test_arr[@]}"
100%|████████████████████████████████████████████████████████████████████████████| 10/10 [00:00<00:00, 242445.32it/s]
Size of array: 0
with elements:
```
Whereas the elements are properly added if you remove the bar
```
$ test_arr=()
total=10
for i in $(seq $total)
do
echo "$i"
test_arr+=("$i")
done
echo "Size of array: ${#test_arr[@]}"
echo "with elements: ${test_arr[@]}"
1
2
3
4
5
6
7
8
9
10
Size of array: 10
with elements: 1 2 3 4 5 6 7 8 9 10
``` | open | 2021-11-02T12:26:51Z | 2021-11-02T12:26:51Z | https://github.com/tqdm/tqdm/issues/1270 | [] | jakob1379 | 0 |
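For what it's worth, the behavior reported above looks like standard shell pipeline semantics rather than anything tqdm does: each segment of a `|` pipeline runs in a subshell, so `test_arr+=` happens in a child process and the parent's array never changes. A minimal bash sketch of the failure mode and a possible process-substitution workaround (using `cat >/dev/null` as a stand-in for `tqdm --total "$total" --null` so it runs without tqdm installed):

```shell
#!/usr/bin/env bash
# Pipe form: the loop body runs in a subshell, so the array is lost.
arr=()
for i in 1 2 3; do echo "$i"; arr+=("$i"); done | cat >/dev/null
echo "pipe: ${#arr[@]}"        # -> pipe: 0

# Process substitution keeps the loop in the current shell.
arr=()
for i in 1 2 3; do echo "$i"; arr+=("$i"); done > >(cat >/dev/null)
echo "procsub: ${#arr[@]}"     # -> procsub: 3
```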
ranaroussi/yfinance | pandas | 1,644 | RuntimeError: dictionary changed size during iteration | While using the function below for multiple stocks (ticks), I am getting this error. Please suggest a resolution.
yf.download(ticks, period="1d", interval="1m", rounding=False, prepost=True, ignore_tz=False, progress=False)
File "/home/user/doc/lib/python3.11/site-packages/yfinance/utils.py", line 105, in wrapper
result = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/user/doc/lib/python3.11/site-packages/yfinance/multi.py", line 193, in download
for ticker in shared._TRACEBACKS:
RuntimeError: dictionary changed size during iteration
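The traceback points at `for ticker in shared._TRACEBACKS:` racing against another thread that inserts keys while the loop runs. A generic illustration of the failure mode and the usual snapshot idiom (plain Python, not yfinance's actual fix):

```python
d = {"a": 1, "b": 2}
err = None
try:
    for k in d:
        d[k + "!"] = 0          # mutating while iterating
except RuntimeError as e:
    err = e
print(type(err).__name__)       # -> RuntimeError

d2 = {"a": 1, "b": 2}
for k in list(d2):              # iterate over a snapshot instead
    d2[k + "!"] = 0
print(len(d2))                  # -> 4
```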
| closed | 2023-08-02T10:00:39Z | 2023-12-12T18:54:04Z | https://github.com/ranaroussi/yfinance/issues/1644 | [] | suntech123 | 4 |
huggingface/datasets | deep-learning | 7,192 | Add repeat() for iterable datasets | ### Feature request
It would be useful to be able to straightforwardly repeat iterable datasets indefinitely, to provide complete control over starting and ending of iteration to the user.
An IterableDataset.repeat(n) function could do this automatically
### Motivation
This feature was discussed in this issue https://github.com/huggingface/datasets/issues/7147, and would resolve the need to use the hack of interleave datasets with probability 0 as a simple way to achieve this functionality.
An additional benefit might be the simplification of the use of iterable datasets in a distributed setting:
If the user can assume that datasets will repeat indefinitely, then issues around different numbers of samples appearing on different devices (e.g. https://github.com/huggingface/datasets/issues/6437, https://github.com/huggingface/datasets/issues/6594, https://github.com/huggingface/datasets/issues/6623, https://github.com/huggingface/datasets/issues/6719) can potentially be straightforwardly resolved by simply doing:
ids.repeat(None).take(n_samples_per_epoch)
### Your contribution
I'm not familiar enough with the codebase to assess how straightforward this would be to implement.
If it might be very straightforward, I could possibly have a go. | closed | 2024-10-02T17:48:13Z | 2025-03-18T10:48:33Z | https://github.com/huggingface/datasets/issues/7192 | [
"enhancement"
] | alex-hh | 3 |
noirbizarre/flask-restplus | flask | 29 | Splitting up API library into multiple files | I've tried several different ways to split up the API files into separate Python source files but have come up empty. I love the additions to flask-restplus, but it appears that only the classes within the main Python file are seen. Is there a good example of how to do this? In Flask-Restful it was a bit simpler, as you could just add the resource and point to a different Python file that got imported.
| closed | 2015-03-13T22:41:18Z | 2018-01-05T18:11:28Z | https://github.com/noirbizarre/flask-restplus/issues/29 | [
"help wanted"
] | kinabalu | 16 |
TencentARC/GFPGAN | pytorch | 546 | Please help me | It is not opening, please help me. | closed | 2024-05-15T02:22:15Z | 2024-05-15T02:22:47Z | https://github.com/TencentARC/GFPGAN/issues/546 | [] | Sukumarghoshal | 0 |
reloadware/reloadium | pandas | 169 | Reloadium debugger shows filtered log messages in django | ## Describe the bug*
Django app running in Pycharm 2023.2, Windows 11
If debugging with native debugger, log messages that are filtered do not appear in the console.
If debugging with reloadium, they do appear and clutter the console. Also, normal messages are duplicated in a different format.
## To Reproduce
Steps to reproduce the behavior:
In settings.py, create a LOGGING = entry
Example:
```
LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'skip_static_requests': {
'()': 'django.utils.log.CallbackFilter',
'callback': skip_static_requests
},
},
'handlers': {
'console': {
'level': 'INFO',
'filters': ['skip_static_requests'],
'class': 'logging.StreamHandler',
'stream': sys.stdout,
'formatter': 'verbose',
},
...
```
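The `filters` entry above references a `skip_static_requests` callable defined elsewhere in `settings.py`. For context, a minimal sketch of what such a `CallbackFilter` callback typically looks like (hypothetical name and logic; the standard `logging` callback contract is to return `False` to drop a record):

```python
import logging

def skip_static_requests(record):
    """Drop access-log records for static assets (illustrative only)."""
    return "GET /static/" not in record.getMessage()

static_hit = logging.LogRecord("django.server", logging.INFO, "", 0,
                               '"GET /static/app.css HTTP/1.1" 200', None, None)
page_hit = logging.LogRecord("django.server", logging.INFO, "", 0,
                             '"GET /home/ HTTP/1.1" 200', None, None)
print(skip_static_requests(static_hit))  # -> False (filtered out)
print(skip_static_requests(page_hit))    # -> True  (kept)
```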
## Expected behavior
Filtered messages should not be displayed in the console. Also, messages should not be duplicated when they are not filtered.
## Screenshots

The Red messages should not appear and do not when using native debugger. Additionally, they are not formatted according to the settings.
The messages in Green are normal and formatted as per the settings LOGGING entry.
| closed | 2023-09-25T17:49:10Z | 2023-11-07T16:07:18Z | https://github.com/reloadware/reloadium/issues/169 | [] | andyp05 | 3 |
numba/numba | numpy | 9,451 | [numba.cuda] StructModel + FP16 fails with CUDA because make_attribute_wrapper assumes default_manager | This
```python
from numba import cuda
from numba import types
from numba.extending import models, make_attribute_wrapper, lower_builtin, as_numba_type, type_callable
from numba.cuda.models import register_model
from numba.core import cgutils
import numba.cuda.cudaimpl
class float16x2(object):
pass
class float16x2Type(types.Type):
def __init__(self):
super(float16x2Type, self).__init__(name='float16x2')
float16x2_type = float16x2Type()
## Type inference (Python --> FE type)
as_numba_type.register(float16x2, float16x2_type)
@type_callable(float16x2)
def type_float16x2(context):
def typer(x, y):
if all(isinstance(v, types.Float) for v in [x, y]):
return float16x2_type
return typer
### Lowering (FE types --> LLVM)
# Data model
@register_model(float16x2Type)
class float16x2TypeModel(models.StructModel):
def __init__(self, dmm, fe_type):
members = [(f, types.float16) for f in ['x', 'y']]
models.StructModel.__init__(self, dmm, fe_type, members)
for field in ['x', 'y']:
make_attribute_wrapper(float16x2Type, field, field)
# Constructors
@lower_builtin(float16x2, types.Float, types.Float)
def impl(context, builder, sig, args):
typ = sig.return_type
obj = cgutils.create_struct_proxy(typ)(context, builder)
x, y = args
x = numba.cuda.cudaimpl.float_to_float16_cast(context, builder, sig.args[0], types.float16, x)
y = numba.cuda.cudaimpl.float_to_float16_cast(context, builder, sig.args[1], types.float16, y)
obj.x, obj.y = x, y
return obj._getvalue()
@cuda.jit
def f():
a = float16x2(1.0, 2.0)
print(a.y)
f[1, 1, 0, 0]()
cuda.synchronize()
```
fails with
```
numba.core.errors.TypingError: Failed in cuda mode pipeline (step: nopython frontend)
Internal error at resolving type of attribute "y" of "a".
<class '__main__.float16x2Type'>
During: typing of get attribute at [redacted] (54)
Enable logging at debug level for details.
File "[redacted]", line 54:
def f():
<source elided>
a = float16x2(1.0, 2.0)
print(a.y)
^
```
The reason seems to be that `make_attribute_wrapper` [here](https://github.com/numba/numba/blob/main/numba/core/extending.py#L279C38-L279C53) uses
```
from numba.core.datamodel import default_manager
```
and, as such, assumes the default data model. This means that for FP16 it will use the generic data model [here](https://github.com/numba/numba/blob/7b7137a1f2bf003e8aa42255f571d0947654e7ce/numba/core/datamodel/models.py#L363) instead of the CUDA-specific one from [here](https://github.com/numba/numba/blob/7b7137a1f2bf003e8aa42255f571d0947654e7ce/numba/cuda/models.py#L35)
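A toy model of the mismatch (plain Python, not numba's actual API): the attribute wrapper always consults the default registry, while the model was only ever registered in the target-specific one, so the lookup fails at typing time:

```python
default_registry = {}            # what make_attribute_wrapper consults
cuda_registry = {}               # what numba.cuda.models.register_model fills

cuda_registry["float16x2"] = "StructModel(x: float16, y: float16)"

def resolve_attribute(typ):
    model = default_registry.get(typ)   # wrong registry for CUDA-only types
    if model is None:
        raise KeyError(f"no data model registered for {typ}")
    return model

try:
    resolve_attribute("float16x2")
except KeyError as e:
    print(e)   # mirrors the 'Internal error at resolving type of attribute'
```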
cc @gmarkall
| open | 2024-02-20T23:05:41Z | 2024-02-27T10:07:20Z | https://github.com/numba/numba/issues/9451 | [
"feature_request",
"CUDA",
"bug - incorrect behavior"
] | nvlcambier | 1 |
plotly/dash | data-visualization | 2,911 | Inconsistent behavior with dcc.Store initial values and prevent_initial_call |
```
dash 2.16.1
dash-bootstrap-components 1.6.0
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-mantine-components 0.12.1
dash-table 5.0.0
```
**Description**
There appears to be an inconsistency in how dcc.Store components behave with different initial values when using prevent_initial_call=True.
**Expected behavior**
When using `prevent_initial_call=True`, callbacks should not be triggered on initial load for any dcc.Store, regardless of initial value.
**Actual behavior**
A callback with `prevent_initial_call=True` is triggered for a dcc.Store initialized with None, but not for one initialized with an empty dictionary {}.
Minimal reproducible example:
```
import dash
from dash import dcc, html, Input, Output, State
app = dash.Dash(__name__)
app.layout = html.Div([
dcc.Store(id='store_none', data=None),
dcc.Store(id='store', data={}),
html.H3("none store"),
html.Div(id='output none store', children="Not called yet"),
html.H3("store"),
html.Div(id='output store', children="Not called yet"),
])
@app.callback(
Output('output none store', 'children'),
Input('store_none', 'data'),
prevent_initial_call=True
)
def on_store_none(store_none):
return "called"
@app.callback(
Output('output store', 'children'),
Input("store", "data"),
prevent_initial_call=True
)
def on_store(store):
return "called"
if __name__ == '__main__':
app.run_server(debug=True)
```
**Screenshots**
 | open | 2024-07-03T16:42:36Z | 2024-08-30T15:13:07Z | https://github.com/plotly/dash/issues/2911 | [
"bug",
"P3"
] | shimon-l | 1 |
MemeMeow-Studio/MemeMeow | streamlit | 1 | The web version is not working | 
Trying to search and it loads forever. | closed | 2025-02-18T07:58:18Z | 2025-02-18T18:44:04Z | https://github.com/MemeMeow-Studio/MemeMeow/issues/1 | [
"enhancement / 功能请求"
] | Lee-Juntong | 4 |
jupyter/nbviewer | jupyter | 861 | Interactive plots with Plotly: Link broken 404 error! | ## Tried to view the nbviewer "Interactive plots with Plotly" notebook by clicking the link you offer: ## https://nbviewer.jupyter.org/github/plotly/python-user-guide/blob/master/Index.ipynb
# got:
404 : Not Found
You are requesting a page that does not exist!
The remote resource was not found. | open | 2019-10-31T16:20:22Z | 2019-11-21T22:13:18Z | https://github.com/jupyter/nbviewer/issues/861 | [
"tag:Examples",
"tag:Public Service"
] | Rauleman | 2 |
idealo/image-super-resolution | computer-vision | 130 | tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/F_2_2/kernel) | ```
from PIL import Image
import numpy as np
import tensorflow as tf
from .models import RDN
from .models import RRDN
global gra
global rrdn
global rdn
# gra = tf.Graph()
rrdn = RRDN(weights='gans')
rdn = RDN(weights='psnr-large')
gra = tf.get_default_graph()
# g = tf.Session(graph=graph)
def get_gan_output(img_path, by_patch_of_size=50):
img = Image.open(img_path).convert("RGB")
lr_img = np.array(img)
with gra.as_default():
sr_img = rrdn.predict(lr_img, by_patch_of_size=50)
return sr_img
def get_2x_output(img_path, by_patch_of_size=50):
img = Image.open(img_path).convert("RGB")
lr_img = np.array(img)
with gra.as_default():
sr_img = rdn.predict(lr_img, by_patch_of_size=50)
return sr_img
```
Calling any of these functions from a different .py file causes the error:

```
File "D:\data\deep_learning_projects\app\projects\main\service\superresolution\get_output.py", line 29, in get_2x_output
sr_img = rdn.predict(lr_img, by_patch_of_size=50)
File "D:\data\deep_learning_projects\venv\lib\site-packages\ISR\models\imagemodel.py", line 41, in predict
batch = self.model.predict(patches[i: i + batch_size])
File "D:\data\deep_learning_projects\venv\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 908, in predict
use_multiprocessing=use_multiprocessing)
File "D:\data\deep_learning_projects\venv\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 723, in predict
callbacks=callbacks)
File "D:\data\deep_learning_projects\venv\lib\site-packages\tensorflow_core\python\keras\engine\training_arrays.py", line 394, in model_iteration
batch_outs = f(ins_batch)
File "D:\data\deep_learning_projects\venv\lib\site-packages\tensorflow_core\python\keras\backend.py", line 3476, in __call__
run_metadata=self.run_metadata)
File "D:\data\deep_learning_projects\venv\lib\site-packages\tensorflow_core\python\client\session.py", line 1472, in __call__
run_metadata_ptr)
tensorflow.python.framework.errors_impl.FailedPreconditionError: Error while reading resource variable F_2_2/kernel from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/F_2_2/kernel)
[[{{node F_2_2/Conv2D/ReadVariableOp}}]]
```
I am running this with TensorFlow 1.15.0. The same code runs perfectly in Colab, even on 1.15.0, but not on my local system. | open | 2020-07-06T01:59:11Z | 2020-07-06T02:01:54Z | https://github.com/idealo/image-super-resolution/issues/130 | [] | kashyappiyush1998 | 0 |
iperov/DeepFaceLab | machine-learning | 5,245 | Training crashes during pretraining |
After extracting the version 01_04_2021, I immediately ran the pretraining with default options.
I set the target iterations to 500,000.
After loading and training for a while it crashes, I restarted it each time reducing batch size and it still crashes.
The final output:
Initializing models: 100%|#############################################| 5/5 [00:03<00:00, 1.64it/s]
Loaded 15843 packed faces from D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\pretrain_CelebA
Sort by yaw: 100%|################################################| 128/128 [00:00<00:00, 295.61it/s]
Sort by yaw: 100%|################################################| 128/128 [00:00<00:00, 289.58it/s]
=============== Model Summary ===============
== ==
== Model name: basic_SAEHD ==
== ==
== Current iteration: 91392 ==
== ==
==------------- Model Options -------------==
== ==
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: liae-ud ==
== ae_dims: 256 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_mouth_prio: False ==
== uniform_yaw: True ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: False ==
== true_face_power: 0.0 ==
== face_style_power: 0.0 ==
== bg_style_power: 0.0 ==
== ct_mode: rct ==
== clipgrad: True ==
== pretrain: True ==
== autobackup_hour: 1 ==
== write_preview_history: False ==
== target_iter: 500000 ==
== random_flip: True ==
== batch_size: 4 ==
== gan_power: 0.0 ==
== gan_patch_size: 16 ==
== gan_dims: 16 ==
== ==
==-------------- Running On ---------------==
== ==
== Device index: 0 ==
== Name: GeForce RTX 2060 ==
== VRAM: 6.00GB ==
== ==
=============================================
Starting. Target iteration: 500000. Press "Enter" to stop training and save model.
[18:32:21][#094491][0288ms][0.4154][0.3854]
[18:47:21][#097587][0281ms][0.4149][0.3843]
[19:02:21][#100685][0300ms][0.4120][0.3807]
[19:17:22][#103782][0305ms][0.4106][0.3801]
Error: 46][#104277][0275ms][0.4674][0.3715]
Traceback (most recent call last):
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\mainscripts\Trainer.py", line 130, in trainerThread
iter, iter_time = model.train_one_iter()
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
losses = self.onTrainOneIter()
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 676, in onTrainOneIter
(warped_dst, target_dst, target_dstm, target_dstm_em) ) = self.generate_next_samples()
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\models\ModelBase.py", line 449, in generate_next_samples
sample.append ( generator.generate_next() )
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\samplelib\SampleGeneratorBase.py", line 21, in generate_next
self.last_generation = next(self)
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\samplelib\SampleGeneratorFace.py", line 112, in __next__
return next(generator)
File "D:\DeepFaceLab_NVIDIA_01_04_2021\_internal\DeepFaceLab\core\joblib\SubprocessGenerator.py", line 73, in __next__
gen_data = self.cs_queue.get()
File "multiprocessing\queues.py", line 94, in get
File "multiprocessing\connection.py", line 216, in recv_bytes
File "multiprocessing\connection.py", line 318, in _recv_bytes
File "multiprocessing\connection.py", line 340, in _get_more_data
MemoryError
System Information:
Edition Windows 10 Home
Version 20H2
Installed on 7/21/2020
OS build 19042.746
Experience Windows Feature Experience Pack 120.2212.551.0
Processor AMD Ryzen 7 4800H with Radeon Graphics 2.90 GHz
Installed RAM 16.0 GB (15.4 GB usable)
nvcc output:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:54:10_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.relgpu_drvr455TC455_06.29190527_0
nvidia-smi output:
Thu Jan 14 19:53:32 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.89 Driver Version: 460.89 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2060 WDDM | 00000000:01:00.0 Off | N/A |
| N/A 44C P8 9W / N/A | 164MiB / 6144MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
| open | 2021-01-14T17:05:22Z | 2023-06-08T21:51:24Z | https://github.com/iperov/DeepFaceLab/issues/5245 | [] | nalhilal | 1 |
deepspeedai/DeepSpeed | pytorch | 6,617 | DeepSpeed inference loads the whole model to CPU numerous times | Hi, when I use DeepSpeed for inference with 4 GPUs, `model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16)` is loaded four times. This takes up a lot of CPU memory. Is there a way to load the model only once? | closed | 2024-10-10T02:23:14Z | 2024-10-12T01:57:19Z | https://github.com/deepspeedai/DeepSpeed/issues/6617 | [] | xixi226108 | 1 |
roboflow/supervision | tensorflow | 1,038 | Opencv channel swap in ImageSinks | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Description
Hello I realize opencv used BGR instead of RGB, and therefore, the following code will cause channel swap:
```python
with sv.ImageSink(target_dir_path=output_dir, overwrite=True) as sink:
annotated_img = box_annotator.annotate(
scene=np.array(Image.open(img_dir).convert("RGB")),
detections=results,
labels=labels,
)
sink.save_image(
image=annotated_img, image_name="test.jpg"
)
```
Unless I use `cv2.cvtColor(annotated_img, cv2.COLOR_RGB2BGR)`. This also happens with video sinks and other places using OpenCV for image writing.
I wonder if it is possible to add this conversion by default or at least mention this in the docs? Thanks a lot!
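For reference, the conversion is just a reversal of the channel axis; a sketch showing the equivalence without needing OpenCV installed (numpy only):

```python
import numpy as np

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255               # pure red in RGB order
bgr = rgb[..., ::-1]            # same result as cv2.cvtColor(rgb, cv2.COLOR_RGB2BGR)
print(bgr[0, 0].tolist())       # -> [0, 0, 255]
```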
### Use case
For savings with `ImageSink()`
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | closed | 2024-03-24T21:48:00Z | 2024-03-26T01:46:31Z | https://github.com/roboflow/supervision/issues/1038 | [
"enhancement"
] | zhmiao | 3 |
tartiflette/tartiflette | graphql | 217 | Please delete this issue | -snip- | closed | 2019-04-21T10:24:10Z | 2019-04-21T10:27:36Z | https://github.com/tartiflette/tartiflette/issues/217 | [] | ba-alexg | 1 |
chmp/ipytest | pytest | 12 | Create a better Example notebook | Currently the example notebook is geared more towards understanding the implementation details than towards how to use ipytest effectively. | closed | 2018-11-02T10:29:24Z | 2020-05-20T20:11:16Z | https://github.com/chmp/ipytest/issues/12 | [] | chmp | 1 |
nolar/kopf | asyncio | 220 | [PR] Fix Windows Compatibility | > <a href="https://github.com/damc-dev"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/3859789?v=4"></a> A pull request by [damc-dev](https://github.com/damc-dev) at _2019-11-06 01:44:29+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/pull/220
> Merged by [nolar](https://github.com/nolar) at _2019-11-12 10:30:08+00:00_
When running kopf on Windows, it fails due to a NotImplementedError thrown by the add_signal_handler method in asyncio.
> Issue : #219
# Description
In the asyncio docs it appears that the [add_signal_handler](https://docs.python.org/3/library/asyncio-eventloop.html#asyncio.loop.add_signal_handler) method is only available on Unix, which is why it throws a NotImplementedError when kopf is run on Windows.
This PR should fix Windows compatibility by catching NotImplementedError thrown by asyncio and continuing.
# Types of Changes
* Bug fix (non-breaking change which fixes an issue)
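The pattern described — attempt registration and tolerate the NotImplementedError — can be sketched as follows (a minimal stand-alone illustration, not the PR's actual diff):

```python
import asyncio
import signal

def install_sigint_noop(loop):
    """Install a SIGINT handler if the platform's event loop supports it."""
    try:
        loop.add_signal_handler(signal.SIGINT, lambda: None)
        return True
    except NotImplementedError:
        # asyncio's add_signal_handler is Unix-only; proceed without it.
        return False

loop = asyncio.new_event_loop()
supported = install_sigint_noop(loop)
print(supported)   # True on Unix, False on Windows
loop.close()
```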
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2019-11-13 18:25:01+00:00_
>
Released as [`kopf==0.23rc1`](https://github.com/nolar/kopf/releases/tag/0.23rc1).
[damc-dev](https://github.com/damc-dev) Thank you for your contribution! 😉 | closed | 2020-08-18T20:01:03Z | 2020-08-23T20:51:11Z | https://github.com/nolar/kopf/issues/220 | [
"archive"
] | kopf-archiver[bot] | 0 |
ray-project/ray | machine-learning | 50,927 | [serve] proxy memory leak | ### What happened + What you expected to happen
Seeing memory growth in the proxy actor over time indicating a memory leak:
<img width="1357" alt="Image" src="https://github.com/user-attachments/assets/e0fd6e84-6e92-4e7c-91e2-4169ffc0bee2" />
### Versions / Dependencies
Ray 2.41
### Reproduction script
n/a
### Issue Severity
None | closed | 2025-02-26T23:08:15Z | 2025-03-10T17:15:53Z | https://github.com/ray-project/ray/issues/50927 | [
"bug",
"P0",
"serve"
] | zcin | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 1,794 | Fix French translation | The French translation, which seems to have been automatically generated, contains mistakes.
I propose to fix some of them.
### Environment
Flask-Appbuilder version: 3.4.3
| open | 2022-01-26T21:23:13Z | 2022-05-02T13:04:28Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1794 | [] | mlaval24 | 3 |
python-visualization/folium | data-visualization | 1,583 | Markercluster, disable cluster at zoomlevel | This issue was raised by another person last year, but it seems there was no answer. I came across the same issue and wonder if this can be done with MarkerCluster or folium:
[https://github.com/python-visualization/folium/issues/1485] | closed | 2022-04-08T06:31:28Z | 2022-04-08T07:08:31Z | https://github.com/python-visualization/folium/issues/1583 | [] | KarenChen9999 | 2 |
vitalik/django-ninja | django | 674 | Renderer should be configurable on a per-router or per-endpoint basis | **Is your feature request related to a problem? Please describe.**
I am trying to export metrics of my application via `prometheus_client`. Prometheus will consume plain-text data from any HTTP endpoint and parse it.
All my endpoints are JSON, except the metrics endpoint, which should be plaintext.
**Describe the solution you'd like**
I'd like to be able to, ideally, set my renderer on a per-endpoint basis:
```python
@router.get("/", renderer=MyRenderer)
def endpoint(request):
...
```
Alternatively, I would be OK with having the router itself take a renderer:
```python
router = Router(renderer=MyRenderer)
```
What I have done now works, but seems like a hack, I bypass the renderer selection when bypassing `create_response` by returning an `HttpResponse`
```python
@router.get("/", response=str)
def endpoint(request):
return HttpResponse("some plain text", status=200, content_type="text/plain")
``` | open | 2023-02-01T11:35:09Z | 2023-02-06T10:50:15Z | https://github.com/vitalik/django-ninja/issues/674 | [] | DavidVentura | 3 |
CanopyTax/asyncpgsa | sqlalchemy | 113 | Incorrect version requirement in 0.27.0 for asyncpg | The new 0.27.0 version [passes record_class](https://github.com/CanopyTax/asyncpgsa/blob/b9e928ce4a27a2c3964c69cfe5a3189290f1fdaa/asyncpgsa/connection.py#L96), thus being incompatible with asyncpg<0.22.0, but the declaration in setup.py [states the opposite](https://github.com/CanopyTax/asyncpgsa/blob/master/setup.py#L13). Probably `<=` was mistyped instead of `>=`?
| closed | 2021-02-12T06:56:33Z | 2021-02-26T00:39:20Z | https://github.com/CanopyTax/asyncpgsa/issues/113 | [] | ods | 1 |
adap/flower | tensorflow | 4,574 | Change Config and UserConfig to allow recursive nesting of dictionaries | ### Describe the type of feature and its functionality.
Currently both Config and User config are defined as arbitrary dictionaries with string keys and Scalar values. This requires the user to have knowledge of what the string keys are in order to access the values they packed into the config. Why not allow the values to also include dictionaries so long as the end values are always Scalars. This can be done with a recursive typing definition. It would allow more organized config and userconfig dictionaries. Furthermore we have a specific use case where we need to pass a highly nested dictionary loaded from a json file from the server to the client. There are too many key-value pairs to make it worth it to unnest the whole thing. Right now we are getting around it by serializing the dictionary so that it can be passed as a scalar Bytes value and then unserializing it on the client. Ideally we could skip the serialization step. However, the UserConfig type used in the Context class does not allow bytes values. We are not making use of the Context objects yet, but are considering it as a good way to store state information. Although one of the state objects is another nested dictionary.
I think this is a good change so long as it does not significantly interfere with other parts of the library. There might even be some other dictionary types in flwr.common.typing that could benefit from a recursive definition. Namely the Properties type and perhaps even the Metrics type. Although you might need to modify the aggregation function to recursively search the dictionary until it finds a scalar value to aggregate, but that would be worth the hassle in my opinion to allow more organized dictionary types
### Describe step by step what files and adjustments are you planning to include.
For python, I believe the change is simple. Not sure if there would be any downstream effects. Here is an example of some changes to flwr.common.typing
```python
Metrics = dict[str, Union[Scalar, 'Metrics']]
Config = dict[str, Union[Scalar, 'Config']]
Properties = dict[str, Union[Scalar, 'Properties']]
UserConfig = dict[str, Union[UserConfigValue, 'UserConfig']]
```
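As a side illustration (a hedged sketch, not part of Flower's codebase), such a recursive config can still be consumed generically wherever flat access is needed, for example by flattening it into dotted keys:

```python
def flatten_config(config: dict, prefix: str = "") -> dict:
    """Flatten a nested config into {"outer.inner": scalar} pairs."""
    flat: dict = {}
    for key, value in config.items():
        full_key = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            # recurse into nested dicts until a scalar leaf is reached
            flat.update(flatten_config(value, full_key))
        else:
            flat[full_key] = value
    return flat
```

So code that expects the old flat shape could still be supported with a small adapter like this.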
### Is there something else you want to add?
_No response_ | open | 2024-11-22T17:09:41Z | 2025-01-31T19:04:31Z | https://github.com/adap/flower/issues/4574 | [
"feature request",
"stale",
"part: misc framework"
] | scarere | 1 |
plotly/dash | data-science | 2,273 | [BUG] dcc.Dropdown use outdated styles for react-virtualized-select (cursor should be pointer) | 
Cursor should be "pointer". The underlying library [has fixed this](https://github.com/bvaughn/react-virtualized-select/commit/b2c5fe394ec3145319bde37158d05b3508fbf84a) and dash uses a version which includes the fix (3.1.3), but the stylesheet is ["hardcoded"](https://github.com/plotly/dash/blob/dev/components/dash-core-components/src/components/css/react-virtualized-select%403.1.0.css) to 3.1.0.
```
> conda list | grep dash
dash 2.6.1 pyhd8ed1ab_0 conda-forge
dash-bootstrap-components 1.2.1 pyhd8ed1ab_0 conda-forge
dash-core-components 2.0.0 pypi_0 pypi
dash-daq 0.5.0 pypi_0 pypi
dash-html-components 2.0.0 pypi_0 pypi
dash-table 5.0.0 pypi_0 pypi
``` | closed | 2022-10-16T13:08:11Z | 2022-11-02T19:33:25Z | https://github.com/plotly/dash/issues/2273 | [] | olejorgenb | 0 |
Urinx/WeixinBot | api | 256 | 原来你们都在这里,哈哈㊙️㊙️㊙️ | 全网黑科技收藏了:[https://wxbug.cn/](https://wxbug.cn/) | open | 2018-06-07T15:09:08Z | 2023-01-06T07:07:54Z | https://github.com/Urinx/WeixinBot/issues/256 | [] | wxbug-cn | 2 |
allure-framework/allure-python | pytest | 218 | Dependencies issues with Allure | Hi,
While trying to install the pytest-allure plugin I hit the following issues:
Could not find a version that matches allure-pytest
Tried: 2.0.0b1, 2.0.0b1, 2.0.0b2, 2.0.0b2, 2.1.0b1, 2.1.0b1, 2.1.1b1, 2.1.1b1, 2.2.1b1, 2.2.1b1, 2.2.2b2, 2.2.2b2, 2.2.3b1, 2.2.3b1, 2.2.4b1, 2.2.4b1, 2.3.1b1, 2.3.1b1, 2.3.2b1, 2.3.2b1
There are incompatible versions in the resolved dependencies.
Need your help :( | closed | 2018-05-08T13:08:40Z | 2018-06-13T10:02:45Z | https://github.com/allure-framework/allure-python/issues/218 | [] | Formartha | 4 |
unionai-oss/pandera | pandas | 1,275 | Data Generation Overlap for Multiple DataFrames | #### Concurrent Data Generation Strategies for Multiple DataFrames
I realize this may be a difficult functionality to implement, but at a high level I am thinking about the best way to use pandera data generation in unit tests for testing functions that involve some merging across multiple datasets.
In one of my use cases I have a function that merges two dataframes on their shared timestamp column. I have pandera schemas defined for both of these dataframes; the timestamp data should be at an hourly frequency.
So I am wondering if you have any thoughts about how I could ensure that when data is generated, they have the same date range so that the function which contains a merge operation can be effectively tested. I guess one workaround would be to define a function that generates a date range with length equal to the length of the dataframe, and then overwrite the timestamp column in both dataframes. But maybe you all have some different ideas or I'm missing how to do this already?
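For illustration, the workaround described above might look roughly like this (a sketch assuming pandas and a shared column literally named "timestamp"; the start date is arbitrary):

```python
import pandas as pd

def attach_shared_timestamps(df_a, df_b, start="2023-01-01"):
    """Overwrite the timestamp column of two generated frames with one
    shared hourly date range, so a later merge on "timestamp" can match."""
    n = max(len(df_a), len(df_b))
    shared = pd.Timestamp(start) + pd.to_timedelta(range(n), unit="h")
    df_a = df_a.assign(timestamp=shared[: len(df_a)])
    df_b = df_b.assign(timestamp=shared[: len(df_b)])
    return df_a, df_b
```

After pandera/hypothesis generates the two frames, a helper like this would make their overlapping rows mergeable inside the unit test.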
Thanks! I can add the enhancement label if it's appropriate.
| open | 2023-07-21T16:23:47Z | 2023-07-21T16:23:47Z | https://github.com/unionai-oss/pandera/issues/1275 | [
"question"
] | aboomer07 | 0 |
coqui-ai/TTS | pytorch | 3,427 | [Bug] when i run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0> | ### Describe the bug
Found in Coqui TTS v0.22.0; not sure why it's happening, but when I run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0>.
### To Reproduce
When I run print(TTS().list_models()) I get <TTS.utils.manage.ModelManager object at 0x7fa9d4c5a7a0>
### Expected behavior
It should be giving me a list of all of the models but it doesn't
### Logs
_No response_
### Environment
```shell
Seems to occur in both versions I've tested, Python 3.11 and 3.10
```
### Additional context
I found a fix tho | closed | 2023-12-14T01:13:58Z | 2024-01-28T22:40:23Z | https://github.com/coqui-ai/TTS/issues/3427 | [
"bug",
"wontfix"
] | DrewThomasson | 2 |
dropbox/sqlalchemy-stubs | sqlalchemy | 207 | error: Unexpected keyword argument "astext_type" for "JSONB" | I'm getting `error: Unexpected keyword argument "astext_type" for "JSONB"` in my alembic migrations on lines like:
`sa.Column("data", postgresql.JSONB(astext_type=sa.Text()), nullable=True),`
Looking at the [__init__](https://github.com/sqlalchemy/sqlalchemy/blob/rel_1_4_0b1/lib/sqlalchemy/dialects/postgresql/json.py#L183) function of the JSON class that JSONB inherits it has a keyword argument astext_type.
Can't find the __init__ function for JSON/JSONB in this repo.
So, am I doing something wrong or is there something not implemented or whatnot?
Thanks! | open | 2021-02-02T14:08:49Z | 2023-05-11T15:40:31Z | https://github.com/dropbox/sqlalchemy-stubs/issues/207 | [] | triptec | 1 |
anselal/antminer-monitor | dash | 15 | Auto Restart when chips show as dashes `-` | I hope an auto-restart function can be added to this software: once the page refreshes, if any running chip shows as dashes, the miner will be auto-restarted.
| closed | 2017-10-30T10:55:21Z | 2017-10-30T11:37:19Z | https://github.com/anselal/antminer-monitor/issues/15 | [] | babycicak | 5 |
python-visualization/folium | data-visualization | 1,571 | Issue using branca colormap with folium.raster_layers.ImageOverlay | **Describe the bug**
Exhausted my troubleshooting - I believe this is in fact a bug, hopefully I'm not wrong!
I think there is a bug attempting to use a branca colormap (of type branca.colormap.LinearColormap) with an ImageOverlay raster layer. Here is the error I receive when running the code below:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_26316/1517857374.py in <module>
13
14 # Add the raster overlay to the map
---> 15 image = folium.raster_layers.ImageOverlay(
16 name="Random Image",
17 image = rand_img,
c:\users\ianmp\git_repos\ce-api-endpoint-tests\venv\lib\site-packages\folium\raster_layers.py in __init__(self, image, bounds, origin, colormap, mercator_project, pixelated, name, overlay, control, show, **kwargs)
258 )
259
--> 260 self.url = image_to_url(image, origin=origin, colormap=colormap)
261
262 def render(self, **kwargs):
c:\users\ianmp\git_repos\ce-api-endpoint-tests\venv\lib\site-packages\folium\utilities.py in image_to_url(image, colormap, origin)
137 url = 'data:image/{};base64,{}'.format(fileformat, b64encoded)
138 elif 'ndarray' in image.__class__.__name__:
--> 139 img = write_png(image, origin=origin, colormap=colormap)
140 b64encoded = base64.b64encode(img).decode('utf-8')
141 url = 'data:image/png;base64,{}'.format(b64encoded)
c:\users\ianmp\git_repos\ce-api-endpoint-tests\venv\lib\site-packages\folium\utilities.py in write_png(data, origin, colormap)
200 if nblayers == 1:
201 arr = np.array(list(map(colormap, arr.ravel())))
--> 202 nblayers = arr.shape[1]
203 if nblayers not in [3, 4]:
204 raise ValueError('colormap must provide colors of r'
IndexError: tuple index out of range
```
**To Reproduce**
Below is a self-contained code snippet which reproduces the issue. I have substituted in a random numpy array (of a particular size) to replace a raster image I was using so that you can easily reproduce this. The error is the same using either the raster or the random array.
```
import branca
import branca.colormap as cm
import folium
from folium import plugins
import numpy as np
# Create an array of random values to simulate a raster image array
rand_img = np.random.rand(333, 443)
# Set the image bounds (WGS84 lon/lats)
bbox = [-128.5831, 31.3986, -112.7140, 43.0319]
# Create a folium map object
m = folium.Map(location=[39.3, -118.4], zoom_start=5, height=500)
# Define a Branca colormap for the colorbar
vmin = 0.2
vmax = 0.8
palette = ['red', 'orange', 'yellow', 'cyan', 'blue', 'darkblue'][::-1]
cmap = cm.LinearColormap(colors=palette,
vmin=vmin,
vmax=vmax,
caption='Image Colormap')
# Add the raster overlay to the map
image = folium.raster_layers.ImageOverlay(
name="Random Image",
image = rand_img,
bounds=[[bbox[1], bbox[0]], [bbox[3], bbox[2]]],
interactive=True,
colormap=cmap,
overlay=True)
image.add_to(m)
# Add a layer control panel to the map
m.add_child(folium.LayerControl())
m.add_child(cmap)
# Add fullscreen button
plugins.Fullscreen().add_to(m)
# Display the map
display(m)
```
**Expected behavior**
I expected the image to appear on the folium map, with a color palette as specified (dark blue values for 0.2 and below, red values for 0.8 and above, and other values linearly spaced in between).
**Environment (please complete the following information):**
- Jupyter Notebook
- Python version 3.9.5
- folium version 0.12.1.post1 (also experienced on 0.12.1)
- branca version 0.4.2
**Additional context**
If you remove the colormap from the `folium.raster_layers.ImageOverlay()` call, the image plots just fine on the map, and the colormap will appear as a colorbar, but the image will just be in greyscale.
**Possible solutions**
No solutions yet, but I think the error lies in the line identified in the stack trace ([l212 in utilities.py](https://github.com/AntonioLopardo/folium/blob/master/utilities.py#L212)).
If nblayers is already known to be 1 (mono/singleband image, i.e. a NxM array), I am not sure why it grabs the array shape (which at this point is single-dimensional) and tries to reset the nblayers value.
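As a possible workaround until this is fixed (a sketch, not folium's documented API): apply the colormap yourself and pass an NxMx4 RGBA array to `ImageOverlay`, so `write_png` never enters the single-band colormap branch. A stand-in colormap function might look like:

```python
import numpy as np

def to_rgba(arr, vmin, vmax):
    """Map a 2-D value array to an (N, M, 4) RGBA array with values in [0, 1].

    A simple blue-to-red linear ramp stands in for a real colormap; in
    practice you could call a branca colormap per pixel instead.
    """
    t = np.clip((arr - vmin) / (vmax - vmin), 0.0, 1.0)
    rgba = np.zeros(arr.shape + (4,))
    rgba[..., 0] = t        # red channel grows with the value
    rgba[..., 2] = 1.0 - t  # blue channel fades with the value
    rgba[..., 3] = 1.0      # fully opaque
    return rgba
```

Then passing `image=to_rgba(rand_img, 0.2, 0.8)` and dropping the `colormap=` argument should render the overlay in color.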
| closed | 2022-02-17T17:01:38Z | 2023-01-24T16:59:00Z | https://github.com/python-visualization/folium/issues/1571 | [
"bug",
"work in progress"
] | ipritchard | 12 |
scikit-learn/scikit-learn | data-science | 31,019 | Allow column names to pass through when fitting `narwhals` dataframes | ### Describe the workflow you want to enable
Currently when fitting with a `narwhals` DataFrame, the feature names do not pass through because it does not implement a `__dataframe__` method.
Example:
```python
import narwhals as nw
import pandas as pd
import polars as pl
from sklearn.preprocessing import StandardScaler
df_pd = pd.DataFrame({"a": [0, 1, 2], "b": [3, 4, 5]})
df_pl = pl.DataFrame(df_pd)
df_nw = nw.from_native(df_pd)
s_pd, s_pl, s_nw = StandardScaler(), StandardScaler(), StandardScaler()
s_pd.fit(df_pd)
s_pl.fit(df_pl)
s_nw.fit(df_nw)
print(s_pd.feature_names_in_)
print(s_pl.feature_names_in_)
print(s_nw.feature_names_in_)
```
**Expected output**
```
['a' 'b']
['a' 'b']
['a' 'b']
```
**Actual output**
```
['a' 'b']
['a' 'b']
AttributeError: 'StandardScaler' object has no attribute 'feature_names_in_'
```
All other attributes on `s_nw` are what I'd expect.
### Describe your proposed solution
This should be easy enough to implement by adding another check within `sklearn.utils.validation._get_feature_names`:
1. Add `_is_narwhals_df` method, borrowing logic from [`_is_pandas_df`](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/validation.py#L2343)
```python
def _is_narwhals_df(X):
"""Return True if the X is a narwhals dataframe."""
try:
nw = sys.modules["narwhals"]
except KeyError:
return False
return isinstance(X, nw.DataFrame)
```
2. Add an additional check to [`_get_feature_names`](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/utils/validation.py#L2393-L2408):
```python
elif _is_narwhals_df(X):
feature_names = np.asarray(X.columns, dtype=object)
```
### Describe alternatives you've considered, if relevant
_No response_
### Additional context
https://github.com/narwhals-dev/narwhals/issues/355#issuecomment-2734066008 | closed | 2025-03-18T19:23:08Z | 2025-03-21T17:03:33Z | https://github.com/scikit-learn/scikit-learn/issues/31019 | [
"New Feature",
"Needs Triage"
] | ryansheabla | 4 |
3b1b/manim | python | 1,249 | compatible with BasicTeX? | Question: I was just looking at this [requirement](https://github.com/3b1b/manim/blame/master/README.md#L21) and wondering
Is this compatible with [BasicTeX](https://www.tug.org/mactex/morepackages.html)? | open | 2020-10-04T03:57:39Z | 2020-12-28T03:20:31Z | https://github.com/3b1b/manim/issues/1249 | [] | omarcostahamido | 2 |
sinaptik-ai/pandas-ai | data-science | 593 | Move prompt templates to separate files | ### 🚀 The feature
Currently a prompt's content is stored in the class body, mixed with other code:
[`GeneratePythonCodePrompt`](https://github.com/gventuri/pandas-ai/blob/a8a15052eff658667d4eeccbe2431bc31911919d/pandasai/prompts/generate_python_code.py#L42)
[`CorrectErrorPrompt`](https://github.com/gventuri/pandas-ai/blob/a8a15052eff658667d4eeccbe2431bc31911919d/pandasai/prompts/correct_error_prompt.py#L25)
My suggestion is to move those template strings to separate files stored in a specific directory. We could implement one more class extending the base `Prompt`, containing a method that would read content from a file. Then, make `GeneratePythonCodePrompt` and `CorrectErrorPrompt` inherit the new class, and they would only contain an attribute representing a path to the template.
### Motivation, pitch
This would enhance readability of the sources and segregate business logic from data.
### Alternatives
_No response_
### Additional context
_No response_ | closed | 2023-09-25T20:08:53Z | 2023-09-28T14:36:59Z | https://github.com/sinaptik-ai/pandas-ai/issues/593 | [] | nautics889 | 1 |
RobertCraigie/prisma-client-py | asyncio | 715 | Adding Example for FastApi OpenAPI documentation | ## Problem
Currently for my pydantic models, I use the example field to document FastAPI:
` = Field(..., example="This is my example for SwaggerUI")`
## Suggested solution
Would it be possible to add these examples through comments in the schema.prisma?
## Alternatives
Open to any solution.
## Additional context
I want to completely remove my custom pydantic object definitions and rely on your generated models and partials (which I find amazing!)
| open | 2023-03-03T22:08:31Z | 2023-03-06T14:03:23Z | https://github.com/RobertCraigie/prisma-client-py/issues/715 | [
"kind/feature",
"level/intermediate",
"priority/medium",
"topic: dx",
"topic: generation"
] | julien-roy-replicant | 4 |
zihangdai/xlnet | tensorflow | 220 | how to find the end token in _sample_mask? | https://github.com/zihangdai/xlnet/blob/5cd50bc451436e188a8e7fea15358d5a8c916b72/data_utils.py#L369
why isn't it `_is_start_piece(sp.IdToPiece(seg[end].item()))`? Shouldn't it check the end of the n-gram?
| open | 2019-08-21T01:48:12Z | 2019-08-21T01:50:57Z | https://github.com/zihangdai/xlnet/issues/220 | [] | YanglanWang | 0 |
psf/black | python | 4,293 | Incorrectly strips tuple brackets from single entry tuple | <!--
Please make sure that the bug is not already fixed either in newer versions or the
current development version. To confirm this, you have three options:
1. Update Black's version if a newer release exists: `pip install -U black`
2. Use the online formatter at <https://black.vercel.app/?version=main>, which will use
the latest main branch.
3. Or run _Black_ on your machine:
- create a new virtualenv (make sure it's the same Python version);
- clone this repository;
- run `pip install -e .[d]`;
- run `pip install -r test_requirements.txt`
- make sure it's sane by running `python -m pytest`; and
- run `black` like you did last time.
-->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
Black strips () from a tuple containing a single entry.
**To Reproduce**
<!--
Minimal steps to reproduce the behavior with source code and Black's configuration.
-->
For example, take this code:
```python
class Transaction(Model):
__table_args__ = (UniqueConstraint("entry_a_id", "entry_b_id"))
```
Black produces:
```python
class Transaction(Model):
__table_args__ = UniqueConstraint("entry_a_id", "entry_b_id")
```
This doesn't work with SQLAlchemy's expectation, causing errors:
> sqlalchemy.exc.ArgumentError: __table_args__ value must be a tuple, dict, or None
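One observation worth adding (this is plain Python semantics, not a claim about Black's intent): parentheses alone don't create a tuple; a single-element tuple needs a trailing comma, which Black does preserve:

```python
not_a_tuple = ("a")   # just a parenthesized string
real_tuple = ("a",)   # the trailing comma makes it a tuple
```

So writing `__table_args__ = (UniqueConstraint("entry_a_id", "entry_b_id"),)` would survive formatting as a tuple.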
**Expected behavior**
The () are not stripped.
**Environment**
<!-- Please complete the following information: -->
- Black's version: tested using online main & local 24.3.0
- OS and Python version: your website & Python (CPython) 3.11.5
**Additional context**
This is the specific thing that it breaks for me https://docs.sqlalchemy.org/en/20/orm/declarative_tables.html#orm-declarative-table-configuration
| closed | 2024-03-29T09:56:13Z | 2024-03-29T15:50:19Z | https://github.com/psf/black/issues/4293 | [
"T: bug"
] | juur | 0 |
aleju/imgaug | machine-learning | 291 | [Documentation Question] Upsizing segmentation masks | I'm curious how a uint-typed segmentation mask can be resized larger. The only method that makes sense to me would be KNN, but the docs say, "Note that segmentation maps are handled internally as heatmaps (one per class) and as such can be resized using cubic interpolation."
But wouldn't cubic interpolation fill intermediate pixels with float values instead of only using the integer values present in the original mask? | closed | 2019-03-27T22:51:47Z | 2019-03-29T19:52:09Z | https://github.com/aleju/imgaug/issues/291 | [] | austinmw | 2 |
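For what it's worth: the interpolation does produce fractional values, but each class gets its own float heatmap, and an argmax across the class axis afterwards snaps every pixel back to one of the original integer labels. A minimal 1-D numpy sketch (linear interpolation standing in for cubic):

```python
import numpy as np

def resize_segmap_1d(labels, new_len):
    """Upscale an integer label row via per-class heatmaps plus argmax."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    # one float heatmap per class (a one-hot encoding of the mask)
    heatmaps = (labels[None, :] == classes[:, None]).astype(float)
    old_x = np.linspace(0.0, 1.0, labels.size)
    new_x = np.linspace(0.0, 1.0, new_len)
    # interpolation fills intermediate positions with fractional values...
    resized = np.stack([np.interp(new_x, old_x, h) for h in heatmaps])
    # ...but taking the best-scoring class restores integer labels
    return classes[np.argmax(resized, axis=0)]
```

This is only a sketch of the heatmap-plus-argmax idea, not imgaug's actual implementation.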
streamlit/streamlit | data-visualization | 10,218 | Capturing events from interactive plotly line drawing | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [x] I added a descriptive title and summary to this issue.
### Summary
A way to draw lines in plots and get the coordinates of those lines would be very useful. In plotly, line drawing is implemented and should fire a specific event, but streamlit has no way of accessing that event.
### Why?
I was working on a dashboard with a plot of 2D probability densities of my data. A great way to understand this specific data is to "select" features that stand out in this probability density and find instances in the underlying data which is close to that feature.
In my case, the features are lines of varying shapes and can always be thought of as the plot of a function y=f(x).
As such, the easiest way to find instances that are close to the feature is to draw a simple line over the feature and calculate the distance to that line for all instances in my dataset.
The trick is then to get the coordinates of a line that I draw on a heatmap plot.
The closest I can get currently is to draw a lasso around a thin slice, but that gives more complexity to the "search", since my target line now has a thickness.
I'm sure that there's plenty more cases where it makes sense to get the coordinates of a line drawn on a plot or an image.
### How?
Interactive line drawing in plotly is described [here](https://plotly.com/python/shapes/#drawing-shapes-with-a-mouse-on-cartesian-plots).
For my use-case, the `drawopenpath` option is the most suitable.
Reading in plotly's documentation, this will send a `relayout` event, but that doesn't seem to trigger anything on the streamlit side.
Short example of what I tried:
```
import streamlit as st
import plotly.graph_objects as go
fig = go.Figure()
fig.add_scatter(x=[0, 1, 2, 3], y=[1,2, 4, 6])
events = st.plotly_chart(
fig,
on_select="rerun",
config={
'modeBarButtonsToAdd': [
'drawopenpath',
]
}
)
events
```
### Additional Context
_No response_ | open | 2025-01-21T14:14:10Z | 2025-01-21T15:36:30Z | https://github.com/streamlit/streamlit/issues/10218 | [
"type:enhancement",
"feature:st.plotly_chart",
"area:events"
] | CarlAndersson | 1 |
ymcui/Chinese-BERT-wwm | tensorflow | 151 | RoBERTa needs vocab.json in Hugging Face Transformers | In Hugging Face, the `RobertaTokenizer` needs a vocab file in JSON format; could you tell me how to fix this, or otherwise release a JSON-format vocab? PS: transformers version is 3.2.0
Best regards! | closed | 2020-09-27T03:03:18Z | 2020-09-27T03:05:07Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/151 | [] | MrRace | 1 |
mitmproxy/pdoc | api | 430 | pdoc does not seem to play nicely with GitPython when in a package | #### Problem Description
pdoc is unable to import a package that uses GitPython.
We've noticed this issue in a large application, and I have created a minimal example that has been able to reproduce the issue here: https://github.com/cquick01/pdoc-test
Our larger application runs fine when running normally, and that other `pdoc3` tool has no issues. So it is definitely something that `pdoc` is doing differently than the rest. When commenting out references to GitPython in our larger application, `pdoc` then runs successfully across our entire package.
#### Steps to reproduce the behavior:
1. Follow README in https://github.com/cquick01/pdoc-test
2. See errors
```
Warn: Error loading test_package:
Traceback (most recent call last):
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/__init__.py", line 87, in <module>
refresh()
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/__init__.py", line 78, in refresh
if not FetchInfo.refresh():
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/remote.py", line 304, in refresh
if Git().version_info[:2] >= (2, 10):
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/cmd.py", line 679, in version_info
return self._version_info
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/cmd.py", line 638, in __getattr__
return LazyMixin.__getattr__(self, name)
File "/home/cquick/test/venv/lib/python3.10/site-packages/gitdb/util.py", line 253, in __getattr__
self._set_cache_(attr)
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/cmd.py", line 659, in _set_cache_
version_numbers = process_version.split(' ')[2]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/cquick/test/venv/lib/python3.10/site-packages/pdoc/extract.py", line 207, in load_module
return importlib.import_module(module)
File "/home/cquick/.pyenv/versions/3.10.6/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/cquick/test/test_package/__init__.py", line 1, in <module>
import git
File "/home/cquick/test/venv/lib/python3.10/site-packages/git/__init__.py", line 89, in <module>
raise ImportError('Failed to initialize: {0}'.format(exc)) from exc
ImportError: Failed to initialize: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/cquick/test/venv/lib/python3.10/site-packages/pdoc/extract.py", line 250, in walk_packages2
module = load_module(mod.name)
File "/home/cquick/.pyenv/versions/3.10.6/lib/python3.10/contextlib.py", line 79, in inner
return func(*args, **kwds)
File "/home/cquick/test/venv/lib/python3.10/site-packages/pdoc/extract.py", line 209, in load_module
raise RuntimeError(f"Error importing {module}") from e
RuntimeError: Error importing test_package
(/home/cquick/test/venv/lib/python3.10/site-packages/pdoc/extract.py:252)
pdoc server ready at http://localhost:8080
```
#### System Information
Paste the output of "pdoc --version" here.
```
pdoc --version
pdoc: 12.0.2
Python: 3.10.6
Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
```
Will try to provide more info if needed. | closed | 2022-08-11T20:38:59Z | 2022-08-12T14:26:32Z | https://github.com/mitmproxy/pdoc/issues/430 | [
"bug"
] | cquick01 | 2 |
open-mmlab/mmdetection | pytorch | 11,848 | ReadTheDocs needs updating | Thanks for your error report and we appreciate it a lot.
**Checklist**
1. I have searched related issues but cannot get the expected help.
2. I have read the [FAQ documentation](https://mmdetection.readthedocs.io/en/latest/faq.html) but cannot get the expected help.
3. The bug has not been fixed in the latest version.
**Describe the bug**
Some readthedocs sections are not populating in v3. Have to use v2.28 or earlier.
Additionally, https://mmdetection.readthedocs.io/en/latest/faq.html in the issue template is not found.
**Reproduction**

1. What command or script did you run?
```
https://mmdetection.readthedocs.io/en/latest/api.html
``` | open | 2024-07-11T21:32:49Z | 2024-07-11T21:33:05Z | https://github.com/open-mmlab/mmdetection/issues/11848 | [] | mbergman257 | 0 |
fastapi/sqlmodel | pydantic | 361 | I have 2 tables with a one-to-many relationship, User and Car: one user can buy many cars. When I delete the parent (User), the children (cars) related to that parent must be deleted, but when I delete a child, the parent should not be deleted. | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class UserInSchema(SQLModel):
name: str
phone: str
class User(UserInSchema, table=True):
__tablename__ = "user_data"
id: Optional[int] = Field(default=None, primary_key=True)
cars: List["Car"] = Relationship(back_populates="user")
class CarInSchema(SQLModel):
name: str
color: str
class Car(CarInSchema, table=True):
__tablename__ = "car_data"
id: Optional[int] = Field(default=None, primary_key=True)
# user_id = int = Field(sa_column=Column(Integer, ForeignKey("hero.id", ondelete="CASCADE")))
user_id: int = Field(default=None, foreign_key="user_data.id")
user: Optional[User] = Relationship(back_populates="cars", sa_relationship_kwargs={"cascade":"all , delete"})
```
### Description
There is a one-to-many relationship between User and Car, meaning that one user can buy many cars. When I delete the parent, which is User, the children related to that parent are deleted, which works correctly; but when I delete a child, all children and the parent related to that child are also deleted, and I don't want that.
### Operating System
Linux
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.9.13
### Additional Context
_No response_ | closed | 2022-06-11T04:52:17Z | 2022-06-15T15:11:22Z | https://github.com/fastapi/sqlmodel/issues/361 | [
"question"
] | israr96418 | 2 |
kynan/nbstripout | jupyter | 52 | Only output should be kept for keep_output and init_cell | Currently, `nbstripout` ignores `keep_output` and `init_cell` cells entirely. However, this means that lots of stuff beyond the output stays, such as `execution_count`. I think keeping this does not meet the design goals, and it can cause Git conflicts.
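A rough sketch of the requested behaviour, operating on a code cell as plain `.ipynb` JSON (this is my illustration of the idea, not nbstripout's actual implementation):

```python
import copy


def strip_cell(cell, keep_output=False):
    """Strip churn-prone fields from a notebook cell dict (as loaded from .ipynb JSON).

    With keep_output=True the outputs are preserved, but execution_count and
    similar transient fields are still cleared -- the behaviour requested here.
    """
    cell = copy.deepcopy(cell)
    if cell.get("cell_type") == "code":
        cell["execution_count"] = None
        if not keep_output:
            cell["outputs"] = []
        else:
            # Keep the outputs, but drop their execution counts too.
            for out in cell.get("outputs", []):
                out.pop("execution_count", None)
    cell.get("metadata", {}).pop("collapsed", None)
    return cell
```

Applied to `keep_output`/`init_cell` cells with `keep_output=True`, only the outputs survive while the fields that cause Git conflicts are cleared.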
Feature request: Treat these two types of cells normally, except keep the output. | closed | 2017-04-07T17:49:22Z | 2017-06-09T21:49:48Z | https://github.com/kynan/nbstripout/issues/52 | [
"type:enhancement",
"resolution:fixed"
] | reidpr | 3 |
InstaPy/InstaPy | automation | 5964 | Error while running "Follow the likers of photos of users" |
## Expected Behavior
It should follow the specified amount of likers of a post.
## Current Behavior
It says "Like button not found, might be a video" and skips the picture.
## Possible Solution (optional)
## InstaPy configuration
session.follow_likers(['alkaramstudio', 'khaadi'], photos_grab_amount=2, follow_likers_per_photo=15, randomize=True, sleep_delay=10, interact=False)
| closed | 2020-12-17T01:08:16Z | 2021-01-07T16:00:09Z | https://github.com/InstaPy/InstaPy/issues/5964 | [] | virtualKhubaib | 2 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 15,777 | [Bug]: Torch is not able to use GPU | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [X] The issue has been reported before but has not been fixed yet
### What happened?
The WebUI stopped working after I installed a model and its related extension.
Model: Riffusion (https://theally.notion.site/Quick-n-Dirty-Riffusion-txt2audio-Tutorial-18e57df9ef214c3280efc5998bbf774d)
Extension: https://github.com/enlyth/sd-webui-riffusion
My cmd shows 'RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check'.
After I added '--skip-torch-cuda-test' to the variable, generating an image took forever, so I guess the WebUI is not using the GPU.
I updated to the latest CUDA and cuDNN after this issue occurred, but it still doesn't work.
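As an aside (my assumption about the usual cause, not something the logs below state outright): "Torch not compiled with CUDA enabled" typically means a CPU-only torch wheel ended up in the venv. CUDA wheels carry a `+cuXXX` local version tag and CPU wheels a `+cpu` tag, so the build can be told apart from the version string alone:

```python
def is_cuda_build(torch_version: str) -> bool:
    """Return True for CUDA wheels like '2.1.2+cu121', False for '2.1.2+cpu' or a bare '2.1.2'."""
    _, _, local_tag = torch_version.partition("+")
    return local_tag.startswith("cu")


# In the webui venv this could be checked with:
#   python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```

A `+cpu` build would explain both the original RuntimeError and why `--skip-torch-cuda-test` makes generation crawl on the CPU.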
### Steps to reproduce the problem
1. Install both model and related extension above
2. restart WebUI
3. See error message
4. Depression
### What should have happened?
The WebUI should work as it always has.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
I couldn't open the WebUI page without adding the '--skip-torch-cuda-test' variable.
After adding it, I got this.
{
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.8",
"Version": "v1.4.0-RC-2397-g1c0a0c4c",
"Commit": "1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0",
"Script path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui",
"Data path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui",
"Extensions dir": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions",
"Checksum": "67e854c290bf0df6bf2c6814a8e7781b8a57940a6cb7891b446b8b84b880dab1",
"Commandline": [
"launch.py",
"--skip-torch-cuda-test",
"--xformers",
"--deepdanbooru",
"--autolaunch"
],
"Torch env info": "'NoneType' object has no attribute 'splitlines'",
"Exceptions": [
{
"exception": "'NoneType' object has no attribute 'lowvram'",
"traceback": [
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\call_queue.py, line 57, f",
"res = list(func(*args, **kwargs))"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\call_queue.py, line 36, f",
"res = func(*args, **kwargs)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\txt2img.py, line 109, txt2img",
"processed = processing.process_images(p)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\processing.py, line 832, process_images",
"sd_models.reload_model_weights()"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\sd_models.py, line 860, reload_model_weights",
"sd_model = reuse_model_from_already_loaded(sd_model, checkpoint_info, timer)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\sd_models.py, line 793, reuse_model_from_already_loaded",
"send_model_to_cpu(sd_model)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\sd_models.py, line 662, send_model_to_cpu",
"if m.lowvram:"
]
]
},
{
"exception": "Torch not compiled with CUDA enabled",
"traceback": [
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\sd_models.py, line 620, get_sd_model",
"load_model()"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\sd_models.py, line 770, load_model",
"with devices.autocast(), torch.no_grad():"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\devices.py, line 218, autocast",
"if has_xpu() or has_mps() or cuda_no_autocast():"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\devices.py, line 28, cuda_no_autocast",
"device_id = get_cuda_device_id()"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\devices.py, line 40, get_cuda_device_id",
") or torch.cuda.current_device()"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\cuda\\__init__.py, line 674, current_device",
"_lazy_init()"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\venv\\lib\\site-packages\\torch\\cuda\\__init__.py, line 239, _lazy_init",
"raise AssertionError(\"Torch not compiled with CUDA enabled\")"
]
]
},
{
"exception": "No module named 'fonts'",
"traceback": [
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\scripts.py, line 508, load_scripts",
"script_module = script_loading.load_module(scriptfile.path)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\script_loading.py, line 13, load_module",
"module_spec.loader.exec_module(module)"
],
[
"<frozen importlib._bootstrap_external>, line 883, exec_module",
""
],
[
"<frozen importlib._bootstrap>, line 241, _call_with_frames_removed",
""
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam-master\\scripts\\daam_script.py, line 21, <module>",
"from scripts.daam import trace, utils"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam-master\\scripts\\daam\\__init__.py, line 3, <module>",
"from .utils import *"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam-master\\scripts\\daam\\utils.py, line 9, <module>",
"from fonts.ttf import Roboto"
]
]
},
{
"exception": "No module named 'fonts'",
"traceback": [
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\scripts.py, line 508, load_scripts",
"script_module = script_loading.load_module(scriptfile.path)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\script_loading.py, line 13, load_module",
"module_spec.loader.exec_module(module)"
],
[
"<frozen importlib._bootstrap_external>, line 883, exec_module",
""
],
[
"<frozen importlib._bootstrap>, line 241, _call_with_frames_removed",
""
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam\\scripts\\daam_script.py, line 21, <module>",
"from scripts.daam import trace, utils"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam\\scripts\\daam\\__init__.py, line 3, <module>",
"from .utils import *"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam\\scripts\\daam\\utils.py, line 9, <module>",
"from fonts.ttf import Roboto"
]
]
},
{
"exception": "cannot import name 'get_correct_sampler' from 'modules.processing' (A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\processing.py)",
"traceback": [
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\scripts.py, line 508, load_scripts",
"script_module = script_loading.load_module(scriptfile.path)"
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\modules\\script_loading.py, line 13, load_module",
"module_spec.loader.exec_module(module)"
],
[
"<frozen importlib._bootstrap_external>, line 883, exec_module",
""
],
[
"<frozen importlib._bootstrap>, line 241, _call_with_frames_removed",
""
],
[
"A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\AI-WEBUI-scripts-Random\\scripts\\Random grid.py, line 27, <module>",
"from modules.processing import process_images, Processed, get_correct_sampler, StableDiffusionProcessingTxt2Img"
]
]
}
],
"CPU": {
"model": "Intel64 Family 6 Model 151 Stepping 2, GenuineIntel",
"count logical": 20,
"count physical": 12
},
"RAM": {
"total": "48GB",
"used": "13GB",
"free": "35GB"
},
"Extensions": [
{
"name": "AI-WEBUI-scripts-Random",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\AI-WEBUI-scripts-Random",
"version": "b9faf99c",
"branch": "main",
"remote": "https://github.com/mkco5162/AI-WEBUI-scripts-Random"
},
{
"name": "openpose-editor",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\openpose-editor",
"version": "c9357715",
"branch": "master",
"remote": "https://github.com/fkunn1326/openpose-editor.git"
},
{
"name": "sd-dynamic-prompts",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-dynamic-prompts",
"version": "1567e787",
"branch": "main",
"remote": "https://github.com/adieyal/sd-dynamic-prompts"
},
{
"name": "sd-webui-3d-open-pose-editor-main",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-3d-open-pose-editor-main",
"version": "",
"branch": null,
"remote": null
},
{
"name": "sd-webui-additional-networks",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-additional-networks",
"version": "e9f3d622",
"branch": "main",
"remote": "https://github.com/kohya-ss/sd-webui-additional-networks.git"
},
{
"name": "sd-webui-controlnet",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-controlnet",
"version": "3b4eedd9",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet"
},
{
"name": "sd-webui-regional-prompter",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-regional-prompter",
"version": "50493ec0",
"branch": "main",
"remote": "https://github.com/hako-mikan/sd-webui-regional-prompter.git"
},
{
"name": "sd_dreambooth_extension",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd_dreambooth_extension",
"version": "45a12fe5",
"branch": "main",
"remote": "https://github.com/d8ahazard/sd_dreambooth_extension.git"
},
{
"name": "stable-diffusion-webui-daam",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam",
"version": "3a4fe7a2",
"branch": "master",
"remote": "https://github.com/Bing-su/stable-diffusion-webui-daam"
},
{
"name": "stable-diffusion-webui-daam-master",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-daam-master",
"version": "",
"branch": null,
"remote": null
},
{
"name": "stable-diffusion-webui-images-browser",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-images-browser",
"version": "a42c7a30",
"branch": "main",
"remote": "https://github.com/yfszzx/stable-diffusion-webui-images-browser"
},
{
"name": "stable-diffusion-webui-localization-ko_KR",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-localization-ko_KR",
"version": "78431bea",
"branch": "main",
"remote": "https://github.com/36DB/stable-diffusion-webui-localization-ko_KR.git"
},
{
"name": "stable-diffusion-webui-wd14-tagger",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\stable-diffusion-webui-wd14-tagger",
"version": "e72d984b",
"branch": "master",
"remote": "https://github.com/picobyte/stable-diffusion-webui-wd14-tagger"
},
{
"name": "tag-autocomplete",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\tag-autocomplete",
"version": "3eef536b",
"branch": "main",
"remote": "https://github.com/DominikDoom/a1111-sd-webui-tagcomplete"
}
],
"Inactive extensions": [
{
"name": "adetailer",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\adetailer",
"version": "910bf3b9",
"branch": "main",
"remote": "https://github.com/Bing-su/adetailer.git"
},
{
"name": "gif2gif",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\gif2gif",
"version": "5121851e",
"branch": "main",
"remote": "https://github.com/LonicaMewinsky/gif2gif.git"
},
{
"name": "sd-webui-animatediff",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-animatediff",
"version": "f20f57f6",
"branch": "master",
"remote": "https://github.com/continue-revolution/sd-webui-animatediff.git"
},
{
"name": "sd-webui-openpose-editor",
"path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-openpose-editor",
"version": "cebe13e0",
"branch": "main",
"remote": "https://github.com/huchenlei/sd-webui-openpose-editor"
}
],
"Environment": {
"COMMANDLINE_ARGS": " --skip-torch-cuda-test --xformers --deepdanbooru --autolaunch",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"n_rows": -1,
"enable_pnginfo": true,
"save_txt": false,
"save_images_before_face_restoration": false,
"jpeg_quality": 80,
"export_for_4chan": true,
"use_original_name_batch": false,
"save_selected_only": true,
"do_not_add_watermark": false,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"save_to_dirs": false,
"grid_save_to_dirs": false,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN x4+",
"R-ESRGAN x4+ Anime6B",
"R-ESRGAN 4x+ Anime6B"
],
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"ldsr_steps": 100,
"upscaler_for_img2img": "R-ESRGAN 4x+ Anime6B",
"face_restoration_model": null,
"code_former_weight": 0.5,
"face_restoration_unload": false,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"sd_model_checkpoint": "0.7(flosbbwmix_v20) + 0.3(furrytoonmix_v10).safetensors [1ac5ad6e51]",
"sd_hypernetwork": "None",
"img2img_color_correction": false,
"save_images_before_color_correction": false,
"img2img_fix_steps": false,
"enable_quantization": false,
"enable_emphasis": true,
"use_old_emphasis_implementation": false,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"filter_nsfw": false,
"CLIP_stop_at_last_layers": 2,
"random_artist_categories": [],
"interrogate_keep_models_in_memory": false,
"interrogate_use_builtin_artists": true,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500.0,
"interrogate_deepbooru_score_threshold": 0.5,
"show_progressbar": true,
"show_progress_every_n_steps": 0,
"return_grid": true,
"do_not_show_images": false,
"add_model_hash_to_info": true,
"add_model_name_to_info": false,
"font": "",
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"show_progress_in_title": true,
"hide_samplers": [
"LMS",
"Heun",
"DPM2 a",
"DPM2 Karras",
"LMS Karras",
"DPM fast",
"DPM adaptive",
"PLMS",
"DPM2 a Karras",
"DPM2"
],
"eta_ddim": 0,
"eta_ancestral": 0.2,
"ddim_discretize": "uniform",
"s_churn": 0,
"s_tmin": 0,
"s_noise": 0.2,
"eta_noise_seed_delta": 0,
"unload_models_when_training": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 100,
"sd_hypernetwork_strength": 1,
"quicksettings": "sd_model_checkpoint",
"interrogate_return_ranks": false,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": false,
"deepbooru_escape": true,
"grid_prevent_empty_spots": false,
"use_scale_latent_for_hires_fix": false,
"training_write_csv_every": 500.0,
"sd_checkpoint_cache": 0,
"localization": "ko_KR",
"show_progress_grid": true,
"disable_weights_auto_swap": false,
"quicksettings_list": [
"sd_model_checkpoint"
],
"sd_checkpoint_hash": "1ac5ad6e51854ac5bde68340bed3530912254b0e34c112f442ad50e63f66af6e",
"disabled_extensions": [
"adetailer",
"gif2gif",
"sd-webui-animatediff",
"sd-webui-openpose-editor"
],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"save_images_add_number": true,
"grid_zip_filename_pattern": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_highres_fix": false,
"save_mask": false,
"save_mask_composite": false,
"webp_lossless": false,
"img_downscale_threshold": 4.0,
"target_side_length": 4000.0,
"img_max_size_mp": 200.0,
"use_upscaler_name_as_suffix": false,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"outdir_init_images": "outputs/init-images",
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"face_restoration": false,
"auto_launch_browser": "Local",
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"pin_memory": true,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120.0,
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_unet": "Automatic",
"upcast_attn": false,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sdxl_crop_top": 0.0,
"sdxl_crop_left": 0.0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 0,
"sd_vae": "kl-f8-anime2.ckpt",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1,
"initial_noise_multiplier": 1,
"img2img_extra_noise": 0,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0,
"token_merging_ratio": 0,
"token_merging_ratio_img2img": 0,
"token_merging_ratio_hr": 0,
"pad_cond_uncond": true,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"lora_functional": false,
"interrogate_clip_skip_categories": [],
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1,
"extra_networks_card_width": 0.0,
"extra_networks_card_height": 0.0,
"extra_networks_card_text_scale": 1,
"extra_networks_card_show_desc": true,
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"gradio_theme": "Default",
"gradio_themes_cache": true,
"gallery_height": "",
"send_seed": true,
"send_size": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250.0,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~()",
"keyedit_move": true,
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"disable_token_counters": false,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"infotext_styles": "Apply if any",
"live_previews_enable": true,
"live_previews_image_format": "png",
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 1000.0,
"live_preview_fast_interrupt": false,
"s_tmax": 0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
],
"images_history_preload": false,
"images_record_paths": true,
"images_delete_message": true,
"images_history_page_columns": 6.0,
"images_history_page_rows": 6.0,
"images_history_pages_perload": 20.0,
"tagger_out_filename_fmt": "[name].[output_extension]",
"tagger_count_threshold": 100,
"tagger_batch_recursive": true,
"tagger_auto_serde_json": true,
"tagger_store_images": false,
"tagger_weighted_tags_files": false,
"tagger_verbose": false,
"tagger_repl_us": true,
"tagger_repl_us_excl": "0_0, (o)_(o), +_+, +_-, ._., <o>_<o>, <|>_<|>, =_=, >_<, 3_3, 6_9, >_o, @_@, ^_^, o_o, u_u, x_x, |_|, ||_||",
"tagger_escape": false,
"tagger_batch_size": 1024.0,
"tagger_hf_cache_dir": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\models\\interrogators",
"tac_tagFile": "danbooru.csv",
"tac_active": true,
"tac_activeIn.txt2img": true,
"tac_activeIn.img2img": true,
"tac_activeIn.negativePrompts": true,
"tac_activeIn.thirdParty": true,
"tac_activeIn.modelList": "",
"tac_activeIn.modelListMode": "Blacklist",
"tac_slidingPopup": true,
"tac_maxResults": 5.0,
"tac_showAllResults": false,
"tac_resultStepLength": 100.0,
"tac_delayTime": 100.0,
"tac_useWildcards": true,
"tac_sortWildcardResults": true,
"tac_useEmbeddings": true,
"tac_includeEmbeddingsInNormalResults": false,
"tac_useHypernetworks": true,
"tac_useLoras": true,
"tac_useLycos": true,
"tac_showWikiLinks": false,
"tac_showExtraNetworkPreviews": true,
"tac_replaceUnderscores": true,
"tac_escapeParentheses": true,
"tac_appendComma": true,
"tac_appendSpace": true,
"tac_alwaysSpaceAtEnd": true,
"tac_modelKeywordCompletion": "Never",
"tac_modelKeywordLocation": "Start of prompt",
"tac_wildcardCompletionMode": "To next folder level",
"tac_alias.searchByAlias": true,
"tac_alias.onlyShowAlias": false,
"tac_translation.translationFile": "None",
"tac_translation.oldFormat": false,
"tac_translation.searchByTranslation": true,
"tac_translation.liveTranslation": false,
"tac_extra.extraFile": "extra-quality-tags.csv",
"tac_extra.addMode": "Insert before",
"tac_chantFile": "demo-chants.json",
"tac_keymap": "{\n \"MoveUp\": \"ArrowUp\",\n \"MoveDown\": \"ArrowDown\",\n \"JumpUp\": \"PageUp\",\n \"JumpDown\": \"PageDown\",\n \"JumpToStart\": \"Home\",\n \"JumpToEnd\": \"End\",\n \"ChooseSelected\": \"Enter\",\n \"ChooseFirstOrSelected\": \"Tab\",\n \"Close\": \"Escape\"\n}",
"tac_colormap": "{\n \"danbooru\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"indianred\", \"firebrick\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"orange\", \"darkorange\"]\n },\n \"e621\": {\n \"-1\": [\"red\", \"maroon\"],\n \"0\": [\"lightblue\", \"dodgerblue\"],\n \"1\": [\"gold\", \"goldenrod\"],\n \"3\": [\"violet\", \"darkorchid\"],\n \"4\": [\"lightgreen\", \"darkgreen\"],\n \"5\": [\"tomato\", \"darksalmon\"],\n \"6\": [\"red\", \"maroon\"],\n \"7\": [\"whitesmoke\", \"black\"],\n \"8\": [\"seagreen\", \"darkseagreen\"]\n }\n}",
"tac_refreshTempFiles": "Refresh TAC temp files",
"dp_ignore_whitespace": false,
"dp_write_raw_template": false,
"dp_write_prompts_to_file": false,
"dp_parser_variant_start": "{",
"dp_parser_variant_end": "}",
"dp_parser_wildcard_wrap": "__",
"dp_limit_jinja_prompts": false,
"dp_auto_purge_cache": false,
"dp_wildcard_manager_no_dedupe": false,
"dp_wildcard_manager_no_sort": false,
"dp_wildcard_manager_shuffle": false,
"dp_magicprompt_default_model": "Gustavosta/MagicPrompt-Stable-Diffusion",
"dp_magicprompt_batch_size": 1,
"ad_max_models": 2,
"ad_save_previews": true,
"ad_save_images_before": false,
"ad_only_seleted_scripts": true,
"ad_script_names": "dynamic_prompting,dynamic_thresholding,wildcard_recursive,wildcards,lora_block_weight",
"ad_bbox_sortby": "None",
"animatediff_model_path": "A:\\Stable Defusion\\WEBUI0.66.2\\stable-diffusion-webui\\extensions\\sd-webui-animatediff\\model",
"animatediff_optimize_gif_palette": true,
"animatediff_optimize_gif_gifsicle": true,
"animatediff_xformers": "Optimize attention layers with xformers",
"tac_modelSortOrder": "Name",
"control_net_allow_script_control": false,
"additional_networks_extra_lora_path": "",
"additional_networks_sort_models_by": "name",
"additional_networks_reverse_sort_order": false,
"additional_networks_model_name_filter": "",
"additional_networks_xy_grid_model_metadata": "",
"additional_networks_hash_thread_count": 1.0,
"additional_networks_back_up_model_when_saving": true,
"additional_networks_show_only_safetensors": false,
"additional_networks_show_only_models_with_metadata": "disabled",
"additional_networks_max_top_tags": 20.0,
"additional_networks_max_dataset_folders": 20.0,
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 2,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_openpose_edit": false,
"controlnet_disable_photopea_edit": false,
"controlnet_photopea_warning": true,
"controlnet_ignore_noninpaint_mask": false,
"controlnet_clip_detector_on_cpu": false,
"tac_wildcardExclusionList": "",
"tac_skipWildcardRefresh": false,
"tac_useLoraPrefixForLycos": true,
"tac_useStyleVars": false,
"openpose3d_use_online_version": false,
"regprp_debug": false,
"regprp_hidepmask": false,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"controlnet_control_type_dropdown": false,
"tac_frequencySort": true,
"tac_frequencyFunction": "Logarithmic (weak)",
"tac_frequencyMinCount": 3,
"tac_frequencyMaxAge": 30,
"tac_frequencyRecommendCap": 10,
"tac_frequencyIncludeAlias": false
},
"Startup": {
"total": 35.643954038619995,
"records": {
"initial startup": 0.014950990676879883,
"prepare environment/checks": 0.010963201522827148,
"prepare environment/git version info": 0.0627896785736084,
"prepare environment/torch GPU test": 0.001993417739868164,
"prepare environment/clone repositores": 0.1445159912109375,
"prepare environment/install requirements": 7.762157440185547,
"prepare environment/run extensions installers/AI-WEBUI-scripts-Random": 0.0009968280792236328,
"prepare environment/run extensions installers/openpose-editor": 0.0,
"prepare environment/run extensions installers/sd-dynamic-prompts": 0.15795183181762695,
"prepare environment/run extensions installers/sd-webui-3d-open-pose-editor-main": 0.18438172340393066,
"prepare environment/run extensions installers/sd-webui-additional-networks": 0.0,
"prepare environment/run extensions installers/sd-webui-controlnet": 0.12956738471984863,
"prepare environment/run extensions installers/sd-webui-regional-prompter": 0.0,
"prepare environment/run extensions installers/sd_dreambooth_extension": 9.188960075378418,
"prepare environment/run extensions installers/stable-diffusion-webui-daam": 0.25730133056640625,
"prepare environment/run extensions installers/stable-diffusion-webui-daam-master": 0.25215673446655273,
"prepare environment/run extensions installers/stable-diffusion-webui-images-browser": 0.0,
"prepare environment/run extensions installers/stable-diffusion-webui-localization-ko_KR": 0.0,
"prepare environment/run extensions installers/stable-diffusion-webui-wd14-tagger": 6.892964124679565,
"prepare environment/run extensions installers/tag-autocomplete": 0.0,
"prepare environment/run extensions installers": 17.064280033111572,
"prepare environment": 25.04669976234436,
"launcher": 0.0029897689819335938,
"import torch": 4.869890928268433,
"import gradio": 0.552384614944458,
"setup paths": 0.6129496097564697,
"import ldm": 0.004982948303222656,
"import sgm": 0.0,
"initialize shared": 0.13554716110229492,
"other imports": 0.4644455909729004,
"opts onchange": 0.0039865970611572266,
"setup SD model": 0.0,
"setup codeformer": 0.0009968280792236328,
"setup gfpgan": 0.009966611862182617,
"set samplers": 0.0,
"list extensions": 0.0029900074005126953,
"restore config state file": 0.0,
"list SD models": 0.26636576652526855,
"list localizations": 0.0009965896606445312,
"load scripts/custom_code.py": 0.004983425140380859,
"load scripts/img2imgalt.py": 0.0,
"load scripts/loopback.py": 0.0009965896606445312,
"load scripts/outpainting_mk_2.py": 0.0,
"load scripts/poor_mans_outpainting.py": 0.0,
"load scripts/postprocessing_codeformer.py": 0.000997304916381836,
"load scripts/postprocessing_gfpgan.py": 0.0,
"load scripts/postprocessing_upscale.py": 0.0,
"load scripts/prompt_matrix.py": 0.0009961128234863281,
"load scripts/prompts_from_file.py": 0.0,
"load scripts/sd_upscale.py": 0.0,
"load scripts/xyz_grid.py": 0.0019931793212890625,
"load scripts/ldsr_model.py": 0.7614519596099854,
"load scripts/lora_script.py": 0.14751935005187988,
"load scripts/scunet_model.py": 0.020929813385009766,
"load scripts/swinir_model.py": 0.018936634063720703,
"load scripts/hotkey_config.py": 0.0,
"load scripts/extra_options_section.py": 0.0,
"load scripts/hypertile_script.py": 0.03986644744873047,
"load scripts/hypertile_xyz.py": 0.0009970664978027344,
"load scripts/postprocessing_autosized_crop.py": 0.0,
"load scripts/postprocessing_caption.py": 0.0,
"load scripts/postprocessing_create_flipped_copies.py": 0.0,
"load scripts/postprocessing_focal_crop.py": 0.0019931793212890625,
"load scripts/postprocessing_split_oversized.py": 0.0009968280792236328,
"load scripts/soft_inpainting.py": 0.0,
"load scripts/Random dynamic_prompting.py": 0.0009965896606445312,
"load scripts/Random grid.py": 0.0019931793212890625,
"load scripts/Random.py": 0.0,
"load scripts/main.py": 0.11461639404296875,
"load scripts/dynamic_prompting.py": 0.09268975257873535,
"load scripts/openpose_editor.py": 0.03189373016357422,
"load scripts/additional_networks.py": 0.08571290969848633,
"load scripts/addnet_xyz_grid_support.py": 0.0,
"load scripts/lora_compvis.py": 0.0,
"load scripts/metadata_editor.py": 0.0009965896606445312,
"load scripts/model_util.py": 0.004983186721801758,
"load scripts/safetensors_hack.py": 0.0,
"load scripts/util.py": 0.0,
"load scripts/adapter.py": 0.0,
"load scripts/api.py": 0.5023193359375,
"load scripts/batch_hijack.py": 0.0009965896606445312,
"load scripts/cldm.py": 0.0,
"load scripts/controlnet.py": 0.11062979698181152,
"load scripts/controlnet_diffusers.py": 0.0009965896606445312,
"load scripts/controlnet_lllite.py": 0.0,
"load scripts/controlnet_lora.py": 0.0,
"load scripts/controlnet_model_guess.py": 0.0009965896606445312,
"load scripts/controlnet_sparsectrl.py": 0.0,
"load scripts/controlnet_version.py": 0.0,
"load scripts/enums.py": 0.0009965896606445312,
"load scripts/external_code.py": 0.0,
"load scripts/global_state.py": 0.0,
"load scripts/hook.py": 0.0009968280792236328,
"load scripts/infotext.py": 0.0,
"load scripts/logging.py": 0.0,
"load scripts/lvminthin.py": 0.0,
"load scripts/movie2movie.py": 0.0009968280792236328,
"load scripts/supported_preprocessor.py": 0.0009965896606445312,
"load scripts/utils.py": 0.0,
"load scripts/xyz_grid_support.py": 0.0,
"load scripts/attention.py": 0.0,
"load scripts/latent.py": 0.0029900074005126953,
"load scripts/regions.py": 0.0009965896606445312,
"load scripts/rp.py": 0.021926403045654297,
"load scripts/rps.py": 0.014950275421142578,
"load scripts/__init__.py": 0.0009968280792236328,
"load scripts/daam_script.py": 0.010963678359985352,
"load scripts/images_history.py": 0.0408632755279541,
"load scripts/tagger.py": 0.0627896785736084,
"load scripts/model_keyword_support.py": 0.002990245819091797,
"load scripts/shared_paths.py": 0.0,
"load scripts/tag_autocomplete_helper.py": 0.07176041603088379,
"load scripts/tag_frequency_db.py": 0.0009958744049072266,
"load scripts/comments.py": 0.01694321632385254,
"load scripts/refiner.py": 0.0009970664978027344,
"load scripts/sampler.py": 0.0,
"load scripts/seed.py": 0.0,
"load scripts": 2.206629514694214,
"load upscalers": 0.0029897689819335938,
"refresh VAE": 0.0009968280792236328,
"refresh textual inversion templates": 0.0,
"scripts list_optimizers": 0.001993417739868164,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0,
"initialize extra networks": 0.008969783782958984,
"scripts before_ui_callback": 0.0019931793212890625,
"create ui": 0.8292255401611328,
"gradio launch": 0.5890524387359619,
"add APIs": 0.004983425140380859,
"app_started_callback/lora_script.py": 0.0,
"app_started_callback/api.py": 0.0019936561584472656,
"app_started_callback/tagger.py": 0.0019927024841308594,
"app_started_callback/tag_autocomplete_helper.py": 0.0029900074005126953,
"app_started_callback": 0.00697636604309082
}
},
"Packages": [
"-itsandbytes==0.41.2.post2",
"-orch==2.3.0",
"-rotobuf==3.20.0",
"absl-py==1.4.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.8.5",
"aiosignal==1.3.1",
"albumentations==1.4.3",
"altair==5.1.1",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"astunparse==1.6.3",
"async-timeout==4.0.3",
"attrs==23.1.0",
"av==10.0.0",
"basicsr==1.4.2",
"beautifulsoup4==4.12.2",
"bitsandbytes==0.43.1",
"blendmodes==2022",
"boltons==23.0.0",
"cachetools==5.3.1",
"certifi==2023.7.22",
"cffi==1.16.0",
"chardet==5.2.0",
"charset-normalizer==3.2.0",
"clean-fid==0.1.35",
"click==8.1.7",
"clip==1.0",
"colorama==0.4.6",
"coloredlogs==15.0.1",
"colorlog==6.8.2",
"contourpy==1.1.0",
"cssselect2==0.7.0",
"cycler==0.11.0",
"cython==3.0.8",
"dadaptation==3.2",
"deepdanbooru==1.0.2",
"deprecation==2.1.0",
"depth-anything==2024.1.22.0",
"diffusers==0.26.3",
"discord-webhook==1.3.0",
"diskcache==5.6.3",
"dsine==2024.3.23",
"dynamicprompts==0.31.0",
"easydict==1.12",
"einops==0.4.1",
"embreex==2.17.7.post4",
"exceptiongroup==1.1.3",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.1",
"filelock==3.12.4",
"filterpy==1.4.5",
"flatbuffers==23.5.26",
"fonttools==4.42.1",
"frozenlist==1.4.0",
"fsspec==2023.9.0",
"ftfy==6.1.1",
"future==0.18.3",
"fvcore==0.1.5.post20221221",
"gast==0.4.0",
"gdown==4.7.1",
"geffnet==1.0.2",
"gfpgan==1.3.8",
"gitdb==4.0.10",
"gitpython==3.1.43",
"glob2==0.5",
"google-auth-oauthlib==1.0.0",
"google-auth==2.23.0",
"google-pasta==0.2.0",
"gradio-client==0.5.0",
"gradio==3.41.2",
"grpcio==1.58.0",
"h11==0.12.0",
"h5py==3.9.0",
"handrefinerportable==2024.2.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.20.3",
"humanfriendly==10.0",
"idna==3.4",
"imageio==2.31.3",
"importlib-metadata==6.8.0",
"importlib-resources==6.0.1",
"inflection==0.5.1",
"insightface==0.7.3",
"intel-openmp==2021.4.0",
"iopath==0.1.9",
"jinja2==3.1.2",
"joblib==1.3.2",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.7.1",
"jsonschema==4.19.0",
"keras==2.13.1",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.3",
"libclang==16.0.6",
"lightning-utilities==0.9.0",
"llvmlite==0.42.0",
"lmdb==1.4.1",
"lpips==0.1.4",
"lxml==5.1.0",
"mapbox-earcut==1.0.1",
"markdown-it-py==3.0.0",
"markdown==3.4.4",
"markupsafe==2.1.3",
"matplotlib==3.6.2",
"mdurl==0.1.2",
"mediapipe==0.10.5",
"mkl==2021.4.0",
"mpmath==1.3.0",
"multidict==6.0.4",
"networkx==3.1",
"numba==0.59.1",
"numpy==1.24.3",
"oauthlib==3.2.2",
"omegaconf==2.2.3",
"onnx==1.15.0",
"onnxruntime-gpu==1.15.1",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.8.0.76",
"opencv-python-headless==4.9.0.80",
"opencv-python==4.8.0.76",
"opt-einsum==3.3.0",
"orjson==3.9.7",
"packaging==23.1",
"pandas==2.1.0",
"piexif==1.1.3",
"pillow-avif-plugin==1.4.3",
"pillow==9.5.0",
"pip==22.2.2",
"platformdirs==3.10.0",
"portalocker==2.8.2",
"prettytable==3.9.0",
"protobuf==3.20.3",
"psutil==5.9.5",
"py-cpuinfo==9.0.0",
"pyasn1-modules==0.3.0",
"pyasn1==0.5.0",
"pycollada==0.8",
"pycparser==2.21",
"pydantic==1.10.12",
"pydub==0.25.1",
"pygifsicle==1.0.7",
"pygments==2.16.1",
"pyparsing==3.1.1",
"pyreadline3==3.4.1",
"pysocks==1.7.1",
"python-dateutil==2.8.2",
"python-multipart==0.0.6",
"pytorch-lightning==1.9.4",
"pytorch-optimizer==2.12.0",
"pytz==2023.3.post1",
"pywavelets==1.4.1",
"pywin32==306",
"pyyaml==6.0.1",
"qudida==0.0.4",
"realesrgan==0.3.0",
"referencing==0.30.2",
"regex==2023.8.8",
"reportlab==4.1.0",
"requests-oauthlib==1.3.1",
"requests==2.31.0",
"resize-right==0.0.2",
"rich==13.6.0",
"rpds-py==0.10.3",
"rsa==4.9",
"rtree==1.2.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scikit-learn==1.4.1.post1",
"scipy==1.11.2",
"seaborn==0.13.0",
"semantic-version==2.10.0",
"send2trash==1.8.2",
"sentencepiece==0.1.99",
"setuptools==63.2.0",
"shapely==2.0.3",
"six==1.16.0",
"smmap==5.0.0",
"sniffio==1.3.0",
"sounddevice==0.4.6",
"soupsieve==2.5",
"spandrel==0.1.6",
"starlette==0.26.1",
"support-developer==1.0.5",
"svg.path==6.3",
"svglib==1.5.1",
"sympy==1.12",
"tabulate==0.9.0",
"tb-nightly==2.15.0a20230915",
"tbb==2021.12.0",
"tensorboard-data-server==0.7.1",
"tensorboard==2.13.0",
"tensorflow-estimator==2.13.0",
"tensorflow-intel==2.13.0",
"tensorflow-io-gcs-filesystem==0.31.0",
"tensorflow==2.13.0",
"termcolor==2.3.0",
"thop==0.1.1.post2209072238",
"threadpoolctl==3.3.0",
"tifffile==2023.8.30",
"timm==0.9.2",
"tinycss2==1.2.1",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.0",
"torch==2.0.1",
"torchdiffeq==0.2.3",
"torchmetrics==1.1.2",
"torchsde==0.2.6",
"torchvision==0.15.2+cu118",
"tqdm==4.65.0",
"trampoline==0.1.2",
"transformers==4.30.2",
"trimesh==4.1.3",
"typing-extensions==4.5.0",
"tzdata==2023.3",
"ultralytics==8.0.194",
"urllib3==1.26.16",
"uvicorn==0.23.2",
"vhacdx==0.0.5",
"wcwidth==0.2.6",
"webencodings==0.5.1",
"websockets==11.0.3",
"werkzeug==2.3.7",
"wheel==0.41.2",
"wrapt==1.15.0",
"xformers==0.0.20",
"xxhash==3.4.1",
"yacs==0.1.8",
"yapf==0.40.1",
"yarl==1.9.2",
"zipp==3.16.2"
]
}
### Console logs
```Shell
venv "A:\Stable Defusion\WEBUI0.66.2\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]
Version: v1.4.0-RC-2397-g1c0a0c4c
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Traceback (most recent call last):
File "A:\Stable Defusion\WEBUI0.66.2\stable-diffusion-webui\launch.py", line 48, in <module>
main()
File "A:\Stable Defusion\WEBUI0.66.2\stable-diffusion-webui\launch.py", line 39, in main
prepare_environment()
File "A:\Stable Defusion\WEBUI0.66.2\stable-diffusion-webui\modules\launch_utils.py", line 386, in prepare_environment
raise RuntimeError(
RuntimeError: Torch is not able to use GPU; add --skip-torch-cuda-test to COMMANDLINE_ARGS variable to disable this check
```
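For what it's worth, the `RuntimeError` above names its own escape hatch: the check can be skipped by adding the flag to `COMMANDLINE_ARGS` (in `webui-user.bat` on Windows), though that only bypasses the test rather than restoring GPU use:

```
set COMMANDLINE_ARGS=--skip-torch-cuda-test
```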
### Additional information
I have updated my GPU driver recently.
I already updated to the latest CUDA and cuDNN after this issue occurred, but it still doesn't work. | closed | 2024-05-13T14:26:50Z | 2024-08-26T11:59:25Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15777 | [
"bug-report"
] | Adastra11 | 2 |
tartiflette/tartiflette | graphql | 228 | [Extensibility] Bring an "hookable" way to make third-party libraries possible | One missing part needed by the community to provide external directives is a way to configure the directive.
Here is a draft of the API to provide configuration.
## No more `Directive` abstract class
The Directive classes won't extend any tartiflette abstract classes. Instead, you will have to implement the methods you want to support depending on the feature you want to compliant with. _(see above regarding the feature set)_
**from**
```python
from tartiflette.directive import Directive, CommonDirective
@Directive("myDirective")
class MyDirective(CommonDirective):
pass
```
**to**
```python
from tartiflette.directive import Directive
@Directive("myDirective")
class MyDirective:
pass
```
## Pass configuration at build time to a third party library
It is currently impossible to configure a directive/scalar (e.g. to pass configuration such as an API key, a URL, etc.).
The directives are a set of static methods (the same behavior is applied to every engine contained in your application).
### Why add a configuration to `Directive`/`Scalar`?
The Tartiflette DNA is to provide an Engine that is as extensible as possible. We decided to base our extensibility on the Directive feature of the GraphQL Specification. It allows developers to hook into behaviors at different stages _(build and execution time)_ of the Engine.
Here are some interesting features which could be created:
* Directive to apply JSON Schema validation on `input` types.
* Directive to add monitoring on specific fields
* Directive to apply access rules on fields (e.g `@auth`) with **dynamic introspection** which will allow the developer to hide the fields from the introspection.
* Directive to map the resolution of a field directly to the result of an AWS Lambda / GCP Function

and so on... let your imagination run free.
### Configuration proposal
```python
engine = Engine(
sdl,
modules=[
"yourapp.query_resolvers",
"yourapp.mutation_resolvers",
{
"name": "tartiflette_external_lib",
"config": {
"your": "configuration",
"foo": {
"bar": "baz"
}
}
}
]
)
```
Based on the current `modules` property _(which currently allows **only** strings)_, add a new kind of entry with the following shape.
```json
{
"name": "string",
"config": "dict"
}
```
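The proposed shape above can be sketched as a tiny normalization step. This is a hedged illustration only (the function name and the fallback behavior are my own assumptions, not part of the proposal):

```python
def normalize_modules(modules):
    """Turn the mixed `modules` list (strings and dicts) into (import_path, config) pairs."""
    normalized = []
    for entry in modules:
        if isinstance(entry, str):
            # Legacy entry: a bare import path with no configuration.
            normalized.append((entry, {}))
        elif isinstance(entry, dict):
            # Proposed entry: {"name": "<import path>", "config": {...}}.
            normalized.append((entry["name"], entry.get("config", {})))
        else:
            raise TypeError(f"Unsupported module entry: {entry!r}")
    return normalized
```

The `Engine` could then import each path and, when a `config` is present, hand it to the module's `bake` hook.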
### How to deal with the configuration within a third party library?
This configuration will help Tartiflette users configure the third-party library they want to use.
Tartiflette developers, in turn, will have to handle this configuration within their packages.
The idea is to make it easy for Tartiflette developers to provide Tartiflette _objects_ _(directives, scalars, resolvers and so on)_ to the community.
#### Context
**Package name**: tartiflette-directive-validators-jsonschema
**Package main module name**: tartiflette_directive_validators_jsonschema
#### Access the configuration at `bake` time.
The `Engine` will look for a function named `bake(schema_name: str, config: dict)` at the root level of the module.
**e.g ./tartiflette_directive_validators_jsonschema/__init__.py**
```python
from tartiflette import Directive
from tartiflette_directive_validators_jsonschema.validate_by_ref import ValidateByRef
async def bake(schema_name: str, config: dict) -> str:
sdl = """
directive @validateByRef(
ref: String!
) on ARGUMENT_DEFINITION
"""
Directive("validateByRef", schema_name=schema_name)(ValidateByRef(config))
return sdl
```
**e.g ./tartiflette_directive_validators_jsonschema/validate_by_ref.py**
```python
class ValidateByRef:
def __init__(self, config):
# API Key ...
# File path ...
pass
async def on_argument_execution(
self,
directive_args: Dict[str, Any],
next_directive: Callable,
argument_definition: "GraphQLArgument",
args: Dict[str, Any],
ctx: Optional[Dict[str, Any]],
info: "Info",
) -> Any:
pass
``` | closed | 2019-05-15T13:47:41Z | 2019-06-13T09:49:47Z | https://github.com/tartiflette/tartiflette/issues/228 | [
"enhancement"
] | tsunammis | 0 |
noirbizarre/flask-restplus | flask | 701 | Update Documentation for Flask >= 1.1.0 and Werkzeug >= 0.15.0 | > d-kahara commented 2 hours ago •
> For anyone that's encountered this issue lately
> This has changed
>
> from werkzeug.contrib.fixers import ProxyFix
> app.wsgi_app = ProxyFix(app.wsgi_app)
>
> This is what you need:
>
> from werkzeug.middleware.proxy_fix import ProxyFix
> app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)
>
> @noirbizarre perhaps this could also be rectified in the todo example?
This is a refactor introduced in the Werkzeug 0.15.x series, which is pulled in by Flask >= 1.1.0.
We should look at updating our example and documentation, along with compatibility notes. As a sidebar, we should also consider pinning to more specific versions of dependencies to guard against breaking changes (Flask 1.0.0 is out and we're still allowing >0.8.0; some big changes happened in there :) )
"documentation"
] | j5awry | 1 |
facebookresearch/fairseq | pytorch | 5,306 | High Memory consumption on ASR inferencing | ## ❓ Questions and Help
#### What is your question?
High memory consumption when running ASR inference on a server with 16 GB of RAM. It used up to 15.67 GB of system RAM and caused the whole server to hang.

#### What's your environment?
- fairseq Version (e.g., 1.0 or main): main
- PyTorch Version (e.g., 1.0)
- OS (e.g., Linux): Linux
- How you installed fairseq (`pip`, source): source
- Build command you used (if compiling from source):
```
cd /path/to/fairseq-py/
python examples/mms/asr/infer/mms_infer.py --model "/path/to/asr/model" --lang lang_code \
--audio "/path/to/audio_1.wav"
```
- Python version: 3.8
- CUDA/cuDNN version: None
- GPU models and configuration: None
- Any other relevant information: I am using docker.
| open | 2023-09-01T16:06:28Z | 2023-09-29T16:58:28Z | https://github.com/facebookresearch/fairseq/issues/5306 | [
"question",
"needs triage"
] | olawale1rty | 2 |
Urinx/WeixinBot | api | 149 | 微信发送信息频率上限? | # 请问有人知道微信发送信息频率上限是多少吗?
自己的机器人碰到的问题...
希望大家可以来帮助讨论下 先谢过啦
## 简单介绍下背景:
功能上基本上就是群发,且群发的信息对不同人是不一样的。
然后希望能够越准时越好(比如元旦,最好就是1月1日00:00发送,微信上貌似就显示到分钟,所以有60s的时间来发送,好友大概120个的样子)
当时我做的就是无脑for发送(由于使用nodejs写的,所以所有信息几乎是同时发送的,不会等待前一条发送成功才发送下一条)
一直都没有问题,好像是从去年开始,微信开始提示:"发送消息过于频繁,等待对方接受你的好友请求后再发。"
## Since the cause is sending messages too frequently, I came up with a few approaches
### Since I'm using Node.js, to reduce the number of concurrent (simultaneous) sends:
* put all messages to be sent into a queue
* only send the next one after receiving the system's delivery confirmation for the previous one

_This should behave the same as sending from Python, i.e. blocking (waiting for the send result before deciding what to do next). In practice, though, sending 60 messages in a row with this method still triggers the warning (I don't know whether anyone has tested this with Python)._<br>
_Also with this method, after sending 60 messages and resting about 15 s, I could send roughly 40 more before being warned again; this approach should make it possible to measure how long it takes for the 60-message budget to recover._
### Use JS setTimeout (or setInterval) to hard-code the interval, sending one message every x ms.
_An interval of 1000 ms (i.e. 1 s) caused no problems; at 500 ms (one message every 0.5 s), the error appeared after roughly 120 consecutive messages._<br>
_Those 120 messages were all sent to the same user; when I split them across 4 recipients, the limit came out to about 30 per recipient, so I infer that WeChat limits the total number of send requests regardless of who the recipient is._<br>
_In these tests, WeChat's thresholds feel somewhat fuzzy: blind sending (which here corresponds to an interval of 0) also got through about 60 messages, while one message every 0.5 s got through about 120, so there seems to be some send-frequency vs. message-count function at play?_
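To make the fixed-interval idea concrete, here is a small scheduling sketch in Python (the defaults are just the numbers from my tests above; they are guesses, not WeChat's actual limits):

```python
def schedule_sends(n_messages, min_interval=1.0, batch_size=60, batch_pause=15.0):
    """Return relative send times (in seconds) that keep `min_interval`
    between messages and add an extra `batch_pause` after every
    `batch_size` messages."""
    times = []
    t = 0.0
    for i in range(n_messages):
        if i and i % batch_size == 0:
            t += batch_pause  # cool down after each batch
        times.append(t)
        t += min_interval
    return times

# schedule_sends(3, min_interval=1.0, batch_size=2, batch_pause=5.0)
# -> [0.0, 1.0, 7.0]
```

A sender would then sleep until each scheduled time before dispatching the next message.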
## Does anyone have any thoughts?
## Either about this problem itself,
## or about how to probe the rules of WeChat's limits?
Many thanks!!!
| open | 2017-02-10T11:25:39Z | 2017-09-12T12:56:32Z | https://github.com/Urinx/WeixinBot/issues/149 | [] | eggachecat | 6 |
plotly/dash-core-components | dash | 575 | DatePickerSingle not compatible with Internet Explorer 11 | DatePickerSingle examples from the Dash user guide https://dash.plot.ly/dash-core-components/datepickersingle are not working with Internet Explorer 11 (tested version: 11.175.18362.0 with updates: 11.0.130). A selected date in the calendar won't update the text in the input field.
Possible Solution:
Regarding this issue https://github.com/airbnb/react-dates/issues/1639, please consider an update of react-dates to version >= 20.2.0 | open | 2019-06-29T11:54:05Z | 2020-04-16T12:57:41Z | https://github.com/plotly/dash-core-components/issues/575 | [
"size: 1",
"bug"
] | miburk | 1 |
axnsan12/drf-yasg | rest-api | 40 | Schema default value not used | Trying to set a default value for a Schema inside of a Parameter. It does not seem to be getting picked up in Swagger when I use the "Try it out" functionality.
My schema looks like this:
```
@swagger_auto_schema(
operation_id="Get a mission plan",
responses={
200: 'test'},
manual_parameters=[
openapi.Parameter(
name='Organization',
in_=openapi.IN_HEADER,
description="Organization name",
required=True,
schema=openapi.Schema(
type=openapi.TYPE_STRING,
default="Organization 1",
)
)
]
)
```
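For context, the Swagger 2.0 parameter object I am trying to end up with looks like this (my reading of the spec, not verified against drf-yasg's output: non-body parameters carry `type`/`default` directly rather than a nested `schema`, which may be why the nested Schema's `default` is ignored):

```json
{
  "name": "Organization",
  "in": "header",
  "description": "Organization name",
  "required": true,
  "type": "string",
  "default": "Organization 1"
}
```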
Perhaps providing a more in-depth example in testproj that uses more of the OpenAPI parameters would be helpful. From what I read [here](https://swagger.io/docs/specification/adding-examples/) it seems the example field should be used rather than default, but it does not seem to be an available field in the Schema class? | closed | 2018-01-11T10:13:49Z | 2018-01-12T02:37:29Z | https://github.com/axnsan12/drf-yasg/issues/40 | [] | arkadyark | 4 |
seleniumbase/SeleniumBase | web-scraping | 2,853 | New Chrome break the remote_debug? | After updating to Chrome version 126.0.6478.62, remote_debug in SeleniumBase is broken,
with `WARNING: Unable to get screenshot!`
<img width="348" alt="image" src="https://github.com/seleniumbase/SeleniumBase/assets/1127820/36f5fe04-7c3e-478b-83c5-4f60d626244a">
| closed | 2024-06-14T15:18:22Z | 2024-06-22T15:27:37Z | https://github.com/seleniumbase/SeleniumBase/issues/2853 | [
"can't reproduce"
] | guocity | 10 |
dynaconf/dynaconf | flask | 1,138 | [RFC] Add description text to the fields | There will be 2 ways to add a description to a field on the schema
first: using `Doc()` from PEP 727 https://github.com/tiangolo/fastapi/blob/typing-doc/typing_doc.md
```py
field: Annotated[str, Doc("This field is important")]
```
The above adds a Docstring to the schema that can be extracted by Dynaconf to generate template files and documentation.
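For reference, extracting that metadata back out could look like this sketch (the `Doc` class below is a stand-in I wrote, since PEP 727's `typing.Doc` is not in the stdlib):

```python
from dataclasses import dataclass
from typing import Annotated, get_type_hints

@dataclass
class Doc:
    """Stand-in for PEP 727's typing.Doc."""
    documentation: str

class Database:
    host: Annotated[str, Doc("This field is important")]
    port: int  # no Doc metadata attached

def extract_docs(cls):
    """Map field name -> Doc text for every annotated field of `cls`."""
    docs = {}
    for name, hint in get_type_hints(cls, include_extras=True).items():
        for meta in getattr(hint, "__metadata__", ()):
            if isinstance(meta, Doc):
                docs[name] = meta.documentation
    return docs

# extract_docs(Database) -> {"host": "This field is important"}
```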
---
second: docstrings. This method is not standardized, but it has the advantage of being recognized by LSPs
```py
field: str
"""This field is important"""
```
To extract this form and turn it into a `Doc` inside the annotated type, the following snippet can be used:
```py
import inspect
import re
class Database:
host: str
"""this is a hostname"""
port: int
"""this is port"""
def extract_docstrings(cls):
source = inspect.getsource(cls)
pattern = re.compile(r"(\w+):\s*\w+\n\s*\"\"\"(.*?)\"\"\"", re.DOTALL)
docstrings = dict(pattern.findall(source))
return docstrings
docstrings = extract_docstrings(Database)
for attr, doc in docstrings.items():
print(f"{attr}: {doc}")
# Output should be:
# host: this is a hostname
# port: this is port
```
| open | 2024-07-06T20:31:56Z | 2024-07-16T14:58:09Z | https://github.com/dynaconf/dynaconf/issues/1138 | [
"Not a Bug",
"RFC",
"typed_dynaconf"
] | rochacbruno | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 632 | Can a LoRA be loaded separately with PeftModel instead of being merged into the base model, and is there any difference? | ### Pre-submission checklist
- [X] Make sure you are using the latest code from this repo (git pull); some issues have already been resolved and fixed.
- [X] Since the related dependencies are updated frequently, make sure you followed the relevant steps in the [Wiki](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki)
- [X] I have read the [FAQ section](https://github.com/ymcui/Chinese-LLaMA-Alpaca/wiki/常见问题) and searched the issues for this problem without finding a similar one or a solution
- [X] Third-party plugin issues: e.g. [llama.cpp](https://github.com/ggerganov/llama.cpp), [text-generation-webui](https://github.com/oobabooga/text-generation-webui), [LlamaChat](https://github.com/alexrozanski/LlamaChat), etc.; it is also recommended to look for solutions in the corresponding projects
- [X] Model integrity check: be sure to verify the model against [SHA256.md](https://github.com/ymcui/Chinese-LLaMA-Alpaca/blob/main/SHA256.md); with an incorrect model, proper behavior and results cannot be guaranteed
### Issue type
Model conversion and merging
### Base model
Alpaca-Plus-13B
### Operating system
Linux
### Detailed description
The LoRA documentation describes loading the adapter on its own, but this project requires merging the LoRA with the original model before use. What is the difference between the two approaches?
### Dependencies (required for code-related issues)
None
### Logs or screenshots
None | closed | 2023-06-19T02:07:52Z | 2023-06-26T23:54:09Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/632 | [
"stale"
] | yuandongbo-1 | 2 |
Yorko/mlcourse.ai | plotly | 755 | Proofread topic 4 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | open | 2023-10-23T15:43:36Z | 2024-08-25T07:49:28Z | https://github.com/Yorko/mlcourse.ai/issues/755 | [
"enhancement",
"articles"
] | Yorko | 2 |
gevent/gevent | asyncio | 1,140 | Wait for a socket and an event | Hi,
Is it possible to wait for both a socket and an event? Something like this:
```python
s = socket.socket()
e = gevent.event.Event()
gevent.wait([s, e])
# Either the socket or the event is ready
```
This code is obviously not valid, but I wonder if there's some trick or workaround to make it work. | closed | 2018-03-12T08:40:04Z | 2018-03-13T14:08:52Z | https://github.com/gevent/gevent/issues/1140 | [] | ralt | 5 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 43 | How to continue fine-tuning | As the title says: hi, I'd like to ask how I can further fine-tune on top of your trained Alpaca model? | closed | 2023-04-03T11:32:42Z | 2023-04-07T08:36:08Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/43 | [] | Ferrair | 3 |
ivy-llc/ivy | pytorch | 27,979 | Fix Ivy Failing Test: torch - linalg.adjoint | To-do List: https://github.com/unifyai/ivy/issues/27501 | closed | 2024-01-21T16:04:33Z | 2024-01-22T07:00:33Z | https://github.com/ivy-llc/ivy/issues/27979 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 87 | Stuck in filters | How do I use filters here? It would be helpful if you added some examples. For example, if we want to filter the user table by first_name, last_name, email or status, how can we achieve that? | closed | 2023-10-13T22:25:20Z | 2023-11-01T09:48:20Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/87 | [] | russell310 | 2 |
ansible/ansible | python | 84,475 | docker_network state:absent destroys networks with containers still attached, contrary to the documentation | ### Summary
When I run a playbook which declares the state of an existing docker_network with connected containers to be absent, I expect the deletion to fail as documented [here](https://docs.ansible.com/ansible/2.9/modules/docker_network_module.html#parameter-state)
> If a network has connected containers, it cannot be deleted.
Instead, the network is deleted, leaving the container running but without the network.
### Issue Type
Bug Report
### Component Name
docker_network
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.1]
config file = None
configured module search path = ['/Users/simon/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible
ansible collection location = /Users/simon/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] (/opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CONFIG_FILE() = None
GALAXY_SERVERS:
```
### OS / Environment
MacOS 14.7.1
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
Playbook 1
```yaml
---
- name: Deploy Nginx in a Docker container
hosts: localhost
become: true
tasks:
- name: Create a user-defined Docker network
community.docker.docker_network:
name: custom_network
state: present
- name: Deploy an Nginx container
community.docker.docker_container:
name: nginx_container
image: nginx:latest
state: started
networks:
- name: custom_network
ports:
- "8080:80"
- name: Verify the Nginx container is running
ansible.builtin.shell: "docker ps | grep nginx_container"
register: nginx_status
- name: Display the status of the Nginx container
ansible.builtin.debug:
msg: "{{ nginx_status.stdout_lines }}"
- name: Verify the network exists
ansible.builtin.shell: "docker network list | grep custom_network"
register: network_list
- name: Display the network list
ansible.builtin.debug:
msg: "{{ network_list.stdout_lines }}"
```
Playbook 2
```yaml
- name: Deploy Nginx in a Docker container
hosts: localhost
become: true
tasks:
- name: Create a user-defined Docker network
community.docker.docker_network:
name: custom_network
state: absent
- name: Verify the Nginx container is running
ansible.builtin.shell: "docker ps | grep nginx_container"
register: nginx_status
- name: Display the status of the Nginx container
ansible.builtin.debug:
msg: "{{ nginx_status.stdout_lines }}"
- name: Verify the network exists
ansible.builtin.shell: "docker network list | grep custom_network"
register: network_list
failed_when: network_list.rc not in [0,1]
- name: Display the network list
ansible.builtin.debug:
msg: "{{ network_list.stdout_lines }}"
```
### Expected Results
I expected `custom_network` to remain in existence after running playbook 2, but in fact it is gone. The output below (from playbook 2 only) shows the module disconnecting the container and deleting the network, contrary to the docs.
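Until the behaviour matches the docs, a guard like the following could be added before the deletion (a sketch; it assumes `community.docker.docker_network_info` exposes docker's inspect output, including the `Containers` map, under its `network` return key):

```yaml
- name: Look up the network before deleting it
  community.docker.docker_network_info:
    name: custom_network
  register: net_info

- name: Refuse to delete while containers are still attached
  ansible.builtin.assert:
    that:
      - (net_info.network.Containers | default({})) | length == 0
  when: net_info.exists
```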
### Actual Results
```console
ansible-playbook [core 2.18.1]
config file = None
configured module search path = ['/Users/simon/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible
ansible collection location = /Users/simon/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible-playbook
python version = 3.13.1 (main, Dec 3 2024, 17:59:52) [Clang 16.0.0 (clang-1600.0.26.4)] (/opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python)
jinja version = 3.1.4
libyaml = True
No config file found; using defaults
BECOME password:
setting up inventory plugins
Loading collection ansible.builtin from
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
yaml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
ini declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Skipping due to inventory source not existing or not being readable by the current user
toml declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
Loading collection community.docker from /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible_collections/community/docker
Loading callback plugin default of type stdout, v2.0 from /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible/plugins/callback/default.py
Skipping callback 'default', as we already have a stdout callback.
Skipping callback 'minimal', as we already have a stdout callback.
Skipping callback 'oneline', as we already have a stdout callback.
PLAYBOOK: playbook2.yml **************************************************************************************************************************************************************
Positional arguments: playbook2.yml
verbosity: 4
connection: ssh
become_method: sudo
become_ask_pass: True
tags: ('all',)
inventory: ('/etc/ansible/hosts',)
forks: 5
1 plays in playbook2.yml
PLAY [Deploy Nginx in a Docker container] ********************************************************************************************************************************************
TASK [Gathering Facts] ***************************************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:1
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: simon
<127.0.0.1> EXEC /bin/sh -c 'echo ~simon && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/simon/.ansible/tmp `"&& mkdir "` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987 `" && echo ansible-tmp-1734524728.427339-80135-269737139249987="` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible/modules/setup.py
<127.0.0.1> PUT /Users/simon/.ansible/tmp/ansible-local-80114wwv6suld/tmp6_rh6wfo TO /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987/ /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=voprbglpsmdpzxuuqzxmnvrkiilabelz] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-voprbglpsmdpzxuuqzxmnvrkiilabelz ; /opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987/AnsiballZ_setup.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/simon/.ansible/tmp/ansible-tmp-1734524728.427339-80135-269737139249987/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
TASK [Create a user-defined Docker network] ******************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:5
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: simon
<127.0.0.1> EXEC /bin/sh -c 'echo ~simon && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/simon/.ansible/tmp `"&& mkdir "` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446 `" && echo ansible-tmp-1734524731.379621-80176-130326662296446="` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible_collections/community/docker/plugins/modules/docker_network.py
<127.0.0.1> PUT /Users/simon/.ansible/tmp/ansible-local-80114wwv6suld/tmp1pa1la08 TO /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446/AnsiballZ_docker_network.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446/ /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446/AnsiballZ_docker_network.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=abwuehlmpshpuexxxxkrpipbnvllrqge] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-abwuehlmpshpuexxxxkrpipbnvllrqge ; /opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446/AnsiballZ_docker_network.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/simon/.ansible/tmp/ansible-tmp-1734524731.379621-80176-130326662296446/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"actions": [
"Disconnected container nginx_container",
"Removed network custom_network"
],
"changed": true,
"invocation": {
"module_args": {
"api_version": "auto",
"appends": false,
"attachable": null,
"ca_path": null,
"client_cert": null,
"client_key": null,
"config_from": null,
"config_only": null,
"connected": [],
"debug": false,
"docker_host": "unix:///var/run/docker.sock",
"driver": "bridge",
"driver_options": {},
"enable_ipv6": null,
"force": false,
"internal": null,
"ipam_config": null,
"ipam_driver": null,
"ipam_driver_options": null,
"labels": {},
"name": "custom_network",
"scope": null,
"state": "absent",
"timeout": 60,
"tls": false,
"tls_hostname": null,
"use_ssh_client": false,
"validate_certs": false
}
}
}
TASK [Verify the Nginx container is running] *****************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:10
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: simon
<127.0.0.1> EXEC /bin/sh -c 'echo ~simon && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/simon/.ansible/tmp `"&& mkdir "` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922 `" && echo ansible-tmp-1734524732.229014-80201-42969956857922="` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/simon/.ansible/tmp/ansible-local-80114wwv6suld/tmpq8absk6k TO /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922/ /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=ddgjsezyvpqdwfppjqtpwvsrdjslddel] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-ddgjsezyvpqdwfppjqtpwvsrdjslddel ; /opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922/AnsiballZ_command.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/simon/.ansible/tmp/ansible-tmp-1734524732.229014-80201-42969956857922/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"cmd": "docker ps | grep nginx_container",
"delta": "0:00:00.055873",
"end": "2024-12-18 12:25:32.658854",
"invocation": {
"module_args": {
"_raw_params": "docker ps | grep nginx_container",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"expand_argument_vars": true,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "",
"rc": 0,
"start": "2024-12-18 12:25:32.602981",
"stderr": "",
"stderr_lines": [],
"stdout": "362e03085558 nginx:latest \"/docker-entrypoint.…\" 35 minutes ago Up 35 minutes nginx_container",
"stdout_lines": [
"362e03085558 nginx:latest \"/docker-entrypoint.…\" 35 minutes ago Up 35 minutes nginx_container"
]
}
TASK [Display the status of the Nginx container] *************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:14
ok: [localhost] => {
"msg": [
"362e03085558 nginx:latest \"/docker-entrypoint.…\" 35 minutes ago Up 35 minutes nginx_container"
]
}
TASK [Verify the network exists] *****************************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:18
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: simon
<127.0.0.1> EXEC /bin/sh -c 'echo ~simon && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/simon/.ansible/tmp `"&& mkdir "` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829 `" && echo ansible-tmp-1734524732.802664-80227-229173310682829="` echo /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/11.1.0/libexec/lib/python3.13/site-packages/ansible/modules/command.py
<127.0.0.1> PUT /Users/simon/.ansible/tmp/ansible-local-80114wwv6suld/tmpbuqg9v6l TO /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829/AnsiballZ_command.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829/ /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829/AnsiballZ_command.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'sudo -H -S -p "[sudo via ansible, key=elfdwluorehhbqjdeeolyqgaymsbbrvg] password:" -u root /bin/sh -c '"'"'echo BECOME-SUCCESS-elfdwluorehhbqjdeeolyqgaymsbbrvg ; /opt/homebrew/Cellar/ansible/11.1.0/libexec/bin/python /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829/AnsiballZ_command.py'"'"' && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/simon/.ansible/tmp/ansible-tmp-1734524732.802664-80227-229173310682829/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"cmd": "docker network list | grep custom_network",
"delta": "0:00:00.065006",
"end": "2024-12-18 12:25:33.207390",
"failed_when_result": false,
"invocation": {
"module_args": {
"_raw_params": "docker network list | grep custom_network",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"expand_argument_vars": true,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2024-12-18 12:25:33.142384",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": []
}
TASK [Display the network list] ******************************************************************************************************************************************************
task path: /Users/simon/source/docker-net-test/playbook2.yml:23
ok: [localhost] => {
"msg": []
}
PLAY RECAP ***************************************************************************************************************************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-12-18T12:27:49Z | 2025-01-01T14:00:02Z | https://github.com/ansible/ansible/issues/84475 | [
"bot_closed"
] | sonotley | 1 |
kizniche/Mycodo | automation | 460 | dashboard error | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.08
#### Problem Description
Please list:
- when I modify the dashboard and add two values to a graph: the temp setpoint and the temp from the ds18b20
### Errors
- List any errors you encountered.
```
Traceback (most recent call last):
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1982, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1614, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1517, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/_compat.py", line 33, in reraise
    raise value
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1612, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1598, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask_login/utils.py", line 261, in decorated_view
    return func(*args, **kwargs)
  File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_page.py", line 400, in page_dashboard
    y_axes = utils_dashboard.graph_y_axes(dict_measurements)
  File "/home/pi/Mycodo/mycodo/mycodo_flask/utils/utils_dashboard.py", line 469, in graph_y_axes
    input_dev)
  File "/home/pi/Mycodo/mycodo/mycodo_flask/utils/utils_dashboard.py", line 538, in check_func
    elif each_device.cmd_measurement and each_device.cmd_measurement != '':
AttributeError: 'PID' object has no attribute 'cmd_measurement'
```
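For what it's worth, a minimal sketch of a more defensive version of the failing check (a guess on my part, not verified against the Mycodo codebase): `PID` rows evidently lack a `cmd_measurement` attribute, so reading it through `getattr` with a default avoids the crash.

```python
class PID:
    """Stand-in for Mycodo's PID model, which has no cmd_measurement attribute."""

each_device = PID()

# The original line in utils_dashboard.py raises AttributeError on PID objects:
#     elif each_device.cmd_measurement and each_device.cmd_measurement != '':
# A tolerant equivalent reads the attribute with a default instead:
cmd_measurement = getattr(each_device, "cmd_measurement", "")
if cmd_measurement:
    print("device has a command measurement:", cmd_measurement)
else:
    print("no command measurement; skipping this device")
```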
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. create data for ds18b20
2. create function Pid
3. add to graph : setpoint pid, ds18b20 temp
| closed | 2018-04-27T20:31:12Z | 2018-04-28T01:02:45Z | https://github.com/kizniche/Mycodo/issues/460 | [] | magichammer | 3 |
gradio-app/gradio | data-visualization | 10,639 | show_search not working on gr.Dataframe | ### Describe the bug
Have gradio 5.14.0, when trying to use show_search='search' paramter for gr.Dataframe(..., getting the error below:
TypeError: Dataframe.__init__() got an unexpected keyword argument 'show_search'
Other parameters are working.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
df_notes = gr.Dataframe(show_search='search', show_fullscreen_button=True, interactive=True, visible=True)
```
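A quick way to check whether an installed build actually accepts the keyword (my assumption is that `show_search` simply landed in a release newer than 5.14.0; the two stub classes below only stand in for different gradio builds):

```python
import inspect

def accepts_kwarg(cls, name):
    """True if cls.__init__ declares `name` as an explicit parameter."""
    return name in inspect.signature(cls.__init__).parameters

# On a real install you would call accepts_kwarg(gr.Dataframe, "show_search").
class DataframeOld:  # stand-in for a build without the parameter
    def __init__(self, interactive=True, show_fullscreen_button=False): ...

class DataframeNew:  # stand-in for a build that added it
    def __init__(self, interactive=True, show_search="none"): ...

print(accepts_kwarg(DataframeOld, "show_search"))  # -> False
print(accepts_kwarg(DataframeNew, "show_search"))  # -> True
```

Note this only inspects explicitly declared parameters, so it would miss a keyword swallowed by `**kwargs`.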
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
5.14.0
```
### Severity
I can work around it | closed | 2025-02-20T20:47:41Z | 2025-02-21T07:33:44Z | https://github.com/gradio-app/gradio/issues/10639 | [
"bug"
] | predsymod | 1 |
huggingface/transformers | python | 36,104 | I get OSError: ... is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models' for valid models | ### System Info
OS: Windows 11
Python: Both 3.11.6 and 3.12.9
Pytorch: Both 2.2.0 and 2.6.0
### Who can help?
_No response_
### Information
- [x] The official example scripts
- [ ] My own modified scripts
### Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import OmDetTurboProcessor, OmDetTurboForObjectDetection
processor = OmDetTurboProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
model = OmDetTurboForObjectDetection.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
```
I can't reproduce this in Colab, so I figure it's my system, but I can't figure out why. I get this when trying to load any model. I also tried RT-DETR and got similar errors.
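Since the traceback below ends with a 401 and "Invalid credentials in Authorization header", my guess (unconfirmed) is that a stale token is being attached to every request, which turns public downloads into auth failures. A quick check of where a token might be coming from:

```python
import os
from pathlib import Path

# Places huggingface_hub commonly reads a token from; a stale value in any of
# them is sent with each request and can cause 401s even on public repos.
sources = {
    "HF_TOKEN env var": bool(os.environ.get("HF_TOKEN")),
    "HUGGING_FACE_HUB_TOKEN env var": bool(os.environ.get("HUGGING_FACE_HUB_TOKEN")),
}
token_file = Path.home() / ".cache" / "huggingface" / "token"
sources["~/.cache/huggingface/token file"] = token_file.exists()

for name, present in sources.items():
    print(f"{name}: {'present' if present else 'absent'}")
```

If any of these are present with an old value, logging out (or removing the stale token file) is worth trying before anything else.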
Full Traceback:
```
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\requests\models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/omlab/omdet-turbo-swin-tiny-hf/resolve/main/preprocessor_config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\utils\hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 860, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 967, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1482, in _raise_on_head_call_error
raise head_call_error
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1374, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 1294, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 278, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\file_download.py", line 302, in _request_wrapper
hf_raise_for_status(response)
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\huggingface_hub\utils\_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-67a9034f-3221be180265fb393ff1b352;afb39851-99d5-491d-8d27-f58783b491da)
Repository Not Found for url: https://huggingface.co/omlab/omdet-turbo-swin-tiny-hf/resolve/main/preprocessor_config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid credentials in Authorization header
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\obkal\Desktop\cbtest\omdet\detect.py", line 5, in <module>
processor = OmDetTurboProcessor.from_pretrained("omlab/omdet-turbo-swin-tiny-hf")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\processing_utils.py", line 974, in from_pretrained
args = cls._get_arguments_from_pretrained(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\processing_utils.py", line 1020, in _get_arguments_from_pretrained
args.append(attribute_class.from_pretrained(pretrained_model_name_or_path, **kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\image_processing_base.py", line 209, in from_pretrained
image_processor_dict, kwargs = cls.get_image_processor_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\image_processing_base.py", line 341, in get_image_processor_dict
resolved_image_processor_file = cached_file(
^^^^^^^^^^^^
File "C:\Users\obkal\Desktop\cbtest\omdet\.venv\Lib\site-packages\transformers\utils\hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: omlab/omdet-turbo-swin-tiny-hf is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
``` | closed | 2025-02-09T20:00:14Z | 2025-02-10T19:36:15Z | https://github.com/huggingface/transformers/issues/36104 | [
"bug"
] | ogkalu2 | 3 |
keras-team/keras | deep-learning | 20,601 | Tensorflow Variable Aggregation testing in Keras 3.7.0 | Forking this from: https://github.com/keras-team/keras/issues/20568
Specifically tagging @james77777778.
tl; dr: It's `SUM`.
------
The recommendation was:
> Since there is no reproducible script for debugging, this is a random guess:
> Before https://github.com/keras-team/keras/commit/28d39c0cc766767f4db54edc8b8ce68d3a05d4b4, the aggregation behavior might have been broken due to incorrect propagation of the aggregation attr to the variables.
Essentially, the training would be an aggregation=None setting (the default value for tf.Variable), which is likely incorrect.
>
>@das-apratim could you first try training the model without using tf.distribute.MirroredStrategy() to check if any NaNs occur?
>
>If the training runs without issues, try adding back tf.distribute.MirroredStrategy() and modifying _map_aggregation in keras/src/backend/tensorflow/core.py as follows:
```python
mapping = {
    "none": tf.VariableAggregation.NONE,
    "sum": tf.VariableAggregation.NONE,
    "mean": tf.VariableAggregation.NONE,
    "only_first_replica": tf.VariableAggregation.NONE,
}
```
>This adjustment reflects the behavior in Keras 3.6. See if the training runs well with this change.
>
>If it does, incrementally restore the original mapping to identify which key is causing the issue. Here's a general guideline:
>
> "sum" is associated with metrics.
"mean" is associated with model/optimizer weights.
"only_first_replica" is associated with counters.
I can make the following report for you:
```
NONE, NONE, NONE, NONE -- okay
NONE, SUM, NONE, NONE -- fail, metrics nan first. Regression metrics go inf, "counting" metrics go nan; 123/341
NONE, NONE, MEAN, NONE -- okay
NONE, NONE, NONE, FIRST -- okay
NONE, NONE, MEAN, FIRST -- okay
```
If it matters at all, this was using a Logical split of an RTX A6000 Ada for testing. | open | 2024-12-05T22:03:59Z | 2025-03-12T05:05:35Z | https://github.com/keras-team/keras/issues/20601 | [
"type:Bug"
] | dryglicki | 5 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 713 | Noise, wheezing in the output audio | Hi! Almost all the output audio that I received had a low voice frequency, and noise could occur in the middle of the voice. On many synthesized audio, the voice had a "hoarseness".
As the source audio, I tried many different voices, many of them with a high-pitched voice.
What could be the problem? Is this an inaccuracy of the model or my mistake?
Thank you in advance for your help!
| closed | 2021-03-26T17:29:48Z | 2021-04-04T07:17:22Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/713 | [] | nikolaev1ma | 3 |
sgl-project/sglang | pytorch | 3,728 | [Bug] Deepseek chatChatCompletion has an additional bos in end of the assistant | ### Checklist
- [x] 1. I have searched related issues but cannot get the expected help.
- [x] 2. The bug has not been fixed in the latest version.
- [x] 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
- [x] 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose Otherwise, it will be closed.
- [x] 5. Please use English, otherwise it will be closed.
### Describe the bug
When sglang processes a chat request, if the last role is assistant, it splits that message off, runs tokenizer.encode on it separately, and then splices the token ids together.
But when I tested DeepSeek R1, I found that tokenizer.encode adds a BOS at the beginning of the token ids by default, which results in an extra BOS after the assistant content in the final token ids, like:
`<|begin▁of▁sentence|><|User|>Tell me a common saying<|Assistant|><|begin▁of▁sentence|>"An apple a day, keeps`
Not sure about the tokenizer behavior of other models. Should we remove the first token of tokenizer.encode?
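A sketch of the guard being asked about (hypothetical helper, not sglang's actual code): encode the trailing assistant message, then drop a leading BOS before splicing the ids onto the prompt.

```python
def encode_continuation(encode, text, bos_token_id):
    """Encode an assistant continuation, stripping a spurious leading BOS.

    `encode` stands in for tokenizer.encode, which for DeepSeek R1 (and many
    other tokenizers) prepends bos_token_id by default.
    """
    ids = encode(text)
    if ids and ids[0] == bos_token_id:
        ids = ids[1:]
    return ids

# Toy tokenizer for illustration: BOS id 0, then one id per word.
fake_encode = lambda text: [0] + [len(w) for w in text.split()]
print(encode_continuation(fake_encode, '"An apple a day, keeps', 0))  # -> [3, 5, 1, 4, 5]
```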
### Reproduction
```python
openai_compatible_messages = [
    {
        "role": "user",
        "content": "Tell me a common saying"
    },
    {
        "role": "assistant",
        "content": "\"An apple a day, keeps"
    }
]
```
### Environment
python3 | closed | 2025-02-20T07:59:54Z | 2025-02-21T08:30:15Z | https://github.com/sgl-project/sglang/issues/3728 | [
"deepseek"
] | Planck-a | 5 |
ultralytics/ultralytics | pytorch | 19,250 | mode.val(save_json=True),COCO API AssertionError: Results do not correspond to current coco set. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
trian.py
```
if __name__ == '__main__':
from ultralytics import YOLO
model = YOLO("yolo11n.pt") # load a pretrained model (recommended for training)
model.train(data="coco8.yaml",device=0,batch=-1)
```
yolo2json.py
```
import os
import json
from PIL import Image
# Set dataset paths
output_dir = "D:\YOLOv11\datasets\coco8"  # change to the YOLO-format dataset path
dataset_path = "D:\YOLOv11\datasets\coco8"  # change to the path where you want the COCO-format dataset output
images_path = os.path.join(dataset_path,"images")
labels_path = os.path.join(dataset_path,"labels")
# Category mapping
categories = [
{"id": 0, "name": "person"},
{"id": 1, "name": "bicycle"},
{"id": 2, "name": "car"},
{"id": 3, "name": "motorcycle"},
{"id": 4, "name": "airplane"},
{"id": 5, "name": "bus"},
{"id": 6, "name": "train"},
{"id": 7, "name": "truck"},
{"id": 8, "name": "boat"},
{"id": 9, "name": "traffic light"},
{"id": 10, "name": "fire hydrant"},
{"id": 11, "name": "stop sign"},
{"id": 12, "name": "parking meter"},
{"id": 13, "name": "bench"},
{"id": 14, "name": "bird"},
{"id": 15, "name": "cat"}, # 修改这里
{"id": 16, "name": "dog"},
{"id": 17, "name": "horse"},
{"id": 18, "name": "sheep"},
{"id": 19, "name": "cow"},
{"id": 20, "name": "elephant"},
{"id": 21, "name": "bear"},
{"id": 22, "name": "zebra"},
{"id": 23, "name": "giraffe"},
{"id": 24, "name": "backpack"},
{"id": 25, "name": "umbrella"},
{"id": 26, "name": "handbag"},
{"id": 27, "name": "tie"},
{"id": 28, "name": "suitcase"},
{"id": 29, "name": "frisbee"},
{"id": 30, "name": "skis"},
{"id": 31, "name": "snowboard"},
{"id": 32, "name": "sports ball"},
{"id": 33, "name": "kite"},
{"id": 34, "name": "baseball bat"},
{"id": 35, "name": "baseball glove"},
{"id": 36, "name": "skateboard"},
{"id": 37, "name": "surfboard"},
{"id": 38, "name": "tennis racket"},
{"id": 39, "name": "bottle"},
{"id": 40, "name": "wine glass"},
{"id": 41, "name": "cup"},
{"id": 42, "name": "fork"},
{"id": 43, "name": "knife"},
{"id": 44, "name": "spoon"},
{"id": 45, "name": "bowl"},
{"id": 46, "name": "banana"},
{"id": 47, "name": "apple"},
{"id": 48, "name": "sandwich"},
{"id": 49, "name": "orange"},
{"id": 50, "name": "broccoli"},
{"id": 51, "name": "carrot"},
{"id": 52, "name": "hot dog"},
{"id": 53, "name": "pizza"},
{"id": 54, "name": "donut"},
{"id": 55, "name": "cake"},
{"id": 56, "name": "chair"},
{"id": 57, "name": "couch"},
{"id": 58, "name": "potted plant"},
{"id": 59, "name": "bed"},
{"id": 60, "name": "dining table"},
{"id": 61, "name": "toilet"},
{"id": 62, "name": "tv"},
{"id": 63, "name": "laptop"},
{"id": 64, "name": "mouse"},
{"id": 65, "name": "remote"},
{"id": 66, "name": "keyboard"},
{"id": 67, "name": "cell phone"},
{"id": 68, "name": "microwave"},
{"id": 69, "name": "oven"},
{"id": 70, "name": "toaster"},
{"id": 71, "name": "sink"},
{"id": 72, "name": "refrigerator"},
{"id": 73, "name": "book"},
{"id": 74, "name": "clock"},
{"id": 75, "name": "vase"},
{"id": 76, "name": "scissors"},
{"id": 77, "name": "teddy bear"},
{"id": 78, "name": "hair drier"},
{"id": 79, "name": "toothbrush"}
]
# Function for converting YOLO format to COCO format
def convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height):
x_min = (x_center - width / 2) * img_width
y_min = (y_center - height / 2) * img_height
width = width * img_width
height = height * img_height
return [x_min, y_min, width, height]
# Initialize the COCO data structure
def init_coco_format():
return {
"images": [],
"annotations": [],
"categories": categories
}
# Process each dataset split
for split in ['train', 'val']: #'test'
coco_format = init_coco_format()
annotation_id = 1
for img_name in os.listdir(os.path.join(images_path, split)):
if img_name.lower().endswith(('.png', '.jpg', '.jpeg')):
img_path = os.path.join(images_path, split, img_name)
label_path = os.path.join(labels_path, split, img_name.replace("jpg", "txt"))
img = Image.open(img_path)
img_width, img_height = img.size
image_info = {
"file_name": img_name,
"id": len(coco_format["images"]) + 1,
"width": img_width,
"height": img_height
}
coco_format["images"].append(image_info)
if os.path.exists(label_path):
with open(label_path, "r") as file:
for line in file:
category_id, x_center, y_center, width, height = map(float, line.split())
bbox = convert_yolo_to_coco(x_center, y_center, width, height, img_width, img_height)
annotation = {
"id": annotation_id,
"image_id": image_info["id"],
"category_id": int(category_id) + 1,
"bbox": bbox,
"area": bbox[2] * bbox[3],
"iscrowd": 0
}
coco_format["annotations"].append(annotation)
annotation_id += 1
# Save a JSON file for each split
with open(os.path.join(output_dir, f"{split}_coco_format.json"), "w") as json_file:
json.dump(coco_format, json_file, indent=4)
```
vail.py
```
if __name__ == '__main__':
from ultralytics import YOLO
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval
model = YOLO("runs/detect/train11/weights/best.pt") # load a pretrained model (recommended for training)
results=model.val(data="coco8.yaml",save_json=True,device=0,batch=1)
anno = COCO("D:/YOLOv11/datasets/coco8/val_coco_format.json") # Load your JSON annotations
pred = anno.loadRes(f"{results.save_dir}/predictions.json") # Load predictions.json
val = COCOeval(anno, pred, "bbox")
val.evaluate()
val.accumulate()
val.summarize()
```
vail.py error output
```
(yolov11) D:\YOLOv11>python vail.py
Ultralytics 8.3.18 🚀 Python-3.11.7 torch-2.6.0+cu126 CUDA:0 (NVIDIA GeForce RTX 4060 Ti, 16380MiB)
YOLO11n summary (fused): 238 layers, 2,616,248 parameters, 0 gradients, 6.5 GFLOPs
val: Scanning D:\YOLOv11\datasets\coco8\labels\val.cache... 4 images, 0 backgrounds, 0 corrupt: 100%|██████████| 4/4 [00:00<?, ?it/s]
Class Images Instances Box(P R mAP50 mAP50-95): 100%|██████████| 4/4 [00:01<00:00, 2.88it/s]
all 4 17 0.802 0.66 0.864 0.593
person 3 10 0.82 0.461 0.695 0.347
dog 1 1 0.707 1 0.995 0.697
horse 1 2 0.835 1 0.995 0.473
elephant 1 2 0.779 0.5 0.508 0.153
umbrella 1 1 0.669 1 0.995 0.995
potted plant 1 1 1 0 0.995 0.895
Speed: 2.2ms preprocess, 26.6ms inference, 0.0ms loss, 14.8ms postprocess per image
Saving runs\detect\val18\predictions.json...
Results saved to runs\detect\val18
loading annotations into memory...
Done (t=0.00s)
creating index...
index created!
Loading and preparing results...
Traceback (most recent call last):
File "D:\YOLOv11\vail.py", line 56, in <module>
pred = anno.loadRes(f"{results.save_dir}/predictions.json") # Load predictions.json
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ProgramData\anaconda3\Lib\site-packages\pycocotools\coco.py", line 327, in loadRes
assert set(annsImgIds) == (set(annsImgIds) & set(self.getImgIds())), \
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AssertionError: Results do not correspond to current coco set
```
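A guess at the cause (I haven't verified ultralytics internals): `predictions.json` keys detections by an `image_id` derived from the image filename stem, while the conversion script above assigns sequential ids (`len(coco_format["images"]) + 1`), so `loadRes` sees image ids that don't exist in the ground-truth set. Deriving the id from the stem in the converter would keep the two files in sync:

```python
from pathlib import Path

def coco_image_id(img_name):
    """Image id consistent with a filename-stem-based predictions.json.

    Hypothetical mapping: numeric stems become ints, other stems stay strings.
    """
    stem = Path(img_name).stem
    return int(stem) if stem.isdigit() else stem

# e.g. replace `"id": len(coco_format["images"]) + 1` with coco_image_id(img_name)
print(coco_image_id("000000000036.jpg"))   # -> 36
print(coco_image_id("street_scene.jpg"))   # -> street_scene
```

Separately, the annotations use `int(category_id) + 1` while the `categories` list ids start at 0, which looks like an off-by-one worth double-checking.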
### Additional
I don't know why there's an error | closed | 2025-02-14T15:42:45Z | 2025-02-18T11:19:04Z | https://github.com/ultralytics/ultralytics/issues/19250 | [
"question",
"detect"
] | SDIX-7 | 7 |
coqui-ai/TTS | deep-learning | 3,538 | Numpy 1.22.0 and 1.24.3 Problem with TTS 0.22.0 and Librosa 0.10.0 and 0.10.1 | Hello everyone,
I'm working as a scientific assistant on Linux, and we had a compatibility problem with TTS 0.22.0, Librosa 0.10.0 and Numpy 1.22.0 after a TTS update from 0.11.0 to 0.22.0.
With these updates we got the following traceback:
```
Traceback (most recent call last):
File "/project/translate_lecture.py", line 195, in <module>
main(
File "/project/translate_lecture.py", line 96, in main
speaker.speak(use_gpu=use_cuda)
File "/project/src/speaker.py", line 49, in speak
self._add_text(segment=segment, use_gpu=use_gpu)
File "/project/src/speaker.py", line 110, in _add_text
file_handler.adjust_audio_length(
File "/project/utils/file_handler.py", line 122, in adjust_audio_length
short_y = librosa.effects.time_stretch(y=y, rate=factor)
File "/opt/conda/lib/python3.10/site-packages/librosa/effects.py", line 245, in time_stretch
stft_stretch = core.phase_vocoder(
File "/opt/conda/lib/python3.10/site-packages/librosa/core/spectrum.py", line 1457, in phase_vocoder
d_stretch[..., t] = util.phasor(phase_acc, mag=mag)
File "/opt/conda/lib/python3.10/site-packages/librosa/util/utils.py", line 2602, in phasor
z = _phasor_angles(angles)
File "/opt/conda/lib/python3.10/site-packages/numba/np/ufunc/dufunc.py", line 190, in __call__
return super().__call__(*args, **kws)
numpy.core._exceptions._UFuncNoLoopError: ufunc '_phasor_angles' did not contain a loop with signature matching types <class 'numpy.dtype[float32]'> -> None
```
After this, I looked into the requirements of TTS 0.22.0 to check the numpy compatibility:
```
# core deps
numpy==1.22.0;python_version<="3.10"
numpy>=1.24.3;python_version>"3.10"
librosa>=0.10.0
...
```
and Librosa 0.10.1:
```
install_requires =
audioread >= 2.1.9
numpy >= 1.20.3, != 1.22.0, != 1.22.1, != 1.22.2
...
```
With this knowledge I updated from Python 3.10 (uninstalled it) to Python 3.11 and defined ``numpy==1.24.3`` in our requirements, because I thought it needed the next compatible version, as stated in the TTS requirements. But while building, pip said:
```
#17 12.18 The conflict is caused by:
#17 12.18 The user requested numpy==1.24.3
#17 12.18 tts 0.22.0 depends on numpy==1.22.0; python_version <= "3.10"
#17 12.18
```
I already tried installing numpy, librosa and tts without a version tag, but I again got numpy 1.22.0 with librosa 0.10.0 and TTS 0.22.0, and the same error as at the beginning.
Why does the resolver stop at the first requirement marker and not fall through to the next one? Does anyone have an idea how to fix it? Install Python 3.10 and 3.11?
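One check worth running first (my assumption: pip evaluated the `python_version <= "3.10"` marker as true because the interpreter actually driving the install — for example the one baked into the `/opt/conda` image — is still 3.10, regardless of which Python was installed next to it):

```python
import sys

# TTS's requirement markers are evaluated against THIS interpreter
# (a tuple comparison approximating the PEP 508 marker):
#     numpy==1.22.0;  python_version <= "3.10"
#     numpy>=1.24.3;  python_version >  "3.10"
marker_matches = sys.version_info[:2] <= (3, 10)
print(sys.version.split()[0], "-> the numpy==1.22.0 pin applies:", marker_matches)
```

If this prints a 3.10 interpreter even though 3.11 was installed, that would explain why pip kept enforcing `numpy==1.22.0`: pip doesn't try marker branches one after another — only the branch matching the running interpreter is ever active.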
_Originally posted by @OrbitPeppermint in https://github.com/coqui-ai/TTS/discussions/3537_ | closed | 2024-01-23T11:11:46Z | 2024-06-26T16:49:23Z | https://github.com/coqui-ai/TTS/issues/3538 | [
"wontfix"
] | OrbitPeppermint | 6 |
tatsu-lab/stanford_alpaca | deep-learning | 25 | Public release of model weights | Congratulations on the fine-tune! We have observed some fantastic performance through the provided web interface.
AFAIK the original Llama model was released under GNU/GPL, you should be able to distribute derivative work respecting this original license, correct? (Even if the original model weights have not officially been distributed to the public yet)
Will you provide some sort of wait-list to notify us when the model weights are made available?
Interested in as much information as you may share on this, again, congratulations and thank your impressive work!
https://github.com/facebookresearch/llama/blob/main/LICENSE | closed | 2023-03-14T21:44:34Z | 2023-03-15T20:02:13Z | https://github.com/tatsu-lab/stanford_alpaca/issues/25 | [] | topiconcept | 3 |
521xueweihan/HelloGitHub | python | 2,063 | Is it necessary to record the generation time of each issue | ## Project Recommendation
- Project URL: only open-source projects hosted on GitHub are accepted; please provide the GitHub project URL
- Category: please choose from (C, C#, C++, CSS, Go, Java, JS, Kotlin, Objective-C, PHP, Python, Ruby, Swift, Other, Books, Machine Learning)
- Planned follow-up updates for the project:
- Project description:
  - Required: what the project is, what it can be used for, and its distinctive features or the pain points it solves
  - Optional: what scenarios it suits, and what beginners can learn from it
  - Description length (excluding sample code): 10 - 256 characters
- Reason for recommending: what makes it stand out? What pain point does it solve?
- Sample code: (optional) length: 1-20 lines
- Screenshot: (optional) gif/png/jpg
## Note (please delete the following content before submitting)
> Click "Preview" above for easier reading of the content below.
Ways to improve the chance of a project being accepted:
1. Search for the project URL on the HelloGitHub homepage (https://hellogithub.com) to check whether the project you want to recommend has already been featured.
2. Adjust the project according to the [project review criteria](https://github.com/521xueweihan/HelloGitHub/issues/271)
3. If the project you recommend is featured in a HelloGitHub monthly issue, your GitHub account will be shown in the [contributors list](https://github.com/521xueweihan/HelloGitHub/blob/master/content/contributors.md), **and you will be notified in this issue**.
Thanks again for your support of the HelloGitHub project!
| closed | 2022-01-08T04:59:53Z | 2022-01-08T04:59:57Z | https://github.com/521xueweihan/HelloGitHub/issues/2063 | [
"恶意issue"
] | baymax55 | 1 |
lepture/authlib | django | 305 | ResourceProtector decorator doesn't work with class-based Django views | **Describe the bug**
When using the ResourceProtector decorator (as documented [here](https://docs.authlib.org/en/latest/django/2/resource-server.html)) on a Django REST Framework **class-based view**'s method:
```python
class MyView(APIView):
    @require_oauth("order")
    def post(self, request, *args, **kwargs):
        return super().post(request, *args, **kwargs)
```
I get the following error:
> 'MyView' object has no attribute 'get_raw_uri'
This is because in this case, the first parameter in the [decorator's `decorated` function](https://github.com/lepture/authlib/blob/ffeeaa9fd7b5bc4ea7cae9fcf0c2ad9d7f5cf22a/authlib/integrations/django_oauth2/resource_protector.py#L36), will be the **view object**, rather than the request.
Adding a `view` parameter as the first parameter in the function fixes this.
```python
def __call__(self, scopes=None, optional=False):
    def wrapper(f):
        @functools.wraps(f)
        def decorated(view, request, *args, **kwargs):  # <= Change here
            try:
                token = self.acquire_token(request, scopes)
                request.oauth_token = token
```
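The mismatch can be reproduced without Django or Authlib: a function decorator that expects `request` first receives the view instance instead when applied to a method. A minimal pure-Python sketch of the proposed fix (toy stand-ins, not Authlib's actual code):

```python
import functools

def require_scope(scope):
    """Toy stand-in for ResourceProtector.__call__ with the proposed fix."""
    def wrapper(f):
        @functools.wraps(f)
        def decorated(view, request, *args, **kwargs):  # view-aware signature
            request["token_scope"] = scope  # stand-in for acquire_token()
            return f(view, request, *args, **kwargs)
        return decorated
    return wrapper

class MyView:
    @require_scope("order")
    def post(self, request):
        return request["token_scope"]

# The view instance is absorbed by the extra parameter, so `request`
# lines up correctly and the handler still runs as a bound method.
assert MyView().post({}) == "order"
```

Until a fix lands upstream, Django's `django.utils.decorators.method_decorator` can also adapt the existing function decorator for class-based view methods.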
**Error Stacks**
```
File "/.venv/lib/python3.6/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 39, in decorated
token = self.acquire_token(request, scopes)
File "/.venv/lib/python3.6/site-packages/authlib/integrations/django_oauth2/resource_protector.py", line 25, in acquire_token
url = request.get_raw_uri()
AttributeError: 'MyView' object has no attribute 'get_raw_uri'
```
**To Reproduce**
See code example in the bug description above.
**Expected behavior**
The decorator to work the same way as it does for function-based views.
**Environment:**
- OS: OSX
- Python Version: 3.6.9
- Authlib Version: 1.0.0.dev0
**Additional context**
I'm available to create a PR to fix this if you tell me the approach you want to take here. | closed | 2020-12-20T16:08:15Z | 2022-11-17T09:36:30Z | https://github.com/lepture/authlib/issues/305 | [
"bug"
] | thatguysimon | 3 |
MaartenGr/BERTopic | nlp | 1,132 | .transform() does not generate probability distribution despite calculate_probabilities=True | When I fit BERTopic on my documents and embeddings with fit_transform(), I get a list of topic assignments and a 2d array of soft clustering probability distributions out. But if I then take that fitted model and feed it new or the same data again using .transform, I get a list of topic assignments, but no soft clustering probabilities - instead I get a 1d array with a single value per document (0 for data points considered noise).
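For reference, the difference between the two outputs is hard assignment (one label per document) versus a soft distribution (one probability per topic per document). Conceptually a soft distribution can be derived from per-topic similarities, e.g. (toy illustration, not BERTopic's internals):

```python
import math

def soft_assign(distances):
    """Toy soft clustering: softmax over negative topic distances."""
    weights = [math.exp(-d) for d in distances]
    total = sum(weights)
    return [w / total for w in weights]

dist = soft_assign([0.2, 1.5, 0.9])   # one entry per topic
assert abs(sum(dist) - 1.0) < 1e-9    # a full distribution for the document
assert dist.index(max(dist)) == 0     # argmax recovers the hard label
```

Recent BERTopic releases also expose `approximate_distribution()` for computing such per-document topic distributions on new data after fitting, which may be the intended route here.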
How do I get the distributions for new data? | closed | 2023-03-28T21:04:29Z | 2023-05-23T09:25:46Z | https://github.com/MaartenGr/BERTopic/issues/1132 | [] | kuchenrolle | 3 |
onnx/onnx | machine-learning | 5,907 | test FAILED QuantizeLinearOpMLFloat16Test.Float8 | # Bug Report
When executing the build command given below, the build finished with one failing test case. Memory leaks also appear to have been detected. The build.log is attached.
### Describe the bug
```
[ FAILED ] QuantizeLinearOpMLFloat16Test.Float8
[----------] Global test environment tear-down
[==========] 4054 tests from 285 test suites ran. (2197772 ms total)
[ PASSED ] 4053 tests.
[ FAILED ] 1 test, listed below:
[ FAILED ] QuantizeLinearOpMLFloat16Test.Float8
1 FAILED TEST
YOU HAVE 15 DISABLED TESTS
```
### System information
Edition Windows 10 Pro for Workstations
Version 22H2
Installed on 6/29/2022
OS build 19045.3930
Experience Windows Feature Experience Pack 1000.19053.1000.0
Microsoft Visual Studio Community 2022
Version 17.6.5
VisualStudio.17.Release/17.6.5+33829.357
Microsoft .NET Framework
Version 4.8.09037
Installed Version: Community
Visual C++ 2022 00482-90000-00000-AA907
Microsoft Visual C++ 2022
ASP.NET and Web Tools 17.6.326.62524
ASP.NET and Web Tools
Azure App Service Tools v3.0.0 17.6.326.62524
Azure App Service Tools v3.0.0
Azure Functions and Web Jobs Tools 17.6.326.62524
Azure Functions and Web Jobs Tools
C# Tools 4.6.0-3.23259.8+c3cc1d0ceeab1a65da0217e403851a1e8a30086a
C# components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Common Azure Tools 1.10
Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Cookiecutter 17.0.23087.1
Provides tools for finding, instantiating and customizing templates in cookiecutter format.
GitExtensions 1.0
Git Extensions is a graphical user interface for Git that allows you to control Git without using the command-line
GitHub Copilot 1.100.0.0 (v1.100.0.0@6ff082509)
GitHub Copilot is an AI pair programmer that helps you write code faster and with less work.
GitHub Copilot Agent 1.100.306 (v1.100.0)
Linux Core Dump Debugging 1.0.9.33801
Enables debugging of Linux core dumps.
Microsoft JVM Debugger 1.0
Provides support for connecting the Visual Studio debugger to JDWP compatible Java Virtual Machines
NuGet Package Manager 6.6.0
NuGet Package Manager in Visual Studio. For more information about NuGet, visit https://docs.nuget.org/
NVIDIA CUDA 11.7 Wizards 11.7
Wizards to create new NVIDIA CUDA projects and source files.
NVIDIA Nsight Visual Studio Edition 2022.2.0.22095
NVIDIA Nsight Visual Studio Edition provides tools for GPGPU and graphics development. Copyright © NVIDIA 2010 - 2022.
•Direct3D® and DirectX® are registered trademarks of Microsoft Corporation in the United States and/or other countries.
•Microsoft Detours is used under the Professional license (http://research.microsoft.com/en-us/projects/detours/).
•Gardens Point Parser Generator Copyright 2005 Queensland University of Technology (QUT). All rights reserved.
•Icons from Axialis Software used under the licensing terms found here: www.axialis.com
•NLog Copyright © 2004-2006 Jaroslaw Kowalski (jaak@jkowalski.net)
•zlib and libpng used under the zlib/libpnc license (http://opensource.org/licenses/Zlib)
•Breakpad Copyright ©2006, Google Inc. All rights reserved.
•The OpenGL Extension Wrangler Library
Copyright ©2008-2016, Nigel Stewart (nigels@users.sourceforge.net), Copyright ©2002-2008, Milan Ikits (milan.ikits@ieee.org), Copyright ©2002-2008, Marcelo E. Magallon (mmagallo@debian.org), Copyright ©2002, Lev Povalahev.
All rights reserved.
•LIBSSH2 Copyright ©2004-2007 Sara Golemon (sarag@libssh2.org), Copyright ©2005,2006 Mikhail Gusarov (dottedmag@dottedmag.net),Copyright ©2006-2007 The Written Word, Inc.,Copyright ©2007 Eli Fant (elifantu@mail.ru),Copyright ©2009-2014 Daniel Stenberg., Copyright ©2008, 2009 Simon Josefsson.
All rights reserved.
•Protobuf Copyright ©2014, Google Inc. All rights reserved.
•xxHASH Library Copyright ©2012-2014, Yann Collet. All rights reserved.
•FMT Copyright ©2012 - 2016, Victor Zverovich
•Font Awesome Copyright 2018 Fonticons, Inc.
•ELF Definitions Copyright (c) 2010 Joseph Koshy, All rights reserved.
Warning: This computer program is protected by copyright law and international treaties. Unauthorized reproduction or distribution of this program, or any portion of it, may result in severe civil and criminal penalties, and will be prosecuted to the maximum extent possible under the law.
NVIDIA Nsight Visual Studio Edition - CUDA support 2022.2.0.22095
NVIDIA Nsight Visual Studio Edition - CUDA support provides tools for CUDA development and debugging.
Python - Django support 17.0.23087.1
Provides templates and integration for the Django web framework.
Python - Profiling support 17.0.23087.1
Profiling support for Python projects.
Python with Pylance 17.0.23087.1
Provides IntelliSense, projects, templates, debugging, interactive windows, and other support for Python developers.
Razor (ASP.NET Core) 17.6.0.2327201+a6a61fdfa748eaa65aab53dab583276e26af4a3e
Provides languages services for ASP.NET Core Razor.
SQL Server Data Tools 17.6.13.0
Microsoft SQL Server Data Tools
Test Adapter for Boost.Test 1.0
Enables Visual Studio's testing tools with unit tests written for Boost.Test. The use terms and Third Party Notices are available in the extension installation directory.
Test Adapter for Google Test 1.0
Enables Visual Studio's testing tools with unit tests written for Google Test. The use terms and Third Party Notices are available in the extension installation directory.
TypeScript Tools 17.0.20329.2001
TypeScript Tools for Microsoft Visual Studio
Visual Basic Tools 4.6.0-3.23259.8+c3cc1d0ceeab1a65da0217e403851a1e8a30086a
Visual Basic components used in the IDE. Depending on your project type and settings, a different version of the compiler may be used.
Visual C++ for Cross Platform Mobile Development (Android) 17.0.33606.364
Visual C++ for Cross Platform Mobile Development (Android)
Visual C++ for Linux Development 1.0.9.33801
Visual C++ for Linux Development
Visual F# Tools 17.6.0-beta.23174.5+0207bea1afae48d9351ac26fb51afc8260de0a97
Microsoft Visual F# Tools
Visual Studio IntelliCode 2.2
AI-assisted development for Visual Studio.
- windows :
- ONNX version (*e.g. 1.13*):
- Python version:
- GCC/Compiler version (if compiling from source):
- CMake version:
- Protobuf version:
- Visual Studio version (if applicable):
ONNX source git hash
commit 2ac381c55397dffff327cc6efecf6f95a70f90a1 (HEAD, tag: v1.16.3, origin/rel-1.16.3)
### Reproduction instructions
```
.\build.bat --use_cuda --cudnn_home "C:\Program Files\NVIDIA\CUDNN\v8.9" --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7"
```
| closed | 2024-02-05T13:01:23Z | 2024-02-06T12:02:22Z | https://github.com/onnx/onnx/issues/5907 | [
"bug"
] | daviddrell | 3 |
autogluon/autogluon | computer-vision | 3,915 | [BUG] Unable to work with Autogluon Object Detection | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [x] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [x] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [x] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
Unable to work with Autogluon in Kaggle env
**Expected behavior**
Code able to run without any error
**To Reproduce**
!pip install autogluon
```python
from autogluon.multimodal import MultiModalPredictor

predictor = MultiModalPredictor(label=label_col).fit(
    train_data=train_data,
    time_limit=120
)
```
OSError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from autogluon.multimodal import MultiModalPredictor
3 predictor = MultiModalPredictor(label=label_col).fit(
4 train_data=train_data,
5 time_limit=120
6 )
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/__init__.py:6
3 except ImportError:
4 pass
----> 6 from . import constants, data, learners, models, optimization, predictor, problem_types, utils
7 from .predictor import MultiModalPredictor
8 from .utils import download
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/__init__.py:2
1 from . import collator, infer_types, randaug, utils
----> 2 from .datamodule import BaseDataModule
3 from .dataset import BaseDataset
4 from .dataset_mmlab import MultiImageMixDataset
File /opt/conda/lib/python3.10/site-packages/autogluon/multimodal/data/datamodule.py:4
1 from typing import Dict, List, Optional, Union
3 import pandas as pd
----> 4 from lightning.pytorch import LightningDataModule
5 from torch.utils.data import DataLoader, Dataset
7 from ..constants import PREDICT, TEST, TRAIN, VALIDATE
File /opt/conda/lib/python3.10/site-packages/lightning/__init__.py:25
23 from lightning.fabric.fabric import Fabric # noqa: E402
24 from lightning.fabric.utilities.seed import seed_everything # noqa: E402
---> 25 from lightning.pytorch.callbacks import Callback # noqa: E402
26 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402
27 from lightning.pytorch.trainer import Trainer # noqa: E402
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/__init__.py:26
23 _logger.propagate = False
25 from lightning.fabric.utilities.seed import seed_everything # noqa: E402
---> 26 from lightning.pytorch.callbacks import Callback # noqa: E402
27 from lightning.pytorch.core import LightningDataModule, LightningModule # noqa: E402
28 from lightning.pytorch.trainer import Trainer # noqa: E402
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/__init__.py:14
1 # Copyright The Lightning AI team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from lightning.pytorch.callbacks.batch_size_finder import BatchSizeFinder
15 from lightning.pytorch.callbacks.callback import Callback
16 from lightning.pytorch.callbacks.checkpoint import Checkpoint
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/batch_size_finder.py:24
21 from typing import Optional
23 import lightning.pytorch as pl
---> 24 from lightning.pytorch.callbacks.callback import Callback
25 from lightning.pytorch.tuner.batch_size_scaling import _scale_batch_size
26 from lightning.pytorch.utilities.exceptions import _TunerExitException, MisconfigurationException
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/callbacks/callback.py:22
19 from torch.optim import Optimizer
21 import lightning.pytorch as pl
---> 22 from lightning.pytorch.utilities.types import STEP_OUTPUT
25 class Callback:
26 r"""Abstract base class used to build new callbacks.
27
28 Subclass this class and override any of the relevant hooks
29
30 """
File /opt/conda/lib/python3.10/site-packages/lightning/pytorch/utilities/types.py:40
38 from torch import Tensor
39 from torch.optim import Optimizer
---> 40 from torchmetrics import Metric
41 from typing_extensions import NotRequired, Required
43 from lightning.fabric.utilities.types import _TORCH_LRSCHEDULER, LRScheduler, ProcessGroup, ReduceLROnPlateau
File /opt/conda/lib/python3.10/site-packages/torchmetrics/__init__.py:14
11 _PACKAGE_ROOT = os.path.dirname(__file__)
12 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT)
---> 14 from torchmetrics import functional # noqa: E402
15 from torchmetrics.aggregation import ( # noqa: E402
16 CatMetric,
17 MaxMetric,
(...)
22 SumMetric,
23 )
24 from torchmetrics.audio._deprecated import _PermutationInvariantTraining as PermutationInvariantTraining # noqa: E402
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.functional.audio._deprecated import _permutation_invariant_training as permutation_invariant_training
15 from torchmetrics.functional.audio._deprecated import _pit_permutate as pit_permutate
16 from torchmetrics.functional.audio._deprecated import (
17 _scale_invariant_signal_distortion_ratio as scale_invariant_signal_distortion_ratio,
18 )
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.functional.audio.pit import permutation_invariant_training, pit_permutate
15 from torchmetrics.functional.audio.sdr import (
16 scale_invariant_signal_distortion_ratio,
17 signal_distortion_ratio,
18 source_aggregated_signal_distortion_ratio,
19 )
20 from torchmetrics.functional.audio.snr import (
21 complex_scale_invariant_signal_noise_ratio,
22 scale_invariant_signal_noise_ratio,
23 signal_noise_ratio,
24 )
File /opt/conda/lib/python3.10/site-packages/torchmetrics/functional/audio/pit.py:22
19 from torch import Tensor
20 from typing_extensions import Literal
---> 22 from torchmetrics.utilities import rank_zero_warn
23 from torchmetrics.utilities.imports import _SCIPY_AVAILABLE
25 # _ps_dict: cache of permutations
26 # it's necessary to cache it, otherwise it will consume a large amount of time
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/__init__.py:14
1 # Copyright The Lightning team.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
(...)
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
---> 14 from torchmetrics.utilities.checks import check_forward_full_state_property
15 from torchmetrics.utilities.distributed import class_reduce, reduce
16 from torchmetrics.utilities.prints import rank_zero_debug, rank_zero_info, rank_zero_warn
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/checks.py:25
22 import torch
23 from torch import Tensor
---> 25 from torchmetrics.metric import Metric
26 from torchmetrics.utilities.data import select_topk, to_onehot
27 from torchmetrics.utilities.enums import DataType
File /opt/conda/lib/python3.10/site-packages/torchmetrics/metric.py:30
27 from torch import Tensor
28 from torch.nn import Module
---> 30 from torchmetrics.utilities.data import (
31 _flatten,
32 _squeeze_if_scalar,
33 dim_zero_cat,
34 dim_zero_max,
35 dim_zero_mean,
36 dim_zero_min,
37 dim_zero_sum,
38 )
39 from torchmetrics.utilities.distributed import gather_all_tensors
40 from torchmetrics.utilities.exceptions import TorchMetricsUserError
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/data.py:22
19 from torch import Tensor
21 from torchmetrics.utilities.exceptions import TorchMetricsUserWarning
---> 22 from torchmetrics.utilities.imports import _TORCH_GREATER_EQUAL_1_12, _XLA_AVAILABLE
23 from torchmetrics.utilities.prints import rank_zero_warn
25 METRIC_EPS = 1e-6
File /opt/conda/lib/python3.10/site-packages/torchmetrics/utilities/imports.py:50
48 _GAMMATONE_AVAILABEL: bool = package_available("gammatone")
49 _TORCHAUDIO_AVAILABEL: bool = package_available("torchaudio")
---> 50 _TORCHAUDIO_GREATER_EQUAL_0_10: Optional[bool] = compare_version("torchaudio", operator.ge, "0.10.0")
51 _SACREBLEU_AVAILABLE: bool = package_available("sacrebleu")
52 _REGEX_AVAILABLE: bool = package_available("regex")
File /opt/conda/lib/python3.10/site-packages/lightning_utilities/core/imports.py:77, in compare_version(package, op, version, use_base_version)
68 """Compare package version with some requirements.
69
70 >>> compare_version("torch", operator.ge, "0.1")
(...)
74
75 """
76 try:
---> 77 pkg = importlib.import_module(package)
78 except (ImportError, pkg_resources.DistributionNotFound):
79 return False
File /opt/conda/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File /opt/conda/lib/python3.10/site-packages/torchaudio/__init__.py:1
----> 1 from . import ( # noqa: F401
2 _extension,
3 compliance,
4 datasets,
5 functional,
6 io,
7 kaldi_io,
8 models,
9 pipelines,
10 sox_effects,
11 transforms,
12 utils,
13 )
14 from ._backend.common import AudioMetaData # noqa
16 try:
File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/__init__.py:45
43 _IS_ALIGN_AVAILABLE = False
44 if _IS_TORCHAUDIO_EXT_AVAILABLE:
---> 45 _load_lib("libtorchaudio")
47 import torchaudio.lib._torchaudio # noqa
49 _check_cuda_version()
File /opt/conda/lib/python3.10/site-packages/torchaudio/_extension/utils.py:64, in _load_lib(lib)
62 if not path.exists():
63 return False
---> 64 torch.ops.load_library(path)
65 torch.classes.load_library(path)
66 return True
File /opt/conda/lib/python3.10/site-packages/torch/_ops.py:643, in _Ops.load_library(self, path)
638 path = _utils_internal.resolve_library_path(path)
639 with dl_open_guard():
640 # Import the shared library into the process, thus running its
641 # static (global) initialization code in order to register custom
642 # operators with the JIT.
--> 643 ctypes.CDLL(path)
644 self.loaded_libraries.add(path)
File /opt/conda/lib/python3.10/ctypes/__init__.py:374, in CDLL.__init__(self, name, mode, handle, use_errno, use_last_error, winmode)
371 self._FuncPtr = _FuncPtr
373 if handle is None:
--> 374 self._handle = _dlopen(self._name, mode)
375 else:
376 self._handle = handle
OSError: /opt/conda/lib/python3.10/site-packages/torchaudio/lib/libtorchaudio.so: undefined symbol: _ZN3c10ltERKNS_6SymIntEi
```python
INSTALLED VERSIONS
------------------
date : 2024-02-12
time : 18:42:30.368085
python : 3.10.13.final.0
OS : Linux
OS-release : 5.15.133+
Version : #1 SMP Tue Dec 19 13:14:11 UTC 2023
machine : x86_64
processor : x86_64
num_cores : 4
cpu_ram_mb : 32110.140625
cuda version : None
num_gpus : 0
gpu_ram_mb : []
avail_disk_size_mb : 19933
accelerate : 0.21.0
async-timeout : 4.0.3
autogluon : 1.0.0
autogluon.common : 1.0.0
autogluon.core : 1.0.0
autogluon.features : 1.0.0
autogluon.multimodal : 1.0.0
autogluon.tabular : 1.0.0
autogluon.timeseries : 1.0.0
boto3 : 1.26.100
catboost : 1.2.2
defusedxml : 0.7.1
evaluate : 0.4.1
fastai : 2.7.13
gluonts : 0.14.4
hyperopt : 0.2.7
imodels : None
jinja2 : 3.1.2
joblib : 1.3.2
jsonschema : 4.17.3
lightgbm : 4.1.0
lightning : 2.0.9.post0
matplotlib : None
mlforecast : 0.10.0
networkx : 3.2.1
nlpaug : 1.1.11
nltk : 3.8.1
nptyping : 2.4.1
numpy : 1.26.3
nvidia-ml-py3 : 7.352.0
omegaconf : 2.2.3
onnxruntime-gpu : None
openmim : 0.3.9
orjson : 3.9.10
pandas : 2.1.4
Pillow : 10.2.0
psutil : 5.9.7
PyMuPDF : None
pytesseract : 0.3.10
pytorch-lightning : 2.0.9.post0
pytorch-metric-learning: 1.7.3
ray : 2.6.3
requests : 2.31.0
scikit-image : 0.20.0
scikit-learn : 1.4.0
scikit-learn-intelex : 2024.1.0
scipy : 1.11.4
seqeval : 1.2.2
setuptools : 69.0.3
skl2onnx : None
statsforecast : 1.4.0
statsmodels : 0.14.1
tabpfn : None
tensorboard : 2.15.1
text-unidecode : 1.3
timm : 0.9.12
torch : 2.0.1
torchmetrics : 1.1.2
torchvision : 0.15.2
tqdm : 4.66.1
transformers : 4.31.0
utilsforecast : 0.0.10
vowpalwabbit : 9.9.0
xgboost : 2.0.3
```
| closed | 2024-02-12T18:44:11Z | 2024-06-27T10:17:14Z | https://github.com/autogluon/autogluon/issues/3915 | [
"bug: unconfirmed",
"Needs Triage"
] | GDGauravDutta | 2 |
matplotlib/matplotlib | data-science | 29,428 | [Doc]: Multipage PDF: unclear which backend supports and which does not support attach_note() | ### Documentation Link
https://matplotlib.org/stable/gallery/misc/multipage_pdf.html
### Problem
The issue is in the first two paragraphs of the page.
> This is a demo of creating a pdf file with several pages, as well as adding metadata and annotations to pdf files.
>
> If you want to use a multipage pdf file using LaTeX, you need to use from matplotlib.backends.backend_pgf import PdfPages. This version however does not support [attach_note](https://matplotlib.org/stable/api/backend_pdf_api.html#matplotlib.backends.backend_pdf.PdfPages.attach_note).
Reading this, it is unclear whether "this" in the last sentence refers to the `pdf` backend (as suggested by it being on that backend's page) or `pgf` (as suggested by it being in the paragraph about `pgf`).
Only after clicking on the hyperlinked `attach_note` do I notice that it is a documentation page for a specific backend (which isn't obvious, since I am sent to an anchor in the middle of the document with no header visible). From there, seeing that `pdf` has `attach_note()`, I can infer that `pgf` doesn't. That chain of logic is quite long for something that should have been stated clearly.
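The backend distinction described above can be checked directly; a minimal sketch (assumes matplotlib is installed; the note text and filename are arbitrary):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for rendering
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib.backends.backend_pgf import PdfPages as PgfPdfPages

# Only the pdf backend's PdfPages implements attach_note.
assert hasattr(PdfPages, "attach_note")
assert not hasattr(PgfPdfPages, "attach_note")

with PdfPages("notes.pdf") as pdf:
    fig, ax = plt.subplots()
    ax.plot([0, 1], [1, 0])
    pdf.attach_note("attached to the next saved page")  # call before savefig
    pdf.savefig(fig)
    plt.close(fig)
```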
### Suggested improvement
Change "this" to "that" in the last quoted sentence would probably make it clearer it is referring to the backend *not* used in the example. Or perhaps a different change in wording. | closed | 2025-01-07T20:41:08Z | 2025-01-11T09:09:16Z | https://github.com/matplotlib/matplotlib/issues/29428 | [
"Documentation"
] | jaskij | 4 |
psf/requests | python | 6,417 | github actions on ubuntu 18.04 fail to start because the image was removed | github actions on ubuntu 18.04 fail to start because the image was removed: https://github.blog/changelog/2022-08-09-github-actions-the-ubuntu-18-04-actions-runner-image-is-being-deprecated-and-will-be-removed-by-12-1-22/
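The usual remediation is bumping the runner label in the workflow file; a hedged sketch of the one-line change (file path and job name are placeholders, not taken from the requests repo):

```yaml
# .github/workflows/run-tests.yml (hypothetical excerpt)
jobs:
  tests:
    runs-on: ubuntu-20.04  # was ubuntu-18.04; that image was removed in Dec 2022
    steps:
      - uses: actions/checkout@v3
```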
this might be a challenge because the tests fail with SSLError: https://github.com/psf/requests/issues/5662 | closed | 2023-04-21T15:53:46Z | 2024-04-23T00:03:46Z | https://github.com/psf/requests/issues/6417 | [] | graingert | 1 |
huggingface/transformers | pytorch | 36,354 | Add EfficientLoFTR model | ### Model description
EfficientLoFTR is an image matching model that performs dense matching (in contrast to SuperPoint + SuperGlue).
It is a variant of LoFTR that runs in real time (at most 40 ms per inference).
The base model's performance is decent, and with the upcoming release of the [MatchAnything](https://github.com/zju3dv/MatchAnything) version of EfficientLoFTR, we should have an even better model.
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
https://github.com/zju3dv/EfficientLoFTR | open | 2025-02-23T14:12:59Z | 2025-02-24T20:48:25Z | https://github.com/huggingface/transformers/issues/36354 | [
"New model",
"Vision"
] | sbucaille | 0 |
microsoft/nni | pytorch | 4,817 | Why does SlimPruner utilize the WeightTrainerBasedDataCollector instead of the WeightDataCollector before model compressing? | open | 2022-04-27T11:43:29Z | 2022-04-29T01:50:48Z | https://github.com/microsoft/nni/issues/4817 | [] | songkq | 1 |
KaiyangZhou/deep-person-reid | computer-vision | 163 | How to import evaluate_cy? | ```python
import warnings  # required by the warnings.warn call below

try:
    from torchreid.metrics.rank_cylib.rank_cy import evaluate_cy
    IS_CYTHON_AVAI = True
except ImportError:
    IS_CYTHON_AVAI = False
    warnings.warn(
        'Cython evaluation (very fast so highly recommended) is '
        'unavailable, now use python evaluation.'
    )
``` | closed | 2019-04-30T13:00:08Z | 2022-10-24T11:28:08Z | https://github.com/KaiyangZhou/deep-person-reid/issues/163 | [] | shanqq377 | 2 |