repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452) |
---|---|---|---|---|---|---|---|---|---|---|---|
wagtail/wagtail | django | 12,661 | Hook for replacing user avatar logic | ### Is your proposal related to a problem?
From @jhrr on https://github.com/wagtail/wagtail/issues/12274#issuecomment-2514281305:
> Sorry to mildly necrobump this thread, but there is one issue with customizing the Wagtail `UserProfile` that we've encountered, and that is being able to replace or alias the `avatar` field on the `UserProfile` with some other avatar on some other model so it could use that instead.
For example, we have a custom `User` model and a `Profile` model, and we keep the avatar on the `Profile`. We'd prefer to just be able to use the avatar from the `Profile` everywhere, across all sub-domains/contexts, including inside the CMS itself. Is there any way we can achieve this currently?
### Describe the solution you'd like
As per https://github.com/wagtail/wagtail/issues/12274#issuecomment-2517181624 - introduce a new `get_avatar_url` hook which is called by the `{% avatar_url %}` tag in wagtailadmin_tags.py, passing the user object and requested size. If any hook returns a non-empty response, return that in preference to the standard avatar URL logic.
The change would be [to this function](https://github.com/wagtail/wagtail/blob/23275a4cef4d36bb311abe315eb9ddfc43868e8b/wagtail/admin/templatetags/wagtailadmin_tags.py#L652-L659), and would look something like (untested):
```python
@register.simple_tag
def avatar_url(user, size=50, gravatar_only=False):
"""
A template tag that receives a user and size and return
the appropriate avatar url for that user.
Example usage: {% avatar_url request.user 50 %}
"""
for hook_fn in hooks.get_hooks("get_avatar_url"):
url = hook_fn(user, size)
if url:
return url
# followed by the existing code
```
This would need to be accompanied by unit tests - I'd suggest adding a hook function in `wagtail/test/testapp/wagtail_hooks.py` that returns a custom URL when some specific condition is true, such as the username containing "fred", and adding tests to confirm that both the custom code and the default avatar logic come into play at the appropriate times. The hook will also need to be documented in the [hooks reference docs](https://github.com/wagtail/wagtail/blob/main/docs/reference/hooks.md), and ideally we'd mention it in the [customising admin templates](https://github.com/wagtail/wagtail/blob/main/docs/advanced_topics/customization/admin_templates.md) docs too - this _could_ be a detailed howto, but I expect we can come up with a simple self-contained example snippet on the hooks reference page that's not too long, and just link to that.
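For illustration, a minimal sketch of what such a test hook in `wagtail/test/testapp/wagtail_hooks.py` could look like (the hook name matches the proposal above; the "fred" condition and the returned URL are just placeholders):
```python
from wagtail import hooks


@hooks.register("get_avatar_url")
def custom_avatar_url(user, size):
    # Only override the avatar for usernames containing "fred";
    # returning None falls through to the default avatar logic.
    if "fred" in user.username:
        return f"https://example.com/avatars/fred.png?s={size}"
    return None
```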
### Describe alternatives you've considered
On #12274 we considered:
* making it possible to customise the value returned for `avatar` on `UserProfile` (too invasive)
* overriding the wagtailadmin/shared/avatar.html template (too much copy-and-paste, and locks us in to the current template tag approach)
* overriding `avatar_url` in the wagtailadmin_tags tag registry (likewise, locks us in to the template tag approach)
### Working on this
@jhrr has first dibs on this, but anyone else is welcome to pick it up :-) | closed | 2024-12-04T12:26:35Z | 2024-12-17T20:47:21Z | https://github.com/wagtail/wagtail/issues/12661 | [
"type:Enhancement",
"component:User Management"
] | gasman | 3 |
randyzwitch/streamlit-folium | streamlit | 235 | refresh data with zoom in/zoom out | when I changed to st_folium everytime I zoom in/zoom out all data uploads again and it takes a lot of time. When I use folium_static I don't have this problem. | closed | 2024-11-07T21:01:32Z | 2024-11-22T20:33:24Z | https://github.com/randyzwitch/streamlit-folium/issues/235 | [] | sanapolsky | 3 |
allenai/allennlp | pytorch | 5,050 | How can we support text_to_instance running in parallel? | Though we have a multi-process data-loader, we use for loops in [predictor](https://github.com/allenai/allennlp/blob/main/allennlp/predictors/predictor.py#L299).
This makes the process slow when we have many inputs (especially when the predictor is used as a server).
Can we use or add some method to make this call (`text_to_instance`) support multiprocessing? A rough sketch of one possible approach is below.
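As a sketch only (not from the issue, and assuming the predictor is picklable, which is exactly the hard part), the serial loop could be fanned out to a process pool:
```python
from concurrent.futures import ProcessPoolExecutor


def batch_json_to_instances_parallel(predictor, json_dicts, max_workers=4):
    # Hypothetical helper: predictor._json_to_instance is the per-example
    # conversion used by Predictor.predict_batch_json; mapping it over a
    # process pool only works if the predictor (and its dataset reader)
    # can be pickled, which is the open question in this issue.
    with ProcessPoolExecutor(max_workers=max_workers) as executor:
        return list(executor.map(predictor._json_to_instance, json_dicts))
```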
@epwalsh @dirkgr maybe you could help ?
| closed | 2021-03-11T11:14:18Z | 2021-12-30T06:29:36Z | https://github.com/allenai/allennlp/issues/5050 | [
"Feature request"
] | wlhgtc | 21 |
desec-io/desec-stack | rest-api | 247 | Log TLS protocol and cipher suite | Ssllabs is going to cap to B starting March 2020 if TLS < 1.2 is supported.
We currently do not know the prevalence of TLS versions amongst our requests, so we should start to log these parameters. Instructions: https://serverfault.com/a/620130/377401 | closed | 2019-10-07T10:54:49Z | 2024-10-07T16:53:54Z | https://github.com/desec-io/desec-stack/issues/247 | [
"prio: medium",
"easy"
] | peterthomassen | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,589 | Website detects undetected bot | I have been trying to log in on a website but it keeps detecting me. This is what I do:
I create a driver and open Google with it. From there I log in manually to my Google account so that I have cookies on the profile. Next, I go to the website where I want to log in, but whenever I try logging in it detects me as a bot. Besides creating the Chrome instance, everything is done manually.
I tried doing the exact same thing in my actual Chrome, and there I can log in without a problem. I also changed my IP and I still encounter detection. It seems like it knows I spun up an instance with SeleniumBase, but how is that possible?
Things I tried:
- Removing the user_dir folder
- Changing the user-agent
- Changing my IP
- Creating a new Google profile with cookies
- Checking my reCAPTCHA score, which was 0.9
If anyone has any suggestions I would love to hear them! | closed | 2024-03-10T17:46:14Z | 2024-03-13T00:59:12Z | https://github.com/seleniumbase/SeleniumBase/issues/2589 | [
"invalid usage",
"UC Mode / CDP Mode"
] | jorisstander | 10 |
graphql-python/graphene-sqlalchemy | graphql | 122 | Invalid SQLAlchemy Model | Hey guys I hope someone can help me on this, i'm really stuck....
Database automapping:
```
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.automap import automap_base

engine = create_engine(" blablabla", convert_unicode=True)
db_session = scoped_session(sessionmaker(autocommit=False,
                                         autoflush=False,
                                         bind=engine))
Base = automap_base()
Base.prepare(engine, reflect=True, generate_relationship=_gen_relationship,
             name_for_scalar_relationship=name_for_scalar_relationship,
             classname_for_table=camelize_classname,
             name_for_collection_relationship=pluralize_collection)
Buyer = Base.classes.Buyer
Base.query = db_session.query_property()
```
schema :
```
import graphene
import json
from graphene import relay
from graphene_sqlalchemy import SQLAlchemyConnectionField, SQLAlchemyObjectType
from database import Buyer as BuyerModel
class Buyer(SQLAlchemyObjectType):
    class Meta:
        model = BuyerModel
        interfaces = (relay.Node, )

class Query(graphene.ObjectType):
    node = relay.Node.Field()
    all_buyers = SQLAlchemyConnectionField(Buyer)

schema = graphene.Schema(query=Query)
```
Produces the error :
`AssertionError: You need to pass a valid SQLAlchemy Model in Buyer.Meta, received "<class 'sqlalchemy.ext.automap.Buyer'>".`
| closed | 2018-03-27T13:53:32Z | 2023-02-25T00:49:12Z | https://github.com/graphql-python/graphene-sqlalchemy/issues/122 | [] | rvandyk | 9 |
httpie/cli | python | 1,589 | v3.2.3 GitHub release missing binary and package | ## Checklist
- [x] I've searched for similar issues.
---
The 3.2.3 release https://github.com/httpie/cli/releases/tag/3.2.3 is missing the binary and .deb package | open | 2024-07-18T23:28:21Z | 2024-07-18T23:28:21Z | https://github.com/httpie/cli/issues/1589 | [
"bug",
"new"
] | Crissante | 0 |
miguelgrinberg/Flask-Migrate | flask | 300 | render_item method in env.py is not being called | There's an issue with the `sqlalchemy_utils` package. Special field types like UUID, Password from this library are not being recognized by alembic. There was a workaround in earlier versions of `flask-migrate` and that is to write custom `render_item` method in `env.py`:
```python
def render_item(type_, obj, autogen_context):
    """Apply custom rendering for selected items."""
    if type_ == 'type' and isinstance(obj, sqlalchemy_utils.types.uuid.UUIDType):
        # add import for this type
        autogen_context.imports.add("import sqlalchemy_utils")
        autogen_context.imports.add("import uuid")
        return "sqlalchemy_utils.types.uuid.UUIDType(), default=uuid.uuid4"
    # default rendering for other objects
    return False
```
This resulted in correct migration versions, i.e. sqlalchemy_utils was imported and the column type was rendered correctly.
I upgraded from `2.3.1` to the latest `flask-migrate` and this workaround no longer works. It seems like the `render_item` function is not called at all.
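One thing worth checking (an assumption on my part, not something stated above): whether `render_item` is still passed to Alembic at all, since Alembic only invokes renderers it is explicitly given via `context.configure()`:
```python
# inside run_migrations_online() in migrations/env.py (other arguments elided)
context.configure(
    connection=connection,
    target_metadata=target_metadata,
    render_item=render_item,  # Alembic only calls this if it is wired in here
)
```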
| closed | 2019-11-09T18:39:40Z | 2020-04-10T14:08:36Z | https://github.com/miguelgrinberg/Flask-Migrate/issues/300 | [
"question"
] | amilosmanli | 1 |
chatanywhere/GPT_API_free | api | 346 | [Enhancement] I'm looking for a solution | I would like to show a preview on my website of the details of my purchased API, like here: https://api.chatanywhere.org/
I enter the API key, hit search, and it displays how much balance is left out of the total quantity. Please help | open | 2024-12-29T10:08:32Z | 2024-12-29T13:46:26Z | https://github.com/chatanywhere/GPT_API_free/issues/346 | [] | kamis086 | 5 |
jwkvam/bowtie | jupyter | 227 | consider moving to poetry from flit | Advantages:
1. Automatic python dependency management.
2. Consolidate requirements into one file.
Disadvantages:
1. Learn a new tool, (leaving flit).
https://github.com/sdispater/poetry | open | 2018-04-25T01:50:38Z | 2018-07-24T02:10:11Z | https://github.com/jwkvam/bowtie/issues/227 | [
"low-priority"
] | jwkvam | 0 |
opengeos/leafmap | jupyter | 611 | Leafmap and streamlit cannot use interactive vector tools to obtain coordinate values | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version:0.18.8
- Python version:3.9.16
- Operating System:windows
### Description
Leafmap with Streamlit cannot use the interactive vector tool to obtain coordinate values. The likely reason is that the whole script refreshes and reruns as soon as the button is clicked.
### What I Did
```
import os
import streamlit as st
import leafmap as leafmap
st.set_page_config(layout="wide")
Map = leafmap.Map()
col1, col2 = st.columns([4, 1])
with col2:
    addButton3 = st.button('get range')
    if addButton3:
        bbox = Map.user_roi_bounds()
        st.markdown("{}".format(bbox))
with col1:
    Map.to_streamlit(height=750)
```
In addition, when using the 'leafmap' library in streamlit, the Map.add_raster function cannot load the local raster normally, but at the same time, the add_raster function of 'leafmap.foliumap' can be displayed normally in the streamlit map. What is the reason?

| closed | 2023-11-16T13:56:04Z | 2023-11-19T01:03:57Z | https://github.com/opengeos/leafmap/issues/611 | [
"bug"
] | CraibYun | 1 |
tfranzel/drf-spectacular | rest-api | 1,310 | Example for list response is either a list containing only the first element or a list inside a list in redoc | **Describe the bug**
When I specify a list as an example (in a raw schema) in redoc only the first item in the given list is displayed as the only item in the example.
If I wrap the example list into a list, the example is displayed as a list of list.
In swagger I always get a single element list with garbled data.
**To Reproduce**
Note that this endpoint returns a list of (subset of) icinga (nagios) check items which are converted into the format expected by icinga checks. The data is provided by code that runs these checks and returns the appropriate status code and description.
So this is my application code (I elided some earlier definitions, but you get the idea, I think):
~~~python
STATUS_INFO_NAME_SCHEMA = dict(
    type='string',
    pattern=r'sis\.[a-z_]+',
    description='Name des Checks (beginnt immer mit `"sis."`)'
)
STATUS_INFO_VALUE_SCHEMA = dict(
    type='string',
    enum=('ok', 'warning', 'error', 'critical'),
    description='Wert oder Schwere ("Severity") des Status.'
)
STATUS_INFO_TYPE_SCHEMA = dict(
    type='string',
    enum=('direct',),
    description='Nagios/Icinga Typ (immer `"direct"`)'
)
STATUS_INFO_DESCRIPTION_SCHEMA = dict(
    type='string',
    description='Menschenlesbare Beschreibung des Resultats'
)

INFO_NAMES = [
    'info',
    'configuration',
    'database',
    'fingerprints',
    'mail_host',
    'django_compatibility',
    'request_stats'
]

INFO_NAME_SCHEMA = dict(
    type='string',
    enum=tuple(
        f'sis.{name}'
        for name
        in INFO_NAMES),
    description='Name des Checks (beginnt immer mit `"sis."`)'
)
INFO_NAGIOS_SCHEMA = dict(
    type='string',
    pattern=(
        f'({" | ".join(INFO_NAME_SCHEMA["enum"])})'
        f' ({" | ".join(STATUS_INFO_TYPE_SCHEMA["enum"])})'
        f' ({" | ".join(STATUS_INFO_VALUE_SCHEMA["enum"])})'
        r' - .+'),
    description='Vollständige Ausgabe des Nagios/Icinga-Checks'
)

INFO_EXAMPLE = [
    OrderedDict([
        ("name", "sis.info"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "SIS version devel"),
        ("nagios", "sis.info direct ok - SIS version devel")]
    ),
    OrderedDict([
        ("name", "sis.configuration"),
        ("value", "critical"),
        ("type", "direct"),
        ("description", "Config option 'errors-to' missing"),
        ("nagios", "sis.configuration direct critical - Config option 'errors-to' missing")]  # noqa
    ),
    OrderedDict([
        ("name", "sis.database"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "Database readable"),
        ("nagios", "sis.database direct ok - Database readable")]
    ),
    OrderedDict([
        ("name", "sis.fingerprints"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "No duplicate fingerprints"),
        ("nagios", "sis.fingerprints direct ok - No duplicate fingerprints")]
    ),
    OrderedDict([
        ("name", "sis.mail_host"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "SMTP server reachable"),
        ("nagios", "sis.mail_host direct ok - SMTP server reachable")]
    ),
    OrderedDict([
        ("name", "sis.django_compatibility"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "SIS is compatible with the current version of Django"),  # noqa
        ("nagios", "sis.django_compatibility direct ok - SIS is compatible with the current version of Django")]  # noqa
    ),
    OrderedDict([
        ("name", "sis.request_stats"),
        ("value", "ok"),
        ("type", "direct"),
        ("description", "No request durations over 100 seconds in the last 5 days"),  # noqa
        ("nagios", "sis.request_stats direct ok - No request durations over 100 seconds in the last 5 days")]  # noqa
    )
]

INFO_ITEM_SCHEMA = dict(
    type='object',
    properties=OrderedDict({
        'name': INFO_NAME_SCHEMA,
        'value': STATUS_INFO_VALUE_SCHEMA,
        'type': STATUS_INFO_TYPE_SCHEMA,
        'description': STATUS_INFO_DESCRIPTION_SCHEMA,
        'nagios': INFO_NAGIOS_SCHEMA,
    }),
    examples=[INFO_EXAMPLE])


class InfoViewSet(viewsets.ViewSet):
    """[Allgemeine Info über den SIS-Server](/doc/#tag/info)
    Erlaubt zu Debugging-Zwecken eine Untermenge der
    Statusinformationen ohne Autorisierung.
    """
    resource = "info"
    permission_classes = [permissions.IsAuthenticated]

    @extend_schema(
        responses={200: INFO_ITEM_SCHEMA})
    def list(self, request):
        """List SIS Information
        [/api/info/](/api/info/)
        Liefert einen Status von einigen Selbsttest aus.
        """
        return Response([c.as_dict() for c in check.info()])
~~~
In this form, the example data will be given in redoc as a list of list.
~~~json
[
[
{
"name": "sis.info",
"value": "ok",
"type": "direct",
"description": "SIS version devel",
"nagios": "sis.info direct ok - SIS version devel"
},
{
"name": "sis.configuration",
"value": "critical",
"type": "direct",
"description": "Config option 'errors-to' missing",
"nagios": "sis.configuration direct critical - Config option 'errors-to' missing"
},
{
"name": "sis.database",
"value": "ok",
"type": "direct",
"description": "Database readable",
"nagios": "sis.database direct ok - Database readable"
},
{
"name": "sis.fingerprints",
"value": "ok",
"type": "direct",
"description": "No duplicate fingerprints",
"nagios": "sis.fingerprints direct ok - No duplicate fingerprints"
},
{
"name": "sis.mail_host",
"value": "ok",
"type": "direct",
"description": "SMTP server reachable",
"nagios": "sis.mail_host direct ok - SMTP server reachable"
},
{
"name": "sis.django_compatibility",
"value": "ok",
"type": "direct",
"description": "SIS is compatible with the current version of Django",
"nagios": "sis.django_compatibility direct ok - SIS is compatible with the current version of Django"
},
{
"name": "sis.request_stats",
"value": "ok",
"type": "direct",
"description": "No request durations over 100 seconds in the last 5 days",
"nagios": "sis.request_stats direct ok - No request durations over 100 seconds in the last 5 days"
}
]
]
~~~
In swagger, the example looks garbled and only has the first item:
~~~json
[
{
"name": "sis.info",
"value": "ok",
"type": "direct",
"description": "string",
"nagios": " sisVrequest_stats direct critical - I{P{2prRCQe68zEfoE,tD"
}
]
~~~
If I change the raw schema to
~~~python
INFO_ITEM_SCHEMA = dict(
type='object',
properties=OrderedDict({
'name': INFO_NAME_SCHEMA,
'value': STATUS_INFO_VALUE_SCHEMA,
'type': STATUS_INFO_TYPE_SCHEMA,
'description': STATUS_INFO_DESCRIPTION_SCHEMA,
'nagios': INFO_NAGIOS_SCHEMA,
}),
examples=INFO_EXAMPLE)
~~~
The example in redoc changes to:
~~~json
[
{
"name": "sis.info",
"value": "ok",
"type": "direct",
"description": "SIS version devel",
"nagios": "sis.info direct ok - SIS version devel"
}
]
~~~
i.e. just the first item. And swagger displays:
~~~json
[
{
"name": "sis.info",
"value": "ok",
"type": "direct",
"description": "string",
"nagios": " sisdfingerprints direct ok - >[U7/ Nw?D+jZQ%=og65=v^v>!Z+^!r)(WY/O46iRA0_4=H"
}
]
~~~
**Expected behavior**
I expect to be able to give a full **list** as an example to a list response and see exactly that list as an example.
I've seen https://drf-spectacular.readthedocs.io/en/stable/faq.html#my-viewset-list-does-not-return-a-list-but-a-single-object as well as https://github.com/tfranzel/drf-spectacular/issues/990 and I've tried to come up with a way to apply the hints in https://drf-spectacular.readthedocs.io/en/stable/faq.html#how-do-i-wrap-my-responses-my-endpoints-are-wrapped-in-a-generic-envelope to an on-the-fly serializer via https://drf-spectacular.readthedocs.io/en/stable/drf_spectacular.html#drf_spectacular.utils.inline_serializer but it seems easier to write a serializer for the data returned by `check.info()`.
That seems wasteful since it already does return an array and I feel it should not be that hard.
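For reference, a minimal sketch of the serializer route mentioned above (names are mine, not from the project; `viewsets`, `Response` and `check` are reused from the snippet above): declaring the item shape once and passing it with `many=True` lets drf-spectacular emit a proper array schema.
```python
from rest_framework import serializers
from drf_spectacular.utils import extend_schema


class InfoItemSerializer(serializers.Serializer):
    name = serializers.CharField()
    value = serializers.ChoiceField(choices=('ok', 'warning', 'error', 'critical'))
    type = serializers.ChoiceField(choices=('direct',))
    description = serializers.CharField()
    nagios = serializers.CharField()


class InfoViewSet(viewsets.ViewSet):
    @extend_schema(responses={200: InfoItemSerializer(many=True)})
    def list(self, request):
        return Response([c.as_dict() for c in check.info()])
```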
Also why does **something** (redoc?) apparently wrap the first element of a list into a list if it's not a list **AND** keep a nested list as is? That garbled swagger output is also very weird. | open | 2024-10-09T09:38:29Z | 2024-12-10T12:31:23Z | https://github.com/tfranzel/drf-spectacular/issues/1310 | [] | TauPan | 5 |
noirbizarre/flask-restplus | flask | 617 | extra_requires[dev] does not exist | Issue: running `pip install -e .[dev]` works, but doesn't install the dev requirements. This is due to the missing extra, `dev`.
```
pip install .[dev]
Processing /Users/jchittum/dev01/flask-restplus
flask-restplus 0.12.2.dev0 does not provide the extra 'dev'
```
clearly missing in setup.py
```
extras_require={
'test': tests_require,
'doc': doc_require,
},
```
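A minimal sketch of the fix (the `dev_require` variable and its contents are my assumption, roughly mirroring develop.pip below):
```python
# setup.py (excerpt) - hypothetical sketch
dev_require = tests_require + doc_require + ['invoke', 'flake8', 'tox']

setup(
    # ...existing arguments...
    extras_require={
        'test': tests_require,
        'doc': doc_require,
        'dev': dev_require,  # makes `pip install -e .[dev]` meaningful
    },
)
```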
Also, develop.pip contains an entry that is not compatible with installing in this fashion (even with the very useful parser in setup.py)
```
-e .[test]
invoke==0.21.0
flake8==3.5.0
readme-renderer==17.2
tox==2.9.1
```
| closed | 2019-04-01T19:57:01Z | 2019-04-19T13:04:39Z | https://github.com/noirbizarre/flask-restplus/issues/617 | [
"bug",
"in progress"
] | j5awry | 3 |
autogluon/autogluon | data-science | 4,443 | [BUG] Time Series Predictor Model Fit Causing Kernel to Die (Jupyter Notebook) | **Bug Report Checklist**
<!-- Please ensure at least one of the following to help the developers troubleshoot the problem: -->
- [ ] I provided code that demonstrates a minimal reproducible example. <!-- Ideal, especially via source install -->
- [ ] I confirmed bug exists on the latest mainline of AutoGluon via source install. <!-- Preferred -->
- [ ] I confirmed bug exists on the latest stable version of AutoGluon. <!-- Unnecessary if prior items are checked -->
**Describe the bug**
<!-- A clear and concise description of what the bug is. -->
I am running a Jupyter Notebook on an M2 Macbook Pro with 24 GB of RAM where I am using Autogluon to run a Time Series Predictor model. I am having issues when fitting the model, due to my kernel dying. I have tried 3 things:
1. Uninstalling Autogluon and reinstalling it using the provided method on the Autogluon website.
2. Decreasing the size of the data set
3. Excluding certain hyperparameters in the fit
Despite doing these, my kernel still continues to crash, but when running on a colleague's computer it runs with no issues. I am trying to understand why this is an issue on my computer.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
What is expected when fitting the Autogluon Time Series Predictor Model is a verbose output showing the system information about Autogluon, Python and your computer. Followed by information of the model you are trying to fit, then the information of the fit (Validation score, runtime information).
**Installed Versions**
<!-- Please run the following code snippet: -->
<details>
Here are my systems info for running the following code:
AutoGluon Version: 1.1.1
Python Version: 3.9.16
Operating System: Darwin
Platform Machine: arm64
CPU Count: 8
GPU Count: 0
Memory Avail: 12.45 GB / 24.00 GB (51.9%)
Disk Space Avail: 645.40 GB / 926.35 GB (69.7%)
Below is the Code that I am trying to run:
```python
# Fit the predictor with the specified models
predictor.fit(
training_data,
hyperparameters={
'SeasonalNaive': {}, # Seasonal Naive
'CrostonSBA': {}, # Croston's method with SBA
'NPTS': {}, # Non-Parametric Time Series
'AutoETS': {}, # Auto Exponential Smoothing
'DynamicOptimizedTheta': {}, # Dynamic Optimized Theta
'AutoARIMA': {}, # Auto ARIMA
'RecursiveTabular': {}, # Recursive Tabular Model
'DirectTabular': {}, # Direct Tabular Model
'DeepAR': {}, # DeepAR
'TemporalFusionTransformer': {}, # Temporal Fusion Transformer
'PatchTST': {} # Patch Transformer
},
num_val_windows=1,
time_limit=3600
)
```
Note: since the dataset is not too large, the notebook suggested decreasing num_val_windows from 5 to 1, but there was still no improvement. The same goes for decreasing the time_limit.
</details>
| closed | 2024-08-29T15:55:06Z | 2024-09-10T09:08:05Z | https://github.com/autogluon/autogluon/issues/4443 | [
"bug: unconfirmed",
"Needs Triage"
] | Rybus07 | 4 |
ipython/ipython | jupyter | 14,233 | %autoreload fails since 8.17.0 | May be related to #14145
```python
In [1]: %load_ext autoreload
In [2]: %autoreload 3
Error in callback <bound method AutoreloadMagics.pre_run_cell of <IPython.extensions.autoreload.AutoreloadMagics object at 0x7f8877f5ad90>> (for pre_run_cell):
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
TypeError: AutoreloadMagics.pre_run_cell() takes 1 positional argument but 2 were given
```
This caused no issues until including version IPython 8.16.1
I use IPython from command line on Fedora Linux 38, Python 3.11.6 built from source, IPython installed from PyPI. | open | 2023-10-31T20:52:48Z | 2023-11-07T12:50:34Z | https://github.com/ipython/ipython/issues/14233 | [] | 2sn | 5 |
samuelcolvin/dirty-equals | pytest | 104 | dirty-equals causes an infinite loop if it is used with `functools.singledispatch` | The following code causes an infinite loop inside `functools.singledispatch`:
``` python
from dirty_equals import IsStr
from functools import singledispatch
@singledispatch
def dispatch(a):
    print("generic", a)

s = IsStr()
dispatch(s)
```
It looks like this is caused by the fact that `IsStr == IsStr` is `False`.
I already created a pull-request, please let me know if this can be fixed this way.
| closed | 2024-09-13T20:10:33Z | 2024-09-14T14:06:13Z | https://github.com/samuelcolvin/dirty-equals/issues/104 | [] | 15r10nk | 0 |
lexiforest/curl_cffi | web-scraping | 358 | [Feature]cookiejar | requests cookies.py
```
def get_cookie_header(jar, request):
"""
Produce an appropriate Cookie header string to be sent with `request`, or None.
:rtype: str
"""
r = MockRequest(request)
jar.add_cookie_header(r)
return r.get_new_headers().get("Cookie")
```
It automatically builds the Cookie header from the jar based on the request URL; under the hood it uses http.cookiejar's add_cookie_header.
I hope this library can support this feature as well. (A workaround sketch is below.)
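As a rough workaround sketch (mine, not from the issue), the Cookie header can be built with the standard library and handed to curl_cffi explicitly; the cookie file path and URL are placeholders:
```python
import urllib.request
from http.cookiejar import MozillaCookieJar

from curl_cffi import requests

jar = MozillaCookieJar("cookies.txt")
jar.load()

def cookie_header_for(url):
    # Let http.cookiejar pick the cookies that match this URL,
    # the same mechanism requests' get_cookie_header() relies on.
    req = urllib.request.Request(url)
    jar.add_cookie_header(req)
    return req.get_header("Cookie")

session = requests.Session()
url = "https://example.com/api"
resp = session.get(url, headers={"Cookie": cookie_header_for(url) or ""})
```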
| closed | 2024-07-27T03:47:40Z | 2024-07-28T03:08:12Z | https://github.com/lexiforest/curl_cffi/issues/358 | [
"enhancement"
] | a-n-i-m-e-z | 5 |
iperov/DeepFaceLab | machine-learning | 866 | Question about seemingly unused file | what does the file 'samplelib/SampleGeneratorFaceCelebAMaskHQ.py' do? | open | 2020-08-20T01:47:55Z | 2020-08-20T01:47:55Z | https://github.com/iperov/DeepFaceLab/issues/866 | [] | test1230-lab | 0 |
litestar-org/litestar | api | 3,233 | Enhancement: Support for dictionaries with Pydantic models as value, e.g. dict[str, PydanticClass] | ### Summary
As mentioned on the Discord channel, the code below will result in a Pydantic validation error. If I turn households into lists (both in the defaults dict and in the HouseHolds class) it all works! But using dict[str, HouseHold] as the type hint does not work out.
As maintainer `g...` pointed out: this is something we just don't handle in polyfactory currently. When given a dictionary for households, it's assumed that the value you've given is what should be used. That means we end up not creating instances of HouseHold, but just pass in the raw dictionary, and pydantic complains.
Currently we do support Sequence[SomeModel], but no other type is supported. I think supporting things like Mapping[str, SomeModel] is going to be somewhat complex, though I'm not 100% sure.
The code:
```
from pydantic import Field, BaseModel
from uuid import UUID, uuid4
from typing_extensions import TypedDict
from typing import Union
from datetime import date, datetime
from polyfactory.factories.pydantic_factory import ModelFactory
from polyfactory.factories import TypedDictFactory


class RelativeDict(TypedDict):
    household_id: str
    familymember_id: str


class FamilyMember(BaseModel):
    familymember_id: str
    name: str
    hobbies: list[str]
    age: Union[float, int]
    birthday: Union[datetime, date]
    relatives: list[RelativeDict]


class HouseHold(BaseModel):
    household_id: str
    name: str
    familymembers: list[FamilyMember]


class HouseHolds(BaseModel):
    id: UUID = Field(default_factory=uuid4)
    households: dict[str, HouseHold]


class RelativeDictFactory(TypedDictFactory[RelativeDict]):
    ...


class FamilyMemberFactory(ModelFactory[FamilyMember]):
    relatives = list[RelativeDictFactory]


class HouseHoldFactory(ModelFactory[HouseHold]):
    familymembers = list[FamilyMemberFactory]


class HouseHoldsFactory(ModelFactory[HouseHolds]):
    ...


defaults = {
    "households": {
        "beck": {
            "household_id": "beck",
            "name": "Family Beck",
            "familymembers": [
                {
                    "familymember_id": "linda",
                    "relatives": [
                        {
                            "household_id": "grant",
                            "familymember_id": "dad"
                        }
                    ]
                },
                {"familymember_id": "erik"}
            ]
        },
        "grant": {
            "household_id": "grant",
            "name": "Family Grant",
            "familymembers": [
                {"familymember_id": "dad"},
                {"familymember_id": "mother"}
            ]
        }
    }
}

test = HouseHoldsFactory.build(**defaults)
print(test)
```
Just running the build method without any defaults works, but as you can see, the `relatives` key of the `FamilyMember` class should only contain a HouseHold id and a FamilyMember id that actually exist (i.e. the id of a HouseHold that contains a FamilyMember with that id). So there are dependencies; that's the reason for this whole dictionary.
It does save us from having to provide all the other fields, as in practice these classes could contain a ton of extra fields which don't have dependencies, so Polyfactory could easily fake those for us.
### Basic Example
Maintainer `g...` provided a basic solution for now:
```
defaults = {
    "households": {
        "beck": HouseHold(...),
        "grant": HouseHold(...),
    }
}
```
So that should work out or I could override the build method, was his suggestion.
What I think would be ideal is:
```
class HouseHoldsFactory(ModelFactory[HouseHolds]):
    households: dict[str, HouseHoldFactory]
```
So the `ModelFactory` knows when it hits the key households, it actually knows how to handle that object (the keys should be a string and the values should be parsed with that `HouseHoldFactory` class).
### Drawbacks and Impact
_No response_
### Unresolved questions
_No response_ | closed | 2024-03-21T11:14:45Z | 2025-03-20T15:54:30Z | https://github.com/litestar-org/litestar/issues/3233 | [
"Enhancement"
] | ErikvdVen | 6 |
faif/python-patterns | python | 127 | publish_subscribe.py - Method missing? | The example does not make use of the method Provider.unsubscribe().
I expected that there would be a "wrapping" method like Subscriber.unsubscribe() for removing itself from the Provider (compare: Subscriber.subscribe() for appending itself to the Provider)?
[See draft for publish_subscribe.py tests, line 24](https://github.com/fkromer/python-patterns/blob/test_publish_subscribe/test_publish_subscribe.py)
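For context, the wrapper being asked for would presumably mirror the existing subscribe() helper; a sketch, assuming the Provider/Subscriber layout used in the example:
```python
class Subscriber:
    def __init__(self, name, provider):
        self.name = name
        self.provider = provider

    def subscribe(self, msg):
        self.provider.subscribe(msg, self)

    def unsubscribe(self, msg):
        # Mirror of subscribe(): remove self from the provider's list for msg
        self.provider.unsubscribe(msg, self)

    def run(self, msg):
        print("{} got {}".format(self.name, msg))
```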
| closed | 2016-03-06T14:46:12Z | 2016-03-06T18:50:30Z | https://github.com/faif/python-patterns/issues/127 | [] | fkromer | 4 |
tensorpack/tensorpack | tensorflow | 1,412 | image2image.py on win10: AttributeError: Can't pickle local object 'get_data.<locals>.<lambda>' | Hello,
I am rather new to tensorflow and to python, but I am trying to learn, because I could use the image2image translation network. Well I played around with other pix2pix variants before, and I got some of them working, though rather slow. Then I read, that your tensorpack code is trimmed for speed, so I am trying this now, especially your image2image.py.
So I cloned your tensorpack.git and all the dependencies. Then I tried to train on a small self-made sample training set made of 512x256 pixel images, which I put together in the form B to A, in the folder `train` in the main directory. The code starts and asks if I want to keep the model. Then the error happens, and it tells me somewhere:
AttributeError: Can't pickle local object 'get_data.<locals>.<lambda>'
(1) The command line I used: image2image.py --data train --mode BtoA
Log:
2020-03-26 23:19:47.581961: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
[0326 23:19:49 @logger.py:128] WRN Log directory train_log\Image2Image exists! Use 'd' to delete it.
[0326 23:19:49 @logger.py:131] WRN If you're resuming from a previous run, you can choose to keep it.
Press any other key to exit.
Select Action: k (keep) / d (delete) / q (quit):d
[0326 23:19:51 @logger.py:92] Argv: I:\pix2pix basic\Image2Image.py --data train --mode BtoA
[0326 23:19:51 @parallel.py:207] WRN MultiProcessRunner does support Windows. However, Windows requires more strict picklability on processes, which may lead of failure on some of the code.
Traceback (most recent call last):
File "I:\pix2pix basic\Image2Image.py", line 215, in <module>
data = QueueInput(get_data())
File "I:\pix2pix basic\Image2Image.py", line 172, in get_data
ds = MultiProcessRunner(ds, 100, 1)
File "I:\pix2pix basic\tensorpack\dataflow\parallel.py", line 226, in __init__
start_proc_mask_signal(self.procs)
File "I:\pix2pix basic\tensorpack\utils\concurrency.py", line 239, in start_proc_mask_signal
p.start()
File "C:\Program Files\Python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Program Files\Python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Program Files\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in __init__
reduction.dump(process_obj, to_child)
File "C:\Program Files\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'get_data.<locals>.<lambda>'
I:\pix2pix basic>2020-03-26 23:19:51.635938: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Program Files\Python37\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
Here´s the log.txt:
[32m[0326 23:42:54 @logger.py:92][0m Argv: I:\pix2pix basic\Image2Image.py --data train --mode BtoA
[32m[0326 23:42:54 @parallel.py:207][0m [5m[31mWRN[0m MultiProcessRunner does support Windows. However, Windows requires more strict picklability on processes, which may lead of failure on some of the code.
2020-03-26 23:40:51.195079: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cudart64_100.dll
-------------------- ---------------------------------------------------------------------------------
sys.platform win32
Python 3.7.6 (tags/v3.7.6:43364a7ae0, Dec 19 2019, 00:42:30) [MSC v.1916 64 bit (AMD64)]
Tensorpack 0.10
Numpy 1.18.1
TensorFlow 1.15.0/v1.15.0-rc3-22-g590d6eef7e
TF Compiler Version MSVC 191627032
TF CUDA support True
TF MKL support False
TF XLA support False
Nvidia Driver
CUDA
CUDNN
NCCL
CUDA_VISIBLE_DEVICES None
GPU AMD Ryzen 2600X
Free RAM 22.20/31.95 GB
CPU Count 12
cv2 4.1.2
msgpack 1.0.0
python-prctl False
-------------------- ---------------------------------------------------------------------------------
Don't know why it says "CUDA_VISIBLE_DEVICES None"; I have a GeForce 1080 installed, which is (normally) recognized, and which I successfully used in other small neural network experiments.
+ You can install tensorpack master by `pip install -U git+https://github.com/tensorpack/tensorpack.git`
and see if your issue is already solved.
+ If you're not using tensorpack under a normal command line shell (e.g.,
using an IDE or jupyter notebook), please retry under a normal command line shell.
Tried the above; sadly, it didn't change my error. Oh, and may I ask a question? Is your image2image implementation somehow limited in image dimensions? As I understand it, the original pix2pix code only accepts 256x256 pixel images, so I chose those for my training set.
Thank you very much for your time.
| closed | 2020-03-26T23:00:50Z | 2020-03-27T08:56:04Z | https://github.com/tensorpack/tensorpack/issues/1412 | [] | Lucretious | 2 |
pbugnion/gmaps | jupyter | 298 | Exception error when exporting to HTML | Hello,
I tried exporting my map to HTML; the code is the same as in the documentation.
`embed_minimal_html('export.html', views=[fig])`
It works fine on my computer, but my friend has encountered the following exception error.
<img width="982" alt="error" src="https://user-images.githubusercontent.com/34975332/54467860-d38d6f80-47c2-11e9-80ae-b05dc33c9a67.png">
I am afraid we forgot to do something before running the code.
What can I do to fix this?
Thank you for your help!
Best,
Iris | closed | 2019-03-16T00:14:44Z | 2019-03-16T07:29:22Z | https://github.com/pbugnion/gmaps/issues/298 | [] | picaguo1997 | 2 |
tensorlayer/TensorLayer | tensorflow | 918 | tl.files.load_and_assign_npz() does not return 'False' | ### New Issue Checklist
- [x] I have read the [Contribution Guidelines](https://github.com/tensorlayer/tensorlayer/blob/master/CONTRIBUTING.md)
- [x] I searched for [existing GitHub issues](https://github.com/tensorlayer/tensorlayer/issues)
### Issue Description
tl.files.load_and_assign_npz() does not return 'False' when the model is not exist.
I tried to add 'return False' to the code and it works well.
tensorlayer/files/utils.py
```python
def load_and_assign_npz(sess=None, name=None, network=None):
    if not os.path.exists(name):
        logging.error("file {} doesn't exist.".format(name))
        return False
```
### Reproducible Code
- Which OS are you using ?
Ubuntu 16.04 LTS
```python
if tl.files.load_and_assign_npz(sess=sess, name=checkpoint_path + 'g_{}.npz'.format(tl.global_flag['mode']), network=net_g) is False:
    tl.files.load_and_assign_npz(sess=sess, name=checkpoint_path + 'g_{}_init.npz'.format(tl.global_flag['mode']), network=net_g)
``` | closed | 2018-12-31T01:43:35Z | 2019-01-02T12:34:13Z | https://github.com/tensorlayer/TensorLayer/issues/918 | [] | ImpactCrater | 2 |
K3D-tools/K3D-jupyter | jupyter | 296 | Standalone HTML representation of animation not loading | I've tried running the hydrogen example notebook through nbconvert to get a "static" representation. The produced html file will generally load, but the plot will not.
Browser console (same error in firefox)
[k3d.log](https://github.com/K3D-tools/K3D-jupyter/files/6765093/k3d.log)
I've added the converted html too (zipped since GitHub doesn't allow html attachments:
[k3d.html.zip](https://github.com/K3D-tools/K3D-jupyter/files/6765102/k3d.html.zip)
| closed | 2021-07-05T15:00:12Z | 2021-10-02T20:09:43Z | https://github.com/K3D-tools/K3D-jupyter/issues/296 | [] | renefritze | 8 |
xinntao/Real-ESRGAN | pytorch | 106 | vkQueueSubmit failed -4 | realesrgan-ncnn-vulkan-20210901-windows.zip
Win10 Home,
GF 920M
Run with
realesrgan-ncnn-vulkan.exe -i 01.JPG -o 01-up.jpg
Tried with different files, same error(s). Got black plain jpg file on output. | open | 2021-10-01T14:23:02Z | 2021-12-30T17:26:24Z | https://github.com/xinntao/Real-ESRGAN/issues/106 | [] | mkrishtopa | 14 |
piskvorky/gensim | data-science | 2,723 | how to implement get_document_topics() | Hi all,
I used AuthorTopicModel() to build a model. Now I want to get the topic of each document, i.e. which topic each document belongs to (assume I have 10 topics). The tutorial tells me that the get_document_topics() method is not implemented, so how can I implement it?
thanks in advance
yi zhao
| closed | 2020-01-07T13:24:08Z | 2020-01-08T03:53:34Z | https://github.com/piskvorky/gensim/issues/2723 | [] | zhaoyi1025 | 1 |
microsoft/hummingbird | scikit-learn | 565 | Improper converter check | Can someone please check this [converter check](https://github.com/microsoft/hummingbird/blob/2705c42585ac1d3ee69e65a93e5f59d7d42f44d5/hummingbird/ml/_topology.py#L205) in topology.py?
Shouldn't it be a `if converter is None:` ?
Or am I just missing something in the flow?
| closed | 2022-02-04T06:28:50Z | 2022-03-11T18:33:18Z | https://github.com/microsoft/hummingbird/issues/565 | [] | shubh0508 | 1 |
Miserlou/Zappa | flask | 1,729 | [Suggestion] Remove print statements in `handler.py` | ## Context
I have gone thru some effort to reduce my CloudWatch logs. By default Zappa was logging a lot of things that I didn't use to CloudWatch which was costing me $100s / mo.
The best solution I found to suppressing logs was this code in `app.py`:
```
logging.getLogger('boto3').setLevel(logging.ERROR)
logging.getLogger('botocore').setLevel(logging.ERROR)
logging.getLogger('s3transfer').setLevel(logging.ERROR)
logging.getLogger('urllib3').setLevel(logging.ERROR)
logging.getLogger('requests').setLevel(logging.ERROR)
logging.getLogger('zappa').setLevel(logging.INFO)
logging.getLogger('chardet.charsetprober').setLevel(logging.INFO)
```
Now the majority of my logs are due to `print()` statements in `handler.py` which cannot be suppressed in my application code.
## Possible Fix
I would like to reduce logging from `handler.py`.
Here are three options:
1. Remove `print()` statements as on the line https://github.com/Miserlou/Zappa/blob/master/zappa/handler.py#L383
2. Change the `print()` statements to user a `logger`
3. Put the `print()` statements behind a `enable_logging` flag | open | 2018-12-12T01:30:02Z | 2018-12-12T01:30:22Z | https://github.com/Miserlou/Zappa/issues/1729 | [] | vpontis | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,251 | emotion voice cloning | I use the method mentioned in this repo https://github.com/innnky/emotional-vits to try to implement emotion voice cloning. I finetuned the pretrained synthesizer on a small dataset that contains about 24 speakers, each with 100 audio clips; these 100 clips are divided into roughly four or five categories, with the same text in each category but with different emotions. I run inference with the finetuned synthesizer and the pretrained encoder and vocoder, but it's not working very well. Does anyone know what the problem is or how it should be trained? | open | 2023-09-18T12:00:32Z | 2023-10-18T19:47:50Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1251 | [] | micelvrice | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,701 | How to open a file *.html? | Hello,
I am trying to access a file, but it is giving me an error:
`ERROR - Invalid URL: "/output/html/my_html.html"!`
How could I do that? Thank you | closed | 2024-04-17T11:50:08Z | 2024-04-17T12:59:16Z | https://github.com/seleniumbase/SeleniumBase/issues/2701 | [
"question"
] | manel00 | 2 |
ploomber/ploomber | jupyter | 275 | Consider allowing PythonCallable initialization with dotted paths | We're currently working on exporting DAGs to Airflow but PythonCallables won't be supported for the first version. When we convert a Ploomber DAG to Airflow, we need to instantiate the Ploomber DAG, if it has PythonCallables, the actual functions have to be imported, which implies that all imported packages in those functions must be installed in the Airflow host.
This is highly problematic, as it can pollute the python installation where Airflow is running and it requires all dependencies to be installed before parsing the DAG.
A possible solution would be to let PythonCallables be initialized with dotted paths, so they don't have to import the source function until the task is executed. | closed | 2020-10-24T16:59:00Z | 2020-10-25T15:11:42Z | https://github.com/ploomber/ploomber/issues/275 | [] | edublancas | 1 |
kennethreitz/responder | flask | 165 | When Serving static files: "TypeError: 'NoneType' object is not iterable" | Using the example code:
```python
import responder

api = responder.API()

@api.route("/{greeting}")
async def greet_world(req, resp, *, greeting):
    resp.text = f"{greeting}, world!"

if __name__ == '__main__':
    api.run()
```
I dropped a couple files in the `static/` directory, and while it does successfully serve the files, I get these messages back in the log whenever it loads my index.html file.
```
INFO: ('127.0.0.1', 46560) - "GET /index.html HTTP/1.1" 302
INFO: ('127.0.0.1', 46560) - "GET / HTTP/1.1" 304
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 387, in run_asgi
result = await asgi(self.receive, self.send)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/uvicorn/middleware/message_logger.py", line 59, in __call__
await self.inner(self.receive, self.send)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/wsgi.py", line 41, in __call__
await self.run_wsgi_app(message)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/sync.py", line 108, in __call__
return await asyncio.wait_for(future, timeout=None)
File "/usr/lib64/python3.6/asyncio/tasks.py", line 339, in wait_for
return (yield from fut)
File "/usr/lib64/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/sync.py", line 123, in thread_handler
return self.func(*args, **kwargs)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/wsgi.py", line 118, in run_wsgi_app
for output in self.wsgi_application(environ, self.start_response):
TypeError: 'NoneType' object is not iterable
INFO: ('127.0.0.1', 46564) - "GET /static/chat.js HTTP/1.1" 500
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/uvicorn/protocols/http/httptools_impl.py", line 387, in run_asgi
result = await asgi(self.receive, self.send)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/uvicorn/middleware/message_logger.py", line 59, in __call__
await self.inner(self.receive, self.send)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/wsgi.py", line 41, in __call__
await self.run_wsgi_app(message)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/sync.py", line 108, in __call__
return await asyncio.wait_for(future, timeout=None)
File "/usr/lib64/python3.6/asyncio/tasks.py", line 339, in wait_for
return (yield from fut)
File "/usr/lib64/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/sync.py", line 123, in thread_handler
return self.func(*args, **kwargs)
File "/home/cig/PycharmProjects/PythonSandbox/venv/lib64/python3.6/site-packages/asgiref/wsgi.py", line 118, in run_wsgi_app
for output in self.wsgi_application(environ, self.start_response):
TypeError: 'NoneType' object is not iterable
INFO: ('127.0.0.1', 46568) - "GET /static/chat.js HTTP/1.1" 500
```
To reproduce:
1. Check out my [sandbox](https://github.com/chrisbrake/PythonSandbox)
2. Install `responder`
3. Go into the `toDo` module
4. Run `app.py`
5. Go to http://127.0.0.1:5042/static/ | closed | 2018-10-27T03:10:31Z | 2018-10-27T13:55:11Z | https://github.com/kennethreitz/responder/issues/165 | [] | chrisbrake | 2 |
thunlp/OpenPrompt | nlp | 175 | Is it possible to use templates without {"mask"}? | I am using OpenPrompt on UnifiedQA (https://github.com/allenai/unifiedqa), which generates the answer using T5 without a mask token.
I tried a template without {"mask"}, but this is not allowed in OpenPrompt.
I use the template {"placeholder":"text_a"} \n {"placeholder":"text_b"} {"mask"}, following the instructions in UnifiedQA, but the generation results are mostly empty.
Is there a way to use T5 generation (instead of mask filling) in OpenPrompt?
like this:

| open | 2022-07-18T08:11:22Z | 2022-10-11T04:23:02Z | https://github.com/thunlp/OpenPrompt/issues/175 | [] | RefluxNing | 1 |
alirezamika/autoscraper | web-scraping | 66 | It skips duplicate items, is there a way to retain duplicate values | closed | 2021-10-03T07:31:26Z | 2021-10-03T07:34:34Z | https://github.com/alirezamika/autoscraper/issues/66 | [] | 12arpitgoel | 0 |
|
mljar/mljar-supervised | scikit-learn | 188 | Change the `feature_selection` to `features_selection` (typo) | Change the `feature_selection` to `features_selection` (typo) | closed | 2020-09-15T11:09:09Z | 2020-09-21T11:18:09Z | https://github.com/mljar/mljar-supervised/issues/188 | [
"help wanted",
"good first issue"
] | pplonski | 1 |
encode/databases | asyncio | 397 | How to log all of the query sql string with args ? | I tried setting `echo=True` and `logging.getLogger("databases").setLevel(logging.DEBUG)`, but neither is working.
Is there some method to log the SQL string and args other than `print(query)`? I want something global. (A fallback sketch is below.)
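A fallback sketch (mine, not from the library) that logs at a single choke point using only the public `databases` API; the logger name is arbitrary:
```python
import logging

logger = logging.getLogger("app.sql")

async def logged_fetch_all(database, query, values=None):
    # Wrap the public API so every query and its arguments are logged once,
    # instead of sprinkling print(query) around the code base.
    logger.debug("SQL: %s -- args: %r", query, values)
    return await database.fetch_all(query=query, values=values)
```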
Thank you ! | closed | 2021-09-24T18:14:19Z | 2021-09-28T08:54:25Z | https://github.com/encode/databases/issues/397 | [
"question"
] | insoz | 2 |
encode/databases | asyncio | 92 | Issue with connection to database with postgres (asyncpg) | There is simplified example extracted from /databases/backends/postgres.py:connect()
```python
import asyncio
import asyncpg
from databases import DatabaseURL
async def run():
    url = 'postgresql://user:password@localhost/dbname'
    pool = await asyncpg.create_pool(str(DatabaseURL(url)))
loop = asyncio.get_event_loop()
loop.run_until_complete(run())
```
I am getting error `socket.gaierror: [Errno -3] Temporary failure in name resolution` and that's fair since according to https://magicstack.github.io/asyncpg/current/api/index.html#connection-pools accepts **dsn** (str) – Connection arguments specified using as a single string in the following format: `postgres://user:pass@host:port/database?option=value` like that.
So we need to add some note to docs maybe or handle that situation via advanced parsing maybe.. | closed | 2019-04-24T19:58:30Z | 2019-06-20T09:35:55Z | https://github.com/encode/databases/issues/92 | [] | kiddten | 4 |
fastapi/sqlmodel | pydantic | 539 | Is there any better way to write a timezone-aware datetime field without using SQLAlchemy? | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
class UserBase(SQLModel):
    username: str = Field(index=True, unique=True)
    email: EmailStr = Field(unique=True, index=True)  # this field should be unique for table and this field is required
    fullname: str | None = None
    created_at: datetime = Field(default_factory=datetime.utcnow)
    updated_at: datetime = Field(default_factory=datetime.utcnow)
```
### Description
I wrote my `created_at` and `updated_at` fields like this; however, it did not work because of timezone awareness:
```
SQLModel user: fullname='string' created_at=datetime.datetime(2023, 1, 26, 18, 19, 32, 961000, tzinfo=datetime.timezone.utc) updated_at=datetime.datetime(2023, 1, 26, 18, 19, 32, 961000, tzinfo=datetime.timezone.utc) id=None is_staff=False is_admin=False username='string' email='user@example.com' password='string'
(sqlalchemy.dialects.postgresql.asyncpg.Error) <class 'asyncpg.exceptions.DataError'>: invalid input for query argument $4: datetime.datetime(2023, 1, 26, 18, 19, 3... (can't subtract offset-naive and offset-aware datetimes)
```
After checking Github, i found this solution:
```python
class AuthUser(sqlmodel.SQLModel, table=True):
    __tablename__ = 'auth_user'
    id: Optional[int] = sqlmodel.Field(default=None, primary_key=True)
    password: str = sqlmodel.Field(max_length=128)
    last_login: datetime = Field(sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False))
```
It is written by mixing SQLModel with SQLAlchemy. I know SQLModel is SQLAlchemy under the hood, but this feels strange, because I want to deal with SQLModel ONLY.
Is there any better way of handling this?
Let's say that when SQLModel creates tables, it checks the field `created_at`: if it is a timezone-aware datetime, then it sets it up as `sa_column=sa.Column(sa.DateTime(timezone=True))`, so that we do not need to mix them both.
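For completeness, a sketch that just consolidates the two snippets above so the default value and the column type agree (both timezone-aware); it still mixes in SQLAlchemy, which is exactly what this question hopes to avoid:
```python
from datetime import datetime, timezone

import sqlalchemy as sa
from sqlmodel import Field, SQLModel


def utcnow() -> datetime:
    return datetime.now(timezone.utc)


class UserBase(SQLModel):
    # Aware default + aware column, so asyncpg no longer sees a naive/aware mismatch.
    created_at: datetime = Field(
        default_factory=utcnow,
        sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False),
    )
    updated_at: datetime = Field(
        default_factory=utcnow,
        sa_column=sa.Column(sa.DateTime(timezone=True), nullable=False),
    )
```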
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.8
### Python Version
3.10.2
### Additional Context
_No response_ | open | 2023-01-26T18:34:49Z | 2025-02-26T22:39:08Z | https://github.com/fastapi/sqlmodel/issues/539 | [
"question"
] | azataiot | 5 |
deepspeedai/DeepSpeed | machine-learning | 7,012 | nv-ds-chat CI test failure | The Nightly CI for https://github.com/deepspeedai/DeepSpeed/actions/runs/13230975186 failed.
| closed | 2025-02-07T00:26:40Z | 2025-02-10T22:30:14Z | https://github.com/deepspeedai/DeepSpeed/issues/7012 | [
"ci-failure"
] | github-actions[bot] | 0 |
jacobgil/pytorch-grad-cam | computer-vision | 233 | Create CAM for network trained on timeseries | Would it be possible to handle networks trained for timeseries classification? Example CNN below:
```
class Cnn1d(nn.Module):
    def __init__(self, outputs=2):
        super(Cnn1d, self).__init__()
        self.outputs = outputs
        self.conv1 = nn.Conv1d(1, 16, 3)
        self.bn1 = nn.BatchNorm1d(16)
        self.pool1 = nn.MaxPool1d(2)
        self.conv2 = nn.Conv1d(16, 32, 3)
        self.bn2 = nn.BatchNorm1d(32)
        self.pool2 = nn.MaxPool1d(2)
        self.conv3 = nn.Conv1d(32, 64, 3)
        self.bn3 = nn.BatchNorm1d(64)
        self.pool3 = nn.MaxPool1d(2)
        self.conv4 = nn.Conv1d(64, 128, 3)
        self.bn4 = nn.BatchNorm1d(128)
        self.pool4 = nn.MaxPool1d(2)
        self.dropout = nn.Dropout()
        self.avgPool = nn.AvgPool1d(127)
        self.fc1 = nn.Linear(256, self.outputs)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(self.bn1(x))
        x = self.pool1(x)
        x = self.conv2(x)
        x = F.relu(self.bn2(x))
        x = self.pool2(x)
        x = self.conv3(x)
        x = F.relu(self.bn3(x))
        x = self.pool3(x)
        x = self.conv4(x)
        x = F.relu(self.bn4(x))
        x = self.pool4(x)
        x = self.dropout(x)
        x = self.avgPool(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x
``` | closed | 2022-04-14T20:11:17Z | 2022-07-29T12:09:04Z | https://github.com/jacobgil/pytorch-grad-cam/issues/233 | [] | kalfasyan | 5 |
twopirllc/pandas-ta | pandas | 638 | in bbands function stdev does not behave well when there are NaN values in the input column | When my input column has NaN values, all my standard_deviation values after the NaN inputs become NaN too (even after there are no longer NaN inputs). This should not be the case; only standard_deviation values close to the input NaNs should be impacted.
I recommend changing this:
```standard_deviation = stdev(close=close, length=length, ddof=ddof)```
with this:
```close.rolling(length).std(ddof=ddof)```
and the problem looks resolved while all functionality remains | closed | 2023-01-17T10:14:34Z | 2023-02-24T18:47:22Z | https://github.com/twopirllc/pandas-ta/issues/638 | [
"bug",
"help wanted",
"good first issue"
] | dnentchev | 1 |
nerfstudio-project/nerfstudio | computer-vision | 3,209 | Alicevision Meshroom support for custom data format and reusable for gsplat as well? | **Is your feature request related to a problem? Please describe.**
I'm running the SfM pipeline from Alicevision Meshroom, but I can't find a supporting np-process-data pipeline that explicitly handles the Meshroom output. I searched issues and PRs with the "Meshroom" keyword and found nothing. Meshroom is not a poor-quality calibration + reconstruction tool, so support would be worthwhile.
Also, the new repo [gsplat](https://github.com/nerfstudio-project/gsplat/tree/main) has the example processing COLMAP format. Will this np-process-data pipeline become reusable for both repos?
**Describe the solution you'd like**
I verified Meshroom output in Blender including the camera extrinsic and intrinsics as well as the mesh location
**Describe alternatives you've considered**
Manually writing code to convert the output from Meshroom to the nerfstudio format is not difficult, but it could be integrated into the data preprocessing pipeline.
| open | 2024-06-10T22:32:52Z | 2024-06-12T20:33:02Z | https://github.com/nerfstudio-project/nerfstudio/issues/3209 | [] | jingyangcarl | 2 |
amisadmin/fastapi-amis-admin | fastapi | 153 | How can I jump to a new page by clicking? | In the examples I have seen so far, pages are all set up at initialization time via register_admin. In my tests I need to click some item (for example an entry in a list) and have it open or jump to a new page (for example an ArticleOverviewPage), and that page is not set up at initialization time.
Is there an example of how to implement this? Many thanks. | closed | 2023-12-31T09:46:51Z | 2024-01-04T06:47:04Z | https://github.com/amisadmin/fastapi-amis-admin/issues/153 | [] | cic1988 | 1 |
zappa/Zappa | django | 904 | [Migrated] Can I use DON'T PANIC traceback in logs? | Originally from: https://github.com/Miserlou/Zappa/issues/2166 by [rafagan](https://github.com/rafagan)
Setting "FLASK_DEBUG": "true" in my environment_variables enable detailed traceback from errors, but how can I return the DON'T PANIC traceback HTML? | closed | 2021-02-20T13:03:35Z | 2022-07-16T05:12:11Z | https://github.com/zappa/Zappa/issues/904 | [] | jneves | 1 |
microsoft/Bringing-Old-Photos-Back-to-Life | pytorch | 203 | AssertionError: Batch size 4 is wrong. It must be a multiple of # GPUs 3. | `python3.6 run.py --input_folder test_images/old_w_scratch_gpu012/ --output_folder old_w_scratch_output-gpu-012/ --GPU 0,1,2`
The error below is coming up; help please.
--------------------------------
File "test_face.py", line 15, in <module>
opt = TestOptions().parse()
File "/home/ubuntu/download/Bringing-Old-Photos-Back-to-Life/Face_Enhancement/options/base_options.py", line 293, in parse
), "Batch size %d is wrong. It must be a multiple of # GPUs %d." % (opt.batchSize, len(opt.gpu_ids))
AssertionError: Batch size 4 is wrong. It must be a multiple of # GPUs 3.
Finish Stage 3 ...
| open | 2021-11-19T09:24:59Z | 2021-11-19T09:25:49Z | https://github.com/microsoft/Bringing-Old-Photos-Back-to-Life/issues/203 | [] | davaram | 1 |
pennersr/django-allauth | django | 3,960 | Steam Provider doesn't appear in socialaccounts in Response from _allauth/browser/v1/config | Hey, I'm trying to integrate Steam into my Django Rest + SolidJS Project. To understand how allauth headless works, I downloaded the react-spa example from the repo and added these things:
In INSTALLED_APPS:
"allauth.socialaccount.providers.openid",
"allauth.socialaccount.providers.steam",
In the Django admin dashboard, I added a social application with this data:
Provider: Steam
Name: Steam Provider
Client id: steam_api_key
Secret Key: steam_api_key
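For reference, this is roughly what the relevant settings fragment looks like — only the two provider entries come from my change, the rest is the usual allauth boilerplate:

```python
# settings.py (sketch)
INSTALLED_APPS = [
    # ... existing apps from the react-spa example ...
    "allauth",
    "allauth.account",
    "allauth.socialaccount",
    "allauth.socialaccount.providers.openid",
    "allauth.socialaccount.providers.steam",
]
```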
After starting the docker-compose setup and going to the signup page, I don't see any Steam provider, because `providers` is empty in the response from `_allauth/browser/v1/config`.
Any help or insights would be greatly appreciated. Thank you! | closed | 2024-07-10T14:30:47Z | 2024-09-10T10:23:44Z | https://github.com/pennersr/django-allauth/issues/3960 | [
"Feature request"
] | lachdab | 2 |
jina-ai/clip-as-service | pytorch | 579 | How to calculate relevance/score for a query asked by the user against the trained document on a scale of 0-1 | **Question**
How to calculate the relevance/score for a query asked by the user against the trained documents?
**Additional context**
@abhishekraok @DmitryKey
The relevance/score of the results obtained against the documents is very important to know, so that the user is aware of the confidence level of the retrieved documents.
Could you please let us know how this is calculated?
- If this is incorporated in this code base, where is it?
- How do we do it for a BERT model? (Is it using a dot-product calculation? A generic sketch is included below.)
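For reference, a minimal sketch of one common way to turn embedding similarity into a 0–1 score — this is generic cosine similarity, not necessarily what this code base actually does:

```python
import numpy as np

def relevance_score(query_vec, doc_vec):
    # Cosine similarity rescaled from [-1, 1] to [0, 1].
    q = np.asarray(query_vec, dtype=float)
    d = np.asarray(doc_vec, dtype=float)
    cos = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d) + 1e-12))
    return (cos + 1.0) / 2.0

# Example with made-up BERT-sized embeddings:
rng = np.random.default_rng(0)
query, doc = rng.normal(size=768), rng.normal(size=768)
print(relevance_score(query, doc))
```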
| open | 2020-07-24T16:53:55Z | 2020-08-20T10:27:45Z | https://github.com/jina-ai/clip-as-service/issues/579 | [] | nishithbenhur | 10 |
graphql-python/graphene-django | django | 571 | FilterSet fields associated with CharField with choices are not converted into graphene.Enum | closed | 2019-01-14T02:57:13Z | 2019-01-14T03:00:13Z | https://github.com/graphql-python/graphene-django/issues/571 | [] | vanyakosmos | 1 |
|
django-import-export/django-import-export | django | 1,174 | CHUNK_SIZE defaults to 1, so unless explicitly set, prefetching won't work | **Describe the bug**
Not setting `CHUNK_SIZE` on your resource causes it to default to 1. Since we need to paginate the queryset to enable prefetching, this means we iterate over the queryset row by row, which prevents prefetching from working.
Setting the `CHUNK_SIZE` explicitly against the resource fixes the issue and lets the prefetch work, but there's no signal that this problem could be happening to the user, which can be very confusing if you're new to prefetch and can't see why it's not working (not that I know anyone who fits that description, ahem)
**To Reproduce**
Steps to reproduce the behavior:
1. Remove the `CHUNK_SIZE` setting from your resource
2. Make a resource for a Django model that has a valid prefetch option, e.g. a `m2m_thing = ManyToManyField(…)`
3. Export the resource using `MyResource().export(queryset.prefetch_related('m2m_thing'))` (sketched below)
4. See in the query log that one query for `m2m_thing` is run for each row of the model
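A minimal sketch of such a resource and export call — the app, model, and field names here are hypothetical stand-ins for step 2 above:

```python
from import_export import fields, resources
from import_export.widgets import ManyToManyWidget

from myapp.models import MyModel, Tag  # assumed models; MyModel.m2m_thing -> Tag


class MyResource(resources.ModelResource):
    m2m_thing = fields.Field(
        column_name="m2m_thing",
        attribute="m2m_thing",
        widget=ManyToManyWidget(Tag, field="name"),
    )

    class Meta:
        model = MyModel
        # No CHUNK_SIZE set here, which is what triggers the row-by-row export.


# One query per row is issued for m2m_thing despite the prefetch:
dataset = MyResource().export(MyModel.objects.prefetch_related("m2m_thing"))
```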
**Versions:**
- Django Import Export: 2.3.0
- Python: 3.6.9
- Django: 3.0.8
**Expected behavior**
* Setting the `CHUNK_SIZE` default value to something higher, e.g. 5000?
* Possibly a warning output if `CHUNK_SIZE` isn't set but prefetches are being used on the queryset?
* A note added to the docs against `CHUNK_SIZE`?
| closed | 2020-08-17T11:23:37Z | 2020-10-19T07:25:48Z | https://github.com/django-import-export/django-import-export/issues/1174 | [
"bug",
"hacktoberfest"
] | jdewells | 6 |
encode/databases | sqlalchemy | 117 | performance issue | I have a big performance issue, 1.5s using the raw query, more than 16s using the sqlalchemy core version of it.
```
raw_query = str(stmt.compile(dialect=postgresql.dialect(), compile_kwargs={"literal_binds": True}))
conn = await asyncpg.connect(str(databaselake.url))
direct0 = time.time()
direct_result = await conn.fetch(raw_query)
await conn.close()
direct1 = time.time()
print(f"direct: {direct1-direct0}")
raw0 = time.time()
raw_result = await databaselake.fetch_all(raw_query)
raw1 = time.time()
print(f"raw: {raw1-raw0}")
stmt0 = time.time()
result = await databaselake.fetch_all(stmt)
stmt1 = time.time()
print(f"stmt: {stmt1 - stmt0}")
print("result ok")
```
direct: 1.3257312774658203
raw: 1.4148566722869873
stmt1: 19.825509071350098
`direct` and `raw` both use `raw_query` which is the fully compiled version of `stmt`
They have similar performance, in line with what I get from entering the query in psql, around 1.5s
`direct` uses asyncpg directly, this was just to test, while `raw` uses databases
now the fetch_all version using the sqlalchemy core `stmt` takes 16 times longer...
the query returns about 21 columns and 4500 rows. It's about 80 000 characters long, so I'm not sure it makes sense to paste it here, but maybe there's something else I haven't considered.
edit: the raw statement looks like this
```
select l.field1, l.field2, l.field3, ,......, l.field21
from lake l
join (
select x.unnest, x.ordinality
from unnest(array ['ticker1','ticker2', ....., 'ticker4299', 'ticker4300']) with ordinality
as x (unnest, ordinality)) as r on l.ticker = r.unnest
where time = '2019-06-20'
order by r.ordinality;
``` | closed | 2019-07-01T10:21:40Z | 2019-09-30T10:57:00Z | https://github.com/encode/databases/issues/117 | [] | euri10 | 6 |
ydataai/ydata-profiling | pandas | 1,272 | Python Script Killed when I am running auto profiling. It's executing for few mins and just when summarising the dataset it throws one column name and says "Killed". The machine is a 16GB node unix server | open | 2023-02-23T19:07:29Z | 2023-03-02T16:08:14Z | https://github.com/ydataai/ydata-profiling/issues/1272 | [
"information requested ❔",
"needs-triage"
] | JeevananthanKrishnasamy | 1 |
|
onnx/onnx | tensorflow | 5,914 | How to make `irfft` in ONNX | # Ask a Question
### Question
Hello! Can you point out how `irfft` can be used? I found issues and documentation on using `rfft`, but didn't find anything about `irfft`.
I found that https://github.com/onnx/onnx/issues/1646 and https://github.com/onnx/onnx/issues/3573 was closed with comment `All the other ops from the original list were added at some point.`. But I can't find any information related to `irfft`.
I would be glad to help!
| open | 2024-02-07T12:47:56Z | 2024-02-08T19:22:03Z | https://github.com/onnx/onnx/issues/5914 | [
"question"
] | grazder | 12 |
tortoise/tortoise-orm | asyncio | 1,495 | Big bug on create || post > id increment on fail post wasting unique id and spamming db | 
The id keeps increasing even on a failed POST, wasting unique ids, while this issue does not occur with SQLAlchemy | closed | 2023-10-14T09:28:03Z | 2023-10-16T00:33:33Z | https://github.com/tortoise/tortoise-orm/issues/1495 | [] | xalteropsx | 12 |
aleju/imgaug | deep-learning | 804 | AddToBrightness not deterministic/reproducible | I have found that the result of `AddToBrightness` is not deterministic, even when a seed is provided. Given that other probabilistic augmentations such as `Dropout` do seem to be deterministic when a seed is provided, I assume this is unexpected behavior.
The behavior can be reproduced with the following snippet:
```python
import hashlib
import imgaug
import numpy as np
from imgaug.augmenters import AddToBrightness, Dropout, Sequential
from numpy.random import default_rng
numpy_random_generator = default_rng(seed=42)
image = numpy_random_generator.integers(0, 255, (200, 200, 3), dtype=np.uint8)
imgaug_random_generator = imgaug.random.RNG(numpy_random_generator)
sequential = Sequential(
children=[
AddToBrightness(seed=imgaug_random_generator),
Dropout(seed=imgaug_random_generator),
],
seed=imgaug_random_generator,
)
image = sequential.augment_image(image)
def hash_image(image):
md5_hash = hashlib.md5()
md5_hash.update(image.data.tobytes())
return int(md5_hash.hexdigest(), 16)
print(hash_image(image))
```
When running this code, the hash will be different each time. I expect the hash to be the same each time, as the seed is provided.
When commenting out `AddToBrightness` and only applying `Dropout` augmentation, the seed is the same on each run. | open | 2022-01-13T14:39:23Z | 2022-01-13T14:39:23Z | https://github.com/aleju/imgaug/issues/804 | [] | kriskorrel-cw | 0 |
mljar/mercury | data-visualization | 54 | access previous execution | Please add an option to access notebooks from previous executions. Both the parameter values and the notebook should be displayed. | closed | 2022-03-04T14:16:31Z | 2022-07-13T10:10:41Z | https://github.com/mljar/mercury/issues/54 | [
"enhancement",
"Pro"
] | pplonski | 1 |
GibbsConsulting/django-plotly-dash | plotly | 331 | Stateless app loading hook exceptions should be captured and not propagated | A side effect of the misconfiguration in #330 is that it exposes a lack of exception handling around the hook to locate a stateless app.
The code in [get_local_stateless_by_name](https://github.com/GibbsConsulting/django-plotly-dash/blob/master/django_plotly_dash/dash_wrapper.py) should cleanly capture any exception raised by the hook.
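Something along these lines — purely illustrative, not the actual code in the repo, and `_stateless_app_lookup_hook` is a hypothetical name for the hook call:

```python
def get_local_stateless_by_name(name):
    # The hook used to locate the app should not propagate arbitrary
    # exceptions; report a clean lookup failure instead.
    try:
        app = _stateless_app_lookup_hook(name)
    except Exception as exc:
        raise LookupError(
            f"Stateless app loader for {name!r} raised an exception"
        ) from exc
    if app is None:
        raise LookupError(f"No stateless app named {name!r} could be found")
    return app
```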
| open | 2021-04-05T12:54:20Z | 2021-04-05T12:54:20Z | https://github.com/GibbsConsulting/django-plotly-dash/issues/331 | [
"enhancement"
] | GibbsConsulting | 0 |
ijl/orjson | numpy | 511 | Do not deserializing to python decimal.Decimal is a nightmare for floating numbers in JSON | Python uses floats for its JSON library. Herein lies the problem.
```
$ python
>>> import json
>>> json.dumps(10.001)
'10.000999999999999'
>>> json.loads('{"blaa": 0.333331}')
{u'blaa': 0.33333099999999999}
>>> type(json.loads('{"blaa": 0.333331}')['blaa'])
<type 'float'>
```
This is unacceptable.
see https://bitcointalk.org/index.php?topic=4086.msg58882#msg58882
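For comparison, the standard library can at least be told to parse floats as `Decimal` — a small sketch:

```python
import json
from decimal import Decimal

doc = json.loads('{"blaa": 0.333331}', parse_float=Decimal)
print(doc)                 # {'blaa': Decimal('0.333331')}
print(type(doc["blaa"]))   # <class 'decimal.Decimal'>
```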
In fact, it’s impossible to customize the behaviour of orjson to map to decimal values. This is weird. | closed | 2024-08-13T13:54:00Z | 2024-08-13T15:35:29Z | https://github.com/ijl/orjson/issues/511 | [] | wind-shift | 0 |
albumentations-team/albumentations | deep-learning | 2,188 | [New transform] PadIfNeeded3D | closed | 2024-12-12T02:30:34Z | 2024-12-13T23:44:52Z | https://github.com/albumentations-team/albumentations/issues/2188 | [
"enhancement"
] | ternaus | 1 |
|
tqdm/tqdm | jupyter | 1,542 | PackageNotFoundError | ```
Traceback (most recent call last):
  File "main.py", line 4, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "whisperx\__init__.py", line 1, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "whisperx\transcribe.py", line 9, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "whisperx\alignment.py", line 12, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "transformers\__init__.py", line 26, in <module>
  File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
  File "transformers\dependency_versions_check.py", line 57, in <module>
  File "transformers\utils\versions.py", line 117, in require_version_core
  File "transformers\utils\versions.py", line 104, in require_version
importlib.metadata.PackageNotFoundError: No package metadata was found for The 'tqdm>=4.27' distribution was not found and is required by this application.
Try: pip install transformers -U or pip install -e '.[dev]' if you're working with git main
[87568] Failed to execute script 'main' due to unhandled exception!
``` | open | 2023-12-26T09:54:06Z | 2023-12-26T09:54:06Z | https://github.com/tqdm/tqdm/issues/1542 | [] | WhozKunal | 0 |
gradio-app/gradio | deep-learning | 10,703 | [docs] "Adding custom CSS to your demo" is not updated for 5.0 | ### Describe the bug
"For additional styling ability, you can pass any CSS to your app using the css= kwarg. You can either the filepath to a CSS file, or a string of CSS code." silently fails for css from a file path since 5.0.
This is mentioned in #9463 but not updated in the doc.
See also #10344 for another issue on that page.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
Prerequisites: `styles.css` in the same directory defining `some-style`
```python
import gradio as gr
with gr.Blocks(fill_height=True, css="styles.css") as demo:
gr.Textbox(label="Time", elem_classes="some-style")
```
fails silently.
The correct parameter for css from a file path:
```
css_paths="styles.css"
```
### Screenshot
_No response_
### Logs
```shell
```
### System Info
```shell
Gradio >= 5.0
```
### Severity
I can work around it | closed | 2025-03-01T03:08:59Z | 2025-03-06T15:54:02Z | https://github.com/gradio-app/gradio/issues/10703 | [
"docs/website"
] | crcdng | 0 |
ultralytics/ultralytics | machine-learning | 19,467 | feature request about resuming model training with epochs change | ### Scene
Given a model that has already been trained for 100 epochs, I want to resume training from the previously trained model while increasing or decreasing the number of epochs.
first 100 epochs:
```python
model = YOLO("yolo.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=100,
)
```
then, resume training and change epochs:
```python
model = YOLO("epoch90.pt")
results = model.train(
data="datasets/xxx.yaml",
save_period=2,
epochs=300, # more epochs
resume=True, # resume flag
)
```
but `epochs=300` does not work; training still stops at 100.
As already pointed out in https://github.com/ultralytics/ultralytics/issues/16402 and https://github.com/ultralytics/ultralytics/issues/18154, I'm wondering why changing epochs when resuming is not supported in the current code base.
Is it possible to support it?
| closed | 2025-02-27T20:20:20Z | 2025-03-01T16:26:39Z | https://github.com/ultralytics/ultralytics/issues/19467 | [
"question"
] | yantaozhao | 2 |
streamlit/streamlit | streamlit | 10,103 | The popover doesn't hide when it's inside a fragment and depends on another widget to be visible. | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
When I add a `popover` that depends on another widget to be visible, once it becomes visible, it never hides again.
### Reproducible Code Example
```Python
from dataclasses import dataclass
from typing import Union
import streamlit as st
@dataclass
class Test:
name: str = "Test"
child: Union["Test", None] = None
@st.experimental_fragment
def show(self):
content_container, config_container = st.columns([1, 1])
with content_container:
st.write(f"This is the content of {self.name}")
with config_container:
with st.popover("Config"):
st.write("This is the config")
if st.toggle(label="Show child", key=self.name):
self.show_child()
def show_child(self):
if self.child is None:
st.write("No child")
return
self.child.show()
Test(child=Test("Child")).show()
```
### Steps To Reproduce
1. Initial state

2. Enable the `toggle`

3. Disable the `toggle`

### Expected Behavior
I expect the `popover` the be visible only if the `toggle` is enabled.
### Current Behavior
The `popover` remains visible but "disabled" if it has been rendered at least once and the `toggle` is disabled.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.36.0
- Python version: 3.11.10
- Operating System: WSL Ubuntu 22.04.5 LTS
- Browser: Chrome
### Additional Information
_No response_ | closed | 2025-01-03T15:17:18Z | 2025-01-06T20:46:54Z | https://github.com/streamlit/streamlit/issues/10103 | [
"type:bug",
"status:cannot-reproduce",
"feature:st.popover",
"feature:st.fragment"
] | jgabriellacerda | 3 |
OpenInterpreter/open-interpreter | python | 835 | A flag to disable running code entirely and give replies only | ### Is your feature request related to a problem? Please describe.
Hello, it seems there is no way to disable running code by default when returning responses, which can cause the response to freeze while waiting in an infinite loop for a y/n. This seems like an oversight, considering there is already an option to execute code by default with `interpreter -y`.
### Describe the solution you'd like
Have a simple flag that can be toggled in the CLI and API, such as `interpreter -nr` and `res = interpreter.chat("Explain what is Python?", disable_code_run=True)`.
### Describe alternatives you've considered
Telling the interpreter not to run code doesn't always work, and it sometimes gets stuck at the prompt.
### Additional context
_No response_ | closed | 2023-12-16T04:12:06Z | 2024-03-19T18:38:02Z | https://github.com/OpenInterpreter/open-interpreter/issues/835 | [
"Enhancement"
] | JohnnyRacer | 1 |
DistrictDataLabs/yellowbrick | matplotlib | 1,209 | Argument to PredictionError is overwritten | **Describe the bug**
The `is_fitted` argument is overwritten when instantiating a `PredictionError` or `ResidualsPlot`.
**To Reproduce**
```python
from yellowbrick.regressor import PredictionError
from sklearn.linear_model import Lasso
visualizer = PredictionError(Lasso(), "test title", is_fitted=True)
print(visualizer.is_fitted)
```
**Expected behavior**
When `is_fitted` is passed in, I expect it not to be overwritten in the `__init__` calls
**Desktop (please complete the following information):**
- OS: Ubuntu 20.04
- Python Version 3.8.10
- Yellowbrick Version : 1.3.post1
| closed | 2022-01-21T01:07:29Z | 2022-02-25T22:56:02Z | https://github.com/DistrictDataLabs/yellowbrick/issues/1209 | [
"type: bug"
] | aramesh7 | 1 |
microsoft/MMdnn | tensorflow | 431 | Error: Convert Pytorch model to IR model | I trained the model with PyTorch and used torch.save() to save the whole model, including weights and network, but when I run "mmtoir -f pytorch -d mn --inputShape 1,28,28 -n xxx.pth" I get the following error:
> "AttributeError: Can't get attribute 'Net' on <module '__main__' from '/home/xxxx/anaconda3/bin/mmtoir'>"
>
It seems mmtoir cannot find my network definition? | open | 2018-09-28T06:03:41Z | 2018-09-29T12:10:04Z | https://github.com/microsoft/MMdnn/issues/431 | [] | wang5566 | 1 |
kubeflow/katib | scikit-learn | 2,011 | Dedicated yaml tab for Trials | /kind feature
**Describe the solution you'd like**
Our goal is to create a new yaml tab that will show the full YAML as it comes from the backend using the new Editor component introduced in https://github.com/kubeflow/kubeflow/pull/6733.
Related issue: https://github.com/kubeflow/katib/issues/1763
Love this feature? Give it a 👍 We prioritize the features with the most 👍
| closed | 2022-11-14T13:06:37Z | 2023-08-24T15:19:47Z | https://github.com/kubeflow/katib/issues/2011 | [
"kind/feature",
"lifecycle/stale"
] | elenzio9 | 2 |
holoviz/panel | matplotlib | 7,551 | Tabulator : tooltips | Hello all,
#### ALL software version info
MacOs with Chrome, Safari or FireFox
bokeh 3.6.1 and panel >= 1.5.2
#### Description of expected behavior and the observed behavior
The issue occurs in Tabulator when using `header_tooltips` with a `FastListTemplate`. The background and font colors of the tooltips are both dark, making the text unreadable.
I couldn't find the CSS responsible for the background color.
#### Complete, minimal, self-contained example code that reproduces the issue
```python
import pandas as pd
import panel as pn
import random
import numpy as np
pn.extension('tabulator')
n = 100
data = {
"ID": range(1, n + 1),
"Name": [f"Name_{i}" for i in range(1, n + 1)],
"Age": [random.randint(18, 70) for _ in range(n)],
"Score": [round(random.uniform(50, 100), 2) for _ in range(n)],
"Category": [random.choice(["A", "B", "C"]) for _ in range(n)],
"Active": [random.choice([True, False]) for _ in range(n)],
"Date": pd.date_range("2023-01-01", periods=n),
"Comment": [f"Comment_{i}" for i in range(1, n + 1)],
"Rating": [round(random.uniform(1, 5), 1) for _ in range(n)],
"Value": np.random.randint(100, 500, size=n)}
df = pd.DataFrame(data)
htt = {x: x for x in data.keys()}
tabulator = pn.widgets.Tabulator(df, page_size=10, sizing_mode='stretch_width', header_tooltips=htt)
# app = tabulator # OK
template = pn.template.FastListTemplate(title="Tabulator test", main=[tabulator]) # bug
# template = pn.template.BootstrapTemplate(title="Tabulator test", main=[tabulator]) # OK
# template = pn.template.MaterialTemplate(title="Tabulator test", main=[tabulator]) # OK
# template = pn.template.MaterialTemplate(title="Tabulator test", main=[tabulator]) # OK
app = template
app.servable()
app.show()
```
#### Screenshots or screencasts of the bug in action
<img width="770" alt="Image" src="https://github.com/user-attachments/assets/76b5606e-03cc-4505-85b6-ef379496675a" />
| open | 2024-12-13T10:26:04Z | 2025-03-11T14:36:00Z | https://github.com/holoviz/panel/issues/7551 | [] | symelmu | 0 |
AirtestProject/Airtest | automation | 830 | 1.2.6 cannot connect to an emulator in portrait mode with javacap, only in landscape mode | **Describe the bug**
Following the tutorial, I selected "use javacap" when connecting the emulator, but then it cannot connect to an emulator that is in portrait mode. I tried mumu emulator (overseas version) and ldplayer and neither works, yet in landscape mode it connects perfectly. If "use javacap" is unchecked, minicap can also connect in portrait mode, but the image is rather blurry, so I would still like to use javacap.
***Use javacap connect portrait mode log (mumu emulator)***:
```
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:10:58][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:10:58][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:10:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:10:59][INFO]<airtest.core.android.minicap> minicap installation finished
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:10:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys package com.netease.nie.yosemite ; echo ---$?---
[07:11:00][INFO]<airtest.core.android.yosemite> local version code is 302, installed version code is 302
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:15578 localabstract:javacap_15578
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path com.netease.nie.yosemite ; echo ---$?---
[07:11:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell CLASSPATH=/data/app/com.netease.nie.yosemite-1/base.apk exec app_process /system/bin com.netease.nie.yosemite.Capture --scale 100 --socket javacap_15578 -lazy 2>&1
[07:11:00][DEBUG]<airtest.utils.nbsp> [javacap_sever]b'Capture server listening on @javacap_15578'
[07:11:00][ERROR]<airtest> Traceback (most recent call last):
File "app\plugins\devicepool\android\device.py", line 342, in run
File "app\plugins\devicepool\android\device.py", line 351, in main_loop
File "airtest\core\android\javacap.py", line 101, in get_frame_from_stream
return self.frame_gen.send(None)
StopIteration
[Warning] The current device is unplugged!
```
***Use minicap connect portrait mode log (mumu emulator)***:
```
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\helper.py:41: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
cls.LOGGING.warn("Device:%s updated %s -> %s" % (dev.uuid, instance, dev))
[07:20:58][WARNING]<airtest.core.api> Device:127.0.0.1:7555 updated <airtest.core.android.android.Android object at 0x000002244D4A9CF8> -> <airtest.core.android.android.Android object at 0x000002244C134F60>
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:20:58][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:20:58][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:20:58][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:20:59][INFO]<airtest.core.android.minicap> minicap installation finished
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:20:59][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:14428 localabstract:minicap_14428
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_14428' -P 1600x900@450x800/270 -l 2>&1
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'PID: 3115'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: Using projection 1600x900@450x253/3'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 4379652 bytes for JPG encoder'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:489) Server start'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:492) New client connection'
[07:21:00][DEBUG]<airtest.core.android.minicap> (1, 24, 3115, 1600, 900, 450, 253, 3, 2)
[07:21:00][DEBUG]<airtest.core.android.minicap> quirk_flag found, going to resetup
[07:21:00][DEBUG]<airtest.core.android.minicap> minicap stream ends
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --remove tcp:14428
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:19850 localabstract:minicap_19850
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:21:00][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -n 'minicap_19850' -P 900x1600@800x450/0 -l 2>&1
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'PID: 3141'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: Using projection 900x1600@253x450/0'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:240) Creating SurfaceComposerClient'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:243) Performing SurfaceComposerClient init check'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:250) Creating virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:256) Creating buffer queue'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:261) Creating CPU consumer'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:265) Creating frame waiter'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (external/MY_minicap/src/minicap_23.cpp:269) Publishing virtual display'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/JpgEncoder.cpp:64) Allocating 4379652 bytes for JPG encoder'
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:489) Server start'
[07:21:00][DEBUG]<airtest.core.android.minicap> (1, 24, 3141, 900, 1600, 253, 450, 0, 2)
[07:21:00][DEBUG]<airtest.utils.nbsp> [minicap_server]b'INFO: (jni/minicap/minicap.cpp:492) New client connection'
```
***Use javacap connect landscape mode log (mumu emulator)***:
```
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 get-state
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 wait-for-device
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.sdk
F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\helper.py:41: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
cls.LOGGING.warn("Device:%s updated %s -> %s" % (dev.uuid, instance, dev))
[07:22:55][WARNING]<airtest.core.api> Device:127.0.0.1:7555 updated <airtest.core.android.android.Android object at 0x000002244C134F60> -> <airtest.core.android.android.Android object at 0x0000022451BC9B70>
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap ; echo ---$?---
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell ls /data/local/tmp/minicap.so ; echo ---$?---
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -v 2>&1
[07:22:55][DEBUG]<airtest.core.android.minicap> WARNING: linker: /data/local/tmp/minicap has text relocations. This is wasting memory and prevents security hardening. Please fix.
version:5
[07:22:55][DEBUG]<airtest.core.android.minicap> upgrade minicap to lastest version: -1->6
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell rm -r /data/local/tmp/minicap*
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.product.cpu.abi
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.preview_sdk
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell getprop ro.build.version.release
[07:22:55][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\x86\minicap /data/local/tmp/minicap
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 push F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\stf_libs\minicap-shared/aosp/libs/android-23/x86/minicap.so /data/local/tmp/minicap.so
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell chmod 755 /data/local/tmp/minicap.so ; echo ---$?---
[07:22:56][INFO]<airtest.core.android.minicap> minicap installation finished
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell LD_LIBRARY_PATH=/data/local/tmp /data/local/tmp/minicap -i ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys window displays ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path jp.co.cyberagent.stf.rotationwatcher ; echo ---$?---
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell export CLASSPATH=/data/app/jp.co.cyberagent.stf.rotationwatcher-1/base.apk;exec app_process /system/bin jp.co.cyberagent.stf.rotationwatcher.RotationWatcher
[07:22:56][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell dumpsys package com.netease.nie.yosemite ; echo ---$?---
[07:22:57][INFO]<airtest.core.android.yosemite> local version code is 302, installed version code is 302
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 forward --no-rebind tcp:18115 localabstract:javacap_18115
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell pm path com.netease.nie.yosemite ; echo ---$?---
[07:22:57][DEBUG]<airtest.core.android.adb> F:\Program Files\AirtestIDE-win-1.2.6\AirtestIDE\airtest\core\android\static\adb\windows\adb.exe -P 5037 -s 127.0.0.1:7555 shell CLASSPATH=/data/app/com.netease.nie.yosemite-1/base.apk exec app_process /system/bin com.netease.nie.yosemite.Capture --scale 100 --socket javacap_18115 -lazy 2>&1
[07:22:57][DEBUG]<airtest.utils.nbsp> [javacap_sever]b'Capture server listening on @javacap_18115'
[07:22:57][DEBUG]<airtest.core.android.javacap> (1, 3, 0, 1600, 900, 0, 0, 0, 1)
```
**Related screenshots**
(Paste screenshots of the problem here, if any)
(For image- and device-related problems in AirtestIDE, please paste the relevant error output from the AirtestIDE console window)
**Steps to reproduce**
1. Open mumu emulator and enable the ADB connection
2. Add the remote connection "adb connect 127.0.0.1:7555"
3. Check "use javacap"
4. Rotate the mumu emulator screen
5. Click connect
**Expected behavior**
Both landscape and portrait modes should connect normally with javacap.
**python version:** `AirtestIDE 1.2.6 default python (3.6.5)`
**airtest version:** `1.2.6`
**Device:**
- Model: mumu emulator 2.3.18, desktop launcher 2.4.0
- OS: Android 6.0.1
**Other relevant environment info**
Connection works fine in landscape mode or when using minicap.
| closed | 2020-11-14T03:31:39Z | 2021-05-06T03:32:26Z | https://github.com/AirtestProject/Airtest/issues/830 | [] | Anyrainel | 4 |
biosustain/potion | sqlalchemy | 88 | Minor fix for Meta docs | The documentation says that the `name` attribute of Meta defaults to `the lower-case of the model's class name`, but it actually defaults to the [lower-case of the model's table name](https://github.com/biosustain/potion/blob/af3f52173e97ef41bddfa8f4775280c9b8d3188d/flask_potion/contrib/alchemy/manager.py#L60).
| closed | 2016-07-03T12:15:18Z | 2016-07-04T16:17:13Z | https://github.com/biosustain/potion/issues/88 | [] | Alain1405 | 0 |
jmcnamara/XlsxWriter | pandas | 211 | Shebang on vba_extract.py set incorrectly in wheel | The shebang at the beginning of vba_extract.py is set to `#!/Users/John/.pythonbrew/pythons/Python-2.7.2/bin/python` which is specific to a particular installation. It should probably be changed to just `#!python`.
Example which demonstrates the problem:
``` shell
$ vba_extract.py
bash: /usr/local/bin/vba_extract.py: /Users/John/.pythonbrew/pythons/Python-2.7.2/bin/python: bad interpreter: No such file or directory
```
| closed | 2015-01-15T17:42:36Z | 2015-02-04T14:06:19Z | https://github.com/jmcnamara/XlsxWriter/issues/211 | [
"bug",
"ready to close"
] | cassidylaidlaw | 4 |
MagicStack/asyncpg | asyncio | 711 | Keep pool connections count not less than min_size | Is there a way to specify the pool's min_size so that, even if a connection has been idle for longer than `max_inactive_connection_lifetime`, it won't be closed when the pool has exactly min_size connections open?
From what I see in the code, there is no such possibility at the moment and it may be tricky to add.
We have traffic spikes and overall request variation during the day, i.e. there are more requests in the evening than in the morning, and there is significant hourly variation in the number of requests.
So we want to always have min_size connections open, but at the same time we don't want to keep all open connections active all the time.
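For context, this is roughly the pool configuration being discussed — a minimal sketch, with a placeholder DSN and made-up sizes:

```python
import asyncio
import asyncpg

async def main():
    pool = await asyncpg.create_pool(
        dsn="postgresql://user:pass@localhost/db",   # placeholder
        min_size=5,     # we'd like at least this many connections kept open
        max_size=20,
        # Per this report, idle connections are closed after this many seconds
        # even if that leaves fewer than min_size open — the behaviour we'd
        # like to be able to change.
        max_inactive_connection_lifetime=300,
    )
    async with pool.acquire() as conn:
        print(await conn.fetchval("SELECT 1"))
    await pool.close()

asyncio.run(main())
```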
| closed | 2021-03-04T11:11:26Z | 2021-03-17T10:57:28Z | https://github.com/MagicStack/asyncpg/issues/711 | [] | lazitski-aliaksei | 1 |
django-import-export/django-import-export | django | 1,085 | ForeignKeyWidget through a OneToOneField w/ self as input | My model is based on a network connection:
```python
# models.py
class NetworkPort(models.Model):
    label = models.ForeignKey(
        NetworkPortLabel,
        on_delete=models.PROTECT,
    )
    asset = models.ForeignKey(
        Asset,
        on_delete=models.CASCADE
    )
    connection = models.OneToOneField(
        "self",
        null=True,
        on_delete=models.SET_NULL
    )

    class Meta:
        unique_together = ['label', 'asset']
```
Here's the NetworkPortResource:
```python
# resources.py
class NetworkPortResource(resources.ModelResource):
    class SrcPortForeignKeyWidget(ForeignKeyWidget):
        def clean(self, value, row):
            my_asset = Asset.objects.get(hostname=row['src_hostname'])
            return self.model.objects.get(
                name__iexact=row['src_port'],
                itmodel__vendor__iexact=my_asset.itmodel.vendor,
                itmodel__model_number__iexact=my_asset.itmodel.model_number
            )

    class DestPortForeignKeyWidget(ForeignKeyWidget):
        def clean(self, value, row):
            my_asset = Asset.objects.get(hostname=row['dest_hostname'])
            return self.model.objects.get(
                name__iexact=row['dest_port'],
                itmodel__vendor__iexact=my_asset.itmodel.vendor,
                itmodel__model_number__iexact=my_asset.itmodel.model_number
            )

    src_hostname = fields.Field(
        column_name='src_hostname',
        attribute='asset',
        widget=ForeignKeyWidget(Asset, 'hostname')
    )
    src_port = fields.Field(
        column_name='src_port',
        attribute='label',
        widget=SrcPortForeignKeyWidget(NetworkPortLabel, 'name')
    )
    src_mac = fields.Field(
        column_name='src_mac',
        attribute='asset',
        widget=ForeignKeyWidget(Asset, 'mac_address')
    )
    dest_hostname = fields.Field(
        column_name='dest_hostname',
        attribute='connection',
        widget=ForeignKeyWidget(Asset, 'hostname')
    )
    dest_port = fields.Field(
        column_name='dest_port',
        attribute='connection',
        widget=DestPortForeignKeyWidget(NetworkPortLabel, 'name')
    )

    class Meta:
        model = NetworkPort
        exclude = ('id', 'label', 'asset', 'connection')
        import_id_fields = ('src_hostname', 'src_port')
        export_order = ('src_hostname', 'src_port', 'src_mac', 'dest_hostname', 'dest_port')
        skip_unchanged = True
        report_skipped = True
        clean_model_instances = True
```
I'm not sure how to assign the dest_hostname and dest_port since they are subfields of the connection field. I'm currently getting this error message when attempting import:
Cannot assign "<NetworkPortLabel: Eth1 : V1 by Dell>": "NetworkPort.connection" must be a "NetworkPort" instance. | closed | 2020-02-22T19:29:54Z | 2020-03-03T05:14:54Z | https://github.com/django-import-export/django-import-export/issues/1085 | [] | CarterGay | 0 |
koxudaxi/datamodel-code-generator | pydantic | 2,199 | Incorrect type annotations for untyped JSON schema arrays | **Describe the bug**
When a JSON schema field is declared as `"type": "array"` with no further details, the generated data model code fails typechecking:
```
src/_models/__init__.py:1990: error: Missing type parameters for generic type "List" [type-arg]
src/_models/__init__.py:1991: error: Missing type parameters for generic type "List" [type-arg]
```
**To Reproduce**
Example schema snippet:
```json
"system/rpc/listDownloadedModels/returns": {
"type": "array"
},
```
Used commandline:
```
$ datamodel-codegen --input /path/to/schema.json --input-file-type jsonschema --output /path/to/src/_models/__init__.py \
--output-model-type pydantic_v2.BaseModel --use-annotated --use-union-operator
```
The generated model that fails typechecking:
```python
class SystemRpcListDownloadedModelsReturns(RootModel[List]):
root: List
```
**Expected behavior**
Untyped arrays should be effectively typed as `List[Any]` rather than `List`
**Version:**
- OS: Linux
- Python version: 3.12
- datamodel-code-generator version: 0.26.3
**Additional context**
`--use-generic-container-types` just shifted the error to complaining about `Sequence` instead.
`--use-standard-collections` crashed `mypy` outright, so I didn't investigate that any further.
The offending schema field not having any type information at all regarding the permitted elements is probably a bug in its own right, but not one that has anything to do with datamodel-code-generator.
| open | 2024-11-28T13:45:40Z | 2024-12-03T07:14:09Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2199 | [] | ncoghlan | 1 |
ultralytics/ultralytics | computer-vision | 19,213 | Questions about new NMS Export for Detect, Segment, Pose and OBB YOLO | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Thanks for this nice feature to have the option to embed the nms in the backends.
If I understand the source code correctly, there are now there two different methods of doing the nms:
- normal NMS: using non_max_suppression in utils/ops.py
- NMS export: using NMSModel.forward in engine.exporter.py
Questions:
- Is it to be expected that both methods will give exactly the same results?
- What is the (technical) reason that non_max_suppression is not used for NMS export?
Please enlighten me.
### Additional
_No response_ | open | 2025-02-12T16:44:46Z | 2025-02-14T18:42:05Z | https://github.com/ultralytics/ultralytics/issues/19213 | [
"question",
"detect",
"exports"
] | VdLMV | 6 |
nltk/nltk | nlp | 3,147 | Add Swahili Stopwords | From #2800 | open | 2023-04-24T06:04:48Z | 2023-04-24T06:04:48Z | https://github.com/nltk/nltk/issues/3147 | [] | OmondiVincent | 0 |
serengil/deepface | machine-learning | 833 | Deepface.Represent | When I set the argument force_detection = false, what will the function return? | closed | 2023-08-25T19:34:56Z | 2023-08-25T19:59:23Z | https://github.com/serengil/deepface/issues/833 | [
"question"
] | RafaelFerraria | 1 |
flaskbb/flaskbb | flask | 110 | keyerror ONLINE_LAST_MINUTES | I cloned the repo and followed the docs to deploy my flaskbb, but when I run the command:
`python manage.py createall`
it says the command is not found, so I used `python manage.py initdb` instead.
I also ran:
`cp flaskbb/configs/production.py.example flaskbb/configs/production.py`
but when I run:
`python manage.py runserver`
it raises a KeyError:
```
Traceback (most recent call last):
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1836, in __call__
    return self.wsgi_app(environ, start_response)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1820, in wsgi_app
    response = self.make_response(self.handle_exception(e))
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1403, in handle_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1817, in wsgi_app
    response = self.full_dispatch_request()
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1477, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1381, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask/app.py", line 1475, in full_dispatch_request
    rv = self.dispatch_request()
  File "/Users/xiaobing/workspace/flask_apps/venv/lib/python2.7/site-packages/flask_debugtoolbar/__init__.py", line 124, in dispatch_request
    return view_func(**req.view_args)
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/forum/views.py", line 47, in index
    online_users = User.query.filter(User.lastseen >= time_diff()).count()
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/utils/helpers.py", line 296, in time_diff
    diff = now - timedelta(minutes=flaskbb_config['ONLINE_LAST_MINUTES'])
  File "/Users/xiaobing/workspace/flask_apps/flaskbb/flaskbb/utils/settings.py", line 26, in __getitem__
    return Setting.as_dict()[key]
KeyError: 'ONLINE_LAST_MINUTES'
```
| closed | 2015-04-10T10:22:46Z | 2018-04-15T07:47:35Z | https://github.com/flaskbb/flaskbb/issues/110 | [] | GreatmanBill | 1 |
zappa/Zappa | django | 1,360 | ImproperlyConfigured: Error loading psycopg2 or psycopg module on AWS | <!--- Provide a general summary of the issue in the Title above -->
## Context
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 3.8/3.9/3.10/3.11/3.12 -->
## Expected Behavior
Zappa should update the dev stage when I run `zappa update dev`. It was working before without having to do anything.
## Actual Behavior
When I do zappa update dev I get the error "Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 502 response code."
When I do zappa tail I get the following error message
ImproperlyConfigured: Error loading psycopg2 or psycopg module
## Possible Fix
It was working when it showed INFO: - pyyaml==2.9.9: Using locally cached manylinux wheel in the output. Somehow set it back to that or something.
## Steps to Reproduce
I have a lambda django application configured to connect to postgres database backend
On macOS Sequoia 15.2, running a virtual environment with pip and Python 3.12.
try a zappa update of the running code.
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
Output of `pip freeze`:
```
argcomplete==3.5.2
asgiref==3.8.1
boto3==1.35.90
botocore==1.35.90
certifi==2024.12.14
cfn-flip==1.3.0
charset-normalizer==3.4.1
click==8.1.8
Django==5.1.4
django-cors-headers==4.6.0
django-phonenumber-field==8.0.0
djangorestframework==3.15.2
djangorestframework-simplejwt==5.3.1
durationpy==0.9
emoji==2.14.0
hjson==3.1.0
idna==3.10
iniconfig==2.0.0
Jinja2==3.1.5
jmespath==1.0.1
kappa==0.6.0
MarkupSafe==3.0.2
packaging==24.2
phonenumbers==8.13.52
placebo==0.9.0
pluggy==1.5.0
psycopg==3.2.3
psycopg2-binary==2.9.10
PyJWT==2.10.1
pytest==8.3.4
pytest-django==4.9.0
pytest-html==4.1.1
pytest-metadata==3.1.1
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
python-slugify==8.0.4
pytz==2024.2
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.3
s3transfer==0.10.4
setuptools==75.6.0
six==1.17.0
sqlparse==0.5.3
stripe==11.4.1
text-unidecode==1.3
toml==0.10.2
tqdm==4.67.1
troposphere==4.8.3
typing_extensions==4.12.2
urllib3==2.3.0
Werkzeug==3.1.3
wheel==0.45.1
zappa==0.59.0
```
| closed | 2024-12-30T20:12:38Z | 2025-01-07T15:03:35Z | https://github.com/zappa/Zappa/issues/1360 | [] | MichaelT-178 | 0 |
serengil/deepface | deep-learning | 512 | Is DeepFace differentiable, i.e., able to back-propagate gradients to update input? | I want to use deepface to analyze our synthesized images. The loss is computed based on deepface's output. And we want to back-propagate the loss gradient through deepface to update the synthesizer. Is it possible? | closed | 2022-07-15T23:35:01Z | 2024-09-20T08:26:21Z | https://github.com/serengil/deepface/issues/512 | [
"question"
] | BannyStone | 3 |
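A hedged sketch for the deepface question above: the recognition backbones are ordinary Keras models, so gradients can flow through them if the non-differentiable detection/alignment pipeline is bypassed. Whether `DeepFace.build_model` returns the Keras model directly or a wrapper object differs between versions, so the unwrapping below is an assumption; the input/output shapes refer to VGG-Face.

```python
import tensorflow as tf
from deepface import DeepFace

embedder = DeepFace.build_model("VGG-Face")
keras_model = getattr(embedder, "model", embedder)  # unwrap if a wrapper object is returned

synthesized = tf.Variable(tf.random.uniform((1, 224, 224, 3)))   # stand-in for generator output
target_embedding = tf.random.normal((1, 2622))                   # stand-in reference embedding

with tf.GradientTape() as tape:
    embedding = keras_model(synthesized, training=False)
    loss = tf.reduce_mean(tf.square(embedding - target_embedding))

grads = tape.gradient(loss, synthesized)
print(grads is not None)  # non-None gradients mean the path is differentiable
```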
horovod/horovod | tensorflow | 3,435 | Ray tests are flaky | Ray tests have proven to be flaky, especially with GPU (Buildkite CI).
There are two places that cause this flakiness:
1. Some tests fetch `ray.available_resources()` at the beginning of the test and compare the `dict` against `ray.available_resources()` after the test: `assert check_resources(original_resources)`. It looks like that `dict` has race conditions: https://buildkite.com/horovod/horovod/builds/7306#cc0600a5-13ed-479d-ba1b-3bb0f6847992/332-436
```
AssertionError: assert {'CPU': 4.0, ...147712.0, ...} == {'object_stor... 9999999984.0}
Differing items:
{'object_store_memory': 10000000000.0} != {'object_store_memory': 9999999984.0}
Left contains 5 more items:
{'CPU': 4.0,
'GPU': 4.0,
'accelerator_type:T4': 1.0,
'memory': 189132147712.0,...
```
2. Some tests see more GPUs than there should be:
```
assert len(all_envs[0]["CUDA_VISIBLE_DEVICES"].split(",")) == 4
assert 8 == 4
+8
-4
```
First, the tests should work with any number of GPUs (at least the required minimum of 4). Second, the tests run on a Buildkite machine with only 4 GPUs and only one agent per machine, so there cannot be more than 4 GPUs visible to those tests. It looks like the `RayExecutor` provides that environment variable to the workers and somehow starts 8 workers rather than 4 (a sketch of more tolerant checks follows below). | open | 2022-03-01T11:21:30Z | 2023-02-13T11:12:39Z | https://github.com/horovod/horovod/issues/3435 | [
"bug",
"ray"
] | EnricoMi | 4 |
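A hedged sketch of the more tolerant checks referenced above; it assumes pytest-style tests and the public `ray.available_resources()` API, and the helper names are illustrative rather than Horovod's actual test code.

```python
import time
import ray

def wait_for_resources(expected: dict, timeout: float = 30.0) -> bool:
    """Poll ray.available_resources() until it matches, instead of a single racy comparison."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ray.available_resources() == expected:
            return True
        time.sleep(0.5)
    return False

def assert_visible_gpus(all_envs: list, min_gpus: int = 4) -> None:
    """Require at least min_gpus visible devices instead of exactly four."""
    visible = all_envs[0]["CUDA_VISIBLE_DEVICES"].split(",")
    assert len(visible) >= min_gpus
```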
iperov/DeepFaceLab | machine-learning | 5,280 | Unable to start subprocesses | ## Expected behavior
Trying to extract faces from data_src
## Actual behavior
```
Extracting faces...
Error while subprocess initialization: Traceback (most recent call last):
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 62, in _subprocess_run
self.on_initialize(client_dict)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\mainscripts\Extractor.py", line 68, in on_initialize
nn.initialize (device_config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\leras\nn.py", line 123, in initialize
nn.tf_sess = tf.Session(config=nn.tf_sess_config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1551, in __init__
super(Session, self).__init__(target, graph, config=config)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 676, in __init__
self._session = tf_session.TF_NewSessionRef(self._graph._c_graph, opts)
tensorflow.python.framework.errors_impl.InternalError: cudaGetDevice() failed. Status: CUDA driver version is insufficient for CUDA runtime version
Traceback (most recent call last):
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\main.py", line 324, in <module>
arguments.func(arguments)
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\main.py", line 45, in process_extract
force_gpu_idxs = [ int(x) for x in arguments.force_gpu_idxs.split(',') ] if arguments.force_gpu_idxs is not None else None,
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\mainscripts\Extractor.py", line 853, in main
device_config=device_config).run()
File "D:\depfak lab\DeepFaceLab_NVIDIA_RTX2080Ti_and_earlier\_internal\DeepFaceLab\core\joblib\SubprocessorBase.py", line 210, in run
raise Exception ( "Unable to start subprocesses." )
Exception: Unable to start subprocesses.
Press any key to continue . . .
```
## Steps to reproduce
Well, changing the settings did nothing, so is it because of my hardware?
GTX 1060 3GB
Intel core i5 4570
16GB DDR3
## Other relevant information
Running Windows 10 | closed | 2021-02-09T17:58:50Z | 2023-07-08T13:48:49Z | https://github.com/iperov/DeepFaceLab/issues/5280 | [] | makkarakka3 | 2 |
iperov/DeepFaceLab | deep-learning | 5,266 | DFL crashed with 3080 | DFL crashed and then I got this error.
```
=============== Model Summary ===============
== ==
== Model name: Matt2_SAEHD ==
== ==
== Current iteration: 3594 ==
== ==
==------------- Model Options -------------==
== ==
== resolution: 128 ==
== face_type: wf ==
== models_opt_on_gpu: True ==
== archi: df ==
== ae_dims: 128 ==
== e_dims: 64 ==
== d_dims: 64 ==
== d_mask_dims: 22 ==
== masked_training: True ==
== eyes_mouth_prio: True ==
== uniform_yaw: False ==
== adabelief: True ==
== lr_dropout: n ==
== random_warp: False ==
== true_face_power: 0.01 ==
== face_style_power: 0.01 ==
== bg_style_power: 0.0 ==
== ct_mode: none ==
== clipgrad: False ==
== pretrain: False ==
== autobackup_hour: 0 ==
== write_preview_history: False ==
== target_iter: 0 ==
== random_flip: True ==
== batch_size: 8 ==
== gan_power: 0.1 ==
== gan_patch_size: 16 ==
== gan_dims: 16 ==
== ==
==-------------- Running On ---------------==
== ==
== Device index: 0 ==
== Name: GeForce RTX 3080 ==
== VRAM: 10.00GB ==
== ==
=============================================
Error: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[node gradients/Mean_11_grad/Reshape (defined at D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Original stack trace for 'gradients/Mean_11_grad/Reshape':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 468, in on_initialize
gpu_D_code_loss_gvs += [ nn.gradients (gpu_D_code_loss, self.code_discriminator.get_weights() ) ]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
unconnected_gradients)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 340, in _MaybeCompile
return grad_fn() # Exit early
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in <lambda>
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 255, in _MeanGrad
sum_grad = _SumGrad(op, grad)[0]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 216, in _SumGrad
grad = array_ops.reshape(grad, output_shape_kept_dims)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8377, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
...which was originally created as op 'Mean_11', defined at:
File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 465, in on_initialize
gpu_D_code_loss = (DLoss(gpu_src_code_d_ones , gpu_dst_code_d) + \
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in DLoss
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits), axis=[1,2,3])
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2312, in reduce_mean_v1
return reduce_mean(input_tensor, axis, keepdims, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2372, in reduce_mean
name=name))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5781, in mean
name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
Traceback (most recent call last):
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1375, in _do_call
return fn(*args)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1360, in _run_fn
target_list, run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1453, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[{{node gradients/Mean_11_grad/Reshape}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 130, in trainerThread
iter, iter_time = model.train_one_iter()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 462, in train_one_iter
losses = self.onTrainOneIter()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 701, in onTrainOneIter
self.D_train (warped_src, warped_dst)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 545, in D_train
nn.tf_sess.run ([D_loss_gv_op], feed_dict={self.warped_src: warped_src, self.warped_dst: warped_dst})
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 968, in run
run_metadata_ptr)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1191, in _run
feed_dict_tensor, options, run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1369, in _do_run
run_metadata)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\client\session.py", line 1394, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input to reshape is a tensor with 8 values, but the requested shape has 334176
[[node gradients/Mean_11_grad/Reshape (defined at D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py:55) ]]
Original stack trace for 'gradients/Mean_11_grad/Reshape':
File "threading.py", line 884, in _bootstrap
File "threading.py", line 916, in _bootstrap_inner
File "threading.py", line 864, in run
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\mainscripts\Trainer.py", line 58, in trainerThread
debug=debug,
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 468, in on_initialize
gpu_D_code_loss_gvs += [ nn.gradients (gpu_D_code_loss, self.code_discriminator.get_weights() ) ]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\core\leras\ops\__init__.py", line 55, in tf_gradients
grads = gradients.gradients(loss, vars, colocate_gradients_with_ops=True )
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 172, in gradients
unconnected_gradients)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in _GradientsHelper
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 340, in _MaybeCompile
return grad_fn() # Exit early
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gradients_util.py", line 684, in <lambda>
lambda: grad_fn(op, *out_grads))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 255, in _MeanGrad
sum_grad = _SumGrad(op, grad)[0]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_grad.py", line 216, in _SumGrad
grad = array_ops.reshape(grad, output_shape_kept_dims)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\array_ops.py", line 195, in reshape
result = gen_array_ops.reshape(tensor, shape, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 8377, in reshape
"Reshape", tensor=tensor, shape=shape, name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
...which was originally created as op 'Mean_11', defined at:
File "threading.py", line 884, in _bootstrap
[elided 3 identical lines from previous traceback]
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\ModelBase.py", line 189, in __init__
self.on_initialize()
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 465, in on_initialize
gpu_D_code_loss = (DLoss(gpu_src_code_d_ones , gpu_dst_code_d) + \
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\DeepFaceLab\models\Model_SAEHD\Model.py", line 454, in DLoss
return tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits), axis=[1,2,3])
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2312, in reduce_mean_v1
return reduce_mean(input_tensor, axis, keepdims, name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\util\dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\math_ops.py", line 2372, in reduce_mean
name=name))
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 5781, in mean
name=name)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 750, in _apply_op_helper
attrs=attr_protos, op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 3536, in _create_op_internal
op_def=op_def)
File "D:\Downloads\DeepFaceLab_NVIDIA\_internal\python-3.6.8\lib\site-packages\tensorflow\python\framework\ops.py", line 1990, in __init__
self._traceback = tf_stack.extract_stack()
```
Any ideas?
| closed | 2021-01-27T20:33:53Z | 2022-02-25T22:09:21Z | https://github.com/iperov/DeepFaceLab/issues/5266 | [] | JamiesonDeuel | 1 |
falconry/falcon | api | 1,789 | Improve docs and checks for sink when using ASGI | Currently there is no documentation and there are no checks in `add_sink` requiring an async function when using the ASGI app.
This is a follow-up to https://github.com/falconry/falcon/pull/1769#discussion_r531449969
cc @vytas7 | closed | 2020-11-27T09:24:09Z | 2020-12-27T17:57:50Z | https://github.com/falconry/falcon/issues/1789 | [
"documentation",
"enhancement"
] | CaselIT | 3 |
piskvorky/gensim | nlp | 3,204 | mallet falls asleep | I'm using mallet-2.0.8 with Python 3 and Jupyter Notebook. Recently it worked just fine, but now it hangs ("falls asleep") while still appearing to be working, even with a small dataset.
The problem is with the following function:
```python
import gensim
from gensim.models.coherencemodel import CoherenceModel


def compute_coherence_values(dictionary, corpus, texts, limit, start=2, step=1):
    """
    Compute c_v coherence for various numbers of topics.

    Parameters
    ----------
    dictionary : Gensim dictionary
    corpus : Gensim corpus
    texts : list of input texts
    limit : max number of topics

    Returns
    -------
    model_list : list of LDA topic models
    coherence_values : coherence values corresponding to the LDA model with the respective number of topics
    """
    coherence_values = []
    model_list = []
    for num_topics in range(start, limit, step):
        # Relies on module-level mallet_path and id2word being defined elsewhere in the notebook.
        model = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=num_topics, id2word=id2word)
        model_list.append(model)
        coherencemodel = CoherenceModel(model=model, texts=texts, dictionary=dictionary, coherence='c_v')
        coherence_values.append(coherencemodel.get_coherence())
    return model_list, coherence_values
```
| closed | 2021-07-28T16:28:49Z | 2021-07-28T16:34:38Z | https://github.com/piskvorky/gensim/issues/3204 | [] | nako21 | 3 |
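A hedged usage sketch for `compute_coherence_values` from the issue above. It assumes `id2word`, `corpus`, `processed_docs` and `mallet_path` are already defined in the notebook, and enables INFO logging so Mallet's per-model progress is visible, which helps distinguish a genuine hang from a long-running iteration.

```python
import logging

logging.basicConfig(format="%(asctime)s : %(levelname)s : %(message)s", level=logging.INFO)

# Assumes the function and variables defined in the issue above are in scope.
model_list, coherence_values = compute_coherence_values(
    dictionary=id2word, corpus=corpus, texts=processed_docs, start=2, limit=10, step=1
)
for num_topics, cv in zip(range(2, 10, 1), coherence_values):
    print(f"num_topics={num_topics}: c_v={cv:.4f}")
```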
OpenInterpreter/open-interpreter | python | 1,547 | Ollama run error | ```error
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Users\unsia\anaconda3\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 612, in main
start_terminal_interface(interpreter)
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 471, in start_terminal_interface
interpreter = profile(
^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 64, in profile
return apply_profile(interpreter, profile, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 148, in apply_profile
exec(profile["start_script"], scope, scope)
File "<string>", line 1, in <module>
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\core.py", line 145, in local_setup
self = local_setup(self)
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\terminal_interface\local_setup.py", line 314, in local_setup
interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 86, in run
self.load()
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 397, in load
self.interpreter.computer.ai.chat("ping")
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\computer\ai\ai.py", line 134, in chat
for chunk in self.computer.interpreter.llm.run(messages):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 322, in run
yield from run_tool_calling_llm(self, params)
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\run_tool_calling_llm.py", line 178, in run_tool_calling_llm
for chunk in llm.completions(**request_params):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 466, in fixed_litellm_completions
raise first_error # If all attempts fail, raise the first error
^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\site-packages\interpreter\core\llm\llm.py", line 443, in fixed_litellm_completions
yield from litellm.completion(**params)
File "C:\Users\unsia\anaconda3\Lib\site-packages\litellm\llms\ollama.py", line 455, in ollama_completion_stream
raise e
File "C:\Users\unsia\anaconda3\Lib\site-packages\litellm\llms\ollama.py", line 433, in ollama_completion_stream
function_call = json.loads(response_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\unsia\anaconda3\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)
```
This error message indicates that a problem was encountered while trying to parse JSON data. Specifically, `json.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)` means the JSON string was not terminated correctly.
According to the stack trace, the error occurs where the LLM (Large Language Model) response is handled, specifically when trying to parse the response content into a JSON object. This may be due to one of the following reasons:
1. **The server returned data in an incorrect format**: the LLM service may have returned non-JSON data that cannot be parsed.
2. **Data was lost or corrupted in transit**: packets may have been lost or corrupted while receiving data from the LLM service.
3. **A client-side code problem**: the code responsible for receiving and parsing the response may contain a logic error.
To diagnose the problem further, the following steps can be taken:
- Check the LLM service's logs to confirm it is working properly and returning correctly formatted JSON.
- Add debug output to the code to print the raw response content received, so its format can be checked against expectations (see the sketch below).
- Confirm the network connection is stable, to rule out interrupted or corrupted data transfers caused by network problems.
- Review the client code to make sure all relevant request and response handling logic is correct.
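A hedged diagnostic sketch for the debug-output step above: dump the raw streamed lines coming back from the local Ollama server so malformed or truncated JSON chunks become visible. The endpoint, port and model name follow Ollama defaults and are assumptions, not details from the original report.

```python
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",          # default Ollama endpoint (assumption)
    json={"model": "llama3", "prompt": "ping", "stream": True},
    stream=True,
    timeout=60,
)
for line in resp.iter_lines():
    if not line:
        continue
    print("raw:", line[:200])
    try:
        json.loads(line)                             # each streamed line should be valid JSON
    except json.JSONDecodeError as exc:
        print("malformed chunk:", exc)
```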
| closed | 2024-12-02T17:12:59Z | 2024-12-02T17:48:22Z | https://github.com/OpenInterpreter/open-interpreter/issues/1547 | [] | unsiao | 0 |
alpacahq/alpaca-trade-api-python | rest-api | 125 | Can't seem to create a stream to get trade updates. (RuntimeError: This event loop is already running) | So I am trying to get a stream of trade updates from the Alpaca API; I run the following code.
```
import alpaca_trade_api as tradeapi
import time
import datetime
api = tradeapi.REST('xxx','https://paper-api.alpaca.markets')
conn = tradeapi.StreamConn('xxx','xxx','https://paper-api.alpaca.markets')
account = api.get_account()
def run():
@conn.on(r'trade_updates')
async def on_msg(conn, channel, data):
datasymbol = data.order['symbol']
event = data.event
print('Order executed for',datasymbol, data.order['side'], event, data.order['filled_qty'])
conn.run(['trade_updates'])
```
This is the error I get.
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
D:\Users\user\Anaconda3\envs\ml\lib\site-packages\alpaca_trade_api\stream2.py in run(self, initial_channels)
158 try:
--> 159 loop.run_until_complete(self.subscribe(initial_channels))
160 loop.run_forever()
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_until_complete(self, future)
565 try:
--> 566 self.run_forever()
567 except:
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_forever(self)
520 if self.is_running():
--> 521 raise RuntimeError('This event loop is already running')
522 if events._get_running_loop() is not None:
RuntimeError: This event loop is already running
During handling of the above exception, another exception occurred:
RuntimeError Traceback (most recent call last)
<ipython-input-2-8a009c05ab30> in <module>
6 print('Order executed for',datasymbol, data.order['side'], event, data.order['filled_qty'])
7
----> 8 conn.run(['trade_updates'])
D:\Users\user\Anaconda3\envs\ml\lib\site-packages\alpaca_trade_api\stream2.py in run(self, initial_channels)
162 logging.info("Exiting on Interrupt")
163 finally:
--> 164 loop.run_until_complete(self.close())
165 loop.close()
166
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_until_complete(self, future)
564 future.add_done_callback(_run_until_complete_cb)
565 try:
--> 566 self.run_forever()
567 except:
568 if new_task and future.done() and not future.cancelled():
D:\Users\user\Anaconda3\envs\ml\lib\asyncio\base_events.py in run_forever(self)
519 self._check_closed()
520 if self.is_running():
--> 521 raise RuntimeError('This event loop is already running')
522 if events._get_running_loop() is not None:
523 raise RuntimeError(
RuntimeError: This event loop is already running
```
Is there something wrong with my code? | closed | 2019-12-04T04:59:09Z | 2020-12-23T21:09:47Z | https://github.com/alpacahq/alpaca-trade-api-python/issues/125 | [] | anarchy89 | 4 |
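A hedged sketch of a common workaround for the `RuntimeError: This event loop is already running` shown above. It assumes the snippet is executed inside Jupyter/IPython, which already owns a running asyncio loop, and uses the third-party `nest_asyncio` package (not part of alpaca-trade-api) so that `conn.run()` can re-enter that loop; the credentials are placeholders.

```python
import nest_asyncio
nest_asyncio.apply()  # patch the running Jupyter loop so it can be re-entered

import alpaca_trade_api as tradeapi

conn = tradeapi.StreamConn('key_id', 'secret_key', 'https://paper-api.alpaca.markets')

@conn.on(r'trade_updates')
async def on_msg(conn, channel, data):
    print('Order update:', data.order['symbol'], data.event, data.order['filled_qty'])

conn.run(['trade_updates'])
```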
quokkaproject/quokka | flask | 70 | Feeds RSS/Atom | Implement RSS feeds
https://github.com/pythonhub/quokka/blob/master/quokka/core/views.py#L228
ContentFeed should return a specific Content's SubContents (useful for content blocks and for returning images in a gallery).
ChannelFeed should return all the contents inside that channel; if it is the homepage, it returns all the Contents.
| closed | 2013-10-31T03:53:37Z | 2015-07-16T02:56:41Z | https://github.com/quokkaproject/quokka/issues/70 | [
"enhancement"
] | rochacbruno | 0 |
litestar-org/litestar | pydantic | 3,049 | Bug: Litestar Logging in Python3.11: Unable to configure handler 'queue_listener' | ### Description
I have been seeing this issue in our production Litestar application a few times. It happens after the app has been running for a few hours, and restarting it (temporarily) fixes the issue.
It looks related to #2469 and #2576, but in my case:
- I am using python 3.11
- it works locally, and only happens after some time
### Logs
```bash
{"event":"Traceback (most recent call last):
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 573, in configure
handler = self.configure_handler(handlers[name])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 758, in configure_handler
result = factory(**kwargs)
^^^^^^^^^^^^^^^^^
File \"/opt/venv/lib/python3.11/site-packages/litestar/logging/standard.py\", line 29, in __init__
self.listener.start()
File \"/root/.nix-profile/lib/python3.11/logging/handlers.py\", line 1539, in start
t.start()
File \"/root/.nix-profile/lib/python3.11/threading.py\", line 957, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File \"/app/src/app/domain/weather_data/controllers/metrics_data.py\", line 135, in _fetch_data_for_metrics
data = RegionAdapter.get_metrics(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File \"/app/src/app/lib/log/__init__.py\", line 132, in get_logger_with_args
logger = get_logger()
^^^^^^^^^^^^
File \"/app/src/app/lib/log/__init__.py\", line 124, in get_logger
config.configure()
File \"/opt/venv/lib/python3.11/site-packages/litestar/logging/config.py\", line 234, in configure
config.dictConfig(values)
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 823, in dictConfig
dictConfigClass(config).configure()
File \"/root/.nix-profile/lib/python3.11/logging/config.py\", line 580, in configure
raise ValueError('Unable to configure handler 'ValueError: Unable to configure handler 'queue_listener'","level":"info","timestamp":"2024-01-30T14:29:30.530002Z"}
```
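The traceback above shows `LoggingConfig.configure()` being called from the application's own `get_logger()` helper on every lookup; each call constructs a new `queue_listener` handler and starts a new `QueueListener` thread, which eventually exhausts the process's thread limit. A hedged sketch of configuring once at startup instead (module and handler names are illustrative):

```python
from litestar import Litestar
from litestar.logging import LoggingConfig

logging_config = LoggingConfig(root={"level": "INFO", "handlers": ["queue_listener"]})
get_logger = logging_config.configure()  # call exactly once, at import/startup time

logger = get_logger("app")  # reuse this factory instead of re-configuring per request

app = Litestar(route_handlers=[], logging_config=logging_config)
```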
### Litestar Version
2.4.1
### Platform
- [ ] Linux
- [ ] Mac
- [ ] Windows
- [X] Other (Please specify in the description above) | closed | 2024-01-30T15:04:23Z | 2024-01-31T13:51:53Z | https://github.com/litestar-org/litestar/issues/3049 | [
"Bug :bug:"
] | 0xSwego | 1 |
sammchardy/python-binance | api | 1,105 | How can we stream USD margined quarterly futures order book data using the ThreadedWebSocketManager? | open | 2021-12-14T14:09:27Z | 2021-12-14T14:09:27Z | https://github.com/sammchardy/python-binance/issues/1105 | [] | SJLow | 0 |
flairNLP/flair | pytorch | 3,166 | [Question]: Why is expand_context set to false when training (TransformerEmbeddings)? | ### Question
While debugging the model's behaviour during training, I landed on this peculiar [behaviour](https://github.com/flairNLP/flair/blob/v0.12.1/flair/embeddings/transformer.py#L622).
If I understand it correctly, _expand\_context_ is set to false if we are training the model. Why? Why is the context expanded only during inference?
| closed | 2023-03-28T09:15:42Z | 2023-03-28T11:41:13Z | https://github.com/flairNLP/flair/issues/3166 | [
"question"
] | kobiche | 2 |
JaidedAI/EasyOCR | pytorch | 460 | Quality Issues | Hi there,
Thank you for the amazing work. This library makes it so easy to perform OCR. However, it seems like the quality of the recognition results is not very good, especially with app/web screenshots or rendered text. The detector also fails to detect single letters. Here are some of the failure cases:
1. Detector failure of single letters:

2. Recognition failure of small time boxes (model=en):




3. Bad results on symbols (model=en):

4. Confusion between 0 and o (model=en):

5. Bad 'en' results on (model=en+ja):








## Extra Info:
I am using the default settings (with batch_size=32) on images of roughly 800px height and 400px width. Is there any setting I can tweak to improve the results, or will I have to train from scratch on my dataset?
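A hedged sketch of `readtext` parameters worth trying on small UI text before resorting to training a custom model; the values below are illustrative starting points, not tuned recommendations.

```python
import easyocr

reader = easyocr.Reader(['en'])
results = reader.readtext(
    'screenshot.png',
    batch_size=32,
    mag_ratio=2.0,                   # upscale the image before detection to help tiny text
    min_size=5,                      # keep very small boxes such as single letters
    text_threshold=0.5,              # loosen detection confidence thresholds
    low_text=0.3,
    allowlist='0123456789:apm. ',    # e.g. for time boxes, also reduces o/0 confusion
)
```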
Best regards | closed | 2021-06-14T08:27:14Z | 2021-10-15T03:08:39Z | https://github.com/JaidedAI/EasyOCR/issues/460 | [] | NaxAlpha | 5 |
vimalloc/flask-jwt-extended | flask | 132 | jwt_optional does not give None, instead the old token | ```
@api.route("/")
class Payment(Resource):
@jwt_optional
@api.header("Authorization", "access token", required=False)
@api.doc(payment_fields)
@api.expect(payment_fields)
def post(self):
username = get_jwt_identity()
```
This endpoint is JWT-optional: the caller can be identified either with an access token or with an email alone. The endpoint is used both for making a payment and for first-time registration.
I first logged in with an account, then I logged out and cleared my browser cookie and the local storage I use to store the access token. Then, when I try to register, my old username appears instead of `None`. But every time I restart my backend it goes back to `None` again. Is there a way to stop this caching? | closed | 2018-03-14T16:07:07Z | 2018-04-11T22:43:45Z | https://github.com/vimalloc/flask-jwt-extended/issues/132 | [] | gaara4896 | 4 |
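For the caching question above: JWTs are stateless, so as long as the browser still sends the old token, `get_jwt_identity()` keeps returning the old user; the server only "forgets" a token if it is revoked. A hedged sketch of server-side revocation using the blacklist API from the flask-jwt-extended 3.x line (current when this issue was filed; names differ in 4.x):

```python
from flask import Flask, jsonify
from flask_jwt_extended import JWTManager, jwt_required, get_raw_jwt

app = Flask(__name__)
app.config['JWT_SECRET_KEY'] = 'change-me'
app.config['JWT_BLACKLIST_ENABLED'] = True
app.config['JWT_BLACKLIST_TOKEN_CHECKS'] = ['access']
jwt = JWTManager(app)

revoked_tokens = set()  # use a shared store such as Redis in production

@jwt.token_in_blacklist_loader
def is_token_revoked(decoded_token):
    return decoded_token['jti'] in revoked_tokens

@app.route('/logout', methods=['POST'])
@jwt_required
def logout():
    revoked_tokens.add(get_raw_jwt()['jti'])
    return jsonify(msg='token revoked'), 200
```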
tortoise/tortoise-orm | asyncio | 1,068 | how to set timezone in UnitTest | I see there is a way to set the timezone in init(),
but how can I set the timezone in a unit test?
```python
import os
import pytest
from tortoise.contrib import test
from tortoise.contrib.test import finalizer, initializer
@pytest.fixture(scope="session", autouse=True)
def initialize_tests(request):
db_url = os.environ.get("TORTOISE_TEST_DB", "sqlite://:memory:")
initializer(['app.models.models'], db_url=db_url, app_label="models")
request.addfinalizer(finalizer)
```
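A hedged alternative for the question above: the test `initializer()` helper does not appear to expose a timezone argument, so one option is to initialise Tortoise directly in an async fixture and pass `use_tz`/`timezone` to `Tortoise.init()` (available since tortoise-orm 0.17). This assumes an async-capable pytest setup such as pytest-asyncio, and the timezone value is illustrative.

```python
import pytest
from tortoise import Tortoise

@pytest.fixture(scope="session", autouse=True)
async def initialize_tests():
    await Tortoise.init(
        db_url="sqlite://:memory:",
        modules={"models": ["app.models.models"]},
        use_tz=True,
        timezone="Asia/Shanghai",  # illustrative value
    )
    await Tortoise.generate_schemas()
    yield
    await Tortoise.close_connections()
```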
| open | 2022-02-18T08:21:16Z | 2022-02-18T08:21:16Z | https://github.com/tortoise/tortoise-orm/issues/1068 | [] | ljj038 | 0 |
huggingface/transformers | deep-learning | 36,147 | Torchao `int4_weight_only` save error when passing layout | ### System Info
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.49.0.dev0
- Platform: Linux-4.18.0-425.3.1.el8.x86_64-x86_64-with-glibc2.39
- Python version: 3.12.3
- Huggingface_hub version: 0.27.1
- Safetensors version: 0.4.5
- Accelerate version: 1.4.0.dev0
- Accelerate config: - compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: bf16
- use_cpu: False
- debug: False
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: 5,6
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- PyTorch version (GPU?): 2.6.0+cu126 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100 80GB PCIe
### Who can help?
@SunMarc
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
For more details, refer to [torchao](https://github.com/pytorch/ao/issues/1704).
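A hedged reproduction sketch of the failure named in the title (saving an `int4_weight_only` model when a `layout` is passed). The layout import path, model id and keyword arguments are assumptions for illustration and are not taken from the linked torchao issue.

```python
import torch
from transformers import AutoModelForCausalLM, TorchAoConfig
from torchao.dtypes import Int4CPULayout  # assumption: available in the installed torchao

quant_config = TorchAoConfig("int4_weight_only", group_size=128, layout=Int4CPULayout())
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.2-1B",          # illustrative model id
    torch_dtype=torch.bfloat16,
    quantization_config=quant_config,
)
model.save_pretrained("llama-int4-cpu", safe_serialization=False)  # error occurs on save
```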
### Expected behavior
Hi @SunMarc. Do you think we can handle this in transformers? | closed | 2025-02-12T09:00:01Z | 2025-02-19T05:11:17Z | https://github.com/huggingface/transformers/issues/36147 | [
"bug"
] | jiqing-feng | 0 |
abhimishra91/insight | streamlit | 4 | JSONDecodeError: Expecting value: line 1 column 1 (char 0) | Hello Abhishek
I tried today with the latest changes but it is still not working for me. Now I get "Internal Server Error" when using the FastAPI Swagger UI (see picture).

If I make a GET info request, I get the name and the description of the model (see next picture).

The front end presents the same JSON decoder error as yesterday.

I tried on Windows and Linux again.
| closed | 2020-08-25T16:08:10Z | 2020-08-26T12:21:12Z | https://github.com/abhimishra91/insight/issues/4 | [
"bug"
] | mbenetti | 9 |
bregman-arie/devops-exercises | python | 10,488 | voicemod-pro-crack | voicemod-pro | voicemod | voicemod-crack | voicemod-cracked | voicemod-crack-2024 | voicemod-free | voicemod-2024 | voicemod-pro-free | voicemod-free-crack | voicemod-cracked-2023 | voicemod-pro-crack-2023 | voicemod-2023-crack-free | voicemod-pro-crack-2024-free | voicemod-cracked-version-2024 | voicemod-pro-free-2023-download | voicemod-pro-crack-2024-free-2024 | voicemod-cracked-version-2024-free | closed | 2024-08-28T18:06:28Z | 2024-09-05T16:08:20Z | https://github.com/bregman-arie/devops-exercises/issues/10488 | [] | Sachin-vlr | 0 |
sherlock-project/sherlock | python | 2,427 | Requesting support for: linuxfr.org | ### Site URL
https://linuxfr.org
### Additional info
- Link to the site main page: https://linuxfr.org/
- Link to an existing account: https://linuxfr.org/users/pylapp
### Code of Conduct
- [x] I agree to follow this project's Code of Conduct | open | 2025-03-05T12:16:35Z | 2025-03-09T20:11:10Z | https://github.com/sherlock-project/sherlock/issues/2427 | [
"site support request"
] | pylapp | 4 |
noirbizarre/flask-restplus | flask | 138 | OAuth2 password credential doesn't work in SwaggerUI | In the Swagger UI created by flask-restplus, simple requests work well.
But if I try to use OAuth2 with the password credential flow, the credential popup doesn't show up.
To confirm the problem, I tried the Swagger UI from the Swagger UI project page against the Swagger JSON at /swagger.json.
And it worked. But the Swagger UI created by flask-restplus still doesn't work.
I think something is wrong in the Swagger template of the flask-restplus package installed via pip for Python 3.
Please check it.
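For reference, a hedged sketch of declaring the OAuth2 password flow through flask-restplus's `authorizations` mapping so Swagger UI can render the credential dialog; the token URL and scopes are placeholders.

```python
from flask import Flask
from flask_restplus import Api, Resource

authorizations = {
    'oauth2': {
        'type': 'oauth2',
        'flow': 'password',
        'tokenUrl': 'https://example.com/oauth/token',  # placeholder
        'scopes': {'read': 'Read access'},
    }
}

app = Flask(__name__)
api = Api(app, authorizations=authorizations, security='oauth2')

@api.route('/ping')
class Ping(Resource):
    def get(self):
        return {'ok': True}
```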
| closed | 2016-02-24T06:46:07Z | 2016-04-21T16:31:40Z | https://github.com/noirbizarre/flask-restplus/issues/138 | [
"bug"
] | StdCarrot | 7 |
FlareSolverr/FlareSolverr | api | 416 | [mteamtp] (updating) The cookies provided by FlareSolverr are not valid | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-07-04T11:08:25Z | 2022-07-04T12:43:46Z | https://github.com/FlareSolverr/FlareSolverr/issues/416 | [
"duplicate"
] | Arly521 | 1 |
JoeanAmier/TikTokDownloader | api | 122 | TikTok web login works (using v2rayNG in global mode with a VMESS server), but extracting TikTok videos loads for a long time and then errors out | The program detected that the last run may not have ended normally; your post download history data may have been lost!
Data file path: C:\Users\ASUS\Desktop\JDM软件\TikTokDownloader_V5.2_WIN\_internal\cache\IDRecorder.txt
An IDRecorder backup file was detected. Restore the last backed-up data (YES/NO): yes
IDRecorder has restored the last backed-up data; please run the program again!
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
TikTokDownloader v5.2
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Project page: https://github.com/JoeanAmier/TikTokDownloader
Project documentation: https://github.com/JoeanAmier/TikTokDownloader/wiki/Documentation
Open-source license: GNU General Public License v3.0
Already on the latest stable release
Proxy http://127.0.0.1:10809 tested successfully
Please select a TikTokDownloader run mode:
1. Write Cookie by copy-and-paste
2. Write Cookie by QR-code login
=========================
3. Terminal command-line mode
4. Web API mode
5. Web UI interactive mode
6. Server deployment mode
=========================
7. Disable automatic update checks
8. Disable post download history
9. Enable run logging
3
Cached data read successfully
Please select a collection function:
1. Batch download account posts (TikTok)
2. Batch download account posts (Douyin)
3. Batch download posts from links (generic)
4. Get live stream URL (Douyin)
5. Collect post comment data (Douyin)
6. Batch download collection posts (Douyin)
7. Batch collect account data (Douyin)
8. Collect search result data (Douyin)
9. Collect Douyin trending data (Douyin)
10. Batch download favorited posts (Douyin)
1
Selected batch download account posts (TikTok) mode
Please enter the TikTok profile HTML file (or folder) path: C:\Users\ASUS\Desktop\软件\TikTokDownloader_V5.2_WIN\TK_html
Started processing account 1
Started extracting post data
Traceback (most recent call last):
File "main.py", line 343, in <module>
File "main.py", line 321, in run
File "main.py", line 217, in main_menu
File "main.py", line 286, in compatible
File "main.py", line 65, in inner
File "main.py", line 226, in complete
File "src\main_complete.py", line 877, in run
File "src\main_complete.py", line 149, in account_acquisition_interactive_tiktok
File "src\main_complete.py", line 190, in _deal_account_works_tiktok
File "src\main_complete.py", line 354, in _batch_process_works
File "src\DataExtractor.py", line 83, in run
File "src\DataExtractor.py", line 107, in batch
File "src\DataExtractor.py", line 129, in extract_batch
File "src\DataExtractor.py", line 180, in extract_works_info
File "src\DataExtractor.py", line 186, in classifying_works
File "src\DataExtractor.py", line 221, in extract_image_info_tiktok
TypeError: 'types.SimpleNamespace' object is not subscriptable
[13668] Failed to execute script 'main' due to unhandled exception!
| open | 2023-12-29T06:24:57Z | 2024-01-02T10:31:19Z | https://github.com/JoeanAmier/TikTokDownloader/issues/122 | [] | lingbufeng | 5 |