repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
---|---|---|---|---|---|---|---|---|---|---|---|
Tinche/aiofiles | asyncio | 72 | rmtree support | Hi,
It would be great to support recursive directory removal.
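For reference, here is a minimal sketch of how recursive removal could be done today with the standard executor pattern. The function name and signature are illustrative, not aiofiles' actual API:

```python
import asyncio
import functools
import shutil

async def rmtree(path, *, executor=None):
    # Offload the blocking shutil.rmtree call to a worker thread
    # so the event loop stays responsive while the tree is deleted.
    loop = asyncio.get_running_loop()
    await loop.run_in_executor(executor, functools.partial(shutil.rmtree, path))
```

An aiofiles-native version would presumably follow the same executor-wrapping approach the library already uses for file operations.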
Regards | open | 2020-01-29T10:36:03Z | 2023-01-19T01:19:47Z | https://github.com/Tinche/aiofiles/issues/72 | [] | eLvErDe | 2 |
openapi-generators/openapi-python-client | fastapi | 1,120 | union types and nullables can create unnecessary class suffixes (and related problems) | **Describe the bug**
In some valid specs, the use of `anyOf`, `oneOf`, `nullable`, and/or `type` as a list can cause generated class names to have unnecessary suffixes (and, sometimes, spurious extra copies of classes to appear). These problems seem to all ultimately come from the behavior of `UnionProperty`.
The problem cases are all included in the attached specs files. Looking at the code generated from these specs—
_in both 3.0 & 3.1:_
- `ExampleModel.nullableObjectWithOneOf` correctly has the type `Union[MyObject, None, Unset]`.
- `ExampleModel.inlineNullableObject` generates a model class called `ExampleModelInlineNullableObjectType0`. It should be just `ExampleModelInlineNullableObject`, since there are no other named types for this property.
- Similarly, the `MyEnum` schema, which is a nullable enum, generates the class `MyEnumType1` even though there is no `Type0`.
- The `MyEnumWithExplicitType` schema is the same as `MyEnum` except it specifically indicates `type`, and the result is a bit wilder: it generates enum classes `MyEnumWithExplicitTypeType1`, `MyEnumWithExplicitType2Type1`, and `MyEnumWithExplicitType3Type1`, all of which are exactly the same.
_only applicable to 3.1_:
- `ExampleModel.nullableObjectWithExplicitTypes`, which is the same as `nullable_object_with_one_of` except that it also (unnecessarily, but validly) specifies `type: ["object", "null"]`, also works correctly.
- `ExampleModel.one_of_enums_with_explicit_types`, which combines a string enum with an int enum and specifies `type: ["string", "integer"]`, has the problem where three classes are created for each enum.
**OpenAPI Spec Files**
3.0: https://gist.github.com/eli-bl/7bea406525fb3b1452a71781ada5c1c0
3.1: https://gist.github.com/eli-bl/c03c88eb69312053e0122d1d8a06c2a0
**Desktop (please complete the following information):**
- OS: macOS 14.5
- Python Version: 3.8.15
- openapi-python-client version 0.21.5
| open | 2024-09-12T21:48:39Z | 2024-09-17T18:53:35Z | https://github.com/openapi-generators/openapi-python-client/issues/1120 | [] | eli-bl | 0 |
Tinche/aiofiles | asyncio | 46 | Feature Request: gzip support | https://docs.python.org/3/library/gzip.html
Would be nice to have gzip support. | open | 2018-08-10T21:05:26Z | 2018-08-10T21:05:26Z | https://github.com/Tinche/aiofiles/issues/46 | [] | xNinjaKittyx | 0 |
jmcnamara/XlsxWriter | pandas | 207 | Datetime export not working correctly from pandas | Using version 0.6.5 with Pandas 0.15.2, I'm using to_excel to export some dataframes which have a datetime variable as the first column. I noticed the exported spreadsheet values only contained daily values when I was expecting hourly values. I traced the problem to the utility.py module and the datetime_to_excel_datetime function. Please check the equation at line 630; it appears to me to be missing the hours (and minutes?) portion of the datetime value.
Thanks for providing this excel interface, I have tried others and xlsxwriter fulfills my requirements the best.
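To illustrate the point: an Excel serial value must carry the time of day as the fractional part of the number, so a conversion that drops the sub-day terms collapses every timestamp to midnight. A correct conversion looks roughly like this (a sketch assuming the 1900 date system's conventional 1899-12-30 epoch, leap-year quirk aside):

```python
from datetime import datetime

EXCEL_EPOCH = datetime(1899, 12, 30)  # conventional day-zero for the 1900 date system

def excel_serial(dt):
    delta = dt - EXCEL_EPOCH
    # Whole days plus the fraction of the day; dropping the seconds and
    # microseconds terms is exactly what would round everything to midnight.
    return delta.days + delta.seconds / 86400.0 + delta.microseconds / 86400e6
```

With the fractional terms present, 06:00 and 00:00 on the same day differ by exactly 0.25.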
| closed | 2015-01-03T09:55:02Z | 2015-01-21T17:00:02Z | https://github.com/jmcnamara/XlsxWriter/issues/207 | [
"question",
"ready to close"
] | rschell | 3 |
joke2k/django-environ | django | 284 | Django improperly configured error if secret key begins with a dollar sign character | I've come across this error which only seems to occur when using the django-environ package to read a SECRET_KEY from a .env file.
My .env file (with a generated secret key using the get_random_secret_key module from django)
`SECRET_KEY=$q*eo+c&2!)u9^tpd6f=0szxt6+th!j^#z9$1mh!kyen*36($t)`
My settings.py file
```
import environ
env = environ.Env()
environ.Env.read_env()
SECRET_KEY = env('SECRET_KEY')
```
Running this code will generate the following error
`django.core.exceptions.ImproperlyConfigured: Set the q*eo+c&2!)u9^tpd6f=0szxt6+th!j^#z9$1mh!kyen*36($t) environment variable`
Even if I surround the secret key value with single or double quotes the error will still occur.
This error does not occur if I use a different python environment file package such as python-dotenv to read the same .env file.
The django-environ package seems to think the SECRET_KEY value is an environment variable itself.
I know this is a very niche situation as this secret key that I generated just happened to have a leading '$' character but it may catch people out if they're generating their secret key with a script. The error occurs for any secret key value that starts with a '$'.
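To make the behavior concrete, here is a toy re-implementation of the lookup logic implied by the traceback below (this is not django-environ's actual code, and the `DEMO_*` variable names are made up): a value beginning with `$` is treated as the name of another variable and resolved again.

```python
import os

def get_value_like_environ(var):
    value = os.environ[var]  # raises KeyError when the chain dead-ends
    if value.startswith("$"):
        # Proxy behavior: the value itself is taken as a variable name,
        # which is why a '$'-prefixed secret key triggers a second lookup.
        return get_value_like_environ(value[1:])
    return value
```

So `SECRET_KEY=$q*eo...` makes the package look for an environment variable literally named `q*eo...`, producing the ImproperlyConfigured error in the stack trace.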
Full stack trace:
```
Traceback (most recent call last):
File "/opt/anaconda3/envs/django_py/lib/python3.9/site-packages/environ/environ.py", line 273, in get_value
value = self.ENVIRON[var]
File "/opt/anaconda3/envs/django_py/lib/python3.9/os.py", line 679, in __getitem__
raise KeyError(key) from None
KeyError: 'q*eo+c&2!)u9^tpd6f=0szxt6+th!j^#z9$1mh!kyen*36($t)'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/danieloram/Desktop/test_env/environ_secret_key.py", line 10, in <module>
SECRET_KEY = env('SECRET_KEY')
File "/opt/anaconda3/envs/django_py/lib/python3.9/site-packages/environ/environ.py", line 123, in __call__
return self.get_value(var, cast=cast, default=default, parse_default=parse_default)
File "/opt/anaconda3/envs/django_py/lib/python3.9/site-packages/environ/environ.py", line 284, in get_value
value = self.get_value(value, cast=cast, default=default)
File "/opt/anaconda3/envs/django_py/lib/python3.9/site-packages/environ/environ.py", line 277, in get_value
raise ImproperlyConfigured(error_msg)
django.core.exceptions.ImproperlyConfigured: Set the q*eo+c&2!)u9^tpd6f=0szxt6+th!j^#z9$1mh!kyen*36($t) environment variable
```
django version: 3.0.3
django-environ version: 0.4.5 | closed | 2021-01-09T06:02:53Z | 2021-12-04T03:37:13Z | https://github.com/joke2k/django-environ/issues/284 | [
"duplicate"
] | DanielOram | 3 |
davidsandberg/facenet | tensorflow | 1,087 | Why is the time for encoding a face embedding so long? | I rewrote compare.py to check the time FaceNet takes to encode a face embedding, but to my surprise it is above 60 ms on my GeForce RTX 2070 card. I also checked the time for ArcFace; it only takes about 10 ms. I also found that while my check program was running, the GPU load reported by GPU-Z was only about 25%, so it was clear the GPU's power was not fully utilized. Why does FaceNet's embedding encoding take so long, and why can't the GPU's power be fully utilized?
Below is my code to check the time, rewritten from compare.py:
```python
def main(args):
    images = load_and_align_data(args.image_files, args.image_size, args.margin, args.gpu_memory_fraction)
    with tf.Graph().as_default():
        with tf.Session() as sess:
            # Load the model
            facenet.load_model(args.model)
            # Get input and output tensors
            images_placeholder = tf.get_default_graph().get_tensor_by_name("input:0")
            embeddings = tf.get_default_graph().get_tensor_by_name("embeddings:0")
            phase_train_placeholder = tf.get_default_graph().get_tensor_by_name("phase_train:0")
            # Run forward pass to calculate embeddings
            feed_dict = {images_placeholder: images, phase_train_placeholder: False}
            emb = sess.run(embeddings, feed_dict=feed_dict)
            # Check embedding encode time
            testTimes = 100
            tCount = 0
            for t in range(1, testTimes + 1):
                t0 = time.time()
                sess.run(embeddings, feed_dict=feed_dict)
                t1 = time.time()
                print("Test", t, " time=", (t1 - t0) * 1000.0, "ms")
                tCount += t1 - t0
            avgTime = tCount / testTimes * 1000.0
            print("AvgRefTime=", avgTime, "ms")
```
| closed | 2019-09-20T07:09:10Z | 2019-09-22T11:38:41Z | https://github.com/davidsandberg/facenet/issues/1087 | [] | pango99 | 1 |
tiangolo/uvicorn-gunicorn-fastapi-docker | pydantic | 60 | Allow passing a log config to start-reload.sh for uvicorn | Allow passing a log config to start-reload.sh for uvicorn.
| closed | 2020-09-23T10:01:01Z | 2024-08-25T03:58:46Z | https://github.com/tiangolo/uvicorn-gunicorn-fastapi-docker/issues/60 | [
"investigate"
] | ievgennaida | 1 |
Avaiga/taipy | data-visualization | 2,345 | Upgrade react to 19 | what it says on the tin | open | 2024-12-17T13:53:47Z | 2025-01-27T11:04:32Z | https://github.com/Avaiga/taipy/issues/2345 | [
"🖧 Devops",
"🟧 Priority: High",
"❌ Blocked",
"🔒 Staff only",
"GUI: Front-End",
"dependencies"
] | FredLL-Avaiga | 1 |
pydata/bottleneck | numpy | 173 | pypi page still links to http://berkeleyanalytics.com/bottleneck | It looks like you've moved bottleneck's documentation to github.io (87ae3e397ca4a78f02a530058be055728d47418d). However, the link on the [pypi](https://pypi.python.org/pypi/Bottleneck#Where) still links to http://berkeleyanalytics.com/bottleneck. Probably could use an update. | closed | 2017-10-23T22:47:32Z | 2019-11-13T05:21:50Z | https://github.com/pydata/bottleneck/issues/173 | [] | jhamman | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 19,981 | [fabric.example.rl] Not support torch.float64 for MPS device | ### Bug description
I found an error when running the example `pytorch-lightning/examples/fabric/reinforcement_learning` on an M2 Mac (device type = mps).
### Reproduce Error
```
reinforcement_learning git:(master) ✗ fabric run train_fabric.py
W0617 12:53:22.541000 8107367488 torch/distributed/elastic/multiprocessing/redirects.py:27] NOTE: Redirects are currently not supported in Windows or MacOs.
[rank: 0] Seed set to 42
Missing logger folder: logs/fabric_logs/2024-06-17_12-53-24/CartPole-v1_default_42_1718596404
set default torch dtype as torch.float32
Traceback (most recent call last):
File "/Users/user/workspace/pytorch-lightning/examples/fabric/reinforcement_learning/train_fabric.py", line 215, in <module>
main(args)
File "/Users/user/workspace/pytorch-lightning/examples/fabric/reinforcement_learning/train_fabric.py", line 154, in main
rewards[step] = torch.tensor(reward, device=device).view(-1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
```
This bug can be fixed by checking `device.type` and casting `reward` to `torch.float32`:
```diff
@@ -146,7 +146,7 @@ def main(args: argparse.Namespace):
# Single environment step
next_obs, reward, done, truncated, info = envs.step(action.cpu().numpy())
done = torch.logical_or(torch.tensor(done), torch.tensor(truncated))
- rewards[step] = torch.tensor(reward, device=device).view(-1)
+ rewards[step] = torch.tensor(reward, device=device, dtype=torch.float32 if device.type == 'mps' else None).view(-1)
```
### What version are you seeing the problem on?
master
### Environment
<details><summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.3.0
#- Lightning App Version (e.g., 0.5.2): 2.3.0
#- PyTorch Version (e.g., 2.0): 2.3.1
#- Python version (e.g., 3.9): 3.12.3
#- OS (e.g., Linux): Mac
#- CUDA/cuDNN version: MPS
#- GPU models and configuration: M2
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
| closed | 2024-06-17T04:13:05Z | 2024-06-21T14:36:12Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19981 | [
"bug",
"example",
"ver: 2.2.x"
] | swyo | 0 |
PrefectHQ/prefect | automation | 17,040 | Image is missing | As title | open | 2025-02-07T13:00:11Z | 2025-02-08T00:30:31Z | https://github.com/PrefectHQ/prefect/issues/17040 | [] | HamiltonWang | 1 |
darrenburns/posting | automation | 190 | Is it possible to downgrade the project to support Python 3.9? | I checked the source code, and it seems like only the `|` operator and `match-case` feature are used. If we downgrade to 3.9, it would run directly on macOS, which would make it accessible to more users.
The `|` can be replaced with `typing.Union`, and `match-case` can be replaced with `if-elif`.
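For illustration, a sketch of both rewrites applied to made-up functions (not posting's actual code):

```python
from typing import Optional, Union

# Python 3.10+ style:  def load(path: str | None) -> dict | list: ...
# Python 3.9 equivalent using typing.Union / typing.Optional:
def load(path: Optional[str]) -> Union[dict, list]:
    return {} if path is None else [path]

# Python 3.10+ match/case rewritten as if/elif for 3.9:
def describe(value):
    if isinstance(value, int):
        return "int"
    elif isinstance(value, str):
        return "str"
    else:
        return "other"
```

Both rewrites are mechanical and behave identically on newer Pythons, so supporting 3.9 this way costs little.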
😊 | closed | 2025-02-20T02:15:40Z | 2025-02-20T02:39:01Z | https://github.com/darrenburns/posting/issues/190 | [] | kkHAIKE | 0 |
Urinx/WeixinBot | api | 273 | Being handsome is also a sin | closed | 2019-10-12T02:44:35Z | 2019-11-26T05:53:00Z | https://github.com/Urinx/WeixinBot/issues/273 | [] | WangPney | 2 |
slackapi/bolt-python | fastapi | 839 | How to get the timestamp of my block post to delete the message once the button is clicked | python 3.11
slack bolt 1.16.2
This is probably a simple question, which may be why I could not find the answer, but how would I use the Slack Bolt SDK to delete an "approval" block message that my bot posts? I am able to post the block, and the result does return the timestamp. But how do I then pass that specific timestamp into my listener function?
````python
def yes_or_no(ack, client):
    """
    Simple function to respond if a user performs an action. Using "app mention" for testing
    """
    ack()
    result = client.chat_postMessage(
        blocks=slackpayload2,
        channel="channel_id",
        text="Please approve the attached information"
    )
    return result


result = app.event("app_mention")(yes_or_no)


@app.action("Approved")
def handle_approval(ack, body, logger, client, context):
    ack()
    logger.info(body)
    client.chat_postMessage(
        channel="channel_id",
        text="Approved!"
    )
    client.chat_delete(
        channel=context["channel_id"],
        ts=???,
    )
````
I looked into being able to pass an additional argument into my handle_approval() function, but it seems like that would be overcomplicating the issue. I am probably just missing something, but I could not find any documentation on how to accomplish this.
(Note: if I manually print(result['ts']) and use that as my timestamp in chat_delete, it does work. I just need to figure out how to get it in there programmatically.)
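For what it's worth, here is a small sketch of one way to read the timestamp directly from the action payload instead of threading `result['ts']` through. It assumes a block_actions payload, which carries the containing message under `body["message"]` (field names per Slack's interactivity payload format; worth verifying against your own payload):

```python
def message_ref_from_action(body):
    # The block_actions payload includes the message the clicked button
    # lives on, so its channel id and ts can be reused for chat_delete.
    return body["channel"]["id"], body["message"]["ts"]

# Inside the listener this would be used as:
#   channel_id, ts = message_ref_from_action(body)
#   client.chat_delete(channel=channel_id, ts=ts)
```

This avoids having to persist anything between the posting handler and the action handler.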
| closed | 2023-02-27T21:24:53Z | 2023-02-28T14:24:36Z | https://github.com/slackapi/bolt-python/issues/839 | [
"question"
] | EricStehnach | 2 |
jpadilla/django-rest-framework-jwt | django | 428 | ObtainJSONWebToken endpoint not working for user.is_active = False | When the user is not active, the endpoint does not show "User account is disabled".
???????
```python
if all(credentials.values()):
    user = authenticate(**credentials)

    if user:
        if not user.is_active:
            msg = _('User account is disabled.')
            raise serializers.ValidationError(msg)

        payload = jwt_payload_handler(user)

        return {
            'token': jwt_encode_handler(payload),
            'user': user
        }
    else:
        msg = _('Unable to log in with provided credentials.')
        raise serializers.ValidationError(msg)
```
| closed | 2018-03-18T10:27:48Z | 2018-05-30T09:47:20Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/428 | [] | sbishnu019 | 2 |
sinaptik-ai/pandas-ai | pandas | 1,343 | Questions about the train function | Thanks for the great work.
I have several questions about the `train` function:
1. May I know what the vector DB does during training? Does it act as a RAG store?
2. After training, is there any way to save the trained state? Or does it require calling the train function for the prompt every time?
3. The cache seems to be regenerated when I restart the kernel. Does it store the previous prompts and responses?
Thank you very much. | closed | 2024-08-30T02:21:56Z | 2025-02-11T16:00:11Z | https://github.com/sinaptik-ai/pandas-ai/issues/1343 | [] | mrgreen3325 | 3 |
python-visualization/folium | data-visualization | 1,906 | Questions about Copilot + Open Source Software Hierarchy | Hi! My name is Chris and I'm writing a thesis on Open Source Software. I'm trying to collect/validate my data and I have two questions for the maintainers of this project.
1) Did this project receive free GitHub Copilot access in June 2022?
2) My thesis is especially focused on understanding hierarchical structures. Would it be possible to share a list of individuals in this project with triage/write/maintain/admin access? I understand this may be confidential information, so please feel free to share it with [chrisliao@uchicago.edu](mailto:chrisliao@uchicago.edu)
Happy to chat further if you have questions, and thank you for your time!
| closed | 2024-03-24T16:42:36Z | 2024-05-06T14:55:02Z | https://github.com/python-visualization/folium/issues/1906 | [] | liaochris | 1 |
coqui-ai/TTS | pytorch | 2,384 | [Feature request] Easier running of tts under Windows | <!-- Welcome to the 🐸TTS project!
We are excited to see your interest, and appreciate your support! --->
**🚀 Feature Description**
<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
Currently, running TTS under Windows requires a lot of dependencies and tools to be preinstalled.
It also requires Visual Studio to be installed.
**Solution**
<!-- A clear and concise description of what you want to happen. -->
Is it possible to compile all the required libraries and tools using py2exe or similar tool?
So it is easier to run and install on Windows?
NVDA which is a screenreader for Windows is also written mostly in Python 3 but also has some various c/c++ libraries.
Would be great if someone could have a look into that.
Greetings and thanks.
**Alternative Solutions**
<!-- A clear and concise description of any alternative solutions or features you've considered. -->
An alternative, which isn't all that handy and convenient, is to pack the virtual environment from another computer on which TTS was running and move it over to other computers.
After cleaning the virtual environment with
$ pip cache purge
and using cleanpy, you can bring it down to 1.5 GB without voices.
Though, as said previously, that isn't all that nice.
**Additional context**
<!-- Add any other context or screenshots about the feature request here. -->
| closed | 2023-03-05T19:15:37Z | 2024-06-13T18:37:48Z | https://github.com/coqui-ai/TTS/issues/2384 | [
"help wanted",
"wontfix",
"feature request"
] | domasofan | 16 |
sczhou/CodeFormer | pytorch | 278 | Results are different between personal computer and replicate | Hi,
I tried to install it on my personal computer, but when I run Image Enhancement the results are not as good as what I got on https://replicate.com/sczhou/codeformer.
I have attached the results below. I found the photos processed on https://replicate.com/ have better results (the red lips and blue eyes come out better). Why is that? Maybe I used the wrong model?
Thank you!


| open | 2023-07-28T09:20:21Z | 2023-07-28T09:21:07Z | https://github.com/sczhou/CodeFormer/issues/278 | [] | hongdthaui | 0 |
jschneier/django-storages | django | 678 | Azure: The specified block list is invalid | Hi,
We're getting an error back from Azure on some uploads to block storage. I believe it happens when we're uploading large files, but I haven't been able to debug it much more. Here is the exception:
<img width="834" alt="screenshot 2019-03-04 12 46 09" src="https://user-images.githubusercontent.com/25510/53744317-854fa680-3e7b-11e9-9613-3e10c960800a.png">
I believe this blog post talks more about it: https://gauravmantri.com/2013/05/18/windows-azure-blob-storage-dealing-with-the-specified-blob-or-block-content-is-invalid-error/
I'll try and get more details on this and figure out what's going on. | open | 2019-03-04T15:46:49Z | 2019-05-10T13:57:44Z | https://github.com/jschneier/django-storages/issues/678 | [
"azure"
] | ericholscher | 0 |
WeblateOrg/weblate | django | 13,722 | Request to add the Awadhi (awa) language in Weblate for localisation of LibreOffice | ### Describe the problem
When I log in to Weblate and look for my language, Awadhi (awa), it doesn't appear.
### Describe the solution you would like
You are kindly requested to add the Awadhi (awa) language to Weblate so that I can work on localisation.
### Describe alternatives you have considered
_No response_
### Screenshots
_No response_
### Additional context
_No response_ | closed | 2025-01-31T03:20:57Z | 2025-02-05T09:41:04Z | https://github.com/WeblateOrg/weblate/issues/13722 | [] | awadhiworld | 5 |
Farama-Foundation/Gymnasium | api | 477 | [Bug Report] `MuJoCo/Walker2d` left foot has different friction than right foot | ### Describe the bug
Right foot has a friction coeff of 0.9
https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/mujoco/assets/walker2d.xml#L25
left foot has a friction coeff of 1.9
https://github.com/Farama-Foundation/Gymnasium/blob/main/gymnasium/envs/mujoco/assets/walker2d.xml#L38
This issue was previously reported, but not addressed.
https://github.com/openai/gym/issues/1068
https://github.com/openai/gym/issues/1646
From what I can tell, this is most likely a typo: both were supposed to be 1.9 (or 2.0).
Since `Walker2d` and `Hopper` will be getting new models in v5 (because of https://github.com/deepmind/mujoco/issues/833), should this be fixed there?
### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-05-02T10:32:08Z | 2023-05-10T15:45:56Z | https://github.com/Farama-Foundation/Gymnasium/issues/477 | [
"bug"
] | Kallinteris-Andreas | 7 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 450 | How to replace the cookie in Docker | There are two things I don't quite understand!
First: how do I replace the cookie inside Docker?
I watched the video; you were running Python locally, so you could replace it directly there. How do I do that inside Docker?
Also, how do I use this API in a script? I'm new to APIs, so I'm asking! | closed | 2024-07-14T15:58:10Z | 2024-07-25T18:07:36Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/450 | [
"enhancement"
] | xilib | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 796 | How to change Iterations? | Solved!
Sorry, I cannot delete the issue. | closed | 2024-05-08T20:01:56Z | 2024-05-10T20:34:46Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/796 | [] | DuVogel87 | 2 |
skypilot-org/skypilot | data-science | 4,406 | Skypilot 0.7.0 tries to start non-spot instances when requesting spot instances on Azure | I use spot instances on Azure and noticed error messages telling me that I do not have quota for non-spot instances after upgrading from 0.6.1 to 0.7.0.
Skypilot clearly tells me that I asked for a spot instance but it seems it is requesting a non-spot instance.
After downgrading back to 0.6.1 the problem was gone. | closed | 2024-11-24T20:19:18Z | 2024-12-19T09:31:33Z | https://github.com/skypilot-org/skypilot/issues/4406 | [] | petergerten | 0 |
iMerica/dj-rest-auth | rest-api | 395 | What is this ??? | https://github.com/iMerica/dj-rest-auth/blob/master/demo/demo/urls.py
Why do you gotta use re_path ^^^.
This is a warzone in urls.py.
Here, let me teach you how to write URLs.py in django
Normal way: `path('articles/2003/', views.special_case_2003),`
You call this a "demo" yet you make it so bewildering. How are we supposed to understand this demo when we can't even understand URLs.py?
Please, follow the normal django way of doing stuff. | closed | 2022-04-14T17:38:57Z | 2022-04-14T21:58:33Z | https://github.com/iMerica/dj-rest-auth/issues/395 | [] | Steve-Dusty | 1 |
plotly/dash | jupyter | 2,671 | DatePickerRange accepts start_date which is less than min_date_allowed | The following date picker range will accept a start_date even though it is less than min_date_allowed.
```
dcc.DatePickerRange(
id='picker-range',
min_date_allowed=date(2023, 8, 1),
start_date=date(2023, 7, 1),
)
```
| open | 2023-10-24T08:07:02Z | 2024-08-13T19:41:45Z | https://github.com/plotly/dash/issues/2671 | [
"bug",
"P3"
] | ognjengrubac-tomtom | 1 |
falconry/falcon | api | 1,503 | feat: Support Non-standard HTTP methods | To support porting legacy APIs with non-standard HTTP method names, we should allow the ability to extend our list of acceptable HTTP methods. It should be documented that this is not the preferred usage of Falcon, but that it is there to support porting odd edge cases and legacy APIs. | closed | 2019-05-05T21:40:25Z | 2019-11-03T22:20:45Z | https://github.com/falconry/falcon/issues/1503 | [
"good first issue",
"enhancement",
"needs contributor"
] | jmvrbanac | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,226 | Loss trend validation | Hello. I hope everyone is doing well. I have trained CycleGAN on set A and set B below:
A-->

B-->

And the result at epoch 30 is as:
Fake_B -->

Fake_A -->

My loss trends are shown below. **I am wondering if the loss trends look fine to you?** The discriminator losses sit around 0.2 to 0.3 in the plot below:

| closed | 2021-01-15T05:34:12Z | 2021-01-27T23:05:46Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1226 | [] | mamrez7 | 0 |
deepspeedai/DeepSpeed | pytorch | 6,600 | [BUG] Facing build error while compiling FUSED_ADAM on ROCm | Hi, I would like to report the build error below, which occurs while compiling DeepSpeed with fused Adam. Please let me know if there is any further information I can share to help resolve this issue. Thank you.
**Install command**
```
DS_BUILD_FUSED_ADAM=1 pip install .
```
**Error:** Log is huge so I've attached the file below
[failure_log.txt](https://github.com/user-attachments/files/17262422/failure_log.txt)
| closed | 2024-10-04T18:19:37Z | 2024-10-05T21:02:54Z | https://github.com/deepspeedai/DeepSpeed/issues/6600 | [] | itej89 | 3 |
AntonOsika/gpt-engineer | python | 695 | 'pip install -e .' command not working on windows | I downloaded the repo and am trying to build it on Windows, but it's not working.
Error:
(venv_gpt_engineer) PS gpt-engineer> pip install -e .
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: C:\Users\jiten\Desktop\Jack\Work\gpt-engineer
(A "pyproject.toml" file was found, but editable mode currently requires a setuptools-based build.)
WARNING: You are using pip version 21.1.2; however, version 23.2.1 is available.
You should consider upgrading via the 'C:\Users\jiten\Desktop\Jack\Work\gpt-engineer\venv_gpt_engineer\Scripts\python.exe -m pip install --upgrade pip' command. | closed | 2023-09-14T11:51:25Z | 2023-09-17T18:45:11Z | https://github.com/AntonOsika/gpt-engineer/issues/695 | [
"bug",
"triage"
] | bafna94 | 1 |
recommenders-team/recommenders | machine-learning | 2,015 | [BUG] sar_movielens.ipynb - top_k = model.recommend_k_items(test, top_k=TOP_K, remove_seen=True) error | ### Description
<!--- Describe your issue/bug/request in detail -->
I get an error when running the sar_movielens.ipynb notebook from an Azure Machine Learning compute instance.
<img width="727" alt="image" src="https://github.com/recommenders-team/recommenders/assets/110788250/4a967e56-d122-4166-b493-a608670a7e9d">
```code
2023-10-09 20:08:29,498 INFO Calculating recommendation scores
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 2
1 with Timer() as test_time:
----> 2 top_k = model.recommend_k_items(test, top_k=TOP_K, remove_seen=True)
4 print("Took {} seconds for prediction.".format(test_time.interval))
File /anaconda/envs/msftrecsys2/lib/python3.9/site-packages/recommenders/models/sar/sar_singlenode.py:533, in SARSingleNode.recommend_k_items(self, test, top_k, sort_top_k, remove_seen)
520 def recommend_k_items(self, test, top_k=10, sort_top_k=True, remove_seen=False):
521 """Recommend top K items for all users which are in the test set
522
523 Args:
(...)
530 pandas.DataFrame: top k recommendation items for each user
531 """
--> 533 test_scores = self.score(test, remove_seen=remove_seen)
535 top_items, top_scores = get_top_k_scored_items(
536 scores=test_scores, top_k=top_k, sort_top_k=sort_top_k
537 )
539 df = pd.DataFrame(
540 {
541 self.col_user: np.repeat(
(...)
546 }
547 )
File /anaconda/envs/msftrecsys2/lib/python3.9/site-packages/recommenders/models/sar/sar_singlenode.py:346, in SARSingleNode.score(self, test, remove_seen)
344 # calculate raw scores with a matrix multiplication
345 logger.info("Calculating recommendation scores")
--> 346 test_scores = self.user_affinity[user_ids, :].dot(self.item_similarity)
348 # ensure we're working with a dense ndarray
349 if isinstance(test_scores, sparse.spmatrix):
File /anaconda/envs/msftrecsys2/lib/python3.9/site-packages/scipy/sparse/_base.py:411, in _spbase.dot(self, other)
409 return self * other
410 else:
--> 411 return self @ other
File /anaconda/envs/msftrecsys2/lib/python3.9/site-packages/scipy/sparse/_base.py:622, in _spbase.__matmul__(self, other)
620 def __matmul__(self, other):
621 if isscalarlike(other):
--> 622 raise ValueError("Scalar operands are not allowed, "
623 "use '*' instead")
624 return self._mul_dispatch(other)
ValueError: Scalar operands are not allowed, use '*' instead
```
### In which platform does it happen?
<!--- Describe the platform where the issue is happening (use a list if needed) -->
<!--- For example: -->
<!--- * Azure Data Science Virtual Machine. -->
<!--- * Azure Databricks. -->
<!--- * Other platforms. -->
It happen on Azure ML compute instance
### How do we replicate the issue?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for pyspark -->
<!--- * Run unit test `test_sar_pyspark.py` with `pytest -m 'spark'` -->
<!--- * ... -->
This is the setup script I use in Azure ML:
```bash
# 1. Install gcc if it is not installed already. On Ubuntu, this could done by using the command
# sudo apt install gcc
# 2. Create and activate a new conda environment
conda create -n msftrecsys2 python=3.9.16
conda activate msftrecsys2
# 3. Install the core recommenders package. It can run all the CPU notebooks.
pip install recommenders
pip install recommenders[gpu]
# 4. create a Jupyter kernel
python -m ipykernel install --user --name msftrecsys2 --display-name msftrecsys2
# 5. Clone this repo within VSCode or using command line:
git clone https://github.com/recommenders-team/recommenders.git
```
this is my compute instance setting:
<img width="1108" alt="image" src="https://github.com/recommenders-team/recommenders/assets/110788250/2c43df07-eb72-4f3a-82e2-56e48f4f594a">
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for SAR PySpark should pass successfully. -->
Cell below should run without throwing error
```python
with Timer() as test_time:
top_k = model.recommend_k_items(test, top_k=TOP_K, remove_seen=True)
print("Took {} seconds for prediction.".format(test_time.interval))
```
### Other Comments
| closed | 2023-10-09T20:23:56Z | 2024-04-30T04:58:46Z | https://github.com/recommenders-team/recommenders/issues/2015 | [
"bug"
] | lordaouy | 4 |
modin-project/modin | pandas | 7,295 | unpin numexpr | Since the latest version of numexpr is 2.10.0, can it be unpinned? (pandas pins a floor but not [ceiling](https://github.com/modin-project/modin/issues/6469) for numexpr.) | closed | 2024-06-01T19:43:39Z | 2024-06-03T18:00:27Z | https://github.com/modin-project/modin/issues/7295 | [
"new feature/request 💬",
"External"
] | rootsmusic | 1 |
babysor/MockingBird | pytorch | 985 | How to solve this pydantic V2 problem? When running web.py, it says the pydantic version is wrong; I'm running it on a Linux system | PydanticImportError: `pydantic:parse_raw_as` has been removed in V2.
For further information visit https://errors.pydantic.dev/2.6/u/import-error
PyTorch 2.1.0
Python 3.10(ubuntu22.04)
Cuda 12.1
| open | 2024-02-02T06:11:19Z | 2024-03-09T03:13:45Z | https://github.com/babysor/MockingBird/issues/985 | [] | JJJJJJX1 | 1 |
laurentS/slowapi | fastapi | 107 | @limiter.limit("1/second") not working | It works with a per-minute limit, but not with a per-second limit. | closed | 2022-08-19T04:57:17Z | 2022-08-24T12:40:24Z | https://github.com/laurentS/slowapi/issues/107 | [] | moo611 | 5 |
gradio-app/gradio | deep-learning | 10,564 | Misplaced Chat Avatar While Thinking | ### Describe the bug
When the chatbot is thinking, the Avatar icon is misplaced. When it is actually inferencing or done inferencing, the avatar is fine.
Similar to https://github.com/gradio-app/gradio/issues/9655 I believe, but a special edge case. Also, I mostly notice the issue with rectangular images.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
from time import sleep
AVATAR = "./car.png"
# Define a simple chatbot function
def chatbot_response(message, hist):
    sleep(10)
    return f"Gradio is pretty cool!"

# Create a chat interface using gr.ChatInterface
chatbot = gr.ChatInterface(
    fn=chatbot_response,
    chatbot=gr.Chatbot(
        label="LLM",
        elem_id="chatbot",
        avatar_images=(
            None,
            AVATAR,
        ),
    ),
)
# Launch the chatbot
chatbot.launch()
```
### Screenshot



### Logs
```shell
```
### System Info
```shell
(base) carter.yancey@Yancy-XPS:~$ gradio environment
Gradio Environment Information:
------------------------------
Operating System: Linux
gradio version: 5.13.1
gradio_client version: 1.6.0
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 3.7.1
audioop-lts is not installed.
fastapi: 0.115.7
ffmpy: 0.3.2
gradio-client==1.6.0 is not installed.
httpx: 0.25.1
huggingface-hub: 0.27.1
jinja2: 3.1.2
markupsafe: 2.1.3
numpy: 1.26.2
orjson: 3.9.10
packaging: 23.2
pandas: 1.5.3
pillow: 10.0.0
pydantic: 2.5.1
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.1
ruff: 0.2.2
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.45.3
tomlkit: 0.12.0
typer: 0.15.1
typing-extensions: 4.8.0
urllib3: 2.3.0
uvicorn: 0.24.0.post1
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2023.10.0
httpx: 0.25.1
huggingface-hub: 0.27.1
packaging: 23.2
typing-extensions: 4.8.0
websockets: 11.0.3
```
### Severity
I can work around it | closed | 2025-02-11T18:31:28Z | 2025-03-04T21:23:07Z | https://github.com/gradio-app/gradio/issues/10564 | [
"bug",
"💬 Chatbot"
] | CarterYancey | 0 |
ultralytics/ultralytics | python | 19,745 | Export to TorchScript does not support dynamic batch and NMS | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
_No response_
### Bug
code
```
def export(model_path='yolo11x-pose.pt', imagz=832, batch=8):
    model = YOLO(model_path)
    # dynamic_axes = {
    #     'images': {0: 'batch', 2: 'height', 3: 'width'},  # make batch, height, width dynamic
    #     'output0': {0: 'batch', 1: 300, 2: 6}  # assuming the output dims need to be set
    # }
    onnx_model_path = model.export(imgsz=imagz, format="torchscript",
                                   **{'batch': batch, 'half': False, 'simplify': True,
                                      'nms': True, })
    print(f"✅ Model exported in torchscript format: {onnx_model_path}")
    # # Convert to FP16 format
    # output_path = onnx_model_path.replace('.onnx', '_fp16.onnx')
    # utils.convert_to_fp16(onnx_model_path, output_path)
    return onnx_model_path


imagz = 640
model_path = 'det_31c.pt'
batch = 8
script_path = export(model_path, imagz=imagz, batch=batch)
```
If I use `nms=False`, I also get the same error.
error
```
Traceback of TorchScript, original code (most recent call last):
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(1576): forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1729): _slow_forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1750): _call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1739): _wrapped_call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1276): trace_module
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(696): _trace_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1000): trace
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(492): export_torchscript
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(175): outer_func
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(407): __call__
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/model.py(741): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(32): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(48): <module>
RuntimeError: select(): index 5 out of range for tensor of size [5, 8400, 4] at dimension 0
Thread [77] had error: PyTorch execute failure: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/__torch__/ultralytics/engine/exporter.py", line 277, in forward
_169 = torch.pad(dets3, _168)
_170 = torch.copy_(torch.select(out, 0, 4), _169)
box4 = torch.select(boxes, 0, 5)
~~~~~~~~~~~~ <--- HERE
cls9 = torch.select(classes, 0, 5)
score9 = torch.select(scores0, 0, 5)
Traceback of TorchScript, original code (most recent call last):
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(1576): forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1729): _slow_forward
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1750): _call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/nn/modules/module.py(1739): _wrapped_call_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1276): trace_module
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(696): _trace_impl
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/torch/jit/_trace.py(1000): trace
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(492): export_torchscript
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(175): outer_func
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/exporter.py(407): __call__
/data/anaconda3/envs/wyx/lib/python3.11/site-packages/ultralytics/engine/model.py(741): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(32): export
/tmp/wyx/yolo_onnx_demo/yolo_export_script.py(48): <module>
RuntimeError: select(): index 5 out of range for tensor of size [5, 8400, 4] at dimension 0
Thread [91] had error: PyTorch execute failure: The following operation failed in the TorchScript interpreter.
```
I want to modify `exporter.py` to solve this.
If I use `torch.jit.script` to export modules such as `task.py`, `head.py`, `module.py`, I get errors and I don't know how to fix them.
And when I use it after `torch.jit.trace`, I get the same error when running inference with a different batch size:
```
@try_export
def export_torchscript(self, prefix=colorstr("TorchScript:")):
    """YOLO TorchScript model export."""
    LOGGER.info(f"\n{prefix} starting export with torch {torch.__version__}...")
    f = self.file.with_suffix(".torchscript")
    ts = torch.jit.trace(NMSModel(self.model, self.args) if self.args.nms else self.model, self.im, strict=False)
    ts = torch.jit.script(ts)
    extra_files = {"config.txt": json.dumps(self.metadata)}  # torch._C.ExtraFilesMap()
    if self.args.optimize:  # https://pytorch.org/tutorials/recipes/mobile_interpreter.html
        LOGGER.info(f"{prefix} optimizing for mobile...")
        from torch.utils.mobile_optimizer import optimize_for_mobile

        optimize_for_mobile(ts)._save_for_lite_interpreter(str(f), _extra_files=extra_files)
    else:
        ts.save(str(f), _extra_files=extra_files)
    return f, None
```
I don't know how to solve my problem
### Environment
Ultralytics 8.3.76 🚀 Python-3.11.11 torch-2.6.0+cu124 CPU (Intel Xeon Gold 6462C)
Model summary (fused): 112 layers, 43,630,509 parameters, 0 gradients, 164.9 GFLOPs
PyTorch: starting from 'det_31c.pt' with input shape (8, 3, 640, 640) BCHW and output shape(s) (8, 35, 8400) (83.6 MB)
### Minimal Reproducible Example
```
```
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | open | 2025-03-17T10:03:57Z | 2025-03-24T00:05:26Z | https://github.com/ultralytics/ultralytics/issues/19745 | [
"exports"
] | 631068264 | 15 |
schemathesis/schemathesis | pytest | 1,852 | [FEATURE] Filters for hooks | Continuation of #1673
Extend the hook system to allow users to specify the scope of hook functions through a decorator instead of implementing conditions inside the hook function body. This would simplify the filtering process and make it easier to use.
```python
# Filter via custom function
def select_operations(context):
# Or any complex logic here
return context.operation.verbose_name == "PATCH /users/{user_id}/"
# Apply hook only if `select_operations` returned `True`
@schemathesis.hook.apply_to(select_operations)
def filter_query(context, query):
return query["key"] != "42"
# Apply hook to everything except `GET /users/{user_id}/`
@schemathesis.hook.skip_for(method="GET", path="/users/{user_id}/")
def map_path_parameters(context, path_parameters):
path_parameters["user_id"] = 42
return path_parameters
# Simple filter hook for all operations
@schemathesis.hook
def filter_query(context, query):
return query["key"] != "42"
``` | closed | 2023-10-18T16:41:00Z | 2024-08-10T18:28:48Z | https://github.com/schemathesis/schemathesis/issues/1852 | [
"Priority: Medium",
"Type: Feature",
"Difficulty: Intermediate",
"UX: Usability",
"Component: Hooks"
] | Stranger6667 | 0 |
dynaconf/dynaconf | fastapi | 331 | [RFC] Add support for Pydantic BaseSettings | Pydantic has a schema class BaseSettings that can be integrated with Dynaconf validators. | closed | 2020-04-29T14:24:51Z | 2020-09-12T04:20:13Z | https://github.com/dynaconf/dynaconf/issues/331 | [
"Not a Bug",
"RFC"
] | rochacbruno | 2 |
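Appending to the RFC above: one way the integration could work is deriving one Dynaconf `Validator` per declared field of a settings schema. A rough stdlib sketch of the idea, with a dataclass standing in for the pydantic `BaseSettings` schema and a hypothetical `Validator` stand-in (not Dynaconf's or pydantic's real classes):

```python
from dataclasses import MISSING, dataclass, fields

@dataclass
class Validator:                      # hypothetical stand-in for dynaconf.Validator
    name: str
    must_exist: bool
    cast: type

def validators_from_schema(schema_cls):
    """Build one Validator per declared field of a dataclass-style schema."""
    out = []
    for f in fields(schema_cls):
        required = f.default is MISSING and f.default_factory is MISSING
        out.append(Validator(name=f.name, must_exist=required, cast=f.type))
    return out

@dataclass
class Settings:                       # stand-in for a pydantic BaseSettings subclass
    host: str
    port: int = 8000

vs = validators_from_schema(Settings)
print([(v.name, v.must_exist) for v in vs])  # → [('host', True), ('port', False)]
```

A real implementation would walk pydantic's field metadata instead, but the required/optional and cast mapping would look much the same.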
plotly/dash-core-components | dash | 307 | An in-range update of prettier is breaking the build 🚨 |
## The dependency [prettier](https://github.com/prettier/prettier) was updated from `1.14.2` to `1.14.3`.
🚨 [View failing branch](https://github.com/plotly/dash-core-components/compare/master...plotly:greenkeeper%2Fprettier-1.14.3).
This version is **covered** by your **current version range** and after updating it in your project **the build failed**.
prettier is a direct dependency of this project, and **it is very likely causing it to break**. If other packages depend on yours, this update is probably also breaking those in turn.
<details>
<summary>Status Details</summary>
- ✅ **ci/circleci: python-2.7:** Your tests passed on CircleCI! ([Details](https://circleci.com/gh/plotly/dash-core-components/1262?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).
- ❌ **percy/dash-core-components:** 2 visual changes need review ([Details](https://percy.io/plotly/dash-core-components/builds/1019778?utm_campaign=plotly&utm_content=dash-core-components&utm_source=github_status)).
- ✅ **ci/circleci: python-3.7:** Your tests passed on CircleCI! ([Details](https://circleci.com/gh/plotly/dash-core-components/1263?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).
- ✅ **ci/circleci: python-3.6:** Your tests passed on CircleCI! ([Details](https://circleci.com/gh/plotly/dash-core-components/1264?utm_campaign=vcs-integration-link&utm_medium=referral&utm_source=github-build-link)).
</details>
---
<details>
<summary>Release Notes for 1.14.3</summary>
<p><g-emoji class="g-emoji" alias="link" fallback-src="https://assets-cdn.github.com/images/icons/emoji/unicode/1f517.png">🔗</g-emoji> <a href="https://urls.greenkeeper.io/prettier/prettier/blob/master/CHANGELOG.md#1143"><strong>Changelog</strong></a></p>
</details>
<details>
<summary>FAQ and help</summary>
There is a collection of [frequently asked questions](https://greenkeeper.io/faq.html). If those don’t help, you can always [ask the humans behind Greenkeeper](https://github.com/greenkeeperio/greenkeeper/issues/new).
</details>
---
Your [Greenkeeper](https://greenkeeper.io) Bot :palm_tree:
| closed | 2018-09-19T13:26:14Z | 2018-12-04T20:20:44Z | https://github.com/plotly/dash-core-components/issues/307 | [] | greenkeeper[bot] | 1 |
FlareSolverr/FlareSolverr | api | 1,079 | The Cloudflare 'Verify you are human' button not found on the page. | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:
3.3.14-hotfix2
- Last working FlareSolverr version:
3.3.14
- Operating system:
Unraid 6.12.6
- Are you using Docker: [yes/no]
Yes
- FlareSolverr User-Agent (see log traces or / endpoint):
FlareSolverr User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36
- Are you using a VPN: [yes/no]
Yes
- Are you using a Proxy: [yes/no]
No
- Are you using Captcha Solver: [yes/no]
No
- If using captcha solver, which one:
No
- URL to test this issue:
From within Jackett using:
https://1337x.to/
https://1337x.to/
https://1337x.st/
https://x1337x.ws/
https://x1337x.eu/
https://x1337x.se/
https://1337x.so/
https://1337x.unblockit.dad/
https://1337x.unblockninja.com/
https://1337x.ninjaproxy1.com/
https://1337x.proxyninja.org/
https://1337x.torrentbay.st/
```
### Description
This is the headless Docker container version hosted within Community Applications. I have seen similar log errors in here, but the support threads all seem to finish with "update *** has fixed this issue". As far as I am aware I am using the latest version, and I also reverted to the previous version with no success. I am not sure what is going on here. I have attached container start-up and error logs; please let me know if anything else is needed.
[flaresolverr.txt](https://github.com/FlareSolverr/FlareSolverr/files/14337594/flaresolverr.txt)
### Logged Error Messages
```text
2024-02-20 10:22:18 DEBUG ReqId 22364533434112 Waiting for title (attempt 17): Just a moment...
2024-02-20 10:22:19 DEBUG ReqId 22364533434112 Timeout waiting for selector
2024-02-20 10:22:19 DEBUG ReqId 22364533434112 Try to find the Cloudflare verify checkbox...
2024-02-20 10:22:19 DEBUG ReqId 22364533434112 Cloudflare verify checkbox not found on the page.
2024-02-20 10:22:19 DEBUG ReqId 22364533434112 Try to find the Cloudflare 'Verify you are human' button...
2024-02-20 10:22:19 DEBUG ReqId 22364533434112 The Cloudflare 'Verify you are human' button not found on the page.
2024-02-20 10:22:20 DEBUG ReqId 22364541839104 A used instance of webdriver has been destroyed
```
### Screenshots
_No response_ | closed | 2024-02-19T23:30:46Z | 2024-02-21T02:27:49Z | https://github.com/FlareSolverr/FlareSolverr/issues/1079 | [
"duplicate"
] | lsultana98 | 8 |
plotly/dash-cytoscape | dash | 9 | Label Text Wrapping with ellipsis leaves "..." after changing layout | When the layout is set to preset, and we set the `text-wrap` for `edge` elements to be `ellipsis`, the "..." stay visually, even when the layout is changed. Here's an example:

When we change to grid layout:

This issue is currently only happening in Dash. Reproduction in React.js will come soon. | closed | 2018-08-27T17:51:06Z | 2019-03-06T21:46:36Z | https://github.com/plotly/dash-cytoscape/issues/9 | [
"bug"
] | xhluca | 2 |
kynan/nbstripout | jupyter | 55 | Piping not working on Python 3.5 | I'm trying to clean the test notebooks by using a pipe, but it doesn't work on Python 3.5. I just get an empty result:
```
$ cat tests/test_metadata.ipynb | nbstripout
```
Running `nbstripout tests/test_metadata.ipynb` correctly cleans the notebook inplace.
I tried with Python 2.7, and there it outputs the normal cleaned notebook when piping. Any ideas what could be wrong?
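One classic cause of an empty result when piping on Python 3 only is reading the notebook from `sys.stdin` in the wrong mode/encoding. A hedged helper sketch (my own code, not nbstripout's) showing a robust way to read a notebook from a binary stream:

```python
import io
import json

def read_notebook(stream):
    """Read a notebook (JSON) from a binary stream, decoding as UTF-8."""
    text = io.TextIOWrapper(stream, encoding="utf-8").read()
    return json.loads(text)

# In real use this would be: nb = read_notebook(sys.stdin.buffer)
demo = io.BytesIO(b'{"cells": [], "nbformat": 4}')
nb = read_notebook(demo)
print(nb["nbformat"])  # → 4
```
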
```
$ pip freeze
decorator==4.0.11
docopt==0.6.2
docutils==0.13.1
ipython==6.0.0
ipython-genutils==0.2.0
jedi==0.10.2
jsonschema==2.5.1
jupyter-core==4.3.0
nbformat==4.3.0
nbstripout==0.3.0
path.py==10.1
pexpect==4.2.1
pickleshare==0.7.4
prompt-toolkit==1.0.14
ptyprocess==0.5
Pygments==2.2.0
simplegeneric==0.8.1
six==1.10.0
testpath==0.3
traitlets==4.3.2
wcwidth==0.1.6
``` | closed | 2017-05-08T13:58:34Z | 2017-08-08T18:49:57Z | https://github.com/kynan/nbstripout/issues/55 | [
"type:bug",
"resolution:fixed"
] | jluttine | 13 |
ranaroussi/yfinance | pandas | 1,478 | yf.download fails with pandas 2.0.0 | Pandas 2.0.0 was released today. It appears something changed in Pandas that breaks `yf.download`. I'm using:
* Python 3.11.2
* yfinance 0.2.14
* macOS 12.6.4
`yf.download` works with Pandas 1.5.3:
```
>>> import pandas as pd
>>> import yfinance as yf
>>> pd.__version__
'1.5.3'
>>> yf.__version__
'0.2.14'
>>> yf.download('MSFT', start='2023-04-03', end='2023-04-04')
[*********************100%***********************] 1 of 1 completed
Open High Low Close Adj Close Volume
Date
2023-04-03 286.519989 288.269989 283.950012 287.230011 287.230011 24100907
```
`yf.download` fails with Pandas 2.0.0:
```
>>> import pandas as pd
>>> import yfinance as yf
>>> pd.__version__
'2.0.0'
>>> yf.__version__
'0.2.14'
>>> yf.download('MSFT', start='2023-04-03', end='2023-04-04')
[*********************100%***********************] 1 of 1 completed
1 Failed download:
- MSFT: No data found for this date range, symbol may be delisted
Empty DataFrame
Columns: [Open, High, Low, Close, Adj Close, Volume]
Index: []
``` | closed | 2023-04-03T23:56:20Z | 2023-04-14T05:38:25Z | https://github.com/ranaroussi/yfinance/issues/1478 | [] | mndavidoff | 12 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 419 | install.sh fails | System: 20.04.6
Network: Google is reachable
Running `sudo bash install.sh` as root
install.sh was freshly downloaded; I ran it several times and got the same problem every time. The log is as follows:
```
Updating package lists... | 正在更新软件包列表...
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Hit:2 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease
Hit:4 http://us.archive.ubuntu.com/ubuntu focal-security InRelease
Reading package lists... Done
Installing Git... | 正在安装Git...
Reading package lists... Done
Building dependency tree
Reading state information... Done
git is already the newest version (1:2.25.1-1ubuntu3.12).
0 upgraded, 0 newly installed, 0 to remove and 57 not upgraded.
Installing Python3... | 正在安装Python3...
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3 is already the newest version (3.8.2-0ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 57 not upgraded.
Installing PIP3... | 正在安装PIP3...
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-pip is already the newest version (20.0.2-5ubuntu1.10).
0 upgraded, 0 newly installed, 0 to remove and 57 not upgraded.
Installing python3-venv... | 正在安装python3-venv...
Reading package lists... Done
Building dependency tree
Reading state information... Done
python3-venv is already the newest version (3.8.2-0ubuntu2).
0 upgraded, 0 newly installed, 0 to remove and 57 not upgraded.
Creating path: /www/wwwroot | 正在创建路径: /www/wwwroot
Cloning Douyin_TikTok_Download_API.git from Github! | 正在从Github克隆Douyin_TikTok_Download_API.git!
Cloning into 'Douyin_TikTok_Download_API'...
remote: Enumerating objects: 3354, done.
remote: Counting objects: 100% (817/817), done.
remote: Compressing objects: 100% (338/338), done.
remote: Total 3354 (delta 562), reused 679 (delta 467), pack-reused 2537
Receiving objects: 100% (3354/3354), 8.55 MiB | 6.98 MiB/s, done.
Resolving deltas: 100% (2318/2318), done.
Creating a virtual environment | 正在创建虚拟环境
Activating the virtual environment | 正在激活虚拟环境
Setting pip to use the default PyPI index | 设置pip使用默认PyPI索引
Writing to /root/.config/pip/pip.conf
Installing pip setuptools | 安装pip setuptools
Looking in indexes: https://pypi.org/simple/
Requirement already satisfied: setuptools in ./venv/lib/python3.8/site-packages (44.0.0)
Installing dependencies from requirements.txt | 从requirements.txt安装依赖
Looking in indexes: https://pypi.org/simple/
Collecting aiofiles==23.2.1
Using cached aiofiles-23.2.1-py3-none-any.whl (15 kB)
Collecting annotated-types==0.6.0
Using cached annotated_types-0.6.0-py3-none-any.whl (12 kB)
Collecting anyio==4.3.0
Using cached anyio-4.3.0-py3-none-any.whl (85 kB)
Collecting browser-cookie3==0.19.1
Using cached browser_cookie3-0.19.1-py3-none-any.whl (14 kB)
Collecting certifi==2024.2.2
Using cached certifi-2024.2.2-py3-none-any.whl (163 kB)
Collecting click==8.1.7
Using cached click-8.1.7-py3-none-any.whl (97 kB)
Collecting colorama==0.4.6
Using cached colorama-0.4.6-py2.py3-none-any.whl (25 kB)
Collecting fastapi==0.110.2
Using cached fastapi-0.110.2-py3-none-any.whl (91 kB)
Collecting h11==0.14.0
Using cached h11-0.14.0-py3-none-any.whl (58 kB)
Collecting httpcore==1.0.5
Using cached httpcore-1.0.5-py3-none-any.whl (77 kB)
Collecting httpx==0.27.0
Using cached httpx-0.27.0-py3-none-any.whl (75 kB)
Collecting idna==3.7
Using cached idna-3.7-py3-none-any.whl (66 kB)
Collecting importlib_resources==6.4.0
Using cached importlib_resources-6.4.0-py3-none-any.whl (38 kB)
Collecting lz4==4.3.3
Using cached lz4-4.3.3-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.3 MB)
Collecting markdown-it-py==3.0.0
Using cached markdown_it_py-3.0.0-py3-none-any.whl (87 kB)
Collecting mdurl==0.1.2
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting numpy
Using cached numpy-1.24.4-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (17.3 MB)
Collecting pycryptodomex==3.20.0
Using cached pycryptodomex-3.20.0-cp35-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
Collecting pydantic==2.7.0
Using cached pydantic-2.7.0-py3-none-any.whl (407 kB)
Collecting pydantic_core==2.18.1
Using cached pydantic_core-2.18.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.1 MB)
WARNING: The candidate selected for download or install is a yanked version: 'pyfiglet' candidate (version 1.0.0 at https://files.pythonhosted.org/packages/29/e4/f2ec4cc8773a60678a702ad0c1cae0dbacfbf1f79956d4aecaa712a980f1/pyfiglet-1.0.0-py3-none-any.whl#sha256=975a6349470d1e470212015f13ef10f565de5270e387d9b0bc6330f692791b09 (from https://pypi.org/simple/pyfiglet/))
Reason for being yanked: Breaks old python versions
Collecting pyfiglet==1.0.0
Using cached pyfiglet-1.0.0-py3-none-any.whl (1.1 MB)
Collecting Pygments==2.17.2
Using cached pygments-2.17.2-py3-none-any.whl (1.2 MB)
Collecting pypng==0.20220715.0
Using cached pypng-0.20220715.0-py3-none-any.whl (58 kB)
Collecting pywebio==1.8.3
Using cached pywebio-1.8.3.tar.gz (500 kB)
Collecting pywebio-battery==0.6.0
Using cached pywebio_battery-0.6.0.tar.gz (13 kB)
Collecting PyYAML==6.0.1
Using cached PyYAML-6.0.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (736 kB)
Collecting qrcode==7.4.2
Using cached qrcode-7.4.2-py3-none-any.whl (46 kB)
Collecting rich==13.7.1
Using cached rich-13.7.1-py3-none-any.whl (240 kB)
Collecting sniffio==1.3.1
Using cached sniffio-1.3.1-py3-none-any.whl (10 kB)
Collecting starlette==0.37.2
Using cached starlette-0.37.2-py3-none-any.whl (71 kB)
Collecting tornado==6.4
Using cached tornado-6.4-cp38-abi3-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (435 kB)
Collecting typing_extensions==4.11.0
Using cached typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Collecting ua-parser==0.18.0
Using cached ua_parser-0.18.0-py2.py3-none-any.whl (38 kB)
Collecting user-agents==2.2.0
Using cached user_agents-2.2.0-py3-none-any.whl (9.6 kB)
Collecting uvicorn==0.29.0
Using cached uvicorn-0.29.0-py3-none-any.whl (60 kB)
Collecting websockets==12.0
Using cached websockets-12.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (130 kB)
Collecting exceptiongroup>=1.0.2; python_version < "3.11"
Using cached exceptiongroup-1.2.1-py3-none-any.whl (16 kB)
Collecting jeepney; python_version >= "3.7" and ("bsd" in sys_platform or sys_platform == "linux")
Using cached jeepney-0.8.0-py3-none-any.whl (48 kB)
Collecting zipp>=3.1.0; python_version < "3.10"
Using cached zipp-3.19.2-py3-none-any.whl (9.0 kB)
Building wheels for collected packages: pywebio, pywebio-battery
Building wheel for pywebio (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /www/wwwroot/Douyin_TikTok_Download_API/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xefcwgok/pywebio/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xefcwgok/pywebio/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-g8i06u6p
cwd: /tmp/pip-install-xefcwgok/pywebio/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for pywebio
Running setup.py clean for pywebio
Building wheel for pywebio-battery (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /www/wwwroot/Douyin_TikTok_Download_API/venv/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-xefcwgok/pywebio-battery/setup.py'"'"'; __file__='"'"'/tmp/pip-install-xefcwgok/pywebio-battery/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-f76w5kh7
cwd: /tmp/pip-install-xefcwgok/pywebio-battery/
Complete output (6 lines):
usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: setup.py --help [cmd1 cmd2 ...]
or: setup.py --help-commands
or: setup.py cmd --help
error: invalid command 'bdist_wheel'
----------------------------------------
ERROR: Failed building wheel for pywebio-battery
Running setup.py clean for pywebio-battery
Failed to build pywebio pywebio-battery
Installing collected packages: aiofiles, typing-extensions, annotated-types, exceptiongroup, idna, sniffio, anyio, lz4, jeepney, pycryptodomex, browser-cookie3, certifi, click, colorama, pydantic-core, pydantic, starlette, fastapi, h11, httpcore, httpx, zipp, importlib-resources, mdurl, markdown-it-py, numpy, pyfiglet, Pygments, pypng, tornado, ua-parser, user-agents, pywebio, pywebio-battery, PyYAML, qrcode, rich, uvicorn, websockets
Running setup.py install for pywebio ... done
Running setup.py install for pywebio-battery ... done
Successfully installed PyYAML-6.0.1 Pygments-2.17.2 aiofiles-23.2.1 annotated-types-0.6.0 anyio-4.3.0 browser-cookie3-0.19.1 certifi-2024.2.2 click-8.1.7 colorama-0.4.6 exceptiongroup-1.2.1 fastapi-0.110.2 h11-0.14.0 httpcore-1.0.5 httpx-0.27.0 idna-3.7 importlib-resources-6.4.0 jeepney-0.8.0 lz4-4.3.3 markdown-it-py-3.0.0 mdurl-0.1.2 numpy-1.24.4 pycryptodomex-3.20.0 pydantic-2.7.0 pydantic-core-2.18.1 pyfiglet-1.0.0 pypng-0.20220715.0 pywebio-1.8.3 pywebio-battery-0.6.0 qrcode-7.4.2 rich-13.7.1 sniffio-1.3.1 starlette-0.37.2 tornado-6.4 typing-extensions-4.11.0 ua-parser-0.18.0 user-agents-2.2.0 uvicorn-0.29.0 websockets-12.0 zipp-3.19.2
Deactivating the virtual environment | 正在停用虚拟环境
Adding Douyin_TikTok_Download_API to system service | 将Douyin_TikTok_Download_API添加到系统服务
Enabling Douyin_TikTok_Download_API service | 启用Douyin_TikTok_Download_API服务
Starting Douyin_TikTok_Download_API service | 启动Douyin_TikTok_Download_API服务
Douyin_TikTok_Download_API installation complete! | Douyin_TikTok_Download_API安装完成!
You can access the API at http://localhost:80 | 您可以在http://localhost:80访问API
You can change the port in config.yaml under the /www/wwwroot/Douyin_TikTok_Download_API directory | 您可以在/www/wwwroot/Douyin_TikTok_Download_API目录下的config.yaml中更改端口
If the API is not working, please change the cookie in config.yaml under the /www/wwwroot/Douyin_TikTok_Download_API/crawler/[Douyin/TikTok]/[APP/Web]/config.yaml directory | 如果API无法工作,请更改/www/wwwroot/Douyin_TikTok_Download_API/crawler/[Douyin/TikTok]/[APP/Web]/config.yaml目录下的cookie
``` | closed | 2024-06-05T13:04:22Z | 2024-07-04T12:59:04Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/419 | [
"BUG"
] | markvlenvision | 3 |
ageitgey/face_recognition | machine-learning | 723 | Multiple results for one person (face) | * face_recognition version: 1.2.2
* Python version: 2.7
* Operating System: Mac
First of all thanks for this amazing library.
**What I need**: I want to get multiple predictions for one face with respective distances and then allow the user/admin to take a call on which is the correct image.
**What I did**: By changing `n_neighbors` to 3 I'm able to get the closest 3 matches for the face. The output (`closest_distances`) is a pair of distances and indices, but I'm unsure how to use those indices to find out which person each one refers to in the trained classifier.
`closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=3)`
When I use `knn_clf.classes_`, I do get a list of the class labels, but the aforementioned index is not an index into `knn_clf.classes_`:
```
class_labels = knn_clf.classes_
user_id = class_labels[closest_distances[1][i][j]]
```
The user_id that I get in the above example is incorrect. I'm not sure how to fetch it from the classifier.
Is there something I'm missing? Is it possible to achieve this using this library?
TIA. | closed | 2019-01-25T09:10:25Z | 2021-01-21T23:50:03Z | https://github.com/ageitgey/face_recognition/issues/723 | [] | 316karan | 3 |
Lightning-AI/pytorch-lightning | deep-learning | 19,828 | TensorBoardLogger shows far more epochs than were actually run | ### Bug description
I used the following code to log the metrics, but I found that the epoch count recorded in the TensorBoard logger is much higher than it should be:
```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)
```
In the configuration I set `max_epochs=10000`, but in the logger I got epoch values beyond 650k:


### What version are you seeing the problem on?
v2.1
### How to reproduce the bug
```python
def training_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("train_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

def validation_step(self, batch, batch_idx):
    x, y = batch
    y_hat = self.forward(x)
    loss = torch.sqrt(self.loss_fn(y_hat, y))
    self.log("valid_loss", loss, logger=True, prog_bar=True, on_epoch=True)
    return loss

pl.Trainer(..., logger=TensorBoardLogger(save_dir='store', version=log_path), ...)  # use any path you like
```
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0): 2.1.3
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9): 2.1.2
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source): pip
#- Running environment of LightningApp (e.g. local, cloud):
```
</details>
### More info
_No response_ | open | 2024-04-30T17:13:10Z | 2024-05-19T06:46:33Z | https://github.com/Lightning-AI/pytorch-lightning/issues/19828 | [
"bug",
"needs triage",
"ver: 2.1.x"
] | AlbireoBai | 2 |
coqui-ai/TTS | python | 3,335 | [Bug] Performance decay in XTTS-v2 | ### Describe the bug
I noticed a significant decrease in quality and audio similarity when using the Hugging Face space demo for xtts v2.0.3; before that version, the quality and similarity between input and output audios were miles better.
### To Reproduce
Use the Hugging Face space to compare xtts v2.0.3 with older versions.
### Expected behavior
_No response_
### Logs
_No response_
### Environment
```shell
Hugging face demo for xtts v2.0.3
```
### Additional context
_No response_ | closed | 2023-11-29T21:19:35Z | 2023-12-06T19:40:33Z | https://github.com/coqui-ai/TTS/issues/3335 | [
"bug"
] | Lenos500 | 3 |
schemathesis/schemathesis | pytest | 2,074 | [BUG] False positive error message if the hooks file itself raises `ModuleNotFound` | closed | 2024-02-27T18:27:57Z | 2024-02-27T22:09:07Z | https://github.com/schemathesis/schemathesis/issues/2074 | [
"Type: Bug"
] | Stranger6667 | 0 |
|
falconry/falcon | api | 2,024 | ASGI: iterating over `req.stream` hangs for chunked requests | When reading a streaming request without `Content-Length` (i.e., using "chunked" `Transfer-Encoding`), iterating over `req.stream` hangs in the case the request payload consists of more than one chunk.
It looks like this is caused by the receive loop implementation in `asgi/stream.py`. It only checks for the number of remaining bytes, and disregards the `more_body: False` hint in the case of an empty body. The logic to account for it is in place, but it is not reached due to the `continue` clause for an empty body chunk in the following event sequence:
```
RECV {'type': 'http.request', 'body': b'123456789abcdef\n', 'more_body': True}
RECV {'type': 'http.request', 'body': b'123456789abcdef\n', 'more_body': True}
RECV {'type': 'http.request', 'body': b'', 'more_body': False}
```
Eventually, the client (I was using `httpx`) times out, and an `http.disconnect` event is received which is handled correctly. But the request has already failed (timeout) from the client's perspective at this point. | closed | 2022-02-12T19:40:56Z | 2022-02-14T05:48:08Z | https://github.com/falconry/falcon/issues/2024 | [
"bug"
] | vytas7 | 1 |
tflearn/tflearn | data-science | 912 | Transfer Learning | I'm new to this, so I apologize ahead of time if this is the wrong way to ask; please correct me if it is.
I have a model that I have trained, but I have decided that I want to change the output layer. Is there a way to change the output layer without completely retraining the whole model. | open | 2017-09-23T21:48:03Z | 2017-10-22T11:10:22Z | https://github.com/tflearn/tflearn/issues/912 | [] | orrin-nay | 1 |
huggingface/datasets | pytorch | 7,070 | how set_transform affects batch size? | ### Describe the bug
I am trying to fine-tune w2v-bert for ASR task. Since my dataset is so big, I preferred to use the on-the-fly method with set_transform. So i change the preprocessing function to this:
```
def prepare_dataset(batch):
    input_features = processor(batch["audio"], sampling_rate=16000).input_features[0]
    input_length = len(input_features)
    labels = processor.tokenizer(batch["text"], padding=False).input_ids
    batch = {
        "input_features": [input_features],
        "input_length": [input_length],
        "labels": [labels]
    }
    return batch
train_ds.set_transform(prepare_dataset)
val_ds.set_transform(prepare_dataset)
```
After this, I also had to change the DataCollatorCTCWithPadding class like this:
```
@dataclass
class DataCollatorCTCWithPadding:
    processor: Wav2Vec2BertProcessor
    padding: Union[bool, str] = True

    def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
        # Separate input_features and labels
        input_features = [{"input_features": feature["input_features"][0]} for feature in features]
        labels = [feature["labels"][0] for feature in features]

        # Pad input features
        batch = self.processor.pad(
            input_features,
            padding=self.padding,
            return_tensors="pt",
        )

        # Pad and process labels
        label_features = self.processor.tokenizer.pad(
            {"input_ids": labels},
            padding=self.padding,
            return_tensors="pt",
        )
        labels = label_features["input_ids"]
        attention_mask = label_features["attention_mask"]

        # Replace padding with -100 to ignore these tokens during loss calculation
        labels = labels.masked_fill(attention_mask.ne(1), -100)

        batch["labels"] = labels
        return batch
```
But now a strange thing is happening: no matter how much I increase the batch size, the GPU VRAM usage does not change, while the total number of steps in the progress bar (logging) changes. Is this normal, or have I made a mistake?
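For reference, the on-the-fly semantics of set_transform as described above (the transform runs at access time, once per requested example, instead of ahead of time) can be sketched generically. The class below is a hypothetical stand-in, not the datasets API:

```python
class LazyDataset:
    """Minimal stand-in for a dataset with an on-the-fly transform."""

    def __init__(self, texts):
        self.texts = texts
        self.transform = None
        self.calls = 0  # counts how many times the transform actually ran

    def set_transform(self, fn):
        self.transform = fn  # nothing is computed here

    def __getitem__(self, idx):
        item = {"text": self.texts[idx]}
        if self.transform is not None:
            self.calls += 1
            item = self.transform(item)  # computed only at access time
        return item

    def __len__(self):
        return len(self.texts)


ds = LazyDataset(["a", "b", "c"])
ds.set_transform(lambda item: {"labels": item["text"].upper()})
batch = [ds[i] for i in (0, 2)]  # only the requested examples are transformed
```

Under this model, only the examples a data loader actually pulls for a batch get transformed, which is why preprocessing cost scales with batch size rather than dataset size.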
### Steps to reproduce the bug
I can share my code if needed.
### Expected behavior
For each batch, the set_transform function should be applied to batch-size-many examples from the dataset, which are then given to the model as one batch.
### Environment info
all updated versions | open | 2024-07-25T15:19:34Z | 2024-07-25T15:19:34Z | https://github.com/huggingface/datasets/issues/7070 | [] | VafaKnm | 0 |
home-assistant/core | asyncio | 140,874 | Geocaching: unable to add integration | ### The problem
After the update of HassOS from 14.2 to 15, the Geocaching integration could not be loaded. I deleted the integration and tried to add it again, however I always get an error message after putting in the credentials and allow the exchange of information.
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
2025.3.3
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Geocaching
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/geocaching/
### Diagnostics information
No information available as the integration can not be added.
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
 | open | 2025-03-18T14:21:29Z | 2025-03-22T19:35:52Z | https://github.com/home-assistant/core/issues/140874 | [
"integration: geocaching"
] | XalaTheShepard | 13 |
facebookresearch/fairseq | pytorch | 4,725 | kernel keeps dying on Jupiter notebook and inference is slow / no such problem if ran on gradio | I am trying out and validating translations from Bulgarian to English. The problem is that on Jupyter, the kernel keeps dying and is rather slow when translating. However, when run on Gradio, things happen very quickly.
This is the code I am using, where `article` is some article from a dataframe.
```
start_time = time.time()
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")
tokenizer = NllbTokenizerFast.from_pretrained("facebook/nllb-200-distilled-600M")
translator = pipeline('translation',
                      model=model,
                      tokenizer=tokenizer,
                      src_lang='bul_Cyrl',
                      tgt_lang='eng_Latn')

article = df.truncated[3]
output = translator(article, max_length=512)
end_time = time.time()
output = output[0]['translation_text']

result = {'inference_time': end_time - start_time,
          'result': output}
result
```
I am on an M1 Mac. Translations on Jupyter range from 15 to 35 seconds, whereas on Gradio from 4-10 seconds max. What could be the cause, given that I am even using NllbTokenizerFast? Is there a way to improve this? | open | 2022-09-15T12:39:42Z | 2022-09-15T12:39:42Z | https://github.com/facebookresearch/fairseq/issues/4725 | [
"question",
"needs triage"
] | alexander-py | 0 |
jeffknupp/sandman2 | rest-api | 93 | README.rst contains non-ASCII characters fails installtion | ```powershell
> pip install sandman2
Collecting sandman2
Using cached https://files.pythonhosted.org/packages/34/43/65317a5a01c16d494a68b37bc21d9cbe17c3fd089b76835fdbda60f1973b/sandman2-1.2.0.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\34357\AppData\Local\Temp\pip-install-1toj7xsk\sandman2\setup.py", line 19, in <module>
LONG_DESCRIPTION = read('README.rst')
File "C:\Users\34357\AppData\Local\Temp\pip-install-1toj7xsk\sandman2\setup.py", line 17, in read
return codecs.open(os.path.join(HERE, *parts), 'r').read()
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa6 in position 2084: illegal multibyte sequence
```
# Solution1
README.rst contains non-ASCII character `’`, should replace it with ASCII character `'`
See https://github.com/pypa/virtualenv/issues/201#issuecomment-3145690
# Solution2
https://github.com/jeffknupp/sandman2/blob/1ce21d6f7a6df77fa96fab694b0f9bb8469c166b/setup.py#L16-L17
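Concretely, that solution amounts to passing an explicit encoding in the `read` helper of setup.py. A sketch (the `__file__` fallback is only so the snippet runs standalone):

```python
import codecs
import os

HERE = os.path.abspath(os.path.dirname(__file__)) if "__file__" in globals() else os.getcwd()

def read(*parts):
    # An explicit encoding makes the read independent of the platform default
    # ('gbk' on Chinese-locale Windows, hence the UnicodeDecodeError above).
    return codecs.open(os.path.join(HERE, *parts), 'r', encoding='utf-8').read()
```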
Intentionally *do* add an 'utf-8' encoding option to `open` | closed | 2019-04-02T13:50:19Z | 2019-05-29T00:19:31Z | https://github.com/jeffknupp/sandman2/issues/93 | [] | NateScarlet | 2 |
pennersr/django-allauth | django | 4,043 | settings.ACCOUNT_AUTHENTICATION_METHOD MANDATORY Phone Number with Verification | Currently, when I try to set up a CustomUser without a username field and use USERNAME_FIELD = 'phone_number',
I cannot log in to the Django admin because allauth's authentication backend tries to look up the user by the 'username' field.
Setting ACCOUNT_USER_MODEL_USERNAME_FIELD = 'phone_number'
enables admin login.
However, it is still hard to set up the social login functionality with mandatory phone number verification in allauth.
I think there should be mountable PhoneNumberAuthBackend, etc. | closed | 2024-08-17T19:59:33Z | 2024-08-17T20:11:53Z | https://github.com/pennersr/django-allauth/issues/4043 | [] | fatihkabakk | 0 |
zappa/Zappa | flask | 562 | [Migrated] No module named 'app': ModuleNotFoundError , slim_handler : true | Originally from: https://github.com/Miserlou/Zappa/issues/1482 by [houdinisparks](https://github.com/houdinisparks)
<!--- Provide a general summary of the issue in the Title above -->
## Context
I am trying to deploy a >100 mb flask app with slim_handler : true. Below is my file structure:
```
+-- app
| +-- routes
| +-- static
| +-- __init__.py
| +-- load.py
+-- venv
+-- run.py
+-- setup.py
+-- requirements.txt
```
However, when I try to deploy with Zappa, it gives me the following error:
```
No module named 'app': ModuleNotFoundError
Traceback (most recent call last):
File "/var/task/handler.py", line 566, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 237, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 129, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 936, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 948, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'app'
```
this is my zappa_settings.json
```
{
    "dev": {
        "app_function": "run.app",
        "profile_name": "work",
        "project_name": "<projectname>",
        "runtime": "python3.6",
        "s3_bucket": "<s3bucketname>",
        "slim_handler": true
    }
}
```
and my run.py
```
from app.load import create_app
app = create_app()
# We only need this for local development.
if __name__ == '__main__':
    print("initiating the web app...")
    app.run(debug=True)
```
and my load.py file:
```
from flask import Flask
from app import routes
def create_app():
    """An application factory, as explained here: http://flask.pocoo.org/docs/patterns/appfactories/.

    :param config_object: The configuration object to use.
    """
    app = Flask(__name__, static_url_path='', template_folder="static/pages")
    register_blueprints(app)
    return app

def register_blueprints(app):
    """Register Flask blueprints."""
    app.register_blueprint(routes.api.api_bp)
    app.register_blueprint(routes.index.webpage_bp)
    return None
```
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: latest from pip installed from github <0.45.11>
* Operating System and Python version: Win 10, python 3.6
* The output of `pip freeze`:
```
aniso8601==3.0.0
argcomplete==1.9.3
asn1crypto==0.24.0
azure-common==1.1.8
azure-nspkg==2.0.0
azure-storage==0.36.0
base58==0.2.4
boto3==1.6.4
botocore==1.9.4
certifi==2018.1.18
cffi==1.11.5
cfn-flip==1.0.0
chardet==3.0.4
click==6.7
-e git+https://wyy95.visualstudio.com/IVAN/_git/ModelDeployment@213574193aa6759d2b7767871714f5c7e3079a11#egg=cognizant
cryptography==2.2.1
docutils==0.14
durationpy==0.5
Flask==0.12.2
Flask-REST==1.3
Flask-RESTful==0.3.6
future==0.16.0
hjson==3.0.1
idna==2.6
itsdangerous==0.24
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.19.0
lightgbm==2.1.0
MarkupSafe==1.0
numpy==1.14.1
pandas==0.22.0
placebo==0.8.1
pycparser==2.18
python-dateutil==2.6.1
python-slugify==1.2.4
pytz==2018.3
PyYAML==3.12
requests==2.18.4
s3transfer==0.1.13
scikit-learn==0.19.1
scipy==1.0.0
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.2.0
Unidecode==1.0.22
urllib3==1.22
Werkzeug==0.13
wsgi-request-logger==0.4.6
zappa==0.45.1
```
* Link to your project (optional):
| closed | 2021-02-20T12:22:48Z | 2024-04-13T17:09:25Z | https://github.com/zappa/Zappa/issues/562 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
howie6879/owllook | asyncio | 62 | Wasted an hour messing with this, what a mess | https://mp.weixin.qq.com/s/0CqLiKsyDQ-pVmeo3R-UlA
This damn installation tutorial: the code you copy either has broken line breaks or extra spaces.
Even at the last step, the installation still can't be completed. | closed | 2019-03-14T22:03:47Z | 2019-03-14T23:05:42Z | https://github.com/howie6879/owllook/issues/62 | [] | xiaodao2019 | 0 |
streamlit/streamlit | python | 9,951 | Tabs dont respond when using nested cache functions | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [X] I added a very descriptive title to this issue.
- [X] I have provided sufficient information below to help reproduce this issue.
### Summary
[](https://issues.streamlitapp.com/?issue=gh-9951)
I detected a critical problem with tabs when updating streamlit to a version newer than 1.35.0 (1.36 and up all have this problem). I found the issue in mobile, but it reproduces also in PC.
In my app I have the following scenario:
- Multiple tabs
- Several of them call functions that are cached
- And those functions call also (sometimes several times) nested cache functions.
In version 1.35 everything works fine on mobile and PC, but when I tried to update to a newer version, I noticed that changing between tabs doesn't work (they become super unresponsive and the app seems to crash). This is weird because my understanding was that changing tabs didn't trigger any runs/calculations.

If you erase all the @st.cache_data from the reproducible code example, the code works just fine. So the problem seems to be that Streamlit is doing something with the cached data when I try to switch tabs.
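For contrast, the behavior one expects from nested caches can be sketched with plain functools.lru_cache (a stand-in here, not st.cache_data): once the innermost value is memoized, repeated nested calls are near-free, and nothing should re-run on a UI event like a tab switch:

```python
from functools import lru_cache

calls = {"level1": 0}

@lru_cache(maxsize=None)
def cached_func_level1():
    calls["level1"] += 1  # counts real executions of the body
    return "test"

@lru_cache(maxsize=None)
def cached_func_level0():
    x = None
    for _ in range(2000):
        x = cached_func_level1()  # 2000 logical calls, 1 real execution
    return x

first = cached_func_level0()
second = cached_func_level0()  # fully memoized; no inner calls at all
```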
### Reproducible Code Example
```Python
import streamlit as st
st.header(body = "Testing problem switching tabs")
@st.cache_data(ttl=None)
def cached_func_level4():
    return "test"

@st.cache_data(ttl=None)
def cached_func_level3():
    return cached_func_level4()

@st.cache_data(ttl=None)
def cached_func_level2():
    return cached_func_level3()

@st.cache_data(ttl=None)
def cached_func_level1():
    return cached_func_level2()

@st.cache_data(ttl=None)
def cached_func_level0():
    # If you iterate more times than 2000, the tab problem is even bigger
    for _ in range(2000):
        x = cached_func_level1()
    return x

# In this testing tabs I only print a value and execute the
# "root" cached function, which calls other cached funcs
admin_tabs = st.tabs(["test1", "test2"])

with admin_tabs[0]:
    st.write("Hello")
    val = cached_func_level0()

with admin_tabs[1]:
    st.write("World!")
    val = cached_func_level0()
```
### Steps To Reproduce
Just run streamlit and when the page renders try to switch between the tabs.
### Expected Behavior
The expected behavior would be to be able to switch tabs without delay
### Current Behavior
Now the tabs crash when you try to switch between them, and the app either does not respond or responds super slowly.
### Is this a regression?
- [X] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.36 and up
- Python version: 3.11.5
- Operating System: Windows and iOS
- Browser: Testing in both safari and chrome
### Additional Information
_No response_ | closed | 2024-12-01T16:47:01Z | 2024-12-06T21:41:32Z | https://github.com/streamlit/streamlit/issues/9951 | [
"type:bug",
"feature:cache",
"priority:P3"
] | ricardorfe | 7 |
django-import-export/django-import-export | django | 1,806 | Admin export confirm page runs a query for the entire table | When the export page is loaded, the code runs a query for the entire table of the model (i.e. the equivalent of `SELECT * FROM table`):
https://github.com/django-import-export/django-import-export/blob/9839a28089575baef1ecab686ab81682751ed761/import_export/admin.py#L736
When trying to export even a moderately sized table, which I think is a common scenario for django-import-export, such a query is very problematic, especially because it ignores any filtering on the queryset.
I'm not sure if this query is really intentional, but if it is, would it be possible to add a way to turn it off? | closed | 2024-05-01T15:14:41Z | 2024-05-14T06:26:43Z | https://github.com/django-import-export/django-import-export/issues/1806 | [
"bug"
] | bluetech | 4 |
mckinsey/vizro | pydantic | 331 | Documentation Enhancement | ### Which package?
vizro
### What's the problem this feature will solve?
It will save developers and analysts countless hours of implementing selectors. It will do this by giving a detailed gif or webp video of what each selector does with an example of that selector in use.
It does this by giving a visual representation reinforcing what the selector does before implementation, guiding the user toward the correct selector for their use case.
### Describe the solution you'd like
Under each selector in the documentation section. Have a use case gif/webp video of that selector for a use case.
### Alternative Solutions
Knowing what each selector already does through implementing. Trial and error.
### Additional context
This enhancement can save developers time in understanding what does what for their implementations.
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-02-25T02:32:12Z | 2024-02-28T12:01:49Z | https://github.com/mckinsey/vizro/issues/331 | [
"Feature Request :nerd_face:",
"Needs triage :mag:"
] | Rhanselman | 3 |
hankcs/HanLP | nlp | 1,849 | pip install hanlp failed |
**Describe the bug**
pip install hanlp failed
**Code to reproduce the issue**
```
pipx install hanlp
```
**Describe the current behavior**
```
Fatal error from pip prevented installation. Full pip output in file:
/home/neoe/.local/pipx/logs/cmd_2023-10-12_20.44.33_pip_errors.log
pip failed to build package:
tokenizers
Some possibly relevant errors from pip install:
error: subprocess-exited-with-error
error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell`
error: could not compile `tokenizers` (lib) due to previous error; 3 warnings emitted
error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101
Error installing hanlp.
```
**Expected behavior**
install ok
**System information**
- debian 12
- Python version: Python 3.11.2
- HanLP version: newest
**Other info / logs**
* [x] I've completed this form and searched the web for solutions. | closed | 2023-10-12T11:57:31Z | 2023-10-13T11:06:01Z | https://github.com/hankcs/HanLP/issues/1849 | [
"invalid"
] | neoedmund | 4 |
d2l-ai/d2l-en | machine-learning | 2,086 | Default value for "training" parameter for BatchNorm custom layer call method | ```
class BatchNorm(tf.keras.layers.Layer):
    .
    .
    .
    @tf.function
    def call(self, inputs, training):
        if training:
            axes = list(range(len(inputs.shape) - 1))
            batch_mean = tf.reduce_mean(inputs, axes, keepdims=True)
            batch_variance = tf.reduce_mean(tf.math.squared_difference(
                inputs, tf.stop_gradient(batch_mean)), axes, keepdims=True)
            batch_mean = tf.squeeze(batch_mean, axes)
            batch_variance = tf.squeeze(batch_variance, axes)
            .
            .
            .
        else:
            .
            .
        return output
```
[Above is the scratch implementation of Batch Normalization layer](https://d2l.ai/chapter_convolutional-modern/batch-norm.html#implementation-from-scratch).
The call method doesn't have a default value for the training parameter (**None** or **True**).
[When this particular implementation is plugged into a Sequential model as shown in D2l Lenet implementation, ](https://d2l.ai/chapter_convolutional-modern/batch-norm.html#applying-batch-normalization-in-lenet)
```
def net():
    return tf.keras.models.Sequential([
        tf.keras.layers.Conv2D(filters=6, kernel_size=5,
                               input_shape=(28, 28, 1)),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Conv2D(filters=16, kernel_size=5),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.AvgPool2D(pool_size=2, strides=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(120),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.Dense(84),
        BatchNorm(),
        tf.keras.layers.Activation('sigmoid'),
        tf.keras.layers.Dense(10)]
    )
```
I get the **" tf__call() missing 1 required positional argument: 'training'"** error as shown below
```
2022-03-31 06:14:37.125158: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library cublas64_10.dll
Traceback (most recent call last):
File "D:/codebase/d2l-2/MCN/exercises/GoogleNet/runner.py", line 11, in <module>
X = layer(X)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "D:\codebase\d2l-2\MCN\exercises\GoogleNet\blocks\B1.py", line 14, in call
bridged_input = self.conv(bridged_input)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "D:\codebase\d2l-2\MCN\exercises\GoogleNet\ConvBNRelu.py", line 22, in call
bridged_input = self.batch_norm(bridged_input)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 580, in __call__
result = self._call(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 618, in _call
results = self._stateful_fn(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2419, in __call__
graph_function, args, kwargs = self._maybe_define_function(args, kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2777, in _maybe_define_function
graph_function = self._create_graph_function(args, kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 2667, in _create_graph_function
capture_by_value=self._capture_by_value),
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\framework\func_graph.py", line 981, in func_graph_from_py_func
func_outputs = python_func(*func_args, **func_kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\def_function.py", line 441, in wrapped_fn
return weak_wrapped_fn().__wrapped__(*args, **kwds)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\eager\function.py", line 3299, in bound_method_wrapper
return wrapped_fn(*args, **kwargs)
File "C:\Users\goofy\d2l\lib\site-packages\tensorflow\python\framework\func_graph.py", line 968, in wrapper
raise e.ag_error_metadata.to_exception(e)
TypeError: in user code:
TypeError: tf__call() missing 1 required positional argument: 'training'
```
I have handled the unassigned training parameter for now as shown below
```
@tf.function
def call(self, inputs, training=None):
    if training is None:
        training = tf.keras.backend.learning_phase()
    if training:
        axes = list(range(len(inputs.shape) - 1))
        batch_mean = tf.reduce_mean(inputs, axes, keepdims=True)
```
Any other suggestions would be great.
And the D2L code needs a change as the TensorFlow lenet with BN is still spitting out the above error.
| closed | 2022-03-31T00:53:51Z | 2022-12-15T23:59:52Z | https://github.com/d2l-ai/d2l-en/issues/2086 | [] | gopalakrishna-r | 3 |
miguelgrinberg/microblog | flask | 191 | Edit .gitignore | Hi everyone
please add `.idea` to the `.gitignore` file. | closed | 2019-11-11T13:58:39Z | 2020-04-10T14:14:08Z | https://github.com/miguelgrinberg/microblog/issues/191 | [
"question"
] | adelminayi | 7 |
strawberry-graphql/strawberry-django | graphql | 23 | Feature request: Support for Django form validation | Hey,
is there any possibility to add some support for the Django form validation by also returning the field parameters via GraphQL? What I was thinking of is something like this which takes whatever is defined in the model and passes it to the frontend so that I'd be possible to create some auto-validation based on the backend.
```json
User {
"firstname": {
"value": "James",
"validation": {
"type": "String",
"max_length": 30
}
}
}
``` | open | 2021-04-14T08:37:46Z | 2025-03-20T15:56:59Z | https://github.com/strawberry-graphql/strawberry-django/issues/23 | [] | holtergram | 3 |
explosion/spaCy | data-science | 12,310 | Unable to load model on VM, getting error 'utf-8' codec can't decode 0x86 in position 0: UnicodeDecodeError | Hello, guys,
I have successfully trained the model using GPU enabled system and now I want this model to be used on my VM.
While performing `spacy.load("/Users/home/djnago/model-last")`
I am getting the error:
```
/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py:877: UserWarning: [W095] Model 'en_pipeline' (0.0.0) was trained with spaCy v3.4 and may not be 100% compatible with the current version (3.5.0). If you see errors or degraded performance, download a newer compatible model or retrain your custom model with the current spaCy version. For more details and available updates, run: python -m spacy validate
warnings.warn(warn_msg)
Traceback (most recent call last):
File "/home/ubuntu/ResumeParser/manage.py", line 22, in <module>
main()
File "/home/ubuntu/ResumeParser/manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line
utility.execute()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/__init__.py", line 440, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 402, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 443, in execute
self.check()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/management/base.py", line 475, in check
all_issues = checks.run_checks(
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/registry.py", line 88, in run_checks
new_errors = check(app_configs=app_configs, databases=databases)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/urls.py", line 14, in check_url_config
return check_resolver(resolver)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/core/checks/urls.py", line 24, in check_resolver
return check_method()
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 494, in check
for pattern in self.url_patterns:
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/utils/functional.py", line 57, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 715, in url_patterns
patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/utils/functional.py", line 57, in __get__
res = instance.__dict__[self.name] = self.func(instance)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/django/urls/resolvers.py", line 708, in urlconf_module
return import_module(self.urlconf_name)
File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/home/ubuntu/ResumeParser/cv_parser_api/urls.py", line 18, in <module>
from resume_parser.views import home
File "/home/ubuntu/ResumeParser/resume_parser/views.py", line 21, in <module>
entityNlp = spacy.load(os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))),"model/model-last-ner-18"))
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/__init__.py", line 54, in load
return util.load_model(
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 434, in load_model
return load_model_from_path(Path(name), **kwargs) # type: ignore[arg-type]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 514, in load_model_from_path
return nlp.from_disk(model_path, exclude=exclude, overrides=overrides)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/language.py", line 2125, in from_disk
util.from_disk(path, deserializers, exclude) # type: ignore[arg-type]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 1352, in from_disk
reader(path / key)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/language.py", line 2119, in <lambda>
deserializers[name] = lambda p, proc=proc: proc.from_disk( # type: ignore[misc]
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py", line 419, in from_disk
util.from_disk(path, deserialize, exclude) # type: ignore
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy/util.py", line 1352, in from_disk
reader(path / key)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py", line 393, in load_model
self.model.from_bytes(mfile.read())
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/thinc/model.py", line 619, in from_bytes
return self.from_dict(msg)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/thinc/model.py", line 657, in from_dict
node.shims[i].from_bytes(shim_bytes)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/spacy_transformers/layers/hf_shim.py", line 89, in from_bytes
msg = srsly.msgpack_loads(bytes_data)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/srsly/_msgpack_api.py", line 27, in msgpack_loads
msg = msgpack.loads(data, raw=False, use_list=use_list)
File "/home/ubuntu/ResumeParser/rParserVenv/lib/python3.10/site-packages/srsly/msgpack/__init__.py", line 79, in unpackb
return _unpackb(packed, **kwargs)
File "srsly/msgpack/_unpacker.pyx", line 191, in srsly.msgpack._unpacker.unpackb
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x86 in position 0: invalid start byte
```
Kindly provide a solution, or let me know how to get rid of this error!
## Info about spaCy
- **spaCy version:** 3.5.0
- **Platform:** Linux-5.4.0-137-generic-x86_64-with-glibc2.31
- **Python version:** 3.10.10
- **Pipelines:** en_core_web_sm (3.5.0)
* Operating System: Ubuntu 22.04 LTS
* Python Version Used: Python 3.10.10
* spaCy Version Used: 3.5.0
* Environment Information: venv
| closed | 2023-02-21T09:11:10Z | 2023-02-23T09:04:45Z | https://github.com/explosion/spaCy/issues/12310 | [
"feat / serialize"
] | Anand195 | 0 |
jonra1993/fastapi-alembic-sqlmodel-async | sqlalchemy | 92 | Where or how to obtain a database session for a task | Thanks for this amazing work!
Can you tell me where or how to obtain a database session for a task? I saw that the `CRUDBase` initializer has a `db` parameter, but I didn't see where it is initialized. Thank you very much.
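The pattern usually used for background tasks is to build the session inside the task from a session factory, instead of reusing a request-scoped one. A stdlib sketch of that lifetime (every name below is a hypothetical stand-in; a real implementation would use the project's SQLAlchemy async session factory):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def get_session():
    # placeholder for something like `async_sessionmaker(engine)()`
    session = {"open": True}
    try:
        yield session
    finally:
        session["open"] = False  # placeholder for `await session.close()`

async def my_task():
    # the task owns its own session lifetime, independent of any request
    async with get_session() as session:
        return session["open"]

result = asyncio.run(my_task())
```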
| closed | 2024-01-23T14:18:28Z | 2024-01-24T09:02:27Z | https://github.com/jonra1993/fastapi-alembic-sqlmodel-async/issues/92 | [] | dylenchang | 1 |
statsmodels/statsmodels | data-science | 9,097 | Big performance difference between statsmodels MixedLM and R lmer | Hi,
Using statsmodels version 0.14.0, I notice a big performance difference when running the same model in R. Using statsmodels' `mixedlm`, the model takes 41 minutes to run, while it takes only 1-2 seconds using `lmer` in R on the same machine. Why is there such a large difference in performance, and is there anything I can do to speed things up?
I have tried different `method` arguments, but most of them have trouble converging.
My dataset contains 125,066 groups (pairs) and has 2 categorical variables and 1 numerical variable. I am comparing the following code:
```
lmm_model = smf.mixedlm(f'value ~ C(cat_var1) + C(cat_var2) + numerical_var', data=pdf, groups=pdf['unit'])
lmm_results = lmm_model.fit()
print(lmm_results.summary())
```
And in R:
```r
summary(lmer('value ~ cat_var1 + cat_var2 + numerical_var + (1|unit)', data=data_r))
```
Thanks in advance! | open | 2023-12-18T10:41:42Z | 2024-04-16T12:11:20Z | https://github.com/statsmodels/statsmodels/issues/9097 | [] | irrationalme | 1 |
dhaitz/mplcyberpunk | matplotlib | 10 | Savefig saves only background color | Is there a fix for this? I'd like to be able to save my figures at a higher DPI, but for some reason savefig only produces a dark rectangle of the background color. | open | 2021-11-14T18:39:18Z | 2023-06-28T10:44:43Z | https://github.com/dhaitz/mplcyberpunk/issues/10 | [] | miroslavtushev | 2 |
google-research/bert | nlp | 1,138 | Error when running run_pretraining.py (error recorded from training loop: indices[] is not in ...) | I'm trying to create my own `.ckpt.model` file by running the `run_pretraining.py file` on Google Colab using this command :
> !python run_pretraining.py \
> --bert_config_file "../bert-multi-cased/bert_config.json" \
> --input_file "../bert-model-custom/pretrain.tfrecord" \
> --output_dir "../bert-model-custom" \
> --init_checkpoint "../bert-multi-cased/bert_model.ckpt" \
> --do_train True \
> --train_batch_size 2
but i encountered this error :
> ....
> INFO:tensorflow:Saving checkpoints for 0 into ../bert-model-custom/model.ckpt.
> I0814 08:29:48.793563 140475286189952 basic_session_run_hooks.py:606] Saving checkpoints for 0 into ../bert-model-custom/model.ckpt.
> ERROR:tensorflow:Error recorded from training_loop: indices[254] = 164930 is not in [0, 119547)
> [[node bert/embeddings/GatherV2 (defined at /content/bert/modeling.py:419) ]]
I've read somewhere that this problem is caused by the vocabulary size being exceeded, i.e. the **164930** above is outside the bound of **119547**. Am I right?
If that's right, where can I change the vocabulary size?
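That reading fits the numbers: 119547 is the multilingual-cased `vocab_size` in `bert_config.json`, while 164930 looks like a token id produced with a different, larger `vocab.txt`. In that case the usual fix is to regenerate the tfrecord with the checkpoint's own `vocab.txt`, rather than to change `vocab_size`. A quick consistency check, as a sketch (the paths in the comment are placeholders):

```python
import json

def vocab_matches_config(vocab_path, config_path):
    """Line count of vocab.txt should equal vocab_size in bert_config.json."""
    with open(vocab_path, encoding="utf-8") as f:
        vocab_len = sum(1 for _ in f)
    with open(config_path, encoding="utf-8") as f:
        vocab_size = json.load(f)["vocab_size"]
    return vocab_len == vocab_size

# e.g. vocab_matches_config("../bert-multi-cased/vocab.txt",
#                           "../bert-multi-cased/bert_config.json")
```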
Thank you in advance,
sorry for my bad English. | open | 2020-08-14T08:54:08Z | 2020-08-14T08:56:47Z | https://github.com/google-research/bert/issues/1138 | [] | dhimasyoga16 | 1 |
tqdm/tqdm | jupyter | 1,177 | Do an automatic stream flush before rendering a progress bar | 4.60.0 3.7.6 (default, Jan 8 2020, 20:23:39) [MSC v.1916 64 bit (AMD64)] win32
Running Python in Spyder
While tqdm is great for the negligible effort required to have a progress bar, I've found it necessary to add a `sys.stdout.flush()` call before usage to avoid possible interleaved output. This taints the code and reduces the ease of use; e.g. below, the output was interleaved even though the `1` was printed before the bar started:
```
1%| | 14/2408 [00:00<00:17, 135.41it/s]1
11%|█ | 263/2408 [00:02<00:17, 123.80it/s]
```
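Until then, the flush can be hidden behind a small decorator so call sites stay clean (a sketch; note that tqdm writes the bar to stderr by default, which is why unflushed stdout text is what ends up interleaved):

```python
import sys
from functools import wraps

def flushed(fn):
    """Flush stdout before the wrapped progress loop starts, so text
    printed earlier cannot interleave with the progress bar."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        sys.stdout.flush()
        return fn(*args, **kwargs)
    return wrapper

@flushed
def process(items):
    # real code would wrap the iterable: `for item in tqdm(items): ...`
    return [item * 2 for item in items]
```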
An option to have tqdm automatically flush the output stream it's about to use would be a huge benefit. | closed | 2021-06-03T11:54:35Z | 2021-07-06T06:42:14Z | https://github.com/tqdm/tqdm/issues/1177 | [
"p3-enhancement 🔥",
"to-fix ⌛",
"p2-bug-warning ⚠"
] | nickion | 8 |
mljar/mercury | data-visualization | 21 | https://github.com/ngoclinh8123/ngoclinh8123.github.io | closed | 2022-01-24T23:57:43Z | 2022-01-25T06:10:51Z | https://github.com/mljar/mercury/issues/21 | [] | Hanzie666 | 0 |
redis/redis-om-python | pydantic | 475 | Test case `test_pagination_queries` is flaky | Two successive runs of the test suite resulted in failure and success without any changes to the code.
The first failure was caused by an AssertionError in `test_pagination_queries`:
```
members = (Member(id=0, first_name='Andrew', last_name='Brookins', email='a@example.com', join_date=datetime.date(2023, 2, 11), ...com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.'))
m = Models(BaseHashModel=<class 'tests_sync.test_hash_model.m.<locals>.BaseHashModel'>, Order=<class 'tests_sync.test_hash_model.m.<locals>.Order'>, Member=<class 'tests_sync.test_hash_model.m.<locals>.Member'>)
@py_test_mark_sync
def test_pagination_queries(members, m):
member1, member2, member3 = members
actual = m.Member.find(m.Member.last_name == "Brookins").page()
assert actual == [member1, member2]
actual = m.Member.find().page(1, 1)
> assert actual == [member2]
E AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')] == [Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E At index 0 diff: Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.') != Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')
E Full diff:
E - [Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E + [Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')]
tests_sync/test_hash_model.py:187: AssertionError
```
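For reference, the invariant the test depends on can be reproduced without Redis (a stdlib sketch; slice-style `page(offset, limit)` semantics are an assumption read off the test): `page(1, 1)` is only deterministic when the underlying result order is pinned, e.g. by an explicit sort.

```python
def page(results, offset=0, limit=10):
    # slice-style pagination: stable only if `results` ordering is stable
    return results[offset : offset + limit]

members = [("Andrew", "Brookins"), ("Kim", "Brookins"), ("Andrew", "Smith")]
ordered = sorted(members)  # a pinned order makes the page deterministic
second = page(ordered, 1, 1)
```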
---
Attached is a full log showing the first failure and an immediate re-run resulting in success.
<details>
<summary>Full console output</summary>
```
~/Repositories/redis-om-python fix-model-typings* 9s
redis-om-DEJACET3-py3.10 ❯ make test
/opt/homebrew/bin/poetry install
Installing dependencies from lock file
No dependencies to install or update
Installing the current project: redis-om (0.1.2)
touch .install.stamp
/opt/homebrew/bin/poetry run python make_sync.py
docker-compose up -d
[+] Running 7/7
⠿ oss_redis Pulled 3.6s
⠿ 5731adb3a4ab Already exists 0.0s
⠿ e78ad00da4bd Pull complete 0.6s
⠿ acf81d284940 Pull complete 0.8s
⠿ c19f7ed7779d Pull complete 1.5s
⠿ 9df49c3f82f2 Pull complete 1.5s
⠿ cf4fe2915070 Pull complete 1.5s
[+] Running 3/3
⠿ Network redis-om-python_default Created 0.0s
⠿ Container redis-om-python-oss_redis-1 Started 0.4s
⠿ Container redis-om-python-redis-1 Started 0.5s
REDIS_OM_URL=""redis://localhost:6380?decode_responses=True"" /opt/homebrew/bin/poetry run pytest -n auto -vv ./tests/ ./tests_sync/ --cov-report term-missing --cov aredis_om redis_om
=============================================================================== test session starts ================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0 -- /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
cachedir: .pytest_cache
rootdir: /Users/marian/Repositories/redis-om-python, configfile: pytest.ini
plugins: xdist-3.2.0, asyncio-0.20.3, cov-4.0.0
asyncio: mode=strict
[gw0] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw1] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw2] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw3] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw4] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw5] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw6] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw7] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw0] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw1] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw2] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw3] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw4] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw5] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw6] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw7] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
gw0 [152] / gw1 [152] / gw2 [152] / gw3 [152] / gw4 [152] / gw5 [152] / gw6 [152] / gw7 [152]
scheduling tests via LoadScheduling
tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_raises_error_with_dicts
tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_exact_match_queries
[gw3] [ 0%] PASSED tests/test_hash_model.py::test_validation_passes
[gw5] [ 1%] PASSED tests/test_hash_model.py::test_raises_error_with_dicts
tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_raises_error_with_sets
[gw4] [ 1%] PASSED tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_expire
[gw1] [ 2%] PASSED tests/test_hash_model.py::test_recursive_query_resolution
[gw6] [ 3%] PASSED tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_tag_queries_boolean_logic
[gw5] [ 3%] PASSED tests/test_hash_model.py::test_raises_error_with_sets
[gw7] [ 4%] PASSED tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_raises_error_with_lists
tests/test_hash_model.py::test_schema
[gw2] [ 5%] PASSED tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_sorting
[gw0] [ 5%] PASSED tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_delete_non_exist
[gw3] [ 6%] PASSED tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 7%] PASSED tests/test_hash_model.py::test_expire
tests/test_hash_model.py::test_raises_error_with_embedded_models
[gw1] [ 7%] PASSED tests/test_hash_model.py::test_tag_queries_boolean_logic
tests/test_hash_model.py::test_tag_queries_punctuation
[gw5] [ 8%] PASSED tests/test_hash_model.py::test_raises_error_with_lists
[gw7] [ 9%] PASSED tests/test_hash_model.py::test_schema
tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_primary_key_model_error
[gw6] [ 9%] PASSED tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_paginate_query
[gw3] [ 10%] PASSED tests/test_hash_model.py::test_saves_model_and_creates_pk
tests/test_hash_model.py::test_all_pks
[gw4] [ 11%] PASSED tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_raises_error_with_dataclasses
[gw2] [ 11%] PASSED tests/test_hash_model.py::test_sorting
tests/test_hash_model.py::test_validates_required_fields
[gw0] [ 12%] PASSED tests/test_hash_model.py::test_delete_non_exist
[gw5] [ 13%] PASSED tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_count
tests/test_hash_model.py::test_full_text_search_queries
[gw1] [ 13%] PASSED tests/test_hash_model.py::test_tag_queries_punctuation
tests/test_hash_model.py::test_tag_queries_negation
[gw6] [ 14%] PASSED tests/test_hash_model.py::test_paginate_query
[gw2] [ 15%] PASSED tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_hash_model.py::test_validates_field
[gw4] [ 15%] PASSED tests/test_hash_model.py::test_raises_error_with_dataclasses
tests/test_json_model.py::test_updates_a_model
[gw7] [ 16%] PASSED tests/test_hash_model.py::test_primary_key_model_error
tests/test_hash_model.py::test_primary_pk_exists
[gw3] [ 17%] PASSED tests/test_hash_model.py::test_all_pks
[gw0] [ 17%] PASSED tests/test_hash_model.py::test_full_text_search_queries
tests/test_json_model.py::test_all_pks
tests/test_hash_model.py::test_pagination_queries
[gw5] [ 18%] PASSED tests/test_hash_model.py::test_count
tests/test_json_model.py::test_validates_required_fields
[gw2] [ 19%] PASSED tests/test_hash_model.py::test_validates_field
tests/test_json_model.py::test_list_field_limitations
[gw1] [ 19%] PASSED tests/test_hash_model.py::test_tag_queries_negation
tests/test_json_model.py::test_in_query
[gw6] [ 20%] PASSED tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_tag_queries_negation
[gw0] [ 21%] PASSED tests/test_hash_model.py::test_pagination_queries
tests/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 21%] PASSED tests/test_json_model.py::test_validates_required_fields
tests/test_json_model.py::test_validates_field
[gw4] [ 22%] PASSED tests/test_json_model.py::test_updates_a_model
tests/test_json_model.py::test_paginate_query
[gw1] [ 23%] PASSED tests/test_json_model.py::test_in_query
tests/test_json_model.py::test_update_query
[gw7] [ 23%] PASSED tests/test_hash_model.py::test_primary_pk_exists
tests/test_json_model.py::test_recursive_query_field_resolution
[gw5] [ 24%] PASSED tests/test_json_model.py::test_validates_field
[gw6] [ 25%] PASSED tests/test_json_model.py::test_tag_queries_negation
tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_numeric_queries
[gw2] [ 25%] PASSED tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_allows_dataclasses
[gw3] [ 26%] PASSED tests/test_json_model.py::test_all_pks
tests/test_json_model.py::test_delete
[gw0] [ 26%] PASSED tests/test_json_model.py::test_allows_and_serializes_lists
tests/test_json_model.py::test_schema
[gw4] [ 27%] PASSED tests/test_json_model.py::test_paginate_query
[gw5] [ 28%] PASSED tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_saves_model_and_creates_pk
[gw1] [ 28%] PASSED tests/test_json_model.py::test_update_query
tests/test_json_model.py::test_exact_match_queries
[gw3] [ 29%] PASSED tests/test_json_model.py::test_delete
tests/test_json_model.py::test_saves_many_implicit_pipeline
[gw2] [ 30%] PASSED tests/test_json_model.py::test_allows_dataclasses
tests/test_json_model.py::test_allows_and_serializes_dicts
[gw0] [ 30%] PASSED tests/test_json_model.py::test_schema
tests/test_json_model.py::test_count
[gw6] [ 31%] PASSED tests/test_json_model.py::test_numeric_queries
tests/test_json_model.py::test_sorting
[gw3] [ 32%] PASSED tests/test_json_model.py::test_saves_many_implicit_pipeline
tests/test_json_model.py::test_saves_many_explicit_transaction
[gw4] [ 32%] PASSED tests/test_json_model.py::test_access_result_by_index_cached
[gw7] [ 33%] PASSED tests/test_json_model.py::test_recursive_query_field_resolution
tests/test_json_model.py::test_access_result_by_index_not_cached
[gw5] [ 34%] PASSED tests/test_json_model.py::test_saves_model_and_creates_pk
tests/test_json_model.py::test_full_text_search
tests/test_oss_redis_features.py::test_not_found
[gw1] [ 34%] PASSED tests/test_json_model.py::test_exact_match_queries
tests/test_json_model.py::test_recursive_query_expression_resolution
[gw2] [ 35%] PASSED tests/test_json_model.py::test_allows_and_serializes_dicts
[gw6] [ 36%] PASSED tests/test_json_model.py::test_sorting
tests/test_json_model.py::test_allows_and_serializes_sets
tests/test_json_model.py::test_not_found
[gw0] [ 36%] PASSED tests/test_json_model.py::test_count
tests/test_oss_redis_features.py::test_all_keys
[gw3] [ 37%] PASSED tests/test_json_model.py::test_saves_many_explicit_transaction
tests/test_json_model.py::test_delete_many_implicit_pipeline
[gw5] [ 38%] PASSED tests/test_oss_redis_features.py::test_not_found
tests/test_oss_redis_features.py::test_validates_required_fields
[gw6] [ 38%] PASSED tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_recursive_query_resolution
[gw1] [ 39%] PASSED tests/test_json_model.py::test_recursive_query_expression_resolution
[gw4] [ 40%] PASSED tests/test_json_model.py::test_access_result_by_index_not_cached
tests/test_pydantic_integrations.py::test_email_str
tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw2] [ 40%] PASSED tests/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_hash_model.py::test_delete_non_exist
[gw7] [ 41%] PASSED tests/test_json_model.py::test_full_text_search
tests/test_json_model.py::test_tag_queries_boolean_logic
[gw3] [ 42%] PASSED tests/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_hash_model.py::test_validates_required_fields
[gw5] [ 42%] PASSED tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_oss_redis_features.py::test_validates_field
[gw6] [ 43%] PASSED tests_sync/test_hash_model.py::test_recursive_query_resolution
tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
[gw4] [ 44%] PASSED tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw3] [ 44%] PASSED tests_sync/test_hash_model.py::test_validates_required_fields
tests_sync/test_hash_model.py::test_validates_field
[gw2] [ 45%] PASSED tests_sync/test_hash_model.py::test_delete_non_exist
tests_sync/test_hash_model.py::test_full_text_search_queries
[gw1] [ 46%] PASSED tests/test_pydantic_integrations.py::test_email_str
tests/test_redis_type.py::test_redis_type
[gw1] [ 46%] PASSED tests/test_redis_type.py::test_redis_type
tests_sync/test_hash_model.py::test_exact_match_queries
[gw0] [ 47%] PASSED tests/test_oss_redis_features.py::test_all_keys
[gw6] [ 48%] PASSED tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_tag_queries_punctuation
[gw5] [ 48%] PASSED tests/test_oss_redis_features.py::test_validates_field
tests/test_oss_redis_features.py::test_validation_passes
[gw7] [ 49%] PASSED tests/test_json_model.py::test_tag_queries_boolean_logic
tests/test_json_model.py::test_tag_queries_punctuation
[gw3] [ 50%] PASSED tests_sync/test_hash_model.py::test_validates_field
tests_sync/test_hash_model.py::test_validation_passes
[gw2] [ 50%] PASSED tests_sync/test_hash_model.py::test_full_text_search_queries
tests_sync/test_hash_model.py::test_pagination_queries
[gw4] [ 51%] PASSED tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
tests/test_oss_redis_features.py::test_saves_many
[gw3] [ 51%] PASSED tests_sync/test_hash_model.py::test_validation_passes
tests_sync/test_hash_model.py::test_raises_error_with_sets
[gw6] [ 52%] PASSED tests_sync/test_hash_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_all_pks
[gw5] [ 53%] PASSED tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_expire
[gw0] [ 53%] PASSED tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_numeric_queries
[gw1] [ 54%] PASSED tests_sync/test_hash_model.py::test_exact_match_queries
tests_sync/test_hash_model.py::test_retrieve_first
[gw3] [ 55%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_sets
tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw7] [ 55%] PASSED tests/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
[gw5] [ 56%] PASSED tests_sync/test_hash_model.py::test_expire
tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
[gw3] [ 57%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_lists
tests_sync/test_hash_model.py::test_updates_a_model
[gw2] [ 57%] FAILED tests_sync/test_hash_model.py::test_pagination_queries
tests_sync/test_hash_model.py::test_saves_many
[gw1] [ 58%] PASSED tests_sync/test_hash_model.py::test_retrieve_first
tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 59%] PASSED tests/test_oss_redis_features.py::test_saves_many
tests/test_oss_redis_features.py::test_updates_a_model
[gw0] [ 59%] PASSED tests_sync/test_hash_model.py::test_numeric_queries
tests_sync/test_hash_model.py::test_sorting
[gw5] [ 60%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_access_result_by_index_cached
[gw7] [ 61%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
tests_sync/test_hash_model.py::test_raises_error_with_dicts
[gw2] [ 61%] PASSED tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_delete_many
[gw3] [ 62%] PASSED tests_sync/test_hash_model.py::test_updates_a_model
[gw1] [ 63%] PASSED tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_hash_model.py::test_schema
[gw6] [ 63%] PASSED tests_sync/test_hash_model.py::test_all_pks
tests_sync/test_hash_model.py::test_delete
[gw7] [ 64%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dicts
tests_sync/test_hash_model.py::test_count
[gw0] [ 65%] PASSED tests_sync/test_hash_model.py::test_sorting
tests_sync/test_hash_model.py::test_primary_pk_exists
[gw2] [ 65%] PASSED tests_sync/test_hash_model.py::test_delete_many
[gw1] [ 66%] PASSED tests_sync/test_hash_model.py::test_schema
tests_sync/test_json_model.py::test_validates_required_fields
tests_sync/test_json_model.py::test_validation_passes
[gw5] [ 67%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_cached
tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
[gw6] [ 67%] PASSED tests_sync/test_hash_model.py::test_delete
[gw4] [ 68%] PASSED tests/test_oss_redis_features.py::test_updates_a_model
tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_primary_key_model_error
[gw3] [ 69%] PASSED tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_json_model.py::test_validates_field
[gw7] [ 69%] PASSED tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_all_pks
[gw2] [ 70%] PASSED tests_sync/test_json_model.py::test_validates_required_fields
[gw1] [ 71%] PASSED tests_sync/test_json_model.py::test_validation_passes
tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_saves_many_explicit_transaction
[gw0] [ 71%] PASSED tests_sync/test_hash_model.py::test_primary_pk_exists
tests_sync/test_json_model.py::test_delete
[gw3] [ 72%] PASSED tests_sync/test_json_model.py::test_validates_field
[gw5] [ 73%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_access_result_by_index_cached
tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw6] [ 73%] PASSED tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_json_model.py::test_updates_a_model
[gw4] [ 74%] PASSED tests_sync/test_hash_model.py::test_primary_key_model_error
tests_sync/test_json_model.py::test_paginate_query
[gw2] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_in_query
[gw1] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_update_query
[gw3] [ 76%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_cached
[gw0] [ 76%] PASSED tests_sync/test_json_model.py::test_delete
tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_exact_match_queries
[gw5] [ 77%] PASSED tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_json_model.py::test_recursive_query_field_resolution
[gw2] [ 78%] PASSED tests_sync/test_json_model.py::test_in_query
tests_sync/test_json_model.py::test_tag_queries_punctuation
[gw4] [ 78%] PASSED tests_sync/test_json_model.py::test_paginate_query
tests_sync/test_json_model.py::test_tag_queries_boolean_logic
[gw6] [ 79%] PASSED tests_sync/test_json_model.py::test_updates_a_model
tests_sync/test_json_model.py::test_full_text_search
[gw1] [ 80%] PASSED tests_sync/test_json_model.py::test_update_query
[gw3] [ 80%] PASSED tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_json_model.py::test_numeric_queries
[gw7] [ 81%] PASSED tests_sync/test_json_model.py::test_all_pks
tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw2] [ 82%] PASSED tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_list_field_limitations
[gw5] [ 82%] PASSED tests_sync/test_json_model.py::test_recursive_query_field_resolution
tests_sync/test_json_model.py::test_not_found
[gw0] [ 83%] PASSED tests_sync/test_json_model.py::test_exact_match_queries
tests_sync/test_json_model.py::test_sorting
[gw6] [ 84%] PASSED tests_sync/test_json_model.py::test_full_text_search
[gw4] [ 84%] PASSED tests_sync/test_json_model.py::test_tag_queries_boolean_logic
tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_json_model.py::test_allows_dataclasses
[gw1] [ 85%] PASSED tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw3] [ 86%] PASSED tests_sync/test_json_model.py::test_numeric_queries
tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 86%] PASSED tests_sync/test_json_model.py::test_not_found
tests_sync/test_oss_redis_features.py::test_all_keys
[gw7] [ 87%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_schema
[gw6] [ 88%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 88%] PASSED tests_sync/test_json_model.py::test_sorting
tests_sync/test_oss_redis_features.py::test_not_found
[gw4] [ 89%] PASSED tests_sync/test_json_model.py::test_allows_dataclasses
[gw2] [ 90%] PASSED tests_sync/test_json_model.py::test_list_field_limitations
tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_json_model.py::test_count
[gw1] [ 90%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw3] [ 91%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_lists
tests_sync/test_oss_redis_features.py::test_validation_passes
tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw7] [ 92%] PASSED tests_sync/test_json_model.py::test_schema
tests_sync/test_oss_redis_features.py::test_saves_many
[gw6] [ 92%] PASSED tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 93%] PASSED tests_sync/test_oss_redis_features.py::test_not_found
tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw2] [ 94%] PASSED tests_sync/test_json_model.py::test_count
tests_sync/test_pydantic_integrations.py::test_email_str
[gw4] [ 94%] PASSED tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_redis_type.py::test_redis_type
[gw4] [ 95%] PASSED tests_sync/test_redis_type.py::test_redis_type
[gw3] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw1] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_validation_passes
[gw7] [ 97%] PASSED tests_sync/test_oss_redis_features.py::test_saves_many
[gw6] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw5] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw0] [ 99%] PASSED tests_sync/test_pydantic_integrations.py::test_email_str
[gw5] [100%] PASSED tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
===================================================================================== FAILURES =====================================================================================
_____________________________________________________________________________ test_pagination_queries ______________________________________________________________________________
[gw2] darwin -- Python 3.10.8 /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
members = (Member(id=0, first_name='Andrew', last_name='Brookins', email='a@example.com', join_date=datetime.date(2023, 2, 11), ...com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.'))
m = Models(BaseHashModel=<class 'tests_sync.test_hash_model.m.<locals>.BaseHashModel'>, Order=<class 'tests_sync.test_hash_model.m.<locals>.Order'>, Member=<class 'tests_sync.test_hash_model.m.<locals>.Member'>)
@py_test_mark_sync
def test_pagination_queries(members, m):
member1, member2, member3 = members
actual = m.Member.find(m.Member.last_name == "Brookins").page()
assert actual == [member1, member2]
actual = m.Member.find().page(1, 1)
> assert actual == [member2]
E AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')] == [Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E At index 0 diff: Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.') != Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')
E Full diff:
E - [Member(id=1, first_name='Kim', last_name='Brookins', email='k@example.com', join_date=datetime.date(2023, 2, 11), age=34, bio='This is member 2 who can be quite anxious until you get to know them.')]
E + [Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who is a funny and lively sort of person.')]
tests_sync/test_hash_model.py:187: AssertionError
---------- coverage: platform darwin, python 3.10.8-final-0 ----------
Name Stmts Miss Cover Missing
----------------------------------------------------------------------
aredis_om/__init__.py 5 0 100%
aredis_om/async_redis.py 1 0 100%
aredis_om/checks.py 21 12 43% 9-10, 15-18, 23-28
aredis_om/connections.py 10 1 90% 20
aredis_om/model/__init__.py 2 0 100%
aredis_om/model/cli/__init__.py 0 0 100%
aredis_om/model/cli/migrate.py 13 13 0% 1-18
aredis_om/model/encoders.py 72 35 51% 68, 70, 73-86, 94, 96, 98, 132-147, 150-155, 159-173
aredis_om/model/migrations/__init__.py 0 0 100%
aredis_om/model/migrations/migrator.py 87 15 83% 24-35, 45, 56, 83-84, 89-90, 101, 112-114
aredis_om/model/model.py 888 115 87% 100, 111, 128, 136, 145-152, 166, 185, 193, 199, 203, 207, 211-214, 218, 241, 245, 297, 305, 352, 394, 401, 419, 446, 474, 499, 502-508, 527, 529, 533, 561-571, 592-595, 606, 653, 667-672, 685, 699, 701, 703, 705, 768, 787, 823-828, 844-854, 904, 927-928, 1072, 1135, 1157, 1161, 1166, 1190, 1221-1224, 1232, 1308, 1314, 1374-1382, 1396, 1436-1445, 1449, 1464-1472, 1483-1493, 1506, 1606-1607, 1634-1637, 1721, 1725-1729
aredis_om/model/query_resolver.py 23 23 0% 1-103
aredis_om/model/render_tree.py 33 31 6% 24-75
aredis_om/model/token_escaper.py 13 1 92% 16
aredis_om/sync_redis.py 1 1 0% 1
aredis_om/util.py 6 1 83% 7
----------------------------------------------------------------------
TOTAL 1175 248 79%
============================================================================= short test summary info ==============================================================================
FAILED tests_sync/test_hash_model.py::test_pagination_queries - AssertionError: assert [Member(id=2, first_name='Andrew', last_name='Smith', email='as@example.com', join_date=datetime.date(2023, 2, 11), age=100, bio='This is member 3 who i...
========================================================================== 1 failed, 151 passed in 1.54s ===========================================================================
make: *** [test] Error 1
~/Repositories/redis-om-python fix-model-typings* 8s
redis-om-DEJACET3-py3.10 ❯ make test
/opt/homebrew/bin/poetry run python make_sync.py
docker-compose up -d
[+] Running 2/2
⠿ Container redis-om-python-oss_redis-1 Started 0.5s
⠿ Container redis-om-python-redis-1 Running 0.0s
REDIS_OM_URL=""redis://localhost:6380?decode_responses=True"" /opt/homebrew/bin/poetry run pytest -n auto -vv ./tests/ ./tests_sync/ --cov-report term-missing --cov aredis_om redis_om
=============================================================================== test session starts ================================================================================
platform darwin -- Python 3.10.8, pytest-7.2.1, pluggy-1.0.0 -- /Users/marian/Library/Caches/pypoetry/virtualenvs/redis-om-DEJACET3-py3.10/bin/python
cachedir: .pytest_cache
rootdir: /Users/marian/Repositories/redis-om-python, configfile: pytest.ini
plugins: xdist-3.2.0, asyncio-0.20.3, cov-4.0.0
asyncio: mode=strict
[gw0] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw1] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw2] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw3] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw4] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw5] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw6] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw7] darwin Python 3.10.8 cwd: /Users/marian/Repositories/redis-om-python
[gw0] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw1] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw2] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw3] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw4] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw5] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw6] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
[gw7] Python 3.10.8 (main, Oct 13 2022, 09:48:40) [Clang 14.0.0 (clang-1400.0.29.102)]
gw0 [152] / gw1 [152] / gw2 [152] / gw3 [152] / gw4 [152] / gw5 [152] / gw6 [152] / gw7 [152]
scheduling tests via LoadScheduling
tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_raises_error_with_dicts
[gw5] [ 0%] PASSED tests/test_hash_model.py::test_raises_error_with_dicts
[gw3] [ 1%] PASSED tests/test_hash_model.py::test_validation_passes
tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_raises_error_with_sets
[gw4] [ 1%] PASSED tests/test_hash_model.py::test_delete
tests/test_hash_model.py::test_expire
[gw6] [ 2%] PASSED tests/test_hash_model.py::test_delete_many
tests/test_hash_model.py::test_updates_a_model
[gw1] [ 3%] PASSED tests/test_hash_model.py::test_recursive_query_resolution
tests/test_hash_model.py::test_tag_queries_boolean_logic
[gw0] [ 3%] PASSED tests/test_hash_model.py::test_exact_match_queries
tests/test_hash_model.py::test_delete_non_exist
[gw5] [ 4%] PASSED tests/test_hash_model.py::test_raises_error_with_sets
tests/test_hash_model.py::test_raises_error_with_lists
[gw2] [ 5%] PASSED tests/test_hash_model.py::test_numeric_queries
tests/test_hash_model.py::test_sorting
[gw4] [ 5%] PASSED tests/test_hash_model.py::test_expire
[gw7] [ 6%] PASSED tests/test_hash_model.py::test_access_result_by_index_not_cached
tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_schema
[gw3] [ 7%] PASSED tests/test_hash_model.py::test_retrieve_first
tests/test_hash_model.py::test_saves_model_and_creates_pk
[gw6] [ 7%] PASSED tests/test_hash_model.py::test_updates_a_model
tests/test_hash_model.py::test_paginate_query
[gw5] [ 8%] PASSED tests/test_hash_model.py::test_raises_error_with_lists
tests/test_hash_model.py::test_saves_many
[gw1] [ 9%] PASSED tests/test_hash_model.py::test_tag_queries_boolean_logic
tests/test_hash_model.py::test_tag_queries_punctuation
[gw4] [ 9%] PASSED tests/test_hash_model.py::test_raises_error_with_embedded_models
tests/test_hash_model.py::test_raises_error_with_dataclasses
[gw7] [ 10%] PASSED tests/test_hash_model.py::test_schema
tests/test_hash_model.py::test_primary_key_model_error
[gw3] [ 11%] PASSED tests/test_hash_model.py::test_saves_model_and_creates_pk
tests/test_hash_model.py::test_all_pks
[gw0] [ 11%] PASSED tests/test_hash_model.py::test_delete_non_exist
[gw2] [ 12%] PASSED tests/test_hash_model.py::test_sorting
tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_full_text_search_queries
[gw5] [ 13%] PASSED tests/test_hash_model.py::test_saves_many
tests/test_hash_model.py::test_count
[gw2] [ 13%] PASSED tests/test_hash_model.py::test_validates_required_fields
tests/test_hash_model.py::test_validates_field
[gw6] [ 14%] PASSED tests/test_hash_model.py::test_paginate_query
[gw4] [ 15%] PASSED tests/test_hash_model.py::test_raises_error_with_dataclasses
tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_all_pks
[gw1] [ 15%] PASSED tests/test_hash_model.py::test_tag_queries_punctuation
tests/test_hash_model.py::test_tag_queries_negation
[gw0] [ 16%] PASSED tests/test_hash_model.py::test_full_text_search_queries
tests/test_hash_model.py::test_pagination_queries
[gw2] [ 17%] PASSED tests/test_hash_model.py::test_validates_field
[gw3] [ 17%] PASSED tests/test_hash_model.py::test_all_pks
tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_updates_a_model
[gw5] [ 18%] PASSED tests/test_hash_model.py::test_count
[gw7] [ 19%] PASSED tests/test_hash_model.py::test_primary_key_model_error
tests/test_json_model.py::test_validates_required_fields
tests/test_hash_model.py::test_primary_pk_exists
[gw6] [ 19%] PASSED tests/test_hash_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_in_query
[gw1] [ 20%] PASSED tests/test_hash_model.py::test_tag_queries_negation
tests/test_json_model.py::test_recursive_query_field_resolution
[gw0] [ 21%] PASSED tests/test_hash_model.py::test_pagination_queries
tests/test_json_model.py::test_allows_and_serializes_lists
[gw4] [ 21%] PASSED tests/test_json_model.py::test_all_pks
tests/test_json_model.py::test_delete
[gw5] [ 22%] PASSED tests/test_json_model.py::test_validates_required_fields
tests/test_json_model.py::test_validates_field
[gw6] [ 23%] PASSED tests/test_json_model.py::test_in_query
tests/test_json_model.py::test_update_query
[gw3] [ 23%] PASSED tests/test_json_model.py::test_updates_a_model
tests/test_json_model.py::test_paginate_query
[gw5] [ 24%] PASSED tests/test_json_model.py::test_validates_field
tests/test_json_model.py::test_validation_passes
[gw2] [ 25%] PASSED tests/test_json_model.py::test_list_field_limitations
tests/test_json_model.py::test_allows_dataclasses
[gw4] [ 25%] PASSED tests/test_json_model.py::test_delete
tests/test_json_model.py::test_saves_many_implicit_pipeline
[gw0] [ 26%] PASSED tests/test_json_model.py::test_allows_and_serializes_lists
[gw1] [ 26%] PASSED tests/test_json_model.py::test_recursive_query_field_resolution
tests/test_json_model.py::test_schema
tests/test_json_model.py::test_full_text_search
[gw7] [ 27%] PASSED tests/test_hash_model.py::test_primary_pk_exists
tests/test_json_model.py::test_tag_queries_negation
[gw5] [ 28%] PASSED tests/test_json_model.py::test_validation_passes
tests/test_json_model.py::test_saves_model_and_creates_pk
[gw6] [ 28%] PASSED tests/test_json_model.py::test_update_query
tests/test_json_model.py::test_exact_match_queries
[gw3] [ 29%] PASSED tests/test_json_model.py::test_paginate_query
[gw2] [ 30%] PASSED tests/test_json_model.py::test_allows_dataclasses
tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_allows_and_serializes_dicts
[gw0] [ 30%] PASSED tests/test_json_model.py::test_schema
tests/test_json_model.py::test_count
[gw4] [ 31%] PASSED tests/test_json_model.py::test_saves_many_implicit_pipeline
tests/test_json_model.py::test_saves_many_explicit_transaction
[gw1] [ 32%] PASSED tests/test_json_model.py::test_full_text_search
tests/test_json_model.py::test_tag_queries_boolean_logic
[gw5] [ 32%] PASSED tests/test_json_model.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_not_found
[gw3] [ 33%] PASSED tests/test_json_model.py::test_access_result_by_index_cached
tests/test_json_model.py::test_access_result_by_index_not_cached
[gw6] [ 34%] PASSED tests/test_json_model.py::test_exact_match_queries
tests/test_json_model.py::test_recursive_query_expression_resolution
[gw2] [ 34%] PASSED tests/test_json_model.py::test_allows_and_serializes_dicts
tests/test_json_model.py::test_allows_and_serializes_sets
[gw0] [ 35%] PASSED tests/test_json_model.py::test_count
tests/test_oss_redis_features.py::test_all_keys
[gw7] [ 36%] PASSED tests/test_json_model.py::test_tag_queries_negation
tests/test_json_model.py::test_numeric_queries
[gw4] [ 36%] PASSED tests/test_json_model.py::test_saves_many_explicit_transaction
tests/test_json_model.py::test_delete_many_implicit_pipeline
[gw1] [ 37%] PASSED tests/test_json_model.py::test_tag_queries_boolean_logic
tests/test_json_model.py::test_tag_queries_punctuation
[gw5] [ 38%] PASSED tests/test_oss_redis_features.py::test_not_found
[gw6] [ 38%] PASSED tests/test_json_model.py::test_recursive_query_expression_resolution
tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_pydantic_integrations.py::test_email_str
[gw3] [ 39%] PASSED tests/test_json_model.py::test_access_result_by_index_not_cached
tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
[gw4] [ 40%] PASSED tests/test_json_model.py::test_delete_many_implicit_pipeline
tests_sync/test_hash_model.py::test_tag_queries_negation
[gw1] [ 40%] PASSED tests/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_validates_required_fields
[gw2] [ 41%] PASSED tests/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_hash_model.py::test_delete_non_exist
[gw5] [ 42%] PASSED tests/test_oss_redis_features.py::test_validates_required_fields
tests/test_oss_redis_features.py::test_validates_field
[gw7] [ 42%] PASSED tests/test_json_model.py::test_numeric_queries
tests/test_json_model.py::test_sorting
[gw1] [ 43%] PASSED tests_sync/test_hash_model.py::test_validates_required_fields
tests_sync/test_hash_model.py::test_validates_field
[gw3] [ 44%] PASSED tests/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw6] [ 44%] PASSED tests/test_pydantic_integrations.py::test_email_str
tests/test_redis_type.py::test_redis_type
[gw6] [ 45%] PASSED tests/test_redis_type.py::test_redis_type
tests_sync/test_hash_model.py::test_exact_match_queries
[gw4] [ 46%] PASSED tests_sync/test_hash_model.py::test_tag_queries_negation
tests_sync/test_hash_model.py::test_numeric_queries
[gw5] [ 46%] PASSED tests/test_oss_redis_features.py::test_validates_field
[gw2] [ 47%] PASSED tests_sync/test_hash_model.py::test_delete_non_exist
tests_sync/test_hash_model.py::test_full_text_search_queries
[gw1] [ 48%] PASSED tests_sync/test_hash_model.py::test_validates_field
tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_validation_passes
[gw0] [ 48%] PASSED tests/test_oss_redis_features.py::test_all_keys
tests_sync/test_hash_model.py::test_recursive_query_resolution
[gw1] [ 49%] PASSED tests_sync/test_hash_model.py::test_validation_passes
tests_sync/test_hash_model.py::test_expire
[gw3] [ 50%] PASSED tests/test_oss_redis_features.py::test_raises_error_with_embedded_models
tests/test_oss_redis_features.py::test_saves_many
[gw7] [ 50%] PASSED tests/test_json_model.py::test_sorting
[gw2] [ 51%] PASSED tests_sync/test_hash_model.py::test_full_text_search_queries
tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_pagination_queries
[gw4] [ 51%] PASSED tests_sync/test_hash_model.py::test_numeric_queries
tests_sync/test_hash_model.py::test_sorting
[gw6] [ 52%] PASSED tests_sync/test_hash_model.py::test_exact_match_queries
[gw5] [ 53%] PASSED tests/test_oss_redis_features.py::test_validation_passes
tests_sync/test_hash_model.py::test_retrieve_first
tests_sync/test_hash_model.py::test_all_pks
[gw1] [ 53%] PASSED tests_sync/test_hash_model.py::test_expire
[gw0] [ 54%] PASSED tests_sync/test_hash_model.py::test_recursive_query_resolution
tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
[gw2] [ 55%] PASSED tests_sync/test_hash_model.py::test_pagination_queries
tests_sync/test_hash_model.py::test_raises_error_with_sets
[gw6] [ 55%] PASSED tests_sync/test_hash_model.py::test_retrieve_first
[gw1] [ 56%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_embedded_models
tests_sync/test_hash_model.py::test_updates_a_model
tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
[gw4] [ 57%] PASSED tests_sync/test_hash_model.py::test_sorting
[gw7] [ 57%] PASSED tests/test_json_model.py::test_not_found
tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
[gw3] [ 58%] PASSED tests/test_oss_redis_features.py::test_saves_many
tests/test_oss_redis_features.py::test_updates_a_model
[gw0] [ 59%] PASSED tests_sync/test_hash_model.py::test_tag_queries_boolean_logic
tests_sync/test_hash_model.py::test_tag_queries_punctuation
[gw2] [ 59%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_sets
tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw6] [ 60%] PASSED tests_sync/test_hash_model.py::test_saves_model_and_creates_pk
tests_sync/test_hash_model.py::test_access_result_by_index_cached
[gw4] [ 61%] PASSED tests_sync/test_hash_model.py::test_saves_many
tests_sync/test_hash_model.py::test_delete_many
[gw7] [ 61%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dataclasses
tests_sync/test_hash_model.py::test_raises_error_with_dicts
[gw2] [ 62%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_lists
[gw1] [ 63%] PASSED tests_sync/test_hash_model.py::test_updates_a_model
tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_hash_model.py::test_primary_pk_exists
[gw0] [ 63%] PASSED tests_sync/test_hash_model.py::test_tag_queries_punctuation
tests_sync/test_hash_model.py::test_primary_key_model_error
[gw6] [ 64%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_cached
tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
[gw4] [ 65%] PASSED tests_sync/test_hash_model.py::test_delete_many
[gw7] [ 65%] PASSED tests_sync/test_hash_model.py::test_raises_error_with_dicts
tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_validates_required_fields
[gw3] [ 66%] PASSED tests/test_oss_redis_features.py::test_updates_a_model
tests_sync/test_hash_model.py::test_schema
[gw5] [ 67%] PASSED tests_sync/test_hash_model.py::test_all_pks
tests_sync/test_hash_model.py::test_delete
[gw0] [ 67%] PASSED tests_sync/test_hash_model.py::test_primary_key_model_error
tests_sync/test_json_model.py::test_saves_model_and_creates_pk
[gw7] [ 68%] PASSED tests_sync/test_json_model.py::test_validates_required_fields
tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
[gw1] [ 69%] PASSED tests_sync/test_hash_model.py::test_paginate_query
tests_sync/test_json_model.py::test_validates_field
[gw2] [ 69%] PASSED tests_sync/test_hash_model.py::test_primary_pk_exists
tests_sync/test_json_model.py::test_validation_passes
[gw3] [ 70%] PASSED tests_sync/test_hash_model.py::test_schema
[gw4] [ 71%] PASSED tests_sync/test_hash_model.py::test_count
tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_delete
[gw6] [ 71%] PASSED tests_sync/test_hash_model.py::test_access_result_by_index_not_cached
tests_sync/test_json_model.py::test_all_pks
[gw5] [ 72%] PASSED tests_sync/test_hash_model.py::test_delete
tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw7] [ 73%] PASSED tests_sync/test_json_model.py::test_saves_many_implicit_pipeline
tests_sync/test_json_model.py::test_paginate_query
[gw1] [ 73%] PASSED tests_sync/test_json_model.py::test_validates_field
tests_sync/test_json_model.py::test_access_result_by_index_cached
[gw4] [ 74%] PASSED tests_sync/test_json_model.py::test_delete
tests_sync/test_json_model.py::test_update_query
[gw2] [ 75%] PASSED tests_sync/test_json_model.py::test_validation_passes
tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw0] [ 75%] PASSED tests_sync/test_json_model.py::test_saves_model_and_creates_pk
tests_sync/test_json_model.py::test_updates_a_model
[gw5] [ 76%] PASSED tests_sync/test_json_model.py::test_delete_many_implicit_pipeline
[gw3] [ 76%] PASSED tests_sync/test_json_model.py::test_saves_many_explicit_transaction
tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_in_query
[gw7] [ 77%] PASSED tests_sync/test_json_model.py::test_paginate_query
[gw1] [ 78%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_cached
tests_sync/test_json_model.py::test_recursive_query_field_resolution
tests_sync/test_json_model.py::test_full_text_search
[gw4] [ 78%] PASSED tests_sync/test_json_model.py::test_update_query
tests_sync/test_json_model.py::test_tag_queries_boolean_logic
[gw5] [ 79%] PASSED tests_sync/test_json_model.py::test_recursive_query_expression_resolution
tests_sync/test_json_model.py::test_numeric_queries
[gw2] [ 80%] PASSED tests_sync/test_json_model.py::test_access_result_by_index_not_cached
[gw3] [ 80%] PASSED tests_sync/test_json_model.py::test_in_query
tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_sorting
[gw0] [ 81%] PASSED tests_sync/test_json_model.py::test_updates_a_model
tests_sync/test_json_model.py::test_tag_queries_negation
[gw7] [ 82%] PASSED tests_sync/test_json_model.py::test_recursive_query_field_resolution
[gw4] [ 82%] PASSED tests_sync/test_json_model.py::test_tag_queries_boolean_logic
tests_sync/test_json_model.py::test_not_found
tests_sync/test_json_model.py::test_allows_dataclasses
[gw1] [ 83%] PASSED tests_sync/test_json_model.py::test_full_text_search
tests_sync/test_json_model.py::test_list_field_limitations
[gw6] [ 84%] PASSED tests_sync/test_json_model.py::test_all_pks
tests_sync/test_json_model.py::test_exact_match_queries
[gw3] [ 84%] PASSED tests_sync/test_json_model.py::test_sorting
tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 85%] PASSED tests_sync/test_json_model.py::test_numeric_queries
tests_sync/test_json_model.py::test_allows_and_serializes_dicts
[gw7] [ 86%] PASSED tests_sync/test_json_model.py::test_not_found
tests_sync/test_json_model.py::test_count
[gw2] [ 86%] PASSED tests_sync/test_json_model.py::test_tag_queries_punctuation
tests_sync/test_json_model.py::test_allows_and_serializes_sets
[gw4] [ 87%] PASSED tests_sync/test_json_model.py::test_allows_dataclasses
[gw0] [ 88%] PASSED tests_sync/test_json_model.py::test_tag_queries_negation
tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_json_model.py::test_schema
[gw3] [ 88%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_lists
[gw5] [ 89%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_dicts
tests_sync/test_oss_redis_features.py::test_validates_field
tests_sync/test_oss_redis_features.py::test_validation_passes
[gw7] [ 90%] PASSED tests_sync/test_json_model.py::test_count
[gw0] [ 90%] PASSED tests_sync/test_json_model.py::test_schema
tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw1] [ 91%] PASSED tests_sync/test_json_model.py::test_list_field_limitations
[gw6] [ 92%] PASSED tests_sync/test_json_model.py::test_exact_match_queries
tests_sync/test_oss_redis_features.py::test_not_found
tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw2] [ 92%] PASSED tests_sync/test_json_model.py::test_allows_and_serializes_sets
tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw5] [ 93%] PASSED tests_sync/test_oss_redis_features.py::test_validation_passes
tests_sync/test_redis_type.py::test_redis_type
[gw5] [ 94%] PASSED tests_sync/test_redis_type.py::test_redis_type
[gw3] [ 94%] PASSED tests_sync/test_oss_redis_features.py::test_validates_field
[gw7] [ 95%] PASSED tests_sync/test_oss_redis_features.py::test_saves_model_and_creates_pk
tests_sync/test_pydantic_integrations.py::test_email_str
[gw6] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_validates_required_fields
[gw0] [ 96%] PASSED tests_sync/test_oss_redis_features.py::test_updates_a_model
[gw1] [ 97%] PASSED tests_sync/test_oss_redis_features.py::test_not_found
[gw2] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_raises_error_with_embedded_models
[gw4] [ 98%] PASSED tests_sync/test_oss_redis_features.py::test_all_keys
tests_sync/test_oss_redis_features.py::test_saves_many
[gw3] [ 99%] PASSED tests_sync/test_pydantic_integrations.py::test_email_str
[gw4] [100%] PASSED tests_sync/test_oss_redis_features.py::test_saves_many
---------- coverage: platform darwin, python 3.10.8-final-0 ----------
Name Stmts Miss Cover Missing
----------------------------------------------------------------------
aredis_om/__init__.py 5 0 100%
aredis_om/async_redis.py 1 0 100%
aredis_om/checks.py 21 12 43% 9-10, 15-18, 23-28
aredis_om/connections.py 10 1 90% 20
aredis_om/model/__init__.py 2 0 100%
aredis_om/model/cli/__init__.py 0 0 100%
aredis_om/model/cli/migrate.py 13 13 0% 1-18
aredis_om/model/encoders.py 72 35 51% 68, 70, 73-86, 94, 96, 98, 132-147, 150-155, 159-173
aredis_om/model/migrations/__init__.py 0 0 100%
aredis_om/model/migrations/migrator.py 87 15 83% 24-35, 45, 56, 83-84, 89-90, 101, 112-114
aredis_om/model/model.py 888 115 87% 100, 111, 128, 136, 145-152, 166, 185, 193, 199, 203, 207, 211-214, 218, 241, 245, 297, 305, 352, 394, 401, 419, 446, 474, 499, 502-508, 527, 529, 533, 561-571, 592-595, 606, 653, 667-672, 685, 699, 701, 703, 705, 768, 787, 823-828, 844-854, 904, 927-928, 1072, 1135, 1157, 1161, 1166, 1190, 1221-1224, 1232, 1308, 1314, 1374-1382, 1396, 1436-1445, 1449, 1464-1472, 1483-1493, 1506, 1606-1607, 1634-1637, 1721, 1725-1729
aredis_om/model/query_resolver.py 23 23 0% 1-103
aredis_om/model/render_tree.py 33 31 6% 24-75
aredis_om/model/token_escaper.py 13 1 92% 16
aredis_om/sync_redis.py 1 1 0% 1
aredis_om/util.py 6 1 83% 7
----------------------------------------------------------------------
TOTAL 1175 248 79%
=============================================================================== 152 passed in 1.45s ================================================================================
docker-compose down
[+] Running 3/3
⠿ Container redis-om-python-oss_redis-1 Removed 0.2s
⠿ Container redis-om-python-redis-1 Removed 0.1s
⠿ Network redis-om-python_default Removed 0.0s
~/Repositories/redis-om-python fix-model-typings*
redis-om-DEJACET3-py3.10 ❯
```
</details> | open | 2023-02-11T22:14:41Z | 2023-04-30T07:26:53Z | https://github.com/redis/redis-om-python/issues/475 | [
"maintenance"
] | marianhlavac | 0 |
clovaai/donut | computer-vision | 327 | Loading donut transformers model getting error |
```
self.model = self.model.to(self.device)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1145, in to
    return self._apply(convert)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 1 more time]
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 820, in _apply
    param_applied = fn(param)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA error: device-side assert triggered
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
code:-

```python
class Donut:
    def __init__(self):
        try:
            self.processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
            self.model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-docvqa")
            self.device = "cuda" if torch.cuda.is_available() else "cpu"
            if torch.cuda.is_available():
                try:
                    self.device = torch.device("cuda")
                    self.model = self.model.to(self.device)
                    torch.cuda.empty_cache()
                except RuntimeError as e:
                    console_logger.warning(f"{str(e)}")
        except Exception as e:
            console_logger.error(f"Failed to initialize Donut: {str(e)}")
            raise
```
Versions:-
torch==2.0.1+cu118
torchaudio==2.0.2+cu118
torchvision==0.15.2+cu118
transformers==4.24.0
Nvidia Driver :- Driver Version: 550.144.03 CUDA Version: 12.4
Docker image :- nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
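Two standard diagnostics usually surface the real error hiding behind "device-side assert triggered". This is only a sketch: `choose_device` is a made-up helper, while `CUDA_LAUNCH_BLOCKING` is a real PyTorch/CUDA environment variable.

```python
import os

# Must be set before torch initializes CUDA: kernel launches become
# synchronous, so the traceback points at the kernel that actually
# failed instead of a later unrelated call such as model.to(device).
os.environ.setdefault("CUDA_LAUNCH_BLOCKING", "1")

def choose_device(prefer_cuda: bool = True) -> str:
    # Hypothetical helper: retrying once on CPU is itself a useful
    # diagnostic, because the same bug (often an out-of-range index
    # into an embedding table) raises a readable IndexError on CPU
    # instead of an opaque device-side assert.
    try:
        import torch
        if prefer_cuda and torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

print(choose_device(prefer_cuda=False))  # prints "cpu"
```

Running the model once with `choose_device(prefer_cuda=False)` before going back to the GPU narrows down whether the failure is data-dependent or CUDA-specific.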
| open | 2025-02-28T10:17:52Z | 2025-02-28T10:17:52Z | https://github.com/clovaai/donut/issues/327 | [] | ankitagotarne | 0 |
nalepae/pandarallel | pandas | 54 | AttributeError: 'ProgressBarsConsole' object has no attribute 'set_error' | Hi.
The progress bar may fail with an `AttributeError` instead of being displayed.
```
$ pip list
tqdm 4.36.1
pandas 0.25.3
pandarallel 1.4.1
```
```
0.00% | 0 / 6164 |
0.00% | 0 / 6164 |
0.00% | 0 / 6163 |
0.00% | 0 / 6163 |
File "/home/ubuntu/test.py", line 60, in _run
df['result'] = df.parallel_apply(_func, axis=1)
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/pandarallel/pandarallel.py", line 384, in closure
map_result,
File "/home/ubuntu/.pyenv/versions/3.6.8/lib/python3.6/site-packages/pandarallel/pandarallel.py", line 327, in get_workers_result
progress_bars.set_error(worker_index)
AttributeError: 'ProgressBarsConsole' object has no attribute 'set_error'
```
This is because `set_error` seems to be defined only on `ProgressBarsNotebookLab`.
https://github.com/nalepae/pandarallel/blob/master/pandarallel/utils/progress_bars.py#L91
But it seems to be called on either a `ProgressBarsNotebookLab` or a `ProgressBarsConsole` instance.
https://github.com/nalepae/pandarallel/blob/master/pandarallel/pandarallel.py#L322
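The two links above can be condensed into a minimal, self-contained sketch (stand-in classes, not pandarallel's actual implementations) of both the bug and the obvious fix — give the console bar a no-op `set_error` so the interface is uniform:

```python
class ProgressBarsConsole:
    """Stand-in for the console progress bar, which lacked set_error."""

    def set_error(self, worker_index):
        # No-op: a plain console bar has no special error styling, but
        # defining the method keeps the interface uniform so callers
        # never hit AttributeError.
        pass


class ProgressBarsNotebookLab(ProgressBarsConsole):
    """Stand-in for the notebook bar, which can restyle a failed bar."""

    def set_error(self, worker_index):
        self.last_error = worker_index  # e.g. turn that worker's bar red


def get_workers_result(progress_bars, failed_worker_index):
    # Mirrors the call at pandarallel.py line 327, which runs for
    # either bar type when a worker raises.
    progress_bars.set_error(failed_worker_index)
    return "handled"


print(get_workers_result(ProgressBarsConsole(), 2))      # handled
print(get_workers_result(ProgressBarsNotebookLab(), 2))  # handled
```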
It seems this was introduced in this refactoring: https://github.com/nalepae/pandarallel/commit/f297b7547766edba9e9fbdcdac62b88d9b33f4fa | closed | 2019-11-15T08:31:26Z | 2019-11-23T18:33:01Z | https://github.com/nalepae/pandarallel/issues/54 | [] | vaaaaanquish | 0 |
opengeos/leafmap | streamlit | 1,033 | leafmap.add_raster only recognizes some colormap names? | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
- leafmap version: 0.42.6
- Python version: 3.12.3
- Operating System: Ubuntu 24.04.1
### Description
leafmap seems to recognize only a (very) limited range of matplotlib colormap names.
### What I Did
```
import leafmap
m = leafmap.Map()
m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'rainbow')
m
```
This works fine (freq.tif is a single band GTiff float32 file with values between 0 and 100).
```
m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'Blues')
m
```
throws:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[6], line 5
1 import leafmap
3 m = leafmap.Map()
----> 5 m.add_raster('output/freq.tif', vmin=0, vmax=100, colormap = 'blues')
7 m
File /opt/conda/lib/python3.11/site-packages/leafmap/leafmap.py:2384, in Map.add_raster(self, source, indexes, colormap, vmin, vmax, nodata, attribution, layer_name, layer_index, zoom_to_layer, visible, opacity, array_args, client_args, **kwargs)
2381 if isinstance(source, np.ndarray) or isinstance(source, xr.DataArray):
2382 source = common.array_to_image(source, **array_args)
-> 2384 tile_layer, tile_client = common.get_local_tile_layer(
2385 source,
2386 indexes=indexes,
2387 colormap=colormap,
2388 vmin=vmin,
2389 vmax=vmax,
2390 nodata=nodata,
2391 opacity=opacity,
2392 attribution=attribution,
2393 layer_name=layer_name,
2394 client_args=client_args,
2395 return_client=True,
2396 **kwargs,
2397 )
2398 tile_layer.visible = visible
2400 self.add(tile_layer, index=layer_index)
File /opt/conda/lib/python3.11/site-packages/leafmap/common.py:3002, in get_local_tile_layer(source, port, debug, indexes, colormap, vmin, vmax, nodata, attribution, tile_format, layer_name, client_args, return_client, quiet, **kwargs)
3000 else:
3001 if tile_format == "ipyleaflet":
-> 3002 tile_layer = get_leaflet_tile_layer(
3003 tile_client,
3004 port=port,
3005 debug=debug,
3006 indexes=indexes,
3007 colormap=colormap,
3008 vmin=vmin,
3009 vmax=vmax,
3010 nodata=nodata,
3011 attribution=attribution,
3012 name=layer_name,
3013 **kwargs,
3014 )
3015 else:
3016 tile_layer = get_folium_tile_layer(
3017 tile_client,
3018 port=port,
(...)
3028 **kwargs,
3029 )
File [/opt/conda/lib/python3.11/site-packages/localtileserver/widgets.py:105](http://localhost:8888/opt/conda/lib/python3.11/site-packages/localtileserver/widgets.py#line=104), in get_leaflet_tile_layer(source, port, debug, indexes, colormap, vmin, vmax, nodata, attribution, **kwargs)
98 bounds = Union((Tuple(),), default_value=None, allow_none=True).tag(sync=True, o=True)
100 source, created = get_or_create_tile_client(
101 source,
102 port=port,
103 debug=debug,
104 )
--> 105 url = source.get_tile_url(
106 indexes=indexes,
107 colormap=colormap,
108 vmin=vmin,
109 vmax=vmax,
110 nodata=nodata,
111 client=True,
112 )
113 if attribution is None:
114 attribution = DEFAULT_ATTRIBUTION
File [/opt/conda/lib/python3.11/site-packages/localtileserver/client.py:461](http://localhost:8888/opt/conda/lib/python3.11/site-packages/localtileserver/client.py#line=460), in TileServerMixin.get_tile_url(self, indexes, colormap, vmin, vmax, nodata, client)
458 colormap = json.dumps(colormap)
459 else:
460 # make sure palette is valid
--> 461 palette_valid_or_raise(colormap)
463 params["colormap"] = colormap
464 if vmin is not None:
File [/opt/conda/lib/python3.11/site-packages/localtileserver/tiler/palettes.py:31](http://localhost:8888/opt/conda/lib/python3.11/site-packages/localtileserver/tiler/palettes.py#line=30), in palette_valid_or_raise(name)
29 def palette_valid_or_raise(name: str):
30 if not is_mpl_cmap(name):
---> 31 raise ValueError(f"Please use a valid matplotlib colormap name. Invalid: {name}")
ValueError: Please use a valid matplotlib colormap name. Invalid: blues
```
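Incidentally, the traceback above shows the failing call actually received `'blues'` in lower case, and matplotlib colormap names are case-sensitive. A quick membership check against matplotlib's colormap registry (assuming matplotlib ≥ 3.5, where `matplotlib.colormaps` exists) illustrates the difference:

```python
import matplotlib

# matplotlib.colormaps is the registry of colormap names; membership tests
# show the capitalized name is registered while the lower-case one is not.
print("Blues" in matplotlib.colormaps)   # True
print("blues" in matplotlib.colormaps)   # False
```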
'blues' (or 'Blues') **is** a valid matplotlib colormap name. Most other names do not work either. | closed | 2025-01-27T15:16:36Z | 2025-01-27T16:01:45Z | https://github.com/opengeos/leafmap/issues/1033 | [
"bug"
] | glemoine62 | 2 |
Lightning-AI/pytorch-lightning | data-science | 20,088 | Sometimes I get Dataset Errors when using the lightning module in a distributed manner | ### Bug description
I use a Lightning DataModule. In this module I initialize (according to your tutorials) a torch dataset:
```
class CustomImageDataset(Dataset):
# Torch dataset to handle basic file operations
```
```
class DataModule(L.LightningDataModule):
# Lightning DataModule to handle dataloaders and train/test split
dset = CustomImageDataset()
```
In most cases it works perfectly fine, but sometimes I get an error when initializing my training, which forces me to restart it until the error no longer appears. This only happens in distributed training.
It happens when I read in my dataset in the CustomImageDataset() by using a csv reader. The error is:
```
train.py 74 <module>
mydata.setup(stage="fit")
dataset.py 206 setup
self.train_set = self.create_dataset("train")
dataset.py 190 create_dataset
dset = CustomImageDataset(self.data_dir,
dataset.py 50 __init__
self.data_paths, self.targets = self._load_data()
dataset.py 59 _load_data
paths, targets = get_paths(self.data_dir, "train", self.seed)
dataset.py 22 get_paths
r = list(reader)
_csv.Error:
line contains NUL
```
Since the list conversion seems to trigger the bug, I am a bit lost on how to solve it, but maybe you guys have already stumbled upon it.
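A common workaround for `_csv.Error: line contains NUL` is to strip NUL bytes from each line before handing the stream to `csv.reader`. This is only a sketch of that idea; the in-memory payload is a made-up stand-in for the real file:

```python
import csv
import io

# Hypothetical payload: a CSV line containing a stray NUL byte, which makes
# csv.reader raise "_csv.Error: line contains NUL" when read directly.
raw = "path1,0\npath2,\x001\n"

# Strip NUL bytes from each line before the reader sees them.
cleaned = (line.replace("\x00", "") for line in io.StringIO(raw))
rows = list(csv.reader(cleaned))
print(rows)  # [['path1', '0'], ['path2', '1']]
```

Whether this is safe depends on why the NUL bytes appear; if several ranks read or write the file concurrently, guarding the read with a barrier (or reading only on rank 0) may be the actual fix.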
### What version are you seeing the problem on?
v2.2
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 1.5.0):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_ | open | 2024-07-15T10:37:37Z | 2024-07-15T10:37:37Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20088 | [
"bug",
"needs triage"
] | asusdisciple | 0 |
piskvorky/gensim | nlp | 3,411 | Python11 can not install gensim, if it is possible, I wish Python11 can have the right version for gensim too | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
What are you trying to achieve? What is the expected result? What are you seeing instead?
#### Steps/code/corpus to reproduce
Include full tracebacks, logs and datasets if necessary. Please keep the examples minimal ("minimal reproducible example").
If your problem is with a specific Gensim model (word2vec, lsimodel, doc2vec, fasttext, ldamodel etc), include the following:
```python
print(my_model.lifecycle_events)
```
#### Versions
Please provide the output of:
```python
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import struct; print("Bits", 8 * struct.calcsize("P"))
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec;print("FAST_VERSION", word2vec.FAST_VERSION)
```
| closed | 2022-12-09T08:38:34Z | 2022-12-09T09:49:36Z | https://github.com/piskvorky/gensim/issues/3411 | [] | Victorrrrr86 | 2 |
dpgaspar/Flask-AppBuilder | rest-api | 1,417 | Cannot use custom Icon for oauth login | As described in the doc (https://flask-appbuilder.readthedocs.io/en/latest/security.html#authentication-oauth) we currently support font-awesome as provider icons for oauth login form. Many companies use custom oauth providers instead of the public ones. It will be great if one can use an external resource as the login icon. | closed | 2020-06-30T00:15:15Z | 2020-10-10T03:49:43Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1417 | [
"stale"
] | widewing | 1 |
OpenInterpreter/open-interpreter | python | 945 | copy code button | ### Is your feature request related to a problem? Please describe.
no
### Describe the solution you'd like
add a copy icon in the top right corner of generated scripts so they are easy to copy
### Describe alternatives you've considered
_No response_
### Additional context
_No response_ | open | 2024-01-18T20:36:32Z | 2024-03-20T01:17:35Z | https://github.com/OpenInterpreter/open-interpreter/issues/945 | [
"Enhancement"
] | clickclack777 | 4 |
sczhou/CodeFormer | pytorch | 293 | ImportError: cannot import name 'get_device' from 'basicsr.utils.misc' | ImportError: cannot import name 'get_device' from 'basicsr.utils.misc'

| open | 2023-08-21T03:37:54Z | 2023-08-21T03:49:18Z | https://github.com/sczhou/CodeFormer/issues/293 | [] | djwashout | 1 |
K3D-tools/K3D-jupyter | jupyter | 448 | Expose material property 'shininess' for objects | I have a use-case where I would need some meshes to appear shiny whereas others should not. Setting plot.lighting = 0 achieves the latter effect, but then all objects in the scene are affected. I believe this could be controlled individually for each object if the material shininess property, e.g. for a Surface object, https://github.com/K3D-tools/K3D-jupyter/blob/46bec8581e213351aa5df621c1825b95c732486b/js/src/providers/threejs/objects/Surface.js#L36
would be exposed and not hard-coded.
I believe exposing this would also to some degree address the comment raised previously in #51. | open | 2024-01-29T20:01:34Z | 2024-07-02T13:11:49Z | https://github.com/K3D-tools/K3D-jupyter/issues/448 | [
"Next release"
] | jpomoell | 0 |
onnx/onnx | deep-learning | 6,284 | ImportError: DLL load failed while importing onnx_cpp2py_export: 动态链接库(DLL)初始化例程失败。 | # Bug Report
### Is the issue related to model conversion?
1.16.2

<img width="952" alt="onnx_bug" src="https://github.com/user-attachments/assets/4f0d6581-a62e-4fbb-931b-65eb844a7aae">
| closed | 2024-08-07T08:05:20Z | 2024-08-07T14:09:33Z | https://github.com/onnx/onnx/issues/6284 | [
"bug"
] | LHSSHL001 | 2 |
ultralytics/yolov5 | pytorch | 12,826 | How to plot confusion matrix in yolov5-cls | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello! I am trying to find out how to plot a confusion matrix for the yolov5 classification task. I want to compare confusion matrices, together with other metrics, for models trained on a custom dataset with both yolov5 and yolov8. Any ideas, suggestions, or help would be appreciated!
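Since the classification val script doesn't appear to emit a confusion matrix the way the detection task does, one option is to build it yourself from per-image labels and predictions. A stdlib-only sketch (the label/prediction lists are made-up stand-ins for whatever your validation loop produces):

```python
from collections import Counter

# Made-up stand-ins: true and predicted class indices from a validation run.
y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0, 2]

n_classes = 3
counts = Counter(zip(y_true, y_pred))

# Row = true class, column = predicted class; Counter returns 0 for absent pairs.
matrix = [[counts[(t, p)] for p in range(n_classes)] for t in range(n_classes)]
for row in matrix:
    print(row)  # [1, 1, 0] then [0, 2, 0] then [1, 0, 2]
```

From there the matrix can be rendered with matplotlib's `imshow`, or with scikit-learn's `confusion_matrix`/`ConfusionMatrixDisplay` if those dependencies are acceptable.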
### Additional
_No response_ | closed | 2024-03-18T22:29:51Z | 2024-10-20T19:41:43Z | https://github.com/ultralytics/yolov5/issues/12826 | [
"question"
] | vlad1mirJ | 6 |
TracecatHQ/tracecat | automation | 147 | not able to login after fresh installation of tracecat | After installing tracecat getting the below error
Unhandled Runtime Error
Error: Error creating new user

Below are the log files from Docker.
○ Compiling /_error ...
✓ Compiled /_error in 1661ms
Start new user flow
Auth is disabled, using test token.
⨯ Error: Error creating new user
at newUserFlow (/app/.next/server/chunks/[root of the server]__62c00d._.js:736:23)
Start new user flow
Auth is disabled, using test token.
⨯ Error: Error creating new user
at newUserFlow (/app/.next/server/chunks/[root of the server]__62c00d._.js:736:23)
Call Stack
newUserFlow
/app/.next/server/chunks/[root of the server]__62c00d._.js (736:23)
process.processTicksAndRejections
node:internal/process/task_queues (95:5)
async
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (39:406)
async t2
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (38:6412)
async rS
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/compiled/next-server/app-page.runtime.dev.js (41:1369)
async doRender
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (1395:30)
async cacheEntry.responseCache.get.routeKind
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (1544:40)
async DevServer.renderToResponseWithComponentsImpl
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (1464:28)
async DevServer.renderPageComponent
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (1861:24)
async DevServer.renderToResponseImpl
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (1899:32)
async DevServer.pipeImpl
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (912:25)
async NextNodeServer.handleCatchallRenderRequest
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/next-server.js (269:17)
async DevServer.handleRequestImpl
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/base-server.js (808:17)
async
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/dev/next-dev-server.js (331:20)
async Span.traceAsyncFn
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/trace/trace.js (151:20)
async DevServer.handleRequest
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/dev/next-dev-server.js (328:24)
async invokeRender
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/lib/router-server.js (136:21)
async handleRequest
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/lib/router-server.js (315:24)
async requestHandlerImpl
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/lib/router-server.js (339:13)
async Server.requestListener
/app/node_modules/.pnpm/next@14.1.1_@babel+core@7.24.5_babel-plugin-macros@3.1.0_react-dom@18.3.1_react@18.3.1__react@18.3.1/node_modules/next/dist/server/lib/start-server.js (140:13) | closed | 2024-05-29T09:41:53Z | 2024-05-31T18:41:40Z | https://github.com/TracecatHQ/tracecat/issues/147 | [
"duplicate"
] | kishanecosmob | 5 |
allenai/allennlp | nlp | 5,495 | Updating model for Coreference Resolution | I noticed a new SoTA on Ontonotes 5.0 Coreference task on [paperswithcode](https://paperswithcode.com/paper/word-level-coreference-resolution#code)
The author provides the model (.pt) file in [their git repo](https://github.com/vdobrovolskii/wl-coref#preparation) and claims it to be faster (since it uses RoBERTa) while having an improvement on avg F1 score over SpanBERT.
What would be the steps to use this checkpoint in the AllenNLP Predictor? | closed | 2021-12-06T05:26:48Z | 2022-01-06T15:57:46Z | https://github.com/allenai/allennlp/issues/5495 | [
"question"
] | aakashb95 | 2 |
developmentseed/lonboard | data-visualization | 663 | Update docs to include more community projects | open | 2024-10-02T22:41:17Z | 2024-10-03T18:47:00Z | https://github.com/developmentseed/lonboard/issues/663 | [] | kylebarron | 0 |
|
deepfakes/faceswap | deep-learning | 1,237 | Apple M1: python -m pip install tensorflow-macos installs 2.9.2 | Be advised when following the M1 setup instructions, the tensorflow installation instructions at https://developer.apple.com/metal/tensorflow-plugin/ will now install 2.9.2, which will throw an error when you run `python faceswap.py gui`.
Until faceswap supports TensorFlow 2.9, the following change worked for me:
use:
```
conda install -c apple tensorflow-deps==2.8.0
python -m pip install tensorflow-macos==2.8.0
```
instead of:
```
conda install -c apple tensorflow-deps
python -m pip install tensorflow-macos
``` | closed | 2022-06-15T07:33:39Z | 2022-06-18T10:18:15Z | https://github.com/deepfakes/faceswap/issues/1237 | [] | joeybarrus | 1 |
litl/backoff | asyncio | 107 | Add install section to README | Because it's a standard good practice to tell people how to install something. ;) | open | 2020-10-15T10:11:12Z | 2023-02-23T14:52:30Z | https://github.com/litl/backoff/issues/107 | [] | deeplook | 4 |
bmoscon/cryptofeed | asyncio | 138 | More bitstamp l2 data? | It looks like a different websocket address has more L2 book data than the one currently in the code. At `diff_order_book_v2.html`. It's the 'live full order book' instead of just the 'live order book'. Should we change it or set an option to use the full order book? https://www.bitstamp.net/websocket/v2/ | closed | 2019-08-09T04:49:42Z | 2019-09-01T00:23:35Z | https://github.com/bmoscon/cryptofeed/issues/138 | [] | nateGeorge | 1 |
allure-framework/allure-python | pytest | 648 | Fixture into test Class are not displayed in Allure report when used parametrize fixture | [//]: # (
. Note: for support questions, please use Stackoverflow or Gitter**.
. This repository's issues are reserved for feature requests and bug reports.
.
. In case of any problems with Allure Jenkins plugin** please use the following repository
. to create an issue: https://github.com/jenkinsci/allure-plugin/issues
.
. Make sure you have a clear name for your issue. The name should start with a capital
. letter and no dot is required in the end of the sentence. An example of good issue names:
.
. - The report is broken in IE11
. - Add an ability to disable default plugins
. - Support emoji in test descriptions
)
#### I'm submitting a ...
- [ ] bug report
#### What is the current behavior?
Pytest fixtures written inside a test class are not displayed in the Allure report if there is a parametrized pytest fixture before them.
#### If the current behavior is a bug, please provide the steps to reproduce and if possible a minimal demo of the problem
I have parametrize pytest fixture
```
@pytest.fixture(scope="session", params=auth_params)
def public_api_client(request)
pass
```
and fixture into test class
```
class TestPublic:
@pytest.fixture(scope="class", autouse=True)
def delete_ws(self, request):
pass
```
After running the tests I see that **delete_ws** is not displayed in the Allure report.

When I delete **params** in public_api_client, I find the **delete_ws** fixture in the set up / tear down section.

#### What is the expected behavior?
All pytest fixtures are displayed in the Allure report in the set up / tear down sections.
#### Please tell us about your environment:
- Allure version: 2.16.1
- Test framework: pytest 6.2.5
- Allure adaptor: allure-pytest@2.9.45
#### Other information
[//]: # (
. e.g. detailed explanation, stacktraces, related issues, suggestions
. how to fix, links for us to have more context, eg. Stackoverflow, Gitter etc
)
| closed | 2022-02-03T20:16:45Z | 2023-02-06T05:47:29Z | https://github.com/allure-framework/allure-python/issues/648 | [
"theme:pytest"
] | Morena51 | 2 |
explosion/spaCy | machine-learning | 13,380 | The word transitions to the wrong prototype | 
| closed | 2024-03-15T15:17:19Z | 2024-03-19T09:28:33Z | https://github.com/explosion/spaCy/issues/13380 | [
"feat / lemmatizer",
"perf / accuracy"
] | github123666 | 1 |
charlesq34/pointnet | tensorflow | 137 | What does the visualization look like after TNet matmul input? | Thanks for the excellent work! The authors say, "We predict an affine transformation matrix by a mini-network and directly apply this transformation to the coordinates of input points."
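The transformed input is still an N×3 point cloud — applying the predicted 3×3 matrix is just a matrix multiply per point — so it can be visualized exactly like the original, only rotated/sheared/scaled. A minimal stdlib illustration (the matrix here is a made-up 90° rotation about z, not an actual learned TNet output, and the multiplication convention is for illustration only):

```python
import math

# Made-up "learned" transform: 90-degree rotation about the z axis.
theta = math.pi / 2
T = [
    [math.cos(theta), -math.sin(theta), 0.0],
    [math.sin(theta),  math.cos(theta), 0.0],
    [0.0,              0.0,             1.0],
]

points = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.5)]

def apply(T, p):
    # One point times the 3x3 matrix: the output stays a 3D coordinate,
    # so the transformed cloud remains viewable in any 3D plotter.
    return tuple(sum(T[i][j] * p[j] for j in range(3)) for i in range(3))

transformed = [apply(T, p) for p in points]
print(transformed)
```

Whether a human still recognizes the object depends on how far the learned transform strays from a near-rigid alignment.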
I'm curious what the point cloud looks like after the transformation. Is it a three-dimensional representation that a person can still understand? | closed | 2018-09-28T01:53:32Z | 2018-11-16T05:30:28Z | https://github.com/charlesq34/pointnet/issues/137 | [] | kxhit | 6 |
adbar/trafilatura | web-scraping | 634 | some extraction duplicated in xml | hi,
I was setting a test site and playing with trafilatura and found a weird bug.
site URL:
`https://milkfriends.s1-tastewp.com/2024/06/27/ok-this/`
As this test site is only available for 2 days, I have also attached the simple Gutenberg block code below for you to replicate it.
Command:
```
html = trafilatura.fetch_url(url, no_ssl=True,)
ts = trafilatura.extract(html, output_format='xml', include_comments=False)
```
The WordPress Gutenberg HTML is below:
```
<!-- wp:paragraph -->
<p>this is sample intro</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">intro 2</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>table below</p>
<!-- /wp:paragraph -->
<!-- wp:table -->
<figure class="wp-block-table"><table><tbody><tr><td>a</td><td>b</td><td></td></tr><tr><td>f</td><td>s</td><td>s</td></tr><tr><td>g</td><td></td><td>b</td></tr></tbody></table></figure>
<!-- /wp:table -->
<!-- wp:paragraph -->
<p>header table below</p>
<!-- /wp:paragraph -->
<!-- wp:table -->
<figure class="wp-block-table"><table><thead><tr><th>b</th><th>s</th><th>h</th></tr></thead><tbody><tr><td>a</td><td>b</td><td></td></tr><tr><td>f</td><td>s</td><td>s</td></tr><tr><td>g</td><td></td><td>b</td></tr></tbody></table></figure>
<!-- /wp:table -->
<!-- wp:paragraph -->
<p>list below</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>this is 1</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 2</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 3</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>numbered list below</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>this is 1</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 2</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>this is 3</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
```
It is a very simple extraction, but I find that some elements are extracted twice.
The elements below "this is sample intro" appear twice, but not all of the elements do; some of the list elements only show up once.
See the extraction below:
```
<doc sitename="milkfriends.s1-tastewp.com" title="ok this" author="Admin" date="2024-06-27" url="https://milkfriends.s1-tastewp.com/2024/06/27/ok-this/" hostname="s1-tastewp.com" fingerprint="f69d7033beefe32d">
<main>
<p>this is sample intro</p>
<head rend="h3">intro 2</head>
<p>table below</p>
<table>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>header table below</p>
<table>
<row span="3">
<cell role="head">b</cell>
<cell role="head">s</cell>
<cell role="head">h</cell>
</row>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>list below</p>
<list rend="ul">
<item>this is 1</item>
<item>this is 2</item>
<item>this is 3</item>
</list>
<p>numbered list below</p>
<list rend="ol">
<item>this is 1</item>
<item>this is 2</item>
<item>this is 3</item>
</list>
<p>this is sample intro</p>
<p>table below</p>
<table>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>header table below</p>
<table>
<row span="3">
<cell role="head">b</cell>
<cell role="head">s</cell>
<cell role="head">h</cell>
</row>
<row span="3">
<cell>a</cell>
<cell>b</cell>
</row>
<row span="3">
<cell>f</cell>
<cell>s</cell>
<cell>s</cell>
</row>
<row>
<cell>g</cell>
<cell>b</cell>
</row>
</table>
<p>list below</p>
<p>numbered list below</p>
</main>
</doc>
```
| open | 2024-06-27T09:14:44Z | 2024-07-25T11:58:46Z | https://github.com/adbar/trafilatura/issues/634 | [
"question"
] | fortyfourforty | 3 |
modoboa/modoboa | django | 3,229 | Easy Install Fails to configure nginx | Bone stock install on a brand new Debian 12 vm.
nginx config doesn't work and just a blank page is displayed instead of an admin login.
I'll try the manual install now... | closed | 2024-04-08T16:00:53Z | 2024-04-08T16:46:18Z | https://github.com/modoboa/modoboa/issues/3229 | [] | dtdionne | 1 |
ijl/orjson | numpy | 409 | why no Windows win32 (32-bit) wheels on PyPI? | https://pypi.org/project/orjson/#files only contains `win_amd64` wheels for 64-bit Python for Windows, but no `win32` wheels for the 32 bit Windows Python, which AFAIK is still the default download from Python.org.
I'd like to depend on orjson in my Python app, which supports both win32 and win_amd64, and was wondering whether the absence of win32 wheels is a technical limitation (orjson doesn't support 32-bit) or whether you simply decided not to build them for other reasons.
Thanks | closed | 2023-07-28T12:09:58Z | 2023-08-06T20:13:02Z | https://github.com/ijl/orjson/issues/409 | [
"Stale"
] | anthrotype | 3 |
igorbenav/fastcrud | pydantic | 68 | FastCRUD class docs don't match signature | **Describe the bug or question**
According to the [documentation](https://igorbenav.github.io/fastcrud/api/fastcrud/), the `FastCRUD` class takes optional create, update, and delete schemas as arguments, but this doesn't make sense according to the calling signature for `FastCRUD.__init__()` and indeed it doesn't seem to work in practice.
(As a secondary but related matter, the documentation references a bunch of example model/schema classes that are never fully defined anywhere, including the code and tests, which made figuring out how to articulate this fairly tricky. If what the docs say is supposed to work, it's something that probably needs some actual tests written for to confirm.)
**To Reproduce**
I had to do a little fudging using the `Item` model/schema classes to get a set that'd work for the `User` model referenced in the docs, but it seems to me like this:
```python
from fastcrud import FastCRUD
from pydantic import BaseModel
from sqlalchemy import Boolean, Column, DateTime, Integer, String
from sqlalchemy.orm import DeclarativeBase
class Base(DeclarativeBase):
pass
class User(Base):
__tablename__ = "user"
id = Column(Integer, primary_key=True)
name = Column(String)
archived = Column(Boolean, default=False)
archived_at = Column(DateTime)
class UserCreateSchema(BaseModel):
name: str
class UserUpdateSchema(BaseModel):
name: str
```
should work for this (Example 6 from the page listed above, in turn taken from the `FastCRUD` class doc string), and yet:
```python
>>> custom_user_crud = FastCRUD(User, UserCreateSchema, UserUpdateSchema, is_deleted_column="archived", deleted_at_column="archived_at")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: FastCRUD.__init__() got multiple values for argument 'is_deleted_column'
```
For that matter, Example 1 _seems_ like it works, but the case of it "working" is actually a typing violation:
```python
>>> user_crud = FastCRUD(User, UserCreateSchema, UserUpdateSchema)
>>> user_crud.is_deleted_column
<class '__main__.UserCreateSchema'>
>>> user_crud.deleted_at_column
<class '__main__.UserUpdateSchema'>
```
| closed | 2024-04-30T17:33:59Z | 2024-05-05T05:38:48Z | https://github.com/igorbenav/fastcrud/issues/68 | [
"bug",
"documentation"
] | slaarti | 9 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 326 | Douyin seems to no longer be parsable | As titled. | closed | 2024-02-23T12:35:45Z | 2024-03-26T03:50:42Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/326 | [
"BUG",
"enhancement"
] | Sunsh4j | 2 |
JaidedAI/EasyOCR | pytorch | 438 | How to recognize negative number? | I used this model to recognize negative numbers like '-264.27' with CPU only.
But I get the list ['264.27'] without the negative sign. It's kind of weird.
What's wrong with my code? Any suggestions?
Thanks a lot!
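Without seeing the call it's hard to diagnose, but one knob that often helps with signs and decimal points is `readtext`'s `allowlist` parameter, which restricts the set of characters the recognizer may output. A hedged sketch — the image path and reader settings are assumptions, and the easyocr lines are commented out so the snippet stays self-contained:

```python
# Restricting recognition to numeric characters means the '-' sign only has
# to compete with digits and punctuation, not with the whole alphabet.
ALLOWLIST = "0123456789.-"
assert set("-264.27") <= set(ALLOWLIST)

# Assumed usage (requires easyocr installed and a real image file):
# import easyocr
# reader = easyocr.Reader(["en"], gpu=False)
# result = reader.readtext("meter.png", allowlist=ALLOWLIST, detail=0)
```

If the minus sign sits far from the digits it may also land in a separate detection box, in which case tuning the box-merging parameters (e.g. `width_ths`) may be the relevant fix.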
| closed | 2021-05-27T16:50:31Z | 2021-05-30T02:32:13Z | https://github.com/JaidedAI/EasyOCR/issues/438 | [] | Janzyz3 | 1 |
agronholm/anyio | asyncio | 107 | New event loop is created across different runs in asyncio | I think this is not intentional - the implication is that setting new event loop policies will possibly lose the current event loop:
https://github.com/agronholm/anyio/blob/14daa2bad967bf6dd0f96d04ecffe8e383985195/anyio/_backends/_asyncio.py#L71-L86
This causes the pytest plugin to have different event loops across fixtures and tests:
```python
import asyncio
import pytest
@pytest.fixture
async def loop():
return asyncio.get_event_loop()
@pytest.mark.anyio
async def test(loop):
assert loop is asyncio.get_event_loop() # fails here
```
In many cases, the fixture should share the same loop as the test, for example:
```python
import asyncpg
import pytest
@pytest.fixture
async def conn():
return await asyncpg.connect("postgresql:///")
@pytest.mark.anyio
async def test(conn):
assert await conn.fetchval("SELECT 123") == 123
```
```
E RuntimeError: Task <Task pending name='Task-5' coro=<run.<locals>.wrapper() running at .../anyio/_backends/_asyncio.py:67> cb=[run_until_complete.<locals>.<lambda>()]> got Future <Future pending cb=[Protocol._on_waiter_completed()]> attached to a different loop
```
One possible solution is to cache all policy instances under different setups, and switch to use the right one. I'll create a PR with a more complete test. | closed | 2020-05-21T23:21:47Z | 2020-08-04T20:36:11Z | https://github.com/agronholm/anyio/issues/107 | [
"design"
] | fantix | 13 |