repo_name (string, 9-75 chars) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, 1-976 chars) | body (string, 0-254k chars) | state (string, 2 classes) | created_at (string, 20 chars) | updated_at (string, 20 chars) | url (string, 38-105 chars) | labels (sequence, 0-9 items) | user_login (string, 1-39 chars) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---
pytest-dev/pytest-xdist | pytest | 663 | Exit code consolidation | We don't know in advance how many workers we need for the tests (it depends on information we calculate in `pytest_generate_tests`), so if we have too many workers, we skip the redundant ones (as suggested here - https://stackoverflow.com/questions/33400071/skip-parametrized-tests-generated-by-pytest-generate-tests-at-module-level) and it works
The problem now is the exit code. For some reason, if I have some workers that return exit status 0 (OK) and some workers that return exit status 5 (NO_TESTS_COLLECTED), the overall return code from the pytest run (from the master) is 5, whereas I want it to be 0, because all of the collected tests passed.
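For reference, a minimal `conftest.py` sketch of one commonly suggested workaround, which remaps the NO_TESTS_COLLECTED exit code on workers that were intentionally left empty (an illustration, not a confirmed fix):
```python
import pytest

def pytest_sessionfinish(session, exitstatus):
    # Hedged sketch: treat "no tests collected" (exit code 5) as success on a
    # worker that was deliberately skipped because it had no work to do.
    if exitstatus == pytest.ExitCode.NO_TESTS_COLLECTED:
        session.exitstatus = pytest.ExitCode.OK
```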
Is that a bug or the desired behavior? And if it's the latter, any ideas on how I can accomplish the behavior I want? | open | 2021-05-18T15:26:27Z | 2021-05-19T12:36:59Z | https://github.com/pytest-dev/pytest-xdist/issues/663 | [] | cr-omermazig | 11 |
python-gitlab/python-gitlab | api | 2,890 | Improve download files from artifacts | ## Description of the problem, including code/CLI snippet
If a file is absent from the repository, the download fails with:
`('Connection broken: IncompleteRead(0 bytes read, 2 more expected)', IncompleteRead(0 bytes read, 2 more expected))`
This error carries no useful information, and I can't cleanly use try/except to handle it.
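For context, a hedged sketch of what catching this currently looks like (the token, project path, ref, and job name are placeholders), which is part of why a dedicated python-gitlab exception would be cleaner:
```python
import gitlab
import requests

gl = gitlab.Gitlab("https://gitlab.com", private_token="...")  # placeholder credentials
project = gl.projects.get("group/project")                      # placeholder project path

try:
    data = project.artifacts.download(ref_name="main", job="build")  # placeholder ref/job
except requests.exceptions.ChunkedEncodingError as exc:
    # Today only the underlying requests/urllib3 error is available to catch.
    print(f"Artifact download failed: {exc}")
```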
## Expected Behavior
A dedicated exception such as `gitlab.exceptions.IncompleteRead` should be raised, so it can be caught explicitly (i.e. add this as an exception class).
## Actual Behavior
```
Traceback (most recent call last):
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 737, in _error_catcher
yield
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 883, in _raw_read
raise IncompleteRead(self._fp_bytes_read, self.length_remaining)
urllib3.exceptions.IncompleteRead: IncompleteRead(0 bytes read, 2 more expected)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/venv/lib/python3.11/site-packages/requests/models.py", line 816, in generate
yield from self.raw.stream(chunk_size, decode_content=True)
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 1043, in stream
data = self.read(amt=amt, decode_content=decode_content)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 935, in read
data = self._raw_read(amt)
^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 861, in _raw_read
with self._error_catcher():
File "/usr/lib/python3.11/contextlib.py", line 158, in __exit__
self.gen.throw(typ, value, traceback)
File "/venv/lib/python3.11/site-packages/urllib3/response.py", line 761, in _error_catcher
raise ProtocolError(arg, e) from e
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read, 2 more expected)', IncompleteRead(0 bytes read, 2 more expected))
During handling of the above exception, another exception occurred:
File "/venv/lib/python3.11/site-packages/requests/models.py", line 899, in content
self._content = b"".join(self.iter_content(CONTENT_CHUNK_SIZE)) or b""
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.11/site-packages/requests/models.py", line 818, in generate
raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read, 2 more expected)', IncompleteRead(0 bytes read, 2 more expected))
```
## Specifications
- python-gitlab version: python3.11
- API version you are using (v3/v4): default
- Gitlab server version (or gitlab.com): gitlab.com
| closed | 2024-06-04T14:11:45Z | 2024-07-29T02:22:01Z | https://github.com/python-gitlab/python-gitlab/issues/2890 | ["need info", "stale"] | q000p | 3 |
AUTOMATIC1111/stable-diffusion-webui | pytorch | 16,544 | [Bug]: restarting the PC when using stable diffusion | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
It turned out that when generating images of certain sizes, my PC reboots, yet I can generate large images and everything is fine. The video card is new and I checked it with a benchmark: there are no errors, no overheating, and the power supply copes.
I completely reinstalled stable diffusion, but the problem remains.
The problem arises both in text-to-image and in image-to-image generation.
Python: 3.10.6
Power supply: 700 W
RAM: 16 GB
Video card: RTX 4060 Ti
If you need additional information, I am ready to provide it.
A critical error appears in the Windows log:
Kernel-Power, Event ID 41, Task Category (63)
### Steps to reproduce the problem
1. I generate a picture from text: 640x840, 28 steps, DPM++ SDE.
2. During generation, my computer restarts.
### What should have happened?
I think there is most likely some kind of conflict between my video card drivers and Stable Diffusion.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
[sysinfo.txt](https://github.com/user-attachments/files/17346609/sysinfo.txt)
### Console logs
```Shell
windows log:
- <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
- <System>
<Provider Name="Microsoft-Windows-Kernel-Power" Guid="{331c3b3a-2005-44c2-ac5e-77220c37d6b4}" />
<EventID>41</EventID>
<Version>8</Version>
<Level>1</Level>
<Task>63</Task>
<Opcode>0</Opcode>
<Keywords>0x8000400000000002</Keywords>
<TimeCreated SystemTime="2024-10-11T18:25:57.6042399Z" />
<EventRecordID>2226</EventRecordID>
<Correlation />
<Execution ProcessID="4" ThreadID="8" />
<Channel>System</Channel>
<Computer>DESKTOP-VSCHOK7</Computer>
<Security UserID="S-1-5-18" />
</System>
- <EventData>
<Data Name="BugcheckCode">0</Data>
<Data Name="BugcheckParameter1">0x0</Data>
<Data Name="BugcheckParameter2">0x0</Data>
<Data Name="BugcheckParameter3">0x0</Data>
<Data Name="BugcheckParameter4">0x0</Data>
<Data Name="SleepInProgress">0</Data>
<Data Name="PowerButtonTimestamp">0</Data>
<Data Name="BootAppStatus">0</Data>
<Data Name="Checkpoint">0</Data>
<Data Name="ConnectedStandbyInProgress">false</Data>
<Data Name="SystemSleepTransitionsToOn">0</Data>
<Data Name="CsEntryScenarioInstanceId">0</Data>
<Data Name="BugcheckInfoFromEFI">false</Data>
<Data Name="CheckpointStatus">0</Data>
<Data Name="CsEntryScenarioInstanceIdV2">0</Data>
<Data Name="LongPowerButtonPressDetected">false</Data>
</EventData>
</Event>
```
### Additional information
I installed a new SSD and put the operating system on it. I installed a new graphics card with the latest Studio drivers, and set up a fresh Stable Diffusion install. | open | 2024-10-11T19:18:52Z | 2024-10-12T20:53:52Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16544 | ["bug-report"] | Wertat12 | 1 |
CTFd/CTFd | flask | 2,394 | Docker compose: Mounting volumes |
**Environment**:
- master
- Operating System: Ubuntu 22.04 / Docker
- Web Browser and Version:
Hey all, I'm trying to mount local custom themes into a Docker instance. I was hoping someone could share some examples of how they are doing this?
Something like this does not seem to work:
- .data/CTFd/themes:/opt/CTFd/CTFd/themes | closed | 2023-08-29T20:02:40Z | 2023-08-31T08:19:20Z | https://github.com/CTFd/CTFd/issues/2394 | [] | socialstijn | 0 |
streamlit/streamlit | data-visualization | 9,934 | Provide tooltip icon to show when label is hidden. | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
I have reported this before [here](https://github.com/streamlit/streamlit/issues/9705#issue-2605477818) but was [informed](https://github.com/streamlit/streamlit/issues/9705#issuecomment-2452665722) that the tooltip would show when label_visibility was `hidden`. I have tried this with st.text_input, but the tooltip did not show.
Would love for this to show when `label_visibility` is `hidden`.
### Why?
```python
import streamlit as st

st.text_input(
    label="CV name",
    label_visibility="hidden",
    help="Call it whatever you want. When downloading, it will be called [first name]_[last name]_CV",
    key="name_of_CV_for_internal_template",
)
```
### How?
_No response_
### Additional Context
_No response_ | open | 2024-11-26T23:04:24Z | 2024-12-28T23:30:08Z | https://github.com/streamlit/streamlit/issues/9934 | ["type:enhancement", "area:widgets"] | Socvest | 3 |
tflearn/tflearn | tensorflow | 1,154 | Not working with tensorflow 2.3.1 | from tensorflow.contrib.framework.python.ops import add_arg_scope as contrib_add_arg_scope
ModuleNotFoundError: No module named 'tensorflow.contrib' | open | 2020-10-16T19:18:09Z | 2020-11-13T01:30:03Z | https://github.com/tflearn/tflearn/issues/1154 | [] | AkilaUd96 | 5 |
gee-community/geemap | jupyter | 1,561 | cartoee.add_scale_bar_lite() issue | <!-- Please search existing issues to avoid creating duplicates. -->
### Environment Information
Please run the following code on your computer and share the output with us so that we can better debug your issue:
```python
import ee
import geemap
import cartopy.crs as ccrs
# import the cartoee functionality from geemap
from geemap import cartoee
```
### Description
The scale bar seems to have a problem. I guess it may be a projection problem, but I cannot solve it.

### What I Did
```python
import datetime
import ee
import geemap
Map = geemap.Map()
import numpy as np
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
# import the cartoee functionality from geemap
from geemap import cartoee
year = 2001
startDate = str(year) +'-01-01'
endDate = str(year) +'-12-31'
daymet= ee.ImageCollection('NASA/ORNL/DAYMET_V4').filter(ee.Filter.date(startDate, endDate));
tmax = daymet.select('tmax').first();
fig = plt.figure(figsize=(10, 8))
# region = [-180, 0, 0, 90]
vis = {
"min": -40.0,
"max": 30.0,
"palette": ['1621A2', '#0000FF', '#00FF00', '#FFFF00', '#FF0000'],
};
Map.setCenter(-110.21, 35.1, 4);
# target_crs = region.projection().crs()
# Corr = crossCorr.reproject(crs=target_crs, scale=500)
# use cartoee to get a map
ax = cartoee.get_map(tmax, region=[-180, 0, 0, 90], vis_params=vis,ccrs=ccrs.PlateCarree())
# add gridlines to the map at a specified interval
# cartoee.add_gridlines(ax, interval=[30, 30], linestyle=":")
cartoee.add_gridlines(ax, interval=[30, 30], linestyle="--")
# add a colorbar to the map using the visualization params we passed to the map
cartoee.add_colorbar(ax, vis, loc="bottom", label="Correlation", orientation="horizontal")
# ax.set_title(label='Correlation', fontsize=15)
# add coastlines using the cartopy api
ax.coastlines(color="red")
# add north arrow
cartoee.add_north_arrow(ax, text="N", xy=(0.1, 0.35), arrow_length=0.15,text_color="black", arrow_color="black", fontsize=20)
# add scale bar
cartoee.add_scale_bar_lite(ax, length=100, xy=(0.1, 0.05), linewidth=8, fontsize=20, color="red", unit="km", ha='center', va='bottom')
plt.show()
```
| closed | 2023-06-13T02:30:42Z | 2023-06-13T03:25:07Z | https://github.com/gee-community/geemap/issues/1561 | ["bug"] | caomy7 | 6 |
InstaPy/InstaPy | automation | 5,932 | Commenting with '@{}' inserts my own username instead of the target's username | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
@{} should insert the target's username in comments
## Current Behavior
@{} inserts my own username
## InstaPy configuration
0.6.12 - latest unreleased | closed | 2020-11-30T04:32:50Z | 2020-12-06T21:12:18Z | https://github.com/InstaPy/InstaPy/issues/5932 | [] | sharmanshah | 1 |
wkentaro/labelme | computer-vision | 1,018 | How can I enable side bar that is label list, file list, etc | I accidentally clicked on the "crossed" button which made the label list, file list, etc from the side bar disappear. How can I enable those back? | closed | 2022-05-13T19:52:48Z | 2022-05-13T19:53:53Z | https://github.com/wkentaro/labelme/issues/1018 | [] | jaskiratsingh2000 | 0 |
wkentaro/labelme | computer-vision | 644 | [Feature] Set yes button as default in delete label warning popup | **Is your feature request related to a problem? Please describe.**
I use macOS when I'm labeling and verifying images. When I delete a label, the warning dialog box pops up. Currently, pressing Enter selects "No" in the dialog box. There is no way to change it to "Yes" using just the keyboard. Also, when I select to delete the label, I'm certain I want to delete it. If I deleted the label by mistake, I can bring it back via the Undo command. Using the mouse to select "Yes" every time is very time-consuming.
**Describe the solution you'd like**
Have the "Yes" button in the delete warning box as the default button so that when the dialog box pops up, all we have to do to delete the labels is press Enter. | closed | 2020-04-16T07:40:22Z | 2020-05-26T14:59:06Z | https://github.com/wkentaro/labelme/issues/644 | [] | aksharpatel47 | 0 |
tensorflow/tensor2tensor | deep-learning | 1,532 | How to restore a trained Transformer model to make predictions in Python? | I trained the Transformer model on my own data by defining an own Problem class (called "sequence", which is a text2text problem). I used `model=transformer` and `hparams=transformer_base_single_gpu`. After data generation, training and decoding I successfully exported the model using `t2t-exporter` as I can see a `saved_model.pbtxt` file and a `variables/` directory created in my export directory.
My question is: how can I now restore that trained model to make predictions on new sentences in Python? I'm working in Google Colab. I read that for text problems, the exported model expects the inputs to already be encoded as integers. How to do this?
I tried to work as in [this notebook](https://colab.research.google.com/notebooks/t2t/hello_t2t.ipynb#scrollTo=oILRLCWN_16u) but I am not able to retrieve the Problem I defined earlier. When I run
```
from tensor2tensor import problems
# Fetch the problem
problem = problems.problem("sequence")
```
it throws an error stating that `sequence not in the set of supported problems`.
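For reference, a hedged sketch of how a user-defined problem is usually made visible before calling `problems.problem`, and how inputs are integer-encoded (the usr-dir path, data dir, and sentence are placeholders):
```python
from tensor2tensor import problems
from tensor2tensor.utils import usr_dir

# Placeholder: the local package that defines and registers the "sequence" Problem.
usr_dir.import_usr_dir("./my_t2t_usr_dir")

problem = problems.problem("sequence")
encoders = problem.feature_encoders("./t2t_data")  # placeholder data dir containing the vocab files
encoded_inputs = encoders["inputs"].encode("a new sentence to translate") + [1]  # 1 = EOS id
```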
Thanks for any help! | open | 2019-04-07T19:03:22Z | 2019-10-09T14:08:38Z | https://github.com/tensorflow/tensor2tensor/issues/1532 | [] | NielsRogge | 6 |
jupyter/nbgrader | jupyter | 1,242 | Generate assignments from a folder structure with subfolders | Hi,
I have a small problem using nbgrader on Jupyterhub.
I am using following versions:
Ubuntu 18.04
Nbgrader 0.6.0
Jupyterhub 1.0.0 (tljh)
Jupyter Notebook 5.7.8
I'm trying to generate assignments from a folder structure with subfolders. So my source folder looks something like this:
./source/assignment_1
./source/assignment_1/folder_1
./source/assignment_1/folder_2
When executing "nbgrader generate_assignment" only the ipynb-files in the folder assignment_1 are converted and copied to the release folder. The files in folder_1 and folder_2 are ignored.
I already tried to add the --CourseDirectory.include=['**'] option to "nbgrader generate_assignment" as well as to the config file. However, the ipynb-files in the subfolders are still ignored.
Any idea why this is the case? Am I missing some necessary config option?
Thanks in advance :)
| closed | 2019-10-10T11:28:28Z | 2019-11-02T09:59:33Z | https://github.com/jupyter/nbgrader/issues/1242 | ["duplicate"] | DerFerdi | 2 |
sqlalchemy/alembic | sqlalchemy | 981 | Autogenerated revisions want to change foreignkeys without code changes | **Describe the bug**
Initially I had the issue of having `None` in my autogenerated foreign keys, but I managed to get past that via #588. Now,
when I autogenerate without changing anything in the code, Alembic changes the foreign keys. I can run autogenerate multiple times and the new revisions are constantly trying to change the foreign keys.
**Expected behavior**
<!-- A clear and concise description of what you expected to happen. -->
The foreignkeys stay the same.
**To Reproduce**
I have a naming convention in the Base class that all my schemas inherit from.
```py
metadata = MetaData(schema="my_schema",
naming_convention={"ix": "ix_%(column_0_label)s",
"uq": "uq_%(table_name)s_%(column_0_name)s",
"ck": "ck_%(table_name)s_%(constraint_name)s",
"fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
"pk": "pk_%(table_name)s"},
)
Base = automap_base(metadata=metadata)
```
The new revisions look like the following:
```py
"""change1
Revision ID: de0b721b57b8
Revises: 2272f316e54a
Create Date: 2022-01-25 16:32:43.697034
"""
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision = 'de0b721b57b8'
down_revision = '2272f316e54a'
branch_labels = None
depends_on = None
def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint('fk_bookings_member_id_members', 'bookings', type_='foreignkey')
op.drop_constraint('fk_bookings_voucher_id_vouchers', 'bookings', type_='foreignkey')
op.create_foreign_key(op.f('fk_bookings_member_id_members'), 'bookings', 'members', ['member_id'], ['member_id'], source_schema='my_schema', referent_schema='my_schema')
op.create_foreign_key(op.f('fk_bookings_voucher_id_vouchers'), 'bookings', 'vouchers', ['voucher_id'], ['voucher_id'], source_schema='my_schema', referent_schema='my_schema')
op.drop_constraint('fk_trainings_partner_event_id_partners', 'trainings', type_='foreignkey')
op.create_foreign_key(op.f('fk_trainings_partner_event_id_partners'), 'trainings', 'partners', ['partner_event_id'], ['event_id'], source_schema='my_schema', referent_schema='my_schema')
op.drop_constraint('fk_travelbookings_client_id_travelclients', 'travelbookings', type_='foreignkey')
op.create_foreign_key(op.f('fk_travelbookings_client_id_travelclients'), 'travelbookings', 'travelclients', ['client_id'], ['client_id'], source_schema='my_schema', referent_schema='my_schema')
op.drop_constraint('fk_travelclients_member_id_members', 'travelclients', type_='foreignkey')
op.create_foreign_key(op.f('fk_travelclients_member_id_members'), 'travelclients', 'members', ['member_id'], ['member_id'], source_schema='my_schema', referent_schema='my_schema')
# ### end Alembic commands ###
def downgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.drop_constraint(op.f('fk_travelclients_member_id_members'), 'travelclients', schema='my_schema', type_='foreignkey')
op.create_foreign_key('fk_travelclients_member_id_members', 'travelclients', 'members', ['member_id'], ['member_id'])
op.drop_constraint(op.f('fk_travelbookings_client_id_travelclients'), 'travelbookings', schema='my_schema', type_='foreignkey')
op.create_foreign_key('fk_travelbookings_client_id_travelclients', 'travelbookings', 'travelclients', ['client_id'], ['client_id'])
op.drop_constraint(op.f('fk_trainings_partner_event_id_partners'), 'trainings', schema='my_schema', type_='foreignkey')
op.create_foreign_key('fk_trainings_partner_event_id_partners', 'trainings', 'partners', ['partner_event_id'], ['event_id'])
op.drop_constraint(op.f('fk_bookings_voucher_id_vouchers'), 'bookings', schema='my_schema', type_='foreignkey')
op.drop_constraint(op.f('fk_bookings_member_id_members'), 'bookings', schema='my_schema', type_='foreignkey')
op.create_foreign_key('fk_bookings_voucher_id_vouchers', 'bookings', 'vouchers', ['voucher_id'], ['voucher_id'])
op.create_foreign_key('fk_bookings_member_id_members', 'bookings', 'members', ['member_id'], ['member_id'])
# ### end Alembic commands ###
```
I can upgrade and do autogenerate again and get the same code in a new revision.
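For reference, a hedged `env.py` sketch of the schema-related autogenerate options that are often worth checking in a setup like this (an illustration, not a confirmed fix):
```python
from alembic import context
from sqlalchemy import engine_from_config, pool

def run_migrations_online():
    connectable = engine_from_config(
        context.config.get_section(context.config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=Base.metadata,  # the automap Base shown above
            include_schemas=True,           # also compare objects living in "my_schema"
        )
        with context.begin_transaction():
            context.run_migrations()
```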
**Versions.**
- OS: MacOS
- Python: 3.7.11
- Alembic: 1.7.5
- SQLAlchemy: 1.4.22
- Database: MySQL
- DBAPI:
**Additional context**
<!-- Add any other context about the problem here. -->
**Have a nice day!**
| closed | 2022-01-26T07:23:24Z | 2022-03-13T16:21:26Z | https://github.com/sqlalchemy/alembic/issues/981 | ["question"] | HansBambel | 3 |
jupyter-incubator/sparkmagic | jupyter | 13 | Explore alternate SQL contexts | Sparkmagic currently supports only vanilla SQLContexts as first class interfaces. If a user wants to use an alternate context (like a HiveQLContext), they can do so through the pyspark or scala interfaces, but they must handle the context itself. It may be useful to allow the user to specify a type of SQLContext when using the SQL interface.
| closed | 2015-09-28T20:28:29Z | 2015-10-30T20:06:59Z | https://github.com/jupyter-incubator/sparkmagic/issues/13 | [
"kind:enhancement"
] | alope107 | 0 |
lundberg/respx | pytest | 274 | Test error in Python 3.12 -Debian | Hi, I am getting the following test error while building the package in Debian unstable.
```
_____________________ ERROR at setup of test_plain_fixture _____________________
file /tmp/pytest-of-yogu/pytest-0/test_respx_mock_fixture0/test_respx_mock_fixture.py, line 8
def test_plain_fixture(respx_mock):
E fixture 'respx_mock' not found
> available fixtures: anyio_backend, anyio_backend_name, anyio_backend_options, autojump_clock, cache, capfd, capfdbinary, caplog, capsys, capsysbinary, cov, doctest_namespace, event_loop, http_client, http_server, http_server_client, http_server_port, io_loop, mock_clock, monkeypatch, no_cover, nursery, pytestconfig, record_property, record_testsuite_property, record_xml_attribute, recwarn, some_fixture, tmp_path, tmp_path_factory, tmpdir, tmpdir_factory, twisted_greenlet, unused_tcp_port, unused_tcp_port_factory, unused_udp_port, unused_udp_port_factory
> use 'pytest --fixtures [testpath]' for help on them.
/tmp/pytest-of-yogu/pytest-0/test_respx_mock_fixture0/test_respx_mock_fixture.py:8
```
I have created a patch to address this error,
```patch
--- a/tests/test_plugin.py
+++ b/tests/test_plugin.py
@@ -3,6 +3,7 @@ def test_respx_mock_fixture(testdir):
"""
import httpx
import pytest
+ from respx.plugin import respx_mock
@pytest.fixture
def some_fixture():
```
There is a deprecated warning too,
```
.pybuild/cpython3_3.12_respx/build/tests/test_mock.py::test_proxies
/usr/lib/python3/dist-packages/httpx/_client.py:671: DeprecationWarning: The 'proxies' argument is now deprecated. Use 'proxy' or 'mounts' instead.
warnings.warn(message, DeprecationWarning)
```
Fix for the warning below,
```patch
--- a/tests/test_mock.py
+++ b/tests/test_mock.py
@@ -476,14 +476,14 @@ def test_add_remove_targets():
async def test_proxies():
with respx.mock:
respx.get("https://foo.bar/") % dict(json={"foo": "bar"})
- with httpx.Client(proxies={"https://": "https://1.1.1.1:1"}) as client:
+ with httpx.Client(proxy={"https://": "https://1.1.1.1:1"}) as client:
response = client.get("https://foo.bar/")
assert response.json() == {"foo": "bar"}
async with respx.mock:
respx.get("https://foo.bar/") % dict(json={"foo": "bar"})
async with httpx.AsyncClient(
- proxies={"https://": "https://1.1.1.1:1"}
+ proxy={"https://": "https://1.1.1.1:1"}
) as client:
response = await client.get("https://foo.bar/")
assert response.json() == {"foo": "bar"}
``` | closed | 2024-08-17T23:02:34Z | 2024-12-19T10:58:53Z | https://github.com/lundberg/respx/issues/274 | [] | NGC2023 | 1 |
dask/dask | scikit-learn | 10,997 | Dumb code error in the Example code in Dask-SQL Homepage | Easy to fix. On https://dask-sql.readthedocs.io/en/latest/ in the Example code the last line is
```
# ...or use it for another computation
result.sum.mean().compute()
```
this throws an error because 'sum' was accidentally left in.
the code should be:
```
# ...or use it for another computation
result.mean().compute()
``` | closed | 2024-03-12T14:44:08Z | 2024-03-15T16:21:13Z | https://github.com/dask/dask/issues/10997 | ["needs triage"] | tiraldj | 3 |
eriklindernoren/ML-From-Scratch | data-science | 12 | Apriori - Subset name error | Is the name `subset` at line 158 missing an 's' (should it be `subsets`)?
NameError Traceback (most recent call last)
<ipython-input-117-5dfcb657789c> in <module>()
17
18 # Get and print the rules
---> 19 rules = apriori.generate_rules(transactions)
20 print ("Rules:")
21 for rule in rules:
<ipython-input-116-c89fd55eb61b> in generate_rules(self, transactions)
167 rules = []
168 for itemset in frequent_itemsets:
--> 169 rules += self._rules_from_itemset(itemset, itemset)
170 return rules
<ipython-input-116-c89fd55eb61b> in _rules_from_itemset(self, initial_itemset, itemset)
156 # recursively add rules from subsets
157 if k - 1 > 1:
--> 158 rules.append(self._rules_from_itemset(initial_itemset, subset))
159 return rules
160
NameError: name 'subset' is not defined | closed | 2017-03-05T21:21:57Z | 2017-03-06T09:43:27Z | https://github.com/eriklindernoren/ML-From-Scratch/issues/12 | [] | zpencerguy | 2 |
pyqtgraph/pyqtgraph | numpy | 2,367 | ExportDialog Drawn Off Screen | Depending on the size of the scene the export dialog box can be drawn off or partially off screen. This is due to an implementation of the `show` command that allows moving the box to negative pixel indices.
Problem Code:
https://github.com/pyqtgraph/pyqtgraph/blob/a5f48ec5b58a10260195f1424309f7374a85ece7/pyqtgraph/GraphicsScene/exportDialog.py#L57-L62
To fix this, the position calculation can be clipped using `max`, and the `setGeometry` command can be changed to `move` to account for the size of the window's frame.
Potential Fix:
```python
if not self.shown:
self.shown = True
vcenter = self.scene.getViewWidget().geometry().center()
x = max(0, int(vcenter.x() - self.width() / 2))
y = max(0, int(vcenter.y() - self.height() / 2))
self.move(x, y)
```
I can't say I understand the motivation for moving the dialog box in the first place, but at least with this modification the dialog box is always accessible with the mouse. | closed | 2022-07-19T14:09:36Z | 2022-10-28T16:53:02Z | https://github.com/pyqtgraph/pyqtgraph/issues/2367 | ["good first issue", "exporters", "hacktoberfest"] | cdfredrick | 5 |
OFA-Sys/Chinese-CLIP | computer-vision | 72 | When training the model, does it feed one image paired with multiple queries, or does it randomly feed different query-image pairs? If it is the former, then keeping the number of images fixed while adding more queries should not increase training time much. | closed | 2023-03-21T02:55:18Z | 2023-04-27T15:45:02Z | https://github.com/OFA-Sys/Chinese-CLIP/issues/72 | [] | shenghuangxu | 1 |
manbearwiz/youtube-dl-server | rest-api | 45 | Request: Error handling on download thread | Queue downloading does not recover after any error occurs on the download thread.
This is the same issue as: https://github.com/manbearwiz/youtube-dl-server/issues/43 | closed | 2019-10-14T06:46:12Z | 2020-12-04T21:37:22Z | https://github.com/manbearwiz/youtube-dl-server/issues/45 | [] | GeorgeHahn | 3 |
pywinauto/pywinauto | automation | 813 | pywinauto crashes with tkinter on Python 3.7 on exit | Simply importing `pywinauto` and `tkinter` together will crash Python 3.7 post execution
## Expected Behavior
Not get any "Python has stopped working" crash message after Python script has executed.
## Actual Behavior
After execution, Python should be able to exit gracefully without a crash message.
## Steps to Reproduce the Problem
1. import tkinter
2. import pywinauto
3. create a `Tk()` instance
4. quit `Tk` instance
5. Python script will exit but a crash message will be shown
## Short Example of Code to Demonstrate the Problem
```
import tkinter as tk
import pywinauto as pyw
root = tk.Tk()
```
## Specifications
- Pywinauto version: 0.6.7
- Python version and bitness: 3.7-32
- Platform and OS: Windows 10 64 bit
| open | 2019-09-06T13:50:16Z | 2021-09-02T09:51:40Z | https://github.com/pywinauto/pywinauto/issues/813 | ["3rd-party issue"] | r-ook | 9 |
lexiforest/curl_cffi | web-scraping | 336 | [BUG] Garbled text (mojibake) on Chinese websites | - curl_cffi version 0.6.4
Is there a general solution for garbled text when fetching Chinese websites, i.e. one that does not require manually specifying the encoding?
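For context, a hedged sketch of one generic approach, detecting the charset from the raw bytes instead of relying on the response headers (the URL is a placeholder and `charset_normalizer` is an extra dependency, not a curl_cffi feature):
```python
from charset_normalizer import from_bytes
from curl_cffi import requests

resp = requests.get("https://example.com/zh", impersonate="chrome")  # placeholder URL
best = from_bytes(resp.content).best()  # guess the charset from the raw bytes
text = str(best) if best is not None else resp.content.decode("utf-8", errors="replace")
print(text[:200])
```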
| closed | 2024-07-03T06:46:56Z | 2024-07-04T03:32:20Z | https://github.com/lexiforest/curl_cffi/issues/336 | ["bug"] | zyoung1212 | 6 |
tensorflow/tensor2tensor | machine-learning | 1,486 | The variable is in the checkpoints, But the model cannot be loaded correctly. | ### Description
Hi, I want to reproduce the CNN translation model, but I have run into a model-loading problem. With TensorFlow 1.8 the model seems to load correctly, but with TensorFlow 1.12 it cannot be loaded. The message is
```
NotFoundError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a Variable name or other graph key that is missing from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
837.train | [2019-03-13T08:58:51Z]
837.train | [2019-03-13T08:58:51Z] Key while/cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_0/conv1d/conv1d_7/kernel not found in checkpoint
837.train | [2019-03-13T08:58:51Z] [[node save/RestoreV2_1 (defined at /code/tensor2tensor/tensor2tensor/utils/decoding.py:368) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, ..., DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2_1/tensor_names, save/RestoreV2_1/shape_and_slices)]]
```
But when I print the checkpoint variables, I find that the variable is present in the checkpoint.
```
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_2/dense_8/kernel/Adam_1')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_5/dense_14/kernel/Adam_1')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_0/conv1d/conv1d_7/kernel/Adam')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_6/dense_16/kernel/Adam_1')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_3/dense_10/bias/Adam')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/encoder/dense/kernel/Adam')
('tensor_name: ', 'losses_avg/problem_0/extra_loss')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_5/conv1d/conv1d_12/kernel')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_3/conv1d/conv1d_10/kernel/Adam')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/encoder/cnn_1/conv1d/conv1d_1/kernel/Adam_1')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_3/dense_9/kernel/Adam')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_5/dense_14/bias')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_1/conv1d/conv1d_8/kernel/Adam')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_6/dense_16/bias/Adam_1')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_1/dense_5/kernel')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/cnn_0/conv1d/conv1d_7/kernel')
('tensor_name: ', 'cnn_translate/parallel_0_5/cnn_translate/cnn_translate/body/cnn_decoder/dense_17/bias/Adam_1')
```
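(For reference, a hedged sketch of one way such a listing is typically produced in TF 1.x; the checkpoint prefix is a placeholder.)
```python
import tensorflow as tf

ckpt_prefix = "/path/to/train_dir/model.ckpt-250000"  # placeholder checkpoint prefix
for name, shape in tf.train.list_variables(ckpt_prefix):
    print(name, shape)
```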
So I am very confused by this problem. Can someone help me? | open | 2019-03-13T09:35:01Z | 2019-08-08T20:48:30Z | https://github.com/tensorflow/tensor2tensor/issues/1486 | [] | chuanHN | 3 |
mage-ai/mage-ai | data-science | 5,363 | [BUG] Transformer Block randomly stuck in | ### Mage version
0.9.70
### Describe the bug
1. the block in question is a simple block that returns a string
2. the parent block returns a data frame
3. the pipeline has been scheduled and working fine for at least 2 months
4. All of a sudden, the block got stuck in the running state without being executed for 2 days, so I had to cancel it manually.
*(screenshot of the stuck block omitted)*
The log of the stuck block:
*(screenshot of the log omitted)*
5. After canceling it manually, the scheduled pipeline seems to work normally.
### To reproduce
1. the bug seems to happen randomly without any cause
### Expected behavior
the block should not be stuck
### Screenshots
_No response_
### Operating system
-OS : windows 11
-Mage : 0.9.70
-Browser : Latest Ms Edge
### Additional context
_No response_ | open | 2024-08-26T03:52:20Z | 2024-08-26T18:49:42Z | https://github.com/mage-ai/mage-ai/issues/5363 | ["bug"] | JethroJethro | 1 |
yeongpin/cursor-free-vip | automation | 124 | no sign in [mac] | i logged out my account , closed the cursor, use cursor-free-vip sign a new account like this:
<img width="1200" alt="Image" src="https://github.com/user-attachments/assets/f60798d1-1b91-4b6e-bcb8-2c2ea427bc3d" />
But when I reopen Cursor, nothing happens:
<img width="649" alt="Image" src="https://github.com/user-attachments/assets/8c339b30-b804-486c-843a-fd0145bcf9f7" />
I wonder if this is because the browser is in private mode? Thank you.
facebookresearch/fairseq | pytorch | 5,097 | Pretrain Hubert base second iteration | I'm training a HuBERT model from scratch on 8 kHz speech audio, as described in the paper, and the first iteration succeeded. I've started the second iteration, where first-iteration features were used to learn the k-means clusters.
Why is the following warning printed for all the training data? Should I be concerned?
`[2023-05-02 16:53:22,434][fairseq.data.audio.hubert_dataset][WARNING] - audio and label duration differ too much
` | open | 2023-05-03T10:12:37Z | 2024-11-12T07:44:19Z | https://github.com/facebookresearch/fairseq/issues/5097 | ["question", "needs triage"] | renadnasser1 | 3 |
graphdeco-inria/gaussian-splatting | computer-vision | 996 | Extract data of ellipsoids | I want to get the mean and covariance of all ellipsoids of the fitting structure, is this achievable? | open | 2024-09-26T03:00:03Z | 2024-12-02T02:00:46Z | https://github.com/graphdeco-inria/gaussian-splatting/issues/996 | [] | 3406212002 | 2 |
PeterL1n/BackgroundMattingV2 | computer-vision | 176 | train_refine | Hi,
first thanks for your great work!
I'm trying to train the refine part, but I get some weird error. I tried everything but nothing helps. Maybe You guys have some ideas!
```
File "~/.venv/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 272, in __getitem__
return self.dataset[self.indices[idx]]
File "~/BMV2_/dataset/zip.py", line 17, in __getitem__
x = tuple(d[(idx % len(d))+1] for d in self.datasets)
ZeroDivisionError: integer division or modulo by zero
File "/usr/lib/python3.6/multiprocessing/spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
``` | closed | 2022-03-25T10:11:15Z | 2024-06-14T02:13:08Z | https://github.com/PeterL1n/BackgroundMattingV2/issues/176 | [] | grewanhassan | 6 |
horovod/horovod | pytorch | 4,050 | How to set timeout status when Missing Rank? | How do I set a timeout for Horovod? When the retry time exceeds the timeout, the program should shut down.
If there is a TIMEOUT environment variable, I think it should be set by default. | closed | 2024-06-20T01:41:32Z | 2025-01-21T05:34:19Z | https://github.com/horovod/horovod/issues/4050 | ["enhancement"] | fuhailin | 0 |
pywinauto/pywinauto | automation | 651 | wait Operation error | Hi Vasily,
I am trying to access Outlook.
Based on the docs and examples, I am trying the code below with a wait operation, and I get a timeout error before the application is open and visible. What might be the best way to use the wait operation?
```python
from pywinauto.application import Application
app = Application().start(r'C:\Program Files (x86)\Microsoft Office\root\Office16\OUTLOOK.EXE')
app.rctrl_renwnd32.wait('enabled', timeout = 20)
Traceback (most recent call last):
File "C:/Users/rrmamidi/Desktop/old Desktop/compress_1/python/basic python scripts/attache_mail.py", line 9, in <module>
app.rctrl_renwnd32.wait('enabled', timeout = 20)
File "C:\Users\rrmamidi\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto-0.6.5-py3.6.egg\pywinauto\application.py", line 502, in wait
lambda: self.__check_all_conditions(check_method_names, retry_interval))
File "C:\Users\rrmamidi\AppData\Local\Programs\Python\Python36\lib\site-packages\pywinauto-0.6.5-py3.6.egg\pywinauto\timings.py", line 370, in wait_until
raise err
pywinauto.timings.TimeoutError: timed out
```
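For reference, a hedged sketch of a more generic wait pattern that matches the window by title instead of the auto-generated attribute name (the title regex and timeout are assumptions, not a confirmed fix):
```python
from pywinauto.application import Application

app = Application().start(r'C:\Program Files (x86)\Microsoft Office\root\Office16\OUTLOOK.EXE')
main_win = app.window(title_re='.*Outlook.*')  # assumed title pattern for the main window
main_win.wait('visible', timeout=60)           # wait until the window exists and is visible
```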
Thanks,
Raja | closed | 2019-01-08T04:22:09Z | 2023-03-18T06:11:14Z | https://github.com/pywinauto/pywinauto/issues/651 | ["question"] | rajarameshmamidi | 16 |
Asabeneh/30-Days-Of-Python | matplotlib | 222 | 30 days python | closed | 2022-05-19T06:50:12Z | 2022-05-19T06:50:24Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/222 | [] | CYPHERTIGER | 0 |
pytorch/pytorch | machine-learning | 149,495 | DISABLED AotInductorTest.FreeInactiveConstantBufferCuda (build.bin.test_aoti_inference) | Platforms: inductor
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=AotInductorTest.FreeInactiveConstantBufferCuda&suite=build.bin.test_aoti_inference&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39012167561).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 3 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `AotInductorTest.FreeInactiveConstantBufferCuda`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
<details><summary>Sample error message</summary>
```
Expected equality of these values:
initMemory - DATASIZE
Which is: 22508863488
updateMemory2
Which is: 22508797952
/var/lib/jenkins/workspace/test/cpp/aoti_inference/test.cpp:383: C++ failure
```
</details>
Test file path: `` or `test/run_test`
Error: Error retrieving : 400, test/run_test: 404
cc @clee2000 @chauhang @penguinwu @avikchaudhuri @gmagogsfm @zhxchen17 @tugsbayasgalan @angelayi @suo @ydwu4 @desertfire @chenyang78 @yushangdi @benjaminglass1 | open | 2025-03-19T09:43:10Z | 2025-03-21T09:41:37Z | https://github.com/pytorch/pytorch/issues/149495 | ["module: flaky-tests", "skipped", "oncall: pt2", "oncall: export", "module: aotinductor"] | pytorch-bot[bot] | 11 |
onnx/onnxmltools | scikit-learn | 526 | CoreML to ONNX conversion error | I am trying to convert CoreML model - [MobileNetV2](https://developer.apple.com/machine-learning/models/) to ONNX. The conversion is successful but on loading the saved onnx file, the following error is thrown:
[ONNXRuntimeError] : 1 : FAIL : Load model from mobilenetv2.onnx failed:Type Error: Type parameter (T) of Optype (Clip) bound to different types (tensor(float) and tensor(double) in node (unary).
I would appreciate any help with this. | open | 2022-02-16T01:58:03Z | 2022-02-16T01:58:03Z | https://github.com/onnx/onnxmltools/issues/526 | [] | Diksha-G | 0 |
iperov/DeepFaceLive | machine-learning | 123 | Huge Delay | Everything is setup, works great. Only issue, huge delay even with audio offset sync, and fiddling with settings...I can't seem to find solution. Could it be an issue with my paging file allocation? I mean I changed it to 32gb as suggested, but for some reason I am only getting like 10fps on the source feed from my cam and its set to auto..I tried changing it to 30fps and others but it's stuck on 10fps. All the info I really have atm, but great app so far! Please advise of any suggestions.
Thanks,
B. Justice | closed | 2023-01-23T00:27:09Z | 2023-01-23T15:29:39Z | https://github.com/iperov/DeepFaceLive/issues/123 | [] | vintaclectic | 2 |
TencentARC/GFPGAN | deep-learning | 3 | where is the pretrained model FFHQ_eye_mouth_landmarks_512.pth &arcface..... | Nice work! I want to train the model, but where are the pretrained models FFHQ_eye_mouth_landmarks_512.pth and arcface...? | closed | 2021-06-16T08:08:45Z | 2021-06-18T02:09:42Z | https://github.com/TencentARC/GFPGAN/issues/3 | [] | zhangyunming | 3 |
streamlit/streamlit | data-visualization | 10,067 | `st.server_state` with endpoint access (like `st.session_state`, but for global values; different from pickled user sessions) | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
Currently, we can use `st.cache_data` and `st.cache_resource` to "save" values that can be accessed between sessions. What if there was an API similar to `st.session_state` that was shared between sessions? I suggest `st.server_state`.
Additionally, a Streamlit app could include an endpoint for accessing and updating server state.
### Why?
A common pattern is for a Streamlit app to reach out and collect data from a remote location, typically saving it with `st.cache_data`. If there were some server state combined with an endpoint, a remote source would be able to ping the app and initiate a data update. This avoids needing to schedule the app to run just to refresh data, or making a random user wait if they are the first to connect after a cached value's TTL has expired.
This would also be useful for IoT use cases where a smart device can send an alert to the app.
Another feature request to send a global message to all sessions (#7312) could also be accommodated with this.
### How?
Add a new API `st.server_state` which is global with all sessions having read/write access.
Add an (authenticated) endpoint for remote sources to connect to and update values in `st.server_state`.
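A hypothetical usage sketch of the proposed API (nothing here exists today; it only illustrates the request):
```python
import streamlit as st

# Hypothetical: st.server_state would be a dict-like object shared by all sessions.
if "latest_reading" not in st.server_state:
    st.server_state["latest_reading"] = None

st.metric("Latest sensor reading", st.server_state["latest_reading"])

# Hypothetical: an IoT device POSTs to an authenticated endpoint, e.g. /api/server_state,
# to update "latest_reading"; every open session would then see the new value.
```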
### Additional Context
This is related to requests for app state (#8609), but I'm suggesting something narrower. | open | 2024-12-22T10:09:44Z | 2025-02-01T05:19:22Z | https://github.com/streamlit/streamlit/issues/10067 | ["type:enhancement", "feature:state"] | sfc-gh-dmatthews | 2 |
lukas-blecher/LaTeX-OCR | pytorch | 229 | `pip install pix2tex[gui]` does not work | Running `pip install pix2tex[gui]` in the terminal gives me `no matches found: pix2tex[gui]`. What am I doing wrong? | closed | 2023-01-03T17:51:44Z | 2023-01-04T13:48:06Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/229 | [] | rschmidtner | 2 |
keras-team/autokeras | tensorflow | 1,680 | KeyError: 'classification_head_1/spatial_reduction_1/reduction_type' with 'overwrite=True' & AK 1.0.17 | ### Bug Description
Similar issue to #1183 even with `overwrite=True` and Autokeras 1.0.17
### Bug Reproduction
A simple image classification as given in the tutorials.
Data used by the code: Normal jpg images.
### Expected Behavior
The training continues to the maximal max_trials, then stops.
### Setup Details
Include the details about the versions of:
- OS type and version: Ubuntu 20.04.3 LTS
- Python: 3.9.10
- autokeras: 1.0.17
- keras-tuner: 1.1.0
- scikit-learn: 0.24.1
- numpy: 1.22.2
- pandas: 1.2.3
- tensorflow: 2.8.0
### Additional context, here is the full stack trace:
```
Trial 3 Complete [10h 14m 30s]
val_loss: 0.29048436880111694
Best val_loss So Far: 0.29048436880111694
Total elapsed time: 12h 58m 04s
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
Input In [7], in <module>
1 tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
3 model = ak.ImageClassifier(overwrite=True, max_trials=30)
----> 4 history = model.fit(train_data, epochs=10, callbacks=[tensorboard_callback])
File ~/Git/ipig/venv/lib/python3.9/site-packages/autokeras/tasks/image.py:164, in ImageClassifier.fit(self, x, y, epochs, callbacks, validation_split, validation_data, **kwargs)
107 def fit(
108 self,
109 x: Optional[types.DatasetType] = None,
(...)
117 **kwargs
118 ):
119 """Search for the best model and hyperparameters for the AutoModel.
120
121 It will search for the best model based on the performances on
(...)
162 validation loss values and validation metrics values (if applicable).
163 """
--> 164 history = super().fit(
165 x=x,
166 y=y,
167 epochs=epochs,
168 callbacks=callbacks,
169 validation_split=validation_split,
170 validation_data=validation_data,
171 **kwargs
172 )
173 return history
File ~/Git/ipig/venv/lib/python3.9/site-packages/autokeras/auto_model.py:288, in AutoModel.fit(self, x, y, batch_size, epochs, callbacks, validation_split, validation_data, verbose, **kwargs)
283 if validation_data is None and validation_split:
284 dataset, validation_data = data_utils.split_dataset(
285 dataset, validation_split
286 )
--> 288 history = self.tuner.search(
289 x=dataset,
290 epochs=epochs,
291 callbacks=callbacks,
292 validation_data=validation_data,
293 validation_split=validation_split,
294 verbose=verbose,
295 **kwargs
296 )
298 return history
File ~/Git/ipig/venv/lib/python3.9/site-packages/autokeras/engine/tuner.py:193, in AutoTuner.search(self, epochs, callbacks, validation_split, verbose, **fit_kwargs)
191 self.hypermodel.build(hp)
192 self.oracle.update_space(hp)
--> 193 super().search(
194 epochs=epochs, callbacks=new_callbacks, verbose=verbose, **fit_kwargs
195 )
197 # Train the best model use validation data.
198 # Train the best model with enough number of epochs.
199 if validation_split > 0 or early_stopping_inserted:
File ~/Git/ipig/venv/lib/python3.9/site-packages/keras_tuner/engine/base_tuner.py:169, in BaseTuner.search(self, *fit_args, **fit_kwargs)
167 self.on_search_begin()
168 while True:
--> 169 trial = self.oracle.create_trial(self.tuner_id)
170 if trial.status == trial_module.TrialStatus.STOPPED:
171 # Oracle triggered exit.
172 tf.get_logger().info("Oracle triggered exit")
File ~/Git/ipig/venv/lib/python3.9/site-packages/keras_tuner/engine/oracle.py:189, in Oracle.create_trial(self, tuner_id)
187 values = None
188 else:
--> 189 response = self.populate_space(trial_id)
190 status = response["status"]
191 values = response["values"] if "values" in response else None
File ~/Git/ipig/venv/lib/python3.9/site-packages/autokeras/tuners/greedy.py:153, in GreedyOracle.populate_space(self, trial_id)
151 for _ in range(self._max_collisions):
152 hp_names = self._select_hps()
--> 153 values = self._generate_hp_values(hp_names)
154 # Reached max collisions.
155 if values is None:
File ~/Git/ipig/venv/lib/python3.9/site-packages/autokeras/tuners/greedy.py:189, in GreedyOracle._generate_hp_values(self, hp_names)
186 if hps.is_active(hp):
187 # if was active and not selected, do nothing.
188 if best_hps.is_active(hp.name) and hp.name not in hp_names:
--> 189 hps.values[hp.name] = best_hps.values[hp.name]
190 continue
191 # if was not active or selected, sample.
KeyError: 'classification_head_1/spatial_reduction_2/reduction_type'
```
| open | 2022-02-10T09:44:14Z | 2022-02-10T09:45:14Z | https://github.com/keras-team/autokeras/issues/1680 | [] | mmortazavi | 0 |
automl/auto-sklearn | scikit-learn | 784 | Does output with R2 below simple multiple regression indicate error or tuning need? | This was the script that resulted in an unusually low R2 -- the expected result was higher than multiple regression (.55) instead of R2 of .12. The question is whether this indicates a poor use case for auto-SKLearn, a need for parameter or hyperparameter adjustments, or some other error in use?
15:11:48 PRIVATE python3 eluellen-sklearn.py
/usr/local/lib/python3.6/dist-packages/sklearn/utils/deprecation.py:144: FutureWarning: The sklearn.metrics.classification module is deprecated in version 0.22 and will be removed in version 0.24. The corresponding classes / functions should instead be imported from sklearn.metrics. Anything that cannot be imported from sklearn.metrics is now part of the private API.
warnings.warn(message, FutureWarning)
Samples = 2619, Features = 40
X_train = [[4.96200e+04 1.71090e+04 3.44800e-01 ... 1.70000e+04 3.26200e+04
6.57400e-01]
[5.95000e+04 0.00000e+00 0.00000e+00 ... 5.95000e+04 0.00000e+00
0.00000e+00]
[4.65400e+04 4.65400e+04 1.00000e+00 ... 1.15400e+04 3.50000e+04
7.52000e-01]
...
[5.25800e+04 3.14100e+04 5.97400e-01 ... 2.25800e+04 3.00000e+04
5.70600e-01]
[6.46150e+04 6.27120e+04 9.70500e-01 ... 9.61500e+03 5.50000e+04
8.51200e-01]
[5.25800e+04 2.90390e+04 5.52300e-01 ... 2.22230e+04 3.03575e+04
5.77400e-01]], y_train = [1. 0. 1. ... 1. 1. 1.]
/usr/local/lib/python3.6/dist-packages/sklearn/base.py:197: FutureWarning: From version 0.24, get_params will raise an AttributeError if a parameter cannot be retrieved as an instance attribute. Previously it would return None.
FutureWarning)
[WARNING] [2020-02-18 15:11:58,185:AutoMLSMBO(1)::cb28bbd020a0a08a3c17168f19c8aaae] Could not find meta-data directory /usr/local/lib/python3.6/dist-packages/autosklearn/metalearning/files/r2_regression_dense
[WARNING] [2020-02-18 15:11:58,212:EnsembleBuilder(1):cb28bbd020a0a08a3c17168f19c8aaae] No models better than random - using Dummy Score!
[WARNING] [2020-02-18 15:11:58,224:EnsembleBuilder(1):cb28bbd020a0a08a3c17168f19c8aaae] No models better than random - using Dummy Score!
[WARNING] [2020-02-18 15:12:00,228:EnsembleBuilder(1):cb28bbd020a0a08a3c17168f19c8aaae] No models better than random - using Dummy Score!
[(0.340000, SimpleRegressionPipeline({'categorical_encoding:__choice__': 'one_hot_encoding', 'imputation:strategy': 'median', 'preprocessor:__choice__': 'extra_trees_preproc_for_regression', 'regressor:__choice__': 'ridge_regression', 'rescaling:__choice__': 'quantile_transformer', 'categorical_encoding:one_hot_encoding:use_minimum_fraction': 'True', 'preprocessor:extra_trees_preproc_for_regression:bootstrap': 'True', 'preprocessor:extra_trees_preproc_for_regression:criterion': 'mae', 'preprocessor:extra_trees_preproc_for_regression:max_depth': 'None', 'preprocessor:extra_trees_preproc_for_regression:max_features': 0.8215479502881777, 'preprocessor:extra_trees_preproc_for_regression:max_leaf_nodes': 'None', 'preprocessor:extra_trees_preproc_for_regression:min_samples_leaf': 11, 'preprocessor:extra_trees_preproc_for_regression:min_samples_split': 9, 'preprocessor:extra_trees_preproc_for_regression:min_weight_fraction_leaf': 0.0, 'preprocessor:extra_trees_preproc_for_regression:n_estimators': 100, 'regressor:ridge_regression:alpha': 4.563743442447699, 'regressor:ridge_regression:fit_intercept': 'True', 'regressor:ridge_regression:tol': 4.8339309027613326e-05, 'rescaling:quantile_transformer:n_quantiles': 572, 'rescaling:quantile_transformer:output_distribution': 'uniform', 'categorical_encoding:one_hot_encoding:minimum_fraction': 0.022216999044307732},
dataset_properties={
'task': 4,
'sparse': False,
'multilabel': False,
'multiclass': False,
'target_type': 'regression',
'signed': False})),
(0.340000, SimpleRegressionPipeline({'categorical_encoding:__choice__': 'one_hot_encoding', 'imputation:strategy': 'most_frequent', 'preprocessor:__choice__': 'fast_ica', 'regressor:__choice__': 'extra_trees', 'rescaling:__choice__': 'minmax', 'categorical_encoding:one_hot_encoding:use_minimum_fraction': 'False', 'preprocessor:fast_ica:algorithm': 'parallel', 'preprocessor:fast_ica:fun': 'logcosh', 'preprocessor:fast_ica:whiten': 'False', 'regressor:extra_trees:bootstrap': 'False', 'regressor:extra_trees:criterion': 'friedman_mse', 'regressor:extra_trees:max_depth': 'None', 'regressor:extra_trees:max_features': 0.343851332296278, 'regressor:extra_trees:max_leaf_nodes': 'None', 'regressor:extra_trees:min_impurity_decrease': 0.0, 'regressor:extra_trees:min_samples_leaf': 14, 'regressor:extra_trees:min_samples_split': 5, 'regressor:extra_trees:n_estimators': 100},
dataset_properties={
'task': 4,
'sparse': False,
'multilabel': False,
'multiclass': False,
'target_type': 'regression',
'signed': False})),
(0.260000, SimpleRegressionPipeline({'categorical_encoding:__choice__': 'one_hot_encoding', 'imputation:strategy': 'mean', 'preprocessor:__choice__': 'no_preprocessing', 'regressor:__choice__': 'random_forest', 'rescaling:__choice__': 'standardize', 'categorical_encoding:one_hot_encoding:use_minimum_fraction': 'True', 'regressor:random_forest:bootstrap': 'True', 'regressor:random_forest:criterion': 'mse', 'regressor:random_forest:max_depth': 'None', 'regressor:random_forest:max_features': 1.0, 'regressor:random_forest:max_leaf_nodes': 'None', 'regressor:random_forest:min_impurity_decrease': 0.0, 'regressor:random_forest:min_samples_leaf': 1, 'regressor:random_forest:min_samples_split': 2, 'regressor:random_forest:min_weight_fraction_leaf': 0.0, 'regressor:random_forest:n_estimators': 100, 'categorical_encoding:one_hot_encoding:minimum_fraction': 0.01},
dataset_properties={
'task': 4,
'sparse': False,
'multilabel': False,
'multiclass': False,
'target_type': 'regression',
'signed': False})),
(0.040000, SimpleRegressionPipeline({'categorical_encoding:__choice__': 'one_hot_encoding', 'imputation:strategy': 'most_frequent', 'preprocessor:__choice__': 'fast_ica', 'regressor:__choice__': 'ridge_regression', 'rescaling:__choice__': 'standardize', 'categorical_encoding:one_hot_encoding:use_minimum_fraction': 'True', 'preprocessor:fast_ica:algorithm': 'deflation', 'preprocessor:fast_ica:fun': 'exp', 'preprocessor:fast_ica:whiten': 'True', 'regressor:ridge_regression:alpha': 1.3608642297867532e-05, 'regressor:ridge_regression:fit_intercept': 'True', 'regressor:ridge_regression:tol': 0.002596874543719601, 'categorical_encoding:one_hot_encoding:minimum_fraction': 0.00017348437847697216, 'preprocessor:fast_ica:n_components': 1058},
dataset_properties={
'task': 4,
'sparse': False,
'multilabel': False,
'multiclass': False,
'target_type': 'regression',
'signed': False})),
(0.020000, SimpleRegressionPipeline({'categorical_encoding:__choice__': 'no_encoding', 'imputation:strategy': 'median', 'preprocessor:__choice__': 'select_percentile_regression', 'regressor:__choice__': 'ridge_regression', 'rescaling:__choice__': 'quantile_transformer', 'preprocessor:select_percentile_regression:percentile': 82.56436225708288, 'preprocessor:select_percentile_regression:score_func': 'mutual_info', 'regressor:ridge_regression:alpha': 1.6259354959848533, 'regressor:ridge_regression:fit_intercept': 'True', 'regressor:ridge_regression:tol': 0.005858793476627702, 'rescaling:quantile_transformer:n_quantiles': 431, 'rescaling:quantile_transformer:output_distribution': 'normal'},
dataset_properties={
'task': 4,
'sparse': False,
'multilabel': False,
'multiclass': False,
'target_type': 'regression',
'signed': False})),
]
R2 score: 0.12086525801756198
real 1m58.008s
user 2m17.253s
sys 0m12.919s | closed | 2020-02-18T15:44:50Z | 2020-06-19T08:56:56Z | https://github.com/automl/auto-sklearn/issues/784 | [] | EricLuellen | 2 |
iperov/DeepFaceLab | machine-learning | 778 | Windows Error 1450 with Xseg and train SAE | I did everything step by step from the YouTube tutorial, and at the last step (train SAE) I've got an error.
`Running trainer.
[new] No saved models found. Enter a name of a new model : hindu_SAE
hindu_SAE
Model first run.
Choose one or several GPU idxs (separated by comma).
[CPU] : CPU
[0] : GeForce GTX 1080 Ti
[0] Which GPU indexes to choose? :
0
[0] Autobackup every N hour ( 0..24 ?:help ) : 0
0
[n] Write preview history ( y/n ?:help ) :
n
[0] Target iteration :
0
[y] Flip faces randomly ( y/n ?:help ) :
y
[8] Batch_size ( ?:help ) :
8
[128] Resolution ( 64-512 ?:help ) :
128
[wf] Face type ( h/mf/f/wf/head ?:help ) : wf
wf
[dfuhd] AE architecture ( df/liae/dfhd/liaehd/dfuhd/liaeuhd ?:help ) : df
df
[688] AutoEncoder dimensions ( 32-1024 ?:help ) :
688
[64] Encoder dimensions ( 16-256 ?:help ) :
64
[64] Decoder dimensions ( 16-256 ?:help ) :
64
[22] Decoder mask dimensions ( 16-256 ?:help ) :
22
[y] Masked training ( y/n ?:help ) :
y
[n] Eyes priority ( y/n ?:help ) :
n
[y] Place models and optimizer on GPU ( y/n ?:help ) :
y
[n] Use learning rate dropout ( n/y/cpu ?:help ) : ?
When the face is trained enough, you can enable this option to get extra sharpness and reduce subpixel shake for less amount of iterations.
n - disabled.
y - enabled
cpu - enabled on CPU. This allows not to use extra VRAM, sacrificing 20% time of iteration.
[n] Use learning rate dropout ( n/y/cpu ?:help ) :
n
[y] Enable random warp of samples ( y/n ?:help ) :
y
[0.0] GAN power ( 0.0 .. 10.0 ?:help ) :
0.0
[0.0] 'True face' power. ( 0.0000 .. 1.0 ?:help ) :
0.0
[0.0] Face style power ( 0.0..100.0 ?:help ) :
0.0
[0.0] Background style power ( 0.0..100.0 ?:help ) :
0.0
[none] Color transfer for src faceset ( none/rct/lct/mkl/idt/sot ?:help ) :
none
[n] Enable gradient clipping ( y/n ?:help ) :
n
[n] Enable pretraining mode ( y/n ?:help ) :
n
Initializing models: 100%|############################| 5/5 [00:34<00:00, 6.98s/it]
Loading samples: 100%|#######################| 48929/48929 [02:29<00:00, 328.19it/s]
Loading samples: 100%|#######################| 38226/38226 [02:14<00:00, 284.53it/s]
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "multiprocessing\spawn.py", line 105, in spawn_main
File "multiprocessing\spawn.py", line 115, in _main
File "multiprocessing\heap.py", line 55, in __setstate__
OSError: [WinError 1450] Zasoby systemowe nie wystarczają do ukończenia żądanej usługi
```
In English, the error is: OSError: [WinError 1450] **System resources are not sufficient to complete the requested service**
My configuration:
Ryzen 5 1600X, 32 GB DDR4, GTX 1080 Ti (11 GB), 64 GB virtual memory, Windows 10 x64
The DeepFaceLab master build is current as of today.
Please help | open | 2020-06-09T21:22:40Z | 2023-06-08T20:11:21Z | https://github.com/iperov/DeepFaceLab/issues/778 | [] | wasyleque | 1 |
Yorko/mlcourse.ai | pandas | 747 | Update the Docker image to use Poetry | The current Docker image ([Dockerfile](https://github.com/Yorko/mlcourse.ai/blob/main/docker_files/Dockerfile), [DockerHub](https://hub.docker.com/layers/festline/mlcourse_ai/latest/images/sha256-a736f95d84cb934331d5c58f408dbfcb897a725adb36a5963d9656f4199f4abb?context=explore)) uses Anaconda and is thus very heavy. Also, package versions in the [Dockerfile](https://github.com/Yorko/mlcourse.ai/blob/main/docker_files/Dockerfile) and [instructions](https://mlcourse.ai/book/prereqs/docker.html) on Docker usage have not been updated since 2021 or even 2019.
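A lighter alternative would be a slim Python base image that installs dependencies with Poetry from the committed lock file; a minimal sketch of what that could look like (the base image tag, Poetry install method, and paths are my assumptions, not from the repo):

```dockerfile
FROM python:3.10-slim

# Install Poetry itself, then resolve dependencies from the committed lock file.
RUN pip install --no-cache-dir poetry

WORKDIR /app
COPY pyproject.toml poetry.lock ./

# Install straight into the image's Python (no nested virtualenv) for a smaller,
# reproducible environment driven entirely by poetry.lock.
RUN poetry config virtualenvs.create false \
    && poetry install --no-interaction --no-ansi --no-root

COPY . .
```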
It'd help to replace the image with another one that uses Poetry and the provided [poetry.lock](https://github.com/Yorko/mlcourse.ai/blob/main/poetry.lock) file. So essentially, what's needed is to install Poetry in the Docker image and run `poetry install` to install all dependencies. | closed | 2023-05-17T09:24:57Z | 2024-08-25T07:46:26Z | https://github.com/Yorko/mlcourse.ai/issues/747 | [
"help wanted",
"wontfix"
] | Yorko | 0 |
gevent/gevent | asyncio | 1,952 | testFDPassSeparate fails on OpenIndiana (sunos5) | * gevent version: 22.10.2, installed from sdist during packaging `gevent` for OpenIndiana
* Python version: 3.9.16, part of OpenIndiana
* Operating System: OpenIndiana Hipster (latest)
### Description:
I'm packaging `gevent` for OpenIndiana and found that the following four tests fail due to an OSError:
```
======================================================================
ERROR: testFDPassSeparate (__main__.RecvmsgSCMRightsStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_socketogqslxst.py", line 372, in _tearDown
raise exc
File "/tmp/test_socketogqslxst.py", line 390, in clientRun
test_func()
File "/tmp/test_socketogqslxst.py", line 3527, in _testFDPassSeparate
self.sendmsgToServer([MSG], [(socket.SOL_SOCKET,
File "/tmp/test_socketogqslxst.py", line 2664, in sendmsgToServer
return self.cli_sock.sendmsg(
File "/usr/lib/python3.9/vendor-packages/gevent/_socket3.py", line 399, in sendmsg
return self._sock.sendmsg(buffers, ancdata, flags, address)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: testFDPassSeparateMinSpace (__main__.RecvmsgSCMRightsStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_socketogqslxst.py", line 372, in _tearDown
raise exc
File "/tmp/test_socketogqslxst.py", line 390, in clientRun
test_func()
File "/tmp/test_socketogqslxst.py", line 3554, in _testFDPassSeparateMinSpace
self.sendmsgToServer([MSG], [(socket.SOL_SOCKET,
File "/tmp/test_socketogqslxst.py", line 2664, in sendmsgToServer
return self.cli_sock.sendmsg(
File "/usr/lib/python3.9/vendor-packages/gevent/_socket3.py", line 399, in sendmsg
return self._sock.sendmsg(buffers, ancdata, flags, address)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: testFDPassSeparate (__main__.RecvmsgIntoSCMRightsStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_socketogqslxst.py", line 372, in _tearDown
raise exc
File "/tmp/test_socketogqslxst.py", line 390, in clientRun
test_func()
File "/tmp/test_socketogqslxst.py", line 3527, in _testFDPassSeparate
self.sendmsgToServer([MSG], [(socket.SOL_SOCKET,
File "/tmp/test_socketogqslxst.py", line 2664, in sendmsgToServer
return self.cli_sock.sendmsg(
File "/usr/lib/python3.9/vendor-packages/gevent/_socket3.py", line 399, in sendmsg
return self._sock.sendmsg(buffers, ancdata, flags, address)
OSError: [Errno 22] Invalid argument
======================================================================
ERROR: testFDPassSeparateMinSpace (__main__.RecvmsgIntoSCMRightsStreamTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/test_socketogqslxst.py", line 372, in _tearDown
raise exc
File "/tmp/test_socketogqslxst.py", line 390, in clientRun
test_func()
File "/tmp/test_socketogqslxst.py", line 3554, in _testFDPassSeparateMinSpace
self.sendmsgToServer([MSG], [(socket.SOL_SOCKET,
File "/tmp/test_socketogqslxst.py", line 2664, in sendmsgToServer
return self.cli_sock.sendmsg(
File "/usr/lib/python3.9/vendor-packages/gevent/_socket3.py", line 399, in sendmsg
return self._sock.sendmsg(buffers, ancdata, flags, address)
OSError: [Errno 22] Invalid argument
```
All other `test_socket.py` tests either pass or are skipped.
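For context, the failing tests all boil down to passing a file descriptor over an `AF_UNIX` socket pair via `sendmsg` with `SCM_RIGHTS`; a minimal stand-alone sketch of that operation (my own reduction, not code from the test suite) looks roughly like:

```python
import array
import os
import socket

# Send one file descriptor over a Unix-domain socket pair -- the same operation
# that RecvmsgSCMRightsStreamTest drives through gevent's patched sendmsg().
a, b = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
fd = os.open("/tmp", os.O_RDONLY)

ancdata = [(socket.SOL_SOCKET, socket.SCM_RIGHTS, array.array("i", [fd]))]
a.sendmsg([b"x"], ancdata)  # on OpenIndiana this is where EINVAL is raised, per the tracebacks above

msg, anc, flags, addr = b.recvmsg(1, socket.CMSG_LEN(array.array("i").itemsize))
print(msg, anc)
```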
### What I've run:
Tests | closed | 2023-05-08T13:03:22Z | 2023-07-10T21:02:33Z | https://github.com/gevent/gevent/issues/1952 | [
"Status: not gevent",
"Platform: Unsupported environment"
] | mtelka | 2 |
mckinsey/vizro | pydantic | 732 | [POC] Investigate whether we can get rid of outer container with className to use vizro-bootstrap | Currently, if someone creates a pure Dash app and wants to use the vizro-bootstrap CSS file, they have to wrap their Dash app content inside an outer div with `className="vizro_light"` or `className="vizro_dark"`.
If not provided, it does not take on the styling of our Vizro bootstrap stylesheet. Take this minimal example: https://github.com/mckinsey/vizro/blob/poc/add-example-bootstrap/vizro-core/examples/scratch_dev/app.py#L54-L55
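For reference, the wrapping that is currently required in a plain Dash app looks roughly like the sketch below (the stylesheet path, id, and children are placeholders, not taken from the linked example):

```python
from dash import Dash, html

# vizro-bootstrap's variables only apply inside a container that carries the
# vizro_dark / vizro_light class, so the entire layout has to be nested in it.
app = Dash(__name__, external_stylesheets=["vizro-bootstrap.min.css"])
app.layout = html.Div(
    id="outer-container",
    className="vizro_dark",
    children=[html.H1("My Dash app"), html.P("Content styled by vizro-bootstrap")],
)

if __name__ == "__main__":
    app.run()
```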
@pruthvip15 - could you investigate the following?
- **Why do we need to provide this outer container with className vizro_dark/vizro_light?** I have a rough understanding here, but would be great if we could get to the bottom of it. I assume that this is how the theme switch in bootstrap works e.g. if you take a look at [this example](https://getbootstrap.com/docs/5.3/customize/color-modes/) from the bootstrap docs, they also always wrap it inside an outer container called "light" or "dark". Otherwise it cannot find the variables and relevant scss files.
- **Is there any way we could remove the requirement to define an outer container with a className?** So basically the goal is to be able to remove L54-L55 if possible, and ideally everything should still work as expected.
@pruthvip15 - you can take the branch and dev example from above :)
I think we need this outer container because of how it is defined in our bootstrap theme, so I am not sure whether there is any way for the styles to be picked up without specifying this outer container:
```
.vizro_dark, [data-bs-theme="dark"] {
...
}
``` | closed | 2024-09-23T11:43:23Z | 2024-10-15T11:54:43Z | https://github.com/mckinsey/vizro/issues/732 | [] | huong-li-nguyen | 7 |
bmoscon/cryptofeed | asyncio | 360 | Bitfinex: failing pair normalization strategy | **Describe the bug**
`BCHN` & `BCHABC` are not distinguished by Cryptofeed/Bitfinex.
`LINK` is not recognized by Cryptofeed/Bitfinex.
The current code in `pairs.py` assumes the quote currency can always be identified by the last 3 characters:
```python
normalized = pair[1:-3] + PAIR_SEP + pair[-3:]
```
- This does not allow distinguishing `BCHN` from `BCHABC`:
"tBCHABC:USD"
"tBCHN:USD"
(Worth noting for the normalization effort: Binance, Kraken, Huobi, OKEx, FTX, etc. keep `BCH` for `BCHN`; I would propose doing the same within cryptofeed.)
- It also does not allow finding `LINK`:
"tLINK:USD"
"tLINK:UST"
**To Reproduce**
Trying to query the following coins does not work.
```yaml
BITFINEX:
trades: ['BCH-USDT', 'LINK-USD']
```
PS: link to retrieve Bitfinex pairs:
https://api.bitfinex.com/v2/tickers?symbols=ALL | closed | 2020-12-20T20:04:33Z | 2020-12-23T14:23:58Z | https://github.com/bmoscon/cryptofeed/issues/360 | [
"bug"
] | yohplala | 1 |
vaexio/vaex | data-science | 1,951 | [BUG-REPORT] install error for vaex on python=3.10 in vaex-core with vaexfast.cpp | Thank you for reaching out and helping us improve Vaex!
Before you submit a new Issue, please read through the [documentation](https://docs.vaex.io/en/latest/). Also, make sure you search through the Open and Closed Issues - your problem may already be discussed or addressed.
**Description**
Please provide a clear and concise description of the problem. This should contain all the steps needed to reproduce the problem. A minimal code example that exposes the problem is very appreciated.
**Software information**
- Vaex version (`import vaex; vaex.__version__`):
``` not available - install error ```
- Vaex was installed via: pip / conda-forge / from source
` pip`
- OS:
```
windows OS Name Microsoft Windows 10 Enterprise
Version 10.0.19044 Build 19044
```
**Additional information**
To replicate:
using python 3.10 & cmd.exe
```
python -m venv c:\Data\venv\github-pages
c:\Data\venv\github-pages\Scripts\activate.bat
(github-pages) C:\Data\venv>pip install jupyter vaex==4.8.0
```
Traceback:
```
(github-pages) C:\Data\venv>pip install jupyter vaex==4.8.0
Collecting jupyter
Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB)
Collecting vaex==4.8.0
Using cached vaex-4.8.0-py3-none-any.whl (4.7 kB)
Collecting vaex-server<0.9,>=0.8.1
Using cached vaex_server-0.8.1-py3-none-any.whl (23 kB)
Collecting vaex-core<4.9,>=4.8.0
Using cached vaex-core-4.8.0.tar.gz (2.2 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Collecting vaex-ml<0.18,>=0.17.0
Using cached vaex_ml-0.17.0-py3-none-any.whl (56 kB)
Collecting vaex-viz<0.6,>=0.5.1
Using cached vaex_viz-0.5.1-py3-none-any.whl (19 kB)
Collecting vaex-hdf5<0.13,>=0.12.0
Using cached vaex_hdf5-0.12.0-py3-none-any.whl (16 kB)
Collecting vaex-astro<0.10,>=0.9.0
Using cached vaex_astro-0.9.0-py3-none-any.whl (20 kB)
Collecting vaex-jupyter<0.8,>=0.7.0
Using cached vaex_jupyter-0.7.0-py3-none-any.whl (43 kB)
Collecting notebook
Using cached notebook-6.4.8-py3-none-any.whl (9.9 MB)
Collecting qtconsole
Using cached qtconsole-5.2.2-py3-none-any.whl (120 kB)
Collecting ipywidgets
Using cached ipywidgets-7.6.5-py2.py3-none-any.whl (121 kB)
Collecting jupyter-console
Using cached jupyter_console-6.4.0-py3-none-any.whl (22 kB)
Collecting ipykernel
Using cached ipykernel-6.9.1-py3-none-any.whl (128 kB)
Collecting nbconvert
Using cached nbconvert-6.4.2-py3-none-any.whl (558 kB)
Collecting astropy
Using cached astropy-5.0.1-cp310-cp310-win_amd64.whl (6.4 MB)
Collecting pyarrow>=3.0
Using cached pyarrow-7.0.0-cp310-cp310-win_amd64.whl (16.1 MB)
Collecting rich
Using cached rich-11.2.0-py3-none-any.whl (217 kB)
Collecting progressbar2
Using cached progressbar2-4.0.0-py2.py3-none-any.whl (26 kB)
Collecting future>=0.15.2
Using cached future-0.18.2.tar.gz (829 kB)
Collecting frozendict!=2.2.0
Using cached frozendict-2.3.0-cp310-cp310-win_amd64.whl (34 kB)
Collecting pandas
Using cached pandas-1.4.1-cp310-cp310-win_amd64.whl (10.6 MB)
Collecting filelock
Using cached filelock-3.6.0-py3-none-any.whl (10.0 kB)
Collecting requests
Using cached requests-2.27.1-py2.py3-none-any.whl (63 kB)
Collecting dask
Using cached dask-2022.2.1-py3-none-any.whl (1.1 MB)
Collecting nest-asyncio>=1.3.3
Using cached nest_asyncio-1.5.4-py3-none-any.whl (5.1 kB)
Collecting blake3
Using cached blake3-0.3.1-cp310-none-win_amd64.whl (193 kB)
Collecting tabulate>=0.8.3
Using cached tabulate-0.8.9-py3-none-any.whl (25 kB)
Collecting aplus
Using cached aplus-0.11.0.tar.gz (3.7 kB)
Collecting numpy>=1.16
Using cached numpy-1.22.2-cp310-cp310-win_amd64.whl (14.7 MB)
Collecting pydantic>=1.8.0
Using cached pydantic-1.9.0-cp310-cp310-win_amd64.whl (2.1 MB)
Collecting six
Using cached six-1.16.0-py2.py3-none-any.whl (11 kB)
Collecting cloudpickle
Using cached cloudpickle-2.0.0-py3-none-any.whl (25 kB)
Collecting pyyaml
Using cached PyYAML-6.0-cp310-cp310-win_amd64.whl (151 kB)
Collecting typing-extensions>=3.7.4.3
Using cached typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting h5py>=2.9
Using cached h5py-3.6.0-cp310-cp310-win_amd64.whl (2.8 MB)
Collecting xarray
Using cached xarray-0.21.1-py3-none-any.whl (865 kB)
Collecting bqplot>=0.10.1
Using cached bqplot-0.12.33-py2.py3-none-any.whl (1.2 MB)
Collecting ipyvolume>=0.4
Using cached ipyvolume-0.5.2-py2.py3-none-any.whl (2.9 MB)
Collecting ipyleaflet
Using cached ipyleaflet-0.15.0-py2.py3-none-any.whl (3.3 MB)
Collecting ipympl
Using cached ipympl-0.8.8-py2.py3-none-any.whl (507 kB)
Collecting ipyvuetify<2,>=1.2.2
Using cached ipyvuetify-1.8.2-1-py2.py3-none-any.whl (11.7 MB)
Collecting traitlets>=4.3.0
Using cached traitlets-5.1.1-py3-none-any.whl (102 kB)
Collecting traittypes>=0.0.6
Using cached traittypes-0.2.1-py2.py3-none-any.whl (8.6 kB)
Collecting ipywebrtc
Using cached ipywebrtc-0.6.0-py2.py3-none-any.whl (260 kB)
Collecting Pillow
Using cached Pillow-9.0.1-cp310-cp310-win_amd64.whl (3.2 MB)
Collecting pythreejs>=1.0.0
Using cached pythreejs-2.3.0-py2.py3-none-any.whl (3.4 MB)
Collecting ipyvue<2,>=1.5
Using cached ipyvue-1.7.0-py2.py3-none-any.whl (2.7 MB)
Collecting jupyterlab-widgets>=1.0.0
Using cached jupyterlab_widgets-1.0.2-py3-none-any.whl (243 kB)
Collecting ipython>=4.0.0
Using cached ipython-8.1.0-py3-none-any.whl (750 kB)
Collecting nbformat>=4.2.0
Using cached nbformat-5.1.3-py3-none-any.whl (178 kB)
Collecting ipython-genutils~=0.2.0
Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB)
Collecting widgetsnbextension~=3.5.0
Using cached widgetsnbextension-3.5.2-py2.py3-none-any.whl (1.6 MB)
Collecting debugpy<2.0,>=1.0.0
Using cached debugpy-1.5.1-cp310-cp310-win_amd64.whl (4.4 MB)
Collecting jupyter-client<8.0
Using cached jupyter_client-7.1.2-py3-none-any.whl (130 kB)
Collecting matplotlib-inline<0.2.0,>=0.1.0
Using cached matplotlib_inline-0.1.3-py3-none-any.whl (8.2 kB)
Collecting tornado<7.0,>=4.2
Using cached tornado-6.1.tar.gz (497 kB)
Collecting stack-data
Using cached stack_data-0.2.0-py3-none-any.whl (21 kB)
Collecting prompt-toolkit!=3.0.0,!=3.0.1,<3.1.0,>=2.0.0
Using cached prompt_toolkit-3.0.28-py3-none-any.whl (380 kB)
Collecting backcall
Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: setuptools>=18.5 in c:\data\venv\github-pages\lib\site-packages (from ipython>=4.0.0->ipywidgets->jupyter) (58.1.0)
Collecting pygments>=2.4.0
Using cached Pygments-2.11.2-py3-none-any.whl (1.1 MB)
Collecting colorama
Using cached colorama-0.4.4-py2.py3-none-any.whl (16 kB)
Collecting pickleshare
Using cached pickleshare-0.7.5-py2.py3-none-any.whl (6.9 kB)
Collecting jedi>=0.16
Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB)
Collecting decorator
Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB)
Collecting parso<0.9.0,>=0.8.0
Using cached parso-0.8.3-py2.py3-none-any.whl (100 kB)
Collecting jupyter-core>=4.6.0
Using cached jupyter_core-4.9.2-py3-none-any.whl (86 kB)
Collecting python-dateutil>=2.1
Using cached python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting entrypoints
Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Collecting pyzmq>=13
Using cached pyzmq-22.3.0-cp310-cp310-win_amd64.whl (1.1 MB)
Collecting pywin32>=1.0
Using cached pywin32-303-cp310-cp310-win_amd64.whl (9.2 MB)
Collecting jsonschema!=2.5.0,>=2.4
Using cached jsonschema-4.4.0-py3-none-any.whl (72 kB)
Collecting pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0
Using cached pyrsistent-0.18.1-cp310-cp310-win_amd64.whl (61 kB)
Collecting attrs>=17.4.0
Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB)
Collecting pytz>=2020.1
Using cached pytz-2021.3-py2.py3-none-any.whl (503 kB)
Collecting wcwidth
Using cached wcwidth-0.2.5-py2.py3-none-any.whl (30 kB)
Collecting ipydatawidgets>=1.1.1
Using cached ipydatawidgets-4.2.0-py2.py3-none-any.whl (275 kB)
Collecting numba
Using cached numba-0.55.1-cp310-cp310-win_amd64.whl (2.4 MB)
Collecting jinja2
Using cached Jinja2-3.0.3-py3-none-any.whl (133 kB)
Collecting cachetools
Using cached cachetools-5.0.0-py3-none-any.whl (9.1 kB)
Collecting fastapi
Using cached fastapi-0.74.1-py3-none-any.whl (53 kB)
Collecting uvicorn[standard]
Using cached uvicorn-0.17.5-py3-none-any.whl (53 kB)
Collecting matplotlib>=1.3.1
Using cached matplotlib-3.5.1-cp310-cp310-win_amd64.whl (7.2 MB)
Collecting kiwisolver>=1.0.1
Using cached kiwisolver-1.3.2-cp310-cp310-win_amd64.whl (52 kB)
Collecting pyparsing>=2.2.1
Using cached pyparsing-3.0.7-py3-none-any.whl (98 kB)
Collecting fonttools>=4.22.0
Using cached fonttools-4.29.1-py3-none-any.whl (895 kB)
Collecting cycler>=0.10
Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB)
Collecting packaging>=20.0
Using cached packaging-21.3-py3-none-any.whl (40 kB)
Collecting Send2Trash>=1.8.0
Using cached Send2Trash-1.8.0-py3-none-any.whl (18 kB)
Collecting argon2-cffi
Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB)
Collecting prometheus-client
Using cached prometheus_client-0.13.1-py3-none-any.whl (57 kB)
Collecting terminado>=0.8.3
Using cached terminado-0.13.1-py3-none-any.whl (14 kB)
Collecting pywinpty>=1.1.0
Using cached pywinpty-2.0.2-cp310-none-win_amd64.whl (1.4 MB)
Collecting argon2-cffi-bindings
Using cached argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB)
Collecting cffi>=1.0.1
Using cached cffi-1.15.0-cp310-cp310-win_amd64.whl (180 kB)
Collecting pycparser
Using cached pycparser-2.21-py2.py3-none-any.whl (118 kB)
Collecting pyerfa>=2.0
Using cached pyerfa-2.0.0.1-cp310-cp310-win_amd64.whl (366 kB)
Collecting toolz>=0.8.2
Using cached toolz-0.11.2-py3-none-any.whl (55 kB)
Collecting partd>=0.3.10
Using cached partd-1.2.0-py3-none-any.whl (19 kB)
Collecting fsspec>=0.6.0
Using cached fsspec-2022.2.0-py3-none-any.whl (134 kB)
Collecting locket
Using cached locket-0.2.1-py2.py3-none-any.whl (4.1 kB)
Collecting starlette==0.17.1
Using cached starlette-0.17.1-py3-none-any.whl (58 kB)
Collecting anyio<4,>=3.0.0
Using cached anyio-3.5.0-py3-none-any.whl (79 kB)
Collecting sniffio>=1.1
Using cached sniffio-1.2.0-py3-none-any.whl (10 kB)
Collecting idna>=2.8
Using cached idna-3.3-py3-none-any.whl (61 kB)
Collecting xyzservices>=2021.8.1
Using cached xyzservices-2022.2.0-py3-none-any.whl (35 kB)
Collecting MarkupSafe>=2.0
Using cached MarkupSafe-2.1.0-cp310-cp310-win_amd64.whl (16 kB)
Collecting testpath
Using cached testpath-0.6.0-py3-none-any.whl (83 kB)
Collecting pandocfilters>=1.4.1
Using cached pandocfilters-1.5.0-py2.py3-none-any.whl (8.7 kB)
Collecting nbclient<0.6.0,>=0.5.0
Using cached nbclient-0.5.11-py3-none-any.whl (71 kB)
Collecting defusedxml
Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB)
Collecting mistune<2,>=0.8.1
Using cached mistune-0.8.4-py2.py3-none-any.whl (16 kB)
Collecting bleach
Using cached bleach-4.1.0-py2.py3-none-any.whl (157 kB)
Collecting jupyterlab-pygments
Using cached jupyterlab_pygments-0.1.2-py2.py3-none-any.whl (4.6 kB)
Collecting webencodings
Using cached webencodings-0.5.1-py2.py3-none-any.whl (11 kB)
Collecting numpy>=1.16
Using cached numpy-1.21.5-cp310-cp310-win_amd64.whl (14.0 MB)
Collecting llvmlite<0.39,>=0.38.0rc1
Using cached llvmlite-0.38.0-cp310-cp310-win_amd64.whl (23.2 MB)
Collecting python-utils>=3.0.0
Using cached python_utils-3.1.0-py2.py3-none-any.whl (19 kB)
Collecting qtpy
Using cached QtPy-2.0.1-py3-none-any.whl (65 kB)
Collecting certifi>=2017.4.17
Using cached certifi-2021.10.8-py2.py3-none-any.whl (149 kB)
Collecting urllib3<1.27,>=1.21.1
Using cached urllib3-1.26.8-py2.py3-none-any.whl (138 kB)
Collecting charset-normalizer~=2.0.0
Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB)
Collecting commonmark<0.10.0,>=0.9.0
Using cached commonmark-0.9.1-py2.py3-none-any.whl (51 kB)
Collecting executing
Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB)
Collecting pure-eval
Using cached pure_eval-0.2.2-py3-none-any.whl (11 kB)
Collecting asttokens
Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB)
Collecting asgiref>=3.4.0
Using cached asgiref-3.5.0-py3-none-any.whl (22 kB)
Collecting click>=7.0
Using cached click-8.0.4-py3-none-any.whl (97 kB)
Collecting h11>=0.8
Using cached h11-0.13.0-py3-none-any.whl (58 kB)
Collecting websockets>=10.0
Using cached websockets-10.2-cp310-cp310-win_amd64.whl (97 kB)
Collecting watchgod>=0.6
Using cached watchgod-0.7-py3-none-any.whl (11 kB)
Collecting httptools<0.4.0,>=0.2.0
Using cached httptools-0.3.0-cp310-cp310-win_amd64.whl (141 kB)
Collecting python-dotenv>=0.13
Using cached python_dotenv-0.19.2-py2.py3-none-any.whl (17 kB)
Using legacy 'setup.py install' for future, since package 'wheel' is not installed.
Using legacy 'setup.py install' for tornado, since package 'wheel' is not installed.
Using legacy 'setup.py install' for aplus, since package 'wheel' is not installed.
Building wheels for collected packages: vaex-core
Building wheel for vaex-core (PEP 517) ... error
ERROR: Command errored out with exit status 1:
command: 'c:\Data\venv\github-pages\Scripts\python.exe' 'c:\Data\venv\github-pages\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\madsenbj\AppData\Local\Temp\tmpfkcd33uh'
cwd: C:\Users\madsenbj\AppData\Local\Temp\pip-install-pmndbbpw\vaex-core_43eb4a13698048478681800aa74049df
Complete output (260 lines):
setup.py:4: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses
import imp
running bdist_wheel
running build
running build_py
creating build
creating build\lib.win-amd64-3.10
creating build\lib.win-amd64-3.10\vaex
copying vaex\agg.py -> build\lib.win-amd64-3.10\vaex
copying vaex\array_types.py -> build\lib.win-amd64-3.10\vaex
copying vaex\asyncio.py -> build\lib.win-amd64-3.10\vaex
copying vaex\benchmark.py -> build\lib.win-amd64-3.10\vaex
copying vaex\cache.py -> build\lib.win-amd64-3.10\vaex
copying vaex\column.py -> build\lib.win-amd64-3.10\vaex
copying vaex\config.py -> build\lib.win-amd64-3.10\vaex
copying vaex\convert.py -> build\lib.win-amd64-3.10\vaex
copying vaex\cpu.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataframe.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataframe_protocol.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataset.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataset_misc.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataset_mmap.py -> build\lib.win-amd64-3.10\vaex
copying vaex\dataset_utils.py -> build\lib.win-amd64-3.10\vaex
copying vaex\datatype.py -> build\lib.win-amd64-3.10\vaex
copying vaex\datatype_test.py -> build\lib.win-amd64-3.10\vaex
copying vaex\delayed.py -> build\lib.win-amd64-3.10\vaex
copying vaex\docstrings.py -> build\lib.win-amd64-3.10\vaex
copying vaex\encoding.py -> build\lib.win-amd64-3.10\vaex
copying vaex\events.py -> build\lib.win-amd64-3.10\vaex
copying vaex\execution.py -> build\lib.win-amd64-3.10\vaex
copying vaex\export.py -> build\lib.win-amd64-3.10\vaex
copying vaex\expression.py -> build\lib.win-amd64-3.10\vaex
copying vaex\expresso.py -> build\lib.win-amd64-3.10\vaex
copying vaex\formatting.py -> build\lib.win-amd64-3.10\vaex
copying vaex\functions.py -> build\lib.win-amd64-3.10\vaex
copying vaex\geo.py -> build\lib.win-amd64-3.10\vaex
copying vaex\grids.py -> build\lib.win-amd64-3.10\vaex
copying vaex\groupby.py -> build\lib.win-amd64-3.10\vaex
copying vaex\hash.py -> build\lib.win-amd64-3.10\vaex
copying vaex\image.py -> build\lib.win-amd64-3.10\vaex
copying vaex\itertools.py -> build\lib.win-amd64-3.10\vaex
copying vaex\join.py -> build\lib.win-amd64-3.10\vaex
copying vaex\json.py -> build\lib.win-amd64-3.10\vaex
copying vaex\kld.py -> build\lib.win-amd64-3.10\vaex
copying vaex\legacy.py -> build\lib.win-amd64-3.10\vaex
copying vaex\logging.py -> build\lib.win-amd64-3.10\vaex
copying vaex\memory.py -> build\lib.win-amd64-3.10\vaex
copying vaex\meta.py -> build\lib.win-amd64-3.10\vaex
copying vaex\metal.py -> build\lib.win-amd64-3.10\vaex
copying vaex\misc_cmdline.py -> build\lib.win-amd64-3.10\vaex
copying vaex\multiprocessing.py -> build\lib.win-amd64-3.10\vaex
copying vaex\multithreading.py -> build\lib.win-amd64-3.10\vaex
copying vaex\parallelize.py -> build\lib.win-amd64-3.10\vaex
copying vaex\progress.py -> build\lib.win-amd64-3.10\vaex
copying vaex\promise.py -> build\lib.win-amd64-3.10\vaex
copying vaex\registry.py -> build\lib.win-amd64-3.10\vaex
copying vaex\rolling.py -> build\lib.win-amd64-3.10\vaex
copying vaex\samp.py -> build\lib.win-amd64-3.10\vaex
copying vaex\schema.py -> build\lib.win-amd64-3.10\vaex
copying vaex\scopes.py -> build\lib.win-amd64-3.10\vaex
copying vaex\selections.py -> build\lib.win-amd64-3.10\vaex
copying vaex\serialize.py -> build\lib.win-amd64-3.10\vaex
copying vaex\settings.py -> build\lib.win-amd64-3.10\vaex
copying vaex\shift.py -> build\lib.win-amd64-3.10\vaex
copying vaex\stat.py -> build\lib.win-amd64-3.10\vaex
copying vaex\strings.py -> build\lib.win-amd64-3.10\vaex
copying vaex\struct.py -> build\lib.win-amd64-3.10\vaex
copying vaex\tasks.py -> build\lib.win-amd64-3.10\vaex
copying vaex\utils.py -> build\lib.win-amd64-3.10\vaex
copying vaex\version.py -> build\lib.win-amd64-3.10\vaex
copying vaex\_version.py -> build\lib.win-amd64-3.10\vaex
copying vaex\__init__.py -> build\lib.win-amd64-3.10\vaex
copying vaex\__main__.py -> build\lib.win-amd64-3.10\vaex
package init file 'vaex\arrow\__init__.py' not found (or not a regular file)
creating build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\convert.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\dataset.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\numpy_dispatch.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\opener.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\utils.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\utils_test.py -> build\lib.win-amd64-3.10\vaex\arrow
copying vaex\arrow\_version.py -> build\lib.win-amd64-3.10\vaex\arrow
creating build\lib.win-amd64-3.10\vaex\core
copying vaex\core\_version.py -> build\lib.win-amd64-3.10\vaex\core
copying vaex\core\__init__.py -> build\lib.win-amd64-3.10\vaex\core
creating build\lib.win-amd64-3.10\vaex\file
copying vaex\file\asyncio.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\cache.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\column.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\gcs.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\s3.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\s3arrow.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\s3fs.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\s3_test.py -> build\lib.win-amd64-3.10\vaex\file
copying vaex\file\__init__.py -> build\lib.win-amd64-3.10\vaex\file
creating build\lib.win-amd64-3.10\vaex\test
copying vaex\test\all.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\cmodule.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\dataset.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\expresso.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\misc.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\plot.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\ui.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\__init__.py -> build\lib.win-amd64-3.10\vaex\test
copying vaex\test\__main__.py -> build\lib.win-amd64-3.10\vaex\test
creating build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\bokeh.py -> build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\common.py -> build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\ipyvolume.py -> build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\jprops.py -> build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\readcol.py -> build\lib.win-amd64-3.10\vaex\ext
copying vaex\ext\__init__.py -> build\lib.win-amd64-3.10\vaex\ext
creating build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\expressions.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\ordereddict.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\pandawrap.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\parallelize.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\progressbar.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\samp.py -> build\lib.win-amd64-3.10\vaex\misc
copying vaex\misc\__init__.py -> build\lib.win-amd64-3.10\vaex\misc
creating build\lib.win-amd64-3.10\vaex\datasets
copying vaex\datasets\__init__.py -> build\lib.win-amd64-3.10\vaex\datasets
running egg_info
writing vaex_core.egg-info\PKG-INFO
writing dependency_links to vaex_core.egg-info\dependency_links.txt
writing entry points to vaex_core.egg-info\entry_points.txt
writing requirements to vaex_core.egg-info\requires.txt
writing top-level names to vaex_core.egg-info\top_level.txt
reading manifest file 'vaex_core.egg-info\SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.c' under directory 'vendor'
warning: no files found matching '*.h' under directory 'src'
warning: no files found matching '*.c' under directory 'src'
adding license file 'LICENSE.txt'
writing manifest file 'vaex_core.egg-info\SOURCES.txt'
copying vaex\datasets\iris.hdf5 -> build\lib.win-amd64-3.10\vaex\datasets
copying vaex\datasets\titanic.hdf5 -> build\lib.win-amd64-3.10\vaex\datasets
running build_ext
building 'vaex.vaexfast' extension
creating build\temp.win-amd64-3.10
creating build\temp.win-amd64-3.10\Release
creating build\temp.win-amd64-3.10\Release\src
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\madsenbj\AppData\Local\Temp\pip-build-env-it205hpj\overlay\Lib\site-packages\numpy\core\include -Ic:\Data\venv\github-pages\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\vaexfast.cpp /Fobuild\temp.win-amd64-3.10\Release\src\vaexfast.obj /EHsc
vaexfast.cpp
src\vaexfast.cpp(18): warning C4005: 'INFINITY': macro redefinition
C:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt\corecrt_math.h(88): note: see previous definition of 'INFINITY'
C:\Users\madsenbj\AppData\Local\Temp\pip-build-env-it205hpj\overlay\Lib\site-packages\numpy\core\include\numpy\npy_1_7_deprecated_api.h(14) : Warning Msg: Using deprecated NumPy API, disable it with #define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION
src\vaexfast.cpp(201): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(532): warning C4244: 'argument': conversion from '__int64' to 'const int', possible loss of data
src\vaexfast.cpp(956): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(1798): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(1798): warning C4244: 'argument': conversion from '__int64' to 'int', possible loss of data
src\vaexfast.cpp(64): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(198): note: see reference to function template instantiation 'void object_to_numpy1d_nocopy<double>(T *&,PyObject *,__int64 &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(88): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(280): note: see reference to function template instantiation 'void object_to_numpy1d_nocopy_endian<double>(T *&,PyObject *,__int64 &,bool &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(105): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(644): note: see reference to function template instantiation 'void object_to_numpy2d_nocopy<double>(T *&,PyObject *,int &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(108): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(667): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(775): note: see reference to function template instantiation 'void histogram2d_f4<__int64>(const float *__restrict const ,const float *__restrict const ,const float *const ,const __int64,bool,bool,bool,Tout *__restrict const ,const int,const int,const double,const double,const double,const double,const __int64,const __int64)' being compiled
with
[
Tout=__int64
]
src\vaexfast.cpp(667): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(668): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(668): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(669): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(669): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(670): warning C4244: 'initializing': conversion from 'const double' to 'float', possible loss of data
src\vaexfast.cpp(670): warning C4244: 'initializing': conversion from 'const double' to 'const float', possible loss of data
src\vaexfast.cpp(671): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
src\vaexfast.cpp(671): warning C4244: 'initializing': conversion from 'double' to 'const float', possible loss of data
src\vaexfast.cpp(672): warning C4244: 'initializing': conversion from 'double' to 'float', possible loss of data
src\vaexfast.cpp(672): warning C4244: 'initializing': conversion from 'double' to 'const float', possible loss of data
src\vaexfast.cpp(133): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(887): note: see reference to function template instantiation 'void object_to_numpy3d_nocopy<double>(T *&,PyObject *,int &,int &,int &,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(136): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(139): warning C4244: 'initializing': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(174): warning C4244: '=': conversion from 'npy_intp' to 'int', possible loss of data
src\vaexfast.cpp(983): note: see reference to function template instantiation 'void object_to_numpyNd_nocopy<double>(T *&,PyObject *,int,int &,int *,__int64 *,int)' being compiled
with
[
T=double
]
src\vaexfast.cpp(1335): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(2072): note: see reference to function template instantiation 'PyObject *statisticNd_<double,NPY_DOUBLE>(PyObject *,PyObject *)' being compiled
src\vaexfast.cpp(1338): warning C4244: '=': conversion from 'Py_ssize_t' to 'int', possible loss of data
src\vaexfast.cpp(1149): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1271): note: see reference to function template instantiation 'void statisticNd<T,op_add1<T,double,endian>,endian>(const T *__restrict const [],const T *__restrict const [],__int64,const int,const int,double *__restrict const ,const __int64 *__restrict const ,const int *__restrict const ,const T *__restrict const ,const T *__restrict const ,int)' being compiled
with
[
T=float,
endian=functor_double_to_native
]
src\vaexfast.cpp(1308): note: see reference to function template instantiation 'void statisticNd_wrap_template_endian<T,functor_double_to_native>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],int,int)' being compiled
with
[
T=float
]
src\vaexfast.cpp(1402): note: see reference to function template instantiation 'void statisticNd_wrap_template<T>(const T *const [],const T *const [],__int64,int,int,double *,__int64 [],int [],T [],T [],bool,int,int)' being compiled
with
[
T=float
]
src\vaexfast.cpp(2073): note: see reference to function template instantiation 'PyObject *statisticNd_<float,NPY_FLOAT>(PyObject *,PyObject *)' being compiled
src\vaexfast.cpp(1178): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1198): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
src\vaexfast.cpp(1216): warning C4244: 'initializing': conversion from 'double' to 'T', possible loss of data
with
[
T=float
]
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\bin\HostX86\x64\link.exe" /nologo /INCREMENTAL:NO /LTCG /DLL /MANIFEST:EMBED,ID=2 /MANIFESTUAC:NO /LIBPATH:c:\Data\venv\github-pages\libs /LIBPATH:C:\Users\madsenbj\AppData\Local\Programs\Python\Python310\libs /LIBPATH:C:\Users\madsenbj\AppData\Local\Programs\Python\Python310 /LIBPATH:c:\Data\venv\github-pages\PCbuild\amd64 "/LIBPATH:C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\lib\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\ucrt\x64" "/LIBPATH:C:\Program Files (x86)\Windows Kits\10\lib\10.0.19041.0\um\x64" /EXPORT:PyInit_vaexfast build\temp.win-amd64-3.10\Release\src\vaexfast.obj /OUT:build\lib.win-amd64-3.10\vaex\vaexfast.cp310-win_amd64.pyd /IMPLIB:build\temp.win-amd64-3.10\Release\src\vaexfast.cp310-win_amd64.lib
Creating library build\temp.win-amd64-3.10\Release\src\vaexfast.cp310-win_amd64.lib and object build\temp.win-amd64-3.10\Release\src\vaexfast.cp310-win_amd64.exp
Generating code
Finished generating code
building 'vaex.superstrings' extension
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\madsenbj\AppData\Local\Temp\pip-build-env-it205hpj\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -Ic:\Data\venv\github-pages\include -Ic:\Data\venv\github-pages\Library\include -Ivendor\pcre\Library\include -Ic:\Data\venv\github-pages\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files
(x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files
(x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\string_utils.cpp /Fobuild\temp.win-amd64-3.10\Release\src\string_utils.obj /EHsc
string_utils.cpp
C:\Users\madsenbj\AppData\Local\Temp\pip-install-pmndbbpw\vaex-core_43eb4a13698048478681800aa74049df\src\string_utils.hpp(208): warning C4244: '=': conversion from 'char32_t' to 'char', possible loss of data
"C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\madsenbj\AppData\Local\Temp\pip-build-env-it205hpj\overlay\Lib\site-packages\numpy\core\include -Ivendor/pybind11/include -Ivendor/pybind11/include -Ivendor/string-view-lite/include -Ivendor/boost -Ic:\Data\venv\github-pages\include -Ic:\Data\venv\github-pages\Library\include -Ivendor\pcre\Library\include -Ic:\Data\venv\github-pages\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\include -IC:\Users\madsenbj\AppData\Local\Programs\Python\Python310\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\include" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files
(x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files
(x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc\strings.cpp /Fobuild\temp.win-amd64-3.10\Release\src\strings.obj /EHsc
strings.cpp
vendor/pybind11/include\pybind11/numpy.h(35): error C2065: 'ssize_t': undeclared identifier
vendor/pybind11/include\pybind11/numpy.h(35): error C2338: ssize_t != Py_intptr_t
C:\Users\madsenbj\AppData\Local\Temp\pip-install-pmndbbpw\vaex-core_43eb4a13698048478681800aa74049df\src\string_utils.hpp(208): warning C4244: '=': conversion from 'char32_t' to 'char', possible loss of data
vendor\pcre\Library\include\pcrecpp.h(701): warning C4251: 'pcrecpp::RE::pattern_': class 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' needs to have dll-interface to be used by clients of class 'pcrecpp::RE'
C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\VC\Tools\MSVC\14.29.30037\include\xstring(4905): note: see declaration of 'std::basic_string<char,std::char_traits<char>,std::allocator<char>>' src\strings.cpp(273): warning C4018: '>': signed/unsigned mismatch
src\strings.cpp(282): warning C4018: '>': signed/unsigned mismatch
error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\BuildTools\\VC\\Tools\\MSVC\\14.29.30037\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
----------------------------------------
ERROR: Failed building wheel for vaex-core
Failed to build vaex-core
ERROR: Could not build wheels for vaex-core which use PEP 517 and cannot be installed directly
WARNING: You are using pip version 21.2.4; however, version 22.0.3 is available.
You should consider upgrading via the 'c:\Data\venv\github-pages\Scripts\python.exe -m pip install --upgrade pip' command.
```
| closed | 2022-02-28T17:50:42Z | 2024-06-14T00:59:51Z | https://github.com/vaexio/vaex/issues/1951 | [] | root-11 | 13 |
Miserlou/Zappa | flask | 1,389 | can't make 2 events in same s3 bucket | ## Expected Behavior
Created s3:ObjectCreated:* event schedule for {name}!
## Actual Behavior
s3:ObjectCreated:* event schedule for {name} already exists - Nothing to do here.
## Possible Fix
Allow registering multiple events on the same S3 bucket, e.g. distinguished by key prefix filters; a sketch of the S3 notification configuration this would need to produce is shown below.
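For illustration only (this is plain boto3, not Zappa internals, and the Lambda ARNs are placeholders): S3 itself accepts several Lambda notifications on one bucket as long as their prefix filters do not overlap, which is what the fix would need to generate:

```python
import boto3

s3 = boto3.client("s3")

# One bucket, two ObjectCreated notifications distinguished only by key prefix.
s3.put_bucket_notification_configuration(
    Bucket="bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "Id": "service1-created",
                "LambdaFunctionArn": "arn:aws:lambda:region:account:function:service1",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "service1/"}]}},
            },
            {
                "Id": "service2-created",
                "LambdaFunctionArn": "arn:aws:lambda:region:account:function:service2",  # placeholder
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {"Key": {"FilterRules": [{"Name": "prefix", "Value": "service2/"}]}},
            },
        ]
    },
)
```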
## Your Environment
The Zappa settings file looks like this:
```json
{
"service1": {
"events": [
{
"function": "name",
"event_source": {
"arn": "arn:aws:s3:::bucket",
"key_filters": [
{
"type": "prefix",
"value": "service1/"
}
],
"events": [
"s3:ObjectCreated:*"
]
}
}
]
},
"service2": {
"events": [
{
"function": "name",
"event_source": {
"arn": "arn:aws:s3:::bucket",
"key_filters": [
{
"type": "prefix",
"value": "service2/"
}
],
"events": [
"s3:ObjectCreated:*"
]
}
}
]
}
}
``` | closed | 2018-02-12T06:55:49Z | 2018-02-12T17:03:35Z | https://github.com/Miserlou/Zappa/issues/1389 | [] | sshkim | 1 |
microsoft/qlib | deep-learning | 1,115 | backtest error using gru | here is the problem, i am using GRU for prediction,
here is my backtest config
```python
###################################
# prediction, backtest & analysis
###################################
port_analysis_config = {
"executor": {
"class": "SimulatorExecutor",
"module_path": "qlib.backtest.executor",
"kwargs": {
"time_per_step": "day",
"generate_portfolio_metrics": True,
},
},
"strategy": {
"class": "TopkDropoutStrategy",
"module_path": "qlib.contrib.strategy.signal_strategy",
"kwargs": {
"model": model,
"dataset": dataset,
"topk":50,
"n_drop": 5,
},
},
"backtest": {
"start_time": "2022-01-01",
"end_time": '2022-05-20',
"account": 100000000,
"benchmark": benchmark,
"exchange_kwargs": {
"freq": "day",
"limit_threshold": 0.0,
"deal_price": "close",
"open_cost": 0.0005,
"close_cost": 0.0015,
"min_cost": 5,
},
},
}
# backtest and analysis
with R.start(experiment_name="backtest_analysis"):
recorder = R.get_recorder(recorder_id=rid, experiment_name="GRU")
model = recorder.load_object("trained_model")
# prediction
recorder = R.get_recorder()
ba_rid = recorder.id
sr = SignalRecord(model, dataset, recorder)
sr.generate()
# backtest & analysis
par = PortAnaRecord(recorder, port_analysis_config, "day")
par.generate()
```
Then the `ffr` indicator reports NaN, which is very strange:
```
'The following are analysis results of benchmark return(1day).'
risk
mean -0.000600
std 0.011434
annualized_return -0.142707
information_ratio -0.809016
max_drawdown -0.140281
'The following are analysis results of the excess return without cost(1day).'
risk
mean 0.000600
std 0.011434
annualized_return 0.142707
information_ratio 0.809016
max_drawdown -0.069261
'The following are analysis results of the excess return with cost(1day).'
risk
mean 0.000600
std 0.011434
annualized_return 0.142707
information_ratio 0.809016
max_drawdown -0.069261
'The following are analysis results of indicators(1day).'
value
ffr NaN
pa NaN
pos NaN
```
Can someone answer this question?
| closed | 2022-05-28T12:26:06Z | 2024-07-09T14:47:37Z | https://github.com/microsoft/qlib/issues/1115 | [
"question"
] | LiuHao-THU | 2 |
strawberry-graphql/strawberry | fastapi | 3,081 | Missing `starlite` docs | `starlite` integration was added in https://github.com/strawberry-graphql/strawberry/pull/2391 in which docs were added.
The doc file does exist - https://github.com/strawberry-graphql/strawberry/blob/main/docs/integrations/starlite.md
...but on the website there is no mention of it under Integrations:

...and searching for `starlite` in the docs also brings up no results. | closed | 2023-09-07T22:34:23Z | 2025-03-20T15:56:21Z | https://github.com/strawberry-graphql/strawberry/issues/3081 | [] | dhirschfeld | 5 |
miguelgrinberg/flasky | flask | 416 | ch | closed | 2019-03-29T14:08:54Z | 2019-03-29T14:09:15Z | https://github.com/miguelgrinberg/flasky/issues/416 | [] | lyhanburger | 1 |
|
marshmallow-code/apispec | rest-api | 444 | What to do about multiple nested schemas warning? | Originally reported here: https://github.com/Nobatek/flask-rest-api/issues/57 - see there for more input
--
I'm using marshmallow v3-rc5 and using [two-way nesting](https://marshmallow.readthedocs.io/en/latest/nesting.html#two-way-nesting)
Using this technique I get the following error if I attempt to use something like `@blp.response(CreatorSchema(many=True, exclude=('follower_count',)))`:
```
/Users/cyber/.virtualenvs/luve-pim5aQIP/lib/python3.7/site-packages/apispec/ext/marshmallow/common.py:143:
UserWarning: Multiple schemas resolved to the name Creator. The name has been modified. Either manually add each of the schemas with a different name or provide a custom schema_name_resolver.
```
and see multiple versions of the schema in swagger (Creator, Creator1).
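One direction the warning itself points at is a custom `schema_name_resolver`; a rough, untested sketch of what that could look like (the naming scheme is just an example, not an apispec recommendation):

```python
from apispec import APISpec
from apispec.ext.marshmallow import MarshmallowPlugin

def schema_name_resolver(schema):
    # The resolver may receive a Schema class or an instance (e.g. one created
    # with exclude=...); give instances that exclude fields their own name.
    cls = schema if isinstance(schema, type) else type(schema)
    name = cls.__name__
    if name.endswith("Schema") and name != "Schema":
        name = name[: -len("Schema")]
    excluded = getattr(schema, "exclude", None)
    if excluded:
        name += "Without" + "".join(field.title() for field in sorted(excluded))
    return name

spec = APISpec(
    title="My API",
    version="1.0.0",
    openapi_version="3.0.2",
    plugins=[MarshmallowPlugin(schema_name_resolver=schema_name_resolver)],
)
```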
If I remove the `exclude` arg to my schemas and make new schemas then everything works perfectly. Something about `exclude` causes it to think there are multiple versions of the schema and it all explodes. | open | 2019-05-03T14:29:42Z | 2020-07-20T09:43:38Z | https://github.com/marshmallow-code/apispec/issues/444 | [
"question"
] | revmischa | 9 |
ufoym/deepo | jupyter | 116 | [WARNING]: Empty continuation lines will become errors in a future release. | I'm building an edited version of the tensorflow-py36-cuda90 Dockerfile, in which I pip install some additional packages.
```
# ==================================================================
# module list
# ------------------------------------------------------------------
# python 3.6 (apt)
# tensorflow latest (pip)
# ==================================================================
FROM nvidia/cuda:10.0-cudnn7-devel-ubuntu18.04
ENV LANG C.UTF-8
RUN APT_INSTALL="apt-get install -y --no-install-recommends" && \
PIP_INSTALL="python -m pip --no-cache-dir install --upgrade" && \
GIT_CLONE="git clone --depth 10" && \
rm -rf /var/lib/apt/lists/* \
/etc/apt/sources.list.d/cuda.list \
/etc/apt/sources.list.d/nvidia-ml.list && \
apt-get update && \
# ==================================================================
# tools
# ------------------------------------------------------------------
DEBIAN_FRONTEND=noninteractive $APT_INSTALL \
build-essential \
apt-utils \
ca-certificates \
wget \
git \
vim \
libssl-dev \
curl \
unzip \
unrar \
&& \
$GIT_CLONE https://github.com/Kitware/CMake ~/cmake && \
cd ~/cmake && \
./bootstrap && \
make -j"$(nproc)" install && \
# ==================================================================
# python
# ------------------------------------------------------------------
DEBIAN_FRONTEND=noninteractive $APT_INSTALL \
software-properties-common \
&& \
add-apt-repository ppa:deadsnakes/ppa && \
apt-get update && \
DEBIAN_FRONTEND=noninteractive $APT_INSTALL \
python3.6 \
python3.6-dev \
python3-distutils-extra \
&& \
wget -O ~/get-pip.py \
https://bootstrap.pypa.io/get-pip.py && \
python3.6 ~/get-pip.py && \
ln -s /usr/bin/python3.6 /usr/local/bin/python3 && \
ln -s /usr/bin/python3.6 /usr/local/bin/python && \
$PIP_INSTALL \
setuptools \
&& \
$PIP_INSTALL \
numpy \
matplotlib \
pyqt \
seaborn \
xlrd \
scipy \
scikit-learn \
scikit-image \
xarray \
dask \
pandas \
cloudpickle \
Cython \
Pillow \
opencv \
IPython[all] \
rasterstats \
geopy \
cartopy \
geopandas \
rasterio \
contextily \
pysal \
pyproj \
folium \
gdal \
libgdal \
kealib \
geojson \
yaml \
"git+https://github.com/ecohydro/lsru@master#egg=lsru" \
imgaug \
rioxarray \
&& \
# ==================================================================
# tensorflow
# ------------------------------------------------------------------
$PIP_INSTALL \
tf-nightly-gpu-2.0-preview \
&& \
# ==================================================================
# config & cleanup
# ------------------------------------------------------------------
ldconfig && \
apt-get clean && \
apt-get autoremove && \
rm -rf /var/lib/apt/lists/* /tmp/* ~/*
EXPOSE 6006
```
I get the following warning with both my modified Dockerfile and the original Dockerfile. It doesn't seem to affect the build, but it looks like it will in the future.
```
# rave at rave-thinkpad in ~/CropMask_RCNN on git:azureml-refactor ✖︎ [17:05:27]
→ docker build . -t tensorflow-py36-cu90:2
Sending build context to Docker daemon 219.8MB
[WARNING]: Empty continuation line found in:
RUN APT_INSTALL="apt-get install -y --no-install-recommends" && PIP_INSTALL="python -m pip --no-cache-dir install --upgrade" && GIT_CLONE="git clone --depth 10" && rm -rf /var/lib/apt/lists/* /etc/apt/sources.list.d/cuda.list /etc/apt/sources.list.d/nvidia-ml.list && apt-get update && DEBIAN_FRONTEND=noninteractive $APT_INSTALL build-essential apt-utils ca-certificates wget git vim libssl-dev curl unzip unrar && $GIT_CLONE https://github.com/Kitware/CMake ~/cmake && cd ~/cmake && ./bootstrap && make -j"$(nproc)" install && DEBIAN_FRONTEND=noninteractive $APT_INSTALL software-properties-common && add-apt-repository ppa:deadsnakes/ppa && apt-get update && DEBIAN_FRONTEND=noninteractive $APT_INSTALL python3.6 python3.6-dev python3-distutils-extra && wget -O ~/get-pip.py https://bootstrap.pypa.io/get-pip.py && python3.6 ~/get-pip.py && ln -s /usr/bin/python3.6 /usr/local/bin/python3 && ln -s /usr/bin/python3.6 /usr/local/bin/python && $PIP_INSTALL setuptools && $PIP_INSTALL numpy matplotlib pyqt seaborn xlrd scipy scikit-learn scikit-image xarray dask pandas cloudpickle Cython Pillow opencv IPython[all] rasterstats geopy cartopy geopandas rasterio contextily pysal pyproj folium gdal libgdal kealib geojson yaml "git+https://github.com/ecohydro/lsru@master#egg=lsru" imgaug rioxarray && $PIP_INSTALL tf-nightly-gpu-2.0-preview && ldconfig && apt-get clean && apt-get autoremove && rm -rf /var/lib/apt/lists/* /tmp/* ~/*
[WARNING]: Empty continuation lines will become errors in a future release.
``` | closed | 2019-08-14T00:16:42Z | 2020-01-24T19:17:25Z | https://github.com/ufoym/deepo/issues/116 | [] | rbavery | 0 |
sinaptik-ai/pandas-ai | data-visualization | 912 | Cache related issue | ### System Info
OS version: Windows 10
Python version: 3.11
The current version of pandasai being used: 1.5.18
### 🐛 Describe the bug
I tried to run the following code from the example:

```python
import pandas as pd
from pandasai import SmartDataframe

df = pd.DataFrame({
    "country": [
        "United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
    "gdp": [
        19294482071552, 2891615567872, 2411255037952, 3435817336832, 1745433788416, 1181205135360, 1607402389504, 1490967855104, 4380756541440, 14631844184064
    ],
})

df = SmartDataframe(df)
df.chat('Which are the countries with GDP greater than 3000000000000?')
```
And got the following error:
```
IOException Traceback (most recent call last)
Cell In[11], line 11
1 from pandasai import SmartDataframe
3 df = pd.DataFrame({
4     "country": [
5         "United States", "United Kingdom", "France", "Germany", "Italy", "Spain", "Canada", "Australia", "Japan", "China"],
(...)
8     ],
9 })
---> 11 df = SmartDataframe(df)
12 df.chat('Which are the countries with GDP greater than 3000000000000?')
File c:\Program Files\Python311\Lib\site-packages\pandasai\smart_dataframe\__init__.py:279, in SmartDataframe.__init__(self, df, name, description, custom_head, config, logger)
277 self._table_description = description
278 self._table_name = name
--> 279 self._lake = SmartDatalake([self], config, logger)
281 # set instance type in SmartDataLake
282 self._lake.set_instance_type(self.__class__.__name__)
File c:\Program Files\Python311\Lib\site-packages\pandasai\smart_datalake\__init__.py:113, in SmartDatalake.__init__(self, dfs, config, logger, memory, cache)
111 self._cache = cache
112 elif self._config.enable_cache:
--> 113 self._cache = Cache()
115 context = Context(self._config, self.logger, self.engine)
117 if self._config.response_parser:
File c:\Program Files\Python311\Lib\site-packages\pandasai\helpers\cache.py:32, in Cache.__init__(self, filename, abs_path)
29 os.makedirs(cache_dir, mode=DEFAULT_FILE_PERMISSIONS, exist_ok=True)
31 self.filepath = os.path.join(cache_dir, f"{filename}.db")
---> 32 self.connection = duckdb.connect(self.filepath)
33 self.connection.execute(
34 "CREATE TABLE IF NOT EXISTS cache (key STRING, value STRING)"
35 )
IOException: IO Error: Cannot open file "\\server\home\userXXX\...\documents\...\...\\\server\home\userXXX\...\documents\...\cache\cache_db_0.9.db": The system cannot find the path specified.
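For now, the only workaround I have found is to disable the cache entirely. A minimal sketch, assuming `enable_cache` is the right config switch for this:
```python
df = SmartDataframe(df, config={"enable_cache": False})
df.chat('Which are the countries with GDP greater than 3000000000000?')
```
That avoids the duckdb call, but obviously loses caching.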
Any ideas on how to resolve an issue are appreciated. | closed | 2024-01-31T00:30:51Z | 2024-07-22T14:30:16Z | https://github.com/sinaptik-ai/pandas-ai/issues/912 | [] | staceymir | 8 |
ultralytics/ultralytics | deep-learning | 19,811 | Some questions about how to modify YAML files after improving the network | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have been conducting experiments with YOLOv3 recently. After replacing the Darknet53 backbone network with MobileNetV3, I modified the number of channels in each layer of the head, but something always went wrong and the code kept raising errors. However, if I made the same modifications on YOLOv5, it ran normally. How should I determine the number of channels? After modifying the backbone, do I only need to change the number of channels in the head, or do I need to change anything else?
### Additional
# Parameters
nc: 80 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
# darknet53 backbone
backbone:
# [from, number, module, args]
- [-1, 1, Conv, [32, 3, 1]] # 0
- [-1, 1, Conv, [64, 3, 2]] # 1-P1/2
- [-1, 1, Bottleneck, [64]]
- [-1, 1, Conv, [128, 3, 2]] # 3-P2/4
- [-1, 2, Bottleneck, [128]]
- [-1, 1, Conv, [256, 3, 2]] # 5-P3/8
- [-1, 8, Bottleneck, [256]]
- [-1, 1, Conv, [512, 3, 2]] # 7-P4/16
- [-1, 8, Bottleneck, [512]]
- [-1, 1, Conv, [1024, 3, 2]] # 9-P5/32
- [-1, 4, Bottleneck, [1024]] # 10
# YOLOv3 head
head:
- [-1, 1, Bottleneck, [1024, False]]
- [-1, 1, Conv, [512, 1, 1]]
- [-1, 1, Conv, [1024, 3, 1]]
- [-1, 1, Conv, [512, 1, 1]]
- [-1, 1, Conv, [1024, 3, 1]] # 15 (P5/32-large)
- [-2, 1, Conv, [256, 1, 1]]
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 8], 1, Concat, [1]] # cat backbone P4
- [-1, 1, Bottleneck, [512, False]]
- [-1, 1, Bottleneck, [512, False]]
- [-1, 1, Conv, [256, 1, 1]]
- [-1, 1, Conv, [512, 3, 1]] # 22 (P4/16-medium)
- [-2, 1, Conv, [128, 1, 1]]
- [-1, 1, nn.Upsample, [None, 2, "nearest"]]
- [[-1, 6], 1, Concat, [1]] # cat backbone P3
- [-1, 1, Bottleneck, [256, False]]
- [-1, 2, Bottleneck, [256, False]] # 27 (P3/8-small)
- [[27, 22, 15], 1, Detect, [nc]] # Detect(P3, P4, P5)
Replace the backbone with MobileNetV3
[[-1, 1, conv_bn_hswish, [16, 2]], # 0-p1/2
[-1, 1, MobileNetV3, [ 16, 16, 3, 1, 0, 0]], # 1-p1/2
[-1, 1, MobileNetV3, [ 24, 64, 3, 2, 0, 0]], # 2-p2/4
[-1, 1, MobileNetV3, [ 24, 72, 3, 1, 0, 0]], # 3-p2/4
[-1, 1, MobileNetV3, [ 40, 72, 5, 2, 1, 0]], # 4-p3/8
[-1, 1, MobileNetV3, [ 40, 120, 5, 1, 1, 0]], # 5-p3/8
[-1, 1, MobileNetV3, [ 40, 120, 5, 1, 1, 0]], # 6-p3/8
[-1, 1, MobileNetV3, [ 80, 240, 3, 2, 0, 1]], # 7-p4/16
[-1, 1, MobileNetV3, [ 80, 200, 3, 1, 0, 1]], # 8-p4/16
[-1, 1, MobileNetV3, [ 80, 184, 3, 1, 0, 1]], # 9-p4/16
[-1, 1, MobileNetV3, [ 80, 184, 3, 1, 0, 1]], # 10-p4/16
[-1, 1, MobileNetV3, [112, 480, 3, 1, 1, 1]], # 11-p4/16
[-1, 1, MobileNetV3, [112, 672, 3, 1, 1, 1]], # 12-p4/16
[-1, 1, MobileNetV3, [160, 672, 5, 1, 1, 1]], # 13-p4/16
[-1, 1, MobileNetV3, [160, 960, 5, 2, 1, 1]], # 14-p5/32 (changed from 672 to 960, as in the original algorithm)
[-1, 1, MobileNetV3, [160, 960, 5, 1, 1, 1]], # 15-p5/32
]
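For reference, this is roughly how I build and inspect the modified model to see where the channel mismatch is reported. The YAML file name below is just a placeholder for my local config file:

```python
from ultralytics import YOLO

# "yolov3-mobilenetv3.yaml" is a placeholder name for the modified config above
model = YOLO("yolov3-mobilenetv3.yaml")
model.info()  # parsing the YAML is where the channel errors are raised
```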
| open | 2025-03-21T06:53:38Z | 2025-03-22T07:26:26Z | https://github.com/ultralytics/ultralytics/issues/19811 | [
"question",
"detect"
] | Meaccy | 2 |
reloadware/reloadium | pandas | 20 | Reloadium on pycharm won't load Flask templates | **Describe the bug**
I tried the reloadium plugin for PyCharm with my Flask project. The problem is that reloadium cannot find the index.html template used in my project.
Here is the error:
```
C:\Users\tom52\Desktop\projet>reloadium run app.py
■■■■■■■■■■■■■■■
Reloadium 0.8.8
■■■■■■■■■■■■■■■
If you like this project consider becoming a sponsor or giving a start at https://github.com/reloadware/reloadium
* Serving Flask app '__main__' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
INFO:werkzeug: * Running on http://127.0.0.1:5000 (Press CTRL+C to quit)
Loaded 3 watched modules so far from paths:
- \C:\Users\tom52\Desktop\projet\**\*.html
- \C:\Users\tom52\Desktop\projet\**\*.py
ERROR:__main__:Exception on / [GET]
Traceback (most recent call last):
File "C:\Python310\lib\site-packages\flask\app.py", line 2077, in wsgi_app
response = self.full_dispatch_request()
File "C:\Python310\lib\site-packages\flask\app.py", line 1525, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Python310\lib\site-packages\flask\app.py", line 1523, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Python310\lib\site-packages\reloadium\reloader\llll11l1l1l1l1llIl1l1\llllll1l1ll111l1Il1l1.py", line 165, in ll11ll1ll11ll111Il1l1
File "C:\Python310\lib\site-packages\flask\app.py", line 1509, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
File "C:\Users\tom52\Desktop\projet\app.py", line 28, in index
return render_template('index.html', history=history)
File "C:\Python310\lib\site-packages\flask\templating.py", line 149, in render_template
ctx.app.jinja_env.get_or_select_template(template_name_or_list),
File "C:\Python310\lib\site-packages\jinja2\environment.py", line 1081, in get_or_select_template
return self.get_template(template_name_or_list, parent, globals)
File "C:\Python310\lib\site-packages\jinja2\environment.py", line 1010, in get_template
return self._load_template(name, globals)
File "C:\Python310\lib\site-packages\jinja2\environment.py", line 969, in _load_template
template = self.loader.load(self, name, self.make_globals(globals))
File "C:\Python310\lib\site-packages\jinja2\loaders.py", line 126, in load
source, filename, uptodate = self.get_source(environment, name)
File "C:\Python310\lib\site-packages\flask\templating.py", line 59, in get_source
return self._get_source_fast(environment, template)
File "C:\Python310\lib\site-packages\flask\templating.py", line 95, in _get_source_fast
raise TemplateNotFound(template)
jinja2.exceptions.TemplateNotFound: index.html
INFO:werkzeug:127.0.0.1 - - [06/Jun/2022 22:24:08] "GET / HTTP/1.1" 500 -
```
**To Reproduce**
Steps to reproduce the behavior:
1. Create a python flask project with the following files tree:
```
project/
| app.py
| templates/
| index.html
```
2. Create a templates directory and add index.html
3. Create the app in Python with `app = Flask(__name__, template_folder='templates')` and `app.run()` (a minimal `app.py` sketch is shown below)
4. Run `reloadium run app.py`
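For completeness, a minimal `app.py` along the lines of what I am running (trimmed down; `history` is a placeholder for the data my real app passes to the template):

```python
from flask import Flask, render_template

app = Flask(__name__, template_folder='templates')

@app.route('/')
def index():
    history = []  # placeholder data
    return render_template('index.html', history=history)

if __name__ == '__main__':
    app.run()
```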
**Expected behavior**
As the Flask constructor specifies the template folder, the flask app should run correctly
**Screenshots**
- Running the code by python app.py or PyCharm (same result)

- Running the code with console reloadium or pycharm reloadium (same result)

**Desktop (please complete the following information):**
- OS: Windows
- OS version: Windows 11 Professional - Version 21H2 - build 22000.675
- Reloadium package version: 0.8.8
- PyCharm plugin version: 0.8.2
- Editor: PyCharm
- Run mode: Run & Debug
**Additional context**
Add any other context about the problem here. | closed | 2022-06-06T20:27:50Z | 2022-06-16T10:19:12Z | https://github.com/reloadware/reloadium/issues/20 | [] | pommepommee | 2 |
freqtrade/freqtrade | python | 10,648 | Implementing Unified Trading Account (UTA) Feature (Bybit & Binance) | <!--
Have you searched for similar issues before posting it?
Did you have a VERY good look at the [documentation](https://www.freqtrade.io/en/latest/) and are sure that the question is not explained there
Please do not use the question template to report bugs or to request new features.
-->
## Describe your environment
* Operating system: ubuntu 24.04
* Freqtrade Version: 2024.8
## Your question
Hi! You have probably noticed that Binance stopped offering crypto futures and derivatives trading in the EU; you can only trade them through a Unified Trading Account (UTA).
https://www.reddit.com/r/3Commas_io/comments/1dndnng/important_notice_for_binance_futures_traders_in/
https://en.cryptonomist.ch/2024/06/19/crypto-regulation-mica-arriving-on-june-30-what-does-it-mean-for-stablecoin-in-europe/
https://announcements.bybit.com/en/article/migration-of-usdc-derivatives-trading-to-unified-trading-accounts-uta--bltfa0f4defd805a6d3/
https://www.binance.com/en/square/post/9012660554761
Recently, Bybit has also been rushing users to switch from Standard to Unified Trading Accounts (UTA) due to the need to comply with new EU regulations. They also stopped allowing users to create Standard Trading Accounts in the EU area. They now support inverse derivatives trading through UTA as well, and unfortunately none of these futures can be used with freqtrade at the moment.
https://announcements.bybit.com/en/article/product-updates-introducing-inverse-derivatives-trading-to-unified-trading-account-uta--blte3dc1e58c8aefd04/


A lot of users may be affected, so I would kindly like to ask you when this Unified Trading Account (UTA) feature will be available with freqtrade for Binance and Bybit.
Thank you.
| closed | 2024-09-12T09:04:08Z | 2024-09-12T09:28:23Z | https://github.com/freqtrade/freqtrade/issues/10648 | [
"Question",
"Non-spot"
] | aliengofx | 1 |
microsoft/nni | tensorflow | 5,555 | TypeError: Invalid shape (64, 64, 1, 1) for image data | **Environment**: VScode
- NNI version: 2.10
- Training service:remote
- Client OS: MacOS
- Server OS (for remote mode only): Ubuntu
- Python version:3.9
- PyTorch version: 1.12
- Is conda/virtualenv/venv used?: conda
- Is running in Docker?: No
Hi, I am trying to prune a Face detector with this architecture:
```
EXTD(
(base): ModuleList(
(0): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
)
(1): InvertedResidual_dwc(
(conv): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(4): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(2): InvertedResidual_dwc(
(conv): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): PReLU(num_parameters=1)
(6): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(3): InvertedResidual_dwc(
(conv): Sequential(
(0): Conv2d(64, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=128)
(4): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): PReLU(num_parameters=1)
(6): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(4): InvertedResidual_dwc(
(conv): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=256)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): PReLU(num_parameters=1)
(6): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
(5): InvertedResidual_dwc(
(conv): Sequential(
(0): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): PReLU(num_parameters=1)
(3): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), groups=256)
(4): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(5): PReLU(num_parameters=1)
(6): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(7): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
)
)
(upfeat): ModuleList(
(0): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(1): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(2): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(3): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
(4): Sequential(
(0): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=64, bias=False)
(1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(3): ReLU()
)
)
(loc): ModuleList(
(0): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(conf): ModuleList(
(0): Conv2d(64, 4, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(2): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(3): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(4): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(5): Conv2d(64, 2, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
)
(softmax): Softmax(dim=-1)
)
```
I am using this config_list :
```
config_list = [{
'sparsity_per_layer' : 0.2,
'op_types' : ['Conv2d'],
}, {
'exclude' : True,
'op_names' : ['loc.0', 'loc.1', 'loc.2', 'loc.3', 'loc.4', 'loc.5',
'conf.0', 'conf.1', 'conf.2', 'conf.3', 'conf.4', 'conf.5',
]
}]
```
and when I apply the pruner and try to visualize the masks I get the following error:
```
sparsity: 0.8125
Output exceeds the size limit. Open the full output data in a text editor
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[5], line 7
4 mask = mask['weight'].detach().cpu().numpy()
6 print("sparsity: {}".format(mask.sum() [/](https://vscode-remote+ssh-002dremote-002b160-002e40-002e53-002e84.vscode-resource.vscode-cdn.net/) mask.size))
----> 7 plt.imshow(mask)
File ~/anaconda3/envs/gpu/lib/python3.9/site-packages/matplotlib/pyplot.py:2695, in imshow(X, cmap, norm, aspect, interpolation, alpha, vmin, vmax, origin, extent, interpolation_stage, filternorm, filterrad, resample, url, data, **kwargs)
2689 @_copy_docstring_and_deprecators(Axes.imshow)
2690 def imshow(
2691 X, cmap=None, norm=None, *, aspect=None, interpolation=None,
2692 alpha=None, vmin=None, vmax=None, origin=None, extent=None,
2693 interpolation_stage=None, filternorm=True, filterrad=4.0,
2694 resample=None, url=None, data=None, **kwargs):
-> 2695 __ret = gca().imshow(
2696 X, cmap=cmap, norm=norm, aspect=aspect,
2697 interpolation=interpolation, alpha=alpha, vmin=vmin,
2698 vmax=vmax, origin=origin, extent=extent,
2699 interpolation_stage=interpolation_stage,
2700 filternorm=filternorm, filterrad=filterrad, resample=resample,
2701 url=url, **({"data": data} if data is not None else {}),
2702 **kwargs)
2703 sci(__ret)
2704 return __ret
...
716 # - otherwise casting wraps extreme values, hiding outliers and
717 # making reliable interpretation impossible.
718 high = 255 if np.issubdtype(self._A.dtype, np.integer) else 1
TypeError: Invalid shape (64, 64, 1, 1) for image data
```
The code I used is this:
```
from nni.compression.pytorch.pruning import L1NormPruner
pruner = L1NormPruner(model, config_list)
# the masks used below come from compress(); this line was missing from my snippet
_, masks = pruner.compress()
import matplotlib.pyplot as plt
for _, mask in masks.items():
    mask = mask['weight'].detach().cpu().numpy()
    print("sparsity: {}".format(mask.sum() / mask.size))
    plt.imshow(mask)
```
It is also worth noting that even though I set `'sparsity_per_layer' : 0.2,` when I try to visualize the masks as you see it prints `sparsity: 0.8125` . Do you know why and how I can fix this issue ? | closed | 2023-05-10T12:30:29Z | 2023-05-27T12:28:55Z | https://github.com/microsoft/nni/issues/5555 | [] | gkrisp98 | 6 |
JoeanAmier/TikTokDownloader | api | 342 | How should device_id be set? |
Performing retry attempt 2
Unexpected response code: Client error '403 Forbidden' for url
'https://v16-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068-euttp/oo8nyeeGLEjTgeNLI5xY7AgqEG0JeSIgYrQc17/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0
%7C0%7C&cv=1&br=1006&bt=503&cs=0&ds=6&ft=Fx9KL6BMyq8jV1tiE12if3EYztGxRf&mime_type=video_mp4&qs=0&rc=aGY1Nzw8NWg1M2VnaGlnNEBpanJuPG45cnlmcDMzZjgzM0A2X19fMTNgNl8xMDNfLi5iYSM0M2VxM
mRjNmZgLS1kL2Nzcw%3D%3D&btag=e000b8000&expire=1733155398&l=202412021003016A460738DFF0AC05BB6B&ply_type=2&policy=2&signature=ac8daaabc699cdc55648b8344769cb49&tk=tt_chain_token'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/403
If the TikTok post download feature is not working properly, please check the device_id parameter of browser_info_tiktok in the configuration file!
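From that message I gather the value belongs in the configuration file under `browser_info_tiktok`, something like the sketch below (the file name `settings.json` and the value are just my guesses). Is that the right shape, and where should a valid `device_id` come from?
```json
{
  "browser_info_tiktok": {
    "device_id": "1234567890123456789"
  }
}
```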
| open | 2024-12-02T10:11:34Z | 2025-01-06T16:15:14Z | https://github.com/JoeanAmier/TikTokDownloader/issues/342 | [] | sean0157 | 3 |
zappa/Zappa | flask | 1,232 | How we can set up web acl (WAF) with api gateway stage? | How we can set up web acl (WAF) with api gateway stage? | closed | 2023-04-20T03:19:15Z | 2023-05-22T00:24:45Z | https://github.com/zappa/Zappa/issues/1232 | [] | jitesh-prajapati123 | 2 |
keras-team/keras | data-science | 21,027 | Use JAX's AbstractMesh in distribution lib | In the JAX [distribution lib](https://github.com/keras-team/keras/blob/master/keras/src/backend/jax/distribution_lib.py#L246), use [AbstractMesh](https://github.com/jax-ml/jax/blob/main/jax/_src/mesh.py#L434) instead of Mesh since it doesn't result in a JIT cache misses when the devices change. It may also simplify the distribution API. | open | 2025-03-13T20:26:13Z | 2025-03-17T19:06:38Z | https://github.com/keras-team/keras/issues/21027 | [
"type:performance",
"backend:jax"
] | hertschuh | 6 |
indico/indico | sqlalchemy | 5,919 | Room booking link from agenda | **Is your feature request related to a problem? Please describe.**
It's a bit cumbersome to verify the details of a room booking. I can think of a few situations where these details would be useful:
- People clone an agenda and forget to check if the room booking is still valid
- The room has a subsequent booking, which means the meeting must end promptly at the booked end time
**Describe the solution you'd like**
It would be nice if the pop up box here had a link to the booking details
<img width="149" alt="image" src="https://github.com/indico/indico/assets/1979581/f364773f-db2e-445f-b86c-5ef1f8fdc505">
in this case it would point to https://indico.cern.ch/rooms/rooms?text=2%2FR-030&modal=room-details%3A114
It seems like it _might_ be easy given that the URL already exists and is straightforward to build from the room name (at least at CERN).
**Describe alternatives you've considered**
I can copy and paste the room name into the room booking page. It involves a few clicks so it's just a bit annoying.
| open | 2023-09-06T07:17:55Z | 2025-03-21T15:33:13Z | https://github.com/indico/indico/issues/5919 | [
"enhancement"
] | dguest | 1 |
vitalik/django-ninja | django | 1,279 | Mypy complaining about PatchDict | Please describe what you are trying to achieve
Please include code examples (like models code, schemes code, view function) to help understand the issue
I'm using PatchDict to extend a Schema so it can be used for partial update. The code functions fine, but I'm not sure how to get around the mypy complaints:

I'm currently using mypy 1.10.1 with Django-ninja 1.3.0
Here is how I set up this API:
```python
# imports added for context; Facility and facility_router come from my project code
from django.http import HttpRequest
from django.shortcuts import get_object_or_404
from ninja import Schema, PatchDict
from pydantic import UUID4

class CreateFacilityPayload(Schema):
name: str
...
PartialUpdateFacilityPayload = PatchDict[CreateFacilityPayload]
@facility_router.patch("/{guid}")
def update_facility(request: HttpRequest, guid: UUID4, payload: PartialUpdateFacilityPayload) -> dict:
facility = get_object_or_404(Facility, guid=guid)
for key, value in payload.items():
setattr(facility, key, value)
facility.save()
return {"guid": facility.guid}
```
Any suggestion is much appreciated. | closed | 2024-08-25T18:15:19Z | 2024-08-26T07:57:32Z | https://github.com/vitalik/django-ninja/issues/1279 | [] | oscarychen | 1 |
pydata/xarray | pandas | 9,346 | datatree: Tree-aware dataset handling/selection | ### What is your issue?
> I'm looking for a good way to apply a function to a subset of nodes that share some common characteristics encoded in the subtree path.
>
> Imagine the following data tree
>
> ```python
> import xarray as xr
> import datatree
> from datatree import map_over_subtree
>
> dt = datatree.DataTree.from_dict({
> 'control/lowRes' : xr.Dataset({'z':(('x'),[0,1,2])}),
> 'control/highRes' : xr.Dataset({'z':(('x'),[0,1,2,3,4,5])}),
> 'plus4K/lowRes' : xr.Dataset({'z':(('x'),[0,1,2])}),
> 'plus4K/highRes' : xr.Dataset({'z':(('x'),[0,1,2,3,4,5])})
> })
> ```
>
> To apply a function to all `control` or all `plus4K` nodes is straight forward by just selecting the specific subtree, e.g. `dt['control']`. However, in case all `lowRes` dataset should be manipulated this becomes more elaborative and I wonder what the best approach would be.
>
> * `dt['control/lowRes','plus4K/lowRes']` is not yet implemented and would also be complex for large data trees
>
> * `dt['*/lowRes']` could be one idea to make the subtree selection more straight forward, where `*` is a wildcard
>
> * `dt.search(regex)` could make this even more general
>
>
> Currently, I use the @map_over_subtree decorator, which also has some limitations as the function does not know its tree origin ([as noted in the code](https://github.com/xarray-contrib/datatree/blob/696cec9e6288ba9e8c473cd1ba527122edef2b1c/datatree/datatree.py#L1219C38-L1219C38)) and it needs to be inferred from the dataset itself, which is sometimes possible (here the length of the dataset) but does not need to be always the case.
>
> ```python
> @map_over_subtree
> def resolution_specific_func(ds):
> if len(ds.x) == 3:
> ds = ds.z*2
> elif len(ds.x) == 6:
> ds = ds.z*4
> return ds
>
> z= resolution_specific_func(dt)
> ```
>
> I do not know how the tree information could be passed through the decorator, but maybe it is okay if the `DatasetView` class has an additional property (e.g. `_path`) that could be filled with `dt.path` during the call of DatasetView._from_node()?. This would lead to
>
> ```python
> @map_over_subtree
> def resolution_specific_func(ds):
> if 'lowRes' in ds._path:
> ds = ds.z*2
> if 'highRes' in ds._path:
> ds = ds.z*4
> return ds
> ```
>
> and would allow for tree-aware manipulation of the datasets.
>
> What do you think? Happy to open a PR if this makes sense.
_Originally posted by @observingClouds in https://github.com/xarray-contrib/datatree/issues/254#issue-1835784457_ | open | 2024-08-13T16:20:31Z | 2024-08-13T16:22:19Z | https://github.com/pydata/xarray/issues/9346 | [
"topic-DataTree"
] | keewis | 0 |
dnouri/nolearn | scikit-learn | 31 | PendingDeprecationWarning: object.__format__ with a non-empty format string is deprecated | I am getting:
```
/home/ubuntu/git/nolearn/nolearn/lasagne.py:408: PendingDeprecationWarning: object.__format__ with a non-empty format string is deprecated
```
which I believe is coming from:
```
print(" {:<18}\t{:<20}\tproduces {:>7} outputs".format(
layer.__class__.__name__,
output_shape,
reduce(operator.mul, output_shape[1:]),
))
```
| closed | 2015-01-29T05:50:48Z | 2015-02-08T22:59:30Z | https://github.com/dnouri/nolearn/issues/31 | [] | cancan101 | 1 |
tortoise/tortoise-orm | asyncio | 1,533 | Group by for many to many table problem | Hi,
I am encountering an issue with how Tortoise ORM handles grouping in queries involving many-to-many tables. Specifically, when I attempt to group by a single field in a many-to-many intermediary table, the SQL query produced by Tortoise ORM includes additional fields in the GROUP BY clause, leading to incorrect aggregation results.
This is the model class:
```python
class Recipe2Ingredient(Model):
class Meta:
app = "recipes"
table = "recipes_recipe2ingredient"
id = fields.UUIDField(pk=True)
recipe = fields.ForeignKeyField(
'recipes.Recipe',
related_name='ingredients'
)
ingredient = fields.ForeignKeyField(
'recipes.Ingredient',
related_name='recipes'
)
```
This is the query:
```python
query = models.Recipe2Ingredient.annotate(
count=Count("ingredient_id")
).group_by("ingredient_id").order_by("-count").limit(5).prefetch_related("ingredient")
```
I expected this query to produce this raw SQL query:
```
SELECT "ingredient_id", COUNT("ingredient_id") AS "count"
FROM "recipes_recipe2ingredient"
GROUP BY "ingredient_id"
ORDER BY "count" DESC
LIMIT 5
```
but I got
```
'SELECT "recipe_id","id","ingredient_id",COUNT("ingredient_id") "count" FROM "recipes_recipe2ingredient"
GROUP BY "recipe_id","id","ingredient_id"
ORDER BY COUNT("ingredient_id") DESC
LIMIT 5'
```
My main problem is with the grouping part because instead of getting GROUP BY "ingredient_id" I got GROUP BY "recipe_id","id","ingredient_id" and this is totally incorrect.
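For what it's worth, the only way I have found so far to keep the GROUP BY limited to `ingredient_id` is to select explicit values (a rough, untested sketch below), but I would still expect the plain `group_by` call to work:
```python
top_ingredients = await models.Recipe2Ingredient.annotate(
    count=Count("ingredient_id")
).group_by("ingredient_id").order_by("-count").limit(5).values("ingredient_id", "count")
```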
Is there something I am doing wrong, or is this a Tortoise bug? If so, can you please fix it?
Thanks
| open | 2024-01-02T10:23:50Z | 2024-01-02T10:36:35Z | https://github.com/tortoise/tortoise-orm/issues/1533 | [] | acast83 | 0 |
wandb/wandb | data-science | 9,580 | [Bug]: Login error `Object has no attribute 'disabled'` | ### Describe the bug
I just ran `pip install wandb` (version `0.19.8`) and got an error when running `wandb login <API-KEY>`.
```
Traceback (most recent call last):
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/bin/wandb", line 8, in <module>
sys.exit(cli())
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/click/core.py", line 1161, in __call__
return self.main(*args, **kwargs)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/click/core.py", line 1082, in main
rv = self.invoke(ctx)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/click/core.py", line 1697, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/click/core.py", line 1443, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/click/core.py", line 788, in invoke
return __callback(*args, **kwargs)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/cli/cli.py", line 104, in wrapper
return func(*args, **kwargs)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/cli/cli.py", line 246, in login
wandb.setup(
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/sdk/wandb_setup.py", line 382, in setup
return _setup(settings=settings)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/sdk/wandb_setup.py", line 318, in _setup
_singleton = _WandbSetup(settings=settings, pid=pid)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/sdk/wandb_setup.py", line 96, in __init__
self._settings = self._settings_setup(settings)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/sdk/wandb_setup.py", line 123, in _settings_setup
s.update_from_workspace_config_file()
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/wandb/sdk/wandb_settings.py", line 1290, in update_from_workspace_config_file
setattr(self, key, value)
File "/home/GRAMES.POLYMTL.CA/p118739/.conda/envs/ply_env/lib/python3.10/site-packages/pydantic/main.py", line 922, in __setattr__
self.__pydantic_validator__.validate_assignment(self, name, value)
pydantic_core._pydantic_core.ValidationError: 1 validation error for Settings
disabled
Object has no attribute 'disabled' [type=no_such_attribute, input_value='true', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/no_such_attribute
```
It looks like there is an error with the package `pydantic_core`. | closed | 2025-03-13T14:29:39Z | 2025-03-24T14:54:11Z | https://github.com/wandb/wandb/issues/9580 | [
"ty:bug",
"a:sdk"
] | NathanMolinier | 8 |
gunthercox/ChatterBot | machine-learning | 1,801 | Best Match Behaviour | Hi,
I'm getting some unusual results with the BestMatch logic adaptor. For example, I trained my bot with the following two questions:
"Who wrote The Odyssey?"
"Who wrote The Lord of the Rings?"
If I ask the bot "Who wrote The Lord of the Rings?", I get a confidence of 1 and the correct answer. If I drop the first "The" from the question, asking "Who wrote Lord of the Rings?", I get the answer to "Who wrote The Odyssey?". Looking at the logging I can see that the confidence of "Who wrote Lord of the Rings?" compared to "Who wrote The Odyssey?" is 0.45, while compared to "Who wrote The Lord of the Rings?" it is 0.87.
Shouldn't the BestMatch be returning the answer with the highest confidence?
Looking at the best_match.py script, I think I can locate where this is happening:
```
# Use the input statement as the closest match if no other results are found
closest_match = next(search_results, input_statement)
# Search for the closest match to the input statement
for result in search_results:
# Stop searching if a match that is close enough is found
if result.confidence >= self.maximum_similarity_threshold:
closest_match = result
break
```
The maximum_similarity_threshold is set to 0.95, so anything with a confidence above that gets used as the answer. But it looks to me as if the script is keeping whatever it found first (in my case "Who wrote The Odyssey?") as the closest_match variable, and not updating it if it finds something with a higher confidence.
Is there some other setting that I need to apply to get this to work to find the closest match if the maximum_similarity_threshold isn't reached? I found if I added the following lines of code as seen below, the script worked as intended:
```
# Use the input statement as the closest match if no other results are found
closest_match = next(search_results, input_statement)
# Search for the closest match to the input statement
for result in search_results:
#set closest_match to result if it has a higher confidence
if result.confidence > closest_match.confidence:
closest_match = result
# Stop searching if a match that is close enough is found
if result.confidence >= self.maximum_similarity_threshold:
closest_match = result
break
```
I'm using chatterbot version 1.0.5 on Python 3.7.
Thanks in advance for your help. | open | 2019-08-22T16:29:29Z | 2021-02-03T14:55:51Z | https://github.com/gunthercox/ChatterBot/issues/1801 | [] | dcasserly001 | 3 |
youfou/wxpy | api | 205 | There are concurrency issues when receiving messages | How should this be solved? My Python is rather weak; I only really got started by studying this project. | open | 2017-09-29T05:49:29Z | 2017-09-29T05:49:29Z | https://github.com/youfou/wxpy/issues/205 | [] | bestfc | 0
plotly/dash-core-components | dash | 795 | Range slider update is slow when draging but fast when clicking despite 'mouseup' property | I'm aware the range-slider with the "drag" option can be slow; however, if I put updatemode="mouseup", I expect to be able to drag the range slider and get fast results, since the intermediate positions of the slider won't be evaluated in this case.
However, I observe that clicking the range slider is fast, while dragging the range slider to do exactly the same thing is slow. Is this expected behaviour?
```
import dash
import dash_core_components as dcc
import dash_html_components as html
from dash.dependencies import Input, Output
import pandas as pd
import plotly.express as px
from flask_caching import Cache
import gunicorn
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.title = "Epidemics over time"
# server = app.server
timeout = 1e20
cache = Cache(app.server, config={
# try 'filesystem' if you don't want to setup redis
'CACHE_TYPE': 'filesystem',
'CACHE_DIR': './cache/'
})
app.config.suppress_callback_exceptions = True
df = pd.read_csv('data/epidemics.csv', sep=';', encoding='latin1')
df.dropna(inplace=True)
df['Death toll'] = df['Death toll'].astype("int32")
app.layout = html.Div([
dcc.Graph(
id='graph-with-slider'#,
# figure=fig
),
dcc.RangeSlider(
id='year-slider',
min=df['Date'].min(),
max=df['Date'].max(),
value=[df['Date'].min(),df['Date'].max()],
marks={str(year): str(year) for year in df['Date'].unique() if year%3==0},
step=2,
updatemode='mouseup'
)
])#, style={'columnCount': 2})
@cache.memoize(timeout=timeout) # in seconds
@app.callback(
Output('graph-with-slider', 'figure'),
[Input('year-slider', 'value')])
def update_figure(range):
min = range[0]
max = range[1]
filtered_df = df[(df.Date >= min) & (df.Date <= max)]
fig = px.treemap(filtered_df, path=['Event'], values='Death toll', color='Disease', title="Epidemic diseases landscape from {} to {}".format(min, max))
return fig
if __name__ == '__main__':
app.run_server()
``` | open | 2020-04-19T17:25:21Z | 2020-04-20T07:10:40Z | https://github.com/plotly/dash-core-components/issues/795 | [] | P-mir | 0 |
stanford-oval/storm | nlp | 265 | [BUG] ImportError: cannot import name 'AzureAISearch' from 'knowledge_storm.rm' | **Describe the bug**
ImportError: cannot import name 'AzureAISearch' from 'knowledge_storm.rm'
**To Reproduce**
pip install knowledge-storm
pip install knolwledge-storm-upgrade
python examples/storm_examples/run_storm_wiki_gpt.py --output-dir . --retriever you --do-research --do-generate-outline --do-generate-article --do-polish-article
**Environment:**
- OS: MacOS (Intel)
| closed | 2024-12-09T15:44:50Z | 2024-12-11T07:31:03Z | https://github.com/stanford-oval/storm/issues/265 | [] | Elusv | 1 |
microsoft/nni | deep-learning | 5,342 | nni webportal doesn't show | **Describe the issue**:
webportal doesn't appear after executing nnictl create --config config_detailed.yml
**Environment**: Google Cloud VM
- NNI version: 2.10
- Training service (local|remote|pai|aml|etc):
- Client OS: Ubuntu 20
- Server OS (for remote mode only):
- Python version:
- PyTorch/TensorFlow version:
- Is conda/virtualenv/venv used?: Yes
- Is running in Docker?: No
**Configuration**:
- Experiment config (remember to remove secrets!):
- Search space:
**Log message**:
- nnimanager.log:
- dispatcher.log:
- nnictl stdout and stderr:
**How to reproduce it?**: | closed | 2023-02-08T22:07:06Z | 2023-02-17T02:50:52Z | https://github.com/microsoft/nni/issues/5342 | [] | yiqiaoc11 | 5 |
pyg-team/pytorch_geometric | deep-learning | 10,010 | Cuda 12.6/8 support? | ### 😵 Describe the installation problem
Very brief:
- Is cuda 12.6/8 supported?
- If they are, can we please get a downloadable whl alongside the existing ones for cuda 12.4?
Thank you!
| closed | 2025-02-10T16:53:06Z | 2025-03-20T20:57:03Z | https://github.com/pyg-team/pytorch_geometric/issues/10010 | [
"installation"
] | Nathaniel-Bubis | 2 |
slackapi/python-slack-sdk | asyncio | 1,189 | How to use blocks in attachments using the model classes | Currently only legacy fields are supported in the Attachments object (`from slack_sdk.models.attachments import Attachment`). The API now supports passing in Blocks (https://api.slack.com/reference/messaging/attachments#fields). Please can support be added to include this.
### Category (place an `x` in each of the `[ ]`)
- [ ] **slack_sdk.web.WebClient (sync/async)** (Web API client)
- [ ] **slack_sdk.webhook.WebhookClient (sync/async)** (Incoming Webhook, response_url sender)
- [X] **slack_sdk.models** (UI component builders)
- [ ] **slack_sdk.oauth** (OAuth Flow Utilities)
- [ ] **slack_sdk.socket_mode** (Socket Mode client)
- [ ] **slack_sdk.audit_logs** (Audit Logs API client)
- [ ] **slack_sdk.scim** (SCIM API client)
- [ ] **slack_sdk.rtm** (RTM client)
- [ ] **slack_sdk.signature** (Request Signature Verifier)
### Requirements
Please read the [Contributing guidelines](https://github.com/slackapi/python-slack-sdk/blob/main/.github/contributing.md) and [Code of Conduct](https://slackhq.github.io/code-of-conduct) before creating this issue or pull request. By submitting, you are agreeing to those rules.
| closed | 2022-03-09T17:06:51Z | 2022-03-10T01:48:11Z | https://github.com/slackapi/python-slack-sdk/issues/1189 | [
"question",
"web-client",
"Version: 3x"
] | alextriaca | 1 |
ionelmc/pytest-benchmark | pytest | 79 | Add a "quick and dirty" way to get execution times for test cases | I would like some way to quickly (i.e. without having to modify the test suite) get timing data for test cases, as well as a sum total for the whole test suite.
Something like `--durations` but with an optional (or automatically determined, like with `timeit`) number of repetitions.
Would this be a fit for this plugin? | open | 2017-07-03T19:16:02Z | 2019-01-07T11:02:45Z | https://github.com/ionelmc/pytest-benchmark/issues/79 | [
"documentation"
] | hop | 8 |
qubvel-org/segmentation_models.pytorch | computer-vision | 157 | Qustions: Softmax for Multi-Class | Hi! Thank you for your work!
I have a question: Is it necessary to have a background class when using softmax activation? So that every pixel belongs to a label.
And what is the difference between 'softmax' and 'softmax2d'?
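For context, this is roughly how I am constructing the model at the moment. A minimal sketch (the encoder and the class count are placeholders):
```python
import segmentation_models_pytorch as smp

# 4 foreground classes + 1 background channel is my assumption for softmax;
# whether to use 'softmax2d' or 'softmax' here is exactly what I am unsure about.
model = smp.Unet(
    encoder_name="resnet34",
    classes=4 + 1,
    activation="softmax2d",
)
```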
Thank U! | closed | 2020-03-04T16:54:22Z | 2022-02-10T01:50:55Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/157 | [
"Stale"
] | gjbit | 10 |
slackapi/python-slack-sdk | asyncio | 1,144 | SlackConnectionError: Handshake status 404 - Bot broken since 12/3/21 | I have a bot that was working until 12/3/21. I was thinking it was due to rtm.start being deprecated on 11/30, but your message said it would only affect new apps. So not sure what the issue may be. Please assist
I get the following error:
```
Traceback (most recent call last):
File "/lib/python2.7/site-packages/slackclient/client.py", line 53, in rtm_connect
self.server.rtm_connect(use_rtm_start=with_team_state, **kwargs)
File "/lib/python2.7/site-packages/slackclient/server.py", line 84, in rtm_connect
self.connect_slack_websocket(self.ws_url)
File "/lib/python2.7/site-packages/slackclient/server.py", line 122, in connect_slack_websocket
raise SlackConnectionError(message=str(e))
SlackConnectionError: Handshake status 404
```
I am running python 2.7.5
OpenSSL version 1.1.1 | closed | 2021-12-06T19:32:57Z | 2022-01-24T00:02:29Z | https://github.com/slackapi/python-slack-sdk/issues/1144 | [
"Version: 1x",
"needs info",
"rtm-client",
"web-client",
"auto-triage-stale"
] | syun54 | 11 |
holoviz/panel | jupyter | 7,073 | Several stylings do not load when disconnected from the internet (i.e. behind a corporate firewall) | #### ALL software version info
Panel 1.4.5
Panel is loaded with `pn.serve` from a Uvicorn FastAPI server.
Uvicorn 0.30.5
Panel Server on Ubuntu 22.04
Browser is MS Edge on Windows 11
#### Description of expected behavior and the observed behavior
Please try disconnecting from the internet and connecting (locally) to a Panel server. This simulates the environment behind a corporate firewall, where both the server and the user are in the same network, but no access is available to the web.
Numerous stylings will not work, including:
+ button icons from the "tablers" icons
+ progress bars as used in the Progress widget
+ stylings look a bit different
+ Image widgets do not load properly
+ adding the Code Editor widget to an app will cause the entire app to break (won't come up)
The screenshots below are informative: they show (in MS Edge) the CDN URLs that are not being loaded. Please note there is more than one. I have not tried out all Panel features, so there may be more.
#### Extensions loaded
```
pn.extension('floatpanel', 'tabulator', 'codeeditor',
             nthreads=n_concurrent_threads)
hv.extension('bokeh')
```
#### Screenshots or screencasts of the bug in action
<img width="412" alt="image" src="https://github.com/user-attachments/assets/c4537d9b-20ef-4edb-8049-ef551d05cb31">

| open | 2024-08-04T10:36:23Z | 2024-10-25T19:37:37Z | https://github.com/holoviz/panel/issues/7073 | [] | giladpn | 13 |
wkentaro/labelme | deep-learning | 317 | color assignments | in json_to_dataset.py, i can manipulate the output colors for the labels by changing the values of label_value. however i need to get a specific color given a rgb value. do you have a masterlist of the rgb equivalents of the colors assigned to the values in label_value? | closed | 2019-02-12T07:23:14Z | 2019-02-13T01:48:23Z | https://github.com/wkentaro/labelme/issues/317 | [] | digracesion | 7 |
matterport/Mask_RCNN | tensorflow | 2,170 | [ Data Set Recommendation]: Multi-Class Object Detection Data Set? | Hi,
For some experimental purposes, I require an open-source data set in the following format, intended for a **multi-class** problem.
```
Annotation:
- 1.xml
- 2.xml
- 3.xml
...
Images:
1.jpg
2.jpg
3.jpg
...
```
The `XML` annotations should only contain bounding box information (like in PASCAL VOC), and the images for all classes sit in a single folder (like the annotations). The data set should be neither too big nor too small, and ideally have some reputation.
Any suggestions would be appreciated. :) | closed | 2020-05-09T07:39:03Z | 2020-05-14T21:59:38Z | https://github.com/matterport/Mask_RCNN/issues/2170 | [] | innat | 0 |
clovaai/donut | computer-vision | 158 | Training Donut for a new language | @josianem @gwkrsrch thank you for a great work!
Could you please help me with Donut pretraining for a new language? I am trying to train a Donut model for Ukrainian text.
What advice could you give me in terms of the tokenizer and the amount of data?
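For context, this is roughly the direction I was planning to take for the tokenizer: extending the pretrained processor's vocabulary and resizing the decoder embeddings (a sketch; the token list is just a placeholder). Does this look reasonable, or would you recommend training a new tokenizer from scratch?
```python
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")

# placeholder: Ukrainian-specific tokens/characters missing from the vocabulary
new_tokens = ["<s_uk>", "ї", "є", "ґ"]
processor.tokenizer.add_tokens(new_tokens)
model.decoder.resize_token_embeddings(len(processor.tokenizer))
```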
| open | 2023-03-05T17:14:43Z | 2023-03-05T17:14:43Z | https://github.com/clovaai/donut/issues/158 | [] | Invalid-coder | 0 |
mlfoundations/open_clip | computer-vision | 647 | LAION-400M Citation | Not sure if this is the best spot, but I was looking for a Bibtex citation for LAION-400M's [Neurips 2021 version](https://datacentricai.org/neurips21/papers/159_CameraReady_Workshop_Submission_LAION_400M__Public_Dataset_with_CLIP_Filtered_400M_Image_Text_Pairs.pdf), and had to manually type it out.
I thought it could be added to the README to make it easier for future people.
```
@inproceedings{schuhmann2021laion400m,
author={Schuhmann, Christoph and Vencu, Richard and Beaumont, Romain and Kaczmarczyk, Robert and Mullis, Clayton and Jitsev, Jenia and Komatsuzaki, Aran},
title={{LAION-400M}: Open Dataset of {CLIP}-Filtered 400 Million Image-Text Pairs},
booktitle={Proceedings of Neurips Data-Centric AI Workshop},
year={2021}
}
``` | closed | 2023-09-28T21:01:09Z | 2023-10-24T02:11:25Z | https://github.com/mlfoundations/open_clip/issues/647 | [] | samuelstevens | 0 |
cvat-ai/cvat | computer-vision | 9,113 | (Help) How to use nginx as a reverse proxy for cvat instead of traefik? | I recently installed CVAT on a local VM. CVAT uses docker and installs a local Traefik container within the VM. The docs give instructions on how to run it on domain with free SSL by LetsEncrypt, but these docs assume that SSL termination happens on Traefik reverse proxy. But in my case, I already have a reverse proxy in charge of public facing IP, the SSL termination happens there. I do not have any idea how to remove traefik as a reverse proxy from cvat and use my nginx reverse proxy. Any help will be appreciated. | closed | 2025-02-17T13:22:54Z | 2025-02-21T13:34:16Z | https://github.com/cvat-ai/cvat/issues/9113 | [
"question"
] | osman-goni-cse | 1 |
huggingface/datasets | computer-vision | 6,476 | CI on windows is broken: PermissionError | See: https://github.com/huggingface/datasets/actions/runs/7104781624/job/19340572394
```
FAILED tests/test_load.py::test_loading_from_the_datasets_hub - NotADirectoryError: [WinError 267] The directory name is invalid: 'C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\tmpfcnps56i\\hf-internal-testing___dataset_with_script\\default\\0.0.0\\c240e2be3370bdbd\\dataset_with_script-train.arrow'
``` | closed | 2023-12-06T08:32:53Z | 2023-12-06T09:17:53Z | https://github.com/huggingface/datasets/issues/6476 | [
"bug"
] | albertvillanova | 0 |
xonsh/xonsh | data-science | 5,450 | Refactoring: operators | This is metaissue
In the future we need to solve these points:
* Confusion around `.output` and `.out`.
* Show people good cases where non-blocking `!()` does its job perfectly.
* Use this advice https://github.com/xonsh/xonsh/pull/4445#pullrequestreview-757072815
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2024-05-27T21:27:37Z | 2024-06-22T20:22:07Z | https://github.com/xonsh/xonsh/issues/5450 | [
"metaissue",
"commandlining",
"refactoring"
] | anki-code | 0 |
erdewit/ib_insync | asyncio | 327 | reqHistoricalDataAsync() takes more time with "endDateTime" param | This is a weird bug I have found during the last few months; back in summer 2020 everything worked fine.
So, if I add a date to the `endDateTime` param in `reqHistoricalDataAsync()`, the script takes about 30 times longer to execute than if I leave the `endDateTime=""` param empty.
Script with empty `endDateTime` executes in ~1.4 seconds for 50 stocks and semaphore=50:
```
19:45:49 start time
19:45:49 started NIO
19:45:49 started AAL
19:45:49 started CCL
19:45:49 started BLNK
19:45:49 started JMIA
19:45:49 started NCLH
19:45:49 started SNAP
19:45:49 started DKNG
19:45:49 started PLUG
19:45:49 started WKHS
19:45:49 started SONO
19:45:49 started FE
19:45:49 started OXY
19:45:49 started WORK
19:45:49 started NKLA
19:45:49 started FEYE
19:45:49 started PCG
19:45:49 started UBER
19:45:49 started UAL
19:45:49 started INO
19:45:49 started MRNA
19:45:49 started SBE
19:45:49 started LYFT
19:45:49 started TWTR
19:45:49 started IQ
19:45:49 started JWN
19:45:49 started DVN
19:45:49 started BILI
19:45:49 started CIIC
19:45:49 started MGM
19:45:49 started SPWR
19:45:49 started GME
19:45:49 started KSS
19:45:49 started NUAN
19:45:49 started VIPS
19:45:49 started BLDP
19:45:49 started HST
19:45:49 started DISCA
19:45:49 started LVS
19:45:49 started HAL
19:45:49 started LB
19:45:49 started FTCH
19:45:49 started SAVE
19:45:49 started CNK
19:45:49 started SPG
19:45:49 started HUYA
19:45:49 started NOV
19:45:49 started SDC
19:45:49 started NET
19:45:49 started EQT
19:45:50 ended NIO, len=136
19:45:50 ended CCL, len=136
19:45:50 ended AAL, len=136
19:45:50 ended JMIA, len=136
19:45:50 ended BLNK, len=136
19:45:50 ended NCLH, len=136
19:45:50 ended DKNG, len=136
19:45:50 ended SNAP, len=136
19:45:50 ended PLUG, len=136
19:45:50 ended FE, len=136
19:45:50 ended WKHS, len=136
19:45:50 ended OXY, len=136
19:45:50 ended SONO, len=136
19:45:50 ended WORK, len=136
19:45:50 ended NKLA, len=136
19:45:50 ended PCG, len=136
19:45:50 ended FEYE, len=136
19:45:50 ended UAL, len=136
19:45:50 ended UBER, len=136
19:45:50 ended SBE, len=136
19:45:50 ended INO, len=136
19:45:50 ended MRNA, len=136
19:45:50 ended TWTR, len=136
19:45:50 ended JWN, len=136
19:45:50 ended LYFT, len=136
19:45:50 ended IQ, len=136
19:45:50 ended DVN, len=136
19:45:50 ended BILI, len=136
19:45:50 ended CIIC, len=136
19:45:50 ended SPWR, len=136
19:45:50 ended MGM, len=136
19:45:50 ended GME, len=136
19:45:50 ended NUAN, len=136
19:45:50 ended BLDP, len=136
19:45:50 ended KSS, len=136
19:45:50 ended VIPS, len=136
19:45:50 ended HST, len=136
19:45:50 ended DISCA, len=136
19:45:50 ended LVS, len=136
19:45:51 ended HAL, len=136
19:45:51 ended LB, len=136
19:45:51 ended FTCH, len=136
19:45:51 ended SAVE, len=136
19:45:51 ended CNK, len=136
19:45:51 ended SDC, len=136
19:45:51 ended SPG, len=136
19:45:51 ended HUYA, len=136
19:45:51 ended NOV, len=136
19:45:51 ended NET, len=136
19:45:51 ended EQT, len=136
1.41 execution seconds
```
If I add `endDateTime='20210106 23:59:59'`, it will take more than 30 seconds and print some errors with `errorEvent`. Full code:
```
from ib_insync import *
import asyncio
import pandas as pd
import threading
import time
from datetime import datetime, timedelta, timezone
import nest_asyncio
nest_asyncio.apply()
# 50 stocks
tickers = 'NIO AAL CCL BLNK JMIA NCLH SNAP DKNG PLUG WKHS SONO FE OXY WORK NKLA FEYE PCG UBER UAL INO MRNA SBE LYFT TWTR IQ JWN DVN BILI CIIC MGM SPWR GME KSS NUAN VIPS BLDP HST DISCA LVS HAL LB FTCH SAVE CNK SPG HUYA NOV SDC NET EQT'
class Trader:
def __init__(self, ticker):
self.ticker = ticker
ib.errorEvent += self.onError
def onError(self, reqId, errorCode, errorString, contract):
print({'ticker': self.ticker, 'errorCode': errorCode, 'reqId': reqId, 'errorString': errorString, 'contract': contract})
pass
async def _init(self):
print('{} started {}'.format(datetime.now().strftime('%H:%M:%S'), self.ticker))
c = 0
while 1:
c += 1
df = util.df(await ib.reqHistoricalDataAsync(
Stock(self.ticker, 'SMART', 'USD'),
endDateTime='20210106 23:59:59',
#endDateTime='',
durationStr='1 D',
#durationStr='86400 S',
barSizeSetting='1 min',
whatToShow='TRADES',
useRTH=True,
#timeout=10,
formatDate=1))
try:
x = df.empty
df.index = pd.to_datetime(df['date'])
df.index.name = 'date'
print('{} ended {}, len={}'.format(datetime.now().strftime('%H:%M:%S'), self.ticker, len(df)))
break
except (AttributeError):
print('{} {} AttributeError'.format(datetime.now().strftime('%H:%M:%S'), self.ticker))
except Exception as e:
print('{} {} error: {}'.format(datetime.now().strftime('%H:%M:%S'), self.ticker, e))
if c == 5:
return None
await asyncio.sleep(1)
except:
print('{} {} error'.format(datetime.now().strftime('%H:%M:%S'), self.ticker))
async def fetch_tickers():
return await asyncio.gather(*(asyncio.ensure_future(safe_trader(ticker)) for ticker in tickers.split(' ')))
async def safe_trader(ticker):
async with sem:
t = Trader(ticker)
return await t._init()
if __name__ == '__main__':
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1) # 7496, 7497, 4001, 4002
try:
start_time = time.time()
print('{} start time'.format(datetime.now().strftime('%H:%M:%S')))
sem = asyncio.Semaphore(50)
loop = asyncio.get_event_loop()
results = loop.run_until_complete(fetch_tickers())
print("%.2f execution seconds" % (time.time() - start_time))
ib.disconnect()
except (KeyboardInterrupt, SystemExit):
ib.disconnect()
```
Any ideas? Thank you for help anyway. | closed | 2021-01-08T17:00:48Z | 2021-03-07T21:32:41Z | https://github.com/erdewit/ib_insync/issues/327 | [] | fridary | 3 |
davidteather/TikTok-Api | api | 571 | [FEATURE_REQUEST] - Adding "verify" as a parameter in the requests. | When using proxy servers, some require passing "verify=False" into the requests.get() call.
Currently, verify is not passed into the requests call, which does not allow me to use a proxy.
Let me know if there is another way around this.
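For clarity, this is the kind of call I would like the library to make internally. A rough sketch (URL and proxy address are placeholders):
```python
import requests

resp = requests.get(
    "https://www.tiktok.com/some-endpoint",  # placeholder URL
    proxies={"https": "http://my-proxy:8080"},  # placeholder proxy
    verify=False,
)
```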
| closed | 2021-04-21T22:36:43Z | 2021-05-15T01:31:21Z | https://github.com/davidteather/TikTok-Api/issues/571 | [
"feature_request"
] | bmader12 | 2 |
PaddlePaddle/ERNIE | nlp | 122 | The ipdb in ELMo's LAC_Demo network.py should not be exposed to users | network.py line 12 contains `import ipdb`
ipdb is a debugging tool and should not be exposed to users; otherwise it may cause errors, as shown below:
<img width="864" alt="3a20b5a8c0e34fe9e15d5df2999d1d8c" src="https://user-images.githubusercontent.com/48793257/57214900-f0555f00-701d-11e9-9cdb-0cd059958f98.png">
| closed | 2019-05-06T08:42:42Z | 2019-06-11T06:49:49Z | https://github.com/PaddlePaddle/ERNIE/issues/122 | [] | Steffy-zxf | 0 |
TencentARC/GFPGAN | pytorch | 390 | not working | not working | open | 2023-06-10T14:41:52Z | 2023-06-10T14:41:52Z | https://github.com/TencentARC/GFPGAN/issues/390 | [] | Vigprint | 0 |
docarray/docarray | fastapi | 1,780 | Release Note | # Release Note
This release contains 3 bug fixes and 4 documentation improvements, including 1 breaking change.
## 💥 Breaking Changes
### Changes to the return type of `DocList.to_json()` and `DocVec.to_json()`
In order to make the `to_json` method consistent across different classes, we changed its return type in `DocList` and `DocVec` to `str`.
This means that, if you use this method in your application, make sure to update your codebase to expect `str` instead of `bytes`.
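For example, a minimal migration sketch (the document class and field are illustrative):

```python
from docarray import BaseDoc, DocList


class MyDoc(BaseDoc):
    text: str = ""


docs = DocList[MyDoc]([MyDoc(text="hi")])
json_str = docs.to_json()        # now returns str
json_bytes = json_str.encode()   # only if bytes are still needed downstream
```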
## 🐞 Bug Fixes
### Make DocList.to_json() and DocVec.to_json() return str instead of bytes (#1769)
This release changes the return type of the methods `DocList.to_json()` and `DocVec.to_json()` in order to be consistent with `BaseDoc.to_json()` and other pydantic models. After this release, these methods will return `str` type data instead of `bytes`.
💥 Since the return type is changed, this is considered a breaking change.
### Casting in reduce before appending (#1758)
This release introduces type casting internally in the `reduce` helper function, casting its inputs before appending them to the final result. This will make it possible to reduce documents whose schemas are compatible but not exactly the same.
### Skip doc attributes in `__annotations__` but not in `__fields__` (#1777)
This release fixes an issue in the create_pure_python_type_model helper function. Starting with this release, only attributes in the class `__fields__` will be considered during type creation.
The previous behavior broke applications when users introduced a ClassVar in an input class:
```python
class MyDoc(BaseDoc):
endpoint: ClassVar[str] = "my_endpoint"
input_test: str = ""
```
```text
field_info = model.__fields__[field_name].field_info
KeyError: 'endpoint'
```
Kudos to @NarekA for raising the issue and contributing a fix in the Jina project, which was ported to DocArray.
## 📗 Documentation Improvements
- Explain how to set Document config (#1773)
- Add workaround for torch compile (#1754)
- Add note about pickling dynamically created Doc class (#1763)
- Improve the docstring of `filter_docs` (#1762)
## 🤟 Contributors
We would like to thank all contributors to this release:
- Sami Jaghouar (@samsja )
- Johannes Messner (@JohannesMessner )
- AlaeddineAbdessalem (@alaeddine-13 )
- Joan Fontanals (@JoanFM ) | closed | 2023-09-07T07:09:57Z | 2023-09-07T13:50:39Z | https://github.com/docarray/docarray/issues/1780 | [] | JoanFM | 5 |
holoviz/panel | jupyter | 7,169 | Overlap in icons with new ChatMessage Layout | ChatMessage(Tabs(Tabulator()))
<img width="925" alt="image" src="https://github.com/user-attachments/assets/e671c158-3cbb-4096-bac9-8619d2273f6f">
| closed | 2024-08-19T20:43:21Z | 2024-08-19T23:18:15Z | https://github.com/holoviz/panel/issues/7169 | [] | ahuang11 | 1 |
pydata/xarray | numpy | 9,343 | datatree: Collapsible items in `groups` DataTree | ### What is your issue?
_Originally posted by @agrouaze in https://github.com/xarray-contrib/datatree/issues/145_
_Attempted implementation in [this PR](https://github.com/xarray-contrib/datatree/pull/155)_
Having collapsible items (like `xarray.Dataset` objects) in the `groups` `_repr_html_` would help give a user-friendly overview of these Python objects.
I am wondering whether this feature is already available or not.

Clarification: The ask here is to make individual groups collapsible in the HTML rendering, so it is possible to see all the child groups, without having to see the full contents of a group. | open | 2024-08-13T16:14:40Z | 2024-08-13T18:16:02Z | https://github.com/pydata/xarray/issues/9343 | [
"enhancement",
"topic-html-repr",
"topic-DataTree"
] | owenlittlejohns | 1 |
AutoGPTQ/AutoGPTQ | nlp | 159 | [BUG] Error running quantized tiiuae/falcon-7b-instruct model | Hi,
I was able to quantize the model using the following code:
```python
pretrained_model_dir = 'tiiuae/falcon-7b-instruct'
quantized_model_dir = 'tiiuae/falcon-7b-instruct-4bit-128g'
quantize_config = BaseQuantizeConfig(
bits=4, # quantize model to 4-bit
group_size=128, # it is recommended to set the value to 128
desc_act=False
)
max_memory = {}
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
# load un-quantized model, by default, the model will always be loaded into CPU memory
model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config, max_memory=max_memory, trust_remote_code=True)
# quantize model, the examples should be list of dict whose keys can only be "input_ids" and "attention_mask"
model.quantize(examples, use_triton=False, use_cuda_fp16=True)
# save quantized model
model.save_quantized(quantized_model_dir)
```
Then, when I try to run the model:
```python
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, use_safetensors=True, use_strict=False, use_triton=False, quantize_config=quantize_config, use_cuda_fp16=False, trust_remote_code=True)
model.to(device)
model.eval()
inputs = tokenizer("Write a letter to OpenAI CEO Sam Altman as to why GPT3 model should be open-sourced. ", return_tensors="pt").to(model.device)
# inputs.pop('token_type_ids')
outputs = model.generate(**inputs,
num_beams=5,
no_repeat_ngram_size=4,
max_length=512)
print(f"Output: {tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]}")
```
I get the following error:
```
ValueError: The following `model_kwargs` are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)
```
If I remove the key `token_type_ids` from `inputs`, I get the following error:
```
RuntimeError: shape '[-1, 128, 4672]' is invalid for input of size 21229568
```
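For completeness, this is how I removed the key (a minimal sketch; it only addresses the first error, not the shape mismatch):
```python
inputs = tokenizer(
    "Write a letter to OpenAI CEO Sam Altman as to why GPT3 model should be open-sourced. ",
    return_tensors="pt",
).to(model.device)

# Drop the key that generate() refuses; an equivalent, more general filter
# would keep only input_ids and attention_mask.
inputs.pop("token_type_ids", None)

outputs = model.generate(**inputs, num_beams=5, no_repeat_ngram_size=4, max_length=512)
```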
How can I solve this? | closed | 2023-06-15T08:43:01Z | 2023-06-19T16:49:15Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/159 | [
"bug"
] | abhinavkulkarni | 8 |
davidsandberg/facenet | computer-vision | 1,245 | Unable to convert onnx model to TRT model | I have converted the face-recognition model "20180402-114759" to ONNX format with onnx==1.14.1. When I try to convert the ONNX model to TRT, I get these errors:
"[E] Error[4]: [graphShapeAnalyzer.cpp::processCheck::862] Error Code 4: Internal Error (StatefulPartitionedCall/inception_resnet_v1/Conv2d_2a_3x3/Conv2D: spatial dimension of convolution/deconvolution output cannot be negative (build-time output dimension of axis 2 is -2))
[12/16/2023-15:52:45] [E] Engine could not be created from network
[12/16/2023-15:52:45] [E] Building engine failed
[12/16/2023-15:52:45] [E] Failed to create engine from model or file.
[12/16/2023-15:52:45] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8601] # ...."
I use TensorRT 8.6 with CUDA 12.1. This [doc](https://onnxruntime.ai/docs/execution-providers/TensorRT-ExecutionProvider.html) says that for TensorRT 8.6 the onnxruntime should be 1.15 or higher, but I am not able to convert the TensorFlow model to an ONNX model with onnx==1.15.0; I run into other errors there. How can I fix this so that I can successfully convert the TensorFlow model to ONNX and then to a TRT model?
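For reference, this is how I have been inspecting the exported graph's input shape to check for dynamic or unknown spatial dimensions (a minimal sketch; the file name is from my local setup):
```python
import onnx

model = onnx.load("facenet_20180402-114759.onnx")  # local file name

# Print the declared input shapes; a symbolic dim_param or an unknown
# height/width is usually why TensorRT cannot compute the conv output size.
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```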
Best Regards! | open | 2023-12-16T14:08:14Z | 2024-08-07T13:31:18Z | https://github.com/davidsandberg/facenet/issues/1245 | [] | k-khosravi | 1 |
davidteather/TikTok-Api | api | 579 | [BUG] - Cannot download videos from by_hashtag with playwright because of EmptyResponseError | **Describe the bug**
Because of issue #434 I chose to use Playwright (Selenium seems to get detected when using by_hashtag). My goal was to download videos with the by_hashtag method. I have no problem getting tiktoks with the by_trending method and downloading them.
But I cannot download data from api.get_Video_By_TikTok() when using by_hashtag. The weird thing is that I cannot get videos with by_hashtag at all when using custom_did, because of a TikTokApi.exceptions.EmptyResponseError: Empty response from Tiktok ... . I can get them without custom_did, but obviously the tiktoks retrieved that way cannot be downloaded with api.get_Video_By_TikTok() because of an Access Denied.
**The buggy code**
Please insert the code that is throwing errors or is giving you weird unexpected results.
```
verifyFp = 'verify_XXX'
did = str(random.randint(10000, 999999999))
api = TikTokApi.get_instance(custom_verifyFp=verifyFp, use_test_endpoints=True)
tiktoks = api.byHashtag("funny", count = 3, custom_verifyFp=verifyFp, custom_did = did)
data = api.get_Video_By_TikTok(tiktoks[0], custom_verifyFp=verifyFp, custom_did = did)# bytes of the video
with open("0.mp4", 'wb') as output:
output.write(data) # saves data to the mp4 file
```
**Expected behavior**
Retrieve and download videos from by_hashtag method
**Error Trace**
```
ERROR:root:TikTok response:
Traceback (most recent call last):
File "AVM_TikTok_2.py", line 545, in <module>
tiktoks = api.byHashtag("funny", count = 3, custom_verifyFp=verifyFp, custom_did = did)
File "C:\Users\X\Anaconda3\lib\site-packages\TikTokApi\tiktok.py", line 915, in by_hashtag
res = self.getData(url=api_url, **kwargs)
File "C:\Users\X\Anaconda3\lib\site-packages\TikTokApi\tiktok.py", line 283, in get_data
) from None
TikTokApi.exceptions.EmptyResponseError: Empty response from Tiktok to https://m.tiktok.com/api/challenge/item_list/?aid=1988&app_name=tiktok_web&device_platform=web&referer=&root_referer=&user_agent=Mozilla%252F5.0%2B%28iPhone%253B%2BCPU%2BiPhone%2BOS%2B12_2%2Blike%2BMac%2BOS%2BX%29%2BAppleWebKit%252F605.1.15%2B%28KHTML%2C%2Blike%2BGecko%29%2BVersion%252F13.0%2BMobile%252F15E148%2BSafari%252F604.1&cookie_enabled=true&screen_width=1029&screen_height=1115&browser_language=&browser_platform=&browser_name=&browser_version=&browser_online=true&ac=4g&timezone_name=&appId=1233&appType=m&isAndroid=False&isMobile=False&isIOS=False&OS=windows&count=3&challengeID=5424&type=3&secUid=&cursor=0&priority_region=&verifyFp=verify_kobqsdpl_QSXJKxZh_pg9u_4Iyr_8Blt_xMoTx3w5ry3y&did=107654595&_signature=_02B4Z6wo00f01rODarAAAIBDqO373H0xHFazkm4AAMx-68
```
**Desktop (please complete the following information):**
- OS: [e.g. Windows 10]
- TikTokApi Version [e.g. 3.9.5]
**Additional context**
In summary:
by_trending() => no problem to retrieve and download
by_hashtag() + no custom_did => can retrieve but not download because of Access Denied
by_hashtag() + custom_did => cannot retrieve with by_hashtag at all because of a TikTokApi.exceptions.EmptyResponseError: Empty response from Tiktok to ...
I tested all the combinations of with and without custom_did in each method (by_hashtag, api.get_Video_By_TikTok or get_instance) and nothing worked. | closed | 2021-05-05T18:40:15Z | 2021-08-07T00:28:02Z | https://github.com/davidteather/TikTok-Api/issues/579 | [
"bug"
] | Goldrest | 3 |
ydataai/ydata-profiling | data-science | 1,194 | bug: variables list is causing a misconfiguration in the UI variables section | ### Current Behaviour

### Expected Behaviour
It would be easier on the eyes if these were rendered as pill buttons instead, just like the ones in "Overview":

**Example:**

### Data Description
https://pandas-profiling.ydata.ai/examples/master/features/united_report.html
### pandas-profiling version
vdev
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | closed | 2022-12-02T09:24:21Z | 2023-03-08T16:58:50Z | https://github.com/ydataai/ydata-profiling/issues/1194 | [
"bug 🐛"
] | stormbeforesunsetbee | 9 |
jessevig/bertviz | nlp | 71 | TypeError: new(): invalid data type 'str' when using neuron_view_bert.py with my own model | Hi, it shows the error `TypeError: new(): invalid data type 'str'` when I use neuron_view_bert.py with my own fine-tuned model.
Is there any solution for this?
Here is my code:
```
import sys
get_ipython().system('test -d bertviz_repo && echo "FYI: bertviz_repo directory already exists, to pull latest version uncomment this line: !rm -r bertviz_repo"')
# !rm -r bertviz_repo # Uncomment if you need a clean pull from repo
get_ipython().system('test -d bertviz_repo || git clone https://github.com/jessevig/bertviz bertviz_repo')
if not 'bertviz_repo' in sys.path:
sys.path += ['bertviz_repo']
get_ipython().system('pip install regex')
from bertviz.transformers_neuron_view import BertModel, BertTokenizer
from bertviz.neuron_view import show
from transformers import BertTokenizer, BertModel
from transformers import BertConfig, BertForSequenceClassification, BertTokenizer, AdamW
get_ipython().run_cell_magic('javascript', '', "require.config({\n paths: {\n d3: '//cdnjs.cloudflare.com/ajax/libs/d3/3.4.8/d3.min',\n jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',\n }\n});")
from IPython.display import clear_output
# helper to display the visualization inside a Jupyter notebook
def call_html():
import IPython
display(IPython.core.display.HTML('''
<script src="/static/components/requirejs/require.js"></script>
<script>
requirejs.config({
paths: {
base: '/static/base',
"d3": "https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.8/d3.min",
jquery: '//ajax.googleapis.com/ajax/libs/jquery/2.0.0/jquery.min',
},
});
</script>
'''))
clear_output()
tokenizer = BertTokenizer(vocab_file='C:\\Users\\e7789520\\Desktop\\HO TSUNG TSE\\TaipeiCityFood\\bert-base-chinese-vocab.txt')
bert_config, bert_class, bert_tokenizer = (BertConfig, BertForSequenceClassification, BertTokenizer)
config = bert_config.from_pretrained('C:\\Users\\e7789520\\Desktop\\HO TSUNG TSE\\TaipeiCityFood\\trained_model\\config.json',output_attentions=True)
model = bert_class.from_pretrained('C:\\Users\\e7789520\\Desktop\\HO TSUNG TSE\\TaipeiCityFood\\trained_model\\pytorch_model.bin', from_tf=bool('.ckpt' in 'bert-base-chinese'), config=config)
sentence_a = "大麥克"
sentence_b = "我想要牛肉堡"
# get the tokens and feed them to BERT to obtain the attention
model_type = "bert"
display_mode ="dark"
layer=2
head=0
inputs = tokenizer.encode_plus(sentence_a, sentence_b, return_tensors='pt', add_special_tokens=True)
token_type_ids = inputs['token_type_ids']
input_ids = inputs['input_ids']
attention = model(input_ids, token_type_ids)[-1]
sentence_b_start = token_type_ids[0].tolist().index(1)
input_id_list = input_ids[0].tolist() # Batch index 0
tokens = tokenizer.convert_ids_to_tokens(input_id_list)
call_html()
``` | closed | 2021-04-13T10:11:38Z | 2021-05-08T14:29:08Z | https://github.com/jessevig/bertviz/issues/71 | [] | leo88359 | 2 |
horovod/horovod | pytorch | 3,968 | Distributed Models guide with Gloo has disappeared | Hi,
Looks like spell.ml no longer has a web presence. If there's a copy of the guide on how to run distributed models using Gloo (listed here: https://horovod.readthedocs.io/en/latest/summary_include.html#gloo under Guides), that would be great to see. Thanks!
tflearn/tflearn | data-science | 1067 | How to deploy TFlearn deep learning model to Google cloud ML or AWS machine learning service? | I have created a TFLearn deep learning model for QA. I want to deploy that model to the cloud. Does anyone know about Google Cloud ML Engine or AWS machine learning? Which one is good for deep learning model deployment? Does Google Cloud ML Engine support TFLearn deep learning models? | open | 2018-06-16T11:56:44Z | 2018-07-21T14:16:44Z | https://github.com/tflearn/tflearn/issues/1067 | [] | abhijitdalavi | 1 |
ets-labs/python-dependency-injector | asyncio | 70 | Improve Providers extending | At the moment, every extended provider has to implement the override logic itself:
``` python
class Extended(Provider):
def __call__(self, *args, **kwargs):
"""Return provided instance."""
if self.overridden:
return self.last_overriding(*args, **kwargs)
```
The provider extension process needs to be improved.
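One possible direction, as a rough sketch (the method names here are placeholders, not the library's actual API):
``` python
class Provider:
    """Base provider that owns the overriding logic once, for every subclass."""

    def __init__(self):
        self.overridden = tuple()

    @property
    def last_overriding(self):
        return self.overridden[-1] if self.overridden else None

    def __call__(self, *args, **kwargs):
        if self.overridden:
            return self.last_overriding(*args, **kwargs)
        return self._provide(*args, **kwargs)

    def _provide(self, *args, **kwargs):
        raise NotImplementedError()


class Extended(Provider):
    def _provide(self, *args, **kwargs):
        """Subclasses only implement the actual object creation."""
        return object()
```
This way the overriding check lives in one place and extended providers only define how to create the instance.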
| closed | 2015-05-24T23:36:52Z | 2015-05-25T07:46:17Z | https://github.com/ets-labs/python-dependency-injector/issues/70 | [
"refactoring"
] | rmk135 | 0 |
miguelgrinberg/python-socketio | asyncio | 273 | Starvation | Hi, I would like to send data from the server to the client on the connection event. Is that possible? Why does the server seem to be blocked if it emits an event in its own connect handler?
**latency_client.py**:
```python
import asyncio
import time
import socketio
loop = asyncio.get_event_loop()
sio = socketio.AsyncClient()
start_timer = None
async def send_ping():
global start_timer
start_timer = time.time()
await sio.emit('ping_from_client')
@sio.on('connect')
async def on_connect():
print('connected to server')
# await send_ping()
@sio.on('my_test')
async def on_my_test(data):
print('my_test')
await send_ping()
@sio.on('pong_from_server')
async def on_pong(data):
global start_timer
latency = time.time() - start_timer
print('latency is {0:.2f} ms'.format(latency * 1000))
await sio.sleep(1)
await send_ping()
async def start_server():
await sio.connect('http://localhost:5000')
await sio.wait()
if __name__ == '__main__':
loop.run_until_complete(start_server())
```
**latency_server.py**:
```python
from aiohttp import web
import socketio
sio = socketio.AsyncServer(async_mode='aiohttp')
app = web.Application()
sio.attach(app)
@sio.on('connect')
async def on_connect(sid, environ):
await sio.emit('my_test', room=sid)
@sio.on('ping_from_client')
async def ping(sid):
await sio.emit('pong_from_server', room=sid)
if __name__ == '__main__':
web.run_app(app, port=5000)
```
The server never calls the ping function. What's wrong with my code?
Thank you.
| closed | 2019-03-16T18:54:16Z | 2019-03-16T19:48:01Z | https://github.com/miguelgrinberg/python-socketio/issues/273 | [
"bug"
] | voidloop | 2 |
healthchecks/healthchecks | django | 348 | Notification emails: include more details about the check | Consider including:
* check's tags
* check's schedule
* last ping, total pings
* log of received pings
* request body of the last `/fail` request (#308)
Consider changing the summary table to a table of totals:
* 3 checks are down
* 17 checks are up
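A rough sketch of how the totals line could be built (purely illustrative helper, not actual Healthchecks code):
```python
from collections import Counter


def summary_line(statuses):
    """Build e.g. '3 checks are down, 17 checks are up' from a list of statuses."""
    counts = Counter(statuses)
    parts = []
    if counts.get("down"):
        parts.append(f"{counts['down']} checks are down")
    if counts.get("up"):
        parts.append(f"{counts['up']} checks are up")
    return ", ".join(parts)


print(summary_line(["down"] * 3 + ["up"] * 17))
```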
This would make the notification emails more search-friendly, and there would be fewer items competing for the recipient's attention. | closed | 2020-03-25T16:55:37Z | 2021-03-08T16:54:30Z | https://github.com/healthchecks/healthchecks/issues/348 | [] | cuu508 | 6 |