repo_name (string, lengths 9-75) | topic (string, 30 classes) | issue_number (int64, 1-203k) | title (string, lengths 1-976) | body (string, lengths 0-254k) | state (string, 2 classes) | created_at (string, length 20) | updated_at (string, length 20) | url (string, lengths 38-105) | labels (sequence, lengths 0-9) | user_login (string, lengths 1-39) | comments_count (int64, 0-452)
---|---|---|---|---|---|---|---|---|---|---|---|
localstack/localstack | python | 12,244 | bug: TEST_AWS_ACCOUNT_ID does not apply | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
` docker run -d \
--name localstack \
-p "4566:4566" \
--restart always \
-e TEST_AWS_ACCOUNT_ID="00000000001" \
-e DEFAULT_ACCOUNT_ID="00000000001" \
-e SERVICES=${SERVICES- } \
-e DEBUG=${DEBUG- } \
-e DATA_DIR=${DATA_DIR- } \
-e PORT_WEB_UI=${PORT_WEB_UI- } \
-e LAMBDA_EXECUTOR=docker-reuse \
-e KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- } \
--privileged \
localstack/localstack`
when I run:
`aws --endpoint-url=http://localhost:4566 sts get-caller-identity --query "Account" --output text`
I still see 00000000000.
Same when I create a resource and check its ARN.
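One possible explanation, offered here as a hedged aside: recent LocalStack releases no longer honour `TEST_AWS_ACCOUNT_ID`/`DEFAULT_ACCOUNT_ID`; the account is instead selected per request from the client's access key ID, which must be a 12-digit account number (the value above has only 11 digits). A minimal sketch of checking this, assuming that multi-account mechanism:
```bash
# select the account by using a 12-digit account ID as the access key ID
AWS_ACCESS_KEY_ID=000000000001 AWS_SECRET_ACCESS_KEY=test \
  aws --endpoint-url=http://localhost:4566 sts get-caller-identity --query "Account" --output text
```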
### Expected Behavior
_No response_
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
docker run localstack/localstack
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
awslocal s3 mb s3://mybucket
### Environment
```markdown
- OS:
- LocalStack:
LocalStack version:
LocalStack Docker image sha: a51a40f61ab458eb156777d60186b39ba799ebd01872578751eebbe6ac9625ca
LocalStack build date: 2025-02-10
LocalStack build git hash:
```
### Anything else?
_No response_ | closed | 2025-02-10T12:54:04Z | 2025-03-03T14:03:16Z | https://github.com/localstack/localstack/issues/12244 | [
"type: bug",
"status: response required",
"status: resolved/stale",
"area: multi-account"
] | skhreshefsharvit | 3 |
jpadilla/django-rest-framework-jwt | django | 469 | Call an endpoint without Authorization header | I have a view derived from viewsets.ModelViewSet. When I call it with `Authorization` header, request is checked for authentication. But if I call it without the header, the access is granted.
I found this line of [code](https://github.com/GetBlimp/django-rest-framework-jwt/blob/master/rest_framework_jwt/authentication.py#L28). It seems to me that if the header is missing or not valid, it simply returns None instead of raising an exception.
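As a hedged aside, that reading matches DRF's intended design: an authentication class returning None means it did not attempt authentication, and rejecting anonymous requests is normally the job of permission classes. A minimal sketch with an assumed viewset name:
```python
from rest_framework import viewsets
from rest_framework.permissions import IsAuthenticated

class MyViewSet(viewsets.ModelViewSet):
    # authentication classes only identify the caller; permissions enforce access,
    # so anonymous requests are rejected with 401 instead of being let through
    permission_classes = [IsAuthenticated]
```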
Have I correctly read the code? | closed | 2019-02-13T06:35:39Z | 2019-02-14T02:24:31Z | https://github.com/jpadilla/django-rest-framework-jwt/issues/469 | [] | Li-ReDBox | 1 |
pydata/pandas-datareader | pandas | 892 | Fails to Run on Google Collab | I've tried to run the following on google collab. Note the code runs fine on my Pycharm IDE
```python
import pandas_datareader.data as web
import pandas as pd
import datetime as dt
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
start = dt.datetime(2017, 1, 3)
end = dt.datetime(2017, 11, 20)
prices = web.DataReader('AAPL', 'yahoo', start, end)['Close']
```
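A hedged workaround sketch, fetching the prices with yfinance directly instead of the 'yahoo' DataReader backend, which Yahoo API changes frequently break (this is an assumption about the environment, not the original code):
```python
import datetime as dt
import yfinance as yf

start = dt.datetime(2017, 1, 3)
end = dt.datetime(2017, 11, 20)
# downloads OHLCV data and keeps only the closing prices
prices = yf.download('AAPL', start=start, end=end)['Close']
```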
Here's the traceback
`
KeyboardInterrupt Traceback (most recent call last)
<ipython-input-45-b76722b62ca8> in <module>()
14 end = dt.datetime(2017, 11, 20)
15
---> 16 prices = web.DataReader('AAPL', 'yahoo', start, end)['Close']
17
18 returns = prices.pct_change()
/usr/local/lib/python3.7/dist-packages/yfinance/multi.py in download(tickers, start, end, actions, threads, group_by, auto_adjust, back_adjust, progress, period, show_errors, interval, prepost, proxy, rounding, **kwargs)
95 rounding=rounding)
96 while len(shared._DFS) < len(tickers):
---> 97 _time.sleep(0.01)
98
99 # download synchronously
` | closed | 2021-07-20T14:54:51Z | 2021-07-20T16:15:23Z | https://github.com/pydata/pandas-datareader/issues/892 | [] | evanpfeffer | 13 |
cvat-ai/cvat | tensorflow | 9,217 | 500 error after login | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Login
Immediately after Login the following 500 error appears in a popup:
```
[2025-03-17 07:45:32,385] ERROR django.request: Internal Server Error: /api/requests
Traceback (most recent call last):
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 518, in thread_handler
raise exc_info[1]
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/exception.py", line 42, in inner
response = await get_response(request)
File "/opt/venv/lib/python3.10/site-packages/django/core/handlers/base.py", line 253, in _get_response_async
response = await wrapped_callback(
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 468, in __call__
ret = await asyncio.shield(exec_coro)
File "/opt/venv/lib/python3.10/site-packages/asgiref/current_thread_executor.py", line 40, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/venv/lib/python3.10/site-packages/asgiref/sync.py", line 522, in thread_handler
return func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/csrf.py", line 56, in wrapper_view
return view_func(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/viewsets.py", line 124, in view
return self.dispatch(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 509, in dispatch
response = self.handle_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
raise exc
File "/opt/venv/lib/python3.10/site-packages/rest_framework/views.py", line 506, in dispatch
response = handler(request, *args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/utils/decorators.py", line 46, in _wrapper
return bound_method(*args, **kwargs)
File "/opt/venv/lib/python3.10/site-packages/django/views/decorators/cache.py", line 62, in _wrapper_view_func
response = view_func(request, *args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3779, in wrapper
return func(*args, **kwargs)
File "/home/django/cvat/apps/engine/views.py", line 3803, in list
user_jobs = self._get_rq_jobs(user_id)
File "/home/django/cvat/apps/engine/views.py", line 3745, in _get_rq_jobs
jobs = self._get_rq_jobs_from_queue(queue, user_id)
File "/home/django/cvat/apps/engine/views.py", line 3722, in _get_rq_jobs_from_queue
if job and is_rq_job_owner(job, user_id):
File "/home/django/cvat/apps/engine/rq.py", line 315, in is_rq_job_owner
return BaseRQMeta.for_job(rq_job).user.id == user_id
File "/home/django/cvat/apps/engine/rq.py", line 196, in user
return UserMeta(self.meta[RQJobMetaField.USER])
KeyError: 'user'
```
### Expected Behavior
No error message
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
Server version: 2.31.0
UI version: 2.31.0
``` | closed | 2025-03-17T07:50:50Z | 2025-03-17T15:43:54Z | https://github.com/cvat-ai/cvat/issues/9217 | [
"bug"
] | eporsche | 2 |
iperov/DeepFaceLab | machine-learning | 659 | Limiting Used Cores | Hi There
I have a problem: the new Merger uses ALL cores. That means in my case 64 cores.
In the last version there was an option to limit the cores, but now this option is gone.
How can I limit the used cores? | closed | 2020-03-17T16:13:03Z | 2020-03-18T07:23:15Z | https://github.com/iperov/DeepFaceLab/issues/659 | [] | blanuk | 1 |
deepfakes/faceswap | machine-learning | 767 | FFMPEG image sequence with longer durations after generated an output video | **Describe the bug**
The problem is that the original video's duration is shorter than the generated one's. The original is 35 seconds but the generated video is 41 seconds. The image frames captured and converted are the same: 1037 frames.
**To Reproduce**
I'm using this command to generate the video.
```
ffmpeg -i video-frame-%0d.png -c:v libx264 -vf "fps=25,format=yuv420p" out.mp4
```
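A hedged explanation of the numbers: 1037 frames over the original 35 s is roughly 29.6 fps, but the command above lets the image demuxer assume its default 25 fps input rate, so the output lasts 1037 / 25 ≈ 41.5 s, matching the 41 seconds observed. A sketch of a command that declares the real input frame rate (the exact rate is an assumption derived from 1037/35):
```bash
# tell ffmpeg the input frame rate instead of re-timing the sequence at 25 fps
ffmpeg -framerate 29.63 -i video-frame-%0d.png -c:v libx264 -vf "format=yuv420p" out.mp4
```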
**Expected behavior**
The output video duration is supposed to be exactly the same as the original video's.
**Desktop (please complete the following information):**
- OS: High Sierra 10.13.6 | closed | 2019-06-21T15:52:43Z | 2019-06-21T16:39:29Z | https://github.com/deepfakes/faceswap/issues/767 | [] | datomnurdin | 1 |
ading2210/poe-api | graphql | 42 | Extension of the formkey problem from the poe-api patch earlier today. | Hi y'all -- after having followed the previous instructions on upgrading poe-api to fix the problem with the too-many-downloads, I started having a new problem.
<img width="644" alt="image" src="https://user-images.githubusercontent.com/56563509/232254687-6c63dea6-76eb-4e27-a5a2-56def72f8f19.png">
<img width="749" alt="image" src="https://user-images.githubusercontent.com/56563509/232254716-06fe1a5c-72c8-48f8-8f64-b65e61d1a4ea.png">
| closed | 2023-04-15T21:54:17Z | 2023-04-15T22:56:24Z | https://github.com/ading2210/poe-api/issues/42 | [
"bug"
] | zukixa | 7 |
FlareSolverr/FlareSolverr | api | 428 | [mteamtp] (testing) Exception (mteamtp): FlareSolverr was unable to process the request, please check FlareSolverr logs. Message: Cloudflare Error: No challenge selectors found, unable to proceed.: Parse error | **Please use the search bar** at the top of the page and make sure you are not creating an already submitted issue.
Check closed issues as well, because your issue may have already been fixed.
### How to enable debug and html traces
[Follow the instructions from this wiki page](https://github.com/FlareSolverr/FlareSolverr/wiki/How-to-enable-debug-and-html-trace)
### Environment
* **FlareSolverr version**:
* **Last working FlareSolverr version**:
* **Operating system**:
* **Are you using Docker**: [yes/no]
* **FlareSolverr User-Agent (see log traces or / endpoint)**:
* **Are you using a proxy or VPN?** [yes/no]
* **Are you using Captcha Solver:** [yes/no]
* **If using captcha solver, which one:**
* **URL to test this issue:**
### Description
[List steps to reproduce the error and details on what happens and what you expected to happen]
### Logged Error Messages
[Place any relevant error messages you noticed from the logs here.]
[Make sure you attach the full logs with your personal information removed in case we need more information]
### Screenshots
[Place any screenshots of the issue here if needed]
| closed | 2022-07-16T13:53:09Z | 2022-07-16T13:53:28Z | https://github.com/FlareSolverr/FlareSolverr/issues/428 | [] | 1960697431 | 0 |
agronholm/anyio | asyncio | 783 | Look into creating a shared subprocess implementation between backends | ### Things to check first
- [X] I have searched the existing issues and didn't find my feature already requested there
### Feature description
We should take a stab at creating a more low-level subprocess implementation that only relies on a minimum set of functionality from each backend, just like the threading API.
### Use case
Currently the asyncio subprocess API is a bit iffy from AnyIO's perspective. There are a lot of moving parts which we don't control, and cases where we use undocumented methods (`StreamReader.set_exception()` as an example). A new implementation, if possible, might make it easier to further develop the AnyIO subprocess APIs. | open | 2024-09-05T18:55:19Z | 2024-09-05T18:55:19Z | https://github.com/agronholm/anyio/issues/783 | [
"enhancement"
] | agronholm | 0 |
labmlai/annotated_deep_learning_paper_implementations | machine-learning | 169 | Dimension of subsequent layers in Hypernetwork | Hi, I was reading through your implementation of HyperLSTM and the associated paper. I got lost in the shaping of the layers after the first layer. Could you please explain why the input size is 2*main_lstm_hidden_size? | open | 2023-02-27T23:49:52Z | 2023-06-30T10:39:39Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/169 | [
"question"
] | Simply-Adi | 3 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 609 | UNET: changing the input to a single-channel dataset | Hi, how can I modify the UNet project to train on a single-channel grayscale dataset?
I tried setting the model's channel=1, but the program raises TypeError: Input tensor should be a float tensor. Got torch.int32.
Setting a breakpoint at normalize gives:
tensor([[[2410, 2395, 2418, ..., 2110, 2111, 2160],
[2406, 2432, 2418, ..., 2120, 2133, 2166],
[2359, 2384, 2389, ..., 2138, 2153, 2155],
...,
[2090, 2088, 2104, ..., 2195, 2208, 2228],
[2088, 2136, 2133, ..., 2257, 2242, 2257],
[2072, 2111, 2103, ..., 2245, 2227, 2209]]], dtype=torch.int32)
How should I modify the code next so that I can train on single-channel data?
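A hedged sketch of one way past the error above: torchvision's normalize only accepts floating-point tensors, so the int32 image has to be converted first (the scaling constant below is an assumption about the data range):
```python
import torch
from torchvision.transforms import functional as F

img = torch.randint(0, 4096, (1, 480, 480), dtype=torch.int32)  # stand-in for the int32 tensor above
img = img.to(torch.float32) / 4095.0                            # convert to float and scale to [0, 1]
img = F.normalize(img, mean=[0.5], std=[0.5])                   # single-channel mean/std, not 3-channel defaults
```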
| closed | 2022-07-31T08:28:00Z | 2022-08-06T12:10:26Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/609 | [] | guanghao-sun | 2 |
mckinsey/vizro | data-visualization | 317 | How to integrate with other applications like flask, frappe or django? | ### Question
How to integrate with other applications like flask, frappe or django?
### Code/Examples
_No response_
### Other information
_No response_
### Which package?
None
### Package version
_No response_
### Python version
_No response_
### OS
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | open | 2024-02-19T06:06:37Z | 2024-07-08T15:03:31Z | https://github.com/mckinsey/vizro/issues/317 | [
"General Question :question:"
] | KathirvelPriya | 4 |
ploomber/ploomber | jupyter | 218 | Check param values are passed correctly | See 0489695 | closed | 2020-08-11T04:00:37Z | 2020-08-11T05:18:30Z | https://github.com/ploomber/ploomber/issues/218 | [] | edublancas | 1 |
public-apis/public-apis | api | 3,617 | It has to be in black and white | I won't restore anything. It will be the 2017 one, which is mine; I just want compensation for the 6 years of hell. I help, and what I say, I do. But I only do business face to face: take the money you say you invest, get on a plane, and I will be glad to receive you. I just want to help everyone. Send me a smoke signal, call me... I have all the proof that the system came out of my machine; I can take down any other code in court, and they or you can do it with whatever date you like. You can already tell I don't know much about IT, but I'm stubborn and I caused a bug in the network. Hahaha. I just want to help and have peace. I've already seen you empty my Google accounts and investments several times, but if it's as you say, I don't care and I'm in, as long as it's not illegal of course. But for now everyone is in it together; it's me against the world. But let's go... | closed | 2023-08-30T03:00:13Z | 2023-08-31T01:13:41Z | https://github.com/public-apis/public-apis/issues/3617 | [] | Fodase4 | 0 |
tensorflow/tensor2tensor | machine-learning | 1,778 | Building model body multiple times when calling model_fn() multiple times | ### Description
I am constructing a meta-learning framework on tensor2tensor which requires calling model_fn() multiple times, and I find that the framework builds the model body multiple times, even with the reuse=True flag. The pseudocode is as follows:
```
def model_fn_raw(self, feature):  # the original model_fn
    ...
    tf.print("scope:", tf.get_variable_scope(), "name:", tf.get_variable_scope().name)
    log_info("Building model body")
    body_out = self.body(feature)
    ...

def model_fn(self, feature):  # wrap the original model_fn with the meta-learning part
    # step 1: call model_fn_raw() the first time to compute the loss
    with tf.variable_scope(tf.get_variable_scope()):
        _, loss = model_fn_raw(self, feature)
    # step 2: meta-learning part - update the parameters and assign them to all variables
    updated_para = updated_para_once(loss)
    assign_para_ops = [tf.assign(var, para)
                       for var, para in zip(tf.trainable_variables(), updated_para)]
    with tf.control_dependencies(assign_para_ops):
        # step 3: call model_fn_raw() the second time to compute the loss with the new
        # parameters, with the reuse flag set to True
        with tf.variable_scope(tf.get_variable_scope(), reuse=True):
            logits, loss = model_fn_raw(self, feature)
    restore_origin_params()
    return logits, loss
```
In this way, when executing model_fn(), the model body should be built only once because I set reuse=True for the second call. However, the model body is still built twice when I run the code, with the following logs printed:
```
INFO:tensorflow:Transforming feature 'targets' with symbol_modality_10152_256.targets_bottom
INFO:tensorflow:Building model body
:::MLPv0.5.0 transformer ...
...(Other logs of model components)
INFO:tensorflow:Transforming body output with symbol_modality_10152_256.top
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_10152_256.bottom
# (this should be the end. but the same logs are printed again)
INFO:tensorflow:Transforming feature 'targets' with symbol_modality_10152_256.targets_bottom
INFO:tensorflow:Building model body
:::MLPv0.5.0 transformer ...
...
INFO:tensorflow:Transforming body output with symbol_modality_10152_256.top
INFO:tensorflow:Transforming feature 'inputs' with symbol_modality_10152_256.bottom
```
I have also checked and printed the variable_scope in model_fn_raw(), and I find that the scope name in the first and second calls is the same, but the address is not.
For example,
```
# scope address and name printed in the first call
scope: <tensorflow.python.ops.variable_scope.VariableScope object at 0x7fcb13928ba8>
name: transformer/body
# in the second call
scope: <tensorflow.python.ops.variable_scope.VariableScope object at 0x7fcb137747f0>
name: transformer/body
```
The name is consistent but the address has changed, which means the variable scope is not the same object in the first and second calls.
So I am opening this issue in the hope that someone can help me address this problem. How can I construct the model body only once and reuse it across multiple calls? Is it the assign ops in step 2 that lead to the multiple constructions?
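A hedged note on what TF1 actually shares: `reuse=True` only makes `tf.get_variable` return the already-created variables; the ops of the body are still added to the graph again on every call, and the `VariableScope` object observed inside can differ between entries even when the name is identical, which matches the printed addresses above. A minimal standalone sketch of that behaviour:
```python
import tensorflow as tf  # TF 1.x graph mode

def body(x):
    w = tf.get_variable("w", shape=[1])  # variable: shared when reuse=True
    return x * w                         # ops: rebuilt on every call

x = tf.placeholder(tf.float32, [None])
with tf.variable_scope("model"):
    y1 = body(x)
with tf.variable_scope("model", reuse=True):
    y2 = body(x)                         # no new variables, but new ops are added

print(len(tf.trainable_variables()))     # 1 -> the variable is reused
print(y1.op.name, y2.op.name)            # different op names -> the body graph exists twice
```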
### Environment information
```
OS: CentOS 7.5
$ pip freeze | grep tensor
tensor2tensor==1.10.0
mesh-tensorflow==0.1.7
tensorboard==1.12.2
tensorflow-gpu==1.12.0
$ python -V
Python 3.6.2
```
| open | 2020-01-07T03:54:28Z | 2020-02-11T03:46:41Z | https://github.com/tensorflow/tensor2tensor/issues/1778 | [] | lemmonation | 1 |
Textualize/rich | python | 2,951 | Filesize precision is not used on bytes/s | https://github.com/Textualize/rich/blob/6d30ad0f30028210124c149811cbbe2b183711f9/rich/filesize.py#L30
On my end this is personally very annoying. If anything, I would want it to have no decimal places at all, but as it stands now the decimal places can go extremely long, e.g. `361.3816634069428 bytes/s`. | closed | 2023-05-03T23:59:00Z | 2023-07-30T09:49:35Z | https://github.com/Textualize/rich/issues/2951 | [
"wontfix"
] | rlaphoenix | 6 |
lux-org/lux | pandas | 370 | [BUG] to_datetime warning displayed multiple times | **Describe the bug**
When a dataset with a temporal column is loaded, the warning for to_datetime is displayed multiple times. This seems to be happening because `maintain_metadata` is being called on all of the vis.data, which is not the intended behavior.
**To Reproduce**
```python
df = pd.read_csv("https://raw.githubusercontent.com/lux-org/lux-datasets/master/data/stocks.csv")
df
```

| closed | 2021-04-30T15:13:52Z | 2021-06-25T04:36:31Z | https://github.com/lux-org/lux/issues/370 | [
"bug",
"priority"
] | dorisjlee | 1 |
robotframework/robotframework | automation | 4,736 | Backslash preventing newline in documentation can form escape sequence like `\n` | When suite, test or keyword documentation is split to multiple rows, rows are automatically joined together with a newline. This isn't always desirable so it's possible to use `\` at the end of a documentation row to prevent it:
```robotframework
*** Settings ***
Documentation Backslash avoids automatic \
... newline.
```
The problem is that lines are joined together so that the above looks like `Backslash avoids automatic \newline` and when it is later evaluated the accidental `\n` actually creates a newline. This can be fixed so that backslashes used for preventing newlines are removed from the documentation before rows are joined together.
This is a pretty annoying bug, but luckily there is very seldom a need to prevent the automatic newline. For example, in the above case the documentation in the log and report looks exactly the same when formatted as an HTML paragraph. The main use case for splitting lines without an automatic newline is splitting strings that don't even have spaces in them, and such strings are rare in documentation. | closed | 2023-04-12T21:47:39Z | 2023-05-31T20:35:01Z | https://github.com/robotframework/robotframework/issues/4736 | [
"bug",
"priority: low",
"beta 1",
"effort: small"
] | pekkaklarck | 0 |
modelscope/data-juicer | streamlit | 53 | [feature] remove_non_chinese_character_mapper | Remove all characters outside the unicode encoding range 4E00-9FA5 | closed | 2023-10-30T09:01:20Z | 2023-11-01T02:44:38Z | https://github.com/modelscope/data-juicer/issues/53 | [
"enhancement"
] | HYLcool | 0 |
mckinsey/vizro | data-visualization | 193 | Add Python 3.12 to tests | Quick and easy - do for both `vizro-core` and `vizro-ai`:
- [x] In our `matrix` on Github actions, we currently test on `python-version` up to and including 3.11. Let's add 3.12 to the list
- [x] Include 3.12 in `classifiers` in `pyproject.toml`
- [x] Include 3.12 in our badge in the README
- [x] Include 3.12 in `[[envs.all.matrix]]`
- [ ] Make sure the relevant 3.12 jobs are required for merging in Github branch protection settings
Could be difficult but I don't expect it will be necessary, at least for vizro-core:
- [x] Fix any problems that make our code incompatible with 3.12 | closed | 2023-12-06T10:03:36Z | 2023-12-18T10:16:09Z | https://github.com/mckinsey/vizro/issues/193 | [] | antonymilne | 0 |
strawberry-graphql/strawberry | fastapi | 3,655 | multipart upload struggle | I am trying to make file upload work, with no luck yet.
I went back to the example at https://strawberry.rocks/docs/guides/file-upload#sending-file-upload-requests,
but simply copy-pasting the multi-file request from Postman returns "Unsupported content type".
<!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
## Describe the Bug
<!-- A clear and concise description of what the bug is. -->
## System Information
- Operating system: macOS sequoia
- Strawberry version (if applicable):
- latest
## Additional Context
<!-- Add any other relevant information about the problem here. -->
Python code:
```python
@strawberry.mutation
def read_files(self, files: List[Upload]) -> List[str]:
    print(f"Received read_files mutation. Number of files: {len(files)}")
    contents = []
    for file in files:
        content = file.read().decode("utf-8")
        contents.append(content)
    return contents
```
curl --location 'localhost:7675/graphql' \
--form 'operations="{ \"query\": \"mutation(\$files: [Upload\!]\!) { readFiles(files: \$files) }\", \"variables\": { \"files\": [null, null] } }"' \
--form 'map="{\"file1\": [\"variables.files.0\"], \"file2\": [\"variables.files.1\"]}"' \
--form 'file1=@"/Users/its/Documents/roll.csv"' \
--form 'file2=@"/Users/its/Documents/dump.csv"'
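A hedged possibility for the "Unsupported content type" reply shown below: recent Strawberry releases (around 0.243+) disable multipart/file-upload requests by default for CSRF reasons, so the integration has to opt in explicitly. A minimal sketch assuming a FastAPI/ASGI setup (the flag name is taken from Strawberry's upload docs and should be verified against the installed version):
```python
import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter

@strawberry.type
class Query:
    ok: bool = True  # placeholder query; the real schema would add the read_files mutation

schema = strawberry.Schema(query=Query)
graphql_app = GraphQLRouter(schema, multipart_uploads_enabled=True)  # explicitly allow multipart uploads

app = FastAPI()
app.include_router(graphql_app, prefix="/graphql")
```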
Request Body
operations: "{ "query": "mutation($files: [Upload!]!) { readFiles(files: $files) }", "variables": { "files": [null, null] } }"
map: "{"file1": ["variables.files.0"], "file2": ["variables.files.1"]}"
file1: undefined
file2: undefined
Response Headers
date: Tue, 01 Oct 2024 15:10:58 GMT
server: uvicorn
content-length: 24
content-type: text/plain; charset=utf-8
Response Body
Unsupported content type
response
Unsupported content type | closed | 2024-10-01T15:59:02Z | 2025-03-20T15:56:53Z | https://github.com/strawberry-graphql/strawberry/issues/3655 | [
"bug"
] | itsklimov | 5 |
shibing624/text2vec | nlp | 111 | The difference between STSB and STSBenchmark | Hello, may I ask why the STS-B dataset you released (1.36k rows) (https://huggingface.co/datasets/shibing624/nli_zh/viewer/STS-B/test) has a different number of rows from the STSBenchmark dataset (https://huggingface.co/datasets/stsb_multi_mt/viewer/zh/test) (1.38k rows)? Did you apply some filtering?
| closed | 2023-08-09T14:20:36Z | 2023-08-17T13:11:30Z | https://github.com/shibing624/text2vec/issues/111 | [
"question"
] | staoxiao | 1 |
microsoft/MMdnn | tensorflow | 419 | PyTorch to IR error | Platform (like ubuntu 16.04/win10): Ubuntu 16.04
Python version: 3.5
Source framework with version (like Tensorflow 1.4.1 with GPU): Pytorch 0.3.1 with gpu
Destination framework with version (like CNTK 2.3 with GPU): Tensorflow
Pre-trained model path (webpath or webdisk path): https://drive.google.com/file/d/1Y2ritTA6PXosQ9u66f2JVpP_CMTt9fxl/view?usp=sharing
Running scripts:
mmtoir -f pytorch -d out -n ResNet-2018-09-18T21\:50\:42.pth --inputShape 3,224,224
Traceback (most recent call last):
File "/usr/local/bin/mmtoir", line 11, in <module>
sys.exit(_main())
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 192, in _main
ret = _convert(args)
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/_script/convertToIR.py", line 91, in _convert
from mmdnn.conversion.pytorch.pytorch_parser import PytorchParser
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/pytorch/pytorch_parser.py", line 12, in <module>
from mmdnn.conversion.pytorch.pytorch_graph import PytorchGraph
File "/usr/local/lib/python3.5/dist-packages/mmdnn/conversion/pytorch/pytorch_graph.py", line 12, in <module>
from torch.jit import _unique_state_dict
ImportError: cannot import name '_unique_state_dict'
| open | 2018-09-19T15:49:00Z | 2018-09-22T03:10:56Z | https://github.com/microsoft/MMdnn/issues/419 | [] | uhvardhan | 5 |
sqlalchemy/sqlalchemy | sqlalchemy | 11,423 | Combining "within group" and "filter" doesn't seem to be possible | ### Describe the bug
I'd like to be able to combine the within group and filter syntax on the same aggregate function, e.g.:
```
select percentile_cont(0.9) within group (order by value) filter (where id > 10) from test_table;
```
However the method I tried to achieve this fails:
```
from sqlalchemy import Column, Integer, Float, Table, MetaData, func
metadata_obj = MetaData()
table = Table(
"test_table",
metadata_obj,
Column("id", Integer, primary_key=True),
Column("value", Float),
)
f = func.percentile_cont(0.9).within_group(table.c.value).filter(table.c.value > 10)
```
with the following error:
```
AttributeError: Neither 'WithinGroup' object nor 'Comparator' object has an attribute 'filter'
```
Is there a way to achieve the desired result above?
Thanks!
### Optional link from https://docs.sqlalchemy.org which documents the behavior that is expected
_No response_
### SQLAlchemy Version in Use
2.0.30
### DBAPI (i.e. the database driver)
psycopg2
### Database Vendor and Major Version
PostgreSQL 16
### Python Version
3.11
### Operating system
OSX
### To Reproduce
```python
from sqlalchemy import Column, Integer, Float, Table, MetaData, func
metadata_obj = MetaData()
table = Table(
"test_table",
metadata_obj,
Column("id", Integer, primary_key=True),
Column("value", Float),
)
f = func.percentile_cont(0.9).within_group(table.c.value).filter(table.c.value > 10)
```
### Error
```
AttributeError: Neither 'WithinGroup' object nor 'Comparator' object has an attribute 'filter'
```
### Additional context
_No response_ | closed | 2024-05-28T16:53:29Z | 2024-06-27T20:06:53Z | https://github.com/sqlalchemy/sqlalchemy/issues/11423 | [
"bug",
"sql",
"functions",
"PRs (with tests!) welcome"
] | willnewton | 12 |
dynaconf/dynaconf | fastapi | 1,069 | Validation doc section "On instantiation" improvement | I think the example given in the docs here could do with some improvement to make it clearer:
https://github.com/dynaconf/dynaconf/blob/4ab518393a1f7aa72e353a485aebea2852561120/docs/validation.md?plain=1#L36-L58
1. It refers to the example `settings.toml` given in [this section](https://www.dynaconf.com/validation/#overview), so I feel like the example should have `settings_files=["settings.toml"]` and `environments=True` so that a user can easily recreate a working example.
2. `Path` is imported here but not used
3. The horizontal whitespace between validators could be taken out
4. The will raise section at the end could be presented better, and currently it doesn't seem to be accurate because of a potential bug #1068 | closed | 2024-03-04T19:05:08Z | 2024-03-18T19:04:28Z | https://github.com/dynaconf/dynaconf/issues/1069 | [
"Docs",
"good first issue"
] | mitches-got-glitches | 2 |
dsdanielpark/Bard-API | api | 216 | BardFlight def airlines(self): TypeError: 'type' object is not subscriptable | 💚💜 Thank you for interest. ❤️💛
Please make sure to check for more efficient package management. *Please prioritize checking existing issues first. I will repay with higher-quality code.*
**To Reproduce**
I get the stack trace when running `flask run` with `from bardapi import Bard` in the code.
Once I delete `from bardapi import Bard` and comment out all Bard-related code, my project works fine, so the issue is only with Bard.
----------Please delete the content above this line, including this line.-------------
**Describe the bug**
A clear and concise description of what the bug is.
I keep getting the following stack trace when I run `flask run`. I'm using a venv and have the API key and cookies set up.
It works on another Mac but not on mine, and I have no idea how to figure out what the issue is.
Traceback (most recent call last):
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/bin/flask", line 8, in <module>
sys.exit(main())
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/flask/cli.py", line 1064, in main
cli.main()
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/decorators.py", line 92, in new_func
return ctx.invoke(f, obj, *args, **kwargs)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/flask/cli.py", line 912, in run_command
raise e from None
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/flask/cli.py", line 898, in run_command
app = info.load_app()
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/flask/cli.py", line 313, in load_app
app = locate_app(import_name, None, raise_if_not_found=False)
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/flask/cli.py", line 219, in locate_app
__import__(module_name)
File "/Users/benitosanchez/Documents/GitHub/promise/app.py", line 6, in <module>
from bardapi import Bard
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/__init__.py", line 4, in <module>
from bardapi.core import Bard
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/core.py", line 28, in <module>
from bardapi.models.result import BardResult
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/models/result.py", line 3, in <module>
from bardapi.models.draft import BardDraft
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/models/draft.py", line 5, in <module>
from bardapi.models.tools.flight import BardFlightContent
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/models/tools/flight.py", line 4, in <module>
class BardFlight:
File "/Users/benitosanchez/Documents/GitHub/promise/.venv/lib/python3.8/site-packages/bardapi/models/tools/flight.py", line 17, in BardFlight
def airlines(self) -> list[str]:
TypeError: 'type' object is not subscriptable
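The annotation `list[str]` in the frame above uses built-in generics, which are only subscriptable at runtime on Python 3.9+ (PEP 585); the paths in the traceback show Python 3.8, where evaluating that method signature fails exactly like this. Two common 3.8-compatible spellings, as a sketch rather than the package's actual fix:
```python
# option 1: use typing generics, which work on Python 3.8
from typing import List

def airlines(self) -> List[str]:
    ...

# option 2: put `from __future__ import annotations` as the first statement of the
# module, so annotations are not evaluated at definition time
```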
**Version**
OS:
Python:
Bard API:
Using proxy:
Legion:
**Code**
```python
code line
```
**Error**
```
error message
```
| closed | 2023-10-20T23:26:02Z | 2023-10-27T19:26:36Z | https://github.com/dsdanielpark/Bard-API/issues/216 | [] | bevenets | 4 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,471 | OSError: Caught OSError in DataLoader worker process 3 | Hello, I was training my model and it was working until epoch 148, when I got these errors: <<OSError: Caught OSError in DataLoader worker process 3>> and <<OSError: [Errno 5] Input/output error>>.
I'm training the model on a Linux VM.
learning rate 0.0001050 -> 0.0001030
(epoch: 148, iters: 50, time: 5.328, data: 0.004) G_GAN: 1.660 G_L1: 21.545 D_real: 0.006 D_fake: 0.244 G: 23.206 D: 0.125
saving the latest model (epoch 148, total_iters 60000)
(epoch: 148, iters: 150, time: 1.322, data: 0.003) G_GAN: 1.076 G_L1: 34.955 D_real: 0.000 D_fake: 0.642 G: 36.031 D: 0.321
(epoch: 148, iters: 250, time: 1.316, data: 0.004) G_GAN: 2.841 G_L1: 17.667 D_real: 0.607 D_fake: 0.061 G: 20.508 D: 0.334
(epoch: 148, iters: 350, time: 1.338, data: 0.004) G_GAN: 1.837 G_L1: 25.288 D_real: 0.050 D_fake: 0.239 G: 27.126 D: 0.144
(epoch: 148, iters: 450, time: 2.624, data: 0.003) G_GAN: 5.915 G_L1: 23.653 D_real: 0.006 D_fake: 0.003 G: 29.568 D: 0.005
(epoch: 148, iters: 550, time: 1.307, data: 0.004) G_GAN: 1.869 G_L1: 35.894 D_real: 0.004 D_fake: 0.292 G: 37.763 D: 0.148
(epoch: 148, iters: 650, time: 1.308, data: 0.003) G_GAN: 1.511 G_L1: 21.548 D_real: 0.095 D_fake: 0.382 G: 23.059 D: 0.238
(epoch: 148, iters: 750, time: 1.338, data: 0.003) G_GAN: 3.447 G_L1: 22.605 D_real: 0.088 D_fake: 0.038 G: 26.052 D: 0.063
(epoch: 148, iters: 850, time: 2.473, data: 0.004) G_GAN: 3.026 G_L1: 22.714 D_real: 0.017 D_fake: 0.063 G: 25.740 D: 0.040
Traceback (most recent call last):
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/train.py", line 44, in <module>
for i, data in enumerate(dataset): # inner loop within one epoch
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/__init__.py", line 90, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
data.reraise()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/_utils.py", line 461, in reraise
raise exception
OSError: Caught OSError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/aligned_dataset.py", line 45, in __getitem__
A = AB.crop((0, 0, w2, h))
File "/usr/lib/python3/dist-packages/PIL/Image.py", line 1146, in crop
self.load()
File "/usr/lib/python3/dist-packages/PIL/ImageFile.py", line 235, in load
s = read(self.decodermaxblock)
File "/usr/lib/python3/dist-packages/PIL/JpegImagePlugin.py", line 402, in load_read
s = self.fp.read(read_bytes)
OSError: [Errno 5] Input/output error
Traceback (most recent call last):
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/train.py", line 44, in <module>
for i, data in enumerate(dataset): # inner loop within one epoch
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/__init__.py", line 90, in __iter__
for i, data in enumerate(self.dataloader):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 681, in __next__
data = self._next_data()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1376, in _next_data
return self._process_data(data)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 1402, in _process_data
data.reraise()
File "/home/exxact/.local/lib/python3.10/site-packages/torch/_utils.py", line 461, in reraise
raise exception
OSError: Caught OSError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 302, in _worker_loop
data = fetcher.fetch(index)
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/exxact/Documents/OMEGA/OMEGA_RD_IA/CycleGAN_Pix2Pix/data/aligned_dataset.py", line 45, in __getitem__
A = AB.crop((0, 0, w2, h))
File "/usr/lib/python3/dist-packages/PIL/Image.py", line 1146, in crop
May I ask for help understanding where this comes from? | open | 2022-08-19T07:30:10Z | 2022-09-20T20:48:18Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1471 | [] | FlorianRegisBamb | 3 |
robotframework/robotframework | automation | 5,163 | RobotFramework Unable to read or show the Documentation properly in log file when there is an empty line in between the lines of Keyword Documentation | RF- 7.0.1
RF does not show the entire documentation whenever an empty line appears between the lines of a keyword's documentation. It shows the content of the lines before the empty line and completely misses all the content that comes after the empty line.
When empty line is present in documentation

When empty line is removed

Expected Outcome-
Robot Framework should be able to read and show the documentation properly even when an empty line appears within the keyword documentation.
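For reference, a hedged sketch of how a blank line is usually expressed inside documentation without breaking it, using an empty `...` continuation row (example names are made up):
```robotframework
*** Keywords ***
Example Keyword
    [Documentation]    First paragraph of the documentation.
    ...
    ...    Second paragraph, shown after an empty line in the rendered docs.
    Log    example
```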
PFA robot test files and their execution log files.
[RobotFrameword Test Files and Log Files.zip](https://github.com/user-attachments/files/16143081/RobotFrameword.Test.Files.and.Log.Files.zip)
| closed | 2024-07-09T12:18:23Z | 2024-11-06T09:28:30Z | https://github.com/robotframework/robotframework/issues/5163 | [] | abhisheksandilya30 | 7 |
AntonOsika/gpt-engineer | python | 926 | docker install get a error Multiple top-level packages discovered in a flat-layout: ['docker', 'projects', 'gpt_engineer']. |
` => ERROR [8/9] RUN sudo pip install -e . 4.1s
------
> [8/9] RUN sudo pip install -e .:
0.442 Looking in indexes: https://pypi.doubanio.com/simple
0.442 Obtaining file:///app
0.448 Installing build dependencies: started
3.772 Installing build dependencies: finished with status 'done'
3.773 Checking if build backend supports build_editable: started
3.904 Checking if build backend supports build_editable: finished with status 'done'
3.905 Getting requirements to build editable: started
4.019 Getting requirements to build editable: finished with status 'error'
4.024 error: subprocess-exited-with-error
4.024
4.024 × Getting requirements to build editable did not run successfully.
4.024 │ exit code: 1
4.024 ╰─> [14 lines of output]
4.024 error: Multiple top-level packages discovered in a flat-layout: ['docker', 'projects', 'gpt_engineer'].
4.024
4.024 To avoid accidental inclusion of unwanted files or directories,
4.024 setuptools will not proceed with this build.
4.024
4.024 If you are trying to create a single distribution with multiple packages
4.024 on purpose, you should not rely on automatic discovery.
4.024 Instead, consider the following options:
4.024
4.024 1. set up custom discovery (`find` directive with `include` or `exclude`)
4.024 2. use a `src-layout`
4.024 3. explicitly set `py_modules` or `packages` with a list of names
4.024
4.024 To find more information, look for "package discovery" on setuptools docs.
4.024 [end of output]
4.024
4.024 note: This error originates from a subprocess, and is likely not a problem with pip.
4.025 error: subprocess-exited-with-error
4.025
4.025 × Getting requirements to build editable did not run successfully.
4.025 │ exit code: 1
4.025 ╰─> See above for output.
4.025
4.025 note: This error originates from a subprocess, and is likely not a problem with pip.
------
Dockerfile:16
--------------------
14 | RUN sudo pip install --upgrade pip
15 |
16 | >>> RUN sudo pip install -e .
17 |
18 | RUN pwd
--------------------
ERROR: failed to solve: process "/bin/sh -c sudo pip install -e ." did not complete successfully: exit code: 1
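For context, a hedged note: this message comes from setuptools' automatic package discovery refusing to guess when the repository root contains several top-level directories (docker, projects, gpt_engineer). One generic way around it, sketched here and not necessarily the project's actual fix, is to restrict discovery explicitly:
```toml
# pyproject.toml (sketch): only package the gpt_engineer source tree
[tool.setuptools.packages.find]
include = ["gpt_engineer*"]
```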
| closed | 2023-12-22T08:25:25Z | 2024-05-09T16:35:34Z | https://github.com/AntonOsika/gpt-engineer/issues/926 | [
"bug"
] | xjspace | 2 |
stanfordnlp/stanza | nlp | 1,427 | Update German Model in default.zip? | 
I have a problem downloading it the other recommended way.
Could you upgrade default.zip to the latest version, e.g. 1.9.0, which is 2 months old, instead of the 7-month-old 1.8.0? | closed | 2024-10-18T20:16:50Z | 2024-10-19T04:26:07Z | https://github.com/stanfordnlp/stanza/issues/1427 | [
"enhancement"
] | GeorgeS2019 | 2 |
plotly/dash-cytoscape | plotly | 159 | Update cycle broken after callback | <!--
Thanks for your interest in Plotly's Dash Cytoscape Component!
Note that GitHub issues in this repo are reserved for bug reports and feature
requests. Implementation questions should be discussed in our
[Dash Community Forum](https://community.plotly.com/c/dash).
Before opening a new issue, please search through existing issues (including
closed issues) and the [Dash Community Forum](https://community.plotly.com/c/dash).
When reporting a bug, please include a reproducible example! We recommend using
the [latest version](https://github.com/plotly/dash-cytoscape/blob/master/CHANGELOG.md)
as this project is frequently updated. Issues can be browser-specific so
it's usually helpful to mention the browser and version that you are using.
-->
#### Description
After running a callback with `Output` and `State` on the graph JSON, the update cycle is broken.
This affects Scale AI and Hydro Quebec.
CC: @mtwichan @alexcjohnson @mj3cheun @jackparmer
#### Steps/Code to Reproduce
This is a minimal example adapted from another reproducible example from Matthew (@mtwichan). You can drag around the nodes in the demo to easily demonstrate the issue:
1. Press the print-JSON button.
2. Move a node around.
3. Press the JSON button again.
4. Note that the position changed, reflecting the drag.
5. Press the break button, which has input/output arguments on the JSON.
6. Move the node around again.
7. Press the JSON button. The position is not updated this time, or any time after this.
Notes:
- This is very likely to affect other JSON as well (e.g. `data`), though position is simple and clear to demonstrate.
- This may need to be filed in Dash rather than here, as this may be a general issue with Dash.
- You can drop this file into this repo in the root as something like `usage.py` to quickly verify.
```python
import dash
import dash_cytoscape as cyto
import dash_html_components as html
from dash.dependencies import Input, Output, State
import json
from dash.exceptions import PreventUpdate
from copy import deepcopy
app = dash.Dash(__name__)
server = app.server
app.layout = html.Div([
cyto.Cytoscape(
id='cytoscape',
elements=[
{'data': {'id': 'one', 'label': 'Node 1'},
'position': {'x': 50, 'y': 50}},
{'data': {'id': 'two', 'label': 'Node 2'},
'position': {'x': 200, 'y': 200}},
{'data': {'source': 'one', 'target': 'two', 'label': '1 to 2'}}
],
layout={'name': 'preset'}
),
html.Button("Print elements JSONified", id="button-cytoscape"),
html.Button("Break", id="button-break"),
html.Div(id="html-cytoscape"),
])
@app.callback(
Output("html-cytoscape", "children"),
[Input("button-cytoscape", "n_clicks")],
[State("cytoscape", "elements")],
)
def testCytoscape(n_clicks, elements):
if n_clicks:
return json.dumps(elements)
@app.callback(
Output("cytoscape", "elements"),
[Input("button-break", "n_clicks")],
[State("cytoscape", "elements")],
)
def breakCytoscape(n_clicks, elements):
if n_clicks:
return deepcopy(elements) # happens with a deep copy or not
else:
raise PreventUpdate
if __name__ == '__main__':
app.run_server(debug=True)
```
<!--
Example:
```python
import dash
import dash_cytoscape as cyto
import dash_html_components as html
app = dash.Dash(__name__)
app.scripts.config.serve_locally = True
app.css.config.serve_locally = True
app.layout = html.Div([
cyto.Cytoscape(
id='cytoscape',
elements=[
{'data': {'id': 'one', 'label': 'Node 1'}, 'position': {'x': 50, 'y': 50}},
{'data': {'id': 'two', 'label': 'Node 2'}, 'position': {'x': 200, 'y': 200}},
{'data': {'source': 'one', 'target': 'two','label': 'Node 1 to 2'}}
],
layout={'name': 'preset'}
)
])
if __name__ == '__main__':
app.run_server(debug=True)
```
If the code is too long, feel free to put it in a public gist and link
it in the issue: https://gist.github.com
-->
#### Expected Results
<!-- Please paste or describe the expected results.-->
The update cycle should continue.
#### Actual Results
<!-- Please paste or specifically describe the actual output or traceback. -->
The update cycle is stopped.
#### Versions
<!--
Please run the following snippet and paste the output below:
from __future__ import print_function
import dash; print("Dash", dash.__version__)
import dash_html_components; print("Dash Core Components", dash_html_components.__version__)
import dash_core_components; print("Dash HTML Components", dash_core_components.__version__)
import dash_renderer; print("Dash Renderer", dash_renderer.__version)
import dash_cytoscape; print("Dash HTML Components", dash_cytoscape.__version__)
-->
@mtwichan, would you post your versions here for reference?
<!--
Thanks for taking the time to help up improve this component. Dash Cytoscape
would not be possible without awesome contributors like you!
-->
| open | 2021-10-25T16:43:18Z | 2023-07-25T06:56:47Z | https://github.com/plotly/dash-cytoscape/issues/159 | [] | maxkfranz | 3 |
gee-community/geemap | jupyter | 698 | Downloading large sized GEE image or image collection at once | <!-- Please search existing issues to avoid creating duplicates. -->
### Description
I am downloading a GEE image (Sentinel-1 data) using the 'ee_export_image' function of the geemap package. The scale I need and set is 10. However, this works only for very small image sizes. Can I somehow download a large area of the image at once to my local drive?
The code, and the error shown when downloading an image of a large area, are as follows:
### Source code
```
feature = Map.draw_last_feature
aoi1 = feature.geometry()
aoi1
import os
out_dir = os.path.join(os.path.expanduser('~'), 'Downloads')
filename = os.path.join(out_dir, 'RBD_VH1_aoi1_2.tif') # change
geemap.ee_export_image(RBD_VH, filename=filename, scale = 10, region=aoi1, file_per_band=True) # change
# Error
An error occurred while downloading.
Total request size (45186320 bytes) must be less than or equal to 33554432 bytes.
```
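Two commonly suggested ways around the ~32 MB direct-download limit, sketched with function and parameter names taken from the geemap docs (worth double-checking against the installed version); both reuse the RBD_VH, filename and aoi1 objects from the snippet above:
```python
# 1) tiled local download (uses the geedim backend under the hood)
geemap.download_ee_image(RBD_VH, filename, region=aoi1, scale=10)

# 2) export through Google Drive instead of a direct download
geemap.ee_export_image_to_drive(RBD_VH, description='RBD_VH1_aoi1', folder='export', region=aoi1, scale=10)
```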
| closed | 2021-10-08T07:21:26Z | 2021-10-08T12:45:37Z | https://github.com/gee-community/geemap/issues/698 | [
"Feature Request"
] | HappyR90 | 1 |
holoviz/colorcet | plotly | 89 | matplotlib.cm.get_cmap returns a copy in 3.6.0 | #### ALL software version info
colorcet 39af94a; Python 3.11; Matplotlib 3.6.0rc1
#### Description of expected behavior and the observed behavior
As noted in [the `get_cmap` docstring](https://matplotlib.org/stable/api/cm_api.html#matplotlib.cm.get_cmap), it will return a copy in 3.6. This has taken effect in 3.6.0rc1, and is causing tests that assume that to break. See [test downstream build in Fedora](https://download.copr.fedorainfracloud.org/results/qulogic/matplotlib-3.6.0/fedora-rawhide-x86_64/04751388-python-colorcet/builder-live.log.gz).
#### Stack traceback and/or browser JavaScript console output
```
_______________ test_get_cm[diverging_isoluminant_cjm_75_c23-v0] _______________
k = 'diverging_isoluminant_cjm_75_c23'
v = <matplotlib.colors.LinearSegmentedColormap object at 0x7f7592dcfa50>
@pytest.mark.parametrize('k,v', list(cc.cm.items()))
def test_get_cm(k, v):
import matplotlib.cm as mcm
> assert mcm.get_cmap('cet_' + k) is v
E AssertionError: assert <matplotlib.colors.LinearSegmentedColormap object at 0x7f7591932e50> is <matplotlib.colors.LinearSegmentedColormap object at 0x7f7592dcfa50>
E + where <matplotlib.colors.LinearSegmentedColormap object at 0x7f7591932e50> = <function _get_cmap at 0x7f75934dcae0>(('cet_' + 'diverging_isoluminant_cjm_75_c23'))
E + where <function _get_cmap at 0x7f75934dcae0> = <module 'matplotlib.cm' from '/usr/lib64/python3.11/site-packages/matplotlib/cm.py'>.get_cmap
../../BUILDROOT/python-colorcet-3.0.0^20211128git39af94a-3.fc38.x86_64/usr/lib/python3.11/site-packages/colorcet/tests/test_matplotlib.py:48: AssertionError
```
etc. for each parametrization of `test_get_cm`.
PS, there are also pending deprecation warnings for `register_cmap` and `get_cmap`, but these are non-fatal as [they are pending](https://matplotlib.org/devdocs/api/next_api_changes/deprecations/23668-TC.html). | closed | 2022-08-21T05:21:32Z | 2022-09-30T18:39:12Z | https://github.com/holoviz/colorcet/issues/89 | [] | QuLogic | 2 |
lucidrains/vit-pytorch | computer-vision | 187 | Vit MAE reconstruction size mismatch | I'm trying to Train ViT with Masked Autoencoder training but I'm getting an error when running MAE.forward()
The tensor size of the predicted pixel values is off by a factor of 4 in comparison to the masked_patches tensor in the MSE_loss call.
RuntimeError: The size of tensor a (1024) must match the size of tensor b (4096) at non-singleton dimension 2
I've tried different settings but the factor 4 size mismatch stays.
I've also tried a hack to fix the predicted pixel values size by adding a factor 4 to the to_pixels output layer neuron count.
This fixes the problem in the MSE_loss call but introduces a new one, namely: The gradients don't match up in the backward call.
RuntimeError: Function MmBackward returned an invalid gradient at index 1 - got [4096, 1024] but expected shape compatible with [1024, 1024]
But now I don't know how to debug further.
My last settings were:
'model': {
'encoder_depth': 5,
'decoder_depth': 5,
'patch_size': 32,
'num_classes': 1000,
'channels': 1,
'dim': 1024,
'heads': 8,
'mlp_dim': 2048,
'masking_ratio': 0.75,
'decoder_dim': 512,
}, | open | 2021-12-30T21:55:46Z | 2022-01-04T17:41:10Z | https://github.com/lucidrains/vit-pytorch/issues/187 | [] | RhinigtasSalvex | 2 |
python-restx/flask-restx | flask | 611 | Recommended method to serve swagger ui behind nginx non-root location | I am trying to use flask and flask-restx to create multiple rest apis using nginx
In order to support multiple rest services I add locations to the nginx config
I am developing on a remote server, which is also my deployment server (one server for all rest apis)
Each rest api should have its own swagger doc
I am using a blueprint to alter the api.doc as shown in the restx documentation
I have started with just one simple rest api
I find that when serving with flask and gunicorn it works as expected, both the rest endpoint and the swagger doc work
But when I use nginx the endpoint works but the swagger doc is getting "the page you are looking for is not found"
I have been trying to resolve this over the last few days and have found many discussions on this issue dating back many years
All the way back to before flask-restful forked to flask-restx
I see many posted solutions, most involve using some combination of:
defining a custom_ui
changing the nginx location configuration for the reverse-proxy
adding additional nginx locations for swagger.json and swaggerui
I have tried many of these with no success
I see the github appears to have been updated with changes based on this issue.
I searched the restx documentation and could not find an example for my case.
Could someone either reply with the recommended solution or point me to an example for this case.
I have attached my simple restapi file as a text file
[restapi.txt](https://github.com/user-attachments/files/16531244/restapi.txt)
# key component versions
Flask==3.0.3
flask-restx==1.3.0
gunicorn==22.0.0
nginx version: nginx/1.14.1
# nginx conf
location /restapi {
proxy_pass http://restapi/;
}
upstream restapi {
server unix:/run/gunicorn/restapi.sock;
}
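A hedged sketch of one common fix for exactly this symptom (endpoints work, Swagger UI 404s): the app has to know it is mounted under /restapi so that the generated swagger.json and swaggerui URLs carry the prefix. Assuming nginx also forwards the prefix (for example `proxy_set_header X-Forwarded-Prefix /restapi;` inside the location block):
```python
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# trust the X-Forwarded-* headers set by nginx so url_for() builds /restapi/... URLs,
# which is what the Swagger UI page and the swagger.json link rely on
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1, x_proto=1, x_prefix=1)
```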
| open | 2024-08-07T16:03:58Z | 2024-10-08T21:42:10Z | https://github.com/python-restx/flask-restx/issues/611 | [
"question"
] | billross00 | 3 |
plotly/dash | data-science | 2,360 | Variable Path not accounting for HTML encoding | **Describe your context**
```
async-dash 0.1.0a1
dash 2.7.0
dash-bootstrap-components 1.2.1
dash-core-components 2.0.0
dash-daq 0.5.0
dash-extensions 0.1.5
dash-html-components 2.0.0
dash-iconify 0.1.2
dash-loading-spinners 1.0.0
dash-mantine-components 0.11.0a2
dash-table 5.0.0
```
**Describe the bug**
Variable paths are not decoded when they have been altered with URL (percent) encoding during the request process.
ie - /path/this is a test => /path/this%20is%20a%20test
variable path = this%20is%20a%20test
**Expected behavior**
/path/this is a test => /path/this%20is%20a%20test
variable path = this is a test
Please note, functionality to remove this URL encoding is available within Flask.
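A hedged workaround sketch while the current behaviour stands, decoding the percent-encoding inside the callback with the standard library:
```python
from urllib.parse import unquote
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([dcc.Location(id="url"), html.Div(id="out")])

@app.callback(Output("out", "children"), Input("url", "pathname"))
def show_path(pathname):
    # "/path/this%20is%20a%20test" -> "/path/this is a test"
    return unquote(pathname or "")
```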
| open | 2022-12-08T16:16:08Z | 2024-08-13T19:24:23Z | https://github.com/plotly/dash/issues/2360 | [
"bug",
"P3"
] | BSd3v | 4 |
pallets/flask | python | 5,201 | Wrong Behavior in Blueprints | <!--
This issue tracker is a tool to address bugs in Flask itself. Please use
Pallets Discord or Stack Overflow for questions about your own code.
Replace this comment with a clear outline of what the bug is.
-->
When the `import_name` of my blueprint is set to `__name__`, if the name of my python file is different from that of the upper-level folder, other loaded blueprints will become invalid.
just like it:

<!--
Describe how to replicate the bug.
Include a minimal reproducible example that demonstrates the bug.
Include the full traceback if there was an exception.
-->
```py
from flask import Flask
from gevent import pywsgi
import logging
import os
import importlib
class Server:
def __init__(self):
self.app = Flask(__name__, static_folder='./RKR', static_url_path='/')
...
def start(self, debug: bool, port: int):
if debug:
logging.info("DEBUG IS TRUE.Using Flask.App.Run .")
self.app.run(host='0.0.0.0', debug=True, port=port)
else:
logging.info("Using WSGI .")
server = pywsgi.WSGIServer(('0.0.0.0', port), self.app)
server.serve_forever()
def load_blueprint(self):
liss = list()
modules = {}
directory = ".\\src\\pages"
lis = list()
for root, dirs, files in os.walk(directory):
for file in files:
if file.endswith(".py"):
file_path = os.path.join(root, file)
file_path = file_path.replace(".\\", "").replace(".py", "").split("\\")
lis.append(file_path)
for i in lis:
m = ""
for j in range(0, len(i)):
if j == 0:
m = i[0]
else:
m = m + "." + str(i[j])
liss.append(m)
for i in liss:
logging.info(f"Load {i} .")
module = importlib.import_module(i)
modules[i] = module
for i in liss:
self.app.register_blueprint(modules[i].page)
def run_server(self, debug: bool, port: int):
self.load_blueprint()
self.start(debug, port)
# Server1 = Server()
# Server1.run_server(True, 5000)
```
it import
```
src.pages.shelf.bookshelf.bookshelf
src.pages.shelf.bookinfo.book_info #if it is src.pages.shelf.book_info.book_info it will be normal
```
bookshelf:
```py
from flask import Blueprint
from flask import render_template as template
import src.book as book
name = "bookshelf"
page = Blueprint(name, __name__, template_folder=".\\files")
@page.route("/", methods=['GET', 'POST'])
def bookshelf():
book_info = book.get_book_shelf()
return template("index.html", book_info=book_info)
```
book_info:
```py
from flask import Blueprint, request
from flask import render_template as template
import src.book as book
name = "book_info"
page = Blueprint(name, __name__, template_folder=".\\files")
@page.route("/shelf/book_info", methods=['GET', 'POST'])
def bookshelf():
data = request.args
return data
```
tree:
```
├─src
│ ├─pages
│ │ ├─search
│ │ ├─shelf
│ │ │ ├─bookinfo
│ │ │ │ ├─files
│ │ │ │ └─__pycache__
│ │ │ ├─bookshelf
│ │ │ │ ├─files
│ │ │ │ └─__pycache__
│ │ │ └─__pycache__
│ │ └─viewer
│ ├─RKR
│ │ └─asset
│ │ └─img
│ └─__pycache__
└─__pycache__
```
full_file:
https://github.com/Suto-Commune/Re-Kindle-Reader/tree/8a7169364779feb722197b0c42b8cbf7a346d5b7
normal behavior

Environment:
- Python version: Python 3.11.4
- Flask version: Flask==2.3.2
| closed | 2023-07-16T16:10:08Z | 2023-07-31T00:05:50Z | https://github.com/pallets/flask/issues/5201 | [] | NatsumiXD | 1 |
piskvorky/gensim | data-science | 3,089 | Using corpus_file does not speed up while the CPU utilization seems full. | #### Problem description
I'm struggling with the issue of speeding up a doc2vec training using `corpus_file` after my institution introduced a new computing server system. With the previous system, I had/have no problem, but I found that the same script takes drastically different times, and I'm not enjoying the quasi-linear speed-up with the number of threads/cores anymore. I have not been able to find a solution for this issue, so I decided to post this here.
#### Steps/code/corpus to reproduce
The script is simple, as shown below. The `test.txt` file is in LineSentence format, as [this page](https://github.com/RaRe-Technologies/gensim/blob/develop/docs/notebooks/Any2Vec_Filebased.ipynb) suggests. (The wide `window=240` is chosen to make the CPU usage easy to observe.)
```
import logging
import time
logging.basicConfig(level=logging.INFO)
from gensim.models.doc2vec import Doc2Vec
file = 'test.txt'
start_time = time.time()
model = Doc2Vec(vector_size=100, min_count=10, window=240, workers=24, seed=111, dm=0, dbow_words=1)
model.build_vocab(corpus_file=file)
print(time.time() - start_time, 'build vocab')
start_time = time.time()
model.train(corpus_file=file, total_examples=model.corpus_count, total_words=model.corpus_total_words, epochs=1)
print(time.time() - start_time, '1-epoch')
```
Running this with the previous server system, the 1-epoch time was
```
INFO:gensim.models.word2vec:EPOCH - 1 : training on 4182347 raw words (3516440 effective words) took 13.7s, 256302 effective words/s
```
While with the new system, the 1-epoch time was
```
INFO:gensim.models.word2vec:EPOCH - 1 : training on 4182347 raw words (3516440 effective words) took 88.8s, 39578 effective words/s
```
I checked the CPU utilization on the new system (the one with the issue), and it was using 2400%.

When I tried 48 cores, it again used all 48 cores, as shown below.

But the training time was identical to the 24-core training.
```
NFO:gensim.models.word2vec:EPOCH - 1 : training on 4184193 raw words (3518530 effective words) took 87.7s, 40107 effective words/s
```
Although I'm not including the logs here, this happens not only with Gensim 4.0 but also with 3.8.3. I understand that a wrong hardware configuration might cause this, so I will close this issue if I can confirm that. But I wanted to know whether any developers/other users have encountered a case similar to mine...
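In case it helps with triage, one guess (purely an assumption on my side, not a confirmed diagnosis) is that the new cluster either restricts the process to fewer cores than requested, or lets BLAS/OpenMP spawn threads that compete with gensim's workers. Both are quick to check from the training node:
```python
import os

# How many cores is this process actually allowed to use? On a batch scheduler
# this can be far fewer than the number of cores the machine reports.
print("usable cores:", len(os.sched_getaffinity(0)))

# Cap the BLAS/OpenMP thread pools before numpy/gensim are imported so they
# don't oversubscribe the CPUs that gensim's worker threads need.
os.environ["OMP_NUM_THREADS"] = "1"
os.environ["MKL_NUM_THREADS"] = "1"

import numpy  # noqa: F401  (imported after setting the env vars on purpose)
```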
#### Versions
From the new system (with the training speed issue)
```
Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0]
Bits 64
NumPy 1.20.1
SciPy 1.5.2
gensim 4.0.0
FAST_VERSION 1
```
From the previous system (where the `corpus_file` method worked.)
```
Linux-3.10.0-1127.8.2.el7.x86_64-x86_64-with-redhat-7.4-Nitrogen
Python 3.7.6 (default, Jan 8 2020, 19:59:22)
[GCC 7.3.0]
Bits 64
NumPy 1.18.5
SciPy 1.5.0
gensim 4.0.0
FAST_VERSION 1
```
| open | 2021-03-25T17:00:18Z | 2023-10-31T04:11:55Z | https://github.com/piskvorky/gensim/issues/3089 | [
"performance"
] | Donghyun-Kang-Soc | 19 |
ivy-llc/ivy | pytorch | 28,371 | Fix Frontend Failing Test: jax - math.tensorflow.math.zero_fraction | To-do List: https://github.com/unifyai/ivy/issues/27496 | closed | 2024-02-21T15:44:59Z | 2024-02-26T11:24:28Z | https://github.com/ivy-llc/ivy/issues/28371 | [
"Sub Task"
] | Sai-Suraj-27 | 0 |
matterport/Mask_RCNN | tensorflow | 2,881 | Show bbox that only over %60 prob has | How can I let my model show only bboxes whose probability is over 60%? | closed | 2022-09-14T16:52:44Z | 2022-09-14T22:21:30Z | https://github.com/matterport/Mask_RCNN/issues/2881 | [] | muratali016 | 0
yzhao062/pyod | data-science | 314 | autoencoder StandardScaler and sigmoid as output layer / inconsistency in values size | Hello pyod community,
According to the standard autoencoder settings, the output layer uses the sigmoid activation function, whose values lie between 0 and 1, but the input data are scaled with StandardScaler, whose values can be greater than 1 and smaller than 0.
Why do we have this inconsistency?
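(To illustrate the mismatch, and one way to remove it on the preprocessing side — just a sketch, not necessarily pyod's recommended setup:)
```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.random.randn(100, 5) * 3 + 10

# StandardScaler output is unbounded, so a sigmoid output layer can never
# reproduce the most extreme standardized values exactly.
x_std = StandardScaler().fit_transform(X)
print(x_std.min(), x_std.max())   # typically well outside [0, 1]

# MinMaxScaler keeps every feature inside [0, 1], which matches a sigmoid output.
x_mm = MinMaxScaler().fit_transform(X)
print(x_mm.min(), x_mm.max())     # exactly 0.0 and 1.0
```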
thanks! | open | 2021-06-14T13:52:38Z | 2021-06-14T17:40:18Z | https://github.com/yzhao062/pyod/issues/314 | [] | cherepanovic | 9 |
sktime/pytorch-forecasting | pandas | 1,310 | 'nth' is not a valid function name for transform(name) | - PyTorch-Forecasting version: 0.10.2
- PyTorch version: 2.0.1
- Python version: 3.10.11
- Operating System: Windows 10 Anaconda
- Pandas: 2.0.1
### Expected behavior
I tried to repeat the simple N-Beats notebook in the tutorial
https://github.com/jdb78/pytorch-forecasting/blob/master/docs/source/tutorials/ar.ipynb
### Actual behavior
However, an error showed up in the TimeSeriesDataSet() call. It seems to me this error is caused by pandas; which pandas version was used to develop pytorch-forecasting?
```
ValueError Traceback (most recent call last)
Cell In[5], line 10
7 context_length = max_encoder_length
8 prediction_length = max_prediction_length
---> 10 training = TimeSeriesDataSet(
11 data[lambda x: x.time_idx <= training_cutoff],
12 time_idx="time_idx",
13 target="value",
14 categorical_encoders={"series": NaNLabelEncoder().fit(data.series)},
15 group_ids=["series"],
16 # only unknown variable is "value" - and N-Beats can also not take any additional variables
17 time_varying_unknown_reals=["value"],
18 max_encoder_length=context_length,
19 max_prediction_length=prediction_length,
20 )
22 validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training_cutoff + 1)
23 batch_size = 128
File ~\anaconda3\envs\py310tft\lib\site-packages\pytorch_forecasting\data\timeseries.py:481, in TimeSeriesDataSet.__init__(self, data, time_idx, target, group_ids, weight, max_encoder_length, min_encoder_length, min_prediction_idx, min_prediction_length, max_prediction_length, static_categoricals, static_reals, time_varying_known_categoricals, time_varying_known_reals, time_varying_unknown_categoricals, time_varying_unknown_reals, variable_groups, constant_fill_strategy, allow_missing_timesteps, lags, add_relative_time_idx, add_target_scales, add_encoder_length, target_normalizer, categorical_encoders, scalers, randomize_length, predict_mode)
478 assert target not in self.scalers, "Target normalizer is separate and not in scalers."
480 # create index
--> 481 self.index = self._construct_index(data, predict_mode=self.predict_mode)
483 # convert to torch tensor for high performance data loading later
484 self.data = self._data_to_tensors(data)
File ~\anaconda3\envs\py310tft\lib\site-packages\pytorch_forecasting\data\timeseries.py:1218, in TimeSeriesDataSet._construct_index(self, data, predict_mode)
1205 """
1206 Create index of samples.
1207
(...)
1214 It contains a list of all possible subsequences.
1215 """
1216 g = data.groupby(self._group_ids, observed=True)
-> 1218 df_index_first = g["__time_idx__"].transform("nth", 0).to_frame("time_first")
1219 df_index_last = g["__time_idx__"].transform("nth", -1).to_frame("time_last")
1220 df_index_diff_to_next = -g["__time_idx__"].diff(-1).fillna(-1).astype(int).to_frame("time_diff_to_next")
File ~\anaconda3\envs\py310tft\lib\site-packages\pandas\core\groupby\generic.py:469, in SeriesGroupBy.transform(self, func, engine, engine_kwargs, *args, **kwargs)
466 @Substitution(klass="Series", example=__examples_series_doc)
467 @Appender(_transform_template)
468 def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
--> 469 return self._transform(
470 func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
471 )
File ~\anaconda3\envs\py310tft\lib\site-packages\pandas\core\groupby\groupby.py:1534, in GroupBy._transform(self, func, engine, engine_kwargs, *args, **kwargs)
1532 elif func not in base.transform_kernel_allowlist:
1533 msg = f"'{func}' is not a valid function name for transform(name)"
-> 1534 raise ValueError(msg)
1535 elif func in base.cythonized_kernels or func in base.transformation_kernels:
1536 # cythonized transform or canned "agg+broadcast"
1537 return getattr(self, func)(*args, **kwargs)
ValueError: 'nth' is not a valid function name for transform(name)
```
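For what it's worth, pandas 2.x removed the string "nth" from the names accepted by `GroupBy.transform`, which is exactly the check in the last traceback frame, so pinning `pandas<2.0` is one likely workaround. A small sketch of the behavioural difference (the `"first"` replacement is my assumption, not pytorch-forecasting's actual fix):
```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "t": [1, 2, 5]})
grouped = df.groupby("g")["t"]

# On pandas 2.x this raises:
#   ValueError: 'nth' is not a valid function name for transform(name)
# grouped.transform("nth", 0)

# Broadcasting each group's first value still works on pandas 2.x and matches
# what transform("nth", 0) used to return for this data.
print(grouped.transform("first").tolist())  # [1, 1, 5]
```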
| open | 2023-05-22T23:25:17Z | 2023-07-09T16:33:51Z | https://github.com/sktime/pytorch-forecasting/issues/1310 | [] | haopengcu | 1 |
TencentARC/GFPGAN | pytorch | 328 | How do I force it to work on the CPU? | How do I force it to work on the CPU?
My GPU RAM is too small (only 2 GB); I get the message
` "RuntimeError: CUDA out of memory. Tried to allocate 154.00 MiB (GPU 0; 1.96 GiB total capacity; 927.97 MiB already allocated; 72.44 MiB free; 1.05 GiB reserved in total by PyTorch)"` | closed | 2023-01-30T15:20:35Z | 2023-12-27T08:07:47Z | https://github.com/TencentARC/GFPGAN/issues/328 | [] | joe-eis | 7 |
plotly/dash-table | dash | 631 | Type formatting for rows | Currently it's only possible to specify type/number formatting for entire columns.
It would be nice to be able to do this also for rows (e.g. when one row only contains percentages and another row only contains currency amounts); a quick styling sketch follows below for comparison.
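For comparison, per-row *styling* can already be expressed today; it is the per-row *number formatting* that has no equivalent. A minimal sketch of the styling side (illustrative data, not from a real app):
```python
from dash import Dash, dash_table

app = Dash(__name__)

app.layout = dash_table.DataTable(
    columns=[{"name": "metric", "id": "metric"}, {"name": "value", "id": "value"}],
    data=[
        {"metric": "conversion rate", "value": 0.123},  # would ideally render as a percentage
        {"metric": "revenue", "value": 1234.5},         # would ideally render as a currency
    ],
    # Row-level styling works via row_index, but there is no per-row "format" option.
    style_data_conditional=[{"if": {"row_index": 0}, "backgroundColor": "#f0f0f0"}],
)

if __name__ == "__main__":
    app.run(debug=True)
```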
Edit: Even better, make it configurable on a single cell level, similar to how `style_data_conditional` works. | open | 2019-10-24T20:03:02Z | 2019-10-31T10:55:37Z | https://github.com/plotly/dash-table/issues/631 | [] | nborrmann | 0 |
d2l-ai/d2l-en | data-science | 1,825 | JAX/Flax Implementation | Have you already considered to add a JAX based (could be Flax for NNs) implementation as an alternative to MXNet, Tensorflow and Pytorch?
| closed | 2021-07-09T13:32:44Z | 2021-12-02T16:27:59Z | https://github.com/d2l-ai/d2l-en/issues/1825 | [] | sazio | 9 |
jupyter-book/jupyter-book | jupyter | 1,993 | improve documentation on managing sphinx warnings | ### Context
As of now, sphinx warnings are often reported as non-breaking errors
```
reading sources... [ 50%] sample_intro
/__w/jupyter-book-ghpages/jupyter-book-ghpages/tests/docs/sample_section.md:5: ERROR: Unknown directive type "plantuml".
```
After turning on picky mode with the `-W` flag, they are correctly raised as breaking errors with `exit code=1`
```
sphinx.errors.SphinxWarning: /__w/jupyter-book-ghpages/jupyter-book-ghpages/tests/docs/sample_section.md:5:Unknown directive type "plantuml".
```
### Proposal
I find it a bit confusing that `ERROR` doesn't raise exit code 1 (e.g. doesn't break CI/CD).
Would it make sense to either downgrade non-breaking errors to warnings, or to raise an exception? Or at least to add a note with an example to the documentation?
### Tasks and updates
[ ] feedback from developers/maintainers: "bug or feature"
[ ] discuss whether and how to improve docs | open | 2023-04-11T08:23:14Z | 2023-04-11T08:36:17Z | https://github.com/jupyter-book/jupyter-book/issues/1993 | [
"enhancement"
] | maciejskorski | 1 |
sebp/scikit-survival | scikit-learn | 116 | What is the best way to prepare data for sksurv? | Problem solved | closed | 2020-05-27T16:36:06Z | 2020-10-06T19:58:07Z | https://github.com/sebp/scikit-survival/issues/116 | [
"question"
] | flippercy | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,901 | [nodriver] support how to set timezone? |
```python
async def setIP(self):
    self.tab = await self.browser.get('https://ipinfo.io/json')
    self.tab.evaluate("document.documentElement.outerHTML;", return_by_value=True, await_promise=True)
    timezone = str(checkout_page_data.split('"timezone": "')[1].split('"')[0])
    cdp.emulation.set_timezone_override(timezone_id=timezone)
    latlong = str(checkout_page_data.split('"loc": "')[1].split('"')[0]).split(",")
    cdp.emulation.clear_geolocation_override()
    cdp.emulation.set_geolocation_override(latitude=float(latlong[0]), longitude=latlong[1], accuracy=100)
    time.sleep(5)
    self.tab = await self.browser.get('https://www.browserscan.net/')
    print()
```
I am using this code but it is not working. | open | 2024-05-30T04:59:50Z | 2024-05-30T04:59:50Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1901 | [] | gouravkumar99 | 0 |
robotframework/robotframework | automation | 4,841 | Add typing to all modules under `robot.api` | Currently, the `@keyword` and `@library` functions exposed through the `robot.api` module have no annotations. This limits the type-checking that tools like mypy and Pylance / pyright can perform, in turn limiting what types of issues language servers can detect.
Adding annotations to these functions (and, ultimately, all public-facing interfaces of the Robot Framework) would enable better IDE / tool integration.
Note that the improvements with regards to automatic type conversion also closely relate to this issue since the framework uses the signatures of methods decorated with the `@keyword` decorator to attempt automatic type conversion, while the missing annotations limit static analysis of the decorated methods. | closed | 2023-08-16T08:49:34Z | 2023-11-22T19:02:18Z | https://github.com/robotframework/robotframework/issues/4841 | [
"enhancement",
"priority: medium",
"alpha 2",
"acknowledge",
"effort: medium"
] | robinmackaij | 2 |
saulpw/visidata | pandas | 2,497 | How would you unhide a single column in visidata? | How would you unhide a single column in visidata? My workflow sometimes involves looking at the columns in a file (Shift-C mode), find a column using regex, select all other columns and hide them. After looking at the data of that column, I might think that I need to unhide one of the columns that I just hid. Unfortunately, there doesn't seem to be an option to unhide only a single column in the Shift - C mode. There doesn't seem to be a way to unhide even selected columns. I tried using `gv` but that just unhid all columns in the current view, which are not the columns of the file (that are currently arranged in rows (Shift-C mode).
How might one go about doing what I want above? Is it that I am misunderstanding the way of use and there is a better way of doing this? | closed | 2024-08-21T11:11:50Z | 2024-09-22T00:10:39Z | https://github.com/saulpw/visidata/issues/2497 | [
"question"
] | ivan-gerov | 2 |
biolab/orange3 | numpy | 5,996 | Parallel processing for "Suggest features" | Processing is quite slow. E.g., I'm running a Radviz "Suggest features", and it only uses one core to calculate permutations.
Please extend multicore processing to "Suggest features" functions. Thanks.
| closed | 2022-05-30T15:30:05Z | 2022-05-30T15:46:42Z | https://github.com/biolab/orange3/issues/5996 | [] | hydrastarmaster | 1 |
huggingface/diffusers | pytorch | 11,033 | SD1.5 Unet from_single_file loading does not work | ### Describe the bug
SD1.5 Unet from_single_file loading does not work (either from safetensor or GGUF)
### Reproduction
```python
import torch
from diffusers import UNet2DConditionModel
config = UNet2DConditionModel.load_config("SimianLuo/LCM_Dreamshaper_v7", subfolder="unet")
unet = UNet2DConditionModel.from_single_file(
"stable-diffusion.cpp/build/diffusion_pytorch_model.safetensors",
config="SimianLuo/LCM_Dreamshaper_v7", # Use the repo ID string
subfolder='unet',
torch_dtype=torch.bfloat16,
use_safetensors=True
)
```
and
```python
import torch
from diffusers import UNet2DConditionModel
config = UNet2DConditionModel.load_config("SimianLuo/LCM_Dreamshaper_v7", subfolder="unet")
unet = UNet2DConditionModel.from_single_file(
"https://huggingface.co/abhinavgopal/firstfile.gguf/LCM_Dreamshaper_v7_4k.safetensors.q8_0.gguf",
config="SimianLuo/LCM_Dreamshaper_v7", # Use the repo ID string
subfolder='unet',
torch_dtype=torch.bfloat16,
use_safetensors=True
)
```
### Logs
```shell
Traceback (most recent call last):
File "/home/ec2-user/temp.py", line 14, in <module>
unet = UNet2DConditionModel.from_single_file(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/miniconda3/lib/python3.12/site-packages/diffusers/loaders/single_file_model.py", line 325, in from_single_file
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load UNet2DConditionModel. Weights for this component appear to be missing in the checkpoint.
```
### System Info
- 🤗 Diffusers version: 0.32.2
- Platform: Linux-6.1.128-136.201.amzn2023.x86_64-x86_64-with-glibc2.34
- Running on Google Colab?: No
- Python version: 3.12.9
- PyTorch version (GPU?): 2.6.0+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.29.2
- Transformers version: 4.49.0
- Accelerate version: 1.4.0
- PEFT version: not installed
- Bitsandbytes version: 0.45.3
- Safetensors version: 0.5.3
- xFormers version: not installed
- Accelerator: NVIDIA L40S, 46068 MiB
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help?
@yiyixuxu @asomoza @sayakpaul | open | 2025-03-11T18:51:44Z | 2025-03-17T19:32:33Z | https://github.com/huggingface/diffusers/issues/11033 | [
"bug"
] | AbhinavGopal | 9 |
frappe/frappe | rest-api | 31,161 | workflow action does not handle error case, async causes any throw get bypassed |
## Description of the issue
1. When calling a workflow action, there are times we want to run a client-side script first.
2. If that script throws, I don't want the workflow action to follow through; it should surface the error instead.
3. The problem is that the code below is async, so whether or not the script throws, the workflow action still goes through.
Behavior:
Since workflow actions bypass hooks, I created a client script that runs on before_workflow_action, as specified below.
It does its job of stopping the action, but when it returns an error, the screen freezes because there is no branch to handle the error case.
## Context information (for bug reports)
https://github.com/frappe/frappe/blob/a1df46077d58052e9655c2a8183c4cdb40b7a20c/frappe/public/js/frappe/form/workflow.js#L95
```
me.frm.script_manager.trigger('before_workflow_action').then(() => {
frappe.xcall('frappe.model.workflow.apply_workflow',
{doc: me.frm.doc, action: d.action})
.then((doc) => {
frappe.model.sync(doc);
me.frm.refresh();
me.frm.selected_workflow_action = null;
me.frm.script_manager.trigger("after_workflow_action");
});
});
```
This code should handle the error case so that the UI doesn't freeze and does frm.refresh() instead.
my code:
```
frappe.ui.form.on("SomeCustomDoctype", {
refresh(frm) {
},
before_workflow_action: (frm) => {
return new Promise((resolve, reject) => {
if (frm.selected_workflow_action) {
frappe.call({
method: "custom.code.that.handle.validation",
args: {
doc: frm.doc,
action: frm.selected_workflow_action
},
freeze: true,
error: function(err) {
reject(frappe.throw(__(err))); // Screen Freezes here, because the code from frappe above does not handle error case.
},
callback: function(r) {
if (r.message) {
// Handle the response from the Python function
frappe.msgprint(r.message);
resolve();
} else {
resolve();
}
}
});
} else {
resolve();
}
});
}
});
```
**Output of `bench version`**
```
version 15
```
## Steps to reproduce the issue
1. Create a workflow that doesn't change the docstatus (from docstatus 0 to docstatus 0).
2. Make an error case in the client script.
3. The code still follows through.
4. Now wrap it in a promise and reject it.
5. The screen freezes because the code doesn't handle the failure case.
### Observed result

screen freezes after pressing approve and triggering error case.

### Expected result
It should handle the error case so that the screen doesn't freeze.
## Additional information
Currently I just use a workaround that reloads the page:
```
error: function(err) {
reject(setTimeout(function(){
window.location.reload();
}, 1500));
},
```
| open | 2025-02-06T12:07:18Z | 2025-02-06T12:34:56Z | https://github.com/frappe/frappe/issues/31161 | [
"bug"
] | foolishdino | 0 |
fugue-project/fugue | pandas | 334 | [FEATURE] Replace RLock with SerializableRLock | **Is your feature request related to a problem? Please describe.**
Many Fugue objects are not picklable because they have RLocks. We can replace most of them with SerializableRLock because we don't really want them to take effect across processes.
| closed | 2022-07-08T08:04:39Z | 2022-07-10T05:31:23Z | https://github.com/fugue-project/fugue/issues/334 | [
"enhancement"
] | goodwanghan | 0 |
tensorlayer/TensorLayer | tensorflow | 306 | [Feature Request] - Verbose option for the Layer API | Hello dear friends,
I have used TL for quite a while and I'm really thankful for all the amazing work.
One thing which appeared very cool to me from the beginning was the verbose graph definition, which helped me a lot during debugging and development. However, I found out a bit later that there is no way to deactivate the following:
```shell
[TL] InputLayer encoder/input: (?, 256, 256, 1)
[TL] Conv2dLayer encoder/h1/conv2d: shape:[5, 5, 1, 64] strides:[1, 2, 2, 1] pad:VALID act:identity
[TL] BatchNormLayer encoder/h1/batch_norm: decay:0.900000 epsilon:0.000010 act:identity is_train:True
```
It is a really interesting feature to have this information; however, it can quickly clutter the console output.
#### Feature Request
The idea would be to make this output optional (default = True or False). I think there could be different ways to do this.
##### 1. Solution - Create a **verbose** parameter in the Layer API
Simple and backward compatible, a "verbose" parameter can be added to the Layer Class and influence the behavior of [print_params()](https://github.com/tensorlayer/tensorlayer/blob/master/tensorlayer/layers.py#L321) method.
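A rough sketch of what that could look like (hypothetical code, not TL's actual base class):
```python
import logging

class Layer(object):
    # Hypothetical sketch of solution 1: an opt-out "verbose" flag on the base Layer.
    def __init__(self, name='layer', verbose=True):
        self.name = name
        self.verbose = verbose

    def print_params(self, details=True):
        # Only emit the per-parameter construction info when verbose is enabled.
        if not self.verbose:
            return
        logging.info("  [TL] %s: printing parameters (details=%s)", self.name, details)
```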
##### 2. Use the Logging module from TF.
Why should we re-invent the wheel? Everything is already implemented in TensorFlow.
We can use the logging level already existing in TF.
```python
tf.logging._level_names ## outputs => {50: 'FATAL', 40: 'ERROR', 30: 'WARN', 20: 'INFO', 10: 'DEBUG'}
tf.logging.get_verbosity() ## outputs => 30 (default value)
tf.logging.set_verbosity(tf.logging.DEBUG)
tf.logging.get_verbosity() ## outputs => 10
```
We could, for instance, decide that for logging levels <= 20 (INFO & DEBUG) we output the TensorLayer information as usual, and skip it for any higher level.
All the best,
Jonathan
| closed | 2018-02-12T15:29:17Z | 2018-02-13T13:59:06Z | https://github.com/tensorlayer/TensorLayer/issues/306 | [] | DEKHTIARJonathan | 2 |
pytorch/pytorch | deep-learning | 149,726 | SDPA gives different outputs compared to manual attention with `dropout>0.0` | ### 🐛 Describe the bug
SDPA gives different outputs compared to manual attention when the `EFFICIENT_ATTENTION` backend is used and dropout is non-zero. Is this expected? Is the efficient kernel using a different RNG?
Here's an MWE:
```py
import torch
from torch.nn.functional import scaled_dot_product_attention
from torch.nn.attention import SDPBackend, sdpa_kernel
def manual_attention(query, key, value, mask, dropout=0.0):
scores = torch.matmul(query, key.transpose(3, 2))
scores += mask
attn_weights = torch.nn.functional.softmax(scores.float(), dim=-1).type_as(scores)
attn_weights = torch.nn.functional.dropout(attn_weights, p=dropout, training=True)
attn_output = torch.matmul(attn_weights, value)
return attn_output
def compare(query, key, value, mask, dropout=0.0, backends: list = []):
torch.manual_seed(0)
manual_result = manual_attention(query, key, value, mask=mask, dropout=dropout)
torch.manual_seed(0)
with sdpa_kernel(backends=backends):
sdpa_result = scaled_dot_product_attention(
query, key, value, attn_mask=mask, is_causal=False, dropout_p=dropout, scale=1.0
)
return torch.abs(manual_result - sdpa_result).mean()
torch.manual_seed(0)
query = torch.randn(2, 3, 4, 8, device="cuda:0")
key = torch.randn(2, 3, 4, 8, device="cuda:0")
value = torch.randn(2, 3, 4, 8, device="cuda:0")
mask = torch.where(torch.rand(2, 1, 4, 4, device="cuda:0") > 0.5, 0.0, -float("inf"))
print(compare(query, key, value, mask=mask, dropout=0.0, backends=[SDPBackend.EFFICIENT_ATTENTION])) # tensor(1.0005e-07, device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.5, backends=[SDPBackend.EFFICIENT_ATTENTION])) # tensor(0.9543, device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.0, backends=[SDPBackend.MATH])) # tensor(0., device='cuda:0')
print(compare(query, key, value, mask=mask, dropout=0.5, backends=[SDPBackend.MATH])) # tensor(0., device='cuda:0')
```
### Versions
Collecting environment information...
PyTorch version: 2.6.0+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.35
Python version: 3.11.11 (main, Dec 11 2024, 16:28:39) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.8.0-1021-aws-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A100-SXM4-40GB
GPU 1: NVIDIA A100-SXM4-40GB
GPU 2: NVIDIA A100-SXM4-40GB
GPU 3: NVIDIA A100-SXM4-40GB
GPU 4: NVIDIA A100-SXM4-40GB
GPU 5: NVIDIA A100-SXM4-40GB
GPU 6: NVIDIA A100-SXM4-40GB
GPU 7: NVIDIA A100-SXM4-40GB
Nvidia driver version: 550.144.03
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
Stepping: 7
BogoMIPS: 6000.00
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch pti fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves ida arat pku ospke
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 1.5 MiB (48 instances)
L1i cache: 1.5 MiB (48 instances)
L2 cache: 48 MiB (48 instances)
L3 cache: 71.5 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-23,48-71
NUMA node1 CPU(s): 24-47,72-95
Vulnerability Gather data sampling: Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit: KVM: Mitigation: VMX unsupported
Vulnerability L1tf: Mitigation; PTE Inversion
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Vulnerable
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Retpoline
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] numpy==2.1.3
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-cusparselt-cu12==0.6.2
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] torch==2.6.0
[pip3] triton==3.2.0
[conda] numpy 2.1.3 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-cusparselt-cu12 0.6.2 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] torch 2.6.0 pypi_0 pypi
[conda] triton 3.2.0 pypi_0 pypi
cc @pbelevich | closed | 2025-03-21T13:39:44Z | 2025-03-21T19:18:49Z | https://github.com/pytorch/pytorch/issues/149726 | [
"triaged",
"module: random",
"module: sdpa"
] | abdulfatir | 3 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 778 | Some strange error but training is continue.How i can solve it?Does is influence result? | Problem below; how can I solve it?
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fbdb1faa128>>
Traceback (most recent call last):
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
self._shutdown_workers()
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
self.worker_result_queue.get()
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 737, in answer_challenge
response = connection.recv_bytes(256) # reject large message
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 379, in _recv
chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
Exception ignored in: <bound method _DataLoaderIter.__del__ of <torch.utils.data.dataloader._DataLoaderIter object at 0x7fbdd5df8d30>>
Traceback (most recent call last):
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 399, in __del__
self._shutdown_workers()
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 378, in _shutdown_workers
self.worker_result_queue.get()
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "/home/pch/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 57, in detach
with _resource_sharer.get_connection(self._id) as conn:
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 87, in get_connection
c = Client(address, authkey=process.current_process().authkey)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 493, in Client
answer_challenge(c, authkey)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 732, in answer_challenge
message = connection.recv_bytes(256) # reject large message
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 216, in recv_bytes
buf = self._recv_bytes(maxlength)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 407, in _recv_bytes
buf = self._recv(4)
File "/home/pch/anaconda3/lib/python3.6/multiprocessing/connection.py", line 383, in _recv
raise EOFError
EOFError:
(epoch: 11, iters: 20, time: 0.364, data: 0.003) D_A: 0.062 G_A: 0.481 cycle_A: 1.802 idt_A: 0.761 D_B: 0.035 G_B: 0.792 cycle_B: 1.773 idt_B: 0.828
(epoch: 11, iters: 120, time: 0.345, data: 0.002) D_A: 0.044 G_A: 0.905 cycle_A: 1.665 idt_A: 0.807 D_B: 0.074 G_B: 0.701 cycle_B: 1.721 idt_B: 0.783
End of epoch 11 / 200 Time Taken: 45 sec
learning rate = 0.0002000 | closed | 2019-09-24T03:48:54Z | 2019-09-26T19:05:38Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/778 | [] | Endless-Hao | 7 |
tensorflow/tensor2tensor | deep-learning | 1,816 | Multistep optimizer "Failed to place the graph without changing the devices" non fatal | ### Description
Received warning as below:
```Failed to place the graph without changing the devices of some resources. Some of the operations (that had to be colocated with resource generating operations) are not supported on the resources' devices. Current candidate devices are [......```
...
### Environment information
t2t 1.15.4
tf 1.15.2
2 1080 ti GPU
Thank you for the awesome class; it saved me from the resource black hole. However, as mentioned, when running I received the above warning. Although the program continues to run smoothly, I would like to confirm that the message can safely be ignored. | open | 2020-05-20T19:16:07Z | 2021-05-16T09:49:10Z | https://github.com/tensorflow/tensor2tensor/issues/1816 | [] | colmantse | 2
automl/auto-sklearn | scikit-learn | 1,721 | [Question] How to solve the warning 'Configuration *** not found'? | Hi everybody
I’m using 2.0.
My dataset shape is 360*20480.
I have changed `time_left_for_this_task` to 6000 to give it more time to find appropriate models. I also changed `per_run_time_limit` to 300. When I run it, there's a warning of 'Configuration *** not found'. What is the cause of this situation, and how can I fix it?

-Hao
# System Details (if relevant)
* auto-sklearn 2.0
* Ubuntu
| open | 2024-02-23T03:46:42Z | 2024-02-23T07:41:27Z | https://github.com/automl/auto-sklearn/issues/1721 | [] | bankuaimianbao | 1 |
freqtrade/freqtrade | python | 11,319 | Could no find any Features |
## Describe your environment
* Operating system: raspberry Pi 5____
* Python Version: _____ (`python -V`)
* CCXT version: _____ (`pip freeze | grep ccxt`)
* Freqtrade Version: ____ (`freqtrade -V` or `docker compose run --rm freqtrade -V` for Freqtrade running in docker)
## Describe the problem:
Every time I try to backtest my strategy with FreqAI, I get the error that it could not find any features. I have tried everything in my config, but nothing helps.
Can anyone help me?
### Relevant code exceptions or logs
```
{
"trading_mode": "spot",
"margin_mode": "isolated",
"max_open_trades": 3,
"stake_currency": "USDT",
"stake_amount": "unlimited",
"tradable_balance_ratio": 0.99,
"fiat_display_currency": "EUR",
"dry_run": true,
"strategy": "Ai",
"timeframe": "5m",
"freqaimodel": "LightGBMClassifier",
"dataformat_ohlcv": "json",
"dataformat_trades": "jsongz",
"cancel_open_orders_on_exit": false,
"dry_run_wallet": 200,
"unfilledtimeout": {
"entry": 10,
"exit": 30
},
"entry_pricing": {
"price_side": "same",
"use_order_book": true,
"order_book_top": 1,
"price_last_balance": 0.0,
"check_depth_of_market": {
"enabled": false,
"bids_to_ask_delta": 1
}
},
"exit_pricing": {
"price_side": "other",
"use_order_book": true,
"order_book_top": 1
},
"exchange": {
"name": "binance",
"fees": {
"maker": 0.001,
"taker": 0.001
},
"sandbox": false,
"key": "",
"secret": "",
"ccxt_config": {
"enableRateLimit": true,
"rateLimit": 50
},
"ccxt_async_config": {
"enableRateLimit": true,
"rateLimit": 50
},
"pair_whitelist": [
"BTC/USDT",
"ETH/USDT",
"LTC/USDT",
"BNB/USDT",
"XRP/USDT",
"ADA/USDT",
"DOT/USDT",
"SOL/USDT",
"LINK/USDT",
"AVAX/USDT"
],
"pair_blacklist": []
},
"pairlists": [
{
"method": "StaticPairList"
},
{
"method": "ShuffleFilter",
"shuffle_frequency": "candle",
"seed": 42
}
],
"freqai": {
"enabled": true,
"model_name": "LightGBMClassifier",
"identifier": "ai_strategy",
"train_period_days": 90,
"backtest_period_days": 30,
"expiration_hours": 24,
"live_retrain_hours": 12,
"purge_old_models": 2,
"save_backtest_models": true,
"feature_parameters": {
"features": [
"sma_50",
"sma_200",
"rsi",
"macd",
"macdsignal",
"macdhist",
"stoch_k",
"stoch_d",
"adx",
"close_to_sma_50",
"sma_diff"
],
"include_corr_pairlist": [
"BTC/USDT:USDT",
"ETH/USDT:USDT"
],
"include_timeframes": [
"5m",
"15m",
"1h"
],
"label_period_candles": 5,
"include_shifted_candles": 10,
"indicator_periods_candles": [
14,
50,
200
]
},
"data_split_parameters": {
"test_size": 0.2,
"random_state": 42,
"shuffle": true
},
"model_training_parameters": {
"learning_rate": 0.01,
"num_leaves": 31,
"n_estimators": 100
}
},
"telegram": {
"enabled": true,
"token": "",
"chat_id": ""
},
"api_server": {
"enabled": true,
"listen_ip_address": "192.168.178.118",
"listen_port": 8081,
"verbosity": "error",
"enable_openapi": false,
"jwt_secret_key": "",
"ws_token": "",
"CORS_origins": [],
"username": "freqtrader",
"password": ""
},
"bot_name": "freqtrade",
"initial_state": "running",
"force_entry_enable": false,
"internals": {
"process_throttle_secs": 5
}
}
```
``` python
from freqtrade.strategy import IStrategy
from freqtrade.strategy import IntParameter
import talib.abstract as ta
from pandas import DataFrame
import pandas as pd
from functools import reduce
from freqtrade.freqai.data_kitchen import FreqaiDataKitchen
import freqtrade.vendor.qtpylib.indicators as qtpylib  # for bollinger_bands/typical_price used below (replaces a duplicate talib import)
from freqtrade.strategy import (
BooleanParameter,
CategoricalParameter,
DecimalParameter,
IntParameter,
IStrategy
)
class Ai(IStrategy):
# Define the minimal ROI for the strategy
minimal_roi = {
"0": 0.2,
"10": 0.1,
"30": 0.05,
"60": 0.02,
"120": 0
}
# Define the stop loss
stoploss = -0.15
# Define the timeframe for the strategy
timeframe = '5m'
# Define the startup candle count
startup_candle_count = 50
def __init__(self, config):
super().__init__(config)
# Store the config for later use
self.config = config
# Initialize FreqaiDataKitchen
if 'freqai' in self.config:
self.dk = FreqaiDataKitchen(self.config)
else:
self.dk = None
def populate_indicators(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
# Add technical indicators
dataframe['sma_50'] = ta.SMA(dataframe, timeperiod=50)
dataframe['sma_200'] = ta.SMA(dataframe, timeperiod=200)
dataframe['rsi'] = ta.RSI(dataframe, timeperiod=14)
dataframe['macd'], dataframe['macdsignal'], dataframe['macdhist'] = ta.MACD(dataframe, fastperiod=12, slowperiod=26, signalperiod=9)
dataframe['stoch_k'], dataframe['stoch_d'] = ta.STOCH(dataframe)
dataframe['adx'] = ta.ADX(dataframe)
return dataframe
def set_target(self, dataframe: DataFrame) -> DataFrame:
"""
Set the target column in the dataframe. This column is what the model will learn to predict.
Here, we set the target as the future return over the next 5 candles.
"""
dataframe['target'] = dataframe['close'].shift(-5) / dataframe['close'] - 1
return dataframe
def engineer_features(self, dataframe: DataFrame) -> DataFrame:
"""
Engineer features for the machine learning model. These features will be used to make predictions.
"""
# Feature engineering
dataframe['close_to_sma_50'] = dataframe['close'] / dataframe['sma_50']
dataframe['sma_diff'] = dataframe['sma_50'] - dataframe['sma_200']
dataframe['rsi'] = dataframe['rsi']
dataframe['macd'] = dataframe['macd']
dataframe['macdsignal'] = dataframe['macdsignal']
dataframe['macdhist'] = dataframe['macdhist']
dataframe['stoch_k'] = dataframe['stoch_k']
dataframe['stoch_d'] = dataframe['stoch_d']
dataframe['adx'] = dataframe['adx']
return dataframe
def feature_engineering_expand_all(self, dataframe: DataFrame, period: int, metadata: dict, **kwargs) -> DataFrame:
"""
This function will automatically expand the defined features based on the config defined
`indicator_periods_candles`, `include_timeframes`, `include_shifted_candles`, and
`include_corr_pairs`.
All features must be prepended with `%` to be recognized by FreqAI internals.
:param dataframe: strategy dataframe which will receive the features
:param period: period of the indicator
:param metadata: metadata of current pair
"""
dataframe[f"%-rsi-{period}"] = ta.RSI(dataframe, timeperiod=period)
dataframe[f"%-mfi-{period}"] = ta.MFI(dataframe, timeperiod=period)
dataframe[f"%-adx-{period}"] = ta.ADX(dataframe, timeperiod=period)
dataframe[f"%-sma-{period}"] = ta.SMA(dataframe, timeperiod=period)
dataframe[f"%-ema-{period}"] = ta.EMA(dataframe, timeperiod=period)
bollinger = qtpylib.bollinger_bands(qtpylib.typical_price(dataframe), window=period, stds=2.2)
dataframe[f"%-bb_lowerband-{period}"] = bollinger["lower"]
dataframe[f"%-bb_middleband-{period}"] = bollinger["mid"]
dataframe[f"%-bb_upperband-{period}"] = bollinger["upper"]
dataframe[f"%-bb_width-{period}"] = (dataframe[f"%-bb_upperband-{period}"] - dataframe[f"%-bb_lowerband-{period}"]) / dataframe[f"%-bb_middleband-{period}"]
dataframe[f"%-close-bb_lower-{period}"] = dataframe["close"] / dataframe[f"%-bb_lowerband-{period}"]
dataframe[f"%-roc-{period}"] = ta.ROC(dataframe, timeperiod=period)
dataframe[f"%-relative_volume-{period}"] = dataframe["volume"] / dataframe["volume"].rolling(period).mean()
return dataframe
def populate_buy_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
# Ensure dk is initialized
if self.dk is None:
raise ValueError("FreqaiDataKitchen is not initialized")
# Predictions based on the FreqAI model
predictions, _ = self.freqai.predict(dataframe, self.dk)
dataframe.loc[
(
(predictions > 0.5) & # Buy signal from the model
(dataframe['rsi'] < 30) &
(dataframe['close'] > dataframe['sma_50']) &
(dataframe['sma_50'] > dataframe['sma_200']) &
(dataframe['stoch_k'] < 20) &
(dataframe['adx'] > 25)
),
'buy'
] = 1
return dataframe
def populate_sell_trend(self, dataframe: DataFrame, metadata: dict) -> DataFrame:
dataframe.loc[
(
(dataframe['rsi'] > 70) &
(dataframe['close'] < dataframe['sma_50']) &
(dataframe['sma_50'] < dataframe['sma_200']) &
(dataframe['stoch_k'] > 80) &
(dataframe['adx'] > 25)
),
'sell'
] = 1
return dataframe
def hyperopt_loss_function(self, current_profit: float, trade_count: int) -> float:
"""
Define the loss function for Hyperopt.
"""
# Minimize the inverse of the profit to maximize profit
return -current_profit / trade_count if trade_count > 0 else 0
```
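One guess at the cause (an assumption on my part, not a confirmed diagnosis): FreqAI only collects columns created inside its `feature_engineering_*` callbacks and prefixed with `%`, and it also requires a `set_freqai_targets` method that creates `&`-prefixed target columns, so the plain `populate_indicators`/`set_target` columns above are invisible to it. A minimal sketch of the missing pieces (method names follow freqtrade's FreqAI docs; shown without the surrounding class for brevity):
```python
def feature_engineering_standard(self, dataframe: DataFrame, metadata: dict, **kwargs) -> DataFrame:
    # Features must start with "%" for FreqAI to pick them up.
    dataframe["%-day_of_week"] = dataframe["date"].dt.dayofweek
    dataframe["%-hour_of_day"] = dataframe["date"].dt.hour
    return dataframe

def set_freqai_targets(self, dataframe: DataFrame, metadata: dict, **kwargs) -> DataFrame:
    # Targets must start with "&"; here: return over the configured label period.
    label_period = self.freqai_info["feature_parameters"]["label_period_candles"]
    dataframe["&-target"] = dataframe["close"].shift(-label_period) / dataframe["close"] - 1
    return dataframe
```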
| closed | 2025-02-02T20:47:06Z | 2025-02-03T05:39:56Z | https://github.com/freqtrade/freqtrade/issues/11319 | [
"Question",
"Strategy assistance",
"freqAI"
] | danxathome | 1 |
lepture/authlib | flask | 149 | ImportError: cannot import name certificate_transparency | ### Environment info
Operating System: Ubuntu 16
Python version: 3.6
Authlib version: 0.12.1
pyOpenSSL version: 19.0.0
cryptography version: 2.7
gspread version: 3.1.0
### Steps to reproduce
1. Use Authlib instead of oauth2client in gspread.
2. Use the code mentioned [here](https://blog.authlib.org/2018/authlib-for-gspread).
3. Try to open a sheet using `gc.open`.
### Stack trace or other output that would be helpful
```
[Tue Sep 17 16:22:18.385805 2019] [wsgi:error] Traceback (most recent call last):
[Tue Sep 17 16:22:18.385811 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/flask/app.py", line 2446, in wsgi_app
[Tue Sep 17 16:22:18.385816 2019] [wsgi:error] response = self.full_dispatch_request()
[Tue Sep 17 16:22:18.385821 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/flask/app.py", line 1951, in full_dispatch_request
[Tue Sep 17 16:22:18.385827 2019] [wsgi:error] rv = self.handle_user_exception(e)
[Tue Sep 17 16:22:18.385847 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/flask/app.py", line 1820, in handle_user_exception
[Tue Sep 17 16:22:18.385852 2019] [wsgi:error] reraise(exc_type, exc_value, tb)
[Tue Sep 17 16:22:18.385857 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/flask/app.py", line 1949, in full_dispatch_request
[Tue Sep 17 16:22:18.385861 2019] [wsgi:error] rv = self.dispatch_request()
[Tue Sep 17 16:22:18.385866 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/flask/app.py", line 1935, in dispatch_request
[Tue Sep 17 16:22:18.385871 2019] [wsgi:error] return self.view_functions[rule.endpoint](**req.view_args)
[Tue Sep 17 16:22:18.385875 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/myapp/views/event.py", line 64, in validate
[Tue Sep 17 16:22:18.385880 2019] [wsgi:error] worksheet = gc.open("Event Validation Test").sheet1
[Tue Sep 17 16:22:18.385884 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/gspread/client.py", line 123, in open
[Tue Sep 17 16:22:18.385889 2019] [wsgi:error] self.list_spreadsheet_files()
[Tue Sep 17 16:22:18.385893 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/gspread/client.py", line 96, in list_spreadsheet_files
[Tue Sep 17 16:22:18.385898 2019] [wsgi:error] res = self.request('get', url, params=params).json()
[Tue Sep 17 16:22:18.385902 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/gspread/client.py", line 73, in request
[Tue Sep 17 16:22:18.385907 2019] [wsgi:error] headers=headers
[Tue Sep 17 16:22:18.385911 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/requests/sessions.py", line 546, in get
[Tue Sep 17 16:22:18.385916 2019] [wsgi:error] return self.request('GET', url, **kwargs)
[Tue Sep 17 16:22:18.385920 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/client/assertion_session.py", line 43, in request
[Tue Sep 17 16:22:18.385925 2019] [wsgi:error] method, url, headers=headers, data=data, auth=auth, **kwargs)
[Tue Sep 17 16:22:18.385930 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/requests/sessions.py", line 519, in request
[Tue Sep 17 16:22:18.385934 2019] [wsgi:error] prep = self.prepare_request(req)
[Tue Sep 17 16:22:18.385939 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/requests/sessions.py", line 462, in prepare_request
[Tue Sep 17 16:22:18.385944 2019] [wsgi:error] hooks=merge_hooks(request.hooks, self.hooks),
[Tue Sep 17 16:22:18.385949 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/requests/models.py", line 317, in prepare
[Tue Sep 17 16:22:18.385954 2019] [wsgi:error] self.prepare_auth(auth, url)
[Tue Sep 17 16:22:18.385966 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/requests/models.py", line 548, in prepare_auth
[Tue Sep 17 16:22:18.385971 2019] [wsgi:error] r = auth(self)
[Tue Sep 17 16:22:18.385975 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/client/assertion_session.py", line 9, in __call__
[Tue Sep 17 16:22:18.385980 2019] [wsgi:error] self.ensure_refresh_token()
[Tue Sep 17 16:22:18.385984 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/client/assertion_client.py", line 9, in ensure_refresh_token
[Tue Sep 17 16:22:18.385989 2019] [wsgi:error] return self.client.refresh_token()
[Tue Sep 17 16:22:18.385993 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/oauth2/rfc7521/client.py", line 55, in refresh_token
[Tue Sep 17 16:22:18.385998 2019] [wsgi:error] **self._kwargs
[Tue Sep 17 16:22:18.386002 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/oauth2/rfc7523/grant.py", line 24, in sign
[Tue Sep 17 16:22:18.386007 2019] [wsgi:error] expires_at, claims, **kwargs)
[Tue Sep 17 16:22:18.386012 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/oauth2/rfc7523/assertion.py", line 36, in sign_jwt_bearer_assertion
[Tue Sep 17 16:22:18.386016 2019] [wsgi:error] return jwt.encode(header, payload, key)
[Tue Sep 17 16:22:18.386021 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/jose/rfc7519/jwt.py", line 95, in encode
[Tue Sep 17 16:22:18.386025 2019] [wsgi:error] return self._jws.serialize_compact(header, text, key)
[Tue Sep 17 16:22:18.386030 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/jose/rfc7515/jws.py", line 71, in serialize_compact
[Tue Sep 17 16:22:18.386035 2019] [wsgi:error] self._algorithms, jws_header, payload, key, private=True)
[Tue Sep 17 16:22:18.386039 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/jose/util.py", line 12, in prepare_algorithm_key
[Tue Sep 17 16:22:18.386044 2019] [wsgi:error] key = algorithm.prepare_private_key(key)
[Tue Sep 17 16:22:18.386048 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/authlib/jose/rfc7518/_backends/_key_cryptography.py", line 19, in prepare_private_key
[Tue Sep 17 16:22:18.386053 2019] [wsgi:error] return load_pem_private_key(key, password=None, backend=default_backend())
[Tue Sep 17 16:22:18.386058 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/cryptography/hazmat/backends/__init__.py", line 15, in default_backend
[Tue Sep 17 16:22:18.386063 2019] [wsgi:error] from cryptography.hazmat.backends.openssl.backend import backend
[Tue Sep 17 16:22:18.386067 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/__init__.py", line 7, in <module>
[Tue Sep 17 16:22:18.386076 2019] [wsgi:error] from cryptography.hazmat.backends.openssl.backend import backend
[Tue Sep 17 16:22:18.386081 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/cryptography/hazmat/backends/openssl/backend.py", line 18, in <module>
[Tue Sep 17 16:22:18.386085 2019] [wsgi:error] from cryptography import utils, x509
[Tue Sep 17 16:22:18.386090 2019] [wsgi:error] File "/var/www/abc.example.com/public_html/venv/lib/python3.6/site-packages/cryptography/x509/__init__.py", line 7, in <module>
[Tue Sep 17 16:22:18.386095 2019] [wsgi:error] from cryptography.x509 import certificate_transparency
[Tue Sep 17 16:22:18.386099 2019] [wsgi:error] ImportError: cannot import name certificate_transparency
```
**What I have tried so far:**
1. Checked if the file path is correct. `certificate_transparency.py` does exist in `/venv/lib/python3.6/site-packages/cryptography/x509`.
2. Tried the same on Windows, it works without any error.
3. Reinstalled all dependencies, including `pyOpenSSL`.
4. Tried importing it inside `(venv)` Python console. `>>> from cryptography.x509 import certificate_transparency` works fine.
**My code:**
```python
def create_assertion_session(conf_file, scopes, subject=None):
with open(conf_file, 'r') as f:
conf = json.load(f)
token_url = conf['token_uri']
issuer = conf['client_email']
key = conf['private_key']
key_id = conf.get('private_key_id')
header = {'alg': 'RS256'}
if key_id:
header['kid'] = key_id
# Google puts scope in payload
claims = {'scope': ' '.join(scopes)}
return AssertionSession(
grant_type=AssertionSession.JWT_BEARER_GRANT_TYPE,
token_url=token_url,
issuer=issuer,
audience=token_url,
claims=claims,
subject=subject,
key=key,
header=header,
)
scopes = [
'https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive',
]
session = create_assertion_session(json_key_file, scopes)
gc = Client(None, session)
worksheet = gc.open("Event Validation Test").sheet1 # this causes the error
```
`requirements.txt`:
```
asn1crypto==0.24.0
Authlib==0.12.1
blinker==1.4
certifi==2019.6.16
cffi==1.12.3
chardet==3.0.4
Click==7.0
colorama==0.4.1
cryptography==2.7
Flask==1.1.1
Flask-Mail==0.9.1
gspread==3.1.0
httplib2==0.13.1
idna==2.8
itsdangerous==1.1.0
Jinja2==2.10.1
MarkupSafe==1.1.1
pyasn1==0.4.7
pyasn1-modules==0.2.6
pycparser==2.19
pymaging==0.1
pymaging-png==0.1
pyOpenSSL==19.0.0
requests==2.22.0
rsa==4.0
six==1.12.0
urllib3==1.25.3
Werkzeug==0.15.5
```
[1]: https://blog.authlib.org/2018/authlib-for-gspread
| closed | 2019-09-17T14:53:02Z | 2019-09-18T11:59:08Z | https://github.com/lepture/authlib/issues/149 | [] | claudiamaximus | 5 |
InstaPy/InstaPy | automation | 6,540 | window.navigator.webdriver response: None | Hi,
I'm getting blocked. Unfortunately, I noticed the following line in the log now.
`INFO [2022-03-07 21:50:25] [XXXx] - window.navigator.webdriver response: None`
Doesn't the line have to be set to true? That will probably be the reason why I am always blocked, right?
What could be the reason that the line shows None and not true? And how can I fix this issue?
Thanks!
```
INFO [2022-03-07 21:50:25] [XXXx] Session started!
INFO [2022-03-07 21:50:25] [XXXx] -- Connection Checklist [1/2] (Internet Connection Status)
INFO [2022-03-07 21:50:25] [XXXx] - Internet Connection Status: ok
INFO [2022-03-07 21:50:25] [XXXx] - Current IP is "XXX" and it's from "Germany/DE"
INFO [2022-03-07 21:50:25] [XXXx] -- Connection Checklist [2/2] (Hide Selenium Extension)
INFO [2022-03-07 21:50:25] [XXXx] - window.navigator.webdriver response: None
INFO [2022-03-07 21:50:25] [XXXx] - Hide Selenium Extension: ok
INFO [2022-03-07 21:50:30] [XXXx] - Cookie file not found, creating cookie...
INFO [2022-03-07 21:51:22] [XXXx] Logged in successfully!
INFO [2022-03-07 21:51:22] [XXXx] Saving account progress...
INFO [2022-03-07 21:51:27] [XXXx] Tag [1/10]
INFO [2022-03-07 21:51:27] [XXXx] --> b'Tag'
INFO [2022-03-07 21:51:37] [XXXx] desired amount: 99 | top posts [disabled]: 9 | possible posts: 19768197
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Photo
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Photo
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Photo
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Carousel - Video - IGTV
INFO [2022-03-07 21:51:41] [XXXx] Post category: Carousel
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Carousel - Video - IGTV
INFO [2022-03-07 21:51:41] [XXXx] Post category: Video
INFO [2022-03-07 21:51:41] [XXXx] Found media type: Photo
INFO [2022-03-07 21:51:42] [XXXx] Found media type: Photo
INFO [2022-03-07 21:51:42] [XXXx] Found media type: Carousel - Video - IGTV
INFO [2022-03-07 21:51:42] [XXXx] Post category: Carousel
INFO [2022-03-07 21:51:42] [XXXx] Found media type: Carousel - Video - IGTV
INFO [2022-03-07 21:51:42] [XXXx] Post category: Carousel
INFO [2022-03-07 21:51:42] [XXXx] Found media type: Carousel - Video - IGTV
INFO [2022-03-07 21:51:42] [XXXx] Post category: Carousel
INFO [2022-03-07 21:51:42] [XXXx] Found m
INFO [2022-03-07 21:51:43] [XXXx] Links retrieved:: [1/https://www.instagram.com/p/XX/]
INFO [2022-03-07 21:52:26] [XXXx] https://www.instagram.com/p/XX/
WARNING [2022-03-07 21:52:30] [XXXx] Unavailable Page: b'https://www.instagram.com/p/XX/'
INFO [2022-03-07 21:52:30] [XXXx] --> Image not liked: b'Unavailable Page'
``` | open | 2022-03-07T21:01:59Z | 2022-03-12T19:44:50Z | https://github.com/InstaPy/InstaPy/issues/6540 | [] | Wolkex3 | 2 |
iperov/DeepFaceLab | deep-learning | 813 | [Feature]: Training - Progress Status (Iter) | When training (xseg and model) current progress is displayed in the text console, and also the current _Iter_ in the preview window.
Can you also include the current _Iter_ in the text console? And can you update the text console's progress on every _Iter_, and possibly every second as well? More than half of the text console's status line is empty and can be used.
Also, the location of _Iter_ on the preview window can get overwritten in some cases by the graph data. Can you relocate it to the top right of the preview window? There is more than 50% free space there, and it would also be faster and clearer to find at a glance.
| open | 2020-07-05T18:11:07Z | 2020-07-05T18:11:07Z | https://github.com/iperov/DeepFaceLab/issues/813 | [] | HotDenim | 0 |
mckinsey/vizro | pydantic | 431 | typo in explore-components/#22-add-further-components | ### Question

I guess this needs to be fixed to 'create creative', or one of the duplicated 'create' words should be deleted.
Since the issue I was assigned was about checking the links' adaptability, I wasn't sure whether I should change this and open a pull request.
### Code/Examples
_No response_
### Other information
_No response_
### Which package?
None
### Package version
_No response_
### Python version
_No response_
### OS
_No response_
### Code of Conduct
- [X] I agree to follow the [Code of Conduct](https://github.com/mckinsey/vizro/blob/main/CODE_OF_CONDUCT.md). | closed | 2024-04-21T22:14:10Z | 2024-04-22T09:09:24Z | https://github.com/mckinsey/vizro/issues/431 | [
"General Question :question:"
] | kaestro | 1 |
postmanlabs/httpbin | api | 198 | Do a deploy to httpbin.org? | To pick up new stuff
```
$ http http://httpbin.org/encoding/utf8
HTTP/1.1 404 NOT FOUND
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: *
Connection: keep-alive
Content-Length: 233
Content-Type: text/html
Date: Tue, 23 Dec 2014 23:31:06 GMT
Server: gunicorn/18.0
Via: 1.1 vegur
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>404 Not Found</title>
<h1>Not Found</h1>
<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>
```
| closed | 2014-12-23T23:31:22Z | 2018-04-26T17:51:05Z | https://github.com/postmanlabs/httpbin/issues/198 | [] | msabramo | 3 |
JaidedAI/EasyOCR | deep-learning | 1,079 | failed to recognize decimal point | I find that the model only recognizes the digits but ignores the decimal point. Can anyone help, please? | open | 2023-07-10T11:09:23Z | 2023-07-10T11:09:23Z | https://github.com/JaidedAI/EasyOCR/issues/1079 | [] | HGGshiwo | 0 |
cvat-ai/cvat | pytorch | 9,247 | Unsuccessful restoration of a TASK from the backup | ### Actions before raising this issue
- [x] I searched the existing issues and did not find anything similar.
- [x] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
After every unsuccessful restoration of a TASK from the backup, raw images are created in the docker cvat_server/~/data/data/number_job.
How can this be avoided? And how can I ensure that the directory is deleted after a failed restoration, to prevent disk-space clutter?
I had corrupted (broken) image files that were previously backed up in version 1.92. Now, when attempting restoration in 2.31.1, I made multiple attempts; each attempt created its own job directory and copied files from the archive, but ultimately resulted in an error.
Import Backup
Started by admin on Mar 24th 25, 15:02
rest_framework.exceptions.ValidationError: [ErrorDetail(string='Incorrect file mapping to manifest content', code='invalid')]
### Expected Behavior
_No response_
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
``` | open | 2025-03-24T10:27:40Z | 2025-03-24T10:27:40Z | https://github.com/cvat-ai/cvat/issues/9247 | [
"bug"
] | AnSMru | 0 |
ckan/ckan | api | 7,736 | Package Search breaks when '+state' is in fq_list | ## CKAN version
2.9.x, at least
## Describe the bug
This call:
`get_action('package_search')(None, {'fq_list':['+state:((anything other than active))']})`
results in a solr query like:
`...&fq=state:((anything other than active))&fq=%2Bsite_id:"default"&fq=%2Bstate:active&...`
due to https://github.com/ckan/ckan/blob/2.9/ckan/lib/search/query.py#L332C11-L332C11
```
if 'fq' in query:
fq.append(query['fq'])
fq.extend(query.get('fq_list', []))
# show only results from this CKAN instance
fq.append('+site_id:%s' % solr_literal(config.get('ckan.site_id')))
# filter for package status
if not '+state:' in query.get('fq', ''):
fq.append('+state:active')
```
because the filter for package status is only checking the passed in `fq`, and not the `fq_list`.
### Expected behavior
I'd expect that `fq_list` is a more convenient version of `fq`.
I think a fix for this would be:
```
# filter for package status
if not any('+state:' in _item for _item in fq):
fq.append('+state:active')
```
| closed | 2023-08-03T11:10:57Z | 2023-11-24T11:23:14Z | https://github.com/ckan/ckan/issues/7736 | [
"Good for Contribution"
] | EricSoroos | 0 |
Asabeneh/30-Days-Of-Python | numpy | 648 | A small issue in 11_Day_Functions/11_functions.md | Hi.
In 11_functions.md, under the 'Function with Parameters' heading (Single Parameter Example), the `sum_of_numbers(n)` function prints the `total` and returns None instead of returning it.
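A minimal sketch of the corrected version (assuming the intent is for the function to return the sum instead of printing it):
```python
def sum_of_numbers(n):
    total = 0
    for number in range(n + 1):
        total += number
    return total  # return the value instead of printing it

print(sum_of_numbers(10))   # 55
print(sum_of_numbers(100))  # 5050
```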
May I fix this? @Asabeneh | open | 2025-02-19T17:18:53Z | 2025-02-19T17:18:53Z | https://github.com/Asabeneh/30-Days-Of-Python/issues/648 | [] | rohitmhnty | 0 |
sinaptik-ai/pandas-ai | data-visualization | 1,179 | Pandasai Multiple Dataframe: sqlalchemy.exc.DatabaseError: (databricks.sql.exc.ServerOperationError) [UNRESOLVED_COLUMN.WITH_SUGGESTION] | ### System Info
Pandasai version: 2.0.42
Python version: 3.10.12
### 🐛 Describe the bug
I tried to pass 2 tables into the pandasai Agent and join on a column that has a different name in each of the 2 tables.
```
agent = Agent([table_userinfo, table_ticketinfo], config={"llm": llm})
answer = agent.chat(prompt)
```
I want to get the user info of a specific ticket.
It can produce the correct code until step 5.
[INFO] Code generated:
```
result_df = dfs[1].merge(dfs[0], left_on='reporter', right_on='name')
result_df = result_df[result_df['issue_id'] == 1][['LegalLastName', 'LegalFirstName', 'Email']]
result = {'type': 'dataframe', 'value': result_df}
```
However, it cannot execute the code successfully on step 6.
[INFO] Executing Step 6: CodeExecution
[ERROR] Failed with error: Traceback (most recent call last):
databricks.sql.exc.ServerOperationError: [UNRESOLVED_COLUMN.WITH_SUGGESTION] A column or function parameter with name `issue_id` cannot be resolved. Did you mean one of the following? [`Email`, ...].; line 3 pos 6
[SQL: SELECT *
FROM table_userinfo
WHERE issue_id = %(value_0)s]
[parameters: {'value_0': 1}]
(Background on this error at: https://sqlalche.me/e/14/4xp6)
I don't understand why the generated code correctly takes `issue_id` from the merged table, yet the emitted SQL tries to select `issue_id` from the userinfo table.
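For comparison, a plain-pandas equivalent of the generated code — where `issue_id` does resolve on the merged frame — would be roughly as follows (the toy data is only illustrative; the column names are taken from the generated snippet above):
```python
import pandas as pd

user_df = pd.DataFrame({"name": ["alice"], "LegalLastName": ["A"], "LegalFirstName": ["Alice"], "Email": ["a@x.com"]})
ticket_df = pd.DataFrame({"issue_id": [1], "reporter": ["alice"]})

merged = ticket_df.merge(user_df, left_on="reporter", right_on="name")
result = merged.loc[merged["issue_id"] == 1, ["LegalLastName", "LegalFirstName", "Email"]]
print(result)
```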
What might be the suitable input prompt for multiple table scenarios, or how to solve the issue? | closed | 2024-05-28T16:11:45Z | 2024-10-12T04:23:04Z | https://github.com/sinaptik-ai/pandas-ai/issues/1179 | [
"bug"
] | ssling0817 | 2 |
microsoft/unilm | nlp | 801 | LayoutLMv3 | Domain adaptation on the base model | I'm using the base model from LayoutLMv3 and trying to adapt it to my own local data. This data is unlabeled, so I'm trying to continue the training of the base model on my own data.
I'm having trouble working out how to mask the data and which collator to give to the [Trainer](https://huggingface.co/docs/transformers/main_classes/trainer). Currently my data has this structure:
```python
features = Features({
'input_ids': Sequence(feature=Value(dtype='int64')),
'attention_mask': Sequence(Value(dtype='int64')),
'bbox': Array2D(dtype="int64", shape=(512, 4)),
'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
})
```
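For reference, this is roughly how I'm wiring the stock collator at the moment (simplified sketch; the checkpoint name is just the base model mentioned above):
```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlmv3-base")
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
# masks input_ids and builds labels, but bbox and pixel_values never make it into the batch
```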
To mask the text part I'm using the [DataCollatorForLanguageModeling](https://huggingface.co/docs/transformers/main_classes/data_collator#transformers.DataCollatorForLanguageModeling) but this only masks the text and doesn't include the image information. Anyone that knows how to do this? | closed | 2022-07-25T08:01:40Z | 2022-08-25T06:34:09Z | https://github.com/microsoft/unilm/issues/801 | [] | louisdeneve | 1 |
scikit-multilearn/scikit-multilearn | scikit-learn | 16 | Quade test | closed | 2015-03-03T23:29:23Z | 2016-02-01T15:11:41Z | https://github.com/scikit-multilearn/scikit-multilearn/issues/16 | [
"statistics"
] | niedakh | 0 |
|
awtkns/fastapi-crudrouter | fastapi | 105 | Failing schemathesis validation | After #104, the schemathesis tests are still failing when validating the generated OpenAPI spec. The goal of this issue is to resolve these failing tests (as detailed in #102).
---
>@awtkns I looked at this a little more, and I think my implementation here might be a little bit off. If you're planning to implement schemathesis I'm sure you'll spot it, but leaving this here in case it's helpful. Looking through [these](https://fastapi.tiangolo.com/advanced/additional-responses/?h=404) docs, it looks like this is the way to specify responses:
```python
class NotFoundModel(BaseModel):
detail: str
@router.get('/{item_id}/', status_code=200, response_model=Model, responses={'404': {'model': NotFoundModel}})
async def retrieve(item_id: int) -> Model:
try:
return await Model.objects.get(id=item_id)
except NoMatch:
raise HTTPException(status_code=404, detail="Item not found")
```
>In other words, I think this PR fixed one schemathesis issue: the missing response code, but didn't fully resolve it, since the model isn't correctly specified.
_Originally posted by @sondrelg in https://github.com/awtkns/fastapi-crudrouter/issues/104#issuecomment-922811907_ | open | 2021-09-20T15:53:38Z | 2021-09-20T15:58:19Z | https://github.com/awtkns/fastapi-crudrouter/issues/105 | [
"enhancement"
] | awtkns | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,273 | use of app.run vs. socketio.run | Hi,
I wonder about the usage of `app.run` vs. `socketio.run`. I have in my code '__main__':
```
port = 8081
app = create_app(port)
socketio = app.extensions['socketio']
socketio.run(app, port=port,host='0.0.0.0',debug=True)
# app.run(host='0.0.0.0', port=port, debug=True, threaded=True)
```
to be compatible with `gunicorn`. If I now run `app.run` instead of `socketio.run` with the Flask development server, the application doesn't work — mainly, `socketio.start_background_task` is not executed properly. Running under gunicorn, which is only given the `app` object, nevertheless works perfectly.
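To make the symptom concrete, the background task is started roughly like this (simplified sketch continuing from the snippet above; the ticker loop stands in for my real task):
```python
def ticker():
    while True:
        socketio.sleep(1)
        socketio.emit("tick", {"status": "alive"})

socketio.start_background_task(ticker)
```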
So what is the difference, and why can I not always use `app.run`?
| closed | 2020-05-06T12:21:23Z | 2021-04-06T13:18:00Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1273 | [
"question"
] | mhechthz | 17 |
miguelgrinberg/microblog | flask | 5 | AttributeError: class momentjs has no attribute '__call__' | From the same build circumstances as mentioned in closed issue #1
In trying to build and use this GitHub repository on its own, without reference to the foundation tutorial, I encounter the exception:
127.0.0.1 - - [30/Jun/2013 08:32:37] "GET /user/martinhbramwell HTTP/1.1" 500 -
Traceback (most recent call last):
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1701, in **call**
return self.wsgi_app(environ, start_response)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1689, in wsgi_app
response = self.make_response(self.handle_exception(e))
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1687, in wsgi_app
response = self.full_dispatch_request()
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1360, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1358, in full_dispatch_request
rv = self.dispatch_request()
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/app.py", line 1344, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask_login.py", line 650, in decorated_view
return func(*args, **kwargs)
File "/home/temp/Desktop/microblog-master/app/views.py", line 128, in user
posts = posts)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/templating.py", line 125, in render_template
context, ctx.app)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/flask/templating.py", line 107, in _render
rv = template.render(context)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/jinja2/environment.py", line 969, in render
return self.environment.handle_exception(exc_info, True)
File "/home/temp/Desktop/microblog-master/flask/lib/python2.7/site-packages/jinja2/environment.py", line 742, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/temp/Desktop/microblog-master/app/templates/user.html", line 2, in top-level template code
{% extends "base.html" %}
File "/home/temp/Desktop/microblog-master/app/templates/base.html", line 64, in top-level template code
{% block content %}{% endblock %}
File "/home/temp/Desktop/microblog-master/app/templates/user.html", line 13, in block "content"
<p><em>{{ _('Last seen:') }} {{ momentjs(user.last_seen).calendar() }}</em></p>
AttributeError: class momentjs has no attribute '__call__'
| closed | 2013-06-30T12:43:14Z | 2013-07-06T17:34:30Z | https://github.com/miguelgrinberg/microblog/issues/5 | [] | martinhbramwell | 1 |
absent1706/sqlalchemy-mixins | sqlalchemy | 11 | Limit and Offset | Is there a way to do limits and offsets in sql query mixins ? | closed | 2018-03-29T16:34:51Z | 2018-03-30T08:12:38Z | https://github.com/absent1706/sqlalchemy-mixins/issues/11 | [] | williamkibira | 2 |
huggingface/transformers | deep-learning | 36,025 | HIGGS Quantization not working properly | ### System Info
**Environment**
```
- `transformers` version: 4.48.2
- Platform: Linux-5.4.210-39.1.pagevecsize-x86_64-with-glibc2.27
- Python version: 3.11.10
- Huggingface_hub version: 0.26.2
- Safetensors version: 0.4.5
- Accelerate version: 1.1.1
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.0+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA A100-SXM4-80GB
- fast_hadamard_transform 1.0.4.post1
```
### Who can help?
@BlackSamorez
@SunMarc
@ArthurZucker
### Information
- [ ] The official example scripts
- [x] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Recently, in the [PR](https://github.com/huggingface/transformers/pull/34997) HIGGS quantization from the paper [Pushing the Limits of Large Language Model Quantization via the Linearity Theorem](https://arxiv.org/abs/2411.17525) was introduced.
But when attempting to load the quantized `Llama-3.1-8B-Instruct` model in this format as follows:
```python
model_name = "meta-llama/Llama-3.1-8B-Instruct"
quantization_config = HiggsConfig(bits=4, p=2)
model = AutoModelForCausalLM.from_pretrained(
model_name,
device_map="auto",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
quantization_config=quantization_config
)
model.config.use_cache = False
```
And doing a forward pass with dummy inputs:
```python
inputs = torch.randint(0, model.config.vocab_size, device="cuda", size=(8,))
with torch.no_grad():
outputs = model(inputs)
```
I get the following error in the RoPE:
```bash
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271, in LlamaAttention.forward(self, hidden_states, position_embeddings, attention_mask, past_key_value, cache_position, **kwargs)
[268](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:268) value_states = self.v_proj(hidden_states).view(hidden_shape).transpose(1, 2)
[270](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:270) cos, sin = position_embeddings
--> [271](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:271) query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin)
[273](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:273) if past_key_value is not None:
[274](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:274) # sin and cos are specific to RoPE models; cache_position needed for the static cache
[275](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:275) cache_kwargs = {"sin": sin, "cos": cos, "cache_position": cache_position}
File ~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169, in apply_rotary_pos_emb(q, k, cos, sin, position_ids, unsqueeze_dim)
[167](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:167) cos = cos.unsqueeze(unsqueeze_dim)
[168](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:168) sin = sin.unsqueeze(unsqueeze_dim)
--> [169](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:169) q_embed = (q * cos) + (rotate_half(q) * sin)
[170](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:170) k_embed = (k * cos) + (rotate_half(k) * sin)
[171](https://vscode-remote+ssh-002dremote-002bultramar.vscode-resource.vscode-cdn.net/home/dkuznedelev/FLUTE_Playground/~/miniconda3/envs/llm/lib/python3.11/site-packages/transformers/models/llama/modeling_llama.py:171) return q_embed, k_embed
RuntimeError: The size of tensor a (32) must match the size of tensor b (128) at non-singleton dimension 3
```
### Expected behavior
I would expect successful forward pass through the quantized model. | closed | 2025-02-04T08:55:00Z | 2025-02-19T05:35:52Z | https://github.com/huggingface/transformers/issues/36025 | [
"bug"
] | Godofnothing | 3 |
dpgaspar/Flask-AppBuilder | flask | 1,648 | ERROR:flask_appbuilder.security.sqla.manager:DB Creation and initialization failed | A problem has been encountered when moving FAB to another server:
/root/anaconda3/lib/python3.8/site-packages/Flask_SQLAlchemy-2.5.1-py3.8.egg/flask_sqlalchemy/__init__.py:872: FSADeprecationWarning: SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. Set it to True or False to suppress this warning.
2021-06-02 00:34:07,924:ERROR:flask_appbuilder.security.sqla.manager:DB Creation and initialization failed: get_bind() takes from 1 to 3 positional arguments but 6 were given
| closed | 2021-06-01T08:55:01Z | 2021-06-15T14:59:32Z | https://github.com/dpgaspar/Flask-AppBuilder/issues/1648 | [] | HHself | 3 |
healthchecks/healthchecks | django | 400 | Problem Running ./manage.py settelegramwebhook | Greetings,
I was attempting to run `./manage.py settelegramwebhook` the other day but got the following result returned:
```
Running manage.py ...
/healthchecks/hc/settings.py:226: UserWarning: local_settings.py not found, using defaults
warnings.warn("local_settings.py not found, using defaults")
Fail: status=404, b'{"ok":false,"error_code":404,"description":"Not Found"}'
```
However, it appears I can run the following without issue which allows the webhook to be created:
```
curl -F "url=https://<YOURDOMAIN.EXAMPLE>//integrations/telegram/bot/" https://api.telegram.org/bot<YOURTOKEN>/setWebhook
```
So it seems to me there might be an issue in the `hc/api/management/commands/settelegramwebhook.py` file. I'm no Python person (but I can report issues!!), but that is where I would start looking to resolve this. | closed | 2020-07-11T14:56:06Z | 2022-05-21T10:51:04Z | https://github.com/healthchecks/healthchecks/issues/400 | [] | jimmybrancaccio | 6 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,281 | What to do , i dont get it . Help! | (harry) C:\Users\Cookie\Desktop\iam>
(harry) C:\Users\Cookie\Desktop\iam>python demo_toolbox.py
C:\Users\Cookie\Desktop\iam\encoder\audio.py:13: UserWarning: Unable to import 'webrtcvad'. This package enables noise removal and is recommended.
warn("Unable to import 'webrtcvad'. This package enables noise removal and is recommended.")
Traceback (most recent call last):
File "C:\Users\Cookie\Desktop\iam\demo_toolbox.py", line 5, in <module>
from toolbox import Toolbox
File "C:\Users\Cookie\Desktop\iam\toolbox\__init__.py", line 11, in <module>
from toolbox.ui import UI
File "C:\Users\Cookie\Desktop\iam\toolbox\ui.py", line 37, in <module>
], dtype=np.float) / 255
File "C:\Users\Cookie\anaconda3\envs\harry\lib\site-packages\numpy\__init__.py", line 284, in __getattr__
raise AttributeError("module {!r} has no attribute "
AttributeError: module 'numpy' has no attribute 'float' | open | 2024-01-06T17:46:38Z | 2024-03-12T09:49:07Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1281 | [] | iWEDX | 1 |
cvat-ai/cvat | computer-vision | 8,324 | Export Error to CVAT 1.1 | I have annotated masks, bounding boxes and keypoints.
I have 4 tasks in total. 2 of the tasks have ~20 images each, and I have successfully exported them to CVAT 1.1 format.
The other tasks have more than 60 images, and exporting them to CVAT 1.1 format gives me an `IndexError: list index out of range` error.
but the same export is working for other formats. | closed | 2024-08-20T17:47:55Z | 2024-08-29T14:17:31Z | https://github.com/cvat-ai/cvat/issues/8324 | [
"need info"
] | shekarneo | 2 |
huggingface/datasets | computer-vision | 6,869 | Download is broken for dict of dicts: FileNotFoundError | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
```
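(For contrast, a flat mapping without the inner dict presumably still works — the nesting seems to be what triggers the failure below:)
```python
from datasets import DownloadManager

dl_manager = DownloadManager()
# Flat mapping (no inner dict), presumably unaffected
paths = dl_manager.download({"train": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"})
```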
Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-7-0e0d76d25b09> in <module>
----> 1 paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet"}})
.../huggingface/datasets/src/datasets/download/download_manager.py in download(self, url_or_urls)
255 start_time = datetime.now()
256 with stack_multiprocessing_download_progress_bars():
--> 257 downloaded_path_or_paths = map_nested(
258 download_func,
259 url_or_urls,
.../huggingface/datasets/src/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, batched, batch_size, types, disable_tqdm, desc)
506 batch_size = max(len(iterable) // num_proc + int(len(iterable) % num_proc > 0), 1)
507 iterable = list(iter_batched(iterable, batch_size))
--> 508 mapped = [
509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
507 iterable = list(iter_batched(iterable, batch_size))
508 mapped = [
--> 509 _single_map_nested((function, obj, batched, batch_size, types, None, True, None))
510 for obj in hf_tqdm(iterable, disable=disable_tqdm, desc=desc)
511 ]
.../huggingface/datasets/src/datasets/utils/py_utils.py in _single_map_nested(args)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/utils/py_utils.py in <listcomp>(.0)
375 and all(not isinstance(v, types) for v in data_struct)
376 ):
--> 377 return [mapped_item for batch in iter_batched(data_struct, batch_size) for mapped_item in function(batch)]
378
379 # Reduce logging to keep things readable in multiprocessing with tqdm
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_batched(self, url_or_filenames, download_config)
311 )
312 else:
--> 313 return [
314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
.../huggingface/datasets/src/datasets/download/download_manager.py in <listcomp>(.0)
312 else:
313 return [
--> 314 self._download_single(url_or_filename, download_config=download_config)
315 for url_or_filename in url_or_filenames
316 ]
.../huggingface/datasets/src/datasets/download/download_manager.py in _download_single(self, url_or_filename, download_config)
321 # append the relative path to the base_path
322 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 323 out = cached_path(url_or_filename, download_config=download_config)
324 out = tracked_str(out)
325 out.set_origin(url_or_filename)
.../huggingface/datasets/src/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
220 elif is_local_path(url_or_filename):
221 # File, but it doesn't exist.
--> 222 raise FileNotFoundError(f"Local file {url_or_filename} doesn't exist")
223 else:
224 # Something unknown
FileNotFoundError: Local file .../huggingface/datasets/{'frr': 'hf:/datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-00001.parquet'} doesn't exist
```
Related to:
- #6850
| closed | 2024-05-06T05:13:36Z | 2024-05-06T09:25:53Z | https://github.com/huggingface/datasets/issues/6869 | [
"bug"
] | albertvillanova | 0 |
strawberry-graphql/strawberry | django | 3,349 | When i use run with python3, ImportError occured | <!-- Provide a general summary of the bug in the title above. -->
<!--- This template is entirely optional and can be removed, but is here to help both you and us. -->
<!--- Anything on lines wrapped in comments like these will not show up in the final text. -->
ImportError : cannot import name 'GraphQLError' from 'graphql'
## Describe the Bug
It works well when executed with poetry run app.main:main.
However, when executing with python3 app/main.py, the following Import Error occurs.
_**Error occurred code line**_
<img width="849" alt="image" src="https://github.com/strawberry-graphql/strawberry/assets/10377550/713ab6b4-76c1-4b0d-84e1-80903f8855ea">
**_Traceback_**
```bash
Traceback (most recent call last):
File "<frozen importlib._bootstrap>", line 1176, in _find_and_load
File "<frozen importlib._bootstrap>", line 1147, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/bootstrapper.py", line 4, in <module>
from app.bootstrap.admin.router import AdminRouter
File "/Users/evanhwang/dev/ai-hub/hub-api/app/bootstrap/admin/router.py", line 3, in <module>
import strawberry
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/__init__.py", line 1, in <module>
from . import experimental, federation, relay
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/__init__.py", line 1, in <module>
from .argument import argument
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/federation/argument.py", line 3, in <module>
from strawberry.arguments import StrawberryArgumentAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/arguments.py", line 18, in <module>
from strawberry.annotation import StrawberryAnnotation
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/annotation.py", line 23, in <module>
from strawberry.custom_scalar import ScalarDefinition
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/custom_scalar.py", line 19, in <module>
from strawberry.exceptions import InvalidUnionTypeError
File "/Users/evanhwang/Library/Caches/pypoetry/virtualenvs/hub-api-UM7sgzi1-py3.11/lib/python3.11/site-packages/strawberry/exceptions/__init__.py", line 6, in <module>
from graphql import GraphQLError
ImportError: cannot import name 'GraphQLError' from 'graphql' (/Users/evanhwang/dev/ai-hub/hub-api/app/graphql/__init__.py)
```
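The path in that last line suggests Python is picking up the local `app/graphql` package instead of graphql-core; a small check that can be run under both invocations (illustrative only) is:
```python
import sys
import graphql

print(graphql.__file__)  # which 'graphql' package actually wins
print(sys.path[:3])      # the leading entries differ between the two ways of launching
```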
## System Information
- Operating System: Mac Ventura 13.5.1(22G90)
- Strawberry Version (if applicable):
Entered in pyproject.toml as follows:
```bash
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
```
**_pyproject.toml_**
```toml
##############################################################################
# Poetry dependency settings
# - https://python-poetry.org/docs/managing-dependencies/#dependency-groups
# - By default, dependencies are resolved from PyPI.
##############################################################################
[tool.poetry.dependencies]
python = "3.11.*"
fastapi = "^0.103.2"
uvicorn = "^0.23.2"
poethepoet = "^0.24.0"
requests = "^2.31.0"
poetry = "^1.6.1"
sqlalchemy = "^2.0.22"
sentry-sdk = "^1.32.0"
pydantic-settings = "^2.0.3"
psycopg2-binary = "^2.9.9"
cryptography = "^41.0.4"
python-ulid = "^2.2.0"
ulid = "^1.1"
redis = "^5.0.1"
aiofiles = "^23.2.1"
pyyaml = "^6.0.1"
python-jose = "^3.3.0"
strawberry-graphql = {extras = ["debug-server", "fastapi"], version = "^0.217.1"}
[tool.poetry.group.dev.dependencies]
pytest = "^7.4.0"
pytest-mock = "^3.6.1"
httpx = "^0.24.1"
poetry = "^1.5.1"
sqlalchemy = "^2.0.22"
redis = "^5.0.1"
mypy = "^1.7.0"
types-aiofiles = "^23.2.0.0"
types-pyyaml = "^6.0.12.12"
commitizen = "^3.13.0"
black = "^23.3.0" # formatter
isort = "^5.12.0" # import sorting
pycln = "^2.1.5" # clean up unused imports
ruff = "^0.0.275" # linting
##############################################################################
# poethepoet
# - https://github.com/nat-n/poethepoet
# - Task runner configuration via poe
##############################################################################
types-requests = "^2.31.0.20240106"
pre-commit = "^3.6.0"
[tool.poe.tasks.format-check-only]
help = "Check without formatting with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln --check ."},
{cmd = "black --check ."},
{cmd = "isort --check-only ."}
]
[tool.poe.tasks.format]
help = "Run formatter with 'pycln', 'black', 'isort'."
sequence = [
{cmd = "pycln -a ."},
{cmd = "black ."},
{cmd = "isort ."}
]
[tool.poe.tasks.lint]
help = "Run linter with 'ruff'."
cmd = "ruff ."
[tool.poe.tasks.type-check]
help = "Run type checker with 'mypy'"
cmd = "mypy ."
[tool.poe.tasks.clean]
help = "Clean mypy_cache, pytest_cache, pycache..."
cmd = "rm -rf .coverage .mypy_cache .pytest_cache **/__pycache__"
##############################################################################
# isort
# - https://pycqa.github.io/isort/
# - Python import sorting module configuration
##############################################################################
[tool.isort]
profile = "black"
##############################################################################
# ruff
# - https://github.com/astral-sh/ruff
# - A Rust-based formatter and linter.
##############################################################################
[tool.ruff]
select = [
"E", # pycodestyle errors
"W", # pycodestyle warnings
"F", # pyflakes
"C", # flake8-comprehensions
"B", # flake8-bugbear
# "T20", # flake8-print
]
ignore = [
"E501", # line too long, handled by black
"E402", # line too long, handled by black
"B008", # do not perform function calls in argument defaults
"C901", # too complex
]
[tool.commitizen]
##############################################################################
# mypy settings
# - https://mypy.readthedocs.io/en/stable/
# - Performs static type checking.
##############################################################################
[tool.mypy]
python_version = "3.11"
packages=["app"]
exclude=["tests"]
ignore_missing_imports = true
show_traceback = true
show_error_codes = true
disable_error_code="misc, attr-defined"
follow_imports="skip"
#strict = false
# The following are several of the options included in --strict.
warn_unused_configs = true # Warn about unused [mypy-<pattern>] config sections in the mypy settings. (Requires --no-incremental to turn off incremental mode)
disallow_any_generics = false # Disallow generic types that do not specify explicit type parameters. For example, plain code like x: list is not allowed; it must always be written explicitly, e.g. x: list[int].
disallow_subclassing_any = true # Report an error when a class subclasses a value of type Any. This can happen when the base class comes from a module that doesn't exist (when using --ignore-missing-imports) or when the import statement has a # type: ignore comment.
disallow_untyped_calls = true # Report an error when an annotated function calls a function defined without type annotations.
disallow_untyped_defs = false # Report function definitions without type annotations or with incomplete annotations. (A superset of --disallow-incomplete-defs)
disallow_incomplete_defs = false # Report partially annotated function definitions, while still allowing fully annotated ones.
check_untyped_defs = true # Always type-check the bodies of functions without annotations. (By default, the bodies of unannotated functions are not type-checked.) Treats every parameter as Any and always infers Any as the return value.
disallow_untyped_decorators = true # Report an error when using decorators that lack type annotations.
warn_redundant_casts = true # Report an error when the code uses an unnecessary cast. A warning is issued when a cast can be safely removed.
warn_unused_ignores = false # Warn when the code contains a # type: ignore comment that does not actually suppress an error message.
warn_return_any = false # Warn about functions that return a value of type Any.
no_implicit_reexport = true # By default, values imported into a module are considered exported and mypy allows other modules to import them. With this flag, the behaviour changes so that they are not exported unless imported using from-as or included in __all__.
strict_equality = true # By default mypy allows always-false comparisons such as 42 == 'no'. This flag prohibits such comparisons and reports similar identity and container checks. (e.g. from typing import Text)
extra_checks = true # Enable extra checks that are technically correct but may be inconvenient in real code. In particular, prohibits partial overlap in TypedDict updates and makes positional-only arguments via Concatenate.
# Without the pydantic plugin configured, spurious type errors may occur
# - https://www.twoistoomany.com/blog/2023/04/12/pydantic-mypy-plugin-in-pyproject/
plugins = ["pydantic.mypy", "strawberry.ext.mypy_plugin"]
##############################################################################
# Build system settings
##############################################################################
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
##############################################################################
# virtualenv settings
# - When the poetry command is invoked in the project, create the venv at '.venv' if one does not exist
##############################################################################
[virtualenvs]
create = true
in-project = true
path = ".venv"
```
## Additional Context
- I already ran 'Invalidate Caches' in PyCharm. | closed | 2024-01-19T02:17:47Z | 2025-03-20T15:56:34Z | https://github.com/strawberry-graphql/strawberry/issues/3349 | [] | evan-hwang | 6 |
roboflow/supervision | machine-learning | 1,320 | YOLOv8 + ByteTrack integration issues | ### Search before asking
- [X] I have searched the Supervision [issues](https://github.com/roboflow/supervision/issues) and found no similar feature requests.
### Question
Hello! I'm currently building a program to detect deep sea creatures in submarine video. I am using YOLOv8 to make detections and ByteTrack to assign object IDs to these detections. My output includes both an annotated video (based exclusively on YOLO output) and a csv file of all distinct detections (determined as distinct by ByteTrack). I am having an issue where certain creatures are annotated in the video output, ie. detected by YOLO, but then they are omitted from the csv output ie. not assigned a tracking ID by ByteTrack. Please help! Thanks!
### Additional
def process_video(video_path: str, output_path: str, model_path: str, location_path: str, start_time: str, time_col: int, lat_col: int, lon_col: int, depth_col: int, salinity_col: int, oxygen_col: int, altitude_col: int,
confidence_threshold: float, iou_threshold: float, track_activation_threshold: float, minimum_matching_threshold: float, lost_track_buffer: int,
frame_rate: int, min_box_area: int, aspect_ratio_thresh: float):
"""Process the video to track objects and save tracking data."""
model = YOLO(model_path)
tracker = ByteTrack(
track_activation_threshold=track_activation_threshold,
minimum_matching_threshold=minimum_matching_threshold,
lost_track_buffer=lost_track_buffer
)
location_data = get_location_data(location_path, time_col, lat_col, lon_col, depth_col, salinity_col, oxygen_col, altitude_col)
start_time_seconds = time_to_seconds(start_time)
cap = cv2.VideoCapture(video_path)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter(output_path.replace('.csv', '.mp4'), fourcc, fps, (width, height))
tracking_info = {}
pbar = tqdm(total=frame_count, desc='Processing frames', leave=True, mininterval=10)
frame_index = 0
cached_boxes = None
cached_labels = None
try:
while cap.isOpened():
ret, frame = cap.read()
if not ret:
break
current_time = start_time_seconds + (frame_index / fps)
lat, lon, depth, salinity, oxygen, altitude = get_location_at_time(location_data, current_time)
if frame_index % 5 == 0: # Process frame every 5 frames
results = process_frame(frame, model, confidence_threshold, iou_threshold)
cached_boxes = results.boxes.xyxy.numpy() # Convert to numpy array
names = model.names # Class names
labels = results.boxes.cls.numpy().astype(int) # Convert to integer labels
cached_labels = [
f"{names[label]} {round(confidence, 2)}"
for label, confidence in zip(labels, results.boxes.conf.numpy())
]
# Draw bounding boxes using cached detections and labels
annotated_frame = frame.copy()
if cached_boxes is not None and cached_labels is not None:
drawn_boxes = set() # Track drawn boxes
for box, label in zip(cached_boxes, cached_labels):
x1, y1, x2, y2 = map(int, box) # Get box coordinates
class_name = label.split()[0] # Get class name from label
# Check if the box is already drawn
if (x1, y1, x2, y2) not in drawn_boxes:
# Draw rectangle with red color (BGR: (0, 0, 255)) and thicker lines (thickness=3)
cv2.rectangle(annotated_frame, (x1, y1), (x2, y2), (0, 0, 255), 3)
# Put label text with red color
cv2.putText(annotated_frame, class_name, (x1, y1 - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 0, 255), 2)
drawn_boxes.add((x1, y1, x2, y2))
# Write the frame to the output video
out.write(annotated_frame)
if frame_index % 5 == 0:
detections = sv.Detections.from_ultralytics(results)
detections = tracker.update_with_detections(detections)
for index in range(len(detections.class_id)):
object_id = detections.tracker_id[index]
class_name = model.names[int(detections.class_id[index])]
confidence = detections.confidence[index]
if object_id not in tracking_info:
image_path = save_detection_image(frame, detections[index], object_id, current_time, SOURCE_VIDEO_PATH)
tracking_info[object_id] = {
'Class': class_name,
'Confidence': confidence,
'Start Time': seconds_to_time_str(int(current_time)),
'End Time': seconds_to_time_str(int(current_time)),
'Latitude': lat,
'Longitude': lon,
'Depth': depth,
'Salinity': salinity,
'Oxygen': oxygen,
'Altitude': altitude,
'Image Path': image_path,
'All Classes': [class_name]
}
else:
tracking_info[object_id]['End Time'] = seconds_to_time_str(int(current_time))
tracking_info[object_id]['Latitude'] = lat
tracking_info[object_id]['Longitude'] = lon
tracking_info[object_id]['Depth'] = depth
tracking_info[object_id]['Salinity'] = salinity
tracking_info[object_id]['Oxygen'] = oxygen
tracking_info[object_id]['Altitude'] = altitude
tracking_info[object_id]['All Classes'].append(class_name)
pbar.update(1)
frame_index += 1 | open | 2024-07-01T20:47:56Z | 2024-09-26T01:19:58Z | https://github.com/roboflow/supervision/issues/1320 | [
"question"
] | ddrisco11 | 6 |
ray-project/ray | machine-learning | 50,835 | Please delete | closed | 2025-02-22T20:20:04Z | 2025-02-22T21:42:55Z | https://github.com/ray-project/ray/issues/50835 | [
"bug",
"triage"
] | vladjohnson | 0 |
|
deepset-ai/haystack | pytorch | 8,175 | clean up docstrings: AzureOpenAIDocumentEmbedder & AzureOpenAITextEmbedder | closed | 2024-08-08T13:49:46Z | 2024-08-13T12:17:48Z | https://github.com/deepset-ai/haystack/issues/8175 | [
"type:documentation"
] | dfokina | 0 |
|
JaidedAI/EasyOCR | machine-learning | 483 | Suggestion: Enable discussions on this repo | A lot of people open issues just to ask questions, so I think it's a good idea to activate Discussions; it would refocus the issues on their real purpose.
It will also be necessary to move all "issues" to the discussions. | closed | 2021-07-05T16:55:00Z | 2021-10-06T09:14:47Z | https://github.com/JaidedAI/EasyOCR/issues/483 | [] | A2va | 1 |
plotly/plotly.py | plotly | 4,649 | `plotly.graph_objects.Scatter` property `fillgradient` does not plot gradients with v5.22.0 in python | ## Description
I am trying to use the `fillgradient` in a scatter plot with `plotly`.
[I copied the documentation code for its use](https://plotly.com/python/filled-area-plots/#gradient-fill), but on code execution, the gradient does not appear.
### Operating system & Environment
- OS: macOS Sonoma `14.5`
- language: python `3.11.3`
- relevant libraries (working on VSCode in a jupyter notebook)
- plotly `5.22.0`
- jupyterlab `3.6.3`
- ipywidgets `8.0.4`
- notebook `6.5.4`
### Code used
```
import plotly.graph_objects as go
fig = go.Figure(
[
go.Scatter(
x=[1, 2, 3, 4],
y=[3, 4, 8, 3],
fill=None,
mode="lines",
line_color="darkblue",
),
go.Scatter(
x=[1, 2, 3, 4],
y=[1, 6, 2, 6],
fill="tonexty",
mode="lines",
line_color="darkblue",
fillgradient=dict(
type="horizontal",
colorscale=[(0.0, "darkblue"), (0.5, "royalblue"), (1.0, "cyan")],
),
),
]
)
fig.show()
```
Output in picture:
<img width="1078" alt="image" src="https://github.com/plotly/plotly.py/assets/35875673/da82978a-1313-4f86-bd25-4add49c18ed8">
| open | 2024-06-28T17:22:26Z | 2024-09-13T22:56:54Z | https://github.com/plotly/plotly.py/issues/4649 | [
"bug",
"P3"
] | JulienRim | 3 |
kizniche/Mycodo | automation | 865 | Trigger: timer does not work with PWM-output | Linux RPi4 5.4.51-v7l+
Mycodo Version: 8.8.6
Python Version: 3.7.3 (default, Jul 25 2020, 13:03:44) [GCC 8.3.0]
Database Version: 0e150fb8020b
Daemon RAM Usage: 53.58 MB
Frontend RAM Usage: 54.272 MB
I upgraded Mycodo from 8.5.8 to 8.8.6. I use the Trigger: timer to switch the LED lighting in the aquarium. Everything works as before, except the triggers won't switch the PWM output.
It makes no difference whether I use a daily time span or a duration. When I switch the PWM output manually, the LEDs light up.
The trigger switches ON/Off GPIO-output without any problems. It looks like the trigger doesn't like PWM outputs.
Debug log does not show any errors. Hardware is ok. I have the error on two pi boards.
I have already tried several hardware PWM pins and also software PWM pins:
manually everything is OK, but the trigger timer does not switch them.
Does anyone else have the problem or do I have to make additional settings in 8.8.6?
I've already reinstalled everything - without success.
| closed | 2020-10-11T13:26:36Z | 2020-10-29T16:14:52Z | https://github.com/kizniche/Mycodo/issues/865 | [
"bug"
] | grux77 | 4 |
Miserlou/Zappa | flask | 1,717 | Cytoolz module doesn't work with zappa deployment | ## Context
I am trying to deploy a Flask app to AWS Lambda with Zappa. The app works fine in the activated environment
and is deployed without errors to AWS Lambda. But the status check doesn't work because of an import error in cytoolz. I checked that itertoolz.pyx is present in the cytoolz folder in the package made by Zappa. I also tried to resolve the issue by adding Cython to the environment, but it didn't help.
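For reference, this is roughly how I inspected the package contents (the archive name below is a placeholder for the zip that Zappa builds):
```python
import zipfile

# Placeholder name: the actual archive is whatever Zappa uploaded for this deployment
with zipfile.ZipFile("zappa-deployment-package.zip") as z:
    print([name for name in z.namelist() if name.startswith("cytoolz/")])
```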
The full stacktrace is below:
No module named 'cytoolz.itertoolz': ModuleNotFoundError
Traceback (most recent call last):
File "/var/task/handler.py", line 580, in lambda_handler
return LambdaHandler.lambda_handler(event, context)
File "/var/task/handler.py", line 245, in lambda_handler
handler = cls()
File "/var/task/handler.py", line 139, in __init__
self.app_module = importlib.import_module(self.settings.APP_MODULE)
File "/var/lang/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 978, in _gcd_import
File "<frozen importlib._bootstrap>", line 961, in _find_and_load
File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
File "/var/task/src/app.py", line 2, in <module>
import cytoolz
File "/var/task/cytoolz/__init__.py", line 1, in <module>
from .itertoolz import *
ModuleNotFoundError: No module named 'cytoolz.itertoolz'
<!--- Provide a more detailed introduction to the issue itself, and why you consider it to be a bug -->
As the code works locally I believe the problem has to do with how zappa deploys to lambda.
<!--- Also, please make sure that you are running Zappa _from a virtual environment_ and are using Python 2.7/3.6 -->
## Expected Behavior
<!--- Tell us what should happen -->
There should not be an error when importing cytoolz.
## Actual Behavior
<!--- Tell us what happens instead -->
The app is deployed but doesn't work because of the import error.
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. The code of app.py which must be enough to reproduce the problem:
```python
from flask import Flask
import cytoolz

def create_app():
    flask_app = Flask(__name__)
    return flask_app

app = create_app()

@app.route("/")
def index():
    return "Hello world!"
```
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.47.0, 0.47.1
* Operating System and Python version: Mac OS 10.14.1, Python 3.6
* The output of `pip freeze`:
argcomplete==1.9.3
boto3==1.9.57
botocore==1.12.57
certifi==2018.11.29
cfn-flip==1.1.0.post1
chardet==3.0.4
Click==7.0
cytoolz==0.9.0.1
docutils==0.14
durationpy==0.5
Flask==1.0.2
future==0.16.0
hjson==3.0.1
idna==2.7
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
MarkupSafe==1.1.0
placebo==0.8.2
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.13
requests==2.20.1
s3transfer==0.1.13
six==1.11.0
toml==0.10.0
toolz==0.9.0
tqdm==4.19.1
troposphere==2.3.4
Unidecode==1.0.23
urllib3==1.24.1
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
zappa==0.47.1
* Your `zappa_settings.json`:
{
"prod": {
"app_function": "src.app.app",
"profile_name": "****",
"project_name": "check_cytoolz_deploy",
"runtime": "python3.6",
"s3_bucket": "zappa-entities-api-bucket",
"aws_region": "us-west-2",
"slim_handler": false,
"keep_warm": false,
"memory_size": 512,
"timeout_seconds": 30
}
} | open | 2018-11-30T16:28:14Z | 2021-08-08T02:08:02Z | https://github.com/Miserlou/Zappa/issues/1717 | [] | mberledgylabs | 6 |
aminalaee/sqladmin | asyncio | 413 | Editing a model fails with an incorrect id value in the query | ### Checklist
- [X] The bug is reproducible against the latest release or `master`.
- [X] There are no similar issues or pull requests to fix it yet.
### Describe the bug
Tried editing a model and it crashed. I had to wrap the `rows = await self._run_query(stmt)` line in the `get_model_objects` function in the models.py file (in a try/except) to get at the exception — no exceptions get pushed to output or logged!
Exception:
`ProgrammingError("(sqlalchemy.dialects.postgresql.asyncpg.ProgrammingError) <class 'asyncpg.exceptions.UndefinedFunctionError'>: operator does not exist: bigint = character varying
HINT: No operator matches the given name and argument types. You might need to add explicit type casts.")`
The query generated had this for the where clause:
`WHERE company.id = :id_8`
Obviously the company.id is a bigint, and it shows the correct id of '1' in the admin list view...
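For what it's worth, the comparison succeeds once the value is cast to an int before the query is built — a plain-SQLAlchemy sketch (the model here is a stand-in, not sqladmin internals):
```python
from sqlalchemy import BigInteger, Column, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Company(Base):  # stand-in for the real model
    __tablename__ = "company"
    id = Column(BigInteger, primary_key=True)

pk = "1"                                             # the PK arrives as a string from the URL
stmt = select(Company).where(Company.id == int(pk))  # bigint = int works; bigint = varchar does not on asyncpg
print(stmt)
```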
### Steps to reproduce the bug
_No response_
### Expected behavior
_No response_
### Actual behavior
_No response_
### Debugging material
_No response_
### Environment
WSL 2
Python 3.11.1
Latest SQLAdmin
### Additional context
_No response_ | closed | 2023-01-18T23:57:51Z | 2023-01-19T18:16:02Z | https://github.com/aminalaee/sqladmin/issues/413 | [] | L1st3r | 4 |
pennersr/django-allauth | django | 3,678 | AttributeError: 'UserSignUpForm' object has no attribute 'try_save' | version:0.61.1
```
venv\lib\site-packages\allauth\account\views.py", line 269, in form_valid
self.user, resp = form.try_save(self.request)
AttributeError: 'UserSignUpForm' object has no attribute 'try_save'
[ERROR] 2024-03-10 23:19:06 - "POST /accounts/signup/ HTTP/1.1" 500 145
[E][django.server][basehttp.py:212] "POST /accounts/signup/ HTTP/1.1" 500 145
``` | closed | 2024-03-10T15:24:58Z | 2024-03-10T19:48:59Z | https://github.com/pennersr/django-allauth/issues/3678 | [] | ohyeah521 | 1 |
arogozhnikov/einops | numpy | 4 | Release branches or tags? | Need to decide a policy on keeping reference for previous releases. | closed | 2018-10-31T18:06:04Z | 2018-11-01T01:01:25Z | https://github.com/arogozhnikov/einops/issues/4 | [] | arogozhnikov | 1 |
ludwig-ai/ludwig | computer-vision | 3,758 | ValueError: Please specify 'target_modules' in 'peft_config' | **Describe the bug**
When I try to finetune Mistral, I get this error: `ValueError: Please specify 'target_modules' in 'peft_config'`
**To Reproduce**
```python
import csv
import pandas as pd
from ludwig.api import LudwigModel
from utilz import cwd
train_data_path = '/opt/topics/data/pairs.tsv'
df = pd.read_csv(train_data_path, sep='\t', header=None, quoting=csv.QUOTE_ALL)
df = df.dropna()
df.columns = ['input', 'output']
model = LudwigModel(config=cwd('ludwig.yaml'))
results = model.train(dataset=df)
```
```yaml
model_type: llm
base_model: ehartford/dolphin-2.1-mistral-7b
prompt:
template: |
### Instruction:
Extract topics found in the input text.
### Input:
{input}
### Response:
input_features:
- name: input
type: text
preprocessing:
max_sequence_length: 2048
lowercase: false
output_features:
- name: output
type: text
preprocessing:
max_sequence_length: 2048
lowercase: false
adapter:
type: lora
quantization:
bits: 4
trainer:
type: finetune
learning_rate: 0.0001
batch_size: 1
gradient_accumulation_steps: 16
epochs: 4
learning_rate_scheduler:
warmup_fraction: 0.01
```
```bash
Traceback (most recent call last):
File "/opt/topics/debug.py", line 34, in <module>
results = model.train(dataset=df[:1_000])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/api.py", line 619, in train
with self.backend.create_trainer(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/backend/base.py", line 293, in create_trainer
return trainer_cls(config=config, model=model, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/trainers/trainer_llm.py", line 418, in __init__
super().__init__(
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/trainers/trainer.py", line 179, in __init__
self.model.prepare_for_training()
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/models/llm.py", line 259, in prepare_for_training
self.initialize_adapter()
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/ludwig/models/llm.py", line 247, in initialize_adapter
self.model = get_peft_model(self.model, peft_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/mapping.py", line 106, in get_peft_model
return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config, adapter_name=adapter_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/peft_model.py", line 889, in __init__
super().__init__(model, peft_config, adapter_name)
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/peft_model.py", line 111, in __init__
self.base_model = PEFT_TYPE_TO_MODEL_MAPPING[peft_config.peft_type](
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/tuners/lora.py", line 274, in __init__
super().__init__(model, config, adapter_name)
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/tuners/tuners_utils.py", line 88, in __init__
self.inject_adapter(self.model, adapter_name)
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/tuners/tuners_utils.py", line 205, in inject_adapter
peft_config = self._prepare_adapter_config(peft_config, model_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/matt/miniconda3/envs/ludwig/lib/python3.11/site-packages/peft/tuners/lora.py", line 550, in _prepare_adapter_config
raise ValueError("Please specify `target_modules` in `peft_config`")
ValueError: Please specify `target_modules` in `peft_config`
```
**Environment (please complete the following information):**
- OS: Linux Mint (latest)
- Python 3.11.4
- Ludwig 0.8.6
**Additional context**
I got some assistance from @alexsherstinsky, who told me to install the latest versions of ludwig, transformers, and peft from their repositories. Doing this did indeed solve the problem, but it broke the `spacy-transformers` dependency. I'm not sure how Ludwig uses spaCy or if it even needs `spacy-transformers`.
```bash
pip install -U git+https://github.com/ludwig-ai/ludwig.git@master
pip install -U git+https://github.com/huggingface/transformers
pip install -U git+https://github.com/huggingface/peft.git
``` | closed | 2023-10-27T23:39:26Z | 2023-10-31T01:20:31Z | https://github.com/ludwig-ai/ludwig/issues/3758 | [] | mhillebrand | 2 |
apachecn/ailearning | scikit-learn | 410 | Is "group-true" in "真相(group-true): the underlying latent pattern that truly exists" a spelling error? | In https://github.com/apachecn/MachineLearning/blob/master/docs/1.%E6%9C%BA%E5%99%A8%E5%AD%A6%E4%B9%A0%E5%9F%BA%E7%A1%80.md there is the line:
真相(group-true): the underlying latent pattern that truly exists | closed | 2018-07-31T05:15:14Z | 2018-07-31T05:54:36Z | https://github.com/apachecn/ailearning/issues/410 | [] | xealml | 1 |
pydantic/pydantic-settings | pydantic | 488 | CliApp doesn't work with Unions in nested modules | Hi,
I love what the library is trying to do. My vision is to have several Configs and to be able to switch between them by name. This is what I came up with; it works at the root level! Unfortunately, it fails when nested in a submodel.
Reproducible with `pydantic==2.10.2` and `pydantic-settings==2.6.1`
This example works:
```python
import pydantic
import pydantic_settings
from typing import Literal
class Alpha(pydantic.BaseModel):
greek_letter: Literal["alpha"] = "alpha"
common_field: str = "A"
different_field1: str | None = None
class Beta(pydantic.BaseModel):
greek_letter: Literal["beta"] = "beta"
common_field: str = "B"
different_field2: str | None = None
mapping = {
"a": Alpha(),
"b": Beta(),
}
class RunCLI(pydantic.BaseModel):
letter_name: Literal["a", "b"] = pydantic.Field(exclude=True)
letter: Alpha | Beta = pydantic.Field(discriminator="greek_letter", default_factory=lambda data: mapping[data["letter_name"]])
def cli_cmd(self):
print(self)
pydantic_settings.CliApp.run(
RunCLI,
cli_args=["--letter-name", "a"],
)
pydantic_settings.CliApp.run(
RunCLI,
cli_args=["--letter-name", "b"],
)
```
However, this one fails:
```python
import pydantic
import pydantic_settings
from typing import Literal
class Alpha(pydantic.BaseModel):
greek_letter: Literal["alpha"] = "alpha"
common_field: str = "A"
different_field1: str | None = None
class Beta(pydantic.BaseModel):
greek_letter: Literal["beta"] = "beta"
common_field: str = "B"
different_field2: str | None = None
mapping = {
"a": Alpha(),
"b": Beta(),
}
class Submodel(pydantic.BaseModel):
letter_name: Literal["a", "b"] = pydantic.Field(exclude=True, default="a")
letter: Alpha | Beta = pydantic.Field(discriminator="greek_letter", default_factory=lambda data: mapping[data["letter_name"]])
class RunCLI(pydantic.BaseModel):
sub: Submodel = Submodel()
def cli_cmd(self):
print(self)
pydantic_settings.CliApp.run(
RunCLI,
cli_args=["--sub.letter_name", "a"],
)
pydantic_settings.CliApp.run(
RunCLI,
cli_settings_source=pydantic_settings.CliSettingsSource(settings_cls=RunCLI, cli_implicit_flags=True, cli_enforce_required=False),
cli_args=["--sub.letter_name", "b"],
)
```
with the following error: `AttributeError: 'Alpha' object has no attribute 'different_field2'` | closed | 2024-11-28T23:17:53Z | 2024-12-02T08:33:39Z | https://github.com/pydantic/pydantic-settings/issues/488 | [
"unconfirmed"
] | ljendele | 5 |
alirezamika/autoscraper | web-scraping | 52 | How to use this behind a proxy | This is a great project, but when I enable a proxy, nothing works. Please help. | closed | 2021-02-25T00:47:58Z | 2021-12-01T08:22:26Z | https://github.com/alirezamika/autoscraper/issues/52 | [] | mikelty | 4 |
newpanjing/simpleui | django | 99 | Optimize the display on the right side of the homepage | **What feature would you like to add?**
1. The right side of the homepage currently shows the following:
simpleui homepage, report an issue
version 2.1.4.619 gitee/github links
Can this content be hidden via a setting? If it cannot be hidden, could this area be made smaller or placed at the bottom of the page? Thanks!
**Leave your contact information so we can get in touch with you**
Email: 23420650@qq.com
| closed | 2019-06-24T01:39:43Z | 2019-06-24T04:15:57Z | https://github.com/newpanjing/simpleui/issues/99 | [
"enhancement"
] | cqrichard2018 | 1 |
davidsandberg/facenet | computer-vision | 841 | Instructions on how to move forward with facenet | Hello,
Does anyone have instructions for this? My goal is to recognize faces using TensorFlow, but I could not find complete instructions on which scripts to use for the following:
1- Crop faces
2- Train on images
3- Face recognition
Examples or sample commands would be appreciated!
I have Ubuntu Linux with a Python and TensorFlow environment ready.
I would appreciate your guidance on the steps to move forward!
Thanks,
Omar Atia
| open | 2018-08-07T12:20:21Z | 2018-08-09T09:26:55Z | https://github.com/davidsandberg/facenet/issues/841 | [] | atiato | 1 |
sktime/sktime | scikit-learn | 7,906 | [BUG] Race Conditions in `sktime.utils._SuppressWarningPattern` with joblib backend `threading` | **Describe the bug**
Sometimes we observe sporadic bugs related to infinite recursion; see [1] and the images below. In these images, pay attention to the test params involved, especially `backend9`, which is the `threading` backend of joblib (see the method `_get_parallel_test_fixtures`). This backend uses multithreading, so global variables are shared between workers.
Now look at the second image: `_custom_show_warning` appears in the stack trace, even though it is called from a function that does not emit a warning inside a `with _suppress_pd22_warning:` statement (which should be the only way `_custom_show_warning` can be active).
The reason for this odd behavior is a race condition. Patching module-level state, as `_SuppressWarningPattern` does, affects all modules in the same process. Thus, with multithreading (which runs within a single process), `_custom_show_warning` can end up being used by thread 2 even though only thread 1 is inside a `with _suppress_pd22_warning:` statement. Note that this happens only with the threading backend, not with multiprocessing.
<img width="950" alt="Image" src="https://github.com/user-attachments/assets/4ed385cb-f055-415f-9074-e1e3a546136b" />
<img width="942" alt="Image" src="https://github.com/user-attachments/assets/8cbe8bd6-720b-4056-9063-199d5cbc8b84" />
So we need to find a way to make the overriding of the warning handler thread-safe!
This will probably also fix the infinite recursion, since I assume it only happens because of the race condition.
[1] https://github.com/sktime/sktime/actions/runs/13343661404/job/37942995994?pr=6774
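One possible direction — a sketch only, not a worked-out fix for sktime, and the class name and the placeholder message pattern below are hypothetical — is to install a single process-wide `showwarning` wrapper once and gate the suppression on a `threading.local` flag, so only the thread that is inside the context manager gets its warnings filtered and the original hook is never swapped back concurrently:
```python
# Sketch of a thread-safe suppression pattern; "some pandas 2.2 deprecation text"
# stands in for the real pattern handled by _SuppressWarningPattern.
import threading
import warnings

_suppress_state = threading.local()            # per-thread flag, not shared across threads
_original_showwarning = warnings.showwarning   # captured once at import time


def _showwarning_wrapper(message, category, filename, lineno, file=None, line=None):
    # Suppress only in threads that are currently inside the context manager.
    if getattr(_suppress_state, "active", False) and "some pandas 2.2 deprecation text" in str(message):
        return
    _original_showwarning(message, category, filename, lineno, file, line)


warnings.showwarning = _showwarning_wrapper    # installed once, never restored per call


class _SuppressWarningPatternThreadSafe:
    """Context manager that toggles suppression for the current thread only."""

    def __enter__(self):
        _suppress_state.active = True
        return self

    def __exit__(self, exc_type, exc_value, traceback):
        _suppress_state.active = False
        return False
```
Because the wrapper always delegates to the original hook instead of replacing and restoring it on every call, the swap-back race in the current implementation should disappear; whether this also removes the recursion would need to be confirmed.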
**To Reproduce**
A minimal reproducible example for the race condition is:
```python
import time
import warnings
import joblib
def f(set_warning):
    original_showwarning = warnings.showwarning
    if set_warning:
        # Replaces the process-global hook, which is visible to all threads.
        warnings.showwarning = lambda *args: print("FOO this happens")
    warnings.warn("Warning from f" + str(set_warning))
    if set_warning:
        time.sleep(2)
        warnings.showwarning = original_showwarning


if __name__ == "__main__":
    joblib.Parallel(n_jobs=2, backend="threading")(
        joblib.delayed(f)(set_warning) for set_warning in [True, False]
    )
```
@fkiraly for your information. | open | 2025-02-27T21:22:31Z | 2025-03-02T14:03:41Z | https://github.com/sktime/sktime/issues/7906 | [
"bug",
"module:base-framework"
] | benHeid | 2 |